Book review: NumPy Cookbook

This year I had the chance to review the book NumPy Cookbook, written by Ivan Idris and published by Packt Publishing. It introduces the numpy library by examples (which the author refers to as recipes). It is written in simple language and covers a wide range of topics, from the installation of numpy to its combination with Cython. My impression of the book was good and, in particular, I liked its structure. Every chapter tackles a series of problems related to a specific topic through examples. Each example comes with an introduction to the problem to be solved, the code commented line by line, and a short recap of the techniques applied to solve the problem. Most of the examples address practical problems, and the code is written to be adapted to your own projects.

Favorite chapters

Chapters 5 and 10 are my favorites. The first is about audio and image processing and explains some basic operations for the manipulation, generation and filtering of audio and video signals. The second is about combining numpy with some scikits, like scikits-learn, scikits-statsmodels and pandas. I loved these chapters because they cover topics related to complex fields, such as machine learning and data analysis, in a very straightforward fashion.

Favorite example

Some of the examples in the book caught my attention. In particular, I found the one about generating the Mandelbrot fractal very interesting. This example explains the mathematical formula behind the fractal and combines the image generated using the formula with a simpler one. It is my favorite because it provides one of the most practical explanations of the Mandelbrot fractal I have ever seen.

This book could be a good starting point for those who want to begin with numpy using a gentle approach. It can also be used as a manual to help you develop small parts of more complex projects.
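The review doesn't reproduce the book's Mandelbrot recipe, so here is a minimal escape-time sketch of my own in plain NumPy (the grid bounds, resolution, and iteration count are arbitrary choices, not the book's):

```python
import numpy as np

def mandelbrot(width=300, height=200, max_iter=50):
    """Escape-time iteration counts for the Mandelbrot set."""
    # Complex grid covering the classic viewing window.
    x = np.linspace(-2.0, 1.0, width)
    y = np.linspace(-1.0, 1.0, height)
    c = x[np.newaxis, :] + 1j * y[:, np.newaxis]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2            # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts += mask                   # escaped points stop accumulating
    return counts

img = mandelbrot()
print(img.shape)  # (200, 300)
```

Plotting `img` with matplotlib's `imshow` gives the familiar picture; points that never escape carry the count `max_iter`.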
{"url":"http://glowingpython.blogspot.com/2013/01/book-review-numpy-cookbook.html","timestamp":"2014-04-21T14:40:57Z","content_type":null,"content_length":"104612","record_id":"<urn:uuid:99cc821f-71e7-4160-8966-e1d8439a9645>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
polynomial root

November 18th 2006, 09:13 PM #1

If we are told that $a+\frac{b}{2}+\frac{c}{3}+\frac{d}{5}=-\frac{e}{6}$, where $a,b,c,d,e \in \mathbb{R}$, prove that the polynomial $f(x)=a+bx+cx^2+dx^4+ex^5$ has at least one real zero.

As $x \to \infty$, $f(x) \to +\infty$, and as $x \to -\infty$, $f(x) \to -\infty$ (assuming $e > 0$; the signs reverse for $e < 0$). Since $f(x)$ is continuous, it has a real root. In fact, every odd-order polynomial with real coefficients must have at least one real root.

Alternatively, use the mean value theorem for integrals. Consider the continuous function $f(x)=a+bx+cx^2+dx^4+ex^5$ on $[0,1]$. By the integral mean value theorem there is a number $c\in [0,1]$ such that

$f(c)(1-0)=\int_0^1 (a+bx+cx^2+dx^4+ex^5)\,dx$

But that tells us that

$f(c)=ax+\frac{bx^2}{2}+\frac{cx^3}{3}+\frac{dx^5}{5}+\frac{ex^6}{6} \Big|_0^1 = a+\frac{b}{2}+\frac{c}{3}+\frac{d}{5}+\frac{e}{6}=0$

Not only do we know that $f$ has a zero, we know it has one in the interval $[0,1]$.

Well, this particular statement is independent of $e$; as I said, any odd-order polynomial has a real root, though in this case I suppose $e$ could in principle be zero. In fact I do have another method using Descartes' rule of signs which does use the condition, and which I believe handles the possibility that $e=0$ (but I don't intend to check that now).
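The interval claim is easy to sanity-check numerically: fix some coefficients, force e from the hypothesis, and bisect on [0, 1]. A quick sketch (my own check, not part of the thread; the coefficient values a = b = c = d = 1 are arbitrary):

```python
def f(x, a=1.0, b=1.0, c=1.0, d=1.0):
    # e is forced by the hypothesis a + b/2 + c/3 + d/5 = -e/6.
    e = -6 * (a + b / 2 + c / 3 + d / 5)
    return a + b * x + c * x ** 2 + d * x ** 4 + e * x ** 5

# Here f(0) = 1 > 0 while f(1) = a + b + c + d + e = -8.2 < 0,
# so bisection on [0, 1] converges to a real zero, exactly as the
# integral argument predicts.
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(root, f(root))  # a zero inside (0, 1), residual near machine zero
```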
{"url":"http://mathhelpforum.com/calculus/7747-polynomial-root.html","timestamp":"2014-04-20T12:50:32Z","content_type":null,"content_length":"50236","record_id":"<urn:uuid:fa7d6555-a622-4759-81f8-5c77e82169b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Their infinite wisdom Hotel guests come and go. But in the first decade of the 1900s, a pair of frequent Russian visitors to the Hotel Parisiana, near the Sorbonne on Paris’ Left Bank, stood out vividly. The children of the hotel’s proprietors, the Chamont family, remembered them into the 1970s as “hardworking” and “pious” men. The guests, Dimitri Egorov and Nikolai Luzin, were mathematicians, studying in Paris; they often prayed and went to church. The Russians were embarking on a grand project: exploring the unknown features of infinity, the notion that a quantity can always increase. Infinity’s riddles have fascinated intellectuals from Aristotle to Jorge Luis Borges to David Foster Wallace. In ancient Greece, Zeno’s Paradox stated that a runner who keeps moving halfway toward a finish line will never cross it (in effect, Zeno realized the denominator of a fraction can double infinitely, from 1/2 to 1/4 to 1/8, and so on). Galileo noticed but left unresolved another brain-teaser: A series that includes every integer (1, 2, 3, and so on) seems like it should contain more numbers than one that only includes even integers (2, 4, 6, and so on). But if both continue infinitely, how can one be bigger than the other? As it happens, infinity does come in multiple sizes. And by discovering some of its precise characteristics, the Russians helped show that infinity is not just one abstract concept. Egorov and Luzin, with the help of another colleague, Pavel Florensky, created a new field, Descriptive Set Theory, which remains a pillar of contemporary mathematical inquiry. They also founded the Moscow School of mathematics, home to generations of leading researchers. The Russians’ success in grasping infinity concretely went hand in hand with their unorthodox religious beliefs, according to MIT historian of science Loren Graham. 
In a recent book, Naming Infinity: A True Story of Religious Mysticism and Mathematical Creativity, co-written with French mathematician Jean-Michel Kantor and published this year by Harvard University Press, Graham describes how the Russians were “Name-worshippers,” a cult banned in their own country. Members believed they could know God in detail, not just as an abstraction, by repeating God’s name in the “Jesus prayer.” Graham thinks this openness to apprehending the infinite let the trio make its discoveries--before Egorov and Florensky were swept up in Stalin’s purges. “The impact of the Russian mathematicians has been enormous,” says Graham, who has spent a half-century studying the history of science in Russia. “But their fates were tragic.” Settling set theory In studying infinity, the Russians followed Georg Cantor, the German theorist who from the 1870s to the 1890s formalized the notion that infinity comes in multiple sizes. These relative sizes, Cantor suggested, can be determined by seeing if there is a one-to-one correspondence between members of infinite series. A bit counterintuitively, the two lists of integers Galileo pondered have the same size, because their members can be paired off indefinitely (1 with 2, 2 with 4, and so on). Similarly, while there exists an infinite series of rational numbers (fractions) between any two integers, these rational numbers can be paired off with integers, one by one. However, as Cantor noticed, in addition to the infinite series of rational numbers in between integers, it is possible to create new, non-repeating fractions, as expressed in decimal form. Thus the infinity of real numbers, which includes non-repeating decimals as seen in pi (3.14159 … ), is larger than the infinity of either rational numbers or integers. Cantor’s work made it clear that the study of infinity was actually the study of sets: their properties and the functions used to create them. 
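The two correspondences Cantor used — integers with even integers, and rationals with integers — can be made concrete in a few lines. This sketch (mine, purely illustrative) pairs integers with evens and then enumerates the positive rationals along Cantor's diagonals:

```python
from fractions import Fraction
from itertools import count, islice

# Pair each positive integer with an even one: n <-> 2n.
pairs = [(n, 2 * n) for n in range(1, 6)]
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]

# Walk the positive rationals diagonal by diagonal, skipping
# duplicates such as 2/2 = 1/1, so each rational gets exactly
# one position — i.e. one integer partner.
def rationals():
    seen = set()
    for s in count(2):            # s = numerator + denominator
        for p in range(1, s):
            q = Fraction(p, s - p)
            if q not in seen:
                seen.add(q)
                yield q

first = list(islice(rationals(), 8))
print(first)  # [1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2]
```

Because the enumeration never stalls, every positive rational eventually appears, which is the one-to-one pairing with the integers that makes the two infinities "the same size."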
Today, set theory has become the foundation of modern math. But in the aftermath of Cantor, the basics of set theory were unclear. As Graham and Kantor describe it, even leading mathematicians found the situation unsettling. Three French thinkers — Emile Borel, Henri Lebesgue, and Rene Baire — who made advances in set theory nonetheless decided by the early 1900s that the study of infinity had lost its way. They felt theorists were relying more on arbitrary rule-making than rigorous inquiry. “The French lost their nerve,” says Graham. By contrast, Graham and Kantor assert, the Russian trio found “freedom” in the mathematical uncertainties of the time. It turns out there were plenty of concrete advances in set theory yet to be made; Luzin in particular pushed the field forward in the 1910s and 1920s, making discoveries about numerous types of sets involving the continuum of real numbers (the larger of the infinities Cantor found); Descriptive Set Theory details the properties of these sets. In turn, many of Luzin’s students in the Moscow School also became prominent figures in the field, including Andrei Kolmogorov, the best-known Russian mathematician of the 20th century. What’s in a name? Naming Infinity argues that the Russians thought their mathematical inquiries corresponded to their religious practices. The Name-worshippers believed the name of God was literally God, and that by invoking it repeatedly in their prayer, they could know God closely — a heretical view for some. Graham and Kantor think the Russians saw their explorations in math the same way; they were defining (and naming) sets in areas where others thought knowledge was impossible. Luzin, for one, often stressed the importance of “naming” infinite sets as a part of discovering them. The Russians “believed they made God real by worshipping his name,” the book states, “and the mathematicians … thought they made infinities real” by naming and defining them. 
Graham also suggests a parallel between the Russians and Isaac Newton, another believer (and heretic). Historians today largely view Newton’s advances in physics as part of a larger personal effort — including readings in theology and alchemy experiments — to find divine order in the world. Similarly, the Russians thought they could comprehend infinity through both religion and mathematics. Mathematicians have responded to Naming Infinity with enthusiasm. “It’s a wonderful book for many reasons,” says Barry Mazur, the Gerhard Gade University Professor at Harvard, who regards it as “an excellent way of getting into the development of set theory at the turn of the century.” Moreover, Mazur agrees that the connection between the religious impulses of the three Russians and their mathematical studies seems significant, even if there is only a general affinity between the two areas in matters such as naming objects. “It is more a conveyance of energy, than a conveyance of logic,” Mazur says. Religion could not trigger precise mathematical moves, he thinks, but it provided the Russians with the intellectual impetus to move forward. Victor Guillemin, a professor of mathematics at MIT, also finds this account convincing. In the 1970s, it was Guillemin, staying at the Hotel Parisiana like Egorov and Luzin before him, who discussed the Russians’ lives with the Chamont family daughters (then elderly women, having been children just after the turn of the century). While reading Graham and Kantor’s book, Guillemin says, “I was fascinated at the idea that the Russians were able to push the subject further because they had less trepidation at dealing with infinity.” As Graham and Kantor point out, many other prominent mathematicians have had a mystical bent, from Pythagoras to Alexander Grothendieck, an innovative French theorist of the 1960s who now lives as a recluse in the Pyrenees. Yet Graham emphasizes that mysticism is not a precondition for mathematical insight. 
“To see if science and religion are opposed to each other, or help each other,” Graham says, “you have to select a specific episode and study it.”

Egorov’s exile, Florensky’s fate

Naming Infinity also starkly recounts the sorry fates of Egorov and Florensky, as publicly religious figures in atheist, postrevolutionary Russia. Egorov was exiled to the provinces and starved to death in 1931. Florensky, a flamboyant figure who wore priestly garb in public, was executed in 1937. Luzin was spared after the physicist Peter Kapitsa made a direct appeal to Stalin on his behalf. These men were not just endangered by their religiosity, however, but also by their style of math. The intangible nature of infinity contradicted the Marxist notion that intellectual activity should be grounded in material matters, a charge made by one of their accusers: Ernst Kol’man, a mathematician and seemingly sinister figure called “the dark angel” for his role as an informant on other Soviet intellectuals. Graham, who knew both Kapitsa and Kol’man, says Kol’man “really believed his Marxism, and believed it was wrong to think mathematics has no relationship to the material world. He thought this was a threat to the Soviet order.” Even so, Kol’man, who died in 1979, left behind writings acknowledging he had judged such matters “extremely incorrectly.” The Russian trio was thus part of a singular saga, belonging to a now-vanished historical era. Naming Infinity rescues that story for readers who never had the chance to hear it directly from the owners of the Hotel Parisiana.

December 14, 2009

All great scientists (and many Russians) believed in God: Isaac Newton, Pavlov, the geneticists Timofeev-Ressovski and Vavilov, the physicist Lebedev, and many others. It was the standard of culture. If I know more mathematics than any professor at MIT, it means that they worked less than I did in some direction.
For example, nobody at MIT knows which functions are exact analytical solutions of the Navier-Stokes equations. It is a very interesting problem connected with the Feigenbaum scenario. I know how to solve this problem, after many years of hard work. It is not 'Religious Mysticism and Mathematical Creativity'. Infinity is an abstraction of our brain. There is no infinity in nature.

December 15, 2009

Hey Gorskin, I'm not a mathematician, but doesn't our perception of infinity make it a reality in nature? Consider that human beings are just computers that run on math. I'd be interested to know your thoughts.

December 27, 2010

If just applying two bits in a computer, people can do magic! Think about ten digits; I bet you can't even process the data. Sorry if I hurt anybody.
{"url":"http://newsoffice.mit.edu/2009/infinity-1214","timestamp":"2014-04-16T10:12:12Z","content_type":null,"content_length":"95260","record_id":"<urn:uuid:806c68fe-aa19-47cb-a907-8d182425a729>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
April 3rd 2011, 11:02 AM #1

The question is simply: solve, for 0 < x < 10, ln(2x+1) < 3 cos x. (It is only worth 4 marks; part one, which was worth 2 marks, asked what 2 transformations take ln(x) to ln(2x+1), so I can't see there is too much work here, but I can't get it.) I can see I need to find the points of intersection, but I must be missing something here. The mark scheme is no help; it just states the answer.

Have you graphed each of these functions? What do you get?

Yes, I graphed them, so I just need the points of intersection, but how to find them is the problem.

Fair problem to have. You need to use technology (a spreadsheet or graphic calculator) to find the intersections of these functions; if you can't use those, a numerical method like bisection or Newton's method will do.

This is what I guessed, but as I say it's only 4 marks, compared to the 2 marks for the 2 transformations in part (1). It gives no starting values and I need 4 numbers (0 being the easy one to find). On another paper I have the same sort of question, where I need to solve e^(-x) - x + 1 = 0; once again the mark scheme gives no suggestion as to how the answer is to be found.

I can tell you there are 3 points of intersection between $\displaystyle \ln (2x+1)$ and $\displaystyle 3\cos x$ on this interval. I suggest, when using a chosen numerical method, using the starting points x = 1, 5 & 7.
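The bisection advice at the end of the thread can be sketched directly (my own illustration; the 0.1 scan step and tolerance are arbitrary choices):

```python
import math

def g(x):
    # A root of g is an intersection of ln(2x+1) and 3 cos x.
    return math.log(2 * x + 1) - 3 * math.cos(x)

def bisect(lo, hi, tol=1e-10):
    # Standard bisection: keep the half-interval where g changes sign.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Scan (0, 10) for sign changes, then refine each bracket.
roots = []
x = 0.0
while x < 10:
    if g(x) * g(x + 0.1) < 0:
        roots.append(bisect(x, x + 0.1))
    x += 0.1
print(roots)  # the three intersections on (0, 10)
```

The three roots land near the suggested starting guesses of 1, 5 and 7, which is what makes Newton's method from those points work as well.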
{"url":"http://mathhelpforum.com/pre-calculus/176685-inequality.html","timestamp":"2014-04-20T17:53:40Z","content_type":null,"content_length":"45328","record_id":"<urn:uuid:26fcd165-0e30-4171-852b-6b62b7c5266b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
Oracle's Oration

Noticings students may generate:

□ Amani and Biagio are two travelers who stop by the side of the road to eat.
□ Amani has seven bread rolls.
□ Biagio has five bread rolls.
□ Amani and Biagio have 12 bread rolls all together.
□ Caleb would like to share Amani and Biagio's bread with them.
□ I wonder if sharing means eating the same number of rolls.
□ Each of the three eats four bread rolls.
□ I wonder if sharing the rolls means that they ate all of them.
□ The three ate 12 rolls all together.
□ The number of rolls they ate is equal to the number of rolls Amani and Biagio started with.
□ There were no rolls left after the three ate their share.
□ Amani ate 4 of her rolls and gave out 3.
□ Biagio ate 4 of his rolls and gave out 1.
□ Caleb didn't have any rolls of his own.
□ Caleb took all of the rolls he ate from Biagio and Amani.
□ Caleb got one of his rolls from Biagio and 3 of his rolls from Amani.
□ Caleb left a payment of 12 silver pieces for the rolls.
□ I wonder what bread rolls Caleb paid for. Did he pay for all of the rolls, or the ones he ate, or something else?
□ Amani thinks that she should get seven of the 12 silver pieces Caleb left.
□ Amani thinks that Biagio should get five of the 12 silver pieces Caleb left.
□ I wonder whether Amani thinks that Caleb left 1 silver piece for each of the 12 bread rolls they all started with.
□ Biagio thinks that he and Amani should share the silver pieces equally and get 6 pieces each.
□ Biagio thinks that if he and Amani shared the rolls equally then they should be sharing the pieces of silver equally.
□ I wonder whether Biagio thinks that sharing the rolls equally means eating the same number of rolls.
□ I wonder if the number of rolls Amani and Biagio shared was equal to those that they each ate or to those that they each gave out.
□ I wonder if Caleb paid for the rolls that they all ate or for those that he ate.
□ The Oracle decided that Amani should get nine silver pieces.
□ The Oracle decided that Biagio should get three silver pieces.
□ The Oracle decided that Amani should get 6 more silver pieces than Biagio.
□ The Oracle decided that Amani should get 3 times the number of silver pieces Biagio gets.
□ I wonder how the Oracle made his decision.
□ The Oracle didn't think that Amani and Biagio shared their rolls equally.
□ The Oracle gave Amani 2 more silver pieces than the number of bread rolls she started with and gave Biagio 2 fewer silver pieces than the number of bread rolls he started with.
□ Amani gave two bread rolls to Caleb and one bread roll to Biagio.
□ There are 3 people sharing the bread rolls and the Oracle gave Amani three times the amount of money he gave to Biagio.
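Several of the noticings circle around the classic resolution of the puzzle; the arithmetic behind the Oracle's 9/3 split can be checked directly (a sketch of that resolution, not part of the exercise itself):

```python
amani_rolls, biagio_rolls, payment = 7, 5, 12

total = amani_rolls + biagio_rolls           # 12 rolls in all
each_ate = total // 3                        # 4 rolls per person

# Caleb's 4 rolls came entirely out of what the other two gave up.
amani_gave = amani_rolls - each_ate          # 7 - 4 = 3
biagio_gave = biagio_rolls - each_ate        # 5 - 4 = 1

# Caleb paid 12 silver pieces for the 4 rolls he ate: 3 per roll.
price_per_roll = payment // (amani_gave + biagio_gave)
amani_share = amani_gave * price_per_roll    # 3 rolls * 3 = 9
biagio_share = biagio_gave * price_per_roll  # 1 roll  * 3 = 3
print(amani_share, biagio_share)  # 9 3
```

The split is 9/3 rather than 7/5 or 6/6 because the payment covers the rolls *given up*, not the rolls each traveler started with or ate.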
{"url":"http://mathforum.org/workshops/te/noticingoracle.html","timestamp":"2014-04-16T16:51:12Z","content_type":null,"content_length":"3506","record_id":"<urn:uuid:3c894636-0b34-4b24-bd6e-de9046753995>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with maths

Hi, I'm new to C++ (only started on Saturday), and I'm trying to make my first program that is actually useful. I'm trying to chart population growth using the formula P = N(1 + R)^A (where N is the starting population, R is the growth in %, A is the length of time of the growth, and P is the population after the growth has happened). Anyway, I want to just start off with it being over 1 year, which eliminates A from the equation. Here is my code:

    int X, Y, Z, A, B;
    int C = 100;
    cout << "What is the population that is going to grow? \n";
    cout << "\nWhat is the growth rate, in %?\n";

The declared-but-unused variables may be used by me later on. I used ... to get around the problem of "error C2106: '=' : left operand must be l-value". But now, with the 'X*(1+Y)=Z', I've hit that problem again, and I don't know how to get around it. If it matters, my compiler is Visual C++ Express Edition. Anyone know how, or can someone point me towards a guide to doing maths in C++?

Your best bet would be to learn C++ essentials before you start coding things. There are lots of helpful guides on an array of topics, but they all assume you have a fair handle on C++. Starting on Saturday is only a very small beginning. Learning any language, whether spoken or written, takes a lot of time and effort. ^_^

Well, this is only going to be 30 lines or so, and it's very much a learning project for me. I got it working by rearranging it to "Z = X*1+Y". But now it's saying that 5 divided by 100 = 0, which is messing it up. It seems that it doesn't like decimals, nor does it like fractions.
You don't need the cin.ignore(); just the 'cin >>' is adequate. And if you just use '"abcdef? ";' at the end of your questions, the input will be at the end of the question.

"5 divided by 100 = 0, which is messing it up" — that is because it is an int. You cannot store something like 0.05 in an int variable; use double.

Ah, thanks, I got it working. I obviously have a lot to learn about variables. Thanks for the help. Yeah, I'm going to tidy up the code.

That would be a "float", wouldn't it?

double works too.
float = single precision (typically 32-bit floating-point)
double = double precision (typically 64-bit floating-point)
You may as well use doubles; the FPU on your PC can handle them just as well as single-precision floats.

Originally Posted by Coritani: "I got it working by rearranging it to Z = X*1+Y." — Yes, every = sign does the following:
1. Evaluates the expression on the right-hand side.
2. Assigns the result to the left-hand side.

    int x;
    x = 3; // This is valid
    3 = x; // This is not.

It's needed to have some way to figure out what assignment is being made. For example:

    int x = 2, y = 3;
    x = y; // This sets x = 3.

Originally Posted by dac: "OK, I take it that's using two's complement?"
It's actually IEEE 754, which does not use two's complement (two's complement is how integers are stored). It basically stores numbers with a sign bit, an exponent (in excess-1023) and a mantissa. You shouldn't worry about how numbers are stored; in a good, portable program, it shouldn't matter.
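For reference, the population-growth formula the thread is implementing, and the integer-division pitfall that produced "5 divided by 100 = 0", can be shown compactly. This is a sketch in Python rather than the thread's C++, purely as an illustration:

```python
def grow(population, rate_percent, years=1):
    """Population after compound growth: P = N * (1 + R/100)**A."""
    return population * (1 + rate_percent / 100) ** years

# The bug in the thread: integer division truncates, so 5 // 100 is 0
# and the growth factor collapses to 1. Floating-point division fixes it.
assert 5 // 100 == 0      # what the C++ int arithmetic was doing
assert 5 / 100 == 0.05    # what the poster wanted

print(grow(1000, 5))      # 1000 people growing 5% for one year, about 1050
print(grow(1000, 5, 10))  # the same growth compounded over 10 years
```

This mirrors the vart/Cat advice: the fix is not rearranging the expression but doing the rate division in floating point (double in C++).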
{"url":"http://cboard.cprogramming.com/cplusplus-programming/85353-help-maths-printable-thread.html","timestamp":"2014-04-17T20:14:02Z","content_type":null,"content_length":"14926","record_id":"<urn:uuid:2a8a16e9-c8b9-4a0f-b4a0-5b0011e24427>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
help with infinite sum

January 23rd 2006, 01:39 PM

I'm trying to do some calc hw, but I ran into this equation: Σ t^k = 5. Is there some kind of rule which changes this equation so I can solve for k?

January 23rd 2006, 01:58 PM

Solve for k?!? That makes no sense; you probably mean solve for t. Assuming that this infinite series converges, it is a geometric series, so $|t|<1$, and its sum is 5. But its sum is given by $\frac{1}{1-t}$, so $\frac{1}{1-t}=5$ and thus $t=\frac{4}{5}$.

Something interesting to note: the equation $\sum_{k=0}^{\infty} t^k = c$ always has a unique solution, $t=\frac{c-1}{c}$. Thus an infinite geometric sum of this form can be made to converge to any real number $c>\frac{1}{2}$ (the constraint $|t|<1$ keeps $\frac{1}{1-t}$ above $\frac{1}{2}$).

January 23rd 2006, 03:35 PM

Thx a lot. I wasn't thinking of geometric sequences... not that I would have known that equation anyway, lol.
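The closed form can be checked numerically; a quick sketch (mine, not from the thread):

```python
# Sum of the geometric series t^0 + t^1 + t^2 + ... for t = 4/5.
t = 4 / 5
partial = sum(t ** k for k in range(200))   # 200 terms is plenty for |t| < 1
closed_form = 1 / (1 - t)
print(partial, closed_form)  # both are 5 to machine precision

# More generally, sum_{k>=0} t^k = c is solved by t = (c - 1) / c,
# valid whenever c > 1/2 (so that |t| < 1).
for c in [2.0, 5.0, 10.0]:
    t = (c - 1) / c
    assert abs(sum(t ** k for k in range(2000)) - c) < 1e-9
```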
{"url":"http://mathhelpforum.com/calculus/1715-help-infinite-sum-print.html","timestamp":"2014-04-17T19:44:55Z","content_type":null,"content_length":"6502","record_id":"<urn:uuid:ba503f7b-29db-4f05-a6b2-5f0db21d6691>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Xiannan Li Department of Mathematics University of Illinois at Urbana-Champaign 1409 W. Green Street Urbana, IL, 61801 USA I am interested in analytic number theory, specifically L-functions and objects related to them. (with Y. Lamzouri and K. Soundararajan) On the least quadratic non-residue, values of $L$-functions at $s=1$ and related problems.
{"url":"http://www.math.uiuc.edu/~xiannan/","timestamp":"2014-04-19T11:56:54Z","content_type":null,"content_length":"4714","record_id":"<urn:uuid:270a547c-ea41-411f-868f-259f235965f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Groton, MA Precalculus Tutor

Find a Groton, MA Precalculus Tutor

I have 9 years of experience teaching all levels of high school mathematics in the public schools. I also have more than 6 years of experience tutoring mathematics to students ranging from 7 years old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus.
14 Subjects: including precalculus, calculus, trigonometry, SAT math

...I've also taught Trigonometry as a long-term substitute at the high school level. My instruction focuses on the unit circle. My instruction included the following topics: Use and apply radian
13 Subjects: including precalculus, physics, ASVAB, algebra 1

...The biggest changes were to the verbal reasoning section. The GRE is very similar to the SAT but with two essays instead of one in the Analytical Writing section, and more variety of question formats in the rest. I focus not only on the essential reading, quantitative, and writing skills, but a...
44 Subjects: including precalculus, chemistry, writing, calculus

...I am a second-year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a Bachelor of Science in Computer Science from West Point. My academic strengths are in mathematics and French.
16 Subjects: including precalculus, French, elementary math, algebra 1

...I have worked in numerous corporations before becoming a full-time tutor. Over the years, I have helped numerous high school, college and graduate students with career development and finding out which subjects and programs are best suited for their needs. I have helped students with college and graduate school applications, as well as with their resumes, and provided career advice.
67 Subjects: including precalculus, English, reading, calculus
{"url":"http://www.purplemath.com/groton_ma_precalculus_tutors.php","timestamp":"2014-04-16T19:22:11Z","content_type":null,"content_length":"24058","record_id":"<urn:uuid:b8623764-acf3-4c07-82e5-3288ab065359>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by nena

Total # Posts: 36

Sally was making 3 sandwiches. There were only 10 slices. If she used 2 slices of bread to make each sandwich, what fraction of the bread did she use to make the sandwiches?

Larry has a ladder that is 16' long. If he sets the base of the ladder on level ground 5 feet from the side of the house, how many feet above the ground will the top of the ladder reach when it rests against the house?

POL 201 American National Government: Theodore Lowi suggested the remedy for cumbersome bureaucracy was (Points : 1) returning to juridical democracy. delegating even more authority. reducing the size of government. throwing money at the

Solve the following: 56 is ______% of 125

What number is 45% of 680?

Solve the following: 315 is 126% of

If a number increases from 47 to 70.5, what is the rate of increase?

Michael Reeves, an ice cream vendor, pays $17.50 for a five-gallon container of premium ice cream. From this quantity he sells 80 scoops at $0.90 per scoop. If he sold smaller scoops, he could sell 98 scoops from the same container; however, he could charge only $0.80 per scoo...

The Call of the Wild is a story about a dog named Buck. Buck is a pampered dog who lives with a wealthy family in southern California. During the Gold Rush, Buck is captured, sold, and eventually shipped to Alaska to work as a sled dog. Along the way, Buck is mistreated by a s...

Oh yea, I'm sorry. I keep thinking about it and yea, it's B. Thank you.

I think A, because the other answer choices don't make sense to me.

(1) After my interview with these four young people, I reflected on the quiet sense of "difference" I sensed with many of these Upward Bound students. (2) As a college teacher who has also taught seventh-grade science, I have some experience with the faces and attitu...

A manufacturing firm is thinking of launching a new product. The firm expects to sell $950,000 of the new product in the first year and $1,500,000 each year thereafter.
Direct costs including labor and materials will be 45% of sales. Indirect incremental costs are estimated at...

What groups would be ionized in a solution of pH 12?

college algebra: Given the function f(x) = x^3 + 3x, find the rate of change between the two stated values for x: 1 to 2. Find the equation of a secant line containing the given points: (1, f(1)) and (2, f(2)).

physics, help! urgent! Tiger Woods hits a 0.050-kg golf ball, giving it a speed of 75 m/s. What impulse does he impart to the ball?

If my input is 1, 2, 3, and 4 and my output is 5, 10, 15, and 20, what would my rule be?

-2 squared(-2-x)-x to the zero power(3-2)= -2(x+3)

college algebra: Graph the following function using transformations. Be sure to graph all of the stages on one graph. State the domain and range. For example, if you were asked to graph y = x^2+11 using transformations, you would show the graph of y = x^2 and the graph shifted up 1 unit.

physical science: A sheet of paper is withdrawn from under a glass of milk without spilling it if the paper is removed quickly. This best demonstrates what?

algebra 1: Sketch, label and mark each figure. 1. Isosceles obtuse triangle TRI with vertex angle T. 2. Rhombus RHOM with acute <H and the shorter diagonal. 3. Scalene right triangle SCA with midpoints L, M and N on SC, CA, and SA, respectively. 4. Trapezoid TRAP with TR|| A[, RE PA and P...

From the top of a lighthouse 210 feet high, the angle of depression to a boat is 27 degrees. Find the distance from the boat to the foot of the lighthouse. The lighthouse was built at sea level. Please, some suggestions.
The foot of the ladder is 7 feet from the foundation of the house.how long is the ladder? please some sugesstions algebra 1 What are the values of a and b.if any,where a|b-2|<0? please some sugestions The area of trapezoid is h( b1+b2)/5, where h is the altitude, and b1 and b2 are the lenghts of the parallel bases. If trapezoid has an altitude of 5 inches, an area of 55 square inches , and one base 12 inches long, what is the lenghth, in inches, of its other base? Please fo... If a and b are any real numbers such that 0<a<1<b, which of the following must be true ov the value ab?? Can you explain to me how to figure out unit rate per prices? 8th grade Math Write a unit rate for the situation driving 140 mi in 2 h 45 min I understand. Thank you for helping me. Sonya has X amount of money. Bob has three times as much as Sonya has, less $14.62. Write an expression , using X, that tells how much does Bob has." You sail is: 3x-14.62 but this is the expression and I need to know how much in dollars,too.... please "Sonya has X amount of money. Bob has three times as much as Sonya has, less $14.62. Write an expression , using X, that tells how much does Bob has." You sail is: 3x-14.62 but this is the expression and I need to know how much in dollars,too.... please 4 grade Mario got his $10.00 weekly allowance on Monday. He spent 25% of his weekly allowance on Tuesday, 15% of his weekly allowance on Wednesday, and 10% more on Thursday. How much money did he have left to spend for the rest of the week? Algebra 1B How is doing operations (adding, subtracting, multiplying, and dividing) with rational expressions similar to or different from doing operations with fractions? Can understanding how to work with one kind of problem help understand how to work another type? When might you use ... world literature Can someone tell me some websites I can use to define the following literary terms 1. literary journal 2.literary canon 3. reader-response criticism 4. anapest 5. tone 6. 
plot 7. characterization 8. setting 9. point of view 10. irony 11. theme 12. figurative language 13. chara... Can someone check and tell me if these are correct or not. subtract -88-53= Subtract 7/9 - (-13/18)=-11/18 multiply (3)(7)(-9)=-189 find the reciprocal of (-1/7)(14/3)(9) AP U.S. History 1920's immigration wtf no one wants 2 know ur personal life tiff?? lesbo freak earth science which term is best defined as a measure of the amount of space a subtance occupies? volume It looks like you have answered your own question. If it is three dimensional, it is volume. However, if it is two dimensional, it is area. I hope this helps a little more. Thanks for as...
Their infinite wisdom

Hotel guests come and go. But in the first decade of the 1900s, a pair of frequent Russian visitors to the Hotel Parisiana, near the Sorbonne on Paris’ Left Bank, stood out vividly. The children of the hotel’s proprietors, the Chamont family, remembered them into the 1970s as “hardworking” and “pious” men. The guests, Dimitri Egorov and Nikolai Luzin, were mathematicians, studying in Paris; they often prayed and went to church. The Russians were embarking on a grand project: exploring the unknown features of infinity, the notion that a quantity can always increase. Infinity’s riddles have fascinated intellectuals from Aristotle to Jorge Luis Borges to David Foster Wallace. In ancient Greece, Zeno’s Paradox stated that a runner who keeps moving halfway toward a finish line will never cross it (in effect, Zeno realized the denominator of a fraction can double infinitely, from 1/2 to 1/4 to 1/8, and so on). Galileo noticed but left unresolved another brain-teaser: A series that includes every integer (1, 2, 3, and so on) seems like it should contain more numbers than one that only includes even integers (2, 4, 6, and so on). But if both continue infinitely, how can one be bigger than the other? As it happens, infinity does come in multiple sizes. And by discovering some of its precise characteristics, the Russians helped show that infinity is not just one abstract concept. Egorov and Luzin, with the help of another colleague, Pavel Florensky, created a new field, Descriptive Set Theory, which remains a pillar of contemporary mathematical inquiry. They also founded the Moscow School of mathematics, home to generations of leading researchers. The Russians’ success in grasping infinity concretely went hand in hand with their unorthodox religious beliefs, according to MIT historian of science Loren Graham.
In a recent book, Naming Infinity: A True Story of Religious Mysticism and Mathematical Creativity, co-written with French mathematician Jean-Michel Kantor and published this year by Harvard University Press, Graham describes how the Russians were “Name-worshippers,” a cult banned in their own country. Members believed they could know God in detail, not just as an abstraction, by repeating God’s name in the “Jesus prayer.” Graham thinks this openness to apprehending the infinite let the trio make its discoveries--before Egorov and Florensky were swept up in Stalin’s purges. “The impact of the Russian mathematicians has been enormous,” says Graham, who has spent a half-century studying the history of science in Russia. “But their fates were tragic.”

Settling set theory

In studying infinity, the Russians followed Georg Cantor, the German theorist who from the 1870s to the 1890s formalized the notion that infinity comes in multiple sizes. These relative sizes, Cantor suggested, can be determined by seeing if there is a one-to-one correspondence between members of infinite series. A bit counterintuitively, the two lists of integers Galileo pondered have the same size, because their members can be paired off indefinitely (1 with 2, 2 with 4, and so on). Similarly, while there exists an infinite series of rational numbers (fractions) between any two integers, these rational numbers can be paired off with integers, one by one. However, as Cantor noticed, in addition to the infinite series of rational numbers in between integers, it is possible to create new, non-repeating fractions, as expressed in decimal form. Thus the infinity of real numbers, which includes non-repeating decimals as seen in pi (3.14159 … ), is larger than the infinity of either rational numbers or integers. Cantor’s work made it clear that the study of infinity was actually the study of sets: their properties and the functions used to create them.
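The one-to-one correspondences described here are easy to make concrete. A short Python sketch (an illustration, not from the article) pairs each positive integer with an even number, and enumerates the positive rationals by walking the diagonals p + q = 2, 3, 4, ..., which is essentially Cantor's argument that the rationals are countable:

```python
from math import gcd

# Pair each positive integer n with the even number 2n: a one-to-one
# correspondence, so the two infinite collections have the same size.
def pair_with_evens(k):
    return [(n, 2 * n) for n in range(1, k + 1)]

# Enumerate the positive rationals p/q diagonal by diagonal (p + q = 2, 3, ...),
# skipping non-reduced fractions, so every rational eventually gets an index.
def first_rationals(k):
    out, s = [], 2
    while len(out) < k:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                out.append((p, q))
                if len(out) == k:
                    break
        s += 1
    return out
```

The first five rationals produced are 1/1, 1/2, 2/1, 1/3, 3/1: the pairing with the integers 1, 2, 3, ... never runs out, so the rationals are no "bigger" than the integers.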
Today, set theory has become the foundation of modern math. But in the aftermath of Cantor, the basics of set theory were unclear. As Graham and Kantor describe it, even leading mathematicians found the situation unsettling. Three French thinkers — Émile Borel, Henri Lebesgue, and René Baire — who made advances in set theory nonetheless decided by the early 1900s that the study of infinity had lost its way. They felt theorists were relying more on arbitrary rule-making than rigorous inquiry. “The French lost their nerve,” says Graham. By contrast, Graham and Kantor assert, the Russian trio found “freedom” in the mathematical uncertainties of the time. It turns out there were plenty of concrete advances in set theory yet to be made; Luzin in particular pushed the field forward in the 1910s and 1920s, making discoveries about numerous types of sets involving the continuum of real numbers (the larger of the infinities Cantor found); Descriptive Set Theory details the properties of these sets. In turn, many of Luzin’s students in the Moscow School also became prominent figures in the field, including Andrei Kolmogorov, the best-known Russian mathematician of the 20th century.

What’s in a name?

Naming Infinity argues that the Russians thought their mathematical inquiries corresponded to their religious practices. The Name-worshippers believed the name of God was literally God, and that by invoking it repeatedly in their prayer, they could know God closely — a heretical view for some. Graham and Kantor think the Russians saw their explorations in math the same way; they were defining (and naming) sets in areas where others thought knowledge was impossible. Luzin, for one, often stressed the importance of “naming” infinite sets as a part of discovering them. The Russians “believed they made God real by worshipping his name,” the book states, “and the mathematicians … thought they made infinities real” by naming and defining them.
Graham also suggests a parallel between the Russians and Isaac Newton, another believer (and heretic). Historians today largely view Newton’s advances in physics as part of a larger personal effort — including readings in theology and alchemy experiments — to find divine order in the world. Similarly, the Russians thought they could comprehend infinity through both religion and mathematics. Mathematicians have responded to Naming Infinity with enthusiasm. “It’s a wonderful book for many reasons,” says Barry Mazur, the Gerhard Gade University Professor at Harvard, who regards it as “an excellent way of getting into the development of set theory at the turn of the century.” Moreover, Mazur agrees that the connection between the religious impulses of the three Russians and their mathematical studies seems significant, even if there is only a general affinity between the two areas in matters such as naming objects. “It is more a conveyance of energy, than a conveyance of logic,” Mazur says. Religion could not trigger precise mathematical moves, he thinks, but it provided the Russians with the intellectual impetus to move forward. Victor Guillemin, a professor of mathematics at MIT, also finds this account convincing. In the 1970s, it was Guillemin, staying at the Hotel Parisiana like Egorov and Luzin before him, who discussed the Russians’ lives with the Chamont family daughters (then elderly women, having been children just after the turn of the century). While reading Graham and Kantor’s book, Guillemin says, “I was fascinated at the idea that the Russians were able to push the subject further because they had less trepidation at dealing with infinity.” As Graham and Kantor point out, many other prominent mathematicians have had a mystical bent, from Pythagoras to Alexander Grothendieck, an innovative French theorist of the 1960s who now lives as a recluse in the Pyrenees. Yet Graham emphasizes that mysticism is not a precondition for mathematical insight. 
“To see if science and religion are opposed to each other, or help each other,” Graham says, “you have to select a specific episode and study it.”

Egorov’s exile, Florensky’s fate

Naming Infinity also starkly recounts the sorry fates of Egorov and Florensky, as publicly religious figures in atheist, postrevolutionary Russia. Egorov was exiled to the provinces and starved to death in 1931. Florensky, a flamboyant figure who wore priestly garb in public, was executed in 1937. Luzin was spared after the physicist Peter Kapitsa made a direct appeal to Stalin on his behalf. These men were not just endangered by their religiosity, however, but also by their style of math. The intangible nature of infinity contradicted the Marxist notion that intellectual activity should be grounded in material matters, a charge made by one of their accusers: Ernst Kol’man, a mathematician and seemingly sinister figure called “the dark angel” for his role as an informant on other Soviet intellectuals. Graham, who knew both Kapitsa and Kol’man, says Kol’man “really believed his Marxism, and believed it was wrong to think mathematics has no relationship to the material world. He thought this was a threat to the Soviet order.” Even so, Kol’man, who died in 1979, left behind writings acknowledging he had judged such matters “extremely incorrectly.” The Russian trio was thus part of a singular saga, belonging to a now-vanished historical era. Naming Infinity rescues that story for readers who never had the chance to hear it directly from the owners of the Hotel Parisiana.

December 14, 2009
All great scientists (and many Russians) believed in God: Isaac Newton, Pavlov, the geneticists Timofeev-Ressovsky and Vavilov, the physicist Lebedev, and many others. It was the standard of culture. If I know more mathematics than any professor at MIT, it means that they worked less than I did in some direction. For example, nobody at MIT knows which functions are exact analytical solutions of the Navier-Stokes equation. It is a very interesting problem connected with the Feigenbaum scenario. I know how to solve this problem, after many years of hard work. It is not 'Religious Mysticism and Mathematical Creativity'. Infinity is an abstraction of our brain. There is no infinity in nature.

December 15, 2009
Hey Gorskin, I'm not a mathematician, but doesn't our perception of infinity make it a reality in nature? Consider that human beings are just computers that run on math. I'd be interested to know your thoughts.

December 27, 2010
If just applying two bits in a computer people can do magic, think about ten digits; I bet you can't even process the data. Sorry if I hurt anybody.
Philippe de La Hire
Born: 18 March 1640 in Paris, France
Died: 21 April 1718 in Paris, France

Philippe de la Hire's father was Laurent de La Hire (27 February 1606 - 28 December 1656). Laurent was born in Paris and became a painter of some distinction, getting commissions from the church, from politicians, and from wealthy Parisians who wished their portrait painted. He became a professor at the Academy of Painting and Sculpture. Philippe's mother was Marguerite Coquin (died 1669) and the de la Hire home was a stimulating place for children to grow up in, for Laurent and Marguerite entertained artists, scientists and mathematicians. The most notable mathematician who was frequently in their home was Girard Desargues. Philippe was the oldest of his parents' five children, having a younger brother Barthélemy and three younger sisters: Marie (who was born between the two boys) and the two youngest members of the family, Marguerite and Louise. The family had two homes in Paris, one in the rue Montmartre with four floors and a garden, and the other, a smaller dwelling, in the rue Gravilliers. La Hire was educated as an artist and became skilled in drawing and painting. Although he received no formal education either in a school or in a university, nevertheless his father expected his son to follow his profession and trained him accordingly. La Hire was sixteen years old when his father died and at that time he was fully committed to a life as an artist. His health had been poor as he grew up so, three years after his father's death, he made plans to visit Italy. There were two reasons for the visit: he hoped that a stay in Italy would see his health improve, and also his father had given him a love of Italian art despite Laurent never having himself been to Italy.
La Hire set off for Venice in 1660 and there spent four years developing his artistic skills and learning geometry. The interest in geometry arose from his study of perspective in art, but soon he was finding his mathematics classes more enjoyable than painting. Returning to Paris in 1664, La Hire was a wealthy man and able to pursue his interests without the need to seek employment. Sturdy writes [8]:- An intelligent young man from France who had spent the best part of four years [in Venice] could not fail to return home a more mature, self-confident, sophisticated, and worldly-wise person than when he left. He continued to paint but his serious studies were devoted to geometry. He had a friend, Abraham Bosse, with whom he could share both artistic and mathematical interests. Bosse was an artist who was much older than La Hire, but had attended classes on geometry by Girard Desargues from 1641. Desargues, who La Hire had known from childhood, had died in 1661. Bosse had published a series of works developing the geometric ideas that he had learnt from Desargues and had established his own school of art in 1661. Much influenced by the work of Desargues, both directly and through his friendship with Bosse, La Hire worked on conic sections which he treated projectively. He published his first work Observations sur les Points d'Attouchement de Trois Lignes Droites qui touchent la Section d'un Cone in 1672, followed by his famous treatise Nouvelle méthode en géometrie pour les sections des superficies coniques et cylindriques in 1673. Taton [1] writes that the Nouvelle méthode:- ... is a comprehensive study of conic sections by means of the projective approach, based on a homology which permits the deduction of the conic sections under examination from a particular circle. This treatise was completed shortly afterwards by a supplement entitled 'Les planiconiques' which presented this method in a more direct fashion. 
The 'Nouvelle méthode' clearly displayed Desargues' influence, even though La Hire, in a note written in 1679 ..., affirmed that he did not become aware of the latter's work until after publication of his own. Yet what we know about La Hire's training seems to contradict this assertion. Furthermore, the resemblance of their projective descriptions is too obvious for La Hire's not to appear to have been an adaptation of Desargues's. Nevertheless, La Hire's presentation, which was in classical language and in terms of both space and the plane, was much simpler and clearer. Thus La Hire deserves to be considered, after Pascal, a direct disciple of Desargues in projective geometry. This assessment may be a little harsh on La Hire, who was an extremely honest and meticulous person. It is possible that his knowledge of these ideas of Desargues came through Bosse rather than directly from Desargues, so his statement that he did not know of Desargues's publications until after his own had been published could still be true. But we should understand a little more about the contents of the Nouvelle méthode [1]:- La Hire provided an exposition of the properties of conic sections. He began with their focal definitions and applied Cartesian analytic geometry to the study of equations and the solution of indeterminate problems; he also displayed the Cartesian method for solving certain types of equations by intersections of curves. Although not a work of great originality, it summarises the progress achieved in analytical geometry during half a century and contained some interesting ideas, among them the possible extension of space to more than three dimensions.
Their second home in rue Gravilliers was let out to rent. Philippe and Cathérine La Hire had four children: Cathérine-Geneviève (born 1671), Marie-Ann (born 1673), Gabriel-Philippe (born 1677) and Anne-Julie (born 1680). On 26 January 1678 La Hire was elected to the Académie des Sciences. Rather surprisingly, his election was to the astronomy section. He had, at that time, made no contributions to astronomy but Fontenelle [3] suggests that his election was on the strength of his excellent publications in geometry. Of course, often someone deserving of admission to the Academy would enter the section in which a vacancy occurred rather than be forced to wait, perhaps for many years, for a vacancy in a more appropriate section. Election to the Academy was a great honour for La Hire, but it also meant a change in lifestyle. The Academy was a working organisation, so election meant that he was no longer a man of leisure. Jean-Baptiste Colbert, the French Minister of Finance, had been instrumental in founding the Academy in 1666 and he now assigned La Hire to assist Jean Picard in the surveying work he was undertaking with the ultimate aim of producing more accurate maps of France. Together La Hire and Picard undertook surveying work in Brittany in 1679 and in Guyenne in 1680. La Hire then went, without Picard, to survey around Calais and Dunkirk in 1681 and the coast of Provence in 1682. We note that La Hire's maps of the Earth were made with the centre of projection, not at the pole, but at r/√2 along a radius produced through the pole (where r is the radius of the Earth). On 1 April 1681 La Hire's wife Cathérine died. He had little option but to remarry quickly, having four children, the youngest being just over one year old. He married Cathérine Nonnet, the daughter of notary Jean Nonnet and his wife Marie, on 18 September 1681.
By this time La Hire's work for the Academy was closely linked to the Paris Observatory which, like the Academy, had been founded largely due to Colbert. The director was Giovanni Cassini, and the Observatory had published the Connaissance des temps in 1679 which was the world's first nautical almanac. La Hire chose to live with his new wife at the Observatory rather than in his house on rue Montmartre. Sturdy writes [8]:- Whatever the prestige attached to residence in the Observatory, from a domestic point of view it had many drawbacks. The accommodation was extremely cramped. The La Hires had only two bedrooms, a room where Philippe worked, a kitchen and the use of a cellar. Not only were there the children of Philippe's first marriage to be catered for: four more children were born to Philippe and Cathérine. The La Hire family by the late 1680s and early 1690s numbered ten and must have felt intense pressure from overcrowding ... To some extent the overcrowding was eased by the fact that, at least in the early years of their marriage, La Hire was often absent undertaking work for the Academy. He continued the surveying work for the French atlas, but after the death of Colbert in 1683, he was directed by his successor François Michel le Tellier, Marquis de Louvois. The Royal Court had moved into the Palace of Versailles in 1682 and La Hire was given surveying projects relating to the supply of water to the new Palace. The following quote from Fontenelle [3] tells us a lot about La Hire's character:- La Hire, scrupulously exact almost to the point of superstition, used to present to M de Louvois lists of expenses drawn up day by day, in which even fractions were not neglected. The minister habitually used to tear them up without looking at them, and have the sums sent in rounded up figures. In December 1682 he was appointed to the chair of mathematics at the Collège Royale which had remained vacant following the death of Gilles Roberval in October 1675. 
Courses he lectured on included astronomy, mechanics, hydrostatics, dioptrics, and navigation. Four years after being named professor, he was appointed, in addition, to the chair of architecture at the Académie Royale d'Architecture [8]:- La Hire took his duties in the Collège Royale and Académie d'Architecture seriously, preparing his courses conscientiously and lecturing as regularly as possible. If his other commitments stood in his way, his eldest son, Gabriel-Philippe, lectured on his behalf. In fact, in exactly the same way as La Hire's own father had trained him to follow in his profession as an artist, La Hire had trained his own eldest son to follow his own career. La Hire's aim was to have his son elected to the Academy and indeed Gabriel-Philippe La Hire assisted his father in a whole range of scientific activities and was elected an 'élève' of the Academy in 1694 at the age of seventeen; in so doing he became the youngest member of the Academy in the seventeenth century. Despite his interests across a whole range of scientific disciplines, La Hire remained fascinated by geometry. In 1685 he published a comprehensive work on conic sections, Sectiones conicae, which contained a description of Desargues' projective geometry. In 1708 he calculated the length of the cardioid. He also wrote memoirs on the cycloid, the epicycloid, the conchoid and quadratures. However he had other mathematical interests and also wrote on magic squares. He published Traité de méchanique in 1695. Taton writes [1]:- Although passed over by the majority of the historians of mechanics, this work marks a significant step towards the elaboration of a modern manual of practical mechanics, suitable for engineers of various disciplines. Other topics to which he made important contributions included astronomy, physics and geodesy. In astronomy he installed the first transit instrument in the Paris Observatory.
He also produced tables giving the movements of the Sun, Moon and the planets, which he published in 1687, publishing further such tables in 1702. La Hire became involved in experimental work in many different scientific areas. For example he did experiments on falling bodies (for example with Mariotte in 1683), on magnetism, on the heat reflected by the moon, on the transmission of sound, on the physical properties of water, on electrostatics, on respiration, and on physiological optics. He also studied the instruments involved in experimental work. For example, for surveying, which was one of his major tasks, he designed an instrument to find the level at a site and studied instruments to compute slopes and elevations. He also studied instruments to measure climatic conditions such as temperature, pressure and wind speed, making measurements with such instruments at the Paris Observatory. Other experiments involved accurate time keeping, so he studied clocks as well as magnets and electrostatic machines used in other experimental work. We should also mention La Hire's contributions in editing the works of Jean Picard, Edme Mariotte, Gilles Roberval, and Frenicle de Bessy. Finally we quote Taton's evaluation of La Hire's contributions [1]:- It is difficult to make an overall judgement on a body of work as varied as La Hire's. A precise and regular observer, he contributed to the smooth running of the Paris Observatory and to the success of the different geodesic undertakings. Yet he was not responsible for any important innovation. His diverse observations in physics, meteorology, and the natural sciences simply attest to the high level of his intellectual curiosity. Although his rejection of the infinitesimal calculus may have rendered a part of his mathematical work sterile, his early works in projective, analytic, and applied geometry place him among the best of the followers of Desargues and Descartes.
Finally, his diverse knowledge and artistic, technical, and scientific experience were factors in the growth of technological thought, the advances of practical mechanics, and the perfecting of graphic techniques.

Article by: J J O'Connor and E F Robertson
List of References (11 books/articles)
Honours awarded to Philippe de la Hire: the lunar feature Mons La Hire.
JOC/EFR © December 2008, School of Mathematics and Statistics, University of St Andrews, Scotland
Javascript to find the weeknumber (Gregorian Calendar)

This short article reveals the JavaScript way to get the week number for a given date. The week number normally runs from week 1 (the first week of a year) to week 52, but in the Gregorian calendar some years have 53 weeks. I looked around for a JavaScript routine to find the week number using the Gregorian calendar rules, but I could not find any. Giving up the search, I started writing the function myself. I looked up the rules of the calendar, found a neat formula by Peter-Paul Koch, implemented it in JavaScript, and here it is for you all to use free of charge.

Using the code

Since this code is not a complete project, rather just a snippet, there is no download. You just have to copy the code and paste it into your favourite editor. The function expects numeric values for year, month and day. In JavaScript the months are 0 to 11, so the function expects that range of numbers representing the months (e.g. January = 0 ... December = 11). I didn't bother to write code calling the function, as I think it is pretty obvious how to use it. Here it is:

    function getWeek(year, month, day) {
        // Let's calc the weeknumber the cruel and hard way :D
        // First find the Julian Day number (Gregorian calendar)
        month += 1; // use 1-12
        var a = Math.floor((14 - month) / 12);
        var y = year + 4800 - a;
        var m = month + (12 * a) - 3;
        var jd = day + Math.floor(((153 * m) + 2) / 5) + (365 * y) +
                 Math.floor(y / 4) - Math.floor(y / 100) + Math.floor(y / 400) - 32045;
        // For the Julian calendar it would be:
        // var jd = day + Math.floor(((153 * m) + 2) / 5) + (365 * y) + Math.floor(y / 4) - 32083;
        // Now calc the weeknumber according to the Julian Day
        var d4 = (((jd + 31741 - (jd % 7)) % 146097) % 36524) % 1461;
        var L = Math.floor(d4 / 1460);
        var d1 = ((d4 - L) % 365) + L;
        var numberOfWeek = Math.floor(d1 / 7) + 1;
        return numberOfWeek;
    }

Please ask in the forum below if you need help using the function.

Points of Interest

There are rules and conventions in the Gregorian calendar that are not common all around the world. I believe the USA doesn't use this week-number method.
Consider this before implementing the function. - getWeek 1.0 released
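Koch's formula follows the ISO 8601 week rule (weeks start on Monday, and week 1 is the week containing the year's first Thursday), which I assume is the "Gregorian calendar" convention meant here. As a cross-check against the JavaScript function, Python's standard library computes the same week numbers (note it takes months 1-12, unlike the 0-11 the JavaScript function expects):

```python
from datetime import date

def iso_week(year, month, day):
    # isocalendar() returns (ISO year, ISO week number, ISO weekday);
    # some ISO years, like 2020, have 53 weeks.
    return date(year, month, day).isocalendar()[1]
```

For example, 1 January 2021 falls in week 53 (of ISO year 2020), while 31 December 2024 already falls in week 1 of 2025, which is exactly the boundary behaviour the formula above has to handle.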
Chapter 2: Numeration Systems and Sets
By lovelykag
Added: December 06, 2009 03:22:30

Section 1: Numeration Systems

The number system we use now is a base 10 system, and the written symbols we use are called numerals. Throughout history, other numerals and base systems have been used. A numeration system is a set of symbols and rules that define numbers systematically. Our system, the Hindu-Arabic system, defines our way of looking at numbers, and we use 10 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.

Different base systems: in a base 5 system there is no digit 5; the number five is written as "10". The only digits we use are 0, 1, 2, 3, and 4. Here is a table:

Base 10   Base 5
1         1
2         2
3         3
4         4
5         10
6         11
7         12
8         13
9         14
10        20
11        21
12        22

Since it is base 5, the digit 5 is never written; instead we write 10, 20, and so on. The more we understand different number systems, the more insight we gain into our own.

In this section I learned all about place values: 546 is 5 x 100 + 4 x 10 + 6 x 1, and how to convert from one system to another by using place values as a guide. For example, to convert 25 to base 2: 25 = 16 + 8 + 1, so 25 is written 11001 in base 2.

Section 2: Describing Sets

In this section, we did a lot of algebra review of sets, which are a way to organize a collection of data into an easily understood collection. The elements of the set are the individual pieces of info in the set. Here is an example of a set: D = {1, 2, 3, 4, 5}. The numbers are the "elements" or "members" of the set. This method of writing a set is called the listing method.
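The repeated-division recipe from Section 1 (the one used to convert 25 to base 2) can be sketched in a few lines of Python; this is an illustration, not part of the original notes. Divide by the base, collect the remainders, and read them back-to-front:

```python
def to_base(n, base):
    """Write the non-negative integer n in the given base (2 through 10)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))  # each remainder is a digit, last one first
        n //= base
    return "".join(reversed(digits))
```

to_base(25, 2) gives "11001", and to_base(12, 5) gives "22", matching the base-5 table in Section 1.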
Another way to write a set is called set-builder notation. For example, C = { x | x ∈ W } is read: "C is the set of all elements x such that x is a whole number." We learned that if 2 sets have the same elements then they are equal. We also defined a subset: A is a subset of B when every element of A also belongs to B (a proper subset contains some but not all of the elements). This section was very fun and helped a lot in the next chapters.
Section 3: Other Set Operations and Their Properties
In this section, we talked about the union and intersection of sets. I learned that to describe the union of 2 sets, we use the ∪ symbol: the union of sets A and B is written as A∪B. The intersection of the two sets is written as A∩B. We looked closely at how one set interacts with another and how to determine the intersection of 2 or more sets.
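The base-conversion rule and the set operations described above are easy to check with a short script. Here is a minimal Python sketch (the function name `to_base` is mine; the values are the chapter's own examples):

```python
def to_base(n, base):
    """Convert a non-negative integer to its digit string in the given base,
    by repeated division: the remainders are the place values in reverse."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

# The chapter's examples: five is written "10" in base 5, and 25 in base 2.
print(to_base(5, 5))    # prints 10
print(to_base(25, 2))   # prints 11001
print(to_base(12, 5))   # prints 22, matching the table

# Union and intersection of sets, as in Section 3.
A = {1, 2, 3, 4, 5}
B = {4, 5, 6}
print(A | B)  # union A∪B = {1, 2, 3, 4, 5, 6}
print(A & B)  # intersection A∩B = {4, 5}
```

Running it confirms the table entries, e.g. 12 in base 10 is written 22 in base 5.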
LANS Publications "An Efficient High-Order Time Integration Method for Spectral-Element Discontinuous Galerkin Simulations in Electromagnetics" M. Min and P. Fischer Preprint ANL/MCS-P1830-0111 Preprint Version: [pdf] We investigate efficient algorithms and a practical implementation of an explicit-type high-order timestepping method based on Krylov subspace approximations, for possible application to large-scale engineering problems in electromagnetics. We consider a semi-discrete form of the Maxwell equations resulting from a high-order spectral-element discontinuous Galerkin discretization in space, whose solution can be expressed analytically by a large matrix exponential of dimension n×n. We project the matrix exponential into a small Krylov subspace by the Arnoldi process based on the modified Gram-Schmidt algorithm and perform the matrix exponential operation with a much smaller matrix of dimension m×m (m ≪ n). For computing the matrix exponential, we obtain the eigenvalues of the m×m matrix using available library packages and compute an ordinary exponential function of the eigenvalues. The scheme involves mainly matrix-vector multiplications, and its convergence rate is generally O(Δt^(m−1)) in time, so it allows taking a larger timestep size as m increases. We demonstrate CPU time reduction compared with results from the five-stage fourth-order Runge-Kutta method for a given accuracy. We also demonstrate error behaviors for long-time simulations. Case studies are also presented, showing loss of orthogonality that can be recovered by adding a low-cost reorthogonalization technique.
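The core idea — projecting a large matrix exponential onto a small Krylov subspace built by the Arnoldi process with modified Gram-Schmidt — can be sketched in a few lines. This is not the authors' implementation: it is a generic NumPy/SciPy illustration with a made-up test matrix, and it uses `scipy.linalg.expm` on the small m×m matrix rather than the eigenvalue route described in the abstract.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m=30):
    """Approximate exp(A) @ v by projecting A onto an m-dimensional
    Krylov subspace via the Arnoldi process (modified Gram-Schmidt)."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))       # orthonormal Krylov basis vectors
    H = np.zeros((m + 1, m))       # small upper-Hessenberg projection
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):     # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:    # happy breakdown: invariant subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Exponential of the small m×m matrix instead of the large n×n one.
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
A = -np.diag(np.linspace(0.1, 5.0, 50)) + 0.01 * rng.standard_normal((50, 50))
v = rng.standard_normal(50)
approx = krylov_expv(A, v, m=30)
exact = expm(A) @ v
print(np.linalg.norm(approx - exact))  # small for modest m
```

The cost per step is dominated by the m matrix-vector products with A, which is the property the abstract exploits for large sparse discretizations.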
RETNO INDAH SARI, EVI (2007) THE PROBLEMS FACED BY THE TEACHER IN IMPLEMENTING CONTEXTUAL TEACHING AND LEARNING (CTL) AT SMA NEGERI 2 JOMBANG. Other thesis, University of Muhammadiyah Malang. This study describes the problems, the causes of the problems, and the solutions proposed by the teacher in implementing the Contextual Teaching and Learning (CTL) strategy. A descriptive research design was used to obtain information related to the research problems. The subject of this study was a third-grade English teacher. The instruments used to collect the data were interviews and observation. The results showed that the teacher's problems in teaching and learning English using CTL include the following. First, the difficulty of making the students understand the material well (constructivism), because they lacked vocabulary knowledge. To cope with this problem, the teacher repeated the explanation and translated it into Indonesian. Second, the teacher found it difficult to make the students find the knowledge by themselves (inquiry), because of their weak vocabulary mastery, especially when analyzing the problem in English. To solve this problem, the teacher gave additional theory or explanation related to the task and asked questions related to the passage. Third, the teacher faced problems when she grouped the students (learning community), because the clever students dominated the class. Here, the teacher solved the problem by encouraging the less able students to take an active part in the work. Moreover, the teacher had difficulty giving the students examples of correct pronunciation (modeling), because her own pronunciation was influenced by Indonesian. To solve this problem, the teacher practiced how to pronounce words correctly.
Next, the teacher found it difficult to give authentic assessment to the students during and after class, because she taught several large classes and therefore could not pay attention to the students one by one. To cope with this problem, the teacher approached the students individually so that she knew each student's ability. Then, the teacher faced problems in teaching listening, because the school did not provide up-to-date and varied listening materials. To solve this problem, the teacher got material from Kang Guru magazine and cassettes. Finally, the teacher had difficulty making the students pay attention to her, because the students were tired and less motivated to study when the English lesson was given at the end of the school day. This was solved by giving the students motivation and advice so that they were motivated to study. The last two problems do not belong to the components of CTL.
The n-Category Café The Reasoner Posted by David Corfield My past and future colleague Jon Williamson started a monthly digest of research on reasoning - The Reasoner. I was asked to guest edit the August edition, which required me to write an editorial and to interview someone of my choice. I opted for Brendan Larvor, a philosopher with very close interests to my own. You can read these items here. Posted at August 26, 2007 9:20 AM UTC How do mathematicians steer? Re: The Reasoner “How do mathematicians steer their research” is a wonderful question. I also like the use of “steer” for its association with the old Greek word “kybernetes” and thus Cybernetics. It seems that such a process is important, yet not within Mathematics at the formal level. It is not reducible to Citation Index and h-number, for example. The hierarchical classification of mathematical topics may play a role in what is apparently a very non-hierarchical process. Brendan Larvor’s insights into the Lakatos programme are fascinating, as is your introduction, and the entire August edition. Thank you for sharing these. Posted by: Jonathan Vos Post on August 26, 2007 6:54 PM | Permalink | Reply to this Re: How do mathematicians steer? Re: The Reasoner That’s generous, thanks. And thanks too for the connection between steering and cybernetics. A new thought for me. Posted by: Brendan on September 6, 2007 8:56 PM | Permalink | Reply to this
Iterative Learning Control for Remote Control Systems with Communication Delay and Data Dropout
Mathematical Problems in Engineering, Volume 2012 (2012), Article ID 705474, 14 pages
Research Article
^1State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China ^2Department of Electrical and Information Engineering, Shaoxing College of Arts and Sciences, Shaoxing 31200, China ^3Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576
Received 21 October 2011; Accepted 10 January 2012
Academic Editor: Yun-Gang Liu
Copyright © 2012 Chunping Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Iterative learning control (ILC) is applied to remote control systems in which communication channels from the plant to the controller are subject to random data dropout and communication delay. Through analysis, it is shown that ILC can achieve asymptotic convergence along the iteration axis, as long as the probabilities of the data dropout and communication delay are known a priori. Owing to its feedforward-based nature, ILC can perform trajectory-tracking tasks while both the data-dropout and the one-step delay phenomena are taken into consideration. Theoretical analysis and simulations validate the effectiveness of the ILC algorithm for network-based control tasks.
1. Introduction
Iterative learning control (ILC) is a control method that achieves perfect trajectory tracking when the system operates repeatedly.
ILC has made significant progress over the past two decades [1–3] and has covered a wide range of research issues, such as continuous-time nonlinear system control [4], discrete-time nonlinear systems [5], the initial reset problem [6, 7], stochastic process control [8], state delays [9], and data dropout [10]. On the other hand, research on networked control systems has attracted much attention [11, 12] over the past few years. In network control, two frequently encountered issues are data dropout and communication delays, which cause poor performance in remote control systems. A central research area in remote control systems is to evaluate and compensate for data-dropout and time-delay factors [13–16]. Since data dropout and delay are random and time varying by nature, existing control methods for deterministic data dropout and communication delays cannot be directly applied. Significant research efforts have been made on control problems for networked systems with random data dropout and communication delays, which are modeled in various ways in terms of the probability and the characteristics of sources and destinations, for instance [10, 17]. Where remote control systems are concerned, ILC is in general still an open research area, except for certain pioneering works that address linear systems with either random data dropout [10, 18] or random communication delays [17, 19–21]. This paper investigates the implementation of ILC in a remote control setting, specifically focusing on compensation when both random data dropout and delays occur in the communication channels between the plant output and the controller. Since ILC is in principle a feedforward technique, it is possible to send the controller signal before the task is executed. This would not be possible for feedback-based control systems. Hence, the data dropout can be circumvented to a certain extent by using network protocols that assure the delivery of data packets.
Likewise, the large delay due to a large data package can also be avoided when the package is used for repeated task executions, namely, in future executions. An ILC task is carried out in a finite time interval, hence time-domain stability is not a concern. Thus, unlike most network control works that focus on the stability issue, ILC can be applied to address trajectory-tracking tasks, and the learning convergence is achieved in the iteration domain. On the other hand, the use of data in the feedforward fashion would require the temporal analysis and management of data packages, as well as the resending of missing data packages, which may not be available in certain remote control tasks. In this work, we adopt an ILC scheme that uses past control signals, as well as the error signals that are perturbed by the data dropout and communication delay. The ILC law adopts the classical D-type algorithm and a revised learning gain that takes into consideration the probabilities of both the data-dropout and communication-delay factors. As a result, the output tracking errors can be made to converge along the iteration axis. The ILC scheme can be applied to linear discrete-time plants with trajectory-tracking tasks. The paper is organized as below. Section 2 formulates the remote control systems problem. Sections 3 and 4 prove the convergence property of ILC for linear discrete-time plants. Section 5 presents a numerical example. Throughout the paper, the following notations are used. Let E[·] be the expected value of a random variable, P{·} the probability of an event, ‖·‖ the Euclidean norm of a vector, and σ̄(·) the maximal singular value of a matrix. Let f(t) be a discrete-time signal with t ∈ {0, 1, …, N}. For any a > 1 and any λ > 0, define the λ-norm ‖f‖_λ = sup_{t ∈ {0,…,N}} a^{−λt} ‖f(t)‖. 2. Problem Formulation Consider a deterministic discrete-time linear time-invariant system x_k(t+1) = A x_k(t) + B u_k(t), y_k(t) = C x_k(t), (2.1) where “k” and “t” denote the iteration index and discrete time, respectively. x_k(t), u_k(t), and y_k(t) for all t ∈ {0, …, N} are the system states, inputs, and outputs, respectively, at the kth iteration.
A, B, and C are constant matrices with appropriate dimensions. The schematic diagram of the remote control system under consideration is shown in Figure 1. It should be noted that the open-loop system from the ILC input to the plant output is deterministic. The randomness occurs during the data transmission from the plant output to the ILC input. There are two approaches to analyzing the closed-loop system. The first approach is to treat the entire closed-loop system as a random or stochastic process. In such circumstances, the topology of the overall system keeps changing and the control process is either a Markovian jump process or a switching process. Another approach, which is adopted in this work, is to retain the essentially deterministic structure of the original open-loop system and meanwhile model the random data dropout and communication delay as two random factors with known probability distributions. As a consequence, the signals used in ILC are the plant output modulated by the two random factors. When the control process is deterministic, an effective ILC law for the linear system (2.1) is u_{k+1}(t) = u_k(t) + Γ e_k(t+1), (2.2) where u_{k+1}(t) and u_k(t) are the control inputs at the (k+1)th and kth iterations, namely, the present trial and the previous trial, respectively. e_k(t) = y_d(t) − y_k(t) is the output tracking error at the tth time instance of the kth iteration. Γ is a learning gain matrix. Remark 2.1. Note that in the ILC law (2.2), the control signal of the present iteration, u_{k+1}(t), consists of both the past control input, u_k(t), and the past error with one-step temporal advance, e_k(t+1). Current-cycle feedback errors are not used. Since ILC requires neither current-cycle feedback nor temporal stability, it is an effective control method for remote control problems with random data dropout and communication delay. To facilitate the ILC design and convergence analysis, the data dropout and the one-step communication delay are formulated below. First formulate the data-dropout problem.
Denote by γ a stochastic variable with Bernoulli distribution taking binary values 0 and 1, where γ = 0 denotes an occurrence of data dropout and γ = 1 denotes a normal data communication. The probabilities of γ are P{γ = 1} = γ̄ and P{γ = 0} = 1 − γ̄, where γ̄ is a known constant. Here, we assume that γ is a stationary stochastic process; thus the data dropout rate is independent of the time t. In subsequent derivations, we treat γ as time invariant. When data dropout occurs in multiple communication channels, we can similarly define γ_i for the ith communication channel, with the corresponding mathematical expectation γ̄_i known a priori. Due to the data dropout, the plant output received by the controller at the kth iteration is the output modulated by γ. Generally speaking, the occurrences of data dropouts at two iterations are uncorrelated, thus independent. On the other hand, the ILC law at the current iteration uses only signals of the previous iteration, as shown in (2.2), so the control input contains data dropouts only up to the previous iteration. Therefore, the dropout factor and the control input of the current iteration are independent. Without loss of generality, we assume that the data dropout rate is invariant at different iterations. Next formulate the one-step communication delay problem. Denote by τ a random delay factor with Bernoulli distribution, which takes binary values 0 and 1 indicating, respectively, the absence and presence of a one-step communication delay. Here we assume that τ is a stationary stochastic process; thus the occurrence of the communication delay is independent of the time t. In subsequent derivations we treat τ as time invariant. With multiple communication channels, we define a diagonal matrix of delay factors, whose ith entry denotes the occurrence of a communication delay at the ith communication channel. The plant output received by ILC with a possible communication delay is then the output shifted by the one-step delay at the kth iteration.
Without loss of generality, we assume that the probability of the communication delay is invariant at different iterations. Analogous to the data dropout, assume that communication delays at any two iterations are independent; then the delay factor and the control input are independent, because the control input contains communication delays only up to the previous iteration through the ILC law (2.2). It is worth noting that the stochastic variables γ and τ are not completely independent. A delayed or nondelayed communication occurs only when γ = 1, that is, when there is no data dropout. Hence, we have the conditional probability for data transmission without delay and the conditional probability for data transmission with a one-step delay, and as a consequence we obtain relationship (2.11). The relationship (2.11) between data dropout and communication delay can be extended to multiple channels at the kth iteration, giving (2.12). At the kth iteration, the output signals perturbed by data dropout and one-step communication delay can then be expressed accordingly, where I is an identity matrix of appropriate dimensions. The mathematical expectation of the perturbed output can be derived using the independence between γ, τ, and the control input, as well as the relationship (2.12). The objective of the control design is to seek an appropriate ILC law that takes into consideration data dropout and communication delay concurrently. The ILC law (2.15) is adopted, with the revised learning gain defined in (2.16). 3. Convergence Analysis for Left-Invertible Systems In this section, we derive the convergence property of the ILC (2.15) in the presence of data dropout and communication delays. In ILC, the learning convergence can be derived in terms of either the output tracking error or the input tracking error. In this section, we prove the learning convergence of the input error. Assumption 3.1. For a given realizable output reference trajectory y_d(t), there exists a unique desired control input u_d(t), uniformly bounded for all t, that generates y_d(t) through (2.1). The initial state at each iteration is assumed to be a random variable with the desired mean.
Define the input and state errors Δu_k(t) = u_d(t) − u_k(t) and Δx_k(t) = x_d(t) − x_k(t); then, from (2.1) and (3.1), we obtain the error dynamics. From (2.15), using the relationship (2.12), we obtain the input-error recursion. Theorem 3.2. Suppose that the update law (2.15) is applied to the networked control system and satisfies Assumption 3.1. If the revised learning gain satisfies a suitable contraction condition, then the input error along the iteration axis converges to a bound that is proportional to the maximum one-step difference of the reference trajectory. Proof. First, subtracting the desired input from both sides of the ILC law (2.15) yields (3.6). Applying the ensemble (expectation) operator to both sides of (3.6) and substituting the relationship (3.4), we obtain (3.7). Substituting the state error dynamics (3.3) into (3.7) leads to (3.8). Now let us handle the second term on the right-hand side of (3.8). Applying the ensemble operation to (3.9) and substituting the relation (3.9) into (3.8), then taking the norm on both sides, the relationship (3.10) is derived. In order to handle the exponential term in (3.11), we introduce the λ-norm. From Assumption 3.1, multiplying both sides of (3.10) by a^{−λt} and taking the supremum over t yields (3.11), with constants independent of the iteration index; since the desired input is bounded, so are these constants. Substituting the properties of Lemma A.1 into (3.11) yields (3.13). Since the contraction condition holds, it is possible to choose λ sufficiently large that the overall gain is less than one. Therefore we can rewrite (3.13) as a contraction, which implies convergence to the stated bound. Note that the bound is proportional to the maximum one-step difference of the reference trajectory, which is bounded and small when the reference trajectory is smooth or the sampling interval is sufficiently small. When the probability associated with the data communication delay is known a priori, we can further revise the reference trajectory to an augmented one, such that the resulting bound vanishes. Corollary 3.3. Revising the original reference into an augmented one that anticipates the delay, the ILC (2.15) ensures a zero tracking error. Proof. Suppose the transmitted output is delayed by one step; then the delay-perturbed output is the original output shifted by one step.
In other words, the augmented reference trajectory should be the original reference advanced to compensate for the expected one-step delay. As a result, tracking the augmented reference implies zero error with respect to the original one. Now, replacing the reference in (2.16) with the augmented one, we can derive the corresponding learning gain. Comparing the resulting expression with (2.16), we conclude that the residual term vanishes, which implies a zero tracking error according to (3.16). 4. Convergence Analysis for Right-Invertible Systems In this section, we prove the learning convergence of the output tracking error. Assumption 4.1. A right inverse of the plant's input-output map always exists. Theorem 4.2. Suppose that the update law (2.15) is applied to the networked control system and satisfies Assumption 4.1. If a suitable contraction condition holds, then the tracking error along the iteration axis converges to a bound that is proportional to the maximum one-step difference of the reference trajectory. Proof. First note the relationship (4.2). Substituting the ILC law (2.15), (2.16), and (4.3) into (4.2) yields (4.4). Under an additional boundedness condition (Assumption 4.3), applying the ensemble operator to both sides of (4.4) and substituting the relationship (4.2), we obtain (4.5). Taking the norm on both sides of (4.5), a bounding relationship is derived. In order to handle the exponential term in (4.5), we introduce the λ-norm. Multiplying both sides of (4.5) by a^{−λt} and taking the supremum over t yields (4.7). Substituting the properties of Lemma A.2 into (4.7) yields (4.8). Since the contraction condition holds, it is possible to choose λ sufficiently large that the overall gain is less than one. Therefore, we can rewrite (4.8) as a contraction, which implies convergence to the stated bound. 5. Numerical Examples Consider a linear discrete-time system with zero initial condition, a given desired trajectory and tracking period, and an all-zero control profile at the first iteration. Two sets of probabilities for the data dropout rate and communication delay are considered. The learning gain is chosen so that the convergence condition holds with respect to the two sets of probabilities. The tracking performance of the two ILC algorithms is given in Figure 2, where Max Error denotes the maximum absolute error of each iteration. 6.
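The iteration-domain convergence reported in Figure 2 is easy to reproduce qualitatively. The sketch below is not the paper's example (its exact system, trajectory, and probability values are not given here); it applies the D-type update u_{k+1}(t) = u_k(t) + Γ e_k(t+1) to a hypothetical scalar plant, with Bernoulli data dropout on the fed-back error signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar plant x(t+1) = a x(t) + b u(t), y(t) = c x(t);
# these values are illustrative, not the paper's example.
a, b, c = 0.8, 1.0, 1.0
N = 50                              # finite trial length
t = np.arange(N + 1)
yd = np.sin(2 * np.pi * t / N)      # desired trajectory (illustrative)

def run_trial(u):
    """Simulate one trial of the plant from zero initial state."""
    x = 0.0
    y = np.zeros(N + 1)
    for k in range(N):
        y[k] = c * x
        x = a * x + b * u[k]
    y[N] = c * x
    return y

gamma = 0.5      # learning gain; |1 - gamma*c*b| < 1 gives contraction
p_drop = 0.1     # probability that an error packet is lost (gamma-bar = 0.9)
u = np.zeros(N)
max_err = []
for it in range(60):
    y = run_trial(u)
    e = yd - y
    keep = rng.random(N) > p_drop   # Bernoulli dropout on fed-back errors
    # D-type update with one-step temporal advance on the error:
    u = u + gamma * e[1:] * keep
    max_err.append(np.max(np.abs(e)))

print(max_err[0], max_err[-1])      # error shrinks along the iteration axis
```

The maximum absolute error decreases along the iteration axis despite the dropped packets, because an entry whose error packet is lost is simply updated at a later iteration.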
Conclusion In this work, we address a class of networked control system problems with random data dropout and communication delay. D-type ILC is applied to handle this remote control problem with repeated tracking tasks. Through analysis, we establish the desired convergence property of the ILC. Although we focus on a one-step communication delay in this work, the results could be extended to multiple delays, which is one of our ongoing research topics. In our future work, we will also explore the extension to more generic nonlinear dynamic processes. Lemma A.1. For all admissible parameters, the stated summation inequality holds. Lemma A.2. For all admissible parameters, the stated inequalities hold. This work is supported by the National Natural Science Foundation of China (Grants no. 60736021 and no. 60721062) and the 973 Program of China (Grant no. 2009CB320603). References 1. Y. Q. Chen and K. L. Moore, “Harnessing the nonrepetitiveness in iterative learning control,” in Proceedings of the 41st IEEE Conference on Decision and Control, vol. 3, pp. 3350–3355, Las Vegas, Nev, USA, 2002. 2. D. A. Bristow, M. Tharayil, and A. G. Alleyne, “A survey of iterative learning control,” IEEE Control Systems Magazine, vol. 26, no. 3, pp. 96–114, 2006. 3. Z. Bien and I. M. Huh, “Higher-order iterative learning control algorithm,” IEE Proceedings D, vol. 136, no. 3, pp. 105–112, 1989. 4. J.-X. Xu and Y. Tan, Linear and Nonlinear Iterative Learning Control, Springer, Berlin, Germany, 2003. 5. C.-J. Chien, “A discrete iterative learning control for a class of nonlinear time-varying systems,” IEEE Transactions on Automatic Control, vol. 43, no. 5, pp. 748–752, 1998. 6. M. X. Sun and D. Wang, “Iterative learning control with initial rectifying action,” Automatica, vol. 38, no. 8, pp. 1177–1182, 2002.
7. K.-H. Park, “An average operator-based PD-type iterative learning control for variable initial state error,” IEEE Transactions on Automatic Control, vol. 50, no. 6, pp. 865–869, 2005. 8. S. S. Saab, “A discrete-time stochastic learning control algorithm,” IEEE Transactions on Automatic Control, vol. 46, no. 6, pp. 877–887, 2001. 9. Y. Q. Chen, Z. Gong, and C. Y. Wen, “Analysis of a high-order iterative learning control algorithm for uncertain nonlinear systems with state delays,” Automatica, vol. 34, no. 3, pp. 345–353, 1998. 10. H.-S. Ahn, K. L. Moore, and Y. Q. Chen, “Discrete-time intermittent iterative learning controller with independent data dropouts,” in Proceedings of the 17th World Congress (IFAC '08), Seoul, Korea, 2008. 11. J. Lam, H. Gao, and C. Wang, “Stability analysis for continuous systems with two additive time-varying delay components,” Systems & Control Letters, vol. 56, no. 1, pp. 16–24, 2007. 12. B. Tang, G. P. Liu, and W. H. Gui, “Improvement of state feedback controller design for networked control systems,” IEEE Transactions on Circuits and Systems, vol. 55, no. 5, pp. 464–468, 2008. 13. T. C. Yang, “Networked control system: a brief survey,” IEE Proceedings Control Theory and Applications, vol. 153, no. 4, pp. 403–412, 2006. 14. J. Hespanha, “Stochastic hybrid systems: application to communication networks,” in Lecture Notes in Computer Science, vol. 2993, pp. 387–401, Springer, New York, NY, USA, 2004. 15. H. Gao and C.
Wang, “Delay-dependent robust H∞ and L2–L∞ filtering for a class of uncertain nonlinear time-delay systems,” IEEE Transactions on Automatic Control, vol. 48, no. 9, pp. 1661–1666, 2004. 16. H. Gao and C. Wang, “A delay-dependent approach to robust H∞ filtering for uncertain discrete-time state-delayed systems,” IEEE Transactions on Signal Processing, vol. 52, no. 6, pp. 1631–1640, 2004. 17. F. W. Yang, Z. D. Wang, Y. S. Hung, and M. Gani, “H∞ control for networked systems with random communication delays,” IEEE Transactions on Automatic Control, vol. 51, no. 3, pp. 511–518, 2006. 18. X. H. Bu and Z. S. Hou, “Stability of iterative learning control with data dropout via asynchronous dynamical system,” International Journal of Automation and Computing, vol. 8, no. 1, pp. 29–36, 2011. 19. L. Zhou, Z. J. Zhang, G. P. Lu, and X. Q. Xiao, “Stabilization of discrete-time networked control systems with nonlinear perturbation,” in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 266–270, 2008. 20. M. B. G. Cloosterman, N. van de Wouw, W. P. M. H. Heemels, and H. Nijmeijer, “Stability of networked control systems with uncertain time-varying delays,” IEEE Transactions on Automatic Control, vol. 54, no. 7, pp. 1575–1580, 2009. 21. S. Hu and W. Y. Yan, “Stability robustness of networked control systems with respect to packet loss,” Automatica, vol. 43, no. 7, pp. 1243–1248, 2007.
Elmwood Park, IL Math Tutor Find an Elmwood Park, IL Math Tutor ...I've also tutored students preparing for multi-subject standardized tests (ACT, TEAS, COMPASS). I have an MA in English. I taught high school English classes for three years and have been teaching college composition for six years... 17 Subjects: including algebra 1, algebra 2, grammar, geometry ...Learning is personal, so my goal is to connect with each and every student in whatever way is most helpful to them. I look forward to working with you and your children! I was an advanced math student, completing the equivalent of Algebra 1 before high school. I continued applying algebraic skills in high school, where I was a straight-A student and completed calculus as a junior. 12 Subjects: including logic, algebra 1, algebra 2, geometry ...I am a certified special education teacher. My son and some of his friends have Aspergers and I have worked with Aspergers students in my classes. 33 Subjects: including linear algebra, SAT math, algebra 1, prealgebra ...I have taught philosophy of physics at Notre Dame, the Illinois Institute of Technology, and the University of Chicago. If you want to improve your understanding of the physics you'll be studying in high school and college, you've come to the right place. I have taken graduate courses in the history of molecular genetics. 21 Subjects: including discrete math, differential equations, linear algebra, algebra 1 ...I then obtained a bachelor's degree in automation and informatics engineering, a master's degree in control systems engineering, an internship in Sweden that focused on environmental intelligence, and a graduate research assistantship in Aerospace Engineering at West Virginia University. Throughou...
15 Subjects: including calculus, linear algebra, logic, trigonometry
Axisymmetric three-integral models for galaxies

Cretton N., de Zeeuw P.T., van der Marel R.P., Rix H.W.
ApJ Supplement, 124, 383-401, 1999
© 1999. The American Astronomical Society. All Rights Reserved.
Citations to this paper in the ADS

We have developed a practical method for constructing galaxy models that match an arbitrary set of observational constraints, without prior assumptions about the phase-space distribution function (DF). Our method is an extension of Schwarzschild's orbit superposition technique. As in Schwarzschild's original implementation, we compute a representative library of orbits in a given potential. We then project each orbit onto the space of observables, consisting of position on the sky and line-of-sight velocity, while properly taking into account seeing convolution and pixel binning. We find the combination of orbits that produces a dynamical model that best fits the observed photometry and kinematics of the galaxy. A key new element of this work is the ability to predict and match to the data the full line-of-sight velocity profile shapes. A dark component (such as a black hole and/or a dark halo) can easily be included in the models. Our method is applicable to any geometry. In an earlier paper (Rix et al.) we described the basic principles, and implemented them for the simplest case of spherical geometry. Here we focus on the axisymmetric case. We first show how to build galaxy models from individual orbits. This provides a method to build models with fully general DFs, without the need for analytic integrals of motion. We then discuss a set of alternative building blocks, the two-integral and the isotropic components, for which the observable properties can be computed analytically. Models built entirely from the two-integral components yield DFs of the form f(E,L_z), which depend only on the energy E and angular momentum L_z. This provides a new method to construct such models.
The smoothness of the two-integral and isotropic components also makes them convenient to use in conjunction with the regular orbits. We have tested our method extensively, by using it to reconstruct the properties of a two-integral model built with independent software. The test model is reproduced satisfactorily, either with the regular orbits, or with the two-integral components. Applications of our method to the galaxies M32 and NGC 4342 are described elsewhere (van der Marel et al., Cretton & van den Bosch). Last modified February 4, 1999. Roeland van der Marel, marel@stsci.edu.
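The "find the combination of orbits" step is, at heart, a constrained least-squares fit: choose nonnegative orbit weights so that the superposed library reproduces the observed quantities. A toy sketch of that step (the two-orbit "library" and the data are invented for illustration; production Schwarzschild codes use proper NNLS, quadratic-programming, or maximum-entropy solvers rather than this bare projected-gradient loop):

```python
def fit_weights(library, data, steps=5000, lr=0.01):
    # library[i][j]: contribution of orbit i to observable j.
    # Minimize sum_j (sum_i w_i * library[i][j] - data[j])^2 with w_i >= 0,
    # by gradient descent with the weights clipped at zero.
    n, m = len(library), len(data)
    w = [1.0 / n] * n
    for _ in range(steps):
        resid = [sum(w[i] * library[i][j] for i in range(n)) - data[j]
                 for j in range(m)]
        for i in range(n):
            grad = 2 * sum(resid[j] * library[i][j] for j in range(m))
            w[i] = max(0.0, w[i] - lr * grad)   # nonnegativity projection
    return w

# Two toy "orbits" observed in three apertures; the data were built
# from true weights (0.3, 0.7), which the fit should recover.
library = [[1.0, 0.0, 1.0],
           [0.0, 1.0, 1.0]]
data = [0.3, 0.7, 1.0]
weights = fit_weights(library, data)
```

The same structure scales to thousands of orbits and observables; the nonnegativity clip is what keeps the superposition physical, since a negative number of stars on an orbit is meaningless.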
Finding best guess

Hi gurus, I'm trying to solve a small system with a best guess of an integration limit, but having some problems. I define a function called "outer" (I parametrized all coefficients in the main program):

function y=outer(x)
global alfa e2 delta x1 x4 v4 c1 c2

Then I have to integrate "outer" over the interval x3 < x < x4, where I know x4 but have no idea about x3. However, I know the value of the integral from x3 to x4: it equals Qw (which I also parametrize in the initialization). That means quad(@outer,x3,x4) - Qw = 0 (or less than a tolerance value). I was thinking to use fzero to find the solution, but of course it will not work. I made a loop to find the best guess of x3, but it was too slow. Is there any better approach to find this? Thanks in advance
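One standard trick: if outer(x) is positive on the interval, then g(x3) = ∫ from x3 to x4 of outer(x) dx − Qw is monotone decreasing in x3, so the root can be bracketed and bisected instead of scanned in a loop. In MATLAB itself that amounts to handing fzero a bracketing interval, fzero(@(x3) quad(@outer,x3,x4) - Qw, [lo x4]). A self-contained sketch of the same idea in Python (the stand-in outer, x4, and Qw below are made up for illustration):

```python
def outer(x):
    # Hypothetical stand-in for the poster's parametrized outer(x).
    return x

def simpson(f, a, b, n=200):
    # Composite Simpson quadrature (n even), playing the role of quad().
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def find_x3(f, x4, Qw, lo, tol=1e-9):
    # g(x3) = integral(x3, x4) - Qw decreases as x3 grows (for f > 0),
    # so bisection on [lo, x4] homes in on the root, as fzero would.
    a, b = lo, x4
    while b - a > tol:
        mid = 0.5 * (a + b)
        if simpson(f, mid, x4) - Qw > 0:
            a = mid   # integral still too large: raise the lower limit
        else:
            b = mid
    return 0.5 * (a + b)

# With outer(x) = x, x4 = 2, Qw = 1.5 the exact answer is x3 = 1.
x3 = find_x3(outer, 2.0, 1.5, 0.0)
```

The only requirement is a bracket [lo, x4] on which the residual changes sign; if outer can change sign, the monotonicity argument fails and a scan for sign changes is needed first.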
st: Multiple density plots, rotated and distributed on x-axis

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

From: Venable <venablito@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: Multiple density plots, rotated and distributed on x-axis
Date: Mon, 1 Aug 2011 14:26:48 -0400

Dear Statalist, I would like to create a graph that contains several kernel density plots, but with each plot rotated 90 degrees right from the usual orientation, and with each plot aligned to a point on the x-axis. To be more concrete (borrowing an example from Angrist and Pischke, Mostly Harmless Econometrics, Figure 3.1.1), suppose I wanted to show the distribution of incomes in the US at different levels of education, e.g. 8 years, 12 years, 14, 16, etc. The x-axis of the graph would be years of education and the y-axis the level of income. Above each year, there would be a kernel density plot of incomes for that year, with each rotated to the right, so that the "height" (density) is horizontal distance. An additional bonus would be to be able to mark a few quantiles on each density and connect these between years. For example, mark the 25th, median and 75th percentile for each year and have these points connected. Is this possible in Stata? I fear I may not be explaining this well in words, in spite of the fact that it is very easy to show as a picture. Unfortunately Figure 3.1.1 is not available in the Google Preview version of Mostly Harmless Econometrics, so I have posted a simple hand-drawn version. Many thanks.

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
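Outside Stata, the geometry being described is easy to compute directly: estimate a density of income for each education level, then use horizontal coordinate = education + scale × density against a common income grid, which "rotates" each density 90 degrees. A rough Python sketch of the coordinate computation (the Gaussian kernel, bandwidth, and toy income values are my choices, not from the post; plotting is left to whatever package is at hand):

```python
import math

def gaussian_kde(sample, grid, bw):
    # Plain Gaussian kernel density estimate of `sample` on `grid`.
    n = len(sample)
    norm = n * bw * math.sqrt(2 * math.pi)
    return [sum(math.exp(-0.5 * ((y - v) / bw) ** 2) for v in sample) / norm
            for y in grid]

def rotated_profile(edu, incomes, grid, bw, scale=1.0):
    # One sideways density: x = edu + scale * density(y), y = income.
    dens = gaussian_kde(incomes, grid, bw)
    xs = [edu + scale * d for d in dens]
    return xs, list(grid)

grid = [i * 0.5 for i in range(41)]                  # income grid 0..20
xs, ys = rotated_profile(12, [8, 9, 10, 10, 11, 12], grid, bw=1.0, scale=5.0)
```

Plotting xs against ys, once per education level, gives the rotated stack of densities; the quantiles per level (for the bonus connecting lines) can come from any percentile routine.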
Tim Porter on Formal Homotopy Quantum Field Theories and 2-Groups

Posted by David Corfield

Guest post by Bruce Bartlett

I’d like to give something of a report-back on Tim Porter’s second talk at Barcelona, on Formal Homotopy Quantum Field Theories and 2-Groups (slides). Firstly let me say that this was the first time I had the pleasure of meeting Tim. If anyone at the Café would like to know, Tim is one of those curious and charming breeds of Englishmen who was born in Wales but occasionally lapses into an Irish accent, and whose constitution requires for its good upkeep a steady diet of fine European cuisine (especially seafood), regular cups of rooibos tea, a daily dollop of French, bird sightings (you live in Buenos Aires or Birmingham? Tim will tell you the magical birds you can see there!), and frequent screenings of The Two Ronnies. Is it Tim’s love of birds that caused him to title his gimungous pedagogical opus on cohomology, simplicial sets and crossed gadgetry, the Crossed Menagerie?

Tim gave the Friday morning talk at Barcelona. In the first hour, he began by saying that much of the material he had been wanting to cover had been touched on at various points in the conference - but that he wanted to highlight and point out various important constructions which are not as well-known as perhaps they should be. It was a very interesting talk, but I am not confident to write about it, because my simplicial skills are lagging behind somewhat (I seriously need a go at that menagerie). I’ll just mention the following topics which Tim felt important to stress: Kan complexes, the Moore complex of a simplicial group, decollages, crossed complexes (and T-complexes), the “W bar” construction, and the Puppe sequence. In his second, “presentation” talk, he presented the slides on Formal homotopy quantum field theories and 2-groups.
Formal homotopy quantum field theories (HQFT’s) were invented by Turaev, and they are essentially TQFT’s in a background space $B$, up to homotopy. Rodrigues showed that an $n$-dimensional HQFT can be regarded as a monoidal functor

$$Z : nCob(B) \rightarrow Vect$$

where $nCob(B)$ is the category whose objects are closed $(n-1)$-manifolds equipped with a map into $B$, and whose morphisms are cobordisms equipped with a map into $B$, considered up to homotopy (in $B$) fixing the boundary. So… a HQFT is midway between an “abstract” TQFT (with no background space) and a Stolz-Teichner style “smooth” TQFT embedded in $B$. That makes them an important thing to understand.

Tim reviewed classification results for these gizmos. For instance if $B$ is a $K(\pi, 1)$ for some group $\pi$, then Turaev showed that a 2d HQFT in $B$ is the same thing as a crossed $\pi$-algebra. My understanding is that a crossed $\pi$-algebra can be thought of as a Frobenius algebra object in $Rep \Lambda \pi$ - the category of representations of the loop groupoid of $\pi$. Is this correct? Similarly Brightwell and Turner showed that if $B$ is a $K(A, 2)$, then a 2d HQFT over $B$ is the same thing as a Frobenius algebra equipped with an action of $A$. Again, my understanding is that this is the same thing as a Frobenius algebra object in $Rep A$. Is this correct?

Then Tim spoke about his work with Turaev, on extending these results to all 2-types. We know that a 2-type corresponds algebraically to a crossed module, so one begins by fixing a crossed module $\mathcal{C} = (C, P, \partial)$ and working from there. The main theorem is that formal 2d HQFT’s over a 2-type represented by $\mathcal{C}$ correspond to Frobenius algebras equipped with an action of the 2-group $\mathcal{C}$ by automorphisms. The disclaimer “formal” is there to indicate that since they work directly with the 2-group $\mathcal{C}$ and a triangulation, they are not sure if they get out all HQFT’s in this way.
Though I don’t understand the details, based on my experience with TQFT’s my rough intuition is that you should indeed generically get out all HQFT’s in this way - except possibly for the non-semisimple ones, which are non-generic, and can’t be achieved from a triangulation construction… is that right? Anyhow, something which was very interesting for me here is how an algebra can be acted on by a 2-group. I hadn’t thought of that before.

Posted at June 24, 2008 4:57 PM UTC

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

So… a HQFT is midway between an “abstract” TQFT (with no background space) and a Stolz-Teichner style “smooth” TQFT embedded in $B$.

It might be noteworthy that decategorifying the category of cobordisms with homotopy classes of maps into a certain space we get cobordism rings for cobordisms with the corresponding structure. Oriented cobordisms for maps into $B SO$. Spin cobordisms for maps into $B Spin$. String cobordisms for maps into $B String$. And so on. This does play an important role for instance in the stuff that Hopkins talked about, though he didn’t seem to be aware of Turaev’s concept of HQFT.

Posted by: Urs Schreiber on June 25, 2008 7:06 AM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

I am not sure I have sorted out how the following two concepts are meant to be related in the context of HQFT: given a manifold $X$ (for instance a cobordism) and given an $n$-group $G$ with corresponding one-object $n$-groupoid $\mathbf{B}G$, there are two different types of $n$-functors to $\mathbf{B}G$ which one might consider.

1) For $Y \to X$ a surjective submersion and $Y^\bullet$ the corresponding Čech $n$-groupoid, $n$-functors $Y^\bullet \to \mathbf{B}G$ (or equivalently the corresponding simplicial maps, if you prefer).

2) For $T \subset X$ a triangulation of $X$, i.e.
an $n$-cell complex modeling $X$, and $[T]$ the corresponding $n$-category, $n$-functors $[T] \to \mathbf{B}G \,.$

Homotopy classes of the maps in 1) give classes of $G$-$n$-bundles on $X$, hence homotopy classes of maps from $X$ to $B |G|$. Whereas classes of the maps in 2) give classes of flat $G$-connections on $X$.

Posted by: Urs Schreiber on June 25, 2008 12:14 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

I’m not sure of the answer to Urs’ question. What I do know is that classically any triangulation gives rise to an open cover and hence to a simplicial sheaf (similar to the Čech n-groupoid). The best treatment I know of that sort of idea is probably Tibor Beke’s Higher Čech Theory (see http://faculty.uml.edu/tbeke/). The question then is, sort of, whether there is a ‘simplicial approximation theorem’ in this context. The fact that the manifold is locally contractible should be pencilled in somewhere so we can assume the open covers are Leray covers.

Posted by: Tim Porter on June 25, 2008 12:48 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

any triangulation gives rise to an open cover

By (maybe first passing to a dual triangulation and then) taking open neighbourhoods of all top-dimension cells and taking their disjoint union, right? Doing that, an $n$-functor/simplicial map from the Čech groupoid of the cover thus obtained to some $\mathbf{B}G$ is not the same as a decoration of the original triangulation by $\mathbf{B}G$, it seems to me. This is why I am not sure how I am supposed to think of this in the HQFT context. But I need to have a closer look at some of the references…

Posted by: Urs Schreiber on June 25, 2008 3:04 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

The classical construction was to use what is called the star open cover of the triangulation.
(I seem to remember it being in Spanier.) The nerve of that open cover is then ISOMORPHIC (if I remember it rightly) to the simplicial complex used for the original triangulation. There is a subsidiary result that given any open cover there is a triangulation ‘finer’ than it. (This assumes that we start with a manifold.) By that is meant that the star open cover of the triangulation is finer than the given open cover. The definition of star open cover uses the vertex star of each vertex, this being the union of the vertex and all interiors of all simplices of which it is a vertex. (Something like that..) The idea is discussed in sources which show that Čech cohomology and simplicial cohomology coincide. (Don’t trust my memory for the details!!!!!)

Posted by: Tim Porter on June 25, 2008 7:07 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

Thanks for the comment. I believe I follow what you say so far. But are you claiming also that functors from the triangulation classify $G$-bundles? Maybe it would help me to look at a simple example, the torus and the 2-sphere, maybe. Let’s pick the standard triangulation coming from starting with a square with sides identified and then cut in half diagonally to produce two triangles. Let also $G$ be just an ordinary group. Then $G$-colorings of the triangulation classify flat $G$-connections on that surface, but not $G$-bundles on that surface. Maybe you can describe for me in terms of this example what the statement used in HQFT would be? I might still be misunderstanding something.

Posted by: Urs Schreiber on June 25, 2008 7:18 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

I will check up on things once I am back home. I should point out that the decomposition of the torus that you suggest is not a valid one as a triangulation, as it has only one vertex.
The only simplicial complex with exactly one vertex is the zero simplex. (That should not disturb other things however as it is probably just a technicality.) I am a bit hazy about the flatness issue hence need to do a bit of ferreting around. It may be worth looking in Turaev’s HQFT papers which are on the web, for instance math/9910010. From my understanding of what he did, the key is a form of cellular approximation theory. Another important fact is that things are base point preserving, so the manifolds, taken as objects, have a chosen base point in each connected component, but the cobordisms with their characteristic maps are not thought of as being pointed. That is confusing but I think it just means that they are

As I said I will check up. I have the feeling that the objects are manifolds with flat connection as you said. One thing that I found confusing was that several authors think of the HQFTs as flat bundles on the base $B$, but we know a lot about $B$ if it is a $K(\pi,1)$, so why explore it like this? The degenerate case of TQFTs (i.e. with trivial base) tells you invariants of the manifolds and cobordisms, and that seems to me to be the more important aspect. I may of course be wrong!

Posted by: Tim Porter on June 26, 2008 8:04 AM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

I discussed this with Pietro Polesello who is also visiting Paris (and who gave a nice talk at Barcelona that no one has yet discussed; perhaps I will try to do so, as Bruce has done more than me!). After meeting accidentally over lunch, we discussed further possible interpretations of the morphism of crossed modules that classified a formal HQFT. He had some interesting ideas but they need to be worked out more fully so I will say nothing more than that. However the one point that emerged (and had been staring me in the face) was that in the work I was reporting on, the groups and crossed modules were discrete, not Lie, groups.
This seems to be the key observation that I had missed. Does that resolve the difficulty?

Posted by: Tim Porter on June 26, 2008 6:45 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

discrete not Lie

Ah, I see. Right, I should have been aware of that. Okay, so we expect that a 2-bundle with discrete 2-group has a unique connection, necessarily flat, and is uniquely characterized by that. Saying so, I must admit that I realize that I need to think more closely about the details of the truth of this statement. But David Roberts comes to the rescue. His work on 2-Covering Spaces should contain the answer to this question.

Posted by: Urs Schreiber on June 26, 2008 11:57 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

Okay, so we expect that a 2-bundle with discrete 2-group has a unique connection, necessarily flat, and is uniquely characterized by that. Saying so, I must admit that I realize that I need to think more closely about the details of the truth of this statement. But David Roberts comes to the rescue. His work on 2-Covering Spaces should contain the answer to this question.

As I have said before, one reason for my work on 2-covering spaces is to construct some 2-bundles globally (at the time I said explicitly, but that was probably the wrong word). For example, the universal 2-covering space - constructed using tricks from Urs’ and my paper and some other stuff already published by others. The question that came to me just yesterday was: “is there some sort of canonical connection on the universal 2-covering space?” For this to even have a hope of being true, I have to find out the answer to this question: “is there some sort of canonical connection on the universal 1-covering space?” This should be well known if it is true.
Presumably for discrete groups which have a finite dimensional smooth model for their classifying space* we can cook something up, but what about in general? (* this includes things like torsion-free discrete subgroups of Lie groups - consider some sort of double coset space. My notes on this are not where I am, so take this with a little grain of salt)

Posted by: David Roberts on June 27, 2008 8:02 AM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

to construct some 2-bundles globally

By the way, just for the record: I suppose you know that Toby Bartels gives a general abstract method for (re-)constructing a (global in this sense) 2-bundle from its cocycle in the proof of his prop. 22. This general abstract prescription was recently spelled out by Christoph Wockel in the proof of his prop. I.20 (with comments on how it relates to Toby’s prescription in remark I.24). I gather you want a less “by-gluing”-construction, but I thought I’d mention it anyway.

is there some sort of canonical connection on the universal 2-covering space?

For Lie 2-groups, you know that I have been claiming for a while now that there is – at least if you allow me to replace $B |G|$ by a rational approximation (but possibly you won’t allow me that :-): the $L_\infty$-algebraic notion of that universal connection is discussed in section 7.4 of $L_\infty$-connections, and the way to integrate these to fully-fledged 2-bundles with connection I describe in On nonabelian differential cohomology. But I haven’t really thought in detail about the case of discrete structure 2-groups. For those one would expect not only a canonical connection on the universal thing, but even a unique (and flat) connection on every single one of them. Isn’t your discussion of 2-covering spaces particularly geared towards finite 2-groups?
Posted by: Urs Schreiber on June 27, 2008 12:55 PM | Permalink | Reply to this

Re: Tim Porter on Formal Homotopy Quantum Field Theories and 2-groups

Was going to post this last night, but some piece of software somewhere was being silly.

I gather you want a less “by-gluing”-construction,

What I meant is that I want to construct some 2-bundles without using cocycle data at all. I realise what a stupid question this is: “is there some sort of canonical connection on the universal 1-covering space?” Since the fibre is zero-dimensional, the tangent space at a point on the covering space has an obvious splitting - the trivial one! It’s a bit trickier for 2-covering spaces since the fibres are not discrete groupoids, only weakly equivalent to them, and so the Lie-2-algebra the connection forms will take values in is not trivial, but equivalent to the trivial one. And I’m not restricted to finite 2-groups - the easiest example I can think of has fibre $\mathbf{B}\mathbb{Z}$.

Posted by: David Roberts on June 27, 2008 11:44 PM | Permalink | Reply to this
counting and primes

(Joined: Oct 19, 2005; Posts: 14)
So im having trouble with this code; basically all I need (I think) is some sort of solution to test if a number is prime...

import java.io.*;

public class prac5prime2try2 {
    // main method
    public static void main(String args[]) throws IOException {
        BufferedReader myInput = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("Enter a value: ");
        int n = Integer.parseInt(myInput.readLine());
        System.out.println("The number entered was " + n);
        // loop up to n, and check every number if it's a prime
        for (int i = 1; i <= n; i++) {
            if (isPrime(i)) { // this is the same as: if (isPrime(i) == true)
                System.out.println(i + " is a prime number");
            } else {
                System.out.println(i + " is not a prime number");
            }
        }
    }

    // ********* this is where i need help *********
    // Method to test if a number is prime
    private static boolean isPrime(int num) {
        if (num < 2) return false;
        return true; // incomplete: currently reports every number >= 2 as prime
    }
}

Srinivasa Raghavan (Ranch Hand; Joined: Sep 28, 2004; Posts: 1228):
Do you get any error on running this code, or do you want the logic to implement isPrime()?
[ November 03, 2005: Message edited by: Srinivasa Raghavan ]
Thanks & regards, Srini
MCP, SCJP-1.4, NCFM (Financial Markets), Oracle 9i - SQL (1Z0-007), ITIL Certified

(Ranch Hand; Joined: Oct 05, 2001; Posts: 1170):
lol. Checking for prime numbers is something that folks create algorithms around. It's a big deal to create an efficient check. But it's also a common thing in classes, I would expect. Try googling for an algorithm, or pay off some math students.

Scott Selikoff (Joined: Oct 23, 2005; Posts: 3697):
I'm guessing this is a homework assignment because in practice (as someone suggested) you would NEVER code this by hand, especially for large primes. There is a method BigInteger.isProbablePrime() which always returns false if the number is not a prime, and returns true if a number is likely prime. So you could: ... Since prime numbers are scarce compared to composite ones, this would have decent performance depending on how your complex check is written. Although if this is a homework assignment, using isProbablePrime() may not be allowed.
[ November 03, 2005: Message edited by: Scott Selikoff ]
My Blog: Down Home Country Coding with Scott Selikoff

Tom Blough (Ranch Hand; Joined: Jul 31, 2003; Posts: 263):
A prime number is a number evenly divisible only by itself and one. So, the easiest method to check for a prime number is to start dividing the number by all integers from two to the number. At each division, if there is a remainder, check the next number. If the division leaves no remainder, then you have found a factor of the number and it is not prime.

After a little more examination you'll find that you do not need to check all the integers less than the given number. You can stop checking at the square root of the given number. Take the number 16. 16 is 1x16, 2x8, 4x4, 8x2, 16x1. You'll notice that as soon as I reach the square root of 16, which is four, the factors are just reversals of pairs previously found.

To further optimize this operation, recognize that once you have divided by two, there is no need to divide by 4, 6, 8, 10... or any other even number. The same is true for the other integers: if the number is not divisible by 3, then you do not need to check multiples of 3, etc.

Following the above to its logical conclusion, you will quickly discover that you only need to check whether the given number is divisible by other prime numbers up to the square root of the given number.

Tom Blough
"Cum catapultae proscriptae erunt tum soli proscripti catapultas habebunt."
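Tom's recipe (stop at the square root; skip even divisors once 2 has been tested) translates directly into code. A short sketch in Python rather than the thread's Java, to keep it compact:

```python
def is_prime(num: int) -> bool:
    # Trial division as described above: any composite num has a
    # divisor no larger than sqrt(num), so stop testing there.
    if num < 2:
        return False
    if num < 4:
        return True            # 2 and 3 are prime
    if num % 2 == 0:
        return False           # testing 2 rules out every even divisor
    d = 3
    while d * d <= num:        # equivalent to d <= sqrt(num)
        if num % d == 0:
            return False
        d += 2                 # even divisors already excluded
    return True

primes_up_to_20 = [n for n in range(1, 21) if is_prime(n)]
# primes_up_to_20 == [2, 3, 5, 7, 11, 13, 17, 19]
```

The same body drops straight into a Java isPrime(int num) method; for very large numbers, the BigInteger.isProbablePrime route mentioned earlier in the thread is the practical choice.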
Grove Hall, MA Math Tutor

Find a Grove Hall, MA Math Tutor

...If you need a tutor for beginner to intermediate Italian, I am your man. As well as being a Spanish teacher, I am moderately fluent in Italian and do well tutoring beginners. If you need help preparing for standardized tests like the SAT Math, SAT Reading or SAT Writing, I am the tutor for you.
21 Subjects: including algebra 1, American history, vocabulary, grammar

...Computer Programming is the creation of computer programs! Everything that we do with computers, from Word to the internet, every page of the internet, games, etc. Everything on a computer or even a handheld device (cell phone, etc.) has a program on it.
19 Subjects: including discrete math, algebra 1, algebra 2, calculus

...I know what the test is like and can help teach study strategies that can prepare one for the types of questions they like to ask, while also developing general reading skills. I have taken the ACT and received an English score of 35. I know what the ACT is looking for in the English section and can help with study strategies to succeed in this portion of the exam.
20 Subjects: including algebra 1, ACT Math, SAT math, reading

...Don't hesitate to contact me now! *ATTENTION PROSPECTIVE MASSACHUSETTS TEACHERS: I CAN TUTOR YOU TO PASS THE FOLLOWING MTELS: *Communication and Literacy *Foundations of Reading *General Curriculum (Elementary Education 1-6) I taught 3rd grade at Weatherbee Elementary School. I substitute taught ...
30 Subjects: including algebra 2, statistics, ESL/ESOL, GRE

...My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. My strength is my ability to look at a challenging concept from different angles.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
Popular subsets in an intersecting family with a minimality condition

The following problem came from some joint research with Kevin Ford and Regis de la Breteche. We believe a confirmation of the question presented at the end of this post will allow us to sharpen our results.

We are considering families $F$ of sets which satisfy three properties. First, each family $F$ is intersecting; that is, for all $A_1, A_2\in F$, we have that $A_1\cap A_2$ is non-empty. Second, each family is finite, and each set in $F$ is finite as well, with the maximum cardinality of sets $A\in F$ called the size of $F$. And third, each $F$ satisfies the following minimality condition: if $A\in F$ and $A'$ is a proper, non-empty subset of $A$, then the family of sets $F \cup \{A'\} \setminus \{A\}$ is not intersecting.

It can be shown quite quickly that if $F$ has size $n$, then $|F|\le n^n$. It is easy to construct a great many such families. If one has a family $F$ of size $n$ and a set $B=\{ b_1, b_2, \dots, b_{n+1} \}$ that does not intersect any set of $F$, then one can create a new family $F'$ that contains the set $B$ and new sets of the form $A\cup \{b_i\}$, $1\le i \le n+1$, where each $b_i$ and each set $A$ is part of at least one set of this type. One can start from the trivial family $F=\{ \{a\}\}$ and build up many ($n$-uniform) families in this way. By appending each $b_i$ to as few sets as possible, one can obtain families of size $n$ with very few total sets (as few as $n+1$). By appending each $b_i$ to every set $A\in F$, one can obtain families of size $n$ with cardinality almost that of the maximal order $n^n$.

Our question is, in bare form, this: suppose a family of size $n$ is very large, close to the maximal order $n^n$; then must there exist an element or subset that is extremely popular, in the sense that it is contained in many more sets than the average?
Put more concretely: Is it true that given a family $F$ of size $n$, there exists a set $C$ with $|C| = k$ such that $$\left|\{ A \in F : C \subset A \} \right| \ge \frac{|F|}{n^{o(k)}}$$ as $n \to \infty$? If so, how good a bound can one obtain? Would $$\frac{|F|}{100^k}$$ work?

Our reason for believing this may be true comes from the appending construction mentioned above. If the appending procedure is applied several times, those elements/subsets belonging to the original $F$ will be very popular in the final family obtained.

The construction you give seems to provide a family of size $n$ with $n^{n-2}$ as an upper bound to the number of elements once $n > 5$. Do you have examples of $F$ of size $n$ with at least $n^{n-1}$ elements for $n > 3$? Gerhard "Ask Me About System Design" Paseman, 2011.08.31 – Gerhard Paseman Sep 1 '11 at 6:53

In fact, your construction gives more like $(e-1)n!$ elements, each of size $n$. Do you have any $F$ of size $n$ with cardinality greater than $(e-1)n!$? Gerhard "Ask Me About System Design" Paseman, 2011.09.01 – Gerhard Paseman Sep 1 '11 at 7:11

Gerhard, no, the large construction listed here is the largest that we have been able to detect (and it is a construction lifted from Erdos and Lovasz' paper in "Finite and Infinite Sets" if I recall correctly). – Joseph Vandehey Sep 1 '11 at 12:05
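The appending construction above is easy to experiment with. The following is a minimal Python sketch (the helper names `is_intersecting` and `append_step` are illustrative, not from the post) that builds the family obtained from the trivial family $F = \{\{a\}\}$ by appending each $b_i$ to as few sets as possible, and checks that the result is intersecting:

```python
from itertools import combinations

def is_intersecting(family):
    """Every pair of sets in the family must share an element."""
    return all(a & b for a, b in combinations(family, 2))

def append_step(family, b_elems, assign):
    """One appending step: B = set(b_elems) is disjoint from every set of
    `family`; `assign` lists (A, b) pairs with A in family and b in b_elems,
    covering every A and every b. The new family contains B and the sets
    A | {b} for each assigned pair."""
    new_family = {frozenset(b_elems)}
    for a, b in assign:
        new_family.add(a | {b})
    return new_family

# Start from the trivial family {{a}} and append B = {b1, b2} (here n = 1,
# so |B| = n + 1 = 2), attaching each b_i to as few sets as possible.
A = frozenset({"a"})
F2 = append_step({A}, {"b1", "b2"}, [(A, "b1"), (A, "b2")])
assert is_intersecting(F2) and len(F2) == 3
```

The resulting family has size 2 and only 3 sets, the sparse end of the construction; appending each $b_i$ to every set of a larger family instead drives the count toward the maximal $n^n$ order.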
Mike McIlveen on 01 Apr 10: Teaching the world mathematics for free. My blog contains intuitive explanations of mathematics topics ranging from elementary to undergraduate, and discusses the integration of computers in teaching math. Lots of tutorials: GeoGebra, Wordpress blogging, Geometer's Sketchpad, etc.

Maggie Verster on 28 Aug 09: Postings include tips on teaching math facts, and information about two math practice books, Two Plus Two Is Not Five: Easy Methods to Learn Addition and Subtraction and Five Times Five Is Not Ten: Make Multiplication Easy.

Maggie Verster on 28 Aug 09: Planet Infinity... My K.H.M.S. Math Class. Come and explore the beautiful world of Mathematics with me. Web 2.0 has powered us with tools and new methods to make teaching and learning an enjoyable process. Let's learn together!
Summary: On d'Alembert substitution
Computer Center of the Russian Academy of Science, Vavilova 40, Moscow 117967, Russia
e-mail: abramov@sms.ccas.msk.su

Let some homogeneous linear ordinary differential equation with coefficients in a differential field $F$ be given. If we know a nonzero solution $\psi$, then the order of the equation can be reduced by d'Alembert substitution $y = \psi \int v \, dx$, where $v$ is a new unknown function. In the situation when $\psi \in F$, after d'Alembert substitution an equation with coefficients in $F$ arises again. Let the obtained equation have a nonzero solution $\varphi \in F$; then it is possible to reduce the order of the equation again, and so on, until an equation without nonzero solutions in $F$ is obtained. If we can find solutions not only in $F$ but in some larger set $L$ as well ($L$ can be a field or a linear space), then we can build up a certain subspace $M$ (d'Alembertian subspace) of the space of all solutions of the original equation. Thus if we have algorithms $A_F$ and $A_L$ to search for the solutions in $F$ and $L$, then by incorporating d'Alembert substitution we
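For a concrete instance of the reduction step, the following SymPy sketch (an illustration of the substitution, not the author's algorithm) applies y = psi * Integral(v, x) with the known solution psi = exp(x) of y'' - y = 0, and confirms that a first-order equation in v remains:

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')

psi = sp.exp(x)                # known nonzero solution of y'' - y = 0
V = sp.Integral(v(x), x)       # unevaluated antiderivative of the new unknown v
y = psi * V                    # d'Alembert substitution y = psi * Integral(v, x)

# Apply the original operator y'' - y; the Integral terms cancel, leaving a
# first-order equation in v: exp(x) * (v'(x) + 2*v(x)) = 0.
residual = sp.expand(sp.diff(y, x, 2) - y)
print(residual)
```

Solving the reduced equation v' + 2v = 0 gives v = exp(-2x), and integrating back recovers the second independent solution exp(-x) of the original equation.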
MathGroup Archive: June 2012

Re: Splitting sums in mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg126854] Re: Splitting sums in mathematica
• From: "james.a.gordon1" <james.a.gordon1 at googlemail.com>
• Date: Wed, 13 Jun 2012 04:57:39 -0400 (EDT)
• References: <jqusrg$rnm$1@smc.vnet.net>

Hi Koopman,

Thanks for the quick reply; I believe your suggestions solve some of my problem. To further clarify my question, I have an equation that looks like this:

F = (X - Y1 - Y2 - Y3 + Z)^2
X = Sum_p Sum_q Sum_r [a_p*a_q*a_r]
Yi = Sum_p Sum_q!=p b_p*b_q, where different Y_i have different terms in the sums
Z = Sum_p Sum_q!=p Sum_r!=q!=p

In other words, I end up with a large number of terms, each of which can have as many as 6 nested sums, all of which need to be expanded, resulting in hundreds of terms. I want Mathematica to algebraically expand these nested sums for me rather than me having to type up the expanded form myself. I believe with your solution I would still need to manually keep track of the indices for each expansion. Once all the nested sums are expanded, I will make some substitutions of the a, a^2, a^3...a^6 terms, then simplify. At no point do I need to calculate this with real values; I just need an algebraic expression.

Thanks again
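One way to sidestep the index bookkeeping, sketched here in Python/SymPy rather than Mathematica (so only an analogy to the poster's setup), is to write the restricted sums out over a small explicit index range and let ordinary polynomial expansion do the splitting. The identity behind splitting off the q != p part is Sum_p Sum_{q != p} b_p b_q = (Sum_p b_p)^2 - Sum_p b_p^2:

```python
import sympy as sp

n = 4
b = sp.symbols(f'b1:{n + 1}')      # b1..b4 stand in for the summand b_p

S1 = sum(b)                        # Sum_p b_p
S2 = sum(t**2 for t in b)          # Sum_p b_p^2

# Restricted double sum Sum_p Sum_{q != p} b_p * b_q, written out explicitly.
restricted = sum(b[p] * b[q] for p in range(n) for q in range(n) if q != p)

# Splitting identity: the off-diagonal sum is the full square minus the diagonal.
assert sp.expand(restricted - (S1**2 - S2)) == 0

# The same mechanical expansion handles squared combinations like (X - Y)^2.
F = sp.expand((S1 - restricted)**2)
```

For symbolic upper limits one would keep `sp.Sum` objects instead, but the fixed-range version already verifies the algebra, and the later substitutions of powers of a can then be done with `subs`.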
Plasmonic Coupling in Three-Dimensional Au Nanoparticle Assemblies Fabricated by Anodic Aluminum Oxide Templates

Journal of Nanomaterials, Volume 2013 (2013), Article ID 823729, 6 pages
Research Article

^1Department of Physics, Ewha Womans University, Seoul 120-750, Republic of Korea
^2Department of Mechanical Engineering, Kyung Hee University, Yongin 446-701, Republic of Korea

Received 29 August 2013; Accepted 1 October 2013
Academic Editor: Joondong Kim

Copyright © 2013 Ahrum Sohn et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We investigated the optical properties of three-dimensional (3D) assemblies of Au nanoparticles (NPs), which were fabricated by dewetting of thin Au layers on anodic aluminum oxide (AAO) templates. The NP assembly had a hexagonal array of repeated multiparticle structures, each consisting of six trimers on the AAO surface and one large NP in the AAO pore (pore-NP). We performed finite-difference time-domain simulations to explain the optical response of the NP assemblies and compared the calculation results with experimental data. Such complementary studies clearly revealed how the plasmonic coupling between the constituent NPs influenced the spectral response of our NP assemblies. In particular, comparison of the assemblies with and without pore-NPs suggested that strong plasmonic coupling between the trimers and the pore-NP significantly affected the spectra and the field distribution of the NP assemblies. Plasmonic multi-NP assemblies could provide new platforms for realizing novel optoelectronic devices.

1. Introduction

Nanoplasmonics has surfaced as one of the most interesting and important research topics in recent days.
The significant increase of the optical extinction and the concomitant subwavelength confinement of the electromagnetic field have enabled the exploration of plasmonic nanostructures in various fields, including high-performance light-emitting devices, high-efficiency solar cells, and highly sensitive molecular sensors (e.g., surface-enhanced Raman scattering, SERS) [1–18]. The optical spectra can vary widely depending on the material and size of the metal nanostructures [1–10]. There has been growing research interest in nanoparticle (NP) assemblies, whose optical response can be modulated by controlling the interparticle gap separation and the spatial arrangement of the NPs. In this regard, in three-dimensional (3D) NP assemblies, where the NPs do not lie on a single plane and some of them sit above or below the plane containing the others, multiple plasmonic couplings are expected to give rise to hybridized plasmon modes [1]. Fabrication of artificial nanostructures, however, usually requires high-cost and complicated top-down processes [1–8, 18]. Thus, it is rather difficult to find many examples of experimental studies of 3D NP assemblies. In this work, we investigated how plasmonic coupling influences the optical properties of 3D multi-NP assemblies prepared via honeycomb-shaped pore-array templates, consisting of an NP-trimer array and an additional NP in each pore. Experimental optical spectra showed characteristic dips in the visible range, implying excitation of plasmonic modes. Calculation studies clearly revealed that the 3D arrangement induces unique coupling between the constituent NPs. The NP in the pore weakens the interaction between adjacent NP trimers and strongly confines the light at the gap between the trimer and the NP in the pore. Such hybridized plasmon modes dominate the overall optical spectra of our 3D NP assemblies.

2. Experimental

Anodic aluminum oxide (AAO) templates with hexagonally arrayed nanopores were used to fabricate the 3D plasmonic nanostructures.
The regular array of nanopores (diameter: 72 nm) formed honeycomb structures with an average pore-to-pore distance of 100 nm. The AAO thickness (70 nm) was controlled by the duration of the second anodization process. Thin Au layers (thickness: 20 nm) were coated on top of the AAO surface, and the samples were then annealed for 3 hours at 600°C to produce Au NPs. At such high temperatures, the Au thin films break up and agglomerated NPs are formed. These procedures provide a low-cost, large-area fabrication technique for the 3D plasmonic structures; details can be found in recent publications [9, 19]. The microstructures of the samples were examined by a scanning electron microscope (SEM; JEOL JSM-7401F). Optical reflectance spectra were obtained with a spectrophotometer (JASCO). To understand the optical properties of the NP assemblies, we used the finite-difference time-domain (FDTD) method, which enabled us to solve the Maxwell equations numerically and to obtain real-space electric field distributions. The samples were modeled in FDTD (Lumerical FDTD Solutions) as periodic NP arrays with AAO templates. The array had hexagonal symmetry due to the honeycomb structure of the AAO. A properly chosen unit cell was embedded in air, with perfectly matched layers on the top and bottom and periodic boundary conditions at the sidewalls. The optical reflectance spectra were obtained by measuring the amount of power flowing into and out of the sample surfaces.

3. Results and Discussion

Figures 1(a) and 1(b) show schematic diagrams and SEM images of a typical NP assembly prepared by AAO templates. The Au NPs belong to two groups, as illustrated in Figure 1(a). The first group corresponds to a set of trimers located on top of the AAO surface; the average diameter of the NPs in the trimers is 25 nm. The trimers form an array with hexagonal symmetry due to the characteristic morphology of the AAO [9, 19].
The second group contains large-sized NPs located in the nanopores of the template. These NPs also have hexagonal symmetry, and their average diameter is 60 nm. The average diameter and average depth of the nanopores in the AAO are 72 nm and 60 nm, respectively. The thickness of the AAO is 70 nm, and a nonoxidized aluminum layer remains under the AAO layer. Figures 2(a) and 2(b) show the top-view and cross-sectional diagrams of our NP assembly used for the FDTD simulations, respectively. As described above, two kinds of NPs are present in the assembly. The area depicted in Figure 2(a) represents the unit cell of our samples. In the simulation, the light polarization was along one in-plane axis. To clarify the role of the NP in the nanopore, simulations were also performed for identical structures without the large NPs in the pores. Hereafter, the large NP in the pore will be called a pore-NP. Figures 3(a) and 3(b) show experimental and simulated optical reflectance spectra, respectively. In the calculation, the following three types of NP assemblies are compared, as shown in Figure 3(b): trimers only, pore-NPs only, and trimers with pore-NPs. The results in Figure 3(b) are obtained for linearly polarized light along one in-plane axis; the simulation data for the orthogonal polarization are almost identical. The trimers have trigonal symmetry, and the plasmon hybridization does not have polarization dependence [6]. The calculation reveals that all the NP assemblies have similar optical spectra, with broad dips in the wavelength range from 400 nm to 600 nm. Although there is an overall shift of the spectra, the experimental result looks similar to the calculation results. Thus, the shape of the spectra alone cannot clarify the physical origin of the optical characteristics. To achieve a better understanding of the 3D arrangement of NPs, the optical spectra of an isolated heptamer consisting of seven Au NPs of identical size were studied.
The six outer NPs form a hexagon (oNPs), and the last one is located at the center of the hexagon (cNP). The diameter of all the NPs is 100 nm, and the gap between the oNPs is 10 nm. The calculation results show double peaks in the extinction cross-section, as shown in Figure 4(a). At the local maxima of the extinction spectra, the electric field components along the light polarization direction have the same sign for all the NPs. In contrast, the field components of some NPs have the opposite sign at the local minima of the spectra. It is known that the bonding (superradiant) mode and the antibonding (subradiant) modes are formed in the former and latter cases, respectively [1–4]. Figure 4(a) also shows the extinction spectra of heptamers whose cNP is moved upward or downward from the plane containing the other six NPs, as illustrated in Figures 4(b) and 4(c). The double peaks, the signature of the Fano resonance, become less noticeable as the vertical displacement of the central NP is increased. This means that increasing the cNP's vertical displacement weakens the coupling between the oNPs and the cNP. Thus, the result in Figure 4(a) clearly shows that the cNP plays a key role in the Fano resonance [2]. It should be noted that the two peaks exhibit distinct dependence on the vertical movement of the cNP. The position of the short-wavelength peak does not change much, and its intensity is slightly increased, since the interaction between the oNPs is much stronger than that between the cNP and the oNPs. In contrast, the long-wavelength peak moves toward the short-wavelength region and its intensity is reduced, because in that mode the interaction between the cNP and the oNPs is stronger than that between the oNPs. This suggests that the dipole interaction between the constituent NPs depends not only on the interparticle spacing but also on the direction between the dipoles in the 3D configuration.
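The direction dependence can be made quantitative with the textbook point-dipole interaction energy, U proportional to [p1·p2 - 3(p1·n)(p2·n)]/r^3, with n the unit separation vector. The sketch below (a generic electrostatics illustration, not the paper's FDTD model; prefactors are dropped) shows that two parallel dipoles interact twice as strongly, and with opposite sign, when aligned head-to-tail along the separation axis as when placed side by side:

```python
import numpy as np

def dipole_energy(p1, p2, r_vec):
    """Point-dipole interaction energy with the 1/(4*pi*eps0) prefactor dropped:
    U = [p1.p2 - 3 (p1.n)(p2.n)] / r^3, where n is the unit separation vector."""
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    return (p1 @ p2 - 3 * (p1 @ n) * (p2 @ n)) / r**3

p = np.array([1.0, 0.0, 0.0])  # both dipoles along the polarization direction

# Separation parallel to p (head-to-tail) versus perpendicular to p (side by side).
U_head_to_tail = dipole_energy(p, p, np.array([10.0, 0.0, 0.0]))
U_side_by_side = dipole_energy(p, p, np.array([0.0, 10.0, 0.0]))

print(U_head_to_tail, U_side_by_side)  # -2/r^3 (attractive) versus +1/r^3 (repulsive)
```

This simple energy already reproduces the qualitative trend in the text: NP pairs lying along the polarization couple strongly, while pairs perpendicular to it couple weakly and with opposite sign.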
The results in Figure 4(a) show that the 3D arrangement of NPs can provide an alternative way to modulate the optical spectral response of a plasmonic NP assembly. The considered NPs have somewhat simple structures, but this study gives us insight into understanding more complicated 3D NP assemblies. If the oNPs in the heptamer are replaced with trimers and the cNP is moved along the vertical direction from the plane containing the oNPs, then such a 3D NP structure will look similar to our NP assembly (see Figures 1 and 2). In addition, our 3D NP assembly consists of multisized NPs. The size of the pore-NP (diameter: 60 nm) is larger than that of the NPs in the trimers (diameter: 25 nm). The degree of interaction between NPs depends on both the interparticle spacing and the NP diameter; a large NP can enhance the coupling strength due to its large dipole moment. Our NP assembly forms a hexagonal array with the aid of the periodic AAO template. Thus, NPs can interact with NPs in nearby unit cells as well as with NPs in the same unit cell. The dielectric constant of AAO (alumina) is somewhat large, and hence the electric field due to the pore-NP is confined in the nanopore. As a result, the interaction between neighboring pore-NPs is not strong enough to cause serious array effects. In contrast, the trimers in adjacent unit cells are located so close that they can induce notable coupling. Figures 5(a) and 5(b) show the top-view electric field distributions of the NP assemblies with and without pore-NPs, respectively (wavelength of incident light: 633 nm). The field map is obtained at the plane containing the centers of the NPs in the trimers. Strong field confinement is clearly seen at the gaps between the NPs of the trimers. In particular, the dipole-dipole interaction is much stronger between NPs located along the light polarization direction [1].
At the gaps between NPs placed perpendicular to the light polarization, the dipoles of neighboring NPs are oppositely directed, and hence the superposed field intensity is very small. Field enhancement is also seen between neighboring trimers placed along the polarization direction. The existence of pore-NPs drastically influences the field distribution: the overall field intensity in Figure 5(a) is smaller than that in Figure 5(b). This indicates that the plasmonic coupling between NPs becomes weak due to the pore-NPs. Brandl et al. theoretically investigated the hybridized plasmonic modes of trimers and predicted multipolar resonances [6]. Alegret et al. fabricated Ag NP trimers using electron beam lithography and experimentally observed the hybridized plasmon modes [7]. In both of these studies, in-plane symmetry-adapted coordinates (SACs) for the allowed dipolar modes of trimers can be found. The in-plane SACs are classified into two types: one with vanishing total dipole moment and the other with a finite moment. The latter is known to dominate the optical response and belongs to a degenerate representation of the trimer's point group [7]. These SACs are twofold degenerate and are called bonding (low-energy) and antibonding (high-energy) dipolar oscillations. The dipole configuration in Figure 5(a) can be found among the SACs, but that in Figure 5(b) cannot. This indicates that the trimers on pore-NPs behave more like isolated trimers, which is well explained by the reduced field intensity around the trimers, as mentioned above. The NP assemblies without pore-NPs, that is, hexagonal arrays of trimers, exhibit plasmon hybridization modes quite different from those of the isolated trimer, due to the strong coupling between neighboring trimers. The cross-sectional field distributions of the NP assemblies with and without pore-NPs are shown in Figures 6(a) and 6(b), respectively.
As discussed above, the near field around the pore-NP is strongly confined in the AAO pore due to the large dielectric constant of the AAO. Hot spots can be found at the gap between the trimer and the pore-NP as well as in the vicinity of the pore-NP. In the NP assembly without pore-NPs, a strong field forms at the gaps between adjacent trimers. Comparison of these two distributions shows that the pore-NP weakens the coupling between trimers, as expected from Figures 5(a) and 5(b). The pore-NP is larger than the NPs in the trimers and produces a strong field nearby. The dipole field from the pore-NP is oppositely directed to that from the trimers; as a result, the plasmonic coupling between the trimers is drastically suppressed. This explains why the reflectance spectrum of our 3D NP assembly looks more similar to that of the pore-NP array than to that of the trimer array.

4. Conclusions

We investigated the optical properties of 3D Au NP assemblies fabricated with the aid of AAO templates. The unit cell of the NP assembly consisted of six trimers on the AAO surface forming a hexagonal shape and one large NP in the AAO pore (pore-NP) under the center of each hexagon. The optical reflectance spectra showed broad dips in the visible range, indicating excitation of resonant plasmonic modes. In order to clarify the plasmonic coupling, we performed FDTD simulations for several NP assemblies, including heptamers (2D and 3D), trimer arrays, and pore-NP arrays. All these results showed that the direction of the dipoles in each NP, as well as the interparticle gap spacing, can influence the optical spectra of the 3D NP assemblies. The plasmonic coupling between the trimers and the pore-NP dominantly affected the optical properties and the field distribution of our 3D NP assemblies.
Acknowledgments

This work was supported by the Pioneer Research Center Program (2010-0002231) through the National Research Foundation of Korea Grant and the New & Renewable Energy Technology Development Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) Grant (20123010010160).

References

1. N. J. Halas, S. Lal, W. Chang, S. Link, and P. Nordlander, “Plasmons in strongly coupled metallic nanostructures,” Chemical Reviews, vol. 111, no. 6, pp. 3913–3961, 2011.
2. M. Hentschel, M. Saliba, R. Vogelgesang, H. Giessen, A. P. Alivisatos, and N. Liu, “Transition from isolated to collective modes in plasmonic oligomers,” Nano Letters, vol. 10, no. 7, pp. 2721–2726, 2010.
3. K. Bao, N. A. Mirin, and P. Nordlander, “Fano resonances in planar silver nanosphere clusters,” Applied Physics A, vol. 100, no. 2, pp. 333–339, 2010.
4. B. Luk'yanchuk, N. I. Zheludev, S. A. Maier et al., “The Fano resonance in plasmonic nanostructures and metamaterials,” Nature Materials, vol. 9, no. 9, pp. 707–715, 2010.
5. P. K. Jain and M. A. El-Sayed, “Plasmonic coupling in noble metal nanostructures,” Chemical Physics Letters, vol. 487, no. 4–6, pp. 153–164, 2010.
6. D. W. Brandl, N. A. Mirin, and P. Nordlander, “Plasmon modes of nanosphere trimers and quadrumers,” Journal of Physical Chemistry B, vol. 110, no. 25, pp. 12302–12310, 2006.
7. J. Alegret, T. Rindzevicius, T. Pakizeh, Y. Alaverdyan, L. Gunnarsson, and M. Käll, “Plasmonic properties of silver trimers with trigonal symmetry fabricated by electron-beam lithography,” Journal of Physical Chemistry C, vol. 112, no. 37, pp. 14313–14317, 2008.
8. P. Nordlander and C. Oubre, “Plasmon hybridization in nanoparticle dimers,” Nano Letters, vol. 4, no. 5, pp. 899–903, 2004.
9. S. Hong, T. Kang, D. Choi, Y. Choi, and L. P. Lee, “Self-assembled three-dimensional nanocrown array,” ACS Nano, vol. 6, no. 7, pp. 5803–5808, 2012.
10. A. Bouhelier, M. R. Beversluis, and L. Novotny, “Characterization of nanoplasmonic structures by locally excited photoluminescence,” Applied Physics Letters, vol. 83, no. 24, pp. 5041–5043, 2003.
11. M. Gwon, E. Lee, D. Kim, K. Yee, M. J. Lee, and Y. S. Kim, “Surface-plasmon-enhanced visible-light emission of ZnO/Ag grating structures,” Optics Express, vol. 19, no. 7, pp. 5895–5901, 2011.
12. M. Kwon, J. Kim, B. Kim et al., “Surface-plasmon-enhanced light-emitting diodes,” Advanced Materials, vol. 20, no. 7, pp. 1253–1257, 2008.
13. H. A. Atwater and A. Polman, “Plasmonics for improved photovoltaic devices,” Nature Materials, vol. 9, no. 3, pp. 205–213, 2010.
14. F. J. Beck, A. Polman, and K. R. Catchpole, “Tunable light trapping for solar cells using localized surface plasmons,” Journal of Applied Physics, vol. 105, no. 11, Article ID 114310, 2009.
15. S. Li, M. L. Pedano, S. Chang, C. A. Mirkin, and G. C. Schatz, “Gap structure effects on surface-enhanced Raman scattering intensities for gold gapped rods,” Nano Letters, vol. 10, no. 5, pp. 1722–1727, 2010.
16. D. Lim, K. Jeon, J. Hwang et al., “Highly uniform and reproducible surface-enhanced Raman scattering from DNA-tailorable nanoparticles with 1-nm interior gap,” Nature Nanotechnology, vol. 6, no. 7, pp. 452–460, 2011.
17. N. Liu, M. L. Tang, M. Hentschel, H. Giessen, and A. P. Alivisatos, “Nanoantenna-enhanced gas sensing in a single tailored nanofocus,” Nature Materials, vol. 10, no. 8, pp. 631–636, 2011.
18. J. Kim, Y. U. Lee, B. Kang et al., “Fabrication of polarization-dependent reflective metamaterial by focused ion beam milling,” Nanotechnology, vol. 24, no. 1, Article ID 015306, 2013.
19. D. Choi, Y. Choi, S. Hong, T. Kang, and L. P. Lee, “Self-organized hexagonal-nanopore SERS array,” Small, vol. 6, no. 16, pp. 1741–1744, 2010.
Reachability analysis of Petri nets using symmetries
Results 11 - 20 of 33

Cited by 17 (1 self): Symmetries are inherent in systems that consist of several interchangeable objects or components. When reasoning about such systems, big computational savings can be obtained if the presence of symmetries is recognized. In earlier work, symmetries in constraint satisfaction problems have been handled by introducing symmetry-breaking constraints.

- Acta Informatica, 1997. Cited by 15 (5 self): A definition of Petri net symmetries is given and an algorithm is introduced, which computes these symmetries. Then three examples are given of how algorithms from different fields of Petri net analysis can be improved using symmetries, namely computation of reachability graphs, semipositive place invariants and structural deadlocks.

- 1996. Cited by 15 (1 self): Validation of industrial designs is becoming more challenging as technology advances and demand for higher performance increases.
One of the most suitable debugging aids is automatic formal verification. Unlike simulation, which tests behaviors under a specific execution, automatic formal verification tests behaviors under all possible executions of a system. Therefore, it is able to detect errors that cannot be reliably repeated using simulation. However, automatic formal verification is limited by the state explosion problem. The number of states for practical systems is often too large to check exhaustively within the limited time and memory that is available. Existing solutions have widened the range of verifiable systems, but they are either insufficient or hard to use. This thesis presents several techniques for reducing the number of states that are examined in automatic formal verification. These techniques have been evaluated on high-level descriptions of industrial designs.

- 1996. Cited by 14 (3 self): The goal of net reduction is to increase the effectiveness of Petri-net-based real-time program analysis. Petri-net-based analysis, like all reachability-based methods, suffers from the state explosion problem. Petri net reduction is one key method for combating this problem. In this paper, we extend several rules for the reduction of ordinary Petri nets to work with time Petri nets. We introduce a notion of equivalence among time Petri nets, and prove that our reduction rules yield equivalent nets. This notion of equivalence guarantees that crucial timing and concurrency properties are preserved.
- 1998. Cited by 10 (0 self): We present a fully automatic framework for identifying symmetries in structural descriptions of digital circuits and CTL* formulas and using them in a model checker. We show how the set of sub-formulas of a formula can be partitioned into equivalence classes so that truth values for only one sub-formula in any class need be evaluated for model checking. We unify and extend the theories developed by Clarke et al. [CEFJ96] and Emerson and Sistla [ES96] for symmetries in Kripke structures. We formalize the notion of structural symmetries in net-list descriptions of digital circuits and CTL* formulas. We show how they relate to symmetries in the corresponding Kripke structures. We also show how such symmetries can automatically be extracted by constructing a suitable directed labeled graph and computing its automorphism group. We present a novel fast algorithm for solving the graph automorphism problem for directed labeled graphs.

- 2003.
Cited by 8 (1 self) Add to MetaCart The symmetry reduction method is a technique for alleviating the combinatorial explosion problem arising in the state space analysis of concurrent systems. This thesis studies various issues involved in the method. The focus is on systems modeled with Petri nets and similar formalisms, such as the MurĪ description language. For place/transition nets, the computational complexity of the sub-tasks involved in the method is established. The problems of finding the symmetries of a net, comparing whether two markings are equivalent under the symmetries, producing canonical representatives for markings, and deciding whether a marking symmetrically covers another are classified to well-known complexity classes. New algorithms for the central task of producing canonical representatives for markings are presented. The algorithms apply and combine techniques from computational group theory and from the algorithms - In Tools and Algorithms for the Construction and Analysis of Systems; 6th International Conference, TACAS 2000, S. Graf and M. Schwartzbach, Eds. Lecture Notes in Computer Science , 1999 "... . We present three methods for the integration of symmetries into reachability analysis. Two of them lead to perfect reduction but their runtime depends on the symmetry structure. The third one works always fast but does not always yield perfect reduction. Keywords: (Theory) Computer tools for n ..." Cited by 7 (1 self) Add to MetaCart . We present three methods for the integration of symmetries into reachability analysis. Two of them lead to perfect reduction but their runtime depends on the symmetry structure. The third one works always fast but does not always yield perfect reduction. Keywords: (Theory) Computer tools for nets 1 Introduction Symmetric structure yields symmetric behavior. Thus, symmetries can be employed to reduce the size of reachability graphs. 
Instead of storing all states, only (representatives of) equivalence classes of states are stored. There are two major problems that need to be solved in the context of symmetries. Before starting reachability graph generation, we need to investigate the symmetries of the system. During graph generation, we need to decide repeatedly whether for a (recently generated) state an equivalent one has been explored earlier. In the context of high level Petri nets, we can use operations and relations of the color sets to describe the symmetries - In Proc. Seventh Internat. Workshop on Software Specification and Design , 1993 "... We propose to extend existing Petri-net-based tools for concurrency analysis to real-time analysis. The goal is to create a fully automated system, which starts from code in a higher level language for real-time programming, and answers programmers' queries about timing properties of the code. The k ..." Cited by 7 (4 self) Add to MetaCart We propose to extend existing Petri-net-based tools for concurrency analysis to real-time analysis. The goal is to create a fully automated system, which starts from code in a higher level language for real-time programming, and answers programmers' queries about timing properties of the code. The key difficulty with all reachability-based approaches is that the state space quickly becomes intractably large. To circumvent this state explosion problem, we propose using a combination of several heuristics for model reduction and state space reduction that have been effective for untimed concurrency analysis. In: Proceedings of the Seventh International Workshop on Software Specification and Design, pp. 56--60, December 1993, IEEE Computer Society Press. 1 Introduction The analysis of real-time software is very difficult. Indeed, the activities of design, implementation and testing are costly and complex even for traditional software, considerably more costly and complex for untimed co... - In Proc. 1994 Internat. 
Sympos. on Software Testing and Analysis , 1994 "... We present a first report on our PARTS toolset for the automated static analysis of real-time systems. The PARTS toolset is based upon a timed extension of Petri nets. Our simple time Petri nets or STP nets are specifically aimed at facilitating real-time analysis. Our analysis approach uses the sta ..." Cited by 6 (5 self) Add to MetaCart We present a first report on our PARTS toolset for the automated static analysis of real-time systems. The PARTS toolset is based upon a timed extension of Petri nets. Our simple time Petri nets or STP nets are specifically aimed at facilitating real-time analysis. Our analysis approach uses the state space of an STP net in order to answer queries about the concurrency and timing behavior of the corresponding system. An attractive feature of STP nets is that they support a variety of techniques for controlling the number of states that must be explicitly enumerated. These techniques were originally defined for the analysis of concurrency properties of untimed systems, and in this paper we discuss the extension of each to the timed domain. We also report on some preliminary experimental results that we obtained by running our toolset on examples of real-time systems. In: Proceedings of the 1994 Internatinal Symposium on Software Testing and Analysis (ISSTA '94), pp. 228--239, August 1...
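The symmetry-reduction idea that recurs in the abstracts above — during reachability analysis, store only one representative per equivalence class of states instead of every state — can be sketched in a few lines. The toy system below (a ring of identical two-state processes, with rotation as the symmetry group and single bit-flips as the transition relation) is an invented illustration, not taken from any of the cited papers:

```python
from collections import deque

def canonical(state):
    """Canonical representative of a state under rotational symmetry:
    the lexicographically smallest rotation of the tuple."""
    n = len(state)
    return min(tuple(state[i:] + state[:i]) for i in range(n))

def successors(state):
    """Toy transition relation: any single process may flip its bit
    (a stand-in for a real firing rule in a Petri net)."""
    for i in range(len(state)):
        yield state[:i] + (1 - state[i],) + state[i + 1:]

def reachable(initial):
    """BFS that stores only one representative per symmetry class,
    as in symmetry-reduced reachability analysis."""
    seen = {canonical(initial)}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            rep = canonical(t)
            if rep not in seen:  # an equivalent state was not explored earlier
                seen.add(rep)
                queue.append(t)
    return seen

# A ring of 4 identical processes: 16 raw states, but only 6 symmetry
# classes need to be stored.
classes = reachable((0, 0, 0, 0))
```

The two sub-problems named in the abstracts appear directly in the sketch: `canonical` is the "canonical representative" computation (cheap here because the group is tiny; the hard part in practice), and the `rep not in seen` test is the repeated check for a previously explored equivalent state.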
Introductory Statistics Education and the National Science Foundation Megan R. Hall and Ginger Holmes Rowell Middle Tennessee State University Journal of Statistics Education Volume 16, Number 2 (2008), www.amstat.org/publications/jse/v16n2/rowell1.html Copyright © 2008 by Megan R. Hall and Ginger Holmes Rowell all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor. Key Words: Introductory Statistics; Curriculum Guidelines; Teaching Materials; Grant Projects. This paper describes 27 National Science Foundation supported grant projects that have innovations designed to improve teaching and learning in introductory statistics courses. The characteristics of these projects are compared with the six recommendations given in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) College Report 2005 for teaching an introductory course in statistics. Through this analysis, we are able to see how NSF-supported introductory statistics education projects during the last decade achieve the GAISE ideals. Thus, materials developed from many of these projects provide resources for first steps in implementing GAISE recommendations. 1. Introduction For many years, statistics educators have been concerned with reforming undergraduate education, especially the introductory course in statistics. Throughout this article, introductory statistics refers to Joan Garfield’s definition: the "non-calculus based, often terminal, introductory applied statistics course" for students not majoring in the subject (Garfield 2000, p.2). The latest efforts to address this first course have evolved for some time and have resulted in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) College Report (2005), which sets forth six recommendations for teaching the introductory course. 
Likewise, the National Science Foundation (NSF) has addressed undergraduate statistics education by funding many projects aligned with statistics education standards; George Cobb (1993) reviewed 12 such projects, lending inspiration to this report. Following Cobb’s lead, this article reviews the GAISE recommendations and supporting literature, describes the NSF programs used to support this reform, and examines 27 NSF projects, funded from 1993 to 2004, which address the introductory statistics course. By comparing these projects with the GAISE recommendations, we are able to show how NSF has been supporting GAISE principles over the past decade. 2. Guidelines for the Introductory Statistics Course 2.1 Evolution of GAISE As statistics has made its way into the undergraduate curriculum over the past century, the introductory course has undergone numerous changes. Always at the forefront of reform is the effort to improve teaching and student learning in this course (GAISE 2005). In the early 1990’s, George Cobb organized a focus group to set up guidelines for teaching this course. This group produced a paper called "Teaching Statistics" that set forth three recommendations: 1) emphasize statistical thinking; 2) more data and concepts, less theory and fewer recipes; and 3) foster active learning (Cobb 1992). Toward the end of the decade, the launching of the Undergraduate Statistics Education Initiative (USEI) drew more attention to the introductory course through a paper calling for increased attention to statistical thinking. That article also reported that teachers of statistics were already increasing their use of technology and active learning in the classroom (Garfield, Hogg, Schau, Whittinghill 2002). These publications are just two examples of the work done in statistics education reform that has helped lead to the production of the Guidelines for Assessment and Instruction in Statistics Education (GAISE) College Report (2005). 
2.2 GAISE Recommendations The GAISE College Report (2005) was developed by a group of statisticians/statistics educators with funding from the American Statistical Association (ASA). On May 17, 2005, the ASA approved this document, which provides six primary recommendations for teaching introductory statistics: 1) emphasize statistical literacy and develop statistical thinking; 2) use real data; 3) stress conceptual understanding rather than mere knowledge of procedures; 4) foster active learning in the classroom; 5) use technology for developing conceptual understanding and analyzing data; and 6) use assessments to improve and evaluate student learning. 2.2.1 Emphasize Statistical Literacy and Develop Statistical Thinking The common thread throughout introductory statistics education reform efforts is the emphasis on statistical thinking and literacy (Cobb 1992, Snee 1993, Garfield et al 2002, MAA 2004). Instructors of introductory level courses want their students to understand statistical terms, symbols, graphs, and fundamental ideas, which the GAISE authors consider to be statistical literacy. Along with literacy, students in these courses should be able to think statistically, meaning they should understand the need for data, the importance of data production, the omnipresence of variability, and the quantification and explanation of variability (GAISE 2005). Rumsey (2002) adds to this definition the ability to make informed decisions, while Chance (2002) wants her students to see the big picture and think of statistics in terms of the whole process, rather than isolated techniques. Furthermore, since statistics are present everywhere in the media, it is important for citizens to be able to think critically about the information thrust upon them (Rumsey 2002; Sullivan 1993). 2.2.2 Use Real Data Because statistics is about understanding data (Hakeem 2001), students should have access to and experience with real data.
The use of real data in introductory statistics courses provides authenticity, helps address issues of data production and collection, gives real-life context to a problem, and can increase student interest in the course (GAISE 2005). There are three kinds of data which accomplish these goals, each with advantages and disadvantages. Class-generated data can provide meaningful connections for students because they participated in its production; unfortunately, it can also be toy-like and shallow. Archival data gives students experience with real-world statistics and can be complex and rich in nature; however, students' excitement may be compromised since they were excluded from the production process, and variability can be hidden. Simulated data emphasizes variability well and allows the instructor more control, but it is not real (Cobb 1993). Regardless of the type, real data used in context can motivate and engage students in the statistical process without being burdensome, thanks to technological advances (GAISE 2005). 2.2.3 Stress Conceptual Understanding Rather Than Mere Knowledge of Procedures Like Cobb (1992), the GAISE authors believe that topical coverage can be sacrificed for conceptual understanding and suggest paring down the course syllabus. "If students don't understand the important concepts, there's little value in knowing a set of procedures" (GAISE 2005, p. 10). Experts in other disciplines, such as biology professors Udovic, Morris, Dickman, Postlethwait, and Wetherwax (2002), agree that deep understanding of fewer concepts is better than shallow understanding of many. This has been a principal part of the undergraduate statistics education reform movement, as more instructors are focusing on concepts in their courses (Garfield 2000). The Mathematical Association of America (2004) also endorsed this recommendation for statistics courses in their Committee on the Undergraduate Program in Mathematics 2004 Curriculum Guidelines.
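As a minimal, hedged illustration of the simulated-data idea discussed above (the scenario and numbers are invented for this sketch, not taken from GAISE or Cobb): simulation lets the instructor fix the "truth" and make variability impossible to miss, using nothing beyond Python's standard library:

```python
import random
import statistics

random.seed(42)  # fixed seed so the classroom demo is reproducible

# Invented scenario: five groups each "measure" the same quantity, whose
# true mean (100) and spread (sd 15) the instructor controls.
true_mean, true_sd, n = 100, 15, 30
group_means = [
    statistics.mean(random.gauss(true_mean, true_sd) for _ in range(n))
    for _ in range(5)
]

# The group means differ even though every group sampled the same truth --
# exactly the variability that simulated data is said to foreground.
spread = max(group_means) - min(group_means)
```

Because the instructor chose `true_mean` and `true_sd`, students can compare their sample estimates against a known answer, something archival data never allows.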
2.2.4 Foster Active Learning in the Classroom There are many advantages to incorporating active learning in the introductory statistics course. It allows students to discover concepts, engage in the statistical process, communicate in statistical language, and work in teams, and it provides instructors with informal methods of assessment (GAISE 2005). Active learning ideas can be traced back to Socrates and are found in the work of Dewey, Piaget, and Lewin (Zeichner, Litcher 1998). In a study of student learning styles in English, chemistry, mathematics, and psychology courses, August, Hurtado, Wimsatt, and Dey (2002) found that 91% of students felt they learned better from in-class activities and 85% found lecture-only classes boring. McConnell, Steer, and Owens (2003) found that active learning techniques in geology courses increase student participation and that students who have engaged in active learning perform better on exams and logical thinking tests than those in traditional settings. To foster active learning, activities should focus on conceptual understanding and discovery learning and can be group work, laboratory activities, or class discussions (GAISE 2005). 2.2.5 Use Technology for Developing Conceptual Understanding and Analyzing Data Technology, a resource which has been transforming statistical research for many years, has also been a major component of statistics education reform (Moore, Cobb, Garfield, Meeker 1995, Garfield 2000). This remarkable tool can be used in the introductory statistics course to analyze data, simulate concepts, or provide alternative assessments, while motivating and exciting students (GAISE 2005, Schenone-Stevens 1999). However, there are some cautions teachers should heed when exploiting technology's benefits.
When used for its own sake, technology has no redeeming educational value (Schenone-Stevens 1999); rather than replace human interaction, technology should enrich teaching styles and techniques (Moore, Cobb, Garfield, Meeker 1995). When used correctly, technology can greatly enhance student learning (GAISE 2005). 2.2.6 Use Assessments to Improve and Evaluate Student Learning It is understood that students are concerned with how they are assessed; therefore, assessment techniques should place value on learning objectives and understanding key ideas (GAISE 2005). High-quality assessments should be aligned with national standards, should measure what matters for improvement, and should be learning experiences themselves (Cobb 1993; Caudell 1996; Chance and Garfield 2002). Such instruments drive curriculum, reflect student learning, and have real-life context (Caudell 1996). Examples include formative evaluations, written reports, portfolios, experiments, essays, speeches, projects, and activities (Caudell 1996, McConnell et al 2003, GAISE 2005). 2.2.7 Interconnection of the GAISE Recommendations These recommendations made by the GAISE authors do not stand alone. Each can be met by materials or techniques intended to address another. For instance, a teacher may implement technology in order to convey a particular concept, an activity may involve student collection of data, or a group project may be used for assessment. Furthermore, all of these can move students toward being statistically literate or employing statistical thinking. In any case, statistics educators need not approach these guidelines as six separate techniques to master and implement, but as one complete way to help students become good statistical citizens in an information age. 2.3 GAISE in Other Disciplines These approaches to teaching are not limited to introductory statistics or any statistics course.
The ideas of scientific reasoning, active learning, conceptual understanding, use of technology, and appropriate assessments are present in biology, geology, business, engineering, psychology, and other disciplines that depend on statistics (Rolker-Dolinsk, Qualters 1994; Craddock 1998; Bass, Rosenzwig 1999; Hakeem 2001; McConnell, et al 2003; McCormick, MacKinnon, Jones 1999; Udovic, et al 2002). Some National Science Foundation (NSF) projects described in this paper meet GAISE recommendations by addressing introductory level statistics through courses in these disciplines. 3. NSF Division of Undergraduate Education Programs The NSF has supported undergraduate education since its inception in 1950. In order to take a more central role in reform efforts, NSF established the Division of Undergraduate Education (DUE) (NSF 1996). The following programs are or were supported by DUE and have had an impact on introductory statistics courses. The Instrumentation and Laboratory Improvement (ILI) program of NSF DUE began in 1988 in order to encourage and support improvement in laboratory curricula for science, technology, engineering, and mathematics (STEM) education institutionally and nationwide. Projects funded through this program helped create and equip laboratory facilities, upgrade equipment for laboratory instruction, develop laboratory exercises that demonstrate basic principles, and stimulate interest in STEM courses by making them relevant and understandable. The program accepted its final proposals in fiscal year 1998 and transitioned into the Course, Curriculum, and Laboratory Improvement (CCLI) program (NSF 1998).
The main objectives of the Course and Curriculum Development (CCD) program, which ran from 1988 until 1998, were to improve undergraduate STEM teaching, to increase student understanding of and attitudes toward STEM, and to place greater value on teaching and scholarship through the development and adaptation of courses, curriculum, and educational materials (Eiseman, Fairweather, Rosenblum, Britton 1998). Grants funded through this program often produced textbooks, manuals, or course materials or created courses or sequences of courses. Like ILI, CCD was finally assimilated into CCLI (NSF 1998). Established in 1998, the Course, Curriculum and Laboratory Improvement (CCLI) program combined properties of CCD and ILI, funding proposals for curricular development and purchase of instructional laboratory equipment. The initial four tracks of CCLI were intended to stimulate creative teaching and pedagogical scholarship among faculty (NSF 1998). The Educational Materials Development (EMD) track aimed to encourage and support the development of quality instructional materials that enhance student learning in STEM, while the Adaptation and Implementation (AI) track assisted in integrating exemplary materials, laboratory experiences, and educational practices at other diverse universities (NSF 2003a, NSF 2003b). By sponsoring faculty development opportunities, the National Dissemination (ND) track of CCLI promoted the introduction of exemplary materials, practices, and techniques to large numbers of colleges and universities nationwide (NSF 2003a). Finally, the Assessment of Student Achievement (ASA) track developed effective assessment tools associated with student learning in STEM and supported the adaptation, implementation, and dissemination of such tools (NSF 2003c).
These four tracks were phased out in 2006 to make room for a cyclical model of knowledge production and improvement with five supporting components: teaching and learning research, learning materials development, faculty enhancement, innovative materials implementation, and assessment of learning innovations (NSF 2005a). Another program that was integrated into CCLI was the Undergraduate Faculty Enhancement (UFE) program, which operated from 1988 to 1998. UFE sought to provide faculty with opportunities to experience new and exciting developments in undergraduate education such as new content, teaching methods, experimental techniques, and technology. Funded projects conducted workshops, short courses, seminars, and other such activities to promote these developments. The program supported more than 500 projects and over 750 workshops during its lifetime (Marder, McCullough, Perakis 2001). Another program for educators was the Collaboratives for Excellence in Teacher Preparation (CETP) program founded in 1993. The goal was to increase the number and quality of future pre-Kindergarten through 12th grade teachers, emphasizing subject area competence, effective pedagogical techniques, and national standards for math and science. This program was redesigned, and from 2003 to 2005 was the Teacher Professional Continuum (TPC) program (NSF 1999; Prival 2008b). Currently, components of this program are interwoven with the Discovery Research K-12 (DR-K12) (Prival 2008a). The National Science, Technology, Engineering, and Mathematics Digital Library (NSDL) program supports the collection and organization of educational materials into a national online digital library through projects that develop and enhance collections as well as implement digital library services. Projects can support existing resource providers, maintain material currency and selection criteria, select existing materials for inclusion, or fund workshops promoting the library (NSF 2005b). 4.
NSF Projects Meeting GAISE Recommendations 4.1 Overview As noted above, any given educational technique, practice, or set of materials need not be isolated to one specific GAISE recommendation. Often, by setting out to meet one recommendation, educators end up meeting several at a time. By searching NSF's Award Search Webpage (www.nsf.gov), we were able to find 110 projects affecting introductory statistics funded between 1993 and 2004. Of these, 95% met at least one GAISE recommendation, while 65% met more than one. The NSF funded projects that follow are described in terms of one GAISE recommendation, but that does not mean they meet only that recommendation. Projects were selected to exemplify the qualities of a particular recommendation, even if they meet more than one, as many do. Furthermore, we do not claim that this list is exhaustive of NSF projects that meet GAISE guidelines. If you participated in an NSF project that fits the nature and scope of this article and is not discussed or listed in the Appendix, we extend our apologies for the omission. 4.2 Projects that Emphasize Statistical Literacy and Thinking Projects that address the first guideline are targeted at helping students become statistically literate, critical thinkers, and informed statistical citizens. The Electronic Encyclopedia of Statistical Examples and Exercises (EESEE; http://www.whfreeman.com/eesee/eesee.html) and the Data and Story Library (DASL: http://lib.stat.cmu.edu/DASL/) are online resources full of real datasets, case studies, and other materials for use in statistics classes. In order to enhance these two resources, project investigators Paul Velleman, William Notz, Elizabeth Stasny, and Dennis Pearl led "Interactive Video Resources for Learning Statistics" (#9555073, #9555233). 
This project added video resources of current events found on the news and other television programs to help students think critically about statistical applications in real-world events (Notz, Pearl, Stasny 1996). Another project focused on current events that utilizes EESEE and DASL is "Chance: Current Studies of Current Chance Issues, Phase II" (#9354592) by Laurie Snell. Chance is a quantitative literacy course designed to turn students into informed critical readers by basing the course on statistical concepts found in current events. Students read articles from Chance Magazine and other journals, critiquing the statistical methods used. Topics covered include probability concepts, descriptive statistics, design of experiments, sampling, correlation, and exploratory data analysis. Online and printed materials are available to help instructors teach such a course (Snell 1994, http://www.dartmouth.edu/~chance/). Additionally, a workshop called "Chance Workshop" (#9653416) was conducted to teach educators how to use current events to teach probability and statistics concepts (Snell 1997). Other workshops have been conducted to help teachers with little or no statistical training learn better ways to teach statistical literacy. Two examples are George Cobb and Mary Parker's "Statistical Thinking and Teaching Statistics" (#9255447) and its successor "STATS: Statistical Thinking with Active Teaching Strategies" (#9554621) led by Allan Rossman and Thomas Short. These projects supported numerous weeklong workshops emphasizing statistical thinking through real data, conceptual understanding, active learning, software, and assessments, thus touching all components of GAISE (Cobb and Parker 1998; Rossman and Short 1999). Students in other disciplines also benefit from statistical literacy and thinking skills.
To help biology students appreciate statistics and improve their quantitative reasoning skills, "IBASE: Integrating Biology and Statistics Education," (#0309751) by James Watrous, Deborah Lurie, and Denise Ratterman, created two courses, biology and statistics, to be taken simultaneously. In the biology course, students collected data from experiments completed in the lab and then analyzed the data in the statistics course. Thus the students were able to learn real-world applications in a relevant situation, reducing anxiety toward statistics and providing better understanding (Watrous, Lurie, Ratterman 2003). 4.3 Projects that Use Real Data Real data is often easiest to collect in other disciplines that use statistics on a regular basis. For example, the American Sociological Association, partnering with the University of Michigan, sponsored "Collaborative Project on Integrating Census Data Analysis into the Curriculum" (#0088715, #0089006) led by William Frey, Carla Howery and Felice Levine. This National Dissemination project attempted to revise the sociology curriculum at numerous schools by emphasizing the use of real data from the US Census Bureau. The project investigators called for proposals from other universities to take part in this project in order to have widespread impact (Frey 2001, American Sociological Association 2002). The natural sciences are another area with easy access to real data. "Service Learning in Chemistry: Lead in Soil from Vehicle Emissions" (#0410115) by Hal Van Ryswyk incorporated data analysis into introductory chemistry classes. Students sampled and tested soil for lead, analyzed their own data, and prepared written and oral presentations. The students in these courses also collaborated with students in probability and statistics courses as well as local elementary schools (Van Ryswyk 2004). For those educators without such easy access to real data, there are other ways to find it. 
For example, James Albert designed an introductory statistics course based entirely on baseball statistics through "Development of Sports Statistics Modules for Introductory Statistics Classes" (#0088703). Data came from baseball cards, the Internet, and simulation. Students were able to understand concepts and analyze real data in an interesting context (Albert 2002). Analyzing data can be difficult without computer access and appropriate software. Robert Gould and Mahtash Esfandiari's goal for "A Statistics Undergraduate Computing Laboratory" (#9981172), funded in 2000, was to establish a computer laboratory for statistics courses, where students could analyze real data, teaching the value of statistical thinking and deeper intuitive understanding of the entire data analysis process. The project investigators believe real datasets can help students confront important basic problems in statistics without the datasets being huge and messy (Gould, Esfandiari 2003). 4.4 Projects that Stress Conceptual Understanding over Knowledge of Procedures In order to place a deliberate emphasis on conceptual understanding versus theoretical background, educators often employ real data, active learning, and technology. The "Rice Virtual Laboratory in Statistics" (#9751307) by David Lane, Joe Austin, David Scott, Keith Baggerly, and Miguel Quinones is a web-based resource for students and teachers of statistics. The site (http://onlinestatbook.com/rvls.html) houses the Hyperstat Online textbook, case studies, simulations, and some basic analysis tools. The intended progression is for users to explore a statistical concept demonstrated in a case study, which will link them to explanatory material from the online textbook, leading them to simulations of the concept through Java applets, ending with the users' own experiments, providing the student or teacher with a thorough lesson and deeper conceptual understanding (Lane, Austin, Scott, Baggerly, Quinones 2000).
Associated with the Rice Virtual Labs is "Online Statistics Education: An Interactive Multimedia Course of Study" (#0089435) by David Lane, David Scott, Rudy Guerra, Michelle Hebl, and Daniel Osherson. This online statistics course (http://onlinestatbook.com/) contains lecture materials, simulations, self-testing, and real data from case studies, and can be modified depending on audience level. To learn the concepts, students answer a series of questions, then conduct simulations to see if they were correct, and then answer the questions again (Lane, Scott, Guerra, Hebl, Osherson 2004). Beth Klingner and Nira Herrmann make use of Lane's online course in their project, "Enhancing the Mathematical Foundation of Students through Online Course Modules" (#0311016). They adapted the materials into modules designed to teach quantitative and analytical skills in relevant contexts (Klingner, Herrmann 2003). For those with little or no computer access, "Laboratory Lessons for Discovery-Based Statistics" (#9650581) by Richard Scheaffer produced hands-on, student-directed lessons that teach fundamental concepts like randomness, sampling distributions, confidence, and significance, which can include, but do not require, computer use (Scheaffer 1996). 4.5 Projects that Foster Active Learning Many projects that involve active learning do so through computer laboratory modules, activities, or course-long projects. These activities are designed to help students practice the scientific process while gaining deeper conceptual understanding. Both "Fostering Conceptual Understanding Using a 'Hands-On' Approach in Undergraduate Statistics" (#9452320) by Danuta Bukatko and Patricia Kramer and "A Statistical Laboratory for Active Learning" (#9550891) by Richard Scheaffer created interactive, hands-on computer module activities that help students learn statistical concepts using graphing and analysis (Bukatko, Kramer 1994; Scheaffer 1995).
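Lessons and lab modules like those just described build toward concepts such as randomness and sampling distributions. A minimal sketch of the kind of simulation they lead up to might look like the following (the skewed population and the sample sizes are assumptions chosen for illustration, not taken from any of the projects):

```python
import random
import statistics

random.seed(0)  # fixed seed so the demo is reproducible

def sample_means(draw, n, reps=2000):
    """Empirical sampling distribution of the mean: take `reps` samples
    of size `n` from the population `draw` and record each sample mean."""
    return [statistics.mean(draw() for _ in range(n)) for _ in range(reps)]

# A deliberately skewed population: exponential with mean 1.
draw = lambda: random.expovariate(1.0)

# The spread of the sample means shrinks as the sample size grows --
# the core lesson about sampling distributions, discovered by simulation.
sd_n5 = statistics.stdev(sample_means(draw, 5))
sd_n50 = statistics.stdev(sample_means(draw, 50))
```

Plotting the two lists of means as histograms makes the narrowing (and the drift toward a bell shape, despite the skewed population) visible at a glance.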
"An Activity-Based Statistics Course for Engineers" (#0126815) by Steven Butt, Bob White, and Tycho Fredericks gives students the opportunity to collect their own data and solve real-world problems in weekly labs and workshops, while "Development of an Inquiry-based Curriculum in Ecology" (#0088369) by Richard Tankersley and John Morris has integrated laboratory modules into a four semester sequence of ecology courses, where students sharpen their thinking skills through statistical technique exercises and self-designed investigations (Butt, White, Fredericks 2004; Tankersley, Morris 2005). Another active learning project that spans courses is "Promoting Undergraduate Research through Development of Two Interdisciplinary Research Methods/Statistics Courses and Increased Support of Student Research," (#0126435) by Kathy Silgailis and Vishwa Bhat. Resulting from this project is a two-semester sequence of courses for science majors in which students complete a sequence-long project. In the first semester, students develop a research question and hypothesis while learning basic statistical procedures, and in the second semester, they refine the question, design and conduct data collection, and prepare written and oral presentations of their findings. This in-depth project also includes software training and laboratory activities (Silgailis, Bhat 2005). 4.6 Projects that Effectively Use Technology It would be very tempting to say that any project that implements some type of technology meets the technology recommendation; however, technology for the sake of technology is not what this guideline recommends. Utilizing technology means providing access to hardware or software that enhances the learning of statistics, especially computational or simulation software. 
For instance, Roxy Peck and James Daly's "Studio Environment for Introductory Statistics" (#9750663) established a laboratory classroom so that students could complete lab activities, projects, and simulations to help them learn statistics (Peck, Daly 1997). Simulation software is useful technology for teaching statistics. Sampling SIM, software developed by Joan Garfield, Robert delMas, and Beth Chance through "Tools for Teaching and Assessing Statistical Inference" (#9752523), helps students build conceptual understanding by allowing them to make and test predictions. The simulation software comes with instructional materials and is freely available on the web at http://www.tc.umn.edu/~delma001/stat_tools/ (Garfield, delMas 2000). A larger software program focused on data analysis was developed through "A Data Analysis Exercise Server for Introductory Statistics Courses" (#9980973) by Todd Ogden and Webster West and "DoStat.com: A Web Site for Educational Data Analysis and Assessment" (#0226097) by Webster West and James Lynch. "StatCrunch," initially called "WebStat," is low-cost online computational software found at http://www.statcrunch.com. This Excel-compatible software allows students to upload their own data and perform descriptive statistics, hypothesis testing, confidence intervals, regression, ANOVA, categorical or quantitative graphing, and more (West, Wu, Heydt 2004). A project utilizing StatCrunch is "Visualizing Statistics – An Online Introductory Course" (#9950671) by Alexander Kugushev and CyberGnostics, Inc. This online course offers explanatory text, applets, real data, testing, and "StatCrunch" analytical software (CyberGnostics 2005).

4.7 Projects on Assessment

Determining if you are meeting your instructional goals for your students is difficult without proper assessment instruments. Two projects have been funded which provide access to statistical assessments.
The "Web-based ARTIST Project" (#0206571) by Joan Garfield, Beth Chance, and Robert delMas consists of an assessment builder of over 1,000 items varying by format, level, and statistical topic as well as project ideas, article critiques, group work options, and scoring guidelines. References of works published on assessment topics are also available on the site, https:// app.gen.umn.edu/artist/ (Garfield, Chance 2004). "Statistical Concepts Inventory (SCI): A Cognitive Achievement Tool in Engineering Statistics" (#0206977) by Teri Rhoads, Teri Murphy, and Robert Terry is a multiple choice exam intended to measure the ability of engineering students to apply statistical concepts to real-world situations. This assessment tool includes questions related to statistical topics important in engineering such as designing and conducting experiments and analyzing and interpreting data. It is available at http://onlinestatbook.com/rvls.html/ (Rhoads, Murphy 4.8 Projects that Address All Recommendations Many of the projects previously described meet more than one recommendation. However, few projects meet all GAISE recommendations. One such project that reaches across the discipline of statistics to touch students and teachers at all levels is "CAUSEweb: A Digital Library of Undergraduate Statistics Education" (#0333672) led by Dennis Pearl and supported by the Consortium for the Advancement of Undergraduate Statistics Education (CAUSE). This digital library, found at http://causeweb.org/, includes a resource section which provides descriptions and/or reviews for statistics education materials. Students and teachers can search for resources by material type, audience level, math level, application area, or statistical topic (Green, McDaniel, Rowell 2005). 5. 
5. Conclusion

The GAISE College Report recommendations evolved from many years of work by the statistics education community to determine the best standards for teaching and learning introductory statistics. With so many NSF-funded projects achieving the ideals described in GAISE, it is apparent that the NSF supports the implementation of these recommendations. The NSF-supported resources described in this paper provide a good starting place for introductory statistics teachers seeking ideas for implementing one or more of the GAISE recommendations. The Appendix lists over 100 NSF-supported projects in introductory-level statistics education. Additional information about these projects can be found by using the NSF Awards Search webpage (http://www.nsf.gov/awardsearch/) and entering the award number in the "Search Award for" dialog box.

Appendix: NSF Projects

│ Award Number │ Title │ PI │ Start Date │ NSF Program │ Award Amount │ Institution │
│ 9950494 │ Computer Enhanced Mathematics Instruction │ Addison Frey │ June 1, 1999 │ AI │ $25,429 │ Alfred University │
│ 9950161 │ An Interactive Learning Environment in Statistics: Integrating Multimedia Laboratory Exercises and Courseware into the Statistics Curriculum │ Deborah A. Nolan │ July 1, 1999 │ AI │ $99,238 │ U. California Berkeley │
│ 9950509 │ High-Tech., Project-Based Beginning Algebra and Statistics Course for Two-Year Colleges │ Sue E. Stokley │ August 1, 1999 │ AI │ $32,847 │ Spartanburg Tech. Coll. │
│ 9950628 │ Beyond Mapping and Reporting: Improving Students' Skills in Science and Analysis for Geography, Environmental Studies, and Ecology │ Robert Werner │ August 1, 1999 │ AI │ $42,200 │ Univ. of St. Thomas │
│ 9972494 │ Integrated Statistics and Computer Application Courses │ Melinda A. Holt │ August 1, 1999 │ AI │ $87,577 │ Texas Women's Univ. │
│ 9950229 │ Quantitative Reasoning and Informed Citizenship: Implementing an Activity-based Laboratory Course │ Kay Somers │ September 1, 1999 │ AI │ $79,412 │ Moravian College │
│ 9950856 │ New Laboratory and Integrated Course Materials to Improve the Psychology Curriculum │ Scott Ottaway │ September 1, 1999 │ AI │ $86,276 │ West. Washington U. │
│ 9980995 │ Using the LaCEPT Model to Reform an Elementary Statistics Course │ Frank Neubrander │ January 1, 2000 │ AI │ $74,063 │ LSU, A&M Coll. │
│ 9952620 │ Development of Laboratory and Field Experience Based Course in Asphalt Technology for Civil Engineering Undergraduate Students │ Rajib Mallick │ February 1, 2000 │ AI │ $31,479 │ Worcester Poly. Inst. │
│ 9981172 │ A Statistics Undergraduate Computing Laboratory │ Robert L. Gould │ March 1, 2000 │ AI │ $69,181 │ UCLA │
│ 0087680 │ A Multifunctional Technology Classroom for the Teaching of Data-Intensive Statistics │ Steven C. Patch │ January 1, 2001 │ AI │ $49,450 │ UNC, Asheville │
│ 0088369 │ Development of an Integrated Inquiry-based Curriculum in Ecology │ Richard Tankersley │ February 1, 2001 │ AI │ $201,134 │ Florida Inst. of Tech. │
│ 0088377 │ Political Analysis in an Experiential/Collaborative Setting │ Allan McBride │ March 15, 2001 │ AI │ $45,642 │ Univ. Southern Miss. │
│ 0088422 │ Adaptation and Implementation of Computer Technology into the Mathematical Science Curriculum │ T. Len Miller │ July 1, 2001 │ AI │ $93,011 │ MSU │
│ 0126815 │ An Activity-Based Statistics Course for Engineers │ Steven E. Butt │ January 23, 2002 │ AI │ $52,139 │ Western Michigan U. │
│ 0126914 │ Integrating Mathematics and Statistics into the Biology Curriculum │ Eric Marland │ March 1, 2002 │ AI │ $159,583 │ Appalachian S.U. │
│ 0126682 │ A Multi-stage, Technology-intensive Approach to Statistics Instruction │ Jeff Knisley │ May 1, 2002 │ AI │ $124,996 │ ETSU │
│ 0126435 │ Promoting Undergraduate Research through the Development of Two Interdisciplinary Research Methods/Statistics Courses and Increased Support of Student Research │ Kathy Silgailis │ July 1, 2002 │ AI │ $197,975 │ Will. Patterson Univ. │
│ 0309751 │ IBASE: Integrating Biology and Statistics Education │ James J. Watrous │ July 1, 2003 │ AI │ $89,188 │ St. Joseph's University │
│ 0311016 │ Enhancing the Mathematical Foundation of Students through Online Course Modules │ Beth Klingner │ August 15, 2003 │ AI │ $164,985 │ Pace University, NY │
│ 0310932 │ Implementing Activity-based Cooperative Learning and Technology (ACT Curriculum) in Statistics Courses for Non-majors and K-12 Preservice Teachers │ Carl M. Lee │ September 1, 2003 │ AI │ $177,052 │ Central Michigan Univ. │
│ 0311579 │ Collaborative Research: Adapting and Evaluating Online Materials for Undergraduate Statistics using LON-CAPA Technology │ Deborah A. Kashy │ September 15, 2003 │ AI │ $35,078 │ Michigan State Univ. │
│ 0311695 │ Collaborative Research: Adapting and Evaluating Online Materials for Undergraduate Statistics using LON-CAPA Technology │ Jennifer G. Boldry │ September 15, 2003 │ AI │ $47,125 │ Montana State Univ. │
│ 0410115 │ Service-Learning in Chemistry: Lead in Soil from Vehicle Emissions │ Hal Van Ryswyk │ September 1, 2004 │ AI │ $41,227 │ Harvey Mudd College │
│ 0411041 │ Integrating Data Analysis into the Curriculum: Responding to the Scientific Literacy Gap Among Undergraduate Students in the Social Sciences │ Esther Wilder │ September 1, 2004 │ AI │ $175,000 │ CUNY, Herbert Lehman │
│ 0206571 │ The Web-based ARTIST Project │ Joan Garfield │ August 15, 2002 │ ASA │ $551,094 │ UMN, Twin Cities │
│ 0206977 │ The Statistical Concepts Inventory (SCI): A Cognitive Achievement Tool in Engineering Statistics │ Teri R. Rhoads │ September 1, 2002 │ ASA │ $499,999 │ University of Oklahoma │
│ 9254087 │ A Modular Laboratory and Project-Based Statistics Curriculum │ Joseph D. Petruccelli │ January 1, 1993 │ CCD │ $165,000 │ Worcester Poly. Inst. │
│ 9254182 │ Realizing the Power of Computers in Business Statistics Instruction: A Next Step │ Ronald Tracy │ February 1, 1993 │ CCD │ $60,029 │ Oakland University │
│ 9354506 │ Developing Statistical Understanding through Interactive Computing/Graphics │ Leo Breiman │ March 1, 1994 │ CCD │ $166,637 │ U. California Berkeley │
│ 9354419 │ Constructing Knowledge of Statistical Concepts through Modern Technology │ Dennis D. Wackerly │ May 1, 1994 │ CCD │ $99,992 │ University of Florida │
│ 9354592 │ Change: Current Studies of Current Chance Issues, Phase II │ J. Laurie Snell │ July 1, 1994 │ CCD │ $209,914 │ Dartmouth College │
│ 9455393 │ New Engineering Course with a Virtual Computer Laboratory │ Norma F. Hubele │ February 1, 1995 │ CCD │ $100,600 │ Arizona State Univ. │
│ 9455300 │ New Geology Laboratories: Interactive Data Acquisition, Analysis, and Multimedia Modules of Geologic Phenomena, Part II │ Dennis Hodge │ May 1, 1995 │ CCD │ $75,000 │ SUNY, Buffalo │
│ 9455601 │ Coupling Mathematics and Life Science Courses │ Marlene Wilson │ June 1, 1995 │ CCD │ $51,293 │ Univ. of Portland │
│ 9455578 │ Revitalizing Introductory Statistics for Engineering by Capitalizing on Interdisciplinary Cooperation and State-of-the-Art Technology │ Panickos N. Palettas │ August 1, 1995 │ CCD │ $57,866 │ Virginia Tech │
│ 9696174 │ Revitalizing Introductory Statistics for Engineering by Capitalizing on Interdisciplinary Cooperation and State-of-the-Art Technology │ Panickos N. Palettas │ January 1, 1996 │ CCD │ $140,975 │ Ohio State University │
│ 9555073 │ Interactive Video Resources for Learning Statistics │ William I. Notz │ March 1, 1996 │ CCD │ $103,701 │ Ohio State University │
│ 9554805 │ Synergistic Learning in Biology and Statistics (SLIBS) │ Robert V. Blystone │ June 1, 1996 │ CCD │ $246,336 │ Trinity University │
│ 9555233 │ Interactive Video Resources for Learning Statistics │ Paul F. Velleman │ June 1, 1996 │ CCD │ $51,598 │ Cornell University │
│ 9653153 │ Earth Math Phase 3; Calculus and Statistics for a New World │ Nancy Zumoff │ January 1, 1997 │ CCD │ $22,800 │ Kennesaw S.U. │
│ 9653267 │ Revitalizing the Study of Probability through Applications, Technology, and Collaborative Learning │ Michael Bean │ September 1, 1997 │ CCD │ $180,001 │ University of Michigan │
│ 9752428 │ A Probability/Activity Approach for Teaching Introductory Statistics │ James Albert │ January 1, 1998 │ CCD │ $50,000 │ Bowling Green Univ. │
│ 9752559 │ Probability and Surprise: Animations and Simulations │ Susan P. Holmes │ January 1, 1998 │ CCD │ $99,970 │ Cornell University │
│ 9752523 │ Tools for Teaching and Assessing Statistical Inference │ Joan B. Garfield │ February 1, 1998 │ CCD │ $100,021 │ UMN, Twin Cities │
│ 9752645 │ Intersection of Population Biology and Mathematics │ Jane Gallagher │ June 1, 1998 │ CCD │ $150,000 │ CUNY │
│ 9850035 │ Science Education for Tomorrow │ Elizabeth Boylan │ September 1, 1998 │ CCD │ $196,152 │ Barnard College │
│ 9996235 │ Probability and Surprise: Animations and Simulations │ Susan P. Holmes │ January 1, 1999 │ CCD │ $62,048 │ Stanford University │
│ 9653224 │ Revitalizing Classroom Teaching and Learning: A Beginning for Two-Year College Mathematics │ Elizabeth Higgins │ February 1, 1997 │ CCD/ATE │ $99,799 │ Greenville Tech. Coll. │
│ 9752185 │ Integrating Pedagogical and Curriculum Theory with Teaching Practice Throughout all Mathematics and Science Courses in the College of Arts & Sciences and Evaluating ... │ Edward Dubinsky │ March 1, 1998 │ CCD/CETP │ $100,000 │ Georgia State Univ. │
│ 9354529 │ Informed Statistical Reasoning in an Uncertain World: Situated Simulations for Undergraduates │ Sharon Derry │ June 1, 1994 │ CETP │ $202,316 │ UW-Madison │
│ 9950671 │ Visualizing Statistics - An On-Line Introductory Course │ Alexander Kugushev │ October 1, 1999 │ EMD │ $260,484 │ CyberGnostics Inc. │
│ 9980796 │ Development of an Interactive Tutorial on Statistical Design and Analysis of Experiments │ John O'Haver │ February 1, 2000 │ EMD │ $79,898 │ University of Miss. │
│ 9980973 │ A Data Analysis Exercise Server for Introductory Statistics Courses │ R. Todd Ogden │ May 1, 2000 │ EMD │ $75,000 │ USC, Columbia │
│ 0088703 │ Development of Sports Statistics Modules for Introductory Statistics Classes │ James H. Albert │ January 1, 2001 │ EMD │ $67,258 │ Bowling Green Univ. │
│ 0089435 │ Online Statistics Education: An Interactive Multimedia Course of Study │ David M. Lane │ February 1, 2001 │ EMD │ $401,990 │ Will. Marsh Rice Univ. │
│ 0126855 │ Case-Based Reasoning for Engineering Statistics │ George C. Runger │ December 1, 2001 │ EMD │ $74,622 │ Arizona State Univ. │
│ 0126433 │ Teaching Psychological Research Methods with Online Examples │ William Maki │ April 15, 2002 │ EMD │ $102,147 │ Texas Tech. University │
│ 0226097 │ DoStat.com: A Web Site for Educational Data Analysis and Assessment │ R. Webster West │ June 15, 2002 │ EMD │ $130,002 │ USC, Columbia │
│ 0230803 │ Stem and Tendril: Vertically Integrated Statistics Laboratories │ Andrew Poje │ January 15, 2003 │ EMD │ $74,836 │ CUNY Staten Island │
│ 0341210 │ Improving the Quality of and Access to Undergraduate Statistics Education │ Fred Speed │ January 1, 2004 │ EMD │ $74,826 │ Texas A&M Univ. │
│ 0341529 │ An Audio-Tactile Curriculum to Support Visually Impaired Statistics Students │ Karen Gourgey │ February 15, 2004 │ EMD │ $30,032 │ CUNY, Baruch │
│ 9752705 │ UFE: Teaching Computer-Intensive Resampling Techniques │ Amer. Stat. Assoc. │ February 15, 1998 │ EMD/UFE │ $60,000 │ Amer. Stat. Assoc. │
│ 9250330 │ Behavioral Sciences Computer Laboratory │ James Raymondo │ July 1, 1992 │ ILI │ $25,280 │ Union College │
│ 9351126 │ Data Analysis Laboratory │ Loren Haskins │ April 1, 1993 │ ILI │ $32,355 │ Carleton College │
│ 9351926 │ Elementary Statistics Computer Laboratory │ Louis M. Friedler │ April 1, 1993 │ ILI │ $19,259 │ Beaver College │
│ 9352131 │ An Interdisciplinary Laboratory for Data Acquisition, Analysis, and Modeling │ Dwight Krehbiel │ April 1, 1993 │ ILI │ $49,560 │ Bethel College │
│ 9351035 │ Discovering Statistics: A Laboratory Approach │ Richard L. Scheaffer │ June 1, 1993 │ ILI │ $25,000 │ University of Florida │
│ 9351493 │ Computational Classroom Facility for Biometry Courses │ Charles McCulloch │ June 1, 1993 │ ILI │ $40,000 │ Cornell University │
│ 9352076 │ Technology for: Improvements of Mathematical Concepts and Initiation to Professional Tools │ Karla Foss │ June 1, 1993 │ ILI │ $35,000 │ Pellissippi STCC │
│ 9352110 │ Instrumentation for Novel Laboratory Instruction in Undergraduate Statistics Curricula │ Walter R. Pirie │ June 1, 1993 │ ILI │ $52,646 │ Virginia Tech │
│ 9350746 │ A Computer Lab for Biological Statistics │ Daniel E. Wujek │ July 1, 1993 │ ILI │ $26,887 │ Central Michigan Univ. │
│ 9352312 │ Novel Laboratory Instruction in Undergraduate Statistics Curricula │ Panickos N. Palettas │ August 1, 1993 │ ILI │ $90,221 │ Virginia Tech │
│ 9451814 │ Multidisciplinary Statistics Curriculum and Computing Laboratory │ Chris Noble │ June 1, 1994 │ ILI │ $38,560 │ Lawrence University │
│ 9452229 │ Interactive Computerized Statistics Classroom │ Louise Hainline │ June 1, 1994 │ ILI │ $70,072 │ CUNY, Brooklyn │
│ 9451972 │ In-Class Experimental Learning in Four Fundamental Courses │ John Stone │ July 1, 1994 │ ILI │ $70,000 │ Grinnell College │
│ 9452622 │ Developing a Computer Lab for the Technology Enhanced Teaching of Undergraduate Statistics │ Judith Treas │ August 1, 1994 │ ILI │ $55,000 │ U. California, Irvine │
│ 9451398 │ A Computer Classroom for Introductory Statistics │ Joseph D. Petruccelli │ August 15, 1994 │ ILI │ $53,348 │ Worcester Poly. Inst. │
│ 9452156 │ Enhancement of Statistics, Research Methods and Experimental Psychology Laboratories │ Virginia A. Diehl │ September 1, 1994 │ ILI │ $33,659 │ West. Illinois Univ. │
│ 9452320 │ Fostering Conceptual Understanding Using a "Hands-On" Approach │ Danuta Bukatko │ September 1, 1994 │ ILI │ $17,640 │ Holy Cross College │
│ 9550891 │ A Statistical Laboratory for Active Learning │ Richard L. Scheaffer │ May 1, 1995 │ ILI │ $14,940 │ University of Florida │
│ 9551850 │ Computer Classroom for Statistical Instruction │ Ronald L. Tracy │ August 1, 1995 │ ILI │ $65,000 │ Oakland University │
│ 9551275 │ Computer Stat Lab │ Patricia R. Wilkinson │ September 1, 1995 │ ILI │ $29,391 │ CUNY, BMCC │
│ 9551460 │ From Descriptive to Adaptive Understanding: Using Interactive Computer Simulation in Quantitative Biology and Statistics Labs │ David G. Huffman │ September 1, 1995 │ ILI │ $62,320 │ Southwest Texas S.U. │
│ 9552311 │ Fostering Creativity, Teamwork, and Scientific Thinking in Introductory Statistics through Computer-Based Laboratories │ Peter G. Jessup │ September 1, 1995 │ ILI │ $24,333 │ Ursinus College │
│ 9650048 │ Interactive Undergraduate Statistical Computing Laboratory │ John I. Marden │ May 1, 1996 │ ILI │ $38,070 │ UI, Urbana-Champaign │
│ 9650871 │ Computer Laboratories in Calculus & Statistics │ Bruce Torrence │ June 1, 1996 │ ILI │ $41,597 │ Randolph-Macon Coll. │
│ 9696158 │ Novel Laboratory Instruction in Undergraduate Statistics Curricula │ Panickos N. Palettas │ June 1, 1996 │ ILI │ $47,651 │ Ohio State University │
│ 9650645 │ Mathematics and Statistics Computer Classroom │ I-Lok Chang │ July 1, 1996 │ ILI │ $23,680 │ American University │
│ 9651186 │ Mathematics Multimedia Presentation Classroom │ Kevin McDonald │ July 1, 1996 │ ILI │ $60,000 │ Mt. San Antonio Coll. │
│ 9650032 │ A Microcomputer Laboratory for Experimental Psychology │ Sarah Ransdell │ August 1, 1996 │ ILI │ $22,000 │ Florida Atlantic Univ. │
│ 9650581 │ Laboratory Lessons for Discovery-Based Statistics │ Richard L. Scheaffer │ August 1, 1996 │ ILI │ $46,620 │ University of Florida │
│ 9650659 │ Department of Economics Computer Center │ Byron David │ August 1, 1996 │ ILI │ $19,900 │ CUNY, City College │
│ 9651271 │ Computer Assisted Interdisciplinary Problem Solving in Mathematics and Science │ Samantha Prashanta │ September 1, 1996 │ ILI │ $31,830 │ Finger Lakes CC │
│ 9750663 │ Studio Environment for Introductory Statistics │ Roxy Peck │ June 1, 1997 │ ILI │ $61,429 │ Cal Poly State Univ. │
│ 9751571 │ Data-Driven Statistics Courses in an Interactive Teaching Computer Laboratory │ Andre M. Lubecke │ June 1, 1997 │ ILI │ $54,969 │ Lander University │
│ 9751307 │ The Rice Virtual Lab in Statistics │ David M. Lane │ July 1, 1997 │ ILI │ $200,000 │ Will. Marsh Rice Univ. │
│ 9851421 │ STATLAB - An Interactive Classroom and Laboratory for Introductory Statistics │ David C. Carothers │ June 1, 1998 │ ILI │ $59,936 │ James Madison Univ. │
│ 9851146 │ Equipping the Statistical Toolkit: An Intranet-Based Approach to Introductory Statistics │ Gavin M. Cross │ July 1, 1998 │ ILI │ $40,953 │ Coe College │
│ 9851321 │ Computers for an Introductory Interdisciplinary Data Analysis Course │ Laura P. Eisen │ July 1, 1998 │ ILI │ $16,780 │ Trinity College │
│ 9851559 │ Computing-Enhanced Experiential Learning in the Introductory Statistics Course │ Ann R. Cannon │ July 1, 1998 │ ILI │ $21,090 │ Cornell College │
│ 0089005 │ MAA Comprehensive Professional Development Program For Mathematics Faculty │ J Michael Person │ April 1, 2001 │ ND │ $966,291 │ MAA │
│ 0088715 │ Collaborative Project on Integrating Census Data Analysis into the Curriculum │ William H. Frey │ May 15, 2001 │ ND │ $522,205 │ University of Michigan │
│ 0089006 │ Collaborative Project on Integrating Census Data Analysis into the Curriculum │ Felice J. Levine │ May 15, 2001 │ ND │ $417,241 │ Amer. Soc. Assoc. │
│ 0341481 │ PRofessional Enhancement Program (PREP) │ J Michael Pearson │ February 1, 2004 │ ND/NSDL │ $462,690 │ MAA │
│ 0333672 │ CAUSEweb: A Digital Library of Undergraduate Statistics Education │ Dennis Pearl │ October 1, 2003 │ NSDL │ $824,945 │ Ohio State University │
│ 9255447 │ Statistical Thinking and Teaching Statistics │ George W. Cobb │ March 1, 1993 │ UFE │ $450,068 │ MAA │
│ 9554621 │ STATS: Statistical Thinking with Active Teaching Strategies │ Allan J. Rossman │ January 1, 1996 │ UFE │ $202,844 │ MAA │
│ 9653416 │ Chance Workshop │ J. Laurie Snell │ January 1, 1997 │ UFE │ $87,660 │ Dartmouth College │
│ 9653442 │ Elementary Statistics Laboratory Workshop │ John D. Spurrier │ March 1, 1997 │ UFE │ $67,845 │ USC, Columbia │

Acknowledgments

Megan R. Hall worked on this project as an undergraduate student at Middle Tennessee State University, Murfreesboro, TN. This work was supported by Middle Tennessee State University through a Faculty Research and Creative Activity Committee grant.

References

Albert, J.H. (2002), "Development of Sports Statistics Modules for Introductory Statistics Classes #0088703." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web

American Sociological Association. (2002), "Integrating Census Data Analysis into the Curriculum Call for Applications." Retrieved May 24, 2005 from http://www.asanet.org/members/ida.html.

August, L., Hurtado, S., Wimsatt, L.A., Dey, E.L. (2002), "Learning Styles: Student Preferences versus Faculty Perceptions," Association for Institutional Research 2002 Forum Paper (42nd, Toronto, Canada, June 2-5, 2002).

Bass, R. and Rosenzweig, R.
(1999), "Rewiring the History and Social Studies Classroom: Needs, Framework, Dangers, and Proposals," Forum on Technology in Education: Envisioning the Future Proceedings, Washington D.C., December 1-2, 1999. Bukatko, D. and Kramer, P. (1994), "Fostering Conceptual Understanding Using a ‘Hands-On’ Approach in Undergraduate Statistics Abstract #9452320." Retrieved March 2, 2005 from NSF Online Database Butt, S.E., White, B.E., Fredericks, T.K. (2004), "An Activity-Based Statistics Course for Engineers #0126815." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/ Caudell, L.S. (1996), "Voyage of Discovery," Northwest Education, v2, n1, p2-7. Retrieved June 8, 2005 from http://www.nwrel.org/nwedu/fall_96/article2.html. Chance, B. (2002), "Components of Statistical Thinking and Implications for Instruction and Assessment," Journal of Statistics Education, v10, n3. Retrieved June 10, 2005 from http://www.amstat.org/ Chance, B. and Garfield J. (2002), "New Approaches to Gathering Data on Student Learning for Research in Statistics Education," Statistics Education Research Journal, v1, n2, p38. Cobb, G.W. (1992), "Teaching Statistics," Heeding the Call for Change, L.A. Steen, ed., Mathematical Association of America Notes Series, p3-43. Cobb, G.W. (1993), "Reconsidering Statistics Education: A National Science Foundation Conference," Journal of Statistics Education, v1, n1. Retrieved October 29, 2003 from http://www.amstat.org/ Cobb, G. and Parker, M. (1998), "Statistical Thinking and Teaching Statistics #9255447." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/search/ Craddock, J.N. (1998), "Experiences with Teaching Basic Statistics in an Introduction to Civil Engineering," International Conference on Engineering Education, Rio de Janeiro, Brazil, August 17-20, CyberGnostics, Inc. (2005), "CyberStats Online Statistics." Retrieved May 25, 2005 from http://statistics.cyberk.com/splash/index.cfm. 
Eiseman, J.W., Fairweather, J.S., Rosenblum, S., Britton, E. (1998), "Evaluation of the Division of Undergraduate Education’s Course and Curriculum Development Program." Retrieved September 2, 2005 from http://www.nsf.gov/pubs/1998/nsf9839/nsf9839.htm. Frey, W.H. (2001), "Collaborative Project on Integrating Census Data Analysis into the Curriculum Abstract #0088715." Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/awardsearch/. Garfield, J.B. (2000), "Evaluating the Impact of Educational Reform in Statistics: A Survey of Introductory Statistics Courses." Final Report for NSF REC Grant #9732404. Retrieved June 2, 2005 from Garfield, J.B. and delMas, R.C. (2000), "Tools for Teaching and Assessing Statistical Inference #9752523." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/ Garfield, J.B., Hogg, B., Schau, C., Whittinghill, D. (2002), "First Courses in Statistical Science: The Status of Educational Reform Efforts," Journal of Statistics Education, v10, n2. Retrieved March 2, 2005 from http://amstat.org/publications/jse/v10n2/garfield.html. Garfield, J.B. and Chance, B. (2004), "The Web-based ARTIST Project #0206571." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/search/RetrieveRecord.asp? Gould, R.L., Esfandiari, M. (2003), "A Statistics Undergraduate Computing Laboratory #9981172" DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/search/ Green, L.B., McDaniel, S.N., and Rowell, G.H. (2005), "Online Statistics Resources Across Disciplines," Journal of Online Learning and Teaching, pending publication. Guidelines for Assessment and Instruction in Statistics Education College Report (2005) American Statistical Association. Retrieved March 2, 2005 from Hakeem, S. (2001), "Effect of Experiential Learning in Business Statistics," Journal of Education for Business, v77, n2, p95(4). Klingner, B. and Herrmann, N. 
(2003), "Enhancing the Mathematical Foundation of Students through Online Course Modules Abstract #0311016." Retrieved March 2, 2005 from NSF Online Database http:// Lane, D.M., Austin, J.D., Scott, D.W., Baggerly, K.A., Quinones, M.A. (2000), "Rice Virtual Laboratory in Statistics #9751307." DUE PIRS Search Engine. Retrieved March 2, 2005 from https:// Lane, D.M., Scott, D.W., Guerra, R., Hebl, M.R., Osherson, D. (2004), "Online Statistics Education: An Interactive Multimedia Course of Study #0089435." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/search/RetrieveRecord.asp?Awd_Id=0089435. Mathematical Association of America. (2004), "Undergraduate Programs and Courses in the Mathematical Sciences: CUPM Curriculum Guide." Retrieved March 2, 2005 from www.maa.org/cupm/. Marder, C., McCullough, J., and Perakis, S. (2001), "Evaluation the National Science Foundation’s Undergraduate Faculty Enhancement Program." Retrieved March 21, 2005 from http://www.nsf.gov/pubs/ McConnell, D.A., Steer, D.N., Owens, K.D. (2003), "Assessment and Active Learning Strategies for Introductory Geology Classes," Journal of Geoscience Education, v51, n2, p206-216. McCormick, B., Mackinnon, C., Jones, R.L. (1999), "Evaluation of Attitude, Achievement, and Classroom Environment in a Learner-Centered Introductory Biology Class," National Aeronautics and Space Administration, Washington D.C. Moore, D.S., Cobb, G.W., Garfield, J.B., Meeker, W.Q. (1995), "Statistics Education Fin de Siecle," The American Statistician, v49, n3, p250(11). NSF. (1996), "Shaping the Future: New Expectations for Undergraduate Education in Science, Mathematics, Engineering, and Technology," Report nsf96139, National Science Foundation, Directorate for Education and Human Resources. Retrieved May 12, 2005 from http://www.nsf.gov/pubs/stis1996/nsf96139/nsf96139.txt. NSF. 
(1998), "Undergraduate Education Science, Mathematics, Engineering, Technology Program Announcement and Guidelines," nsf9845, NSF EHR DUE. Retrieved May 12, 2005 from http://www.nsf.gov/pubs/ NSF. (1999), "Collaboratives for Excellence in Teacher Preparation (CETP) Program Announcement and Guidelines," NSF EHR DUE. Retrieved May 12, 2005 from http://www.nsf.gov/pubs/1999/nsf9953/ NSF. (2003a), "Course, Curriculum, and Laboratory Improvement (CCLI) Educational Materials Development (EMD) and National Dissemination (ND) Tracks Program Solicitation," nsf03558, NSF EHR DUE. Retrieved May 12, 2005 from http://www.nsf.gov/pubs/2003/nsf03558/nsf03558.htm. NSF. (2003b), "Course, Curriculum, and Laboratory Improvement (CCLI) Adaptation & Implementation (A & I) Track Program Solicitation," nsf03598, NSF EHR DUE. Retrieved May 12, 2005 from http:// NSF. (2003c), "Course, Curriculum, and Laboratory Improvement (CCLI) Assessment of Student Achievement (ASA) Track Program Solicitation," nsf03584, NSF EHR DUE. Retrieved May 12, 2005 from http:// NSF. (2005a), "Course, Curriculum, and Laboratory Improvement (CCLI) Program Solicitation," nsf05559, NSF EHR DUE. Retrieved May 12, 2005 from http://www.nsf.gov/pubs/2005/nsf05559/nsf05559.htm. NSF. (2005b), "National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) Program Solicitation," NSF EHR DUE. Retrieved March 21, 2005 from http://www.nsf.gov/pubs/ Notz, W.I., Pearl, D.K., Stasny, E.A. (1996), "Interactive Video Resources for Learning Statistics Abstract #9555073." Retrieved March 2, 2005 from NSF Online Database Peck, R. and Daly, J.C. (1997), "Studio Environment for Introductory Statistics Abstract #9750663." Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/awardsearch/. Prival, J.T. (2005), Personal Communication. February 8, 2005. Prival, J.T. (2008a), Personal Communication. June 13, 2008. Prival, J.T. (2008b), Personal Communication. June 13, 2008. Rhoads, T.R. 
and Murphy, T.J. (2005), "Statistical Concepts Inventory (SCI): A Cognitive Achievement Tool in Engineering Statistics #0206977." DUE PIRS Search Engine. Retrieved March 2, 2005 from Rolker-Dolinsk, B. and Qualters, D. (1994), "They Can’t Learn When They Don’t Know How: Teaching Statistics in a Learning to Learn Model," Teaching of Psychology: Ideas and Innovation, S. Hartog and J. Levine, eds. Proceedings of the 8^th Annual Conference on Undergraduate Teaching of Psychology, March 18, 1994, Farmingdale, NY, SUNY. Rossman, A. and Short, T. (1999), "STATS: Statistical Thinking with Active Teaching Strategies #9554621." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/ Rumsey, D. (2002), "Statistical Literacy as a Goal for Introductory Statistics Courses," Journal of Statistics Education, v.10, n.3. Retrieved June 10, 2005 from www.amstat.org/publications/jse/v10n3 Scheaffer, R.L. (1995), "A Statistical Laboratory for Active Learning Abstract #9550891" Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/awardsearch/. Scheaffer, R.L. (1996), "Laboratory Lessons for Discovery-Based Statistics Abstract #9650581." Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/awardsearch/. Schenone-Stevens, M.C. (1999), "Reflections from the Classroom on the Effects of Computer-Assisted Instruction on the Teaching-Learning Process," Annual Meeting of the National Communication Association (85^th, Chicago, Illinois, November 4-7, 1999). Silgailis, K. and Bhat, V. (2005), "Promoting Undergraduate Research through Development of Two Interdisciplinary Research Methods/Statistics Courses and Increased Support of Student Research # 0126435." DUE PIRS Search Engine. Retrieved July 20, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/search/RetrieveRecord.asp?Awd_Id=0126435. Snee, R.D. (1993), "What's Missing in Statistical Education?" The American Statistician, 47, 149-154. Snell, J.L. 
(1994), "Take a Chance on CHANCE," Trends. Retrieved May 24, 2005 from http://www.dartmouth.edu/~chance/course/Articles/ume.html. Snell, J.L. (1997), "Chance Workshop Abstract #9653416." Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/awardsearch/. Sullivan, M.M. (1993), "Students Learn Statistics When They Assume a Statistician’s Role," Annual Conference of the American Mathematical Association of Two-Year Colleges (19^th, Boston, Massachusetts, November, 1993). Tankersley, R. and Morris, J. (2005), "Development of an Inquiry-based Curriculum in Ecology #0088369." DUE PIRS Search Engine. Retrieved March 2, 2005 from https://www.ehr.nsf.gov/pirs_prs_web/ Udovic, D., Morris, D., Dickman, A., Postlethwait, J., Wetherwax, P. (2002), "Workshop Biology: Demonstrating the Effectiveness of Active Learning in an Introductory Biology Course," BioScience, v52, n3, p. 272(10). Van Ryswyk, H. (2004), "Service Learning in Chemistry: Lead in Soil from Vehicle Emissions Abstract #0410115." Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/awardsearch/. Watrous, J.J., Lurie, D., and Ratterman, D.M. (2003), "IBASE: Integrating Biology and Statistics Education Abstract #0309751." Retrieved March 2, 2005 from NSF Online Database http://www.nsf.gov/ West, R.W., Wu, Y., Heydt, D. (2004), "An Introduction to StatCrunch 3.0," Journal of Statistical Software, v9, n5. Retrieved March 30, 2006 from http://www.jstatsoft.org/v09/i05/scjss/. Zeichner, J.T., Litcher, J. (1998), "Passive versus Active Learning: A Qualitative Study," L. McCoy, ed., Wake Forest University Department of Education Annual Research Forum, 1997. Megan R. Hall Middle Tennessee State University Murfreesboro, TN 37132 Ginger Holmes Rowell, Ph.D. Middle Tennessee State University Murfreesboro, TN 37132 Volume 16 (2008) | Archive | Index | Data Archive | Resources | Editorial Board | Guidelines for Authors | Guidelines for Data Contributors | Home Page | Contact JSE | ASA Publications
{"url":"http://www.amstat.org/publications/jse/v16n2/rowell1.html","timestamp":"2014-04-19T17:07:42Z","content_type":null,"content_length":"559984","record_id":"<urn:uuid:4d28d5a2-2b91-45db-be4e-d8347cccb084>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
The Origins of Diverse Domains of Mathematics: Generalist Genes but Specialist Environments

The authors assessed 2,502 ten-year-old children, members of 1,251 pairs of twins, on a Web-based battery of problems from 5 diverse aspects of mathematics assessed as part of the U.K. national curriculum. This 1st genetic study into the etiology of variation in different domains of mathematics showed that the heritability estimates were moderate and highly similar across domains and that these genetic influences were mostly general. Environmental factors unique to each twin in a family (rather than shared by the 2 twins) explained most of the remaining variance, and these factors were mostly specific to each domain.

Keywords: twin method, quantitative genetics, covariation, individual differences
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC2743325/?lang=en-ca","timestamp":"2014-04-21T11:36:05Z","content_type":null,"content_length":"141149","record_id":"<urn:uuid:59899902-9a48-44e9-87e4-246066ac60b3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Maxwell's Equations

Next: The Wave Equation Up: The Free Space Wave Previous: The Free Space Wave Contents

We begin with Maxwell's Equations (ME) in SI units:

    \vec{\nabla} \cdot \vec{D} = \rho
    \vec{\nabla} \times \vec{H} - \frac{\partial \vec{D}}{\partial t} = \vec{J}
    \vec{\nabla} \times \vec{E} + \frac{\partial \vec{B}}{\partial t} = 0
    \vec{\nabla} \cdot \vec{B} = 0

where \rho is the charge density and \vec{J} the current density. For the moment, let us express the inhomogeneous MEs in terms of just the permittivity \epsilon and permeability \mu:

    \vec{\nabla} \cdot \vec{E} = \frac{\rho}{\epsilon}
    \vec{\nabla} \times \vec{B} - \mu\epsilon\,\frac{\partial \vec{E}}{\partial t} = \mu \vec{J}

It is difficult to convey to you how important these four equations are going to be to us over the course of the semester. Over the next few months, then, we will make Maxwell's Equations dance, we will make them sing, we will ``mutilate'' them (turn them into distinct coupled equations for transverse and longitudinal field components, for example), we will couple them, we will transform them into a manifestly covariant form, we will solve them microscopically for a point-like charge in general motion. We will (hopefully) learn them.

For the next two chapters we will primarily be interested in the properties of the field in regions of space without charge (sources). Initially, we'll focus on a vacuum, where there is no dispersion at all; later we'll look a bit at dielectric media and dispersion. In a source-free region, Maxwell's Equations become:

    \vec{\nabla} \cdot \vec{E} = 0
    \vec{\nabla} \times \vec{B} - \mu\epsilon\,\frac{\partial \vec{E}}{\partial t} = 0
    \vec{\nabla} \times \vec{E} + \frac{\partial \vec{B}}{\partial t} = 0
    \vec{\nabla} \cdot \vec{B} = 0

where for the moment we ignore any possibility of dispersion (frequency dependence in \epsilon and \mu).

Robert G. Brown 2007-12-28
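As a preview of the wave-equation section the page links to, the standard source-free derivation can be sketched as follows (my reconstruction; the page itself defers this to its next section):

```latex
% Take the curl of Faraday's law, then substitute the
% source-free Ampere-Maxwell law for curl B:
\nabla \times (\nabla \times \vec{E})
  = -\frac{\partial}{\partial t}\left(\nabla \times \vec{B}\right)
  = -\mu\epsilon\,\frac{\partial^{2} \vec{E}}{\partial t^{2}}

% Apply curl curl = grad div - laplacian, with div E = 0:
\nabla^{2}\vec{E} - \mu\epsilon\,\frac{\partial^{2}\vec{E}}{\partial t^{2}} = 0
```

The same steps applied to \vec{B} give an identical wave equation, with wave speed 1/\sqrt{\mu\epsilon}.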
{"url":"http://www.phy.duke.edu/~rgb/Class/phy319/phy319/node34.html","timestamp":"2014-04-16T13:48:23Z","content_type":null,"content_length":"13075","record_id":"<urn:uuid:5f639408-f404-481d-845f-0a103ff966e8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
So I've got a simple C program that runs in 40 milliseconds on my x86 (1.6GHz Intel Atom). 40 milliseconds is not fast enough for me; I want it to happen in under 10 milliseconds. How do I optimize my C code? What is the sequence of steps that a programmer takes when optimizing code? How do I profile my program and find out what parts I need to refactor, use a better algorithm for, etc.?

so far all i've done is use gcc -O2. what else can i do to optimize my program to run in my calculator?

did you try with this? -Ofast "Disregard strict standards compliance. -Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard compliant programs. It turns on -ffast-math and the Fortran-specific -fno-protect-parens and -fstack-arrays."

Will you please provide the source code, so that I can have a look at it? You should not depend blindly on optimizations provided by compilers. Before optimizing, know when to optimize, what to optimize, and how to optimize. To answer the first problem: if the performance improvement is significant without too much headache, go for it! To answer the second problem: use a profiler. It will show you where most of the processing time is consumed in your program. Optimize that part first. For the third one, you may use a better algorithm, or employ some *tricky* fast solutions. It depends upon the case.

I just tried -Ofast with no luck :( still giving me about 40 milliseconds.
here is the source:

#include "prog.h"

int main(int argc, char *argv[])
{
    switch (argc) {
    case 1:
        solve_from_stdin();
        break;
    default:
        return -1;
        break;
    }
    return 0;
}

Please provide prog.h as well! In case you are working on some secret project, you may use profiler(s) or ask other project members.

which way are you measuring the time ?

bash's builtin time command

did you try removing everything but the pure main to see the minimum execution time ?

this is what prog.h looks like

#ifndef _PROG_H_
#define _PROG_H_

unsigned int a(int *, int *, const unsigned int, const unsigned int);
unsigned int b(int *, int *, const unsigned int, const unsigned int, const unsigned int);
unsigned int c(int *, const unsigned int);
void solve_from_stdin(void);

#endif /* ifndef _PROG_H_ */

what profiler should i use?

AMD APP Profiler is a free C/C++ profiler

Intel Parallel Studio also contains a profiler.

In the header file you just have declarations, not definitions, so nobody can understand what the code really does; but could you try to execute a main() without the function call and report the measured execution time ?

#include "prog.h"

int main(int argc, char *argv[])
{
    switch (argc) {
    case 1:
        // solve_from_stdin();
        break;
    default:
        return -1;
        break;
    }
    return 0;
}

but those kits are both exclusive to Windows/Visual Studio :( without the solve_from_stdin(), time outputs 0.000 :-D so that one routine is taking 99.9% of cpu time :D
Intel Parallel Studio is available for Linux as well.

that's awesome I'll check my package manager then

alright I have intel parallel studio xe in my package manager

ok, now reactivate the call to the function but eliminate any action in the function body, then go on reactivating parts of the code in the function body until you can understand which part is taking more execution time

old times profiling style :-)

okay I deactivated procedure a() and I'm also getting 0.000 from bash

but procedure a() calls procedure b()

then reactivate procedure a() but not procedure b()

that decreased my time from 40 to 22 ms

does procedure a() do anything else apart from calling procedure b() ?

procedure a also calls itself before calling procedure b

it sounds strange, it should loop forever, unless there is some kind of counter to avoid it

yeah at the start it tests for the value of (const int $1 + const int $2) / 2

is the function solve_from_stdin() recursive ?
here's procedure b()

void merge_int(int *left, unsigned int len_left,
               int *right, unsigned int len_right,
               int *end)
{
    unsigned int i, j, k;
    for (i = j = k = 0; i < len_left && j < len_right; ++k) {
        if (left[i] < right[j]) {
            end[k] = left[i];
            ++i;
        } else {
            end[k] = right[j];
            ++j;
        }
    }
    for (; i < len_left; ++i, ++k) {
        end[k] = left[i];
    }
    for (; j < len_right; ++j, ++k) {
        end[k] = right[j];
    }
}

is the b() procedure called just once in the a() procedure ?

right just once

but since a() calls itself recursively, it ends up calling b quite a lot of times :)

if so you could try to directly write the b() procedure's content into the a() procedure body, so that you save one function call's time (context saving time onto the stack)

I got a seg fault :(

did you use correct types for the left, right, end variables ?

alright I'm going to refurbish my code and use a different data structure and see how it goes...
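For reference, the merge routine under discussion can be sketched in Python; this is a hypothetical port for readability, not the poster's code:

```python
def merge_int(left, right):
    """Merge two already-sorted lists into one sorted list,
    mirroring the C merge routine discussed in the thread."""
    end = []
    i = j = 0
    # Take the smaller head element until one input is exhausted.
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            end.append(left[i])
            i += 1
        else:
            end.append(right[j])
            j += 1
    # Copy whichever side still has elements remaining.
    end.extend(left[i:])
    end.extend(right[j:])
    return end
```

For example, `merge_int([1, 3, 5], [2, 4, 6])` yields `[1, 2, 3, 4, 5, 6]`.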
{"url":"http://openstudy.com/updates/4fdf6265e4b0f2662fd5b7af","timestamp":"2014-04-17T12:54:16Z","content_type":null,"content_length":"112053","record_id":"<urn:uuid:20e4a7d6-cf5f-4c59-ae99-ac7ab060c51f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Guarded Choice with MonadPlus

In the previous article, I introduced the MonadPlus class and three examples of monads that allow for non-determinism in programming (among them Maybe and the list data type, both of which are MonadPlus types, and a third that can be coerced into a MonadPlus type). These types were introduced, but besides showing (unexplained) examples and a minimal explanation of the lookup example, there is not much there to show how to program in a declarative nondeterministic manner. Let's rectify that.

First, we'll show how to program nondeterministically and narrow the options down with guard. We will be using the standard nondeterministic "Hello, world!" problem, that is: solving the cryptarithmetic problem

SEND + MORE = MONEY

by iteratively improving the efficiency of the solution.

First up, list comprehension is a powerfully expressive programming technique that so naturally embodies the nondeterministic programming style that users often don't know they are programming nondeterministically. A list comprehension is of the form:

[ x | qualifiers on x ]

where x represents each element of the generated list, and the qualifiers either generate or constrain values for x. Given the above definition, writing the solution for our cryptarithmetic problem becomes almost as simple as writing the problem itself:

[(s,e,n,d,m,o,r,e,m,o,n,e,y) | s ← digit, e ← digit, n ← digit, d ← digit,
                               m ← digit, o ← digit, r ← digit, y ← digit,
                               s * 1000 + e * 100 + n * 10 + d
                                 + m * 1000 + o * 100 + r * 10 + e
                                 ≡ m * 10000 + o * 1000 + n * 100 + e * 10 + y]
  where digit = [0..9]

Easy, but when run, we see that it's not really what we needed, for the first answer is (0,0,0,0,0,0,0,0,0,0,0,0,0) ... and 1153 others follow. No, we wish to have SEND + MORE = MONEY such that S and M aren't zero and that all the letters represent different digits, not, as was the case in the first solution, all the same digit (0).
Well, whereas we humans can take some obvious constraints by implication, software must be explicit, so we need to code that S and M are positive (meaning, "greater than zero") and that all the letters are different from each other. Doing that, we arrive at the more complicated, but correct, solution:

[(s,e,n,d,m,o,r,e,m,o,n,e,y) | s ← digit, s > 0, e ← digit, n ← digit, d ← digit,
                               m ← digit, m > 0, o ← digit, r ← digit, y ← digit,
                               different [s,e,n,d,m,o,r,y],
                               num [s,e,n,d] + num [m,o,r,e] ≡ num [m,o,n,e,y]]
  where digit = [0..9]
        num = foldl ((+).(*10)) 0
        different (h:t) = diff' h t
        diff' x [] = True
        diff' x lst@(h:t) = all (/= x) lst && diff' h t

A bit of explanation -- the function num folds the list of digits into a number. Put another way:

num [s,e,n,d] ≡ ((s * 10 + e) * 10 + n) * 10 + d

And the function different, via the helper function diff', ensures that every element of the argument list is (not surprisingly) different -- a translation of diff' is:

diff' x [] = True
  "A list is 'different' if there is only one number"
diff' x lst@(h:t) = all (≠ x) lst && diff' h t
  "A list is 'different' if one of the numbers is different from every other number in the list and if this is true for all the numbers in the list"

And after a prolonged period [434 seconds], it delivers the answer: (9,5,6,7,1,0,8,5,1,0,6,5,2).

Okay! We now have the solution, so we're done, right? Well, yes, if one has all that time to wait for a solution and is willing to do that waiting. However, I'm of a more impatient nature: the program can be faster; the program must be faster. There are a few ways to go about doing this, and they involve providing hints (sometimes answers) to help the program make better choices. We've already done a bit of this with the constraints for both S and M to be positive and adding the requirement that all the letters be different digits. So, presumably, the more hints the computer has, the better and faster it will be in solving this problem.
Knowing the problem better often helps in arriving at a better solution, so let's study the problem again:

    S E N D
  + M O R E
  ---------
  M O N E Y

The first (highlighted) thing that strikes me is that in MONEY, the M is free-standing -- its value is the carry from the addition of the S and the M. Well, what is the greatest value for that carry? If we maximize everything, assigning 9 and 8 to the summed digits, we find the carry can at most be 1, even if there's carry over (again, of at most 1) from adding the other digits. That means M, since it is not 0, must be 1.

What about O, can we narrow its value? Yes, of course. Since M is fixed to 1, S must be of a value that carries 1 over to M. That means S is either 9 if there's no carry from the addition of the other digits, or 8 if there is. And what does that leave for O? Simple: O cannot be 1 (as M has taken that value for itself), so it turns out that there's only one value for O to be: 0!

We've fixed two values and limited one letter (S) to one of two values, 8 or 9. Let's provide those constraints ("hints") to the system. But before we do that, our list comprehension is growing larger with these additional constraints, so let's unwind it into an alternate representation that allows us to view the smaller pieces individually instead of having to swallow the whole pie of the problem in one bite.

This alternative representation uses the do-notation, with constraints defined by guard, which is of the following form:

guard :: MonadPlus m ⇒ Bool → m ()

What does that do for us? MonadPlus monads have a base value (mzero) representing failure, and other values representing success, so guard translates the input Boolean constraint into either mzero (failure) or a success value. Since the entire monadic computation is chained by (>>=), a failure of one test voids that entire branch (because the failure propagates through the entire branch of computation).
So, now that we are armed with guard, we rewrite the solution with the added constraints in the new do-notation:

do let m = 1
       o = 0
   s ← digit
   guard $ s > 7
   e ← digit
   n ← digit
   d ← digit
   r ← digit
   y ← digit
   guard $ different [s,e,n,d,m,o,r,y]
   guard $ num [s,e,n,d] + num [m,o,r,e] ≡ num [m,o,n,e,y]
   return (s,e,n,d,m,o,r,e,m,o,n,e,y)
 where digit = [2..9]

Besides the obvious structural difference from the initial simple solution, we've introduced some other new things --

• When fixing a value, we use the let-construct.
• As we've grounded M and O to 1 and 0 respectively, we've eliminated those options from the digit list.
• Since the do-notation works with monads in general (it's not restricted to lists only), we need to make our result explicit. We do that with the return function at the end of the block.

What do these changes buy us?

[(9,5,6,7,1,0,8,5,1,0,6,5,2)] returned in 0.4 seconds

One thing one learns quickly when doing logic (nondeterministic) programming is that the sooner a choice is settled correctly, the better. By fixing the values of M and O we entirely eliminate two lines of inquiry, but we also eliminate two options from all the other following choices, and by refining the choice of S we eliminate all but two options when generating its value. In nondeterministic programming, elimination is good!

So, we're done, right? Yes, as far as enhancing performance goes: once we're in sub-second territory, further optimization becomes unnecessary. So, in that regard, we are done. But there is some unnecessary redundancy in the above code from a logical perspective -- once we generate a value, we know that we are not going to be generating it again. We know this, but the choice operator (←), being the amb operator, doesn't: it regenerates that value, then corrects the discrepancy only later in the computation, when it encounters the different guard. We need the computation to work a bit more like we do; it needs to remember what it already chose and not choose that value again.
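For comparison, the pruned search of the do-block above (M and O fixed, S restricted to 8 or 9, all letters distinct) can be sketched outside Haskell; here is a hypothetical Python rendering, not code from the article, where permutations plays the role of distinct choice:

```python
from itertools import permutations

def send_more_money():
    """Solve SEND + MORE = MONEY with the same pruning as the
    guarded do-block: M = 1 and O = 0 are fixed, S must exceed 7,
    and permutations guarantee the remaining letters all differ."""
    m, o = 1, 0
    solutions = []
    for s, e, n, d, r, y in permutations(range(2, 10), 6):
        if s <= 7:            # S can only be 8 or 9
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            solutions.append((s, e, n, d, m, o, r, y))
    return solutions
```

The search space is only 8P6 = 20,160 tuples, so it finishes essentially instantly, returning the single assignment `(9, 5, 6, 7, 1, 0, 8, 2)`.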
We've already used memoization to good effect elsewhere, so let's incorporate that into our generator here. What we need is for our choice operator to select from the pool of digits, but when it does so, it removes that selected value from the pool. In a logic programming language, such as Prolog, this is accomplished easily enough, as nondeterminism and memoization (via difference lists) are part of the language semantics. A clear way of dissecting this particular problem was presented to me by Dirk Thierbach in a forum post, so I present his approach in full:

• I need both state and nondeterminism, so I have to combine the state monad and the list monad. This means I need a monad transformer and a monad (you need to have seen this before, but if you have once, it's easy to remember).
• The state itself also has to be a list (of candidates).
• So the final monad has type StateT [a] [] b.
• I need some function to nondeterministically pick a candidate. This function should also update the state.
• Played around a short time with available functions, didn't get anywhere.
• Decided I need to go to the "bare metal".
• Expanded StateT [a] [] a into [a] → [(a,[a])], then it was obvious what choose should do.
• Decided the required functionality "split a list into one element and rest, in all possible ways" was general enough to deserve its own function.
• Wrote it down, in the first attempt without accumulator.
• Wrote it down again, this time using an accumulator.

With this approach presented, writing the implementation simply follows the type declaration:

splits :: Eq a ⇒ [a] → [(a, [a])]
splits list = list >>= λx . return (x, delete x list)

(Although, please do note, this implementation differs significantly from Dirk's, they both accomplish the same result.) Now we lift this computation into the StateT monad transformer (transformers are a topic covered much better elsewhere):

choose :: StateT [a] [] a
choose = StateT $ λs . splits s
and then replace the (forgetful) digit generator with the (memoizing) choose (which then eliminates the need for the different guard) to obtain the same result with a slight savings of time [the result returned in 0.04 seconds]. By adding these two new functions and lifting the nondeterminism into the StateT monad transformer, we not only saved an imperceptible few sub-seconds (my view is that optimizing performance on sub-second computations is silly), but, importantly, we eliminated more unnecessary branches at the nondeterministic choice-points.

In summary, this entry has demonstrated how to program with choice using the MonadPlus class. We started with a simple example that demonstrated (naïve) nondeterminism, then improved on that example by pruning branches and options with the guard helper function. Finally, we incorporated the memoization technique that we exploited to good effect in other computational efforts to prune away redundant selections. The end result was a program demonstrating that declarative nondeterministic programming not only fits in the (monadic) idiom of functional programming but also provides solutions efficiently and within acceptable performance measures.

1 comment:

Andy said...

This post is very informative, though I will need some time to digest it. I am currently using nondeterminism in instruction selection for a compiler - it relates to branch instruction size being dependent on the offset jumped. This will involve some dependency analysis (via Data.Graph I imagine) - but currently just tries all options - which makes it scale pretty horribly when multiple branch instructions exist.
{"url":"http://logicaltypes.blogspot.cz/2008/05/guarded-choice-with-monadplus.html","timestamp":"2014-04-16T07:13:04Z","content_type":null,"content_length":"99419","record_id":"<urn:uuid:910edc71-3f06-46ab-8682-1d1e735951c6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry Reference Values Quiz: sin, cos of 0°, 15°, 30°, 45°, 60°, 75°, 90° This page is for when you are convinced you need some rote memorization to learn these reference values. It randomly generates a 14-question multiple choice quiz over the basic reference values for sin() and cos(), and then administers the quiz. How to take a quiz: 1. Click on the "Start Quiz" button. 2. For each reference value requested, click on the radio button to the left of the answer. At the end of the quiz, the page will pop up a scoring report. I have decided on a total of fourteen points: • Seven points: cofunction identities. One point for giving the same answer for each side of a cofunction identity. • Seven points: the actual answer. One point for giving the correct answer at least once for each cofunction identity. Opinions, comments, criticism, etc.? Let me know about it. Return to the Trigonometry page, or the main page.
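The cofunction pairing the quiz scores against can be checked numerically; here is a quick sketch (my own illustration, not the page's quiz code):

```python
import math

# The quiz's reference angles, in degrees.
ANGLES = [0, 15, 30, 45, 60, 75, 90]

def cofunction_holds(deg, tol=1e-12):
    """Verify the cofunction identity sin(x) = cos(90 deg - x)."""
    x = math.radians(deg)
    co = math.radians(90 - deg)
    return math.isclose(math.sin(x), math.cos(co), abs_tol=tol)

# Every reference angle satisfies the identity, which is why the
# quiz can award a point for matching answers on each pair.
assert all(cofunction_holds(a) for a in ANGLES)
```

This mirrors the scoring rule above: one point for giving the same answer on each side of a cofunction pair, and one for that shared answer being correct.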
{"url":"http://www.zaimoni.com/TrigRefQuiz2.htm","timestamp":"2014-04-20T15:51:09Z","content_type":null,"content_length":"2529","record_id":"<urn:uuid:47525226-059e-45c1-a70d-3e745217624e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Gratuity Valuation – A Simple Example

Liability Calculations – Defined Benefit Plan

Let us assume Emily is an employee of LifeCorp Inc., which has a gratuity plan that pays a lump sum benefit upon the normal retirement age (r) of 60 years. The lump sum benefit is defined as follows:

Retirement Benefit = Final monthly salary per year of service

The salary is assumed to grow at a rate of 8% per annum. For simplicity, let us also assume that there are no death benefits for death in service and that there are no preretirement terminations other than death. Emily's details are as follows:

Date of Birth = 30-June-1980
Date of Employment = 30-January-2002
Current Monthly Salary = 5,000

An actuary has been employed by LifeCorp Inc. to determine the actuarial liability as at 31-Dec-2010 and what the suggested funding level for the next year will be for Emily. This suggested funding amount is the Normal Cost. The contribution made by the company can be more or less than the normal cost determined; the amount of normal cost calculated is based on the actuarial funding cost method chosen. According to IAS 19, the International Accounting Standard dealing with Employee Benefits, the actuarial funding cost or valuation method to be used is the Projected Unit Credit (PUC) Method.

Under the PUC methodology, the current salary is projected to the retirement date using a salary growth scale. If the unit benefit is the same for each year of service, as it is for Emily, then under this method the projected retirement benefit is distributed evenly over the years of service of the employee. In order to determine the present value of the defined retirement benefit obligations, i.e. the actuarial liability, the benefit has to be attributed to the current and prior years of service on a prorated basis. The actuarial liabilities are therefore related to the normal cost, and so are also based on the actuarial cost method used.
The first stage in the process is to determine what Emily's final salary will be:

Age at entry, e = 22
Age nearest birthday on 31-Dec-2010, x = 31
Projected Final Salary = Current Salary × (1 + 8%)^(60 − 31) = 5000 × (1.08)^29 = 46,586

The next stage is to determine the projected retirement benefit/gratuity amount, which equals the Projected Final Salary × number of years of service:

Total number of years of service = Normal Retirement Age − Age at entry = 60 − 22 = 38 years
Projected Gratuity Amount = 46,586 × 38 = 1,770,282 (carrying the unrounded projected salary)

The proportion of the projected benefit accrued up to age 31, B[31], is:

No. of years from date of entry to valuation date = 31 − 22 = 9 years
Proportion of projected gratuity benefit accrued up to age 31 = Projected Final Salary × No. of years since date of entry = 46,586 × 9 = 419,277 (again carrying the unrounded salary)

The gratuity benefit that will accrue in the following year under the PUC method assumes that the retirement benefit is distributed evenly over the years of service, as mentioned earlier. This unit benefit, b[31], will therefore be:

Unit Benefit = Projected Gratuity Amount / No. of years of service = 1,770,282 / 38 = 46,586

All the benefit amounts calculated above (the proportion of the projected benefit and the unit benefit) are applicable as at the date of Emily's retirement from service. In order to determine their present values (i.e. the Actuarial Liability and the Normal Cost, respectively), the benefit amounts need to be discounted for interest (the time value of money) and mortality. Note that we had earlier assumed that there are no pre-retirement terminations other than death. However, in an actual actuarial valuation exercise the actuary may also consider other decrements such as terminations, early retirements, disability, etc.

In order to discount the benefit amounts to the date of valuation we will calculate the following discount factor:

By using first principles: Discount Factor = v^(r−x) × (r−x)p_x^(τ), where v = 1/(1 + i) and (r−x)p_x^(τ) is the probability of surviving in service from age x to age r

By using commutation functions: Discount Factor = D_r^(τ) / D_x^(τ)

The superscript, (τ), indicates that the function has considered all decrements, which for our example is only termination by death.
Let us assume that the discount rate is 13%. [The table of commutation values at this rate, and the discount factor computed from them, appeared here in the original.]

Comments:
1. An ample example… Thanx.
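The projection arithmetic above can be reproduced in a short script. This is an illustrative sketch with the example's assumptions hard-coded; the function and variable names are mine, and the discounting step (which needs the commutation values) is deliberately left out:

```python
def gratuity_projection(monthly_salary=5000.0, age_entry=22,
                        age_now=31, retirement_age=60,
                        salary_growth=0.08):
    """Project Emily's gratuity figures under the PUC method.

    Returns (final_salary, total_benefit, accrued_benefit, unit_benefit),
    all expressed as at the retirement date (undiscounted).
    """
    service_total = retirement_age - age_entry    # 38 years
    service_past = age_now - age_entry            # 9 years
    years_to_retire = retirement_age - age_now    # 29 years

    # Final monthly salary projected with 8% annual growth.
    final_salary = monthly_salary * (1 + salary_growth) ** years_to_retire
    # One month's final salary per year of total service.
    total_benefit = final_salary * service_total
    # PUC attributes the benefit evenly over service to date.
    accrued_benefit = final_salary * service_past
    # Benefit expected to accrue in the coming year.
    unit_benefit = total_benefit / service_total
    return final_salary, total_benefit, accrued_benefit, unit_benefit
```

Rounding the returned values reproduces the 46,586 / 1,770,282 / 419,277 figures quoted in the example.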
{"url":"http://financetrainingcourse.com/education/2011/01/gratuity-valuation-a-simple-example/","timestamp":"2014-04-18T05:41:38Z","content_type":null,"content_length":"51018","record_id":"<urn:uuid:7572fbe0-e365-4359-a2a1-1c778709ba51>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Coercive subtyping Results 11 - 20 of 43 - In Logical aspects of computational linguistics (LACL’2011). LNAI 6736 , 2011 "... Abstract. Word meanings are context sensitive and may change in different situations. In this paper, we consider how contexts and the associated contextual meanings of words may be represented in typetheoretical semantics, the formal semantics based on modern type theories.Itisshown,inparticular,tha ..." Cited by 10 (4 self) Add to MetaCart Abstract. Word meanings are context sensitive and may change in different situations. In this paper, we consider how contexts and the associated contextual meanings of words may be represented in typetheoretical semantics, the formal semantics based on modern type theories.Itisshown,inparticular,thatthe framework of coercive subtyping provides various useful tools in the representation. 1 , 2000 "... A lambda-free logical framework takes parameterisation and definitions as the basic notions to provide schematic mechanisms for specification of type theories and their use in practice. The framework presented here, PAL + , is a logical framework for specification and implementation of type theor ..." Cited by 9 (1 self) Add to MetaCart A lambda-free logical framework takes parameterisation and definitions as the basic notions to provide schematic mechanisms for specification of type theories and their use in practice. The framework presented here, PAL + , is a logical framework for specification and implementation of type theories, such as Martin-Lof's type theory or UTT. As in Martin-Lof's logical framework [NPS90], computational rules can be introduced and are used to give meanings to the declared constants. However, PAL + only allows one to talk about the concepts that are intuitively in the object type theories: types and their objects, and families of types and families of objects of types. 
In particular, in PAL+, one cannot directly represent families of families of entities, which could be done in other logical frameworks by means of lambda abstraction. PAL+ is in the spirit of de Bruijn's PAL for Automath [dB80]. Compared with PAL, PAL+ allows one to represent parametric concepts such as famil... "... A number of important program rewriting scenarios can be recast as type-directed coercion insertion. These range from more theoretical applications such as coercive subtyping and supporting overloading in type theories, to more practical applications such as integrating static and dynamically typed ..." Cited by 9 (2 self) Add to MetaCart A number of important program rewriting scenarios can be recast as type-directed coercion insertion. These range from more theoretical applications such as coercive subtyping and supporting overloading in type theories, to more practical applications such as integrating static and dynamically typed code using gradual typing, and inlining code to enforce security policies such as access control and provenance tracking. In this paper we give a general theory of type-directed coercion insertion. We specifically explore the inherent tradeoff between expressiveness and ambiguity—the more powerful the strategy for generating coercions, the greater the possibility of several, semantically distinct rewritings for a given program. We consider increasingly powerful coercion generation strategies, work out example applications supported by the increased power (including those mentioned above), and identify the inherent ambiguity problems of each setting, along with various techniques to tame the ambiguities. , 1999 "... A notion of dependent coercion is introduced and studied in the context of dependent type theories. It extends our earlier work on coercive subtyping into a uniform framework which increases the expressive power with new applications. A dependent coercion introduces a subtyping relation between a ty ..."
Cited by 8 (5 self) Add to MetaCart A notion of dependent coercion is introduced and studied in the context of dependent type theories. It extends our earlier work on coercive subtyping into a uniform framework which increases the expressive power with new applications. A dependent coercion introduces a subtyping relation between a type and a family of types in that an object of the type is mapped into one of the types in the family. We present the formal framework, discuss its meta-theory, and consider applications such as its use in functional programming with dependent types. 1 Introduction Coercive subtyping, as studied in [Luo97, Luo99, JLS98], represents a new general approach to subtyping and inheritance in type theory. In particular, it provides a framework in which subtyping, inheritance, and abbreviation can be understood in dependent type theories where types are understood as consisting of canonical objects. In this paper, we extend the framework of coercive subtyping to introduce a notion of dependent coer... - Logical Aspects of Computational Linguistics (LACL’98) , 1998 "... Zhaohui Luo and Paul Callaghan, Department of Computer Science, University of Durham, {Zhaohui.Luo, P.C.Callaghan}@durham.ac.uk 1 Introduction This paper investigates the use of constructive type theory in lexical semantics. Our intention is to explore how a rich language of types with subtyping c ..." Cited by 8 (8 self) Add to MetaCart Zhaohui Luo and Paul Callaghan, Department of Computer Science, University of Durham, {Zhaohui.Luo, P.C.Callaghan}@durham.ac.uk 1 Introduction This paper investigates the use of constructive type theory in lexical semantics. Our intention is to explore how a rich language of types with subtyping can be used to express lexical knowledge, both as an application of type theory and as an alternative to current approaches.
In particular, we show that coercive subtyping [Luo97, Luo98a] provides a formal framework with useful mechanisms for lexical semantics. Coercive subtyping extends constructive type theories (e.g., Martin-Löf's intensional type theory [NPS90] and the type theory UTT [Luo94]) with a simple abbreviational mechanism. It provides elegant and flexible means of representing inheritance and overloading. In our earlier paper on the structure of Mathematical Vernacular [LC98], coercive subtyping is used to represent the inheritance relationships between mathematical concepts and ... "... Abstract. We address the problem of representing mathematical structures in a proof assistant which: 1) is based on a type theory with dependent types, telescopes and a computational version of Leibniz equality; 2) implements coercive subtyping, accepting multiple coherent paths between type familie ..." Cited by 8 (4 self) Add to MetaCart Abstract. We address the problem of representing mathematical structures in a proof assistant which: 1) is based on a type theory with dependent types, telescopes and a computational version of Leibniz equality; 2) implements coercive subtyping, accepting multiple coherent paths between type families; 3) implements a restricted form of higher order unification and type reconstruction. We show how to exploit the previous quite common features to reduce the “syntactic” gap between pen & paper and formalised algebra. However, to reach our goal we need to propose unification and type reconstruction heuristics that are slightly different from the ones usually implemented. We have implemented them in Matita. 1 "... Abstract. Matita is an interactive theorem prover being developed by the Helm team at the University of Bologna. Its stable version 0.5.x may be downloaded at ..." Cited by 8 (6 self) Add to MetaCart Abstract. Matita is an interactive theorem prover being developed by the Helm team at the University of Bologna.
Its stable version 0.5.x may be downloaded at - Journal of Formalized Reasoning , 2008 "... We present a formalisation of a constructive proof of Lebesgue’s Dominated Convergence Theorem given by Sacerdoti Coen and Zoli in [CSCZ]. The proof is done in the abstract setting of ordered uniformities, also introduced by the two authors as a simplification of Weber’s lattice uniformities given i ..." Cited by 7 (4 self) Add to MetaCart We present a formalisation of a constructive proof of Lebesgue’s Dominated Convergence Theorem given by Sacerdoti Coen and Zoli in [CSCZ]. The proof is done in the abstract setting of ordered uniformities, also introduced by the two authors as a simplification of Weber’s lattice uniformities given in [Web91, Web93]. The proof is fully constructive, in the sense that it is done in Bishop’s style and, under certain assumptions, it is also fully predicative. The formalisation is done in the Calculus of (Co)Inductive Constructions using the interactive theorem prover Matita [ASTZ07]. It exploits some peculiar features of Matita and an advanced technique to represent algebraic hierarchies previously introduced by the authors in [ST07]. Moreover, we introduce a new technique to cope with duality to halve the formalisation effort. - Types for Proofs and Programs, volume 1956 of LNCS , 2000 "... In the context of Plastic, a proof assistant for a variant of Martin-Löf's Logical Framework LF with explicitly typed λ-abstractions, we outline the technique used for implementing inductive types from their declarations. This form of inductive types gives rise to a problem of non-linear patter ..." Cited by 4 (2 self) Add to MetaCart In the context of Plastic, a proof assistant for a variant of Martin-Löf's Logical Framework LF with explicitly typed λ-abstractions, we outline the technique used for implementing inductive types from their declarations.
This form of inductive types gives rise to a problem of non-linear pattern matching; we propose this match can be ignored in well-typed terms, and outline a proof of this. The paper then explains how the inductive types are realised inside the reduction mechanisms of Plastic, and briefly considers optimisations for inductive types. Key words: type theory, inductive types, LF, implementation. 1 Introduction This paper considers implementation techniques for a particular approach to inductive types in constructive type theory. The inductive types considered are those given in Chapter 9 of [15], in which Luo presents a variant of Martin-Löf's Logical Framework LF which has explicitly typed λ-abstractions, and a schema for inductive types within this LF which , 1999 "... System F! is an extension of system F! with subtyping and bounded quantification. Order-sorted algebra is an extension of many-sorted algebra with overloading and subtyping. We combine both formalisms to obtain IF!, a higher-order typed λ-calculus with subtyping, bounded quantification a ..." Cited by 4 (3 self) Add to MetaCart System F! is an extension of system F! with subtyping and bounded quantification. Order-sorted algebra is an extension of many-sorted algebra with overloading and subtyping. We combine both formalisms to obtain IF!, a higher-order typed λ-calculus with subtyping, bounded quantification and order-sorted inductive types, i.e. data types with built-in subtyping and overloading. Moreover we show that IF! enjoys important meta-theoretic properties, including confluence, strong normalization, subject reduction and decidability of type-checking.
1 Introduction Typed functional programming languages such as Haskell and ML and type-theory-based proof-development systems such as Coq and LEGO support the introduction of inductively defined types such as natural numbers or booleans, parameterized inductively defined types such as lists and even parameterized mutual inductively defined types such as trees and forests. In addition, those languages support the definition of functions ...
Delay composition theory: A reduction-based schedulability theory for distributed real-time systems Abstract: This thesis develops a new reduction-based analysis methodology for studying the worst-case end-to-end delay and schedulability of real-time jobs in distributed systems. The main result is a simple delay composition rule, that computes a worst-case bound on the end-to-end delay of a job, given the computation times of all other jobs that execute concurrently with it in the system. This delay composition rule is first derived for pipelined distributed systems, where all the jobs execute on the same sequence of resources before leaving the system. We then derive the delay composition rule for systems where the union of task paths forms a Directed Acyclic Graph (DAG), and subsequently generalize the result to non-acyclic task graphs as well, under both preemptive and non-preemptive scheduling. The result makes no assumptions on periodicity and is valid for periodic and aperiodic jobs. It applies to fixed and dynamic priority scheduling, as long as all jobs have the same relative priority on all stages on which they execute. The delay composition result enables a simple reduction of the distributed system to an equivalent hypothetical uniprocessor that can be analyzed using traditional uniprocessor schedulability analysis to infer the schedulability of the distributed system. Thus, the wealth of uniprocessor analysis techniques can now be used to analyze distributed task systems. Such a reduction significantly reduces the complexity of analysis and ensures that the analysis does not become exceedingly pessimistic with system scale, unlike existing analysis techniques for distributed systems such as holistic analysis and network calculus. Evaluation using simulations suggest that the new reduction-based analysis is able to significantly outperform existing analysis techniques, and the improvement is more pronounced for larger systems. 
We develop an algebra, called delay composition algebra, based on the delay composition results for systematic transformation of distributed real-time task systems into single-resource task systems such that schedulability properties of the original system are preserved. The operands of the algebra represent workloads on composed subsystems, and the operators define ways in which subsystems can be composed together. By repeatedly applying the operators on the operands representing resource stages, any distributed system can be systematically reduced to an equivalent uniprocessor that can be analyzed later to determine end-to-end delay and schedulability properties of all jobs in the original distributed system. The above reduction-based schedulability analysis techniques suffer from pessimism that results from mismatches between uniprocessor analysis assumptions and characteristics of workloads reduced from distributed systems, especially for the case of periodic tasks. To address the problem, we introduce flow-based mode changes, a uniprocessor load model tuned to the novel constraints of workloads reduced from distributed system tasks. In this model, transition of a job from one resource to another in the distributed system is modeled as mode changes on the uniprocessor. We present a new iterative solution to compute the worst-case end-to-end delay of a job in the new uniprocessor task model. Our simulation studies suggest that the resulting schedulability analysis is able to admit over 25% more utilization than other existing techniques, while still guaranteeing that all end-to-end deadlines of tasks are met. As systems are becoming increasingly distributed, it becomes important to understand their structural robustness with respect to timing uncertainty.
Structural robustness, a concept that arises by virtue of multi-stage execution, refers to the robustness of end-to-end timing behavior of an execution graph towards unexpected timing violations in individual execution stages. A robust topology is one where such violations minimally affect end-to-end execution delay. We show that the manner in which resources are allocated to execution stages can affect the robustness. Algorithms are presented for resource allocation that improves the robustness of execution graphs. Evaluation shows that such algorithms are able to reduce deadline misses due to unpredictable timing violations by 40-60%. Hence, the approach is important for soft real-time systems, systems where timing uncertainty exists, or where worst-case timing is not entirely verified. We finally show two contexts in which the above theory can be applied to the domain of wireless networks. First, we developed a bandwidth allocation scheme for elastic real-time flows in multi-hop wireless networks. The problem is cast as one of utility maximization, where each flow has a utility that is a concave function of its flow rate, subject to delay constraints. The delay constraints are obtained from our end-to-end delay bounds and adapted to only use localized information available within the neighborhood of each node. A constrained network utility maximization problem is formulated and solved, the solution to which results in a distributed algorithm that each node can independently execute to maximize global utility. Second, we study the problem of minimizing the worst-case end-to-end delay of packets of flows in a wireless network under arbitrary schedulability constraints. Using a coordinated earliest-deadline-first strategy, we show that a worst-case end-to-end delay bound that has the same form as our delay composition results for distributed systems can be obtained.
We discuss several avenues for future work that build on top of the theory developed in this thesis. We hope that this thesis will provide the foundation to develop a more comprehensive and widely applicable theory for the study of delay, schedulability, and other end-to-end properties in distributed systems.
Samples, Surveys, and Bias
Most students have probably participated in some type of statistical survey, whether it was asking others for their opinions in order to make a decision, or participating in an on-line survey on the Internet. A survey asks people their opinions or experiences of a particular event or issue. All individuals fitting a particular description are called a population. A small group or part of a population is a sample. If that sample has characteristics similar to the entire population, it is said to be a representative sample. A representative sample has no bias. A biased sample favors one particular answer. A convenience sample is taken of people who are easily available to complete the survey. A convenience sample is often biased because the sample population has some common attribute, such as all of them being teenagers or all being senior citizens. A random sample asks questions of all segments of the population. A random sample may or may not be biased, often depending upon how the question is asked. If a representative sample is asked an unbiased question, then the results can be used to predict the response of the entire population. Proportions are used to make such predictions. Suppose 400 people out of 1,000 people surveyed answered yes to an unbiased sample question. Then a proportion can be used to predict how many people out of 60,000 would answer yes. Setting 400/1,000 = x/60,000 and solving gives x = (400 × 60,000)/1,000 = 24,000. The prediction is that 24,000 people would answer yes to the survey.
Interpreting Statistics
When interpreting statistics, it is important to be alert for misleading statistics. Some examples of misleading statistics are given below.
Misleading Statements: “Our survey shows 2 out of 3 investors in our company made money last year.” Possible misleading factors: The investors may have made money from some other investments, not from this company. The investors may have been salaried employees of the company.
Biased Survey Question: “Should students be taught how to ride a skateboard to increase physical coordination?” Possible misleading factor: Developing physical coordination is a positive goal, thus it shows bias toward whatever is being suggested.
Biased Graphic Displays
Possible misleading factor: This graph uses a scale that starts at 47%. It makes it look as if many more people answered no. When interpreting statistics, it is always important to read the data accurately as well as all of the scales and labels on a graphic display.
The Fundamental Counting Principle
The fundamental counting principle states that if an experiment or problem has two steps and there are m possible choices or outcomes for the first step and n possible choices or outcomes for the second step, then the total number of possible choices or outcomes for both steps is m × n. The set of all possible outcomes is called the sample space. The result of an experiment is called an outcome. If the experiment is to toss a 1−6 number cube, then there are six possible outcomes, one for each face of the cube. An event is any collection of outcomes. Examples of events for tossing a number cube are that the number tossed is even, that the number is 1 or 2, or that the number is 3. The probability of an event is a measure of the likelihood that the event will occur. The probability is always a number between 0 and 1 that can be written as a fraction, a decimal, or a percent. A probability of 0 means that an event is impossible, while a probability of 1 means that an event is certain. If each outcome is equally likely, the theoretical probability of an event is the ratio of the number of outcomes in the event to the total number of possible outcomes. The experimental probability of an event is the ratio of the number of times the event occurs to the total number of experiments. This is an experimental estimate of the theoretical probability.
For large numbers of experiments, the experimental probability approaches the theoretical probability of the event.
Computing Theoretical Probability
When a coin is tossed, there are two outcomes, heads or tails. Either outcome is equally likely. When a 1−6 number cube is tossed, each face is equally likely to turn up. When a marble is chosen from thoroughly mixed marbles of the same size, without looking, each marble has the same chance of being chosen. Such outcomes are said to be equally likely outcomes. Sometimes the word fair is used, as in “fair coin” or “fair number cube.” To indicate that each outcome is equally likely, the word random is used, as in saying “The object is chosen at random.” When you toss two 1−6 number cubes, one way to record the outcome is to sum the numbers on the two cubes. The possible sums range from 2 to 12, but each sum is not equally likely. There are more ways to toss a sum of 7 than there are to toss a sum of 2. If the first cube shows a 3 and the second shows a 5, then this outcome can be represented by the ordered pair (3, 5). There are 36 such ordered pairs, each an equally likely outcome. When all outcomes of an experiment are equally likely, the theoretical probability of event A is given by this ratio:
P(A) = (number of outcomes in the event) / (total number of possible outcomes)
Tossing a fair coin. List all outcomes: H (head), T (tail). The probability of tossing a head is P(H) = 1/2.
Tossing a fair 1−6 number cube. List all outcomes: 1, 2, 3, 4, 5, 6. The probability of tossing an even number is the probability of tossing 2, 4, or 6. P(2, 4, 6) = 3/6 = 1/2.
Tossing two fair number cubes. List all 36 equally likely outcomes. The probability of tossing a sum of 7 is the probability of getting any of the outcomes (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1). So P(7) = 6/36 = 1/6.
Disjoint Events
If two events A and B have no outcome in common, then A and B are said to be disjoint events.
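The dice probabilities above can be checked by brute-force enumeration of the 36 ordered pairs, and a short simulation shows the experimental probability approaching the theoretical one. This Python sketch is ours, not part of the teaching model:

```python
import random
from fractions import Fraction
from itertools import product

# Sample space: all ordered pairs (first cube, second cube).
# By the fundamental counting principle there are 6 * 6 = 36 of them.
space = list(product(range(1, 7), repeat=2))

# Theoretical probability: outcomes in the event / total possible outcomes.
p_seven = Fraction(sum(1 for a, b in space if a + b == 7), len(space))
print(len(space), p_seven)  # 36 1/6

# Experimental probability: toss the two cubes many times, count sums of 7.
random.seed(1)  # fixed seed so the run is reproducible
trials = 100_000
hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7
           for _ in range(trials))
print(hits / trials)  # close to 1/6, about 0.167
```

The enumeration gives the exact value 1/6, while the simulated ratio wanders around it and gets closer as the number of trials grows.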
When A and B are disjoint events with probabilities P(A) and P(B), respectively, the probability of A or B occurring is given by P(A or B) = P(A) + P(B). If the experiment is tossing two 1–6 number cubes, then the event S (getting a sum of 7) and the event L (getting a sum less than 7) are disjoint events. If event E is tossing an even sum, then S and E are disjoint events, but L and E are not disjoint since 2, 4, and 6 are even sums that are also less than 7. For the events S, L, and E above, P(S) = 6/36, P(L) = 15/36, and P(E) = 18/36. P(S or L) = P(S) + P(L) = 6/36 + 15/36 = 21/36 = 7/12. P(S or E) = P(S) + P(E) = 6/36 + 18/36 = 24/36 = 2/3.
The Probability of not A
If A is an event, then the event not A consists of all outcomes that are not in A. The events A and not A are disjoint events. Together, the events A and not A include all possible outcomes, so P(A) + P(not A) = 1. Thus P(not A) = 1 − P(A). What is the probability of tossing two 1−6 number cubes and not tossing a sum of 7? 1 − P(7) = 1 − 1/6 = 5/6.
Dependent and Independent Events
A compound event consists of the outcomes of two or more events. In the compound event A and B, A and B are independent events if the outcome of A has no influence on the outcome of B. If the outcome of A does have an influence on the outcome of B, then A and B are dependent events. If A and B are independent events, then the probability of A and B occurring is simply the product of the probability of A and the probability of B: P(A and B) = P(A) × P(B). Toss a fair coin twice. What is the probability of getting two heads in a row? Since the outcome of the first toss has no influence on the outcome of the second toss, the probability is simply P(H and H) = 1/2 × 1/2 = 1/4. If A and B are dependent events, then the probability of A and B is the product P(A and B) = P(A) × P(B given that A has occurred). A jar contains 5 red and 10 white marbles. Find the probability of drawing, at random, two red marbles in a row if the first marble is not returned. Event A: The first marble drawn is red.
Event B: The second marble drawn is red. P(A) = 5/15 = 1/3. Now find P(B) assuming that the first marble chosen is red. With one red marble removed, 4 of the remaining 14 marbles are red, so P(B given that A has occurred) = 4/14 = 2/7. P(A and B) = P(A) × P(B given that A has occurred) = 1/3 × 2/7 = 2/21.
Teaching Model 19.2: Theoretical Probability
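The coin and marble computations above can be verified with exact fractions. This is a minimal illustrative sketch (the variable names are ours, not from the lesson):

```python
from fractions import Fraction

red, white = 5, 10
total = red + white  # 15 marbles in the jar

# Independent events: two tosses of a fair coin.
p_two_heads = Fraction(1, 2) * Fraction(1, 2)
print(p_two_heads)  # 1/4

# Dependent events: draw two reds without returning the first marble.
p_a = Fraction(red, total)                  # P(A) = 5/15 = 1/3
p_b_given_a = Fraction(red - 1, total - 1)  # P(B given A) = 4/14 = 2/7
p_a_and_b = p_a * p_b_given_a               # multiply for dependent events
print(p_a, p_b_given_a, p_a_and_b)  # 1/3 2/7 2/21
```

Using Fraction keeps the arithmetic exact, so the printed values match the lesson's fractions rather than rounded decimals.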
Talk:Fractals With Stars
From Math Images
Hey Michael! This is Jason from Swarthmore. Hope your summer is great and refreshing so far! I hope you feel refreshed enough to work on this page, because I have some suggestions. Don't worry, we'll be working together on this emerging and exciting page. This is a great idea for a page and it deserves to have more work done on it. I'm excited and expect to see it polished and ready for publishing by July 26. First off, great job presenting! People loved it. I know for a fact that my peers at Swarthmore enjoyed it as well. You were very engaging and confident about your page; now let's let the page do more of the talking rather than you. Here are some ideas I had in mind, area by area:
Basic Description
□ I really admire what you did in terms of making the image; Mr. Taranta and I are really impressed. What concerns me, however, is how you communicate the information that you know. I know it can be hard, but I am here to help.
□ I know you were rushed typing this the day before, but bullet your procedure in the basic description.
☆ Don't write too many bullets or you will overwhelm the reader.
☆ There are some sentences that need splitting up, like "Rotate the midpoint around point A at 36 degrees to create point C and then rotate C at 72º to create point D and then rotate point D by 72º 4 times to create the outline points of a star"; maybe split it into separate bullets, e.g. "(bullet 1) Establish a midpoint C between A and B. (bullet 2) Repeat the 72-degree rotating process done previously, starting from C."
☆ Don't necessarily write all your sentences in bullet form; really determine what sentences/phrases you want to keep.
□ Where exactly do you connect points? Be specific. (I'm referring to "Then hide C and connect all the points to form an outline of a star".)
□ Can you explain what you mean by "making a tool"? Also, you should mention somewhere that people need to be familiar with GSP.
□ Why exactly did you decide that successive stars would be 2.2676 times smaller than the star before it? If this was an arbitrary decision of yours, make that clear. I know you like to play around with GSP at times, so the next time you make a manipulation based on an arbitrary number, make it clear.
□ The part "and map a polar grid around it and then try to find a parametric function that intersects the centers of the stars" is not necessary for making the star. You can say, in another subsection of your basic description (probably named Mathematical Explorations, which should not be math-intensive but rather introduce the reader to the mathematical concepts you will be working with), that if desired, readers can produce a parametric equation of one of the spirals by mapping a polar grid around the stars of interest.
A More Mathematical Description
□ I recommend hiding the procedure (yes, hiding a section within a section that is hidden), and leave it there if people are interested. I can show you how to hide it if you are interested.
□ Try to find out whether there is some finite area that you can mathematically derive (assuming that we can ignore that the stars overlap each other, that is, add up the areas of the stars).
□ Try to determine whether there is infinite perimeter and try to derive it mathematically (with the same assumption that we can ignore overlap).
□ I am not sure if you have seen this already, but there is a page already, called Koch Snowflake, that is similar to what you are doing. It has derivations of area and perimeter which might be very similar to those you will encounter.
Why it's Interesting
□ I like that you found it interesting because of your experience of GSP in the classroom and your adoration of stars. Remember that outsiders will be seeing your page and will not relate to the experience you had in the classroom. Try to focus on the implications it has with fractals in real life. Maybe refer to the Romanesco Broccoli and make a certain person happy?
□ I'll leave it up to you to find interesting applications of fractals. Good Luck! Keep me updated! I look forward to working with you more on this --Jason 16:27, 3 July 2013 (EDT)
Optimization of Concentric Ring Array Geometry for 3D Beam Scanning International Journal of Antennas and Propagation Volume 2012 (2012), Article ID 625437, 5 pages Research Article Optimization of Concentric Ring Array Geometry for 3D Beam Scanning National Key Laboratory of Antennas and Microwave Technology, Xidian University, P.O. Box 377, Shaanxi, Xi'an 710071, China Received 16 January 2012; Revised 29 February 2012; Accepted 3 March 2012 Academic Editor: Zhongxiang Q. Shen Copyright © 2012 Li Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Optimization of the element placement in a concentric ring array for three-dimensional (3D) beam scanning with minimum peak sidelobe levels (SLLs) is addressed in this paper. In order to achieve 3D beam scanning with the lowest peak SLL, both the radius and the element spacing of each ring are optimized. Moreover, the aperture size is a very important constraint for the array, since there is an upper limit for the aperture size of a given array in real-life environment. Hence, in our optimization design, the maximum radius of the concentric ring array is constrained. Through the optimization, the peak SLL of the optimal concentric ring array is about 6dB lower than that of the uniform concentric ring array, but the directivity is reduced by only 0.3dB. 1. Introduction Antenna arrays for radar tracking, remote sensing, biomedical imaging, satellite and ground communications have often to support three-dimensional (3D) scanning with a suitable beampattern shape in the given angular region [1]. Towards this end, planar arrays have to be used and large apertures are necessary to provide satisfactory angular resolutions along both azimuth and elevation [1]. Among planar arrays, circular array has received considerable interest. 
Circular array can consist of either a single ring [2, 3] or multiple concentric rings [4–10]. However, a uniformly excited and equally spaced circular ring array has high directivity but usually suffers from high side lobe level (SLL). Therefore, to gain high performance of the circular array, the geometry and weights can be optimized. In [2], the weights and positions were simultaneously optimized in order to reduce the SLL in the azimuthal plane. In [3], three different evolutionary algorithms were employed to optimize the amplitude and phase excitations of the circular array for designing the beam scanning with the lowest peak SLL in the whole azimuthal plane. Amplitude weights for a concentric ring array can be found to yield an array factor pattern that is invariant over a specified bandwidth [4]. Kumar and Branner [5] proposed a method of lowering sidelobe level through optimizing the ring radii. In [6], an artificial neural network (ANN) with hidden Bessel activation functions was employed to design ultralow sidelobe level concentric ring arrays. In [7], both the ring radii and the number of elements in each ring were optimized to achieve the lowest peak SLL. Pathak et al. [8] synthesized the thinned concentric ring array to suppress the SLL at boresight. The weights and geometry were simultaneously optimized in order to determine the minimum possible sidelobe level in concentric ring arrays [9]. Optimization of the ring radii has been attempted for beam scanning at boresight with the lowest peak SLL [10], but this was done with respect to suppressing the SLL only for the given plane. To the best of our knowledge, optimization of the concentric ring array geometry for 3D beam scanning with minimum peak SLL has not been addressed. In this paper, optimizing the element placement in a concentric ring array for 3D beam scanning to minimize sidelobe levels is proposed.
In order to achieve 3D beam scanning with the lowest peak SLL, both the radius and the interelement spacing of each ring are optimized by a differential evolution algorithm. Through the optimization, the peak SLL of the optimal concentric ring array is about 6dB lower than that of the uniform concentric ring array, while the directivity is reduced by only 0.3dB.

2. Formulation of the Problem

Here, consider a concentric ring array having $M$ rings and $N_m$ elements in ring $m$. The maximum radius of the concentric ring array is $R$. The geometry of the concentric ring array of point sources is shown in Figure 1, where $\varphi$ denotes the azimuth angle and $\theta$ the elevation angle with respect to the $z$-axis. The array factor of the concentric ring array with uniform excitation amplitude in each ring is given as follows:

$$AF(\theta,\varphi)=\sum_{m=1}^{M}\sum_{n=1}^{N_m}\exp\bigl\{jkr_m\bigl[\sin\theta\cos(\varphi-\varphi_{mn})-\sin\theta_0\cos(\varphi_0-\varphi_{mn})\bigr]\bigr\},\tag{1}$$

where $r_m$ is the radial distance of the $m$th ring from the center of the array, $k=2\pi/\lambda$ is the free-space propagation constant, and $\lambda$ is the operating wavelength. The elements in each ring are assumed to be uniformly distributed around the circular ring, that is, $\varphi_{mn}=2\pi n/N_m$, and $(\theta_0,\varphi_0)$ is the desired steering angle. The number of equally spaced elements in ring $m$ with interelement spacing $d_m$ is given by

$$N_m=\frac{2\pi r_m}{d_m};\tag{2}$$

since the number of elements must be an integer, the value in (2) must be rounded up or down. To keep the spacing at least $d_m$ and allow sufficient room between elements, the digits to the right of the decimal point are dropped. For the optimization design of the concentric ring array geometry, both the radius of each ring and the interelement spacing in each ring are chosen as the optimization variables. The major advantage of optimizing the ring radii and interelement spacings is that it enhances computational efficiency by greatly reducing the dimensionality of the space in which any optimization procedure employed in the design process is carried out. In order to alleviate possible mutual coupling effects, adjacent rings are assumed to be separated by a minimum distance of $0.5\lambda$, and the interelement spacing in each ring is also at least $0.5\lambda$.
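The concentric-ring array factor is easy to sanity-check numerically. Below is a small sketch (our own illustration of the standard concentric-ring array factor with made-up function names — not the authors' code, and not guaranteed to match the paper's exact equation). At the steering angle every exponent vanishes, so |AF| equals the total element count.

```python
import cmath
import math

def array_factor(radii, counts, theta, phi, theta0=0.0, phi0=0.0, wavelength=1.0):
    """Array factor of a concentric ring array with uniform unit excitation.
    Ring m has radius radii[m] and counts[m] elements at phi_mn = 2*pi*n/N_m."""
    k = 2.0 * math.pi / wavelength          # free-space propagation constant
    af = 0.0 + 0.0j
    for r, n_m in zip(radii, counts):
        for n in range(n_m):
            phi_mn = 2.0 * math.pi * n / n_m
            af += cmath.exp(1j * k * r * (
                math.sin(theta) * math.cos(phi - phi_mn)
                - math.sin(theta0) * math.cos(phi0 - phi_mn)))
    return af
```

For example, with six rings holding 10, 20, 31, 41, 52, and 62 elements, |AF| evaluated at the steering direction is 216, the total element count.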
To achieve the lowest peak SLL over the desired steering range in 3D space, the optimization problem is formulated as the following problem:

$$\min_{\{r_m,\,d_m\}}\ \max_{(\theta,\varphi)\in S}\ \frac{\bigl|AF(\theta,\varphi)\bigr|}{\bigl|AF(\theta_0,\varphi_0)\bigr|},$$

where $S$ denotes the sidelobe region and $\bigl|AF(\theta_0,\varphi_0)\bigr|$ is the peak value of the main beam.

3. Numerical Examples

In this paper, a DE/rand/1 version of DE is employed to optimize concentric ring arrays. Of course, other global optimization algorithms could also be successfully applied; for more details about the DE algorithm, refer to [11]. The DE used fixed values of the scaling factor, the crossover rate, the population size, and the maximum number of generations. In this section, a concentric ring array having 6 rings is optimized by the DE algorithm. Without loss of generality, the number of rings is $M=6$, $R=5\lambda$ is taken as the maximum radius of the concentric ring array, and $(\theta_0,\varphi_0)$ is the desired steering angle. For this array, the average ring spacing is $0.8333\lambda$. Placing the 6 rings at periodic intervals with an approximately constant element spacing ($\simeq\lambda/2$) in all rings creates a relatively high SLL and limits the usefulness of the array. The number of array elements in each ring is then found from (2) as (10, 20, 31, 41, 52, 62), from the innermost to the outermost ring. The total number of array elements is 216. Figure 2 shows the patterns of the worst normalized SLL throughout the scanning space. Here the highest SLL is −16.15dB, and the directivity is 26.46dB. The corresponding radiation pattern of the array is plotted in Figure 3. Optimizing D: to reduce the peak SLL, we first optimize only the element spacings $d_m$ of all the rings, with each spacing constrained within prescribed lower and upper bounds. The ring radii are fixed at (0.8333λ, 1.6667λ, 2.5000λ, 3.3333λ, 4.1667λ, 5.0000λ). The DE is employed to optimize the element spacings of all the rings. The optimal element spacings of the 6 rings found by the DE algorithm are (0.5432λ, 0.5834λ, 0.5696λ, 0.6281λ, 0.5210λ, 0.5056λ).
The number of array elements in each ring is (9, 17, 27, 33, 50, 62), and the resulting array has 198 elements, arranged as shown in Figure 4. Figure 5 shows the patterns of the worst SLL over the scanning space. Here the highest SLL is −16.31dB, and the directivity is 26.07dB. The improvement is very small. The corresponding radiation pattern of the array is plotted in Figure 6. Optimizing both D and R: optimizing only the element spacings of the rings does not obviously reduce the peak SLL, so we attempt to optimize both the radii and the element spacings of all the rings. For this array, the maximum radius of the outermost ring is $5\lambda$, that is, $r_M\le 5\lambda$. The DE is used to determine how to combine the radii and the element spacings of all the rings to achieve the lowest SLL. In this optimization process, a candidate solution may be infeasible, that is, it may not satisfy the constraints on the ring radii and spacings. To avoid this drawback, the constraint-handling technique in [12] is adopted in the DE algorithm. The final optimized results are as follows. The optimal radii of the 6 rings are (1.0964λ, 1.7713λ, 2.2939λ, 2.7939λ, 3.4964λ, 5.0000λ), and the optimal element spacings of the 6 rings are (0.5251λ, 0.5176λ, 0.5706λ, 0.5000λ, 0.5043λ, 0.5664λ). The optimal array has only 192 elements, and the number of elements in each ring is (13, 21, 25, 35, 43, 55). The optimal geometry of the array is shown in Figure 7. The corresponding patterns are plotted in Figures 8 and 9. The directivity is 26.16dB and the peak SLL of the optimal array is −22.05dB. The performance parameters of the arrays in the presented cases are listed in Table 1. As can be seen from Table 1, the lowest sidelobe level of the concentric ring array is obtained by optimizing both the ring radii and the interelement spacing in each ring. This array also has the fewest elements, 192, or 88.9% of the uniform concentric circular array.
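The per-ring element counts follow from equation (2) with truncation; this quick check (our own sketch, not from the paper) reproduces the reported totals of 216, 198, and 192 elements.

```python
import math

def ring_counts(radii, spacings):
    """N_m = floor(2*pi*r_m / d_m): ring circumference divided by the desired
    spacing, truncated so the realized spacing never drops below d_m."""
    return [math.floor(2.0 * math.pi * r / d) for r, d in zip(radii, spacings)]

uniform_radii = [0.8333, 1.6667, 2.5000, 3.3333, 4.1667, 5.0000]
uniform = ring_counts(uniform_radii, [0.5] * 6)                         # total 216
opt_d = ring_counts(uniform_radii,
                    [0.5432, 0.5834, 0.5696, 0.6281, 0.5210, 0.5056])   # total 198
opt_dr = ring_counts([1.0964, 1.7713, 2.2939, 2.7939, 3.4964, 5.0000],
                     [0.5251, 0.5176, 0.5706, 0.5000, 0.5043, 0.5664])  # total 192
```

The 192-element total of the jointly optimized array is what gives the quoted 88.9% of the 216-element uniform array.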
Directivity analysis shows that the methodology not only provides an effective way to regulate the radiation patterns but is also comparable in efficiency to standard array geometries. The directivities of the uniform array, the array with only $D$ optimized, and the array with both $D$ and $R$ optimized were found to be 26.46dB, 26.07dB, and 26.16dB, respectively. Compared with the uniform design, the highest SLL of the optimal array is reduced by about 6dB, while the directivity is reduced by only 0.3dB. Among these arrays, the uniform array has the highest directivity, but its SLL is also the highest; the optimal array (with both the ring radii and the interelement spacing in each ring optimized) has the lowest SLL and moderate directivity.

4. Conclusion

In this paper, optimization of the concentric ring array geometry for 3D beam scanning to minimize sidelobe levels is proposed. Both the ring radii and the interelement spacing in each ring are optimized to achieve 3D beam scanning with the lowest peak SLL. The optimization problem is solved via the differential evolution algorithm. Through the optimization, the peak SLL of the optimal concentric ring array is about 6dB lower than that of the uniform concentric ring array. It is found that the array geometry has a significant effect on the performance of concentric ring arrays.

This work was supported by the Fundamental Research Funds for the Central Universities (K50511020007).
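The paper's DE parameter values are not preserved here, but the DE/rand/1 scheme it relies on is easy to sketch. The following generic minimizer (our own illustration with made-up parameter defaults, not the authors' implementation) shows the mutation, binomial crossover, and greedy selection steps on a simple test objective.

```python
import random

def de_rand_1(objective, bounds, np_=30, f=0.5, cr=0.9, gens=200, seed=1):
    """Generic DE/rand/1/bin minimizer (illustrative defaults, not the
    paper's settings). bounds is a list of (lo, hi) pairs per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [objective(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # mutation: v = a + F * (b - c), with a, b, c distinct from i
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # one coordinate always crosses over
            trial = list(pop[i])
            for j in range(dim):
                if j == jrand or rng.random() < cr:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial[j] = min(max(v, lo), hi)  # clamp to the search box
            tc = objective(trial)
            if tc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]
```

In the paper, the objective would be the min-max sidelobe criterion of Section 2 with the ring radii and spacings as decision variables, plus the constraint-handling of [12]; the sphere function below just demonstrates that the machinery converges.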
Weyl group multiple Dirichlet series are generalizations of the Riemann zeta function. Like the Riemann zeta function, they are Dirichlet series with analytic continuation and functional equations, having applications to analytic number theory. By contrast, these Weyl group multiple Dirichlet series may be functions of several complex variables, and their groups of functional equations may be arbitrary finite Weyl groups. Furthermore, their coefficients are multiplicative up to roots of unity, generalizing the notion of Euler products. This book proves foundational results about these series and develops their combinatorics. These interesting functions may be described as Whittaker coefficients of Eisenstein series on metaplectic groups, but this characterization doesn't readily lead to an explicit description of the coefficients. The coefficients may be expressed as sums over Kashiwara crystals, which are combinatorial analogs of characters of irreducible representations of Lie groups. For Cartan Type A, there are two distinguished descriptions, and if these are known to be equal, the analytic properties of the Dirichlet series follow. Proving the equality of the two combinatorial definitions of the Weyl group multiple Dirichlet series requires the comparison of two sums of products of Gauss sums over lattice points in polytopes. Through a series of surprising combinatorial reductions, this is accomplished. The book includes expository material about crystals, deformations of the Weyl character formula, and the Yang-Baxter equation. Ben Brubaker is assistant professor of mathematics at Massachusetts Institute of Technology. Daniel Bump is professor of mathematics at Stanford University. Solomon Friedberg is professor of mathematics at Boston College. Subject Area: Mathematics.
The Pythagorean Theorem

Perhaps the most famous theorem in mathematics is the Pythagorean Theorem. Most people who have had a high school geometry course still remember the statement a^2 + b^2 = c^2 long after their children (and even grandchildren) have learned it. The statement applies to a right triangle whose leg lengths are a and b, and whose hypotenuse has length c. The name of the theorem comes from the ancient Greek mathematician and mystic, Pythagoras, but the theorem was known by the Babylonians before that. The earliest known proof, however, is Greek. Since then, literally hundreds of proofs have been devised. Below is a picture proof found by James Garfield, twentieth president of the United States. Can you find the equations suggested by the figure which prove the theorem?

A Presidential Proof

Here is another picture proof. First you might like to try to assemble the following "tangram" puzzle. The three pieces are shown on a grid so that you can cut accurate copies out of graph paper. When you assemble them, you may rotate and flip them, but you may not cut, fold, or overlap them. • Task 1: form two exact squares adjacent to each other. • Task 2: form one exact square. After you have tried the puzzle, read on. Pieces 2 and 3 form congruent right triangles: label their legs a and b, and their hypotenuses c. The single square made from the three pieces has side c, so its area is c^2. The two squares made from the same pieces have sides a and b, respectively. So their areas are a^2 and b^2, respectively. Since the areas of the two figures must be the same, you have just proved that a^2 + b^2 = c^2!
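If you want to check your answer for the Presidential Proof, here is one common way to read the equations off Garfield's figure (a sketch only; it assumes the figure is the usual trapezoid made of two a-b-c right triangles and one isosceles right triangle with legs c):

```latex
% Area of the trapezoid with parallel sides a and b and height a + b:
\tfrac{1}{2}(a+b)(a+b)
% equals the sum of the areas of the three right triangles that tile it:
= \tfrac{1}{2}ab + \tfrac{1}{2}ab + \tfrac{1}{2}c^2 .
% Expanding the left side and simplifying:
\tfrac{1}{2}\left(a^2 + 2ab + b^2\right) = ab + \tfrac{1}{2}c^2
\quad\Longrightarrow\quad a^2 + b^2 = c^2 .
```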
OpenStudy question: how hard is it to get an interview with Microsoft?
Telephoto zoom lenses Applet: Andrew Adams Text: Marc Levoy If long focal length lenses were built using a single thin lens, with object and image distances given by the Gaussian lens formula, then a 250mm lens focused on a subject 1 meter away would need to be placed 333mm (13 inches) from the sensor. But you can buy a Tamron 18-250mm zoom lens that, even when extended to 250mm focal length, measures only 6 inches long. How is this possible? The secret lies in a clever arrangement of convex and concave lenses that together are called a telephoto lens. Since many telephoto lenses also let you change their focal length, including the Tamron product just mentioned, it's worth folding that functionality into our explanation. Formally, the Tamron is called a telephoto zoom lens. How a zoom lens works Start by clicking on the "Close-up Filter" check box. A close-up filter is a weak convex lens you can attach to the end of any lens, by screwing it into the filter threads. Its purpose is to shorten the object distance, for example to turn a regular lens into a macro lens without the need to buy a separate macro lens. We're going to use a close-up filter here so that the in-focus plane (where the blue and red bundles of rays individually converge to two points in object space) fits inside our applet frame. For the rest of this discussion, try to pretend that this filter doesn't exist; it just makes the visualization easier to understand. Now click on the "Equivalent Thin Lens" check box. A thin green lens should appear. Imagine that your long focal length lens consists solely of this lens. For the moment, ignore the two lenses to its right. The red bundle of rays start from the in-focus plane on the left edge of the applet, diverge for a while, pass through the green lens (remember that we're ignoring the close-up filter), then bend and follow the green lines, reconverging at the red circle on the sensor (vertical gray bar). 
The blue bundle of rays does the same thing, converging at the blue circle. Since the red and blue circles lie at the two ends of the sensor, the angle subtended by the central rays of the red and blue bundles where they strike the green lens represents the field of view. To complete our analysis, the object distance is the distance from the green lens to the left side of the applet, and the image distance is the distance from the green lens to the sensor. The focal length of the green lens is neither of these distances, but is related to them through the Gaussian lens formula. Try moving the focal length slider. This changes the focal length of the green lens. Note that it gets thicker and thinner as you do this, reflecting what would be required to actually change the focal length of a single-lens system like this. As the focal length increases, the field of view (angle between the red and blue bundles) decreases, as you would expect. You can also move the sensor size slider to change the field of view. This arrangement is called a zoom lens. Here's where it gets interesting. Notice that as you adjust the focal length, the applet keeps the in-focus object plane and the sensor stationary. We do this by solving a system of two simultaneous equations: (1) the Gaussian lens formula, with the focal length fixed at the value you set using the slider, (2) the sum of object distance and image distance must equal the distance from the left edge of the applet to the sensor, which is fixed by the design of the applet. This arrangement, where the optics stays focused at the same object distance (a.k.a. subject distance) while you change the focal length, is called an optically compensated zoom lens. How a telephoto zoom lens works One problem with this design is that the green lens is far from the sensor. If built this way, it would yield a physically long lens, as explained in the introduction. 
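The two simultaneous equations the applet solves can be written down in a few lines. Here is a sketch (our own illustration, not the applet's actual code): substituting d_i = total − d_o into the Gaussian lens formula 1/f = 1/d_o + 1/d_i yields a quadratic for the object distance.

```python
def image_distance(f, d_o):
    """Gaussian lens formula 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i. Units are arbitrary but must be consistent."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def refocus(f, total):
    """Optically compensated zoom: hold d_o + d_i = total fixed while also
    satisfying 1/f = 1/d_o + 1/d_i. Substituting d_i = total - d_o yields
    d_o**2 - total*d_o + f*total = 0; we take the root with d_o >= d_i
    (subject farther from the lens than the sensor is)."""
    disc = total * total - 4.0 * f * total
    if disc < 0:
        raise ValueError("cannot focus: need total >= 4*f")
    d_o = (total + disc ** 0.5) / 2.0
    return d_o, total - d_o
```

With f = 250 mm and a subject 1000 mm away, `image_distance` returns roughly 333 mm, the figure quoted in the introduction.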
Another problem, of course, is that there's no way to make glass lenses change shape (get thicker and thinner) once they've been fabricated. To address both problems, we move to a multi-element design. Unclick the "Equivalent Thin Lens" check box. Now the red and blue bundles continue spreading out as they pass the place where the green lens was, strike the convex lens, bend inwards towards the optical axis (central horizontal line), strike the concave lens, and bend outwards again, converging to the sensor at the same points struck by the green rays. In other words, these two optical arrangements - the green lens alone or the convex-concave lens combination - have the same effective focal length. As a result, they make the same picture. Why would you prefer the second arrangement over the first? Look how much closer the convex-concave lens combination is to the sensor than the green lens was. This is a more compact design. It's called a telephoto lens. Try changing the focal length. The two lenses move, and the field of view changes. So it's a telephoto zoom lens. But the in-focus object plane and sensor also remain stationary. So it's an optically compensated telephoto zoom lens. It's interesting to see how the two lenses move; they don't move together. Explaining how we compute their motion is beyond the scope of this applet; we do it using ray transfer matrices. Briefly, any system of thin lenses and air gaps can be modeled as a 2 x 2 matrix describing how that system bends and shifts rays of light. By constructing and equating the matrices for an ideal thin lens and a telephoto zoom lens system, we can derive equations that make one system optically equivalent to the other. In a commercial lens these motions are encoded into curved slots in the sides of the lens barrel, as suggested by the patent application drawing at left. Finally, try moving the "Focus" slider.
Now the location of the in-focus plane changes in object space; it is no longer fixed at the left edge of the applet. Look how the two lenses move; this time they do move together. More slots in the lens barrel. By the way, this is not the only possible design for a telephoto zoom lens. In fact most commercial lenses have many more lens elements. However, our applet gives the basics, and to our knowledge you can't make a simpler arrangement than the one we've shown here.

© 2010 Marc Levoy. Last update: March 1, 2012.
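To make the ray-transfer-matrix remark concrete, here is a rough sketch (our own illustration, not the applet's code). Multiplying the 2x2 matrices for a convex lens, an air gap, and a concave lens gives a system matrix whose C element is −1 over the effective focal length, reproducing the two-thin-lens combination formula 1/f = 1/f1 + 1/f2 − d/(f1·f2).

```python
def lens(f):
    """Ray transfer matrix of a thin lens of focal length f (f < 0: concave)."""
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def gap(d):
    """Ray transfer matrix of free-space propagation over distance d."""
    return ((1.0, d), (0.0, 1.0))

def matmul(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def effective_focal_length(f1, d, f2):
    """Effective focal length of lens f1, then gap d, then lens f2."""
    m = matmul(lens(f2), matmul(gap(d), lens(f1)))  # rightmost matrix acts first
    return -1.0 / m[1][0]  # system matrix [[A, B], [C, D]] has C = -1/f_eff
```

For example, a 100 mm convex lens and a −50 mm concave lens separated by 60 mm act like a single 500 mm lens, yet the rear element sits far closer to the sensor than a 500 mm thin lens would — which is the telephoto trick.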
A conjecture concerning random cubical complexes

Nati Linial and Roy Meshulam defined a certain kind of random two-dimensional simplicial complex, and found the threshold for vanishing of homology. Their theorem is in some sense a perfect homological analogue of the classical Erdős–Rényi characterization of the threshold for connectivity of the random graph. Linial and Meshulam's definition was as follows. $Y(n,p)$ is a complete graph on $n$ vertices, with each of the ${n \choose 3}$ triangular faces inserted independently with probability $p$, which may depend on $n$. We say that $Y(n,p)$ asymptotically almost surely (a.a.s.) has property $\mathcal{P}$ if the probability that $Y(n,p) \in \mathcal{P}$ tends to one as $n \to \infty$. Nati Linial and Roy Meshulam showed that if $\omega$ is any function that tends to infinity with $n$ and if $p = (2\log{n} + \omega) / n$ then a.a.s. $H_1( Y(n,p) , \mathbb{Z} / 2) = 0$, and if $p = (2\log{n} - \omega) / n$ then a.a.s. $H_1( Y(n,p) , \mathbb{Z} / 2) \neq 0$. (This result was later extended to arbitrary finite field coefficients and arbitrary dimension by Meshulam and Wallach. It may also be worth noting for the topologically inclined reader that their argument is actually a cohomological one, but in this setting universal coefficients gives us that homology and cohomology are isomorphic vector spaces.) Eric Babson, Chris Hoffman, and I found the threshold for vanishing of the fundamental group $\pi_1(Y(n,p))$ to be quite different. In particular, we showed that if $\epsilon > 0$ is any constant and $p \le n^{-1/2 -\epsilon}$ then a.a.s. $\pi_1 ( Y(n,p) ) \neq 0$, and if $p \ge n^{ -1/2 + \epsilon}$ then a.a.s. $\pi_1 ( Y(n,p) ) = 0$. The harder direction is to show that on the left side of the threshold the fundamental group is nontrivial, and this uses Gromov's ideas of negative curvature.
In particular, to show that $\pi_1$ is nontrivial we have to show first that it is a hyperbolic group. [I want to advertise one of my favorite open problems in this area: as far as I know, nothing is known about the threshold for $H_1( Y(n,p) , \mathbb{Z})$, other than what is implied by the above.] I was thinking recently about a cubical analogue of the Linial-Meshulam set up. Define $Z(n,p)$ to be the one-skeleton of the $n$-dimensional cube with each square two-dimensional face inserted independently with probability $p$. This should be the cubical analogue of the Linial-Meshulam model. So what are the thresholds for the vanishing of $H_1 ( Z(n,p) , \mathbb{Z} / 2)$ and $\pi_1 ( Z (n,p) )$? I just did some "back of the envelope" calculations which surprised me. It looks like $p$ must be much larger (in particular bounded away from zero) before either homology or homotopy is killed. Here is what I think probably happens. For the sake of simplicity assume here that $p$ is constant, although in reality there are $o(1)$ terms that I am suppressing. (1) If $p < \log{2}$ then a.a.s. $H_1 ( Z(n,p) , \mathbb{Z} /2 ) \neq 0$, and if $p > \log{2}$ then a.a.s. $H_1 ( Z(n,p) , \mathbb{Z} /2 ) = 0$. (2) If $p < (\log{2})^{1/4}$ then a.a.s. $\pi_1 ( Z(n,p) ) \neq 0$, and if $p > (\log{2})^{1/4}$ then a.a.s. $\pi_1 ( Z(n,p) ) = 0$. Perhaps in a future post I can explain where the numbers $\log{2} \approx 0.69315$ and $(\log{2})^{1/4} \approx 0.91244$ come from. Or in the meantime, I would be grateful for any corroborating computations or counterexamples.
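In the spirit of corroborating computations, here is a small sketch (my own code, written for this note) that computes the rank of $H_1(Z(n,p); \mathbb{Z}/2)$ exactly for tiny $n$: enumerate the kept squares, take the rank of the mod-2 boundary matrix with an xor basis, and use $\dim H_1 = (E - V + 1) - \operatorname{rank}\partial_2$, which is valid because the full 1-skeleton of the cube is connected.

```python
import itertools
import random

def h1_rank_mod2(n, p, seed=0):
    """dim H_1(Z(n,p); Z/2), where Z(n,p) is the full 1-skeleton of the
    n-cube with each square 2-face kept independently with probability p."""
    rng = random.Random(seed)
    num_vertices = 1 << n
    # Edges of the n-cube: flip one 0-bit of a vertex; index them.
    edge_index = {}
    for v in range(num_vertices):
        for i in range(n):
            if not (v >> i) & 1:
                edge_index[(v, v | (1 << i))] = len(edge_index)
    num_edges = len(edge_index)
    # Boundary of each kept square, as a GF(2) vector packed into an int.
    boundaries = []
    for i, j in itertools.combinations(range(n), 2):
        free = [b for b in range(n) if b != i and b != j]
        for bits in itertools.product((0, 1), repeat=n - 2):
            if rng.random() >= p:
                continue  # this square was not inserted
            base = sum(bit << pos for bit, pos in zip(bits, free))
            vi, vj = base | (1 << i), base | (1 << j)
            vij = vi | (1 << j)
            mask = 0
            for e in ((base, vi), (base, vj), (vi, vij), (vj, vij)):
                mask ^= 1 << edge_index[e]
            boundaries.append(mask)
    # Rank of the boundary matrix over GF(2), via an xor basis.
    basis = []
    for r in boundaries:
        for b in basis:
            r = min(r, r ^ b)
        if r:
            basis.append(r)
    # The 1-skeleton is connected, so dim Z_1 = E - V + 1.
    return (num_edges - num_vertices + 1) - len(basis)
```

For $n = 3$ and $p = 1$ this recovers $H_1$ of the cube's surface (a sphere, so rank 0), and for $p = 0$ the cycle space of the cube graph (rank $12 - 8 + 1 = 5$). Exhaustive computation like this is only feasible for very small $n$, of course, so it probes the conjecture rather than tests the limit.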
DSpace Collection: http://dspace.cityu.edu.hk:80/handle/2031/774
Recent items:
- Analytical solutions and bifurcation of nonlinear oscillators with discontinuities and impulsive systems by a perturbation-incremental method
- Mathematical modelling, analysis and computation of some complex and nonlinear flow problems
- Uniform asymptotic expansions of the Tricomi-Carlitz polynomials and the Modified Lommel polynomials
- Portfolio management, stochastic volatility and credit derivatives : three important issues in quantitative finance

Title: Analytical solutions and bifurcation of nonlinear oscillators with discontinuities and impulsive systems by a perturbation-incremental method
Authors: Wang, Hailing (汪海玲)
http://dspace.cityu.edu.hk:80/handle/2031/6973
Abstract: Nonlinear equations have been widely used in many areas of physics and engineering. They are of significant importance in mechanical and structural dynamics for the comprehensive understanding and accurate prediction of motion. Analytical solutions obtained from classical perturbation methods such as the Lindstedt-Poincaré method, the Krylov-Bogoliubov-Mitropolsky method, the method of multiple scales and the averaging method are usually accurate only for small perturbations. For nonlinear oscillators with discontinuities, an accurate analytical solution may not be easily obtained due to the nonsmooth property at the switching points. For impulsive systems, there is a sudden jump in the phase portrait. To the best of our knowledge, no harmonic balance method has ever been applied to investigate the bifurcation and continuation of periodic solutions of such systems. In this thesis, we investigate analytical solutions of nonlinear oscillators with discontinuities using a nonlinear time transformation method, and bifurcation and continuation of impulsive systems using a perturbation-incremental method.
First, we study analytical periodic solutions of a generalized Duffing-harmonic oscillator having a rational form for the potential energy by a nonlinear time transformation method. An analytical solution is expressed as a Padé approximation, which often gives a better approximation of a function than its truncated Taylor series. Periodic solutions with large amplitude and those near homoclinic/heteroclinic orbits are computed. Excellent agreement of the approximate representations with the numerical simulation has been demonstrated and discussed. We also compared the results with those from the cubication method. Next, we present a nonlinear time transformation method to obtain analytical solutions of nonlinear oscillators with discontinuities. The essence of this method is that a periodic solution is approximated by the Chebyshev polynomials with a nonlinear time s rather than the physical time t. Since the first derivative of an approximate limit cycle oscillation obtained from the present method is piecewise continuous, which agrees qualitatively with the exact solution, it gives accurate analytical solutions for nonlinear oscillators with discontinuities. In some cases, the present method gives exact solutions while other perturbation methods give only approximate solutions. For those systems where an exact solution is impossible, the approximate solution obtained from the present method is compared to He's homotopy perturbation method, which is a powerful method with good accuracy for many systems. Finally, a perturbation-incremental (PI) method is presented for the bifurcation analysis of periodic solutions of impulsive systems. For such systems, a periodic solution is also approximated by the Chebyshev polynomials instead of the Fourier series so as to overcome the sudden jump in the phase portrait.
In the perturbation step, a perturbed solution is obtained at bifurcation by solving a system of low-dimensional linear equations and is taken as an initial guess for incremental iteration. Through the incremental process, periodic solutions can be calculated to any desired degree of accuracy and their stabilities can be determined by the Floquet theory. As a parameter varies, period-doubling solutions leading to chaos can be identified.
Notes: CityU Call Number: QA867.5 .W36 2012; vi, 123 leaves : ill. 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2012.; Includes bibliographical references (leaves [101]-121)

Title: Mathematical modelling, analysis and computation of some complex and nonlinear flow problems
Authors: Li, Buyang (李步揚)
http://dspace.cityu.edu.hk:80/handle/2031/6972
Abstract: This thesis consists of two parts: (I) modelling, analysis and computation of sweat transport in textile media; (II) unconditional convergence and optimal error analysis of the Galerkin FEM for nonlinear parabolic equations. The first part of the thesis is concerned with heat and sweat transport in porous textile media, which can be viewed as a nonisothermal, multiphase and multicomponent flow with complex phase changes. We present a more precise formulation of the condensation/evaporation process with a truncated Hertz-Knudsen equation, which makes the model applicable in the general dry-wet case. We introduce a flux-type boundary condition for the fiber absorption equation to describe the absorption process in a wet environment more precisely, while the previous models with a simple saturated condition may not be realistic. Numerical simulations are performed to compare with experimental data, with both finite difference methods and finite element methods. Several practical cases are simulated for clothing assemblies with the human thermoregulation system.
Moreover, we provide optimal error estimates for an uncoupled finite difference method in one-dimensional space and a splitting finite element method in three-dimensional space. The error analysis relies on some interesting techniques used in PDE analysis, together with physical features of the model. The physical process of heat and sweat transport is governed in general by a system of nonlinear, degenerate and strongly coupled parabolic equations. However, mathematical analysis for these models is very limited due to the lack of a reasonable link between modelling in engineering and analysis in mathematics. We prove existence of weak solutions for the dynamic models with complex phase changes. The proof is based on the nature of gas convection in the mass equations and the energy equation, with physically realistic assumptions. The analysis presented in this thesis may be applied to multicomponent heat and mass transport models in many other areas, and it also provides a fundamental tool for theoretical analysis of numerical methods. The second part of the thesis is concerned with unconditional convergence and optimal error analysis of the Galerkin/mixed finite element method for nonlinear parabolic equations, with commonly used linearized semi-implicit schemes for the time discretization. To illustrate our method, we study the time-dependent nonlinear Joule heating equations and the equations of incompressible miscible flow in porous media, respectively. Optimal L2 error estimates are obtained without any time step restriction, while all the previous works required certain conditions on the time stepsize. Theoretical analysis is based on a more precise analysis of corresponding time-discrete partial differential equations. The approach used in this thesis is applicable to more general nonlinear evolution equations and many other linearized semi-implicit (or implicit) time discretizations, for which previous works often require certain restrictions on the time stepsize τ.
Notes: CityU Call Number: QC173.4.P67 L5 2012; 2, 221 leaves : ill. 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2012.; Includes bibliographical references (leaves [207]-221)

Title: Uniform asymptotic expansions of the Tricomi-Carlitz polynomials and the Modified Lommel polynomials
Authors: Lee, Kei Fung (李奇峰)
http://dspace.cityu.edu.hk:80/handle/2031/6971
Abstract: In this thesis, we derive uniform asymptotic expansions of the Tricomi-Carlitz polynomials $f_n^{(\alpha)}(x)$ and the modified Lommel polynomials $h_{n,\nu}(x)$, as $n \to \infty$, valid for $x$ in $(0,\infty)$. Since these two polynomials do not satisfy a second-order differential equation, the powerful tools developed for differential equations are not applicable. Our discussion is divided into three parts. In the first part, we derive directly from the three-term recurrence relation $(n+1)f_{n+1}^{(\alpha)}(x) - (n+\alpha)\,x\,f_n^{(\alpha)}(x) + f_{n-1}^{(\alpha)}(x) = 0$ an asymptotic expansion for $f_n^{(\alpha)}(x)$ which holds uniformly in regions containing the critical values $x = \pm 2/\sqrt{\nu}$, where $\nu = n + 2\alpha - 1/2$. This method is based on the turning-point theory for three-term recurrence relations introduced by Wang and Wong [Numer. Math. 91 (2002) and 94 (2003)]. In the second part, the expansion is derived by using the cubic transformation for the integral $\int_c J(s;t)\exp[\nu\phi(s;t)]\,ds$, where $J(s;t)$ and $\phi(s;t)$ are analytic functions of $s$, $t$ is a bounded real parameter, and $\phi(s;t)$ has two saddle points $s_\pm(t)$ which coalesce as $t$ tends to some real number $t_0$. Then we apply the integration-by-parts technique suggested by Bleistein. As an application, an asymptotic expansion for the zeros of the Tricomi-Carlitz polynomials is derived. The validity for bounded $t$ can be extended to unbounded $t$ by using a sequence of rational functions introduced by Olde Daalhuis and Temme. The expansion involves the Airy functions and their derivatives. Error bounds are also given for the one-term and two-term approximations.
Finally, we derive an asymptotic expansion for the modified Lommel polynomials h_{n,ν}(t/N) which holds uniformly in regions containing the critical values x = ±1/N, where N = n + ν. This method is again based on the turning-point theory for three-term recurrence relations; an asymptotic expansion for their zeros is also derived. Notes: CityU Call Number: QA404.5 .L44 2012; iv, 148 leaves : ill. 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2012.; Includes bibliographical references (leaves [144]-148) 2012-01-01T00:00:00Z Gao, Ming (高明) http://dspace.cityu.edu.hk:80/handle/2031/6970 2013-06-13T02:37:42Z 2011-01-01T00:00:00Z Title: Portfolio management, stochastic volatility and credit derivatives : three important issues in quantitative finance Authors: Gao, Ming (高明) Abstract: During the financial crisis, most stock markets experienced large drawdowns from their peaks, volatilities in financial markets increased significantly, and the credit market became illiquid. This thesis consists of three parts which study three important topics related to the issues listed above. First, we propose a dynamic investment strategy which follows the market closely when it soars, and retains part of the profit gained from the soaring market when the market experiences dramatic drawdown. We analyze the behavior of such an investment strategy and validate it by an empirical study. Secondly, we derive an analytic asymptotic formula for pricing European options in the fast mean-reverting stochastic volatility model. Approximations available in the literature fail to capture the behavior of the option prices when the current volatility is very large. Our new formula is in excellent agreement with fully numerical solutions of the option prices. Thirdly, we propose a pricing framework for credit derivatives in illiquid markets. 
In our framework, the default intensity, the position of the current portfolio, the trading size and the risk aversion of the investor are key inputs for pricing credit derivatives. One can determine a quote price for a trade at the current position, and determine the trading size for given market prices. Notes: CityU Call Number: HG106 .G36 2011; iv, 129 leaves : ill. 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2011.; Includes bibliographical references (leaves [124]-129) 2011-01-01T00:00:00Z
New Rochelle Science Tutor ...My resume is available upon individual requests. References can also be provided if requested. My goal is to make the complicated easy and understandable. 4 Subjects: including biochemistry, anatomy, physiology, pharmacology ...I passed the first time I took my nursing board examinations (NCLEX). I know that it is very difficult to just take practice exams from a review book. Learning should be fun and relevant, not dull and boring. I believe that the right 1:1 tutor-student relationship is what makes the difference! 2 Subjects: including nursing, NCLEX I earned my Master of Science degree in Biotechnology at New York University and my Bachelor's degree in Biological Sciences at the State University of New York at Buffalo. Both my junior and senior years at Buffalo I tutored evolutionary biology, cell biology, molecular biology, developmental biology... 6 Subjects: including biology, physiology, physical science, ecology ...I have been a private tutor since 2005 and have guided many students through the often stressful process of standardized testing. At Harvard, I concentrated in Visual and Environmental Studies and Literature. I'm currently an MFA candidate at the Graduate Film Program at NYU Tisch, and my short... 36 Subjects: including biology, calculus, chemistry, writing ...I tutored college and high school students alike in maths and sciences, and achieved great success. I learned that it's important to assess the existing knowledge of the student in regard to the specific subject. This allows me to bring the additional and/or missing pieces of information necessary to support future learning. 5 Subjects: including chemistry, French, algebra 1, algebra 2
"Periodicity" in the mass of planets? I've recently come across a very odd article (link at the end of my post) on the internet, and I'd like to hear other opinions on this topic. The article claims that the mass ratio of Earth with any other planet in our solar system can be described by the formula 1.228^n, where n is always extremely close to an integer. It goes on with the moons in the solar system: the mass ratio of a planet with any of its moons can again be described by 1.228^n, where n is now always extremely close to an integer or a "half-integer" (... -1.5, -1, -0.5, 0, 0.5, 1, 1.5 ...). The article includes further mass ratios (e.g. the ratio of the mass of Earth to the mass of an electron, and even ratios of the distances of the planets to the sun), and the formula 1.228^n is - according to the article - always very precise. (It is also pointed out that there's some (alleged) redshift quantization of QSOs with a periodicity of 1.23.) EDIT: I'm actually only concerned about the mass "quantization" of planets in the solar system. So what should one think about all of this? Link to the article:
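The claim is easy to test numerically yourself: for each planet solve 1.228^n = m_planet/m_Earth, i.e. n = ln(m_planet/m_Earth)/ln 1.228, and see how far n falls from the nearest integer. A quick sketch using standard approximate planetary masses (the periodicity claim itself is the article's, not mine):

```python
# How close is n = log(m / m_earth) / log(1.228) to an integer?
# Masses are standard approximate values in kg.
import math

masses = {
    "mercury": 3.301e23,
    "venus":   4.867e24,
    "earth":   5.972e24,
    "mars":    6.417e23,
    "jupiter": 1.898e27,
    "saturn":  5.683e26,
}

base = math.log(1.228)
for name, m in masses.items():
    n = math.log(m / masses["earth"]) / base
    print(f"{name:8s} n = {n:8.3f}   distance to integer = {abs(n - round(n)):.3f}")
```

Running this shows that some planets land close to an integer while others do not, which is a useful sanity check before reading anything physical into the pattern: with a base this close to 1 the exponents are large, so fairly small distances to an integer are not surprising by chance.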
Calculating Electric Potential in A Uniform Electric Field Here's the problem: Two points are in an E-field: Point 1 is at (x1,y1) = (4,4) in m, and Point 2 is at (x2,y2) = (13,13) in m. The Electric Field is constant, with a magnitude of 65 V/m, and is directed parallel to the +x-axis. The potential at point 1 is 1000 V. Calculate the potential at point 2. IT IS ABSOLUTELY SICKENING How Many Times I Attempted this Seemingly Easy Problem And Got it WRONG...so apparently, this isn't as easy as I thought ! I KNOW this problem Has to utilize the formula V = Ed (or perhaps V = Edcos(theta) ???) for d i get sqrt((13-4)^2 + (13-4)^2) = 12.7279 and E is given soooooooooo... for the change in potential i get 827. I then add that to the potential of point 1 to get the potential of point 2 and I get 1827. But apparently that's wrong. So are the answers 1000, 1585, and 1292 which I got from slightly tweaking the main formula in different ways. I have no clue what else to try...any help ?
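For what it's worth, the standard setup is: in a uniform field, V2 − V1 = −E·d, where d is the displacement vector. Only the component of the displacement along E contributes (here the x-direction), and the potential drops in the direction of the field. A sketch of that arithmetic:

```python
# Potential difference in a uniform E-field: V2 - V1 = -E . d (dot product).
# E points along +x, so only the x-displacement matters; using the full
# straight-line distance sqrt(9^2 + 9^2) in V = E*d is the wrong d.
E = (65.0, 0.0)               # V/m, parallel to +x
p1, p2 = (4.0, 4.0), (13.0, 13.0)
V1 = 1000.0                   # V at point 1

dot = E[0] * (p2[0] - p1[0]) + E[1] * (p2[1] - p1[1])   # E . d = 65 * 9
V2 = V1 - dot                 # 1000 - 585 = 415 V
print(V2)
```

Note the sign: moving along the field direction lowers the potential, which is why adding E·d to V1 (giving 1827 V) comes out wrong.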
Hasbrouck Heights ACT Math Tutors ...If you have the idea of what to do on a problem, you do not need to complete 10 similar problems. As such, I like to spend more time on the why than the what. If you choose me as your tutor, your math skills will improve. 26 Subjects: including ACT Math, calculus, statistics, physics ...I also have extensive experience applying mathematics to real problems arising in the oil, aerospace and investment management businesses. My primary focus is calculus, but I cover other areas at lower or higher levels as well. I will help students gain an advantage in preparing for college entrance and advanced placement exams. 11 Subjects: including ACT Math, calculus, algebra 2, algebra 1 ...A Columbia University graduate, with a B.S. in Mechanical Engineering, I have years of experience guiding students towards excellence. Whether coaching a student to the Intel ISEF (2014) or to first rank in their high school class, I advocate a personalized educational style: first identifying w... 32 Subjects: including ACT Math, reading, calculus, physics ...I'm very familiar with it. I have a strong background in mathematics, including statistics and have applied this knowledge to the statistical study of economics: econometrics. I recently tutored a Brown undergraduate in the subject and helped him better understand the mathematical underpinnings of the material. 40 Subjects: including ACT Math, chemistry, reading, English ...My fee varies based on the level of my student and is negotiable.I have 15 years experience teaching Algebra 1 to both honors 8th graders and high school 9th graders. I have taught all levels of high school biology for 9 years, including AP Biology for the last 6. I attended the AP Institute at Manhattan College 2 summers ago to learn the new college board and lab changes. 16 Subjects: including ACT Math, reading, geometry, biology
An introduction to computational algebraic geometry and commutative algebra Results 1 - 10 of 89 - IEEE/ACM Transactions on Networking , 2003 "... Abstract—We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by Li et al., who examined the network capacity of multicast networks, we extend the network coding framework to ar ..." Cited by 518 (89 self) Add to MetaCart Abstract—We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by Li et al., who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays. Index Terms—Algebraic coding, network information theory, network robustness. I. , 2002 "... These are the lecture notes for ten lectures to be given at the CBMS ..." - Signal Processing , 1996 "... Symmetric tensors of order larger than two arise more and more often in signal and image processing and automatic control, because of the recent complementary use of High-Order Statistics (HOS). However, very few special purpose tools are at disposal for manipulating such objects in engineering prob ..." 
Cited by 67 (20 self) Add to MetaCart Symmetric tensors of order larger than two arise more and more often in signal and image processing and automatic control, because of the recent complementary use of High-Order Statistics (HOS). However, very few special purpose tools are at disposal for manipulating such objects in engineering problems. In this paper, the decomposition of a symmetric tensor into a sum of simpler ones is focused on, and links with the theory of homogeneous polynomials in several variables (i.e. quantics) are pointed out. This decomposition may be seen as a formal extension of the Eigen Value Decomposition (EVD), known for symmetric matrices. By reviewing the state of the art, quite surprising statements are emphasized, that explain why the problem is much more complicated in the tensor case than in the matrix case. Very few theoretical results can be applied in practice, even for cubics or quartics, because proofs are not constructive. Nevertheless in the binary case, we have more freedom to devise numerical algorithms. Keywords. Tensors, Polynomials, Diagonalization, EVD, High-Order Statistics, Cumulants. 1 , 2008 "... We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory ..." Cited by 62 (9 self) Add to MetaCart We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. 
We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificate, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular, some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background. - J. of Complexity , 1999 "... We first review the basic properties of the well known classes of Toeplitz, Hankel, Vandermonde, and other related structured matrices and reexamine their correlation to operations with univariate polynomials. Then we define some natural extensions of such classes of matrices based on their correlat ..." Cited by 51 (29 self) Add to MetaCart We first review the basic properties of the well known classes of Toeplitz, Hankel, Vandermonde, and other related structured matrices and reexamine their correlation to operations with univariate polynomials. Then we define some natural extensions of such classes of matrices based on their correlation to multivariate polynomials. We describe the correlation in terms of the associated operators of multiplication in the polynomial ring and its dual space, which allows us to generalize these structures to the multivariate case. Multivariate Toeplitz, Hankel, and Vandermonde matrices, Bezoutians, algebraic residues and relations between them are studied. Finally, we show some applications of this study to root-finding problems for a system of multivariate polynomial equations, where the dual space, algebraic residues, Bezoutians and other structured matrices play an important role. 
The developed techniques enable us to obtain a better insight into the major problems of multivariate polynomial computations and to improve substantially the known techniques for the study of these problems. In particular, we simplify and/or generalize the known reduction of multivariate polynomial systems to the matrix eigenproblem, the derivation of the Bézout and Bernshtein bounds on the number of the roots, and the construction of multiplication tables. From the algorithmic and computational complexity point of view, we yield an acceleration by one order of magnitude of the known methods for some fundamental problems of solving multivariate polynomial systems of equations. , 2004 "... We present a new technique for the generation of non-linear (algebraic) invariants of a program. Our technique uses the theory of ideals over polynomial rings to reduce the non-linear invariant generation problem to a numerical constraint solving problem. So far, the literature on invariant generati ..." Cited by 41 (4 self) Add to MetaCart We present a new technique for the generation of non-linear (algebraic) invariants of a program. Our technique uses the theory of ideals over polynomial rings to reduce the non-linear invariant generation problem to a numerical constraint solving problem. So far, the literature on invariant generation has been focussed on the construction of linear invariants for linear programs. Consequently, there has been little progress toward non-linear invariant generation. In this paper, we demonstrate a technique that encodes the conditions for a given template assertion being an invariant into a set of constraints, such that all the solutions to these constraints correspond to non-linear (algebraic) loop invariants of the program. We discuss some trade-offs between the completeness of the technique and the tractability of the constraint-solving problem generated. The application of the technique is demonstrated on a few examples. 
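As a toy illustration of the kind of non-linear (algebraic) loop invariant such techniques target (a folklore example, not one taken from the cited paper): for the loop x := x + y + 1; y := y + 2 started at (0, 0), the polynomial identity y² = 4x holds at every iteration, as a brute-force check confirms:

```python
# Brute-force check of the algebraic loop invariant y^2 == 4x for
# the loop  x := x + y + 1;  y := y + 2  started from (x, y) = (0, 0).
# Inductive argument: if y^2 = 4x, then (y+2)^2 = y^2 + 4y + 4
#                                          = 4x + 4y + 4 = 4(x + y + 1).
x, y = 0, 0
for _ in range(1000):
    assert y * y == 4 * x, (x, y)   # candidate polynomial invariant
    x, y = x + y + 1, y + 2
print("invariant y^2 = 4x held for 1000 iterations")
```

Template-based invariant generation works the other way around: it posits a polynomial template such as a·y² + b·x + c = 0 and derives constraints on (a, b, c) from the loop body, rather than guessing the identity and testing it.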
- IEEE Transactions on Automatic Control , 2002 "... Abstract—We consider distributed parameter systems where the underlying dynamics are spatially invariant, and where the controls and measurements are spatially distributed. These systems arise in many applications such as the control of vehicular platoons, flow control, microelectromechanical system ..." Cited by 33 (0 self) Add to MetaCart Abstract—We consider distributed parameter systems where the underlying dynamics are spatially invariant, and where the controls and measurements are spatially distributed. These systems arise in many applications such as the control of vehicular platoons, flow control, microelectromechanical systems (MEMS), smart structures, and systems described by partial differential equations with constant coefficients and distributed controls and measurements. For fully actuated distributed control problems involving quadratic criteria such as linear quadratic regulator (LQR), H2, and H∞, optimal controllers can be obtained by solving a parameterized family of standard finite-dimensional problems. We show that optimal controllers have an inherent degree of decentralization, and this provides a practical distributed controller architecture. We also prove a general result that applies to partially distributed control and a variety of performance criteria, stating that optimal controllers inherit the spatial invariance structure of the plant. Connections of this work to that on systems over rings, and systems with dynamical symmetries are discussed. Index Terms—Distributed control, infinite-dimensional systems, optimal control, robust control, spatially invariant systems. , 1996 "... In this paper the different algebraic varieties that can be generated from multiple view geometry with uncalibrated cameras have been investigated. The natural descriptor, V_n, to work with is the image of P^3 in P^2 × P^2 × ··· × P^2 under a corresponding product ..." 
Cited by 32 (4 self) Add to MetaCart In this paper the different algebraic varieties that can be generated from multiple view geometry with uncalibrated cameras have been investigated. The natural descriptor, V_n, to work with is the image of P^3 in P^2 × P^2 × ··· × P^2 under a corresponding product of projections, (A_1 × A_2 × ··· × A_m). Another descriptor, the variety V_b, is the one generated by all bilinear forms between pairs of views, which consists of all points in P^2 × P^2 × ··· × P^2 where all bilinear forms vanish. Yet another descriptor, the variety V_t, is the variety generated by all trilinear forms between triplets of views. It has been shown that when m = 3, V_b is a reducible variety with one component corresponding to V_t and another corresponding to the trifocal plane. Furthermore, when m = 3, V_t is generated by the three bilinearities and one trilinearity; when m = 4, V_t is generated by the six bil... - J. Algebraic Geom "... We introduce the multigraded Hilbert scheme, which parametrizes all homogeneous ideals with fixed Hilbert function in a polynomial ring that is graded by any abelian group. Our construction is widely applicable, it provides explicit equations, and it allows us to prove a range of new results, includ ..." Cited by 29 (2 self) Add to MetaCart We introduce the multigraded Hilbert scheme, which parametrizes all homogeneous ideals with fixed Hilbert function in a polynomial ring that is graded by any abelian group. Our construction is widely applicable, it provides explicit equations, and it allows us to prove a range of new results, including Bayer's conjecture on equations defining Grothendieck's classical Hilbert scheme and the construction of a Chow morphism for toric Hilbert schemes. 1. , 1995 "... 
There are very close connections between the arithmetic of integer lattices, algebraic properties of the associated ideals, and the geometry and the combinatorics of corresponding polyhedra. In this paper we investigate the generating sets ("Gröbner bases") of integer lattices that correspond to the Gröbner bases of the associated binomial ideals. Extending results by Sturmfels & Thomas, we obtain a geometric characterization of the universal Gröbner basis in terms of the vertices and edges of the associated corner polyhedra. In the special case where the lattice has finite index, the corner polyhedra were studied by Gomory, and there is a close connection to the "group problem in integer programming." We present exponential lower and upper bounds for the maximal size of a reduced Gröbner basis. The initial complex of (the ideal of) a lattice is shown to be dual to the boundary of a certain simple polyhedron.
PUBLICATIONS and IN PROGRESS 1. The Second P=?NP Poll. (A Guest column in Lane Hemaspaandra's Complexity Column.) POLL2012 2. The Complexity of Grid Coloring. (with Daniel Apon and Kevin Lawler). GRIDCOL 3. Three proofs of the Hypergraph Ramsey Theorem (with Andy Parrish and Sanai Sandow). HYPER 4. A Statement in Combinatorics that is Independent of ZFC. (with Stephen Fenner) In Progress. ZFCRADO.PDF 5. The Tug of War Game. William Gasarch and Nick Sovich and Paul Zimand. TUG.PDF 6. Rectangle Free Colorings of Grids. (with Fenner, Glover, and Purewal). GRIDPAPER.PDF GRIDTALK.PDF In progress. 7. Limits on the Computational Power of Random Strings. (with Eric Allender and Luke Friedman) RANDOMSTRINGS.PDF To appear in Information and Computation Special issue devoted to ICALP 2011. 8. Lower Bounds on van der Waerden Numbers: Randomized- and Deterministic-Constructive (with Bernhard Haeupler) LOWERVDW Electronic Journal of Combinatorics Vol 18, 2011 9. The complexity of finding SUBSEQ(A) (with Fenner and Postow) Theory of Computing Systems Vol 45, No. 3, 2009, 577-612. SUBSEQ.PDF (Version here has appendices that the journal version did not.) 10. The Complexity of Learning SUBSEQ(A) (with Stephen Fenner and Brian Postow) Journal of Symbolic Logic Vol. 74, No. 3, 2009, 939-975. learnsubseq.PDF Earlier Conf Version: learnsubseqCONF.PDF. 11. Inferring answers from data (with A. Lee) Journal of Computing and Systems Sciences, (To appear) Conference Version in COLT97. ANSWERS.PDF 12. Finding Large 3-free Sets I: the Small n Case (with James Glenn and Clyde Kruskal), Journal of Computing and Systems Sciences, Volume 74, No. 4, June 2008. 628-655. 3apI.PDF 13. A Nearly Tight Lower Bound for Restricted Private Information Retrieval Protocols (with Richard Beigel and Lance Fortnow), Computational Complexity. Vol 15, No 1, 2006, 82-91. pirlower.PDF 14. The Multiparty Communication Complexity of Exact-T: Improved Bounds and New Problems. 
(with Richard Beigel and James Glenn), Mathematical Foundations of Computer Science 2006. (I post the long version, which is not the same as the conference version. It has more in it.) multicomm.PDF 15. Lower bounds on the Deterministic and Quantum Communication Complexity of HAM(n,a). (with A. Ambainis, A. Srinivasan, A. Utis) Proceedings of the 17th International Symposium on Algorithms (ISAAC), 2006. HAM.PDF 16. Constant Time Parallel Sorting: An Empirical View (with E. Golub and C. Kruskal) Journal of Computer and Systems Science Vol 67, 2003, pages 63-91. emp-p-sort.PDF, 17. Some connections between bounded query classes and non-uniform complexity (with A. Amir and R. Beigel), Information and Computation Vol 186, 2003, 104-139. Earlier Version in CCC90. Link is to the long version that is also at the ECCC archive. NONUNIFORM.PDF 18. A Survey on Private Information Retrieval Bulletin of the European Association for Theoretical Computer Science Vol 82, February 2004, pages 72-107. Computational Complexity Column. pirsurvey.PDF 19. When Does a Random Robin Hood Win? (with E. Golub and A. Srinivasan) Theoretical Computer Science Vol 304, 2003, pages 477-484. robinhood.PDF 20. Gems in the field of bounded queries. Computability and Models Edited by Cooper and Goncharov. 2003. GEMS.PDF 21. Automata Techniques for Query Inference Machines (with G. Hird), Annals of Pure and Applied Logic Vol. 117, 171-203, 2002. AUT-TECH-QUERY-INF.PDF Earlier version in COLT95, with title Reduction in Learning via Queries 22. Max and min limiters (with James Owings and Georgia Martin), Archives of Mathematical Logic Vol. 41, 2002, pp 483-495. MAX-MIN-DELIM.PDF 23. AHA: An Illuminating Perspective. (With Dan Garcia and David Ginat) Thirty-third annual SIGCSE Technical Symposium on Computer Science Education, Feb 2002. (AHA.PDF) 24. The P=?NP Poll. SIGACT NEWS 2002. Complexity Theory Column. POLL.PDF 25. 
The Communication Complexity of Enumeration, Elimination, and Selection (with Andris Ambainis, Harry Buhrman, Bala Kalyanasundaram, Leen Torenvliet) Vol. 63, pages 148-185, 2001. (Special issue for COMPLEXITY 2000). COMM.PDF 26. A Survey of Constant Time Parallel Sorting, for Bulletin of the European Association for Theoretical Computer Science (with Evan Golub and Clyde Kruskal), Vol 72, pages 84-102, October 2000, Computational Complexity Column. SURVEY-CONST-TIME-SORTING.PDF 27. Squares in a Square: An On-line question (with A. Ambainis). Geocombinatorics, Vol X, No 1, 2000 SQUARES.PDF. 28. Computability, Handbook of Discrete and Combinatorial Mathematics. Edited by Kenneth Rosen. Published by CRC Press (Boca Raton, Florida). COMPUT.PDF 29. The Complexity of ODD(n,A) (with R. Beigel, M. Kummer, G. Martin, T. McNichol, and F. Stephan) Journal of Symbolic Logic, Vol. 65, 1-18, 2000. Earlier Version in MFCS96. ODD.PDF 30. A techniques-oriented survey of bounded queries. (with Frank Stephan). Models and Computability (invited papers from Logic Colloquium '97) (Lecture Note Series 259), Edited by Cooper and Truss. London Mathematical Society 117-156, 1999. Forschungsberichte Mathematische Logik 32 / 1998, Mathematisches Institut, Universitaet Heidelberg, Heidelberg, 1998. BDQ-SURVEY-TECH.PDF 31. On the Number of Automorphisms of a Graph (with R. Beals, R. Chang and J. Toran), Chicago Journal of Theoretical Computer Science. February 1999. Earlier version in CCC95. NUMAUTO.PDF 32. When can one load a set of dice so that the sum is uniformly distributed? (with C. Kruskal) Mathematics Magazine. Vol. 72, No. 2, 1999, pp 133-138. DICE.PDF 33. A Survey of Recursive Combinatorics. Handbook of Recursive Mathematics Volume 2. Edited by Ershov, Goncharov, Marek, Nerode, and Remmel. 1998. Pages 1041-1176. Published by Elsevier. RCOMBSUR.PDF 34. Addition in lg(n) + O(1) Steps on Average: A Simple Analysis (with R. Beigel, M. Li, L. Zhang), Theoretical Computer Science. Vol 191, 1998, 245-248. 
ADD.PDF 35. Recursion theory and Reverse Mathematics (with Jeffery Hirst). Mathematical Logic Quarterly. Vol. 44, 1998, 465-473. RR.PDF 36. On the Finiteness of the Recursive Chromatic Number (with A. Lee). Annals of Pure and Applied Logic Vol. 93, 73-81, 1998. FINITE-REC-CHROM-NUMBER.PDF 37. Classification via Information (with M. Pleszkoch, M. Velauthapillai, and F. Stephan), Annals of Mathematics and Artificial Intelligence. Vol. 23, 147-168, 1998. CLASSIFICATION.PDF Earlier version in ALT94. 38. Relative Sizes of Learnable Sets (with L. Fortnow, R. Freivalds, M. Kummer, S. Kurtz, C. Smith, and F. Stephan), Theoretical Computer Science Vol 197(1-2):139-156, 1998. Earlier version in ICALP95 with the name Measure, Category, and Learning Theory MEASURE.PDF 39. Bounded Queries in Recursion Theory (With Georgia Martin). Birkhäuser. 1998. 40. Bounded Queries and Approximation (with R. Chang and C. Lund), SIAM Journal of Computing, Vol. 26, 1997, 188-209 BDQAPPROX.PDF Earlier version in FOCS93 did not have Lund as co-author. 41. Implementing WS1S via Finite Automata. Automata Implementation. (with James Glenn) In Workshop on Implementing Automata-1996 Edited by Raymond, Wood, and Yu. Lecture Notes in Computer Science 1260. 1997 WIA96.PDF 42. Binary search and recursive graph problems (with K. Guimaraes) Theoretical Computer Science Vol 181, 1997, 119-139. (Special issue for LATIN 95 conference). BINARY.PDF Subsumes the conference papers On the number of components of a recursive graph from LATIN 92. and Unbounded search and recursive graphs from LATIN 95. 43. Asking Questions Versus Verifiability (with M. Velauthapillai), Fundamenta Informaticae Vol. 30, 1-9, 1997 VERIFY.PDF Earlier version in AII92. 44. A Survey of Inductive Inference with an Emphasis on Learning via Queries (with C. Smith). Complexity, Logic, and Recursion Theory. Edited by A. Sorbi. Published by M. Dekker. Volume 187. 1997. 45. The Complexity of Problems, Advances in Computers Volume 43. 
Edited by Marvin Zelkowitz. Published by Academic Press. 1996. COMPLEXITY.PDF 46. Frequency Computation and Bounded Queries (with R. Beigel and E. Kinber) Theoretical Computer Science, Vol. 163, 1996, 177-192. Earlier version in CCC95. BDQFREQ.PDF 47. Learning via Queries with Teams and Anomalies (with E. Kinber, M. Pleszkoch, C. Smith, and T. Zeugmann), Fundamenta Informaticae, Vol. 23, Number 1, May 1995, pp. 67-89. LVQTEAMS.PDF Earlier version in COLT90. 48. Recursion theoretic models of learning: some results and intuitions, (with C. Smith) Annals of Mathematics and Artificial Intelligence, Vol. 15, II, 1995, pp. 155-166. MODELS.PDF 49. OptP-Completeness as the Normal Behavior of NP-Complete Problems (with M. Krentel and K. Rappoport), Math Systems Theory, Vol. 28, 1995, 487-514 BDQOPT.PDF 50. Extremes in the Degrees of Inferability (with L. Fortnow, S. Jain, E. Kinber, M. Kummer, S. Kurtz, M. Pleszkoch, T. Slaman, F. Stephan, R. Solovay), Annals of Pure and Applied Logic, Vol. 66, 1994, pp. 231-276. EXTREMES.PDF Subsumes both Learning via Queries to an Oracle from COLT89 and Degrees of Inferability from COLT92. 51. On Honest Polynomial Reductions and P=NP (with R. Downey, and M. Moses), Annals of Pure and Applied Logic, Vol. 70, 1994, pp. 1-27. Earlier version in CCC89. HONEST.PDF (The version online is the CCC89 version.) 52. Terse, Superterse, and Verbose Sets (with R. Beigel, J. Gill, and J. Owings), Information and Computation, Vol. 103, 1993, pp. 68-85, 1993. BDQTERSE.PDF 53. On Checking Versus Evaluation of Multiple Queries (with Lane Hemachandra and Albrecht Hoene), Information and Computation, Vol. 105, 1993, pp. 72-93. CHECK.PDF Earlier version in MFCS90. 54. Index Sets in Recursive Combinatorics (with G. Martin), Logical Methods (In honor of Anil Nerode's Sixtieth Birthday). Edited by Crossley, Remmel, Shore, and Sweedler. 1993. Published by Birkhäuser, Boston. 55. Learning via Queries to [+,<] (with M. Pleszkoch and R. 
Solovay), Journal of Symbolic Logic, LVQPLUS.PDF Earlier version in COLT90 56. Learning Programs with an Easy to Calculate Set of Errors (with Rameshkumar Sitarman, C. Smith, and Mahendran Velauthapillai), Fundamentica Informaticae, Vol. 16, No. 3-4, pp. 355-370, 1992. ERRORS.PDF Earlier version appearedin COLT88 and AII89. 57. Learning via Queries (with C. Smith), Journal of the Association of Computing Machinery, Vol. 39, 1992, pp. 649-675. LVQ.PDF, Earlier versions appeared at COLT88 and FOCS88. 58. Selection Problems using m-ary queries (with K. Guimaraes and J. Purtilo), Computational Complexity, Vol. 2, 1992, pp. 256-276. ARITY.PDF 59. The Mapmaker's Dilemma (with R. Beigel), Discrete Applied Math (Special Issue on Theoretical Computer Science), Vol. 34, 1991, pp. 37-48. MAP.PDF 60. On Selecting the k Largest with Restricted Quadratic Queries, Information Processing Letters, Vol. 38, 1991, pp. 193-195. 61. A Survey of Bounded Queries in Recursion Theory, Sixth Annual Conferences on Structure in Complexity Theory, Chicago, June 1991. BDQSUR.PDF 62. Training Sequences (with D. Angluin and C. Smith), Theoretical Computer Science, Vol. 66, 1989, pp. 255-272. TRAINING.PDF Earlier version without Angluin at AII86 was called On the inference of sequences of functions 63. On the Complexity of Finding the Chromatic Number of a Recursive Graph I: The Bounded Case (with R. Beigel), Annals of Pure and Applied Logic, Vol. 45, 1989, pp. 1-38. FINDCHROMNUMBER1.PDF 64. On the Complexity of Finding the Chromatic Number of a Recursive Graph II: The Unbounded Case (with R. Beigel), Annals of Pure and Applied Logic, Vol. 45, 1989, pp. 227-247. FINDCHROMNUMBER2.PDF 65. Bounded Query Classes and the Difference Hierarchy (with R. Beigel and L. Hay), Archive for Math. Logic, Vol. 29, 1989, pp. 69-84. BDQDIFF.PDF 66. Nondeterministic Bounded Query Reducibilities (with R. Beigel, and J. Owings), Annals of Pure and Applied Logic, Vol. 41, 1989, pp. 107-118. BDQ-NONDET.PDF 67. 
Polynomial Terse Sets (with A. Amir), Information and Computation, Vol. 77, No. 1, 1988, pp. 37-56. Earlier version in CCC87. AMIRGASARCH.PDF 68. Oracles for Deterministic vs. Alternating Classes, SIAM Journal of Computing, Vol. 16, Aug 1987, pp. 613-627. ORACLESEVSSIG.PDF 69. Oracles: Three New Results. Marcel Dekker Lecture Notes in Pure and Applied Mathematics Vol. 106, Edited by D.W. Kueker, E.G.K. Lopez-Escobar, and C.H. Smith, 1987, pp. 219-252. 70. Relativizations Comparing NP and Exponential Time (with S. Homer), Information and Control, Vol. 58, July 1983, pp. 88-100. ORACLESEVSNP.PDF Next: About this document ... William Gasarch 2013-07-31
Math Forum Discussions
Topic: How to teach calculus
Replies: 1    Last Post: Feb 16, 2005 11:23 PM

How to teach calculus
Posted: Feb 16, 2005 9:50 AM

Hi. I'm curious to ask what methods people use to teach calculus in school. It has been a long time since I was in school, and hence I can't remember how I was taught. But I cannot remember being taught any way other than basically rule-based, i.e. if we have an equation f(x) = x^n then f'(x) = nx^(n-1). Note: my schooling was in New

I've been looking at SOS Math as a resource for CompSci students to brush up on maths. I must say that for the introductory calculus section, I really like the way that they teach it, first showing how you can derive the rules for yourself, and then going on to the product rule, quotient rule, etc., once the basic process makes sense.

Can I ask how calculus is taught in schools these days? Do people start this way, or is it a matter of just introducing a rule such as "the derivative of x^2 is 2x" and expecting it to be memorised?

PS: I've noted some messages on this group bemoaning students' attitudes to maths. At university level *some* students realise their lack of maths is crippling, and do become more motivated.

Date        Subject                        Author
2/16/05     How to teach calculus          Ross Clement
2/16/05     Re: How to teach calculus      Jim Sprigs
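The "derive it yourself" approach the poster prefers can be demonstrated in a few lines of Python (my own illustration, not taken from the SOS Math site): evaluate the difference quotient from the definition of the derivative and watch the power rule f'(x) = nx^(n-1) emerge numerically, rather than handing it down as a rule to memorise.

```python
def numerical_derivative(f, x, h=1e-6):
    """Symmetric difference quotient: the definition of f'(x), before any rules."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The power rule f(x) = x^n  =>  f'(x) = n * x^(n-1) falls out of the definition:
for n in (2, 3, 5):
    at = 1.7
    approx = numerical_derivative(lambda t: t ** n, at)
    exact = n * at ** (n - 1)
    print(n, round(approx, 6), round(exact, 6))
```

For f(x) = x^2 at x = 3 the quotient is exactly ((3+h)^2 - (3-h)^2) / 2h = 6, so even the crude numerical version matches the rule to many digits.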
Any smart guys here (substitutions)?
May 10th 2011, 04:10 AM #1

I have a question on how to "combine" substitutions, in the sense that there is a dependency between two kinds of substitutions. I'll try to make this concrete with an example. I have a structure consisting of a pair:

$\mathcal{X} := \langle \mathcal{L} \times 2^\mathcal{L} \rangle$ where $\mathcal{L} := \{a,b,c,...,z,aa,bb,...,ab,ac,...\}$

$\mathcal{L}$ is merely a set of labels composed from small letters. Examples of elements of $\mathcal{X}$ are:

$x_1 = \langle a, \{ b, c, d\}\rangle$
$x_2 = \langle e, \{ f, g, h\}\rangle$
$x_3 = \langle i, \{ j, k, l\}\rangle$
$x_1, x_2, x_3 \in \mathcal{X}$

$\mathcal{Y}$ is the power set of $\mathcal{X}$:

$\mathcal{Y} := 2^\mathcal{X}$
$y = \{ x_1\} \cup \{ x_2\} \cup \{x_3\}$, $y \in \mathcal{Y}$

Moreover, there are two types of substitutions, A and B:

$a_1 = \{ a \mapsto aa\}$
$a_2 = \{ e \mapsto ee\}$
$a_3 = \{ i \mapsto ii\}$
$a_1, a_2, a_3 \in \mathcal{A}$

$b_1 = \{ b \mapsto bb, c \mapsto cc, d \mapsto dd \}$
$b_2 = \{ f \mapsto ff, g \mapsto gg, h \mapsto hh \}$
$b_3 = \{ j \mapsto jj, k \mapsto kk, l \mapsto ll \}$
$b_1, b_2, b_3 \in \mathcal{B}$

Substitutions of type A work on the first element of an element of $\mathcal{X}$, while substitutions of type B work on the second element.

$\mathcal{S} := 2^{\mathcal{A} \times \mathcal{B}}$

$\mathcal{S}$ represents the dependency between substitutions of types A and B; that is, they are grouped in pairs.

$s = \{ \langle a_1, b_1\rangle, \langle a_2, b_2\rangle, \langle a_3, b_3\rangle \}$, $s \in \mathcal{S}$

What I wonder is how to define and apply these substitutions correctly to achieve the following:

$y | s \equiv \{ \langle aa, \{ bb, cc, dd \} \rangle , \langle ee, \{ ff, gg, hh \} \rangle , \langle ii, \{ jj, kk, ll \} \rangle\}$

where $y \in \mathcal{Y}$, $s \in \mathcal{S}$, and | represents the operator that applies the substitutions correctly. That is, a substitution of type B (the second element of a pair in S) is only applied if the corresponding substitution of type A (the first element) is applied. Any ideas?

May 12th 2011, 01:56 PM #3 (MHF Contributor)

What do you want to get: a strict mathematical description, code in some programming language, or something else? I would start by defining the result of applying a substitution of type B to a set. An important issue is when s = {<a1,b1>, <a2,b2>}, y = {x}, and both a1 and a2 are applicable to the first element of x. What happens then?

May 13th 2011, 01:42 AM #4

Thanks very much for your response! Appreciated! I'm trying to come up with a concise mathematical notation that describes the "combined" action of the substitutions. That said, I'm not sure how to do this in a "proper" way. Obviously, I could write a textual description telling how the desired result should be obtained, but I prefer a pure mathematical notation with less text. For instance, I'm not sure how to define the application of several substitutions which are found in a set. That is, if I have a set of substitutions, how can I apply all of them to a structure? Let's say

$b = \{a \mapsto b, c \mapsto d \}$

Could I do something like this: $l(x), l \in b$? And does this mean that all substitutions of b are "carried out", resulting in x with its labels changed? The case you refer to, where two A substitutions match the first element of an element of X, should not be legal.

May 13th 2011, 02:12 PM #5 (MHF Contributor)

First, I should note that I encountered the term "substitution" only when a variable occurring in a syntactic expression is replaced by another expression. For example, if E is the expression x^2 = x + 1 and ϴ = [x ↦ 2 + 3] is a substitution, then Eϴ is (2 + 3)^2 = (2 + 3) + 1. It is customary to denote substitutions by Greek letters and to denote the result of the application by Eϴ or ϴ(E).

Let u range over elements of L and w range over subsets of L. We can identify the set of substitutions A with L x L, i.e., the set of pairs of elements of L, and B with the set of functions from L to L. If f ∊ B and w ⊆ L, it is customary to denote {f(u) | u ∊ w} by f[w]. So, let y ∊ Y and s ∊ S. Define y | s, or s(y), to be

{<a', f[w]> | <a, w> ∊ y, <<a, a'>, f> ∊ s} ∪ {<a, w> | <a, w> ∊ y, <<a, a'>, f> ∉ s for all a', f}.

The second set in the union corresponds to those pairs x whose first element is not subject to a substitution from s. To prove that y | s is well-defined, one has to show that it is impossible that <a, w> ∊ y and <<a, a'>, f'> ∊ s, <<a, a''>, f''> ∊ s for a' ≠ a'' or f' ≠ f''.

May 16th 2011, 02:45 AM #6

Awesome! I'm so glad you took the time to elaborate this. This notation is both concise and intuitive! Thanks very much!
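The definition given in the final answer translates directly into executable form. The following is my own sketch, not code from the thread: elements of X are modelled as (label, frozenset) pairs, and s is modelled as a dictionary mapping a label a to a pair (a', f), which is equivalent to the set of pairs <<a, a'>, f> under the thread's assumption that no two A-substitutions share the same source label.

```python
def apply_s(y, s):
    """Apply paired substitutions: y is a set of (label, frozenset) pairs;
    s maps a label a to (a_new, f), where f relabels the second component.
    The B-part f is applied only when the A-part a -> a_new applies."""
    result = set()
    for a, w in y:
        if a in s:                 # the A-substitution fires...
            a_new, f = s[a]
            result.add((a_new, frozenset(f.get(u, u) for u in w)))  # ...so apply B too
        else:                      # no pair in s touches this label: keep as-is
            result.add((a, w))
    return result

y = {('a', frozenset({'b', 'c', 'd'})), ('e', frozenset({'f', 'g', 'h'}))}
s = {'a': ('aa', {'b': 'bb', 'c': 'cc', 'd': 'dd'})}
print(apply_s(y, s))
```

Here ('a', {b, c, d}) becomes ('aa', {bb, cc, dd}), while ('e', {f, g, h}) passes through unchanged, matching the second set in the union of the accepted definition.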
Hoboken, NJ SAT Math Tutor
Find a Hoboken, NJ SAT Math Tutor

...While I'm originally from outside Boston, Massachusetts, I've been living in Manhattan since 2010. I've also spent time living in Costa Rica, Spain, and even onboard a cruise ship in the Caribbean! As the oldest of three siblings--I have a sister who is now 22, and a brother with Down syndrome who is now 24--teaching, tutoring, and mentoring have always been a part of my life.
26 Subjects: including SAT math, English, reading, writing

...My Ivy Boot Camp-style mission is to help students achieve peak test preparation and academic fitness. This requires not just determination and discipline, but also expert coaching and guidance. That's where I come in.
52 Subjects: including SAT math, English, reading, writing

...Pricing depends on subject(s) taught, travel required, and minimum hours per week. I am available to teach on weekends and after 6 p.m. on most weekdays. If you have more than one child or would like semi-private tutoring, rates may be adjusted further.
34 Subjects: including SAT math, reading, writing, ESL/ESOL

...Further, I will help you understand key test-taking strategies to use on this test. The ACT Reading section includes four passages, each followed by ten questions, to be completed in 35 minutes. The passages are presented in a specific order: Prose Fiction, Social Science, Humanities, Natural Science.
9 Subjects: including SAT math, GMAT, ACT Math, SAT reading

...Because of the nature of this course of study, enrollment is $50/hour. If you are interested in one-on-one sessions following this intensive curriculum, please contact me for pricing details. I am also happy to send out a syllabus if requested.
37 Subjects: including SAT math, reading, English, algebra 1
Math Forum Discussions
Topic: Matheology § 210
Replies: 24    Last Post: Feb 12, 2013 1:12 PM

Re: Matheology § 210
Posted: Feb 8, 2013 1:12 PM

On 8 Feb., 12:13, Alan Smaill <sma...@SPAMinf.ed.ac.uk> wrote:
> WM <mueck...@rz.fh-augsburg.de> writes:
> > On 7 Feb., 20:17, William Hughes <wpihug...@gmail.com> wrote:
> ...
> >> In classical set theory the accessible numbers are listable.
> >> Note from the Wikipedia quote
> >> > Constructively it is consistent to assert the
> >> > subcountability of some uncountable collections
> > Of course, the intuitionists accepted this nonsense, perhaps forced by
> > the matheologians.
> What a joker!
> You tell us that you do not know Brouwer's opinion on this question,
> but here you are telling us what intuitionists accept.

I know Brouwer's opinion very well. But I do not discuss that opinion with you, because you twist every word in my mouth. Therefore I repeat only what he wrote. You see in the parallel thread that you are completely off.

> WM is inconsistent.
> As for intuitionists being "forced" into taking up a
> position inconsistent with classical mathematics by classical
> mathematicians ...
> a classic absurdity.

No. Hilbert fired Brouwer from his most prestigious position with the Annalen. That is only one example. The matheologians are in possession of the academic keys. To tell them the truth can be very dangerous for a man who is young and striving for an academic career. I am not in danger of losing my post, although some special guys like Bader or Rennenkampf have in fact revealed the abyss of their stupendous stupidity by fighting in written letters for my dismissal.

And here is a not very important but very interesting example: In MathOverflow I am not welcome. Everything is immediately deleted. Therefore, in June 2010 I put a question under cover. This question got several positive votes, more than 2k views, and a very good answer. It remained open for 9 months. Why has it been closed? On April 28, 2011 I revealed my authorship in a comment. *On the same day* the question was closed by a gang of angry louts (there is not the slightest inkling, even for a convinced matheologian, that the question is anti-matheological). Here you can see (not you, of course, but the objective reader) that matheologians not only rule the print media and the academic realm; they also most aggressively suppress every deviating opinion. In this area they are really good. There is no other explanation for the continued existence of matheology.

Could an intelligent man or woman who observes that all levels of the Binary Tree are crossed by a finite number of distinct paths really believe that there are uncountably many, where uncountable means much more than infinitely many?

Regards, WM
Einstein replaced Newton's conception of gravitation as a force with general relativity, which views gravitation as the dynamics of spacetime. In 1917 he applied his theory to the universe as a whole. He made two assumptions: the universe is homogeneous on average and static; and it is closed on itself, a curved volume of space with no boundary. However, Einstein's equations have no such solutions unless an extra term is inserted that acts as a repulsion to offset the gravitational attraction of matter for itself. Thus were born both modern cosmology and the notion of a cosmological constant.

In 1929 Hubble found that the universe is expanding, a feature that Friedmann and Lemaître had shown to be a necessary consequence of Einstein's equations if Lambda were zero. There are then three models depending on whether the geometry of space is closed, Euclidean, or open. All three models are characterized by a deceleration in the expansion from a big bang. Since Hubble's discovery, astronomers have largely focused on determining which of the three Lambda-free models applies on the large scale to the actual universe.

Brian Schmidt recognized that white dwarf stars induced to explode as supernovae in galaxies of high expansion redshift z constitute a promising luminosity standard with which to measure the geometry of spacetime. In 1994 he formed the High-z Supernova Search team to develop this method. They performed the necessary local calibrations and the renormalizations of the different light-curve shapes needed to get accurate results.

Contemporaneously, Saul Perlmutter assumed the leadership of a team that used robotic telescopes to find and characterize supernovae that explode in nearby galaxies. With a redirected effort, the Supernova Cosmology Project automated and brought to maturity the empirical techniques developed by astronomers.
The discovery of many supernovae became routine and contributed to the early statistics suggesting that the universe may currently be accelerating in its expansion rate, a surprising conclusion reached by the Perlmutter and Schmidt teams simultaneously in 1998.

Adam Riess realized that observations at redshifts z larger than readily measurable by telescopes on the ground could eliminate alternative explanations. He led the effort to use the Hubble Space Telescope to find supernovae at z larger than unity. These definitive observations show that supernovae look substantially fainter at large z than predicted by any of the Lambda-free models. Acceleration is required.

The best fit to the data is achieved when the current energy density of the vacuum is about 70% of the critical value that makes the large-scale geometry of space Euclidean, where the latter result is suggested by the fluctuations in the microwave background. The corresponding small but nonzero value for the cosmological constant then turns out neatly to resolve the conflict over the universe's age in Euclidean-space models where Lambda is set to zero.

The discovery of a non-vanishing energy density of the vacuum, or some more bizarre alternative, has profound consequences for physics, astronomy, and philosophy. It is an accomplishment richly deserving of the Shaw Prize in Astronomy 2006.

Astronomy Selection Committee
The Shaw Prize
12 September 2006, Hong Kong
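The "fainter than expected" argument can be made quantitative with a short numerical sketch. This is my own illustration using standard textbook formulas, not material from the citation: in a spatially flat universe the luminosity distance is d_L(z) = (1+z)(c/H0) times the integral of 1/E(z') from 0 to z, with E(z) = sqrt(Omega_m (1+z)^3 + Omega_Lambda). Comparing a Lambda-dominated model with a matter-only one shows that supernovae at z = 0.5 appear a few tenths of a magnitude fainter when Lambda is present, which is the size of the effect the supernova teams had to measure.

```python
import math

C_OVER_H0 = 299792.458 / 70.0  # Hubble distance in Mpc, assuming H0 = 70 km/s/Mpc

def luminosity_distance(z, omega_m, omega_lambda, steps=10_000):
    """d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z') for a flat universe."""
    E = lambda zp: math.sqrt(omega_m * (1 + zp) ** 3 + omega_lambda)
    dz = z / steps
    integral = sum(dz / E((i + 0.5) * dz) for i in range(steps))  # midpoint rule
    return (1 + z) * C_OVER_H0 * integral

z = 0.5
d_accel = luminosity_distance(z, 0.3, 0.7)   # flat Lambda-CDM
d_decel = luminosity_distance(z, 1.0, 0.0)   # Einstein-de Sitter, Lambda = 0
delta_mag = 5 * math.log10(d_accel / d_decel)  # extra dimming, in magnitudes
print(round(delta_mag, 2))  # about 0.4 mag fainter with Lambda
```

The greater luminosity distance in the accelerating model is exactly why high-z supernovae look dimmer than any Lambda-free model predicts.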
Find out why physicists believe they might be in a lecture by John D. Barrow at the Bath Royal Literary and Scientific Institution on April 13th.

John D. Barrow FRS has been a Professor of Mathematical Sciences at the University of Cambridge since 1999, carrying out research in mathematical physics, with special interest in cosmology, gravitation, particle physics and associated applied mathematics. He is the author of over 420 articles and 19 books, translated into 28 languages, exploring the wider historical, philosophical and cultural ramifications of developments in mathematics, physics and astronomy. Most importantly (to us anyway), he is the director of the Millennium Mathematics Project, of which Plus is a part.

The lecture starts at 7:30pm and you can book tickets here (£8/£6 concession). If you can't get to Bath you can also read Barrow's Plus article Are the constants of nature really constant? or listen to Barrow in the accompanying podcast.
User Nathanael Berestycki
Website: statslab.cam.ac.uk/~beresty
Age: 33
Member for 2 years, 4 months; last seen Oct 24 '12 at 19:15; profile views: 305

Activity (reverse chronological; dates as on the profile page):

- Aug 30: awarded Yearling
- Aug 25: awarded Nice Question
- Aug 25: comment on "Markov chains: invariant measures and explosion": It is a classical (and surprising) feature of continuous Markov chains that they can have an invariant measure while being transient. See, for instance, section 3.5 in James Norris' book on Markov chains. (Notice that the definition of invariant measure is the usual one and does not require the process to be recurrent or non-explosive.) In any case, no matter how you would call such a measure, I hope you'll agree it is interesting to know what it means for the process...
- Aug 25: comment on "Markov chains: invariant measures and explosion": Dear Robert, Thanks for your comments and sorry for the long time in response. I am not totally comfortable with the derivation of your equation. I always thought this process would satisfy the Kolmogorov backward and forward equations - without being the minimal solution. See, for instance, section 2.9 in James Norris' book on Markov chains. But you may well be right - in which case the question of "what is this invariant measure" is even more puzzling to me!
- Aug 21: asked "Markov chains: invariant measures and explosion"
- 31: answered "Is the maximum tree-path length distributed lognormally (in the limit)?"
- Dec 20: comment on "Green's formula for a Markov process": I don't know if this is what you are looking for, but the left-hand side in your last identity is called the Dirichlet form $\mathcal{E}(f,g)$. For a Markov chain on a countable space and with invariant measure $\pi$ (not necessarily reversible), it is always true that $$\mathcal{E}(f,g) = \sum_{x,y} \pi(x) P(x,y) g(x) \nabla_{x,y} f.$$ But this is an easy calculation, so presumably you are aware of it.
- Dec 19: comment on "Correlations in last-passage percolation": Hi James! Thanks, very helpful. I guess the story about competing interface in FPP (which results in random slope) shows indeed that there is a nonzero probability that the geodesics have no edge in common at all. That is somehow slightly counterintuitive to me, you'd expect the geodesics to go get the same goodies for a while, before they diverge... So is the covariance $O(1)$ as well?
- Dec 19: awarded Scholar
- Dec 19: accepted "Correlations in last-passage percolation"
- Dec 19: comment on "Correlations in last-passage percolation": very cool picture! what did you use to generate it?
- Dec 18: awarded Student
- Dec 18: asked "Correlations in last-passage percolation"
- Dec 17: answered "Comparing two measures on trees on $n$ vertices"
- Dec 17: answered "Stopping time of a Markov chain"
- Dec 5: comment on "A percolation problem": Yes, I agree! Very interesting...
- Dec 3: comment on "A percolation problem": @Peter: yes, I agree that my comment above is not correct: the model is not strictly equivalent to finding monotone paths. But the argument I outlined initially to show $p_c \ge 1/2$ is still valid, do you agree? ps. sorry to be answering here, but this is the only place I am allowed to put comments...
- Dec 3: comment on "A percolation problem": Come to think of it, the question can be rephrased in terms of standard percolation. It's fairly easy to check that the question is equivalent to asking for the existence of monotone paths (by which I mean, paths for which the x and y coordinates are monotone functions of time, e.g. that travel only in the North and East direction). I think the problem is clearer this way. Phrased this way, it is clear that the problem is monotone in $p$ so $p_c$ is well-defined, and moreover it is obvious that $p_c \ge 1/2$.
- Dec 2: awarded Supporter
- Dec: answered "A percolation problem"
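The first comment's claim, that a Markov chain can be transient yet admit an invariant measure, is easy to check numerically in a simple discrete-time analogue. This is my own sketch rather than the continuous-time example discussed in the thread: the asymmetric simple random walk on the integers is transient when p is not 1/2, yet the counting measure pi(x) = 1 satisfies pi P = pi, since (pi P)(y) = p + (1 - p) = 1 at every state.

```python
# Asymmetric simple random walk on Z: P(x, x+1) = p, P(x, x-1) = 1 - p.
# Transient for p != 1/2, yet pi(x) = 1 for all x is an invariant measure.

p = 0.7
pi = {x: 1.0 for x in range(-50, 51)}

def pi_P(y):
    """Mass flowing into state y after one step, started from measure pi."""
    return pi.get(y - 1, 1.0) * p + pi.get(y + 1, 1.0) * (1 - p)

# Invariance holds at every state we check:
assert all(abs(pi_P(y) - pi[y]) < 1e-12 for y in range(-40, 41))
print("counting measure is invariant for p =", p)
```

This invariant measure has infinite total mass, so it cannot be normalised to a stationary distribution, which is consistent with the chain being transient.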
Bethesda, MD SAT Math Tutor
Find a Bethesda, MD SAT Math Tutor

...I graduated from Dartmouth College with a degree in economics and mathematics. There, I tutored calculus to several students. Post-undergraduate, I began as a volunteer group tutor before also becoming a private one-on-one tutor as well.
15 Subjects: including SAT math, chemistry, calculus, geometry

...Fellow classmates always called upon me to proofread and edit their work on papers, essays, or articles. I have a love and passion for the English language, and love making others' work better. Personal ACT Scores: 33 Composite, 32 Reading. My BA in Political Science and Japanese from Tufts Universit...
33 Subjects: including SAT math, reading, English, writing

...Each student has a different way of learning a subject. In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending extra time to prepare for the session prior to meeting with the student. My broad background in math,...
16 Subjects: including SAT math, calculus, physics, statistics

Hello. I tutor math. I tutor question by question, explaining concepts along the way. I cannot guarantee results. I also may not be able to answer your math questions since I have much to learn. However, I will try my best to help you understand and improve math skills.
11 Subjects: including SAT math, Spanish, calculus, statistics

...I have tutored a high school student about 15 hours for AP Computer Science II last year in the spring under subject Java. I am able to understand, write, and fix code written in the languages C, C++, and Java. I have a great math background from working toward a master's degree in computer science.
15 Subjects: including SAT math, chemistry, calculus, physics
Bridgeview ACT Tutor

...I always earned high grades and accolades and learned those concepts quickly and easily. When I work with students, first I make sure that the student understands all related concepts, then I spend time teaching strategies to solve the problems assigned. I prepare students for quizzes, tests, homework, and the ISAT or any other standardized tests.
23 Subjects: including ACT Math, chemistry, algebra 1, algebra 2

...I earned a Master's degree in Literature and Rhetoric with a perfect 4.0 and achieved secondary teacher certification in English. My bachelor's degree is also in English. I have been teaching college composition for more than six years.
17 Subjects: including ACT Math, reading, English, geometry

...I enjoyed working with seniors as much as I had enjoyed working with younger students. As a tutor, I have tutored College Algebra and Trig as well as all levels of math, from 7th and 8th graders through college students. I have prepared students for the math portion of the ACT test.
14 Subjects: including ACT Math, geometry, algebra 1, GED

...I also minored in Asian Studies. After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin. I took pleasure in helping students understand concepts and succeed.
26 Subjects: including ACT Math, chemistry, English, reading

...Finally, I have been using algebra ever since, from teaching college-level physics concepts to building courses for professional auditors. I was an advanced math student, completing the equivalent of Algebra 2 before high school. I continued applying algebraic skills in high school, where I was a straight A student and completed calculus as a junior.
13 Subjects: including ACT Math, calculus, statistics, geometry
The combinatorics of authentication and secrecy codes
Results 1 - 10 of 15

- 1991. "... unconditionally secure authentication codes without secrecy. This idea is most useful when the number of authenticators is exponentially small compared to the number of possible source states (plaintext messages). We formally define some new classes of hash functions and then prove some new bounds and give some general constructions for these classes of hash functions. Then we discuss the implications to authentication codes." Cited by 59 (1 self).

- In Advances in Cryptology — EUROCRYPT '99, 1999. "... This paper compares the parameter sizes and software performance of several recent constructions for universal hash functions: bucket hashing, polynomial hashing, Toeplitz hashing, division hashing, evaluation hashing, and MMH hashing. An objective comparison between these widely varying approaches is achieved by defining constructions that offer a comparable security level. It is also demonstrated how the security of these constructions compares favorably to existing MAC algorithms, the security of which is less understood." Cited by 26 (0 self).

- 1999. "... In this paper, we focus on another collection of recent applications in the general area of communications, including cryptography and networking. Applications have been chosen to represent those in which design theory plays a useful, and sometimes central, role. Moreover, applications have been chosen to reflect in addition the genesis of new and interesting problems in design theory in order to treat the practical concerns. Of many candidates, thirteen application areas have been included. They are as follows:" Cited by 25 (2 self).

- Designs, Codes and Cryptography, 1996. "... For any authentication code for k source states and v messages having minimum possible deception probabilities (namely, $P_{d_0} = k/v$ and $P_{d_1} = (k-1)/(v-1)$), we show that there must be at least v encoding rules. (This can be thought of as an authentication-code analogue of Fisher's Inequality.) We derive several properties that an extremal code must satisfy, and we characterize the extremal codes for equiprobable source states as arising from symmetric balanced incomplete block designs. We also present an infinite class of extremal codes, in which the source states are not equiprobable, derived from affine planes. 1 Introduction. Authentication codes were invented in 1974 by Gilbert, MacWilliams and Sloane [4]. The theory of authentication codes was developed throughout the 1980's by Simmons and others. Numerous papers have given constructions and bounds for authentication codes; see the list of references for a representative sample. For a survey of authentication ..." Cited by 19 (4 self).

- 1998. "... An authentication protocol is a procedure by which an informant tries to convey n bits of information, which we call an input message, to a recipient. An intruder, I, controls the network over which the informant and the recipient talk and may change any message before it reaches its destination. If the protocol has security p, then the recipient must detect this cheating with probability at least 1 - p. This paper ..." Cited by 12 (1 self).

- "... to Bob, she encrypts x using the encryption rule $e_K$. That is, she computes $y = e_K(x)$, and sends y to Bob over the channel. When Bob receives y, he decrypts it using the decryption function $d_K$, obtaining x. Informally, perfect secrecy means that observation of a ciphertext gives no information about the corresponding plaintext. This idea can be stated more precisely using probability distributions. Suppose there are probability distributions $p_P$ on P, and $p_K$ on K. Then a probability distribution $p_C$ is induced on C. A cryptosystem is said to provide perfect secrecy provided that $p_P(x \mid y) = p_P(x)$." Cited by 11 (4 self).

- In Proc. of CRYPTO'94, LNCS 839, 1997. "...
Numerous papers have given constructions and bounds for authentication codes; see the list of references for a representative sample. For a survey of authentication - , 1998 "... An authentication protocol is a procedure by which an informant tries to convey n bits of information, which we call an input message, to a recipient. An intruder, I, controls the network over which the informant and the recipient talk and may change any message before it reaches its destination ..." Cited by 12 (1 self) Add to MetaCart An authentication protocol is a procedure by which an informant tries to convey n bits of information, which we call an input message, to a recipient. An intruder, I, controls the network over which the informant and the recipient talk and may change any message before it reaches its destination. a If the protocol ha security p, then the the recipient must detect this a cheating with probability at leat I - p. This paper "... to Bob, she encrypts x using the encryption rule e K . That is, she computes y = e K (x), and sends y to Bob over the channel. When Bob receives y, he decrypts it using the decryption function dK , obtaining x. Informally, perfect secrecy means that observation of a ciphertext gives no informatio ..." Cited by 11 (4 self) Add to MetaCart to Bob, she encrypts x using the encryption rule e K . That is, she computes y = e K (x), and sends y to Bob over the channel. When Bob receives y, he decrypts it using the decryption function dK , obtaining x. Informally, perfect secrecy means that observation of a ciphertext gives no information about the corresponding plaintext. This idea can be stated more precisely using probability distributions. Suppose there is are probability distributions pP on P, and pK on K. Then a probability distribution p C is induced on C. A cryptosystem is said to provide perfect secrecy provided that pP (xjy) = pP<F24. - Proc. of CRYPTO’94, LNCS 839 , 1997 "... 
Unconditionally secure authentication codes with arbitration (A²-codes) protect against deceptions from the transmitter and the receiver as well as that from the opponent. In this paper, we present combinatorial lower bounds on the cheating probabilities and the sizes of keys of A²-codes. ..." Cited by 10 (3 self) Add to MetaCart Unconditionally secure authentication codes with arbitration (A&sup2;-codes) protect against deceptions from the transmitter and the receiver as well as that from the opponent. In this paper, we present combinatorial lower bounds on the cheating probabilities and the sizes of keys of A&sup2;-codes. Especially, our bounds for A&sup2;-codes without secrecy are all tight for small size of source states. - IN IEICE TRANS , 1996 "... This paper presents a combinatorial characterization of broadcast authentication in which a transmitter broadcasts v messages e 1 (s); \Delta \Delta \Delta ; e v (s) to authenticate a source state s to all n receivers so that any k receivers cannot cheat any other receivers, where e i is a key. Supp ..." Cited by 7 (0 self) Add to MetaCart This paper presents a combinatorial characterization of broadcast authentication in which a transmitter broadcasts v messages e 1 (s); \Delta \Delta \Delta ; e v (s) to authenticate a source state s to all n receivers so that any k receivers cannot cheat any other receivers, where e i is a key. Suppose that each receiver has l keys. First, we prove that k ! l if v ! n. Then we show an upper bound of n such that n v(v \Gamma 1)=l(l \Gamma 1) for k = l \Gamma 1 and n ` v dl=ke ' = ` l dl=ke ' + ` v dl=ke ' for k ! l \Gamma 1. Further, a scheme for k = l \Gamma 1 which meets the upper bound is presented by using a BIBD and a scheme for k ! l \Gamma 1 such that n = ` v dl=ke ' = ` l dl=ke ' is presented by using a Steiner system. Some other efficient schemes are also presented. , 1998 "... . 
This paper provides new combinatorial bounds and characterizations of authentication codes (A-codes) and key predistribution schemes (KPS). We first prove a new lower bound on the number of keys in an A-code without secrecy, which can be thought of as a generalization of the classical Rao bound fo ..." Cited by 4 (0 self) Add to MetaCart . This paper provides new combinatorial bounds and characterizations of authentication codes (A-codes) and key predistribution schemes (KPS). We first prove a new lower bound on the number of keys in an A-code without secrecy, which can be thought of as a generalization of the classical Rao bound for orthogonal arrays. We also prove a new lower bound on the number of keys in a general A-code, which is based on the Petrenjuk, Ray-Chaudhuri and Wilson bound for t-designs. We also present new lower bounds on the size of keys and the amount of users' secret information in KPS, the latter of which is accomplished by showing that a certain A-code is "hiding" inside any KPS. 1. Introduction In the usual model of authentication codes (or A-codes) due to Simmons [8], there are three participants: a transmitter T , a receiver R and an opponent O. T and R share an encoding rule (or key) e 2 E. Given a source state s 2 S, T sends a message m 2 M to R over a public channel. O tries to cheat R b... , 2000 "... Unconditionally secure authentication codes with arbitration (A²-codes) protect against deceptions from the transmitter and the receiver as well as that from the opponent. We first show that an optimal A²-code implies an orthogonal array and an affine alpha-resolvable design. Next we defin ..." Cited by 1 (0 self) Add to MetaCart Unconditionally secure authentication codes with arbitration (A&sup2;-codes) protect against deceptions from the transmitter and the receiver as well as that from the opponent. We first show that an optimal A&sup2;-code implies an orthogonal array and an affine alpha-resolvable design. 
Next we define a new design, an affine alpha-resolvable + BIBD, and prove that optimal A&sup2;-codes are equivalent to this new design. From this equivalence, we derive a condition on the parameters for the existence of optimal A&sup2;-codes. Further, we show tighter lower bounds on the size of keys than before for large sizes of source states which can be considered as an extension of the bounds on the related designs.
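The deception-probability bounds quoted in these abstracts (P_d0 = k/v for impersonation, P_d1 = (k - 1)/(v - 1) for substitution) can be made concrete with a toy code. The sketch below is hypothetical and not taken from the papers listed: it brute-forces both probabilities for the classical polynomial construction attributed to Gilbert, MacWilliams and Sloane, where a key is a pair (a, b) over a small field and source state s is sent as the message (s, as + b mod q). With q = 3 there are k = 3 source states and v = 9 messages, and both probabilities come out to 1/q, attaining the k/v impersonation floor (though not the (k - 1)/(v - 1) substitution floor).

```python
from fractions import Fraction
from itertools import product

q = 3                                     # field size for this toy example
keys = list(product(range(q), repeat=2))  # encoding rules e = (a, b)
sources = range(q)                        # source states s

def encode(e, s):
    a, b = e
    return (s, (a * s + b) % q)           # message carries s plus a "tag"

messages = sorted({encode(e, s) for e in keys for s in sources})

def accepting(m):
    # keys under which message m would be accepted as authentic
    return {e for e in keys if any(encode(e, s) == m for s in sources)}

# Impersonation: opponent injects the best message without seeing anything.
Pd0 = max(Fraction(len(accepting(m)), len(keys)) for m in messages)

# Substitution: after observing a valid m, swap in the best m' != m.
def sub_prob(m):
    consistent = accepting(m)
    return max(Fraction(len(consistent & accepting(m2)), len(consistent))
               for m2 in messages if m2 != m)

Pd1 = max(sub_prob(m) for m in messages)
print(Pd0, Pd1)  # 1/3 1/3
```

Exact rational arithmetic with `Fraction` keeps the comparison against the combinatorial bounds honest; with k = 3 and v = 9, the k/v floor is 1/3 while the substitution floor (k - 1)/(v - 1) = 1/4 is strictly below the 1/3 this construction achieves.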
Arrow’s Theorem, Take Two Tyler Cowen is of course one of the primary reasons to be grateful that you live in the age of the Internet. But none of us is infallible, and I believe Tyler has stumbled in his account of Arrow’s Theorem. His example: Let’s say you had two people on a desert island, John and Tom, and John wants jazz music on the radio and Tom wants rap. Furthermore any decision procedure must be consistent, in the sense of applying the same algorithm to other decisions. In this set-up (with a further assumption), there is only dictatorship, namely the rule that either “Tom gets his way” or “John gets his way.” Not true. A rule (or, in Arrow’s language, a social welfare function) has to prescribe a choice not just today, but every day, even as Tom’s and John’s preferences might change from one day to another. So there are in fact 16 possible rules. One is “Tom always gets his way.” Another is “John always gets his way.” Another is “Always turn the radio to jazz”, which seems pretty unreasonable since it prescribes jazz even on days when Tom and John both prefer rap. Yet another is: • If Tom and John agree, do whatever they agree on. If they disagree, turn the radio to jazz. That last rule is particularly interesting because it satisfies every one of Arrow’s “reasonableness” criteria without anointing a dictator. What Arrow’s theorem says is that no non-dictatorial rule can meet all of those criteria. Hold on a minute. I just gave you an example of a rule that meets all of Arrow’s criteria, and then told you that according to Arrow there is no such rule. What gives? What gives is the reason why Tyler’s example is irrelevant: Arrow’s theorem applies only when there are at least three options. With two voters and two options, the theorem fails and everything is copacetic. In my own recent attempt to explain Arrow’s theorem, I assumed three voters and three options. 
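The two-voter, two-option counting above can be checked by brute force. A minimal hypothetical sketch (not from the post; with only two options IIA is vacuous, so only unanimity and non-dictatorship are tested):

```python
from itertools import product

options = ("jazz", "rap")
profiles = list(product(options, repeat=2))   # (John's wish, Tom's wish)

# A rule assigns an outcome to each of the 4 profiles: 2**4 = 16 rules.
rules = [dict(zip(profiles, outs)) for outs in product(options, repeat=4)]

def unanimous(rule):      # Pareto: when both agree, they get it
    return all(rule[(x, x)] == x for x in options)

def dictatorial(rule):    # one voter's wish always prevails
    return any(all(rule[p] == p[i] for p in profiles) for i in (0, 1))

# The tie-break rule from the post: agreement wins, otherwise jazz.
tiebreak = {p: p[0] if p[0] == p[1] else "jazz" for p in profiles}

survivors = [r for r in rules if unanimous(r) and not dictatorial(r)]
print(len(rules), len(survivors), tiebreak in survivors)  # 16 2 True
```

Exactly two of the sixteen rules are unanimous yet non-dictatorial (the jazz-on-disagreement rule and its rap twin), which is the counting that makes the two-option case exempt from Arrow's conclusion.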
It would have been simpler (and therefore better) to emulate Tyler by assuming only two voters (say Alice and Bob) arguing over three options (say Anchovies, Mushrooms and Pepperoni). Then you’re up against the fact that your rule must tell you what to do on days when their preferences run like this:

              | Alice     | Bob
First Choice  | Anchovies | Mushrooms
Second Choice | Mushrooms | Pepperoni
Third Choice  | Pepperoni | Anchovies

On those days, you must either grant Alice a smidgen of dictatorial power by ranking Anchovies over Pepperoni even though she’s the only voter with that preference, or grant Bob a smidgen of dictatorial power by ranking Pepperoni over Anchovies. Once you’ve granted (say) Alice that smidgen of dictatorial power, Arrow’s argument demonstrates that — in order to satisfy his reasonableness criteria — you’ve got to grant her a bigger smidgen by ranking Anchovies over Pepperoni on any day when she has that preference. And then in order to continue satisfying his criteria, you’ve got to grant her yet another smidgen by ranking Anchovies over Mushrooms on any day when she has that preference. And then you’ve got to grant her another and another until finally you’ve made her an absolute dictator. The details of this “argument by increasing smidgens” are in my earlier post, where you can just ignore the third voter (Charlie) to keep things a little simpler. But Tom and John, living on Tyler’s island and facing only two choices, are exempt from all this and therefore an irrelevant diversion.

11 Responses to “Arrow’s Theorem, Take Two”

1. Does Arrow’s theorem exclude what I consider the most fair algorithm? Minimize the choice number. In the example given: Anchovies would be 1 + 3 = 4. Mushrooms would be 2 + 1 = 3. Pepperoni would be 3 + 2 = 5. Therefore, the choice for this day would be Mushrooms. Granted, you must add to the algorithm what to do if there are two or more options with the lowest choice numbers.
If this algorithm is, indeed, excluded, then how does this make Arrow’s theorem useful rather than simply needlessly prescriptive? 2. Ron, One of Arrow’s axioms is that if choice A is preferred to choice B then adding new options cannot mean that choice B is preferred to choice A (this is to prevent things like: “would you prefer chocolate or vanilla ice cream?” “chocolate” “We also have strawberry” “In that case, vanilla”). But minimizing the choice number can fall foul of this axiom in certain scenarios. 3. This is embarrassing (for me, not Steven) – first, let me say as a budding political scientist of a more theoretical and less mathematical bent that I have always had difficulty understanding Arrow’s Theorem. I’ve never had a teacher who could teach it either. Consequently, I get it in a big, vague way in theory, but that really means I don’t understand it. The idea that kick-started this attempt to explain it – that Arrow’s Theorem is both pretty important and almost completely…um… impossible to explain to people of even above-average intelligence – is absolutely true. I speak from experience. I have sought far and wide for an account that really brought it home to me. I read Riker’s book and it flew over my head. I read an account in Mueller’s Public Choice II, and that flew over my head and pooped on it to boot. Etc. I even tried to read Arrow’s little big book itself. Needless to say, it crushed my spirit. All of this is to say that, embarrassingly, even after reading Steven’s last post on the Theorem, I still don’t get it. But my vantage point can help to explain what it is that we dunderheads don’t get about the Theorem (everyone who reads this site seems to have no problem with it – it kind of makes me think this site must have the smartest readers on the net, by the by). Anyway, here goes: The first problem is the axioms.
It is possible to explain what they mean relatively cogently, though things always get very blurry when people try to explain “independence of irrelevant alternatives” and one of the others I can’t recall. But what we fail to see is why just those axioms and only those axioms are the only ones there, and what exactly their logical connection is to each other. My sense is that if we got a really clear and forceful account of that, we would find the subsequent logic much easier to follow. The second problem, then, is this “smidgen” notion. Why does a smidgen of dictatorial power become absolute necessarily? Forgive me, but I simply look around me and I don’t see many ABSOLUTE dictators. But I’m probably just not getting the idea here. Can the dictator be a group? A party? An institution of some sort? And anyway, how is “being a dictator” different from “being a representative” or someone who is delegated decision-making authority? The point is just that these are the questions that occur to folks in my IQ range, and the people who try to teach the theorem seem to take it for granted that the answers are obvious. Maybe they are – to really smart logicians. I may be in a minority, I may simply be really bad at the kind of formal-logical thinking the Arrow Theorem requires, but with all of that noted, I still don’t get it. Not really. In theory, as I said, I do, in the way I get most public choice theory without being able to see it all Riker-style. But if I was trying to tell my grandfather what cool stuff I learn in political science, and I mentioned Arrow’s Theorem, and he asked, “Huh? How’s that?” I couldn’t begin to explain it to him. I also read, fwiw, Alex’s post about the cycling problem and the wacky “preferences” of groups – that was pretty good, and I think he was onto something in attempting to explain it that way. I still couldn’t put the pieces together, but it felt like a start. Just my humble two cents. 
PS – I’ve noticed that we who have trouble with Arrow also have major trouble with Bayes. I’ve tried to read “simple” accounts of Bayesianism, and I never get past the third or fourth paragraph without crumbling to pieces. It’s kind of similar to Arrow. I get it in a big vague way. But not “really.” Thanks, though, for making the effort (I mean that). 4. Ron: EricK has this exactly right. 5. re. the two-player, two-candidate version. What does it even mean to satisfy the hypotheses of Arrow’s Theorem in this set-up? I can’t quite figure out what the relevant version of IIA says, or does that axiom just become vacuous? 6. John Faben: IIA becomes vacuous in that case. 7. “Argument by increasing smidgens” – I love it! Is this a formal logic term? In this example, it is clear (using Ron’s reasoning) that Mushrooms should be the choice. This seems intuitive. In the 3 person example there was no clear winner. It was arbitrary that Alice got her way on Tuesday, but here we have a valid reason for giving Bob his first choice. I think if you follow the logic through using Bob as the dictator, it still works. However, on some days you will probably have to ignore the “minimize the choice number” rule which gave Bob his way at the beginning. You could avoid this by having M/A/P, P/A/M as the choices – each then has 4 points in Ron’s scheme. Perhaps here is a simpler contradiction – if you follow Arrow’s rules, you must contradict the intuitive “minimise choice number” rule? 8. Wow! Are you guys for real? You’re talking about voting in a 2 person scenario? Do you not see how absurd this is? I disagree with my son as to what pizza topping to order. He says anchovy, I say mushroom. I suggest we vote on it – because I just read Prof. Landsburg’s post and that’s what 2 reasonable people do – don’t they? I vote for mushroom, he votes for anchovy. OMG! What just happened?! Voting didn’t solve anything! Does this explain Arrow’s theorem? No.
My son, all of four years old, explains why – ‘Daddy you big stoopid, voting is only good when Mummy is there.’ And then the tears start and the tantrums and the drunk dialling once he’s safe in bed. 9. Kev: But in the case of two people and two toppings, there *is* a voting system that meets all Arrow’s criteria — we get whatever we both agree on, or mushrooms if we disagree. So if your claim is that you can already see the problem in a two-person, two-topping scenario, you have misunderstood what the problem is. 10. My objection is to the use of the word ‘voting’ rather than ‘bargaining’ or ‘comparing preferences’ in a 2 person scenario. Why? Well, we’re talking about popularizing Arrow – i.e. helping ordinary people see that this theorem encapsulates something empirical or provides a heuristic. The notion of 2 people voting rather than bargaining or discussing conjures up an absurd image. Nothing to do with Econ, everything to do with ordinary language – the latter being the binding constraint on ‘popularization’. I wasn’t saying there was a way to collapse three options into 2 and still keep Arrow. ‘So if your claim is that you can already see the problem in a two-person, two-topping scenario, you have misunderstood what the problem is.’ I agree, if that were my claim I’d have misunderstood the problem. The other point about a 2 person bargaining situation is that the common-sense approach is to think of the 2 parties getting into the mechanism design business for themselves – i.e. Alice and Bob may start a dialogue on what’s a good way to resolve this, perhaps a coin toss? At this point intensity of preference and contingent dynamic considerations can express themselves – Alice may stipulate that pepperoni gives her gas so if Bob is calculating the odds on some nookie tonight that option should be handicapped appropriately.
The problem here is that once Game theory and mechanism design get entangled then, my intuition is, you have a complexity problem which rules out impossibility results in advance. Which is not to say that this area of study shouldn’t be popularized or that it isn’t highly relevant to the ordinary blokes. Here in England, Ken Binmore became a folk hero for the recent 3G 11. there *is* a voting system that meets all Arrow’s criteria — we get whatever we both agree on, or mushrooms if we disagree. Actually, if one revives non-imposition there is an ordinary language argument for Arrow as follows – If the only information fed into a S.W.F calculator are preference rankings with no further information about consequences for the ‘voter’ then you can get very bad results. Alice likes anchovies, mushrooms not so much and pepperoni not at all. Her preferences remain the same but today she gets a piece of advice from her Doctor – there is a sound medical reason for her preferences. This changes things. It would probably affect how Bob votes if he has this information. Can a deontic voting rule capture this change? Comments are currently closed.
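The "minimize the choice number" rule from the first comment, and EricK's point about the independence axiom, can be seen concretely: moving only Pepperoni within Bob's ranking changes the group verdict between Anchovies and Mushrooms, even though neither voter's Anchovies-vs-Mushrooms preference changed. A hypothetical sketch using the post's option names:

```python
def rank_sums(profiles):
    # profiles: list of rankings, each ordered from most- to least-preferred;
    # the "choice number" of an option is the sum of its positions (1 = best)
    scores = {}
    for ranking in profiles:
        for position, option in enumerate(ranking, start=1):
            scores[option] = scores.get(option, 0) + position
    return scores

alice = ["anchovies", "mushrooms", "pepperoni"]

# Bob's anchovies-vs-mushrooms preference is identical in both profiles;
# only the "irrelevant" pepperoni moves.
s1 = rank_sums([alice, ["mushrooms", "pepperoni", "anchovies"]])
s2 = rank_sums([alice, ["mushrooms", "anchovies", "pepperoni"]])

print(s1)  # mushrooms (3) strictly beats anchovies (4)
print(s2)  # now they tie at 3 -- the group's A-vs-M verdict moved
```

This is exactly the ice-cream-flavor failure of independence of irrelevant alternatives described in the second comment, played out on rank sums.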
To revolutionize both math and physics by the grassroots popularization of a new quantitative tool which is introduced below. | A conversation on TED.com We start with the Law of the Excluded Middle. Consider its complement and call this new law the Law of the Exclusive Middle. Let these two laws be equivalent. You now have something similar to Fuzzy Math, but it is very different. These two laws are connected by equivalence, which is very different from Fuzzy Math. Next consider Descartes' "I think therefore I am". We reverse engineer this into a statement which reflects our foundation (above), to derive "Maybe I think therefore maybe I am". We keep both of these statements and set them as equivalent. We then proceed to all of the standard tools of mathematics which are used for the analytic quantification of magnitudes. In math, things are said to exist. In our new system things are regarded as "maybe existing". We keep both of these tools and regard them as being equivalent. We will call one of them Mathematics, and the other should be called something like Conjectural Modeling to reflect that it is based entirely on absolute indeterminacy. We now have a quantitative tool which is split down the middle, essentially a kind of mirror image. On one side, absolute determinacy. On the other side, absolute indeterminacy. Both sides held together by equivalence. We now have a tool which is capable of addressing both the equivalence inherent to relativity, and the indeterminacy which is inherent to Quantum Mechanics. We can write correct and accurate quantitative models using either system. In fact, for every possible question there should be two solutions. One based on determinacy, and the other based on randomness. These two answers are equivalent.
As an example: whether I know with absolute certainty that I have 10 dollars, or merely "expect" that I have 10 dollars, the magnitude is quantitatively identical; in the latter case 10 is an expected value instead of a value known with absolute certainty. I have many examples and a lot of math to reinforce these views. I am convinced that this solution is extremely important. Sep 29 2012: Deterministic and stochastic models are different. They will always be different. But this difference is qualitative. The reason we can make these things equivalent is because quantitatively they are the same. In other words, whether you have a stochastic model, or a deterministic model, you get the same numbers. You always get the same answers. You get two answers to every possible problem. One is certain, and the other is uncertain or an expected value. If we say that these are equivalent, then we have a kind of duality in everything that we are doing. You can probably see that this would give a fresh new approach to resolving paradoxes regarding the wave-particle duality of light. It also provides a framework for offering an easy to understand justification for the occurrence of which-way information in QM. There are many benefits of this approach. This is just the tip of the iceberg. My purpose here is to popularize this view, or share it with as many people as possible, because I am confident that it is correct. Sep 29 2012: That is precisely correct. Every problem that is solvable using mathematics or logic should have 2 solutions. One deterministic, and the other essentially uncertain or stochastic. For example, the number 5. It can be regarded as a known value. It is known with an implied (but unstated) inherent certainty. We could however regard this quantity as an expected value, in which case it is precisely the same magnitude but its "qualitative" properties have changed; it is now inherently uncertain.
The magnitudes 5 (known) and 5 (expected) are identical, but their qualities are different. For an empiricist, what this means is that there must be two equivalent models of the entire universe. One deterministic, the other essentially uncertain or stochastic. Both models would produce the same exact numbers. One universe is deterministic, and the other is stochastic. They must be equivalent. The connections to Relativity and QM are obvious. This should also be true for the entirety of all mathematics. I have come close to a proof for the general case for "all of mathematics", but I do not have that proof completed. I have come close and don't want this line of research to be forgotten. That is why I engage in online debate. Sep 29 2012: In my view neither case is degenerative. You may have the emergence of order from a disordered system, or the emergence of disorder from one which is deterministic. But neither case is really degenerative. Together they form a duality. The duality is held together by the assumption of equivalence. And in fact we can come very close to proving the equivalence of these two structures, but I am more comfortable simply saying that the duality forms a consistent system. By embracing a quantitative tool which has duality built into it, we can look at the wave-particle duality with this new tool and understand it in a whole new way. Perhaps for the first time. I want to apply this work to Bell's Inequality, various works of Alain Aspect and others. That is my goal. Either to do it myself, or provide a tool for others to follow. I want to create a new tradition within science which embraces the duality of random and nonrandom, that acknowledges that they are equivalent and proceeds from there.
Kenshi Miyabe

Reducibilities relating to Schnorr randomness
24 Mar 2014. Submitted. Full paper. Some measures of randomness have been introduced for Martin-Löf randomness such as K-reducibility, C-reducibility and vL-reducibility. In this paper we study Schnorr-randomness versions of these reducibilities. In particular, we characterize the computably-traceable reducibility via relative Schnorr randomness, which was asked in Nies' book (Problem 8.4.22). We also show that Schnorr reducibility implies the uniform-Schnorr-randomness version of vL-reducibility, which is the Schnorr-randomness version of the result that K-reducibility implies vL-reducibility.

Characterization of Lebesgue points for integral tests
6 Mar 2014, the slide file was uploaded. Mathematical Society of Japan.

Philosophy of probability
This page is only in Japanese.

A Gap Phenomenon for Schnorr Randomness
20 Feb 2014, the slide file was uploaded.

Algorithmic randomness by philosophers
This page is only in Japanese.

Algorithmic information theory
This page is only in Japanese.

Derandomization in Game-Theoretic Probability (with A. Takemura)
12 Feb 2014. Submitted. Full paper. We give a general method for constructing a deterministic strategy of Reality from a randomized strategy in game-theoretic probability. The construction can be seen as derandomization in game-theoretic probability.

Unified Characterizations of Lowness Properties via Kolmogorov Complexity (with T. Kihara)
19 Jan 2014. Submitted. Full paper. Consider a randomness notion $\mathcal C$. A uniform test in the sense of $\mathcal C$ is a total computable procedure by which each oracle $X$ produces a test relative to $X$ in the sense of $\mathcal C$.
We say that a binary sequence $Y$ is $\mathcal C$-random uniformly relative to $X$ if $Y$ passes all uniform $\mathcal C$ tests relative to $X$. Suppose now we have a pair of randomness notions $\mathcal C$ and $\mathcal D$ where $\mathcal{C}\subseteq \mathcal{D}$, for instance Martin-Löf randomness and Schnorr randomness. Several authors have characterized classes of the form Low($\mathcal C, \mathcal D$), which consist of the oracles $X$ that are so feeble that $\mathcal C \subseteq \mathcal D^X$. Our goal is to do the same when the randomness notion $\mathcal D$ is relativized uniformly: denote by Low$^\star$($\mathcal C, \mathcal D$) the class of oracles $X$ such that every $\mathcal C$-random is uniformly $\mathcal D$-random relative to $X$. (1) We show that $X\in{\rm Low}^\star({\rm MLR},{\rm SR})$ if and only if $X$ is c.e. tt-traceable if and only if $X$ is anticomplex if and only if $X$ is Martin-Löf packing measure zero with respect to all computable dimension functions. (2) We also show that $X\in{\rm Low}^\star({\rm SR},{\rm WR})$ if and only if $X$ is computably i.o. tt-traceable if and only if $X$ is not totally complex if and only if $X$ is Schnorr Hausdorff measure zero with respect to all computable dimension functions.

$L^1$-computability, layerwise computability and Solovay reducibility
17 July 2013, published; 27 Mar 2013, accepted; 19 Sep 2012, submitted. Full paper. Computability, 2:15-29, 2013. We propose a hierarchy of classes of functions that corresponds to the hierarchy of randomness notions. Each class of functions converges at the corresponding random points. We give various characterizations of the classes, that is, characterizations via integral tests, $L^1$-computability and layerwise computability. Furthermore, the relation among these classes is formulated using Solovay reducibility for lower semicomputable functions.

Proposition 2.3. Let $\mu$ be a computable measure on a computable metric space. Then there exists a computable sequence $\{r_n\}$ such that $\mu(\overline{B}(\alpha_i,r_j)\setminus B(\alpha_i, r_j)) = 0$ for all $i$ and $j$. This statement should be the following. Proposition 2.3.
Let $\mu$ be a computable measure on a computable metric space. Then there exists a computable sequence $\{r_n\}$ such that $\{ r_0, r_1, … \}$ is dense in the interval $(0, \infty)$ and $\mu(\overline{B}(\alpha_i,r_j)\setminus B(\alpha_i, r_j)) = 0$ for all $i$ and $j$. This problem was pointed out by K. Weihrauch on 19 Jan 2014. I appreciate him pointing this out.

Unpredictability of initial points
25 Dec 2013, the slide file was uploaded. RIMS Workshop: Dynamical Systems and Computation
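For reference, the randomness notions named in these announcements follow the standard test-based definitions; the display below is a reminder under the usual conventions, not text quoted from the papers.

```latex
% A Martin-Löf test is a uniformly c.e. sequence (U_n) of open sets with
% \mu(U_n) \le 2^{-n}; a Schnorr test additionally requires \mu(U_n) to be
% uniformly computable (often normalized to \mu(U_n) = 2^{-n} exactly).
\[
X \in \mathrm{MLR} \iff X \notin \bigcap_{n} U_n
  \ \text{for every Martin-Löf test } (U_n),
\qquad
X \in \mathrm{SR} \iff X \notin \bigcap_{n} U_n
  \ \text{for every Schnorr test } (U_n).
\]
```

Since every Schnorr test is in particular a Martin-Löf test, $\mathrm{MLR} \subseteq \mathrm{SR}$, which is the inclusion $\mathcal{C} \subseteq \mathcal{D}$ instantiated in the Low$^\star$ classes above.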
Graduate Texts in Mathematics. 217. New York, NY: Springer. viii, 342 p. EUR 64.95/net; sFr. 108.00; £45.50; $ 59.95 (2002).
The author's intended audience for this high-level introduction to model theory is graduate students contemplating research in model theory, graduate students in logic, and mathematicians who are not logicians but who are in areas where model theory has interesting applications. His goal in writing this text is to present the basic material and to illustrate how the two traditional themes of model theory interact. These traditional themes are the investigation of concrete mathematical structures and the sets definable in them, and the investigation of sets of sentences (theories) and the general structure of their models. Ideally, the reader of this text should already have an acquaintance with first-order logic, be comfortable with basic set theory (Zorn's lemma, cardinals, ordinals), and have had a year-long course in algebra at the graduate level. The text has eight chapters and two appendices, one on set theory and one on real algebra. Each chapter ends with a section of exercises and remarks. There are numerous exercises that vary in difficulty; some ask for proofs of results mentioned in the text; some work out examples that further illustrate material in the text; some introduce topics not appearing in the text (ultraproducts, for example); some require more outside knowledge and are marked with a dagger. The remarks contain some historical information. They also contain references to useful, mostly secondary, sources. The author also uses this opportunity to describe further results and suggest further reading. There is an extensive bibliography and a brief index. This text is noteworthy for its wealth of examples and its desire to bring the student to the point where the frontiers of research are visible.
The author briefly indicates sections to comprise a one-semester course, but there is no dependency graph for its sections. The text is packed, and negotiating a path through it and its exercises may require careful thought. In any case this book should be on the shelf of anybody with an interest in model theory. Here are the chapter headings and a partial indication of their contents. 1. Structures and Theories: first-order languages $ℒ$ and $ℒ$-structures, theories and elementary classes, definable sets and interpretability; 2. Basic Techniques: compactness and Henkin constructions, complete theories, Löwenheim-Skolem theorems, back-and-forth constructions, ${ℒ}_{{\omega }_{1},\omega }$ and Scott’s isomorphism theorem, Ehrenfeucht-Fraïssé games; 3. Algebraic Examples: quantifier elimination (QE), (ordered) divisible abelian groups, Presburger arithmetic, algebraically closed fields and the elimination of imaginaries, real closed fields; 4. Realizing and Omitting Types: types, omitting types and prime models, prime model extensions of $\omega$-stable theories, saturated and homogeneous models, QE for differentially closed fields, Vaught’s two cardinal theorem, number of countable models, Morley’s analysis of countable models; 5. Indiscernibles: order indiscernibles, Ehrenfeucht-Mostowski models, a many-models theorem, an independence result for Peano arithmetic (Paris-Harrington); 6. $\omega$-Stable Theories: uncountably categorical theories, the Baldwin-Lachlin proof of Morley’s categoricity theorem, Morley rank, forking and independence, uniqueness of prime model extensions, prime models of $\omega$-stable theories; 7. $\omega$-Stable Groups: chain conditions, generic types, indecomposability theorem, definable groups in algebraically closed fields, algebraic and constructible groups, generically presented groups and Hrushovski’s theorem; 8.
Geometry of Strongly Minimal sets: pregeometries, geometry of strongly minimal sets, Zariski geometries, applications to Diophantine geometry (a special case of Hrushovski’s proof of the Mordell-Lang Conjecture for function fields). 03Cxx Model theory 03-01 Textbooks (mathematical logic) 03-02 Research monographs (mathematical logic)
Theorem on quadrilaterals

September 11th 2006, 07:38 AM  #1
Hi, all. Is this a known theorem, meaning you've seen it stated somewhere? A simple quadrilateral whose minimum distance between vertices is 1 and whose maximum distance is sqrt(2) is a unit square.

September 11th 2006, 08:21 AM  #2
Global Moderator, Nov 2005, New York City

September 11th 2006, 09:01 AM  #3
Thanks for the reply. The statement is mine but the idea is not. Someone else is analyzing an algorithm for finding the closest pair out of a set of points. That person wants to prove an equivalent statement: the maximum number of points that can be placed in a unit square is 4 when the minimum distance between points is 1. It seems obvious but developing a proof has not been. I have one now, but I did not want to bother posting it if this is, say, a well-known exercise from high-school geometry.
January 29th 2008, 01:47 AM
The following table shows the consumption of a certain product imported into the United Kingdom, in thousands of units, for the years 1955-1964:

Years: 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964
Consumption of Cotton (thousands of bales): 170 210 188 98 83 131 205 182 90 92

Find: i) the 5-year moving total; ii) the 5-year moving average. All calculations should be in Excel.

January 29th 2008, 05:12 AM
Excel automatically compiles these results for you by graphing it. However, please note that a simple moving average (SMA) is the unweighted mean of the previous n data points. Therefore, you calculate the mean of the last five years of data for each point. Likewise, you sum the values of the last five years for the moving total.

January 29th 2008, 06:37 AM
I would have put the five-year moving average for year $n$ to be
$$MA_n = \frac{x_{n-2}+x_{n-1}+x_n+x_{n+1}+x_{n+2}}{5},$$
that is, I would always prefer the central moving average. (Of course Excel just gives the rolling MA.)

January 29th 2008, 06:40 AM
It depends where it is centered around. Simple moving averages look at past data, because that data is given and known. You cannot use unknown values in the future for a moving average.
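The hand calculation behind the Excel steps can be sketched in a few lines of Python (not part of the original thread; the data are the cotton figures quoted above):

```python
consumption = [170, 210, 188, 98, 83, 131, 205, 182, 90, 92]

def moving_total(data, n=5):
    # Sum of each window of n consecutive years
    return [sum(data[i:i + n]) for i in range(len(data) - n + 1)]

def moving_average(data, n=5):
    # Unweighted mean of each n-year window
    return [t / n for t in moving_total(data, n)]

totals = moving_total(consumption)
averages = moving_average(consumption)
print(totals[0], averages[0])  # 749 149.8
```

With 10 years of data and a 5-year window, there are 6 totals and 6 averages; whether each value is assigned to the last or the middle year of its window is exactly the rolling-versus-centered distinction discussed in the replies.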
Topic: Parallelogram Orthocenters
Replies: 7    Last Post: Oct 22, 2013 7:28 AM

Re: Parallelogram Orthocenters
Posted: Oct 16, 2013 12:24 PM

> Hi Avni,
> I suppose locating orthocenter by finding altitudes ( = 2 Area of triangles / base ) etc. can be done.
> By geometric construction H1H2 ~ 7.57188, its component parallel to AB is 7. It can be noted that O, midpoint of DB or H1H2, is the anti-symmetric centre of opposite vertices of parallelogram ABCD.
> Regards
> Narasimham

Hi Narasimham,
you are right, but of course an analytical solution is required. I gave some numerical values of side lengths and angles only for clarity, but they are fully unimportant. Let it be AB=DC=a, AD=BC=b, and angle(BAD)=fi. There are some further interesting properties of this structure that I want to reveal after an analytical solution is provided.
Best regards,

Date        Subject                          Author
10/15/13    Parallelogram Orthocenters       Avni Pllana
10/16/13    Re: Parallelogram Orthocenters   Narasimham
10/16/13    Re: Parallelogram Orthocenters   Peter Scales
10/16/13    Re: Parallelogram Orthocenters   Avni Pllana
10/18/13    Re: Parallelogram Orthocenters   Avni Pllana
10/18/13    Re: Parallelogram Orthocenters   Avni Pllana
10/19/13    Re: Parallelogram Orthocenters   Peter Scales
10/22/13    Re: Parallelogram Orthocenters   Avni Pllana
Homework Help

Posted by ann on Friday, December 16, 2011 at 11:28am.

Please, is this correct? Write a direct variation equation that relates atoms of oxygen O to atoms of hydrogen H. Mary combined 8 atoms of hydrogen and 4 atoms of oxygen to get 4 molecules of water. Is this equation correct? O + 2H = WM

• algebra - Willie, Saturday, December 17, 2011 at 10:36am

In chemistry class you might write an equation in that fashion to suggest that 1 molecule of water is composed of the "sum" of 1 oxygen atom and 2 hydrogen atoms. In your math work, since you are doing the topic of "direct variation", you are expected to think about how the quantities vary. That means an equation of the form y = kx, where k is a constant and x and y are the variables. In this case, what is asked for is the ratio of hydrogen to oxygen, which is constant no matter how many molecules you are talking about. For example, you could have (oxygen, hydrogen) values of (1, 2), (2, 4), (3, 6), (4, 8), etc. This can be expressed as H = 2O; in other words, the number of hydrogen atoms is two times whatever the number of oxygen atoms is.

y = k * x
H = 2 * O

In this case, the equation is expressing how hydrogen "varies directly" with oxygen. The constant of variation is 2. You could also write this:

y = k * x
O = (1/2) * H

That would express how oxygen varies directly with hydrogen. The constant of variation is 1/2. The graph of y = kx is always a line through (0, 0). The slope of the line is k. So instead of an "addition" equation, direct variation is always a "multiplication" equation. Note: If you have a situation that does not pass through (0, 0), it is not considered "direct variation." So, y = 3x + 5 is not direct variation.

• algebra - ann, Sunday, December 18, 2011 at 6:21am

Thank you very much for explaining it to me. I am taking my first algebra course and I am having a difficult time understanding my text. You made this so clear. I really appreciate it.
thanks

• algebra - sarah, Saturday, July 21, 2012 at 5:41pm

Correct! Great job.
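The direct-variation relationship in Willie's answer can be checked with a short Python sketch (an illustration of mine, not part of the original thread):

```python
# Direct variation y = k * x: hydrogen atoms vary directly with oxygen atoms.
k = 2
pairs = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (oxygen, hydrogen) pairs from the answer

for o, h in pairs:
    # H = 2 * O must hold for every pair if the variation is direct
    assert h == k * o

# Mary's example: 4 oxygen atoms require 8 hydrogen atoms
print(k * 4)  # 8
```

The assertions pass for every pair, which is exactly what "direct variation with constant 2" means: the ratio H/O never changes.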
F# Seq.Unfold

So I have been playing around with more Euler problems and I have found it a great way to learn some of the basic functions of F# and Seq specifically. The more I play the more I am really loving F# and how succinct the language is. Today I thought I would put up a brief post on the Seq.unfold function.

So, I see this function being used everywhere… my understanding of it is that it is a way that one can build a sequence based on a series of commands. I will use a small code snippet to show an example implementation of it…

let SeqTo10 startNum =
    startNum
    |> Seq.unfold(fun x -> if (x < 10) then Some(x, x + 1) else None)

Basically what this function does is take a number startNum and generate a sequence of all the numbers between startNum and 10 (provided startNum is less than 10). So the first thing we do is pass the starting number (or initial state argument) into the sequence generator. Then, using an anonymous function, we write the generator expression:

fun x -> if (x < 10) then Some(x, x + 1) else None

The anonymous function can basically have any number of expressions in it, but it will only stop generating the sequence when a None value is returned to the generator. So for instance in the code snippet above, if x is less than ten, then we add the current value of x to the new sequence and pass a new state value to the anonymous function (x + 1 in this case). This is done by the Some(x, x + 1). Once x is no longer less than 10, the anonymous function returns None and the sequence ends.

To illustrate this in another way, we can show another example of the unfold function, this time passing more than one value to the generator:

let SeqTo2D =
    (1, 1)
    |> Seq.unfold(fun (x, y) -> if (x < 10) then Some((x, y), (x + 1, y)) else None)

In this instance the (1, 1) is the initial state argument. The Some((x, y), (x + 1, y)) means that it will add the element (x, y) to the sequence and then pass the new state (x + 1, y) to the generator.
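For readers who don't know F#, the same idea can be sketched in Python (my own analogue, not from the original post): a generator that repeatedly applies the step function until it returns None, yielding the first component and threading the second back in as the new state.

```python
from typing import Callable, Iterator, Optional, Tuple, TypeVar

S = TypeVar("S")  # state type
A = TypeVar("A")  # element type

def unfold(gen: Callable[[S], Optional[Tuple[A, S]]], state: S) -> Iterator[A]:
    # Rough analogue of F#'s Seq.unfold: call gen on the state;
    # None ends the sequence, (value, new_state) yields and continues.
    while True:
        step = gen(state)
        if step is None:
            return
        value, state = step
        yield value

# Same behaviour as the post's SeqTo10, starting from 6
seq_to_10 = list(unfold(lambda x: (x, x + 1) if x < 10 else None, 6))
print(seq_to_10)  # [6, 7, 8, 9]
```

The lambda mirrors the F# anonymous function exactly: `(x, x + 1)` plays the role of `Some(x, x + 1)`, and `None` ends the sequence.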
posted on Wednesday, June 23, 2010 7:28 PM | Filed Under [ F# ]
Pasadena, CA Precalculus Tutor Find a Pasadena, CA Precalculus Tutor ...In addition to tutoring, I am currently working as a mechanical engineer for the R&D department of semiconductors company. I graduated with a B.S. in Mechanical Engineering from the California Institute of Technology about two years ago. When I tutor, my personal teaching philosophy is to make sure that my students learn by intuition and logic rather than memorization. 19 Subjects: including precalculus, chemistry, GRE, calculus ...Japan has the highest level of math education in the world. I was always a top student of school in math.I had tutored algebra for several years at University of Nevada, Las Vegas. I had experience with tutoring mathematics when I was a high school student in Japan. 13 Subjects: including precalculus, calculus, Japanese, geometry ...I love tutoring because it gives me a chance to focus on one person at a time and most people just need that extra attention to excel. I have had great results with all of my clients and I have a good sense of humor, so the time we spend together will not be boring. I have often been told that ... 11 Subjects: including precalculus, physics, geometry, algebra 1 ...I like to mix both jazz and classical genres since they complement each other so well on the saxophone. Don't start playing the sax without a teacher! It is easy to learn but hard to re-learn if you have learned something incorrectly. 18 Subjects: including precalculus, chemistry, calculus, algebra 2 ...I taught calculus in the classroom for 5 years and have tutored several people very successfully. This is one of my favorite subjects. I look forward to sharing it with you or your student. 
24 Subjects: including precalculus, chemistry, English, calculus
Web Resources

Online Assessments
This site is a group of online assessments for math, science, vocabulary, and geography. This is a free service for teachers who want to replace paper tests with online testing. Teachers must create a free account to utilize this site. The account includes an online grade book providing fast analysis of class and individual student progress, an exchange for teachers to share the resources they create, and a skills site for students, especially useful for math practice and testing.

Simplifying Fractions Baseball
Simplifying Fractions Baseball is a fun game that requires students to simplify fractions in order to play baseball.

Shoot Out at the Fraction Corral!
This interactive game requires the students to reduce fractions to lowest terms or simplified form. The student must shoot the fraction form to match the model form. Students may choose from Game 1 or 2 and Level 1, 2, or 3, and they may choose relaxed mode or timed mode. Results may be sent to the teacher with the percent correct. Students' scores are posted on a score board and compared with other students.

Learning Activities

Proteacher Collection: Fractions
This Web site contains comments and ideas for teaching fractions.
Haddon Heights Algebra 2 Tutor Find a Haddon Heights Algebra 2 Tutor ...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! 14 Subjects: including algebra 2, physics, calculus, ASVAB ...I am stern but caring, serious but fun, and nurturing but have high expectations of all of my students. Together as a team, you and I can help your child to do his or her best. I look forward to working with you and your child!I am a certified and current teacher in the public schools. 12 Subjects: including algebra 2, geometry, algebra 1, trigonometry ...I frequently work with students far below grade level and close education gaps. I have also worked with accelerated groups in Camden with students that have gone on to receive scholarships and success at highly accredited local high schools. My strength in tutoring is using vocabulary and phras... 8 Subjects: including algebra 2, geometry, algebra 1, SAT math ...I have worked three semesters as a computer science lab TA at North Carolina State University, as well as three semesters as a general math tutor for the tutoring center at the Community College of Philadelphia. I have tutored privately in both these subjects for many years. I have had the opportunity to work with a wide variety of students from all backgrounds and age groups. 22 Subjects: including algebra 2, calculus, statistics, geometry ...PLEASE NOTE: I only take serious SAT students who have time, the drive, and a strong personal interest in learning the tools and tricks to boost their score. Background: I graduated from UCLA, considered a New Ivy, with a B.S. in Integrative Biology and Physiology with an emphasis in physiology ... 26 Subjects: including algebra 2, English, chemistry, reading
All about torque
It gets your 'bot going

The basics of torque

Let's start with the "official" definition from our friends at the Encyclopædia Britannica:

    Torque -- also called MOMENT OF A FORCE, in physics, the tendency of a force to rotate the body to which it is applied. The torque, specified with regard to the axis of rotation, is equal to the magnitude of the component of the force vector lying in the plane perpendicular to the axis, multiplied by the shortest distance between the axis and the direction of the force component. Regardless of its orientation in space, the force vector F can always be located in a plane parallel to the axis. In the figure, the force vector F lies in the plane parallel to the line OL; the component F[L], being parallel to OL, has no moment about OL, while the component F[P], lying in the plane perpendicular to OL, has a moment, or torque, about OL equal to F[P] * d, in which d, the shortest distance between F[P] and OL, is the moment arm or lever arm.

So in short, torque is the combination of force applied at a point with the right angle (perpendicular) distance from that point to the axis of rotation (in our case, this'll be an axle). The formula to compute torque is simply this:

T = F * d, where
T = Torque
F = Force (here, the force applied perpendicular to the axis)
d = Distance

In 2 dimensions, you can visualize this simply: So, if you had a pulley of 1 inch radius and a cord fixed to it with a 1 ounce weight hanging down on the end, it would produce 1 inch-ounce of torque on the pulley shaft (its axis):

T = F * d = (1 oz.) * (1 in.) = 1 inch-ounce

Bear in mind that we can also turn this all around. Given a value of torque, and the value of d (generally called the moment arm), we can also compute the tangential force that would be generated.
Take our torque equation (T = F * d), and divide both sides of the equation by d:

T / d = (F * d) / d
T / d = F

So we're now armed with two powerful equations to follow torque through a system of shafts, gears, pulleys, and the like. Before we get too far, you should be aware that we're making a few assumptions to simplify things:

a) Torque stays constant along a shaft (so we're neglecting any friction in bearings that hold the shaft in place).
b) We're neglecting friction in our gears; a fairly accurate simplification for our applications. In particular, this means that force is the same on each gear at the contact point of two meshed gears (you'll see where this comes in later).
c) We can neglect the mass of any gears (the mass of gears only comes into play when they are particularly large, or particularly heavy -- both unlikely for BEAMbots).

Calculating Torque in a Geared System

Now, using all this, how can you determine the torque of a given motor in a geared system? Let's look at an example with one pulley, and two meshed gears. If we start at the pulley with a 1 oz. weight hanging off, then we have a torque on the top shaft of T = F * d = (1 ounce) * (1 inch) = 1 inch-ounce of torque. Now torque stays the same (we like to say "remains constant") all along the shaft, so at the big gear (#2) it still has 1 inch-ounce of torque trying to turn it. What is the force at mesh point B? Well, we can use the second version of our torque equation:

F = T / d = (1 inch-ounce) / (1 inch) = 1 ounce

Now, since the gear teeth are meshed, the force at point B is pushing on the smaller gear (#1) with the same 1 ounce force. Since we're neglecting friction, the force remains constant across the interface between gears. Now, the torque on the motor shaft is just:

T = F * d = (1 ounce) * (1/4 inch) = 1/4 inch-ounce

So there is 1/4 inch-ounce of torque trying to twist the motor, thus the motor must be twisting with 1/4 inch-ounce torque to oppose it.
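The gear-train walkthrough above can also be run as a calculation; this Python sketch (mine, not from the article) simply chains T = F * d and F = T / d through the pulley and the two meshed gears:

```python
def torque(force, arm):
    # T = F * d
    return force * arm

def tangential_force(torque_val, arm):
    # F = T / d
    return torque_val / arm

# The article's example: 1 oz. weight on a 1-inch pulley, big gear (1 inch
# radius) meshed with the motor's gear (1/4 inch radius), friction neglected.
t_top = torque(1.0, 1.0)               # torque on the top shaft, inch-ounces
f_mesh = tangential_force(t_top, 1.0)  # tangential force at mesh point B, ounces
t_motor = torque(f_mesh, 0.25)         # torque on the motor shaft, inch-ounces
print(t_top, f_mesh, t_motor)  # 1.0 1.0 0.25
```

The result matches the hand calculation: 1 inch-ounce at the pulley shaft shows up as only 1/4 inch-ounce at the motor shaft, a 4:1 torque advantage for the motor.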
If we look at it from the motor's point of view, its torque of 1/4 inch-ounce is magnified to 1 inch-ounce at the pulley. Cool, huh! The tradeoff is that the pulley turns slower than the motor (but I don't really want a bot zipping off the end of the table). You can see that by applying only one formula, T = F * d, and its algebraically rearranged form, F = T / d, we can work our way through any geared system of this type. If we know the motor's torque, then we can calculate the output torque on the final gear shaft (we could even find the horizontal force the wheel surface exerts on your tabletop). If we know the torque turning the wheels on your drive shaft, then we can back-calculate the motor's torque.

Calculating Motor Stall Torque

Imagine building a setup like this with a tube of 1 inch radius instead of just a pulley. You could have a winch that you could attach weights to, to see what finally stalls the motor. You could then back-calculate and find your motor torque. Start with a chain (like you pull to turn on a light) sitting on the floor, attached to the tube by fishing line, perhaps. As the winch pulls the cord, more and more of the chain rises, and finally enough weight is pulling on the winch that your motor stalls. You could measure the height of the chain, multiply by the chain's weight per inch, and calculate your torque value:

Chain weight W = X inches * 1/4 ounce per inch
F = W, since weight is a force

Then your motor's stall torque comes straight from our old friend T = F * d (but you probably knew that!).
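The stall-torque measurement reduces to one multiplication chain; here is a small sketch (the 8-inch chain height is a made-up illustrative reading, while the 1/4 ounce-per-inch chain weight and 1-inch drum radius come from the article):

```python
def stall_torque_from_chain(height_in, weight_per_inch_oz, drum_radius_in):
    # Weight of the lifted chain is the tangential force on the drum:
    # W = height * weight-per-inch, F = W, then T = F * d.
    w = height_in * weight_per_inch_oz
    return w * drum_radius_in

# Suppose the motor stalled with 8 inches of chain lifted
t_stall = stall_torque_from_chain(8, 0.25, 1.0)
print(t_stall)  # 2.0 inch-ounces
```

So a motor that stalls after lifting 8 inches of that chain on a 1-inch drum has a stall torque of 2 inch-ounces.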
Testing random matrix theory vs the zeta zeros Seminar Room 1, Newton Institute I will give a tutorial on methods of testing predictions of random matrix theory on data. There is some nice math (symmetric function theory) and some subtlety (the level repulsions lead to correlated data and need cutting edge tools such as the block bootstrap). This is joint work with Marc Coram.
Hermite interpolation formula
From Encyclopedia of Mathematics

A form of writing the polynomial that solves the problem of interpolating a function together with its derivatives at given points. The Hermite interpolation formula can be written in the form given in [1].

References
[1] I.S. Berezin, N.P. Zhidkov, "Computing methods", Pergamon (1973) (Translated from Russian)

Comments
Hermite interpolation can be regarded as a special case of Birkhoff interpolation (also called lacunary interpolation). In the latter, not all values of a function and its derivatives need to be prescribed at each node; which data are prescribed is recorded in an incidence matrix. Such a matrix is treated in [a1].

References
[a1] G.G. Lorentz, K. Jetter, S.D. Riemenschneider, "Birkhoff interpolation", Addison-Wesley (1983)
[a2] I.P. Mysovskih, "Lectures on numerical methods", Wolters-Noordhoff (1969) pp. Chapt. 2, Sect. 10
[a3] B. Wendroff, "Theoretical numerical analysis", Acad. Press (1966) pp. Chapt. 1

How to Cite This Entry:
Hermite interpolation formula. M.K. Samarin (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Hermite_interpolation_formula&oldid=13280
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
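As an illustration of the idea (my own sketch, not from the encyclopedia entry), here is the simplest Hermite case in Python: a cubic that matches a function and its first derivative at two points, built from the standard cubic Hermite basis functions.

```python
def hermite_cubic(x0, f0, df0, x1, f1, df1):
    """Cubic matching f and f' at x0 and x1 (two-point Hermite interpolation)."""
    h = x1 - x0
    def p(x):
        t = (x - x0) / h
        # Standard cubic Hermite basis functions on [0, 1]
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*f0 + h10*h*df0 + h01*f1 + h11*h*df1
    return p

# Interpolate f(x) = x**2 (so f'(x) = 2x) on [0, 2]; a cubic Hermite
# interpolant reproduces any polynomial of degree <= 3 exactly.
p = hermite_cubic(0.0, 0.0, 0.0, 2.0, 4.0, 4.0)
print(p(1.0))  # 1.0
```

Because the data here come from a quadratic, the interpolant agrees with x² everywhere on the interval, not just at the two nodes.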
Paper No

7th International Conference on Multiphase Flow
ICMF 2010, Tampa, FL USA, May 30-June 4, 2010

Numerical Simulation on the Mechanism of the Drop Deformation and Breakup in Shear Flow

S.L. Chen, C.Z. Lin and L.J. Guo
State Key Laboratory of Multiphase Flow in Power Engineering, Xi'an Jiaotong University, Xi'an, 710049, China

Keywords: Simple shear flows; diffuse interface method; critical capillary number; viscosity ratio

This paper presents the deformation and breakup of an isolated drop immersed in an immiscible liquid phase undergoing shear flow, using the diffuse interface method. The interface between the drop and the fluid is tracked by an order parameter, namely the mass concentration. Two-dimensional Navier-Stokes equations for an incompressible fluid are solved by a projection method on a fixed Cartesian grid, and surface tension effects are incorporated into the model through a modified stress. In the paper, the critical capillary number was plotted as a function of the viscosity ratio with the method of approximation. Small deformation and breakup of the droplets were investigated. Breakup of the drop occurred by three mechanisms, namely, necking, end pinching, and capillary instability. The distribution of drop breakup mechanisms at a given viscosity ratio was also simulated numerically. In the end, the velocity field was analysed to investigate the mechanism of drop deformation and breakup. Good agreement was found between numerical simulations and the experimental results, which indicates that the diffuse interface method can successfully capture the main behavior of the drop deformation and breakup.

The deformation and breakup of an isolated drop immersed in an immiscible liquid phase undergoing shear flow has been the subject of a number of studies. They are ubiquitous in the oil recovery, material processing, medicine, paints and cosmetics industries (Mason & Bibette, 1996).
In such areas, the size distribution of drops can affect the efficiency of the industrial process, and the deformation and breakup of a droplet have a direct effect on the interfacial area and hence on mass transfer. In order to optimize and control the performance, it is necessary to understand the mechanism of drop deformation and breakup. Drop dynamics are mainly characterized by the capillary number Ca = η γ̇ a / σ (with η the matrix viscosity, γ̇ the shear rate, a the drop radius and σ the interfacial tension) and by the viscosity ratio λ. Many experimental studies of drop deformation and breakup have been reported (Mason & Bibette, 1996). Since Taylor's pioneering work in the 1930s (Taylor, 1934), many valuable results have been obtained on the deformation of drops in well-defined flow fields, such as the steady drop shape at small deformation (Guido & Villone, 1998), the critical condition for breakup (Grace, 1982; Bentley & Leal, 1986), breakup of threads in a quiescent matrix (Tomotika, 1935; Stone & Leal, 1989), and quasi-equilibrium breakup (Torza et al., 1972; Janssen & Meijer, 1993). Reviews (Stone, 1994; Rallison, 1984) give useful summaries of these topics. Elemans et al. (1993) found that the drop deforms affinely when Ca_i > 2Ca_c for a Newtonian system of λ = 0.135 under a constant shear rate. Tsakalos et al. (1998) observed viscoelastic drop breakup due to both end pinching and capillary instability at Ca_i >> 2Ca_c. They found that the capillary instability starts to develop at a constant thread diameter which does not depend on the initial drop diameter and is inversely proportional to the shear rate. However, the complexity of this three-dimensional free surface problem has limited the investigation mostly to experiments, and the existing theoretical studies are carried out primarily using simplified treatments. From a fundamental viewpoint, the key to investigating drop dynamics is modelling the moving interface, for it is not known in advance and may undergo severe deformations.
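The two dimensionless groups that govern the problem can be computed directly; the sketch below (Python, with illustrative values that are not taken from the paper) evaluates Ca = η γ̇ a / σ and the viscosity ratio λ:

```python
# Capillary number Ca = eta_m * gamma_dot * a / sigma (viscous shear
# stress over capillary pressure) and viscosity ratio lam = eta_d / eta_m,
# the two groups that characterize drop dynamics in shear flow.
# All numerical values below are illustrative, not from the paper.

def capillary_number(eta_m, gamma_dot, a, sigma):
    """Ca: matrix viscosity * shear rate * drop radius / interfacial tension."""
    return eta_m * gamma_dot * a / sigma

def viscosity_ratio(eta_d, eta_m):
    """lam: drop viscosity over matrix viscosity."""
    return eta_d / eta_m

# Illustrative values: matrix viscosity 1 Pa.s, shear rate 4.6 1/s,
# drop radius 0.5 mm, interfacial tension 5 mN/m.
Ca = capillary_number(1.0, 4.6, 0.5e-3, 5e-3)
lam = viscosity_ratio(0.5, 1.0)
print(round(Ca, 3), lam)
```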
Therefore, the choice of an interface tracking method is critical for a successful simulation. A conceptually straightforward way of handling moving interfaces is to employ a mesh that has grid points on the interfaces and deforms according to the flow on both sides of the boundary. This has been implemented in boundary integral methods (Cristini et al., 1998) and the front tracking method (Tryggvason et al., 2001). These methods have been used successfully for simulations of complex multiphase flows. However, these interface descriptions break down when interfaces undergo severe deformation, because significant deformations may cause loss of simulation accuracy and singularities in the solution; thus, these methods have been applied mostly to relatively mild deformations. As an alternative, fixed-grid methods that regularize the interface have been highly successful in treating deforming interfaces. These include the volume-of-fluid (VOF) method (Li & Renardy, 2000) and the level-set method (Chang et al., 1996). These methods introduce the interfacial tension as a body force, and a single set of governing equations is solved; the interface is tracked by an artificial colour function. The disadvantage of the level-set method is that mass is not conserved, and the disadvantage of the VOF method is that it is difficult to compute accurate local curvatures from the discontinuous volume fractions. In the diffuse interface method, the interface between two immiscible fluids is considered to have a small but finite thickness (Lowengrub & Truskinovsky, 1998). The various variables change continuously over this interfacial region. The advantage of the diffuse interface method is that explicit tracking of the interface is unnecessary and changes in the interface topology are easily handled. The present work focuses on the small deformation and transient breakup of a drop in simple shear flow.
The study involves numerical modelling based on the diffuse interface method. The next section describes the computational techniques, followed by a results and discussion section and the conclusions.

Nomenclature
Ca  capillary number
a   radius (m)
Greek letters
γ̇   shear rate (s⁻¹)
η   viscosity (Pa·s)
σ   interfacial tension (mN/m)
λ   viscosity ratio
ρ   density (kg·m⁻³)
Subscripts
c   critical
i   initial

Numerical Scheme

2.1 Problem statement
We considered an incompressible and immiscible drop with volume (4/3)πa³ and viscosity λη in a fluid of viscosity η. The drop is subjected to a simple shear flow generated by the motion of the top and bottom walls, as shown in Fig. 1: the upper wall moves to the right with constant velocity U and the lower wall moves in the opposite direction with constant velocity -U. The domain extends over Lx and Ly in the x and y directions, respectively. The average shear rate is γ̇ = 2U/Ly, and σ is the interfacial tension.

Figure 1: The diagram of the drop immersed in shear flow

2.2 Cahn-Hilliard equation
The dynamics of the fluid at hand was modelled using the diffuse interface method to mimic the experimental results obtained through visualization. The governing equations of the model are the Navier-Stokes-Cahn-Hilliard equations; the surface tension is incorporated into the model through a modified stress, and a uniform staggered Cartesian grid is used (Lowengrub & Truskinovsky, 1998; Anderson et al., 1998; Verschueren, 1999; Kim, 2002). In dimensionless form:

ρ(u_t + u·∇u) = -∇p + ∇·[η(c)(∇u + (∇u)ᵀ)] + F_s   (1)
∇·u = 0   (2)
c_t + u·∇c = (1/Pe) ∇·(M(c)∇μ)   (3)
μ = dF(c)/dc - C²Δc   (4)

where M(c) = c(1-c) is the mobility, μ is the generalized chemical potential, F(c) = 0.25 c²(1-c)² is the Helmholtz free energy, η(c) is the dimensionless viscosity, Pe is the diffusional Peclet number, and C is the Cahn number. Pe is the ratio between convective and diffusive mass transport, and C is a measure of the thickness of the interface.
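As a check on Eqs. (3)-(4), the sketch below (Python; the grid size, Cahn number and function names are illustrative choices of mine, not the paper's) evaluates the chemical potential μ = F'(c) - C²Δc in 1D with the double-well F(c) = 0.25 c²(1-c)², and verifies that μ nearly vanishes for the equilibrium tanh interface profile:

```python
# 1D finite-difference sketch of the chemical potential in Eq. (4),
# mu = F'(c) - C^2 * Laplacian(c), with double-well F(c) = 0.25 c^2 (1-c)^2.
# Grid size and Cahn number are illustrative, not taken from the paper.
import math

def dF_dc(c):
    # derivative of the double-well free energy F(c) = 0.25 c^2 (1 - c)^2
    return 0.5 * c * (1.0 - c) * (1.0 - 2.0 * c)

def chemical_potential(c, dx, Cahn):
    """mu_i = F'(c_i) - Cahn^2 * (c_{i+1} - 2 c_i + c_{i-1}) / dx^2
    at interior points; endpoint values are copied from their neighbours."""
    n = len(c)
    mu = [0.0] * n
    for i in range(1, n - 1):
        lap = (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx**2
        mu[i] = dF_dc(c[i]) - Cahn**2 * lap
    mu[0], mu[-1] = mu[1], mu[-2]
    return mu

# Equilibrium tanh profile across the interface: mu should be near zero,
# up to the O(dx^2) discretization error of the Laplacian.
Cahn, dx = 0.05, 0.01
xs = [dx * i for i in range(201)]
c = [0.5 * (1.0 + math.tanh((x - 1.0) / (2.0 * math.sqrt(2.0) * Cahn)))
     for x in xs]
mu = chemical_potential(c, dx, Cahn)
print(max(abs(m) for m in mu))  # small residual for the equilibrium profile
```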
The term F_s is the body force arising from interfacial tension. A mass concentration field c(x,y) was introduced to denote the mass ratio of one of the components in the mixture of the two fluids; the transition of c(x,y) across the interface is smooth in the interfacial region. For simplicity, the drop was located at the center of the domain. The initial velocity was zero and the initial concentration was (Jacqmin, 2000):

c(x,y) = (1/2) [1 + tanh((a - r) / (2√2 C))],   r = √((x - Lx/2)² + (y - Ly/2)²)   (5)

2.3 Projection method
An effective solver for the continuity and momentum equations is the approximate projection method (Almgren et al., 1998). The time-stepping procedure is based on a Crank-Nicolson type method, and the advection terms are discretized with the second-order essentially non-oscillatory (ENO) method (Harten et al., 1987). The first step is to calculate an intermediate velocity u*:

(u* - uⁿ)/Δt = -(u·∇u)^{n+1/2} - ∇p^{n-1/2} + F_s^{n+1/2} + (1/(2Re)) ∇·[ηⁿ(∇u + (∇u)ᵀ)ⁿ + η^{n+1}(∇u* + (∇u*)ᵀ)]   (6)

In general, u* is not divergence free. Next, the pressure correction φ is obtained from the Poisson equation:

Δφ = (1/Δt) ∇·u*   (7)

Then the new velocity u^{n+1} at time level n+1, which satisfies ∇·u^{n+1} = 0, is

u^{n+1} = u* - Δt ∇φ   (8)

and the pressure is updated as

p^{n+1/2} = p^{n-1/2} + φ   (9)

The resulting discrete equations are solved using a multigrid method.

2.4 Surface tension force
The flow field depends on the concentration field c(x,y) through extra stresses that model the surface tension between the two fluid components. The effects of surface tension are included in the computational model through an external forcing term added to the momentum equation. The surface tension force is based on a continuum surface force formulation and is introduced, following Kim, as:

F_s = (αC / (Re Ca)) ∇·(∇c/|∇c|) |∇c| ∇c   (10)

To match the surface tension of the sharp interface model, α must satisfy:

αC ∫ |∇c|² dx = 1   (11)

From reference (Jacqmin, 2000), we obtain α = 6√2.
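The initial condition above can be sketched as follows (Python; the grid resolution and parameter values are illustrative, not the paper's):

```python
# Sketch of the initial concentration field: a circular drop of radius a
# centered in the domain, with a tanh profile whose thickness is set by
# the Cahn number C. Grid and parameters are illustrative.
import math

def initial_concentration(nx, ny, Lx, Ly, a, Cahn):
    """c = 0.5 * (1 + tanh((a - r) / (2*sqrt(2)*Cahn))), with r measured
    from the domain center; c -> 1 inside the drop, c -> 0 outside."""
    dx, dy = Lx / nx, Ly / ny
    c = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            x, y = (i + 0.5) * dx, (j + 0.5) * dy
            r = math.hypot(x - Lx / 2, y - Ly / 2)
            c[j][i] = 0.5 * (1.0 + math.tanh(
                (a - r) / (2.0 * math.sqrt(2.0) * Cahn)))
    return c

# 8a x 4a domain on a 128 x 64 mesh, drop radius a = 1, Cahn number 0.05.
c = initial_concentration(128, 64, 8.0, 4.0, 1.0, 0.05)
center = c[32][64]   # cell near the drop center: c close to 1
corner = c[0][0]     # cell far outside the drop: c close to 0
print(center, corner)
```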
2.5 Numerical process
The outline of the simulation is as follows. For each n-th time step, n = 1, 2, ...:
(1) Initialize c(x,y,0) to the locally equilibrated concentration profile.
(2) Solve the Cahn-Hilliard equation with a nonlinear full approximation storage multigrid method to obtain c^{n+1} and μ^{n+1}, with the advection term u·∇c calculated using a second-order ENO scheme (Harten et al., 1987).
(3) Use c^{n+1/2} = (3cⁿ - c^{n-1})/2 to compute the surface force.
(4) Solve the Navier-Stokes equations by the approximate projection method to obtain u^{n+1} and p^{n+1/2}, with u·∇u calculated using a second-order ENO scheme.
(5) Update the time and repeat steps 2-4.

Results and Discussion

It is well known that, in simple shear flow, for a given viscosity ratio λ, if the strain rate is sufficiently small, an isolated drop immersed in an immiscible liquid phase deforms and attains an equilibrium shape but never breaks up. However, as the strain rate increases, the initial capillary number exceeds a critical value and the drop begins to break up. This critical value is known as the critical capillary number, Ca_c. As the initial capillary number Ca_i increases beyond Ca_c, the breakup mechanism of the drop changes.

1. Steady shape
We study the deformation of a viscous drop for subcritical capillary numbers, where the drop is stretched to an approximately ellipsoidal shape. We have studied the case most analyzed in the literature with the boundary integral method and the VOF method, namely the Stokes flow regime with λ = 1. Numerical simulations have been conducted for capillary numbers Ca_i = 0.2, 0.3, and 0.4. For the deformation parameter calculation, the computational domain is a box of dimension 8a × 4a and the computations have been done on a 128 × 64 mesh.
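The steady shapes just described are usually quantified by the Taylor deformation parameter D = (L - B)/(L + B), where L and B are the drop's major and minor axes; the classical small-deformation estimate D ≈ Ca (19λ + 16)/(16λ + 16) is the kind of theory curve such results are compared against. A minimal sketch (Python; not the paper's code):

```python
# Taylor deformation parameter D = (L - B) / (L + B) for a stretched
# drop with major axis L and minor axis B, plus the classical
# small-deformation estimate D ~ Ca * (19*lam + 16) / (16*lam + 16).

def deformation_parameter(L, B):
    """D = 0 for a sphere, D -> 1 for a highly elongated drop."""
    return (L - B) / (L + B)

def small_deformation_D(Ca, lam):
    """Small-deformation (first-order) estimate of D in shear flow."""
    return Ca * (19.0 * lam + 16.0) / (16.0 * lam + 16.0)

# For lam = 1 the prefactor is 35/32, so D grows linearly with Ca:
for Ca in (0.2, 0.3, 0.4):
    print(Ca, small_deformation_D(Ca, 1.0))
```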
The deformation parameter D for the steady-state solution is shown in Fig. 2. By comparing our results to previous results in the literature, we can judge the accuracy of our method. In our simulations, the Reynolds number is 0.0625, which differs from the boundary integral simulations carried out for Stokes flow. The values of the deformation parameter are only slightly larger than those obtained by small deformation theory (C.E. Chaffey & H. Brenner, 1967) and by the boundary integral method (J.M. Rallison & A. Acrivos, 1981) for Stokes flow, and are consistent with the results from the VOF method (J. Li & Y.Y. Renardy, 2000) for low Reynolds number flow and with the trend of the experimental results (F.D. Rumscheidt & S.G. Mason, 1961). From Fig. 2 we conclude that the deformation parameter increases with Ca_i. The capillary number is the ratio between the viscous shear stress, which deforms the drop, and the capillary pressure, which resists the deformation. When Ca_i is below the critical value, the capillary pressure is dominant, so the drop does not break up. As Ca_i increases, more viscous shear stress is imposed: the longer the drop is stretched, the larger the curvature at its ends.

Figure 2: Steady-state drop deformation parameter D for various capillary numbers (present simulation, theoretical calculation, VOF simulation, BIM simulation and experimental observation)

2. The relationship between λ and Ca_c
In this section we present the relationship between the viscosity ratio λ and the critical capillary number Ca_c, as depicted in Figure 3. Drop breakup was reached for viscosity ratios between 0.01 and 3.78, corresponding to Ca_c between 0.4 and 1.3. A comparison with literature data is also depicted.
In the figure, the solid line is the fit equation for shear flow by Marks (1998); the dashed line is the theoretical value of Barthes-Biesel and Acrivos (1973); the crosses and circles are the critical capillary numbers of Grace (1982) and Torza et al. (1972), respectively, obtained with pseudo-steady-state experiments; and the triangles are the numerical results of this paper. The critical capillary number Ca_c is found to be higher at both low and high viscosity ratios; when the viscosity ratio is close to 0.5, Ca_c reaches its minimum. This tendency is consistent with the comparison of experimental and theoretical results in the literature. However, the deviation of the obtained results from those presented by Grace and Torza needs attention: their experimental data were obtained with a pseudo-steady method, while we used an approximation method in our simulation. A possible reason for the systematic deviation is the assumption that the liquids in the simulation are Newtonian, while no absolutely Newtonian liquid exists in nature.

Figure 3: The critical capillary number as a function of the viscosity ratio, compared with literature data

3. Drop breakup mechanisms with different initial capillary numbers
The deformation and breakup of the drop with different capillary numbers (1.01Ca_c, 1.1Ca_c, 1.3Ca_c, 1.98Ca_c) is shown in Figure 4. In all simulations, the drop and the liquid had the same viscosity ratio of 0.5, and the initial shape of the drop was spherical. As the figure shows, different breakup mechanisms appear for different initial capillary numbers.
Figure 4(a): Ca_i = 1.01Ca_c, γ̇ = 4.6 s⁻¹ (snapshots at successive times)

(1) When the initial capillary number of the droplet is slightly larger than Ca_c (t = 0.217 s), the drop deformation and breakup proceed as shown in Figure 4(a). As time goes on, the drop keeps stretching and a neck forms in the middle of the drop (t = 0.625 s). The neck becomes thinner and thinner, and the drop eventually breaks up into two daughter drops (t = 1.962 s). The two daughter drops have the same size and opposite flow directions, and smaller satellite drops appear between them. After that, the distance between the drops keeps increasing. This process is named the necking mechanism.
(2) At the slightly larger capillary number Ca_i = 1.1Ca_c (Figure 4(b)), the initially spherical drop first deforms into an ellipsoid (t = 0.263 s) under the shear flow. As stretching continues, the drop becomes dumbbell shaped; it keeps this shape while becoming thinner and thinner in the middle (t = 2.105 s) and finally breaks up into two daughters, with a larger satellite drop between them (t = 2.368 s).
(3) Figure 4(c) depicts drop deformation and breakup at a capillary number of 1.3Ca_c. The shear flow deforms the initially spherical drop into an ellipsoid (t = 0.782 s). As time goes on, the drop is stretched into a long thread whose ends bulb up, and a bridge forms between each bulbous end and the uniform central thread (t = 5.079 s). The bridge keeps thinning, which leads to the pinch-off of a daughter drop (t = 6.641 s). A similar process repeats on the remaining part of the thread (t = 7.423 s). This phenomenon is called end pinching.
Figure 4: Numerical results for different processes of drop breakup under shear flow (snapshots at successive times). (b): Ca_i = 1.1Ca_c, γ̇ = 3.8 s⁻¹; (c): Ca_i = 1.3Ca_c, γ̇ = 2.7 s⁻¹, a_i = 1.31; (d): Ca_i = 1.98Ca_c, γ̇ = 9 s⁻¹, a_i = 0.61

(4) When the capillary number increases to 1.98Ca_c, the initially spherical drop deforms into an ellipsoid (t = 0.0556 s) under the shear flow. As time goes on, the drop at first behaves as in the 1.3Ca_c case; however, the capillary instability grows on the central part of the thread, and the thread finally breaks into a line of uniformly sized daughter drops.

Figure 5: Distribution of drop breakup mechanisms at a given viscosity ratio (no breakup, necking, end pinching and capillary instability, as a function of the initial capillary number)

Good agreement was found between the numerical simulations and former experimental results (Lin & Guo, 2007). As the initial capillary number changes from Ca_c to 2Ca_c, the deformation and breakup of the drop presents three mechanisms: the necking mechanism, end pinching, and capillary instability, the same as in the experimental results. However, the time needed for drop breakup in the simulations was far less than in the experiments. As mentioned before, in our simulations the shear was applied as a step rather than ramped up gradually; the critical capillary number obtained in step flow is lower than in pseudo-steady-state flow, and the time needed to bring the fluid to a developed shear flow field appears to be much longer than the time needed to break the drop. Another reason may be that the drop in the simulation was treated as Newtonian, while no absolutely Newtonian liquid exists in the experiments.
From the comparison we can also see the advantage of the numerical simulation method in studies of drop deformation and breakup, as the numerical simulation lies closer to the theoretical values. The distribution of drop breakup mechanisms at a given viscosity ratio was simulated numerically, as shown in Figure 5. From the numerical results above, the breakup of a drop immersed in another liquid occurs by three mechanisms: when Ca_i is slightly larger than the critical capillary number, the drop breaks up via the necking mechanism; for Ca_i between about 1.0Ca_c and 1.8Ca_c, end pinching is the dominant breakup mechanism; and for Ca_i near or larger than 1.8Ca_c, capillary instability and end pinching are the dominant mechanisms. In the process of breakup, satellite drops come into being together with the daughter drops; they are symmetric about the centre of the drop and have opposite flow velocities.

4. The velocity fields of drop breakup
Further insight into the process of drop breakup can be gained by examining the velocity fields at different stages of the breakup. The velocity fields in the x-z plane through the centre of the drop during breakup are presented in Figure 6. A vortical motion exists inside each bulb, created by the competition between the viscous shear stress and the surface tension, except near the neck. The surface tension force drives the flow faster toward the bulbous ends, while in the waist near the centre the flow is much slower; this induces the thinning of the bridge, which finally pinches off to generate a daughter drop.

Figure 6: Velocity fields in the x-z plane through the centre: (a) elongation, (b) evolving, (c) breakup

The deformation and breakup of a droplet immersed in another liquid was investigated in this paper using the diffuse interface method, focusing on the mechanism of drop deformation and breakup.
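The regime map just described can be summarized as a simple classifier on the ratio Ca_i/Ca_c (Python sketch; the 1.8 threshold and regime names come from the results above, while the exact band edges and the merged "necking / end pinching" label are my simplification):

```python
# Classifier for the breakup regimes reported above, keyed on the
# ratio Ca_i / Ca_c. The 1.8 threshold and mechanism names follow the
# paper's results; the sharp band edges are an illustrative simplification.

def breakup_mechanism(Ca_i, Ca_c):
    ratio = Ca_i / Ca_c
    if ratio < 1.0:
        return "no breakup"            # sub-critical: steady deformed shape
    if ratio <= 1.8:
        return "necking / end pinching"
    return "capillary instability"     # long thread breaks into uniform drops

for r in (0.8, 1.01, 1.3, 1.98):
    print(r, breakup_mechanism(r, 1.0))
```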
In view of the results presented in this work, the following conclusions can be drawn:
1. For a given viscosity ratio λ, if the strain rate is sufficiently small, the isolated drop immersed in an immiscible liquid phase deforms and attains an equilibrium shape but never breaks up. As Ca_i increases, more viscous shear stress is imposed: the longer the drop is stretched, the larger the curvature at its ends.
2. The breakup occurs by three mechanisms: for Ca_i ≈ 1.0Ca_c, necking; for Ca_i ≤ 1.8Ca_c, end pinching; for Ca_i > 1.8Ca_c, capillary instability together with end pinching.
3. The diffuse interface method can numerically simulate the deformation and breakup of a drop immersed in another immiscible liquid. With this method we can simulate more complex flow fields than experiments can easily address.
4. In the simulation, the drop and the fluid were treated as Newtonian, while no absolutely Newtonian liquid exists in the experiments. In later work we will extend our method to non-Newtonian fluids.

Acknowledgments
The authors are grateful to the National Science Foundation of China (Contract No. 50823002 and No. 50536020) for financial support of this work. We thank the referees of this paper for their valuable suggestions to improve its quality.

References
Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., Welcome, M.L., 1998. A conservative adaptive projection method for the variable density incompressible Navier-Stokes equations. J. Comput. Phys. 142, 1.
Anderson, D., McFadden, G.B., Wheeler, A.A., 1998. Diffuse-interface methods in fluid mechanics. Annu. Rev. Fluid Mech. 30, 139-165.
Barthes-Biesel, D., Acrivos, A., 1973. Deformation and burst of a liquid droplet freely suspended in a linear shear field. J. Fluid Mech. 61, 1-21.
Bentley, B.J., Leal, L.G., 1986. An experimental investigation of drop deformation and breakup in steady, two-dimensional linear flows. J. Fluid Mech. 167.
Chaffey, C.E., Brenner, H., 1967. A second-order theory for shear deformation of drops. J. Colloid Interface Sci. 24, 258.
Chaffey, C.E., Brenner, H., Mason, S.G., 1965. Particle motions in sheared suspensions XVII: Deformation and migration of liquid drops. Rheol. Acta 4, 1-56.
Chang, Y.C., Hou, T.Y., Merriman, B., Osher, S., 1996. A level set formulation of Eulerian interface capturing methods for incompressible fluid flows. J. Comput. Phys.
Cristini, V., Blawzdziewicz, J., Loewenberg, M., 1998. Drop breakup in three-dimensional viscous flows. Phys. Fluids 10, 1781-1783.
Elemans, P.H.M., Bos, H.L., Janssen, J.M.H., Meijer, H.E.H., 1993. Transient phenomena in dispersive mixing. Chem. Eng. Sci. 48, 267-276.
Grace, H.P., 1982. Dispersion phenomena in high viscosity immiscible fluid systems and application of static mixers as dispersion devices in such systems. Chem. Eng. Commun. 14, 225-277.
Guido, S., Villone, M., 1998. Three-dimensional shape of a drop under simple shear flow. J. Rheol. 42, 395-415.
Harten, A., Engquist, B., Osher, S., Chakravarthy, S.R., 1987. Uniformly high order accurate essentially non-oscillatory schemes III. J. Comput. Phys. 71, 231-303.
Jacqmin, D., 2000. Contact-line dynamics of a diffuse fluid interface. J. Fluid Mech. 402, 57-88.
Janssen, J.M.H., Meijer, H.E.H., 1993. Droplet break-up mechanisms: Stepwise equilibrium versus transient dispersion. J. Rheol. 37(4), 597-608.
Kim, J., 2002. Modelling and simulation of multi-component, multi-phase fluid flows. Ph.D. thesis, University of Minnesota.
Li, J., Renardy, Y., 2000. Shear-induced rupturing of a viscous drop in a Bingham liquid. J. Non-Newtonian Fluid Mech. 95, 235-251.
Lin, C.Z., Guo, L.J., 2007. Experimental study of drop deformation and breakup in simple shear. Chin. J. Chem. Eng. 15(1), 1-5.
Lowengrub, J., Truskinovsky, L., 1998. Quasi-incompressible Cahn-Hilliard fluids and topological transitions. Proc. R. Soc. Lond. A 454, 2617-2654.
Marks, C.R., 1998. Drop breakup and deformation in sudden onset strong flows. Ph.D. thesis, University of Maryland at College Park.
Mason, T.G., Bibette, J., 1996. Emulsification in viscoelastic media. Phys. Rev. Lett. 77, 3481-3484.
Rallison, J.M., 1984. The deformation of small viscous drops and bubbles in shear flows. Annu. Rev. Fluid Mech. 16, 45-66.
Rallison, J.M., Acrivos, A., 1981. A numerical study of the deformation and burst of a viscous drop in general shear flows. J. Fluid Mech. 109, 465.
Rumscheidt, F.D., Mason, S.G., 1961. Particle motions in sheared suspensions XII: Deformation and burst of fluid drops in shear and hyperbolic flow. J. Colloid Sci. 16, 238.
Stone, H.A., 1994. Dynamics of drop deformation and breakup in viscous fluids. Annu. Rev. Fluid Mech. 26, 65.
Stone, H.A., Leal, L.G., 1989. Relaxation and breakup of an initially extended drop in an otherwise quiescent fluid. J. Fluid Mech. 198, 399-427.
Taylor, G.I., 1934. The formation of emulsions in definable fields of flow. Proc. R. Soc. London, Ser. A 146, 501-523.
Tomotika, S., 1935. On the instability of a cylindrical thread of viscous liquid surrounded by another viscous fluid. Proc. R. Soc. London, Ser. A 150, 322-337.
Torza, S., Cox, R.G., Mason, S.G., 1972. Particle motions in sheared suspensions XXVII: Transient and steady deformation and burst of liquid drops. J. Colloid Sci.
Tryggvason, G., Bunner, B., Esmaeeli, A., Juric, D., Al-Rawahi, N., Tauber, W., Han, J., Nas, S., Jan, Y.-J., 2001. A front-tracking method for the computations of multiphase flow. J. Comput. Phys.
Tsakalos, V.T., Navard, P., Peuvrel-Disdier, E., 1998. Deformation and breakup mechanisms of single drops during shear. J. Rheol. 42, 1403-1417.
Verschueren, M., 1999. A diffuse-interface model for structure development in flows. Ph.D. thesis, Technische Universiteit Eindhoven, the Netherlands.
{"url":"http://ufdc.ufl.edu/UF00102023/00197","timestamp":"2014-04-16T16:11:47Z","content_type":null,"content_length":"48523","record_id":"<urn:uuid:bcd81599-cff0-489b-a11e-2c9cbf33983e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Lakoba, Taras I. - Department of Mathematics and Statistics, University of Vermont

• CALCULUS II, MATH 022 Instructor: Dr. T.I. Lakoba Strategy of testing a series for convergence
• CALCULUS II, MATH 022.Z1 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 2
• A generalized Petviashvili iteration method for scalar and vector Hamiltonian equations with
• Accelerated Imaginary-time Evolution Methods for the Computation of Solitary Waves
• Publications of T. Lakoba 1 Publication List of Taras I. Lakoba
• Polarization-mode dispersion of a circulating loop T. I. Lakoba
• MATH 022.Z1 Calculus II / Summer 2010 (Session 1) Textbook: Calculus (Early transcendentals), by J. Stewart, 6th Ed.
• JOURNAL OF LIGHTWAVE TECHNOLOGY, VOL. 23, NO. 9, SEPTEMBER 2005 2647 Transmission Improvement in Ultralong
• CALCULUS II, MATH 022.Z1 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 3
• Conjugate Gradient Method for finding fundamental solitary waves
• JOURNAL OF LIGHTWAVE TECHNOLOGY, VOL. 27, NO. 10, MAY 15, 2009 1379 BER Degradation by Signal-Reshaping Processors
• A comparative study of noisy signal evolution in 2R all-optical regenerators
• IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS, VOL. 14, NO. 3, MAY/JUNE 2008 599 Multicanonical Monte Carlo Study of the BER
• A mode elimination technique to improve convergence of iteration methods for finding
• Universally-Convergent Squared-Operator Iteration Methods for Solitary Waves in General Nonlinear
• All-optical multichannel 2R regeneration in a fiber-based device
• 382 JOURNAL OF LIGHTWAVE TECHNOLOGY, VOL. 22, NO. 2, FEBRUARY 2004 Effect of a Raman Co-Pump's RIN on the BER
• Probability-density function for energy perturbations of isolated optical pulses
• CALCULUS II, MATH 022.Z1 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 1
• Math 22 Lab 4 Integration
• INSTITUTE OF PHYSICS PUBLISHING EUROPEAN JOURNAL OF PHYSICS Eur. J. Phys. 23 (2002) 2126 PII: S0143-0807(02)26048-1
• CALCULUS II, MATH 022.Z1 Instructor: Dr. T.I. Lakoba Preparation sheet for the Final Exam
• MATLAB PRIMER This chapter will serve as a hands-on tutorial for beginners who are unfa-
• Error Estimation and Control for ODEs L.F. Shampine
• Taras I. Lakoba Dept. of Mathematics and Statistics, University of Vermont, Burlington, VT 05401
• Convergence conditions for iterative methods seeking multi-component solitary waves with
• NALM-based, phase-preserving 2R regenerator of high-duty-cycle pulses
• Instability analysis of the split-step Fourier method on the background of a soliton of
• Low-Power, Phase-Preserving 2R Amplitude Regenerator
• MATH 260 Foundations of Geometry / Fall 2011 Textbooks: Complex numbers and Geometry, by L.-s. Hahn (required)
• MATH 121.A Calculus III / Fall 2011 Textbook: Calculus (Early transcendentals), by J. Stewart, 6th Ed.
• 13.3 CLASSICAL STRAIGHTEDGE AND COMPASS CONSTRUCTIONS As a simple application of the results we have obtained on algebraic extensions, and in
• CALCULUS III, MATH 121 Instructor: Dr. T.I. Lakoba Preparation sheet for the Final Test (Fall 2011)
• MATH 260.A --Foundations of Geometry / Fall 2011 Preparation sheet for the Final Test
• CALCULUS III, MATH 121 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 3
• CALCULUS III, MATH 121 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 2
• CALCULUS III, MATH 121 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 1
• MATH 260.A --Foundations of Geometry / Fall 2011 Preparation sheet for Test 2
• MATH 260.A --Foundations of Geometry / Fall 2011 Preparation sheet for Test 1
• Acta Numerica (2003), pp. 1--51 © Cambridge University Press, 2003
• MATH 337, by T. Lakoba, University of Vermont 65 6 Boundary-value problems (BVPs): Introduction
• MATH 337, by T. Lakoba, University of Vermont 173 17 Method of characteristics for solving hyperbolic PDEs
• MATH 337, by T. Lakoba, University of Vermont 6 1 Simple Euler method and its modifications
• MATH 337, by T. Lakoba, University of Vermont 47 5 Higher-order ODEs and systems of ODEs
• Error Estimation and Control for ODEs L.F. Shampine
• MATH 337, by T. Lakoba, University of Vermont 15 2 Runge-Kutta methods
• Solving Boundary Value Problems for Ordinary Differential Equations in Matlab with bvp4c
• MATH 337, by T. Lakoba, University of Vermont 36 4 Stability analysis of finite-difference methods for ODEs
• MATLAB PRIMER This chapter will serve as a hands-on tutorial for beginners who are unfa-
• MATH 337, by T. Lakoba, University of Vermont 140 15 The Heat equation in 2 and 3 spatial dimensions
• MATH 337.A Numerical Differential Equations Spring 2012
• Edward Neuman Department of Mathematics
• Edward Neuman Department of Mathematics
• MATH 337, by T. Lakoba, University of Vermont 69 7 The shooting method for solving BVPs
• MATH 121.C Calculus III / Spring 2012 Textbook: Calculus (Early transcendentals), by J. Stewart, 6th Ed.
• Numerical methods for distributed models T.I. Lakoba
• Computation Visualization
• MATH 337, by T. Lakoba, University of Vermont 125 14 Generalizations of the simple Heat equation
• Numerical methods for local models T.I. Lakoba
• MATH 337, by T. Lakoba, University of Vermont 167 16 Hyperbolic PDEs
• MATH 337, by T. Lakoba, University of Vermont 79 8 Finite-difference methods for BVPs
• MATH 337, by T. Lakoba, University of Vermont 94 9 Concepts behind finite-element method
• MATH 337, by T. Lakoba, University of Vermont 101 11 Classification of partial differentiation equations (PDEs)
• An International Journal computers &
• CALCULUS III, MATH 121 Instructor: Dr. T.I. Lakoba Preparation sheet for Test 1
• Applied Numerical Mathematics 57 (2007) 1935 www.elsevier.com/locate/apnum
• MATH 337, by T. Lakoba, University of Vermont 107 12 The Heat equation in one spatial dimension
• Edward Neuman Department of Mathematics
• MATLAB Primer, Third Edition, Kermit Sigmon
• Edward Neuman Department of Mathematics
• MATH 337, by T. Lakoba, University of Vermont 21 3 Multistep, Predictor-Corrector, and Implicit methods
• MATH 337, by T. Lakoba, University of Vermont 1 0 Preliminaries
• Computation Visualization
• A Practical Introduction to Matlab (Updated for Matlab 5)
• MATLAB has many tools that make this package well suited for numerical computations. This tutorial deals with the rootfinding, interpolation, numerical differentiation and integration and
• MATH 337, by T. Lakoba, University of Vermont 118 13 Implicit methods for the Heat equation
• BIT 33 (1993), 17~175. VARIABLE STEP SIZE DESTABILIZES THE
3D Solid Model Reconstruction

Correct and consistent representations of three-dimensional objects are required by applications as varied as modeling, simulation, visualization, CAD/CAM, and finite element analysis. However, most acquired 3D models, whether created by hand or by using automatic tools, contain errors and inconsistencies. They contain wrongly-oriented, intersecting, or overlapping polygons, cracks, and T-junctions; polygons are missing; and topological information is inconsistent. Solid modeling information is rarely available. Rather, polygon soup seems to be the rule. Problems arise from many sources, including designer errors or software errors in the modeling tool. These errors can be compounded by specific data exchange problems: (i) automated transfer between CAD formats (e.g., IGES, STEP, DXF, binary files from CATIA or AutoCAD), or between B-spline- or NURBS-based formats, or (ii) geometric transformation into an engineering analysis system (e.g., a triangular surface mesh). Currently, techniques to reconstruct manifold models from acquired 3D models are not very robust. Existing methods either require user input, assume that the polygons in the input set are consistently oriented, assume that most polygons are orthogonal, or use scene-relative tolerances to "fill over" cracks in the model. Boundary-based approaches that try to infer solid structures from how input polygons mesh together are likely to perform incorrectly in the presence of non-manifold geometry. Other approaches based on merging features within some tolerance of each other do not work well when the sizes of the errors are larger than the smallest feature in the model. In this case, no suitable tolerance can be chosen that both fills cracks and preserves small features. We are developing a system that automatically reconstructs a consistent 3D solid model and boundary representation from an arbitrary set of polygons (polygon soup).
The system partitions space into convex polyhedral regions separated by planes supporting the input polygons. Solid regions are identified by solving a linear system of equations derived from rules based on the opacities of boundaries between regions: 1) two adjacent cells sharing a mostly transparent boundary are likely to have the same solidities (i.e., if one is solid, then the other is too), 2) two adjacent cells sharing a mostly opaque boundary are likely to have opposite solidities (i.e., if one is solid, then the other is not), and 3) the unbounded cells (i.e., the ones on the outside that contain a point at infinity) are not solid. Once solid regions have been identified, the system can output consistent solid model and boundary representations without intersecting, coplanar, or unconnected polygons. This is joint work with T.M. Murali.
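The three boundary-opacity rules can be sketched in miniature. The actual system solves a linear system over all cells, which lets it weigh conflicting evidence; the sketch below instead treats the rules as hard constraints and propagates solidity outward from the unbounded cells. The cell graph, opacity values, and the `classify` helper are all invented for illustration:

```python
from collections import deque

def classify(num_cells, edges, outside_cells):
    """Assign a solid/not-solid label to each cell.

    edges: (cell_a, cell_b, opaque_fraction) tuples, one per shared boundary.
    Rule 3: unbounded (outside) cells start out not solid.
    Rules 1-2: a mostly transparent boundary copies the neighbor's label;
    a mostly opaque boundary flips it.
    """
    solid = {c: False for c in outside_cells}   # rule 3
    adj = {c: [] for c in range(num_cells)}
    for a, b, opaque in edges:
        adj[a].append((b, opaque))
        adj[b].append((a, opaque))
    queue = deque(outside_cells)
    while queue:                                # breadth-first propagation
        c = queue.popleft()
        for nbr, opaque in adj[c]:
            if nbr not in solid:
                # opaque boundary -> opposite solidity (rule 2),
                # transparent boundary -> same solidity (rule 1)
                solid[nbr] = (not solid[c]) if opaque > 0.5 else solid[c]
                queue.append(nbr)
    return solid

# Outside cell 0, an opaque wall into cell 1, a transparent gap from 1 to 2:
print(classify(3, [(0, 1, 0.9), (1, 2, 0.1)], [0]))  # {0: False, 1: True, 2: True}
```

The least-squares formulation in the paper is what makes the real method robust: when rules disagree (a boundary that is half opaque), the linear system finds the labeling that violates the fewest rules, which pure propagation cannot do.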
WotLK talent Preview/Discussion - Page 175 - Warriors I believe Blizzard has figured out, though I think they probably had an inkling before, that holding all talents to the same standard is folly. By nature there must be iconic, spec-defining talents that break out of the "1% per point" paradigm. Having a baseline of "1% per point" works as a great foundation to build off of, but true balance can't be obtained by rigidly sticking to that concept. Talents like Bloodthirst and Titan's Grip are iconic, just as Shadow Form, Earth Shield, Fel Guard, Crusader Strike, Bestial Wrath, Water Elemental, Shadowstep and Mangle (and many others) are. Whether providing utility, damage, or other effectiveness, these talents can't play by the normal rules. Good gameplay must trump everything except class/spec balance. It's heartening to see that Blizzard is willing to reverse nerfs made for the sake of arbitrary rules. Whether or not we'll see the removal of the 5% penalty probably rests more on how the armor buff to bosses affects us relative to other melee/physical DPS.
Linear Interpolation FP1 Formula

Re: Linear Interpolation FP1 Formula
F could do that to me?

Re: Linear Interpolation FP1 Formula
I can not predict what she would do next.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
Personally I think she would talk to me, act as if nothing happened, and she would probably say she has to go somewhere only for me to never see her again (again). Except this time I won't be as intimidated by her as I was a year ago.

Re: Linear Interpolation FP1 Formula
Is that better? You can not even imagine her being nice to you. Not in your wildest dreams. Ignore her if she is there.

Re: Linear Interpolation FP1 Formula
I will try...

Re: Linear Interpolation FP1 Formula
Luke: Okay, I will try...
Yoda: Do or do not, there is no try.
Let's follow Yoda here. When you see that potato face train your stomach to become nauseous.

Re: Linear Interpolation FP1 Formula
Maybe I don't need to go to that kind of length, but I could instead just do a brief nod, as if I am not surprised by her return -- being indifferent to her presence.

Re: Linear Interpolation FP1 Formula
Evidence suggests she will be glad to ignore you so if you want to please her do the same.

Re: Linear Interpolation FP1 Formula
That behaviour still confuses me. Hanh tells me it was because she didn't want a relationship with me (no kidding!), so chose not to tell me because she was too embarrassed (what?).

Re: Linear Interpolation FP1 Formula
She still sounds kooky, perhaps it is not even her.

Re: Linear Interpolation FP1 Formula
Yes, it could be anyone...

Re: Linear Interpolation FP1 Formula
Might be somebody much better.

Re: Linear Interpolation FP1 Formula
Given my recent luck, I highly doubt it... and given that it is a maths module, it is most probably a guy.

Re: Linear Interpolation FP1 Formula
That would be the best result. You would not notice him. That way you will concentrate better.

Re: Linear Interpolation FP1 Formula
I guess so...

Re: Linear Interpolation FP1 Formula
Cmon, snap out of it. You are pining away for some girl... Forget em! There are a million more where she came from.

Re: Linear Interpolation FP1 Formula
It doesn't seem like a million. But, maybe it will do at UCL.

Re: Linear Interpolation FP1 Formula
Maybe you should try a different sort of girl.

Re: Linear Interpolation FP1 Formula
What sort of girl?

Re: Linear Interpolation FP1 Formula
One that is not English. I understand that Britain has the highest rate of immigrations of any country in the world. Twice the rate of the US. There are lots of eastern European girls supposedly

Re: Linear Interpolation FP1 Formula
Well, adriana was not English... and to be fair, F was actually half Scottish half Armenian, whilst C was of Irish descent. R is Scottish, IY was Greek, Hannah is Bosnian......

Re: Linear Interpolation FP1 Formula
Were they born in England?

Re: Linear Interpolation FP1 Formula
I am not sure.

Re: Linear Interpolation FP1 Formula
Then I would suggest staying away from the brainy types. I never did particularly well with them.

Re: Linear Interpolation FP1 Formula
C and IY weren't the brainy type. adriana, not sure if I would classify her as the brainy type.
San Rafael, CA Prealgebra Tutor
Find a San Rafael, CA Prealgebra Tutor

...I have been an independent study teacher for over 10 years and have extensive experience working with elementary students in math. I use a variety of resources and teaching styles depending on the individual needs of the child. I have a California Multiple Subjects Credential.
17 Subjects: including prealgebra, reading, English, GED

...The multiple-choice portion reflects spelling, grammar, and punctuation, and how to correct sentences and paragraphs using those mechanics of writing. The essay is more open-ended, and asks the test-taker to comment on the assigned question or topic. An essay is built upon the following layout: Introduction, Main Body, Conclusion.
17 Subjects: including prealgebra, English, reading, grammar

...The subjects have ranged from pre-algebra to Calculus II. Along with taking my classes, I am teaching Algebra 1 this Fall at CSUEB. A lot of people know Math, a lot of people can tutor Math, but for me it's about the individual needing help.
8 Subjects: including prealgebra, reading, algebra 1, algebra 2

...I've supported math students in their studies from addition/subtraction to algebra, geometry, and through high school-level calculus and statistics, and science students from elementary through chemistry, physics, and biology. I volunteered as a beginning clarinet teacher at local elementary schools...
34 Subjects: including prealgebra, Spanish, reading, writing

...I also have experience in the "Lindamood-Bell" literacy, comprehension, and math techniques. I graduated Summa Cum Laude from Creighton University with a B.S. in Environmental Science and Spanish. I love teaching and have experience with a wide range of students.
24 Subjects: including prealgebra, reading, Spanish, chemistry
User Dick Palais
website: vmm.math.uci.edu
location: Univ. of California at Irvine
age: 82

I'm a Professor at UC Irvine, but spent most of my career at Brandeis. My research areas: Differential Topology, Transformation Groups, and Global Analysis. Recently I developed a math visualization program, 3D-XplorMath, freely available at http://3D-XplorMath.org, and a companion website, the Virtual Math Museum at http://VirtualMathMuseum.org. Last year I co-authored a differential equations text with my son Bob, most of which is downloadable from http://ode-math.com.

Recent activity:
• Answered: Notes for Bott's 1963 lectures on Morse theory
• Answered: Any good books on numerical methods for ordinary differential equations?
• Commented on "Proof synopsis collection": @mathahada Where do you see a geometric series in my proof? (One of the points I had in publishing the above proof was to show that the geometric series argument (used since Banach's original proof) is really not necessary.)
• Commented on "Can the level set of a critical value be a regular submanifold?": You probably should amend the statement of the theorem to say that a non-empty level set of a regular value of a smooth function f:M→ℝ on a smooth manifold is a regular submanifold of codimension one. (That takes care of the problem with f identically zero.)
• Answered: Proofs for doubly ruled surfaces
• Answered: Definition of Sobolev spaces as a space of sections of certain type
• Commented on "Collapsing of Riemannian manifolds with a group action": "...Consider the fixed point set F, it is of course a submanifold of M by the slice theorem." Note that it is really simpler than that; in geodesic coordinates at a point p of F, the fixed point set is locally the linear subspace left fixed by the linearized action at p.
• Commented on "First known proof of $\sqrt 2$ is irrational with prime factorization?": You're right Franz, it doesn't. It's just that there seems to be a belief that you NEED unique prime factorization to prove the irrationality of non-square integers, and when I first saw this (much more elementary) proof I found it an eye-opening experience.
• Answered: First known proof of $\sqrt 2$ is irrational with prime factorization?
• Commented on "Area of union of random circles in a plane": You will probably get a more "natural" answer if you choose a "torus", i.e., identify opposite edges of a square, to eliminate edge effects.
• Answered: Square root of a positive $C^\infty$ function. (Comment: Yes, and functions of this type are discussed in section 2 of the reference I gave in my answer.)
• Answered: Measure theory treatment geared toward the Riesz representation theorem
Summary: Contemporary Mathematics
Linearization of Local Cohomology Modules
Josep Àlvarez Montaner and Santiago Zarzuela

Abstract. Let k be a field of characteristic zero and R = k[x_1, ..., x_n] the polynomial ring in n variables. For any ideal I ⊆ R, the local cohomology modules H^i_I(R) are known to be regular holonomic A_n(k)-modules. If k is the field of complex numbers, by the Riemann-Hilbert correspondence there is an equivalence of categories between the category of regular holonomic D_X-modules and the category Perv(C^n) of perverse sheaves. Let T be the union of the coordinate hyperplanes in C^n, endowed with the stratification given by the intersections of its irreducible components, and denote by Perv_T(C^n) the subcategory of Perv(C^n) of complexes of sheaves of finite-dimensional vector spaces on C^n which are perverse relative to the given stratification of T. This category has been described in terms of linear algebra by Galligo, Granger and Maisonobe. If M is a local cohomology module H^i_I(R) supported on a monomial ideal, one can see that the equivalent perverse sheaf belongs to Perv_T(C^n). Our main purpose in this note is to give an explicit description of …
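For readers meeting the notation for the first time: the modules H^i_I(R) in the abstract admit standard concrete descriptions. This is textbook material, not part of the note itself; here f_1, …, f_s denote any elements generating I up to radical.

```latex
% Direct-limit description of local cohomology with support in I:
\[
  H^i_I(R) \;\cong\; \varinjlim_{n}\, \operatorname{Ext}^i_R\!\bigl(R/I^n,\ R\bigr),
\]
% equivalently, the cohomology of the Cech complex built on localizations
% at the generators f_1, \dots, f_s of I (up to radical):
\[
  H^i_I(R) \;\cong\; H^i\!\Bigl(\, 0 \to R \to \bigoplus_{j} R_{f_j}
    \to \bigoplus_{j<k} R_{f_j f_k} \to \cdots \to R_{f_1 \cdots f_s} \to 0 \,\Bigr).
\]
```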
Professor Brian Greene, Columbia University
From the show Beyond the Universe - Multiverses and More

Ben - Parallel Universes sound like an idea straight out of science fiction – but they are genuinely based in fact. The best mathematical models we have to explain the universe often predict some form of multiverse may exist as well. Brian Greene, professor of physics and mathematics at Columbia University, examines 9 different multiverse proposals in his new book, The Hidden Reality. I caught up with him at the Cambridge Science Festival...

Brian - We're asking one of the grandest of all questions, which is, "Is our universe the only universe?" And that at first sight is a strange question, because we're used to thinking of "universe" to mean everything, the totality, but a lot of research from a variety of different directions over the last few decades has suggested that what we long thought to be everything may not be everything. It may be a piece of a larger whole, and that larger whole may contain other universes, and that's where we come to the idea of multiple universes.

Ben - There are actually quite a few different theories born largely from the maths. How do we try and tie them all together? Are they at the moment just competing ideas, or are they all actually pieces of one large jigsaw?

Brian - Well, in my book I actually describe 9 different variations on the theme of multiverse, and there are relationships between them, but we have not yet developed any kind of meta-multiverse framework in which all of these proposals would sit. They're not necessarily mutually exclusive. We don't know if any of these are right. We will only know that when there is some experimental or observational evidence, but we're...

Ben - Following the maths is obviously a very good way to get at what may be objectively the truth, but trying to understand these in human terms is very difficult. It's very counter-intuitive.
How do you get around trying to explain to people what the results of these numbers might actually mean?

Brian - Well, a number of the multiverse proposals are not that hard to grasp. For instance, we all know of the Big Bang, right? We have been trying to understand the Big Bang with greater precision in recent decades than we did in the past. In essence, we've been trying to fill in a missing piece of the Big Bang theory, which is what started the bang. What was the bang? What started space to undergo this outward swelling? We now have a proposal on the table. It's called inflationary cosmology. The name is not all that important, but I bring it up because when we study the math of this proposal, it suggests that the Big Bang may not have been a unique one-time event. There may be many Big Bangs, each giving rise to its own swelling realm of space. It's as if our universe is a growing bubble in a grand cosmic bubble bath with other universes. That is a strange idea, but it's not that hard to wrap your mind around this possibility, and this is one of the proposals that I discuss in the book.

Ben - As well as looking at things on the grand scale of the entire universe, we have to consider things at the subatomic scale, smaller than we currently know about. How does that fit into the idea of looking at parallel or multiple universes?

Brian - Well, you wouldn't think that it would. Studying tiny things -- molecules, atoms, subatomic particles -- would not suggest that you were en route to a theory of Parallel Universes. But, surprisingly, we have found that it does lead to that possibility from a number of different perspectives. Let me just give you one. Quantum mechanics, the study of the smallest ingredients in the world, broke the older Newtonian model of the world by saying that you can't predict with absolute certainty the result of any experiment. You can only predict the probability of getting one outcome or another.
The electron, say, has a 50% chance of being here and a 50% chance of being over there. Now, that's weird enough – a world governed by probabilities – but a puzzle that still persists to this day is: when you do a measurement of the electron, you find it at one location or another, so what happened to the other possible outcome? One suggestion is that it happened too. You saw the electron over here in one universe, but the maths suggests that there was a copy of you who sees the electron over there in another universe. Two universes coming from the probabilistic framework of the quantum world – multiple universes.

Ben - So would the implication of there being multiple universes be that there are actually lots and lots of me, pointing lots and lots of microphones at lots and lots of you, all throughout this multiverse and with very, very slight differences between them?

Brian - It's quite possible. In some of the multiverse scenarios, exactly that would happen. For instance, in perhaps the simplest of all multiverses, it comes from imagining that space goes on infinitely far. We don't know that it does. Maybe it curves back on itself like the surface of the Earth, but if it does go infinitely far, there is a breath-taking conclusion along the lines of what you just mentioned, which is this: in any finite region of space, matter can only arrange itself in finitely many different ways. It's like if you take a deck of cards -- this is my favourite analogy to describe this -- if you shuffle the deck, the cards come out in different orders. But there are only finitely many different orders for the cards, so if you shuffle the deck enough times the order of the cards must repeat. No way around it. Similarly, as space goes on infinitely far, the order of the particles, region by region by region, must repeat too. There just aren't enough different arrangements to go around. Now you and I, we're just an arrangement of particles.
If the arrangement repeats out there, then we're having this conversation out there. And like you say, it's even easier for the particle arrangement to almost, but not exactly, repeat. That would mean that perhaps I'm interviewing you in one of those universes. So it's a startling idea, but it comes from a simple assumption -- space goes on infinitely far -- together with the hidden assumption that the laws of physics that we know about here are the laws of physics everywhere, so we can actually say something sensible about what happens out there. But under those mild assumptions, you come to this startling conclusion.

Ben - How can other areas of science actually play a part? We've already mentioned cosmology, we've mentioned astronomers, we've mentioned particle physics. How are these different groups all feeding in to find the same answer?

Brian - Well, I think the different groups play different but overlapping and complementary roles. When we talk about multiple Big Bangs, this comes from inflationary cosmology, which makes some predictions that observational astronomers can look to the sky and try to test. Inflationary cosmology says some very definite things about the microwave background radiation. This is heat left over from our Big Bang. It speaks to tiny temperature differences in the sky that inflationary cosmology implies should be there. The observational astronomers turn telescopes skyward, and they have found those tiny temperature differences in the sky, confirming one prediction of this approach. Then, when those ideas also suggest something else that may seem more far out, like multiple universes, we're compelled to take that idea seriously.

Ben - What do you think will be the next stage? What do we need to do to get a bit further with this work?

Brian - Well, I think there are two major directions. One, we need to understand the mathematical underpinnings of all of these ideas with yet greater precision.
That is vital in order that we can make more precise statements about what experimenters and observers of astronomy should find. Then, on the experimental and observational front, we need to keep pushing onward. I mean, the Large Hadron Collider is a device that may give us a lot of insight in the coming years. Some of the parallel universe proposals do come out of string theory, and we need to see whether string theory is right or wrong. There's at least a chance that the Collider could give us some insight by looking for supersymmetric particles, a class of particles that string theory says should be out there, but that we've not seen. The idea of extra dimensions also comes from string theory, and the Collider actually has a chance of finding them. How? Slam two protons together; the equations show that some of the debris created in that high-energy collision can be ejected out of our dimensions into the others. How would you notice that? The debris would take away some energy, which means our detector would measure less energy after the collision than before. So there's a real possibility for some interplay between experiment, observation, and theory, and they all need to go forward hand in hand.
Find the total amount if you deposit $500 at a rate of 5% for two years using simple interest.
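A quick sketch of the arithmetic behind this question: simple interest does not compound, so the total amount is A = P(1 + rt). The function name here is mine, not from the original page:

```python
def simple_interest_total(principal, annual_rate, years):
    # Simple (non-compounding) interest: A = P * (1 + r * t)
    return principal * (1 + annual_rate * years)

# $500 at 5% per year for two years:
print(simple_interest_total(500, 0.05, 2))  # 550.0
```

The deposit earns $25 of interest each year (5% of $500), so after two years the total is $550.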
Parallel computers 'evolutionize' research

A major research trend is harnessing advanced computers to complement theory and experiment. Advanced computing allows scientists to conduct experiments that could not otherwise be done, to test possible experiments before investing the time and money to physically carry them out, and to create models of complex phenomena.

Fueling the growth of scientific computing are the rapid expansion and availability of parallel computing facilities, such as Argonne's Chiba City, a 512-processor parallel computer based on the Linux operating system. A typical desktop computer has only one central processing unit. A parallel computer, however, has many processors that coordinate to work on smaller parts of a larger problem. Modern technology can even link computers around the world, providing still greater power to solve complex problems (see Globus Toolkit enables Grid computing).

Argonne computer scientist Mike Minkoff sees scientific computing as a natural evolution of science. "Research," he said, "combines three activities: experimental observation; the development of mathematical models, such as Newton's laws of motion, to describe the observations; and computation to test the models by applying them to new experimental observations." In the past 20 years, the computer has given experimentalists more direct feedback, partly by speeding computation and lowering the costs of solving larger problems and partly by enabling researchers to develop and test models quickly.

Modeling the "Perfect Storm"

John Taylor of Argonne's Regional Climate Center develops large-scale models to estimate regional impacts of climate change. He focuses on the American Midwest and Great Plains, but his tools and techniques have been applied to other regions. "Global climate models don't do a good job of predicting regional climate," Taylor said, "because the smallest area they examine is typically a square that measures 200 by 200 kilometers. A grid cell that large can't reveal extreme weather, which tends to be local in scale." Taylor's models use 1- to 10-km cells. "At this level of detail," he said, "the model can include specific regional features--mountains, valleys, bodies of water--that shape local winds and precipitation. You begin to see extreme events, such as more intense rainfall and wind storms." Taylor's group has developed a model of the "Perfect Storm," which hit the north Atlantic in October 1991 and subsequently inspired a best-selling novel and a Hollywood movie. At 20-km resolution, the model revealed a second hurricane that weather services never reported. Because computing time rises sharply as resolution increases, regional climate modeling requires considerable computing power. To meet this requirement, Taylor's group uses Argonne's Chiba City. "The calculations scale well," he said, "and we can access a large number of processors to perform the runs cost effectively. Argonne is one of the few places in the world with a large-scale cluster testbed available for this kind of research." His research is funded by DOE's Office of Biological and Environmental Research and the U.S. Environmental Protection Agency.

Mapping reactor behavior

David Weber's group in Argonne's Reactor Analysis and Engineering Division is working with the Korea Atomic Energy Research Institute (KAERI) and Purdue University to model the core of pressurized-water reactors, the world's most common type of commercial reactor. Their work, funded by DOE's Office of Nuclear Energy, could improve reactor performance, extend operating lifetime and increase output without compromising safety. The project uses advanced computing tools to predict fuel and coolant temperatures throughout the core during normal and abnormal conditions.
Because of the expense of operating massively parallel computers and the large size of the model, these tools will be used to improve and verify smaller, more economical models for routine use. "This project is using massively parallel computers to look at the complex feedback relationships between reactor fuel and coolant in an integrated reactor system," said Weber, Reactor Analysis and Engineering Division director. Nuclear reactors use fission to produce heat. The heat boils water to produce steam, and the steam turns a turbine, which generates electricity. Heat generation in the core depends on two phenomena: Neutron flux, the rate at which the core emits neutrons, and Neutron cross section, the probability that neutrons strike fertile nuclei to drive the chain reaction. Higher neutron flux can raise fuel temperature; but higher fuel temperature can reduce neutron cross section, which lowers the fission rate and may reduce fuel temperature. "Feedback is balanced when the reactor operates normally at constant power," Weber said, "because heat production and heat removal are equal. But when something changes--an operator adjusts the power or some short-term incident occurs--the system's response depends on feedback and operator actions." The Argonne-KAERI-Purdue collaboration is building on previous Argonne work, which churned through a 240-million-cell model of a reactor core in 58 hours on a 200-processor parallel computer at IBM's SP Benchmark and Enablement Center, Poughkeepsie, N.Y. "The earlier work," he said, "looked only at the temperature of the coolant as it flows through the core. This project incorporates details of neutron flux and fuel temperatures that we couldn't attempt without massively parallel computers." Additional details include turbulent mixing when the water flow encounters components in the core. 
The mixing is beneficial, Weber said, because it creates more homogeneous coolant temperature and enhances heat transfer, but the turbulence makes the pumps work harder. "Our work may help redesign components to promote mixing while reducing pumping losses." Combustion modeling Mike Minkoff collaborates with chemist Al Wagner to model the basic chemical reactions of burning fuels. They are improving methods for calculating combustion-rate coefficients, which industry uses to model cleaner-burning, more efficient energy systems. "Coefficients for reactions involving simple chemical species--such as oxygen reacting with hydrogen--can be calculated precisely on a desktop PC," Minkoff said. But as molecular size increases, the calculations rapidly outstrip the capabilities of most massively parallel computers. Minkoff and Wagner use a matrix-based approach that allows parallel computers to calculate systems involving molecules containing many atoms. The elements of the matrix are combinations of kinetic and potential energy associated with the molecules' relative proximity and orientation. The computations involve multidimensional space and identify the most stable energy states among the molecules. Key to their research is PETSc, the "Portable, Extensible Toolkit for Scientific Computation" developed by Argonne's Mathematics and Computer Science Division to solve large-scale, specialized problems. Minkoff and Wagner start by modeling reactions of simple, two-atom molecules and test their results against the precise mathematical solution. They then expand their model to include more complex molecules containing many atoms and compare their results with those from the traditional approach, which uses statistical methods to estimate the coefficients. Their work is funded by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences and Office of Advanced Scientific Computing Research.
Diving into the nucleus Argonne physicists Steve Pieper and Bob Wiringa work deeper inside the atom. They use parallel supercomputers to calculate the forces that bind together nucleons--protons and neutrons--to form atomic nuclei. Their goal is to develop theory that matches observation. "Good theory has to explain, for example, why there's no stable eight-body nucleus," Wiringa said. "This imposes a limit on the nuclei created in the earliest moments of the Big Bang. In the beginning, there were no nuclei with more than seven nucleons." A key challenge is to find models that work for both neutron-rich nuclei--those with a high ratio of neutrons to protons--and for those with equal numbers. "If we want to compute the forces in neutron stars, which are essentially all neutrons," Wiringa said, "we need to understand neutron-rich nuclei like helium-10, which is unstable." With two protons and eight neutrons, helium-10 is the most neutron-rich nucleus known. Pieper and Wiringa study nuclei with five to 10 nucleons. Their work begins with a model that calculates the binding energies for two-body nuclei. Known from thousands of experiments involving collisions, binding energies are the precise energies required to break a nucleus apart. Each nucleus has more than one binding energy, depending on whether it is at its "ground" or most stable state, or whether it has been excited to an intermediate state by an interaction that imparted some energy but not enough to break it apart. Pieper and Wiringa's two-body model is the "Argonne potential," published in 1995 by Wiringa and colleagues from Flinders University, South Australia, and Old Dominion University, Va. In 2000, their paper was the world's most cited theoretical nuclear physics publication. 
To study nuclei with more than two nucleons, Pieper and Wiringa extend the two-body model by adding the "Illinois family," a collection of three-body models they developed with Vijay Pandharipande of the University of Illinois at Urbana-Champaign. "The required computing power," explained Pieper, "increases exponentially with the number of nucleons. Calculations for up to six bodies can be done on a modern PC. For seven or eight, we can use Chiba City comfortably, but it's a stretch for nine. For 10, we use the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory." To compute a single 10-body energy state, 500 processors work in parallel at 250 million operations a second for eight to 15 hours. "And," said Pieper, "we have to test a whole family of energy states." Their recent work, funded by DOE's Nuclear Physics Division, provides a fairly consistent picture of binding in nuclei with up to 10 nucleons. Their next step, as computing power continues to expand, will be to extend their work to larger nuclei. For more information, please contact David Baurac.
Atherton Prealgebra Tutor Find an Atherton Prealgebra Tutor ...I have taught economics, operations research and finance related courses. I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong background in statistics and econometrics. 49 Subjects: including prealgebra, calculus, physics, geometry ...I received my bachelor's degree in economics/finance from the University of Colorado, and I was in the honor's program and a teaching assistant while there. While mathematics, life sciences, history, English/literature, and SAT and ACT test preparation are my specialties, I have well-rounded kno... 52 Subjects: including prealgebra, English, chemistry, reading ...My name is Eduardo. I was born and raised in Puerto Rico, where I completed my bachelor's degree in Industrial Biotechnology at the University of Puerto Rico at Mayagüez. During college, I was recruited by the National Institutes of Health's Minority Access to Research Careers (NIH-MARC) program in order to support and accelerate my transition into my Ph.D. studies and a career in 13 Subjects: including prealgebra, Spanish, biology, geometry ...I am looking to build longer term relationships with a few students. I am most comfortable as a Chemistry or English tutor, though I have also taught algebra and physics, done some SAT/ACT/GRE tutoring and have performed extensive college application editing for many of my students and friends (... 22 Subjects: including prealgebra, chemistry, reading, English UPDATE, April 3, 2014 I am not currently accepting new students. I expect to have openings again around April 27. Inquiries in advance are welcome. 17 Subjects: including prealgebra, chemistry, statistics, geometry
Permutation Simple Question July 15th 2008, 10:44 PM #1 Junior Member Jun 2008 Permutation Simple Question "How many permutations exist of the letters in the word COMPUTER?" I said 8! because there are 8 letters and so it should just be n! where n is the number of choices. Right? "How many permutations end in a vowel?" I thought 7!3! but I'm not certain. Because there are 3 vowels and I want one of them in the last position, and the other 7 letters can be whatever else is left.. Is this correct? If not, can someone tell me what I am doing wrong? "How many permutations exist of the letters in the word COMPUTER?" I said 8! because there are 8 letters and so it should just be n! where n is the number of choices. Right? "How many permutations end in a vowel?" I thought 7!3! but I'm not certain. Because there are 3 vowels and I want one of them in the last position, and the other 7 letters can be whatever else is left.. Is this correct? If not, can someone tell me what I am doing wrong? How many end in E? How many end in U? How many end in O? How many does that make? I don't know how to figure that out. 7... Or is this a trick question? I already said 7 though.. Yes, but I implied there were 7 choices before the vowel and was wondering about the vowel part specifically, which is why I said 7! originally... it was the 3! that wasn't sitting well with me. There's a miscommunication here, and I'm sorry for that. anyway, I am going with 7! * C(3,1) ... I will check with someone in math lab tomorrow. Sorry for wasting your time... Yes, but I implied there were 7 choices before the vowel and was wondering about the vowel part specifically, which is why I said 7! originally... it was the 3! that wasn't sitting well with me. There's a miscommunication here, and I'm sorry for that. anyway, I am going with 7! * C(3,1) ... I will check with someone in math lab tomorrow. Sorry for wasting your time... How many end in E: 7! How many end in U: 7! How many end in O: 7! Total: 7! + 7! 
+ 7! = 3*7! This is actually the same as 7! * C(3,1), but I'm not sure that you got the correct answer for the correct reason. Isn't this a combination of "k-permutations" and the addition principle? How many end in E? If we set aside the letter E, we have 7 letters left, and seven letters can be permuted in 7! ways. The same goes for O and U. So 7! + 7! + 7! = 15120 ways of permuting the word COMPUTER so that it ends with a vowel. If the question was "How many ways can the word be permuted so that it does NOT end with a vowel?" you would just take the total permutations, 8! = 40320, and subtract 15120: 40320 - 15120 = 25200. I HOPE this is right, I am very new at discrete math though. Please correct me if I'm wrong.
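The counts discussed in this thread are small enough to verify by brute force: 8! total arrangements is only 40320, so we can simply enumerate every permutation and count. A minimal check:

```python
from itertools import permutations
from math import factorial

word = "COMPUTER"  # 8 distinct letters
vowels = set("OUE")

perms = list(permutations(word))
total = len(perms)                                    # 8! = 40320
end_vowel = sum(1 for p in perms if p[-1] in vowels)  # 3 * 7! = 15120

assert total == factorial(8)
assert end_vowel == 3 * factorial(7)
print(total, end_vowel, total - end_vowel)
```

This confirms both the 7! * C(3,1) argument and the complement count 8! - 3*7! = 25200 for permutations not ending in a vowel.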
Math Forum Discussions - Re: Mathematics and the Roots of Postmodern Thought Date: Mar 30, 2013 9:39 AM Author: Jesse F. Hughes Subject: Re: Mathematics and the Roots of Postmodern Thought david petry <david_lawrence_petry@yahoo.com> writes: > As I have argued previously, if we treat mathematics as a science > and accept falsifiability as the cornerstone of mathematical > reasoning, then Godel's theorem is utterly utterly trivial, while at > the same time, his proof of the theorem is not a valid proof. You've never given a clear explication of what falsifiability means in this context. Jesse F. Hughes "[M]eta-goedelisation as the essence of the globalised dictatorship by denial of sense." -- Ludovico Van makes some sort of point.
Modelling the Shear-Tension Coupling of Woven Engineering Fabrics Advances in Materials Science and Engineering Volume 2013 (2013), Article ID 786769, 9 pages Research Article Modelling the Shear-Tension Coupling of Woven Engineering Fabrics ^1School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK ^2Department of Materials Science and Engineering, Seoul National University, Seoul 151-742, Republic of Korea Received 2 January 2013; Accepted 10 February 2013 Academic Editor: Abbas Milani Copyright © 2013 F. Abdiwi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. An approach to incorporate the coupling between the shear compliance and in-plane tension of woven engineering fabrics, in finite-element-based numerical simulations, is described. The method involves the use of multiple input curves that are selectively fed into a hypoelastic constitutive model that has been developed previously for engineering fabrics. The selection process is controlled by the current value of the in-plane strain along the two fibre directions using a simple algorithm. Model parameters are determined from actual experimental data, measured using the Biaxial Bias Extension test. An iterative process involving finite element simulations of the experimental test is used to normalise the test data for use in the code. Finally, the effectiveness of the method is evaluated and shown to provide qualitatively good predictions. 1. Introduction Press forming of woven engineering fabrics can be used to create complex geometries, suitable for subsequent liquid moulding and cure for the manufacture of composite parts [1]. 
During the press-forming process, in-plane tension is generally used to mitigate process-induced defects such as wrinkling and, to some degree, to control the final fibre orientation distribution across the component after forming [2–4]. Tension is controlled through boundary conditions applied to the perimeter of the material using a blank-holder [3–7]. The deformation kinematics of woven engineering fabrics during the forming process is dominated by trellis shear. However, the tension along tows that occurs as a result of the blank-holder load applied around the perimeter of the forming blank, and due to the forming process itself, can influence the shearing resistance of the woven fabric [2, 8, 9]. As such, consideration of the shear-tension coupling when formulating constitutive models could result in improved accuracy, both in terms of shear angle and wrinkling predictions. With the exception of Lee et al. [8, 9], current constitutive models for woven engineering fabrics assume no coupling between the shear resistance and the tension in the fabric, despite strong experimental evidence showing that such a coupling does exist, for example, [10–13]. This paper describes a method of introducing a shear-tension coupling into finite element (FE) simulation predictions [14]. Experimental data measured recently using a novel shear test for woven engineering fabrics, the Biaxial Bias Extension (BBE) test [10], is used to fit model parameters, and the predictions of the model are then evaluated using two simple numerical tests. The structure of the remainder of this paper is as follows. A brief description of the FE model used in the fitting process is given, the method of implementation of the shear-tension coupling in the constitutive model is described, and the iterative procedure used to fit the model to experimental results is discussed. Finally, predictions of the model are compared against experimental shear force measurements produced using the BBE test.
2. Finite Element Modeling Strategy The commercial FE code Abaqus Explicit has been used throughout this investigation. The FE model uses the same combination of mutually constrained truss elements (representing the high tensile stiffness fibres) and membrane elements (representing the shear properties of the fabric) as that described in [15] (see Figure 1). The mesh is automatically generated using an in-house mesh generation code. A simple approximate homogenisation method has been used to calculate truss dimensions and mechanical properties, using the relation (1): E_f A_f = E_t A_t, where A_f is the cross-sectional area per unit length of the ends of either the warp or weft tows of a typical glass fabric (e.g., ~0.000086 m^2 per metre in Harrison et al. [10]) and A_t is the combined cross-sectional area per unit length of the truss elements in the mesh; E_f is the tensile stiffness of typical glass tows (e.g., 30–73 GPa [16–21]) and E_t is the stiffness of the truss elements used in the FE mesh. The truss properties chosen for the truss elements here (stiffness = 6 GPa, length = 0.01237 m, circular cross-sectional area 1×10^−6 m^2, giving an area per unit length, A_t, of 0.000082 m^2 per m) produce a sheet with a tensile response between about 5 and 13 times lower than that of an actual woven glass fabric, and for simplicity, the nonlinear tensile behaviour in the tows due to fabric crimp, for example, [22–24], is neglected. In this investigation, decreasing the tensile modulus of the truss elements in this way has been found to produce improved performance when modelling a shear-tension coupling and also tends to reduce simulation times when using the explicit FE method (due to the Courant stability condition). Previous researchers have also used this technique to improve computational efficiency [24, 25]. If this is done, care has to be taken to ensure that this reduction in stiffness has a negligible influence on the final complex forming simulation predictions.
For example, in one forming case study, Willems [24] found that reducing the tensile stiffness by a factor of 20 caused a 2° change in the resulting shear deformation predictions. In this investigation, as will be shown, the method of implementing the shear-tension coupling is based on the tensile strain along the fibre directions. The latter influences the coupling behaviour and is determined by both the truss properties and the mesh density. Thus, once the shear-tension coupling is calibrated to a given mesh density, any subsequent change in mesh density has to be compensated for by an appropriate change in the truss properties (either by changing the modulus or cross section of the truss elements). Given that the main aim of this work is to examine the possibility of modelling the experimentally observed coupling between in-plane tension and shear stiffness [10], more accurate modelling of the tensile response of the fabric is deferred to future work. Ideally, this will involve correctly capturing the coupling between tensile strains in the two fibre directions due to fabric crimp. The membrane elements provide no contribution to the tensile stiffness of the mesh and are only used to add shear resistance to the sheet. The membrane elements have an initial thickness of 0.0002 m with a Poisson's ratio of 0. The shear stresses within the membrane elements are modelled using an enhanced version of the shear part of the original non-orthogonal constitutive model [15, 26, 27] (S-NOCM), as discussed in the following section.
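As a quick sanity check on the truss homogenisation described above, the quoted tow and truss properties reproduce the stated "5 to 13 times lower" tensile response. The short script below is a sketch using only values given in the text; the variable names are ours.

```python
# Tensile stiffness per unit width of sheet: k = E * A, where A is the
# fibre (or truss) cross-sectional area per unit length of sheet.
E_tow_lo, E_tow_hi = 30e9, 73e9   # Pa, typical glass tow modulus range [16-21]
A_tow = 0.000086                  # m^2 per m, warp ends of the glass fabric [10]

E_truss = 6e9                     # Pa, truss modulus chosen for the mesh
A_truss = 1e-6 / 0.01237          # m^2 per m: 1 mm^2 trusses spaced 12.37 mm apart

ratio_lo = (E_tow_lo * A_tow) / (E_truss * A_truss)
ratio_hi = (E_tow_hi * A_tow) / (E_truss * A_truss)
print(round(ratio_lo, 1), round(ratio_hi, 1))  # roughly 5x to 13x softer
```

The computed ratios (about 5.3 and 12.9) match the factor-of-5-to-13 reduction stated in the text.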
The shear stress in the membrane elements can consequently be precisely controlled as a function of any of the state-dependent variables defined within the user-subroutine used to implement the constitutive model (e.g., shear angle, angular shear rate, temperature, or strain along the fibre directions). This strategy has been used recently to create a rate-dependent or viscous constitutive model for thermoplastic advanced composites [16, 28, 29]. The original implementation of the S-NOCM VUMAT user-subroutine has been modified in order to implement a shear-tension coupled version of the model, as described in the next section. 3. Implementation of Shear-Tension Coupling in the S-NOCM Implementation of the shear-tension coupled S-NOCM involves linking the shear parameters in the original S-NOCM model with the tensile stresses (or equivalently the tensile strains) acting along the warp and weft fibre directions in the fabric. Like the shear angle, the tensile strains are accessible as state-dependent variables within the Abaqus user-subroutine. In this section, a method of producing the same shear-tension coupling in the numerical model as that measured in actual woven engineering fabrics is described. The technique involves a four-stage process, as follows. 3.1. Stage One This involves simulating the BBE test; details of the actual experiments can be found in [10]. A BBE test sample with dimensions 210 × 210mm and a clamping length of 70mm is modelled (see Figure 2) using mutually constrained truss and membrane structural elements (572 truss and 264 membrane elements) as shown in Figure 1. The typical computation time for each simulation was about 10 minutes using a Dell OptiPlex 760 Intel (R) Core(TM)2 Duo CPU E7500@2.93GHz and 3.25GB of RAM running Abaqus Explicit v6.9. 
Faster simulation speeds could have been obtained using the symmetries of the test (e.g., by simulating just a quarter of the specimen), though this would require modification to the automatic mesh generator to create triangular elements along the centrelines of the specimen. Moreover, since future work will involve exploring the influence of fibre orientation variability on test results (e.g., see [30]), which would negate the existing symmetries of the test, full specimen simulations have been conducted. These have been conducted in two steps. Step one involves application of a constant transverse load, equal to the loads used in [10] (5, 37, 50, 75, and 100 N). The superscript i is the experiment number (i = 1 to 5), with each experiment using a different transverse load (i = 1 corresponds to 5 N, i = 2 corresponds to 37 N, etc.). The transverse load is applied to nodes at the edge of the central section of the right and left sides of the blank (Region C in [10]; see Figure 2). Step two involves applying a displacement-controlled boundary condition on the upper and lower centrally located node sets at the middle of the top and bottom side lengths of the blank (corresponding to the edge of Region C in [10]; see Figure 2). The corresponding experimental shear force versus shear angle curves, measured on a plain weave glass engineering fabric, were used as input curves in the standard S-NOCM to conduct these preliminary simulations, and for simplicity, the shear angle in Region A is taken from one of the central elements of Region A (see Figure 2). This approximation assumes the shear angle across all elements in Region A is uniform. In practice, a variation in the shear angle within each of the regions A, B, and C exists. It will be shown later that the size of this variation is small and depends on the shear angle and the size of the transverse load applied to the specimens. The shear force input curves are initially approximated from the axial load [10] using (2).
In Stage 4 of the fitting process, this estimate is improved using a simple normalisation. Note that contributions to the measured total axial force from the reaction force, which is caused by application of the transverse clamping load, must first be removed before applying (2). The method of doing this for experimental results is described in [10]. To do this for the numerical results, equations (3) and (4) are used, in which the reaction force is determined from the applied transverse load and from the vertical and horizontal velocities of the nodes at the upper, bottom, right, and left node sets, respectively. 3.2. Stage Two This involves determining the average tensile strains along the warp and weft fibre directions as a function of the shear angle for each of the five experiments. The tensile strains are given as state-dependent variables within the VUMAT user-subroutine and have been verified to be the same as the tensile strains occurring along the truss elements bounding the corresponding membrane element. The average tensile strain across the entire specimen along the two fibre directions is determined as a function of the shear angle in Region A, by taking an average of the warp and weft strains from a selection of elements across both Regions A and B. The average fibre tensile strain is determined for each value of the transverse load as a function of the shear angle, and a polynomial curve is fitted to the data from each of the five simulations, the coefficients of which are stored for later reference by the enhanced S-NOCM code during the course of the simulations. Thus, each shear force input curve has a corresponding average fibre strain curve. 3.3. Stage Three This involves implementing the shear-tension coupling in the VUMAT user-subroutine.
To do this, code has been added within the original VUMAT user-subroutine for the S-NOCM to compare the average fibre strain in each membrane element at each time increment against the fitted strain curves, using the shear angle within the element (also given as a state-dependent variable in the VUMAT user-subroutine). Depending on the value of the element strain, the code assigns the appropriate shear force curve to the element using the algorithm given in the flow chart of Figure 3. The shear stress within the element is then determined using the S-NOCM. Thus, the shear force input curve is now a function of both the shear angle and the fibre strain within the membrane element. The process is illustrated in Figure 4, which shows actual shear force data measured in experiments and values of the average tensile strain along the fibre directions predicted in the FE simulations of the BBE test. The process of assigning the appropriate shear force versus shear angle curve is described and illustrated in Figure 4 using a specific example. Note that in Figure 4, only data corresponding to transverse loads of 5, 50, and 100 N are shown, in order to simplify the figure. Consider an element that has a shear angle of 45° at time t. The average tensile strain inside the element is determined, and in this case, the value is 0.03. An orange point indicates this (shear angle, strain) coordinate in Figure 4. The algorithm in the flow chart shown in Figure 3 is run to determine where the average tensile strain in the element lies in relation to the average tensile strain versus shear angle polynomial curves (plotted as black lines in Figure 4). Once the appropriate polynomial is identified and assigned to the element (the assignment is indicated by a blue arrow in Figure 4, in this case for the 50 N transverse load), the corresponding shear force versus shear angle curve (plotted as red lines in Figure 4) is also assigned to the element, indicated by a red arrow in Figure 4.
This curve is used to determine the shear stiffness of the membrane element using the S-NOCM, as has previously been described in detail in [15]. At this point, it is possible to compare the results of the enhanced S-NOCM against the experimental input data, as shown in Figure 5. Here, experimental data from [10] are plotted as thin continuous lines with error bars (a different colour for each transverse load), and numerical predictions are plotted as thick continuous lines (the same colour as the corresponding experimental curve). Agreement between the numerical prediction and the experimental input curve is quite poor at this stage, as the experimental shear force input curves supplied to the code are not yet normalised. The issue of normalisation of shear test results for advanced composites in bias-extension tests is well known; some authors have used gauge sections [31], while others have considered the energy or force contributions from the entire specimen, including Regions A, B, and C [32–36]. In these cases, the precise normalisation procedure depends on the test method (uniaxial or biaxial), the specimen geometry, and the material's response during shear (rate dependent, rate independent, or showing a coupling between tensile strain and shear resistance). A theoretical method to normalise BBE test results for materials with a strong shear-tension coupling was described in detail in [37]. The method requires custom software to retrieve the underlying normalised data via an automated iterative process. Future work will involve use of this theory for accurate and fast normalisation. For now, a simpler approximate normalisation technique is described in the final stage, Stage 4, of the fitting process. The aim of normalisation procedures is to find the shear response of the fabric per unit length or per unit area, in order to determine the parameters governing the shear behaviour in the material's constitutive model.
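The Stage Two polynomial fitting and the Stage Three curve-selection step described above can be sketched as follows. The strain curves below are illustrative stand-ins (the real curves come from the BBE simulations), and the nearest-curve rule is our assumption about the selection logic of Figure 3, which is not reproduced here. With these stand-ins, an element at 45° with a fibre strain of 0.03 is assigned the 50 N curve, as in the worked example above.

```python
import numpy as np

# Hypothetical calibration data: for each transverse load, the average fibre
# strain versus shear angle extracted from the BBE simulations (Stage Two).
theta = np.linspace(0.0, 50.0, 11)        # shear angle, degrees
strain_curves = {5: 0.0002 * theta,       # illustrative only -- real curves
                 50: 0.0006 * theta,      # are taken from the FE simulations
                 100: 0.0010 * theta}

# Stage Two: fit a polynomial strain curve for each transverse load.
polys = {load: np.polynomial.Polynomial.fit(theta, eps, deg=3)
         for load, eps in strain_curves.items()}

def select_curve(theta_el, eps_el):
    """Stage Three (assumed nearest-curve rule): return the transverse-load
    label whose fitted strain at the element's shear angle is closest to
    the element's current average fibre strain."""
    return min(polys, key=lambda load: abs(polys[load](theta_el) - eps_el))

print(select_curve(45.0, 0.03))  # element at 45 deg with strain 0.03
```

In the full model, the returned label would index the corresponding shear force versus shear angle input curve, which the S-NOCM then uses to set the element's shear stiffness.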
Normalisation is relatively simple for the picture frame test [33], where the entire specimen undergoes homogeneous deformation, but is more complex for bias-extension tests. Here, the specimen undergoes different deformations in different regions (e.g., see Figure 2), and this has to be taken into account when interpreting test results. 3.4. Stage Four This involves a simple normalisation procedure aimed at normalising the experimental input curves (which have to be supplied as shear force per unit length of fabric). By correctly normalising the experimental biaxial bias-extension curves, the numerical simulations should produce approximately the same shear force versus shear angle predictions as those observed in experiments. To do this, an approximate procedure is used here, following a simple iterative method. (i) The input shear force versus shear angle curves are divided by the predicted shear force versus shear angle curves to produce a ratio (also a function of the shear angle). (ii) Polynomial functions are fitted to each ratio curve. (iii) The input curves are multiplied by the fitted ratio curves to produce a next generation of input curves. (iv) The process is repeated until reasonable agreement between numerical BBE test predictions and experimental results is obtained. Normally around three iterations are required. This is a simple method designed only to examine the possibility of introducing a shear-tension coupling in the model. Future work will involve employing the more rigorous normalisation developed in Harrison [37]. Figure 6 shows the comparison between the original experimental results and the final predicted shear force versus shear angle curves after conducting this normalisation process. The horizontal error bars given on the numerical results indicate the variation in shear angle across Region A, calculated using the standard deviation of the shear angle of all elements in Region A.
The vertical error bars on the experimental results indicate the variation in the measured force, calculated using the standard deviation of 3 tests. Thus, the full length of each error bar represents two standard deviations. The agreement between numerical predictions and experimental data is clearly improved compared to Figure 5. To test the effectiveness of the modelling approach, two final BBE simulations are conducted, this time using transverse loads increasing linearly in time from 5N to 100N rather than using constant transverse loads. In Figures 7(a) and 7(b), the grey curves are experimental results originally reported in [10], and the black curves are the numerical predictions following the approximate normalisation process described in Stage 4, when applying constant transverse loads of 5, 37, 50, 75, and 100N (the same information is shown in Figure 6). The blue curves in Figures 7(a) and 7(b) are the results predicted by the coupled S-NOCM when increasing transverse loads are applied over the course of the test. In Figures 7(c) and 7(d), the applied transverse load is plotted against shear angle rather than against time, creating slightly nonlinear profiles. In Figures 7(a) and 7(c), the transverse load starts at 5N at 0s and increases linearly in time to 100N at the end of the simulation. In Figures 7(b) and 7(d), the transverse load is held constant at 5N for the first 60% of the total simulation time, then increased linearly to reach 100N at 80% of the total simulation time, and then held constant at 100N until the end of the simulation. As expected, the axial force predictions of the enhanced shear-tension coupled S-NOCM, made using increasing transverse loads, move across the normalised numerical predictions generated using constant transverse loads (the black curves).
The different transverse load versus shear angle profiles shown in Figures 7(c) and 7(d) produce different axial force predictions, as can be seen by comparing Figures 7(a) and 7(b). The result in Figure 7(a) is close to that which might be expected from the woven glass fabric used in the experimental investigation [10]. However, while the result of Figure 7(b) appears correct until around 30°, an unrealistic softening is apparent above this shear angle. Thus, at this point the predictions of the model have been found to be qualitatively correct under simple loading conditions, though they can show unexpected behaviour under more complex loading. Possible explanations for the unexpected predictions could be related to the following. (i) The first is the choice of elements used to create the average strain curves. The resulting predictions have been found to be sensitive to this choice, and future work may involve using a more refined mesh to model the BBE test and using a larger selection of elements to examine this sensitivity. (ii) The second is the normalisation technique used in this work. The very simple normalisation procedure used here takes no account of the shear-tension coupling in the fabric, and a more rigorous method was recently proposed in [37]. Future work will aim to employ this method to improve accuracy and reduce the uncertainty in the shape of the input curves passed to the S-NOCM. (iii) The third is the method of calculating the stress increment at each time step. A tangent stiffness matrix has been used to determine this stress increment; that is, the stress increment is obtained as the product of the tangent stiffness matrix and the strain increment. The linearisation process is known to reduce the sensitivity of the technique of using multiple input curves to control the shear compliance of the membrane elements, a point discussed in detail in [16]. Nevertheless, the linearised increment was used in this first attempt to model the shear-tension coupling, as the method has the advantage of being particularly robust.
Future work will focus on improving the sensitivity of the approach, using the methods described in [16]. Despite the irregularities in the predictions of the shear-tension coupled model under certain in-plane loading conditions, it is clear that the technique proposed here produces a shear-tension coupling similar to that seen in actual experiments. Future work will focus on improving the accuracy of the method, though the model predictions are considered to be sufficiently accurate at this stage to begin examining whether, and under which conditions, a shear-tension coupling has an important influence on the shear angle and wrinkling predictions of complex forming simulations.

4. Conclusion

A method of modelling the coupling between shear compliance and in-plane tension in woven engineering fabrics has been demonstrated. The method is similar to that used previously to create rate-dependent "viscous" behaviour using a hypoelastic model [16], though here the average in-plane strain along the two tow directions, rather than the angular shear rate, is used to control the selection of the shear force versus shear angle curve for use in the non-orthogonal constitutive model (used to relate the shear force and shear stress) [8, 9]. A simple normalisation procedure has been proposed. The sensitivity of the modelling approach is assessed and found to give reasonable results, clearly showing a coupling between shear compliance and in-plane strains in the fibre directions. Future work will involve refining the modelling and normalisation process in order to improve the accuracy of the predictions and could also involve reimplementing the technique using fibre stress rather than strain to control input shear curve selection. The shear-tension coupled model will be used to evaluate the importance of a shear-tension coupling on the predictions of complex forming simulations.
Conflict of Interests

None of the authors of the paper has a direct financial relation with the commercial identities mentioned in this paper that might lead to a conflict of interests for any of them.

The authors wish to express their thanks to the Public Treasury of Libyan Society, to the Royal Academy of Engineering for a Global Research Award (10177/181), and to the National Research Foundation (NRF) for sponsoring this research through the SRC/ERC Program of MOST/KOSFE (R11-2005-065).

References

1. C. Rudd, A. Long, K. Kendall, and C. Mangin, Liquid Moulding Technologies, vol. 1, 1997.
2. P. Boisse, N. Hamila, E. Vidal-Sallé, and F. Dumont, "Simulation of wrinkling during textile composite reinforcement forming. Influence of tensile, in-plane shear and bending stiffnesses," Composites Science and Technology, vol. 71, no. 5, pp. 683–692, 2011.
3. H. Lin, P. Evans, P. Harrison, J. Wang, A. Long, and M. Clifford, "An experimental investigation into the factors affecting the forming performance of thermoset prepreg," in Proceedings of the 9th International ESAFORM Conference on Materials Forming, 2006.
4. H. Lin, J. Wang, A. C. Long, M. J. Clifford, and P. Harrison, "Predictive modelling for optimization of textile composite forming," Composites Science and Technology, vol. 67, no. 15-16, pp. 3242–3252, 2007.
5. A. Cherouat and J. L. Billoët, "Mechanical and numerical modelling of composite manufacturing processes deep-drawing and laying-up of thin pre-impregnated woven fabrics," Journal of Materials Processing Technology, vol. 118, no. 1–3, pp. 460–471, 2001.
6. Q. Fu, W. Zhu, Z. Zhang, and H. Gong, "Effect of variable blank holder force on rectangular box drawing process of hot-galvanized sheet steel," Journal of Materials Science and Technology, vol. 21, no. 6, pp. 909–913, 2005.
7. M. Hou, "Stamp forming of continuous glass fibre reinforced polypropylene," Composites A, vol. 28, no. 8, pp. 695–702, 1997.
8. W. Lee, J. Cao, P. Badel, and P. Boisse, "Non-orthogonal constitutive model for woven composites incorporating tensile effect on shear behavior," International Journal of Material Forming, vol. 1, no. 1, pp. 891–894, 2008.
9. W. Lee, M. K. Um, J. H. Byun, P. Boisse, and J. Cao, "Numerical study on thermo-stamping of woven fabric composites based on double-dome stretch forming," International Journal of Material Forming, vol. 3, no. 2, pp. 1217–1227, 2010.
10. P. Harrison, F. Abdiwi, Z. Guo, P. Potluri, and W. Yu, "Characterising the shear-tension coupling and wrinkling behaviour of woven engineering fabrics," Composites A, vol. 43, no. 6, pp. 903–914, 2012.
11. P. Harrison, M. Clifford, and A. Long, "Shear characterisation of woven textile composites," in Proceedings of the 10th European Conference on Composite Materials, pp. 3–7, 2002.
12. S. B. Sharma, M. P. F. Sutcliffe, and S. H. Chang, "Characterisation of material properties for draping of dry woven composite material," Composites A, vol. 34, no. 12, pp. 1167–1175, 2003.
13. A. Willems, S. V. Lomov, I. Verpoest, and D. Vandepitte, "Picture frame shear tests on woven textile composite reinforcements with controlled pretension," in Proceedings of the 10th European Conference on Composite Materials, pp. 999–1004, April 2006.
14. F. Abdiwi, P. Harrison, W. R. Yu, and Z. Guo, "Modelling the shear-tension coupling of engineering fabrics," in Proceedings of the 8th European Solid Mechanics Conference (ESCM '12), Graz, Austria, 2012.
15. W. R. Yu, P. Harrison, and A. Long, "Finite element forming simulation for non-crimp fabrics using a non-orthogonal constitutive equation," Composites A, vol. 36, no. 8, pp. 1079–1093, 2005.
16. P. Harrison, W. R. Yu, and A. C. Long, "Rate dependent modelling of the forming behaviour of viscous textile composites," Composites A, vol. 42, pp. 1719–1726, 2011.
17. M. Komeili and A. S. Milani, "Shear response of woven fabric composites under meso-level uncertainties," Journal of Composite Materials, 2012.
18. P. Badel, E. Vidal-Sallé, and P. Boisse, "Computational determination of in-plane shear mechanical behaviour of textile composite reinforcements," Computational Materials Science, vol. 40, no. 4, pp. 439–448, 2007.
19. M. A. Khan, T. Mabrouki, E. Vidal-Sallé, and P. Boisse, "Numerical and experimental analyses of woven composite reinforcement forming using a hypoelastic behaviour. Application to the double dome benchmark," Journal of Materials Processing Technology, vol. 210, no. 2, pp. 378–388, 2010.
20. X. Peng and J. Cao, "A dual homogenization and finite element approach for material characterization of textile composites," Composites B, vol. 33, no. 1, pp. 45–56, 2002.
21. P. Badel, S. Gauthier, E. Vidal-Sallé, and P. Boisse, "Rate constitutive equations for computational analyses of textile composite reinforcement mechanical behaviour during forming," Composites A, vol. 40, no. 8, pp. 997–1007, 2009.
22. P. Boisse, M. Borr, K. Buet, and A. Cherouat, "Finite element simulations of textile composite forming including the biaxial fabric behaviour," Composites B, vol. 28, no. 4, pp. 453–464, 1997.
23. P. Boisse, B. Zouari, and A. Gasser, "A mesoscopic approach for the simulation of woven fibre composite forming," Composites Science and Technology, vol. 65, no. 3-4, pp. 429–436, 2005.
24. A. Willems, Forming Simulation of Textile Reinforced Composite Shell Structures, vol. 281, Faculteit Ingenieurswetenschappen Arenbergkasteel, Katholieke Universiteit Leuven, Leuven, Belgium, 2008.
25. P. Harrison, P. Gomes, R. Correia, F. Abdiwi, and W. Yu, "Press forming the double-dome benchmark geometry using a 0/90 uniaxial cross-ply advanced thermoplastic composite," in Proceedings of the 15th European Conference on Composite Materials, Venice, Italy, June 2012.
26. W. R. Yu, F. Pourboghrat, K. Chung, M. Zampaloni, and T. J. Kang, "Non-orthogonal constitutive equation for woven fabric reinforced thermoplastic composites," Composites A, vol. 33, no. 8, pp. 1095–1105, 2002.
27. W. R. Yu, M. Zampaloni, F. Pourboghrat, K. Chung, and T. J. Kang, "Sheet hydroforming of woven FRT composites: non-orthogonal constitutive equation considering shear stiffness and undulation of woven structure," Composite Structures, vol. 61, no. 4, pp. 353–362, 2003.
28. P. Harrison, A. Long, W. Yu, and M. Clifford, "Investigating the performance of two different constitutive models for viscous textile composites," in Proceedings of the 8th International Conference on Textile Composites (TEXCOMP '06), Nottingham, UK, October 2006.
29. P. Harrison, W. R. Yu, J. Wang, T. Baillie, A. C. Long, and M. J. Clifford, "Numerical evaluation of a rate dependent model for viscous textile composites," in Proceedings of the 15th International Conference on Composite Materials, Durban, South Africa, 2005.
30. F. Abdiwi, P. Harrison, I. Koyama et al., "Characterising and modelling variability of tow orientation in engineering fabrics and textile composites," Composites Science and Technology, vol. 72, no. 9, pp. 1034–1041, 2012.
31. M. Sutcliffe, S. Sharma, A. Long et al., "A comparison of simulation approaches for forming of textile composites," in Proceedings of the 5th International ESAFORM Conference on Materials Forming, Krakow, Poland, April 2002.
32. J. Cao, R. Akkerman, P. Boisse et al., "Characterization of mechanical behavior of woven fabrics: experimental methods and benchmark results," Composites A, vol. 39, no. 6, pp. 1037–1053, 2008.
33. P. Harrison, M. J. Clifford, and A. C. Long, "Shear characterisation of viscous woven textile composites: a comparison between picture frame and bias extension experiments," Composites Science and Technology, vol. 64, no. 10-11, pp. 1453–1465, 2004.
34. P. Harrison, P. Potluri, K. Bandara, and A. C. Long, "A normalisation procedure for Biaxial Bias Extension tests," International Journal of Material Forming, vol. 1, no. 1, pp. 863–866, 2008.
35. P. Harrison, J. Wiggers, and A. C. Long, "Normalization of shear test data for rate-independent compressible fabrics," Journal of Composite Materials, vol. 42, no. 22, pp. 2315–2344, 2008.
36. J. Launay, G. Hivet, A. V. Duong, and P. Boisse, "Experimental analysis of the influence of tensions on in plane shear behaviour of woven composite reinforcements," Composites Science and Technology, vol. 68, no. 2, pp. 506–515, 2008.
37. P. Harrison, "Normalisation of biaxial bias extension test results considering shear tension coupling," Composites A, vol. 43, no. 9, pp. 1546–1554, 2012.
82B10 Quantum equilibrium statistical mechanics (general)

We present an overview of the mathematics underlying the quantum Zeno effect. Classical, functional analytic results are put into perspective and compared with more recent ones. This yields some new insights into mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated to the issue of existence of the Zeno dynamics. On grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T. See the corresponding presentations: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics".

We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS classification: 03.65.Xp, 05.30.-d, 02.30.
See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect", and the talk "Zeno Dynamics in Quantum Statistical Mechanics", presented at the Università di Pisa, Pisa, Italy, 3 July 2002; at the conference 'Irreversible Quantum Dynamics', the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002; and at the University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 September 2002. See the corresponding papers "Zeno Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect".
CFD Simulation and Experimental Validation of Fluid Flow and Particle Transport in a Model of Alveolated Airways Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy. Keywords: pulmonary fluid mechanics, aerosol transport, PIV, PTV
Contact generation between 3D convex meshes

A large part of a rigid body dynamics engine is collision detection, in particular finding the contact points between objects. In games, non-moving world geometry is often represented by concave triangle meshes. Common collision shape representations for moving objects are basic convex shapes such as spheres, boxes and capsules, and convex polyhedral meshes. There are several efficient ways to generate contact points between convex polyhedral meshes: one method is based on Sutherland–Hodgman clipping, and another method builds multiple contact points by adding closest points to a persistent contact cache. I implemented both methods in the Bullet physics library, and you can download the source code from in particular search for . There is also a basic sample in the Bullet3 github repository; the screenshot above is taken from sample/api_bullet_physics/1_stable_convex. In this blog posting I discuss the contact clipping method, and I'll discuss the persistent contact caching method in a future posting.

The basic recipe

1) find the separating axis between the two convex polyhedra A and B
2) search for a face in A and B that supports the separating axis
3) generate contact points by clipping one face from A against the convex hull B

Finding the separating axis

Given two convex polyhedra A and B, we need to find the axis with the minimum projected separation. For step 1 in the above recipe you can use any method to find the separating axis. Two popular methods to find the separating axis are the separating axis test (SAT) and GJK, named after its creators Gilbert, Johnson and Keerthi. Note that GJK needs a companion algorithm such as the expanding polytope algorithm (EPA) for the penetration case. The SAT test is easier to explain and implement than GJK/EPA, so here is a brief description. You can check the books by Gino van den Bergen and Christer Ericson for more details.
SAT in a nutshell

The potential separating axis can be either a face normal from A or B, or a cross product using edges from A and B, unless the edges are parallel. We can do an exhaustive search using all potential axes, computing the projection of the convex hulls onto each axis. We search for the axis with the largest separating distance. If the maximum projected distance is positive, the objects don't overlap and we can terminate. In my implementation this step is implemented in btPolyhedralContactClipping::findSeparatingAxis.

Projection of the convex hull onto an axis

The projection of a convex hull onto an axis can be computed by taking the vertex with the maximum dot product with the axis. Such a search for an extreme vertex given a direction vector is also known as a support mapping.

Reducing the number of separating axis tests

The number of separating axes can grow fast, and testing all axes can become a performance bottleneck.

1) Only use unique directions. For a box-box test this reduces the total number of candidates from 156 to 15 axes.

2) First check against interior objects to cull detailed tests. This was recently discussed by Pierre Terdiman in this blog posting: if the separating distance with a simplified object is already larger than the current smallest distance, then we don't need to test the distance with the more complex convex hull. This optimization will easily reduce the total number of tests by 90%.

3) Once the minimal separating distance for a face goes beyond the current smallest distance, we can discard the remaining vertices for this face.

Even with the above optimizations, I found that the GJK/EPA test still outperforms the SAT test in many cases. I'm sure there are other optimizations possible; if you know any, please leave a comment.

Searching clipping faces

Once we have the separating axis, we can search for a face on A and a face on B with a normal that is closest to the separating axis.
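As a sketch, the support mapping and the projected-separation test described above could look like this (an illustrative Python version; Bullet's actual implementation is in C++ and differs):

```python
import numpy as np

def support_point(vertices, direction):
    """Extreme vertex of a convex hull in a given direction (support mapping)."""
    dots = vertices @ direction
    return vertices[np.argmax(dots)]

def projected_separation(verts_a, verts_b, axis):
    """Signed gap between the projections of hulls A and B onto `axis`.
    A positive result means the axis separates the two hulls."""
    axis = axis / np.linalg.norm(axis)
    min_a, max_a = (verts_a @ axis).min(), (verts_a @ axis).max()
    min_b, max_b = (verts_b @ axis).min(), (verts_b @ axis).max()
    # The gap is positive only when the projected intervals do not overlap.
    return max(min_b - max_a, min_a - max_b)
```

The SAT loop then evaluates `projected_separation` over all candidate axes (face normals and edge cross products) and keeps the axis with the largest value.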
These are faces with the maximum and minimum dot product of their face normal against the separating axis. In the picture those two faces are drawn in red.

Sutherland–Hodgman clipping

To generate multiple contact points, we can clip the face from one object against the hull of the other object. The Wikipedia page has a good illustration of this clipping process. Instead of clipping against the entire hull, we can clip only against the edge planes connected to the incident face in the other object. In the picture those edge planes are drawn in green. You can choose to perform this clipping in world space, or in the space of object A or B. The resulting contact points can be reported in world space or in one of the object spaces. You can check the implementation in btPolyhedralContactClipping::clipFaceAgainstHull.

Contact reduction

The contact generation method can produce a lot of contacts. We can reduce the number of contacts in several ways, for example by keeping only the following contact points:

1) the deepest point
2) the point furthest from the deepest point
3) the point furthest from (2)
4) the supporting point of a direction orthogonal to the edge connecting points (2) and (3)
5) the supporting point of the negative orthogonal direction to this edge

Convex decomposition

Convex polyhedral meshes can be created manually using authoring tools, or generated automatically from concave triangle meshes using convex decomposition methods. Khaled Mammou just released a brand new library for Hierarchical Approximate Convex Decomposition of 3D Meshes (HACD) that can be downloaded here https://sourceforge.net/projects/hacd I plan to try out his convex decomposition method and discuss this in an upcoming blog posting.
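One step of the Sutherland–Hodgman loop described above, clipping a polygon against a single plane, can be sketched as follows (an illustrative Python version, not Bullet's btPolyhedralContactClipping code):

```python
import numpy as np

def clip_polygon(points, plane_normal, plane_d):
    """Clip a polygon (list of numpy points) against the half-space
    n . x + d <= 0: one step of the Sutherland-Hodgman loop."""
    out = []
    n = len(points)
    for i in range(n):
        a, b = points[i], points[(i + 1) % n]
        da = np.dot(plane_normal, a) + plane_d
        db = np.dot(plane_normal, b) + plane_d
        if da <= 0.0:
            out.append(a)                   # a is inside: keep it
        if da * db < 0.0:                   # edge a->b crosses the plane
            t = da / (da - db)
            out.append(a + t * (b - a))     # keep the intersection point
    return out
```

The full clip runs this step once per edge plane of the incident face; the surviving points are the contact candidates.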
Discrete Time Fourier Transform (DTFT)

The Discrete Time Fourier Transform (DTFT) can be viewed as the limiting form of the DFT when its length N approaches infinity. Writing the normalized radian frequency variable as $\omega$ and the amplitude at sample number $n$ as $x(n)$, the DTFT is defined as

$X(\omega) = \sum_{n=-\infty}^{\infty} x(n) e^{-j\omega n}.$

The inverse DTFT is

$x(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n}\, d\omega,$

which can be derived in a manner analogous to the derivation of the inverse DFT. Instead of operating on sampled signals of length N, the DTFT operates on signals defined over all integers n, and its frequency variable spans a continuum. That is, the DTFT is a function of continuous frequency on the unit circle in the complex plane (see Fig.6.1). Thus, as N goes to infinity, the DFT approaches the DTFT.

About the Author: Julius Smith's background is in electrical engineering (BS Rice 1975, PhD Stanford 1983). He is presently Professor of Music and Associate Professor (by courtesy) of Electrical Engineering at Stanford's Center for Computer Research in Music and Acoustics (CCRMA), teaching courses and pursuing research related to signal processing applied to music and audio systems. See for details.
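The limiting relation can be checked numerically: for a finite-length signal, the DFT equals the DTFT sampled at the frequencies omega_k = 2*pi*k/N. A small sketch (not from the book):

```python
import numpy as np

def dtft(x, omega):
    """Evaluate the DTFT of a finite-support signal x at radian frequency
    omega: X(omega) = sum_n x[n] * exp(-j * omega * n)."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * omega * n))

# For a length-N signal, the DFT is the DTFT sampled at omega_k = 2*pi*k/N.
x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
X_dft = np.fft.fft(x)
X_samples = np.array([dtft(x, 2 * np.pi * k / N) for k in range(N)])
```

The two arrays agree to machine precision, illustrating the DFT as frequency samples of the DTFT.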
A Wireless Sensor Network (WSN) consists of spatially distributed autonomous devices, which cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants, at different locations [1–3]. WSNs have been used in many applications such as environmental monitoring, military field surveillance, and many other applications where the human presence may not be suitable or desirable [4, 5]. WSNs are usually tailored to specific applications. The sensors scattered in a sensor field have the capability to collect and aggregate data [6], and route [7] them to a base station [1]. The base station usually presents the result of these operations, which could be used to reconstruct the phenomena of interest and to provide information for making decisions, to the user. Most current studies on WSNs focus on the sensors' energy constraint as a key design feature. For this reason, techniques abound in the literature aiming at reducing energy consumption and, therefore, increasing the lifetime of the whole network. Since communication among nodes is the main cause of energy consumption, many techniques involving clustering and information fusion have been proposed to increase the network lifetime; some of them can be found in [5] and in [8]. In the following, we will consider hierarchical networks, and will present a strategy for assessing the impact of several factors from the viewpoint of the quality of the data delivered to the user. The aforementioned techniques have an impact on the quality of the information delivered to users and, as a consequence, influence the decisions they take.
For instance, consider the case of a hierarchical WSN that uses information fusion to efficiently help the base station take decisions about temperature management. Suppose the cluster head of each cluster sends the mean of the temperature measured by the individual sensors of that cluster. This approach is prone to imprecision and, among other issues, is quite sensitive to outliers. In this case, for instance, information about data variability is lost in the process. For applications in which data dependability is critical, such issues are not acceptable. Among the factors that impact the quality of the reconstructed signal, we emphasize the following:

- Data granularity: how spatially coarse (smooth, less variable) and temporally stable the signal is;
- Sampling strategy: how sensors are deployed on the field and their operating characteristics;
- Node clustering: how sensors are gathered in clusters for energy saving;
- Data aggregation: how data from the same cluster is summarized before being forwarded;
- Data reconstruction: how the base station (or the user) infers about the original signal using the available information, i.e., from summarized data.

The impact of such factors on the quality of the reconstructed information signal in WSNs is seldom addressed in the literature. Even studies of how wireless sensor networks are able to report the data they collect, by means of estimated errors, are scarce. Some authors, as in [9–12], have studied analytical bounds on the quality of the reconstructed signal by means of the classical Shannon-Nyquist theory. Specifically, Nordio et al. [10] derive analytical expressions that describe the degradation of the quality of the reconstructed data in clustered sensor networks. Sung et al. [13] investigate the asymptotic behavior of ad hoc sensor networks deployed over correlated random fields. In that work the authors consider neither information fusion nor hierarchical networks.
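The outlier sensitivity of mean-based fusion mentioned above is easy to illustrate numerically; the readings below are made up for demonstration:

```python
import numpy as np

# Five temperature readings from one cluster; the last sensor is faulty.
readings = np.array([21.0, 21.5, 20.8, 21.2, 85.0])

fused_mean = readings.mean()        # what a mean-aggregating CH would forward
fused_median = np.median(readings)  # an outlier-resistant alternative
spread = readings.std()             # variability, lost if only the mean is sent
```

A single faulty sensor drags the forwarded mean far from the temperatures actually observed, while the median stays representative; neither summary alone conveys the spread.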
The aforementioned proposals attempt to establish theoretical limits for the reconstruction problem considering aspects such as clustering and correlated random field data. The work presented below assesses the impact of those factors on the quality of the reconstructed signal by modeling WSNs as signal processing problems on ℝ^2, where the data might be irregularly sampled. We conclude that, using the error metric defined in this work, we observe smaller errors for (i) coarser processes, (ii) more regular sensor deployment, (iii) data-aware aggregation, and (iv) reconstruction based on Kriging. In particular, we quantitatively assess: data granularity, by using a Gaussian field model for the data (disregarding temporal variation); sensor deployment, by a new stochastic point process, which is able to describe sensor distributions ranging from regularly spaced to tightly packed; two node clustering techniques (LEACH, a geographic clustering, and SKATER, which also incorporates data homogeneity; no clustering is considered as a benchmark); and two reconstruction strategies, namely, Voronoi cells and Kriging. A constant perception radius and the mean value as data aggregation are assumed. We show that all the aforementioned factors have a significant impact on the quality of the reconstructed signal, for which we provide quantitative measures. The paper unfolds as follows. Section 2. presents the main models we employ, namely the clustering strategy (Section 2.1.), WSNs as a whole (Section 2.2.), the data (Section 2.3.), the sensor deployment (Section 2.4.), and signal sampling and reconstruction (Section 2.5.). Section 3. describes the scenarios of interest and the methodology. Section 4. presents the results, and Section 5. concludes the paper. This section presents the four central models for our work, namely clustering strategy (Section 2.1.), WSNs from the signal processing viewpoint (Section 2.2.), a model for the observed data (Section 2.3.)
and a model for sensor deployment (Section 2.4.).

WSNs present several constraints, such as battery capacity and limited computing capabilities [1]. Among those constraints, energy limitation is considered the most important aspect to address in order to improve the network lifetime. Many lifetime-maximizing techniques have been proposed, and each approach provides a certain level of energy saving [14]. Clustering sensors into groups is a popular strategy to save energy [15] by exploiting the correlation present in the data collected by neighboring sensors. This technique is usually performed in three phases: (i) leader election, which aims at choosing one representative for each group, the Cluster Head (CH); (ii) cluster formation, where every other node joins exactly one group represented by its CH; and (iii) data communication, where group members report their data to the CH. The CH usually performs data fusion, and delivers the fused data toward the sink node. Nodes are attached to groups, and the ideal number of groups depends on the clustering objective. Abbasi and Younis [15] describe a taxonomy of WSN clustering techniques and discuss some clustering objectives. In the following, two clustering approaches are detailed. The former creates clusters based on geographical information, while the latter is based on a data-aware clustering technique. These approaches will be assessed in terms of the quality of the reconstructed signal in Section 4.

LEACH (Low-Energy Adaptive Clustering Hierarchy) [16] is a popular WSN clustering approach. It executes in rounds, and each round performs the three aforementioned phases. LEACH assumes that all nodes are able to reach the sink node in one hop, and that they are capable of organizing the groups and the communication by power control schemes. Communication is direct (single hop): CHs deliver their data to the sink, and group members to their CHs.
There are two different versions of LEACH proposed in [16]: one elects CHs in a distributed fashion, the other in a centralized way. Initially (first round), the election occurs randomly, following a uniform law, by a rule tuned to elect k CHs on average. In the subsequent rounds, the nodes that were chosen as CHs in the last n/k rounds, where n is the number of nodes and k the number of clusters, are not eligible. This approach guarantees that the CH role alternates, in order to better distribute the energy consumption. The remaining energy of the nodes may be used to adjust the probability law, making nodes with more energy more likely to be elected. In the second version, CHs are elected in a centralized fashion (LEACH-C). Each node sends the information about its current location and energy level to the sink node. The problem of finding the k optimal clusters and the CH nodes that minimize the energy consumption is NP-hard; the sink solves it by simulated annealing. Once the election finishes, CHs announce their role by an advertisement message. All other nodes receive this message and join exactly one group, represented by the CH that requires the minimum communication energy. Each node takes this decision based on the received signal strength of the advertisement message from each CH. Note that, typically, this leads to choosing the closest CH, unless there is an obstacle impeding communication. After clusters are formed, each group member adjusts its transmission power to reach its corresponding CH. The communication within the group uses a TDMA scheme and, outside the groups, CHs employ a direct-sequence spread spectrum. These schemes attempt to diminish intra- and inter-group interference. The main goal of our assessment is to analyze the reconstruction error; thus, questions related to energy consumption were not considered, and CHs were chosen randomly, in a manner similar to the distributed version of LEACH.
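The election-with-rotation rule described above can be sketched in code. The following Python fragment is only an illustration (the paper's simulations were written in R, and the exact threshold expression of [16] is not reproduced here); the function and variable names are ours:

```python
import random

def elect_cluster_heads(nodes, k, round_no, last_ch_round):
    # Nodes that served as CH within the last n/k rounds are not eligible.
    n = len(nodes)
    eligible = [v for v in nodes
                if round_no - last_ch_round.get(v, -n) >= n // k]
    # Election probability tuned so that, on average, k CHs emerge.
    p = min(1.0, k / max(len(eligible), 1))
    chs = [v for v in eligible if random.random() < p]
    for v in chs:
        last_ch_round[v] = round_no
    return chs
```

Over successive rounds the ineligibility window rotates the CH role across the network, spreading energy consumption, as the protocol intends.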
The difference is that we forced the CHs to be at least r units apart (in our scenarios, r = 30). This choice makes the CHs more evenly distributed over the sensor field and diminishes the reconstruction error.

LEACH assumes that nearby nodes have correlated data, while SKATER (Spatial ‘K’luster Analysis by Tree Edge Removal) [17, 18] introduces an additional restriction to produce good quality data summaries. SKATER uses a data-aware clustering procedure that mainly influences the way clusters are formed. Its hypothesis is that data fused on spatially homogeneous clusters will have a better statistical quality (less variability) than data fused on geographical clusters such as LEACH's. Apart from the proposals by Kotidis [19], and Toulone and Madden [20], data homogeneity is rarely used for sensor clustering. To obtain spatially homogeneous clusters, SKATER looks for a partition with three properties: (i) nodes of the same group have to be similar to each other in some predefined attributes; (ii) the attributes differ among different groups; and (iii) nodes of the same group must belong to a predefined neighborhood structure. SKATER works in two steps. First, it creates a minimum spanning tree (MST) from the graph representing the geographical neighborhood structure of the nodes. The cost of an edge represents the similarity of the sensors’ collected data, defined as the squared Euclidean distance between them (data might be in ℝ^p). In the second step, SKATER performs a recursive partitioning of the MST to obtain contiguous clusters. The partitioning method considers the internal homogeneity of the clusters, i.e., it uses the sensors’ data. SKATER thus transforms the regionalization problem into a graph partitioning problem. The partitioning method chooses the edge whose removal leads to the most homogeneous clusters and, recursively, creates a new graph that is a forest. The process is repeated until the forest has k trees (k clusters).
This process uses an objective function proportional to the variance of the data collected by the sensors of the same group. SKATER is a centralized clustering procedure with a high computational cost, due to the exhaustive comparison of all possible values of the objective function; it therefore uses a polynomial-time heuristic for fast tree partitioning. In our work, we used SKATER to build homogeneous clusters. The overall protocol is similar to LEACH's, but cluster formation follows SKATER's data-aware procedure; CHs are chosen randomly among cluster members.

As presented in Aquino et al. [21], and Frery et al. [22], a WSN can be conveniently described as sampling/reconstruction processes within the signal processing framework. A WSN collecting information can be represented by the diagram shown in Figure 1, where 𝒩 denotes the environment and the process to be measured, and F is the phenomenon of interest, with V* its spatiotemporal domain. A set of ideal rules (R*) leading to ideal decisions (D*) could be devised if true, complete and uncorrupted observation of the phenomenon were possible. One has, instead, sensors S = (S[1], . . ., S[n]), each measuring the phenomenon at a certain position and producing a report in its domain V[i], 1 ≤ i ≤ n; all possible domain sets are denoted V = (V[1], . . ., V[n]). From the signal theory viewpoint, F is the stochastic process that models the signal to be analyzed, and S is the sampling strategy. Most of the time, collecting all data from every sensor is a waste of resources, since there is redundant information. In order to save resources, e.g., energy, and, therefore, to extend the network lifetime, information fusion techniques are used [8]. They are denoted by Ψ and produce values in a reduced subset V′ ⊂ V. A reconstruction function F̂ is then applied to these fused data, aiming at restoring the events described by F as closely as possible; this function should be regarded as an estimator.
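SKATER's two steps (MST construction, then recursive removal of the edge whose deletion yields the most homogeneous partition) can be sketched as follows. This is a deliberately simplified Python illustration: it uses scalar readings and a complete graph instead of the geographical neighborhood graph, and an exhaustive search instead of the polynomial-time heuristic of [17]; all names are ours.

```python
def mst(values):
    # Prim's algorithm; edge cost = squared difference of sensor readings.
    n = len(values)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: (values[e[0]] - values[e[1]]) ** 2)
        edges.append((i, j))
        in_tree.add(j)
    return edges

def components(n, edges):
    # Connected components of the forest described by `edges`.
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                comp.append(v)
                stack.extend(adj[v])
        comps.append(comp)
    return comps

def skater(values, k):
    # Objective: total within-cluster sum of squared deviations from the mean.
    def cost(es):
        return sum(sum((values[v] - sum(values[u] for u in c) / len(c)) ** 2
                       for v in c)
                   for c in components(len(values), es))
    edges = mst(values)
    while len(components(len(values), edges)) < k:
        r = min(range(len(edges)), key=lambda r: cost(edges[:r] + edges[r + 1:]))
        edges = edges[:r] + edges[r + 1:]
    return components(len(values), edges)
```

Each edge removal splits one tree of the forest in two, so after k − 1 removals the forest has exactly k trees, i.e., k clusters.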
Using this new information, the sets of rules and decisions become R′ and D′, respectively. Ideally, D′ and D* are the same. The class of transformations Ψ we consider here is formed by two different steps: the first is the clustering of nodes, and the second is data aggregation. Aggregated data, with their corresponding locations, are used as input to a reconstruction process that runs in the sink, and the result is then delivered to the user. The data sent to the user, i.e., the reconstructed signal, is compared with the phenomenon of interest by means of a measure of error, which we use to assess the impact of sensor placement and data aggregation on the performance of the WSN. This is performed for a number of phenomena of interest. Besides the already defined clustering techniques, namely LEACH and SKATER, pointwise data processing, which performs neither clustering nor aggregation, is used in this work as a benchmark. In our study, data aggregation is done by taking the mean value of the data observed at each cluster; this reduction makes sense when these data can be safely summarized by a single value. Signal reconstruction is performed with two strategies: Voronoi cells and Kriging. They require the same information, namely sensor positions and values, being the latter more computationally demanding.

Sensors measure a continuously varying function F describing, for instance, the illumination on the ground of a forest or the air pressure in a room [18, 23]. Random fields are collections of random variables indexed in a d-dimensional space [24, 25]. Such models can be used to describe natural phenomena, such as temperature, moisture and gravity. Following Reis et al. [18], we use a zero-mean isotropic Gaussian random field to describe the truth being monitored by the WSN, i.e., F in the diagram shown in Figure 1. We assume a stable covariance function exp(−d^s), where d ≥ 0 is the Euclidean distance between sites, and s > 0, called scale, is the parameter that characterizes this model.
The scale is related to the granularity of the process. Figure 2 shows four situations, from fine (s = 5) to coarse (s = 20) granularity. Samples from this process can be readily obtained using the RandomFields package for R [25]. We used a red-yellow-white color table in order to enhance the different values. Sampling of outcomes of F will be performed, typically, at irregularly spaced locations, which we describe by means of spatial point processes. The location of the sensors will be described by a stochastic point process, presented in the following section.

Point processes are stochastic models that describe the location of points in space. They are useful in a broad variety of scientific applications, such as ecology, medicine, and engineering [26]. The isotropic stationary Poisson model, also known as fully random or uniformly distributed, is the basic point process. The number of points in the region of interest follows a Poisson law with mean proportional to the area, and the location of each point has no influence on the location of the others. The other process we will use is a repulsive one, where points cannot lie closer than a specified distance. Using these two processes we build a compound point process able to describe many practical situations. The Poisson point process over a finite region W ⊂ ℝ^2 is defined by the following properties: (i) the probability of observing n ∈ ℕ[0] points in any set A ⊂ W follows a Poisson distribution, Pr(N[A] = n) = e^(−ημ(A)) [ημ(A)]^n / n!, where η > 0 is the intensity and μ(A) is the area of A; and (ii) the numbers of points in disjoint subsets are independent random variables. Without loss of generality, in order to draw a sample from a Poisson point process with intensity η > 0 on a square window W = [0, ℓ] × [0, ℓ], first sample from a Poisson random variable with mean ηℓ^2. Assume n was observed.
Now obtain 2n samples from independent identically distributed random variables with uniform distribution on [0, ℓ], say x[1], . . ., x[n], y[1], . . ., y[n]. The n points placed at coordinates (x[i], y[i])[1≤i≤n] are an outcome of the Poisson point process on W with intensity η. If n is known beforehand, rather than being the outcome of a Poisson random variable, then the n points placed at coordinates (x[i], y[i])[1≤i≤n] are an outcome of the Binomial point process on W; this last process is denoted B(n). Matérn’s Simple Sequential Inhibition process can be defined iteratively as the procedure that places at most n points in W. The first point is placed uniformly and, until all n points are placed or the maximum number of iterations t[max] is reached, a new location is chosen uniformly on W regardless of the previous points. A new point is placed there if the new location is not closer than r to any previous point; otherwise the location is discarded, the iteration counter is increased by one, and a new location is chosen uniformly. At the end, there are m ≤ n points in W that lie at least r units from each other. This process describes the distribution of non-overlapping discs of radii r/2 on W; denote it M(n, r). We build an attractive process by merging two Poisson processes with different intensities. A step point process in W′ ⊂ W ⊂ ℝ^2 with parameters a, λ > 0 is defined as two independent Poisson processes: one with intensity λ on W \ W′, and the other with intensity aλ on W′. Denote this process S(n, a). Without loss of generality, we define the compound point process on W = [0, 100]^2, with W′ = [0, 25]^2 and η = 1, denoted 𝒞(n, a), as

𝒞(n, a) = M(n, r[max](1 − e^a)) if a < 0;  B(n) if 0 ≤ a ≤ 1;  S(n, a) if a > 1,

where r[max] is the maximum exclusion distance, which we set to r[max] = n^(−1/2).
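As a rough Python counterpart of these definitions (the paper samples these processes with the spatstat package in R), the three regimes of 𝒞(n, a) might be simulated as below. The conditional placement used for the step process, and all names, are our assumptions:

```python
import math
import random

def binomial_pp(n, ell=100.0):
    # B(n): n points i.i.d. uniform on W = [0, ell]^2.
    return [(random.uniform(0, ell), random.uniform(0, ell)) for _ in range(n)]

def matern_ssi(n, r, ell=100.0, t_max=100000):
    # M(n, r): accept a uniform candidate only if it is at least r away
    # from every point already placed (simple sequential inhibition).
    pts, rejected = [], 0
    while len(pts) < n and rejected < t_max:
        p = (random.uniform(0, ell), random.uniform(0, ell))
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
        else:
            rejected += 1
    return pts

def step_pp(n, a, ell=100.0, sub=25.0):
    # S(n, a): intensity lambda on W \ W' and a*lambda on W' = [0, sub]^2;
    # conditioned on n points, each falls in W' with probability p_sub.
    p_sub = a * sub ** 2 / (a * sub ** 2 + (ell ** 2 - sub ** 2))
    pts = []
    for _ in range(n):
        if random.random() < p_sub:
            pts.append((random.uniform(0, sub), random.uniform(0, sub)))
        else:
            while True:  # rejection sampling of a uniform point on W \ W'
                p = (random.uniform(0, ell), random.uniform(0, ell))
                if p[0] > sub or p[1] > sub:
                    pts.append(p)
                    break
    return pts

def compound_pp(n, a):
    # C(n, a): repulsive (a < 0), binomial (0 <= a <= 1), attractive (a > 1).
    if a < 0:
        r_max = n ** -0.5
        return matern_ssi(n, r_max * (1 - math.exp(a)))
    if a <= 1:
        return binomial_pp(n)
    return step_pp(n, a)
```

Note how the exclusion radius r[max](1 − e^a) grows toward r[max] as a → −∞, smoothly connecting the binomial case (a = 0) to ever more regular deployments.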
The 𝒞(n, a) point process spans in a seamless manner the repulsive (a < 0, Figure 4(a)), fully random (a ∈ [0, 1], Figure 4(b)) and attractive (a > 1, Figure 4(c)) cases. For the sake of completeness, 𝒞(n, −∞) denotes the deterministic placement of n regularly spaced sensors on W at the maximum possible distance from each other. Samples from the 𝒞 process can be easily generated using basic functions from the spatstat package for R [27]. Repulsive processes are able to describe the intentional, but not completely controlled, location of sensors as, for instance, when they are deployed by a helicopter at low altitude. Sensors located by a binomial process could have been deployed from high altitude, so their locations are completely random and independent of each other. Attractive situations may arise in practice when sensors cannot be deployed, or cannot function, everywhere as, for instance, when they are spread over a swamp: those that fall on a dry spot survive, but those that land on water may fail to function. Without loss of generality, in the following we consider that the whole process takes place on W = [0, 100]^2 and W′ = [0, 25]^2 with intensity η = 1, and that there are n = 100 sensors.

Once the signal f = F(ω), outcome of the Gaussian random field with parameter s ∈ ℝ presented in Section 2.3., is available, it is sampled at positions (x[1], y[1]), . . ., (x[100], y[100]), which, in turn, are the outcome of the compound point process 𝒞(100, a), a ∈ ℝ, defined in Section 2.4. For each 1 ≤ i ≤ 100, sensor i, located at (x[i], y[i]) ∈ W, captures a portion of f: the mean value observed within its area of perception p[i], i.e., it stores the value v[i] = ∫[p[i]] f. We work with isotropic homogeneous sensors, where

p[i] = {(x, y) ∈ W : (x − x[i])^2 + (y − y[i])^2 ≤ r^2},

with r > 0 the perception radius, which we set to √(100/π) ≈ 5.64.
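The choice of radius and the disc mean v[i] can be illustrated numerically. The sketch below (Python; the Monte Carlo estimator and all names are ours) checks that a disc of radius √(100/π) has area 100 and approximates the mean of f over the perception disc:

```python
import math
import random

R = math.sqrt(100 / math.pi)  # perception radius: disc area = 100 sq. units

def sensor_reading(f, xi, yi, r=R, m=2000):
    # Monte Carlo estimate of the mean of f over the disc p_i centered
    # at (xi, yi): sample uniformly on the disc and average f.
    total = 0.0
    for _ in range(m):
        rho = r * math.sqrt(random.random())   # sqrt gives uniform area density
        theta = 2 * math.pi * random.random()
        total += f(xi + rho * math.cos(theta), yi + rho * math.sin(theta))
    return total / m
```

For a constant field the estimate is exact; for a smooth field it approaches the disc average as m grows.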
If 100 sensors were deployed in a regular fashion on W, their Voronoi cells would have areas of 100 squared units; the same area is produced by circular perception areas of radius √(100/π), hence our choice. Once every node has its value v[i], 1 ≤ i ≤ 100, clustering begins. LEACH groups nearby sensors, while SKATER also employs the values they have stored. Once clusters are formed, the mean of the values stored in the sensors belonging to each cluster is sent to the sink by each CH, along with the information of the position of each node. The next stage, namely signal reconstruction, then begins.

Two reconstruction methodologies were assessed in this work: Voronoi cells and Kriging. The former consists of first determining the Voronoi cell of each sensor, i.e., the points in W that are closer to it than to any other sensor. Each cluster becomes responsible for the area corresponding to the union of the Voronoi cells of the sensors that form it. The reconstructed value at position (x, y) ∈ W is then the mean value returned by the cluster responsible for that point; see Figure 4. These computations were easily implemented using the deldir package for R. Kriging is the second reconstruction procedure we employed. It is a geostatistical method whose simplest version (“simple Kriging”) is equivalent to minimum mean square error prediction under a linear Gaussian model with known parameter values. No parameter was assumed known and, regardless of the true covariance model imposed on the Gaussian field, we estimated a general and widely accepted covariance function, the Matérn model, given by

C(d) = (1 / (Γ(ν) 2^(ν−1))) (d/ρ)^ν K[ν](d/ρ),

where d > 0 is the distance between points, Γ is the Gamma function, K[ν] is the modified Bessel function of the second kind and order ν > 0, and the parameters to be estimated are ρ > 0, which measures how quickly the correlation decays with distance, and ν > 0, the smoothness parameter.
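In the pointwise (no clustering) case, Voronoi reconstruction reduces to assigning each grid point the value of its nearest sensor; with clustering, each cell instead receives the mean of the cluster owning it. A minimal Python sketch of the pointwise variant (the paper used the deldir package in R; all names are ours):

```python
def voronoi_reconstruct(sensors, grid=100):
    # sensors: list of (x, y, value); each grid cell center receives the
    # value of its nearest sensor, i.e., of its Voronoi cell owner.
    img = [[0.0] * grid for _ in range(grid)]
    for i in range(grid):
        for j in range(grid):
            x, y = i + 0.5, j + 0.5
            _, v = min(((x - sx) ** 2 + (y - sy) ** 2, sv)
                       for sx, sy, sv in sensors)
            img[i][j] = v
    return img
```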
More details about this covariance function, including particular cases, inference and applications, can be found in [28]. Given the data and their locations, the covariance function is estimated by maximum likelihood. Then, the means are estimated by generalized least squares using the covariance as weight: closer values have more influence than distant ones. Notice that such a procedure requires the same information needed by Voronoi reconstruction, namely the sampled data and their positions; see Figure 4. Ordinary Kriging was used by Yu et al. [29] for the simulation of plausible data to be used as input to simulation-based sensor network assessment procedures. For details and related techniques, please refer to Diggle and Ribeiro Jr. [30]. As a benchmark, the result of applying ordinary Kriging to the original v[1], . . ., v[100] sampled values, without clustering or aggregation, is also presented. This approach, which provides the best possible input for any reconstruction procedure, is too costly from the energy consumption viewpoint, but provides a measure of the loss introduced by LEACH, SKATER or any other similar procedure.

Figure 4 presents the general setup and the alternatives we considered. Figure 5(a) shows a sample of the Gaussian random field with coarse granularity, i.e., s = 20. Figure 5(b) presents the sensors deployed by a repulsive point process (a = −30) and their radii of perception; notice that they overlap, introducing further correlation among the sampled data. Figure 5(c) shows the pointwise reconstruction, i.e., without sensor clustering or data aggregation, using Voronoi cells, while Figure 5(d) shows the result of using Kriging on the same data. The result of applying LEACH followed by Voronoi reconstruction is shown in Figure 5(e), while Figure 5(f) presents the result of using LEACH and Kriging.
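For reference, the Matérn model above has simple closed forms at half-integer smoothness. A small Python helper illustrates them (our sketch; general ν needs a modified Bessel routine such as scipy.special.kv):

```python
import math

def matern(d, rho, nu):
    # Matern correlation C(d) = (d/rho)^nu K_nu(d/rho) / (Gamma(nu) 2^(nu-1)),
    # evaluated in closed form for nu = 1/2 and nu = 3/2.
    if d == 0:
        return 1.0
    x = d / rho
    if nu == 0.5:
        return math.exp(-x)            # exponential covariance
    if nu == 1.5:
        return (1 + x) * math.exp(-x)  # once mean-square differentiable field
    raise NotImplementedError("general nu requires the Bessel function K_nu")
```

The ν = 1/2 case recovers the exponential covariance; larger ν yields smoother fields, which is why ν is called the smoothness parameter.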
If SKATER is used as the clustering/aggregation technique and then Voronoi reconstruction is applied, one obtains the results presented in Figure 5(g), while if Kriging is employed on those data the reconstructed signal is the one shown in Figure 5(h). Notice that SKATER better preserves the overall shape of the original data set; this will be quantified in Section 4.

Figures 5 and 6 illustrate the influence of sensor deployment on the Voronoi and Kriging reconstruction approaches for, respectively, coarse (s = 20) and fine (s = 5) granularity processes, using SKATER. The dots show the six CHs at the time considered. Figures 5(a) and 6(a) show samples from the coarse and fine processes, respectively. The results of applying SKATER and reconstruction by Voronoi to data obtained from sensors deployed regularly (a = −∞), and in repulsive (a = −15) and attractive (a = 30) manners, are presented in Figures 5(b) and 6(b), 5(c) and 6(c), and 5(d) and 6(d), respectively. If, instead of Voronoi, ordinary Kriging is used, one obtains the results shown in Figures 5(e) and 6(e), 5(f) and 6(f), and 5(g) and 6(g). It is noticeable that the coarse process is easier to reconstruct, regardless of the deployment. Regardless of the coarseness of the process and of the reconstruction method, the more repulsive the deployment, the better the reconstruction. Regardless of the coarseness and the deployment, ordinary Kriging provides better reconstruction than Voronoi; because of this, only results produced by Kriging are presented in the remainder of this work. The performance of each procedure is assessed by the absolute value of the relative error between the true signal f and its reconstructed version f̂.
The study was conducted by discretizing the signals on a 100 × 100 regular grid, so the error is computed by

ε(f, f̂) = (1/10^4) Σ[1≤i,j≤100] |(f(i, j) − f̂(i, j)) / f(i, j)|,    (1)

provided f(i, j) ≠ 0, which is guaranteed with probability 1 by the continuous nature of the Gaussian random field. This is a global measure of error that disregards the individual contributions of W′ and its complement to the overall reconstruction quality. The following scenarios are reported: four levels of coarseness, s ∈ {5, 10, 15, 20}; seven deployment situations, a ∈ {−∞, −30, −15, 0, 5, 15, 30}; and three sensor clustering and data aggregation procedures, namely neither clustering nor aggregation (pointwise data delivery), LEACH (geographic clustering), and SKATER (geographic data-aware clustering). These scenarios span a wide variety of situations, and allow the investigation of the influence of each factor on the reconstruction error. One hundred sensors are randomly placed at each replication. LEACH uses a fixed number of CHs, namely six, following the recommendation provided by the authors [cf. 16, p. 666], who found the best results using between 3 and 5 CHs. Our choice is slightly more conservative regarding signal quality preservation, i.e., the more CHs, the less fragmented the signal will be. SKATER also uses six CHs, in order to make a fair comparison between the two techniques.

One hundred independent samples were generated for each of the 4 × 7 × 3 = 84 different situations, and the absolute value of the relative error, defined in Equation (1), was recorded. This number of replications was considered sufficient for testing hypotheses about sample mean differences at the usual (95% and 99%) confidence levels. Simulations were performed using R [31], with the spatstat library for point processes [27] and RandomFields for the generation of Gaussian processes. Graphics were produced with the lattice library for the same platform [32]. A cluster of 40 PCs running Debian was used to perform the simulations.
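Equation (1) is straightforward to compute once both signals are discretized on the grid; a Python sketch (the function name is ours):

```python
def mean_abs_relative_error(f, f_hat):
    # Equation (1): mean absolute relative error over the discretization grid.
    rows, cols = len(f), len(f[0])
    return sum(abs((f[i][j] - f_hat[i][j]) / f[i][j])
               for i in range(rows) for j in range(cols)) / (rows * cols)
```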
Details about hardware, seeds and random number generators can be obtained upon request from the first author. The results are reported in the next section.

Figure 7 shows the main results. It presents the reconstruction error as a function of three factors, namely clustering/aggregation strategy (the rows, from top to bottom: LEACH, SKATER and pointwise), phenomenon granularity (the columns, from left to right: 5, 10, 15 and 20) and deployment process (the colors, see figure caption). Each box shows a non-parametric estimate of the error density. This figure only shows the results of applying ordinary Kriging since, as previously mentioned, Voronoi reconstruction was consistently outperformed by it. Regarding the first factor, i.e., clustering/aggregation strategy, the smallest errors are produced by the pointwise strategy (bottom row). This comes as no surprise, since this strategy performs no data aggregation; it is the ideal situation in which one is able to listen to every single sensor, and it is included merely as a reference. LEACH and SKATER (first and second rows, respectively) introduce higher errors than the former, with SKATER consistently better than LEACH for every granularity and deployment (the densities in the second row are consistently shifted toward smaller errors than the corresponding ones in the first row). Regarding the second factor, namely process granularity, it is clear that the coarser the observed phenomenon, i.e., the further to the right the column, the smaller the error SKATER and LEACH introduce. SKATER is more sensitive to granularity than LEACH, and consistently produces smaller errors for the same level of granularity. While granularity clearly affects the mean and the spread of the reconstruction error introduced by SKATER, it mainly affects the spread of the error produced by LEACH, though it also has some influence on the mean.
Regarding the third factor, i.e., the deployment process, it clearly exerts a strong influence on SKATER: blue densities (which correspond to regular deployment, i.e., a = −∞, denoted a = −1000 in the figure) lie consistently to the left of maroon densities (produced by the most attractive process, i.e., a = 30). Intermediate deployments produce densities that vary between the blue and the maroon ones. While this effect is clear for SKATER, it is not for LEACH: the larger error introduced by the latter overwhelms this more subtle dependence, masking it. All the aforementioned dependencies of the reconstruction error on granularity and deployment are amplified when no clustering/aggregation is performed but, since this situation was only presented as a theoretical reference, it is not commented upon further.

Tables 1 and 2 present the quantitative results, i.e., the mean reconstruction error observed using ordinary Kriging and Voronoi reconstruction, respectively. Table 1 presents a quantitative comparison of the main situations analyzed here. Instead of showing the values computed with Equation (1), it shows the relative reconstruction error with respect to the best situation, i.e., the mean error over the 100 replications divided by the smallest mean error. The best situation was ε(f, f̂) = 0.013, produced by SKATER under regular deployment (a = −∞) and the coarse Gaussian process (s = 20), using ordinary Kriging; this entry is shown in boldface for visual reference. Each cell shows the relative error as a function of the two clustering algorithms (SKATER and LEACH), the seven deployments (a ∈ {−∞, −30, −15, 0, 5, 15, 30}) and the four granularities (s ∈ {5, 10, 15, 20}), using ordinary Kriging. One can readily see that SKATER is consistently better than LEACH, and the smaller the error, the larger the difference (ranging from 72% in the best situation to 10% in the worst one).
The error, for each clustering procedure, increases with attractivity; the most sensitive situation is SKATER on the coarse process (s = 20), where the error increases 50% from the regular deployment (a = −∞) to the most attractive one (a = 30). The error decreases as the process gets coarser for both clustering procedures; the most sensitive situation is again SKATER on the coarse process, where the error doubles from the coarsest (s = 20) to the finest (s = 5) granularity.

Table 2 presents a quantitative comparison of the results obtained using Voronoi reconstruction. It shows the relative reconstruction error with respect to the best situation using ordinary Kriging reconstruction, i.e., ε(f, f̂) = 0.013, which corresponds to SKATER, a = −∞ and s = 20. Each cell shows the relative error as a function of the two clustering algorithms (SKATER and LEACH), the seven deployments (a ∈ {−∞, −30, −15, 0, 5, 15, 30}) and the four granularities (s ∈ {5, 10, 15, 20}), using Voronoi reconstruction. The first conclusion is that reconstruction by ordinary Kriging consistently produces smaller errors than Voronoi reconstruction: the values in Table 1 are always smaller than the corresponding ones in Table 2. The rest of the behavior is quite similar between the tables: the reconstruction error increases with attractivity, decreases with granularity, and, with a single exception, is consistently smaller using SKATER than using LEACH. Tables 1 and 2 present data rounded to two digits, which coincide in a few cases, but all the mean error values were tested significantly different at the 95% confidence level, and only then turned into relative errors by dividing them by the best situation.