ELET 8214 - Circuit Design & Implementation. Prerequisites: ELET 2103 with D or better, and either MATH 4114 or MATH 5014 with D or better. Calculus-based circuit theory includes representation of ideal and non-ideal characteristics of circuit elements. Circuit analysis using fundamental circuit laws, network theorems, and standard engineering complex-variable notation. Transistor circuits are modeled using realistic parameters, including junction capacitances and internal noise generation. Circuit models are applied to amplifier designs for low noise, high frequency response, etc. Laboratory implementations are compared with mathematical models, computer simulation, and general purpose interface bus testing, and discrepancies are resolved.
{"url":"http://www.alfredstate.edu/print/academics/courses/elet-8214-circuit-design-implementation","timestamp":"2014-04-20T06:03:08Z","content_type":null,"content_length":"9844","record_id":"<urn:uuid:3bc82dd6-5064-447c-8671-e6f8eac096f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
T. Padmanabhan Invited Review for Physics Reports. For a PDF version of the article, click here. For a Postscript version of the article, click here. COSMOLOGICAL CONSTANT - THE WEIGHT OF THE VACUUM T. Padmanabhan IUCAA, Pune University Campus, Ganeshkhind, Pune 411 007, India. email: nabhan@iucaa.ernet.in Abstract. Recent cosmological observations suggest the existence of a positive cosmological constant $\Lambda$ with the magnitude $\Lambda(G\hbar/c^3) \approx 10^{-123}$. This review discusses several aspects of the cosmological constant both from the cosmological (sections 1 - 6) and field theoretical (sections 7 - 11) perspectives. The first section introduces the key issues related to the cosmological constant and provides a brief historical overview. This is followed by a summary of the kinematics and dynamics of the standard Friedmann model of the universe, paying special attention to features involving the cosmological constant. Section 3 reviews the observational evidence for the cosmological constant, especially the supernova results, constraints from the age of the universe, and a few others. Theoretical models (quintessence, tachyonic scalar field, ...) with an evolving cosmological `constant' are described from different perspectives in the next section. Constraints on dark energy from structure formation and from CMBR anisotropies are discussed in the next two sections. The latter part of the review (sections 7 - 11) concentrates on more conceptual and fundamental aspects of the cosmological constant. Section 7 provides some alternative interpretations of the cosmological constant which could have a bearing on a possible solution to the problem. Several relaxation mechanisms have been suggested in the literature to reduce the cosmological constant to the currently observed value, and some of these attempts are described in Section 8. The next section gives a brief description of the geometrical structure of the de Sitter spacetime, and the thermodynamics of the de Sitter universe is taken up in section 10.
The last section deals with the role of string theory in the cosmological constant problem.
Key words: cosmological constant, dark energy, cosmology, cmbr, quintessence, de Sitter spacetime, horizon, tachyon, string theory
PACS: 98.80.-k, 98.80.Es, 98.80.Cq, 98.80.Qc, 04.60.-m
Table of Contents
EVIDENCE FOR A NON-ZERO COSMOLOGICAL CONSTANT
MODELS WITH EVOLVING COSMOLOGICAL "CONSTANT"
STRUCTURE FORMATION IN THE UNIVERSE
CMBR ANISOTROPIES
{"url":"http://ned.ipac.caltech.edu/level5/Sept02/Padmanabhan/Pad_contents.html","timestamp":"2014-04-16T10:11:32Z","content_type":null,"content_length":"8410","record_id":"<urn:uuid:1c395d58-4e35-4ce9-b6f5-85f6b3e030ca>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Circuit Diagram (Ohm's Law Application)
Well, you can combine the resistances on the right side to 142.24 Ohms, then add that in series with the 500 Ohms for a total of 642.24 Ohms of resistance in the circuit. Using that, we can find that the current flowing through the circuit as a whole is 9.34e-3 Amps. Then we find the voltage drop across the first resistor: (9.34e-3)(500) = 4.67 V. Using the remaining 1.33 V across the 250 Ohm resistor, I = V/R = 5.32 mA.
The analogy that seemed to help me the most was from an online friend. And it's funny, I'll probably try to put all my faith into Vikings from now on:
(5:02:29 PM): well, here's my wicked viking analogy.
(5:03:06 PM): think of the electrons as a horde of screaming vikings descending on a village of poor, unfortunate, yet sexy, irish womenfolk with attractive brown eyes.
(5:03:27 PM): now, these women, being irish, are cunning
(5:03:41 PM): so they cleverly erect a series of barriers to protect them from the pillaging hordes.
(5:04:06 PM): now, the first time the vikings attacked, they put up three in a row
(5:04:09 PM): each one higher than the last
(5:04:16 PM): but the charging vikings were having none of it
(5:04:22 PM): and they burst on through each wall.
(5:04:37 PM): but they got more and more tired
(5:04:57 PM): so the first wall was pretty short and lame, it was just some bales of hay
(5:05:17 PM): they got through it and only lost 1.5 V of momentum
(5:05:24 PM): but getting rid of it wasn't much work.
(5:05:38 PM): the second wall was a lot better, it took them twice as much energy to get through it
(5:05:44 PM): so they lost 3 V of momentum
(5:05:57 PM): but they had to move a bunch of boulders out of their way - it was hard work! lots of work done here.
(5:06:11 PM): the last wall was made of SOLID AWESOME
(5:06:33 PM): but the vikings were undaunted and broke on through anyway; those irish girls were extremely alluring
(5:06:48 PM): but it slowed them down the rest of the way
(5:06:54 PM): and they had to do a crap ton of work to get rid of the wall
(5:07:09 PM): so they were too zoned out for rape & pillage anyway by the time they got there (see, clever irish chicks)
(5:07:18 PM): so they just limped back to their longboats.
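The series-parallel arithmetic at the top of the post can be checked in a few lines. This is a sketch under stated assumptions: a 6 V source (inferred from the numbers given, since 6 V / 642.24 Ω ≈ 9.34 mA; the source voltage is not stated explicitly), a 500 Ω series resistor, and a parallel section totaling 142.24 Ω that includes a 250 Ω branch.

```python
# Series-parallel circuit check (assumed values; 6 V source inferred from the post).
V_source = 6.0        # volts (assumption)
R_series = 500.0      # ohms
R_parallel = 142.24   # ohms, combined resistance of the right-hand section

R_total = R_series + R_parallel      # series combination
I_total = V_source / R_total         # current through the whole circuit
V_series = I_total * R_series        # drop across the 500-ohm resistor
V_parallel = V_source - V_series     # remaining voltage across the parallel section
I_branch = V_parallel / 250.0        # current through the 250-ohm branch

print(round(I_total, 5), round(V_series, 2), round(I_branch * 1000, 2))
```

This reproduces the 9.34 mA total current and the 4.67 V drop, and shows the 250 Ω branch carries about 5.32 mA, not 532 mA.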
{"url":"http://www.physicsforums.com/showthread.php?t=330348","timestamp":"2014-04-16T04:20:03Z","content_type":null,"content_length":"61211","record_id":"<urn:uuid:da718b6f-c50d-4a0c-bd32-14f6c01bac3c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Cylinder Volume help 1. The problem statement, all variables and given/known data: A right circular cylinder has a radius of 2.36 cm and a length of 44.0 cm. Its volume is ____ cm3? 2. Relevant equations: V = πr²h (I am not sure if this equation is correct because they say to use height and I am given length) 3. The attempt at a solution: Volume = 770 cm3
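Since the cylinder's "length" is measured along its axis, it plays the role of the height h in V = πr²h; a quick numerical check:

```python
import math

r = 2.36   # radius in cm
h = 44.0   # the cylinder's "length" serves as the height h
V = math.pi * r**2 * h
print(round(V))  # 770 (cm^3)
```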
{"url":"http://www.physicsforums.com/showthread.php?t=320283","timestamp":"2014-04-16T07:36:01Z","content_type":null,"content_length":"26167","record_id":"<urn:uuid:58b27d68-7ec5-4b9f-b21c-ad87a5f44c1c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
THIS is where I am confused about triangle vector calculation
April 29th 2010, 07:51 AM
Okay, so I have a thread posted about my confusion regarding calculating the area of a triangle. Long story short, I was confused about what to do with the cross product to get the area. I'm going to take this example out of my book because it doesn't make any sense to me. We have three points: P (2,2,0), Q (-1,0,2), R (0,4,3). Then PQ = (-3,-2,2), PR = (-2,2,3), and PQ x PR = (-10,5,-10). I can do this myself, and this is what the book gives as answers as well. What really gets me going is that I know, both from my teacher in class and from this forum, that I simply put the cross product into the formula (1/2)|sqrt(a^2 + b^2 + c^2)| (the magnitude), which is in this case (1/2)|sqrt(100 + 25 + 100)|. I've always been solving this type of question like this; however, the book says the answer is 15/2. How did they get this? How come they did not find the magnitude and then divide it by half?
April 29th 2010, 09:22 AM [quoting the post above]
$\sqrt{100+25+100} = \sqrt{225} = 15$...
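The full computation from the thread can be sketched as follows; it reproduces the cross product PQ × PR = (−10, 5, −10) and the area 15/2.

```python
import math

P, Q, R = (2, 2, 0), (-1, 0, 2), (0, 4, 3)
PQ = tuple(q - p for p, q in zip(P, Q))   # (-3, -2, 2)
PR = tuple(r - p for p, r in zip(P, R))   # (-2, 2, 3)

# cross product PQ x PR
cross = (PQ[1]*PR[2] - PQ[2]*PR[1],
         PQ[2]*PR[0] - PQ[0]*PR[2],
         PQ[0]*PR[1] - PQ[1]*PR[0])

# triangle area is half the magnitude of the cross product
area = 0.5 * math.sqrt(sum(c * c for c in cross))
print(cross, area)  # (-10, 5, -10) 7.5
```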
{"url":"http://mathhelpforum.com/advanced-algebra/142129-where-i-am-confused-about-triangle-vector-calculation-print.html","timestamp":"2014-04-23T18:02:44Z","content_type":null,"content_length":"6185","record_id":"<urn:uuid:9071c2a7-5f9f-4f83-9286-0089c3d213c6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics
The role of critical exponents in blowup theorems. (English) Zbl 0706.35008
The survey presents some basic results on critical exponents for nonlinear evolution problems. One typical example is the following nonlinear problem for the heat equation
$$(F)\qquad u_t = \Delta u + u^p,\quad x\in\mathbb{R}^N,\ t>0,\qquad u(0,x)=u_0(x),\quad x\in\mathbb{R}^N,$$
where $\Delta$ denotes the $N$-dimensional Laplace operator. A result due to Fujita guarantees that for the critical exponent $p_c(N)=1+2/N$ the following two statements hold:
(A) If $1<p<p_c(N)$, then the only nonnegative global (in time) solution of (F) is $u\equiv 0$.
(B) If $p>p_c(N)$, then there exists a global positive solution of (F) if the initial data are sufficiently small.
The survey is divided into four sections. The first section deals with some extensions of the problem (F). Part 1.1 of this section is devoted to the cases of other geometries, various linear dissipative terms, or other reaction terms.
More precisely, if $D\subset\mathbb{R}^N$ is any bounded or unbounded domain, then in place of (F) the author considers the initial boundary value problem
$$(D)\qquad u_t=\Delta u+u^p,\quad (x,t)\in D\times(0,T),\qquad u(0,x)=u_0(x),\quad x\in D,\qquad u(t,x)=0,\quad (x,t)\in\partial D\times(0,T),$$
or the following generalization of (D):
$$(GD)\qquad u_t=\sum_{i,j=1}^{N}\bigl(a_{ij}(t,x)\,u_{x_i}\bigr)_{x_j}+\sum_{i=1}^{N}b_i(t,x)\,u_{x_i}+u^p\quad (p>1),\qquad u(0,x)=u_0(x),\ x\in D,\qquad u(t,x)=0,\ (x,t)\in\partial D\times(0,T),$$
where the coefficients of the linear operator on the right-hand side are uniformly bounded in $D\times(0,\infty)$. A result due to Meier asserts that a critical exponent $p_c(GD)\ge 1$ exists. An explicit representation of $p_c(GD)$ or $p_c(D)$ is known for special cases of $D$. For example, if $D$ is the "orthant" $D_k=\{x\in\mathbb{R}^N;\ x_1>0,\dots,x_k>0\}$, then $p_c(D_k)=1+2/(k+N)$, according to a result due to Meier. Another case studied in the first part of section 1 is that of a cone $D$ with vertex at the origin; an explicit representation of $p_c(D)$ was found by Levine, Bandle and Meier. Another problem close to (F) is the Dirichlet problem for the nonlinear heat equation in which $u^p$ is replaced by $|u|^{p-1}u$; in this case one is interested in real-valued solutions.
This part also contains a summary of the results for $u_t=\Delta u+|x|^{\sigma}u^p$ or $u_t=\Delta u+t^k|x|^{\sigma}u^p$ and the dependence of the critical exponent on $k$, $\sigma$, $p$. For the general case of problem (GD), where $D$ has bounded complement, upper and lower bounds for $p_c(GD)$ are found, following results of Bandle and Levine. Part 1.2 of section 1 summarizes the results on the problem $u_t=A(u)+u^p$, where $A(u)$ is in general a nonlinear dissipative term. Various special choices of $A(u)$ are studied. For example, a typical choice of $A$ is
$$A(u)=\operatorname{div}\left\{\frac{\nabla_x u}{\bigl(1+|\nabla_x u|^2\bigr)^{1/2}}\right\},$$
representing the mean curvature operator. Another choice of $A$ was considered by Galaktionov: $A(u)=\sum_{i=1}^{N}\partial_{x_i}\bigl(|\nabla u|^{\sigma}\,\partial_{x_i}u\bigr)$. Part 1.3 of section 1 contains a summary of results for bounded domains $D$, while part 1.4 is devoted to systems of equations, for example $u_t=\Delta u+v^p$, $v_t=\Delta v+u^p$. In section II the author considers the nonlinear Schrödinger equation
$$(NLS)\qquad iu_t+\Delta u+|u|^{p-1}u=0,\quad x\in\mathbb{R}^N,\ t>0,\qquad u(0,x)=u_0(x).$$
For this problem the critical exponent is $p_{nls}(N)=1+4/N$. In section III the nonlinear wave equation $u_{tt}=\Delta u+|u|^p$, $u(0,x)=u_0(x)$, $u_t(0,x)=u_1(x)$, as well as the critical exponent for this equation, are examined. The critical exponent is the larger root of the quadratic equation $(N-1)p^2-(N+1)p-2=0$. Finally, in section IV some concluding remarks are discussed.
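As a numerical illustration (my addition, not part of the review): the critical exponent for the nonlinear wave equation is the larger root of $(N-1)p^2-(N+1)p-2=0$, and for $N=3$ this evaluates to $1+\sqrt{2}$.

```python
import math

def critical_exponent_wave(N):
    """Larger root of (N-1)p^2 - (N+1)p - 2 = 0, for N >= 2."""
    a, b, c = N - 1, -(N + 1), -2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

p3 = critical_exponent_wave(3)
print(p3)  # 1 + sqrt(2) ~ 2.4142
```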
35B30 Dependence of solutions of PDE on initial and boundary data, parameters
35B40 Asymptotic behavior of solutions of PDE
35K55 Nonlinear parabolic equations
35Q55 NLS-like (nonlinear Schrödinger) equations
35K65 Parabolic equations of degenerate type
35-02 Research monographs (partial differential equations)
35L70 Nonlinear second-order hyperbolic equations
{"url":"http://zbmath.org/?q=an:0706.35008","timestamp":"2014-04-16T07:21:50Z","content_type":null,"content_length":"35638","record_id":"<urn:uuid:e95f7a82-1cf7-4d29-b4cb-fdf5465c73a5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
G cannot be the union of conjugates
July 15th 2010, 03:51 AM #1
If $G$ is a finite group and $H$ is a subgroup of $G$, then prove that $G \neq \bigcup\limits_{a \in G} aHa^{-1}$. Of course, we assume $H$ is a PROPER subgroup.
Let $G$ act by conjugation on the set $X$ of all its proper subgroups; then we get that $s:=|Orb(H)|=[G:N_G(H)]\leq [G:H]=r$, say, and since $|aHa^{-1}|=|H|\,\,\forall\,a\in G$, we get that $|\bigcup aHa^{-1}|\leq 1+s(|H|-1)\leq 1+r(|H|-1)=1+r|H|-r=|G|-(r-1)<|G|$.
(Question: why $|\bigcup aHa^{-1}|\leq 1+s(|H|-1)$?)
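On the question asked at the end: every conjugate $aHa^{-1}$ contains the identity, and there are only $s$ distinct conjugates, each contributing at most $|H|-1$ non-identity elements, hence the bound $1+s(|H|-1)$. As an illustration (my example, not from the thread), a brute-force check in $S_3$ with $H$ a subgroup of order 2:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))   # S_3, order 6
H = [(0, 1, 2), (1, 0, 2)]         # proper subgroup {e, (0 1)}, order 2

# union of all conjugates a H a^{-1}
union = {compose(compose(a, h), inverse(a)) for a in G for h in H}
print(len(union), len(G))  # 4 6 -- the union misses the two 3-cycles
```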
{"url":"http://mathhelpforum.com/advanced-algebra/151004-g-cannot-union-conjugates.html","timestamp":"2014-04-17T02:14:03Z","content_type":null,"content_length":"35059","record_id":"<urn:uuid:cb8663ef-a35c-49ed-ae80-0a2a4a67c2f8>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Strategy and Tactics: General
Urban Rivals | Free Online Manga Trading Card Game | TCG | MMO
Saturday 27/10/07
I played a guy with Lamar, Flo, Frankie Hi, Mickey T, but that only appeared when I clicked the card. The next thing I see is that his deck is 2 and a DJ Korr Cr. 1 message
Here's my latest deck after experimenting with a handful of other clans: Bryan, Gertrud, Burger, Jane Ramba, Tank, Wardog, Dean, Uranus. I chose La Junta for several reasons, and there is a reason why I decided against using a level 5. La Junta has several good cards with a lot of damage and good abilities, which makes it excellent for a single-clan deck. My reason for forgoing a level 5 is the fact that it is just one card and, in mode, I have learned that it does more harm to me than good. Most of the time I never get it in my hand, and during the games I do play it, it turns out to be useless half the time against certain cards my opponent has that seem specifically designed to take out my level 5 card. It also consumes a lot of stars that force me to use more level 2s/3s than I feel comfortable with. In any card game, it's also foolish to rely on a single card in order to win. So I decided to stop playing against the odds and play with them by using 3 level 4s, 3 level 3s, and 2 level 2s. With this deck, there is no possible way I can get hands as bad as I did with my Allstars deck, Pussy Cats deck, and Ulu Watu decks, all of which I sold one at a time for the funds needed for a new and improved deck. This deck has plenty of power, a lot of damage, and several abilities that compensate for having only one clan instead of two. 4 messages
Here is my deck: (U) *4, (U) *4, (U) *2, (C) *2, (U) *5, (U) *3, (R) *3, (C) *2. I am planning to buy Lamar (R) in exchange for (U), but the problem is that one star slot will be left... help me please. Here is my collection: (R) *5, Frankis Hi *3, (C) *3, (C) *3, (C) *2, (U) *3, (U) *5, (C) *3, (C) *3. Thanks a lot!!
Friday 26/10/07
1 message
The Support "attack plus 3" bonus of the clan can only be triggered when you have two or more of them in hand; the bonus increases for each card of the same clan you have in your hand: 2 cards in hand would mean each of them gets attack plus 6, 3 cards in hand would mean each of them gets attack plus 9, and 4 cards in hand would mean each of them gets attack plus 12. As for the Support ability: the ability is multiplied by two if two members of the clan are present, multiplied by three if three cards of the same clan are present, and multiplied by four if all your cards are of the same clan. Please correct me if I am wrong. GGs
7 messages
When I play first in or tourney, how should I start? E.g. bluff with a damage reducer, 6 pillz high damage, all pillz on, bluff with... I really have trouble playing the first move. If they get it, then I can counter well, but if I'm first I feel unsafe.
6 messages
What deck should I form in which I could use? I currently have 12k and growing. Please, for and type 1 use. Please reply. Long live T_O_F_U
2 messages
card: does it do x HP added for x hits, for all characters in hand or only her?
Thursday 25/10/07
10 messages - last answer from 0- JP, Thursday 25/10/2007, 07:52.
I think the clan is very good, but is it better to have a pure deck or bring in some support from another clan?
4 messages - last answer from 0- JP, Thursday 25/10/2007, 07:34.
Please, can anyone help me with my deck? Chad Bread*3. If you have suggestions, please give me tips! Thanks!
9 messages
I'm curious why everyone likes Kenny so much. I'm curious about what his ability is. I'm guessing it's poison, but I'm not quite sure. Can someone clear this up for me? Thank you, it's greatly appreciated.
9 messages - last answer from 0- JP, Thursday 25/10/2007, 07:17.
I have 5k clintz to spend. My deck is La Junta: Wardog, Dean, Bruce, Jane Ramba (gonna buy Rossa instead), Don, Ottavia, Mona. Give me your changes; changing half of the deck is fine.
I have 5k clintz BEFORE buying, but I can sell, so it's all right. I also have if I should use him. 4 messages
Hello there, I am not really sure if this is considered a spoiler and thus frowned upon / forbidden by the rules. If it's all correct and possible to explain, can someone give information on the personal abilities of all the lvl 5 leaders? You know, they're somewhat expensive and I'd like to know them before making a blind guess in the market. If it's considered rude or spoiling, I'd go well with a private message. Thank you in advance for the attention.
Wednesday 24/10/07
10 messages
I heard from a friend that it's good to use 4 pillz for each character no matter what happens. I'm wondering, is this a good strategy? Because I suck at this game.
The Urban Rivals team is made up of lovers of all kinds of Collectible Card Games and Trading Card Games, like: Magic the Gathering, Dominion, Vampire, Yu-Gi-Oh!, Pokemon, Wakfu TCG, Assassin's Creed Recollection, Shadow Era, Kard Kombat and Might & Magic Duel of Champions.
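If the Support mechanic works as described in the Friday 26/10 reply above, the arithmetic is just "bonus = 3 × number of clan cards in hand". A toy sketch (the function name is mine; the +3 value is taken from the post):

```python
def support_bonus(per_card, clan_cards_in_hand):
    # Support ability: the per-card bonus scales with the number
    # of same-clan cards currently in hand (including the card itself).
    return per_card * clan_cards_in_hand

for n in (2, 3, 4):
    print(n, support_bonus(3, n))  # 2 -> 6, 3 -> 9, 4 -> 12
```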
{"url":"http://www.urban-rivals.com/en/community/forum/?mode=viewtheme&page=630&id_theme=4","timestamp":"2014-04-18T05:35:13Z","content_type":null,"content_length":"69330","record_id":"<urn:uuid:1642b287-ed86-4329-a19c-6d1f8b24e38c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 531.05042
Autor: Chung, F.R.K.; Erdös, Paul; Spencer, Joel
Title: On the decomposition of graphs into complete bipartite subgraphs. (In English)
Source: Studies in pure mathematics, Mem. of P. Turán, 95-101 (1983).
Review: [For the entire collection see Zbl 512.00007.] A B-covering (respectively B-decomposition) of a graph G is a collection of complete bipartite graphs $G_i$ such that any edge of G is in at least (respectively exactly) one $G_i$ ($i = 1,2,\dots,t$). Let $\beta(G;B)$ (respectively $\alpha(G;B)$) denote the minimum value of $\sum_{i=1}^{t}|V(G_i)|$ over all B-coverings (respectively B-decompositions) of G. Let $\beta(n;B)$ (respectively $\alpha(n;B)$) denote the maximum value of $\beta(G;B)$ (respectively $\alpha(G;B)$) as G ranges over all graphs on n vertices. "In this paper we show that, for any positive $\epsilon$, we have $(1-\epsilon)\frac{n^2}{2e\log n} < \beta(n;B) \leq \alpha(n;B) < (1+\epsilon)\frac{n^2}{2\log n}$, where e is the base of the natural logarithms, provided n is sufficiently large." A number of related questions and conjectures are discussed. For example, if $\mathcal{G}_n$ denotes the set of the $2^{\binom{n}{2}}$ labelled graphs on n vertices, it is conjectured that $\lim_{n\to\infty}\sum_{G\in\mathcal{G}_n}\alpha(G;B)\big/\bigl(2^{\binom{n}{2}}\,n^2/\log n\bigr)$ exists.
Reviewer: W.G.Brown
Classif.: * 05C35 Extremal problems (graph theory); 05C99 Graph theory; 60C05 Combinatorial probability
Keywords: decomposition; covering; bipartite graph
Citations: Zbl 512.00007
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
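As a concrete illustration of these definitions (my example, not from the review): the complete graph $K_n$ has a B-decomposition into $n-1$ stars $K_{1,k}$ (stars are complete bipartite), where the star centered at vertex $i$ covers the edges from $i$ to all higher-numbered vertices; each edge lies in exactly one star, and $\sum_i|V(G_i)| = (n-1)+n(n-1)/2$.

```python
n = 6
edges = {(i, j) for i in range(n) for j in range(i + 1, n)}

# Star decomposition: G_i is the complete bipartite graph K_{1, n-1-i}
# with center i and leaves i+1, ..., n-1.
stars = [(i, list(range(i + 1, n))) for i in range(n - 1)]

covered = [(c, leaf) for c, leaves in stars for leaf in leaves]
assert len(covered) == len(set(covered)) == len(edges)  # each edge exactly once

total_vertices = sum(1 + len(leaves) for _, leaves in stars)
print(total_vertices)  # (n-1) + n(n-1)/2 = 5 + 15 = 20
```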
{"url":"http://www.emis.de/classics/Erdos/cit/53105042.htm","timestamp":"2014-04-20T11:06:45Z","content_type":null,"content_length":"5079","record_id":"<urn:uuid:590c8fb3-19f3-4b3b-83bb-6ed1496f251a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Issue With MATLAB Homework Problem
February 24th 2009, 03:41 AM #1
Hey guys, long story short, my DiffEq class has to use MATLAB, but they're not really teaching it to us. I'm trying my best to do some problems given to us, but it's difficult to do this stuff if you don't have much experience. So here's the problem: apparently we're supposed to use 'inline' and 'ode45' to solve this, and then plot it. So I started with this:
>> f=inline('[-.1*x(1)*x(2);-1*x(1)]','x','t')
f = Inline function: f(x,t) = [-.1*x(1)*x(2);-1*x(1)]
...and then I tried this and got a bunch of errors:
[t y]=ode45(f,[0,15],[10,15])
Not quite sure what the problem is here. If anyone could help me out a little bit, I'd greatly appreciate it.
[Reply, quoting the post above:] These are the Lanchester equations appropriate to guerrilla warfare, where the rate of casualties inflicted by the regular forces is proportional to their number and the number of targets, and the rate of casualties inflicted by the irregular forces is just proportional to their number.
The order of the arguments in the derivative is f(t,x); you had them the other way around. Also, the initial value vector in ode45 is a column vector; you had a row vector:
>> f=inline('[-.1*x(1)*x(2);-1*x(1)]','t','x')
>> [t, y]=ode45(f,[0,15],[10;15])
Ah, thanks so much for the help!
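For readers without MATLAB, the same system can be checked with a small hand-rolled RK4 integrator (my sketch; the step count is arbitrary). The system x′ = −0.1xy, y′ = −x conserves the quantity x − 0.05y², which with x(0) = 10, y(0) = 15 forces x → 0 and y → 5.

```python
def f(t, state):
    # Lanchester guerrilla-warfare equations from the post:
    # x' = -0.1*x*y, y' = -x
    x, y = state
    return (-0.1 * x * y, -1.0 * x)

def rk4(f, t0, t1, state, steps):
    # classic fourth-order Runge-Kutta with fixed step size
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(t, state)
        k2 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k1)))
        k3 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k2)))
        k4 = f(t + h, tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h/6 * (a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        t += h
    return state

x, y = rk4(f, 0.0, 15.0, (10.0, 15.0), 3000)
print(round(x, 3), round(y, 3))  # x nearly 0, y near 5
```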
{"url":"http://mathhelpforum.com/math-software/75486-issue-matlab-homework-problem.html","timestamp":"2014-04-21T11:21:29Z","content_type":null,"content_length":"37236","record_id":"<urn:uuid:8cde3115-c54e-448c-ada5-e655a345ae9a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Regular Surface + Isometry
December 21st 2010, 04:02 PM #1
Let $S_1$ and $S_2$ be regular surfaces in $\mathbb{R}^3$. Let $\phi: S_1 \rightarrow S_2$ be an isometry, that is to say $I_p^{S_1} (x,y) = I_{\phi(p)}^{S_2} (d_p \phi x, d_p \phi y)$ for all $x,y \in T_p S_1$, the tangent plane at $p\in S_1$, where $I$ is the first fundamental form on the appropriate tangent plane and $d_p$ is the differential map. Let $(U_1,F_1,V_1)$ and $(U_2,F_2,V_2)$ be local parametrisations of $S_1$ at $p$ and $S_2$ at $\phi(p)$ respectively. Let $u = F_1^{-1}(p)$. I am trying to show that the Gaussian curvature is the same at $p$ and $\phi(p)$. I guess that this means I am trying to show that the first fundamental forms at $p$ and $\phi(p)$ are the same when I put the appropriate basis vectors in, i.e. $I_p^{S_1} (X_i, X_j)= I_{\phi(p)}^{S_2} (X_i ', X_j ')$, where $X_i = DF_1(u) e_i$ and $X_i' = DF_2 (F_2^{-1} \circ \phi \circ F_1 (u)) e_i$. I figured showing this would be sufficient because then the Gaussian curvature is completely determined by the first fundamental form (the notation I've used means you separate the two columns of the Jacobian and use them as basis vectors). So if I try to show this, then I end up with $I_p^{S_1} (X_i, X_j) = I_{\phi(p)}^{S_2} (D(\phi \circ F_1)(u)e_i, D(\phi \circ F_1)(u)e_j)$ and then I am totally stuck. Can anyone offer any advice? I understand this question is a bit long-winded.
Since isometries preserve the first fundamental form I, and I determines the Gaussian curvature, this is already done. What are you trying to prove? Re-writing an abstract expression in specific coordinates does not make much sense.
Why are the first fundamental forms the same? This is what I am trying to show, i.e. $X_i \cdot X_j = X_i ' \cdot X_j '$. Or am I fundamentally misunderstanding something here? The only definition of an isometry I have is $I_p^{S_1} (x,y) = I_{\phi(p)}^{S_2} (d_p \phi x, d_p \phi y)$.
Why does this mean the first fundamental forms have the same matrix?
Do you understand the expression $I_p(x,y) = I_{f(p)}(df(x), df(y))$? (I'm using $f$ instead of $\phi$.) This expression says: the first fundamental form $I$ of $S_1$ at $p$ corresponds to the first fundamental form $I$ of $S_2$ at $f(p)$. "Corresponds to" here means: given any two vectors $x$ and $y$ of $S_1$ at $p$, their inner product $I_p(x,y)$, which you can also write as $x \cdot y$, equals the inner product of their images under the differential map $df$. This is exactly your $X_i \cdot X_j = X_i' \cdot X_j'$. Using any choice of coordinates won't change that. You need to understand that coordinates are only used to simplify computation; the important thing is the underlying geometry. For your question, consider two vector spaces $V$ and $W$ and a linear isomorphism $f: V \to W$, where each space carries an extra structure, an inner product. Suppose that $f$ is further an isometry, that is, $f$ preserves inner products: $\langle f(x), f(y)\rangle = \langle x,y\rangle$. Then the two inner products have the same matrix in any choice of bases $v_1,\dots,v_n$ in $V$ and $f(v_1),\dots,f(v_n)$ in $W$.
So if $\phi:S_1 \rightarrow S_2$ is an isometry, is $d_p \phi$ necessarily an isomorphism?
YES it is! Every tangent vector $\alpha'(0) \in T_{\phi(p)} S_2$ looks like $d_p \phi\,(\phi^{-1} \circ \alpha)'(0)$, so we have surjectivity $\iff$ injectivity $\iff$ bijectivity, since the linear map is on a finite-dimensional vector space. I guess there is a more general way of doing this.
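A compact way to write the coordinate argument the replies are circling (my summary): with $X_i = DF_1(u)e_i$ and $X_i' := d_p\phi\, X_i$, the isometry condition gives equality of the metric coefficients in one line:

```latex
g_{ij}(p) \;=\; I_p^{S_1}(X_i, X_j)
        \;=\; I_{\phi(p)}^{S_2}\bigl(d_p\phi\,X_i,\; d_p\phi\,X_j\bigr)
        \;=\; I_{\phi(p)}^{S_2}(X_i', X_j')
        \;=\; g'_{ij}(\phi(p)).
```

Since $d_p\phi$ is an isomorphism, the $X_i'$ form a basis of $T_{\phi(p)}S_2$, and the same identity holds at every point of a neighbourhood, so the $g_{ij}$ agree as functions in the corresponding coordinates. The Theorema Egregium then gives $K(p)=K(\phi(p))$, since the Gaussian curvature is determined by the $g_{ij}$ and their derivatives.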
{"url":"http://mathhelpforum.com/differential-geometry/166734-regular-surface-isometry.html","timestamp":"2014-04-20T20:27:01Z","content_type":null,"content_length":"49147","record_id":"<urn:uuid:211ef783-419d-4568-9de1-09c475538d8c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2006
[00159] [Date Index] [Thread Index] [Author Index]
Re: Reduction of Radicals
• To: mathgroup at smc.vnet.net
• Subject: [mg71984] Re: [mg71902] Reduction of Radicals
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Thu, 7 Dec 2006 06:25:28 -0500 (EST)
• References: <200612031126.GAA08075@smc.vnet.net>
dimitris wrote:
> Based on this reference
> Cardan Polynomials and the Reduction of Radicals (by T. Osler)
> (see also references therein)
> (you can download the paper here:
> http://www.jstor.org/view/0025570x/di021218/02p0059q/0?currentResult=0025570x%2bdi021218%2b02p0059q%2b0%2c03&searchUrl=http%3A%2F%2Fwww.jstor.org%2Fsearch%2FBasicResults%3Fhp%3D25%26so%3DNewestFirst%26si%3D1%26Query%3DOsler
> )
> the following expression can be reduced to 1
> z = (2 + Sqrt[5])^(1/3) + (2 - Sqrt[5])^(1/3)
> Mathematica gives
> N[%]
> 1.9270509831248424 + 0.535233134659635*I
> This is because by default it returns a complex number for the cube
> root of a negative number
> List @@ z
> N[%]
> {(2 - Sqrt[5])^(1/3), (2 + Sqrt[5])^(1/3)}
> {0.30901699437494756 + 0.535233134659635*I, 1.618033988749895}
> However defining
> mycuberoot[x_] := Block[{w}, w = w /. Solve[w^3 == 1][[3]]; If[Re[x] < 0, w*x^(1/3), x^(1/3)]]
> Then
> {2 - Sqrt[5], 2 + Sqrt[5]}
> mycuberoot /@ %
> FullSimplify[%]
> Together[Plus @@ %]
> {2 - Sqrt[5], 2 + Sqrt[5]}
> {(-1)^(2/3)*(2 - Sqrt[5])^(1/3), (2 + Sqrt[5])^(1/3)}
> {(1/2)*(1 - Sqrt[5]), (1/2)*(1 + Sqrt[5])}
> 1
> Is there a particular reason why by default Mathematica returns a
> complex number for the cube root of a negative number, or is it a matter of choice?
> Following the same procedure I prove that
> (10 + 6*Sqrt[3])^(1/3) + (10 - 6*Sqrt[3])^(1/3)
> is equal to 2.
> Indeed
> {10 + 6*Sqrt[3], 10 - 6*Sqrt[3]}
> mycuberoot /@ %
> FullSimplify[%]
> Together[Plus @@ %]
> {10 + 6*Sqrt[3], 10 - 6*Sqrt[3]}
> {(10 + 6*Sqrt[3])^(1/3), (-1)^(2/3)*(10 - 6*Sqrt[3])^(1/3)}
> {1 + Sqrt[3], 1 - Sqrt[3]}
> 2
> This behavior of Mathematica does not affect simplifications by e.g.
> RootReduce?
> I must admit that I have gaps in my knowledge of these symbolic aspects
> (I started to be interested in them after trying to solve the secular
> Rayleigh equation)
> so more experienced members of the forum may forgive any possible
> mistakes of mine!
> Anyway I don't understand this difference in treating nested radicals
> between literature and Mathematica.
> I really appreciate any kind of insight/guidance/comments.
> Regards
> Dimitris

The use of principal roots has received some replies, but since I liked the Osler article above and wanted to comment, I thought I'd revisit.
(1) As pointed out by others (A. Kozlowski, M. Eisenberg), use of principal values for fractional roots means, among other things, that Power can be defined in terms of Log, and they can share a branch cut. This is useful in and of itself (try figuring out jumps in definite integration with a proliferation of functions having unrelated branch cuts). Another reason to like the definition a^b == Exp[b*Log[a]] is that for r>0 it makes f[x_] = (-r)^x differentiable in x. With a choice of negative roots for x equal to 1/n, n an odd integer, such a function would fail even to be continuous. Another nice feature is that it becomes simple to recover "surds" (that is, the full set of values for a radical a^(1/n)) simply by taking the principal value and multiplying by powers of the principal nth root of unity. Were we to have (-1)^(1/3), say, be simply -1, then one would be forced to use explicit complex exponentials instead of root-of-unity radicals in order to attain the principal value.
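Both points can be illustrated outside Mathematica (my examples, in Python): the principal value via a^b = exp(b log a) for the cube root of −8, the rotation by a cube root of unity to reach the real root, and a real-cube-root check of the two denesting identities from the thread.

```python
import cmath

# Principal cube root via the a^b == Exp[b*Log[a]] convention
z = cmath.exp(cmath.log(-8) / 3)   # 1 + sqrt(3) i, not -2
w = cmath.exp(2j * cmath.pi / 3)   # principal cube root of unity
real_root = z * w                  # rotating by w recovers the real root -2

# Real ("surd") cube root: negative inputs get the negative real root
def real_cbrt(x):
    return -((-x) ** (1 / 3)) if x < 0 else x ** (1 / 3)

s5, s3 = 5 ** 0.5, 3 ** 0.5
v1 = real_cbrt(2 + s5) + real_cbrt(2 - s5)            # denests to 1
v2 = real_cbrt(10 + 6 * s3) + real_cbrt(10 - 6 * s3)  # denests to 2
print(real_root, v1, v2)
```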
But having (-1)^(1/3) be the principal value means we can easily attain other roots, such as -1, by multiplying by appropriate powers of this root of unity.

(2) Osler's paper discusses some ways to reduce certain radicals to simpler forms. This is a special case of radical denesting. In this case one can use polynomial algebra techniques coupled with a selection procedure to remove parasite roots. One example uses something resembling

(2+Sqrt[5])^(1/3) + (2-Sqrt[5])^(1/3)

EXCEPT with the convention that the cube root of a negative is negative rather than the principal value. To find the desired value one might make new variables for radicals and polynomials to define them (in effect giving the surds, or full sets of values), eliminate all variables other than the one representing the value of interest, and then find the root of the resulting polynomial (in that remaining variable) that lies in the region of interest.

For this example we might let x have defining polynomial x^3-(2+z) and y have defining polynomial y^3-(2-z), where z is given by z^2-5. Letting t be the value of interest and continuing in this way, we would get

polys = {t-(x+y), x^3-(2+z), y^3-(2-z), z^2-5};

Now form a Groebner basis eliminating all but t, and solve for t.

roots = t /. Solve[First[GroebnerBasis[polys,t,{x,y,z}]]==0, t];

Last, select the root that is real valued.

In[36]:= Select[roots, Element[#,Reals]&]

Out[36]= {1}

Of course we could use direct built-in functionality, provided we first translate the expression as per note (1) above so that we are indeed getting the negative root for the second summand. This summand thus becomes (-1)^(2/3)*(2-Sqrt[5])^(1/3) and we do

In[37]:= RootReduce[(2+Sqrt[5])^(1/3) + (-1)^(2/3)*(2-Sqrt[5])^(1/3)]

Out[37]= 1

The last example in the paper is a bit more complicated but can be handled in exactly the same ways. It was from the dedication of a 1997 paper, commemorating an anniversary of the birth of Ramanujan.
const = (32*(146410001/48400)^3 - 6*(146410001/48400));
polys = {t^6-(const+b), b^2-(const^2-1)};
roots = t /. Solve[First[GroebnerBasis[polys,t,b]]==0, t];

Now grab any root real and larger than 1.

In[41]:= Select[roots, Element[#,Reals]&&#>1&]

Out[41]= {110}

(3) Osler defines and uses "Cardan polynomials" to do radical reduction. I cannot help but notice* that these have interesting combinatorial, algebraic, and analytic properties, a few of which I'll describe. First his definition:

Ca[n_,x_,y_] := Expand[2*y^(n/2)*ChebyshevT[n,x/(2*Sqrt[y])]]

For example:

In[45]:= InputForm[Ca[9,x,y]]

Out[45]= x^9 - 9*x^7*y + 27*x^5*y^2 - 30*x^3*y^3 + 9*x*y^4

We'll work with a closely related family wherein we take absolute values of coefficients and also add a term y^n.

Da[n_,x_,y_] := Expand[-I^n*Ca[n,I*x,y]+y^n]

In[47]:= InputForm[Da[9,x,y]]

Out[47]= x^9 + 9*x^7*y + 27*x^5*y^2 + 30*x^3*y^3 + 9*x*y^4 + y^9

Finally we'll want to define some very familiar polynomials.

Ru[n_,x_,y_] := Expand[(x+y)^n]

In[49]:= InputForm[Ru[9,x,y]]

x^9 + 9*x^8*y + 36*x^7*y^2 + 84*x^6*y^3 + 126*x^5*y^4 + 126*x^4*y^5 + 84*x^3*y^6 + 36*x^2*y^7 + 9*x*y^8 + y^9

Note that Ru[n,x,y] has coefficients from the nth row of the Pascal triangle.

(A) If one looks at rows of coefficients from Da[n,x,y] as n increases, one sees an asymmetric triangle. For example, the 11th row would be 1, 11, 44, 77, 55, 11, 1. We can derive this from the 11th row of the Pascal triangle using a one-sided differencing. That is, for the kth element of the mth row, we'll take the absolute value of alternating sums from the prior row, up to the kth element.

Pa[n_,k_,0] := Binomial[n,k]
Pa[n_,k_,m_] := Abs[Sum[(-1)^(j-1)*Pa[n,j,m-1],{j,m,k}]]

Now let's look at a table of these values, for n=11.
assymmetrictable[n_] := Table[Pa[n,k,j], {j,0,(n-1)/2}, {k,0,n-1}]

In[146]:= InputForm[assymmetrictable[11]]

{{1, 11, 55, 165, 330, 462, 462, 330, 165, 55, 11},
 {0, 11, 44, 121, 209, 253, 209, 121, 44, 11, 0},
 {0, 0, 44, 77, 132, 121, 88, 33, 11, 0, 0},
 {0, 0, 0, 77, 55, 66, 22, 11, 0, 0, 0},
 {0, 0, 0, 0, 55, 11, 11, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 11, 0, 0, 0, 0, 0}}

We have recovered the nontrivial "middle" coefficients for Da[11,x,y] (the two "end" coefficients are simply unity) from the first nonzero element of each row m: 11, 44, 77, 55, 11. There are nice combinatorial formulas for these coefficients; I wanted to illustrate an algorithmic approach to recovering them from the binomial coefficients of the Pascal triangle.

(B) The maps Ru[n,x,y] and Da[n,x,y] each possess a group invariance property. And, returning to the original intent of the thread, these involve multiplication by principal roots of unity. For Ru[n,x,y] we have the map given by the matrix

rumat = {{e,0},{0,e}}

where e=(-1)^(2/n) is the principal nth root of unity. This gives rise to a group action {z,w} --> {e*z,e*w}. For Da[n,x,y] we use

damat = {{e,0},{0,e^2}}

giving the group action {z,w} --> {e*z,e^2*w}. So let's define these actions explicitly.

groupAction[n_,m_,expr_,x_,y_] := Simplify[expr /. {x -> (-1)^(2/n)*x, y -> (-1)^(2*m/n)*y}]

Now we'll check the claimed invariance properties for n=9.

In[156]:= InputForm[groupAction[9,1,Ru[9,x,y],x,y]]

x^9 + 9*x^8*y + 36*x^7*y^2 + 84*x^6*y^3 + 126*x^5*y^4 + 126*x^4*y^5 + 84*x^3*y^6 + 36*x^2*y^7 + 9*x*y^8 + y^9

This is indeed simply Ru[9,x,y].

In[157]:= InputForm[groupAction[9,2,Da[9,x,y],x,y]]

Out[157]= x^9 + 9*x^7*y + 27*x^5*y^2 + 30*x^3*y^3 + 9*x*y^4 + y^9

which again is just Da[9,x,y].

(C) It turns out that both Ru[n,x,y] and Da[n,x,y] map the line x+y=1 to 1. That is, if we replace y by 1-x the polynomials will evaluate to 1. For example:

In[161]:= Ru[9,x,1-x]

Out[161]= 1

In[162]:= Da[9,x,1-x]

Out[162]= 1

The case of Ru[n,x,1-x] should not be a surprise.
After all, Ru[n,x,y] is simply (x+y)^n and this is of course 1 on x+y=1. That Da[n,x,1-x]=1 is a bit more subtle.

Also: all polynomial maps with this property that are invariant under the action of groupAction[n,1,...] can be obtained from straightforward operations that amount to "tensoring", and inversion thereof, of this basic polynomial Ru[n,x,y]. Similarly, all polynomial maps that take x+y==1 to 1 and are invariant under groupAction[n,2,...] are obtained from such operations on Da[n,x,y]. Clearly (he said,) there are no other 2x2 finite matrix group representations for which there are invariant polynomials taking x+y=1 to 1 (except for those obviously equivalent to damat, as these can each be represented in two ways).

Anyone else catch these?*

Daniel Lichtblau
Wolfram Research

*Just trolling. This stuff is far from obvious but was familiar from a previous lifetime. "Pa" is for Pascal, "Ru" for Rudin, "Da" for D'Angelo. Obviously Rudin's polynomials predate Rudin; his contribution was to note that they both map the unit ball in C^2 to higher dimensional balls (analogous to those polynomials taking x+y=1 to 1), and satisfy a group invariance property. As noted above, the Cardan polynomials from Osler's article are D'Angelo's but with one term dropped and signs alternating. For odd n D'Angelo had defined them in an article from 1988, showing how they give maps from the unit ball in C^2 with properties similar to Rudin's, but working with a less trivial group. He also conjectured no other such groups could be used in this way, either for C^2 or higher dimension domains. Some possible matrix groups had been ruled out around that time by F. Forstneric. Ruling out the rest was interesting work, and I think it's all covered in D'Angelo's book "Several Complex Variables and the Geometry of Real Hypersurfaces".
The even-n cases were used in an article by D'Angelo from the 90's to discuss mappings from the ball to hyperquadrics that have similar invariance properties. We do not know if these polynomials were defined in any context prior to that. Various combinatorial properties of the D'Angelo polynomials appear in a number of his articles. The connection of these to Chebyshev polynomials is observed in an American Mathematical Monthly article by Dilcher and Stolarsky (October 2005). That all polynomial maps invariant under the groupAction[n,2,...] action and taking x+y=1 to 1 can be obtained from Da[n,x,y] was proved in a hammock.
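Both denested values above are also easy to cross-check numerically outside Mathematica. The short Python sketch below (Python used here purely for illustration) relies on the fact that raising a negative float to a fractional power yields the principal complex value, matching the convention in note (1); the sixth-root form of the dedication constant is my reading of the polys {t^6-(const+b), b^2-(const^2-1)} above, not a formula from the thread.

```python
# Principal values: in Python 3, a negative base raised to a fractional
# float exponent yields the principal complex value, as in note (1).
w = (-1) ** (2 / 3)  # exp(2*pi*I/3), the square of the principal cube root of unity

# Osler's first example: should denest to 1.
z1 = (2 + 5 ** 0.5) ** (1 / 3) + w * (2 - 5 ** 0.5) ** (1 / 3)

# The Ramanujan-dedication constant: should denest to 110.
c = 146410001 / 48400
const = 32 * c ** 3 - 6 * c
t = (const + (const ** 2 - 1) ** 0.5) ** (1 / 6)

print(abs(z1 - 1))    # ~0, up to floating-point error
print(abs(t - 110))   # ~0
```

Both residuals come out at floating-point noise level, consistent with the exact results {1} and {110} obtained above.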
Woodhaven Prealgebra Tutor

Find a Woodhaven Prealgebra Tutor

...My approach to tutoring the GREs is to focus on your weak areas, while reinforcing your strengths. The GRE does not test any math past the 10th grade, but it does come up with tricky questions, so the important thing is using the best test-taking techniques. The verbal section is focused on figu...
26 Subjects: including prealgebra, calculus, statistics, writing

...After graduation, I spent an additional 7 weeks at Middlebury College studying Portuguese through its intensive Summer Language School Program, leaving the program with an advanced proficiency in the language. I also speak Spanish at home as it was my first language. I have always been a lover ...
5 Subjects: including prealgebra, geometry, Japanese, Portuguese

...I am genuinely passionate about this area of study, and when I see that a student is able to grasp complex concepts and learn to love science, it reminds me how lucky I am. I am currently an enrichment instructor to high school students in advanced placement biology. I also lead review sessions ...
8 Subjects: including prealgebra, chemistry, biology, algebra 1

...I've taught math across grades 6-8 in Manhattan, the Bronx and currently Queens. I can help your child with any/all middle school math concepts/skills and help prepare them for the state tests and unit tests (which their math teachers give). Please contact me to find out more about how I can he...
4 Subjects: including prealgebra, algebra 1, algebra 2, linear algebra

...My approach as a tutor is to first establish my tutee's needs and abilities, and to then fashion an individualized program for him/her. I take the student through math examples in a step-by-step manner, making sure the student grasps each point. We then do several additional, similar examples, ...
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
The Transmission Line block lets you choose between the following models of a transmission line:

1. Delay-based and lossless
2. Delay-based and lossy
3. Lumped parameter L-section
4. Lumped parameter pi-section

The first option provides the best simulation performance, with options 2, 3 and 4 requiring progressively more computing power.

Delay-Based and Lossless

This first option, Delay-based and lossless, models the transmission line as a fixed impedance, irrespective of frequency, plus a delay term. The defining equations are:

v[1](t) – i[1](t) Z[0] = v[2](t – τ) + i[2](t – τ) Z[0]
v[2](t) – i[2](t) Z[0] = v[1](t – τ) + i[1](t – τ) Z[0]

● v[1] is the voltage across the left-hand end of the transmission line.
● i[1] is the current into the left-hand end of the transmission line.
● v[2] is the voltage across the right-hand end of the transmission line.
● i[2] is the current into the right-hand end of the transmission line.
● τ is the transmission line delay.
● Z[0] is the line characteristic impedance.

Delay-Based and Lossy

To introduce losses, the second option, Delay-based and lossy, connects N delay-based components, each defined by the above equations, in series via a set of resistors, as shown in the following diagram, where:

● N is an integer greater than or equal to 1.
● r = R · LEN / N, where R is the line resistance per unit length and LEN is the line length.

Lumped Parameter L-Section

The following block diagram shows the model of one L-line segment. The lumped parameter parameterization uses N copies of the above segment model connected in series. Parameters are as follows:

● R is line resistance per unit length.
● L is the line inductance per unit length.
● C is the line capacitance per unit length.
● G is the line conductance per unit length.
● LEN is the length of the line.
● N is the number of series segments.

Lumped Parameter Pi-Section

The following block diagram shows the model of one pi-line segment.
The lumped parameter parameterization uses N copies of the above segment model connected in series. The parameters are as defined for the L-section transmission line model. Unlike the L-section model, the pi-section model is symmetric.

Lumped Parameter Line Model Parameterization

The lumped-parameter models (L-section or pi-section) are the most challenging to simulate, typically needing many more segments (greater N) than for the delay-based and lossy model [1]. Cable manufacturers do not typically quote an inductance value per unit length, but instead give the characteristic impedance. The inductance, capacitance, and characteristic impedance are related by:

L = C · Z[0]^2

The block lets you specify either L or Z[0] when using the lumped parameter model.
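As a quick numeric illustration of the relations above — the cable values below are made up for the example, not taken from any datasheet — the derived quantities can be computed directly:

```python
# Hypothetical per-unit-length values for a 50-ohm cable (illustrative only).
Z0 = 50.0     # characteristic impedance, ohms
C = 100e-12   # line capacitance per unit length, F/m
R = 0.1       # line resistance per unit length, ohms/m
LEN = 10.0    # line length, m
N = 20        # number of series segments

# Inductance per unit length from the characteristic impedance (L = C * Z0^2):
L = C * Z0 ** 2            # 2.5e-7 H/m, i.e. 250 nH/m

# Per-segment resistance in the delay-based lossy model (r = R * LEN / N):
r = R * LEN / N            # 0.05 ohms per segment

# One-way delay via the standard lossless-line relation (not stated in the
# block reference above): delay = length * sqrt(L * C).
tau = LEN * (L * C) ** 0.5 # 5e-8 s

print(L, r, tau)
```

This also shows why the block can accept either L or Z[0]: given C, each determines the other through L = C · Z[0]^2.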
Activity Based Physics Thinking Problems in Mechanics: Dynamics

1) A crate is pulled along a horizontal surface at constant velocity by an applied force F[a] that makes an angle θ with the horizontal. The coefficient of kinetic friction between the crate and the surface is μ. Show that the magnitude of F[a] is a minimum when the angle θ is given by tan^-1 μ.

2) Suppose you are sitting in a car that is speeding up. Draw well-separated force diagrams of the following objects:
□ your own body;
□ the seat in which you are sitting (apart from the car);
□ the car (apart from the seat);
□ the road surface where the tires and the road interact. Assume the car has rear wheel drive.
□ Describe each force in words; show larger forces with longer arrows.
□ Identify the third law pairs.
□ Explain carefully in your own words how the force imparting acceleration to the car originates.

3) A block of mass M is pushed along a frictionless table by a force F for a distance s as shown in the figure at the right. The force is inclined to the horizontal at an angle θ. When it reaches s, the force is removed. The block starts from rest at clock time 0:00. Express your answers in terms of M, F (= the magnitude of F), θ, and s.
(a) Calculate the acceleration of the block during the time the force is acting.
(b) At what time will the box have moved through the distance s? Find the object's velocity at that time.
(c) Calculate the change in the object's kinetic energy using the work-energy theorem and show that it gives the same result you would calculate using the change in the object's velocity.
(d) What is the object's change in momentum from the time it starts to the time it reaches s?
(e) If the same force pushed a larger mass for the same distance, would the object have more or less kinetic energy than the original box? What about momentum? Give a qualitative explanation for your results that makes them plausible.
4) Newton's first law states that an object will move with a constant velocity if nothing acts on it. This seems to contradict our everyday experience that all moving objects come to rest unless something acts on them to keep them going. Does our everyday experience contradict one of Newton's laws? If it does not, explain the apparent contradiction. If it does, explain why we bother to teach Newton's first law anyway.

5) A worker is pushing a cart along the floor. At first, the worker has to push hard in order to get the cart moving. After a while, it is easier to push. Finally, the worker has to pull back on the cart in order to bring it to a stop before it hits the wall. The force exerted by the worker on the cart is purely horizontal. Take the direction the worker is going as positive. Below are shown graphs of some of the physical variables of the problem. Match the graphs with the variables in the list below. You may use a graph more than once or not at all. (Note: the time axes are to the same scale, but the ordinates {the "y axes"} are not.)
(a) friction force
(b) force exerted by the worker
(c) net force
(d) acceleration
(e) velocity

6) You are pushing on a block that is resting on the table.
(a) You press on the block but not hard enough to start it moving. What are the forces acting on the block? (Be sure to specify the type of force and the object causing each force.) Wherever you can, compare the magnitudes of forces. Give a brief explanation for how you know each of the results you state.
(b) You press a bit harder and the block begins to move. After a moment of starting up, you press so that the block is moving with a constant velocity. For the time while the block is moving with a constant velocity, have any of the comparisons you stated in (a) changed? Which ones? Have any of the forces changed? Which ones?
(c) Suppose the block has a mass of 0.4 kg and the coefficient of friction between the block and the table is 0.3.
What force will you have to use to keep it going at a constant velocity of 0.2 m/s? (You may take g = 10 N/kg.)

7) A block of mass M[1] is sitting on a frictionless table. It is connected by a massless string over a massless and frictionless pulley to another block of mass M[2].
(a) Build free-body diagrams for each of the masses and write equations of motion for each object. Use the coordinate x[1] shown in the figure for the position of mass M[1] and coordinate y[2] shown in the figure for the position of mass M[2].
(b) Use these equations of motion to obtain the acceleration of the two objects. Explicitly state any conditions that you are applying to solve the equations.

8) A block of mass M[1] is sitting on a table with a block of mass M[2] sitting on top of it. Attached to the bottom block is a ring and a string is attached to the ring. The string is pulled with a tension T. The coefficient of friction between block 1 and the table is μ[1], and the coefficient of friction between block 2 and block 1 is μ[2]. (Ignore the difference between static and kinetic friction.)
(a) For one given value of T the blocks accelerate together.
□ Draw free body diagrams for each of the blocks.
□ Calculate the acceleration of the blocks.
□ What can you say about the magnitude and direction of the various frictional forces in the system?
(b) For a second, larger, value of T, the first block continues to accelerate, but the second block begins to slip back on the first block.
□ How do the free body diagrams for the two blocks change from the case above?
□ Calculate the acceleration of block 2.

9) The system shown in the figure is initially motionless and the pulleys are of negligible mass.
(a) Write free-body diagrams for each mass and their equations of motion.
(b) Show that the accelerations of the three objects are related by a[1] + a[2] + 2 a[3] = 0.
(c) Suppose m[2] = 2 m[1]. When the system is released, it is found that body 1 remains stationary.
What is the tension in the string supporting body 1?
(d) Find the magnitude and direction of the acceleration of body 2, of pulley A, of body 3, and the tension in the string supporting pulley A.
(e) What is the mass of body 3?

10) Write an essay about Newton's second law. In your essay, make sure you state the law in words and equation form. For the equation you give, explain carefully the meaning of each symbol in the equation. Also include a brief discussion of "what the equation is good for", that is, how you might use it in practice.

11) A physics student is pulling a small wooden crate across a wooden floor using a rope tied to a ring which is attached to the box. The box has a mass m, the student has a mass M, and the coefficient of friction between the box and the floor is μ. (Ignore the difference between static and kinetic friction.) The angle between the rope and the horizontal is θ.
(a) What force does the student have to exert on the rope to keep the box moving at a constant speed? (Express your answer in terms of the relevant given symbols.)
(b) If θ = 30º, m = 20 kg, M = 80 kg, μ = 0.4, and g = 10 N/kg, find the tension in the rope.
(c) State one approximation you have made in order to make this problem more easily solved. (By "approximation" I mean some simplification that has been made or real-world effect that is being ignored. I do not mean that one of the numbers given might be a bit wrong.)

12) A boy with a mass m = 50 kg is skating on a flat, level sidewalk. He can get himself up to a speed V = 6 m/s. If he then just coasts, he travels a distance d = 10 m before coming to a stop.
(a) Find the coefficient of friction between the boy and the sidewalk.
(b) He approaches a part of the sidewalk which rises at an angle θ = 30º to the ground for a distance D = 5 m before leveling out. If he reaches the rise going at his maximum speed, will he be able to coast to the top? Explain.
13) Two blocks, A and B, of masses M[A] and M[B] respectively, are tied together with a rope, R, of mass M. The small block, B, is being pushed with a constant horizontal force as shown below. Assume that there is no friction between the blocks and the table, and that the blocks have already been moving for a while at the instant shown and their relative position is not changing. The "push" is exerting a force F on block B.
(a) Will block B be moving at a constant speed or will it be speeding up? Explain your answer.
(b) Draw careful free-body diagrams specifying all the forces acting on the three objects in the problem.
(c) Calculate the acceleration of the system.

14) A large heavy cart of mass M is sitting on a table at rest. The cart has wheels and rolls on the table with negligible friction. A smaller block of mass m is tossed so it lands on top of the cart at time t = 0. At this instant, when the block first touches the cart, the cart is at rest and the block is moving with a velocity v[0] as shown in the figure below. The coefficient of friction between the cart and the block is μ.
(a) Assume that the block slides on the cart for a while before being brought to a stop relative to the cart. (Assume the block is not moving fast enough to slide off the end of the cart.) Describe what happens to the cart and the block.
(b) While the block is sliding on the cart, draw separate free-body diagrams for the cart and the block, showing all forces that act on each one.
(c) Is the momentum of the system consisting of the cart + block conserved? Explain.
(d) Find the final velocity of the cart-block system after the block has come to rest on top of the cart. Express your answer in terms of the symbols given in the description of the problem.

15) The figure to the right shows a multiple-exposure photograph of a ball rolling up an inclined plane. (The ball is rolling in the dark, the camera lens is held open, and a brief flash occurs every 3/4 sec four times.)
The left-most ball corresponds to an instant just after the ball was released. The right-most ball is at the highest point the ball reaches.
(a) Copy this picture and, at each ball, draw an arrow to indicate the velocity of the ball at the instant when it was at that point in space. Explain what is happening ("tell the story" of the motion).
(b) For the instant of time when the ball is at the second position shown from the left, draw a free-body diagram for the ball and indicate all forces acting on it.
(c) If the mass of the ball is m, what is its acceleration?
(d) If the angle θ is equal to 30º, how long is the distance s?

16) Two identical billiard balls are labeled A and B. Maryland Fats places ball A at the very edge of the table, ball B at the other side. He strikes ball B with his cue so that it flies across the table and off the edge. As it passes A, it just touches ball A lightly, knocking it off. The balls are shown just at the instant they have left the table. Ball B is moving with a speed v[0], ball A is essentially at rest.
(a) Which ball do you think will hit the ground first? Explain your reasoning.
Below are shown a number of graphs of a quantity versus time. For each of the items below, select which graph could be a plot of that quantity vs. time. If none of the graphs are possible, write N. The time axes are taken to have t=0 at the instant both balls leave the table. Use the x and y axes shown in the figure. For each of the following, which graph could represent
(b) the x-component of the velocity of ball B?
(c) the y-component of the velocity of ball A?
(d) the y-component of the acceleration of ball A?
(e) the y-component of the force on ball B?
(f) the y-component of the force on ball A?
(g) the x-component of the velocity of ball A?
(h) the y-component of the acceleration of ball B?
[Graphs A through H appear in the original figure.]

17) A hand pushes a 3 kg block along a table from point A to point C as shown in the figure below.
The table has been prepared so that the left half of the table (from A to B) is frictionless. The right half (from B to C) has a non-zero coefficient of friction equal to μ. The hand pushes the block from A to C using a constant force of 5 N. The block starts off at rest at point A and comes to a stop when it reaches point C. The distance from A to B is 1 meter and the distance from B to C is also 1 meter.
(a) Describe in words the motion of the block as it moves from A to C.
(b) Draw a free-body diagram for the block when it is at point P.
(c) What is the direction of the acceleration of the block at point P? If it is 0, state that explicitly. Explain your reasoning.
(d) Does the magnitude of the acceleration increase, decrease, or remain the same as the block moves from B to C? Explain your reasoning.
(e) What is the net work done on the object as it moves from A to B? From B to C?
(f) Calculate the coefficient of friction μ.

18) You and your friends have prepared a large pot of soup to take to the local homeless shelter. The pot is large: two feet high and two feet in diameter. In order to get it to the shelter, you put the pot in the back of your friend's pick-up truck, pressed against the back wall (next to the cab) and against the left wall (on the driver's side). You are pretty sure that you have to drive carefully so the pot doesn't tip over. Do you have to be more careful when starting or when stopping? When turning left or when turning right? Explain your choice.
Comparison of Four Filtering Options for a Radar Tracking Problem

Accession Number : ADA329021
Title : Comparison of Four Filtering Options for a Radar Tracking Problem
Corporate Author : NAVAL SURFACE WARFARE CENTER DAHLGREN DIV VA
Personal Author(s) : Lawton, John A. ; Jesionowski, Robert J. ; Zarchan, Paul
PDF Url : ADA329021
Report Date : 1997
Pagination or Media Count : 10

Abstract : Four different filtering options are considered for the problem of tracking an exoatmospheric ballistic target with no maneuvers. The four filters are an alpha-beta filter, an augmented alpha-beta filter, a decoupled Kalman filter, and a fully-coupled extended Kalman filter. These filters are listed in the order of increasing computational complexity. All of the filters can track the target with some degree of accuracy. While the pure alpha-beta filter appreciably lags the other filters in performance for this problem, its augmented version is very competitive with the extended Kalman filter under benign conditions. Perhaps the most surprising result is that under all conditions examined, the decoupled (linear) Kalman filter, which is at least an order of magnitude less computationally complex, performs nearly identically to the coupled, extended Kalman filter.
Descriptors : *RADAR TRACKING, COMPUTATIONS, COMPARISON, KALMAN FILTERING, TARGETS, FILTERS, BALLISTICS, EXOSPHERE.

Subject Categories : Active & Passive Radar Detection & Equipment

Distribution Statement : APPROVED FOR PUBLIC RELEASE
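For orientation, the simplest of the four options — the alpha-beta filter — has a generic textbook form: predict with a constant-velocity model, then correct position and velocity from the measurement residual. The sketch below uses illustrative gains and data; it is not the report's implementation.

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """Generic alpha-beta filter: constant-velocity prediction followed
    by a residual-proportional correction of position and velocity."""
    x, v = 0.0, 0.0                  # initial position/velocity estimates
    for z in measurements:
        x_pred = x + v * dt          # constant-velocity prediction
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # velocity correction
    return x, v

# Noiseless constant-velocity target: for this motion class the filter
# locks on with zero steady-state error.
dt = 1.0
zs = [2.0 * k * dt for k in range(1, 101)]
x_est, v_est = alpha_beta_track(zs, dt)
print(x_est, v_est)                  # approaches (200.0, 2.0)
```

The gains alpha and beta trade responsiveness against noise smoothing; the Kalman variants in the report effectively compute such gains adaptively from assumed noise statistics.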
An Introduction To Quadrilaterals

This lesson is designed to introduce students to quadrilaterals. Included in this lesson are discussions of parallelograms, rectangles, and trapezoids.

Upon completion of this lesson, students will:
• have been introduced to quadrilaterals and their properties.
• have learned the terminology used with quadrilaterals.
• have practiced creating particular quadrilaterals based on specific characteristics of the quadrilaterals.

The activities and discussions in this lesson address the following NCTM Standards:

Analyze characteristics and properties of two- and three-dimensional geometric shapes and develop mathematical arguments about geometric relationships
• precisely describe, classify, and understand relationships among types of two- and three-dimensional objects using their defining properties
• understand relationships among the angles, side lengths, perimeters, areas, and volumes of similar objects
• create and critique inductive and deductive arguments concerning geometric ideas and relationships, such as congruence, similarity, and the Pythagorean relationship

Use visualization, spatial reasoning, and geometric modeling to solve problems
• draw geometric objects with specified properties, such as side lengths or angle measures

Student Prerequisites
• Geometric: Students must be able to:
□ recognize the general shape of a square and a rectangle.
□ recall information about angles (particularly right angles), parallel lines, and possibly the concept of congruency.
• Technological: Students must be able to:
□ perform basic mouse manipulations such as point, click and drag.
□ use a browser, such as Netscape, for experimenting with the activities.
Teacher Preparation

Students will need:
• Access to a browser
• pencil and paper
• Copies of supplemental materials for the activities

Key Terms

This lesson introduces students to the following terms through the included discussions:
• rhombus
• rectangle
• square

Lesson Outline

1. Focus and Review

Remind students of what they learned in previous lessons that will be pertinent to this lesson and/or have them begin to think about the words and ideas of this lesson:
□ Class, you might remember that we learned about triangles before. We learned that triangles are a family of polygons and that there are different types of triangles.
□ Can anyone in here tell me what some of the types of triangles are? [i.e. right triangles, isosceles triangles, scalene triangles, etc.]
□ Just as the three-sided polygon, a triangle, has a family of shapes with names, four-sided polygons have names.

2. Objectives

Let the students know what they will be doing and learning today. Say something like this:
□ Today, class, we will be talking more about the four-sided figures, called quadrilaterals.
□ We are going to use the computers to learn about quadrilaterals, but please do not turn your computers on or go to this page until I ask you to. I want to show you a little about this program.

3. Teacher Input

You may choose to lead the students in a short discussion about quadrilaterals. A series of discussions will introduce students to the different types of quadrilaterals.

Explain to the students how to do the assignment. You should model or demonstrate it for the students, especially if they are not familiar with how to use our computer applets.
□ Open your browser to Floor Tiles in order to demonstrate this activity to the students.
□ Explain that the quadrilateral on the screen will always remain a quadrilateral, even though you move the sides and corners.
□ Show the students that they may access information about the sides and angles by using the Information button.
□ Pass out the Worksheet to Accompany "An Introduction To Quadrilaterals." 4. Guided Practice Try an example with your students, letting the students direct your moves. □ Ask the students to help you create a trapezoid from the square on the screen. As they direct your moves, have them specify which characteristic of the trapezoid they are attempting to create. □ When the class is satisfied with the trapezoid that has been created, show them how to gain information about the quadrilateral from the Information button. □ Allow the students to comment on how they think the information shows that the quadrilateral is a trapezoid. Students should recognize that it is necessary to show that two of the lines in the quadrilateral are parallel. This can be done several ways: ☆ Remind students to consider what they know about parallel lines. If the lines are parallel, and one of the other sides acts as a transversal, students can identify angles that should be congruent or supplementary. Remind them that angles 1 and 3 are congruent (since alternate interior angles are congruent), and angles 1 and 2 are supplementary (since the two angles form a linear pair); therefore angles 2 and 3 should be supplementary, if the lines are parallel. ☆ If your students are not familiar with the properties of parallel lines, they may prove that the lines are parallel by calculating the slope of the lines they suspect are parallel. The Information button contains the coordinates of each vertex. Students may use these coordinates to find the slope of the appropriate lines. 5. Independent Practice □ Allow students to work on their own and to complete the worksheet, should you choose to provide it. Monitor the room for questions and to be sure that the students are on the correct web page. □ Another option: Let students form several groups. Each group should design a different quadrilateral and prove that its creation fits the desired characteristics of the specified quadrilateral.
The groups could then show the class what they created and how they showed that the desired characteristics were present. 6. Closure You may wish to bring the class back together for a discussion of the findings. Once the students have been allowed to share what they found, summarize the results of the lesson. Especially emphasize the importance of knowing the characteristics of the different types of quadrilaterals. Alternate Outlines This lesson can be rearranged in several ways if there is only one available computer: • Groups of students may take turns creating a quadrilateral and proving that it has the characteristics necessary to define that shape. □ Assign each group a different quadrilateral. Let the groups take turns using the computer to create the quadrilateral and take note of the information. □ When each group has finished, allow the groups an opportunity to teach the class what they found and how they proved that the necessary characteristics were present. • The class may work together as a whole to create the quadrilaterals suggested on the worksheet. □ Students may direct the instructor's movements and suggest calculations that need to be done before the class. □ OR Students may take turns using the demonstration computer to modify the quadrilateral. The whole class can make the necessary calculations and then check them with a partner. Suggested Follow-up This lesson may be followed by: • Length, Perimeter, and Area: Introduces students to finding the length, perimeter, and area of two-dimensional figures. • Surface Area and Volume: A lesson that introduces students to determining the surface area and volume of three-dimensional figures.
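The slope check described in the Guided Practice can be sketched in a few lines. This is only an illustration with made-up vertex coordinates, not output from the Floor Tiles applet:

```python
# Two sides of a quadrilateral are parallel exactly when their slopes
# are equal (vertical sides are handled as a special case).
def slope(p, q):
    (x1, y1), (x2, y2) = p, q
    if x2 == x1:
        return None  # vertical side: slope is undefined
    return (y2 - y1) / (x2 - x1)

# Sample vertices of a trapezoid, listed in order around the shape.
A, B, C, D = (0, 0), (6, 0), (4, 3), (1, 3)
bottom = slope(A, B)   # the pair of sides we suspect is parallel
top = slope(D, C)
print(top == bottom)   # prints True: parallel, so this is a trapezoid
```

Students comparing slopes computed from the Information button's coordinates are doing exactly this calculation by hand.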
Do all curves over finite fields have covers with a sqrt(q) eigenvalue? On my recent visit to Illinois, my colleague Nathan Dunfield (now blogging!) explained to me the following interesting open question, whose answer is supposed to be "yes": Q1: Let f be a pseudo-Anosov mapping class on a Riemann surface Sigma of genus at least 2, and M_f the mapping torus obtained by gluing the two ends of Sigma x interval together by means of f. Then M_f is a hyperbolic 3-manifold with first Betti number 1. Is there a finite cover M of M_f with b_1(M) > 1? You might think of this as (a special case of) a sort of "relative virtual positive Betti number conjecture." The usual vpBnC says that a 3-manifold has a finite cover with positive Betti number; this says that when your manifold starts life with Betti number 1, you can get "extra" first homology by passing to a cover. Of course, when I see "3-manifold fibered over the circle" I whip out a time-worn analogy and think "algebraic curve over a finite field." So here's the number theorist's version of the above Q2: Let X/F_q be an algebraic curve of genus at least 2 over a finite field. Does X have a finite etale cover Y/F_{q^d} such that the action of Frobenius on H^1(Y,Z_ell) has an eigenvalue equal to q^{d/2}? One nice thing about Q2 is that I know some cases where the answer is yes: namely, the case where X is a hyperelliptic curve and the characteristic of F_q is congruent to 2 mod 3. When X is hyperelliptic, a theorem of Bogomolov and Tschinkel guarantees that X has an etale cover which dominates an elliptic curve E_0 with CM by Q(sqrt(-3)). Now for every p congruent to 2 mod 3, E_0 is supersingular mod p, and in particular has a Frobenius eigenvalue of q^{1/2} whenever q is a sufficiently divisible power of p. When I mentioned this fact to Nathan, he remarked that Q1 also has a positive answer when X is hyperelliptic! This is a theorem of Joe Masters. What about the general case?
Let's jot down a careless heuristic argument and see what happens. First of all, what's the probability that a genus g curve Y over F_q has a Frobenius eigenvalue of q^{1/2}? A theorem of DiPippo and Howe suggests that the number of Weil polynomials of degree 2g is on order q^{cg^2}. Of these, about q^{c(g-1)^2} can be expressed as (X - q^{1/2})^2 P(x) for a Weil polynomial P of degree 2g-2. If the characteristic polynomial of Frobenius on H^1(Y) is a randomly chosen Weil polynomial, then the chance that it has q^{1/2} as a root should thus be exponential in -g. Guess 1: The probability that a random genus-g curve over F_{q^d} has a positive real Frobenius eigenvalue (i.e. an eigenvalue of q^{d/2}) is on order of q^{-dg}. Now we need to think about the fields of definition of etale covers of X. Let C_n be the set of all degree-n covers of X/F_qbar. Then Frobenius acts on C_n. But this action is clearly not transitive; for instance, if Y -> X is a degree-n etale cover, the image of pi_1(X) in the permutation group of the n sheets is a subgroup of S_n defined up to conjugacy, the monodromy group of the cover. (We might restrict to connected covers, i.e. to transitive monodromy groups, but I don't think this makes a big difference.) Guess 2: The action of Frobenius is well-modeled by a permutation which permutes randomly the covers with each monodromy group. Suppose this is the case. Let G in S_n be a monodromy group and let C_{n,G} be the set of covers with monodromy group G. Then a random permutation on C_{n,G} will on average have one fixed point, 1/2 cycles of length 2, 1/3 cycles of length 3 and so on. Thus, there should on average be 1/k covers in C_{n,G} whose field of definition is F_{q^k}. These covers should have genus on order ng, where g is the genus of X. And so, according to Guess 1, the expected number of positive real eigenvalues among the covers with monodromy group G is $\sum_{k=1}^\infty (1/k)q^{-kng}$ which is dominated by its main term, q^{-ng}.
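The claim that the sum is dominated by its main term is easy to check numerically, since $\sum_{k\ge 1} (1/k)x^k = -\log(1-x)$ with $x = q^{-ng}$. This is just a sanity check of the heuristic, with made-up small parameters:

```python
import math

# Heuristic expected count from the post: sum over k of (1/k) * q^(-k*n*g),
# i.e. the series for -log(1 - q^(-n*g)), whose main term is q^(-n*g).
def expected_count(q, n, g, kmax=200):
    return sum(q ** (-k * n * g) / k for k in range(1, kmax + 1))

q, n, g = 2, 3, 2                # made-up small parameters
total = expected_count(q, n, g)
main_term = q ** (-n * g)       # the k = 1 term dominates the series
```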
But there are a lot of subgroups of S_n; at least on order of c^{n^2}, according to a theorem of Pyber. For large n this swamps the negative exponential in q^{-ng}, suggesting that there are indeed lots of covers of X with a positive real Frobenius eigenvalue. In fact, it seems to suggest that you could find a cover with a ton of positive real eigenvalues; maybe even a cover Y whose Jacobian has constant * g(Y) supersingular elliptic curves as isogeny factors. That last seems unreasonably strong, but I don’t see that it’s immediately ruled out. 6 thoughts on “Do all curves over finite fields have covers with a sqrt(q) eigenvalue?” 1. Two comments on my own post: 1. The prediction that there exist infinitely many covers Y/F_{q^d}, some constant proportion of whose Frobenius eigenvalues are equal to q^{d/2}, is perhaps not so unreasonable; the analogue back on the 3-manifold side would be the prediction that M_f has a sequence of covers whose first Betti number grows linearly with degree. But people believe that 3-manifolds have finite covers which are large, i.e. whose fundamental group surjects onto a free group of rank 2, and this implies exactly such a statement on growth of Betti numbers. 2. Guess 2 is more precise than necessary — if Frobenius is thought of as drawn from some group much smaller than the full group of permutations, it only increases the expected number of short cycles, and thus the expected number of degree-n covers of X defined over small extensions of F_q. 2. Wait, can you explain why you consider q1 and q2 to be analogous? q1 is about dimensions of cohomology groups of the total space, and q2 is about eigenvalues of the monodromy on the cohomology of the geometric fiber (or at least that’s what I think you mean by the notation H^1(Y,Z_l)). 3. So: if M_f is the mapping torus of some diffeo f of a Sigma_g, then passage around the circle induces a symplectic automorphism of H_1(Sigma_g), which is the analogue of the Frobenius action on H_1(Y). 
The Betti number of M_f is bigger than 1 just when this automorphism has an eigenvalue of 1. Of course, Frobenius _can’t_ have an eigenvalue of 1, but (and you’re welcome to reject this analogy) I take q^{1/2} to be analogous to 1 here. 4. Shouldn’t one count the subgroups of S_n up to conjugacy? Pyber’s result (according to the link to OEIS) is about subgroups themselves, and the corresponding sequence for conjugacy classes (here) seems to grow much slower (though still quite fast). 5. But this only diminishes by a factor of n! at most, right? So it should still be exponential in n^2. 6. Yes, of course… Certain growth rates dissolve intuition… Tagged 3-manifolds, algebraic curves, algebraic geometry, arithmetic geometry, finite fields, finite groups, frobenius, heuristics, napkin scratching, nathan dunfield, number theory, things i don't know, topology
Geometry of Linear Transformations of the Plane Let $V$ and $W$ be vector spaces. Recall that a function $T:V \rightarrow W$ is called a linear transformation if it preserves both vector addition and scalar multiplication: \begin{eqnarray*} T({\bf v_1}+ {\bf v_2}) & = & T({\bf v_1}) + T({\bf v_2}) \\ T(r{\bf v_1}) & = & rT({\bf v_1}) \end{eqnarray*} for all ${\bf v_1, v_2} \in V$. If $V = R^{2}$ and $W = R^{2}$, then $T:R^2 \rightarrow R^2$ is a linear transformation if and only if there exists a $2 \times 2$ matrix $A$ such that $T({\bf v}) = A{\bf v}$ for all ${\bf v} \in R^2$. Matrix $A$ is called the standard matrix for $T$. The columns of $A$ are $T \left( \left[ {1 \atop 0} \right] \right)$ and $T \left( \left[ {0 \atop 1} \right] \right)$, respectively. Since each linear transformation of the plane has a unique standard matrix, we will identify linear transformations of the plane by their standard matrices. It can be shown that if $A$ is invertible, then the linear transformation defined by $A$ maps parallelograms to parallelograms. We will often illustrate the action of a linear transformation $T:R^2 \rightarrow R^2$ by looking at the image of a unit square under $T$. The standard matrix for the linear transformation $T:R^2 \rightarrow R^2$ that rotates vectors by an angle $\theta$ is $$ A = \left[\begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array} \right]. $$ This is easily derived by noting that \begin{eqnarray*} T\left( \left[ {1 \atop 0} \right] \right) & = & \left[ {\cos\theta \atop \sin\theta} \right] \\ T\left( \left[ {0 \atop 1} \right] \right) & = & \left[ {-\sin\theta \atop \cos\theta} \right]. \end{eqnarray*} For every line in the plane, there is a linear transformation that reflects vectors about that line.
Reflection about the $x$-axis is given by the standard matrix $$ A = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right] $$ which takes the vector $\left[ {x \atop y} \right]$ to $\left[ {x \atop -y} \right]$. Reflection about the $y$-axis is given by the standard matrix $$ A = \left[ \begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array} \right] $$ taking $\left[ {x \atop y} \right]$ to $\left[ {-x \atop y} \right]$. Finally, reflection about the line $y=x$ is given by $$ A = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right] $$ and takes the vector $\left[ {x \atop y} \right]$ to $\left[ {y \atop x} \right]$. Expansions and Compressions The standard matrix $$ A = \left[ \begin{array}{cc} k & 0 \\ 0 & 1 \end{array} \right] $$ "stretches" the vector $\left[ {x \atop y} \right]$ along the $x$-axis to $\left[ {kx \atop y} \right]$ for $k > 1$ and "compresses" it along the $x$-axis for $0 < k < 1$. Similarly, $$ A = \left[ \begin{array}{cc} 1 & 0 \\ 0 & k \end{array} \right] $$ stretches or compresses vectors $\left[ {x \atop y} \right]$ to $\left[ {x \atop ky} \right]$ along the $y$-axis. Shears The standard matrix $$ A = \left[ \begin{array}{cc} 1 & k \\ 0 & 1 \end{array} \right] $$ taking vectors $\left[ {x \atop y} \right]$ to $\left[ {x+ky \atop y} \right]$ is called a shear in the $x$-direction. Similarly, $$ A = \left[ \begin{array}{cc} 1 & 0 \\ k & 1 \end{array} \right] $$ takes vectors $\left[ {x \atop y} \right]$ to $\left[ {x \atop y+kx} \right]$ and is called a shear in the $y$-direction. • If finitely many linear transformations from $R^2$ to $R^2$ are performed in succession, then there exists a single linear transformation with the same effect. • If the standard matrix for a linear transformation $T: R^2 \rightarrow R^2$ is invertible, then it can be shown that the geometric effect of $T$ is the same as some sequence of reflections, expansions, compressions, and shears.
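The first bullet above is just matrix multiplication: composing transformations multiplies their standard matrices. A quick numerical check (purely illustrative, not part of the tutorial):

```python
import math

# 2x2 standard matrices as nested lists; performing transformations in
# succession corresponds to multiplying their standard matrices.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(a, v):
    return [a[0][0] * v[0] + a[0][1] * v[1],
            a[1][0] * v[0] + a[1][1] * v[1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def shear_x(k):
    return [[1.0, k], [0.0, 1.0]]   # takes (x, y) to (x + k*y, y)

T = matmul(rotation(math.pi / 2), shear_x(2.0))   # one standard matrix
v = [1.0, 1.0]
one_step = apply(T, v)                            # single composite map
two_steps = apply(rotation(math.pi / 2), apply(shear_x(2.0), v))
```

Applying the single matrix $T$ agrees with shearing first and then rotating, as the bullet asserts.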
In the following Exploration, you can investigate the connection between the entries in a standard matrix and the effect the corresponding linear transformation has geometrically. Key Concept For every linear transformation $T: R^2 \rightarrow R^2$ of the plane, there exists a standard matrix $A$ such that $$ T({\bf v}) = A{\bf v} {\small\textrm{ for all }} {\bf v} \in R^2. $$ Every linear transformation of the plane with an invertible standard matrix has the geometric effect of a sequence of reflections, expansions, compressions, and shears.
143 is largest number yet to be factored by a quantum algorithm Quantum factorization of 143 using the adiabatic quantum algorithm. As the system evolves to its ground state k = 6, it reaches a superposition of states 6 and 9, which denotes the answers 11 and 13. Image credit: Xu, et al. ©2012 American Physical Society (Phys.org) -- While factoring an integer is a simple problem when the integer is small, the complexity of factorization greatly increases as the integer increases. When the integer grows to more than 100,000 or so digits, the problem reaches a point at which it becomes too complex to solve using classical computing methods. But quantum computers, with their use of entanglement and superposition, can theoretically factor a number of any size. However, the largest number that has been factored on a quantum processor so far is 21. Now in a new study, physicists have set a new record for quantum factorization by developing the first quantum algorithm that can factor a three-digit integer, 143, into its prime factors, 11 and 13. The physicists, Nanyang Xu and coauthors at the University of Science and Technology of China in Hefei, have published their study on the new quantum computation algorithm in a recent issue of Physical Review Letters. They explain that, despite the potential for factoring any size number, quantum algorithms still face fundamental challenges. "Quantum algorithms can theoretically solve the factoring problem; however, it is still challenging for today's technologies to control a lot of qubits for a long enough time to factor a larger number," Xu told Phys.org. "The environmental noises and other imperfections make the quantum system so fragile that decoherence could destroy everything stored in qubits in a short time." As the physicists note in their study, the first and most well-known quantum algorithm for factorization is Shor's algorithm, which was developed by mathematician Peter Shor in 1994.
This algorithm, which involves quantum entanglement, is based on a circuit model in which a sequence of operations is performed to solve the problem. In the current study, Xu and coauthors use an alternative to Shor's algorithm called adiabatic quantum computation (AQC). Proposed by Edward Farhi, et al., in 2001, AQC was developed for optimization problems, in which the best value of many possible values is sought. Several computational problems, including factoring, have been formulated as optimization problems and then solved using AQC. Here, the scientists' algorithm builds on one of these formulations by Peng, et al., in 2008, which used AQC to factor the largest number before now, 21. Unlike Shor's algorithm, AQC does not run through a sequence of operations, but instead relies on quantum adiabatic processes. More specifically, the algorithm finds a mathematical function called the Hamiltonian in which all possible solutions are encoded as eigenstates, and the correct solution is encoded as the ground state. To solve a problem, the algorithm gradually evolves the Hamiltonian according to a mathematical equation, resulting in the system reaching its ground state and providing the correct answer. (In its physical implementation, the system consists of a liquid-crystal nuclear magnetic resonance (NMR) system like those used in magnetic resonance imaging (MRI), in which magnetic nuclei absorb and re-emit radiation at a specific frequency.) While the adiabatic-based strategy works well in theory, in reality it still faces challenges when factoring large numbers because the Hamiltonian's spectrum of all possible eigenstates grows exponentially with the size of the integer. So Xu and coauthors developed a way to suppress the spectrum's growth by simplifying the mathematical equations governing the Hamiltonian. 
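The optimization formulation can be pictured classically. The brute-force sketch below is emphatically not the quantum algorithm; it only illustrates what it means for the answer to sit at the minimum ("ground state") of a cost function. The cost $(N - pq)^2$ is one common choice in such factoring formulations, used here as an assumption for illustration:

```python
# Toy classical picture of "factoring as optimization": the cost
# ("energy") of a candidate pair (p, q) is (n - p*q)**2, which is zero
# exactly when p*q = n.  AQC reaches the minimum by slowly evolving a
# Hamiltonian; here we simply scan odd candidates for the lowest energy.
def factor_by_minimization(n):
    best = None
    for p in range(3, n, 2):
        for q in range(p, n, 2):
            energy = (n - p * q) ** 2
            if best is None or energy < best[0]:
                best = (energy, p, q)
    return best

energy, p, q = factor_by_minimization(143)
print(energy, p, q)   # minimum energy 0 is reached at the factors 11 and 13
```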
In the end, the physicists' simplified equations significantly decreased the growth rate of the spectrum to make it easier to factor larger numbers than before. "We use a new method and reduce the qubits needed in the algorithm, which finally made the factorization of 143 available in realization," Xu said. "Our work shows the practical importance of the adiabatic quantum algorithm. In the future, the strategies used here could lead to even larger integer factorization by quantum algorithms." "It is possible to factor a larger number using the strategies in our current paper on current quantum computing platforms," Xu said. "In this issue, we plan to improve our control ability towards the NMR quantum processor to factor a larger number, and the exact time complexity of the algorithm is still an open question." More information: Nanyang Xu, et al. Quantum Factorization of 143 on a Dipolar-Coupling Nuclear Magnetic Resonance System. PRL 108, 130501 (2012). DOI: 10.1103/PhysRevLett.108.130501 1 / 5 (2) Apr 11, 2012 If the computational power of classical computers is limited with uncertainty principle, why the computational power of quantum computers should be higher? 0.3 / 5 (37) Apr 11, 2012 Ahahahahaha... At first I thought that 143 was the number of digits. 5 / 5 (2) Apr 11, 2012 Ah, shucks! They cracked my private/public key pair :( 1 / 5 (5) Apr 12, 2012 Great. 11 and 13, the largest two primes that each fit in a half-byte. We are just blazing ahead with this quantum technology. How many million dollars did this cost to develop this computer? not rated yet Apr 12, 2012 Did they even use a quantum computer for this ? It's not clear from the article.
I'm going with a yes BUT, based on: "Quantum algorithms can theoretically solve the factoring problem; however, it is still challenging for todays technologies to control a lot of qubits for a long enough time to factor a larger number, Xu told Phys.org. The environmental noises and other imperfections make the quantum system so fragile that decoherence could destroy everything stored in qubits in a short time. As the physicists note in their study, the first and most well-known quantum algorithm for factorization is Shor's algorithm, which was developed by mathematician Peter Shor in 1994. This algorithm, which involves quantum entanglement, is based on a circuit model in which a sequence of operations is performed to solve the problem." So, it's a shady yes. 3.9 / 5 (7) Apr 12, 2012 How many million dollars did this cost to develop this computer According to you we should have never started banging rocks together because the development of fire was still some days in the future. Think of in what kind of abject poverty and terrible medical conditions you would be living in right now if we hadn't spent the occasional dollar on fundamental science in the past. If you're over thirty you'd be likely already dead without it. So stop your whining. not rated yet Apr 12, 2012 the human condition: a corporeal linear existence witnessing a new birth and hoping that intelligence as an attribute will finally exceed socially, entertainers and athlete's. i'm going back to my studies rather than discuss my perceived insights of quantum math in it's various forms. aarrrggghhh i'm so driven envious and jealous. 2 / 5 (4) Apr 12, 2012 According to you we should have never started banging rocks together because the development of fire was still some days in the future. Not quite. We already have classical computers that can factor numbers larger than ordinary human comprehension, so it's quite a different animal. 
At the rate things are progressing, they are still about 20 to 30 years or more away from creating a quantum computer that actually out-performs a mid-1990's era PC, even in simple "quantum-friendly" tasks, and it will probably cost a billion dollars each to do that. It looks like quantum computers will be factoring the first mega-byte sized number in about 30 years. Normal computers did that several decades ago. not rated yet Apr 18, 2012 We already have classical computers that can factor numbers larger than ordinary human comprehension, so it's quite a different animal. No, we do not have that. With conventional computers 128-bit symmetric keys are considered effectively unbreakable. It looks like quantum computers will be factoring the first mega-byte sized number in about 30 years. No, it will make non-special 128 bit numbers factorizable, but 256 bit numbers will still be out of reach. Normal computers did that several decades ago. Laughably wrong.
In opposition of Functor as super-class of Monad kahl at cas.mcmaster.ca Tue Jan 4 16:15:33 CET 2011 On Tue, Jan 04, 2011 at 02:24:21AM -0800, oleg at okmij.org wrote: > I'd like to argue in opposition of making Functor a super-class of > Monad. I would argue that superclass constraints are not the right > tool for expressing mathematical relationship such that all monads are > functors and applicatives. > The argument is practical. It seems that making Functor a superclass > of Monad makes defining new monad instances more of a chore, leading > to code duplication. To me, code duplication is a sign that an > abstraction is missing or misused. The argument about code duplication somehow seems to assume that class member instances need to be defined as part of the instance declaration. This is not the case, and in fact I am arguing in general against putting any interesting code into instance declarations, especially into declarations of instances with constraints (since, in ML terminology, they are functors, and putting their definition inside an instance declaration constrains their applicability). In my opinion, the better approach is to define (generalised versions of) the functions mentioned in the class interface, and then just throw together the instances from those functions. This also makes it easier to adapt to the ``class hierarchy du jour''. The point for the situation here is that although we eventually need definitions of all the functions declared as class members, there is absolutely nothing that constrains the dependency relation between the definitions of these functions to be conforming in any way to the class hierarchy. For a simpler example, assume that I have some arbitrary data type > data T a = One a | Two a a and assume that I am interested only in Ord instances, since I want to use T with Data.Set, and I am not really interested in Eq instances.
Assume that the order will depend on that for |a|, so I will define a function: > compareT :: (a -> a -> Ordering) -> T a -> T a -> Ordering Then I can throw together the necessary instances from that: > instance Ord a => Ord (T a) where > compare = compareT compare > instance Ord a => Eq (T a) where > (==) = eqFromCompare compare assuming I have (preferably from the exporter of Eq and Ord): > eqFromCompare :: (a -> a -> Ordering) -> (a -> a -> Bool) > eqFromCompare cmp x y = case cmp x y of > EQ -> True > _ -> False The same approach works for Oleg's example: > For the sake of the argument, let us suppose that Functor is a > superclass of Monad. Let us see how to define a new Monad > instance. For the sake of a better illustration, I'll use a complex > monad. I just happen to have an example of that: Iteratee. > The data type Iteratee is defined as follows: > type ErrMsg = String -- simplifying > data Stream el = EOF (Maybe ErrMsg) | Chunk [el] deriving Show > data Iteratee el m a = IE_done a > | IE_cont (Maybe ErrMsg) > (Stream el -> m (Iteratee el m a, Stream el)) > [...] > It _almost_ makes me wish the constraint go the other way: > > instance Monad m => Functor m where > > fmap f m = m >>= (return . f) > That is, we need an instance rather than a superclass constraint, and > in the other direction. The instance constraint says that every monad > is a functor. Moreover, > \f m = m >>= (return . f) > is a _proof term_ that every monad is a functor. We can state it once > and for all, for all present and future monads. I would expect that proof term to be exported by the package exporting Functor and Monad; let us define it here: > fmapFromBind (>>=) f m = m >>= (return .
f) Now you can write, no matter which class is a superclass of which: > bindIt return (>>=) (IE_done a) f = f a > bindIt return (>>=) (IE_cont e k) f = IE_cont e (\s -> k s >>= docase) > where > docase (IE_done a, stream) = case f a of > IE_cont Nothing k -> k stream > i -> return (i,stream) > docase (i, s) = return (bindIt return (>>=) i f, s) > instance Monad m => Monad (Iteratee el m) where > return = IE_done > (>>=) = bindIt return (>>=) > instance Monad m => Functor (Iteratee el m) where > fmap = fmapFromBind (>>=) Of course this assumes that you are not actually interested in an instance of shape: instance (Functor ...) => Functor (Iteratee el m), but this seems to be a plausible assumption. Defining the functionality really has nothing to do with declaring an instance of a type class, and it is normally better to keep the two separated. And that does not lead to any real code duplication, only extremely boring instance declarations. More information about the Haskell-prime mailing list
How to Inverse a 3X3 Matrix Edited by Bhish, Luv_sarah, Glutted, Peter and 33 others Calculating the inverse of a 3x3 matrix by hand is a tedious job. But this has several uses, including solving various matrix equations. Find det(M), the determinant of the matrix M. The determinant will usually show up in the denominator of the inverse. If the determinant is zero, the matrix won't have an inverse. Find M^T, the transpose of the matrix. Transposing means reflecting the matrix about the main diagonal, or equivalently, swapping the (i,j)th element and the (j,i)th. Find the determinant of each of the 2x2 minor matrices. Represent these as a matrix of cofactors as shown, and multiply each term by the sign indicated. The result of these steps is the adjugate matrix (sometimes also called the adjoint), notated Adj. Find the inverse by dividing the adjugate found in the previous step by the determinant from the first step. One can combine these steps by transposing, copying over the first two columns and rows, and finding the 2x2 determinants around each dot. Checking one's work computes the determinant three times; if they agree, this is the denominator. With this "torus" method, the signs are already correct.^[1] • Note that this same method can be applied to a matrix containing variables or unknowns, for example an algebraic matrix, M, and its inverse, M^-1. • Write down all your steps as it is extremely difficult to inverse a 3x3 matrix in your head. • Computer programs exist that work out the inverses of matrices for you^[2], up to and including the size of 30x30 matrices. • The adjugate matrix is the transpose of the matrix of cofactors; that is why we transpose the matrix in step 2, to find a transposed matrix of cofactors. • Check that your answer is accurate by multiplying M by M^-1. You should be able to verify that M*M^-1 = M^-1*M = I.
I is the identity matrix, consisting of 1s along the main diagonal and 0s elsewhere. If not, you made an error somewhere. • Not all 3x3 matrices have inverses. If the determinant of the matrix is equal to 0, then it does not have an inverse. (Notice that in the formula we divide by det(M). Division by zero is not defined.)
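The steps above translate directly into code. This is a minimal sketch (the function names are made up for illustration): determinant, transpose, signed 2x2 minors of the transpose (the adjugate), then division by the determinant:

```python
def minor(m, i, j):
    """The 2x2 matrix left after deleting row i and column j."""
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    # Cofactor expansion along the first row.
    return sum((-1) ** j * m[0][j] * det2(minor(m, 0, j)) for j in range(3))

def inverse3(m):
    d = det3(m)
    if d == 0:
        raise ValueError("determinant is 0: the matrix has no inverse")
    t = [[m[j][i] for j in range(3)] for i in range(3)]        # step 2: transpose
    adj = [[(-1) ** (i + j) * det2(minor(t, i, j))             # signed 2x2 minors
            for j in range(3)] for i in range(3)]              # = the adjugate
    return [[adj[i][j] / d for j in range(3)] for i in range(3)]
```

As the last tip suggests, multiplying M by the result should give the identity matrix I.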
Discrete Logarithms in GF(p) Using the Number Field Sieve

Results 1 - 10 of 53

- SIAM J. on Computing, 1997. Cited by 882 (2 self).
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.

- SIAM Journal on Computing, 1982. Cited by 393 (1 self).
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time of at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration.
This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored. AMS subject classifications: 82P10, 11Y05, 68Q10. 1 Introduction. One of the first results in the mathematics of computation, which underlies the subsequent development of much of theoretical computer science, was the distinction between computable and ...

- 2004. Cited by 369 (17 self).
Elliptic curves have been intensively studied in number theory and algebraic geometry for over 100 years and there is an enormous amount of literature on the subject. To quote the mathematician Serge Lang: It is possible to write endlessly on elliptic curves. (This is not a threat.) Elliptic curves also figured prominently in the recent proof of Fermat's Last Theorem by Andrew Wiles. Originally pursued for purely aesthetic reasons, elliptic curves have recently been utilized in devising algorithms for factoring integers, primality proving, and in public-key cryptography. In this article, we aim to give the reader an introduction to elliptic curve cryptosystems, and to demonstrate why these systems provide relatively small block sizes, high-speed software and hardware implementations, and offer the highest strength-per-key-bit of any known public-key scheme.

- 2004. Cited by 183 (3 self).
We present the first known implementation of elliptic curve cryptography over F_{2^p} for sensor networks based on the 8-bit, 7.3828-MHz MICA2 mote. Through instrumentation of UC Berkeley's TinySec module, we argue that, although secret-key cryptography has been tractable in this domain for some time, there has remained a need for an efficient, secure mechanism for distribution of secret keys among nodes. Although public-key infrastructure has been thought impractical, we argue, through analysis of our own implementation for TinyOS of multiplication of points on elliptic curves, that public-key infrastructure is, in fact, viable for TinySec keys' distribution, even on the MICA2. We demonstrate that public keys can be generated within 34 seconds, and that shared secrets can be distributed among nodes in a sensor network within the same, using just over 1 kilobyte of SRAM and 34 kilobytes of ROM.

- Proceedings of PKC 2001, volume 1992 of LNCS, 1992. Cited by 122 (11 self).
Abstract. This paper introduces a novel class of computational problems, the gap problems, which can be considered as a dual to the class of the decision problems. We show the relationship among inverting problems, decision problems and gap problems. These problems find a nice and rich practical instantiation with the Diffie-Hellman problems.
Then, we see how the gap problems find natural applications in cryptography, namely for proving the security of very efficient schemes, but also for solving a more than 10-year-old open security problem: Chaum's undeniable signature.

- 2000. Cited by 80 (11 self).
This paper introduces the XTR public key system. XTR is based on a new method to represent elements of a subgroup of a multiplicative group of a finite field. Application of XTR in cryptographic protocols leads to substantial savings both in communication and computational overhead without compromising security.

- 1996. Cited by 38 (0 self).
We present a new method to forge ElGamal signatures if the public parameters of the system are not chosen properly. Since the secret key is hereby not found, this attack shows that forging ElGamal signatures is sometimes easier than the underlying discrete logarithm problem. 1 Introduction. ElGamal's digital signature scheme [4] relies on the difficulty of computing discrete logarithms in the multiplicative group F_p^* and can therefore be broken if the computation of discrete logarithms is feasible. However, the converse has never been proved. In this paper we show that it is sometimes possible to forge signatures without breaking the underlying discrete logarithm problem.
This shows that the ElGamal signature scheme and some variants of the scheme must be used very carefully. The paper is organized as follows. Section 2 describes the ElGamal signature scheme. In Section 3 we present a method to forge signatures if some additional information on the generator is known. We show ...

- Math. Comp., 1999. Cited by 36 (8 self).
Abstract. The discrete logarithm problem in various finite abelian groups is the basis for some well known public key cryptosystems. Recently, real quadratic congruence function fields were used to construct a public key distribution system. The security of this public key system is based on the difficulty of a discrete logarithm problem in these fields. In this paper, we present a probabilistic algorithm with subexponential running time that computes such discrete logarithms in real quadratic congruence function fields of sufficiently large genus. This algorithm is a generalization of similar algorithms for real quadratic number fields.

- Advances in Cryptology – EUROCRYPT 2006, LNCS 4004, 2006. Cited by 27 (8 self).
Abstract. In this paper, we study the application of the function field sieve algorithm for computing discrete logarithms over finite fields of the form F_{q^n} when q is a medium-sized prime power.
This approach is an alternative to a recent paper of Granger and Vercauteren for computing discrete logarithms in tori, using efficient torus representations. We show that when q is not too large, a very efficient L(1/3) variation of the function field sieve can be used. Surprisingly, using this algorithm, discrete logarithm computations over some of these fields are even easier than computations in the prime field and characteristic two field cases. We also show that this new algorithm has security implications on some existing cryptosystems, such as torus based cryptography in T30, short signature schemes in characteristic 3 and cryptosystems based on supersingular abelian varieties. On the other hand, cryptosystems involving larger basefields and smaller extension degrees, typically of degree at most 6, such as LUC, XTR or T6 torus cryptography, are not affected.

- SIAM Journal on Computing, 2001. Cited by 26 (12 self).
Abstract. We use hypotheses of structural complexity theory to separate various NP-completeness notions. In particular, we introduce an hypothesis from which we describe a set in NP that is $\le^P_T$-complete but not $\le^P_{tt}$-complete. We provide fairly thorough analyses of the hypotheses that we introduce. Key words. Turing completeness, truth-table completeness, many-one completeness, p-selectivity, p-genericity. AMS subject classifications. 1. Introduction. Ladner, Lynch, and Selman [LLS75] were the first to compare the strength of polynomial-time reducibilities. They showed, for the common polynomial-time reducibilities, Turing ($\le^P_T$), truth-table ($\le^P_{tt}$), bounded truth-table ($\le^P_{btt}$), and many-one ($\le^P_m$), that ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=65019","timestamp":"2014-04-16T21:04:03Z","content_type":null,"content_length":"38176","record_id":"<urn:uuid:1f10695a-1ccd-42b0-aa65-683dd21887b3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
hard problem solving~~>< plz HELP

September 8th 2006, 01:05 AM

hard problem solving~~>< plz HELP
In a class, students made a full cup of coffee with milk for themselves. After finishing her cup, Kathy calculated that she drank 1/5 of the total amount of coffee and 1/7 of the total amount of milk used by all students. Assuming that everyone had cups of the same capacity, how many students were in the class? Find all possible answers. >.< so difficult, any help appreciated, thank you!!

September 8th 2006, 01:42 AM

Originally Posted by fring
In a class, students made a full cup of coffee with milk for themselves. After finishing her cup, Kathy calculated that she drank 1/5 of the total amount of coffee and 1/7 of the total amount of milk used by all students. Assuming that everyone had cups of the same capacity, how many students were in the class? Find all possible answers. >.< so difficult, any help appreciated, thank you!!

Let the amount of coffee consumed by the class be $c$ cups.
Let the amount of milk consumed by the class be $m$ cups.
We do not assume that $c$ and/or $m$ are integers, just that they are greater than zero (because Kathy drank some of both, neither can be zero).
Kathy drank a cup of coffee-milk mixture, and from what we are told we know that:
$1=\frac{c}{5}+\frac{m}{7}$.
Rearranging this gives:
$35=7c+5m\ \ \ \dots(1)$.
Also if there are $N$ students in the class, as they each drink a cup we have:
$N=c+m\ \ \ \dots(2)$.
From $(1)$ it is clear that $c\le 5$ cups and $m \le 7$, so $N \le 12$.
Now trial and error should allow you to find that the only solution consistent with $(1)$, $(2)$, and the constraints on $c$, $m$ and $N$ is:
$N=6,\ c=2.5,\ \mbox{and }m=3.5$.
You will need to check this yourself.

September 8th 2006, 01:51 AM

Originally Posted by CaptainBlack
Let the amount of coffee consumed by the class be $c$ cups.
Let the amount of milk consumed by the class be $m$ cups.
We do not assume that $c$ and/or $m$ are integers, just that they are greater than or equal to zero.
Kathy drank a cup of coffee-milk mixture, and from what we are told we know that:
$1=\frac{c}{5}+\frac{m}{7}$.
Rearranging this gives:
$35=7c+5m\ \ \ \dots(1)$.
Also if there are $N$ students in the class, as they each drink a cup we have:
$N=c+m\ \ \ \dots(2)$.
From $(1)$ it is clear that $c\le 5$ cups and $m \le 7$, so $N \le 12$.

$7c+5m=35$
$N$ is an integer less than 7, otherwise $c$ will be zero or negative.
Keep Smiling

September 8th 2006, 02:26 AM

Originally Posted by malaygoel
$7c+5m=35$
$N$ is an integer less than 7, otherwise $c$ will be zero or negative.
Keep Smiling

I saw no practical need to tighten up the constraint on $N$, as 12 is a small enough number that further reduction in computation is not worth the trouble (and the lower constraint becomes evident when one does the search; my search only went as far as $N=7$ as it was evident that no greater value was viable). My objective was to constrain the problem sufficiently so that a search for feasible solutions would not be too arduous, and I stopped when this was the case. I expect with some effort we could constrain the problem sufficiently to allow only a single feasible solution, and so need no search at all.

September 8th 2006, 03:20 AM

Originally Posted by CaptainBlack
I saw no practical need to tighten up the constraint on $N$, as 12 is a small enough number that further reduction in computation is not worth the trouble (and the lower constraint becomes evident when one does the search; my search only went as far as $N=7$ as it was evident that no greater value was viable). My objective was to constrain the problem sufficiently so that a search for feasible solutions would not be too arduous, and I stopped when this was the case. I expect with some effort we could constrain the problem sufficiently to allow only a single feasible solution, and so need no search at all.
You are right. The only solution is $N=6$, $c=5/2$ and $m=7/2$.
Keep Smiling
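The trial-and-error step can be checked mechanically. This short script (my own illustration, not from the thread) solves the two equations for each candidate $N$ and keeps only the feasible one:

```python
# Brute-force check of the thread's answer: for each candidate class size N,
# solve  c/5 + m/7 = 1  (Kathy's cup)  and  c + m = N  (total cups),
# then keep the N whose solution has strictly positive coffee and milk.

from fractions import Fraction

solutions = []
for n in range(1, 13):              # N <= 12, as derived in the thread
    c = Fraction(35 - 5 * n, 2)     # from 7c + 5m = 35 with m = N - c
    m = Fraction(7 * n - 35, 2)
    if c > 0 and m > 0:
        solutions.append((n, c, m))

print(solutions)   # only N = 6 survives, with c = 5/2 and m = 7/2
```

This confirms that the solution is unique: $N=6$, $c=5/2$, $m=7/2$.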
{"url":"http://mathhelpforum.com/algebra/5375-hard-problem-solving-plz-help-print.html","timestamp":"2014-04-18T13:57:37Z","content_type":null,"content_length":"15282","record_id":"<urn:uuid:4438b194-39bf-4d2e-b2bc-0d3569876366>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
143 is the largest number yet to be factored by a quantum algorithm

[Figure caption: Quantum factorization of 143 using the adiabatic quantum algorithm. As the system evolves to its ground state k = 6, it reaches a superposition of states 6 and 9, which denotes the answers 11 and 13. Image credit: Xu, et al. ©2012 American Physical Society]

(Phys.org) -- While factoring an integer is a simple problem when the integer is small, the complexity of factorization greatly increases as the integer increases. When the integer grows to more than 100,000 or so digits, the problem reaches a point at which it becomes too complex to solve using classical computing methods. But quantum computers, with their use of entanglement and superposition, can theoretically factor a number of any size. However, the largest number that has been factored on a quantum processor so far is 21. Now in a new study, physicists have set a new record for quantum factorization by developing the first quantum algorithm that can factor a three-digit integer, 143, into its prime factors, 11 and 13.

The physicists, led by Nanyang Xu at the University of Science and Technology of China in Hefei, have published their study on the new quantum computation algorithm in a recent issue of Physical Review Letters. They explain that, despite the potential for factoring any size number, quantum algorithms still face fundamental challenges. "Quantum algorithms can theoretically solve the factoring problem; however, it is still challenging for today's technologies to control a lot of qubits for a long enough time to factor a larger number," Xu told Phys.org. "The environmental noises and other imperfections make the quantum system so fragile that decoherence could destroy everything stored in qubits in a short time."

As the physicists note in their study, the first and most well-known quantum algorithm for factorization is Shor's algorithm, which was developed by mathematician Peter Shor in 1994.
This algorithm, which involves quantum entanglement, is based on a circuit model in which a sequence of operations is performed to solve the problem. In the current study, Xu and coauthors use an alternative to Shor's algorithm called adiabatic quantum computation (AQC). Proposed by Edward Farhi, et al., in 2001, AQC was developed for optimization problems, in which the best value of many possible values is sought. Several computational problems, including factoring, have been formulated as optimization problems and then solved using AQC. Here, the scientists' algorithm builds on one of these formulations by Peng, et al., in 2008, which used AQC to factor the largest number before now, 21. Unlike Shor's algorithm, AQC does not run through a sequence of operations, but instead relies on quantum adiabatic processes. More specifically, the algorithm finds a mathematical function called the Hamiltonian in which all possible solutions are encoded as eigenstates, and the correct solution is encoded as the ground state. To solve a problem, the algorithm gradually evolves the Hamiltonian according to a mathematical equation, resulting in the system reaching its ground state and providing the correct answer. (In its physical implementation, the system consists of a liquid-crystal nuclear magnetic resonance (NMR) system like those used in magnetic resonance imaging (MRI), in which magnetic nuclei absorb and re-emit radiation at a specific frequency.) While the adiabatic-based strategy works well in theory, in reality it still faces challenges when factoring large numbers because the Hamiltonian's spectrum of all possible eigenstates grows exponentially with the size of the integer. So Xu and coauthors developed a way to suppress the spectrum's growth by simplifying the mathematical equations governing the Hamiltonian. 
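The adiabatic encoding described above turns factoring into a minimization problem whose ground state is the answer. As a rough classical illustration of that idea (the specific cost function (N - pq)^2 and the odd-candidate restriction are my own assumptions in the spirit of such encodings, not details quoted from the paper):

```python
# Classical sketch of the optimization view used in adiabatic factoring:
# encode "factor N" as minimizing the cost  f(p, q) = (N - p*q)**2,
# whose global minimum (cost 0) sits exactly at a factor pair.
# The quantum algorithm finds this minimum via adiabatic evolution;
# here we simply scan odd candidates to exhibit the landscape's minimum.

def factor_by_minimization(n):
    for p in range(3, n, 2):            # odd candidates only (n assumed odd composite)
        for q in range(p, n, 2):
            if (n - p * q) ** 2 == 0:   # cost reaches its global minimum
                return p, q
    return None

print(factor_by_minimization(143))      # (11, 13)
```

The scan is of course exponential in the number of digits; the whole point of the adiabatic approach is to search this cost landscape without enumerating it.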
In the end, the physicists' simplified equations significantly decreased the growth rate of the spectrum to make it easier to factor larger numbers than before. "We use a new method and reduce the qubits needed in the algorithm, which finally made the factorization of 143 available in realization," Xu said. "Our work shows the practical importance of the adiabatic quantum algorithm. In the future, the strategies used here could lead to even larger integer factorization by quantum algorithms."

"It is possible to factor a larger number using the strategies in our current paper on current quantum computing platforms," Xu said. "In this issue, we plan to improve our control ability towards the NMR quantum processor to factor a larger number, and the exact time complexity of the algorithm is still an open question."

More information: Nanyang Xu, et al. Quantum Factorization of 143 on a Dipolar-Coupling Nuclear Magnetic Resonance System. PRL 108, 130501 (2012). DOI: 10.1103/PhysRevLett.108.130501

1 / 5 (2) Apr 11, 2012
If the computational power of classical computers is limited by the uncertainty principle, why should the computational power of quantum computers be higher?

0.3 / 5 (37) Apr 11, 2012
Ahahahahaha... At first I thought that 143 was the number of digits.

5 / 5 (2) Apr 11, 2012
Ah, shucks! They cracked my private/public key pair :(

1 / 5 (5) Apr 12, 2012
Great. 11 and 13, the largest two primes that each fit in a half-byte. We are just blazing ahead with this quantum technology. How many million dollars did it cost to develop this computer?

not rated yet Apr 12, 2012
Did they even use a quantum computer for this? It's not clear from the article.

not rated yet Apr 12, 2012
"Did they even use a quantum computer for this? It's not clear from the article."
I'm going with a yes BUT, based on: "Quantum algorithms can theoretically solve the factoring problem; however, it is still challenging for today's technologies to control a lot of qubits for a long enough time to factor a larger number, Xu told Phys.org. The environmental noises and other imperfections make the quantum system so fragile that decoherence could destroy everything stored in qubits in a short time. As the physicists note in their study, the first and most well-known quantum algorithm for factorization is Shor's algorithm, which was developed by mathematician Peter Shor in 1994. This algorithm, which involves quantum entanglement, is based on a circuit model in which a sequence of operations is performed to solve the problem." So, it's a shady yes.

3.9 / 5 (7) Apr 12, 2012
"How many million dollars did it cost to develop this computer?"
According to you we should have never started banging rocks together because the development of fire was still some days in the future. Think of what kind of abject poverty and terrible medical conditions you would be living in right now if we hadn't spent the occasional dollar on fundamental science in the past. If you're over thirty you'd likely already be dead without it. So stop your whining.

not rated yet Apr 12, 2012
the human condition: a corporeal linear existence witnessing a new birth and hoping that intelligence as an attribute will finally exceed, socially, entertainers and athletes. i'm going back to my studies rather than discuss my perceived insights of quantum math in its various forms. aarrrggghhh i'm so driven envious and jealous.

2 / 5 (4) Apr 12, 2012
"According to you we should have never started banging rocks together because the development of fire was still some days in the future."
Not quite. We already have classical computers that can factor numbers larger than ordinary human comprehension, so it's quite a different animal.
At the rate things are progressing, they are still about 20 to 30 years or more away from creating a quantum computer that actually out-performs a mid-1990's era PC, even in simple "quantum-friendly" tasks, and it will probably cost a billion dollars each to do that. It looks like quantum computers will be factoring the first mega-byte sized number in about 30 years. Normal computers did that several decades ago.

not rated yet Apr 18, 2012
"We already have classical computers that can factor numbers larger than ordinary human comprehension, so it's quite a different animal."
No, we do not have that. With conventional computers 128-bit symmetric keys are considered effectively unbreakable.
"It looks like quantum computers will be factoring the first mega-byte sized number in about 30 years."
No, it will make non-special 128 bit numbers factorizable, but 256 bit numbers will still be out of reach.
"Normal computers did that several decades ago."
Laughably wrong.
{"url":"http://phys.org/news/2012-04-largest-factored-quantum-algorithm.html","timestamp":"2014-04-19T02:42:53Z","content_type":null,"content_length":"85683","record_id":"<urn:uuid:e9a62d66-6ca9-4674-9d23-12509cc7d1ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
w!11 G!v3 M3D41
Input in standard form the equation of the given line. The line that passes through (1, 5) and (-2, 3).
• one year ago
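Since the thread shows no answer, here is a worked sketch (my own, in Python) of putting the line through two points into standard form Ax + By = C with integer coefficients:

```python
# Standard form Ax + By = C for the line through two points.
# The normal vector to direction (dx, dy) is (dy, -dx), which gives A and B.

from math import gcd

def standard_form(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1                 # A = dy
    b = -(x2 - x1)              # B = -dx
    c = a * x1 + b * y1         # C from plugging in the first point
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):   # convention: leading coefficient positive
        a, b, c = -a, -b, -c
    return a, b, c

print(standard_form((1, 5), (-2, 3)))   # (2, -3, -13), i.e.  2x - 3y = -13
```

Both points check out: 2(1) - 3(5) = -13 and 2(-2) - 3(3) = -13.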
{"url":"http://openstudy.com/updates/50a3a93fe4b0e745f520bee4","timestamp":"2014-04-19T07:19:08Z","content_type":null,"content_length":"39517","record_id":"<urn:uuid:c69c74d7-0b10-4fdd-bff6-72b318003f66>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Heuristic Algorithm and Simulation Approach to Relative Location of Facilities

Results 1 - 10 of 39

- 1989. Cited by 186 (10 self).
Short abstract, isn't it? P.A.C.S. numbers 05.20, 02.50, 87.10. 1 Introduction. Large Numbers. "...the optimal tour displayed (see Figure 6) is the possible unique tour having one arc fixed from among 10^655 tours that are possible among 318 points and have one arc fixed. Assuming that one could possibly enumerate 10^9 tours per second on a computer it would thus take roughly 10^639 years of computing to establish the optimality of this tour by exhaustive enumeration." This quote shows the real difficulty of a combinatorial optimization problem. The huge number of configurations is the primary difficulty when dealing with one of these problems. The quote belongs to M.W. Padberg and M. Grotschel, Chap. 9, "Polyhedral computations", from the book The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [124]. It is interesting to compare the number of configurations of real-world problems in combinatorial optimization with those large numbers arising in ...

- DISCRETE APPLIED MATHEMATICS, 2002. Cited by 108 (11 self).
The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities.
The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small size instances (typically, n < 25). Neighborhood search algorithms are the most popular heuristic algorithms to solve larger size instances of the QAP. The most extensively used neighborhood structure for the QAP is the 2-exchange neighborhood. This neighborhood is obtained by swapping the locations of two facilities and thus has size O(n²). Previous efforts to explore larger size neighborhoods (such as 3-exchange or 4-exchange neighborhoods) were not very successful, as it took too long to evaluate the larger set of neighbors. In this paper, we propose very large-scale neighborhood (VLSN) search algorithms where the size of the neighborhood is very large and we propose a novel search procedure to heuristically enumerate good neighbors. Our search procedure relies on the concept of improvement graph which allows us to evaluate neighbors much faster than the existing methods. We present extensive computational results of our algorithms on standard benchmark instances. These investigations reveal that very large-scale neighborhood search algorithms give consistently better solutions compared to the popular 2-exchange neighborhood algorithms considering both the solution time and solution accuracy.

- In Proceedings of the DIMACS Workshop on Quadratic Assignment Problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1994. Cited by 91 (16 self).
Quadratic Assignment Problems model many applications in diverse areas such as operations research, parallel and distributed computing, and combinatorial data analysis. In this paper we survey some of the most important techniques, applications, and methods regarding the quadratic assignment problem. We focus our attention on recent developments. 1. Introduction. Given a set $N = \{1, 2, \dots, n\}$ and $n \times n$ matrices $F = (f_{ij})$ and $D = (d_{kl})$, the quadratic assignment problem (QAP) can be stated as follows:
$\min_{p \in \Pi_N} \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij}\, d_{p(i)p(j)} + \sum_{i=1}^{n} c_{ip(i)}$,
where $\Pi_N$ is the set of all permutations of $N$. One of the major applications of the QAP is in location theory where the matrix $F = (f_{ij})$ is the flow matrix, i.e. $f_{ij}$ is the flow of materials from facility $i$ to facility $j$, and $D = (d_{kl})$ is the distance matrix, i.e. $d_{kl}$ represents the distance from location $k$ to location $l$ [62, 67, 137]. The cost of simultaneously assigning facility $i$ to locat...

- Evolutionary Computation, 2000. Cited by 48 (13 self).
The fitness landscape of the graph bipartitioning problem is investigated by performing a search space analysis for several types of graphs. The analysis shows that the structure of the search space is significantly different for the types of instances studied. Moreover, with increasing epistasis, the amount of gene interactions in the representation of a solution in an evolutionary algorithm, the number of local minima for one type of instance decreases and, thus, the search becomes easier. We suggest that other characteristics besides high epistasis might have greater influence on the hardness of a problem.
To understand these characteristics, the notion of a dependency graph describing gene interactions is introduced.

- Computers and Operations Research, 1997. Cited by 43 (2 self).

The Quadratic Assignment Problem (QAP) is one of the classical combinatorial optimization problems and is known for its diverse applications. In this paper, we suggest a genetic algorithm for the QAP and report its computational behavior. The genetic algorithm incorporates many greedy principles in its design and, hence, is called the greedy genetic algorithm. The ideas we incorporate in the greedy genetic algorithm include (i) generating the initial population using a randomized construction heuristic; (ii) new crossover schemes; (iii) a special-purpose immigration scheme that promotes diversity; (iv) periodic local optimization of a subset of the population; (v) tournamenting among different populations; and (vi) an overall design that attempts to strike a balance between diversity and a bias towards fitter individuals. We test our algorithm on all the benchmark instances of QAPLIB, a well-known library of QAP instances. Out of the 132 total instances in QAPLIB of varied sizes, the g...

- In Proc. of INFOCOM'10, 2010.
Cited by 36 (1 self).

Abstract—The scalability of modern data centers has become a practical concern and has attracted significant attention in recent years. In contrast to existing solutions that require changes in the network architecture and the routing protocols, this paper proposes using traffic-aware virtual machine (VM) placement to improve network scalability. By optimizing the placement of VMs on host machines, traffic patterns among VMs can be better aligned with the communication distance between them; e.g., VMs with large mutual bandwidth usage are assigned to host machines in close proximity. We formulate the VM placement as an optimization problem and prove its hardness. We design a two-tier approximate algorithm that efficiently solves the VM placement problem for very large problem sizes. Given the significant difference in the traffic patterns seen in current data centers and the structural differences of the recently proposed data center architectures, we further conduct a comparative analysis on the impact of the traffic patterns and the network architectures on the potential performance gain of traffic-aware VM placement. We use traffic traces collected from production data centers to evaluate our proposed VM placement algorithm, and we show a significant performance improvement compared to existing generic methods that do not take advantage of traffic patterns and data center network characteristics.

- Proc. Congress on Evolutionary Computation, IEEE, 1999. Cited by 32 (4 self).

A memetic algorithm (MA), i.e.
an evolutionary algorithm making use of local search, for the quadratic assignment problem is presented. A new recombination operator for realizing the approach is described, and the behavior of the MA is investigated on a set of problem instances containing between 25 and 100 facilities/locations. The results indicate that the proposed MA is able to produce high-quality solutions quickly. A comparison of the MA with some of the currently best alternative approaches -- reactive tabu search, robust tabu search and the fast ant colony system -- demonstrates that the MA outperforms its competitors on all studied problem instances of practical interest.

1 Introduction. The problem of assigning a set of facilities (with given flows between them) to a set of locations (with given distances between them) in such a way that the sum of the products of flows and distances is minimized is known as the facilities location problem [1] or the quadratic assignment ...

- INFORMS Journal on Computing, 1996. Cited by 26 (12 self).

The application of genetic algorithms (GA) to constrained optimization problems has been hindered by the inefficiencies of reproduction and mutation when feasibility of generated solutions is impossible to guarantee and feasible solutions are very difficult to find. Although several authors have suggested the use of both static and dynamic penalty functions for genetic search, this paper presents a general adaptive penalty technique which makes use of feedback obtained during the search along with a dynamic distance metric.
The effectiveness of this method is illustrated on two diverse combinatorial applications: (1) the unequal-area, shape-constrained facility layout problem and (2) the series-parallel redundancy allocation problem to maximize system reliability given cost and weight constraints. The adaptive penalty function is shown to be robust with regard to random number seed, parameter settings, number and degree of constraints, and problem instance.

- Computational Optimization and Applications, 1994. Cited by 24 (6 self).

Quadratic Assignment problems are in practice among the most difficult to solve in the class of NP-complete problems. The only successful approach hitherto has been Branch-and-Bound-based algorithms, but such algorithms are crucially dependent on good bound functions to limit the size of the space searched. Much work has been done to identify such functions for the QAP, but with limited success. Parallel processing has also been used in order to increase the size of problems solvable to optimality. The systems used have, however, often been systems with relatively few, but very powerful, vector processors, and have hence not been ideally suited for computations essentially involving non-vectorizable computations on integers. In this paper we investigate the combination of one of the best bound functions for a Branch-and-Bound algorithm (the Gilmore-Lawler bound) and various testing, variable binding and recalculation of bounds between branchings when used in a parallel Branch-and-Bo...
Physics Forums - View Single Post - relativistic balance

I read in N. David Mermin, It's About Time (Princeton University Press, 2005), p. 144: "Furthermore, even when you put together identical bricks, it turns out (in the relativistic case) that the mass of the object you construct depends on how you put the bricks together." In order to test the way in which the students understand the statement, I proposed the following problem to them: Consider a high-sensitivity balance with equal arms. Put on its left pan a cube of volume a³. Put on its right pan a sphere of radius r made from the same material (density ρ). The balance is located in a uniform vertical gravitational field (g). The balance being in a state of mechanical equilibrium and in thermal equilibrium with the surroundings, determine the relationship between a and r. What is the solution you propose?
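As a purely classical baseline (my own addition, ignoring the relativistic binding- and thermal-energy contributions the problem is hinting at), equating the rest masses of two bodies of the same density gives:

```latex
\rho a^{3} = \tfrac{4}{3}\pi \rho r^{3}
\quad\Longrightarrow\quad
a = \left(\tfrac{4\pi}{3}\right)^{1/3} r \approx 1.61\, r .
```

The intended discussion is presumably how relativistic corrections to the masses modify this naive relation.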
Paper-folding geometry 1

Origami is a popular hobby around the world. Many books have been written and many clubs organized to teach this art of paper folding. Our school also has such a club, and its president happens to be a co-president of our Math club. I thought of learning origami and trying to use it for geometry. Once I found in the Google library a book that was first printed in 1901. It was about paper-folding geometry, and I had a lot of fun doing some of the problems described by T. Sundara Rao. But all of the problems used a square as a base. I started to look around the internet and found some more good sources that can be used at early stages of learning geometry. But I believe that we can do hands-on paper-folding geometry starting, possibly, from day one.

Let me try to build something basic around simple paper folding that may be useful even for younger kids. One big advantage of this method is that we simultaneously develop 3D imagination and first "knowledge" of geometry. For example, if we fold a sheet of paper, we will get a straight line, and everyone can see that two planes intersect in a straight line. If we fold our paper in a different direction, we will get two intersecting lines. Straighten the paper out, and you will see that the two lines define only one plane and intersect at one point.

Do straight lines always intersect? Fold a paper in half, then fold one half again, and you get two parallel lines. Take a toothpick, punch a hole at any point that does not belong to either of your fold lines, and leave the toothpick inside. The toothpick defines a third line that intersects the plane (the paper) but will never intersect the other two lines that lie in our plane. So we have modeled skew lines.

We can ask a lot of questions based only on the experiments that were done, for example:
• How many common points do a paper sheet and a toothpick have?
• Is it possible that they have only two common points? Three or more?
• Show how they can have more than one point of intersection.
• How many planes can your ‘toothpick straight line’ belong to?
etc. To be continued...
Circles: You Are Chordially Invited Quiz

Think You Got it Down? Test your knowledge again. Think you’ve got your head wrapped around Circles? Put your knowledge to the test. Good luck — the Stickman is counting on you!

Q. Which of the following is a false statement?
• A diameter of a circle is also a chord of the same circle.
• A radius of a circle can also be a chord of the same circle.
• A central angle is never congruent to its associated chord.
• In congruent circles, if two chords are congruent, then their associated central angles are congruent.
• In congruent circles, if two central angles are congruent, then their associated chords are congruent.

Q. ⊙O has a radius of 2 cm. What is the length of the longest chord of ⊙O you can possibly draw?
• 1 cm
• 2 cm
• 4 cm
• 8 cm
• There is no limit; you can draw a chord as long as you like

Q. In the figure below, AC and BD are diameters of ⊙O. AB must be congruent to what other segment?
• None of the above

Q. An angle is inscribed in a circle and intercepts an arc with measure 78°. What is the measure of the inscribed angle?
• We don't have enough information to find the inscribed angle's measure

Q. In the figure below, points A, B, and C are on ⊙O. Which of the following must be true?
• AC = AB
• AC = BC
• AC = 2 × AO
• m∠AOB = 2 × m∠ACB
• m∠AOB = 360° – m∠ACB

Q. In the figure below, points A, B, and C are on the circle and AB = BC = CA. What is the measure of arc AB?
• There is not enough information to find the measure of arc AB

Q. In the figure below, points A, B, and C are on the circle and ∠ACB is a right angle. Which of the following statements must be true?
• Arc AB is a semicircle
• AC > AB
• AC < AB
• AC + CB = AB
• None of the above

Q. Which of the following measures cannot be the measure of an inscribed angle?
• An inscribed angle might have any of these measures

Q. An inscribed angle has a measure of 45°. What is the measure of the arc it intercepts?

Q. An inscribed angle has a measure of 58°. What is the measure of the arc it intercepts?
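Several of the questions above turn on the Inscribed Angle Theorem, which states that an inscribed angle measures half of its intercepted arc. As a small sketch (my own helper, not part of the quiz):

```python
def inscribed_angle_from_arc(arc_deg):
    """Inscribed Angle Theorem: the inscribed angle is half its intercepted arc."""
    return arc_deg / 2

def arc_from_inscribed_angle(angle_deg):
    """Inverse direction: the intercepted arc is twice the inscribed angle."""
    return 2 * angle_deg
```

So a 78° arc is intercepted by a 39° inscribed angle, a 45° inscribed angle intercepts a 90° arc, and a 58° inscribed angle intercepts a 116° arc.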
Presentation Summary

The author discusses constructivist learning techniques in a program for secondary mathematics teacher preparation at a large state university. Fifteen students were selected for participation in the program. The selection was based on GPA and whether students expressed interest in secondary teaching. Students were required to take Discrete Mathematics, Modern Geometry, and a third course, a practicum. The faculty were encouraged to use constructivist techniques. This report focuses on the practicum course, in which students met twice weekly in a discussion section and in a Macintosh-based computer lab. The stated objectives of the practicum were as follows.

(a) To introduce students to appropriate technology. In this case, geometry, logic and fractal modeling software were made available.

(b) To encourage "active learning," students were required to participate in problem solving, group discussions, and the presentation of a semester project.

(c) To encourage awareness of learning styles and teaching styles, students were required to discuss and critique the teaching of faculty associated with the program. They were also required to write three papers on their own learning styles.

(d) To encourage use of constructivist techniques, students were required to demonstrate their knowledge of a mathematical concept by researching its historical roots, experimenting with various problem-solving techniques from a historical to a modern perspective, and then presenting their findings to the class. Students were also required to apply constructivist techniques in an actual classroom environment.

(e) To develop professional contacts for the future, the program adopted a "cohort" model, suggested by Professor Uri Treisman, in which students take the mathematics courses required for certification as a group. They also had the opportunity to meet with outstanding high school teachers from the area.
The success of the program is evaluated based upon faculty response, a Likert scale student survey, and student exit interviews. Carol Jean Bell, University of Texas, Austin
The Secret to Understanding Arc Flash Calculations

A few years ago, the term "arc flash" crept into our electrical technical vocabulary. Since that time, performing arc flash calculations remains a challenge for many of us. Calculating incident energy levels and arc flash boundary distances, for the purpose of estimating the hazard risk category (HRC) a worker would be exposed to while working on electrical equipment, opens a window into the inner workings of the power distribution system. Arc flash calculations can tell us a great deal about how the system will behave during a fault condition. They also offer us a golden opportunity to optimize the system for safety and attempt to prevent the hazard from happening in the first place. Arc flash regulations may be one of the best things that have ever happened to electrical design, because they force engineers to look closer at details they might otherwise have overlooked and put the power system calculations front and center in the design process. The very notion of considering arc flash early on in the design of a power distribution system is not only prudent, but also economical.

The following two documents are the foundation for truly understanding arc flash calculations:
• NFPA 70E, Standard for Electrical Safety in the Workplace, 2012 Edition
• IEEE Std 1584, Guide for Performing Arc-Flash Hazard Calculations, 2002 Edition

In this article, we'll concentrate on NFPA 70E instead of IEEE Std 1584. The calculations shown below will also focus on alternating current systems.

Chapter 1, Safety-Related Work Practices (Art. 100 Definitions)

The definitions in Chapter 1 include the terms used in the calculations, which help you understand the concept.

Boundary, arc flash. When an arc flash hazard exists, an approach limit at a distance from a prospective arc source within which a person could receive a second-degree burn if an electrical arc flash were to occur.

Boundary, limited approach.
An approach limit at a distance from an exposed energized electrical conductor or circuit part within which a shock hazard exists.

Boundary, prohibited approach. An approach limit at a distance from an exposed energized electrical conductor or circuit part within which work is considered the same as making contact with the electrical conductor or circuit part.

Boundary, restricted approach. An approach limit at a distance from an exposed energized electrical conductor or circuit part within which there is an increased risk of shock (due to electrical arc-over combined with inadvertent movement) for personnel working in close proximity to the energized electrical conductor or circuit part.

Ground fault. An unintentional, electrically conducting connection between an ungrounded conductor of an electrical circuit and the normally non-current-carrying conductors, metallic enclosures, metallic raceways, metallic equipment, or earth.

Incident energy. The amount of energy impressed on a surface, at a certain distance from the source, generated during an electrical arc event. One of the units used to measure incident energy is calories per centimeter squared (cal/cm²).

Incident energy analysis. A component of an arc flash hazard analysis used to predict the incident energy of an arc flash for a specified set of conditions.

Qualified person. One who has skills and knowledge related to the construction and operation of the electrical equipment and installations — and has received safety training to recognize and avoid the hazards involved.

Unqualified person. A person who is not a qualified person.

Short circuit current rating. The prospective symmetrical fault current at a nominal voltage to which an apparatus or system is able to be connected without sustaining damage exceeding defined acceptance criteria.
Informative Annex C, Limits of Approach

Annex C introduces the following logical concept: "As the distance between a person and the exposed energized conductors or circuit parts decreases, the potential for electrical accident increases." It also breaks down the discussion of safe approach distance for both unqualified and qualified persons. The Annex also illustrates the limits of approach, the basic concept of which is illustrated in Figure 1.

Table 130.4(C)(a) of NFPA 70E introduces "Approach Boundaries to Energized Electrical Conductors or Circuit Parts for Shock Protection, Alternating-Current Voltage Systems." The prohibited approach boundary, restricted approach boundary, and limited approach boundary are all dependent on system voltage.

Informative Annex D, Incident Energy and Arc Flash Boundary Calculation Methods

Annex D introduces five sets of equations to calculate the arc flash boundary and/or the incident energy level. It also provides formulas for calculating arc flash energies and boundaries to be used with current-limiting Class L and Class RK1 fuses as well as with circuit breakers. This Annex also includes numerical examples that demonstrate the calculation procedure. The equations in this Annex can be used for low-voltage and medium-voltage systems, but each has its own limitations; thus, the reader must use the set of equations that best suits his or her application. The limitations are in terms of voltage, short circuit current range, open air space, or inside a cubicle (applicable to arc flashes emanating from within switchgear, motor control centers, or other electrical equipment enclosures). For typical low-voltage applications (<600V), the following equations seem to fit best.

The following equation is used to estimate the incident energy in a cubic box (20 in. on each side):

$$E_{MB} = 1038.7\, D_B^{-1.4738}\, t_A \left(0.0093\,F^2 - 0.3453\,F + 5.9675\right)$$

E_MB is the maximum 20-in. cubic-box incident energy in cal/cm².
D_B is the distance from the arc electrodes in inches. D_B is the working distance, and it is 18 in. for low-voltage applications; the origin of this value is NFPA 70, Table 110.26(A)(1), Working Space (Low Voltage).

F is the short circuit current in kA (valid for the range 16 kA to 50 kA) for the circuit under consideration.

t_A is the arc duration in seconds. To calculate t_A, first calculate the arc fault current I_A from the following equation:

$$\log_{10}(I_A) = K + 0.662\log_{10}(I_{bf}) + 0.0966\,V + 0.000526\,G + 0.5588\,V\log_{10}(I_{bf}) - 0.00304\,G\log_{10}(I_{bf})$$

I_A is the arc fault current.
I_bf is the bolted short circuit current (3-phase symmetrical rms, kA).
G is the gap between conductors or buses; obtain the value of G from Table D.7.2, Factors for Equipment and Voltage Classes.
K is -0.153 for open air or -0.097 for "in box."
V is the system voltage (0.208 kV to 15 kV).

Then calculate $I_A = 10^{\log_{10}(I_A)}$.

Time is the most controllable factor in the amount of incident energy and can be controlled by the settings of the upstream circuit breaker during the time-current characteristic (TCC) coordination study. The time can be obtained directly from the protective device's TCC curve. The maximum value of the time to be used in calculations is 2 sec. For 480V systems, the industry-accepted minimum level for a sustaining arcing fault is 38% of the available bolted-fault, 3-phase short circuit current. The highest incident energy exposure could occur at these lower levels, where the overcurrent device could take seconds or minutes to open. Notice that you can use 0.85 × I_A to find a second arc duration; this second arc duration accounts for variations in the arcing current and in the time for the overcurrent device to open. Use both arcing currents (I_A and 0.85 × I_A) to obtain two values of t_A from the TCC of the upstream protective devices, calculate the incident energy for each, and use the largest amount.
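As a sketch of the calculation procedure just described (my own code and variable names; a convenience for exploration, not a substitute for the standard or for an engineering review):

```python
import math

def arcing_current_kA(i_bf_kA, v_kV, gap_mm, in_box=True):
    """Arc fault current I_A from the logarithmic equation above.
    i_bf_kA: bolted 3-phase symmetrical fault current (kA);
    v_kV: system voltage (0.208-15 kV); gap_mm: conductor gap G (mm);
    K is -0.097 "in box" or -0.153 in open air."""
    K = -0.097 if in_box else -0.153
    lg = math.log10(i_bf_kA)
    log_ia = (K + 0.662 * lg + 0.0966 * v_kV + 0.000526 * gap_mm
              + 0.5588 * v_kV * lg - 0.00304 * gap_mm * lg)
    return 10 ** log_ia

def incident_energy_cal_cm2(f_kA, t_a_s, d_b_in=18.0):
    """Maximum 20-in. cubic-box incident energy E_MB (cal/cm^2).
    f_kA: short circuit current (valid 16-50 kA); t_a_s: arc duration (s),
    capped at the 2 s maximum noted above; d_b_in: working distance (in.)."""
    t = min(t_a_s, 2.0)
    return (1038.7 * d_b_in ** -1.4738 * t
            * (0.0093 * f_kA ** 2 - 0.3453 * f_kA + 5.9675))
```

For example, a 480V bus with a 30 kA bolted fault and a 32 mm gap gives an arcing current near 16 kA; with a 0.1 sec clearing time at the 18 in. working distance, E_MB comes out around 5.8 cal/cm².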
Another set of data you will find useful when performing arc flash calculations is the 480V portion of Table D.7.7, "Incident Energy and Arc Flash Protection Boundary by Circuit Breaker Type and Rating, 480V and Lower":

│ Rating (A)   │ Breaker (Type) │ Trip Unit (Type) │ Incident Energy (J/cm²) │ Arc Flash Boundary (mm) │
│ 100 to 400   │ MCCB           │ TM or M          │ 0.189 I_bf + 0.548      │ 9.16 I_bf + 194         │
│ 600 to 1,200 │ MCCB           │ TM or M          │ 0.223 I_bf + 1.590      │ 8.45 I_bf + 364         │
│ 600 to 1,200 │ MCCB           │ E, LI            │ 0.377 I_bf + 1.360      │ 12.50 I_bf + 428        │

In this table, MCCB stands for molded-case circuit breaker, TM is a thermal-magnetic trip unit, M is a magnetic (instantaneous-only) trip unit, and E is an electronic trip unit that has three characteristics, which may be used separately or in combination: long time, short time, and instantaneous.

The equations in the table have one unknown: I_bf. When the incident energy is known, the HRC can be determined from the information in Table 2. I_bf is based on a working distance of 455 mm (18 in.). I_bf is between 700A and 106,000A, and TCC curves are not necessary when I_bf is in that range. The equations above can be used for checking calculations or in lieu of detailed calculations. The incident energy is in J/cm² and needs to be converted to cal/cm² as follows: 1 J/cm² = 0.238902957619 cal/cm².

Informative Annex H, Guidance on Selection of Protective Clothing and Other Personal Protective Equipment

Table H.3(b) provides guidance on the selection of arc-rated clothing and other personal protective equipment (PPE) for use when incident exposure is determined by a hazard analysis. By calculating the incident energy, you can determine the HRC; then use Table H.3(b) to determine the PPE.

Table H.4(a) for low-voltage systems introduces maximum 3-phase bolted fault-current limits at various system voltages and fault clearing times of circuit breakers for recommended use of 8 cal/cm² and 40 cal/cm² PPE in an "arc-in-a-box" situation.
Table H.4(b) for high-voltage systems introduces maximum 3-phase bolted fault-current limits at various system voltages and fault clearing times of circuit breakers for recommended use of 8 cal/cm² and 40 cal/cm² PPE in an "arc-in-a-box" situation.

Tables H.4(a) and H.4(b) can really help during the design and review stages of an arc flash study. These two tables can be used in several ways, as follows:
• Knowing the maximum 3-phase fault current, a maximum value of the upstream protection fault clearing time can be established in order to achieve an HRC value of 2 or 4.
• Knowing the 3-phase fault current at a point in the system and the upstream circuit breaker clearing time, you can use the tables to check the calculations if you are reviewing a study without actually performing the calculations yourself.
• You can establish a maximum for the 3-phase short circuit current in a new system, and use it as a criterion for the design.

In conclusion, arc flash regulations have brought a great deal of challenge to the industry, but they also present a great opportunity to improve electrical safety and the quality of power distribution system design. Electrical designers and project reviewers alike should look to arc flash calculations as a tool for continued improvement.

Elgazzar is a senior electrical power engineer for the federal government and is currently registered as a professional engineer in the states of Virginia and South Carolina. He can be reached at
September 2

The Fastest Man

The Bell X-2 rocket plane was built to explore the flight regime beyond Mach 3. Dropped from a Boeing B-50 mother plane, it would fly higher and faster than humans had dared before. Early tests indicated that aircraft at Mach 3 would encounter severe aerodynamic heating and severe stability problems. On September 7, 1956, test pilot Ivan Kincheloe (standing) became the first pilot to exceed 100,000 feet, flying the X-2 to an altitude of 126,200 feet. The saga of Icarus continues. Fifty years ago, on September 27, 1956, Milburn "Mel" Apt (seated) flew to Mach 3.2, becoming the first man to exceed three times the speed of sound. Having been instructed not to attempt any rapid control movements at high speed, Mel flew a nearly perfect flight profile. Unfortunately, shortly after reaching top speed the X-2 went out of control, leading to a flat spin. Both aircraft and pilot were lost. Apt's widow was informed that day. Until the arrival of the X-15, no one would fly so high or so fast.

This amazing photo shows Atlantis and the ISS silhouetted against the Sun.

The "Faint Young Sun" has long been a paradox. Astrophysicists have developed detailed models of the Sun's evolution. According to these models, life should not have evolved on Earth, because 4 billion years ago the Sun shone with barely 70% of its present luminosity. Earth's temperature would have been 15 degrees below zero centigrade and the planet would have been frozen solid. Evolution of life would have been very unlikely. Geology shows evidence of sedimentation 4 billion years ago, indicating the presence of rivers and seas. Other geological markers confirm the presence of liquid water. Paleontology dates the earliest organisms as at least 3.4 billion and possibly 4 billion years old. Clearly water and life both existed when the models say that Earth was frozen solid. The fact that life exists to read this post contradicts the standard solar model. This conflict with observations is the Faint Young Sun Paradox.
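The quoted sub-zero temperature is easy to sanity-check with a back-of-envelope scaling (my own estimate, assuming Earth's albedo and greenhouse warming were the same as today): radiative equilibrium temperature scales as the fourth root of the solar luminosity.

```python
T_now_K = 288.0          # present global mean surface temperature (assumed)
L_fraction = 0.70        # faint young Sun: about 70% of present luminosity
T_young_K = T_now_K * L_fraction ** 0.25   # T scales as L^(1/4)
celsius = T_young_K - 273.15               # roughly -10 C, near the quoted figure
```

Different albedo and greenhouse assumptions move this by a few degrees, but the frozen-Earth conclusion is robust.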
You heard it here first: Anousheh Ansari is the true leader of the Persian people. She represents the aspirations of women worldwide. When the history of regime change is written, her influence will be noted. She has advanced the cause of freedom as much as a squadron of B-52's.

Dozens of space travel enthusiasts, most of them women, burst into applause at dawn at an observatory near Tehran as the spacecraft carrying the first Iranian woman to travel into space appeared in the sky. Anousheh Ansari, who began her journey into space Monday aboard a Soyuz TMA-9 capsule from Baikonur, Kazakhstan, has become an inspiration to women in male-dominated Iran. The Soyuz spacecraft containing Ansari, Expedition 14 Commander Michael Lopez-Alegria, and Flight Engineer Mikhail Tyurin docked with the International Space Station on Wed. Sept. 20, two days after launch. Space enthusiasts gathered Saturday at the Zaferanieh Observatory in Tehran were rapt as they followed the progress of the space station, visible to the naked eye for about two minutes, as it streaked across the sky. "Anousheh is my hope," said teenager Delageh Dabdeh, watching the spacecraft as tears of joy rolled down her cheek. "She will shine in Iranian history as the woman who broke barriers and succeeded in conquering Space with her endeavour. Ansari has shown Iranian women the road to progress. We only need to believe in ourselves." (AP)

Ansari said she has received many messages from around the world, including Iran, particularly from girls and women. "I want to reach women and girls in remote parts of the world where women are not encouraged to go into science and technology jobs," she said. "They should believe in what they want and pursue it." (NY Times)

In the country where Ansari was born, women are forced to wear headscarves, a man can have up to four wives, minorities are suppressed, and protesters are jailed without trial.
Her adopted country offers the opportunity to get rich, travel in Space and look great at 40. She has a bachelor's degree in EECS, a master's degree in EE, and is getting her next degree in Australia! Which system is right and which is an axis of evil?

Mahndisa's Thoughts are most valuable. Over the weekend she posed some very thoughtful questions. I hope to answer them adequately. Mahndisa, you are welcome to ask more. Hopefully this can answer Q9's question too.

1) When you speak of 2 objects separated by a distance in spacetime, from what frame are you taking the measurements; the observer frame, the frame of the moving object, or what?

Separation ds between two objects is an invariant, as in Special Relativity, regardless of where it is measured from.

2) Are the objects moving with respect to one another as the most general case of your equations?

That separation ds is also invariant if they are moving with respect to one another.

3) Are calculations in the non-inertial reference frames a feature of your theory, or are you performing calculations in the weak gravity limit only?

Someone should have asked this of Albert, because Special Relativity makes no allowance for gravity. SR can be modified to account for gravity, which will lead to a c change.

4) When you invoke a changing c, how is the expression modified in non-inertial frames?

Change in c results directly from a non-inertial frame with gravity included.

5) What is the mechanism for the change in c?

This results from unifying the local conditions of Special Relativity (which do not account for gravity) with the large-scale Universe of General Relativity. In SR, the interval ds is given by:

$$ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2$$

6) Are you assuming that the large-scale structure of the Universe is Euclidean, given the expression R = ct?

No, it only appears Euclidean on the local scale. On the large scale it is spherical, of radius R = ct.
7) Given this information, how have you applied the Minkowski metric to your derivations of distance and your distinctions between timelike, lightlike and spacelike separations?

As should be seen above, this curved metric with changing c reduces to the Minkowski metric. The math links SR and GR.

8) I also thought that you assumed a spherically symmetric spacetime in your model.

Correct. R = ct applies to the distance from the Big Bang singularity. On the local scale, distances can be considered as little r = ct. On the large scale, those distances are geodesics. For instance, the distance light has travelled since the Big Bang is (3/2)ct.

9) Do you think that your equations are scale invariant?

Not quite, because all this predicts that the Universe has a limited size. For example, the power spectrum of the CMB is not scale invariant, something the WMAP team blithely ignores.

10) Are your equations diffeomorphism invariant?

Yes, they are the same under all coordinate systems. This math should show that they are also Lorentz invariant.

I wish to expand upon question 5, to answer both Mahndisa and Quasar. As Einstein figured out in 1917, mass of the Universe will cause light to follow circular paths, like satellite orbits. Every photon that we see today appears to have the same velocity, because they are orbiting at the same Space/Time distance from the Big Bang origin. When the Universe expands, that distance increases. Like a satellite shifting to a higher orbit, velocity goes down.

Gebar, you are right about the equations of the Lorentz transformation. They are the equations of a sphere!

Many questions have come in lately. Today Kea's curiosity about coral reefs will be answered. Mahndisa's questions will be next, with both graphics and equations. Since we can't afford a SNAP spacecraft, we must be creative. The coral reefs of Queensland and Hawaii are an indescribable adventure. Snorkelling in the reef is a visit to another world.
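The satellite analogy above can be made quantitative under this blog's own postulates, R = ct and GM = tc^3 (assumptions of this model, not standard cosmology). The circular-orbit speed at the scale of the whole Universe then works out to c itself:

```latex
\[
v_{\text{orbit}} = \sqrt{\frac{GM}{R}} = \sqrt{\frac{t c^{3}}{c\,t}} = \sqrt{c^{2}} = c
\]
```

So every photon "orbits" at speed c, and since GM = tc^3 with M fixed makes c fall off as t^(-1/3), that orbital speed declines as the Universe ages, just as a satellite raised to a higher orbit slows down.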
Among the reef's ancient wisdom are clues to cosmology. Among the equipment left on the Moon by Apollo astronauts was the Lunar Laser Ranging Experiment (LLRE). This simple passive device uses corner reflectors like those on a bicycle. By bouncing laser beams from Earth, astronomers can measure distances to the Moon with great accuracy. Data from the LLRE has told us that the Moon still has a liquid core, that Newton's G is indeed constant, and provides one more test of General Relativity. Most important for this narrative, the LLRE tells us that the Moon is slowly drifting away from the Earth.

Most of this apparent drift is due to tidal forces. As the Moon creates tides, the tidal bulge outraces the Moon due to Earth's 24-hour rotation. This bulge pulls the Moon forward by a tiny amount, increasing the Moon's orbital velocity. In this way angular momentum is transferred from Earth to Moon across 384,000 kilometres of Space. This small acceleration is causing the Moon to slowly drift away. After 35 years of lunar ranging, this drift is measured to be 3.84 cm per year.

Geologists and paleontologists can tell more precisely how the Moon's orbit has changed. Coral gains layers on both daily and yearly cycles, dependent upon tides caused by the Moon. By studying fossilised coral, paleontologists can tell the length of Earth's day, therefore how much angular momentum Earth has lost. Growth rings in coral tell the height of lunar tides, indicating how close the Moon was in the past. Earth's record tells us that the Moon's average recession over the last 650 million years is only 2.17 cm per year.

Small discrepancies in orbits can be very significant. Copernican theory finally triumphed over Ptolemy because it could predict planetary orbits more precisely than epicycles. Mercury's orbital ellipse precesses at 5,600 arc seconds per century, yet a change of only 43 arc seconds per century was enough to verify General Relativity.
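The lunar-ranging arithmetic is easy to sketch numerically, assuming this blog's GM = tc^3 hypothesis (so that c varies as t^(-1/3), giving dc/c = -dt/(3t) per year). The age of the Universe and the Earth-Moon distance below are rounded inputs, not precise values:

```python
# Sketch of the predicted apparent lunar drift from a slowing c.
# Assumption (this blog's model, not standard physics): GM = tc^3,
# so c is proportional to t^(-1/3) and |dc/c| = dt/(3t) per year.

AGE_YEARS = 13.7e9             # assumed age t of the Universe, in years
MOON_DISTANCE_CM = 3.84402e10  # 384,402 km expressed in centimetres

# Fractional change in c over one year: roughly 1 part in 41 billion.
dc_over_c = 1.0 / (3.0 * AGE_YEARS)

# A slower c lengthens the laser round-trip time, which reads as extra
# apparent recession of the Moon in the ranging data.
apparent_extra_cm_per_year = MOON_DISTANCE_CM * dc_over_c

print(f"dc/c per year  ~ 1 in {3.0 * AGE_YEARS:.3g}")
print(f"apparent drift ~ {apparent_extra_cm_per_year:.2f} cm per year")
```

Adding this roughly 0.9 cm/yr to the tidal recession would close most of the gap between the ranged 3.84 cm/yr and the coral-record 2.17 cm/yr, which is the blog's claim rather than established physics.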
With the additional momentum, the Moon's recession today is no more than about 2.9 cm per year. If the Moon appears to recede 1/3 faster than geology says, it is a serious anomaly. If the speed of light slows, that would increase the time for light to return from the Moon, making the Moon appear to recede faster. From GM=tc^3, predicted c change per year is 1 in 41 billion. Multiplied by the Moon's distance of 384,402 km, that distance will appear to increase by 0.94 cm per year. Change in c may precisely account for the discrepancy in the Moon's drift. Like Mercury's orbit, the Moon may give us clues about cosmology and the Universe. As c change makes expansion appear to accelerate, it also makes the Moon appear to recede faster.

Above picture is from Australia, below is Hanauma Bay, Oahu.

SpaceX was founded by Elon Musk, co-founder of Zip2 and Paypal. On March 24 his experimental Falcon I was launched from Kwajalein Atoll. Unfortunately the main engine failed after 29 seconds. Undeterred, SpaceX plans another launch in November. The much larger Falcon 9 is scheduled to launch in late '007. It will launch Bigelow's inflatable modules and the DRAGON spacecraft with a crew of 6-7. DRAGON will compete for America's Space Prize, $50 million for the first private spacecraft to fly 5 people into orbit twice within 60 days. This prize deadline is in 2010, so start building your rockets now.

At the International Space Development Conference this May, Musk told us he has contracts for 11 paid spaceflights. On August 18, NASA chose SpaceX along with Rocketplane Kistler to develop commercial launchers for the ISS. Musk estimates that price to orbit can be reduced to about $1000/kg. At that price, tourism starts to become very attractive!

A Lockheed representative told us that they could have Orion flying by 2012-2014. Some of these vehicles could be alternate launchers for the Orion. Doubts have been expressed about the Ares I design with its solid fuel first stage.
Crews might get nervous flying atop a giant bottle rocket, for solid rockets can not be shut down or throttled once started. Above are more models of ATLAS boosters. Below is the ASC SPACEPLANE, which is being developed in California. American Spacecraft Corporation is working on a full-scale mockup, which I hope to get photos of when available. They estimate that by 2018 there could be seven space stations with 40 people continuously in Space. Pioneers must be ready to take risks, but the opportunities are there before us.

Contributor Nigel has written something so nice that I can only refer to his post. He has investigated whether quantities like G or c have changed. I hope that Nigel's writings are noticed by a larger audience, NEW, CV and especially Reference Frame. Our side is winning!

This J-2 rocket engine is powered by liquid hydrogen and oxygen. The original Saturn V used five of these in its second stage, plus one in the final stage boosting astronauts toward the Moon. An updated version of this engine may be used in the new Ares V. Plans for the Moon, Mars and Beyond use a combination of new technology with what worked in the past.

This is a meeting of people really working on Space, and the excitement is infectious. Men and women are enthusiastic, entrepreneurial, and optimistic about the future. There is a great feeling of teamwork toward a common goal. Everyone feels that we are on the frontier, the same feeling we have in Australia and New Zealand.

On March 25 researchers from the University of Queensland test-fired a hypersonic scramjet engine, capable of propelling an aircraft at Mach 6.5! A hypersonic aircraft would be powered by liquid methane, and neither pollute nor leave sonic booms. It would pay off economically because each plane could fly multiple long flights in a day. After enduring 14-hour flights across the Pacific, we would love to do it in one or two.
One of Australia's best-known physicists, Paul Davies CB (Order of the Bath) will be moving to Arizona State University. He will continue to be associated with the Australian Center for Astrobiology at Macquarie University. He has written many popular books like THE COSMIC BLUEPRINT. His writings and research focus on the very origins of life and the Universe. Davies has also written about a changing speed of light!

There's someone in that suit. News is coming about Space faster than I can write about it. Atlantis landed safely at Kennedy Spaceport at 0621 EST. For a few days there were 12 humans in Space. Still onboard the ISS is Persian-American Anousheh Ansari, who will be conducting experiments on behalf of ESA. She represents the true aspirations of Persian women. I am still saving my own $20 million!

From his HQ in Las Vegas, entrepreneur Robert T. Bigelow built the Genesis inflatable habitat. It was launched into orbit by a Russian rocket July 12. Today Bigelow announced SUNDANCER, a fully crewed habitat to launch around 2009. It will weigh 19,000 lb and have an initial crew of 3. This will be a privately built space station! You heard it here first! We are witnessing a transition in Space, where NASA is moving to the frontier of exploration and leaving Earth orbit for the private sector. Bigelow also announced plans to work with Lockheed on this ATLAS V booster. It is descended from the Atlas ICBM that boosted John Glenn into orbit. This is also a possible booster for the Orion CEV.

In response to a couple of requests, a more recently published paper on GM=tc^3 has been posted. I would like to have more, but it is not easy publishing papers saying that c is changing, and they have to be short. The math is similar to the SLAC paper, because equations last forever.

I've been at the American Institute for Aeronautics and Astronautics (AIAA) Meeting. Talks and displays are about going to the Moon, Mars and Beyond.
One could blog forever about problems in the world, but this room is full of the excitement about resuming humanity's greatest adventure. Above is Lockheed's model of the Orion CEV. Below are models of the Ares I (left) and Ares V launch vehicles. Ares I will have a first stage adapted from the shuttle Solid Rocket Booster, a liquid-fueled second stage and the Orion with escape system on top. Ares V has two SRB's surrounding a liquid-fueled core, and a liquid-fueled second stage. The Lunar Lander or other big payloads can be carried on top. Configuration of the lander will depend on whether it has separate descent and ascent stages (like in the 1960's) or uses two stages for descent. I have seen many fascinating concepts for Moon ships. Engineers here are evaluating designs as we speak. Witnessing a spaceship being designed is very exciting.

Plans for returning to the Moon have also been called Back to the Future. To reproduce the thrill for our generation, NASA is planning retro rockets similar to those of the 1960's. Hey, if the retro Ford Mustang can be a hit.... To answer Kea's question, an upcoming post will relate the Moon to corals and cosmology.

For one spacecraft to travel from Earth to the Moon and back would require a huge rocket with more stages than is practical. Von Braun and company had to choose between Earth Orbit Rendezvous (two rockets launched separately into Earth orbit so that one could refuel the other) or Lunar Orbit Rendezvous. Both plans were risky in the early 1960's when spacecraft had never mated in orbit, especially near the Moon. Thanks to some dedicated engineers, the latter plan was chosen and the US beat Russia to the Moon. Because we wish to send even larger payloads, the new plan will use both EOR and LOR. A Lunar Module will be launched atop the Ares V heavy-lift rocket, which is based on Shuttle technology.
The Orion Command Module and Service Module will launch separately using the smaller Ares I, which uses a single solid rocket booster. The two spacecraft will mate in Earth orbit, then use the Ares V upper stage to reach Earth escape velocity. Like the Ford Mustang, the Orion looks like a 1960's vehicle but uses modern technology, like cupholders. The outside sports solar panels for green power, allowing Orion to remain in orbit for months. Today we have computers in a watch more powerful than those aboard Apollo. Thanks to automation, Michael Collins will not have to stay in the ship while Neil and Buzz see the Moon. Inside it will be much larger for a crew of 4-6. Use of modern composite materials will help the crew feel less like Spam in the can. Construction of Orion has just been awarded to Lockheed-Martin. Returning to the Moon uses retro rockets because it should have been done yesterday. This program is vitally important to all kinds of science, especially astronomy and cosmology. The technology will someday allow us to put telescopes on the Moon. It will also ensure that the next crewed spacecraft to land on the Moon will have a flag on it and not MADE IN CHINA. A future post will show how cosmology benefits from the Moon and coral reefs too. Below is a mockup of the original Lunar Module displayed at Worldcon. This week I'll be talking to engineers about design of the new LM. More props at the Worldcon, replica Time Machine from "Back to the Future." Remember Doc Brown imploring Marty to think four-dimensionally? Processes that are irreversible in time are called Arrows of Time. We already can answer the Cosmological Arrow with R = ct. Expansion of the Universe is indistinguishable from the forward flow of Time. If we drop a cup and it shatters, the pieces will not re-assemble themselves. Entropy of the Universe always increases. This Thermodynamic Arrow is related to Planck value h. 
Since most measurements indicate that the product hc is constant, h increases as c slows. Uncertainties related to h increase with Time. This small link between Relativity and Quantum Mechanics explains the Thermodynamic Arrow.

Steinn Sigurdsson has the latest dish on a Dark Energy Mission. We can wish them luck as they compete for funding with other projects. Queenslanders, a memorial for Steve Irwin will be held at his Australia Zoo on Wednesday.

I am not the sort who can ask for billion-dollar spacecraft to seek a dark energy equation of state. The Earth offers answers of her own. Evidence for a Theory involves the Hubble Space Telescope, our Subaru Telescope atop Mauna Kea, observations of the Sun, and even coral reefs. The Earth has made records of the speed of light, we need only look at them.

I have to travel light, often without extra clothes. Fortunately the World Science Fiction Convention has many costumes on display. (Honestly, I am probably too thin to wear this.) Disney says, "If You Wish Upon A Star...Your Dreams Come True." Discovering the secrets of Space/Time is the best Sci Fi dream come true.

Using the Hubble Space Telescope and the Subaru telescope atop our Big Island, astronomers have discovered galaxies that formed barely 500 million years after the Big Bang! Galaxies formed around Black Holes with the mass of a million Suns. These supermassive Black Holes could not have formed by evolution of stars. They are primordial, formed from collapse of quantum fluctuations. Mass of a collapsing Black Hole is limited by the amount of the Universe within its reach. Size of primordial Black Holes is determined by a horizon distance related to the speed of light. The earlier we find these supermassive PBH's, the more evidence favours a changing speed of light.

This has been a quiet week so far, with nothing new from some science blogs. Viewership here continues to increase geometrically.
Thank you LOS ANGELES, CHICAGO, WASHINGTON DC and other places for reaching 100+ viewings! Welcome OAHU; it is nice to see someone else from Hawaii enjoying this site. Your comments are welcome.

I have travelled to Anaheim, California for the World Science Fiction Convention. This event is held in a different world city each year, and draws everyone from film stars to scientists. Even Phil "The Bad Astronomer" was on some panels. This is an exciting event with more to see than any one person could describe. Across the street is Disneyland!

The California Screamin roller coaster illustrates the latest inflated paradigm, Slinky Inflation. According to this idea, the Universe bounces between periods of acceleration like a giant spring toy. Like all such ideas, this violates both Relativity and the First Law of Thermodynamics. Theorists have also been promoting "eternal inflation," which would create multiple universes with differing physical laws. Christine has been nice enough to give one author a fair review. This idea is suspiciously like the string landscape, and may draw similar criticism. Rather than explaining the universe, scientists would invoke multiple universes. The entire inflationary house of cards relies on mystical repulsive "scalar fields" which can not be observed in nature.

Below is the Twilight Zone Tower of Terror! In capturing the spirit of Rod Serling, it takes you on some real ups and downs! We must endure life's heights with its depths, but there are adventures in both. You know it is a bad day when Rod Serling is in a corner describing your life.

Today's image is much more pleasant. History shows that our achievements last longer than the obstacles. That is reason to spend years working on something that may not be recognised in this lifetime. I hope you all enjoy the graphics.

The "Big Bang" can be drawn as a point. Cosmologists agree that the baby Universe had a finite volume and therefore a finite mass M.
All points of this Universe were very close to one another. Our separation from the Big Bang is a matter of Time, not Space. That interval is simply age t of the Universe. Astronomers have estimated that age at about 14 billion years. The light cone represents the local conditions of Special Relativity. Our mass, even the mass of our galaxy, is negligible compared to mass M of the Universe. We are within the light cone of that enormous mass, and its gravity is affecting us. Fortunately there was only one Big Bang to pull us back. As Newton showed, the gravitational force from a spherical mass distribution is the same as if all that mass were concentrated in a point.

Einstein theorised that this Universe is of spherical shape. Our three dimensions x, y and z are now confined to a circle. Any direction that we can travel in Space keeps us within that circle. The Universe might appear infinite and flat to our experience, but can still be curved in the fourth dimension. In this spherical Universe, the combined gravitational attraction would be the same as if everything was in a point. There is no centre in Space, for every bit resembles every other bit. There is a centre in the Time dimension, which we call a "Big Bang." As time passes, our Universe expands away from the Big Bang.

Newton's Law of Gravitation insists that mass affects us at a distance. It is meaningless to express this attraction over Time, so we must add the conversion factor c. Here we can expand on one principle of Special Relativity. Since Space and Time are one phenomenon related by c: Scale R of the Universe is its age t multiplied by c. This astonishingly simple relation tells why we live in a growing Universe. As time t increases, scale R expands.

Most laws of physics are time-symmetrical; they do not favour any direction forward or backward in Time. Expansion does not share this symmetry, it is called an Arrow of Time.
At a distant beginning time of zero, the Universe would have had zero dimension. Everything that we know resembled a single point. Since that time our Universe has been expanding. It can't expand at the same rate c continuously, for gravity slows it down. Conversion factor c must be further related to t. This leads to an astonishing but testable prediction. On September 8, 1966 the first episode of STAR TREK was broadcast in the US. At that time nobody had stepped on the Moon, telephones had cords and computers were big as a room. The show has gone through many generations, inspiring many to reach for the stars. This replica of the Enterprise bridge was displayed at the World Science Fiction Convention in Anaheim this year. Astronomers, what do you notice about the pictures behind McCoy? Those images of the Sun and galaxy were taken from X-Ray spacecraft, something else that didn't exist in 1966. Left picture is from SOHO and right is from Chandra. Who says scientists can't have fun? From Roger Penrose's "Road to Reality," a fascinating book that takes a properly skeptical view of fashionable ideas like inflation and the cosmic constant. It also contains an introduction to the spherical harmonics that calculate these graphs. This is one of the graphs that "proves" a Concorde cosmology. Look at the left side. The data points only follow the prediction line for angles less than 30 degrees. Starting at l = 3 the points depart significantly from the prediction. Presented here in logarithmic form, the departure does not seem obvious. However, most of the sky is greater than 30 degrees! Look for l = 2. This point has been COVERED by the vertical axis. That axis should properly be at l = 1, but here is placed to cover up that pesky data point. Presentation of this graph has been changed to better support the prediction. 
Quoting Penrose: "In my opinion, we must be exceedingly cautious about claims of this kind--even if seemingly supported by high-quality experimental results. These are frequently analysed from the perspective of some fashionable theory."

Many of you work in physics or are interested enough to study and attend lectures. Your comments show that you are capable of critical thinking. The science presented here is strong enough to withstand any criticism. The next time someone lectures about inflation or dark energy, question them!

The BIG SHOT atop the Stratosphere Tower launches me up the spire at 4.5 G's. (Maximum kinetic energy, minimum potential) At the top of that trajectory I float 1049 feet above the Las Vegas Strip! (Minimum kinetic energy, maximum potential energy)

How much energy is in an object? We always measure gravitational potential in relation to some other point. An object's potential is measured from Earth's centre, but that is only a tiny portion of the total. The object has more potential from the Sun. We don't feel the Sun's much greater pull because we share Earth's 30 km/sec orbital velocity. (Even many physicists don't know that the Sun exerts a greater pull) There is still more potential from the Milky Way galaxy, to say nothing of all the other objects in the Universe. To find the total potential of an object, we would have to sum all those up! (Remember that R = ct and GM=tc^3)

Does anyone want to argue with this E = -mc^2 business? To avoid confusion, we'll rename E_u as U = -mc^2, the Newton energy, and call E = +mc^2 the Einstein energy. E + U = 0. So the total energy of any object is just 0! This applies to any mass, no matter how large or small. Though there is no room to prove everything, this result also applies when you add kinetic energy, and even for massless particles like photons. (Anyone care to guess what the energy of a vacuum is?) The total energy of the entire Universe is 0!
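The U = -mc^2 claim follows in one line from the two postulates used throughout this blog, R = ct and GM = tc^3 (again, assumptions of this model rather than standard physics). Treating the whole Universe's mass M as acting from the Big Bang origin at distance R:

```latex
\[
U = -\frac{GMm}{R} = -\frac{(t c^{3})\,m}{c\,t} = -m c^{2}
\]
```

Adding the Einstein energy then gives E + U = mc^2 - mc^2 = 0 for any mass m.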
It's the ultimate free lunch, which is how the Universe managed to expand from a tiny point to the complexity we observe today. Thank you, ST LOUIS for being gateway to the American West, building a great arch, financing Lindbergh's flight and being home to the McDonnell-Douglas plant, among other things. Many pilots have enjoyed the "Viking" takeoff, when an F-15 fresh from the factory lifts off and does an immediate 90 degree climb. Thank you also for reaching the milestone of 1000+ viewings! Welcome, PARIS to the 100 viewings club. I would love to hear from you sometime. Concerning Steve Irwin, we should all applaud those willing to take risks. Today it is easy to be anonymous hiding behind a computer terminal. Sticking one's neck out invites criticism, slings and arrows. If not for the Steven Irwins, the Jacqueline Cochrans, or the Einsteins the world would not move forward. We need to encourage potential leaders not snipe at them. When Stephen Hawking first proposed that Black Holes give off radiation, the idea seemed to defy common sense. The moderator of his talk got up and said, "Sorry Stephen, but this is absolute rubbish." Fortunately, others checked Hawking's calculation and the idea was soon accepted. Hawking's startling idea began his climb to fame. Alan Guth first proposed the inflationary idea to explain problems with Big Bang cosmology, like uniformity in the microwave background and the apparent flatness. Inflation predicts that quantum density fluctuations expanded to form the seeds of structures. Because this idea was couched in the language of particle physics, it was quickly accepted. Now that the older generation is preparing to move on, new ideas are needed. Even the inflationary paradigm is leaking. To answer Mahndisa's wise question: It is fashionable to say that the Universe is flat, like the Earth. Even the tiniest mass causes it to be curved. 
If you squeeze a cosmologist, they will admit that it must have begun with a finite topology, like a sphere. Inflation would have expanded a sphere so big that it would appear flat to our experience. Like an insect's view of the Earth, it is only flat if you can't conceive all of it!

It is also fashionable to say that WMAP "proves" inflation. The incredible force that would make the Universe expand at warp speed is still just a speculation. Inflation predicts that density fluctuations are the same at all scales. In fact, fluctuations are virtually zero for angles greater than 60 degrees, disproving inflation's prediction. The above graph was given to me by Dr. Ned Wright of the WMAP team. As you can see, inflation's predicted spectrum is ruled out by both WMAP and COBE. The red line is a prediction of a Unified Space/Time. This prediction is hard to distinguish from the data points. Lack of large-scale fluctuations shows that the Universe is curved, just as a ship disappearing over the horizon shows that Earth is round.

Some people refused to peer in Galileo's telescope for fear of upsetting their Ptolemaic worldview. A growing number of cosmologists are aware of deficiencies in the "standard model." The WMAP team's latest report concludes: "An alternative model that better fits the low l data would be an exciting development." Time to look at those new models, boys!

With great sorrow Queenslanders mourn the sudden death of STEVE IRWIN, our world-famous "Crocodile Hunter". He was apparently killed in a freak accident with a stingray. Many of us will remember seeing him as children either on the telly or in his shows at the Australia Zoo. Queensland will not be the same without him.

I photographed this playful fellow in Australia earlier this year. Stingrays are not natural enemies of humans, but they often hide themselves in the sand. Many accidents have occurred when unfortunate divers stepped on them. The stingray's barb is extremely powerful and can pierce wood.
It secretes a poison that causes reduced blood pressure and possibly shock. How unfortunate to be struck in the heart!
Transition Between Parts

Transitions and Assemblies

Two parts can be glued together using the block boundary feature. This assures that the coincident nodes in the two parts are identical and there is a 1:1 correspondence on both sides. This makes it easier to build complex models by first building each component and then gluing the components together.

Assembling Many Parts

In this example a complex model is broken into 177 parts and glued together. Each color represents a different part. Zooming in on the mesh you can see the parts match perfectly.

Transition Between Parts

If the number of elements in one part is half or double the number in the other part, then a transitional block boundary is needed. A row of hex (quad shell) elements at the interface are automatically replaced with a row that sews the two parts together to produce a node for node, edge for edge, and face for face matching across the parts interface. The transition layer can also sew together two parts where the number of elements differ in both directions along the interface. The ratio 2:4 or 1:3 is required in both directions. The ratios do not have to be the same in both directions. This example is 2:4 in two directions. For the complex model transitional elements are used at every level to match parts.

Multiple Layers Forming Locally Dense Meshes

Transitions can be layered to radically change the mesh density. The example below has three transitions in two directions with a transition scale factor of 12 in each direction. The same concept is seen for the complex model. A transition layer is used to increase the mesh density for a region.

Irregular Region Filled using Transitions

Frequently, a part is needed that cannot be meshed with a simple block. If this part is in the interior where surrounding parts dictate the number of elements along the boundary of the part, a complex solution is needed. Several parts are constructed with transitions between them.
If we do not have a difference of an even number of elements, one can force a degeneracy on a side with a wedge element. Here is an example with three parts and three wedge columns. The wedges are easily formed by collapsing a face of a hex element.

Use Transitions with Caution

All boundary conditions, initial conditions, and properties are maintained when the transitional element substitution is made. This feature can be abused because, by the very nature of these transitions, the quality is limited. In particular, the angles can be severe and can introduce numerical errors in the simulation. It is advised to carefully plan to reduce the number of transitions that are needed and to place these transitions in areas of low interest.
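The factor-of-12 density example above reflects the fact that stacked transition layers multiply their per-direction ratios. A trivial sketch follows; the particular layer ratios 2, 2 and 3 are an assumption consistent with the allowed 2:4 and 1:3 transitions, not taken from the figure:

```python
# Stacked transition layers multiply: each layer changes the element count
# in a direction by its ratio (2 for a 2:4 layer, 3 for a 1:3 layer).
from math import prod

layer_ratios = [2, 2, 3]          # hypothetical three-layer stack
scale_factor = prod(layer_ratios)
print(scale_factor)               # overall density change per direction: 12
```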
Posts about pattern on The Math Less Traveled

Category Archives: pattern

I've finally gotten around to making a nice factorization diagram poster: You can buy high-quality prints from Imagekind. (If you order soon you should have them before Christmas! =) I'm really quite happy with imagekind, the print quality is fantastic … Continue reading

Here's something I made yesterday! (Note, I strongly suggest watching it fullscreen, in HD if you have the bandwidth for it.) Can you figure out what's going on? The source code for the animation is here; I was inspired by … Continue reading

I got some Penrose refrigerator magnets in the mail the other day! They look nice on my fridge, don't you think? Here's a close-up: (Unfortunately, since this was just a Kickstarter project there's no way to order more at this … Continue reading

In a previous post I posed the question: is there a way to list the permutations of 1, …, n in such a way that any two adjacent permutations are related by just a single swap of adjacent numbers? (Just for fun, let's … Continue reading

In a comment on my previous post, Juan Valera mentioned something about visualizing multiples of prime numbers in Pascal's Triangle: In college, there was a poster with different Pascal Triangles, each of them highlighting the multiples of different prime numbers. … Continue reading

Inspired by the comments on this post, I've had some ideas brewing for a while—I'm just only now getting around to writing them up. The topic is visualizing winning strategies for "nim-like" games. What do I mean by that? By … Continue reading

Just a link today—check out this awesome tiling database! It's got tons of beautiful plane tilings (with information and further reading about each one) and many ways to search through the database. It's a great way to find examples of … Continue reading

Picture this! is a very cool interactive thingy, made by Jason Davies, intended to get students (or anyone, really) thinking about some interesting math.
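The adjacent-swap question above has a classical answer, the Steinhaus-Johnson-Trotter ("plain changes") ordering; a minimal recursive sketch follows (my own illustration, which may differ from the solution in the post itself):

```python
def plain_changes(n):
    """Yield permutations of [1..n] so that consecutive outputs differ
    by a single swap of adjacent entries (Steinhaus-Johnson-Trotter)."""
    if n == 1:
        yield [1]
        return
    down = True
    for perm in plain_changes(n - 1):
        # Insert n into every gap, sweeping right-to-left then left-to-right;
        # reversing direction each time keeps neighbours one swap apart.
        positions = range(n - 1, -1, -1) if down else range(n)
        for i in positions:
            yield perm[:i] + [n] + perm[i:]
        down = not down

for p in plain_changes(3):
    print(p)
```

For n = 3 this produces 123, 132, 312, 321, 231, 213, and each step is exactly one adjacent transposition.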
Go play around with it and see if you can answer any of the listed questions … Continue reading In a previous post, I challenged you to prove If evenly divides , then evenly divides , where denotes the th Fibonacci number (). Here’s one fairly elementary proof (though it certainly has a few twists!). Pick some arbitrary and … Continue reading I haven’t written anything here in a while, but hope to write more regularly now that the semester is over—I have a series on combinatorial proofs to finish up, some books to review, and a few other things planned. But … Continue reading
Cranston SAT Math Tutor

Find a Cranston SAT Math Tutor

- "Hi Students & Parents, I am a certified substitute teacher with the RI Department of Education currently working for the East Providence (RI) School Department. My primary teaching philosophy with my clients is to first focus on managing their studying methods, then work on helping them understand..."
  22 Subjects: including SAT math, chemistry, physics, geometry
- "...I work with students to help them understand what the problem is asking, writing down the given information, and choosing the most efficient problem-solving technique. The math portion of the GED involves knowledge of basic arithmetic, pre-algebra, algebra, geometry and applications of each of t..."
  15 Subjects: including SAT math, chemistry, algebra 2, biology
- "...While some of my time was spent teaching students in a resource classroom, I also spent a good deal of time teaching my students in their elementary classrooms where I worked with them in small and large groups. I primarily taught reading (including phonics), writing, spelling and mathematics, b..."
  30 Subjects: including SAT math, reading, writing, English
- "I'm a semi-retired lawyer, with years of trial experience. As you might expect from a lawyer, I teach primarily by the Socratic method, leading students to find the right answers themselves. I have excelled in every standardized test I have taken: SAT 786M/740V, LSAT 794, National Merit Finalist."
  20 Subjects: including SAT math, English, reading, writing
- "...For a total of eight years, I was a teacher at the Landmark High School, in Beverly, Massachusetts. Landmark specializes in students with language-based learning disabilities, such as dyslexia. I am accustomed to teaching small groups of students with individualized learning needs, modifying curriculum and assignments to meet those needs."
  14 Subjects: including SAT math, chemistry, algebra 2, biology
quantum field theory

quantum field theory, body of physical principles combining the elements of quantum mechanics with those of relativity to explain the behaviour of subatomic particles and their interactions via a variety of force fields. Two examples of modern quantum field theories are quantum electrodynamics, describing the interaction of electrically charged particles and the electromagnetic force, and quantum chromodynamics, representing the interactions of quarks and the strong force. Designed to account for particle-physics phenomena such as high-energy collisions in which subatomic particles may be created or destroyed, quantum field theories have also found applications in other branches of physics. The prototype of quantum field theories is quantum electrodynamics (QED), which provides a comprehensive mathematical framework for predicting and understanding the effects of electromagnetism on electrically charged matter at all energy levels. Electric and magnetic forces are regarded as arising from the emission and absorption of exchange particles called photons. These can be represented as disturbances of electromagnetic fields, much as ripples on a lake are disturbances of the water. Under suitable conditions, photons may become entirely free of charged particles; they are then detectable as light and as other forms of electromagnetic radiation. Similarly, particles such as electrons are themselves regarded as disturbances of their own quantized fields. Numerical predictions based on QED agree with experimental data to within one part in 10 million in some cases. There is a widespread conviction among physicists that other forces in nature—the weak force responsible for radioactive beta decay; the strong force, which binds together the constituents of atomic nuclei; and perhaps also the gravitational force—can be described by theories similar to QED. These theories are known collectively as gauge theories.
Each of the forces is mediated by its own set of exchange particles, and differences between the forces are reflected in the properties of these particles. For example, electromagnetic and gravitational forces operate over long distances, and their exchange particles—the well-studied photon and the as-yet-undetected graviton, respectively—have no mass. In contrast, the strong and weak forces operate only over distances shorter than the size of an atomic nucleus. Quantum chromodynamics (QCD), the modern quantum field theory describing the effects of the strong force among quarks, predicts the existence of exchange particles called gluons, which, like the photon of QED, are massless, but whose interactions occur in a way that essentially confines quarks to bound particles such as the proton and the neutron. The weak force is carried by massive exchange particles—the W and Z particles—and is thus limited to an extremely short range, approximately 1 percent of the diameter of a typical atomic nucleus. The current theoretical understanding of the fundamental interactions of matter is based on quantum field theories of these forces. Research continues, however, to develop a single unified field theory that encompasses all the forces. In such a unified theory, all the forces would have a common origin and would be related by mathematical symmetries. The simplest result would be that all the forces would have identical properties and that a mechanism called spontaneous symmetry breaking would account for the observed differences. A unified theory of electromagnetic and weak forces, the electroweak theory, has been developed and has received considerable experimental support. It is likely that this theory can be extended to include the strong force. There also exist theories that include the gravitational force, but these are more speculative.
Help with finding roots of complicated quadratic

April 25th 2011, 04:14 PM, post #1 (ForgotMath):

I'm having trouble finding the roots of the equation below. While I have the final answer, I'm not sure how to arrive at that step-by-step. Any help would be appreciated.

x^2 + (z ln z - zb)x - (z^2 * b * ln z) = 0

The final answer has 2 roots in terms of 3 variables: x, b, z. Thanks in advance.

Reply:

A = 1
B = (z ln z - zb)
C = -(z^2 * b * ln z)

Use the quadratic formula and simplify. Take a shovel and start digging.

Reply (TheCoffeeMachine):

Or, completing the square: $x^2+\alpha{x}-\beta = \left(x+\frac{\alpha}{2}\right)^2-\left(\beta+\frac{\alpha^2}{4}\right).$

Last edited by TheCoffeeMachine; April 25th 2011 at 04:56 PM.

Reply (ForgotMath):

I did try this before posting but couldn't get anywhere near the final answer.

[-(z ln z - zb) +/- sqrt{(z ln z - zb)(z ln z - zb) - 4z^2*b*ln z}] / 2
= [-z ln z + zb +/- sqrt{z^2 (ln z)^2 - z^2*b*ln z - z^2*b*ln z + z^2*b^2 - 4z^2*b*ln z}] / 2
= [-z ln z + zb +/- sqrt{z^2 (ln z)^2 - 6z^2*b*ln z + z^2*b^2}] / 2

The final answer is: (x - zb)(x + z ln z). I don't see this simplifying to that, or at least I don't know how to do it.

Reply:

There is a mistake in your first line: -4z^2*b*ln z should be +4z^2*b*ln z. Simplify further, and it factors nicely.

Reply:

Let me simplify the problem a bit and you can return to the general case. You are basically faced with the problem of simplifying an expression of the form

$(a - b)^2 + 4ab = a^2 - 2ab + b^2 + 4ab$

Now add like terms:

$(a - b)^2 + 4ab = a^2 + 2ab + b^2$

How can you simplify that?

Reply:

I'd like to see where the square root went... there is no square root in your equation, and this is not similar to solving:

[-z ln z + zb +/- sqrt{z^2 (ln z)^2 + 2z^2*b*ln z + z^2*b^2}] / 2

Maybe you should elaborate so the other person can actually see what they are doing wrong.

Reply (ForgotMath):

Such a reply isn't very helpful - it's been obvious from all my posts that I've already tried to simplify it. I'm probably missing something, fine, but to keep saying "this simplifies" won't help me find my mistake.

Reply (quoting ForgotMath's attempt above):

All right. A further hint on how to simplify the discriminant:

$(z \ln z - zb)^2 + 4z^2 b \ln z$

(Note that in my previous post a = z ln(z) and b = zb.)

$= z^2 \ln^2(z) - 2z^2 b \ln(z) + z^2 b^2 + 4z^2 b \ln(z)$
$= z^2 \ln^2(z) + 2z^2 b \ln(z) + z^2 b^2$

This is of the form $a^2 + 2ab + b^2$. So how do you simplify this further?

Replies were posted between April 25th 2011, 04:31 PM and May 4th 2011, 08:42 AM (posts #2-#11).
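The factored form can be checked numerically. Below is a quick sketch (mine, not part of the thread) that substitutes the two roots implied by (x - zb)(x + z ln z), namely x = zb and x = -z ln z, back into the original quadratic for a few sample values of z and b:

```python
import math

def quadratic(x, z, b):
    # The equation from post #1: x^2 + (z ln z - z b) x - z^2 b ln z
    return x**2 + (z * math.log(z) - z * b) * x - z**2 * b * math.log(z)

for z, b in [(2.0, 3.0), (5.0, -1.5), (0.5, 4.0)]:
    r1 = z * b             # root from the factor (x - zb)
    r2 = -z * math.log(z)  # root from the factor (x + z ln z)
    assert abs(quadratic(r1, z, b)) < 1e-9
    assert abs(quadratic(r2, z, b)) < 1e-9
print("both claimed roots satisfy the quadratic")
```

The residuals come out at floating-point noise, which is consistent with the corrected discriminant simplifying to (z ln z + zb)^2.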
College Algebra

Presented at the Conference to Improve College Algebra
U.S. Military Academy
West Point, NY 10996
February 7-10, 2002

By Arnold Packer

College Algebra can stay on its current path of a technique-driven curriculum. In this case, techniques are emphasized and applications or problems are chosen that are susceptible to the specific technique. Or, College Algebra can become problem-based quantitative mathematics. In the latter case, the curriculum-design task is choosing generic problems and the mathematical techniques needed to solve them. In this alternative approach, College Algebra is divided into two roughly equal parts. The first half of this new course is "Mathematics for Planning." Students learn how to efficiently allocate four kinds of resources: money, as in budgets; time, as in schedules; space, as in architecture or space planning; and staff, as in staff requirements. The second unit is called "Modeling Systems" and teaches students to understand, monitor, and design systems. Students learn -- at a deep level -- what exactly graphic and symbolic representations of reality imply. A pilot study, using the second approach in Algebra II, led to student success.

College Algebra:

As another paper to be presented at this conference makes clear, College Algebra is the last mathematics course many students take. A majority may enter the classroom having already decided that it will be their final mathematics course. Recent data, contained in the preliminary report of an MAA Task Force, indicate that only one in ten College Algebra (CA) students goes on to take a full-length calculus sequence. Not coincidentally, one in ten of these students is enrolled in a mathematics-intensive field.^i Many would skip College Algebra if they did not have to pass it to get the degree they need to enter their chosen career field.
Enrollment in CA tends to fall dramatically when colleges make quantitative reasoning or intermediate algebra the requirement. Finally, a few years after finishing the course, getting their degree, and starting their professional life, students cannot recall anything they learned. Or, equivalently, they have never used anything they learned in College Algebra. All of this is unfortunate and related. Mathematics courses that seem hard, boring, and irrelevant prior to College Algebra establish the expectation that College Algebra will be more of the same. Moreover, the course -- as conventionally taught -- does nothing but confirm the foreboding. Look at a typical description:

"This course is a modern introduction to the nature of mathematics as a logical system. The structure of the number system is developed axiomatically and extended by logical reasoning to cover essential algebraic topics: algebraic expression, functions, and theory of equations."

Who decided that "algebraic expression, functions, and theory of equations" is essential, and if so, essential to whom or for what? The course covers the following topics: Radicals, Complex Numbers, Quadratic Equations, Absolute Value and Polynomial Functions, Equations, Synthetic Division, the Remainder, Factor, and Rational and Conjugate Root Theorems, Linear-Quadratic and Quadratic-Quadratic Systems, Determinants and Cramer's Rule, and Systems of Linear Inequalities. That is a long list of topics; yet, it is only half the topics listed. How much can students learn in two days per topic, and what will they remember? How close is this official curriculum to the one actually taught? Is this the topic list the mathematics department would present if it were an elective course for those not majoring in mathematics or engineering or intending to go on to graduate or professional school? For too many students it looks like -- and is -- a painful experience that they would prefer to skip.
Presumably, these kinds of questions led to this conference at West Point. What should CA accomplish? One way to answer is to consider how to evaluate a changed CA course. What empirical evidence would this audience want to see before they called a new CA course successful? More students taking subsequent mathematics courses would please some. More students leaving college who could be deemed quantitatively literate would please others. College Algebra can stay on its current path of a technique-driven curriculum. In this case, techniques are emphasized and applications or problems can be chosen that are susceptible to the specific technique. Or, CA can become problem-based quantitative mathematics. In the latter case, the decision is what generic problems should be included and what mathematical techniques are needed to solve them.

Quantitative Literacy and SCANS

The term Quantitative Literacy is defined extensively in Lynn Steen's book Mathematics and Democracy: The Case for Quantitative Literacy. Let me be specific. To me, quantitative literacy implies, at a minimum, the mathematical competency to solve problems in the SCANS domains. SCANS is the acronym for the Secretary's Commission on Achieving Necessary Skills. Ten years ago, this commission of 31 senior human-resource executives, educators, and union officials issued two reports, entitled What Work Requires of Schools and Learning a Living. The so-called SCANS skills include the ability to use basic and advanced skills, such as mathematics and problem-solving, to solve problems in five domains. The two math-intensive SCANS problem domains are planning and systems. Let me be specific: I would divide CA into two parts and spend one-half the semester on each. The first half of this new course would be called something like "Mathematics for Planning" or "Mathematics for Resource Allocation." Planning, according to SCANS, means the process of allocating four kinds of resources.
The four are: money, as in budgets; time, as in schedules; space, as in architecture or space planning; and staff, as in staff requirements. What mathematics skills are needed to solve problems in these domains? In what follows I'll be mentioning some examples from applications we are running in Baltimore high schools' Algebra I and II and community college algebra (and other) courses.

Preparing or evaluating budgets requires the ability to work with matrices expressed in spreadsheets, as well as the algebraic equations embedded in the program. These tasks do not require inverting a matrix manually or many of the things taught in a matrix algebra class. Mathematics faculty at a community college told me that teaching spreadsheets was not "mathematics" and they do not do it in their classes. The algebra requirements do not include irrational or complex numbers but, instead, require knowing how to express complex relationships algebraically. For example, we ask our Algebra I students to figure the costs of printing brochures where the price schedule has a step at 1,000 copies. This application takes place in the context of developing a marketing plan for a tourism company. The ultimate mathematical question is choosing between a four-page and an eight-page brochure; the former is less costly but also less effective.

Schedules require the ability to handle different numbering systems -- decimal; clock time, based on 60; and calendar time, based on 28, 30, 31, and (in leap years) 29 days in the month. Students should know how to figure out when a heat treatment will be completed if it has to be treated for 108 hours. Students should be able to use Gantt and PERT charts and Harvard Project Planner to plan more complex undertakings. In our Baltimore work, students have to schedule their presentations and all the tasks that need to be completed on the way.
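The brochure-costing exercise described above boils down to a step function. A minimal sketch follows; the per-copy prices and the 20% volume discount are invented for illustration, not taken from the Baltimore materials:

```python
# Brochure printing cost with a price step at 1,000 copies.
# All prices here are hypothetical.
def print_cost(copies, pages):
    per_copy = {4: 0.30, 8: 0.55}[pages]  # small-run price per copy
    if copies >= 1000:
        per_copy *= 0.8                   # discounted tier at 1,000+ copies
    return copies * per_copy

for copies in (800, 999, 1000, 2500):
    print(copies, round(print_cost(copies, 4), 2), round(print_cost(copies, 8), 2))
```

Near the step, printing more can cost less in total (999 four-page copies cost $299.70 here, while 1,000 cost $240.00), which is exactly the kind of non-linearity the exercise asks students to reason about.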
Space planners need to understand geometry but need not necessarily prove something about the interior angles of parallelograms. In another application, dealing with a business plan for a retail store, students need to design their store. This includes looking at sight lines for discouraging shoplifting and dealing with the conical shape of light emanating from overhead fixtures to lay out a lighting plan. Students should know that most real-world allocation situations require trade-offs to live within constraints. They should also know something about determining objective functions: What does the decision maker want to maximize and minimize? In business, it may be profits or costs. In a health application, it might be some weighted average of efficacy and side effects. In one of our community college applications, students use decision theory to locate a factory. They balance objectives for environmental problems and economic cost. By the way, the model includes some Gaussian equations for particle dispersal. Students know the equations are there, but need not really understand them.
The larger problem, which includes allocating funds to training and R&D, is too complicated, however. Successful students use spreadsheet simulation to find a satisfactory, if not optimum, solution. The second half of my recommended College Algebra course would cover systems, the second of the math-intensive SCANS problem domains. The unit would be called "Modeling Systems." The SCANS commission recommended that students be able to understand, monitor, and design systems. Understanding mathematical models of systems requires that students grasp ? at a deep level ? what exactly graphic and symbolic representations of reality imply. Clearly, this lesson should be repeated in physical and social science courses, but it is crucial in applied mathematics. Students completing two years of college should comprehend positive and negative feedback loops. All of this requires some coordination with the social and physical science courses. Can students understand an epidemiological model and the positive feedback loops that let the epidemic spread? Do they grasp the negative feedback that finally brings the epidemic to an end? Can they understand a model of an inventory control system and its stability-enhancing negative feedback? (The latter is part of what is needed in our application on the business plan.) For the vast majority of students, being able to answer these and similar questions is more important than Cramer's rule. One of our other community college applications deals with statistical process control. Students obtain randomly generated data from a production process, make hypotheses regarding the cause of the quality problems, and test their hypotheses. Clearly, statistics is involved. To understand or build models of systems, students should know something about linear models with and without uncertainty. They need to know some functions (but not necessarily trigonometric functions) such as the normal, binomial, and exponential. Do they know the 80/20 rule? 
Most importantly, can they use and build mathematical models to predict system performance? In neither part of this new CA course would I use "x" or "y" as variables. Let students visualize the reality by using meaningful symbols: p for population and t for time in the epidemiological models, r for revenue and c for cost, and so on. Many think this a horrible idea. The very power of mathematics is its ability to generalize ? to use the same technique in a variety of fields. But the abstract "general" approach to mathematics is not necessarily a big favor for those who "love" mathematics and science. "No scientist thinks in equations," said Einstein, who employed visual images and muscular feelings. The mathematician S.M. Ulam said that he uses "mental images and tactile sensations to perform calculations, replacing numerical values with the weights and sizes of imagined objects." Joshua Lederberg becomes "an actor in a biological process, to know how [to] behave as if I were a chromosome" ^ii Lynn Steen, in his introduction to Why Numbers Count refers to scientific mathematics in which mathematical variables always stand for physical quantities ? "a measurement with a unit and implicit degree of accuracy"^iii. Jim Rutherford says "? citizens need to possess certain basic mathematical capabilities understood in association with relevant scientific and technological knowledge." (Italics in the original.)^iv Mathematics educators, properly, want their students to understand the power of mathematics to solve general problems, ones that are not rooted in an existing situation. That point can be made near the end of the mathematics course and demonstrated to students. Teach that the equation for velocity can be used in many equations relating to change. True generality can be saved for those mathematics students who will still be taking mathematics courses beyond CA. There may be more such students if mathematics was less abstract in the earlier years of school. 
Indeed, the empirical evidence is that mathematics educators are not achieving their goals by their current practice. By the year 2000, U.S. students were to be first in science and math. By any measure they are not. NAEP scores in mathematics (for 17 year olds) in 1996 were only three percent better than they were in 1982. Their average NAEP score was 307, meaning that the average 17-year old can compute with decimals, fractions and percents, recognize geometric figures, solve simple equations, and use moderately complex reasoning. The average among blacks (286) and Hispanics (292) were below 300, meaning the ability to do the four arithmetic operations with whole numbers and solve one-step problems^v. Over half of the students entering the California College system need to take a "developmental" course. Over one in four college freshmen feels that they will need tutoring or remedial work in math. This compares to one in ten for English, science, and foreign language. ^vi What happens when students get to college ? which they are doing in greater and greater numbers. In the paper she prepared for this conference Mercedes McGowan asks "why we are attracting fewer and fewer students into our mathematics-intensive programs?" She points out that, in four-year colleges, enrollment in mainstream calculus is declining in absolute and relative terms. About 405,000 such students were 24% of mathematics enrollment in 1980; by 2000 the enrollment had fallen by 53,000 and the share to 20 %. In light of these trends I would hope that all recognize that the practice of mathematical education must be improved ? and quickly. Why Is It Important and for Whom? Many of you may know about Bob Moses and his Algebra Project. Moses, Harlem-born in 1935, attended Hamilton College graduate school at Harvard University. 
During the 1960's, Moses worked with the Student Nonviolent Coordinating Committee (SNCC) to increase voter registration in Mississippi (for which he was later awarded a Macarthur "genius" award). In the 1980's he decided that "the absence of mathematics literacy in urban and rural communities throughout this country is an issue as urgent as the lack of registered Black voters in Mississippi was in 1961." [p.5] Moses makes the point that symbolic representations of reality are the keys to using the new technology and algebra is the place where youngsters "learn this symbolism." [p 13] His effort is the Algebra Project that is focused, with good results, on middle schools with the hope that it will lead to minorities and others going on to college and being able to take college-level, rather than remedial, courses. He, Moses, mentions one college where 90% of the entering minority students take the non-credit course. Other studies indicate that taking non-credit remedial courses is a predictor of dropping out of college ? especially if student fails the course. I came across some numbers last week, by Gerald Bracey that I found startling. Remember, we ? the United States ? was supposed to be first in mathematics and science by the year 2000. The study of 28 industrial countries, by the Organization for Economic Cooperation and Development (OECD) found instead we were only average in mathematical literacy ? the ability to apply mathematics to "real life" questions. If, however, we isolate the data for white students the U.S comes in 7^th. Black and Hispanic students come in 27^th in the list. Something must change at all levels of mathematics education, and college algebra ? typically the first and last college mathematics course ? has to change too. Why is mathematics a required course in most colleges? A cynic might read the following, which I quote. "There are many factors that will make change difficult. 
Courses below calculus, like college algebra, are often departmental "cash cows." They may have large enrollments and can be taught relatively cheaply by part-timers or TA's." The same document, more idealistically, says "?students will need to develop mathematical thinking to support lifelong learning?to read and assess quantitative arguments they will encounter?for diverse workforce needs? [and] prepare them for potentially multiple job changes." [Emphasis added.] I want to concentrate on the idealism and emphasize the word encounter in answering the question of: Why require mathematics? The students believe ? correctly ? that they will not encounter complex numbers or Rational and Conjugate Root Theorems. They may well, however, encounter budgeting and scheduling ? in their roles as citizen, worker, or consumer. They are unlikely to encounter Linear-Quadratic and Quadratic-Quadratic Systems; but ? as citizen, worker, or consumer ? they will have to interpret statistical data. Determinants and Cramer's rule will not come up; building and interpreting results from a mathematical model may. The goals of College Algebra should be to get students to internalize mathematics and come to understand certain ideas conceptually. Bob Moses and most others who have been successful with all students know that these goals can be obtained only if the mathematics is connected to everyday life. What does that mean for College Algebra? In my judgment it means starting with the applications ? not the techniques. What problems should all college graduates ? those from two-year and four-year colleges who do not major in a math-intensive subject ? be able to solve? It is not how to factor a polynomial or complete a square; it is how to put together a budget and schedule for a proposal or building project or sales campaign. It is not how to solve a quadratic equation; it is how to use a statistical process control or model a production or health system. 
I mentioned before that CA is a cash cow that many mathematics departments do not want to tamper with, especially if it means smaller classes and higher costs. If this attitude is maintained too long both the cash and the cow may disappear as students opt for useful and interesting quantitative literacy that teaches them how to solve problems they will encounter and be paid to solve. i. Mercedes McGowan ii. Robert S. and Michele Root-Bernstein Learning to Think With Emotion, The Chronicle of Higher Education, January 14, 2000, p A64 iii. Lynn Steen, Preface: the New Literacy, in Why Numbers Count. iv. F. James Rutherford, Thinking Quantitatively about Science in Why Numbers Count. v. Do You Know The Good News About American Education? Center on Education Policy, Washington D.C., 2000.p13 vi. This Year?s Freshmen: A Statistical Profile, The Chronicle of Higher Education, 1/28/00 p A50.
Category Archives: Math Musings

- One fun thing math lets us do is measure difficult-to-measure things. Like fame. We all have an instinct for what fame is, and the more we put it into words, the more we’ll find we can translate fully into math. … Continue reading
- "… at this point, it’s in the hands of people who are mathematically inclined." —Stephen Hsu The January 6th New Yorker contains an article on B.G.I., a Chinese company seeking to do major work in the field of genetics. According … Continue reading
- I just read this wonderful interview with Tom Zhang, who made recent, important progress on the Twin Prime conjecture. It’s a strange, quiet interview, and a lovely departure from the world of the fame-obsessed. Another thing I like: he emphasizes … Continue reading
- I was just observing a third grade class learning/reviewing basic fraction to decimal conversion, and I overheard a great remark. A girl, reading a word problem, said to her table mate, "Jessica ate 6/10 of a cake?! She’s fat." There’s … Continue reading
- Reading an Alfie Kohn article on what kids learn from failure made me think of the most common question I hear from teachers about the Common Core Practices: How can I teach perseverance? It’s an excellent question, and the answer isn’t … Continue reading
- When I try to describe great teaching, I notice a certain phrase pops out of my mouth again and again. Productively stuck. As in, the goal of the teacher is to get her students productively stuck as soon as possible. … Continue reading
- ANNOUNCEMENT: Sign up now for our Common Core Crash Course for 1st-5th grade teachers, this August 20-21 in Seattle. __________________ It’s happened to every teacher. It’s Thursday, but your students don’t seem to remember Wednesday or Tuesday, and you’ve got … Continue reading
- I recently posted this interesting inversion problem: The question is this: in mod n, how many functions f(x) = ax + b are their own inverses? For example, the function f(x) = 5x + 2, applied twice in mod 12, is equal … Continue reading
- I’ve been exploring a new problem with a couple of students recently that I find incredibly compelling, and I thought I’d mention it here. The main idea is looking at the behavior of functions of the form f(x) = ax … Continue reading
- There has been considerable backlash against processed food products in the last few years, and for good reason. A slew of health problems implicate what we eat, and processed food products are more product than they are food. As industry … Continue reading
Definition of Limit proof: x -> infinity of sqrt(x) - x = -infinity

January 28th 2011, 07:26 AM #1
First post! yay! Okay, so yesterday I had a problem set due and it had two definition of limit at infinity proofs on it. I ended up leaving them... well, not blank, but all I did was write the definition. The question was: prove the following statements using the appropriate definition of limit:
(i) Limit as x -> infinity of sqrt(x) - x = -infinity
I got as far as: Given N < 0, there exists M > 0 such that x > M implies sqrt(x) - x < N, which I believe is the appropriate definition. I always draw a blank on these proofs. The first thing I did when I made my account here was print the sticky on definition of limit proofs. I am going through it now. Anyways, any help on this would be amazing. Thanks!

January 28th 2011, 07:55 AM #2
Take into account that it is...
$\displaystyle \sqrt{x}-x = - \frac{x\ \sqrt{x} - x}{\sqrt{x}} = - x\ \frac{\sqrt{x}-1}{\sqrt{x}}$
Kind regards

January 28th 2011, 08:56 AM #3
If $N\ge 3$ then $N^2>2N$ or $N^2-N>N$. If $J\in \mathbb{Z}^-~\&~J<-2$ then if $x\ge J^2$ can you conclude that $\sqrt{x}-x<J~?$

February 1st 2011, 12:36 PM #4
Hey, thanks for the help guys, sorry I haven't been able to follow up on my question, I had a bunch of assignments due on Monday. So I was able to prove this limit as follows: I started playing with the inequality sqrt(x) - x < N. I multiplied by -1 to get x - sqrt(x) > -N --> sqrt(x)(sqrt(x) - 1) > -N. Then since N < 0 by assumption, -N > 0. Now if sqrt(x) - 1 > sqrt(-N) and sqrt(x) > sqrt(x) - 1 > sqrt(-N), then we get sqrt(x)(sqrt(x) - 1) > sqrt(-N)sqrt(-N) = -N. So I worked with sqrt(x) - 1 > sqrt(-N) ---> x > (1 + sqrt(-N))^2. So I concluded by choosing M = (1 + sqrt(-N))^2 and showed that it works. Can someone just confirm with me that this proof works? And also, how do I post the math symbols like you guys did above me?
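The final verification the last post asks about does go through: with the poster's choice $M=(1+\sqrt{-N})^{2}$ for a given $N<0$, every $x>M$ satisfies

```latex
\sqrt{x} > 1+\sqrt{-N}
\;\Longrightarrow\; \sqrt{x}-1 > \sqrt{-N} \quad\text{and}\quad \sqrt{x} > \sqrt{-N}
\;\Longrightarrow\; \sqrt{x}\,\bigl(\sqrt{x}-1\bigr) > \bigl(\sqrt{-N}\bigr)^{2} = -N
\;\Longrightarrow\; x-\sqrt{x} > -N
\;\Longrightarrow\; \sqrt{x}-x < N .
```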
The 16 Universal Types are defined in ISO/CCITT X.409. These types, shown in the table below, form the basic elements from which all other ASN.1 constructions can be defined.

Table: ASN.1 Predefined Types

ID Codes 0, 16 and 17 are "special":
• Code 0 denotes End of Contents when indefinite length form objects are used. This requires that the input stream be scanned continuously object by object until this "token" is recognized, so indefinite length objects are frequently encoded as sequences of known length "fragments" to improve processing overheads.
• Code 16 introduces an ordered list of items of any type.
• Code 17 introduces a set of unordered items of any type.
• Since both of the last two items allow for recursive use, and a notation for choice also exists, any set- or sequence-derived structure can be built.

ASN.1 Construction Rules

These are the Backus-Naur Form (BNF) representations of the basic rules for constructing and reading ASN.1 specifications. You must be careful to distinguish between:
• BNF notation for specifying the ASN.1
• Actual ASN.1 syntax itself.
• In the BNF, not all terminal objects are quoted, following the practice of quoting symbols which may conflict with BNF but allowing plaintext to be entered unquoted where it is unambiguous.

General BNF rules used (from X.400):
• Symbols rendered in bold are nonterminals.
• All other symbols are terminals.
• The terminals "::=", "|", "string", "identifier", "number" and "empty" are quoted to distinguish them from the BNF operators, and any built-in non-terminals listed immediately below them.
• Non-terminals whose first letter is capital are defined in the grammar.
• Other non-terminals, of which there are four, are defined here:
• The non-terminal string is a sequence of zero or more characters.
• The non-terminal identifier is a sequence of one or more characters chosen from the capital letters, the small letters, the decimal digits and the hyphen; the first character must be a letter. Case is significant and distinguishes one identifier from another.
• The non-terminal number denotes a non-negative integer and has two forms: the first specifies the integer's value in decimal (radix 10) notation, and is a sequence of one or more decimal digits; the second specifies the integer's value in hexadecimal (radix 16) notation, and is a sequence of one or more hexadecimal digits followed by the letter "H". To aid clarity, binary values may be subscripted with "2", hexadecimal values with "16", and decimal left unsubscripted.
• The non-terminal empty denotes the null or empty string of symbols.
• Comments are embedded in the notation, preceded by two hyphens "--" and ended by two hyphens or the end of a line.
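Returning to the ID codes listed earlier, here is a minimal sketch of how the identifier and length octets can be unpacked. The `parse_tlv` helper and sample bytes are ours, not part of the text, and the sketch assumes the Universal class, tag numbers below 31, and the short definite-length form only (real BER also has long-form tags and lengths, and the indefinite form terminated by End of Contents).

```python
# Names for a few of the Universal ID codes discussed above.
UNIVERSAL_NAMES = {0: "End of Contents", 2: "INTEGER", 16: "SEQUENCE", 17: "SET"}

def parse_tlv(data, offset=0):
    """Return (tag_number, constructed, contents, next_offset)."""
    ident = data[offset]
    constructed = bool(ident & 0x20)   # bit 6 set => constructed encoding
    tag_number = ident & 0x1F          # low five bits: the ID code (< 31)
    length = data[offset + 1]          # short definite form: one octet < 128
    start = offset + 2
    return tag_number, constructed, data[start:start + length], start + length

# A SEQUENCE (code 16, constructed) holding one INTEGER (code 2) of value 5:
encoded = bytes([0x30, 0x03, 0x02, 0x01, 0x05])
tag, constructed, contents, end = parse_tlv(encoded)
```

Code 16 arrives with the constructed bit set (0x30 = 0x20 | 0x10), and because SEQUENCE and SET are recursive, their contents can be fed straight back into `parse_tlv` to recover the inner items.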
Subtraction Facts to 20

Addition and Subtraction Facts to 20

• Why should I learn the facts? Help children to volunteer answers, such as: they will need to know the facts to add the score for games, to know how much to pay for two or more items, to make change, to find how many when two or more groups are combined, to find how many are left, how many more, how many fewer, and so on. You can also tell them that these facts are like the alphabet: they will need them later as they learn to do more complex math.

• If I learn one fact, will it help me to remember other facts? Explain to children that once they learn a fact, they know the fact family. For example, 7 + 8 = 15 brings along 8 + 7 = 15, 15 - 7 = 8, and 15 - 8 = 7. They can use doubles to find the answer for doubles plus one. Also explain that other strategies help them remember facts, e.g., zero doesn't change a number, one makes a number go up or down by 1.

• If I don't know a fact, what can I do? Remind children that they can try to remember its related fact, or use one of the strategies they learned to figure out the answer.
population proportion

December 4th 2007, 02:28 AM #1
This may not be advanced statistics, but I really do not understand: what is a population? Why do we make a distinction with a sample? I do not understand when something is a proportion and when it is a sample. Where does this idea of proportion come from? Thanks for all your help.
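A small numeric sketch of the distinction the question is after; the data below are made up purely for illustration:

```python
# A *population* is every individual of interest; a *sample* is the subset
# we actually observe. A *proportion* is the fraction having some property.
population = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # 1 = has the property of interest
sample = population[:4]                        # the four units we happened to observe

p = sum(population) / len(population)          # population proportion (the true value)
p_hat = sum(sample) / len(sample)              # sample proportion (an estimate of p)
```

Here the population proportion is 0.7, while the sample proportion computed from only four units is 0.75; sample proportions are what we compute in practice, to estimate the population proportion we usually cannot measure directly.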
a \not\equiv b (mod m), then b \not\equiv a (mod m)

June 26th 2010, 04:46 PM
If $a \not\equiv b \pmod{m}$, then $b \not\equiv a \pmod{m}$. Is this all there is to this one:
$m \nmid (a-b) \rightarrow mx \neq (-1)(b-a) \rightarrow m(-x) \neq (b-a) \rightarrow m \nmid (b-a)$

June 26th 2010, 05:12 PM
If your other congruence post is logically prior to this one, then it looks good. Otherwise, you should interpose, twice, the intermediate step AsZ showed in the other post. Just for

June 26th 2010, 05:14 PM
In the book, it defines $m \mid (a-b)$ as $a \equiv b \pmod{m}$. The problem is it just seemed too easy to be true, though.

June 26th 2010, 05:16 PM
Oh, ok. Different authors define congruence differently, I guess.

June 26th 2010, 06:21 PM
You have the right idea. $m\mid a-b \iff m\mid b-a$

July 5th 2010, 02:44 AM
If you rephrase this, the problem is: If $m$ does not divide $a-b$, then also $m$ does not divide $b-a$. Now, $a-b$ and $b-a$ have the same divisors. So if $m$ is not a divisor of $a-b$, it can't be a divisor of $b-a$. This is essentially the same explanation "chiph588@" gave.
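A brute-force sanity check of the divisibility fact the last post relies on; the `divides` helper and the ranges are ours, just for illustration:

```python
# For every m, a, b in a small range, check that m divides (a - b)
# exactly when it divides (b - a) -- i.e. a-b and b-a have the same divisors.
def divides(m, n):
    return n % m == 0

symmetric = all(
    divides(m, a - b) == divides(m, b - a)
    for m in range(1, 12)
    for a in range(-15, 16)
    for b in range(-15, 16)
)
```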
calculating into scientific notation

September 1st 2010, 06:11 PM #1
Calculate the following numbers into scientific notation (include units), for example 3.67 x 10^-6 m:
(6.5 x 10^4 m) x (4.5 x 10^-6 s^-1) = 2.925 x 10^-2 m/s

September 1st 2010, 07:27 PM #2
Are we using significance arithmetic? You seem to be rather careless about how you ask the question; anyway, maybe what you mean is this:
$(6.5\cdot10^4\ \text{m})(4.5\cdot10^{-6}\ \text{s}^{-1})=2.9\cdot10^{-1}\ \text{m/s}$
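Carrying the reply's version of the product out numerically, and rounding to the two significant figures the data support:

```python
# (6.5 x 10^4 m) * (4.5 x 10^-6 s^-1): multiply mantissas, add exponents.
value = 6.5e4 * 4.5e-6       # = 0.2925 m/s
in_sci = f"{value:.1e}"      # scientific notation, two significant figures
```

Note the exponent: 29.25 x 10^-2 renormalizes to 2.9 x 10^-1, not 10^-2, which is the correction the reply makes.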
Problem Solving For The Ninth Grader
An Introduction (Delayed)

Let me begin by saying that this is not a six week unit in problem solving. There is no way in which I could get a group of students to sit and work on word problems for such an extended period of time. Such an attempt on my part or on anyone else's part would be a student turn off. This unit will be designed as an "ongoing" unit. It will not just be a few painful moments at the end of each chapter. Problems should be plugged into the classroom activities. The instructor will be the key to the success of such a plan. He must know when to feed in the right type of problem, one which will keep the class both moving and involved.

My audience is to range from the slow to average algebra I student. Many of the problems I will suggest will have an appeal to a wide range of students. My main area of concern will remain with those young men and women who traditionally have had matching problems with their reading skills. Some of these students drift through the four years of the high school without ever having solved a word problem. This bothers me! In a one to one situation these same students are very street wise. They do not lack common sense. It is my feeling that we can tap this energy and focus it into the area of mathematical problems. To capture and redirect this energy we will have to look for good word problems which will have them in mind.

There are lots of good word problems which are appropriate for the slow to average student. Just this week I picked up a copy of the magazine "Psychology Today." In it was an article by Eugene Raudesepp entitled "More Creative Gamesmanship." It is a good source for some starter problems for the students. I have also seen a new magazine called "Games" which Playboy Enterprises puts out. It is totally devoted to puzzles, magic squares and problems. There are a lot of ideas out there if we take the time to look for them.
By making full use of these and other sources we can create a more exciting and more positive class atmosphere, an atmosphere which will encourage all of the students to grow intellectually. To prepare students for the problem solving process it would be wise to have them buy a loose leaf notebook which will be kept exclusively for this work. In this book they can record new words, definitions, problem solutions, problems in various stages of completion and sketches of their problems. I suggest a loose leaf type of book be used so that you will be able to give the students problem sheets which can be put directly into the book and not "lost."

In this unit the student's first encounters with word problems will be delayed a bit until he has had some lessons in evaluating expressions and in plugging information into formulas. This type of a problem is not a problem in the true sense, for the mathematical operations are clearly stated. These exercises do serve a definite purpose in that they are good practice in the observation of patterns, they give the student confidence in his arithmetic skills, and finally they do train the student to speed up his calculations. I suggest that time limits be placed on these drills. In this way the mechanical process will not overshadow the thought process.

At this point the manipulative skills which we have sharpened will aid us as we go into the process of translating ideas from the written or spoken word into algebraic symbols and algebraic equations. Most of us recall those days when we had to translate Latin into English. We had day after day of word searching and tense finding and then one day it all fell into place. The task had become second nature to us. In looking for some good material to aid us in this task I have found that problems about percent lend themselves to quick and interesting translation. On the board we usually make a student key and the process looks like this.
Given the question:

(figure available in print form)

It is possible to throw out a lot of this type of problem and get the entire class working and achieving good and quick results. Here I have to say, "Use that calculator whenever possible!!" Encourage the students to bring their calculators to class for their own use. It's sad to see a student with a fancy calculator in his hand and he does not have the slightest idea about how to use it. See him after school or take the time in the class to show how this tool is to be used to make all of our jobs easier. There are a lot of calculator oriented problems to be found in the National Council of Teachers of Mathematics publications "Arithmetic Teacher" and "The Mathematics Teacher". Learn to use them as a teaching tool.

In trying to think of ways to present this paper I toyed with a problem time table of sorts, you know the kind of plan in which you state that on the third day we will do this or that. I think the listing looks good but it is not very useful. The idea was abandoned. In its place I will try to show you some of the things I have tried to do with my classes. I hope that you find some of the ideas useful and interesting. Most of these ideas will be presented in story or play form, for that is the way I try to present them in my classroom.

In working with word problems I have found that one must sketch, sketch, sketch. Wherever possible I include my own likeness. The students love it and seem to pay a little more attention to the proceedings. Encourage the students to display their sketches for area, perimeter and distance problems. A poor sketch can lead to a dead end in problem solving. If you do check the sketches you will see many which will tell you at once if the student understands the problem.

I have thought about some of the difficulty which some of the students have in handling mixture problems.
You know the ones about coffee, nuts or candy and how they are selling for $1.20 a pound and they have to be mixed with another more expensive kind and eventually the entire mixture will be sold for, let's say, $1.80 a pound? It occurred to me that we are using problems which made a lot of sense to us but not much sense to today's student.

A case in point. I grew up in the city of New Haven in the 1940's. The city was then about 70% Italian. When you walked into an Italian market the food was out in the open. It was there to be seen, touched and experienced: I used to put on ten pounds by just inhaling the aromas. We did not get a slick package with its coating of plastic. There were, for example, four to five kinds of coffee beans in the bins for us to buy. It could be scooped out, blended to your wishes, poured into a grinding machine, and you had your very own coffee. It was two scoops of this and one of that. God forbid if you came home with the reverse mix. Those merchants mixed candy, nuts and coffee to your taste or to the taste of your wallet.

When today's student enters a store he finds none of what I have just described. Everything is in a box, wrapped in plastic or frozen stiff. You take it or you leave it! So the next time you are at the board (and you are over 40) and the students do not "relate" to the mixture problems, you are just going to have to take some time out and tell them about "the way it
The sign on the door lists the prices Teachers $1.50 and Students $.8O. A count of the money at the end of the show reveals that the play took in $158.00. How many teachers went to the play? Silence... A lot of prodding from the teacher and still no luck. Wild guesses are always present but no answers. To break the ice we agreed to pretend that we were actually there buying tickets, entering the theater and finally counting the money. The first thing we did was to make tickets from scraps of colored paper. One color for the teachers and another distinct color for the student tickets. Some of the students tried to “walk” into the theater after buying the tickets. They were stopped by that ever present little man who reaches out, grabs and tears up your only souvenir of that never to be forgotten night. Stop “ Who is this man? What is his real duty in the theater? Is he just there to annoy the customer? What does he do with all of the half tickets? Here I try to put in a little reality, I need the students to have a little more input. Suppose the gal who sold you the ticket was putting some of the money into her pocketbook. How could the manager catch her when he has to be at the door collecting and tearing your tickets? The good class has awakened! They are quick to tell me about the tickets (halves). All you have to do is to split tickets into two piles according to their color. Then each pile is counted and multiplied by its ticket cost and you add it all up and compare this to the night’s take and if she is stealing you have her. The class enjoys this but does not see the connection between the original problem and this last discussion. Here is where I can go two ways to get the idea across. My first try is as follows. I pick up all of the ticket stubs and display them to the class. How many tickets do I have here? With out question the answer will be 110. 
While they are still watching I break the pile by removing one type (color) of ticket, say the student’s tickets. Now holding the student’s tickets over my head for all to see, I ask “How many tickets do I have here?”. Silence ... Many times I get an automatic answer of “half” in spite of the fact that the original pile was designed to be heavily one type or color of ticket. Next a leading question from the teacher. “If we have not counted these tickets then what is the only thing I can name them?” Usually the class will give back an “X!” “Now then what do I have in the other pile?” "Is it still 110?” The class will come back with the fact that it is 110 but without the X. If this is done, than we quickly mark this information on the chalk board, before the thought is lost . (figure available in print form) We now try to make a combination of sketch and equation which will help us visualize the problem. The tickets are put in piles to be counted and if all goes well we should be able to solve the (figure available in print form) The reason for so few students attending the play was of course obvious; it was by William Shakespeare. Before I forget to mention it, I find its always a good idea to have the class actually take and add the 110 X and X to make sure they see it as the original 110. Do not forget to plug in the answers to see if the problem works out. The ticket problem is one of a type and it does lend itself to the colored scraps of paper. In the event I need another tool for the same or similar problem I grab a long sheet of foolscap paper and quickly scribble a big 110 on it. Then just as quickly I tell the class to think of it (sheet of paper) as my pile of tickets. “How many do I have?”, the answer of 110 comes back quickly. I then tear off a good chunk of the same paper and tell the class, “These are the student tickets!”. “Did I count them?” “No I did not”. “So what do we call them?” Answer from the class "X!" Mark it this way. 
(figure available in print form) In my other hand I still hold the original sheet which is still marked 110. “Are there still 110 tickets here?” If no then why not? The class should see that it is really 110 without the X. ‘ It would be recorded this way. (figure available in print form) Here the sight of you rejoining the torn sheet should drive home the idea that the two parts do indeed account for all of the 110 tickets. The class will then complete the problem in much the same manner that was described in the previous paragraphs. It is not the time to stop work on this problem. I do suggest that you have the students list the X as the teachers tickets and see for themselves if there is a change in the answers. This is done to dispel the thought which some students have about the teacher having prior knowledge about the problem. When word problems are presented in story or play form more students seem to get involved. Interject a little intrigue (the cashier stealing money from the theater) and you will catch the ears of more students. Students who might be sitting there letting the “smarter” or more vocal classmates handle the work. Try to get each student involved, so that by the time the problem is over they can truly say that they too had a hand in its solution. There are times when you would rather not be involved in the act of teaching word problems;. This: summer was one of them. The temperature hovered about the 100 degree mark and all of us would rather have been at the beach. I needed a problem which would break the monotony, stimulate some interest, and lend itself to a good clean cut and quick solution. I decided to up date an oldie. The problem concerns a dude named Clyde. Clyde went to his corner package store and bought a bottle of their best wine for $2.10. As dudes will do he complained about the high price. 
The store owner agreed and to point up the fact he told Clyde that this bottle which he had just purchased cost two dollars more than its cork. Later in the evening when Clyde and his girlfriend had a lull in their conversation he mentioned his conversation with the package store owner. His gal then asked him just what the cork cost. Clyde was not a heavy weight in the math department but he gave it his best try. “Why it cost 10¢ I” Was Clyde’s answer correct? The class agreed with Clyde, the cork had to cost 10¢. When they were told this was not the answer they agreed to kick the solution around on the black board. They began by putting a very simple sketch down, and then the parts of the problem began to fall into place. Bottle plus Cork equals $ (total cost) (figure available in print form) Next they added some simple facts like $ could be replaced with $ 2.10. In place of the sketch of the bottle (to which they insisted on adding a label) they were able to put in (substitute) “$ 2.00 more than the cork “. (figure available in print form) Its an off beat solution. The students had fun putting in Ripple for the wine label. They had a far more important experience for they got away from doing a problem with the typical x variable. I hope that by doing this they were able to see that the sketch of the bottle and its cork conveyed as much if not more understanding than the usual variables. Encourage the students to be creative in this work. They have inventive minds which should not be locked into forms. Try to get them to experience the problem solving process. Not all word problems can be quickly explained. Some times when the class is very slow you have to resort to basic techniques. One day we took a simple problem about a man who had purchased two Sunday papers. We were given the total cost and we were to determine the cost of the individual papers. I converted the problem to one about Mr. Cochrane who went out on a Sunday morning to buy two newspapers. 
The cost of the papers was $2.30. This made Mr. Cochrane curious, so he inquired about the cost of the individual papers. “It’s because the Times cost 70¢ more than the Register”, was the reply the druggist gave him. What then did each paper cost? Because the class was not able to translate the paper cost relationship, we had to try a different approach. The following line of thought was tried. There is some connection between the cost of the two papers. Let us put down that very simple thought. The Times is related to the Register. (figure available in print form) Here we stress that we are only saying that there is some kind of a connection (mathematical) between the cost of the two papers. Our next thought is can we tell which paper cost the most? Use the appropriate inequality symbol to show this. (figure available in print form) The next thought is also very simple. “If the two newspapers were on some device like a child’s seesaw how would they look?” (figure available in print form) Its quite clear that the Times outweighs the Register. While this helps us, it does not establish an equality relationship between the two newspapers. The thought is not a difficult one to complete for the class realizes that the Times outweighs the Register by $.70. From their playground experience they know that to effect a balance they need to add this $.70 to the Register’s side of the (figure available in print form) Now to wrap up the problem. The original problem shows the two newspapers have a total cost of $2.30. We can not put this into an equation form and solve it for we will have an equation with two unknowns in it. If we take the fact that the Times equals the Register plus 70¢ and plug this in place of the T in the equation we can solve our problem. “Let us try it!” (figure available in print form) Again we try to get the students to relate to the work. Here we were able to draw on a childhood play experience, the seesaw, and solution flows from it. 
Some students have said that its like doing math with out math. When your doing word problems there are bound to be a few favorites. Here is such a problem. It has many solutions some complex and some simple. The general information about the problem is this ; we have a square and within that square is an area formed by the overlapping of four semicircles. Our job is to determine the exact area of the shaded part. (figure available in print form) When this type of a problem is presented to a class most of the students will tell you that it just cannot be done. They hope that you will agree with them and go on to some other problem which will require less thinking. One is then forced to offer some sort of a suggestion or hint to the class. “Would anyone object to studying just one “petal” of the figure?” There is usually no objection. The class is quick to realize that the partial solution would have to be multiplied by 4 to get a final solution. The class is waking up! At times a petal looks a lot like a leaf. This may aide us in our search for a solution. If we had a leaf line or rather an axis of symmetry line through the leaf (petal) would we see anything? “Lets take a good look.” (figure available in print form) Do not forget to bring those dimensions along to keep this inset in its mathematical framework. “You know I am getting an idea but there is still too much in my sketch”. “Lets just look at half a “petal.” Any solution gotten from this would have to have a factor of 8 to get back to the original area. There are some old shapes here which we can all identify. “What are they?” Play with the sketch and see if YOU can come up with any ideas. Here is a very interesting time for me and hopefully for the students. With some sketches a complete turn up side gets us to see a solution. Here is What worked this time. (figure available in print form) From what I can see we have two basic shapes here. 
It is hoped that this class (not a dull class) would pick up on the circle or rather the 1/4 circle shape and the right triangle shape. In simple terms we are looking at the area expressed as the difference between the two figures. (figure available in print form) As I just finished these sketches it occurred to me that the basic dimensions should be changed to another letter other than R to keep from being confused with the radius r. The R does lead the way to the problem’s solution by suggesting r which is associated with the circle. Lets keep it in. Wow! This solution leaps off the board at you! It is clear! A circle minus a square. Is it as easy as it looks? The length of the sides of the square can be determined if we sketch in some diagonals, which will lead to the idea of right triangles, which leads to the dreaded beast “Hypotenuse.” Yes the easy way looks a little less easy now that the thought of the pythagorean theorm has come up. The class is brave and it went on with a big loss of its enthusiasm. The sketch and first thoughts follow. The area of the circle is not too hard to determine, the usual rrr2. The determination of the length of the side of the square did require a good background in the mathematics. Do not expect that all members of the class can follow the proceedings. Now wait a minute! Was all this work really necessary? What are all those little shapes within the square, you see them, the ones created by the diagonals. They are right triangles, whose area we can easily compute by the 1/2bh rule. “We did not have to go and get poor old Pythagoras to help us!” Let the poor guy sleep! The parts will fit in and we have another good solution to the same problem. Just when I thought it was all done, a voice from the back of the room said “I could do that problem in a faster and better way.” Seeing is believing so we took a look. Her solution involved looking at just two of the semi circles and their shaded area. 
Her plan was to compute the shaded area, subtract this from the original square, double the difference, and then subtract this difference from the original square a second time. It was hard on the ears. The problem looked a lot better in sketch form, as you will see. What you are about to see requires that I not only recall it, but also that I sketch it. I am not an artist, so bear with me. I did have a choice in sketching this and, as you see, I tried for the very rough sketch as a change of pace. The work on the board looks like this, and in a way it makes the work and ideas flow. This is, as I have mentioned in the sketch itself, not unlike a negative image. The solution was backed into, but it was fun and showed a lot of imagination. The mathematics was very simple. As you can see this is a “better yet” solution. I can see one or two of you trying for your own solution. The process is very infectious.

There is no easy way to teach problem solving. What works for the first class of the day will not work in the next class. The hardest thing to do is to get the student to do most of the work. There is an old saying by Kam Fong which says in effect: “I hear, and I forget. I do, and I understand.”
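The half-petal bookkeeping in this unit can be checked numerically. Here is a sketch; I take the square to have side 2r so each semicircle has radius r (the unit leaves the dimensions to its figures), and the variable names are mine:

```python
import math

r = 1.0  # radius of each semicircle; the square then has side 2*r

# One half-petal = quarter circle minus right triangle, as in the unit.
half_petal = math.pi * r**2 / 4 - r * r / 2

# Eight half-petals make up the four-petal shaded region.
shaded = 8 * half_petal

# Closed form: 2*r^2*(pi - 2).
assert abs(shaded - 2 * r * r * (math.pi - 2)) < 1e-12
print(round(shaded, 3))  # 2.283
```

With r = 1 the square has area 4, so the petals cover a bit more than half the square, which matches the figures.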
Clausius-Clapeyron

It has been years since I have taken a math class and I need help to see if I am setting the equation up correctly and if I have solved the problem correctly.

Diethyl ether has ∆Hvap of 29.1 kJ/mol and a vapor pressure of 0.703 atm at 25.0 C. What is its vapor pressure at 95.0 C?

I converted the temperatures to K and kJ to J.

ln(P2/P1) = delta H vaporization/R (1/T2 - 1/T1)

rearranged to P2/P1 = antilog(delta H vap/R (1/T2 - 1/T1)), then multiply by P1:

P2 = P1 antilog(delta H vap/R (1/T2 - 1/T1))

antilog is also exp, so...

P2 = P1 exp(delta H vap/R (1/T2 - 1/T1))
P2 = 0.703 atm exp(29100/8.3145 (1/368 - 1/298))
P2 = 2.19 atm
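For anyone re-checking the arithmetic, here is the same setup in Python. This is only a sketch, and note that I write the exponent as (1/T1 - 1/T2), the sign convention under which vapor pressure increases with temperature; compare against your own form of the equation:

```python
import math

dH = 29.1e3            # J/mol (29.1 kJ/mol converted to J)
R = 8.3145             # J/(mol*K)
p1 = 0.703             # atm at T1
T1, T2 = 298.0, 368.0  # 25.0 C and 95.0 C in kelvin

# ln(p2/p1) = (dH/R) * (1/T1 - 1/T2)
p2 = p1 * math.exp(dH / R * (1.0 / T1 - 1.0 / T2))
print(round(p2, 2), "atm")
```

Since 95 C is well above diethyl ether's boiling point, the result should come out well above 1 atm.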
http://mathoverflow.net/questions/71092 … t-60-parts

First the world renowned DZ refers to their place as Maths Is Fun. Alright, anyone can make a mistake; he is a great mathematician. No fun to be found over there, that is for sure. One of them is a moderator elsewhere and pursues me like a hound from hell. No computational boys over there. Best they can do is plug his problem into their favorite CAS. Of course first they have to form a committee to discuss whether they should even allow such a question. After all it is hard work typing that GF into pari. Some say yes and some say no and some say... yes, you guessed it, Undecidable! Question is, why did he not bring it here? For one thing I need the 100 dollars.

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Snubbed!
Unbelievable that DZ forgot to google! And people there discussing "Is this question acceptable?", funny!

"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."

Re: Snubbed!
Hi gAr; Have you ever seen him lecture?

Re: Snubbed!
hi bobbym, Glad to hear from you today. Somehow I missed you yesterday so I was worried about whether you're alright. Anyway, to business. Seems to me this group has made a logical contradiction. In the rules they say: The site works best for well-defined questions: math questions that actually have a specific answer. But then in the thread: MathOverflow is designed for people to ask questions to which they do not know the answer, in order to enlist the aid of the MO community. So you can only post there if you know a question has an answer but don't know what it is. That's a major limit on what can be posted. And if anyone challenges a post, how's the poster supposed to prove his post was valid? One good thing though; they did give Maths Is fun a plug, even if they did spell it incorrectly.
Last edited by bob bundy (2011-09-05 18:24:39)

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Snubbed!
Hi bob bundy; I am fine, just caught up in chores and things. You are correct, their theme is contradictory to me too.

Re: Snubbed!
hi bobbym, And it's no good trying to tell them this: MathOverflow is not for questions about MathOverflow. In fact, doesn't that also rule out anyone trying to object to a post too? And anyone objecting about someone who is objecting about a post. And anyone who is objecting about anyone objecting about someone who is objecting about a post. And ................................... ad infinitum. So they disappear up their own logical backsides.
Last edited by bob bundy (2011-09-05 18:43:47)

Re: Snubbed!
I agree. "Have you ever seen him lecture?" No, I haven't seen him lecture yet.

Re: Snubbed!
If your hearing is good, he has finally taken my advice and gone to youtube.

Re: Snubbed!
You mean he is a registered user there?

Re: Snubbed!
Hi gAr; I guess so, because he has vids there now. Pretty strange stuff.

Re: Snubbed!
Okay, but I didn't get his username.

Re: Snubbed!
What was it? Shalosh?

Re: Snubbed!
Youtube username?

Re: Snubbed!
Was Shalosh the name he is under?

Re: Snubbed!
But the user "Shalosh" has no videos, and country and age don't match with that of DZ.

Re: Snubbed!
That is strange. Shalosh is definitely him. He used to publish under that name to avoid being turned down.

Re: Snubbed!
Are we talking about youtube here? I was referring to this: http://www.youtube.com/user/shalosh

Re: Snubbed!
Yes, youtube. It could be a copycat. But that is an alias he used professionally.

Re: Snubbed!
Yes, may be an imposter, if it wasn't linked from DZ's website.

Re: Snubbed!
It is linked to his site? Then it is him. Shalosh was not exactly him. It was programs he wrote. Shalosh Ekhad.

Re: Snubbed!
"It is linked to his site? Then it is him." I was telling you it's not! Which video of his did you watch on youtube?

Re: Snubbed!
Shalosh Ekhad was an alias he published articles under. Actually they were computer programs that solved the problems. They are not exactly him but sort of an alter ego. He has about 10-15 of them.

Re: Snubbed!
I know Shalosh B. Ekhad; isn't it his computer rather than his programs? I was talking about the videos; which is it that you watched on youtube?

Re: Snubbed!
Hi gAr; All of them. I started on Experimental math and finished them.

Re: Snubbed!
There are a few links from his site to youtube, but it's not uploaded by him.
Kazufumi Ito
Department of Mathematics, North Carolina State University, Raleigh, North Carolina
Phone Number: 919-515-7140
E-mail address: kito@math.ncsu.edu

Teaching: Spring, 2014:
MA 425 Homework Assignment
MA 748 Stochastic Differential Equations
MA 747, Probability and Stochastic Processes Lecture Notes

Research interests: Control Theory, Inverse Problems and Stochastic Analysis, and Theoretical and Numerical Analysis for Solutions to PDEs

Recent publications
Lecture Notes
Fractional Evolution Equations
Feedback and Time Optimal Control for Quantum Spin Systems
NCSU Comsol User-guide for Undersea Acoustics and Domain Decomposition Technique
On Convergence of A Fixed-Point Iterate for Quasilinear Elliptic Equations
Error Estimates for Viscosity Solutions of Hamilton--Jacobi Equation under Quadratic Growth Conditions
Receding Horizon Optimal Control for Infinite Dimensional System
Level Set Methods for Variational Problems and Applications
Preconditioned Iterative Methods on Sparse Subspaces
A Fast Iterative Solver for Scattering by Elastic Objects in Layered Media
On Fluid mechanics formulation of Monge-Kantorovich Mass Transfer Problem
UserLinux chooses Python as "interpretive language" of choice

Terry Reedy tjreedy at udel.edu
Sun Dec 21 20:30:36 CET 2003

"John Roth" <newsgroups at jhrothjr.com> wrote in message news:vub584opb8sd0f at news.supernews.com...
> Well, the basic idea was simply to make the () optional for functions
> with no parameters.

In mathematics (except in lambda calculus), there is generally no such thing since functions generally have no side effects and hence no-param function = constant, and name with no parens is same as normal name ref to constant, whether number, function, or other mathematical object. On the other hand, one-param functions *are* often written without parens for simple args. Hence I suspect that above would lead to the suggestion that parens also be optional for single-arg funcs. The overloading of no-op juxtaposition to mean either * or () (alternatively, the interpretation of function * number as function(number)) is rather cute.

Terry J. Reedy
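Reedy's point, that a bare name must keep meaning the function object itself, is easy to see in Python (a sketch with made-up names):

```python
def answer():
    return 42

ref = answer      # no parens: a reference to the function object
val = answer()    # parens: actually call it

assert callable(ref) and not callable(val)
assert ref() == val == 42
# If "()" were optional, a bare "answer" would be ambiguous between these two.
```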
Bothell Algebra Tutor
Find a Bothell Algebra Tutor

...I have math tutor experience from when I was a senior civil engineering student at the University of Minnesota, Minneapolis, MN. I have math tutoring experience helping freshmen and sophomores with their algebra, pre-calculus, and calculus coursework at college. I have taken extra mathematics classes in my college years with high grade points.
15 Subjects: including algebra 1, algebra 2, calculus, geometry

...I believe that all kids have the potential to learn and solve problems if given the right tools of teaching. I am a very patient person and also had the opportunity of working with students who have an IEP. Please email or call me to work out the rates, as they might be less or more than the posted rate, depending on the distance traveled and/or subject, level taught.
24 Subjects: including algebra 2, chemistry, algebra 1, English

...I believe that I had one of the best math teachers a teen can ever have. She is my inspiration when it comes to teaching Geometry as well as any subject. Geometry was also the first math subject I taught at my first job.
20 Subjects: including algebra 1, algebra 2, reading, calculus

...In addition to tutoring test prep, I am also available to tutor a variety of subjects. As a former teacher, I have experience working with students of all ages. I am excellent at teaching Math to those who hate it.
36 Subjects: including algebra 1, English, writing, reading

...I have been involved in volunteering groups since elementary school and have never found anything that matches the joy I receive when I have positively impacted someone's life. Throughout high school I tutored pre-calculus students, working with them and going over multiple problems until they understood the concepts they were struggling with.
15 Subjects: including algebra 2, reading, geometry, Spanish
Seemingly complex logic/set-theoretic puzzle

I got this puzzle some time ago and it has been bugging me since; I can't solve it, but it is supposedly solvable. I am interested in a solution or any tips on how to proceed.

In front of you is an entity named Adam. Adam is a solid block with a single speaker, through which he hears and communicates. For all propositions (statements that are either true or false) $p$, if $p$ is true and logically knowable to Adam, then Adam knows that $p$ is true. Adam is confined to his physical form, cannot move, and only has the sense of hearing. The only sounds Adam can make are to play one of two pre-recorded audio messages. One message consists of a very high note played for one second, and the other one a very low note played for one second.

Adam has mentally chosen a specific subset of the Universe of ordinary mathematics. The Universe of ordinary mathematics is defined as follows:

Let $S_0$ be the set of natural numbers: $$S_0 = \{1,2,3,\ldots\}$$ $S_0$ has cardinality $\aleph_0$, the smallest and only countable infinity. The power set of a set $X$, denoted $2^X$, is the set of all subsets of $X$. The power set of a set always has a cardinality larger than the set itself, $$|2^X| = 2^{|X|}$$

Let $S_1 = S_0 \cup 2^{S_0}$. $S_1$ has cardinality $2^{\aleph_0} = \beth_1$. Let $S_2 = S_1 \cup 2^{S_1}$. $S_2$ has cardinality $2^{\beth_1} = \beth_2$. In general, let $S_{n+1} = S_n \cup 2^{S_n}$. $S_{n+1}$ has cardinality $2^{\beth_n} = \beth_{n+1}$.

The Universe of ordinary mathematics is defined as $$\bigcup_{i=0}^\infty S_i$$ This Universe contains all sets of natural numbers, all sets of real numbers, all sets of complex numbers, all ordered $n$-tuples for all $n$, all functions, all relations, all Euclidean spaces, and virtually anything that arises in standard analysis. The Universe of ordinary mathematics has cardinality $\beth_\omega$.
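The cardinality rule used repeatedly above, $|2^X| = 2^{|X|}$, can at least be sanity-checked in the finite case (a sketch; the infinite steps of the construction obviously cannot be run, and the names are mine):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets (the finite analogue of 2^X)."""
    elems = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(elems, r)
                                         for r in range(len(elems) + 1))]

S0 = {1, 2, 3}
assert len(powerset(S0)) == 2 ** len(S0)   # |2^X| = 2^|X|

# Finite analogue of S_1 = S_0 U 2^{S_0}: the union is disjoint here,
# since integers and frozensets are distinct objects.
S1 = S0 | set(powerset(S0))
assert len(S1) == len(S0) + 2 ** len(S0)
```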
Your goal is to determine the subset Adam is thinking of, while Adam is trying to prevent you from doing so. You are only allowed to ask Adam yes/no questions in trying to accomplish your task. Adam must respond to each question, and does so by playing a single note. After Adam hears your question, he either chooses the low note to mean yes and the high note to mean no, or the high note to mean yes and the low note to mean no, for that question only. He also decides to either tell the truth or lie for each question after hearing it. If at any time you ask a question which cannot be answered by Adam without him contradicting himself, Adam will either play the low note or the high note, ignoring the question entirely. Adam has given you an infinite amount of time to accomplish your task. More specifically, the set of both questions asked by you and notes played by Adam can be of any cardinality. If in your strategy this set is uncountably large, for any number of possibilities of Adam's chosen subset, you must describe the order that the elements of this set take place in as completely as possible.

During your questioning, you are keeping track of the following numbers:

$B_1 = $ The number of questions in which Adam had the option of truthfully responding in the affirmative. (This number and the following numbers can of course be cardinal numbers.)

$B_2 = $ The number of questions in which Adam had the option of truthfully responding in the negative.

$B_3 = $ The number of questions in which Adam had the option of falsely responding in the affirmative.

$B_4 = $ The number of questions in which Adam had the option of falsely responding in the negative.

$B_5 = $ The number of questions in which Adam responded with the high note.

$B_6 = $ The number of questions in which Adam responded with the low note.

$B_7 = $ The number of questions.

Let $C = B_1+B_2+B_3+B_4+B_5+B_6+B_7$.

A strategy exists which will eventually allow you to determine Adam's chosen subset.
Describe such a strategy in which $C$ is as small as possible, for all possibilities of Adam's chosen subset.

lo.logic set-theory

It's fine that you post here as well, but at least acknowledge that you've posted it on MSE beforehand. math.stackexchange.com/q/17688/622 – Asaf Karagila Jan 16 '11 at 18:42

I don't get it. It seems to me that the rules permit "Adam" to, for instance, play the high note in response to every question, no matter what subset he has chosen. Did you mean to impose some consistency requirement on his answers? (In particular, I'm not sure what is meant by "contradicting himself", since it seems he can answer each individual question however he wants.) – Pete L. Clark Jan 16 '11 at 20:10

Do you have the original source for this puzzle? – Michael Blackmon Jan 16 '11 at 21:11

The problem is part of Unigeg World's Smartest Person Contest 2010, available at psiq.org/human_intelligence_test.pdf The instructions say, "discussing contents of test with others is prohibited...discussing answers with others is strictly prohibited...publishing test in full or part thereof is prohibited," but there's no indication of any prize on offer or any enforcement mechanism. – Gerry Myerson Jan 16 '11 at 21:57

In my opinion this is not a research-level math question, because (i) it has been taken from a list of puzzles almost verbatim (but without attribution), so (ii) it is not possible to get the OP to clarify the meaning of the question. Having somewhat cryptic instructions may be okay for a puzzle, if you like that sort of thing. It is not okay for a math question. – Pete L. Clark Jan 17 '11 at 3:27

3 Answers

First, observe that you can get around the difficulty that you don't know if high means yes or low in the following way. If you really want to ask the question $\varphi$, you should instead ask the question "high means yes for this round if and only if $\varphi$".
If high means yes, then this is the same as asking $\varphi$. But if high means no, then it is like asking $\neg\varphi$, and so we may interpret a high answer to this question as yes to $\varphi$. This transformation therefore ensures that we can in effect know that high means yes. (I mentioned a similar trick in this MO answer about guessing a number, when there can be wrong answers.)

For the lying issue, let me assume that by lying, you mean that Adam first decides whether to lie or tell the truth, and then calculates what a truthful answer would be, and then when telling the truth plays the appropriate tone, but if lying plays the opposite tone. With this interpretation, a similar trick allows us to extract the desired information. Namely, if you want to ask $\psi$, instead ask "you have decided to be truthful for this question iff $\psi$". If Adam decides to be truthful, then this question is answered the same as $\psi$. If he decides to lie, then he calculates what a truthful answer would be, given that he has already decided to lie, which is the opposite of $\psi$, and so he says the opposite of this. In this way, the double negation of the transformation allows us to get the desired information. Combining the two transformations allows us to get answers to any desired question.

Now, we simply proceed as follows. Since it seems permissible in the world of your question, let us enumerate all the elements of what you call the universe of ordinary mathematics, and ask of each such element whether it is in Adam's set, using the transformations above. In this way, we find out exactly the set of which he is thinking.
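The combined transformation in this answer can be checked exhaustively for a single question (a sketch; the function and variable names are mine, and Adam's per-question choices are modeled as two booleans):

```python
from itertools import product

def adams_tone(phi, high_means_yes, truthful):
    # Transformed question: "this is a truth-telling round iff (high means yes iff phi)".
    q = (truthful == (high_means_yes == phi))
    spoken = q if truthful else (not q)   # Adam lies by flipping the truthful answer
    return 'high' if spoken == high_means_yes else 'low'

# Whatever Adam decides about tone meaning and truthfulness,
# the high note is played exactly when phi is true.
for phi, hmy, tr in product([True, False], repeat=3):
    assert (adams_tone(phi, hmy, tr) == 'high') == phi
```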
Incidently, what you call the universe of ordinary mathematics is closely related to what is known in set theory as $V_{\omega+\omega}$, which is a model of the Zermelo axioms of set theory, one of the first axiomatizations of set theory. The $V$ hierarchy begins with $V_0$ being the empty set, and $V_{\alpha+1}=P(V_\alpha)$ and $V_\lambda=\bigcup_{\alpha\lt\lambda} V_ \alpha$ for limit ordinals $\lambda$. Your universe is contained within $V_{\omega+\omega}$, but is actually missing huge parts of $V_{\omega+1}$, because you started only with the natural numbers, rather than the hereditary finite sets. For example, the set $\{\ a_k\mid k\in\mathbb{N}\ \}$, where $a_k=\{\{\{\cdots\}\}\}$ has depth $k$, is missing from your universe, but exists in $V_{\omega+1}$. It follows that your world of mathematics does not have the set HF consisting of all hereditary finite sets, or any similar set with unbounded finite depths. From this, it follows that your world does not satisfy some of the very elementary axioms of set theory, which would allow you to construct HF from the natural numbers. For example, the set mentioned above is the result of a very simple induction on finite-depth finte sets. The fact that $V_{\omega+\omega}$ itself has no sets of size $\beth_\omega$ is precisely what led to the realization that the Zermelo axioms are too weak to prove even that $\beth_\omega$ exists. This realization led directly to the addition of the Replacement axiom to the axioms of set theory, resulting in the theory now known as ZFC. add comment I wonder why Adam cannot reason as follows: If the answer to the question is yes, then I will answer truthfully and use the high note to mean "yes". Thus I will play the high note. If the answer to the question is no, then I will answer truthfully and use the high note to mean "no". Thus I will play the high note. Suppose I get asked a question of the form "Does the high note mean 'yes' iff $\varphi$?" 
Well, I don't know whether the high note means yes until I know what the answer to the question is. But if $\varphi$ is true and I assume that the high note means 'yes', then the answer is 'yes', which I will signify by playing the high note. So this is at least a consistent answer. If $\varphi$ is true and I assume that the high note means 'no', then the answer to the question is 'no', which I will signify by playing the high note. Thus, so long as $\varphi$ is true, it doesn't matter whether I think the high note means 'yes' or 'no', since both assumptions are consistent and lead to the same answer: play the high note.

Now suppose $\varphi$ is false. Assume first that the high note means 'yes'. Then the answer to the question is 'no', so since I will answer truthfully and play the high note, the answer is no. That's a contradiction. So now let me assume that the high note means 'no'. Then the answer to the question is 'yes', so then the high note signifies 'yes', also a contradiction! What do I do in this situation? Ah, the rules say that if there is no way to answer without contradicting myself, then I can answer however I want. I will play the high note, since I am after all supposed to be giving away as little information as possible and if I play the high note every time then I am obviously giving away no information whatsoever.

Pete, I assume your answer is a response to Joel's answer? In his solution, you don't simply ask "Does high mean 'yes' iff $\varphi$?", you have to combine it with the strategy to deal with lying, so you'd ask: "Is it the case that this is a truth-telling round iff (this is a high-means-'yes' round iff $\varphi$)?" – Amit Kumar Gupta Jan 17 '11 at 21:15

Pete, my solution interprets the procedure as: first the question is asked, then Adam decides on meaning of high/low and whether to tell truth/lie, and then he answers accordingly.
In particular, the truth conditions for the statement should be determined after the choices are made, and clearly my solution depends on this. If we allow Adam to determine the truth conditions before making the decisions, then I agree with you that we can learn nothing from him. In this case, the true nature of Adam would have to be clarified. A similar issue arises in the answer I linked to in my answer. – Joel David Hamkins Jan 17 '11 at 22:24

@Amit: my response is motivated by Joel's answer. I argue that there is an interpretation of the question under which Adam can play the high tone every time, which clearly gives away no information. Because Joel gave a winning strategy under a different interpretation, I decided to clarify why in my interpretation the questions in Joel's strategy don't work. To simplify things, I made Adam's strategy such that he always tells the truth, so questions which ask about truth do not need to be considered. – Pete L. Clark Jan 17 '11 at 22:45

@Pete It appears that Joel's answer was not the author's intended answer. I have found a newer version of the puzzle which seems to disallow such questions. seti.weebly.com/uploads/1/8/2/4/1824936/… – fastforward Jan 18 '11 at 21:22

This seems to throw doubt on the credentials of those trying to evaluate the "World's Smartest Person"... – Cam McLeman Jan 28 '11 at 17:38

Based on the answers proposed to this question, I think this problem is ill-defined. The set of possible questions and assignment of truth valuations is not well defined. A fair definition of a "question" would be to specify a set $X \subseteq \beth_\omega$, where yes means that the chosen set is inside $X$ and no means it is not. In this case, Pete's answer obviously proves that Adam always has a winning strategy.
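Pete's case analysis earlier in the thread can also be checked mechanically (a sketch; the function name is mine, and it models his simplification where Adam always answers truthfully while committing to the high note):

```python
def high_is_consistent(phi, high_means_yes):
    # Question posed: "does the high note mean 'yes' iff phi?"
    q = (high_means_yes == phi)                       # truth value of the question
    tone = 'high' if q == high_means_yes else 'low'   # truthful tone under this meaning
    return tone == 'high'

# phi true: either meaning of the notes lets Adam truthfully play the high note.
assert all(high_is_consistent(True, m) for m in (True, False))
# phi false: neither meaning does, so the contradiction clause applies
# and Adam may play the high note anyway.
assert not any(high_is_consistent(False, m) for m in (True, False))
```

Since playing high is consistent whenever $\varphi$ is true and permitted by the contradiction clause whenever it is false, Adam can play the high note every time, exactly as the answer argues.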
Re: "bits and bits" !~ /scambled eggs for brain/;
Apr 20, 2000, 4:49 PM, Post #6 of 10 (4899 views)

You go out of your way to produce five-digit strings for $thiskey, and $binary is a five-digit string, so you seem to be doing a *stringwise* AND. But the code is doing a *bitwise* AND, and the result happens -- purely by chance! -- to look like the result of a stringwise AND, except for the case of '01000'. The clue is that $truth is being printed as an integer, not as a five-digit string. When I stringify $thiskey explicitly (usefully double-quoting a simple variable)

my $truth = $binary & "$thiskey";

the 104 becomes 00000 (and all the other TRUTHs are five-digit strings, as expected).

The reason I am baffled is this:

print "Padded key : $thiskey\n";
# In the line above, $thiskey is a five-digit string.
my $truth = $binary & $thiskey;
# In the line above, $binary is a five-digit string.
print "The TRUTH is $truth\t ($binary and $thiskey )\n";
# In the line above, $thiskey is a five-digit string.

Why does $thiskey become an integer, which 'integrifies' the value of $binary, for the duration of the conjunction statement, but is still a string after it???

In binary,
00110 is 0001101110
01000 is 1111101000
The bitwise AND is 0001101000, which is 104.

Because if you do the same conversions and bitwise arithmetic as I did above, you will get BY PURE COINCIDENCE a binary result which, when converted to decimal, looks the same as a string-wise AND.
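The two behaviors can be mirrored in Python for comparison (a sketch; Perl's `&` performs a numeric AND when its operands are treated as numbers and a byte-by-byte stringwise AND when both are strings, which is the distinction the post turns on):

```python
# Numeric (bitwise) AND: 0b0001101110 & 0b1111101000 == 0b0001101000
assert 110 & 1000 == 104

# Stringwise AND, like Perl's & on two strings: AND the bytes pairwise.
def stringwise_and(a, b):
    return bytes(x & y for x, y in zip(a.encode(), b.encode())).decode()

# '0' is 0x30 and '1' is 0x31, so 0x30 & 0x31 == 0x30, i.e. '0'.
assert stringwise_and("00110", "01000") == "00000"
```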
{"url":"http://perlguru.com/gforum.cgi?post=5731;sb=post_subject;so=ASC;forum_view=forum_view_collapsed;guest=","timestamp":"2014-04-19T07:42:11Z","content_type":null,"content_length":"38138","record_id":"<urn:uuid:54f90bcf-8ac1-418a-a898-9b615b96fadb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
(5,25) (0,40) y=3(x)=b how do you find the slope of this equation - WyzAnt Answers

(5,25) (0,40) y=3(x)=b find the slope

Hi Nana;
I am assuming that... Henceforth, the slope is 3.

I am assuming that... (5,25) and (0,40) are coordinates, (x,y).
Slope = (y2 - y1)/(x2 - x1) = (40 - 25)/(0 - 5) = 15/(-5) = -3
Please recheck the equation... it should indicate the slope as -3, not 3.
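Whichever reading of the equation is intended, the two-point slope formula is easy to check mechanically; here is a quick sketch (not part of the original thread):

```python
# Slope of the line through two points: (y2 - y1) / (x2 - x1).
def slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((5, 25), (0, 40)))  # -3.0, matching the corrected answer
```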
{"url":"http://www.wyzant.com/resources/answers/15435/5_25_0_40_y_3_x_b_how_do_you_find_the_slope_of_this_equation","timestamp":"2014-04-24T17:45:23Z","content_type":null,"content_length":"37627","record_id":"<urn:uuid:cc7b212c-568c-4247-954d-a1fa07d45606>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Descartes' dream

Descartes' dream: the world according to mathematics
Harcourt Brace Jovanovich, 1986 - 321 pages

Descartes' Dream consists of a series of thought-provoking essays examining the impact of mathematics and computers on universal ideas of reality, knowledge and time. Contains 71 illustrations.

Contents: Descartes' Dream (p. 3); Where the Dream Stands Today (p. 9); Are We Drowning in Digits? (p. 15); and 18 other sections.
{"url":"http://books.google.com/books?id=GNjuAAAAMAAJ&q=example&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-18T13:45:58Z","content_type":null,"content_length":"111883","record_id":"<urn:uuid:7a624718-a7bc-4b3e-8694-6e75fe02be0a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
Textbook Makeover

Nothing special. Part a takes all the fun out of it and immediately abstracts the situation into a math problem. No one got a chance to build a garden.

How I would modify it:
• Give each student a couple of drinking straws (the length of which would replace the 40 ft. noted in the problem)
• Take a page in their notebook and draw a line as the fence
• Give them 10-15 minutes to build a couple gardens/playgrounds/squares, offering no hints as to how they might make the biggest one
• Compare gardens with their table members, determine which person at the table made the biggest based on sight
• Calculate the actual area of their gardens
• Plot our numbers of width vs. area

From there it could get complex in a number of different ways depending on what you wanted to do.
• Can we get the same area from two different widths?
• Is there a definitive "biggest" garden?
• Can we generalize the shape of this fence with variables?
• What if I added another condition: x fencing and y sod?
• What if the existing fence wasn't there?

And with the idea properly motivated, I might present a scenario like #53 and have them jump right to the function. In fact, a series of problems like that might be the classwork. If you want a technology angle, I might have them generate something like this: [graph not shown]

I've had numerous problems trying to jump right in with things like #53 because students don't think in terms of functions. This was a fun thought exercise and fun to do with Algebra II. Pre-Cal is proving to be a bit more difficult but I will try.
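The width-vs-area exploration can also be scripted (a sketch, not from the post; it assumes the 40 ft of fencing from the problem, with one side of the rectangle lying along the existing fence, so a garden of width w leaves 40 - 2w ft for the side parallel to the fence):

```python
# Area of a rectangular garden built against an existing fence: two sides
# of width w plus one side of length (40 - 2*w) use up the 40 ft of fencing.
def area(w, fencing=40):
    return w * (fencing - 2 * w)

for w in range(1, 20):
    print(f"width {w:2d} ft -> area {area(w):3d} sq ft")

best = max(range(1, 20), key=area)
print("biggest at width", best, "ft with area", area(best), "sq ft")  # width 10, area 200
```

Note that widths 5 and 15 both give 150 sq ft, so the same area can indeed come from two different widths.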
{"url":"http://infinitesums.com/commentary/2013/6/21/textbook-makeover","timestamp":"2014-04-18T10:34:43Z","content_type":null,"content_length":"35411","record_id":"<urn:uuid:3a4c1790-acc3-469b-87fc-0d5744c58fbd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
black hole

No--what I apparently have to convince you of is that, in a non-vacuum region, the tangential measure *does* depend on the potential. Obviously it has to, for the rest of what I said to work. To see why it does, consider why it *doesn't* in the exterior vacuum region: it's because the "direction of gravity" is purely radial--the only "gravity" that is "pulling" on you at a given point is the inward radial "gravity" of the distant massive object. In a non-vacuum region, that's no longer true; there is non-zero stress-energy surrounding any given point in the non-vacuum region, on all sides, so there is "gravity" pulling on any given point on all sides, not just a purely radial "acceleration" due to the distant gravity source as there is in the exterior vacuum region. So the effects of gravity in the non-vacuum region change *all* of the spatial components of the metric, not just the radial one...

Nice try, but sadly still not buying that as is. I still hold that the nice transition is purely a mathematical artifact of a force-fit coordinate scheme. Here's the crux of the problem imo. We agree the tangent spatials in SC's are invariant everywhere exterior to the shell's outer radius. There is in that region the potential (1 - r_s/r), plus its spatial derivatives to all orders, just as there is within the matter region of the shell wall. The only essential difference I see is the relative size distribution of potential and derivatives. This gets down then to the *fundamental character* of the relation between potential and the various metric components. It makes perfectly good sense that there is a fairly abrupt transition in the 'g' field from a maximum at the shell's outer surface to zero everywhere at or inside its inner surface, and likewise for tidal terms - they explicitly are potential derivatives *in nature* and must cease in the equipotential interior.

Where is there any analogous physical basis, in the tangent metric component case, for *total* exterior indifference to potential *and all its derivatives*, yet a relatively steep dependence just within the matter region? 'Pulling in all directions' (I realize this is just you 'dumbing it down' for my benefit) just won't come close to cutting it as an explanation, as I have outlined above. What 'essence' or geometric 'object' can be entirely absent at and outside the shell's outer surface, yet present strongly within the shell wall, so as to explain it? And what's more, it has to be shown to be cumulative in effect, and not a mere 'blip' that leaves no trace on exit past the shell, so to speak. Tall order indeed! The sole uniquely present entity I can think of might be divergence, but that seems a most unlikely solution, and it in itself creates another issue. Namely, if divergence is truly absent exterior to the shell, this gives the lie to those claiming that in GR 'gravity truly gravitates'. What say you sir?

[One final comment: in #222 you mentioned agreement between yourself and DrGreg's finding, but I read him there as saying interior lengths are as at infinity, once the metric is applied. A misunderstanding?]
{"url":"http://www.physicsforums.com/showthread.php?p=3561869","timestamp":"2014-04-21T04:41:57Z","content_type":null,"content_length":"102494","record_id":"<urn:uuid:a5009033-e988-4db0-8030-a5039392745d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
solve for the TWO values of x that satisfy the equation `(1/16)csc^2(x/4)-cos^2(x/4)=0` on 0° < x <... - Homework Help - eNotes.com

solve for the TWO values of x that satisfy the equation `(1/16)csc^2(x/4)-cos^2(x/4)=0` on 0° < x < 360°; express exact solutions in degrees.

Since by definition cosecant is `csc z=1/sin z`, we have

`1/(16sin^2(x/4)) = cos^2(x/4)`

`1/16 = sin^2(x/4)cos^2(x/4)`

Since `2sin z cos z=sin(2z)`, we have `sin(x/4)cos(x/4) = (1/2)sin(x/2)`, so

`1/16 = (1/4)sin^2(x/2)`

`(2sin(x/2))^2 = 1`

Now we take the square root and get two solutions:

1) `2sin(x/2)=1`
2) `2sin(x/2)=-1`

1) `sin(x/2)=1/2`

`x/2=30^o + k360^o`
`x=60^o + k720^o` <-- First solution

`x/2=150^o + k360^o`
`x=300^o + k720^o` <-- Second solution

2) `sin(x/2)=-1/2`

`x/2=-30^o + k360^o`
`x=-60^o + k720^o` <-- Third solution

`x/2=210^o + k360^o`
`x=420^o + k720^o` <-- Fourth solution

Now since the third and fourth solutions don't lie between 0° and 360°, you have only two solutions, and they are 60° and 300°.
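As a numerical sanity check (not part of the original answer), both solutions can be plugged back into the equation:

```python
# Verify that x = 60° and x = 300° satisfy (1/16)csc^2(x/4) - cos^2(x/4) = 0.
import math

def f(x_deg):
    z = math.radians(x_deg / 4)
    return (1 / 16) / math.sin(z) ** 2 - math.cos(z) ** 2

for x in (60, 300):
    print(x, abs(f(x)) < 1e-12)  # both True
```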
{"url":"http://www.enotes.com/homework-help/solve-two-values-x-that-satisfy-equation-1-16-csc-433011","timestamp":"2014-04-19T15:30:32Z","content_type":null,"content_length":"26135","record_id":"<urn:uuid:7cfbb6f2-d1ea-405d-8abf-444d341608c8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Percent of Change ( Read ) | Arithmetic

What if a printer that normally cost $125 were marked down to $100? How could you calculate the percent it was marked down by? After completing this Concept, you'll be able to determine the percent of change in problems like this one.

Watch This

CK-12 Foundation: 0315S Percent of Change (H264)

A useful way to express changes in quantities is through percents. You've probably seen signs such as "20% extra free," or "save 35% today." When we use percents to represent a change, we generally use the formula

$\text{Percent change} = \frac{\text{final amount - original amount}}{\text{original amount}} \times 100\%$

or the equivalent proportion

$\frac{\text{percent change}}{100} = \frac{\text{actual change}}{\text{original amount}}$

This means that a positive percent change is an increase, while a negative change is a decrease.

Example A

A school of 500 students is expecting a 20% increase in students next year. How many students will the school have?

First let's solve this using the first formula. Since the 20% change is an increase, we represent it in the formula as 20 (if it were a decrease, it would be -20.) Plugging in all the numbers, we get

$20\% = \frac{\text{final amount} - 500}{500} \times 100\%$

Dividing both sides by 100%, we get

$0.2 = \frac{\text{final amount} - 500}{500}$

Multiplying both sides by 500 gives us

$100 = \text{final amount} - 500$

Then adding 500 to both sides gives us 600 as the final number of students.

How about if we use the second formula? Then we get

$\frac{20}{100} = \frac{\text{actual change}}{500}$

The fraction $\frac{20}{100}$ reduces to $\frac{1}{5}$, so

$\frac{1}{5} = \frac{\text{actual change}}{500}.$

Cross multiplying is our next step; that gives us

$500 = 5 \times (\text{actual change})$

so the actual change is $500 \div 5 = 100$ students, and the school will again have $500 + 100 = 600$ students.

A markup is an increase from the price a store pays for an item from its supplier to the retail price it charges to the public. For example, a 100% mark-up (commonly known in business as keystone) means that the price is doubled.
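Both formulas are easy to wrap in a one-line helper; here is a sketch in Python (not part of the lesson) applied to the opening printer question and to Example A:

```python
# Percent change = (final - original) / original * 100.
def percent_change(original, final):
    return (final - original) / original * 100

# Opening question: a $125 printer marked down to $100.
print(percent_change(125, 100))  # -20.0, i.e. a 20% markdown

# Example A: 500 students growing to 600.
print(percent_change(500, 600))  # 20.0, the stated 20% increase
```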
Half of the retail price covers the cost of the item from the supplier; half is profit.

Example B

A furniture store places a 30% markup on everything it sells. It offers its employees a 20% discount from the sales price. The employees are demanding a 25% discount, saying that the store would still make a profit. The manager says that a 25% discount from the sales price would cause the store to lose money. Who is right?

We'll consider this problem two ways. First, let's consider an item that the store buys from its supplier for a certain price, say $1000. The markup would be 30% of 1000, or $300, so the item would sell for $1300 and the store would make a $300 profit.

And what if an employee buys the product? With a discount of 20%, the employee would pay 80% of the $1300 retail price, or $0.8 \times \$1300 = \$1040$

But with a 25% discount, the employee would pay 75% of the retail price, or $0.75 \times \$1300 = \$975$

So with a 20% employee discount, the store still makes a $40 profit on the item they bought for $1000—but with a 25% employee discount, the store loses $25 on the item.

Now let's use algebra to see how this works for an item of any price. If $x$ is the price the store pays its supplier, then the markup is $0.3x$, so the retail price is $x + 0.3x$, or $1.3x$. With a 20% discount an employee pays $0.8 \times 1.3x = 1.04x$, and with a 25% discount an employee pays $0.75 \times 1.3x = 0.975x$.

So the manager is right: a 20% employee discount still allows the store to make a profit, while a 25% employee discount would cause the store to lose money.

It may not seem to make sense that the store would lose money after applying a 30% markup and only a 25% discount. The reason it works out that way is that the discount is bigger in absolute dollars after the markup is factored in. That is, an employee getting 25% off an item is getting 25% off the original price plus 25% off the 30% markup, and those two numbers together add up to more than 30% of the original price.

Solve Real-World Problems Using Percents

Example C

In 2004 the US Department of Agriculture had 112071 employees, of which 87846 were Caucasian.
Of the remaining minorities, African-American and Hispanic employees made up the two largest demographic groups, with 11754 and 6899 employees respectively.$^*$

a) Calculate the total percentage of minority (non-Caucasian) employees at the USDA.
b) Calculate the percentage of African-American employees at the USDA.
c) Calculate the percentage of minority employees who were neither African-American nor Hispanic.

a) Use the percent equation $\text{Rate} \times \text{Total} = \text{Part}$

The total number of employees is 112071. We know that the number of Caucasian employees is 87846, which means that there must be $112071 - 87846 = 24225$ minority employees; this is the part. Plugging in the total and the part, we get

$\text{Rate} \times 112071 = 24225$

Divide both sides by 112071 to get

$\text{Rate} = \frac{24225}{112071} \approx 0.216$

21.6% of USDA employees in 2004 were from minority groups.

b) Here, the total is still 112071 and the part is 11754, so we have

$\text{Rate} \times 112071 = 11754$

$\text{Rate} = \frac{11754}{112071} \approx 0.105$

10.5% of USDA employees in 2004 were African-American.

c) Here, our total is just the number of non-Caucasian employees, which we found out is 24225. Subtracting the African-American and Hispanic employees leaves $24225 - 11754 - 6899 = 5572$

So with 24225 for the whole and 5572 for the part, our equation is

$\text{Rate} \times 24225 = 5572$

$\text{Rate} = \frac{5572}{24225} \approx 0.230$

23% of USDA minority employees in 2004 were neither African-American nor Hispanic.

Watch this video for help with the Examples above.
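The arithmetic in Examples B and C can be double-checked with a short script (a sketch in Python; all numbers come from the worked examples above):

```python
# Example B: wholesale price x, a 30% markup, then employee discounts.
x = 1000
retail = 1.3 * x                 # $1300
print(round(0.80 * retail))      # 1040 -> a $40 profit over the $1000 cost
print(round(0.75 * retail))      # 975  -> a $25 loss

# Example C: Rate x Total = Part, so Rate = Part / Total.
total = 112071
minorities = total - 87846       # 24225 non-Caucasian employees
print(round(minorities / total, 3))                        # 0.216 -> 21.6%
print(round(11754 / total, 3))                             # 0.105 -> 10.5%
print(round((minorities - 11754 - 6899) / minorities, 2))  # 0.23 -> 23%
```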
CK-12 Foundation: Percent of Change

• A percent is simply a ratio with a base unit of 100—for example, $13\% = \frac{13}{100}$
• The percent equation is $\text{Rate} \times \text{Total} = \text{Part}$
• The percent change equation is $\text{Percent change} = \frac{\text{final amount - original amount}}{\text{original amount}} \times 100\%.$ A positive percent change means the value increased, while a negative percent change means the value decreased.

Guided Practice

In 1995 New York had 18136000 residents. There were 827025 reported crimes, of which 152683 were violent. By 2005 the population was 19254630 and there were 85839 violent crimes out of a total of 491829 reported crimes. (Source: New York Law Enforcement Agency Uniform Crime Reports.) Calculate the percentage change from 1995 to 2005 in:

a) Population of New York
b) Total reported crimes
c) Violent crimes

This is a percentage change problem. Remember the formula for percentage change:

$\text{Percent change} = \frac{\text{final amount - original amount}}{\text{original amount}} \times 100\%$

In these problems, the final amount is the 2005 statistic, and the initial amount is the 1995 statistic.

a) Population:

$\text{Percent change} &= \frac{19254630 - 18136000}{18136000} \times 100\%\\&= \frac{1118630}{18136000} \times 100\%\\&\approx 0.0617 \times 100\%\\&= 6.17\%$

The population grew by 6.17%.

b) Total reported crimes:

$\text{Percent change} &= \frac{491829 - 827025}{827025} \times 100\%\\&= \frac{-335196}{827025} \times 100\%\\&\approx -0.4053 \times 100\%\\&= -40.53\%$

The total number of reported crimes fell by 40.53%.

c) Violent crimes:

$\text{Percent change} &= \frac{85839 - 152683}{152683} \times 100\%\\&= \frac{-66844}{152683} \times 100\%\\&\approx -0.4378 \times 100\%\\&= -43.78\%$

The total number of violent crimes fell by 43.78%.

For questions 1-3, a hair stylist charges $70 for a haircut. Depending on how much you tip, what will be the total cost of the haircut?

1. You tip 15%.
2. You tip 20%.
3. You tip 25%.
4. 250 is what percentage of 195?
5. 0.0032 is what percentage of 0.045?
6. An employee at a store is currently paid $9.50 per hour. If she works a full year she gets a 12% pay raise. What will her new hourly rate be after the raise?
7. A TV is advertised on sale. It is 35% off and now costs $195. What was the pre-sale price?
8. A TV was advertised on sale. If you saved $40, and bought it for $160, what percentage off was it?
9. Another TV is advertised on sale. If this TV is also $40 cheaper than the pre-sale price, was it also the same percentage off as the TV in the question above? Explain!
10. Store $A$ and store $B$ …

Texas Instruments Resources

In the CK-12 Texas Instruments Algebra I FlexBook, there are graphing calculator activities designed to supplement the objectives for some of the lessons in this chapter. See http://www.ck12.org/
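The Guided Practice percentages can likewise be reproduced with the percent-change formula (a quick check in Python, not part of the lesson):

```python
# Percent change = (final - original) / original * 100, applied to the
# New York statistics from the Guided Practice section.
def percent_change(original, final):
    return (final - original) / original * 100

print(round(percent_change(18136000, 19254630), 2))  # 6.17   (population)
print(round(percent_change(827025, 491829), 2))      # -40.53 (reported crimes)
print(round(percent_change(152683, 85839), 2))       # -43.78 (violent crimes)
```

Note that the violent-crime figure rounds to -43.78; truncating the rate at four decimal places instead gives the slightly different -43.77.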
{"url":"http://www.ck12.org/arithmetic/Percent-of-Change/lesson/Percent-of-Change---Intermediate/","timestamp":"2014-04-20T09:22:21Z","content_type":null,"content_length":"121074","record_id":"<urn:uuid:e06e7e50-3422-419c-8a77-3f5061caa0c1>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Department of Physics at the U of I

Professor Aida El-Khadra received her Ph.D. in 1989 from the University of California, Los Angeles, after receiving her diplom from Freie Universitaet, Berlin, Germany. She held postdoctoral research appointments at Brookhaven National Laboratory, Fermi National Accelerator Laboratory, and the Ohio State University before joining the Illinois faculty in 1995. El-Khadra is a fellow of the American Physical Society, a recipient of the Department of Energy's Outstanding Junior Investigator Award, and a Sloan Foundation fellow. In addition, she has received a number of other research and teaching awards.

Prof. El-Khadra's area of research is theoretical particle physics. Her research focuses on the application of lattice Quantum Chromodynamics (QCD, the theory of the strong interactions) to phenomenologically interesting processes in flavor physics, which are relevant to the experimental effort at the so-called intensity frontier. She is a leader of one of the most successful collaborations working in lattice field theory in the world, the Fermilab Lattice collaboration. Select highlights include the first quantitative determination of the strong coupling from lattice QCD; a new formulation of heavy quarks on the lattice that is the foundation of many important, phenomenologically relevant lattice calculations, for example, predictions of the D and Ds meson decay constants and predictions of the shape of the semileptonic D-meson form factors; and lattice calculations of semileptonic B-meson form factors that yield the most precise determinations to date of the associated CKM matrix elements, Vcb and Vub. A recent highlight is the most precise calculation of the semileptonic kaon form factor, which improves upon our knowledge of the CKM matrix element Vus.

The high-energy theory group has a wide variety of research interests.
Topics include the top quark, electroweak symmetry breaking, quantum chromodynamics and lattice field theory, standard-model phenomenology, dynamical supersymmetry breaking, duality in supersymmetric field theory and string theory, M theory, and grand unification.

Standard Model Phenomenology with Lattice QCD

Quantum chromodynamics (QCD), the theory of the strong interactions, is amenable to perturbative calculations only at high energies. A quantitative understanding of the low-energy behavior of QCD, like the interactions of quarks inside hadrons, requires nonperturbative methods. Lattice field theory offers a systematic approach to solving QCD nonperturbatively. The space-time continuum is replaced by a discrete lattice. Part of our research is concerned with improvements in the formulation of lattice QCD. Other projects deal with applications of lattice QCD to phenomenologically interesting processes that yield insight into the standard model of particle physics.

© 2014 The Board of Trustees at the University of Illinois | Department of Physics | College of Engineering | University of Illinois at Urbana-Champaign
Department of Physics, 1110 West Green Street, Urbana, IL 61801-3080
{"url":"http://physics.illinois.edu/people/profile.asp?axk","timestamp":"2014-04-20T08:14:00Z","content_type":null,"content_length":"45070","record_id":"<urn:uuid:36259d9d-6289-4180-b2be-d03143259f79>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Please help! What is the sum of the series? 20 on top and n-1 on the bottom of ∑(5-n)
• 10 months ago
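Reading the bounds as n = 1 on the bottom (the question's "n-1" is presumably a typo for "n=1") up to 20 on top, the sum is quick to check (an addition, not from the original page):

```python
# Sum of the series with 20 on top and n = 1 on the bottom of ∑(5 - n):
# twenty copies of 5, minus the triangular number 1 + 2 + ... + 20 = 210.
total = sum(5 - n for n in range(1, 21))
print(total)  # -110 (i.e., 100 - 210)
```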
{"url":"http://openstudy.com/updates/51ac17f6e4b08f9e59e396fb","timestamp":"2014-04-17T06:52:34Z","content_type":null,"content_length":"58525","record_id":"<urn:uuid:580d1210-dbd4-452e-8824-406d7b9cba5d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
You Get What You Give? President Obama’s recent remarks about business owners have intensified the national debate over the proper relation of the individual to the rest of society. Economist and pundit Paul Krugman has entered the fray from a different angle, analyzing the rhetoric of conservatives: On the one hand, they praise the free market for paying individuals according to their contribution to the economy, yet on the other hand they also love entrepreneurs for creating jobs and new products, thereby showering benefits on everyone. Krugman claims that these two principles contradict each other. Yet Krugman is making a basic mistake in economic theory. There’s nothing wrong with standard conservative attitudes toward the meritocracy of the market and the social benefits of high achievers. To set the context, let’s reproduce Krugman’s argument from July 9, when he thought he caught Romney supporters contradicting themselves: So, imagine a Romney supporter named John Q. Wheelerdealer, who works 3000 hours a year and makes $30 million. And let’s suppose that he really does contribute that much to the economy, that his marginal product per hour—the amount he adds to national income by working an extra hour—really is $10,000. This is, by the way, standard textbook microeconomics: in a perfectly competitive economy, factors of production are supposedly paid precisely their marginal product. Now suppose that President Obama has reduced Mr. Wheelerdealer to despair … [s]o Wheelerdealer decides to go Galt. Well, actually just one-third Galt, reducing his working time to just 2000 hours a year so he can spend more time with his wife and mistress. According to marginal productivity theory, this does in fact shrink the economy: Wheelerdealer adds $10,000 worth of production for every hour he works, so his semi-withdrawal reduces GDP by $10 million. Bad! But what is the impact on the incomes of Americans other than Wheelerdealer? 
GDP is down by $10 million—but payments to Wheelerdealer are also down by $10 million. So the impact on the incomes of non-Wheelerdealer America is … zero. Enjoy your leisure, John! …
 So there’s a huge contradiction in the whole position of the self-regarding rich—a contradiction that I’m quite sure bothers them not at all. More champagne? Thus, Krugman thinks that very high income-earners—according to textbook economic theory—take out from the economy exactly what they put into it. Therefore, it won’t do for conservatives to lambaste soak-the-rich policies on the grounds that they will hurt middle-class and poor workers, because (Krugman claims) the total income of everyone besides the high-income earners will be unaffected if the super-achievers withdraw from the economy. Krugman is saying that conservatives need to get their story straight: Does the market really pay workers according to their marginal productivity, as the free-market economists tell us with glee, or do the super-rich entrepreneurs actually add more to the economy than they take out in terms of their enormous earnings? There’s actually no contradiction here: Krugman is botching basic price theory, as even a Keynesian fellow-traveler economist pointed out last year when Krugman made the same mistake. I’ll illustrate the confusion with a simple tale involving apples. In order to pinpoint Krugman’s error, I’ll have to use a few numbers, but I’ll keep it relatively painless. Imagine Smith, the owner of an orchard who is considering hiring some day laborers to pick apples. If just one man shows up, Smith lets him use his ladder (the only one Smith owns) to go up and down from tree to tree, filling up baskets with apples. If it’s one man working by himself, he can fill 20 baskets per hour. However, if two workers show up, then one guy will climb up the ladder, and start dropping apples down to the other. The man on the ground will catch each apple and place it carefully in the basket. This approach cuts out much of the ladder time, and so two men working in this fashion can fill 30 baskets of apples per hour. Finally, if three workers show up, then two of them proceed as before. 
The third will ferry the sole basket back and forth to the barn, where Smith stores the apples. This change makes the operation even more efficient, so that three men working in tandem manage to fill 35 baskets of apples per hour. There’s one final wrinkle: Down the road a few miles is another orchard, owned by Jones, who also owns one ladder and one basket. All of the numbers would be the same for Jones’s apple operation. Now the fun part. Suppose initially there are two workers total, who are both equally agile and competent at the various tasks, and they don’t have any preference for one task over the other. They are aware of both Smith and Jones’ operations, and are trying to find work picking apples. In a competitive labor market, what will be the “real hourly wage” of the workers, if they are paid directly in apples? An economist would solve this problem by first calculating the “marginal product” of each subsequent worker’s efforts on a given orchard. Recall that with one worker, either orchard owner reaps 20 apples per hour, while with two workers, the total harvest jumps to 30 per hour, and finally with three workers the total jumps to 35. Therefore, the marginal product of the first worker is 20 apples per hour, the marginal product of the second worker is 10 apples per hour, and the marginal product of the third worker (if there were one) would be 5 apples per hour. Now back to our scenario with just two workers. If both of them for some reason were working at Smith’s orchard, then Jones would be willing to pay up to 20 apples per hour to hire one of them to come work for him. But Smith can’t afford to pay 20 apples per hour to both workers, since together they only harvest 30 apples total. We know that in equilibrium, therefore, Smith and Jones will hire one worker each. But how much will they pay these workers? Whatever the number is, we know that it must be the same number. 
To see why, suppose that Smith paid his worker (say) 15 apples per hour, while Jones paid his worker 18. Then the worker getting 15 from Smith would approach Jones and say, "Hey, you should fire that clown you've got working for you now at 18, because I'll do the same job for 17." We could add more complications to the story, involving collusion between Smith and Jones to hold down wages and so forth, but in the spirit of textbook theory (which assumes there are a large number of potential employers and workers), let's assume that things settle down with Smith and Jones each paying his worker 20 apples per hour, i.e. their marginal product. Thus, the workers pick 40 apples total per hour at the two orchards, and that's exactly what they consume as their wages. Smith and Jones, the orchard owners, aren't hurt by their decision to hire the workers, but they aren't helped either.

At this point, we seem to have justified Krugman's argument. Here we have workers earning their marginal product—20 apples per hour—but they are consuming exactly as many apples as their labor added to the harvest. So it seems as if Krugman was right, that when someone joins the economy, he doesn't really affect anybody else's standard of living.

Ah, but the problem here is that marginal productivity theory applies—as the name suggests—on the margin. It tells us that workers get paid according to how much the last unit of extra input raises the total output. Once we take that into account, we see that Krugman's "insight" collapses. In terms of our story, we can illustrate the problem by imagining that two more workers show up in town, looking for work picking apples. Through reasoning similar to that above, we conclude that in the new equilibrium, one of the newcomers goes to Smith's orchard, the other to Jones's, and they both get paid 10 apples per hour. However—and here is the crucial part—the original workers now only get paid 10 apples per hour, too.
Because the four workers are all interchangeable, they have to get paid the same amount, and we know that the marginal product of the new guys is only 10 apples per hour.

Step back now and survey the whole picture. There are a total of 60 apples being harvested every hour among both orchards, while the four workers are only being paid 40 apples total. Thus the orchard owners are enjoying a "surplus" of 10 apples each, for every hour the laborers are at work. Thus, it is not true to say that the workers in this hypothetical world are "getting paid what they put in," in the sense that Krugman means. Yet even so, the workers are being paid their marginal product, according to textbook price theory.

Now a Marxist might conclude from our story, "Aha! The capitalist system skims product out of the effort of workers into the hands of the fat cats." Yet that would be an odd way to describe the situation, since the owners in this story are supplying the orchards and equipment. Although his observation doesn't justify higher taxes and other federal regulations, President Obama was right in pointing out that no individual can produce in our economy without contributions from others.

Finally, to underscore the point that there is nothing sinister here involving the exploitation of workers, realize that this same phenomenon occurs with the prices of goods and services, too. For example, the owners of agricultural land earn a lower income than the owners of diamond mines, even though food is essential to life. This is because the marginal value to consumers of an additional bushel of crops is lower than that of the same amount of diamonds, simply because diamonds are relatively scarce. Yet even though agricultural owners are currently paid according to the value of the marginal bushel of crops, it would be silly to conclude that if all owners withdrew their farms from production, the effect on everybody else would be a wash.
By the same token, entrepreneurs and other high-income earners really do contribute to society; the rest of us have a higher standard of living because of their participation in the economy. This is true even though their compensation tends to equal the market value of what they created on the margin. Krugman's elementary confusion on what "marginal product" means led him to falsely accuse conservatives of a contradiction. There is no dichotomy in praising the meritocracy of the market while also acknowledging the social benefits of successful entrepreneurs.

Robert P. Murphy is author of The Politically Incorrect Guide to Capitalism. His blog is Free Advice. Follow him on Twitter.
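The orchard arithmetic can be condensed into a few lines (a sketch in Python; the production numbers and the two-orchard, interchangeable-worker setup come straight from the story above):

```python
# Total output for 0-3 workers at one orchard, the implied marginal
# products, and the competitive wage when two identical orchards hire
# from a shared pool of identical workers.
output = {0: 0, 1: 20, 2: 30, 3: 35}  # baskets per hour for one orchard

def marginal_product(n):
    return output[n] - output[n - 1]

print([marginal_product(n) for n in (1, 2, 3)])  # [20, 10, 5]

def equilibrium(total_workers, orchards=2):
    # Identical orchards split the workers evenly; the wage equals the
    # marginal product of the last worker hired at each orchard.
    per_orchard = total_workers // orchards
    wage = marginal_product(per_orchard)
    total_output = orchards * output[per_orchard]
    total_wages = total_workers * wage
    return wage, total_output, total_wages

print(equilibrium(2))  # (20, 40, 40): the workers consume all they add
print(equilibrium(4))  # (10, 60, 40): the owners keep a 20-apple surplus
```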
Sauk Village, IL Math Tutor

Find a Sauk Village, IL Math Tutor

...Since I have had many years of teaching from middle schools to universities, I have been afforded the luxury of teaching many ages. I have taught in middle schools, high schools, community colleges, and universities. The main subject I have taught is mathematics.
12 Subjects: including calculus, statistics, differential equations, probability

...Likewise, a student needs to be sufficiently motivated to embrace learning in a familiar, supportive manner. I earned a Bachelor of Music degree from the Eastman School of Music and, shortly thereafter, in 1985, became the music director of the Chicago Philharmonic Orchestra - a position that I sti...
37 Subjects: including SAT math, piano, finance, public speaking

Since graduating summa cum laude in mathematics education, my passion for teaching math has only grown stronger. I have two and a half years experience teaching high school math and three and a half years experience tutoring college math. Some of my favorite moments as a teacher have been when I w...
12 Subjects: including algebra 1, algebra 2, calculus, geometry

...I also had the privilege of taking some Refresher Courses meant for Senior teachers. I have nearly 40+ years of teaching mathematics at Senior level (grades 11 and 12) to students from India, Pakistan, Kuwait and the U.S.A. Besides this I have been preparing students for Engineering Entrance Ex...
14 Subjects: including statistics, probability, algebra 1, algebra 2

I am currently employed as a high school math teacher, going into my eighth year. I will be teaching pre-algebra and Algebra 2. I have taught every math class from Pre-Algebra to Pre-Calculus. I am also very knowledgeable about the math portion of the ACT. I have numerous resources for test prep.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Hello everyone, for school I have to make a function that hangs pumpkins (= lampion in the code) on a horizontal line. We already have a few functions in place:

    (define (lampion r l)
      (vc-append (filled-rectangle 2 l)
                 (pumpkin r)))

which makes a pumpkin on a vertical line, and

    (define (rijg-aan-draad lampionnetjes)
      (vc-append (filled-rectangle (pict-width lampionnetjes) 2)
                 lampionnetjes))

which hangs all the pumpkins on a thread; in this function "lampionnetjes" = a collection of pumpkins.

These are the programs I made. I had to make it a few different ways, and I am only having trouble with the last way. The first way was through regular recursion:

    (define (slinger-simpel n r)
      (define (lampionnetjes k)
        (if (= k 1)
            (lampion r 10)
            (ht-append (lampion r 10)
                       (lampionnetjes (- k 1)))))
      (rijg-aan-draad (lampionnetjes n)))

The second way was through iteration, where the "acc" was used as a blank sheet to put the pumpkins on.

Now I have to do it again, but this time I have to use the "do" form, and I am really having trouble with this. I know the shape of a do form: first you say which variables there are and how they update every time, then you write how the loop stops, and at last you write the body you need. This is the code I already got:

    (define (slinger4 n r l acc)
      (do ((aantal n (- aantal 1))
           (lengte-koord 1 (+ lengte-koord 1)))
          ((= aantal 0))
        (ht-append (lampion r lengte-koord)

For your convenience I will translate into English (it's Dutch):

    (define (handle4 n r l acc)
      (do ((amount n (- amount 1))
           (length-string 1 (+ length-string 1)))
          ((= amount 0))
        (ht-append (pumpkin r length-string)))

So I have:
n = the amount of pumpkins
r = the width of the pumpkins
l = the length of the string connecting the pumpkins to the horizontal line
acc = the blank list where all the pumpkins need to go

So to summarize my problem: I know how to use recursion and iteration, but I don't know how to implement a do loop for this.
Many thanks for your help. I would upload a picture of what the function's output looks like, but I don't know how to do this :s
Doing Physics Problems
Music by Joe Green (Giuseppe Verdi); Words by Br. R.W. Harris

Doing Physics problems can be trying, and you'll often find your answers are unfortunately wrong! If you use the method I'm supplying, then you'll find you're getting answers right before long.

First the problem read, then to the diagram proceed, and then you write down every formula you know. If you keep the numbers and the labels straight, then you may graduate a "Physics Pro."!

Doing Physics problems without method is a little bit like beating your poor head against a wall. But they tell me that if you don't do a lot of problems, then you're headed for a big fall.

First the problem read, then to the diagram proceed, and then you write down every formula you know. If you keep the numbers and the labels straight, then you may graduate a "Physics Pro."!

When the specter of examination day is drawing close, and all your thoughts seem scrambled in your mind, You can say with confidence "I know a way to deal with physics mysteries of every kind..."

First the problem read, then to the diagram proceed, and then you write down every formula you know. If you keep the numbers and the labels straight, then you may graduate a "Physics Pro."!

Music by Vincenzo Bellini; Words by Br. R.W. Harris

d equals vit plus half a t squared. d equals vit plus half a t squared. d equals vit plus half a t squared; and don't leave out units, you'll incur the wrath!

Time is the only scalar here. A, V and d are vectors. Distance and time, your secrets we've shared. d equals vit plus half a t squared.

Yes, d is vit plus half a t squared, And I never make the left-out-units gaffe! Time is the only scalar here. A, V, and d are vectors. d equals vit plus half a t squared, And please don't neglect to check your math!

Melody: I'm Looking Over a Four Leaf Clover, by Harry Woods. Words: Br. R.W. Harris

There's no disputin' Sir Isaac Newton was one very brainy guy. He started thinkin' (It didn't take long), Soon he wrote three laws that make up this song: First was inertia, F equals m a came next and as we all know; Force and Reaction are equal always, In opposite ways they go.

There's no disputin' Sir Isaac Newton Was one very brainy guy. He made up Physics, and Calculus too! No one was able his work to outdo. Motion and forces, and heat and optics, and gravity he explained. Science and Math he explained succinctly, and over them both he reigned!

The Impulse Song

F delta t equals delta p. F delta t equals delta p. F delta t equals delta p. A physical fact: An impulse must act to change the MV.

F delta t equals delta p. F delta t equals delta p. F delta t equals delta p. A physical fact: An impulse must act to change the MV.

Conservation laws are what we now proclaim; But momentum can change, It's a tricky game! How, oh how can you know-- if it stays the same? A physical fact: An impulse must act to change the MV.

F delta t equals delta p. F delta t equals delta p. F delta t equals delta p. An impulse must act to change the MV!

Equal They Will Be
--inspired by Rob Ryan and Andy O'Connor (Melody: Home on the Range)

One bright sunny day, when an incident ray hit a mirror, it then stopped to say... "When reflected, I'll know what direction to go, at what angle I'll reflect away."

Equal they will be, Angles I and R, you will see. By the Normal they're split, and the Normal will hit the surface perpendic-u-lar-ly.

Reflection, we know, governs where things will go when they're moving and bouncing around. Whether light rays or balls -- when they're bouncing off walls, By this law they will always be bound.

Equal they will be, Angles I and R, you will see. By the Normal they're split, and the Normal will hit the surface perpendic-u-lar-ly.

Songs on this page ©2000 Br. R.W. Harris
Page created and maintained by Br. Robert W. Harris
Zentralblatt MATH
Publications of (and about) Paul Erdös

Zbl.No: 814.11043
Autor: Erdös, Paul; Hall, R.R.; Tenenbaum, G.
Title: On the densities of sets of multiples. (In English)
Source: J. Reine Angew. Math. 454, 119-141 (1994).

Review: Let $A$ denote a strictly increasing sequence of integers greater than 1, and let $M(A) = \{ma : m \ge 1,\ a \in A\}$. The authors call $A$ a Besicovitch sequence if $M(A)$ has an asymptotic density; if this density equals 1, then $A$ is a Behrend sequence. It was shown by Besicovitch in 1934 that there are sequences $A$ for which $M(A)$ does not have a density. In 1948, Erdös obtained a criterion for $A$ to be a Besicovitch sequence, and a short proof of his result is included in this paper. The authors prove several theorems concerning Besicovitch sequences. For example, Theorem 3 states that $A = \{a_1, a_2, \ldots\}$ is a Besicovitch sequence if, for some fixed positive integer $k$, every $\gcd(a_i, a_j)$, $i \ne j$, has at most $k$ distinct prime factors. Let $\tau(n,A)$ denote the number of divisors of $n$ belonging to $A$, so $M(A) = \{n : \tau(n,A) > 0\}$, and let $A^{(k)}$ denote the $k$-th derived sequence of $A$, so $M(A^{(k)}) = \{n : \tau(n,A) > k\}$. The remaining theorems provide quantitative forms of the result that $\tau(n,A) \to \infty$ p.p. whenever $A$ is Behrend, and these are stated in terms of the logarithmic density $t_k(A)$ of $\{n : \tau(n,A) \le k\}$. For example, the authors prove in Theorem 5 that
$$\inf\{t_0(A) : |A| \le k\} = \prod_{j=1}^{k} \Bigl(1 - \frac{1}{p_j}\Bigr)$$
where $p_j$ denotes the $j$-th prime.

Reviewer: E.J.Scourfield (Egham)
Classif.: * 11N25 Distribution of integers with specified multiplicative constraints
11B75 Combinatorial number theory
Keywords: sets of multiples; strictly increasing sequence of integers; density; Behrend sequence; Besicovitch sequences; number of divisors; logarithmic density
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
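For a finite sequence $A$, the densities in the review can be checked numerically by inclusion-exclusion over least common multiples (the helper names below are my own, not from the paper). Taking $A$ to be the first $k$ primes, $t_0(A)$, the density of integers with no divisor in $A$, reproduces the product in Theorem 5:

```python
from fractions import Fraction
from itertools import combinations
from math import lcm  # variadic math.lcm requires Python 3.9+

def density_of_multiples(A):
    # Asymptotic density of M(A) = {ma : m >= 1, a in A} for finite A,
    # by inclusion-exclusion over least common multiples of subsets.
    d = Fraction(0)
    for r in range(1, len(A) + 1):
        for subset in combinations(A, r):
            d += Fraction((-1) ** (r + 1), lcm(*subset))
    return d

# t_0(A): density of integers with no divisor in A = {2, 3, 5},
# which Theorem 5 says equals (1 - 1/2)(1 - 1/3)(1 - 1/5) = 4/15.
t0 = 1 - density_of_multiples([2, 3, 5])
```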
Dividing the Continent

Back home again, and plugged in to libraries as well as computers, I was not surprised to learn that others had gone before me in thinking about the nature of watersheds and divides. But I was surprised to learn just who my predecessors were. Two of the best publications on the subject are short papers by Arthur Cayley (a founder of topology and graph theory) and James Clerk Maxwell (the author of the electromagnetic theory of light). Cayley and Maxwell do not focus on continental divides—perhaps the concept is not an obvious one for residents of an island nation—but their analysis of landforms in general clarifies aspects of the divide problem. They emphasize peaks, pits and saddles as the keys to delineating the fundamental regions of a landscape.

Much as Euler gave a formula for the number of faces, edges and vertices in a polyhedron, Maxwell relates the number of topographic peaks, pits and saddles on a surface. In the case of a sphere, the formula is p+q–s=2, where p is the number of peaks, q the number of pits and s the number of saddles. Maxwell also outlines a procedure for dividing the landscape into watershed regions. Whereas all my own methods progressed either down from peaks or up from pits, Maxwell argues that the right way to do it is to start in the middle—at a saddle—and proceed toward both peaks and pits along lines of steepest ascent or descent.

The more recent literature on divides and watersheds held another surprise. I had expected to find work by geographers and cartographers, and indeed they have written extensively on the subject. But there is also a body of work by students of image analysis and artificial vision. The connection is clear once it's pointed out. Just as a topographic map can be presented as an image in which elevation is encoded in brightness, so also a digitized image can be interpreted as a surface where altitude represents shades of gray or color.
Finding watersheds in such a surface is a useful approach to identifying objects in the image. The idea of using watersheds for image analysis was first proposed at the School of Mines in Paris in the 1970s. (The images that needed analyzing were micrographs of ore samples.) Workers there have continued to refine the method. The version of the algorithm I have found most helpful was devised by Luc Vincent and Pierre Soille when they were both working at the School of Mines. The Vincent-Soille algorithm is related to what I have dubbed the global-warming method, but with a number of enhancements. One remarkably simple device greatly reduces the computational effort. In addition to storing the array of points that represents the landscape, they keep a list of the same points sorted in order of increasing elevation. With this list in hand, there is no need to search for points that will be submerged each time the water level rises; you can simply cross them off in order. As it happens, one contemplated application of the watershed algorithm in image processing is the old dream of a car that drives itself. The "watersheds" detected in a video image might be the edges of a roadway. So perhaps the next time I cross the continental divide I’ll be able to pay more attention. I won’t have to keep my hands on the wheel. © Brian Hayes
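The sorted-point bookkeeping that Vincent and Soille use can be illustrated with a toy one-dimensional flooding routine. This is my own drastic simplification for illustration only; the real algorithm works on 2-D images and handles plateaus and queues with considerably more care.

```python
def watershed_1d(elev):
    """Toy 1-D version of flood-from-the-bottom watershed labeling."""
    WSHED = -1                       # marker for divide points
    labels = [None] * len(elev)
    next_label = 0
    # The sorted-list trick: visit points in order of increasing elevation
    # instead of searching the whole array each time the water rises.
    for i in sorted(range(len(elev)), key=elev.__getitem__):
        near = {labels[j] for j in (i - 1, i + 1)
                if 0 <= j < len(elev)
                and labels[j] is not None and labels[j] != WSHED}
        if not near:                 # a fresh local minimum starts a basin
            labels[i] = next_label
            next_label += 1
        elif len(near) == 1:         # water from one basin reaches this point
            labels[i] = near.pop()
        else:                        # water from two basins meets: a divide
            labels[i] = WSHED
    return labels

# A profile with pits at indices 1 and 3 and a divide at the local
# maximum between them (index 2):
basins = watershed_1d([3, 1, 2, 0, 4])
```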
Sequence: product of sequences diverges

What if I say:

1. The problem statement, all variables and given/known data
If a_n diverges to +inf and b_n converges to L > 0, prove a_n*b_n diverges to +inf.

2. Relevant equations

3. The attempt at a solution
My attempt follows: I seem to have trouble getting things in the right order, so I am trying to work on my technique, with your help. Also, I am afraid I may have omitted reference to some theorem that I am taking for granted, which is another of my bad habits. Please review for me and advise as appropriate. I am determined to conquer this subject! Thanks.

Let M > 0 and choose e with 0 < e < L. By the definition of the limit of a sequence, we can choose N_b such that |b_n - L| < e for n >= N_b. Then -e < b_n - L < e, so L - e < b_n < L + e, and in particular b_n > L - e > 0. Since a_n diverges to +inf, we can choose N_a such that a_n >= M/(L-e) for n >= N_a. Let N = max(N_b, N_a). Then for n >= N, a_n*b_n >= [M/(L-e)]*(L-e) = M. Thus, {a_n*b_n} diverges to + infinity.
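A quick numerical illustration, separate from the proof itself: assuming the intended hypothesis is b_n -> L > 0 (as the use of L in the attempt suggests), the products grow without bound, whereas with b_n -> 0 the claim can fail (a_n = n with b_n = 1/n^2 gives a_n*b_n -> 0). The particular sequences below are my own choices:

```python
# a_n = n diverges to +infinity.
# b_n = 2 + 1/n converges to L = 2 > 0; c_n = 1/n**2 converges to 0.
ns = [10, 100, 1000]
growing = [n * (2 + 1 / n) for n in ns]       # a_n * b_n: grows without bound
shrinking = [n * (1 / n ** 2) for n in ns]    # a_n * c_n: tends to 0, so the
                                              # claim needs L > 0
```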
FW: st: weights in pooled repeated cross sections

From: "Ivica Rubil" <irubil@eizg.hr>
To: <statalist@hsphsun2.harvard.edu>
Subject: FW: st: weights in pooled repeated cross sections
Date: Tue, 27 Sep 2011 16:40:01 +0200

so, you're saying I just pool the 4 datasets, divide the weights by 4, and apply -svyset- to the pooled dataset? Or should I rather use -svyset- for each of the 4 datasets, pool them into one, and then divide the weights by 4?

Further, sorry for bothering you: what are PSUs? How do I check if they change from year to year?

Ivica Rubil
Ekonomski institut / The Institute of Economics, Zagreb
Trg J. F. Kennedyja 7, 10 000 Zagreb, Croatia
tel. +385-1-2362-269
fax. +385-1-2335-165

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Steven
Sent: 27 September 2011 15:51
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: weights in pooled repeated cross sections

You can use the individual weights, Ivica, but divide by 4 so that they sum to the average population total over the four years. You still have to write the rest of the -svyset- command. If the PSUs did not change over the four years, then treat the pooled sample as one large sample, and use the same -svyset- statement that you would use for a single year. If some PSUs changed you will have to do some stratum recoding. For an example see

On Sep 26, 2011, at 12:03 PM, Ivica Rubil wrote:

Dear all,

I am trying to pool four repeated cross-sections of the Croatian Household Budget Survey. For each year that I want to pool, I have sampling weights for each observation (both household and person).
My questions are: What should I do with the weights once I pool the four datasets? Is it wrong to use dataset-specific weights in the pooled dataset and just run estimation commands with the weight option, if available? I am confused. Please, help.

Ivica Rubil
Ekonomski institut / The Institute of Economics, Zagreb
Trg J. F. Kennedyja 7, 10 000 Zagreb, Croatia
tel. +385-1-2362-269
fax. +385-1-2335-165

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
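The weight arithmetic in Steven's advice (pool the four samples, then divide each weight by 4 so the pooled weights sum to the average population total across years) can be sketched in plain Python. The numbers are purely illustrative; the actual survey workflow would of course stay in Stata with -svyset-.

```python
# Four hypothetical single-year surveys; each inner list holds the
# sampling weights for that year's households.  In this toy example
# every year's weights sum to a population estimate of 300.
surveys = [[100.0, 200.0],
           [110.0, 190.0],
           [120.0, 180.0],
           [130.0, 170.0]]
n_years = len(surveys)

# Pool the cross-sections and rescale: each weight divided by the number
# of years, so the pooled weights sum to the average population total.
pooled = [w / n_years for year in surveys for w in year]
```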
Murray Sargent: Math in Office

The W3C announced October 21, 2010 that the MathML 3.0 specification is a W3C Recommendation. This post describes some of the features added to MathML in version 3.0. The specification also includes numerous clarifications that are helpful for people wanting to implement MathML 3.0. The specification's introductory section "Status of this Document" concludes with a summary of the features.

Improved Unicode Alignment

In February 2001 when the "first edition" of MathML 2.0 became a W3C Recommendation, Unicode/ISO 10646 was still officially missing many math symbols. Unicode 3.1 (May, 2001) added the Unicode math alphanumerics (U+1D400..U+1D7FF) and the differential-d set (U+2145..U+2149). Unicode 3.2 (March, 2002) added the lion's share of remaining math symbols in the STIX set, including the arrows and math operators in the range U+2900..U+2AFF. The "second and current edition" of MathML 2.0 was released in October 2003 partly to re-align with Unicode and to include math character blocks instead of private use area code points. The operator dictionary did not include Unicode code points.

MathML 3.0 is fully aligned with Unicode 6.0+. In addition to the discussions of the Unicode math character sets in Chapter 15 of The Unicode Standard, Unicode Technical Report #25, and Unicode Technical Note #28, MathML 3.0 adds a thoroughly revamped math operator dictionary featuring Unicode and giving basic operator properties. Complementing this is the new W3C Recommendation XML Entity Definitions for Characters, which incorporates all standard entities used for mathematics. In particular, Section 3 of this document features Unicode blocks of characters that you can mouse over to see relevant math properties. Each character is displayed twice, once with a STIX font and once with a png.
Right-to-Left Math

The earlier posts Tailoring the Unicode Bidi Algorithm and Directionality in Math Zones discuss right-to-left (RtL) math and give useful references for further study, notably Arabic mathematical notation. MathML 3.0 includes full support for RtL math zones, including a directionality attribute, dir, which can appear on most elements. If the dir attribute does not appear, the directionality of the <math> element and children is inherited from the current Bidi embedding. Adil Allawi of Diwan Software Limited notes (in a private communication) that the right-to-left features of MathML 3.0 make "it possible, for the first time, to build standards-based and truly interoperable electronic maths books for students in the Arab countries."

Elementary math

People write long division and multiplication in various ways around the world. Particularly for educational purposes, it is desirable to have an interchangeable format to represent such conventions on the web and in other electronic media. Accordingly, MathML 3.0 defines elements to handle many such conventions. The facility allows special alignments as well as borrow/carry annotations. MathML is supported by various accessibility tools and has been incorporated into the DAISY Standard. Adding MathML 3.0's elementary math capabilities widens the range for accessible math.

CSS for math

In a companion specification, a subset of MathML 3.0 is described that can be rendered with CSS. This allows browsers that do not have sophisticated math layout engines to display many mathematical expressions.

Line breaking

MathML 3.0 includes a very rich set of attributes. Office 2010 implements a subset that handles the most common scenarios, including options to display a duplicated operator as ++, +-, --, but it is not nearly as general. For example, MathML 3.0 allows breaking at an arbitrary character position with a wealth of possible indents/alignments for the wrapped line(s).
href's everywhere

You can put an href (hyperlink) attribute on most any element in MathML 3.0. This makes it easy, for example, to provide links on a fraction numerator or denominator. Such links can aid the teaching of mathematics as well as to define the meanings of expressions.

The <mpadded> element has been generalized and clarified. This can be useful for tweaking the positioning of arguments for printing. Caveat emptor: since math layout engines often differ in their spacing choices, tweaking for one engine may spoil the spacing for another engine. The elements <mglyph> and <maction> are also motivated and described more thoroughly.

Content MathML

A substantial effort has gone into improving Content MathML, notably in allowing a rigorous correspondence to OpenMath with its content dictionaries. This work is valuable for symbolic and numerical computation.

Mixing Markup

One reason Microsoft Office went with its own "Office MathML", that is, OMML, was to be able to combine math XML with parent namespaces, such as WordProcessingML. MathML 3.0 has generalized the <semantics> element and clarified various scenarios to make such mixing of namespaces more convenient.

What's not yet done

A number of things were postponed for future consideration. Document math defaults have not been specified. These include the default math font, equation alignment(s), space before/after, breaking conventions, and n-ary/integral limit placements, all of which are specified in the OMML <mathPr> element. The power of the OMML math paragraph is also missing, although to some degree it can be emulated by the <mtable> element. Equation numbering is not represented, nor is it in OMML. The feeling of the MathML working group is that these concepts belong to the document container in which the MathML <math> elements reside.
However, I think that some discussion should be given to lend a degree of interoperability to these concepts, which are clearly very important for the electronic representation of mathematical text. An example of the problem is that one browser currently left aligns equations by default, whereas the common convention is to center equations.

What becomes of WG?

When the MathML 2.0 Working Group finished the MathML 2.0 specification, the group dissolved. It took some time to put it back together for MathML 3.0. Therefore, the plan for the future is to keep the group active with no specific agenda other than to maintain the spec, answer questions, and dream about the future. Something as complex and important as MathML needs a live pool of expertise.

See also Neil Soiffer's blog about MathML 3.0.
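As a concrete illustration of the "href's everywhere" feature mentioned above, markup along these lines should be possible in MathML 3.0 (the URLs are placeholders of my own; how the links are rendered depends on the user agent):

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <!-- each part of the fraction carries its own hyperlink -->
    <mi href="https://example.org/define/numerator">a</mi>
    <mi href="https://example.org/define/denominator">b</mi>
  </mfrac>
</math>
```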
[Numpy-discussion] Construct symmetric matrix Pierre GM pgmdevlist@gmail.... Sun Apr 5 14:40:25 CDT 2009 On Apr 5, 2009, at 12:18 AM, David Cournapeau wrote: > On Sun, Apr 5, 2009 at 11:17 AM, Pierre GM <pgmdevlist@gmail.com> > wrote: >> All, >> I'm trying to build a relatively complicated symmetric matrix. I can >> build the upper-right block without pb. What is the fastest way to >> get >> the LL corner from the UR one ? >> Thanks a lot in advance for any idea. > I may be missing something, but why not building it from scratch + a > couple of elementary operations ? Is it too slow ? For example: > # Assuming the lower part of a is zero before the operation > a + a.T - np.diag(a.diagonal()) Sweet! Works pretty well indeed. I didn't think about that (that was a bit late on a Sat. night for numpy). Thx again. More information about the Numpy-discussion mailing list
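For instance, David's one-liner can be checked on a small upper-triangular matrix (illustrative values of my own):

```python
import numpy as np

# Upper triangle (including the diagonal) filled in; lower part zero.
a = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])

# Mirror the upper triangle into the lower one; subtracting the diagonal
# keeps a + a.T from doubling the diagonal entries.
sym = a + a.T - np.diag(a.diagonal())
```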
RE: calculate size of idx

From: DENNIS WILLIAMS <DWILLIAMS_at_LIFETOUCH.COM>
Date: Tue, 17 Feb 2004 09:11:29 -0600
Message-ID: <0186754BC82DD511B5C600B0D0AAC4D607AFFF94@EXCHMN3>

The space required to store a NUMBER will vary by the number of digits the number contains. IIRC the bytes are 1 + CEIL(digits/2). You can get closer on your index size estimate if you will use avg(vsize(column_name)) on each of the columns you will have in your index. For both tables and indexes you can further improve the precision of your estimate by calculating the number of rows that will fit into a block, then computing the number of blocks that will be needed.

If you use Google to search using terms like oracle index size formula, you will find several formulas for estimating the table and index sizes. Unfortunately, in the recent tests I conducted, I could not find any formula that produced really accurate results.

Dennis Williams
Lifetouch, Inc.

-----Original Message-----
From: Kommareddy, Srinivas (MED, Wissen Infotech) [mailto:Srinivas.Kommareddy_at_med.ge.com]
Sent: Monday, February 16, 2004 10:36 PM
To: oracle-l_at_freelists.org
Subject: calculate size of idx

Hi Lists,

We are going to perform a data load to a table. To calculate the space required for n rows of a table we use avg_row_len * n rows from dba_tables. But how can we do that for indexes? Because my table has 7 indexes which are built on 10 different columns, I want to add space to the index tablespace also... (Most of the index columns are NUMBER; should I take 32 bytes * n rows to calculate the size?)

Thanks and Regards,

Please see the official ORACLE-L FAQ:
To unsubscribe send email to: oracle-l-request_at_freelists.org
put 'unsubscribe' in the subject line.
Archives are at
FAQ is at
Received on Tue Feb 17 2004 - 09:11:29 CST
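The NUMBER storage rule Dennis quotes (1 + CEIL(digits/2) bytes) is easy to turn into a rough per-entry estimate. The helper below is just an illustration of that arithmetic, not an exact Oracle sizing formula; real index size also depends on rowids, block headers and free space.

```python
from math import ceil

def number_bytes(digits):
    # Rough storage for an Oracle NUMBER: one header byte plus one byte
    # per two significant decimal digits, as quoted in the thread.
    return 1 + ceil(digits / 2)

def rough_entry_bytes(digit_counts):
    # Rough data bytes for one index entry whose key columns hold
    # NUMBERs of the given digit counts (illustrative only).
    return sum(number_bytes(d) for d in digit_counts)
```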
The Pagenumber of a Genus g Graph is O(g)

(1990) The Pagenumber of a Genus g Graph is O(g). Technical Report TR-90-21, Computer Science, Virginia Polytechnic Institute and State University.

Full text available as: PDF, TR-90-21.pdf (2338855)

In 1979, Bernhart and Kainen conjectured that graphs of fixed genus g greater than or equal to 1 have unbounded pagenumber. This report proves that genus g graphs can be embedded in O(g) pages, thus disproving the conjecture. An Omega(sqrt(g)) lower bound is also derived. The first algorithm in the literature for embedding an arbitrary graph in a book with a non-trivial upper bound on the number of pages is presented. First, the algorithm computes the genus g of a graph using the algorithm of Filotti, Miller, and Reif (1979), which is polynomial-time for fixed genus. Second, it applies an optimal-time algorithm for obtaining an O(g)-page book embedding. We give separate book embedding algorithms for the cases of graphs embedded in orientable and nonorientable surfaces. An important aspect of the construction is a new decomposition algorithm, of independent interest, for a graph embedded on a surface. Book embedding has application in several areas, two of which are directly related to the results obtained: fault-tolerant VLSI and complexity theory.
Strategic Management – MGT603 Lesson 30

Learning objective

The Grand Strategy Matrix is the last matrix of the matching stage of the strategy-formulation framework. It is as important as the BCG, IE and other matrices. This chapter enables you to understand the preparation of the GS matrix.

The Quantitative Strategic Planning Matrix (QSPM)

The last stage of strategy formulation is the decision stage. In this stage it is decided which alternative is most appropriate, that is, which alternative strategy should be selected. This stage contains the QSPM, which is the only tool for objective evaluation of alternative strategies. It is a quantitative method used to collect data and prepare a matrix for strategic planning, based on previously identified internal and external critical success factors. It is the only technique designed to determine the relative attractiveness of feasible alternative actions, and it objectively indicates which alternative strategies are best. The QSPM uses input from Stage 1 analyses and matching results from Stage 2 analyses to decide objectively among alternative strategies. That is, the EFE Matrix, IFE Matrix, and Competitive Profile Matrix that make up Stage 1, coupled with the TOWS Matrix, SPACE Analysis, BCG Matrix, IE Matrix, and Grand Strategy Matrix that make up Stage 2, provide the needed information for setting up the QSPM (Stage 3).

Preparation of the matrix

How is the QSPM prepared? First, it lists the key internal and external factors: internal factors comprise strengths and weaknesses, and external factors comprise opportunities and threats. As in the earlier IFE and EFE matrices, a weight is assigned to every factor; the weight expresses the importance of that internal or external factor, and the sum of the weights must equal one. After assigning the weights, examine the Stage 2 matrices and identify the alternative strategies that the organization should consider implementing.
The top row of a QSPM consists of alternative strategies derived from the TOWS Matrix, SPACE Matrix, BCG Matrix, IE Matrix, and Grand Strategy Matrix. These matching tools usually generate similar feasible alternatives. However, not every strategy suggested by the matching techniques has to be evaluated in a QSPM; strategists should use good intuitive judgment in selecting strategies to include. After assigning the weights, determine the Attractiveness Score (AS) of each strategy against each factor and then the Total Attractiveness Scores. The strategy with the highest Sum Total Attractiveness Score is the most feasible.

Steps in preparation of the QSPM

1. List the firm's key external opportunities/threats and internal strengths/weaknesses in the left column of the QSPM.
2. Assign weights to each key external and internal factor.
3. Examine the Stage 2 (matching) matrices and identify alternative strategies that the organization should consider implementing.
4. Determine the Attractiveness Scores (AS).
5. Compute the Total Attractiveness Scores.
6. Compute the Sum Total Attractiveness Score.

Example layout (factors in the left column, one column per strategy):

                               Strategy 1   Strategy 2   Strategy 3
Key External Factors
  Economy conditions
  Consumer attitude
Key Internal Factors
  Research and Development
  Computer Information

Limitations of the QSPM
1. Requires intuitive judgments and educated assumptions.
2. Only as good as the prerequisite inputs.
3. Only strategies within a given set are evaluated relative to each other.

Advantages of the QSPM
1. Sets of strategies can be considered simultaneously or sequentially.
2. Integration of pertinent external and internal factors in the decision making process.
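The six steps reduce to a weighted sum per strategy. The sketch below echoes the example layout, but every weight and attractiveness score is an invented placeholder, not a value from the lesson:

```python
# Minimal sketch of the QSPM computation (steps 1-6).
# All weights and Attractiveness Scores (AS) are hypothetical placeholders.

factors = {
    # factor: (weight, AS for Strategy 1, AS for Strategy 2)
    "Economy conditions":       (0.30, 3, 2),
    "Consumer attitude":        (0.20, 2, 4),
    "Research and Development": (0.30, 4, 1),
    "Computer Information":     (0.20, 1, 3),
}

# Step 2 check: the weights must sum to one.
assert abs(sum(w for w, *_ in factors.values()) - 1.0) < 1e-9

n_strategies = 2
totals = [0.0] * n_strategies
for weight, *scores in factors.values():
    for i, a_s in enumerate(scores):
        totals[i] += weight * a_s        # Total Attractiveness Score per factor

best = max(range(n_strategies), key=lambda i: totals[i])
print(totals)
print(f"Strategy {best + 1} has the highest Sum Total Attractiveness Score")
```

With these placeholder numbers, Strategy 1 scores 2.7 against 2.3 for Strategy 2, so it would be the one selected in step 6.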
Definition of Perpendiculars

1. Noun. (plural of perpendicular) ¹
¹ Source: wiktionary.com

1. perpendicular [n] - See also: perpendicular

Lexicographical Neighbors of Perpendiculars

perpend, perpended, perpender, perpenders, perpendicle, perpendicles, perpendicular, perpendicular fasciculus, perpendicular plate, perpendicular plate of ethmoid bone, perpendicular plate of palatine bone, perpendicular recording, perpendicular style, perpendicularity, perpendicularly, perpendiculars (current term), perpending, perpends, perpension, perpensions, perpent, perpent stone, perpent stones, perpents, perper, perpers, perpession, perpessions, perpetrable, perpetrate

Literary usage of Perpendiculars

Below you will find example usage of this term as found in modern and/or classical literature:

1. A Treatise on Conic Sections: Containing an Account of Some of the Most by George Salmon (1879) "For example, if PA, PS, PC, PD be the perpendiculars let fall from any point of ... The product of the perpendiculars from The product of the perpendiculars ..."

2. A Treatise on the Higher Plane Curves: Intended as a Sequel to A Treatise on by George Salmon (1879) "An important property of the perpendiculars let fall from the foci on any tangent is at once derived from the equation expressed in that system of ..."

3. Encyclopædia Britannica: Or, A Dictionary of Arts, Sciences, and by Colin MacFarquhar, George Gleig (1801) "AH the perpendiculars, fuch as PR, on one lids of the plane CDFE, being equal to ... The fame mud be affirmed of all the perpendiculars PM, and of all the ..."

4. A Treatise on Surveying: Comprising the Theory and the Practice by William Mitchell Gillespie (1897) "perpendiculars may be set out with the chain alone, by a variety of methods. ... Diagonals and perpendiculars. 96.
We have seen in the preceding pages that ..."

5. Mathematical Questions and Solutions by W. J. C. Miller (1888) "In a rectangular tetrahedron, the perpendiculars from the angular points on the opposite faces meet in a point, the centre of perpendiculars. ..."

6. A Treatise on the Analytic Geometry of Three Dimensions by George Salmon (1882) "The equation of the sphere circumscribing a tetrahedron may be most simply obtained as follows : Let the four perpendiculars on each face from the opposite ..."

7. A Treatise on Land-surveying: Comprising the Theory Developed from Five by William Mitchell Gillespie (1869) "(171) By perpendiculars. The straight line, AB in the figure, is supposed to be stopped by a tree, a house, or other obstacle, and it is desired to ..."

8. Chapters on the Modern Geometry of the Point, Line, and Circle: Being the by Richard Townsend (1863) "But the converse property which establishes the relation as a criterion of concurrence of the several perpendiculars is true only for the triangle. ..."
Highland, IN Algebra Tutor

Find a Highland, IN Algebra Tutor

Mathematics is a dying art in today's America. High school students are graduating with the ability to balance their checkbook and nothing more. I believe that this is unacceptable and these students need to know more than just the four basic operations of math.
11 Subjects: including algebra 2, precalculus, algebra 1, SAT math

...I received my bachelor's in the biological sciences at Benedictine University. I have always enjoyed helping out others with what I am good at, and was told that one of the ways I can continue doing so is through tutoring. My experience began after I completed my first year of college and my mo...
59 Subjects: including algebra 1, algebra 2, English, Spanish

...The recent shift in my career has allowed me to make the decision that this is my purpose in life; this is who I am. I am the educator, activist and supporter that many of our children need to help make the difference in their life. I take pride in the responsibility of grooming the academic, social, mental and physical growth of our children.
4 Subjects: including algebra 2, algebra 1, prealgebra, probability

...Talk to you soon! Kelly. I have been using Apple products daily since my first purchase in 2009. I use an iMac, iPad, iPhone, and iPod and have tutored both adults and children to learn how to use these devices.
26 Subjects: including algebra 1, algebra 2, chemistry, Spanish

...I also have a Masters in Business Administration. I have achieved a very strong math and science background from my collegiate studies. One of my greatest strengths is walking into an unknown environment and fostering a productive and eager classroom.
12 Subjects: including algebra 1, algebra 2, chemistry, English

Related Highland, IN Tutors

Highland, IN Accounting Tutors
Highland, IN ACT Tutors
Highland, IN Algebra Tutors
Highland, IN Algebra 2 Tutors
Highland, IN Calculus Tutors
Highland, IN Geometry Tutors
Highland, IN Math Tutors
Highland, IN Prealgebra Tutors
Highland, IN Precalculus Tutors
Highland, IN SAT Tutors
Highland, IN SAT Math Tutors
Highland, IN Science Tutors
Highland, IN Statistics Tutors
Highland, IN Trigonometry Tutors
Johnson's Rule

1. What is Johnson's Rule
2. Objectives of Johnson's Rule
3. Conditions for Johnson's Rule
4. Steps Involved in Johnson's Rule
5. Example with Solution
6. Simulation

1. What is Johnson's Rule

Johnson's Rule is a technique that can be used to minimise the completion time for a group of jobs that are to be processed on two machines or at two successive work centres.

2. Objectives of Johnson's Rule

The objectives of Johnson's Rule are:

To minimise the processing time for sequencing a group of jobs through two work centres.
To minimise the total idle time on the machines.
To minimise the flow time from the beginning of the first job until the finish of the last job.

3. Conditions for Johnson's Rule

In order for the technique to be used, several conditions must be satisfied:

Job times (including setup and processing) must be known and constant for each job at each work centre.
Job times must be independent of the job sequence.
All jobs must follow the same two-step work sequence.
Job priorities cannot be used.

4. Steps Involved in Johnson's Rule

Johnson's Rule involves four steps:

1. List all the jobs, together with the processing time of each job on each machine.
2. Select the job with the shortest processing time. If that time is on the first machine, schedule the job as early as possible; if it is on the second machine, schedule the job at the end. Ties in activity times can be broken arbitrarily.
3. Once the job is scheduled, eliminate it from further consideration, then go to step 4.
4. Repeat steps 2 and 3 for the remaining jobs, working towards the centre of the sequence.

Example with Demo solution
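The four steps translate directly into code. A sketch (the five jobs and their processing times are invented for illustration; `min` breaks ties arbitrarily, as the rule allows):

```python
# Johnson's rule for sequencing n jobs on two machines (M1 then M2).
# The job names and times below are hypothetical illustration values.

def johnsons_rule(jobs):
    """jobs: dict of name -> (time on machine 1, time on machine 2).
    Returns a processing sequence that minimises the makespan."""
    front, back = [], []
    remaining = dict(jobs)                 # step 1: list all jobs and times
    while remaining:
        # Step 2: pick the job with the overall shortest processing time.
        name = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(name)       # step 3: eliminate it
        if t1 <= t2:
            front.append(name)             # shortest time on M1: as early as possible
        else:
            back.insert(0, name)           # shortest time on M2: at the end
    return front + back                    # step 4: work towards the centre

jobs = {"A": (5, 2), "B": (1, 6), "C": (9, 7), "D": (3, 8), "E": (10, 4)}
print(johnsons_rule(jobs))   # ['B', 'D', 'C', 'E', 'A']
```

Job B (shortest time, on M1) is scheduled first, A (shortest remaining time, on M2) last, and the process works inwards until only the centre position remains.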
T 455/09 – From The Department Of Corrections

This is an examination appeal. Claim 1 before the Board read (in English translation):

1. Method for the treatment of a digital audio, image or video file, comprising a reduction phase where at least one couple of values of the file is reduced to a representative compressed value (Q[R]), and a reconstitution phase where the file is reconstituted to its original form from said representative compressed value, characterised in that

I. the reduction phase comprises

☆ taking into account for said at least one couple of values of the file, one of these values being greater than the other (sic);

☆ determining of an integrating value V[T] consisting in the greater value of said couple, and an integrated value that is equal to the ratio of the smaller value V[I] and the integrating value V[T];

☆ determining the representative compressed value Q[R] based on which the couple of values of the original file can be recomposed, this representative compressed value Q[R] being equal to the sum of a value C[F] that is equal to the rounded value of the integrated value V[I]/V[T] multiplied by ten, C[F] = rnd(V[I]/V[T] ∙ 10), and the rounded value of the ratio of the integrating value V[T] and a coefficient C[V] that is variable according to the desired compression and error rates, multiplied by ten, Q[R] = rnd [V[T]/C[V] ∙ 10] + C[F];

II. the reconstitution phase for the couple of values of the file comprises, consecutively, according to a process that is inversed with respect to the reduction phase:

☆ calculating the reconstituted integrating value V[T]^* by means of the relationship V[T]^* = rnd (Q[R]/10 ∙ C[V]);

☆ calculating the reconstituted smaller value V[I]^* by means of the formula V[I]^* = [V[T]^* ∙ C[F]/10];

☆ reconstituting the file comprising the couple of reconstituted values V[T]^* and V[I]^*.

One of the issues discussed by the Board was whether the corrections under R 139 requested by the applicant were to be allowed.
[2.1] There are obvious inconsistencies in the description and the claims as filed, between the formulas for calculating the representative value Q[R] and the reconstituted values V[I]^* and V[T]^* on the one hand and the numerical examples that should exemplify them on the other hand. In order to be allowable, the correction of these errors pursuant to R 139 has to be obvious for the skilled person.

The reconstituted value V[I]^*

[2.2] The original formula for calculating the reconstituted value V[I]^* refers to a variable "C[V]" that is not described elsewhere and is clearly erroneous. The replacement of "C[V]" by "C[F]" is obvious in view of the fact that the formula has to correspond to the inverse of the formula for calculating C[F]. Thus the formula may be corrected pursuant to R 139 to read: V[I]^* = [V[T]^* ∙ C[F]/10].

The reconstituted value V[T]^*

[2.3] The original formula for calculating the reconstituted value V[T]^* refers to a variable "V[R]" that is not described elsewhere and is clearly erroneous. In analogy to V[I]^*, the replacement of "V[R]" by "Q[R]" is obvious. Thus the formula may be corrected pursuant to R 139 to read: V[T]^* = rnd (Q[R]/10 ∙ C[V]).

Representative value Q[R]

[2.4.1] The numerical example contains two obvious mistakes. The displacement of the opening bracket "[" in the numerical examples for the values C[F] and Q[R] (page 6, lines 15 and 16) is obvious because it has to delimit the argument of the rounding function in these examples.

[2.4.2] The appellant requests the replacement of the original formula for calculating Q[R], i.e. Q[R] = rnd [V[T]/C[V] ∙ 10] + C[F], by the corrected formula Q[R] = rnd [V[T]/C[V]] ∙ 10 + C[F]. The Board finds this correction to be unacceptable, for the following reasons:

The numerical value that is obtained for Q[R] is 409 when the original formula is used, whereas the value given in the description (page 6, lines 15 and 16) is 407.
The skilled person facing the obvious inconsistencies in the original disclosure would have (at least) two possible choices, i.e. either to correct the formula, as requested by the appellant, or to correct the numerical example.

The calculation according to the corrected formula would yield the described value Q[R] = 407. Based on this result, the reconstituted values would be V[T]^* = 4070 and V[I]^* = 2849.

The calculation according to the original formula would yield a value Q[R] = 409. Based on this result, the reconstituted values would be V[T]^* = 4090 and V[I]^* = 2863.

The reconstituted values obtained with the two alternatives are very close to the initial values 4024 and 2869, such that the skilled person could not exclude either alternative.

The appellant points out that the reconstituted values according to the original formula were V[T]^* = 4090 and V[I]^* = 3681. The value for V[I]^* was so far from the initial value 2869 that the skilled person would exclude this alternative and correct the formula rather than the numerical example. However, this conclusion presupposes that the skilled person would have understood that the value for C[F] to be used in the reconstitution formula was not the one obtained according to the description (page 6, line 7) and to claim 1, but the one obtained by isolating the figure of the units, "9", of the Q[R] value ("409"). This step is not explained in the original application, which only mentions the action of "dissociating the representative value (Q[R])" (see page 6, line 18). The need to isolate the figure of the units can only be deduced by analysing the corrected formula and, therefore, by setting aside the original formula, which is precisely the formula to be used for calculating Q[R] in the alternative under consideration.
The Board is of the opinion that the appellant's reasoning is not convincing.

In conclusion, the skilled person, even if it envisaged only the two alternatives of correction mentioned above, could not exclude either with certainty. As a consequence, it is not apparent, directly and without ambiguity, that no other text than the one resulting from the correction could have been envisaged initially.

[2.4.3] Consequently, the correction under R 139, second sentence, of the formula for calculating Q[R] has to be refused.

Should you wish to download the whole decision (in French), just click here. The file wrapper can be found here.
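The Board's arithmetic can be reproduced in a few lines. In the sketch below, C[V] = 100 is my inference from the quoted figures — the excerpt never states it — and rnd is taken as ordinary rounding:

```python
# Reproducing the numerical comparison in point [2.4.2] of T 455/09.
# Assumption: C_V = 100 (inferred from the quoted figures, not stated in
# the excerpt); rnd is ordinary rounding.

rnd = round
V_T, V_I = 4024, 2869            # the initial couple of values
C_V = 100                        # assumed coefficient

C_F = rnd(V_I / V_T * 10)        # rounded, scaled integrated value -> 7

# Alternative 1: formula corrected as requested by the appellant.
Q_corrected = rnd(V_T / C_V) * 10 + C_F     # 407, matching the description
# Alternative 2: original formula kept, numerical example corrected instead.
Q_original = rnd(V_T / C_V * 10) + C_F      # 409

def reconstitute(Q_R, C_F):
    """Inverse process of the reduction phase."""
    V_T_star = rnd(Q_R / 10 * C_V)
    V_I_star = V_T_star * C_F // 10   # exact here: the product is a multiple of 10
    return V_T_star, V_I_star

print(reconstitute(Q_corrected, C_F))             # (4070, 2849)
print(reconstitute(Q_original, C_F))              # (4090, 2863)
# With C_F read as the units digit of Q_R, as the appellant's reasoning needs:
print(reconstitute(Q_original, Q_original % 10))  # (4090, 3681)
```

Both reconstitutions using the claimed C[F] land close to the initial couple (4024, 2869), which is exactly the Board's point: neither correction alternative can be excluded with certainty.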
Sudoku Terminology

To understand the different techniques for solving sudoku puzzles you must first understand the terminology.

Backdoor
A backdoor more or less refers to a scenario in which the puzzle can be solved through the use of a single candidate, and all puzzles have a few of these backdoors. Using a backdoor is considered a trial and error solving technique.

Big Number
A big number is a number that is the solved value for a cell. Cells without a big number have candidates, or small numbers.

Bilocation (unit)
A unit (two cells) with the same two candidates remaining. These units allow for the implementation of various solving techniques.

Bivalue (cell)
A single cell with two possible candidates remaining. Like the above units, this cell becomes useful for trying out different solving techniques.

Box
A standard Sudoku board is comprised of 9 boxes, each with 9 cells arranged in a 3x3 formation. Each box must contain one of each of the digits from 1-9.

Candidate
A candidate is any number that could still be a possible solution for a cell. Eliminating candidates is one of the prime objectives of many Sudoku solving techniques and strategies.

Cell
A single square on the Sudoku board. A standard board is comprised of 81 of these cells. Each cell is part of one row, one column, and one box.

Column
There are 9 vertical columns on each Sudoku board, with 9 cells in each. Along with rows, these columns are often referred to as lines. Each column must contain one of each of the digits from 1-9.

Coordinates
The standard way to refer to a specific cell is to use coordinates in the format R#C#, which means the row number followed by the column number. So if I was referring to the cell in the fifth row from the top, and the 4th column from the left, it would be cell R5C4. This guide will use similar terminology when explaining strategies, and Sudoku software programs also use this numbering system.

Givens
The cells which have values placed in them to get the player started are called givens.
These values cannot be altered, and set up the difficulty of the entire puzzle. Most Sudoku puzzles have a minimum of 17 givens, as this is the minimum amount necessary to provide the puzzle with a unique solution.

Guess
Unlike trial and error, guessing is not considered logical in any way, and is frowned upon by most Sudoku players. A guess is often used by players when they've run out of solving techniques to utilize, including trial and error. The player would then fill in one or more boxes with guesses and continue to solve the puzzle based on those guesses, in an attempt to see if a solution can be reached. As Sudoku is considered a game of logic, and each board can be solved through logic alone, guessing is often seen as cheating and/or giving up and admitting defeat at the hands of the board.

House
Each group of 9 cells is considered a house, whether it be a row, column, or box. The standard Sudoku board contains 27 houses, and each must contain one of the numbers from 1-9.

Improper Sudoku
A sudoku board which has multiple possible solutions.

Line
Rows and columns are often called lines. There are 18 of these lines, 9 rows and 9 columns, on a standard Sudoku board.

Peer
A peer is any cell located within the same house as the cell in question. Each cell has 20 different peers: 8 within its box, and 6 each along that cell's row and column outside the box. None of a cell's peers may contain the same number as it.

Proper Sudoku
This is a board which only has one possible correct solution.

Row
There are 9 horizontal rows on each Sudoku board, with 9 cells in each. Along with columns, these rows are often referred to as lines. Each row must contain one of each of the digits from 1-9.

Small Number
Small numbers are also known as candidates or pencilmarks.
They are numbers which are possible big numbers for that cell, though it isn't certain which of the small numbers is the actual big number.

Strong Inference
A strong inference is a deduction that can be drawn from two candidates which are linked. For example, both candidates in such a scenario could not both be false without causing a conflict, allowing the player to surmise that one is true and the other false. Weak inference can be used in a similar way, though in its case, it implies that both scenarios cannot be true, so if one is true, the other is automatically false.

Strong Link
A link between 2 different candidates in a cell or unit with only 2 remaining candidates. These are great for solving techniques, as they assure that one of them must be true, and the other false. Weak links on the other hand, in which more than 2 candidates remain, are still useful for techniques, but take more work to uncover.

Trial and Error
A viable solving method, though more serious players may not use it, and consider it just a step above guessing. Still, trial and error is a logical method to reach conclusions about candidates or values, so it should not be discarded. Many other solving techniques use some form of trial and error in their own methodology.
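The glossary's claim that every cell has exactly 20 peers (8 in its box plus 6 each in its row and column outside the box) is easy to check; a sketch using the R#C# convention defined above:

```python
# Computes the peers of a Sudoku cell: every cell sharing its row, column
# or box, excluding the cell itself.  Coordinates are 1-based, as in R5C4.

def peers(r, c):
    same_row = {(r, cc) for cc in range(1, 10)}
    same_col = {(rr, c) for rr in range(1, 10)}
    br, bc = (r - 1) // 3 * 3, (c - 1) // 3 * 3      # top-left offset of the 3x3 box
    same_box = {(br + i, bc + j) for i in range(1, 4) for j in range(1, 4)}
    return (same_row | same_col | same_box) - {(r, c)}

p = peers(5, 4)       # peers of cell R5C4
print(len(p))         # 20: 8 in the box, plus 6 each in the row and column outside it
```

The count is the same for every cell, which is why a solved value can always be eliminated from exactly 20 other candidate lists.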
[R-sig-Geo] Masking interpolations

Wesley Roberts wroberts at csir.co.za
Mon Feb 16 10:58:09 CET 2009

Hi Paul,

Many thanks for the reply. I am not sure about the convex hull approach as I am not sure how to implement it as part of my program. Is the code you wrote below replacing the following statements? Would I then pass grd to the autoKrige statement as shown in the original code below?

> x.range <- as.integer(range(a@coords[,1]))
> y.range <- as.integer(range(a@coords[,2]))
> grd <- expand.grid(x=seq(from=x.range[1], to=x.range[2], by=0.1), y=seq(from=y.range[1], to=y.range[2], by=0.1))
> coordinates(grd) <- ~x+y
> gridded(grd) <- TRUE

I have uploaded some pics of my points and interpolations to flickr. Link 1 shows the timber compartment with the lidar points overlayed and the bounding polygon used to subset the point data set. You can't really see the irregularity of the points but trust me, they are all over the place. Average distance between points is about 17cm so a 10cm interpolation resolution should be okay.

Link 1

Link 2 is the height interpolation using the parameters you suggested on Friday. As you can see, the lack of points outside the polygon results in a type of edge effect at the boundaries. I can mask the rest out but would prefer to limit the interpolation to minimize errors at the boundaries.

Link 2

Finally, link 3 shows what I think is the kriging variance, although I can't be sure. When I import the tiff written by writeGDAL there are three bands (using GRASS). The first is the interpolated variable; the next two are a mystery to me. If this is indeed the krig variance then limiting the interpolation based on kriging variance seems like a good idea? What do you think?

Link 3

Many thanks,
Wesley

Wesley Roberts MSc.
Researcher: Earth Observation (Ecosystems)
Natural Resources and the Environment
Tel: +27 (21) 888-2490
Fax: +27 (21) 888-2693

"To know the road ahead, ask those coming back."
- Chinese proverb

>>> Paul Hiemstra <p.hiemstra at geo.uu.nl> 02/16/09 10:29 AM >>>

Hi Wesley,

You could take a look at using a convex hull. I'm not sure if this will fix your problem as we cannot see how exactly your points are irregular. The latest version on my website (0.5-2) uses a convex hull of the data if you don't pass a new_data object. You could try this. The function making the convex hull is:

create_new_data = function(obj) {
  # Function that creates a new_data object if one is missing
  convex_hull = chull(coordinates(obj)[,1], coordinates(obj)[,2])
  convex_hull = c(convex_hull, convex_hull[1]) # Close the polygon
  d = Polygon(obj[convex_hull,])
  new_data = spsample(d, 5000, type = "regular")
  gridded(new_data) = TRUE
}

If you want to call it directly from the package use

Wesley Roberts wrote:
> Dear R-sig-geo'ers
> I am currently running some interpolations using automap written by Paul Hiemstra. So far my interpolations have been producing suitable results except for one problem. From the code you will see that the boundaries of the spatial grid are determined using the range of the X and Y coordinates, creating a square grid. My point data do not cover the entire grid and I would only like to interpolate in areas where data exists, otherwise I get a significant edge effect. Is it possible to limit / mask my interpolation to only predict where data exists?
> The point data are lidar canopy returns for an irregularly shaped timber compartment and number around 10 000 irregularly spaced points.
> Any help on this matter would be greatly appreciated.
> Kind regards,
> Wesley
>
> library(automap)
> library(gstat)
> a <- read.csv("AreaOne_4pts.csv", header=TRUE)
> coordinates(a) <- ~x+y
> x.range <- as.integer(range(a@coords[,1]))
> y.range <- as.integer(range(a@coords[,2]))
> grd <- expand.grid(x=seq(from=x.range[1], to=x.range[2], by=0.1), y=seq(from=y.range[1], to=y.range[2], by=0.1))
> coordinates(grd) <- ~x+y
> gridded(grd) <- TRUE
> height = autoKrige(H~1, a, grd, nmax=100)
> writeGDAL(height$krige_output, fname="test.tiff", drivername ="GTiff", type = "Float32")
> intensity = autoKrige(I~1, a, grd, nmax=100)
> writeGDAL(intensity$krige_output, fname="test.tiff", drivername ="GTiff", type = "Float32")
>
> Wesley Roberts MSc.
> Researcher: Earth Observation (Ecosystems)
> Natural Resources and the Environment
> CSIR
> Tel: +27 (21) 888-2490
> Fax: +27 (21) 888-2693
> "To know the road ahead, ask those coming back."
> - Chinese proverb

Drs. Paul Hiemstra
Department of Physical Geography
Faculty of Geosciences
University of Utrecht
Heidelberglaan 2
P.O. Box 80.115
3508 TC Utrecht
Phone: +31302535773
Fax: +31302531145

This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.
How to illustrate a normal distribution?

You would need to create a set of X,Y coordinates that, when plotted, provide a bell curve, and then graph them. You would probably find it easier to do in Excel. Take a look here.

Herb Tyson MS MVP
Author of the Word 2007 Bible

"alice" wrote in message: I was asking for the concept in general. To take an example, how do I draw a normal distribution with variance 2 and mean 4? The aim is to get smooth, good-looking lines - or smooth lines on a bell (rather than the edgy lines which are the best I can do now).

"JoAnn Paules" wrote: Depends on your definition of "normal distribution".

JoAnn Paules
MVP Microsoft [Publisher]
Tech Editor for "Microsoft Publisher 2007 For Dummies"

"alice" wrote in message: I want to draw a normal distribution in a graph in Word 2007. I am aware of the "insert", "figures" options, but no figure seems to help me in getting a distribution (or a nice picture of a bell). Do you have suggestions?
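Herb's suggestion — generate (x, y) coordinates and chart them — can be sketched for alice's concrete case (mean 4, variance 2). This is an illustrative Python sketch of the coordinate generation rather than a Word/Excel recipe; the same columns of numbers can be pasted into a scatter-with-smooth-lines chart:

```python
# Generates smooth (x, y) points on the normal density with mean 4, variance 2.
import math

mu, var = 4.0, 2.0
sigma = math.sqrt(var)

def pdf(x):
    # Normal density: exp(-(x - mu)^2 / (2 var)) / (sigma * sqrt(2 pi))
    return math.exp(-(x - mu) ** 2 / (2 * var)) / (sigma * math.sqrt(2 * math.pi))

# Sample densely over mu +/- 4 sigma so the plotted line looks smooth, not edgy.
xs = [mu - 4 * sigma + i * (8 * sigma / 200) for i in range(201)]
points = [(x, pdf(x)) for x in xs]

print(points[100])   # the peak of the bell: x near 4, y near 0.282
```

Sampling 200 intervals across ±4 standard deviations is what makes the difference between a smooth bell and the "edgy lines" alice describes.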
Semistability of Nonlinear Impulsive Systems with Delays

Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 260236, 10 pages

Research Article

^1Department of Mathematics, Zhengzhou University, Zhengzhou 450052, China
^2College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China

Received 22 June 2012; Revised 23 August 2012; Accepted 15 September 2012

Academic Editor: Bo Shen

Copyright © 2012 Xiaowu Mu and Yongliang Gao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper is concerned with the stability analysis and semistability theorems for delay impulsive systems having a continuum of equilibria. We relate stability and semistability to the classical concept of system storage functions, extended to impulsive systems, providing a generalized hybrid system energy interpretation in terms of stored energy. We show a set of Lyapunov-based sufficient conditions for establishing these stability properties. These make it possible to deduce properties of the Lyapunov functional and thus lead to sufficient conditions for stability and semistability. Our proposed results are evaluated using an illustrative example to show their effectiveness.

1. Introduction

Due to their numerous applications in various fields of science and engineering, impulsive differential systems have become a large and growing interdisciplinary area of research. In recent years, the issues of stability in impulsive differential equations with time delays have attracted increasing interest in both theoretical research and practical applications [1–9], while difficulties and challenges remain in the area of impulsive differential equations [10], especially those involving time delays [11].
Various mathematical models in the study of biology, population dynamics, ecology, epidemics, and so forth can be expressed by impulsive delay differential equations. These processes and phenomena, for which the adequate mathematical models are impulsive delay differential equations, are characterized by the fact that there is a sudden change of their state and that the processes under consideration depend on their prehistory at each moment of time. In the transmission of impulse information, input delays are often encountered.

Control and synchronization of chaotic systems are considered in [12, 13]. By utilizing impulsive feedback control, all the solutions of the Lorenz chaotic system will converge to an equilibrium point. The application of networked control systems is considered in [14–17], while in [14], when analyzing the asymptotic stability for discrete-time neural networks, the activation functions are not required to be differentiable or strictly monotonic. The existence of the equilibrium point is first proved under mild conditions. By constructing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the discrete-time neural networks to be globally asymptotically stable.

In [18], Razumikhin-type theorems are established which guarantee ISS/iISS for delayed impulsive systems with external input affecting both the continuous dynamics and the discrete dynamics. It is shown that when the delayed continuous dynamics are ISS/iISS but the discrete dynamics governing the impulses are not, the ISS/iISS property of the impulsive system can be retained if the length of the impulsive interval is large enough. Conversely, when the delayed continuous dynamics are not ISS/iISS but the discrete dynamics governing the impulses are, the impulsive system can achieve ISS/iISS.
In [19, 20], the authors consider linear time invariant uncertain sampled-data systems in which there are two sources of uncertainty: the values of the process parameters can be unknown while satisfying a polytopic condition and the sampling intervals can be uncertain and variable. They model such systems as linear impulsive systems and they apply their theorem to the analysis and state-feedback stabilization. They find a positive constant which determines an upper bound on the sampling intervals for which the stability of the closed loop is guaranteed. Population growth and biological systems are considered in [21, 22]. Stochastic systems are considered in [23–25], and so forth. However, the corresponding theory for impulsive systems with time delays having a continuum of equilibria has been relatively less developed. The purpose of this paper is to study the stability and semistability properties for nonlinear delayed impulsive systems with continuum of equilibria. Examples of such systems include mechanical systems having rigid-body modes and isospectral matrix dynamical systems [26]. Such systems also arise in chemical kinetics, compartmental modeling, and adaptive control. Since every neighborhood of a nonisolated equilibrium contains another equilibrium, a nonisolated equilibrium cannot be asymptotically stable. Thus asymptotic stability is not the appropriate notion of stability for systems having a continuum of equilibria. Two notions that are of particular relevance to such systems are convergence and semistability. Convergence is the property whereby every solution converges to a limit point that may depend on the initial condition. Semistability is the additional requirement that all solutions converge to limit points that are Lyapunov stable. More precisely, an equilibrium is semistable if it is Lyapunov stable, and every trajectory starting in a neighborhood of the equilibrium converges to a (possibly different) Lyapunov stable equilibrium. 
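As a concrete, much simpler illustration of these two notions (my own toy example, not one of the paper's systems): the linear exchange system x1' = x2 - x1, x2' = x1 - x2 has the whole line {x1 = x2} as its equilibrium set. Every trajectory converges, and the limit depends on the initial condition (the conserved mean), which is exactly the convergence-plus-semistability picture described above. A minimal forward-Euler sketch:

```python
def simulate(x1, x2, dt=0.01, steps=5000):
    """Forward-Euler integration of x1' = x2 - x1, x2' = x1 - x2.
    Every point with x1 == x2 is an equilibrium (a continuum of them),
    and x1 + x2 is conserved, so the limit depends on the initial state."""
    for _ in range(steps):
        d = x2 - x1
        x1, x2 = x1 + dt * d, x2 - dt * d
    return x1, x2

# Two different initial conditions converge to two *different* Lyapunov
# stable equilibria, each determined by the conserved mean.
a = simulate(1.0, 0.0)   # converges near (0.5, 0.5)
b = simulate(4.0, 0.0)   # converges near (2.0, 2.0)
assert abs(a[0] - a[1]) < 1e-6 and abs(a[0] - 0.5) < 1e-6
assert abs(b[0] - b[1]) < 1e-6 and abs(b[0] - 2.0) < 1e-6
print(a, b)
```

Because every neighborhood of a point on the line {x1 = x2} contains other equilibria, no individual equilibrium here is asymptotically stable, yet each is semistable.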
It can be seen that, for an equilibrium, asymptotic stability implies semistability, while semistability implies Lyapunov stability. We will employ the method of Lyapunov functions for the study of stability and semistability of impulsive systems with time delays. Several stability criteria are established. A set of Lyapunov-based sufficient conditions is provided for the stability criteria; then we extend the notion of stability to develop the concept of semistability for delay impulsive systems. Finally, an example illustrates the effectiveness of our approach.

2. Preliminaries

Let denote the set of positive integer numbers. Let denote the set of piecewise right continuous functions with the norm defined by . For simplicity, define , for . For given , if , then for each , we define by and , respectively. A function is of class if is continuous, strictly increasing, and . For a given scalar , let . Let be an open set and for some . Given functionals , satisfying .

Consider the following nonlinear time-delay impulsive system described by the state equation where is the system state, denotes the right-hand derivative of , and denote the limit from the right and the limit from the left at point , respectively. is the initial time. Here we assume that the solutions of system are right continuous, that is, . is a strictly increasing sequence of impulse times in where .

Definition 2.1. The function is said to be composite-PC if, for each and , and is continuous at each in , then the composite function .

Definition 2.2. The function is said to be quasi-bounded if, for each , , and for each compact set , there exists some , such that for all .

Definition 2.3. The function with is said to be a solution of if:
(i) is continuous at each in ;
(ii) the derivative of exists and is continuous at all but at most a finite number of points in ;
(iii) the right-hand derivative of exists and satisfies (2.1) in , while for each , (2.2) holds;
(iv) Equation (2.3) holds, that is, .
We denote by (or , if it is not confusing) the solution of . is said to be a solution defined on if all the above conditions hold for any . We make the following assumptions on system :
(A1) is composite-PC, quasi-bounded, and locally Lipschitzian in .
(A2) For each fixed , is a continuous function of on .

Under the assumptions above, it was shown in [11] that for any , system admits a solution that exists in a maximal interval ) and the zero solution of the system exists.

Definition 2.4. An equilibrium point of is a point satisfying for all , where is the solution of . Let denote the set of equilibrium points of .

Definition 2.5. Consider the delay impulsive system .
(i) An equilibrium point of is Lyapunov stable if for any there exists , such that implies for all , where is the initial function for . An equilibrium point is uniformly Lyapunov stable if, in addition, the number is independent of .
(ii) An equilibrium point of is semistable if it is Lyapunov stable and there exists an open subset of containing such that for all initial conditions in the trajectory of converges to a Lyapunov stable equilibrium point, that is, , where is a Lyapunov stable equilibrium point.
(iii) System is said to be uniformly asymptotically stable in the sense of Lyapunov with respect to the zero solution if it is uniformly stable and .

Definition 2.6. The function is said to belong to the class if:
(i) is continuous in each of the sets and for each exists;
(ii) is locally Lipschitzian in , and for all , .

Definition 2.7. Let . For any , the upper right-hand derivative of with respect to system is defined by

3. Main Results

In the following, we will establish several sufficient conditions for Lyapunov stability and semistability for impulsive differential systems with time delays.

Theorem 3.1.
System is uniformly stable, and the zero solution of is asymptotically stable, if there exists a Lyapunov function which satisfies the following:
(i) such that
(ii) For any , and , there exists , such that
(iii) There exist a and a subsequence of the impulsive moments such that
(iv) For any , there exists a function , such that
(v) For any , there exists a function such that

Proof. Let be an equilibrium point of the system . We first prove that is uniformly stable, that is, for , there exists such that implies for all . For , let such that For any , by condition (3.4), we get By (3.3), it is clear that is nonincreasing along the subsequence , so we have For any , by (3.5), we get Combining (3.7), (3.8), and (3.9), we conclude that By condition (3.2), for any , we have and then, by (3.10), for any we derive that . Hence, by (3.1) we obtain that . Since , we get which implies that system is uniformly Lyapunov stable.

Next, we will prove that the zero solution of is asymptotically stable. Since system is uniformly stable, from (3.1), there must exist a real number such that . Hence, there exists a such that In the following, we will show that . Without loss of generality, we can suppose that there exists a sequence , such that From (3.3) we get Since , we obtain If the sequence is the same as the sequence , then it is obvious that . If , it follows from the assumptions above that (3.16) holds. Otherwise, we assume that ; there exists a such that . Then from condition (3.5) we get So which implies . Hence, we derive that . Finally, by (3.1), we have which implies that the zero solution of the system is asymptotically stable. The proof is completed.

Next, we present a sufficient condition for semistability for system . Let .

Theorem 3.2. Consider the system ; assume that there exists a nonnegative-definite continuous function such that Let . If every equilibrium point of system is Lyapunov stable, then every point in is semistable.

Proof.
Define It follows from (3.19) and (3.3) that Since is nonnegative, it follows that . Next, we show that as . If it is not true, then there exists and an infinite sequence of times such that . By definition of we have that does not belong to the set of impulsive times . Note that from (3.19), it follows from Proposition 3.1 of [26] that is bounded for all . Hence, it follows from the Lipschitz continuity of that is bounded for all ; thus, is uniformly continuous on. So, there exists such that every is contained in some interval of of length on which . This contradicts . Hence as . It follows that as . Since is bounded, we get (as ). Next, let . For every open neighborhood and , , it follows from Proposition 5.1 of [26] that there exists such that . Since every point in is Lyapunov stable, and hence is a Lyapunov stable equilibrium of , it follows that is semistable. Finally, since is arbitrary, this implies every point in is semistable. The proof is completed.

4. Numerical Example

In this section, we give an example about compartmental systems to illustrate the effectiveness of the proposed method. Compartmental systems involve dynamical models that are characterized by conservation laws (e.g., mass and energy) capturing the exchange of material between coupled macroscopic subsystems known as compartments. Each compartment is assumed to be kinetically homogeneous, that is, any material entering the compartment is instantaneously mixed with the material of the compartment.

Example 4.1. Consider the nonlinear two-compartment time-delay impulsive systems given by where . Let Lyapunov function , then for any we have Let , , , and , then the conditions of Theorem 3.1 are satisfied, which means the equilibrium points of the system are Lyapunov stable, and Let then we derive that ; it follows from Theorem 3.2 that every point in is semistable. The simulation result is depicted in Figure 1, where the length of the impulsive intervals is second and the time delay second.
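The exact formulas of Example 4.1 were lost in extraction, so as a stand-in illustration (my own toy construction, not the paper's system) here is a two-compartment exchange with periodic impulses. Both the continuous flow and the impulse map fix the equilibrium continuum {x1 = x2} and conserve total mass, so every trajectory settles onto a Lyapunov stable point of the continuum:

```python
def simulate_impulsive(x1, x2, dt=0.01, steps=4000, impulse_every=100, jump=0.5):
    """Flow: x1' = x2 - x1, x2' = x1 - x2 (forward Euler).
    Impulse: at every impulse_every-th step, instantly remove a fraction
    `jump` of the difference x1 - x2. Both the flow and the impulse map
    leave the continuum {x1 == x2} fixed and conserve x1 + x2."""
    for k in range(1, steps + 1):
        d = x2 - x1
        x1, x2 = x1 + dt * d, x2 - dt * d
        if k % impulse_every == 0:           # impulse time
            corr = jump * (x1 - x2) / 2
            x1, x2 = x1 - corr, x2 + corr
    return x1, x2

x1, x2 = simulate_impulsive(3.0, 1.0)
assert abs(x1 - x2) < 1e-9          # converged to the equilibrium set
assert abs((x1 + x2) - 4.0) < 1e-9  # total "mass" conserved throughout
print(x1, x2)
```

The limit point ((x1 + x2)/2 in each compartment) depends on the initial condition, which is the semistability behaviour, not asymptotic stability of any single equilibrium.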
The authors would like to thank the Editor and the anonymous reviewers for their valuable comments and suggestions that improved the overall quality of this paper. This work was supported by NSFC Grant 60874006.

References

1. G. Ballinger and X. Liu, "Existence, uniqueness and boundedness results for impulsive delay differential equations," Applicable Analysis, vol. 74, no. 1-2, pp. 71–93, 2000.
2. X. Liu and G. Ballinger, "Existence and continuability of solutions for differential equations with delays and state-dependent impulses," Nonlinear Analysis, vol. 51, no. 4, pp. 633–647, 2002.
3. X. Liu, X. Shen, Y. Zhang, and Q. Wang, "Stability criteria for impulsive systems with time delay and unstable system matrices," IEEE Transactions on Circuits and Systems I, vol. 54, no. 10, pp. 2288–2298, 2007.
4. J. S. Yu, "Stability for nonlinear delay differential equations of unstable type under impulsive perturbations," Applied Mathematics Letters, vol. 14, no. 7, pp. 849–857, 2001.
5. X. Liu and Q. Wang, "The method of Lyapunov functionals and exponential stability of impulsive systems with time delay," Nonlinear Analysis, vol. 66, no. 7, pp. 1465–1484, 2007.
6. Z. Luo and J. Shen, "Stability of impulsive functional differential equations via the Liapunov functional," Applied Mathematics Letters, vol. 22, no. 2, pp. 163–169, 2009.
7. J. Liu, X. Liu, and W. C. Xie, "Input-to-state stability of impulsive and switching hybrid systems with time-delay," Automatica, vol. 47, no. 5, pp. 899–908, 2011.
8. W. Zhu, "Stability analysis of switched impulsive systems with time delays," Nonlinear Analysis: Hybrid Systems, vol. 4, no. 3, pp. 608–617, 2010.
9. Q. Wu, J. Zhou, and L. Xiang, "Global exponential stability of impulsive differential equations with any time delays," Applied Mathematics Letters, vol. 23, no. 2, pp. 143–147, 2010.
10. V. Lakshmikantham, D. D. Baĭnov, and P. S. Simeonov, Theory of Impulsive Differential Equations, vol. 6 of Series in Modern Applied Mathematics, World Scientific, Teaneck, NJ, USA, 1989.
11. G. Ballinger and X. Liu, "Existence and uniqueness results for impulsive delay differential equations," Dynamics of Continuous, Discrete and Impulsive Systems, vol. 5, no. 1–4, pp. 579–591, 1999.
12. C. Li, X. Liao, X. Yang, and T. Huang, "Impulsive stabilization and synchronization of a class of chaotic delay systems," Chaos, vol. 15, no. 4, Article ID 043103, p. 9, 2005.
13. X. Liu, "Impulsive stabilization and control of chaotic system," Nonlinear Analysis, vol. 47, no. 2, pp. 1081–1092, 2001.
14. Y. Liu, Z. Wang, and X. Liu, "Asymptotic stability for neural networks with mixed time-delays: the discrete-time case," Neural Networks, vol. 22, no. 1, pp. 67–74, 2009.
15. Y. K. Li and P. Liu, "Existence and stability of positive periodic solution for BAM neural networks with delays," Mathematical and Computer Modelling, vol. 40, no. 7-8, pp. 757–770, 2004.
16. Z. Yang and D. Xu, "Stability analysis of delay neural networks with impulsive effects," IEEE Transactions on Circuits and Systems II, vol. 52, no. 8, pp. 517–521, 2005.
17. M. Tan and Y. Tan, "Global exponential stability of periodic solution of neural network with variable coefficients and time-varying delays," Applied Mathematical Modelling, vol. 33, no. 1, pp. 373–385, 2009.
18. W. H. Chen and W. X. Zheng, "Input-to-state stability and integral input-to-state stability of nonlinear impulsive systems with delays," Automatica, vol. 45, no. 6, pp. 1481–1488, 2009.
19. P. Naghshtabrizi, J. P. Hespanha, and A. R. Teel, "Exponential stability of impulsive systems with application to uncertain sampled-data systems," Systems & Control Letters, vol. 57, no. 5, pp. 378–385, 2008.
20. N. van de Wouw, P. Naghshtabrizi, M. B. G. Cloosterman, and J. P. Hespanha, "Tracking control for sampled-data systems with uncertain time-varying sampling intervals and delays," International Journal of Robust and Nonlinear Control, vol. 20, no. 4, pp. 387–411, 2010.
21. X. Liu, "Impulsive stabilization and applications to population growth models," The Rocky Mountain Journal of Mathematics, vol. 25, no. 1, pp. 381–395, 1995.
22. X. Liu and K. Rohlf, "Impulsive control of a Lotka-Volterra system," IMA Journal of Mathematical Control and Information, vol. 15, no. 3, pp. 269–284, 1998.
23. Z. Wang, B. Shen, H. Shu, and G. Wei, "Quantized $H_\infty$ control for nonlinear stochastic time-delay systems with missing measurements," IEEE Transactions on Automatic Control, vol. 57, no. 6, pp. 1431–1444, 2012.
24. Z. Wang, B. Shen, and X. Liu, "$H_\infty$ filtering with randomly occurring sensor saturations and missing measurements," Automatica, vol. 48, no. 3, pp. 556–562, 2012.
25. H. Dong, Z. Wang, and H. Gao, "Distributed filtering for a class of time-varying systems over sensor networks with quantization errors and successive packet dropouts," IEEE Transactions on Signal Processing, vol. 60, no. 6, pp. 3164–3173, 2012.
26. S. P. Bhat and D. S. Bernstein, "Nontangency-based Lyapunov tests for convergence and stability in systems having a continuum of equilibria," SIAM Journal on Control and Optimization, vol. 42, no. 5, pp. 1745–1775, 2003.
Integrals Involving Inverse Trig Functions

September 12th 2009, 08:42 PM
Chicken Enchilada

Now for integrals involving inverse trig functions. A couple problems:

$\int \frac {1}{1 + 4x^2}dx$

For this one, I know that $\frac {d}{dx} \arctan {x} = \frac {1}{1+x^2}$, but I don't know what to do with the 4 in there.

$\int \frac {1}{x^2 - 6x + 13}dx$

This one I have no clue.

September 12th 2009, 08:54 PM
Chris L T521

For the first one, note that $\int\frac{\,dx}{4x^2+1}=\int\frac{\,dx}{\left(2x\right)^2+1}$. Then let $u=2x$.

For the second one, note that $\int\frac{\,dx}{x^2-6x+13}=\int\frac{\,dx}{(x^2-6x+9)+4}=\int\frac{\,dx}{(x-3)^2+4}=\tfrac{1}{4}\int\frac{\,dx}{\left(\frac{x-3}{2}\right)^2+1}$. Then let $u=\frac{x-3}{2}$.

Can you take it from here?

September 12th 2009, 09:51 PM
Chicken Enchilada

So for the first one the answer is: $\frac {1}{2} \arctan (2x) + c$, and for the second: $\frac {1}{2} \arctan \left(\frac {x-3}{2}\right) + c$.

Sweet! That was actually very simple. You taught me how to solve these problems in a matter of seconds, whereas my prof couldn't do it in 2 hours. It really helps if you can see the problem being solved unambiguously with every little step shown. I think my prof tends to skip a bunch, and everything comes out muddy to me. I appreciate the help Chris. Math Help Forum rules!
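Both answers are easy to sanity-check numerically (a sketch, not part of the original thread; the function names are mine): differentiating each proposed antiderivative with a central difference should reproduce the original integrand.

```python
import math

# Candidate antiderivatives from the thread (constants of integration omitted).
def F1(x):
    return 0.5 * math.atan(2 * x)

def F2(x):
    return 0.5 * math.atan((x - 3) / 2)

# Original integrands.
def f1(x):
    return 1 / (1 + 4 * x**2)

def f2(x):
    return 1 / (x**2 - 6 * x + 13)

def deriv(F, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [-2.0, -0.5, 0.0, 1.0, 3.7]:
    assert abs(deriv(F1, x) - f1(x)) < 1e-6
    assert abs(deriv(F2, x) - f2(x)) < 1e-6
print("both antiderivatives check out")
```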
Multiset Addition Rule (proof required)

August 30th 2009, 02:54 AM
Junior Member

Show that M(n,r) = M(n, r-1) + M(n-1, r) for n,r >= 1 by a combinatorial argument and by an algebraic argument using M(n,r) = C(n+r-1, r).

What combinatorial argument are they looking for here? Suppose you consider a class of n people, one of whom is David. How many ways can a committee of r people be chosen? Is that number the same as counting the committees having David as a member plus the number of committees not having David as a member? But how does this example (which I'm familiar with from combinations) work for multisets, since they contain identical objects?

Here M(n,r) is the number of multisets (a set where repeated elements are allowed) containing r objects from a set containing n distinct objects, or equivalently the number of ways to place r identical balls in n distinct boxes.
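The identity itself is easy to sanity-check numerically (a sketch I added, with M implemented directly from the C(n+r-1, r) formula the poster cites):

```python
from math import comb

def M(n, r):
    """Number of multisets of size r from n distinct objects: C(n+r-1, r)."""
    return comb(n + r - 1, r)

# Check the addition rule M(n,r) = M(n, r-1) + M(n-1, r) over a small grid.
for n in range(1, 12):
    for r in range(1, 12):
        assert M(n, r) == M(n, r - 1) + M(n - 1, r)
print("identity holds for 1 <= n, r <= 11")
```

The combinatorial analogue of the "David" argument: fix one of the n object types. Size-r multisets that avoid it are counted by M(n-1, r); those containing at least one copy correspond, by removing one copy, to M(n, r-1). Algebraically the identity is just Pascal's rule applied to C(n+r-1, r).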
2 cos theta = 1, find all solutions
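Only the question survives in this fragment; a standard worked solution (my own answer, not from the original page):

```latex
2\cos\theta = 1 \;\Rightarrow\; \cos\theta = \tfrac{1}{2}
\;\Rightarrow\; \theta = \frac{\pi}{3} + 2\pi k \quad\text{or}\quad \theta = -\frac{\pi}{3} + 2\pi k, \qquad k \in \mathbb{Z}.
```

On the restricted interval $[0, 2\pi)$ the solutions are $\theta = \pi/3$ and $\theta = 5\pi/3$.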
MathGroup Archive: October 2012 [00120]

Re: Integrating over 3D vector

• To: mathgroup at smc.vnet.net
• Subject: [mg128384] Re: Integrating over 3D vector
• From: pw <p.willis at telus.net>
• Date: Thu, 11 Oct 2012 02:10:28 -0400 (EDT)

I see what you are getting at. You have demonstrated the vector direction and force along the 3D wire path according to Biot-Savart. I took the liberty of rewriting the code sections from the notebook you sent so that I could rearrange things and enter (iterate) more numbers. This is very close to what I was looking for. I have not had much chance to do much more than look at your notes and rearrange your code into programmatic space. I will attempt to rearrange the code to allow me to make a volumetric plot with a series of fixed distances from the wire. What I hope is that I can ultimately step my way along equally spaced positions along the (any) path and iterate through a series of radial distances from those.

On 10/09/2012 05:09 PM, mathgroup wrote:
> Here are 2 examples of an odd wire shape for 2D and 3D geometries......
> I used the parameter 's' to define x,y,z of the wire......and then
> performed an integration over the variable 's'.....assumed CCW current....
> here are the results...I tested them with several locations along the x
> and y axes that would tell me if my results were correct and they seem
> correct.....I made the H vectors larger than the actual values to make
> it easier to see the vectors.......unfortunately, I had to use
> NIntegrate because dH was too complicated....
> jerry b.....
> -----Original Message----- From: pw
> Sent: Monday, October 08, 2012 11:36 PM
> To: mathgroup at smc.vnet.net
> Subject: Re: Integrating over 3D vector
> Hello,
> The attachment is interesting in the linearity of the procedure
> used to solve the problem.
> I am, however, thinking that it may be more efficient to simply
> calculate a linear integral over the full calculated length
> (straight length) of the wire, and then simply transform the
> results into the direction and 3D position of each point
> on the wire. Then simply perform integration of the 3D
> results by position within the volume of the winding space.
> Peter
> On 10/06/2012 08:52 AM, mathgroup wrote:
>> I don't know your background, so don't know if this is of value to you
>> or not....
>> here are 2 examples of biot savart calculations for a triangle and a
>> loop.....it's interesting that, unfortunately, in each case I had to use
>> Integrate with GenerateConditions->False, otherwise it took forever to
>> do the integrals...this often happens with definite integrals...
>> jerry b
>> -----Original Message----- From: pw
>> Sent: Friday, October 05, 2012 1:48 AM
>> To: mathgroup at smc.vnet.net
>> Subject: Re: Integrating over 3D vector
>> Hello,
>> I think I need to do a line integral like NIntegrate[].
>> I think I may just need to collect single position integrals
>> and then transform the integrals using the 3D coordinates...
>> Thank you for mentioning the Classroom Assistant Palette,
>> that was very useful.
>> Current linux distro is Ubuntu PP. Mathematica distro is 8.04.
>> I am looking to integrate magnetic flux along a series of curved
>> windings.
>> Peter
>> On 10/04/2012 02:33 PM, mathgroup wrote:
>>> several questions....
>>> 1. of course, you can nest integrals....I personally use the Classroom
>>> Assistant palette and put together the number of integrals I want....
>>> 2. sometimes getting vector potential A is easier....etc..
>>> 3. could you tell me exactly what the current distribution is.....I'm
>>> assuming from what you say that there isn't any symmetry to break it
>>> down into one integral only, e.g. circular loop....
>>> If you are interested, I can send you a double integral example to
>>> calculate the force between a wire along the z axis and a square loop
>>> positioned away from it....this involves double integrals....
>>> jerry blimbaum
>>> -----Original Message----- From: pw
>>> Sent: Wednesday, October 03, 2012 10:39 PM
>>> To: mathgroup at smc.vnet.net
>>> Subject: Integrating over 3D vector
>>> Hello,
>>> Biot-Savart Law is used to calculate the magnetic field strength
>>> at some vector location relative to the path of a conductor.
>>> The function of the law requires integration along the path of the
>>> current in 3 dimensions which indicates 3D displacement.
>>> QUESTION: Is it possible to 'nest' a series of integrals
>>> that follow the series of XYZ coordinates.
>>> ie: Integrate[{x1,x2,x3},
>>> Integrate[{y1,y2,y3},
>>> Integrate[{z1,z2,z3},
>>> ....
>>> ]
>>> ]
>>> ];
>>> Thanks for any interest,
>>> Peter
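The notebooks the thread exchanges are not in the archive text, so here is a language-neutral cross-check of the underlying calculation (a Python sketch, not Mathematica, and not from the thread; all names are mine): numerically summing the Biot-Savart contributions dB = (mu0 I / 4 pi) dl x r / |r|^3 around a circular loop should reproduce the textbook field mu0 I / (2R) at the loop's center.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def biot_savart_loop(point, R=1.0, I=1.0, n=2000):
    """Sum dB = (mu0*I/4pi) * dl x r / |r|^3 over n segments of a
    circular loop of radius R in the xy-plane, centered at the origin,
    carrying counterclockwise current I."""
    bx = by = bz = 0.0
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        # segment midpoint and tangent element dl
        sx, sy = R * math.cos(t), R * math.sin(t)
        dlx = -R * math.sin(t) * (2 * math.pi / n)
        dly = R * math.cos(t) * (2 * math.pi / n)
        dlz = 0.0
        # vector from segment to field point
        rx, ry, rz = point[0] - sx, point[1] - sy, point[2] - 0.0
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        # dl x r
        cx = dly * rz - dlz * ry
        cy = dlz * rx - dlx * rz
        cz = dlx * ry - dly * rx
        coef = MU0 * I / (4 * math.pi * r3)
        bx += coef * cx
        by += coef * cy
        bz += coef * cz
    return bx, by, bz

# Field at the loop center: analytic result is mu0*I/(2R), along +z.
bx, by, bz = biot_savart_loop((0.0, 0.0, 0.0))
analytic = MU0 * 1.0 / (2 * 1.0)
assert abs(bz - analytic) < 1e-12
print(bz, analytic)
```

The same segment-summing loop works for the odd wire shapes discussed above by replacing the circular parametrization with any x(s), y(s), z(s), which is essentially the one-parameter integration over 's' that jerry describes, done numerically.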
Online games and resources for exponents

This is an annotated and hand-picked list of online games and resources related to exponents. I have tried to gather a variety of resources, and have personally hand-picked each website, to make sure it is truly useful for my site visitors!

Exponents and powers

Free exponent worksheets
Create a variety of customizable, printable worksheets to practice exponents.

Baseball Exponents
Choose the right answer from three possibilities before the pitched ball comes.

Exponents Quiz from ThatQuiz.org
Ten questions, fairly easy, and not timed. You can change the parameters as you like to include negative bases, square roots, and even logarithms.

Exponents Jeopardy
The question categories include evaluating exponents, equations with exponents, and exponents with fractional bases.

Pyramid Math
Simple practice of either exponents, roots, LCM, or GCF. Drag the triangle with the right answer to the vase.

Exponents Battleship
A regular battleship game against the computer. Each time you "hit", you need to answer a math problem involving exponents (and multiplication).

Exponent Battle
A card game to practice exponents. I would limit the cards to small numbers, instead of using the whole deck.

Pirates Board Game
Steer your boat in pirate waters in this online board game, and evaluate powers.
Posts about 2007 on Roy MacLean's VBA Blog

Pivot Table Calculated Column – 2

Published October 5, 2010 | Excel | 1 Comment | Tags: 2007, Excel, pivot

Thanks to Dick Kusleika and Andy Pope for offering solutions to my pivot table problem: how to have a column that shows the difference between Max and Min aggregations (or other 'custom' aggregations).

Dick's solution is to have an additional column in the source data: effectively calculating Max, Min and MaxMinDiff before aggregation in the pivot table (so the Max and Min are not done in the pivot table). I have actually added three columns, to break the formula down: the Max and Min columns are just intermediates. The Max and Min values are within Month: since the records are sorted by Month, this can be seen easily.

Here's the pivot table:

In the pivot table, the Max and Min columns are normal aggregations of the Sales column in the data: they are not derived from the Max and Min columns in the data. However, the MaxMinDiff column in the pivot table is derived from the Diff column in the data. Since all the values underlying a MaxMinDiff cell have the same value (e.g. Data H2:H6 underlies Pivot Table D3), we can use any of the non-summing aggregation functions: Max, Min, even Average. I changed the label to MaxMinDiff, because "Max of Diff" looks weird.

Now the formulas in the additional data columns. The Max and Min formulas are single-cell array formulas:

{=MAX([Sales] * ([Month] = SalesTable[[#This Row],[Month]]))}

{=MIN(IF([Month] = SalesTable[[#This Row],[Month]],[Sales],""))}

The MAX formula is multiplying the Sales column by an array of Booleans, where the latter come from testing the Month column against this row's Month. I do like the table-range addressing, although I don't see why a formula within SalesTable needs to refer to SalesTable. The resulting array contains Sales figures for a specific month, and zeroes for the other months; this is then MAXed.
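For readers outside Excel, the per-record Max/Min-within-Month columns these formulas build can be sketched in plain Python (my own made-up sample data, not the post's worksheet):

```python
from collections import defaultdict

# Made-up (Month, Sales) records standing in for the SalesTable data.
records = [("Jan", 10), ("Jan", 4), ("Jan", 7),
           ("Feb", 3), ("Feb", 9),
           ("Mar", 5), ("Mar", 5)]

by_month = defaultdict(list)
for month, sales in records:
    by_month[month].append(sales)

# Per-record columns, as in the worksheet: Max, Min and Diff within the
# record's own Month (every record of a month carries the same Diff value).
rows = [(month, sales,
         max(by_month[month]), min(by_month[month]),
         max(by_month[month]) - min(by_month[month]))
        for month, sales in records]

for month, sales, mx, mn, diff in rows:
    print(month, sales, mx, mn, diff)

assert rows[0][2:] == (10, 4, 6)   # Jan: Max 10, Min 4, Diff 6
assert rows[3][2:] == (9, 3, 6)    # Feb: Max 9, Min 3, Diff 6
assert rows[5][2:] == (5, 5, 0)    # Mar: Max 5, Min 5, Diff 0
```

Because every record of a month carries the same Diff value, any non-summing aggregation over it (Max, Min, Average) in a pivot yields that value, which is exactly the trick the post uses.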
The MIN formula is slightly trickier, because we can not leave the false-case zeroes, as these would of course be the minimum value. However, MIN of a number and an empty value is the number, so “” is substituted for false/zero. The Diff column formula is then just a simple difference: =SalesTable[[#This Row],[Max]]-SalesTable[[#This Row],[Min]] (Again, I don’t see why the syntax requires the reference to SalesTable). This all works fine, provided that Month is the only visible dimension in our pivot table. This is because the data Max and Min values are within Month, as is obvious from the formulas. However, we could equally well want to see MaxMinDiff for values within Region or within Sales Rep. This would necessitate further columns, or changing the formulas to test a different dimension (e.g. Region). Andy Pope suggested a VBA solution, handling the Worksheet_PivotTableUpdate event: Private Sub Worksheet_PivotTableUpdate(ByVal Target As PivotTable) With Target.DataBodyRange With .Columns(.Columns.Count) .Offset(-1, 1).Resize(1, 1) = "MaxMinDiff" .Offset(, 1).FormulaR1C1 = "=RC[-2]-RC[-1]" End With End With End Sub This certainly inserts a column of formulas, but it is next to, rather than part of the pivot table. Consequently, it does not adapt to changes in the pivot table layout. Also, it requires the Max and Min columns to be in the pivot table (I guess you could live with that). On a completely different tack, a PivotCache can be based on a Recordset (typically sourced by a SELECT query on some data). I wonder if it’s possible to manipulate such a Recordset to add columns… ‘Database’ Functions in Excel Published September 21, 2010 Excel 1 Comment Tags: 2007, Excel, Formulas, VBA In Excel there is a category of functions called “Database”. This has always been a confusing term, as now (in 2007+) they relate to Tables (a.k.a. ListObjects, in VBA), or equivalent Ranges. The function names all start with ‘D’ (for database): DCOUNT, DSUM, etc. 
These are equivalents of the standard aggregation functions COUNT, SUM, etc (by aggregation, I mean a function that takes a set of values, and returns a single value). What these D-functions allow is selective aggregation of data from a table, given a set of criteria – in other words, the combination of an advanced filter with an aggregation, without the need for the filtered data to reside on a worksheet. Here’s an example:

The simple case is where the criteria are directly related to the data in the table. So, to count Bob’s records, the criteria range is B2:B3, and the DCOUNT formula is in B5: The reference to “SalesTable[[#Headers],[Contacts]]” is just because I clicked on D9 – I could have just put “Contacts” (but I hate typing :-)). Note that the blank headers in row 2 are required as part of the criteria range (row 1 is just labels). Unsurprisingly, there are 12 records – one for each month of the year. However, this kind of subtotalling by ‘dimension member’ (to use the OLAP term) is what pivot tables do.

More interesting is when the criteria involve a Boolean-returning formula applied to each record. For example, suppose that we want to know how many reps had more than the average number of contacts (yes, I know it will be about half). So in D7 we have: and in D3: The criterion is a formula referring to the first data record, which returns a Boolean. I think of this formula as being filled down through the table records, as if in an additional column (note the absolute/relative addresses).

Even more interesting is when the criteria involve functions on multiple fields in a record. For example, suppose that we are interested in records where Contacts2 is greater than Contacts - these are the guys who are improving. So in E3, we have: (again, referring relatively to the first record). This is fine, but more generally, our Boolean function operates on the entire record.
So in F3, we have: where TestRecord is a pure VBA function:

Public Function TestRecord(rec As Range) As Boolean
    TestRecord = rec.Item(4) < rec.Item(5)
End Function

Clearly, the body of this function can be as complicated as we wish, using the cells in rec. However, it depends on a particular ordering of the columns within the table. It is possible that we would want to use this function on tables that have the Contacts columns in different positions. So, an improved version intersects the named columns with the given rec.

Public Function TestRecord2(rec As Range) As Boolean
    Dim table As ListObject
    Set table = rec.ListObject
    Dim arg1 As Range
    Set arg1 = Intersect(table.ListColumns("Contacts").Range, _
        rec)
    Dim arg2 As Range
    Set arg2 = Intersect(table.ListColumns("Contacts2").Range, _
        rec)
    TestRecord2 = arg1.Value < arg2.Value
End Function

Since the supplied Range is the first record in the table, we could simply pass the Table name to the function and derive from that the Range for the first record (but it’s getting late…). Constraints on the Database functions are:
• the criteria have to be Ranges (and thus on a Worksheet), not in-code arrays
• they have to be vertically-oriented, contiguous Ranges (so can not be filled down).
A somewhat more esoteric limitation is that you can not plug a custom aggregation function into the basic D-function mechanism – DWEIRDSUM, perhaps.

Navigating Part Relationships – 2
Published August 16, 2010 VBA Projects Leave a Comment Tags: 2007, Excel, Software, User Interface, VBA

In the previous post, I introduced the idea of a cursor object that allows you to navigate around a graph of component-part relationships: Navigation could be:
• down the ‘contains’ relationships
• along the ‘used in’ relationships
• back through the history of visited records.
We’ll also have a Reset operation, which jumps back to the start of the table, and clears the history. The navigation is done using keyboard shortcuts (but could be done via a form).
The core of the design is a Class Module GraphCursor. This provides our four operations: CursorDown, CursorNextUse, CursorBack and CursorReset. When an instance of this class initializes, it points itself at ListObjects(1) on the ActiveSheet (there is only one sheet, to keep things simple), and does a CursorReset. A GraphCursor maintains a history of visited components using a linked List class (a simple chain of ListItem objects – nothing to do with ListObject a.k.a. Table). CursorDown and CursorNextUse use Range.Find with the currently selected cell value. I assume this is pretty efficient – and in any case is neater in code terms than iterating through rows explicitly. The Range for CursorDown is just the first column (Component); the Range for CursorNextUse is the Part columns below the row of the current selection.

Something needs to create and hold on to an instance of GraphCursor – this is a standard module Client. This also provides procedures that are called via the keyboard shortcuts.

Public gc As New GraphCursor

Sub GCDown()
    gc.CursorDown
End Sub
'similarly for the other three operations

The keyboard shortcuts are set up on Workbook_Open:

Private Sub Workbook_Open()
    Application.OnKey "^+d", "GCDown"
    Application.OnKey "^+n", "GCNextUse"
    Application.OnKey "^+r", "GCReset"
    Application.OnKey "^+b", "GCBack"
End Sub

Here’s the workbook (Excel 2007). Since each navigation step is worked out dynamically, we can insert or delete records from our table as we like. This would not be the case for an indexed solution (maintaining a map of Component to row number). You could argue that each Component-Part relationship should be a separate record – for example, [A, B], [A, C]. This would allow us to associate quantities or other information with each relationship. In this case, we would also need a CursorNextPart operation.
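The linked List class mentioned above isn't shown in the post; a minimal sketch of what it might look like (the member names Push, Pop, Value and NextItem are my assumptions, not taken from the workbook):

```vb
' --- ListItem class module: one node in the history chain ---
Public Value As Variant      ' e.g. the visited component's name
Public NextItem As ListItem  ' Nothing at the end of the chain

' --- List class module: a simple stack over ListItems ---
Private head As ListItem

Public Sub Push(v As Variant)
    Dim item As New ListItem
    item.Value = v
    Set item.NextItem = head
    Set head = item
End Sub

Public Function Pop() As Variant
    If head Is Nothing Then Exit Function   ' empty history: returns Empty
    Pop = head.Value
    Set head = head.NextItem
End Function
```

On this sketch, CursorDown and CursorNextUse would Push the current component before moving, CursorBack would Pop and re-select it, and CursorReset would discard the chain by setting head to Nothing.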
Sub-sequence Iterator 3
Published June 8, 2010 VBA Projects 2 Comments Tags: 2007, Excel, Software, VBA

In the previous post, we were looking at an iterator object that binds to successive sub-sequences of records in a table, according to a StartCondition and an EndCondition. In that example, the sub-sequences were contiguous, non-overlapping, and so could be summarized to give sub-totals in a very flexible way. A slightly different example is where the start and end records for each sub-sequence are all intermixed. Here’s a time-ordered sequence of start and finish actions, for some identified entities: You can see that B and C both start and finish before A finishes. Suppose that we want to find out the duration of each activity. We then need to find matching start and finish records, and calculate the difference between the two dates. In this case, the sub-sequence is the ‘lifetime’ of the entity. Although we are not interested in anything other than the start and finish for an entity, it’s conceivable in other examples that we might be interested in the intermediate values. In the previous example, the sub-sequences were non-overlapping, so the MoveNext operation started looking for a new StartRow immediately after the old EndRow. Here, the sub-sequences can overlap, so the MoveNext operation starts looking for a new StartRow immediately after the old StartRow. At present, each entity is assumed to have a finish record. StartCondition simply looks for a ‘start’ record; when it finds one, it records the Id. EndCondition looks for a ‘finish’ record with the current Id.
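A sketch of how those two conditions might look in code, assuming the Id and Action column names from the table above, and assuming the class keeps the table in a private member mTable and the current Id in mId (both member names are mine):

```vb
Private Function StartCondition(row As Long) As Boolean
    If mTable.ListColumns("Action").DataBodyRange(row).Value = "start" Then
        mId = mTable.ListColumns("Id").DataBodyRange(row).Value
        StartCondition = True
    End If
End Function

Private Function EndCondition(row As Long) As Boolean
    EndCondition = _
        (mTable.ListColumns("Action").DataBodyRange(row).Value = "finish") And _
        (mTable.ListColumns("Id").DataBodyRange(row).Value = mId)
End Function
```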
The client code creates an iterator on the Actions table (see above), moves it through the table, calculates the duration for each entity (finish – start + 1), and writes a summary record: Here’s the client procedure:

Sub Run()
    Dim ssi As SSI2
    Set ssi = New SSI2
    With ssi
        .WsName = "Actions" 'Worksheet
        .TableName = "ActionTable"
        .KeyName1 = "Id" 'Key column
        .KeyName2 = "Action" 'there are 2 key columns
        .ValName = "Date" 'Value column
    End With
    Worksheets("Summary2").Activate 'for the output
    Dim row As Long
    row = 1
    ssi.MoveNext 'to first sub-sequence
    Do Until ssi.IsAfter
        Range("A" & row).Value = ssi.Label
        Range("B" & row).Value = Summarized(ssi.values)
        row = row + 1
        ssi.MoveNext
    Loop
End Sub

Private Function Summarized(values As Variant) As Variant
    'pre: IsArray(values)
    Dim last As Long
    last = UBound(values)
    Summarized = values(last) - values(1) + 1
End Function

I think this is quite an interesting example, as it’s not obvious to me how you would do it with formulas, even array ones. At present there are two different classes for the two examples (non-overlapping and overlapping) – hence SSI2, above. Maybe they should both implement a common interface. In practice, though, I think you’d know which flavour you needed. Here’s the workbook with both examples, if you want to try it out.

Sub-sequence Iterator 2
Published June 1, 2010 VBA Projects 1 Comment Tags: 2007, Excel, Software, VBA

Following on from the previous post, I’ve had a go at implementing a Sub-sequence Iterator (SSI) class, which operates on a named Table (a 2007 table, a.k.a. ListObject). There are two variants, depending on whether the sub-sequences can overlap or not. The first case is similar to the earlier data partitioning, where we want to break our table rows into contiguous, non-overlapping sub-sequences, and perform some summarizing operation over the values – for example, subtotalling. Here’s the data table: The Date column is our ‘key’, by which we work out the sub-sequences.
In this case it’s just a weekly incrementing date. The Value column contains the data that we want to summarize. For simplicity, let’s say that we want to summarize by month – although it could be something more interesting. Our SSI object has a MoveNext operation, called by some client code, which makes it iterate through the sub-sequences. For each sub-sequence, the SSI makes available to the client code: • a Label that can be used to identify each sub-sequence • an array of Values for the sub-sequence. The client code can then pass the Values to a function Summarized, which in this case calculates a (sub)total. The client code then writes the Label and the Summarized value to another worksheet: Here, the Label is just “Month ” prepended to the month number (January -> 1, etc). Now, you might be wondering why we don’t just add a Month column to our table, and generate a pivot table, aggregating by month. Firstly, we might want to partition our dates in many different ways: by month, quarter, Mayan Lunar Year, and so on. It would be cumbersome to have to add a column, with appropriate values, for each of these partitionings. Indeed, our table might be linked to some external data source which has only the raw data. Secondly, a particular partitioning might depend on the data values themselves (for example, a negative value terminates a sub-sequence), or some dynamic value, such as today’s date. To customize the SSI class for a particular table, we need to write: • Function StartCondition(row As Long) As Boolean • Function EndCondition(row As Long) As Boolean • Function Label() As String In this example, StartCondition and EndCondition are both looking for changes in Month. When the SSI finds new rows satisfying these conditions, it sets StartRow and EndRow, which then delimit a new array of Values. I’ll talk about the second variant – where we can have overlapping subsequences – in the next post. 
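The MoveNext operation described above might be sketched like this for the non-overlapping case (the member names mStartRow, mEndRow, mLastRow and mIsAfter are my assumptions; for the overlapping variant the first scan would begin at mStartRow + 1 instead):

```vb
Public Sub MoveNext()
    Dim row As Long
    ' Look for a new StartRow immediately after the old EndRow.
    row = mEndRow + 1
    Do While row <= mLastRow
        If StartCondition(row) Then Exit Do
        row = row + 1
    Loop
    If row > mLastRow Then
        mIsAfter = True     ' no more sub-sequences
        Exit Sub
    End If
    mStartRow = row
    ' Scan on for the matching EndRow.
    Do While row < mLastRow
        If EndCondition(row) Then Exit Do
        row = row + 1
    Loop
    mEndRow = row           ' StartRow..EndRow delimit the Values array
End Sub
```

(The explicit If/Exit Do structure avoids VBA's lack of short-circuit evaluation, so the conditions are never tested beyond the last row.)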
Worksheet Hierarchies – part 2 Published April 22, 2010 VBA Projects 2 Comments Tags: 2007, Excel, VBA In Part 1, I was toying with the idea of worksheets that are hierarchically related. In this example, the hierarchy reflects geographical areas: Country, Area, City. Data is held by the leaf-node worksheets, and aggregated by the higher-level worksheets. Using the INDIRECT function means that the aggregation formulas are generic, and so can be used as-is from a template worksheet. This is clearly pivot table territory, so what’s the difference with this approach? Firstly, the scenario I have in mind is where you are collating worksheets sent to you in separate workbooks (for example, the London office sends you London.xlsx, containing a single London worksheet). You have already developed a bit of VBA to copy each of the worksheets into a single collated workbook (UK.xlsx, say). However, the incoming worksheets are effectively reports, and might contain information in addition to the raw Sales and Costs figures. So I can’t immediately collate the Sales/Costs information into a single table: if I need to, that’s a subsequent step (see later). Secondly, each leaf-node worksheet only needs to know about its parent node. For example, London knows it’s in the South area, but not about its place in the overall hierarchy. Similarly, a non-leaf node knows only about its parent and its children. This makes it easy to change the hierarchy – for example, breaking London into Central London and Outer London. In contrast, if the basic data were in table form, the levels of the hierarchy would need to be explicit, as columns: If I decided to break London into Central and Outer zones, then I’d need an additional ‘Zone’ column, with values (a lot ‘n/a’) for each record. 
However, the tabular data is clearly useful for generating pivot table summaries – here’s one at the Area level: or with multiple fields on the Row axis, to get a hierarchical summary: So, I’m thinking that it would be useful to have some VBA to do the second stage of collation, of the leaf-node Sales/Costs data into a single table. This sounds like it will involve a tree walk around the worksheet hierarchy. Perhaps in two passes: once to find out the maximum depth (and thus the columns of the table), and again to copy the data into the table.

Worksheet Hierarchies
Published April 13, 2010 VBA Projects Leave a Comment Tags: 2007, Excel, Formulas

Back from the Easter break now… It occurred to me that a common requirement is for a workbook to reflect a hierarchical structure – for example:
• divisions/units within an organization
• geographical areas (country, region, city)
• product/part breakdown
• reporting period (year, quarter, month).
An obvious approach is to have a worksheet for each node in the hierarchy, with the leaf nodes holding the data, and the non-leaf nodes aggregating the data using formulas. (Note that I’m only considering single-dimension aggregation here, not hypercubes). So I thought it would be useful to have a template worksheet that can be copied for each node in a hierarchy, and linked in to the hierarchy using the worksheet names. The aggregation formulas are already in place, and pick up the names of child nodes. And to aid navigation, we can have some hyperlinks to root, parent and child nodes. My example is the start of a geographical hierarchy: UK is the root; London and Brighton are in the South region. Each node worksheet has been copied from Template. Here’s the UK sheet: The child nodes (i.e. the regions) are represented in a table, with the names entered in the Region column. The Sales and Costs columns pick up these names, to get the Sales/Costs values from the child worksheets.
So, for example, cells E9 and F9 contain the formulas:

=INDIRECT($D9 & "!Sales")
=INDIRECT($D9 & "!Costs")

The UK node’s own Sales and Costs values are just totals of the relevant columns: The Sales/Costs cells are named ranges, needless to say. A leaf node worksheet, such as London, has an empty child table, and literal values for Sales and Costs: I’ve left the formulas in the table in case this node gets decomposed further, but it is in practice empty. Here’s an intermediate node, for South region: There are links to the root node (UK), the parent node (also UK here) and the child nodes. Each of these is a HYPERLINK formula that picks up the relevant node (i.e. worksheet) name. For example, C2 contains the formula:

=HYPERLINK("[" & Root & ".xlsm]" & B2 & "!$A$1", "link")

Note that HYPERLINK needs a filename, even if the link is internal to this workbook. By convention, the workbook has the same name as the root node. So ‘Root’ is just a range name for B1 on the UK worksheet. The links all go to A1 on the target sheet. Similarly, G9 contains:

=HYPERLINK("[" & Root & ".xlsm]" & D9 & "!$A$1", "link")

So, to create a hierarchy node, you need to:
• copy the Template worksheet
• set Parent, Name, and Children (if any)
• set Sales and Costs values for leaf nodes.
No VBA so far, but I think it would be useful to build:
• a hyperlinked Table of Contents just to the leaf nodes – that is, where the data entry happens
• hierarchical summaries (i.e. each one on a single worksheet), for individual data categories (e.g. Sales).
More in due course.

Table Subtotals
Published March 28, 2010 Excel 2 Comments Tags: 2007, Excel, Formulas

The built-in Subtotals facility (now on the Data tab) has never been particularly useful. Inserting subtotal rows into your data rows breaks one of the commandments of Excel: Thou shalt not sully the purity of Thy Data. Not only does the data have to be sorted prior to subtotalling, you can not then re-sort or filter the data.
Furthermore, with 2007, you can not use subtotals with Tables, which as I’ve mentioned previously, are rather useful. So what are the alternatives? One way is to use an array formula as a calculated filter. Here’s an example: As you can see, it’s an old workbook (2005), which I’ve ‘upgraded’. To get the subtotals by month, you compare the Date column with the two boundary values, and multiply with the Value column. D2 contains the following single-cell array formula: Obviously, Table1 should have a more meaningful name. As you might already know, we can’t use AND here, as this collapses the tests to a single Boolean value, rather than a column-sized array of Booleans. The trick is to treat the Booleans as 0/1 values and multiply them. So for row 2, we’ll have (1 * 1 * 54). The formula then Fills Right. We could equally well have the subtotals vertically, and Fill Down. If you don’t fancy the array formulas, here’s an alternative: Here, I’ve added a Month column, to make the filtering simpler. B2, for example, contains: =MONTH(Table2[[#This Row],[Date]]) Each of the subtotals is a DSUM formula, using the cells above as the criteria for the filter. E3 contains: Note that we need the [#All] accessor, otherwise the formula does not pick up the header row. Note also that the column name is a string, not a name, and is thus in quotes. The formula Fills Right. A limitation is that the criteria ranges have to be vertical – that is, column label above criterion – so we can’t have the subtotals arranged vertically. However, for presentation purposes, we could use an array function to transpose the range from 3R x 4C to 4R x 3C: Although we were using the DSUM variant to avoid array formulas, TRANSPOSE is easy enough to understand. DIY Scenarios Published March 18, 2010 Excel 3 Comments Tags: 2007, Excel, Formulas I’ve always thought that the built-in Scenario Manager is a bit feeble. 
In particular, the constituent values of a scenario should be visible on a worksheet, not squirrelled away in the SM. It’s really quite straightforward to do it yourself, especially with 2007 Tables. Here’s a very simple example. This is a ‘Model’ worksheet, showing two different scenarios: There are three values in our scenarios, which are dropped into D2, D3, D4. These cells are named A, B, C respectively. Output (F2) is a suitably complicated formula that uses A, B and C – obviously, there could be lots of other formulas dependent on the scenario values, making up a complex model. B1 is named Scenario. Each scenario is in a Table on a separate worksheet, with the Tables named Scenario1, Scenario2, etc: Back on the Model worksheet, D2, D3, D4 each contain the formula:

=INDIRECT("Scenario" & Scenario & "[[#This Row],[value]]")

The [#This Row] accessor requires the formula to be on the same row as the corresponding row in the scenario tables (even though on different worksheets). If this is an issue, then D2:D4 could contain the array formula:

{=INDIRECT("Scenario" & Scenario & "[value]")}

since this works anywhere with respect to the scenario tables. In fact, you don’t really need tables – you could just use named ranges on the scenario worksheets – but it saves (re)defining names manually. The Set buttons on the scenario worksheets are just a convenience, so you can inspect a scenario and then make it the current one, without having to remember which number it is. Here’s the button code for the Scenario1 button:

Private Sub SetCommand1_Click()
    Worksheets("Model").Range("Scenario").Value = _
        Right(Me.Name, 1)
End Sub

And similarly for the other buttons. Note that the button names have to be unique within the workbook.

Excel COUNTIFS Function
Published March 12, 2010 Excel 3 Comments Tags: 2007, Excel, Formulas

My wife, Liz, is a Business Intelligence (i.e.
mega-database reporting) consultant, and came up with a problem to which I offered to produce an Excel (non-VBA) solution. Since the problem is of quite wide applicability, I’ll describe it here. I also came across an Excel function that was new to me.

The problem is this. Suppose that you have a table of employee records, with Sex and Salary. For entirely proper reasons you want to find out the proportion of females in the top n% of salaries. (The n% is of employees, not salary range). However, some employees just outside the n% will have salaries the same as the lowest-paid members of the n%, and so need to be included, otherwise we’d lose the former from the proportion calculation. Here’s a simple example with 10 employees: Obviously, with so few records, the percentage of top-earners we want to see has to be large – typically, we’d be looking at 1% to 10%. Here we want to see the female proportion for the top 25% of salaries. (The data is shown here sorted, for clarity, but does not need to be so). 25% takes us ‘half way down’ employee #3, who clearly needs to be included. But we also want to include employee #4, who earns the same. In this top-four, we have 1 female, so the proportion is 25%. The formulas are as follows:

idcount: =ROWS(SalaryTable)
pcount: =CEILING(idcount * percentile,1)
cutoffsalary: =LARGE(SalaryTable[salary],pcount)
mcount: =COUNTIFS(SalaryTable[sex],"=m", SalaryTable[salary],">=" & cutoffsalary)
fcount: =COUNTIFS(SalaryTable[sex],"=f", SalaryTable[salary],">=" & cutoffsalary)
fpercent: =fcount / (fcount + mcount)

The COUNTIFS function was new in 2007, and is new to me. You can have up to 127 range/criterion pairs. The ranges must have the same dimensions (i.e. column length, here). The criteria are evaluated on a cell-by-cell basis (i.e. by row, here), and ANDed together. Note also the use of the LARGE function, to get the ith largest salary, for i = pcount.
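The tie-inclusion logic can be checked with a small VBA sketch on hypothetical data: ten salaries in which the 3rd and 4th highest are equal, echoing the example above.

```vb
Sub TopPercentSketch()
    Dim salaries As Variant, sexes As Variant
    salaries = Array(90, 80, 70, 70, 60, 55, 50, 45, 40, 35)
    sexes = Array("m", "m", "f", "m", "f", "f", "m", "f", "m", "f")
    Dim pcount As Long, cutoff As Double
    pcount = Application.WorksheetFunction.Ceiling(10 * 0.25, 1)    ' 3
    cutoff = Application.WorksheetFunction.Large(salaries, pcount)  ' 70
    ' Count everyone earning >= the cutoff: the tied 4th employee is included.
    Dim i As Long, m As Long, f As Long
    For i = LBound(salaries) To UBound(salaries)
        If salaries(i) >= cutoff Then
            If sexes(i) = "f" Then f = f + 1 Else m = m + 1
        End If
    Next i
    Debug.Print f / (f + m)   ' 1 female of the top 4 -> 0.25
End Sub
```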
I was rolling up my sleeves for some array formulas, but it turned out to be quite straightforward.
sequence of functions
February 14th 2008, 04:16 PM

I have a sequence of functions defined by [(n^2)(x^2)]/(exp (nx)). I know that the limit of this sequence of functions is zero for all x in R, x >= 0. However, I have to show that for a > 0, this sequence converges uniformly on the interval [a, inf], but it does not converge uniformly on the interval [0, inf]. My textbook says that for n sufficiently large, the uniform norm on the interval [a, inf] is [(n^2)(a^2)]/(exp (na)), which makes total sense and the limit of this goes to zero. However, it says the uniform norm on the interval [0, inf] is just 4/(e^2) (and obviously this limit does not go to zero). Can someone tell me how you get this second uniform norm and/or show me another way to prove it is not uniformly convergent on the second interval [0, inf]? Thanks for your help.
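For what it's worth, the constant $4/e^2$ comes from maximising each $f_n(x) = n^2 x^2 e^{-nx}$ over $[0,\infty)$ with one-variable calculus:

```latex
f_n'(x) = n^2 e^{-nx}\, x\, (2 - nx) = 0
  \;\Longrightarrow\; x = \frac{2}{n}, \qquad
\sup_{x \ge 0} f_n(x) = f_n\!\left(\tfrac{2}{n}\right)
  = n^2 \cdot \frac{4}{n^2} \cdot e^{-2} = \frac{4}{e^2}.
```

The maximum is the same for every $n$, so the uniform norm on $[0,\infty)$ does not tend to $0$ and the convergence is not uniform there. By contrast, for $a > 0$ and $n > 2/a$, the critical point $2/n$ lies to the left of $a$, so $f_n$ is decreasing on $[a,\infty)$ and the uniform norm is $f_n(a) = n^2 a^2 e^{-na} \to 0$.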
Wolfram Demonstrations Project
Polynomial Graph Generator
Contributed by: Ed Zaborowski (Franklin Road Academy)
This Demonstration produces test quality graphs of polynomial functions. Use the sliders to change vertical stretch and shift from negative to positive.
The resource theory of quantum reference frames: manipulations and monotones Every restriction on quantum operations defines a resource theory, determining how quantum states that cannot be prepared under the restriction may be manipulated and used to circumvent the restriction. A superselection rule is a restriction that arises through the lack of a classical reference frame. The states that circumvent it (the resource) are quantum reference frames. We consider the resource theories that arise from three types of superselection rule, associated respectively with lacking: (i) a phase reference, (ii) a frame for chirality, and (iii) a frame for spatial orientation. Focussing on pure unipartite quantum states, we identify the necessary and sufficient conditions for a deterministic transformation between two resource states to be possible and, when these conditions are not met, the maximum probability with which the transformation can be achieved. We also determine when a particular transformation can be achieved reversibly in the limit of arbitrarily many copies and find the maximum rate of conversion. (joint work with Gilad Gour)
[FOM] Richard Epstein's view
Timothy Y. Chow tchow at alum.mit.edu
Fri Mar 16 10:39:37 EDT 2012

Buried in the now-defunct thread about fictionalism, Richard Epstein wrote:

>In my recent book *Reasoning in Science and Mathematics* (available from
>the Advanced Reasoning Forum) I present a view of mathematics as a
>science like physics or biology, proceeding by abstraction from
>experience, except that in mathematics all inferences within the system
>are meant to be valid rather than valid or strong. In that view of
>science, a law of science is not true or false but only true or false in
>application. Similarly, a claim such as 1 + 1 = 2 is not true or false,
>but only true or false in application. It fails, for example, in the
>case of one drop of water plus one drop of water = 2 drops of water, so
>that such an application falls outside the scope of the theory of
>On this view numbers are not real but are abstractions from counting and
>measuring, just as lines in Euclidean geometry are not real but only
>abstractions from our experience of drawing or sighting lines. The
>theory is applicable in a particular case if what we ignore in
>abstracting does not matter there.

This sounds like a version of nominalism. On this view, I think, mathematical nouns are akin to pronouns. So we can recognize the truth of

You refer to me as "you" and refer to yourself as "me"

while at the same time denying that asking whether "you" exists makes any sense except insofar as it asks about the existence of some particular *instantiation* of "you." This view must be very old, but as I think about it now, I don't recall it being discussed explicitly very often. Can someone name some famous proponents of it?
Detecting tilings by toric geometry

This is probably a silly question, but I figured that if there is a good answer, this would be a good place to ask. Ever since I got my hands on the book "Toric Varieties" by Cox, Little and Schenck, I've been excited to learn about the different combinatorial properties of polytopes that one can deduce from the corresponding toric varieties. In fact, toric varieties can prove combinatorial theorems not only about polytopes but also about many other objects living in $\mathbb Z^n$. One such thing would be tilings of $\mathbb R^d$ by integral polytopes. I believe the following comes as a natural question:

Can one tell if a convex polytope $P$ tiles Euclidean space by looking at its corresponding projective toric variety? Can one deduce properties of the tiling this way?

In case there is no simple answer, do tilings by polytopes correspond to algebraic gadgets in the same spirit that polytopes are in bijection with projective toric varieties with a specified ample line bundle?

2 Answers

A related question (but not exactly the one you asked) is: Can one tell if a convex polytope $P$ and its translations by $\mathbb Z^n$ tile $\mathbb R^n$? Which polytopes $P$ have this property? Fix some positive quadratic form $q$ on $\mathbb R^n$ and the corresponding distance function. Let $P^0$ be the set of points in $\mathbb R^n$ which are closer to $0$ than to any other integral (i.e. in $\mathbb Z^n$) point. The closure $P$ of $P^0$ is called the Voronoi polytope w.r.t. $q$. Then $P$ obviously has the above property. Voronoi conjectured circa 1907 that the opposite is true, i.e. any such $P$ is a Voronoi polytope w.r.t. some $q$. This conjecture is known for $n\le 4$ due to Delaunay and for zonotopes by Erdahl "Zonotopes, Dicings, and Voronoi Conjecture on Parallelohedra". It is still open in general, I believe.
It is still open in general. So what is special about the toric variety $X_P$ corresponding to $P$? I am not sure. If you look at the Delaunay tiling which is dual to the Voronoi tiling $P+\mathbb Z^n$, then the polytopes in that tiling and the corresponding toric varieties have a clear geometric meaning: they describe degenerations of principally polarized abelian varieties. But this is a dual picture. Note by the way that Delaunay polytopes have vertices in $\mathbb Z^n$, so they indeed correspond to projective polarized toric varieties. In contrast, the Voronoi polytope for a generic $q$ will have irrational vertices. Also, when you vary $q$ continuously, the Voronoi polytope will vary continuously. But the Delaunay polytopes will jump discretely, and there are only finitely many Delaunay polytopes modulo $GL(n,\mathbb Z)$.

One place where the Voronoi tilings appear is tropical geometry. Indeed, a principally polarized tropical abelian variety $A$ is just the real torus $\mathbb R^n / \mathbb Z^n$ together with the positive definite form $q$. Then the $(n-1)$-skeleton of the Voronoi tiling modulo $\mathbb Z^n$ is the theta divisor on $A$. See Mikhalkin-Zharkov, http://arxiv.org/abs/math/0612267, for more details.

Voronoi's conjecture is still open, see e.g. "Affine Equivalent Classes of Parallelohedra" by Dolbilin, Itoh and Nara, 2011. The conjecture was proven by Voronoi himself for "primitive" tilings, i.e., where the dual polyhedral complex is simplicial. Conjecturally, "primitive" tilings are dense. – Misha Apr 11 '12 at 0:44

Apparently, yes, see this beautiful talk of Valery Alexeev's.

Wow, thanks! Embarrassingly, this material goes a bit over my head. I think I can see how he associates a tiling to certain varieties but it's not clear to me if there is an inverse to this process. Maybe there is a lowbrow explanation of this somewhere?
– Gjergji Zaimi Apr 10 '12 at 0:28

I certainly cannot claim to have any sort of deep understanding of this. Perhaps one can pester Alexeev directly?! – Igor Rivin Apr 10 '12 at 0:52

I will try to do that, but first I must make an attempt to understand this talk and the references. – Gjergji Zaimi Apr 10 '12 at 1:20

You will be a sadder and wiser man when you do (and perhaps you can keep the sadness and pass on the wisdom to the rest of us). – Igor Rivin Apr 10 '12 at 3:28

@Igor: As far as I can tell, VA is Valery Alexeev. – Mark Sapir Apr 10 '12 at 12:39
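The Voronoi polytope from the first answer can be computed explicitly in small dimensions. A classical criterion of Voronoi says that a lattice vector $v$ contributes a facet of the Voronoi polytope (i.e. is "relevant") exactly when $\pm v$ are the unique minima of $q$ on the coset $v + 2\mathbb Z^n$. The sketch below (an illustration of the criterion, not code from the thread) counts relevant vectors for two forms on $\mathbb Z^2$: the square form gives a square cell with 4 facets, while a hexagonal form gives a hexagonal cell with 6 facets.

```python
def qval(G, v):
    # value of the quadratic form with Gram matrix G at v = (x, y)
    x, y = v
    return G[0][0]*x*x + 2*G[0][1]*x*y + G[1][1]*y*y

def relevant_vectors(G, R=4):
    """Vectors v such that +-v are the unique minima of q on the coset
    v + 2Z^2 (Voronoi's criterion); each such v gives a facet of the
    Voronoi cell.  The coset is searched in a finite window."""
    rel = []
    for vx in range(-2, 3):
        for vy in range(-2, 3):
            if (vx, vy) == (0, 0):
                continue
            coset = [(vx + 2*a, vy + 2*b)
                     for a in range(-R, R + 1) for b in range(-R, R + 1)]
            m = min(qval(G, w) for w in coset)
            mins = {w for w in coset if qval(G, w) == m}
            if mins == {(vx, vy), (-vx, -vy)}:
                rel.append((vx, vy))
    return rel

print(len(relevant_vectors([[1, 0], [0, 1]])))  # square lattice form -> 4
print(len(relevant_vectors([[2, 1], [1, 2]])))  # hexagonal form -> 6
```

Varying the Gram matrix continuously deforms the cell, as the answer describes; the facet count jumps from 4 to 6 as soon as the form leaves the rectangular case.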
The octaflexagon has 8 triangles meeting in the center. Three variants are described here: • isosceles made from 45-67.5-67.5 triangles • silver made from 45-45-90 triangles • star made from 45-18.5-116.5 triangles Isosceles Octaflexagon One variety of octaflexagon uses 45-67.5-67.5 isosceles triangles. When the 45 degree angles are in the center, you have an octagon, but 2/3 of the time it doesn’t lay flat when the 67.5 degree angles are in the center. You can use a pinch flex with 4-fold symmetry to move between the sides. Or you can fold it in half and explore opening up and pivoting flaps. Cut out the net and pre-crease all the edges. Copy the small numbers on to the back. For the 3 sided version, fold adjacent 2’s on top of each other giving you eight 1’s on one side and eight 3’s on the other. Tape the tabs onto the appropriate faces. Print out two copies of the following strip and join the 1/4 face to the 3/6 face. Fold 4 on 4, 5 on 5 and 6 on 6. Fold 2 on 2 and tape the tabs onto the appropriate faces. Silver Octaflexagon If you instead use 45-45-90 (silver) triangles, it’s flat 2/3 of the time when the 45 degree angles are in the center. You can use a 4-fold pinch with this one also, though there are two distinct places to do it. Or you can fold it in half and try opening up flaps. The 90 degree angle offers several interesting flexes since it allows you to open up a flap one way and fold it back a different way. You can also mix up the faces when using these flexes. One copy of the following strip gives you a 3 sided silver tetraflexagon. Two copies give you a 3 sided silver octaflexagon. Connect the 1/2 tab to the 3/2 tab. Fold 3 on 3 to give you a square with 8 triangles in it with 1’s on one side and 2’s on the other. Tape the tabs to the appropriate faces. Likewise, one copy of this next strip gives you a 6 sided silver tetraflexagon and two copies give you a 6 sided silver octaflexagon. Connect the 1/4 tab to the 3/6 tab. 
Fold 4 on 4, 5 on 5 and 6 on 6. Fold 3 on 3 to give you a square with 8 triangles in it with 1’s on one side and 2’s on the other. Tape the tabs to the appropriate faces. While there's only a single 4-sided hexaflexagon made from equilateral triangles, there are two different 4-sided silver octaflexagons. For each of the following octaflexagons, cut along the solid black lines and fold along the dashed gray lines. For the straight strip, fold 4 on 4 and 2 on 2. For the square, fold 4 on 4 and 3 on 3. When finished folding each one, tape the first and last triangles together. 6 sided The following strip is shown in the video above. It's one of the 6 sided silver octaflexagons. Click on the two thumbnails below to get the larger versions. You can either print the second picture on the back of the first or print them out separately and paste them back-to-back. One end of each piece should have 5 on one side and 4 on the other, while the other end has 1 on one side and 2 on the other. Connect the ends of the two pieces together to make one long strip by taping the left side of the first 5 to the right side of the last 1. To fold, find the first pair of adjacent 6's and fold them together. This will bring a pair of 5's together. Fold together the 5's, the adjacent pair of 3's and the adjacent pair of 4's. Step to the next pair of adjacent 6's and repeat the procedure until the entire flexagon is folded into a square with all 1's on one side and 2's on the other. Tape the first 5 to the final 1 to finish. Star Octaflexagon Cut along the solid lines and pre-fold along the dashed lines. Copy the small numbers onto the backs of the triangles. Fold each pair of adjacent 3's together so you have a 4 pointed star. Tape the first and last triangles together, leaving you with all 1's on one side and all 2's on the other. One flex that works on this star flexagon is the pivot flex. Cut along the solid lines and pre-fold along the dashed lines. 
The dashed lines along the outside edge of triangles 1/3 and 2/4 indicate that these edges will be connected to another edge. Copy the small numbers onto the backs of the triangles. The easiest way to assemble all four pieces is to start by folding each piece separately. Fold 5 on 5, 4 on 4 and 3 on 3. Then tape the short side of the 1/3 triangle on one piece to the short side of the 2/4 triangle on another piece such that the 1's are on one side and the 2's on the other. Connect all four pieces together this way so you have a four-pointed star with 1's on one side and 2's on the other.

Next: The Enneaflexagon
Some of the possible flexes
Octaflexagon Path Puzzle
Return to the Triangle Flexagon Bestiary
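The "lays flat" remarks above come down to a simple angle count: eight triangles meet at the center, so an arrangement lies flat exactly when the central angle is 360/8 = 45 degrees. A quick check of the three variants (a sketch; the angle triples are the ones named above):

```python
# Eight triangles meet at the center; the flexagon lies flat exactly
# when eight copies of the central angle sum to 360 degrees.
variants = {
    "isosceles (45-67.5-67.5)": [45.0, 67.5, 67.5],
    "silver (45-45-90)": [45.0, 45.0, 90.0],
    "star (45-18.5-116.5)": [45.0, 18.5, 116.5],
}
for name, angles in variants.items():
    for a in angles:
        flat = abs(8 * a - 360.0) < 1e-9
        print(f"{name}: {a} deg at center -> {'flat' if flat else 'not flat'}")
```

Only the 45 degree angle gives a flat layout, which matches the "2/3 of the time" remarks: two of the isosceles triangle's three angles are 67.5 degrees, while two of the silver triangle's three angles are 45 degrees.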
Trigonometry/Circles and Triangles/The Miquel Point

In general, four straight lines define four triangles (omitting each line in turn leaves three lines, which form a triangle). The four circumcircles of these triangles all intersect at the Miquel point. This point and the four circumcentres all lie on a circle. The unique parabola of which the four lines are all tangents has its focus at the Miquel point.
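The circumcircle claim is easy to check numerically: pick four lines in general position, take the circumcircle of each of the four triangles, and verify that the circles share a common point. The following sketch does this in coordinates (the four sample lines are an arbitrary choice, not part of the statement):

```python
import math

def intersect(l1, l2):
    # a line (a, b, c) represents a*x + b*y = c; solve the 2x2 system
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def circumcircle(p, q, r):
    # standard circumcenter formula for a triangle p, q, r
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

# four lines in general position: y = 0, x = 0, x + y = 1, 2x - y = 3
lines = [(0, 1, 0), (1, 0, 0), (1, 1, 1), (2, -1, 3)]
circles = []
for omit in range(4):
    tri = [l for i, l in enumerate(lines) if i != omit]
    pts = [intersect(tri[0], tri[1]), intersect(tri[0], tri[2]),
           intersect(tri[1], tri[2])]
    circles.append(circumcircle(*pts))

# intersect the first two circumcircles; one of the two intersection
# points lies on all four circles -- that is the Miquel point
(c0, r0), (c1, r1) = circles[0], circles[1]
dx, dy = c1[0] - c0[0], c1[1] - c0[1]
d = math.hypot(dx, dy)
a = (d*d + r0*r0 - r1*r1) / (2*d)
h = math.sqrt(max(r0*r0 - a*a, 0.0))
mx, my = c0[0] + a*dx/d, c0[1] + a*dy/d
miquel = None
for s in (1.0, -1.0):
    px, py = mx - s*h*dy/d, my + s*h*dx/d
    if all(abs(math.hypot(px - cx, py - cy) - r) < 1e-7
           for (cx, cy), r in circles):
        miquel = (px, py)
print("Miquel point:", miquel)
```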
Eigenvalues up the wazoo An anonymous commenter on my last post asks: isn’t the real problem with complexity theory that the resulting mathematics is usually superficial and shallow? does this make it less fun to do complexity theory? are complexity theorists ever saying something deep? Later, the same commenter writes: i’m just curious because I don’t really understand how I feel about the issue myself. maybe we should start with something more basic. can we all agree that “logic” (i.e. foundations of math) is pretty boring and flavorless? sure, we all got that little rush when we heard the story of the “fall of mathematics” in the early 20th century … and then maybe again with the axiom of choice, continuum hypothesis, independence, forcing, and various incompleteness theorems. but is logic actually fun to do? on an emotional level, do you achieve understanding, intuition? Alright, look, anonymous. You’ve nailed why I don’t work on logic myself — besides not understanding what the big, meaty open problems are. For me, frankly, reading about logic (or recursion theory, or programming language semantics, or distributed computing) has always felt like sipping broth. Sure, it might be delicious broth. In the case of (say) Gödel and Cohen’s independence results, it might even be the best broth I’ve ever tasted. But eventually I hanker for some noodles, some carrots, maybe some complex numbers or a Cauchy-Schwarz inequality. I mean, how long can a person go without bounding anything? But you see, anonymous, that’s what I like about complexity. It packs the same theological punch as logic does, but it’s got math in it too. And I’m not just talking combinatorics and graph theory. Let me put it to you this way: You like groups? We got groups. You like vector spaces? We got them too. But what about number theory? Finite fields? Fourier transforms? Continued fractions? “Shor.” Eigenvalues? Chebyshev polynomials? Gaussians? Random walks? Lattices? Convex polytopes? 
Banach spaces? Metric embeddings? You better believe it. Or how about this, anonymous: what’s your favorite constant? π? e? The golden mean? Maybe 0.288…=(1/2)(3/4)(7/8)(15/16)…? Becoming a complexity theorist doesn’t mean bidding any of them goodbye. Look, we even got knots, braids, manifolds, unitary representations, varieties, cohomologies, plethysms — alright, maybe not so much of the last few. But if your favorite mathematical object isn’t in stock, bring it yourself! That’s the thing about complexity: anything is fair game if it yields a new upper or lower bound. The reason it’s so hard to prove P!=NP is precisely that a fast SAT algorithm could be hiding anywhere in the math library.

Now let me turn the tables, anonymous. Can you name a subfield of math that involves so many different kinds of math?

Kurt Says: Comment #1 February 20th, 2006 at 1:45 am

Can you name a subfield of math that involves so many different kinds of math?

Uh, complexity theory? Oh, wait, nevermind… But with regard to the broth analogy, if it eventually turns out that P vs. NP is logically independent, you might find yourself sampling that broth yet again.

Scott Says: Comment #2 February 20th, 2006 at 1:57 am

But with regard to the broth analogy, if it eventually turns out that P vs. NP is logically independent, you might find yourself sampling that broth yet again.

Right — and for me, that’s one more reason to guess that it’s not independent (or at least not provably so)! It’s hard to imagine how anyone could scale this mountain on a diet of logical broth.

Kurt Says: Comment #3 February 20th, 2006 at 2:02 am

Okay, I was just being cute in my last comment. But I do regard TCS as being a branch of mathematics, even if it does have quite a different feel than classical areas of math. With regard to the possible logical independence of P vs. NP, I think most computer scientists would be totally unfazed by such a result. Since P vs.
NP represents a question about the physical world, as well as a mathematical question, logical independence would only serve to show an insufficiency in the axioms of ZFC. It would certainly be of interest to FOM types, but I don’t think it would have any significant impact on most complexity theorists. (And of course the main reason for thinking about independence in the first place is just an argument from arrogance–gee, this problem is really hard, maybe it’s not that we’re ignorant but that the problem is actually unsolvable.) Anonymous Says: Comment #4 February 20th, 2006 at 2:04 am first of all, yes, I can name other subfields of mathematics which involve all these things and more. in fact, I can name sub-sub-fields, like equidistribution of primes (e.g. how many prime solutions less than N satisfy a system of polynomial equations as N to infty) or, say, geometric group theory (what can we say about a group using only the language of its intrinsic geometry). but the question you are asking is probably not the right one. indeed, I can say rather confidently that the connections in number theory are deeper than those in geometric group theory. it’s not surprising: number theory is a much older field. perhaps one can really test the strength of connections by asking if they are only uni-directional. does complexity theory contribute back to the mathematical fields from which it borrows? I know of a small number of (rather unimpressive) cases for which this is true, but they are unconvincing at best. is this even the right question? I don’t know. but certainly the mathematics of Shor’s algorithm (i.e. fourier transforms on abelian groups) is elementary. the same goes for the use of chebyshev polynomials in proving degree lower bounds for a polynomial fitting some data (e.g. to prove lower bounds on quantum query complexity). I suspect that mathematicians would not call this deep. 
(I am not, in any way, degrading the _difficulty_ or _novelty_ of these proofs, only their “deepness”). there are instances of deeper mathematics being used in complexity theory, but usually in a superficial way, e.g. the use of Milnor-Thom for lower bounding algebraic decision trees. but no one is delving into hard core Morse theory to accomplish their goals, and that brings up another point: why don’t we expect mathematicians to find the deep connections that lead to P neq NP (as opposed to complexity theorists)?

(p.s. does blogger show the ip addresses of commenters to the blog author? I assumed this was the case, and thus scott knew (essentially) the identity of posters.)

Scott Says: Comment #5 February 20th, 2006 at 2:12 am

With regard to the possible logical independence of P vs. NP, I think most computer scientists would be totally unfazed by such a result.

Dude. If that doesn’t faze you, then you’re ipso facto not a computer scientist. (Note that the current proof techniques don’t go outside certain fragments of PA, let alone ZFC. See my survey paper for more.)

Scott Says: Comment #6 February 20th, 2006 at 2:25 am

p.s. does blogger show the ip addresses of commenters to the blog author? I assumed this was the case, and thus scott knew (essentially) the identity of posters.

I don’t go sleuthing with IP addresses, not only because it seems ungentlemanly but also because I don’t have time. Why don’t you unmask yourself? Your views on complexity might be mistaken, but they’re not a discredit to you.

Scott Says: Comment #7 February 20th, 2006 at 2:36 am

why don’t we expect mathematicians to find the deep connections that lead to P neq NP (as opposed to complexity theorists)?

Well, no one’s stopping them. Seriously: I talk to mathematicians all the time about my complexity problems. Sometimes they’re a real help.
Other times all they can do is restate the problems in fancier language… Scott Says: Comment #8 February 20th, 2006 at 3:24 am perhaps one can really test the strength of connections by asking if they are only uni-directional. does complexity theory contribute back to the mathematical fields from which it borrows? There’s a huge number of cases where complexity led to new questions being raised. (Babai’s short presentation conjecture, the majority-is-stablest conjecture, Rudich’s minterm conjecture, degree vs. approximate degree, …) There are also many cases where complexity led to a new kind of question being asked about a familiar object. I.e., is it NP-hard to distinguish a knot from the unknot? Is finding a Nash equilibrium as hard as finding other kinds of fixpoints? But I’ll admit that, if your goal is to contribute to analysis, group theory, etc. (rather than just using them on a daily basis), then complexity probably isn’t for you. You should continue studying “deep” math, and then, when something like Shor’s algorithm or PRIMES in P comes along, remind yourself that you could easily have done it first. Anonymous Says: Comment #9 February 20th, 2006 at 11:52 am Scott’s post is accurate of course, but in some sense this is beside the point. If you really like mathematical area X, then go work on X. While it may turn out to be connected to complexity, the surest way to making a contribution to X is working on X, just as the surest way of making a contribution to complexity is working on complexity (and that’s why I’m not holding my breath for a great mathematician to stumble upon a proof that P is not NP). Everyone has different reasons to do what they do, but I work on complexity because I deeply care and am very curious about the questions. I also find it to be fun – there are a lot of beautiful proofs and new techniques pop up all the time. 
For me the fact that it is a fresh field with everything wide open is a plus and not a minus – I’m not jealous of the people 300 years from now that will be working on Aaronson’s hypothesis that the circuit complexity of circuit-SAT is exactly 2^{n-sqrt{n}}, even if these people will scoff at the stuff we do now as elementary and shallow. The lesson that one should take from Scott’s examples is not that complexity is interesting because it’s connected to this or that subfield of math. Complexity, dealing with some of the clearest to state and thought provoking questions of science, does not need this validation. The slew of different areas (even if it’s only “elementary” parts of these areas) that are used just demonstrates the variety of tools and skills that can be employed in complexity. This contributes to a wonderful and varied set of people working on complexity (which is another reason why I like it so much). They also tend to quite nice and un-snobbish – perhaps working in a field where you are daily humiliated by the most elementary questions does this to you. p.s. Sorry Scott, I picked you because you’re the youngest researcher I know, you’re about -100 years old, right? Anonymous Says: Comment #10 February 20th, 2006 at 11:54 am sorry about the ugly formatting -boaz Scott Says: Comment #11 February 20th, 2006 at 12:10 pm Boaz: Thanks for the eloquent comment! I’m not jealous of the people 300 years from now that will be working on Aaronson’s hypothesis that the circuit complexity of circuit-SAT is exactly 2^{n-sqrt{n}} But I never hypothesized that! Incidentally, do we know that 2^{n-sqrt{n}} is an upper bound? Sorry Scott, I picked you because you’re the youngest researcher I know, you’re about -100 years old, right? Plus or minus 124 years. Greg Kuperberg Says: Comment #12 February 20th, 2006 at 1:37 pm I agree that complexity theory can be seen as an area of pure mathematics. (Maybe that’s why I like it.) 
But your argument is a closer fit for quantum computation than for the rest of complexity theory. (Which is maybe why I like quantum computation in particular.) Anonymous Says: Comment #13 February 20th, 2006 at 6:05 pm Scott said: You should continue studying “deep” math, and then, when something like Shor’s algorithm or PRIMES in P comes along, remind yourself that you could easily have done it first. but anonymous had said: I suspect that mathematicians would not call this deep. (I am not, in any way, degrading the _difficulty_ or _novelty_ of these proofs, only their “deepness”). So he/she is not claiming the discoveries in the field are “easy”, he/she seems to be saying only that the field doesn’t seem to be a “deep” field of mathematics to him/her. While I disagree with the tone of his/her remarks, I think he/she is raising a point that can be reasonably discussed. Anonymous Says: Comment #14 February 20th, 2006 at 7:55 pm “I suspect that mathematicians would not call this deep” Could you define what you mean by deep? Deep as in related to questions that are interesting to mathematicians in general? to algebraists? to combinatorialists? Deep as having broad applications to other sub-fields of math? Deep as being very general? In any case, who cares whether the “mathematics used are deep” (whatever the definition of deep may be…). What matters here is that the *computational* results are deep. And by deep, I mean relevant to fundamental questions about our world and its limits, from a perspective that has never been really considered before. Anonymous Says: Comment #15 February 20th, 2006 at 9:01 pm I don’t go sleuthing with IP addresses, not only because it seems ungentlemanly but also because I don’t have time. Why don’t you unmask yourself? being anonymous affords one the ability to express sentiments which are not completely in line with their actual views, while not being held accountable for this expression later. 
I don’t think I would have gotten such interesting responses from scott and boaz had I simply said “isn’t complexity theory neat?” consider, for instance:

While I disagree with the tone of his/her remarks, I think he/she is raising a point that can be reasonably discussed.

while I thought my tone the whole time had been primarily inquisitive, it was apparently perceived as aggressive by some who disagreed with me. one can ignore these subtle social implications once under the shroud of anonymity…

Anonymous Says: Comment #16 February 20th, 2006 at 9:21 pm

Could you define what you mean by deep? Deep as in related to questions that are interesting to mathematicians in general?

if you list something like “knot theory” or “cohomology” as having some relation to your field, one can ask about the substance of that relationship. if the answer is that “well, deciding the equivalence of two knots is NP-complete,” then I think we all agree that this relationship is superficial and not “deep.” if, on the other hand, you use algebraic topology to prove that ACC_0 neq P, this probably qualifies as more substantial.

Anonymous Says: Comment #17 February 20th, 2006 at 9:36 pm

in any case, my main concern is whether complexity theory has gotten side-tracked precisely because working on the real guts of the matter proved unrewarding in terms of the richness of techniques. people jumped on the razborov-rudich bandwagon a little too eagerly. why don’t we have entire schools of thought devoted to circuit lower bounds? why is it essentially career-ending in computer science to produce (on average) only one paper every year, even if it is very good? it doesn’t seem like we actually value depth. why is it so hard to get a job if you’re doing complexity theory? do we really assign our fundamental problems the same importance that other fields do theirs?
Greg Kuperberg Says: Comment #18 February 20th, 2006 at 10:22 pm I think that “deep” could be the most arrogant word in mathematical research. I have no idea what it means. Anonymous Says: Comment #19 February 20th, 2006 at 10:34 pm I may not hit upon the original poster’s meaning of the word, but to me what is not “deep” about complexity theory (and TCS in general) is that it is essentially a series of unrelated results (each using very nice tricks and techniques), but without much unifying structure or theory. (Obviously some tricks turn out to be very useful and have produced a series of results. But I still have the feeling we are left with a bag of tricks at the end of the day and not much else…) I suspect (and hope!) that this will change as complexity theory matures. Scott Says: Comment #20 February 20th, 2006 at 10:50 pm I think that “deep” could be the most arrogant word in mathematical research. I have no idea what it means. Greg, I’m starting to agree with you about this. I’m reminded of a comment by John Conway in Bill Gasarch’s P vs. NP poll: “In my opinion [P vs. NP] shouldn’t really be a hard problem; it’s just that we came late to this theory, and haven’t yet developed any techniques for proving computations to be hard. Eventually, it will just be a footnote in the books.” For those who don’t quite grasp this, let me spell it out for you. The Riemann hypothesis is Deep. The Langlands conjectures are Deep. For P vs. NP, the problem is merely that we don’t yet have techniques for proving computations to be hard. Anonymous Says: Comment #21 February 20th, 2006 at 11:42 pm I think that “blistolpock” could be the most arrogant word in mathematical research. I have no idea what it means. 
somehow it seems hard to feel strongly about a word if you don’t know what it means, but anonymous 10:34 did a good job of describing what many people mean when they refer to TCS as not (yet) being deep.

scott, I see how it’s easy to get offended by conway’s comment, but it’s ridiculous to have such a strong reaction when you are not largely aware of the mathematics surrounding the riemann hypothesis. it’s probably a ridiculous comment for conway to make as well, given that he has not actually worked on P vs. NP. I disagree with conway, but I don’t see why it’s such a bizarre thing to suggest, or why–not having a significant background in number theory–you can somehow assert that P vs. NP is at least as “Deep” as RH. in fact, if you’re going to wait a year before taking a job, I would suggest you take first a course in complex analysis, and then a course in analytic number theory. for the same reason a painter might want to understand the art of a musician–an appreciation for a type of structure and richness in mathematics that doesn’t arise (yet) in complexity theory. it’s really quite stunning.

Scott Says: Comment #22 February 21st, 2006 at 12:16 am

scott, I see how it’s easy to get offended by conway’s comment…

I’m not offended, just amused. I haven’t studied RH much, but my opinion is that it shouldn’t be a hard problem. It’s just that we haven’t yet developed any techniques for proving that the nontrivial zeroes of the zeta function have real part 1/2.

in fact, if you’re going to wait a year before taking a job, I would suggest you take first a course in complex analysis, and then a course in analytic number theory.

I’m always up for learning new things if I’ll understand them, but in this lifetime I’m through with taking courses. I did take real analysis, abstract algebra, and topology at Cornell. The theorems in real analysis all seemed like painstaking formalizations of the obvious.
I enjoyed abstract algebra and topology, but I kept asking myself: “What is the computational problem here, and can it be solved efficiently?”

Anonymous Says: Comment #23 February 21st, 2006 at 2:38 am

I may not hit upon the original poster’s meaning of the word, but to me what is not “deep” about complexity theory (and TCS in general) is that it is essentially a series of unrelated results (each using very nice tricks and techniques), but without much unifying structure or theory. (Obviously some tricks turn out to be very useful and have produced a series of results. But I still have the feeling we are left with a bag of tricks at the end of the day and not much else…) I suspect (and hope!) that this will change as complexity theory matures.

On the contrary, mathematics would be so much more useful if mathematicians actually tried to solve problems that can be understood, rather than trying to construct complex structures and theories that are so difficult to understand as to be useless for solving problems. Complexity theory is applied mathematics. Structure and unifying theorems are not important. When you get so stuck up with constructing theories, you fail to solve problems.

Anonymous Says: Comment #24 February 21st, 2006 at 2:46 am

in fact, if you’re going to wait a year before taking a job, I would suggest you take first a course in complex analysis, and then a course in analytic number theory. for the same reason a painter might want to understand the art of a musician–an appreciation for a type of structure and richness in mathematics that doesn’t arise (yet) in complexity theory. it’s really quite stunning.

too bad that structure and richness doesn’t help you get laid. lower bounds on real problems are much more stunning to hot chicks than classification of groups. You can just pick the chick up and explain what the problem is easily. Try that with math.
Scott Says: Comment #25 February 21st, 2006 at 3:01 am

When you get so stuck up with constructing theories, you fail to solve problems.

Let’s agree that when someone like Galois or Wiles produces a deep theory that also solves a simply-stated open problem — “dude.” But if I have to choose between a theory with no easily-stated results or easily-stated results without a theory, I’ll always go for the results.

Anonymous Says: Comment #26 February 21st, 2006 at 3:36 am

what is this picture of the world where mathematicians are “just constructing useless theories that nobody can understand?” it’s ignorant, myopic, and arrogant.

- if you design a theoretical algorithm that (a) nobody ever implements and uses and (b) does not help develop the theory of algorithms itself, then you have failed on both counts, and your contribution is essentially useless.

- funny, I thought the only thing that complexity _does have_ is structure and unifying theorems. the biggest advances in complexity theory are about unification (e.g. hardness vs. randomness, time vs. space, determinism vs…).

- the unfortunate thing about computer science is that everyone thinks that problems should be easy. I mean, Moore’s law says that we only have to wait 1.2 years before our abilities double (I have no idea what the real number is), so certainly the number of theorems we produce should grow at the same rate. We have little tolerance for intermediate progress (e.g. “look, harmonic analysis of boolean functions was really useful in solving problem A, and such and such issue arose. we study this issue, and understand some complex behavior precisely, and in an interesting and unexpected way. we expect it to be useful in the future.” and responses are: well, but what are they solving for us NOW?
where is contrived problem A’ so that we all feel safe that this is actually computer science…) - finally, for picking up “chicks” or “dudes” or whatever the best way is actually to make what you do sound as much like art as possible (which isn’t so far from the truth). for this, geometry seems useful, and quantum mechanics works too, and crypto has a certain cool factor. explaining to her some contrived story about placing factories so as to minimize the average expected commute time gets really boring, really fast. Anonymous Says: Comment #27 February 21st, 2006 at 3:57 am people seem to be getting defensive, so let’s clarify: TCS is an utterly amazing field in which to exist. you can do as much or as little mathematics as you want, you can go work for google, you can quite easily make significant contributions to real world applications by working with colleagues in AI, systems, and databases, and they are a constant source of interesting problems. TCS people are amazing too, and they’re really fucking smart. a lack of depth is made up for by an abundance of creativity, and there is depth if you allow yourself to delve. in my opinion, you will find as many new ideas in focs/stoc as in any top math journal. additionally, the connections with mathematics are growing faster than ever before, and this trend will only continue: they like that we are using their techniques, and are quite willing to work on our problems if we can state them in a way they understand (and then they say “can I put that in my grant proposal?”) we’re all very lucky. the end. Elad Verbin Says: Comment #28 February 21st, 2006 at 12:20 pm Scott said: I did take real analysis, abstract algebra, and topology at Cornell. The theorems in real analysis all seemed like painstaking formalizations of the obvious. The most interesting subjects, in my eyes, are exactly those that seem obvious when the right framework is set. 
The construction of these theories was probably incredibly hard, and took some time until it all settled down. I'll give a few examples from TCS:

Look at VC-dimension arguments. This is a unifying concept that rules across all of TCS, from geometry, through learning, to complexity. And it's totally obvious — but only after you see it.

Look at Irit Dinur's new proof of the PCP Theorem. It's obvious. You see the proof and you say — "well, this is completely obvious. It's just a bunch of simple ideas, and a bit of technicalities, and you're done." Not to mention the Zig-Zag product itself, which is another obvious thing. But both of them had far-reaching consequences.

Sariel Har-Peled's results on core-sets (and results involving sampling in general) are another one of these things, where you think a result should be hard, and then someone comes up to you and explains it to you using completely trivial ideas. It's like a magic trick — Scott's comment discussed the magician's obvious "abracadabra", while what you did not see him perform is the small change in perspective that _made_ it all seem completely trivial.

The above results are all "painstaking formalizations of the obvious". Is it obvious that there are unmeasurable sets before you study real analysis? You took a couple of courses on Calculus before you took real analysis, and you were completely content with the Riemann integral, not once thinking, like I imagine Lebesgue thought, "this is bullshit, just because the function is not defined on the rationals it doesn't mean it is not integrable". Real analysis is deep. The presentation is trivial. (by the way, Linear Algebra is the same way. It's totally deep, and the presentation is totally trivial. Annoyingly so).

I can go on and on with examples like this, but instead I'll mention an article of Tim Gowers which discusses many points which were discussed in this thread, especially the subject of "Deep"ness.
Gowers argues that combinatorics (the same point can be made for complexity) is deep because although it can be viewed as a huge bag of tricks, there is actually a small number of extremely novel unifying trends behind these tricks, and that is what combinatorics (or complexity) gives to science. He says that thinking of a "typical" object was very hard until the probabilistic method came along, and allowed us to say something like: You want to prove that every graph with 100 vertices has either a clique or an ind. set of size 3? Take a random graph and prove it there. This completely changed mathematicians' ways of thinking. The article is called "The two cultures of mathematics" and is available at http://www.dpmms.cam.ac.uk/~wtg10/2cultures.ps

Also, Scott said: "I'm always up for learning new things if I'll understand them, but in this lifetime I'm through with taking courses."

I know how you feel, but I'm quite sure that you don't really mean that. A course _is_ one of the best ways to learn a new subject in depth, without actually having to read the material yourself. And you get better at understanding _well-structured_ courses as your skill develops, so taking, say, Noga Alon's course on the probabilistic method now is very different for me than taking it 5 years ago. (by the way, I believe that tail-estimates are perhaps the most horribly under-taught subject to CS undergrads, at least here).

Elad Verbin Says:
Comment #29 February 21st, 2006 at 12:55 pm

Unfortunately, it seems that Lebesgue would not agree with me: "Reduced to general theories, mathematics would be a beautiful form without content"

Anonymous Says:
Comment #30 February 21st, 2006 at 3:27 pm

I think I lost track of this discussion, but just a couple of quick comments:

1. If the whole point of that deep comment is that more people should work on lower bounds then I absolutely agree – Razborov-Rudich should be viewed as a hint to how we can proceed, not a limitation (similar to relativization).
So, anonymous, you can say we're shallow, boring and stupid – just please prove a non-natural lower bound (whether it's using Morse theory or using stupid induction).

2. I think that people who have very few good results are greatly appreciated within the TCS community. The main problem is that because TCS by itself is not appreciated enough by the outside world (and especially funding agencies), people who do core TCS work and do not also have a more practical-sounding aspect to their research are at a disadvantage. This is what theorymatters is trying to change.

3. I think it's important that we keep a pluralistic culture where we have and appreciate people of very different modes of work and skill sets, if we want to make progress on our goals. At least in this point I see this community as quite successful, and don't see a need to try to emulate other mathematical communities.

4. I'm not sure I understand the meaning of "deep" and so I'm not sure whether or not it's a good thing. I guess it also depends whether you're asking about the depth of the questions or the answers. Obviously if you measure depth in terms of logical dependencies in current proofs then complexity has not been around long enough for our answers to compete with other fields, which I'm sure have a deep and beautiful structure that complexity has yet to develop. As I said in my previous comment, it's a question of whether you prefer to work close to or far from the root of the result tree.

About the questions: if we measure depth by how hard/long/logically deep their shortest proof is, or the time it will take mankind to solve them, then I guess nobody can really know. I'm very ignorant of number theory and math in general, and so may be completely off base, but I would not compare P vs.
NP to the Riemann Hypothesis, but rather to questions such as the status of the Euclidean 5th postulate (called by Gauss a "shameful part of mathematics") – a seemingly elementary question that it's simply outrageous we cannot solve. What's surprising is that the questions of complexity seem to be several orders of magnitude harder than this and other similar examples in math history. It may be that we're all missing something simple, and it may be that we're like the ancient greeks – trying to solve a question 2000 years before its time.

Greg Kuperberg Says:
Comment #31 February 21st, 2006 at 3:49 pm

What's the issue with Euclid's 5th postulate? Projective geometry and hyperbolic geometry show that it doesn't have to be true.

Anonymous Says:
Comment #32 February 21st, 2006 at 4:54 pm

"So, anonymous, you can say we're shallow, boring and stupid…"

anonymous (= me) never said any of these things. I questioned whether complexity theory itself was deep (as opposed to the people who do it), and thought this might be a reason that fewer people work on "core" complexity. I get the sense that people are thirsting for a richness and variety that we haven't seen in, e.g., communication complexity. I am not trying to discourage people, I am just asking questions. The amount of money & time spent on hardness of approximation vs. circuit complexity is disproportionately large. the same thing goes for quantum computing or extractors. is it because these other sub-fields have a far more rich/interesting collection of techniques/connections?

elad Says:
Comment #33 February 22nd, 2006 at 10:24 am

Greg Kuperberg said: "What's the issue with Euclid's 5th postulate? Projective geometry and hyperbolic geometry show that it doesn't have to be true."

Why did the chicken cross the road? Because it felt fat.

Here is a question: Suppose that we accept P=NP as an axiom (but without actually getting an algorithm that proves it, not even as a black box).
Then the complexity hierarchies collapse, and so on. But do we actually get something we can use from the axiom we added? Can there be a specific algorithm for factoring, call it A, such that if we know that P=NP then we know that A runs in polynomial time? Or vice versa: Can you prove that there are no "constructive" ramifications to knowing that P=NP by itself, without getting an actual algorithm that shows it to be true?

Scott Says:
Comment #34 February 22nd, 2006 at 11:43 am

Elad: That's an excellent question! The answer to it is yes — there's an explicit algorithm A that factors in polynomial time if P=NP. Here it is: given an integer n, dovetail over all possible Turing machines. Halt when one of them has output the factors of n. (Sure, there's a pretty big constant overhead, but that's not our department…) Here I used the fact that there always exists a prime factorization.

More generally, we can say that there's an explicit algorithm A that finds a satisfying assignment to any satisfiable formula in polynomial time if P=NP (but might not halt if P!=NP). There's also an explicit n^{log log log n}-time algorithm that decides all but finitely many instances of SAT if P=NP (do you see why?).

Anonymous Says:
Comment #35 February 22nd, 2006 at 11:45 am

I'm through with these discussions of math is better/worse than CS. In CS's infancy, it was a truly open question whether there was such a thing as a "science" of computing. But today we are way past that, and CS as such is as valid as any other science. Not surprisingly, as a new science, CS (a) uses mostly new and uncomplicated techniques (duh!) and (b) has incredibly important problems for the picking. The same was true for, say, Topology when it first vaulted into the world of mathematics. I'm sure Brouwer and Jordan got flack for doing non-deep mathematics. One hundred years later most of those mathematicians criticizing them are forgotten while Brouwer and Jordan live on.
Today a large percentage of the old guard in pure mathematics haven't gotten over the fact that first Graph Theory and then Computer Science have stolen their thunder. They sit atop their stale perches and criticize the lack of "depth" and "elegance" in the new fields, all the while these fields continue to evolve and pass them by. I don't care much about their opinion. As Thomas Kunz observed, those people will never be swayed; they will simply slowly drift into retirement while the younger generation works in the new reality from the get-go (anyone noticed how many young solid mathematicians are successfully dabbling in the field of computing?).

I worry more that as CS grows it'll repeat this pattern. Elsewhere in the blogosphere we have seen complexity types sneering at algorithmic results for their lack of "depth" and "elegance". There is nothing to be gained in that line of argument, and complexity theorists will not create jobs for themselves by going there. CC is valuable (or not) on its own merits, and talking down the competition is a lose-lose proposition.

Anonymous Says:
Comment #36 February 22nd, 2006 at 12:32 pm

I think you mean 'Thomas Kuhn'.

I have a question: why do people seem to be sweaty over the prospect of topological ideas revolutionizing complexity? Are there any serious vindications of this hunch, or has the seeming exoticism of the idea (and the prospect of visualizing donuts 'n things) simply transfixed people?

Scott Says:
Comment #37 February 22nd, 2006 at 1:05 pm

"why do people seem to be sweaty over the prospect of topological ideas revolutionizing complexity?"

I'm not sweaty over it. (Living in Canada, I'm not really sweaty over anything.) Mike Freedman has had some interesting ideas about a topological approach to P vs. NP, but nothing concrete that I know of yet.
In a different direction, there's also work by Freedman, Kitaev, and Wang (and more recently Aharonov, Jones, and Landau in STOC'06), which shows that simulating topological quantum field theories and approximating the Jones polynomial at a root of unity are BQP-complete.

As for P vs. NP, there's no shortage of exotic ideas: the Mulmuley-Sohoni approach (based on geometric invariant theory), Mike Nielsen's approach (based on geodesics in Riemannian manifolds), approaches based on cohomology, etc. etc. I don't pretend to understand most of this stuff; what I keep coming back to is how any of it gets around the Razborov-Rudich barrier.

Anonymous Says:
Comment #38 February 23rd, 2006 at 6:35 am

"I'm through with these discussions of math is better/worse than CS."

at no point did I propose that "math is better than TCS" or that "subfield A of TCS is better than subfield B." I don't see why many people are incapable of having a reasoned discussion about their field without getting defensive and clouding everything with personal insecurities.

I think perhaps my real question should have been something like "is it the right time in history to solve the big problems of complexity theory?" in other words, is there anything to be gained from attacking the main questions head-on, as opposed to settling in for the long haul, developing a theory, and being perfectly happy with a lot of very incremental progress for the next 20 years. what should we expect of ourselves? should people be encouraging their students to work in a different field?

as for the sweat-inducing prospects of applying topology to complexity theory:

- the stuff of freedman et al. is very cool, but the reason topology comes in there (topological quantum field theories) is very different (a priori, at least) from the reason it might be useful for e.g. circuit lower bounds.

- I think the hopes for topology in complexity theory revolve around connectedness and information flow.
for instance, we know there are problems for which every circuit of a certain type requires that there exist disjoint paths from every set of k inputs to every set of k outputs. in other words, the problem itself imposes certain connectivity requirements on the circuit. this can be used to get very weak but non-trivial lower bounds on the number of wires required by any circuit that solves the problem (see valiant, pudlak, or raz and shpilka for examples of this that I can recall). at this elementary level, one might hope that higher-order topological invariants (e.g. homotopy, homology) might lead to stronger lower bounds, but it's unclear whether this is true, and what it should mean.

- bjorner, lovasz, yao, and others used topological methods to study size and depth lower bounds for algebraic decision trees. suppose you have a subset S of R^n and you want a decision tree that decides membership in S. one method to prove lower bounds is to count the number of connected components of S (or its complement). but if S is connected, then this only provides trivial bounds. here, one can use higher-order notions of "connectivity" like the betti numbers of S to get lower bounds on depth (this uses non-trivial results on the betti numbers of algebraic varieties, which I mentioned upthread somewhere). I guess there is hope that similar methods might hold a lot of power, but I agree there is little evidence. there is some recent attempt by joel friedman to study this stuff more generally, but I don't really understand what his paper is saying.

Anonymous Says:
Comment #39 February 23rd, 2006 at 11:30 am

Reading comments such as:

"isn't the real problem with complexity theory that the resulting mathematics is usually superficial and shallow? are complexity theorists ever saying something deep?"

"I don't see why many people are incapable of having a reasoned discussion about their field without getting defensive and clouding everything with personal insecurities."
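The connected-components argument in comment #38 can be made concrete with a standard example (this specific instantiation is an editorial illustration, not something from the thread): for element distinctness, the "yes" set {x in R^n : all coordinates distinct} falls apart into n! connected components, one per relative ordering of the coordinates, which forces any algebraic decision tree deciding it to be deep.

```python
import math

# Numeric sketch of the counting bound behind Ben-Or-style lower bounds for
# element distinctness.  The accepting set {x in R^n : all x_i distinct}
# has n! connected components (one per ordering of the coordinates).
# A decision tree of depth d with fan-out c has at most c**d leaves, and
# the nontrivial algebraic-geometry input (bounds on the components of
# varieties) says each leaf accounts for only a bounded number of
# components, so roughly c**d >= n!, i.e. d >= log_c(n!) = Omega(n log n).

def distinctness_depth_bound(n, fanout=3):
    """log_fanout(n!): depth any such decision tree needs, up to constants."""
    return math.log(math.factorial(n), fanout)
```

For n = 8, 16, 32 the bound comes out to roughly 10, 28, and 74, i.e. it grows like n log n, matching the comparison-sorting upper bound. The exponent 3 on the fan-out is just one conventional choice for ternary sign tests.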
elad Says:
Comment #40 February 23rd, 2006 at 2:21 pm

Hot mama, a troll. And there I was thinking that they hide under a bridge or something. Maybe it's that way in Canada.

Moshe Vardi Says:
Comment #41 February 24th, 2006 at 5:11 pm

This discussion reminds me of a sentence I once heard from an MIT doctoral student: "Isn't everything about databases trivial?" A short inquiry quickly revealed that this student knew nothing about databases. One does not find depth unless one looks for depth.

Anonymous Says:
Comment #42 February 24th, 2006 at 7:36 pm

Calling the questioner a "troll" when he/she seems to have made some interesting points (in addition to the initial acerbic objection) is probably an easy way out, as is comparing her/him to someone with no knowledge of databases, since the questioner seems to know more about complexity theory than me.

Jonathan Says:
Comment #43 February 26th, 2006 at 5:30 am

"Could you define what you mean by deep?"

Here's how it was explained to me. (I'm a theoretical CS guy, so I'm not speaking from personal experience.) In mathematics, especially pure mathematics, a result is "deep" if it exposes a connection between disparate fields of mathematics, where nobody expected a connection to exist. For example, the original proof of the prime number theorem is "deep" because it's based on complex analysis. (It has little to do with the length of the proof.)

In any case, who cares whether the "mathematics used are deep"? "Pure" mathematicians do. (Here, I use "pure mathematician" as a shorthand for "mathematicians whose work is not motivated by applications.") They need a philosophy or aesthetic for deciding how important mathematical results are. Since "usefulness" is out, "depth" is a big one. ("Generality" and "elegance" are big ones too.)
Anonymous Says:
Comment #44 February 27th, 2006 at 6:41 pm

"Calling the questioner a "troll" when he/she seems to have made some interesting points"

Calling the questioner a troll after he has just accused his reasoned opposition of being "incapable of having a reasoned discussion about their field without getting defensive and clouding everything with personal insecurities" is most definitely the right thing to do.

Jonathan Vos Post Says:
Comment #45 December 9th, 2006 at 8:18 pm

Complexity Theory is NOT the same as the Theory of Complex Systems. I say that awkwardly, having been Session Chair of 3 sessions at the 6th International Conference on Complex Systems (google: ICCS and NECSI). That was the one where lifetime achievement awards were presented to John Forbes Nash, Jr., and "Renormalization Group" Wilson, … who certainly did Deep work.

Jonathan Vos Post Says:
Comment #46 December 9th, 2006 at 9:39 pm

Oh, and the hot word in PNAS papers about developmental biogenetics is EIGENGENE.
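Scott's dovetailing construction in comment #34 can be sketched in code. This is an editorial illustration, not anything from the thread: the enumeration of "all Turing machines" is replaced by a toy list of candidate strategies, which is the big simplification, but the two load-bearing ideas survive. Interleave all candidates with geometrically growing budgets, and trust nothing until the claimed factors are verified by multiplication.

```python
# Hedged sketch of the comment-#34 algorithm (Levin-style universal search):
# dovetail over candidate programs, and halt as soon as any program's output
# verifiably factors n.  Verification is what makes this sound: we never
# trust a program, we only check its claimed factors by multiplying them.
# A real implementation would enumerate all Turing machines; the toy
# PROGRAMS list below is an assumption made purely so the sketch runs.

def trial_division(n):
    """A candidate 'program': yields None once per unit of work, then factors."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            yield (d, n // d)  # claimed factorization
            return
        d += 1
        yield None
    yield (n, 1)  # no proper factor found

def useless(n):
    """A candidate 'program' that never answers; dovetailing tolerates it."""
    while True:
        yield None

PROGRAMS = [useless, trial_division]  # stand-in for an enumeration of all TMs

def universal_factor(n):
    """Return a nontrivial factorization (a, b) of a composite n."""
    runs = [p(n) for p in PROGRAMS]
    budget = 1
    while True:
        for run in runs:
            for _ in range(budget):
                out = next(run, None)
                if out is not None:
                    a, b = out
                    if 1 < a < n and a * b == n:  # verify before trusting
                        return (a, b)
        budget *= 2  # every program gets a doubled budget next round
```

If P=NP, some machine in the true enumeration factors quickly, so the dovetailer runs in polynomial time with a gigantic constant overhead, which is exactly Scott's "not our department" caveat.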
CRANS Algorithm: Overview (Ryan Lilien)

The core of the CRANS algorithm analyzes the set of rotation differences to identify finite sets of rotations that satisfy the subgroup axioms. Given a set of w cross-rotation peaks (rotations) C = {r_1, r_2, ..., r_w}, where r_i is an element of the group of three-dimensional rotations SO(3), CRANS examines the w^2 rotation differences d_ij, where d_ij = r_i^{-1} r_j and r_i^{-1} is the inverse of r_i (such that r_i^{-1} r_i is the identity). Conceptually, the rotation difference d_ij is the rotation that rotates the model oriented by r_j into the model oriented by r_i (Figure). We can test the NCS consistency of a set of cross-rotation peaks by examining their rotation differences and verifying that they form a finite subgroup of SO(3). In the event of missing rotations, the CRANS algorithm completes each partial set of identified NCS-consistent rotations by generating missing rotations with quaternions. Although rotation differences can be computed using a number of rotation representations, we chose to use quaternions because they have a single compact representation with explicit axis and angle components, are easily composed, are free of singularities, and represent a uniform parameterization of rotation space.
Please send your solutions to:
Valeria Pandelieva
641 Kirkwood Avenue
Ottawa, ON K1Z 5X5

Suppose that
(x^2 + y^2)/(x^2 - y^2) + (x^2 - y^2)/(x^2 + y^2) = k.
Find, in terms of k, the value of the expression
(x^8 + y^8)/(x^8 - y^8) + (x^8 - y^8)/(x^8 + y^8).

Given a triangle ABC with an area of 1. Let n > 1 be a natural number. Suppose that M is a point on the side AB with AB = nAM, N is a point on the side BC with BC = nBN, and Q is a point on the side CA with CA = nCQ. Suppose also that {T} = AN ∩ CM, {R} = BQ ∩ AN and {S} = CM ∩ BQ, where ∩ signifies that the singleton is the intersection of the indicated segments. Find the area of the triangle TRS in terms of n.

(a) Are there four different numbers, not exceeding 10, for which the sum of any three is a prime number?
(b) Are there five different natural numbers such that the sum of every three of them is a prime number?

Suppose that the measure of angle BAC in the triangle ABC is equal to α. A line passing through the vertex A is perpendicular to the angle bisector of ∠BAC and intersects the line BC at the point M. Find the other two angles of the triangle ABC in terms of α, if it is known that BM = BA + AC.

Find a function that satisfies all of the following conditions:
(a) f is defined for every positive integer n;
(b) f takes only positive values;
(c) f(4) = 4;
(d) 1/(f(1)f(2)) + 1/(f(2)f(3)) + … + 1/(f(n)f(n+1)) = f(n)/f(n+1).
{"url":"http://cms.math.ca/Competitions/MOCP/2001/prob_oct.html","timestamp":"2014-04-20T00:40:45Z","content_type":null,"content_length":"23693","record_id":"<urn:uuid:cb95bd3d-daa9-449c-ac38-21ee4eca3a1b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"}