content
stringlengths
86
994k
meta
stringlengths
288
619
Mathematics and Chess Page The latest version of this webpage is being maintained here. I'll update this page at some point when I get more time. For now, please go to the Mathematicians and Chess page. Chess teaches many things, including strategic thinking. Though one might think at first that this type of thinking is unrelated to mathematics, in fact, chess also teaches a type of "calculation" (see Soltis's book [4] for the exact idea). To anyone who thinks doing mathematics and playing chess are unrelated, this page is for you. A paraphrase from the entry under Mathematics and Chess in [5]: In 1893, a Professor Binet (of Stanford-Binet IQ test fame) made a study of the connection between mathematics and chess. After questioning a large number of leading plyers, he discovered that 90% were very good mental calculators. On the other hand, he discovered that although mathematicians are often interested in chess, few become top-class players.... Professor Binet commented that both chess and mathematics have a common direction and the same taste for combinations, abstraction, and precision. One characteristic which was missing from mathematics was the combat, in which two individuals contend for mastery, with all the qualities required of generals in the field of battle. This page contains information on • which mathematicians (which we define as someone who has earned a PhD or equivalent in Mathematics) play(ed) chess at the International Master level or above (also included are those who have an IM or above in chess problem solving or composing), and • how to get papers on mathematical chess problems. Mathematicians who play, or have played, chess: • Adolf Anderssen (1818-1879). Pre World Championships but is regarded as the strongest player in the world between 1859 and 1866. He received a degree (probably not a PhD) in mathematics from Breslau University and taught mathematics at the Friedrichs gymnasium from 1847 to 1879. He was promoted to Professor in 1865 and was given an honorary doctorate by Breslau (for his accomplishments in chess) in 1865. • Robert Coveyou (1915 - 1996) completed an M.S. degree in Mathematics, and joined the Oak Ridge National Laboratory as a research mathematician. He became a recognized expert in pseudo-random number generators. He is known for the quotation "The generation of random numbers is too important to be left to chance," which is based on a title of a paper he wrote. An excellent tournament chess player, he was Tennessee State Champion eight times. • Noam Elkies (1966-), a Professor of Mathematics at Harvard University specializing in number theory, is a study composer and problem solver (ex-world champion). Prof. Elkies, at age 26, became the youngest scholar ever to have attained a tenured professorship at Harvard. One of his endgame studies is mentioned, for example, in the book Technique for the tournament player, by GM Yusupov and IM Dvoretsky, Henry Holt, 1995. He wrote 11 very interesting columns on Endgame Exporations (posted by permission). Some other retrograde chess consructions of his may be found at the Dead Reckoning web site of Andrew Buchanan. See also Professor Elkies's very interesting Chess and Mathematics Seminar 2003, 2004, pages and the mathematical papers on his chess page. • Machgielis (Max) Euwe (1901-1981), World Chess Champion from 1935-1937, President of FIDE (Fédération Internationale des Echecs) from 1970 to 1978, and arbitrator over the turbulent Fischer - Spassky World Championship match in Reykjavik, Iceland in 1972. I don't know as many details of his mathematical career as I'd like. One source gives: PhD (or actually its Dutch equivalent) in Mathematics from Amsterdam University in 1926. Another gives: Doctorate in philosophy in 1923 and taught as a career. Published a paper on the mathematics of chess "Mengentheoretische Betrachtungen uber das Schachspiel". • Ed Formanek (194?-), International Master. Ph.D. Rice University 1970. Currently on the mathematics faculty at Penn State Univ. Works primarily in matrix theory and representation theory. • Charles Kalme (Nov 15, 1939-March 22, 2002), earned his master title in chess at 15, was US Junior champ in 1954, 1955, US Intercollegiate champ in 1957, and drew in his game against Bobby Fischer in the 1960 US championship. In 1960, he also was selected to be on the First Team All-Ivy Men's Soccer team, as well as the US Student Olympiad chess team. (Incidently, it is reported that this team, which included William Lombary on board one, did so well against the Soviets in their match that Boris Spassky, board one on the Soviet team, was denied forieng travel for two years as punishment.) In 1961 graduated 1st in his class at the Moore School of Electrical Engineering, The University of Pennsylvania, in Philadelphia. He also received the Cane award (a leadership award) that year. After getting his PhD from NYU (advisor Lipman Bers) in 1967 he to UC Berkeley for 2 years then to USC for 4-5 years. He published 2 papers in mathematics in this period, "A note on the connectivity of components of Kleinian groups", Trans. Amer. Math. Soc. 137 1969 301--307, and "Remarks on a paper by Lipman Bers", Ann. of Math. (2) 91 1970 601--606. He also translated Siegel and Moser, Lectures on celestial mechanics, Springer-Verlag, New York, 1971, from the German original. He was important in the early stages of computer chess programming. In fact, his picture and annotations of a game were featured in the article "An advice-taking chess computer" which appeared in the June 1973 issue of Scientific American. He was an associate editor at Math Reviews from 1975-1977 and then worked in the computer industry. Later in his life he worked on trying to bring computers to elementary schools in his native Latvia A National Strategy for Bringing Computer Literacy to Latvian Schools . His highest rating was acheived later in his life during a "chess comeback": 2458. Here is his game against Bobby Fischer referred to above: [Event "?"] [Site "New York ch-US"] [Date "1960.??.??"] [Round "3"] [White "Fischer, Robert J"] [Black "Kalme, Charles"] [Result "1/2-1/2"] [NIC ""] [Eco "C92"] [Opening "Ruy Lopez, Closed, Ragozin-Petrosian (Keres) Variation"] 1.e4 e5 2.Nf3 Nc6 3.Bb5 a6 4.Ba4 Nf6 5.O-O Be7 6.Re1 b5 7.Bb3 O-O 8.c3 d6 9.h3 Nd7 10.a4 Nc5 11.Bd5 Bb7 12.axb5 axb5 13.Rxa8 Qxa8 14.d4 Nd7 15.Na3 b4 16.Nc4 exd4 17.cxd4 Nf6 18.Bg5 Qd8 19.Qa4 Qa8 20.Qxa8 Rxa8 21.Bxf6 Bxf6 22.e5 dxe5 23.Ncxe5 Nxe5 24.Bxb7 Nd3 25.Bxa8 Nxe1 26.Be4 b3 27.Nd2 1/2-1/2 • Emanuel Lasker (1868-1941), World Chess Champion from 1894-1921, PhD (or actually its German equivalent) in Mathematics from Erlangen Univ in 1902. Author of the influential paper [1], where the well-known Lasker-Noether Primary Ideal Decomposition Theorem in Commutative Algebra was proven . (See [2] for a statement in the modern terminology. For more information, search "Lasker, Emanuel" in the chess encyclopedia, as well as the links provided there.) • Lev Loshinski (1913-1976) , F.I.D.E. International Grandmaster of Chess Compositions. Taught mathematics (at Moscow State University?). (PhD unknown but considering the reputation of Moscow State University, he may have one.) • A. Jonathan Mestel, grandmaster in over-the-board play and in chess problem solving, is an applied mathematician specializing in fluid mechanics and is the author of numerous research papers. He is on the mathematics faculty of the Imperial College in London. • Walter D. Morris (196?-), International Master. Currently on the mathematics faculty at George Mason Univ in Virginia. • Nick J. Patterson, International Master (?), D. Phil. (from Cambridge Univ.) in 197? in group theory, under Prof. Thompson. Has published several papers in group theory, combinatorics, and the theory of error-correcting codes. For some of his chess games, click here. He was the Irish Chess Champion in 1969. • John Nunn (1955-), Chess Grandmaster, D. Phil. (from Oxford Univ.) in 1978 at the age of 23 (and the youngest undergraduate at Oxford since Cardinal Wolsey). PhD thesis in Algebraic Topology and author of the paper [3] (Search "Nunn" in the chess encyclopedia for more chess information.) • Martin Kreuzer (1962-), CC Grandmaster, is rated over 2600 in correspondence chess (ICCF, as of Jan 2000). His OTB rating is over 2300 according to the chessbase encyclopedia. His specialty is computational commutative algebra and applications. Here is a recent game of his: Kreuzer, M - Stickler, A [Eco "B42"] 1.e4 c5 2.Nf3 e6 3.d4 cxd4 4.Nxd4 a6 5.Bd3 Nc6 6.c3 Nge7 7.0-0 Ng6 8.Be3 Qc7 9.Nxc6 bxc6 10.f4 Be7 11.Qe2 0-0 12.Nd2 d5 13.g3 c5 14.Nf3 Bb7 15.exd5 exd5 16.Rae1 Rfe8 17.f5 Nf8 18.Qf2 Nd7 19.g4 f6 20.g5 fxg5 21.Nxg5 Bf6 22.Bf4 Qc6 23.Re6 Rxe6 24.fxe6 Bxg5 25.Bxg5 d4 26.Qf7+ Kh8 27.Rf3 Qd5 28.exd7 Qxg5+ 29.Rg3 Qe5 30.d8=Q+ Rxd8 31.Qxb7 Rf8 32.Qe4 Qh5 33.Qe2 Qh6 34.cxd4 cxd4 35.Bxa6 Qc1+ 36.Kg2 Qc6+ 37.Rf3 Re8 38.Qf1 Re3 39.Be2 h6 40.Kf2 Re8 41.Bd3 Qd6 42.Kg1 Kg8 43.a3 Qe7 44.b4 Ra8 45.Qc1 Qd7 46.Qf4 1-0 • Chess problem composer Hans-Peter Rehm (1942-), a Professor of Mathematics at Karlsruhe Univ. He has written several papers in mathematics, such as "Prime factorization of integral Cayley octaves", Ann. Fac. Sci. Toulouse Math (1993), but most in differential algebra, his specialty. Some of his problems can be found on the internet, for example: problem set (in German). A collection of his problems has been published as: Hans+Peter+Rehm=Schach Ausgewählte Schachkompositionen & Aufsätze (= selected chess problems and articles), Aachen 1994. Some other possible entries for the above list: • Alexander, Conel Hugh O'Donel (1909-1974), late British chess champion. Alexander may not have been a mathematician but he did mathematical (code and cryptography) work during WWII, as did the famous Soviet chess player David Bronstein (see the book Kahn, Kahn on codes). He was the strongest English player after WWII, until Jonathan Penrose appeared (see below for more on Penrose.) (Search "Alexander" in the chess encyclopedia for more information.) • Christoph Bandelow teaches mathematics at the Ruhr-University Bochum. He specializes in stochastic processes and has written a number of excellent books on the magic cube (or "Rubik's cube") and related puzzles. Some of his chess problems are (by permission) : problem 1, problem 2, problem 3. (More to come.) Prof Bandelow was also a pioneer in computer problem solving, having written (in 1961) the first German computer program to solve chess problems (this program is described in "Schach und Zahl"). • Magdy Amin Assem (195?-1996) specialized in p-adic representation theory and harmonic analysis on p-adic reductive groups. He published several important papers before a ruptured aneurysm tragically took his life. He was IM strength (rated 2379) in 1996. • Prof. Vania Mascioni, also IECG Chairperson (IECG is the Internet Email Chess Group), is rated 2326 by IECG (as of 4-99). He is a professor of Mathematics at the University of Texas at Austin (his area is Functional Analysis and Operator Theory). • Stanislaw Ulam, the famous mathematician and physicist (author of the autobiographical, Adventures of a mathemacian) was a strong chess player. Rating unknown. • Kenneth S. Rogoff, Professor of Economics at Harvard University, is a Grandmaster. He has a PhD in Economics but has published in statistical journals. • Kenneth W. Regan, Professor of Computer Science at the State Univ. of New York Buffalo, is currently rated 2453. His research is in computational complexity, a field of computer science which has a significant mathematical component. • Otto Blathy, who is a very famous many mover problemist, held a doctorate in mathematics from Budapest and Vienna universities at his time. (For a reference, see A.Soltis: Chess to Enjoy. • Canadian grandmaster Duncan Suttles (b.1945 in San Francisco, moved to Vancouver as a child). Suttles studied for though did not (yet anyway) receive a PhD in mathematics. Suttles also has the grandmaster title in correspondence chess. • Problem composer J. G. Mauldon (deceased, formerly a mathematician at Amherst College) has written several papers in mathematics. One of his retro problems can be found on the internet, for example: problem. • Problem composer John D. Beasley has also written several books on the mathematics of games. He is secretary of the British Chess Variant Society. There is some misleading information given either in the literature or on some internet web pages. • Karl Fabel (1905-1975), F.I.D.E. International Master of Chess Compositions. Not a tournament player but an ingenious problem composer. He received a Doctorate in Chemistry and reportedly worked as a mathematician, civil judge, and patents expert. He was, according to his friend Christoph Bandelow, a chemist not a mathematician. Some Fabel problems: problem 1, problem 2. He was also the co-author of the book Schach und Zahl on mathematics and chess and the problem book Rund um das Schachbrett. Publisher: Walter de Gruyter 1955. • Rueben Fine was not a mathematician (however, his son Ben is an active research mathematician who teaches at Fairfield University in Connecticut). Reuben Fine was a psychologist. • GM James Tarjan (a Los Angeles librarian, I'm told) is the brother of the well-known computer scientist (some of his research has been published in mathematical journals) Robert Tarjan. • World chess champion Kasparov is not a mathematician (as far as I know), though he has made contributions to computer science. (There is a well-know mathematician named Kasparov who works in K-theory and C^*-algebras but they are different people.) • Jonathan Penrose (mentioned above - one of the strongest chess players in Britain in the 1950's and 1960's) is the brother of the well-known mathematician and physicist Sir Roger Penrose. 1. Eero Bonsdorff, Dr Karl Fabel, Olavi Riihimaa, Schach und Zahl, unterhaltsame schachmathematik, Walter Rau Verlag, Dusseldorf, 1966 2. Lasker, E. "Zur theorie der moduln und ideale," Math. Ann. 60(1905)20-116 3. Kunz, Introduction to Commutative Algebra and Algebraic Geometry, Birkhauser, Boston, 1985 4. Nunn, J. D. M. "The homotopy types of finite H-spaces," Topology 18 (1979), no. 1, 17--28 5. A. Soltis, The Inner Game of Chess, David McKay Co. Inc, (Random House), New York, 1994 6. A. Sunnucks, The Encyclopedia of Chess, 2nd ed, St Martins Press, New York, 1976 Papers about mathematical problems in chess: I only know of a few sources: • Timothy Chow, "A Short Proof of the Rook Reciprocity Theorem", in volume 3, 1996, of the Electronic Journal of Combinatorics. • Noam Elkies, "On numbers and endgames: Combinatorial game theory in chess endgames", in 1996 "Games of No Chance" = Proceedings of the workshop on combinatorial games held July'94 at MSRI. Available from MSRI Publications -- Volume 29 or Noam Elkies' site. • Noam Elkies and Richard Stanley, "Chess and Mathematics". • Max Euwe, "Mengentheoretische Betrachtungen uber das Schachspiel", Konin. Akad. Weten. (Proc Acad Sciences, Netherlands), vol 32, 1929, 633-642 • Awani Kumar, "Knight's Tours in 3 Dimensions", in The Games and Puzzles Journal The On-line Journal for Mathematical Recreations, Issue 43, January-April 2006. • Richard M. Low and Mark Stamp, "King and Rook Vs. King on a Quarter-Infinite Board", in Integers, volume 6(2006). • Igor Rivin, Ilan Vardi, Paul Zimmermann, "The N-queens problem," American Mathematical Monthly 101 (1994), no. 7, 629-639. • Lewis Benjamin Stiller, "Exploiting symmetries on parallel architecture", PhD thesis, CS Dept, Johns Hopkins Univ. 1995 Closely related is his Games of No Chance paper, "Multilinear Algebra and Chess Endgames". • Mario VELUCCHI's NON-Dominating Queens Problem or math chess problems • Wikipedia's, Eight queens puzzle. • Herbert S. Wilf, "The Problem of the Kings", and Michael Larsen, "The Problem of Kings", both in volume 2, 1995, of the Electronic Journal of Combinatorics. • Papers on odd king tours by D. Joyner and M. Fourte (appeared in the J. of Rec. Math., 2003) and even king tours by M. Kidwell and C. Bailey (in Mathematics Magazine, vol 58, 1985). • Lesson 3 in the chess lessons by Coach Epshteyn at UMBC. • Wikipedia has an artcle on mathematicians who studied chess. Created 10-28-97. Thanks to Christoph Bandelow, Max Burkett, Elaine Griffith, Hannu Lehto, John Kalme, Ewart Shaw, Richard Stanley, Will Traves, Steven Dowd, Z. Kornin, and Noam Elkies for help and corrections on this page. Last updated 2012-11-12. Any comments or additions to suggest? Please email me at:
{"url":"http://www.permutationpuzzles.org/chess/math_chess.html","timestamp":"2014-04-18T02:58:47Z","content_type":null,"content_length":"22208","record_id":"<urn:uuid:77b8817c-1e3b-41a8-a6ed-e38586b1647e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
For any prime $p$, is there $C$ such that if $x\ge C$, then all but one integer among $x+1, x+2, \dots, x+p$ has Greatest Prime Factor $> p$ up vote 1 down vote favorite I apologize if this is a naive question about greatest prime factors (gpf). I was thinking about the sequence of integers where $\mathrm{gpf}(x) \le p$ where $p$ is any prime. Clearly, as $x$ increases, the distance $d$ between an integer where $\mathrm{gpf}(x) \le p$ and $\mathrm{gpf}(x+d) \le p$ increases at a seemingly every increasing rate. For all primes $p$, does there exist an integer $C$ where if $x \ge C$, then there is at most one integer in the sequence $x+1, x+2, \dots, x+p$ has $\mathrm{gpf}(x) \le p$ For example, if $p=2$, $C = 2$ since for any $x \ge 2$, either $x$ or $x+1, \mathrm{gpf}(x) > 2$ nt.number-theory prime-numbers factorization integer-sequences Thanks very much! Could you provide more details on why there are only finitely many such equations for each p? I suspected that this was correct but I was having trouble identifying the reasoning. Thanks! -Larry – Larry Freeman Sep 9 '12 at 17:34 Actually, I think that you answered my question with your mention of the Thue equation. I'll read up on the Thue equation to better understand the details! :-) – Larry Freeman Sep 9 '12 at 17:35 1 As suggested above, there is a theorem by Stormer on consecutive smooth numbers which supports your conjecture. I personally believe that C is O(nlogn). My memory says the equations are quadratic, though, and not cubic. Gerhard "Ask Me About System Design" Paseman, 2012.09.09 – Gerhard Paseman Sep 9 '12 at 18:22 1 Also, I think C=9 for p=3 and C=9801 for p=11. Doubtless OEIS has entries for you. Gerhard "Ask Me About System Design" Paseman, 2012.09.09 – Gerhard Paseman Sep 9 '12 at 18:37 1 Assuming the ABC-conjecture, $C$ can be taken as small as $c_{\epsilon} ( p e^p )^{1 + \epsilon} $ for any $\epsilon >0$. – js21 Sep 10 '12 at 8:27 show 6 more comments 3 Answers active oldest votes This is a summary of Gerhard Paseman's comment. Suppose that $A$ and $B$ are two numbers with $A-B = c$ small such that $A$ and $B$ are only divisible by prime numbers less than or equal to $p$. Then one can write $$A = a x^3, \qquad B = b y^3,$$ where all the prime factors of $a$ and $b$ are less than or equal to $p$, and all the exponents of each prime number is at most $2$ (the higher factors get absorbed into the cube). Note that if $p$ is the $n$th prime number, then there are $3^n$ possible values of $a$ and $b$. The answer to your problem will be positive provided that one can show that each of the ($3^ {2n}$) equations $$a x^3 - b y^3 = c$$ Has only finitely many solutions. Yet a theorem of Thue (1908) guarantees that if $f(x,y)$ is irreducible of degree $\ge 3$, then $f(x,y) = c$ has only finitely many integral solutions. This leaves the reducible cases corresponding to the ratio $[a:b]$ being a perfect cube. Yet these are trivial; after absorbing the coefficients and multiplying through, one is left with up vote 2 an equation down vote accepted $$X^3 - Y^3 = C$$ to solve in integers $X$ and $Y$. Yet this also has finitely many solutions for a trivial reason - the consecutive cubes become further apart. Thue's results are actually effective, one can also prove this result by various other means, like Baker's method. This problem is actually the baby case of the $S$-unit equation. In particular, it ultimately boils down to solving the equation $$A + B = C$$ where $A$, $B$, and $C$ have a fixed set of prime factors dividing $S$ (Take $S$ to be the product of primes less than or equal to the maximum of $p$ and the range corresponding to $c$ in the Thue equations above). In other words, one is looking for a solution in units to the equation $A+B=C$ in the ring $\mathbf{Z}[1/S]$. The methods above can be generalized to applied to similar problems where $\mathbf{Z}[1/S]$ is replaced by the $S$-units in a number field, and one can even increase the number of terms (providing that one is careful only considers primitive solutions, ruling out trivialities like $A-A+B-B=0$.) Thanks very much, Pauline! That really provides the additional details to help me understand Gerhard's explanation! :-) – Larry Freeman Sep 10 '12 at 13:34 add comment I heard a great answer to this question based on the Thue equation. I investigated the Thue equation and there was one point that was not clear to me. It seems to me that there are an infinite number of values that $a$ and $b$ can take. If there are an infinite number of combinations of finite solutions, then there is an infinite number of solutions. Right? So, if I understand it, the Thue equation alone doesn't seem to work. I apologize if I am misunderstanding the classical result there. Here's an argument that seems to work as far as I understand: (1) For any prime $p$, there is a finite number of combinations of primes that are less than $p$. (2) For any of these combinations there exists an integer $x$ such that if a combination is greater than $x$, then at least one of the primes that make up the combination are of a degree greater than $2$. (3) Let $c$ be either the highest of the values $x$ from step (2) or $4p^{4}(3\prod_{p} p^{\frac{1}{2}})^3$ depending on which is higher. (4) There is a finite number of ways that we can pair these different combinations so that we have an equation of the form $ax^3 - by^3 = c$ where $c < p$. (5) If $a \ne b$ and $gcd(x,y)=1$, then using a result from Siegel, there is at most $1$ solution (see Theorem A in the link below). (6) if $a = b$ and $a = c$, then using a result from Michael Bennett, there is at most $1$ solution. Here's the reference: up vote 0 down vote M. A. Bennett, Rational approximation to algebraic numbers of small height : the Diophantine equation $|ax^n-by^n|=1$}, J. Reine Angew. Math. 535 (2001), 1-49 (7) if $a = b$ and $a < c$, then the equation has a form such as: $x^3 - y^3 = \frac{c}{a}$ This is a Thue Equation and we can conclude that there is a finite number of solutions. (8) if $a \ne b$ and $gcd(x,y) > 1$, then it means that both combinations consist of the same prime so we have an equation of the form: $x^m - x^n = c$ Then, as I understand it, we have a Thue equation so we can again assume that there is a finite number solutions. I believe that covers all the possible cases. Since, there are only a finite number of solutions, it follows that there exists an integer $c$ which is greater than all of these solutions and for all $x \ge c$, we have at most $1$ integer in the sequence where $gpc(x+i) \le p$. Apologies for the length of this argument. I'm sure a professional mathematician such as user 631 would be able to state the argument more elegantly. :-) 1 You might prefer Lehmer's rewrite of Stormer's result. Check out the Wikipedia entry on Stormer's Theorem. Gerhard "Ask Me About System Design" Paseman, 2012.09.09 – Gerhard Paseman Sep 10 '12 at 3:55 Clearly your notion of being insulted differs from my notion,and likely differs from commonly accepted notions as well. I think he showed respect by referring to you by number. If that offends you, he might resort to something more oblique. Were I in a situation similar to Larry's, I might insult you further. Gerhard "Honk Honk Back At You" Paseman, 2012.09.09 – Gerhard Paseman Sep 10 '12 at 4:23 Thanks! I just read the link. It looks great: en.wikipedia.org/wiki/Stormer%27s_theorem I'll start reading up on Stormer's Theorem to better understand it! – Larry Freeman Sep 10 '12 at Hi Harpo, I meant no insult. You changed your name two times today. I thought your answer was brilliant. I willl change the user 61 to Harpo Marx immediately. Apologies. -Larry – Larry Freeman Sep 10 '12 at 4:28 I will do as you ask. I will remove your name. I meant no insult. Apologies. I really appreciated your comment about the Thue Equation! That was exactly the guidance I was looking for. :-) – Larry Freeman Sep 10 '12 at 4:40 show 4 more comments I don't know how far Larry went in pursuing this problem, but this touches on a topic I've spent some time on, ie. Lehmer's method. Let $S_j$ be the maximum $S$ for which the pair {$S, S+j$} is $p$-smooth, and let $S_m$ be the maximum of $\{S_1, S_2 \ldots S_p\}$. Also let k = $\pi(p)$, ie. the number of primes $\leq It follows then that the minimal $C$ for which the desired property holds is $C = S_m$. Determining each $S_j$ is not so straight-forward, apart from the cases $j=1, 2$, which are a direct application of Lehmer's method, which provides for the enumeration of all smooth pairs of the form $\{S, S+1\}$, $\{S, S+2\}$, by solving roughly $2^k$ standard Pell equations, ie. $x^2 - Dy^2 = 1$, for $D$ ranging over all combinations of the $k$ primes $\leq p$. Both sets of pairs can be obtained with a single pass. For $3 \leq j \leq p$, however, things are not so simple. Lehmer did not address these cases, and perhaps we can understand why. We can in fact extend Lehmer's method to identify smooth pairs $\{S, S+j\}$, but this requires solving $x^2 - Dy^2 = j^2$, again for all $2^k$ values of $D$. The good news is that these equations can be solved from the $x^2 - Dy^2 = 1$ solutions, so that the number of continued fractions we have to compute is still the same. See John Robertson's article on the LMM method (Lagrange-Matthews-Mollin) at JPR_Pell. up vote 0 Note that there can be multiple solution classes for any $j$. down vote The bad news is that Lehmer's main achievement, by which he is able to reduce the number of Pell equations from $3^k$ to $2^k$, is not applicable for $j \geq 3$. For $j = 1, 2$ he showed that any smooth pair that does not turn up as a fundamental solution $(x_1, y_1)$ will be found at some $(x_m, y_m)$ with $m \leq (p+1)/2$. This is because the $y_n$ values form a Lucas sequence, and so $y_1$ divides all $y_n$. Thus, if $y_1$ isn't smooth, neither will be any other $y_n$. And if $y_1$ is smooth, we only need check a limited number of $y_n$. Sadly, the multiple solutions in any class of solutions to $x^2 - Dy^2 = N$, $(N=j^2)$, do not have these Lucasian properties. So we don't know how many $(x_n, y_n)$ to look at, and we can't assume that $y_1$ not being smooth means that $y_2$ isn't either. We could of course revert to the original Störmer method, where we solve for $D$ being all possible combinations of the $k$ primes to the power $\{ 0, 1, 2 \}$, thus requiring roughly $3^k$ equations to be solved. That's very slow, but guarantees that smooth pairs occur only as fundamental solutions. Alternately, it might well be that $S_1 > S_j$ always, in which case we would avoid all of these complications, solving only the standard equations $x^2 - Dy^2 = 1$. I have not yet done any investigation of this question, but I remember that generally $S_2 < S_1$, so this property can't be ruled out. Finally, I would like to know if Larry looked into the method described above involving $X^3 - Y^3 = C$, and if so, how it works. Correction: $S_2 < S_1$ was only true if $gcd(S, S+2)=1$. Clearly any $S, S+1$ will correspond to smooth pairs $(kS, kS + k)$ for any $k$ you like, so there is no avoiding the complications (wrt the Lehmer method) described above. – Jim White Jan 11 '13 at 2:20 Ah, but then again, we might still get lucky, for perhaps our $S_m$ is in fact always $(pS_1, pS_1 + p). – Jim White Jan 11 '13 at 2:38 I meant $(pS_1, pS_1 + p)$. You can't edit comments! – Jim White Jan 11 '13 at 2:40 Hi Dr. Memory, My interest in primarily in understanding the context behind the Sylvester-Schur Theorem: mathoverflow.net/questions/111823/… For example, I am especially interested in patterns like this: oeis.org/A213253 Cheers, -Larry – Larry Freeman Jan 11 '13 at 21:28 I'm kind of preoccupied with a couple of other questions, so I don't quite know what you mean by "patterns like A213523". This could just be attention-deficit on my part, but can you explain in more detail what you are looking for? – Jim White Jan 12 '13 at 3:12 show 1 more comment Not the answer you're looking for? Browse other questions tagged nt.number-theory prime-numbers factorization integer-sequences or ask your own question.
{"url":"http://mathoverflow.net/questions/106738/for-any-prime-p-is-there-c-such-that-if-x-ge-c-then-all-but-one-integer/118592","timestamp":"2014-04-19T10:07:05Z","content_type":null,"content_length":"87780","record_id":"<urn:uuid:d86b6b12-10e9-4f70-851c-35bf24734328>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Material added 17 December 2006 When Chesspieces Attack Bernd Rennhak reminded me of Erich Friedman's Attacking Chess Pieces Math Magic of the Month. Erich has had many great investigations lately, including Polyline compatibility, building trees, polyform addition, and triangulations. Bernd also did his own analysis. W and H oddity Improvements George Sicherman: Tonight I shrank the H pentahex oddity from 59 to 29 tiles and the W pentahex oddity from 149 to 41. I cheated a little--I used a computer! Hofstadter Butterflies of Bilayer Graphene A sheet of graphene can have fascinating patterns. Expandible Circular Tables Fletcher Capstan Tables have the amazing ability to double in size with a 3 second spin. The videos are well worth a look. Christmas Letters Jed Martinez: The ten four-lettered words below contain four different things associated with Christmas. The first three of these things can be found by taking one letter from each four-lettered word, and reading it from left to right. Circle one letter from each word, to spell out the first thing - this way, you won't make the mistake of reusing said letters, in order to determine what the other two things are. (NOTE: The first three things associated with Christmas each consists of two or three words. It is up to you to figure out how many letters there are in each word.) When you have accomplished this task, there will be ten letters left (one from each word) without a circle. Rearrange these ten letters to spell out the fourth Christmas-related thing (which is a one-word answer). Good luck... and "Merry Christmas"! SONS HALO DONE TAIL GRAY INCH ROIL GAIT HUED SECT Loop the Loop David Millar: I have some sudoku-esque puzzles called 'Loop de Loop' that I am making for my site. I have a few posted at The Griddle and I also have them available through Zoho Sheet, which is a rather cool online spreadsheet application. I'm also branching out and trying other online spreadsheet utilities like Google Docs and Spreadsheets and EditGrid. It's quite interesting to see all of the options there are out there. [Ed - Quite fascinating.] Material added 9 December 2006 Rectilinear Crossing Number Project The Rectilinear Crossing Number Project has made a number of discoveries since 2000. They've resolved the K10 problem, then K11, and then all the way up to K17. K18 is still unresolved. They've also solved K19 and K21, shown below. Those red dots are pts = {{35, 67}, {18, 28}, {63, 50}, {27, 69}, {17, 23}, {87, 66}, {8, 79}, {18, 16}, {69, 53}, {5, 86}, {0, 0}, {94, 66}, {4, 88}, {5, 6}, {115, 69}, {9, 74}, {14, 13}, {108, 67}, {6, 85}, {2, 2}, {111, 68}}. Google books vs Amazon books For looking up info on obscure topics, I have often gone beyond Google to Amazon, which can find quotes inside books. Now, Google Books is available. Both are excellent sources for researching a topic. For "hexecontahedron", for example, Amazon gave 17 books, and Google Books 12 books. Polycube solutions Bob Harris: I've also uploaded a nice rendering of my favorite polycube puzzle-- the 10x10x10 filled with the 166 hexacubes (and a 2x2x1 void in the center of one face). World's Smartest Person Contest Matt Sheppeck: Thanks to you (and Ken Duisenberg) for posting that link to the 2006 World's Smartest Person Challenge. I took first place along with seven others, earning me $62.50 and a T-shirt. Maybe the WSP title gets me a discount on a cup of coffee somewhere? http://www.highiqsociety.org/wsp_highscores.php Eleven people were given a set of tie-breaking questions. When I saw that the tie-breaking questions were relatively easy, I considered submitting one wrong answer in the hopes of being the sole winner of the $200.00 second prize, rather than a small share of the $500 first prize. As it turned out, three people split the second prize, earning them $66.67 each. So who's smarter? Thanks again for your site, always a delight. Ken Nordine Fibonacci The video fibonacci numbers by Ken Nordine was nicely done. I've always been a big Ken Nordine fan, so I'm glad to see he's making videos. Overview of Math A nice overview of mathematics is given on page 2 of "Is the theory of everything merely the ultimate ensemble theory?" by Max Tegmark. After mentioning spidrons last week, I was reminded by no-one less than Daniel Erdély that I should link to the main Spidron page, spidron.hu. Material added 13 November 2006 New Largest Probable Prime Norm((1+I)^1127239+ 1)/5, or (2^1127239+2^563620+1)/5, has 339333 digits, which puts it at the top of the Probable Primes list. Norm((1+I)^n+1)/5 is proven prime for n = 5, 6, 7, 9, 11, 13, 17, 29, 43, 53, 89, 283, 557, 563, 613, 691, 1223, 2731, 5147, 5323, 5479, 9533, 10771, 11257, 11519, and 12583. For n=23081 , 36479, 52567, 52919, 125929, 221891, 235099, 305867, 311027, 333227, 365689, and 1127239, all the probable prime tests work, but no method is known for proving primality. Many congratulations to Borys Jaworski for setting this new record. Andrew Clarke: Here is as a Sudoku type problem. In each 6x6 square all rows and columns contain 1-6 as does each hexomino. See also Numbered Polyiamonds and Numbered Polyominoes. The Popular Science Periodic Table Theo Grey's Periodic Table is now a gorgeous interactive screen at Popsci.com. If you get the latest copy of Popular Science, the Dow ad features a beautiful fold-out poster of all the elements. One of these items is recently in the news -- polonium-210, the active ingredient in a camera brush. Wordplay DVD My longtime friend Will Shortz now has a DVD out: Wordplay. This film about intelligent puzzle people wound up being one of the best reviewed movies of the year. Powerful vector support Geometry Expressions, the most math-intensive vector drawing program, now interfaces directly with Mathematica. Snakes on a Plane The records for the matchstick snake contest are in. Vadim Trofimov is the big winner. I hope to show pictures of most of the solutions soon. Fractal Maze Martin Windischer: On www.mathpuzzle.com I´ve read about the fractal mazes, but they don´t looked very attractive for me. If you insert the original maze in each of the boxes it will be too small in very few steps. So I tried to make another fractal maze with only one limit point. Here is the result. Send Answer. Irfanview 3.99 My favorite free fast art editor, Irfanview, has just come out in an improved version. The Lobster-Snake puzzle George Sicherman: Here are two of the 12 hexiamonds, the Lobster and the Snake. Make two congruent shapes by adjoining the same hexiamond to both. This arose from Erich's latest Math Magic. Erich Friedman: Find a polyomino that can be joined with each of the two pictured hexominoes to get two congruent shapes. Send Answers. Colonel Sicherman also sent me a note that the T and X pentahexes are compatible. Non-unique Town Placements Given six road lengths between 4 towns, can the town placements be discerned? Turns out it's ambiguous. Can 5 points be done ambiguously? Can 4 points be set up so that all distances are integers? Send Answer. Bimagic Square of Primes Christian Boyer has made the first Bimagic square of primes. Under 701, only the primes 2, 3, 523, 641, and 677 are missing. It's a magic square that remains magic if all the terms are squared. Polyspidrons at Kadon Jacques Ferroul's Polyspidrons are now available as a handsome set from Kadon Enterprises. Polyspidrons uses the famous Spidrons of Dániel Erdély. Ivars Petersen wrote an article about the Spidrons, Swirling Seas. I've also made Ivars a permanent link, in the upper right. 120-cell Sculpture Ivars also wrote about a recently dedicated 120-cell scupture. Here's a picture from that event, showing the 120-cell trapping John Conway's arm in the 4th dimension. It was part of a dedication for Coxeter, whose biography, King of Infinite Space, recently came out. Jenn 3D Poor John Conway... when he tried to blow some bubbles after the dedication, they looked like the below. Actually, this figure is from Jenn3d, an excellent free program for playing around with higher polytopes - beautiful stuff. Lorenz and Modular Flows A very tubular and very beautiful set of animations is available at Lorenz and Modular Flows at ams.org, in an article by Etienne Ghys. Material added 29 October 2006 Interview with Martin Gardner Paul Halmos passes More highlights from Games magazine Making Salt the Hard Way Material added 21 October 2006 Games 100 The latest Games Magazine is out, with the Games 100 list of top games. Robert Abbott has a devious logic maze on the cover. I was very pleased to see that the Thinkfun Gordian's Knot was the puzzle of the year - congrats to Frans de Vreugd and George Miller for designing it. Gordian's Knot recently showed up at the end of Numb3rs episode The Mole. Even more amazing -- the Hoffman-Singleton game was selected as one of the 100 top games of the year. I thought this elegant item from mathematics would make a great card game, and many playtesters agreed with me. Column about the game. Hexed Chess Robots Erich Friedman: I've added a number of new puzzle types to Puzzle Palace: Hex turn, Chess attack, Robot mazes, Color strip puzzles, Chess mazes, Unequal length puzzles, Color mazes, and Left right mazes. (wowsers.) Math Pick-up Lines, and N E W S Alan O'Donnell: I thought you might like some of these 'Math pick-up lines' - no warranty provided or implied! eg: My love for you is a monotonically increasing unbounded function, Let's take each other to the limit to see if we converge, and I wish I were a derivative so I could lie tangent to your curves. I've always noticed that American urban architects like their roads to travel N/S and E/W, but did you know they sometimes they get a bit anal about this? (An amazing road design) That image made me shake my head slowly in disbelieving astonishment - thought it might amuse you too... Peter Hugo McClure - Math Artist Peter McClure: Greetings... I am independent artist and maths/geometry/puzzles have been my greatest inspiration see my web-site at: http://www.peterhugomcclure.com. [Ed - 129 fascinating thumbnails of math art, right off the bat] Fractal Forum There are forums on many topics... one I just learned about is the Fractal Forum. What other good forums are out there? Pseudoku and Sudorku Timothy Y. Chow: I've made several new sudoku variants based on objects other than Latin Squares. The first variant is the closest to ordinary Sudoku, and is the one I like best, because it is closely related to an unproved mathematical conjecture of mine, as mentioned in the endnotes. If you have any suggestions about how I could spread the word about this variant, or encourage other puzzle creators to make other instances of it, I would be interested to hear them. [Also very cute is a recent Foxtrot comic. Another puzzle is Hard Corners, by Joseph White] World Puzzle Championship Blog The final results are in for the World Puzzle Championship. The WPF site has more info. Wei-Hwa Huang (link to his excellent puzzle gadgets) came in second. Thomas Snyder (4th place) wrote a WPC blog item about the event. Repeated squares, and the Smartest Person Challenge Ken Duisenberg's latest Puzzle of the Week, Unique 2x2 Squares, is quite nice. He has a link to the 2006 World's Smartest Person Challenge, put on by the High IQ Society. A nice set of puzzles. Material added 11 October 2006 Solving Techniques If some of the puzzles at the WPC seem difficult, Cihan Altay offers an essay of various solving techniques (PDF). Magnetic Tetris If you'd like to cover your refrigerator with Tetris pieces, artlebedev.com has them. Just 278 rubles per set. Patterns in Ramanujan Tau Simon Plouffe's homepage has hundreds of fascinating patterns. The lastest patterns explore the Ramanujan Tau function. W-pentahex Oddity Col. George Sicherman: I finally found a full-symmetry oddity for the W pentahex! Emma Lehmer's 100th birthday Emma Lehmer, a famous mathematician, will soon be celebrating her 100th birthday. People are invited to send best wishes to her mailbox, 1180 Miller Avenue, Berkeley CA 94708. Magic Square of Cubes Christian Boyer: Nobody knows a 4x4 magic square of cubes, using distinct positive integers. But very interesting step with the first known 4x4 semi-magic squares of cubes (semi-magic means non-magic diagonals), by Lee Morgenstern, USA. Who will construct a 4x4 magic square of cubes, with 2 magic diagonals? [Much more news is given at multimagie.com] 16^3 20^3 18^3 192^3 180^3 81^3 90^3 15^3 108^3 135^3 150^3 9^3 2^3 160^3 144^3 24^3 Halloween Themed Squiggly Sudoku Bob Harris: I've added a new batch 43 puzzles with halloween related clues. Some are pretty easy (the first five might be nice for a children's halloween party). The ones at the bottom of the page are probably hard. Color Mazes Erich Friedman: A new type of puzzle for you: Color mazes. Find a path from Start to Finish (moving only horizontally and vertically and never passing through a square more than once) so that you pass through each (non-white) color an equal number of times. The Tangent Circle Problem Dick Hess: I ran across this one recently in the Pi Mu Epsilon Journal and wondered if you'd seen it before. Four planar circles are pairwise externally tangent. Three of them are also tangent to a line, L. If the fourth circle has unit radius what is the distance of its's center from the line? It's amazing to me the distance in question doesn't depend on the relative sizes of the other three circles. Send Answer. Me and Numb3rs The local Gazette-Telegraph wrote an article about Wolfram Research and Numb3rs. The Numb3rs season 2 DVD ($38) just came out, and it's loaded with great stuff. Near the start of episode Dark Matter, Andy Black and Cheryl Heuton graciously mention a story about me on the commentary track. You can still get Numb3rs Season 1 ($40), which curiously has fewer episodes at a higher price. If you prefer digital versions, Numb3rs Amazon Digital offers individual episodes for $2 each. Material added 30 September 2006 Asymmetrical Looney Gear Pentagon Rule broken Sudoku Variations Numb3rs Puzzle Math Factor on iTunes Perfect Sequential Rectangles Material added 17 September 2006 Ultimate Periodic Table A long-term local project is the ultimate periodic table by Theodore Gray, which I've mentioned here many times. At considerable expense, Theo has now made a spectacular illustrated periodic table poster, along with photographs of every element. Many sizes are available at reasonable prices. The large scale views are amazing. New Mersenne Prime A new biggest prime exists. 2^32,582,657 - 1. See the story on MathWorld. Coxeter: The Man Who Saved Geometry A delightful book has just been released by Siobhan Roberts: The King of Infinite Space: Donald Coxeter: the Man Who Saved Geometry. In my latest maa.org column, I mention the love of geometry that Washington and Jefferson had. At the turn of the century, geometry was on it's way out. Largely by himself, Coxeter revived the entire field. A great book, with lots of wonderful math along the way. Lightforce Games Jean-Charles Meyrignac: I just discovered Lightforce. There are a lot of nice puzzles on this site. Numb3rs Season Premiere The first episode of the third season starts Friday, 22 September. Among other things, I was given the task of picking out data points across the United States as a part of the plot. Here are two of them: Colby, Kansas and Milford, Utah. Can you figure out what trait many of the data points have in common? Lots of cool stuff is coming up in the next few episodes. Replacement Puzzles Erich Friedman: I've made a number of replacement puzzles. For example, Rule #1: {2,1,2}-->{1,1}, Rule #2: {2}-->{1,2,2}, Rule #3: {1,1}-->{2} QUESTION Using only these three replacement rules, one at a time, on some consecutive substring, get from {2,2,2} to {1,2,1,1,1} in 13 moves. You never need a string longer than 6 digits. Mathematical Equivoque Ken Suman: I have seen on various sites that you are interested in crosswords and wordplay as well as mathematics. So am I. I have been building a collection of mathematical wordplays that I hope you might find interesting. Coordinates of the Harborth Graph Eberhard Gerbracht: I have prepared a much deeper analysis of the Harborth Graph. I put some more thought to the problem and finally (with the help of MATHEMATICA) was able to produce minimal polynomials of degree 22 which completely describe the coordinates of the vertices of the Harborth graph. Thus now the problem is not horrifying anymore, but has become quite nice (at least in my opinion). Thus I found it worthy of a more thorough exposition. Non-crossing knight tour Alexander Fischer: Hello dear friends of mathematical recreation. Here is another improvement concerning the longest noncrossing leaper tour on a 14*14 chess board, it's of 135 steps length. By board extension some other records can possibly be improved. [Alexander also found 163 moves on the 16x16.] Melbourne, City of Math My latest maa.org column is about the spectacular math architecture in Melbourne. Christian Boyer pointed out some other unusual buildings. Also, A-R-M has more math buildings in the works. Rolling Block Ramp Cihan Altay: A red block, with a length of 39 units, is standing next to another block which supports a ramp that doesn't move. Roll the red block around the board, without exceeding the rectangular border, so that it ends up standing upright on the supporting block. A move is to roll the red block using one of its edges (touching the ground) as the axis of rotation. There is a chasm on the board on which the red block can not stand upright, but it may lie down. What is the minimum number of moves to achieve the goal, following these rules? [A simple solution is flop the block S, then to roll W a lot, N at the proper spot.... but that is not the shortest path.] Material added 27 August 2006 New Mersenne Prime Anisohedral Tiling Database Sudoku Variations New Hanayama Puzzles Survo Puzzles Poincare Conjecture Proved Clickmazes Update Combinatorial Object Server Davis Megamaze
{"url":"http://www.mathpuzzle.com/17Dec06.html","timestamp":"2014-04-20T18:24:09Z","content_type":null,"content_length":"61203","record_id":"<urn:uuid:95590ffd-4e18-4168-ae24-f42261ea8c0b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
limit = e Say that this limit exists and it equals A. Then $=\frac{1}{t}\ln(1+t)=\ln(A)$ The LHS is also $\frac{\ln(1+t)}{t}$ and when evaluating at t=0 you get 0/0, which is indeterminate and L'Hopitals can be used. So $\ln(A)=\frac{\frac{1}{1+t}}{1}$ Try the limit again, and use algebra to solve for A again. edit: This is in the precalc section, so forgive me for using calculus. Yes that would appear so after looking some more :) Thank you both so much, I understand it now. :) Alternate solution: recall that $e^{\ln X} = X$ for any $X > 0$. so that: $\lim_{t \to 0}(1 + t)^{\frac 1t} = \lim_{t \to 0}\text{exp} \left( \frac 1t \ln (1 + t) \right)$ $= \text{exp} \left( \lim_ {t \to 0} \frac {\ln (1 + t)}t \right)$ now since $\lim_{t \to 0} \frac {\ln (1 + t)}t = 1$ (By L'Hopital's rule, or power series or ...) we have that the limit is $\text{exp}(1) = e$ This is very similar to Jameson's method, both of these methods you should consider when taking the limit of an expression with the variable in the power. That is, let $A$ and $B$ be expressions in a single variable in which the variable appears, then we can usually find various limits that give indeterminate forms by doing the following: The method Jameson used: Say $y = A^B$ Then $\ln y = B \ln A$ so that $\lim \ln y = \lim B \ln A$ and so $\lim y = e^{\lim B \ln A}$ by taking the anti-log of both sides. The method I used: simply write $y = A^B$ as $e^{\ln A^B} = e^{B \ln A}$, and take the limit of both sides. note that $\lim e^X = e^{\lim X}$ Thanks, but could you explain how $\lim e^X=e^{\lim X}$ ? If $\lim_{x \to a} f(x) = L$ and $g$ is continuous on $L$, then $\lim_{x \to a} g(f(x)) = g\left(\lim_{x \to a} f(x)\right) = g(L)$ the rigorous theory might get a bit complicated, but it is somewhat similar to the fact that we can factor constants out side of limits. since the limit deals with X and not e, taking the limit of e to some power really only affects the power, since anything that is not a variable in X is not affected by the limit. EDIT: Chop Suey was a bit more rigorous :p. Thanks, I got it now. ;) I need to have a good look at limits again, as I haven't looked at them for ages and have forgotten alot.
{"url":"http://mathhelpforum.com/calculus/63898-limit-e-print.html","timestamp":"2014-04-17T03:00:26Z","content_type":null,"content_length":"14977","record_id":"<urn:uuid:ae0ff62b-c236-484a-a62a-60b535fd798f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
digitalmars.D - About 0^^0 Bill Baxter <wbaxter gmail.com> Found this link about 0^^0: I think this explains pretty well why Wolfram is justified in saying 0^^0 is indeterminate, but a PL like D is perfectly justified in saying it's 1. In particular the article asserts: "Consensus has recently been built around setting the value of 0^0 = 1" Dec 09 2009 On 12/9/2009 11:50 AM, Bill Baxter wrote: Found this link about 0^^0: I think this explains pretty well why Wolfram is justified in saying 0^^0 is indeterminate, but a PL like D is perfectly justified in saying it's 1. In particular the article asserts: "Consensus has recently been built around setting the value of 0^0 = 1" Wikipedia also has a section discussing this: http://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_zero_power Of particular interest may be the list of particular languages, programs and calculators that treat it each way: http://en.wikipedia.org/wiki/Exponentiation#Treatment_in_programming_languages.2C_symbolic_algebra_systems.2C_and_calculators Janzert Dec 09 2009 Bill Baxter wrote: Found this link about 0^^0: I think this explains pretty well why Wolfram is justified in saying 0^^0 is indeterminate, but a PL like D is perfectly justified in saying it's 1. In particular the article asserts: "Consensus has recently been built around setting the value of 0^0 = 1" Yeah. It's driven by pragmatism. Setting 0^^0 = 1 is highly useful, especially for the binomial theorem (Knuth says "it *has* to be 1"!) There are a few contexts where setting 0^^0 = 1 is problematic. But AFAIK none of them are relevant for int^^int. And pow() already sets 0.0^^0.0 = 1.0. So the decision has already been made. Since Mathematica has such an emphasis on symbolic algebra it's not as clear for them. But it's still interesting that Mathematica makes x^^0 == 1, regardless of the value of x, yet makes 0^^0 undefined. Dec 10 2009
{"url":"http://www.digitalmars.com/d/archives/digitalmars/D/About_0_0_103322.html","timestamp":"2014-04-19T23:37:08Z","content_type":null,"content_length":"12781","record_id":"<urn:uuid:15356f13-a1bb-456e-abdd-b217008213d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Let N be a positive integer. Show that if a_n=b_n for n >= N, then Sum(a_n) and Sum(b_n) either both converge, or both diverge. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f52fbb3e4b019d0ebb0775c","timestamp":"2014-04-16T22:37:09Z","content_type":null,"content_length":"46842","record_id":"<urn:uuid:7a3f93a3-4265-4c77-8e03-0be40e578c23>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Falls, NJ Precalculus Tutor Find a Little Falls, NJ Precalculus Tutor I have taught Mathematics in courses that include basic skills (arithmetic and algebra), probability and statistics, and the full calculus sequence. My passion for mathematics and teaching has allowed me to develop a highly intuitive and flexible approach to instruction, which has typically garnere... 7 Subjects: including precalculus, calculus, geometry, algebra 1 ...I have been helping students/adults even while in high school. A matter of fact, I self taught myself most of the topics before I started high school. I was even employed at a company while in high because of my math skills I excelled at this level also. 7 Subjects: including precalculus, calculus, algebra 2, trigonometry ...For students whose goal is to learn particular subjects, I make sure that the student understands the basics prior to delving into the details. In a nutshell, I provide tutoring based on the student's need. Thank you for your time reading this profile! 15 Subjects: including precalculus, chemistry, calculus, geometry I recently completed a Master's degree in Education at the Concordia University in Curriculum and Instruction. I have over 10 years experience in tutoring, and I am a mentor to a lovely group of youth aged 4 through 19. I have experience in tutoring Mathematics, Science, and physics. 21 Subjects: including precalculus, English, reading, geometry ...This can be held as one long 1.5 hr session, or it can be split up into two sessions if need be.During high school for approximately 2 years, I had been tutoring as a volunteer for grades 1-5 in math, science and English in a Hartford, Connecticut public elementary school. After graduation, I ha... 44 Subjects: including precalculus, reading, Spanish, French Related Little Falls, NJ Tutors Little Falls, NJ Accounting Tutors Little Falls, NJ ACT Tutors Little Falls, NJ Algebra Tutors Little Falls, NJ Algebra 2 Tutors Little Falls, NJ Calculus Tutors Little Falls, NJ Geometry Tutors Little Falls, NJ Math Tutors Little Falls, NJ Prealgebra Tutors Little Falls, NJ Precalculus Tutors Little Falls, NJ SAT Tutors Little Falls, NJ SAT Math Tutors Little Falls, NJ Science Tutors Little Falls, NJ Statistics Tutors Little Falls, NJ Trigonometry Tutors Nearby Cities With precalculus Tutor Cedar Grove, NJ precalculus Tutors Fair Lawn precalculus Tutors Fairfield, NJ precalculus Tutors Fairlawn, NJ precalculus Tutors Hawthorne, NJ precalculus Tutors Lincoln Park, NJ precalculus Tutors Lyndhurst, NJ precalculus Tutors North Caldwell, NJ precalculus Tutors Nutley precalculus Tutors Paterson, NJ precalculus Tutors Singac, NJ precalculus Tutors Totowa precalculus Tutors Verona, NJ precalculus Tutors Wayne, NJ precalculus Tutors Woodland Park, NJ precalculus Tutors
{"url":"http://www.purplemath.com/little_falls_nj_precalculus_tutors.php","timestamp":"2014-04-20T06:39:04Z","content_type":null,"content_length":"24312","record_id":"<urn:uuid:e68adf86-9cf2-4eba-931b-cb2cbb81f965>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
The transfer of statistical equilibrium from physics to economics Parrinello, Sergio and Fujimoto, Takao (1995): The transfer of statistical equilibrium from physics to economics. Download (1788Kb) | Preview Two applications of the concept of statistical equilibrium, taken from statistical mechanics, are compared: a simple model of a pure exchange economy, constructed as an alternative to a walrasian exchange equilibrium, and a simple model of an industry, in which statistical equilibrium is used as a complement to the classical long period equilibrium. The postulate of equal probability of all possible microstates is critically re-examined. Equal probabilities are deduced as a steady state of linear and non-linear Markov chains. S. Parrinello wrote the first draft, presented at the “Conference Growth, Unemployment and Distribution: alternative approaches”, (New School for Social Research, New York, March 1995) and published as a Working Paper of the Dipartimento di Economia Pubblica, Università "La Sapienza", Roma (July, 1995). Later T. Fujimoto added section 4 together with Appendix II. Abridged Italian version: S. Parrinello, "Equilibri Statistici e Nuovi Microfondamenti della Macroeconomia", in Incertezza, Moneta, Aspettative ed Equilibrio: saggi in onore di Fausto Vicarelli, a cura di Claudio Gnesutta, Il Mulino, Bologna, 1996. Item Type: MPRA Paper Original Title: The transfer of statistical equilibrium from physics to economics Language: English Keywords: Statistical equilibrium; thermodynamics; industrial economics C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C16 - Specific Distributions Subjects: A - General Economics and Teaching > A1 - General Economics > A12 - Relation of Economics to Other Disciplines B - History of Economic Thought, Methodology, and Heterodox Approaches > B1 - History of Economic Thought through 1925 > B16 - Quantitative and Mathematical Item ID: 30830 Depositing Sergio Parrinello Date Deposited: 10. May 2011 15:21 Last Modified: 18. Feb 2013 02:49 Brown, A.F. (1967) Statistical Physics, Edinburgh at the University Press. Champernowne, D.G. (1953) "A Model of Income Distribution", Economic Journal 63, June, 318-51. Farjoun, E. and M. Machover (1983), Laws of Chaos, a probabilistic approach to political economy ,Verso, London. Fast, J.D. (1970), Entropy, MacMillan, Philips Technical Library,1970. Feller, W. (1970), Introduction to Probability Theory and its Applications, Vol. I, 3rd Edition, Wiley, N.Y.. Foley, D. (1991) Minimum Entropy Exchange Equilibrium, Working paper 92-02,Columbia University. Foley, D. (1994) "A Statistical Equilibrium Theory of Markets”, Journal of Economic Theory, April 1994. Foley, D. (1996) “Statistical equilibrium in a simple labor market”. Metroeconomica, 47(2):125–147, June 1996. References: Fujimoto, T. and U. Krause (1985) "Strong Ergodicity for Strictly Increasing Nonlinear Operators" , Linear Algebra and its Applications, Vol.7l. Fujimoto, T. and U. Krause (1994) "Stable Inhomogeneous Iterations of Nonlinear Positive Operators on Banach Spaces", SIAM Journal on Mathematical Analysis, Vol.25 (4), July, Gibrat, R. 1931. Les inegalités économiques. Paris: Recueil Sirey. Hollinger, H. B. and M. Z. Zenzen (1985) The Nature of Ineversibility, D.Reidel Publ. Co.,'Dordrecht. Newman, P. and J.N. Wolf (1961) "A Model for the Long-Run Theory of Value", Review of Economic Studies,196l, 29, pp.5l{1. Parrinello, S. (1990) "Some Reflexions on Classical Equilihhru Elcpectations and Random Dishrbances", Political Economy,Vol.6, No l-2. Seneta, E. (1973) Non-negatlve Matrices: an inîroduction to theory and applications,G. Allen & Unwin Simon" H. and C. Bonid (1958) "The Size Distibution ofBnsiness firms" Amerìcan Economic Review, 48, September, 607-17. Steindl, J. (1962) Random Processes and the Growth of Firms; London: Griffith. URI: http://mpra.ub.uni-muenchen.de/id/eprint/30830
{"url":"http://mpra.ub.uni-muenchen.de/30830/","timestamp":"2014-04-16T19:43:09Z","content_type":null,"content_length":"23169","record_id":"<urn:uuid:2caf7acf-4668-4b05-8ea6-bf17a41d3287>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: algebraic question Replies: 3 Last Post: May 3, 2005 10:46 PM Messages: [ Previous | Next ] Re: algebraic question Posted: May 2, 2005 4:24 PM Having problems understanding this problem. Please if you can help me solve Thank you Jessica a^2 - 24a + 144
{"url":"http://mathforum.org/kb/message.jspa?messageID=3752403","timestamp":"2014-04-17T10:52:58Z","content_type":null,"content_length":"18195","record_id":"<urn:uuid:cdb26a73-c085-4f2d-b05d-695ff6d91c5a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Glen Burnie Math Tutor Find a Glen Burnie Math Tutor ...Whether the student is gifted, preparing for a standardized test, or struggling, I believe the tutor experience should be customized to be effective. I have spent most of my time teaching biology, economics and finance. However I have also taught speech, history, and literature. 45 Subjects: including ACT Math, economics, accounting, English ...I play a variety of instruments (although none of them, beside guitar, well enough to tutor in) and can definitely help with things like ear-training and sight-singing/reading, two of my strongest points. I also have a passion for social studies (and am a sociology major alongside majoring in in... 36 Subjects: including finite math, trigonometry, ACT Math, Spanish ...I teach students to: (1) consider the audience; (2)develop a theme and supporting ideas; (3) enrich the content with memorable stories, facts, and other materials that engage the audience; and (4) practice a comfortable delivery. I have taught grades 4-12. Two things help a child feel confident and prepared to do his or her best on the HSPT and future standardized tests. 32 Subjects: including algebra 1, algebra 2, biology, chemistry ...My students and I loved learning and discussing new information on the history of music through listening pieces by many composers. I began playing clarinet in 5th grade and continued to play throughout high school and college. I was the principal clarinet for the University Concert Band and second chair clarinet for the University Wind Ensemble while attending Radford University. 6 Subjects: including algebra 1, elementary (k-6th), music theory, general music ...I am a recent college graduate looking to tutor students while a pursuing a Standard Professional Teaching License in the state of Maryland. It is good practice before I receive a classroom of high school students in the fall! I majored in Economics and minored in Spanish. 74 Subjects: including algebra 2, biology, calculus, chemistry
{"url":"http://www.purplemath.com/Glen_Burnie_Math_tutors.php","timestamp":"2014-04-21T12:33:22Z","content_type":null,"content_length":"23931","record_id":"<urn:uuid:0473d0ee-9b69-452f-bb04-7defc1bc227f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Tutors Sacramento, CA 95822 Math Tutor with a Math Degree I am a graduate from Santa Clara University, where I earned my Bachelors of Science in . Seeing as I was raised in a family of educators, it is only fitting that I, too, strive to be a teacher in the coming years. As an educator, it is my goal to attend... Offering 8 subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/Sacramento_CA_mathematics_tutors.aspx","timestamp":"2014-04-20T15:04:44Z","content_type":null,"content_length":"61699","record_id":"<urn:uuid:82a945bb-3877-4964-a2bc-9c46831a4e76>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
geometric progression problem October 21st 2008, 04:30 AM #1 Oct 2008 Perth, Australia (i.e desert land) geometric progression problem here's the question I need a hand with Find the first three terms of a geometric progression where all terms are positive, S4 (the sum to four terms) = 21 2/3 and the sum to infinity = 27 thanks in advance, all help is much appreciated Find the first three terms of a geometric progression where all terms are positive, S4 (the sum to four terms) = 21 2/3 and the sum to infinity = 27 The formulas used for GP Sn=a(1-(r^n))/(1-r) for n terms...(2) and for infinite terms Si=a/(1-r) for 0<r<1.......(1) a=1st term r=constant ratio use it in above put (1) in (2) for n= 4 to get r and thus a further use (2) to find the terms reqd Hello, listeningintently! Find the first three terms of a geometric progression where all terms are positive, $S_4 \:=\:\frac{65}{3}$, and $S_{\infty} = 27$ You're expected to know these formulas: . . Sum of the first $n$ terms: . $S_n \;=\;a\frac{1-r^n}{1-r}$ . . Sum to infinity: . $S_{\infty} \:=\:\frac{a}{1-r}$ The sum of the first four terms is $\tfrac{65}{3}\!:\quad a\frac{1-r^4}{1-r} \:=\:\frac{65}{3}$ .[1] The sum to infinity is 27: . $\frac{a}{1-r} \:=\:27 \quad\Rightarrow\quad a \:=\:27(1-r)$ .[2] Substitute [2] into [1]: . $27(1-r)\cdot\frac{1-r^4}{1-r} \;=\;\frac{65}{3} \quad\Rightarrow\quad 27(1-r^4) \:=\:\frac{65}{3}$ . . $1 - r^4 \:=\:\frac{65}{81} \quad\Rightarrow\quad r^4 \:=\:\frac{16}{81}\quad\Rightarrow\quad r \:=\:\sqrt[4]{\frac{16}{81}} \quad\Rightarrow\quad \boxed{r \:=\:\tfrac{2}{3}}$ Substitute into [2]: . $a \:=\:27\left(1-\tfrac{2}{3}\right) \:=\:9 \quad\Rightarrow\quad \boxed{a \:=\:9 }$ Therefore, the first three terms are: . $a, \,ar,\, ar^2 \;=\;9,\:6,\:4$ October 21st 2008, 05:04 AM #2 October 21st 2008, 05:07 AM #3 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/algebra/54895-geometric-progression-problem.html","timestamp":"2014-04-21T02:56:29Z","content_type":null,"content_length":"40413","record_id":"<urn:uuid:8973d5c2-3c25-4112-a7dc-ff508c4bb98b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Lobachevskii Journal of Mathematics, Volume XV F.G. Avkhadiev and K.-J. Wirths Concave schlicht functions with bounded opening angle at infinity Md. Azizul Baten On the smoothness of solutions of linear-quadratic regulator for degenerate diffusions Mohammed Benalili On a class of non linear differential operators of first order with singular point Renat N.Gumerov On the existence of means on solenoids Konstantin B. Igudesman Dynamics of finite-multivalued transformations Per K. Jakobsen and Valentin V. Lychagin Quantizations in a category of relations Cathrine V. Jensen Linear ODEs and D-modules, solving and decomposing equations using symmetry methods. Vitali Rechnoi Existence Theorems for Commutative Diagrams Vadim V. Shurygin, junior Poisson structures on Weil bundles Oleg Zubelevich On regularity of stationary solutions to the Navier-Stokes equation in 3-D torus
{"url":"http://www.emis.de/journals/LJM/content17.htm","timestamp":"2014-04-20T16:24:02Z","content_type":null,"content_length":"3068","record_id":"<urn:uuid:92fc6837-3954-4011-9190-73ae888d4f4b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi Day Fun Facts! “Now go on, boy, and pay attention. Because if you do, someday, you may achieve something that we Simpsons have dreamed about for generations: you may outsmart someone!” -Homer Simpson Today, March 14th, is known tongue-in-cheek as Pi Day here in the United States, as 3.14 (we write the month first) are the first three well-known digits to the famed number, π. As you know, it’s the ratio of a perfect circle’s circumference to its diameter. It’s also very, very, very hard to calculate exactly, because it’s impossible to represent π as a fraction. (You may remember that’s part of the definition of an irrational number.) But that doesn’t mean we haven’t tried! The easiest way to try is to either inscribe or circumscribe a regular polygon around a circle of radius 1, and calculate the polygon’s area. The more sides you make, the closer you’ll get. Archimedes, who discovered the fraction 22/7 (which is why Pi Day is July 22 in Europe), took the equivalent of a 96-sided polygon to do this, and found that π was between 220/70 and 224/71, which is not bad for two thousand years ago! But it’s hardly the most impressive approximation for π from back then. That honor goes to the Chinese mathematician, Zu Chongzhi. He discovered — in the 5^th Century — the approximation Milü, which is 355/113. Which is equal to, for those of you at home, 3.1415929… meaning you have to go to the eighth digit to see the difference between this number and π. In fact, if we look at the best fractional approximations of π… we wouldn’t find a better one until 52163/16604! (Exclamation point, not factorial!) That was the world’s best approximation for π for something like 900 years, until this guy came along. Pretty But what if you wanted to calculate π, but wanted to do as little math as possible? No geometry, just basic counting and four-function mathematics? Well, if you can play darts, you can do it! It will only get you to π very slowly, but throwing darts (randomly) at a circle with a square of area equal to the circle’s radius will allow you to calculate π! How so? Count the darts that land in the circle, divide by the number of darts that land in the square, and that’s how you calculate π. (For those of you who write a computer program that can do this, congratulations, you’ve just written your first monte carlo simulation!) But let’s say you wanted to be more efficient, but you wanted to get to π with arbitrary accuracy, given enough time. Have I got a fun method for you: you can represent it as a continued fraction, and the farther you continue it, the more accurate you’ll get! For example, here’s the results from the first few terms ; not bad! Pi Day is also a special day for anyone interested in astronomy and space! Four famous astronomy and space heroes have their birthday on Pi Day; can you name them all from their pictures? (Okay, okay, one of them is easy!) As far as the pies go, I’m still no good at making pie crust, but I do have a special treat that I can make, with a circumference and a diameter and everything. Yes, it’s a Leche Flan! Hope your day is as sweet as they come, hope that you enjoyed all the fun facts about pi, and if you’re up late over the next couple of nights, enjoy the Pi Day miracle of the Jupiter-Venus conjunction in the night sky! Happy Pi Day! (And your birthday boys are, from L-R, Albert Einstein, Apollo 8 Commander Frank Borman, Astronomer Giovanni Schiaparelli, and last-man-on-the-Moon Gene Cernan.) 1. #1 HP March 14, 2012 A couple π-related trivia bits I picked up recently from watching BBC “history of maths [sic]” vids on YouTube. Apparently, the Sumerians worked it out to four digits a millenium or so before Archimedes — assuming that the documentarians are telling me the truth about cuneiform numbers. They showed a clay tablet with angles and arcs and everything! Also apparently (because I saw this in a doco and don’t have handy links), there was a student of Pythagoras working on π, and he confided to a classmate that he didn’t think there was an integer ratio that would exactly represent π. Because Pythagoras had mathematics all confused with religion, and whole-number ratios were the basis of Pythagoreanism, said student was accused of blasphemy and thrown off a cliff and drowned. There’s a reason that they’re called irrational numbers; it’s not always pretty. 2. #2 Childermass March 15, 2012 Three of them are easy. Einstein plus two astronauts with visible name tags. The last one is not hard either if you notice a famous astronomer listed in Wikipedia’s list of people born on March 3. #3 Ole Phat Stu March 15, 2012 Just FYI : Pi is the ratio of the circumference of any convex curve of constant diameter to that diameter. Usually a circle is stated, but this more precise definition includes e.g. Wankel rotors, and others of that ilk with more than the Wankel’s three lobes 4. #4 nanonaren March 15, 2012 Pi joke: Your mama is so fat that her BMI number is Pi. 5. #5 Patrick Dennis March 15, 2012 An easier method of determining pi would be to legislate it, as the Indiana legislature supposedly attempted to do in 1897. More sensible is the (unfortunately, probably doomed) movement to declare the ubiquitous “circle constant” to be the ratio of a circle’s circumference to its radius, rather than to its diameter. That number, equal to 2*pi, has been given the name tau. The rationale is nicely laid out here: http://tauday.com/tau-manifesto , and lies mainly on the fact that a lot of equations would make more intuitive sense if one trip around a circle equaled one “circle unit” (or tau radian) instead of two “pi radians.” More often than not, when pi shows up in an equation, it is as the term, 2*pi. Supposed exceptions, as in Euler’s equation and the area of a circle, are shown in the manifesto cited above to be more consistent with the tau formulation than with pi. 6. #6 Greg23 March 15, 2012 In the U.S. pi day will be especially significant in 2015. or 2016 (due to rounding) 7. #7 cope March 15, 2012 As always, an interesting, stimulating and informative post, thank you. I would just like to add a link to a look of the use of dropped toothpicks to approximate pi HERE. I’ve always wanted to incorporate this activity into my classes but have never really figured a pertinent way to do so with my earth/space science and astronomy students. Maybe some day, just for fun… 8. #8 Carlos Lopes March 15, 2012 Infinite recursion in “Exclamation point, not factorial!”. 9. #9 Anonymosity March 16, 2012 I find it interesting how many people obsess over the number of digits they can memorize or utilize. 11 significant figures will allow you to calculate the circumference of any circle that will fit inside the earth to within a millimeter’s tolerance. Just for fun, I figured out that 63 sig figs will let you measure the circumference of a circle with the radius of the observable universe to within a Planck length. 10. #10 Jon Mitchell March 16, 2012 It it true that as one travels closer to the speed of light, the value of pi changes? 11. #11 ashoka March 16, 2012 it is really interesting, i enjoyed lot,thank you. 12. #12 Joffan March 16, 2012 No, it’s the flavor of pi that changes near the speed of light. It becomes more… racy. 13. #13 NSherrard March 16, 2012 To be fair to Archimedes, Zu Chongzhi was not “back then,” he lived 750 years later. That’s not far off the 900 years between Chongzhi and Madhava. 14. #14 sean t March 19, 2012 Actually, the most significant pi days were in 1592 (or 1593 if you round). Wonder if anyone noticed. 15. #15 sean t March 19, 2012 Jon Mitchell, Certainly the value of pi does not change when travelling near the speed of light. There would be no length contraction observed in a circle in a comoving reference frame. Pi would therefore have its normal value. In an arbitrary reference frame, length contraction in the direction of motion would render any circle into an ellipse whose eccentricity would depend on the relative speed of the observer’s and the figure’s reference frames. What you may be thinking of is in general relativity, there are effects that could lead to observation of a circle which has a circumference to diameter ratio different than pi. This would be the direct result of a change from Euclidean to non-Euclidean curved geometry, however. Since pi is defined for Euclidean geometry, I would think a good argument could be made that this really is not a change in the value of pi, but rather just an alternate geometry. Disclaimer: I know just enough about physics and mathematics to get myself in trouble, so I would appreciate anyone who knows more correcting any inaccuracies in my post. 16. #16 Stephen March 28, 2012 Ok, 3.14. But if “3″ is the month, then .14 of a month gets you to March 4. Computer heads use base 8, and pi turns out to be 3.11 in octal. They (we) also use base 16, and pi turns out to be 3.24 – handy in case you missed it. Tau Day is 6.28. Since that’s 2 * pi, you get to eat twice as much. So March 4, 11, 14, 24, June 28, July 22. Lots of pi to eat. 17. #17 joshua atkins 136 eagen carkl March 14, 2013 i want to no who ivnent the pi number
{"url":"http://scienceblogs.com/startswithabang/2012/03/14/pi-day-fun-facts/","timestamp":"2014-04-19T15:05:30Z","content_type":null,"content_length":"83520","record_id":"<urn:uuid:acb447c5-ad27-497f-9c6f-e3de426eef45>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm of the Week: Minimum Spanning Tree Here’s a classical task on graphs. We have a group of cities and we must wire them to provide them all with electricity. Out of all possible connections we can make, which one is using minimum amount of wire. To wire N cities, it’s clear that, you need to use at least N-1 wires connecting a pair of cities. The problem is that sometimes you have more than one choice to do it. Even for small number of cities there must be more than one solution as shown on the image bellow. Here we can wire these four nodes in several ways, but the question is, which one is the best one. By the way defining the term “best one” is also tricky. Most often this means which uses least wire, but it can be anything else depending on the circumstances. As we talk on weighted graphs we can generally speak of a minimum weight solution through all the vertices of the graph. By the way there might be more the one equally optimal (minimal) solutions. Obviously we must choose those edges that are enough to connect all the vertices of the graph and whose sum of weights is minimal. Since we can’t have cycles in our final solution it must form a tree. Thus we’re speaking on a minimum weight spanning tree, as the tree spans over the whole graph. Does each connected and weighted graph have a minimum spanning tree? The answer is yes! By removing the cycles from the graph G we get a spanning tree, since it’s connected. From all possible spanning trees one or more are minimal. If w(u, v) is the weight of the edge (u, v), we can speak of weight of any spanning tree T – w(T) which is the sum of all the edges forming that tree. Thus the weight of the minimum spanning tree is less than the weight of whatever other spanning tree of G. After we’re sure that there is at least one minimum spanning tree for all connected and weighted graphs we only need to find it somehow. We can go with an incremental approach. At the end we’ll have the minimum spanning tree (MST), but before that on each step of our algorithm we’ll have a sub-set of this final tree, which will grow and grow until it becomes the real MST. This subset of edges we’ll keep in one additional set A. So far we know that on each step we have a subset of the final MST, but first we need to answer a couple of questions. How do we start? Well, we’ll start with the empty set of edges. Clearly the empty set is a subset of any other set, thus it will be also a subset of the MST. How do we grow the tree? Another question we must answer is how to grow the tree. Since we have a MST sub-set (A) on each step how do we add an edge to this set in order to get another (bigger than the previous one) subset of edges, which will be again a subset of the minimum spanning tree? Clearly we must make a decision which edge to add to the growing subset and this is the tricky part of this algorithm. Chose the lowest weight edge! To find the minimum spanning tree on each step we must get the lowest weighted edge that connects our subset (A) with the rest of the vertices. However can we be sure that by choosing the less weighted edge we’ll get the MST? Well, let’s assume that isn’t right in order to prove that wrong! OK so on some step of our growing sub-tree we don’t get the lightest edge (u, v), because we somehow doubt this rule, and we get another edge – let’s say (x, y). Mind that w(x, y) >= w(u, v). Thus our final MST will contain somewhere in its set of edges the edge (x, y), but the weight of MST w(T) is minimal, and if we get another spanning tree that contains the exact same edges as T but instead of (x, y) contains (u, v) we’ll get a smaller weight! That isn’t possible! Thus we proved that on each step we must get the less weighted edge. This particular approach is called “greedy”, because on each step we get the best possible choice. However greedy algorithms don’t always get the right or optimal solution. Fortunately for MST this isn’t true so we can be greedy as much as we can! OK let’s make a summary of our algorithm in the following pseudo code. Pseudo Code 1. We start with an empty set (A) subset of the final MST; 2. Until A does not form T: a. Get the less weighted edge u from G; b. Add u to A; 3. Return A Actually this algorithm is used firstly by Borůvka which started to wire Moravia in 1926. Even without knowing that the “greedy” approach will lead him to the right solution he optimally covered Moravia with electricity. However this algorithm is too general and there are two main algorithms – the Prim’s algorithm and the Kruskal’s algorithm that we shall see in future posts. The thing is that on each step we must get the less weighted edge and both algorithms use different approaches to do that. Related posts: Published at DZone with permission of Stoimen Popov, author and DZone MVB. (source)
{"url":"http://python.dzone.com/articles/algorithm-week-minimum","timestamp":"2014-04-16T21:51:59Z","content_type":null,"content_length":"67705","record_id":"<urn:uuid:8ea794e2-8189-4963-8703-908f5504893e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Jus on Saturday, May 23, 2009 at 7:23pm. Apply algebraic reasoning to show that a=b^(loga/logb) for any a,b>0 • Math - bobpursley, Saturday, May 23, 2009 at 8:17pm take the log of each side now reduce. • Math - Jus, Saturday, May 23, 2009 at 8:43pm I have: (loga/logb )(1/loga)=logb Ok now what? • Math - bobpursley, Saturday, May 23, 2009 at 10:13pm you erred. the logb on the right side divide out (one on numerator, one in denominator) divide both sides by loga. Related Questions Math - Apply algebraic reasoning to show that a=b^(loga/logb) for any a,b>0 Math - 1. Given the product law of logarithms, prove the product law of ... Math - Which of the following is not a property of logarithms? A. log(A - B) = ... Math Algebra 2 - #4. Express the logarithm in terms of log2M and log2N: log2 1/... math/calculus - solve the equation loga(x-5)- loga (x+9)=loga(x-7)-loga(x+9) all... math-please help,need quickly - loga(x-5)-loga(x+9)=loga(x-7)-loga(x+9) all the ... Math - Properties of Logarithm - Can someone show how to this question. Loga(x+... Algebra help - If loga 8=2.079, loga 5=1.609, and loga 3=1.099, find the ... math - of a^2+b^2=2ab, prove log[(a+b)/2]=1/2(loga+logb) Math Algebra 2 - How do I put log in a TI-84 calculator. #1.Express the ...
{"url":"http://www.jiskha.com/display.cgi?id=1243121029","timestamp":"2014-04-18T23:02:01Z","content_type":null,"content_length":"8737","record_id":"<urn:uuid:f874cf9e-d5a9-43eb-805e-d18df6880902>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
our b You are using an old or crippled browser, upgrade your browser or enable Javascript, or click here to visit the old site that doesn't require Javascript. In the Name of God, Most Gracious, Most Merciful More Mathematical Miracle Topics • Short History of the Mathematical Miracle • One of the great miracles • Why 19? • Why 114 chapters? • The order of chapters of Quran, another miracle • What is a numeral? Gematrical Value of the Arabic Alphabets • Quran, Numerically Structured Book • Over it is 19 • Over it is 19, another look • Chapter 74; The Hidden Secret /Al-Muddathther • Chapter 110; The Last Revealed Chapter • Chapter 57, Iron and the Mathematical Miracle • One/Wahed in Arabic Language= waw+ alif+h+d=6+1+8+4=19 • The Initial “Noon” must be written as “Noon- Waw -Noon” • The Initial “Q” and the Mathematical Miracle • God is Possessor of the Highest Degrees; 360 degrees • The word "Hoda" (Guidance) in Quran • The word “Iklaas” in Quran; Devotion • The word “God” in Quran, miracle of all miracles • The word “proof” in Quran and its relation to the Mathematical Miracle • The word “adda” in Quran and its mathematical miracle • The word “everything” and the Mathematical Miracle • The role of the opening verses (Basmalah/In the Name of God, Most Gracious, Most Merciful) in the mathematical Miracle • Meaning of 786 • Benford’s Law and the numerical structure of Quran • Quran, Chemistry and code 19 • Splitting of the Moon; a miracle of Quran and a fulfillment of two prophesies. • Speed of Light or the Greatest Speed, C, determined in Quran • End of the world is coded in Quran – Appendix 25 of the authorized English Translation by Rashad Khalifa, Ph.D.
{"url":"http://www.submission.org/d/moremath.html","timestamp":"2014-04-20T09:46:00Z","content_type":null,"content_length":"4328","record_id":"<urn:uuid:a2794095-8a97-4325-bf8d-322a3c81139b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2000 [00533] [Date Index] [Thread Index] [Author Index] Re: Transformation Methods for Pi • To: mathgroup at smc.vnet.net • Subject: [mg22390] Re: Transformation Methods for Pi • From: Chris Tiee <choni at my-deja.com> • Date: Sun, 27 Feb 2000 18:55:30 -0500 (EST) • References: <892q74$pn9@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com In article <892q74$pn9 at smc.vnet.net>, sniff <sniff at home.com> wrote: > In Mathematica 4, the usage of N[Pi,10000000] (with $MaxPrecision = > Infinity) strongly suggests that the people at Wolfram Research are > not using the best way to calculate Pi. It is very slow. Are they > using a calculus based method (such as arctan functions) to compute > number? On my old NT system, 8,388,608 digits of Pi are calculated in > seconds when using compiled C code of a transformation method and FFT > speed up multiplications. Using Mathematica 4, I terminated the calculation after 17 > hours. - It was still not done. > Where can I find more information regarding Mathematica 4 and its way > compute Pi. The documentation states that the Chudnovsky method for calculating Pi up to 10 million digits (right on the dot for the calculation to your precision). I'm actually not too familiar with various algorithms and I so from experience I have no idea how the calculation gets done or nor any idea of its rate of growth. Keep in mind that Mathematica uses a lot of system resources, as it is a far more general mathematical application than just a pi-digit calculator. Obviously, especially on older systems, the performance of a program specialized to do pi calculation and *only* pi calculation should be vastly more efficient. I conducted an experiment on my computer, a new Pentium 3 550 MHz, with 128 MB memory and 16 GB free space on the hard drive; so this was not exactly what you'd call a resource-deprived system. I set Mathematica on calculation of Pi to 10 million digits, while I continued to actively use the computer. It took approximately 3800 seconds to complete (just over an hour). I then rebooted, freed the system of any resource-hoggers, and disabled the screensavers, etc. and set Mathematica to the task while I left the computer alone. I came back, the job was done in approx 750 seconds. Major improvement. Finally, before I went to sleep, I set it up to calculate pi separately for 1 million, 2 million, 3million, and so on up to 10 million, and show the timing for each in a list. The results I got the next morning were interesting, and I set the computer do the task again. Results were similar, just as interesting: #of digits (in millions) Time Taken (in Seconds) 1 76.67 2 123.91 3 197.57 4 266.11 5 284.02 6 555.73 7 566.60 8 627.63 9 723.42 10 764.40 When looking at just 1 and 10, it appears that the Chudnovsky algorthm has a perfectly linear rate of growth with respect to # of digits. But look between 4 and 5 million, and you notice that's a pretty small gap. And then between 5 and 6, huge difference! What is going on here, anyway (as I said before, I'm not exactly experienced with algorithms of this sort). Oh yes and, what algorithm does Pi calculation beyond 10 million digits use (documentation specifically states Chudnovsky's is used only up to 10 million) Chris Tiee "Choni" | Primary email: choni at ucla.edu Alternate emails: choni at cyberjunkie.com, ch0ni at hotmail.com Sent via Deja.com http://www.deja.com/ Before you buy. Prev by Date: Re: easiest way to display a plot from C to kernel using Mathlink? Previous by thread: Re: Transformation Methods for Pi Next by thread: Re: zigzag & random walk
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Feb/msg00533.html","timestamp":"2014-04-18T15:46:21Z","content_type":null,"content_length":"37740","record_id":"<urn:uuid:0a3fc0bf-5d34-4005-b3c1-c7833b39fe88>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Spring Valley, TX Math Tutor Find a Spring Valley, TX Math Tutor ...I ask that students have at hand:1. The assignment.2. Their textbook.3. 41 Subjects: including statistics, writing, trigonometry, precalculus I just graduated college about two years ago from Texas A&M University with a mechanical engineering degree, and a minor in business and mathematics. Math has always been one of my favorite subjects, and I'm looking to tutor others in my spare time after work. I'm a very patient person when it comes to teaching others, and I am motivated for others to learn the material. 9 Subjects: including algebra 1, algebra 2, calculus, geometry I have a B.S. in Nuclear Physics from The University of Texas at Austin. I have strong command of mathematics and physics. I am very descriptive in my teaching style, and I don't give up until my student has really understood the materials. 13 Subjects: including prealgebra, algebra 1, algebra 2, calculus ...I took quite a few math courses as well as some engineering courses. After TTU, I moved back to Houston and started working in various industries as a chemist and enjoyed it. I briefly taught at the University of Houston as a TA and this is how I discovered I really enjoy teaching. 24 Subjects: including discrete math, GRE, linear algebra, logic ...I have also tutored high school students in various locations. My reputation at the Air Force Academy was the top calculus instructor. I have taught precalculus during the past two years and have enjoyed success with the accomplishments of my students. 11 Subjects: including trigonometry, statistics, algebra 1, algebra 2 Related Spring Valley, TX Tutors Spring Valley, TX Accounting Tutors Spring Valley, TX ACT Tutors Spring Valley, TX Algebra Tutors Spring Valley, TX Algebra 2 Tutors Spring Valley, TX Calculus Tutors Spring Valley, TX Geometry Tutors Spring Valley, TX Math Tutors Spring Valley, TX Prealgebra Tutors Spring Valley, TX Precalculus Tutors Spring Valley, TX SAT Tutors Spring Valley, TX SAT Math Tutors Spring Valley, TX Science Tutors Spring Valley, TX Statistics Tutors Spring Valley, TX Trigonometry Tutors Nearby Cities With Math Tutor Bellaire, TX Math Tutors Bunker Hill Village, TX Math Tutors Cypress, TX Math Tutors El Lago, TX Math Tutors Hedwig Village, TX Math Tutors Highlands, TX Math Tutors Hilshire Village, TX Math Tutors Hunters Creek Village, TX Math Tutors Iowa Colony, TX Math Tutors Meadows Place, TX Math Tutors North Houston Math Tutors Oak Forest, TX Math Tutors Piney Point Village, TX Math Tutors Southside Place, TX Math Tutors West University Place, TX Math Tutors
{"url":"http://www.purplemath.com/spring_valley_tx_math_tutors.php","timestamp":"2014-04-19T17:07:24Z","content_type":null,"content_length":"24086","record_id":"<urn:uuid:8df85fe7-5e56-4904-8a57-e0f3b1a9f080>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Two-Factor ANOVA with Replication Help - Transtutors Two-Factor ANOVA with Replication Two-way ANOVA Two-way ANOVA is multiple regressions with two categorical explanatory variables (or factors). In general, ANOVA is the special case of regression where there is a quantitative response variable and one or more categorical explanatory variables. The response variable is modeled as varying normally around a mean that is a linear combination of the explanatory variables. Get latest updates on the related subject in Statistic Homework help, Assignment help at transtutors.com Two Way Anova (Analysis of variance), also known as two factor Anova, can help you determine if two or more samples have the same "mean" or average. This is a form of "hypothesis testing." An example for two is shown below. The two factors are microbial strain and surface type. The response is the amount of material adhering to the surface. 1 2reps 2reps 2reps 2reps 2reps 2 2reps 2reps 2reps 2reps 2reps 3 2reps 2reps 2reps 2reps 2reps 4 2reps 2reps 2reps 2reps 2reps Details of the calculation are not given. They can be found in any good statistical text. The concept is similar to other ANOVA techniques, i.e. the total variability is partitioned into a number of interaction term. This measures how the different surfaces interact with the different strains of microbe. The interaction term can only be calculated when replicates have been used. Note that it is also strongly advised that each combination has an equal number of replicates. This called a balanced design. The interaction term needs some explanation. Let us assume that there is no interaction between the factors, i.e. the effect of a surface is the same for all strains. In this case we might get results as shown right. Now the pattern for strain 4 is very different from the others. Interactions can cause problems. Many statisticians say that the interaction term should be tested first, i.e. find out if the associated F value is significant. If the interaction is found to be significant the analysis should stop. This is because the anova model assumes that the factor effects are additive, interaction implies that this assumption has been violated. Biologists don't usually impose this restraint; indeed interactions often provide important biological information. As usual the results are presented in an anova summary table. (In the following example some example results are shown). │SOURCE OF VARIATION │df│SS │MS │F│ │Surfaces │4 │42.4 │10.6│ │ │Strains │3 │50.4 │16.8│ │ │Surfaces x Strains │12│117.6 │9.8 │ │ │Error │20│34.0 │1.7 │ │ │Total │39│244.4 │ │ │ The F values are not shown above because their calculation depends upon the type of assumed model. All such points are covered in Statistic Homework help,Assignment help at transtutors.com Three types of model are possible depending upon whether the factors are both fixed; both randomand mixed, i.e. one random and one fixed. It is very important that the type of factors employed is From the summary table we test 3 Null Hypotheses. · There are no significant differences in the amount of adhesion on different surfaces. · There are no significant differences in the amount of adhesion between the strains. · There are no surface/strain interactions. Three combinations of Factors are possible, leading to three Model types. │ │Factor A fixed│Factor A random │ │Factor B fixed │Model I │Model III │ │Factor B random │Model III │Model II │ In all 3 models the Interaction F Value = Interaction MS / Error MS The way in which F values for the main effects are calculated depends upon the type of model being used. Model I Both main effects F values are obtained in a similar way. Main Effects F = Main Effect MS / Error MS , for example Surfaces F = Between Surfaces MS /Error MS Model II The main effects F values are obtained by replacing the Error MS by the Interaction MS. For example Surfaces F = Between Surfaces MS / Interaction MS Model III We need to identify the random and fixed main effects. Assume, for example, surfaces are fixed and strains are random effects. Fixed Effect Surfaces F = Between surfaces MS / Interaction MS Random Effect Strains F = Between Strains MS / Error MS To demonstrate the importance of model type the data in the earlier summary table are subjected to all 3 analysis types. Please remember that you would, normally, only use one model type. In the following example significant F values are marked *. Model Type I II III Between surfaces 6.24* 1.08 1.08 Between strains 9.88* 1.71 9.88* Interaction 5.76* 5.76* 5.76* Note how the conclusions would change depending upon the model type used. Consequently, it is very important that you know the model type. If a Null hypothesis has been rejected it may be appropriate to employ a multiple range test. However, this should be done only for fixed effect means. Thus in a MODEL I all means could be compared, in a MODEL II none would be compared and, finally, in a MODEL III it depends upon which factor is fixed. In the LSD test the value for n is the number or replicates used to calculate a mean. Therefore, in this example n would be 8 for surface means and 10 for strain means. Transtutors.com present timely homework help at logical charges with detailed answers to your Statistic questions so that you get to understand your assignments or homework better apart from having the answers. Our tutors are remarkably qualified and have years of experience providing Two-Factor ANOVA with Replication homework help or assignment help. Related Questions • Using these data from the comparative balance sheet of rosalez perform... 1 hr ago Using these data from the comparative balance sheet of rosalez perform vertical analysts Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question • Exploring Inferential Statistics and Their Discontents 2 hrs ago Answer the following questions:1. Jackson even-numbered Chapter exercises (pp. 220-221; 273-275) 2. What are degrees of freedom? How are the calculated? 3. What do inferential statistics allow you to infer? 4. What is the... Tags : Statistics, Basics of Statistics, Theory of probability, Graduation, Booker T. Washington High School ask similar question • Answer the following questions: 1. Jackson even-numbered Chapter exercises... 2 hrs ago Answer the following questions:1. Jackson even-numbered Chapter exercises (pp. 220-221; 273-275) 2. What are degrees of freedom? How are the calculated? 3. What do inferential statistics allow Tags : Statistics, Basics of Statistics, Theory of probability, University, Booker T. Washington High School ask similar question • A random committee of size 3 is selected from 4 doctors and 2 nurses. Write... 9 hrs ago A random committee of size 3 is selected from 4 doctors and 2 nurses. Write a formula for the probability distribution of the random variable X representing the number of doctors on the committee. Find P(2 <... Tags : Statistics, Basics of Statistics, Others, University ask similar question • untitled 14 hrs ago Tags : Statistics, Descriptive Statistics, Charts and Pie Diagrams, University ask similar question • 5-step approach to hypothesis testing 14 hrs ago Epidemiologists are researching the relationship betweeng two levels of tobacco exposure and a positive diagnosis of esophageal cancer. With SPSS, use the data set bd1.sav and the variables tob2 (dichotomized smoking status)... Tags : Statistics, ANOVA, Applications of ANOVA, University ask similar question • A brand of television has a lifetime that is normally distributed with a... 17 hrs ago A brand of television has a lifetime that is normally distributed with a mean of 7 years and a standard deviation of 2.5 years. What is the probability that a randomly chosen TV will last more than 8 years? <font face= Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question • Experimental Designs I 17 hrs ago Jackson, even-numbered Chapter Exercises, pp. 308-310. Tags : Statistics, Basics of Statistics, Others, Doctoral Program/D.phil ask similar question • The Super Discount store (open 24 hours a day, every day) sells 8-packs of... 23 hrs ago The Super Discount store (open 24 hours a day, every day) sells 8-packs of paper towels, at the rate ofapproximately 420 packs per week. Because the towels are so bulky, the annual cost to carry them ininventory is estimated... Tags : Statistics, Basics of Statistics, Theory of probability, University ask similar question • suppose that gm smith estimate the following regression equation for... 1 day ago suppose that gm smith estimate the following regression equation for chevrolet Tags : Statistics, Sampling Theory, Simple Random Sampling, College ask similar question more assignments »
{"url":"http://www.transtutors.com/homework-help/statistics/anova/two-factor-anova-replication/","timestamp":"2014-04-20T18:28:05Z","content_type":null,"content_length":"89041","record_id":"<urn:uuid:69d1a03d-d308-45a1-8b9b-5dcb9824223a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Find mass by linear density? July 13th 2010, 11:19 AM #1 Jul 2010 Find mass by linear density? ......SOLVED........ A thin wire is bent into the shape of a semicircle x^2+y^2=4 x>0 If the linear density is 3, find the mass of the wire. Anyone have a clue how to tackle this problem? Thanks for your help Possible answers - Last edited by Alpina540; July 13th 2010 at 12:45 PM. If the linear density $\lambda$ is constant, then you can just use the formula $M=\lambda L$, where $M$ is the mass, and $L$ is the length of the wire. Where would you go from there? Well it looks like now I need to find the length of the curve of wire. Correct. And how could you find the length of a semicircle? So I just graphed it and cheated a bit and just ruffed out the length of the curve by the Pythagorean theorem and I got the curve to be ~ 6 so 3*6= 18 So the answer is 6pi? Thanks again Adrian! Last edited by Alpina540; July 13th 2010 at 12:44 PM. You can get the exact answer (which is not $18\pi$, by the way). What's the length of a wire that's in the shape of a circle? ok ok you win, lol (duh about the 18pi! thats what happens when 20 vector calc problems are running through your head.) Thanks for your help, I really appreciate it! Last edited by Alpina540; July 13th 2010 at 12:44 PM. You're very welcome. By the way, "pie" is what you eat. "pi" is the standard transliteration of $\pi$. lol........ I knew that I just didn't think about it as I never write pi out in words. July 13th 2010, 11:30 AM #2 July 13th 2010, 11:35 AM #3 Jul 2010 July 13th 2010, 11:42 AM #4 July 13th 2010, 11:43 AM #5 Jul 2010 July 13th 2010, 11:45 AM #6 July 13th 2010, 11:56 AM #7 Jul 2010 July 13th 2010, 12:37 PM #8 July 13th 2010, 12:39 PM #9 MHF Contributor Apr 2005 July 13th 2010, 12:43 PM #10 Jul 2010
{"url":"http://mathhelpforum.com/calculus/150854-find-mass-linear-density.html","timestamp":"2014-04-21T14:58:50Z","content_type":null,"content_length":"58266","record_id":"<urn:uuid:227fc51f-7cf1-461d-8712-0b276a0bb2b5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with explaining a fraction to decimal and vice versa August 10th 2011, 04:04 PM #1 Aug 2011 Need help with explaining a fraction to decimal and vice versa We are learning how to convert fractions into decimals and then into percents. Ok well our teacher had us draw out a line and make 25,50,75,100% on this line and then had us underneath it say what the fraction of it would be. Ok so here is where i am stuck (so to speak) he said to get half of 25% we would need to divide 25 by 2 to get 12.5 and he said to make into a fraction which I got 1/8. This all makes sense to me.. now i know 1/16 is half of one eight and i got the decimal which is 6.25 but how does 6.25 go into a 1/16 fraction? I divided 1/16 which got me 0.0625 which is my decimal and i know to move it over two places to the right to get my percent but i dont know how to make it into a fraction of 1/16? Maybe my brain is fried right now someone help Re: Need help with explaining a fraction to decimal and vice versa $\frac{1}{16} = 0.0625 =$6.25% Going the other way 6.25% = $0.0625$ this much should be clear. Which can be written as $0.0625 = \frac{0.0625}{1}$ as a fraction. Now times both the top and bottom by 1000 to remove the decimal point $\frac{0.0625}{1} =\frac{625}{10000}$ Then simplify and you will get $\frac{1}{16}$ Re: Need help with explaining a fraction to decimal and vice versa Thank you so very much!! I was confused about the 0.0625 numerator. So if I do run into a problem like that again like say 0.0045 i just would times it by its placement value right? Which i dont think i will until i get into more complicated math. Thank you again so very much..i love your icon as well Re: Need help with explaining a fraction to decimal and vice versa Just multiply by the power of 10 which will make your decimal a whole number, do the same multiplication to the denominator. August 10th 2011, 04:14 PM #2 August 10th 2011, 04:30 PM #3 Aug 2011 August 10th 2011, 04:45 PM #4
{"url":"http://mathhelpforum.com/algebra/185938-need-help-explaining-fraction-decimal-vice-versa.html","timestamp":"2014-04-16T13:30:26Z","content_type":null,"content_length":"42527","record_id":"<urn:uuid:736a4b48-fda0-495e-ae0c-b4b47bc638f6>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
rubix game Can any one help me to solve this. About the game? Consider a modified 2D version of the rubik’s cube game. It can be any nxn block. You can rotate a row (towards right or left) or a column (upward direction or downward direction), so there are 4n allowed possible moves. For example in the following 3x3 board there are 12 possible moves. The format of the input file is fixed. The first line should have the value of n and the next n lines should have the initial state, e.g., for the above problem the initial state would be. I will be checking your assignments based on this fixed format so please stick to this: If you do not find goal state generate an error message, otherwise, output your complete solution which would be a sequence of actions and states that we have to follow to find the path from initial to goal state. You also have to output the total number of states explored and the total number of states in the fringe at the time of finding the goal state. Search to Implement: implement following searches on this problem • DFS(Depth-first search) • BFS(Breadth-first search) • Depth Limited search (depth limit to be input by user) What else can we do for you? Wash your car? lol ;) Topic archived. No new replies allowed.
{"url":"http://www.cplusplus.com/forum/general/110441/","timestamp":"2014-04-18T08:16:38Z","content_type":null,"content_length":"8832","record_id":"<urn:uuid:e13a703f-9af8-4822-9c28-adbccb8bc4d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverside, IL Geometry Tutor Find a Riverside, IL Geometry Tutor ...I helped our team go to the National finals and win many awards. Outside of engineering, I also pursue a side career in music composition. I was raised on classical music and more contemporary music from the 50's, 60's, and I do keep up with current hits too. 16 Subjects: including geometry, chemistry, English, algebra 1 ...The college application process is the most exciting time for a high school student. When a student are guided and exposed to various options, wonderful opportunities present themselves. The college counseling process has many important steps, all of which are crucial to student success. 28 Subjects: including geometry, English, writing, algebra 1 ...I have a PhD. in experimental nuclear physics. I have completed undergraduate and/or graduate school coursework in the following subjects - classical mechanics, electricity and magnetism, astronomy, modern physics and quantum mechanics, statistical mechanics and thermodynamics, solid state physi... 10 Subjects: including geometry, calculus, physics, algebra 1 ...I am generally available afternoons/evenings and can work out a consistent schedule if that's what you are looking for. I believe that EVERY student deserves a quality education and am open to discussing rates on a case-by-case basis. Please reach out to me so we can set something up! 27 Subjects: including geometry, chemistry, physics, calculus ...My students have seen incredible success on classroom exams, as well as standardized tests. Algebra 2 is one of my favorite subjects to teach/tutor. I love walking with students from the basic to the complex and watching them shine. 6 Subjects: including geometry, English, algebra 1, GED Related Riverside, IL Tutors Riverside, IL Accounting Tutors Riverside, IL ACT Tutors Riverside, IL Algebra Tutors Riverside, IL Algebra 2 Tutors Riverside, IL Calculus Tutors Riverside, IL Geometry Tutors Riverside, IL Math Tutors Riverside, IL Prealgebra Tutors Riverside, IL Precalculus Tutors Riverside, IL SAT Tutors Riverside, IL SAT Math Tutors Riverside, IL Science Tutors Riverside, IL Statistics Tutors Riverside, IL Trigonometry Tutors Nearby Cities With geometry Tutor Argo, IL geometry Tutors Berwyn, IL geometry Tutors Broadview, IL geometry Tutors Brookfield, IL geometry Tutors Countryside, IL geometry Tutors Forest View, IL geometry Tutors Hines, IL geometry Tutors Hodgkins, IL geometry Tutors Lyons, IL geometry Tutors Mc Cook, IL geometry Tutors Mccook, IL geometry Tutors North Riverside, IL geometry Tutors Stickney, IL geometry Tutors Summit Argo geometry Tutors Summit, IL geometry Tutors
{"url":"http://www.purplemath.com/riverside_il_geometry_tutors.php","timestamp":"2014-04-21T10:29:35Z","content_type":null,"content_length":"23987","record_id":"<urn:uuid:bcf1b77f-3e72-4ac5-9c71-3b6e16b8e296>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
The attempt to load metrics for this article has failed. The attempt to plot a graph for these metrics has failed. Pair potentials for two reference particles with diameter for four different combinations of Yukawa potential parameters and . Crystalline fraction versus packing fraction η of a system of particles which interact with a hard-core repulsive Yukawa pair potential with reference contact value and (a) , (b) 4.0, (c) 6.7, and (d) 10 after a simulation of 2 × 104 Monte Carlo cycles, starting from a bcc ( and 4.0) or fcc ( and 10) crystal structure, for different polydispersities s in the range 0.00–0.10 as labeled. Shift in packing fraction of the crystal-fluid transition Δη(s) (as defined in Eq. (17) ) with size polydispersity s of a system of hard-core repulsive Yukawa particles with reference contact value and , 4.0, 6.7, and 10 as labeled. Crystalline fraction versus packing fraction η of a hard-core repulsive Yukawa system with reference contact value and (a) , (b) 3.3, (c) 6.7, and (d) 10 after a simulation of 2 × 104 Monte Carlo cycles, starting from a bcc ( and 3.3) or fcc ( and 10) crystal structure, for different polydispersities s in the range 0.00–0.15 as labeled. The arrows indicate the edge of the plateau for s = 0.13 and may be used for comparison with Figs. 7 and 8 . Snapshot after 2 × 104 Monte Carlo cycles of a hard-core repulsive Yukawa system with reference contact value , , η = 0.20, and s = 0.15. The color of a particle indicates (a) the average crystallinity in a series of six configurations between 1.5 × 104 and 2 × 104 Monte Carlo cycles, (b) the average local bond-orientational order parameter (Eq. (14) ) after 2 × 104 MC cycles, (c) local bond-orientational order parameter q 6(i) (Eq. (12) ) after 2 × 104 MC cycles, and (d) the square displacement from the particle's ideal lattice position for τ = 2 × 104 Monte Carlo cycles. Normalized probability distribution functions p(σ) of the particle diameter σ for , , s = 0.15, and η = 0.20 (55% most-disordered particles, 25% most-ordered particles). Filled gray curve: all particles, solid blue line: most-ordered particles (particles that are crystalline in at least five out of six configurations between 1.5 × 104 and 2 × 104 Monte Carlo cycles), dashed red line: most-disordered particles (particles that are crystalline in at most one out of six configurations between 1.5 × 104 and 2 × 104 Monte Carlo cycles). Polydispersity s versus packing fraction η of the most-ordered and most-disordered parts of hard-core repulsive Yukawa systems with reference contact value and an overall polydispersity s = 0.13. For (a) , (b) 3.3, (c) 6.7, and (d) 10 after a simulation of 2 × 104 Monte Carlo cycles, starting from a bcc ( and 3.3) or fcc ( and 10) crystal structure. The arrows indicate the same state points as in Fig. 4 . Mean square displacement ⟨Δr(τ)2⟩ from the ideal lattice position (Eq. (16) ) versus packing fraction η for reference contact value and after a simulation of 2 × 104 Monte Carlo cycles, starting from a bcc crystal structure, for different polydispersities s in the range 0.00–0.15 as labeled. The arrow indicates the same state point as in Fig. 4(a) . 2D projections of 3D trajectories during 2 × 105 Monte Carlo cycles of 25 particles from a configuration with , , s = 0.15, and η = 0.20. The initial configuration is a perfect bcc crystal; the initial positions of the particles are indicated by solid red symbols. Initially, the 25 particles are in two parallel {100} planes: 16 particles (solid red circles and solid red squares) in one plane occupy the corners of 3 × 3 unit cells, 9 particles (solid red triangles and solid red diamonds) in the second plane are in the centers of the unit cells. The end positions after 2 × 105 MC cycles are indicated by empty red symbols (again circles, squares, triangles and diamonds; for each particle we used a symbol of the same shape to indicate the initial and end position). The trajectories are shown with four different colours (black, blue, green, and cyan, which correspond to the circles, squares, triangles, and diamonds, respectively). Scitation: Effect of size polydispersity on the crystal-fluid and crystal-glass transition in hard-core repulsive Yukawa systems
{"url":"http://scitation.aip.org/content/aip/journal/jcp/138/11/10.1063/1.4794918","timestamp":"2014-04-16T11:10:05Z","content_type":null,"content_length":"98453","record_id":"<urn:uuid:f6788480-ddcc-4dff-9bf7-ff494b8de0ca>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Celebrity Atheist List From Celebrity Atheist List Andrey (Andrei) Andreyevich Markov (14 June 1856 N.S. – 20 July 1922) was a Russian mathematician. He is best known for his work on stochastic processes. A primary subject of his research later became known as Markov chains and Markov processes. Markov and his younger brother Vladimir Andreevich Markov (1871–1897) proved Markov brothers' inequality. His son, another Andrei Andreevich Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. 1.) "Of course, Markov, an atheist and eventual excommunicate of the Church quarreled endlessly with his equally outspoken counterpart Nekrasov. The disputes between Markov and Nekrasov were not limited to mathematics and religion, they quarreled over political and philosophical issues as well." Gely P. Basharin, Amy N. Langville, Valeriy A. Naumov, The Life and Work of A. A. Markov, page 6. 2.) Naming Infinity: A True Story of Religious Mysticism and Mathematical Creativity. Harvard University Press. 2009. p. 69. ISBN 978-0-674-03293-4. "Markov (1856–1922), on the other hand, was an atheist and a strong critic of the Orthodox Church and the tzarist government (Nekrasov exaggeratedly called him a Marxist)."
{"url":"http://www.celebatheists.com/wiki/Andrey_Markov","timestamp":"2014-04-21T10:07:18Z","content_type":null,"content_length":"12587","record_id":"<urn:uuid:6ef6c968-5817-4a58-b3a9-ee741e5fa1be>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
G. Katzer, A. F. Sax, and J. Kalcher, J. Phys. Chem. A, 1999, 103, 7894 – 7899 "Bond Strengthening by Deformation of Bond Angles" (online) In the following abstract, the boldface atom symbols C and Si mean a divalent carbon or silicon atom, respectively. Torsion potentials about the X-X-H bond in homosubstituted primary carbenes (X=C) and silylenes (X=Si) have been investigated at the multi-reference averaged coupled pair functional (MR-ACPF) level of theory. For the triplet species, the potentials are quite flat, but large barriers of torsion have been observed for the singlet states of all carbenes and silylenes whose carbon or silicon atom adjacent to the divalent atom formes small bond angles with two of its further substitutents; other geometry parameters, even the bond angle at the divalent atom, proved to be of little or no importance. The said kind of deformation encourages the formation of a weak dative π-like bond between the X and X atoms, which by its twofold symmetry with respect to torsion about the bond axis, is responsible for the observed two-minima torsion potential.
{"url":"http://www.uni-graz.at/~katzer/publ.html","timestamp":"2014-04-17T12:30:08Z","content_type":null,"content_length":"9176","record_id":"<urn:uuid:faadc11d-ea8e-4a65-95b5-07b3a88a3959>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Two black holes falling head-on I'd suggest using the equivalent Newtonian situation as a rough guide. As far as departures from Newtonian behavior - as far as I know colliding black holes have only been handled numerically, not analytically. The Newtonian equivalent suggests that if a single black hole with mass causes a momentarily stationary particle to fall with a starting acceleration at a radial distance , then two momentarily stationary black holes with that same mass and at that same distance apart, will fall towards each other with a starting relative acceleration of . In the Newtonian case, a = -M/r^2 , geometrically. In the relativistic (Schwarzschild) case: a = -(M/r^2)/(1-2M/r)^(0.5) , i.e., the Newtonian case divided by the gravitational redshift factor. Since we are not working with colliding black holes, but rather with momentarily stationary ones, will it be a reasonable assumption to just double this Schwarzschild acceleration, like in the Newtonian case? As in a previous threat ), we have to ignore the effect of tidal deformation of the objects for simplicity.
{"url":"http://www.physicsforums.com/showthread.php?t=126939","timestamp":"2014-04-23T15:58:24Z","content_type":null,"content_length":"38781","record_id":"<urn:uuid:461966d3-ef96-409b-836c-1e26e8fa06e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Faa di Bruno formula Faa di Bruno formula Faà di Bruno formula is a remarkable combinatorial formula for higher derivatives of a composition of functions. There are various modern approaches to the related mathematics, using Joyal’s theory of species, operads, graphs/trees, combinatorial Hopf algebras and so on. We prove a Faà di Bruno formula for the Green function in the bialgebra of P-trees, for any polynomial endofunctor P. The formula appears as relative homotopy cardinality of an equivalence of groupoids. For suitable choices of P, the result implies also formulae for Green functions in bialgebras of graphs. • Doron Zeilberger, Toward a combinatorial proof of the Jacobian conjecture? in Combinatoire énumérative (Montreal, Que., 1985/Quebec, Que., 1985), 370–380, Lecture Notes in Math. 1234, Springer 1986. MR89c:05009 • Eliahu Levy, Why do partitions occur in Faa di Bruno’s chain rule for higher derivatives?, math.GM/0602183. • E. Di Nardo, G. Guarino, D. Senato, A new algorithm for computing the multivariate Faà di Bruno’s formula, arxiv/1012.6008 In works of T. J. Robinson the formula is treated in the context of vertex algebras, calculus with formal power series and in logarithmic calculus, as well as in a connection to the umbral calculus: • Thomas J. Robinson, New perspectives on exponentiated derivations, the formal Taylor theorem, and Faà di Bruno’s formula, Proc.Conf.Vert.Op.Alg., Cont.Math. 497 (2009) 185-198 arxiv/0903.3391; Formal calculus and umbral calculus, Electronic Journal of Combinatorics, 17(1) (2010) R95 arxiv/0912.0961 Faà di Bruno Hopf algebra • Christian Brouder, Alessandra Frabetti, Christian Krattenthaler, Non-commutative Hopf algebra of formal diffeomorphisms, Adv. Math. 200:2 (2006) 479-524 pdf • Kurusch Ebrahimi-Fard, Frederic Patras, Exponential renormalization, Annales Henri Poincare 11:943-971,2010, arxiv/1003.1679 doi Using Dyson’s identity for Green’s functions as well as the link between the Faà di Bruno Hopf algebra and the Hopf algebras of Feynman graphs, its relation to the composition of formal power series is analyzed. • Hector Figueroa, Jose M. Gracia-Bondia, Joseph C. Varilly, Faà di Bruno Hopf algebras, article at Springer eom, math.CO/0508337 • Jean-Paul Bultel, Combinatorial properties of the noncommutative Faà di Bruno algebra, J. of Algebraic Combinatorics 38:243–273 (2013) MR3081645 We give a new combinatorial interpretation of the noncommutative Lagrange inversion formula, more precisely, of the formula of Brouder–Frabetti–Krattenthaler for the antipode of the noncommutative Faà di Bruno algebra. Revised on October 17, 2013 06:12:07 by Zoran Škoda
{"url":"http://ncatlab.org/nlab/show/Faa+di+Bruno+formula","timestamp":"2014-04-19T10:05:00Z","content_type":null,"content_length":"15530","record_id":"<urn:uuid:6ff85b67-869a-4584-85b1-deadbc8bfa8d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditional distribution of the modulus of the output of AWGN channel given the modulus of the input up vote 0 down vote favorite Hi everyone, I will be too happy if anybody help me find a solution for the following problem. In fact, I have a big problem that I could not solve it for weeks. Assume that we have we have two independent zero mean Gaussian random variables, $X$ and $Z$ and we define the new random variable $Y$ as $Y=X+Z$. I define the sign and the magnitude of this three random variables as $X_s$, $Y_s$ and $Z_s \in \{+1,-1\}$, $X_M$, $Y_M$ and $Z_M \in R^+$, respectively. It is clear that the pairs $X_s$ and $X_M$ are independent, this is the same for $Y_s$ and $Y_M$, $Y_s$ and $Y_M$. My problem is how to find the conditional distribution $f(y_M|x_M)=?$. I have a solution for this but I could not convince myself that my answer is true. We have, $$f(y_M|x_M)= f(y_M|x_M,x_s=+1)f(x_s=+1|x_M)+ f(y_M|x_M,x_s=-1)f(x_s=-1|x_M)$$ $$=f(y_m|x_M,x_s=+1)f(x_s= +1)+ f(y_M|x_M,x_s=-1)f(x_s=-1)$$ $$=0.5*(f(y_M|x_M,x_s=+1)+ f(y_M|x_M,x_s=-1))$$ then, since both $f(y_M|x_M,x_s=+1)$ and $f(y_M|x_M,x_s=-1)$ have the same distribution, i.e., "Folded normal distribution" then from above equation $f(y_M|x_M)$ follows "Folded normal distribution". If it is correct it means that $f(y_M|x_M)= f(y_M|x_M,x_s)$ that implies $X_s$ and $Y_M$ condition on $X_M$ are independent!!!!!!!!!!! But from $y_M = \lvert x+z \rvert$, $Y_M$ depends on both $X_M$ and $X_s$ !!!! I am really confused and I will be too grateful if anybody help me solve the problem. Thanks a lot in advance, st.statistics pr.probability add comment 1 Answer active oldest votes Yes, $Y_M$ is independent of $X_s$ - it doesn't matter whether or not you condition on $X_M$. You essentially give the answer yourself - conditioning on $X_M$, you get that $Y | X_M$ is a symmetric r.v., with $f(Y|X_M) = \frac{1}{2\sigma_Z}[\varphi(\frac{Y-X_M}{\sigma_z})+\varphi(\frac{Y+X_M}{\sigma_z})]$ where $\varphi$ is the standard normal density. Therefore, summing up vote 1 on $Y=\pm Y_M$ you get $f(Y_M|X_M) = \frac{1}{\sigma_Z}[\varphi(\frac{Y_M-X_M}{\sigma_z})+\varphi(\frac{Y_M+X_M}{\sigma_z})]$ (for any $Y_M \geq 0$) which is equal to $f(Y_M|X_M,X_s)$. It down vote is true that $Y=X+Z$ depends on $X_s$ but you can write $f(Y | X_s=1) = f(-Y | X_s=-1)$ and when you take the absolute value $Y_M = |Y|$ becomes independent of $X_s$. @ OrZuk, Thanks a lot for your answer. – Farzad May 30 '11 at 8:14 @ OrZuk, again thanks for your answer, but I did not tour 4th sentence, summing on $Y=\pm Y_M$ ..., I will be too happy if you explain me in more details? – Farzad May 30 '11 at 9:41 I meant that to get a particular value of $Y_M$ (which is non-negative), $Y$ can be either $Y_M$ or $-Y_M$. Therefore, to get the probability (actually density) of $Y_M$ attending a certain value, you should sum the density of $Y$ attaining this value or it's minus. – Or Zuk May 30 '11 at 13:03 add comment Not the answer you're looking for? Browse other questions tagged st.statistics pr.probability or ask your own question.
{"url":"http://mathoverflow.net/questions/66346/conditional-distribution-of-the-modulus-of-the-output-of-awgn-channel-given-the","timestamp":"2014-04-17T04:52:49Z","content_type":null,"content_length":"55181","record_id":"<urn:uuid:01d972f8-7b24-4f84-b7d8-cefbab914530>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Autopsy for a Mathematical Hallucination? Matthew Watkins Introduction by Terence McKenna Recently, while in Mexico at the classic Maya site of Palenque, I made the aquaintance of a young British mathematician and psychokinesiologist named Matthew Watkins. Watkins offered the strongest and most interesting critique of the timewave and the assumptions of its construction yet made. Watkins is confident that he has condensed the theory of the timewave into a formula (given below) and is further convinced that there is no rational basis for assuming that the timewave represents the fluctuation of any quantity which can be meaningfully understood as "novelty". Here in Watkins' own words is his formula and his objection: The Meeting I first became aware of the Timewave theory when I discovered a magazine article on Terence McKenna four or five years ago. It briefly mentioned that he had developed a theory which involved mathematically modelling the historical ingression of "novelty" using a fractal generated from the King Wen sequence of I Ching hexagrams. The idea had been revealed to him whilst in an altered state of consciousness brought about by psilocybin mushrooms. I had been studying the I Ching for some time, was working on a PhD in mathematics, and had occasionally contemplated the role of psychoactive plants in ancient religious belief systems, so I was immediately fascinated and searched everywhere for more information. I discovered McKenna's writings and recordings, but although the theory was often referred to and used as a basis for some remarkable speculation, I was unable to find any detailed description of its foundations. Such a description had originally been published in The Invisible Landscape (Terence and Dennis McKenna) in the early seventies, an obscure book long out of print and almost impossible to find. When, in 1994, I discovered that The Invisible Landscape had been republished, I immediately obtained a copy and studied it thoroughly. I was rather disappointed to find that the mathematical process which was applied to the King Wen sequence to generate the fractal "timewave" seemed worryingly arbitrary (no justification being given for many steps) and mathematically clumsy. Beyond that, the described procedure fails to give the same "data points" which appear in the appendix and which are used to ultimately define the fractal in question. More disappointing, I discovered that the December 21, 2012 date (now generally associated with McKenna's name) was in no way calculated - it was selected to give the timewave the "best possible fit" with the historical occurence of novelty as McKenna sees it. It was difficult to accept that such an exotic, imaginative idea could have such unsatisfactory foundations. I thought that perhaps McKenna had been unable to effectively communicate something very real which had been revealed to him, and decided to get in touch immediately. We began an e-mail dialogue about a year ago, after he responded to a letter I sent offering mathematical advice (at this point I had completed my PhD on hyperspatial embeddings of differential manifolds). Little was achieved for many months. He referred to an idea he was exploring which related the distribution of large prime numbers to the timewave, but it was only when I received a copy of the Timewave software that I was able to look into this. I was unable to find any evidence to support the hypothesis, but I did find that the software manual gave a much more detailed account of the construction of the timewave than The Invisible Landscape had. The manual contained the actual source code which the software uses, so I was able to study it with great care and formulate a detailed critique of the theory. We agreed to meet and discuss the issue in Palenque (in the Mexican state of Chiapas) in January, while he was teaching at a Botanical Preservation Corps conference. Terence and I had four lengthy, good natured, and most enjoyable discussions during the week I was in Palenque, and I was able to explain my critique step-by-step. By the final discussion he seemed to have fully grasped the nature of the problem, and had admitted that the theory appeared to have "no basis in rational thought". He claimed (and this struck me as sincere) that he was only interested in the truth, and that someone "disproving" the theory was just as a much of a relief to him as someone confirming its validity. He proposed that we collaborate on a piece provisionally entitled "Autopsy for a Mathematical Hallucination" in which we would carefully take the theory apart and see what had gone wrong. He claimed that I was the first person to approach him with a serious mathematical critique of his ideas, partly explaining why such an unjustifiable theory had not only survived for so long, but also attracted so much interest and attention. The Formula The timewave is a mathematical function defined by applying a "fractal transform" to a piecewise linear function. The latter function is an expression of 384 "data points" (positive integer values) derived from the King Wen sequence. Strangely, McKenna's description of the derivation in The Invisible Landscape fails to yield the data points which appear in the appendix and which have been used since. However, a complete description can be found in the TimeExplorer software manual. With some effort, the multi-step description, largely expressed in graphical or intuitive terms, can be condensed into a single formula. We define a set of 64 values h[1], h[2],..., h[64] such that h[k] is the number of lines which must be changed in hexagram k to give hexagram k+1. Here "hexagram 65" is interpreted as hexagram 1, "hexagram 0" as hexagram 64, etc. These values are as follows: h[1]:=6; h[2]:=2; h[3]:=4; h[4]:=4; h[5]:=4; h[6]:=3; h[7]:=2; h[8]:=4; h[9]:=2; h[10]:=4; h[11]:=6; h[12]:=2; h[13]:=2; h[14]:=4; h[15]:=2; h[16]:=2; h[17]:=6; h[18]:=3; h[19]:=4; h[20]:=3; h[21]:=2; h[22]:=2; h[23]:=2; h[24]:=3; h[25]:=4; h[26]:=2; h[27]:=6; h[28]:=2; h[29]:=6; h[30]:=3; h[31]:=2; h[32]:=3; h[33]:=4; h[34]:=4; h[35]:=4; h[36]:=2; h[37]:=4; h[38]:=6; h[39]:=4; h[40]:=3; h[41]:=2; h[42]:=4; h[43]:=2; h[44]:=3; h[45]:=4; h[46]:=3; h[47]:=2; h[48]:=3; h[49]:=4; h[50]:=4; h[51]:=4; h[52]:=1; h[53]:=6; h[54]:=2; h[55]:=2; h[56]:=3; h[57]:=4; h[58]:=3; h[59]:=2; h[60]:=1; h[61]:=6; h[62]:=3; h[63]:=6; h[64]:=3; The formula for the values w[0], w[1],..., w[383], the 384 "data points" which lie at the heart of the entire timewave construction, can be expressed in the popular mathematical programming language MAPLE as follows (Peter Meyer has written a conversion to C): w[k] := abs( ((-1)^trunc((k-1)/32))* (h[k-1 mod 64] - h[k-2 mod 64] +h[-k mod 64] - h[1-k mod 64]) + 3*((-1)^trunc((k-3)/96))* (h[trunc(k/3)-1 mod 64] - h[trunc(k/3)-2 mod 64] + h[-trunc(k/3) mod 64] - h[1-trunc(k/3) mod 64]) + 6*((-1)^trunc((k-6)/192))* (h[trunc(k/6)-1 mod 64] - h[trunc(k/6)-2 mod 64] + h[-trunc(k/6) mod 64] - h[1-trunc(k/6) mod 64]) ) + abs ( 9 - h[-k mod 64] - h[k-1 mod 64] + 3*(9- h[-trunc(k/3) mod 64] - h[ trunc(k/3)-1 mod 64]) + 6*(9- h[-trunc(k/6) mod 64] - h[ trunc (k/6)-1 mod 64]) ); Here trunc represents trunctation (rounding a number down to its integer part), abs means absolute (positive) value, and mod 64 means "the remainder after dividing by 64". Of this formula, McKenna Naturally [it] is of interest to myself, Terence McKenna and to others, especially Peter Meyer and other mathematicians and computer code writers who have help to advance and formulate the theory of the timewave over the years. On March 25, '96 Peter Meyer sent me e-mail which contained the following statement: "I have tested it (the formula) and have the pleasure of reporting that the formula produces correct values. I have congratulated him by e-mail." As of April 1, 1996 Watkins has significantly advanced understanding of the timewave by writing the formula that has eluded other workers since 1971. Although I was happy to have clarified the issue, I am unaware of any one else who had attempted to find such a formula. It was no great feat, being merely the compression of a step-by-step computer algorithm (as given by Peter Meyer in the TimeExplorer software manual) into a single mathematical expression, something which any competent mathematician could achieve with relatively little effort. The Objection The formula is really quite inelegant, and I personally found it hard to believe that if a map of temporal resonance was encoded into the King Wen sequence, it would look like this. In any case, my main concern was with the powers of -1. These constitute the "missing step" which isn't mentioned in The Invisible Landscape, but which turns up as a footnote of the TimeExplorer software manual. On p.79 we find Now we must change the sign of half of the 64 numbers in angle_lin[] as follows For 1 <=j <=32 When reading this, I immediately thought "WHY?", as did several friends and colleagues who I guided through the construction. There is no good reason I could see for this sudden manipulation of the data. Without this step, the powers of -1 disappear from the formula, and the "data points" are a different set of numbers, leading to a different timewave. McKenna has looked at this timewave and agree that it doesn't appear to represent a map of "novelty" in the sense that the "real" timewave is claimed to. It is possible that by changing the "zero date" Dec. 21, 2012, one could obtain a better fit, but there's no longer any clear motivation to attempt this, as the main reason for taking the original timewave seriously were McKenna's (often very convincing) arguments for historical correlation. These would all be rendered meaningless without the aforementioned step. The footnote associated with this step reads: 22. This is the mysterious "half twist". The reason for this is not well understood at present and is a question which awaits further research This struck me as absurd. After all, why introduce such a step into an (already overcomplicated) algorithm whilst admitting that the reason for doing so is "not well understood at present"? I confronted McKenna on this issue, and he immediately grasped the significance of my challenge. He would have to either (1) justify this mysterious "half twist" or (2) abandon the timewave theory He claimed not to remember the exact details for its inclusion, as it had been decided upon over 20 years ago. After some time, he pointed out the antisymmetry which occurs in the central column of values in the figure below: Figure 1 These are the values of angle_lin[] referred to earlier, and to which the "half twist" is applied. But the antisymmetry is a natural consequence of the fact that the right hand graph is simply a 180-degree rotation of the left hand graph. The values in the column represent relative slopes, and the effect of the "half twist" is the confuse the evaluation. Having conceded that the above doesn't constitute a justification of the "half twist", McKenna went on to claim that without it the collapse of the "multi-levelled complex bi-directional wave" into the 384 values "fails to preserve" some geometric property. The "collapse" is pictured in the figure below: Figure 2 The lefthand form is not fractal, as one might think, but is a simple "piecewise linear" function, essentially expressing the 384 values. The righthand form is the "multi-levelled complex wave", which is in fact just the superimposition of six piecewise linear functions. The "collapse" of the latter into the former is built into my formula, and is essentially a sum of "angular" and "linear" divergences between the three pairs of functions. The "half twist" has the effect of complicating the angular terms, essentially scrambling the +/- information relating to the relative slopes of the various line segments. Remember that McKenna is claiming that the "half twist" is necessary to guarantee the preservation of some geometric property inherent in the righthand form. He has not been able to define this property in a precise mathematical way, only referring to it in intuitive, graphical terms. I'm now in the slightly awkward position of having to use mathematical reasoning to disprove an assertion which hasn't actually be stated in mathematical terms, but which is obviously mathematical in content. There is no doubt that McKenna's timewave is a well-defined (if irrelevant) mathematical function, but any considerations of its interpretation lie ouside the domain of mathematical logic. We must therefore take into account McKenna's argument for the "half twist", for if he as no good argument (as the footnote originally suggested), even he agrees that the theory can no longer be taken seriously. We first note that the formula consists of the sum of two positive values: w = |angular term| + |linear term| We are interested in the angular term which is given as ((h(k-1 mod 64) - h(k-2 mod 64)) - (h(1-k mod 64) - h(-k mod 64))) + 3*((-1)^[(k-3)/96])* ((h([k/3]-1 mod 64) - h([k/3]-2 mod 64)) - (h(1-[k/3] mod 64) - h(-[k/3] mod 64))) + 6*((-1)^[(k-6)/192]* ((h([k/6]-1 mod 64) - h([k/6]-2 mod 64)) - (h(1-[k/6] mod 64) - h(-[k/6] mod 64))) Now the "multi-levelled bi-directional wave" shown on the right hand side of Figure 2 (above) is actually the superimposition of six piecewise linear functions defined over the interval [0,384]. These functions are built from the two halves of Figure 1 (above). McKenna refers to these as (left) the forward flowing wave and (right) the backward flowing wave. Our six functions are: • forward flowing yao resonance which is six copies of the forward flowing wave joined end-to-end (6 x 64 = 384) • backward flowing yao resonance which is six compies of the backward flowing wave joined end-to-end (6 x 64 = 384) • forward flowing trigrammatic resonance which is two copies of the forward flowing wave, magnified x 3, joined end-to-end (2 x (3 x 64) = 384) • backward flowing trigrammatic resonance which is two copies of the backward flowing wave, magnified x 3, joined end-to-end (2 x (3 x 64) = 384) • forward flowing hexagrammatic resonance which is one copy of the forward flowing wave, magnified x 6 (1 x (6 x 64) = 384) • backward flowing hexagrammatic resonance which is one copy of the backward flowing wave, magnified x 6 (1 x (6 x 64) = 384) McKenna's reasons for constructing such an object are based on various speculations regarding relationships between certain Chinese ritual calendars and the I Ching, and concepts of temporal resonance which appear in ancient Chinese literature. This is all documented in The Invisible Landscape. I find the reasoning somewhat unclear, but we shall continue regardless. The "angular term" mentioned above is essentially a weighted sum of the relative slopes of the three pairs of resonances. We can rewrite it as (-1)^[(k-1)/32]*(forward yao slope at k - backward yao slope at k) + 3*(-1)^[(k-3)/96]*(forward tri slope at k - backward tri slope at k) + 6*(-1)^[(k-6)/192]*(forward hex slope at k - backward hex slope at k) Without the powers of (-1), which are a direct consequence of the "half twist", we would have something which could be considered an expression of the local geometry of the "6-levelled object" at k. These powers modify the three contributions in a k-dependent way. Consider the following table: Sign of contributions for yao resonance term 1 <=k <=32 : + 33 <=k <=64 : - 65<=k <=96 : + 97<=k <=128 : - 129<=k <=160 : + 161<=k <=192 : - 193<=k <=224 : + 225<=k <=256 : - 257<=k <=288 : + 289<=k<=320 : - 321<=k<=352 : + 353<=k<=384 : - Sign of contributions for trigrammatic resonance term 1<=k<=2 : - 3<=k<=98 : + 99<=k<=194 : - 195<=k<=288 : + 289<=k<=384 : - Sign of contributions for hexagrammatic resonance term 1<=k<=5 : - 6<=k<=197 : + 198<=k<=384 : - The Conclusion So we see that the value of w(k) cannot be determined from the local geometry of the six-levelled object in a neighbourhood of k. The "collapse mechanism" built into the formula is clearly k -dependent. Therefore we see that not only is the inclusion of the "half twist" failing to guarantee the "preservation" of some geometric property to which McKenna has referred, but the failure is precisely because of its inclusion. McKenna's stated reason for this (crucial) step of the construction is unacceptable. As a mathematician who has met and talked with him, who is sympathetic with the majority of his other work, and who is only interested in spreading clarity, I must conclude that the "timewave" cannot be taken to be what McKenna claims it is. On a more positive note, I should add that I don't find McKenna's timewave exploit to be completely without value. Certain observations (such as the absence of 5's in the set {h(1),...,h(64)} and the correspondence of the Chinese 13-lunation ritual calendar with six 64-day cycles) are certainly worthy of further consideration. It wouldn't surprise me if a fractal map of temporal resonance was encoded into the King Wen sequence, just as it wouldn't surprise me if something quite remarkable does occur on December 21, 2012. The world can be a very strange place, and we all have much to learn. McKenna's hyper-imaginative speculation has fired the imagination of many. With this particular "theory" he has spread awareness of the I Ching and the Mayan calendar, both fascinating and poorly understood systems of ancient human thought. I should therefore end by suggesting that the remainder of his published thought should not be dismissed as a result of my findings which are discussed here. Terence McKenna died April 3, 2000. Gyrus - "The End of the River" (highly recommended) "A critical view of Linear Apocalyptic Thought, and how Linearity makes a sneak appearance in Timewave Theory's fractal view of Time..."
{"url":"http://www.fourmilab.ch/rpkp/autopsy.html","timestamp":"2014-04-18T23:36:03Z","content_type":null,"content_length":"21232","record_id":"<urn:uuid:a15f6b86-7282-41b6-9a6c-99859b2e491c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x H. Wu, "Montgomery Multiplier and Squarer for a Class of Finite Fields," IEEE Transactions on Computers, vol. 51, no. 5, pp. 521-529, May, 2002. BibTex x @article{ 10.1109/TC.2002.1004591, author = {H. Wu}, title = {Montgomery Multiplier and Squarer for a Class of Finite Fields}, journal ={IEEE Transactions on Computers}, volume = {51}, number = {5}, issn = {0018-9340}, year = {2002}, pages = {521-529}, doi = {http://doi.ieeecomputersociety.org/10.1109/TC.2002.1004591}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Computers TI - Montgomery Multiplier and Squarer for a Class of Finite Fields IS - 5 SN - 0018-9340 EPD - 521-529 A1 - H. Wu, PY - 2002 KW - Finite fields arithmetic KW - hardware architecture KW - Montgomery multiplication KW - elliptic curve cryptography VL - 51 JA - IEEE Transactions on Computers ER - Montgomery multiplication in {\rm GF}(2^m) is defined by a(x)b(x)r^{-1}(x)\bmod{f(x)}, where the field is generated by a root of the irreducible polynomial f(x), a(x) and b(x) are two field elements in {\rm GF}(2^m), and r(x) is a fixed field element in {\rm GF}(2^m). In this paper, first, a slightly generalized Montgomery multiplication algorithm in {\rm GF}(2^m) is presented. Then, by choosing r(x) according to f(x), we show that efficient architectures of bit-parallel Montgomery multiplier and squarer can be obtained for the fields generated with an irreducible trinomial. Complexities of the Montgomery multiplier and squarer in terms of gate counts and time delay of the circuits are investigated and found to be as good as or better than that of previous proposals for the same class of fields. [1] http://csrc.nist.govencryption, 2001. [2] A.V. Aho,J.E. Hopcroft, and J.D. Ullman,The Design and Analysis of Computer Algorithms.Reading, Mass.: Addison-Wesley, 1974. [3] ÇK. Koç and T. Acar, “Fast Software Exponentiation in${\rm GF}(2^k)$,” Proc. 13th Symp. Computer Arithmetic, pp. 279-287, July 1997. [4] Ç.K. Koç and T. Acar, “Montgomery Multplication in$\big. GF(2^k)\bigr.$,” Design, Codes, and Cryptography, vol. 14, no. 1, pp. 57-69, 1998. [5] Ç.K. Koç and B. Sunar, Low-Complexity Bit-Parallel Canonical and Normal Basis Multipliers for a Class of Finite Fields IEEE Trans. Computers, vol. 47, no. 3, pp. 353-356, Mar. 1998. [6] M.A. Hasan, M. Wang, and V.K. Bhargava, Modular Construction of Low Complexity Parallel Multipliers for a Class of Finite Fields$GF(2^m)$ IEEE Trans. Computers, vol. 41, no. 8, pp. 962-971, Aug. [7] B.S. Kaliski Jr., “The Montgomery Inverse and Its Applications,” IEEE Trans. Computers, vol. 44, no. 8, pp. 1,064-1,065, Aug. 1995. [8] A. Karatsuba and Y. Ofman, “Multiplication of Multidigit Numbers on Automata,” Sov. Phys.-Dokl. (English translation), vol. 7, no. 7, pp. 595-596, 1963. [9] J.L. Massey and J.K. Omura, “Computational Method and Apparatus for Finite Field Arithmetic,” US Patent No. 4587627, 1984. [10] A.J. Menezes, I.F. Blake, X. Gao, R.C. Mullin, S.A. Vanstone, and T. Yaghoobian, Applications of Finite Fields. Kluwer Academic, 1993. [11] P.L. Montgomery, “Modular Multiplication without Trial Division,” Math. Computation, vol. 44, pp. 519-521, 1985. [12] C. Paar, “Efficient VLSI Architectures for Bit-Parallel Computation in Galois Fields,” PhD thesis, VDI-Verlag, Düsseldorf, 1994. [13] C. Paar, P. Fleischmann, and P. Roelse, “Efficient Multiplier Architectures for Galois Fields,” IEEE Trans. Computers, vol. 47, no. 2, pp. 162-170, Feb. 1998. [14] E. Savas and Ç.K. Koç, “The Montgomery Modular Inverse—Revisited,” IEEE Trans. Computers, vol. 49, no. 7, pp. 763-766, July 2000. [15] E. Savas, A.F. Tenca, and Ç.K. Koç, “A Scalable and Unified Multiplier Architecture for Finite Fields$\big. GF(p)\bigr.$and$\big. GF(2^m)\bigr.$,” Proc. Workshop Cryptographic Hardware and Embedded Systems (CHES 2000), Ç.K. Koçand C. Paar, eds., pp. 277-292, 2000. [16] B. Sunar and Ç.K. Koç, Mastrovito Multiplier for All Trinomials IEEE Trans. Computers, vol. 48, no. 5, pp. 522-527, May 1999. [17] B. Sunar and Ç.K. Koç, An Efficient Optimal Normal Basis Type II Multiplier IEEE Trans. Computers, vol. 50, no. 1, pp. 83-87, Jan. 2001. [18] C.C. Wang,T.K. Truong,H.M. Shao,L.J. Deutsch,J.K. Omura, and I.S. Reed,"VLSI Architectures for Computing Multiplications and Inverses inGF(2m)," IEEE Trans. Computers, vol. 34, no. 8, pp. 709-716, Aug. 1985. [19] M. Wang and I.F. Blake,"Bit-Serial Multiplication in Finite Fields," SIAM J. Discrete Maths., vol. 3, pp. 140-148, Feb. 1990. [20] H. Wu, Low Complexity Bit-Parallel Finite Field Arithmetic Using Polynomial Basis Cryptographic Hardware and Embedded Systems, Ç.K. Koçand C. Paar, eds., pp. 280-291, Berlin: Springer-Verlag, [21] H. Wu, “Montgomery Multiplier and Squarer in$\big. {\rm GF}(2^m)\bigr.$,” Proc. Cryptographic Hardware and Embedded Systems (CHES 2000), pp. 264-276, Aug. 2000. [22] H. Wu, M.A. Hasan, and I.F. Blake, New Low-Complexity Bit-Parallel Finite Field Multipliers Using Weakly Dual Bases IEEE Trans. Computers, vol. 47, no. 11, pp. 1223-1233, Nov. 1998. Index Terms: Finite fields arithmetic, hardware architecture, Montgomery multiplication, elliptic curve cryptography H. Wu, "Montgomery Multiplier and Squarer for a Class of Finite Fields," IEEE Transactions on Computers, vol. 51, no. 5, pp. 521-529, May 2002, doi:10.1109/TC.2002.1004591 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tc/2002/05/t0521-abs.html","timestamp":"2014-04-20T10:48:47Z","content_type":null,"content_length":"55011","record_id":"<urn:uuid:d74cc375-6d3b-4485-8d07-ceaa7046b44a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Machine Learning (Theory) The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious thought needs to go in this direction. Unrelated, I found quite a bit of truth in Paul’s talking bears and Xtranormal always adds a dash of funny. My impression is that the ML job market has only become hotter since 4 years ago. Anyone who is well trained can find work, with the key limiting factor being “well trained”. In this environment, efforts to make ML more automatic and more easily applied are greatly appreciated. And yes, Yahoo! is still hiring too KDD and MUCMD 2011 At KDD I enjoyed Stephen Boyd‘s invited talk about optimization quite a bit. However, the most interesting talk for me was David Haussler‘s. His talk started out with a formidable load of biological complexity. About half-way through you start wondering, “can this be used to help with cancer?” And at the end he connects it directly to use with a call to arms for the audience: cure cancer. The core thesis here is that cancer is a complex set of diseases which can be distentangled via genetic assays, allowing attacking the specific signature of individual cancers. However, the data quantity and complex dependencies within the data require systematic and relatively automatic prediction and analysis algorithms of the kind that we are best familiar with. Some of the papers which interested me are: 1. Kai-Wei Chang and Dan Roth, Selective Block Minimization for Faster Convergence of Limited Memory Large-Scale Linear Models, which is about effectively using a hard-example cache to speedup 2. Leland Wilkinson, Anushka Anand, and Dang Nhon Tuan, CHIRP: A New Classifier Based on Composite Hypercubes on Iterated Random Projections. The bar on creating new classifiers is pretty high. The approach here uses a combination of random projection and partition which appears to be compelling for some nonlinear and relatively high computation settings. They do a more thorough empirical evaluation than most papers. 3. Zhuang Wang, Nemanja Djuric, Koby Crammer, and Slobodan Vucetic Trading Representability for Scalability: Adaptive Multi-Hyperplane Machine for Nonlinear Classification. The paper explores an interesting idea: having lots of weight vectors (effectively infinity) associated with a particular label, showing that algorithms on this representation can deal with lots of data as per linear predictors, but with superior-to-linear performance. The authors don’t use the hashing trick, but their representation is begging for it. 4. Michael Bruckner and Tobias Scheffer, Stackelberg Games for Adversarial Prediction Problem. This is about email spam filtering, where the authors use a theory of adversarial equilibria to construct a more robust filter, at least in some cases. Demonstrating this on noninteractive data is inherently difficult. There were also three papers that were about creating (or perhaps composing) learning systems to do something cool. 1. Gideon Dror, Yehuda Koren, Yoelle Maarek, and Idan Szpektor, I Want to Answer, Who Has a Question? Yahoo! Answers Recommender System. This is about how to learn to route a question to the appropriate answerer automatically. 2. Yehuda Koren, Edo Liberty, Yoelle Maarek, and Roman Sandler, Automatically Tagging Email by Leveraging Other Users’ Folders. This is about helping people organize their email with machine 3. D. Sculley, Matthew Eric Otey, Michael Pohl, Bridget Spitznagel, John Hainsworth, Yunkai Zhou, Detecting Adversarial Advertisements in the Wild. The title is an excellent abstract here, and there are quite a few details about the implementation. I also attended MUCMD, a workshop on the Meaningful Use of Complex Medical Data shortly afterwards. This workshop is about the emergent area of using data to improve medicine. The combination of electronic health records, the economic importance of getting medicine right, and the relatively weak use of existing data implies there is much good work to do. This finally gave us a chance to discuss radically superior medical trial designs based on work in exploration and learning Jeff Hammerbacher‘s talk was a hilarilously blunt and well stated monologue about the need and how to gather data in a usable way. Amongst the talks on using medical data, Suchi Saria‘s seemed the most mature. They’ve constructed a noninvasive test for problem infants which is radically superior to the existing Apgar score according to leave-one-out cross validation. From the doctor’s side, there was discussion of the deep balkanization of data systems within hospitals, efforts to overcome that, and the (un)trustworthiness of data. Many issues clearly remain here, but it also looks like serious progress is being made. Overall, the workshop went well, with the broad cross-section of talks providing quite a bit of extra context you don’t normally see. It left me believing that a community centered on MUCMD is rising now, with attendant workshops, conferences, etc… to be expected. Interesting thing at UAI 2011 I had a chance to attend UAI this year, where several papers interested me, including: 1. Hoifung Poon and Pedro Domingos Sum-Product Networks: A New Deep Architecture. We’ve already discussed this one, but in a nutshell, they identify a large class of efficiently normalizable distributions and do learning with it. 2. Yao-Liang Yu and Dale Schuurmans, Rank/norm regularization with closed-form solutions: Application to subspace clustering. This paper is about matrices, and in particular they prove that certain matrices are the solution of matrix optimizations. I’m not matrix inclined enough to fully appreciate this one, but I believe many others may be, and anytime closed form solutions come into play, you get 2 order of magnitude speedups, as they show experimentally. 3. Laurent Charlin, Richard Zemel and Craig Boutilier, A Framework for Optimizing Paper Matching. This is about what works in matching papers to reviewers, as has been tested at several previous NIPS. We are looking into using this system for ICML 2012. In addition I wanted to comment on Karl Friston‘s invited talk. At the outset, he made a claim that seems outlandish to me: The way the brain works is to minimize surprise as measured by a probabilistic model. The majority of the talk was not actually about this—instead it was about how probabilistic models can plausibly do things that you might not have thought possible, such as birdsong. Nevertheless, I think several of us in the room ended up stuck on the claim in questions afterward. My personal belief is that world modeling (probabilistic or not) is a useful subroutine for intelligence, but it could not possibly be the entirety of intelligence. A key reason for this is the bandwidth of our senses—we simply take in too much information to model everything with equal attention. It seems critical for the efficient functioning of intelligence that only things which might plausibly matter are modeled, and only to the degree that matters. In other words, I do not model the precise placement of items on my desk, or even the precise content of my desk, because these details simply do not matter. This argument can be made in another way. Suppose for the moment that all the brain does is probabilistic modeling. Then, the primary notion of failure to model is “surprise”, which is low probability events occurring. Surprises (stumbles, car wrecks, and other accidents) certainly can be unpleasant, but this could be correct if modeling is a subroutine as well. The clincher is that there are many unpleasant things which are not surprises, including keeping your head under water, fasting, and self-inflicted wounds. Accounting for the unpleasantness of these events requires more than probabilistic modeling. In other words, it requires rewards, which is why reinforcement learning is important. As a byproduct, rewards also naturally create a focus of attention, addressing the computational efficiency issue. Believing that intelligence is just probabilistic modeling is another example of simple wrong answer ICML 2011 and the future Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: 1. Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precomitted their location. For the future, getting a colocation with ACL or SIGIR, seems to require more advanced planning. If that can be done, I believe there is substantial interest—I understand there was substantial interest in the joint symposium this year. What we did manage was achieving a colocation with COLT and there is an outside chance that a machine learning summer school will precede the main conference. The colocation with COLT is in both time and space, with COLT organized as (essentially) a separate track in a nearby building. We look forward to organizing a joint invited session or two with the COLT program chairs. 2. Tutorials. We don’t have anything imaginative here, except for pushing for quality tutorials, probably through a mixture of invitations and a call. There is a small chance we’ll be able to organize a machine learning summer school as a prequel, which would be quite cool, but several things have to break right for this to occur. 3. Conference. We are considering a few tinkerings with the conference format. 1. Shifting a conference banquet to be during the workshops, more tightly integrating the workshops. 2. Having 3 nights of posters (1 per day) rather than 2 nights. This provides more time/poster, and avoids halving talks and posters appear on different days. 3. Having impromptu sessions in the evening. Two possibilities here are impromptu talks and perhaps a joint open problems session with COLT. I’ve made sure we have rooms available so others can organize other things. 4. We may go for short presentations (+ a poster) for some papers, depending on how things work out schedulewise. My opinions on this are complex. ICML is traditionally multitrack with all papers having a 25 minute-ish presentation. As a mechanism for research, I believe this is superior to a single track conference of a similar size because: 1. Typically some talk of potential interest can always be found by participants avoiding the boredom problem which comes up at a single track conference 2. My experience is that program organizers have a limited ability to foresee which talks are of most interest, commonly creating a misallocation of attention. On the other hand, there are clearly limits to the number of tracks that are reasonable, and I feel like ICML (especially with COLT cotimed) is near the upper limit. There are also some papers which have a limited scope of interest, for which a shorter presentation is reasonable. 4. Workshops. A big change here—we want to experiment with 2 days of workshops rather than 1. There seems to be demand for it, as the number of workshops historically is about 10, enough that it’s easy to imagine people commonly interested in 2 workshops. It’s also the case that NIPS has had to start rejecting a substantial fraction of workshop submissions for space reasons. I am personally a big believer in workshops as a mechanism for further research, so I hope this works out well. 5. [S:Journal integration:S]. I tend to believe that we should be shifting to a journal format for ICML papers, as per many past discussions. After thinking about this the easiest way seems to be simply piggybacking on existing journals such as JMLR and MLJ by essentially declaring that people could submit there first, and if accepted, and not otherwise presented at a conference, present at ICML. This was considered too large a change, so it is not happening. Nevertheless, it is a possible tweak that I believe should be considered for the future. My best guess is that this would never displace the baseline conference review process, but it would help some papers that don’t naturally fit into a conference format while keeping quality high. 6. Reviewing. Drawing on plentiful experience with what goes wrong, I think we can create the best reviewing system for conferences. We are still debating exact details here while working through what is possible in different conference systems. Nevertheless, some basic goals are: 1. Double Blind [routine now] Two identical papers with different authors should have the same chance of success. In terms of reviewing quality, I think double blind makes little difference in the short term, but the public commitment to fair reviewing makes a real difference in the long term. 2. Author Feedback [routine now] Author feedback makes a difference in only a small minority of decisions, but I believe its effect is larger as (a) reviewer quality improves and (b) reviewer understanding improves. Both of these are silent improvers of quality. Somewhat less routine, we are seeking a mechanism for authors to be able to provide feedback if additional reviews are requested, as I’ve become cautious of the late-breaking highly negative review. 3. Paper Editing. Geoff Gordon tweaked AIStats this year to allow authors to revise papers during feedback. I think this is helpful, because it encourages authors to fix clarity issues immediately, rather than waiting longer. This helps with some things, but it is not a panacea—authors still have to convince reviewers their paper is worthwhile, and given the way people are first impressions are lasting impressions. 4. Multisource reviewing. We want all of the initial reviews to be assigned by good yet different mechanisms. In the past, I’ve observed that the source of reviewer assignments can greatly bias the decision outcome, all the way from “accept with minor revisions” to “reject” in the case of a JMLR submission that I had. Our plan at the moment is that one review will be assigned by bidding, one by a primary area chair, and one by a secondary area chair. 5. No single points of failure. When Bob Williamson and I were PC members for learning theory at NIPS, we each came to a decisions given reviews and then reconciled differences. This made a difference on about 5-10% of decisions, and (I believe) improved overall quality a bit. More generally, I’ve seen instances where an area chair has an unjustifiable dislike for a paper and kills it off, which this mechanism avoids. 6. Speed. In general, I believe speed and good decision making are antagonistic. Nevertheless, we believe it is important to try to do the reviewing both quickly and well. Doing things quickly implies that we can push the submission deadline back later, providing authors more time to make quality papers. Key elements of doing things well fast are: good organization (that’s all on us), light loads for everyone involved (i.e. not too many papers), crowd sourcing (i.e. most decisions made by area chairs), and some amount of asynchrony. Altogether, we believe at the moment that two weeks can be shaved from our reviewing process. 7. Website. Traditionally at ICML, every new local organizer was responsible for creating a website. This doesn’t make sense anymore, because substantial work is required there, which can and should be amortized across the years so that the website can evolve to do more for the community. We plant to create a permanent website, based around some combination of icml.cc and machinelearning.org. I think this just makes sense. 8. Publishing. We are thinking about strongly encouraging authors to use arxiv for final submissions. This provides a lasting backing store for ICML papers, as well as a mechanism for revisions. The reality here is that some mistakes get into even final drafts, so a way to revise for the long term is helpful. We are also planning to videotape and make available all talks, although a decision between videolectures and Weyond has not yet been made. Implementing all the changes above is ambitious, but I believe feasible and that each is individually beneficial and to some extent individually evaluatable. I’d like to hear any thoughts you have on this. It’s also not too late if you have further suggestions of your own. A paper not at Snowbird Unfortunately, a scheduling failure meant I missed all of AIStat and most of the learning workshop, otherwise known as Snowbird, when it’s at Snowbird. At snowbird, the talk on Sum-Product networks by Hoifung Poon stood out to me (Pedro Domingos is a coauthor.). The basic point was that by appropriately constructing networks based on sums and products, the normalization problem in probabilistic models is eliminated, yielding a highly tractable yet flexible representation+learning algorithm. As an algorithm, this is noticeably cleaner than deep belief networks with a claim to being an order of magnitude faster and working better on an image completion task. Snowbird doesn’t have real papers—just the abstract above. I look forward to seeing the paper. (added: Rodrigo points out the deep learning workshop draft.) A Variance only Deviation Bound At the PAC-Bayes workshop earlier this week, Olivier Catoni described a result that I hadn’t believed was possible: a deviation bound depending only on the variance of a random variable. For people not familiar with deviation bounds, this may be hard to appreciate. Deviation bounds, are one of the core components for the foundations of machine learning theory, so developments here have a potential to alter our understanding of how to learn and what is learnable. My understanding is that the basic proof techniques started with Bernstein and have evolved into several variants specialized for various applications. All of the variants I knew had a dependence on the range, with some also having a dependence on the variance of an IID or martingale random variable. This one is the first I know of with a dependence on only the variance. The basic idea is to use a biased estimator of the mean which is not influenced much by outliers. Then, a deviation bound can be proved by using the exponential moment method, with the sum of the bias and the deviation bounded. The use of a biased estimator is clearly necessary, because an unbiased empirical average is inherently unstable—which was precisely the reason I didn’t think this was Precisely how this is useful for machine learning isn’t clear yet, but it opens up possibilities. For example, it’s common to suffer from large ranges in exploration settings, such as contextual bandits or active learning. ALT 2009 I attended ALT (“Algorithmic Learning Theory”) for the first time this year. My impression is ALT = 0.5 COLT, by attendance and also by some more intangible “what do I get from it?” measure. There are many differences which can’t quite be described this way though. The program for ALT seems to be substantially more diverse than COLT, which is both a weakness and a strength. One paper that might interest people generally is: Alexey Chernov and Vladimir Vovk, Prediction with Expert Evaluators’ Advice. The basic observation here is that in the online learning with experts setting you can simultaneously compete with several compatible loss functions simultaneously. Restated, debating between competing with log loss and squared loss is a waste of breath, because it’s almost free to compete with them both simultaneously. This might interest anyone who has run into “which loss function?” debates that come up periodically. Another 10-year paper in Machine Learning When I was thinking about the best “10 year paper” for ICML, I also took a look at a few other conferences. Here is one from 10 years ago that interested me: David McAllester PAC-Bayesian Model Averaging, COLT 1999. 2001 Journal Draft. Prior to this paper, the only mechanism known for controlling or estimating the necessary sample complexity for learning over continuously parameterized predictors was VC theory and variants, all of which suffered from a basic problem: they were incredibly pessimistic in practice. This meant that only very gross guidance could be provided for learning algorithm design. The PAC-Bayes bound provided an alternative approach to sample complexity bounds which was radically tighter, quantitatively. It also imported and explained many of the motivations for Bayesian learning in a way that learning theory and perhaps optimization people might appreciate. Since this paper came out, there have been a number of moderately successful attempts to drive algorithms directly by the PAC-Bayes bound. We’ve gone from thinking that a bound driven algorithm is completely useless to merely a bit more pessimistic and computationally intense than might be necessary. The PAC-Bayes bound is related to the “bits-back” argument that Geoff Hinton and Drew van Camp made at COLT 6 years earlier. What other machine learning or learning theory papers from 10 years ago have had a substantial impact? Interesting papers at UAICMOLT 2009 Here’s a list of papers that I found interesting at ICML/COLT/UAI in 2009. 1. Elad Hazan and Comandur Seshadhri Efficient learning algorithms for changing environments at ICML. This paper shows how to adapt learning algorithms that compete with fixed predictors to compete with changing policies. The definition of regret they deal with seems particularly useful in many situation. 2. Hal Daume, Unsupervised Search-based Structured Prediction at ICML. This paper shows a technique for reducing unsupervised learning to supervised learning which (a) make a fast unsupervised learning algorithm and (b) makes semisupervised learning both easy and highly effective. 3. There were two papers with similar results on active learning in the KWIK framework for linear regression, both reducing the sample complexity to . One was Nicolo Cesa-Bianchi, Claudio Gentile, and Francesco Orabona Robust Bounds for Classification via Selective Sampling at ICML and the other was Thomas Walsh, Istvan Szita, Carlos Diuk, Michael Littman Exploring compact reinforcement-learning representations with linear regression at UAI. The UAI paper covers application to RL as well. 4. Ping Li, Improving Compressed Counting at UAI. This paper talks about how to keep track of the moments in a datastream with very little space and computation. I’m not sure I have a use for it yet, but it seems like a cool piece of basic technology. 5. Mark Reid and Bob Williamson Surrogate Regret Bounds for Proper Losses at ICML. This paper points out that via the integral characterization of proper losses, proper scoring rules can be reduced to binary classification. The results unify and generalize the Probing and Quanting reductions we worked on previously. This paper is also related to Nicolas Lambert‘s work, which is quite thought provoking in terms of specifying what is learnable. 6. Daniel Hsu, Sham M. Kakade and Tong Zhang. A Spectral Algorithm for Learning Hidden Markov Models COLT. This paper shows that a subset of HMMs can be learned using an SVD-based algorithm. 7. Samory Kpotufe, Escaping the curse of dimensionality with a tree-based regressor at COLT. This paper shows how to directly applying regression in high dimensional vector spaces and have it succeed anyways because the data is naturally low-dimensional. 8. Shai Ben-David, David Pal and Shai Shalev-Shwartz. Agnostic Online Learning at COLT. This paper characterizes the ability to learn when an adversary is choosing features in the online setting as the “Littlestone dimension”. Interesting Presentations at Snowbird Here are a few of presentations interesting me at the snowbird learning workshop (which, amusingly, was in Florida with AIStat). Interesting Papers at SODA 2009 Several talks seem potentially interesting to ML folks at this year’s SODA. 1. Maria-Florina Balcan, Avrim Blum, and Anupam Gupta, Approximate Clustering without the Approximation. This paper gives reasonable algorithms with provable approximation guarantees for k-median and other notions of clustering. It’s conceptually interesting, because it’s the second example I’ve seen where NP hardness is subverted by changing the problem definition subtle but reasonable way. Essentially, they show that if any near-approximation to an optimal solution is good, then it’s computationally easy to find a near-optimal solution. This subtle shift bears serious thought. A similar one occurred in our ranking paper with respect to minimum feedback arcset. With two known examples, it suggests that many more NP-complete problems might be finessed into irrelevance in this style. 2. Yury Lifshits and Shengyu Zhang, Combinatorial Algorithms for Nearest Neighbors, Near-Duplicates, and Small-World Design. The basic idea of this paper is that actually creating a metric with a valid triangle inequality inequality is hard for real-world problems, so it’s desirable to have a datastructure which works with a relaxed notion of triangle inequality. The precise relaxation is more extreme than you might imagine, implying the associated algorithms give substantial potential speedups in incomparable applications. Yuri tells me that a cover tree style “true O(n) space” algorithm is possible. If worked out and implemented, I could imagine substantial use. 3. Elad Hazan and Satyen Kale Better Algorithms for Benign Bandits. The basic idea of this paper is that in real-world applications, an adversary is less powerful than is commonly supposed, so carefully taking into account the observed variance can yield an algorithm which works much better in practice, without sacrificing the worst case performance. 4. Kevin Matulef, Ryan O’Donnell, Ronitt Rubinfeld, Rocco Servedio, Testing Halfspaces. The basic point of this paper is that testing halfspaces is qualitatively easier than finding a good half space with respect to 0/1 loss. Although the analysis is laughably far from practical, the result is striking, and it’s plausible that the algorithm works much better than the analysis. The core algorithm is at least conceptually simple: test that two correlated random points have the same sign, with “yes” being evidence of a halfspace and “no” not. 5. I also particularly liked Yuval Peres‘s invited talk The Unreasonable Effectiveness of Martingales. Martingale’s are endemic to learning, especially online learning, and I suspect we can tighten and clarify several arguments using some of the techniques discussed. A NIPS paper I’m skipping NIPS this year in favor of Ada, but I wanted to point out this paper by Andriy Mnih and Geoff Hinton. The basic claim of the paper is that by carefully but automatically constructing a binary tree over words, it’s possible to predict words well with huge computational resource savings over unstructured approaches. I’m interested in this beyond the application to word prediction because it is relevant to the general normalization problem: If you want to predict the probability of one of a large number of events, often you must compute a predicted score for all the events and then normalize, a computationally inefficient operation. The problem comes up in many places using probabilistic models, but I’ve run into it with high-dimensional regression. There are a couple workarounds for this computational bug: 1. Approximate. There are many ways. Often the approximations are uncontrolled (i.e. can be arbitrarily bad), and hence finicky in application. 2. Avoid. You don’t really want a probability, you want the most probable choice which can be found more directly. Energy based model update rules are an example of that approach and there are many other direct methods from supervised learning. This is great when it applies, but sometimes a probability is actually needed. This paper points out that a third approach can be viable empirically: use a self-normalizing structure. It seems highly likely that this is true in other applications as well. How do we get weak action dependence for learning with partial observations? This post is about contextual bandit problems where, repeatedly: 1. The world chooses features x and rewards for each action r[1],…,r[k] then announces the features x (but not the rewards). 2. A policy chooses an action a. 3. The world announces the reward r[a] The goal in these situations is to learn a policy which maximizes r[a] in expectation efficiently. I’m thinking about all situations which fit the above setting, whether they are drawn IID or adversarially from round to round and whether they involve past logged data or rapidly learning via interaction. One common drawback of all algorithms for solving this setting, is that they have a poor dependence on the number of actions. For example if k is the number of actions, EXP4 (page 66) has a dependence on k^0.5, epoch-greedy (and the simpler epsilon greedy) have a dependence on k^1/3, and the offset tree has a dependence on k-1. These results aren’t directly comparable because different things are being analyzed. The fact that all analyses have poor dependence on k is troublesome. The lower bounds in the EXP4 paper and the Offset Tree paper demonstrate that this isn’t a matter of lazy proof writing or a poor choice of algorithms: it’s essential to the nature of the problem. In supervised learning, it’s typical to get no dependence or very weak dependence on the number of actions/choices/labels. For example, if we do empirical risk minimization over a finite hypothesis space H, the dependence is at most ln |H| using an Occam’s Razor bound. Similarly, the PECOC algorithm (page 12) has dependence bounded by a constant. This kind of dependence is great for the feasibility of machine learning: it means that we can hope to tackle seemingly difficult problems. Why is there such a large contrast between these settings? At the level of this discussion, they differ only in step 3, where for supervised learning, all of the rewards are revealed instead of just One of the intuitions you develop after working with supervised learning is that holistic information is often better. As an example, given a choice between labeling the same point multiple times (perhaps revealing and correcting noise) or labeling other points once, an algorithm with labels other points typically exists and typically yields as good or better performance in theory and in practice. This appears untrue when we have only partial observations. For example, consider the following problem(*): “Find an action with average reward greater than 0.5 with probability at least 0.99″ and consider two algorithms: 1. Sample actions at random until we can prove (via Hoeffding bounds) that one of them has large reward. 2. Pick an action at random, sample it 100 times, and if we can prove (via a Hoeffding bound) that it has large average reward return it, otherwise pick another action randomly and repeat. When there are 10^10 actions and 10^9 of them have average reward 0.6, it’s easy to prove that algorithm 2 is much better than algorithm 1. Lower bounds for the partial observation settings imply that more tractable algorithms only exist under additional assumptions. Two papers which do this without context features are: 1. Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandit problems in metric spaces, STOC 2008. Here the idea is that you have access to a covering oracle on the actions where actions with similar average rewards cover each other. 2. Deepak Agarwal, , and Deepayan Chakrabati, Multi-armed Bandit Problems with Dependent Arms, ICML 2007. Here the idea is that the values of actions are generated recursively, preserving structure through the recursion. Basic questions: Are there other kinds of natural structure which allows a good dependence on the total number of actions? Can these kinds of structures be extended to the setting with features? (Which seems essential for real applications.) (*) Developed in discussion with Yisong Yue and Bobby Kleinberg. Interesting papers at COLT (and a bit of UAI & workshops) Here are a few papers from COLT 2008 that I found interesting. 1. Maria-Florina Balcan, Steve Hanneke, and Jenn Wortman, The True Sample Complexity of Active Learning. This paper shows that in an asymptotic setting, active learning is always better than supervised learning (although the gap may be small). This is evidence that the only thing in the way of universal active learning is us knowing how to do it properly. 2. Nir Ailon and Mehryar Mohri, An Efficient Reduction of Ranking to Classification. This paper shows how to robustly rank n objects with n log(n) classifications using a quicksort based algorithm. The result is applicable to many ranking loss functions and has implications for others. 3. Michael Kearns and Jennifer Wortman. Learning from Collective Behavior. This is about learning in a new model, where the goal is to predict how a collection of interacting agents behave. One claim is that learning in this setting can be reduced to IID learning. Due to the relation with Metric-E^3, I was particularly interested in a couple other papers on reinforcement learning in navigation-like spaces. I also particularly enjoyed Dan Klein‘s talk, which was the most impressive application of graphical model technology I’ve seen. I also attended the large scale learning challenge workshop and enjoyed Antoine Bordes talk about a fast primal space algorithm that won by a hair over other methods in the wild track. Ronan Collobert‘s talk was also notable in that they are doing relatively featuritis-free NLP. Interesting Papers at ICML 2007 Here are a few of the papers I enjoyed at ICML. 1. Steffen Bickel, Michael Brüeckner, Tobias Scheffer, Discriminative Learning for Differing Training and Test Distributions There is a nice trick in this paper: they predict the probability that an unlabeled sample is in the training set vs. the test set, and then use this prediction to importance weight labeled samples in the training set. This paper uses a specific parametric model, but the approach is easily generalized. 2. Steve Hanneke A Bound on the Label Complexity of Agnostic Active Learning This paper bounds the number of labels required by the A^2 algorithm for active learning in the agnostic case. Last year we figured out agnostic active learning was possible. This year, it’s quantified. Hopefull soon, it will be practical. 3. Sylvian Gelly, David Silver Combining Online and Offline Knowledge in UCT. This paper is about techniques for improving MoGo with various sorts of learning. MoGo has a fair claim at being the world’s best Go algorithm. There were also a large number of online learning papers this year, especially if you count papers which use online learning techniques for optimization on batch datasets (as I do). This is expected, because larger datasets are becoming more common, and online learning makes more sense the larger the dataset. Many of these papers are of interest if your goal is learning fast while others are about extending online learning into new domains. (Feel free to add any other papers of interest in the comments.) Interesting Papers at COLT 2007 Here are two papers that seem particularly interesting at this year’s COLT. 1. Gilles Blanchard and François Fleuret, Occam’s Hammer. When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound which can (empirically) be quite tight. A disadvantage of the PAC-Bayes bound is that it applies to a classifier which is randomized over a set of base classifiers rather than a single classifier. This paper shows that a similar bound can be proved which holds for a single classifier drawn from the set. The ability to safely use a single classifier is very nice. This technique applies generically to any base bound, so it has other applications covered in the paper. 2. Adam Tauman Kalai. Learning Nested Halfspaces and Uphill Decision Trees. Classification PAC-learning, where you prove that any problem amongst some set is polytime learnable with respect to any distribution over the input X is extraordinarily challenging as judged by lack of progress over a long period of time. This paper is about regression PAC-learning, and the results appear much more encouraging than exist in classification PAC-learning. Under the assumption that: 1. The level sets of the correct regressed value are halfspaces. 2. The level sets obey a Lipschitz condition. this paper proves that a good regressor can be PAC-learned using a boosting algorithm. (The “uphill decision trees” part of the paper is about one special case where you don’t need the Lipschitz Conditional Tournaments for Multiclass to Binary This problem has been cracked (but not quite completely solved) by Alina, Pradeep, and I. The problem is essentially finding a better way to reduce multiclass classification to binary classification. The solution is to use a carefully crafted tournament, the simplest version of which is a single elimination tournament where the “players” are the different classes. An example of the structure is For the single elimination tournament, we can prove that: For all multiclass problems D, for all learned binary classifiers c, the regret of an induced multiclass classifier is bounded by the regret of the binary classifier times log[2] k. Restated: reg[multiclass](D,Filter_tree_test(c)) <= reg[binary] (Filter_tree_train(D),c) 1. Filter_tree_train(D) is the induced binary classification problem 2. Filter_tree_test(c) is the induced multiclass classifier. 3. reg[multiclass] is the multiclass regret (= difference between error rate and minimum possible error rate) 4. reg[binary] is the binary regret This result has a slight dependence on k which we suspect is removable. The current conjecture is that this dependence can be removed by using higher order tournaments such as double elimination, triple elimination, up to log[2] k-elimination. The key insight which makes the result possible is conditionally defining the prediction problems at interior nodes. In essence, we use the learned classifiers from the first level of the tree to filter the distribution over examples reaching the second level of the tree. This process repeats, until the root node is reached. Further details, including a more precise description and some experimental results are in the draft paper. What to do with an unreasonable conditional accept Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the overall quality of the reviewing process and the conference. In short, Sanjoy was right. At the time, I operated under the belief that the PC chair’s job was simply too heavy to bother with something like this, but that was wrong. William invited me to post this, and I hope we all learn a little bit from it. Obviously, this should only be used if there is a real flaw in the conditions for a conditional accept paper. The Forgetting How many papers do you remember from 2006? 2005? 2002? 1997? 1987? 1967? One way to judge this would be to look at the citations of the papers you write—how many came from which year? For myself, the answers on recent papers are: │year │2006│2005│2002│1997│1987│1967│ │count │4 │10 │5 │1 │0 │0 │ This spectrum is fairly typical of papers in general. There are many reasons that citations are focused on recent papers. 1. The number of papers being published continues to grow. This is not a very significant effect, because the rate of publication has not grown nearly as fast. 2. Dead men don’t reject your papers for not citing them. This reason seems lame, because it’s a distortion from the ideal of science. Nevertheless, it must be stated because the effect can be 3. In 1997, I started as a PhD student. Naturally, papers after 1997 are better remembered because they were absorbed in real time. A large fraction of people writing papers and attending conferences haven’t been doing it for 10 years. 4. Old papers aren’t on the internet. This is huge effect for any papers prior to 1995 (or so). The ease of examining a paper greatly influences the ability of an author to read and understand it. There are a number of journals which essentially have “internet access for the privileged elite who are willing to pay”. In my experience, this is only marginally better than having them stuck in the library. 5. The recent past is more relevant to the present than the far past. There is a lot of truth in this—people discover and promote various problems or techniques which take off for awhile, until their turn to be forgotten arrives. Should we be disturbed by this forgetting? There are a few good effects. For example, when people forget, they reinvent, and sometimes they reinvent better. Nevertheless, it seems like the effect of forgetting is bad overall, because it causes wasted effort. There are two implications: 1. For paper writers, it is very common to overestimate the value of a paper, even though we know that the impact of most papers is bounded in time. Perhaps by looking at those older papers, we can get an idea of what is important in the long term. For example, looking at my own older citations, simplicity is it. If you want a paper to have a long term impact, it needs to have a simple algorithm, analysis method, or setting. Fundamentally, only those things which are teachable survive. Was your last paper simple? Could you teach it in a class? Are other people going to start doing so? Are the review criteria promoting the papers which a hope of survival? 2. For conference organizers, it’s important to understand the way science has changed. Originally, you had to be a giant to succeed at science. Then, you merely had to stand on the shoulders of giants to succeed. Now, it seems that even the ability to peer over the shoulders of people standing on the shoulders of giants might be helpful. This is generally a good thing, because it means more people can help on a very hard task. Nevertheless, it seems that much of this effort is getting wasted in forgetting, because we do not have the right mechanisms to remember the information. Which is going to be the first conference to switch away from an ordered list of papers to something with structure? Wouldn’t it be great if all the content at a conference was organized in a wikipedia-like easy-for-outsiders-to-understand style? Best Practices for Collaboration Many people, especially students, haven’t had an opportunity to collaborate with other researchers. Collaboration, especially with remote people can be tricky. Here are some observations of what has worked for me on collaborations involving a few people. 1. Travel and Discuss Almost all collaborations start with in-person discussion. This implies that travel is often necessary. We can hope that in the future we’ll have better systems for starting collaborations remotely (such as blogs), but we aren’t quite there yet. 2. Enable your collaborator. A collaboration can fall apart because one collaborator disables another. This sounds stupid (and it is), but it’s far easier than you might think. 1. Avoid Duplication. Discovering that you and a collaborator have been editing the same thing and now need to waste time reconciling changes is annoying. The best way to avoid this to be explicit about who has write permission to what. Most of the time, a write lock is held for the entire document, just to be sure. 2. Don’t keep the write lock unnecessarily. Some people are perfectionists so they have a real problem giving up the write lock on a draft until it is perfect. This prevents other collaborators from doing things. Releasing write lock (at least) when you sleep, is a good idea. 3. Send all necessary materials. Some people try to save space or bandwidth by not passing ‘.bib’ files or other auxiliary components. Forcing your collaborator to deal with the missing subdocument problem is disabling. Space and bandwidth are cheap while your collaborators time is precious. (Sending may be pass-by-reference rather than attach-to-message in most cases.) 4. Version Control. This doesn’t mean “use version control software”, although that’s fine. Instead, it means: have a version number for drafts passed back and forth. This means you can talk about “draft 3″ rather than “the draft that was passed last tuesday”. Coupled with “send all necessary materials”, this implies that you naturally backup previous work. 3. Be Generous. It’s common for people to feel insecure about what they have done or how much “credit” they should get. 1. Coauthor standing. When deciding who should have a chance to be a coauthor, the rule should be “anyone who has helped produce a result conditioned on previous work”. “Helped produce” is often interpreted too narrowly—a theoretician should be generous about crediting experimental results and vice-versa. Potential coauthors may decline (and senior ones often do so). Control over who is a coauthor is best (and most naturally) exercised by the choice of who you talk to. 2. Author ordering. Author ordering is the wrong thing to worry about, so don’t. The CS theory community has a substantial advantage here because they default to alpha-by-author ordering, as is understood by everyone. 3. Who presents. A good default for presentations at a conference is “student presents” (or suitable generalizations). This gives young people a real chance to get involved and learn how things are done. Senior collaborators already have plentiful alternative methods to present research at workshops or invited talks. 4. Communicate by default Not cc’ing a collaborator is a bad idea. Even if you have a very specific question for one collaborator and not another, it’s a good idea to cc everyone. In the worst case, this is a few-second annoyance for the other collaborator. In the best case, the exchange answers unasked questions. This also prevents “conversation shifts into subjects interesting to everyone, but oops! you weren’t cced” problem. These practices are imperfectly followed even by me, but they are a good ideal to strive for.
{"url":"http://hunch.net/?cat=18","timestamp":"2014-04-19T07:33:37Z","content_type":null,"content_length":"109922","record_id":"<urn:uuid:f782cb5e-cde0-4cec-be40-ac73fbab56e3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
HCUP Nationwide Inpatient Sample Design of the HCUP Nationwide Inpatient Sample, 2003 June 14, 2005 Skip Table of Contents Skip Index of Tables Skip Index of Figures The Nationwide Inpatient Sample (NIS) is one of a family of databases and software tools developed as part of the Healthcare Cost and Utilization Project (HCUP), a Federal-State-Industry partnership sponsored by the Agency for Healthcare Research and Quality (AHRQ). The NIS is the largest nationwide all-payer hospital inpatient care database in the U.S. Each year the NIS contains data from approximately seven to eight million hospital stays all discharge data from nearly 1,000 hospitals selected from HCUP State Inpatient Databases (SID) data. The HCUP NIS team developed the NIS to provide analyses of hospital utilization, charges, and quality of care across the United States. This report describes the NIS sample and weights, summarizes the contents of the 2003 NIS, and discusses data analysis issues. Previous NIS releases covered 1988 through 2002. This document highlights cumulative information for all previous years to provide a longitudinal view of the database. Once again, we have enhanced the nationwide representation of the sample by incorporating data from additional HCUP State Partners. The 2003 NIS includes data from 37 states, two more than the 2002 NIS. Hospital Sample Design The NIS sampling frame included all community, non-rehabilitation hospitals in the SID that could be matched to the corresponding American Hospital Association (AHA) Annual Survey data. Based on data from 37 states, there were 3,763 hospitals in the 2003 sampling frame, a 5.4% increase from the 2002 NIS. The target universe includes all acute care discharges from non-rehabilitation, community hospitals in the United States. There were 4,836 hospitals in the target universe in 2003. The NIS is a stratified probability sample of hospitals in the frame, with sampling probabilities calculated to select 20% of the universe contained in each stratum. The overall objective was to select a sample of hospitals representative of the target universe. With this objective in mind, we defined NIS sampling strata based on the following five hospital characteristics contained in the AHA hospital files: 1. Geographic Region Northeast, Midwest, West, and South 2. Control public, private not-for-profit, and proprietary 3. Location urban or rural 4. Teaching Status teaching or non-teaching 5. Bed Size small, medium, and large. After stratifying the universe of hospitals, we randomly selected up to 20% of the total number of U.S. hospitals within each stratum. If a stratum contained too few frame hospitals, then all were selected for the NIS, subject to sampling restrictions specified by states. The resulting sample for 2003 included 994 hospitals, representing 20.6% of the total hospital universe of 4,836 hospitals. Changes to Sampling and Weighting Strategy Beginning with the 1998 NIS Given the increase in the number of contributing states, the NIS team evaluated and revised the sampling and weighting strategy for 1998 and subsequent data years in order to best represent the U.S. These changes included: • Revising definitions of the strata variables. • Excluding rehabilitation hospitals from the NIS hospital universe. • Changing the calculation of hospital universe discharges for the weights. Also, beginning with the 1998 NIS sampling procedures, all frame hospitals within a stratum have an equal probability of selection for the sample, regardless of whether they had appeared in prior NIS samples. This deviates from the procedure used for earlier samples, which maximized the longitudinal component of the NIS series. A full description of the evaluation and revision of the NIS sampling strategy for 1998 and subsequent data years can be found in the special report, Changes in NIS Sampling and Weighting Strategy for 1998. This document is available on the 2003 NIS Documentation CD-ROM and on the HCUP User Support Website at http://www.hcup-us.ahrq.gov/db/nation/nis/nisrelatedreports.jsp. Hospital Sampling Frame The 2003 NIS sampling frame included data provided by 37 HCUP State Partners. On average, 97% of the hospital universe is included in the sampling frame for all but four of these states. Two State Partners Hawaii and South Carolina limited the number of state hospitals included in the frame to between 70 and 85 percent. (Restrictions from other states did not have an appreciable effect on the percentage of hospitals in the sampling frame.) One State Partner, Texas, supplied data from only 73% of the state’s hospitals because some Texas hospitals, mostly small rural facilities, are exempt from statutory reporting requirements. Finally, we dropped 33 Michigan hospitals that did not report total charges from the sampling frame, leaving 70% of Michigan hospitals in the frame. While 20% of the hospitals from each region are selected for the NIS, the comprehensiveness of the sampling frame varies by region. In the Northeast, 91.9% of hospitals are included in the sampling frame, compared with 90.4% in the Midwest (a 14% increase from 2002), 78.6% in the West (an 8% increase from 2002); and 63.1% in the South. Because the NIS sampling frame has a disproportionate representation of the more populous states and includes hospitals with more annual discharges, its comprehensiveness in terms of discharges is higher. The states in the NIS sampling frame contain 97.6% of the population in the Northeast, 99.0% in the Midwest, 92.0% in the West, and 81.3% in the South. Overall, the 2003 NIS sampling frame comprises 77.8% of all U.S. hospitals and covers 90.8% of the U.S. population. Final Hospital Sample The final 2003 sample included 7,977,728 discharges from 994 hospitals selected from all 37 frame states. Hospitals were sampled throughout each region of the United States. In the Northeast and Midwest, where a higher proportion of states are represented, relatively fewer hospitals are sampled from each state than in the South and West, where the proportion of states in the NIS is lower. Since the inception of the original 1988 NIS , its scope has expanded across several dimensions: • The number of states has increased from 8 to 37. • The number of hospitals has increased from 759 to 994. • The number of discharges has increased from 5.2 million to nearly 8 million. The additional states have enhanced the representation of the nationwide population. The 2003 NIS includes data from 37 states two more states than the 2002 NIS and 29 more states than the original 1988 NIS. The states added to the 2003 NIS have increased the percentage of population represented in the Midwest and the West. With the addition of Indiana, the percentage of Midwest population represented in the NIS increased from 90% for 2002 to 99% for 2003. With the return of Arizona, the percentage of Western population represented in the NIS increased from 84% to 92 percent. Ideally, relationships among outcomes and their correlates estimated from the NIS should accurately represent all U.S. hospitals. However, when creating nationwide estimates, it is advisable to check these estimates against other data sources, if available. For example, the National Hospital Discharge Survey (http://www.cdc.gov/nchs/about/major/hdasd/nhds.htm) can provide benchmarks against which to verify national estimates for hospitalizations with more than 5,000 cases. The NIS Comparison Report assesses the accuracy of NIS estimates. The most recent report is available on the NIS Documentation CD-ROM and provides a comparison of a previous year’s NIS with other data sources. The updated report for the current NIS will be posted on the HCUP User Support Website (http://www.hcup-us.ahrq.gov/db/nation/nis/nisrelatedreports.jsp) as soon as it is completed. Two non-overlapping 10% subsamples of discharges were drawn from the NIS file for several reasons pertaining to data analysis. One reason for creating the subsamples was to reduce processing costs for selected studies that will not require the entire NIS. Another reason is that the two subsamples may be used to validate models and obtain unbiased estimates of standard errors. The subsamples were selected by drawing every tenth discharge, starting with two different, randomly-selected starting points. Having a different starting point for each of the two subsamples guaranteed that the resulting subsamples would not overlap. Sample Weights It is necessary to incorporate sample weights to obtain nationwide estimates. Therefore, sample weights were developed separately for hospital- and discharge-level analyses. Within a stratum, each NIS sample hospital's universe weight is equal to the number of universe hospitals it represents during the year. Since 20% of the AHA universe hospitals in each stratum are sampled when possible, the hospital weights (HOSPWT) are usually near five. The calculations for discharge-level sampling weights (DISCWT) are similar to the calculations for hospital-level sampling weights. In the 10% subsamples, each discharge has a 10% chance of being drawn. Therefore, the discharge weights (DISCWT10) are multiplied by 10 for each of the subsamples. Because the 10% subsamples are based on samples of discharges, each hospital is represented in the subsamples. Thus, no adjustment is required for the hospital weight when using the subsamples. Weight Data Elements To produce nationwide estimates, the discharge weights should be used to extrapolate sampled discharges in the Core file to the discharges from all U.S. community, non-rehabilitation hospitals. For the 2000 NIS, DISCWT should be used to create nationwide estimates for all analyses except those that involve total charges, and DISCWTCHARGE should be used to create nationwide estimates of total charges. For all other years of the NIS, including the 2003 NIS, DISCWTCHARGE is not required, and DISCWT (DISCWT_U prior to the 1998 NIS) should be used to create all estimates. For a 10% subsample file, use the corresponding subsample discharge weight, DISCWT10 (D10CWT_U prior to the 1998 NIS) or DISCWTCHARGE10. Data Analysis Missing Values Missing data values can compromise the quality of estimates. If the outcome for discharges with missing values is different from the outcome for discharges with valid values, then sample estimates for that outcome will be biased and will not accurately represent the discharge population. Also, when estimating totals for non-negative variables with missing values, sums would tend to be underestimated because the cases with missing values would be omitted from the calculations. Several techniques are available to help overcome this bias. One strategy is to impute acceptable values to replace missing values. Another strategy is to use sample weight adjustments to compensate for missing values. Such data preparation and adjustment is outside the scope of this report. However, if necessary, it should be done before analyzing data with statistical procedures. Variance Calculations It may be important for researchers to calculate a measure of precision for some estimates based on the NIS sample data. Variance estimates must take into account both the sampling design and the form of the statistic. Standard formulas for a stratified, single-stage cluster sample without replacement may be used to calculate statistics and their variances in most applications. The NIS database includes a Hospital Weights file with variables required to calculate finite population statistics. In addition to the sample weights described earlier, hospital identifiers (Primary Sampling Units or PSUs), stratification variables, and stratum-specific totals for the numbers of discharges and hospitals are included so that finite-population corrections (FPCs) can be applied to variance estimates. Examples of the use of SAS, SUDAAN, and STATA to calculate variances in the NIS are presented in the special report: Calculating Nationwide Inpatient Sample Variances. This report is available on the NIS Documentation CD-ROM and on the HCUP User Support Website at www.hcup-us.ahrq.gov. Longitudinal Analyses All frame hospitals within a stratum have an equal probability of being selected for the sample, regardless of whether they have appeared in prior NIS samples. This deviates from the procedure used for earlier samples, prior to data year 1998, which maximized the longitudinal component of the NIS series. Hospitals that continue in the NIS for multiple consecutive years are a subset of the NIS hospitals for any one of those years. Consequently, longitudinal analyses of hospital-level outcomes may be biased if they are based on any subset of NIS hospitals limited to continuous NIS membership. The analyses may be more efficient (e.g., produce more precise estimates) if they account for the potential correlation between repeated measures on the same hospital over time. Studying Trends When studying trends over time using the NIS, be aware that the sampling frame for the NIS changes almost annually, i.e., more states have been added over time. Estimates from earlier years of the NIS may be subject to more sampling bias than later years of the NIS. In order to facilitate analysis of trends using multiple years of NIS data, an alternate set of NIS discharge and hospital weights for the 1988-1997 HCUP NIS were developed. These alternative weights were calculated in the same way as the weights for the 1998 and later years of the NIS. The special report, Using the HCUP Nationwide Inpatient Sample to Estimate Trends, includes details regarding the alternate weights and other recommendations for trends analysis. Both the NIS Trends Report and the alternative weights are available on the HCUP-US web site under Methods Series (http://www.hcup-us.ahrq.gov/reports/methods/methods_topic.jsp). The NIS Trends Report is also available on the NIS Documentation CD-ROM. The Nationwide Inpatient Sample (NIS) is one of a family of databases and software tools developed as part of the Healthcare Cost and Utilization Project (HCUP), a Federal-State-Industry partnership sponsored by the Agency for Healthcare Research and Quality (AHRQ). The NIS is the largest nationwide all-payer hospital inpatient care database in the U.S. Each year the NIS contains data from approximately seven to eight million hospital stays all discharge data from nearly 1,000 hospitals selected from HCUP State Inpatient Databases (SID) data. The HCUP NIS team developed the NIS to facilitate analyses of hospital utilization, charges, and quality of care across the United States. Potential research issues focus on both discharge- and hospital-level outcomes. Discharge outcomes of interest include trends in inpatient treatments with respect to: • Frequency • Charges • Lengths of stay • Effectiveness • Quality of care • Appropriateness • Access to hospital care. Hospital outcomes of interest include: • Mortality rates • Complication rates • Patterns of care • Diffusion of technology • Trends toward specialization. These and other outcomes are of interest for the nation as a whole and for policy-relevant inpatient subgroups defined by geographic regions, patient demographics, hospital characteristics, physician characteristics, and pay sources. This report describes the NIS sample and weights, summarizes the contents of the 2003 NIS, and discusses data analysis issues. The 2003 NIS includes data for calendar year 2003, while previous NIS releases covered 1988 through 2002. This document highlights cumulative information for all previous years to provide a longitudinal view of the database. Table 1 displays the number of states, hospitals, and discharges in each year and reveals the increase in the number of participating states over time. The two additional states in the 2003 NIS have enhanced the nationwide representation of the sample, making this the most comprehensive NIS to date. Note that one state that appeared in previous years of the NIS, Maine, was unable to provide data for the 2003 NIS. Table 1: Number of NIS States, Hospitals, and Discharges, by Year │ Calendar Year │ States in the Frame │ Number of States │ Sample Hospitals │ Sample Discharges │ │ │ │ │ │ (Millions) │ │ 1988 1992 │ Arizona, California, Colorado, Florida, Iowa, Illinois, Massachusetts, New Jersey, Pennsylvania, Washington, and │ 8 11 │ 759 875 │ 5.2 6.2 │ │ │ Wisconsin │ │ │ │ │ 1993 │ Add Connecticut, Kansas, Maryland, New York, Oregon, and South Carolina │ 17 │ 913 │ 6.5 │ │ 1994 │ No new additions │ 17 │ 904 │ 6.4 │ │ 1995 │ Add Missouri and Tennessee │ 19 │ 938 │ 6.7 │ │ 1996 │ No new additions │ 19 │ 906 │ 6.5 │ │ 1997 │ Add Georgia, Hawaii, and Utah │ 22 │ 1012 │ 7.1 │ │ 1998 │ No new additions │ 22 │ 984 │ 6.8 │ │ 1999 │ Add Maine and Virginia │ 24 │ 984 │ 7.2 │ │ 2000 │ Add Kentucky, North Carolina, Texas, and West Virginia │ 28 │ 994 │ 7.5 │ │ 2001 │ Add Michigan, Minnesota, Nebraska, Rhode Island, and Vermont │ 33 │ 986 │ 7.5 │ │ 2002 │ Add Nevada, Ohio, and South Dakota; Drop Arizona │ 35 │ 995 │ 7.9 │ │ 2003 │ Add Arizona, Indiana and New Hampshire; Drop Maine │ 37 │ 994 │ 8.0 │ The hospital universe is defined as all hospitals located in the U.S. open during any part of the calendar year and designated as community hospitals in the American Hospital Association (AHA) Annual Survey. The AHA defines community hospitals as follows: “All nonfederal short-term general and other specialty hospitals, excluding hospital units of institutions.” Consequently, Veterans Hospitals and other federal facilities (Department of Defense and Indian Health Service) are excluded. Beginning with the 1998 NIS, community rehabilitation hospitals were excluded from the universe because the type of care provided and the characteristics of the discharges from these facilities were markedly different from other short-term hospitals. Figure 1 displays the number of universe hospitals for each year based on the AHA Annual Survey. Between the years 1988-2001, a steady decline in the number of hospitals is evident. However, beginning in 2002, the number of universe hospitals has Figure 1: Hospital Universe, by Year^1 (text version) Hospital Merges, Splits, and Closures All U.S. hospital entities designated as community hospitals in the AHA hospital file, except rehabilitation hospitals, were included in the hospital universe. Therefore, when two or more community hospitals merged to create a new community hospital, the original hospitals and the newly-formed hospital were all considered separate hospital entities in the universe during the year they merged. Likewise, if a community hospital split, the original hospital and all newly-created community hospitals were treated as separate entities in the universe during the year this occurred. Finally, community hospitals that closed during a given year were included in the hospital universe, as long as they were in operation during some part of the calendar year. Stratification Variables Given the increase in the number of contributing states, the NIS team evaluated and revised the sampling and weighting strategy for 1998 and subsequent data years, in order to best represent the U.S. This included changes to the definitions of the strata variables, the exclusion of rehabilitation hospitals from the NIS hospital universe, and a change to the calculation of hospital universe discharges for the weights. A full description of this process can be found in the special report on Changes in NIS Sampling and Weighting Strategy for 1998. This report is available on the 2003 NIS Documentation CD-ROM and on the HCUP User Support Website at www.hcup-us.ahrq.gov. A description of the sampling procedures and definitions of strata variables used from 1988 through 1997 can be found in the special report: Design of the HCUP Nationwide Inpatient Sample, 1997. This report is available on the 1997 NIS Documentation CD-ROM and on the HCUP User Support Website. The NIS sampling strata were defined based on five hospital characteristics contained in the AHA hospital files. Beg inning with the 1998 NIS, the stratification variables were defined as follows: 1. Geographic Region Northeast, Midwest, West, and South . This is an important stratification variable because practice patterns have been shown to vary substantially by region. For example, lengths of stay tend to be longer in East Coast hospitals than in West Coast hospitals. Figure 2 highlights the NIS states in gray, and Table 2 lists the states that comprise each region. Figure 2: NIS States, by Region Table 2: All States, by Region │ Region │ States │ │ 1: │ Connecticut , Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont │ │ Northeast │ │ │ 2: Midwest │ Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin │ │ 3: South │ Alabama, Arkansas, Delaware, District of Columbia, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, │ │ │ Virginia, West Virginia │ │ 4: West │ Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, Wyoming │ 2. Control government nonfederal (public), private not-for-profit (voluntary), and private investor-owned (proprietary) . Depending on their control, hospitals tend to have different missions and different responses to government regulations and policies. When there were enough hospitals of each type to allow it, we stratified hospitals as public, voluntary, and proprietary. We used this stratification for Southern rural, Southern urban non-teaching, and Western urban non-teaching hospitals. For smaller strata the Midwestern rural and Western rural hospitals we used a collapsed stratification of public versus private, with the voluntary and proprietary hospitals combined to form a single “private” category. For all other combinations of region, location, and teaching status, no stratification based on control was advisable, given the number of hospitals in these cells. 3. Location urban or rural. Government payment policies often differ according to this designation. Also, rural hospitals are generally smaller and offer fewer services than urban hospitals. 4. Teaching Status teaching or non-teaching . The missions of teaching hospitals differ from non-teaching hospitals. In addition, financial considerations differ between these two hospital groups. Currently, the Medicare Diagnosis Related Group (DRG) payments are uniformly higher to teaching hospitals than to non-teaching hospitals. We considered a hospital to be a teaching hospital if it has an AMA-approved residency program, is a member of the Council of Teaching Hospitals (COTH), or has a ratio of full-time equivalent interns and residents to beds of .25 or higher^2. 5. Bed Size small, medium, and large. Bed size categories were based on the number of hospital beds and were specific to the hospital's region, location, and teaching status, as shown in Table 3. We chose the bed size cutoff points so that approximately one-third of the hospitals in a given region, location, and teaching status combination would fall within each bed size category (small, medium or large). We used different cutoff points for rural, urban non-teaching, and urban teaching hospitals because hospitals in those categories tend to be small, medium, and large, respectively. For example, a medium-sized teaching hospital would be considered a rather large rural hospital. Further, the size distribution is different among regions for each of the urban/teaching categories. For example, teaching hospitals tend to be smaller in the West than they are in the South. Using differing cutoff points in this manner avoids strata containing small numbers of hospitals. We did not split rural hospitals according to teaching status, because rural teaching hospitals were rare. For example, in 2003, rural teaching hospitals comprised only 1% of the total hospital universe. We defined the bed size categories within location and teaching status because they would otherwise have been redundant. Rural hospitals tend to be small; urban non-teaching hospitals tend to be medium-sized; and urban teaching hospitals tend to be large. Yet it was important to recognize gradations of size within these types of hospitals. For example, in serving rural discharges, the role of "large" rural hospitals (particularly rural referral centers) often differs from the role of "small" rural hospitals. To further ensure accurate geographic representation, implicit stratification variables included state and three-digit ZIP Code (the first three digits of the hospital's five-digit ZIP Code). Within each stratum, we sorted hospitals by three-digit ZIP Code prior to systematic random sampling. Table 3. Bed Size Categories, by Region │ │ Hospital Bed Size │ │ Location and Teaching Status ├───────┬──────────┬───────┤ │ │ Small │ Medium │ Large │ │ NORTHEAST │ │ Rural │ 1-49 │ 50-99 │ 100+ │ │ Urban, non-teaching │ 1-124 │ 125-199 │ 200+ │ │ Urban, teaching │ 1-249 │ 250-424 │ 425+ │ │ MIDWEST │ │ Rural │ 1-29 │ 30-49 │ 50+ │ │ Urban, non-teaching │ 1-74 │ 75-174 │ 175+ │ │ Urban, teaching │ 1-249 │ 250-374 │ 375+ │ │ SOUTH │ │ Rural │ 1-39 │ 40-74 │ 75+ │ │ Urban, non-teaching │ 1-99 │ 100-199 │ 200+ │ │ Urban, teaching │ 1-249 │ 250-449 │ 450+ │ │ WEST │ │ Rural │ 1-24 │ 25-44 │ 45+ │ │ Urban, non-teaching │ 1-99 │ 100-174 │ 175+ │ │ Urban, teaching │ 1-199 │ 200-324 │ 325+ │ The universe of hospitals was established as all community hospitals located in the U.S. with the exception, beginning in 1998, of rehabilitation hospitals. However, some hospitals do not supply data to HCUP. Therefore, we constructed the NISsampling frame from the subset of universe hospitals that released their discharge data for research use. When the 2003 sample was drawn, the Agency for Healthcare Research and Quality (AHRQ) had agreements with 37 HCUP State Partner organizations to include their data in the NIS. The number of State Partners contributing data to the NIS has expanded over the years, as shown in Table 1. As a result, the number of hospitals included in the NIS sampling frame has also increased over the years, as displayed in Figure 3. The list of the entire frame of hospitals was composed of all AHA community hospitals in each of the frame states that could be matched to the discharge data provided to HCUP. If an AHA community hospital could not be matched to the discharge data provided by the data source, it was eliminated from the sampling frame (but not from the target universe). Figure 3: NIS Hospital Sampling Frame, by Year (text version) Figure 4 illustrates the number of hospitals in the universe, frame, and sample and the percentage of universe hospitals in the frame for each state in the sampling frame for 2003. In most cases, the difference between the universe and the frame represents the difference in the number of community, non-rehabilitation hospitals in the 2003 AHA Annual Survey of Hospitals and the hospitals for which data were supplied to HCUP. For example, for Connecticut, Massachusetts, Minnesota, North Carolina, and Texas, the data organization supplied fewer hospitals than report to the AHA. The largest discrepancy between HCUP data and AHA data is in Texas. As is evident in Figure 4, only 303 out of 414 Texas community, non-rehabilitation hospitals supplied data to HCUP for 2003. Certain Texas state-licensed hospitals are exempt from statutory reporting requirements. Exempt hospitals include: • Hospitals that do not seek insurance payment or government reimbursement • Rural providers. The Texas statute that exempts rural providers from the requirement to submit data defines a hospital as a rural provider if it: I. Is located in a county that: A. Has a population estimated by the United States Bureau of the Census to be not more than 35,000 as of July 1 of the most recent year for which county population estimates have been published; B. Has a population of more than 35,000, but does not have more than 100 licensed hospital beds and is not located in an area that is delineated as an urbanized area by the United States Bureau of the Census; and II. Is not a state-owned hospital or a hospital that is managed or directly or indirectly owned by an individual, association, partnership, corporation, or other legal entity that owns or manages one or more other hospitals. These exemptions apply primarily to smaller rural public hospitals and, as a result, these facilities are less likely to be included in the sampling frame than other Texas hospitals. While the number of hospitals omitted appears sizable, those available for the NIS include 91.8% of inpatient discharges from Texas universe hospitals. However, for Georgia, Hawaii, Indiana, Michigan, Nebraska, South Carolina, and South Dakota, we had to drop several HCUP hospitals from the frame, as described below. The Georgia frame contains two fewer hospitals than the state universe. One hospital was excluded because of sampling restrictions stipulated by the State Partner, and one hospital identified in AHA data was not included in the data supplied to HCUP. The Hawaii frame contains seven fewer hospitals than the state universe. Four hospitals were excluded because of sampling restrictions stipulated by the State Partner, and three hospitals identified in AHA data were not included in the data supplied to HCUP. Similarly, the Indiana frame contains four fewer hospitals than the state universe. One hospital was excluded because of sampling restrictions stipulated by the State Partner, and three hospitals identified in AHA data were not included in the data supplied to HCUP. The Michigan frame contains 43 fewer hospitals than the state universe. The NIS team decided to drop 33 hospitals from the frame that did not provide total charges. Our reasoning is that charges represent a critical outcome variable in the NIS. By dropping these hospitals, we avoid having to adjust the weights or create another weighting variable specifically for total charges. These hospitals are fairly evenly distributed by hospital type. There are no sampling strata in the state containing only hospitals without charges. The total charge data reported for Michigan is similar to total charge data reported by other Midwestern states. Thus, there does not seem to be an obvious bias in the type of cases for which charges are reported. The stratification and weighting scheme will adjust for the hospitals that are being dropped. In addition, 10 Michigan hospitals identified in AHA data were not included in the data supplied to HCUP. The Nebraska frame contains five fewer hospitals than the state universe. One hospital was excluded because of sampling restrictions stipulated by the State Partner. We dropped three additional hospitals from the sampling frame because they had incomplete data and were missing a high percentage of Medicare Discharges. One hospital identified in AHA data was not included in the data supplied to HCUP. The South Carolina frame contains nine fewer hospitals than the state universe. Seven hospitals were excluded because of sampling restrictions stipulated by South Carolina, and two hospitals identified in AHA data were not included in the data supplied to HCUP. Likewise, the South Dakota frame contains five fewer hospitals than the South Dakota universe. Two hospitals were excluded because of sampling restrictions stipulated by South Dakota, while three hospitals identified in AHA data were not included in the data supplied to HCUP. Figure 4: Number of Hospitals in the 2003 Universe, Frame, and Sample for Frame States (text version) Part A: Arizona North Carolina Figure 4: Number of Hospitals in the 2003 Universe, Frame, and Sample for Frame States (text version) Part B: Nebraska West Virginia Design Considerations The NIS is a stratified probability sample of hospitals in the frame, with sampling probabilities calculated to select 20% of the universe of U.S. community, non-rehabilitation hospitals contained in each stratum. This sample size was determined by AHRQ based on their experience with similar research databases. The overall design objective was to select a sample of hospitals that accurately represents the target universe, which includes hospitals outside the frame (i.e., having zero probability of selection). Moreover, this sample was to be geographically dispersed, yet drawn only from data supplied by HCUP Partners. It should be possible, for example, to estimate DRG-specific average lengths of stay across all U.S. hospitals using weighted average lengths of stay, based on averages or regression coefficients calculated from the NIS. Ideally, relationships among outcomes and their correlates estimated from the NIS should hold across all U.S. hospitals. However, the 2003 NIS includes data from only 37 states. Therefore, it is advisable to verify your estimates against other data sources, if available. For example, the National Hospital Discharge Survey (http://www.cdc.gov/nchs/products/pubs/pubd/ series/sr13/ser13.htm) can provide benchmarks against which to check your national estimates for hospitalizations with more than 5,000 cases. The NIS Comparison Report assesses the accuracy of NIS estimates. The most recent report is available on the NIS Documentation CD-ROM and provides a comparison of a previous year’s NIS with other data sources. The updated report for the current NIS will be posted on the HCUP User Support Website (http://www.hcup-us.ahrq.gov/db/nation/nis/nisrelatedreports.jsp) as soon as it is completed. The NIS team considered alternative stratified sampling allocation schemes. However, allocation proportional to the number of hospitals was preferred for several reasons: • AHRQ researchers wanted a simple, easily understood sampling methodology. The concept that the NIS sample could represent a "miniaturization" of the hospital universe was appealing. There were, however, obvious geographic limitations imposed by data availability. • AHRQ statisticians considered other optimal allocation schemes, including sampling hospitals with probabilities proportional to size (number of discharges). They ultimately concluded that sampling with probability proportional to the number of hospitals was preferable. While this approach was admittedly less efficient, the extremely large sample sizes yield good estimates. Furthermore, because the data are to be used for purposes other than producing nationwide estimates, (e.g., regression modeling), it is critical that all hospital types, including small hospitals, are adequately represented. Overview of the Sampling Procedure After stratifying the universe of hospitals, we randomly selected up to 20% of the total number of U.S. hospitals within each stratum. If too few frame hospitals appeared in a cell, then we selected all frame hospitals for the NIS, subject to sampling restrictions specified by states. To simplify variance calculations, we drew at least two hospitals from each stratum. If fewer than two frame hospitals were available in a stratum, then we merged it with an "adjacent" cell containing hospitals with similar characteristics. We drew a systematic random sample of hospitals from each stratum, after sorting hospitals by stratum, then by the three-digit ZIP Code (the first three digits of the hospital's five-digit ZIP Code) within each stratum, and then by a random number within each three-digit ZIP Code. These sorts ensured further geographic generalizability of hospitals within the frame states, as well as random ordering of hospitals within three-digit ZIP Codes. Generally, three-digit ZIP Codes that are proximal in value are geographically near one another within a state. Furthermore, the U.S. Postal Service locates regional mail distribution centers at the three-digit level. Thus, the boundaries tend to be a compromise between geographic size and population size. We drew two non-overlapping 10% subsamples of discharges from the NIS file for each year. The subsamples were selected by drawing every tenth discharge starting with two different starting points (randomly selected between 1 and 10). Having a different starting point for each of the two subsamples guaranteed that they would not overlap. Discharges were sampled so that 10% of each hospital's discharges in each quarter were selected for each of the subsamples. The two samples can be combined to form a single, generalizable 20% subsample of discharges. Change to Hospital Sampling Procedure Beginning with the 1998 NIS Beginning with the 1998 NIS sampling procedures, all frame hospitals within a stratum have an equal probability of selection for the sample, regardless of whether they appeared in prior NIS samples. This deviates from the procedure used for earlier samples, which maximized the longitudinal component of the NIS series. Further description of the sampling procedures for earlier releases of the NIS can be found in the special report: Design of the HCUP Nationwide Inpatient Sample, 1997. This report is available on the 1997 NIS Documentation CD-ROM and on the HCUP User Support Website at www.hcup-us.ahrq.gov. For a description of the development of the new sample design for 1998 and subsequent data years, see the special report: Changes in NIS Sampling and Weighting Strategy for 1998. This report is available on the 2003 NIS Documentation CD-ROM and on the HCUP User Support Website at http:// Zero-Weight Hospitals Beginning with the 1993 NIS, the NIS samples no longer contain zero-weight hospitals. For a description of zero-weight hospitals in the 1988-1992 samples, see the special report: Design of the HCUP Nationwide Inpatient Sample, Release 1. This report is available on the 1988-1992 NIS Documentation CD-ROM. Figure 5 depicts the numbers of hospitals sampled each year, and Figure 6 presents the numbers of discharges in each year of the NIS. For the 1988-1992 NIS, zero-weight hospitals were maintained to provide a longitudinal sample. Therefore, two figures exist for each of these years: one number for the regular NIS sample and another number for the total sample. Figure 7 displays the weighted number of discharges sampled each year. Note that this number decreased from 35,408,207 in 1997 to 34,874,001 in 1998, a difference of 534,206 (1.5%). This slight decline is associated with two changes to the 1998 NIS design: the exclusion of community, rehabilitation hospitals from the hospital universe, and a change to the calculation of hospital universe discharges for the weights. Prior to 1998, we calculated discharges as the sum of total facility admissions (AHA data element ADMTOT), which includes long-term-care admissions, plus births (AHA data element BIRTHS) reported for each U.S. community hospital in the AHA Annual Survey. Beginning in 1998, we calculate discharges as the sum of hospital admissions (AHA data element ADMH) plus births for each U.S. community, non-rehabilitation hospital. This number is more consistent with the number of discharges we receive from the state data sources. We also substitute total facility admissions if the number of hospital admissions is missing. Without these changes, the weighted number of discharges for 1998 would have been 35,622,743. The exclusion of community, rehabilitation hospitals reduced the number of universe hospitals by 177 and the number of weighted discharges by 214,490. The change in the calculation of discharges reduced the weighted number of discharges by 534,252. Figure 5: Number of Hospitals Sampled, by Year (text version) Figure 6: Number of NIS Discharges, Unweighted, by Year (text version) Figure 7: Number of NIS Discharges, Weighted, by Year (text version) Figure 8 presents a summary of the 2003 NIS hospital sample by geographic region and the number of: • Universe hospitals (Universe) • Frame hospitals (Frame) • Sampled hospitals (Sample) • Target hospitals (Target = 20% of the universe) • Surplus hospitals (Surplus = Sample Target). For example, in 2003, the Northeast region contained 657 hospitals in the universe. It also included 604 hospitals in the frame, of which 134 were drawn for the sample. This was three more than the target sample size of 131 hospitals, resulting in a surplus of three hospitals beyond the target. The total sample exceeded the target by 27 hospitals, with a resulting sample of 20.6% of the total hospital universe. We sampled more than the target number of hospitals in each region because we rounded the target sample size for each stratum up to the next highest integer whenever it was not an Figure 9 summarizes the estimated U.S. population by geographic region on July 1, 2003^3. For each region, the figure reveals: • The estimated U.S. population • The estimated population of states in the 2003 NIS • The percentage of estimated U.S. population included in NIS states. For example, the estimated population of the Northeast region on July 1, 2003 was 54,426,252. On that same date, the estimated population of states in the Northeast region that were included in the 2003 NIS was 53,117,047. This represents 97.6% of the total Northeast region’s population. The percentage of estimated U.S. population included in states in the 2003 NIS was lower in the West (92.0%) and in the South (81.3%). However, the states newly added to the 2003 NIS have substantially increased the percentage of the Midwest and West populations represented. With the addition of Indiana, the Midwest population that is represented grew from 58,308,004 in 2002 to 64,795,510 in 2003 an increase of 11 percent. The West region experienced a similar increase; with the return of Arizona, the represented population rose from 54,817,359 to 61,128,276 an increase of 11.5 percent. Although New Hampshire was added to the Northeast region, those gains were offset by the loss of Maine from this year’s NIS. Overall, the states in the 2003 NIS include an estimated 90.8% of the entire U.S population, up 4% from 2002. Figure 10 depicts the number of discharges in the 2003 sample for each state. The number of sampled discharges from each state ranges from 7,141 (South Dakota) to 871,681 (California). Figure 8: Number of Hospitals in 2003 Universe, Frame, Sample, Target, and Surplus, by Region (text version) Figure 9: Percentage of U.S. Population in 2003 NIS States, by Region (text version) Figure 10: Number of Discharges in the 2003 NIS, by State (text version) To obtain nationwide estimates, we developed discharge weights using the AHA universe as the standard. These were developed separately for hospital- and discharge-level analyses. Hospital-level weights were developed to extrapolate NIS sample hospitals to the hospital universe. Similarly, discharge-level weights were developed to extrapolate NIS sample discharges to the discharge universe. Hospital Weights Hospital weights to the universe were calculated by post-stratification. For each year, hospitals were stratified on the same variables that were used for sampling: geographic region, urban/rural location, teaching status, bed size, and control. The strata that were collapsed for sampling were also collapsed for sample weight calculations. Within each stratum s, each NIS sample hospital's universe weight was calculated as: W[s](universe) = N[s](universe) ÷ N[s](sample) where W[s](universe) was the hospital universe weight, and N[s](universe) and N[s](sample) were the number of community hospitals within stratum s in the universe and sample, respectively. Thus, each hospital's universe weight (HOSPWT) is equal to the number of universe hospitals it represents during that year. Because 20% of the hospitals in each stratum were sampled when possible, the hospital weights are usually near five. Discharge Weights The calculations for discharge-level sampling weights were similar to the calculations for hospital-level sampling weights. The discharge weights are usually constant for all discharges within a stratum. The only exceptions are for strata with sample hospitals that, according to the AHA files, were open for the entire year but contributed less than a full year of data to the NIS. For those hospitals, we adjusted the number of observed discharges by a factor of 4 ÷ Q, where Q was the number of calendar quarters for which the hospital contributed discharges to the NIS. For example, when a sample hospital contributed only two quarters of discharge data to the NIS, the adjusted number of discharges was double the observed number. This adjustment was done only for weighting purposes. The NIS data set includes only the actual (unadjusted) number of observed discharges. With that minor adjustment, each discharge weight is essentially equal to the number of AHA universe discharges that each sampled discharge represents in its stratum. This calculation was possible because the number of total discharges was available for every hospital in the universe from the AHA files. Each universe hospital's AHA discharge total was calculated as the sum of newborns and hospital discharges. Discharge weights to the universe were calculated by post-stratification. Hospitals were stratified just as they were for universe hospital weight calculations. Within stratum s, for hospital i, each NIS sample discharge's universe weight was calculated as: DW[is](universe) = [DN[s](universe) ÷ ADN[s](sample)] * (4 ÷ Q[i]) where DW[is](universe) was the discharge weight; DN[s](universe) represented the number of discharges from community hospitals in the universe within stratum s; ADN[s](sample) was the number of adjusted discharges from sample hospitals selected for the NIS; and Q[i] was the number of quarters of discharge data contributed by hospital i to the NIS (usually Q[i] = 4). Thus, each discharge's weight (DISCWT) is equal to the number of universe discharges it represents in stratum s during that year. Because all discharges from 20% of the hospitals in each stratum were sampled when possible, the discharge weights are usually near five. Weight Data Elements To produce nationwide estimates, use one of the following discharge weights to extrapolate discharges in the NIS Core file to the discharges from all U.S. community, non-rehabilitation hospitals. When using one of the 10% subsample files, use the subsample discharge weight (the discharge weight multiplied by 10). When using the hospital weights with the subsample files, there is no need to multiply the hospital weights because all hospitals will be represented in the subsample files. Thus, the same hospital weight (HOSPWT) can be used for the full NIS and for the subsample files. │ NIS Year │ Name of Discharge Weight on the Core File to Use for Creating Nationwide Estimates │ Name of Discharge Weight on the 10% Subsample File to Use for Creating Nationwide Estimates │ │ 2001-2003 │ • DISCWT for all analyses. │ • DISCWT10 for all analyses. │ │ 2000 │ • DISCWT to create nationwide estimates for all analyses except those that involve │ • DISCWT10 to create nationwide estimates for all analyses except those that involve │ │ │ total charges. │ total charges. │ │ │ • DISCWTCHARGE to create nationwide estimates of total charges. │ • DISCWTCHARGE10 to create nationwide estimates of total charges. │ │ 1998-1999 │ • DISCWT for all analyses. │ • DISCWT10 for all analyses. │ │ 1988-1997 │ • DISCWT_U for all analyses. │ • D10CWT_U for all analyses. │ Missing Values Missing data values can compromise the quality of estimates. If the outcome for discharges with missing values is different from the outcome for discharges with valid values, then sample estimates for that outcome will be biased and inaccurately represent the discharge population. There are several techniques available to help overcome this bias. One strategy is to use imputation to replace missing values with acceptable values. Another strategy is to use sample weight adjustments to compensate for missing values^4. Such data preparation and adjustment is beyond the scope of this report. However, if necessary, imputation or adjustments should be done before analyzing data using statistical procedures. On the other hand, if the cases with and without missing values are assumed to be similar with respect to their outcomes, no adjustment may be necessary for estimates of means and rates. This is because the non-missing cases would be representative of the missing cases. However, some adjustment may still be necessary for the estimates of totals. Sums of data elements containing missing values would be incomplete because cases with missing values would be omitted from the calculations. Variance Calculations It may be important for researchers to calculate a measure of precision for some estimates based on the NIS sample data. Variance estimates must take into account both the sampling design and the form of the statistic. The sampling design consisted of a stratified, single-stage cluster sample. A stratified random sample of hospitals (clusters) was drawn and then all discharges were included from each selected hospital. If hospitals inside the frame are similar to hospitals outside the frame, the sample hospitals can be treated as if they were randomly selected from the entire universe of hospitals within each stratum. Standard formulas for a stratified, single-stage cluster sample without replacement could be used to calculate statistics and their variances in most applications. A multitude of statistics can be estimated from the NIS data. Several computer programs are listed below that calculate statistics and their variances from sample survey data. Some of these programs use general methods of variance calculations (e.g., the jackknife and balanced half-sample replications) that take into account the sampling design. However, it may be desirable to calculate variances using formulas specifically developed for some statistics. These variance calculations are based on finite-sample theory, which is an appropriate method for obtaining cross-sectional, nationwide estimates of outcomes. According to finite-sample theory, the intent of the estimation process is to obtain estimates that are precise representations of the nationwide population at a specific point in time. In the context of the NIS, any estimates that attempt to accurately describe characteristics and interrelationships among characteristics of hospitals and discharges during a specific year should be governed by finite-sample theory. Examples of this would be estimates of expenditure and utilization patterns or hospital market factors. Alternatively, in the study of hypothetical population outcomes not limited to a specific point in time, the concept of a “superpopulation” may be useful. Analysts may be less interested in specific characteristics from the finite population (and time period) from which the sample was drawn, than they are in hypothetical characteristics of a conceptual "superpopulation" from which any particular finite population in a given year might have been drawn. According to this superpopulation model, the nationwide population in a given year is only a snapshot in time of the possible interrelationships among hospital, market, and discharge characteristics. In a given year, all possible interactions between such characteristics may not have been observed, but analysts may wish to predict or simulate interrelationships that may occur in the future. Under the finite-population model, the variances of estimates approach zero as the sampling fraction approaches one. This is the case because the population is defined at that point in time, and because the estimate is for a characteristic as it existed at the time of sampling. This is in contrast to the superpopulation model, which adopts a stochastic viewpoint rather than a deterministic viewpoint. That is, the nationwide population in a particular year is viewed as a random sample of some underlying superpopulation over time. Different methods are used for calculating variances under the two sample theories. The choice of an appropriate method for calculating variances for nationwide estimates depends on the type of measure and the intent of the estimation process. Computer Software for Variance Calculations The hospital weights are useful for producing hospital-level statistics for analyses that use the hospital as the unit of analysis, while the discharge weights are useful for producing discharge-level statistics for analyses that use the discharge as the unit of analysis. The discharge weights may be used to estimate nationwide population statistics. In most cases, computer programs are readily available to perform these calculations. Several statistical programming packages allow weighted analyses^5. For example, nearly all SAS (Statistical Analysis System) procedures incorporate weights. In addition, several statistical analysis programs have been developed to specifically calculate statistics and their standard errors from survey data. Version eight or later of SAS contains procedures (PROC SURVEYMEANS and PROC SURVEYREG) for calculating statistics based on specific sampling designs. STATA and SUDAAN are two other common statistical software packages that perform calculations for numerous statistics arising from the stratified, single-stage cluster sampling design. Examples of the use of SAS, SUDAAN, and STATA to calculate NIS variances are presented in the special report: Calculating Nationwide Inpatient Sample Variances. This report is available on the 2003 NIS Documentation CD-ROM and on the HCUP User Support Website at www.hcup-us.ahrq.gov. For an excellent review of programs to calculate statistics from survey data, visit the following Website: http://www.hcp.med.harvard.edu/statistics/ survey-soft/ . The NIS database includes a Hospital Weights file with variables required by these programs to calculate finite population statistics. In addition to the sample weights described earlier, hospital identifiers (Primary Sampling Units or PSUs), stratification variables, and stratum-specific totals for the numbers of discharges and hospitals are included so that finite-population corrections (FPCs) can be applied to variance estimates. In addition to these subroutines, standard errors can be estimated by validation and cross-validation techniques. Given that a very large number of observations will be available for most analyses, it may be feasible to set aside a part of the data for validation purposes. Standard errors and confidence intervals can then be calculated from the validation data. If the analytic file is too small to set aside a large validation sample, cross-validation techniques may be used. For example, tenfold cross-validation would split the data into ten equal-sized subsets. The estimation would take place in ten iterations. In each iteration, the outcome of interest is predicted for one-tenth of the observations by an estimate based on a model fit to the other nine-tenths of the observations. Unbiased estimates of error variance are then obtained by comparing the actual values to the predicted values obtained in this manner. Finally, it should be noted that a large array of hospital-level variables are available for the entire universe of hospitals, including those outside the sampling frame. For instance, the variables from the AHA surveys and from the Medicare Cost Reports are available for nearly all hospitals. To the extent that hospital-level outcomes correlate with these variables, they may be used to sharpen regional and nationwide estimates. As a simple example, each hospital's number of cesarean sections would be correlated with their total number of deliveries. The figure for cesarean sections must be obtained from discharge data, but the number of deliveries is available from AHA data. Thus, if a regression model can be fit predicting cesarean sections from deliveries based on the NIS data, that regression model can then be used to obtain hospital-specific estimates of the number of cesarean sections for all hospitals in the universe. Longitudinal Analyses Hospitals that continue in the NIS for multiple consecutive years are a subset of the hospitals in the NIS for any one of those years. Consequently, longitudinal analyses of hospital-level outcomes may be biased, if they are based on any subset of NIS hospitals limited to continuous NIS membership. In particular, such subsets would tend to contain fewer hospitals that opened, closed, split, merged, or changed strata. Further, the sample weights were developed as annual, cross-sectional weights, rather than longitudinal weights. Therefore, different weights might be required, depending on the statistical methods employed by the analyst. One approach to consider in hospital-level longitudinal analyses is to use repeated-measure models that allow hospitals to have missing values for some years. However, the data are not actually missing for some hospitals, such as those that closed during the study period. In any case, the analyses may be more efficient (e.g., produce more precise estimates) if they account for the potential correlation between repeated measures on the same hospital over time, yet incorporate data from all hospitals in the sample during the study period. Studying Trends When studying trends over time using the NIS, be aware that the sampling frame for the NIS changes almost annually, i.e., more states have been added over time. Estimates from earlier years of the NIS may be subject to more sampling bias than later years of the NIS. In order to facilitate analysis of trends using multiple years of NIS data, an alternate set of NIS discharge and hospital weights for the 1988-1997 HCUP NIS were developed. These alternative weights were calculated in the same way as the weights for the 1998 and later years of the NIS. The special report, Using the HCUP Nationwide Inpatient Sample to Estimate Trends, includes details regarding the alternate weights and other recommendations for trends analysis. Both the NIS Trends Report and the alternative weights are available on the HCUP-US web site under Methods Series (http://www.hcup-us.ahrq.gov/reports/methods/methods_topic.jsp). The NIS Trends Report is also available on the NIS Documentation CD-ROM. Discharge Subsamples The two nonoverlapping 10% subsamples of discharges were drawn from the NIS file for each year for several reasons pertinent to data analysis. One reason for creating the subsamples was to reduce processing costs for selected studies that will not require the entire NIS. Another reason was that the two subsamples may be used to validate models and obtain unbiased estimates of standard errors. That is, one subsample may be used to estimate statistical models, while the other subsample may be used to test the fit of those models on new data. This is a very important analytical step, particularly in exploratory studies, where one runs the risk of fitting noise in the data. For example, it is well known that the percentage of variance explained by a regression, R 2, is generally overestimated by the data used to fit a model. The regression model could be estimated from the first subsample and then applied to the second subsample. The squared correlation between the actual and predicted value in the second subsample is an unbiased estimate of the model's true explanatory power when applied to new data. In this report, we have described the development and use of the NIS sample and weights and summarized the contents of the 2003 NIS. We have included cumulative information for all previous years to provide a longitudinal view of the database. Once again, the nationwide representation of the sample has been enhanced by incorporating data from additional HCUP State Partners, a total of 37 participants for the year 2003. We have highlighted important considerations for data analysis and have provided references to detailed reports on this subject. 1. Most AHA Annual Survey files do not cover a January-to-December period for every hospital. The numbers of hospitals for 1988-1991 are based on adjusted versions of the files which we created by apportioning the data from adjacent survey files across calendar years. The numbers of hospitals for later years are based on the unadjusted AHA Annual Survey files. 2. We used the following AHA Annual Survey data elements to assign the NIS Teaching Hospital Indicator: AHA Data Element Name = Description [HCUP Data Element Name]. BDH = Number of short term hospital beds [B001H]. BDTOT = Number of total facility beds [B001]. FTRES = Number of full time employees: interns & residents (medical & dental) [E125]. PTRES = Number of part time employees: interns & residents (medical & dental) [E225]. MAPP8 = Council of Teaching Hospitals (COTH) indicator [A101]. MAPP3 = AMA approved residency program indicator [A102]. Prior to the 1998 NIS, we used the following SAS code to assign the NIS teaching hospital status indicator, H_TCH: /* FIRST ESTABLISH SHORT-TERM BEDS DEFINITION */ IF BDH NE . THEN BEDTEMP = BDH ; /* SHORT TERM BEDS */ ELSE IF BDH =. THEN BEDTEMP=BDTOT ; /* TOTAL BEDS PROXY */ /* NEXT ESTABLISH TEACHING STATUS BASED ON F-T & P-T */ /* RESIDENT/INTERN STATUS FOR HOSPITALS. */ RESINT = (FTRES + .5*PTRES)/BEDTEMP ; IF (MAPP3 = . AND MAPP8 = .) THEN DO ; IF RESINT > .10 THEN ST_TEACH = 1 ; ELSE ST_TEACH = 0 ; END ; IF (MAPP3=1 OR MAPP8=1) THEN ST_TEACH=1 ; /* 1=TEACHING */ ELSE ST_TEACH=0 ; /* 0=NONTEACHING */ /* CREATE TEACHING CATEGORY VARIABLES TO FURTHER */ /* REFINE TEACHING STATUS DEFINITION. */ IF ST_TEACH = 1 THEN DO ; IF 0 < RESINT < .15 THEN TEACHCAT=0 ; /* MINOR CATEGORY */ ELSE IF RESINT GE .15 THEN TEACHCAT=1 ; /* MAJOR CATEGORY */ ELSE ST_TEACH = 0 ; /* NONTEACH STATUS*/ END ; Beginning with the 1998 NIS, we used the following SAS code to assign the teaching hospital status indicator, HOSP_TEACH: /* FIRST ESTABLISH SHORT-TERM BEDS DEFINITION */ IF BDH NE . THEN BEDTEMP = BDH ; /* SHORT TERM BEDS */ ELSE IF BDH =. THEN BEDTEMP = BDTOT ; /* TOTAL BEDS PROXY */ /* ESTABLISH IRB NEEDED FOR TEACHING STATUS */ /* BASED ON F-T P-T RESIDENT INTERN STATUS */ IRB = (FTRES + .5*PTRES) / BEDTEMP ; /* CREATE TEACHING STATUS VARIABLE */ IF (MAPP8 EQ 1) OR (MAPP3 EQ 1) THEN HOSP_TEACH = 1 ; ELSE IF (IRB GE 0.25) THEN HOSP_TEACH = 1 ; ELSE HOSP_TEACH = 0; 3. U.S. Census Bureau , Population Division. “Table NST-EST2004-01 ­ Annual Estimates of the Population for the United States and States, and for Puerto Rico: April 1, 2000 to July 1, 2004.” Internet Release Date: December 22, 2004. 4. Refer to Chapter 10 in Foreman, E.K., Survey Sampling Principles. New York: Dekker, 1991. 5. Carlson BL, Johnson AE, Cohen SB. “An Evaluation of the Use of Personal Computers for Variance Estimation with Complex Survey Data.” Journal of Official Statistics, vol. 9, no. 4, 1993: 795-814.
{"url":"http://www.hcup-us.ahrq.gov/db/nation/nis/reports/NIS_2003_Design_Report.jsp","timestamp":"2014-04-18T05:30:09Z","content_type":null,"content_length":"109078","record_id":"<urn:uuid:ab9664f8-de35-49ad-a83f-f1d87f32eb5e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Pacific Institute for the Mathematical Sciences - PIMS Changing the Culture 2011 Changing the Culture 2011: Through Our Teaching Date: Friday, April 29th, 2011 Location: SFU-Vancouver at Harbour Centre, 515 W. Hastings Street, Vancouver, Canada The conference is free, but space is limited, and therefore registration is required. Online registration is now closed. The registration deadline: Wednesday, April 27th, 2011 Conference Program 8:00 Registration 8:45 Opening Remarks, Room 1900, Fletcher Challenge Theatre 9:00 Plenary Talk, As Geometry is lost - What connections are lost? What reasoning is lost? What students are lost? Does it matter?, Walter Whiteley, York University, Room 1900, Fletcher Challenge Abstract:In a North American curriculum preoccupied with getting to calculus, we witness an erosion of geometric content and practice in high school. What remains is often detached from "making sense of the world", and from reasoning (beyond axiomatic work in University). We see the essential role of geometry in science, engineering, computer graphics and in solving core problems in applications put aside when revising math curriculum. A second feature is that most graduates with mathematics degrees are not aware of these rich connections for geometry. We will present some samples of: what we know about early childhood geometry.; and then of the critical role of geometry and geometric reasoning in work in multiple fields outside of mathematics. With a perspective from "modern geometry", we note the critical role of transformations, symmetries and invariance in many fields, including mathematics beyond geometry. With these bookends of school mathematics in mind, we consider some key issues in schools, such as which students are lost when the bridge of geometry is not there to carry them through (caught in endless algebra) and possible connections other subjects. We also consider the loss within these other disciplines. We will present some sample investigations and reasoning which can be supported by a broader more inclusive set of practices and which pays attention to geometric features and reasoning in various contexts. In particular, we illustrate the use of dynamic geometry investigations, hands on investigations and reflections, and making connections to deeper parts of the rest of mathematics and science. Download the presentation file here. Links to sources discussed in the presentation can be found in this document. 10:00 Coffee Break, Room 1400, Segal Centre 10:30 Workshops AB Workshop A: Changing the Culture of Homework, Jamie Mulholland and Justin Gray, SFU, Room 1900 Abstract: Who do your students think their homework is for? Does attaching credit to homework promote student understanding, or encourage students to find answers by whatever means necessary? Are they focused on calculating the answer, or seeing the big picture? Is their homework grade a true reflection of their own understanding of the material, or does it better reflect the understanding of their "support network"? In this workshop we will describe our efforts to improve student feedback and to promote good study skills in first and second year mathematics classes. Workshop B: What is Mathematics-for-Teaching and Why Does it Matter?, Susan Oesterle, Douglas College, Room 1530 Abstract: In this interactive workshop we will review some of the latest research on the nature of specialised mathematics content knowledge for teachers. We will examine particular examples and consider how being aware of MfT can affect not only our approach to the preparation of teachers, but our own mathematics teaching. 12:00 PIMS Award Ceremony: Presentation of the 2011 PIMS Education Prize to Dr. Veselin Jungic, SFU 12:30 Lunch, Room 1400, Segal Centre 13:30 Workshops CD Workshop C: Using Cognitive Load Theory Principles to Construct Calculus Exam Questions, Djun Kim and Joanne Nakonechny, UBC, Room 1900 Abstract: Cognitive load theory (Paas 1993) provides an approach for writing appropriate level questions so that novices will be more likely to answer the intended question. Cognitive load theory focuses on excluding extraneous information and providing only essential information for the learner. Although this approach is generally used to engender the selection of fundamental component parts and ordering them to scaffold learning new material, we have also found that better test questions can be constructed using these same principles; or so we think. Come and experience the difference, learn the principles for writing questions using cognitive load theory, and find out what we have learned about the process. Workshop D: Efficient and Effective Teaching of Learning of Mathematics for Students of Science and Engineering with Software for Symbolic Computation, J. F. Ogilvie, CECM, SFU and Universidad de Costa Rica, Room 1530 Abstract: Although computers have enabled a revolution in many academic and other activities, their impact on the teaching and learning of mathematics has been much less potent than is warranted by the pressing demands of users of mathematics in scientific and technical areas. The present and continuing development of pertinent software facilitates a reappraisal of methods of teaching mathematics, with an emphasis on both improving the understanding of concepts and principles and the implementation of mathematical applications. We discuss how software for symbolic computation can play, and has already played, a significant role in the teaching and learning of mathematics not merely in particular topics but according to an holistic appreciation of all the mathematical knowledge and capabilities that a student of science and engineering might require for a prospective technical career during the twenty-first century. 14:30 Panel Discussion, How to convince our students that you cannot learn mathematics by just watching somebody else do it? Veselin Jungic, SFU, Susan Milner, UFV, Fred Harwood, Hugh McRoberts Secondary, Room 1900, Fletcher Challenge Theatre 16:00 Coffee Break 16:30 Plenary Talk, Raising the Floor and Lifting the Ceiling: Math for All, Sharon Friesen, University of Calgary/Galileo Network, Room 1900, Fletcher Challenge Theatre. Abstract: "Math. The bane of my existence for as many years as I can count. I cannot relate it to my life or become interested in what I'm learning. I find it boring and cannot find any way to apply myself toit since I rarely understand it." (high school student)Today, mathematics education faces two major challenges: raising the floor by expanding achievement for all, and lifting the ceiling of achievement to better prepare future leaders in mathematics, as well as in science, engineering, and technology. At first glance, these appear to be mutually exclusive: But are they? Is it possible to design learning that engages the vast majority of students in higher mathematics learning? In this presentation, I will present the findings and discuss the implications from a research study that explored the ways to teach mathematics that both raised the floor and lifted the ceiling.
{"url":"http://www.pims.math.ca/educational/changing-culture/2011","timestamp":"2014-04-19T04:31:18Z","content_type":null,"content_length":"24706","record_id":"<urn:uuid:21b30dd7-7445-4eff-9e17-5ec6c3884a7a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Veronika Strnadova, Aydın Buluç, Jarrod Chapman, John R Gilbert, Joseph Gonzalez, Stefanie Jegelka, Leonid Oliker and Daniel Rokshar. Efficient and Accurate Clustering for Large-Scale Genetic Mapping.. CS Department, University of California, Santa Barbara, April 2014. URL, PDF, BibTeX author = "Veronika Strnadova and Ayd{\i}n Bulu\c{c} and Jarrod Chapman and John R. Gilbert and Joseph Gonzalez and Stefanie Jegelka and Leonid Oliker and Daniel Rokshar", title = "Efficient and Accurate Clustering for Large-Scale Genetic Mapping.", institution = "CS Department, University of California, Santa Barbara", year = 2014, month = "April", url = "http://www.cs.ucsb.edu/research/tech_reports/", pdf = {http://gauss.cs.ucsb.edu/publication/sdm14clusteringformapping.pdf" number = "UCSB/CS-2014-03}, abstract = "" Adam Lugowski and John R Gilbert. Efficient Sparse Matrix-Matrix Multiplication on Multicore Architectures. In SIAM Workshop on Combinatorial Scientific Computing (CSC14). July 2014. , BibTeX author = "Adam Lugowski and John R. Gilbert", title = "Efficient Sparse Matrix-Matrix Multiplication on Multicore Architectures", booktitle = "SIAM Workshop on Combinatorial Scientific Computing (CSC14)", year = 2014, month = "July", pdf = "http://gauss.cs.ucsb.edu/publication/QuadMat_CSC14.pdf" Robert W Techentin, Barry K Gilbert, Adam Lugowski, Kevin Deweese, John R Gilbert, Eric Dull, Mike Hinchey and Steven P Reinhardt. Implementing Iterative Algorithms with SPARQL. In EDBT/ICDT Workshops. 2014, 216-223. , BibTeX author = "Robert W. Techentin and Barry K. Gilbert and Adam Lugowski and Kevin Deweese and John R. Gilbert and Eric Dull and Mike Hinchey and Steven P. Reinhardt", title = "Implementing Iterative Algorithms with SPARQL", booktitle = "EDBT/ICDT Workshops", year = 2014, pages = "216-223", pdf = "http://ceur-ws.org/Vol-1133/paper-36.pdf" Kevin Deweese, John R Gilbert, Adam Lugowski and Steve Reinhardt. Graph Clustering in SPARQL. In SIAM Workshop on Network Science. 2013. , BibTeX author = "Kevin Deweese and John R. Gilbert and Adam Lugowski and Steve Reinhardt", title = "Graph Clustering in SPARQL", booktitle = "SIAM Workshop on Network Science", year = 2013, pdf = "http://gauss.cs.ucsb.edu/publication/SIAM-DMA_uRiKA_PP.pdf" Aydın Buluç, Erika Duriakova, Armando Fox, John R Gilbert, Shoaib Kamil, Adam Lugowski, Leonid Oliker and Samuel Williams. High-Productivity and High-Performance Analysis of Filtered Semantic Graphs. In 27th IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2013). May 2013, 237 - 248. , DOI, BibTeX author = "Ayd{\i}n Bulu\c{c} and Erika Duriakova and Armando Fox and John R. Gilbert and Shoaib Kamil and Adam Lugowski and Leonid Oliker and Samuel Williams", title = "High-Productivity and High-Performance Analysis of Filtered Semantic Graphs", booktitle = "27th IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2013)", year = 2013, month = "May", pages = "237 - 248", location = "Boston, Massachusetts, USA", doi = "10.1109/IPDPS.2013.52", pdf = "http://gauss.cs.ucsb.edu/publication/kdtsejits_ipdps13.pdf" Aydın Buluç and John R Gilbert. Parallel Sparse Matrix-Matrix Multiplication and Indexing: Implementation and Experiments. SIAM Journal of Scientific Computing (SISC) 34(4):170 - 191, 2012. , arXiv, DOI, BibTeX author = "Ayd{\i}n Bulu\c{c} and John R. Gilbert", title = "Parallel Sparse Matrix-Matrix Multiplication and Indexing: Implementation and Experiments", journal = "SIAM Journal of Scientific Computing (SISC)", year = 2012, volume = 34, pages = "170 - 191", number = 4, arxiv = "http://arxiv.org/abs/1109.3739", doi = "10.1137/110848244", url = "http://gauss.cs.ucsb.edu/~aydin/spgemm_sisc12.pdf" Aydın Buluç, Armando Fox, John R Gilbert, Shoaib Kamil, Adam Lugowski, Leonid Oliker and Samuel Williams. High-Performance Analysis of Filtered Semantic Graphs. In Proceedings of the 21st international conference on Parallel architectures and compilation techniques. 2012, 463–464. extended abstract. URL, DOI, BibTeX author = "Ayd{\i}n Bulu\c{c} and Armando Fox and John R. Gilbert and Shoaib Kamil and Adam Lugowski and Leonid Oliker and Samuel Williams", title = "High-Performance Analysis of Filtered Semantic Graphs", booktitle = "Proceedings of the 21st international conference on Parallel architectures and compilation techniques", series = "PACT '12", year = 2012, isbn = "978-1-4503-1182-3", location = "Minneapolis, Minnesota, USA", pages = "463--464", numpages = 2, url = "http://doi.acm.org/10.1145/2370816.2370897", doi = "10.1145/2370816.2370897", acmid = 2370897, publisher = "ACM", address = "New York, NY, USA", keywords = "domain specific languages, graph analysis, high-performance computing, kdt, sejits", note = "extended abstract" Aydın Buluç, Armando Fox, John R Gilbert, Shoaib Kamil, Adam Lugowski, Leonid Oliker and Samuel Williams. High-Performance Analysis of Filtered Semantic Graphs. Number UCB/EECS-2012-61, EECS Department, University of California, Berkeley, May 2012. URL, , BibTeX author = "Ayd{\i}n Bulu\c{c} and Armando Fox and John R. Gilbert and Shoaib Kamil and Adam Lugowski and Leonid Oliker and Samuel Williams", title = "High-Performance Analysis of Filtered Semantic Graphs", institution = "EECS Department, University of California, Berkeley", year = 2012, month = "May", url = "http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-61.html", pdf = "http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-61.pdf", number = "UCB/EECS-2012-61", abstract = {High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded Just-In-Time Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++ /MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance. We also present a new roofline model for graph traversals, and show that our high-performance implementations do not significantly deviate from the roofline.} Adam Lugowski, Aydın Buluç, John Gilbert and Steve Reinhardt. Scalable Complex Graph Analysis with the Knowledge Discovery Toolbox. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). March 2012, 5345–5348. URL, , DOI, BibTeX author = "Adam Lugowski and Ayd{\i}n Bulu\c{c} and John Gilbert and Steve Reinhardt", title = "Scalable Complex Graph Analysis with the Knowledge Discovery Toolbox", booktitle = "IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", pages = "5345--5348", year = 2012, month = "March", doi = "10.1109/ICASSP.2012.6289128", url = "http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6289128", pdf = "http://gauss.cs.ucsb.edu/publication/ICASSP_12_KDT_filter.pdf" Adam Lugowski, David Alber, Aydın Buluç, John R Gilbert, Steve Reinhardt, Yun Teng and Andrew Waranis. A Flexible Open-Source Toolbox for Scalable Complex Graph Analysis. In Proceedings of the Twelfth SIAM International Conference on Data Mining (SDM12). April 2012, 930–941. Preprint as Technical Report UCSB-CS-2011-10. , , BibTeX author = "Adam Lugowski and David Alber and Ayd{\i}n Bulu\c{c} and John R. Gilbert and Steve Reinhardt and Yun Teng and Andrew Waranis", title = "A Flexible Open-Source Toolbox for Scalable Complex Graph Analysis", booktitle = "Proceedings of the Twelfth SIAM International Conference on Data Mining (SDM12)", year = 2012, month = "April", pages = "930--941", url = "http://siam.omnibooksonline.com/2012datamining/data/papers/158.pdf", preprint = "http://www.cs.ucsb.edu/research/tech_reports/reports/2011-10.pdf", note = "Preprint as Technical Report UCSB-CS-2011-10" Jeremy Kepner and John R Gilbert (eds.). Graph Algorithms in the Language of Linear Algebra. Society for Industrial and Applied Mathematics, 2011. URL, DOI, BibTeX editor = "Jeremy Kepner and John R. Gilbert", title = "Graph Algorithms in the Language of Linear Algebra", publisher = "Society for Industrial and Applied Mathematics", year = 2011, doi = "10.1137/1.9780898719918", url = "http://epubs.siam.org/doi/abs/10.1137/1.9780898719918", eprint = "http://epubs.siam.org/doi/pdf/10.1137/1.9780898719918" Aydın Buluç and John R Gilbert. The Combinatorial BLAS: Design, Implementation, and Applications. The International Journal of High Performance Computing Applications, 2011. , DOI, BibTeX author = "Ayd{\i}n Bulu\c{c} and John R. Gilbert", title = "The {C}ombinatorial {BLAS}: Design, Implementation, and Applications", journal = "The International Journal of High Performance Computing Applications", year = 2011, abstract = "http://www.cs.ucsb.edu/research/tech_reports/abstract.php?id=1010", url = "http://www.cs.ucsb.edu/research/tech_reports/reports/2010-18.pdf", doi = "10.1177/1094342011403516" Aydın Buluç and John R Gilbert. Highly Parallel Sparse Matrix-Matrix Multiplication. Number UCSB-CS-2010-10, UCSB Computer Science Department, June 2010. , arXiv, BibTeX author = "Ayd{\i}n Bulu\c{c} and John R. Gilbert", institution = "UCSB Computer Science Department", title = "Highly Parallel Sparse Matrix-Matrix Multiplication", number = "UCSB-CS-2010-10", month = "June", year = 2010, arxiv = "http://arxiv.org/abs/1006.2183", url = "http://www.cs.ucsb.edu/research/tech_reports/reports/2010-10.pdf" Aydın Buluç, Jeremy T Fineman, Matteo Frigo, John R Gilbert and Charles E Leiserson. Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication Using Compressed Sparse Blocks. In Proceedings of the Twenty-First ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). August 2009. , BibTeX author = "Ayd{\i}n Bulu\c{c} and Jeremy T. Fineman and Matteo Frigo and John R. Gilbert and Charles E. Leiserson", title = "Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication Using Compressed Sparse Blocks", booktitle = "Proceedings of the Twenty-First {ACM} Symposium on Parallelism in Algorithms and Architectures ({SPAA})", year = 2009, month = "August", url = "http://gauss.cs.ucsb.edu/publication/csb2009.pdf", address = "Calgary, Canada" Aydın Buluç, John R Gilbert and Ceren Budak. Solving path problems on the GPU. Parallel Computing 36(5-6):241 - 253, 2010. Earlier version "Gaussian Elimination Based Algorithms on the GPU" available as UCSB technical report CS-2008-15. , DOI, BibTeX author = "Ayd{\i}n Bulu\c{c} and John R. Gilbert and Ceren Budak", title = "Solving path problems on the {GPU}", journal = "Parallel Computing", volume = 36, number = "5-6", pages = "241 - 253", year = 2010, url = "http://gauss.cs.ucsb.edu/publication/parco_apsp.pdf", doi = "10.1016/j.parco.2009.12.002", note = {Earlier version "Gaussian Elimination Based Algorithms on the {GPU}" available as UCSB technical report CS-2008-15} B H McRae, B G Dickson, T H Keitt and V B Shah.. Using circuit theory to model connectivity in ecology and conservation. Ecology (In press)():, 2008. , BibTeX title = "Using circuit theory to model connectivity in ecology and conservation", author = "B. H. McRae and B. G. Dickson and T. H. Keitt and V. B. Shah.", journal = "Ecology", month = "", year = 2008, volume = "(In press)", number = "", pages = "", url = "http://gauss.cs.ucsb.edu/publication/McRae_et_al_circuit_theory_in_press.pdf" Laura Grigori, John R Gilbert and Michel Cosnard. Symbolic and Exact Structure Prediction for Sparse Gaussian Elimination with Partial Pivoting. SIAM Journal on Matrix Analysis and Applications 30(4):1520-1545, 2008. , DOI, BibTeX title = "Symbolic and Exact Structure Prediction for Sparse Gaussian Elimination with Partial Pivoting", author = "Laura Grigori and John R. Gilbert and Michel Cosnard", publisher = "SIAM", year = 2008, journal = "SIAM Journal on Matrix Analysis and Applications", volume = 30, number = 4, pages = "1520-1545", doi = "10.1137/050629343", url = "http://gauss.cs.ucsb.edu/publication/GrigoriGilbertCosnardStructHall.pdf" Viral B Shah and Brad McRae. Circuitscape: A Tool for Landscape Ecology. In Proceedings of the 7th Python in Science Conference (SciPy2008). 2008, 62–65. , BibTeX title = "Circuitscape: A Tool for Landscape Ecology", author = "Viral B. Shah and Brad McRae", booktitle = "Proceedings of the 7th Python in Science Conference (SciPy2008)", pages = "62--65", year = 2008, url = "http://gauss.cs.ucsb.edu/publication/Circuitscape_Python_Scipy08.pdf" J R Gilbert, S Reinhardt and V B Shah. Distributed Sparse Matrices for Very High Level Languages. Advances in Computers 72, June 2008. , BibTeX title = "Distributed Sparse Matrices for Very High Level Languages", author = "J. R. Gilbert and S. Reinhardt and V. B. Shah", journal = "Advances in Computers", volume = 72, year = 2008, month = "June", url = "http://gauss.cs.ucsb.edu/publication/DistributedSparseMatricesVHLL.pdf" Aydın Buluç and John R Gilbert. Challenges and Advances in Parallel Sparse Matrix-Matrix Multiplication. In The 37th International Conference on Parallel Processing (ICPP'08). September 2008, 503-510. , DOI, BibTeX title = "{Challenges and Advances in Parallel Sparse Matrix-Matrix Multiplication}", author = "Ayd{\i}n Bulu\c{c} and John R. Gilbert", booktitle = "{The 37th International Conference on Parallel Processing (ICPP'08)}", year = 2008, month = "September", pages = "503-510", doi = "10.1109/ICPP.2008.45", address = "Portland, Oregon, USA", url = "http://gauss.cs.ucsb.edu/publication/Buluc-ParallelMatMat.pdf" Lorin Hochstein, Victor R Basili, Uzi Vishkin and John R Gilbert. A pilot study to compare programming effort for two parallel programming models. Journal of Systems and Software 81(11):1920 - 1930, 2008. , DOI, BibTeX author = "Lorin Hochstein and Victor R. Basili and Uzi Vishkin and John R. Gilbert", title = "A pilot study to compare programming effort for two parallel programming models", journal = "Journal of Systems and Software", volume = 81, number = 11, pages = "1920 - 1930", year = 2008, doi = "10.1016/j.jss.2007.12.798", url = "http://gauss.cs.ucsb.edu/publication/pram-journal-paper.pdf" Lamia Youseff, Alethea Barbaro, Peterson Trethewey, Bjorn Birnir and John R Gilbert. Parallel Modeling of Fish Interaction. In 11th IEEE International Conference on Computational Science and Engineering. 2008, 234-241. , DOI, BibTeX author = "Lamia Youseff and Alethea Barbaro and Peterson Trethewey and Bjorn Birnir and John R. Gilbert", title = "Parallel Modeling of Fish Interaction", booktitle = "{11th IEEE International Conference on Computational Science and Engineering}", year = 2008, isbn = "978-0-7695-3193-9", pages = "234-241", doi = "10.1109/CSE.2008.8", publisher = "IEEE Computer Society", address = "Los Alamitos, CA, USA", url = "http://gauss.cs.ucsb.edu/publication/parallelModelingOfFishInteraction_YBTBG2008.pdf" Aydın Buluç and John R Gilbert. On the Representation and Multiplication of Hypersparse Matrices. In IEEE International Parallel and Distributed Processing Symposium (IPDPS 2008). April 2008, 1-11. , DOI, BibTeX author = "Ayd{\i}n Bulu\c{c} and John R. Gilbert", title = "{On the Representation and Multiplication of Hypersparse Matrices}", booktitle = "{IEEE International Parallel and Distributed Processing Symposium (IPDPS 2008)}", month = "April", year = 2008, pages = "1-11", doi = "10.1109/IPDPS.2008.4536313", location = "Miami, FL", url = "http://gauss.cs.ucsb.edu/publication/hypersparse-ipdps08.pdf" John R Gilbert, Steve Reinhardt and Viral B Shah. A Unified Framework for Numerical and Combinatorial Computing. Computing in Sciences and Engineering 10(2):20–25, 2008. , BibTeX title = "A Unified Framework for Numerical and Combinatorial Computing", author = "John R. Gilbert and Steve Reinhardt and Viral B. Shah", journal = "Computing in Sciences and Engineering", month = "Mar/Apr", year = 2008, volume = 10, number = 2, pages = "20--25", url = "http://gauss.cs.ucsb.edu/publication/cise-graph.pdf" Viral B Shah. An Interactive System for Combinatorial Scientific Computing with an Emphasis on Programmer Productivity. University of California, Santa Barbara, June 2007. , BibTeX author = "Viral B. Shah", title = "An Interactive System for Combinatorial Scientific Computing with an Emphasis on Programmer Productivity", school = "University of California, Santa Barbara", year = 2007, month = "June", url = "http://gauss.cs.ucsb.edu/publication/Shah_thesis.pdf" John R Gilbert, Steven Reinhardt and Viral Shah. An Interactive Environment to Manipulate Large Graphs. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing 4. April 2007, IV-1201–IV-1204. , BibTeX title = "An Interactive Environment to Manipulate Large Graphs", author = "John R. Gilbert and Steven Reinhardt and Viral Shah", booktitle = "Proceedings of the 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing", month = "April", year = 2007, volume = 4, pages = "IV-1201--IV-1204", url = "http://gauss.cs.ucsb.edu/publication/gapdt-icassp07.pdf" John R Gilbert, Steve Reinhardt and Viral B Shah. High performance graph algorithms from parallel sparse matrices. In Applied Parallel Computing. State of the Art in Scientific Computing. 8th International Workshop, PARA 2006.. 2007, 260–269. , BibTeX author = "John R. Gilbert and Steve Reinhardt and Viral B. Shah", title = "High performance graph algorithms from parallel sparse matrices", booktitle = "Applied Parallel Computing. State of the Art in Scientific Computing. 8th International Workshop, PARA 2006.", year = 2007, pages = "260--269", url = "http://gauss.cs.ucsb.edu/publication/gapdt-para06.pdf" David A Bader, Kamesh Madduri, John R Gilbert, Viral Shah, Jeremy Kepner, Theresa Meuse and Ashok Krishnamurthy. Designing Scalable Synthetic Compact Applications for Benchmarking High Productivity Computing Systems. Cyberinfrastructure Technology Watch, November 2006. , BibTeX author = "David A. Bader and Kamesh Madduri and John R. Gilbert and Viral Shah and Jeremy Kepner and Theresa Meuse and Ashok Krishnamurthy", title = "Designing Scalable Synthetic Compact Applications for Benchmarking High Productivity Computing Systems", journal = "Cyberinfrastructure Technology Watch", year = 2006, month = "Nov", url = "http://gauss.cs.ucsb.edu/publication/ctwatch-ssca.pdf" Viral Shah and John R Gilbert. Sparse Matrices in Matlab*P: Design and Implementation.. In High Performance Computing. HiPC 2004.. 2005, 144–155. , BibTeX author = "Viral Shah and John R. Gilbert", title = "Sparse Matrices in {M}atlab*{P}: Design and Implementation.", booktitle = "High Performance Computing. HiPC 2004.", year = 2005, pages = "144--155", url = "http://gauss.cs.ucsb.edu/publication/dsparse-hipc04.pdf" David R Cheng, Alan Edelman, John R Gilbert and Viral Shah. A novel parallel sorting algorithm for contemporary architectures. Submitted to ALENEX06, 2006. , BibTeX author = "David R. Cheng and Alan Edelman and John R. Gilbert and Viral Shah", title = "A novel parallel sorting algorithm for contemporary architectures", journal = "Submitted to ALENEX06", year = 2006, url = "http://gauss.cs.ucsb.edu/publication/psort.pdf" Andrew Funk, John R Gilbert, David Mizell and Viral Shah. Modelling Programmer Workflows with Timed Markov Models. Cyber Technology Watch, 2006. , BibTeX author = "Andrew Funk and John R. Gilbert and David Mizell and Viral Shah", title = "Modelling Programmer Workflows with Timed {M}arkov Models", journal = "Cyber Technology Watch", year = 2006, url = "http://gauss.cs.ucsb.edu/publication/ctwatch-tmm.pdf" Burton Smith, David Mizell, John Gilbert and Viral Shah. Towards a timed Markov process model of software development. In SE-HPCS '05: Proceedings of the second international workshop on Software engineering for high performance computing system applications. 2005, 65–67. , BibTeX author = "Burton Smith and David Mizell and John Gilbert and Viral Shah", title = "Towards a timed {M}arkov process model of software development", booktitle = "SE-HPCS '05: Proceedings of the second international workshop on Software engineering for high performance computing system applications", year = 2005, isbn = "1-59593-117-1", pages = "65--67", location = "St. Louis, Missouri", url = "http://gauss.cs.ucsb.edu/publication/tmm-icsehpc05.pdf", publisher = "ACM Press", address = "New York, NY, USA"
{"url":"http://gauss.cs.ucsb.edu/home/index.php/publications","timestamp":"2014-04-20T03:16:27Z","content_type":null,"content_length":"45413","record_id":"<urn:uuid:8f789005-979a-465a-bd98-35f539b04314>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Your experience of Computer Science/Programming in Mathematics Education? up vote 5 down vote favorite This is a survey question, which seeks to produce a list of answers from the audience of mathematicians. Motivation: I'm doing research in mathematics education. I'm particularly interested in teaching mathematicians programming and utilizing programming to teach the metacognitive skills necessary for Question: Was programming/computer science brought up in your undergraduate/graduate mathematics education? Did you see any consistent problems that you and your peers experienced with computer science/programming constructs? mathematics-education computer-science survey soft-question 2 This is very much a blog-style question; in particular it definitely doesn't have an answer. – Qiaochu Yuan Nov 2 '09 at 4:16 hrm... well, I would posit that neither does "Most interesting mathematics mistake?" but that's posted here. Maybe it's off-topic, not sure. idk if a rewording would make it more reasonable – Michael Hoffman Nov 2 '09 at 4:19 OK, I think I un-blog-ified it – Michael Hoffman Nov 2 '09 at 4:27 Our policy for now is that the only "discussion" questions allowed are ones that explicitly seek to produce a sorted list of answers. Anything that requires conversation should go elsewhere. I think this question is a poor fit for mathoverflow, and would certainly be more suitable on the blogosphere, but I think it's sufficiently interesting I'm not going to close. :-) – Scott Morrison♦ Nov 2 '09 at 17:06 2 -1: I have often been supportive of questions that others have considered off-topic in the past, but I think that MO is the wrong place for survey questions. – Pete L. Clark Dec 27 '09 at 5:47 show 1 more comment 11 Answers active oldest votes My personal thoughts (and experience): "Was programming/computer science brought up in your undergraduate/graduate mathematics education?" - yes, but I took math+CS+physics for my undergrad. This isn't very rare though; at least in my time (and place), most math majors took some CS (If nothing else, there's a big overlap in the required courses). My feeling is that most mathematicians of the younger generation have at least beginner-level programming skills, but my view could be biased. Of course being able to program could be useful for doing certain types of math research; for other types, it's not really useful at all. "Did you see any consistent problems..." - I don't think so, and in fact I believe that for people who have solid math background, learning the fundamental skills necessary for programming is relatively easy. up vote 5 down vote I don't know what "metacognitive" means, but I feel that it's reasonable to expect that people who understand programming languages may have an easier time grasping certain kinds of mathematical definitions and points of view. For example, it may be useful to think of mathematical objects in an "Object-Oriented" way, thinking about what forms part of their data structure (and what doesn't) and what "methods" they expose. It may also be useful to think about what it means to "calculate" something abstract (like the cohomology ring of a manifold) even when there is no real possibility of implementing the calculation; just reasoning about what it means to say that data X can be calculated for object Y can be useful. So here, too, a certain kind of CS-like training can be useful (and is increasingly common anyway). Interesting... very different from my experience. Very pleased to get a different view on things. Thanks! – Michael Hoffman Nov 2 '09 at 7:48 add comment Most of the responses so far pointed out that people who are good at mathematics tend to benefit from computer programming, and don't have much problem learning it. I majored in math and computer science, and for me the two reinforced each other. I would like to mention some of my experiences using basic computer programming to teach mathematics to non-mathematicians. In particular, I use Excel. Liberal Arts Math: I use Excel in my Liberal Arts Math class, so the programming is very basic and is practically restricted to functions. At that level, though, it really gets the point of functions and iterations. up vote 2 Calculus II: for the definite integral, having the students do all the necessary computations through a spreadsheet helps to solidify the concept. down vote Differential Equations: Euler's method lends itself nicely to a spreadsheet. In that course, we used graphing software to generate solutions to DE, especially solutions to systems of DE, but we didn't program them. On a different note, I remember a few years ago there was an AMS minicourse during the Joint Meetings, on using Flash and Java modules in Discrete Mathematics, and some of the ideas were really neat. But I think that's a bit off topic as to the question at hand, because the students weren't the ones doing the actual programming. add comment Programming is about great algorithms but also great code. Code is great when it is readable, well-structured, self-documenting, extensible, reusable, modular, and especially maintainable. In my experience, it takes most computer science students the entire duration of their undergraduate studies to truly understand how to write such code, and even then they will often initially struggle to write very maintainable and easy-to-read code when they get their first professional job out of college. (This depends also very much on the school you are looking at. up vote This observation would probably be false at, say, MIT.) Put simply, knowing lots of algorithms is necessary but not nearly sufficient for being a great programmer. 2 down vote However, learning to write such code is essentially an exercise in learning how to communicate your solution to a problem to others who may have very different backgrounds. It's hard for me to imagine how this skill would not be useful in mathematics where one will spend much of one's time teaching and writing expository and research articles. add comment I double majored in math and computer science but otherwise, there would have been no requirement for me to take any programming classes. As my programming skills and mathematical problem-solving skills developed, they both informed one another. I often turned to my programming experience for making combinatorial arguments and the mathematician in me made sure I'd thought through all possible ways a program could progress. up vote As far as consistent problems, I'd have to second Michael Hoffman's answer. In my experience, mathematicians have little trouble solving a programming problem but actually getting it into 2 down code can be an obstacle. Knowing what the answer is doesn't help you if you can't write it down within the confines of whatever language you're using. Planning ahead and pseudocode are both vote really helpful for getting around this. Plus, this is probably familiar to mathematicians who don't code - I find myself writing down some mathematics for the first time, then realising that somehow I'm not saying what I really want to say or that I'm not even saying it correctly. Thanks! Do you think that the disconnect with proceduralization might have to do with emphasis on "continuous" mathematics rather than "discrete" mathematics? By that I mean, that in mathematics, at least as I have been taught it, we tend to focus on mathematics that cannot be computed directly because it's "continuous" in nature, and the transfer to a discretization for solution in programming is not readily apparent. Anyway, cheers! – Michael Hoffman Nov 2 '09 at 13:59 I'm not sure that it really has to do with continuous vs. discrete - though, on that note, could you give an example of what you mean by "mathematics that cannot be computed directly"? Something I didn't mention earlier was that I think programming is a neat way to help people learn problem-solving in a concrete way and to repeatedly reinforce common techniques of problem-solving, which can certainly help with learning math. – Myron Minn-Thu-Aye Nov 2 '09 at 23:51 So, by this I mean something of the sort as the analytic computation of an integral, which is not done in real settings in numerical analysis b/c numerical methods can, of course, not be calculated up to infinitesimal limits – Michael Hoffman Nov 3 '09 at 6:26 add comment It's interesting that you mention that topic, since I currently have a programmer/developer job. One particular aspect I didn't think much about before but see clearly now: programmer's job consists of many tasks: • creating the design • getting input from users • writing code up vote 2 • fixing code down vote • writing documentation • testing code • passing the audit The first part is incredibly helped by having math abilities. For the other parts, even being a math genius won't help much: it's more about being punctual, accurate, able to work a lot, concentrate and deal with people. On the other hand, those are necessary for the mathematics as job and profession (teaching, writing papers) as well. add comment Programming and Computer Science was not required in my undergraduate training (I have not started graduate training so I cannot say with regards to that). From what I've seen in numerical analysis classes, there seems to be some disconnect between understanding the problem and being able to proceduralize the algorithm. For some reason proceduralization is up vote 1 particularly hard. down vote add comment I'm like a few previous responses in having enough CS training to handle basic programming skills. My difficulties have primarily been in mundane/trivial things like compiling, or relearning syntax after years of not using a language. In general, I think sometimes it's easy to overgeneralize and say that all younger students are good at computers. This may be partially true, but there are many younger students that are not at all comfortable with computers, and for which programming does not make sense. Although there is a lot of overlap, programming skill doesn't completely correlate with mathematical up vote 1 skill. down vote Probably the difficulty is mostly in the different point-of-view. CS solutions/programs are built using techniques that differ from math solutions. One must be extremely precise, one must be able to extract a template out of a solution process, one must be able to adapt that template to the programming language. I think that is where the main difficulty (and much of the educational benefit) lies. add comment University of Waterloo Computer Science is part of the Faculty of Mathematics so any CS student has to take some Mathematics courses. As I did finish the requirements for a Combinatorics & Optimization major as well as a Pure Math minor, I did enjoy taking a lot of Math courses. I enjoyed seeing both Numerical Analysis and Symbolic Computation as both represented ways to convert Mathematical concepts into programmatic constructs. I enjoyed the contrast of the two as each seems to take a different approach at a fundamental level, to my mind. up vote 1 down vote Some elements like the Simplex Method for Linear Programming and Linear Regression in Statistics did come up in both Math and Computer Science courses with some differences on the focus as each course had its own view on what was important in the material. add comment I very much liked math in high school but liked computers as well. In my second year I started to suspect that I am a practitioner. Three years after that I was sure that I am, but not without visiting a theoretical conference. At this point I would say I am a decent generalist programmer - I am pretty good at writing readable code, using the simplest solution possible and not reinventing the wheel. It is hard for me to imagine what it is like to reason about things in 13th dimension but not being able to translate English into some code that compiles and passes some simple tests (given reasonable scale of work, of course). I love what I do, but I also must have written a for loop over 10,000 times by now, so some aspects of it get boring. I miss the time when I had to think really hard about something new. I liked learning about finite automata back in college because regular expressions libraries utilize them. I would love to take a class on Markov Chains because I can see how they can be up vote applied to very real problems. I also want to take Optimization, Linear programming, Statistics ... maybe one day after work. 0 down vote As you can see my math background is pretty weak, although my math grades were decent and I really liked the subject. There are some things which I am not good at at all - such as thinking in abstract terms. With much effort I can probably force myself to think along those lines, but it is not natural and I have to draw it out, and try to relate it to the actual 3D world that I know. I am definitely not a mathematician, but I can call myself an engineer I suppose. I often wonder if things would have been different, if my parents forced many logical puzzles on me before I was four year old. I have seen math professors do that to their kids ... I guess it is both nature and nurture. add comment My undergrad was B.Sc. Math, and my program required four courses when I entered: two intro courses using Java, a programming lab which just was practise in a particular language (either C++, Prolog, or something else I can't remember), and a data structures course. The latter two were dropped from the requirements since the entire system was redesigned, but I took two extra courses anyway: advanced C++ and formal languages. up vote 0 down vote I can't comment on any struggles, because I was bunched up with engineers and CS majors. Great courses though, and if anything they helped because experimental mathematics is just really fun. add comment I taught my little sister calculus last summer by essentially giving her a mini numerical analysis course using Python. This enabled us to focus on the concepts of calculus, understanding definitions, and seeing how the major theorems all worked (on a discrete approximation level), all without much algebra (which is the major stumbling block for most students). We coded a function grapher, a numerical differentiator, a numerical integrator , an diff Eq. solver. We also coded approximations to the exponential, log, and trig functions by viewing these functions as solutions to diff Eqs, and solved a lot of "real world" problems using these tools. We bounded the error of our numerical integrator so that we could specify a desired degree of accuracy to our integral. This is obviously useful in any applied context (we need to know how accurate our approximations are), and it means she really recognized the importance of the formal definition of a limit. Only after we had all the major concepts of calculus down did we start to play the symbolic manipulation games. This part of the course was less interesting, but we saw that it sped up our computations a lot (instead of having to do thousands of operations to approximate an integral, I can do maybe 10 to get the exact answer). up vote 0 down The experience was very interesting for me because I realized how much of calculus can be seen through the lens of Euler's method. You can really explain pretty much everything. I am very interested in making a sequence of guided programming exercises which would teach calculus this way on the web. I started making some using Khan Academies open source material, but have not gotten very far. It will probably have to wait until the summer. P.S. The net effect? I would say that she is weaker at churning out integrals than most students (I don't think we ever even talked about trig substitution, etc). However if you ask her what a derivative is she can tell you. If you ask her to explain how the fundamental theorem of calculus works, she can explain it to you. If you give her a novel physical problem she is very well equipped to think about breaking it into easy small subpieces, and using a limiting process to get a good approximation to a solution. In other words, I think that she has a much deeper understanding of calculus than a "standard" student, and because of this she will be able to apply calculus when it naturally arises in her life. She also learned how to program, which has independent value. We also had a lot of fun. 1 Oops. Didn't see this was a thread necromancy. – Steven Gubkin Sep 26 '12 at 16:43 How old is she educationally? (tenth grade US?, asking purely for academic purposes, not trying to fix her up.) How much time did it take (rough estimate of coding-testing-tutorial-homework breakdown would be nice to know.) Gerhard "Really, My Interest Is Educational" Paseman, 2012.09.26 – Gerhard Paseman Sep 26 '12 at 21:57 She is going into what would be her senior year, but she graduated a year early. I should note that she doesn't really like math/wasn't doing well in her precalculus class. We did about a week of playing with coding using JES (jython environment for students) doing some image manipulation and making sure she knew about for loops, boolean logic, ect. She got really good at coding through the math projects. We spent about 12 weeks total, working a few hours a day. She learned differential and integral calculus. – Steven Gubkin Sep 27 '12 at 0:23 add comment Not the answer you're looking for? Browse other questions tagged mathematics-education computer-science survey soft-question or ask your own question.
{"url":"http://mathoverflow.net/questions/3739/your-experience-of-computer-science-programming-in-mathematics-education?sort=oldest","timestamp":"2014-04-16T13:40:16Z","content_type":null,"content_length":"107062","record_id":"<urn:uuid:dd810e3b-aa07-4a80-aca8-d61a40d79530>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Misner, Thorne and Wheeler, Exercise 8.5 (c) Replies: 38 Last Post: Apr 13, 2013 11:57 PM Messages: [ Previous | Next ] Hetware Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Posted: Apr 8, 2013 12:28 AM Posts: 148 Registered: 4/13/13 On 4/8/2013 12:07 AM, Lord Androcles, Zeroth Earl of Medway wrote: > "Lord Androcles, Zeroth Earl of Medway" wrote in message > news:Njr8t.334633$Ic.40149@fx07.fr7... > "Hetware" wrote in message > news:AuGdnVu7TbVslv_MnZ2dnUVZ_g6dnZ2d@megapath.net... > This is the geodesic equation under discussion: > d^2(r)/dt^2 = r(dp/dt)^2 > d^2(p)/dt^2 = -(2/r)(dp/dt)(dr/dt). > r is radius in polar coordinates, p is the angle, and t is a path > parameter. > The authors ask me to "[S]olve the geodesic equation for r(t) and p(t), > and show that the solution is a uniformly parametrized straight > line(x===r cos(p) = at+p for some a and b; y===r sin(p) = jt+k for some > j and k). > I tried the following: > (d^2(p)/dt^2)/(dp/dt) = -(2/r)(dr/dt) > f=dp/dt > (df/dt)/f = -(2/r)(dr/dt) > -1/2 ln(f) + k = ln(r) > a(f^(1/2)) = r > a(dp/dt)^(1/2) = r > And substitute for r in: > d^2(r)/dt^2 = r(dp/dt)^2 > to get > d^2(r)/dt^2 = a(dp/dt)^(3/2) > But there I'm stuck. > How should the problem be handled? > ============================================= > What you have is a second order differential equation. > Unlike the solution to the general polynomial equation, > ax +bx^2 + cx^3 + ... + kx^n = 0, where you seek a value > for x given values for a,b,c etc., the solution for a > differential equation is a FUNCTION. > In other words you cannot obtain a numerical or algebraic > value (you don't have enough information and that is not > the idea anyway) but you can find functions r(t) and p(t) . > The authors have already told you the solution is a straight > line, which is of course a function. > http://search.snap.do/?q=solving+differential+equations&category=Web > HTH, because we don't do homework for you. > ================================ > Hetware's silence is deafening. > We should all be poised at the ready to answer his questions > immediately instead of sleeping in bed when he writes them. > -- This message is brought to you from the keyboard of > Lord Androcles, Zeroth Earl of Medway. > When the fools chicken farmer Wilson and Van de faggot present an > argument I cannot laugh at I'll retire from usenet. Do you realize how much codswallop is posted to this newsgroup in comparison to actual relevant content? Oh well, whatever, Date Subject Author 4/7/13 Misner, Thorne and Wheeler, Exercise 8.5 (c) Guest 4/7/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dirk Van de moortel 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dirk Van de moortel 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com 4/13/13 MTW, Exercise 8.5 (c), Crotchm reasserts his imbecility Dono 4/13/13 MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his mouth, Dono big time 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono 4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com 4/13/13 MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his mouth, Dono big time 4/13/13 Re: MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his rotchm@gmail.com mouth, big time 4/13/13 Re: MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his Dono mouth, big time 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Guest 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dirk Van de moortel 4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Rock Brentwood 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware 4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
{"url":"http://mathforum.org/kb/message.jspa?messageID=8902504","timestamp":"2014-04-19T06:51:50Z","content_type":null,"content_length":"64931","record_id":"<urn:uuid:15ac8cc1-c70c-4467-af3a-ce65c1e338e3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
≡ Pages Combinational Logic Combinational circuits are built of five basic logic gates: • AND gate - output is 1 if BOTH inputs are 1 • OR gate - output is 1 if AT LEAST one input is 1 • XOR gate - output is 1 if ONLY one input is 1 • NAND gate - output is 1 if AT LEAST one input is 0 • NOR gate - output is 1 if BOTH inputs are 0 There is a sixth element in digital logic, the inverter (sometimes called a NOT gate). Inverters aren’t truly gates, as they do not make any decisions. The output of an inverter is a 1 if the input is a 0, and vise versa. A few things of note about the above image: • Usually, the name of the gate is not printed; the symbol is assumed to be sufficient for identification. • The A-B-Q type terminal notation is standard, although logic diagrams will usually omit them for signals which are not inputs or outputs to the system as a whole. • Two input devices are standard, but you will occasionally see devices with more than two inputs. They will, however, only have one output. Digital logic circuits are usually represented using these six symbols; inputs are on the left and outputs are to the right. While inputs can be connected together, outputs should never be connected to one another, only to other inputs. One output may be connected to multiple inputs, however. Truth Tables The descriptions above are adequate to describe the functionality of single blocks, but there is a more useful tool available: the truth table. Truth tables are simple plots which explain the output of a circuit in terms of the possible inputs to that circuit. Here are truth tables describing the six main elements: Truth tables can be expanded out to an arbitrary scale, with as many inputs and outputs as you can handle before your brain melts. Here’s what a four-input circuit and truth table look like: Written Boolean Logic It is, of course, useful to be able to write in a simple mathematical format an equation representing a logical operation. To that end, there are mathematical symbols for the unique operations: AND, OR, XOR, and NOT. • A AND B should be written as AB (or sometimes A • B) • A OR B should be written as A + B • A XOR B should be written as A ⊕ B • NOT A should be written as A' or A You’ll note that there are two missing elements on that list: NAND and NOR. Typically, those are simply represented by complementing the appropriate representation: • A NAND B is written as (AB)' , (A • B)' , or (AB) • A NOR B is written as (A + B)' or (A + B)
{"url":"https://learn.sparkfun.com/tutorials/digital-logic/combinational-logic","timestamp":"2014-04-16T10:10:01Z","content_type":null,"content_length":"18791","record_id":"<urn:uuid:19f3e0cc-fe4e-459a-bbac-2f09d954fbb9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Two-Way Merge Sorting Data Structures and Algorithms with Object-Oriented Design Patterns in Java Program gives the code for two sort methods of the TwoWayMergeSorter class. The no-arg sort method sets things up for the second, recursive sort method. First, it allocates a temporary array, the length of which is equal to the length of the array to be sorted (line 8). Then it calls the recursive sort method which sorts the array (line 9). After the array has been sorted, the no-arg sort discards the temporary array (line 10). Program: TwoWayMergeSorter class sort methods. The second sort method implements the recursive, divide-and-conquer merge sort algorithm described above. The method takes three parameters--array, left and right. The first is the array to be sorted and the latter two specify the subsequence of the array to be sorted. If the sequence to be sorted contains more than one element, the sequence is split in two (line 17), each half is recursively sorted (lines 18-19), and then two sorted halves are merged (line 20). Copyright © 1998 by Bruno R. Preiss, P.Eng. All rights reserved.
{"url":"http://www.brpreiss.com/books/opus5/html/page508.html","timestamp":"2014-04-18T18:19:53Z","content_type":null,"content_length":"3481","record_id":"<urn:uuid:fc6ccc16-9e31-40a4-9ce0-e6df09bdb35c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
I need help! September 29th 2006, 12:43 AM #1 Sep 2006 I need help! Hello everyone! I have a tricky one. I can't find the answer. Do you have an answer for me? 1. how can you obtain 10000 by adding/substracting/dividing/multiplying (whatever calcul) with only the number 9? thank you very much! (9+9/9) (9+9/9) (9+9/9) (9+9/9) or are brackets forbidden as well? Thank you very much for your help, that's what i found too after a while. I have another tricky one which is bothering me a lot : Three guys are traveling in china and stop by a hotel. The hotel director announces the rooms' prices. A room for 3 is 30 RMB. They accept and each pays 10rmb. But the director suddenly realizes that he made a mistake. The room for 3 people was only 25rmb. Honest but not too much, he decides to give 1rmb back to each of the 3 guys and to keep the 2rmb resting for himself. So he gives back 3rmb et keeps 2. as a total the 3 guys will have paid only 9rmb each for the rent of the room. The tricky thing : 3x9=27rmb. and 25rmb+3rmb=28rmb where is the missing RMB?where did it disappeared?how is it possible If you have an answer, i ll be really grateful! Thank you very much for your help, that's what i found too after a while. I have another tricky one which is bothering me a lot : Three guys are traveling in china and stop by a hotel. The hotel director announces the rooms' prices. A room for 3 is 30 RMB. They accept and each pays 10rmb. But the director suddenly realizes that he made a mistake. The room for 3 people was only 25rmb. Honest but not too much, he decides to give 1rmb back to each of the 3 guys and to keep the 2rmb resting for himself. So he gives back 3rmb et keeps 2. as a total the 3 guys will have paid only 9rmb each for the rent of the room. The tricky thing : 3x9=27rmb. and 25rmb+3rmb=28rmb where is the missing RMB?where did it disappeared?how is it possible If you have an answer, i ll be really grateful! This is the missing dollar problem. More appropraitely called, "how money is spend in USSR". September 29th 2006, 01:02 AM #2 Grand Panjandrum Nov 2005 September 29th 2006, 06:52 PM #3 Sep 2006 September 30th 2006, 11:32 PM #4 Grand Panjandrum Nov 2005 October 1st 2006, 06:39 AM #5 Global Moderator Nov 2005 New York City October 1st 2006, 07:34 AM #6 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/math-challenge-problems/5946-i-need-help.html","timestamp":"2014-04-16T11:51:15Z","content_type":null,"content_length":"44072","record_id":"<urn:uuid:b9d1c235-4ca7-4356-b150-388b415bad09>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Robust Geometric Computation (Draft) Kurt Mehlhorn and Chee Yap Fundamental Problems in Algorithmic Algebra Chee Yap Oxford University Press, 2000 Papers and Theses Resolution-Exact Planner Non-Crossing 2-Link Robot Zhongdi Luo and Chee K. Yap Submitted (Feb 2014) Resolution-Exact Planner for a 2-Link Planar Robot using Soft Predicates Zhongdi Luo. Masters Thesis, Courant Institute, New York University (Feb 2014) Amortized Analysis of Smooth Box Subdivisions in All Dimensions Huck Bennett and Chee Yap. (To Appear) 14th Scandinavian Symp. and Workshop on Algorithm Theory (SWAT). July 2-4 2014. Copenhagen, Denmark. 2-Dimensional version, [ Amortized Analysis of Balanced Quadtrees 23rd Annual Fall Workshop on Comp. Geometry (FWCG), The City College of NY. Oct 25-26, 2013. Resolution-Exact Algorithms for Link Robots Zhongdi Luo, Yi-Jen Chiang, Jyh-Ming Lien, and Chee Yap. Submitted Dec 2013. Updated Jan 2014. Preliminary version 23rd Annual Fall Workshop on Computational Geometry (FWCG), The City College of New York. Oct 25-26, 2013. Analytic Root Clustering: A Complete Algorithm using Soft Zero Tests C. Yap and M. Sagraloff and V. Sharma (Invited paper, Special session on ``Computational Complexity in the Continuous World'') Computability in Europe (CiE 2013), Milan, Italy, July 1-5, 2013. LNCS No. 7921, pp. 434-444. Soft Subdivision Search and Motion Planning Chee Yap Proceedings, Robotics Challenge and Vision Workshop (RCV 2013), Workshop at Robotics Science and Systems (RSS 2013), Berlin, Germany, June 27, 2013. RCV paper ] [ full paper This paper won the Best Paper Award at RCV. Here is the CCC blog. Award sponsored by Computing Community Consortium (CCC). Soft Predicates in Subdivision Motion Planning Cong Wang, Yi-Jen Chiang and Chee Yap Proc. 29th SoCG, Rio de Janeiro, Brazil. June 17-20, 2013. Lightly edited original [ IROS poster version IROS 2011 Workshop on Progress and Open Problems in Motion Planning, (Sep 30, 2011, San Francisco). Non-Local Isotopic Approximation of Nonsingular Surfaces Long Lin, Chee Yap, and Jihun Yu Computer-Aided Design, Vol. 45, No. 2, pages 451--462, 2013. Proc.Symp. on Solid and Physical Modeling (SPM). University of Burgundy, Dijon, France. Oct 29-31, 2012. Towards Exact Numerical Voronoi Diagrams (Invited Talk) Chee Yap, Vikram Sharma and Jyh-Ming Lien. Proc. 9th Intl. Symp. on Voronoi Diagrams in Science and Engineering (ISVD), IEEE Publisher. Pages 2--16. Rutgers University, NJ. Jun 27-29, 2012. Near Optimal Tree Size Bounds on a Simple Real Root Isolation Algorithm Vikram Sharma and Chee Yap Proc. 37th ISSAC, Grenoble, France. July 22-25, 2012. Certified Computation of Planar Morse-Smale Complexes of Smooth Functions Amit Chattopadhyay and Gert Vegter and Chee Yap Proc. 28th SoCG, Chapel Hill, NC. June 16-20, 2012. Pages 259--268. Explicit Mesh Surfaces for Particle Based Fluids Jihun Yu, Chris Wojtan, Greg Turk and Chee Yap 33rd Eurographics, Cagliari, Italy. May 13-18, 2012. Computer Graphics Forum, Vol.31, No.2, pp.815--824. Isotopic Arrangement of Simple Curves: an Exact Numerical Approach based on Subdivision Vikram Sharma, Gert Vegter and Chee Yap Dec 2011. Surface Representation of Particle Based Fluids Jihun Yu PhD Thesis (Sep 2011) Adaptive Isotopic Approximation of Nonsingular Curves and Surfaces A Simple But Exact and Efficient Algorithm for Complex Root Isolation and its Complexity Analysis Michael Sagraloff and Chee Yap Proc. 36th ISSAC, San Jose, California. June 8-11, 2011. Pages 353--360. Empirical Study of an Evaluation-Based Subdivision Algorithm for Complex Root Isolation Narayan Kamath, Irina Voiculescu and Chee Yap 4th Int'l Workshop on Symbolic Numeric Computation (SNC), San Jose, California. Jun 7-9, 2011. pages 155-164. A Real Elementary Approach to the Master Theorem and Generalizations [ Slides for TAMC talk] C. Yap Proc. Theory and Applications of Models of Computation (TAMC 2011), May 23-25, 2011, Tokyo, Japan. LNCS No. 6648, pp.14-26. Reconstructing Surfaces of Particle-Based Fluids Using Anisotropic Kernels Jihun Yu and Greg Turk Proc. of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 217-225, Madrid, Spain, 2010 (Jihun won the second best paper award.) Subdivision Algorithms for Complex Root Isolation: Empirical Comparisons Narayan Kamath Masters Thesis, Oxford University, August 2010. Pi is in Log Space The Design of Core 2: A Library for Exact Numeric Computation in Geometry and Algebra J.Yu, C. Yap, Z. Du, S.Pion, and H.Bronnimann 3rd International Congress on Mathematical Software (ICMS), Kobe, Japan, Sep 13-17, 2010. LNCS 6327, Springer 2010, pp.121--141. Continuous Amortization: A Non-Probabilistic Adaptive Analysis Technique Michael Burr, Felix Krahmer, Chee Yap Electronic Colloquium on Computational Complexity, TR09 Number 136, Dec 2009. Topologically Accurate Meshing Using Domain Subdivision Techniques Benjamin Galehouse PhD Thesis (Sep 2009) In Praise of Numerical Computation Chee K. Yap in Efficient Algorithms: Essays Dedicated to K.Mehlhorn on the Occasion of his 60th Birthday, Lecture Notes in Computer Science, No.5760, Springer-Verlag, pp.308--407. 2009. Tutorial: Exact Numerical Computation in Algebra and Geometry [3-in-1 Talk] [Talk 1] [Talk 2] [Talk 3] Chee K. Yap Proceedings, 34th ISSAC, pp.387--388. KIAS, Seoul, Korea. Aug 28-31, 2009. Lower Bounds for Zero-Dimensional Projections W.Dale Brownawell and Chee K. Yap Proceedings, 34th ISSAC, pp. 79--86. KIAS, Seoul, Korea. Aug 28-31, 2009. Adaptive Isotopic Approximation of Nonsingular Curves: the Parametrizability and Nonlocal Isotopy Approach Long Lin and Chee K. Yap Discrete & Comp.Geometry. Vol.45, No.4, pp.760-795, 2011. (Special Issue based on 25th ACM Symp. Computational Geometry (SoCG), in Aarhus, Denmark, Jun 8--10, 2009. Click here for Code and Experimental Results Foundations of Exact Rounding Chee K. Yap and Jihun Yu An invited talk for Proc. 3rd Workshop on Algorithms and Computation(WALCOM 2009). Kolkata, India, Feb 18-20, 2009. In: Lecture Notes in Computer Science No.5431, pp.15--31, 2009. Springer-Verlag. Complete Subdivision Algorithms, II: Isotopic Meshing of Singular Algebraic Curves [Full Version] M. Burr, S.W.Choi, B. Galehouse, Chee Yap Proc. Intl. Symp. on Symbolic & Algebraic Computation (ISSAC 2008), pp.87--94. Hagenberg, Austria, Jul 20-23, 2008. J. Symbolic Computation (Special Issue), Vol.47, No.2, pp.131-152 (2012). Complete Numerical Isolation of Real Zeros in General Triangular Systems Jin-San Cheng, Xiao-Shan Gao, Chee Yap Proc. Intl. Symp. on Symbolic & Algebraic Computation (ISSAC 2007), Waterloo, Canada, Jul 29-Aug 1, 2007. J. Symbolic Computation (Special Issue), Vol.44, No.7, pp.768--785, (2009). Complexity Analysis of Algorithms in Algebraic Computation Vikram Sharma PhD Thesis (Jan, 2007) Degeneracy Proof Predicates for the Additively Weighted Voronoi Diagram David L. Millman Masters Thesis (May, 2007) Theory of Real Computation according to EGC Chee Yap Lecture Notes in CS No.5045, pp. 193--237, Springer (2008). Based on talk at Dagstuhl Seminar on ``Reliable Implementation of Real Number Algorithms: Theory and Practice'', Jan 8-13, 2006. Is It Really Zero? Chee Yap KIAS Magazine, No.34, Spring 2007. Almost Tight Recursion Tree Bounds for the Descartes Method [Issac Talk Slides] Arno Eigenwillig, Vikram Sharma and Chee Yap Proc. Intl. Symp. on Symbolic & Algebraic Computation (ISSAC 2006), Genova, Italy, Jul 9-12, 2006. (Eigenwillig and Sharma won the ISSAC Best Student Author award for this paper; award shared with G.Moroz.) Complete Subdivision Algorithms, I: Intersection of Bezier Curves Chee K. Yap Proc. 22nd ACM Symp. on Comp. Geom. (SoCG 2006), Sedona, Arizona, Jun 6-8, 2006. pp.217--226. Decidability of Collision between a Helical Motion and an Algebraic Motion Sung Woo Choi, Sung-il Pae, Hyungju Park and Chee Yap Proc. 7th Conference on Real Numbers & Computers (RNC 7) LORIA, Nancy, France, Jul 10-12, 2006. Guaranteed Precision for Transcendental and Algebraic Computation made Easy Zilin Du PhD Thesis (May, 2006) Robust Approximate Zeros Vikram Sharma, Zilin Du and Chee Yap Proc. 13th Annual European Symposium (ESA) Palma de Mallorca, Spain, October 3-6, 2005. Amortized Bound for Root Isolation Via Sturm Sequences Zilin Du, Vikram Sharma and Chee Yap Proc. Symbolic-Numeric Computation (SNC'05), Xi'an, China, Jul 19-21, 2005. Also: in ``Symbolic-Numeric Computation'' (eds. D. Wang and L. Zhi), Birkhauser Verlag AG, Basel, pp. 113--130, 2007. (ISBN 978-3-7643-7983-4) Uniform Complexity Approximating Hypergeometric Functions with Absolute Error Zilin Du and Chee Yap Proc. 7th Asian Symposium on Computer Mathematics (ASCM), Eds. Sungil Pae and Hyungju Park, pp.246--249. KIAS, Seoul, Korea (Dec 2005) Shortest Path amidst Disc Obstacles is Computable E. Chang, S.W. Choi, D. Kwon, H. Park, C. Yap Special Issue on Geometric Constraints, Intl.J.Comp.Geom. & Applic., 16:5-6(2006)567--590. Also: 21st ACM Symp. on Comp. Geom., (2005)116--125. Classroom Examples of Robustness Problems in Geometric Computation L.Kettner, K.Mehlhorn, S.Pion, S.Schirra, C. Yap Comp.Geom.Theory and Applic. 40:1(2007)61--78. Also, Proc. European Symposium on Algorithms (ESA), Nov 2004. On Guaranteed Accuracy Computation C. Yap Chapter 12 in book "Geometric Computation'', Falai Chen and Dongming Wang (Eds.). World Scientific Pub.Co., pp.322-373, 2004. Constructive Root Bound Method for k-Ary Rational Input Numbers S. Pion and C. Yap Theoretical Computer Science 269:1-3(2006)361-376. DOI link . Also: 19th ACM Symp. on Comp. Geom. (2003)256--263. Pseudo Approximation Algorithms with Applications to Optimal Motion Planning T. Asano, D. Kirkpatrick, C. Yap J.Discrete & Comp. Geom., 31:1(2004)139--171) (Special Conference Issue) Also: Proc. 19th ACM Symp. on Comp. Geom., (2002)170--178. Robust Geometric Computation C. Yap, Chapter 41 in CRC Handbook of Computational and Discrete Geometry. (Eds. J.E.Goodman and J.O'Rourke), pp.927--952, 2004. (Completely revised and expanded version of the 1997 version Hypergeometric Functions in Exact Geometric Computation Z. Du and M. Eleftheriou and J. Moreira and C. Yap Electronic Notes in Theoretical Computer Science, 66:1 (2002). Proc. 5th Workshop on Computability and Complexity in Analysis (CCA), Malaga, Spain. July 12-13, 2002. Towards Robust Geometric Computation Chee Yap and Kurt Mehlhorn Fundamentals of Computer Science Study Conference, July 25-27, 2001, Washington DC. White Paper. Conference sponsored by Computer Science and Telecommunications Board (CSTB) and NSF. Recent Progress in Exact Geometric Computation C. Li, S. Pion and C. Yap J. of Logic and Algebraic Programming, 64:1(2004) 85--111. Special issue on ``Practical Development of Exact Real Number Computation'', Eds. N. Mueller and M. Escardo and P. Zimmermann. Exact Geometric Computation: Theory and Applications Chen Li, PhD Thesis (Jan, 2001) QuickMul: Practical FFT-based Integer Multiplication A New Constructive Root Bound for Algebraic Expressions C. Li and C. Yap, Proc. 12th ACM-SIAM Symp. on Discrete Algorithms (SODA), (2001)496--505. Randomized Zero Testing of Radical Expressions and Elementary Geometry Theorem Proving D. Tulone, C. Yap and C. Li., Proc. ADG'00, Zurich Sep 25-27, 2000. Also, LNCS/LNAI 2061, Springer 2001. A Core Library for Robust Numeric and Geometric Computation V. Karamcheti, C. Li, I. Pechtchanski and C. Yap., 15th ACM Symp. on Comp.Geometry, (1999)351--359. Tutorial for CORE Library A New Number Core for Robust Numerical and Geometric Libraries C. Yap Invited talk, 3rd (CGC) Workshop on Computational Geometry, Brown University, Oct 1998. Precision-sensitive Euclidean Shortest Path in 3-space J. Choi, J. Sellen and C. Yap, 11th ACM SCG (1995) 350--359. In SIAM J.Comp., 29(2000)1577-1595. Approximate Euclidean Shortest Paths in 3-space J. Choi, J. Sellen and C. Yap, 10th ACM SCG (1994) 41--48. In IJCGA 7:4(1997)271-295. Real/Expr: Implementation of an Exact Computation Package Kouji Ouchi, Masters Thesis, New York University (Jan, 1997) A Basis for Implementing Exact Geometric Algorithms Thomas Dubé and Chee K. Yap, Manuscript (Oct 15, 1993) The Exact Computation Paradigm Chee K. Yap and Thomas Dubé , In "Computing in Euclidean Geometry'' (eds. D.-Z. Du and F. K. Hwang) World Scientific Press, Singapore. 2nd edition, pp.452--486, 1995. Towards Exact Geometric Computation Chee K. Yap, CGTA 7(1997)3-23. Invited talk, 5th CCCG (1993) Waterloo, Canada. For older papers of Yap, not necessarily related to exact computation, please see
{"url":"http://cs.nyu.edu/exact/papers/","timestamp":"2014-04-20T06:14:41Z","content_type":null,"content_length":"28603","record_id":"<urn:uuid:16e8f60f-6219-4925-bd52-9affb038b2be>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
From Peirce to Skolem A Neglected Chapter in the History of Logic • Geraldine Brady, Department of Computer Science, University of Chicago, 1100 E 58th Street, Chicago, IL 60637 USA This book is an account of the important influence on the development of mathematical logic of Charles S. Peirce and his student O.H. Mitchell, through the work of Ernst Schröder, Leopold Löwenheim, and Thoralf Skolem. As far as we know, this book is the first work delineating this line of influence on modern mathematical logic. Published: July 2000 Imprint: North-holland ISBN: 978-0-444-50334-3 • The book is well written, and written for a ;arge audience. Many very detailed explanations of terminology, notation and proof techniques in the quotations of historicl texts are given..... M. Guillaume, Mathematical Reviews, 2002 • IntroductionThe Early Work of Charles S. Peirce. Overview of the Mathematical Systems of Charles S. Peirce. Peirce's Influence on the Development of Logic. Peirce's Early Approaches to Logic. Peirce's Calculus of Relatives: 1870. Peirce's Algebra of Relations. Inclusion and Equality. Addition. Multiplication. Peirce's First Quantifiers. Involution. Involution and Mixed-quantifier Forms. Elementary Relatives. Quantification in the calculus of relatives in 1870. Summary. Peirce on the Algebra of Logic: 1880. Overview of Peirce's "On the algebra of logic". Discussion. The Origins of Logic. Syllogism and Illation. Forms of Propositions. The Algebra of the Copula. The Logic of Nonrelative Terms. Conclusion. Mitchell on a New Algebra of Logic: 1883. Mitchell's Rule of Inference. Single-Variable Monadic Logic. Single-Variable Monadic Propositions. Disjunctive Normal Form. Rules of Inference for Single-Variable Logic. Two-Variable Monadic Logic. Mitchell's Dimension Theory. Contrast to Peirce. Three-Variable Monadic Logic. Peirce on Mitchell. Peirce on the algebra of relatives: 1883. Background in Linear Associative Algebras. The Algebra of Relatives. Types of Relatives. Operations on Relatives. Syllogistic in the Relative Calculus. Prenex Predicate Calculus. Summary of Peirce's Accomplishments in 1883. Syntax and Semantics. Quantifiers. Peirce's Appraisal of His Algebra of Binary Relatives. Peirce's Logic of Quantifiers: 1885. On the Derivation of Logic from Algebra. Nonrelative Logic. Embedding Boolean algebra in Ordinary Algebra. Five Peirce Icons. Truth-functional Interpretations of Propositions. First-Order Logic. Infinite Sums and Products. Mitchell. Formulas and Rules. Second-Order Logic. Schröder's Calculus of Relatives. Die algebra der Logik: Volume 1. Die Algebra der Logik: Volume 2. Die Algebra der Logik: Volume 3. Peirce's Attack on the General solutions of Schröder. Lectures VI-X and Dedekind Chain Theory. Lectures XI-XII and Higher Order Logic. Norbert Wiener's Ph.D. Thesis. Löwenheim's contribution. Overview of Löwenheim's 1915 paper. Löwenheim's Theorem. Conclusions. Impact of Löwenheim's Theorem. Conclusions. Impact of Löwenheim's Paper. Skolem's recasting. Appendices. Schröder's Lecture I. Schröder's Lecture II. Schröder's Lecture III. Schröder's Lecture V. Schröder's Lecture IX. Schröder's Lecture XI. Schröder's Lecture XII. Norbert Wiener's Thesis. Bibliography. Index.
{"url":"http://www.elsevier.com/books/from-peirce-to-skolem/brady/978-0-444-50334-3","timestamp":"2014-04-17T06:48:30Z","content_type":null,"content_length":"29245","record_id":"<urn:uuid:6a5655db-c3a7-4502-99b8-3efb1fa17b16>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Test for Moderating variables Amanuel Tekleab posted on Monday, May 01, 2000 - 8:13 pm I am running a path analysis with both continuous and categorical variables. My categorical variable (y1, with two categories) is a dependent variable. I have a continuous variable (x1) predicting y1. I did test the relationship by indicating the categorical variable as stated in the User's Guide. I have another variable x2, which is a continuous variable as a moderator in the relationship between x1 and y1 in my model. My hypothesis predicts that x2 interacts with x1 in predicting y1. I want to run this model as a multigroup analysis. I am using raw data in my analysis. Would there be anyone who can lead me step-by-step on how to test the moderating effect? Thank you very much. Linda K. Muthen posted on Tuesday, May 02, 2000 - 9:57 am If I understand you correctly, you are interested in the interaction between x1 and x2 in predicting y1. If x1 and x2 are both continuous variables, then creating a variable that is the product of the two variables is the interaction term and can be used as a predictor of y1. Multiple group analysis can be used if the variable is categorical. Amanuel Tekleab posted on Wednesday, May 10, 2000 - 6:11 pm Thank you Dr. Muthen for your responses. I got some more questions. Should I use the procedure mentioned above even if the moderating effect is toward the end of a long model: For example, x1-->y1---->y2---->y3; and if I have a moderating variable, let's say x2, on the relationship between y2 and y3, should I create an interaction term for y2 and x2 and use it as a predictor or should I use a multi-group analysis? If I have to use multi-group analysis, how do I constrain the other paths (x1-->y1 and y1--->y2)so that I could see the differences of the two groups with respect to the effect on y2--->y3 moderated by x2? Thank you for your unreserved and immediate responses. Linda K. Muthen posted on Thursday, May 11, 2000 - 4:01 pm You can use either approach. If you use multiple group, you will need to categorize x2. The model statement would be as follows: MODEL: y3 ON y2; y2 ON y1 (1); y1 ON x1 (2); where the numbers in parentheses indicate that the parameters are to be held equal across groups. Amanuel Tekleab posted on Wednesday, May 24, 2000 - 8:32 pm Professor Muthen, I did as you suggested, i.e., holding paths equal across groups. I would appreciate if you please help me with the following questions. 1. If I hold parameters equal, is there an LMTest that indicates a parameter that should be released? Or a chi square which I can compare? I see a note on the output not to trust the chi square indicated since I have a categorical dependent variable. Remember, the moderating effect I am testing is on the relationship between a continuous variable and a categorical variable. 2. How do I know whether or not the parameter I am interested in is different in the two groups? 3. When I run the model by holding the parameters equal, I got only the estimates (no S.E., std., StdX/Y and any indicator of the significance of the values. What should I do to know the significance of the paths? Can I send you the output so that you can see what is happening in there? I thank you very much for your responses. Respectfully yours, Linda K. Muthen posted on Thursday, May 25, 2000 - 5:44 am The note in the ouput warns that WLSM and WLSMV cannot be used for chi-square difference tests. This does not mean that they are not trustworthy for testing model fit. You can use WLS for chi-square difference testing. If you are not getting standard errors and the ratio of the parameter estimate to the standard error, then you must also be getting a message regarding this--for example, your model must not be converging. If this is the case, you would also not get Std or StdYX. Note also that you must ask for Standardized in the OUTPUT command to obtain the standardized coefficients. Please send your output and data along with your license number to support@statmodel.com. finch posted on Wednesday, November 15, 2000 - 12:50 pm I have two questions regarding SEM with MPlus. First, is it possible to test for the moderating effects of a latent variable in a lisrel type model? Secondly, is it possible to estimate non-recursive models in MPlus? Thanks very much. Linda K. Muthen posted on Wednesday, November 15, 2000 - 2:29 pm The answer to both questions is yes. finch posted on Thursday, November 16, 2000 - 6:43 am Thanks much for your response. I think that I asked questions that were too narrowly focused. My obvious next questions are, "How?" and "How?" I looked in the manual and couldn't find examples or discussion of either. I found the on-line discussion of moderation with observed variables, but am unsure how to apply that to the factors. Thanks for any help. Linda K. Muthen posted on Thursday, November 16, 2000 - 10:08 am I guess I could have elaborated a bit. If you want a latent variable to be a moderator variable, just use it on the right hand side of an ON statement. So, for example, f2 ON f1. For a non-recursive model say, f1 ON f2; f2 ON f1; If you use the ON statements to describe the regressions among observed variables in your model and latent variables in your model that have been created using BY statements, Mplus automatically does any other manipulations that are needed. Let me know if this does not answer your question. Anonymous posted on Wednesday, December 13, 2000 - 7:28 am Is there a way to do a latent variable interaction or non-linear effects model such as those described in the Schumaker and Marcoulides book? bmuthen posted on Wednesday, December 13, 2000 - 8:22 am There is a Joreskog-Yang chapter which describes how to do such modeling using non-linear parameter constraints. This facility is not yet available in Mplus. Sandra Lyons posted on Friday, October 26, 2001 - 1:29 pm In your response to a post (By Linda K. Muthen on Thursday, May 25, 2000 - 06:44 am) You replied: The note in the ouput warns that WLSM and WLSMV cannot be used for chi-square difference tests. This does not mean that they are not trustworthy for testing model fit. You can use WLS for chi-square difference testing. Do you mean by this that instead of the default estimator (WLSMV) for analyses with categorical indicators, that the WLS estimator can be selected in order to obtain a chi-square value that can be used for differnce testing? If not, how does one test for differences in multiple group analyses with categorical indicators? Linda K. Muthen posted on Friday, October 26, 2001 - 4:25 pm Yes, WLS can be selected in the ANALYSIS command using the ESTIMATOR= option. We recommend using WLS for difference testing and WLSMV to get a chi-square for the final model. Sandra Lyons posted on Saturday, February 09, 2002 - 7:55 am I have a multi-group model with categorical indicators and want to test the equality of coefficients between the groups. But since the matrix is not positive definite, I can't do chi-square difference tests with WLS. I see that you have posted the steps to compute a chi-square difference test for the chi-square obtained with the MLM estimator. Is there something similar for the adjusted chi-square obtained with WLSMV. Alternatively, is there a way to hand calculate a significance test for the difference between two coefficients. bmuthen posted on Monday, February 11, 2002 - 7:05 am The difference between two coefficients can be tested using the TECH3 output, giving the variances and covariance for the coefficients. The numbering of the parameters is shown in TECH1. The test is the approximately normal quantity where diff is the coefficient difference and sd is the square root of v1 + v2 - 2 cov where v1, v2 and cov are the variance and covariance elements for the coefficients found in the TECH3 output. Sandra Lyons posted on Monday, February 11, 2002 - 12:40 pm In testing the difference between two coefficients, should the chi-square difference test and diff/sd provide the same results (i.e. level of significance)? Linda K. Muthen posted on Tuesday, February 12, 2002 - 8:55 am In large samples, they should be the same. Sandra Lyons posted on Friday, February 15, 2002 - 1:18 pm In calculating t to test coefficient differences, I noticed that for WLS estimates t is significant, but t is not significant for WLSMV estimates because the difference is smaller and the s.e. is larger. Why is this? Since this is the case, how is it that WLS can be used for nested model testing for WLSMV estimated models? Linda K. Muthen posted on Sunday, February 17, 2002 - 10:49 am There can be differences between WLS and WLSMV particularly when the sample size is not large. The WLSMV results are more trustworthy. Because WLSMV cannot be used for difference testing, we recommend using WLS for this purpose only. We recommend WLSMV for the final model. Anonymous posted on Monday, March 17, 2003 - 8:34 pm I am trying to test the moderating effect of a continuous latent variable on the relations between three other continuous latent variables in the model. I understand (via your post on 11/16/00) that by placing the moderator on the right side of the ON statement, you can test for moderating effects on the variables in question (ie. f2 f3 f4 ON f1 w/f1 as the moderator). However, where in the print out can I see whether this moderating effect influences the relations between the two predictors f2 f3 and the outcome variable f4? Linda K. Muthen posted on Tuesday, March 18, 2003 - 8:32 am I am not sure that I understand your question, but I think that you are asking about indirect effects. If so, Mplus does not currently estimate indirect effects. You would have to do these by hand. If this is not what you mean, can you show your entire MODEL command and then tell me what you are not finding in the output? Anonymous posted on Tuesday, March 18, 2003 - 1:11 pm I am sorry if I was unclear. I am attempting to determine whether my moderator, f1, influences the relations between f2 and f4 and f3 and f4. My model command is: f1 BY y1-y4; f2 BY y5-y6; f3 BY y7-y10; f4 BY y11-y13; f2 f3 f4 ON f1; I have found the coefficients for the relations between the moderator and each factor and its significance. However, I am looking for any way to see if the inclusion of the moderator changes the relations between f2, f3, and f4. I appreciate your assistance. Linda K. Muthen posted on Tuesday, March 18, 2003 - 7:04 pm It sounds like you are interested in looking at the model estimated covariances among factors 2, 3, and 4 for two models -- the model above without the ON statment and the model above as it is. In both cases, the model estimated covariances would be the same. Check TECH4 to see if that is the case. Also, the two models should have the same chi-square and degrees of freedom. This is because both models allow for all four factors to be freely interrelated. If this is still not what you are asking, please try again. Anonymous posted on Thursday, March 20, 2003 - 6:10 pm I think I am still being unclear. I am trying to complete a standard moderation analysis with latent variables. I have read Jaccard & Wan (1995)'s article and have investigated that approach. From what I gather from your comments above, it is not possible to do this in MPlus because of the need for nonlinear constraints (12/13/00). I am trying to find an alternative to test if my moderator influences the direct relation between two other latent variables. I have been told that dichotomizing my moderator to create two data sets and using invariance testing on the models is not an appropriate approach. Therefore, I am looking for an alternative. I hope this clarifies my question. bmuthen posted on Thursday, March 20, 2003 - 6:29 pm Let's see if this responds to your question. In your last message it sounds as if you are interested in the relationship between 2 latent variables, e.g. f3 on f2 where this relationship is moderated by a continuous variable, say x. To me, this is the same as x and f2 interacting in their influence on f3. If this represents what you want to do, we have the solution - see Mplus Web Note #6. If not, please try again. Paul Kim posted on Tuesday, April 29, 2003 - 4:44 pm Hi Drs. Muthen, I've been trying to use a random slopes model but I'm running into a couple problems. 1). Is there any way I can conduct a random analysis with a categorical latent dependent variable? That is, I'd like to run a model that interacts a latent factor and an observed variable to see the effects on a categorical latent variable. 2). I'm using mplus version 2.13, but I'm getting an error that says that the Analysis=random command is not recognized. Is there an update or a patch or something I can get to allow the program to recognize this command? Thank you in advance. Linda K. Muthen posted on Wednesday, April 30, 2003 - 10:03 am 1. This will be available in Version 3. 2. It is TYPE=RANDOM not ANALYSIS=RANDOM. Anonymous posted on Wednesday, December 17, 2003 - 11:03 pm Is there a way to do a latent variable interaction or non-linear effects model such as those described in the Schumaker and Marcoulides book? Or is It possible to do the model Including nonlinear constraints in MPLUS2.14? Thank you very much. Linda K. Muthen posted on Thursday, December 18, 2003 - 9:00 am Mplus Version 3 will have latent variable interactions using maximum likelihood estimation. I don't believe that what is described in the Schumaker and Marcoulides book is true maximum likelihood. Version 3 will also have non-linear constraints. In Version 2.14 one can do an interaction between a continuous latent variable and a continuous observed variable by using a trick. Anonymous posted on Wednesday, March 10, 2004 - 8:26 am I would like to test an interaction between a continuous latent variable and continuous observed variable. What is the trick one needs to use in version 2.14 to do this? Thank you! Linda K. Muthen posted on Thursday, March 11, 2004 - 7:36 am This is described in Web Note 6 which can be accessed from the homepage at www.statmodel.com. Anonymous posted on Wednesday, March 17, 2004 - 1:19 pm I'm interested in knowing if its possible to do something like an F-test using the Mplus output (specifically the THETA parameter array). I'd like to perform more "targeted" tests of model fit than I expect can be done with the RMSE and SRMR summary statistics Mplus provides. Specifically, I'm intersted in testing for level-1 interaction effects in a multilevel SEM with mutiple dependent variables. Could the THETA parameter array for a set of constrained versus unstrained models be used in this way ? bmuthen posted on Wednesday, March 17, 2004 - 4:51 pm Could you describe the types of interaction effects that you are interested in? Anonymous posted on Sunday, May 23, 2004 - 9:46 pm I would like to test moderation effect of one continuous latent construct (f1) on another (f2), where the moderator is also a continuous latent construct (f3). I want to divide the sample into two equal percentile subsamples based on f3 (below and above f3 median), and look at the change in coefficients (f2 on f1) between these two groups. How do I implement it in MPlus (dividing sample into two subsamples)? bmuthen posted on Monday, May 24, 2004 - 9:55 am This would be modeled by defining the 3 factors using BY statements and then saying (assuming f2 is your dependent variable) f1xf3 | f1 XWITH f3; f2 on f1 f3 f1xf3; I would not divide the sample but instead use the estimated model. From the estimated model you can deduce what the moderated f2 on f1 slope is for different values of f3. The f3 mean is zero and you get its estimated variance and can therefore try say +1SD and -1SD from the zero mean. How to deduce such moderated slopes is exemplified on page 59 in the Mplus Short Course handout New Features in Mplus Version 3. Essentially, this involves rewriting the expression on the RHS, b1*f1 + b2*f1*f3, (b1 + b2*f3)*f1 showing that f3 moderates the influence of f1 on f2. Anonymous posted on Monday, May 24, 2004 - 10:04 am Thank you. Is it a new feature in MPLUS 3? Can I still do it by dividing the sample in MPLUS 2.14 - I have received a firm external recommendation to do this, instead of testing interaction with a continuous latent variable. Linda K. Muthen posted on Monday, May 24, 2004 - 10:18 am Yes, this is a new feature in Version 3. You can divide the sample in Version 2.14 or Version 3 by saving the factor scores and creating groups based on f3. However, if you believe there is an interaction and save factor scores using a model without an interaction, the factor scores will be biased and the model will be fraught with problems. Anonymous posted on Monday, May 24, 2004 - 12:51 pm Is this feature included in Base, Mixture add-on or Multilevel add-on part of Version 3? bmuthen posted on Monday, May 24, 2004 - 1:04 pm The new ML latent variable interaction feature (XWITH) is included in the Base part of Version 3. For background reading, see Klein and Moosbrugger (2000) in Psychometrika. Anonymous posted on Monday, May 24, 2004 - 1:23 pm Is there any limit on the number of interactions to be tested in one model in Version 3? In other words, if f2 is dependent variable, f3 is a moderator, g1, g2, g3 etc are independent variables, can I test the model: f2 on g1 g2 g3 g4...f3 g1xf3 g2xf3 g3xf3 g4xf3..; for any number of g, or is there any technical/computational limit? All variables here are continuous latent constructs. Linda K. Muthen posted on Monday, May 24, 2004 - 3:58 pm Latent variable interactions require numerical integration which is computationally demanding. So although there is no limit set by Mplus, there is a practical limit on how long you probably want to wait for the model to converge. If you look under numerical integration in the Version 3 Mplus User's Guide, you will find a discussion of numerical integration. Gen posted on Wednesday, July 07, 2004 - 11:41 am Bonjour from Montreal! I'm trying to test a model where I have a latent categorial variable infuencing a dependent latent continuous variable and 3 observed continuous variables as moderators. Is mixture modeling an appropriate way to test the interaction between the latent categorial variable and the observed continuous variables? If so, do you know some readings that can help me with the procedure? Thank you! Anonymous posted on Wednesday, July 07, 2004 - 1:35 pm Hi Linda, I have a methodological question for you and would appreciate if you could offer some insights. We are using structural equation modeling to determine whether the relationship between a predictor and an outcome is mediated by an intermediate variable. We are interested in determining whether these relationships are "independent", but believe that may be confounded by another variable. How would you test for confounding in this case? More generally, how do you test for confounding if a model includes an intermediate variable? Anonymous posted on Wednesday, July 07, 2004 - 1:37 pm Hi Linda, I have a methodological question for you and would appreciate it if you could offer some insights. We are using structural equation modeling to determine whether the relationship between a predictor and an outcome is mediated by an intermediate variable. We are interested in determining whether these relationships are "independent", but believe they may be confounded by another variable. How would you test for confounding in this case? More generally, how do you test for confounding if a model includes an intermediate variable? bmuthen posted on Wednesday, July 07, 2004 - 6:42 pm This is the answer to Montreal - Bonjour! Yes, in the mixture modeling you can allow the effects of the 3 observed continuous variables on the dependent latent continuous variable to vary across classes - so that captures the interaction, i.e. the moderation. Linda K. Muthen posted on Friday, July 16, 2004 - 11:27 am This answer is from David MacKinnon: "One way that you could test the confounder effect is to consider the confounder as a oderator. If the confounder is a grouping variable, then a researcher could test the equality of the mediated effect across the levels of the grouping (confounder) variable(Multiple Group SEM). The equality of the relation of X to M and from M to Y across the groups can be tested. In this way, the researcher could test whether confounder effects the a (X to M) or b (M to Y), or the mediated ab (X to M to Y) effect. It would be especially compelling if the researchers' hypothesized which part of the mediation effect should be confounded.If the confounder is a continuous variable, the same type of tests could be used but would require constructing interactions between the confounder and M, and Y so that there would be interaction effects in the prediction of Y and M. Your work with Andreas Klein is helpful here (Klein & Moosbrugger, 2000 cited in the Mplus manual; Example 5.13). These interactions then provide tests of the equality of the a and b paths across levels of the confounder. aap034 posted on Tuesday, January 11, 2005 - 4:39 pm I am trying to run an analysis where the x, y and z (z being a moderator) are all continuous. Most of the research I am seeing is telling me to catergorize z apriori. However, I am still unsure on how to analyze the data if all three variables are continuous. Can you help? Linda K. Muthen posted on Wednesday, January 12, 2005 - 4:36 pm To examine the interaction between the observed variables x and z use the DEFINE command to create an interaction term and then regress y on x, z, and xz where: DEFINE: xz = x*z; Andre Beauducel posted on Wednesday, March 16, 2005 - 6:04 am I used WLSMV. As far as I understood it is not possible to compare chi-square values of 2 models based on WLSMV. Now I have 2 non-nested models. The models have the same number of free parameters, the same WLSMV-degrees of freedom, they are based on the same data set. Is it perhaps possible to compare the chi-squares of these models under these conditions? Thanks for help! Linda K. Muthen posted on Wednesday, March 16, 2005 - 7:02 am It is possible to do chi-square difference testing of nested models using WLSMV in Version 3. See the DIFFTEST option in the Mplus User's Guide. With WLSMV, I would compare only p-values. Andre Beauducel posted on Wednesday, March 16, 2005 - 7:07 am Thanks. However, I would like to use the chi-square in AIC or BIC for model comparison, since the models are non-nested. Would this be possible when WLSMV-df and data set are identical? Linda K. Muthen posted on Wednesday, March 16, 2005 - 7:21 am I'm not sure about this. Perhaps someone else can comment. Anonymous posted on Thursday, September 22, 2005 - 2:32 pm Hello - I am running a path analysis and am interested in testing for mediating influences (using the IND command) as well as moderation effects. When using MLR, can I still compare across nested models and compute chi-square differences to assess model fit? The output gives me a warning statement "The chi-square value for MLM, MLMV, MLR, WLSM and WLSMV cannot be used for chi-square difference tests." The manual does not cover MLR as one of the estimators for which the DIFFTEST command can be used. Linda K. Muthen posted on Friday, September 23, 2005 - 7:49 am For the estimators mentioned in the message, it is not the DIFFTEST option that you should use but the scaling factor that is printed in the output. See the homepage where a discussion of this is found under Chi-Square Difference Testing for MLM. This applies to all estimators mentioned including MLR. joshua posted on Saturday, November 26, 2005 - 2:57 pm I'm trying to determine whether 9 regression coefficients differ with respect to 2 groups in the context of multigroup analysis. Basically, I've 3 exos and 3 endos and a moderating variable (2 groups) which is categorical. Just to confirm...I would have to fix the path to be equal in both groups in one analysis and allow them to be freely estimated in another analysis. Am i right to assume that this is done one at a time for all 9 of the paths? Also, assuming that holding the first path to be equal did not significantly worsen the fit, we would still hold that path to be equal while moving on to constraining the next path to be equal across both groups, correct? Thanks in advance. Linda K. Muthen posted on Sunday, November 27, 2005 - 2:36 pm If you are interested in 9 slopes beoing equal across groups, I would hold them equal and then not equal and get a joint test for all of them. If that test is rejected, I look at Modification indices in the run with them held equal to see which ones violate equality. joshua posted on Monday, November 28, 2005 - 3:37 am Thanks Linda, 1. Btw, I'm assuming that you would use similar approach to test for measurement invariance for factor loadings in say across both groups? 2. When you mentioned you would look at the modification indices (MIs) to see which violates equality, did you mean that we should be looking for ones above 3.84 for 1 df? What if we have a path with significant MIs across both groups? One group could still have a path that would be significantly larger than another. Should we test this path separately again? Linda K. Muthen posted on Monday, November 28, 2005 - 4:46 pm 1. Yes. 2. Yes. Yes, if you want to know if one is larger than the other. joshua posted on Friday, December 02, 2005 - 8:43 am Thanks Linda, I've run the analysis as what you've suggested. I've found a significant difference between the model in which all 9 regression coefficients were held equal and the model where all 9 were freely estimated. Out of the 9 regression coefficients, I found 3 to be non-invariant across both groups (as suggested by mod. indices). Hence, I can interpret that my dichotomous variable moderated the 3 Here's where my problem lies.... I went on to free the 3 regression coefficients while allowing the other 6 to be freely estimated between both groups. Basically, I respecified the model as suggested by Mod Indices. I observed the regression coefficients..the ones that were previously found to be moderated by ethnicity (as suggested by Mod indices)... one of the 3 regression coefficients that registered a mod indices > 3.84, was not even statistically significant in their respective groups.For example, in one group the Est/S.E = -1.122 and in another 0.438. Am I right to state that the Mod Indices can only be used as a neceesary but not sufficient tool to test for the presence of a moderating effect? If yes, does this mean we still have to observe the individual paths estimate in both groups after respecifying the latent variable model as previously suggested by the Mod Indices? Linda K. Muthen posted on Friday, December 02, 2005 - 10:43 am Once you free a parameter, you would need to rerun the model to see how other modification indices are affected. I would free the parameter with the largest modification index first and then see if the others still need to be freed. Carol posted on Thursday, January 26, 2006 - 12:11 pm Dear Dr. Muthen, I am trying to model the moderating effects of one variable on the heritability of another variable. In other words, are the genetic influences on X stronger in individuals who are high on Y where X and Y are both continuous observed variables. I believe this is the same as asking of there is an interaction between the latent additive genetic factor (A) and the observed variable Y. Without the moderator variable I would specify something like A1 by X1 (11); A2 by X2; where 1 and 2 designate twins. Would I think use XWITH to define a second variable A1xY1 | A1 XWITH Y1 and then also specify that X is indexing this interaction variable? A1xY1 by X1 Thank you for your help. Linda K. Muthen posted on Friday, January 27, 2006 - 8:46 am This sounds correct but say: x1 ON a1xy1; instead of using a BY statement. You cannot use a BY statement when a1xy1 has already been defined in the interaction statement. Carol posted on Monday, January 30, 2006 - 9:17 am Hi Dr. Muthen, Thank you for your suggestion. Below is the script that I am using, recall that I want to know if heritability of X changes across Y (both continuous). With this script I get an error message saying that Y1 and Y2 are uncorrelated with any variables in the model, but I thought that any variables that weren't specified as being uncorrelated were automatically assumed to be correlated. Certainly X and Y are correlated on a phenotypic level. I'm not sure where the error is. Thank you, VARIABLE: NAMES = suid zyg Y1 X1 Y2 X2; USEVARIABLES = Y1 X1 Y2 X2; GROUP= zyg(1=mz 0=dz); ANALYSIS: TYPE = random random; [X1 X2] (1); !means X1@0; X2@0; !fix residual variance to 0 [Y1 Y2] (2); ! Biometric loadings A1 BY X1*.10(11); A2 BY X2*.10 (11); !additive pathway C1 BY X1*.15 (12); C2 BY X2*.15 (12); !shared env E1 BY X1*.20 (13); E2 BY X2*.20 (13); !non-shared env !Define interactions Y1xA1| A1 XWITH Y1; Y2xA2 | A2 XWITH Y2; Y1xC1| C1 XWITH Y1; Y2xC2| C2 XWITH Y2; Y1xE1| E1 XWITH Y1; Y2xE2| E2 XWITH Y2; X1 ON Y1xA1 (14); X2 ON Y2xA2 (14); X1 ON Y1xC1 (15); X2 ON Y2xC2 (15); X1 ON Y1xE1 (16); X2 ON Y2xE2 (16); [A1-E2@0]; !fixes latent means means to 0 A1-E2@1; !fixes latend variable variances Y1-Y2 WITH A1-E2@0; A1-A2 WITH C1-E2@0; C1-C2 WITH E1-E2@0; A1 WITH A2@1.0; C1 WITH C2@1.0; E1 WiTH E2@0; !Sets MZ as default model MODEL dz: A1 WITH A2@.5; Linda K. Muthen posted on Monday, January 30, 2006 - 9:22 am Mplus has defaults that are described in the user's guide. All variables are not automatically correlated. If there are covariances missing from your model results that you want to see, you need to add them to the model using the WITH option. liesbethvanosch posted on Wednesday, December 13, 2006 - 9:14 am Dear Dr. Muthen, I would like to compare two models: one model in which latent variable A is considered a mediator between variables B (observed) and C (observed), and one model in which I have entered an interaction term between A and B (using XWITH and adding ALGORITHM = INTEGRATION to the ANALYSIS command). I have compared the loglikelihoods of both models using the Satorra-Bentler scaled chi-square difference test and find that the mediation model has significant better loglikelihood than the interaction model. However, I would also like to know whether the Rsquare of variable C differs between the models. The Mplus output for the interaction model, however, does not include this information. Is there any way to get Rsquares for the interaction model? Thank you. Linda K. Muthen posted on Wednesday, December 13, 2006 - 10:54 am We do not provide that R-square. You would need to compute it yourself at this time. Jonathan Cook posted on Saturday, January 06, 2007 - 3:52 pm Hi there... I was wondering if you knew of a reference for properly displaying structural models with moderators when using the Xwith command. Specifically, I'm not sure, particularly given the algorithm used by mplus whether I should have indicators of the interaction construct. Thanks for the helpful resource! Linda K. Muthen posted on Monday, January 08, 2007 - 9:00 am Mplus uses maximum likelihood not the ad hoc approach where the product of indicators are used. The following article describes the approach used with XWITH: Klein, A. & Moosbrugger, H. (2000). Maximum likelihood estimation of latent interaction effects with the LMS method. Psychometrika, 65, 457-474. Jonathan Cook posted on Monday, January 08, 2007 - 2:18 pm Right...thanks, Linda. So my question is really just about drawing an appropriate diagram. I have 2 latent exogenous variables and the third "moderator" latent variable. In my diagram, I'm assuming that my moderator variable would not have any indicators. Its a small question...just want to make sure my picture looks right. Linda K. Muthen posted on Monday, January 08, 2007 - 2:27 pm The issues of graphically displaying the interaction between continuous variables are the same for observed or latent variables. This is difficult to do. I would think this is discussed in regression texts and also articles by Aitkin and West. Jonathan Cook posted on Tuesday, January 09, 2007 - 11:34 am Sorry...I've been unclear. My question was not about how to graphically display the interaction, but about how to represent the moderator variable in a path diagram. Its not that important and I don't need to take up more of your time on this. Linda K. Muthen posted on Tuesday, January 09, 2007 - 11:41 am You can see how we displayed an interaction between two latent variables by looking at the path diagram for Example 5.13. I think this is what you want. Kimberley Freire posted on Tuesday, January 30, 2007 - 7:37 am Drs Muthen, I am trying to run a model (all observed and continuous variables) with two moderator variables interacting with four predictor variables to predict the outcome variable. I do not want to use multigroup analysis. Is this currently possible in MPlus? If so, are there issues with model convergence? Thank you in advance for your reply. Linda K. Muthen posted on Tuesday, January 30, 2007 - 9:30 am If you want to create interactions between two observed variables, use the DEFINE option to do so and include those variables in your model as covariates. This has always been possible in Mplus and I know of no obvious convergence problems. Jonathan Cook posted on Tuesday, January 30, 2007 - 11:59 pm I'm specifying a model that has somewhat skewed indicators and an interaction between two exogenous latent variables. Typically I would use the 'two-step approach', where the CFA model is specified first after which the SM is tested. I can still do the first step (i.e., CFA; possibly using mlm or mlmv to deal with the non-normality). But having done that, do I then just start with the final CFA model as my foundation when I move to the SEM? Because there's no way to compre the CFA results to the SEM model with the latent interaction. Also, when using the interaction test (i.e., XWITH), do I need to worry about the non-normality of my indicators? Hope my questions make sense.... Bengt O. Muthen posted on Wednesday, January 31, 2007 - 8:38 am One question is if the lv interaction variable influences the factor indicators or only some other variable. Both are possible in Mplus. Yes, just start with the final CFA as your foundation. Non-normality of indicators is possible even with the normality of the factors that XWITH assumes. Due to non-normal residuals, or due to the lv interaction influencing the indicators. Use MLR to take non-normality of indicators into account. Jonathan Cook posted on Wednesday, January 31, 2007 - 10:05 am Thank you for the response. I didn't realize that I could specify MLR in a latent interaction model. However, now that I've tried it, I'm a bit confused, as the results are identical whether or not I include "Estimator = MLR;" in the analysis command. Here's what I have in ANALYSIS... Estimator = MLR; Type = RANDOM; Algorithm = INTEGRATION; Linda K. Muthen posted on Wednesday, January 31, 2007 - 11:06 am This is because MLR is the default estimator for TYPE=RANDOM; Jonathan Cook posted on Wednesday, January 31, 2007 - 11:12 am Perfect, that's what I wanted to know. Thank you Linda and Bengt. Carol Van Hulle posted on Thursday, February 22, 2007 - 9:21 am Drs Muthen, I am trying to run a model that includes multiple interactions among continuous latent variables (created using XWITH). This apparently requires TYPE=MIXTURE and ALGORITHM=INTEGRATION. However, I keep recieving an error message that says there is not enough memory because " analysis requires 8 dimensions of integration resulting in a total of 0.39062E+06 integration points." I've been assured that I have all the memory I'm likely to get. Is there any other way around this problem? Thank you, Bengt O. Muthen posted on Saturday, February 24, 2007 - 3:20 pm 8 dimensions of integration is difficult to accomplish. You can try integration = montecarlo; which demands less memory. Or, you can try to simplify the model so that you have fewer dimensions of integration. To simplify the model, you can try to break it down into its parts. This might show that not all of your lv interactions play a significant role in the full model. Carol Van Hulle posted on Monday, February 26, 2007 - 12:47 pm Ok. I'll give the montecarlo option a try. Thanks. Marcus Butts posted on Tuesday, April 10, 2007 - 11:17 am I have a quick question. I ran a latent interaction model with two latent variables comprised of 3 items each using the integration algorithm, and the t-value associated with the interaction term is significant. My next step is to plot the interaction using a method similar to Aiken & West. However, this method is based on the assumption that the mean of each latent variable comprsing the interaction is zero due to centering. From my understanding, it is not necessary to center variables when using the Kline & Moosbrugger approach. How then do I find out what the latent means of my variables are? Are they automatically set to zero or is a there a specific command I need to give Mplus to retrieve these values in my output? Bengt O. Muthen posted on Tuesday, April 10, 2007 - 11:48 am In a regular model, the default Mplus setting is zero factor means. If no factor means are reported in the output, they are zero. Exceptions include multi-group models and growth models. Yu Kyoum Kim posted on Wednesday, October 10, 2007 - 8:40 am Dear Dr. Muthen, I ran the SEM with continous factor incators and an interaction between two latent variabels. It only give loglikelihood value for evaluating model fit. Is loglikelihood value same as minimum fit function chi-square value? If not, could you give me the equations which calculate RMSEA, CFI, TLI from loglikelihood value? Thank you so much! Linda K. Muthen posted on Wednesday, October 10, 2007 - 12:41 pm When means, variances, and covariances are not sufficient statistics for model estimation chi-square and related fit statistics are not available. Marvella Bowman posted on Tuesday, November 13, 2007 - 12:35 pm Hello, I am new to Mplus and currently have the demo version. I am trying to gain a better understanding of the program because I am interested in using it for my dissertation research. My hypothesized model posits mediated-moderation, and there are 6 continuous variables involved (4 predictors (2 pairs of interacting variables) and two outcomes (one is a proposed mediator, the other the outcome). Which examples would be most proximal to this type of study? If none exist in the demo version, can you refer me to articles that may have done similar analyses so I can have a better idea of (a) how to explain my proposed data analytic strategies and (b)understand what I will need to do? Linda K. Muthen posted on Tuesday, November 13, 2007 - 1:16 pm I think what you want is similar to Example 3.11 where you use the DEFINE command to create an interaction and use the interaction as a covariate. Giannis Costa posted on Friday, March 28, 2008 - 4:16 am Dear Linda and Bengt, In my model two continuous latent predictors (a,b) and an interaction between these (a*b) predict a latent continuous variable d. As described in the Mplus user guide, I am using: ANALYSIS: TYPE = RANDOM; ALGORITHM = INTEGRATION; and XWITH. I have got following questions: 1) I would like to compare the interaction model with the model without interaction. Which is the correct Mplus syntax (concerning estimation method, model specification etc.) for the model without interaction term? Should I simply run the syntax for the model with interaction (see above) and delete the interaction part? Or use another estimation method, for example : TYPE = GENERAL; ESTIMATOR = MLR or ML? 2) When I am running the interaction model, I do n0t get a R**2 and intercept to calculate the regression between the latent variables. How do I get these? My solution was to save the fscores of the latent variables and then run a regression on these (get R**2 and intercept). However, I get different results for the parameter Regression (latent interaction model): d = no intercept + 0.620*a - 0.208*b - 0.884*inter Regression (fscores; ML,Meanstructure): d = 0.01 + 0.730 *a - 0.150*b - 0.852*inter Linda K. Muthen posted on Friday, March 28, 2008 - 9:01 am 1. Run the model with the interaction and the model without the interaction. You will not need RANDOM and ALGORITHM=INTEGRATION without the interaction. Be sure you use the same estimator. 2. How R-square would be defined in this situation is unclear and is a research question. I would not use factor scores for this purpose. Means and intercepts of factors are fixed at zero in a single group analysis. Giannis Costa posted on Friday, March 28, 2008 - 9:51 am Dear Linda, thanks for your quick response! So my conclusion out of your suggestion is: 1) I can use the same Syntax (RANDOM and ALGORITHM=INTEGRATION) but it is not needed. I also can run TYPE = GENERAL; but as ESTIMATOR I should use MLR, because MLR is used in conjunction with ALGORITHM=INTEGRATION as default. Could I also use another Estimator with ALGORITHM=INTEGRATION? 2) How do I free the intercept? I tried it with: TYPE=RANDOM, ALGORITHM=INTEGRATION and additional I included TYPE=MEANSTRUCTURE into the syntax, but I do not get it. Linda K. Muthen posted on Friday, March 28, 2008 - 11:57 am 1. You must use the same estimator. You can use the same TYPE statement or not. 2. The intercept of a factor cannot be free in single group analysis. Giannis Costa posted on Friday, March 28, 2008 - 12:48 pm I would like to have the intercept of the dependent latent variable to be able to show the latent interaction graphically. Do you know another possibility how to do this, except of running a regression with the saved latent variable scores? Giannis Costa posted on Friday, March 28, 2008 - 1:59 pm sorry, silly question: the intercept is zero, as you mentioned above. Last question: If I use MLR, I have to calculate the difference-test between the two models using the formulas from your website http://www.statmodel.com/chidiff.shtml under "Difference Testing Using the Loglikelihood"? Linda K. Muthen posted on Friday, March 28, 2008 - 3:23 pm You cannot compute R-square. You will have to do without it. Yes, you would use the formulas on the website for difference testing with MLR. Alexandre Morin posted on Saturday, April 19, 2008 - 7:27 am Do you have a reference to suggest on the MLR estimator ? and, If I do, for instance, a SEM with non normal data and missing values, is there any problem in using MLR (would you suggest something else), instead of MLM or MLV which do not accomodate missing, even if the TYPE is not COMPLEX ? Thank you Linda K. Muthen posted on Saturday, April 19, 2008 - 1:58 pm MLM and MLMV can be used with missing data if TYPE=COMPLEX is not used. MLR can be used with and without complex survey data. See Web Note 2 on the website regarding MLR. Linda K. Muthen posted on Monday, April 21, 2008 - 8:18 am Actually, MLM and MLMV cannot be used with missing data. Grainne Cousins posted on Monday, June 30, 2008 - 6:06 am Dear Linda and Bengt I am trying to test the moderating effects of a continous latent variable (L) on the relationship between a binary IV (x) and a binary DV (Y). i followed the instructions of the userguide, using: y ON X L; X x L | x XWITH L; Y ON X x L; this generates an error message ERROR in Model command Only one interaction can be defined at a time.Definition for the following: X X L Perhaps this is because my IV (x) is a binary variable. Am i using the incorrect syntax to test the moderating effects of a latent continous variable on the relationship between 2 binary variables? If so, perhaps you could suggest the appropriate syntax for such a model? Many thanks Linda K. Muthen posted on Monday, June 30, 2008 - 6:57 am You should have one variable name on the left-hand side of the |, for example, XxL. A variable name cannot include blanks. Grainne Cousins posted on Wednesday, September 03, 2008 - 2:57 am Dear Linda I have conducted a mediational analysis using the syntax below, where x is a continuous latent variable, M is continuous observed and y is binary observed. Y on X; Y on M; M on X; Y IND X; This model provided a good fit to the data and indicated partial mediation. However, I would like to determine whether the mediating effects of M are moderated by a confounder (C). C is a binary observed variable, so I understand I must conduct mulitple group analysis. Having read the manual and this site I am unclear as to how exactly I write the syntax to test these effects. What syntax must i add to the model above to determine whether C moderates the mediating effects of M. Many Thanks Linda K. Muthen posted on Wednesday, September 03, 2008 - 7:21 am You would use c as a grouping variable. See Chapter 13 for a discussion of multiple group analysis. Kiarri Kershaw posted on Thursday, December 25, 2008 - 9:43 pm Hello Linda and Merry Christmas! I am running a multiple mediation analysis with a single dichotomous outcome. I am trying to test for moderation by gender. I have included gender as a grouping variable and generated output by gender. Now I'm not sure how to test whether the mediation is different by gender. So for example, if I wanted to test the equality of the mediated effect in each group, would I subtract the mediated effects (a11b11-a12b12) for each mediator I'm testing and then divide it by the average of the standard errors for the two groups? Or is there some code I can include to get Mplus to test this for me? Thanks! Linda K. Muthen posted on Friday, December 26, 2008 - 2:28 pm You can test this by using multiple group analysis where you define and test the equality of the two mediation effects using MODEL CONSTRAINT. Kihan Kim posted on Friday, January 23, 2009 - 6:00 pm I am trying to test a moderating effect with multi-group path analysis. If the model is x1 -> y1 -> y2 -> y3 and if I have a categorical (gender) moderating variable x2, and I expect that X2 will moderate the relationship between y2 and y3 only, then.. I think I should I perform a chi-square difference test between the following two multi-group path models: Model 1: multi-group path model with all paths constrained to be equal. Model 2: multi-group path model with only two two paths (X1 to Y1 & Y1 to Y2) to be constrained to be equal. If the chi-square difference test is significant, then that indicates that the path from Y2 to Y3 cannot be the same. And therefore, X2 did moderate the relationship between Y2 and Y3. Could you please confirm me whether this logic is correct? Bengt O. Muthen posted on Friday, January 23, 2009 - 6:20 pm Seems logical to me. Kihan Kim posted on Friday, January 23, 2009 - 6:38 pm One follow-up questions is: For the sample path model, x1 -> y1 -> y2 -> y3, and X2 moderating the relationship between Y2 and Y3 only, When I run a multi-group path model with NO constraint, I find that the path Y2 to Y3 is significant for one subgroup, but non-significant for another subgroup. Is this also an evidence of moderating effect? If so.. then which method is better: (a) the chi-square difference test between constrained and non-constrained model OR (b) just one multi-group path model with NO constraint. Bengt O. Muthen posted on Saturday, January 24, 2009 - 4:14 pm In (a) you get a likelihood-ratio chi-square. In (b) you can get a Wald chi-square by letting the y2->3 path be different in the 2 groups and then test for equality using Model Test. The 2 chi-square tests are asymptotically equivalent but may differ in any one sample. Richard Silverwood posted on Friday, February 13, 2009 - 7:30 am I have a simple path model which appears to fit my data well (RMSEA<0.001). However, when I introduce an interaction term between two (endogenous) variables the fit becomes dramatically worse (RMSEA= 0.32). Is this a common problem with an obvious explanation? I notice that by removing variables from the path model so that the variables in the interaction term are no long endogenous I again get a good fit (RMSEA<0.001). I'm sure there's something very simple I'm overlooking! Any help gratefully received. Thank you in advance. Full path model without interaction (RMSEA<0.001): X2 ON X1; X3 ON X1; Y1 ON X1 X2 X3; Y2 ON X2 X3 Y1; Full path model with interaction (RMSEA=0.32): X2 ON X1; X3 ON X1; Y1 ON X1 X2 X3 X2X3; Y2 ON X2 X3 X2X3 Y1; Restricted path model with interaction (RMSEA<0.001): Y1 ON X2 X3 X2X3; Y2 ON X2 X3 X2X3 Y1; All variables are continuous and centred about their means. X2X3 denotes the product of X2 and X3. Bengt O. Muthen posted on Sunday, February 15, 2009 - 11:02 am As far as I know this is not a common problem with an obvious explanation. Of the 3 models you listed the content (the restriction that makes the model have degrees of freedom) of the first 2 seems to be that x1 does not influence y2 directly. For some reason, adding the interaction seems to make that restriction less well fitting. It is hard to say why this partial effect is needed with the interaction included and not without it. The third model looks like it has zero d.f.'s. Richard Silverwood posted on Monday, February 16, 2009 - 5:45 am Many thanks for your prompt response. Is it perhaps because I need to include X2X3 ON X2 X3? I had not done so as there is no obvious analogy in standard multivariable regression (i.e. I would just regress Y1 on X2, X3 and X2X3) but thinking about it now this is actually constraining X2 -> X2X3 and X3 -> X2X3 to be zero, which is clearly ridiculous... Best wishes. Bengt O. Muthen posted on Monday, February 16, 2009 - 6:27 am You should not include that. Jaime Derringer posted on Wednesday, February 25, 2009 - 11:32 am I am testing the following model: y1 ON sex x1 x2 x3 x1_x2 x1_x3 x2_x3 x1_x2_x3; y1#1 ON sex x1 x2 x3 x1_x2 x1_x3 x2_x3 x1_x2_x3; Where y1 is a count variable on a zero-inflated Poisson distribution, sex and x1-x3 are coded bivariate 0/1, and underscore terms are interactions between x variables. For graphing purposes, I would like to get [y1] for: * different combinations of x1-x3 levels (e.g., [y1] where x1=0, x2=1, x3=1), with sex covaried out; as well as * [y1] where one of the x variables is treated as an additional covariate (e.g., [y1] for x2=1, x3=1, sex & x1 covaried out) Is there a way to request these covariate-adjusted group means as part of the output? Linda K. Muthen posted on Thursday, February 26, 2009 - 9:48 am See if you can get what you want with the Adjusted Means Plot using the PLOT command. Jessica Brumley posted on Monday, March 23, 2009 - 1:51 pm I am a PhD Student working on my dissertation proposal. I am familiar with SEM using LISREL but was directed towards Mplus becuase of its ability to handle dichotomous outcome variables. I would like to use SEM to test a model of three continuous latent factors with a moderated mediation effect on the dichotomous observed outcome variable. Am I correct in saying that Mplus can be used to test this model and will be able to provide a test of the indirect and interaction effect? Linda K. Muthen posted on Monday, March 23, 2009 - 4:51 pm Yes, this can be done in Mplus. Jessica Brumley posted on Monday, March 23, 2009 - 5:56 pm Great! Thanks for your rapid response. brianne posted on Tuesday, April 07, 2009 - 1:45 pm I am wondering if I can compare two alternate moderator models using model fit indices if the main variables are the same, but the interactions are different. In other words, are these considered the same variables, but different paths OR are they considered different variables? In model 1: a on y b on y a*b on y c on y (control) For model 2: b on y c on y b*c on y a on y (control) brianne posted on Tuesday, April 07, 2009 - 1:50 pm Sorry, my syntax in my examples is backwards. y on a y on b y on a*b y on c (control) y on b y on c y on b*c y on a (control) Bengt O. Muthen posted on Tuesday, April 07, 2009 - 6:24 pm The 2 models are regular regressions so they fit the data perfectly - there are no fit indices to compare. I would simply include the interactions that are significant. Tone Fløysand posted on Friday, April 10, 2009 - 3:11 am Dear Muthén How can I use the define command to reverse scoring of scales such that values 1 2 3 4 5 6 are reversed to 6 5 4 3 2 1. Linda K. Muthen posted on Friday, April 10, 2009 - 6:52 am y = 7 - y; Rob Dvorak posted on Friday, May 08, 2009 - 10:47 am I'm running an SEM with two centered observed variables and their interaction predicting a latent variable. In order to probe the interaction I need to get the latent variable intercept of my equation, but I'm not sure how to get this. Thanks in advance. Linda K. Muthen posted on Saturday, May 09, 2009 - 11:01 am It is zero. Rob Dvorak posted on Sunday, May 10, 2009 - 6:24 am I wondered if that was the case. Thanks! ashley davis posted on Tuesday, May 12, 2009 - 12:47 pm Why doesn't Mplus calculate an R-square when utilizing algorithm = integration and type = random (interaction model)? Is there an article I can cite for this reason? I have reviewed several articles I have seen in the posts, am I right to say that Marsh Wen Hau 2004, when they talk about the nonnormality of the interaction term are answering this question? Linda K. Muthen posted on Wednesday, May 13, 2009 - 8:39 am Because residual variances vary as a function of x, there is not a single R-square with TYPE=RANDOM. This is not related to the Marsh Wen Hau paper. Gloria Li posted on Tuesday, May 19, 2009 - 7:36 pm Dear Prof Muthen, I have a mediated-moderation model involing 5 latent variables. My hypothesized model includes 3 antecedents, Y1 - Y3, they interact with each other (two 2-way and one 3-way interaction) to predict Y4 which then predicts Y5. They are all continous variables. I have used "TYPE=RANDOM & ALGORITHM = INTERGRATIPN" in ANALYSIS, however, I was not able to get the fit indices. Is there anyway to calculate them? If not, how can i report the model fit for the model? Please advise. Many thanks, Linda K. Muthen posted on Wednesday, May 20, 2009 - 5:49 am With TYPE=RANDOM chi-square and related fit statistics are not available. In this case, nested models are compared using -2 times the loglikelihood difference which is distributed as chi-square. Gloria Li posted on Thursday, May 21, 2009 - 8:54 am So what type of language should i fill in under ANALYSIS if i wanna get those indices? many thz! Linda K. Muthen posted on Thursday, May 21, 2009 - 9:18 am They are not available. When means, variances, and covariances are not sufficient statistics for model estimation, chi-square and related statistics cannot be computed. Gloria Li posted on Saturday, May 23, 2009 - 9:20 am In that case, do you have any advice on how we can report the model fit in ways that other researchers can get sense of the study results? Bengt O. Muthen posted on Saturday, May 23, 2009 - 1:22 pm What's typically done in situations like this is to work with "neighboring" models for which you compute 2 times the loglikelihood difference to get a chi-square test. You said you had 5 latent variables y1-y5 where y1-y3 (and their interactions) influence y4 which influences y5. So the model has 2 types of content which can be tested. First, are the measurement models (the factor models) well-fitting? You can test that by regular fit indices of each measurement part separately. Second, are there any direct effects from y1-y3 to y5? This second question can be addressed by a "neighboring" model which allows such direct effects versus one that doesn't. Gloria Li posted on Monday, May 25, 2009 - 7:55 am By measurement model, which vairables should I plug in? Becasue I do not hypothesize the main effect of y1-y3, instead, I have 3 hypotheses respecting interaction effects: y1*y2, y1*y3 and y1*y2*y3 on y4 and then to y5. Regarding "neighboring" model, which section in the user's mannual i can refer to? Would you please give some more info? Bengt O. Muthen posted on Monday, May 25, 2009 - 10:37 am You have one measurement model for each latent variable - see your BY statements. The fact that you have interactions of the latent variables y1-y3 in their influence on y4 does not change the fact that you have a regular factor analytic measurement model for each of y1-y3, plus also for the 3 together. Neighboring models are not explicitly explained in the User's Guide but in our courses. I gave an example of a neighboring model that has direct effects, which seems relevant for what you do. Such a model is straightforward to specify using the UG. Also see our courses available on the web - particularly topic 1. Gloria Li posted on Monday, May 25, 2009 - 11:14 pm Since I am new to Mplus, I am sorry for the overwhelming questions. For the measurement model, I have tested 4 models: (a) y1,y4 & y5; (b) y2,y4 & y5; (c) y3,y4 & y5; (d) y1-y5. Only the first one obtains acceptable fit index. Is that still ok to run SEM of interaction-effect model? I have been searching more info of neighboring models and I have referred to this online handout: http://www.statmodel.com/download/Topic%201.pdf However, I still have not able to write commands for that. Would you please give more guidance? What does it mean by "using the UG"? Many thanks! Bengt O. Muthen posted on Tuesday, May 26, 2009 - 8:43 am Regarding your first question, no you have to have acceptable fit in your measurement models. Otherwise your latent variable modeling has little meaning - not a strong enough relationship to your Regarding your second question, look at UG ex 3.11. The content of this model is that x1 and x3 don't influence y3 directly. As a test of whether this is a good model you can apply the neighboring model idea and include those 2 direct effects: y3 on x1 x2; The 2 times loglikelihood difference is a chi-square test with 2 df of the initial model. You can use that same general approach for the relationships among your latent variables. I suggest that you study the web video of Topic 1 on our web site and read up on the SEM literature. min soo kim posted on Monday, June 15, 2009 - 9:43 pm Hello, Drs. Muthen I¡¯m wondering if it is possible to examine the moderating effect of a second-order continuous factor(f9) between second-order continuous factors (f5 & f13). Could you check my syntax? VARIABLES: NAMES ARE y1-y30 ANALYSIS: TYPE = RANDOM; MODEL: f1 BY y1-y3; f2 BY y4-y6; f3 BY y7-y9; f4 BY y10-y12; f5 BY f1-f4; f6 BY y13-y15; f7 BY y16-y18; f8 BY y19-y21; f9 BY f6-f8; f10 BY y22-y24; f11 BY y25-y27; f12 BY y28-y30; f13 BY f10-f12; f13 ON f5 f9; f5xf9 | f5 XWITH f9; f13 ON f5xf9; OUTPUT: TECH1 TECH8; Linda K. Muthen posted on Tuesday, June 16, 2009 - 8:18 am This seems correct. Eulalia Puig posted on Thursday, October 29, 2009 - 12:42 pm Dear Linda or Bengt, I have a path model with several mediation variables. All dependent variables are continuous. I'd like to know whether the direct effect is smaller than the indirect effect(s) altogether - what test can I use and how do I enter it in Mplus? Thank you so much. Lali Bengt O. Muthen posted on Thursday, October 29, 2009 - 2:06 pm You can test if the direct effect is different from all the indirect effects using Model Constraint and parameter labels to express these effects and their difference. The difference then gets an estimate and a SE. This requires you to know how to express the effects in terms of the parameters. Eulalia Puig posted on Thursday, October 29, 2009 - 3:08 pm Thanks, Bengt. However, I don't know how to express the effects in terms of the parameters, so that I can do the Wald test. Would you be able to offer some input on that? Thank you again, Lali Bengt O. Muthen posted on Friday, October 30, 2009 - 8:38 am You should read David McKinnon's mediation book. Essentially, mediation concerns indirect effects quantified as a*b, where a and b are slopes. So your Mplus model should label a and label b (see UG) and use Model Constraint to compute a*b. It sounds like your model has several such indirect effects which you can then add up and compare to the direct effect. You can check that you do the effects right by comparing to the Model Indirect results. I'm afraid I can't help you more than that. If not sufficient, you have to consult a more experienced Mplus user. Eulalia Puig posted on Friday, October 30, 2009 - 5:10 pm Thanks Bengt. I'm finding tons of things through this reference, so I think I know how to do it. dan berry posted on Sunday, November 01, 2009 - 3:17 pm Dear Mplus folks, I’m looking to fit a three-way moderation model in which the growth factor (i.e., LGM) effect on a distal outcome varies as a function of observed X1, which should, in turn, vary as a function of observed X2 (i.e., X1*Slope*X2 I’m unclear if there’s a way to do a 3-way interaction using the “xwith” function. Is it possible (or sensible) to use the 2-way interaction variable created in one “xwith” command and then use that created two-way interaction term in second “xwith “ command where it is used as one of the variables to be interacted (i.e., does this create a 3-way). Are 3-way interactions possible, when one variable of the 3 is latent? For example: Analysis: algorithm =integration; type= random; Model: int slope | X3@0 X4@1 X5@2; X1bySLOPE | X1 xwith slope; X2bySLOPE | x2 xwith slope; THREEWAY | X1bySLOPE xwith X2; Y1 on X1 X2 int slope X1X2Crosproduct ! from define statement In my case, X2 is binary. Might there be a way to tweak a KNOWNCLASS statement to let the 2-way interaction vary across X2 classes? Thanks for any thoughts. Linda K. Muthen posted on Monday, November 02, 2009 - 9:39 am You can do a three-way interaction as shown above. Or you can use the KNOWNCLASS option for the binary variable. Anonymous posted on Tuesday, December 01, 2009 - 6:20 pm Hi Linda: I am testing an interaction between two continuous latent variables. When I run the model I get an error message saying "RECIPROCAL INTERACTION". What does this mean? It seems this error comes up if the variables are categorical but my variables are continuous. Any suggestion will be helpful. Linda K. Muthen posted on Wednesday, December 02, 2009 - 6:23 am Please send the full output and your license number to support@statmodel.com. I need more information to answer this question. min soo kim posted on Friday, February 05, 2010 - 1:45 pm I'm trying to examine the interaction effect between two second-order factors on a second-order factor. I used OUTPUT: TECH1 TECH8; I'm wondeing if the estimates are standardized or not. Path coefficients were 1.008 for x1, -0.124 for x2, and -.319 for the interaction term. Are those standardized? Also, chi-square statistics, CFI, NFI, RMSEA, and SRMR were not reported, right? How can I assess the overall model fit? Linda K. Muthen posted on Friday, February 05, 2010 - 3:11 pm The estimates in the results section of the output are not standardized. With TYPE=RANDOM, chi-square and related test statistics are not developed. In this case, nested models are compared using -2 times the loglikelihood difference which is distributed as chi-square. Ewan Carr posted on Thursday, May 13, 2010 - 3:57 am Quick question: if a moderating variable has a significant effect, but worsens the overall model fit considerably, what should I do? The model fit without the moderator is > 0.968. When the moderator is included it drops to 0.800. The modification indices suggest I should allow for relationships between the moderator and other predictors in the model, but I'm not sure if that makes sense. Thanks in advance. Linda K. Muthen posted on Thursday, May 13, 2010 - 8:18 am It is not clear what you are doing. Please send the full output and your license number to support@statmodel.com. Syd posted on Friday, July 30, 2010 - 1:56 am I’m testing a model similar to that outlined in Example 5.13 in the Mplus user guide (with direct and interaction effects of two continuous latent variables f1 and f2 on a mediating variable f3, which has a direct effect on a DV f4). My question is, how can we test mediation in this case? If I may ask some sub-questions: 1. Is it possible to test moderated mediation without creating dummy variables for the different levels of the moderating variable? For example, is it possible to test for mediation in the case of a continuous LV interaction just like I would for mediation for any other latent variable? I.e., may a method such as causal steps method (Judd & Kenny, 1981; Baron & Kenny, 1986) be used by treating the interaction term like any other LV, and look at the direct and indirect effects? 2. If the answer to question 1 is yes, then, how do I compare the size of the loadings in the case of interactions, since standardized loadings are not available for TYPE=RANDOM, ALGORITHM= 3. If the answer to question 1 is no, then to test mediation at 2 levels of the moderator (mean +- 1SD), is it possible to obtain the mean and standard deviation of the moderating variable and create a dummy variable in Mplus that codes the mean+-1SD values of the moderator as 0 and 1? Thank you, Linda K. Muthen posted on Friday, July 30, 2010 - 9:38 am 1. Yes. 2. Use the raw coefficients. Syd posted on Friday, July 30, 2010 - 1:38 pm Thank you for the quick response Linda. Is it possible for you to point me towards any references that would support the use this approach for testing moderated mediation? That is, any references suggesting that we can indeed treat the interaction term as any other LV. Bengt O. Muthen posted on Saturday, July 31, 2010 - 4:53 pm I think that judging the significance of indirect and direct effects from latent variable interactions is alright from basic statistical principles. I am not familiar with the "causal steps method (Judd & Kenny, 1981; Baron & Kenny, 1986)" that you mention. To me, mediation means that the indirect effect is significant, be it from a main effect or an interaction effect. I have not seen writings on this particular issue. Syd posted on Saturday, July 31, 2010 - 10:39 pm Hi Bengt, Thank you for the explanation. Please ignore my comment about the causal steps method; it was just a potential solution I was considering to test whether a significant indirect effect exists. As you note, I'm basically trying to establish the significance of the indirect effect of an interaction (f1xf2) on a dependent variable (f4) through a mediating variable (f3). As the MODEL INDIRECT command is not available when using TYPE=RANDOM, ALGORITHM=INTEGRATION, I was not sure about how to test the significance of the indirect effect, even though the size effect can be calculated. So, I will greatly appreciate any guidance you may provide regarding how I can test the significance of the indirect effect of f1xf2 on f4 via f3. I apologize if this is an elementary question, but having looked at the literature in my field, I couldn't see references to any articles that would suggest a method. Thank you Linda K. Muthen posted on Sunday, August 01, 2010 - 9:56 am You can use MODEL CONSTRAINT to define a NEW parameter that is the product of the regression coefficients for f3 on the interaction and f4 on f3. See Slide 170 of the Topic 3 course handout for a suggestion of how to interpret an interaction. Syd posted on Monday, August 02, 2010 - 1:29 am Thank you very much for the information Linda. It worked perfectly. Maria Clara Barata posted on Tuesday, September 28, 2010 - 4:43 pm I was wondering if you could help me figure out this puzzle. I am testing a structural relationship between one latent predictor at time 1 and two latent outcomes at time 2, controlling for the time 1 values of the outcomes. These data come from an intervention study, but because I am only interested on the developmental patterns, I am controlling for intervention effects. The problem is when I add the intervention variable as a predictor of the time 2 outcomes, the model also estimates a series of correlations between all time 1 predictors and the intervention variable. This does not make sense in my model because the time 1 predictors were assessed prior to random assignment to the intervention, so whatever correlation there is between intervention and the time 1 predictors is just spurious. However, when I try to set these correlations to be 0, the model does not converge. Interestingly when I fit a similar model, but using the main effect of the intervention and the interaction of the latent predictor and the observed intervention variable, it no longer presents those strange correlations I did not ask for and do not need. Can you please help me figure out how to make sense or get rid off these spurious correlations? Ultimately they make my models not nested and require me to account for missing degrees of freedom that make no sense. Thanks, Clara Linda K. Muthen posted on Tuesday, September 28, 2010 - 5:43 pm There are certain defaults in Mplus. If you do not want a default covariance/correlation, fix it to zero, for example, y1 WITH y2@0; Maria Clara Barata posted on Thursday, September 30, 2010 - 7:48 am Dear Linda, Thanks for your suggestion. As I explained in the previous posting, I had tried setting those spurious correlations to zero with this code: EF1 with interv1@0; EM1 with interv1@0; EL1 with interv1@0; However, when I do that I get the following message: CONDITION NUMBER IS -0.197D-16. PROBLEM INVOLVING PARAMETER 54. Should I still trust the results of this model and compare the fit with other nested models? Also should i interpret from your words that there are different defaults for main effects models versus models with an interaction? Because when I fit a similar model, but using the main effect of the intervention and the interaction of the latent predictor and the observed intervention variable, it no longer presents those strange correlations. Linda K. Muthen posted on Thursday, September 30, 2010 - 2:23 pm Defaults vary and change when the model changes. Please send your output and license number to support@statmodel.com. Christine Davis posted on Monday, December 06, 2010 - 10:39 am I am trying to run an SEM with an interaction while testing a separate indirect effect. So I initially tried to use the two "pieces" using type=random and estimator=ml (which, of course, doesn't work) Rel_15 ON Rel_6 Peer_Rel; Rel_6 ON C_REL_B ; relxpeer | rel_6 xwith peer_rel; Rel_15 ON relxpeer; Rel_15 VIA Rel_6 C_REL_B; Do you have any suggestion re: the possibility of running this model and the syntax that is needed? Thank you. Linda K. Muthen posted on Monday, December 06, 2010 - 10:49 am You can't use MODEL INDIRECT with TYPE=RANDOM but you can use MODEL CONSTRAINT to define the indirect effect. See the user's guide. Christine Davis posted on Wednesday, December 08, 2010 - 10:58 am Thank you for your time and instruction. Since Monday, I have consulted the user's guide, but unfortunately, I am such a newbie, I'm not sure about the syntax. And unfortunately, others in my department are not familiar with what I want to Can I clarify what I think I am suppose to do? In order to constrain the "indirect effect" (Rel_15 VIA Rel_6 C_REL_B), do I need to create a new variable, the use that new variable in the model constraint syntax? If so, what is the syntax for both pieces, i.e. creating the "new" indirect effect variable and then constraining said indirect effect variable? Linda K. Muthen posted on Wednesday, December 08, 2010 - 6:11 pm You need to label the two parameters that make up the indirect effect in the MODEL command. Then create a NEW parameter in MODEL CONSTRAINT and is the product of the two components of the indirect y2 ON y1 (p1); y1 ON x (p2); NEW (ind); ind = p1*p2; Karen Offermans posted on Wednesday, February 16, 2011 - 1:55 am Dear dr. Muthen, We tested an ordinal logistic structural equation model (outcome variable with 3 categories) and found two interaction effects (between dummy variable and continuous variable). I was wondering if it is possible to probe / plot these interactions in Mplus? If not, do you have any other suggestions for post hoc analyses of these interactions(MODPROBE is unfortunately not possible, only for OLS or dichotomous outcome measures). Hope you can help me. Bengt O. Muthen posted on Wednesday, February 16, 2011 - 10:09 am Why not plot the probability as a function of the continuous variable at the different dummy variable values? So with a binary dummy variable you get 2 plots in one graph. Karen Offermans posted on Thursday, February 17, 2011 - 1:25 am Dear dr. Muthen, Thank you for your reply. The reason why we want to plot the continuous variable is because we wanted to test theoretically whether this variable modifies the dummy variable (and not the other way Is it possible to plot interactions in Mplus? Bengt O. Muthen posted on Thursday, February 17, 2011 - 4:07 pm There is no interaction plot feature in Mplus. I don't think you can tell if the continuous variable modifies the dummy variable or the other way around. But you can certainly see/plot how the DV probabilities at the 2 different dummy values change over the values of the continuous variable. Imagine 2 bars (one for each dummy value; height being the probability) situated at several values for the continuous variable. Marie Taylor posted on Wednesday, February 23, 2011 - 4:51 pm I am sorry if I am not posting my question in the correct spot. I am trying to model the relationship between beliefs (at Time 1) and interracial contact (at Time 1, 2, & 3). Currently, I have: T1RaceContact ON belief; T2RaceContact ON belief; T3RaceContact ON belief; Am I right that this only shows tests of linear relations between T1 beliefs and T1, 2, and 3 race contact, but doesn't depict the relation between T1 belief and the change in race contact from T1 to T2? Or is that in the model, given that T1 is in the model? If it is not, how would I include it? Thank you so much! Bengt O. Muthen posted on Wednesday, February 23, 2011 - 4:56 pm No, this model does not show how belief influences the change in racecontact. You would have to specify a change model for racecontact and let belief influence that change. For instance, using a growth model you could let belief influence the slope growth factor. sunnyshi posted on Wednesday, April 13, 2011 - 3:54 pm Dear Dr. Muthen, I am running an SEM using survey data. My questions are: 1. I try to test the effect of moderator M on the relationship of A and B. A,B, M are latent constructs. The model command: AxM| A XITH M; B ON A M AxM; Just from the model command, how can Mplus know which one is the moderator? A or M? Could I use just the following command: AxM| A XITH M; B ON A AxM; 2. To get the fit of the model, I run a model without the interaction term of AxM and do the chi square difference test. Both the models using MLR estimator. However, I could not obtain the scaling correction factor for the model with the latent interaction term. How should I conduct the test under this situation. 3. To conduct the chi square difference test, I have to assure both models (the models with and without the interaction term) to use the same information matrix. Am I right? Bengt O. Muthen posted on Thursday, April 14, 2011 - 7:59 am 1. An interaction model like this is symmetric in A and M. That is, you can't say that one and not the other is the moderator - your substantive reasoning suggests which way you want to present it, but statistically there is no difference. 2-3. A simpler approach is to just look at the significance of the interaction term. See also Mooijaart & Satorra (2009). On insensitivity of the chi-square model test to nonlinear misspecification in structural equation models. Psychometrika, 74, 443-455. This also shows an example on page 445 of how the R-square contribution for the latent variable interaction is computed. For general covariance algebra with latent variable interactions, see the appendix of Mooijaart & Bentler (2010). An alternative approach for nonlinear latent variable models. Structural Equation Modeling, 17, 357-33. Jen-Hua, Hsueh posted on Sunday, May 29, 2011 - 11:03 pm Dear Dr. Muthen, If I use two enxogenous variables and one endogenous variable to examine the interaction effect with LMS method in Mplus, for example: f1 BY y1-y3; f2 BY y4-y6; f3 BY y7-y9; f1xf2 | f1 XWITH f2; f3 ON f1 f2 f1xf2; The degree of freedom of structural part is -1 (even if the degree of freedom of overall model is positive), namely, underidentified. Do you think such an interaction model is problematic? Bengt O. Muthen posted on Monday, May 30, 2011 - 7:28 am No, because you don't use information from only the covariance matrix (2nd-order moments), but higher-order information - namely the raw data. Chi-square testing in the usual way is not available with latent variable interactions. Jen-Hua, Hsueh posted on Monday, May 30, 2011 - 8:17 am Thanks for your response much! Best regard Bobbi St. Clair posted on Wednesday, June 15, 2011 - 10:03 am Since the use of multiplicative scores can be problematic to measure an interaction between ordinal-level and polytomous variables, is there a preferred approach in creating this term in Mplus? I'm interested in the moderating effect of a 7 category ordinal variable on a 5 category polytomous variable. Bengt O. Muthen posted on Wednesday, June 15, 2011 - 5:19 pm I can't think of a preferred approach. Unless you have strong floor or ceiling effects, perhaps you can simply treat them as continuous and use the product. Otherwise, perhaps you can use substantive reasoning to create a binary dummy variable from the 7-cat vble and let that moderate (or a couple of such dummies). Stacey Conchie posted on Saturday, June 18, 2011 - 12:31 am I'm interested in testing a moderated-mediated model in which an x - me - y relationship is moderated by MO. I want to test two theories, one in which MO affects the entire relationship, and one in which MO affects only part of the relationship, say between x-me. I've read the previous posts on how to create interaction terms in Mplus. However, I wondered (and please forgive what may be a silly question) if the interaction terms can be created outside of Mplus and imported in as separate LVs? I'm just wondering if this would avoid any computational problems with the integration algorithm (I have all continuous measures) and if this would give 'typical' model fit statistics (RMSEA, CFI); although I appreciate that there are other ways to test the fit of a model. Linda K. Muthen posted on Saturday, June 18, 2011 - 11:51 am Is MO a latent variable or an observed variable. Stacey Conchie posted on Sunday, June 19, 2011 - 12:52 am It's a latent variable. However, in a second set of analyses I hope to do, my moderator will be an observed variable. Linda K. Muthen posted on Sunday, June 19, 2011 - 9:27 am I would use the XWITH command to create the interaction unless the latent variable has several very good indicators. Factor scores are not well-determined when there are few indicators. Stacey Conchie posted on Sunday, June 19, 2011 - 1:46 pm Thanks Linda. I wondered if I might ask two further questions (or rather, confirm that my understanding from reading your responses to related topics is correct)? 1. The analysis I described will produce unstandardized estimates (e.g, y on x), but not standardized estimates. Because the analysis is done as a single group, it is not possible to estimate the mean and sd for individual 'factors' in the model (e.g., x, me), and therefore, not possible to compute standardized estimates (from the unstandardized values)? 2. Simple slopes is not possible in Mplus (this one I'm less sure on)? However, it is possible to estimate most necessary statistics (covariance matrix, SE, etc.) to conduct this analysis outside of Mplus (e.g., through Kris Preacher's calculator for 2-way interactions). However, it wasn't clear to me if Mplus computes a constant at the model level, rather than the individual parameter level? Sorry to ask further questions. Mplus is new to me and I want to make sure that my understanding is correct. Stacey Conchie posted on Monday, June 20, 2011 - 5:42 am Linda, I now have a clearer understanding on the two issues that I posted on yesterday so please don't feel the need to reply. From what I've read it seems that the interaction effect can be explored by using the MODEL CONSTRAINT command. I understand how this is achieved, however, I'm unsure how to extract the SD values to enter into this command (if I wanted to see the slopes at +/- 1SD of the mean of the moderator)? Linda K. Muthen posted on Monday, June 20, 2011 - 1:23 pm See the FAQ on the website with the title # The variance of a dependent variable as a function of latent variables that have an interaction is discussed in Mooijaart and Satorra. Peter Kinney posted on Thursday, August 18, 2011 - 1:23 pm Dear Linda, at the moment I am confused with my model. Would be great if you could help me out. First of all I am using MLM, have 300 raw data (with 3 items each) and use for all my analysis the output “standardized tech4” In addition to my “normal model” which looks like as the following: x by(x1-x3) y by(y1-y3) c by(c1-c3) d by(d1-d3) e by(e1-e3) c on x C on y d on c e on d Now I wanna do the following: 1) "f" (is ordinal 1 or 2 :does a multi group analysis make sense when I would like to examine the impact on "e" and "d" (by f)? (I am not interested in the impact on x,y,c) (and also see the data which would be then such 150 each)Would it be better to analyze an indirect effect (ind) f1 f2 ind d f1 f2 ind e? 2)to examine the moderate effect g (g1-g3)with influence between e and d. Would a mod.regression analysis be correct when I write under “e on d”: g with e g with d? Thank you so much for your support Bengt O. Muthen posted on Friday, August 19, 2011 - 6:25 pm 1) I would simply use f as a covariate influencing the DVs you want. 2) If g is a factor that moderates e on d, you would create an interaction: inter | G XWITH d; and regress e on both d and inter. Peter Kinney posted on Thursday, August 25, 2011 - 3:14 pm Dear Bengt, wow! thank you so much for your fast reply! I tried your advice. to 1)"f" is a bivar. variable (0 and 1) all other variables are multivar.: --> when I do "d on f" I get a significant negative value - did the programme take automatically 0 (yes)? So I know that if the answer was "yes" it has an neg. impact? (so I do not have to give any other comment under "names are")? to 2) unfortunately it did not work. I also read the Example 5.13 in your book. But I guess it is s.th else. So I did the following in addition to my normal model (because my moderate variable is G and I wanna have the moderate effect between d and e): INTER |G XWITH D; E ON D E ON INTER; But it did not work. So I included under ANALYsis = RANDOM; ALGORITHM = INTEGRATION; (Do I need this comments really?)but still did not work. AND I would like to have the ESTIMATOR = MLM and it still did not work?? Maybe you have an advice for me? Sorry, I feel just stupid at the moment. Thank you again so much. Peter Kinney posted on Thursday, August 25, 2011 - 3:44 pm Well it does work when I do it as I described (Analysis=Random; Algorithm = Integration; Estimator = MLM;) The programmes calcuates s.th., but it looks very wrong (very slow, black backround (C://windows/system32/cmd.exe)... Wen-Hsu Lin posted on Monday, October 31, 2011 - 1:19 am After reading some of the posts, I am still confused as what to use to evaluate model fit? Is declaring the significance of the multiplicative term (xwith) enough? Thank you. Linda K. Muthen posted on Monday, October 31, 2011 - 4:41 pm Yes, the significance of the interaction is enough. Chi-square is not valid with a latent variable interaction because means, variances, and covariances are not sufficient statistics for model estimation. See the FAQ on the website: The variance of a dependent variable as a function of latent variables that have an interaction is discussed in Mooijaart and Satorra Wen-Hsu Lin posted on Friday, November 11, 2011 - 10:44 pm Hi, Linda: The xwith command was executed with type = random, does that mean the coefficient of the interaction term is like the random effect in the multi-level modeling? The explanation of the coefficient is still like the one we will use for the regular interaction term in the OLS? Thank you. Linda K. Muthen posted on Saturday, November 12, 2011 - 9:03 am The latent variable interaction is a fixed effect. The interpretation is the same as a regular interaction. Wen-Hsu Lin posted on Saturday, November 12, 2011 - 7:38 pm Thank you Linda, one follow up, can I use type = imputation while I am running inter| A xwith B crime on inter? I did run it but the results wont give me significant test like when I ran with one of the five datasets. Linda K. Muthen posted on Sunday, November 13, 2011 - 7:22 am I just ran a similar example and got a significance test. Please send your output and license number to support@statmodel.com so I can see why you did not. Martina Gere posted on Sunday, February 05, 2012 - 11:46 am Dear Dr. Muthen, I am running a model with 3 factors (all continuous indicators), where factor 1 and 2, as well as the interaction of factor 1 x 2 predict factor 3. f3 ON f1 f2 f1xf2; I am using XWITH to create the interaction term, with TYPE=RANDOM and ALGORITHM=INTEGRATION. The model indicates that factor 2 is a significant moderator in the relationship between factor 1 and 3. Because of TYPE=RANDOM the output does not provide the usual model fit indices. You suggested earlier that in order to assess model fit, one should compare nested models using 2 times the loglikelihood difference. I ran the model without the interaction term, only including the regression paths f3 ON f1 f2. All model fit indices for this model are good (Chi-Square Test of Model Fit, p = 0.1209, RMSEA = 0.031, CFI = 0.977). Loglikelihood increases from -1405.365 in the model without interaction to -1401.301 in the model with the interaction. However, the 2 times loglikelihood difference indicates that the models are not significantly different. 1. Does that mean the nested models have equally good model fit? Can i report the model fit from the model without the interaction, and state my model including the interaction has equally good model 2. That means including the interaction does not improve my model significantly? Is this an argument against my moderator hypothesis? Linda K. Muthen posted on Monday, February 06, 2012 - 1:56 pm Two times the difference between your two loglikelihoods is about 8 which is significant for one degree of freedom. This should agree with the z-test for the interaction in the model where the interaction was included. Martina Gere posted on Tuesday, February 07, 2012 - 6:49 am Thanks for your quick reply, that was helpful. Does that mean my moderation model is significantly better than the model without the interaction term? Is it appropriate to report model fit for the model without the interaction term and then state that the model including the interaction has a good model fit too (or according to loglikelihood a significantly better model fit)? Linda K. Muthen posted on Tuesday, February 07, 2012 - 5:29 pm This means that the interaction is significant. Reporting the fit of the model without the interaction is probably the most you can do. See the following FAQ on the website for further information: The variance of a dependent variable as a function of latent variables that have an interaction is discussed in Mooijaart and Satorra Chris Bruell posted on Wednesday, February 08, 2012 - 8:28 pm Good evening, I'm attempting to look at the interaction effect of two latent constructs on a third latent construct: f1 by y1-y5; f2 by y6-y17; f3 by y18-y35; f1 on f3; f2xf3 | f2 xwith f3; f1 on f2xf3; I'm wondering if it's ok if I first save the factor scores (running a CFA) and then use the define function to run the regressions that way. Thanks! Bengt O. Muthen posted on Wednesday, February 08, 2012 - 10:22 pm Using estimated factor scores in regressions is usually not as good as estimating the model you specified. Martina Gere posted on Thursday, February 09, 2012 - 4:53 am A follow-up question on my moderator model (see february 5th). How do I calculate the slopes of f3 ON f1 for different values of my moderator f2? Q1: Can I use the regression coefficients and variances of the latent variables in the output of my interaction model in order to calculate the slopes for mean, -sd, +sd of the moderator by hand? (e.g. for slope when moderator +1sd: unstandardized regression coefficient of f1 + unstandardized regression coefficient of f1xf2 * sd of f2) Q2: Can i test whether slopes are significantly different in this case? How? Q3: Or can I run the model in MPlus while setting different means for f2 (0, +1sd, -1sd) and observe the resulting regression coefficients for f1? If yes, how do I set a mean of a latent variable? Bengt O. Muthen posted on Thursday, February 09, 2012 - 1:59 pm See our Topic 3 short course handout, slides 164-171. Martina Gere posted on Friday, February 10, 2012 - 6:31 am Thank you. I looked at the example in your handout, and there you are calculating different slopes for the mean, +1SD, -1SD of the moderator using an equation with standardized regression coefficients. However, running the interaction model with TYPE=RANDOM, the standardized option in the output is not available. Can I use unstandardized regression coefficients and calculate the standard deviation of my moderator variable based on the variance in the output? Linda K. Muthen posted on Friday, February 10, 2012 - 9:54 am You would use the unstandardized as shown in the course slides and then standardize as also shown using the variance from TECH4. Martina Gere posted on Saturday, February 11, 2012 - 3:02 am When I run my moderator model with TYPE=RANDOM the output tells me that TECH4 is not available for TYPE=RANDOM. Do I use the variances from the simple model before adding the interaction? If yes, I think I understand how to calculate standardized regression coefficients for both independent variables. But how do I calculate the standardized regression coefficient for the interaction of my latent variables? Could you point me to the page in the handout where this is described? I'm sorry for all the questions. I'm new to Mplus. Bengt O. Muthen posted on Saturday, February 11, 2012 - 4:36 pm See slide 170 of Topic 3, which shows that the interaction is broken up into its components - see the "Unstandardized" formula for i and mthcrs7. To standardize the expression you multiple both numbers by the SD of each of i and mthcrs7. You find the variances corresponding to those SDs in the output. Joseph E. Glass posted on Sunday, May 20, 2012 - 9:25 am I have follow-up questions about slide 170-171 of topic 3 (text is pasted here, 3 questions follow). s = 0.417 + 0.087*i + (0.045 – 0.047*i)*mthcrs7 Standardized with respect to i and mthcrs7 s = 0.42 + 0.08 * i + (0.04-0.04*i)*mthcrs7 1. Is the intercept term "a" simply rounded (and unstandardized)? 2. Is "a" unstandardized because you are attempting to calculate an unstandardized "s"? 3. If I wanted to "s" to also be standardized, would I use the StdYX equation to standardize all coefficients with respect to i, s, and mthcrs7, yet only involve i, s, or mthcrs7 in the StdYX formula if they are relevant to the coefficient being standardized? Thank you, Linda K. Muthen posted on Monday, May 21, 2012 - 11:25 am 1. Yes. 2. Yes. 3. This would be difficult to do. You would need to divide all coefficients by the standard deviation of s. TECH4 is normally where you would find this but it is not available with TYPE=RANDOM. Joseph E. Glass posted on Thursday, May 24, 2012 - 1:09 pm Dear Linda, Thank you for your response. I have another question. To standardize the coefficients within the moderator function, which are 0.045 and –0.047 above, do you just multiply them by the standard deviation of the moderated variable (mthcrs7)? Joseph E. Glass posted on Thursday, May 24, 2012 - 6:16 pm Dear Linda- please disregard that last question. Having the web note 6 dataset available to replicate the analyses of Topic 3 slides 165-171 was key to help me understand how to graph an interaction from XWITH. Also, web note 6 described that these calculations are easier if the moderating latent variable is standardized to have a mean of 0 - that was very helpful too. Thanks for these. Minor suggestion- the excellent "Latent variable interactions" FAQ would be more reachable to beginners like me if it included code/output and concrete examples, such as those mentioned above. Thanks as always! Marie-Helene Veronneau posted on Friday, July 06, 2012 - 8:25 am I would like to use "gender" as a predictor in a path model that includes many other observed variables. I am also interested to know if path coefficients among the variables differ across gender. I thought about two strategies: (1) To run a multiple group analysis using "gender" as a grouping variable to test the original model from which gender would be excluded. (2) To create interaction variables (varXgender) for all of the paths that are hypothesized to differ across genders. I would prefer the first strategy, but because the model would not be exactly identical to the original model (i.e., gender would be excluded), I am not sure I can do that. Thanks for your help. Linda K. Muthen posted on Friday, July 06, 2012 - 11:12 am Interactions can be assessed using multiple group analysis or creating an interaction term. In multiple group analysis, an interaction exists if, for example, a regression coefficient is different for the two groups. This can be assessed by chi-square difference testing or using MODEL TEST. Bridget Fredstrom posted on Monday, July 23, 2012 - 12:29 pm I am working on a path analysis where I have 2 latent factors (made up of categorical variables) as my DV's and 3 continuous variables as my IV's. I am interested in doing a multiple group analysis based on gender - and I know how to write that into Mplus. But I also want to look at a continuous puberty score as a moderator from each IV to each DV. Here is the syntax I have so far... f1 by x1 x2 x3 x4 x5 x6; f2 by x7 x8 x9 x10 x11 x12 x13; f1 on teach1 teach2 teach3; f2 on teach1 teach2 teach3; How do I write in the moderator information, using P1 as the variable name? Bengt O. Muthen posted on Tuesday, July 24, 2012 - 8:03 am You use DEFINE to create interaction variables such as p1*teach1 and then include those new variables in the ON statements. NI YAN posted on Monday, October 01, 2012 - 5:14 pm Dear Dr.Muthen, I was trying to use path analysis to demonstrate a significant indirect path (A->B->C). I have two questions about choosing from estimators of MLR and MLMV. First, I was wondering what is the exact difference between MLR and MLMV estimators. Because I know my endogenous variable C is skewed, I chose MLMV as the estimator type. The indirect path was significant. However, when I used MLR, the indirect path could not be detected any more. I did not understand what made the results different. Second, in above responses, you mentioned that "MLMV cannot be used with missing data". I compared the number of observations from both models using MLMV and MLR, and they are completely the same. Does this suggest MLMV could take care of missing data? Looking forward to hearing back from you. Linda K. Muthen posted on Tuesday, October 02, 2012 - 10:21 am I would use MLM or MLR for continuous variables. Both are robust to non-normality. If there is a difference in significance using MLM and MLR, I would be cautious in interpreting the results. Georgia Macnevin posted on Sunday, November 18, 2012 - 4:14 pm Dear Dr Muthen, I'm wanting to run an analysis with the following variables: IV's: X1 (between-subjects, categorical 2 levels), X2/Y1 (attractiveness high/low, within-subjects variable with 2 levels, also a 2 continuous depended variables) X3/Y2 (status high low, within-subjects variables with 2 levels, also two continuous dependent variables) Mediator: Y3, Y4, Y5, Y6 (all continuous) DV: X2/Y1 (dependent variable, also a within-subjects variable) X3/Y2 (dependent variable, also a within-subjects variable) participants had to rate 4 photos on their desire to date the person (high-attractiveness and high status, high-attractiveness and low status, low-attractiveness and high status, low-attractiveness and low status) which made up the within-subjects/dependent variables I want to set up a pathway analysis that tests whether X1, X2, X3 moderate the participants desire to date the photos. Additionally I would like to see if Y3, Y4, Y5, Y6 mediate the effect of X1 on the within/dependent variables. Is it possible to set up this pathway? If so would you please be able to help me set it up? I've looked through the manual and found the within and between commands but I'm still unsure how to use them. Thank you for your time. Linda K. Muthen posted on Monday, November 19, 2012 - 10:29 am The WITHIN and BETWEEN options are for nested data. Do you have nested data, for example, students nested in classrooms? Georgia Macnevin posted on Monday, November 19, 2012 - 3:37 pm Thanks for getting back to me so fast Dr Muthen. I really appreciate it. I don't think I have nested data. It's a basic experimental design that you could run a mixed ANOVA on if it wasn't for the mediator I want to include. Can you suggest an appropriate analysis for this kind of data? I was thinking of just treating the within subject variables a their own separate DV's and testing for mediation effects on each of them. Thank you again Linda K. Muthen posted on Tuesday, November 20, 2012 - 11:53 am Within subject variables are repeated measures of the same variable. You could do the mediation model for each time point separately. Georgia Macnevin posted on Tuesday, November 20, 2012 - 6:01 pm Thanks again Dr Muthen. You've been so helpful. I really appreciate it. Kristine Olson posted on Friday, March 01, 2013 - 1:02 pm I am getting the following error when running a moderation (see below). Can you please advise where I have made my error? Thank you very much for your expert advise. DMHRSc BY dmhrs3; DMHRSw BY dmhrs4; WFCT BY wfct1 wfct2 wfct3; WFCS BY wfcs1 wfcs2 wfcs3; FWCT BY fwct1 fwct2 fwct3; FWCS BY fwcs1 fwcs2 fwcs3; GRT BY grt1 grt2 grt3 get4 grt5 grt6 grt7 grt8 grt9; WFCT ON DMHRSw; WFCS ON DMHRSw; FWCT ON DMHRSc; FWCS ON DMHRSc; HWxGRT | DMHRSw XWITH GRT; HCxGRT | DMHRSc XWITH GRT; WFCT ON HWxGRT; WFCS ON HWxGRT; FWCT ON HCxGRT; FWCS ON HCxGRT; ITERATIONS = 1000; CONVERGENCE = 0.00005; Linda K. Muthen posted on Friday, March 01, 2013 - 1:27 pm Please send your output and license number to support@statmodel.com. Alexander Kapeller posted on Sunday, March 03, 2013 - 4:45 pm considering an interaction model: sow on bankloy (b1); sow on umsat; basa_si | umsat xwith bankloy ; sow on basa_si (b2); I calculate the margianl effect of the focal variable bankloy at the moderator (=umsat) value "1" in the model constraint: ibs_1= (b1+b2*1); IBS_1 0.462 0.418 1.104 0.270 1) Does the output s.e. error 0.418 multiplied with +/- 1,96 result in the corresponding confidence intervall? 2) is the p-value (0.27) for the marginal value calculated via delta method? thanks in advance Bengt O. Muthen posted on Sunday, March 03, 2013 - 5:40 pm Yes and yes. Jackson posted on Tuesday, March 12, 2013 - 4:30 pm If I want to look at the moderation effect of variable M on relations between X and Z and Y and Z. Should I create two interaction terms (X*M and Y*M) and then regress Z on X, Y, XM, and YM? Is there anything particular that I should be doing other than this if so? Linda K. Muthen posted on Wednesday, March 13, 2013 - 12:37 pm Is one of your variables, for example, z a mediator. Or is your model without the interactions: z ON x y; matthew finster posted on Monday, April 08, 2013 - 8:10 pm Is it possible to test an interaction between a continuous latent and a continous observed variable on a dichotomous outcome? Thank you Linda K. Muthen posted on Tuesday, April 09, 2013 - 9:42 am Yes, you would use the CATEGORICAL option, maximum likelihood estimation, and the XWITH option. matthew finster posted on Wednesday, April 10, 2013 - 11:53 am So if I am interested in testing a interaction effect of x48 between f3 and u1 is this the correct input? Variables: names are X1 X7 X8 X9 X11 X47 X14 x22 x24 x25 x33 x34 x35 x36 x37 x48 u1; categorical is u1; analysis: type=random; model: f1 by x47 x1 x7 x8 x9 x11 x14 x22 x24 x25; f3 by x33 x34 x35 x36 x37; f3xx48 | f3 Xwith x48; f3 on f1; u1 on f3xx48; u1 on f3; Linda K. Muthen posted on Wednesday, April 10, 2013 - 1:25 pm That looks correct. You may also want to include u1 on x48 so you have both main effects and the interaction. Cecily Na posted on Friday, May 31, 2013 - 9:46 am Dear professor, I came up with a model in which a causes b, and b subsequently causes c. (a-->b-->c). I hypothesized that the fourth variable d moderates the link a-->b, as well as the link b-->c. Thus, d serves as a moderator twice in this model. Is this a reasonable model and testable in Mplus? If d serves as mediator twice (mediates between a-->b, and b-->c), would it be testable in Mplus as well? Thank you again for your generous help! Bengt O. Muthen posted on Friday, May 31, 2013 - 12:44 pm Yes, this is reasonable and doable. The easiest case is when d is categorical so you can do a multiple-group run. Johanna Seiz posted on Wednesday, June 26, 2013 - 11:42 am recently I calculated latent moderation analysis using the LMS approach. I am looking for a way to test the simple slopes - yet I do not know how to get/caculate the covariance between the latent predictor and the latent interaction term (which is needed for the slopes-tools, e.g. http://quantpsy.org/interact/mlr2.htm). Is there a way doing this in MPLUS? (since TECH 4 doesn't work with TYPE=RANDOM) Thank you in advance already! Bengt O. Muthen posted on Wednesday, June 26, 2013 - 1:08 pm See the FAQ "Latent variable interactions" at Emily Midouhas posted on Wednesday, July 10, 2013 - 3:43 am I am testing for group differences in a two-way interaction by using a multigroup model (boys vs. girls) comparing the constrained model where the two-way interaction (using the define command) was constrained to the model where all parameters were freed. The difference in the degrees of freedom for the two models, however, was 7 (not 1 as I expected). Why? What other parameters have been constrained by fixing the interaction? Do I need to constrain the main effects of the interacted variables as well? Jim Prisciandaro posted on Wednesday, July 10, 2013 - 11:33 am I was wondering if you could give any advice on adapting the formulas in "Latent variable interactions" for cases where there are two exogenous latent factors (and their interaction) predicting an observed outcome (either continuous or categorical)? I am most likely misunderstanding the document in various ways, but it seems like it should be a lot easier to get standardized coefficients and r-squared in such an example (that is, given that the mean and variance are fixed at 0 and 1, respectively, for the exogenous factors and that the mean and variance of the observed outcomes are directly obtainable). Linda K. Muthen posted on Wednesday, July 10, 2013 - 12:13 pm Please send the outputs and your license number to support@statmodel.com. Bengt O. Muthen posted on Thursday, July 11, 2013 - 2:31 pm Your example has the structure of (1) for which the eta-3 variance is in (16) combined with (17). The quantities involved are directly obtained from the estimated model. With those in hand you go through the standardization described in Section 1.4. Jim Prisciandaro posted on Friday, July 12, 2013 - 6:17 am Thank you Bengt. How about if the outcome (variable 3) is categorical (and denoted as such in mplus)? Bengt O. Muthen posted on Friday, July 12, 2013 - 1:36 pm The only thing affected is the residual variance of the DV, which is no longer a free parameter to be estimated. And it refers to an underlying continuous latent response variable behind your categorical outcome. Because you have latent variable interactions you are using ML and you can use either logit or probit link. With logit the residual variance is pi-squared/3 and with probit it is Brittany Solomon posted on Tuesday, August 20, 2013 - 7:44 pm Is there a way to test with a variable predicts correlated changes (e.g., duration predicts the correlated change in partner 1's satisfaction and partner 2's satisfaction (s1 and s2). Linda K. Muthen posted on Wednesday, August 21, 2013 - 9:39 am There is not a direct way to do this. You can approximately define the covariance as: f BY f1@1 f2@1; and regress it on f ON duration; Doris Winkelsett posted on Wednesday, September 18, 2013 - 12:58 am Dear Professor, searching for the Big Fish little Pond Effect, i´m running a doubly latent SEM with a model constraint command in the end. I'd like to analyze a moderating effect of a manifest metric Variable (akzclass) on the BFLPE. Is it possible to use the xwith command on a variable computed by model constraint? My input concerning the Model Constraint looks like this: Model Constraint: bflpe=b_betwn - b_within; bflpe_mod | bflpe xwith akzclass; Thank you very much! Linda K. Muthen posted on Wednesday, September 18, 2013 - 8:32 am No, this is not possible. MODEL CONSTRAINT estimates parameters. It does not create variables. H Ito posted on Monday, October 28, 2013 - 9:29 am I'm interested in whether a variable z moderates an effect of x on y. All of them are observed continuous variables. To my knowledge, there are two methods to address this issue. (1) Creating an interaction term using DEFINE command (xz = x*z; y ON x z xz;) (2) Random coefficient regression using TYPE=RANDOM option (s | y ON x; s y ON z;) I'm considering that the former provides an indirect test of moderation because it does not assume the causal direction of moderation (i.e., whether z moderates the effect of x on y or x moderates the effect of z on y) while the latter provides a direct test of moderation. In my data, the results of the two analysis were very similar, but information criteria (AIC, BIC, aBIC) indicated that the latter model is superior to the former model and to the random coefficient model where x and z were interchanged (s | y ON z; s y ON x;). From these results, can I assert that hypothesized direction of moderation (i.e., z moderates an effect of x on y) is supported? Bengt O. Muthen posted on Tuesday, October 29, 2013 - 6:00 pm I see these two approaches as the same when the residual variance of s in the regression on z is zero. H Ito posted on Tuesday, October 29, 2013 - 8:39 pm Thank you professor. Indeed, I confirmed that these models produce the same results when the residual variance of s is zero. Does this mean that the difference in information criteria between the original models (where the residual variance of s is estimated) can not be interpret? Bengt O. Muthen posted on Wednesday, October 30, 2013 - 9:27 am I think the model with a free residual variance for s is an interesting generalization of an interaction. It says that some of the moderation is unobserved. If it has a better BIC, one could choose to settle on this model. Melissa MacLeod posted on Tuesday, November 19, 2013 - 2:36 pm Hi, I am planning to use SEM to examine the relationship between x and y where y is a latent variable. I will also be running models to test three different moderator variables in this relationship. Is there a way to take into consideration confounders while testing moderation? I've seen posts regarding confounding for mediation and also dealing with them as moderators but I'm wondering if there is another way to handle it when the goal of my model is to test moderation? It is confusing if I have to treat my possible confounders also like my moderator. Thank you! Bengt O. Muthen posted on Wednesday, November 20, 2013 - 1:43 pm Sometimes confounders are simply extra covariates, whereas moderators create interaction terms. The difference is that the former influence intercepts and the latter slopes. Melissa MacLeod posted on Wednesday, November 20, 2013 - 4:50 pm Right so in the model would I just include those variables without specifying a relationship between them and any other variable? I'm just not sure what that syntax would look like. Thank you. Bengt O. Muthen posted on Wednesday, November 20, 2013 - 6:45 pm The confounders? Say you have a confounder Z, a mediator M, and a distal outcome Y. You say M ON Z (and the other predictors, including moderator terms); Y ON M Z (and the other predictors, including moderator terms); Melissa MacLeod posted on Thursday, November 21, 2013 - 11:43 am Perfect, thank you! Anke Schmitz posted on Monday, January 06, 2014 - 7:14 am Dear M&M, I am running a SEM with continuous and dichotomous categorical variables. My categorical observed variable X1 (an easy text version versus a difficult text version) is an independent variable. I have a continuous observed variables X2 (decoding ability) and latent variable X3 (knowledge) predicting a latent construct Y (text comprehension). My hypothesis predicts that both continuous variables X2 and X3 interact with the categorical variable X1 and moderate the relationship between X1 and Y in my model. Furthermore I want to include a mediating latent variable. Could you tell me which procedure is necessary? Thank you very much. Anke Bengt O. Muthen posted on Monday, January 06, 2014 - 8:17 am Combine UG ex 3.18 and 5.13. Note that only dependent variables should be declared categorical. Anke Schmitz posted on Monday, January 06, 2014 - 8:48 am Thanks for your reply. Can you help me once more what to to with my independent categorical variable? When I'm not allowed to include the variable into the model, where to put it? It is my manipulated factor which I cannot exclude from my study. Regards, Anke. Linda K. Muthen posted on Monday, January 06, 2014 - 11:19 am Why can't you include the categorical independent variable in the model? Anke Schmitz posted on Monday, January 06, 2014 - 11:52 pm Linda, I have to admit I don't know. I'm quite new to Mplus and SEM. Bengt wrote this (see note from 01-06-14 above). Can I include the binary observed ndependent variable to create interaction terms with latent continuous variables? Regards, Anke. Linda K. Muthen posted on Tuesday, January 07, 2014 - 6:09 am Bengt said not to put the independent variable on the categorical list. He did not say to not include it in the model. Yes, you can include it and use XWITH to create an interaction with a latent variable. Please look at the suggested examples. Nathan Alkemade posted on Monday, January 13, 2014 - 8:12 pm I am running a moderated mediation model (#3 following the Preacher, Rucker, Hayes paper). If I use the syntax you have provided on the website (thank you by the way). Can variable W be dichotomous? If yes should I make sure the two values are 1 and 2 rather than 0 and 1? I expect I would need to identify the variable as categorical? Bengt O. Muthen posted on Wednesday, January 15, 2014 - 11:04 am If W is a moderator it can be dichotomous and gives easy interpretation when scored 0, 1. It should not be declared categorical since it is an IV. Nathan Alkemade posted on Monday, January 27, 2014 - 7:39 pm Can someone point me to a good paper or handout that illustrates how to interpret the output from a Moderated Mediation analyses using the model three from Preacher, Rucker, and Hayes (2007). Is the interaction term the effect of the moderated mediation and the new parameter the indirect effects of the model at the specified value of the moderator? Kristopher J. Preacher posted on Tuesday, January 28, 2014 - 11:26 pm Andy Hayes lists some citations for examples of applied research articles that use each model in the 2007 paper, including Model 3, here. The 2007 paper is mainly about estimating and testing conditional indirect effects, or indirect effects at specific values of a moderator. If you want to determine whether there is evidence for moderated mediation (that is, whether the indirect effect varies across values of a moderator), there is a way to test that. In Hayes' book, he proposes that (for Model 3) the product of the 'a' path (X -> M) and the 'b3' path (the interaction of M and V predicting Y) quantifies this effect, and that a bootstrap confidence interval for a*b3 is a good way to test H0: a*b3 = 0. Hayes notes on that page that you can contact him for the working paper "An index and simple test of moderated mediation," which goes into more detail. Wang and Preacher (in press) also discuss such tests: Both papers provide Mplus code. Kristopher J. Preacher posted on Tuesday, January 28, 2014 - 11:39 pm I should add that Example 3.18 in the Mplus User's Guide can be used as a starting point for the code. 3.18 is closer to Model 2 from our 2007 paper, but can be modified to render Model 3: MODEL: y ON m(b1) x w m ON x(a); w mw WITH m; You can also add a new parameter to the MODEL CONSTRAINT section that corresponds to a*b3 mentioned above: MODEL CONSTRAINT: NEW(ind mm); Nathan Alkemade posted on Wednesday, January 29, 2014 - 4:37 pm Thanks very much Kris. I really appreciate the detailed response. EM posted on Wednesday, February 05, 2014 - 6:02 am Dear Prof. Muthen, in the document on latent variable interactions a formula is given on R-square (p.6). I was wondering if I can find all of the information needed to calculate R-square in the MPlus output. For example: I'm not sure where to find V(eta3)? Thanks in advance! Bengt O. Muthen posted on Wednesday, February 05, 2014 - 2:21 pm Equation (21) on page 6 shows that eta3 is a DV and as such the variance parameter that is estimated is the residual variance for eta3 (the variance of zhi3). You have to compute the total variance of eta3 using the instructions in this document. Yun Young Choi posted on Monday, February 10, 2014 - 8:45 am I am MD student, writing a dissertation. I want to use SEM to test a model with moderated mediation in which an x - me - y relationship is moderated by MO. I assumed that both x-me and me-y would be moderated by MO. I wonder I can analyze this model at once. If not, each path should be analyzed? (For example, whether x-me is moderated by MO. And then me-y is moderated by MO). MO is a continuous latent variable. I will look forward to hearing from you soon. Thank you Bengt O. Muthen posted on Monday, February 10, 2014 - 11:30 am I would first investigate whether the moderation of the mediation--> y relationship is significant. It most often isn't. And if it isn't you can go on and do the type of analysis shown in UG ex 3.18. Yun Young Choi posted on Saturday, February 15, 2014 - 2:42 am Thank you for your answer. I have a further question. I wonder below is reasonable and doable for testing moderated mediation(x1 and x2-->me-->y and mo moderates simultaneously moderates both the link x-->me and the link me-->y). I will look forward to hearing from you soon. Thank you! Linda K. Muthen posted on Sunday, February 16, 2014 - 11:13 am This looks okay. Back to top
{"url":"http://www.statmodel.com/discussion/messages/11/69.html?1383150441","timestamp":"2014-04-17T16:36:23Z","content_type":null,"content_length":"424299","record_id":"<urn:uuid:aace0350-9b4b-4739-bb82-eb6bb4e029eb>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Help with a school project, a statistical survey on education Replies: 15 Last Post: Jul 13, 2013 4:14 PM Messages: [ Previous | Next ] Re: Help with a school project, a statistical survey on education Posted: Jul 25, 2007 1:56 PM This sounds like an excellent and very ambitious project. Since it is a high school project I'd suggest that you don't get too complicated. As I thing Richard pointed out it can be very complicated but treat the project as a pilot - keep it simple and just remember to point out the limitations when you write the report :) Then like any good researcher you can ask for more funding for future reseach :) > iii)Is there some easily available (preferably free) software that > will let me do all this analysis (brownie points for fitting > probability distributions and graphing)? It would be a nightmare to do > this by hand since we usually work with less than 50 data points > instead of several hundred. Perhaps the Royal Royce of free statistical packages is R http://www.r-project.org/. However it is not all that user-friendly and it might take longer to learn than is worth it for one project. There is plenty of good documentation on the site that can help and there is a very good R-help mailing list where you can ask questions Otherwise here are some sites for some dedictated statistiscs/ graphing packages (Free or shareware in most or all cases). I have not checked the links in a few months though. Date Subject Author 7/20/07 Help with a school project, a statistical survey on education loom91 7/20/07 Re: Help with a school project, a statistical survey on education richardstartz@comcast.net 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/21/07 Re: Help with a school project, a statistical survey on education The Qurqirish Dragon 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education Richard Ulrich 7/23/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education loom91 7/22/07 Re: Help with a school project, a statistical survey on education Bob 7/23/07 Re: Help with a school project, a statistical survey on education loom91 7/24/07 Re: Help with a school project, a statistical survey on education Nick 7/25/07 Re: Help with a school project, a statistical survey on education John Kane 7/25/07 Re: Help with a school project, a statistical survey on education loom91 7/13/13 statistical survey to be done on which topics? divya.nair421@gmail.com 7/26/07 Re: Help with a school project, a statistical survey on education loom91
{"url":"http://mathforum.org/kb/message.jspa?messageID=5825375","timestamp":"2014-04-16T05:38:31Z","content_type":null,"content_length":"36066","record_id":"<urn:uuid:062052a4-f14c-486e-b9ac-2b084abeb958>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the universal inverse semigroup of a commutative semigroup an embedding? up vote 3 down vote favorite The question of existence of a universal inverse semigroup of an arbitrary semigroup has been answered before (this is a construction similar to the Grothendieck group). Let's refer to the universal inverse semigroup of a semigroup $S$ as $G_I[S]$ (for this question). It was noted that $G_I[S]$ for a commutative semigroup $S$ is not necessarily commutative (because of nilpotent elements). For a general non-commutative semigroup $S$, it's also clear that $G_I[S]$ is in general not an embedding, i.e. $S \not\subset G_I[S]$. However, it seems to me as if $G_I[S]$ will be an embedding for a commutative semigroup $S$, i.e. $S \subset G_I[S]$ (more precisely, $G_I[S]$ contains a sub-semigroup isomorphic to $S$). Is this true? Note This question is identical to this question at math.stackexchange.com. ra.rings-and-algebras semigroups Why does it seems to you? This question is indeed more appropriate for math.stackexchange. Take $S=[0,+\infty]$ for a counterexample. – Fernando Muro Jul 2 '12 at 13:52 @Fernando: This semigroup (with addition, I assume) embeds into an inverse semigroup. – Mark Sapir Jul 2 '12 at 14:40 Sorry, I was thinking of the associated group. – Fernando Muro Jul 2 '12 at 14:42 In fact if you read Schein's paper (see my answer, the paper is available online), you will see that the question of whether all commutative semigroups embed into inverse semigroups was considered non-trivial. Even an example of a semigroup with commuting idempotents that does not embed into an inverse semigroup was not known for some time. – Mark Sapir Jul 2 '12 at 15:54 @Fernando: It seemed to me, because I had only checked it for upper triangular matrices with at most one non-zero entry per row and column. Now it's obvious to me that these matrices were much too closely related to partial one-to-one transformations. I admit that the question is more appropriate for math.stackexchange, because finding a counter-example was easy once somebody told me that there is one. However, I really like the paper from B. Schein... – Thomas Klimpel Jul 2 '12 at 19:56 add comment 1 Answer active oldest votes B. Schein described all semigroups embeddable into inverse semigroups in Schein, Boris M., Subsemigroups of inverse semigroups, Le Matematiche LI (1996), Supplemento, 205–227 (in fact the paper was written in the 50s). From that paper it easily follows that not every commutative semigroup embeds into an inverse semigroup. Indeed, look at the quasi-identity $\& R\to u=v$ called $A_1$ on page 218 there. Consider the (finite) nilpotent of class 3 commutative semigroup $S$ given by the presentation $R$ in the variaety of commutative nilpotent semigroups of up vote 6 class 3. More concretely, note that all words that appear in $R$, $u,v$ are of length 2. The semigroup $S$ consists of all words of length 1 or 2 in letters that appear in $R$, and 0; the down vote product is the concatenation whenever it is inside that set of words or 0 otherwise. Words that are equal according to $R$ are identified in $S$. The equalities of $R$ hold in $S$ by accepted definition while the equality $u=v$ is not true in that semigroup. Hence this finite commutative semigroup $S$ does not satisfy $A_1$. But by B. Schein $A_1$ is necessary for embeddability into an inverse semigroup. Hence $S$ does not embed into an inverse semigroup. In particular, the map from $S$ into its universal inverse semigroup is not injective. add comment Not the answer you're looking for? Browse other questions tagged ra.rings-and-algebras semigroups or ask your own question.
{"url":"http://mathoverflow.net/questions/101097/is-the-universal-inverse-semigroup-of-a-commutative-semigroup-an-embedding","timestamp":"2014-04-20T01:24:58Z","content_type":null,"content_length":"58112","record_id":"<urn:uuid:f3805ed6-56bc-41d0-91b4-ad2c687488cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
homotopically injective object Homological algebra Let $\mathcal{A}$ be an abelian category with translation. An object in the category of chain complexes modulo chain homotopy, $K(\mathcal{A})$, is homotopically injective if for every $X \in K(\mathcal{A})$ that is quasi-isomorphic to $0$ we have $Hom_{K(\mathcal{A})}(X,I) \simeq 0 \,.$ $QuasiIsoMono = \{f \in Mor(A_c) | f mono and quasiio\}$ be the set of morphisms in the category of chain complexes $Ch_\bullet(\mathcal{A})$ which are both quasi-isomorphisms as well as monomorphisms. A complex $I$ is an injective object with respect to monomorphic quasi-isomorphisms precisely if • it is homotopically injective in the sense of complexes in $\mathcal{A}$; • it is injective as an object of $\mathcal{A}$ (with respect to morphisms $f : X \to Y$ such that $0 \to X \stackrel{f}{\to} Y$ is exact). In complexes in a Grothendieck category Proposition For $\mathcal{A}$ a Grothendieck category with translation $T : \mathcal{A} \to \mathcal{A}$, every complex $X$ in $Ch_\bullet(\mathcal{A})$ is quasi-isomorphic to a complex $I$ which is injective and homotopically injective (i.e. QuasiIsoMono-injective). Relation to derived categories For $\mathcal{A}$ an abelian Grothendieck category with translation the full subcategory $K_{hi}(\mathcal{A}) \subset K(\mathcal{A})$ of homotopically injective complexes realizes the derived category $D(\mathcal{A})$ of $\mathcal{A}$: $Q|_{K_{hi}(A)} : K_{hi}(A) \stackrel{\simeq}{\to} D(A) \,,$ where $Q : K(A) \to D(A)$ and $Q|_{K_{hi}(A)}$ has a right adjoint. It follows that for $D$ any other triangulated category, every triangulated functor $F : K(\mathcal{A}) \to D$ has a right derived functor $R F : D(\mathcal{A}) \to D$ which is computed by evaluating $F$ on injective replacements: for $R : D(\mathcal{A}) \stackrel{\simeq}{\to} K_{hi}(\mathcal{A})$ a weak inverse to $Q$, we have $R F \simeq D(A) \stackrel{R}{\to} K_{hi}(A) \hookrightarrow K(A) \stackrel{F}{\to} D \,.$ Much of this discussion can be found in The general notion of injective objects is in section 9.5, the case of injective complexes in section 14.1.
{"url":"http://www.ncatlab.org/nlab/show/homotopically+injective+object","timestamp":"2014-04-20T21:31:20Z","content_type":null,"content_length":"33846","record_id":"<urn:uuid:e2316b78-f824-40a2-97ae-ad2c6f413104>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Solve Matrices Edit Article Row Addition and SubtractionScalar Matrix Multiplication Edited by Chris, Mahmoued, Laura, Rachelallday and 1 other Matrices (the plural of matrix) are a convenient way of organizing linear functions and systems of equations. Before you can move on to the higher-math applications of matrices, you have to master the basic methods of solving matrices. Your first introduction to solving matrices will probably be using them to solve systems of equations, using basic algebraic operations. 1. 1 Perform the basic matrix operations of row switching, scalar multiplication, addition and subtraction until you've reduced the matrix to reduced row form. □ "Reduced row form" means that any numbers may be in the rightmost column, but the rest of the entries in any given row consist of a single "1" entry accompanied by as many zeroes as necessary to fill the rest of the spaces. □ In reduced row form, order the rows so that the "1" entries like up in a rightward, downward diagonal line. So the first line of the matrix might be "1 0 0 24," the second line "0 1 0 46," and the third line "0 0 1 5." 2. 2 Switch any 2 rows in the matrix to make performing the other operations easier, or to arrange the "1" entries properly in reduced row form. This doesn't affect the overall value of the matrix. □ You must swap the 2 lines completely, with no intermingling of the numbers for each row. So for example if you had a matrix with entries "3 12 2" in the first row and "4 6 3" in the second row, you could swap "4 6 3" to be the first row and "3 12 2" to be the second row. But you couldn't swap just 1 or 2 of the elements from each row. Method 1 of 2: Row Addition and Subtraction 1. 1 Combine the elements of any 2 matrix rows by adding and subtracting them. This creates a third row (the result), which you then substitute for 1 of the original 2 rows. □ Add and subtract each element individually, working your way across the row. So if you were to add the rows "3 12 2" and "4 6 3," the resulting new row would be "7 18 5." □ The results row must replace 1 of the rows you just used to create it; you cannot arbitrarily add a new row to the matrix and keep the other rows unchanged. Method 2 of 2: Scalar Matrix Multiplication 1. 1 Multiply every element of a given row by the same scalar. □ As long as you multiply each element in the row by the same scalar, you don't actually change the value of the matrix. But scalar multiplication can make performing the other matrix row operations easier. For example, if you have the rows "2 5 3" and "-1 2 9," multiplying the second row by 2 is the perfect setup for then adding the resulting rows together. The scalar multiplication gives you "-2 4 18," which when added to the first row yields "0 9 21." If you then scalar multiply the resulting row by 9, you have "0 1 (21/9)", and this row is prepared for reduced row form. Article Info Categories: Mathematics Recent edits by: Rachelallday, Laura, Mahmoued Thanks to all authors for creating a page that has been read 4,353 times. Was this article accurate?
{"url":"http://www.wikihow.com/Solve-Matrices","timestamp":"2014-04-18T00:31:56Z","content_type":null,"content_length":"62066","record_id":"<urn:uuid:bcb1e315-da3b-49f4-b94f-4f98668f9484>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Mechanism Library *Updated! Yo homies. Welcome to the mechanism library! Whether you're here to learn how to be showy, or looking for the way to solve a hard level, chances are we've got the mech for you. Disclamer: I don't know who some of the original mechs were made by!! ~Arms and Retrievers~11952870 from Robotmafia Example solve: This is a very basic retriever, goes back and forth 90 degrees. 11298502 from txsbor25 Example solve: A two segment retriever, it has proved to be very useful. 11299066 from Wafflekins A very useful back and forth retriever. 11301806 from Protogenious Very cool mech, the "reflex quadrilateral". Turns approximately 90 degrees then back 180 degrees. 11746646 from Foss Example solve: The amazing and wonderful, 570 degree brown mech :D 11795963 from Pawel A great 180 to 360 degree turning arm. 11952917 from Ken75 Example solve: Super useful arm that orbits the build area, loads of room for modification 11849529 from Foss Example solve: An amaaaaaaaaaaaaaaaaaaaaazing flippy floppity blippity boo. May be hard to find other uses for this mech. 11954687 from Ken75 Example solve: An arm that goes all the way around the build area 11994666 from Dmasters Example solves: Dmasters Dmasters again A super cool moving thrower thing (There's a lot of them, we need more on here!) 11952922 from Ken75 Example solve: A basic brown pult, pretty poorly made :P 11953417 from {???} Example solve: Super useful brown pult for vertically challenged build areas 12009520 from Marjo Example solve: A basic pult mechanism ~Bridges~11953425 from Foss Example solve: Foss himself has shown that this underbridge/pult is super uselful 11973122 10860484 from Rianbay May make a dissection so you can see the mech more clearly if it's requested 11976810 from Ken75 Example solve: A very easy to tweak and long reaching bridge, best on levels with tall build areas ~Miscellaneous~11401958 from Rianbay A way to join two bridges under one weight. 11865925 from Camero09 Example solve: A mech that moves an arm back and forth 4 (or maybe more) times! 11952889 from Rianbay Example solve: (Middle part has one of them) One of the single most useful mechanisms ever, locks in place so things can be stable with no weight. Also, check out the tutorials section in the Contraptioneering Videos thread, there's a few more complicated things that are shown step by step. Last edited by ken75 on Mon Feb 17, 2014 1:58 pm, edited 14 times in total. Re: Mechanism Library *Updated! The first one I'm 99% sure was done by robotmafia. The other 1% goes to jon Re: Mechanism Library *Updated! Mech: 11795963 from Pawel A great 180 to 360 degree turning arm. lol i forgot about that one Re: Mechanism Library *Updated! We could throw this thing by jdc in there http://www.fantasticcontraption.com/?designId=11938851 he's used it a lot so it looks pretty practical Re: Mechanism Library *Updated! Not sure if it's meaningful or original, but I have this one quadrilateral-powered pult that I think is pretty useful: http://fantasticcontraption.com/?designId=11717743 Re: Mechanism Library *Updated! Wheeeeeeeeeeee once again, the forums are completely scarce of life Re: Mechanism Library *Updated! http://www.fantasticcontraption.com/?designId=11960761 For all to adore, a use of Camero's awesome mech! Re: Mechanism Library *Updated! ken75 wrote:http://www.fantasticcontraption.com/?designId=11960761 For all to adore, a use of Camero's awesome mech! Re: Mechanism Library *Updated! ken75 wrote:http://www.fantasticcontraption.com/?designId=11960761 For all to adore, a use of Camero's awesome mech! WOW, ken! Re: Mechanism Library *Updated! was digging through old designs and found a cool thing I made when I sucked so I decided to rebuild it because now I suck less I'm horrible at doing the second half of extensions but the initial thing is what matters Re: Mechanism Library *Updated! rianbay812 wrote:was digging through old designs and found a cool thing I made when I sucked so I decided to rebuild it because now I suck less I'm horrible at doing the second half of extensions but the initial thing is what matters that looks really cool haha if only there was something to lock it in Re: Mechanism Library *Updated! Mmm it is really nice. I keep procrastinating on adding it to the list for some reason. Re: Mechanism Library *Updated! Keep... This... Alive... Re: Mechanism Library *Updated! I will, I promise, as soon as I'm un-sick Re: Mechanism Library *Updated! Aww, get well soon! Being sick sucks My above mech can also be converted into a 360 mech: http://fantasticcontraption.com/?designId=11993854 Re: Mechanism Library *Updated! nice mech!!! Re: Mechanism Library *Updated! Updated this! And my level set... will post a few more levels there tonight as well now that I'm back in homeostasis Re: Mechanism Library *Updated! Complete overhaul on this, we have loads of retrievers so get posting some other stuff! Re: Mechanism Library *Updated! Here is an arm (sorry) that I came up with, it is similar in function to Ken's circle arm. http://fantasticcontraption.com/?designId=12006747 Pros: I have found that it is easier to tweak than Ken's. Cons: It does not travel the full circle like Ken's. Re: Mechanism Library *Updated! As you said there are many types of pults... can we add them here even if we don't know who first came up with them? Edit: Nice arm Last edited by marjo on Sat Feb 08, 2014 11:16 pm, edited 1 time in total.
{"url":"http://www.fantasticcontraption.com/forum/viewtopic.php?f=3&t=10384&p=275210","timestamp":"2014-04-18T08:17:33Z","content_type":null,"content_length":"52669","record_id":"<urn:uuid:006b5209-7a4d-45f2-bb6f-b153e65074f5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Liters Converter Milliliters - Liters Converter Convert between milliliters and liters using this metric and imperial liquid volume conversion tool. This converter is part of the full liquid volume conversion tool. Simply choose whether you want to convert milliliters to liters or liters to milliliters, enter a value and click the 'convert' button. Default rounding is set to a maximum of 14 decimal places. A list of all the individual liquidvolume converters is available here. Converter Frequently Asked Questions • How many milliliters are there in x liters? • How many liters are there in x milliliters? • How can I convert liters to milliliters? • How can I convert milliliters to liters? To find out the answer to any of these questions, simply select the appropriate unit from each 'select' box above, enter your figure (x) into the 'value to convert' box and click the 'Convert!'
{"url":"http://www.thecalculatorsite.com/conversions/liquidvolume/milliliters-to-liters.php","timestamp":"2014-04-18T11:22:31Z","content_type":null,"content_length":"21941","record_id":"<urn:uuid:c9bc6a6d-487a-41ed-bd97-15c8c86a5179>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Dice Match Dice Match From GameWinners Game Center achievements Complete the following tasks to unlock Apple Game Center achievements. oeag lpt dt, uhgac Colored Dice Points: Score at least 350,000 points using the colored dice. Eight Fives: Get Eight Fives in one swipe! Eight Fours: Get Eight Fours in one swipe! Eight Ones: Get Eight Ones in one swipe! Eight Sixes: Get Eight Sixes in one swipe! Eight Threes: Get Eight Threes in one swipe! Eight Twos: Get Eight Twos in one swipe! Finish Frantic Game: Complete at least one Frantic game! Finish Timed Game: Complete at least one Timed game! Mega Swipe Five Thousand: Get Five Thousand Points in one swipe! Mega Swipe One Thousand: Get One Thousand Points in one swipe! Mega Swipe Ten Thousand: Get Ten Thousand Points in one swipe! Mega Swipe Twenty Thousand: Get Twenty Thousand Points in one swipe! Metal Dice Points: Score at least 300,000 points using the metal dice. Pink Heart Dice Points: Score at least 100,000 points using the pink heart dice! Six Fives: Get Six Fives in one swipe! Six Fours: Get Six Fours in one swipe! Six Ones: Get Six Ones in one swipe! Six Sixes: Get Six Sixes in one swipe! Six Threes: Get Six Threes in one swipe! Six Twos: Get Six Twos in one swipe! Solitude: While play Solitude, leave no dice behind! Ten Fives: Get Ten Fives in one swipe! Ten Fours: Get Ten Fours in one swipe! Ten Ones: Get Ten Ones in one swipe! Ten Sixes: Get Ten Sixes in one swipe! Ten Threes: Get Ten Threes in one swipe! Ten Twos: Get Ten Twos in one swipe! Twelve Fives: Get Twelve Fives in one swipe! Twelve Fours: Get Twelve Fours in one swipe! Twelve Ones: Get Twelve Ones in one swipe! Twelve Sixes: Get Twelve Sixes in one swipe! Twelve Threes: Get Twelve Threes in one swipe! Twelve Twos: Get Twelve Twos in one swipe! Welcome: You got Dice Match! Congratulations! White Dice Points: Score at least 200,000 points using the white dice! Wooden Dice Points: Score at least 250,000 points using the wooden dice!
{"url":"http://www.gamewinners.com/Cheats/index.php/Dice_Match","timestamp":"2014-04-17T12:33:11Z","content_type":null,"content_length":"21183","record_id":"<urn:uuid:19738c39-8883-4d55-b64d-3cbac28d740c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Good function to test speed of language? On 30 Mar 2005 18:26:40 -0800, (E-Mail Removed) > James McIninch wrote: > <snip> > > Also, be sure to use the same precision in both situations (FORTRAN > defaults > > to single-precision transcendental functions, whereas C uses > > double-precision ones; you can obtain single precision libraries for > C as > > well). > The intrinsics of Fortran, such as COS, have been generic since at > least the Fortran 77 standard, meaning that COS(X) will call a single > or double precision version of the cosine function, and return a single > or double precision real, depending on the type of variable X. variable _or value_ X. Explicitly declared variables can be equally easily single or double -- or other system-dependent non-portable precisions. Implicitly declared variables, such as under the hoary "God is real" rule, are more easily single. Fortran floating-point literals are single unless you use 'd' for the exponent, or in >= F90 the _kind syntax. Double real must occupy twice the space of single, but need not actually be twice as precise. C99, not yet widely implemented/available, includes single=float and long double math functions as well as the classic double ones, and adds complex variants where applicable, plus optional (if you #include <tgmath.h>) generic 'wrappers' comparable to Fortran (and others). (Since C89) long double although a distinct type need not actually be more precise or bigger than double, especially in the M$ world; for that matter double need not be better than float if that satisfies the (minimum) requirements. Floating literals in C are double unless you append 'f' float or 'l' long double. Both languages permit calculations to be performed in greater than the standardly-specified precision (and range) if the compiler prefers, especially if as on a certain common machine it is costly to convert. The OP also asked about integers. Fortran doesn't standardly require more than one size/precision of integer, nor any unsigned. C has four and C99 five nominally distinct precisions, in both signed and unsigned (at least <G>), although again the 'higher' ones need not actually be more precise (or bigger) as long as they meet minima. - David.Thompson1 at worldnet.att.net
{"url":"http://www.velocityreviews.com/forums/t437552-good-function-to-test-speed-of-language.html","timestamp":"2014-04-20T11:44:10Z","content_type":null,"content_length":"64737","record_id":"<urn:uuid:6d7657b0-a74c-409c-8ac4-a3e8ded6e00f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Which of the following points is a solution to the system of equations? 2x + 3y = 3 3x + 7y = 2 Question 5 options: (3,-1) (0,1) (0,7) (3,0) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f6f3c6de4b0772daa08f77e","timestamp":"2014-04-21T02:10:29Z","content_type":null,"content_length":"39480","record_id":"<urn:uuid:848da412-f29d-4cc8-a734-c60afff309cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
How to analyze Likert type dependent variables Suppose your dependent variable (DV) is a Likert scale or something similar. That is, it’s some sort of rating, from 1 to 5 or 1 to 7 or some such. And suppose you want to regress that on several independent variables. What should you do? There are three broad categories of regression models that might be applicable. A lot of people routinely use linear regression (often simply called regression). Others routinely say this is incorrect, and that you should use ordinal logistic regression. And yet others will do things such as multinomial logistic regression, or collapsing the DV into two categories, and then doing binary logistic. Which is right? The short answer to this is to quote Sir David Cox There are no routine statistical questions, only questionable statistical routines Let’s get more specific. Suppose you are a doctor studying back pain, and suppose your DV is response to a scale: How much pain are you in on a typical day 1 – None 2 – Barely noticeable 3 – Moderate 4 – Severe 5 – Excruciating and your independent variables are things like age, sex, injury status, time since injury and so on. If one is strict about it, linear regression requires a continuous DV – and we do not have one, at least as we’ve measured it, although it could be argued that there is a latent underlying variable here that is continuous. But you’d be hard pressed to prove that the difference between “none” and “barely noticeable” is the same as that between (say) “moderate” and “severe”. Technically, if you follow Steven’s categories of nominal, ordinal, interval, ratio, your DV is ordinal, and should be analyzed with some form of ordinal logistic regression. But the most common type (by far) of ordinal logistic regression is the proportional hazards model, which assumes proportional hazards. That assumption might be violated, in which case, you might want to use multinomial logistic. Since those are relatively unusual methods, some people just collapse the categories into (say) “severe’” or “excruciating” vs. anything less than that. Which is right? The great advantages of linear regression are its ease of interpretation and its familiarity. But it might be wrong. Ordinal logistic is more likely to be correct, but is less known and harder to understand. Multinomial logistic is even harder to understand, and is a very complex model, with many parameters to estimate. Collapsing the variable will only very rarely be correct. It throws away information, and that’s rarely a good thing to do. So, here’s what I recommend: Do ordinal logistic regression and test the assumptions. Then if the assumptions are met, also do linear and regression and compare the results by making a scatterplot of one set of predicted values vs. the other. If they are very similar (YOU decide. Statistical analysis requires thought and judgment) then go with linear regression. If the assumptions are NOT met, then also do multinomial logistic regression, and compare those two sets of results, opting for the simpler ordinal model if results are very similar. Author Bio I specialize in helping graduate students and researchers in psychology, education, economics and the social sciences with all aspects of statistical analysis. Many new and relatively uncommon statistical techniques are available, and these may widen the field of hypotheses you can investigate. Graphical techniques are often misapplied, but, done correctly, they can summarize a great deal of information in a single figure. I can help with writing papers, writing grant applications, and doing analysis for grants and research. Specialties: Regression, logistic regression, cluster analysis, statistical graphics, quantile regression. You can click here to email or reach me via phone at 917-488-7176. Or if you want you can follow me on Facebook, Twitter, or LinkedIn. Comments: 107 Posted by Peter Flom 24 Aug 2013 at 5:33 PM I am not an SPSS user but probably factor is for categorical variable and covariate for continuous. But that’s a guess Posted by Kwan 25 Aug 2013 at 7:57 AM I’m having a hard time interpreting the ordinal regression. Is it okay for me to run linear regression instead? Posted by Peter Flom 25 Aug 2013 at 8:00 AM No, linear regression is probably not going to be OK. The assumptions will almost surely be violated, plus it assumes that the scale of the DV is interval. Posted by Kwan 25 Aug 2013 at 8:11 AM Thank you for the fast reply, Can someone please explain what is Test of Parallel lines, Model Fitting Information and Goodness-of-fit in basic human language? I have little to no background in statistic. All I wanted to know is what IV have impact on the DV. Posted by Peter Flom 25 Aug 2013 at 8:18 AM I cannot teach all of ordinal logistic regression here. If you would like to hire me to help you with your analysis, let me know. Posted by Merve 31 Aug 2013 at 8:04 PM I am doing my dissertation, and now struggling with SPSS, my lecture told me to use ordinal regression because both of my variables,Dependent and Independent, are categorical (Likert-scale rating). Both have the same scale because they are asking the same questions, because I want to know if for instance Brand Association has a significant positive impact on Brand Equity. So what do I need to know. How can I come to my conclusion if my indpendent variables have an impact on my dependent variable? Please help me I am trying to understand regression since two days, still did’t get it. Posted by Peter Flom 01 Sep 2013 at 7:31 AM Hi Merve Regression usually takes an entire semester, and then logistic regression (which is where ordinal logistic comes in) another semester. If you’d like to hire me to help with your data analysis, let me know, but explaining all of regression is not something I can do in a blog post. Leave a Comment! Cancel reply
{"url":"http://www.statisticalanalysisconsulting.com/how-to-analyze-likert-type-dependent-variables/","timestamp":"2014-04-18T08:03:32Z","content_type":null,"content_length":"97669","record_id":"<urn:uuid:007255a0-9942-4670-8f09-2ba2bf92002f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
August 20 I've already talked a lot in this blog about deletion channels. Trace reconstruction involves a similar problem. We start with an original binary string X = X1,X2,...,Xn. A trace consists of a string Y1,Y2,...,Ym obtained from the original string by passing it through a deletion channel, where each bit is independently deleted with probability p. The trace reconstruction problem basically asks how many independent traces do you need to see to reconstruct the original string X with high probability. Unlike the coding setting, where X might be chosen from a codebook of our own design, in this setting two natural models to study are when X is uniform over binary strings (so the high probability is over the choice of X and the traces), and in the worst case (where the high probability is just over the traces). Variations of the problem include operations other than deletions (including, say, insertions and errors). As an example application, a set of sensors might be monitoring a sequence of events. Each individual sensor is weak and might miss a given event, in which case the question is how many sensors are needed to reconstruct the event sequence perfectly, with high Trace reconstruction has some history in the information theory community, and the first CS-style paper I saw on it was by Batu, Kannan, Khanna, and McGregor in SODA 2004. The main result of this paper dealt with random input X and considered p values that were O(1/log n). It seems to me much more natural for p to be constant, and it has remained an open problem to determine an efficient algorithm for constant p. I mentioned this problem last time I visited Microsoft, and it seemed to resonate with some of the people there. Thomas Holenstein, Rina Panigrahy, Udi Wieder and I have a submission with several results, including an algorithm that for random X and sufficiently small constant probability p requires only a polynomial number of traces and polynomial time (with high probability). The SODA 2004 uses a majority voting technique -- the bits are determined sequentially, with each string voting on the next bit. A key idea in our new algorithm is a "smart voting" technique. We only let traces vote if there is good reason (based on the already determined bits) to think that the trace has a good prediction for the subsequent bit. That is, only well-informed strings are allowed to vote. Feel free to make your own political analogies. My intuition is that this smart voting technique is a closer analogue to the full belief propagation (or Bayesian analysis) that we want to do than just majority voting. Because of this, I hope this "smart voting" technique is a general approach that will find other applications. I don't yet have an analysis of a belief-propagation-based algorithm. Also, currently we can't analyze a maximum-likelihood algorithm, that finds the most likely original string X. I also don't know how to implement maximum-likelihood efficiently in this setting. So there's still plenty of open questions in this area. Luca Trevisan points to an article in the AMS by Neal Koblitz on modern cryptography that I think everyone in theoretical computer science (TCS) should read, as it exposes how at least some mathematicians view the culture in TCS. In summary, it's very negative. While I think the article is flawed on many levels, I'd argue that we ought to consider it as constructive criticism, and think about what, if anything, we might learn from it. For example, one statement I found fairly misguided is the following: Math departments usually believe the Conjecture. For the development of mathematics it is better for someone to publish one excellent paper in n years than n nearly worthless papers in one year. In certain other fields of science - including, unfortunately, computer science and cryptography - the analogous conjecture, while most likely true, is not widely believed. I accept the constructive criticism that in computer science we perhaps publish too quickly, and too many incremental things. On the other hand, this conjecture has little to nothing to do with reality. For every Wiles, who spent essentially a decade on what is a very important result, there are probably 100 mathematicians who spent a decade on the problem getting essentially nowhere and publishing what in retrospect are worthless papers. And the implication that TCS is producing only a stream of worthless papers is fundamentally incorrect. The question is really whether the culture should be that a person works only on big problems with the goal of having a very small number of big results (say, 1) over their lifetime, or that a person helps the community make progress through smaller and quicker increments (as well as the occasional larger contribution). Given the important tie between TCS and real-world applications, there's a clear reason why the community has developed with a larger focus on small/quick progress, although as a community we are also supportive of people who want to work the other way as well. The phrasing of the conjecture is clever, but I actually think TCS progresses much better with the culture it has than the one Koblitz favors. (Just like sometimes fast local heuristics are much better than slow exact algorithms.) Rather than just start a tirade against Koblitz, however, I think we'd be best off dissecting the article and understanding what aspects of the TCS culture has led to his opinions and, in many cases, misperceptions. We may find some things worth changing, either to improve our culture, or at least to present ourselves differently to other sciences. I've just returned from a week in the Bay Area, primarily to visit my "corporate sponsors"; Cisco and Yahoo have both provided research money this year, and I felt I owed them a visit before classes start. I also visited Microsoft Silicon Valley; even though I still haven't figured out how to get research money out of Microsoft, I've "consulted" there from time to time, and recently co-wrote a paper with several people there after my last visit. The main purpose of the visit was to talk to people and try to come up with new things to work on. Corporate money seems to synchronize with annual budgets. If you want another check next year, you need to convince them that funding you is still worthwhile. (That's not unusual; I understand if I ever get a DoD-based grant, it will be similar.) I should point out that all research money I've gotten from Cisco and Yahoo is given as a gift -- no requirements (and much less in overhead taken by Harvard!). It's just that it's nice to get these gifts annually, like a research anniversary present, and that means maintaining the relationship. I like to think that getting corporate money is a simple winning proposition. I get access to interesting problems; I get to work with talented people; and I get research money for doing what I would be doing anyway. But perhaps I'm deluding myself? Perhaps I'd be working on other grander problems if my agenda wasn't influenced by the vision of others? In my own personal case, I really doubt it, and it feels like a pure jackpot to me, but it's something every researcher probably has to think about. I'd certainly advise that people interested in "practical algorithms" and "algorithm engineering" should seek collaborations with (and funding from!) such industrial sources. In my experience, it takes some work. (But then again, so does writing NSF grants.) Usually the funding comes after a successful collaboration, not before, or there has to be a clear champion in the organization who can make the case that your work is related to something going on at the company. Building such relationships takes time, and doesn't happen overnight. But it's very worthwhile. To provide evidence, I'll spend some upcoming posts discussing problems I've recently collaborated on with people at Cisco, Microsoft, and Yahoo. Next semester, I'm teaching my class on randomized algorithms and probabilistic analysis, based on the Mitzenmacher/Upfal book. Harvard, like many universities, has an "Extension School", and I offer my courses through the distance education program. Basically, my lectures get taped and put online, I put the assignments online, and you or anyone who pays the Extension fee can take the course. While I'm sure my teaching performance on video is reminiscent of say an early Mel Gibson or perhaps a Moe Howard, I personally still don't think distance education is as good as "being there" by any means (at least, not yet...). But it offers an opportunity that people may not otherwise have. The course is taught at the level of an introductory graduate class, meant for non-theorists as well as theorists. These days, who doesn't need to know randomized algorithms and probabilistic analysis? If you know someone, for example in industry, who might like to take such a course, here's the link to the bare-bones syllabus. In a binary symmetric error channel, n bits are sent, and the channel flips each bit independently with probability p. So, for example, the message sent might be 00110011 and the received message could be 01100011 if the 2nd and 4th bits were flipped. Now suppose the same message is sent through k independent channels, and the receiver sees all of results. (Here k should be thought of as a small constant.) The capacity of this channel can be computed; essentially, in this channel, each bit gets maps to a number in the range [0,k], corresponding to the number of 1's in the appropriate position. (Since all errors are independent, exactly which channels flip a specific bit doesn't matter, just the number of flips matter.) As a specific example, when k = 2, we can think in the following nice way -- if we see two 1's (resp 0's) in bit position i, we think the original bit was 1 (resp 0), and now we have an error with probability p^2. With probability 2p(1-p), we see a 1 and a 0 in the ith position -- this corresponds to an "erasure", since the bit is now equally likely to be a 1 and a 0. So we have a channel that gives errors with probability p^2 and erasures with probability 2p(1-p); we can find the capacity (and codes for) such a channel. In a binary deletion channel, n bits are sent, and the channel deletes each bit independently with probability p. So, for example, the message sent might be 00110011 and the received message could be 010011 if the 2nd and 4th bits were deleted. Now suppose the same message is sent through k independent binary deletion channels, and the receiver sees all of results. Can we say anything useful here? The problem is automatically more challenging since we only have bounds and don't even know the capacity of the standard deletion channel (when k is 1). This is yet another simply stated question from the theory of coding for deletion channels in need of an idea. Registration has opened for FOCS 2007, which will take place in Providence on October 20-23. Please go to http://focs2007.org/ Note that the early registration deadline is September 20, and the deadline for reserving a room at the hotel at the conference rate is also September 20. (The hotel's regular rate is much higher.) The Knuth Prize lecture will be given by Nancy Lynch on October 21. A program of tutorials takes place on October 20, the Saturday before the regular conference program begins. The tutorial talks are as follows: Terrence Tao : Combinatorial Number Theory Dan Boneh : Recent Developments in Cryptography Daniel Spielman : Theory and Applications of Graph Spectra Abstracts for the tutorials should be available soon. This year marked the 11th HotOS workshop. From the call for papers: We request submissions of position papers that propose new directions of research, advocate nontraditional approaches to old (or new) ideas, or generate insightful discussion... As a venue for exploring new ideas, HotOS encourages contributions influenced by other fields such as hardware design, networking, economics, social organizations, biological systems, and the impact of compiler developments on systems and vice versa. We particularly look for position papers containing highly original ideas. Submissions were just due for the sixth HotNets workshop. Here is the HotNets mission statement: The Workshop on Hot Topics in Networks (HotNets) was created in 2002 to discuss early-stage, creative networking research and to debate positions that reflect on the research direction and needs of the broad networking community. Architecture, high-level design work, and positions that may shape long-term research direction are especially welcome. HotNets is structured to work in synergy with conferences such as SIGCOMM by providing a venue in which innovative work may receive feedback to help it mature into conference papers or otherwise have a long-term impact on the community. To fulfill these goals HotNets calls for short position papers that argue a thoughtful point-of-view rather than full-length conference papers, and maintains a broad and diverse scope. Does theory need a Hot Workshop? A workshop specifically designed for somewhat wacky or not-fully-baked ideas that may turn into long-range research directions? SIGCOMM (the major networking conference) will be taking place shortly. Although I won't be attending, I thought I'd mention some of the papers that involve theory in a non-trivial way. This list is not meant to be exhaustive, so I apologize for (and welcome comments on) anything I might have missed. The networking community, I've found, is very amenable to theory. I think there's a real understanding in the community that obtaining systematic improvements in something as wild and large as modern networks requires a solid, fundamental grasp of what's going on. This means good models, and good algorithms and data structures. This doesn't mean that SIGCOMM is full of theory papers; they generally want to see some sort of implementation to see that whatever idea is presented actually works. But many papers have a non-trivial theoretical component, and the SIGCOMM conference is certainly a good place to look for important problems where theoretical insight would be welcome. (The same is true for the larger networking conference, INFOCOM, but that conference is so large it's harder to get a handle on. Also, SIGCOMM aims to be a bit more visionary, or "out there", which I think means there's more opportunity for theory to get involved.) DTN Routing as a Resource Allocation Problem: DTN's are Disruption Tolerant Networks -- in this paper, focusing on mobile nodes that are only connected intermittently. In one section, the paper sets up an appropriate graph model for the problem, and proves lower bounds for various situations. For example, any algorithm that doesn't know the schedule of node meetings is Omega(n)-competitive compared to an offline adversary when n packets are to be delivered. The paper then moves on to heuristic approaches and an experimental evaluation. Orbis: Rescaling Degree Correlations to Generate Annotated Internet Topologies: The paper considers the problem of how to generate realistic Internet graphs, where here the relevant graph is a router graph with nodes labeled to give the corresponding AS (Autonomous System) network. An Axiomatic Basis for Communication: Pretty much like it sounds. Trying to set up a basic logic for network protocols, covering issues like naming, binding, forwarding, etc. Embracing Wireless Interference: Analog Network Coding: Normally, we think of networking coding as XORing packets, or taking random combinations of packet data over some finite field. In wireless, you may be sending signals, not bits. What do you do then? This paper looks at the problem of effective algorithms for this problem. Again, these examples aren't exhaustive; many other papers have Lemmas and Theorems in there. In fact, I'd wager that having words like "Lemma" and "Theorem" in the paper is positively correlated with acceptance to SIGCOMM. Maybe someone on the PC will confirm or straighten me out. In any case, I'll be giving all the SIGCOMM papers a closer look.... One thing I've been enjoying is that as people seem to discover this blog, they comment on the older posts. I appreciate these comments and I hope the newcomers stick around! Suresh asked me to enable the comment feed, which would allow people to easily see these newly arrived comments. They appear to be enabled, but while there is a link for the standard blog feed at the bottom of the page is a link for the blog feed, there is no link for the comments. I spent some time playing with the template to no avail. (I'm happy to take advice from anyone who could get the link to show up.) However, you can just enter the URL directly to your reader. Apparently, the URL you need to get the comments is: and the regular feed for the blog is In my undergraduate class, I devote one class (and one programming assignment) to heuristic search methods. I focus on hill climbing, since it is most clearly connected to other things covered previously (greedy algorithms, the simplex algorithm), but I also discuss other well-known methods, including the Metropolis algorithm, simulated annealing, tabu search, go with the winners, genetic algorithms, and, of course, BubbleSearch. Generally, algorithms textbooks hardly mention heuristics. At best, heuristics enter the picture when discussing approximation algorithms, as though the fact that we can prove that the greedy local search algorithm for max-cut achieves a cut of at least 1/2 the edges is the important thing. I wish the standard textbooks would devote a solid chapter on heuristic techniques; in real life, most students are probably more likely to have to implement a heuristic for some NP-hard problem than to have to implement a minimum spanning tree algorithm. The theory purists will undoubtedly suggest that heuristics don't belong in an algorithms class, because generally we can't prove formal statements about them. Obviously, I disagree. Heuristics are closely tied to other class concepts -- like greedy algorithms and approximations. Perhaps most importantly, a successful heuristic approach depends on sound modeling and building an understanding of a problem, which are exactly the important skills we should be teaching in undergraduate algorithms. Finally, we need to give students some basic, realistic algorithmic tools for handling NP-hard problems in practice. (2-approximations do not suffice.) So while I acknowledge that students may be exposed further to these and other heuristic approaches in for example an AI class, I think it's important to clearly connect it to what they're learning in algorithms. I became more interested in (and knowledgeable about) heuristic methods when I was working on a related project at the nearby Mistubishi Electric Research Laboratory (MERL) some years ago. The project on Human-Guided Search studied the benefits of having a human dynamically interact with heuristic optimization algorithms, and along the way we did some work just on the heuristics Here's a simple heuristic that can be taught, I think, quite easily and productively to undergraduates. Many hard problems -- including most scheduling, packing, and coloring problems -- have natural greedy heuristics, whereby the items are ordered according to some criterion (work, size, degree), and "placed" one at a time according to that ordering. For example, many standard bin packing algorithms such as first fit decreasing and best fit decreasing fit this type. Given time, a natural way to extend such greedy heuristics is to try additional orderings. While we can't hope to consider all orderings, we could certainly try more than one. Of course, intuition tells us we should prefer orderings close to the greedy ordering. There are a variety of ways one could do this. A historically popular way is to sequentially choose an item uniformly at random from the top k of the greedy ordering, place it, and remove it from the list. One has to choose the parameter k. A negative feature of this approach is that there are many orderings that will never even be considered under this approach. We suggest what we call BubbleSearch, which makes use of the Kendall-tau distance. More clearly, the Kendall-tau distance between an ordering A and the greedy ordering B is the number of transpositions of adjacent items you would have to make to get from A to B, which corresponds to the number of swaps you would make using BubbleSort if the sorted order was just the greedy ordering. (Hence the name.) You could just go through all the permutations of items in order by Kendall-tau distance from the sorted order. Most small perturbations of the greedy ordering, however, give very similar results, leading to little or no improvement. A better way for most problems is a variation of the top k approach. To create a new ordering A, we start with a base ordering B (the greedy ordering). We pick the first item of A as follows: choose the first item of B with probability p, and if it isn't selected, choose the next item of B with probability p, and so on down the list (starting at the beginning again if necessary). Once an item is selected, it become the first element of A, and is removed from B. We continue choosing subsequent items for A the same way with the remaining list from B, starting from the beginning of the remaining list. The probability of obtaining an ordering A is then proportional to (1-p)^d(A,B). Here p is the algorithm parameter, determining how close to the base ordering you are likely to be. To me, this approach is much more intuitive than the top-k approach, and in our experiments appeared to do at least marginally better. A further improvement is to change the base ordering to be the best ordering you have seen so far. Once you've beaten the greedy ordering, there's no reason to keep it as the base. A motivation for simple heuristics like BubbleSearch is exactly their simplicity. They are easy to code and rely on essentially no problem-dependent knowledge. If coding time matters, something like BubbleSearch is the way to go. Randomized extensions to greedy algorithms also give rise to interesting theoretical questions, related to work on "priority algorithms". I don't know of any results bounding the performance of random-top-k, BubbleSearch, or similar randomized greedy variations. Most of you probably don't know about Mitsubishi Electric Research Laboratories (MERL), a small lab nestled in Kendall Square (very close to MIT, and a pleasant walk from Harvard). While they did basic research there, it was definitely more application/development oriented, and it specialized more in areas like UI, graphics, speech, etc. Information theorists would know it as the home of Jonathan Yedidia, a big name in coding and belief propagation, but CS theorists might not ever have heard of it. I've done some consulting work there over the years. I knew the lab director, Joe Marks, from my time as a Harvard undergrad, and he hooked me up. It was a very nice place -- very theory-friendly in its application-oriented way. Joe clearly had the mindset that the goal of the lab was to generate new ideas, and that would drive new products. So I was disappointed to hear several months ago that Joe had been removed as lab leader. (Don't feel too bad for Joe -- he's now at Disney, and is chairing SIGGRAPH this week. A talent like him will continue to be successful...) And even more disappointed (but not surprised) to read in Xconomy that MERL was being "re-organized", essentially phasing out the basic research component, and many people were leaving. I've lived vicariously through this before -- I left Digital Systems Research Center before it disappeared (after Digital was bought by Compaq was bought by HP), but knew several people who went through the process. It's very disturbing to see again how hard it is for companies to make research labs work. What makes research labs successful? Can they really last long-term in any non-monopoly environment? Let me make a bold prediction. I personally am not planning on retiring until at least age 65. Pick any research division that's around today. (Microsoft, Yahoo, Google, AT&T, Bell Labs, IBM, any of them...) I'd bet on pretty much any of them that their research wing will be gone before I hit 60. (I'm still under 40. OK, maybe I should just bet they'll be dramatically reduced or transformed into Advanced Development for a number of years somewhere along the way...) This is something people who go to research labs should know going in. Odds are likely you'll have to change jobs somewhere along the way -- not because of your own talents, but because of company-level problems. This might make such jobs seem risky, but let's not exaggerate -- when the company has problems, the talent moves to a new company. I'd love to hear thoughts on what makes good research labs, why I'm wrong about research labs dying out, how people who have experienced such moves feel about them, or anything else on the topic. Update : Jonathan Yedidia says in the comments that the reports of MERL's death have been highly exaggerated. Peter Winzer argued for using (variants of) Valiant's randomized load balancing (route to a random intermediate spot, and go from there to the end destination) as an actual network architecture, over shortest-path and virtual private network methods. And I do mean argued. A heated discussion broke out among various real networking people as to whether this was realistic. The cost of potentially doubling the transport time (speed-of-light time) for say cross-country transmissions was argued to be unacceptable, even if the payoff was essentially zeroing queueing delays and making a simpler, cheaper (less equipment) network. (Perhaps it could be used for smaller networks, or only to/from the center, to avoid doubling cross-country trips.) It was very entertaining and instructive; one of the classis theoretical ideas, an argument about how it could be used and what its benefit could be, and a counterargument based on practical experience, leaving a somewhat vague end result -- you probably can't tell how useful it is until you build it out... Cristian Estan gave an excellent talk giving an overview of data plane algorithms -- hardware router algorithms for longest matching prefix, string/regexp matching, packet classification/deep packet inspection, etc. Ho and Sprintson gave their survey on network coding, covering the background, the integrality gap of network coding for undirected networks (Agarwal and Charikar), finding network codes through integer programming for undirected networks, bounds on the number of encoding nodes needed for various graphs, robust network coding in the face of edge failures, practical implementations, and more. My DIMACS slides are now available on my Talks page. I'm spending a couple of days at the DIMACS Tutorial on Algorithms for Next Generation Networks. Tracey Ho and Alex Sprintson and giving a talk on Network Coding, and since in the past I've promised a post on open questions in network coding, I'm interviewing them. I apologize that the questions might be vague; the problem with a new area is that it's still a little unclear what the right open problems are, and we all use different lingo. But here are a few... 1. For 3 source-receiver pairs and unicast, in an undirected network, with arbitrary coding, is there an advantage from coding over routing?(I think the reference paper is Li and Li, Network Coding: The Case of Multiple Unicast Sessions)? 2. General multiple unicast in directed networks: is there an algorithm that, given a network, computes the capacity? For acyclic networks, there's an implicit characterization, but not an explicit characterization; for cyclic networks, there's not even that. 3. What is the complexity of finding the optimal (in terms of capacity) non-linear code for various network coding problems? Specific problems include determining the truth/falsehood of the following statements: 1. Given an instance of the general network coding problem in a directed graph G and a real number r, there is a polynomial-time algorithm which computes a solution achieving rate r if one exists, and otherwise reports that this is impossible. 2. Given an instance of the k-pairs communication problem in an undirected graph G and a real number r, it is recursively undecidable to determine whether the network coding rate is less than r. 4. For multicast networks, what is the minimum number of nodes that need to do encoding to make network encoding work? That is, can we minimize the coding complexity in terms of the number of nodes doing coding (or some other reasonable metric). 5. There must be open questions in the recent work by Koetter/Kschischang on coding for errors/erasures in random network coding. For example, they seem to give general (network-oblivious) bounds. Are there ways to improve their bounds by using knowledge of the network topology? 6. Is there an algorithm to find the minimum cost subnetwork that guarantees delivery using network coding subject to certain classes of adversarial (or non-adversarial) link failures. 7. A good place to look for CS-style open problems is probably also the Adler/Harvey/Jain/Kleinberg/Lehman paper On the Capacity of Information Networks, which has a nice section on open problems at the end. Maybe I'll have more after the talk... For those of you who haven't succumbed to buying the Mitzenmacher/Upfal textbook on Probability and Computing, there are two very positive reviews that will appear in the next SIGACT newsletter. Here are the reviews, excerpted from the SIGACT book reviews page Bill Gasarch maintains. I'm sure there are plenty of places to get such information, but for basic tech news (like what's up with Google and the FCC, voting machines, and the iPhone), I'm enjoying the Machinist blog, out of Salon.com. The longer weekly columns are interesting too; I'm already looking forward to getting a Zonbu. For straight tech policy, my favorite for quite some time has been Ed Felten's Freedom to Tinker. The past week he's been commenting on the California e-voting reports, which gives further evidence to what most of us already know (or would have guessed) : current voting machines are fundamentally insecure. I prefer to think this is due to incompetence and ignorance rather than malicious intent on anyone's part, but what do I know. My only complaint is that he doesn't post enough!
{"url":"http://mybiasedcoin.blogspot.co.il/2007_08_01_archive.html","timestamp":"2014-04-17T04:00:12Z","content_type":null,"content_length":"153971","record_id":"<urn:uuid:b910f276-e821-4244-ba95-0a8c45103787>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Figures don't Lie but Creationists Figure - By Alec Grynspan Sun 9 Nov 97 15:39 Figures don't Lie but Creationists Figure - By Alec Grynspan One of the Creationists' ploys has been to quote two Astrophysicists as if they were experts in biochemistry. Note that the 2 individuals in question (Hoyle and Wickramasinghe) have no problem with evolution itself and considered Creationists insane. They argue only that the ORIGIN of life, which is not part of evolutionary fact or theory, requires either a much older universe for Panspermia or that life needed a Creator to start. More background: Years ago Hoyle and Wickramasinghe postulated a steady-state universe and opposed the idea of the "big bang". As part of their attack on "big bang", which was rapidly winning ground over steady-state, they cooked up a "probability" for life to originate on Earth that was essentially impossible. To then cover the fact that life actually existed on Earth, they came up with the question-begging hypothesis of Panspermia. The result was that the origin of life was pushed further back. With the probability being so low, it would have taken trillions upon trillions of years for life to form using their concept. BUT - with a steady-state universe, a trillion zeroes in the probability equation would have had no effect on the end result, since the universe would have been eternal. Eventually, however, the steady onslaught of evidence for a "big bang" and against a steady-state universe forced Hoyle and Wickramasinghe to acquiesce. So they were stuck with their bogus equations. What to do? Well, if one postulated the existence of a creator, one eliminated the problem of the equations! One further undermined the concept of a non-created universe, giving them one more kick at the "big bang" cat. This "probability", combined with a distortion and misquotation of Dawkins, has actually been used as a claim by some extremely dishonest Creationists as the foundation of a scientific theory of Creation, even though it is nothing of the sort for other reasons. The flaw in the equations used by Hoyle and Wickramasinghe was that they used anonymous/non-anonymous atoms and, later, genetic sequences, to calculate the probability of a random assembly becoming a modern uni-cellular organism. The same tactic by Behe was used, via the debunked "irreducible complexity" approach, to derive a probability. But this method of applying probability is utterly dishonest. Let us take a simple example - table salt crystals. Table salt is made up of sodium and chlorine atoms, so let's start with a very small quantity (around 50 milligrams) of sodium and chlorine - around 10^20 atoms of each. Let's place these elements in a small container and mix it up. What is the probability of a sodium atom meeting a chlorine atom in this container? Answer: Virtually Unity. What is the probability of a *SPECIFIC* sodium atom meeting a *SPECIFIC* chlorine atom in this container? Answer: Once the sodium atom meets any OTHER chlorine atom, it is out of the picture. Similarly, once the chlorine atom meets any OTHER sodium atom, then IT is out of the picture. The probability of the specific atoms meeting each other? 1 in 10^40. The probability of every single specific sodium atom meeting a specific chlorine atom? 1 in 10^80. 1 with 80 zeroes after it. Once we have 10^20 salt molecules, what is the probability of any salt molecule linking to any other until we have a salt crystal? Answer: Unity. What are the chances of a SPECIFIC salt molecule meeting another SPECIFIC salt molecule? 1 in 10^20. Of all of them meeting like this? 1 in 10^40? Of that batch of Sodium and Chlorine making that crystal? 1 in 10^120 This is how Hoyle and Wickramasinghe and Behe established their probabilities - by using permutations and treating each component of the cell as a totally unique entity with no other properties prior to final assembly than staying where placed. Yet a pyridine molecule(for example) is the same wherever it is! Plus the properties of the variuos components REQUIRED that they have a constrained number of possible combinations. Further, all that we need is some form of self-replication molecule that can absorb other molecules in order to replicate and mutate - already verified to be able to form naturally (although many Creationists will quote 40-year-old editorial opinions as "proof" that it can't happen), plus the verified Dawkins effect to bring on evolution of the final form of that cell. Let's take another look at why this natural selection sequence, which creationists edit out when pretending to quote Dawkins, improves the probability to unity. Let us do a little back-of-the-envelope calculations. Let us presuppose that there were 10^6 mutations that caused 10^6 evolutionary bifurcations - with each alternative being of equal weight. That means that, when that primitive barely-life nucleic acid first started the sequence, the probability against the final result being a specific cellular structure would have been 2^(10^6) or 2^1000000 or 10^300000 - 1 with THREE HUNDRED THOUSAND zeroes after it. But, at any bifurcation, the probability that SOME path would be taken is UNITY. Therefore the probability against life forming is: 1 - 1^(10^6) or 1 - 1^1000000 or zero. In other words, the probability that modern life would form by random mutation with natural selection is Note that this does not take into account the bifurcations where one of the paths is lethal (bad mutation). These would be dead ends and would reduce the probability against the current life form developing. The end result, however, cannot pass the limit of UNITY, so it can only affect the final form of life and not the probability. The argument of Hoyle/Wickramasinghe/Behe and probabilities is therefore debunked! Figures don't lie but Creationists figure, part II "The Second Law Of Thermodynamics (All praise its glory) says that evolution is impossible!" This has been the rallying cry of Creationists for decades. Yet it is one of the biggest lies in their arsenal of lies. Before we go much further, let's take a look at the REAL second Law of Thermodynamics: In plain language: • 1. The amount of work that can be obtained by the flow of heat between two bodies is directly proportional to the difference in concentration of heat energy in the two bodies. • 2. The amount of work cannot exceed that difference. • 3. The amount of work must actually be less than that difference. The above can be proven with great mathematical elegance. Proven. Mathematically. Elegantly. All THREE of the laws of thermodynamics can be shown to true via mathematics and axiomatic assumptions. Or can they? Is there no constraint on these Laws? Yes, there is. The following constraints must be true: • 1. The system is closed. There is no external source of energy to disrupt the equations. • 2. The time period is finite. This is a constraint that becomes necessary because of the fact that heat is the motion of atoms in a random mode. □ a. Too short a time period and the motions of the atoms will not be statistically uniform. □ b. Too long a time period and extremely low probabilities of energetic atoms clustering to form new heat pockets becomes significant. • 3. New on the scene: Quantum effects can add to the conditions in 2 above. BUT!!!! Within an incredibly wide band, the Laws of thermodynamics are the most elegantly simple things in physics! This elegance, combined with the penchant for extrapolation and extension of simple concepts into the slippery world of analogy, has resulted in the laws of thermodynamics to be expressed in "clever" • 1. You can get out what you put in. • 2. You can't get out more than you put in. • 3. In fact, you'll only get out part of what you put in. A similar situation occured years ago, when relational databases were first conceived, by Codd and a method of designing them that was based on 3 basic rules (eventually 5) developed. • 1. All elements of a row (record) are dependent on the unique key of the row. • 2. All elements were, in the case of a composite key, dependent on the whole composite. • 3. All of the elements of a row were dependent ONLY on the key and not on the other elements of the row. This became, as a mnemonic: "The key, The Whole Key and nothing but the key. So help me Codd." Cute, and easy to remember. But not quite the scientific statements that they were based on. In the same way, the laws of thermodynamics, which were a little difficult for the beginners to understand, were rephrased into simpler concepts, divergent from THE ONLY PROVEN SCIENTIFIC BASIS FOR THESE LAWS. These laws, rephrased so cleverly, were grabbed up and used to extend the concepts within Metaphysics. But Metaphysics is not science. It is philosophy. The elegance of "You can't get out more than you put in" is a virtual cornerstone of a large measure of philosophic thought. But it isn't science. Extending that further, we get "Things tend towards disorder", which is actually based on the THIRD law of thermodynamics. Somehow, this has been extended, thru judo arguments and voodoo science, to "The Second Law of Thermodynamics says that entropy keeps things fromm getting more complex. Therefore evolution isn't Yet this has absolutely no basis in science. The Second Law, or the third or the first deal only with HEAT! The rest of the extrapolations are philosophical analogies! Yet! Even though enough scientists have fallen into the same trap and argued based on the same concept - that entropy and bio-complexity are related - the same scientists have won that argument! Because the argument that increasing complexity is negative entropy, bogus though it is, is countered by the fact that the TOTAL entropy of the Earth/Sun pair is actually monstrously positive, courtesy of the Sun! Thus we have a case where the Creationist, using pseudo-science and taking the laws governing heat flow and work out of context, LOSES ON HIS OWN GROUND! Once again, the Creationist loses. This time - twice over!
{"url":"http://www.skeptictank.org/figlie.htm","timestamp":"2014-04-17T00:48:43Z","content_type":null,"content_length":"15006","record_id":"<urn:uuid:6ec5030a-7d26-491d-9e09-44d485fd271e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Goedel: truth and misinterpretations Kanovei kanovei at wmwap1.math.uni-wuppertal.de Wed Nov 1 14:57:04 EST 2000 > Date: Wed, 01 Nov 2000 08:07:48 +0100 > From: Torkel Franzen <torkel at sm.luth.se> > (2) Even if every even number greater than 2 is the sum of two > primes, this is not necessarily provable in ZFC. Mathematically, (2) is meaningless (and basically shows that he who writes (2) either has no proper idea of mathematics at all or does not bother to present his ideas in proper form). Indeed, "A is not necessarily B" means, in standard mathematical language, that there is an example of A which does not belong to B, e.g. "an arbitrary group IS NOT NECESSARILY an abelian group". In principle, this goes back to Aristotelian foundations of logic, any A is B, not any A is B, etc., do you remember ? Why don't you explain what you really mean ? More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-November/004509.html","timestamp":"2014-04-19T04:22:14Z","content_type":null,"content_length":"3227","record_id":"<urn:uuid:fd288043-1598-4266-bd8d-1f42409abaf5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Qualities of URLs and resources From: Geoffrey M. Clemm <geoffrey.clemm@rational.com> Date: Thu, 10 Feb 2000 08:53:49 -0500 Message-Id: <10002101353.AA04843@tantalum> To: w3c-dist-auth@w3.org It's not that time or content negotiation aren't important, but just that including these aspects of the RMap function was not relevant to the point being made. Perhaps modifying the notation will help clarify. Let's use the notation: instead of: {U,t} -R-> {V1, V2, ...} You can expand this to include content negotiation by adding another argument to the R function, i.e. R:{U,T,C}->V. (U is the set of URI's, T is the set of points in time, C is the set of content headers, and V is the set of values, where each value is an entity body and a set of properties). Jim's point is that a "resource" is a function RMap:{T,C}->V. There are more arguments to the RMap function, i.e. the request body and all the other headers, but that doesn't affect the discussion. Let's let RES be the set of all such functions (i.e. each member of RES is some function that maps time and a content header into a value). There is another function, which Jim calls UMap, which maps URI's into resources, i.e. UMap:U->RES. In other (possibly more obscure :-) words, UMap is the result of currying the URI's out of the R function, a resource is a member of the range of UMap. The BIND method gives you control over the UMap function (as do MOVE and DELETE). The semantics of the DAV:resourceid property is that it is not affected by either time or content headers (or any other header). I.e.: for-all RMap in RES if RMap supports the BIND method then there-exists a string, s, such that for-all t,c in T,c the DAV:resoureid property of RMap(t,c) is equal to s. Actually, it is more like ({U,t} -R-> {V1, V2, ...}), where t is the current time, R is the resource, -R-> is a mapping function that has been implemented according to the semantics of resource R), and the range is a set of values representing that resource at time t. So, using your notation, I would re-write the full mapping as: {URI1, URI2, ... URIn} -UMap-> resource -RMap-> {V1, V2, ... Vm} Where UMap is the URI to resource mapping function, and RMap is the resource to value mapping function. I omit time since it's really tangential to our discussion, assuming that the entire set of mappings occurs at a given time t. From: "Larry Masinter" <LM@att.com> Neither of these notations captures content negotiation, and it isn't OK to remove 't'. The whole *point* is to understand what are the things that are stable over time and which things can change, and how. If you just look 'at an instant' then there's no meaningful way of distinguishing URLs from resources, and collapsing -UMap->. I'm guessing you want to make UMap vary more slowly and only with explicit operations (BIND and UNBIND) while RMap encompases all of the content negotiation & time varying behavior of resources without having any explicit operation modify the mapping. Received on Thursday, 10 February 2000 09:04:10 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 2 June 2009 18:43:54 GMT
{"url":"http://lists.w3.org/Archives/Public/w3c-dist-auth/2000JanMar/0281.html","timestamp":"2014-04-18T16:05:42Z","content_type":null,"content_length":"10641","record_id":"<urn:uuid:bb8f9fdb-c3f4-436e-a008-f6dbc02e1ed6>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/512e88fde4b02acc415ed5e6","timestamp":"2014-04-17T07:13:03Z","content_type":null,"content_length":"91379","record_id":"<urn:uuid:005c87ae-da84-4c25-b661-69f177c6de11>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Equation June 6th 2010, 08:47 PM #1 Junior Member May 2010 Can anyone work this hard question out for me??? 1) An object is thrown form a building and the height above the ground, y is given by the equation y=40 + 80t - 5t^2 where t is the time of flight(in seconds). b) What is the height after 4 seconds? c) At what times is the height 120m? For b) just plug in $t=4$ For c) solve the equation $120=40+80t-5t^2 \iff 5t^2-80t+80=0 \iff t^2-16t+16=0$ can you finish from here June 6th 2010, 09:00 PM #2
{"url":"http://mathhelpforum.com/algebra/148052-quadratic-equation.html","timestamp":"2014-04-16T16:04:03Z","content_type":null,"content_length":"33357","record_id":"<urn:uuid:7279b53e-613a-41b8-9b0d-792a5ae7543a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Rolling Resistance I came across an interesting little problem doing some work that I though I'd share. Typically, we think of three types of resistance for a wheel: • Static Friction • Kinetic Friction • Rolling Resistance I'm modeling the wheel of an airplane for a flight simulator. When the airplane is sitting still, it has some resistive force before it starts rolling. So I just found this force using the maximum coefficient of static friction for a tire (obtained from manufacturer). Turned out I got an insanely large number. Then I realized, <duh> that would be like having the airplane on full throttle (which it nearly turned out to require) and dragging the wheels across the runway. So the only option left was to model the friction using rolling resistance (which I am already doing when the wheels are in motion). My understanding was that rolling resistance was defined for rotation of the wheel (because my basic books on physics all just assume the wheel is turning). I looked at my vehicle dynamics book and realized the curves for rolling resistance as a function speed go all the way down to zero, and have a power relationship with speed f0+ 3.24*fs(V/100)^2.5. Anyways, the point is that "rolling resistance" is a bit of a tricky term to use. After I plugged in some values I got a frictional force of 25.5 lbs for a 2550lb aircraft. It seems like a very reasonable number. As a side note: All tires slip in the real world when you accelerate/break and that the coefficient of friction goes down as you increase the load. The point is, tires do a lot of non-intuitive stuff and the basic presentation of friction in most books is woeful. You have been warned.
{"url":"http://www.physicsforums.com/showthread.php?p=2245334","timestamp":"2014-04-16T10:30:11Z","content_type":null,"content_length":"59888","record_id":"<urn:uuid:48432ea4-9282-49d4-b6f0-ad5670e153fc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Unknown Dimensions of Parallelograms 10.2: Unknown Dimensions of Parallelograms Difficulty Level: Created by: CK-12 Practice Unknown Dimensions of Parallelograms Have you ever watched someone else work on a project? Jillian loves watching the other women work on their quilt squares. One of the ladies, Marie, came with her quilt pattern already cut out. The quilt square was designed to be filled with parallelograms and triangles. "How much material are you using for one parallelogram?" Jillian asked her. "I am using 18 square inches of material," Marie told Jillian. Jillian watched her measure the length of the base of the parallelogram and noticed that it was 6 inches long. "Hmmm," thought Jillian. "Now I can figure out the height of the parallelogram." Can you use this given information to figure out the height? This Concept is all about finding unknown dimensions of parallelograms. Pay close attention and you will know how to do this at the end of the Concept. We can also work to figure out a missing dimension if we have been given the area and another measurement. We can be given the area and the height or the area and the base. This is a bit like being a detective. You will need to work backwards to figure out the missing dimension. Let’s look at figuring out the base first. A parallelogram has an area of 48 square inches and a height of 6 inches. What is the measurement of the base? To figure this out, let’s look at what we know to do. The area of a parallelogram is found by multiplying the base and the height. If we are looking for the base or the height, we can work backwards by dividing. We divide the given area by the given height or given base. 48 $\div$$=$ The measurement of the base is 8 inches. This will work the same way if we are looking for the height. A parallelogram has an area of 54 square feet and a base of 9 feet. What is the height of the parallelogram? We start by working backwards. We get the area by multiplying, so we can take the area and divide by the given base measurement. 54 $\div$$=$ The measurement of the height is 6 feet. Practice a few of these on your own. Find the missing height or base using the given measurements. Example A Area = 25 square meters Base = 5 meters Solution: The height is 5 meters. Example B Area = 81 square feet Base = 27 feet Solution: The height is 3 feet. Example C Area = 36 square inches Height = 2 inches Solution: The base is 18 inches. Now back to Jillian and the quilt squares. Here is the original problem once again. Jillian loves watching the other women work on their quilt squares. One of the ladies, Marie, came with her quilt pattern already cut out. The quilt square was designed to be filled with parallelograms and triangles. "How much material are you using for one parallelogram?" Jillian asked her. "I am using 18 square inches of material," Marie told Jillian. Jillian watched her measure the length of the base of the parallelogram and noticed that it was 6 inches long. "Hmmm," thought Jillian. "Now I can figure out the height of the parallelogram." To figure this out, we have to divide the given area by the given base. This will give us the height. The height of the parallelogram is 3 inches. Here are the vocabulary words in this Concept. the space within the perimeter of a figure or place. Area often refers to the surface or covering, the middle of a figure. Area is measured in square units. a quadrilateral with two pairs of opposite congruent sides. a parallelogram with two pairs of opposite congruent sides and four 90 degree angles. Guided Practice Here is one for you to try on your own. The area of the parallelogram is 169 square feet. The length of the base is 13 inches. What is the height? To figure this out, we have to divide the given area by the length of the base. This will give us the height. 169 $\div$$=$ The height of the parallelogram is 13 inches. Video Review Here is a video for review. Khan Academy, Area of a Parallelogram Directions: Use the given area and other dimension to find the missing base or height. 1. Area = 22 sq. inches Base = 11 inches 2. Area = 50 sq. miles Base = 10 miles 3. Area = 48 sq. inches Base = 8 inches 4. Area = 30 sq. meters Base = 15 meters 5. Area = 45 sq. feet Height = 3 feet 6. Area = 88 sq. feet Height = 8 feet 7. Area = 121 sq. feet Height = 11 feet 8. Area = 160 sq. miles Height = 20 miles 9. Area = 90 sq. meters Height = 30 meters 10. Area = 100 sq. feet Base = 25 feet 11. Area = 120 sq. feet Base = 20 feet 12. Area = 144 sq. feet Base = 12 feet 13. Area = 200 sq. feet Base = 20 feet 14. Area = 400 sq. feet Base = 200 feet 15. Area = 360 sq. feet Base = 100 feet Files can only be attached to the latest version of Modality
{"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r4/section/10.2/","timestamp":"2014-04-20T01:03:04Z","content_type":null,"content_length":"131064","record_id":"<urn:uuid:eed62689-77d5-4036-ba70-a083eae6592e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Puzzle Input And Array c++ using a-star algorithm Puzzle Input And Array c++ using a-star algorithm I've developed a 8puzzle solver using a fixed width and height Below is my input condition and the pre-set goal state which i use for testing. for (i = 0; i < 9; i++) cin >> initstate[i/3][i%3]; for (i = 0; i < 9 ; i++){ if (i != 8){ endstate[i/3][i%3] = i+1; }else { endstate[i/3][i%3] = 0; Assume initstate and endstate have been defined as two dimensional arrays with dimensions[3][3]. I've used a-star search to find the shortest path f(n) = h(n) + g(n) A couple of questions 1. How do i go about getting the input from a text file ? Yes i am familiar with fstream and getline but for example if the text file is such: 1@(3,1) 6@(3,3) 0@(1,2) etc upto 16 values 1@(1,1) 2@(1,2) 3@(1,3) etc upto 16 values In this case the first line denotes its Height, second line its Width third line its initial state and fourth line its goal state. Also if the input is put into a hashtable(if someone suggests) do the parameters given such as (3,1) (3,3) which are the position values of each of the pre-fix'd numbers "1" @(3,1) if Not then what sort of string strip/trim is required and how do you specify delimiters with @(can be any number, can be anynumber) 2. Equally large problem im facing is how do i use the input Height, Width info from the text file to define/initialise my node & priority queue ? Currently they are all pre-set to function for a 3x3. I want to attempt a 4x4 & 5x3. Basically the size of the board, goal array need to be set at runtime. 3. I've tried substituting certain fixed values such as 3, with 4 and 9 with 16 in for loops and the respective i/dimension, i%dimension cases and the possible number of states 9factorial/8bits replaced by 16!factorial/15 bits respectively where it's required in the program and the base table (multiples of 9 or 16) Where am i going wrong ? what functions for a 3x3 should do the same for 4x4 with appropriately replaced values yes ? I was wondering if parity needs to be used if so then how ? I know the basic parity function. I've tried various test cases for the 3x3 and it performs perfectly. I've also seen a code online which uses template metamorphing but i cant quite get the hang of it (it works for a set dimension value, 4x4) Below is some sample code which gives an idea of my problem // My header file -> class node {private: // functions omitted node(valNum board[3][3]); //valnum can be double or valNum nodeBoard[3][3]; //unsigned short/long int // other functions omitted class Pqueue { // code omitted valNum startState[3][3]; //3x3 i need dynamic 5x3/4x4 char stateChecked[45360]; //9factorial/8bits Pqueue(valNum begin[3][3], valNum end[3][3]); // code omitted void Pqueue::doOpen(node *posn) // code omitted //marks position as visited stateChecked[pos/8] = stateChecked[pos/8] | check; If you require any further code to understand my question better, let me know. Show your attempt at reading in and parsing this data. Thats the point, if it was not like "2@(1,1)" and was "2, 4, 8, 0, 1" i would know how to parse that using a delimiter ',' I have looked at a function fopen FILE *fp; int iValue = -1, iValue2 = -1; if ((fp = fopen(FILENAME, "r")) == NULL) cout << FILENAME << " not found" << endl; fscanf(fp, "(%i", &iNumRows); fscanf(fp, "%i)\n", &iNumCols); std::vector<std::vector<int> > stdStartArray(iNumRows, std::vector<int>(iNumCols)); std::vector<std::vector<int> > stdEndArray(iNumRows, std::vector<int>(iNumCols)); //read in the initial array fscanf(fp, "("); for(unsigned int i = 0; i < iNumRows; i++) fscanf(fp, "("); for(unsigned int j = 0; j < iNumCols; j++) fscanf(fp, "%i", &iValue); if(iValue == iValue2) stdStartArray[i][j] = -1; fscanf(fp, "*"); iValue2 = iValue; stdStartArray[i][j] = iValue; // cout << iValue; fscanf(fp, ") "); fscanf(fp, ")\n"); //read in the final array fscanf(fp, "("); for(unsigned int i = 0; i < iNumRows; i++) fscanf(fp, "("); for(unsigned int j = 0; j < iNumCols; j++) fscanf(fp, "%i", &iValue); if(iValue == iValue2) stdEndArray[i][j] = -1; fscanf(fp, "*"); iValue2 = iValue; stdEndArray[i][j] = iValue; // cout << iValue; fscanf(fp, ") "); fscanf(fp, ")\n"); And i also get a file not found error though the given filename exists in the project folder Okay, since im using Xcode, the absolute path to file is required. Yet, im unable to make it work for the given specifications of the inputfile Why would you use fopen? That's C, not C++. You said you were familiar with fstream. Use it. Read in the line as a C++ string and parse it with the string member functions. Yeah, i'll try it out with fstream. I used fopen because i had used it a long time ago to parse data when coding in c. But let me see if i understand correctly, i should discard of "@(num,num)" for each instance of "num@(num,num)" for ex: Input file 1@(2,1) 0@(2,2) 7@(2,3) <---- So i keep 1 0 7 yeah ? 1@(2,1) 0@(2,2) 7@(2,3) <---- So i keep 1 0 7 yeah ? It's your data. You tell me. I assume you need all of it in some shape or form. That really is an ugly data format, though. If you can guarantee that the numbers in your file are single digit positive integers, you can easily put the lines into strings using getline and get the individual numbers as characters using the [] operator. Yeah, i asked my professor about it and he said we cant change it. Anyways i thought about this i read the file and instead of running a for loop to assign the array index its values i use the values in the brackets(minus 1 since array index starts at 0) and the value before @ is assigned to that array index. Again, my problem lie's with the very mode of inputting it. How do i parse the value before @ and then the values after @ in brackets with a ',' delimiter between them ? fscanf ? istream getline ? getchar ? figuring out how to do the algorithm was one thing but this has me stumped No negative numbers will be input. Only 0 to max 25. Single and double digits. You said you were familiar with fstream. It seems more correct to say that you are not familiar with it at all. :) At any rate, if you're allowed to assume that the data is perfect (you don't have to do any error checking on it) then it's actually very simple. I'm assuming that there are exactly width * height - 1 values in the 3rd and 4th line (since one square in the puzzle must be empty). #include <iostream> #include <fstream> using namespace std; int main() { ifstream f("puzzdat.txt"); int width, height, n, x, y, i, j; char c; // for eating chars f >> width >> height; cout << width << ", " << height << "\n\n"; for (i = 0; i < 2; i++) { for (j = 0; j < width * height - 1; j++) { f >> n >> c >> c; f >> x >> c; f >> y >> c; cout << n << " -> " << x << " : " << y << "\n"; cout << "\n"; Of course this just prints the values. As you say, you need to subtract one from the x and y values and load your data structure (a 2d array presumably) with the n values at position x, y. If there are going to be both single and double character digits, you are better of doing it with C or C++ streams. You can put much of it behind levels of abstraction though, which will also help coding the algorithm in a simpler manner. Make a class..say.. Foo, containing 3 values generated from input like "1@(2,1)". Overload the >> operator for it. Make another class..say..Bar, containing 2 numbers and an array of 16 Foo objects. Overload the >> operator for it with the help of the >> operator of Foo. Then your work with parsing is already done. If you've got a std::ifstream representing your input file, a simple >> operation does the whole parsing. You also get other benefits from it, like having classes representing a node containing the adjacency list (iirc) and another with a coordinate and a weighting value. #include <iostream> #include <fstream> #include <string> #include <stdlib.h> int main() { ifstream f("input.txt"); int array[6][6]; int array2[6][6]; int width = 0, height = 0, n, x, y, i,j ; char c; // for eating chars f >> width; f >> height; cout << width << ", " << height << "\n\n"; for (i = 0; i < width * height; i++) { f >> n >> c >> c; f >> y >> c; f >> x >> c; cout << n << " -> " << x << " : " << y << "\n"; array[y-1][x-1] = n; cout << "\n"; for (i = 0; i < width * height; i++) { f >> n >> c >> c; f >> y >> c; f >> x >> c; cout << n << " -> " << x << " : " << y << "\n"; array2[y-1][x-1] = n; cout << "\n"; for (i = 0; i < width; i++) { for (j = 0; j < height; j++){ cout << array[i][j] << " "; cout << endl; 0@(1,1) 1@(1,2) 3@(1,3) 4@(2,1) 2@(2,2) 5@(2,3) 7@(3,1) 8@(3,2) 6@(3,3) 1@(1,1) 2@(1,2) 3@(1,3) 4@(2,1) 5@(2,2) 6@(2,3) 7@(3,1) 8@(3,2) 0@(3,3) This configuration will work in my current algorithm function, any tips/suggestions for programming it in such a way that where i currently declare board or puzzle[3][3] in my main file and in my header file i can have it set to a pointer or vector pointer puzzle[HeightFromInput][WidthFromInput] so that it takes the height and width from the input file ? If you use std::vector you can grow the board data storage as you read the file in.
{"url":"http://cboard.cprogramming.com/cplusplus-programming/147809-puzzle-input-array-cplusplus-using-star-algorithm-printable-thread.html","timestamp":"2014-04-16T19:39:40Z","content_type":null,"content_length":"31157","record_id":"<urn:uuid:30e1f271-ca00-4c50-9dae-43f5f3100c1a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Burn ropes I know you can do it if I mean 45 minutes, but I mean 50 minutes. The implication of the problem would allow for only a few solutions. Lighting any non-endpoint of the two ropes would be arbitrary and unreliable, since the ropes burn at inconsistant rates. Hence, the only reliable place to burn the ropes is at the endpoints. And there are only 4 possible ends to burn. Since the only timing device is the rope, and the burn rate is arbitrary, the only reliable start/stop times are when particular ropes burn completely. Hence, there are certainly a managable finite number of possibilities. Let's go through them. Let's call them rope 1 (endpoints A & B) and rope 2 (endpoints C & D). Possibility I - Start by lighting A. When rope 1 burns out, 1 hour has elapsed, and we can move on to either lighting C or both C & D (note lighting just D is equivalent to just lighting C). Lighting C alone allows us to measure 2 hours total. Lighting C and D allows us to measure 1.5 hours total. Admittedly, we could also choose not to light C or D, and avoid using rope 2 entirely, with the result of 1 hour. Possibility II - Start by lighting A and B. When rope 1 burns out, 0.5 hours have elapsed, and we can move on to either lighting C or both C & D. Lighting C alone allows us to measure 1.5 hours total. Lighting C and D allows us to measure 1 hour total. Again, we could avoid using rope 2 at all, with the result of 0.5 hours. Possibility III - Start by lighting A and C. Unfortunately, the only measurable point after this is when both ropes burn out, which is after 1 hour, and there aren't any further ropes to burn. Possibility IV - Start by lighting A, B and C. We now have the option of lighting D when rope 1 burns out (after 0.5 hours), or not lighting it at all. If we light D after rope 1 burns out, we can measure 0.75 hours. If we do not light D at all, the only remaining measurement is 1 hour, which is when rope 2 burns out. Possibility V - Start by lighting A, B, C, and D. Again, we have no further options after we make this decision, and are forced into measuring exactly 0.5 hours, which is when both ropes burn out. Possibility VI - The empty set. Burn neither rope 1 nor rope 2, and we can measure 0 hours. And, that's it. 11 possibilities, where we can measure 0 hours, 0.5 hours, 0.75 hours, 1 hour, 1.5 hours, or 2 hours. Since none of these are 50 minutes, your solution must therefore be unreliable (aka arbitrary), or you're making futher assumptions that you're not telling us about the ropes, the fire, or one's ability to keep time. Hence, the best solution would be to measure out 0.75 hours (the closest to 50 minutes without going over), and then take your best guess as to when 5 minutes had elapsed beyond the 45 minutes.
{"url":"http://www.physicsforums.com/showthread.php?t=198570&page=2","timestamp":"2014-04-16T10:28:18Z","content_type":null,"content_length":"71340","record_id":"<urn:uuid:ab0c3bfc-c210-48b0-805c-1f4f2be23bb2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Twenty first British Mathematical Colloquium This was held at Birmingham: 26 - 28 March 1969 The enrolment was 489. The chairman was A M Macbeath and the secretary was H C Wilkie. Minutes of meetings, etc. are available by clicking on a link below General Meeting Minutes for 1969 Committee Meeting Minutes for 1969 The plenary speakers were: Dieudonné, J A Lie groups: classical, algebraic and formal Klingenberg, W Recent developments in Riemannian geometry Smale, S Global stability in dynamical systems The morning speakers were: Adams, J F Generalisations of the so-called Adams spectral sequence Atkin, A O L Some conjectures involving modular forms Beardon, A F Kleinian groups Duncan, J Relations between algebras and geometry in Banach algebras Fröhlich, A Formal Lie groups and arithmetic Lob, M H A model-theoretic characterisation of effective operations Offord, A C A survey of some applications of the theory of probability in analysis Rankin, R A Designs, difference sets and finite projective spaces Rourke, C P Embedded handle theory Sands, A D Primary abelian groups Singer, I M Operator theory and K-theory Thompson, J G Finite simple groups
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/BMC/1969.html","timestamp":"2014-04-21T12:19:33Z","content_type":null,"content_length":"2582","record_id":"<urn:uuid:3bad5bf2-95a2-401d-b946-d4ec23ff67b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Brazilian Journal of Physics Services on Demand Related links Print version ISSN 0103-9733 Braz. J. Phys. vol.38 no.3b São Paulo Sept. 2008 Investigating gluino production at the LHC C. Brenner Mariotto; M. C. Rodriguez Departamento de Física, Fundação Universidade Federal do Rio Grande Caixa Postal 474, CEP 96201-900, Rio Grande, RS, Brazil Gluinos are expected to be one of the most massive sparticles (supersymmetric partners of usual particles) which constitute the Minimal Supersymmetric Standard Model (MSSM). The gluinos are the partners of the gluons and they are color octet fermions, due this fact they can not mix with the other particles. Therefore in several scenarios, given at SPS convention, they are the most massive particles and their nature is a Majorana fermion. Therefore their production is only feasible at a very energetic machine such as the Large Hadron Collider (LHC). Being the fermion partners of the gluons, their role and interactions are directly related with the properties of the supersymmetric QCD (sQCD). We review the mechanisms for producing gluinos at the LHC and investigate the total cross section and differential distributions, making an analysis of their uncertainties, such as the gluino and squark masses, as obtained in several scenarios, commenting on the possibilities of discriminating among them. Keywords: Supersymmetric models; Supersymmetric partners of known particles Although the Standard Model (SM) [1], based on the gauge symmetry SU(3)[c] ⊗ SU(2)[L] ⊗ U(1)[Y] describes the observed properties of charged leptons and quarks it is not the ultimate theory. However, the necessity to go beyond it, from the experimental point of view, comes at the moment only from neutrino data. If neutrinos are massive then new physics beyond the SM is needed. Although the SM provides a correct description of virtually all known microphysical nongravitacional phenomena, there are a number of theoretical and phenomenological issues that the SM fails to address adequately [2]: • Hierarchy problem; • Electroweak symmetry breaking (EWSB); • Gauge coupling unification. The main sucess of supersymmetry (SUSY) is in solving the problems listed above. SUSY has also made several correct predictions [2]: • SUSY predicted in the early 1980s that the top quark would be heavy; • SUSY GUT theories with a high fundamental scale accurately predicted the present experimental value of sin^2 θ[W] before it was mesured; • SUSY requires a light Higgs boson to exist. Together these success provide powerful indirect evidence that low energy SUSY is indeed part of correct description of nature. Certainly the most popular extension of the SM is its supersymmetric counterpart called Minimal Supersymmetric Standard Model (MSSM) [3]. The main motivation to study this models, is that it provides a solution to the hierarchy problem by protecting the electroweak scale from large radiative corrections [4, 5]. Hence the mass square of the lightest real scalar boson has an upper bound given by where h, is expected lighter than Z at tree level (ε = 0). However, radiative corrections rise it to 130 GeV [6]. In the MSSM [3], the gauge group is SU(3)[c] ⊗ SU(2)[L] ⊗ U(1)[Y]. The particle content of this model consists in associate to every known quark and lepton a new scalar superpartner to form a chiral supermultiplet. Similarly, we group a gauge fermion (gaugino) with each of the gauge bosons of the standard model to form a vector multiplet. In the scalar sector, we need to introduce two Higgs scalars and also their supersymmetric partners known as Higgsinos. We also need to impose a new global U(1) invariance usually called R-invariance, to get interactions that conserve both lepton and baryon number (invariance). Other very popular extensions of SM are Left-Right symmetric theories [7], which attribute the observed parity asymmetry in the weak interactions to the spontaneous breakdown of Left-Right symmetry, i.e. generalized parity transformations. It is characterized by a number of interesting and important features [8]: 1. it incorporates Left-Right (LR) symmetry which leads naturally to the spontaneous breaking of parity and charge conjugation; 2. incorporates a see-saw mechanism for small neutrino masses. On the technical side, the left-right symmetric model has a problem similar to that in the SM: the masses of the fundamental Higgs scalars diverge quadratically. As in the SM, the Supersymmetric Left-Right Model (SUSYLR) can be used to stabilize the scalar masses and cure this hierarchy problem. Another, maybe more important raison d'etre for SUSYLR models is the fact that they lead naturally to R-parity conservation [9]. Namely, Left-Right models contain a B L gauge symmetry, which allows for this possibility [10]. All that is needed is that one uses a version of the theory that incorporates a see-saw mechanism [11] at the renormalizable level. The supersymmetric extension of left-right models [12, 13] is based on the gauge group SU(3)[c] ⊗ SU(2)[L] ⊗ SU(1)[R] ⊗ U(1)B-L. On the literature there are two different SUSYLR models. They differ in their SU(2)[R] breaking fields: one uses SU(2)[R] triplets [12] (SUSYLRT) and the other SU(2)[R] doublets [13] (SUSYLRD). Since we are interested in studying only the strong sector, which is the same in both models, the results we are presenting here hold in both models. As a result of a more detailed study, we have shown that the Feynman rules of the strong sector are the same in both MSSM and SUSYLR models [14]. The relevant Feynman rules for the gluino production - Gluino-Gluino-Gluon: - Quark-Quark-Gluor - Squark-Squark-Gluon: k[i,j] are the momentum of the incoming and outcommg squarks, respectively; - Quark-Squark-Gluino: L = The "Snowmass Points and Slopes" (SPS) [15] are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. The aim of this convention is reconstructing the fundamental supersymmetric theory, and its breaking mechanism, from the data. The points SPS 1-6 are Minimal Supergravity (mSUGRA) model, SPS 7-8 are gauge-mediated symmetry breaking (GMSB) model, and SPS 9 are anomaly-mediated symmetry breaking (mAMSB) model ([15-17]). Each set of parameters leads to different gluino and squark masses, wich are the only relevant parameters in our study, and are shown in Tab.(I). Gluino and squark production at hadron colliders occurs dominantly via strong interactions. Thus, their production rate may be expected to be considerably larger than for sparticles with just electroweak interactions whose production was widely studied in the literature [18, 19]. Since the Feynman rules of the strong sector are the same in both MSSM and SUSYLR models, the diagrams that contribute to the gluino production are the same in both models. In the present contribution we study the gluino production in pp collisions at LHC energies. To make a consistent comparison and for sake of simplicity, we restrict ourselves to leading-order (LO) accuracy, where the partonic cross-sections for the production of squarks and gluinos in hadron collisions were calculated at the Born level already quite some time ago [20]. The corresponding NLO calculation has already been done for the MSSM case [21], and the impact of the higher order terms is mainly on the normalization of the cross section, which could be taken into account here by introducing a K factor in the results here obtained [21]. The LO QCD subprocesses for single gluino production are gluon-gluon and quark-antiquark anihilation (gg → and →), and the Compton process qg →, as shown in Fig. 1. For double gluino production only the anihilation processes contribute, obviously. These two kinds of events could be separated, in principle, by analysing the different decay channels for gluinos and squarks [18, 19]. Incoming quarks (including incoming b quarks) are assumed to be massless, such that we have n[f] = 5 light flavours. We only consider final state squarks corresponding to the light quark flavours. All squark masses are taken equal to ^1. We do not consider in detail top squark production where these assumptions do not hold and which require a more dedicated treatment [22]. The invariant cross section for single gluino production can be written as [20] where [i,j] are the parton distributions of the incoming protons and θ and transverse momentum p[T], and ij → are then where m[d] are the masses of the final-state partons produced. The center-of-mass angle θ and the differential cross section above can be easilly written in terms of the pseudo-rapidity variable η = - ln tan(θ/2), which is one of the experimental observables. The total cross section for the gluino production can be obtained from above upon integration. In Fig.2 we present the LO QCD total cross section for gluino production at the LHC as a function of the gluino masses. We use the CTEQ6L [23], parton densities, with two assumptions on the squark masses and choices of the hard scale. The results show a strong dependence on the masses of gluinos and squarks, and also a larger cross section in the degenerated mass case, which agrees with the results presented at [18]. The search for gluinos and squarks (as well as other searches for SUSY particles) and the possibility of detecting them will depend on their real masses. We use the SPS values from Table I and proceed to the calculation of differential distributions for producing gluinos in all presented scenarios. From now on we restrict ourselves to the production of two gluinos, picking only the anihilation processes as explained above. The calculation of producing a single gluino (including the Compton process) is done in a more detailed publication[14]. The results obtained will show the possibility of discriminating among the different SPS scenarios. In Figs.3 and 4 we present the transverse momentum and pseudorapidity distributions for double gluino production at LHC energies. The results show a similar behavior of the p[T] and η dependencies in all scenarios, but a huge diference in the magnitude for different scenarios - SPSla gives the bigger values, SPS9 the smallest one. Also, we find very close values for SPSlb, SPS3 (mSUGRA) and SPS7 (GMSB), which makes difficult to discriminate between these mSUGRA and GMSB models. The same occurs for SPS5 and SPS6 (both mSUGRA). To conclude, we have investigated gluino production at the LHC, which might discover supersymmetry over the next years. Gluinos are color octet fermions and play a major role to understanding sQCD. Because of their large mass as predicted in several scenarios, up to now the LHC is the only possible machine where they could be found. Regarding the strong sector, the Feynman rules are the same for both MSSM and SUSYLR models. Therefore, our results for gluino production are equal in both models. Besides, our results depend on the gluino and squark masses and no other SUSY parameters. Since the masses of gluinos come only from the soft terms, measuring their masses can test the soft SUSY breaking approximations. We have considered all the SPS scenarios and showed the corresponding differences on the magnitude of the production cross sections. From this it is easy to distinguish mAMSB from the other scenarios. However, it is not so easy to distinguish mSUGRA from GMSB depending on the real values of masses of gluinos and squarks (if SPS1b and SPS7, provided the gluino and squark masses are almost similar in these two cases). For the other cases, such discrimination can be done. This work was partially financed by the Brazilian funding agency CNPq, CBM under contract number 472850/2006-7, and MCR under contract number 309564/2006-9. [1] S. L. Glashow, Nucl. Phys. 22, 579 (1961); [ Links ] S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967); [ Links ] A. Salam in Elementary Particle Theory: Relativistic Groups and Analyticity, Nobel Symposium N8 (Alquivist and Wilksells, Stockolm, 1968); [ Links ] S. L. Glashow, J.Iliopoulos and L.Maini, Phys. Rev. D 2, 1285 (1970). [ Links ] [2] D. J. H. Chung, L. L. Everett, G. L. Kane, S. F. King, J. D. Lykken, and L. T. Wang, Phys. Rept. 407, 1 (2005). [ Links ] [3] H. E. Haber and G. L. Kane, Phys. Rep. 117, 75 (1985). [ Links ] [4] K. Inoue, A. Komatsu, and S. Takeshita, Prog. Theor. Phys. 68, 927 (1982). [ Links ] [5] K. Inoue, A. Komatsu, and S. Takeshita, Prog. Theor. Phys. 70, 330 (1983). [ Links ] [6] H. E. Haber, Eur. Phys. J. C 15, 817 (2000). [ Links ] [7] J. C. Pati and A. Salam, Phys. Rev. D 10, 275 (1974); R. N. Mohapatra and J. C. Pati, ibid D 11, 566; 2558 (1975); G. Senjanović and R.N. Mohapatra, ibidD 12,1502 (1975). [ Links ] For details see G. Senjanović , Nucl. Phys. B 153, 334 (1979). [ Links ] [8] A. Melfo and G. Senjanović , Phys. Rev. D 68, 035013 (2003). [ Links ] [9] C. S. Aulakh, A. Melfo, and Goran Senjanović , Phys. Rev. D 57, 4174 (1998). [ Links ] [10] R. N. Mohapatra, Phys. Rev. D 34, 3457 (1986); [ Links ] A. Font, L. E. Ibanez, and F. Quevedo, Phys. Lett. B 228, 79 (1989); [ Links ] L. Ibáñez and G. Ross, Phys. Lett. B 260, 291 (1991); [ Links ] S. P. Martin, Phys. Rev. D 46, 2769 (1992). [ Links ] [11] M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, eds. P. van Niewenhuizen and D. Z. Freedman (North Holland 1979); [ Links ] T. Yanagida, in Proceedings of Workshop on Unified Theory and Baryon number in the Universe, eds. O. Sawada and A. Sugamoto (KEK 1979); [ Links ] R. N. Mohapatra and G. Senjanović , Phys. Rev. Lett. 44, 912 (1980). [ Links ] [12] K. Huitu, J. Maalampi, and M. Raidal, Nucl. Phys. B 420, 449 (1994); [ Links ] C. S. Aulakh, A. Melfo, and G.Senjanovic, Phys. Rev. D 57, 4174 (1998); [ Links ] G. Barenboim and N. Rius, Phys. Rev. D 58, 065010 (1998); [ Links ] N. Setzer and S. Spinner, Phys. Rev. D 71, 115010 (2005). [ Links ] [13] K. S. Babu. B. Dutta, and R. N. Mohapatra, Phys.Rev. D 65, 016005 (2002). [ Links ] [14] C. B. Mariotto and M. C. Rodriguez, arXiv:0805.2094 [hep-ph] [ Links ]. [15] B. C. Allanach etal, Eur. Phys. J. C 25, 113 (2002). [ Links ] [16] Nabil Ghodbane, Hans-Ulrich Martyn, hep-ph/0201233. [ Links ] [17] http://spa.desy.de/spa/ [ Links ] [18] H. Baer and X. Tata, Weak Scale Supersymmetry, Cambridge University Press, United Kindom, (2006). [ Links ] [19] M. Dress, R. M. Godbole, and P. Roy, Theory and Phenomenology of Sparticles, World Scientific Publishing Co. Pte. Ltd., Singapore, (2004). [ Links ] [20] S. Dawson, E. Eichten, and C. Quigg, Phys. Rev. D 31, 1581 (1985). [ Links ] [21] W. Beenakker, R. Höpker, M. Spira, and P. M. Zerwas, Nucl. Phys. B 492, 51 (1997). [ Links ] [22] W. Beenakker, M. Krämer, T. Plehn, M. Spira, and P. M. Zerwas, Nucl. Phys. B 515, 3 (1998). [ Links ] [23] J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky, and W. K. Tung, JHEP 0207, 012 (2002). [ Links ] (Received on 14 April, 2008) 1 L-squarks and R-squarks are therefore mass-degenerate and experimentally indistinguishable.
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332008000400024&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-17T05:28:35Z","content_type":null,"content_length":"54187","record_id":"<urn:uuid:5ef8c807-fa2f-4b45-8c78-5f20eca0b1c6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-svn] [numpy/numpy] 332d62: ENH: Improve accuracy of numpy.gradient at edges GitHub noreply@github.... Sat Sep 7 13:06:11 CDT 2013 Branch: refs/heads/master Home: https://github.com/numpy/numpy Commit: 332d628744a0670234585053dbe32a3e82e0c4db Author: danieljfarrell <danieljfarrel@me.com> Date: 2013-09-07 (Sat, 07 Sep 2013) Changed paths: M numpy/lib/function_base.py M numpy/lib/tests/test_function_base.py Log Message: ENH: Improve accuracy of numpy.gradient at edges * numpy.gradient has been enhanced to use a second order accurate one-sided finite difference stencil at boundary elements of the array. Second order accurate central difference are still used for the interior elements. The result is a fully second order accurate approximation of the gradient over the full domain. * The one-sided stencil uses 3 elements each with a different weight. A forward difference is used for the first element, dy/dx ~ -(3.0*y[0] - 4.0*y[1] + y[2]) / (2.0*dx) and backwards difference is used for the last element, dy/dx ~ (3.0*y[-1] - 4.0*y[-2] + y[-3]) / (2.0*dx) * Because the datetime64 datatype cannot be multiplied a view is taken of datetime64 arrays and cast to int64. The gradient algorithm is then applied to the view rather than the input array. * Previously no dimension checks were performed on the input array. Now if the array size along the differentiation axis is less than 2, a ValueError is raised which explains that more elements are needed. If the size is exactly two the function falls back to using a 2 point stencil (the old behaviour). If the size is 3 and above then the higher accuracy methods are used. * A new test has been added which validates the higher accuracy. Old tests have been updated to pass. Note, this should be expected because the boundary elements now return different (more accurate) Commit: 089cc017cdc0b8105d40d74eae15539b1e309e01 Author: Charles Harris <charlesr.harris@gmail.com> Date: 2013-09-07 (Sat, 07 Sep 2013) Changed paths: M numpy/lib/function_base.py M numpy/lib/tests/test_function_base.py Log Message: Merge branch 'gradient' * gradient: ENH: Improve accuracy of numpy.gradient at edges Compare: https://github.com/numpy/numpy/compare/7679c14ab9b2...089cc017cdc0 More information about the Numpy-svn mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-svn/2013-September/005857.html","timestamp":"2014-04-18T19:08:47Z","content_type":null,"content_length":"5653","record_id":"<urn:uuid:5f0474b7-2241-47ee-bb00-48edc35c65ed>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Prism Permutations March 28th 2013, 02:15 PM #1 Mar 2013 Prism Permutations I have been trying to solve this problem for the past three evenings and I cannot understand the notation. 'Describe geometrically the symmetries of the prism represented in cycle form by permutations (14)(23) and (25). The problem I have is to understand whether it is one permutation or a composition of permutations, so firs (14)(23) would be a reflection in the horizontal plane and then turned by pi/3. but what is confusing for me is that number 2 is here twice, so I am assuming it is a composition. Pleas advise if you can. l\ 1 /l l \ / l - back 5 l 2|3| \ l / \ l / \ base 4 Last edited by PhiPhi12; March 28th 2013 at 02:18 PM. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-algebra/215879-prism-permutations.html","timestamp":"2014-04-19T10:14:14Z","content_type":null,"content_length":"29003","record_id":"<urn:uuid:0e309b18-c6ec-4f8d-91d0-1472ac54a533>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] matrix default to column vector? Robert Kern robert.kern@gmail.... Sun Jun 7 14:08:29 CDT 2009 On Sun, Jun 7, 2009 at 07:20, Tom K. <tpk@kraussfamily.org> wrote: > Olivier Verdier-2 wrote: >> There would be a much simpler solution than allowing a new operator. Just >> allow the numpy function dot to take more than two arguments. Then A*B*C >> in >> matrix notation would simply be: >> dot(A,B,C) >> with arrays. Wouldn't that make everybody happy? Plus it does not break >> backward compatibility. Am I missing something? > That wouldn't make me happy because it is not the same syntax as a binary > infix operator. Introducing a new operator for matrix multiply (and > possibly matrix exponentiation) does not break backward compatibility - how > could it, given that the python language does not yet support the new > operator? > Going back to Alan Isaac's example: > 1) beta = (X.T*X).I * X.T * Y > 2) beta = np.dot(np.dot(la.inv(np.dot(X.T,X)),X.T),Y) > With a multiple arguments to dot, 2) becomes: > 3) beta = np.dot(la.inv(np.dot(X.T, X)), X.T, Y) > This is somewhat better than 2) but not as nice as 1) IMO. 4) beta = la.lstsq(X, Y)[0] I really hate that example. > Seeing 1) with @'s would take some getting used but I think we would adjust. > For ".I" I would propose that ".I" be added to nd-arrays that inverts each > matrix of the last two dimensions, so for example if X is 3D then X.I is the > same as np.array([inv(Xi) for Xi in X]). This is also backwards compatible. > With this behavior and the one I proposed for @, by adding preceding > dimensions we are allowing doing matrix algebra on collections of matrices > (although it looks like we might need a new .T that just swaps the last two > dimensions to really pull that off). But a ".I" attribute and its behavior > needn't be bundled with whatever proposal we wish to make to the python > community for a new operator of course. I am vehemently against adding .I to ndarray. I want to *discourage* the formation of explicit inverses. It is almost always a very wrong thing to do. Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-June/043176.html","timestamp":"2014-04-16T10:50:22Z","content_type":null,"content_length":"5393","record_id":"<urn:uuid:e77b325e-3f68-48d4-ae44-00618b9adaac>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilities in the World of Darkness In the World of Darkness roleplaying games a player resolves an action by rolling a number of ten-sided dice. All dice which roll greater than or equal to a given difficulty value are counted as successes, and these are individually cancelled out by dice which roll ones. If the resulting total is positive, then the action succeeds. If zero or negative, then the action fails. A special case of failure occurs when all dice are lower than the difficulty value and at least one die is a 1; when this happens, the action is said to have been botched. What are the probabilities of a success, a failure, and a botch? Consider a single roll of an $s$-sided die when the difficulty value is $d$. We are concerned with three outcomes: 1. the roll is 1 2. the roll is greater than 1 but less than $d$ 3. the roll is at least $d$. These occur with the following probabilities: \begin{align*} p_1 &= \frac{1}{s} \\ p_2 &= \frac{d-2}{s} \\ p_3 &= \frac{s-d+1}{s}. \end{align*} For $n$ rolls, let $N_i$ denote the number of times outcome $i$ occurs. Then the probability that $N_1$, $N_2$ and $N_3$ will have particular values $n_1$, $n_2$ and $n_3$ has a multinomial distribution, $$P(N_1=n_1, N_2=n_2, N_3=n_3) = \frac{n!}{n_1!n_2!n_3!}\; p_1^{n_1} p_2^{n_2} p_3^{n_3}.\qquad(1)$$ An action succeeds if $N_3>0$ and $N_1<N_3$. Let $r$ be the number of successes resulting from a roll of $n$ dice. If $N_1$ ones were rolled, then $N_3=N_1+r$. Since $N_1+N_2+N_3=n$, it follows that $N_2=n-2N_1-r$, and $N_1$ can vary from 0 to Thus the probability of getting exactly $r$ successes is \begin{equation*} P(N_3-N_1 = r) = \sum_{i=0}^{(n-r)/2} P(N_1=i, N_2=n-2i-r, N_3=i+r), \end{equation*} which can be found by way of Equation (1). The probability of getting at least $r$ successes is therefore \begin{align*} P(N_3-N_1 \geq r) &= \sum_{j=r}^n P(N_3-N_1 = j) \\ &= \sum_{j=r}^n \sum_{i=0}^{(n-r)/2} P(N_1=i, N_2=n-2i-j, N_3= i+j). \end{align*} An action fails if $N_3=0$ and $N_1=0$, or if $N_3>0$ and $N_1\geq N_3$. In the first case, if $N_1=N_3=0$, then $N_2=n$. This contributes $P(N_1=0,N_2=n,N_3=0)$ to the total probability. In the second case, let $j$ be the difference between the number of ones and the number of potential successes. If $N_3$ dice rolled at least $d$, then $N_1=N_3+j$. Since $N_1+N_2+N_3=n$, it follows that $N_2=n-2N_3-j$, and, while $j$ varies from $0$ to $n-1$, $N_3$ can vary from $1$ to $(n-j)/2$. The probability of failure is therefore \begin{align*} & P(N_1=0, N_2=n, N_3=0) \\ +& \sum_{j=0}^{n-1} \sum_{i=1}^{(n-j)/2} P(N_1=i+j, N_2=n-2i-j, N_3=i). \end{align*} An action is botched if $N_3=0$ and $N_1>0$. Here $N_2=n-N_1$, and $N_1$ can vary from $1$ to $n$. The probability of a botch is therefore \begin{equation*} \sum_{i=1}^n P(N_1=i, N_2=n-i, N_3=0). \end{equation*} Click here for tabulated probabilities.
{"url":"http://www.axiscity.hexamon.net/users/isomage/rpgmath/wod/","timestamp":"2014-04-16T22:03:51Z","content_type":null,"content_length":"8620","record_id":"<urn:uuid:46e9f65c-cde8-40d3-a8e4-c1660bbf30ea>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
A073531 - OEIS A073531 Number of n-digit positive integers with all distinct digits. 4 9, 81, 648, 4536, 27216, 136080, 544320, 1632960, 3265920, 3265920, 0, 0, 0 (list; graph; refs; listen; history; text; internal format) OFFSET 1,1 COMMENTS For any base b the number of distinct-digit numbers is finite. For base 10, the maximal distinct-digit number is 9876543210; for any larger number at least two digits coincide. The number of distinct-digit primes is also finite, see A073532. LINKS Table of n, a(n) for n=1..13. Eric Weisstein's World of Mathematics, Digit FORMULA a(n) = 9*9!/(10-n)!. EXAMPLE a(3)=648 because there are 648 three-digit integers with distinct digits. MATHEMATICA Table[9*9!/(10-n)!, {n, 10}] CROSSREFS Cf. A073532. Sequence in context: A213297 A206728 A206857 * A206694 A125910 A171283 Adjacent sequences: A073528 A073529 A073530 * A073532 A073533 A073534 KEYWORD base,nonn AUTHOR Zak Seidov, Aug 29 2002 STATUS approved
{"url":"http://oeis.org/A073531","timestamp":"2014-04-16T19:25:42Z","content_type":null,"content_length":"15027","record_id":"<urn:uuid:6b3101fd-9341-4a6d-9beb-69d797cc6d38>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Cantor's absurdity, once again, why not? Replies: 77 Last Post: Mar 19, 2013 11:02 PM Messages: [ Previous | Next ] Re: Cantor's absurdity, once again, why not? Posted: Mar 15, 2013 9:18 AM david petry <david_lawrence_petry@yahoo.com> writes: > On Thursday, March 14, 2013 7:35:13 PM UTC-7, Jesse F. Hughes wrote: >> Okay, so Fermat's last theorem is, in your view, a properly >> mathematical statement, while it's negation doesn't belong in >> mathematics. > Why don't you tell me where I can look up the definition of > "properly mathematical statement", and then I'll get back to you on > that. Or threaten to quit "debating" with me. That would be nice > too. I assumed that this relationship between "falsifiability" and mathematics allowed one to distinguish non-mathematical claims from mathematical claims. If not, what role does falsifiability play? In science, it distinguishes scientific hypotheses from non-scientific. I also assumed that you aimed to spark discussion of your ideas by posting them here. I didn't realize you instead anticipated that on this, the 97th round of the same ol' shit, everyone on sci.math would finally meekly agree with you and the old guard would be overthrown. So, I guess I was wrong twice. My response should have been, "Why, yes, David, that's very insightful! I agree! Let's kick them evil Cantorians out of our beautiful ivory towers." So, let's pretend I said that, if that's what you really want. Jesse F. Hughes "If the car stops and you're not getting out, then you have to start it again." -- Quincy P. Hughes (age 3) on his father's skills with a manual transmission. Date Subject Author 3/14/13 Cantor's absurdity, once again, why not? David Petry 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? David Petry 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? David Petry 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/17/13 Re: Cantor's absurdity, once again, why not? Shmuel (Seymour J.) Metz 3/17/13 Re: Cantor's absurdity, once again, why not? ross.finlayson@gmail.com 3/18/13 Re: Cantor's absurdity, once again, why not? fom 3/18/13 Re: Cantor's absurdity, once again, why not? Shmuel (Seymour J.) Metz 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/14/13 Re: Cantor's absurdity, once again, why not? harold james 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? Jesse F. Hughes 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/14/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? David Petry 3/15/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/15/13 Re: Cantor's absurdity, once again, why not? Virgil 3/15/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/15/13 Re: Cantor's absurdity, once again, why not? Virgil 3/15/13 Re: Cantor's absurdity, once again, why not? fom 3/16/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/16/13 Re: Cantor's absurdity, once again, why not? FredJeffries@gmail.com 3/16/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/16/13 Re: Cantor's absurdity, once again, why not? Virgil 3/16/13 Re: Cantor's absurdity, once again, why not? fom 3/16/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/16/13 Re: Cantor's absurdity, once again, why not? Virgil 3/16/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/16/13 Re: Cantor's absurdity, once again, why not? Virgil 3/17/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/19/13 Re: Cantor's absurdity, once again, why not? Virgil 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? fom 3/19/13 Re: Cantor's absurdity, once again, why not? Virgil 3/16/13 Re: WM's absurdity, once again, why not? Virgil 3/17/13 Re: Cantor's absurdity, once again, why not? fom 3/14/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? Jesse F. Hughes 3/15/13 Re: Cantor's absurdity, once again, why not? mueckenh@rz.fh-augsburg.de 3/15/13 Re: Cantor's absurdity, once again, why not? Virgil 3/14/13 Re: Cantor's absurdity, once again, why not? David Petry 3/14/13 Re: Cantor's absurdity, once again, why not? Jesse F. Hughes 3/14/13 Re: Cantor's absurdity, once again, why not? David Petry 3/14/13 Re: Cantor's absurdity, once again, why not? Jesse F. Hughes 3/15/13 Re: Cantor's absurdity, once again, why not? David Petry 3/15/13 Re: Cantor's absurdity, once again, why not? Jesse F. Hughes 3/15/13 Re: Cantor's absurdity, once again, why not? David Petry 3/15/13 Re: Cantor's absurdity, once again, why not? Virgil 3/15/13 Re: Cantor's absurdity, once again, why not? fom 3/15/13 Re: Cantor's absurdity, once again, why not? fom 3/15/13 Re: Cantor's absurdity, once again, why not? fom 3/15/13 Re: Cantor's absurdity, once again, why not? Jesse F. Hughes 3/14/13 Re: Cantor's absurdity, once again, why not? ross.finlayson@gmail.com
{"url":"http://mathforum.org/kb/message.jspa?messageID=8639990","timestamp":"2014-04-17T19:19:18Z","content_type":null,"content_length":"109627","record_id":"<urn:uuid:3bd17270-2afd-4b12-bd4c-e50d3a3862f0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: more questions about changing the distribution [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: more questions about changing the distribution From "Lachenbruch, Peter" <Peter.Lachenbruch@oregonstate.edu> To <statalist@hsphsun2.harvard.edu> Subject st: RE: more questions about changing the distribution Date Tue, 18 Nov 2008 14:25:22 -0800 I am confused about the intent of this message. Forcing the distribution to be bimodal seems to be a consequence of the distribution. Do you want a mixture of distributions? I suspect I'm reacting to wording not exactly what I'm used to. You seem to have a mixture of distributions. Do you want to estimate the mixing parameter and the means and variances of the components? Or is there something else here that I'm missing? Peter A. Lachenbruch Department of Public Health Oregon State University Corvallis, OR 97330 Phone: 541-737-3832 FAX: 541-737-4001 -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Linn Renée Naper Sent: Tuesday, November 18, 2008 7:37 AM To: statalist@hsphsun2.harvard.edu Subject: st: more questions about changing the distribution As some of you probably already noticed, I am working on a distribution of prices trying to force the distribution into being bimodal (two price peaks instead of one). Well, below is the codes I've been using so far. sum mip ret list gen u = (mip - `r(mean)')/`r(sd)' local p = -.3 local sd1 `r(sd)' local sd2 0.9*`r(sd)' local mu1 `r(mean)' local mu2 1.1*`r(mean)' gen e = u * cond(u < `p', `sd1', `sd2') + cond(u < `p', `mu1', `mu2') Mip is the original price, and I am using this distribution to generate a standardized variable u, which I then transform into a new variable with a bimodal distribution. My problem is that when imposing different means and sd for the new distribution I very quickly seem to end up with a "gap" in the distribution (intervals where no prices lie, obviously Related to the defined p). I want some distance between the two peaks (the two means defined). In the example below I reduce sd2 with 10 percent and increases the mean2 by only 10 percent. Increasing the mean by more results in a larger gap. Here p=-0.3, which is equal to the p25 in the generated u. (meaning I want 25 percent of the sample to vary around the lower peak, this can of course be changed as well). I think maybe what I need is to impose a third condition for the Observations for example between p25 and p50 to avoid having the gap. By looking at the codes, can anyone see how this is possible? Or, maybe there is a better way to all this? * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-11/msg00811.html","timestamp":"2014-04-17T21:38:59Z","content_type":null,"content_length":"8142","record_id":"<urn:uuid:3d544649-eb3a-474c-a96e-950af91097db>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] 2-Y-axes on same plot Joe Trubisz jtrubisz at mac.com Wed Dec 10 15:35:57 CET 2008 Is this possible in R? I have 2-sets of data, that were collected simultaneously using 2- different data acquisition schemes. The x-values are the same for both. The y-values have different ranges (16.4-37.5 using one method, 557-634 using another). In theory, if you plot both plots on top of each other, the graphs should overlap. The problem I'm having is trying to have to different sets of y-values appear in the same graph, but scaled in the same vertical space. I've seen this done in publications, but not sure if it can be done in R. Any suggestions would be appreciated. More information about the R-help mailing list
{"url":"https://stat.ethz.ch/pipermail/r-help/2008-December/182195.html","timestamp":"2014-04-19T14:31:53Z","content_type":null,"content_length":"3092","record_id":"<urn:uuid:6995b6b4-d52e-4ebb-b195-f3b05a55b67e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Dayana on Thursday, July 21, 2011 at 8:22pm. find the following product 7z (3z^3 - 2z+ 4) • math algebra - mac, Thursday, July 21, 2011 at 9:17pm 7z (3z^3-2z+4)= 21z^4-14z^2+28z i could show you much better vertically, but i can'tmake the curser go where i want nor can i print the numbers on the line that i want, but you simply mult each factor in each term, in this case one term, by each factor in every other term one time only. Related Questions algebra - Short answer. find the following product. -7z(-8z^3+3z^2+8) algebra - short answer. find the following product -7z(-8z^3+3z^2+8) algebra - z-[2z-[3z-(4z-5z)-6z-7z)-8z algebra - simplify z-(2z-[3z-(4z-5z)-6z0-7z]-8z Algebra - 9z-42z-72/3z^2+28z+32divided by 2z^2+10z-48/z^2-9z+18 Can someone help... algebra - Need help showing me the steps without using the carrot function: Find... algebra - 1 A + (B + C)= (A + B) + C associative scalar quantity dot product 2 2... Differential equation- Math - Find the general solution of the following system ... math - How do I simplify the following expression using the distributive ... Algebra* - Short Answer. Find the following product: -3z(-z^2-9)
{"url":"http://www.jiskha.com/display.cgi?id=1311294136","timestamp":"2014-04-18T19:02:52Z","content_type":null,"content_length":"8299","record_id":"<urn:uuid:84ebe84c-8b68-4978-9193-52a92c4841d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
function of y: Keith Wood's SVG Jquery SitePoint Evangelist Join Date Mar 2011 Bellingham, WA 1 Post(s) 0 Thread(s) function of y: Keith Wood's SVG Jquery I've been using Keith Wood's fantastic SVG Jquery plug in for a math application and it suits almost all of my needs to plot mathematical functions. One thing that I need, which I'm not sure if it can do, however, is to create functions of y instead of functions of x; in particular within the plotting mechanism, I'd like to be able plot "x=3" or "x=5" (vertical lines). Any help would be appreciated. Unobtrusively zen Join Date Jan 2007 Christchurch, New Zealand 83 Post(s) 3 Thread(s) You can use Wolfram|Alpha: Computational Knowledge Engine for that. If you have for example: y = x^3-1 you can put in "inverse x^3-1" to get the appropriate formula, which is the third root of (x + 1) which is (x + 1)^(1/3) So, switching the variables, we have: y = x^3 - 1 x = (y + 1)^(1/3) SitePoint Evangelist Join Date Mar 2011 Bellingham, WA 1 Post(s) 0 Thread(s) Thanks for the quick reply! I'm actually not trying to figure out how to compute the inverse of a function (with this much, I'm set!). What I'd like to do is to be able to sketch functions of y as opposed to functions of x. For example, x=y^2 isn't a function of x since each x is associated with two different y values: (1,1) and (1,-1). In addition, I'd love to be able to sketch an equation such as x=4, which is a vertical line (again...sorry if I don't know how much of a math background you have!) but again, not a function of x since each x has an infinite number of y's associated with it. Basically, without actually knowing how javascript produces its functions, my guess is that it "plugs in" lots of x's so that the graph looks smooth, moving across the x axis. What I'm hoping to do is to be able to move across the y axis instead to spit out an appropriate graph. Anyway, I appreciate that this is a bit of a "mathy specific" question so if there's isn't an "easy" solution out there, I'd very much understand. However, any other thoughts would be Unobtrusively zen Join Date Jan 2007 Christchurch, New Zealand 83 Post(s) 3 Thread(s) What I'd like to do is to be able to sketch functions of y as opposed to functions of x. For example, x=y^2 isn't a function of x since each x is associated with two different y values: (1,1) and (1,-1). In addition, I'd love to be able to sketch an equation such as x=4, which is a vertical line (again...sorry if I don't know how much of a math background you have!) but again, not a function of x since each x has an infinite number of y's associated with it. That sounds like something that Mathematica is capable of doing, which also supports SVG for the web. You might find though that only the really big players are capable of performing the types of inverse functions that you require. SitePoint Evangelist Join Date Mar 2011 Bellingham, WA 1 Post(s) 0 Thread(s) You're absolute correct in this regard! But, the "cool" thing that I'm hoping to do is have the students explore different types of "non-functions"; in other words, I give them x=(input box)y ^2 and by typing in different numbers and some javascript magic it will produce the graph for them on the fly. I've been successful at integrating this with Keith's program for functions of x but not for functions of y. Anyway, I don't want to take up too much of your time on this but if you had an extra minute and checked out this link, then maybe it would be clearer what I was hoping to do with "non-functions" by seeing how I was able to integrate with actual functions of x. Thanks for giving this problem some thought. Unobtrusively zen Join Date Jan 2007 Christchurch, New Zealand 83 Post(s) 3 Thread(s) That reminds me of something I saw recently. Khan Academy started with video tutorials, but they now provide web-based learning of a wide range of math-based topics. For example, inverse functions where you can also turn on a scratchpad to write your workings and notes. Here's some info about how they benefit students and teachers too. I know that many schools in the US are using them, and there's even a TED presentation with Bill Gates about them. So, are resources such as that worth considering? SitePoint Evangelist Join Date Mar 2011 Bellingham, WA 1 Post(s) 0 Thread(s) I checked out a bunch of the links and they definitely give me some ideas and some things to think about. Thanks so much. PS Love the TED talks!
{"url":"http://www.sitepoint.com/forums/showthread.php?752123-function-of-y-Keith-Wood-s-SVG-Jquery&mode=hybrid","timestamp":"2014-04-19T12:28:18Z","content_type":null,"content_length":"77231","record_id":"<urn:uuid:c57f2087-4ee3-4a63-a073-df645eb3fcc9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
how do i solve question 1? and is the answer for qusetion 2 correct? July 24th 2010, 12:03 AM #1 how do i solve question 1? and is the answer for qusetion 2 correct? 1. $D_p : {(-\infty , -3)}\cup{(-3,4)}\cup{(4, +\infty)}$ 2: $x=2$ that's region where ur function is defined.... u probably don't mark it as $D_p$ sorry meaning that ur function is not defined in -3 and 4 ... and everywhere else it is... Last edited by yeKciM; July 24th 2010 at 01:11 AM. Reason: my mistake :D No! yeKciM, your first answer was correct. $\frac{x- 4}{x^2- x- 12}= \frac{x-4}{(x-4)(x+3)}$ is not defined for x= 4 or x= -3. $\frac{x- 4}{(x- 4)(x+ 3)}= \frac{1}{x+3}$ only for $xe 4$. If x= 4, $\frac{x- 4}{(x- 4)(x+ 3)}= \frac{0}{0}$ which is indeterminant while $\frac{1}{x+ 3}= \frac{1}{7}$. yes i know but when u plot it ... it's define in x=4.... just for -3 it's not for the rational function $\displaystyle f(x) = \frac{x-4}{(x+3)(x-4)}$ , there is a vertical asymptote at $x = -3$ and a point discontinuity at $x = 4$. well i think its just -3 reason why i think that is, if we look at function like it would mean that it's not defined in $x=4 , x=-3$ and that it has stationary point in $x=4$ but that isn't true ... it hasn't any stationary point's and it's defined in $x=4$ as u see from plot that i posted .... witch is actually function $f_{(x)}=\frac {1}{x+3}$ lol... didn't have this much of polemics for such a simple function a long time P.S. u shouldn't be just interested in "which choice is it a,b,c or d??"... it's more important to u to understand how and why does i say this, or HelsofIvy that, or skeeter... rather then be just interested in right answer... and soon have more problems (with same type) which u can't solve Last edited by yeKciM; July 24th 2010 at 07:45 AM. in which app did u plot it i zoom my a loot and it doesn't show that .. lol I'm not saying that ur wrong TI-84 emulator No, it isn't. The graph of $y= \frac{x-4}{(x-4)(x+3)}$ looks like a hyperbola but with a hole at (4, 1/7). The graph of y= 1/(x+3) is the same hyperbola without the hole. If you plotted it with a calculator or a computer graphing program, your grid was probably too course, and it "jumped over" x= 4. It can be crucially important in Calculus to distinguish things like $\frac{x-4}{(x-4)(x+3)}$ from $\frac{1}{x+3}$. July 24th 2010, 12:10 AM #2 July 24th 2010, 12:13 AM #3 July 24th 2010, 12:17 AM #4 July 24th 2010, 12:21 AM #5 July 24th 2010, 12:36 AM #6 July 24th 2010, 04:58 AM #7 MHF Contributor July 24th 2010, 05:11 AM #8 July 24th 2010, 06:40 AM #9 July 24th 2010, 07:24 AM #10 July 24th 2010, 07:33 AM #11 July 24th 2010, 07:52 AM #12 July 24th 2010, 08:10 AM #13 July 24th 2010, 08:23 AM #14 July 24th 2010, 08:48 AM #15 MHF Contributor
{"url":"http://mathhelpforum.com/pre-calculus/151846-how-do-i-solve-question-1-answer-qusetion-2-correct.html","timestamp":"2014-04-20T14:35:34Z","content_type":null,"content_length":"87663","record_id":"<urn:uuid:204ca952-c40e-49e4-baf2-80acbb1b79a8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 26 In equal circles equal angles stand on equal circumferences whether they stand at the centers or at the circumferences. Let ABC and DEF be equal circles, and in them let there be equal angles, namely at the centers the angles BGC and EHF, and at the circumferences the angles BAC and EDF. I say that the circumference BKC equals the circumference ELF. Join BC and EF. Now, since the circles ABC and DEF are equal, the radii are equal. Thus the two straight lines BG and GC equal the two straight lines EH and HF, and the angle at G equals the angle at H, therefore the base BC equals the base EF. And, since the angle at A equals the angle at D, the segment BAC is similar to the segment EDF, and they are upon equal straight lines. But similar segments of circles on equal straight lines equal one another, therefore the segment BAC equals EDF. But the whole circle ABC also equals the whole circle DEF, therefore the remaining circumference BKC equals the circumference ELF. Therefore in equal circles equal angles stand on equal circumferences whether they stand at the centers or at the circumferences.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookIII/propIII26.html","timestamp":"2014-04-17T21:39:25Z","content_type":null,"content_length":"4030","record_id":"<urn:uuid:e5ce365e-c633-41a8-b057-a75f4b454faf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
DPMMS Seminars 1 - 6 February 1999 UNIVERSITY OF CAMBRIDGE Department of Pure Mathematics and Mathematical Statistics 16 Mill Lane, Cambridge CB2 1SB THIS WEEK'S SEMINARS Monday 1^st February Seminar: Geometry Seminar Location & Time: Syndics Room, DAMTP at 2.00 p.m. Speaker: Graeme Segal Title: More about mirror symmetry for an elliptic curve Seminar: Topology Seminar Location & Time: DPMMS Seminar Room 1 at 3.30 p.m. Speaker: Dr B. Totaro Title: Some recent calculations of cobordism and Chow rings of BG Tuesday 2^nd February Seminar: Number Theory Seminar Location & Time: Seminar Room 1, DPMMS at 4.15 p.m. Speaker: V. Snaith Title: The Wiles unit Seminar: Category Theory Seminar Location & Time: Seminar Room 2, DPMMS at 2.15 p.m. Organisational meeting for the seminars this term followed by talk from Dr Peter Johnstone (after A.Kock) Speaker: Dr Peter Johnstone Title: The amazing strength of amazing right adjoints Wednesday 3^rd February Seminar: Analysis Seminar Location & Time: Seminar Room 1, DPMMS at 2.15 p.m. Speaker: Dr C Read Title: Weakly amenable Banach algebras Seminar: Complex Analysis and Geometry Seminar Location & Time: Seminar Room 2, DPMMS at 4.00 p.m. Speaker: Dr T.W. Ng Title: Ahlfors' Five Islands Theorem Seminar: Complex Analysis and Geometry Seminar Location & Time: Seminar Room 1, DPMMS at 4.30 p.m. Speaker: Dr B. Klopsch Title: Hausdorff dimension in profinite groups Thursday 4^th February Seminar: Combinatorics Seminar Location & Time: Seminar Room 1, DPMMS at 2.15 p.m. Speaker: TBA Title: TBA Friday 5^th February Seminar: Conformal Field Theory Seminar Location & Time: Seminar Room 1, DPMMS at 4.30 p.m. Speaker: Antony Wassermann Title: Discrete series representations of the N = 2 superconformal algebra Part II
{"url":"https://www.dpmms.cam.ac.uk/Seminars/Weekly/1998-1999/Seminars1February.html","timestamp":"2014-04-20T13:47:55Z","content_type":null,"content_length":"3497","record_id":"<urn:uuid:c58b718a-9ef8-4144-8fb0-a5cbb01347ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
193 pounds in kg You asked: 193 pounds in kg 87.54332741 kilograms the mass 87.54332741 kilograms Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/193_pounds_in_kg","timestamp":"2014-04-18T23:41:26Z","content_type":null,"content_length":"53253","record_id":"<urn:uuid:50c1041b-7d6e-4b52-a309-ef74df522f02>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm an answer-checker who is rusty at probability. Can you solve this problem? What is the probability that a random arrangement of the letters in the word THRUSTS will have the two T's next to each [more inside] posted by 23skidoo on Mar 27, 2014 - 11 answers Can/how can one improve the estimate for a chance of an event with a small historical sample size by utilizing the chance of a related event with a large historical sample size? Example and half-assed guess inside. [more inside] posted by Flunkie on Mar 20, 2014 - 16 answers A coin flips three times and comes up heads 2/3. Not suspect. But a coin flips 100,000 times and comes up heads 2 out of 3 times, that starts to look fishy. The standard probability of this is always roughly 50-50, but assuming a 2/3 ratio pointing to a "rigged" coin, how could you plot the increasing likelihood that a given coin is rigged? [more inside] posted by ASoze on Jan 8, 2014 - 17 answers I have a question about probability math. I am essentially flipping a coin (except instead of a 50/50 chance, my odds are 50.5% heads, 49.5% tails). I am concerned with the probability of me hitting heads (a 50.5% chance) several times in a row. [more inside] posted by disillusioned on Dec 25, 2013 - 42 answers Each card in a certain deck has three letters on it. The first letter is either A, B, or C. The second letter is either D, E, F, or G. The third letter is either H, I, J, K, or L. Every possible combination is represented exactly once in the deck. Ergo, there are 3x4x5=60 cards in the deck. How can I determine the probability that a hand of X cards, drawn randomly from the deck, will include at least one of each of the letters? posted by CustooFintel on Nov 2, 2013 - 32 answers Hi guys: I hope that the green can help me on this- perhaps it's an easy problem for you: I have 9 different playing cards and 2 players. The first player can take between 3 to 5 cards and the remainder are given to the second player, and then the game begins. How many different starting hands (collectively between the two players) are there? The order of the cards in each player's hand does not matter. Thanks in advance! posted by JiffyQ on Aug 29, 2013 - 20 answers I'm struggling to understand likelihood ratios (LR) in the context of diagnostic tests, and why a positive LR is influenced by the sensitivity of the test. [more inside] posted by cacofonie on Aug 1, 2013 - 6 answers The fallacy is assuming that statistic information about a thing is more relevant in dealing with a particular instance of that thing than available first-hand data. [more inside] posted by CustooFintel on Mar 12, 2013 - 17 answers I'm a cataloging librarian who works a couple hours a week on the reference desk. This morning I had a patron come in to ask me for sources that back up the claim that the probability that life on earth formed by random chance is so small that some kind of divine intervention is more likely. [more inside] posted by rabbitrabbit on Feb 14, 2013 - 33 answers Can you think of a method that allows an individual to pseudo randomly create a sequence of numbers (at the very least the randomness is opaque to the minds of other people) assuming said individual may only use his mind and body (no physical tools are allowed)? [more inside] posted by Foci for Analysis on Dec 21, 2012 - 7 answers In this game, you roll a number of six-sided dice to get a . The total is either the highest single die result, or the sum of any multiples rolled, whichever is higher. For example: If I roll three dice and get a 3, 4, and 6, my total is 6. But if I roll a 4, 4, and 6, my total is 8, the sum of the two 4s. What I want to find out is the mean, median, mode, and standard deviation of the possible totals given N dice. How might I create a simple script to compute this? [more inside] posted by j0hnpaul on Nov 30, 2012 - 24 answers What great books or resources are there for practicing probability word problems such as for standardized tests like the GRE? [more inside] posted by Mr. Papagiorgio on Nov 8, 2012 - 3 answers Statisticsfilter: Given available information about the distribution of self-selected 4-digit passwords (specifically banking PINs), is it possible to calculate the probability of two randomly selected individuals having the same PIN? If so, what're the odds? [more inside] posted by myrrh on Oct 27, 2012 - 15 answers I'm looking to learn how to calculate probabilities for a multi-round dice game. I've researched this question some, and it looks like I might need to know how to use the multinomial distribution, but I can't find any good introductions. Please point me to the most layman-accessible educational material on this subject, and help me to help myself. [more inside] posted by Richard Daly on Sep 28, 2012 - 6 answers How would one (legally) take advantage of the change in odds of a given NFL team to win the Super Bowl? [more inside] posted by glenngulia on Sep 24, 2012 - 6 answers Math/probability not sports: I am not a gambler, but I am trying out a method of betting on sports with some initial success. At what point can I use the numbers to confidently assume that this is down to the system rather than luck? [more inside] posted by cincinnatus c on Sep 19, 2012 - 4 answers I'm working through an explanation/derivation of the secretary problem that I've never seen before. I know the eventual answer, and I understand most of the steps, but explain this to me like I'm an [more inside] posted by supercres on Sep 16, 2012 - 8 answers Looking for an interesting blog post somewhere from a few months back about maximising your exposure to randomness or your probability of a good outcome. Think it was a geek post somewhere. [more inside] posted by zaebiz on Sep 4, 2012 - 2 answers After tens of thousands of games of pool, every time I rack the balls I seem to switch about half of them around. I know I'm wasting time. So, I want to know exactly how many balls I should expect to swap (the median), and what is the most I should ever have to swap. For those of you who aren't pool nerds like me, I've explained the 8-ball racking process inside. [more inside] posted by omnigut on Apr 4, 2012 - 19 answers What are the most mathematically 'advanced' RPG systems? Pen & paper and otherwise? [more inside] posted by empath on Mar 14, 2012 - 24 answers Is there an equation for figuring out the average wait time for a book on hold at the library. Or how to figure out the average wait time I have left for a book I put on hold because I'm dying to read the rest of it. [more inside] posted by gov_moonbeam on Oct 13, 2011 - 10 answers What is the maximum number of outs possible on the river in heads up Texas hold em? Assuming that out means a card which will take the player who is behind either level or ahead. posted by therubettes on Oct 11, 2011 - 20 answers I'm struggling to understand the empirical content of probability theory. I understand the mathematical theory , and I understand how we get from empirical observations to a mathematical model. I do not understand how we get from the mathematical model back to the real world, e.g., what is the "empirical content" of a statement like "event will occur with probability [more inside] posted by ochlophonic on Sep 8, 2011 - 23 answers I'm trying to rank some non-proper poker hands within the conventional poker hand-ranking framework. I would like to rank them conventionally, i.e. on their probability of occurring in a straight, five-card deal. The non-proper hands are: (1) "Four-card straight" (four cards in a row); (2) "Four-card flush" (four cards of the same suit); (3) "Four-card straight flush" (four cards in a row of the same suit); (4) "Same-color flush" (all 5 red cards or all 5 black cards); (5) "Straight same-color flush" (a straight composed of all black cards or all red cards); (6) "Four-card straight same-color flush" (a four-card straight composed of all black cards or all red cards). My probability skills are "OK" (the odds for the four-card flush and same-color flush are straightforward), but some of them (particularly the straights) seem too tricky for me. [more inside] posted by mrgrimm on May 13, 2011 - 9 answers We have lost a cat. Ignoring obstacles like trees, roads and houses, I am assuming he has gone on a random walk. Me and my fiancée have been out every night since he left, also walking randomly, hoping to find him. But I know a random walk in two dimensions always returns to the origin eventually, so are we actually any better off searching for him than just staying at home? [more inside] posted by hoverboards don't work on water on Feb 5, 2011 - 20 answers Please refresh my memory with regard to a straightforward question of fair dice and probability. [more inside] posted by Justinian on Jan 20, 2011 - 14 answers Which probability distribution should I use to model examination results? [more inside] posted by alby on Jan 13, 2011 - 13 answers Has there ever been any research done on whether there is any correlation between spurts of adding contacts to LinkedIn or LinkedIn activity and someone changing job? [more inside] posted by MuffinMan on Jan 12, 2011 - 2 answers Is there a word for a "collection of possible future events that are all somehow related?" I want something that captures the idea of a "collection of scenarios." Or, pick an forecasted event, fear, or desire: What is "the spectrum of possible outcomes" relevant to this thing that has my attention, plausible or otherwise, expected or unexpected, the good, the bad, and the ugly? Probability cloud? Scenario collection? [more inside] posted by zeek321 on Dec 27, 2010 - 20 answers Probability filter: after eating the TD turkey we play the Turkey game which consists of tossing six dice. The six faces of each die are carved with the letters that spells turkey. Each combination of letters earn a different score (for example T U is 5 points, 3 Ts wipe all the points earned, etc.) with the first TURKEY being the winner. How many tosses would you need to spell TURKEY? [more inside] posted by francesca too on Nov 29, 2010 - 15 answers Help me teach myself enough about probability to properly balance the board game I want to design. [more inside] posted by Caduceus on Aug 9, 2010 - 3 answers I'm scriptwriting an online interactive interview for a curriculum resource. There are ten questions, and students will be able to choose six of these to 'ask', and then see a video clip of each reply. Ideally, I want students to hear four strong answers and two weak ones. Based on this criterion, should I be using probabilty to work out how many of the ten answers should be weak or strong? If so, how? I think I need 6 or 7 strong answers just by applying the ratio. posted by dowcrag on Jun 30, 2010 - 3 answers How does one guess sports betting odds, or determine at what point to place a bet on a sporting event? [more inside] posted by reenum on Apr 6, 2010 - 9 answers What are some for media depictions of seeing briefly into the future, especially in terms of probability/what could have be? [more inside] posted by Nelsormensch on Feb 4, 2010 - 11 answers Hi everyone. Say I have a list of 50 items. I pick 10 of them. I put them back. What is the probability that, the SECOND time I pick 10 items, I pick an item I already picked the first time? How about the THIRD time, from the first or second? How about the probability of picking 3 of the same item, or 5? Thanks! posted by EduTek on Jan 22, 2010 - 17 answers Stats-filter: Given a binary matrix, if I know the total number of ones in a given row and a given column, can I calculate the probability that a given position contains a one? [more inside] posted by chrisamiller on Dec 16, 2009 - 25 answers Here's an obnoxious school question that's been nagging me for days: Suppose that the height (at the shoulder) of adult African bull bush elephants is normally distributed with µ = 3.3 meters and ∂ = .2 meter. The elephant on display at the Smithsonian Institute has a height 4 meters and is the largest elephant on record. What is the probability that an adult African bull bush elephant has height 4 meters or more? [more inside] posted by incomple on Dec 8, 2009 - 8 answers I have an event that has a 75% chance of happening. If I run the trial seven times, what is the probability of the event happening at least once? And what's the math behind it? posted by jackypaper on Nov 6, 2009 - 7 answers
{"url":"http://ask.metafilter.com/tags/probability","timestamp":"2014-04-21T01:08:01Z","content_type":null,"content_length":"67000","record_id":"<urn:uuid:906f0a6b-05ce-4236-8539-7b6959001036>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
8.1.2 Identification Using Input-Output Data (Subspace System Identification Method) The method implemented here is due to Moonen, De Moor, Vandenberghe, and Vandewalle (1989). The method is based on the singular value decomposition of the Hankel matrix If the input is persistently exciting of order Time-domain system identification using the output response. All the options of the function ImpulseResponseIdentify can be used and they have the same meaning. Make sure the application is loaded. Load the collection of test examples. This is a discrete-time state-space model of a steam power system. The ratios of each Hankel singular value to the largest one identify the strong and weak modes. Here is a number of measurements that is enough to identify the system. Make sure NormalDistribution is available. This constructs the white noise input. This simulates the output response with random initial conditions. The multiplicative white noise of amplitude The two weaker modes of this state-space system are obliterated by the added noise. This Bode plot denotes the original and identified models by solid and dashed lines, respectively.
{"url":"http://reference.wolfram.com/legacy/applications/anm/FunctionIndex/OutputResponseIdentify.html","timestamp":"2014-04-21T05:00:50Z","content_type":null,"content_length":"35929","record_id":"<urn:uuid:228be229-5fbd-440e-840a-5edd63462b7d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
One simple multiple choice question April 26th 2013, 07:36 PM One simple multiple choice question Hi, I have one multiple choice question. When you are designing a research study and considering what hypothesis test you might use, a common rule of thumb is to select the most powerful test. Why is this a good idea? a. The most powerful test is the test most likely to get the right answer. b. The most powerful test is the test most likely to result in a type II error. c. The most powerful test is the test least likely to fail to reject the null hypothesis when it is false. d. The most powerful test is the most likely to not reject the null hypothesis when it is true. Thanks! (Brief explanation will be very helpful to me.) April 26th 2013, 07:47 PM Re: One simple multiple choice question Hey therexists. Hint: The power of a test measures the ability to reject the null hypothesis when it is not true. Do you know what the power represents in terms of probability regarding H0 and H1? April 26th 2013, 08:09 PM Re: One simple multiple choice question I don't know exactly what the power represents in terms of H0 and H1. April 26th 2013, 08:10 PM Re: One simple multiple choice question According to your hint, I guess the answer is C, but I'm not sure of "least likely". April 26th 2013, 08:16 PM Re: One simple multiple choice question The definition of power is given by P(H1 rejected|H1 false) = 1 - B where B = P(H1 rejected|H1 is true). Your Type I and Type II errors are Type I = P(H0 rejected|H0 true) and Type II = B = P(H1 rejected|H1 true) April 26th 2013, 08:23 PM Re: One simple multiple choice question so then the answer is B? April 26th 2013, 08:31 PM Re: One simple multiple choice question No it's the opposite: You want to reduce a Type II error which means you want B to be as small as possible and therefore 1 - B to be as large as possible. April 26th 2013, 08:36 PM Re: One simple multiple choice question The answer is C or D. You said the power of a test measures the "ability" to reject and C said "fail to reject". So then answer is D? April 26th 2013, 08:50 PM Re: One simple multiple choice question It can't be b) because you want to minimize the Type II error. It can't be d) because the power looks at the alternative hypothesis. In terms of a) we would be tempted to say yes but it is not completely true. The reason why I think the answer is c) is because being least likely to reject H0 when H0 is false means that it is most likely to fail to reject H0 when H0 is false and thus accept H1 when H1 is true which is the definition of power. Remember that Power = P(H1 accepted|H1 true) and so you want to maximize this probability to maximize the power of your test. April 26th 2013, 08:53 PM Re: One simple multiple choice question Yes, the answer is C. I was doing online homework. Thank you very much Chiro, and your explanations are really good. I will read it again before exam.
{"url":"http://mathhelpforum.com/statistics/218260-one-simple-multiple-choice-question-print.html","timestamp":"2014-04-21T04:52:11Z","content_type":null,"content_length":"8628","record_id":"<urn:uuid:df173314-dce7-4fce-92b3-61e849624b01>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
2 dimensional array without hard coding values Author 2 dimensional array without hard coding values I have to write a program that would generate a matrix that should display the following output. They are asking that we dont hard code our values in this program. I am lost on how I Joined: Nov will get these values to display without hard coding. My question is how should my expression look. Any information that you all can give me I would greatly appreciate it. 09, 2007 Posts: 4 Output: author and Marshal Hi, Joined: Jul Welcome to JavaRanch! 08, 2003 Posts: 24166 Instead of storing the values in an array, consider how you could compute each value from the loop index variables at the position you need to print it. For example, 2, 4, 6... could be (column + 1)*2, right? If you think a little about it, there's a pretty obvious equation that uses both "row" and "column" and computes the value of every cell in your matrix. I suspect it's OK to hardcode the "5"'s, by the way -- the loop limits. I like... [Jess in Action][AskingGoodQuestions] Joined: Nov 09, 2007 Thanks for all your help, I got it. Posts: 4 Okay I tried what you told me and it works. but my output looks like this: Joined: Nov 09, 2007 init: Posts: 4 deps-jar: BUILD SUCCESSFUL (total time: 0 seconds) This is my code: author and Well, three things: Joined: Jul 08, 2003 1) "" and " " aren't the same; the first is an empty String, the second has one space in it. You want to print spaces between your numbers, not empty strings. Posts: 24166 2) The first loop doesn't have that extra "System.out.println()" after each iteration of the inner loop, so everything is on one line. 3) You need to come up with a single equation which uses both variables to compute each number, and then just use a single set of nested loops (since you only want to print one matrix!) I like... Joined: Oct I made some minor modifications to get it to compile, and stuck in some print statements. try this code, and see if you can figure out what it's doing... 02, 2003 Posts: 10916 I like... There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors You're now adding the same numbers for each row: (column + 1) * 2. Joined: Oct 27, 2005 You still don't see the pattern, do you? Posts: 19543 Let's look at the first number in each row. See a pattern there? How does this relate to the row number? Ok, now you have the first number. How do the rest of these numbers relate to that first number? See another pattern? And how does this relate to the column number? I like... SCJP 1.4 - SCJP 6 - SCWCD 5 - OCEEJBD 6 How To Ask Questions How To Answer Questions Joined: Nov Yes I see the pattern, for every number in the first row you add 2 then the next row you add 3 to each number then so on and so on. I know am missing something on trying to put this all 09, 2007 into one equation. Am going to take a break and then come back and look at it again. To try to find what am missing. Posts: 4 Joined: Oct so, you add 2, then 4, then 5, then 6. 02, 2003 Posts: 10916 hmmm... what can i use to easily get me those numbers, one after the other... 12 HEY!!! that kind of sounds like a for-loop might work. I can start my loop with any value i want, so starting it at 2 shouldn't be a problem. then, I just have to make sure I stop it before my counter gets to be 7. I like... subject: 2 dimensional array without hard coding values
{"url":"http://www.coderanch.com/t/408669/java/java/dimensional-array-hard-coding-values","timestamp":"2014-04-19T14:38:32Z","content_type":null,"content_length":"43192","record_id":"<urn:uuid:65381a8e-2367-49bd-8957-273e600ca74a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
having trouble solving a program 05-27-2010 #1 Registered User Join Date Apr 2010 having trouble solving a program Hi all, i was having trouble solving this question: Write a C program that displays all numbers from 1 to Y,which are divisible by a given number X, Where the user enters the values of X and Y. ex: if the entered values of X and Y are 3 and 30 respectively ,the program will produce the following output: " Numbers from 1 to 30 divisible by 3 are : 3 6 9 12 15 18 21 24 27 30" here is my work so far , can anyone help me with the rest of the program or correct the loop for me ? #include <stdio.h> #include <conio.h> int X,Y,i; printf("Please Enter value of X,Y"); printf("Numbers from 1 to %d divisible by 3 are : %d",Y,i); Well, you don't have the logic down yet. You are just printing out all the numbers between 1 and y. You need to test if they are divisible by x before you print them. So just test if i is divisible by x. If it is print it . Think about the definition of divisibility in order to deduce what your testing condition should be. 1. Get rid of gets(). Never ever ever use it again. Replace it with fgets() and use that instead. 2. Get rid of void main and replace it with int main(void) and return 0 at the end of the function. 3. Get rid of conio.h and other antiquated DOS crap headers. 4. Don't cast the return value of malloc, even if you always always always make sure that stdlib.h is included. how do i test if a no. divided a no. gives an int not a float value ? do i use the % ? help plz #include <stdio.h> #include <conio.h> int X,Y,i; printf("Please Enter value of X,Y"); printf("Numbers from 1 to %d divisible by 3 are : %d",Y,i); but still not printing what i need , any adjustments please ? Yes you use %. A number is divisible by another number if the remainder of the division of that number by the second number is 0. Test for that in an if statement inside your for loop and you are 1. Get rid of gets(). Never ever ever use it again. Replace it with fgets() and use that instead. 2. Get rid of void main and replace it with int main(void) and return 0 at the end of the function. 3. Get rid of conio.h and other antiquated DOS crap headers. 4. Don't cast the return value of malloc, even if you always always always make sure that stdlib.h is included. i did test for that , thanks but can u tell me how to give this exact printout ? " Numbers from 1 to 30 divisible by 3 are : 3 6 9 12 15 18 21 24 27 30" cause my printf dont do it post your code with your new test condition included. did that already lol If the numbers are "divisible by N", why not just make the loop increment by that amount? You're doing it wrong. You don't use a loop with % foo == 0, you use that as a check: for( x = 0; x < 100; x++ ) if( this % that == thisotherthing ) printf( "%d %% %d has %d has a remainder\n", this, that thisotherthing ); You aren't actually thinking about what you are doing, you're just throwing lines of code in and hoping it does what you want. Hope is the first step on the road to disappointment. 05-27-2010 #2 Registered User Join Date Jun 2009 05-27-2010 #3 05-27-2010 #4 Registered User Join Date Apr 2010 05-27-2010 #5 Registered User Join Date Apr 2010 05-27-2010 #6 05-27-2010 #7 Registered User Join Date Apr 2010 05-27-2010 #8 Registered User Join Date Jun 2009 05-27-2010 #9 Registered User Join Date Apr 2010 05-27-2010 #10 Registered User Join Date Sep 2008 Toronto, Canada 05-27-2010 #11
{"url":"http://cboard.cprogramming.com/c-programming/127293-having-trouble-solving-program.html","timestamp":"2014-04-17T13:50:17Z","content_type":null,"content_length":"78342","record_id":"<urn:uuid:61d9bad7-f036-47dc-8151-0a65ab1eb74e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Ray tracing parameters (TRAPAR namelist) Next: Inversion parameters (INVPAR namelist) Up: Input Parameters Previous: Axes parameters (AXEPAR namelist) ishot - an array specifying the directions rays are to be traced from the shot points listed in the arrays xshot and zshot; the following code is used: (1) 0 - no rays are traced (2) -1 - rays are traced to the left only (3) 1 - rays are traced to the right only (4) 2 - rays are traced to the left and right (default: 0) iraysl - use the array irayt to select which ray groups listed in the array ray are to be traced for a particular shot point (default: 0) irayt - an array selecting those ray groups listed in the array ray which are active for a particular shot point listed in the arrays xshot and zshot. If there are n ray groups listed in the array ray, then the jth ray group for the ith shot is referred to in the (2n(i-1)+j)th element of irayt for rays traced to the left , and in the (2n(i-1)+n+j)th element of irayt for rays traced to the right. You must specify values of irayt for each group and for each direction regardless of the values of ishot (default: 1) ifast - use a Runge-Kutta routine without error control to solve the ray tracing equations and a look-up/interpolation routine to evaluate certain trigonometric functions; with ifast=0, a Runge-Kutta method with error control and the intrinsic trigonometric functions are used. Using ifast=1 is about 30-40% faster and provides essentially the same accuracy as ifast=0 (default: 1) Note: ifast=1 is intended to be used for all routine modelling and ifast=0 only to test a final model or if ifast=1 obviously fails i2pt - perform two-point ray tracing, i.e., determine the ray take- off angles that connect sources and observed receiver locations within each ray group; the receiver locations are obtained from the file tx.in for those picks that have the same integer code as specified in the array ivray for the ray group; if i2pt=2, only rays which reflect off the appropriate floating reflector specified in the array frbnd (if frbnd>0) are traced (default: 0) iturn - an array corresponding to the ray groups listed in the array ray to trace rays only to their turning or reflection point in the search mode; if a refracted or head wave ray group is to contain a reflection(s) before it turns (specified using nrbnd and rbnd), then iturn=0 must be used (default: 1) isrch - search again for the take-off angles of a ray group if the same ray code is listed more than once for the same shot in the array ray; isrch=1 must be used if the take-off angles of two or more ray groups with the same ray code are different because of multiple reflections and/or conversions specified by nrbnd, rbnd, ncbnd,and cbnd (default: 0) istop - stop tracing any ray which reflects off a boundary not specified in the arrays ray or rbnd; if istop=2, rays are also stopped if they enter a layer deeper than that specified by the ray code in the array ray, i.e., the layer number is greater than L and the ray code is L.n, where n=0, 1, 2, or 3 (default: 1) idiff - continue to trace headwaves along a boundary even if the velocity contrast across the interface is no longer positive, in which case headwaves emerge parallel to the boundary (idiff=1); if idiff=2, initiate a headwave ray group even if there is no critical point (or it is not found) to model diffractions along the top of a low-velocity layer (default: 0) ibsmth - apply a simulation of smooth layer boundaries (1) or apply the simulation and plot the smoothed boundaries (2); ibsmth=2 has no effect if isep>1 (default: 0) insmth - an array listing the layer boundaries for which the smooth layer boundary simulation is not to be applied, or only applied outside the model distances xminns and xmaxns, when ibsmth=1 or 2, (default: 0) imodf - use the velocity model in the file v.in instead of the model in part (5) of the file r.in (default: 0) xshot - an array containing the x-coordinates (km) of the shot points (default: 0.0) zshot - an array containing the z-coordinates (km) of the shot points (default: a very small distance below the model surface) ray - an array containing the ray groups to be traced; the following code is used: (1) L.1 - rays which refract (turn) in the Lth layer (2) L.2 - rays which reflect off the bottom of the Lth layer (3) L.3 - rays which travel as head waves along the bottom of the Lth layer (4) L.0 - ray take-off angles supplied by the user in the arrays amin and amax nray - an array containing the number of rays to be traced for each ray group in the array ray (default: 10; however, the default is the first element of nray for all ray groups if only one value is specified) space - an array which determines the spacing of take-off angles between the minimum and maximum values for each ray group in the array ray. For space=1, the take-off angles will be equally spaced; for space>1, the take-off angles will be concentrated near the minimum value; for 0<space<1, the take-off angles will be concentrated near the maximum value (default: 1; however, space=2 for a reflected ray group if specified as L.2 in the array ray) amin, amax - arrays containing minimum and maximum take-off angles (degrees); measured from the horizontal, positive downward and negative upward (for rays traveling left to right or right to left); used for ray groups specified by ray=1.0 nsmax - an array containing the maximum number of rays traced when searching for the take-off angles of the ray groups in the array ray (default: 10; however, the default is the first element of nsmax for all ray groups if only one value is n2pt - maximum number of iterations during two-point ray tracing (i2pt>0); this is the maximum number of rays traced for each receiver to determine the take-off angle of the ray that connects the source and receiver (default: 5) x2pt - distance tolerance (km) for two-point ray tracing; less than n2pt rays will be traced for a particular receiver if a ray end point is within x2pt of the receiver location (default: (xmax-xmin)/2000) crit - head waves are generated if a down-going ray in the search mode has an angle of incidence at the bottom of the Lth layer within crit degrees of the critical angle when ray=L.3 (default: 1) hws - the spacing (km) of rays emerging upward from the bottom of the Lth layer when ray=L.3 (default: (xmax-xmin)/25) nhray - maximum number of rays traced for a head wave ray group (default: pnrayf) aamin - minimum take-off angle (degrees) for the refracted ray group in the first layer (default: 5) aamax - maximum take-off angle (degrees) for reflected ray groups specified as L.2 in the array ray (default: 85) stol - if a ray traced in the search mode is of the correct type and its end point is within stol (km) of the previous ray traced in the search mode, then the search for that ray type is terminated; a value of stol=0 will ensure that nsmax rays are always traced in the search mode (default: (xmax-xmin)/3500) xsmax - for reflected ray groups specified as L.2 in the array ray, determine the minimum take-off angle using the search mode so that the maximum range for this ray group is xsmax (km) if iturn=0 or the maximum offset of the reflection point is xsmax/2 (km) if iturn=1; for head wave ray groups specified as L.3 in the array ray, the maximum offset of the point of emergence from the head wave boundary is xsmax (km); xsmax=0 will ensure that the take-off angle of the reflected ray which grazes off the bottom of the Lth layer is determined and head waves are traced along the Lth boundary until the edge of the model is reached (default: 0.0) nrbnd - an array containing the number of reflecting boundaries for each ray group in the array ray (default: 0) rbnd - an array containing the reflecting boundaries specified in the array nrbnd; the following code is used: (1) L - ray traveling downward is reflected upward off the bottom of the Lth layer (2) -L - ray traveling upward is reflected downward off the top of the Lth layer ncbnd - an array containing the number of converting (P to S or S to P) boundaries for each ray group in the array ray (default: cbnd - an array containing the converting boundaries specified in the array ncbnd; the following code is used: (1) i - ray will convert from its present wave type (P or S) at the ith layer boundary encountered (2) 0 - ray will leave the source as an S-wave frbnd - an array containing the floating reflecting boundaries for each ray group in the array ray; the values of frbnd correspond to the order in which the reflectors are listed in the file f.in (default: 0) pois - an array containing the value of Poisson's ratio for each model layer; a value of 0.5 signifies a water layer with a corresponding S-wave velocity of zero (default: 0.25; however, the default is the first element of pois for all layers if only one value is specified) poisl, poisb - arrays specifying the layers and block numbers, respectively, of model trapezoids within which Poisson's ratio is modified over that given by pois using the array poisbl; for poisb, the trapezoids with a layer are numbered from left to poisbl - an array containing the value of Poisson's ratio for the model trapezoids specified in the arrays poisl and poisb overriding the values assigned using the array pois npbnd - number of points at which each layer boundary is uniformly sampled for smoothing if ibsmth=1 or 2 (default: nbsmth - number of applications of a three-point averaging filter to each layer boundary if ibsmth=1 or 2 (default: 10) xminns, xmaxns - minimum and maximum model distance over which the layer boundaries listed in the array insmth are not to be smoothed using ibsmth=1 or 2 (defaults: smooth boundaries between xmin and step - controls the ray step length in the solution of the ray tracing equations according to the relationship step length (km) = step*v/(|vx|+|vz|) where v is velocity and vx and vz are its partial derivatives with respect to x and z (default: 0.05) smin, smax - minimum and maximum allowable ray step length (km) (defaults: (xmax-xmin)/4500, (xmax-xmin)/15) Ingo Pecher Sat Mar 7 19:13:54 EST 1998
{"url":"http://pubs.usgs.gov/of/2004/1426/rayinvr/node7.html","timestamp":"2014-04-17T18:48:22Z","content_type":null,"content_length":"13054","record_id":"<urn:uuid:73ceef69-d419-4c60-b6ba-ddd9e2191be8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Improving efficiency of inferences in randomized clinical trials using auxiliary covariates The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds-ratios or log-odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly-applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods. Keywords: Covariate adjustment, Hypothesis test, k-arm trial, Kruskal-Wallis test, Log-odds ratio, Longitudinal data, Semiparametric theory
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC2574960/?lang=en-ca","timestamp":"2014-04-20T09:49:17Z","content_type":null,"content_length":"161980","record_id":"<urn:uuid:00c38091-edc8-4bc1-a9b9-f7012632a86f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex numbers and Derivatives October 1st 2011, 12:22 AM #1 Jan 2008 Complex numbers and Derivatives Hey guys, I've done more questions that I need to check. 1) a) Simplify (z-1) (z^4 + z^3 + z^2 + z + 1) my solution: z^5 -1 b) use part a to find all the roots of the polynomial P(z) = z^4 + z^3 + z^2 + z + 1 (you may keep your answer in exponential polar form my solution: I converted z^5 - 1 = 0 to exponential polar form and got the following roots: z = 1e^i0, 1e^i(2pi/5), 1e^i(-2pi/5), 1e^i(4pi/5), 1e^i(-4pi/5) 2) Find dy/dx if root(y) + x loge(y) = cos(xy^2) my solution: using implicit diff and multiplication rule, i got dy/dx = [-log(y) - y^2 sin(xy^2)] / [ (x^2/2y) + 2x sin(xy^2) + (x^2/y)] please check my solutions to these problems thank you in advance. Re: Complex numbers and Derivatives Your roots are good: 5th roots of unity - Wolfram|Alpha Simplifying the implicit differentiation is a bit messy but I'm pretty sure I don't like your (x^2)'s in the denominator. Just in case a picture helps... ... where (key in spoiler) ... I hope that helps. __________________________________________________ __________ Don't integrate - balloontegrate! Balloon Calculus; standard integrals, derivatives and methods Balloon Calculus Drawing with LaTeX and Asymptote! Last edited by tom@ballooncalculus; October 1st 2011 at 03:22 AM. Re: Complex numbers and Derivatives Hello Tom, Thanks for the prompt reply, I have revamped my solution fixing all silly mistakes - here it is: dy/dx = [ -y^2sin(xy^2) - (ln(y)/2root(y)) - 1/(2root(y))] / [ x/y + 2xysin(xy^2)] Am I correct? Re: Complex numbers and Derivatives No, your numerator was ok before. I suspect your differentiation is at fault. When you differentiate the equation implicitly, do you get the bottom row of my diagram (or equivalent)? You should. But if you do, then it's just an algebra problem. Use latex to show your steps. Click on 'Reply With Quote' and you can copy and paste the portion inside (and including) the [ tex ][ /tex ] tags, and then extend the following. ${\displaystyle \frac{1}{2} \frac{1}{\sqrt{y}} \frac{dy}{dx} + \ln y + \frac{x}{y} \frac{dy}{dx} = -\sin(xy^2) [y^2 + 2xy \frac{dy}{dx}}]$ Last edited by tom@ballooncalculus; October 2nd 2011 at 03:20 AM. Re: Complex numbers and Derivatives {\displaystyle \frac{1}{2} \frac{1}{\sqrt{y}} \frac{dy}{dx} + \ln y + \frac{x}{y} \frac{dy}{dx} = \sin(xy^2) [y^2 + 2xy \frac{dy}{dx}}] Hello Balloon, I don't know how to use latex fluently, it would take me far too long to write it all out. I will work through them in words. First I multiplied out the brackets on the right hand side and took all the terms with a dy/dx to the left and the terms without a dy/dx to the right. I then took out the dy/dx as a common factor and divided what was left. Re: Complex numbers and Derivatives No, your numerator was ok before. I suspect your differentiation is at fault. When you differentiate the equation implicitly, do you get the bottom row of my diagram (or equivalent)? You should. But if you do, then it's just an algebra problem. Use latex to show your steps. Click on 'Reply With Quote' and you can copy and paste the portion inside (and including) the [ tex ][ /tex ] tags, and then extend the following. ${\displaystyle \frac{1}{2} \frac{1}{\sqrt{y}} \frac{dy}{dx} + \ln y + \frac{x}{y} \frac{dy}{dx} = \sin(xy^2) [y^2 + 2xy \frac{dy}{dx}}]$ Is that supposed to be -sin(xy^2)? Re: Complex numbers and Derivatives I have redone the calculations, i now get [-y^2sin(xy^2) - loge(y)] / [ (1/(2root(y)) + (x/y) +2xy sin(xy^2) Re: Complex numbers and Derivatives Re: Complex numbers and Derivatives Thanks mate, have a great night! Re: Complex numbers and Derivatives And you! October 1st 2011, 02:50 AM #2 MHF Contributor Oct 2008 October 1st 2011, 08:03 PM #3 Jan 2008 October 2nd 2011, 01:45 AM #4 MHF Contributor Oct 2008 October 2nd 2011, 02:21 AM #5 Jan 2008 October 2nd 2011, 02:45 AM #6 Jan 2008 October 2nd 2011, 02:51 AM #7 Jan 2008 October 2nd 2011, 03:20 AM #8 MHF Contributor Oct 2008 October 2nd 2011, 03:23 AM #9 Jan 2008 October 2nd 2011, 03:26 AM #10 MHF Contributor Oct 2008
{"url":"http://mathhelpforum.com/calculus/189232-complex-numbers-derivatives.html","timestamp":"2014-04-19T07:09:56Z","content_type":null,"content_length":"57428","record_id":"<urn:uuid:767c24f8-3427-46f2-8b63-07f453800ac6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
A magic square? August 14th 2010, 12:31 AM #1 Aug 2010 A magic square? Hi guys. A challenge for you all: This is the third forum I'm trying, and thus far the squares below remain unsolved. It is my hope that this forum will change that! Attached are two number grids. One is a 10X10 and and contains numbers 0-9. The other is a 12X12 and contains number 1-12. The squares come from different puzzles, but by the same author (I included both as one square is complete, and may give a clue as to how the other is compiled) The puzzles both bare no copyright, and form part of a geocaching puzzle (see Geocaching - The Official Global GPS Cache Hunt Site). By filling in the numbers on the 10X10 square one should be able to obtain a set of GPS coordinates. As we are in Cape Town, South Africa, the numbers will probably look like this: (S33[or 34] XX.XXX ; E18 XX.XXX) Just for interest sake the 12X12 square puzzle came with the following clue which would also reveal a GPS coordinate as for the above: S 34° [9(1) - 9(2)] [7(4)] . [2(2)] [6(2)] [7(3)] E 018° [6(1)] [4(2)] . [4(1)] [9(4)] [1(4)] Wishing you all the best and may thanks The 12X12 My apologies... here is the 12X12.... (Still not necessary for solving the 10X10) Simple brute force will work on the 10x10 because there are at most 10^10 operations, and the main issue would be how long it takes to enter the data. 10^10 is already fast enough, but if enhanced by backtracking, the algorithm will run plenty fast. For the 12x12 there is some trick since the square is already filled in, and presumably it relies on the other clue you gave; my guess would be that 9(1) stands for either the position of the 9th occurrence of 1, or the first occurrence of 9, or the entry in the 9th row and first column, or some such. Since you are familiar with reasonable coordinates, how about try out some such guesses and see if any produces a sensible result. For the 12x12 there is some trick since the square is already filled in, and presumably it relies on the other clue you gave; my guess would be that 9(1) stands for either the position of the 9th occurrence of 1, or the first occurrence of 9, or the entry in the 9th row and first column, or some such. Since you are familiar with reasonable coordinates, how about try out some such guesses and see if any produces a sensible result. Hi there Undefined (cool profile pic ~ HDR?) I have already tried that which you suggest... Once i have counted, for example the 9th occurance of 1 or the inverse, I still do not know what to do next ~position of the block from the start / sum of row + columns / position of block from end... The only clues I can see are in the formatting: [9(1)-9(2)] must equal 1 digit. Likewise [7(4)] must equal 1 digit. 9(1) > 9(2) as you cannot have a negative value here. Values [2(2)] ... [6(2)] ... and [7(3)] must all be single digit... Ok I let me test the above and see if it helps find a pattern. But please. Anyone with any smarter ideas please do share! Thanks for help thus far. For the 10x10, brief analysis reveals the magic sum must be 44. This makes it easy to do on paper. What does HDR stand for? Yeah for the 12x12, I don't know there's a definitive answer, and if all coordinates from 000.00 to 999.99 make sense, then I think it's just guessing.. @Undefinded: The only definites in the co-ords are S34 xx.xxx and E18 xx.xxx @Undefinded: HDR ~ High Dyanmic Range... Really awesome spin on digital photography where you take three images of the same thing at different exposure levels then combine them to get massive pixel depth (and indeed pick up more detail than the human eye in anygiven light! - if done properly) As for the 10X10... I really don't see how the all the rows / columns could add up to 44 where there the bottom row adds to 52 incompleted! Ok you could add a negative number... Could you elaborate as to how you derived 44? @Undefinded: HDR ~ High Dyanmic Range... Really awesome spin on digital photography where you take three images of the same thing at different exposure levels then combine them to get massive pixel depth (and indeed pick up more detail than the human eye in anygiven light! - if done properly) As for the 10X10... I really don't see how the all the rows / columns could add up to 44 where there the bottom row adds to 52 incompleted! Ok you could add a negative number... Could you elaborate as to how you derived 44? I assumed the square was well constructed, so obviously it's not. Not counting the question marks, the first row adds to 41, the last column adds to 35, and the second row adds to 44. So the maximum is 35+9 and the minimum is 44+0, so it's 44, but maybe you shouldn't do this challenge since the challenger doesn't seem to be very careful checking for errors and ambiguities. Or, did the original problem say "magic square" anywhere in it? Because if not then the title of your post is misleading everyone. That's cool about HDR, but I just took the image off some free wallpaper site or something, I have the larger image somewhere on my computer. Edit: Come to think of it, the fact that the first row and last column have different sums should have tipped me off, I just wasn't paying attention and was operating under the notion that the problem made sense.. @Undefined: There is no error ~ this is the work of a genius! Both puzzles have been solved 100+ times over the past 6 years. Title was "number cruncher" or something to that effect. There are, or at least I have tested, 8 different types of "magic square" and have yet to find one that it conforms to. Hence my title ~ "A magic square?" I will find youa link of a 360degree photo I took in HDR.... Ahh here it is: http://www.doceave.com/panoramics/gs...highq_tour.swf Pic taken in too-dim-to-read light... @Undefined: There is no error ~ this is the work of a genius! Both puzzles have been solved 100+ times over the past 6 years. Title was "number cruncher" or something to that effect. There are, or at least I have tested, 8 different types of "magic square" and have yet to find one that it conforms to. Hence my title ~ "A magic square?" I will find youa link of a 360degree photo I took in HDR.... Ahh here it is: http://www.doceave.com/panoramics/gs...highq_tour.swf Pic taken in too-dim-to-read light... Well titling the post "A magic square?" can give a few different ideas: (1) Here is a square that I'm guessing is magic (2) Here is a magic square I cannot solve Without thinking, I thought (2) was meant, and didn't bother looking for other interpretations. Another idea for 10x10 is to look for common/"famous" decimal expansions, like pi or e or sqrt(2); haven't found anything so far. That panoramic photo is quite nifty! @Undefined: Title describes my thoughts and is appropriate. This could very well be an exotic magic square. Magic Square -- from Wolfram MathWorld Over a hundered of the buggers described! Def not any sqaures of centre digits or pi... numbers are way to small and number like 7 just dont fit into those well. WRT the HDR 360: If you let the image load completely you will be able to click a link on the roof taking you to the same image with myself stitiched in... @Undefined: Title describes my thoughts and is appropriate. This could very well be an exotic magic square. Magic Square -- from Wolfram MathWorld Over a hundered of the buggers described! Def not any sqaures of centre digits or pi... numbers are way to small and number like 7 just dont fit into those well. WRT the HDR 360: If you let the image load completely you will be able to click a link on the roof taking you to the same image with myself stitiched in... The standard notion of magic square has all rows and columns with equal sums, and sometimes diagonals too, so you should not find it hard to believe your title can mislead people. If you want to be nice to the people trying to help, then say things like "I have verified that this is not a standard magic square because the rows and columns can't possibly all have the same sum", etc. Otherwise you are wasting our time by making us rediscover the things you already know. A new, non-magical, much less deceptive thread created above! Mmmm, you mean well, but it's against the rules to create a duplicate thread, as it clutters the forum. Cross referenced so that the mods can clean it up.. August 14th 2010, 04:43 AM #2 MHF Contributor Dec 2007 Ottawa, Canada August 14th 2010, 06:57 AM #3 Aug 2010 August 14th 2010, 07:20 AM #4 August 14th 2010, 07:50 AM #5 Aug 2010 August 14th 2010, 07:51 AM #6 August 14th 2010, 07:51 AM #7 Aug 2010 August 14th 2010, 08:11 AM #8 Aug 2010 August 14th 2010, 08:22 AM #9 August 14th 2010, 08:36 AM #10 Aug 2010 August 14th 2010, 08:46 AM #11 August 14th 2010, 08:57 AM #12 Aug 2010 August 14th 2010, 09:03 AM #13 August 14th 2010, 09:13 AM #14 Aug 2010 August 14th 2010, 09:18 AM #15
{"url":"http://mathhelpforum.com/math-puzzles/153645-magic-square.html","timestamp":"2014-04-17T01:30:09Z","content_type":null,"content_length":"83237","record_id":"<urn:uuid:59085305-797b-4c49-994e-c8260f5c9e8b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Bensenville Math Tutor ...I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. My students' grades improve quickly, usually after only a few sessions. 26 Subjects: including ACT Math, trigonometry, Spanish, precalculus I am an experienced special education teacher and tutor. Being a special education teacher means that I know how to pinpoint exactly how each student learns best and I have the patience and ability to guide students, no matter if they are in special education, gifted education or somewhere in betwe... 33 Subjects: including linear algebra, SAT math, algebra 1, prealgebra ...Usually this strategy gives excellent results and typically a student improves his/her grades by 1-2 letters after 3-4 sessions. I hold a PhD in mathematics and physics and tutor a lot of high school and college students during the last 10 years. The list of subjects includes various mathematical disciplines, in particular Algebra 2. 8 Subjects: including algebra 1, algebra 2, calculus, geometry ...My master's project was on Remote monitoring of engine health using non intrusive methods. Dear Students, I currently work as Sr. Technical Specialist at Case New Holland Industrial in Burr Ridge, IL. 16 Subjects: including trigonometry, statistics, discrete math, differential equations ...Certifications: Certified Teacher (Sub: K-12, Illinois State Board of Education); Certified Paraprofessional (Illinois State Board of Education, K-12); Certified Tutor (Math, American Tutoring Association).I am a U.S. Citizen and resident of Chicago with a degree in Mechanical Engineering from P... 12 Subjects: including calculus, SAT math, ISEE, elementary math
{"url":"http://www.purplemath.com/bensenville_math_tutors.php","timestamp":"2014-04-19T17:23:34Z","content_type":null,"content_length":"23757","record_id":"<urn:uuid:9667d79e-721d-48f4-94f0-a8a65518d765>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Convolution integrals with Dirac's delta and its derivatives The first endpoint at t is a bit of a problem, because of the singularity from the delta function. We do have: [itex]\delta(t-u) \to 0[/itex] as [itex]u\to t[/itex]. At the other endpoint, it is simply [itex]\delta(t)[/itex]. So:
{"url":"http://www.physicsforums.com/showpost.php?p=726553&postcount=12","timestamp":"2014-04-19T07:34:35Z","content_type":null,"content_length":"7500","record_id":"<urn:uuid:28b92b7a-96e0-4c5b-a6f2-afb99ec7c2fb>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditional random field approach to prediction of protein-protein interactions using domain information • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Syst Biol. 2011; 5(Suppl 1): S8. Conditional random field approach to prediction of protein-protein interactions using domain information For understanding cellular systems and biological networks, it is important to analyze functions and interactions of proteins and domains. Many methods for predicting protein-protein interactions have been developed. It is known that mutual information between residues at interacting sites can be higher than that at non-interacting sites. It is based on the thought that amino acid residues at interacting sites have coevolved with those at the corresponding residues in the partner proteins. Several studies have shown that such mutual information is useful for identifying contact residues in interacting proteins. We propose novel methods using conditional random fields for predicting protein-protein interactions. We focus on the mutual information between residues, and combine it with conditional random fields. In the methods, protein-protein interactions are modeled using domain-domain interactions. We perform computational experiments using protein-protein interaction datasets for several organisms, and calculate AUC (Area Under ROC Curve) score. The results suggest that our proposed methods with and without mutual information outperform EM (Expectation Maximization) method proposed by Deng et al., which is one of the best predictors based on domain-domain interactions. We propose novel methods using conditional random fields with and without mutual information between domains. Our methods based on domain-domain interactions are useful for predicting protein-protein Understanding of protein functions and protein-protein interactions is one of important topics in the field of molecular biology and bioinformatics. Recently, many researchers have focused on the investigation of amino acid residues of proteins to reveal interactions and contacts between residues [1-4]. If residues at important sites for interactions between proteins are substituted in one protein, the corresponding residues in interacting partner proteins are expected to be also substituted by selection pressure. Otherwise, such mutated proteins may lose the interactions. Fraser et al. confirmed that interacting proteins evolve at similar evolutionary rates by comparing putatively orthologous protein sequences between S. cerevisiae and C. elegans[5]. It means that substitutions for contact residues occur in both interacting proteins as long as the proteins keep interacting with each other. Therefore, mutual information (MI) between residues is useful for predicting protein-protein interactions for proteins of unknown function. MI is calculated from multiple sequence alignments for homologous protein sequences. Weigt et al. identified direct residue contacts between sensor kinase and response regulator proteins by message passing, which is an improvement of MI [4]. Burger and van Nimwegen used a dependence tree where a node corresponds to a position of amino acid sequences, and predicted interactions using a Bayesian network method [2]. On the other hand, Markov random field and conditional random field models have been well studied in fields of natural language processing [6,7]. Also in bioinformatics, protein function prediction methods from protein-protein interaction network and other biological networks were developed using Markov random fields [8,9]. On the other hand, several prediction methods have been developed based on domain-domain interactions. Deng et al. proposed a domain-based probabilistic model of protein-protein interactions, and developed EM (Expectation Maximization) method [10]. Based on this probabilistic model, LP (Linear Programming)-based methods were developed [11], and Chen et al. improved the accuracy of interaction strength prediction by APM (Association Probabilistic Method) [12]. In this paper, we propose prediction methods based on domain-domain interactions using conditional random fields with and without mutual information. Furthermore, we perform computational experiments for several protein-protein interaction datasets, compare the methods with the EM method proposed by Deng et al. [10], which is one of the best predictors based on domain-domain interactions, and the association method proposed by Sprinzak and Margalit [13] (the APM method for binary interaction data is equivalent to the association method), and show that our methods outperform the EM method and the association method. Mutual information between domains In order to investigate the relationship between two positions of proteins, MI for distributions of amino acids at the positions is used. Such distributions can be obtained from multiple alignments of protein sequences and domain sequences. In this section, we briefly review MI for distributions of amino acids, and explain MI between domains. We assume that multiple sequence alignments for domains D[m] and D[n] are obtained, respectively (see Figure Figure1).1). In order to calculate MI, we need joint appearance frequencies. However, we cannot see which sequence in the multiple alignment of domain D[m] corresponds to a specified sequence in that of D[n]. Therefore, we assume that sequences contained in the same organism can be paired. In the example of Figure Figure1,1, the second sequence of D[m] is paired with the first one of D[n], the third one of D[m] is paired with the second one of D[n], and so on. The first sequence of D[m] is not counted into the appearance frequencies because it is not paired with any sequence of D[n] although it may be paired with sequences of other domains than D[n]. Illustration on the calculation of mutual information from multiple alignments of domains Domains D[m] and D[n] have multiple alignments of sequences from several organisms, respectively. Mutual information is calculated for each pair of positions i and ... Let A be a set of amino acids, f[i](A) be the appearance frequency of amino acid A at position i in domains D[m] and D[n], and f[ij](A, B) be the joint appearance frequency of a pair of amino acids A at position i in D[m] and B at position j in D[n], where each frequency is divided by the number of paired sequences M in the multiple alignments such that ∑[AA]f[i](A) = ∑[A,BA]f[ij](A,B) = 1. Multiple alignments often include some gaps. Weigt et al. counted the frequencies of gaps as well as amino acids [4]. Therefore, we also consider gaps to be a kind of amino acids, that is, the number of distinct amino acids is |A| = 21. Then, mutual information for positions i in D[m] and j in D[n] is defined as the Kullback-Leibler divergence between the multiplication of appearance frequencies, f[i](A)f[j](B), and the joint appearance frequencies, f[ij](A,B), as follows. If frequency distributions of amino acids at positions i and j are independent from each other, f[ij](A,B) ≈ f[i](A)f[j](B), and MI[ij] approaches to zero. This means that the two positions are not related with each other in the evolutionary process. If domains D[m] and D[n] interact at the positions, it is considered that MI[ij] becomes high because the positions have coevolved through the evolutionary process in order to keep the interaction. It should be noted that two positions i and j do not always directly interact even if MI[ij] is high [4]. However, such proteins with high values of MI have a possibility to directly interact with each other at other positions in the proteins. However, we need to reduce MI[ij] because it can be unnecessarily high depending on distributions of f[i](A) and f[j](B). For that purpose, we make use of MI[ij] from the joint frequency, f[ij](A, B ), obtained by shuffling at random the combinations of sequences in multiple alignments. In this paper, we repeat the procedure 400 times according to [4], and take the average. For practical uses of MI, f[i](A), f[j](B) and f[ij](A,B) should be positive values. Otherwise, we cannot calculate MI[ij] by using computers. Therefore, we use the following pseudocount as in [4], where η is a constant value, in this paper we use η = 1. It should be noted that the sum over all amino acids A, [AA]f[i](A) = ∑[A,BA]f[ij](A,B) = 1. In order to investigate interactions between proteins, we need MI between domains included in the proteins. Thus, we define MI between domains D[m] and D[n], M[mn], to be the maximum of MI over all positions i and j as follows. where vv, i and j are positions of D[m] and D[n], respectively. Since MI[ij] is calculated to be high for the positions i and j that include many gaps, we exclude positions that include more than 20% gaps as in [14]. Conditional random field model for PPI In this section, we propose a probabilistic model for protein-protein and domain-domain interactions using conditional random fields [6,7] because it can be considered that two domains D[m] and D[n] do not always interact even if the mutual information M[mn] is large. For example, Weigt et al. improved MI and proposed direct information (DI) because residues do not always contact with each other even if the MI is large [4]. Most proteins contain domains as is well known. If two proteins do not interact with each other, any two domains contained in the proteins must not interact with each other. In the left example of Figure Figure2,2, protein P[i] consists of domains D[1] and D[2] and protein P[j] consists of domain D[3] respectively. If P[i] and P[j] do not interact, any pair of (D [1], D[3]) and (D[3], D[3]) does not interact. Deng et al. proposed a probabilistic model for a pair of proteins as follows [10]. By assuming that proteins P[i] and P[j] interact if and only if at least a pair of domains included in the proteins interacts, and events that domains interact are independent from each other, they defined Markov random field model for protein-protein interactions Left: Example of proteins P[i] and P[j]. P[i] consists of domains D[1] and D[2], and P[j] consists of domain D[3], respectively. Right: Factor graph G(U,V,E). There exists an edge between P[ij] U and ... where P[ij] = 1 means that proteins P[i] and P[j] interact, D[mn] = 1 means that domains D[m] and D[n] interact, D[mn] P[ij] means that domain D[m] is included in protein P[i] and D[n] is included in P[j] and the product in the right hand side is calculated for all domain pairs (D[m], D[n]) included in the protein pair (P[i], P[j]). By transforming equation (5), we have where λ^(mn) = log(1 – Pr(D[mn] = 1)). From this equation, we can consider the following Markov random field model for protein pair (P[i], P[j]) (see Figure Figure22). where p[ij] d means a set of events on domain-domain interactions, D[mn] = d[mn] (d[mn] Pr(P[ij] = s, D[mn] = t), and Z[ij] denotes the normalization constant. For instance, equation (8) for p[ij] = 0 is equivalent to equation (7) in the case that P[i], P[j]) and s = t = 0, otherwise 0. In Markov random fields, random variables have Markov properties represented as an undirected graph [15]. The factor graph for our model is represented to be a bipartite graph G(U, V, E) with a set of vertices U corresponding to protein-protein interactions P[ij], a set of vertices V corresponding to domain-domain interactions D[mn], and a set of edges E between U and V as the right figure of Figure Figure2.2. There exists an edge between P[ij] U and D[mn] V if and only if D[mn] P[ij]. For the left example of Figure Figure2,2, protein pair (P[i], P[j]) includes domain pairs (D[1], D[3]) and (D[2], D[3]). Then, in the factor graph, the vertex of P[ij] is connected with vertices of D[13] and D[23], respectively. Although the vertex of P[ij] does not have other adjacent vertices than the vertices of D[13] and D[23], those of D[13] and D[23] can be connected with other vertices than that of P[ij] Since Pr(P[ij] = 0|D[mn] =t) = 1 – Pr(P[ij] = 1|D[mn] = t), it is redundant to consider both s = 0, 1, and it is sufficient to consider only s = 1. Therefore, in order to simplify the model, we substitute P[i], P[j]). Then, we have the following joint probability, where p means a set of events on protein-protein interactions, P[ij] = p[ij]. We here introduce mutual information between domains M = {M[mn]} as given conditional data in order to combine it with the probabilistic model. Then, equation (9) can be written as σ(x) = 1/(1 + e^–x) is an increasing function, and c is a positive constant. It should be noted that a negative value, –1, is given to P[ij] with domain-domain interactions D[mn] (see Figure Figure2 For a conditional random field model without MI, we use the following local feature instead of Parameter estimation In this section, we discuss how to estimate the parameters p = {p[ij]} are given. Then, the likelihood function is represented by where Z(M) = [p[ij]p]Z[ij](M). By taking the logarithm, we have We estimate the parameters by maximizing the log-likelihood function, l(λ). Since log(e^x + e^y) is a convex function for variables x and y, that is, l(λ) is a concave function, we are able to obtain a global maximum. For maximizing such functions, various methods such as the steepest descent method, Newton’s method, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) [16] method have been developed. Newton’s method calculates the inverse of the Hessian matrix for the objective function. However, the computational cost is high. Therefore, the quasi-Newton method approximates the matrix by some efficient method using the first derivatives, the gradient. In this paper, we use the BFGS method, which is one of the quasi-Newton methods. By differentiating equation (15) partially with respect to each parameter In the BFGS method, this equation is repeatedly applied for updating a solution. Computational experiments Data and implementation We used protein-protein interaction data of H. sapiens, D. melanogaster, and C. elegans from the DIP database [17], the file name is ’dip20091230.txt’. We used the UniProt Knowledgebase database (version 15.4) [18] as protein domain inclusion data. We deleted proteins that did not have any domain, and obtained 294 interacting protein pairs as positive data that included 300 distinct proteins and 320 domains for H. sapiens, 449 interacting pairs that included 562 proteins and 449 domains for D. melanogaster, and 250 interacting pairs that included 602 proteins and 476 domains for C. We used the Pfam database (version 24.0) [19] to obtain multiple sequence alignments for domains, and calculated MI, M[mn], for each pair of domains. Figure Figure33 shows the distributions of domain MI M[mn] for H. sapiens, D. melanogaster, and C. elegans. We can see from the figure that most domain MIs are distributed in the part of less than about 0.8 for all organisms. It is considered that domains D[m] and D[n] with M[mn] less than 0.8 may not interact, and domains with M[mn] more than 0.8 have more possibilities to interact with each other. Therefore, we set the constant c in equation (12) to be 0.8. Although we tried several values from 0.6 to 1.0 for c, the results were similar to the case of c = 0.8. Distributions of domain MIs for H. sapiens, D. melanogaster, and C. elegans We selected non-interacting protein pairs as negative data uniformly at random such that negative data did not overlap with the positive data. The number of negative data was the same as that of positive data for each organism. We used libLBFGS (version 1.9) with default parameters to estimate the parameters 20], and is available on the web page, http://www.chokkan.org/software/liblbfgs/. In order to evaluate our method, we compared the proposed CRF method with MI and that without MI with the EM method by Deng et al. [10] and the association method proposed by Sprinzak and Margalit [ 13]. The association method and the APM method [12] estimate probabilities λ[mn] that domains D[m] and D[n] interact as N[mn] (I[mn]) denotes the number of (interacting) protein pairs that include domain pair (D[m], D[n]), and ρ[ij] denotes the interaction strength of protein pair (P[i], P[j]), 0 ≤ ρ[ij] ≤ 1. However, our input interaction data are binary, that is, ρ[ij] takes only 0 or 1. Then, the numerator of the APM method becomes I[mn]. It means that the APM method for binary interaction data is equivalent to the association method. In the EM method, probabilities λ[mn] that domains D[m] and D[n] interact are estimated by the recursive formula, o[ij] = 1 denotes that it was observed that proteins P[i] and P[j] interact with each other, and fn = 0.8. In this paper, the solution of the association method was given as the initial value We performed five-fold cross-validation, that is, split the data into 5 datasets (4 for training and 1 for test), estimated Pr(P[ij] = 1|M) of equation (10) for each protein pair in the test dataset and AUC (Area Under ROC Curve) score, where among the test dataset only protein pairs that included at least a parameter estimated from the corresponding training dataset were always used. We repeated 5 times, and took the average. Tables Tables1,1, ,2,2, and and33 show the results on AUC for training and test datasets by the CRF method with MI, that without MI, the EM method, and the association method for H. sapiens, D. melanogaster, and C. elegans, respectively. An AUC score is the area under an ROC (Receiver Operating Characteristic) curve, and takes a value between 0 and 1. The ROC curve of a random classifier lies on the diagonal line, and the AUC score is 0.5. The ROC curve of a perfect classifier goes through the point (0 (false positive rate), 1 (true positive rate)), and the AUC score is 1. A classifier with the AUC score closer to 1 has better performance. We can see from these tables that the results by the CRF method with MI are better than those by the CRF method without MI, and that the results by the CRF method without MI are better than those by the EM method and the association method. It is also seen that the results by the EM method are almost the same as those by the association method. It might be because the parameters of the EM method were estimated from the solution of the association method and the solution of the EM method already reached a local optimum. Figures Figures4,4, ,5,5, and and66 show the average ROC curves for training and test datasets by the CRF method with MI, that without MI, the EM method, and the association method. For training datasets, the results by all of the methods were almost perfect. For test datasets, the CRF method with MI outperformed that without MI, the EM method, and the association method. It should be noted that the ROC curves of the EM method are almost the same as those of the association method for the same reason discussed above. The AUC results for training and test datasets of H. sapiens by the CRF method with MI, that without MI, the EM method, and the association method The AUC results for training and test datasets of D. melanogaster by the CRF method with MI, that without MI, the EM method, and the association method The AUC results for training and test datasets of C. elegans by the CRF method with MI, that without MI, the EM method, and the association method Average ROC curves for test datasets of H. sapiens by the CRF method with MI, that without MI, the EM method, and the association method Average ROC curves for test datasets of D. melanogaster by the CRF method with MI, that without MI, the EM method, and the association method Average ROC curves for test datasets of C. elegans by the CRF method with MI, that without MI, the EM method, and the association method We proposed novel methods which combine conditional random fields with the domain-based model of protein-protein interactions. In order to give better performance, we introduced mutual information to the probabilistic model. In the improved model, mutual information between domains is given as conditions, where MI between domains is defined as the maximum of MIs between residues in the domains. This method was developed based on the fact that amino acid residues at important sites for interactions have coevolved with each other, and MI has been used for identifying contact residues in interactions. We performed five-fold cross-validation experiments, and calculated AUC for probabilities that two proteins interact. The results suggested that our proposed methods, especially the CRF method with mutual information, are useful. However, the results of AUC for training datasets implied that estimated parameters were overfitting to training datasets. For avoiding that problem, we can improve the methods, for instance, by adding regularization terms, l[1]-norm of parameters to the log-likelihood function. Since CRF has an advantage to be able to incorporate large number of features, it remains as a future work to improve the model itself to obtain better accuracy by, for instance, modifying the local feature and adding new features. Authors contributions JS proposed the use of mutual information for predicting protein-protein interactions. Methods were developed and implemented by MH. MK and TA participated in the discussion during development of the methods. The manuscript was prepared by MH, JS, and TA. Competing interests The authors declare that they have no competing interests. This work was partially supported by Grants-in-Aid #22240009 and #21700323 from MEXT, Japan. JS would like to thank the National Health and Medical Research Council of Australia (NHMRC) and the Chinese Academy of Sciences (CAS) for financially supporting this research via the NHMRC Peter Doherty Fellowship and the Hundred Talents Program of CAS. This article has been published as part of BMC Systems Biology Volume 5 Supplement 1, 2011: Selected articles from the 4th International Conference on Computational Systems Biology (ISB 2010). The full contents of the supplement are available online at http://www.biomedcentral.com/1752-0509/5?issue=S1. • White RA, Szurmant H, Hoch JA, Hwa T. Features of protein-protein interactions in two-component signaling deduced from genomic libraries. Methods Enzymol. 2007;422:75–101. full_text. [PubMed] • Burger L, van Nimwegen E. Accurate prediction of protein-protein interactions from sequence alignments using a Bayesian method. Molecular Systems Biology. 2008;4:165. doi: 10.1038/msb4100203. [ PMC free article] [PubMed] [Cross Ref] • Halabi N, Rivoire O, Leibler S, Ranganathan R. Protein sectors: Evolutionary units of three-dimensional structure. Cell. 2009;138:774–786. doi: 10.1016/j.cell.2009.07.038. [PMC free article] [ PubMed] [Cross Ref] • Weigt M, White RA, Szurmant H, Hoch JA, Hwa T. Identification of direct residue contacts in protein-protein interaction by message passing. Proc. Natl. Acad. Sci. USA. 2009;106:67–72. doi: 10.1073/pnas.0805923106. [PMC free article] [PubMed] [Cross Ref] • Fraser HB, Hirsh AE, Steinmetz LM, Scharfe C, Feldman MW. Evolutionary rate in the protein interaction network. Science. 2002;296:750–752. doi: 10.1126/science.1068696. [PubMed] [Cross Ref] • Sha F, Pereira F. Shallow parsing with conditional random fields. Proc. HLT-NAACL 2003. 2003. pp. 134–141. • Sutton C, McCallum A. Introduction to statistical relational learning. MIT Press; 2006. An introduction to conditional random fields for relational learning; pp. 93–128. • Deng M, Zhang K, Mehta S, Chen T, Sun F. Prediction of protein function using protein-protein interaction data. Journal of Computational Biology. 2003;10(6):947–960. doi: 10.1089/ 106652703322756168. [PubMed] [Cross Ref] • Deng M, Chen T, Sun F. An integrated probabilistic model for functional prediction of proteins. Journal of Computational Biology. 2004;11:463–475. doi: 10.1089/1066527041410346. [PubMed] [Cross • Deng M, Mehta S, Sun F, Chen T. Inferring domain-domain interactions from protein-protein interactions. Genome Research. 2002;12:1540–1548. doi: 10.1101/gr.153002. [PMC free article] [PubMed] [ Cross Ref] • Hayashida M, Ueda N, Akutsu T. Inferring strengths of protein-protein interactions from experimental data using linear programming. Bioinformatics. 2003;19(suppl 2):ii58–ii65. doi: 10.1093/ bioinformatics/btg1061. [PubMed] [Cross Ref] • Chen L, Wu LY, Wang Y, Zhang XS. Inferring protein interactions from experimental data by association probabilistic method. Proteins. 2006;62(4):833–837. doi: 10.1002/prot.20783. [PubMed] [Cross • Sprinzak E, Margalit H. Correlated sequence-signatures as markers of protein-protein interaction. Journal of Molecular Biology. 2001;311:681–692. doi: 10.1006/jmbi.2001.4920. [PubMed] [Cross Ref] • Little DY, Chen L. Identification of coevolving residues and coevolution potentials emphasizing structure, bond formation and catalytic coordination in protein evolution. PLoS One. 2009;4:e4762. doi: 10.1371/journal.pone.0004762. [PMC free article] [PubMed] [Cross Ref] • Moussouri J. Gibbs and Markov random systems with constraints. Journal of Statistical Physics. 1974;10:11–33. doi: 10.1007/BF01011714. [Cross Ref] • Bertsekas DP. Nonlinear Programming. Athena Scientific; 1999. • Salwinski L, Miller CS, Smith AJ, Pettit FK, Bowie JU, Eisenberg D. The Database of Interacting Proteins: 2004 update. Nucleic Acids Research. 2004;32:D449–D451. doi: 10.1093/nar/gkh086. [PMC free article] [PubMed] [Cross Ref] • The UniProt Consortium. The Universal Protein Resource (UniProt) in 2010. Nucleic Acids Research. 2010;38:D142–D148. doi: 10.1093/nar/gkp846. [PMC free article] [PubMed] [Cross Ref] • Finn RD, Mistry J, Tate J, Coggill P, Heger A, Pollington JE, Gavin OL, Gunasekaran P, Ceric G, Forslund K, Holm L, Sonnhammer ELL, Eddy SR, Bateman A. The Pfam protein families database. Nucleic Acids Research. 2010;38:D211–D222. doi: 10.1093/nar/gkp985. [PMC free article] [PubMed] [Cross Ref] • Nocedal J. Updating quasi-Newton matrices with limited storage. Mathematics of Computation. 1980;35(151):773–782. doi: 10.1090/S0025-5718-1980-0572855-7. [Cross Ref] Articles from BMC Systems Biology are provided here courtesy of BioMed Central • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3121124/?tool=pubmed","timestamp":"2014-04-18T03:43:03Z","content_type":null,"content_length":"105808","record_id":"<urn:uuid:58546597-1c55-4e39-9648-d0c6fbf9c6a4>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Left and Right Cosets April 22nd 2009, 09:59 PM Left and Right Cosets Let G = S3, the symmetric group of degree 3 and let H = {i,f} where f(x1) = x2, f(x2) = x1, f(x3) = x3 a) find all the left cosets of H in G b) find all the right cosets of H in G c) Is every left coset of H a right coset of H? Please show explicit steps I'm very confused on how to prove these coset problems. Thanks so much! April 22nd 2009, 11:06 PM cycle notation I find it a lot easier to use cycle notation when working with the symmetric group. example $(1,2,4, 7)\in S_7$ is the permutation that takes 1 to 2, 2 to 4, 4 to 7 and 7 to 1. The unmentioned numbers are left the same. Your coset $H=\{(1), (1,2)\}$ it has size two and $|S_n|=n! \Rightarrow |S_3|=3!=6$. So you should expect 3 cosets each of size 2. Basically you just gotta multiply them out and see what happens. Here are the left cosets of H $(1)H=\{(1), (1,2)\}$ $(1,3)H=\{(1,3), (1,2,3)\}$ $(2,3)H=\{(2,3), (1,3,2)\}$ Right cosets of H found similarly $H(1)=\{(1), (1,2)\}$ $H(1,3)=\{(1,3), (1,3,2)\}$ $H(2,3)=\{(2,3), (1,2,3)\}$ Compare these cosets and see that the last two do not match up, so these are not the same. In particular this tells you that H is infact not a normal subgroup of $S_3$ because $(1,3)H ot = H(1,3)
{"url":"http://mathhelpforum.com/advanced-algebra/85182-left-right-cosets-print.html","timestamp":"2014-04-20T02:32:49Z","content_type":null,"content_length":"6482","record_id":"<urn:uuid:9740cf9f-aa09-4c5c-996f-7706727372fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
34,117pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Educational Psychology: Assessment · Issues · Theory & research · Techniques · Techniques X subject · Special Ed. · Pastoral Main article: acalculia Dyscalculia is defined as a specific learning difficulty affecting a person's ability to understand and/or manipulate numbers. Like dyslexia, dyscalculia can be caused by a visual perceptual deficit. Dyscalculia is often used to refer specifically to the inability to perform operations in math or arithmetic, but is defined by some educational professionals as a more fundamental inability to conceptualize numbers themselves as an abstract concept of comparative quantities. It is a lesser known disability, much like dyslexia and dyspraxia. In fact, it is considered by some to be a variation of dyslexia. Dyscalculia occurs in people across the whole IQ range, but means they often have specific problems with mathematics, time, measurement, etc. Dyscalculia (in its more general definition) is not rare. Many of those with dyslexia or dyspraxia have dyscalculia as well. There is also some evidence to suggest that this type of SpLD is partially hereditary, although there are scholars who remind us that dyscalculia, like many other learning differences, may be a socially constructed concept. Potential symptoms • Frequent difficulties with numbers, confusing the signs: +, -, / and x, reversing or transposing numbers etc. • Inability to say which of two numbers is the larger. • Reliance on 'counting-on' strategies, often using fingers, rather than any more efficient mental arithmetic strategies. • Difficulty with times-tables, mental arithmetic, measurements, etc. • Good in subjects like science and geometry until a higher level requiring calculations is needed. • Difficulty with conceptualising time and judging the passing of time. • Difficulty with everyday tasks like checking change and reading analogue clocks. • Inability to comprehend financial planning or budgeting, sometimes even at a basic level, for example estimating the cost of the items in a shopping basket. • Inability to grasp and remember maths concepts, rules, formulae, sequences. • Difficulty keeping score during games. • The condition may lead in extreme cases to a phobia of mathematics and mathematical devices (i.e. numbers) Potential causes • Neurological: Dyscalculia has been associated with lesions to the supramarginal and angular gyri at the junction between the temporal and parietal lobes of the cerebral cortex^[1]^[2]. • Deficits in Working Memory: Adams and Hitch^[3] argue that working memory is a major factor in mental addition. From this base, Geary^[4] conducted a study that suggested there was a working memory deficit for those who suffered with dyscalculia. However, working memory problems are confounded with general learning difficulties, thus Geary's findings may not be specific to dyscalculia but rather may reflect a greater learning deficit. Dealing with students having dyscalculia • Give them extra time for numerical problems. • Make sure that the student has actually understood the problem. • Attempt to determine whether the learning style of the student is primarily visual, auditory or kinaesthetic. • Encourage students to "visualize" the quantities involved in mathematics problems. • Be aware that students may use non-standard methods to solve problems. If their method is helpful, encourage it. • Where appropriate have the student read problems out loud and listen carefully. • Provide plenty of examples and try to relate problems to real-life situations. • Provide uncluttered worksheets. • Dyscalculic students will probably need to spend considerable extra time memorizing mathematical facts. Repetition is greatly important. Rhythm or music may help the process. • Severely dyscalculic students, particularly if they are also dyslexic, may in fact have too poor a memory to memorise by rote at all. In this case, they should first concentrate on strengthening the basic numerical bonds and then use of calculation strategies. • Do not scold or pity the student. • Where appropriate, seek the advice of the SENCO or Ed. Psych. See also • Gerstmann syndrome: dyscalculia is but one symptom. • The DSM-IV diagnosis mathematics disorder can be applied to people whose mathematical abilities are well below the expected level for their age. External links Further reading • Henderson Anne, Came Fil, Brough Mel. "Working with Dyscalculia." [5] Learning Works International Ltd, 2003, ISBN: 0953105520) • Butterworth, Brian. "Dyscalculia Guidance: Helping Pupils With Specific Learning Difficulties in Maths." (David Fulton Pub, 2004, ISBN: 0708711529) • Chinn, Steve. "The Trouble with Maths: A Practical Guide to Helping Learners with Numeracy Difficulties." (RoutledgeFalmer, 2004, ISBN: 041532498X) • Attwood, Tony. "Dyscalculia in Schools: What It Is and What You Can Do." (First and Best in Education Ltd, 2002, ISBN: 1860836143) • Abeel, Samantha. "My Thirteenth Winter." (Orchard Books, 2003, ISBN: 0439339049)
{"url":"http://psychology.wikia.com/wiki/Mathematics_disorder","timestamp":"2014-04-17T05:13:11Z","content_type":null,"content_length":"71050","record_id":"<urn:uuid:5ec5b6a6-db29-47e8-a6e4-ec4aa1b86c9e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: The tow truck company charges $95 plus an additional $0.30 per mile. If your total bill was $101.60, how many miles did the tow truck take you?,,<--- help please! Best Response You've already chosen the best response. 22 miles Best Response You've already chosen the best response. 95+.30x=101.60 Let x equal the miles then subtract 95 and divide by .30 Best Response You've already chosen the best response. i'm just wondering is this your math homework? Best Response You've already chosen the best response. its a geometry thing im making up the end of semester is tommorow and i suck at geometry and dont get it at all but if i dont get a B in the class i wont be able to play highschool soccer this Best Response You've already chosen the best response. don't just ask for the answer or you won't learn anything Best Response You've already chosen the best response. 101.60 - 95 = 6.60 6.60 / 0.30 = 22 miles remember to click good answer Best Response You've already chosen the best response. let me give you a problem (8,2) (6,-4) FIND THE MIDPOINT AND DISTANCE Best Response You've already chosen the best response. ok hold on Best Response You've already chosen the best response. distance is 6.32 and the midpoint is 2,10 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ef14940e4b0dc507db683fc","timestamp":"2014-04-18T18:25:34Z","content_type":null,"content_length":"46723","record_id":"<urn:uuid:a6e9d9be-3fc2-4300-a0ed-594e12a74aaa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
finding the limit March 19th 2009, 01:28 PM #1 Jan 2009 finding the limit what is the limit limit as x-> infinity of (sin x) / sqrt(x) i know lim (sin x)/x = 1 but how do i find the one above??? I hope you don't know that! sin(x)/x goes to 1 as x goes to 0 but your limit is for x going to infinity. You know that sin(x) is never larger than 1 nor less than -1: $\frac{-1}{x}\le\frac{sin x} {x}\le\frac{1}{x}$. Now, what happens to those end terms as x goes to infinity? Moo's hint would be relevant if the limit were as x goes to 0. I think you mislead him! March 19th 2009, 01:34 PM #2 March 19th 2009, 03:56 PM #3 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/calculus/79546-finding-limit.html","timestamp":"2014-04-19T07:31:26Z","content_type":null,"content_length":"38155","record_id":"<urn:uuid:f5047543-3334-4ee2-86b9-680ca0a5fbbe>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
TR06-061 | 5th May 2006 00:00 Hardness of Learning Halfspaces with Noise Learning an unknown halfspace (also called a perceptron) from labeled examples is one of the classic problems in machine learning. In the noise-free case, when a halfspace consistent with all the training examples exists, the problem can be solved in polynomial time using linear programming. However, under the promise that a halfspace consistent with a fraction (1-\eps) of the examples exists (for some small constant \eps > 0), it was not known how to efficiently find a halfspace that is correct on even 51% of the examples. Nor was a hardness result that ruled out getting agreement on more than 99.9% of the examples known. In this work, we close this gap in our understanding, and prove that even a tiny amount of worst-case noise makes the problem of learning halfspaces intractable in a strong sense. Specifically, for arbitrary \epsilon,\delta > 0, we prove that given a set of examples-label pairs from the hypercube a fraction (1-\eps) of which can be explained by a halfspace, it is NP-hard to find a halfspace that correctly labels a fraction (1/2+\delta) of the The hardness result is tight since it is trivial to get agreement on 1/2 the examples. In learning theory parlance, we prove that weak proper agnostic learning of halfspaces is hard. This settles a question that was raised by Blum et al in their work on learning halfspaces in the presence of random classification noise, and in some more recent works as well. Along the way, we also obtain a strong hardness for another basic computational problem: solving a linear system over the
{"url":"http://eccc.hpi-web.de/eccc-reports/2006/TR06-061/index.html","timestamp":"2014-04-18T15:39:53Z","content_type":null,"content_length":"20861","record_id":"<urn:uuid:72dbb4a1-b9f0-4854-8943-af04adb9d851>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
application to physics and engineering November 17th 2007, 05:40 PM application to physics and engineering Hi everyone, Could someone please help me on this problem? A spring has a natural length of 20 cm. If a 25 N force is required to keep it stretched to a length of 30 cm, how much work is required to stretch it from 20 cm to 25 cm? I know that work=(force)(distance) Force=25 N Is this correct so far? If it isn't, could you please show me what to do? Thank you very much November 17th 2007, 08:47 PM A spring has a natural length of 20 cm. If a 25 N force is required to keep it stretched to a length of 30 cm, how much work is required to stretch it from 20 cm to 25 cm? I know that work=(force)(distance) Force=25 N Is this correct so far? ... unfortunately no. 1. If you stretch a spring the necessary force is not constant but is proportional to the distance you have stretched the spring: If F is the force, d is the distance of stretching then: $\frac Fd=k$ . This quotient is constant (in certain boarders: You can't stretch a spring of 10 cm to a length of 1 km). In your case: $\frac{25\ N}{10\ cm}=\boxed{k=2.5\ \frac N{cm}}$ 2. Since the force is not constant your formula to calculate the work is not valid. The work done to stretch a spring can be calculated by: $w=\frac12 \cdot k \cdot d^2$ . Plug in the values for k and d. $w=\frac12 \cdot 2.5\ \frac N{cm} \cdot 5^2\ cm^2=31.25\ Ncm = 0.3125 \ J$ November 18th 2007, 10:43 AM Thank you very much I haven't learned that formula Could you also do it this way? 20cm to 30 cm w=int. .10 to .25(250xdx) =(250x^2/2)from .10 to .25 (Are these limits correct?) Thank you very much November 18th 2007, 01:50 PM Actually, the two of you are using the same method: $W = \int \vec{F} \cdot \vec{ds}$ When you put Hooke's law in for the force, you get the formula earboth used. November 18th 2007, 03:40 PM Do you see where I made my mistake? Thank you very much
{"url":"http://mathhelpforum.com/calculus/22978-application-physics-engineering-print.html","timestamp":"2014-04-19T12:54:54Z","content_type":null,"content_length":"8542","record_id":"<urn:uuid:047a3d54-960d-485f-b7ac-bbc33d9ac659>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: coins problem Replies: 7 Last Post: Jun 16, 2009 10:34 PM Messages: [ Previous | Next ] Topics: [ Previous | Next ] Re: coins problem Posted: Feb 1, 1999 8:41 AM On Mon, 1 Feb 1999, Helena Verrill wrote: > By the way, you said that an information count would make you > think you could do (3^n-1)/2 --- I suppose you mean that there are 3 > possibilities for weighing, so n weighing gives 3^n outcomes, > and k coins means 2k possibilitis, (anyone light or heavy), I'd prefer to note the all-balanced case can't happen if there's a fake coin, but does happen if there's no fake, which increases the number of possibilities to 2k+1. > so, max k has 2k=3^n, and since that k is not an integer, > you get (3^n-1)/2.... so how do you prove you can't do this many? I found a very easy argument to prove this when I considered this problem as a student, but can't remember it at this moment. > And why is it, that if you just have the extra information > that you know the odd coin is light, then the 'information > count' does work, and you can do 3^n in n weightings? > what's the difference? Since 3^n is what you'd expect, it requires no explanation, unlike (3^n - 3)/2, for which I'll try to reconstruct the John Conway
{"url":"http://mathforum.org/kb/thread.jspa?messageID=1085030&tstart=0","timestamp":"2014-04-17T12:36:42Z","content_type":null,"content_length":"25544","record_id":"<urn:uuid:4f73c2f8-9e81-4882-8b0d-e6e3482cc2aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Marlborough, MA SAT Math Tutor Find a Marlborough, MA SAT Math Tutor ...All this experience has taught me that all students can learn math and that I have (and continue to develop) excellent skills in convincing students of their abilities and helping students succeed and even excel in mathematics (even if they have had difficulties in the past).In addition to the st... 14 Subjects: including SAT math, calculus, geometry, algebra 1 ...I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. 8 Subjects: including SAT math, calculus, geometry, algebra 1 ...For six years I have been employed as a high school math teacher, mostly specialized in MCAS prep but spending some time teaching Algebra 2, trigonometry, and pre-calculus. I also have degrees from Fitchburg State College in literature and theatre, so although it is currently not my most recent ... 27 Subjects: including SAT math, reading, writing, English ...My graduate research focused on the teaching of reading, so I also very familiar with programs like Foundations. As a long time employee of the Museum of Science in Boston, I have taught many hands on science workshops and courses for students in grades k-6. I really enjoy working with this gro... 38 Subjects: including SAT math, reading, English, Spanish ...I believe that any person can find the joy in learning, so that school becomes a passion and not just a chore. As an educator, I build excitement for learning by motivating, inspiring, and sparking curiosity by meeting students at their OWN level, demonstrating respect for students as people wit... 16 Subjects: including SAT math, reading, writing, algebra 1 Related Marlborough, MA Tutors Marlborough, MA Accounting Tutors Marlborough, MA ACT Tutors Marlborough, MA Algebra Tutors Marlborough, MA Algebra 2 Tutors Marlborough, MA Calculus Tutors Marlborough, MA Geometry Tutors Marlborough, MA Math Tutors Marlborough, MA Prealgebra Tutors Marlborough, MA Precalculus Tutors Marlborough, MA SAT Tutors Marlborough, MA SAT Math Tutors Marlborough, MA Science Tutors Marlborough, MA Statistics Tutors Marlborough, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/marlborough_ma_sat_math_tutors.php","timestamp":"2014-04-21T13:16:11Z","content_type":null,"content_length":"24112","record_id":"<urn:uuid:b2846876-9e99-4b04-8a33-50f60054c881>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00555-ip-10-147-4-33.ec2.internal.warc.gz"}