content
stringlengths
86
994k
meta
stringlengths
288
619
American Mathematical Society Bulletin Notices AMS Sectional Meeting Program by Day Current as of Tuesday, April 12, 2005 15:21:28 Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org 2001 Spring Eastern Section Meeting Hoboken, NJ, April 28-29, 2001 Meeting #966 Associate secretaries: Lesley M Sibner , AMS Saturday April 28, 2001 • Saturday April 28, 2001, 7:30 a.m.-4:30 p.m. Meeting Registration Bissinger Room, 4th Floor, Howe Center • Saturday April 28, 2001, 7:30 a.m.-4:30 p.m. Book Sale and Exhibit Bissinger Room, 4th Floor, Howe Center • Saturday April 28, 2001, 8:30 a.m.-10:50 a.m. Special Session on Analytic Number Theory, I Room 229, Edwin A. Stevens (EAS) Hall Milos A. Dostal, Stevens Institute of Technology mdostal@stevens-tech.edu Werner G. Nowak, Universit\"at f\"ur Bodenkultur, Vienna, Austria nowak@mail.boku.ac.at • Saturday April 28, 2001, 8:30 a.m.-10:50 a.m. Special Session on History of Mathematics, I Room 222, Edwin A. Stevens (EAS) Hall Patricia R. Allaire, Queensborough Community College, CUNY praqb@cunyvm.cuny.edu Robert E. Bradley, Adelphi University bradley@adelphi.edu • Saturday April 28, 2001, 9:00 a.m.-10:50 a.m. Special Session on Computational Algebraic Geometry and Its Applications, I Room 329, Edwin A. Stevens (EAS) Hall Serkan Hosten, San Francisco State University Frank Sottile, University of Massachusetts at Amherst sottile@math.umass.edu • Saturday April 28, 2001, 9:00 a.m.-10:50 a.m. Special Session on Singular and Degenerate Nonlinear Elliptic Boundary Value Problems, I Room 106, McLean Hall Joe McKenna, University of Connecticut mckenna@math.uconn.edu Changfeng Gui, University of Connecticut gui@math.uconn.edu Yung Sze Choi, University of Connecticut choi@math.uconn.edu • Saturday April 28, 2001, 9:00 a.m.-10:50 a.m. Special Session on Ricci Curvature and Related Topics, I Room 212, Edwin A. Stevens (EAS) Hall George I. Kamberov, Stevens Institute of Technology gkambero@stevens-tech.edu Christina Sormani, Lehman College, CUNY sormanic@member.ams.org Megan M. Kerr, Wellesley College mkerr@wellesley.edu • Saturday April 28, 2001, 9:00 a.m.-10:20 a.m. Special Session on Stability of Nonlinear Dispersive Waves, I Room 105, McLean Hall Yi Li, Stevens Institute of Technology yili@cs.stevens-tech.edu Keith S. Promislow, Simon Fraser University kpromisl@cs.sfu.ca • Saturday April 28, 2001, 9:00 a.m.-10:55 a.m. Special Session on Surface Geometry and Shape Perception, I Room 230, Edwin A. Stevens (EAS) Hall Gary R. Jensen, Washington University George I. Kamberov, Stevens Institute of Technology kamberov@cs.stevens-tech.edu • Saturday April 28, 2001, 9:00 a.m.-10:20 a.m. Special Session on Graph Theory (Dedicated to Frank Harary on His 80th Birthday), I Room 322, Edwin A. Stevens (EAS) Hall Michael L. Gargano, Pace University mgargano@pace.edu Louis V. Quintas, Pace University lquintas@pace.edu Charles Suffel, Stevens Institute of Technology csuffel@stevens-tech.edu □ 9:00 a.m. The Reversible Random $f$-Graph Process. Krystyna T Bali\'{n}ska, The Technical University of Pozna\'{n}, POLAND Louis V Quintas*, Pace University □ 9:30 a.m. Skew Diagrams and Ordered Trees. Melkamu Zeleke*, William Paterson University Robert G Rieper, William Paterson University □ 10:00 a.m. The Eccentric Digraph of a Graph. Fred Buckley*, Baruch College (CUNY) • Saturday April 28, 2001, 9:00 a.m.-10:50 a.m. Special Session on Wavelets, Multiscale Analysis, and Applications, I Room 308, Edwin A. Stevens (EAS) Hall Ivan Selesnick, Polytechnic University selesi@taco.poly.edu Gerald Schuller, Bell Laboratories schuller@research.bell-labs.com • Saturday April 28, 2001, 9:15 a.m.-10:50 a.m. Special Session on Deformation Quantization and Its Applications, I Room 229A, Edwin A. Stevens (EAS) Hall Siddhartha Sahi, Rutgers University sahi@math.rutgers.edu Martin J. Andler, University of Versailles andler@math.uvsq.fr □ 9:15 a.m. Deformation quantization, its genesis and avatars. Daniel H Sternheimer*, CNRS and Université de Bourgogne, France □ 10:00 a.m. □ 10:20 a.m. String Topology Revisited. Alexander A Voronov*, Michigan State University • Saturday April 28, 2001, 9:30 a.m.-10:50 a.m. Special Session on Computational Group Theory, I Room 330, Edwin A. Stevens (EAS) Hall Robert Gilman, Stevens Institute of Technology rgilman@stevens-tech.edu Alexei Myasnikov, City College, New York alexei@rio.sci.ccny.cuny.edu Vladimir Shpilrain, City College, New York shpil@groups.sci.ccny.cuny.edu Sean Cleary, City College, New York cleary@scisun.sci.ccny.cuny.edu □ 9:30 a.m. Rewriting systems for groups and monoids, with applications to Gr\"obner bases. Susan M Hermiller*, University of Nebraska Jonathon P McCammond, Texas A & M University □ 10:00 a.m. Rigidity of Coxeter groups. Jon McCammond*, Texas A&M University Noel Brady, University of Oklahoma Bernhard Muehlherr, Universitat Dortmund Walter Neumann, Barnard College, Columbia University □ 10:30 a.m. Braid Group Representations of Low Degree. Edward W Formanek*, Pennsylvania State University • Saturday April 28, 2001, 9:30 a.m.-10:40 a.m. Special Session on Matchings in Graphs and Hypergraphs, I Room 323, Edwin A. Stevens (EAS) Hall Alexander Barvinok, University of Michigan barvinok@math.lsa.umich.edu Alex Samorodnitsky, Institute for Advanced Study asamor@ias.edu □ 9:30 a.m. \bf A generalization of mixed discriminants and an attempt of Quantum matching theory. Leonid Gurvits*, Los Alamos National Laboratory □ 10:10 a.m. An inequality for polymatroid functions and its applications. Endre Boros, Rutcor, Rutgers University Khaled Elbassioni, Department of Computer Science, Rutgers University Vladimir Gurvich, Rutcor, Rutgers University Leonid Khachiyan*, Department of Computer Science, Rutgers University • Saturday April 28, 2001, 11:00 a.m.-11:50 a.m. Invited Address A Gromov-Witten invariant in the real world. DeBaun Theatre, Edwin A. Stevens (EAS) Hall Frank Sottile*, University of Massachusetts, Amherst • Saturday April 28, 2001, 1:30 p.m.-2:20 p.m. Invited Address Complexity and geometry of counting. DeBaun Theatre, Edwin A. Stevens (EAS) Hall Alexander Barvinok*, University of Michigan, Ann Arbor • Saturday April 28, 2001, 2:30 p.m.-4:50 p.m. Special Session on Analytic Number Theory, II Room 229, Edwin A. Stevens (EAS) Hall Milos A. Dostal, Stevens Institute of Technology mdostal@stevens-tech.edu Werner G. Nowak, Universit\"at f\"ur Bodenkultur, Vienna, Austria nowak@mail.boku.ac.at • Saturday April 28, 2001, 2:30 p.m.-4:50 p.m. Special Session on Computational Algebraic Geometry and Its Applications, II Room 329, Edwin A. Stevens (EAS) Hall Serkan Hosten, San Francisco State University Frank Sottile, University of Massachusetts at Amherst sottile@math.umass.edu • Saturday April 28, 2001, 2:30 p.m.-5:00 p.m. Special Session on Computational Group Theory, II Room 330, Edwin A. Stevens (EAS) Hall Robert Gilman, Stevens Institute of Technology rgilman@stevens-tech.edu Alexei Myasnikov, City College, New York alexei@rio.sci.ccny.cuny.edu Vladimir Shpilrain, City College, New York shpil@groups.sci.ccny.cuny.edu Sean Cleary, City College, New York cleary@scisun.sci.ccny.cuny.edu □ 2:30 p.m. On the number of sixth powers needed to define the Burnside group $B(2,6)$. Charles C Sims*, Rutgers University □ 3:00 p.m. The Membership Problem for Baumslag-Solitar Groups. Paul E. Schupp*, University of Illinois at Urbana-Champaign □ 3:30 p.m. Algorithms for finite linear groups. William M Kantor, University of Oregon, Eugene, Oregon \'Akos Seress*, The Ohio State University, Columbus, Ohio □ 4:00 p.m. □ 4:10 p.m. Representing subgroups of finitely presented groups by quotient subgroups. Alexander J Hulpke*, Ohio State University □ 4:40 p.m. The Word Problem in Groups. Derek F Holt*, University of Warwick • Saturday April 28, 2001, 2:30 p.m.-4:50 p.m. Special Session on Singular and Degenerate Nonlinear Elliptic Boundary Value Problems, II Room 106, McLean Hall Joe McKenna, University of Connecticut mckenna@math.uconn.edu Changfeng Gui, University of Connecticut gui@math.uconn.edu Yung Sze Choi, University of Connecticut choi@math.uconn.edu • Saturday April 28, 2001, 2:30 p.m.-4:50 p.m. Special Session on History of Mathematics, II Room 222, Edwin A. Stevens (EAS) Hall Patricia R. Allaire, Queensborough Community College, CUNY praqb@cunyvm.cuny.edu Robert E. Bradley, Adelphi University bradley@adelphi.edu • Saturday April 28, 2001, 2:30 p.m.-4:20 p.m. Special Session on Matchings in Graphs and Hypergraphs, II Room 323, Edwin A. Stevens (EAS) Hall Alexander Barvinok, University of Michigan barvinok@math.lsa.umich.edu Alex Samorodnitsky, Institute for Advanced Study asamor@ias.edu □ 2:30 p.m. Approximating the permanent. Mark Jerrum, Edinburgh Alistair Sinclair, Berkeley Eric Vigoda*, Edinburgh □ 3:10 p.m. Remarks on the complexity of RAS algorithm, application in permanent and perfect matching, and a generalization in scaling linear operators over the cone of positive definite matrices. Bahman Kalantari*, Rutgers University □ 3:50 p.m. Applications of Sinkhorn balancing: low-cost approximations for hard problems. Francis Sullivan*, IDA Center for Computing Sciences Isabel Beichl, NIST • Saturday April 28, 2001, 2:30 p.m.-4:50 p.m. Special Session on Ricci Curvature and Related Topics, II Room 212, Edwin A. Stevens (EAS) Hall George I. Kamberov, Stevens Institute of Technology gkambero@stevens-tech.edu Christina Sormani, Lehman College, CUNY sormanic@member.ams.org Megan M. Kerr, Wellesley College mkerr@wellesley.edu □ 2:30 p.m. Ricci and Mean curvature -- from combinatorial to smooth. Igor Rivin*, University of Manchester and Temple University Jean-Marc Schlenker, Universite Paul Sabatier, Toulouse □ 3:00 p.m. Symplectic connections of Ricci-type. Simone Gutt*, Universite Libre de Bruxelles □ 3:30 p.m. □ 4:00 p.m. Solutions of the Maxwell-Einstein Equations. Nina Zipser*, MIT □ 4:30 p.m. Conformally invariant differential equations and the curvature tensor of a 4-manifold. Paul Yang*, Institute for Advanced Study and Univ. of Southern California • Saturday April 28, 2001, 2:30 p.m.-5:20 p.m. Special Session on Quantum Error Correction and Related Aspects of Coding Theory, I Room 104, McLean Hall Harriet S. Pollatsek, Mount Holyoke College hpollats@mtholyoke.edu M. Beth Ruskai, University of Massachusetts at Lowell bruskai@cs.uml.edu • Saturday April 28, 2001, 2:30 p.m.-4:55 p.m. Special Session on Surface Geometry and Shape Perception, II Room 230, Edwin A. Stevens (EAS) Hall Gary R. Jensen, Washington University George I. Kamberov, Stevens Institute of Technology kamberov@cs.stevens-tech.edu • Saturday April 28, 2001, 2:30 p.m.-4:20 p.m. Special Session on Graph Theory (Dedicated to Frank Harary on His 80th Birthday), II Room 322, Edwin A. Stevens (EAS) Hall Michael L. Gargano, Pace University mgargano@pace.edu Louis V. Quintas, Pace University lquintas@pace.edu Charles Suffel, Stevens Institute of Technology csuffel@stevens-tech.edu • Saturday April 28, 2001, 2:30 p.m.-4:50 p.m. Special Session on Wavelets, Multiscale Analysis, and Applications, II Room 308, Edwin A. Stevens (EAS) Hall Ivan Selesnick, Polytechnic University selesi@taco.poly.edu Gerald Schuller, Bell Laboratories schuller@research.bell-labs.com • Saturday April 28, 2001, 3:00 p.m.-4:50 p.m. Special Session on Stability of Nonlinear Dispersive Waves, II Room 105, McLean Hall Yi Li, Stevens Institute of Technology yili@cs.stevens-tech.edu Keith S. Promislow, Simon Fraser University kpromisl@cs.sfu.ca • Saturday April 28, 2001, 3:00 p.m.-5:00 p.m. Special Session on Deformation Quantization and Its Applications, II Room 229A, Edwin A. Stevens (EAS) Hall Siddhartha Sahi, Rutgers University sahi@math.rutgers.edu Martin J. Andler, University of Versailles andler@math.uvsq.fr □ 3:00 p.m. Linearization of Poisson actions, singular values of matrix products, and the hyperbolic Duflo isomorphism. Anton Alekseev, Uppsala University Eckhard Meinrenken, University of Toronto Chris Woodward*, Rutgers, New Brunswick □ 3:40 p.m. Hochschild cohomology for the Kontsevich's graphs. Mohsen Masmoudi*, Universit\'e de Metz (France) Didier Arnal, Universit\'e de Metz (France) □ 4:10 p.m. □ 4:30 p.m. Semiclassical geometry of quantum line bundles and Morita equivalence of star products. Henrique Bursztyn*, U.C. Berkeley • Saturday April 28, 2001, 5:00 p.m.-7:00 p.m. Department of Mathematics Reception Great Hall, Samuel Williams Library Inquiries: meet@ams.org
{"url":"http://ams.org/meetings/sectional/2056_program_saturday.html","timestamp":"2014-04-17T20:03:14Z","content_type":null,"content_length":"78114","record_id":"<urn:uuid:bb1760b7-04cd-4092-8c2b-58fb2ebc7ad2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
monoids-0.1.33: Monoids, specialized containers and a general map/reduce framework Source code Contents Index Portability non-portable (type families, MPTCs) Data.Monoid.Combinators Stability experimental Maintainer ekmett@gmail.com Utilities for working with Monoids that conflict with names from the Prelude, Data.Foldable, Control.Monad or elsewhere. Intended to be imported qualified. import Data.Monoid.Combinators as Monoid List-Like Monoid Production repeat :: Reducer e m => e -> m Source A generalization of Data.List.repeat to an arbitrary Monoid. May fail to terminate for some values in some monoids. replicate :: (Monoid m, Integral n) => m -> n -> m Source A generalization of Data.List.replicate to an arbitrary Monoid. Adapted from http://augustss.blogspot.com/2008/07/lost-and-found-if-i-write-108-in.html cycle :: Monoid m => m -> m Source A generalization of Data.List.cycle to an arbitrary Monoid. May fail to terminate for some values in some monoids. QuickCheck Properties prop_replicate_right_distributive :: (Eq m, Monoid m, Arbitrary m, Integral n) => m -> n -> n -> Bool Source Produced by Haddock version 2.4.2
{"url":"http://hackage.haskell.org/package/monoids-0.1.33/docs/Data-Monoid-Combinators.html","timestamp":"2014-04-18T06:39:24Z","content_type":null,"content_length":"8302","record_id":"<urn:uuid:bcf2a44a-8f88-4be3-ab09-7a563190a515>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
question about higher geometric stacks up vote 6 down vote favorite I have a naive question I am asking. Given a higher geometric stack X in the sense of Simpson, Toen etc is it true that there is an affinization Spec Gamma(O_X) such that Hom(X, Spec(A))= Hom(A,Gamma (O_X)) for every affine scheme Spec(A)? Or does this require some more hypotheses. I have very much a hard time finding this out. tag-removed ag.algebraic-geometry derived-algebraic-geometr add comment 1 Answer active oldest votes Yes -- affinization is defined (as you wrote) as the left adjoint to the inclusion of affine schemes into higher stacks. This left adjoint exists by the ($\infty$-categorical) adjoint functor theorem, since the inclusion of affines into higher stacks preserves all limits (though it certainly changes colimits). Some references for this or closely related notions: up vote 8 Toen's Affine Stacks (here) and Lurie's DAG VIII (available here), where the relevant notion is called "coaffine stacks". There's also a less professional and more informal discussion down vote (in the derived context) in Section 3.2 here. add comment Not the answer you're looking for? Browse other questions tagged tag-removed ag.algebraic-geometry derived-algebraic-geometr or ask your own question.
{"url":"http://mathoverflow.net/questions/108618/question-about-higher-geometric-stacks/108642","timestamp":"2014-04-17T12:55:04Z","content_type":null,"content_length":"49890","record_id":"<urn:uuid:9b745f6b-a5e9-4b52-8209-024f6e3c643f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume IV, Number 1. Summer, 1993. \hsize = 6.5 true in %THE FIRST 14 LINES ARE TYPESETTING CODE. %SOLSTlCE is typeset, using TeX, for the %reader to download including mathematical notation. %BACK ISSUES open anonymous FTP host um.cc.umich.edu %account GCFS (do not type the percent sign in any of the %following instructions). After you are in the system, type %cd GCFS and then type on the next line, ls. Then type %get filename (substitute a name from the directory). File %names carry the journal name, tagged with the number and %volume in the file extension--thus, solstice.190, is number %1 of the 1990 volume of Solstice. Account GCFS is for ftp %ONLY--send other messages to solstice@umichum or %Files available on GCFS are (do NOT include the percent sign--that %simply indicates comments in TeX--they are not printed). %The UM Computing Center warns users that the MTS connections %for FTP are old. They suggest dialing in %during non-peak hours and trying repeatedly if failure %is experienced. %Files beginning with this issue, Solstice.193, are available %via anonymous FTP to account IEVG as above. \input fontmac %delete to download, except on Univ. Mich. (MTS) equipment. %same as previous comment line; set font for 12 point type. \baselineskip=14 pt \headline = {\ifnum\pageno=1 \hfil \else {\ifodd\pageno\righthead \else\lefthead\fi}\fi} \def\righthead{\sl\hfil SOLSTICE } \def\lefthead{\sl Summer, 1993 \hfil} \font\big = cmbx17 %this may cause problems in some installations--replace %if it does with a different font. \font\tn = cmr10 \font\nn = cmr9 %The code has been kept simple to facilitate reading as e-mail \centerline{\big SOLSTICE:} \centerline{\bf AN ELECTRONIC JOURNAL OF GEOGRAPHY AND MATHEMATICS} \centerline{\bf SUMMER, 1993} \centerline{\bf Volume IV, Number 1} \centerline{\bf Institute of Mathematical Geography} \centerline{\bf Ann Arbor, Michigan} \centerline{\bf SOLSTICE} \line{Founding Editor--in--Chief: {\bf Sandra Lach Arlinghaus}. \hfil} \centerline{\bf EDITORIAL BOARD} \line{{\bf Geography} \hfil} \line{{\bf Michael Goodchild}, University of California, Santa Barbara. \hfil} \line{{\bf Daniel A. Griffith}, Syracuse University. \hfil} \line{{\bf Jonathan D. Mayer}, University of Washington; joint appointment in School of Medicine.\hfil} \line{{\bf John D. Nystuen}, University of Michigan (College of Architecture and Urban Planning).\hfil} \line{{\bf Mathematics} \hfil} \line{{\bf William C. Arlinghaus}, Lawrence Technological University. \hfil} \line{{\bf Neal Brand}, University of North Texas. \hfil} \line{{\bf Kenneth H. Rosen}, A. T. \& T. Bell Laboratories. \hfil} \line{{\bf Engineering Applications} \hfil} \line{{\bf William D. Drake}, University of Michigan, \hfil} \line{{\bf Education} \hfil} \line{{\bf Frederick L. Goodman}, University of Michigan, \hfil} \line{{\bf Business} \hfil} \line{{\bf Robert F. Austin, Ph.D.} \hfil} \line{President, Austin Communications Education Services \hfil} The purpose of {\sl Solstice\/} is to promote interaction between geography and mathematics. Articles in which elements of one discipline are used to shed light on the other are particularly sought. Also welcome, are original contributions that are purely geographical or purely mathematical. These may be prefaced (by editor or author) with commentary suggesting directions that might lead toward the desired interaction. Individuals wishing to submit articles, either short or full-- length, as well as contributions for regular features, should send them, in triplicate, directly to the Editor--in--Chief. Contributed articles will be refereed by geographers and/or mathematicians. Invited articles will be screened by suitable members of the editorial board. IMaGe is open to having authors suggest, and furnish material for, new regular features. The opinions expressed are those of the authors, alone, and the authors alone are responsible for the accuracy of the facts in the articles. \noindent {\bf Send all correspondence to: Institute of Mathematical Geography, 2790 Briarcliff, Ann Arbor, MI 48105-1429, (313) 761-1231, IMaGe@UMICHUM, Suggested form for citation. If standard referencing to the hardcopy in the IMaGe Monograph Series is not used (although we suggest that reference to that hardcopy be included along with reference to the e-mailed copy from which the hard copy is produced), then we suggest the following format for citation of the electronic copy. Article, author, publisher (IMaGe) -- all the usual--plus a notation as to the time marked electronically, by the process of transmission, at the top of the recipients copy. Note when it was sent from Ann Arbor (date and time to the second) and when you received it (date and time to the second) and the field characters covered by the article (for example FC=21345 to FC=37462). This document is produced using the typesetting program, {\TeX}, of Donald Knuth and the American Mathematical Society. Notation in the electronic file is in accordance with that of Knuth's {\sl The {\TeX}book}. The program is downloaded for hard copy for on The University of Michigan's Xerox 9700 laser-- printing Xerox machine, using IMaGe's commercial account with that University. Unless otherwise noted, all regular ``features" are written by the Editor--in--Chief. {\nn Upon final acceptance, authors will work with IMaGe to get manuscripts into a format well--suited to the requirements of {\sl Solstice\/}. Typically, this would mean that authors would submit a clean ASCII file of the manuscript, as well as hard copy, figures, and so forth (in camera--ready form). Depending on the nature of the document and on the changing technology used to produce {\sl Solstice\/}, there may be other requirements as well. Currently, the text is typeset using {\TeX}; in that way, mathematical formul{\ae} can be transmitted as ASCII files and downloaded faithfully and printed out. The reader inexperienced in the use of {\TeX} should note that this is not a ``what--you--see--is--what--you--get" display; however, we hope that such readers find {\TeX} easier to learn after exposure to {\sl Solstice\/}'s e-files written using {\TeX}!} {\nn Copyright will be taken out in the name of the Institute of Mathematical Geography, and authors are required to transfer copyright to IMaGe as a condition of publication. There are no page charges; authors will be given permission to make reprints from the electronic file, or to have IMaGe make a single master reprint for a nominal fee dependent on manuscript length. Hard copy of {\sl Solstice\/} is available at a cost of \$15.95 per year (plus shipping and handling; hard copy is issued once yearly, in the Monograph series of the Institute of Mathematical Geography. Order directly from IMaGe. It is the desire of IMaGe to offer electronic copies to interested parties for free. Whether or not it will be feasible to continue distributing complimentary electronic files remains to be seen. Presently {\sl Solstice\/} is funded by IMaGe and by a generous donation of computer time from a member of the Editorial Board. Thank you for participating in this project focusing on environmentally-sensitive publishing.} \copyright Copyright, June, 1993 by the Institute of Mathematical Geography. All rights reserved. {\bf ISBN: 1-877751-55-3} {\bf ISSN: 1059-5325} \centerline{\bf TABLE OF CONTENT} \noindent{\bf 1. WELCOME TO NEW READERS} \noindent{\bf 2. PRESS CLIPPINGS---SUMMARY} \noindent{\bf 3. GOINGS ON ABOUT ANN ARBOR--ESRI AND IMaGe GIFT} \noindent{\bf 4. ARTICLES} \noindent{\bf Electronic Journals: Observations Based on Actual Trials, 1987-Present} \noindent {\bf Sandra L. Arlinghaus and Richard H. Zander}. Content issues; Production issues; Archival issues; {\bf Wilderness As Place} \noindent {\bf John D. Nystuen} Visual paradoxes; Wilderness defined; Conflict or synthesis; Wilderness as place; Suggested readings; Visual illusion authors. {\bf The Earth Isn't Flat. And It Isn't Round Either: Some Significant and Little Known Effects of the Earth's Ellipsoidal Shape} {\bf Frank E. Barmore} reprinted from {\sl The Wisconsin Geographer\/}. The Qibla problem; The geographic center; The center of population; {\bf Microcell Hex-nets?} \noindent {\bf Sandra Lach Arlinghaus} Microcell hex-nets; {\bf Sum Graphs and Geographic Information} {\bf Sandra L. Arlinghaus, William C. Arlinghaus, Frank Harary} Sum graphs; Sum graph unification: construction; Cartographic application of sum graph unification; Sum graph unification: theory; Logarithmic sum graphs; Reversed sum graphs; Augmented reversed logarithmic sum graphs; Cartographic application of ARL sum graphs; \smallskip\noindent {\bf 5. DOWNLOADING OF SOLSTICE} \noindent{\bf 6. INDEX to Volumes I (1990), II (1991), and III (1992) of {\sl Solstice}.} \noindent{\bf 7. OTHER PUBLICATIONS OF IMaGe } \centerline{\bf 1. WELCOME TO NEW READERS} Welcome to new subscribers! We hope you enjoy participating in this means of journal distribution. Instructions for downloading the typesetting have been repeated in this issue, near the end. They are specific to the {\TeX} installation at The University of Michigan, but apparently they have been helpful in suggesting to others the sorts of commands that might be used on their own particular mainframe installation of {\TeX}. New subscribers might wish to note that the electronic files are typeset files---the mathematical notation will print out as typeset notation. For example, when properly downloaded, will print out a typeset summation as i goes from one to n, as a centered display on the page. Complex notation is no barrier to this form of journal Many thanks to the members of the Editorial Board of {\sl Solstice\/}. Some of them have refereed articles and offered suggestions, as have others. Thanks to all. \centerline{\bf 2. PRESS CLIPPINGS---SUMMARY} Brief write-ups about {\sl Solstice\/} have appeared in the following publications: \noindent 1. {\bf Science}, ``Online Journals" Briefings. [by Joseph Palca] 29 November 1991. Vol. 254. \noindent 2. {\bf Science News}, ``Math for all seasons" by Ivars Peterson, January 25, 1992, Vol. 141, No. 4. \noindent 3. {\bf Newsletter of the Association of American Geographers}, June, 1992. \noindent 4. {\bf American Mathematical Monthly}, ``Telegraphic Reviews" --- mentioned as ``one of the World's first electronic journals using {\TeX}," September, 1992. \noindent 5. {\bf Harvard Technology Window}, 1993. \noindent 6. {\bf Graduating Engineering Magazine}, 1993. If you have read about {\sl Solstice\/} elsewhere, please let us know the correct citations (and add to those above). Thanks. \centerline{\bf 3. GOINGS ON ABOUT ANN ARBOR} 1. ESRI, negotiating with IMaGe, has agreed to give a University Lab Kit to the University of Michigan, to be housed in the School of Education. All here are very happy and thank ESRI for their generosity. We look forward to pursuing the research projects that we explained to ESRI. Bob Austin, Sandy Arlinghaus, John Nystuen, Fred Goodman, and Bill Drake were all involved in various aspects of developing research and educational projects. 2. In the Fall of 1992, Bill Drake taught a course in ``Transition Theory" (and invited Sandy Arlinghaus to co-teach it) in the School of Natural Resources and the Environment. It was quite popular, and this course that was experimental in nature in 1992-93 has just become part of the permanent graduate curriculum. A monograph written primarily by the students, and published by SNR and E, came from that course. 3. Book co-edited and co-authored by Bill Drake. {\sl Population --- Environment Dynamics\/}, edited by Gayl D. Ness, William D. Drake, and Steven R. Brechin, Ann Arbor: The University of Michigan Press, 1993. This book has 15 chapters organized into four sections plus a final section ``Summary, conclusions, and next steps" by the editors. It also has a Reference listing, information about the contributing authors, and an index. The book is 456 pages and costs \$45. The titles of the four dominant sections are: Global Perspectives: History, Ideas, Sectoral Changes, and Theories. The State as Actor: Population --- Environment Dynamics in Large Collectivities. The State as Environment: Population --- Environment Dynamics in Small Communities. Emergent Ideas: Theory and Method. 4. Fred Goodman of the School of Education has been very helpful in finding space and resources so that IMaGe can give the software it's trying to line up to UM. Fred has been instrumental in providing constructive, diplomatic liason with other units within UM. We also welcome Fred to the {\sl Solstice\/} Board with this issue. \centerline{\bf 4. ARTICLES} \centerline{\bf ELECTRONIC JOURNALS:} \centerline{\bf OBSERVATIONS BASED ON ACTUAL TRIALS, 1987-PRESENT} \centerline{\bf Sandra L. Arlinghaus and Richard H. Zander.$^*$} \noindent{\bf ABSTRACT} Electronic journals offer a 21st-century forum for the interchange of scholarly ideas. They are inexpensive, fast, easy to store, easy to search, and they have long-term archivability; these advantages easily justify the time spent learning to deal with the new technology. The authors, both editors of nationally-noted electronic journals, share with others their interdisciplinary experiences in dealing with this new medium for producing online, refereed journals. During the past six years each of us has created and edited a successful electronic journal (E-journal) in our respective fields of geography ({\sl Solstice: An Electronic Journal of Geography and Mathematics\/} first appeared in June of 1990) (Palca 1991; Peterson 1992) and botany ({\sl Flora Online\/} first appeared in January of 1987) (Palca 1991). Both journals are peer-reviewed; both are available, free, over standard computer networks; and, both have editors who served as authors in early issues--to get the journal off the ground. E-journals provide an opportunity to share computerized information with others in an orderly and responsible fashion, within the context of current technology. They offer: An inexpensive way to share information, quickly, with a large number of individuals; As direct, online, transmissions from editor to individual; in this case, the transmission should be free of charge, in much the way that a library card is free of charge. The editor/publisher bears the cost of journal creation and manufacture; the reader bears the cost of maintaining on online mail box; As direct transmissions to libraries -- libraries should pay for diskettes, hard copy, online transmission, or whatever they desire. The cost to the library is generally greatly reduced from that of conventional journals, thereby freeing library funds for other useful projects. Funds generated from this source may make the E-journal(s) self-sustaining; As posted ``messages" on an electronic bulletin board or files on an ``anonymous FTP" server. The reader bears the cost of accessing the board or server and downloading the article. When E-journals are highly specialized, they can serve as a more formal alternative to large (archived) data banks in the natural sciences and elsewhere. Indeed, when the E-journal is downloaded into a wordprocessor or a data manager, the content can be manipulated and edited carefully to fit the research needs of the individual user. There are many systematic electronic communications already available and there are apparently more in the planning stages. The first edition of Michael Strangelove's ``Directory of Electronic Journals and Newsletters" (1991) catalogues about 30 journals and over 60 newsletters. Major academic societies, notably the American Mathematical Society and the American Association for the Advancement of Science, have announced far-reaching plans to produce other electronic journals; (Janusz 1991; Palca, 1991: 1480). A glance at a flyer for the Annual Meeting for the Society for Scholarly Publishing (July 1992) suggests that more than half of the four-day meetings will be devoted to issues related to electronic publication. There are: ``Genuine" electronic journals. Mere computerized versions of hardcopy titles. Non-archived electronic databases that are not really citable in a scientific paper since the data used may have been changed or may no longer be available, even though these databases may be What makes a systematic electronic communication a ``journal" is a difficult issue (Ni\-chol\-son 1992); concern for rigid, {\sl a priori\/}, definition might better be replaced with open regard for all entries and suitable concern for the broad issues of journal production. For, an E-journal is first and foremost a ``journal" that has simply been {\bf modified} as ``electronic," both linguistically and technologically, by the method of its transmission and production. Thus, we offer a generalized summary of observations that have come from six years of actual trials with {\sl Flora Online\/} and three years of actual trials with {\sl Solstice\/}. It is useful to separate these results into three broad categories: content issues, production issues, and archival issues. \noindent{\bf Content issues.} The most important concern is to obtain good manuscripts. And, to be acceptable as an outlet for scholarly publication, E-journals should approximate standard formats for professional journals, have high standards of scholarship, and be refereed. It does not matter how sophisticated the technological production becomes; if the journal does not have interesting and useful material of high quality, it will fail. This point should be obvious; however, it can become obscured, particularly in light of the exciting capability of the computerized format. Thus, author perceptions of E-journals are critical; the most serious problem involves citation. Will others see the work? Will the work be taken seriously? The following strategies help: the editor should see to it that the E-journal (and when necessary hard copy derived from it) is listed, housed, or otherwise recognized in standard reviews that are specific to the discipline of the the usual indexing services (publications are often judged by the bibliographic and citation services that mention them --- services that accept electronic files are particularly easy to deal with); news media, including field-specific conferences and meetings as well as mass media; standard book/journal registers of documents using conventional book/journal codes (such as ISBN and ISSN); and, library archives. Libraries apparently dislike the idea of downloading journals; they appreciate diskettes mailed to them. Archiving is important for E-journals so that data can be retrieved long after publication. The editor should consider the unusual to boost regard and readership for this mode of journal transmission, such as: the use of reprints (with appropriate copyright permission) of hard-to-find works of field-leaders (prospective authors---of lesser fame---usually perceive some benefit-by-association and field leaders often are interested in participating in a different the use of interactive review of material --- post-publication review followed by online alteration of the original document as a later version (coded appropriately--original is version 1.0 and updates carry larger numbers according to the extent of change); the use of taxonomic, bibliographic, and other data sets consisting of long lists of records that can easily be downloaded and sorted according to user need. Several agencies are preparing monolithic data banks from which scientists can extract items of information using specialized data management programs. Unfortunately, such data banks usually employ in each different management system, complex and difficult for the scientist to learn, and the data banks give second-hand data (digested by those who run the data bank and who are not necessarily scientists). With the advent, however, of electronic publishing, information in the sciences developed by individual scientists can now be easily and directly shared; the use of novel typesetting or other electronic capabilities that display the power of the vehicle of transmission (Horstmann 1991); the sharing of experiences in E-journal editorship with others --- through professional associations directly promoting electronic journal editorship (such as an E-journal editor's association) and with other organizations indirectly promoting it (such as the {\TeX\/} Users' Group; ``{\TeX\/}" is a trademark of The American Mathematical Society). Readers who are initial skeptics can become more receptive when they see actual output; hence, the early need for editor to become author. To increase E-journal availability, and to convert a wide variety of skeptics, E-journals should be distributed in more than one manner (e.g. diskette, File Transfer Protocol (FTP), Bitnet, on a listserv, U.S. mail, hardcopy). When editor becomes author, then a mechanism for review is all the more important. Pre-publication peer-review by an editorial board or by other colleagues is effective and easy to achieve electronically; post-publication feedback in an open or closed forum is also simple electronically. In addition, it is important that the editor continue to publish in various other outlets held in high regard. There are also a number of other reasonable, but less important, concerns that authors might have. These include: Manuscript security; because E-journals can be forwarded easily, alteration of original manuscripts can occur. There are a number of ways to deal with this problem: Copyright a hard copy of the original transmission (thereby placing it in the Library of Congress); Advertise that the original computerized version or a hard copy of the original transmission is available (on-demand) to those wishing it--including libraries; Store single hard copies in selected libraries (including that of the author's institution); Transmit E-copies directly from editor to individuals, over standard electronic networks, using an electronic distribution list automatically marked with the sender's name and time; Download from an electronic bulletin board. A persistent worry here is that a file made available for downloading is not ``published" in the sense of being distributed. This worry underscores, again, the need for adequate reviewing and indexing of the document. However, the prospective author should note that a file made available for downloading is in fact published \item\item{\quad i.} this is the same way hardcopy books are published --- they are simply advertised as available for purchase, and \item\item{\quad ii.} in bibliographic research, the date of publication is the date advertised as available, since it is impossible to track down the date of first purchase or first mailing of the book. Copy-protect diskettes (using some sort of seal unique to the journal) to prevent unthinking abuse. Virus and other crank programming prevention. Downloaded FTP or regular phone modem files from other computers can spread electronic viruses if they are ``executable," and only if they are actually run as programs. Downloaded text files cannot spread viruses: downloaded executable files (.EXE, .COM in MS-DOS) can be examined by commercial programs for viruses before they are run. When the E-journal is made available through a network server, the E-journal's health is simply transferred elsewhere; the network supervisor has considerable responsibility in this regard. Of course, good backup habits and a procedure in place for dealing with viruses if they happen are a must in all workplaces that use programs obtained from outside the workplace. \noindent{\bf Production issues.} Production issues generally appear to fall into one of two categories: Document manufacture and editing, and transmission. Warehousing is not an issue of any significance, nor is the sort of marketing that requires a network of publishers' representatives to sell hardcopy documents. \centerline{\sl Document manufacture and editing.} The manufacture involves creating, or being supplied, electronic files. Editing at this stage in journal computerization generally requires in-house manufacture and distribution of files and their media. It is useful to aim for the lowest common denominator: currently, that means ASCII text and .GIF or .PCX graphics files if needed --- such files are easily read on a IBM PC clone, a Mackintosh, or Unix machine (Xwindows or whatever), by any wordprocessor and most graphic file viewers. It would be nice if the files could be set up with the format of one of the new GUI wordprocessors (e.g. WordPerfect, MSWord) but it seems prudent to wait until a multiplatform wordprocessor that creates text files incorporating graphics images becomes commonly used. Most prospective authors can provide ``manufacture-ready" copy in the form of an ASCII file sent over the e-mail or provided on diskette. Indeed, for MS-DOS environments DCA or RTF (Document Content Architecture or Rich Text Format) are also standard file formats retaining formatting commands; these may be used to transfer a formatted text to any of most commercially available major word processors. It is thus an easy matter to ship the E-file to referees and to provide authors with E-proof to check prior to final production. There are a number of issues, found also in conventional publishing, that remain difficult. For this reason (also), it is useful for editors to be experienced as authors of conventional articles; it is additionally desirable for them to have had editorial experience in dealing with a conventional publisher. When the ASCII file is typeset using {\TeX\/}, mathematical notation, tables, and figures that are rectilinear in shape are easy to handle; otherwise, complex mathematical notation is difficult even to approximate in ASCII. The typeset {\TeX\/} file is itself an ASCII file with ASCII formatting commands, and so can be transmitted easily. The computerized typeset {\TeX\/} file is not strictly ``what-you- see-is-what-you-get"; however, the file is of traditional quality typesetting, and the file of electronic text and notation can now be downloaded and cheaply typeset or printed in hard copy by the journal receiver at his or her expense. To typeset the file, the receiver must first convert the transmitted {\TeX\/} file to a .DVI file and then print it on any available downloading device (such as a Xerox 9700 series machine or an APS phototypesetter). The receiver can view the transmitted {\TeX\/} file on screen (with the {\TeX\/} commands visible). The editor can right-justify the {\TeX\/} file in a word processor (prior to transmission), and bitstrip it to retain it as an ASCII file, in order to produce a journal-like electronic page in the transmitted E-file without interfering with (or influencing) the typesetting of the hardcopy. Right-justified electronic copy tends to reduce the visual impact of the unnatural looking typesetting commands that appear in the {\TeX\/} file as it is viewed online. {\TeX} produces device independent files; however, because different installations of {\TeX} support different features it is good, at present at least, to keep the typesetting simple. To this end, the editor should consider supplying a set of {\TeX} macros to authors wishing to do their own typesetting using {\TeX}; these can be supplied over the electronic mail in much the way that the American Mathematical Society encourages the submission of abstracts for its meetings. Not all individuals have access to {\TeX\/} even though their university has it; individuals in mathematics departments generally do have access to it and know how to use it. Figures, charts, and tables that can be considered as a matrix (such as a crossword puzzle) can be typeset using {\TeX\/}. Maps and non-rectilinear figures generally cannot. One approach to dealing with figures, that works easily, is to scan complicated maps and figures and to incorporate the scanned file into any distributed hardcopy by electronic cutting and pasting. The Xerox DocuTech stores scanned images as electronic files on a hard disk and permits such electronic editing. Hardcopy, complete with figures, can be produced in an on-demand fashion for sale to standing orders and to others who inquire. Warehousing is thus converted to a ``just-in-time" approach requiring virtually no extra space or cost. Hard copies can then be made available in a variety of bindings. If the scanned electronic files are downloaded as part of a text file, then the reader's electronic cutting and pasting is unnecessary. The capability of future word processors holds the answer to the possibility of shipping mathematical notation, maps, and photos in a single easy-to-read, typeset, transmission. Graphics transmission can be executed immediately by making available for distribution binary files of graphics images on an Internet server for downloading via FTP (File Transfer Protocol) or from a standard bulletin board. Yet another approach to the graphics issue might involve linkage to a Geographic Information System to provide a procedure for creating compatible transmittable map files directly from data managers into a {\TeX\/}-ed file. Data files are likely to be quite large; compressed files should be used with instructions for decompression and recompression provided online in ``help files." As above, {\TeX\/} can be used to create an ASCII file that is typeset, including diacritical marks. If, however, the editor chooses not to use {\TeX\/}, publishers can convert the formatting codes of other software such as Microsoft Word, XyWrite (Signature), and other robust word processors. If straight ASCII, perhaps employing the upper IBM ASCII set whenever diacritical marks are important, is used to transmit the electronic files, then another set of issues, some similar to and some different from using {\TeX\/}, confront the editor. At present it is important never to right-justify straight ASCII files. Right-justified text introduces extra spaces in word processors that produce straight ASCII files. To mend this, users must do a number of search-and-replaces, replacing double spaces with single spaces. They need to do this to make the text look like their own text so they can add items from a bibliography to their own bibliographies or add to other downloaded lists of subjects that are searchable with a word processor or data manager. Data-intensive text files, either those for which it is difficult to find a publisher in hardcopy or, in particular, those that are suited to searching and other computer text manipulation (such as bibliographies or checklists), are well-suited to journals employing the straight ASCII format. Data files take two forms: article format, similar to paper publications -- searchable with a standard word processor or with ``text management" software, and, data base format, appropriate for importing into a standard data base manager. The latter should have data presented with an equal number of lines per record and information entered on the appropriate line for each field, or in another ``delimited" format. Large text files should be divided into smaller files each less than 300 kb in size. These can be uploaded as is, or first converted into smaller compressed (e.g., .ARC, .ZIP, or .LZH) files. Split text files can be downloaded and reconnected (through DOS copy command) by the user. Very large files may, for now, be more appropriately distributed on disk. Foreign language characters, symbols, and graphics. Authors should expect that downloaders will generally use 8 data bits and an error-checking protocol, so binary files and text files with the IBM upper ASCII character set (foreign and special characters and graphics) can be easily transmitted. If the text is prepared in something other than a MS-DOS, pure ASCII environment (non-ASCII texts are created by many word processors), authors need to remove all software-specific formatting codes and type-style codes, before uploading. These can, however, be suggested --- underlining codes, for example, might be represented by symbols like @ or $|$ so downloaders can re-underline through Users of operating systems other than MS-DOS generally do not have access to the upper IBM ASCII set, which has foreign characters and symbols such as the degree sign ($^{\circ }$) and simple graphics. Also, because all users may not have MS-DOS microcomputers or compatibles, some authors may wish to substitute special codes for the IBM upper ASCII set used in MS-DOS. It is recommended that instructions for translating (by search and replace) the codes into the actual character be given at the beginning of the publication. Any system can be used; however, a simple system, which can be easily interpreted even before translation and may be easily used by non-MS-DOS systems, is the ``backspace and overstrike" method: many foreign characters may be easily manufactured by causing the printer to backspace and overstrike a diacritical mark. Since some wordprocessors cannot deal with the ASCII backspace character (ASCII 8), substituting an unused lower ASCII character such as @ or $|$ for the backspace character will allow search and replace for (1) the backspace character itself, (2) for an acceptable printer code substitute for it, or (3) replacement of the three characters with an IBM upper ASCII character. Examples of backspace substitution: a$|$` = \`a , A$|$o = \AA, u$|$" = \"u; and of direct substitution: deg. = $^{\circ }$, u = $\mu $ (search for space-u-space and replace with $\mu $). Graphics characters have little utility and cannot easily be coded for non-MS-DOS standard machines, so it is recommended that these be restricted to special applications. There are a number of efforts at an enlarged ASCII set for foreign languages (Hayes 1992). The coming of Unicode or something similar will hopefully provide a complete set of multiplatform foreign In bibliographies, spell out all duplicate author's names (do not use a sequence of hyphens.) so that the author's names can be searched for. Begin each entry flush left and leave an empty line (two hard rights) between each entry. Do not spell any words with all capital letters (this may make it difficult to search for them; it also looks bad). If appropriate, present files in a ``squeezed" form as an .ARC file or .ZIP or another `archiving' utility file format. This allows faster and less costly downloading and keeps diskette files File management seems to be relatively easy with an E-journal. Keeping track of manuscripts, and of who is refereeing them, and of their stage in the production process, is made simpler by the \centerline{\sl Transmission.} E-journals should have standard, and thus easy-to-use, modes of access. They should be transferable across different systems (e.g. various micro, mini and mainframe platforms). Alphabets should be standard (ASCII, ISO Superalphabet eventually) in order to be available to a wide number of users. Transmission can occur in a number of different ways and have various uses. Issues may be obtained by ``anonymous FTP" or downloaded via regular telephone lines by modem from an electronic bulletin board. An electronic bulletin board system is a computer and software system that can be accessed from outside by a caller, who likely has a number of options, including perhaps: Reading or leaving messages. These are typed while online and may be public or private (readable only by the addressee). Depositing or taking away data or text files. These are created with a word processor or data manager previous to calling and are ``up-" or ``down-loaded" as a unit. Extracting information from a large data file. Authors can prepare compiliative publications that they use personally and wish to share. Then they may, if they wish, maintain the publications informally or formally as a series of versions in online data banks. Users of the bulletin board download online files, and use the files directly for searching for particular data or by copying portions to enlarge their own personal files, with due respect, of course, for copyright privileges of the original author. A bulletin board can be of interest to scholars in the following Messages - For exchange of ideas and information. Speed of contact is far greater than with regular mail. Special ``Conference" sections allow public exchanges on single scientific topics that are equivalent to symposia at national meetings. Files --- Electronic publications that may be cited in an author's curriculum vita. Such publications should be copyrighted. These include: original text material and computer programs; text or data files of an ephemeral or informal nature; and, previously published computer programs (of ``reprint" value). With the eventual realization of a network of bulletin boards across the country, this method of transmission holds considerable promise. Ship the E-journal across Bitnet or Internet to a distribution list of subscribers who ask to have the E-journal mailed to them. Some installations do not have the capacity to send files in excess of 25,000 characters. In that case, split the journal apart with instructions to the user to concatenate the files prior to downloading, printing, or typesetting. \noindent{\bf Archival issues.} All journals are useful only for as long as they can be located in the holdings of some institution. As technological formats for producing journals change, it will be important to keep not only the new, but also the old --- as back-up with a known life-span. Some of the issues that will confront archivists include those listed below. Availability --- the E-journal should be archived indefinitely in an institution willing to provide copies or the equivalent on Durability --- Archives should be maintained so as not to degrade with time, e.g. contents of diskette transferred to hard disk, then to optical disk, then to solid state or whatever future technology provides. Duplicates stored off-site, and EMF protection are also advisable in the long-term. Paper burns and degrades with age, but magnetic images can be maintained indefinitely if copied periodically onto new media (diskettes are said to have a maximum data retention life of 10-15 years). Retrievability and salvageability - Standard operating system formats should be changed in a timely fashion: MS-DOS to Unix, etc. Standard word processing formats should be upgraded so they can be read decades hence. Database formats should be standard or also available in ASCII-delimited format. Any required programs (decompression programs, graphics viewing programs, special word processors) should be archived, too, along with necessary hardware We have found that editorial and publishing problems can be overcome within the limits of existing technology such that electronic journals can be successful in transmitting and presenting information to scholarly readers. We foresee a significant upgrade in quality and flexibility of electronic presentations with the advent of standard cross-platform graphics-capable word processors, standard export-import formats, and standard multi-language character sets. The advantages of electronic publication: inexpensive, fast, easy to store, easy to search, long-term archivability, easily justify the time spent learning to deal with the new technology. \noindent{\bf References.} \ref Hayes, Frank. 1992. Superalphabet compromise is best of two worlds. {\sl UnixWorld\/}, January 1992: 99-100. \ref Horstmann, Cay S. 1991. Automatic conversion from a scientific word processor to {\TeX\/}. {\sl TUGBoat: The Communications of the {\TeX\/} Users Group\/} 12:471-478. \ref Janusz, Gerald J. 1991. Reviewing at Mathematical Reviews. {\sl Notices of the American Mathematical Society\/}, 38:789-791. \ref Knuth, Donald E. 1984. {\sl The TeXBook\/}, Reading, MA: Addison-Wesley and Providence, RI: The American Mathematical \ref Nicholson, Richard S. 1992. Data make the difference. {\sl Science News\/}, March 28:195. \ref Palca, Joseph. 1991. Briefing. {\sl Science\/}, November 29: \ref Palca, Joseph. 1991. New journal will publish without paper. {\sl Science\/}, September 27:1480. \ref Peterson, Ivars. 1992. Math for all seasons. {\sl Science News\/}, January 25:61. \ref Strangelove, Michael. 1991. {\sl Directory of Electronic Journals and Newsletters\/}. Washington D.C.: Association of Research Libraries. Sandra L. Arlinghaus, Institute of Mathematical Geography, 2790 Briarcliff, Ann Arbor, MI 48105. Richard H. Zander, Buffalo Museum of Science, 1020 Humboldt Parkway, Buffalo, NY 14211. \centerline{\bf WILDERNESS AS PLACE} \centerline{\bf John D. Nystuen $^*$} Some conflicts are the result of people talking at cross purposes because they interpret identical empirical data in quite different ways. These differences can arise from deep seated differences in belief systems or from the knowledge systems (theories) applied to understanding a phenomenon. The conflict over the meaning of wilderness is an example. \noindent{\bf Visual Paradoxes} The biologist Richard Dawkins in his book {\sl The Extended Phenotype\/} uses the analogy of the Necker Cube (Louis Albert Necker, 1832) to illustrate the fact that the same empirical evidence can be interpreted in two or more perfectly accurate ways, each of which is valid but incompatible with the other. The Necker Cube is a visual paradox in which the mind perceives a flat plane drawing as a three dimensional transparent cube in which the orientation of the cube is arbitrary (Figure 1). At one moment it appears to be viewed from above but as one stares at it, a reversal occurs and in the next moment it seems to be viewed from below. The visual paradox arises when full information is available. Partial knowledge seems to favor one view or the other. \midinsert \vskip 3in \noindent{\bf Figure 1.} Necker Cube. A sequence of three cubes shown as line drawings. The reader unfamiliar with Necker's Cube would be well-advised to reconstruct this figure. The left hand cube is one with all edges showing; the center cube has three edges hidden so that it appears the reader is looking down at the cube from above; and, the right cube has three edges hidden so that it appears that the reader is looking up at the cube from below. An additional set of views is available --- that of a two dimensional plane figure which, of course, is what the drawings are. This set of views may become dominant by rotating the cube so that the many symmetries of the cube are emphasized (Figure 2). \topinsert \vskip 3in \noindent{\bf Figure 2.} Views Along Axes of Symmetry of a Cube. This figure is also a sequence of three views of cubes shown as line drawings. The left cube is a full- information cube (no hidden edges) seen head-on, with a face of the cube closest to the face of the reader. The center cube is a cube with all edges showing viewed head -on with an edge closest to the reader so that the prominent edge, and the diametrically opposed edge appear to coincide for part of their length. The right cube is a view of the cube with one corner closest to the reader so that the plane view of the cube appears as a hexagon with three diameters.\endinsert Another well-known visual paradox, {\sl face/vase\/}, was introduced by Edgar Rubin in 1915 (Figure 3). In this example additional knowledge seems to resolve the paradox --- as a simple white, classical vase against a black background, both vase and profiles of faces at either side are evident. If baseball caps are put on the profiles, the faces dominate; if, instead, flowers are drawn in the vase, then the vase dominates. \topinsert \vskip 4in \noindent{\bf Figure 3.} Face/vase paradox. Usually one has to plan how to seek additional knowledge about a problem. If only a certain type of knowledge is pursued because that is the way the problem is interpreted, then one view will likely prevail. If only economic evidence is admitted for consideration (for example), other views, other values, may remain invisible. Past experience may bias one's interpretation beyond what seems reasonable to others with different points of view. Gerald Fisher's (1967) man-girl paradox is a sequence of eight progressively modified drawings --- from man to nymph-like girl (Figure 4). The fourth drawing in the sequence was found upon empirical testing to have equal probability of being seen as a man's face or a girl's figure. However, by viewing the sequence successively from the top left to the bottom right one can maintain a bias towards seeing the man's face almost to the last drawing. There, only a faint, melting ghost of a face remains to be seen, if seen at all. The opposite is true if one starts with the girl's figure and moves in the reverse direction. \topinsert \vskip 4in \noindent{\bf Figure 4.} Man-Girl. Shows a sequence of eight line drawings--transforming a man's face to the profile of a girl's body. \noindent{\bf Wilderness Defined} The value of wilderness to society resembles a Necker Cube paradox. People of goodwill see the same empirical evidence in very different lights. The dominant American view of the environment is utilitarian and anthropocentric. The environment is for humans to use. Natural resources are cultural appraisals, more a matter of society than of nature. For something to be a resource we must want to use it, know how to do so, have the power to do so, and be entitled to do so. Nature offers only the opportunity for use. A biocentric ethic imbues nature with intrinsic values independent of mankind. We are part of nature, not apart from it. In an anthropocentric view we are distinguished and especially favored by God. In a biocentric view all creatures, large and small, and plants too, have a right to exist. Most Native American cultures held to this belief. They apologized to their fellow life forms when consuming them to meet their own needs. In Western Society the biocentric ethic is not well understood perhaps even by many of its advocates. Preservationists focus on symbols of wilderness rather than on wilderness in its full existence. Tactical reasons motivate this approach but then frequently wilderness advocates are outmaneuvered. Do preservationists really care about the snail darter and the spotted owl? Or are these species being used as focal points to preserve entire habitats? They embody or personify concern for more abstract values. Do we really want the habitat to be {\sl preserved\/} unchanged? I recall, when visiting Disneyland, a frontier scenario of ``a settler's log cabin under attack and in flames." The logs were made of cement and the flames came from gas jets --- they burn eternally for the tourists, daily during open hours, season after The wilderness worth saving is the biosphere process. The wilderness ethic is to let wild habitats exist where human contact is slight and/or remote (outside--backdrop). Living wild habitats change and perhaps spotted owls or other species will vanish but not as a result of direct human action. Of value are natural processes remote and indifferent to mankind. John Muir said, ``In Wilderness is the Preservation of the Earth." That phrase is the motto of the Sierra Club which Muir founded in 1892. Preservation of the earth as the home of life transcends societal concerns. Beyond a species imperative, it is life \noindent{\bf Conflict or Synthesis} M. C. Escher, the artist noted for his depictions of the complexities of time and space, transcends the choice required by the Necker Cube. He gave the object some attention in his lithograph {\sl Belvedere\/} (see {\sl The World of Escher\/}, p. 229). The man seated in the foreground is holding an impossible cubic object while contemplating a drawing of it on the ground in front of him. In this scene Escher provides a drawing, a hand-held model, the embodiment of the concept in the structure of the castle building. Escher simultaneously embraces two views of the cube with a model and a construction process that can only exist in the imagination. The paradox is in the images of physical things depicted. There are no paradoxes in nature. Nature exists. Paradoxes observed in nature mean that our understanding of phenomena is inadequate. This is what drives the imagination of physicists. Theory holds that nothing can exceed the speed of light --- except human imagination; light bends; space is warped; black holes exist; time flows backward; light is both wave and photon. Deeper and deeper understanding of nature incorporates these constructs of our imagination. From the beginning many predictions of quantum mechanics were viewed as very strange. Now after many decades or resisting refutation, the theory yields new results that border on the surreal: that quantum phenomena are neither waves nor particles but are intrinsically undefined until the moment they are observed (John Hortgan, 1992). Yet nature exists. The problem is our mind set, the position of our understanding. To understand Escher's impossible cube one must take into account the position of the observer. It is like a rainbow; it exists only for those who are in the proper position to appreciate it. There is no rainbow for the people who are being rained upon. I remember talking to a Gurung woman (the Gurung are a highland people of Nepal) who, under a government program, had migrated to a lowland farm on the Nepalese portion of the Gangetic Plain (elevation 600 feet). I asked her if she missed the mountains for I had seen the breathtaking panoramas of her homeland in the high Himalaya. She said, ``What is there to miss? We have four bega of good land here and we had only one half bega of very poor land in the other place." We do not need to be articulate or self-conscious about things essential to our being. For example, food is so fundamental to our existence that we treat it very emotionally. Reasoned discourse is not the only or even dominant basis for thinking about food or debating public policy about entitlement to food. A sense of place is as deeply held and fundamental to our existence as food. We become attached to a place to the extent that we fill the place with meaning. A personal and deep attachment is made to the place called {\sl home\/}. Home is familiar, safe, restoring, and controlled territory. We fight to protect it from invasion with deep feeling and energy. We will die for it. Wilderness is a place that is {\sl not home \/} for humans. It becomes real and important only to the extent that we fill it with meaning. To give it meaning it must become foreground (subject). Mere opposites of home values do not capture the essence. Is wilderness strange, dangerous, stressful, and wild territory? Strange and wild are nice but to me stressful and dangerous are the wrong emphasis, sometimes used by organizations that are trying to build self-confidence in adolescents by thrusting them into confrontation with wilderness. Recreation hunters whose intent is to achieve a kill reveal this sort of confrontational approach to wilderness as well. I believe that wilderness should not be taken as hostile, something to overcome, but rather one should enter a wilderness prepared, take prudent action and seek to experience the strange and the wild to be found there. Admittedly, some views of wilderness are going to be incompatible. But at least hunters and preservationists have visions of the meaning of wilderness, compatible or not. Certain vantage points must be assumed or wilderness will remain invisible. An alliance to build a public edifice is conceivable that might, like Belvedere, provide positions for people to calmly gaze in different directions. Wilderness is like a rainbow. Existence depends, in part, on the position of the viewer. Do rainbows exist? Or are they only latent until observed in some fashion or another? Are they to be valued, if so, how is value assigned? Can you own one? \noindent{\bf Wilderness As Place} The {\sl Bureau of Land Management\/} (BLM) is a federal agency that controls 179 million acres of land mostly in the western states (over nine percent of the total land area in the coterminous USA). The bureau was created in 1946 through consolidation of two federal agencies, the {\sl Land Sales Office\/} and the {\sl Grazing Service\/}. The bureau inherited from these prior agencies the mandate to either sell off federal land to private owners as quickly and efficiently as possible or to make federal lands available for use by private individuals through issuing grazing permits. In 1976 Congress passed the {\sl Federal Land Policy and Management Act\/} which contained a mandate to the BLM to inventory, study, and make recommendations for wilderness designations for BLM lands. The bureau was to report back its actions by 1991. The bureau people were somewhat at a loss for words. What exactly is wilderness? Is that a place with no conceivable human use; a place nobody wants? Wouldn't it be what is left over after we do our job? Could we address this mandate simply by subtraction? The answer was no, that would not do. Wilderness did not fit into a commodity based, `I can own it,' philosophy. How could humans manage a wilderness? What would there be to do? The bureau people were more than a little uncomfortable with their new task. In the past two decades a sea of change has occurred on how to view the environment and the BLM has been caught in its tide. Today, environmental groups are a political force with access to agency decisions through new avenues of public participation. It is not business as usual. In the words of C. Ginger (1993): ``The philosophical challenge faced by BLM has, at its core, human perceptions of the value of land. These values are the same as those that were at the base of the disagreement between John Muir and Gifford Pinchot at the end of the nineteenth century. Muir and Pinchot debated the ideas of preservation of land versus conservation of land. Placed in the context of the wilderness protection, we might ask if we are saving wilderness for wilderness' sake or because it is a wise use of natural resources. These two perspectives (preservation and conservation) were a challenge to a third perspective that dominated the government institutions that oversaw public lands in Muir and Pinchot's time: exploitation of natural resources in the short run. All three points of view are present today in our approach to land and resources but it is Pinchot's view that provides the dominant ideal in the form of the multiple-use sustained -yield philosophy established by Congress for public land management in the United States. The debate over wilderness designations in the West illustrates that the idea of preserving a chunk of land is not just an administrative, legal or even political issue. The sometimes dramatic conflict reflects an underlying difference in values and perceptions of our relationship to the land. And the values are not simply held by individuals. They are reflected in and perpetuated by the institutions we have created to act collectively. We can find in the Bureau of Land Management how the debate over our relationship to the land is defined and pursued." Human institutions are not natural phenomena. They are created by humans and some contain paradoxes and ambiguities. These ambiguities may be the source of conflict in circumstances where identical evidence is interpreted in different ways. Human belief systems are mutable but they are also quite resistant to change even in the face of accumulating evidence. In the United States race relations and women's roles in society have changed in the second half of the 20th century to the extent that certain behaviors and attitudes accepted as commonplace in the first half of the century are disapproved and are illegal today. Equal access to places and roles is now an accepted ideal, not yet attained in many circumstances, but with many instances of success. {\sl Justice\/} and {\sl equality\/} are underlying moral imperatives driving these movements in particular Sustainability and ultimately, {\sl survivability of life\/} are the imperatives underlying the shift from anthropocentric to biocentric views. As far as we know, we alone, among sentient beings, record history, and thus can be aware of long consequences of our actions. As humans gain capacity to control and to destroy we must take responsibility to sustain. We need goals in this regard. Sustaining life processes on earth is an acceptable goal to be placed on the balance scale along with other values. Defining and managing wilderness by the agencies responsible for public lands is a skirmish in the paradigm shift over the position of humans in nature. Elements of nature must be given standing in human value systems in order that wilderness be recognized in human affairs. This is to be done by defining wilderness as a place apart, imbued with boundaries and rights, where humans behave in prescribed ways as if they were in someone else's home. For wilderness to be a place it must be filled with meaning that large segments of society understand and support, otherwise it will remain a backdrop in human affairs, invisible to policy \noindent{\bf Suggested Readings} \ref Fisher, Gerald, (September, 1968) Ambiguity of form: Old and new, {\sl Perception and Psychophysics\/}, v. 4, no. 3:189-192. \ref {\sl Image, Object, and Illusion, Readings from Scientific American\/} (1974) San Francisco: W. H. Freeman and Company. \ref Locher, J. L., Editor (1971) {\sl The World of M. C. Escher\/}, New York: Harry N. Abrams, Inc. Publishers. \ref Hortgan, John (July 1992) Quantum philosophy, {\sl Scientific American\/}, v. 267, no. 1:94-101. San Francisco: W. H. Freeman and Company. \ref Relph, E. (1976), {\sl Place and Placelessness\/}, London: Pion Limited. \ref Oelschlaeger, Max (1991) {\sl The Idea of Wilderness\/}, New Haven: Yale University Press. \noindent{\bf Sources} \ref M. C. Escher, ``Belvedere" 1958, lithograph. \ref M. C. Escher, ``Study for the Lithograph `Belvedere'" 1958 pencil. Plate 228, {\sl World of M. C. Escher\/} \ref L. S. Penrose and R. Penrose, ``Impossible Objects, A Special Type of Visual Illusion," {\sl The British Journal of Psychology\/}, February, 1958. Contains the impossible triangle--basis for Escher's ``waterfall." R. Penrose is, of course, the inventor (later) of Penrose tilings; he postulated the existence of five-fold symmetry thought to be impossible in nature by crystallographers until their recent discovery of five-fold symmetries in quasi-crystals. \ref Attneave, Fred, (December 1971) ``Multistability in perception," {\sl Scientific American\/}, San Francisco: W. H. Freeman and Company. \ref Rabbit-Duck, Joseph Jastrow, 1900. \ref Young girl --- Old woman, Edwin G. Boring, 1930, by W. E. Hill, {\sl Puck\/}, 1915 as ``My Wife and My mother-in-law." \ref Man-Girl, Gerald Fisher, 1967. \ref Reversible goblet, Edgar Rubin, 1915. \ref Necker Cube, Louis Albert Necker, Swiss geologist, 1832. \ref Slave market with apparition of the invisible bust of Voltaire, S. Dali, Dali Museum of Cleveland. \ref Clare Ginger, doctoral candidate, Urban, Technological and Environmental Planning Program, the University of Michigan. She is working on a dissertation about the meaning of wilderness in the eyes of BLM personnel and spent four summers collecting taped interviews from BLM employees at federal, state, and district levels. She asked them to describe wilderness and their responses to the wilderness mandate. Quotation in the text is from an unpublished document, 2/3/93. \noindent{\bf Visual illusion authors} \ref Marvin Lee Minsky, MIT \ref Robert Leeper, University of Oregon \ref Julian Hochberg and Virginia Brooks, Cornell University \ref Alvin G. Goldstein, University of Missouri \ref Ernst Mach, Austrian physicist and philosopher, (Dover Publ., 1959, trans., C. M. Williams). \ref Murray Eden, MIT \ref Leonard Cohen, New York University $^*$ John D. Nystuen, Professor of Geography and Urban Planning, College of Architecture and Urban Planning, The University of Michigan, Ann Arbor, Michigan, 48109. \centerline{\bf THE EARTH ISN'T FLAT. AND IT ISN'T ROUND EITHER!} \centerline{\bf SOME SIGNIFICANT AND LITTLE KNOWN EFFECTS} \centerline{\bf OF THE EARTH'S ELLIPSOIDAL SHAPE} \centerline{\bf Frank E. Barmore $^*$} \centerline{\bf Reprinted, with permission, from} \centerline{\bf THE WISCONSIN GEOGRAPHER} \centerline{\bf VOLUME 8, 1992} \centerline{\bf pp. 1 -- 8} \noindent {\bf Abstract} The small difference between the shape of the earth and a sphere is usually thought to be negligible except for work of very high accuracy such as geodesy. This is not the case. There are some examples where this small difference in shape makes an easily apparent difference in what is observed. This paper will comment on three problems and evaluate the impact of the non-spherical shape of the Earth on the result: 1) the qibla problem of Islamic geography, 2) the center of area (geographic center) and 3) the center of population. \noindent {\bf Introduction} I have noticed that some common considerations in geography are often treated without due regard for the Earth's ellipsoidal shape. This is surprising. The Earth is not spherical (round). It is, rather, very nearly an ellipsoid of revolution with equatorial radii, $a$ and $b$, of 6378.2 km. and polar radius, $c$, of 6356.6 km. --- a difference of 21.6 km. This difference is significantly larger than the next largest pervasive topographic feature, the continent --- ocean basin dichotomy of 5 km. Also, this shape, an ellipsoid of revolution, is not intrinsic to terrestrial planets. Venus is nearly spherical, $a=b=c=$ ca. 6051.5 km. (Head, et al., 1981). Mars is reasonably well described as a tri-axial ellipsoid of $a=3399.2$ km, $b=3394.1$ km. and $c=3376.7$ km. (Mutch, et al., 1976). This departure of the shape of the Earth from a sphere is often given as the flattening, f=(a-c)/a=0.0034 \quad\hbox{or} \quad 0.34\%, or the eccentricity, $e$, where e^2=(a^2 - c^2)/a^2 = 0.0068. The departure from a sphere also results in a difference between geocentric and geographic latitude of (at 45$^{\circ}$ latitude), 0.195^{\circ} = 0^{\circ}11.7' = 0^{\circ}11'42''. While these are small quantities, they are not insignificant. For comparison, consider the following difference or ratios of similar magnitude: \item\item{a.} one vacation day per year (which, in turn, is larger than the one day calendar adjustment every fourth or ``leap'' year), \item\item{b.} a watch which gains or looses five minutes per \item\item{c.} a two inch gap in a 50 foot brick wall, \item\item{d.} a 1/6 inch crack in a 48 inch table top, \item\item{e.} \$100 per \$30,000 of annual earnings, \item\item{f.} an angle of 1/3 of the apparent diameter of the sun or moon. We routinely concern ourselves with such small differences in daily life. We expect and receive better accuracy from craftsmen. Differences in direction of this magnitude are easily seen. Consistency would require that we be as concerned with equally small quantities in geography as we are in other circumstances. Therefore, all but the simplest considerations in geography should routinely take into account the Earth's ellipsoidal shape. Often this is not done. This paper will consider the impact of the Earth's non-spherical shape on the results in three cases: 1) the qibla problem of Islamic geography, 2) the computation of a geographic center (center of area) 3) the computation of a center of population. \noindent{\bf The Qibla Problem} As I have previously commented (Barmore, 1985), a Koranic line which may be translated as ``$\ldots $ wherever you are, turn your face towards it [the Holy Mosque --- the Kaaba]" is often invoked to establish the correct orientation (the qibla) during the obligatory prayer (the salat), and hence the correct orientation for mosques. This requirement, in turn, is often considered as satisfied when a mosque is aligned with the direction of the Kaaba in Mecca. There is, in Islamic scientific literature, sufficient discussion of the direction of Mecca to indicate the usual definition of direction (King, 1979). The direction is that of the shortest arc of a great circle on a spherical Earth between the locality and Mecca. (But note that medi{\ae}val Islamic religious and legal scholars have often argued otherwise and, as a result, other orientation traditions have existed (King, 1972, 1982a, 1982b, and other work in preparation).) The direction is then specified by stating the azimuth of this arc of a great circle relative to the meridian. Given the geographic coordinates of a locality and of Mecca the azimuth of Mecca is easily calculated with spherical trigonometry, {\bf provided a spherical Earth is assumed\/}. Tables of such information, both historical and contemporary, exist in great number. These tables, as well as numerous individual calculations in the literature discussing the many facets of Islamic culture, often give their results to the nearest minute of arc (or even the nearest second of arc). The implication is that the results are correct to the same level of accuracy. But the Earth is not spherical. The Earth is ellipsoidal in shape. If qibla azimuths are calculated assuming a spherical Earth, they do not represent the real case with an accuracy approaching a minute of arc. In every case I have examined, the calculations were done as if the Earth were a sphere. In order to illustrate the errors that result, I have calculated the simple azimuth as well as the geodetic azimuth of the Kaaba in Mecca for a number of places. (The simple azimuth is calculated on a sphere while the geodetic azimuth more closely represents the correct case (See Appendix A).) The qibla QE = az(S) - AZ(E), is the amount that must be subtracted from the incorrect but more easily calculated simple azimuth, $az(S)$, in order to obtain the more accurate geodetic azimuth, $AZ(E)$, calculated on the ellipsoid representing the Earth. The locations of various places were taken from {\sl The Times Atlas of the World\/} (1990). The location of the Kaaba in Mecca was taken from a large scale map of Mecca (1970). The result, for Clarke's (1866) Ellipsoid, is displayed in Table 1 for selected localities and shown for the world in Figure 1. \centerline{Table 1} The error in the qibla azimuth for various places when calculated on a sphere. The results are given in decimal degrees and in minutes of arc. A tabulated value of the qibla error, $QE = az(S) - AZ(E)$, is the amount that must be subtracted from the incorrect but more easily calculated simple azimuth, $az(S)$, in order to obtain the more accurate geodetic azimuth, $AZ(E)$, calculated on Clarke's (1866) Ellipsoid representing the earth. &Qibla Error\quad&min.arc\cr %sample line \+&&{\bf Place}& &\qquad{\bf Qibla}& &\qquad{\bf Qibla Error}&\cr \midinsert \vskip4.0in \noindent{\bf Figure 1.} The error in the qibla azimuth for various places when calculated on a sphere. The results are given in minutes of arc. The plotted value of the qibla error, $QE = az(S) - AZ(E)$, is the amount that must be subtracted from the incorrect but more easily calculated simple azimuth, $az(S)$, in order to obtain the more accurate geodetic azimuth, $AZ(E)$, calculated on Clarke's (1866) Ellipsoid representing the Earth. The variations are complex near Mecca, located at 21.4 degrees N., 39.8 degrees E., and at the antipodes of Mecca. Note the non-uniform contour intervals, the incomplete contours in regions of high contour line density and some intermediate contour fragments, shown dashed. \endinsert When these results are considered it is clear that qibla errors on the order of 0.1 degrees (0$^{\circ}$ $06'$) will result when azimuths are calculated assuming a spherical earth. Not only is this true for qibla azimuths, but it is also true for azimuths calculated for any other purpose. Clearly, azimuths calculated assuming a spherical earth will not, in general, be accurate to a tenth of a degree and should not be given in a way that implies such accuracy. It would not be appropriate to criticize historical works concerning the qibla problem for lacking such accuracy. However, knowledge of the ellipsoidal shape of the Earth is now widely known --- clear descriptions are to be found in many texts on physical geography. I wish to raise two questions: 1) Is there an instance in recent or contemporary works concerning the ``qibla problem" where the problem has been considered with due regard for the ellipsoidal (non-spherical) shape of the Earth? 2) Would Islamic legal, religious or geographic scholars have any interest in this small but noticeable correction to a traditional solution of the ``qibla problem"? \noindent{\bf The Geographic Center} There exists, in north central Wisconsin, less than 3/4 kilometer to the north and west of the very small community Poniatowski, a monument with the following text: \centerline{\sl GEOLOGICAL MARKER\/} {\sl This marker in Section 14, in the Town of Rietbrock, Marathon County is the exact center of the northern half of the Western Hemisphere. It is here that the 90th meridian of longtitude (sic) bisects the 45 parallel of latitude, meaning it is exactly halfway between the North Pole and the Equator, and is a quarter of the way around the earth from Greenwich, \centerline{\sl MARATHON COUNTY PARK COMMISSION} The location of Poniatowski near this unique geographic point has given it sufficient fame to be mentioned in newspaper articles, some tourist literature and even celebrated in song (Berryman, If the Earth were spherical or much more nearly so, then the statements on the marker would be true enough. But, as a result of the Earth's ellipsoidal shape: a) the place marked is not halfway between the Equator and the pole, b) the place marked is well removed from the ``center" and c) the halfway point and the center are well separated from one another. (Note, however, the Earth's ellipsoidal shape not withstanding, the monument does mark the place, 90 W longitude, 45 N latitude, well enough.) The monument's failure in marking the halfway point and the center is substantial and each failure will be discussed in turn. {\sl Halfway Point\/}: \noindent Because of the ellipsoidal shape of the Earth, the length (measured on the surface) of a degree of geographic (that is, geodetic) latitude varies with latitude. As a result, the point that is equidistant from the pole and the equator is not simply the midpoint in latitude. Using Clarke's (1866) ellipsoid and the various relationships in the geometry of an ellipsoid (Bomford, 1971, Appendix A) it is a straightforward calculus problem to find the equidistant point. It is at the geographic latitude 45.1447 = 45$^{\circ}08'41''$ (see Appendix B). The place with this latitude is about 16 km. from the one marked and sufficiently far from Poniatowski as to place it well into the next county to the north, Lincoln County. {\sl Center\/}: \noindent The concept of the geographic center (center of area) for a curved surface is not as straightforward as when the area is flat. What is usually meant by the center is the average (or mean) location. The location coordinates used (latitude and longitude) are curvilinear rather than rectangular. Because of this, one {\bf may not\/} average the latitude and longitude of the elements of area that make up the whole in order to find the center (average location) of the whole area. In order to make this point more clear, consider Figure 2. Shown shaded is the northwest quadrant of the Earth. On a sphere, this area shows a great deal of symmetry about the point at latitude 45$^{\circ}$ N., longitude 90$^{\circ}$ W. Surely the center of this quadrant on the surface of a sphere is at this central point. But, if one calculates the average latitude of the various area elements that make up the northwest quadrant on the surface of a sphere, the result is 32.7042 degrees or 32$^{\circ}42'15''$ N. Surely the center is not there. (Other statistics are no better when applied to latitude alone --- the median latitude is 30$^{\circ}$ N. and the modal latitude is 0$^{\circ}$.) What must be averaged is the location, {\bf not\/} the coordinates of the location. Phrased differently, the latitude of the center of area is different from the average latitude of the same area. \topinsert \vskip 6in \noindent{\bf Figure 2.} The geographic center (center of area) of the northwest quadrant of the Earth (or the upper left quadrant of a sphere or an ellipsoid) and other statistics. A) An oblique view of the Earth showing the northwest quadrant. B) The region of the northwest quadrant near the median and mean latitudes of the quadrant on the 90th meridian. C) The region of the northwest quadrant near the geographic center. The center was determined by the preferred method (Barmore, 1991); that is, calculated with the computations and the result restricted to the surface. The ellipsoid is Clarke's (1866) ellipsoid. \endinsert Any satisfactory method of finding the center must take into account the curved surface of the Earth in a suitable way. One method is to calculate the center by assuming that the quantities spread over the two dimensional surface of a sphere are distributed in a three-dimensional Euclidean space (as indeed they are). One early geographical use (the earliest I have noted) of this ``three-dimensional" method for finding centers of population (or area) on the surface of a sphere was derived by I. D. Mendeleev and used by his father, D. I. Mendeleev (1907 and earlier) to find the centers of area and population of Russia. Such a method is easily extended to calculating the center of area or population on the surface of an ellipsoid. I believe an alternative method is preferable --- a method that restricts the computations and the results to the surface of a sphere. We are largely confined to the Earth's surface and it is appropriate to adopt this provincial viewpoint when determining the center of population or geographic center. This is discussed elsewhere in some detail (Barmore, 1991). Whichever of the two methods is used (computations {\bf in\/} the earth in three dimensions or computations {\bf on\/} the surface in two dimensions) the geographic center (center of area) of the northwest quadrant of a spherical Earth is at 90$^{\circ}$ W. longitude and 45$^{\circ}$ N. latitude. But the Earth is not spherical. The Earth is ellipsoidal in shape. When these computations are done for an ellipsoid, one finds that the geographic center is far removed from 45$^{\circ}$ N. latitude (though it remains on the 90th meridian). I have used both methods to calculate the geographic center of the northwest quadrant for Clarke's (1866) ellipsoid using the ellipsoidal geometry found in Bomford (1977) and find the center is about 22 km. to the north, well into the next county, Lincoln County, at about 45$^{\circ}12'$ N. latitude. In addition to being far above the 45th parallel and far removed from Poniatowski, Wisconsin, the center is also far removed from the point midway between the equator and the pole (see Figure 2). Though the monument marks the intersection of the 45th parallel of latitude with the 90th meridian well enough, it marks {\bf neither\/} the point midway between the equator and the pole {\bf nor\/} the center of the northern half of the western hemisphere. The claims of the marker that it is at ``the exact center of the northern half of the Western Hemisphere $\ldots$ " and `` $\ldots$ is exactly halfway between the North Pole and the Equator, $\ldots$ " are simply not true. \noindent{\bf The Center of Population} When calculating the center of population of the United States, the Bureau of the Census explicitly states that it has assumed a spherical Earth (U.S. Bureau of the Census, 1973). But the Earth is ellipsoidal in shape, not spherical. The formul{\ae} used by the Census Bureau for the center of population calculation are not particularly suitable for the computation of the center of populations on a sphere, let alone an ellipsoid. As has been previously pointed out in considerable detail (Barmore, 1991), the Census Bureau formul{\ae} do not take the curvature of Earth's surface into account in an appropriate way. But, however the center of population is calculated for populations on the surface of a sphere, the questions remains: What will be the center of population for populations on the surface of an ellipsoid? As indicated in the previous section, there are two methods of computing centers on spherical surfaces and the procedures can be extended to the problem of calculating the center of population of the United States on the surface of an I have calculated the center of population of the United States for 1980 using Clarke's (1866) ellipsoid and the ellipsoidal geometry given in Bomford (1977) two ways: 1) {\bf in\/} the Earth in three dimensions and 2) {\bf on\/} the surface in two dimensions as outlined in a previous paper (Barmore, 1991). The same example data set was used in all cases. The results of these computations as well as previously derived results for the spherical case are shown in Table 2 and Figure 3. \centerline{Table 2.} The Center of Population for 1980 for the United States calculated by various methods for the same example data set previously used (Barmore, 1991). \centerline{Center of Population} \settabs\+\qquad&In three dimensions for an ellipsoid\quad &latitude\quad &longitude\quad &\cr %sample line \+&Method of computation&label&latitude&longitude&depth\cr \+&Bureau of the Census formul{\ae} \+&In three dimensions for a sphere &39.1823&90.3477&165 km\cr \+&In three dimensions for an ellipsoid &39.1887&90.3469&165 km\cr \+&On the surface of a sphere \+&On the surface of an ellipsoid \vskip 3.5in \noindent{\bf Figure 3.} The ``Center of Population" of the United States for 1980 calculated by various methods. The place shown as an open circle and labeled $BC$, is the center determined by the U.S. Bureau of the Census (1983). As discussed previously (Barmore, 1991) this place should not be called the center of population. The places shown as solid circles and labeled $s$ and $e$, mark the centers calculated in three dimensions assuming the population is on the surface of a sphere or on the surface of Clarke's (1866) ellipsoid, respectively. The calculated centers lie {\it ca.\/} 165 km below the places marked. The places shown as an asterisk or a plus and labeled $COP$ or $COP-E$ are the centers calculated using the preferred method (Barmore, 1991) and assumes the population is on the surface of a sphere or on the surface of Clarke's (1866) ellipsoid, respectively. The preferred method restricts the computation and results to the surface (sphere or ellipsoid) containing the population. \endinsert When these results are considered it is clear that the difference between the center obtained with the Bureau of the Census formul{\ae} and the center obtained using the preferred method (or the other reasonable alternative) is substantial. However, the error in ignoring the ellipsoidal shape of the earth is smaller --- less than a minute of arc difference in the location of the center of population. The Bureau of the Census gives the center of population to the nearest second of arc of latitude and longitude. If one wishes to pursue the location of the center of population of the United States to an accuracy of one second of an arc then the ellipsoidal shape of the Earth (and a host of other considerations) should be taken into account. \noindent{\bf Summary} The Earth is not spherical. The Earth is ellipsoidal in shape. When computations are done without due regard for the ellipsoidal shape of the Earth they may be in error by amounts on the order of 1/10 degree. This paper points out: 1) that errors of {\it ca.\/} 1/10 degree result in qibla (and other azimuths calculated on a sphere, 2) that errors of {\it ca.\/} 1/10 degree result in the location of the geographic center of very large areas calculated on a sphere, but 3) that the error in the location of United States population center when properly calculated on a sphere is less than one minute of an arc. \centerline{\bf Appendix A} Because the Earth is not a sphere (nor, for that matter, exactly an ellipsoid of revolution) a certain amount of care will be needed in using the terms {\bf azimuth\/} and {\bf distance\/}. This paper uses several terms (described below) which correspond closely to those defined and used by Bomford (1977). Also, several other concepts deserve additional comment. ASTRONOMICAL AZIMUTH: For places on the physical surface of the Earth, the astronomical azimuth of one place from another corresponds to what would be measured with an accurate instrument located on the {\bf surface of the Earth\/}. GEODETIC AZIMUTH: For places on the surface of an ellipsoid representing the Earth, the geodetic azimuth of one place from another is what would be measured with an accurate instrument located on the {\bf surface of the ellipsoid\/}, the instrument being ``leveled" relative to the ellipsoid's normal at the instrument's location rather than the ``gravitational field." SIMPLE AZIMUTH: For places on the surface of a sphere, the simple azimuth of one place from another corresponds to what would be measured with an accurate instrument located on the {\bf surface of the sphere\/}, the instrument being ``leveled'' relative to the sphere's normal at the instrument's location rather than the ``gravitational field." DISTANCE: For places on the surface of an ellipsoid, distances between places are often measured along the ``normal sections" rather than along geodesics. For places on the surface of a sphere, distances between places are almost always measured along geodesics, called great circles. On the sphere simple azimuths and great circle distances are easily calculated with spherical trigonometry. On the ellipsoid geodetic azimuths and normal section distances are determined by more complex calculations. In this paper Cunningham's formula was used for Geodetic Azimuth (Bomford, 1977, Eq. 2.23) and Rudoe's ``9-figure" formula was used for distances along the normal section (Bomford, 1977, p. 136). LOCATION: Places are located on a sphere, an ellipsoid or an accurate map according to their geographic (that is, geodetic) ACCURACY: Roughly speaking, calculations done on a sphere will represent distances and direction on the real surface of the Earth with an accuracy of one degree or more. Calculations done on a suitable ellipsoid will represent distances and direction on the real surface of the Earth with an accuracy of one minute of arc or more. For an accuracy of one second of arc or more, details such as the choice of the ellipsoid parameters, the Earth's gravitational field and heights of the various places must be taken into account. For the purposes of this paper (accuracy of one minute of arc) geodetic azimuths and distances in the normal sections represent the real case well enough. It is a rare case indeed that the difference between the geodetic and the astronomical quantities would be so large as one minute of arc (Bomford, 1977, p. 115, 528). In the main text of the paper results are often stated to the nearest second of arc (or 0.0001 degree). It should be kept in mind that these results are the geodetic results. This level of accuracy is justified for comparisons of similar results but it is not the absolute accuracy of quantities on the physical surface of the Earth. ELLIPSOID: All the calculations involving the ellipsoid and discussed in the main part of the text used Clarke's (1866) Ellipsoid, a=6378.2064 km. and e=0.08227185. The geometry of the ellipsoid and various series expansions for some of the relationships were those given by Bomford (1977, Appx. A, C). NOTATION: All azimuths are measured from the North toward the East and are always positive (i.e., SW = $+225$ degrees, never $-135$). Angles are given in degrees and decimal degrees (sometimes without the unit name or symbol) or in degrees and minutes of arc (and sometimes seconds of arc) and always with the symbols: dd$^{\circ}$mm$'$ss$''$). COMPUTATIONS: All computations were done on an Apple IIGS computer using the spreadsheet in AppleWorks 3.0 (Claris Corp.). \centerline{\bf Appendix B} \centerline{\bf Half-way Point Calculation.} \centerline{(added to this reprinting at the request of the Editor.)} \noindent If the distance from the equator to the pole measured along a meridian on the surface of the ellipsoid is $s$, then: s=\int_{\hbox{equator}}^{\hbox{pole}} ds . Rewriting this in terms of the radius of curvature, $\rho$, and the geographic (geodetic) latitude, $\phi$, the latitude of the half-way point, $\Phi$, is then given by: \int_0^{\Phi} \rho\,\cdot\,d\phi = {1\over 2}\,\, \int_0^{\pi / 2} \rho\,\cdot\,d\phi = {1\over 2}s. Bomford (1977, eq. A.53)v gives the radius of curvature in terms of the semi-major axis $a$, the eccentricity $e$, and the geographic latitude. Then: {1\over 2}\,\, \int_0^{\pi /2} Cancelling common terms, using the binomial expansion ($e$ is small), and evaluating the resulting series of definite integrals on the right hand side (RHS) one finds: {\pi \over 4}\, [1 + {3\over 2} \cdot e^2 \cdot {1 \over 2} + {{3\cdot 5}\over {2\cdot 4}} \cdot e^4 \cdot {{1\cdot 3}\over {2\cdot 4}} + \cdots ]. The left hand side (LHS) integrals can be reduced (with a certain amount of algebraic and trigonometric manipulation) to: + {3\over 2} \cdot e^2 \cdot ({\Phi\over 2}-{{\hbox{sin}2\Phi}\over 4}) + {{3 \cdot 5}\over {2 \cdot 4}} \cdot e^4 \cdot ({3\over 8}\,\Phi - {{\hbox{sin}2\Phi}\over 4} + {{\hbox{sin}4\Phi}\over {32}} +\cdots ). Ignoring the smaller terms --- terms containing $e^4$, $e^6$ etc. (using the eccentricity for Clarke's 1866 ellipsoid) yields: \Phi = 0.787923557 = 45.1447^{\circ} = 45^{\circ}08'41'' . Including terms containing $e^4$ and $e^6$ yields: \Phi = 0.787945019 = 45.145924^{\circ} = 45^{\circ}08'45''. \noindent{\bf References} \ref Barmore, Frank E. 1991. ``Where Are We? Comments on the Concept of the `Center of Population'." {\sl The Wisconsin Geographer\/}, Vol. 7, 40-50. (Reprinted (with the example data set used and with several corrections) by the Institute of Mathematical Geography, Ann Arbor, MI, in their journal, {\sl Solstice: An Electronic Journal of Geography and Mathematics\/}, Vol. III, No. 2, 22-38, Winter, 1992.) \ref Barmore, Frank E. 1985. ``Turkish Mosque Orientation and The Secular Variation of the Magnetic Declination." {\sl The Journal of Near Eastern Studies\/}. Vol. 44, No. 2, 81-98. \ref Berryman, Peter. 1989. ``PONIATOWSKI." {\sl The New Berryman Berryman Songbook\/}, Madison, WI, Lou and Peter Berryman. \ref Bomford, G. 1977. {\sl Geodesy\/}. Oxford UK, Clarendon Press. A reprinting (with corrections) of the 1971 3rd Ed. \ref Head, J. W., et al. 1981. ``Topography of Venus and Earth: A Test for the Presence of Plate Tectonics." {\sl American Scientist\/}, Vol. 69, 614-623. \ref King, D. A., 1972. ``Kibla." {\sl The Encyclop{\ae}dia of Islam\/}, 2nd ed., Vol. 5, 83-88. \ref King, D. A., 1978. ``Three Sundials from Islamic Andalusia." {\sl Journal for the History of Arabic Sciences\/}, Vol. 2, 358-392. \ref King, D. A., 1982a. ``Astronomical Alignments in Medi{\ae}val Islamic Religious Architecture." {\sl Annals of the New York Academy of Sciences\/}, Vol. 385, 303-312. \ref King, D. A., 1982 b. ``The World about the Kaaba." {\sl The Sciences\/}, Vol. 22, 16-20. \ref Mendeleev, D. I. 1907. {\sl K Poznaniyu Rossii\/}, 5th ed. St. Petersburg, A. S. Suvorina, p. 139. \ref Mutch, T. A., et al. 1976. {\sl The Geology of Mars\/}. Princeton NJ, Princeton University Press, pp. 61-63, 209, and 213. \ref U. S. Bureau of the Census. 1983. {\sl 1980 Census of Population, Vol. 1\/}, Chapter A, Part 1 (PC80-1-A1). Washington DC; U.S. Dep't. of Commerce, Bureau of the Census. Appendix A, p. A-5 and Table 8, p. 1-43. \ref U. S. Bureau of the census. 1973. {\sl 1970 Census of Population and Housing: Procedural History\/} (PHC(R)-1). Washington DC; U.S. Dep't. of Commerce, Bureau of the Census. Appendix B (Computation of the 1970 U.S. center of population), pp. 3-50. \ref {\sl The Times Atlas of the World\/}, 8th Comp. Ed. 1990. New York, Times Books / Random House. \ref {\sl Mecca al-Mukarrama, 1:15000\/} 1970? (Riyadh, Kingdom of Saudi Arabia, Ministry of Pe\-tro\-le\-um and Resources, Aerial Survey Dep't.) Frank E. Barmore, Department of Physics, Cowley Hall, University of Wisconsin, La Crosse, La Crosse, WI 54601 \centerline{\bf MICROCELL HEX-NETS?} \centerline{\bf Sandra Lach Arlinghaus $^*$} The ongoing revolution in electronic communications offers exciting opportunities to realize geographic ideas in perhaps unimagined electronic realms. It is well-known, throughout governmental, business, and academic communities, that the cartographer can make a map from hundreds of electronic layers in a Geographic Information System (GIS), in which the data behind the map work interactively with the map, so that upgraded data produces an upgraded map. GIS is certainly one exciting result of the interaction between traditional science and electronics. Cordless telephones offer other prospects: networks of mobile terminals can be linked together in networks across city streets as well as within office skyscrapers. Chia (1992) notes that the Research on Advanced Communications for Europe (RACE) initiative of 1988, to study techniques to implement a third generation Universal Mobile Telecommunications System by the year 2000, is a significant step toward unifying communications and fixed The concept of a mobile telecommunication is straightforward (Chia 1992). Simply stated, a set of microcell base stations, each of which can transmit and receive electronic information, is spread across a geographic space as a network of stations, each with its own tributary area, a microcell, with which it communicates. Typically, one might think of the microcell base station as the center of a circular tributary area, with circular areas packed to cover a larger circular area. At the center of the larger circular area, a macrocell base station serves as an ``umbrella" to relay information to the microcell base stations under it, and from one network of microcells to the next (Chia 1992). Within this sort of ``mixed cell architecture," a vehicle carrying a terminal onboard passes through the microcells and receives information on a continual basis from the base station associated with the microcell it is currently traversing. This sort of hand-off of information in order to traverse a network is not new; indeed, the Rohrpost --- an underground network of pneumatic tubes used for message transmission in Berlin in the early 1900s --- was composed of energy regions in which pneumatic carriers were handed off from one region to the next in order to transmit messages across a fairly large geographic area (Arlinghaus 1986). More commonly, a relay foot race involves the handing off of a baton from one runner to a second, once the first runner has expended much energy to traverse some specified geographic space. There are a host of other illustrations of this sort. There are apparently numerous engineering concerns associated with the optimal positioning of the base stations: antenna radiation patterns, natural terrain features, interference from tall buildings, and interference and signal attenuation of all sorts, including difficulties when the mobile unit turns a corner (Chia, 1992). The geometry of directional paths through Manhattan space (Krause 1975), based on number of vehicle turns can then also become of concern (Arlinghaus and Nystuen 1989). It is the geographic issues of street patterns and building position that are fundamental to the engineering concerns in implementing these networks in which moving vehicles interchange information with a fixed network of base stations (Chia 1992). Even a brief glance at an atlas shows the range of variation in street pattern --- from the predominantly rectilinear grid of Manhattan, to the polar-coordinate style of rotary and radial evident in Washington D.C. Thus, many studies involving microcell networks are done, initially, in an abstract environment (Chia, 1992) --- as a benchmark against which to evaluate others in less than optimal environments. It is within this spirit that a microcell system, composed of layers of microcells of varying size, is viewed. \noindent{\bf Lattices} Viewed broadly, microcell base stations are a set of lattice points. The way in which the lattice is constructed can affect all other considerations of the functioning of the consequent microcell network. There are an infinite number of ``general" environments that one might use in which to construct benchmark networks. When the size of the microcells is sufficiently ``large," the microcell tributary areas might be viewed as curved surfaces which when pieced together form a set of plates composing a broad continental (for example) surface. When the size of the microcells (or macrocells) is ``local" rather than ``large," curvature may not be an issue and the cells might be treated as plane regions. (What is ``local," and what is not, is a significant problem for pragmatic implementation; at the abstract level it is of importance to note it, but not necessarily to deal with it directly.) And, if the line-of-sight geometry is one that excludes parallelism, or that permits more than one parallel, then it may be suitable to view microcell network architecture/geography from the non-Euclidean vantage point of elliptic or hyperbolic geometry (Arlinghaus 1990). Within a plane region, there are two basic ways of creating an evenly-spread lattice: one with the lattice points lying in a grid pattern, and the other with the lattice points lying in a triangular / hexagonal grid pattern (Coxeter 1961). The differences between the two should be clear to anyone who has played the game of checkers on both a square board and on a ``Chinese" board. What is not evident, though, is the sorts of patterns that emerge when one stacks layers of square or hexagonal cells of different sizes in varying orientations. When a square lattice is chosen, one style of space-filling by tributary regions emerges; when an hexagonal lattice is chosen, another appears. \noindent{\bf Microcell hex-nets} Walter Christaller (1933, 1966) grappled with the problem of overlays of hexagonal nets; he did so in the German urban environment. One might question some of the interpretations of the patterns, but his analysis of the actual patterns of overlays is correct. There are numerous discussions of this problem, often under the heading of ``central place theory" --- or, how cities might share interstitial space (Christaller 1933, 1966; Dacey 1965). When the focus is on the extent to which space is filled by portions of the hexagonal outlines, as it might be when signal attenuation and interference of radio waves are an issue, then the fractal approach which permits the easy measurement of the extent to which an infinitely iterated overlay of nets will fill space is useful. One way to look at the complicated issue of visualizing overlays of hexagonal nets is simply to think of a central hexagon surrounded by six hexagons of the same size --- each of these is centered on a microcell base station. The central hexagon is also centered on a macrocell base station which serves the entire set of seven hexagons and has as its own larger tributary macrocell, a hexagon formed by joining pairs of vertices (separated by two intervening vertices) of the perimeter of this snowflake region. When these microcells and macrocells are iterated across the plane, a stack of two layers of hexagonal cells emerges, with the orientation of one relative to the other at an angle that insures that each of the macrocells contains the geometric equivalent of 7 microcells. If one zooms in or out, to generate other layers of larger or smaller hexagons, the stack may be increased; as long as the angle of orientation is fixed by the first two, the value of ``7" will be a constant of the hierarchy ---no matter which two adjacent layers of the hierarchy are considered, a large cell will contain the equivalent of 7 smaller cells. In the literature, this is often referred to as the ``$K=7$" hierarchy. When one chooses different orientations of the nets, different $K$ values emerge; indeed, there are an infinite number of possibilities. When it is also required that vertices of smaller hexagons coincide with those of larger hexagons, there are still an infinite number of hierarchies with the $K$ values generated by the Diophantine equation $x^2+xy+y^2 = K$ (Dacey 1965) where $x$ and $y$ are the coordinates of the lattice points arranged in a triangular lattice (and so relative to a coordinate system with the y-axis inclined at $60^{\circ}$ to the x-axis). A structurally identical process may be employed to make similar calculations for layers of squares centered on a square lattice. Relationships which show the number of small square microcells within a larger square macrocell are also constant between adjacent layers of a hierarchy formed from a single orientation criterion (``J" value). Fractal geometry may be used to generate any of these hierarchies: hexagonal or square. All that is needed is to know the number of sides in a fractal generator and the self-similarity pattern desired (K- or J-value). From these, one can determine completely and uniquely the entire hierarchy--both cell size within a layer and orientation of layer (Arlinghaus, 1985; Arlinghaus and Arlinghaus 1989). The fractal dimension measures the extent to which parts of the boundary of the hexagons or squares remain under infinite iteration. When the results of the calculations are displayed in a table, it appears that hexagonal nets consistently fill less space (Arlinghaus, 1993). This Table suggests that individuals actually implementing microcell systems might wish to first consider shape, size, and orientation of layers of a mixed cell architecture prior to superimposing any of the geographic concerns of street networks, or engineering concerns caused by interference and signal attenuation. A mixed cell architecture of low fractal dimension might be one that reduces interference, to some extent, just by the relative positions of microcells to macrocells. \centerline{Table: Comparison of fractal dimensions} \settabs\+\indent\quad&Lattice coordinates of\qquad\qquad &Fractal Dimension\qquad\qquad&\cr %sample line \+&Lattice coordinates of &Fractal Dimension &\cr \+&microcell base station &&\cr \+&adjacent to &Squares &Hexagons \cr \+&microcell base station &&\cr \+&at $(0,0)$. &&\cr \+&(1,1) &2.0 &1.262\cr \+&(1,2) &1.365 &1.129\cr \+&(0,2) &2.0 &1.585\cr \+&(0,3) &1.465 &1.262\cr \+&(0,4) &1.5 &1.161\cr \+&(0,5) &1.365 &1.209\cr \+&(0,6) &1.387 &1.161\cr \+&(0,7) &1.318 &1.129\cr \+&(0,8) &1.333 &1.153\cr \+&(0,9) &1.290 &1.131\cr \+&(0,10) &1.301 &1.114\cr \noindent{\bf References} \ref Arlinghaus, S. 1993. Electronic geometry. {\sl Geographical Review\/}, to appear, April issue. \ref Arlinghaus, S. 1990. Parallels between parallels. {\sl Solstice\/}, Vol. I, No. 2. \ref Arlinghaus, S. 1986. {\sl Down the Mail Tubes: The Pressured Postal Era, 1853-1984\/}, Monograph \#2, Ann Arbor: Institute of Mathematical Geography. \ref Arlinghaus, S. 1985. Fractals take a central place. {\sl Geografiska Annaler\/} 67B, 2, 83-88. \ref Arlinghaus, S. and Arlinghaus W. 1989. The fractal theory of central place geometry: a Diophantine analysis of fractal generators for arbitrary Loschian numbers. {\sl Geographical Analysis\/}, Vol. 21, No. 2. pp. 103-121. \ref Arlinghaus, S. and Nystuen, J. 1989. Elements of geometric routing theory. Unpublished. \ref Chia, Stanley. December, 1992. The Universal Mobile Telecommunication System. {\sl IEEE Communications\/}, Vol. 30, No. 12, pp. 54-62. \ref Christaller, Walter. 1933 (translated into English 1966). Baskin translation: {\sl The Central Places of Southern Germany\/}. Englewood Cliffs: Prentice-Hall. \ref Coxeter, H. S. M. 1961. {\sl Introduction to Geometry\/}. New York: Wiley. \ref Dacey, M. F. 1965. The geometry of central place theory. {\sl Geografiska Annaler\/} 47, 111-24. \ref Krause, Eugene. 1975. {\sl Taxicab Geometry\/}. Menlo Park: Addison-Wesley; 1980, Springer-Verlag. Sandra Lach Arlinghaus, Institute of Mathematical Geography, 2790 Briarcliff, Ann Arbor, MI 48105. \centerline{\bf SUM GRAPHS AND GEOGRAPHIC INFORMATION} \centerline{\bf Sandra L. Arlinghaus, William C. Arlinghaus, Frank Harary$^{*}$} \noindent{\bf Abstract} We examine a new graph theoretic concept called a ``sum graph," display a new sum graph construction, and prove a new theorem about sum graphs (sum graph unification theorem) verifying the construction. The sum graph is then generalized, ultimately as an augmented reversed logarithmic sum graph, so that it is useful in dealing with large sets of geographic information. The generalized form permits 1) the compression of large data sets, and 2) the simultaneous consideration of data sets at various levels of resolution. The advantages of employing sum graph unification and the augmented reversed logarithmic sum graph to handle data sets are illustrated by hypothetical example; as a data structure, the various forms of sum graph data management provide compact handling of data and do so in a manner that permits variability of resolution, at multiple levels (unlike the quadtree), within a single layer of mathematical manipulation. Our interest in creating, and exploring, this sort of data structure rests in searching for structures that are translation invariant. Data structures resting on geographic direction, such as the quadtree, seem destined not to be translation invariant; structures that are not tied to the ordering of the space in which they are embedded, but only to an ordering within the structure itself, have the potential to be translation invariant. Geography and graph theory have a long history of interaction: the Four Color Problem (now Theorem) and the K\"onigsberg Bridge Problem of graph theory arose as geographical questions. As geography has stimulated mathematical creation, so too has the body of theory developed by graph theorists stimulated careful analysis of various geographical networks. It is within this well-established spirit of interaction, and within the technological framework where electronic processing of data may be characterized using graph theory, that we examine a new graph theoretic concept, called a sum graph, as a theoretical data In this structure, the numerical pattern of the labels of the nodes in the ``sum graph" will be dictated by the linkage pattern in the underlying data, rather than the other way around, which is more conventional. Thus, data that are ``linear" (sequential), such as data streams in a raster mode, will be represented by a sum graph whose linkage pattern is linear, thereby forcing a certain style of label to be present on the associated nodes. We demonstrate the theoretical concepts in this paper using examples limited to the linear case because it is easy to express and because it has wide applicability. Thus, the first section introduces the reader to elements of the abstract development of sum graphs, focusing only on those concepts that will actually be applied. The second section shows how to force ``correct" labelling of ``sum graphs" to permit the simultaneous consideration of data at multiple levels of resolution within subsets of a data set that is linear in character. The third section introduces the concept of ``logarithmic sum graph," used to compress large data sets into subsets within bands of width of one unit --- a critical strategy as the length of the linear sequence (data stream) increases. The fourth section introduces the ``reversed sum graph" which also permits simultaneous consideration of data at more than one scale and does so with optimal labelling within bands of one unit. The fifth section introduces the ``augmented reversed logarithmic sum graph," a graph combining the desirable elements of previously considered structures augmented by a set of linkages, induced by the numerical labelling of subsets, that permits inclusion of data at variable levels of resolution and offers a means to link that data between, in addition to within, subsets. Throughout, we show how to use these concepts in a small application derived from a set of data concerning North American cities. \centerline{\bf 1. Sum Graphs} \noindent{\sl Definition 1\/} (Harary, 1989) Let $S$ be a set of $n$ distinct positive integers. Define {\sl the sum graph\/} $G^+(S)$ as follows: \item{1.} $G^+(S)$ has $n$ nodes, each labelled with a different element (number) of $S$; \item{2.} there is an edge between two nodes labelled $a$ and $b$ if and only if $a+b\in S$. \noindent{Example 1} Figure 1 shows the sum graph of $S_1=\{1,4,5,7,8,9\}$. $S_1$ is a set of arbitrarily chosen labels for the nodes. Because the label ``9" is an element of $S_1$, it follows that the edge linking 4 and 5 ($4+5=9$) is present in the graph. Because the label ``6" is {\sl not\/} an element of $S_1$ there is an edge linking 1 and 5 ($1+5=6$). A number of theorems concerning sum graphs appear in the mathematics literature (Harary, 1990; Bergstrand {\it et al.\/}, 1990, 1991). We state those results without proof; others wishing to employ these methods should read with understanding the proofs in the mathematics literature, lest the methods be inappropriately applied in different situations. First note that the largest number in $S$ cannot be the label of a node joined to any other node. & &8 & & \cr & &\bullet & & \cr & &\Big\vert & & \cr 4& &1 & &9 \cr \bullet &------------&\bullet & &\bullet \cr \Big\vert & &\Big\vert & & \cr \bullet & &\bullet & & \cr 5& &7 & & \cr \centerline{\bf Figure 1.} \centerline{$G^+(S_1)$: the sum graph of $\{1,4,5,7,8,9\}$.} \centerline{Reader is to solidify any dashed lines with a pencil} \noindent{\sl Lemma 1\/} (Harary, 1990) Every sum graph contains at least one isolated node. \noindent{\sl Example 2\/}: The sum graph of $S_2=\{2,3,5,6,10\}$ is displayed in Figure 2. $2$ &\bullet \cr &\Big\vert \cr $3$ &\bullet \cr & \cr $5$ &\bullet \cr & \cr $6$ &\bullet \cr & \cr $10$&\bullet \cr \centerline{\bf Figure 2.} \centerline{$G^+(S_2)$: the sum graph of $\{2,3,5,6,10\}$.} Lemma 1 assures that the node labelled 10 is isolated. Example 2 illustrates that more than one isolated node is possible; hence, the phrase ``at least" in the statement of Lemma 1. \noindent{\sl Definition 2\/} (Harary, 1969) Two graphs $G_1$ and $G_2$ are {\sl isomorphic\/} is there is a one-to-one correspondence $f$ between their node sets such that, for any two nodes $a$ and $b$ in $G_1$, $(a,b)$ is an edge in $G_1$ if and only if $(f(a),F(b))$ is an edge in $G_2$. Thus two graphs are isomorphic not only when they look the same, but perhaps have different labellings of the nodes, but they are also isomorphic when the graphs do not look alike but have the same connection pattern --- as are views of the same digital terrain model from different vantage points. Figure 3 illustrates this phenomenon for the graph of the octahedron. Isomorphic structures are invariant under geometric translation. \midinsert \vskip2.5in \centerline{\bf Figure 3.} \centerline{The octahedron in two different views (View A on the left; View B on the right)} \centerline{The reader should draw it} \noindent{\sl Notation\/} Given a set $S$ of positive integers, write $kS = \{kx : x \in S\}$. \noindent{\sl Theorem 1\/} (Harary, 1990) If $G^+(S)$ is a sum graph and $S'=kS$, $k$ a positive integer, then $G^+(S)$ and $G^+(S')$ are isomorphic. \noindent{\sl Example 3\/} Consider the sum graph of Example 1, $G^+(S_1)$ with $S_1 = \{1,4,5,7,8,9\}$. When $k=3$, we have $S_1 = \{3, 12, 15, 21, 24, 27\}$. The distributive law of algebra guarantees that exactly the same edges will appear in $G^+(S_1')$ as in $G^+(S_1)$. For example, because $5\in S_1$, 1 and 4 are adjacent in $G^+(S_1)$; because $3\cdot 5 \in S_1'$, $3\cdot 1$ and $3 \cdot 4$ are adjacent in $G^+(S_1')$, since $3 \cdot 1 + 3 \cdot 4 = 3 \cdot (1+4)$. Thus, $G^+(S_1)$ and $G^+(S_1')$ have the same edge structure (but different node labellings, hence, perhaps, different geographic positions), so they are One interesting structure a sum graph might have is a graph-theoretic path (Harary, 1969). \noindent{\sl Definition 3\/} (Niven and Zuckerman, 1960) The sequence of Fibonacci numbers $F_n$ is defined as follows: $F_1 =1$, $F_2 = 2$, $F_n = F_{n-2} + F_{n-1}$ when $n \geq 2$. For example, the first nine elements of this sequence are 1, 2, 3, 5, 8, 13, 21, 34, 55. \noindent{\sl Theorem 2\/} (Harary, 1990) If $S = \{F_1,F_2,\ldots , F_p\}$ is the set consisting of the first $p$ Fibonacci numbers, then $G^+(S)$ consists of a path connecting $F_1$ and $F_{p-1}$ and the isolated node $F_p$. \noindent{\sl Example 4\/} Let $S_3 = \{1,2,3,5,8,13,21,34,55\}$. Then $G^+(S_3)$ is the graph of Figure 4. $1$ &\bullet \cr &\Big\vert \cr $2$ &\bullet \cr &\Big\vert \cr $3$ &\bullet \cr &\Big\vert \cr $5$ &\bullet \cr &\Big\vert \cr $8$&\bullet \cr &\Big\vert \cr $13$ &\bullet \cr &\Big\vert \cr $21$ &\bullet \cr &\Big\vert \cr $34$ &\bullet \cr & \cr $55$ &\bullet \cr \centerline{\bf Figure 4.} A Fibonacci sum graph containing a path and an isolate} \centerline{\bf 2. Sum Graph Unification: Construction} One of the characteristics that distinguishes a sum graph from other graphs is that the algebraic rule assigning edges forces the sum graph to have at least one isolated node. Thus, in aligning this graph-theoretic concept with geographic notions, one might, at the outset, be tempted to look for applications that require ``isolating" one geographic location from a set of others, as in site-selection for toxic waste sites, for prisons, or for other similar societally-obnoxious facilities. Further reflection suggests, however, that the power behind this ``isolation" might be best exploited by considering the isolated node as one with linkages not visible at the graph-scale shown, much as inset maps generally do not reveal linkages to the larger-scale maps they modify. Thus, this cartographic conception of the isolated node as a node with invisible edges will provide a systematic method for shifting scale, or varying resolution, without disturbing the associated spatial structure. The isolated node acts as a ``cataloging" node functioning at a scale different from the content it catalogues (the term ``isolated" will therefore be reserved for the graph-theoretic case; when viewed in a geographic context, the ``isolated" node will be referred to as a ``cataloging" node to emphasize this Consider three disjoint sets of nodes, $A$, $B$, and $C$, with a linear linkage pattern joining each (Figure 5). The linear linkage pattern of each path is based on some serial arrangement of data, such as data ordered by longitude from east to west. &\bullet &&&&& &\bullet &&&&& &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr &\bullet &&&&& &\bullet &&&&& &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr &\bullet &&&&& &\bullet &&&&& &\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr & &&&&& &\bullet &&&&& &\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&& & &&&&& &\bullet \cr \centerline{\bf Figure 5.} \centerline{Three graphs, Left, Middle, and Right, representing serial linkage of data.} It is not difficult to obtain the paths, $P_3$, $P_4$, $P_5$ of Figure 5 as three distinct sum graphs using Theorem 2. Fibonacci labelling of the nodes of Figure 5, shown in Figure 6, generates (as sum graphs) exactly the path-patterns of Figure 5; e.g., the edge joining 2 to 3 in $A$ is present because $2+3=5$ is also a node label. An additional node, a cataloging one, is necessarily introduced in each sum-graph, $A$, $B$, and $C$ of Figure 6. When the label of a cataloging node is used as a label for an entire configuration, this sum graph represents not only the linear linkage within the path, but also, at the same time, represents information (as a label) for the entire path. Information at different cartographic scales is displayed $1$ &\bullet &&&&&$1$ &\bullet &&&&&$1$ &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr $2$ &\bullet &&&&&$2$ &\bullet &&&&&$2$ &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr $3$ &\bullet &&&&&$3$ &\bullet &&&&&$3$ &\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr $5$ &\bullet &&&&&$5$ &\bullet &&&&&$5$ &\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&&$8$ &\bullet &&&&&$8$ &\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&$13$&\bullet \cr \centerline{\bf Figure 6.} \centerline{The three distinct Fibonacci sum graphs showing the paths} \centerline{$P_3$ (on the left), $P_4$ (middle), and $P_5$ (right).} In Figure 6, the simple Fibonacci labelling scheme of Theorem 2 produced three distinct sum graphs. Because the same labels are re-used, it would not be possible to compare information concerning these distinct sum graphs. Stronger theoretical results follow: results that will permit such comparison, while retaining the desirable asset of simultaneous display of data at different cartographic scales. Consider, as a whole, the set of twelve nodes from Figure 5. Find a strategy for labelling these nodes that will produce exactly the three paths of Figure 5 as subgraphs of a single sum graph. Viewing the three parts of Figure 5 as subgraphs of a {\sl single\/} sum graph will guarantee distinct labels for distinct nodes while retaining scale-shift characteristics. One way to achieve such a labelling is as follows. Assign Fibonacci numbers consecutively (starting with 1) to the nodes of one subgraph ($A$, in Figure 7). Continue this scheme to a node of subgraph $B$; thus, in Figure 7, $A$ has nodes with labels 1, 2, 3 and one node in $B$ has label 5. It might be natural to label the next node in $B$ with the next Fibonacci number --- 8. However, this would introduce an unwanted edge between 3 and 5. So, label the next node with one more than the next Fibonacci number --- in this case 9 --- to remove the possibility of introducing unwanted edges. Label the remaining nodes in the Fibonacci-style with 5 and 9 as the first two elements. Continue this scheme through to one node of subgraph $C$ (labels 14, 23, and 37 are thus introduced). The second node in the third subgraph must not be labelled 60, or else an unwanted edge is introduced linking 23 to 37. Call the label of the second node ``61". Continue labelling in the Fibonacci style using 37 and 61 as the first two elements of a Fibonacci-style label-generating scheme. In the case of Figure 7, all nodes are now labelled; a single extra node, which is a cataloging one, is also labelled. All paths of this single sum graph are exactly those desired. The label associated with the cataloging node, 416, is the catalogue number for the entire configuration; other labels describe the local, linear linkage patterns. Distinct labels correspond to distinct nodes in such a way that only desired paths are introduced between nodes. A single added cataloging node permits associating information with a label for this node at the scale of the entire configuration --- in the manner of object-oriented data structures. $1$ &\bullet &&&&&$5$&\bullet &&&&&$37$ &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr $2$ &\bullet &&&&&$9$&\bullet &&&&&$61$&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr $3$ &\bullet &&&&&$14$&\bullet &&&&&$98$&\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr & &&&&&$23$&\bullet &&&&&$159$&\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&& & &&&&&$257$&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&$416$ &\bullet \cr \centerline{\bf Figure 7.} \centerline{A Fibonacci-style of labelling for a sum graph with one cataloging node (416)} \centerline{showing the paths $P_3$ (on the left), $P_4$ (middle), and $P_5$ (right) as subgraphs.} Thus, two levels of variability in resolution are displayed --- that of the linkage pattern within individual subgraphs, and that of the weight of the entire graph, reflecting to some extent on the size of the data set, and the style of its subgraphs and their pattern of internal connection (had the subgraph in the middle terminated at 14, the subgraph on the right (with an added edge) would have begun with 23 and had an isolated node with label 419). Stronger yet would be to construct a single sum graph from which desired paths would emerge (as in Figure 7) and in which distinct paths would correspond to distinctly-labelled cataloging nodes as in Figure 6. The notion of wanting one cataloging node per desired path, in order to ensure greater variability in resolution, motivates the following definition. \noindent{\sl Definition 4\/} Suppose a set of $n$ nodes is partitioned into $t$ subsets. Further suppose $k$ of these subsets contain more than one node. To each of these $k$ subsets add a node. The resulting $t$ subsets will be called ``constellations" (Figure 8). &\bullet &&&&& &\bullet &&&&& &\bullet \cr & &&&&& & &&&&& & \cr &\bullet &&&&& &\bullet &&&&& &\bullet \cr & &&&&& & &&&&& & \cr &\bullet &&&&& &\bullet &&&&& &\bullet \cr & &&&&& & &&&&& & \cr &\bullet &&&&& &\bullet &&&&& &\bullet \cr & &&&&& & &&&&& & \cr & &&&&& &\bullet &&&&& &\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&& &\bullet \cr \centerline{\bf Figure 8.} \centerline{Three constellations, Left, Middle, and Right, partition a distribution of nodes.} Now we return to the example of Figure 5, with three nodes added to make three constellations (all with more than one node, as in Figure 6). \noindent We seek some labelling for the entire set of constellation nodes (Figure 8), as nodes of a single sum graph, that will \item{1.} produce the paths $P_3$, $P_4$, $P_5$; \item{2.} produce cataloging nodes within the subgraphs containing $P_3$, $P_4$, $P_5$; \item{3.} make retrieval of path structure simple. \noindent Because there are paths that are to be retrieved as subgraphs of a single sum graph, some sort of Fibonacci or Fibonacci-style labelling will be needed (Theorem 2). The labels from Figure 5 cannot be chosen because under that circumstance distinct nodes do not have distinct labels. Theorem 1 suggests that distinctness in labelling as well as retention of path structure is achieved by multiplying Fibonacci numbers by constants. Thus, the issue is to know what values to choose as these ``multipliers" so that distinctness of node labels (required by Definition 1) is ensured. Example 5, below, suggests a general construction that will satisfy these conditions. It will be proved in full generality in Theorem 3. \noindent{\sl Example 5\/} 1. To ensure path structure, give the underlying Fibonacci label pattern of 1,2,3,5; 1,2,3,5,8; 1,2,3,5,8,13 to, respectively, the left, middle, and right constellations (Definition 4) in the node pattern of Figure 8. To produce a set of suitable multipliers for these nodes, proceed to step 2. 2. Choose the smallest prime number greater than the sum of the largest and next largest numbers used in the underlying Fibonacci pattern. In this case, 13 is the largest number in the underlying Fibonacci pattern and 8 is the next largest, so choose 23, the smallest prime number larger than 13+8=21 (choosing 21 would introduce an unwanted edge). This number will be the multiplier for one constellation (in this case, we arbitrarily choose to use it for the left-hand constellation). 3. Use successive powers of 23 (23 functions therefore as a base-multiplier) to label the nodes of successive constellations. In this case, $23^2$ is used as the multiplier for the right-hand constellation. The nodes are now labelled as shown in Figure 9. x &\bullet &&&&&x^2 &\bullet &&&&&x^3 &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 2x&\bullet &&&&&2x^2&\bullet &&&&&2x^3&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 3x&\bullet &&&&&3x^2&\bullet &&&&&3x^3&\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr 5x&\bullet &&&&&5x^2&\bullet &&&&&5x^3&\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&&8x^2&\bullet &&&&&8x^3&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&13x^3 &\bullet \cr \centerline{\bf Figure 9.} \centerline{Sum graph derived from Figure 6 using the base multiplier and its powers,} \centerline{writing $x=23$ for brevity.} When this set of nodes is used as the set $S$ of Definition 1, the resulting sum graph is isomorphic to the union of the three sum graphs in Figure 5. The fact that three cataloging nodes are introduced by this procedure gives an indication from each coefficient of the cataloging nodes of size, shape, and connection pattern of the subgraph it represents (as did the single cataloging node of 416 for the entire graph in Figure 7). The set of steps in Example 5 may be stated more generally as in the Construction below. \centerline{\bf Construction: Sum Graph Unification\/} Given a set of nodes partitioned into constellations. To ensure a prescribed path structure linking the nodes, that can be retrieved electronically entirely (only) from the numerical characteristics of the labels for the nodes, assign labels in the following manner. 1. Label the nodes of each constellation with Fibonacci numbers, in order, beginning with the label ``1" in each constellation. 2. Find a base multiplier for each Fibonacci label. Form the sum of the two largest labels from step 1. The smallest prime number greater than this sum will serve as a multiplier. Use this prime base multiplier as the multiplier for labels of the nodes in one constellation. 3. Use successive powers of the prime in step 2 as multipliers for labels of the nodes in successive constellations. \centerline{\bf 3. Cartographic Application of Sum Graph Unification} The following application will show how the labelling produced by the Sum Graph Unification Construction might be used. Consider a set of seven North American cities together with selected suburbs of those cities (Table 1.1). Column 1 in Table 1.1 lists these cities and suburbs in seven groups as metropolitan areas (the latter named in all upper case letters): constellations. To consider the east-west extent a proposed metropolitan mass transit system might need to cover, the longitude is also associated with each location (in column 2 of Table 1.1). The sequential ordering of cities and suburbs, by longitude from east to west, describes a path within each constellation linking these nodes. The metro area node is a cataloging node not hooked into the path. Column 3 associates a Fibonacci number with each node of the entire distribution of nodes (step 1 in the Construction). Column 4 shows weights for the nodes by constellation; 37 is the base multiplier because it is the smallest prime greater than 21+13 (steps 2 and 3 in the Construction). Column 5 shows the product of columns 3 and 4; distinct nodes have distinct labels. Suppose the entire list is rearranged by longitude, independent of constellation; positions of data within all but the New Orleans constellation remain the same. In the New Orleans constellation, the suburb of Metairie is shifted from the New Orleans constellation to the St. Louis constellation (between E. St. Louis and Lemay). That Metairie jumps metropolitan area is evident from the factored weight associated with it: it belongs to constellation 7, that of New Orleans, as its exponent in the factored weight shows (Table 1.2). Thus, the sum graph node label shows that it is out of regional order and provides a direct means to re-sort it back into regional order. Rank-ordering or other conventional means would not do so; rank ordering does not show which city belongs in which constellation. These sum graph node labels offer a way to organize data and to retrieve predetermined sequential order of information from a jumbled data set. The node labels are somewhat large in magnitude, but that is irrelevant in this particular application. It may be important in others, and thus it is to this issue and to the related one of data compression that the remainder of the material is directed. \centerline{\bf 4. Sum Graph Unification: Theory\/} The example above may prove a useful source of mental reference points on which to base the formal proof of the following lemmas needed to probe Theorem 3 below. The first Lemma will prove that there are no unwanted edges linking nodes within constellations and the second one will prove that there are no edges linking nodes between constellations. For the most part, Theorem 3 is just a formalization of the method developed in the example based on Figure 9. However, additional details are necessary to allow for constellations of a single node (in these cases no new node is added). One might interpret such a node as a small city with no suburbs. (Readers wishing to examine the rigor of this method should read Theorem 3 and associated material with care; others might wish to skip to the next section.) \noindent{\sl Lemma 3a\/} Let $a$, $b$, $c$, $i$, $j$ be positive integers. If $p > a+b$, and $p > c$, it is impossible for $a\cdot p^i + b\cdot p^i = c\cdot p^j$ if $j\neq i$. \noindent{\sl Proof\/} Note that $a\cdot p^i+b\cdot p^i = (a+b)p^i < p^{i+1} \leq c\cdot p^j$ if $j>i$. Similarly, if $j a+b$. Let $x$, $y$, $z$ be positive integers, $x\neq y$. Then $a\cdot p^x + b\cdot p^y = c\cdot p^z$ is impossible. \noindent{\sl Proof\/}: Without loss of generality, assume $x < y$. Then, $p^y < a\cdot p^x+b\cdot p^y < (a+b)p^y < p^{y+1}$. Thus, for the equation to be possible, $z=y$. But then $a\cdot p^x \equiv 0(\hbox{mod} p)$, which is impossible, since $ap^x < p^{x+1} \leq p^y$. \noindent We now formalize the ideas exhibited in the construction of Example 3. \noindent{\sl Definition 5\/} (Harary, 1970) A linear tree is a path. A linear forest is a union of disjoint linear trees. {\sl Theorem 3\/} (Fibonacci sum graph unification) Suppose we are given a set of $n$ nodes, which are partitioned into $t$ subsets, $k$ of which contain more than a single node. Then there is a set $S$ of $n+k$ suitably chosen positive integers whose sum graph $G^+(S)$ consists of $t$ isolates ($k$ additional nodes and $t-k$ nodes from single-node subsets) together with a linear forest of $k$ nontrivial paths. \noindent{\sl Proof\/}: Suppose that the $n$ original nodes are $a_1$, $a_2$, $\ldots $, $a_n$. Divide these into the $t$ desired subsets \{x_{11}, x_{12}, \ldots x_{1n_1}\} \{x_{21}, x_{22}, \ldots x_{2n_2}\} \{x_{t1}, x_{t2}, \ldots x_{tn_t}\} where $n_1+n_2+\cdots +n_t = n$. Let $N = 2 + \hbox{max} \{n_1, n_2, \ldots , n_t\}$. Let $p$ be the smallest prime greater than $F_N$, the $N$th Fibonacci number. Now label $n+k$ nodes as \item{1.} If $n_i = 1$, label $x_{i1}$ with $p^i$ (subsets with exactly one node). \item{2.} If $n_i \neq 1$, label $x_{i1}$ with $p^i$, $x_{i2}$ with $2p^i,\ldots x_{in_i}$ with $p^iF_{n_i}$, and a new node $y_i$ with $p^iF_{(1+n_i)}$ (subsets with more than one node). \noindent Follow this procedure for all $i$, $1 \leq i \leq t$. Let $S$ consist of the original nodes together with the new $y_i$s. Now consider constellations consisting of the nodes labelled $x_i$ if $i=1$ and the nodes $\{x_{i1},\ldots , x_{in}, y_i\}$ is $i\neq 1$. Then Theorems 1 and 2 assure that there are Fibonacci paths $x_{i1}, x_{i2}, \ldots x_{in}$ and that $y_i$ is not adjacent to $x_{ia}$ for any $a$ ($1 \leq a \leq n_i$). Lemma 3a assures that there are no edges within a constellation other than the Fibonacci path. Lemma 3b assures that there are no edges between constellations. Thus, the theorem is proved. \centerline{\bf 5. Logarithmic Sum Graphs} The procedure displayed in the Construction, and proved in Theorem 3, meets the criteria of producing desired paths, from the labelling scheme alone, each with a corresponding cataloging node, as subgraphs of a single sum graph. In cases based on large data sets, the multipliers get very large very quickly. However, if the logarithm (using the base multiplier, $x$, as the base of the logarithm) of each label is taken, this issue of apparent significance vanishes (Table 2). In the example on which Figure 9 was based, the values of the multipliers transformed by the log base 23 display clearly the constellation structure. The nodes associated with all entries with integral part ``1" are grouped in a constellation, all with integral part ``2" in another, and all with integral part ``3" in yet another. The integral values serve as a data ``key" to this data structure. The fractional values are, of course, the same from subset to subset, exhibiting the same underlying Fibonacci linkage pattern from subset to subset. The largest value in each subset is the cataloging node; if other nodes were to be included in, for example, the third constellation, those also would have a logarithmic value greater than 3.8180367 but less than 4. Thus, independent of how many nodes there are in a single constellation, all the logarithmic labels are contained in a band of real numbers one unit wide: 3 is a greatest lower bound (which is attained), and 4 is an upper bound for labels in the third constellation. Further, the logarithmically - transformed labels increase additively: there are only as many different data keys as there are different constellations. 1 &\bullet &&&&&2 &\bullet &&&&&3 &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.22&\bullet &&&&&2.22&\bullet &&&&&3.22&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.35&\bullet &&&&&2.35&\bullet &&&&&3.35&\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.51&\bullet &&&&&2.51&\bullet &&&&&3.51&\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&&2.66&\bullet &&&&&3.66&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&3.81&\bullet \cr \centerline{\bf Figure 10.} \centerline{Logarithmic sum graph} When these logarithmic labels are attached to the nodes of the graph in Figure 9 we refer to the resulting graph as a ``logarithmic sum graph" (Figure 10). Note, however, that even though this graph is isomorphic to the sum graph of Figure 9, it is not itself a sum graph (in much the way that a truncated cone is not itself a cone, even though it is derived from a cone). From a purely theoretical standpoint, it is possible to identify the constellation to which a node belongs very simply from its assigned multiplier. For, if $p$ is the base multiplier, a node whose multiplier is $N=a\cdot p^k$ has $k\leq \hbox{log}_p\, N \leq k+1$, since $a < p$. Thus, a node with multiplier $N$ belongs to constellation $k$ if and only if $[\hbox{log}_p\,N] =k$ (where brackets denote the greatest integer function). (From a computer standpoint, one must be careful, since occasionally computational error might make $\hbox{log}_p\,p^k < k$ computationally. Adding a suitably small amount to $\hbox{log}_p \,N$ before determining its constellation should avert this difficulty.) In fact, it seems easier computationally to store $\hbox{log}_p\, N$ rather than $N$ as a multiplier, since then much smaller numbers can be stored. This motivates the following formal characterization of logarithmic sum graphs. \noindent{\sl Definition 6} Let $S$ be a set of $n$ distinct positive integers, $p$ a prime. Define the {\sl logarithmic sum graph\/}, relative to $p$, $G^+(\hbox{log}_p\, S)$ as follows: \item{1.} $G^+(\hbox{log}_p\, S)$ has $n$ nodes, labelled with the $n$ different labels $\{\hbox{log}_p\, x \quad \vert \quad x \in S \}$. \item{2.} there is an edge between two nodes labelled $a$ and $b$ if $p^a + p^b \in S$. \noindent Logarithmic sum graphs retain all the advantages afforded by Theorem 3, and they make it possible to handle large data sets more easily. \centerline{\bf 6. Reversed Sum Graphs.} In the procedure of Theorem 3, and in the logarithmic modification of that procedure to accommodate large data sets, the cataloging nodes all have the largest labels within their subgraph. It might be useful, in some situations, for the cataloging nodes to have the smallest labels within their subgraphs. For this purpose, we define the notion of a ``reversed" sum graph. \noindent{\sl Definition 7} Let $S$ be a set of positive integers such that the sum graph $G^+(S)$ [logarithmic sum graph $G^+(\hbox{log}_p\, S)$] is partitioned into constellations such as those of Theorem 3. Define the {\sl reversed sum graph\/} ${}^{+}G(S)$ [{\sl reversed logarithmic sum graph\/} $^{+}G(\hbox{log}_p\, S)$], isomorphic to $G^+(S)$ [$G^+(\hbox{log}_{p}\, S)$], as follows. If the nodes in a given constellation have labels $a_1 < a_2 < \ldots < a_p$, relabel them $a_p, a_{p-1}, \ldots , a_1$. That is, the node labelled $a_i$ is given the new label $a_{p+1-i}$. (Note that single-node constellations are not affected.) \noindent{Example 6\/} Let $S_4 = \{1,2,3,5,8,13\}$. The graphs $G^+(S)$, $^+G(S)$ are displayed in Figure 11. (As in the case of the logarithmic sum graph, note that a reversed sum graph (Definition 7) is not itself a sum graph.) 1 &\bullet &&&&&13 &\bullet \cr &\Big\vert &&&&& &\Big\vert \cr 2 &\bullet &&&&&8 &\bullet \cr &\Big\vert &&&&& &\Big\vert \cr 3 &\bullet &&&&&5 &\bullet \cr &\Big\vert &&&&& &\Big\vert \cr 5 &\bullet &&&&&3 &\bullet \cr &\Big\vert &&&&& &\Big\vert \cr 8 &\bullet &&&&&2 &\bullet \cr & &&&&& & \cr 13 & &&&&&1 &\bullet \cr \centerline{\bf Figure 11.} \centerline{A Fibonacci sum graph $G^+(S)$ (left)} \centerline{and its reversed sum graph $^+G(S)$ (right).} \noindent As Definition 7 suggests, logarithmic sum graphs may also be reversed. Figure 12 shows the logarithmic sum graph of Figure 10 and its reversed logarithmic sum graph. Reversed sum graphs, logarithmic or not, always assign an integer, the data key, to the cataloging node. This feature is particularly important in the case of the logarithmic representation, when data might be added to or deleted from a single subgraph, all with integral part of their labels identical to that of the cataloging label. 1 &\bullet &&&&&2 &\bullet &&&&&3 &\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.22&\bullet &&&&&2.22&\bullet &&&&&3.22&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.35&\bullet &&&&&2.35&\bullet &&&&&3.35&\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.51&\bullet &&&&&2.51&\bullet &&&&&3.51&\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&&2.66&\bullet &&&&&3.66&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&3.81&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&& & \cr 1.51&\bullet &&&&&2.66&\bullet &&&&&3.81&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.35&\bullet &&&&&2.51&\bullet &&&&&3.66&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.22&\bullet &&&&&2.35&\bullet &&&&&3.51&\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr 1 &\bullet &&&&&2.22&\bullet &&&&&3.35&\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&&2 &\bullet &&&&&3.22&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&3 &\bullet \cr \centerline{\bf Figure 12.} \centerline{Logarithmic sum graph (top) and reversed logarithmic sum graph (bottom).} \centerline{\bf 7. Augmented Reversed Logarithmic Sum Graphs} Reversed logarithmic sum graphs single out cataloging nodes as the only nodes with integral labels. It may be useful to consider linkages within the set of cataloging nodes and to ``augment" the reversed logarithmic sum graph with edges displaying these linkages (Figure 13). 1.51&\bullet &&&&&2.66&\bullet &&&&&3.81&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.35&\bullet &&&&&2.51&\bullet &&&&&3.66&\bullet \cr &\Big\vert &&&&& &\Big\vert &&&&& &\Big\vert \cr 1.22&\bullet &&&&&2.35&\bullet &&&&&3.51&\bullet \cr & &&&&& &\Big\vert &&&&& &\Big\vert \cr 1 &\bullet &&&&&2.22&\bullet &&&&&3.35&\bullet \cr & &&&&& & &&&&& &\Big\vert \cr & &&&&&2 &\bullet &&&&&3.22&\bullet \cr & &&&&& & &&&&& & \cr & &&&&& & &&&&&3 &\bullet \cr \centerline{\bf Figure 13.} \centerline{ARL sum graph derived from a reversed logarithmic sum \centerline{\bf Reader should draw edges joining nodes 1 and 2, 2 and 3, and 1 and 3}. \noindent {\sl Definition 8\/} The {\sl augmented reversed logarithmic sum graph, ARL sum graph\/}, denoted $^+A(\hbox{log}_p\, S)$, consists of the nodes and edges of $^+G(\hbox{log}_p\, S)$ together with all edges linking the nodes with integer labels in $^+G(\hbox{log}_p\, S)$. Thus, $^+A(\hbox{log}_p\, S)$ $=$ $^+G(\hbox{log}_p\, S)$ $\cup$ $\{$ complete graph on nodes with integer labels in $^+G(\hbox{log}_p\, S)\}$. \noindent If $m$ is the number of nodes with integer labels in $^+G(\hbox{log}_p\, S)$, this augmentation adds $m\choose 2$ edges to the reversed sum graph $^+G(\hbox{log}_p\, S)$. The ARL sum graph is not itself a sum graph. \noindent Augmented reversed logarithmic sum graphs retain all the characteristics of Theorem 3, have the computational advantage of logarithmic sum graphs in handling large data sets, permit the reverse sum graph strategy of integral labelling of the cataloging node, and have the added feature of displaying the complete linkage pattern among cataloging nodes. Linkage patterns emerge both at the local scale and at the more global cataloging scale. \centerline{\bf 8. Cartographic Application of ARL Sum Graphs} The labels of Table 1.1, derived from the Sum Graph Unification Construction, offer a way to organize data and to retrieve predetermined sequential order of information from a jumbled data set. The relative sizes of the weights for the nodes in Table 1.1 are, however, awkward. A simple way to overcome this awkwardness is to take the logarithm of the node weights (to the base of the base multiplier). Thus, in Table 3, column 6 shows the $\hbox{log}_{37}$ of each node weight determined in Table 1.1 (listed in column 5 of Table 3). The constellation number is easily read off as the integral part of the logarithm and all entries for a single constellation are contained within a band of values one unit wide. When the labels are reversed, the integral label corresponds to the cataloging node. This reversed logarithmic sum graph (represented by Table 3) retains the favorable characteristics of Table 1.1 for sorting of data; the node labelling scheme of Table 3 is, however, easy to handle. The augmentation afforded by ARL sum graphs permits significant compression of data, particularly in large data sets, as it retains the favorable characteristics of the reversed logarithmic sum graph noted above. To illustrate this capability, we present the following application. Consider the set of 39 cities and metropolitan regions labelled in Table 1.1. One set of data that is often stored is distances between places (``distance" is used as an example). Generally this set is stored in a square array, or better, sometimes in an upper- or lower-triangular matrix. Sum graphs can reduce greatly the number of entries that need to be stored. Table 4.0 shows a complete set of great-circle distances between metropolitan areas. Each metropolitan area is assigned the latitude and longitude of the city for which it is named. Thus, particular sets of geographic coordinates are viewed simultaneously at two different scales. Tables 4.1 to 4.7 show complete sets of great-circle distances among the cities in each of the seven metropolitan areas (constellations). The distance between Livonia and Scarborough (for example), which does not appear directly in any of the set of Tables in Table 4, may nonetheless be obtained by summing the distances from Livonia to DETROIT, from DETROIT to TORONTO, from TORONTO to Scarborough (Figure 14). The algorithm displayed in Figure 14 shows how to use the reversed logarithmic node label of two arbitrary nodes to determine the distance between them using only the entries in Table 4.0, between metropolitan areas (constellations), and in Tables 4.1-4.7 (showing local linkages within each constellation). The distance so-obtained is not itself a great-circle distance but it may well be a distance more realistically representing current air-travel \hbox{DETROIT}&\longrightarrow &&&&&&&&&\hbox{TORONTO} \cr \Big\uparrow & &&&&&&&&&\Big\downarrow \cr \hbox{Livonia}&\longrightarrow &&&&&&&&&\hbox{Scarborough} \cr \centerline{\bf Figure 14.} \noindent Commutative diagram showing distance calculation scheme using Table 4; algorithm showing how to find distance within Table 4 using the data key provided by the reversed logarithmic sum graph label. \centerline{\bf Algorithm} \noindent\item{1.} Assumption: the cataloging city is also the city with the lowest non-integral label in its constellation. \noindent\item{2.} Find the distance from a city with a node with reversed logarithmic sum graph label $j.x$ to one with label $k.y$, $j\leq k$ (and $x < y$ if $j=k$) \item{a.} If $j=k$, use Table $4.j$ to find the distance from $j.x$ to $j.y$. \item{b.} If $j < k$, \item\item{i.} use Table 4.0 to find distance between cataloging cities $j$ and $k$. \item\item{ii.} use Table $4.j$ to find distance from $j$.lowest to $j.x$. \item\item{iii.} use Table $4.k$ to find distance from $k$.lowest to $k.y$. \noindent Add the results of i, ii, and iii to find the required There are 32 different cities in this example. An upper-triangular 32 by 32 matrix of ${32\choose 2} = 496$ different entries would normally be required to find between-city distances. Using the sum graph method, shown in the algorithm of Figure 14, requires the use of 8 smaller Tables: Table 4.0 for distances between cataloging node cities and Table $4.i$, $1\leq i\leq 7$, for distances of cities in constellation $i$ from cataloging city $i$. The latter procedure, composed of smaller matrices, requires storing (from each matrix) a total of {7\choose 2} + {6\choose 2} + {4\choose 2} + {5\choose 2} + {6\choose 2} + {5\choose 2} + {3\choose 2} + {3\choose 2} $= 21 + 15 + 6 + 10 + 15 + 10 + 3 + 3 = 83$ separate entries. In this case, sum graph methods afford a compression ratio of about 6 to 1 over traditional methods. With larger data sets, the compression ratio becomes much more substantial. Given a data set of 10,000 entries to be partitioned into 100 constellations of 100 entries each, traditional methods using an upper triangular matrix would require that ${10,000\choose 2} = 49,995,000$ entries be stored. Sum graph methods would require storing ${100\choose 2}$ entries for Table 4.0 and ${100\choose 2}$ entries for each of Tables $4.i$, $1 \leq i \leq 100$, for a total of $101 \cdot {100\choose 2} = 499,950$ entries. In this case the compression ratio is 100 to 1. If instead the 10,000 entries are partitioned in a different manner, different compression ratios result. If 1000 constellations of 10 entries each are used, the corresponding compression ratio is 91.8 to 1; if 10 constellations of 1000 each are used, the compression ratio is 10.09 to 1. Clearly the manner in which the partition is selected is important. Larger data sets bring even larger compression ratios: if 1,000,000 data points are considered, and are partitioned into 1000 constellations of 1000 each, the corresponding compression ratio is 1000 to 1. Any process of this sort also needs to accommodate the insertion of new data; when it does so without having to alter existing structure, it is ``dynamic." The Sum Graph Unification Construction is dynamic to an extent. Table 5 shows part of the data set of Table 3 with Ann Arbor added to the Detroit metro area. Only the one constellation needs relabelling; all others remain undisturbed. If, however, enough new entries had been added to force an increase in the prime base multiplier, then a global change would have been required for that single entry (generally easy to achieve electronically). None of the formul{\ae} would have required alteration. ``Dynamic" tables of this sort might see application as on-board mapping systems in cars or buses giving optimum route displays in an interactive mode (so-called IVHS or other commonly-used acronyms). So data becomes accurate more quickly in response to changing traffic patterns transmitted to the vehicle in some sort of interactive fashion. Advances in theory can bring advances in technology to the level of affordable cost and widespread application. The application of sum graphs might be one effort in that direction. \centerline{\bf 9. Summary} We have taken a tool from graph theory and specialized it in a number of directions in order to deal with various types of problems that often arise with data structures. Table 6 organizes these specializations in capsule format. Independent of how the sum graph is specialized to adapt to various difficulties in data management, however, the linkage pattern between nodes in a sum graph is determined by node weight alone, which is derived from whether or not one node is linked to another. There is no reliance on geographic direction or on any sort of other relative ordering based on the underlying space in which the nodes are embedded. Hence, the sum graph data structure has a theoretical base free from directional bias and is perhaps therefore, translation invariant. Determining whether or not this theoretical data structure offers a graphical application at the level of GIS theory--as in the quadtree) that permits translational invariance of the structure (independent of pixel shape) under GIS constraints, appears a significant next step in bringing theory into practice. \centerline{\bf References} \ref Bergstrand, D., F. Harary, K. Hodges, G. Jennings, L. Kuklinski, and J. Wiener. 1989. The sum number of a complete graph. Malaysian Mathematical Society, {\sl Bulletin\/}. Second Series, 12, no. 1, 25-28. \ref Bergstrand, D., F. Harary, K. Hodges, G. Jennings, L. Kuklinski, and J. Wiener. 1992. Product graphs are sum graphs. {\sl Mathematics Magazine\/}, 65, no. 4, 262-264. \ref Harary, F. 1969. {\sl Graph Theory\/}. Reading: Addison-Wesley. \ref Harary, F. 1970. Covering and packing in graphs, I. {\sl Annals\/}, New York Academy of Sciences, 175, 198-205. \ref Harary, F. 1990. Sum graphs and difference graphs. {\sl Congressus Numerantium\/}, 72, 101-108; {\sl Proceedings\/}, of the Twentieth Southeastern Conference on Combinatorics, Graph Theory, and Computing (Boca Raton, FL 1989). \ref Niven, I., and H. S. Zuckerman. 1960. {\sl An Introduction to the Theory of Numbers\/}. New York: Wiley. $^*$ Sandra L. Arlinghaus, Institute of Mathematical Geography, 2790 Briarcliff, Ann Arbor, MI 48105; William C. Arlinghaus, Lawrence Technological University, Southfield, MI 48075 Frank Harary, New Mexico State University, Las Cruces, NM 88003. \centerline{\bf TABLE 1.1:} \centerline{\bf Analysis according to sum graph unification construction} \settabs\+&E. St. Louis\quad &ITUDE\quad &NACCI\quad &MULTI-\quad &FACTORED\quad &474659385665\quad &ORDER \cr \+&City &LONG- &FIBO- &BASE &FACTORED&NODE &RANK \cr \+&Suburb &ITUDE &NACCI &MULTI- &WEIGHT &WEIGHT &ORDER\cr \+&METRO &west &LABEL &PLIER & & & \cr \+&Salem &70 54 &1 &$37$ &$1\cdot 37$ &37 &1 \cr \+&Lynn &70 57 &2 &$37$ &$2\cdot 37$ &74 &2 \cr \+&Quincy &71 00 &3 &$37$ &$3\cdot 37$ &111 &3 \cr \+&Brockton &71 01 &5 &$37$ &$5\cdot 37$ &185 &4 \cr \+&Cambridge &71 07 &8 &$37$ &$8\cdot 37$ &296 &5 \cr \+&Boston &71 07 &13 &$37$ &$13\cdot 37$ &481 &6 \cr \+&BOSTON & &21 &$37$ &$21\cdot 37$ &777 &7 \cr \+&Longueuil &73 30 &1 &$37^2$ &$1\cdot 37^2$ &1369 &8 \cr \+&Verdun &73 34 &2 &$37^2$ &$2\cdot 37^2$ &2738 &9 \cr \+&Montreal &73 35 &3 &$37^2$ &$3\cdot 37^2$ &4107 &10\cr \+&Laval &73 44 &5 &$37^2$ &$5\cdot 37^2$ &6845 &11\cr \+&MONTREAL & &8 &$37^2$ &$8\cdot 37^2$ &10952 &12\cr \+&Camden &75 06 &1 &$37^3$ &$1\cdot 37^3$ &50653 &13\cr \+&Philadelphia &75 13 &2 &$37^3$ &$2\cdot 37^3$ &101306 &14\cr \+&Upper Darby &75 16 &3 &$37^3$ &$3\cdot 37^3$ &151959 &15\cr \+&Norristown &75 21 &5 &$37^3$ &$5\cdot 37^3$ &253265 &16\cr \+&Chester &75 22 &8 &$37^3$ &$8\cdot 37^3$ &405224 &17\cr \+&PHILADELPHIA & &13 &$37^3$ &$13\cdot 37^3$ &658489 &18\cr \+&Scarborough&79 12 &1 &$37^4$ &$1\cdot 37^4$ &1874161 &19\cr \+&Toronto &79 23 &2 &$37^4$ &$2\cdot 37^4$ &3738322 &20\cr \+&North York&79 25 &3 &$37^4$ &$3\cdot 37^4$ &5622483 &21\cr \+&York &79 29 &5 &$37^4$ &$5\cdot 37^4$ &9370805 &22\cr \+&Etobicoke &79 34 &8 &$37^4$ &$8\cdot 37^4$ &14993288 &23\cr \+&Mississauga&79 37 &13 &$37^4$ &$13\cdot 37^4$ &24364093 &24\cr \+&TORONTO & &21 &$37^4$ &$21\cdot 37^4$ &39357381 &25\cr \+&Windsor &83 00 &1 &$37^5$ &$1\cdot 37^5$ &69343957 &26\cr \+&Warren &83 03 &2 &$37^5$ &$2\cdot 37^5$ &138687914 &27\cr \+&Detroit &83 10 &3 &$37^5$ &$3\cdot 37^5$ &208031871 &28\cr \+&Dearborn &83 15 &5 &$37^5$ &$5\cdot 37^5$ &346719785 &29\cr \+&Livonia &83 23 &8 &$37^5$ &$8\cdot 37^5$ &554751656 &30\cr \+&DETROIT & &13 &$37^5$ &$13\cdot 37^5$ &901471441 &31\cr \+&E. St. L. &90 10 &1 &$37^6$ &$1\cdot 37^6$ &2565726409 &32\cr \+&St. Louis &90 15 &2 &$37^6$ &$2\cdot 37^6$ &5131452818 &33\cr \+&Lemay &90 17 &3 &$37^6$ &$3\cdot 37^6$ &7697179227 &34\cr \+&ST. LOUIS & &5 &$37^6$ &$5\cdot 37^6$ &12828632045 &35\cr \+&New Orleans&90 05 &1 &$37^7$ &$1\cdot 37^7$ &94931877133&36\cr \+&Marrero &90 06 &2 &$37^7$ &$2\cdot 37^7$&189863754266 &37\cr \+&Metairie &90 11 &3 &$37^7$ &$3\cdot 37^7$&284795631399 &38\cr \+&NEW ORLEANS& &5 &$37^7$ &$5\cdot 37^7$&474659385665 &39\cr \centerline{\bf TABLE 1.2:} \centerline{\bf Analysis according to sum graph unification construction} \centerline{\bf Two constellations ordered from east to west by longitude} \settabs\+&E. St. Louis\quad &ITUDE\quad &NACCI\quad &MULTI-\quad &FACTORED\quad &474659385665\quad &ORDER \cr \+&City &LONG- &FIBO- &BASE &FACTORED&NODE &RANK \cr \+&Suburb &ITUDE &NACCI &MULTI- &WEIGHT &WEIGHT &ORDER\cr \+&METRO &west &LABEL &PLIER & & & \cr \+&New Orleans &90 05 &1 &$37^7$ &$1\cdot 37^7$ &94931877133 &36\cr \+&NEW ORLEANS & &5 &$37^7$ &$5\cdot 37^7$&474659385665 &39\cr \+&Marrero &90 06 &2 &$37^7$ &$2\cdot 37^7$&189863754266 &37\cr \+&E. St. Louis &90 10 &1 &$37^6$ &$1\cdot 37^6$ &2565726409 &32\cr \+&Metairie &90 11 &3 &$37^7$ &$3\cdot 37^7$&284795631399 &38\cr \+&St. Louis &90 15 &2 &$37^6$ &$2\cdot 37^6$ &5131452818 &33\cr \+&ST. LOUIS & &5 &$37^6$ &$5\cdot 37^6$ &12828632045 &35\cr \+&Lemay &90 17 &3 &$37^6$ &$3\cdot 37^6$ &7697179227 &34\cr \centerline{\bf TABLE 2:} \centerline{\bf Multipliers and their logarithms to the base} \centerline{\bf of the base multiplier of 23} \centerline{\bf for the example of Figure 7.} \qquad\qquad\qquad & Logarithm, base 23 \cr \+&Multiplier &Logarithm, base 23 \cr \+&$1\cdot 23 = 23$ &1\cr \+&$2\cdot 23 = 46$ &1.2210647\cr \+&$3\cdot 23 = 69$ &1.3503793\cr \+&$5\cdot 23 = 115$ &1.5132964\cr \+&$1\cdot 23^2 = 529$ &2\cr \+&$2\cdot 23^2 = 1058$ &2.2210647\cr \+&$3\cdot 23^2 = 1587$ &2.3503793\cr \+&$5\cdot 23^2 = 2645$ &2.5132964\cr \+&$8\cdot 23^2 = 4232$ &2.6631942\cr \+&$1\cdot 23^3 = 12167$ &3\cr \+&$2\cdot 23^3 = 24344$ &3.2210647\cr \+&$3\cdot 23^3 = 36501$ &3.3503793\cr \+&$5\cdot 23^3 = 60835$ &3.5132964\cr \+&$8\cdot 23^3 = 97336$ &3.6631942\cr \+&$13\cdot 23^3 = 158171$ &3.8180367\cr \centerline{\bf TABLE 3:} \centerline{\bf Table 1.1 labelled as a reversed logarithmic sum graph} \settabs\+&E. St. Louis\quad &ITUDE\quad &NACCI\quad &FACTORED\quad &474659385665\quad &ORDER \+&City &LONG- &FIBO- &BASE &FACTORED&NODE &LOG \cr \+&Suburb &ITUDE &NACCI &MULTI- &WEIGHT &WEIGHT&BASE \cr \+&METRO & &LABEL &PLIER & & &37 NODE\cr \+&Salem &70 54 &21 &$37$ &$21\cdot 37$&777&1.746657\cr \+&Lynn &70 57 &13 &$37$ &$13\cdot 37$&481&1.629043\cr \+&Quincy &71 00 &8 &$37$ &$8\cdot 37$&296&1.509974\cr \+&Brockton &71 01 &5 &$37$ &$5\cdot 37$&185&1.394708\cr \+&Cambridge &71 07 &3 &$37$ &$3\cdot 37$&111&1.269430\cr \+&Boston &71 07 &2 &$37$ &$2\cdot 37$&74&1.169991\cr \+&BOSTON & &1 &$37$ &$1\cdot 37$&37&1\cr \+&Longueuil &73 30 &8 &$37^2$ &$8\cdot 37^2$&10952&2.509974\cr \+&Verdun &73 34 &5 &$37^2$ &$5\cdot 37^2$&6845&2.394708\cr \+&Montreal &73 35 &3 &$37^2$ &$3\cdot 37^2$&4107&2.269430\cr \+&Laval &73 44 &2 &$37^2$ &$2\cdot 37^2$&2738&2.169991\cr \+&MONTREAL & &1 &$37^2$ &$1\cdot 37^2$&1369&2\cr \+&Camden &75 06 &13 &$37^3$ &$13\cdot 37^3$&658489&3.629043\cr \+&Phil. &75 13 &8 &$37^3$ &$8\cdot 37^3$&405224&3.509974\cr \+&U. Darby &75 16 &5 &$37^3$ &$5\cdot 37^3$&253265&3.394708\cr \+&Norris. &75 21 &3 &$37^3$ &$3\cdot 37^3$&151959&3.269430\cr \+&Chester &75 22 &2 &$37^3$ &$2\cdot 37^3$&101306&3.169991\cr \+&PHILADELPHIA& &1 &$37^3$ &$1\cdot 37^3$&50653&3\cr \+&Scar. &79 12 &21 &$37^4$ &$21\cdot 37^4$&39357381&4.746657\cr \+&Toronto&79 23 &13 &$37^4$ &$13\cdot 37^4$&24364093&4.629043\cr \+&NYork &79 25 &8 &$37^4$ &$8\cdot 37^4$&14993288&4.509974\cr \+&York &79 29 &5 &$37^4$ &$5\cdot 37^4$&9370805&4.394708\cr \+&Etobicoke&79 34 &3 &$37^4$ &$3\cdot 37^4$&5622483&4.269430\cr \+&Missi. &79 37 &2 &$37^4$ &$2\cdot 37^4$&3748322&4.169991\cr \+&TORONTO & &1 &$37^4$ &$1\cdot 37^4$&1874161&4\cr \+&Wind. &83 00 &13 &$37^5$ &$13\cdot 37^5$&901471441&5.629043\cr \+&Warren &83 03 &8 &$37^5$ &$8\cdot 37^5$&554751656&5.509974\cr \+&Detroit&83 10 &5 &$37^5$ &$5\cdot 37^5$&346719785&5.394708\cr \+&Dearb. &83 15 &3 &$37^5$ &$3\cdot 37^5$&208031871&5.269430\cr \+&Livonia&83 23 &2 &$37^5$ &$2\cdot 37^5$&138687914&5.169991\cr \+&DETROIT& &1 &$37^5$ &$1\cdot 37^5$&69343957&5\cr \+&ESLou&90 10 &5 &$37^6$ &$5\cdot 37^6$&12828632045&6.394708\cr \+&SLou &90 15 &3 &$37^6$ &$3\cdot 37^6$&7697179227&6.269430\cr \+&Lemay&90 17 &2 &$37^6$ &$2\cdot 37^6$&5131452818&6.169991\cr \+&ST. LOUIS& &1 &$37^6$ &$1\cdot 37^6$&2565726409&6\cr \+&NOrl&90 05 &5 &$37^7$ &$5\cdot 37^7$&474659385665&7.394708\cr \+&Marr&90 06 &3 &$37^7$ &$3\cdot 37^7$&284795631399&7.269430\cr \+&Meta&90 11 &2 &$37^7$ &$2\cdot 37^7$&189863754266&7.169991\cr \+&NEW ORLEANS& &1 &$37^7$ &$1\cdot 37^7$&94931877133&7\cr \centerline{\bf TABLE 4.0: Distances between all metro areas} \settabs\+&NEW ORLEANS\quad & BOS\quad & MONT \quad &PHIL \quad &TOR\quad &DET \quad &1034 \quad &1349\cr \+& &BOS &MONT &PHIL &TOR &DET &SL &NO \cr \+&BOSTON &0 &255 &263 &429 &615 &1034 &1349\cr \+&MONTREAL & &0 &388 &312 &523 &974 &1394\cr \+&PHIL & & &0 &331 &444 &808 &1086\cr \+&TORONTO & & & &0 &211 &662 &1112\cr \+&DETROIT & & & & &0 &452 &936 \cr \+&ST LOUIS & & & & & &0 &596 \cr \+&NEW ORLEANS & & & & & & &0 \cr \centerline{\bf TABLE 4.1: Boston-area cities} \settabs\+&Quincy\quad &Salem\quad & Lynn\quad &Quincy \quad &Brock.\quad &Cambr.\quad &Boston\cr \+& &Salem &Lynn &Quincy &Brock. &Cambr. &Boston\cr \+&Salem &0 &4.29 &19.1 &31.6 &15.3 &21.4 \cr \+&Lynn & &0 &15.1 &27.8 &10.2 &17.2 \cr \+&Quincy & & &0 &12.6 &10.9 &5.96 \cr \+&Brock. & & & &0 &22.4 &13.6 \cr \+&Cambr. & & & & &0 &9.21 \cr \+&Boston & & & & & &0 \cr \centerline{\bf TABLE 4.2: Montreal-area cities} \settabs\+&Longueuil\quad &Longue.\quad & Verdun\quad &Laval \quad &Mont.\cr \+& &Longue. &Verdun &Laval &Mont.\cr \+&Longueuil &0 &6.6 &11.3 &4.64 \cr \+&Verdun & &0 &9.29 &3.54 \cr \+&Laval & & &0 &7.35 \cr \+&Montreal & & & &0 \cr \centerline{\bf TABLE 4.3: Philadelphia-area cities} \settabs\+&Philadelphia\quad &Camden\quad & Chester\quad &U. Darby \quad &Norris.\quad &Phila.\cr \+& &Camden &Chester &U Darby &Norris. &Phila.\cr \+&Camden &0 &15.2 &9.12 &18.3 &7.7 \cr \+&Chester & &0 &9.64 &18.4 &13.0 \cr \+&Upper Darby & & &0 &11.2 &3.5 \cr \+&Norristown & & & &0 &10.7 \cr \+&Philadelphia& & & & &0 \cr \centerline{\bf TABLE 4.4: Toronto-area cities} \settabs\+&Mississauga \quad &Scar.\quad & Miss.\quad &N. York \quad &York\quad &Etob.\quad &Tor.\cr \+& &Scar. &Miss. &N. York &York &Etob. &Tor.\cr \+&Scarborough &0 &24.3 &11.0 &14.8 &19.5 &10.8\cr \+&Mississauga & &0 &17.9 &10.4 &6.27 &13.5\cr \+&North York & & &0 &7.66 &11.8 &8.23\cr \+&York & & & &0 &4.75 &5.12\cr \+&Etobicoke & & & & &0 &9.23\cr \+&Toronto & & & & & &0 \cr \centerline{\bf TABLE 4.5: Detroit-area cities} \settabs\+&Dearborn\quad &Windsor\quad &Warren\quad &Dear. \quad &Livonia\quad &Detroit\cr \+& &Windsor &Warren &Dear. &Livonia &Detroit \cr \+&Windsor &0 &16.3 &12.8 &20.7 &9.18 \cr \+&Warren & &0 &20.0 &19.3 &13.9 \cr \+&Dearborn & & &0 &10.5 &6.27 \cr \+&Livonia & & & &0 &11.5 \cr \+&Detroit & & & & &0 \cr \centerline{\bf TABLE 4.6: St. Louis-area cities} \settabs\+&E. St. Louis\quad &E. St. L.\quad & Lemay\quad &St. Louis \cr \+& &E. St. L. &Lemay &St. Louis \cr \+&E. St. Louis&0 &6.29 &4.49 \cr \+&Lemay & &0 &1.79 \cr \+&St. Louis & & &0 \cr \centerline{\bf TABLE 4.7: New Orleans-area cities} \settabs\+&New Orleans\quad & Met.\quad &Mar. \quad &New O.\cr \+& &Met. &Mar. &New O. \cr \+&Metairie &0 &7.61 &5.98 \cr \+&Marrero & &0 &5.84 \cr \+&New Orleans & & &0 \cr \centerline{\bf TABLE 5:} \centerline{\bf New data added --- Ann Arbor} \settabs\+&E. St. Louis\quad &ITUDE\quad &NACCI\quad &FACTORED\quad &474659385665\quad &ORDER \+&City &LONG- &FIBO- &BASE &FACTORED&NODE &LOG \cr \+&Suburb &ITUDE &NACCI &MULTI- &WEIGHT &WEIGHT&BASE \cr \+&METRO AREA & &LABEL &PLIER & & &37 NODE\cr \+&Wind. &83 00 &21 &$37^5$ &$21\cdot 37^5$&1456223097&5.746657\cr \+&Warren &83 03 &13 &$37^5$ &$13\cdot 37^5$&901471441&5.629043\cr \+&Detroit&83 10 &8 &$37^5$ &$8\cdot 37^5$&554751656&5.509974\cr \+&Dearb. &83 15 &5 &$37^5$ &$5\cdot 37^5$&346719785&5.394708\cr \+&Livonia&83 23 &3 &$37^5$ &$3\cdot 37^5$&208031871&5.269430\cr \+&Ann Arbor&83 45 &2&$37^5$ &$2\cdot 37^5$&138687914&5.169991\cr \+&DETROIT & &1 &$37^5$ &$1\cdot 37^5$&69343957&5\cr \+&ESLou&90 10 &5 &$37^6$ &$5\cdot 37^6$&12828632045&6.394708\cr \+&SLou &90 15 &3 &$37^6$ &$3\cdot 37^6$&7697179227&6.269430\cr \+&Lemay&90 17 &2 &$37^6$ &$2\cdot 37^6$&5131452818&6.169991\cr \+&ST. LOUIS& &1 &$37^6$ &$1\cdot 37^6$&2565726409&6\cr \+&NOrl&90 05 &5 &$37^7$ &$5\cdot 37^7$&474659385665&7.394708\cr \+&Marr&90 06 &3 &$37^7$ &$3\cdot 37^7$&284795631399&7.269430\cr \+&Meta&90 11 &2 &$37^7$ &$2\cdot 37^7$&189863754266&7.169991\cr \+&NEW ORLEANS& &1 &$37^7$ &$1\cdot 37^7$&94931877133&7\cr \centerline{\bf TABLE 6:} \centerline{\bf Specializations of sum graphs} \settabs\+\indent\qquad&Augmented reversed logarithmic\qquad\qquad \qquad&intermediate and global scales.\cr \+&Type of graph &Characteristics \cr \+&Sum graph & Variable resolution at \cr \+&(Figure 7) &local and global scales, only. \cr \+& &Shape, size, and connection \cr \+& &pattern of parts to whole \cr \+& &suggested by global label. \cr \+&Sum graph with base multiplier &Variable resolution at \cr \+&(Figure 9) &intermediate and global scales.\cr \+& &Relative shape, size, and \cr \+& &connection pattern of parts \cr \+& &to whole suggested by multiple \cr \+& &labels associated with split \cr \+& &regions. \cr \+&Logarithmic sum graph &Confines sum graph labels to \cr \+&(Figure 10) &a single unit for each \cr \+& &subgraph. Deals well with \cr \+& &split regions; is not itself \cr \+& &a sum graph. Label on \cr \+& &cataloging node suggests \cr \+& &relative shape, size, and \cr \+& &connection pattern of parts \cr \+& &to the whole \cr \+&Reversed sum graph &Not itself a sum graph. Sole \cr \+&(Figure 11) &function is to assign an \cr \+& &integral value to the \cr \+& &cataloging node of each \cr \+& &subgraph. \cr \+&Augmented reversed logarithmic & Combines characteristics \cr \+&\quad sum graph &of logarithmic and reversed \cr \+&(Figure 13) &sum graphs. Added edges \cr \+& &join cataloging nodes. \cr \+& &Linkage patterns are \cr \+& &suggested at local, \cr \+& &intermediate, and global \cr \+& &levels of resolution. \cr \centerline{\bf 5. SAMPLE OF HOW TO DOWNLOAD THE ELECTRONIC FILE} \centerline{\bf BACK ISSUES OF {\sl SOLSTICE\/} NOW AVAILABLE ON FTP} \noindent This section shows the exact set of commands that work to download {\sl Solstice\/} on The University of Michigan's Xerox 9700. Because different universities will have different installations of {\TeX}, this is only a rough guideline which {\sl might\/} be of use to the reader. (BACK ISSUES AVAILABLE using anonymous ftp to open um.cc.umich.edu, account GCFS; type cd GCFS after entering system; then type ls to get a directory; then type get solstice.190 (for example) and download it or read it according to local constraints.) Back issues will be available on this account; this account is ONLY for back issues; to write Solstice, send e-mail to Solstice@UMICHUM.bitnet or to Solstice@um.cc.umich.edu . Issues from this one forward are available on FTP on account IEVG (substitute IEVG for GCFS above). First step is to concatenate the files you received via bitnet/internet. Simply piece them together in your computer, one after another, in the order in which they are numbered, starting with the number, ``1." The files you have received are ASCII files; the concatenated file is used to form the .tex file from which the .dvi file (device independent) file is formed. The words ``percent-sign" and ``backslash" are written out in the example below; the user should type them symbolically. ASSUME YOU HAVE SIGNED ON AND ARE AT THE SYSTEM PROMPT, \#. \# create -t.tex \# percent-sign t from pc c:backslash words backslash solstice.tex to mts -t.tex char notab (this command sends my file, solstice.tex, which I did as a WordStar (subdirectory, ``words") ASCII file to the \# run *tex par=-t.tex (there may be some underfull (or certain over) boxes that generally cause no problem; there should be no other ``error" messages in the typesetting--the files you receive were already tested.) \# run *dvixer par=-t.dvi \# control *print* onesided \# run *pagepr scards=-t.xer, par=paper=plain \centerline{\bf 6. SOLSTICE--INDEX, VOLUMES I, II, AND III} \noindent {\bf Volume III, Number 2, Winter, 1992} \noindent {\bf 1.} A Word of Welcome from A to U. \noindent {\bf 2.} Press clippings--summary. \noindent {\bf 3.} Reprints: \noindent {\bf A.} What Are Mathematical Models and What Should They Be? by Frank Harary, reprinted from {\sl Biometrie - \smallskip \noindent {\sl 1. What Are They? 2. Two Worlds: Abstract and Empirical 3. Two Worlds: Two Levels 4. Two Levels: Derviation and Selection 5. Research Schema 6. Sketches of Discovery 7. What Should They Be? {\bf B.} Where Are We? Comments on the Concept of Center of Population, by Frank E. Barmore, reprinted from {\sl The Wisconsin Geographer\/}. \smallskip \noindent {\sl 1. Introduction 2. Preliminary Remarks 3. Census Bureau Center of Population Formul{\ae} 4. Census Bureau Center of Population Description 5. Agreement Between Description and Formul{\ae} 6. Proposed Definition of the Center of Population 7. Summary 8. Appendix A 9. Appendix B 10. References \noindent {\bf 4.} Article: The Pelt of the Earth: An Essay on Reactive Diffusion, by Sandra L. Arlinghaus and John D. Nystuen. \smallskip \noindent {\sl 1. Pattern Formation: Global Views 2. Pattern Formation: Local Views 3. References Cited 4. Literature of Apparent Related Interest. \noindent {\bf 5.} Feature Meet new{\sl Solstice\/} Board Member William D. Drake; comments on course in Transition Theory and listing of student-produced monograph. \noindent {\bf 6.} Downloading of Solstice. \noindent {\bf 7.} Index to Solstice. \noindent {\bf 8.} Other Publications of IMaGe. \noindent {\bf Volume III, Number 1, Summer, 1992} \noindent{\bf 1. ARTICLES.} {\bf Harry L. Stern}. {\bf Computing Areas of Regions With Discretely Defined Boundaries}. 1. Introduction 2. General Formulation 3. The Plane 4. The Sphere 5. Numerical Example and Remarks. Appendix--Fortran Program. \noindent{\bf 2. NOTE } {\bf Sandra L. Arlinghaus, John D. Nystuen, Michael J. Woldenberg}. {\bf The Quadratic World of Kinematic Waves} \noindent{\bf 3. SOFTWARE REVIEW} RangeMapper$^{\hbox{TM}}$ --- version 1.4. Created by {\bf Kenelm W. Philip}, Tundra Vole Software, Fairbanks, Alaska. Program and Manual by {\bf Kenelm W. Philip}. Reviewed by {\bf Yung-Jaan Lee}, University of Michigan. \noindent{\bf 4. PRESS CLIPPINGS} \noindent{\bf 5. INDEX to Volumes I (1990) and II (1991) of {\sl Solstice}.} \noindent {\bf Volume II, Number 1, Summer, 1991} \noindent 1. ARTICLE Sandra L. Arlinghaus, David Barr, John D. Nystuen. {\sl The Spatial Shadow: Light and Dark --- Whole and Part\/} This account of some of the projects of sculptor David Barr attempts to place them in a formal, systematic, spatial setting based on the postulates of the science of space of William Kingdon Clifford (reprinted in {\sl Solstice\/}, Vol. I, No. 1.). \noindent 2. FEATURES \item{i} Construction Zone --- The logistic curve. \item{ii.} Educational feature --- Lectures on ``Spatial Theory" \noindent {\bf Volume II, Number 2, Winter, 1991} \noindent 1. REPRINT Saunders Mac Lane, ``Proof, Truth, and Confusion." Given as the Nora and Edward Ryerson Lecture at The University of Chicago in 1982. Republished with permission of The University of Chicago and of the author. I. The Fit of Ideas. II. Truth and Proof. III. Ideas and Theorems. IV. Sets and Functions. V. Confusion via Surveys. VI. Cost-benefit and Regression. VII. Projection, Extrapolation, and Risk. VIII. Fuzzy Sets and Fuzzy Thoughts. IX. Compromise is Confusing. \noindent 2. ARTICLE Robert F. Austin. ``Digital Maps and Data Bases: Aesthetics versus Accuracy." I. Introduction. II. Basic Issues. III. Map Production. IV. Digital Maps. V. Computerized Data Bases. VI. User \noindent 3. FEATURES Press clipping; Word Search Puzzle; Software Briefs. \noindent{\bf INDEX to Volume I (1990) of {\sl Solstice}.} \noindent{\bf Volume I, Number 1, Summer, 1990} \noindent 1. REPRINT William Kingdon Clifford, {\sl Postulates of the Science of Space\/} This reprint of a portion of Clifford's lectures to the Royal Institution in the 1870's suggests many geographic topics of concern in the last half of the twentieth century. Look for connections to boundary issues, to scale problems, to self- similarity and fractals, and to non-Euclidean geometries (from those based on denial of Euclid's parallel postulate to those based on a sort of mechanical ``polishing"). What else did, or might, this classic essay foreshadow? \noindent 2. ARTICLES. Sandra L. Arlinghaus, {\sl Beyond the Fractal.} An original article. The fractal notion of self-similarity is useful for characterizing change in scale; the reason fractals are effective in the geometry of central place theory is because that geometry is hierarchical in nature. Thus, a natural place to look for other connections of this sort is to other geographical concepts that are also hierarchical. Within this fractal context, this article examines the case of spatial When the idea of diffusion is extended to see ``adopters" of an innovation as ``attractors" of new adopters, a Julia set is introduced as a possible axis against which to measure one class of geographic phenomena. Beyond the fractal context, fractal concepts, such as ``compression" and ``space-filling" are considered in a broader graph-theoretic setting. William C. Arlinghaus, {\sl Groups, Graphs, and God} An original article based on a talk given before a MIdwest GrapH TheorY (MIGHTY) meeting. The author, an algebraic graph theorist, ties his research interests to a broader philosophical realm, suggesting the breadth of range to which algebraic structure might be applied. The fact that almost all graphs are rigid (have trivial automorphism groups) is exploited to argue probabilistically for the existence of God. This is presented with the idea that applications of mathematics need not be limited to scientific \noindent 3. FEATURES \item{i.} Theorem Museum --- Desargues's Two Triangle Theorem from projective geometry. \item{ii.} Construction Zone --- a centrally symmetric hexagon is derived from an arbitrary convex hexagon. \item{iii.} Reference Corner --- Point set theory and topology. \item{iv.} Educational Feature --- Crossword puzzle on spices. \item{v.} Solution to crossword puzzle. \noindent 4. SAMPLE OF HOW TO DOWNLOAD THE ELECTRONIC FILE \noindent{\bf Volume I, Number 2, Winter, 1990} \noindent 1. REPRINT John D. Nystuen (1974), {\sl A City of Strangers: Spatial Aspects of Alienation in the Detroit Metropolitan Region\/}. This paper examines the urban shift from ``people space" to ``machine space" (see R. Horvath, {\sl Geographical Review\/}, April, 1974) in the Detroit metropolitan region of 1974. As with Clifford's {\sl Postulates\/}, reprinted in the last issue of {\sl Solstice\/}, note the timely quality of many of the \noindent 2. ARTICLES Sandra Lach Arlinghaus, {\sl Scale and Dimension: Their Logical Linkage between scale and dimension is made using the Fallacy of Division and the Fallacy of Composition in a fractal Sandra Lach Arlinghaus, {\sl Parallels between Parallels\/}. The earth's sun introduces a symmetry in the perception of its trajectory in the sky that naturally partitions the earth's surface into zones of affine and hyperbolic geometry. The affine zones, with single geometric parallels, are located north and south of the geographic parallels. The hyperbolic zone, with multiple geometric parallels, is located between the geographic tropical parallels. Evidence of this geometric partition is suggested in the geographic environment --- in the design of houses and of gameboards. Sandra L. Arlinghaus, William C. Arlinghaus, and John D. Nystuen. {\sl The Hedetniemi Matrix Sum: A Real-world Application\/}. In a recent paper, we presented an algorithm for finding the shortest distance between any two nodes in a network of $n$ nodes when given only distances between adjacent nodes [Arlinghaus, Arlinghaus, Nystuen, {\sl Geographical Analysis\/}, 1990]. In that previous research, we applied the algorithm to the generalized road network graph surrounding San Francisco Bay. Here, we examine consequent changes in matrix entires when the underlying adjacency pattern of the road network was altered by the 1989 earthquake that closed the San Francisco --- Oakland Bay Bridge. Sandra Lach Arlinghaus, {\sl Fractal Geometry of Infinite Pixel Sequences: ``Su\-per\--def\-in\-i\-tion" Resolution\/}? Comparison of space-filling qualities of square and hexagonal \noindent 3. FEATURES \item{i.} Construction Zone --- Feigenbaum's number; a triangular coordinatization of the Euclidean plane. \item{ii.} A three-axis coordinatization of the plane. \centerline{\bf 7. OTHER PUBLICATIONS OF IMaGe} \centerline{\bf MONOGRAPH SERIES} \centerline{Scholarly Monographs--Original Material, refereed} Prices exclusive of shipping and handling; payable in U.S. funds on a U.S. bank, only. All monographs are \$15.95, except \#12 which is \$39.95. Monographs are printed by Digicopy. 1. Sandra L. Arlinghaus and John D. Nystuen. Mathematical Geography and Global Art: the Mathematics of David Barr's ``Four Corners Project,'' 1986. This monograph contains Nystuen's calculations, actually used by Barr to position his abstract tetrahedral sculpture within the earth. Placement of the sculpture vertices in Easter Island, South Africa, Greenland, and Indonesia was chronicled in film by The Archives of American Art for The Smithsonian Institution. In addition to the archival material, this monograph also contains Arlinghaus's solutions to broader theoretical questions --- was Barr's choice of a tetrahedron unique within his initial constraints, and, within the set of Platonic solids? 2. Sandra L. Arlinghaus. Down the Mail Tubes: the Pressured Postal Era, 1853-1984, 1986. The history of the pneumatic post, in Europe and in the United States, is examined for the lessons it might offer to the technological scenes of the late twentieth century. As Sylvia L. Thrupp, Alice Freeman Palmer Professor Emeritus of History, The University of Michigan, commented in her review of this work ``Such brief comment does far less than justice to the intelligence and the stimulating quality of the author's writing, or to the breadth of her reading. The detail of her accounts of the interest of American private enterprise, in New York and other large cities on this continent, in pushing for construction of large tubes in systems to be leased to the government, brings out contrast between American and European views of how the new technology should be managed. This and many other sections of the monograph will set readers on new tracks of thought.'' 3. Sandra L. Arlinghaus. Essays on Mathematical Geography, A collection of essays intended to show the range of power in applying pure mathematics to human systems. There are two types of essay: those which employ traditional mathematical proof, and those which do not. As mathematical proof may itself be regarded as art, the former style of essay might represent ``traditional'' art, and the latter, ``surrealist'' art. Essay titles are: ``The well-tempered map projection,'' ``Antipodal graphs,'' ``Analogue clocks,'' ``Steiner transformations,'' ``Concavity and urban settlement patterns,'' ``Measuring the vertical city,'' ``Fad and permanence in human systems,'' ``Topological exploration in geography,'' ``A space for thought,'' and ``Chaos in human systems--the Heine-Borel Theorem.'' 4. Robert F. Austin, A Historical Gazetteer of Southeast Asia, Dr. Austin's Gazetteer draws geographic coordinates of Southeast Asian place-names together with references to these place-names as they have appeared in historical and literary documents. This book is of obvious use to historians and to historical geographers specializing in Southeast Asia. At a deeper level, it might serve as a valuable source in establishing place-name linkages which have remained previously unnoticed, in documents describing trade or other communications connections, because of variation in place-name nomenclature. 5. Sandra L. Arlinghaus, Essays on Mathematical Geography--II, Written in the same format as IMaGe Monograph \#3, that seeks to use ``pure'' mathematics in real-world settings, this volume contains the following material: ``Frontispiece -- the Atlantic Drainage Tree,'' ``Getting a Handel on Water-Graphs,'' ``Terror in Transit: A Graph Theoretic Approach to the Passive Defense of Urban Networks,'' ``Terrae Antipodum,'' ``Urban Inversion,'' ``Fractals: Constructions, Speculations, and Concepts,'' ``Solar Woks,'' ``A Pneumatic Postal Plan: The Chambered Interchange and ZIPPR Code,'' ``Endpiece.'' 6. Pierre Hanjoul, Hubert Beguin, and Jean-Claude Thill, Theoretical Market Areas Under Euclidean Distance, 1988. (English language text; Abstracts written in French and in English.) Though already initiated by Rau in 1841, the economic theory of the shape of two-dimensional market areas has long remained concerned with a representation of transportation costs as linear in distance. In the general gravity model, to which the theory also applies, this corresponds to a decreasing exponential function of distance deterrence. Other transportation cost and distance deterrence functions also appear in the literature, however. They have not always been considered from the viewpoint of the shape of the market areas they generate, and their disparity asks the question whether other types of functions would not be worth being investigated. There is thus a need for a general theory of market areas: the present work aims at filling this gap, in the case of a duopoly competing inside the Euclidean plane endowed with Euclidean (Bien qu'\'ebauch\'ee par Rau d\`es 1841, la th\'eorie \'economique de la forme des aires de march\'e planaires s'est longtemps content\'ee de l'hypoth\`ese de co\^uts de transport proportionnels \`a la distance. Dans le mod\`ele gravitaire g\'en\'eralis\'e, auquel on peut \'etendre cette th\'eorie, ceci correspond au choix d'une exponentielle d\'ecroissante comme fonction de dissuasion de la distance. D'autres fonctions de co\^ut de transport ou de dissuasion de la distance apparaissent cependant dans la litt\'erature. La forme des aires de march\'e qu'elles engendrent n'a pas toujours \'et\'e \'etudi\'ee ; par ailleurs, leur vari\'et\'e am\`ene \`a se demander si d'autres fonctions encore ne m\'eriteraient pas d'\^etre examin\'ees. Il para\^it donc utile de disposer d'une th\'eorie g\'en\'erale des aires de march\'e : ce \`a quoi s'attache ce travail en cas de duopole, dans le cadre du plan euclidien muni d'une distance 7. Keith J. Tinkler, Editor, Nystuen---Dacey Nodal Analysis, Professor Tinkler's volume displays the use of this graph theoretical tool in geography, from the original Nystuen --- Dacey article, to a bibliography of uses, to original uses by Tinkler. Some reprinted material is included, but by far the larger part is of previously unpublished material. (Unless otherwise noted, all items listed below are previously unpublished.) Contents: `` `Foreward' " by Nystuen, 1988; ``Preface" by Tinkler, 1988; ``Statistics for Nystuen --- Dacey Nodal Analysis," by Tinkler, 1979; Review of Nodal Analysis literature by Tinkler (pre--1979, reprinted with permission; post---1979, new as of 1988); FORTRAN program listing for Nodal Analysis by Tinkler; ``A graph theory interpretation of nodal regions'' by John D. Nystuen and Michael F. Dacey, reprinted with permission, 1961; Nystuen---Dacey data concerning telephone flows in Washington and Missouri, 1958, 1959 with comment by Nystuen, 1988; ``The expected distribution of nodality in random (p, q) graphs and multigraphs,'' by Tinkler, 1976. 8. James W. Fonseca, The Urban Rank--size Hierarchy: A Mathematical Interpretation, 1989. The urban rank--size hierarchy can be characterized as an equiangular spiral of the form $r=ae^{\theta \, \hbox{cot}\alpha}$. An equiangular spiral can also be constructed from a Fibonacci sequence. The urban rank--size hierarchy is thus shown to mirror the properties derived from Fibonacci characteristics such as rank--additive properties. A new method of structuring the urban rank--size hierarchy is explored which essentially parallels that of the traditional rank--size hierarchy below rank 11. Above rank 11 this method may help explain the frequently noted concavity of the rank--size distribution at the upper levels. The research suggests that the simple rank--size rule with the exponent equal to 1 is not merely a special case, but rather a theoretically justified norm against which deviant cases may be measured. The spiral distribution model allows conceptualization of a new view of the urban rank--size hierarchy in which the three largest cities share functions in a Fibonacci hierarchy. 9. Sandra L. Arlinghaus, An Atlas of Steiner Networks, 1989. A Steiner network is a tree of minimum total length joining a prescribed, finite, number of locations; often new locations are introduced into the prescribed set to determine the minimum tree. This Atlas explains the mathematical detail behind the Steiner construction for prescribed sets of $n$ locations and displays the steps, visually, in a series of Figures. The proof of the Steiner construction is by mathematical induction, and enough steps in the early part of the induction are displayed completely that the reader who is well--trained in Euclidean geometry, and familiar with concepts from graph theory and elementary number theory, should be able to replicate the constructions for full as well as for degenerate Steiner trees. 10. Daniel A. Griffith, Simulating $K=3$ Christaller Central Place Structures: An Algorithm Using A Constant Elasticity of Substitution Consumption Function, 1989. An algorithm is presented that uses BASICA or GWBASIC on IBM compatible machines. This algorithm simulates Christaller $K=3$ central place structures, for a four--level hierarchy. It is based upon earlier published work by the author. A description of the spatial theory, mathematics, and sample output runs appears in the monograph. A digital version is available from the author, free of charge, upon request; this request must be accompanied by a 5.25--inch formatted diskette. This algorithm has been developed for use in Social Science classroom laboratory situations, and is designed to (a) cultivate a deeper understanding of central place theory, (b) allow parameters of a central place system to be altered and then graphic and tabular results attributable to these changes viewed, without experiencing the tedium of massive calculations, and (c) help promote a better comprehension of the complex role distance plays in the space--economy. The algorithm also should facilitate intensive numerical research on central place structures; it is expected that even the sample simulation results will reveal interesting insights into abstract central place theory. The background spatial theory concerns demand and competition in the space--economy; both linear and non--linear spatial demand functions are discussed. The mathematics is concerned with (a) integration of non--linear spatial demand cones on a continuous demand surface, using a constant elasticity of substitution consumption function, (b) solving for roots of polynomials, (c) numerical approximations to integration and root extraction, (d) multinomial discriminant function classification of commodities into central place hierarchy levels. Sample output is presented for contrived data sets, constructed from artificial and empirical information, with the wide range of all possible central place structures being generated. These examples should facilitate implementation testing. Students are able to vary single or multiple parameters of the problem, permitting a study of how certain changes manifest themselves within the context of a theoretical central place structure. Hierarchical classification criteria may be changed, demand elasticities may or may not vary and can take on a wide range of non--negative values, the uniform transport cost may be set at any positive level, assorted fixed costs and variable costs may be introduced, again within a rich range of non--negative possibilities, and the number of commodities can be altered. Directions for algorithm execution are summarized. An ASCII version of the algorithm, written directly from GWBASIC, is included in an appendix; hence, it is free of typing errors. 11. Sandra L. Arlinghaus and John D. Nystuen, Environmental Effects on Bus Durability, 1990. This monograph draws on the authors' previous publications on ``Climatic" and ``Terrain" effects on bus durability. Material on these two topics is selected, and reprinted, from three published papers that appeared in the {\sl Transportation Research Record\/} and in the {\sl Geographical Review\/}. New material concerning ``congestion" effects is examined at the national level, to determine ``dense," ``intermediate," and ``sparse" classes of congestion, and at the local level of congestion in Ann Arbor (as suggestive of how one might use local data). This material is drawn together in a single volume, along with a summary of the consequences of all three effects simultaneously, in order to suggest direction for more highly automated studies that should follow naturally with the release of the 1990 U. S. Census data. 12. Daniel A. Griffith, Editor. Spatial Statistics: Past, Present, and Future, 1990. Proceedings of a Symposium of the same name held at Syracuse University in Summer, 1989. Content includes a Preface by Griffith and the following papers: Brian Ripley, ``Gibbsian interaction models"; J. Keith Ord, ``Statistical methods for point pattern data"; Luc Anselin, ``What is special about spatial data"; Robert P. Haining, ``Models in human geography: problems in specifying, estimating, and validating models for spatial data"; R. J. Martin, ``The role of spatial statistics in geographic modelling"; Daniel Wartenberg, ``Exploratory spatial analyses: outliers, leverage points, and influence functions"; J. H. P. Paelinck, ``Some new estimators in spatial econometrics"; Daniel A. Griffith, ``A numerical simplification for estimating parameters of spatial autoregressive models"; Kanti V. Mardia, ``Maximum likelihood estimation for spatial models"; Ashish Sen, ``Distribution of spatial correlation statistics"; Sylvia Richardson, ``Some remarks on the testing of association between spatial Graham J. G. Upton, ``Information from regional data"; Patrick Doreian, ``Network autocorrelation models: problems and prospects." Each chapter is preceded by an ``Editor's Preface" and followed by a Discussion and, in some cases, by an author's Rejoinder to the Discussion. 13. Sandra L. Arlinghaus, Editor. Solstice --- I, 1990. 14. Sandra L. Arlinghaus, Essays on Mathematical Geography --- III, 1991. A continuation of the series. Essays in this volume are: Table for central place fractals; Tiling according to the ``Administrative" Principle; Moir\'e maps; Triangle partitioning; An enumeration of candidate Steiner networks; A topological generation gap; Synthetic centers of gravity: A conjecture. 15. Sandra L. Arlinghaus, Editor, Solstice --- II, 1991. 16. Sandra L. Arlinghaus, Editor, Solstice --- III, 1992. Editor, Daniel A. Griffith Professor of Geography Syracuse University 1. Spatial Regression Analysis on the PC: Spatial Statistics Using Minitab. 1989. Cost: \$12.95, hardcopy. Editor of MICMG Series, John D. Nystuen Professor of Geography and Urban Planning The University of Michigan 1. Reprint of the Papers of the Michigan InterUniversity Community of Mathematical Geographers. Editor, John D. Nystuen. Cost: \$39.95, hardcopy. Contents--original editor: John D. Nystuen. 1. Arthur Getis, ``Temporal land use pattern analysis with the use of nearest neighbor and quadrat methods." July, 1963 2. Marc Anderson, ``A working bibliography of mathematical geography." September, 1963. 3. William Bunge, ``Patterns of location." February, 1964. 4. Michael F. Dacey, ``Imperfections in the uniform plane." June, 1964. 5. Robert S. Yuill, A simulation study of barrier effects in spatial diffusion problems." April, 1965. 6. William Warntz, ``A note on surfaces and paths and applications to geographical problems." May, 1965. 7. Stig Nordbeck, ``The law of allometric growth." June, 1965. 8. Waldo R. Tobler, ``Numerical map generalization;" and Waldo R. Tobler, ``Notes on the analysis of geographical distributions." January, 1966. 9. Peter R. Gould, ``On mental maps." September, 1966. 10. John D. Nystuen, ``Effects of boundary shape and the concept of local convexity;" Julian Perkal, ``On the length of empirical curves;" and Julian Perkal, ``An attempt at objective generalization." December, 1966. 11. E. Casetti and R. K. Semple, ``A method for the stepwise separation of spatial trends." April, 1968. 12. W. Bunge, R. Guyot, A. Karlin, R. Martin, W. Pattison, W. Tobler, S. Toulmin, and W. Warntz, ``The philosophy of maps." June, 1968. Reprints of out-of-print textbooks. Printer and obtainer of copyright permission: Digicopy Corp. Inquire for cost of reproduction---include class size 1. Allen K. Philbrick. This Human World. 2. John F. Kolars and John D. Nystuen. Human Geography. Publications of the Institute of Mathematical Geography have been reviewed or noted in 1. The Professional Geographer published by the Association of American Geographers; 2. The Urban Specialty Group Newsletter of the Association of American Geographers; 3. Mathematical Reviews published by the American Mathematical Society; 4. The American Mathematical Monthly published by the Mathematical Association of America; 5. Zentralblatt fur Mathematik, Springer-Verlag, Berlin 6. Mathematics Magazine, published by the Mathematical Association of America. 7. Science, American Association for the Advancement of Science 8. Science News. 9. Harvard Technology Window. 10. Graduating Engineering Magazine. 11. Newsletter of the Association of American Geographer. 12. Journal of The Regional Science Association.
{"url":"http://www-personal.umich.edu/~copyrght/image/solstice/sols193.html","timestamp":"2014-04-19T09:34:22Z","content_type":null,"content_length":"224209","record_id":"<urn:uuid:8fe3d17e-bc9b-42bc-9835-35b2ee2272b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Canonical bases and KLR-algebras Seminar Room 1, Newton Institute We'll explain how Khovanov-Lauda-Algebras categorify the canonical basis of the negative part of the quantum enveloping algebra, and we'll give some motivation for such constructions which come from Cherednik algebras. The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/ALT/seminars/2009062511301.html","timestamp":"2014-04-21T09:43:06Z","content_type":null,"content_length":"5919","record_id":"<urn:uuid:20e0bbf1-40ff-40a4-819f-557b8de8603f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
On A 1287 Mile Long Road Trip In Windy Conditions, ... | Chegg.com hopefully this is the last question Image text transcribed for accessibility: On a 1287 mile long road trip in windy conditions, a family calculates their fuel economy to be 18 miles per gallon over the 2 days it takes to reach their destination. On their 2-day return trip, with a similar wind at their backs, their calculations show a fuel economy of 26 miles per gallon. Calculate the expected fuel economy of their vehicle in ideal (non-windy) conditions (f) as well as the influence of the wind (w) in miles per gallon. compare how many gallons of fuel they used for the round trip to the expected usage in ideal (non-windy) travel conditions. Which uses more? By how many gallons? )Round to the nearest hundredth if necessary. Show all your work gallons there: mi =71.5 gallons return: mi =49.5 gallons windy Taip: Calculate gallons each way mi = gallons for ideal Trip:
{"url":"http://www.chegg.com/homework-help/questions-and-answers/1287-mile-long-road-trip-windy-conditions-family-calculates-fuel-economy-18-miles-per-gall-q3992779","timestamp":"2014-04-20T15:05:20Z","content_type":null,"content_length":"21298","record_id":"<urn:uuid:685ee02f-27ea-495f-bdcd-14ac7951de90>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamic Multi-Period Routing With Two Classes • In the Dynamic Multi-Period Routing Problem, one is given a new set of requests at the beginning of each time period. The aim is to assign requests to dates such that all requests are fulfilled by their deadline and such that the total cost for fulling the requests is minimized. We consider a generalization of the problem which allows two classes of requests: The 1st class requests can only be fulfilled by the 1st class server, whereas the 2nd class requests can be fulfilled by either the 1st or 2nd class server. For each tour, the 1st class server incurs a cost that is alpha times the cost of the 2nd class server, and in each period, only one server can be used. At the beginning of each period, the new requests need to be assigned to service dates. The aim is to make these assignments such that the sum of the costs for all tours over the planning horizon is minimized. We study the problem with requests located on the nonnegative real line and prove that there cannot be a deterministic online algorithm with a competitive ratio better than alpha. However, if we require the difference between release and deadline date to be equal for all requests, we can show that there is a min{2*alpha, 2 + 2/alpha}-competitive algorithm.
{"url":"https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2228","timestamp":"2014-04-18T06:14:40Z","content_type":null,"content_length":"19293","record_id":"<urn:uuid:7734177e-c0ab-443f-b4c0-132bbb9e3732>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
proof about a subset October 12th 2009, 09:10 AM #1 Mar 2009 proof about a subset Prove (A union B) intersect C is a subset of A union (B intersect C) x is an element of (A union B) intersect C. Thus, x is an element of A intersect B or an element of B intersect C or an element of A intersect B intersect C. If x is an element of A intersect B then x is an element of A union (B intersect C). If x is an element of B intersect C then x is an element of A union (B intersect C). If x is an element of A intersect B intersect C then x has to be an element of A union (B intersect C). Therefore, x will always be an element of A union (B intersect C). I feel like the logic is correct but i may have skipped some steps. Any comments would be great. $\begin{gathered}<br /> x \in \left[ {\left( {A \cup B} \right) \cap C} \right] \hfill \\<br /> x \in (A \cup B) \wedge x \in C \hfill \\<br /> \left[ {x \in A \vee x \in B} \right] \wedge x \in C \hfill \\<br /> \left[ {\left( {x \in A \wedge x \in C} \right) \vee \left( {x \in B \wedge x \in C} \right)} \right] \hfill \\ <br /> \end{gathered}$ October 12th 2009, 09:21 AM #2
{"url":"http://mathhelpforum.com/discrete-math/107550-proof-about-subset.html","timestamp":"2014-04-20T23:51:24Z","content_type":null,"content_length":"34527","record_id":"<urn:uuid:2aaf9b9e-888f-4faa-8b9a-e6818d9b432a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Time to pregnancy: a probabilistic method for using the duration of non-conception in fertility assessment Sozou, Peter D. (2010) Time to pregnancy: a probabilistic method for using the duration of non-conception in fertility assessment. Human Reproduction, 25 (6). ISSN 0268-1161 Full text not available from this repository. Introduction: An important problem in reproductive medicine is determining when people who have failed to achieve a pregnancy without medical assistance should begin investigation and treatment. This study provides a firm theoretical basis for determining what can be deduced about a couple's fertility from the period of time over which they have been trying to conceive. The presentation will provide important insights into the role of probability in the conception process. Material & Methods: The starting point is that there is some uncertainty in a couple's likelihood of conceiving in each cycle. This is modelled as a probability distribution. As the number of cycles increases, if a couple have not yet conceived it is progressively more likely that they are at the low fertility end of the distribution. In other words the distribution changes so that there is relatively more weight at the lower fertility end and less at the higher fertility end. A mathematical method known as Bayes' Theorem enables the probability distribution to be updated. From the original probability distribution, and the number of cycles of non-conception, it is possible to obtain a posterior distribution for the couple's fertility. This in turn enables calculation of the number of cycles of non-conception before various subfertility metrics are reached. A typical such metric is that the probability of conception on the next cycle falls below a given threshold such as 10% or 5%. A computer program has been written in C to put these methods into effect. Results: Four different prior distributions are considered. Three are derived from fits to time-to-conception data for different populations, and a fourth is a hypothetical example representing a mixture of a fertile and a subfertile population. It is found that the number of cycles of non-conception for a given metric to be reached varies somewhat between different populations. For example, the number of cycles before the probability of conception in the next cycle falls below 10% ranges from 8 to 15 among the four examples. It is also found that the predicted conception pattern over time is generally not highly sensitive to the form of distribution fitted to a given dataset. If there are enough data points, these constrain the fitted distribution in such a way that it produces a similar conception pattern when different parametric forms of distribution are used. Results will be presented for beta, triangular and compressed beta distributions. Conclusions: The analysis described in this study illustrates how the duration of non-conception influences a couple's probability of conceiving. A computer program using these methods yields results that appear to be robust. This approach has the potential to contribute to decision support tools which make use of the duration of non-conception for fertility assessment. Actions (login required) Record administration - authorised staff only
{"url":"http://eprints.lse.ac.uk/29697/","timestamp":"2014-04-16T21:55:11Z","content_type":null,"content_length":"26959","record_id":"<urn:uuid:b445163e-e1c3-4a2c-a8d0-4a68b0b23742>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Inequality with logarithm, help needed August 14th 2010, 07:18 AM #1 Aug 2010 Inequality with logarithm, help needed $8n^2 < 64n* log_2 n$ So I continue: $n< 8* log_2 n$ $n< log_2 n^8$ $2^n < n^8$ Not sure if this is correct procedure at all. Anyway, I'm stuck here... What are you trying to do here? Are you trying to solve this inequality for $n$? Or are you trying to prove that this statement is true for all $n$? I'm trying to find the values of n, where the statement is true. So the answer would be in the form: n < some number I did set up a table in excel. I know the answer is between 43 and 44. That is for n= 43, 2^n is smaller. But for n=44, n^8 is smallest. But would like to know how the procedure is to get to that result. Last edited by Lars91; August 14th 2010 at 07:47 AM. Reason: forgot a word There are two solutions for $2^n<n^8$ One solution involves negative n. If the solution is for positive n, since your original equation is a logarithm, you could use a change of base to the natural logarithm. This makes a graphical solution much easier. Then you end up with You can use the Newton-Raphson method to find the two "n" for which this is zero. Your inequality holds for "n" between those two values. Thank you! I think it will suffice to just do the graph on my calculator when reaching an answer. It is computer science and Newton-Raphson method is outside scope of the course I am taking. The question was for what input size n one algorithm ran faster the the other one. August 14th 2010, 07:23 AM #2 August 14th 2010, 07:39 AM #3 Aug 2010 August 14th 2010, 11:17 AM #4 MHF Contributor Dec 2009 August 15th 2010, 05:15 AM #5 Aug 2010
{"url":"http://mathhelpforum.com/algebra/153671-inequality-logarithm-help-needed.html","timestamp":"2014-04-21T04:46:27Z","content_type":null,"content_length":"45174","record_id":"<urn:uuid:e085b6d8-f0c7-4634-a844-0d3999bd6b3d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: this is rather very simple algebra question. #1) sam drives 20 miles at the rate of 60mph and then reduced the speed to 40mps at bumpy road for another 8 miles drive. What is the average speed of the entire driving? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50fa129fe4b022b322700339","timestamp":"2014-04-18T10:46:48Z","content_type":null,"content_length":"35027","record_id":"<urn:uuid:3bee3169-629e-4959-a4b5-9461b9d32284>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Developing Geometric Concepts Why Geometry? The resources in this professional development collection were assembled to help you enhance your understanding of geometric concepts via theory, research, and activities that can be adapted for classroom use. If the question “Why Geometry?” is the place to begin, the New Zealand article “Geometry Information” provides key arguments in favor of introducing geometric concepts in the elementary grades. What Topics are Appropriate? Some research has investigated how students learn geometric concepts and when they are capable of achieving a deep understanding. The van Hieles worked with individual students and developed their five levels of geometric thought. The IMAGES website details those levels and contains other pages of teaching suggestions and activities to implement them. Jenni Way provides examples of instruction and student work in the van Hiele model. Doug Clements has written extensively on the geometric learning of young children and two of his articles are included. How Should We Plan to Teach these Topics? The Annenberg Foundation’s Learner Online has two excellent teacher labs. This collection provides a sampling of topics from the Mathematics Developmental Continuum of the State of Victoria, Australia. In these documents, you will find examples, teaching strategies, activities and progress indicators. The Educational Development Center has many good resources for teacher professional development. Their article “Right Angle” is one example of how geometric topics can connect with other school
{"url":"http://www.mathlanding.org/collections/pd_collection/developing-geometric-concepts","timestamp":"2014-04-20T13:22:53Z","content_type":null,"content_length":"57989","record_id":"<urn:uuid:88b1a42a-697f-4aad-b67a-7103a3bb62f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplication by Clever Form of 1 This super handy trick lets you simplify an expression without changing its value. Which is good, because fractions should not be subject to inflation. Sample Problem Here's a horrible fraction. A really horrible fraction. This thing is getting coal for Christmas. If we multiply it by 1, we won't change its value. If we multiply it by anything equivalent to 1 (such as Our new numerator is We can also use this trick to rewrite decimal division problems... 3.4/7.8 x 10/10 = 34/78 ...or when we have a division problem written as a fraction. Multiplication By Clever Form Of 1 Practice: Find the cleverly disguised form of 1 that will let you simplify the rotten fractions below, then simplify. And quickly, please - it hurts our eyes just to look at them. Find the cleverly disguised form of 1 that will let you simplify the rotten fractions below, then simplify. And quickly, please - it hurts our eyes just to look at them.
{"url":"http://www.shmoop.com/number-types/multiplication-clever-form-1-help.html","timestamp":"2014-04-21T12:18:19Z","content_type":null,"content_length":"41065","record_id":"<urn:uuid:f3dc7ee2-3840-49a4-a852-35d50d9c02db>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
Valley Village Algebra Tutor Find a Valley Village Algebra Tutor Whether you need a quick review session before a test or a patient long-term tutor to teach you the details of a subject, I can help. I specialize in tutoring math because it is my passion and my expertise, and I hope I can help you enjoy it too. Math is a daunting subject for many students because it is cumulative. 14 Subjects: including algebra 1, algebra 2, calculus, statistics ...My track record is similar on the AP Lit and AP language exams. My students typically receive between 680 and 760 on the history subject test with only a month or two of prep. I am an excellent tutor for last-minute cram sessions and other forms of academic crisis, but find that my students learn much more effectively with consistent work a few hours a week. 20 Subjects: including algebra 2, Spanish, algebra 1, reading ...I teach students to use assignment logs, daily planners, and monthly schedules to break down assignments into manageable time slots each day. These tools allow each student to prioritize their workload so that it gets done on a timely basis and assignments do not get missed. I also incorporate... 22 Subjects: including algebra 1, English, reading, writing ...Before moving to Los Angeles, I taught for years at a high-end tutoring firm in New York City, where I successfully helped students gain admission to the country's most competitive colleges and secondary schools. In addition to helping students prepare for standardized tests (SAT, ACT, ISEE, SSA... 42 Subjects: including algebra 2, algebra 1, reading, English ...I am most capable of tutoring in Trigonometry. I break down the concepts into smaller parts. I like to use repetition, key phrases and mnemonics to help my students remember the concepts. 31 Subjects: including algebra 1, algebra 2, reading, chemistry
{"url":"http://www.purplemath.com/valley_village_algebra_tutors.php","timestamp":"2014-04-20T11:28:52Z","content_type":null,"content_length":"24188","record_id":"<urn:uuid:aaefc1bf-e100-4547-8b12-b69666cb71f3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulation reduction as constraint - J. Automated Reasoning , 2002 "... The notions of bisimulation and simulation are used for graph reduction and are widely employed in many areas: Modal Logic, Concurrency Theory, Set Theory, Formal Verification, etc. In particular, in the context of Formal Verification they are used to tackle the so-called state-explosion problem. ..." Cited by 18 (1 self) Add to MetaCart The notions of bisimulation and simulation are used for graph reduction and are widely employed in many areas: Modal Logic, Concurrency Theory, Set Theory, Formal Verification, etc. In particular, in the context of Formal Verification they are used to tackle the so-called state-explosion problem.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=659164","timestamp":"2014-04-20T14:33:54Z","content_type":null,"content_length":"12183","record_id":"<urn:uuid:15e3a76e-d362-4ba5-afdc-2184db5fae51>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Caddo Mills Geometry Tutor ...I appreciate your consideration in my services as you search for support in your child’s academic development. I graduated from Texas A&M University in 2005 with a bachelor’s degree in Interdisciplinary Studies. From 2005-2007 I taught 5th grade science at Thurgood Marshall Elementary, a Title 1 school, in Richardson ISD. 15 Subjects: including geometry, chemistry, algebra 1, algebra 2 ...I like inspiring students about the real world application of physics which makes them appreciate the things that they learn at school. I tutor both high school and college physics students. I am a physicist with both a bachelor and master’s degree in physics. 25 Subjects: including geometry, chemistry, calculus, physics ...I also know many memory tricks, and tips for doing well on exams. I believe that enjoyment and encouragement can be fundamental motivators for success. These will necessarily be qualities I commit to in any tutoring relationship. 17 Subjects: including geometry, reading, biology, chemistry ...The most rewarding part of my career as a teacher is to work with a student one-on-one and to witness the moments when he or she finally grasps the essence of a new idea. I adore the moments when students say, "Ah, it totally makes sense." I have had the pleasure helping hundreds of students and... 19 Subjects: including geometry, chemistry, physics, statistics ...I am certifying in Spanish, but I was also a physics major until I was forced to choose between Spanish or physics and opted out of the physics major. I grew up in Texas, so naturally I've been around Spanish my whole life. By the time I got to high school I felt I had a grasp on the language, so I took German for three years instead. 12 Subjects: including geometry, Spanish, physics, English
{"url":"http://www.purplemath.com/Caddo_Mills_geometry_tutors.php","timestamp":"2014-04-18T11:19:48Z","content_type":null,"content_length":"23904","record_id":"<urn:uuid:a2b8aa83-f44b-4c68-ba23-7a93a32895bd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamical Systems V Dynamical Systems V: Bifurcation Theory and Catastrophe Theory V.I. Arnold Bifurcation theory and catastrophe theory are two of the best known areas within the field of dynamical systems. Both are studies of smooth systems, focusing on properties that seem to be manifestly non-smooth. Bifurcation theory is concerned with the sudden changes that occur in a system when one or more parameters are varied. Examples of such are familiar to students of differential equations, from phase portraits. Moreover, understanding the bifurcations of the differential equations that describe real physical systems provides important information about the behavior of the systems. Catastrophe theory became quite famous during the 1970's, mostly because of the sensation caused by the usually less than rigorous applications of its principal ideas to "hot topics", such as the characterization of personalities and the difference between a "genius" and a "maniac". Catastrophe theory is accurately described as singularity theory and its (genuine) applications. The authors of this book, the first printing of which was published as Volume 5 of the Encyclopaedia of Mathematical Sciences, have given a masterly exposition of these two theories, with penetrating insight. We haven't found any reviews in the usual places. References from web pages Book List of IWAI Lab. -1986 (Author from A ) Dynamical Systems and Applications World Scientific Advanced Series in Dynamical Systems, v.5 World Scientific, 1987 404.0 88010098 ... yang.amp.i.kyoto-u.ac.jp/ research/ booklist/ A.html Phys. Rev. Lett. 93, 168103 (2004): Biktashev and Suckley - Non ... [20] Dynamical Systems V edited by vi Arnol'd (Springer, Verlag, Berlin, 1994), pp. 154 192. [21] an Tikhonov, Math. USSR Sbornik 31, 575 (1952). ... link.aps.org/ doi/ 10.1103/ PhysRevLett.93.168103 <a href="http://adsabs.harvard.edu/cgi-bin/bib_query?1984ntpp.proc ... [12] vi Arnol’d, “Catastrophe theory”, Dynamical systems V, Itogi Nauki i ... English transl. in Encyclopaedia Math. Sci. [Dynamical systems V], 1994, ... www.turpion.org/ php/ reference.phtml/ ref_sm3831.pdf?journal_id=sm& paper_id=3831& volume=198& issue=1& type=pdf vs afrajmovich libri - I Libri dell'autore: vs Afrajmovich ... Dynamical Systems V: Bifurcation Theory and Catastrophe Theory · Dynamical Systems V: Bifurcation Theory and Catastrophe Theory · Springer - May 1994 ... www.libreriauniversitaria.it/ books-author_v+s+afrajmovich-v+s+afrajmovich.htm Bibliographic information
{"url":"http://books.google.com/books?id=KhCQcU0l1lIC&dq=related:ISBN0715611305&lr=&source=gbs_similarbooks_r&cad=2","timestamp":"2014-04-21T10:38:40Z","content_type":null,"content_length":"120384","record_id":"<urn:uuid:79b90d26-7a5b-46de-85c1-2775b95036dd>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Solve the equation by completing the square,x^2+4x+3=0 • 10 months ago • 10 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51c1ee0de4b0a49e8c8ff2c6","timestamp":"2014-04-17T09:38:46Z","content_type":null,"content_length":"119321","record_id":"<urn:uuid:f867c15e-4c2d-4a2e-bf64-8a4598cbde18>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling for Deformable Body and Motion Analysis: A Review Mathematical Problems in Engineering Volume 2013 (2013), Article ID 786749, 14 pages Review Article Modeling for Deformable Body and Motion Analysis: A Review ^1School of Electronic and Optical Engineering, Nanjing University of Science and Technology, 200 Xiao Ling Wei, Nanjing 210094, China ^2Information Technology Research Center, China Electronics Standardization Institute, No. 1 Andingmen East Street, Beijing 100007, China ^3National Key Laboratory for Digital Multimedia Chip Technology, Vimicro Corporation, Beijing 100191, China ^4College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou 310023, China Received 5 December 2012; Revised 26 January 2013; Accepted 26 January 2013 Academic Editor: Carlo Cattani Copyright © 2013 Hailang Pan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper surveys the modeling methods for deformable human body and motion analysis in the recent 30 years. First, elementary knowledge of human expression and modeling is introduced. Then, typical human modeling technologies, including 2D model, 3D surface model, and geometry-based, physics-based, and anatomy-based approaches, and model-based motion analysis are summarized. Characteristics of these technologies are analyzed. The technology accumulation in the field is outlined for an overview. 1. Introduction Human body modeling is experiencing a continuous and accelerated growth. This is partly due to the increasing demand from computer graphics and computer vision communities. Computer graphics pursues a realistic modeling of both the human body geometry and its associated motion. This will benefit applications such as games, virtual reality, or animations, which demand highly realistic human body models (HBMs). Recently, computer vision has been used for the automatic generation of HBMs from a sequence of images by incorporating and exploiting prior knowledge of the human appearance. Computer vision also addresses human body modeling, but in contrast to computer graphics it seeks more for an efficient than an accurate model for applications, such as intelligent video surveillance, motion analysis, telepresence, or human-machine interface. Computer vision applications rely on vision sensors for reconstructing HBMs. Obviously, the rich information provided by a vision sensor, containing all the necessary data for generating a HBM, needs to be processed. Approaches such as tracking segmentation-model fitting or motion prediction-segmentation-model fitting or other combinations have been proposed showing different performances according to the nature of the scene to be processed (e.g., indoor environments, studio-like environments, outdoor environments, single-person scenes, etc.). The challenge is to produce a HBM able to faithfully follow the movements of a real person [1, 2]. Modeling a human is a great challenge if we consider the numerous parts needed to compose a body. The first step is the basic structure modeling, the definition of the joints, their positions, orientations, and the geometric model that will describe the body hierarchy. Next, we need to think about the body volume and on top of this, we can use a parametric surface to simulate skin for example. With these three elements, we can reach a good representation of a body. Methods used to deform the human skin layer include curve-, contour-, surface-, geometry-, physics-, and anatomy-based approaches. 2D models such as curve and contour models offer representational and computational simplicity and are often preferred over 3D models for applications involving monocular images and video. These models typically represent the shape of the human body coarsely. None of these methods explicitly models how clothing influences human shape. Surface models used in animation have a highly structured mesh to give high-resolution representation in areas of deformation and efficient representation in other areas. Preserving this vertex parameterization is important in reconstructing models that can be used for animation. Geometry-based approach such as free form deformation (FFD) provides flexibility for the users to control deformations of models. The method is simple and fast but requires considerable skill to model realistic model. Physics-based approaches such as the finite element method (FEM) can model skin layers according to their physical properties accurately. Anatomy-based methods create an accurate human body model based on a precise representation of the skeleton, muscles, and fatty tissues. These techniques generate realistic and dynamic deformation of an articulated body using physical simulation but, due to their high computational cost, applications are mainly in offline simulation and animation [3]. Model-based pose estimation algorithms aim at recovering human motion from one or more camera views and a 3D model representation of the human body. The model pose is usually parameterized with a kinematic chain, and thereby the pose is represented by a vector of joint angles. The majority of algorithms are based on minimizing an error function that measures how well the 3D model fits the image. This category of algorithms usually has two main stages, namely, defining the model and fitting the model to image observations. The model image association problem for pose estimation is usually formulated as the minimization/maximization of an error/likelihood function. The two main strategies have been described, namely, local and particle-based optimization. Local optimization methods are faster and more accurate but in practice, if there are visual ambiguities, or really fast motions, the tracker might fail catastrophically. To achieve more robustness, particle filters can be used because they can represent uncertainty through a rigorous Bayesian paradigm [4]. In the past, there are several reviews on the study of human motion capture, detection, and analysis. Aggarwal et al. [5] wrote a review about articulated and elastic nonrigid motion in 1994. There is no other review about non-rigid, elastic, or deformable human motion analysis in the past 19 years. This paper is the first such review after 1995. This paper is organized as follows Section 2 presents the 2D curve and contour human models. Section 3 presents the 3D human surface models such as quadrics, superquadrics, implicit surface, spline surface, and mesh surface models. Section 4 presents geometry-based human models. Section 5 presents physics-based human models. Anatomically based human models are presented in Section 6. 2. Two-Dimensional Models 2.1. Curve Models Tabia et al. proposed an approach to matching 3D objects in the presence of nonrigid transformation and partially similar models [6]. They adopt square-root elastic (SRE) framework because it simplifies the elastic shape analysis. They define a space of closed curves of interest, impose a Riemannian structure on this space using the elastic metric, and compute geodesic paths under this metric. These geodesic paths can then be interpreted as optimal elastic deformations of curves. Mori and Malik [7] took a single two-dimensional image containing a human figure, locate the joint positions, and use these to estimate the body configuration and pose in three-dimensional space (see Figure 1(a)). They match the input image to each stored view using the technique of shape context matching in conjunction with a kinematic chain-based deformation model and then use joint positions to estimate the body configuration and pose in three-dimensional space. In their approach, a shape is represented by a discrete set of points sampled from the internal and external contours on the shape. They first perform edge detection on the image, using the boundary detector to obtain a set of edge pixels on the contours of the body. They then sample some number of points (300–1000) from these edge pixels to be used as the sample points for the body. Srivastava et al. [8] introduced a square-root velocity (SRV) representation for analyzing shapes of curves in Euclidean spaces using an elastic metric (see Figure 1(b)). Huang et al. [9] presented a variational and statistical approach for shape registration (see Figure 1(c)). Shapes of interest are implicitly embedded in a higher-dimensional space of distance transforms. In this implicit embedding space, registration is formulated in a hierarchical manner: the mutual information criterion supports various transformation models and is optimized to perform global registration; then, a B-spline-based incremental free form deformations (IFFD) model is used to minimize a sum-of-squared-differences (SSD) measure and further recover a dense local nonrigid registration field. 2.2. Contour Models Liu et al. [10] presented boosted deformable model for human body alignment. Their model representation consists of a shape component represented by a point distribution model and an appearance component represented by a collection of local features, trained discriminatively as a two-class classifier using boosting (see Figure 2(a)). Freifeld et al. [11] defined a new “contour person” model of the human body that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The contour person (CP) model is learned from a 3D SCAPE model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts, form the training set (see Figure 2(b)). The CP model factors deformations of the body into three components: shape variation, viewpoint change, and part rotation. This latter model also incorporates a learned non-rigid deformation model. Zuffi et al. [12] defined a new deformable structures (DS) model that is a natural extension of previous pictorial structures (PS) models and that captures the non-rigid shape deformation of the parts (see Figure 2(c)). Each part in a DS model is represented by a low-dimensional shape deformation space, and pairwise potentials between parts capture how the shape varies with pose and the shape of neighboring parts. A key advantage of such a model is that it more accurately models object boundaries. This enables image likelihood models that are more discriminative than the previous PS likelihoods. They focus on a human DS model learned from 2D projections of a realistic 3D human body model and use it to infer human poses in images using a form of nonparametric belief propagation. Guan et al. [13] studied detection, tracking, segmentation, and pose estimation of people in monocular images. They start with a contour person (CP) model (see Figure 2(d)), which is a low-dimensional, realistic, parameterized generative model of 2D human shape and pose. The CP model is learned from examples created by 2D projections of multiple shapes and poses generated from a 3D body model such as SCAPE. The CP model is based on a template, corresponding to a reference contour that can be deformed into a new pose and shape. This deformation is parameterized and factors the changes of a person’s 2D shape due to pose, body shape, and the parameters of the viewing camera. This factorization allows different causes of the shape change to be modeled separately. 3. 3D Surface Models 3.1. Quadrics Park and Hodgins [14] presented a technique for capturing and animating those motions using a commercial motion capture system and approximately 350 markers. They supplement these markers with a detailed, actor-specific surface model (see Figure 3(a)). The motion of the skin can then be computed by segmenting the markers into the motion of a set of rigid parts and a residual deformation (approximated first as a quadratic transformation and then with radial basis functions). Fayad et al. [15] proposed a more general shape model that accounts for quadratic deformations. Their approach takes motion capture (MOCAP) data as input and enables the extraction of more accurate estimates for the rigid component of the different body segments using a factorization framework. The parameters of the model are computed using a Levenberg-Marquardt nonlinear optimization scheme. Hyun et al. [16] presented a new approach to the modeling and deformation of a human or virtual character’s arms and legs. Each limb is represented as a set of ellipsoids of varying sizes interpolated along a skeleton curve (see Figure 3(b)). A base surface is generated by approximating these ellipsoids with a swept ellipse, and the difference between that and the detailed shape of the arm or leg is represented as a displacement map. Pan and Liu [17] presented a model of elastic articulated objects based on revolving conic surface and a method of model-based motion estimation (see Figure 3(c)). The model includes 3D object skeleton and deformable surfaces that can represent the deformation of human body surfaces. In each limb, surface deformation is represented by adjusting one or two deformation parameters. Then, the 3D deformation parameters are determined by corresponding 2D image points and contours with volume invariable constraint. The 3D motion parameters are estimated based on the 3D model. 3.2. Superquadrics Terzopoulos and Metaxas [18] presented a physically based approach to fitting complex three-dimensional shapes using a class of dynamic models that can deform both locally and globally. They formulate the deformable superquadrics, which incorporate the global shape parameters of a conventional superellipsoid with the local degrees of freedom of a spline. The model’s six global deformational degrees of freedom capture gross shape features from visual data and provide salient part descriptors for efficient indexing into a database of stored models. The local deformation parameters reconstruct the details of complex shapes that the global abstraction misses. The equations of motion which govern the behavior of deformable superquadrics make them responsive to externally applied forces. The authors fit models to visual data by transforming the data into forces and simulating the equations of motion through time to adjust the translational, rotational, and deformational degrees of freedom of the models. Sminchisescu [19] built a human body model which consists of a kinematic “skeleton” of articulated joints controlled by angular joint parameters, covered by “flesh” built from superquadric ellipsoids with additional tapering and bending parameters (see Figure 4(a)). A typical model has around 30 joint parameters, plus 8 internal proportion parameters encoding the positions of the hip, clavicle, and skull tip joints, plus 9 deformable shape parameters for each body part, gathered into a vector. A complete model can be encoded as a single large parameter vector. During tracking, they usually estimate only joint parameters, but during initialization the most important internal proportions and shape parameters are also optimized, subject to a soft prior based on standard humanoid dimensions and updated using collected image evidence. Although this model is far from being photo realistic, it suffices for high-level interpretation and realistic occlusion prediction, and it offers a good trade-off between computational complexity and coverage. Hofmann and Gavrila [20] presented an approach for 3D human body shape model adaptation to a sequence of multi-view images. They use an articulated model with linearly tapered superquadrics as geometric primitives for torso, neck, head, upper and lower arm, hand, upper and lower leg, and foot, assuming body symmetry (see Figure 4(b)). The parameter space of each superquadric comprises parameters for length, squareness, and tapering. They implement automatic pose and shape estimation using a three-step procedure: first, they recover initial pose over a sequence using an initial (generic) body model. Both model and poses then serve as input to the above-mentioned adaptation process. Finally, a more accurate pose recovery is obtained by means of the adapted model. Sundaresan and Chellappa [21] proposed a general approach using Laplacian Eigenmaps and a graphical model of the human body to segment 3D voxel data of humans into different articulated chains. They select the superquadric model to represent human bodies (see Figure 4(c)). They use a hierarchical approach beginning with a skeletal model (joint locations and limb lengths) and then proceeding to increase the model complexity and refining parameters to obtain a volumetric model (superquadric parameters). Yang and Lee [22] reconstructed a 3D human body pose from stereo image sequences based on a top-down learning method. The 3D human body model consists of 17 body components. The human body model has 40 degrees of freedom (DOF). Tapered superquadrics are employed to represent body components. 3.3. Implicit Surface Matsuda and Nishita [23] modeled the human body by layered metaballs, which correspond to the horizontal cross section of the body, in their cloth simulation system. For each cross section, metaballs are generated by measured sample points on the boundary of the cross section. In order to fit the metaball surface with the sampling points, they employed the steepest descent method. For body deformation, the sampling points on the cross section are smoothly moved using Bezier curves. Blinn [24] presented a new algorithm applicable to other functional forms, in particular to the summation of several Gaussian density distributions. He models human body by using implicit surface (see Figure 5(a)), but he did not model from images. Thalmann et al. [25] presented different methods for representing realistic deformations for virtual humans with various characteristics: sex, age, height, and weight. Their methods based on a combination of metaballs and splines could be applied to frame-by-frame computer generated films and virtual environments. Smooth implicit surfaces, known as metaballs, are attached to an articulated skeleton of the human body and are arranged in an anatomically based approximation. This particular human body model includes 230 metaballs. D’Apuzzo et al. [26] outlined techniques for fitting a simplified model to the noisy 3D data extracted from the images and a new tracking process based on least squares matching is presented. They present a simplified model of a limb. Ellipsoidal metaballs are used to simulate the gross behavior of bone, muscle, and fat tissue. Only three ellipsoidal metaballs are attached to each limb skeleton and arranged in an anatomically based approximation (see Figure 5(b)). Each ellipsoidal metaball has four deformation parameters and each limb has three ellipsoidal metaballs, so each limb has 12 deformation parameters. Fua et al. [27] presented a comprehensive concept to fit animation models to a variety of different data derived from multiimage video sequences. Their research includes setting up and calibrating a system of three CCD cameras, extracting image silhouettes, tracking individual key body points in 3D, and generating surface data by stereo or multi-image matching. To reduce the number of degrees of freedom (DOFs) and to be able to robustly estimate the skeleton’s position, they replace the multiple metaballs by one ellipsoid attached to each bone in the skeleton (see Figure 5(c)). Plänkers and Fua [28] developed a framework for 3D shape and motion recovery of articulated deformable objects. They propose a formalism that incorporates the use of implicit surfaces into earlier robotics approaches that is designed to handle articulated structures. Their human body model also includes 230 metaballs (see Figure 5(d)). Tong et al. [29] constructed a human body model using convolution surface with articulated kinematic skeleton (see Figure 5(e)). The human body’s pose and shape in a monocular image can be estimated from convolution curve through nonlinear optimization. 3.4. Spline Surface Nahas et al. [30] described how the use of B-spline surfaces allows lissome movements of body and face (see Figure 6(a)). Their method is empirical, based on a parametrical animation. It can be combined with a muscles model for animation. Fu and Yuan [31] introduced the establishment of human body based on B-spline (see Figure 6(b)). According to the military criteria, they divide the whole body into sixteen limbs and use the method of multiplying of matrix to establish the equation of human body’s movement. It constructs the blending surface between two limbs by transfinite interpolant. Huang et al. [32] discussed a motion modeling method to simulate the bend of human leg and corresponding deformations of muscles on the basis of NURBS FFD (free form deformation) [33]. Generally, FFD uses a trivariate tensor product spline to transmit deformations, and it is feasible to choose only two order B-spline basis functions in this case (see Figure 6(c)). According to the anatomic structure of joints, the simulation formulas are presented, and they coincide with the motion characteristics of knee joint and muscles. Wang and Jiang [34] simulated three-dimensional human’s leg bending by using free form deformation on the basis of NURBS (see Figure 6(d)); leg looked as combo of rigid body bone and flexible body muscle. It improves Barr’s deformation methods, weight is used to control deformation, and a good visual effect of simulation is achieved. 3.5. Mesh Surface Huang et al. [35] considered the problem of aligning multiple non-rigid surface mesh sequences into a single temporally consistent representation of the shape and motion (see Figure 7(a)). A global alignment graph structure is introduced, which uses shape similarity to identify frames for intersequence registration. Graph optimization is performed to minimize the total non-rigid deformation required to register the input sequences into a common structure. Chang and Lin [36] presented a 3D model-based tracking algorithm called the progressive particle filter to decrease the computational cost in high degrees of freedom by employing hierarchical searching. A 3D virtual human model is developed to simulate human movement. The proposed 3D human model is constructed from deformable flesh (see Figure 7(b)). Deformable flesh can be deformed to precisely fit the target body to achieve accurate tracking results. Liao et al. [37] reconstructed complete 3D deformable models over time by a single depth camera, provided that most parts of the models are observed by the camera at least once. A mesh warping algorithm based on linear mesh deformation is used to align different partial surfaces. A volumetric method is then used to combine partial surfaces, fix missing holes, and smooth alignment errors (see Figure 7(c)). Varanasi et al. [38] addressed the problem of surface tracking in multiple camera environments and over time sequences. In order to fully track a surface undergoing significant deformations, they cast the problem as a mesh evolution over time. Such an evolution is driven by 3D displacement fields estimated between meshes recovered independently at different time frames. The contribution is a novel mesh evolution-based framework that allows to fully track, over long sequences, an unknown surface encountering deformations, including topological changes (see Figure 7(d)). Balan and Black [39] estimated the detailed 3D shape of a person from images of that person wearing clothing. They employ a parametric body model called SCAPE that is able to capture variability of body shapes between people, as well as articulated and non-rigid pose deformations. The model is derived from a large training set of human laser scans, which have been brought in full correspondence with respect to a reference mesh (see Figure 7(e)). Starck and Hilton [40] presented a model-based approach to recover animated models of people from multiple view video images. A prior humanoid surface model is first decomposed into multiple levels of detail and represented as a hierarchical deformable model for image fitting. A novel mesh parameterization is presented that allows propagation of deformation in the model hierarchy and regularization of surface deformation to preserve vertex parameterization and animation structure (see Figure 7(f)). De Aguiar et al. [41] jointly captured the motion and the dynamic shape of humans from multiple video streams without using optical markers. Their approach uses a deformable high-quality mesh of a human as scene representation. It jointly uses an image-based 3D correspondence estimation algorithm and a fast Laplacian mesh deformation scheme to capture both motion and surface deformation of the actor from the input video footage (see Figure 7(g)). Wang et al. [42] developed an efficient and intuitive deformation technique for virtual human modeling by silhouettes input. With their method, the reference silhouettes and the target silhouettes are used to modify the synthetic human model, which is represented by a polygonal mesh. The system moves the vertices of the polygonal model so that the spatial relation between the original positions and the reference silhouettes is identical to the relation between the resulting positions and the target silhouettes. Their method is related to the axial deformation. Seo et al. [43] aimed to carry out realistic deformations on the human body models as well as make its usage simple. Their system is composed of several modules: skin attachment to an H-Anim skeleton is carried out first in order to get deformation in skeletal shape modification as well as in animation; volumetric deformation module deals with the volumetric scale of body parts (see Figure 7(h)). These deformation operators, together with the skeletal deformation, allow the automatic adaptation of the body model to different sizes and proportions to accommodate anthropometrical variations; surface optimization is used to simplify the model in consideration of not only geometric features, but also the animation aspect of it; finally, the BDP (MPEG-4 format) generation module describes the geometry of the model as well as how to animate it according to the MPEG-4 BDP specifications. 4. Geometry-Based Approaches Kokkinos et al. [44] presented intrinsic shape context (ISC) descriptors for 3D shapes (see Figure 8(a)). They generalize to surfaces the polar sampling of the image domain used in shape contexts: for this purpose, they chart the surface by shooting geodesic outwards from the point being analyzed; “angle” is treated as tantamount to geodesic shooting direction and radius as geodesic distance. To deal with orientation ambiguity, they exploit properties of the Fourier transform. For the analysis of deformable 3D shapes, Raviv et al. [45] introduced an (equi)affine invariant diffusion geometry by which surfaces that go through squeeze and shear transformations can still be properly analyzed (see Figure 8(b)). The definition of an affine invariant metric enables them to construct an invariant Laplacian from which local and global geometric structures are extracted. Castellani et al. [46] exploited a new generative model for encoding the variations of local geometric properties of 3D shapes. Surfaces are locally modeled as a stochastic process, which spans a neighborhood area through a set of circular geodesic pathways, captured by a modified version of a Hidden Markov Model (HMM), named multicircular HMM (MC-HMM). The approach proposed consists of two main phases: local geometric feature collection and MC-HMM parameter estimation. Akhter et al. [47] proposed a dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories. They describe the dual relationship between the two approaches, showing that they both have equal power for representing 3D structure. They further show that the temporal smoothness in 3D trajectories alone can be used for recovering nonrigid structure from a moving camera (see Figure 8(c)). The principal advantage of expressing deforming 3D structure in trajectory space is that they can define an object on independent basis. This results in a significant reduction in unknowns and corresponding stability in estimation. Gotardo and Martinez [48] addressed the classical computer vision problems of rigid and nonrigid structure from motion (SFM) with occlusion. They assume that the columns of the input observation matrix describe smooth 2D point trajectories over time. They then derive a family of efficient methods that estimate the column space of using compact parameterizations in the Discrete Cosine Transform (DCT) domain. In non-rigid SFM, they propose a 3D shape trajectory approach that solves for the deformable structure as the smooth time trajectory of a single point in a linear shape space. Raviv et al. [49] presented a generalization of symmetries for non-rigid shapes and a numerical framework for their analysis (see Figure 8(d)), addressing the problems of full and partial exact and approximate symmetry detection and classification. Zhu et al. [50] formulated a hierarchical configurable deformable template (HCDT) to model articulated visual objects—such as horses and baseball players—for tasks such as parsing, segmentation, and pose estimation. HCDTs represent an object by an AND/OR graph where the OR nodes act as switches, which enables the graph topology to vary adaptively. This hierarchical representation is compositional, and the node variables represent positions and properties of subparts of the object. The graph and the node variables are required to obey the summarization principle, which enables an efficient compositional inference algorithm to rapidly estimate the state of the HCDT. Cui et al. [51] reported a parameterized model for virtual human body (see Figure 8(e)). In this model, the virtual human body was partitioned into several parts. Based on the partitioned human model, the proportional characteristics of the human body were used to calculate the offset of the vertices to implement the deformation on specific part of the body. The interpolation method was used to smoothen the deformed surfaces. Liu and Shang [52] presented the example-based method for generating realistic, controllable human models (see Figure 8(f)). Users are assisted in automatically generating an example body data by controlling the parameters. The examples from the Poser and 3D Max are preprocessed as templates. The modeling method learns from these examples. After this learning process, the synthesizer translates the mesh of vertices to the generation of appropriate shape and proportion of the body geometry through free form deformation method. Oshita and Suzuki [53] proposed an easy-to-use real-time method to simulate realistic deformation of human skin. They utilize the fact that various skin deformations can be categorized into various deformation patterns. A deformation pattern for a local skin region is represented using a dynamic height map (see Figure 8(g)). Users of their system specify a region of the body model in the texture space to which each deformation pattern should be applied. Then, during animation, the skin deformation of the body model is realized by changing the height patterns based on the joint angles and applying bump mapping or displacement mapping to the body model. Tian et al. [54] presented an improved skinning method that can effectively reduce traditional flaws of this method. After the key terminologies are introduced, the improvement for skinning deformation is illustrated in detail. The main measures include that adding extra joints on JSL to minimize the distance between adjacent joints and thus joint’s importance attribute is introduced, using joint cluster to replace single joint; creating the corresponding relationship between skin and JSLs based on flexible model and multijoints-binding method (MJBM), that is to say binding one skin vertex to several joints using distance criterion and weight coefficient, is the function of distances between the skin vertex and its related joints. All these improvements can make the skin deformation more smooth (see Figure 8(h)). Zhou and Zhao [55] presented a skin deformation algorithm for creating 3D characters or virtual human models. The algorithm can be applied to rigid deformation, joint-dependent localized deformation, skeleton driven deformation, cross-contour deformation, and free-form deformation (FFD). These deformations are computed and demonstrated with examples, and the algorithm is applied to overcome the difficulties in mechanically simulating the motion of the human body by club-shape models. The techniques enable the reconstruction of dynamic human models that can be used in defining and representing the geometrical and kinematical characteristics of human motion. Shen et al. [56] presented an approach for human skin modeling and deformation based on cross-sectional methods. Internally, the authors use dynamic trimmed parametric patches for describing the smooth deformation of skin pieces; then they polygonalize parametric patches for final body skin synthesis and rendering. Simple and intuitive, their method combines the advantages of both parametric and polygonal representations, produces very realistic body deformations, and allows the display of surface models at several levels of detail. Smeets et al. [57] used an isometric deformation model. The geodesic distance matrix is used as an isometry-invariant shape representation. Two approaches are described to arrive at a sampling order invariant shape descriptor: the histogram of geodesic distance matrix values and the set of largest singular values of the geodesic distance matrix. Shape comparison is performed by the comparison of the shape descriptors using the -distance as dissimilarity measure. Rumpf and Wirth [58] introduced the covariance of a number of given shapes if they are interpreted as boundary contours of elastic objects. Based on the notion of nonlinear elastic deformations from one shape to another, a suitable linearization of geometric shape variations is introduced. Once such a linearization is available, a principal component analysis can be investigated. This requires the definition of a covariance metric—an inner product on linearized shape variations. The resulting covariance operator robustly captures strongly nonlinear geometric variations in a physically meaningful way and allows to extract the dominant modes of shape variation. The underlying elasticity concept represents an alternative to Riemannian shape statistics. Fundana et al. [59] proposed a method for variational segmentation of image sequences containing nonrigid, moving objects. The method is based on the classical Chan-Vese model augmented with a novel frame-to-frame interaction term, which allows them to update the segmentation result from one image frame to the next using the previous segmentation result as a shape prior. The interaction term is constructed to be pose invariant and to allow moderate deformations in shape. Mio et al. [60] studied shapes of planar arcs and closed contours modeled on elastic curves obtained by bending, stretching, or compressing line segments nonuniformly along their extensions. Shapes are represented as elements of a quotient space of curves obtained by identifying those that differ by shape-preserving transformations. The elastic properties of the curves are encoded in Riemannian metrics on these spaces. Geodesics in shape spaces are used to quantify shape divergence and to develop morphing techniques. The shape spaces and metrics constructed offer an environment for the study of shape statistics. Elasticity leads to shape correspondences and deformations that are more natural and intuitive than those obtained in several existing models. Cremers [61] tackled the challenge of learning dynamical statistical models for implicitly represented shapes. They show how these can be integrated as dynamical shape priors in a Bayesian framework for level set-based image sequence segmentation. They propose learning the temporal dynamics of a deforming shape by approximating the shape vectors of a sequence of level set functions by a Markov chain of order . 5. Physics-Based Approaches Tang [62] presented a physics-based approach to model human skin deformation using boundary element method (BEM). Given the magnitude of displacement between the skin layer and the underlying skeleton at the anatomical landmarks, the approach determines the displacement of each vertex of the human skin model by using the BEM (see Figure 9(a)). They demonstrated their results by modeling the skin deformation of human lower limb with jumping and walking motions. Shin and Badler [63] modeled a deformable human arm to improve the accuracy of constrained reach analysis. Their research is largely composed of two parts. The first part is modeling a deformable human arm based on these empirical biomechanical properties and calculating the deformation due to various contact areas (see Figure 9(b)). The second part is evaluating the reachable space (reachability) from the arm deformation in a given geometric (CAD) environment. Using the empirical force-displacement relation, they have built a simple human arm model that deforms using a finite element method. Pentland and Horowitz [64] introduced a physically correct model of elastic nonrigid motion. This model is based on the finite element method (see Figure 9(c)), but it decouples the degrees of freedom by breaking down object motion into rigid and nonrigid vibration or deformation modes. The result is an accurate representation for both rigid and nonrigid motions that has greatly reduced dimensionality. Because of the small number of parameters involved, they have been able to use this representation to obtain accurate overconstrained estimates of both rigid and nonrigid global motions. These estimates can be integrated over time by the use of an extended Kalman filter [65], resulting in stable and accurate estimates of both 3D shape and 3D velocity. The formulation was then extended to include constrained nonrigid motion. Examples of tracking single nonrigid objects and multiple constrained objects were presented. 6. Anatomy-Based Approaches Hyun et al. [66] presented a sweep-based approach to human body modeling and deformation. A rigid 3D human model, given as a polygonal mesh, is approximated with control sweep surfaces. The vertices on the mesh are bound to nearby sweep surfaces and then follow the deformation of the sweep surfaces as the model bends and twists its arms, legs, spine, and neck (see Figure 10(a)). Anatomical features including bone protrusion, muscle bulge, and skin folding are supported by a GPU-based collision detection procedure. The volumes of arms, legs, and torso are kept constant by a simple control using a volume integral formula for sweep surfaces. Zuo et al. [67] proposed a new method of muscle modeling based on both anatomical and real-time considerations. In the muscle modeling system, muscle can be constructed and edited easily through appointing some radial and transverse cross-section control parameters. Deformation of muscle model can be achieved through axial deformation and cross section’s deformation. The user can adjust the precision of models to meet different requirements. Nedel and Thalmann [68] proposed a method to simulate human bodies based on anatomy concepts. Their model is divided into three layers and presented in three steps: the concept of a rigid body from a real skeleton, the muscle design and deformation based on physical concepts, and skin generation. Muscles are represented at two levels: the action lines and the muscle shape. The action line represents the force produced by a muscle on the bones, while the muscle shapes used in the simulation consist of surface-based models. To physically simulate deformations, they used a mass-spring system with a new kind of springs, called “angular springs,” which were developed to control the muscle volume during simulation (see Figure 10(b)). Aubel and Thalmann [69] proposed a new, generic, multilayered model for automating the deformations of the skin of human characters based on physiological and anatomical considerations. Muscle motion and deformation are automatically derived from an action line that is deformed using a 1D mass-spring system. They cover the muscle layer with a viscoelastic fat layer that concentrates the crucial dynamics effects of the animation (see Figure 10(c)). Min et al. [70] proposed an anatomically based modeling and animation scheme for a human body model whose shape was created from 3D scan data of a human body. The proposed human body model is composed of three layers: a skeleton layer, a muscle layer, and a skin layer. The skeleton layer, represented as a set of joints and bones, controls the animation of the human body model. The muscle layer deforms the skin layer realistically during animation. They create the muscles in that layer using soft objects, also known as blobby objects or metaballs, and deform them through the insertion/origin points of the muscles and the volume-preserving constraints. To deform the skin layer during animation, they bind the skin layer to both the skeleton layer and the muscle layer by finding corresponding joints and muscles of the vertices on the skin layer. They applied the proposed scheme for modeling the upper limb and shoulder of human body (see Figure 10(d)). 7. Conclusion This paper attempts to provide a comprehensive survey of research on deformable human modeling and motion analysis and to provide some structural categories for the methods described in over 60 papers. This work can be generally categorized as 2D model, 3D surface model, and geometry-based, physics-based, and anatomy-based approaches. Compared with rigid motion analysis, the analysis of nonrigid and elastic articulated motion is still in its infancy at the current stage. The main difficulties in developing algorithms for human shape and motion analysis stem from the complex 3D nonrigid motions of human. The motivations of human motion analysis based on deformable models are driven by application areas such as medical imaging, biomedical applications, gesture recognition, choreography, video conferencing, material deformation studies, and image compression. Although over the last decade much progress has been made in human pose estimation based on deformable model, there remain a number of open problems and challenges. First, one important motivation of this research is to build a body surface model that properly describes human body deformation from a small number of parameters and human 3D shape analysis from 2D image sequences. Second, sports biomechanics analysis is made to determine temporal and spatial parameters, kinematic variables, and kinetic variables of human body. Sports biomechanics analysis has been restricted to rigid models. There is less research using nonrigid model. Image-based sports biomechanics analysis of deformable human body model includes volume, gravity center, force of gravity and moment of inertia determination, and kinetic and rotational dynamics analysis. Third, while tracking walking motions in semicontrolled settings is more or less reliable, robust tracking of arbitrary and highly dynamic motions is still challenging even in controlled setups. Fourth, tracking arbitrary motions in outdoor settings has been mostly unaddressed and remains one of the open problems in computer vision. Outdoor human tracking would be allowed to capture sport motions in their real competitive setup. Fifth, tracking people in the office or in the streets interacting with the environment is still an extremely challenging problem to be solved. We expect that novel schemes will be presented to deal with human motion analysis based on deformable models in the future. This research is supported by the National Science Foundation of China 61075031, 31270998, and 61173096, China Postdoctoral Science Foundation 2012M511321, Jiangsu Postdoctoral Science Foundation 1102169C, and National Science and Technology Support Program of China 2012BAZ04319. 1. M. Enzweiler and D. M. Gavrila, “Monocular pedestrian detection: survey and experiments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179–2195, 2009. View at Publisher · View at Google Scholar · View at Scopus 2. T. B. Moeslund and E. Granum, “A survey of computer vision-based human motion capture,” Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231–268, 2001. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 3. L. Wang, W. Hu, and T. Tan, “Recent developments in human motion analysis,” Pattern Recognition, vol. 36, no. 3, pp. 585–601, 2003. View at Publisher · View at Google Scholar · View at Scopus 4. G. Pons-Moll and B. Rosenhahn, “Model-based pose estimation,” in Model-Based Pose Estimation. Visual Analysis of Humans: Looking at People, pp. 139–170, Springer, 2011. 5. J. K. Aggarwal, Q. Cai, W. Liao, and B. Sabata, “Articulated and elastic non-rigid motion: a review,” in Proceedings of the Workshop on Motion of Non-Rigid and Articulated Objects, pp. 2–14, November 1994. View at Scopus 6. H. Tabia, M. Daoudi, J. P. Vandeborre, and O. Colot, “A new 3D-matching method of nonrigid and partially similar models using curve analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 4, pp. 852–858, 2011. View at Publisher · View at Google Scholar · View at Scopus 7. G. Mori and J. Malik, “Recovering 3D human body configurations using shape contexts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1052–1062, 2006. View at Publisher · View at Google Scholar · View at Scopus 8. A. Srivastava, E. Klassen, S. H. Joshi, and I. H. Jermyn, “Shape analysis of elastic curves in euclidean spaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 7, pp. 1415–1428, 2011. View at Publisher · View at Google Scholar · View at Scopus 9. X. Huang, N. Paragios, and D. N. Metaxas, “Shape registration in implicit spaces using information theory and free form deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1303–1318, 2006. View at Publisher · View at Google Scholar · View at Scopus 10. X. Liu, T. Yu, T. Sebastian, and P. Tu, “Boosted deformable model for human body alignment,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008. View at Publisher · View at Google Scholar · View at Scopus 11. O. Freifeld, A. Weiss, S. Zuffi, and M. J. Black, “Contour people: a parameterized model of 2D articulated human shape,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 639–646, June 2010. View at Publisher · View at Google Scholar · View at Scopus 12. S. Zuffi, O. Freifeld, and M. J. Black, “From pictorial structures to deformable structures,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3546–3553, 13. P. Guan, O. Freifeld, and M. J. Black, “A 2D human body model dressed in eigen clothing,” in Proceedings of the European Conference on Computer Vision, part 1, vol. 6311 of Lecture Notes in Computer Science, pp. 285–298, 2010. 14. S. I. Park and J. K. Hodgins, “Capturing and animating skin deformation in human motion,” ACM Transactions on Graphics, vol. 25, no. 3, pp. 881–889, 2006. View at Publisher · View at Google 15. J. K. Fayad, A. D. Bue, L. Agapito, and P. M. Q. Aguiar, “Human body modelling using quadratic deformations,” in Proceedings of the 7th EUROMECH Solid Mechanics Conference, pp. 1–19, Lisbon, Portugal, 2009. View at Publisher · View at Google Scholar 16. D. E. Hyun, S. H. Yoon, M. S. Kim, and B. Juttler, “Modeling and deformation of arms and legs based on ellipsoidal sweeping,” in Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, p. 204, 2003. 17. H. Pan and Y. Liu, “Motion estimation of elastic articulated objects from points and contours with volume invariable constraint,” Pattern Recognition, vol. 41, no. 2, pp. 458–467, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 18. D. Terzopoulos and D. Metaxas, “Dynamic 3D models with local and global deformations: deformable superquadrics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 7, pp. 703–714, 1991. View at Publisher · View at Google Scholar · View at Scopus 19. C. Sminchisescu, Estimation algorithms for ambiguous visual models—three dimensional human modeling and motion reconstruction in monocular video sequences [Ph.D. thesis], Inria, 2002. 20. M. Hofmann and D. M. Gavrila, “3D human model adaptation by frame selection and shape-texture optimization,” Computer Vision and Image Understanding, vol. 115, no. 11, pp. 1559–1570, 2011. View at Publisher · View at Google Scholar 21. A. Sundaresan and R. Chellappa, “Model driven segmentation of articulating humans in Laplacian Eigenspace,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1771–1785, 2008. View at Publisher · View at Google Scholar · View at Scopus 22. H. D. Yang and S. W. Lee, “Reconstruction of 3D human body pose from stereo image sequences based on top-down learning,” Pattern Recognition, vol. 40, no. 11, pp. 3120–3131, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 23. R. Matsuda and T. Nishita, “Modeling and deformation method of human body model based on range data,” in Proceedings of the International Conference on Shape Modeling and Applications, pp. 80–87, 272, 1999. View at Publisher · View at Google Scholar 24. J. F. Blinn, “A generalization of algebraic surface drawing,” ACM Transactions on Graphics, vol. 1, no. 3, pp. 235–256, 1982. View at Publisher · View at Google Scholar 25. D. Thalmann, J. Shen, and E. Chauvineau, “Fast realistic human body deformations for animation and VR applications,” in Proceedings of the Computer Graphics International (CGI '96), pp. 166–174, June 1996. View at Scopus 26. N. D'Apuzzo, R. Plänkers, P. Fua, A. Gruen, and D. Thalmann, “Modeling human bodies from video sequences,” in Electronic Imaging, Proceedings of SPIE, pp. 1–12, Bellingham, Wash, USA, 1999. View at Publisher · View at Google Scholar 27. P. Fua, A. Gruen, R. Plänkers, N. D’Apuzzo, and D. Thalmann, “Human body modeling and motion analysis from video sequences,” International Archives of Photogrammetry and Remote Sensing, vol. 32, pp. 866–873, 1998. 28. R. Plänkers and P. Fua, “Articulated soft objects for multiview shape and motion capture,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1182–1187, 2003. View at Publisher · View at Google Scholar · View at Scopus 29. M. L. Tong, Y. C. Liu, and T. Huang, “Recover human pose from monocular image under weak perspective projection,” in Proceedings of Lecture Notes in Computer Science, vol. 3766, pp. 102–111, Springer, Heidelberg, Germany, 2007. 30. M. Nahas, H. Huitric, and M. Saintourens, “Animation of a B-Spline figure,” The Visual Computer, vol. 3, no. 5, pp. 272–276, 1988. View at Publisher · View at Google Scholar · View at Scopus 31. S. B. Fu and X. G. Yuan, “The establishment of human body based on B-spline surface,” Chinese Journal of Computers, vol. 21, no. 12, pp. 1131–1135, 1998. 32. H. Y. Huang, F. H. Qi, and Z. H. Yao, “A leg motion modeling method based on NURBS free-form deformation,” Journal of Computer Research & Development, vol. 37, no. 6, pp. 697–702, 2000. 33. J. Sanchez-Reyes and J. M. Chacon, “Hermite approximation for free-form deformation of curves and surfaces,” Computer-Aided Design, vol. 44, no. 5, pp. 445–456, 2012. View at Publisher · View at Google Scholar 34. J. Z. Wang and Y. M. Jiang, “Three-dimension human body free form deformation method based on NURBS,” Computer Engineering and Applications, vol. 38, no. 17, pp. 95–97, 2002. 35. P. Huang, C. Budd, and A. Hilton, “Global temporal registration of multiple non-rigid surface sequences,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3473–3480, 2011. 36. I. C. Chang and S. Y. Lin, “3D human motion tracking based on a progressive particle filter,” Pattern Recognition, vol. 43, no. 10, pp. 3621–3635, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 37. M. Liao, Q. Zhang, H. Wang, R. Yang, and M. Gong, “Modeling deformable objects from a single depth camera,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 167–174, Kyoto, Japan, October 2009. View at Publisher · View at Google Scholar · View at Scopus 38. K. Varanasi, A. Zaharescu, E. Boyer, and R. Horaud, “Temporal surface tracking using mesh evolution,” Proceedings of the 10th European Conference on Computer Vision, part 2, pp. 30–43, 2008. View at Publisher · View at Google Scholar · View at Scopus 39. A. O. Balan and M. J. Black, “The naked truth: estimating body shape under clothing,” in Proceedings of the European Conference on Computer Vision, part 2, vol. 5303 of Lecture Notes in Computer Science, pp. 15–29, 2008. 40. J. Starck and A. Hilton, “Model-based human shape reconstruction from multiple views,” Computer Vision and Image Understanding, vol. 111, no. 2, pp. 179–194, 2008. View at Publisher · View at Google Scholar · View at Scopus 41. E. De Aguiar, C. Theobalt, C. Stoll, and H. P. Seidel, “Marker-less deformable mesh tracking for human shape and motion capture,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, June 2007. View at Publisher · View at Google Scholar · View at Scopus 42. C. C. L. Wang, Y. Wang, and M. M. F. Yuen, “View-dependent deformation for virtual human modeling from silhouettes,” in Proceedings of the IASTED International Conference Visualization, Imaging, and Image Processing, pp. 140–144, ACTA Press, Marbella, Spain, 2001. 43. H. Seo, F. Cordier, L. Philippon, and N. M. Thalmann, “Interactive modeling of MPEG-4 deformable human body models,” in Proceedings of the IFIP TC5/WG5. 10 Workshop on Deformable Avatars (Deform/ Avatars '00), pp. 120–131, 2000. 44. I. Kokkinos, M. Bronstein, R. Litman, and A. Bronstein, “Intrinsic shape context descriptors for deformable shapes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 159–166, 2012. 45. D. Raviv, A. M. Bronstein, M. M. Bronstein, R. Kimmel, and N. Sochen, “Affine-invariant diffusion geometry for the analysis of deformable 3D shapes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2361–2367, 2011. 46. U. Castellani, M. Cristani, and V. Murino, “Statistical 3D shape analysis by local generative descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2555–2560, 2011. View at Publisher · View at Google Scholar 47. I. Akhter, Y. Sheikh, S. Khan, and T. Kanade, “Trajectory space: a dual representation for nonrigid structure from motion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 7, pp. 1442–1456, 2011. View at Publisher · View at Google Scholar · View at Scopus 48. P. F. U. Gotardo and A. M. Martinez, “Computing smooth time-trajectories for camera and deformable shape in structure from motion with occlusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 2051–2065, 2011. View at Publisher · View at Google Scholar 49. D. Raviv, A. M. Bronstein, M. M. Bronstein, and R. Kimmel, “Full and partial symmetries of non-rigid shapes,” International Journal of Computer Vision, vol. 89, no. 1, pp. 18–39, 2010. View at Publisher · View at Google Scholar · View at Scopus 50. L. Zhu, Y. Chen, C. Lin, and A. Yuille, “Max margin learning of hierarchical configural deformable templates (HCDTs) for efficient object parsing and pose estimation,” International Journal of Computer Vision, vol. 93, no. 1, pp. 1–21, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 51. H. J. Cui, R. M. Wang, and Y. Li, “Parameterized model for virtual human deformation,” Journal of Fiber Bioengineering & Informatics, vol. 4, no. 4, pp. 371–381, 2011. View at Publisher · View at Google Scholar 52. Z. Liu and S. Shang, “Free-form deformation algorithm of human body model for garment,” in Proceedings of the International Conference on Computer Application and System Modeling (ICCASM '10), vol. 11, pp. V11602–V11605, October 2010. View at Publisher · View at Google Scholar · View at Scopus 53. M. Oshita and K. Suzuki, “Artist-oriented real-time skin deformation using dynamic patterns,” in Arts and Technology, vol. 30 of Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pp. 88–96, 2010. View at Publisher · View at Google Scholar 54. Q. G. Tian, X. J. Li, and B. Z. Ge, “Implementation of digital human modeling and skin deformation based on flexible model and multi-joints-binding method,” in Proceedings of the MIPPR Pattern Recognition and Computer Vision, vol. 7496 of Proceedings of the SPIE, pp. 74961K-1–74961K-6, 2009. 55. X. J. Zhou and Z. X. Zhao, “The skin deformation of a 3D virtual human,” International Journal of Automation and Computing, vol. 6, no. 4, pp. 344–350, 2009. View at Publisher · View at Google Scholar · View at Scopus 56. J. H. Shen, N. M. Thalmann, and D. Thalmann, “Human skin deformation from cross-sections,” in Proceedings of the Computer Graphics International, pp. 1–17, Melbourne, Australia, 1994. View at Publisher · View at Google Scholar 57. D. Smeets, J. Hermans, D. Vandermeulen, and P. Suetens, “Isometric deformation invariant 3D shape recognition,” Pattern Recognition, vol. 45, no. 7, pp. 2817–2831, 2012. View at Publisher · View at Google Scholar 58. M. Rumpf and B. Wirth, “An elasticity-based covariance analysis of shapes,” International Journal of Computer Vision, vol. 92, no. 3, pp. 281–295, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 59. K. Fundana, N. C. Overgaard, and A. Heyden, “Variational segmentation of image sequences using region-based active contours and deformable shape priors,” International Journal of Computer Vision, vol. 80, no. 3, pp. 289–299, 2008. View at Publisher · View at Google Scholar · View at Scopus 60. W. Mio, A. Srivastava, and S. Joshi, “On shape of plane elastic curves,” International Journal of Computer Vision, vol. 73, no. 3, pp. 307–324, 2007. View at Publisher · View at Google Scholar · View at Scopus 61. D. Cremers, “Dynamical statistical shape priors for level set-based tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1262–1273, 2006. View at Publisher · View at Google Scholar · View at Scopus 62. Y. M. Tang, “Modeling skin deformation using boundary element method,” Computer-Aided Design and Applications, vol. 7, no. 1, pp. 101–108, 2010. View at Publisher · View at Google Scholar · View at Scopus 63. H. Shin and N. I. Badler, “Modeling deformable human arm for constrained research analysis,” in Proceedings of the Digital Human Modeling Conference, pp. 217–228, 2002. 64. A. Pentland and B. Horowitz, “Recovery of nonrigid motion and structure,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 7, pp. 730–742, 1991. View at Publisher · View at Google Scholar · View at Scopus 65. P. Frogerais, J. Bellanger, and L. Senhadji, “Various ways to compute the continuous-discrete extended kalman filter,” IEEE Transactions on Automatic Control, vol. 57, no. 4, pp. 1000–1004, 2012. View at Publisher · View at Google Scholar 66. D. E. Hyun, S. H. Yoon, J. W. Chang, J. K. Seong, M. S. Kim, and B. Jüttler, “Sweep-based human deformation,” The Visual Computer, vol. 21, no. 8–10, pp. 542–550, 2005. View at Publisher · View at Google Scholar · View at Scopus 67. L. Zuo, J. T. Li, and Z. Q. Wang, “Anatomical human musculature modeling for real-time deformation,” in Proceedings of Computer Graphics, Visualization and Computer Vision, pp. 1–7, 2003. View at Publisher · View at Google Scholar 68. L. P. Nedel and D. Thalmann, “Anatomic modeling of deformable human bodies,” The Visual Computer, vol. 16, no. 6, pp. 306–321, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 69. A. Aubel and D. Thalmann, “Realistic deformation of human body shapes,” in Proceedings of the Computer Animation and Simulation, pp. 125–135, 2000. 70. K. H. Min, S. M. Baek, G. A. Lee, H. Choi, and C. M. Park, “Anatomically-based modeling and animation of human upper limbs,” in Proceedings of the International Conference on Human Modeling and Animation, pp. 1–14, IEEE Computer Society Press, Washington, DC, USA, 2000. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/mpe/2013/786749/","timestamp":"2014-04-16T06:10:04Z","content_type":null,"content_length":"109452","record_id":"<urn:uuid:6be3e4d9-ccff-439d-a878-fd84e31ffd43>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply This problem appeared: If the points (a,2a) (2a,a), (a,a)and enclose a triangle of area 18 sq units, the centroid of the triangle is ? Let's solve it with geogebra! 1)Create a slider called a and range it from 0 to 10. 2) Enter points (a,a),(a,2a)(2a,a). 3) Use the polygon tool to create a triangle with those 3 points as the vertices. 4) Get the midpoints of the 3 sides of the triangle using the midpoint tool. 5) Draw line segments from each vertices to the opposite midpoint. 6) Get the intersection of those 3 line segments and call it point G. That is the centroid. 7) Move the slider until the triangle has an area of 18. You should be able to get it exactly. 8) Read off the coordinates of the centroid. What did you get? If we assume a is positive is that the only answer?
{"url":"http://www.mathisfunforum.com/post.php?tid=17754&qid=218216","timestamp":"2014-04-20T01:01:13Z","content_type":null,"content_length":"16260","record_id":"<urn:uuid:fb6e143b-ada9-4dfc-8ce7-b9449d92053f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
The attempt to load metrics for this article has failed. The attempt to plot a graph for these metrics has failed. The schematic of the DTS. Each wire electrode is connected to a separate CSA. The entrance and exit grids are part of the shielding Faraday box around the array of electrodes. The COULOMB model of the DTS. There are four electrode planes with seven wires in each. The wires in planes 1 and 3 are along the y-axis and wires in planes 2 and 4 are along the x-axis. The origin of the coordinate system is placed at the center of the box. The kernel volume is shown in the middle and the dimensions are given in mm. The effect of the proximity of the wall on charge (Q 3, 5) induced on the closest wire that is located at x = 0 mm, y, z = −20 mm. The figure shows the induced charge as a function of the distance from the wall for a few different fixed x and z coordinates. The effect of the wall is the strongest, when the dust particle is in between two electrode planes (z = 0 mm). The bottom panel shows the induced charge relative to the undisturbed kernel results. The effect of the entrance grid and the shape of the correction function f (z p ). The figure shows the induced charge on the nearest electrode as a function of distance, with and without the wall effect included. The ratio of the two calculations defines the shape of the correction function f (z p ). Simulated induced charge signals from a dust particles with incidence angles θ x = −5.7° and θ y = 16.8°. The signals from the four closest wire electrodes in each plane are shown. The data are normalized to the dust charge, and the curves are staggered in the vertical direction for clarity. The vertical lines mark the positions of the four electrode planes. The effect of the limited size DTS model on the induced charge. The figure shows the induced charge on the closest electrode (Q 3, 5) as the particles moved between points (10, 10, −20) and (10, 10, 0). The model with 7 wire electrodes in each plane yields somewhat smaller induced charge signals. The bottom panel shows the ratio of the signals from the two models. The convergence of the χ2-minimum with increasing number of iteration steps. The different lines correspond to different data analysis runs. The simulated signal is for a 5.21 km/s velocity particle with added white noise (QNR = 10), see Sec. V for details. The dust particle moves from point (−5.5, −10, −100) to point (32, −10, 100), which corresponds to θ x = 10.62° and θ y = 0°. Uncertainty of the parameters determined from the analysis as a function of QNR. The two outliner points (diamond and cross) are discussed in details in the text. The schematics of the charge sensitive electronics (CSA) integrated into the laboratory version. An example of the DTS data and the best fit provided by the analysis (thick smooth lines). Signals from the eight electrodes closest to the path of the dust particle are shown. The calculated particle parameters are: Q = 15.77 fC, v = 4.57 km/s, θ x = 0.086°, and θ y = 0.98°. The curves are staggered in the vertical direction for clarity. Results of the analyses of the data for 0° and 10° incident angles. The panels show the (a) charge, (b) speed, (c) incident angle θ x , and (d) incident angle θ y . The shaded areas indicate +/−1° deviations from the mean value (outliner points excluded).
{"url":"http://scitation.aip.org/content/aip/journal/rsi/82/10/10.1063/1.3646528","timestamp":"2014-04-17T22:09:36Z","content_type":null,"content_length":"85410","record_id":"<urn:uuid:864567bf-acb4-4fd4-b09e-62f0be5801af>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
RELATIVITY QUESTIONS! (and other common queries) Re: RELATIVITY QUESTIONS! (and other common queries) I recently found out about Hamiltonian mechanics and so have been playing around with it a fair bit. In doing so, I decided to have a go at a relativistic version (although it possibly isn't correct as I am in no way sure that Hamilton's equations should still hold (as I assumed they did) in a relativistic context) by using a relativistic form for T(p). I checked that this returned the velocity when the derivative with respect to p was taken and it did, correctly (I believe), return the momentum divided by being given as a function of I then thought I'd have a look at a relativistic harmonic oscillator using I was able to reduce this to the equation (for some reason, the equations seemed much easier to solve for the momentum than the position): \ddot p^2 + \frac{p^2\ddot p^2}{m^2c^2} - \frac{k^2p^2}{m^2}=0 Which bears an obvious resemblance to the classical form with a relativistic correction of \frac{p^2 \ddot p^2}{m^2c^2} which will, as expected, produce large deviations for large momenta as well as high jerk situations and, according to wolfram alpha, the solutions appear to be approximately sinusoidal (which is good as it should approximate the classical sinusoidal harmonic oscillator at low momenta). That said, I am unable to reduce this to a closed form (and, as the equation is non-linear, it appears that in all likelihood, no such form exists) and, not being familiar with the techniques of differential equations at all, am unable to plot the solutions (I am particularly interested in what happens to the family of \ddot p(0)=0 curves as becomes close to m\gamma c All of this led to me to wonder the following: 1. Do hamilton's equations still hold for relativistic physics (with an appropriate hamiltonian)? 2. If anyone could give me a clue as to how to plot the solutions to the differential equation (thinking about it physically, I think that it should be triangle wave in the limit of large and a sine wave in the limit of small 3. If anyone had any idea why it might be (or at least seem) easier to solve for momentum than position. Last edited by eSOANEM on Wed Jan 11, 2012 4:49 pm UTC, edited 1 time in total. Gear wrote:I'm not sure if it would be possible to constantly eat enough chocolate to maintain raptor toxicity without killing oneself. Magnanimous wrote:The potassium in my body is emitting small amounts of gamma rays, so I consider myself to have nuclear arms. Don't make me hug you. Re: RELATIVITY QUESTIONS! (and other common queries) doogly wrote:At this level people are talking cosmologicly, and there, we are gonna assume the universe is homogenous and isotropic. The same for all points (in space, anyway - expanding in time, and if you have drunk enough kool aid at the GR font to get upset by picking out a preferred time direction, we just imagine that all matter that exists in the world can be treated like a single fluid, and it has a rest frame, so that does make things rather special), and the same in all directions. So, if you are homogeneous and isotropic, there isn't a lot you can vary. You are going to be a space of constant curvature. Positive, negative or zero. Sphere, hyperbola or flat. yurell wrote:I suppose the easiest two dimensional analogues are: □ Positive: A sphere □ Zero: A plane □ Negative: A saddle Picturing curvature in more than two dimensions is ... really difficult for me, so I just stick with what the maths says. Okay, positive is a sphere, and zero is flat, those two work. The one that I've never understood is negative curvature. If I follow my intuition, it tells me that making the curvature negative should just fold the universe back into a sphere, only this time one with us on the inside. But to us, it should be indistinguishable from the positive curvature sphere. Why is this not the case? Kewangji wrote:Posdy zwei tosdy osdy oady. Bork bork bork, hoppity syphilis bork." One day, I'm going to come home and find you lying on the floor. I'll ask what's wrong and you'll say "It finished...he stopped updating...it's over..." No-one does a slice like Big Rico's. No-one. Re: RELATIVITY QUESTIONS! (and other common queries) Carlington (The Aussie) wrote:Okay, positive is a sphere, and zero is flat, those two work. The one that I've never understood is negative curvature. If I follow my intuition, it tells me that making the curvature negative should just fold the universe back into a sphere, only this time one with us on the inside. But to us, it should be indistinguishable from the positive curvature sphere. Why is this not the case? Here's a picture of negative curvature. The really tricky part is figuring out how a surface that has negative curvature at every point could curve back onto itself. It doesn't seem possible in three dimensions but might be in four (or more). Re: RELATIVITY QUESTIONS! (and other common queries) OK, 2 dimensions. You're on the surface of a sphere, like the earth. The curvature is +R^2. So you say to yourself, hmmm, what if I just changed my perspective, and declare that I am inside this sphere. What happens to the curvature now? Ah, well the principal curvature along this here latitude direction is now -R! And along this longitude is -R! Negative waaaaait..... The curvature is still +R^2! In terms of arithmetic, this fact is just due to the product of negative numbers being a positive number. In terms of geometry, what happens is that you haven't actually *done* anything. The sphere doesn't care where you're standing. Higher dimensions remain a bit trickier, naturally. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) 2D surfaces have 4 possibilities: • ++ (positive curvature) • +- (negative curvature) • -+ (negative curvature) • -- (positive curvature) 3D surfaces have 8 possibilities: • +++ (positive curvature) • ++- (negative curvature) • +-+ (negative curvature) • +-- (negative curvature) • -++ (negative curvature) • -+- (negative curvature) • --+ (negative curvature) • --- (positive curvature) When you get to 4D surfaces, things get strange: • ++++ (positive curvature) • +++- (negative curvature) • ++-+ (negative curvature) • ++-- (???) • +-++ (negative curvature) • +-+- (???) • +--+ (???) • +--- (negative curvature) • -+++ (negative curvature) • -++- (???) • -+-+ (???) • -+-- (negative curvature) • --++ (???) • --+- (negative curvature) • ---+ (negative curvature) • ---- (positive curvature) Re: RELATIVITY QUESTIONS! (and other common queries) And zeros exist as options as well. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) In my head, I'm working with two dimensions, because that's how I usually come to terms with these sorts of things before I extend them. I'm doing that by envisaging a flat plane, say a piece of paper. If we give it positive curvature, we're pulling the edges up as much as we can, and eventually they're going to touch each other, because they just keep on curving. For negative curvature, I'm doing the same thing, only pushing all the edges down, like a mirror image. Is this where I'm coming unstuck? Kewangji wrote:Posdy zwei tosdy osdy oady. Bork bork bork, hoppity syphilis bork." One day, I'm going to come home and find you lying on the floor. I'll ask what's wrong and you'll say "It finished...he stopped updating...it's over..." No-one does a slice like Big Rico's. No-one. Re: RELATIVITY QUESTIONS! (and other common queries) Yes it is. Negative curvature (for a two dimensional surface), is like pushing one set of opposite corners of the square and pulling the other two (which is why you get the saddle shape in the diagrams people have shown). Gear wrote:I'm not sure if it would be possible to constantly eat enough chocolate to maintain raptor toxicity without killing oneself. Magnanimous wrote:The potassium in my body is emitting small amounts of gamma rays, so I consider myself to have nuclear arms. Don't make me hug you. Re: RELATIVITY QUESTIONS! (and other common queries) The intersection of the saddle with the planes that bisect the saddle ("planes of principle curvatures" in the diagram linked), and any plane parallel to these, are hyperbolas and parabolas. If you visualize these, you should be able to see how they don't "wrap around". Remember your conic sections if you need a little help. Last edited by thoughtfully on Wed Jan 11, 2012 8:22 pm UTC, edited 1 time in total. Re: RELATIVITY QUESTIONS! (and other common queries) thoughtfully wrote:The intersection of the saddle with the planes that bisect the saddle ("planes of principle curvatures" in the diagram linked) are a hyperbola and a parabola. If you visualize these, you should be able to see how they don't "wrap around". Remember your conic sections if you need a little help. In the diagram shown, they are conic sections. If 3D space is a "surface" in 4D (or higher), it need not be a conic section and it may wrap around in ways that are hard to imagine but mathematically Re: RELATIVITY QUESTIONS! (and other common queries) Ok, but intuitions aren't going to be useful there. Out of curiousity, is there an analogue to conic sections for quartic surfaces? Can they be classified as intersections of 3-space with a "hypercone"? Hypercones aren't all that much harder to visualize than hyperspheres or hypercubes or simplices. Then it's one more step up in dimensions to the shape of space-time, yeah? Re: RELATIVITY QUESTIONS! (and other common queries) oooooh let's be careful. So, what does the word "hyperbolic" mean. Who are you talking to? Maybe you are big on conic sections, and when you go to generalize to higher dimensions, you want to talk about intersections and things, and you are an algebraic geometer. But in the sense we have been using it, it is a differential geometry phrase. Hyperbolas are the "trope namer," but a given manifold of negative curvature isn't a hyperbola necessarily. And if you go to 3 manifolds, you can have a manifold which is hyperbolic but also compact. That is not at all like what you have in 2D. Compact hyperbolic manifolds are very exciting though. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) doogly wrote:OK, 2 dimensions. You're on the surface of a sphere, like the earth. The curvature is +R^2. So you say to yourself, hmmm, what if I just changed my perspective, and declare that I am inside this sphere. What happens to the curvature now? Ah, well the principal curvature along this here latitude direction is now -R! And along this longitude is -R! Negative waaaaait..... The curvature is still +R^2! In terms of arithmetic, this fact is just due to the product of negative numbers being a positive number. In terms of geometry, what happens is that you haven't actually *done* anything. The sphere doesn't care where you're standing. Higher dimensions remain a bit trickier, naturally. So at any point on a negatively curved surface the curvatures on your latitude and longitude directions have opposite signs, ie the curves in one direction are concave, but the orthogonal curves are convex. Here's an image of a pseudosphere, which is a surface of constant negative curvature (apart from the singularity at the equator). OTOH, thinking about curved spaces by considering their properties when embedded in a higher dimensional flat space isn't necessarily that helpful. In some ways, it's better to avoid invoking an embedding space, since it may have no actual physical existence. The alternative approach is to look at the metric properties of the curved space. A simple way to do that is to examine the properties of triangles. In a flat space, the sum of the angles of any triangle always equals pi. In a positively curved space, the angle sum of a triangle is greater than pi, and in a negatively curved space, the angle sum of a triangle is less than pi. In both curved cases, if the space is of uniform curvature then the area of a triangle in that space is proportional to the absolute value of the difference between the angle sum and pi. Re: RELATIVITY QUESTIONS! (and other common queries) This comes from this game. http://images.4channel.org/f/src/gravity.swf See that sine and curl going off to the lower left? Could that really happen? Happen in space? Oh wait, of course it can. The moon makes the Earth wobble too. Last edited by Nosforit on Mon Jan 16, 2012 8:25 pm UTC, edited 1 time in total. Re: RELATIVITY QUESTIONS! (and other common queries) That looks like two orbiting bodies heading off into the void together. As a guess, the sin wave is heavier than the curl-wave. The curl-wave sometimes goes in a retrograde direction, before it is pulled forward by the heavier sin-wave body. But that is just a guess. One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total. Re: RELATIVITY QUESTIONS! (and other common queries) I was able to replicate it with just a few dozen tries, so unless I have a magic touch it should be pretty common in physical space. Unless tidal forces shred them? IIRC retrograde meant that a celestial object appered to move backwards, and did not physically do so... Re: RELATIVITY QUESTIONS! (and other common queries) It can mean either. If you're talking to astrologers for some reason, something is retrograde if it looks like it's moving backwards relative to other stuff. If you're talking to astronomers, it means it's actually moving backward relative to other stuff. Treatid basically wrote:widdout elephants deh be no starting points. deh be no ZFC. (If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome) Re: RELATIVITY QUESTIONS! (and other common queries) "Retrograde" is also sometimes used in historical contexts to describe the apparent backward motion of planets. Of course in Ptolemaic cosmology it was true retrograde motion, so I guess the line is blurred there. Re: RELATIVITY QUESTIONS! (and other common queries) Since Newton's gravitation is supposed to be an approximation of Relativity, can anyone explain to me where does the value of the gravitational constant comes from? Is it still a free variable, or does it pop out elegantly from GR? ...And that is how we know the Earth to be banana-shaped. Re: RELATIVITY QUESTIONS! (and other common queries) Still a free parameter. That and the cosmological constant are the only two in GR though, which is nice. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) doogly wrote:Still a free parameter. That and the cosmological constant are the only two in GR though, which is nice. It was nicer when they thought the cosmological constant was zero. Re: RELATIVITY QUESTIONS! (and other common queries) It has no more reason to be zero than any other number. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) doogly wrote:It has no more reason to be zero than any other number. That's only true in a theory that incorporates the constant in the first place. The constant doesn't come out of the original theory, it was Einstein's reaction to its prediction of an unstable From a quantum perspective it was MUCH nicer, since it is hard to figure out why the value is so small yet nonzero. Re: RELATIVITY QUESTIONS! (and other common queries) You have to have the cosmological constant. You are writing down the Lagrangian, you include all terms up to a certain order. Einstein didn't include it, but there was no reason not to. Essentially he just figured it should be zero; it wasn't *necessary.* But so what? Making things 0 in quantum mechanics is just as hard as small things. Unless of course you are willing to just wave your hands and mumble something about symmetry, then 0 is easy to justify. But if you want these symmetries to be in some sense natural and not entirely ad hoc, you still have some work to do to get 0 cosmological constant. I haven't seen any compelling arguments why that should LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) Hi. I have a slightly-higher-but-not-much-higher-than-A-Level physics understanding. I understand that when you are in motion, you experience time at a slower rate than someone who is not in motion, or who is travelling more slowly than you. However, I have been thinking about this recently, and I run into a problem when I consider all motion as relative (Since we don't have any absolute reference point to use). My question is this - Suppose a spaceship travels away from a planet at, say, 0.2 the speed of light, and then returns. Since it's been going fast, time has passed more slowly on board, so an atomic clock on the ship will have counted less time than an atomic clock on the planet. That's taking the planet as our reference point. Why can't we take the ship as our reference point? From that perspective, the planet is travelling away at 0.2 the speed of light, and then returning. Surely that means the planet (having been going fast) will have experienced less time than the ship, and the atomic clock on the planet will have counted less time than the clock on the ship. Am I missing something fundamental here? I've tried to be concise with my question, but if it's not clear I can try rephrasing. "Analogies in writing are like feathers on a snake." "Exaggeration is a billion times worse than understatement." Never Forget Re: RELATIVITY QUESTIONS! (and other common queries) Yes, you are correct in the framework of special relativity; both of them experience less time than the other. However, this is fixed by general relativity, where the ship's acceleration is not relative, and has the effect of speeding up external clocks. This means that the ship objectively experiences less time than the planet. ...And that is how we know the Earth to be banana-shaped. Re: RELATIVITY QUESTIONS! (and other common queries) Ah, thankyou. "Analogies in writing are like feathers on a snake." "Exaggeration is a billion times worse than understatement." Never Forget Re: RELATIVITY QUESTIONS! (and other common queries) An interesting case which seems paradoxical which is closely related is if you have two space ships which leave a planet going in different directions. Both of these ships will see the other ship's clock moving slower than there own. For an intuitive justification for why the spaceship experiences less time rather than the earth doing so (in the original situation with one ship leaving a planet and returning), the spaceship ends up in the planet's reference frame so it makes sense that that reference frame should be more correct than the ship's one when talking about the time experienced (also, the earth is in an inertial frame (meaning SR is valid for it) whereas the ship is not (meaning SR is not valid for it)). Gear wrote:I'm not sure if it would be possible to constantly eat enough chocolate to maintain raptor toxicity without killing oneself. Magnanimous wrote:The potassium in my body is emitting small amounts of gamma rays, so I consider myself to have nuclear arms. Don't make me hug you. Re: RELATIVITY QUESTIONS! (and other common queries) SR can handle acceleration just fine. It is the flat space limit of GR. Non inertial frames are just different coordinate systems. They're just as easy in SR as, say, a rotating one is in Newtonian mechanics. Which is to say, not entirely easy, you have to do a little work, but it's not a theory or system of mechanics. It's just new coordinates. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) I was under the impression that, under GR, mach's principle holds so gravitating frames are identical to accelerating frames. If this is the case, surely any predictions of SR with accelerating frames would be on shaky ground should the accelerations be sufficiently large or for the effects to be measured across a sufficiently large length scale? Gear wrote:I'm not sure if it would be possible to constantly eat enough chocolate to maintain raptor toxicity without killing oneself. Magnanimous wrote:The potassium in my body is emitting small amounts of gamma rays, so I consider myself to have nuclear arms. Don't make me hug you. Re: RELATIVITY QUESTIONS! (and other common queries) You can easily observe the effect by following the motion of the ship in special relativity, assuming instantaneous acceleration. The ships starts in the Earth's Frame at (0,0,). Boost this to the moving ship's frame, travelling at velocity v (still at (0,0)) The ship travels for some time t in its own reference frame, so it's located at (t,x) Now perform the Lorentz transform to get it travelling back towards the Earth, instead of away, and you'll notice something incredibly interesting — the act of turning suddenly adds a huge amount of time to the clocks on Earth. This is because acceleration is not relative in relativity; you can always tell when you're accelerating, independent of any other object in the Universe. I do suggest doing the maths, though, starting at the Earth, boosting to the frame v, travelling some (t,x), boosting back to the initial frame, boost again to -v (turning around) and coming back, and you realise the solution to this problem (the Twins Paradox) simply falls out of the maths cemper93 wrote:Dude, I just presented an elaborate multiple fraction in Comic Sans. Who are you to question me? Pronouns: Feminine pronouns please! Re: RELATIVITY QUESTIONS! (and other common queries) doogly wrote:You have to have the cosmological constant. You are writing down the Lagrangian, you include all terms up to a certain order. Einstein didn't include it, but there was no reason not to. Essentially he just figured it should be zero; it wasn't *necessary.* But so what? I don't see why that isn't sufficient justification. The cosmological constant has no more inherent right to exist than the divergence of the magnetic field. At the time, there was no reason to believe vacuum energy existed. There still isn't a great reason to believe magnetic monopoles exist. You make your theory no more complicated than it needs to be. Making things 0 in quantum mechanics is just as hard as small things. Unless of course you are willing to just wave your hands and mumble something about symmetry, then 0 is easy to justify. But if you want these symmetries to be in some sense natural and not entirely ad hoc, you still have some work to do to get 0 cosmological constant. I haven't seen any compelling arguments why that should be. Quantum field theory naively predicts a constant over 100 orders of magnitude too large. In such cases, of course it is easier to search for symmetries that make it zero than to tune the parameters to make it extremely small but nonzero. I don't understand how it could be otherwise. Re: RELATIVITY QUESTIONS! (and other common queries) The equivalence of gravity and acceleration is pointlike. So special relativity would be all you need if you did everything where the acceleration was just 9.8 m/s^2, and you never left that regime, looked for tidal effects, or what have you. That calculation, "100 orders of magnitude off," is with a naive cutoff at the planck scale. The actual answer is infinite, and there's no reason to put in a cutoff. But if you do, at the planck scale, then you get... not a right answer. So, maybe that was the wrong thing to do? Cutoffs are just awful ways to do QFT. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) That calculation, "100 orders of magnitude off," is with a naive cutoff at the planck scale. Its also from using rough estimates for the vacuum expectation of the Higg's field/potential energy of the Higgs field (and the chiral condensate of QCD). Even if you figure out a better method to deal with vacuum fluctuations, you still run into problems from the non-zero vacuum expectation values. Thats why SUSY forces exactly 0- the constraints on the super-potential guarantee the potential energy is 0, even with a non-zero vev. Re: RELATIVITY QUESTIONS! (and other common queries) So since we observe a nonzero cosmological constant, I'm putting that in the "con" column for SUSY ; ) LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: RELATIVITY QUESTIONS! (and other common queries) Indeed. Its also a clear problem with the higgs mechanism that we don't talk about very often. Of course, someone who likes SUSY would suggest that we obviously know it is broken- perhaps leading to a small cosmological constant. Re: RELATIVITY QUESTIONS! (and other common queries) (Not entirely sure this goes here, but it's kinda on-topic) I've got a couple of questions, since I was recently trying to calculate (out of interest) various things about pulsar periods. 1. Does anyone have any estimate of how many pulsars are in the Milky Way? 2. I tried assuming that all pulsars have a random period between 10 and 1000ms, (I couldn't find real data) and then calculating the lowest common multiple of the lot of them, so I could see the "period" of the entire galaxy. However, I always got an answer between 10^28 and 10^32ms, even with a "galaxy" consisting of only 100 pulsars. (At minimum, about 50k times longer than the universe has existed) Am I doing something wrong here, or is that actually how periods work? ...And that is how we know the Earth to be banana-shaped. Re: RELATIVITY QUESTIONS! (and other common queries) I don't know about part 1, but for part 2... two periods will only even have a LCM if their ratio is rational. If one is, like 1ms, and another is sqrt(2) ms, then they simply don't have an LCM. And even if they are co-rational, the LCM can be huge even if the two numbers are small... if the ratio of the two numbers is p/q, in least terms, then the LCM is pq times their GCD (or, equivalently, p times the first number, or q times the second number)... So numbers that don't form a nice simple ratio means a huge LCM. Picking random numbers "properly" will almost certainly give you numbers which aren't co-rational... picking random numbers with a computer will always give you rational numbers, for all the usual systems, but liable to give you numbers where pq is huge. While no one overhear you quickly tell me not cow cow. but how about watch phone? Re: RELATIVITY QUESTIONS! (and other common queries) Oops, I meant to specify that the periods were an integer number of milliseconds. The calculation went something along the lines of: Code: Select all l = [random.randint(10, 1000) for i in range(100)] reduce(lcm, l) And I suppose on an abstract level I knew that the lcm would generally come out quite large, but I generally wasn't expecting it to be quite as large as it was. Am I doing the right calculation, even if the answer isn't what I was expecting? ...And that is how we know the Earth to be banana-shaped. Re: RELATIVITY QUESTIONS! (and other common queries) There are ~145 primes < 1000. (very approx) If you only sampled the primes, after 100 you'd have about half of them. Now, because you are also sampling composite numbers, you'll over-select small primes and under-select big primes. And while a given number can add more than one prime to your list, this will happen more often with small primes. The other thing that LCM cares about is prime powers. So these are all complications. But, because I'm lazy, I'll just use the "sampling primes" case to get an initial spitball estimate. The product of the first 145 primes is known as 145#. We only have about half of them, so we'll take the sqrt. Now, n# is bounded below by e^(n ln n) I believe -- so this is on the order of e^361 =~ 10^156. That's big. So I'm vaguely surprised the result is that small. I guess the dominance of the low-order values ends up having a large impact, not surprisingly. If we ignore prime powers, the odds of getting a new prime end up being the product of (p-1)/p, which sadly is not a function which has been researched much to my knowledge.[1] You'll note that this value is a function of your temporal resolution, or how close you want it to get. It is sort of fractal that way -- the more you zoom in, the bigger it gets. [1] I kid. Last edited by Yakk on Fri Apr 13, 2012 2:47 pm UTC, edited 2 times in total. One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
{"url":"http://forums.xkcd.com/viewtopic.php?p=2860614","timestamp":"2014-04-18T15:40:09Z","content_type":null,"content_length":"114236","record_id":"<urn:uuid:e326ba1c-2e77-41cb-8dd3-551b302c626e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
"high school" algebra -> relativistic conservation of momentum and energy Generally, it's a good idea when solving these types of problems to avoid using [itex]\gamma[/itex] and velocities if you can avoid them. Work with energy, momentum, and mass instead. Doing so usually makes the algebra simpler. I think your best approach is to solve for the electron's energy. Once you have that, you can calculate its momentum. You can rewrite your equations as follows: [tex]p_0 c - p c = E_e - m c^2[/tex] [tex]p_0 + p = p_e[/tex] Multiply the second equation by c, square both equations, then subtract the second one from the first, and use the relation [itex]E^2 - (pc)^2 = (mc^2)^2[/itex] to simplify what you get. Figure out how to eliminate p from the equation and solve for [itex]E_e[/itex].
{"url":"http://www.physicsforums.com/showthread.php?t=377622","timestamp":"2014-04-19T12:35:54Z","content_type":null,"content_length":"41045","record_id":"<urn:uuid:a66cd350-1eef-4b03-8a86-3121fb1d6c56>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Are you ready to retire? Mathematical models estimate the value of pension plans October 21st, 2013 in Other Sciences / Mathematics This cartoon relates to pensions and early retirement. Credit: (c) Fran, Jantoo.com This cartoon relates to pensions and early retirement. Credit: (c) Fran, Jantoo.com There comes a time in each of our lives when we consider starting a pension plan –either on the advice of a friend, a relative, or of our own volition. The plan of choice may depend on various factors, such as the age and salary of the individual, number of years of expected employment, as well as options to retire early or late. One possible plan is a defined pension plan, where the benefit amount is typically based on the employee's number of years of service at the time of retirement and the salary and/or average salary over an employment period. For instance, the employee may receive a fraction of the average salary during a certain number of years. In a paper published last month in the SIAM Journal on Applied Mathematics, authors Carmen Calvo-Garrido, Andrea Pascucci, and Carlos Vázquez present a partial differential equation (PDE) model governing the value of a defined pension plan including the option for early retirement. "The employer bears the liability of the pension and the value of this liability is understood as the value of the pension plan," says author Carlos Vázquez. "It is important to develop mathematical models to compute the value of this liability in order to estimate the financial situation of the institution or company that has the obligation with the pension plan member." The analysis in the paper uses modeling tools similar to those used in quantitative finance, for instance, for pricing American options. The model assumes that the wage or salary of an employee at any given time is governed by a stochastic differential equation, which in turn depends on the time of recruitment, current salary of the employee and age of entry. Uncertainty of the salary is assumed to depend only on volatility, which refers to the uncertainty or risk associated with a value or asset. "Models need to reproduce the uncertainties associated with the underlying factors of the plan (salary, interest rate and so on) and should allow one to compute the pension plan price in order to reproduce situations in different scenarios," author Andrea Pascucci explains. The authors obtain the value of a defined benefit pension plan including the option for early retirement for the employee, thus computing the value of the pension plan as well as the region of early retirement. "If the pension plan incorporates the option of early retirement by the member, then the additional question arises: when is it optimal to retire? Mathematical modeling tools allow us to pose the problem in terms of partial differential equations," says Vázquez. The optimal retirement problem is a "free boundary problem" for the underlying PDE. Most applications of PDEs involve domains with boundaries, and certain boundary conditions need to be satisfied in order to solve the PDEs. Free boundary problems deal with solving PDEs where part of the boundary is unknown in advance, referred to as a free boundary. Thus, in addition to standard boundary conditions, an additional condition must be imposed at the free boundary. The free boundary in this problem is the optimal retirement boundary between the region where it is optimal to retire and the region where it is optimal to continue working. "The practical solution of the PDE model to obtain pension plan prices from the data requires the use of suitable numerical algorithms to be run on a computer," says author M. Carmen Calvo-Garrido. "From the numerical solutions, we can identify at each date, for a given salary and average salary, if it is optimal to retire or not, and also to obtain the value of the pension plan in any case." Mathematical analysis provides rigorous justification of the correctness of the model, also proving the expected qualitative properties. Future directions may involve the application of similar modeling techniques to study the evolution of wages and salaries. "We are working on a more complete model for salaries evolution that includes the possibility of jumps (due to economic crisis, sudden increase or decrease in salaries, etc)," says Vázquez. "PDE problems including realistic, stochastic interest rate models also present a very challenging topic. The calibration of model parameters is an interesting and difficult problem due to the need of suitable real data." More information: Mathematical Analysis and Numerical Methods for Pricing Pension Plans Allowing Early Retirement, M. Carmen Calvo-Garrido, Andrea Pascucci, and Carlos Vázquez, SIAM Journal on Applied Mathematics, 73(5), 1747-1767 (Online publish date: September 4, 2013). epubs.siam.org/doi/abs/10.1137/120864751 Provided by Society for Industrial and Applied Mathematics "Are you ready to retire? Mathematical models estimate the value of pension plans." October 21st, 2013. http://phys.org/news/2013-10-ready-mathematical-pension.html
{"url":"http://phys.org/print301593725.html","timestamp":"2014-04-21T10:49:21Z","content_type":null,"content_length":"10055","record_id":"<urn:uuid:44e9d72f-d343-4779-8bee-7c5289cecd1c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Explicit substitutions and programming languages - CSL 2007 , 2007 "... Calculi with explicit substitutions (ES) are widely used in different areas of computer science. Complex systems with ES were developed these last 15 years to capture the good computational behaviour of the original systems (with meta-level substitutions) they were implementing. In this paper we fi ..." Cited by 6 (1 self) Add to MetaCart Calculi with explicit substitutions (ES) are widely used in different areas of computer science. Complex systems with ES were developed these last 15 years to capture the good computational behaviour of the original systems (with meta-level substitutions) they were implementing. In this paper we first survey previous work in the domain by pointing out the motivations and challenges that guided the development of such calculi. Then we use very simple technology to establish a general theory of explicit substitutions for the lambda-calculus which enjoys fundamental properties such as simulation of one-step beta-reduction, confluence on metaterms, preservation of beta-strong normalisation, strong normalisation of typed terms and full composition. The calculus also admits a natural translation into Linear Logic’s proof-nets. - In [41] (2000 , 2000 "... This paper extends the recent work [CMT00] on the operational semantics and type system for a core language, called MiniML ref , which exploits the notion of closed type (see also [MTBS99]) to safely combine imperative and multi-stage programming. The main novelties are the identification of a larg ..." Cited by 4 (3 self) Add to MetaCart This paper extends the recent work [CMT00] on the operational semantics and type system for a core language, called MiniML ref , which exploits the notion of closed type (see also [MTBS99]) to safely combine imperative and multi-stage programming. The main novelties are the identification of a larger set of closed types and the addition of a binder for useless variables. The resulting language is a conservative extension of MiniML ref , a simple imperative subset of SML. 1 - Logical Methods in Computer Science "... Vol. 5 (3:1) 2009, pp. 1–29 ..." - INFORM. AND COMPUT , 2007 "... We present a simple term calculus with an explicit control of erasure and duplication of substitutions, enjoying a sound and complete correspondence with the intuitionistic fragment of Linear Logic’s proof-nets. We show the operational behaviour of the calculus and some of its fundamental properties ..." Cited by 3 (2 self) Add to MetaCart We present a simple term calculus with an explicit control of erasure and duplication of substitutions, enjoying a sound and complete correspondence with the intuitionistic fragment of Linear Logic’s proof-nets. We show the operational behaviour of the calculus and some of its fundamental properties such as confluence, preservation of strong normalisation, strong normalisation of simply-typed terms, step by step simulation of β-reduction and full composition. "... Abstract. In a previous paper [2] which appeared in the volume celebrating Klop’s 60th anniversary, we presented a labeled lambda-calculus to characterize the dag implementation of the weak lambda-calculus as described in Wadsworth’s dissertation [11]. In this paper, we simplify this calculus and pr ..." Cited by 2 (0 self) Add to MetaCart Abstract. In a previous paper [2] which appeared in the volume celebrating Klop’s 60th anniversary, we presented a labeled lambda-calculus to characterize the dag implementation of the weak lambda-calculus as described in Wadsworth’s dissertation [11]. In this paper, we simplify this calculus and present a simpler proof of the sharing property which allows the dag implementation. In order to avoid duplication of presentations, we mainly show here the modifications brought to the weak labeled lambda-calculus in [2]. The reader is therefore recommended to read first the companion article and later read our present paper. We are happy that this note can therefore be considered as establishing a new bridge between two friends and now senior colleagues, Jan Willem Klop and Henk
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=471714","timestamp":"2014-04-16T16:23:14Z","content_type":null,"content_length":"22147","record_id":"<urn:uuid:83d115ff-7d1c-426e-95a1-8b0def1540c4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
iir_bessel (MathScript RT Module Function) LabVIEW 2011 MathScript RT Module Help Edition Date: June 2011 Part Number: »View Product Info Owning Class: filter design Requires: MathScript RT Module [b, a] = iir_bessel(n, w) [b, a] = iir_bessel(n, [w1, w2]) [b, a] = iir_bessel(n, w, option) [b, a] = iir_bessel(n, [w1, w2], 'stop') [z, p, k] = iir_bessel(n, w) [z, p, k] = iir_bessel(n, [w1, w2]) [z, p, k] = iir_bessel(n, w, option) [z, p, k] = iir_bessel(n, [w1, w2], 'stop') [as, bs, cs, ds] = iir_bessel(n, w) [as, bs, cs, ds] = iir_bessel(n, [w1, w2]) [as, bs, cs, ds] = iir_bessel(n, w, option) [as, bs, cs, ds] = iir_bessel(n, [w1, w2], 'stop') Legacy Name: besself Designs an analog Bessel filter. If you specify w, this function generates a lowpass filter of order n. If you specify w1 and w2, this function generates a bandpass filter of order 2n. [z, p, k] and [as, bs, cs, ds] generate the zero-pole-gain representation and the state-space representation, respectively, of the filter. Name Description n Specifies the filter order. n is a nonnegative integer. w Specifies the cutoff frequency of the filter. w is a real number between 0 and 1. 1 represents the Nyquist frequency. w1 Specifies the low cutoff frequency. w1 must fall in the range [0, 1]. w2 Specifies the high cutoff frequency. w2 must fall in the range [0, 1] and must be greater than w1. Specifies the type of filter to design. option is a string that accepts the following values. 'low' (default) Designs a lowpass filter. 'high' Designs a highpass filter. 'stop' Directs LabVIEW to design a stopband filter. If you do not specify 'stop' and you specify w1 and w2, LabVIEW designs a bandpass filter. Name Description b Returns the numerator of the filter under design. b is the forward filter coefficient of order n. b is a real vector. a Returns the denominator of the filter under design. a is the backward filter coefficient of order n. a is a real vector. z Returns the zeros of the filter. z is a vector. p Returns the poles of the filter. p is a vector. k Returns the gain of the filter. k is a real number. as Returns the A coefficients of the filter. as is a matrix. bs Returns the B coefficients of the filter. bs is a matrix. cs Returns the C coefficients of the filter. cs is a matrix. ds Returns the D coefficients of the filter. ds is a matrix. The following table lists the support characteristics of this function. Supported in the LabVIEW Run-Time Engine Yes Supported on RT targets Yes Suitable for bounded execution times on RT Not characterized N = 5; W = 0.4; [B, A] = iir_bessel(N, W) Related Topics
{"url":"http://zone.ni.com/reference/en-XX/help/373123B-01/lvtextmath/msfunc_iir_bessel/","timestamp":"2014-04-17T21:23:53Z","content_type":null,"content_length":"16951","record_id":"<urn:uuid:6712297a-f149-4c0f-89c4-dada11790e10>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Microbenchmark: Summing over array of doubles Yaroslav Bulatov bulatov at engr.orst.edu Tue Aug 3 01:32:21 CEST 2004 Duncan Booth <me at privacy.net> wrote in message news:<Xns95395FD0EF85duncanrcpcouk at 127.0.0.1>... > Christopher T King <squirrel at WPI.EDU> wrote in > news:Pine.LNX.4.44.0408011840050.21160-100000 at ccc4.wpi.edu: > > On 1 Aug 2004, Duncan Booth wrote: > > > >> I just had a brief look at the code you posted. Are you not concerned > >> about accuracy in any of your calculations? Summing a 10 million > >> element array by simply accumulating each element into a running > >> total is guaranteed to give a lousy result. > > > > Lousy or not, I believe that's how numarray is implemented internally, > > so at least all the benchmarks are the same. If you want accuracy > > summing that many numbers, you're going to have to do it in software > > (likely by summing each mantissa bit individually and reconstructing > > the float afterward), so it will be abysmally slow (obviously not what > > the OP wants). > > > My point being that speed isn't everything. Most applications doing large > floating point calculations should be concerned about accuracy, and how not > to add up a large array of floating point numbers was one of the first > things I was taught in computer science classes. The fastest way to sum 10 > million numbers (albeit at some considerable loss of accuracy): > return 0 > The 'correct' way to sum a large array of floats is, of course, to > calculate a lot of partial sums and sum them. For example, naively, we > might say that to sum the array we sum the first half, sum the second half > and add the results together. This definition being recursive ensures that > if the inputs are all of a similar size then all the intermediate > calculations involve summing similarly sized values. A more sophisticated > technique might also try to handle the case where not all values are a > similar size. > If you are worried about speed, then calculating partial sums is also the > way to go: the naive technique used by the original poster leaves no scope > for parallel calculation. It should be possible to write a slightly more > complex loop in the C version that runs a lot faster on systems that are > capable of performing multiple floating point instructions in parallel. You are right, naive summing generates significant accuracy losses. I estimated the error by summing [0.123456789012345]*1000000 and comparing it to 1234567.89012345. All methods have error about 1e-4 . The method below sums the array at the same speed as regular Python sum loop, but reports error < 1e-15 . def sum2(arr): size = len(arr) for itr in range(int(ceil(log(size)/log(2)))): for i in xrange(0, size, 2**(itr+1)): next_i = i+2**itr if next_i<size: return arr[0] When arbitrary precision numbers are being used, this method has performance O(nd) vs. regular summing O(n(d+log d)) where n is size of array, d is number of digits of the elements. In practice I got about 20% performance increase when summing an array of 1000 Decimals using method above. A better estimate of error difference would use random digits as opposed to [x]*10000000, but I don't know how I would calculate exact answer in any reasonable amount of time. (using Decimals takes over a second for 4000 array (with conversion)) More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2004-August/284228.html","timestamp":"2014-04-21T13:01:16Z","content_type":null,"content_length":"6704","record_id":"<urn:uuid:c01b7614-a13c-4d38-9977-5093d2d876d0>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Collaborative Lesson Planning Template Team Members Sarah Mensching - Kearns Jr. High Cathie Lauterborn – Kearns Jr. High Part I: Selecting a Mathematical Task What are the mathematical goals, objectives, and purpose for the lesson? o What should students know and be able to do as a result of this lesson? o To what standards / expectations / evidence does this lesson align? Goals and Objectives: 1. Students will be able to apply the Pythagorean Theorem to find the missing side of a right 2. Students will be able to summarize the Pythagorean Theorem using key vocabulary terms. 1. 1.2d Calculate the measures of the sides of a right triangle using the Pythagorean Theorem In what ways does the task build on students’ prior knowledge? o What definitions, concepts, or ideas do students need to know for the task? o How will you address any prerequisites that are missing from students’ backgrounds? o What questions will you ask to help students access their prior knowledge? The Pythagorean Theorem builds upon knowledge about right triangles, squares and square roots, and how to solve equations. 1. Students need to know about squares and square roots and that they are inverse operations. 2. Students need to know how to solve equations for a specified variable. 3. Students need to know how to round decimals and simplify square roots. 4. Prerequisite knowledge about squares and square roots will be addressed in the warm-up activity. Any other missing information will be dealt with on an individual student basis. 5. Questions: a. What is the opposite of squaring a number? b. How do we take the square root of a number? c. What is a right triangle? What are all the ways the task can be solved? o What errors might students make? What are the areas of difficulty in the task? o What misconceptions might students have? o Which of the methods do you think your students will use? The Pythagorean Theorem can be learned a variety of ways. Besides just straight lecture, there are many web based activities, manipulatives, paper and pencil activities with graph paper, diagrams, and real life examples available to help students learn the theorem. A variety of activities along with connecting it to real life examples is what we think are best. 1. Some students will misidentify the hypotenuse and legs of the right triangle 2. Some students will multiply by 2 instead of squaring a number. 3. Some students will divide by 2 instead of taking the square root of the number. Areas of Difficulty: 1. Identifying the hypotenuse. 2. Solving for one of the legs instead of the hypotenuse. 3. Connecting the Pythagorean with real life models. 4. Visualizing and diagramming the concept. 1. Multiplying by 2 instead of squaring the numbers. 2. Students may assume that they can use the theorem on all triangles 3. Students may leave the negative answer of the square root as an answer. Students will be asked to use a visual method as an introductory task. Students will probably use the basic algorithm of the theorem to solve problems along with perhaps diagramming the information to correctly identify the legs and the hypotenuse of the right triangle. To reinforce the algorithm, there are many web based activities that are fun and engaging to students. The notes that they take in class will also reinforce the process. Collaborative Lesson Planning Template What particular challenges might the task present to students, especially struggling students or ELL students? o How will you address these challenges? o Which strategies would be a good match to both the mathematics goal and students’ 1. Vocabulary is always a big issue with ELL students and struggling readers. 2. Diagramming a problem 3. Connecting the theorem to real life. How will you address these challenges? 1. Explicit instruction on vocabulary. 2. Frayer model on the specific vocabulary. 3. Model the diagramming process, verbally and on the board. Give several examples. 4. Use resources available to connect to real life such as YouTube videos and PowerPoint presentation where you see distance problems in real life. Make it fun. 5. Consistent use of vocabulary by teacher. 1. All of the above What are the expectations for students as they work/complete the task? o What resources or tools will they use in their work? o How will they work – groups, independent? How long? o If they work in groups, in what ways will they be partnered? o What evidence will I accept that students know the mathematical goal of the lesson? We expect our students to work in pairs and independently to solve problems using the Pythagorean Theorem. We expect out students to ask questions when we they do not understand something. We expect our students to connect what we did previously in solving equations and squares and square roots. We expect our students to observe already established classroom rules and procedures as they work on the activities and take notes. Calculators, manipulatives for triangle activity, Frayer model, Cornell notes, textbooks, computer Students will work in pairs for the triangle activity and independently during notetaking. 1. Exit tickets on language objective. 2. Practice problems 3. Questioning Collaborative Lesson Planning Template Steps of the Lesson Part 1: Set up the Task How will you introduce students to the activity to provide access to ALL students? maintain the cognitive demands of the task? Teacher Action Student Reaction / Response Instructional / accessibility strategy What do you expect will be the Higher level / critical thinking questions response from students as a Reading / vocabulary strategy result of teacher action? Formative assessment strategy Time allocation Opening activity: Triangle Activity (please see attached worksheet) Students should be able to discover the theorem themselves by doing the activity Higher Level/critical thinking questions: 1. Do you think this works with all triangles? 1. Most will assume that it does work with other triangles 2. What is the relationship with a square and the actual 2. The area of a square is that square of a number? number squared. 3. How is the theorem similar and different to other 3. Multiple responses are equations expected. Vocabulary Strategy: 1. Frayer Model 1. We expect a positive response from our student. 2. Explicit instruction for notes 2. No response expected Formative assessment strategy: 1. Questioning 1. We expect our students to respond to questions. 2. Practice problems 2. Some struggling will probably occur at first especially when solving for a leg the triangle. 3. Exit tickets 3. Students will write out what the theorem is and explain it at the end of class. Time Allocation: Please see attached lesson plan Collaborative Lesson Planning Template Steps of the Lesson Part 2: Exploration As students work on task, What higher level / critical thinking questions will you ask? How will you ensure students remain engaged in the task? Teacher Action Student Reaction / Response Instructional / accessibility strategy What do you expect will be the Higher level / critical thinking questions response from students as a Reading / vocabulary strategy result of teacher action? Formative assessment strategy Time allocation Higher level/critical thinking questions: Triangle Activity: 1. What do you notice about the area of the legs squared We expect multiple answers to and the hypotenuse’s area squared? these questions. 2. Why do you think we are making actual squares? 3. Do you see a pattern between the lengths of the legs and the hypotenuse? 4. What is the relationship between the length of the side and the square? Cornell Notes: 1. What is the relationship between the legs of a right We expect an analyzed detailed triangle and the hypotenuse? summary on their notes along 2. How do we tell the difference between the legs and the with higher thinking questions. 3. What is the difference between squaring a number a We engagement while taking multiplying a number by 2? notes. 4. What is the difference between taking the square root of a number and dividing by 2? 5. If we know all the lengths of a triangle, can we tell if it is right triangle? 6. Will this work with other triangles besides right triangles? Engagement Strategies: 1. For triangle activity, use different color paper for the 100% participation on all squares to make the activity more visually appealing. activities. 2. Think-pair-share strategies 3. Computer activities 4. Already understood classroom procedures, expectations, and consequences. Collaborative Lesson Planning Template Steps of the Lesson Part 3: Share and Discuss How will you orchestrate the class discussion to accomplish mathematical goals? Which solution paths to share and in which order? What specific questions will be asked to make sense of mathematical ideas? Teacher Action Student Reaction / Response Instructional / accessibility strategy What do you expect will be the Higher level / critical thinking questions response from students as a Reading / vocabulary strategy result of teacher action? Formative assessment strategy Time allocation Class Discussion: As previous stated, we have multiple higher levels thinking We expect full participation questions discusses before, during and after the Triangle activity according to previously and during the Cornell notes. Wrapping up the session at the determined classroom rules and end of class, we are going to reiterate what should have been expectations. learned and clarify any questions at that point. Solution Paths: We are using multiple solution paths; diagrams, oral, algebraic We expect students to be so that students will have the skills to solve for missing sides of a familiar with a variety of solution right triangle. paths. Specific Questions: 1. What is the relationship between the legs and the hypotenuse We expect that our students will of a right triangle? be able to answer these 2. Is the theorem specific to right triangles? questions.
{"url":"http://www.docstoc.com/docs/131485357/Collaborative-Lesson-Planning-Template","timestamp":"2014-04-21T12:59:48Z","content_type":null,"content_length":"65226","record_id":"<urn:uuid:9e223a55-3530-4b89-8870-e950bffffea8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - CIRCUIT ANALYSIS: 7 resistors, 2 Indep. Volt Source, V.C.C.S, V.C.V.S. - find I OK, I have added 3 currents to the diagram (in green). Just what you said above. Now, I use [itex]i\,=\,\frac{V}{R}[/itex] to get the new currents in green. KCL: [tex]I_3\,+\,I_4\,+\,I_5\,+\,\frac{V_b}{4K\Omega}\,=\,0[/tex] Now if I combine those four equations above: How do you proceed?
{"url":"http://www.physicsforums.com/printthread.php?t=151625","timestamp":"2014-04-20T18:31:38Z","content_type":null,"content_length":"24968","record_id":"<urn:uuid:35ffa39a-9380-4816-91f2-12513cb47936>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Box graph question [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Box graph question From n j cox <n.j.cox@durham.ac.uk> To statalist@hsphsun2.harvard.edu Subject Re: st: Box graph question Date Wed, 01 Nov 2006 18:14:54 +0000 My advice in a nutshell: forget any idea that -graph box- is the way to do this. Produce your own results set and then build your own graph as a set of calls to -twoway-. sysuse auto, clear statsby "su mpg" mean=r(mean) max=r(max) min=r(min) sd=r(sd), by(rep78) // here k = 1; tune to choice gen meanpsd = mean + sd gen meanmsd = mean - sd // use -line- or -connected- not -scatter- twoway scatter mean rep78 || /// rbar mean max rep78, bcolor(none) barw(0.2) || /// rbar mean min rep78, bcolor(none) barw(0.2) || /// rbar meanmsd meanpsd rep78 , barw(0.1) bcolor(red) || /// legend(off) ytitle(Miles per gallon) yla(, ang(h)) /// note(red bars: mean +/- sd; empty bars show range) Not so long ago someone was complaining about the overlay idea as unnatural. Here I designed a novel kind of graph purely from very basic ingredients. It works! Rodrigo A. Alfaro <ralfaro76@hotmail.com> I am not graphic at all. I cannot find a simple solution for my problem. Probably the answer is simple, but I need help to find it. I want to create a box graph with a line connecting the means (not the medians), but the boxes have to cover the range (not the IQR)... also with boxes covering * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2006-11/msg00019.html","timestamp":"2014-04-17T15:37:59Z","content_type":null,"content_length":"6701","record_id":"<urn:uuid:aea5a479-5d42-4fb0-86b7-6e90ca276b88>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
[Digital Systems]NAND-Gate Implementation this one is pretty simple. You don't need to worry about K-Maps because K-Maps are for simplifying messy boolean expressions. The expression you have, Z=A'C+BC' is a very simple one. In practice, constructing a digital circuit is cheapest by implementing it using only NAND gates, so I guess the idea is to show you how it's done. Basically each component, NOT, OR, AND, can each be made up of an assortment of NAND gates. All you do is put little blocks of these NAND gate configurations together the same as you would with regular NOT, OR, and AND gates. Here's a good quick tutorial on it: Hope this helps.
{"url":"http://www.physicsforums.com/showthread.php?t=182743","timestamp":"2014-04-20T08:35:34Z","content_type":null,"content_length":"37049","record_id":"<urn:uuid:c27003d8-e5b1-4a90-87d5-069af1e0240a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - measure theory and number theory? All fields are rings, but not conversely. You said, '...algebra and number theory are related as soon as you start learning about them...' I assume one does not learn Galois theory as the first exposure to algebra. One does rings and group theory before moving on to Galois theory which uses a combination of them. You can do number theory using rings and groups. For example, the group of integers 1,...p-1 with multiplication mod p can be used to show for all m, m^p-1=1 mod p, a trivial result once you talk about the order of a group. I'm sure there are other things I can't remember off the top of my head
{"url":"http://www.physicsforums.com/showpost.php?p=1511035&postcount=11","timestamp":"2014-04-21T09:51:54Z","content_type":null,"content_length":"8119","record_id":"<urn:uuid:cc3dd2c6-f87c-4cc4-986b-ca46c7f256a4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
6 July 1998 Vol. 3, No. 27 THE MATH FORUM INTERNET NEWS NuCalc Graphing Calculator| Behind the Numbers-Sills | Thinking about Choice NUCALC - GRAPHING CALCULATOR 2.0 Graphing Calculator 2.0 from Pacific Tech is designed to offer a new way of looking at math, providing symbolic and numeric methods for visualizing two- and three-dimensional mathematical objects. It features a variety of 2D and 3D graphs, coordinate systems, numeric solutions, symbolic methods, animation, the ability to graph multiple functions, choices of colors, and texture map surfaces. The Web site offers a free demo to download, and a guided tour of mathematical objects: BEHIND THE NUMBERS - Jonathan Sills (for ESPN SportsZone) "What happens when exact calculations meet the human, unpredictable factors that make sports so entertaining?" Articles by Jonathan Sills, the Zone's resident mathematician, exploring the gray area between math and sports: - BASEBALL: "Who's the King of the K?" An analysis of pitchers' balls and strikes. - BASKETBALL: "Is there a better way to rate NBA Players?" Point weightings and their overemphasis in individual - FANTASY BASEBALL: "Is it right to bench a lefty?" The statistics in fantasy baseball. - FOOTBALL: "How should we poll?" Who's really No. 1 in college football? It's all about the variance. - GOLF: "How tough is Q-school?" A look at the PGA's method of qualifying members. - HOCKEY: "Can math help hockey schedules?" "The worst job in hockey". - SOCCER: "World Cup 1998: What's the point of a victory?" A combinatoric and probabilistic analysis of recent rule changes for advancing through rounds of the World Cup. - STRONGMAN: "How strong is the world's strongest man?" The point basis for judging this competition. Jonathan Sills is a Chicago native who was a Fulbright Scholar at Oxford. He's done informal consulting with minor-league hockey and baseball, using math to help leagues determine schedules. He holds a degree in Civil Engineering and Operations Research from Princeton THINKING ABOUT CHOICE - a math-teach discussion "Why can't teachers allow children to make at least some choices in math as we do in other subjects?" - Mary O'Keefe "If... the problems at hand are such as to require real thought, then students need work only a handful - and the instructor should both require them to express their thought and engage that thought himself when grading it. In this case, choice is reasonable." - Lou Talman "... one thing I think that many of the experienced teachers may be forgetting is the amount of work involved in teaching when you are first learning how..." - Pat Krivoshein "What makes a magician masterful is not the choices given the audience, rather it is the illusion of choices. - Ron Ferguson A discussion that touches on using keys to grade assignments, making homework more thought-provoking, varying the way students are assigned routine skills, the use of estimating, the teacher's responsibility to the student, and the kinds and merits of choice in the math classroom. For more MATH-TEACH threads, see the Math Forum's Web archive or subscribe to the mailing list. CHECK OUT OUR WEB SITE: The Math Forum http://mathforum.org/ Ask Dr. Math http://mathforum.org/dr.math/ Problems of the Week http://mathforum.org/students/ Internet Resources http://mathforum.org/~steve/ Join the Math Forum http://mathforum.org/join.forum.html Send comments to the Math Forum Internet Newsletter editors _o \o_ __| \ / |__ o _ o/ \o/ __|- __/ \__/o \o | o/ o/__/ /\ /| | \ \ / \ / \ /o\ / \ / \ / | / \ / \
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews3.27.html","timestamp":"2014-04-20T08:57:42Z","content_type":null,"content_length":"9106","record_id":"<urn:uuid:8d0e37b4-b91d-40cc-8caa-3e08d69fc4a3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: q-q plots, theoretical distribution with values higher than the Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: q-q plots, theoretical distribution with values higher than the sample's cutoff point From David Hoaglin <dchoaglin@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: q-q plots, theoretical distribution with values higher than the sample's cutoff point Date Thu, 19 Jul 2012 07:37:06 -0400 Dear Lucia, I am having trouble fitting the pieces of information together. If, in context, observations greater than 10,000 are likely to be outliers, I would not expect a distribution that fits your data well below 9,000 to have a heavier tail, with corresponding quantiles out to 20,000. Why do you consider observations greater than 10,000 to be outliers? (The largest 4 observations are between 9,900 and 10,000. Why are do they seem to be clumping there, almost as if 10,000 were an upper bound?) Perhaps your data do not follow a lognormal distribution or one of the other theoretical distributions you mentioned, but then the Q-Q plot should show systematic lack of fit in other parts of the data, not just at 10,000 and above. I don't recall seeing a description of what the data are. Is it possible that your sample is a mixture of some sort? Is there evidence of more than one mode? Also, what is the sample size? It is difficult to get much information on shape of distribution without having several hundred One can often learn a lot about the distribution shape of the data by taking an exploratory approach, based on quantiles. I like to use the g-and-h distributions, introduced by John W. Tukey, though that family is not as well known as it deserves to be. The lognormal distributions are a subfamily of the g-and-h distributions, as are the normal distributions, and the approach has a lot of flexibility for data that have heavy tails. David Hoaglin On Thu, Jul 19, 2012 at 3:39 AM, Lucia R.Latino <Latino@economia.uniroma2.it> wrote: > Dear Nick, > I dropped all the observations greater than 10,000 because I considered them > outliers. However, even without dropping the observations, q-q plots show > the same pattern. Also the use of the weights does not make so much > difference, as you said. > I know that the distribution is not lognormal (it is what I was trying > exactly to show), my concern was about the plots. As I mentioned before, > the points are close enough to the 45 line degree (in the case of the GB2 > and Singh-Maddala, the points on the q-q plot fall exactly on the straight > line) till approximately the value 9,000. After that, the points depart > significantly from the 45 line degree, they become a parallel line to the > x-axis; furthermore, while the sample distribution reaches value 10,000, the > theoretical one reaches approximately value 20,000. > I think that this is a "weird" behavior of the plots or I am simply missing > something important about the q-q plots. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-07/msg00667.html","timestamp":"2014-04-18T09:00:32Z","content_type":null,"content_length":"11861","record_id":"<urn:uuid:3150c95b-f1ab-46c2-a643-0244e81490cb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
[Rd] setdiff for data frames G. Jay Kerns gkerns at ysu.edu Mon Dec 10 16:53:44 CET 2007 I have been interested in setdiff() for data frames that operates row-wise. I looked in the documentation, mailing lists, etc., and didn't find exactly the right thing. Given data frames A, B with the same columns, the goal is to extract the rows that are in A, but not in B. Of course, one can usually do setdiff(rownames(A), rownames(B)) but that is cheating. :-) I played around a little bit and came up with setdiff.data.frame = function(A, B){ g <- function( y, B){ any( apply(B, 1, FUN = function(x) identical(all.equal(x, y), TRUE) ) ) } unique( A[ !apply(A, 1, FUN = function(t) g(t, B) ), ] ) I am sure that somebody can do this a better/faster way... any ideas? Any chance we could get a data.frame method for set.diff in future R versions? (The notion of "set" is somewhat ambiguous with respect to rows, columns, and entries in the data frame case.) P.S. You can see what I'm looking for with A <- expand.grid( 1:3, 1:3 ) B <- A[ 2:5, ] G. Jay Kerns, Ph.D. Assistant Professor / Statistics Coordinator Department of Mathematics & Statistics Youngstown State University Youngstown, OH 44555-0002 USA Office: 1035 Cushwa Hall Phone: (330) 941-3310 Office (voice mail) -3302 Department -3170 FAX E-mail: gkerns at ysu.edu More information about the R-devel mailing list
{"url":"https://stat.ethz.ch/pipermail/r-devel/2007-December/047706.html","timestamp":"2014-04-19T07:16:12Z","content_type":null,"content_length":"3860","record_id":"<urn:uuid:d7da1f4b-97d3-4d8f-bb32-8ee16e559a14>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
PyStruct - Structured Learning in Python PyStruct aims at being an easy-to-use structured learning and prediction library. Currently it implements only max-margin methods and a perceptron, but other algorithms might follow. The learning algorithms implemented in PyStruct have various names, which are often used loosely or differently in different communities. Common names are conditional random fields (CRFs), maximum-margin Markov random fields (M3N) or structural support vector machines. If you are new to structured learning, have a look at What is structured learning?. The goal of PyStruct is to provide a well-documented tool for researchers as well as non-experts to make use of structured prediction algorithms. The design tries to stay as close as possible to the interface and conventions of scikit-learn. The current version is PyStruct 0.2 which you can install via pip: pip install pystruct Starting with this first stable release, PyStruct will remain stable with respect to API and will provide backward compatibility. You can contact the authors either via the mailing list or on github. To install pystruct, you need cvxopt, cython and scikit-learn. The easiest way to install pystruct is using pip: pip install pystruct This will also install the additional inference packages ad3 and pyqpbo. You might also want to check out OpenGM, a library containing many many inference algorithms that can be used with PyStruct. In order to do learning with PyStruct, you need to pick two or three things: a model structure, a learning algorithm and optionally an inference algorithm. By constructing a learner object from a model, you get an object that can fit to training data and can predict for unseen data (just like scikit-learn estimators). Models, aka CRFs These determine what your model looks like: its graph structure and its loss function. There are several ready-to-use classes, for example for multi-label classification, chain CRFs and more complex models. You can find a full list in the Models section of the references Learning algorithms These set the parameters in a model based on training data. Learners are agnostic of the kind of model that is used, so all combinations are possible and new models can be defined (to include, e.g., higher-order potentials) without changing the learner. The current learning algorithms implement max margin learning and a perceptron. See the Learning section of the references. Inference solvers These perform inference: they run your model on data in order to make predictions. There are some options to use different solvers for inference. A linear programming solver using GLPK is included. I have Python interfaces for several other methods on github, including LibDAI, QPBO, AD3. This is where the heavy lifting is done and in some sense these backends are interchangeable. Currently I would recommend AD3 for very accurate solutions and QPBO for larger models. The the cutting plane solvers include an option (switch_to) to switch the solver to a stronger or exact solver when no constraints can be found using the previous solver (which should be a faster undergenerating solver, such as QPBO).
{"url":"http://pystruct.github.io/","timestamp":"2014-04-17T09:53:05Z","content_type":null,"content_length":"9656","record_id":"<urn:uuid:8c4e4abd-a445-4c43-ab50-eadbba659874>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
etd AT Indian Institute of Science: A Study On The Predictive Optimal Active Control Of Civil Engineering Structures & Collections Thesis Guide Submitted Date Sign on to: Receive email Login / Register authorized users Edit Profile About DSpace etd AT Indian Institute of Science > Division of Earth and Environmental Sciences > Civil Engineering (civil) > Please use this identifier to cite or link to this item: http://hdl.handle.net/2005/223 Title: A Study On The Predictive Optimal Active Control Of Civil Engineering Structures Authors: Keyhani, Ali Advisors: Allam, Mehter M Submitted Dec-2000 Publisher: Indian Institute of Science Uncertainty involved in the safe and comfort design of the structures is a major concern of civil engineers. Traditionally, the uncertainty has been overcome by utilizing various and relatively large safety factors for loads and structural properties. As a result in conventional design of for example tall buildings, the designed structural elements have unnecessary dimensions that sometimes are more than double of the ones needed to resist normal loads. On the other hand the requirements for strength and safety and comfort can be conflicting. Consequently, an alternative approach for design of the structures may be of great interest in design of safe and comfort structures that also offers economical advantages. Recently, there has been growing interest among the researchers in the concept of structural control as an alternative or complementary approach to the existing approaches of structural design. A few buildings have been designed and built based on this concept. The concept is to utilize a device for applying a force (known as control force) to encounter the effects of disturbing forces like earthquake force. However, the concept still has not found its rightful place among the practical engineers and more research is needed on the subject. One of the main problems in structural control is to find a proper algorithm for determining the optimum control force that should be applied to the structure. The investigation reported in this thesis is concerned with the application of active control to civil engineering structures. From the literature on control theory. (Particularly literature on the control of civil engineering structures) problems faced in application of control theory were identified and classified into two categories: 1) problems common to control of all dynamical systems, and 2) problems which are specially important in control of civil engineering structures. It was concluded that while many control algorithms are suitable for control of dynamical systems, considering the special problems in controlling civil structures and considering the unique future of structural control, many otherwise useful control algorithms face practical problems in application to civil structures. Consequently a set of criteria were set for judging the suitability of the control algorithms for use in control of civil engineering structures. Various types of existing control algorithms were investigated and finally it was concluded that predictive optimal control algorithms possess good characteristics for purpose of control of civil engineering structures. Among predictive control algorithms, those that use ARMA stochastic models for predicting the ground acceleration are better fitted to the structural control environment because all the past measured excitation is used to estimate the trends of the excitation for making qualified guesses about its coming values. However, existing ARMA based predictive algorithms are devised specially for earthquake and require on-line measurement of the external disturbing load which is not possible for dynamic loads like wind or blast. So, the algorithms are not suitable for tall buildings that experience both earthquake and wind loads during their life. Consequently, it was decided to establish a new closed loop predictive optimal control based on ARMA models as the first phase of the study. In this phase it was initially established that ARMA models are capable of predicting response of a linear SDOF system to the earthquake excitation a few steps ahead. The results of the predictions encouraged a search for finding a new closed loop optimal predictive control algorithm for linear SDOF structures based on prediction of the response by ARMA models. The second part of phase I, was devoted to developing and testing the proposed algorithm The new developed algorithm is different from other ARMA based optimal controls since it uses ARMA models for prediction of the structure response while existing algorithms predict the input excitation. Modeling the structure response as an AR or ARMA stochastic process is an effective mean for prediction of the structure response while avoiding measurement of the input excitation. ARMA models used in the algorithm enables it to avoid or reduce the time delay effect by predicting the structure response a few steps ahead. Being a closed loop control, the algorithm is suitable for all structural control conditions and can be used in a single control mechanism for vibration control of tall buildings against wind, earthquake or other random dynamic loads. Consequently the standby time is less than that for existing ARMA based algorithms devised only for earthquakes. This makes the control mechanism more reliable. The proposed Abstract: algorithm utilizes and combines two different mathematical models. First model is an ARMA model representing the environment and the structure as a single system subjected to the unknown random excitation and the second model is a linear SDOF system which represents the structure subjected to a known past history of the applied control force only. The principle of superposition is then used to combine the results of these two models to predict the total response of the structure as a function of the control force. By using the predicted responses, the minimization of the performance index with respect to the control force is carried out for finding the optimal control force. As phase II, the proposed predictive control algorithm was extended to structures that are more complicated than linear SDOF structures. Initially, the algorithm was extended to linear MDOF structures. Although, the development of the algorithm for MDOF structures was relatively straightforward, during testing of the algorithm, it was found that prediction of the response by ARMA models can not be done as was done for SDOF case. In the SDOF case each of the two components of the state vector (i.e. displacement and velocity) was treated separately as an ARMA stochastic process. However, applying the same approach to each component of the state vector of a MDOF structure did not yield satisfactory results in prediction of the response. Considering the whole state vector as a multi-variable ARMA stochastic vector process yielded the desired results in predicting the response a few steps ahead. In the second part of this phase, the algorithm was extended to non-linear MDOF structures. Since the algorithm had been developed based on the principle of superposition, it was not possible to directly extend the algorithm to non-linear systems. Instead, some generalized response was defined. Then credibility of the ARMA models in predicting the generalized response was verified. Based on this credibility, the algorithm was extended for non-linear MDOF structures. Also in phase II, the stability of a controlled MDOF structure was proved. Both internal and external stability of the system were described and verified. In phase III, some problems of special interest, i.e. soil-structure interaction and control time delay, were investigated and compensated for in the framework of the developed predictive optimal control. In first part of phase III soil-structure interaction was studied. The half-space solution of the SSI effect leads to a frequency dependent representation of the structure-footing system, which is not fit for control purpose. Consequently an equivalent frequency independent system was proposed and defined as a system whose frequency response is equal to the original structure -footing system in the mean squares sense. This equivalent frequency independent system then was used in the control algorithm. In the second part of this phase, an analytical approach was used to tackle the time delay phenomenon in the context of the predictive algorithm described in previous chapters. A generalized performance index was defined considering time delay. Minimization of the generalized performance index resulted into a modified version of the algorithm in which time delay is compensated explicitly. Unlike the time delay compensation technique used in the previous phases of this investigation, which restricts time delay to be an integer multiplier of the sampling period, the modified algorithm allows time delay to be any non-negative number. However, the two approaches produce the same results if time delay is an integer multiplier of the sampling period. For evaluating the proposed algorithm and comparing it with other algorithms, several numerical simulations were carried during the research by using MATLAB and its toolboxes. A few interesting results of these simulations are enumerated below: ARM A models are able to predict the response of both linear and non-linear structures to random inputs such as earthquakes. The proposed predictive optimal control based on ARMA models has produced better results in the context of reducing velocity, displacement, total energy and operational cost compared to classic optimal control. Proposed active control algorithm is very effective in increasing safety and comfort. Its performance is not affected much by errors in the estimation of system parameters (e.g. damping). The effect of soil-structure interaction on the response to control force is considerable. Ignoring SSI will cause a significant change in the magnitude of the frequency response and a shift in the frequencies of the maximum response (resonant frequencies). Compensating the time delay effect by the modified version of the proposed algorithm will improve the performance of the control system in achieving the control goal and reduction of the structural response. URI: http://hdl.handle.net/2005/223 Appears in Civil Engineering (civil) Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
{"url":"http://etd.ncsi.iisc.ernet.in/handle/2005/223","timestamp":"2014-04-24T17:03:24Z","content_type":null,"content_length":"34059","record_id":"<urn:uuid:908569c7-5764-4160-9566-6a6b4c56755a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Towards a Mathematical Science of Computation was given at the congress IFIP-62 and published in the proceedings of that conference. It extends the results of A Basis for a Mathematical Theory of Computation which was first given in 1961. I think this paper includes the first use of the term abstract syntax and maybe the first occurrence of the idea. .dvi, .pdf and .ps versions are also available. Up to: McCarthy home page I welcome comments, and you can send them by clicking on
{"url":"http://www-formal.stanford.edu/jmc/towards.html","timestamp":"2014-04-21T02:45:53Z","content_type":null,"content_length":"1955","record_id":"<urn:uuid:b4ac3447-85c1-4ca4-ba9c-ff5e48eb9166>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The Application of Discontinuous Galerkin Space and Velocity Discretization to Model Kinetic Equations Alexander Alekseenko1 , Natalia Gimelshein2 and Sergey Gimelshein2 Department of Mathematics, California State University Northridge, Northridge, California 91330, USA ERC, Inc, Edwards AFB, California 93524, USA Abstract. An approach based on a discontinuous Galerkin discretiza- tion is proposed for the Bhatnagar-Gross-Krook model kinetic equa- tion. This approach allows for a high order polynomial approximation of molecular velocity distribution function both in spatial and velocity variables. It is applied to model normal shock wave and heat transfer problems. Convergence of solutions with respect to the number of spatial cells and velocity bins is studied, with the degree of polynomial approxi- mation ranging from zero to four in the physical space variable and from zero to eight in the velocity variable. This approach is found to conserve mass, momentum and energy when high degree polynomial approxima-
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/610/2756352.html","timestamp":"2014-04-19T09:49:25Z","content_type":null,"content_length":"8157","record_id":"<urn:uuid:e2dc36d8-5c1e-45b6-b744-29fb87e7b01a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Physics 1210 Submissions [11] viXra:1210.0177 [pdf] submitted on 2012-10-30 12:14:51 Instant Broglie Bohm Pilot Waves, the Origin of All Entanglement Effects in the Lab and Wavefunction Collapses in Our Universe as Related to Our Opposing Anti-Copy Universe(s) According to Quantum FFF Theory. Authors: Leo Vuyk Comments: 23 Pages. 23 Abstract, According to Q-FFF theory, we material humans live in one of an even number of CPT symmetrical copy universes however, then we need an instant correlation medium between these universes to synchronize all wavefunction collapses and even our conscious decisions. Thus, we need an INSTANT timeless Broglie Bohm pilot wave. Then Schrödinger’s Cat and correlated anti-material Copy Cats, are alive or dead in all universes at the same time and God plays dice in an even number of correlated universes. The main characteristics of the Quantum-FFF Theory are: 1: Sub Quantum Microstructure of elementary particles including photons, being convertibles of the oscillating dark energy Higgs particle shaping the vacuum lattice.. 2: The energetic oscillating Higgs is by collision the origin all particle motion and spin states and even dark energy (125 Gev) inside a truncated tetrahedron shaped chiral vacuum lattice. 3:The lattice chirality (left or right handed) is the origin of our material universe and the a-symmetry of some decay phenomena. 4: The Higgs vacuum lattice is transportation medium of all photonic information and able to mimic relativity rules down to a measurable level. 5: Nothing sucks in physics and everything is Entangled by INSTANT communication between at least two anti-copy Universes or the Multiverse, being entangled since the big bang. (the Broglie-Bohm pilot wave) or between entangled particles in the lab. 6: Black holes of all sizes (down to ball lightning) can not emit gravitons themselves, as a consequence, they must be massless but also counter intuitive the origin of all dark matter due to a gravitational Casimir pressure effect. Category: Quantum Physics [10] viXra:1210.0175 [pdf] replaced on 2013-05-05 15:38:13 Measurement Theory Authors: Nolan L. Aljaddou Comments: 12 Pages. This work is an analytic establishment of the proper underlying theoretical foundation of the science of physics, in terms of its most fundamental precept - measurement - in order to formulate the proper solution to some of its ultimate problems as well as to explain some of its greatest mysteries. It is grouped into two primary sections, the first covering the purely mathematical foundation, with the latter the application of the mathematical tools derived in order to construct the proper physical foundation. These two groups are further divided into subsections exemplifying a dual logic premise/implication structure of establishing necessary first principles alone, followed by the ultimate extent of the logical consequences of those principles. In the mathematical section this takes the form of first addressing the mathematical principle common to all physical measurement and then demonstrating the extent of its logical application in deriving the various branches of mathematics necessary for physics. In the following physical section, the nature of fundamental measures such as space and time and their exact interrelationship is established, followed by an examination of their consequential manifest properties in producing the existence of matter and its counterpart anti-matter, as well as the nature of their mutual interaction. The origin of fundamental particle variety is then established, following its effects through to the cosmological phenomena of black holes and the big bang, and the organization of subatomic structure. The exposition concludes in deducing the fundamental universal principle which governs all physical phenomena in their varied forms. Broadly speaking, the categories of foundation established are listed in order as follows: Foundations of Mathematics, Fundamental Branches of Mathematics, Quantum Physics, General Relativity, Classical Physics, Elementary Particle Physics, Cosmology, and Unification Physics - covering the spectrum of general mathematical physics classification. Fundamental problems solved include: the identity of the most fundamental axiom of mathematics, the proper axiomatic establishment of the calculus, the proper derivation and establishment of the correct framework of quantum physics - and clarification of quantum misunderstanding - the explanation for matter's curvature of space-time, the reason for classical inertia and the form of the electromagnetic equations, the explanation for the divisions of fundamental particles and the fundamental forces, the mathematical proof of the existence of Yang-Mills theory of chromodynamics and its relation to producing the big bang and the internal effects of black holes, the identity of the unification principle of physics, and several others. Its reductionist nature establishes that it is the ultimate, unique foundation of all physical principles and by its nature the only means of solving and understanding the essential problems addressed therein. Category: Quantum Physics [9] viXra:1210.0172 [pdf] replaced on 2013-03-19 06:07:22 Matter as Gravitational Waves ( on the Nature of Electron ) Authors: Ernesto López González Comments: 56 pages, in Spanish Background: At present in physics there are 2 large and seemingly incompatible theories: the theory of General Relativity and Quantum Mechanics. A model to derive Quantum Mechanics from General Relativity is presented here.Results: A hexadimensional model with the following characteristics is proposed: 1 temporal dimension and 5 spatial dimensions (3 extended spatial dimensions and a plane of compacted dimensions; where elementary particles like quarks and electrons move at the speed of light in elliptical paths with a perimeter equal to a half Compton wavelength). The charge / mass ratio and the intrinsic magnetic momentum of the electron solely from its mass are estimated.The gravitational wave equations are solved for the particular case of a flat three-dimensional space. The boundary conditions are set assuming that the waves are guided by the curvature of compacted dimensions. In particular, exact solutions are obtained for the case of a motionless particle-pulsation, uniform linear motion particle-pulsation and the relativistic hydrogen atom. These solutions justify the postulates of quantum mechanics and provide numerical solutions compatible with the experimental data. Finally a possible origin of inertia is proposed.Conclusions: We should review the dual wave-particle concept in favour of a solely gravitational wave nature. It is remarkable to note that the same conclusions can be drawn with other configurations of compacted dimensions (whether it be in number, size or topology) A great deal of theoretical effort is needed to expand the hypothesis towards strong and weak nuclear forces and curved tridimensional spaces. Category: Quantum Physics [8] viXra:1210.0159 [pdf] submitted on 2012-10-26 19:27:12 A Novel Way to 'Make Sense' Out of the Copenhagen Interpretation Authors: Armin Nikkhah Shirazi Comments: 5 Pages. To be published in AIP proceedings of the Quantum Theory: Reconsideration of Foundations 6 conference in Växjö, Sweden This paper presents a concise exposition of the Dimensional Theory, a novel framework which helps make sense out of the Copenhagen Interpretation as it explains the peculiarities of quantum mechanics in a way that is most consistent with that interpretation. \footnote{A recording of the talk based on this material can be viewed at http://youtu.be/GurBISsM308 } Category: Quantum Physics [7] viXra:1210.0156 [pdf] submitted on 2012-10-26 13:33:02 The Quantization in the Bohr's Theory About the Hydrogen Atom Authors: sangwha Yi Comments: 4 Pages. The article treats that the quantization in the Bohr’s theory about the hydrogen atom. If calculates the electron motion in the hydrogen atom by the classical mechanic, can do the quantization in the Bohr’s theory about the hydrogen atom. In this time, the electron’s orbit velocity is non-relativity velocity. Category: Quantum Physics [6] viXra:1210.0133 [pdf] submitted on 2012-10-23 19:28:09 Why do Electrons (Orbitals) Have Discrete Quantum Numbers? Authors: Joel M Williams Comments: 2 Pages. The MCAS electron orbital model is given in several places elsewhere. “Why do electrons generate the Balmer series?” Not just that they do. In the MCAS model, they do so because the nuclei and electrons have different motion parameters, but their interaction (!) must coincide when the electron approaches the nucleus. Passing close to the nucleus allows the necessary intimacy, whereas the distant circular Bohr orbits (and those of all other models) never seemed to provide any such mechanism. Higher mathematical treatments have not provided a logical physical explanation either; just parameters to make it so as did the refinement of the Bohr model. This short paper demonstrates that simple Newtonian physics can be applied to the electron-nucleus realm and generate the principal quantum numbers of a single electron a priori. Category: Quantum Physics [5] viXra:1210.0128 [pdf] submitted on 2012-10-23 06:37:29 [4] viXra:1210.0121 [pdf] submitted on 2012-10-22 09:46:49 Entanglement Dynamics : Application to Quantum Systems Authors: Elemer E Rosinger Comments: 16 Pages. For the first time in known literature, one studies {\it entanglement dynamics} which is the way the complexity of entanglement may change in time, for instance, in the solution of a Schr/"{o}dinger equation giving the state of a composite quantum system. The paper is a preliminary study which gives the rigorous definition of the respective general mathematical model. Applications to effective Schr/"{o}dinger equations are given in a subsequent paper. Category: Quantum Physics [3] viXra:1210.0075 [pdf] submitted on 2012-10-15 04:28:54 Quantum Reversal of Soul Energy Authors: Fran De Aquino Comments: 11 Pages. In the last decades, the existence of the Soul has been seriously considered by Quantum Physics. It has been frequently described as a body of unknown energy coupled to human body by means of a mutual interaction. The Quantum Physics shows that energy is quantized, i.e., it has discrete values that are defined as energy levels. Thus, along the life of a person, the energy of its soul is characterized by several quantum levels of energy. Here, we show by means of application of specific electromagnetic radiations on the human body, that it is possible to revert the energy of the soul to previous energy levels. This process can have several therapeutic applications. Category: Quantum Physics [2] viXra:1210.0061 [pdf] submitted on 2012-10-11 23:39:54 Updated View about the Hierarchy of Planck Constants Authors: M. Pitkänen Comments: 12 Pages. Few years has passed from the latest formulation for the hierarchy of Planck constant. The original hypothesis seven years ago was that the hierarchy is real. In this formulation the imbedding space was replaced with its covering space assumed to decompose to a Cartesian product of singular finite-sheeted coverings of M^4 and CP[2]. Few years ago came the realization that the hierarchy could be only effective but have same practical implications. The basic observation was that the effective hierarchy need not be postulated separately but follows as a prediction from the vacuum degeneracy of Kähler action. In this formulation Planck constant at fundamental level has its standard value and its effective values come as its integer multiples so that one should write hbar[eff]=n×hbar rather than hbar= nhbar[0] as I have done. For most practical purposes the states in question would behave as if Planck constant were an integer multiple of the ordinary one. It was no more necessary to assume that the covering reduces to a Cartesian product of singular coverings of M^4 and CP[2] but for some reason I kept this In the recent formulation this assumption is made and the emphasis is on the interpretation of the multi-sheetedness (in the sense of Riemann surfaces) resulting as a multi-furcation for a preferred extremal taking place at the partonic 2-surfaces. This gives a connection with complexity theory (say in living systems), with transition to chaos, and with general ideas about fractality. Second quantization of the multi-furcation means accepting not only superpositions of branches as single particle states but also the analogs of many-particle states obtained by allowing several branches up to the maximum number. This revives the ideas of N-atom, N-molecule etc.. already given up as too adventurous. The question whether gravitational Planck constant h[gr] having gigantic values results as an effective Planck constant has remained open. A simple argument suggests that gravitational four-momentum could be identified as a projection of the inertial four-momentum to the space-time surface and that the square of the gravitational four-momentum obtained using the effective metric defined by the anti-commutators of the modified gamma matrices appearing in the modified Dirac equation naturally leads to the emergence of h[gr]. Category: Quantum Physics [1] viXra:1210.0034 [pdf] submitted on 2012-10-08 13:10:22 Time Energy Uncertainty Principle and Resonance in Chemistry Authors: Mirza Wasif Baig Comments: 7 Pages. Mathematical modelling of resonance in Chemistry and Vaccuum fluctuations of Virtual Particles by single differential equation Phenomenon of resonance in chemistry and vacuum fluctuations of virtual particles in high energy physics has been mathematically modeled by a third order differential equation. In newly formulated differential equation wave function describing both canonical forms of molecules and state of virtual particles is a dependent variable while time is an independent variable. Solution of differential equation after application of appropriate boundary conditions gives an acceptable wave function Operation of energy and time operator on this wave function proves that these operators do not hold commutative property. Time energy uncertainty principle has been derived by calculating variance of energy and time for the same wave function. Category: Quantum Physics
{"url":"http://vixra.org/quant/1210","timestamp":"2014-04-18T15:40:36Z","content_type":null,"content_length":"21185","record_id":"<urn:uuid:76412f89-e2f1-4f80-a476-9dea706abad4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
BMI Calculator Body Mass Index Calculation Are you over or underweight? The best way of weight control is to use BMI calculators. BMI calc helps determine the weight range based on the body mass index. BMI calculator gives individual results of the measurements. BMI calc helps understand the person's needs for healthy weight. BMI calculator shows the weight range for a particular height. Body mass index calculator can indicate the risk of serious health conditions associated with weight problems. Our calculators may show categories such as underweight, healthy weight, overweight and obesity. BMI calculator can be used for men and women from 20 years and older. For children and teens from 2 to 19 years it is good to use special BMI calculator for kids as the particularities of their age and gender must be taken into account. Imperial BMI Calculator Metric BMI Calculator This calculator is a standard BMI calc. The calculation of the BMI - body mass index with BMI calculators is carried out according to the same BMI formula; however, the definition of calculation results may differ depending on your age, sex, or established norms of the country where you reside. You can use our advanced bmi calculators to get additional results.
{"url":"http://bmicalc.org/","timestamp":"2014-04-19T09:23:32Z","content_type":null,"content_length":"9693","record_id":"<urn:uuid:60eeb725-c5a9-4527-b9fb-1f12aaaa11f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
translating English phrases into algebraic expressions September 20th 2010, 02:15 PM translating English phrases into algebraic expressions okay I have another problem a number, half of that number, and one-third of that number are added. The result is 22. So far I have September 20th 2010, 02:19 PM write it like this ..... x+ x/2 + x/3 = 22 or $x+\frac {x}{2} +\frac {x}{3} = 22$ that way someone can think you wrote ... x+ 1/(2x) + 1 /(3x) .... it's the same as the another one that you post ... just this one multiply with 6 $6x +.....$ September 20th 2010, 02:25 PM thank you so much for help. I am enjoying this forum because it is helping me a lot to understand the problems.
{"url":"http://mathhelpforum.com/algebra/156829-translating-english-phrases-into-algebraic-expressions-print.html","timestamp":"2014-04-21T12:30:45Z","content_type":null,"content_length":"5131","record_id":"<urn:uuid:6b230c39-ad2e-461b-9795-73b15c94079e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Stray light correction Spectral stray-light correction Spectral stray light in a spectroradiometer can be described by the instrument’s spectral line spread function (SLSF). An example of a SLSF of a CCD array spectroradiometer is shown in the figure at the top right. We have developed a correction method to obtain spectral stray-light corrected signal, Y[IB], by using a simple matrix multiplication of the raw measured signals, Y[meas], by a spectral stray-light correction matrix, C[spec], (Equation 1) Y[IB] = C[spec]Y[meas] (1) Only a one-time characterization of the instrument for a set of SLSFs is required to derive the correction matrix, C[spec]. A correction, which can be done in real time, can reduce errors due to stray light by more than one order of magnitude. Thus, the application of this method could lead to significant reductions in the overall measurement uncertainties in applications where spectral components of the source have a large dynamic range. An example of stray-light correction results for a CCD-array spectroradiometer is shown in Figure 1, where the light source is an incandescent lamp with a green bandpass filter. The residues of the stray light signals are at least one order smaller than the original stray-light signals. Figure 1. Plot of raw signals and stray-light corrected signals of a CCD-array spectroradiometers. The peak raw signal is normalized to 1. For more technical information, see Simple spectral stray light correction method for array spectroradiometers and New NIST method improves accuracy of spectrometers. Spatial stray-light correction Spatial stray light in an imaging instrument can be described by the instrument’s point spread function (PSF). An example of a PSF of a CCD imaging radiometer is shown in the figure at the bottom We have also developed a correction method to acquire spatial stray-light corrected signals (transposed to a column vector (CV)), Y[IR,cv], involving simple matrix multiplication of the raw measured signals (transposed to a CV), Y[meas,cv], by a spatial stray-light correction matrix, C[spat], (Equation 2). Y[IR,cv] = C[spat] Y[meas,cv] (2) Only a one-time characterization of the instrument for a set of PSFs is required to derive the correction matrix, C[spat]. A correction, which minimally impacts data acquisition time, can reduce errors due to stray light by more than one order of magnitude. Thus, application of this method could lead to significant reductions in the overall measurement uncertainties in applications where images have high contrast ratios. An example of stray-light correction results for a CCD imaging radiometer is shown in Figure 2, where the light source is an integrating sphere with the center of exit port covered by an opaque black sheet, The plot is a 1 dimensional image signals along the center line across the sphere port. The residues of the stray-light signals are at least one order of magnitude smaller than the original stray light signals. Figure 2. Plot of raw signals and stray-light corrected signals of an imaging radiometer. The peak raw signal is normalized to 1. For more technical information, see Characterization and correction of stray light in optical instruments.
{"url":"http://nist.gov/pml/div685/grp03/spectroradiometry_straylight.cfm","timestamp":"2014-04-16T21:56:25Z","content_type":null,"content_length":"26246","record_id":"<urn:uuid:f2d44303-efad-4c46-9d4f-7fa2cc244e9c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
The Theory of Equilibrium of Elastic Systems and Its Applications Alberto Castigliano We haven't found any reviews in the usual places. NOTATION 1ziii lvi CHAPTER II 29 12 other sections not shown References from web pages Elasticity -- from Eric Weisstein's Encyclopedia of Scientific Books The Theory of Equilibrium of Elastic Systems and its Applications. New York: Dover, 1966. 360 p. Chandrasekharaiah, ds and Debnath, Lokenath. ... www.ericweisstein.com/ encyclopedias/ books/ Elasticity.html Geometry.Net - Scientists: Castigliano Alberto 560 p. $90. Brekhovkikh, Leonid M. Waves in Layered Media. Castigliano, Alberto. The Theory of Equilibrium of Elastic Systems and its Applications. ... www.geometry.net/ detail/ scientists/ castigliano_alberto.html Bibliographic information
{"url":"http://books.google.co.uk/books?id=wU1CAAAAIAAJ&q=The+Theory+of+Equilibrium+of+Elastic+Systems+and+Its+Applications&dq=The+Theory+of+Equilibrium+of+Elastic+Systems+and+Its+Applications&pgis=1","timestamp":"2014-04-17T12:43:14Z","content_type":null,"content_length":"132207","record_id":"<urn:uuid:01086740-e04d-4723-8144-86c2c4a237b2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
“How much math do I need to know to program?” Not That Much, Actually. Here are some posts I’ve seen on the r/learnprogramming subreddit forum: Math and programming have a somewhat misunderstood relationship. Many people think that you have to be good at math or made good grades in math class before you can even begin to learn programming. But how much math does a person need to know in order to program? Not that much actually. This article will go into detail about the kinds of math you should know for programming. You probably know it already. For general programming, you should know the following: • Addition, subtraction, division, and multiplication – And really, the computer will be doing the adding, subtracting, dividing, and multiplying for you anyway. You just have to know when you need to do these operations. • Mod – The mod operation is the “remainder” and its sign is usually the % percent sign. So 23 divided by 7 is 3 with a remainder of 2. But 23 mod 7 is 2. • The even/odd mod test trick – If you want to know if a number is odd or even, mod it by 2. If the result is 0, the number is even. If the result is 1, the number is odd. 23 mod 2 is 1, so you know 23 is odd. 24 mod 2 is 0, so you know 24 is even. If x mod 2 is 0, you know that whatever number is stored in the variable x is even. • To get a percentage of a number, multiply that number by the percent number with the decimal point in front of it. So to get 54% of 279, multiple 0.54 * 279. This is why 1.0 often means 100% and 0.0 means 0%. • Know what negative numbers are. A negative number times a negative number is a positive. A negative times a positive is negative. That’s about it. • Know what a Cartesian coordinate system is. In programming, the (0, 0) origin is the top left corner of the screen or window, and the Y axis increases going down. • Know the Pythagorean theorem, and that it can be used to find the distance between two points on a Cartesian coordinate system. The Pythagorean theorem is a^2 + b^2 = c^2. What this usually means in programming is the distance between coordinate (x1, y1) and (x2, y2) will just be sqrt( (x1 – x2)^2 + (y1 – y2)^2 ). • Know what decimal, binary, and hexadecimal numbering systems are. Decimal numbers are the numbers we’re used to that have ten digits: 0 to 9. It’s commonly thought that humans develop this system because we have ten fingers and counted on our fingers. Computers work with binary data, which is a number system with only two digits: 0 and 1. This is because we build computers out of electronics components where it’s cheaper to make them only recognize two different states (one state to represent 0 and the other to represent 1). The numbers are still the exact same, but they are written out differently because there are a different number of digits in each system. Because hex has 6 more digits than the 0-9 numerals can provide, we use the letters A through F for the digits above 9. The easiest way to show these number systems is with an odometer. The following three odometers always show the same number, but they are written out differently in different number systems: See the Odometer Number Systems page in a new window. You don’t even have to know the math of converting a number from one number system to another. Every programming language has functions that can do this for you. (On a side note, hexadecimal is used because one hexadecimal digit can represent exactly four binary digits. So since 3 in hex represents 0011 in binary and A in hex represents 1010. This has the nice effect that the hex number 3A (which is 58 in decimal) is written in binary as 00111010. Hex is used in programming because it is a shorthand for binary. Nobody likes writing out all those ones and zeros.) And that’s about it. Other than the number system stuff, you probably already knew all the math you needed to know to do programming. Despite the popular conception, math isn’t really used that much in programming. You would need to know math in order to write programs that do, say, earthquake simulators. But that’s more about needing to know math for earthquakes rather than needing to know math for programming an earthquake simulator. Advanced Mathematics in Some Areas of Programming There’s a few areas of programming where some additional math knowledge might be needed (but for 95% of the software you’ll write, you don’t need to know it.) 3D games and 3D graphics – 3D stuff will usually involve knowing trigonometry and linear algebra (that is, math dealing with matrices). Of course, there are many 3D graphics libraries that implement all this math programming for you, so you don’t need to know the math. 2D physics (like Angry Birds) and 3D physics (like many popular 3D games use) – To do programming that involves physics, you’ll need to learn some physics equations and formulas (specifically mechanics, which is the type of physics with springs, gravity, and balls rolling down inclined planes.) However, there are several physics engines and software libraries that implement this stuff for you, so you really don’t need to know the physics equations to make a game like Angry Birds. Cryptography – And really, by cryptography, I just mean RSA. In which case, you’d have to learn some math about how prime numbers work and doing the Greatest Common Divisor (which is a dead simple algorithm, although plenty of programming languages have gcd() function that does this for you.) Other encryption ciphers are mostly moving data around in specific steps. For example, this Flash animation shows the steps in the AES “Rijndael” cipher. All the steps are basically substituting numbers for other numbers, shifting rows of numbers over, mixing up columns of numbers, and doing basic addition with numbers. And that’s just if you want to write your own encryption ciphers (which you shouldn’t do, because there are already plenty of good ones and without expertise your cipher will probably suck and be easily cracked.) If you just want to write a program that encrypts data, there are software libraries that implement encryption and decryption functions already. So even for the above situations, you don’t need to know the math to make programs with 3D graphics, physics, or encryption. Just learn to use the libraries. What You Do Need to Learn to Do Programming What you do need to learn is how to model data and devise algorithms. This basically means, how to take some real-world calculation or some data processing, and write out code that makes the computer do it. For example, in the game Dungeons and Dragons the characters and monsters have several different statistics for combat: • HP, or hit points, is the amount of damage a person can take before dying. More HP means you can take more damage before dying. • AC, or armor class, is a measure of the chance your armor has of blocking an attack. The lower the AC, the more protective the armor is. • THAC0 (pronounced “thay-co”), or “To Hit Armor Class 0”, is a measure of how skillful the person is at making a successful hit on an opponent. The lower the THAC0, the more accurate the person’s attack is. • The damage of the weapon is written out as something like 1d6+2. This means the damage is the amount from rolling 1 six-sided dice, and then adding 2 to it. A damage stat of 2d4 would be rolling 2 four-sided dice and adding them together. (Dungeons and Dragons uses 4, 6, 8, 10, 12, and 20-sided dice.) To see if an attacker hits a defender, the attacker rolls a twenty-sided die. If this number is equal to or greater than the attacker’s THAC0 minus the defender’s AC, then the hit is successful and the defender takes damage. Otherwise, the defender has either dodged or blocked the attack and takes no damage. Let’s take two Dungeon and Dragons characters, Alice and Bob, with the following stats: • Alice: HP 14, AC 5, THAC0 18, DAMAGE 1d6 • Bob: HP 12, AC 7, THAC0 16, DAMAGE 2d4 | Next page
{"url":"http://inventwithpython.com/blog/2012/03/18/how-much-math-do-i-need-to-know-to-program-not-that-much-actually/?wpmp_switcher=mobile&wpmp_tp=0","timestamp":"2014-04-19T14:29:52Z","content_type":null,"content_length":"19723","record_id":"<urn:uuid:5a042069-edd2-487e-bb60-f343ce6809c1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Post #1475411 03-15-2013 04:04 PM #0 Working backwards then, find the point 0.29 log-H left of 0.1 of the film-developed-to-ISO-conditions curve. Find slope of the tangent to that point and multiply by 3, do 1.50 times that, to find the density delta. Make a triangle with that density and 1.50 log-H and find where that triangle meets the curve. What will I find? Will some of the measurement points coincide or be in a predictable place? Or is it "Not locked down" and the Average Gradient triangle could wiggle anywhere depending on the shape of the toe? I've never tried it. I have found that long toed cuves have a slightly higher CI index when matching the ISO parameters than straighter curves. This make me assume there won't be a perfect match with the 1.50 fractional gradient approach all case. Remember the graph of the spread function? But neither does the fractional gradient method always perfectly conform to the print judgment Last edited by Stephen Benskin; 03-15-2013 at 04:12 PM. Click to view previous post history.
{"url":"http://www.apug.org/forums/viewpost.php?p=1475411","timestamp":"2014-04-20T23:29:01Z","content_type":null,"content_length":"12452","record_id":"<urn:uuid:f311b09c-c6ee-483a-a4f1-1c650e63f97a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Regarding transforming Control Flow Graph "Deep.V...@gmail.com" <vipin.v.deep@gmail.com> 11 Apr 2005 00:22:29 -0400 From comp.compilers | List of all articles for this month | From: "Deep.V...@gmail.com" <vipin.v.deep@gmail.com> Newsgroups: comp.compilers Date: 11 Apr 2005 00:22:29 -0400 Organization: http://groups.google.com Keywords: optimize, question Posted-Date: 11 Apr 2005 00:22:29 EDT Dear Reader, I am doing a project where I have to modify the program so that only subset of its path are retained and remove rest of the portions. For this I have to modify the control flow graph retaining necessary paths and pruning rest. What are the suffecient criterion to say that the transformations are legible. Some of them which I can think are- retaining dependencies, ensure return values. Is there any criterion which I have to prove (to be valid), for arguing that my transformations are correct. Deepak V Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/05-04-036","timestamp":"2014-04-17T03:51:23Z","content_type":null,"content_length":"4219","record_id":"<urn:uuid:cace3d4c-5de3-44df-a11f-8a91296a1467>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
India's Engineering Grads Cannot Solve Simple Math Problems 33368279 submission chiguy (522222) "MIT alumnus Varun Aggarwal and IIT-Delhi graduate Himanshu Aggarwal released a study suggesting that 30% of Indian engineering graduates can't solve simple math problems. As reported in India Today: "A bag is full of 20 bananas and no other fruit. Rajeev draws a fruit from the bag. What is the probability that he will draw a banana? An embarrassing 30 per cent of the country's engineers cannot solve a problem as simple as the one above, a study has found. Their ineptitude, however, is not limited to just sums of probability. It's worse as over one-third engineers do not possess mathematical skills needed in day-to-day life for doing simple transactions, counting and arranging. In other words, they have a weak understanding of concepts as elementary as decimals, powers, operations, ratio, fractions and the ability to apply these concepts to real-world problems." Is this surprising? How does this compare to American/Western countries?"Link to Original Source This discussion was created for logged-in users only, but now has been archived. No new comments can be posted. • by treeves (963993) I guy I know on Facebook (well I know him from the Navy) posted a question his sixth grade son got for math homework: you have a bag with 5 red marbles, 3 blue marbles and two green marbles in it. You pull out two marbles without looking inside. What is the probability you will pull out two that are different colors? Show your work. I answered it and I'm sure I got it right, but he didn't post the "correct" answer and I don't think most American high school students could do it, heck college students for th □ Re: (Score:1) by Ignacio (1465) Elementary school kids can barely multiply these days. Asking them to do hypergeometric analysis is... yeah. Also, 5/10*5/9 + 3/10*7/9 + 2/10*8/9 = 62/90 = 31/45. So a little over two-thirds. □ Re: (Score:2) by iamhassi (659463) I guy I know on Facebook (well I know him from the Navy) posted a question his sixth grade son got for math homework: you have a bag with 5 red marbles, 3 blue marbles and two green marbles in it. You pull out two marbles without looking inside. What is the probability you will pull out two that are different colors? Show your work. I answered it and I'm sure I got it right, but he didn't post the "correct" answer and I don't think most American high school students could do it, heck college students for that matter. But a sixth grader? No way. are you sure that's 6th grade? Because this school has a very similar but easier (just two colors) marble problem and they're saying it's 8th grade. [portangelesschools.org] More colors means it's more difficult, so I'm guessing that problem is somewhere above 8th grade, at least according to the Port Angeles School District [portangelesschools.org] • Anyone familiar with the biography of Richard Feynman will know how he went to Brazil and looked at the country's physics education, and reported back that, in effect, they didn't have any: they just had rote learning. So the student could recite the formula for the deflection of a light ray entering a region of different refractive index, but couldn't even recognise where it was appropriate in a real-world situation. I would suggest, as an old fogey, that a lot of this arises from course work rather than ex • by iamhassi (659463) HR doesn't care, he has a "college degree" and he's cheaper than Western college grads so he's hired! Most companies don't give math tests and don't hire engineers to pick bananas from a bag. • by InsertCleverUsername (950130) Speaking from experience working with various outsourced groups doing software development, I'm surprised it's only 30 percent. I cringe every time I find myself somewhere that I'm required to work with these groups because I typically spend more time explaining, hand-holding, code reviewing, and often completely re-doing the work than if I just did it myself. Part of it is a better understanding of the business, of course, but part of it is just a lack of ability. In my experience, those with talent hav • It's not a math problem, it's a language and logic problem. Take the number 20 out and it's still the same problem. If you say there are bananas in the bag and no other fruit, it's reasonable to infer that bananas are a fruit, therefore pulling out a fruit means you get a banana every time. The numbers are no more mathmatically significant than the symbols used in sudoku. And despite whatever calculation skills may exist in the populace, there is an appalling inability to employ logic. Look how many people are
{"url":"http://entertainment.slashdot.org/submission/2086333/indias-engineering-grads-cannot-solve-simple-math-problems","timestamp":"2014-04-17T00:50:13Z","content_type":null,"content_length":"74155","record_id":"<urn:uuid:d7803209-27da-445e-b348-221b401263f2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
degenerating surface up vote 4 down vote favorite Hi, i have a sequence of immersed disc $u_n: \mathbb{D} \rightarrow \mathbb{R}^3$ which converge to a singular cover of the disc: $z^k$ for $k\geq 2$, moreprecisely $u_n \rightarrow z^k$ in $C^2(\ mathbb{D})$. Of course the Gauss curvature of the image $\Sigma_n=u_n(\mathbb{B})$ blows up thanks to Gauss-Bonnet formula : $\int_{\Sigma_n}K= (1-k)2\pi + o(1)$. My questions are the following: 1) can the Gauss curvature be bounded form above? i.e the blow-up come only from necks and there is no pinching region... my feeeling is no, since you have to close the surface. 2) Extra bonus: same question with only a convergence in $C^2_{loc}(\mathbb{D}\setminus \{ 0\})$, here we allow the closing of the surface be made by a big sphere for example. gt.geometric-topology dg.differential-geometry blow-ups branched-covers add comment 2 Answers active oldest votes When $k$ is odd, there does exist such a family of immersions satisfying Paul's requirements. (When $k$ is even, Vitali Kapovich has shown, using a clever topological argument, that it's not possible to have such a family of degenerating immersions. Please see his answer for the details.) Set $k=2m+1$, and consider the (complex) $1$-parameter family of maps $u_t:\mathbb{C}\to\mathbb{R}^3$ given by $$ u_t(z) = \bigl(Re(z^{2m+1}-(2m{+}1)t^2z),\ Im(z^{2m+1}+(2m{+}1)t^2z),\ \tfrac{4m+2}{m+1} Re(t z^{m+1})\ \bigr). $$ These smooth maps converge smoothly to $u_0$ as $t\to0$, and $u_t$ induces the metric $$ ds_t^2 = (2m{+}1)^2\bigl(|z|^{2m}+|t|^2\bigr)^2 |dz up vote 8 down |^2. $$ Thus, $u_t$ is an immersion for $t\not=0$, while $u_0(z) = \bigl(Re(z^{2m+1}),\ Im(z^{2m+1}),\ 0\ \bigr)$. vote accepted The family $u_t$ was constructed using the Weierstrass formula for minimal immersions, so the image $u_t({\mathbb{C}})\subset \mathbb{R}^3$ is an immersed minimal surface, and, as a result, the Gauss curvature is everywhere non-positive. In fact, for $t\not=0$, the Gauss curvature only vanishes at $z=0$, and then only when $m>1$. In particular, all of these degenerating immersions have curvature bounded above. add comment for question 1) it's not clear to me why $C^2$ approximating $z \mapsto z^k$ by immersions is possible at all. do you have an example of such a sequence? edit: Ok, I thought some more on the topological issue of whether it's always possible to deform $z \mapsto z^k$ to an immersion in $C^2$ and I can say that it is definitely NOT possible if $k$ is even. In particular, it's impossible when $k=2$. There is an easy necessary condition for an existence of such deformation. An immersion $f$ of a disk gives a parallelization of the tangent bundle along $f$, i.e we get a map $f'\colon D^2\to V_2(\mathbb R^3)$ where $V_2(\mathbb R^3)$ is the Stiefel manifold of orthonormal 2-frames in $\mathbb R^3$ which is of course just $SO(3)$. This map ought to extend the map on the boundary of the disk $S^1$ which being an immersion already won't change much under a small deformation. That map is essentially given by $z\ to kz^{k-1}$ which disregarding the conformal factor $k$ can be thought of as a map $S^1\to SO(2)=V_2^{or}(\mathbb R^2)\subset V_2(\mathbb R^2)=O(2)$. The natural map $V_2(\mathbb R^2)\to V_2 (\mathbb R^3)$ corresponds to the standard inclusion $SO(2)\to SO(3)$ on identity component. Now, $\pi_1(SO(2))\cong \mathbb Z$ and $\pi_1(SO(3))\cong \mathbb Z/2\mathbb Z$ and the map $\pi_1 (SO(2))\to \pi_1(SO(3))$ is well-known to be onto (as is immediate from the long exact sequence for the fibration $SO(2)\to SO(3)\to S^2$). That means that the map $z\mapsto z^{k-1}$ gives a generator of $\pi_1(SO(3))$ when $k$ is even and thus can not be extended to $D^2$. up vote 11 down (Note that this takes care of both 1) and 2) when $k$ is even.) I think the above necessary condition should also be sufficient and therefore when $k$ is odd such a deformation should always be possible (at least in $C^0$) as evidenced by $k=1$ of course. I'm not at all an expert on immersions but the subject is very well understood and I hope someone who knows more about it will chime in. OK, Robert Bryant answered this (see below). Moreover, a direct calculation shows that his example produces a family of immersions $f_t$ with with the induced metrics $ds_t^2$ having $sec\le 0$ for all $t>0$. This settles the original question in the positive for $k$ odd. [S:For question 2) you can not expect any upper curvature bounds. Given an immersion you can locally perturb it on an arbitrary small neighborhood of 0 by adding a small thin "finger" to your surface. this will introduce some arbitrary positive (and negative) curvature. Doing this along a given sequence converging in $C^2_{loc}(\mathbb D\backslash \{0\})\cap C^0(\mathbb D)$ will keep such convergence.:S] got you. then please disregard my original answer about 2). Note however that my edit above shows that when $k$ is even you can not at all extend the immersion $z\to z^k$ near the boundary $S^1$ to an immersion $D^2\to \mathbb R^3$. this does take care of both 1) and 2) in that case. I don't really want to think about possible geometric restrictions when $k$ is odd until I'm certain it's actually possible. – Vitali Kapovitch Nov 22 '11 at 17:52 1 The case $k=2m+1$ does occur. Consider the (complex) $1$-parameter family of maps $u_t:\mathbb{C}\to\mathbb{R}^3$ given by $$ u_t(z) = \bigl(Re(z^{2m+1}-(2m{+}1)t^2z),\ Im(z^{2m+1}+(2m{+} 1)t^2z),\ \tfrac{4m+2}{m+1} Re(t z^{m+1})\ \bigr). $$ These smooth maps converge smoothly to $u_0$ as $t\to0$ and $u_t$ induces the metric $$ ds_t^2 = (2m{+}1)^2\bigl(|z|^{2m}+|t|^2\bigr)^ 2 |dz|^2. $$ Thus, $u_t$ is an immersion for $t\not=0$, while $u_0(z) = \bigl(Re(z^{2m+1}),\ Im(z^{2m+1}),\ 0\ \bigr)$. – Robert Bryant Nov 22 '11 at 21:25 Thank you Vitali and Robert for your proof of the fact an immersion is possible if and only if $k$ is odd. – Paul Nov 22 '11 at 22:15 @Robert Bryant This is a really nice example!! It was pretty clear to me that a $C^0$ approximation like this near 0 is possible but I wasn't sure if it can still happen with $C^2$ convergence. Note that your example also answers the original question by the OP because $ds_t^2$ has nonpositive sectional curvature for all $t>0$. – Vitali Kapovitch Nov 22 '11 at 22:19 @Vitali: Thanks. Actually, I didn't realize that this is what Paul was asking in 1). I thought that he was asking whether you could prove an upper bound for the curvature, not whether there was some example with an upper bound. Now that I read over the question again, I realize that your interpretation makes more sense than mine. I did know that the curvature was nonpositive everywhere, since, after all, these immersions are minimal surfaces. I guess I should claim the reward, or should we split it. – Robert Bryant Nov 22 '11 at 23:57 show 1 more comment Not the answer you're looking for? Browse other questions tagged gt.geometric-topology dg.differential-geometry blow-ups branched-covers or ask your own question.
{"url":"http://mathoverflow.net/questions/81325/degenerating-surface","timestamp":"2014-04-17T15:43:10Z","content_type":null,"content_length":"64723","record_id":"<urn:uuid:95455515-b50d-47e6-a789-a2f48764119f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Paper stirs up controversy over the nature of the quantum wave function The researchers' definition of a physical property is illustrated here. Image (c) Nature Physics (2012) doi:10.1038/nphys2309 (Phys.org) -- Back in November, a paper posted to a preprint server arXiv by three British physicists prompted some heated debate regarding the nature of the quantum wave function, a probability function that physicists use to help them better understand the quantum world. At the time, the three refrained from joining in on subsequent discussions on the paper due to pending acceptance of the paper in the journal Nature Physics. Now that the paper has been accepted and printed, the three, Matthew Pusey, Jonathan Barrett and Terry Rudolph are openly defending their assertion that the wave function is real, not some function that is dependent on available information for the user when using it. At the heart of the issue are the contrasting ideas on the very nature of quantum mechanics itself. In their paper, the British physicists contend that the wave function is not just a tool that can be used for statistical purposes, but can measure actual real things. Others have suggested that it cannot be a real tool because of inconsistencies in observable quantum mechanics, such as entanglement. Because of such inconsistencies, physicists such as Einstein contended that our knowledge or model of quantum mechanics is incomplete, not wrong. It’s possible the thinking goes, as an example, that because two distant entangled particles react in identically the same way at the same time, seemingly sharing information faster than the speed of light, that there is some new element of quantum mechanics at work that would allow for such a real world phenomenon to exist, rather than an example of the failure of quantum mechanics theory itself. Another example is the differing views regarding Schrödinger’s cat. Some might say the wave function could be used to prove whether the unseen and thus un-measurable cat is truly dead or alive, whereas others, such as Einstein would say that because the inquisitor has only partial knowledge, no true answer can be given. The problem with proving which view is true is the theory that most agree on and that is that quantum states can be changed simply by measuring them, which means, that as things stand now, physicists have no way of proving what state existed prior to measurement. But that doesn’t mean the wave function can’t be used to measure a quantum state, Pusey et al say, because the true state did exist before measurement occurred. At that moment it was real, they say, as is the wave function and they believe they have proved it. More information: On the reality of the quantum state, Nature Physics (2012) doi:10.1038/nphys2309 Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state truly represents. One possibility is that a pure quantum state corresponds directly to reality. However, there is a long history of suggestions that a quantum state (even a pure state) represents only knowledge or information about some aspect of reality. Here we show that any model in which a quantum state represents mere information about an underlying physical state of the system, and in which systems that are prepared independently have independent physical states, must make predictions that contradict those of quantum theory. via Nature News 1 / 5 (1) May 09, 2012 Here we show that any model in which a quantum state represents mere information about an underlying physical state of the system, and in which systems that are prepared independently have independent physical states, must make predictions that contradict those of quantum theory. - research authors My imagination falls short to see the consequences of considering the "One possibility is that a pure quantum state corresponds directly to reality." Perhaps others see consequences arising from considering this one possibility. (Besides labeling other approaches to 'reality' superceded or obsolete) Awaiting other readers comments. 3 / 5 (2) May 09, 2012 We have measured the quantum function already - so I presume, what is measurable, it does exist physically. Or we should doubt the existence of photons, electrons, even the atoms and many other artefacts, which were never observed directly with naked eye, only "measured". 2.3 / 5 (7) May 09, 2012 We can misinterpret measurements, we do it all the time. Ghost hunters measure properties of ghosts, psychics measure the success rate of their predictions, etc etc... We measured SOMETHING, and we THINK it is ... and we THINK it is caused by ... Scientists should never be dogmatic. 1 / 5 (1) May 09, 2012 From article: Its possible the thinking goes, as an example, that because two distant entangled particles react in identically the same way at the same time, seemingly sharing information faster than the speed of light, that there is some new element of quantum mechanics at work that would allow for such a real world phenomenon to exist, rather than an example of the failure of quantum mechanics theory This statement should keep string theorists busy for a while trying to figure out how to change their theory to match prediction. 1.1 / 5 (8) May 09, 2012 From article: Its possible the thinking goes, as an example, that because two distant entangled particles react in identically the same way at the same time, seemingly sharing information faster than the speed of light, that there is some new element of quantum mechanics at work that would allow for such a real world phenomenon to exist, rather than an example of the failure of quantum mechanics theory itself. This statement should keep string theorists busy for a while trying to figure out how to change their theory to match prediction. It has never been proven that two distant "entangled particles" are in fact entangled or have any effect on each other, other than gravitationally or electromagnetically. 3.6 / 5 (7) May 09, 2012 I have always likened entanglement to using the same random number seed in a program. If I run the same piece of code on different machines using the random function I will get different numbers until I set the seeds equal. Entanglement, to me, is exactly that. This is in essence a hidden variable and breaks Bell's Inequality. Lots of really smart people say my idea doesn't work and I tend to side with the really smart people, not the armchair physicist. Is it possible that this idea of the wave function being real provides a means to get around this problem? On one side, measuring a quantum particle "collapses" the wave function and the quantum state "becomes real". But if the wave function is always real then the to be measured state was always there, waiting to be discovered. Seems like semantics and not a way around the hidden variables 3 / 5 (3) May 09, 2012 We measured SOMETHING, and we THINK it is ... and we THINK it is caused by ...Scientists should never be dogmatic. Nobody is dogmatic here. Until we have no better argument, we should consider, the measured artefact corresponds the observed reality. Or should be doubt the existence for example the electron just because the related measurements COULD be interpreted differently in the future? Until we have no good reason, the experiment always goes first in physics. 1.7 / 5 (7) May 09, 2012 Consciousness can be represented by neural frequency (vibrational energy of neurons interacting with other neurons causing like vibration). The vibration of a neuron cascades to other neurons affecting their vibration. So, 1 neuron influences another by wave energy generated by vibration. Here is the kick. The cerebral neural vibrational energy influences the objects in the measurers surroundings. So the thought of the observer (measurer) affects and changes the object being measured. When you expect to see an electron representing itself in a certain way you are translating that vibrational energy to the electron and it acts in accordance. The conscious thought of the observer has an influence on matter. Matter is shaped by consciousness. 3 / 5 (2) May 09, 2012 1 / 5 (1) May 09, 2012 what if question is not observing Schrödingers cat directly, as if in a completely removed state, uncorrelated with anything else. what if observation occurs indirectly, via entanglement with the maker of the box or person who set up the experiment, or a connection with ones housecat, patterning which may provide (sensory) 'tells', and is a more real-world experience. then, perhaps, observation of the cat might occur through correlation with the construction or constructor of the experiment, outside the boxed-in boundary of 'the experiment'. in other words, what if the path to the cat's condition is through the box or the person who set up the lab experiment, querying their correlation/entanglement, as this could feasibly provide more information existing beyond the boundary proper. and if not that, why not use a thermal camera? 2.7 / 5 (12) May 09, 2012 Of course the wavefunction is not a "thing", it is a model or mathematical conceptualization of the underlying reality. The very act of conceptualizing reality, changes Reality , to a form which is dependent upon mind. Everything is fine until a measurement is performed because this is the seam between Reality as it is in it self, and the interface to mind. The act of acquiring knowledge of reality demands that Reality conform to its presupposed conceptual framework. There IS an objective Reality apart from mind, yes, ... it's just that it is unknowable in its original form (if it is even meaningful to ascribe a form to it, as it exists apart from being 1 / 5 (1) May 09, 2012 i would like to further add: if conceptualizing the wavefunction as a "circuit", even if of information (somehow tied to matter/energy) that it too would need to be grounded if connecting two or more realms together [point-line-plane]. consciousness as circuit, subconscious, etc. then, via physical if mental interaction with variables involved in the Schrödingers cat experiment, if a wavefunction were somehow connected to the cat, at some previous time, and correlated in this way, it may be possible to query that circuit and indirectly evaluate its status, than to determine its fate by opening the box. the claim then would be that the quantum correlation or circuit potentially exists beyond the box by default, in real-world conditions of things interacting with things in the environment, and thus the observation itself is entangled in the world by default of its construction. as is the cat, its previous life, all those interactions, potentially circuits involved in how it is grounded. 1.3 / 5 (6) May 09, 2012 ,... that entanglement is irrational to us, is equivalent to saying that the artificial form (of space and time) that we wish to conform reality to, fails to provide consistentcy and intuitional sense wrt observations. Reality does not "fit" within this conceptual framework, ,.. It is free of such intuitive relations, unlike the mind. 3 / 5 (5) May 09, 2012 Interesting. This position would support Many Worlds or de Broglie-Bohm interpretation at the expense of Copenhagen interpretation. And I tend to agree with them. 3 / 5 (2) May 09, 2012 in other words: could the presumed conditions of the original experiment involve that 'the truth' of the existence of the cat is independent of all other reality? or is that impossible- and that the cat is heavily entangled with the world as a condition of being part of it, in whatever ways and dimensionality, and that some of these likely exist outside of the limits of the box - the truth of the cat connected with other structuring (truth of the box, experiment maker, the weather) as these are grounded in interconnected circuitry. in other words: the box does not and cannot contain the entangled/wavefunction of truth of its being, in its entirety, and perhaps like a map, this connectivity back to it from various angles/facets, could provide more information prior to triggering its fate by touching the box. thus, perhaps 'weighing' its status via probability of these other connected factors, their truth as it may ground in shared circuitry with the cat. this could even be assumed, no? 3.3 / 5 (3) May 09, 2012 The many worlds interpretation is reconciliation of possible states (results of wave-function). It is just a way of balancing the fractalization of reality by stating that the untrue states of our universe still exist (are true) in the grander scheme. So the unfulfilled possibilities are fulfilled in another time and place, like a branching of reality. The many worlds interpretation is the inability to accept that reality chooses a certain path over another by stating that both paths actually exist, we just diverge in a singular direction. IMO the many worlds interpretation is a construct of the mind to balance mathematical equations and not the actual state of reality. When 1/2 states becomes true the other state (2/2) becomes false. The many worlds interpretation says there is no collapse into a true state, that both 1 and 2 of 2 are true in different dimensions. In matrix form: T F - world 1 F T - world 2 3.7 / 5 (3) May 09, 2012 Not knowing anything in depth about this (just a fascinated tech lurker) it seems that the quantum states may already be in the detected state to begin with. Have we ever actually detected a state change when a particle is measured, analyzed, or looked at? Also, could entanglement simply be the fact that a clone (or cut and paste) of the original particle is just that - an exact copy and therefore an extension (characteristics and all) of the original cloned particle with these common elements wired through the quantum foam underlayment? Again, just a layman trying to connect the dots as I can understand them. 5 / 5 (7) May 09, 2012 A (free) preprint of the paper submitted to Nature can be found here: http://arxiv.org/...28v2.pdf 4.8 / 5 (10) May 09, 2012 The question whether the wave function is real or not is a bit more tricky: We never measure the wave function directly (which is a complex function) but only ever the square of the function (the square of a complex function being a real function. In this case denoting probabilities/probability densities) That it doesn't satisfy our ego, or isn't plausible or whatever shouldn't be an argument for or against it. What we find plausible are only things that we have macroscopic analogies/experiences for. But analogies only work if they are more fundamental than the thing they are an analogy for. Quantum ohysisc is more fundamental than any of our everyday experiences, so OF COURSE it will not conform to our analogies and OF COURSE we will not find it 'plausible'. But observation/experiments trumps what seems plausible. So whether you feel good or bad about it doesn't make any difference. Quantum physics works. Use it. 4.6 / 5 (11) May 09, 2012 The cerebral neural vibrational energy influences the objects in the measurers surroundings. Huh? Pick up a book on neurology. Preferrably a children's book. It will make it clear to you that you are wrong about the brain in every respect. Every one. Matter is shaped by consciousness. Sooooo. No matter was around before consciousness came into being (oh, about a few billion years after the universe started)...and that consciousness wasn't formed out of matter because matter wasn't formed for lack of consciousness. Can I have the number of your drug dealer? Must be some good stuff he's selling. Do you ever check your thoughts for consistency (much less reality) before writing them down? 1.9 / 5 (9) May 09, 2012 We never measure the wave function directly Never ever say never: Nature.com: Direct measurement of the quantum wavefunction. BTW If you would read my posts here, you would already know about it. Your trolling brings the words of Lord Kelvin on my mind: "Heavier-than-air flying machines are fantasy. Simple laws of physics make them impossible." 1.8 / 5 (5) May 09, 2012 The energy the observer senses from the object being observed stimulates brain matter. The energy of your computer screen is causing your neurons to change the way they act, this is your awareness of the screen. You can turn the monitor off and see the image internally independent of the screen. Your neurons mimic the energy they receive, this is memory. So the screen causes mechanical motion of your neurons, perception. But the neurons have the ability to reproduce this motion in the absence of the screen, this is memory. So the screen affects your neural frequency. Quantum mechanical experiments show the inverse to also be true. The energy that your neurons emit influence the object you are measuring. 2.3 / 5 (3) May 09, 2012 The question whether the wave function is real or not is a bit more tricky Not tricky at all if you look at it from a math perspective. Complex means it has an imaginary component. That would be not real. What exactly do the authors of the paper mean again? But analogies only work if they are more fundamental than the thing they are an analogy for. Nonsense AP. All that is required of an analogy is that it be analogous, not "more fundamental". That said, trying to understand QM from our macro perspective is difficult. You got it right. Quantum physics works. Use it. 5 / 5 (1) May 09, 2012 From article: Its possible the thinking goes, as an example, that because two distant entangled particles react in identically the same way at the same time, seemingly sharing information faster than the speed of light, that there is some new element of quantum mechanics at work that would allow for such a real world phenomenon to exist, rather than an example of the failure of quantum mechanics theory itself. This statement should keep string theorists busy for a while trying to figure out how to change their theory to match prediction. It has never been proven that two distant "entangled particles" are in fact entangled or have any effect on each other, other than gravitationally or electromagnetically. Actually they have been able to cause a vibration in one entangled object and have the other respond when at a distance. 3 / 5 (8) May 09, 2012 Complex means it has an imaginary component. That would be not real. Imaginary in mathematical sense has not nothing to do with imaginary in physical sense. For example, the particles of water at the surface waves are doing circular motion, so that this motion has an imaginary component from strictly geometrical perspective. But is such imaginary component imaginary from physical point of view? Of course not: if you put a floater at the water surface, you can follow this "imaginary" motion clearly along whole its path. Albert Einstein: "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to the reality". The contemporary physicists often tend to consider their abstract theories as a facts and vice-versa. 4.3 / 5 (6) May 09, 2012 Complex means it has an imaginary component. That would be not real. Complex and imaginary as in mathematical sense. This does not mean that the part of a formula containing imaginary numbers is never observable. E.g. many formulas in electronics are given using complex numbers where both the imaginary and the real component map onto physically real quantities. Trigonometric functions can be written using imaginary exponents (which can describe wave motions that are very real) Using the imaginary component( or components - there are also 'hyper'complex functions with imaginary componets i,j, and k for example...or even more) is just a neat way writing multidimensional functions without resorting to matrix mechanics. You can rewrite any (hyper)complex formula using NxN matrices (N being the number of distinct complex components plus one) and vice versa. 1 / 5 (2) May 09, 2012 But analogies only work if they are more fundamental than the thing they are an analogy for. You're completely right about it (with compare to the above slip with "direct wave function measurement"). The other parables, which "don't work" are called homologies. What you just said is actually the definition of the difference between homologies and analogies. I know about it very well, because I'm using the analogies in my explanations of dense aether model often - so I must be sure, I'm not using homologies, the similarity of whose with subject is only accidental. The usage of homologies is typical for various crackpots. 3.8 / 5 (4) May 09, 2012 All that is required of an analogy is that it be analogous, not "more fundamental". We use analogies to make predictions about how the thing we are making analogies for works. If we use less fundamental analogies (waves, particles, or field lines in electrodynamics) then we always end up confusing ourselves with unworkable extrapolations. Waves don't work in QM. Particles don't work QM. Field lines don't work as an analogy in electrodynamics (beyond the most trivial of examples). The things that work are square roots of probability densities in QM (and field equations/potential fields in ED). That we have NO everyday analogy to these concepts should not surprise us because our everday conceptions are built on QM (and ED, and Relativity, and...). Going back from macroscopic to microscopic by using macrsocopic anaolgies is self defeating. If the math works use it. Who cares whether we can conceptualize it? 1 / 5 (2) May 09, 2012 The question isn't, whether the quantum wave function (QWF) is real, because it's abstract object, invented with humans and as such it cannot be real by its very definition. Instead of it, the question is, whether the motion, corresponding the QWF may exist physically behind various quantum phenomena. In dense aether model the motion corresponding the QWF is very relevant for low-dimensional particles, like the photons and/or deBroglie wave formed around particles in motion. We must accept the principle of relativity, though: for observer in parallel motion with flying electron the deBroglie wave around electron will effectively disappear, because such an electron is at rest with respect to the observer. This paradox is not so strange, if we realize, that the water surface, which is nearly undetectable for slowly moving observer becomes a concrete wall, if you'll hit it in high speed. Another example, where the principle of relativity applies is so-called the collapse of wave function. 1 / 5 (2) May 09, 2012 If the math works use it. Who cares whether we can conceptualize it? That's the right question. The answer is, such an conceptualization may lead into new testable predictions directly (like the superluminal speed of gravitational waves in dense aether model). And it can even helps us in development of another mathematical models, i.e. indirectly. I even believe, every formal model has some conceptualization behind in - no matter how trivial it can be. We cannot solve even the most trivial homework from physics, if we don't understand the subject at all at its conceptual level. Without such a conceptual understanding we're condemned for blind combination of equations. Surprisingly, mainstream physicists often believe, they can advance in understanding of the Universe without conceptual understanding. This is very naive stance: if we cannot solve even the most trivial homework without it, how can we expect, we could ever succeed with it at the case of complex theories? 1 / 5 (2) May 09, 2012 The good question in this connection is, whether the underwater vortex ring is real? Inside of incompressible fluids there isn't even any density gradient behind it. We can still visualize it with dye or air bubbles, but without such tools it's just rolling mass of water, which still propagates through underwater like rigid body. If we cannot observe the vortex ring inside of water easily, how difficult it may be to observe the hidden motion of vacuum around particle? You cannot observe it at distance, until you get direct contact with it, which will destroy it at the same moment. A tomographic or stroboscopic technique may usually help with this task: instead of observation of single ring we can analyze the momentum transferred during collision with many rings at different phase of their motion - from these consecutive observations we could reconstruct the complete motion of the ring. This is the principle, in which the observation of wave function is made in quantum 5 / 5 (2) May 09, 2012 We use analogies to make predictions about how the thing we are making analogies for works. I'd argue instead that we use analogies to communicate and to understand. Making predictions by extrapolating from our analogies is a very dangerous business. We mustn't take our analogies too far. When using an analogy it is very important to understand the strength of the analogy and to set the limits of what is relevant. But we argue semantics when we agree on the relevant point. I see neither you nor Zephir appreciated my weak pun on real/imaginary. Humorless, the lot of you! 5 / 5 (5) May 09, 2012 The question isn't, whether the quantum wave function (QWF) is real, because it's abstract object, invented with humans and as such it cannot be real by its very definition. Wrong. That is exactly what this article is about. And your "very definition" is nonsense. This paradox is not so strange You "paradox" isn't. It is an absurdity. Try again perchance? ...flying electron... More nonsense. Surprisingly, mainstream physicists often believe, they can advance in understanding of the Universe without conceptual understanding. No Heisenberg explicitly showed that our attempts at a conceptual understanding is what was holding us back from advancing in Quantum Theory. We must abandon our conceptualizations in favor of what we observe to be true. The analogies to our everyday world can't be extrapolated into the quantum world. (Look at me. I pulled it full circle between comments.) 1 / 5 (3) May 09, 2012 Wrong. That is exactly what this article is about. Mathematical objects aren't real. You even cannot find a single triangle in the nature. Which function is real by you, after all? If you'll try to answer this question and give me some example, you can recognize, that the quantum wave function cannot be any exception. Heisenberg explicitly showed that our attempts at a conceptual understanding is what was holding us back from advancing in Quantum Theory I don't understand. How Mr. Heisenberg "explicitly" showed it? We must abandon our conceptualizations.. LOL, the only thing which we really have to do is to die... :-) Everything else is a viable option. Stop with religious preaching and try to argument logically and objectively. 1.7 / 5 (6) May 09, 2012 We must abandon our conceptualizations.. The mainstream physicists are aware of conceptual problems of their theories and their adherence to dogmatic stances is the only "argument" which they really have. In similar way, the ancient theologians claimed, we should abandon any attempt for explanation of "God's will". Of course, both "recommendations" are motivated with the same fear of lost of information monopoly, which just did hit its limits. Do you really think, I'm so stupid, I cannot recognize this motivation? Nothing changed in thinking of people from medieval times.. not rated yet May 09, 2012 regarding 'many worlds', where the false is sustained as potential truth (if via states of superposition). still, 'the false' would not be a grounded circuit insofar as it is not actually true, overall, when collating the many worlds into a single empirical model of truth, the truth of each world in a shared/unified model of reality. the falsity does not continue as 'truth', it falls away from this cosmic tree of fractal connection. thus if 'truth' (or more likely pseudo-truth) could be correlated in this way, that would be the structure connecting events in the model (hypotheses) as this falsity is removed via experiment, observation, testing. yet what if falsity is structuralized. then modeling of truth is bounded, ungrounded or short-circuiting by default, if due to error or design. testing ideas in such models forcing perspectives, understandings, to retain these limits, vs. integration of truth beyond multiplicitous finite (rel) boundaries. integrated truth/circuitry 5 / 5 (2) May 09, 2012 The good question in this connection is, whether the underwater vortex ring is real? ROFL. Yeah Zephir. That is exactly what I was thinking. Errr.... how difficult it may be to observe the hidden motion of vacuum around particle? See, this is what we are talking about. Your analogies don't work. Your intuitivity don't work with quantumtivity. Momentum transfer between quantum wave functions? Ummm, methinks you use lots of words together that don't belong together. But maybe I just didn't fully understand the abstract you linked to 5 / 5 (7) May 09, 2012 Of course, both "recommendations" are motivated with the same fear of lost of information monopoly, HAHA. You are funny. No, quantum theorists were motivated by discovering a theory that described reality. Not protecting an information monopoly that didn't exist. In the early 20th century the physicists kept coming up against observations that didn't make sense within their analogies/conceptualizations. Heisenberg promoted tossing the conceptualizations out the window and instead just doing what worked. Perhaps my "explicitly showed" was a poor word choice, but that Quantum Mechanics works while attempts at conceptualization fail is undisputed by all but you. 1 / 5 (1) May 09, 2012 whether the underwater vortex ring is real How you would prove, the underwater vortex ring is real, if you couldn't leave the underwater and you cannot visualize the vortex ring with dyes or bubbles? The only thing which you know is, this vortex ring collapses in direct contact with observer. IMO it's quite realistic analogy and rather close to detection of motion inside of real particles in the vacuum. Actually, the observation of quantum wave packets in vacuum is generally easier, because the vacuum is formed with foam, which gets more dense under any shaking - so that every standing vortex in it manifests itself with density gradient (a probability function). But the problem of motion detection inside of these particles is similar to vortex rings inside of underwater: every direct observation leads into collapse of their wave function. So, if you find the solution of this task for underwater vortices, you can apply it to the observation of real particles as well. 1 / 5 (1) May 09, 2012 Perhaps my "explicitly showed" was a poor word choice In another words, your Heisenberg did prove anything and whole the previous post of yours was based on dummy religion, which you even don't understand. That's a nice outcome, indeed... quantum theorists were motivated by discovering a theory that described reality Erwin Schrodinger: "I don't like it, and I'm sorry I ever had anything to do with it." I'm not sure, you know, what these theorists really wanted, but it definitely wasn't the quantum theory in its present state... 5 / 5 (3) May 09, 2012 underwater vortex ring ... more babble ... IMO it's quite realistic analogy... Um, not really. You already gave the answer. You add dye or bubbles or some other impurity that you can some how observe. We might even stumble across a way to directly observe the water molecules without disturbing the system. With the QWF we don't have a clue how to accomplish this because there is doubt that the QWF is actually some thing. Sure, we use it to predict observations, but does it represent something real? That is the question. Your nonsense about math not being real is silly semantics. The question is does the quantum wave function math represent something real as is the case with other complex functions as noted above by yourself and AP. No triangles in Nature... Nonsense. I must be in a strange mood to be conversing so with you today. 5 / 5 (4) May 09, 2012 In another words, your Heisenberg did prove anything and whole the previous post of yours was based on dummy religion, which you even don't understand. Right. My Heisenberg. Because science is a dummy religion and only your intuitivity based on bad analogies that provides no predictions can truly be relied upon. I think not. 1 / 5 (3) May 09, 2012 You add dye or bubbles or some other impurity that you can some how observe. You cannot, as at the vacuum you have no such an "impurities" available anyway. Your nonsense about math not being real is silly semantics I explained it already in post starting with words "The question isn't" science is a dummy religion Not the science, but the belief, just the math itself can explain everything and we shouldn't ask the questions, which math cannot answer ("magnets, how do they work") is a religion. It's a religion of people, who are doing their money with maths. 1.7 / 5 (6) May 09, 2012 But observation/experiments trumps what seems plausible. So whether you feel good or bad about it doesn't make any difference. Quantum physics works. Use it. Correct, and this is more or less the Copenhagen interpretation,... that is, from OUR perspective the wavefunction collapses,... and to speak of "it" continuing on in other worlds, or speculating as to the nature of the "underlying reality", is pointless metaphysics. Science is based on and limited to observations. If the 'measurement problem' could be considered an epistemological issue, one can move past the interpretations as meaningless. This is what I took from Bohr's point against Einstein. 1 / 5 (2) May 09, 2012 Science is based on and limited to observations. The problem with measurement of quantum function within particles is not so different from measurement of motion of electrons inside of atoms or the studying of the interior of Earth or Sun with its seismic waves at the surface. At both cases we are deducing the bulk motion (in extradimensions) just from observation of the surface of objects. At both cases we shouldn't consider, that the (motion of the) interior of particles doesn't exist just because only surface (motion) is directly observable. 5 / 5 (3) May 09, 2012 The problem with measurement of quantum function within particles is not so different from measurement of motion of electrons inside of atoms or the studying of the interior of Earth or Sun with its seismic waves at the surface. Actually it is so different. That is the whole point. All your watery analogic garbage is nonsense precisely because it is different. The double slit experiment alone shows us that they are different. Your "bulk motion (in extradimensions)" is not what is happening on the interior of the Earth or Sun. Your conceptualization doesn't work. Back to your water vortex ring. That you can't put dye in the QWF is why the analogy is bad. You CAN put it in the water and see the motion of the water. You CAN observe the water's motion. In the QWF there is no "motion". The reality of the QWF, as generally accepted, is probability. Your intuitive analogies are flawed. We don't even need to get into what your "bulk motion in extradimensions" would be. 1 / 5 (2) May 09, 2012 You CAN observe the water's motion. In the QWF there is no "motion" Actually we can observe the vacuum motion with dark matter, the particles of which (the neutrinos) do serve as a tiny bubbles visualizing the vacuum flow. But this is not the point, which could lead us to tomographic principle of weak measurements in quantum mechanics. In my experience the proponents of formal models cannot think about stuff from practical intuitive perspective, their thinking is always locked in abstract equations, which they can handle mechanically. 1 / 5 (2) May 09, 2012 Another way, how to visualize the vacuum motion inside of photons provide the boson condensates. The light waves are slowed with them in such a way, that the vortex rings which are observable there correspond the photons within vacuum directly. 1 / 5 (3) May 09, 2012 All your watery analogic garbage is nonsense precisely because it is different. The double slit experiment alone shows us that they are different. The water surface analogy of double slit experiment was realized experimentally already before years - therefore we can be perfectly sure, just at the case of double slit experiment this "watery" analogy works perfectly. 3 / 5 (2) May 09, 2012 ... Lord Kelvin ...: "Heavier-than-air flying machines are fantasy. Simple laws of physics make them impossible." One would think that the existence of birds or even helicopter seeds would be enough to refute Kelvin. 5 / 5 (4) May 09, 2012 The reality - no pun intended - is that the quantum principles work, and quantum physics explains it quite well. Maybe the real explanation is much simpler or prosaic, but that doesn't change a thing (so far). Even though I would prefer for wave function to be a real phenomenon. not rated yet May 09, 2012 Pyle, your random number generator idea is exactly along the lines of my own thinking. It's like the entangled pair has a seed and an anti-seed both of which pre-determine the opposing outcomes, even if the measurement at any point in time is not predictable. And it doesn't necessarily violate Bell's theorem, "that no local deterministic hidden-variable theory can reproduce all the experimental predictions of quantum mechanics". If you are still dealing with random variables, you've given up determinism. 1 / 5 (3) May 09, 2012 From article: Its possible the thinking goes, as an example, that because two distant entangled particles react in identically the same way at the same time, seemingly sharing information faster than the speed of light, that there is some new element of quantum mechanics at work that would allow for such a real world phenomenon to exist, rather than an example of the failure of quantum mechanics theory itself. This statement should keep string theorists busy for a while trying to figure out how to change their theory to match prediction. It has never been proven that two distant "entangled particles" are in fact entangled or have any effect on each other, other than gravitationally or electromagnetically. Actually they have been able to cause a vibration in one entangled object and have the other respond when at a distance. Got a link? I was under the impression that entanglement couldnt be used to transmit information FTL. 2.1 / 5 (7) May 09, 2012 I have always likened entanglement to using the same random number seed in a program. If I run the same piece of code on different machines using the random function I will get different numbers until I set the seeds equal. Entanglement, to me, is exactly that And it doesn't necessarily violate Bell's theorem [...] If you are still dealing with random variables, you've given up determinism. But that qualifies as a "hidden variable" since it depends on a particular seed which supposedly is intrinsic to each entangled particle at the start. The seed is not random, whereas the two observers choice of measurement is. Each observer can independently choose a different measurement, say a different angle for checking spin up/down, so each observer would be supplying their own seed separately. 1 / 5 (1) May 09, 2012 Ha Ha, I was just reading these and a really old explanation for entangled photons came to mind. The old principle of similarity. Since if you make two identical things and change one the other has to change to match..... Since photons are about as small as you can get and still be something if you change on the other has to change too! Ok, it was just a joke.... but think, if the old magic theories did work it would make sense of a lot of quantum oddities! 5 / 5 (4) May 09, 2012 I cannot parse what citizen2000 is talking about - can some one explain? From article: Its possible the thinking goes, as an example, that because two distant entangled particles react in identically the same way at the same time, seemingly sharing information faster than the speed of light, that there is some new element of quantum mechanics at work that would allow for such a real world phenomenon to exist, rather than an example of the failure of quantum mechanics theory itself. This statement should keep string theorists busy for a while trying to figure out how to change their theory to match prediction. I don't think so because the comment in the text is michievous because information can not be shared faster than light. Entanglement doesn't do it. This is known in physics. 3 / 5 (2) May 10, 2012 Ain't it great how physorg Continues to not list the direct link to the paper so Everybody can read it ? Here it is, doing my civic duty:http://arxiv.org/abs/1111.3328 1 / 5 (3) May 10, 2012 According to conventional quantum mechanics, it seems that the reality of wave function is mathematical tool; is this just what the nature tells us? May be not so; any natural phenomena should have some mechanism behind. Here is an unconventional concept which talks about the mechanism of quantum mechanics! not rated yet May 10, 2012 I didn't understand this article. Because it was not well written. 4 / 5 (4) May 10, 2012 [q9Making predictions by extrapolating from our analogies is a very dangerous business. Exactly. as you can see in this very discussion: People are trying to use other analogies (e.g. determinism) and fit them onto QM. That is just another example of taking analogies too far. There's other things that really work but which confounds us in our ability to conceptualize. Take the humble E equals m c squared. WEhat does "c squared" even mean? A speed squared? There's just no analog to this. All those out there trying to fit physics to what 'feels right' are doomed to failure (and have been so since newtonian gravity got overturned) 1 / 5 (3) May 10, 2012 The reality - no pun intended - is that the quantum principles work, and quantum physics explains it quite well. It doesn't explain it at all, only describes. After all, as Feynman explained, the purpose of formal physics is not to explain things (i.e. to answer "WHY" questions), but to describe them (i.e. to answer "HOW" questions). In particular, the quantum principles are ad-hoced postulates of quantum theory, i.e. the assumptions considered true without proof by their definition. Your comment just illustrates, even the proponents of mainstream physics have no idea, how contemporary science work. 1 / 5 (2) May 10, 2012 People are trying to use other analogies (e.g. determinism) and fit them onto QM. So in your opinion this analogy is only accidental and no deeper logics is behind it? All those out there trying to fit physics to what 'feels right' are doomed to failure Yes, to the same failure, as the attempt for "direct measurement of quantum wave function"...;-) Please, don't speculate what is possible and what's not, if you're not familiar with subject at all. I can understand, all proponents of formal math will defend their formal approach in the sake of keeping informational monopoly and social credit in the same way, like medieval theologians - but the evolution of human understanding simply doesn't work so. Today we have many intuitive explanations for phenomena, which were considered as a mathematical objects only before some time and most people feel quite comfortable with it, because they do want to imagine these things. 1 / 5 (2) May 10, 2012 The legitimization of physical perspective of quantum wave function is particularly sensitive question for proponents of formal approach in physics, as it opens the space for deeper questions: if the quantum wave and quantum states are real, what is forming them? What is undulating and rotating here? How such environment should behave? etc. This kind of questions are just the questions, which may point for dense aether model and which are making the proponents of mainstream physics nervous. 4 / 5 (4) May 10, 2012 So in your opinion this analogy is only accidental and no deeper logics is behind it? There's no analogy here. It conforms to the description of QM. There is really no reason that QM should not work at macroscopic scales (only that it becomes increasingly harder to measure/observe at those scales). What they did here is find a clever way to observe it. (Possibly...Though some of the conclusions in the abstract seem to be jumping it a bit. E.g. they say the interference is due only to the droplet's action on itself - but they are working in an environment with forced vibration - i.e. an external power source). But if we take their results at face value then yes: they demolished determinism for macroscopic objects (again, no surprise: if it doesn't hold for microscopic ones that make up macroscopic objects then macroscopic objects cannot go back to being deterministic, anyhow) Relativity (e.g.) is not intuitive. But it works. That's what counts. 5 / 5 (3) May 10, 2012 Claptrap Nonsense. "Of course the wavefunction is not a "thing", it is a model or mathematical conceptualization of the underlying reality. The very act of conceptualizing reality, changes Reality , to a form which is dependent upon mind. " - NumenTard Observing changes the state of a system because it mus interact with the system in order for the measurement to be made. Consciousness, blueberry tarts,or bigfoot have no place in the equations. not rated yet May 10, 2012 The ardent fanatics who are antagonistic towards the idea of 'real' wavefunctions seem to miss the very basis for the understanding of QM. When entanglement was first verified, the team leader of the Geneva experiment himself said that this implies some sort of influence acting outside of space-time. I do not believe for one moment that we ought to restrict ourselves to local systems obeying Special Relativity; non-locality is real, and indicates that the very nature of the quantum field is to exist independent of observation. Perhaps quantum fields interfere with fields of consciousness to create localised condensations of the field, ie, particles and matter. In the absence of observation, the quantum field remains in an indeterminate state, just like fluctuating waves of probability. In simplistic terms, this sets a new paradigm for understanding just how our universe materialises out of 'nothing', virtual particles and all. For reference, visit http:// 1.8 / 5 (5) May 10, 2012 No, No Vendicar I'm not saying consciousness has an effect UPON reality, ... I'm saying it has an effect upon our UNDERSTANDING of reality. I'm saying 'conceptualized reality' is of a different form than Reality as it is in it self, (unconceptualized). This "form" is the conceptual framework we supply from ourselves,... we force reality into this mold in order to rationalize it. This only works at the macroscopic realm where the mind has evolved to operate on experience, and not at the qm realm. QM is non-intuitive, and one should not expect it to be. My argument (in somewhat Kantian terms) is in support of scientific positivism to the extent that intuitive expectations of how reality "should be", should not be a basis for physical theories. In essense this is the Bohr argument. From our observational perspective the "wavefunction" ends at measurement as described by Von Neumann. 1 / 5 (1) May 10, 2012 with the quantum wave function is the same as with time: they both are existent in the universe but only as a mathematical quantities about time see more on: http://phys.org/n...ace.html 1 / 5 (3) May 10, 2012 Relativity (e.g.) is not intuitive. But it works. That's what counts. Relativity is intuitive in dense aether model. And it still works there (until you keep the relativistic perspective, which is not universal, indeed)... The "just working" will not be sufficient criterion for future theories - we will need to know, how these theories are working and why as well. We have full rights to know about it - and if the contemporary generation of physicists will not be willing to cooperate with the rest of people in this question, it will be fired and replaced with more insightful people. 4.2 / 5 (5) May 10, 2012 dense aether model Yeah, and dense aether model is intuitive and works in (and only in) the unicorn fart model. So that's not really an argument. That's like saying: Angels are intuitive in a god model. So what? we will need to know, how these theories are working and why as well. I think you lack a very basic understanding of what science is and what it isn't. You think science can supply some sort of 'truth' that is absolute. That's not part of science. Science is only concerned with what works. it will be fired and replaced with more insightful people. If these more insightful people come up with stuff that works better (i.e. which work in all cases the old theories worked AND in some where they don't), sure. That's how science progresses. You can even philosophically demonstrate that there never can be an ultimately, absoultely, indisputably true theory of everything (TOE). 'Truth' from limited data is an unattainable goal. 1 / 5 (1) May 10, 2012 `Suddenly Last Summer', the QM wavefunction was measured directly: Sounds like an `element of reality' to my elementary mind. If its quantifiable by measuring apparatus, how can it not be `physically real' ? Lundeen et.al, measured BOTH real & imaginary components of the wave function. Sounds like Psi is more `real' than my toaster, which has no imaginary components ! 1 / 5 (2) May 10, 2012 If its quantifiable by measuring apparatus, how can it not be Just compare the 2nd post in this thread. Apparently the variables of wave function could play a role of "hidden variables", predicted with some interpretations of quantum mechanics and violating the violation of Bell's inequalities. Albert Einstein: "As I have said so many times, God doesn't play dice with the world." 2 / 5 (4) May 10, 2012 Another paper eliminates the possibility of the subjective interpretation of quantum mechanics wave functions.. 3.7 / 5 (3) May 10, 2012 Science is only concerned with what works Science is a construct of man. Science has no concerns Antialias_PhysOrg, you are anthropomorphizing. Science is the pursuit of knowledge. You think science can supply some sort of 'truth' that is absolute Science can do no such thing. The absolute truth is reality itself. Man can attain full complete knowledge of reality at this point the absolute "truth" is found. 'Truth' from limited data is an unattainable goal. False. Even with very limited data it is possible to 'luck' out. Knowing reality is the goal of the scientific pursuit. I've once scored a goal playing soccer with my eyes closed, I was in the right place and right time, the ball found my foot and went through the posts. Goal attained with very limited data. 4 / 5 (4) May 10, 2012 Using intuition to know where to stand and when can give awesome results. Sometimes it takes nothing more than being in the right place at the right time. 5 / 5 (1) May 10, 2012 And it doesn't necessarily violate Bell's theorem I'm afraid it does. The "random" number generators in my example aren't random. The seeds determine the output. Use the same seed and you get the same random-looking results. Noummie got at least that right. Actually they have been able to cause a vibration in one entangled object and have the other respond when at a distance. Got a link? I was under the impression that entanglement couldnt be used to transmit information FTL. Wow, rating by past performance. StarGazer, your point was correct. After all, as Feynman explained, the purpose of formal physics is not to explain things ... , but to describe them. Zephir: Do you even know what the meaning of your quote was? You, who is trying to explain quantum phenomenon with your failed analogies? Don't you get it? QM describes reality. Science has moved on and is exploring it deeper. You are sitting on the sidelines failing to explain it. 1 / 5 (1) May 10, 2012 We never measure the wave function directly Never ever say never: Nature.com: http://www.nature...120.html of Lord Kelvin on my mind: "Heavier-than-air flying machines are fantasy. Simple laws of physics make them impossible." No doubt, Antialias Physorg is a troll beyond measure and should be banned and forced to watch every episode of Mickey Mouse. 5 / 5 (1) May 10, 2012 Man can attain full complete knowledge of reality at this point the absolute "truth" is found. Not without having all data available, ever (and not even then). Example: You make the following observation You try to find the natural law that gives you this. Which one is the true one? There are an infinite number of those that render this sequence - and an infinte ones that will predict "6,7,8,..." (which may or may not fit further measurements) The most USEFUL law is f(x) equals x. But no matter how many observations you make there will always be an infinite number of laws that fit and we just need to choose one (e.g. by Occams razor - though that only renders the easiest one, not necessarily the 'true' one) Even with very limited data it is possible to 'luck' out. Sure. But you can never PROVE it's the true one. That's why science goes for 'useful' and not 'true'. I'm not dissing science, here. Just saying what it is and what it isnt) 1 / 5 (2) May 10, 2012 Do you even know what the meaning of your quote was? Of course, I explained it already here many times. What the Feynman essentially wanted to say was: "We, scientists, have formal math methods, which are enabling us to pile up publications and take the money for it from laymans. We know, we cannot explain "WHY" question with such approach - so you shouldn't ask us for it. Such a questions are just undermining our authority." ..just saying what it is and what it isn't.. You've no authority for it. You just said, that "we never measure the wave function directly", although such a measurement has been made before year. How we can trust you in another, even more fundamental questions? The true is, the science, the mainstream physics has become a sanctuary of various nerds, who are separated from reality. How such people can decide, what the science is and what isn't? You should remain outside of it for to get the objective stance. 5 / 5 (4) May 10, 2012 although such a measurement has been made before year. They talk about 'weak measurement'. It is a retrospective measurement by pinning the other component down after the fact. From an information theory point of view that might be problematic. Not necessarily false, mind you, but I'm not sure that what they did is the correct interpretation. If they did measure it directly in a valid way then I'll happily agree. I have no problem with the wavefunction describing something 'real' (i.e. it being useful apart from its square being useful). That doesn't invalidate the point I'm trying to make here: Theories and laws can never be said to be final (What science CAN say, though, is which theories and laws are definitely false - e.g. determinism). 1 / 5 (4) May 10, 2012 The problem of detection of quantum wave function in vacuum is exactly analogous to the water surface analogy of famous double slit experiment. In this analogy the vacuum is forming the water surface and the motion of objects (jumping droplets) between slits is driven with the interference of ripples, which are formed at the water surface during this. These ripples exactly correspond the quantum wave function of object, as computed with time dependent Schrodinger equation. Now the question is: are these waves real? From extrinsic perspective of observer, who is following the whole experiment with much faster waves (like the light or sound waves) the answer is completely clear if not trivial : these ripples are indeed real! We even have a picture of it! But this perspective doesn't correspond the real life perspective of observer at the vacuum. The observer at the vacuum has no faster ripples available, than the ripples of the vacuum itself. 1 / 5 (2) May 10, 2012 Our perspective of double slit experiment actually corresponds the perspective of hypothetical flat 2D waterstrider, which can observe the (situation at the) water surface via its surface ripples only. In this moment we have a problem with direct observation of surface ripples, which are analogous to the quantum wave function, because these ripples are constrained to object and we cannot observe them directly with external energy source, because all ripples do penetrate mutually like the ghosts at the water surface. We cannot see the light wave in the vacuum, until it hits something. The light waves are completely transparent and invisible for us and because the quantum wave function is realized with such a light wave too, we actually cannot see it, until we come really close to the object in motion (Unruh radiation or dynamical Casimir effect). 1 / 5 (4) May 10, 2012 The similar situation could exist with photon sphere of massive black holes, which are supposed being surrounded with dense ring of photons (light waves). These photons are supposed to encircle the black hole like Saturn rings along metastable path in some theories. But even if such photon sphere would exist, we couldn't see it from outside - simply because the light is transparent and invisible for us from outside. We should cross the photon sphere directly for being able to see it and this observation would removed the photons observed from their circular path at the same moment. We would collapse or destroy the photon sphere with our observation in similar way, like the quantum wave function with direct observation of de Broglie wave. This analogy explains, why otherwise real deBroglie wave surrounding the objects in motion is so difficult to observe directly. 5 / 5 (3) May 10, 2012 Photon spheres only work for non rotating black holes (and there's no reason to believe there are any of those). Rotating ones could have a 2D photon disc. But even that is unliley because a) the vast majoprity of paths on the photon disc are unstable (spiraling in or out...and at the speed of light the time it takes for a photon to either totally escape of fall in from such a small orbit is very short) b) every atom falling into the black hole changes the radius of the one stable path. So even those lucky few photons don't hang around for any length of time. As for the rest of your analogies: You're again trying to fit everyday observations onto fundamentals. That just doesn't make any sense. 1 / 5 (3) May 10, 2012 You're again trying to fit everyday observations onto fundamentals. That just doesn't make any sense. This is just the basic principle of AWT..;-) This theory explains the abstract and distant Universe with the most common artifacts and phenomena, which we can be perfectly sure, how they do actually work. Whereas the mainstream physics is trying to describe the everyday phenomena with abstract theories and concepts, which we can be never sure with. It's reversed, up to bottom approach similar to the building of house from roof and it's not surprising, that the mainstream physics cannot actually explain anything with it after then: it simply lacks the causual hiearchy for such an explanation, because all concepts of mainstream physics are ad-hoced in it. We have photons - but why? We have strings - but why? We have light invariance - but why? It's not surprising after then, that the people like the Feynman are getting so nervous from asking of "WHY" questions. 4 / 5 (4) May 10, 2012 [qThis theory explains the abstract and distant Universe with the most common artifacts and phenomena, which we can be perfectly sure, how they do actually work. Whereas the mainstream physics is trying to describe the everyday phenomena with abstract theories and concepts, which we can be never sure with. 'mainstream physics' (what is that, anyways? Anything that isn't 'quack science'?) has got it right, though. They go with what agrees with observation. Taking everyday analogies and saying that they fit observation when they don't is just delusional. The current approach has the advantage that it's useful. It makes predictions we can test (and QM has passed a LOT of tests with flying colors) We have photons - but why? We have strings - but why? We have light invariance - but why? Wrong questions. You still don't get what science is, do you? 1 / 5 (2) May 10, 2012 Photon spheres only work for non rotating black holes IMO photon spheres simply don't exist, they're metastable even in general relativity itself - but I didn't invented neither promoted this concept. I just used it for explanation, why the quantum wave function cannot be observed directly, if its realized with electromagnetic wave constrained to the massive object. In this situation the massive object is everything, what we can see from it - even if it would be surrounded with pile of light, we would see its neighborhood empty from distance. 1 / 5 (3) May 10, 2012 Wrong questions. You still don't get what science is, do you? You cannot have wrong questions, only wrong answers. IMO the question "why the magnets work?" is quite relevant. After all, if we can ask, "why people evolved pale skin" in biology, why we cannot ask the the same "why" question in physics? Because this branch of science already reduced all constructive insights into mechanical operations with equations? Oh, come on - such an approach is not a science, but an ideology. The ideology of mathematicians and various 2nd grade high school teachers, being more specific. In real evolution of human understanding the answering of "why" and "how" questions has perfectly symmetrical roles. 5 / 5 (3) May 10, 2012 You cannot have wrong questions Yes you can. "what was before time?" is a wrong question - because it violates its own premise ('before' requires a concept of time which isn't available in a state without time) Many questions require causality when it isn't available (e.g. "what caused the big bang?") or determinism (e.g. "which slit did the electron pass through?") We can ask HOW magnets work. We can ask: Is there an easier, more fundamental way to express electromagnetism? Preferrably one in which the four knowm forces are combined an dthat will make some testable predictions the current set of formulae don't. But when you get to the most fundamental the WHY doesn't make any sense anymore - because you're effectively asking "what's more fundamental than that which is fundamental"...that is just another of these wrong questions. 1 / 5 (3) May 10, 2012 Many questions require causality when it isn't available (e.g. "what caused the big bang?") or determinism (e.g. "which slit did the electron pass through?") These question have perfect meaning and many scientists (like the Roger Penrose) are analysing them (with ekpyrotic or cyclic cosmology etc). So your objection was disproven by example. Regarding the question "which slit did the electron pass through?", this question has perfect meaning in deterministic theory, because the electron is way too smaller than every double slit thinkable. The water surface analogy demonstrates, the electron really travels trough one of slit and its repetetive observation by tomographics weak measurement could reveal, which path such electron is actually using. Such a question therefore was not only asked, but answered already. It just means, you're trolling again and demonstrating, you've no idea about experiments of QM. 1 / 5 (2) May 10, 2012 The recent experiments demonstrated, that the question "which slit did the electron pass through?" not only has a good meaning with respect to the determinism, but it can be even answered experimentally. It just means, everything what you believe about quantum mechanics is simply plain wrong and already disproved with (officially peer-reviewed!) experiments. The people like you will remain the opponents of dense aether model for ever, simply because they're ignorant enough for to become familiar with its actual motivations. The ignorance is the mother of religion. 5 / 5 (3) May 10, 2012 These question have perfect meaning and many scientists (like the Roger Penrose) are analysing them (with ekpyrotic or cyclic cosmology etc). They are moving the problematic point. Which is perfectly OK. To say (for example) that this universe's big bang is the big crunch of the previous one might be an answer (one I don't subscribe to, but it's at least an interesting approach). It just moves the 'wrong question' point of first cause to the point at which the first of that chain of universes came about. Doesn't solve the problem. Interference with single electrons shows that it travels through both slits. Interference has been observed with single buckyballs http://physicswor...olecules electron is way too smaller than every double slit thinkable Double slit experiments do not require the slit to be as small or smaller than the objects passing through. I think you're confusing experiments here. 5 / 5 (2) May 10, 2012 Oh, this is brilliant. 10 minute lecture by Feynmun I just found through StumbleUpon... If I were superstitious I'd think it was godsent. He says EXACTLY what science is about, how it works and what it can and cannot do. 1 / 5 (1) May 10, 2012 I didn't understand this article. Because it was not well written. Would you please explain why it is be so? 1 / 5 (1) May 11, 2012 He says EXACTLY what science is about, how it works and what it can and cannot do. "No matter how smart a person is, no matter how elegant their hypothesis, if it does not agree with experimentation, it is wrong. Feynman calls this the key to science. Guessing is not unscientific, though it may seem that way to non-scientists. Rather, it would be unscientific to just accept a guess because it is comforting or easy." It exactly applies to theories like the Big Bang or gravitational waves or refusal of dense aether model. 5 / 5 (2) May 11, 2012 If you watched the video you may see that dense aether model is captured by the 'vague models' part he talks about (i.e.: useless models). It also doesn't agree with observation. So it fails on both fronts to be science. The most important part is that he notes what science can and cannot do (cscience can disprove a theory but it can never prove one to a final degree of certainty) Big Bang is a model which is a best guess. It makes predictions. Those predictinos have been borne out (e.g. the comsic microwave backgronud was one of the predictions. Also the composition of far away (older) stars as opposed to closer (younger) ones). So it's still the best guess we have. Gravity waves are still in the stage of being tested for (LIGO has not reached its final stage of sensitivity). If we don't find any then that theory will go out the window. Gravity waves are predicted by relativity. Relativity has a good track record, so gravity waves currently count as best guess. 1 / 5 (1) May 11, 2012 It also doesn't agree with observation LOL, says who? Your claims were proven false twice-times just in this thread. Those predictinos have been borne out (e.g. the comsic microwave backgronud was one of the predictions. The interpretation of the cosmic microwave background is a controversial issue from the 1940s, when proponents of the steady state theory argued that the microwave background is the result of scattered starlight from distant galaxies. Using this model and based on the study of narrow absorption line features in the spectra of stars the astronomer Andrew McKellar in 1941 calculated, that the temperature of interstellar space is 2 K. It was five years BEFORE the Alpher and Herman predicted the similar result - but still essentially wrong i.e. 28 K - from Big Bang theory. The correct value (2,7 K) has been found experimentally just after fifteen years. The conclusion therefore is, we don't need the Big Bang theory for CORRECT prediction of CMBR at all. 1 / 5 (2) May 11, 2012 Gravity waves are still in the stage of being tested for (LIGO has not reached its final stage of sensitivity). The existing stage of LIGO sensitivity should be well sufficient for detection of gravitational waves from many types of objects in more than two orders of magnitude. Therefore in my opinion the existence of gravitational waves in general relativity sense was disproved already with five sigma reliability. If you cannot see the object searched even at the hundred-times better resolution, it simply means the existence of this object was disproved. But the LIGO and another gravitational waves detectors are serving as an excellent jobs generators both for researchers, both for private companies involved - so I don't expect, the parasites who are maintaining this research will be willing to end it so 1 / 5 (2) May 11, 2012 IMO the only solution of this controversy is simply to shut down all money flowing into gravitational waves research. The theorists involved will find a better jobs gradually and nobody will prohibit the introduction of more insightful models anymore. But until we would pay some research, then the motivation of the people involved will never cease. The best physics is the physics developed for private money, as Tesla and Faraday demonstrated before years. 5 / 5 (4) May 11, 2012 IMO the only solution of this controversy is simply to shut down all money flowing into gravitational waves research. Good thing, then, that no one cares for your opinion - or we'd still be sitting in caves wondering why our cold fusion oven wasn't working. Science progresses by trying out best guesses. Sometimes best guesses are wrong. So? Does that mean that if we sink money into something to test it - and it doesn't pan out - that there was therefore something wrong with the method? LIGO wasn't built for shits and giggles. Gravity waves are predicted and we really should look at this with the utmost care. It's an active test of relativity. If they don't find any graviyt waves then it's back to the drawing board. Science wins either way. The 'job generator' thing is completly off base. Scientists could earn far more by going to work for a company (I make 4 times as much after switching 2 years ago - for a MUCH less demanding job) 1 / 5 (1) May 11, 2012 ..be sitting in caves wondering why our cold fusion oven wasn't working. Cold fusion is working very well (1400% energy yield speaks for itself) - but it's not a credit of mainstream physics, which ignored if not suppressed this research for twenty years. Science progresses by trying out best guesses. Sometimes best guesses are wrong. It's rather rule than exception of mainstream physics of last decades. The gravitational waves was debunked before fifty years already with Eddington and Herman Weyl. Nobody understood the relativity better than these two guys. But they were ridiculed and ignored and their theories were claimed as a numerology (Eddington in particular). 1 / 5 (1) May 11, 2012 It will make it clear to you that you are wrong about the brain in every respect. Every one. You're obviously a fundamentalist neuricentric (Crick, Hameroff, Penrose) believing that consciousness is an "emergent dynamic" or even epiphenomenal. The drug of YOUR choice is causing hallucinatory obfuscation and dismissal of foundational attributes of consciousness. Check Kaivaranen, Pribram, Bohm, van Lommel. Exobiotic consciousness exists. Glad I'm not using YOUR drug dealer. 1 / 5 (1) May 11, 2012 Dramatic scientific evidence that all of physical matter is formed by an aether of invisible, conscious energy has existed since at least the 1950s. Compare the article "Causal mechanics - a Russian scientific controversy" from the first issue (page No. 1073) 1 / 5 (1) May 11, 2012 @Origin - unfortunately the first link you supply is faulty, the second only in Russian and not translatable - can you offer more accessible alternative? not rated yet May 11, 2012 OK, thank you for notice. Try to check this. The person of Nikolay Kozyrev is very interesting. It's a pity, so many insights are forgotten just because they cannot help the other physicists in making money. 1 / 5 (1) May 11, 2012 @Terriva - thank you so much for the Kozyrev link - this supports my own thesis on which I have written a 530-page book - that consciousness is primary to the physicalisation of potential in cosmogenic ontogeny, as also modulating all causal evolutions. This also is in agreement with Bohm's quantum potential term added to the Schrodinger wave function and his explicit indication that fundamental particles, in their "deriving meaning from their environment" display primary consciousness. I wonder, are you familiar with the Gariaev Group's research, the DNA-Phantom effect, and DNA's syntactic structure which obeys Zip's Law in linguistics? not rated yet May 11, 2012 530-page book - that consciousness is primary to the physicalisation of potential in cosmogenic ontogeny 5 / 5 (2) May 11, 2012 my own thesis on which I have written a 530-page book - that consciousness is primary to the physicalisation of potential in cosmogenic ontogeny, as also modulating all causal evolutions. I don't know your meaning here - can you explain it better? Consciousness comes after physical universe already exists, so how it can play any part in its evolution? not rated yet May 11, 2012 others, such as Einstein would say that because the inquisitor has only partial knowledge, no true answer can be given. This is the position I've always taken, and it's very self consistent. The problem with hypothetical QM paradoxes is that they are not in the "real world". They are a thought experiment where you are asked to neglect some function of reality under ridiculous circumstances, including Schrodinger's Cat. Of course, in the real world, we still have unknowns (DM and DE possibly) and unknown unknowns which could influence so-called "random" behavior. And so, the problem is a matter of partial knowledge. Case and point, they are still arguing over the nature of the problem. We have partial knowledge of the theory of all laws of physics, so how could you NOT have partial knowledge of the reality? There is no "reality" in the Schrodinger cat system, because the thought experiment is unrealistic. In the real world there are other variables. 1 / 5 (1) May 11, 2012 Ah, the "scientific" and pragmatic, empirical paradigm defines consciousness as either 1) an emergent property of complex organicity, or 2) epiphenomenal. A deeper inspection, e.g. Kaivaranen, Puharich, Gariaev, Sheldrake, Houk, Tiller, van Lommel, Stevenson, prove beyond doubt that consciousness is a real entity that can be either endobiotic or exobiotic. Consciousness is quantized into individuality from what we may accept as Jungian Collective Consciousness - this in the first place does not conform to a "one life and that's it" view - see M.D. psychiatrist Prof. Ian Stevenson's extensive forensic evidence, e.g. http://www.scient...dorf.pdf and van Lommel/s studies of NDEs. Consciousness is not result, it is cause. molecular embryologists as also cosmogonists have got it exactly in reverse... not rated yet May 11, 2012 prove beyond doubt that consciousness is a real entity that can be either endobiotic or exobiotic Could you provide one single but robust evidence of this bold claim (as pregnant and straightforward, as possible, plz..)? 4.2 / 5 (5) May 11, 2012 Tachyon8491, still it sounds like non-sense to me. Conciousness is just emergent property in living (so far) organsims, not magic. 5 / 5 (1) May 12, 2012 My imagination falls short to see the consequences of considering the "One possibility is that a pure quantum state corresponds directly to reality." Perhaps others see consequences arising from considering this one possibility. (Besides labeling other approaches to 'reality' superceded or obsolete) Awaiting other readers comments. Traditional quantum theory rests on the two issues of locality and realism. Current experiments (Bell inequality tests) show that at least entanglement is non-local. The second issue, whether reality is direct or holographic has been thought to be only conjecture or in the realm of philosophers. Pussey, Barrett, and Rudolph have shown that holographic theories do not describe quantum mechanical systems. The implication is that only local, real theories will correctly describe quantum systems. 1 / 5 (1) May 12, 2012 IMO Consciousness is just human abstract construct. In classical mechanics is quite common situation, when material objects follows the gradient of energy density and it avoids the obstacles and elevated points, the overcoming of whose would decrease its energy content. The common bacteria are behaving in the same way, like the masssive particles. They're just formed with many foamy membranes, so they can interact with more energy density gradients at the same moment. Such a bacterium can follow the gradient of sugar (chemical energy) concentration, while avoiding hot spots in the environment - it's sorta more intelligent, than the common massive particle. The humans have whole membrane structure equipped for interaction with many weak density gradients at the same moment. When we are walking along the street, we are making many decisions at the same moment. Our resulting path is affected with the same principle of least action, like the path of bacterium or speckle of dust, though. 1 / 5 (1) May 12, 2012 In certain contexts I'm thinking about possibility, the human brain is structure equipped for navigation between overlapping density gradients, generated with all observable stars on the sky at the same moment. The number of permutations of observable states in our observable Universe corresponds the number of possible states in our brain. For example, the human brain contains 10^23 atoms, so that the observable Universe contains just 10^(23*23) = 10^500 atoms. In this sense we are really a children of stars. I just cannot find any practical application for this concept and no way for its practical testing accordingly - so I'm not using it for any explanation. But its evident, for simple organism the observable Universe is simple as well, when being observed with transverse waves (the "Simillia simillibus observentur" principle). We can see the Universe complex and large, because our brains are complex and large - and we are even connecting them into larger network, like the Internet not rated yet May 12, 2012 The question whether the wave function is real or not is a bit more tricky Not tricky at all if you look at it from a math perspective. Complex means it has an imaginary component. That would be not real. What exactly do the authors of the paper mean again? But analogies only work if they are more fundamental than the thing they are an analogy for. Nonsense AP. All that is required of an analogy is that it be analogous, not "more fundamental". That said, trying to understand QM from our macro perspective is difficult. You got it right. Quantum physics works. Use it. Imaginary simply means the square root of negative one. Having an imaginary component may simply imply a phase shift of some sort, although of what, well that is the question. 3.7 / 5 (3) May 12, 2012 (the "Simillia simillibus observentur" principle). What you mean is doctrine of signatures. And that approach to thinking hasn't been used...well..since before the Renaissance. 5 / 5 (1) May 13, 2012 It is my opinion (and that's all it is because I am not educated in physics, though I am having a good time with it and have my opinions) that it could have something to do with a 'backward going in time influence' (to say something vague) of the act of observation. That is: The observation determines the reality, wheter it is 'prior' or 'after' the event. (I am probably being arrogant and stupid now) 1 / 5 (1) May 13, 2012 Every particle has mass; @ or tinier than string theory. To detect true quantum state, just send smaller detector particles (exponential scale). If I throw 1 particle of dust at planet, the planet won't move; same concept for aforementioned. We live an an infinite ocean of smaller fish (particles). Our detectors are still on the surface of this ocean. Now with quantum being so small, it is in constant bombardment by other particles which is why when we try to detect it; it is not a 1 to 1 relationship. Statistics is are way of coping with something out of our reach. But we need info on this ocean of fish i.e., solar system, approx. how many of each particle. Then you have to account for different particle behaviour. It is a monstrous task. It's almost better to develop string theory and then somehow magically come up with all the particles we know; this might be the most efficient way. 1 / 5 (3) May 13, 2012 But analogies only work if they are more fundamental than the thing they are an analogy for. Nonsense AP. All that is required of an analogy is that it be analogous, not "more fundamental". That said, trying to understand QM from our macro perspective is difficult. You got it right. I recall a statement by Feynman in Vol II-Ch 35 of his lectures, wrt quantized angular momentum,.... "There isn't any descriptive way of making it intelligible that isn't so subtle and advanced in it's own form that it is more complicated than the thing you were trying to explain" 2 / 5 (4) May 13, 2012 still it sounds like non-sense to me. Conciousness is just emergent property Of course you would think that - entire tribes of scientists do - its a scientistic paradigm that is radically falsified be existing, pragmatic, empirical and forensic evidence. Have you studied the sources I supplied, I wonder? Study Speman, Gurwitsch, molecular embryology versus morphogenetic embryology - although I doubt whether you can cope with the paradigm shift. Study also Bohm's quantum potential term and his statement about the electron's primary, fundamental conscious dynamics. Study exobiotic quantized consciousness in NDEs via the van Lommel study. Then again: there are none so blind as those who refuse to see... 3.7 / 5 (6) May 13, 2012 Of course you would think that - entire tribes of scientists do For good reason. its a scientistic paradigm that is radically falsified be existing, pragmatic, empirical and forensic evidence These are empty words. Have you studied the sources I supplied, I wonder? You want I should spend weeks/months reading your sources? Good communicators (you are not) can put in capsule what they mean and what is the evidence. They do not hide behind waffle. Study exobiotic quantized consciousness in NDEs via the van Lommel study. More bogus references? Near death experiences - really? Now I know yuu are flake! It is simply experience of brain losing oxygen (CO2 build up in blood) and random neurons firing. It can lead to euphoria feeling and after recovery, the brain makes narrative of experience. Then again: there are none so blind as those who refuse to see. This is you perfectly. You have mystical delusion. not rated yet May 14, 2012 All taken with a grain of salt! Does Time Exists Independent From Space? By Earth Changes Media Apr 15, 2012 - 8:26:59 PM In a new study, scientists, which include Amrit Sorli and Davide Fiscaletti, have shown that two phenomena of special relativity - time dilation and length contraction - can be better described within the framework of a 3D space with time as the quantity used to measure change in this space. The prevailing view in physics has been that time serves as the fourth dimension of space, an arena represented mathematically as 4D Minkowski spacetime. However, some scientists argue that time exists completely independent from space. The scientists have published their article in a recent issue of Physics Essays. The work builds on their previous articles, in which they have investigated the definition of time as a "numerical order of material change." The wave function is real from this Article. But when the measurement taken later be faulty without added considerations? 1 / 5 (3) May 15, 2012 random neurons firing. It can lead to euphoria feeling and after recovery, the brain makes narrative of experience. And that is exactly where you are proven wrong - the research proves conclusively, incontrovertibly, forensically, that conscious identity can be exobiotic during NDE. But please, stay within your self-delusional scientistic paradigm - it's far safer for individuals like you, the jump in insight and comprehension far too dangerous for your simplistic world-model. 5 / 5 (5) May 15, 2012 that conscious identity can be exobiotic during NDE. Nope. They actually did an experiment for that. They put a garish LED sign in operating rooms above the operating light. Many people reported 'floating outside their bodies and seeing the entire operation from above'. None reported seeing the sign (and they should have fom where they reported their point of view was.There ws no way they could have missed it.). It's just a delusion - a lucid dream. 4.2 / 5 (5) May 15, 2012 And that is exactly where you are proven wrong - the research proves conclusively, incontrovertibly, forensically, that conscious identity can be exobiotic during NDE -TachyTard What a load of crap! You're full of shit. Go and play with your crystals. 3 / 5 (2) May 15, 2012 But please, stay within your self-delusional scientistic paradigm - it's far safer for individuals like you, the jump in insight and comprehension far too dangerous for your simplistic You think your superior so you insult me, but you say nothing scientific, just psuedoscience and big words for a front. 3.7 / 5 (3) May 15, 2012 But simplicio, it's SCIENTISTIC! 3 / 5 (4) May 15, 2012 FYI, NDE's/OBE's (out of body experiences) have been put to the test many times and fail every time. There is no evidence that the person experiencing the OBE actually gains any information about the real world during the experience, it is fabricated by the brain. 1 / 5 (3) May 16, 2012 There is no evidence that the person experiencing the OBE actually gains any information about the real world You're simply uninformed - no doubt you have not bothered to study the RIGHT research - there is always ambiguity in such an area as it's horribly paradigmatic and people entrench their worldviews with nail-biting tenacity: epiphenomenalism / neuricentrism / emergentism of consciousness. Study this: @Simpleton - sorry, can't help but be superior to you - some of us just are, but don't worry about it ;) 4 / 5 (4) May 16, 2012 You're simply uninformed - no doubt you have not bothered to study the RIGHT research -TachyTard You mean crank 'research'? Perhaps you should put on your tin-foil hat and PM Zephir or QC - I'm sure you'll have a gay old time! epiphenomenalism / neuricentrism / emergentism... -TachyTard egotism / idiotism / delisionism / mysticism - you really are a moron and a half. @Simpleton - sorry, can't help but be superior to you - some of us just are, but don't worry about it -TachyTard He's got more sense then you can ever hope for, because for you, it is too late, Tard Boy! Acting superior must be a defense mechanism.
{"url":"http://phys.org/news/2012-05-paper-controversy-nature-quantum-function.html","timestamp":"2014-04-16T05:30:14Z","content_type":null,"content_length":"267220","record_id":"<urn:uuid:58cabca3-316d-4b99-b809-cb0d7aa2b93b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: integral of y dx Replies: 5 Last Post: Feb 17, 2011 8:30 AM Messages: [ Previous | Next ] Hey00 Re: integral of y dx Posted: Feb 17, 2011 8:30 AM Posts: 1 From: Valencia y= e^x Registered: 2/17/11
{"url":"http://mathforum.org/kb/message.jspa?messageID=7386722","timestamp":"2014-04-21T09:54:13Z","content_type":null,"content_length":"21616","record_id":"<urn:uuid:29178c83-f506-4f66-a565-be8e6de86ed1>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Injuries and your franchise QB, part II In the first article on this topic I explained that injuries are rare, potentially catastrophic, events and that their distribution therefore differs fundamentally from intuitive expectations. We expect one half of a population to be on one side of average and the other half to be on the other. That's not the way it works with rare events. The average earthquake causes some dollar value of damages. The median earthquake causes zero dollars worth of damage. Really what we care about is how often an earthquake causes huge damage and that takes more than looking at central tendencies. The rarity of injuries means that models using simple expected values are unlikely to be elucidative (the expected damages due to an earthquake in any given year aren't going to be too different between California and Missouri) and that distribution models can't be normal. In the last piece I settled on a gamma distribution to describe the population injury risk for QBs and you can read in the comments why that meant individual QB distributions were also likely to be gamma distributions. As promised this piece will end with a more robust model that will be tested out of population in the next one. So let's talk about individual modeling. I have this speech attached to a keyboard shortcut. First, a caveat. Models like these can only account for what is measurable. Thy shouldn't even try to account for things that aren't. That's how you end up with hideous monsters like QBR. That doesn't mean that the things I can't measure aren't important it just means that when I say "this is Wilson's injury risk probability distribution" I mean "accounting for the modeled parameters - everything else is league average." Second, I don't believe in the spaghetti regression. What I mean by that is throwing every measurement I have into a predictive analysis to see if they stick. Don't get me wrong, I'm not doing super rigorous science here so I look at everything. But unless I have a plausible theory of causation beforehand or develop an extremely compelling one after I'd rather have an intuitive understanding of causation than a system that predicts slightly more accurately on paper. Partly that's because it helps to understand all of the moving parts when something goes wrong but mostly that's because I think this sort of analysis should clarify causal mechanisms - not obscure them. The conclusions in the third piece of this series should give readers a better intuitive understanding of QB injuries not just a jumble of numbers. To start out I have a jumble of QB stats and the idea that expected games missed due to injury is modeled by a gamma distribution. Where I want to get is having an individual model for each QB based on key stats. I've got something else though - the strong suspicion that my population (All QB seasons in the past three years) is actually multiple populations. Here's what I mean by that: Punk is dead Let's say I took some polling data on an insensitive caricature of Capitol Hill (Seattle, not DC). I might find that wearing leather clothing has no correlation with going to punk rock shows. Of course it couldn't, punk is dead, but let's say I mean the soft socially conscious stuff the kids call punk nowadays. That would be surprising since we know that there is a large population of hill rats who wear leather and go to punk shows. Problem is that they're being cancelled out by a large population of hill dwellers who wear other leathers and go to dance clubs instead of punk shows. I could differentiate between chaps and jackets but it would be better to admit that I'm polling two fundamentally different populations. In the NFL there are many different types of QBs - you'll know them by heart: pocket passers, scramblers, option QBs, Tim Tebow, gun slingers, and any phrase Jon Gruden has invented. When I started this analysis I was mostly concerned with two, mutually exclusive, populations: backups and starters. The variance in play stats for backups is essentially zero so it might seem like they'd be prefect for sussing out the effect of things like age - the problem is that that only works if starters and backups are otherwise the same. I doubt they are. First and foremost a starter may be more likely to avoid a listing on the injury report than a backup given the exact same injury. That would mean that I simply can't compare them since injury report listings are how I created the eGames stat I use in this analysis. Second, what if QBs with preexisting injuries are likely to become backups and carry a higher injury risk with them? That would mean anything that differentiated backups and starters would show correlation with injuries - causal or not - when looking at the full population. After poking about with numbers I decided that nobody cares about backup injury risk anyways and that the drop in sample size would but more than outweighed by getting a more accurate picture of the population of interest. Backup QBs deserve to be ignored. Just like the text in pull quote tags. At this point I could have just made an arbitrary games started cutoff or taken the top 96 QBs in games started but when I clustered QBs into two groups by all their stats (if you're curious what this means ask away in the comments) they neatly popped a backup group and a starter group leaving out some of the annoying game managing replacements who never did anything under center in their games started. So that leaves me with a group of starting QB seasons. Tragically it also leaves me a sample size of around 100 QBs and that's small. Fortunately I've never met a sample I couldn't irresponsibly Bootstrapping is a great example of the vernacular meaning of a word and the scientific being the same. Basically I take my sample of QBs and pull many smaller random samples from it. I then use the representative stats and distribution parameters for estimated distributions of those samples to approximate the relationship between the stats and the distribution of injuries. Note: "Distribution parameters" means the values that control a distributions shape - for the normal curve it is mean and standard deviation, for the gamma distribution it is shape and scale or rate. The stats I used in this process were passing and rushing attempts per game, rushing yards per game, sacks per game, and yards per carry. As an aside someone will say that I should have looked at age as well. It turns out I did! It is a very good predictor of backup injuries but falls apart for starters. I didn't really consider including it in the final analysis because I have such a short and small data set that the strong relationship of the age of a QB in year x versus years x+1 and x+2 meant that it could just act as a proxy for individual QB luck and unmeasured stats that didn't vary year to year. In other words I was worried that it would add more questions about the final analysis than it would solve. Passing attempts was basically independent of the distribution parameters, the others were all related to varying degrees but almost all of the predictable variation could be explained by just rush yards per game and sacks per game. Here's the super intimidating, super approximate, equation for the probability density function of an individual QB's injuries, ryds is the rush yards per game, sk is the sacks per game, and egames is the games missed due to injury: Everything went fine making the bootstrapped model except for Michael Vick. His 2010 season went ahead and broke the whole thing. I'll show you why because 3D graphs are cool. The model assumes the distribution parameters are gamma distributed and, basically, Michael Vick's 2010 was past the limit - the model spits out a negative value and that makes no sense in the physical world. In the following chart Vick's 2010 is the expanding red dot. I "solved" the problem by excluding Michael Vick's 2010 - in the next piece I'll propose and test a more robust solution. Vick's problems aside, now I can make individual QB distributions, I could already do that - I did it in the last piece, How can I show that this method is at least accurate enough to go on with? Monte Carlo! Essentially I run a whole mess of simulated seasons using the projected distributions and then preform the Kolmogorov-Smirnov test to see if it is likely that the simulated data and the real data came from the same distribution. It's actually more complicated than that because the distributions are continuous whereas the actual data only has a few possible values (since it comes from the injury report). That means I had to take the simulated seasons and break them down into possible weekly values, and then reconstruct the seasons from there. The final data can be visualized like this: In the end I got a p value of .99 using the argument that the simulated injuries and actual injuries were derived from the same distribution as the null. For you betting folk that translates to 1:99 odds. That's Browns don't win the Superbowl territory in terms of certainty. The p value of the same simulation using the population distribution for each QB instead of the individual projections is .66 so the individual distributions describe the data better than just assuming each QB has a league average risk of injury. At this point I am very certain that the model describes the injury risk for in sample QBs - the remaining questions are: • Does it describe out of sample QB seasons well? • Can this information be used to project? • How should projections be used? • What does it all mean? I'll answer these (and more!) in the final piece in this series. Now, just for the laugh out louds I'll share the movement in injury probability relative to QB stats and individual projections for a few QBs. First here are the probability distributions of certain numbers of games missed across values of sacks per game and rush yards per game, I'm so sorry for the quality: And here are the distributions for the NFC west QBs: Kaepernick and Palmer are the pretty clear winners. Kaepernick's, maybe counterintuitive, below average injury risk is based on his low sack numbers - I'm not convinced that they're sustainable so I doubt the projection is an accurate reflection of his actual injury risk. But that could just be wishful thinking. More from Field Gulls:
{"url":"http://www.fieldgulls.com/seahawks-analysis/2013/7/24/4543852/seahawks-russell-wilson-colin-kaepernick-carson-palmer-nfl-training-camp?ref=yahoo","timestamp":"2014-04-18T18:13:02Z","content_type":null,"content_length":"96762","record_id":"<urn:uuid:3d2a9da3-6ece-4d1a-b313-0105b34d7efd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverwoods, IL Math Tutor Find a Riverwoods, IL Math Tutor ...A good foundation in understanding and solving word problems not only creates a basis in Mathematics, but also prepares the student for real-life situations. Topics include: working with fractions, decimals, percents, positive/negative integers, rational numbers, ratios and proportions, and algebraic equations. Elementary Math, from grades 1-8, are covered. 11 Subjects: including algebra 1, algebra 2, calculus, geometry ...The SAT math section is curiously different than the ACT math section. Although the SAT math test does not test trigonometry as the ACT does, the test is in some ways conceptually more difficult than the ACT math test. I have been tutoring students successfully on both of these tests for nearly... 20 Subjects: including algebra 1, algebra 2, vocabulary, grammar I taught math in high school and college, so I know what is expected of math students, and how to explain it. Math is not so difficult if it is explained well. And I can do that. 9 Subjects: including geometry, algebra 1, algebra 2, calculus ...Having taught AP Calculus for the past two years and having taught Precalculus either in formal classes or with tutoring students over the last eight years, I am familiar with the essential skills students need to succeed, both this year and beyond. As I was completing the equivalent of a bachel... 11 Subjects: including algebra 1, algebra 2, calculus, geometry ...Initially I played doubles, but soon I became competitive enough to play singles. During the summers I was in high school, I assisted my tennis coach as an instructor for a tennis camp. After high school, I continued on and played tennis for my college, Rose-Hulman Institute of Technology, a division 3 school. 13 Subjects: including precalculus, prealgebra, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Riverwoods_IL_Math_tutors.php","timestamp":"2014-04-17T13:52:07Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:a33e45a2-931d-483d-b707-3b94f632a818>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Factor 5x+15. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/515dcafbe4b0161ab93d5ca8","timestamp":"2014-04-17T19:24:37Z","content_type":null,"content_length":"37074","record_id":"<urn:uuid:06f01685-d972-4ddb-b721-8e2dc552a5ee>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Capillary rise of a liquid into a deformable porous material J. I. Siddique,a D. M. Anderson,b and Andrei Bondarevc Department of Mathematical Sciences, George Mason University, Fairfax, Virginia 22030, USA Received 25 September 2008; accepted 12 December 2008; published online 27 January 2009 We examine the effects of capillarity and gravity in a model of one-dimensional imbibition of an incompressible liquid into a deformable porous material. We focus primarily on a capillary rise problem but also discuss a capillary/gravitational drainage configuration in which capillary and gravity forces act in the same direction. Models in both cases can be formulated as nonlinear free-boundary problems. In the capillary rise problem, we identify time-dependent solutions numerically and compare them in the long time limit to analytically obtain equilibrium or steady state solutions. A basic feature of the capillary rise model is that, after an early time regime governed by zero gravity dynamics, the liquid rises to a finite, equilibrium height and the porous material deforms into an equilibrium configuration. We explore the details of these solutions and their dependence on system parameters such as the capillary pressure and the solid to liquid density ratio. We quantify both net, or global, deformation of the material and local deformation that may occur even in the case of zero net deformation. In the model for the draining problem, we identify numerical solutions that quantify the effects of gravity, capillarity, and solid to liquid density ratio on the time required for a finite volume of fluid to drain into the deformable porous material. In the
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/686/2420003.html","timestamp":"2014-04-19T17:32:38Z","content_type":null,"content_length":"8783","record_id":"<urn:uuid:745a3668-46fd-44f5-9d8b-a4abf114bbcd>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Confluent Markov Chains We consider infinite-state discrete Markov chains which are confluent: each computation will almost certainly either reach a defined set F of final states, or reach a state from which F is not reachable. Confluent Markov chains include probabilistic extensions of several classical computation models such as Petri nets, Turing Machines, and communicating finite-state machines. For confluent Markov chains, we consider three different variants of the reachability problem and the repeated reachability problem: The qualitative problem, i.e., deciding if the probability is one (or zero); the approximate quantitative problem, i.e., computing the probability up-to arbitrary precision; and the exact quantitative problem, i.e., computing probabilities exactly. We also study the problem of computing the expected reward (or cost) of runs until reaching the final states, where rewards are assigned to individual runs by computable reward functions.
{"url":"http://www.newton.ac.uk/programmes/LAA/abdulla.html","timestamp":"2014-04-19T17:11:04Z","content_type":null,"content_length":"2750","record_id":"<urn:uuid:c17f1bb4-3d9b-416a-a132-e54fc0f37cee>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
1. (PHYSICS) A physical entity that is described by specifying the value of some quantity at every point of space and time. Alternatively, a region within which a particular type of force can be observed or experienced. Varieties of field include a gravitational field, electric field, magnetic field (or when the latter two are linked, an electromagnetic field), and nuclear field. The laws of physics suggest that fields represent more than a possibility of force being observed, but that they can also transmit energy and momentum – a light wave, for example, is a phenomenon completely defined by fields. Whereas a field exists throughout a region of space and time, a particle exists only at a single point. 2. (MATH) A set F (such as a number system) with two operations + and ×, in which (1) both + and × are associative and commutative, and the operation × is distributive over +; (2) there are two identity elements in F, 0 relative to + and 1 relative to ×, such that a + 0 = a and a × 1 = a for any element a of the field; (3) every element a has an inverse -a, also a member of the set, such that a + (-a) = 0; (4) every nonzero element a has an inverse a^-1, also a member of the set, such that a × a^-1 = 1. Examples of fields include the set of rational numbers and the real numbers with, in each case, the operations addition and multiplication. A set (with two operations) which satisfies conditions (1), (2), and (3) but not (4) (because the result of dividing one integer by another is not necessarily an integer), and therefore is not a field, is an integral domain: an example is the set of all integers under addition and multiplication. Compare with ring. See also group. SETS AND SET THEORY
{"url":"http://www.daviddarling.info/encyclopedia/F/field.html","timestamp":"2014-04-20T05:48:15Z","content_type":null,"content_length":"8897","record_id":"<urn:uuid:72db7a88-7514-4c7d-9d26-8e38e5dd52e8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 64 , 2003 "... In this paper, we consider the problem of how to represent the locations of Internet hosts in a Cartesian coordinate system to facilitate estimate of the network distance between two arbitrary Internet hosts. We envision an infrastructure that consists of beacon nodes and provides the service of est ..." Cited by 112 (3 self) Add to MetaCart In this paper, we consider the problem of how to represent the locations of Internet hosts in a Cartesian coordinate system to facilitate estimate of the network distance between two arbitrary Internet hosts. We envision an infrastructure that consists of beacon nodes and provides the service of estimating network distance between two hosts without direct delay measurement. We show that the principal component analysis (PCA) technique can e#ectively extract topological information from delay measurements between beacon hosts. Based on PCA, we devise a transformation method that projects the distance data space into a new coordinate system of (much) smaller dimensions. The transformation retains as much topological information as possible and yet enables end hosts to easily determine their locations in the coordinate system. The resulting new coordinate system is termed as the Internet Coordinate System (ICS). As compared to existing work (e.g., IDMaps [1] and GNP [2]), ICS incurs smaller computation overhead in calculating the coordinates of hosts and smaller measurement overhead (required for end hosts to measure their distances to beacon hosts). Finally, we show via experimentation with real-life data sets that ICS is robust and accurate, regardless of the number of beacon nodes (as long as it exceeds certain threshold) and the complexity of network - Journal of Global Optimization , 2000 "... A finite new algorithm is proposed for clustering m given points in n-dimensional real space into k clusters by generating k planes that constitute a local solution to the nonconvex problem of minimizing the sum of squares of the 2-norm distances between each point and a nearest plane. The key to th ..." Cited by 42 (3 self) Add to MetaCart A finite new algorithm is proposed for clustering m given points in n-dimensional real space into k clusters by generating k planes that constitute a local solution to the nonconvex problem of minimizing the sum of squares of the 2-norm distances between each point and a nearest plane. The key to the algorithm lies in a formulation that generates a plane in n-dimensional space that minimizes the sum of the squares of the 2-norm distances to each of m1 given points in the space. The plane is generated by an eigenvector corresponding to a smallest eigenvalue of an n \Theta n simple matrix derived from the m1 points. The algorithm was tested on the publicly available Wisconsin Breast Prognosis Cancer database to generate well separated patient survival curves. In contrast, the k-mean algorithm did not generate such well-separated survival curves. 1 Introduction There are many approaches to clustering such as statistical [2, 9, 6], machine learning [7, 8] and mathematical programming [15... - TECHNICAL REPORT , 1988 "... Modular manipulator designs have long been b een considered for use as research tools, and as the basis for easily modified industrial manipulators. In these manipulators the links and joints are discrete and modular components that can be assembled into a desired manipulator configuration. As ha ..." Cited by 39 (13 self) Add to MetaCart Modular manipulator designs have long been b een considered for use as research tools, and as the basis for easily modified industrial manipulators. In these manipulators the links and joints are discrete and modular components that can be assembled into a desired manipulator configuration. As hardware advances have made actual modular manipulators practical, various capabilities of such manipulators have gained interest. Particularly desirable is the ability to rapidly reconfigure such a manipulator, in order to custom tailor i t to specific tasks. T i reconfiguration hs greatly enhances the capability of a given amount of manipulator hardware. This paper discusses the development of a prototype modular manipulator and the implementation of a configuration independent manipulator kinematics algorithm used for path planning in the prototype. - IEEE transactions on Pattern Analysis and Machine Intelligence , 2004 "... Abstract—The histogram of image intensities is used extensively for recognition and for retrieval of images and video from visual databases. A single image histogram, however, suffers from the inability to encode spatial image variation. An obvious way to extend this feature is to compute the histog ..." Cited by 39 (0 self) Add to MetaCart Abstract—The histogram of image intensities is used extensively for recognition and for retrieval of images and video from visual databases. A single image histogram, however, suffers from the inability to encode spatial image variation. An obvious way to extend this feature is to compute the histograms of multiple resolutions of an image to form a multiresolution histogram. The multiresolution histogram shares many desirable properties with the plain histogram including that they are both fast to compute, space efficient, invariant to rigid motions, and robust to noise. In addition, the multiresolution histogram directly encodes spatial information. We describe a simple yet novel matching algorithm based on the multiresolution histogram that uses the differences between histograms of consecutive image resolutions. We evaluate it against five widely used image features. We show that with our simple feature we achieve or exceed the performance obtained with more complicated features. Further, we show our algorithm to be the most efficient and robust. - SIAM J. Matrix Anal. Applicat , 1993 "... We present a Jacobi-like algorithm for simultaneous diagonalization of commuting pairs of complex normal matrices by unitary similarity transformations. The algorithm uses a sequence of similarity transformations by elementary complex rotations to drive the off-diagonal entries to zero. We show th ..." Cited by 38 (0 self) Add to MetaCart We present a Jacobi-like algorithm for simultaneous diagonalization of commuting pairs of complex normal matrices by unitary similarity transformations. The algorithm uses a sequence of similarity transformations by elementary complex rotations to drive the off-diagonal entries to zero. We show that its asymptotic convergence rate is quadratic and that it is numerically stable. It preserves the special structure of real matrices, quaternion matrices and real symmetric matrices. , 1999 "... The comparative study of classifier performance is a worthwhile concern in Machine Learning. Empirical comparisons typically examine unbiased estimates of predictive accuracy of different algorithms -- the assumption being that the classifier with the highest accuracy would be the "optimal" choice o ..." Cited by 33 (1 self) Add to MetaCart The comparative study of classifier performance is a worthwhile concern in Machine Learning. Empirical comparisons typically examine unbiased estimates of predictive accuracy of different algorithms -- the assumption being that the classifier with the highest accuracy would be the "optimal" choice of classifier for the problem. The qualification on optimality is needed here, as choice is restricted to the classifiers being compared, and the estimates are typically subject to sampling errors. Comparisons based on predictive accuracy overlook two important practical concerns, namely (a) class distributions cannot be specified precisely. Distribution of classes in the training set are thus rarely matched exactly on new data; and (b) that the costs of different types of errors may be unequal. Using techniques developed in signal detection, Provost and Fawcett describe an elegant method for the comparative assessment of binary classifiers that takes these considerations into account. Thei... - In Proc. 5th Int. Conference on Data and Communications , 1994 "... We analyze the performance of a generic feedback flow control mechanism which captures the properties of several such mechanisms recently proposed in the literature. These mechanisms dynamically regulate the rate of data flow into a network based on feedback information about the network state. They ..." Cited by 25 (4 self) Add to MetaCart We analyze the performance of a generic feedback flow control mechanism which captures the properties of several such mechanisms recently proposed in the literature. These mechanisms dynamically regulate the rate of data flow into a network based on feedback information about the network state. They are used in a variety of networks and they have been advocated for upcoming high-speed networks. However, they are difficult to model realistically. In this paper, we present a stochastic discrete-time approach that yields models which are realistic and yet tractable and computationally easy to solve. For our generic mechanism, the feedback consists of an exponentially averaged estimate of the bottleneck service rate and queue size. We obtain a model described by non-linear stochastic difference equations. We find the conditions under which these equations converge to a steady-state and we characterize the speed of convergence to steady-state. We then consider a linearized version of the mo... - In Proceedings of Data Compression Conference , 1999 "... This paper presents a POCS-based algorithm for consistent reconstruction of a signal x 2 R K from any subset of quantized coefficients y 2 R N in an N \Theta K overcomplete frame expansion y = Fx, N = 2K. By choosing the frame operator F to be the concatenation of two K \Theta K invertible tr ..." Cited by 20 (0 self) Add to MetaCart This paper presents a POCS-based algorithm for consistent reconstruction of a signal x 2 R K from any subset of quantized coefficients y 2 R N in an N \Theta K overcomplete frame expansion y = Fx, N = 2K. By choosing the frame operator F to be the concatenation of two K \Theta K invertible transforms, the projections may be computed in R K using only the transforms and their inverses, rather than in the larger space R N using the pseudo-inverse as proposed in earlier work. This enables practical reconstructions from overcomplete frame expansions based on wavelet, subband, or lapped transforms of an entire image, which has heretofore not been possible. 1 Introduction Multiple description (MD) source coding is the problem of encoding a single source fX i g into N separate binary descriptions at rates R 1 ; : : : ; RN bits per symbol such that any subset S of the descriptions may be received and together decoded to an expected distortion D S commensurate with the total b... - Algorithmica , 1998 "... For a Markovian source, we analyze the Lempel-Ziv parsing scheme that partitions sequences into phrases such that a new phrase is the shortest phrase not seen in the past. We consider three models: In the Markov Independent model, several sequences are generated independently by Markovian sources, ..." Cited by 17 (11 self) Add to MetaCart For a Markovian source, we analyze the Lempel-Ziv parsing scheme that partitions sequences into phrases such that a new phrase is the shortest phrase not seen in the past. We consider three models: In the Markov Independent model, several sequences are generated independently by Markovian sources, and the ith phrase is the shortest prefix of the ith sequence that was not seen before as a phrase (i.e., a prefix of previous (i \Gamma 1) sequences). In the other two models, only a single sequence is generated by a Markovian source. In the second model, for which we coin the name Gilbert-Kadota model, a fixed number of phrases is generated according to the Lempel-Ziv algorithm, thus producing a sequence of a variable (random) length. In the last model, known also as the Lempel-Ziv model, a string of fixed length is partitioned into a variable (random) number of phrases. These three models can be efficiently represented and analyzed by digital search trees that are of interest to other - IEEE/ASME TRANSACTIONS ON MECHATRONICS , 2001 "... One challenge in multimodal interface research is the lack of robust subsystems that support multimodal interactions. By focusing on a chair—an object that is involved in virtually all human–computer interactions, the sensing chair project enables an ordinary office chair to become aware of its occ ..." Cited by 17 (1 self) Add to MetaCart One challenge in multimodal interface research is the lack of robust subsystems that support multimodal interactions. By focusing on a chair—an object that is involved in virtually all human–computer interactions, the sensing chair project enables an ordinary office chair to become aware of its occupant’s actions and needs. Surface-mounted pressure distribution sensors are placed over the seatpan and backrest of the chair for real time capturing of contact information between the chair and its occupant. Given the similarity between a pressure distribution map and a grayscale image, pattern recognition techniques commonly used in computer and robot vision, such as principal components analysis, have been successfully applied to solving the problem of sitting posture classification. The current static posture classification system operates in real time with an overall classification accuracy of 96 % and 79 % for familiar (people it had felt before) and unfamiliar users, respectively. Future work is aimed at a dynamic posture tracking system that continuously tracks not only steady-state (static) but transitional (dynamic) sitting postures. Results reported here form important stepping stones toward an intelligent chair that can find applications in many areas including multimodal interfaces, intelligent environment, and safety of automobile operations.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=604637","timestamp":"2014-04-23T13:44:02Z","content_type":null,"content_length":"40970","record_id":"<urn:uuid:f51dfb50-55a4-44a0-b076-7eaca4b12e8d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Monte Carlo method for estimating plane figure area 12 Mar 2011 An area S(A) of a figure A, bounded by a Jordan curve is calculated. function S=Area_M(N) % by Konstantin Ninidze % Monte Carlo method for estimating the area for a plane figure. % Suppose we need to calculate an area S(A) of a figure A, bounded by a % Jordan curve (which in our case is constructed by a cubic spline % approximation). We'll place A inside a square of known area S(sq) and % then place a known number of points (N) at random locations inside the % square. After this we'll count the number of the random points (M) that lie % inside the contour A and taking in mind, that the area S(A) of the object % A is proportional to the number of points (M) that lie inside square % we'll obtain a formula for the area: S(A)=S(sq)*M/N. % A slight modification of Matlab "getcurve" function is needed to obtaine % the closed curve, see mygetcurve1. in = inpolygon(x,y,X(1,:),X(2,:)); hold on title([' area estimation is ',num2str(S)]) xlabel([' N= ',num2str(N)]) function [xy, spcv] = mygetcurve1 % do not repeat points! w = [-1 1 -1 1]; axis(w), hold on, grid on title('Use mouse clicks to pick points INSIDE the gridded area.') pts = line('Xdata',NaN,'Ydata',NaN,'marker','o','erase','none'); maxpnts = 100; xy = zeros(2,maxpnts); while 1 for j=1:maxpnts [x,y] = ginput(1); if isempty(x)||x<w(1)||x>w(2)||y<w(3)||y>w(4), break, end xy(:,j) = [x;y]; if j>1 xlabel('When you are done, click OUTSIDE the gridded area.') if j>1, break, end xlabel(' You need to click INSIDE the gridded area at least once') if norm(xy(:,1)-xy(:,j-1))<.05, xy(:,j-1)=xy(:,1); end spcv = cscvn(xy); fnplt(spcv) An area S(A) of a figure A, bounded by a Jordan curve is calculated. function S=Area_M(N) % by Konstantin Ninidze % Monte Carlo method for estimating the area for a plane figure. % Suppose we need to calculate an area S(A) of a figure A, bounded by a % Jordan curve (which in our case is constructed by a cubic spline % approximation). We'll place A inside a square of known area S(sq) and % then place a known number of points (N) at random locations inside the % square. After this we'll count the number of the random points (M) that lie % inside the contour A and taking in mind, that the area S(A) of the object % A is proportional to the number of points (M) that lie inside square % we'll obtain a formula for the area: S(A)=S(sq)*M/N. % A slight modification of Matlab "getcurve" function is needed to obtaine % the closed curve, see mygetcurve1. in = inpolygon(x,y,X(1,:),X(2,:)); hold on title([' area estimation is ',num2str(S)]) xlabel([' N= ',num2str(N)]) function [xy, spcv] = mygetcurve1 % do not repeat points! w = [-1 1 -1 1]; axis(w), hold on, grid on title('Use mouse clicks to pick points INSIDE the gridded area.') pts = line('Xdata',NaN,'Ydata',NaN,'marker','o','erase','none'); maxpnts = 100; xy = zeros(2,maxpnts); while 1 for j=1:maxpnts [x,y] = ginput(1); if isempty(x)||x<w(1)||x>w(2)||y<w(3)||y>w(4), break, end xy(:,j) = [x;y]; if j>1 xlabel('When you are done, click OUTSIDE the gridded area.') if j>1, break, end xlabel(' You need to click INSIDE the gridded area at least once') if norm(xy(:,1)-xy(:,j-1))<.05, xy(:,j-1)=xy(:,1); end spcv = cscvn(xy); fnplt(spcv) function S=Area_M(N) % by Konstantin Ninidze % % Monte Carlo method for estimating the area for a plane figure. % Suppose we need to calculate an area S(A) of a figure A, bounded by a % Jordan curve (which in our case is constructed by a cubic spline % approximation). We'll place A inside a square of known area S(sq) and % then place a known number of points (N) at random locations inside the % square. After this we'll count the number of the random points (M) that lie % inside the contour A and taking in mind, that the area S(A) of the object % A is proportional to the number of points (M) that lie inside square % we'll obtain a formula for the area: S(A)=S(sq)*M/N. % A slight modification of Matlab "getcurve" function is needed to obtaine % the closed curve, see mygetcurve1. close X= mygetcurve1; X=[X,X(:,1)]; x=2*rand(1,N)-1; y=2*rand(1,N)-1; in = inpolygon(x,y,X(1,:),X(2,:)); M=sum(in); S=4*M/N; hold on plot(x(in),y(in),'r+',x(~in),y(~in),'bo') title([' area estimation is ',num2str(S)]) xlabel([' N= ',num2str(N)]) function [xy, spcv] = mygetcurve1 % do not repeat points! w = [-1 1 -1 1]; clf, axis(w), hold on, grid on title('Use mouse clicks to pick points INSIDE the gridded area.') pts = line('Xdata',NaN,'Ydata',NaN,'marker','o','erase','none'); maxpnts = 100; xy = zeros(2,maxpnts); while 1 for j=1:maxpnts [x,y] = ginput(1); if isempty(x)||x<w(1)||x>w(2)||y<w(3) ||y>w(4), break, end xy(:,j) = [x;y]; if j>1 set(pts,'Xdata',xy(1,1:j),'Ydata',xy(2,1:j)) else set(pts,'Xdata',x,'Ydata',y) xlabel('When you are done, click OUTSIDE the gridded area.') end end if j> 1, break, end xlabel(' You need to click INSIDE the gridded area at least once') end xy(:,j:maxpnts)=[]; if norm(xy(:,1)-xy(:,j-1))<.05, xy(:,j-1)=xy(:,1); end set(pts,'Xdata',xy(1,:),'Ydata',xy (2,:),'erase','xor','linestyle','none') xy=[xy,xy(:,1)]; spcv = cscvn(xy); fnplt(spcv)
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/30728-monte-carlo-method-for-estimating-plane-figure-area/content/Area_M.m","timestamp":"2014-04-19T12:36:53Z","content_type":null,"content_length":"20203","record_id":"<urn:uuid:b9ef331a-158d-4068-98a4-0f417326d707>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Glossary:Body mass index (BMI) From Statistics Explained The body mass index, abbreviated as BMI, is a measure of a person’s weight relative to height that correlates fairly well with body fat. The BMI is accepted as the most useful indicator of obesity in adults when only weight and height data are available. BMI is calculated by dividing body weight (in kilograms) by height (in metres) squared. The following subdivisions are used to categorize the BMI into four categories: Note that the BMI is not calculated for children. Related concepts Statistical data
{"url":"http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Glossary:Body_mass_index_(BMI)","timestamp":"2014-04-16T22:59:51Z","content_type":null,"content_length":"23203","record_id":"<urn:uuid:6f17fb8e-de8c-4794-bdbf-ba599dcd969c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Urn and slips of papers question Hello, I need help with my question. Thank you :] Consider an urn containing 1996 slips of papers numbered from 1 to 1996. A procedure consists of three steps: 1. Take two slips from the urn. 2. Write down the absolute value of the difference of the numbers on the slips of step1 on a new slip of paper. 3. Discard the two slips in step1 and replace them with the slip in step2 into the urn. Repeating the procedure until only one slip of paper is left in the urn, prove that the number on this slip is even. I think maybe you can proceed by induction. It is important that the upper limit on the numbers is a multiple of 4, so use the rule n = 4m for some counting number m. Then prove the case for m = 1 by showing cases. Assume the rule for m = k. Then see where you can go with m = k + 1. Strong induction might be helpful.... Re: Urn and slips of papers question know its an old posting, but was working on something similar and came across this. Have solved it, thought i might share for other people that might stumble across this. Basically my solution is to look at possible cases. When taking 2 slips there are three possible cases. (Here n denotes the number of even valued slips and m the number of odd ones). case 1: take two even numbers. ==> n=n-1 m=m case 2: take an odd and an even number. ==> n=n-1 m=m case 3: take two odd numbers. ==> n=n+1 m=m-2 At the start there are 1996 slips, so both n and m are even (1996/2). The only case that leaves an odd number if its the last case that happens is case 2. For this to happen both m and n need to be 1 at the end. So one even and one odd valued slip. However, since at the beginning m is even and we only ever decrease m with 2 (in case 3) m will always stay even and therefore we can never get the situation described above. So the last number will always be even.
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=15&t=521&p=3408","timestamp":"2014-04-19T14:58:47Z","content_type":null,"content_length":"21200","record_id":"<urn:uuid:eef46fc0-c8eb-44a5-bc9a-907be6498f7b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Equivalent metrics on Fréchet spaces and Lipschitz maps up vote 4 down vote favorite Lipschitz maps are defined over metric space as maps $f:(X,d_X) \to (Y,d_Y)$ such that $$ d\left( f(x),f(x^\prime) \right)_Y \le k d(x,x^\prime)_X \ \forall x,x^\prime \in X, $$ where $k$ is a positive constant. We usually say that $f$ is a contraction if $k<1$. It is well know that a different equivalent metric on $X$ does not preserve contractions, i.e. a map can be a contraction with respect to a metric but not with respect to an equivalent one. In the Banach space setting, where the spaces $X$ and $Y$ are endowed with a norm defining the topology, there is a somehow "canonical" distance given by $$ d(x,x^\prime) = \lVert x-x^\prime \rVert .$$ With this distance, Lipschitz maps can be characterized as maps satisfying, for some $k>0$ $$ {\left\lVert f(x) - f(x^\prime) \right\rVert}_Y \le k {\left \lVert x-x^\prime \right \rVert}_X \ \ forall x, x^\prime \in X. $$ It is obvious that, if $f$ satisfies the above relation, then is a $k$-lipschitz map. In the Fréchet space setting, the topology is defined by a countable family of semi-norms $({\lVert\cdot\rVert}_n)$. The classical example of metric inducing the same topology is given by $$ d(x,x^\ prime) = \sum_{n=0}^\infty {2^{-n}} \frac{{\lVert x-x^\prime\rVert}_n}{1+{\lVert x-x^\prime\rVert}_n} . $$ In analogy with the Banach case, I would like to characterize (at least some) Lipschtiz maps between Fréchet spaces as maps satisfying $$ {\left\lVert f(x) - f(x^\prime) \right\rVert}_n \le k {\left \lVert x-x^\prime \right \rVert}_n \ \forall x, x^\prime \in X,\ \forall n \in \mathbb{N}. $$ Again, maps satisfying the last equation are Lipschitz maps with respect to the metric defined above, but the Lipschitz constant is not $k$ anymore, and in particular contraction with respect to the semi-norms (i.e. maps satisfying the last equation with $k<1$) are not contraction with respect to the Are there equivalent distances on $X$ and $Y$ such that every contraction with respect to the semi-norms is a contraction with respect with the new distance? If this is not possibile for every contraction, is it possible for a specific one? fa.functional-analysis metric-spaces add comment 1 Answer active oldest votes Use $\sum 2^{-n}(\|x-y\|_n \wedge 1)$ for the distance on $Y$ and $\sum 2^{-n}(\|x-y\|_n \wedge 2)$ for the distance on $X$. up vote 3 down vote accepted I understand the idea, but I would say that if I take $ \sum 2^{-n} ({\left\lVert x-y\right\rVert} \wedge 1)$ on $X$, I need $\sum 2^{-n} ({\left\lVert x-y\right\rVert} \wedge 1/k)$ on $Y$. Anyway... what about maps from $X$ into himself? Are maps $f:X\toX$ satisfying $\left\lVert f(x) -f(y)\right\rVert \le k \left \lVert x-y \right \rVert$, with $k<1$, distance-contraction (and thus have a unique fixed point)? maybe this is a different question... – Angelo Lucia Aug 7 '11 at 20:03 I meant $\sum 2^{-n} (\left\lVert x-y \right\rVert \wedge k)$ on $Y$. – Angelo Lucia Aug 7 '11 at 21:29 As for having the same distance on $X$ and $Y$ when $X=Y$, that is not possible when e.g. $X$ is the countable product of lines. For this space $I/2$ is not a contraction under any compatible metric. – Bill Johnson Aug 7 '11 at 23:17 ...but it still has a unique fixed point, right? – Angelo Lucia Aug 8 '11 at 8:39 Sure; zero. I is the identity operator. – Bill Johnson Aug 8 '11 at 19:34 add comment Not the answer you're looking for? Browse other questions tagged fa.functional-analysis metric-spaces or ask your own question.
{"url":"http://mathoverflow.net/questions/72232/equivalent-metrics-on-frechet-spaces-and-lipschitz-maps","timestamp":"2014-04-19T22:22:47Z","content_type":null,"content_length":"58288","record_id":"<urn:uuid:df946339-1d4b-44a6-a3a2-ea9c0a309ead>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
SPSSX-L archives -- November 2000 (#151)LISTSERV at the University of Georgia Date: Fri, 10 Nov 2000 11:38:04 +0100 Reply-To: Hassane ABIDI <hassanab@LYON-SUD.UNIV-LYON1.FR> Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU> From: Hassane ABIDI <hassanab@LYON-SUD.UNIV-LYON1.FR> Organization: Unité d'Epidémiologie, Centre Hospitalier, lyon sud Subject: logistic regression and Cox model Content-Type: text/plain; charset=us-ascii Dear colleagues, I have a problem to interpret (or to compare) the output of logistic regression and that of the Cox model. let : Y is a binary variable where Y=1 when event is present and X is a binary explatory variable, X=1 in the exposition case and 0 otherwise T is the time where the events are oberved or censured. In logistic case (independently of time) we estimate: Odds(x)=Logit(p(x)) ( = Ln( P(x)/(1-p(x)) ) ) OR=Odds(1)/odds(0) = Exp(a) where a is the coefficient of X in logistic In Cox model (proportional hazard model), we estimate: h(t,x)=h(t)*Exp(bx) (hazard function) thus the ratio of estimated hazards for X=1 and X=0 is h(t,1)/h(t,0)=Exp(b) (presumedly independent of time) My problems: 1) In which case, Exp(a) (from logistic) estimates P(1)/P(0) ? 2) In which case we expect that Exp(a) = Exp(b) (or approximate) ie the ratio of two odds estimates the ratio of tow hazards ie the logistic model and the Cox model give the same (or approximately) results ?? | Hassane ABIDI (PhD) | | Unite d'Epidemiologie; Centre Hospitalier Lyon-Sud | | Pavillon 1.M, 69495 Pierre Benite Cedex, France | | Tel: (33) 04 78 86 56 87 ; Fax: (33) 04 78 86 33 31 | | E. mail: Hassanab@lyon-sud.univ-lyon1.fr |
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0011&L=spssx-l&F=&S=&P=15403","timestamp":"2014-04-25T04:48:29Z","content_type":null,"content_length":"10403","record_id":"<urn:uuid:fe6bdfb5-7ab5-4900-814a-db2f94ffa9b7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
The efficient control of concurrent access to shared resources is a major topic in computer science which has become even more important with the spread of multi-core computers. The classic approach to ensure consistency, mutual exclusion, is well understood but typically can not make use of the capacity of modern multicore computers and suffers from severe problems such as deadlocks. Fine-grained concurrent algorithms that use fine-grained locking or even avoid the use of locks are a class of concurrent algorithms designed to avoid these shortcomings. They typically provide good performance under no contention or multiprogramming, and outperform coarse-grained locking algorithms under high contention. Lock-free algorithms in particular, avoid deadlocks and livelocks and ensure global progress in the presence of arbitrary process failures or delays. This is typically achieved by applying synchronization primitives such as CAS (Compare And Swap) and an optimistic try and retry scheme, instead of locking. The application domain of these algorithms ranges from managing multiprocessor communication to real-time gaming or hash tables for the efficient indexing in distributed databases or webservers. The main concern of this project is the verification of efficient implementations of different multithread-safe algorithms, in particular data structure implementations. The analysis focuses on the main correctness and liveness properties of these algorithms: The main correctness condition, linearizability, ensures that lock-free operations can be seen as atomic from an external point of view, i.e., they either take place in one step or they have no visible effect. The main liveness property, lock-freedom, guarantees that even in the presence of process failure, one of the currently active operations terminates. The project defines a new approach for the integrated development and analysis of fine-grained algorithms in the interactive theorem prover KIV. The technique embeds linearizability in a verification approach based on refinement to allow for the modular development of correct software. Based on a generic and expressive temporal logic framework for the verification of concurrent algorithms, proof obligations for both linearizability and lock-freedom are derived and instantiated to verify algorithms using automated verification techniques. In particular, the approach shall meet the following • Specification of fine-grained algorithms at different levels of abstraction. • Development of a refinement theory which translates linearizability to process-local proof-obligations. • Development of process-local proof obligations for lock-freedom. • Interactive verification of these proof-obligations, as well as the decomposition of global properties to process-local proof-obligations, in an expressive temporal logic framework. • Integration of different automation techniques for the analysis of algorithms. In particular, we would like to consider techniques such as Shape Analysis, Ownership or Atomicity Analysis. Compared to other existing approaches, our technique integrates the mechanized specification, decomposition of global properties and the proof of the resulting process-local proof obligations (which imply linearizability and lock-freedom) into one calculus. Using interactive theorem proving, scaling problems from which automated techniques suffer can be reduced, while these techniques can be applied to reduce the number of required interactions. A list of publications is given below. The KIV projects can be found here. [1] Proving linearizability with temporal logic. S. Bäumler, G. Schellhorn, B. Tofan, and W. Reif. Journal Formal Aspects of Computing (FAC), 23(1):91–112, 2011. [2] Mechanically verified proof obligations for linearizability. J. Derrick, G. Schellhorn, and H. Wehrheim. Journal ACM transactions on Programming Languages and Systems, 33(4):1–43, 2011. [3] Verifying linearisabilty with potential linearisation points. J. Derrick, G. Schellhorn, and H. Wehrheim. In Proc. of Formal Methods (FM), pages 323–337. Springer LNCS 6664, 2011. [4] How to prove algorithms linearizable. G. Schellhorn, J. Derrick, and H. Wehrheim. In Proc. of CAV, Vol. 7358, pages 243–259. Springer LNCS, 2012. [5] RGITL: A temporal logic framework for compositional reasoning about interleaved programs. G. Schellhorn, B. Tofan, G. Ernst, J. Pfähler, and W. Reif. Journal Annals of Mathematics and Artificial Intelligence (AMAI), accepted, 2013. [6] Interleaved programs and rely-guarantee reasoning with ITL. G. Schellhorn, B. Tofan, G. Ernst, and W. Reif. In Proc. of Temporal Representation and Reasoning (TIME), IEEE, CPS, pages 99–106, 2011. [7] Temporal logic verification of lock-freedom. B. Tofan, S. Bäumler, G. Schellhorn, and W. Reif. In Proc. of MPC 2010, Springer LNCS 6120, pages 377–396, 2010. [8] Compositional verification of a lock-free stack with RGITL. B. Tofan, G. Schellhorn, G. Ernst, J. Pfähler, and W. Reif. In Proc. of AVoCS, submitted. ECEASST, 2013. [9] Formal verification of a lock-free stack with hazard pointers. B. Tofan, G. Schellhorn, and W. Reif. In Proc. of ICTAC, pages 239–255. Springer LNCS 6916, 2011. [10] Two approaches for proving linearizability of multiset. B. Tofan, O. Travkin, G. Schellhorn, and H. Wehrheim. Journal Science of Computer Programming, submitted, 2013. [11] Proving linearizability of multiset with local proof obligations. O. Travkin, H. Wehrheim, and G. Schellhorn. In Proc. of AVoCS, Vol. 53. ECEASST, 2012.
{"url":"http://www.uni-augsburg.de/forschungsportal/fak/fai/informatik/2010/VeriCAS.html","timestamp":"2014-04-19T01:47:58Z","content_type":null,"content_length":"15695","record_id":"<urn:uuid:8c724017-949b-4e57-81bb-076fa02e0122>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
The rotation of a fan creates a blocking path for visible light. The rotation of a fan creates a blocking path for visible light. ^ 1 The webcam (above) produces exactly 3 smeared out regions of a moving fan. It is much more prominent when you see with the webcam instead of taking the pic because the cam is low res. The cell with much better resolution and faster speed perhaps, does not make this clear atall, there is only 1 complete circle with the cellular cam. This is what I had described in the article below written 2 weeks ago. I took the pic only today. Also see the very interesting patterns of the Canon PS a 8 MP, 12 optical, 6.0 – 72.0 mm lens. The last pic above is dark but if you inspect clearly you will see something. WE don’t need this for our analysis. Check that with a shutter-speed priority (and some other modes) you see a statonary picture of the moving fan (unbelievably sharp) One needs the details of how these things work before one can say exactly why we see these pattern. But the semi-qualitative analysis I had done prior to taking these pics. YOU can produce a dark room, note everything you about your experimental setup and by obtaining desirable patterns calculate the exact Uncertainity equation value from these (eg you may use calibrated sources of light color and shutter speed etc ) a week or two or three ago, I wrote an article where I said “quantum mechanical phenomena corresponds to classical world and gave two example, one of a rotating fan and I asked you to count it’s blades, the other was rotating a bicycle wheel when your bike man is fixing it for you”. First of all correspondence is like an effect moving up the ladder to a different scale and despite of their legitimacy in current understanding a lot of bad understaning and philosophy has blocked their way from popular knowledge. Secondly it may have raised your eyebrow so much you might have consulted your eyedoctor for a check up. I take full responsibility for that but you have to call your insurance not me. I did a little notebook calculation and here is the reason. Consider a fan with 3 blades of 40 cm length and consider it making a 10 rpm motion. 10 rotations per minute. Oh thats sucking, consider it 10 rps, rotations per second. It gives us an angular frequency of w. vv/R = wwR (v=Rw). Then v = 2pi.R.10 m/s = 62.8×0.4 m/s = 25.12 m/s that is 25 Hz-m. First of all this is a great speed. Your airplane moves only 250 m/s. Your car moves at 25 m/s (90 km/hr) Most of the energy of your fan is just wasted in keeping it in circle. Now 25 Hz along a meter length is like 25×10^9 Hz for a 10^-9 meter that is 25 GHz for a nanometer deflection. Or 25/500 GHz for a 500 nanometer deflection. 0.05 THz for a wavelength of visible light. SO at the distance-scale of wavelength of light the time window available for the photon is inverse of that frequency. that is 1/delT = 0.05 THz. This means delT = 5 nano-seconds. This is the time statistically available to the photons of 500 nm to get reflected and give us a good picture so we can see or count the rotating fan’s blade. I intended an Uncertainity relation explanation hence denoted T with delT but now I see that I do not need a mathematically explicit explanation for that. (remember that the camera shutter has the same principle, it allows certain adjusted time window, if it is higher it won’t work, if it is lower it intereferes in various ways with the process of picture taking and sometimes gives you funny picture such as a fuzzy face who you can realize who he or she is but not recognizable to a 3rd party. The rotating fan also creates a time window as it is moving really fast from one point to the next) If you are to understand this process in a way in which technology is really really not playing a role consider a camera-obscura. (a pinhole camera, I played these a lot in high school, as an 11 year old, that and the immersion water heater I made using connecting two “blades”, razors that we use for shaving etc etc were my favorite science gadgets. I also had a soap making story which was a disaster that I will never forget. I also thought of making salt from acid and base, from stuff taken away from lab and popular bases/alkalines available such as baking soda, cannot say: stealing from the lab tho. I checked the formula of these from textbook also, this was probably when I was 13 years old, or was it, it may as well be when I was 15 or 16 years old, but I don’t recall exactly when I studied Chemistry with formulas in high school ) But now that I thought hard about it I recall that I might have had some chemistry textbook with formulas during early high-school (8th standard=13 yrs age) This following article here calculates circular motion parameters using a ceiling fan rotation of 0.5 rps, 20 times lesser than what I have taken here; (per him/her the table top fan moves much faster and I “javen’t” reviewed exact speeds of fans, but a factor of 10 will only reduce the available time window by the same factor, hence we’ll have 50 nano-seconds for a slow moving fan.) Now the 500 nm (you can take an average for all wavelength in optical range, how do we do that?) light quantum has an energy proportional to it’s frequency and ~400 to 750 nano-meter lights (all of visible light) has a frequency of 400 – ~ 800 THz. So we do not need additional calculations here. Our 0.05 THz frequency allows a time window that allows the visible light to pass through. But a variation of these parameters make it easy to see that lights are allowed to be reflected and passed through in a statistical way, with various probabilities. If all lights were to be reflected we would probably see images of only 3 blades with very dark shades and a rotating disk of light-flash. Also the distribution of intensity in various angles makes a more complex geometry to consider. By varying your fan-speed and intensity of your room light (eg by making it really dark that is switching off scattering) you are capable of seeing a very slow moving image of the fan-blades where you can count the number of these dark shades resembling the actual blades and a disk of rotation characterized by light which signifies no light or much lesser intensity of light was being blocked by the blades. In other words in an optimized situation we see that the probabilities of light reflected or passing through has really separated. It is also possible to see a reverse motion of the image of the fan blade. All these effects are not explainable by classical wave-mechanics. May be from the latter we understand that OK light passes sometimes and does not sometimes (but why? light wavelength is so small, it’s size is so small, it is just a quantum) but the detailed behavior is much more complicated than the knowledge of classical mechanics can explain. You can be very strict about the speed of fan and wavelength of light but you must be careful what exact situation you have observed. With a camera you can take picture in an optimally prepared dark room with light sources and you will see the picture never gives you 3 stationary blades. In-fact with very slow motion a blurried picture is produced but that is because the time-window of the camera flash has meant additional relative motion between the target and it’s image, explainable by quantum-mechanics. And waves as we understand now is fully explanable by quantum-mechanics which has in it the idea that light moves relativistically. The equations are only a result of such understanding and needs to be used careully and only by someone who has the confidence he understands them. Basic Physics is misused without even quantum mechanics and Relativity so it is really a hard job to get your physics right. Do not take this lightly and do not laugh at ideas that you do not understand. But if you really understand something welcome to find loopholes and criticize but don’t forget that for every loophole you think you have found you might have found 4 to your account and the lesson that you are the one who did not understand what was being described. But then take it like a sports as long as you have the sportive shirts on your chest. Nothing in classical mechanics will ever explain these effects completely (ofcourse you can say we have a camera, we have an object and we take a picture and that is all that matters, but I do not welcome such pickers or cynical people to discuss my Physics ideas) One comment
{"url":"http://mdashf.org/2011/10/22/the-rotation-of-a-fan-creates-a-blocking-path-for-visible-light/","timestamp":"2014-04-19T04:21:31Z","content_type":null,"content_length":"131001","record_id":"<urn:uuid:717a43ee-8de9-4f31-8cdf-4afbaf999d44>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra 1 Tutors West Roxbury, MA 02132 Math for Elementary, Middle and High School ...Your child can have "math confidence", be more independent in doing their homework and get better grades by getting her/him support for Algebra 2. NOW IS THE TIME - Algebra 1 introduced many topics and skills which will be PRACTICED A LOT in Algebra 2. If your... Offering 9 subjects including algebra 1
{"url":"http://www.wyzant.com/Roxbury_MA_Algebra_1_tutors.aspx","timestamp":"2014-04-21T15:20:53Z","content_type":null,"content_length":"61232","record_id":"<urn:uuid:c5067004-8c7e-40fe-9803-08bbfe8cd2b0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Mitering a Closed 3D Path This Demonstration shows the amount of twist (writhe) that occurs when a closed 3D path is constructed from a beam using regular miter or fold joints at the vertices. The path includes four or (when "odd" is checked) five vertices of a cuboid. All beams have the same (selectable) polygonal cross section. The top beam is cut in the middle, so that the two halves can rotate with respect to each other. The height, width, and depth of the cuboid, as well as the longitudinal rotation of the beam and the beam radius, can be manipulated to see how that influences the amount of twist. When constructing a path in space by mitering together beams of the same cross section, it is not always the case that the edges nicely match up all the way around. Snapshot 1 shows a situation where the edges do match up. In Snapshot 2, the path is slightly altered by reducing its height and depth, resulting in a mismatch. Rotating the cross section is to no avail, as shown in Snapshot 3. The twist is more visible when highlighting a seam of the beam. It is interesting to explore the difference between using miter joints and fold joints, in particular for an odd number of vertices. Use of a 2-gon as cross section (i.e., a strip) avoids self-intersection for fold joints. In the case of miter joints, all beams rotate in the same direction, but with fold joints adjacent, beams rotate in the opposite direction. This is illustrated in Snapshots 4 to 8. Also see T. Verhoeff and K. Verhoeff, "The Mathematics of Mitering and Its Artful Application," in Bridges Leeuwarden: Mathematical Connections in Art, Music, and Science, Bridges 2008 Conference Proceedings (R. Sarhangi and C. Séquin, eds.), Hertfordshire, UK: Tarquin Publications, 2008. This paper presents the Miter Joint Rotation Invariance theorem, Even Fold Joint Rotation Invariance theorem, and the Odd Fold Joint Matching theorem for closed 3D paths. Contributed by: Tom Verhoeff (Eindhoven University of Technology)
{"url":"http://demonstrations.wolfram.com/MiteringAClosed3DPath/","timestamp":"2014-04-18T23:43:35Z","content_type":null,"content_length":"44831","record_id":"<urn:uuid:95c43250-fe8e-4d18-a384-2fd85bf225da>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Explanations for Measurement Accuracy Specifications The way accuracy is defined for pressure instruments on technical data sheets can vary significantly across manufacturers and product types. There are many contributing error factors which go into a total uncertainty calculation and the proportion contributed by each one will be different from one measurement technology to another. Specification Examples Please click on an example term below to be guided to the associated glossary explanation: Room Temperature Accuracy Zero and Span Setting Thermal Errors Total Error Band • +/-1.5% FS TEB, comp temp range -20 to +80 degC, plus 90 days drift How close the measured reading is to a reference point or value. BSL – Best Straight Line A virtual line derived from a set of non-linear points which is used to demonstrate the best accuracy that can be achieved from the product. See stability. FRO – Full Range Output The difference in output signal between the minimum and maximum measurable pressure. Another way of describing the full scale for pressure sensors. FS – Full Scale / Full Span The difference between the lowest and highest measured point. FSO – Full Scale Output See full range output. The shift in measurement when comparing between readings of the same pressure which were taken following an increase in pressure and a decrease in pressure. The straightness of a set of measured points compared to a perfectly straight line. Long Term Drift See stability. Long Term Repeatability The amount of change in measured points following many measurement cycles from low to high, then to low again over a long period of time. Long Term Stability See stability. NLH – Non-Linearity and Hysteresis An accuracy or precision specification that only considers one cycle of increasing and decreasing pressure and excludes any short term repeatability effects. NLHR – Non-Linearity, Hysteresis and Repeatability An accuracy or precision specification that includes all room temperature uncertainties for a pressure sensing device. Occasionally may include zero and span setting offsets. See linearity. See repeatability. A measure of the proximity of all measured pressure points to a virtual reference line such as bsl or tsl. Defines the limits of variation in measurement, i.e. 100% span RDG – Reading Used to distinguish a percentage accuracy which varies proportionally to the measured span (% of reading) from one which is a fixed percentage of the maximum measurement reading (i.e. % of full scale Referred Temperature Error A fixed temperature reference is defined (usually room temperature) which is representative of the average operating temperature. The temperature error is then defined as a +/- value of the largest The amount of change in measured points following a number of measurement cycles from low to high, then to low again over a period of time. The ability of a device to distinguish a measurement via a reading or an signal output. In most cases the resolution should be much better than the overall accuracy, but in some cases the resolution can become a significant part of the total measurement uncertainty. Short Term Repeatability The amount of change in measured points following a few measurement cycles from low to high, then to low again over a short period of time. The difference between any measured point and the lowest value. Span Offset The variation in measured span compared to the perfect span reading, which is either represented as a percentage, pressure unit or output value error. Span Stability The amount of long term measurement variation which is only attributed to the span. Span Drift See span stability. The amount of measurement change expected over a long period of time. TEB (i) – Thermal / Temperature Error Band The difference between the most negative and positive error across the whole temperature range. The difference is then halved and expressed as a +/- error. TEB (ii) – Total Error Band A combined error that includes linearity, hysteresis, repeatability, zero setting, span setting and thermal errors. It may also include stability error if a time factor is included with the total error band. TSL – Terminal Straight Line The line created by joining the lowest and highest measured points together. The error of all other measured points is referred to this line TSS – Thermal / Temperature Span Sensitivity How a measured value at any point in the range is affected by changes in temperature, normally expressed as a % span or % span / degC. TZS – Thermal / Temperature Zero Shift How much the lowest measured reading will vary with temperature, typically shown as % full span or % full span / degC. URL – Upper Range Limit Used to define the accuracy as a factor of the maximum range of a rangeable device rather than an adjusted (turndown) range. Zero Drift See zero stability. Zero Offset The amount of variation of the lowest measured reading compared to a perfect reading, which can be expressed as a percentage of full scale (%FS) or measurement units. Zero Stability The amount of long term measurement variation which only affects the zero offset.
{"url":"http://www.sensorsone.com/explanations-for-measurement-accuracy/","timestamp":"2014-04-21T02:44:54Z","content_type":null,"content_length":"30441","record_id":"<urn:uuid:76c99312-e631-40fc-a176-b5fcf272d3ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating a Number based on Mask and Range Constraints Patent application title: Generating a Number based on Mask and Range Constraints Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Generating a number based on mask and range constraints. For example, a method of generating a pseudo random number satisfying a range constraint and a mask constraint may include determining a number of possible solutions satisfying the range constraint and the mask constraint; selecting an index representing a solution of the possible solutions; and generating the pseudo random number based on the index. Other embodiments are described and claimed. A computer-implemented method of generating a pseudo random number satisfying a range constraint and a mask constraint, the method performed by a computer having a processor and a memory unit, the method comprising: determining a number of possible solutions satisfying said range constraint and said mask constraint; selecting an index representing a solution of said possible solutions; and generating said pseudo random number based on said index. The method of claim 1, wherein determining the number of possible solutions satisfying said range constraint and said mask constraint comprises: determining a number of possible solutions in a sub-range of said range; updating a total number of possible solutions according to the number of possible solutions in said sub-range; and repeating said determining and updating for one or more other sub-ranges of said range. The method of claim 2, wherein determining the number of possible solutions in said sub-range comprises determining the number of possible solutions in said sub-range based on a sub-range mask corresponding to said sub-range. The method of claim 3, wherein determining the number of possible solutions in said sub-range comprises determining the number of possible solutions in said sub-range based on a hamming weight of an intersection mask between said sub-range mask and said mask constraint. The method of claim 4 comprising determining the hamming weight of said intersection mask based on a hamming weight of an intersection mask corresponding to a previous sub-range. The method of claim 2, wherein said repeating comprises determining a number of possible solutions in said sub-range based on a previous sub-range. The method of claim 1, wherein generating said pseudo random number based on said index comprises: determining a sub-range of said range which includes said solution; and generating said pseudo random number based on said sub-range. The method of claim 7, wherein determining the sub-range which includes said solution comprises: defining a sub-range of said range; determining whether said sub-range includes the solution represented by said index; and repeating said defining and determining for one or more other sub-ranges of said range until a sub-range of said one or more other sub-ranges includes the solution represented by said index. The method of claim 8, wherein determining whether said sub-range includes the solution represented by said index comprises: determining a number of possible solutions in said sub-range; updating a total number of possible solutions according to the number of possible solutions in said sub-range; and determining that said sub-range includes the solution represented by said index if the index is smaller than or equal to said total number. The method of claim 9, wherein determining the number of possible solutions in said sub-range comprises determining the number of possible solutions in said sub-range based on a sub-range mask corresponding to said sub-range. The method of claim 10, wherein determining the number of possible solutions in said sub-range comprises determining the number of possible solutions in said sub-range based on a hamming weight of an intersection mask between said sub-range mask and said mask constraint. The method of claim 7, wherein generating said pseudo random number based on said sub-range comprises pseudo randomly selecting said pseudo random number from one or more possible solutions within said sub-range. The method of claim 1 comprising determining said number of possible solutions and generating said pseudo random number using a common algorithm. The method of claim 1 comprising generating said pseudo random number utilizing a memory with a space complexity of no more than an order of 1. 15. The method of claim 1 comprising generating said pseudo random number with a time complexity of no more than an order of a number of bits of said pseudo random number. A computer program product comprising a non-transitory computer-useable medium including a computer-readable program, wherein the computer-readable program when executed on a computer causes the computer to: generate a pseudo random number satisfying a range constraint and a mask constraint by determining a number of possible solutions satisfying said range constraint and said mask constraint; selecting an index representing a solution of said possible solutions; and generating said pseudo random number based on said index. The computer program product of claim 22, wherein the computer-readable program causes the computer to determine the number of possible solutions satisfying said range constraint and said mask constraint by determining a number of possible solutions in a sub-range of said range; updating a total number of possible solutions according to the number of possible solutions in said sub-range; and repeating said determining and updating for one or more other sub-ranges of said range. The computer program product of claim 22, wherein the computer-readable program causes the computer to determine a sub-range of said range which includes said solution; and generate said pseudo random number based on said sub-range. The computer program product of claim 22, wherein the computer-readable program causes the computer to determine the sub-range which includes said solution by defining a sub-range of said range; determining whether said sub-range includes the solution represented by said index; and repeating said defining and determining for one or more other sub-ranges of said range until a sub-range of said one or more other sub-ranges includes the solution represented by said index. FIELD [0001] Some embodiments are related to the field of generating a number satisfying one or more range and/or mask constraints. BACKGROUND [0002] Generation of random or pseudo-random numbers is a fundamental building stone in many applications. For example, pseudo random generation of a number may be required in context of Pseudo Random Test Generators (PRTGs) for hardware verification. Hardware verification may involve generating pseudo random op-code instructions, and may require pseudo random generation of numbers that satisfy one or more constraints, e.g., a range constraint and/or a mask constraint. The constraints may arise, for example, from hardware architecture, as well as from a user specification. The pseudo randomly generated numbers may be required to be selected, e.g., with a uniform distribution, from a set of all valid numbers that satisfy the one or more constraints. Some computing systems, for example, computing systems including post-silicon PRTGs, may require generating pseudo random numbers while consuming relatively short computing time and/or small memory SUMMARY [0004] Some embodiments include, for example, devices, systems and methods of pseudo random number generation. Some embodiments include, for example, a method of generating a pseudo random number satisfying a range constraint and a mask constraint. In some embodiments, the method may include determining a number of possible solutions satisfying the range constraint and the mask constraint; selecting an index representing a solution of the possible solutions; and generating the pseudo random number based on the index. In some embodiments, determining the number of possible solutions satisfying the range constraint and the mask constraint may include determining a number of possible solutions in a sub-range of the range; updating a total number of possible solutions according to the number of possible solutions in the sub-range; and repeating the determining and updating for one or more other sub-ranges of the In some embodiments, determining the number of possible solutions in the sub-range comprises determining the number of possible solutions in the sub-range based on a sub-range mask corresponding to the sub-range. In some embodiments, determining the number of possible solutions in the sub-range comprises determining the number of possible solutions in the sub-range based on a hamming weight of an intersection mask between the sub-range mask and the mask constraint. In some embodiments, the method may include determining the hamming weight of the intersection mask based on a hamming weight of an intersection mask corresponding to a previous sub-range. In some embodiments, the repeating may include determining a number of possible solutions in the sub-range based on a previous sub-range. In some embodiments, generating the pseudo random number based on the index may include determining a sub-range of the range, which includes the solution; and generating the pseudo random number based on the sub-range. In some embodiments, determining the sub-range which includes the solution may include defining a sub-range of the range; determining whether the sub-range includes the solution represented by the index; and repeating the defining and determining for one or more other sub-ranges of the range until a sub-range of the one or more other sub-ranges includes the solution represented by the index. In some embodiments, determining whether the sub-range includes the solution represented by the index may include determining a number of possible solutions in the sub-range; updating a total number of possible solutions according to the number of possible solutions in the sub-range; and determining that the sub-range includes the solution represented by the index if the index is smaller than or equal to the total number. In some embodiments, determining the number of possible solutions in the sub-range may include determining the number of possible solutions in the sub-range based on a sub-range mask corresponding to the sub-range. In some embodiments, determining the number of possible solutions in the sub-range may include determining the number of possible solutions in the sub-range based on a hamming weight of an intersection mask between the sub-range mask and the mask constraint. In some embodiments, generating the pseudo random number based on the sub-range may include pseudo randomly selecting the pseudo random number from one or more possible solutions within the In some embodiments, the method may include determining the number of possible solutions and generating the pseudo random number using a common algorithm. In some embodiments, the method may include generating the pseudo random number utilizing a memory with a space complexity of no more than an order of 1. In some embodiments, the method may include generating the pseudo random number with a time complexity of no more than an order of a number of bits of the pseudo random number. Some embodiments include an apparatus including a pseudo random number generator to generate a pseudo random number satisfying a range constraint and a mask constraint by determining a number of possible solutions satisfying the range constraint and the mask constraint; selecting an index representing a solution of the possible solutions; and generating the pseudo random number based on the In some embodiments, the pseudo random number generator is capable of determining the number of possible solutions satisfying the range constraint and the mask constraint by determining a number of possible solutions in a sub-range of the range; updating a total number of possible solutions according to the number of possible solutions in the sub-range; and repeating the determining and updating for one or more other sub-ranges of the range. In some embodiments, the pseudo random number generator is capable of determining a sub-range of the range, which includes the solution; and generating the pseudo random number based on the In some embodiments, the pseudo random number generator is capable of determining the sub-range which includes the solution by defining a sub-range of the range; determining whether the sub-range includes the solution represented by the index; and repeating the defining and determining for one or more other sub-ranges of the range until a sub-range of the one or more other sub-ranges includes the solution represented by the index. In some embodiments, the pseudo random number generator is capable of generating the pseudo random number utilizing a memory with a space complexity of no more than an order of 1. In some embodiments, the pseudo random number generator is capable of generating the pseudo random number with a time complexity of no more than an order of a number of bits of the pseudo random Some embodiments include a computer program product comprising a computer-useable medium including a computer-readable program, wherein the computer-readable program when executed on a computer causes the computer to generate a pseudo random number satisfying a range constraint and a mask constraint by determining a number of possible solutions satisfying the range constraint and the mask constraint; selecting an index representing a solution of the possible solutions; and generating the pseudo random number based on the index. In some embodiments, the computer-readable program causes the computer to determine the number of possible solutions satisfying the range constraint and the mask constraint by determining a number of possible solutions in a sub-range of the range; updating a total number of possible solutions according to the number of possible solutions in the sub-range; and repeating the determining and updating for one or more other sub-ranges of the range. In some embodiments, the computer-readable program causes the computer to determine a sub-range of the range, which includes the solution; and generate the pseudo random number based on the In some embodiments, the computer-readable program causes the computer to determine the sub-range which includes the solution by defining a sub-range of the range; determining whether the sub-range includes the solution represented by the index; and repeating the defining and determining for one or more other sub-ranges of the range until a sub-range of the one or more other sub-ranges includes the solution represented by the index. Some embodiments may include, for example, a computer program product including a computer-useable medium including a computer-readable program, wherein the computer-readable program when executed on a computer causes the computer to perform methods in accordance with some embodiments of the invention. Some embodiments may provide other and/or additional benefits and/or advantages. BRIEF DESCRIPTION OF THE DRAWINGS [0033] For simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity of presentation. Furthermore, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. The figures are listed below. FIG. 1 is a schematic block diagram illustration of a computing system, in accordance with some demonstrative embodiments; FIG. 2 is a schematic flow-chart illustration of a method of pseudo randomly generating a number satisfying a range constraint and a mask constraint, in accordance with some demonstrative FIG. 3 is a schematic flow-chart illustration of a method of determining a number of possible solutions satisfying a range constraint and a mask constraint, in accordance with demonstrative embodiments; and FIG. 4 is a schematic flow-chart illustration of a method of generating a pseudo random number based on a selected index number, in accordance with some demonstrative embodiments. DETAILED DESCRIPTION [0038] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of some embodiments. However, it will be understood by persons of ordinary skill in the art that some embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, units and/or circuits have not been described in detail so as not to obscure the discussion. Discussions herein utilizing terms such as, for example, "processing", "computing", "calculating", "determining", "establishing", "analyzing", "checking" or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulate and/or transform data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information storage medium that may store instructions to perform operations and/or processes. The terms "plurality" and "a plurality" as used herein includes, for example, "multiple" or "two or more". For example, "a plurality of items" includes two or more items. The terms "random" and "pseudo-random" as used herein includes, for example, random, pseudo-random, unpredictable and/or haphazard. The terms "random" and/or "pseudo-random" as used herein may relate, for example, to one or more items or numbers that lack order, that appear to lack a pattern, that lack predictability, that appear to lack predictability, that lack a definitive pattern, that are haphazard or appear to be haphazard, that are generated or produced by a process whose output does not follow a describable pattern or a deterministic pattern, that do not follow a deterministic rule, that appear to not follow a deterministic rule, that appear to be chaotic or disorganized, or the like. Although portions of the discussion herein relate, for demonstrative purposes, to wired links and/or wired communications, embodiments of the invention are not limited in this regard, and may include one or more wired or wireless links, may utilize one or more components of wireless communication, may utilize one or more methods or protocols of wireless communication, or the like. Some embodiments may utilize wired communication and/or wireless communication. At an overview, some embodiments provide devices, systems and/or methods of pseudo randomly generating a natural number that satisfies both a predefined range constraint and a predefined mask constraint. The range constraint ("the range") may be defined, for example, by a lower bound value and an upper bound value, e.g., lower bound=122, upper bound=735, for the pseudo randomly generated number ("the generated number"). A mask constraint denoted M ("the mask") referring to a binary representation of the generated number, e.g., refering to a string of N binary bits representing the generated number, may include, for example, a constraint on one or more of the N bits ("the fixed bits") to have specific values; while one or more other bits ("the variable bits") of the generated number may be unconstrained to be pseudo randomly selected. For example, a mask "XXX1XX0X" may constrain a second Least-Significant-Bit (LSB) and a fifth LSB of the number to have the values "0" and "1", respectively, and maintain values of a first, third, fourth, sixth, seventh and eighth LSBs of the number unconstrained, to be pseudo randomly selected. Some embodiments may include a Pseudo-Random Number Generator (PRNG) capable of pseudo randomly generating a number according to the predefined mask and range such that, for example, the number is generated with a uniform distribution over a set of all valid numbers that satisfy both the range and mask constraints ("the set of possible solutions"). In some embodiments, the PRNG may iteratively divide the range into a plurality of at most an order of 2N disjoined sub-ranges, covering the range. The range division into sub-ranges may be performed according to a scheme that allows a representation of substantially every sub-range by a respective sub-range mask. In some embodiments, the PRNG may perform a first counting iteration to determine the number of possible solutions in the range, e.g., by iteratively defining a plurality of intersection masks between the mask and the plurality of sub-range masks, respectively. For example, the PRNG may define one or more intersection masks resulting from intersecting the mask with the sub-range masks, respectively. The PRNG may count a number of possible solutions in a first defined intersection mask. The PRNG may define a second intersection mask, count a number of possible solutions in the second intersection mask, and add the number of possible solutions in the second intersection mask to the number of possible solutions in the first intersection mask. The PRNG may proceed in a similar fashion to iteratively count possible solutions in substantially all the remaining intersection masks, to determine a total number of possible solutions satisfying the predefined range and mask. In some embodiments, the PRNG may select, e.g., pseudo randomly, an index number, denoted i, from a set of numbers having a size corresponding to the total number of the counted possible solutions. The selected index i may represent or correspond to, for example, a solution from the possible solutions in the range ("the i-th solution"). In some embodiments, the PRNG may perform a second counting iteration of the possible solutions in the range, to determine a sub-range ("the selected sub-range") corresponding to the selected i-th solution; and generate the pseudo random number based on the selected sub-range, e.g., by pseudo randomly selecting a possible solution included in the sub-range. For example, the PRNG may iteratively divide the range into at most an order of 2N disjoined sub-ranges covering the range, and define one or more intersection masks corresponding to the sub-range masks, respectively, e.g., as described above with relation to the first counting iteration. In some embodiments, the PRNG may count a number of possible solutions in a first defined intersection mask, and determine if the i-th solution corresponds to the first respective sub-range, for example, by checking whether the number of possible solutions in the first defined intersection mask is equal to, or greater than the index i. In some embodiments, the PRNG may pseudo randomly generate a possible solution from the first sub-range, e.g., if the i-th solution does correspond to the first defined corresponding intersection mask. Otherwise, the PRNG may select a second intersection mask, count a number of possible solutions in the second intersection mask, and add the number of possible solutions in the second intersection mask to the number of possible solutions in the first intersection mask. The PRNG may check whether the i-th solution corresponds to the second intersection mask, for example, by checking whether the number of total possible solutions in the first and second selected intersection masks is equal to, or greater than the index i, and so on, e.g., until it is determined that the defined intersection mask corresponds to the i-th solution. In some embodiments, the first and the second counting iterations may be performed using a common or similar algorithm, executed using common or similar code and/or executed re-using the same ("used") code. In some embodiments, the PRNG may generate the pseudo random number based on the range and mask constraints without contemporarily storing in a memory some or all of the possible solutions, sub-ranges, sub-masks and/or intersection masks, for example, without contemporarily storing in the memory two or more of the possible sub-ranges, sub-masks, intersection masks, and/or possible solutions, during the process of generating the pseudo random number, for example, since the defining of the plurality of sub-range masks, the defining of the intersection masks and/or the counting of the possible solutions in the sub-ranges may be iteratively performed with relation to each sub-range, e.g., such that when defining a certain sub-range mask and/or counting the possible solutions in the certain sub-range, only data corresponding to the certain sub-range, and/or the sub-range mask and intersection mask corresponding to the certain sub-range may be stored in memory. In some embodiments, the PRNG may generate the pseudo random number within a time period corresponding to a time complexity of no more than an order of N, for example, with an order consant of less than 500, e.g., wherein a time complexity of k*N is equal to k times the time needed for processing the word size N; and/or utilizing a memory space corresponding to a space complexity of no more than an order of 1, e.g., with a relatively small order constant, for example, utilizing a constant memory space, e.g., of no more than ten N-size words regardless of the value of N, assuming that the pseudo random number is generated using a machine having an N word-size, which processes an N bit string as a single word. In some embodiments, the PRNG may be associated with, or may be implemented as part of a pseudo random test generator, which may be used for simulation based verification, and may be required to generate values for instruction opcodes, satisfying a plurality of user defined and/or architecture defined constraints, e.g., which may be represented by one or more range and mask constraints. Some embodiments may be applied in a context of post-silicon exercisers. In other embodiments the PRNG may be implemented as part of any other suitable device or system. Reference is made to FIG. 1, which schematically illustrates a system 100 in accordance with some demonstrative embodiments. In some embodiments, system 100 may include a PRNG 130 to pseudo randomly generate a number, denoted Rn, satisfying at least a range constraint and a mask constraint, e.g., as described in detail In some demonstrative embodiments, system 100 may include, for example, a processor 111, an input unit 112, an output unit 113, a memory unit 114, a storage unit 115, and/or communication unit 116. System 100 optionally includes other suitable hardware components and/or software components. In some embodiments, PRNG 130 may be included in, associated with, or implemented as part of a test generator 120 to generate pseudo random values used, for example, in a hardware verification process. In some embodiments, test generator 120 may generate at least one test case 140 including values for instruction opcodes, satisfying a plurality of user defined and/or architecture defined constraints, e.g., which may be represented by one or more range and mask constraints. In some embodiments, PRNG 130 may receive, e.g., from test generator 120 an input including a plurality of input constraints, for example, a range constraint and a mask constraint; and may pseudo randomly generate the number Rn satisfying the input constraints, e.g., with a substantially uniform distribution over a set of possible solutions. For example, PRNG 130 may receive a range constraint specifying a range, e.g., defined by an upper bound value and a lower bound value. PRNG 130 may also receive a mask constraint, for example, a constraint on one or more fixed bits of the N bits of a binary representation of the number Rn, to have specific values; while one or more other, variable bits of the number Rn may be unconstrained to be pseudo randomly selected. For example, a mask "XXX1XX0X" may constrain a second LSB and a fifth LSB of the number Rn to have the values "0" and "1", respectively, and maintain values of a first, third, fourth, sixth, seventh and eighth LSBs of the number Rn unconstrained, to be pseudo randomly selected. In some embodiments, PRNG 130 may determine a number, denoted P, of possible solutions ("the possible solutions") satisfying the input constraints, e.g., by performing a first counting iteration of the possible solutions. For example, PRNG 130 may count the number of possible solutions in a plurality of intersection masks, resulting from intersecting the mask with a plurality of sub-range masks representing a plurality of sub-ranges of the range, respectively, e.g., as described in detail below. In some embodiments PRNG 130 may select, e.g., pseudo randomly, an index i representing a solution from the set of the possible solutions, wherein i belongs to the set {0, . . . , P-1}. In some embodiments, PRNG 130 may generate the number Rn based on the index i, e.g., by performing a second counting iteration of the possible solutions. For example, PRNG 130 may determine a selected sub-range corresponding to the selected i-th solution; and generate the number Rn based on the selected sub-range, e.g., by pseudo randomly selecting a possible solution included in the sub-range, as described below. In some embodiments, PRNG 130 may perform the first and the second counting iterations using a common or similar algorithm and/or executed using common or similar code. For example, PRNG 130 may perform the first and second counting iterations using a common predefined counting function capable of counting a number of solutions; and generating a pseudo random number upon reaching a predefined input value, denoted k, or returning the number of solutions, e.g., if the value k is not reached. For example, PRNG 130 may perform the first counting iteration by applying the counting function to the range and mask, and the value k=MAX_INT, wherein MAX_INT denotes a selected value greater than a maximal possible number of the possible solutions, e.g., MAX_INT≧2 ; and perform the second counting iteration by applying the same counting function to the range and mask, and the value k=i. In some embodiments, the first and the second counting iterations may be performed using a common or similar algorithm and/or executed using common or similar code. In some embodiments, PRNG 130 may generate the number Rn without storing one or more of the set of P possible solutions, one or more of the sub-ranges, and/or one or more of sub-range masks and/or intersection masks corresponding to the sub-ranges, for example, without contemporarily storing all of the P possible solutions, the sub-ranges, and/or the sub-range masks and/or intersection masks corresponding to the sub-ranges, e.g., without contemporarily storing more than one of the sub-ranges, and/or the sub-range mask and/or intersection mask corresponding to the sub-range in memory unit 114 and/or in other memory spaces of system 100. In some embodiments, PRNG 130 may generate the number Rn within a time period corresponding to a time complexity of no more than an order of N, e.g., with an order constant of less than 500; and/or utilizing a memory space corresponding to a space complexity of no more than an order of 1, for example, utilizing a constant memory space, e.g., of no more than ten N-size words regardless of the value of N, assuming that the pseudo random number is generated using a machine having an N word-size, which processes an N bit string as a single word. In some demonstrative embodiments, processor 111 includes, for example, a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a host processor, a controller, a plurality of processors or controllers, a chip, a microchip, one or more circuits, circuitry, a logic unit, an integrated circuit (IC), an application-specific IC (ASIC), or any other suitable multi-purpose or specific processor or controller. In some demonstrative embodiments, input unit 112 includes, for example, a keyboard, a keypad, a mouse, a touch-pad, a stylus, a microphone, or other suitable pointing device or input device. Output unit 113 include, for example, a cathode ray tube (CRT) monitor or display unit, a liquid crystal display (LCD) monitor or display unit, a screen, a monitor, a speaker, or other suitable display unit or output device. In some demonstrative embodiments, memory unit 114 includes, for example, a random access memory (RAM), a read only memory (ROM), a dynamic RAM (DRAM), a synchronous DRAM (SD-RAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units. Storage unit 115 includes, for example, a hard disk drive, a floppy disk drive, a compact disk (CD) drive, a CD-ROM drive, a digital versatile disk (DVD) drive, or other suitable removable or non-removable storage units. Memory unit 114 and/or storage unit 115 may, for example, store data processed by system 100. In some demonstrative embodiments, communication unit 116 includes, for example, a wired or wireless network interface card (NIC), a wired or wireless modem, a wired or wireless receiver and/or transmitter, a wired or wireless transmitter-receiver and/or transceiver, a radio frequency (RF) communication unit or transceiver, or other units able to transmit and/or receive signals, blocks, frames, transmission streams, packets, messages and/or data. Communication unit 116 may optionally include, or may optionally be associated with, for example, one or more antennas, e.g., a dipole antenna, a monopole antenna, an omni-directional antenna, an end fed antenna, a circularly polarized antenna, a micro-strip antenna, a diversity antenna, or the like. In some demonstrative embodiments, the components of system 100 are enclosed in, for example, a common housing, packaging, or the like, and are interconnected or operably associated using one or more wired or wireless links. In other embodiments, for example, components of system 100 may be distributed among multiple or separate devices, may be implemented using a client/server configuration or system, may communicate using remote access methods, or the like. Reference is made to FIG. 2, which schematically illustrates a method of generating a pseudo random number satisfying a range constraint and a mask constraint, in accordance with some demonstrative embodiments. In some non-limiting embodiments, one or more operations of the method of FIG. 2 may be implemented, for example, by one or more elements of a system, e.g., system 100 (FIG. 1), for example, a PRNG, e.g., PRNG 130 (FIG. 1), and/or by other suitable units, devices and/or systems. As indicated at block 210, in some embodiments, the method may include, for example, receiving a range constraint and a mask constraint for the pseudo randomly generated number, e.g., as described above. The range and mask constraints may be pre-defined by a user and/or by other requirements, e.g., requirements associated with hardware-verification. A range constraint may be characterized, or represented by a lower bound value, denoted minVal, and an upper bound value, denoted maxVal, defining a range [minVal,maxVal] for the number Rn. A mask constraint, denoted M, represented by an N bit binary string, may be characterized, or represented by an ordered pair of binary strings of N bits, denoted (F,V). The string F, also denoted fixedBits, may represent constrained bits of the mask M; and the string V, also denoted variableBits, may represent unconstrained bits of the mask M. The string F may include, or may be, for example, a binary string of N bits, having bits with the value "1" representing respective bits of the mask M having a constrained value "1"; and having bits with the value "0" for substantially all other bits. The string V may include, or may be, for example, a binary string of N bits, have bits with the value "1" representing respective unconstrained bits of the mask M, and having bits with the value "0" for substantially all other bits. For example, for N=9, the mask M=XX1X01XX0 may be represented by a corresponding ordered pair of strings (fixedBits,variableBits) or (F,V), wherein F= 001001000, and V=110100110. As indicated at block 220, in some embodiments the method may include, for example, determining a number, also denoted solutionCount, of possible solutions satisfying the range and mask constraints, e.g., based on the values minVal, maxVal, fixedBits, and VariableBits. For example, determining the number solutionCount may include performing a first counting iteration, e.g., by applying a pre-defined counting function, denoted CountGen, to the values minVal, maxVal, fixedBits, VariableBits, and MAX_INT, e.g., as described below with reference to FIG. 3. As indicated at block 230, in some embodiments the method may include, for example, pseudo randomly selecting an index, also denoted solutionIndex, representing a solution of the possible solutions. Pseudo randomly selecting the index may include, for example, applying a pseudo random-number-selecting function, for example, denoted GetRandomInRange, or any other suitable function, to pseudo randomly select a number with a uniform distribution from a range of numbers of a size corresponding to the number solutionCount, e.g., from a set of numbers [0,solutionCount-1]. As indicated at block 240, in some embodiments the method may include, for example, generating the number Rn, based on the index solutionIndex. For example, generating the number Rn may include performing a second counting iteration, e.g., by applying the function CountGen to the values minVal, maxVal, fixedBits, VariableBits, and solutionIndex, e.g., as described below with reference to FIG. 4. In some embodiments, one or more operations of the method of FIG. 2 may be implemented using any suitable computer readable program including a computer code, for example, written in a "C" computer language or any other suitable language. For example, one or more operations of the method of FIG. 2 may be implemented using one or more lines of the following code of a function ProcedureGen: 050 int Gen(int minVal, int maxVal, 051 int fixedBits, int variableBits) 052 { 053 int solutionCount=CountGen(minVal,maxVal,fixedBits,variableBits,MAX_INT); 054 int solutionIndex=GetRandomInRange(0,solutionCount-1); 055 return CountGen(minVal,maxVal,fixedBits,variableBits,solutionCount); 056 }. Reference is made to FIG. 3, which schematically illustrates a method of determining a number of possible solutions satisfying a range constraint and a mask constraint, in accordance with demonstrative embodiments. In some non-limiting embodiments, one or more operations of the method of FIG. 3 may be included in, or implemented by, for example, a part of one or more operations of a method of generating a pseudo random number based on range and mask constraints, e.g., the method of FIG. 2; and/or one or more elements of a system, e.g., system 100 (FIG. 1), for example, a PRNG, e.g., PRNG 130 (FIG. 1), and/or by other suitable methods, units, devices and/or systems. As indicated at block 310, in some embodiments, the method may include, for example, determining a first sub-range mask denoted m , representing a first sub-range of the range. For example, the method may include determining a lower sub-range bound value, represented by an N-bit binary string, denoted currentMinVal, and an upper sub-range bound value represented by an N bit binary string, denoted currentMaxVal, corresponding to the sub-range. The lower sub-range bound value may be defined to be equal to the value MinVal. The upper sub-range bound value and/or the sub-range mask may be defined according to the string currentMinVal, e.g., as described below. In some demonstrative embodiments, the string currentMaxVal may be defined by setting each bit of a sequence of Z(currentMinVal) LSBs of the string currentMaxVal to the value "1", wherein Z(str) denotes the number of consecutive LSBs of a string str having the value "0"; and setting all other bits of the string currentMaxVal to have values of respective bits of the string currentMinVal. Determining the sub-range mask m may include setting the Z(currentMinVal) LSBs of m to be variable bits, and setting one or more other bits of m to be fixed bits having the values of respective bits of the strings currentMinVal and currentMaxVal. For example, if currentMinVal=1001011000, then Z(currentMinVal)=3, currentMaxVal=1001011111, and As indicated at block 320, in some embodiments the method may include, for example, determining a number of possible solutions in the sub-range, based on the sub-range mask, e.g., as described below. In some embodiments, determining the number of possible solutions in the sub-range may include, for example, checking whether or not an intersection between the sub-range mask m and the mask M includes at least one possible solution. For example, if, M=(F,V), and m is represented by a corresponding ordered pair of N-bit binary strings m ), then a condition for the intersection of sub-range mask m and the mask M to include at least one possible solution may be represented by the following logical condition: |V)==0 (1) "&" represents a bitwise "and" logical operator; "==" represents a bitwise "equal" logical operator; "˜" represents a bitwise "not" logical operator; " " represents a bitwise "xor" logical operator; and "|" represents a bitwise "or" logical operator. In some embodiments, determining the number of possible solutions in the sub-range may include determining that the number of possible solutions in the sub-range is zero, if the intersection between the sub-range mask m and the mask M does not include at least one possible solution, e.g., does not fulfill the Logical condition 1. Otherwise, e.g., if the intersection between the sub-range mask m and the mask M is determined to include at least one possible solution, determining the number of possible solutions in the sub-range may include defining an intersection mask, denoted m, based on the intersection between the sub-range mask m and the mask M. For example, determining the number of possible solutions in the sub-range may include counting a number, denoted h, of variable bits of the intersection mask m ("the hamming weight of m"), e.g., representing a number of bits that are variable bits in both the sub-range mask m and in M, respectively. Determining the number of possible solutions in the sub-range may also include determining the number of possible solutions in the sub-range based on the hamming weight h, e.g., determining that the number of possible solutions in the sub-range is equal to 2 . For example, if v =10010101, and V=01011101, then h=3, since only the first, third and fifth LSBs of both v and V have the value "1", and the number of possible solutions in the sub-range is 2 As indicated at block 330, in some embodiments the method may include, for example, adding the number of possible solutions in the sub-range, e.g., 2 , to a total number of possible solutions. In some embodiments, one or more operations of the method of FIGS. 2 and/or 3, e.g., the operations described above with reference to blocks 320, 330, and/or 230 (FIG. 2), may be implemented using any suitable computer readable program including a computer code, for example, written in a "C" computer language or any other suitable language, for example, using one or more lines of the following code of a function CountGenSubrange: -US-00001 001 boolean CountGenSubrange(int fixedBits,int variableBits, 002 int currentMinVal,int currentMaxVal, 003 int solutionlndex,int* solutionCount) 004 { 005 int currentVariableBits= (currentMaxVal-currentMinVal); 006 if (MaskContradiction(fixedBits,varibleBits, 007 currentMinVal,currentVariableBits)) 008 return false; 009 *solutionCount+=1<<GetHammingWeight(variableBits& currentVariable- Bits); 010 if (*solutionCount<=solutionIndex) 011 return false; 012 *solutionCount=GetRandomInMaskIntersection(fixedBits,variableBits, 013 currentMinVal,currentVariableBits); 014 return true; 015 }. In one demonstrative non-limiting example, the function CountGenSubrange may receive the input values fixedBits=000000 and variableBits=111110, e.g., corresponding to the mask M=XXXXX0; currentMinVal =101000; currentMaxVal=101111; solutionIndex=12 and solutionCount=10. Based on the input values, the function CountGenSubrange may perform the following operations, with reference to code lines determining a sub-range mask, e.g., 101XXX, based on the values currentMinVal=101000 currentMaxVal=101111; determining that there exists a possible solution satisfying both the sub-range mask and the mask M, e.g., by determining that Condition 1 is satisfied; determining an intersection mask, e.g., 101XX0, of the mask M and the sub-range mask; counting a number of solutions, e.g., four solutions, satisfying the intersection mask; updating the number of total possible solutions, e.g., update the value of solutionCount to solutionCount=14; determining that the solutionIndex=12 is smaller or equal to the updated solutionCount=14; generating a number satisfying the intersection mask; and returning a value "true" to indicate that a possible solution has been generated. As indicated at block 340, in some embodiments, the method may include, for example, checking whether or not determination of the number of possible solutions has been iterated over substantially the entire range. As a demonstrative non-limiting example, the method may include checking whether or not the upper bound value currentMaxVal of the sub-range is equal to the upper bound value MaxVal of the range. As indicated at block 344, in some embodiments, the method may include, for example, determining a next sub-range mask representing a respective next sub-range, e.g., if determination of the number of possible solutions not been iterated over substantially the entire range. For example, in some embodiments the method may include determining a lower bound value and an upper bound value for the next sub-range. In some embodiments, the method may include determining the lower bound value of the next sub-range to be a number exceeding by 1 the upper bound value of a previously determined sub-range ("the previous sub-range"); and determining the upper bound value of the next sub-range according to the lower bound value of the next sub-range, e.g., as described above with reference to block 310 if, for example, the upper bound value of the next sub-range does not exceed the upper bound value of the entire range. In some embodiments, e.g., if the upper bound value of the next sub-range exceeds the upper bound value of the entire range, the method may include determining the upper bound value of the next sub-range to be the upper bound value of the entire range maxVal; and determining the lower bound value of the next sub-range currentMinVal, by setting each bit of a sequence of Z'(maxVal) LSBs of the string currentMinVal to the value "0", wherein Z'(maxVal) denotes the number of consecutive LSBs of the string maxVal having the value "1"; and setting all other bits of the string currentMinVal to have values of respective bits of the string maxVal. One or more further sub-ranges may be iteratively determined during one or more further iterations, respectively, e.g., as long as the upper bound value of the sub-ranges is larger than the lower bound value of a remaining uncovered range. For example, a current sub-range may be determined based on the lower bound of a previous sub-range. In one example, the upper bound currentMaxVal of the current sub-range may be determined by reducing the value "1" from the value of the lower bound of the previous sub-range; and the lower bound currentMinVal of the current sub-range may be determined by setting each bit of a sequence of Z'(currentMaxVal) LSBs of the string currentMaxVal to the value "0", wherein Z'(currentMaxVal) denotes the number of consecutive LSBs of the string currentMaxVal having the value "1"; and setting all other bits of the string currentMinVal to have values of respective bits of the string currentMaxVal. In some embodiments, the method may also include determining a number of possible solutions in the next sub-range e.g., as described above with reference to block 320; and adding the number of possible solutions in the next sub-range to the total number of solutions, e.g., as described above with reference to block 330. In some embodiments, determining the number of possible solutions in the next sub-range may include determining the hamming weight of the next sub-range based on the hamming weight of the previous sub-range mask. For example, the method may include maintaining the hamming weight of the previous sub-range and a number, denoted j, of variable bits of the previous sub-range mask. Determining the hamming weight of the next sub-range may include, for example, counting a number, denoted j', of variable bits of the next sub-range mask; counting a number, denoted j'', of variable bits of the mask M within a range of LSBs of the mask M having indices that are greater than j+1 and smaller than or equal to j'; and determining the hamming weight of the next sub-range to be equal to the sum of the hamming weight of the previous sub-range and the number j''. For example, if the previous sub-range is represented by the sub-range mask is m =101011XX, and the next sub-range is represented by the sub-range mask m =1011XXXX, then j=2, j'=4, j'' is equal to the number of variable bits of the mask M in the range between the second and fourth LSBs of the mask M, and the hamming weight of the next sub-range is equal to j+j''. As indicated at block 350, in some embodiments, the method may include, for example, determining the total number of solutions in the range, e.g., if counting the number of possible solutions has been iterated over substantially the entire range. Reference is made to FIG. 4, which schematically illustrates a flow-chart of a method of generating a pseudo random number, e.g., the number Rn, based on a selected index number, e.g., the index i, in accordance with some demonstrative embodiments. In some non-limiting embodiments, one or more operations of the method of FIG. 4 may be included in, or implemented by, for example, the method of FIG. 2, and/or one or more elements of a system, e.g., system 100 (FIG. 1), for example, a PRNG, e.g., PRNG 130 (FIG. 1), and/or by other suitable methods, units, devices and/or systems. As indicated at block 410, in some embodiments, the method may include determining a first sub-range mask, representing a first sub-range of the range. For example, the method may include determining a lower value bound and an upper value bound of the first sub-range, and determining the first sub-range mask corresponding to the sub-range; e.g., as described above with reference to block 310 (FIG. 3). As indicated at block 415, in some embodiments, the method may include determining whether or not the index i represents a solution in the sub-range, e.g., as described in detail below. As indicated at block 420, in some embodiments determining whether or not the index i represents a solution in the sub-range may include, for example, determining a number of possible solutions in the sub-range based on the sub-range mask, e.g., as described above with reference to block 320 (FIG. 3). As indicated at block 430, in some embodiments determining whether or not the index i represents a solution in the sub-range may include, for example, adding the number of possible solutions in the sub-range to a total number of possible solutions, e.g., as described above with reference to block 330 (FIG. 3). As indicated at block 435, in some embodiments determining whether or not the index i represents a solution in the sub-range may include, for example, determining whether or not the index i is smaller than or equal to the total number of solutions. As indicated at block 444, in some embodiments the method may include determining a next sub-range mask representing a next sub-range of the range if, for example, it is determined that the index i does not represent a solution in the sub-range, e.g., if the index i is larger than the total number of possible solutions. In some embodiments, determining the next sub-range mask may include determining the next sub-range mask based on the previous sub-range mask, e.g., as described above with reference to block 344 (FIG. 3). In some embodiments, the method may include, for example, determining a number of possible solutions in the next sub-range, e.g., as described above with reference to block 420; adding the number of possible solutions in the new sub-range to the total number of solutions, e.g., as described above with reference to block 430; and determining whether the index i is smaller or equal to the new total number of solutions, e.g., as described above with reference to block 435. As indicated at block 460, in some embodiments the method may include pseudo randomly generating the number Rn based on the sub-range representing the index i ("the selected sub-range"), for example, if it is determined that the index i does represent a solution in the sub-range, e.g., if the index i is bigger than the total number of possible solutions. As indicated at block 470, in some embodiments pseudo randomly generating the number Rn may include, for example, pseudo randomly generating a solution of the possible solutions in the selected sub-range. For example, the method may include pseudo randomly generating a number satisfying an intersection mask m representing an intersection between the mask M=(F,V) and the sub-range mask m=(f,v) corresponding to the selected sub-range. The intersection mask m may be represented by an ordered pair of N-bit binary strings, e.g., m ), wherein f =F|f, and v =V&v. For example, a bit having an index i of the string f may be set to the value "0", e.g., if both the bits F(i ) and f(i ) have the value "0", wherein F(i ) and f(i ) denote the i -th bits of the strings F and f, respectively; while other bits of the string f may be set to the value "1". Similarly, a bit having an index i of the string v may be set to the value "1", e.g., if both the bits V(i ) and v(i ) have the value "1", wherein V(i ) and f(i ) denote the i -th bits of the strings V and v, respectively; while other bits of string v may be set to the value "0". In some embodiments, pseudo randomly generating a number satisfying the intersection mask m may include, for example, pseudo randomly generating an N-bit binary string, denoted X, and logically combining the string X with the intersection mask m . In one example, the number Rn may be generated as follows: =(X& v ) f . (2) In other embodiments, the number Rn may be generated by applying any other suitable algorithm and/or calculation to the selected sub-range, the selected sub-range mask, and/or any other suitable sub-range and/or sub-range mask, which may be determined based on the selected sub-range, e.g., a sub-range preceding or succeeding the selected sub-range. In some embodiments, one or more operations of the methods of FIGS. 3 and/or 4 may be implemented using any suitable computer readable program including a computer code, for example, written in a "C" computer language or any other suitable language. In some embodiments, the first and the second counting iterations may be performed using a common or similar algorithm and/or executed using common or similar code. In some embodiments, operations of the methods of FIGS. 3 and 4 may be performed using a common or similar algorithm and/or executed using common or similar code. In one example, one or more operations of the method of FIG. 3 may be performed during a first counting iteration by applying the following counting function, denoted CountGen, to the minVal and maxVal strings representing the range, the fixedBits and variableBits strings representing the mask, and to the value solutionIndex=MAX_INT: -US-00002 016 int CountGen(int minVal, int maxVal, 017 int fixedBits, int variableBits, 018 int solutionIndex) 019 { 020 int solutionCount=0; 021 while (minVal!=0) { 022 int skip=(minVal&-minVal); 023 int currentMaxVal=minVal+skip-1; 024 if (currentMaxVal>maxVal) 025 break; 026 if (CountGenSubrange(fixedBits,variableBits, 027 minVal,currentMaxVal, 028 solutionIndex,&solutionCount)) 029 return solutionCount; 030 if (currentMaxVal==RANGE_MASK_MAX) 031 return solutionCount; 032 minVal=currentMaxVal+1; 033 } 034 while (maxVal>=minVal) { 035 int skip=((maxVal+1)&-(maxVal+1)); 036 int currentMinVal=maxVal-skip+1; 037 if (CountGenSubrange(fixedBits,variableBits, 038 currentMinVal,maxVal, 039 solutionIndex,&solutionCount)) 040 return solutionCount; 041 if (currentMinVal== RANGE_MASK_MIN) 042 break; 043 maxVal=currentMinVal-1; 044 } 045 046 return solutionCount; 047 } In some non-limiting embodiments, one or more operations of the method of FIG. 4 may be performed during a second counting iteration, for example, by applying the same counting function CountGen to the same minVal and maxVal strings, the same fixedBits and variableBits strings, and to the value solutionIndex=i. In one demonstrative non-limiting example, the input values fixedBits=0000000, variableBits=0111110, e.g., corresponding to the mask M=XXXXXX0; minVal=0010100, maxVal=0101000, corresponding to the range; and the value solutionIndex=10, may be provided as input to the function CountGen. Based on the input values, the function CountGen may perform the following operations, e.g., with reference to code lines 021-047: defining a first sub-range, e.g., 0010100-0010111; determining that there are two solutions in the first sub-range, e.g., by providing to the function CountGenSubrange the values fixedBits, variableBits, minVal=CurrentMinVal, and CurrentmaxVal corresponding to the first sub-range; updating the total number of solutions to two possible solutions; determining that the total number of possible solutions is smaller than the value solutionIndex; defining a second sub-range, e.g., 0011000-0011111; determining that there are four solutions in the second sub-range, e.g., by providing to the function CountGenSubrange the values fixedBits, variableBits, minVal=CurrentMinVal, and CurrentmaxVal corresponding to the second sub-range; updating the total number of solutions to a total of six possible solutions; determining that the total number of possible solutions is smaller than the value solutionIndex; defining a third sub-range, e.g., 0100000-0111111; determining that the upper bound of the third sub-range, 0111111, exceeds the upper bound maxVal; defining a fourth sub-range, e.g., 0100111-0101000; determining that there is a single solution in the fourth sub-range, e.g., by providing to the function CountGenSubrange the values fixedBits, variableBits, CurrentMinVal, and maxVal=CurrentmaxVal corresponding to the fourth sub-range; updating the total number of solutions to a total of seven possible solutions; determining that the total number of possible solutions is smaller than the value solutionIndex; defining a fifth sub-range, e.g., 0100000-0100111; determining that there are four solutions in the fifth sub-range, e.g., by providing to the function CountGenSubrange the values fixedBits, variableBits, CurrentMinVal, and maxVal=CurrentmaxVal corresponding to the fifth sub-range; updating the total number of solutions to a total of eleven possible solutions; determining that the total number of possible solutions is greater than the value solutionIndex; and pseudo randomly selecting a solution of the possible solutions in the fifth sub-range. Some embodiments may include a method, e.g., the method of FIG. 2, and/or an PRNG, e.g., PRNG 130 (FIG. 1), capable of generating a pseudo random number of N bits, e.g., the number Rn, based on predefined mask and range constraints within a time period corresponding to a total time complexity of no more than an order of N, and/or utilizing a total memory space corresponding to a total space complexity of no more than an order of 1, e.g., assuming the pseudo random number is generated using a machine having an N word-size. In one example, determining the number of possible solutions in the sub-ranges, e.g., as described above with reference to the calculation iterations of lines 021-033 and 034-047 of the function countGen, may be performed during no more than an order of N iterations ("the calculating iterations") since, for example, during each calculating iteration at least one of the numbers Z (currentminVal) and Z(currentmaxVal+1) corresponding to a sub-range ("the iterated sub-range") may increase and therefore, determining a next sub-range may include determining only values of LSBs calculated in previous calculating iterations of previous respective sub-ranges. Additionally or alternatively, the calculation of the hamming weight corresponding to the iterated sub-range may be performed, for example, by determining the hamming weight of a next sub-range based on a hamming weight of a previous sub-range counting iteration, e.g., as described above. Therefore, a total time period for calculating the hamming weight may correspond to a time complexity of no more than an order of N. Additionally or alternatively, determining the plurality of sub-ranges corresponding to the range and determining the number of possible solutions in substantially each of the sub-ranges, may be performed using no more than at most an order of 1 of memory space since, for example, each of the two counting iterations described above may be performed while maintaining in the memory space only values associated with a recently iterated sub-range, a number of the total counted solutions, and/or a constant number of input values. In some embodiments, e.g., if it is not assumed that the pseudo random number is generated using a machine having an N word-size, the pseudo random number may be generated, for example, within a time period corresponding to a total time complexity of no more than an order of N , and/or utilizing a total memory space corresponding to a total space complexity of no more than an order of N. For example, N calculations including an order of 1 operations on N bits numbers may be performed, resulting in a total time complexity of an order of N . Similarly, each operation, e.g., of the first and the second counting operations, may require an order of 1 values of N-bit numbers, resulting in a total space complexity an order of N. Some embodiments of the invention, for example, may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment including both hardware and software elements. Some embodiments may be implemented in software, which includes but is not limited to firmware, resident software, microcode, or the like. Furthermore, some embodiments of the invention may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For example, a computer-usable or computer-readable medium may be or may include any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. In some embodiments, a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements, for example, through a system bus. The memory elements may include, for example, local memory employed during actual execution of the program code, bulk storage, and cache memories which may provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Functions, operations, components and/or features described herein with reference to one or more embodiments, may be combined with, or may be utilized in combination with, one or more other functions, operations, components and/or features described herein with reference to one or more other embodiments, or vice versa. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention. Patent applications by Ehud Aharoni, Kfar Saba IL Patent applications by Oded Margalit, Ramat Gan IL Patent applications by International Business Machines Corporation Patent applications in class Random number generation Patent applications in all subclasses Random number generation User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120185522","timestamp":"2014-04-18T21:38:31Z","content_type":null,"content_length":"98469","record_id":"<urn:uuid:df21365b-d14f-4bda-aa4e-2d72f86c4731>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this equation Correct and can it be simplified/ solved for Vmax March 22nd 2013, 10:08 AM #1 Apr 2012 United Kingdom Is this equation Correct and can it be simplified/ solved for Vmax Good day to you all, I have the following fixed values: - Acceleration = 1000 mm/s^2 - (A[a]) Deceleration = -400 mm/s^2 - (A[d]) Total Distance = 450 mm - (S[t]) Time is totally variable/infinite and not a factor The object starts from velocity(u[1)] and accelerates a distance (S[a])to reach a velocity of V[m. ]It then decelerates from velocity V[m] a distance (S[d])to final velocity (V[2]). Distance S[a] and S[d] have to total the full distance travelled - S[t ] I would like to know the maximum velocity reached by the object. From this I can the easily calculate the distance S[a] and S[d. Now I believe that.......... $S_a = (V_m^2 - U_1^2)/(2*A_a)$ And obviously $S_d = (V_2^2 - V_m^2)/(2*A_d)$ So if acceleration distance $S_a = S_t - S_d$ Then we can substitute S[a ]in above to get: - $S_t - S_d = (V_m^2 - U_1^2)/(2*A_a)$ $S_d = S_t + (V_m^2 - U_1^2)/(2*A_a)$ Combining gives us $S_d = (V_2^2 - V_m^2)/(2*A_d) = S_t + (V_m^2 - U_1^2)/(2*A_a)$ I have tried multiply out and simplifying and in all honesty disappeared up ....... If I am right on the track, and that would be great! Can one of you gurus simplify it and solve for V[m] for two states: - 1. Values of starting velocity U[1] and finishing velocity V[2 ]both are 0 2. Given any starting velocity U[1] and finishing velocity V[2 ]if that's possible Thank you so much Re: Is this equation Correct and can it be simplified/ solved for Vmax Re: Is this equation Correct and can it be simplified/ solved for Vmax you sure? Re: Is this equation Correct and can it be simplified/ solved for Vmax You had $S_d = S_t - (V_m^2 - U_1^2)/(2*A_a)$ Re: Is this equation Correct and can it be simplified/ solved for Vmax Dam, never doubted you! That is why it never calculate out properly! Re: Is this equation Correct and can it be simplified/ solved for Vmax So the question now is........ Can anyone rearrange this lot to solve V[m 1. When V[2] and U[1] = 0 2. Arbitrary values of V[2] and U[1] $(V_2^2 - V_m^2)/(2*A_d) = S_t - (V_m^2 - U_1^2)/(2*A_a)$ March 22nd 2013, 01:36 PM #2 Super Member Oct 2012 March 22nd 2013, 02:11 PM #3 Apr 2012 United Kingdom March 23rd 2013, 03:24 AM #4 Super Member Oct 2012 March 23rd 2013, 03:34 AM #5 Apr 2012 United Kingdom March 23rd 2013, 03:50 AM #6 Apr 2012 United Kingdom
{"url":"http://mathhelpforum.com/algebra/215292-equation-correct-can-simplified-solved-vmax.html","timestamp":"2014-04-21T07:54:49Z","content_type":null,"content_length":"45591","record_id":"<urn:uuid:2c080ed9-a455-4e16-8d49-dfbd1c6e78d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Home Home Schedule Math 430, Spring 2009 Thursday, April 30 Solutions I posted some suggested problems on Galois theory on the homework page. Do as many as hold your interest, or as you need to gain some comfort with Galois groups. I will hold office hours Monday, 1pm - 3pm. As usual, I'm also very happy to meet by appointment. Tuesday, April 28 A brief list of topics for the final exam is available. The course schedule (linked at left) also covers through the end of the semester. My office hours over the coming days are: Wednesday (tomorrow) 1-3pm, and Monday 5/4 1 - 3pm. Remember that the exam is Wednesday 5/6, 10:30am - 12:30pm, in our usual room. By the weekend I'll have a short list of practice problems on the last week of class. I'm not going to collect these, but doing them will help you on the exam. Saturday, March 28 In addition to the notes from yesterday, a brief list of topics for Exam 2 is here. Friday, March 27 The notes on UFDs, the Eisenstein Criterion, and face rings are now available. Wednesday, March 25 A new homework is up, due next Friday. Since it's an exam week, I intend it to be very short and straightforward. Friday, March 6 Igor Konfisakhar (mangled email address ikonfisa AT wustl,edu) has asked me to inform you that he's available for paid tutoring. Could be a good opportunity if you want to brush up over break. Please email him for details. Wednesday, March 4 I've posted the notes on Zorn's Lemma and Maximal Ideals, along with a new homework due after spring break. It's my intention that this homework be relatively easy, and that it should not interfere with taking a refreshing spring break! Tuesday, February 24 I've posted some notes on the conjugacy classes of elements in S[n], as well as on our construction of the sign homomorphism via directed graphs. Also: a minor correction on the homework. I forgot the word "nonabelian" in conjunction with simple groups. Cyclic groups of prime order are simple, but not as interesting as the nonabelian simple groups! Saturday, February 14 I've posted a list of topics for Exam 1, in honor of St. Valentine's abiding interest in group theory. Also, I've written solutions for several problems on the homework. (Some of these we'd more informally discussed in class.) See the link at left. I'm happy to discuss solutions for other homework problems in office hours, and if there are a lot of questions about one or more I'll consider writing additional solutions. Wednesday, February 11 Homework set #5 is now posted. In recognition that we have an exam next week, I've made this one slightly shorter and much more straightforward/computational than the previous few homework sets. Also, I'll be collecting it on Friday, instead of Wednesday. Nonetheless, I recommend that you work seriously on it this weekend, since the material will be covered on the I've also posted a Solution to p53 #1 (from last week), which I understand some of you found a little difficult. Friday, January 30 As promised, the handouts on the subgroup lattice of C[12] and S[4] are available in PDF format. Please let me know if you have trouble reading these. Homework set #3 was posted Wednesday night, as discussed in class. Monday, January 26 My office hours will be Tuesday 1-2pm and Thursday 1:30-2:30pm on an ongoing basis. I've updated the syllabus. Also, the handout on the number of isomorphism classes of finite groups of orders up to 354 is here. Wednesday, January 21 Homework set #2 has been posted. Also, the subgroup lattice of C[12] and S[4], as handed out in class, are available for download (in postscript format). Finally, I've made solutions available for some homework questions that multiple people asked me about. See the link at left. Wednesday, January 14 Homework set #1 has been posted. Please use the link at left. Monday, January 12 Welcome to Math 430! The Syllabus and Schedule are linked at the left. Last modified April 30, 2009
{"url":"http://www.math.wustl.edu/~russw/s09.math430/","timestamp":"2014-04-16T10:26:56Z","content_type":null,"content_length":"6881","record_id":"<urn:uuid:d2554ed9-52e3-404d-a368-b4ca77e84dff>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential Equations August 6th 2009, 07:43 AM #1 Jan 2008 Differential Equations Q: A population of animals live on an isolated island such that at any time, the number of births per unit time is proportional to the population and the number of deaths per unit time is proportional to the square of the population. If the population at time t is x and the population is changing at the greatest rate when x = 1, show that $\frac{dx}{dt} = kx (2 - x)$ where k is positive constant. (I have done the showing step, but I don't know how to solve the following parts.) i) Find the general solution of the differential equation in terms of k. ii) Show that there is a limit to the size of the population. Pleas help! Thank you! Q: A population of animals live on an isolated island such that at any time, the number of births per unit time is proportional to the population and the number of deaths per unit time is proportional to the square of the population. If the population at time t is x and the population is changing at the greatest rate when x = 1, show that $\frac{dx}{dt} = kx (2 - x)$ where k is positive constant. (I have done the showing step, but I don't know how to solve the following parts.) i) Find the general solution of the differential equation in terms of k. ii) Show that there is a limit to the size of the population. Pleas help! Thank you! In order to simplify the problem we write the equation as... $\frac{dx}{dt}= - k\cdot x\cdot (x-2)$ , $k>0$ (1) Before the 'attack' some useful considerations... a) the (1) has two 'stationary solutions' : $x=0$ and $x=2$ b) for $x>2$ is $x'(t)<0$, for $0<x<2$ is $x'(t)>0$. That means that if we set for $x(0)=x_{0}$ it is... $\lim_{t \rightarrow \infty} x(t)= 2$ , $x_{0}>0$ c) $x'(t)$ is independent from t so that if $x_{0}(t)$ is solution od (1), $x_{0}(t-t_{0})$ with $t_{0}>0$ arbitrary is also solution of (1) The (1) can be solved with standard sepation of variable and the general solution is... $x= \frac{2}{1-2\cdot c\cdot e^{-kt}}$ (2) ... where the constant c is determined by the 'initial condition'. Some problem exist however when is $x_{0}=0$ Kind regards Last edited by chisigma; August 7th 2009 at 03:29 AM. Reason: error in (2).... sorry!... August 6th 2009, 08:15 AM #2 August 6th 2009, 11:34 PM #3
{"url":"http://mathhelpforum.com/differential-equations/97177-differential-equations.html","timestamp":"2014-04-20T23:47:11Z","content_type":null,"content_length":"42805","record_id":"<urn:uuid:82a08fe2-b745-475a-bb28-bc27e9eb33eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
1) In The Circuit Below, Find The Voltage Reading ... | Chegg.com 1) In the circuit below, find the voltage reading as seen by the high?impedance voltmeter across the 2 ? resistor. Note that the sign of your answer is critical; as it will indicate which way the induced current is flowing. (The B field is pointing into the page.) 2)Using a Smith chart, answer the following questions about the Z0 = 50 ? transmission line below that has a complex load given by ZL = 15 + j15 ?. Let me know if you need me to provide you with a chart. Plan to turn in your chart, clearly marking your answers. a. In polar coordinates, what is ?(0) (the reflection coefficient at the load)? b. Determine the admittance YL . c. Determine ?(.1?) (reflection coefficient a tenth of a wavelength from the d. What is Z at this same point? Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/1-circuit-find-voltage-reading-seen-high-impedance-voltmeter-across-2-resistor-note-sign-a-q3963277","timestamp":"2014-04-23T13:13:19Z","content_type":null,"content_length":"22361","record_id":"<urn:uuid:9c35b553-8f55-426c-81f5-8760060f11a9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
What's wrong with that integral December 9th 2006, 06:06 AM #1 Jul 2005 What's wrong with that integral $\int xe^{-x} dx$ $\int xe^{-x} dx=xe^{-x} - \int e^{-x} dx = xe^{-x}-e^{-x}+C$ mathbook gives: $-xe^{-x}-e^{-x}+C$ where does that minus come from ? You essentially got it right, just (I assume) two small typos that screwed you up. $dv=e^{-x}dx$ so $v=-e^{-x}$. Make the new substitution and everything will work out. Your second typo was here: $\int xe^{-x} dx = xe^{-x} - e^{-x}+C$ According to your work, the sign of the $e^{-x}$ term should have been a "+", which is why you got the sign of that term correct. December 9th 2006, 06:30 AM #2
{"url":"http://mathhelpforum.com/calculus/8636-what-s-wrong-integral.html","timestamp":"2014-04-17T05:11:51Z","content_type":null,"content_length":"36465","record_id":"<urn:uuid:8e8aed4a-dd75-455f-aa4e-7e3d41a12a7f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Steven Galbraith Thesis Doctoral thesis Equations for modular curves, Oxford 1996. A number of people have been interested in some of the results of this thesis. The following references contain further information on some of these topics. • The use of the canonical embedding to obtain equations for modular curves as in Chapter 2 (including some of the examples given in the thesis) is also described in: □ Mahoro Shimura, Defining equations of modular curves X[0](N), Tokyo J. Math., 18, no. 2, p. 443-456 (1995) • Equations for hyperelliptic modular curves as in Chapter 3 have also been given by: □ Josep Gonzalez, Equations of hyperelliptic modular curves, Ann. Inst. Fourier, 41, , p. 779-795 (1991) □ N. Murabayashi, On normal forms of modular curves of genus 2, Osaka J. Math, 29, p. 405-462 (1992) □ Yuji Hasegawa, Table of quotient curves of modular curves X[0](N) with genus 2, Proc. Japan Acad. Ser. A Math. Sci. 71, no. 10, p. 235-239 (1995) Different methods for obtaining some equations for hyperelliptic modular curves (and more generally, curves whose Jacobian is a factor of J[0](N)) were given by: □ X. D. Wang, 2-dimensional simple factors of J[0](N), Manuscripta Mathematica, Vol. 87, No. 2, p. 179-197 (1995) □ Hermann-Josef Weber, Hyperelliptic simple factors of J[0](N) with dimension at least 3, Experimental Mathematics, Vol. 6, No. 3, p. 235-249 (1997) □ Gerhard Frey and Michael Mueller, Arithmetic of modular curves and applications, in B. H. Matzat (ed.), Algorithmic algebra and number theory, Springer (1999) • To obtain the modular form data for computations such as these I recommend consulting William Stein's tables. • There has been a lot of work on Q-curves, related to that of Chapter 6 of the thesis. The connection between Q-curves and rational points on quotients of modular curves was noted by Elkies: □ Noam Elkies, Remarks on elliptic K-curves, preprint (1993) Examples of j-invariants of Q-curves corresponding to the cases where the modular curve has genus zero or has genus one and the Jacobian has positive rank have been given by: □ Yuji Hasegawa, Q-curves over quadratic fields, Manuscripta Math., 94, p. 347-364 (1997) □ Josep Gonzalez and J.-C. Lario, Rational and elliptic parameterisations of Q-curves, J. Num. Th., 72, p. 13-31 (1998) Data on the j-invariants of the quadratic Q-curves provided by the rational points found in Chapter 6 of the thesis is given in: □ Steven Galbraith, Rational points on X[0]^+(p), Experimental Math., 8, No. 4, p. 311-318 (1999) More information on the j-invariants of quadratic Q-curves can be found in: □ Josep Gonzalez, On the j-invariants of the quadratic Q-curves, Preprint (1998) Two new examples of j-invariants of Q-curves (from the case where N is composite) are given in: □ Steven Galbraith, Rational points on X[0]^+(N) and quadratic Q-curves, gzipped ps. • The outcome of this work is the following fact: Suppose that the genus of X[0]^+(N) is less than or equal to 5. Then X[0]^+(N)(Q) has exceptional rational points (i.e., non-cusp and non-CM) when: X[0]^+(N) has genus one and N = 37, 43, 53, 61, 65, 79, 83, 89, 101 and 131 (all rank 1), X[0]^+(N) has genus between 2 and 5 and N = 73, 91, 103, 125, 137, 191 and 311. We conjecture that the above cases are the only ones for which there are exceptional rational points when the genus of X[0]^+(N) is less than or equal to 5 Last Modified: 18-10-2001
{"url":"http://www.isg.rhul.ac.uk/~sdg/thesis.html","timestamp":"2014-04-18T10:35:08Z","content_type":null,"content_length":"5309","record_id":"<urn:uuid:01e011c1-4a26-44a7-af04-41f54533bfc3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence and multiplicity of positive solutions to a perturbed singular elliptic system deriving from a strongly coupled critical potential In this paper, we consider singular elliptic systems involving a strongly coupled critical potential and concave nonlinearities. By using variational methods and analytical techniques, the existence and multiplicity of positive solutions to the system are established. MSC: 35J60, 35B33. Palais-Smale condition; Nehari manifold; strongly coupled; elliptic system; critical potential 1 Introduction and main results In this paper, we consider the following elliptic system: where is a smooth bounded domain such that , , is the critical Sobolev exponent, is the best Hardy constant and denotes the completion of with respect to the norm and is defined as the completion of the with respect to the norm defined by for . Definitions of strongly and weakly coupled terms are as follows. The terms and ( ) are weakly coupled, ( ) is strongly coupled when L or K is a derivative operator. Thus, is strongly coupled when and are positive. The parameters in (1.1) satisfy the following assumption. The corresponding energy functional of (1.1) is defined in by where and . Then and the duality product between and its dual space is defined as where and denotes the Fréchet derivative of J at . A pair of functions is said to be a weak solution of (1.1) if Therefore, a weak solution of (1.1) is equivalent to a nonzero critical point of [1]. Problem (1.1) is related to the well-known Hardy inequality [2] If , by (1.2), is an equivalent norm of H, the operator L is positive and the first eigenvalue of L and the following best constant are well defined: where is the completion of with respect to . Note that is the well-known best Sobolev constant. For , the constant is achieved by the following extremal functions [3]: where is a radially symmetric function On the other hand, for any , , , and , , by the Young and Sobolev inequalities, the following best constants are well defined on the space : We define Since f is a continuous function on such that . Then there exists such that Set , , and . Then (1.1) reduces to the semilinear scalar problems that have been extensively investigated by many authors. See [4-6] and the references therein. Regular semilinear elliptic systems have been studied extensively and many conclusions have been established. For example, Alves et al. studied in [7] an elliptic system and some important conclusions had been obtained. However, the elliptic systems involving the Hardy inequality have seldom been studied and we only find some results in [8-16]. Thus it is necessary for us to investigate the related singular systems deeply. Among the references above, the elliptic systems involving the Hardy inequality and concave-convex nonlinearities had been studied only in [12]. In this paper, only the case of (1.1) involving multiple strongly-coupled critical terms is considered. Let be the Lebesgue measure of Ω. We define the following constant: Then the main results of this paper can be concluded in the following theorems and the conclusions are new to the best of our knowledge. It can be verified that the intervals in Theorems 1.1 and 1.2 for the parameters , , μ and q are allowable. Theorem 1.1Suppose that (ℋ) holds and . Then problem (1.1) has at least one positive solution. Theorem 1.2Suppose that (ℋ) holds, , and . Then there exists such that problem (1.1) has at least two positive solutions for all and satisfying . This paper is organized as follows. Some preliminary results and properties of the Nehari manifold are established in Sections 2 and 3, and Theorems 1.1 and 1.2 are proved in Section 4. 2 The local Palais-Smale condition Throughout this paper, we always assume that the assumption (ℋ) holds, denotes the norm of the space H, by the Hardy inequality is equivalent to , i.e., denotes the first eigenvalue of the operator L, means the norm of the space , is the dual space of E. for all and . is said to be nonnegative in Ω if and in Ω. is said to be positive in Ω if and in Ω. is a ball in . denotes a quantity satisfying , means as and is a generic infinitesimal value. In particular, the quantity means that there exist the constants such that as ε is small. We always denote positive constants as C and omit dx in integrals for convenience. Lemma 2.1If is a (PS)[c]-sequence ofJwith inE, then and , where Proof Let and . Since is a (PS)[c]-sequence of J with in E, we can deduce that , and therefore , that is, From the Hölder inequality it follows that Thus, the proof is complete.□ Lemma 2.2If is a (PS)[c]-sequence of the functionalJ, then is bounded inE. Proof See Hsu [[12], Lemma 2.2].□ Lemma 2.3Suppose that (ℋ) holds. ThenJsatisfies the (PS)[c]condition for all , where Proof We follow the argument in [15]. Let be a (PS)[c]-sequence of J with . Write . We know from Lemma 2.2 that is bounded in E, and then up to a subsequence, z is a critical point of J. Furthermore, we may assume that , weakly in H and , strongly in for all and , a.e. in Ω. Hence, we have that Set , and . From the Brézis-Lieb lemma [17] it follows that and by Lemma 2.1 in [18] we have Since , and by (2.2) to (2.4), we can deduce that Hence, we may assume that If , the proof is complete. Assume ; then from (2.6) and the definition of it follows that which implies that In addition, from (2.5) to (2.7) and Lemma 2.1, we get which is a contradiction. Therefore, the proof of Lemma 2.3 is complete.□ 3 Nehari manifold Since J is unbounded below on E, we need to consider J on the Nehari manifold By the Hölder inequality and the definition of it follows that Lemma 3.1The functionalJis coercive and bounded below on . Proof Suppose that . From (3.1) and (3.2) we get Thus, J is coercive and bounded below on .□ Lemma 3.2Suppose that is a local minimizer ofJon and . Then in . Proof The proof is similar to that of [19] and the details are omitted.□ Proof We argue by contradiction. Suppose that there exist such that and . Then the fact together with (3.5) and (3.6) imply that By (1.5) and (3.7) we have which implies that By (3.2) and (3.8) we have From (3.9) and (3.10) it follows that which is a contradiction.□ By Lemma 3.3, we write and define Lemma 3.4 (ii) There exists a positive constant depending on , , q, N, , and such that for all . Proof (i) Let . By (3.1) and (3.6) it follows that According to (3.1) and (3.11), we have that (ii) Suppose that and . By (1.7), (3.1) and (3.5) we have that which implies that From (3.4) and (3.12) it follows that where is a positive constant.□ Lemma 3.5Suppose that and with . Then there exist unique such that and . In particular, we have Proof The proof is similar to that of [20] and is omitted.□ Then we have the following lemma. Lemma 3.6Suppose that and with . Then there exist unique such that , and Proof The proof is almost the same as that in [[20], Lemma 2.7] and is omitted here.□ 4 Proof of Theorems 1.1 and 1.2 Lemma 4.1 (i) If , then the functionalJhas a (PS)sequence . (ii) If , then the functionalJhas a -sequence . Proof The proof is similar to that of [21] and is omitted.□ Lemma 4.2Suppose that . ThenJhas a minimizer such that is a positive solution of (1.1) and . Proof By Lemma 4.1(i), there exists a (PS) of J such that Since J is coercive on (see Lemma 3.1), we get that is bounded in E. Passing to a subsequence (still denoted by ), we can assume that there exists such that which implies that First, we claim that is a solution of (1.1). By (4.1) and (4.2), it is easy to see that is a solution of (1.1). Furthermore, from and (3.3), we deduce that Taking in (4.4), by (4.1), (4.2) and the fact , we get Therefore, is a nontrivial solution of (1.1). Next, we prove that strongly in E and . Noting and applying the Fatou lemma, we have Therefore, and . Set . By the Brézis-Lieb lemma [17], we get Then standard argument shows that strongly in E. Moreover, we have . Otherwise, if , then by Lemma 3.5 there exist unique such that and . Since there exists such that . By Lemma 3.5 we get that which is a contradiction. Since and , by Lemma 3.2 we may assume that is a nontrivial nonnegative solution of (1.1). In particular , . Indeed, without loss of generality, we may assume that . Then as is a nontrivial nonnegative solution of by the standard regularity theory, we have in Ω and Moreover, we may choose such that and so by Lemma 3.6 there is unique such that . Moreover, This implies which is a contradiction. Finally, from the maximum principle [22] we deduce that in Ω and is thus a positive solution of (1.1).□ Let be defined as in (1.4) and set , where is a cut-off function: The following results are already known. Lemma 4.3[4] As we have the following estimates: Lemma 4.4[11] Suppose that (ℋ) holds, is defined as in (1.6) and are the minimizers of defined as in (1.4). Then and has the minimizers , where . Lemma 4.5Under the assumptions of Theorem 1.2, there exist and such that for all there holds Proof For all , define the functions and Note that and as t is closed to 0. Thus, is attained at some finite with . Furthermore, , where and are the positive constants independent of ε. Choose small enough such that for all . Set . Then for all and , which implies that there exists satisfying , for all . Note that From (4.9) and Lemmas 4.3, 4.4 it follows that where we have used the assumption . Therefore we can choose , such that The definition of in Lemma 2.1 implies that Note that Taking ε small enough, there exists such that for all , Choose . Then for all there holds Finally, we prove that for all . Recall that . By Lemma 3.5, the definition of and (4.11), we can deduce that there exists such that and The proof is thus complete by taking .□ Lemma 4.6Set . Then for all , problem (1.1) has a positive solution such that and . Proof By Lemma 4.1, there exists a -sequence of J for all . From Lemmas 2.3, 3.4 and 4.5, it follows that and J satisfies the condition for all . Since J is coercive on , we get that is bounded in E. Therefore, there exist a subsequence (still denoted by ) and such that strongly in E and for all . Since and , by Lemma 3.2 we may assume that is a nontrivial nonnegative solution of (1.1). Moreover, by (3.7) and , we get This implies that and . From the strong maximum principle [22] it follows that is a positive solution of (1.1).□ Proof of Theorems 1.1 and 1.2. By Lemma 4.2, we obtain that (1.1) has a positive solution for all . On the other hand, from Lemma 4.6, we can get the second positive solution for all . Since , this implies that and are distinct.□ Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2012/1/116","timestamp":"2014-04-17T04:16:34Z","content_type":null,"content_length":"327122","record_id":"<urn:uuid:80d70460-2f3d-4ac6-8907-50c2ebe8c9e1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Cognitive science of mathematics cognitive science of mathematics is the study of ideas (concepts) using the techniques of cognitive science . It proposes to ground the foundations of mathematics in the empirical study of human This approach to mathematics in general was preceded by the study of human cognitive bias in probabilistic reasoning and economic contexts, most notably by Amos Tversky and Daniel Kahneman. Such biases affect economic measurement, perceived financial risk, and ground the field of behavioral finance. This work suggests that mathematical practice has little direct relevance to how people think about mathematical situations. If human intuition is inconsistent with formal mathematics, this gives rise to the question of where formal mathematics comes from. The book Where Mathematics Comes From (George Lakoff, Rafael E. Núñez, 2000) is an accessible and controversial introduction to the subject. It culminates with a case study of Euler's identity; the authors argue that this identity reflects a cognitive structure peculiar to humans or to their close relatives, the hominids. See also • Brian Butterworth, 1999. What Counts: How Every Brain is Hardwired for Math. Free Press.
{"url":"http://www.reference.com/browse/Cognitive+science+of+mathematics","timestamp":"2014-04-16T11:14:53Z","content_type":null,"content_length":"81042","record_id":"<urn:uuid:a7b0048c-a450-4a71-9266-f8f0e3d8bcca>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
State of Matter's Volume Measurements Testicular Volume: Comparison of Orchidometer and US… The volume formula provides a superior estimate of testicular volume and should be used in clinical … Rivkees SA, Hall DA, Boepple PA, Crawford JD. Accuracy and reproducibility of clinical measures of testicular volume. State of Matter’s Volume Measurements The space occupied by a substance is called its volume. SI unit of volume is cubic meter (m 3). The solids whose volume can be calculated by measuring length, width, height, diameter etc. are called regular solids. Automatic measuring equipment for concrete volume changing amount It works as sealing measured concrete in plastic bag and placing it in the container filled with coloured liquid, making liquid column in glass tube up and down at time of measured concrete volume being varied, converting signal received by strip photocell … National Quality Measures Clearinghouse | Abdominal… Guide to inpatient quality indicators: quality of care in hospitals - volume, mortality, and utilization [version 3.1]. … Composite measures user guide for the inpatient quality indicators (IQI) [version 4.2]. Rockville (MD): Agency for Healthcare … Math Forum: Measurement:HS Dr. Math:Volume High School Level Questions on Volume from the Dr. Math Archives High School Dr. Math Questions on Measurement Below are some questions and answers from our archive of questions sent via e-mail by K12 students and teachers to Ask Dr. Math. Measurement of lung volume and ventilation distribution… The number of volume turnovers was calculated using the cumulative expired alveolar volume (V T minus dead space volume) 10. The mean dilution number (MDN) was calculated similarly to AMDN but the cumulative expired alveolar volume is not corrected … What is used to measure the volume of a liquid Volume is a measure of how much space can fit inside a container, like a flask or graduated cylinder. Liquids are measured in volume. You can also measure the volume displaced by a solid by measuring the volume in a container, then measuring… Metric units and Measurement For example, when specifying the height of a person 1.63 meters tall, to say that person is 1 or 2 meters tall doesn’t give us a very good idea of how tall that person really is. The prefixes for the different units of length, volume, and mass in the … Cooking ingredients conversion and equivalent volume… Dry and liquid volume capacity conversion for calculating equivalent amounts between various capacity and volume units. … This conversion calculator for cooking lets you instantly convert recipe ingredients from common unit measures into equivalents. Volume Quiz - Free Math worksheets, Free phonics… Quiz *Theme/Title: Volume * Description/Instructions The volume of an object is the amount of space it takes up. For example, an inflated balloon takes up more space than an empty balloon. That means that a balloon that is blown up has more volume than …
{"url":"http://mkarchives.tumblr.com/tagged/Measurement","timestamp":"2014-04-18T10:39:06Z","content_type":null,"content_length":"35290","record_id":"<urn:uuid:dac38745-d716-44c0-b8f0-b162974e145c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Schmidt-Newton telescope telescopeѲptics.net ▪ ▪ ▪ ▪ ▪▪▪▪ ▪ ▪ ▪ ▪ ▪ ▪ ▪ ▪ ▪ CONTENTS ◄ 10.2.2.2. Wright, Baker camera ▐ 10.2.2.4. Schmidt-Cassegrain telescope ► 10.2.2.3. Schmidt-Newton telescope The only difference in the optical layout of the Schmidt-Newtonian vs. Schmidt camera is in the position of corrector lens. In the telescope, the corrector is typically positioned at a distance somewhat inside the focus of the spherical primary (FIG. 173). The reduced mirror-to-corrector distance has practically no effect on the FIGURE 173: Schmidt-Newtonian telescope optical elements. Schmidt corrector cancels spherical aberration of the spherical primary. It also usually supports the diagonal mount, which eliminates spider diffraction effect. Due to displaced stop (normally coinciding with the corrector), coma, astigmatism and field curvature are all lower than in the comparable Newtonian with paraboloidal primary with the stop at the surface (while the benefit of lower astigmatism or field curvature can be achieved by manipulating stop position of a paraboloid as well, its coma error is independent of stop position). While the corrector is very forgiving to miscollimation, it does complicate adjusting the flat. needed corrector power to cancel spherical aberration of the mirror. Needed corrector shape is determined from Eq. 101 or Eq. 101.1 (higher-order aberration is negligible). However, as a consequence of the stop now being closer to the primary, portions of mirror's coma and astigmatism are reintroduced, which is evident on the ray spot plot. The P-V wavefront error of lower-order coma in the Schmidt-Newtonian for object at infinity is given by: Wc = (1-σ)αD/48F2 (110) with σ being the corrector-to-primary separation in units of the primary radius of curvature (σ is numerically positive), α the field angle in radians, D the aperture diameter and F the focal ratio. With α=h/ƒ, h being the linear height in the image plane, and ƒ the mirror focal length, it can be also expressed as Wc=(1-σ)h/48F3. It makes coma in the Schmidt-Newtonian lower by a factor of (1-σ) vs. paraboloidal Newtonian. The lower-order astigmatism P-V wavefront error is given by: Wa = (1-σ)2Dα2/8F (111) or, alternatively, Wa=-(1-σ)2h2/8DF3. In other words, it is by a factor (1-σ)2 lower than for a mirror with the stop at the surface. Change in astigmatism also changes the median (best) image surface. It is now given by: with the sagittal surface curvature radius 1/Rs=(2/R)-[2(1-σ)2/R], and the tangential surface curvature given by 1/Rt=(2/R)-[6(1-σ)2/R], R being the mirror radius of curvature. It makes median field curvature lower by a factor of [1-2(1-σ)2] than for a paraboloid with the stop at the surface. With the usual value for σ of ~0.45, the Schmidt-Newtonian coma is lower by a factor of ~0.55, astigmatism by a factor of ~0.3, and median field curvature by a factor of ~0.4 vs. comparable paraboloid with the stop at the surface. Of course, similar gains in the reduction of astigmatism and field curvature can be just as well obtained with a paraboloid, by moving the stop away from the Elements alignment in the Schmidt-Newtonian is somewhat more complicated than in the all-reflecting arrangement. The two mirrors and the focuser have to be aligned as in all-reflecting system, but all three also need to be aligned with the corrector. Decentered corrector will induce coma, as given by Eq. 109, while corrector tilt will induce tilted image surface. With the corrector commonly supporting diagonal mount, the limit to collimation accuracy is set by the accuracy of the corrector/diagonal alignment (unless correction is made at the focuser). On the other hand, better coma correction of Schmidt-Newtonian makes the tolerance for mirrors/focuser misalignment nearly twice more forgiving than in a comparable all-reflecting Newtonian. ◄ 10.2.2.2. Wright, Baker camera ▐ 10.2.2.4. Schmidt-Cassegrain telescope ► Home | Comments
{"url":"http://www.telescope-optics.net/SN.htm","timestamp":"2014-04-19T15:08:20Z","content_type":null,"content_length":"14147","record_id":"<urn:uuid:bf193b4b-5e07-4c97-af64-d7b15436f46b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Identifying Potential Clinical Syndromes of Hepatocellular Carcinoma Using PSO-Based Hierarchical Feature Selection Algorithm BioMed Research International Volume 2014 (2014), Article ID 127572, 12 pages Research Article Identifying Potential Clinical Syndromes of Hepatocellular Carcinoma Using PSO-Based Hierarchical Feature Selection Algorithm ^1School of Electronics and Information Engineering, Tongji University, Shanghai 201804, China ^2The Advanced Research Institute of Intelligent Sensing Network, Tongji University, Shanghai 201804, China ^3The Key Laboratory of Embedded System and Service Computing, Tongji University, Ministry of Education, Shanghai 201804, China Received 17 December 2013; Revised 7 February 2014; Accepted 10 February 2014; Published 17 March 2014 Academic Editor: Jose C. Nacher Copyright © 2014 Zhiwei Ji and Bing Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, thus often miss the best therapeutic opportunities. Traditional Chinese Medicine (TCM) plays an active role in diagnosis and treatment of HCC. In this paper, we proposed a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for diagnosis of HCC. Firstly, the hierarchical feature representation is developed by a three-layer tree. The clinical symptoms and positive score of patient are leaf nodes and root in the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Secondly, an improved PSO-based algorithm is applied in a new reduced feature space to search an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes which obviously improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships both at symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC. 1. Introduction Hepatocellular carcinoma (HCC) is the third most common cause of cancer-related death worldwide and the leading cause of death in patients with cirrhosis [1, 2]. In clinical practice, symptoms attributable to HCC are usually absent, so the majority of patients are diagnosed with advanced disease, often precluding potentially curative therapies. This has resulted, in part, in a 5-year overall survival rate of 12% and a median survival following diagnosis ranging from 6 to 20 months [3, 4]. Therefore, timely and accurate diagnosis is very important for treatment of HCC. Currently, the modalities employed in the diagnosis of HCC mainly include cross-sectional imaging, biopsy, and serum AFP, which depend on both the size of the lesion and underlying liver function, and some of them are controversial [5, 6]. Traditional Chinese Medicine (TCM) is one of the most popular complementary and alternative medicine modalities. It plays an active role in diagnosis and treatment of HCC in Chinese and East some Asian countries [7, 8]. Different from other diagnostic methods, it is possible to accurately diagnose HCC using inspection, auscultation and olfaction, inquiry, and pulse taking and palpation [8]. In this study, we will work on a TCM clinical dataset, which is observed from 120 HCC patients. Each patient is observed on 147 clinical symptoms and a positive score is evaluated to indicate total positive strength of symptoms. Based on this TCM dataset, we could achieve two aims: (1) screening the potential clinical syndromes for this cancer and (2) inferring the relationships among the potential clinical features via Bayesian network analysis. However, the computational cost will be exceedingly high if the dimensions of the raw dataset are large. Furthermore, the causal relationships between all the features are difficult to infer because high dimensional data sharply increases the complexity of Bayesian network structure learning [9]. In this study, a particle swarm optimization-based hierarchical feature selection (PSOHFS) model was proposed to select potential clinical syndromes for HCC diagnoses. Firstly, all the 147 original symptoms were arranged into 27 groups according to the categories of clinical observations, and 27 new syndrome features were generated from these groups. Then, the hierarchical feature representation was built with a tree structure, in which different layers indicate different scales of clinical information (Figure 1). Secondly, an improved PSO algorithm was employed at the syndrome level to search an optimal syndrome subset for diagnoses. The experiment shows that 24 novel syndromes searched by PSOHFS could improve accuracy of diagnosis. In addition, Bayesian networks were further constructed at two levels: (1) a global network on the middle-layer features revealed the relationships among 24 potential syndromes; (2) the local networks were used to represent the connections of symptoms in the same groups. The rest of the paper is organized as follows. Section 2 introduces the details about the experimental data and the feature selection approach. Sections 3 and 4 present the experiment design and results, respectively. Some important conclusions drawn are presented in Section 5. 2. Materials and Methods 2.1. Experimental Data In this study, the raw data was observed from 120 HCC patients. The clinical dataset includes 300 samples and 147 clinical symptoms. The levels of positive of each symptom are quantified with nonnegative integers. The larger value indicates stronger positive symptom occurred. There are two types of data range for all the original symptoms: binary or integer. For example, the symptom “lip color is white” is binary (0 or 1); that means there are two possible states for this symptom: occurrence or nonoccurrence. Another example is “abdominal pain”; its data range is 0, 1, 2, and 3. The symptom is not positive if its value equals zero; otherwise, the larger the value is, the stronger positive symptom will be. In addition, each patient is marked with a score (nonnegative value) to represent the total evaluation of positive symptoms on this patient. It is obvious that if the HCC patients have larger positive scores than normal people, it is because some clinical symptoms 2.2. Feature Selection Feature selection for classification or regression can be widely organized into three categories, depending on how they interact with the construction of model. Filter methods employ a criterion to evaluate each feature individually and is independent of the model [10]. Among them, feature ranking is a common method which involves ranking all the features based on a certain measurement and selecting a feature subset which contains high-ranked features [11]. Wrapper methods involve combinatorial searches through the feature space, guided by the predicting performance of a classification or regression model [12]. Embedded methods perform feature selection in the process of training a model [13]. 2.3. Hierarchical Feature Selection When the raw dataset is high dimensional, the complexity of feature selection may be extremely high: (a) the computational cost will sharply increase, particularly for the wrapper and embedded methods; (b) the potential optimal feature subset may include many irrelevant or redundant features. Therefore, it is necessary to preliminarily reduce the dimension of original feature set before feature selection. As a common preselecting strategy, feature ranking-based approach could quickly reduce the feature space by picking up high-ranked features [14]. However, this type of approach always leads to inclusion of some redundant features. In addition, the optimal feature subset which covers high-ranked features may not provide the best performance in the classification (or regression) model. Ruvolo et al. proposed a novel hierarchical feature selection approach for the audio classification by converting the raw data to three-layer feature representation with a tree structure [15]. All the low-layer features are aggregated into several groups in a “bag of features” manner, and then a higher-layer feature is extracted based on the lower-layer features in the same group. Obviously, the high-layer feature set constitutes a reduced feature space with little redundancy and might provide lower computational cost for classification or regression model. In this study, our raw TCM data is high dimensional and there are some redundant clinical symptom features included. For example, there are four redundant observed features to describe lip color of patients, such as “lip color is pale,” “lip color is red,” “lip color is pink,” and “lip color is dark purple.” Therefore, we aggregate several features into a group if they describe the same category of clinical symptoms or the same part of body and define a new syndrome feature for each symptom group. After extracting all the syndrome features, we build a tree structure to achieve the hierarchical feature representation (Figure 1). In this hierarchical structure, the bottom-layer nodes (leaf nodes) are the original clinical symptom features which are directly collected from the original TCM clinical dataset. And a middle-layer syndrome feature is defined on a group of symptoms which are related to the same part of the body. If the symptoms in the same group are not mutually exclusive (concurrent), the corresponding syndrome is defined as the sum of all these symptoms; otherwise, the level of positivity of the syndrome is based on the frequency of each symptom in all the patients (see Section 2). The top-layer node is the root of the tree, which denotes the positive score of a patient. It is obvious that each syndrome roughly represents the positive strength of one specific aspect or part of body, while symptom provides much more detailed information. Particularly, our study focuses on how to reasonably extract the syndrome features to generate a reduced feature set for feature selection and infer the causal relationships among these two-layer features. 2.4. Particle Swarm Optimization-Based Hierarchical Feature Selection (PSOHFS) Based on the hierarchical feature representation, the dimension of the processed TCM dataset is sharply reduced on the syndrome level. We designed a chaotic binary particle swarm optimization (CBPSO) algorithm to search potential syndromes for diagnosing efficiently. The flow chart of proposed CBPSO-based feature selection is shown in Figure 2. Particle swarm optimization (PSO) is a population-based random optimization algorithm [16]. A swarm consists of particles moving around in a -dimensional search space. The position of the th particle is represented as , and the velocity , where . The positions and velocities of particles are confined within and , respectively. Each particle coexists and evolves simultaneously based on knowledge shared with neighboring particles; it makes use of its own memory and knowledge gained by the swarm as a whole to find the best solution. The best previously encountered position of the th particle is considered as its individual best position . The best position of all the is considered as the global best position . The limitation of the standard PSO algorithm is applied to optimize the problems in continuous space. However, many optimization problems occur in a discrete feature space; thus binary PSO (BPSO) was proposed to combinatorial optimization [17]. In BPSO, each particle is presented as a binary vector, thus, the overall velocity of particle may be described by the number of bits changed per iteration. Generally, each particle is updated as the following equations: Equation (1) will be used to update the velocities and positions of each particle in each generation. The inertia weight controls the impact of the previous velocity of a particle on its current one. and are random numbers between ; and are acceleration constants that control how far a particle moves in a single generation. Velocities and denote the th velocities of the th particle in the current and the last generations, respectively. and indicate corresponding positions on the th dimension, respectively. In our case, , . Generally, the speed of convergence of BPSO is fast; however, it has high risk of converging to local optimum. Because chaos is a complex behavior of a nonlinear deterministic system which has ergodic and stochastic properties, we combine chaos theory with BPSO to design chaotic binary particle swarm optimization (CBPSO), which potentially promotes the convergence performance of BPSO [18]. CBPSO-based feature selection is introduced in the following steps (Figure 2). (1) Chaotic Initialization of Particle Swarm. When CBPSO is used for feature selection, each particle indicates a candidate feature subset. Given an original feature set , each particle is denoted by , where is the number of features. It is obvious that each particle represents a candidate feature subset. If equals 1 indicates the th feature is selected; otherwise, is not selected. The performance of convergence about BPSO largely depends on initial particle swarm. The chaotic initialization via globally searching combined the ergodic and stochastic property of chaotic system is often has a better quality than random initialization. The common chaotic model is logistic model; it can be shown as follows: Equation (2) indicates a dynamical system, where is a control parameter. Given the value of , a time series is generated from a random initial value , which ranges from 0 to 1. When equals 4, there is no stable solution for the dynamic system. It appears as a complete chaotic state. Now, an initial random vector is generated. We substitute each element of into (2) orderly and iterate times, respectively, and then obtain chaotic variables , which have different locus. When is substituted into (3), we get binary vectors , where the binary vector represents a particle (): At last, we select top binary vectors to constitute initial particle swarm based on the fitness values. For fully traversal of chaotic variable, the iteration of chaotic series is always large (here, , ). (2) Fitness Calculation Based on LSSVR. Support vector machine (SVM) has excellent capabilities in classification (SVC) or regression (SVR), even for small sample [19]. It minimizes an upper bound of the generalization error based on the principle of structure risk minimize. However, SVM training process will be time consuming if dataset is huge. Therefore, least squares support vector machine (LSSVM) is proposed to overcome the shortcoming of high computational cost [20]. Generally, LSSVM can be categorized into LSSVR which is used for regression and LSSVC for classification. Because the problem-solving process of the SVR is a QP problem, which will inevitably cause a high computational complexity especially for large-scale QP problem, LSSVR can overcome these shortcomings by a set of linear equations and squared loss function which lead to important reduction in computational complexity [21]. In this study, we use LSSVR as a regression model to evaluate the predicting performance of each candidate feature subset. We assume that an optimal feature subset not only has excellent performance of prediction but also contains more relevant features and less irrelevant features. The fitness function is defined in denotes a particle-coding binary vector which indicates a candidate feature subset. The function calculates the predicting error of LSSVR model based on the selected features in . The parameter is a weight between 0 and 1. Function indicates the correlation measure between a feature subset and the target variable. In (5), the function measures the relevance between feature (included in ) and target value via a feature-ranking strategy. In our experiment, the more predictive features have smaller values of (see experiment in Section 3.2). Therefore, the smaller fitness value corresponds to the better candidate feature subset: (3) Update the Velocity and Position for Each Particle. The velocity and position of each particle are updated according to (1). Considering the searching performance of CBPSO is affected largely by inertia weight (), the value of is dynamically updated in our CBPSO by using nonlinear decreasing strategy. Its calculation is as follows: In (6), is the number of iterations, is the current iteration, and is a constant (set ). and , respectively, are the values of on the initial and last generation (). In our case, , . The performance of global search of CBPSO is increased using larger at the beginning of iteration, and the local search will be enhanced using smaller at the later stage. (4) Reinitialization of Particle Swarm with Probability. The trajectory of particle is largely affected by and all the . At the beginning of iteration, the convergence rate of swarm is fast, but it is slow at the later stage which has high risk of converging to local optimum. For overcoming this shortcoming, each particle in each generation is reinitialized with small probability (Figure 3). In Figure 3, is the probability of reinitialization for current particle swarm, with its calculation based on (7). At the early stage of iteration, there are many chances for particles to approximate the optimal solution, so that the probability of reinitialization for whole swarm is small. In the later stage, the probability of reinitialization is increased, it can largely avoid the particles fall into the local optimum. The parameter denotes the current iteration, and is a small random probability (in our case, ). When the better particle is found after reinitialization, update the current and : (5) Mutation of the Potential Global Optimal Solution. If the global optimal particle is not constantly improved for a long time, it is necessary to make variation for it to jump out from the local optimal point. In our case, when is invariant in 10 iterations, its binary coding vector will be mutated with a random probability. If a better particle is found, is updated again. (6) Elitist Strategy Is Used in the Later Stage of Iteration. If step (4) could not obviously improve the further, a number of new particles are generated with a probability to instead some particles in current swarm so that the diversity of current swarm could be enhanced [22]. 3. Experiment 3.1. Data Preprocessing For hierarchical representation of clinical symptoms, our raw dataset should be preprocessed as in the following steps. Firstly, we manually divide all the 147 symptoms into 27 groups according to the categories of symptoms (Table SS in Supplementary Material available online at http://dx.doi.org/10.1155/2014/127572). Figure 4(a) shows an example of four clinical symptoms (pale, red, pink, and dark purple) being arranged to a group called “lip color.” Hence, a syndrome feature “lip color” simply represents the states of lip color for a patient instead of four redundant symptom features. Secondly, we calculate each syndrome feature which is extracted from the corresponding clinical symptom group. Therefore, we obtain a new reduced feature space at syndrome level. Finally, combining the original symptom features, extracted syndrome features, and the positive score, we build a tree structure for hierarchical feature representation of the TCM clinical data. Two typical examples are given regarding how to extract the syndrome features from the group of symptoms. Example 1. Figure 4(a) shows an example of several symptoms in the same group being mutually exclusive. That means if the lip color of a patient is red, the rest of the three colors will not appear with him/her. We name a new feature with five possible discrete values () to simplistically represent the combined meaning of four original symptoms. According to Figure 4(a), the states of lip color for a patient are presented with a binary vector (length is four) in original TCM data, while we can represent it with a single value , where . If equals zero, that means all four symptoms are not positive. Otherwise, one of the symptoms appears positive. As for the mapping between four symptoms and four discrete values (1, 2, 3, and 4), we follow a simple rule to assign each candidate value to a possible level of this symptom: the larger discrete value of indicates that much more patients are positive on this clinical symptom. We count the statistic distributions of all the samples on these four symptoms, respectively, and map each discrete value to a symptom of lip color according to the mean value of positive scores on each symptom. Example 2. The symptoms in the same group are not mutually exclusive. Figure 4(b) shows three clinical symptoms of emotion: irritability, depression, and sigh. These symptoms could be positive simultaneously on a patient. For example, the clinical symptoms of emotion for a patient are denoted by a vector in original data, which means two emotion-related positive symptoms appeared with him/ her. In this case, a new syndrome feature is extracted from , where . Therefore, if a patient has several positive symptoms which belong to the same syndrome, cumulative summation is a feasible strategy to get a total positive strength on this syndrome. 3.2. Experiment Design First, we proposed a feature-ranking strategy for association analysis between individual syndrome and positive score (target value) with function : Combining ((4)-(5), (8)-(9)), we can determine the fitness function in the proposed PSOHFS model for feature subset optimizing. The function is the correlation coefficient between feature and target value (). Function denotes the predicting error of LSSVR model with all the features except . If the predicting error is obviously increased after moving out from the whole feature set, it indicates the feature is high predictive. The smaller value of , the higher-ranked feature will be. The result of feature ranking can provide a reference about the importance of each syndrome to positive score. Secondly, our developed CBPSO algorithm was applied at the syndrome level for feature selection. Different swarm size and the number of iterations were chosen to test the searching performance of the proposed CBPSO. And then, the predicting performance of the optimal syndrome subset (OPS) by proposed model was further validated. On the one hand, we employed two well-established feature selection methods to compare them with our proposed PSOHFS model: (1) correlation-based filter method (CFM) [14, 23] and (2) PSO-based wrapper method (PWM) [14]. These standard approaches were applied on original symptom features. On the other hand, we further validated the performance of OPS by feature ranking on the syndrome feature level. Two types of syndrome subsets were selected to compare: (1) full collection with all the 27 syndromes (FCS) and (2) filter-based syndrome set by feature ranking via (8). Here, we set threshold 0.8 and 0.9 to get two potential syndrome subsets: FRS1 and FRS2. Finally, based on the optimal potential syndrome subset inferred by our PSOHFS model, Bayesian networks were constructed, respectively, at the symptom and syndrome levels. On the one hand, the global Bayesian network on potential syndromes was inferred using GES algorithm [24]. Such coarser-grained network can roughly reveal the causal relationships among these potential syndromes of this cancer. Before structure learning of global network, the processed TCM dataset (TD) in Section 3.1 should be firstly discretized according to denotes all the calculated values of th syndrome. Function is used to estimate the optimal intervals of discretization for the sample of th syndrome. If the number of positive levels for a syndrome is larger than four, the discretization is necessary on this syndrome. On the other hand, we chose three syndromes as examples to construct local networks using GES algorithm (Table 4). When a network structure is learned, Maximum Likelihood Estimation (MLE) is utilized to compute all the conditional probability tables. Then, the probability inference could be achieved using inference algorithm, such as junction tree method [25, 26]. 3.3. Experimental Parameters The simulating experiments were implemented under the environment of MATLAB2011a with Intel Core i5-2410 CPU @ 2.3GHZ, 4GB RAM. In the LSSVR regression model, Gaussian RBF kernel is employed, and the kernel parameters and should be determined firstly. Currently, many approaches have been applied in parameter optimization of LSSVR, such as grid search [27], cross-validation [28, 29], genetic algorithm (GA) [30], and simulated annealing algorithm [31]. In our study, grid search was selected to determine the parameters in the range of [0.1, 100000] for and [0.1, 10000] for . For a pairwise (, ), we used 10-fold cross-validation to evaluate the performance of LSSVR model. To evaluate the accuracy of prediction, three statistical metrics are widely employed: (1) mean square error (MSE), (2) root mean square error (RMSE), and (3) mean relative percentage error (MRPE). In (11), where and are the observed value and predicted value, the smaller MSE, RMSE, and MRPE are, the better the LSSVR model will be: In our experiment, we used to calculate the values of function and . Moreover, the Matlab Bayes Net Toolbox FullBNT-1.0.7 [32] and BNT Structure Learning Package BNT_SLP_1.5 were, respectively, used in the Bayesian network structure learning, parameters learning, and probability inference. The probability distribution between nodes in a Bayesian network could be computed according to the inferred network structure and conditional probability tables. 4. Results and Discussion Table 1 shows the results of association analysis between individual syndromes and positive score. reflects the predicting performance of feature to (positive score). The smaller the value of is, the more important the feature will be. The value of indicates predicting error of LSSVR model based on all the features except ; it is measured by . Here, it is obvious that the higher-ranked features have lower values of . We clearly see some important syndromes are high predictive, such as “facial features,” “skin of the limbs,” “diet,” “sternocostal and abdominal pain,” and so forth. Our developed CBPSO algorithm was applied to search the optimal syndrome subset on the processed TCM dataset. Assigning different swarm size and the number of iterations, this CBPSO algorithm shows excellent convergence performance (Figure 5). Different assignments of parameters for CBPSO finally got the same optimal solution: 001101111111111111111111111. It means the potential syndrome subset containing 24 syndromes is a steady solution for this NP-hard problem (Table 2). These 24 syndromes reflect many cancer-related parts of body or aspects of observation, which are helpful to clinically diagnose HCC. Now, two well-established feature selection methods were introduced to be compared with our proposed PSOHFS model: (1) correlation-based filter method (CFM) [14, 23] and (2) PSO-based wrapper method (PWM) [14]. The first one uses correlation-based feature ranking as the principle criteria for feature selection by ordering. The second one uses standard BPSO algorithm to search an optimal feature subset. These two methods were all applied on the original symptom features. For CFM, we used 15% and 30% top-ranked features to validate its performance, while, for PWM, we set population size equal to 100 and iterations equal to 100 and 200. Table 3 shows the error of prediction of the LSSVR model based on candidate optimal feature subsets. Five candidate feature subsets were searched by the above two methods and PSOHFS model, respectively. In Table 3, the values of MSE, MRSE, and MRPE were calculated based on LSSVR by 5-fold cross-validation. Comparing the values of MSE, RMSE, and MRPE in Table 3, we can see that the optimal syndrome set (OPS) searched by our PSOHFS model has the obvious superiority in the predicting performance. The dimension of the PSOHFS-based optimal syndrome subset equals 24, which is significantly smaller relatively to the dimension of the original symptoms (147). Because CFM and PWM work directly on the original high dimensional feature space, it is hard for them to achieve an optimized prediction performance and the dimension of potential feature subset, simultaneously. PWM searches for the optimal solution depending on the evaluation of regression model, so the optimal feature subset from PWM is more predictive than CFM’s. However, standard wrapper-based methods do not optimize the size of optimal feature subset. CFM got the worst result is reasonable because the correlation measurement can only detect linear dependencies between variable and target. Next, we further validate the performance of OPS on the syndrome level. Two types of syndrome subsets were selected to compare: (1) full collection with all the 27 syndromes (FCS) and (2) filter-based syndrome subset by feature ranking via (8). Here, we chose threshold 0.8 and 0.9 to get two potential syndrome subsets: FRS1 and FRS2 (Table 1). In Table 4, we obviously find OPS can get good balance between the dimension and predicting performance. The verification on FRS1 and FRS2 proves the fact that, although feature-ranking methods run quickly, they still easily lead to worse results because feature-ranking filter ignores the possible interactions and dependences among the features [29]. The difference between Tables 3 and 4 indicates the feature selection on a reduced feature space of original dataset potentially obtains a better solution. 24 potential syndrome features could quickly diagnose the positive level of HCC patients with high accuracy. Our result suggested that “lip color,” “tongue color,” and “coated tongue color” could be ignored during the process of prediction because they are weak predictive features for discriminating these HCC samples. Finally, based on the hierarchical feature representation and the result of feature selection on syndromes, Bayesian network on two layers was constructed and the conditional probability tables were inferred. Here, we picked up three cases to explain what we can obtain from the Bayesian network analysis in the symptom and syndrome feature space (Table 5). Figure 6(a) shows the Bayesian network structure of “emotion” syndrome. We can clearly see that there is a causal relationship between “depression” and “sigh.” When a patient is depressive, sigh is a usual symptom with him/her. While “irritability” seems to reflect inversely comparing to “depression”; therefore it is an independent node in this inferred network structure. The conditional probability tables of “emotion” are shown as in Supplementary Table S1A-S1C. For example, P(“irritability” , “depression” , “sigh” ) suggests the probability of the clinical symptoms “depression” and “sigh” is positive on a patient. Figure 6 (b) shows the network structure of “cardiothoracic condition” syndrome. From Figure 6(b), “tightness in the chest” might lead to three other clinical symptoms: “shortness of breath,” “palpitations,” and “pain in chest.” The conditional probability tables of “cardiothoracic condition” are shown in Supplementary Table S2A-S2D. For example, P(“tightness in the chest” , “shortness of breath” , “palpitations” , “pain in chest” ) . Similarly, Figure 6(c) shows the network structure of “diet” syndrome. The conditional probability tables of “diet” are shown in Supplementary Table S3A-S3G. At last, Figure 7 represents the global network on 24 potential syndromes. There are three subnetwork modules and six independent nodes in Figure 7. All the relationships among these syndromes were represented. Their conditional probability tables were listed in Supplementary Table SS1-SS24. Based on the hierarchical feature representation, the Bayesian networks potentially provided us with useful knowledge with multi-granularity. From Table 6, we can clearly see that the computational cost of network structure learning is sharply increased when the number of nodes in the network is increasing. It further proves that if we construct Bayesian network on 147 original clinical symptoms directly, it will meet unimaginable computational complex; therefore, our method proposed in this paper provided a good solution. 5. Conclusions In this paper, a particle swarm optimization-based hierarchical feature selection (PSOHFS) model was proposed to infer potential clinical features of HCC on a Traditional Chinese Medicine dataset which was collected from 120 patients. The PSOHFS model firstly arranged all the 147 original symptoms into 27 groups according to the categories of clinical symptoms and extracted a new syndrome feature from each group. The raw TCM clinical dataset was represented in a reduced feature space so that we can build a hierarchical feature representation pattern with a tree structure. Based on such hierarchical feature graph, we reached two aims: (1) based on a significant reduced feature space, the feature selection can be easily realized, and the optimal feature subset could diagnose patient samples efficiently; (2) we constructed Bayesian network on symptom and syndrome levels. A global Bayesian network for all the potential syndromes roughly described the relationships among the main important aspects of HCC. While each local network was constructed for the symptom features in the same group, the causal relationships among them could be inferred. In our simulating experiment, our CBPSO algorithm in PSOHFS model discovered an optimal syndrome subset of HCC, which included 24 syndromes. With a LSSVR regression model built by these 24 potential syndromes, the diagnosis accuracy of HCC is high and computational cost is sharply reduced. The significance of the proposed model is as follows: (1) feature selection is implemented on a reduced feature space, so that the dimension of optimal feature subset is smaller; (2) the fitness function in CBPSO algorithm optimizes the predicting performance and the correlation between features and target variable. Based on the results of feature selection, we further achieved the Bayesian network construction at both syndrome and symptom levels to explain the relationships among all the nodes and the probability inference could be computed based on learned network structure and conditional probability tables. However, our model also has some shortcomings: (1) most of syndrome groups were aggregated from the clinical symptoms observed from the same parts of body, while much more evidence proved that there are significant relationships between symptoms which describe different parts (aspects) of body; (2) we did not study the relationships of clinical symptom features which belong to different groups. In the future, we will collect more clinical samples of HCC to deeply analyze the correlation between any clinical features. Also, some high-predictive clinical features inferred in this study need to be validated further in other TCM clinical datasets. If we can discover and validate some high-predictive clinical features in the next step of research, that might be the significant phenotypes of this cancer. Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper. This work was supported by the National Science Foundation of China (nos. 61272269 and 61133010). The data in this work was collected by the Changhai Hospital in Shanghai, China. The authors give special thanks to Professor X. Q. Yue for his work in data preprocessing.
{"url":"http://www.hindawi.com/journals/bmri/2014/127572/","timestamp":"2014-04-17T10:47:25Z","content_type":null,"content_length":"269455","record_id":"<urn:uuid:457dbae8-b008-42a9-a667-acbe5932513b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Why particles can't be octopi A reader has pointed out that the idea that particles are different octopi is becoming popular. Apparently, almost no one knows why an electron can't be either a mammal or at least a reptile. ;-) There are several basic defining properties that distinguish elementary fermions - quarks and leptons - from other objects such as dogs: • they are quantum mechanical entities that are able to interfere in double-slit experiments; they are indistinguishable • these particles satisfy the rules of special relativity and the relativistic dispersion relations • they carry spin and are created by quantum fields with a spinor index • quarks carry color and they transform as a representation of a gauge group • consequently, they interact through interactions mediated by Yang-Mills gauge fields These features are no details that can be ignored: instead, they are features that distinguish elementary particles from other objects, such as supermarkets, weapons of mass destruction, or extraterrestrial aliens. These features underlie our description of everything we know about the particles - as encoded in quantum field theory. Elementary particles can't be dogs or octopi because dogs and octopi violate every single property listed above. If someone violates all these requirements, it means that he has made no (0) progress in describing these elementary particles. Let us see why the octopi violate these requirements one by one. Dogs and octopi can't realistically generate interference patterns in double-slit experiments. It is because they carry a huge entropy: a dog can be represented by many different microstates and these microstates don't interfere with each other. It is absolutely necessary for the quantum state describing an electron to be unique - determined by the position (or momentum) and spin only - in order for the electron to be able to interfere. Elementary particles of the same kind must be indistinguishable and the fermions such as electrons must moreover obey the Pauli principle; otherwise chemistry could not work. Mammals violate these rules because their complicated internal structure makes them distinguishable. A necessary condition for this uniqueness that is required for the existence of interference is the uniqueness of the vacuum itself. Unusual theories that imagine a chaotic structure at every point of "empty" space violate this rule maximally. A double-slit experiment would never create an interference pattern if vacuum had a large degeneracy, as envisioned by theories of "discrete gravity" and similar hopeless approaches to physics. The vacuum must be a completely pure, unique state, and the one-electron states must also be pure, well-defined states in the Hilbert space that are described by the position/momentum and the spin only. No chaos is acceptable for particle physics. Some people, especially cranks, imagine that the deepest idea that they can dedicate to the world of physics is to make things complex and to give the particles complicated internal structure. But the experiments offer clean outcomes and require just the opposite: the vacuum and the particles must be very clean, unique, and if there is any internal structure, there must exist reasons why the structure is always found in the same state. The difficult task for those who probe physics beyond quantum field theory is not how to make things complex, foggy, and chaotic: it is on the contrary how to make things consistent, pure, well-defined, and consistent with the very sharp and nearly accurate laws of Nature that we have already understood, experimentally verified, and summarized in the Standard Model and General A related point is that the elementary particles can't be identified with distortions of some "underlying structure" that is not unique because such a picture would violate the fact that the elementary particles follow the rules of special relativity. Elementary particles can't be excitations of an aether - a discredited notion from the 19th century physics that some people are trying to revive under the name "spin network" - because such an aether would break the Lorentz invariance. We know from experiments that the Lorentz invariance is either exact or an extremely accurate All people who say that the Lorentz invariance is something that can be ignored or something that is cheap to obtain are crackpots because special relativity is one of five most important and five most constraining discoveries of the 20th century physics. Octopi that swim in the ocean do not respect the rules of special relativity (unless you include the details of the ocean into your description): water spontaneously breaks this symmetry. Everything else that resembles a complex object on a generic complex background is all but guaranteed to do the same thing. Particle physics follows very different rules from the ocean. Elementary fermions are excitations of spin-1/2 fields in spacetime - quantum fields that transform as a spinor under the Lorentz group. Again, this is no detail. It is a very defining feature of the leptons and quarks. Octopi and dogs don't transform as a spinor. A naive classical picture of octopi can't be compatible with the spinorial gauge theory of an electron. Note that the correct spin of particles can be extracted from string theory in all of its realizations. For example, particles can be viewed as excited strings and the excitations themselves transform as the appropriate representation of the Lorentz group. It is because the excitations of the "minimal energy string" are operators themselves and they naturally transform as representations of various groups. A random octopus embedded in a complex environment won't transform as a representation of the Lorentz group - this group will be broken. Quarks also transform as the three-dimensional representation of a group, namely the colorful SU(3) group. The word "group" really means a "symmetry". A symmetry is a set of transformations that transform one object onto another object. It is something that shows that these two or more objects related by symmetries actually have identical physical properties. If you draw three octopi that differ in details, they can't have the same physical properties, and consequently they cannot form a representation of a group. If three "colors" of quarks don't have physically indistinguishable properties, you won't ever be able to find an SU(3) theory that creates consistent forces between them. They will never have the right interaction terms with the gauge fields, even if you believe that such a gauge field could also be found. Strings are the most complicated objects that can behave as elementary particles with the right properties. Branes of other dimensions are in principle capable to do the same thing. But it is hard to quantize branes of wrong dimensions directly - we know how to define quantum theories describing internal dynamics of branes using open strings stretched between them: a brane on which an open string ends is called a D-brane. Generic animals can't play the role of elementary particles because of all the reasons above. Any paradigm that is meant to be treated seriously by theoretical physicists must explain why it reproduces the same kind of Lagrangians - or equations of motion - that we know from the Standard Model and/or General Relativity: quantum fields with Lorentz or spinor indices, color indices, and their right products in the Lagrangian. This task is very nontrivial which is why string theory is the only known way (and probably the only mathematically possible way) how to describe observed physics by something else than point-like particles or by quantum fields defined at each point of a spacetime. Naomi Oreskes and Christopher Monckton and co2 temperature correlation The Canadian conservative government of Stephen Harper has made one of the obvious rational decisions: ... Massachusetts v. EPA ... they have erased the Kyoto protocol from their federal budget and slashed funding for the greenhouse gas programs, bringing their country closer to the rest of the civilized North America. I always believed that they would be trying to act in this direction. Many people including Steve McIntyre did not believe me. The Tories care about the environment but they also care about common sense. They prefer solutions that make sense and have a high enough chance to work. They will introduce tax breaks to support the public transportation, among other things. Alberta in particular is rather happy and wants to Note that according to Kyoto, Canadians would eventually have to pay about $600 per family for carbon dioxide credits. It's not a devastating amount but still, it is silly to throw money away for such entirely useless things and it is even sillier to torture yourself even if you know that you will have to pay anyway. When the Tories planned to return rational thinking to the environmental policymaking, they had to think about possible criticism. But I guess that there exists no real threat to be afraid of. Who are the critics? They're people like Al Gore. If you have forgotten, Al Gore is a megalomanic eco-prophet who has had plans to control the entire territory of the United States of America six years ago. Now he tries to promote his movie full of convenient untrue statements (they're convenient for the producers' pockets): The movie argues that the planet will face a catastrophe in 10 years (...) unless the instructions of the prophet and narrator Gore will be followed. Moreover, Gore is now explaining that climate change is no longer a political issue: it has become a spiritual (religious) issue! Wow. The debate is over and a new era of crucification of those who find the prophet intellectually challenged is getting started. When read the "made-in-Canada" quotes from Environment Minister Rona Ambrose, Gore rolled his eyes and made a flag-waving gesture with his hand. During his stay in Toronto, Gore also claimed that the government had no mandate to make any decisions about the environment. I guess that Gore believes that he has a mandate himself - a direct mandate from the God of climate change. ;-) David Goss has pointed out an article of Nicolae Nicorovici and Graeme Milton consider objects called "superlenses" that hypothetically create appropriate electromagnetic resonance that cancels light reflected from a seed of dust - and, maybe, from a spaceship in the future - rendering the seed invisible. ;-) Alternatively, you can say that the superlens forces the seed of dust to emit no light. Sounds intriguing and bizarre. Glenn Reynolds has suggested a solution of oil problems that should not be unexpected from an instant right-wing pundit: • Of course, if we seized the Saudi and Iranian oil fields and ran the pumps full speed, oil prices would plummet, dictators would be broke, and poor nations would benefit from cheap energy. But we'd be called imperialist oppressors, then. Sean Carroll disagrees. He thinks that Reynolds has squeezed five units of wrongness into four statements. Sean predicts that Reynolds' blog will collapse into a black hole. I happen to think that Sean's prediction is a flawed prediction and Reynolds is closer to the truth than Carroll. Let's analyze the statements one by one. Prices would plummet Sean thinks that they won't plummet because the oil fields are essentially running at full capacity. Sean has a naive idea about the driving forces behind these prices. In 2002, the oil price was $18 instead of $70. Does it mean that the oil fields were running at a much-higher-than-full capacity? The oil price is a very volatile quantity that sensitively responds to many different factors. The consumers are ready to pay higher prices because they feel that oil is something valuable that can cease to be available tomorrow. OPEC's statements have a dramatic impact on the price. If there were real competition, the prices could drop. Of course, the conflicts started by September 2001 did not really move the oil industry in this right direction. Dictators would be broke Sean thinks that dictators are rich even without oil. In principle, dictators can be rich even without natural resources, but it is naive to claim that oil does not make them richer. In Reynolds's article, many figures are listed that show how the regimes of oil-rich countries financially benefit from the higher prices. Poor nations would benefit Sean argues that oil price is more important for rich countries because they consume most of oil. That's completely unrealistic, much like the previous points. Rich nations and people may consume more oil, but oil is still a smaller percentage of the money that they must spend and a change of the price has smaller consequences. Poor countries are affected by higher prices more than the rich ones. This is why many officials have proposed an IMF-backed fund to help the poor countries hit by oil price volatility. See, for example, BBC or Sinha's calls. We would be called imperialist oppressors I think it is obvious that even if a fuller control by U.S. capitalism led to a smaller influence of dictators, lower prices, and stronger growth of the economies, especially the poor ones, the U.S. would be blamed as an imperialist oppressor. Even Sean Carroll agrees that it is the case. But he disagrees that it would be inappropriate to blame the U.S. for such changes. Well, if the governments and political systems impose things such as affirmative action, stifling political correctness, nationalization of corporations, huge redistribution plans, far left-wing blogs offer their support. If someone thinks about government plans that would actually make things better, not worse, and cheaper, not more expensive, far left-wing bloggers complain about imperialist oppression. The far left-wing approach is counterproductive for everyone who actually wants to live in a better world. And that's the memo. We have some good news for those people who complain that the physics mafia does not allow the fans of alternative physics to submit their work: the arXiv is now apparently fully open to crackpots. Tonight, there are at least four crackpot papers on gr-qc and hep-th. On gr-qc, an author from Chicago with a dot-com address derives the masses of all elementary particles. His groundbreaking idea is based on Kaluza-Klein theory but he only cites Dirac and Georgi+Glashow. The physicist calculates the masses of all elementary fermions using a simple square-root formula. Because the results disagree, among many other things, with the known properties of the s,c,b,t quarks, the author predicts that these quarks probably don't exist. On hep-th, there are two papers about an unusual mechanism to generate masses for non-Abelian gauge bosons. One of them is short and the following one is longer. The author writes a non-local action for the massive gauge boson involving the inverse box. For U(1) gauge bosons, it is a standard textbook trick that creates an equivalent action. For non-Abelian gauge groups, one needs the inverse covariant box which obviously leads to a non-polynomial and non-local theory that breaks down exactly where you expect problems with the unitarity of the WW-WW scattering. It is a standard material from graduate courses of quantum field theory that one can make gauge bosons massive with extra Goldstone bosons that live in the group manifold. However, the non-linear sigma model is not renormalizable and breaks down at energies comparable to 4.pi.f where f is the decay constant. If we want the theory to be valid at higher energies, we must complete it and the Higgs mechanism is the only perturbative way to do it. The exchange of additional fields such as the Higgs helps to keep the WW to WW scattering unitary. The statements in the paper claiming that the strange new theory can be proved to be perturbatively renormalizable must be incorrect. The microscopic source of the confusion is probably that the author does not appreciate how difficult it is to invert the covariant box. The only way how to complete the theory into a renormalizable theory is to effectively create new particles corresponding to the non-invertible modes of the covariant box - as long as these particles will interact just like the Higgses, and one obtains a theory equivalent to the standard spontaneous symmetry breaking. In another paper, an author proposes a list of generic predictions of quantum gravity. It would be more accurate to call it a list of misconceptions inspired by sloppy thinking about quantum gravity - and it would be even more accurate to call it a list of reasons why all "alternative" attempts to define quantum gravity must be inconsistent. Neither of the bizarre effects is predicted by anything that could be called a theory of quantum gravity - and most likely, neither of the bizarre effects is even consistent with quantum gravity. The list includes double special relativity, something that is known to be inconsistent with locality and additivity of energy, even with the approximate ones. Similar considerations show that doubly special relativity leads to the so-called soccer ball problem (thanks to Sabine for explaining me the terminology): you can't kick a soccer ball if its total energy (including the latent one) is going to exceed the Planck energy - about a few micrograms. In fact, the soccer ball couldn't exist. The second "general prediction" is that elementary particles are "coherent excitations of quantum geometry" that probably refers to a recent kindergarden theory that elementary particles are different octopi. Well, in reality, gravitons (and perhaps KK-photons) are coherent excitations of quantum geometry while other particles are excitations of something more general - and whether or not you call this more general thing (string theory) "geometry" is a matter of terminology. The third "general prediction" is that "locality is disordered". The author also repeats many misconceptions about the quasinormal modes - such as the fantasy that they have something to do with the black hole entropy counting in loop quantum gravity. This fantasy has been known to be patently false for more than two years. In fact, it is wrong on all sides: the quasinormal frequencies are generally not what they needed to be according to the hypothesis; the entropy predicted by loop quantum gravity is not what it needed to be either; and the link between these two is completely unphysical. The author of this particular paper always tells me that he understands my explanations why these things are safely known not to work. Then he returns home and writes another silly paper claiming that they do work. Sigh. Apparently, a classmate of two other famous critics. Moreover, all of them will probably believe the theory of global warming. Via Rae Ann. Via Steve McIntyre's blog (posting by John A.) Dave Stockwell has created a script whose source is found here (mirror) and described here. If you click the image below, it opens in a separate window: you probably need to click because the image does not quite fit here. Every time you reload the image, the calculation starts from the beginning. Although Dave offers an explanation, let me offer you mine, too. The blue graph shows temperatures from 1856 to 1994 or so measured by the CRU thermometers - the array is called "cru" - and these real numbers are used to make predictions from 1994 to 2093 with an important help of a random generator: the predicted temperatures for the period 1995-2094 depend on random numbers as well as the CRU data from the past. The eleven temperatures from the period 1995-2005 are known from the CRU data, but they are also predicted using the random forecasting algorithm. These eleven years are used to calculate the verification statistics - a kind of score that is used to evaluate how much you should believe the prediction: statistical skill. How are the random predictions made? Weighted random data The temperatures predicted for the years 1995-2094 are calculated using the array called "fcser" that later becomes the second part of the array "graphValues"; the role of the "fcser" array is to emulate the temperature persistence. These "predictions" are "calculated" from the "series" as follows: the temperatures in the "temp" array are calculated as • the CRU temperature from 1994 plus a random number between -0.5 and +0.5 plus "fcser" for the given year where fcser for the given year is a weighted average of the values of "temp" from previous years (for the years up to 1994, the real CRU data are used): the weights, defined in the array "weight", are a particular decreasing function of the time delay. If you care, "weight[y]" for the delay of "y" years is recursively calculated by • weight[1] = 1/2 • weight[y] = weight[y-1] * (y-1.5)/y For a long time delay, you see that "dw/dy = -1.5 w/y" which means that the weight goes like "y^{-1.5}", a power law. All the numerical constants are variables in the script that can be modified if you wish. The formula for the weights has the interesting feature that they automatically sum to one, in fact for a general value of "d": • weight[1] = d • weight[y] = weight[y-1] * [1 - (1+d)/y] I leave you the proof as a homework exercise. The value of "d" leading to the most reasonable color of the noise is clearly related to the critical exponents encoding the temperature autocorrelation. Verification statistics Two verification statistics are calculated to quantify the agreement between the observed CRU temperatures and the randomly predicted temperatures in the 1995-2005 interval: • r2 - or "r squared" • re - or "reduction of error" Here, "r2" is the usual correlation coefficient squared - something that measures the correlation between eleven numbers "x_i" (CRU temperatures) and eleven numbers "y_i" (randomly predicted temperatures). The correlation coefficient is a number between -1 and 1 calculated as follows: • [Sum(xy) - 11 Average(x) Average(y)] / sqrt(Variance(x) Variance(y)] where "Variance(x) = Sum[(x_i-Average(x))^2]" and similarly for "y". This "r2" statistics is normally used to evaluate statistical skill, and you may see that this number is extremely close to zero whenever you reload the picture; they're much smaller than one. This smallness tells you that the random numbers (of course) are statistically insignificant and the prediction is not trustworthy. The "hockey stick graph" of the past temperatures gives you a tiny "r2", too. On the other hand, "re" is the reduction of error. You usually get high numbers around 0.5; the Mann-Bradley-Hughes gives a rather high verification statistics, too. Because in this experiment, you see that "re" is high even though the prediction is based on random data - i.e. on complete garbage - it shows that high "re" can't be trusted. This "re" is calculated as follows: • re = 1 - SumVariances/SumVariancesRef where "SumVariances" is the sum of "(cru-predictedtemp)^2" over the eleven years while "SumVariancesRef" is the sum of "(cru-averagecru)^2" where "cru" are the actually measured temperatures in the eleven years of the verification period. In other words, the number "re" is a number between 0 and 1 that tells you by how much your prediction is better from the assumption of a simple "null hypothesis" that the temperature is constant over the 11-year period. This particular program predicts the 1995-2094 temperatures as random data with a particular power law encoding the noise at different time scales, but otherwise oscillating around constant data (the 1994 temperature). You could modify the "predictions" by any kind of bias you want - global warming or global cooling - and the statistical significance of your results would not change much. Also, the M&M mining effect is still not included: if you allow your algorithm to choose the "best trees", you can increase your verification statistics even though the data is still noise. The punch line is that the reconstructions that imply that the 20th century climate was unprecedented are as statistically trustworthy as sequences of random numbers. If you want to verify the hypotheses, you must actually pay attention to the "r2" statistic. With this method, you can see that the randomly generated predictions are garbage, much like various existing "hockey stick" graphs whose goal was to "prove" that the 20th century climate was unprecedented. Short comments about two interesting physicists (and speakers) who visited Harvard. Ari Pakman et al. have done something that should have been calculated eight years ago or so: they have verified the three-point functions of the chiral primary operators in the AdS3/CFT2 correspondence. Recall that the same task in AdS5/CFT4 was solved by Lee, Minwalla, Rangamani, and Seiberg in 1998. The calculation of Ari and his company starts with the Wess-Zumino-Witten models for the groups SU(2) and SL(2,R) that are combined in various ways. The intermediate results depend on the double gamma function and similar monsters. But all these complicated functions eventually cancel between the SU(2) and SL(2,R) parts of the theory to give you a very simple result (essentially equal to one) - one that matches the correlators in the symmetric orbifold CFTs that describe the boundary conformal field theory - correlators calculated by Lunin and Mathur, among others. I can't tell you more details because the paper by Ari et al. is yet to be published. When Ari visited Harvard at the end of 2004, he showed a picture of Che Guevara on one of his transparencies. At that time, I did not know that particular communist bastard, so I asked Ari who was that - and Ari answered that it was a Czech dissident. Ari assumed that I was joking - because we certainly had to hear about Che all the time, he thought - but I was not joking and in fact the Czechoslovak communists did not tell us a single nice word about Che. He was never popular in Czechoslovakia and as far as I can tell, the Czechoslovak communists did not trust allies such as Che. Rajesh Gopakumar is continuing with his program to derive the worldsheet theory of a string from the known gauge theory on the boundary of the AdS spacetime. He has a sophisticated sequence of steps to translate the diagrams in gauge theory - and he considers free diagrams in the free gauge theory only. By imagining that the worldsheet is discretized in a particular way, he can construct the hypothetical worldsheet correlators that indeed lead, after an integral over the worldsheet positions (and perhaps other moduli, if you considered string loops), to the simple power-law correlators of the chiral primaries on the CFT side. The worldsheet correlators satisfy all the usual properties that you expect from a CFT, and as Davide Gaiotto has pointed out, they resemble powers of correlators of spin fields in the Ising model. Indeed, it is not unnatural to expect that the vertex operators for physical states in the hypothetical CFT are represented by some kind of spin fields. Polar bears are often used as symbols of victims of the so-called global warming. Dr. Mitchell Taylor, a polar bear biologist, debunks various convenient lies of this kind in 11 out of 13 Canadian populations of polar bears are stable or growing. One Southern population could actually be over-abundant. After 151 votes, the percentual gains seem to be rather constant, so let me close the polls. The results are the following: 1. ILC: the linear collider, 47% 2. Millions of PC for kids, 31% 3. Two weeks of Kyoto, 11% 4. One month of war in Iraq, 8% 5. Ten space shuttle flights, 3% The message is that the Pentagon and especially NASA should either improve their public relations or modify their military or research goals because their result is worse than the support for the mad agreement to prevent the climate from changing. On the contrary, the voters have shown that a new linear collider should be built, and to a lesser extent, they have also demonstrated a pretty good support for the MIT plan to produce millions of $100 laptops for the kids in the third world. • Update - van der Waals and Casimir: Nima told me an obviously true statement whose validity I did not appreciate although I had to know it because it is apparently explained in the Landau-Lifshitz books. I thought that there were too problematic forces competing with gravity: the Casimir force and the van der Waals forces. In fact, they're the same one-loop effect. The Casimir force is a macroscopic description of the overall effect of van der Waals forces between the atoms of the metallic plates. Blayne Heckel from University of Washington gave a colloquium about the submillimeter tests of gravity. He started with motivation for these experiments - with a review of Newton's theory, general relativity, old large dimensions, and warped dimensions, among other possibilities inspired by string theory. What do you need in their experiment? You need a one-meter-long fiber and a hanging gadget whose size is seven centimeters. There are mirrors on the gadget: because they reflect a laser beam, you can measure the orientation of the gadget. On that gadget, you find a horizontal disk with many holes that respect a Z_{21} symmetry. This disk can rotate relatively to another disk beneath it. In the experiments, it rotates very slowly, with a period comparable to several hours. The existence of holes exerts a gravitational torque on the gadget whose magnitude is periodically oscillating. This effect caused by gravity is still pretty large. Because you want to study deviations from the "1/r^2" force law, you add another disk with holes whose torques exactly cancel the previously discussed torques assuming that Newton's law is That's a method to measure the hypothetical deviations only. You must be careful about many details - for example, you must insert a thin conducting foil in between the different layers to screen the electromagnetic effects including the Casimir forces. This foil is really thin because eventually you are able to measure the forces at distances slightly below 100 microns. The deviations are conveniently parameterized as a Yukawa force that is, relatively to Newton's force, suppressed by the factor "alpha.exp(-r/r0)". You measure the angular orientation of your gadget and finally you evaluate the data: you allow the coefficient "alpha", the distance scale of the new force "r0", as well as various parameters of your gadget's mass distribution to vary, and you calculate the best fit. After having made several versions of their experiment with slightly modified details, they end up with • alpha = (-0.7 +- 0.9) x 10^{-4} which means that "alpha" is really zero and you can't therefore determine "r0" at all. What the value of alpha above shows that what we see is less than a one-sigma effect because 0.7 is less than the standard deviation of 0.9: while that would be enough in climate science to prove a new kind of looming catastrophe, it means "no discovery" in physics. The intermediate results indicated some effects - equivalent to an additional repulsive force - but these were just 3 sigma effects that disappeared when they tried to do things slightly differently. They also measure a possible existence of the preferred reference frame and other unusual terms and they can show that the coefficients of the terms responsible for these effects are at least 100,000 times weaker than what you would normally expect if you assumed that CPT and the Lorentz invariance are broken at the Planck scale. While Lee Smolin waits for GLAST to prove his unusual theories that the normal Lorentz invariance is broken, I think that these terrestrial experiments have already falsified these bizarre theories, proving that they were indeed (easily) falsifiable. Clifford Johnson from Cosmic Variance was He has learned that the Christians can be great people and in principle, they could even become scientists. One can talk to them, smile at them, and respect them as human beings. They can write and they do write lovelier articles about Clifford than the left-wing atheists. Such an experience makes a difference. Indeed, Clifford and it went wonderfully: singing with the piano, clapping, preaching. Indeed, Clifford has found out that the Christians can be more human and more friendly people than the officers of PC police. Moreover, some verses from the Bible resonated with the message he wanted to give. Of course, the idea of Clifford Johnson in the church was rather controversial at Cosmic Variance. Religion is viewed as the source of all lies in the world by Sean Carroll. He emphasizes that religion is not necessarily evil: it is just false. And one must do everything to fight it; see, for example, these 172 pages. More seriously, there are some scientifically strange things that many Christians believe. But there are also many scientifically strange things that left-wingers such as Sean Carroll believe. I have discussed the fact that the color or the amount of religion in some ideas cannot universally predict their scientific strength. Moreover, I still view religion as the basis of moral principles in our society. Yes, I am primarily talking about the judeo-Christian tradition. But more generally, religions showed their power to give our lives a direction. Religions can't provide us with the final word about difficult scientific questions; but they have been and they are a part of the transformation of skillful monkeys to human beings. Science and religion were born into the same cradle. Their diversification only occured when the human civilization made many other important steps. Although it has always been clear that I would remain an infidel, the Christian environment is something that many of us are able to live with. If we had to spend years with extraterrestrial aliens, it could be difficult - but if they were Christians, things could simplify dramatically. ;-)
{"url":"http://www.karlin.mff.cuni.cz/~motl/","timestamp":"2014-04-16T19:04:38Z","content_type":null,"content_length":"92058","record_id":"<urn:uuid:fad5ed2a-a433-46b7-b89c-bfeb03ec2ca3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
elem. # theo - "prove never perfect square" September 23rd 2008, 05:34 PM #1 elem. # theo - "prove never perfect square" Prove $3a^2 - 1$ is never a perfect square. The book hints to use the fact: "The square of any integer is either of the form 3k or 3k+1," which we proved in a previous problem. So how do you start? Assuming $3a^2 - 1$ is a perfect square, it must be able to be written in the form of 3k or 3k+1. Then what? How do you show it can't or NEVER can be? Prove $3a^2 - 1$ is never a perfect square. The book hints to use the fact: "The square of any integer is either of the form 3k or 3k+1," which we proved in a previous problem. So how do you start? Assuming $3a^2 - 1$ is a perfect square, it must be able to be written in the form of 3k or 3k+1. Then what? How do you show it can't or NEVER can be? i guess you could assume a is even and show it doesn't work, then assume a is odd and show it doesn't work either. that is, you can't simplify it to get it in that form Ok, so by showing that there's no possible way to put it into the form would be like "let a=2s+1" ... ect ... which is not of the form 3k or 2k+1 and so on with "let a+2s" ... ? Is that enough? But wait: how do we even know a is an integer at all? The problem never said so ... ? yes, if you can show that, it is enough, because there are no other options. you have exhausted all possibilities. a is an integer, it has to be even or odd, and that covers all integers. so you would prove that it can never work no matter what September 23rd 2008, 05:54 PM #2 September 23rd 2008, 06:05 PM #3 September 23rd 2008, 06:11 PM #4 September 23rd 2008, 07:54 PM #5 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/number-theory/50339-elem-theo-prove-never-perfect-square.html","timestamp":"2014-04-19T23:46:41Z","content_type":null,"content_length":"47574","record_id":"<urn:uuid:53fae316-ea24-457f-9427-37b33a417820>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Defect Report #025 Submission Date: 10 Dec 92 Submittor: WG14 Source: X3J11/91-005 (Fred Tydeman) Question 1 What is meant by ``representable floating-point value?'' Assume double precision, unless stated otherwise. First, some definitions based partially upon the floating-point model in subclause 5.2.4.2.2, on pages 14-16 of the C Standard: 1. +Normal Numbers: DBL_MIN to DBL_MAX, inclusive; normalized (first significand digit is non-zero), sign is +1. 2. -Normal Numbers: -DBL_MAX to -DBL_MIN, inclusive; normalized. 3. +Zero: All digits zero, sign is +1; (true zero). 4. -Zero: All digits zero, sign is -1. 5. Zero: Union of +zero and -zero. 6. +Denormals: Exponent is ``minimum'' (biased exponent is zero); first significand digit is zero; sign is +1. These are in range +DBL_DeN (inclusive) to +DBL_MIN (exclusive). (Let DBL_DeN be the symbol for the minimum positive denormal, so we can talk about it by name.) 7. -Denormals: same as +denormals, except sign, and range is -DBL_MIN (exclusive) to -DBL_DeN (inclusive). 8. +Unnormals: Biased exponent is non-zero; first significand digit is zero; sign is +1. These overlap the range of +normals and +denormals. 9. -Unnormals: Same as +unnormals, except sign; range is over -normals and -denormals. 10. +infinity: From IEEE-754. 11. -infinity: From IEEE-754. 12. Quiet NaN (Not a Number); sign does not matter; from IEEE-754. 13. Signaling NaN; sign does not matter; from IEEE-754. 14. NaN: Union of Quiet NaN and Signaling NaN. 15. Others: Reserved (VAX?) and Indefinite (CDC/Cray?) act like NaN. On the real number line, these symbols order as: [ 1 )[ 2 ]( 3 ]( 4 )[5]( 6 )[ 7 )[ 8 ]( 9 ] -INF -DBL_MAX -DBL_MIN -DBL_Den -0 +0 +DBL_Den +DBL_MIN +DBL_MAX +INF Non-real numbers are: SNaN, QNaN, and NaN; call this region 10. Regions 1 and 9 are overflow, 2 and 8 are normal numbers, 3 and 7 are denormal numbers (pseudo underflow), 4 and 6 are true underflow, and 5 is zero. So, the question is: What does ``representable (double-precision) floating-point value'' mean: a. Regions 2, 5 and 8 (+/- normals and zero) b. Regions 2, 3, 5, 7, and 8 (+/- normals, denormals, and zero) c. Regions 2 through 8 [-DBL_MAX ... +DBL_MAX] d. Regions 1 through 9 [-INF ... +INF] e. Regions 1 through 10 (reals and non-reals) f. What the hardware can represent g. Something else? What? Some things to consider in your answer follow. The questions that follow are rhetorical and do not need answers. Subclause 5.2.4.2.2 Characteristics of floating types <float.h>, page 14, lines 32-34: The characteristics of floating types are defined in terms of a model that describes a representation of floating-point numbers and values that provide information about an implementation's floating-point arithmetic. Same section, page 15, line 6: A normalized floating-point number x ... is defined by the following model: ... That model is just normalized numbers and zero (appears to include signed zeros). It excludes denormal and unnormal numbers, infinities, and NaNs. Are signed zeros required, or just allowed? Subclause 6.1.3.1 Floating constants, page 26, lines 32-35: ``If the scaled value is in the range of representable values (for its type) the result is either the nearest representable value, or the larger or smaller representable value immediately adjacent to the nearest value, chosen in an implementation-defined manner.'' A B y C x D E z F -DBL_Den 0.0 +DBL_Den +DBL_MIN +DBL_MAX +INF The representable numbers are A, B, C, D, E, and F. The number x can be converted to B, C, or D! But what if B is zero, C is DBL_DeN (denormal), and D is DBL_MIN (normalized). Is x representable? It is not in the range DBL_MIN ... DBL_MAX and its inverse causes overflow; so those say not valid. On the other hand, it is in the range DBL_DeN ... DBL_MAX and it does not cause underflow; so those say it is valid. What if B is zero, A is -DBL_DeN (denormal), and C is +DBL_DeN (denormal); is y representable? If so, its nearest value is zero, and the immediately adjacent values include a positive and a negative number. So a user-written positive number is allowed to end up with a negative value! What if E is DBL_MAX and F is infinity (on a machine that uses infinities, IEEE-754)? Does z have a representation? If z came from 1.0/x, then z caused overflow which says invalid. But on IEEE-754 machines, it would either be DBL_MAX or infinity depending upon the rounding control, so it has a representation and is valid. What is ``nearest?'' In linear or logarithmic sense? If the number is between 0 and DBL_DeN, e.g., 10^-99999, it is linear-nearest to zero, but log-nearest to DBL_DeN. If the number is between DBL_MAX and INF, e.g., 10^+99999, it is linear- and log-nearest to DBL_MAX. Or is everything bigger than DBL_MAX nearest to INF? Subclause 6.2.1.3 Floating and integral, page 35, Footnote 29: ``Thus, the range of portable floating values is (-1,Utype_MAX+1).'' Subclause 6.2.1.4 Floating types, page 35, lines 11-15: ``When a double is demoted to float or a long double to double or float, if the value being converted is outside the range of values that can be represented, the behavior is undefined. If the value being converted is in the range of values that can be represented but cannot be represented exactly, the result is either the nearest higher or nearest lower value, chosen in an implementation-defined manner.'' Subclause 6.3 Expressions, page 38, lines 15-17: ``If an exception occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.'' w = 1.0 / 0.0 ; /* infinity in IEEE-754 */ x = 0.0 / 0.0 ; /* NaN in IEEE-754 */ y = +0.0 ; /* plus zero */ z = - y ; /* minus zero: Must this be -0.0? May it be +0.0? */ Are the above representable? Subclause 7.5.1 Treatment of error conditions, page 111, lines 11-12: ``The behavior of each of these functions is defined for all representable values of its input arguments.'' What about non-numbers? Are they representable? What is sin(NaN)? If you got a NaN as input, then you can return NaN as output. But, is it a domain error? Must errno be set to EDOM? The NaN already indicates an error, so setting errno adds no more information. Assuming NaN is not part of Standard C ``representable,'' but the hardware supports it, then using NaNs is an extension of Standard C and setting errno need not be required, but is allowed. Correct? Subclause 7.5.1 Treatment of error conditions, on page 111, lines 20-27 says: ``Similarly, a range error occurs if the result of the function cannot be represented as a double value. If the result overflows (the magnitude of the result is so large that it cannot be represented in an object of the specified type), the function returns the value of the macro HUGE_VAL, with the same sign (except for the tan function) as the correct value of the function; the value of the macro ERANGE is stored in errno. If the result underflows (the magnitude of the result is so small that it cannot be represented in an object of the specified type), the function returns zero; whether the integer expression errno acquires the value of the macro ERANGE is implementation-defined.'' What about denormal numbers? What is sin(DBL_MIN/3.0L)? Must this be considered underflow and therefore return zero, and maybe set errno to ERANGE? Or may it return DBL_MIN/3.0, a denormal number? Assuming denormals are not part of Standard C ``representable,'' but the hardware supports it, then using them is an extension of Standard C and setting errno need not be required, but is allowed. What about infinity? What is exp(INF)? If you got an INF as input, then you can return INF as output. But, is it a range error? The output value is representable, so that says: no error. The output value is bigger than DBL_MAX, so that says: an error and set errno to ERANGE. Assuming infinity is not part of Standard C ``representable,'' but the hardware supports it, then using INFs is an extension of Standard C and setting errno need not be required, but is allowed. Correct? What about signed zeros? What is sin(-0.0)? Must this return -0.0? May it return -0.0? May it return +0.0? Signed zeros appear to be required in the model in subclause 5.2.4.2.2 on page 15. What is sqrt(-0.0)? IEEE-754 and IEEE-854 (floating-point standards) say this must be -0. Is -0.0 negative? Is this a domain error? Subclause 7.9.6.1 The fprintf function on page 132, lines 32-33 says: ``(It will begin with a sign only when a negative value is converted if this flag is not specified.)'' What is fprintf(stdout, "%+.1f", -0.0);? Must it be -0.0? May it be +0.0? Is -0.0 a negative value? The model on page 15 appears to require support for signed zeros. What is fprintf(stdout, "%f %f", 1.0/0.0, 0.0/0.0);? May it be the IEEE-854 strings of inf or infinity for the infinity and NaN for the quiet NaN? Would NaNQ also be allowed for a quiet NaN? Would NaNS be allowed for a signaling NaN? Must the sign be printed? Signs are optional in IEEE-754 and IEEE-854. Or, must it be some decimal notation as specified by subclause 7.9.6.1, page 133, line 19? Does the locale matter? Subclause 7.10.1.4 The strtod function on page 151, lines 2-3 says: ``If the subject sequence begins with a minus sign, the value resulting from the conversion is negated.'' What is strtod("-0.0", &ptr)? Must it be -0.0? May it be +0.0? The model on page 15 appears to require support for signed zeros. All floating-point hardware I know about support signed zeros at least at the load, store, and negate/complement instruction level. Subclause 7.10.1.4 The strtod function on page 151, lines 12-15 say: ``If the correct value is outside the range of representable values, plus or minus HUGE_VAL is returned (according to the sign of the value), and the value of the macro ERANGE is stored in errno. If the correct value would cause underflow, zero is returned and the value of the macro ERANGE is stored in errno.'' If HUGE_VAL is +infinity, then is strtod("1e99999", &ptr) outside the range of representable values, and a range error? Or is it the ``nearest'' of DBL_MAX and INF? Principles for C floating-point representation: (These principles are intended to clarify the use of some terms in the standard; they are not meant to impose additional constraints on conforming implementations.) 1. ``Value'' refers to the abstract (mathematical) meaning; ``representation'' refers to the implementation data pattern. 2. Some (not all) values have exact representations. 3. There may be multiple exact representations for the same value; all such representations shall compare equal. 4. Exact representations of different values shall compare unequal. 5. There shall be at least one exact representation for the value zero. 6. Implementations are allowed considerable latitude in the way they represent floating-point quantities; in particular, as noted in Footnote 10 on page 14, the implementation need not exactly conform to the model given in subclause 5.2.4.2.2 for ``normalized floating-point numbers.'' 7. There may be minimum and/or maximum exactly-representable values; all values between and including such extrema are considered to ``lie within the range of representable values.'' 8. Implementations may elect to represent ``infinite'' values, in which case all real numbers would lie within the range of representable values. 9. For a given value, the ``nearest representable value'' is that exactly-representable value within the range of representable values that is closest (mathematically, using the usual Euclidean norm) to the given value. (Points 3 and 4 are meant to apply to representations of the same floating type, not meant for comparison between different types.) This implies that a conforming implementation is allowed to accept a floating-point constant of any arbitrarily large or small value. Previous Defect Report < - > Next Defect Report
{"url":"http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_025.html","timestamp":"2014-04-21T09:36:07Z","content_type":null,"content_length":"14507","record_id":"<urn:uuid:5423aa20-76e6-425c-bc46-16e109e24501>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Shawnee, CO Math Tutor Find a Shawnee, CO Math Tutor ...I have also co-taught in the area of reading instruction, as well as provided individualized instruction for reading and study skills in the resource room. I believe that each student is unique with their own learning style, who deserve the respect and opportunity to learn in a manner in which t... 11 Subjects: including algebra 1, prealgebra, reading, elementary (k-6th) ...JohnI completed math in college through Differential Equations. I have tutored up through AP Calculus and AP Statistics within the past few years. I relate math concepts to physical realities to help develop an intuitive approach to problem solving. 8 Subjects: including trigonometry, logic, elementary science, algebra 1 ...As a physicist I am also very good at math and I am willing to tutor up through differential equations.In May 2013 I graduated from the University of Colorado at Boulder with a degree in Physics. I graduated with Mangna Cum Laude with a GPA of 3.60. I also wrote an honors thesis on chemical vapor deposition diamonds. 13 Subjects: including precalculus, trigonometry, differential equations, linear algebra ...I am very laid back, patient and committed to the student in my style of teaching. I am very likeable and can relax the most tense or reluctant student so that our tutoring session is successful. I believe the biggest obstacle of learning is getting the student to realize his own personal way of comprehension is as successful as any other. 23 Subjects: including geometry, accounting, SAT math, GED I am a graduate from the University of Vermont. There I was a teachers assistant for two years assisting with; College Algebra, Sports Nutrition, Injury Assessment and Management in the Athlete, Emergency Medical Scenarios in Sports Medicine. As well as working as a teachers assistant I was a Strength and Conditioning coach in the varsity weight room for division one athletes. 16 Subjects: including algebra 1, probability, precalculus, elementary math Related Shawnee, CO Tutors Shawnee, CO Accounting Tutors Shawnee, CO ACT Tutors Shawnee, CO Algebra Tutors Shawnee, CO Algebra 2 Tutors Shawnee, CO Calculus Tutors Shawnee, CO Geometry Tutors Shawnee, CO Math Tutors Shawnee, CO Prealgebra Tutors Shawnee, CO Precalculus Tutors Shawnee, CO SAT Tutors Shawnee, CO SAT Math Tutors Shawnee, CO Science Tutors Shawnee, CO Statistics Tutors Shawnee, CO Trigonometry Tutors
{"url":"http://www.purplemath.com/Shawnee_CO_Math_tutors.php","timestamp":"2014-04-20T06:44:40Z","content_type":null,"content_length":"23976","record_id":"<urn:uuid:791c97b7-2890-4a40-aa0a-5b58ba1ad784>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Does the dual of an object with trivial symmetry also have trivial symmetry? up vote 6 down vote favorite Let $C$ be a symmetric monoidal category. I am interested in objects $X \in C$ such that the symmetry $S_{X,X} : X \otimes X \cong X \otimes X$ is equal to the identity. There are many examples of such objects, e.g. invertible sheaves. My first question is: How would you call such an object? Now assume that $X$ has a dual $Y$, i.e. we have morphisms $e: Y \otimes X \to 1$ and $c : 1 \to X \otimes Y$ such that the triangular identities are satisfied. Question. Assuming $S_{X,X}$ is the identity, can we conclude that $S_{Y,Y}$ is the identity? If not, does it suffice to assume that $e$ (and thus $c$) is an isomorphism? Edit: I am still interested how objects with $S_{X,X}=\mathrm{id}$ are called in the literature or which terminology you would suggest. ct.category-theory monoidal-categories braided-tensor-cat add comment 2 Answers active oldest votes I believe the answer to your question is yes, without a further assumption that e is an isomorphism. The symmetry S_{Y,Y} can be obtained from the symmetry S_{X,X} as follows $Y\otimes Y \xrightarrow{c\circ c} Y\otimes Y \otimes X\otimes X \otimes Y \otimes Y \xrightarrow{id_Y^{\otimes 2} S_{X,X}\otimes id_Y^{\otimes 2}} Y\otimes Y \otimes X\otimes X \ otimes Y \otimes Y \xrightarrow{e\circ e}Y\otimes Y$. Here, $c\circ c$ is shorthand for $(id_Y^{\otimes 2}\otimes c \otimes id)\circ(id_Y^{\otimes 2}\otimes c)$, and similarly for $e\circ e$. In pictures, all I'm doing (which I would draw if I knew an easy way) is: up vote 5 down vote accepted Take $Y \otimes Y$ up, and then bend them around to the right and back down (they become X's on the downward strand, apply $S_{X,X}$, then bend the X's back around and up to the right (where they become Y's again. here is a pdf of the computation I am just really repating a proof here that $S_{U^*,V^*}=S_{U,V}^*$, which holds for the braiding in any rigid braided monoidal categetory. Since $S_{X,X}$ is the identity, you will get a diagram which is recognizable as the identity for $Y\otimes Y$. Thanks. Why does this composition agree with $S_{X,X}$? Also, do you have a reference for the general fact $S_{U^*,V^*}=S_{U,V}^*$? – Martin Brandenburg Oct 25 '11 at 14:01 I imagine it's in Kassel's book quantum groups, among other places. To see that this composition agrees with $S_{Y,Y}$ (that's what you meant I think), you use the naturality of the braiding. So you can rewrite this as a morphism where you do $c\circ c$ and then the $e\circ e$ (which cancel), and then the braiding. Let me add a picture. – David Jordan Oct 25 '11 at 16:01 picture added. should have done so in the first place. Note that the linked pdf has the proof in general that S_{U^*,V^*}=S_{U,V}^* implicit, since there was no need for both the original slots to be equal. – David Jordan Oct 25 '11 at 16:25 See 2.5.4.2 pag.46 on "CAtegories TAnnakiennes" Lnm 265 (Neantro Saavedra Rivano) – Buschi Sergio Oct 25 '11 at 16:33 @Buschi: Does this really help here? In my version, this is just a trivial reformulation that two objects are inverse to each other. @David: Thanks a lot! Meanwhile I've also found the relevant section in Kassel's book. It will take a while to digest it. – Martin Brandenburg Oct 25 '11 at 20:37 add comment Brandenburg, I think that the answere is yes: From the theory of adjunctions given $(F_k, G_k, \epsilon_k, \eta_k): \mathcal{C}\to \mathcal{C}$ for $k=1, 2$ (Maclane CWM notations), and given a natural morphism $\phi: F_2\circ F_1 \to F_1\circ F_2$ there exist a natural morphisms $\widetilde{\phi}: G_1\circ G_2 \to G_2\circ G_1$ defined as : $G_1G_2\xrightarrow{\eta_2 G_1G_2} G_2F_2G_1G_2 \xrightarrow{G_2\eta_1 F_2 G_1G_2} G_2G_1F_1F_2G_1G_2$ $\xrightarrow{GG\phi F_2 G_1G_2} G_2G_1F_2F_1G_1G_2 \xrightarrow{GGF\epsilon_1 G} G_2G_1F_2G_2\xrightarrow{GG\epsilon_2} G_2G_1$ Considering the case $(F_1, G_1, \epsilon_1, \eta_1)= (F_2, G_2, \epsilon_2, \eta_2)$ and indicate it as up vote 0 $(F, G, \epsilon, \eta)$. down vote By naturality, we have $GF\epsilon\ast \eta FG= \eta\ast \epsilon $, then $GGF\epsilon\ast G\eta FG= G\eta\ast G\epsilon $, then $GGF\epsilon G\ast G\eta FGG= G\eta G\ast G\epsilon G $. Let $\phi=1$, then $\widetilde{\phi}= GG\epsilon\ast GGF\epsilon G\ast G\eta FGG\ast \eta GG = GG\epsilon\ast G\eta G\ast G\epsilon G \ast \eta GG =$ $=G(G\epsilon\ast \eta G)\ast (G\epsilon \ast \eta G)G =1_G\ast1_G=1_G $. Now we use this proof for a 2-category with a only one object, (essentially a strict monoidal category), and then to a bicategory with one object (essentially a monoidal category). I realized that mine answer is repetitive (see the Jordan answere above), because don't show that $\widetilde{\phi}$ is the symmetry . – Buschi Sergio Dec 7 '12 at 21:31 add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory monoidal-categories braided-tensor-cat or ask your own question.
{"url":"http://mathoverflow.net/questions/79068/does-the-dual-of-an-object-with-trivial-symmetry-also-have-trivial-symmetry?sort=newest","timestamp":"2014-04-16T13:32:40Z","content_type":null,"content_length":"63538","record_id":"<urn:uuid:2b696bbb-bef7-4b0a-87ad-c654b6a20ea1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Pembroke Park, FL Math Tutor Find a Pembroke Park, FL Math Tutor ...I speak both Spanish and English and I have been tutoring students since I was 13 years old. In high school I graduated top 10% of my class with honors and AP credit in the math and sciences. Unlike most other college students, I have taken all of the difficult science and math courses but also education courses on how to portray this material to students. 22 Subjects: including prealgebra, English, geometry, algebra 1 ...My undergraduate major also required me to take math classes as hard as Calculus III, therefore I have a strong math background. I have tutored high school students in all math subjects. Algebra I and II, as well as Geometry were the subjects that I tutored in the most. 16 Subjects: including algebra 1, algebra 2, reading, prealgebra ...I am currently applying to dental school. Last summer, I took the Dental Admissions Test (DAT) and scored in the 94th percentile in Academic Average (AA), 91st percentile in Total Science (TS), and 94th percentile in Perceptual Ability (PAT). On the Math/Quantitative Reasoning section of the DAT, I scored in the 93rd percentile. I scored a 730 on the Math section of the SAT. 8 Subjects: including algebra 1, biology, chemistry, prealgebra ...Having helped friends and families in their deficiencies in Mathematics, I can tell you that I know how to help the students to pass their tests. Since 1981 I have been involved in direct sales. The experience gained training new salesmen can be applied in tutoring students with their math. 14 Subjects: including calculus, chemistry, physics, trigonometry ...I have experience tutoring students from preschool through college level. While at Florida State University, I did an internship at a day care where I held the position of Assistant teacher. At the Day Care I would teach all subject matters for children ages 3-5. 14 Subjects: including geometry, economics, algebra 1, ACT Math Related Pembroke Park, FL Tutors Pembroke Park, FL Accounting Tutors Pembroke Park, FL ACT Tutors Pembroke Park, FL Algebra Tutors Pembroke Park, FL Algebra 2 Tutors Pembroke Park, FL Calculus Tutors Pembroke Park, FL Geometry Tutors Pembroke Park, FL Math Tutors Pembroke Park, FL Prealgebra Tutors Pembroke Park, FL Precalculus Tutors Pembroke Park, FL SAT Tutors Pembroke Park, FL SAT Math Tutors Pembroke Park, FL Science Tutors Pembroke Park, FL Statistics Tutors Pembroke Park, FL Trigonometry Tutors Nearby Cities With Math Tutor Aventura, FL Math Tutors Cooper City, FL Math Tutors Dania Math Tutors Dania Beach, FL Math Tutors Hallandale Math Tutors Hollywood, FL Math Tutors Miami Gardens, FL Math Tutors Miami Lakes, FL Math Tutors Miramar, FL Math Tutors N Miami Beach, FL Math Tutors North Miami Beach Math Tutors Opa Locka Math Tutors Pembroke Pines Math Tutors Sunny Isles Beach, FL Math Tutors West Park, FL Math Tutors
{"url":"http://www.purplemath.com/pembroke_park_fl_math_tutors.php","timestamp":"2014-04-21T07:17:24Z","content_type":null,"content_length":"24321","record_id":"<urn:uuid:0039f3dc-a834-4b6d-87af-ae0e1cea6f22>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
geometric sequence help November 24th 2009, 07:08 PM #1 Junior Member Sep 2008 geometric sequence help Find the tenth term of a geometric sequence. the first term is 3 the 6th term is root3/9 i stuck on the calucation for finding R t6= root3/9 = 3(r)^5 (3^-1/2)/3^2=3(r)^5 <-- stuck I checked the answer on my workbook .It says. then it suddenly jumps right into r= 3^-1/2 i do not understand how they get rid of the exponent 5 in the workbook and if i factor out the 9 to 3^2, then how can i continue? will the answer be equivalent? Using the information given $\frac{\sqrt{3}}{9}= 3r^5$ Divide by 3 and take the 5th root. Find the tenth term of a geometric sequence. the first term is 3 the 6th term is root3/9 i stuck on the calucation for finding R t6= root3/9 = 3(r)^5 (3^-1/2)/3^2=3(r)^5 <-- stuck I checked the answer on my workbook .It says. then it suddenly jumps right into r= 3^-1/2 i do not understand how they get rid of the exponent 5 in the workbook and if i factor out the 9 to 3^2, then how can i continue? will the answer be equivalent? I am having trouble following your notation. Please use parentheses to clear up confusion. If you mean $\frac{\sqrt{3}}{9}=3r^5$ then cancel out a 3 from both sides, then take the 5th root of both sides. November 24th 2009, 07:24 PM #2 November 24th 2009, 07:24 PM #3 MHF Contributor Oct 2005
{"url":"http://mathhelpforum.com/algebra/116600-geometric-sequence-help.html","timestamp":"2014-04-18T22:46:51Z","content_type":null,"content_length":"36994","record_id":"<urn:uuid:eb36f311-0913-4c06-a9bf-d3a3b79e98b2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: predicted values in svy glm l(log) f(poisson) Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: predicted values in svy glm l(log) f(poisson) From Steven Samuels <sjsamuels@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: predicted values in svy glm l(log) f(poisson) Date Thu, 23 Dec 2010 16:59:55 -0500 Actually, the following code will work whether or not exposure was a stratum variable at any stage. Steven J. Samuels 18 Cantine's Island Saugerties NY 12477 Voice: 845-246-0774 Fax: 206-202-4783 **************************CODE BEGINS************************** sysuse auto, clear svyset turn [pw= trunk] replace foreign = foreign +1 //convenient for -margins- // foreign =2 is the treated group svy: glm rep78 mpg weight i.foreign, link(log) family(poisson) margins, subpop(if foreign==2) at(foreign=(1,2)) post vce(unconditional) // _at2 is foreign as foreign _at1 is foreign as domestic lincom _b[2._at]- _b[1._at] //ATT margins, coeflegend //If you forget the coefficient names lincom _b[2._at] - _b[1bn._at] ***************************CODE ENDS*************************** Use -margins-, but without knowing the survey design it's hard to say more. Were separate samples taken from the "exposed" and "unexposed" units (whatever they were)? Were the PSUs stratified by exposure status? Describe the design and your -svyset- statement. On Dec 23, 2010, at 2:03 PM, Douglas Levy wrote: I am now revisiting this issue, having, with Steve's guidance, settled on option #2 from my original post. I.e., estimate glm model; predict daysmissed for exposed=1; predict daysmissed for the exposed group when exposed is set to 0; take difference of the [weighted] means of the predictions. Now my question is, how can I put confidence bounds on the difference in the mean predictions? I thank the group for any help it can offer. On Tue, Oct 26, 2010 at 1:34 PM, Steven Samuels <sjsamuels@gmail.com> wrote: Your second suggestion would be an estimate of the average effect of treatment (exposure, here) among the treated (ATT). For an overview of possibilities, see Austin Nichols's 2010 conference presentations; his 2007 Stata Journal Causal Inference article; and the 2008 Erratum, all linked at http://ideas.repec.org/e/pni54.html. Holding covariates at the means in non-linear models can be dangerous. For an example, see http://www.stata.com/statalist/archive/2010-07/msg01596.html and Michael N. Mitchell's followup. Steven J. Samuels 18 Cantine's Island Saugerties NY 12477 Voice: 845-246-0774 Fax: 206-202-4783 On Oct 26, 2010, at 11:24 AM, Douglas Levy wrote: I have complex survey data on school days missed for an exposed and unexposed group. I have modeled the effect of exposure on absenteeism using svy: glm daysmissed exposure $covariates, l(log) f(poisson). I would like to estimate adjusted mean days missed for the exposed and control groups, but I'm not sure of the best way to deal with this in a non-linear model. There are a couple of methods I've encountered, and I would be grateful for some thoughts on the pros and cons of 1. Estimate glm model. Reset all covariates to their [weighted] sample means. Predict daysmissed when exposed=0 and when exposed=1. 2. Estimate glm model. Predict daysmissed for exposed=1. Predict daysmissed for the exposed group when exposed is set to 0. Take the [weighted] means of the predictions. 3. Other suggestions? * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-12/msg00915.html","timestamp":"2014-04-19T17:20:49Z","content_type":null,"content_length":"12937","record_id":"<urn:uuid:b4b42506-0992-4a4b-aa0d-eb5f0394d7df>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
"Obscured Clues" PAP Posts: 1876 Joined: 1/20/2004 From: Canada Status: offline I know I don't post very often here now, but I wanted to bring to your attention a variant of PAP that I find rather interesting. I start by showing you the puzzle, which I've redone in another program for the purposes of this demo. If you want to try it yourself, here is a direct PDF link. If you get lost, I've given a solve demo below the picture. I recognize these images are large; only the starting position and the final solution will appear inline, and the rest will be linked for those who want to try to solve along with this demo. The first thing you'll notice are the squares, which are called "obscured clues" (hereafter referred to mostly as squares or square clues). These clues are not known and are actually not needed to solve the puzzle; in essence, knowing those clues would make the puzzle way too easy. This is especially interesting in the bottom row, which has only a square as a clue. Let's begin the way one would normally begin solving, by filling in known areas. For now, we're going to treat each black square - known to be a separate area of cells from any other number, per normal PAP rules - as a clue of "1" for the purposes of filling starting points (red don't come into play for this), and I'll explore more about squares later, not solving them for now. This is the result. (The black square in the third row from the bottom was used for the purpose of determining the lower-right cell being red; the cell above clearly cannot be.) Now that we're out of obvious plays, let's start looking at the squares. Each square can be any number from 1-26 (left clues) or from 1-22 (top clues) based on the grid dimensions. I'm going to replace them with the actual clue determined via solving for simplicity, though you would normally just recognize that the square clue is solved. Because each square is a single set of cells, it can sometimes be very easy to solve for a square; in other cases, you must rely on a dot (cell that is not true) or a cell of another colour in the grid to tell you that a square clue can now be solved. Let's solve for red. This is actually the easiest part of the puzzle, as all of the red cells are in the lower seven rows only. Also, six rows contain one red square each and every column of the puzzle has one red clue (number or square). We'll start with the bottom row, which already has six red cells indicated. When solving for a square, any group of cells like this can automatically be connected as part of a single set of clues. This means that the entire row is filled, making the square in this row have a value of 26. Using a similar technique, every single row can be connected in the middle two columns, resulting in clues of 4, 8, 12, 16, and 16 respectively in the first four rows. This also completes three squares in the columns, which from left to right are shown to be a 2, 7, and 7. This leaves us one final red square to solve for, in the second column from the right; by using the "2" clues nearby that are all completed it can be deduced that the group cannot extend to the next cell above, thus solving red in its entirety. We can also fill in one more black cell on the left side. You should be here at this point. Sharp learners can see that I've solved one black square as a "4" already filled in on the left, in the fourth row from the bottom. Beyond this, we cannot solve for the left side yet, so we'll move to the right side of the grid. Over here, look at row 13, with a square, 3, 3, and another square. On the right side, if you attempt to extend the first of the two filled cells to form a group of three, it connects to the other cell to make a group of four. Therefore, you know that all four cells are part of this obscured clue, and can fill them. Similar solving can be done for the 2 in the row below, but we're going to skip this and look at the third and fourth columns from the right. The top cells cannot stretch down to connect and form a 7; it would be too long. Therefore, we automatically know that the second and third sets must be connected as the square is the only remaining clue in each column, so go ahead and connect them. This also gives you two more cells, one as part of each of the "7" clues. DO NOT ASSUME THE SQUARE CLUES IN THE ROWS ARE SOLVED YET; we still have one unmarked column to the right. The fifth and sixth rows from the bottom now have their "2" clues solved, and in the lower of the two rows, we can also solve the "1". This allows us to complete a few more cells with it, as well as solve more on the right side that gets narrowed in. Work on your own with basic clues (do not solve beyond these eight columns for any rows yet; there are some where you can continue, but I will pick up there) and you should eventually reach this point. You may have noticed I've replaced several squares with "4" clues; these are solved just by working on the right side. If you are intrepid and have gone on past where I paused here, you may have further progress, but I will now resume solving for rows. The "8" can now be started; three cells can be filled. Meanwhile, the row with 4,2,1,1,1,1,2,4 (the last "4" being a solved square) can have two cells filled in on the left thanks to progress on the We still cannot proceed at all on the left, so we'll continue in the middle. Two columns can have a "1" clue bordered. Once this is done, it's possible to solve part of a "6" on the left side, allowing a group of false squares to be marked where a sole black "4" - similar to the right side - is given. This leaves the fifth row from the bottom and two related columns to be fully solved. More simple solving gets you this. Again, your progress may be slightly ahead; I avoid getting too far ahead for the sake of confusing you. The ninth row from the top can now be started. On each side of the grid, there is a batch of 1,2,1 that can be filled in. This completes a lot of "4" clues in some columns, and THAT does something extremely useful; it puts a break between the square clues and the "7" clues on both the left and right sides of the grid. On each side, two obscured clues can be solved as "12" clues and connected, while the "7" clues can also be solved. More work results in both sides being completely solved (square clues all replaced with numbers). There are only seven square clues left to be solved, and we can start by filling in more of the "8" clue smack dab in the middle. All six known cells are known to be a "1" clue, and can be closed. Following this, the second row from the bottom with NO RED in it (now marked as 4,2,1,1,2,4) can have one square of the "2" on each side filled. Moving to columns, we can now fill in seven of the required squares for each "8" clue. Back in the row we just visited, we can now solve each "2" clue, and we can start on the "3" clues in the row above. The known fills for the "3" clues will complete two columns and will lead us to a discovery that allows us to complete the puzzle. The outer instances of obscured clues in the columns are now known to be "1" and the rest of those solved columns are completed appropriately. The "8" in the 4,8,4 is now able to be completed, and the remaining square clues in columns can be completed with the help of the row with the red group of four cells. After marking false squares in a few rows and affirming two squares as "1" clues (each row being 6,1,1,6), there is one final square to solve. All we have to do is connect and complete a "4" clue there. The rest is solved normally and you get what looks like a red carpet entrance to some home or festivity. And that is how you solve an obscured clues puzzle. Please comment and discuss! If you want to download the PDF of the puzzle, the link is above at the start of the post. Would love to see Conceptis try this someday to make those smaller puzzles just a tad more difficult.
{"url":"http://www.conceptispuzzles.com/ru/forum/tm.aspx?m=34744&mpage=1&key=","timestamp":"2014-04-16T17:13:09Z","content_type":null,"content_length":"38793","record_id":"<urn:uuid:7e6299b5-7de7-46ab-a9ee-d117309b0b2b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Perimeter and Area of a Triangle Given its Vertices Online calculator to calculate the area and perimeter of a triangle given the coordinates of its vertices. The distance formula is used to find the distances between vertices then these distances are used to find the perimeter and area of the triangle. How to use the calculator Enter the x and y coordinates of the three vertices A, B and C of the triangle and press "enter". The outputs are the area and perimeter of the triangle. More Online Geometry Calculators and Solvers. Free Online Graph Plotter for All Devices Home Page -- HTML5 Math Applets for Mobile Learning -- Math Formulas for Mobile Learning -- Algebra Questions -- Math Worksheets -- Free Compass Math tests Practice Free Practice for SAT, ACT Math tests -- GRE practice -- GMAT practice Precalculus Tutorials -- Precalculus Questions and Problems -- Precalculus Applets -- Equations, Systems and Inequalities -- Online Calculators -- Graphing -- Trigonometry -- Trigonometry Worsheets -- Geometry Tutorials -- Geometry Calculators -- Geometry Worksheets -- Calculus Tutorials -- Calculus Questions -- Calculus Worksheets -- Applied Math -- Antennas -- Math Software -- Elementary Statistics High School Math -- Middle School Math -- Primary Math Math Videos From Analyzemath Author - e-mail Updated: 2 April 2013 Copyright © 2003 - 2014 - All rights reserved
{"url":"http://www.analyzemath.com/Geometry_calculators/perimeter_area_tri_verti.html","timestamp":"2014-04-19T04:19:48Z","content_type":null,"content_length":"9839","record_id":"<urn:uuid:38c456c4-10d3-4275-9fd8-c213ee53d02f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
n the Moon Calculator Weight on the Moon Calculator How Much Would You Or Any Object Weigh On the Moon? This Weight on the Moon Conversion Calculator calculates your weight or the weight that any object would be if on the moon. To use this calculator, a user just enters his or her weight and clicks the 'Calculate' button, and his/her weight will automatically be computed and shown below. The weight can be entered in any units and the resultant answer will be in the same units which the user has input. Thus, for example, if a user enters 200lbs, the resultant answer of the weight on the moon would be 33.07lbs. The units will always match. Just for the sake of clarity, a user can select the units he wants the answer to appear in, whether grams, kilograms, or pounds, and that unit will show in the answer. This calculates the weight on the moon from the weight of the object on earth. Being that it takes the weight of an object on earth and converts it to Martian weight, the formula is Weight on the Moon= (Weight on Earth/9.81m/s^2) * 1.622m/s^2. To find the weight on the moon, we divide the weight on earth by the earth's force of gravity, which is 9.81m/s^2. This calculates the mass of the object. Once we have the object's mass, we can find the weight by multiplying it by the gravitational force, which it is subject to. Being that the moon has a gravitational force of 1.622m/s^2, we multiply the object's mass by this quanitity to calculate an object's weight on the moon. So an object or person on the moon would weigh 16.5% its weight on earth. Therefore, a person would be much lighter on the moon. Conversely, a person is 83.5% heavier on earth than on the moon. What causes the differences in weight between the various planets? The answer is the gravitational force which a planet is subject to. Earth has much more mass than the moon. Therefore, the sun pulls on it with greater force, because of the greater mass. You probably are familiar that gravity is a downward force. It pushes you back to the Earth's surface. This is why when you jump, you come back down. Being that the earth has greater gravitational force, it pulls object downward with greater force. This translates into greater weight. If you want to calculate the weight of all of the planets simultaneously, see our Weight on Planets Calculator. Related Resources Weight on Mars Calculator Weight on Venus Calculator Weight on Mercury Calculator Weight on Saturn Calculator Weight on Neptune Calculator Weight on Jupiter Calculator Weight on Uranus Calculator
{"url":"http://www.learningaboutelectronics.com/Articles/Weight-on-the-moon-conversion-calculator.php","timestamp":"2014-04-18T09:47:38Z","content_type":null,"content_length":"9198","record_id":"<urn:uuid:7145afad-58f7-45bb-adc2-e27314bcc020>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Alfine 11 speed with Versa 11 levers: a review Thanks. Your talk of adaptors is a bit skeeery, I know nothing of these things! It sounds like getting On-one to molish this might be preferable to trying to work it out myself! Nah, we just have a BWNCWR fettling party! That thought had crossed my mind! It is prolly about time I learned how to build wheels, and also to bake CAIK How on earth do you work out what length spokes to order? It depends on the CAIK recipe.
{"url":"http://yacf.co.uk/forum/index.php?topic=50729.45","timestamp":"2014-04-18T20:44:32Z","content_type":null,"content_length":"88153","record_id":"<urn:uuid:9c18e4e9-a244-4526-bf52-d93a54cbc791>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: This is random, but do you guys think an ACT score of 28 will get me a full scholarship? how many points does the average ACT score go up? i want to take it in September again. How did you guys do on your ACTs? • one year ago • one year ago Best Response You've already chosen the best response. I took it as a 7th grader. (Now going into 8th). I took it and got a 17 Best Response You've already chosen the best response. i just got a pathetic score of 30 :/ Best Response You've already chosen the best response. Dude. 30 is great. Best Response You've already chosen the best response. it was pathetic for my parents... Best Response You've already chosen the best response. 17 was pathetic for me. Best Response You've already chosen the best response. 30 is amazing! I wish I would get a 30! and @karatechopper 17 is really good for a 7th grader. Best Response You've already chosen the best response. I had 31 still didn't get in engineering school Best Response You've already chosen the best response. you can take up to three times before they start averaging your score Best Response You've already chosen the best response. @Libniz wow!! 31! Can u give me some study tips? Best Response You've already chosen the best response. what's your weakest subject? Best Response You've already chosen the best response. Best Response You've already chosen the best response. math :( i got a 24 in math 30 english 8 writing 28 in science 29 in reading Best Response You've already chosen the best response. 8 out of 15 right? Best Response You've already chosen the best response. What's the highest you can get? o.o Best Response You've already chosen the best response. ACT is out of 36 Best Response You've already chosen the best response. Like 36 for each subject? Or 26 altogether? Best Response You've already chosen the best response. no 12. 2 ppl grade your essay on a scale of 6. both of my graders gave me a 4 Best Response You've already chosen the best response. How can you get a total of 36 with 5 subjects? o.o Best Response You've already chosen the best response. science was pretty easy when I took it ; you didn't have to know any science , just have to read the graph/chart Best Response You've already chosen the best response. each subject is out of 36 , then they average it for your final score Best Response You've already chosen the best response. Oh! Okay. Best Response You've already chosen the best response. I guess you can improve your math score, get it up close to 30 Best Response You've already chosen the best response. Can you take the ACT or SATs online or just in a classroom or schoolroom? Best Response You've already chosen the best response. just in a classroom Best Response You've already chosen the best response. you need it for college admission though Best Response You've already chosen the best response. That's what I thought. Cool. I got so confused in college planner. Best Response You've already chosen the best response. They crammed a ton of stuff into a few months of school. It was like @ . @ Best Response You've already chosen the best response. @rebeccaskell94 have u taken any standarized tests? Best Response You've already chosen the best response. Haha not since 8th grade. I'm graduating next year. When I took it my scores were in the 9-11th grade levels but I'm scared of the PSAT and SAT. I think I'm going to try to take them this fall. Best Response You've already chosen the best response. math just takes experience Best Response You've already chosen the best response. if ur a math person u shuld take the ACT instead of the SAT. SAT is more vocab and reading. Im graduating this year and Ive only taken the ACT once. I wish i culd take the PSAT, but since im graduating from high school in 3 years, I lost the opportunity to take it. Best Response You've already chosen the best response. I'm better in language arts/English etc. I suck at math haha I can't remember formulas. Best Response You've already chosen the best response. out of curiosity , did you take any ACT prep course? Best Response You've already chosen the best response. Best Response You've already chosen the best response. i took an act prep course. I got 2 book included. The Real ACT and Barrons Best Response You've already chosen the best response. I think the higher you go in ranks the less they pay attention to your test scores. At some point your score will be no different than any other person applying. I got a 24 over all and I got some pretty good scholarships. It just depends on the school you apply to. Best Response You've already chosen the best response. I wanna go to FSU. :D Best Response You've already chosen the best response. have you applied yet? @rebeccaskell94 Best Response You've already chosen the best response. nooooooo here just like, go to chat. Best Response You've already chosen the best response. **tip--private colleges tend to be generous with scholarship Best Response You've already chosen the best response. @Romero what kind of scholarships did you get? and to which college? Best Response You've already chosen the best response. @Libniz r u in college rite now? Best Response You've already chosen the best response. yes, I am Best Response You've already chosen the best response. did u get scholarships with ur ACT score? Best Response You've already chosen the best response. University of Illinois no, I didn't even get in engineering program with my ACT score, I had to wait a semester Best Response You've already chosen the best response. university of michigan ann arbor wants female engineers. one of my friends got a 25 on her ACT applied just for the heck of it, and got accepted just because she put engineering down as her major. her gpa wasnt that great either Best Response You've already chosen the best response. females have it too easy Best Response You've already chosen the best response. loll :) Best Response You've already chosen the best response. so are you thinking about studying engineering Best Response You've already chosen the best response. I applied to a lot of private schools and a couple of public schools but I saw that private schools gave out more money. Then tend to cover most of my cost. There are a lot of good private engineering schools like cal tech and harvey mudd. I got into Bucknell which is a good private school for undergraduate engineering. My friend got accepted to Lehigh. I think the best engineering school that I got accepted to was UCSD. Best Response You've already chosen the best response. Any tips for getting into MIT? Best Response You've already chosen the best response. My prep books were The Real ACT. Helped lots Best Response You've already chosen the best response. @Libniz im thinking of studying pharmacy. Best Response You've already chosen the best response. undergraduate for pharmacy is actually pretty easy Best Response You've already chosen the best response. in what way? academic wise or admission wise? Best Response You've already chosen the best response. I thought you needed to go to grad school to be a pharmacist Best Response You've already chosen the best response. academic wise; you would either be bio or chem major Best Response You've already chosen the best response. yes. I will be a chem major according to my counselor. and @ romero yes pharmacy requires grad school. its just like any medical profession you do a science major, then grad school Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5011dbf1e4b08ffc0ef420d0","timestamp":"2014-04-19T13:03:41Z","content_type":null,"content_length":"164685","record_id":"<urn:uuid:137546b2-1d18-4a19-bb27-c7d139293ed1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphing Questions February 28th 2006, 03:01 PM Graphing Questions Hello everyone. These questions are from a grade 12 Data Management class. Any help would be appreciated. Thanks! :) #1)Alice is constructing a histogram to display data with a range of 12. She decided on a bin width of 3. Explain why this is not a good choice. #2) A Histogram is constructed for a set of data. How would the shape of the histogram change if each of the data was multiplied by the same negative number? In particular, describe the effect of this transformation for U-Shaped, uniform, mound-shaped, right-scewed and left scewed histograms. February 28th 2006, 11:33 PM Originally Posted by crazy_gal108 Hello everyone. These questions are from a grade 12 Data Management class. Any help would be appreciated. Thanks! :) #1)Alice is constructing a histogram to display data with a range of 12. She decided on a bin width of 3. Explain why this is not a good choice. I expect that the answer you are expected to give to this question is that a bin width 0f 3 will give the data all falling within about 5 bins which will be a bit to coarse to show whatever it is you expect the histogram to show. In the real world the number of bins you use depends on a number of factors. The main one is "what do you want the histogram to show?" a second important factor is "how many data points do you have?". There are a number of formulas for the appropriate number of bins or bin width to use for a histogram. Two of the most common are: Scott's rule: $<br /> h\approx \frac{3.5 \times \sigma}{n^{1/3}}<br />$ where h is the recommended bin width, and $\sigma$ is an estimate of the standard deviation of the data or underlying distribution, and $n$ is the number of data points. Freedman & Diaconis's rule: $<br /> h\approx \frac{2 \times \mathrm{IQR}}{n^{1/3}}<br />$ where $\mathrm{IQR}$ is the interquartile range of the data. March 1st 2006, 07:40 AM Originally Posted by crazy_gal108 #2) A Histogram is constructed for a set of data. How would the shape of the histogram change if each of the data was multiplied by the same negative number? In particular, describe the effect of this transformation for U-Shaped, uniform, mound-shaped, right-scewed and left scewed histograms. I think you should try this one yourself. Try making up some data, draw a histogram, then multiply the data by say -3 and plot another histogram and see what it looks like.
{"url":"http://mathhelpforum.com/advanced-statistics/2051-graphing-questions-print.html","timestamp":"2014-04-18T06:49:24Z","content_type":null,"content_length":"7402","record_id":"<urn:uuid:e92885bf-77c2-4619-bd61-b986e466deb4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
This document assume that you have already installed Hama cluster and you have tested it using some examples. • Uses the PageRank algorithm described in the Google Pregel paper • Introduces partitioning and collective communication Run PageRank on Hama Cluster First of all, generate a symmetric adjacency matrix using the gen command. % bin/hama jar hama-examples-0.x.0.jar gen fastgen 100 10 randomgraph 2 This will create a graph with 100 nodes and 1K edges and store 2 partitions on HDFS as the sequence file. You can adjust partition and tasks numbers to fit your cluster. Then, run PageRank using: % bin/hama jar hama-examples-0.x.0.jar pagerank randomgraph pagerankresult 4
{"url":"http://wiki.apache.org/hama/PageRank","timestamp":"2014-04-23T17:59:44Z","content_type":null,"content_length":"11194","record_id":"<urn:uuid:509d9654-1129-4808-985d-12f593c1b345>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
clock problem July 14th 2013, 07:47 PM clock problem A watch, which gains 5 seconds every 3 minutes, is set right at 6a.m. What is the true time in the afternoon of the same day when the watch indicates a quarter past 3? My answer is 2:59:15 seconds but the book answer is 3:00:00 Kindly check if you get a chance, appreciate your help. July 15th 2013, 08:01 AM Re: clock problem Your book's answer is correct. Since you did not show how you got your answer it is impossible to say what you might have done wrong. July 15th 2013, 10:38 AM Re: clock problem Time elapsed 15:15:00 on clock minus start time of 6:00:00 = 9:15:00 or 9.25 hrs which equals 555minutes Clock runs fast 555/3 *5 = 925 seconds fast 15 min 25 sec Correct time 14:59:35 What did I do wrong? July 15th 2013, 02:12 PM Re: clock problem @HallsOf Ivy I did similiar to bjhopper the difference between 6 a.m to 3:15 pm is 9 hours and 15 minutes which is equal to 555 minutes. So we have 555/3 = 185 minutes, which is equal to 185 * 5= 925 seconds or 15 minutes and 25 seconds. So I get 2:59:35. July 15th 2013, 02:28 PM Re: clock problem I'll defer a direct answer until HallsofIvy gets back, but have you considered that the extra rate of the fast clock is not necessarily going to have a chance to act on the full 5 seconds in those last three minutes?
{"url":"http://mathhelpforum.com/algebra/220583-clock-problem-print.html","timestamp":"2014-04-23T21:48:54Z","content_type":null,"content_length":"5232","record_id":"<urn:uuid:8b3ce2d1-8b7b-4b02-aceb-407c240ec2cf>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
What finite group schemes can act freely on a rational function field in one variable? up vote 13 down vote favorite Suppose that $G$ is a finite group scheme over a field $k$ (we may want to assume that $k$ is perfect). How does one tell whether there exists a free action of $G$ on the function field $k(t)$ in one variable? By this I mean that there exists an action $G \times_{\mathop{\rm Spec}k}\mathop{\rm Spec}k(t) \to \mathop{\rm Spec}k(t)$, making $\mathop{\rm Spec}k(t)$ into a $G$-torsor over a scheme (necessarily of the form $\mathop{\rm Spec} k(s)$, where $s \in k(t)$, by Lüroth's theorem). The question is a very natural one when one studies essential dimension of group schemes: see http://www.math.ubc.ca/~reichst/lens-notes6-27-8.pdf for a nice survey of the topic of essential dimension, and http://front.math.ucdavis.edu/1001.3988 for the essential dimension of group schemes. When $G$ is smooth over $k$, then it is easy to see that the action extends to an action on $\ mathbb{P}^1$, so $G$ must be a subgroup of ${\rm PGL}_{2,k}$; but when $G$ is not smooth it is not all clear to us that this must happen. The sheaf of automorphisms of $k(t)$ over $k$ is enormous in positive characteristic, and we find it very hard to see what group schemes it contains. For example, how about twisted forms of the group scheme $\mu_p$, where $p$ is the characteristic of the field? I would conjecture that most of them can't act freely on $k(t)$, but we can't prove it. ag.algebraic-geometry group-schemes Do you mean free action or faithful action? And $\mu_p$ can act faithfully since ${\rm{PGL}}_ 2$ contains a nontrivial split torus (e.g., $\zeta.[x,y] = [\zeta x, y]$). Anyway, by viewing the projective line over the finite base $G$, by looking at geometric fibers over $G$ your setup amounts to an action of $G$ on a dense open of the projective line (as for ordinary finite groups). But then it perhaps get complicated, since automorphism functor of such opens is generally not representable. – BCnrd Apr 5 '10 at 16:32 Any elt. of Lie alg. with ss adjoint action is tangent to a torus, and for infinitesimal gps of height $\le 1$ a homomorphism to another $k$-gp of finite type is same as map on $p$-Lie algebras. 1 So for any form $\mu$ of $\mu_p$, nontrivial action on proj. line factors through embedding of $\mu$ into $k$-torus, which in turn is 1-dim'l. Hence, the possible $\mu$ are the $p$-torsion in $k$-tori of ${\rm{PGL}}_ 2$, which correspond to maximal $k$-tori in ${\rm{GL}}_ 2$, which correspond to deg-2 etale comm. algebras (i.e., split or separable quad. field). So no examples beyond what you know. – BCnrd Apr 5 '10 at 17:37 2 I am not convinced that $t^{1/p}$ must go to $ut^{1/p}$; you can also add nilpotents. This is what makes it complicated. For example, $\alpha_p$ can add freely by translations. In fact, I think that the automorphism group scheme of $k(t^{1/p})$ over $k(t)$ isn't even finite. – Angelo Apr 5 '10 at 20:19 You're right, and this error of mine is especially ironic since not more than a day ago I explained to a colleague why the automorphism scheme of $F(a^{1/p})$ over $F$ has positive dimension. 2 Passing to a geometric point over $F$, this becomes the automorphism scheme of $F[y]/(y^p)$ as an $F$-algebra, which is parameterized by the possible images of $y$, namely $c_ 0 + c_ 1 y + ...$ with $c_ 1$ a unit and $c^p _0 = 0$. So it has dimension $p-1$. So one should first work out the structure of this group, especially if the evident 1-dimensional torus is maximal in its "smooth" part. – BCnrd Apr 6 '10 at 5:19 2 Indeed, this would be logical. The structure of the group is complicated, but it has a large unipotent part, which should not interfere with a form of $\mu_p$. I am fairly sure that the 1-dimensional torus is maximal. Thanks, Brian. – Angelo Apr 6 '10 at 8:58 show 5 more comments 1 Answer active oldest votes I guess by looking at it algebraically one can at least rule out the forms of $\mu_p$. Let $H$ be the function algebra of the group scheme, $H^\ast$ its dual. $H^\ast$ is a cocommutative Hopf algebra. Algebraically, you ask whether $H^\ast$ can act on $k(t)$ so that $k(t)>k(s)$ is Hopf-Galois. If I remember correctly, there is a theorem (see chapter 8 of Montogmery' Hopf Algebra actions) that this is equivalent to the semidirect product $k(t)*H^\ast$ being simple. This will necessarily require $H^\ast$ to be semisimple. Now I read your $\mu_p$ as a form of up vote the cyclic group. Thus, your $H^\ast$ fails to be semisimple by Maschke's theorem. 2 down vote Sorry, if I misunderstood or misquoted something. 7 Are you the real Bugs Bunny? I think I underestimated you. – Kevin Buzzard Apr 6 '10 at 11:26 Dear Bugs Bunny, thank you very much for the suggestion. However, in order to consider $\mu_p$ as a form of the cyclic group, the characteristic should be different from $p$, in which case we know a proof; we are interested exactly in the characteric $p$ case. Any form of $\mu_p$ is semisimple (in any characteristic). This said, I will take a look at Montgomery's book, there might be something in it. – Angelo Apr 6 '10 at 12:52 1 After some further thought, I realize I probably misunderstood what you were saying. On the other hand, your argument should apply to all forms of $\mu_p$; while some of them, like $\ mu_p$ itself, do appear. Also, it would exclude $\alpha_p$, which also appears. – Angelo Apr 6 '10 at 14:30 No, Kevin, I am not real, I am p-adic :-)) Angelo, I think now that your $\mu_p$ is $p$-th roots of unity. Then, indeed, $H^*$ is semisimple and nothing I said is of any use. Is your $\ 5 alpha_p$ the Frobenius kernel of the additive group. Then it does, indeed, act via $d/dt$. I guess there is no theorem concluding that $H^*$ is semisimple or it may require a separable extension... Anyway, I will look into Childs', Taming wild extensions and see whether I can say anything intelligent. – Bugs Bunny Apr 7 '10 at 9:20 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry group-schemes or ask your own question.
{"url":"https://mathoverflow.net/questions/20384/what-finite-group-schemes-can-act-freely-on-a-rational-function-field-in-one-var","timestamp":"2014-04-17T21:54:35Z","content_type":null,"content_length":"65371","record_id":"<urn:uuid:a9ba2a5f-8325-4ed1-9bf5-e1b245994495>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
he Haskell News aggregator Notice that, on the program below, double_minus_one_is_even will never typecheck, independent of its input. How can I encode this so Haskell will report the problem before trying to apply that function to a value? data Nat = Z | S Nat deriving (Show,Eq) is_nat :: Nat -> Nat is_nat Z = Z is_nat (S a) = S (is_nat a) is_even :: Nat -> Nat is_even Z = Z is_even (S (S a)) = (S (S (is_even a))) double :: Nat -> Nat double Z = Z double (S a) = (S (S (double a))) double_minus_one :: Nat -> Nat double_minus_one (S a) = S (double a) double_is_even :: Nat -> Nat double_is_even Z = Z double_is_even a = is_even (double (is_nat a)) double_minus_one_is_even :: Nat -> Nat double_minus_one_is_even Z = Z double_minus_one_is_even a = is_even (double_minus_one (is_nat a)) zero = is_nat Z one = is_nat (S Z) two = is_nat (S (S Z)) three = is_nat (S (S (S Z))) four = is_nat (S (S (S (S Z)))) main :: IO () main = print (double_minus_one_is_even two) {- Fails here, should fail before! -} submitted by SrPeixinho [link] [32 comments] 'O' is for Ontology. What is an ontology? A knowledge-base? Sure, if that's simpler to grasp, but only insofar as 'knowledge-base' doesn't mean 'collection of facts' or 'data set.' An ontology is more than that. But what, precisely, is an ontology? Well, actually, there is a precise meaning to 'ontology.' And 'meaning,' itself, is central to ontology. Because what does these data mean is what an ontology is about. It's not a listing of facts, but it's also the relationship of the facts in the ontology that makes it what it is. The data, the facts, of an ontology have meaning, not only intrinsically, but also explicitly, or: the meaning is useable, or can be processed, itself, as information, and used in the handling of the underlying information. An ontology? Absolutely! You hand that to your husband, and he knows exactly what it is and he knows exactly how to use it. He even, helpfully, penciled in the missing item (ho-hos, just as a 'fer instance') onto your shopping list for you. Now, ontology, per se? Not so much. But if you explicitly titled it ‚ÄúShopping List,‚Äù now you're talking! Format it as XML or JSON or OWL and then your computer will do your shopping for you, just as well as your husband would. Even better, as it won't add those pesky ho-hos your husband always 'helpfully' adds to your list for you. Your computer does need arms and legs and artificial intelligence, but I'm sure Cyberdyne Systems will be happy to help you there ... and also hand your terminator that was a computer a fully automatic plasma rifle. Whoopsie! Better give your husband the list, and live with the ho-hos ... Ho-hos are, relatively speaking, better than global thermonuclear war and the extinction of all mankind. But I digress. As always. Is there already a class that looks like this, somewhere? class P f where punit :: f () ppair :: f a -> f b -> f (a,b) phomotopy :: (a -> b) -> (b -> a) -> f a -> f b Note that f doesn't need to be a Functor. Update: added phomotopy submitted by AshleyYakeley [link] [25 comments] The following blog post originally appeared as a guest post on the Skills Matter blog. Well-Typed are regularly teaching both introductory or advanced Haskell courses at Skills Matter, and we will also be running a special course on Haskell’s type system. Statically typed languages are often seen as a relic of the past – old and clunky. Looking at languages such as C and Java, we’re used to writing down a lot of information in a program that just declares certain variables to be of certain types. And what do we get in return? Not all that much. Yes, granted, some errors are caught at compile time. But the price is high: we’ve cluttered up the code with noisy declarations. Often, code has to be duplicated or written in a more complicated way, just to satisfy the type checker. And then, we still have a significant risk of run-time type errors, because type casting is common-place and can fail at unexpected moments. So it isn’t a big surprise that dynamically typed languages are now very fashionable. They promise to achieve much more in less time, simply by getting rid of static type checking. However, I want to argue that we shouldn’t be too keen on giving up the advantages of static types, and instead start using programming languages that get static typing right. Many functional languages such as Scala, F#, OCaml and in particular Haskell are examples of programming languages with strong static type systems that try not to get in the way, but instead guide and help the In the rest of this post, I want to look at a few of the reasons why Haskell’s type system is so great. Note that some of the features I’m going to discuss are exclusive to Haskell, but most are not. I’m mainly using Haskell as a vehicle to advertise the virtues of good static type systems. 1. Type inference Type inference makes the compiler apply common sense to your programs. You no longer have to declare the types of your variables, the compiler looks at how you use them and tries to determine what type they have. If any of the uses are inconsistent, then a type error is reported. This removes a lot of noise from your programs, and lets you focus on what’s important. Of course, you are still allowed to provide explicit type signatures, and encouraged to do so in places where it makes sense, for example, when specifying the interface of your code. 2. Code reuse Nothing is more annoying than having to duplicate code. In the ancient days of statically typed programming, you had to write the same function several times if you wanted it to work for several types. These days, most languages have “generics” that allow you to abstract over type parameters. In Haskell, you just write a piece of code that works for several types, and type inference will tell you that it does, by inferring a type that is “polymorphic”. For example, write code that reverses all the elements of a data structure, and type inference will tell you that your code is independent of the type of elements of the data structure, so it’ll just work regardless of what element type you use. If you write code that sorts a data structure, type inference will figure out that all you require to know about the elements is that they admit an ordering. 3. No run-time type information by default Haskell erases all type information after type checking. You may think that this is mainly a performance issue, but it’s much more than that. The absence of run-time type information means that code that’s polymorphic (i.e., type-agnostic, see above) cannot access certain values. This can be a powerful safety net. For example, just the type signature of a function can tell you that the function could reorder, delete or duplicate elements in a data structure, but not otherwise touch them, modify them or operate on them in any other way. Whereas in the beginning of this post I complained that bad static type systems don’t allow you to do what you want because they’re not powerful enough, here we can deliberately introduce restrictions to save us (as well as colleagues) from accidental mistakes. So polymorphism turns out to be much more than just a way to reduce code duplication. By the way, Haskell gives you various degrees of selective run-time typing. If you really need it in places, you can explicitly attach run-time type information to values and then make type-based decisions. But you say where and when, making a conscious choice that you gain flexibility at the cost of safety. 4. Introducing new datatypes made easy In Haskell, it’s a one-liner to define a new datatype with a new name that has the same run-time representation as an existing type, yet is treated as distinct by the type system. (This may sound trivial, but surprisingly many statically typed languages get it wrong.) So for example it’s easy to define lots of different types that are all integers internally: counters, years, quantities, … In Haskell, this simple feature is often used to define safe boundaries: a specific type for URLs, a specific type for SQL queries, a specific type for HTML documents, and so on. Each of these types then comes with specific functions to operate on them. All such operations guarantee that whenever you have a value of this type, it’s well-formed, and whenever you render a value of this type, it’s syntactically correct and properly escaped. 5. Explicit effects In virtually all programming languages, a function that performs some calculations on a few numbers and another function that performs the same calculations, but additionally sends a million spam emails to addresses all over the world, have exactly the same type, and therefore the same interface. Not so in Haskell. If a function writes to the screen, reads from the disk, sends messages over the network, accesses the system time, or makes use of any other so-called side effect, this is visible in its type. This has two advantages: first, it makes it much easier to rely on other people’s code. If you look at the interface and a function is effect-free, then you for example automatically know that it is also thread-safe. Second, the language facilitates a design where side effects are isolated into relatively small parts of the code base. This may seem difficult to achieve for highly stateful systems, but surprisingly, it usually is not: even interactive systems can usually be described as pure functions reacting to a series of requests with appropriate responses, and a separate driver that does the actual communication. Such separation makes it not only easier to test the program, but also facilitates the evolution of the program such, for example, to adapt it to run in a different environment. Haskell’s type system therefore encourages good design. 6. Types as a guide in program development If you only ever see types as a source of errors, and therefore as enemies on your path of getting your program accepted, you’re doing them injustice. Types as provided by Haskell are an element of program design. If you give your program precise types and follow systematic design principles, your program almost writes itself. Programming with a strong type system is comparable to playing a puzzle game, where the type system removes many of the wrong choices and helpfully guides you to take the right path. This style of programming is supported by a new language extension called “Typed Holes” where you leave parts of your program unspecified during development, and obtain feedback from the development environment about what type has to go into the hole, and what program fragments you have available locally to construct a value of the desired type. Playing this kind of puzzle game is actually quite fun! 7. Programming on the type level Haskell’s type system provides many advanced features that you don’t usually have to know about, but that can help you if you want to ensure that some complicated invariants hold in your program. Scarily named concepts such as “higher-ranked polymorphism”, “generalized algebraic datatypes” and “type families” essentially provide you with a way to write programs that compute with types. The possibilities are nearly endless. From playful things such as writing a C-printf-style function where the first argument determines the number of arguments that are expected afterwards as well as their types, you can go on to code that provides useful guarantees such as that mutable references that are available within one thread of control are guaranteed not to be accessed in a completely different context, arrays that can adapt to different internal representations depending on what type of values they contain, working with lists that are guaranteed to be of a specific length, or with trees that are guaranteed to be balanced, or with heterogeneous lists (where each element can be of a different type) in a type-safe way. The goal is always to make illegal inputs impossible to construct. If they’re impossible to construct by the type system, you can isolate sanity tests at the boundary of your code, rather than having to do them over and over again. The good thing is that these features are mostly optional, and often hardly affect the interface of libraries. So as a user, you can benefit from libraries employing such features and having extra safety guarantees internally. As a library writer, you can choose whether you’re happy with the normal level of Haskell type safety (which is already rather a lot), or if you want to spend some extra effort and get even more. If my overview has tempted you and you now want to learn more about Haskell, you’re welcome follow one of my introductory or advanced Haskell courses that I (together with my colleagues at Well-Typed ) regularly teach at Skills Matter. These courses do not just focus on the type system of Haskell (although that’s a significant part). They introduce the entire language in a hands-on way with lots of examples and exercises, as well as providing guidelines on how to write idiomatic Haskell and how to follow good development practices. If you already know some Haskell and are particularly interested in the advanced type system features mentioned in point 7, we also offer a new one-day course on Haskell’s type system that specifically focuses on how far you can push it. Having followed the Python community for quite awhile, I've appreciated the Python PEP's which give transparency to upcoming changes and ongoing discussions. Is there anything similar for Haskell? I've found a few "clues" to start with thus far: 1) The Haskell wiki's "Proposals" page: http://www.haskell.org/haskellwiki/Category:Proposals This page looks to me like a random collection of proposals, not a comprehensive collection. Some of the proposals have not been updated for many years. 2) Wikipedia's Haskell page: http://en.wikipedia.org/wiki/Haskell_%28programming_language%29#Haskell_Prime According to Wikipedia: "In early 2006, the process of defining a successor to the Haskell 98 standard, informally named Haskell Prime, began.[25] This is an ongoing incremental process to revise the language definition, producing a new revision once per year. The first revision, named Haskell 2010, was announced in November 2009[1] and published in July 2010." That statement appears self contradictory. If a new revision is produced every year, why hasn't there been a new revision since 2010? 3) The current committee members: https://ghc.haskell.org/trac/haskell-prime/wiki/Committee I saw a posting on Reddit announcing the committee had been formed. It sounded nice, but I haven't heard any more about it? What does the committee do? Does the committee interact with the community? How can I follow what they are up to? What's the best way to follow upcoming changes? How can I stay connected with the community who makes decisions regarding the language standard? submitted by Buttons840 [link] [6 comments] 'N' is for got Nuttin'! I was thinking of covering the universal combinator, the Nightingale, with this post. It's a combinator that gives you any other combinator in the combinatory logic: N = λx -> (xS)K N = λx -> xKSK or an infinite many other combinations of combinators that when combined repeatedly reduce either to the S or the K combinators, because the SK-basis has been shown to be Turing-complete. But then I thought: eh! Maybe you've read enough about combinators this month, or to date, at any rate, so there is it: the N-combinator, the Nightingale, when combined with only itself (in certain desired ways), can give you any computable form. Yay! So, let's talk about nothing for a sec here. ... hm, hm, hm, ... Nice talk? The problem with nothing is that there's nothing to talk about ... about it. But people tend to talk quite a bit, and if you reduce what they've just been saying to you and to everyone else who will listen (and quite a few of them won't), then you find that they haven't been really talking about anything at all, and, insult to injury, even would find their own words boring, obnoxious, offensive, thoughtless, careless, if you played them back to them and forced them to listen to every word that came out of their mouths. Ever listen to yourself speaking? Educational experience. Sometimes, I stop and listen to myself. Most times, I find, it would be better if I shut myself up sooner rather than later. geophf, shut up! I tell myself. Sometimes I listen when I tell myself that. I feels better when I shut myself up when I'm speaking I have nothing to say You know: THINK before you speak. If it's not every one of those things, then why did I just open my mouth. Here's another gauge. Bad people talk about people.Good people talk about things.Great people talk about ideas and ideals. What are you talking about? Okay, but that's really 'something' that I've been talking about, and that is what people talk about, which is usually 'nothing good nor kind,' but it's still something, even if that something is vicious or inane or banal. So let's talk about nothing. The problem of nothing is that there's none of it. And here's why. Get yourself to a pure vacuum. Is it inside a glass tube? No, because, actually, you've sucked most of the air out of it, but guess what? There's still quite a bit of light in there. That's not nothing. That's quite a bit of energy. Drat! Bummer! So, turning off the night doesn't do anything for you. Visible light is gone but then you have infrared and ooh! vampires! — so you've got to go somewhere where's there's no light, no light, on either end of the visible spectrum. Quite a lot of dust out there, star dust, and solar winds. Hm, anywhere where we can go where there's nothing? How about next to a black hole? Now, that's interesting. Now, if we can find a place somewhere around the black hole where it's not sucking up mass nor jetting out X-rays (and, granted, that would be hard to find, ... how about a black hole away from any nearby galaxy in spaaaace! Aw, the poor lonely black hole!) (Nobody cares about black holes being all alone, they're just like: oh! black hole! black holes are evil! I hate you, black hole! But does anybody ever consider the black hole's feelings when they're making these pronouncements? They just say these things to the black hole's face, and then wonder why black holes are always so mean to them!) There's still a problem. It's called the Hawking radiation You see, even in a 'pure vacuum' ... it isn't. Nature abhors a vacuum, so what happens is that there's so much energy around a black hole that it quantum particles (the Higgs field ?) sucking most of them back into itself, but some little quarks escape, at around the speed of light (quark: 'Imma getting me away from that there black hole! NAOW!') and so Hawkings predicted, correctly, that even black holes emit radiation. Bummer, dude! There doesn't seem to be anywhere where you can find yourself some nothing to be all sad and morose and all alone by yourself! You just have to buck up and accept the John Donne commandment: No quark is an island. Or something like that. BUT THEN! There's mathematics. You can invent a mathematical space where there's nothing in it, just it, and you (which makes it not nothing, but you get some quiet-time, finally, some alone time to catch up on your reading without having to worry about if the dishes were done). What happens there? Besides nothing? What does it look like? Besides really, really empty? Here's an interesting factoid (factoid, n.: geophf's interesting declarations that may or may not be somewhat related to some reality), you're not the first visitor there. There was this tall, thin dude in a white lab coat, by the name of Mach (no, not Mac the Knife, okay? Different Mac), and yes, the measure of velocity greater than the speed of sound was named after him (you know: mach 1, mach 2, mach 5+ for SR-71s), but besides that (and that's more than most people have ever done with their lives, but did he rest on his laurels? No. Besides, ... he still needed to earn his bread, so he could bake it, and then enjoy it with a latte) (lattes are important to physicists and mathematicians, don't you know.) (But I digress.) (As usual.) Besides that, he did this thought-experiment. He created this mostly empty space, just him and the star Alpha Centauri in the distance. That was it in the entire Universe. And he spun himself, arms outstretched. How did he know he was spinning? Easy, Alpha Centauri came into view, crossing it, then disappeared behind him as he spun, and he saw the star in an arc. Next, he removed Alpha Centauri. Now. Was he spinning? He had no way to tell. You can tell you're spinning, because you get dizzy and then sick, but why? Because of gravity. Earth's gravity (mostly). But now there's not Earth. There's not even Another Earth . So is not now giving you a frame of reference, and you could tell you were spinning because there Alpha Centuri was in the distance, giving you your (only) point of reference, but now it's gone, too. You could be moving a million miles per hour, you could be spinning and doing backward summersaults (or winter-saults for all you know: no seasons; no nothing. No nothing but nothing in your created Universe), and you'd have no frame of reference for you to make that determination. Are you spinning in Mach's empty Universe? The answer to that is: The answer to that question is that that's not a question to be asking. The answer to that question is that it has no sensible answer, whether you're spinning or not, you have no way to tell, or, no way to measure it. What's your coordinate system? How do you measure your speed or your spin. You don't, because if the XY-axis extends from in front of you, then it's always oriented to your front, no matter which way you face. Anyway, where is time in all this? There's nothing. Space is bounded by the things in it, otherwise there's not space. 'Space' is emptiness, nothing there, but the space is defined by the no-thing between things. There are no things in your nothing-Universe. No space, no space-time. No space-time, no time. Spin needs time to measure. If you had Alpha Centuri, then you would have light from four years ago shining on you, but now you have nothing, no light, no space, no time. No spin. This post was about nothing, nothing at all. I hope you see when you get to that point where there is indeed nothing, no dust, no light, no nothing, you really, do have nothing, not even space (the space between things: there are no things), not even time, not even spin. Not even weight; weight needs gravity. You have no gravity in your nothing-Universe. Enjoy the slim new-you, but the thing is, it'll be a lonely victory, so no bragging rights: nobody to brag to. Okay, I'm going to go enjoy a latte with my busy, noisy family, and enjoy my not-nothing, not-solitude. Nothing's not all cracked up to be what it's supposed to be.
{"url":"http://sequence.complete.org/aggregator","timestamp":"2014-04-18T18:13:04Z","content_type":null,"content_length":"51906","record_id":"<urn:uuid:8bf3275e-6f2a-4601-8be1-ebe6b61a8dd9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector problems July 19th 2010, 10:35 AM #1 Jul 2010 I just started on vector and there are some confusing problems i want to ask: Find the magnitude and direction angle of the vector: 1) $3i-4j$<- on this one i found the magnitude but not the direction angle, though i'm positive that i did just like i've done with some previous problems. I got 53.13, but the answer in the book is 306.87 2) $-3i-5j$<-- i also got the magnitude of this one, but not the direction angle. again 3) $7(cos135i+sin135j)$ Find the vector v with the given magnitude and the same direction as u $|v| = 1, u =<3,-3>$ Please help! Thanks Last edited by nhunhu9; July 19th 2010 at 10:54 AM. I just started on vector and there are some confusing problems i want to ask: Find the magnitude and direction angle of the vector: 1) $3i-4j$<- on this one i found the magnitude but not the direction angle, though i'm positive that i did just like i've done with some previous problems. I got 53.13, but the answer in the book is 306.87 +i , -j ... quadrant IV angle 2) $-3i-5j$<-- i also got the magnitude of this one, but not the direction angle. again -i , -j ... quadrant III angle 3) $7(cos135i+sin135j)$ what is your problem with this vector? Find the vector v with the given magnitude and the same direction as u $|v| = 1, u =<3,-3>$ v = < 1/sqrt(2) , -1/sqrt(2) > Number 3 is also find the direction angle but since the fisrt and second problems have to do with quadrant i already firgured it out. Thank you July 19th 2010, 01:56 PM #2 July 19th 2010, 04:27 PM #3 Jul 2010
{"url":"http://mathhelpforum.com/pre-calculus/151367-vector-problems.html","timestamp":"2014-04-21T16:55:52Z","content_type":null,"content_length":"38331","record_id":"<urn:uuid:f94f712a-a340-4d88-aae9-80c6c878b713>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Audubon, NJ Statistics Tutor Find an Audubon, NJ Statistics Tutor ...After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university. 13 Subjects: including statistics, calculus, geometry, algebra 1 ...My tutoring is guaranteed: During our first session, I will assess your situation and determine a grade that I think you can get with regular tutoring. If you don't get that grade, I will refund your money, minus any commission I paid to this website. Please note that I only tutor college stude... 11 Subjects: including statistics, calculus, ACT Math, precalculus ...Mathematical logic is a subfield of mathematics. This includes, but is not limited to, set theory, proofs (such as in geometry) and model theory. Much of the SAT test includes testing the students' reasoning and logic skills. 22 Subjects: including statistics, geometry, GRE, ASVAB ...I was trained in adolescent and cognitive psychology and have a very strong practical Mathematics background. I have served as an educator in various roles, both part time and full time, spanning across middle school and elementary school classroom settings. I have focused on working with class... 9 Subjects: including statistics, geometry, algebra 1, algebra 2 ...I started a business with a friend of mine, which I ran successfully for about 12 years. I then changed my career and became a teacher. I currently teach high school level math, chemistry and physics at a private school. 15 Subjects: including statistics, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Audubon_NJ_Statistics_tutors.php","timestamp":"2014-04-17T07:13:03Z","content_type":null,"content_length":"24194","record_id":"<urn:uuid:e07863a3-5be3-481c-bfb9-0a43eab2161a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00499-ip-10-147-4-33.ec2.internal.warc.gz"}