Re: st: RE: Mean test in a Likert Scale

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: Yuval Arbel <yuval.arbel@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: Mean test in a Likert Scale
Date: Mon, 3 Sep 2012 07:54:09 -0700

Nick and Maarten,

Note that Kmenta's message is to prefer models with fewer restrictions. Moreover, are you suggesting we can deal in the same manner with quantitative values and ordinal variables? If our independent variables are what subjects marked on a questionnaire on a scale of 1 to 5, should the statistical treatment within a regression-analysis framework be identical to that of an independent variable measured in US dollars?

On Mon, Sep 3, 2012 at 2:27 AM, Maarten Buis <maartenlbuis@gmail.com> wrote:
> On Mon, Sep 3, 2012 at 11:01 AM, Nick Cox wrote:
>> Econometricians' practice seems almost invariably looser than that --
>> and a good thing too -- although the idea that an analysis is correct
>> or incorrect, and no shades of grey, still seems pervasive.
> What amuses me are the occasional references to "correct models", which
> is just a contradiction in terms. A model is by definition a
> simplification of reality, and simplifying reality is central
> to what a model is. If reality were so simple that we could understand it
> without simplification, we would not need a model. However,
> simplification is just another word for "wrong in some useful way". So
> a correct model either does not simplify and is thus not a model, or
> it is not as correct as the author thinks it is.
> -- Maarten
> ---------------------------------
> Maarten L. Buis
> WZB
> Reichpietschufer 50
> 10785 Berlin
> Germany
> http://www.maartenbuis.nl
> ---------------------------------
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/

Dr. Yuval Arbel
School of Business
Carmel Academic Center
4 Shaar Palmer Street, Haifa 33031, Israel
e-mail1: yuval.arbel@carmel.ac.il
e-mail2: yuval.arbel@gmail.com
Choked Compressible Flow of Gas from Tank through Pipe

Choked Flow of Gas from Tank through Pipe: Adiabatic (Fanno) Flow. Compute the flow rate for compressible gas flow from a pressurized tank discharging through a pipe with temperature changes, with sonic (choked) flow at the pipe exit. Gas flows from the tank, into the pipe, and discharges.

Units: atm=atmosphere, cm=centimeter, cP=centipoise, ft=foot, g=gram, hr=hour, k=kilo (1000), kPa=kilopascal, kg=kilogram, km=kilometer, lb=pound, m=meter, min=minute, mm=millimeter, M=Mega (million, 10^6) or Thousand (10^3) depending on context, MM=Million, MPa=megapascal, Mscfh=thousand std cubic feet per hour, MMscfd=million std cubic feet per day, mol=mole, N=Newton, N/m^2=Newton per square meter (same as Pascal), Normal=std conditions, Pa=Pascal, s=second, psf=pound per square foot (lb/ft^2), psi=pound per square inch, psia=psi (absolute), psig=psi (gage), scfd=standard cubic feet per day, scfh=standard cubic feet per hour, scfm=standard cubic feet per minute.

Standard conditions are 15 °C (i.e. 288.15 K, 59 °F, 518.67 °R) and 1 atm (i.e. 101,325 N/m^2, 14.696 psia). If gage pressure units are selected, the calculation assumes atmospheric pressure is 1 atm; the program uses 1 atm to convert between gage and absolute pressures.

Gas flowing steadily from a tank into a pipe and discharging to the atmosphere (or another tank) is modeled using Fanno flow. Fanno flow assumes that the pipe is adiabatic: there is no heat transfer into or out of the pipe. This is physically accomplished by insulating the pipe.
Even if the pipe is not insulated, the adiabatic assumption is probably more realistic than an isothermal assumption for short lengths of pipe. In longer pipelines that are isothermal (constant temperature) and subsonic, the Weymouth calculation may be more suitable for computing flow rates and pressure drops.

The choked flow calculation computes the mass flow rate through a pipe based on tank pressure and temperature, pipe length and diameter, minor losses, discharge pressure, and gas properties. Temperatures, pressures, densities, velocities, and Mach numbers are computed at all transition points (in the tank, at the pipe entrance, in the pipe at the exit, and in the surroundings at the discharge). As the gas flows through the pipeline, the pressure drops; hence density drops and velocity increases. If choking occurs, it will only occur at the pipe exit (Munson et al., 1998) for flow through a constant-diameter pipe.

For a gas flowing steadily from a tank to a pipe under Fanno flow conditions (adiabatic), the procedure follows that of Fanno flow in Gerhart et al. (1992) and Munson et al. (1998). An example in Perry (1984) uses figures which represent Fanno flow for choked flow. The governing equations (shown as images on the original page) are solved simultaneously to compute mass flow rate, temperatures, pressures, velocities, and Mach numbers, starting from gas specific gravity and pipe cross-sectional area, with gas densities obtained from the ideal gas law.

The calculation checks whether choked flow occurs. If flow is choked, then choking occurs at location 2. Choked flow occurs if P[3] < P[2]^*, where P[2]^* is the static pressure at location 2 if flow is choked. If choked flow occurs, the * is dropped from the superscripts at location 2 for simplicity. ΣK[m] represents losses due to elbows and the pipe entrance. M[1] is computed from the Fanno flow relation (shown as an image on the original page). Conditions are stagnant in the tank.
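Two elementary pieces of the calculation can be sketched directly: the ideal-gas densities mentioned above, and the Moody friction factor used further along in the procedure (the Colebrook equation is implicit in f and must be solved iteratively). A minimal Python sketch; the function names are mine, not the calculator's, and the laminar cutoff follows the Re < 4000 range quoted on this page:

```python
import math

def gas_density(p_abs, t_kelvin, mol_weight, r_u=8.3144126):
    """Ideal gas law: rho = P * M_w / (R_u * T), in kg/m^3 (SI units)."""
    return p_abs * mol_weight / (r_u * t_kelvin)

def moody_friction_factor(reynolds, rel_roughness, tol=1e-10):
    """f = 64/Re for laminar flow; otherwise solve the implicit Colebrook
    equation 1/sqrt(f) = -2 log10((eps/D)/3.7 + 2.51/(Re sqrt(f)))
    by fixed-point iteration on x = 1/sqrt(f)."""
    if reynolds < 4000:          # laminar range used by this calculator
        return 64.0 / reynolds
    x = 0.02 ** -0.5             # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / (x * x)

# air (M_w ~ 0.028964 kg/mol) at standard conditions, 15 °C and 1 atm
rho = gas_density(101325.0, 288.15, 0.028964)
f = moody_friction_factor(1e5, 0.001)
print(round(rho, 3), round(f, 4))
```

For air at standard conditions this reproduces the familiar 1.225 kg/m^3; the fixed-point iteration on 1/sqrt(f) converges in a handful of steps for typical Re and ε/D.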
Assuming the tank exit behaves isentropically: P[t] = P[o,1], T[t] = T[o,1], ρ[t] = ρ[o,1]. Since choked flow occurs at location 2, M[2] = 1.0; the corresponding stagnation relations and velocities follow (equations shown as images on the original page). Since pipe diameter and mass flow rate are constant along the pipe, the Reynolds number is the same at locations 1 and 2.

The Moody friction factor is computed from the following equations. If laminar flow (Re < 4000 and any ε/D): f = 64/Re. If turbulent flow (4000 < Re < 10^8 and ε/D < 0.05): the Colebrook equation, 1/f^0.5 = -2 log[10]( (ε/D)/3.7 + 2.51/(Re f^0.5) ) (Munson et al., 1998, p. 494). If fully turbulent flow (Re > 10^8 and 0 < ε/D < 0.05): 1/f^0.5 = -2 log[10]( (ε/D)/3.7 ) (Streeter et al., 1998, p. 289).

Mass flow rate, density at standard conditions, and flow rate at standard conditions follow (equations shown as images on the original page). The units shown for the variables are SI (International System of Units); however, the equations above are valid for any consistent set of units. Our calculation allows a variety of units; all unit conversions are accomplished internally.

Variables:
A = Pipeline cross-sectional area, m^2.
D = Pipe inside diameter, m.
f = Moody friction factor. (Note that the Moody friction factor is 4 times the Fanning friction factor; the Fanning friction factor is often used by chemical engineers.)
k = Gas specific heat ratio: specific heat at constant pressure divided by specific heat at constant volume, C[p]/C[v]. Default values at 15 °C or 20 °C from Munson et al. (1998). k actually varies with temperature and can range, using methane as an example, from 1.32 at 50 °F to 1.28 at 250 °F (GPSA, 1998, p. 13-6).
K[m] = Minor loss coefficient for pipe entrance, bends, etc. Since flow is choked (if the calculation proceeds), do not include an exit loss coefficient.
ln = Natural logarithm (base e, where e = 2.71828...).
log = Common logarithm (base 10).
L = Pipe length, m.
M = Mach number.
M[w] = Molecular weight of gas, kg/mol.
P = Absolute pressure, N/m^2 absolute.
Q = Flow rate at standard conditions, Normal m^3/s.
Re = Reynolds number.
R[u] = Universal gas constant, 8.3144126 N-m/mol-K (CRC, 1983, p. F-208).
S = Specific gravity of gas, relative to air (S[air] = 1).
T = Absolute temperature, Kelvin.
V = Velocity, m/s.
W = Mass flow rate, kg/s.
ε = Pipe roughness, m. Default values from Munson et al. (1998).
μ = Gas dynamic viscosity, kg/m-s. The program assumes this is constant even though temperature (and thus viscosity) varies along the pipe.
π = 3.14159....
ρ = Gas density, kg/m^3.
Σ = Summation.

Subscripts: t = Tank. 1 = Pipe entrance. 2 = Pipe exit. 3 = Surroundings. o = Stagnation property. s = std = Standard (or "Normal") conditions.

The word "standard" is used with English units; the term "Normal" is used with SI units, as in Nm^3/s, which means "Normal m^3/s". Standard and Normal conditions for our choked flow calculation are 15 °C (i.e. 288.15 K, 59 °F, 518.67 °R) and 1 atm (i.e. 101,325 N/m^2, 14.696 psia) from Perry (1984, p. 3-167). Some industries use a different temperature and pressure for standard (or Normal) conditions. To convert the mass flow rate computed by our choked flow calculation to volumetric flow at a different T[s] and P[s], use our gas flow conversions page. If gage pressure units are selected, the calculation assumes atmospheric pressure is P[s]; the program uses P[s] to convert between gage and absolute pressures.

Minor Loss Coefficients (K[m]), from Munson et al. (1998):

Pipe Entrance (Tank to Pipe): Square Connection 0.5; Rounded Connection 0.2.
Elbows: Regular 90°, flanged 0.3; Regular 90°, threaded 1.5; Long radius 90°, flanged 0.2; Long radius 90°, threaded 0.7; Long radius 45°, threaded 0.2; Regular 45°, threaded 0.4.
180° return bends: Flanged 0.2; Threaded 1.5.

Error messages given by the calculation. Input checks: "Need S > 1e-8", "Need Viscosity > 1e-20 N-s/m^2", "Need k > 1.0000001", "Need D > 1e-9 m", "Need Pipe Roughness > 0", "Need T[t] > 1e-8 Kelvin", "Need P[t] > 1e-8 N/m^2 absolute", "Need P[3] > 1e-8 N/m^2 absolute", "Need P[t] > P[3]", "Need K[m] + f L / D >= 1e-8".
These appear when specific gravity, viscosity, specific heat ratio, pipe diameter, surface roughness, tank temperature, tank pressure, or discharge pressure is not an acceptable value. Pipe length can be 0.0 (there is no "Need L > 0" check), since the software can model an orifice without actually having a pipe.

Run-time messages: "Flow is choked". This message will appear during normal running of the program if flow is choked. "Flow is subsonic". If conditions are such (e.g., low P[t], high P[3], high L, high K[m], low D) that flow is not choked, the calculation will stop. "Mach at 1 not found". The program is unable to compute the Mach number at the pipe inlet. "Re or ε/D out of range". The Reynolds number or roughness-to-diameter ratio is out of range. "Moody f not found". The program is unable to determine the friction factor; the Reynolds number or roughness-to-diameter ratio may be out of range.

References:
Chemical Rubber Company (CRC). 1983. CRC Handbook of Chemistry and Physics. Weast, Robert C., editor. 63rd ed. CRC Press, Inc., Boca Raton, Florida.
Gerhart, P. M., R. J. Gross, and J. I. Hochstein. 1992. Fundamentals of Fluid Mechanics. 2nd ed. Addison-Wesley Pub. Co.
Gas Processors Suppliers Association (GPSA). 1998. Engineering Data Book (fps [foot-pound-second] version). 11th ed. Gas Processors Association.
Perry, R. H. and D. W. Green, editors. 1984. Perry's Chemical Engineers' Handbook. 6th ed. McGraw-Hill, Inc.
Munson, B. R., D. F. Young, and T. H. Okiishi. 1998. Fundamentals of Fluid Mechanics. 3rd ed. John Wiley and Sons, Inc.
Streeter, V. L., E. B. Wylie, and K. W. Bedford. 1998. Fluid Mechanics. 9th ed. McGraw-Hill.

© 2010 LMNO Engineering, Research, and Software, Ltd. All rights reserved.
LMNO Engineering, Research, and Software, Ltd.
7860 Angel Ridge Rd., Athens, Ohio 45701 USA
+1 (740) 592-1890  LMNO@LMNOeng.com  http://www.LMNOeng.com
PHI 103: The truth table for a valid deductive argument will show

Price: $5.33. Solutions will be available for download immediately upon purchase; additionally, you will receive the solutions via email.

1. The truth table for a valid deductive argument will show
   - wherever the premises are true, the conclusion is true.
   - that the premises are false.
   - that some premises are true, some premises false.
   - wherever the premises are true, the conclusion is false.

2. A conditional sentence with a false antecedent is always
   - Cannot be determined.
   - not a sentence.

3. The sentence "P Q" is read as
   - P or Q
   - P and Q
   - If P then Q
   - Q if and only if P

4. In the conditional "P → Q," "P" is a
   - sufficient condition for Q.
   - sufficient condition for P.
   - necessary condition for P.
   - necessary condition for Q.

5. What is the truth value of the sentence "P & ~P"?
   - Cannot be determined
   - Not a sentence

6. "P v Q" is best interpreted as
   - P or Q but not both P and Q
   - P or Q or both P and Q
   - Not both P or Q
   - P if and only if Q

7. Truth tables can determine which of the following?
   - If an argument is valid
   - If an argument is sound
   - If a sentence is valid
   - All of the above

8. One of the disadvantages of using truth tables is
   - it is difficult to keep the lines straight.
   - T's are easy to confuse with F's.
   - they grow exponentially and become too large for complex arguments.
   - they cannot distinguish strong inductive arguments from weak inductive arguments.

9. "~P v Q" is best read as
   - Not P and Q
   - It is not the case that P and it is not the case that Q
   - It is not the case that P or Q
   - It is not the case that P and Q

10. In the conditional "P → Q," "Q" is a
   - sufficient condition for Q.
   - sufficient condition for P.
   - necessary condition for P.
   - necessary condition for Q.

Last Updated: Sunday, 20 April 2014 21:45
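The facts these questions turn on can be checked mechanically by enumerating truth tables; a small Python sketch:

```python
from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

# A conditional with a false antecedent is always (vacuously) true:
assert all(implies(False, q) for q in (True, False))

# "P & ~P" is a contradiction: false on every row of its truth table.
rows = [p and (not p) for p in (True, False)]
print(rows)

# Truth tables grow exponentially: n symbols give 2**n rows.
n = 3
print(len(list(product([True, False], repeat=n))))
```

This also illustrates the disadvantage named in question 8: the row count doubles with each added symbol, so tables quickly become too large for complex arguments.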
Bowie, MD SAT Math Tutor Find a Bowie, MD SAT Math Tutor ...With my experience and expertise I use a variety of teaching methods and will tailor my methodology to meet your child's needs. I guarantee that your child will comprehend difficult concepts, be motivated to learn and will have grade improvement. I am confident that his/her knowledge will be vastly improved within the first month of tutoring. 14 Subjects: including SAT math, reading, English, geometry ...I enjoy every minute of it, and it's been one of the most rewarding experiences of my life so far, one that has inspired me to become a secondary math teacher. I value a student's desire to learn and commitment to having a good educational relationship. Being open about your needs, concerns, and things that are going well is very helpful to improving your math skills. 15 Subjects: including SAT math, chemistry, calculus, geometry My love of teaching, especially on a one-to-one basis, began in high school, where the school guidance counselor enlisted me in tutoring struggling algebra and geometry students. Since then, I informally assisted fellow students in my college studies in engineering. I worked successfully for many ... 12 Subjects: including SAT math, reading, calculus, geometry ...My goal is to become a teacher, either at the High School or College level. I believe that education and the process of educating are two-way streets, during which both parties learn and evolve constantly. Success, especially in tutoring, comes from building strong, personal relationships with ... 11 Subjects: including SAT math, geometry, German, algebra 2 ...As an engineer, I am forced to rely on these basics to accomplish my job on a day-to-day basis. I am familiar with and have tutored each level of calculus at a university level. I have degrees in both Mechanical and Electrical Engineering, which are simply physics taken to an extreme. 
7 Subjects: including SAT math, calculus, physics, algebra 1 Related Bowie, MD Tutors Bowie, MD Accounting Tutors Bowie, MD ACT Tutors Bowie, MD Algebra Tutors Bowie, MD Algebra 2 Tutors Bowie, MD Calculus Tutors Bowie, MD Geometry Tutors Bowie, MD Math Tutors Bowie, MD Prealgebra Tutors Bowie, MD Precalculus Tutors Bowie, MD SAT Tutors Bowie, MD SAT Math Tutors Bowie, MD Science Tutors Bowie, MD Statistics Tutors Bowie, MD Trigonometry Tutors
This module defines classes of monads that can perform multiple computations in parallel and, more importantly, combine the results of those parallel computations.

There are two classes exported by this module, Parallel and Fork. The former is more generic, but the latter is easier to use: when invoking any expensive computation that could be performed in parallel, simply wrap the call in forkExec. The function immediately returns a handle to the running computation. The handle can be used to obtain the result of the computation when needed:

do child <- forkExec expensive
   otherStuff
   result <- child

In this example, the computations expensive and otherStuff would be performed in parallel. When using the Parallel class, both parallel computations must be specified at once:

bindM2 (\ childResult otherResult -> ...) expensive otherStuff

In either case, for best results the costs of the two computations should be roughly equal.

Any monad that is an instance of the Fork class can also be an instance of the Parallel class by the following rule:

bindM2 f ma mb = do {a' <- forkExec ma; b <- mb; a <- a'; f a b}

When operating with monads free of side effects, such as Identity or Maybe, forkExec is equivalent to return and bindM2 is equivalent to \ f ma mb -> do {a <- ma; b <- mb; f a b} -- the only difference is in the resource utilisation. With the IO monad, on the other hand, there may be a visible difference in the results because the side effects of ma and mb may be arbitrarily reordered.

class Monad m => Parallel m where
  Class of types that can perform two computations in parallel and bind their results together.

  bindM2 :: (a -> b -> m c) -> m a -> m b -> m c
  Perform two monadic computations in parallel; when they are both finished, pass the results to the function. Apart from the possible ordering of side effects, this function is equivalent to \ f ma mb -> do {a <- ma; b <- mb; f a b}

Instances: Parallel [], Parallel IO, Parallel Maybe, Parallel Identity (any monad that allows the result value to be extracted, such as the Identity or Maybe monad, can implement bindM2 by using par), Parallel ((->) r), Parallel m => Parallel (ListT m), Parallel (ResourceT IO), Parallel m => Parallel (MaybeT m), Parallel m => Parallel (IdentityT m), (Parallel m, Error e) => Parallel (ErrorT e m), Parallel m => Parallel (ReaderT r m).

class Monad m => Fork m where
  Class of monads that can fork a parallel computation.

  forkExec :: m a -> m (m a)
  Fork a child monadic computation to be performed in parallel with the current one.

Instances: Fork [], Fork IO, Fork Maybe, Fork ((->) r), Fork (ResourceT IO).

bindM3 :: Parallel m => (a -> b -> c -> m d) -> m a -> m b -> m c -> m d
  Perform three monadic computations in parallel; when they are all finished, pass their results to the function.

Control.Monad equivalents:

liftM2 :: Parallel m => (a -> b -> c) -> m a -> m b -> m c
  Like liftM2, but evaluating its two monadic arguments in parallel.

liftM3 :: Parallel m => (a1 -> a2 -> a3 -> r) -> m a1 -> m a2 -> m a3 -> m r
  Like liftM3, but evaluating its three monadic arguments in parallel.

ap :: Parallel m => m (a -> b) -> m a -> m b
  Like ap, but evaluating the function and its argument in parallel.

mapM :: Parallel m => (a -> m b) -> [a] -> m [b]
  Like mapM, but applying the function to the individual list items in parallel.
S 402: Homework #3, Fall 2004
Satisfaction
Due: Friday, October 22

Special late policy: Because this homework is due right before vacation, there will be a special late policy that hopefully will reduce the pressure over the break on people who are unable to complete the assignment on time (especially those who might be traveling). In particular, the first five calendar days of break will count as a single late "day". So, if you turn in this homework after the due date (Oct. 22) but before 11:59pm on Oct. 27, it will count as being only one day late. After that, calendar days will count the same as late days. For instance, a homework turned in on Oct. 28 will count as being two days late. As usual, no homeworks will be accepted more than five "days" late, which means they will not be accepted after Sunday, Oct. 31.

For this homework only, if you are out of the Princeton area over break, you may mail or email the written part of the assignment to Zafer. If mailed, your homework is considered submitted on the postmark date, and should be sent to this address: Zafer Barutcuoglu, Princeton University, Department of Computer Science, 35 Olden Street, Princeton, NJ 08544. (It would be wise to send him email at the same time you mail your assignment so that he can look out for it; also, save a photocopy of your work.)

Part I: Written Exercises

See instructions on the assignments page (as well as the special instructions above) on how and when to turn these in. (Each of these will be worth around 15 points.)

1. Exercise 7.4a,b,c in R&N. Be sure to prove these assertions; don't just wave your hands.

2. Exercise 7.9 in R&N. Also show how the resolution algorithm would handle each of these questions.

3. Suppose two cards are drawn at random from a full deck of cards.
   a. What is the probability that the first card is either a King or a Queen?
   b. What is the probability that the first card is either a King or a Heart?
   c. Is the suit (heart, spade, diamond, club) of the first card independent of the suit of the second card? Justify your answer (this means justify mathematically using the definition of independence).
   d. Is the suit of the first card independent of the rank (A,2,3,...,10,J,Q,K) of the second card? Justify your answer.

4. Exercise 13.11 in R&N.

Part II: Programming

In this assignment, you will have a chance to develop and experiment with your own ideas for solving CNF satisfiability problems. The assignment has three parts, one of which is optional and for extra credit:

1. Write the fastest program you can for finding satisfying assignments (models) for a given CNF sentence.
2. Write a program for generating many "interesting" CNF sentences.
3. (Optional) Run a systematic experiment using your solver and generator programs.

Each of these is described in detail below. After the submission deadline, we will also conduct our own "implementation challenge" to see whose program is fastest and most effective.

A satisfiability solver

The first part of the assignment is to implement a satisfiability solver, or "sat-solver" for short. Given a CNF sentence, your sat-solver should attempt to find a satisfying assignment as quickly as possible. Your sat-solver does not need to be complete in the sense of returning "failure" when the CNF is unsatisfiable. Rather, if no satisfying assignment can be found, it should return the "best" assignment it can find, i.e., one with as many clauses satisfied as possible. Most likely, you will want to use a variant of WalkSat, but this is not required. Be creative. There are many ways in which even WalkSat can be tinkered with to improve performance.

Your sat-solver will be given a CNF in a representation described below. It will also be given a time limit in the form of a "timer" object. Your sat-solver must return with some answer (i.e., some assignment) by the time the timer reaches zero, or as soon thereafter as possible.
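For reference, the core WalkSat loop looks roughly like the following sketch (Python rather than the course's Java interfaces; the DIMACS-style integer-literal encoding and the parameter values here are assumptions of this sketch, not the assignment's representation):

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, rng=random.Random(0)):
    """Clauses are lists of nonzero ints: +v means symbol v true, -v negated.
    Returns (assignment, number_of_unsatisfied_clauses) for the best
    assignment seen, mirroring the 'best effort' behavior asked for above."""
    assign = [rng.choice([True, False]) for _ in range(n_vars + 1)]  # 1-indexed

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    best = None
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if best is None or len(unsat) < best[1]:
            best = (assign[:], len(unsat))
        if not unsat:
            break
        clause = rng.choice(unsat)              # pick a broken clause
        if rng.random() < p:                    # random-walk step
            v = abs(rng.choice(clause))
        else:                                   # greedy step: flip the variable
            def broken_after_flip(var):         # leaving the fewest broken clauses
                assign[var] = not assign[var]
                n = sum(not any(sat(l) for l in c) for c in clauses)
                assign[var] = not assign[var]
                return n
            v = min({abs(l) for l in clause}, key=broken_after_flip)
        assign[v] = not assign[v]
    return best

model, unsat = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
print(unsat)
```

The recomputation of all broken clauses on every flip is the easy-to-read but slow choice; keeping per-clause satisfied-literal counts and updating them incrementally is one of the standard speedups worth trying.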
A generator

The second part of the assignment is to devise and implement a method of generating many CNF sentences corresponding to a natural class of problems. For instance, in class, we discussed how graph-coloring problems can be converted into CNF satisfiability problems. Thus, one way of generating interesting CNF sentences is to first generate random graph-coloring problems, and then convert them into CNF sentences whose satisfiability must be solved to find a coloring of the randomly generated graph. Similarly, we saw in class how a blocks-world planning problem can be converted into a satisfiability problem. A generator in this case could be designed by first generating random blocks-world problems (i.e., start and goal configurations) and then converting these into satisfiability problems.

For this part of the assignment, you should (1) choose a natural problem, (2) figure out how to reduce this problem to a satisfiability problem, (3) implement a generator for natural instances of the original problem, and (4) implement your reduction. The end result will be a program that generates CNF sentences, but of a very particular form. Since we discussed graph coloring and blocks-world planning problems in class, these would not be appropriate choices for this assignment. However, here are some other examples:

• Hamiltonian cycles: Given a graph, find a Hamiltonian cycle, i.e., a path through the graph that begins and ends at the same vertex, and that visits every vertex exactly once.

• Dinner party planning: Some number of guests are coming for dinner and will be seated at a single large table (with the host and hostess at the ends of the table). Some pairs of guests have indicated that they must sit next to each other. Other pairs have indicated that they absolutely refuse to sit next to each other. Find a seating arrangement that satisfies all the constraints.
• Wedding party planning: Some number of guests are invited to a wedding, and will be seated at some number of tables of fixed size. Some pairs of guests must be together, and some must be separated. Find an arrangement that satisfies all the constraints. • Robot navigation: A robot must make its way through an environment with obstacles. It can move up, down, left or right, and must find a path from a start position to some goal position. • Path finding in a graph: Given a graph, find a path from the start state to the goal state. Some of these are deliberately vague. You can fill in the details as you wish. You should not feel constrained to choose any of these. On the contrary, you are strongly encouraged to be creative and come up with your own problem. You need to pick a problem for which it is easy and natural to generate many CNF sentences. For instance, it would not be appropriate to choose a problem such as n-queens since, for each n, there is only a single problem instance. Once you have chosen a problem, you need to figure out how to generate random instances, and how to reduce these instances to satisfiability problems. Be very careful to check that an assignment that satisfies the CNF also is a solution to the original problem instance, and vice versa. For some problems, it is possible to design your generator to produce only satisfiable CNF sentences. For instance, for the graph-coloring problem, you can start by 3-coloring a set of vertices and then add random edges, never adding an edge that would connect two like-colored vertices. The resulting graphs (without the colors, of course) will clearly be 3-colorable by construction, so the generated CNF's will also be satisfiable. If possible, you are encouraged (but not required) to produce a generator of this form since it should make the implementation challenge more interesting (obviously, no one will be able to solve an unsatisfiable CNF). The final step is to implement all of the above. 
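To make the generator idea concrete, here is a sketch of the graph 3-coloring reduction mentioned earlier (in Python rather than the required Java; the mapping of (vertex, color) pairs to symbol indices is one arbitrary choice), including a generator whose output is satisfiable by construction:

```python
import random

def coloring_to_cnf(n_vertices, edges):
    """Reduce graph 3-coloring to CNF. Integer literal 3*v + c + 1
    means 'vertex v has color c' (c in {0, 1, 2})."""
    var = lambda v, c: 3 * v + c + 1
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])    # some color
        for c1 in range(3):                                   # at most one color
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])
    for (u, v) in edges:                                      # endpoints differ
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

def random_colorable_instance(n_vertices, n_edges, rng=random.Random(1)):
    """3-colorable by construction: color the vertices first, then only add
    edges between differently-colored vertices. Colors are spread evenly so
    that enough legal edges exist."""
    color = [i % 3 for i in range(n_vertices)]
    rng.shuffle(color)
    edges = set()
    while len(edges) < n_edges:
        u, v = rng.sample(range(n_vertices), 2)
        if color[u] != color[v]:
            edges.add((min(u, v), max(u, v)))
    return coloring_to_cnf(n_vertices, sorted(edges))

cnf = random_colorable_instance(10, 15)
print(len(cnf))
```

Note how the check demanded above works both ways here: a CNF model picks exactly one color per vertex and never colors an edge's endpoints alike, and any legal coloring yields a model.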
The end product will be a method for generating many CNF sentences. Your generator (and those of your classmates) will be used to generate the CNF sentences used for the class implementation challenge. In case the process of creating a generator is still unclear, here is a clarifying note written last year by Matt Hibbs, one of that year's TAs.

A systematic experiment

The final part of this assignment, which is optional and for extra credit, is to conduct a systematic experiment of some sort exploring some aspect of your sat-solver. An example of such an experiment is the one shown in Figure 7.18b of R&N. In this experiment, the performance of WalkSat was tested on randomly generated 3-CNF sentences with the number of symbols fixed, but with varying clause-to-symbol ratios.

Most likely, your generator will take some parameters, such as the number of edges and number of vertices in the graph that is to be 3-colored in the coloring problem. You might try measuring run time as a function of one of the parameters (with the others held fixed). For instance, in this example, you could measure run time as a function of the number of edges with the number of vertices held fixed. Here is how you might conduct such an experiment:

1. Fix the number of vertices to be, say, 50. Let the number of edges vary, taking values of, say, 20, 40, 60, ..., 200.

2. For each of these settings, generate, say, 50 random graphs and convert them to CNF sentences. If possible, generate only CNF sentences that are known to be satisfiable (see note above).

3. Run your sat-solver on all of the generated CNF sentences, and record all the results, i.e., running time and the number of unsatisfied clauses. You may need to limit the running time of each run to keep the experiment from running for too long.

4. Make a plot with the number of edges on the x-axis, and the average (or median) running time on the y-axis. (There are many programs available for making plots, including Excel and gnuplot, among others.)

The numbers chosen here for the number of vertices and number of edges were arbitrary. In your own experiment, if possible, you should try to identify an "interesting" region for the parameter settings, as in R&N Figure 7.18b, where the running time increases but then decreases as the clause-to-symbol ratio is increased.

This is just one possible experiment that could be conducted. Myriad possibilities exist. For instance, you might want to compare the performance of two substantially different algorithms, or variants of the same algorithm, over a range of problems. Or, if your algorithm has a parameter, you can see how the algorithm's performance varies as a function of this parameter. Most likely, if you play around with your algorithm and your generator, you will encounter some aspect of one or the other that seems worth exploring systematically. Moreover, such an experiment may help you to design a more effective sat-solver.

As with the other parts of this assignment, you are encouraged to think creatively. It is sufficient to conduct a single experiment for this part of the assignment, although you are welcome to do more. Be careful to plan your experiment carefully, possibly after running some preliminary tests. Because we are dealing with randomized algorithms and random instances, you will almost certainly need to run your algorithm many times, taking the average (or median) of the results. This also means that the experiments can take quite a while to run. This is why it is so important that you start early if you are planning to do this part of the assignment. Ideally, you would run your algorithm hundreds of times to get reliable results. Unfortunately, you probably will not have time to do this, and may be forced to run only ten or twenty times. Plan carefully, and be sure your experiments will terminate in a reasonable period of time before beginning them.
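The experimental procedure in steps 1-4 amounts to a simple loop; a Python outline (the generate/solve arguments are placeholders standing in for your own generator and sat-solver):

```python
import statistics
import time

def run_experiment(generate, solve, edge_counts, trials=50, time_limit=20.0):
    """For each parameter setting, generate `trials` instances, solve each,
    and record the median running time, capping each run at time_limit
    seconds so one hard instance cannot dominate the experiment."""
    results = {}
    for n_edges in edge_counts:
        times = []
        for _ in range(trials):
            cnf = generate(n_edges)
            start = time.perf_counter()
            solve(cnf)
            times.append(min(time.perf_counter() - start, time_limit))
        results[n_edges] = statistics.median(times)
    return results

# toy stand-ins for a real generator and solver, just to show the shape
res = run_experiment(lambda m: [[1, 2]] * m, lambda cnf: None, [20, 40])
print(sorted(res))
```

On a shared machine, swapping the wall-clock measurement for a flip counter returned by the solver (as suggested above) fits the same loop.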
For instance, the experiment outlined above will require generating 50 * 10 = 500 CNF sentences. If we allow our sat-solver 30 seconds for each one, this gives a total (worst-case) running time of 500 * 30 = 15,000 seconds, or about 4 hours, 10 minutes.

Obviously, running times may vary depending on who else is using the computer you are on. If you are using a shared computer, this might lead to unreliable results. In this case, you may instead wish to measure some quantity other than running time, such as the number of variable flips made by the algorithm.

The implementation challenge

Once you turn in your sat-solver and generator, we will run everyone's sat-solver on everyone's generator to determine whose sat-solver runs the fastest. In particular, each generator will be used to generate three CNF sentences. We will then run every sat-solver on the resulting collection of CNF sentences. Each sat-solver will be given 20 seconds on a 2.4GHz PC to solve each CNF sentence. If your sat-solver solves a CNF, you will be charged the actual number of seconds it took to solve it. If you do not solve it, you will be charged the full 20 seconds, or your actual time, whichever is larger (this is to penalize going over the time limit). We will then take the average (over all CNF's) of these running times. We also will compute other statistics, such as the average number of unsatisfied clauses achieved by each sat-solver.

To make the implementation challenge interesting, please try to set up your generator to produce CNF's that are challenging but that at least your own sat-solver would have a reasonable chance of solving in 20 seconds (if run on a 2.4GHz machine). Obviously, it won't be much fun if no one's sat-solver can solve anyone's generated CNF's, nor if all of the CNF's are trivially solvable almost immediately.

Your grade will not depend on how well you do in this implementation challenge relative to the rest of the class.
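The charging rule just described is easy to state precisely. A small Python sketch (the function names are our own, not part of the assignment code):

```python
def charge(solved, seconds, limit=20.0):
    """Seconds charged for one CNF: the actual time if it was solved,
    otherwise the full limit or the (over-limit) actual time,
    whichever is larger."""
    return seconds if solved else max(limit, seconds)

def average_charge(results, limit=20.0):
    """Average charged time over a list of (solved, seconds) pairs."""
    return sum(charge(s, t, limit) for s, t in results) / len(results)
```

For example, solving one CNF in 10 s and failing another after 5 s averages to (10 + 20) / 2 = 15 s.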
Although you are encouraged to do the best you can, this should not be regarded as anything more than a fun (I hope) communal experiment exploring methods for solving satisfiability problems.

The code we are providing

We are providing the following classes and interfaces:

• Generator - This is the interface for an object that generates CNF sentences. In particular, every Generator must include a method getNext() that returns a (new) CNF sentence each time it is called.
• SatSolver - This is the interface for an object that solves satisfiability problems. Every SatSolver must include a method solve() that takes as input a CNF and a Timer object and returns a model (assignment). The SatSolver should attempt to return before Timer.getTimeRemaining() returns a negative number, or very soon thereafter.
• Literal - An object of this class represents a literal. This class has two fields, both public: sign, representing whether or not the literal is negated (false = negated), and symbol, representing the index (integer i.d.) of the referenced symbol.
• Timer - An object of this class acts as a timer, indicating both how much time has elapsed from the time the object was created (returned by the getTimeElapsed() method), as well as how much time remains until the end of a specified period (returned by the getTimeRemaining() method).
• CnfUtil - This class contains a number of static methods that you may find helpful, such as for printing. Using these methods is optional.

A CNF is represented as an array of arrays of Literals. Each component array corresponds to a clause (disjunction of literals) in the natural way. Thus, cnf[4][1] corresponds to literal #1 in clause #4. Note that, in Java, these component arrays do not all need to be of the same length. (See the section on multidimensional arrays in Java in a Nutshell.) Likewise, a clause is represented as an array of Literals with the obvious interpretation, so a CNF can be viewed equivalently as an array of clauses.
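The same representation is easy to mirror in other languages for quick experiments. Here is a hypothetical Python sketch in which a literal is a (sign, symbol) pair and a model is a list of booleans indexed by symbol (the helper name is ours, not part of the provided code):

```python
def satisfies(cnf, model):
    """True if `model` satisfies `cnf`.

    `cnf` is a list of clauses; each clause is a list of (sign, symbol)
    pairs, with sign=False meaning the literal is negated.  A clause is
    satisfied when at least one of its literals agrees with the model,
    and the CNF is satisfied when every clause is.
    """
    return all(any(model[symbol] == sign for sign, symbol in clause)
               for clause in cnf)
```

For instance, (x0 OR not-x1) AND (x1) is satisfied by the model [True, True] but not by [False, False].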
A model or assignment is represented as a boolean array in the natural way. Thus, model[3] is the true/false assignment to symbol #3.

We also are providing a Generator called RandomCnfGenerator which generates random CNF formulas of a fixed size with a fixed number of symbols and clauses. You can use it in your experiments, or as an example for writing your own generator.

A large number of CNF sentences are available from the website of the DIMACS implementation challenge that was conducted several years ago. You may download and try out your program on these, or use these in your experiments. The contents of the files on this website are (very tersely) described here (unfortunately, I really don't know much more about these beyond what appears in this document). We have provided a Generator called DimacsFileGenerator that reads in files of this sort (after they have been uncompressed).

Both classes RandomCnfGenerator and DimacsFileGenerator have their own main methods which can be used for testing. Moreover, you may wish to use these as examples of how to use SatSolvers and Generators.

All code can be downloaded from this directory, or all at once from this tar file. Documentation is available here.

What to turn in

Using whiteboard, as in HW#2, you should turn in the following:

• A class called MySatSolver which implements SatSolver, and which includes a constructor taking no arguments. This is the sat-solver that we will use as your entry to the implementation challenge. A dummy version of MySatSolver has been provided which you can use as an example and a template.
• A class called MyGenerator which implements Generator, and which includes a constructor taking no arguments. This is the generator we will use for the implementation challenge. A template of MyGenerator has been provided.
• Any other Java files that you wrote and used for your experiment, or that are needed by MyGenerator or MySatSolver.
• A readme.txt file.
At the top of this file, please include a one-sentence description of your sat-solver, and a one-sentence description of your generator (we may use these in compiling the results of the implementation challenge). If appropriate, the readme.txt file can also be used to explain briefly how your code is organized, what data structures you are using, or anything else that will help the TA understand how your code works. The code that you turn in should not write anything to standard output or standard error.

In addition to the program itself, you should write a brief description of:

• how your sat-solver works, including pseudocode if appropriate;
• your generator, in particular, what natural problem it is based on, how you generated instances of that problem, and how you reduced instances of that problem to CNF;
• your experiment (if you did one), what the results of the experiment were (which you might be able to summarize as a graph, such as Figure 7.18b of R&N), and what you concluded from the experiment.

This written work should be handed in with the written exercises.

What you will be graded on

You will be graded on completing all the different parts of this assignment. You have a great deal of freedom in choosing how much work you want to put into this assignment, and your grade will in part reflect how great a challenge you decide to take on. Creativity will certainly be an important component of your grade. Although your standing in the implementation challenge will not affect your grade, your sat-solver should be at least as fast and effective as vanilla WalkSat. However, you should not let this deter you from trying a risky idea for an algorithm quite different from WalkSat. If you have such an idea, you might go ahead and implement it, and compare its performance to WalkSat, thus satisfying the third (optional) part of the assignment. You then can enter the faster of the two algorithms in the implementation challenge.
Needless to say, your code should not halt prematurely with an exception (unless given bad data). As usual, you should follow good programming practices, including documenting your code. Finally, the written part of this assignment should be clear and concise. This programming assignment will be worth roughly 75 points, which will be divided between the write-up, the generator, the sat-solver, and overall creativity. The extra credit "systematic experiment" portion will be worth 5-30 points (depending on what you do for this part).
Required: Calculate his gross wages according to:
1. Piece work with guaranteed weekly wage.
2. Rowan Premium Plan.
3. Halsey Premium Plan.
Answer: (1) Rs. 192. (2) Rs. 242.22 (3) Rs. 226.

P.8-12: There are ten men working as a group on a particular manufacturing project. When the weekly production of the group exceeds a standard number of pieces per hour, each man in the group is paid a bonus for excess production, in addition to his wages at the hourly rate. The amount of the bonus is computed by first determining the percentage by which the group's production exceeds the standard. One-half of this percentage is then applied to a wage rate of Rs. 12.50 to determine an hourly bonus rate. Each man in the group is paid, as a bonus, this bonus rate applied to his total hours worked during the week. The standard rate of production before a bonus can be earned is 200 pieces per hour. On the basis of the production record stated below, compute:
(a) The rate and amount of bonus for the week.
(b) The total wages of Mr. B, who worked 40 hours at a base rate of Rs. 5.00 per hour.

Production Record
Day         Hours worked   Production
Monday          72.0         17,680
Tuesday         72.0         17,348
Wednesday       72.0         18,000
Thursday        72.0         18,560
Friday          71.5         17,888
Saturday        40.0          9,600
Total:         399.5         99,076

Answer: (a) rate: Rs. 1.50 per hour; amount: Rs. 599.25. (b) Rs. 260 (see Illustration No. 17).
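The printed answers to P.8-12 follow mechanically from the stated rules, which a short script can confirm (a Python sketch; the variable names are ours):

```python
hours_worked  = 399.5   # total group hours for the week
production    = 99076   # total pieces produced
standard_rate = 200     # pieces per hour before any bonus is earned
bonus_base    = 12.50   # Rs. wage rate the half-percentage is applied to

standard_output = hours_worked * standard_rate                  # 79,900 pieces
excess_pct = (production - standard_output) / standard_output   # 24% excess
bonus_rate = (excess_pct / 2) * bonus_base      # (a) Rs. 1.50 per hour
weekly_bonus = bonus_rate * hours_worked        # (a) Rs. 599.25

# (b) Mr. B: 40 hours at a Rs. 5.00 base rate, plus the bonus rate
b_total_wages = 40 * 5.00 + 40 * bonus_rate     # Rs. 260.00
```

The group exceeded the standard output of 79,900 pieces by 19,176 pieces, i.e. 24%; half of that, 12%, applied to Rs. 12.50 gives the Rs. 1.50 hourly bonus rate.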
changing celsius to F

45.7°F = what in Celsius? Is this the equation I should be using: C = °F-32/1.8? C = 45.7-32/1.8 = 7.6

Your answer is correct, but the correct equation for Fahrenheit to Celsius is: (F - 32) x 5/9 = C, where F = 45.7. Thus, (45.7-32)*5/9 = 7.6

To derive the formula, use the freezing and boiling points of water to get the slope of the line: $\frac{\Delta C}{\Delta F}=\frac{100-0}{212-32}=\frac{5}{9}$. Now, use the point-slope formula: $C-0=\frac{5}{9}(F-32)$, so $C(F)=\frac{5}{9}(F-32)$.

You mean C = (F - 32)/1.8. What you wrote really means C = F - (32/1.8). You can check that it is correct by checking the "easy values". The freezing point of water is 0 C and 32 F: (32-32)/1.8 = 0/1.8 = 0, so that is correct. The boiling point of water is 100 C and 212 F: (212-32)/1.8 = 180/1.8 = 100, so that is correct. Since this is a "linear" equation and two points determine a line, you know that is the correct formula for any F.
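The corrected conversion can be checked against the "easy values" mentioned in the thread (a short Python sketch):

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

freezing = f_to_c(32)    # 0 degrees C
boiling  = f_to_c(212)   # 100 degrees C
answer   = f_to_c(45.7)  # about 7.6 degrees C, as in the thread
```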
Military Flight Aptitude Test: Decimals

Some questions on the Military Flight Aptitude Test will require you to work with decimals. You can express a fraction as either a percentage or a decimal. To convert a fraction into a decimal, just divide the numerator by the denominator. For example, 1/4 is 1 divided by 4, or 0.25.

Rounding up or down

Sometimes your decimals don't come out as cleanly as 0.25; you may have a decimal that has more decimal places than your calculator can register. In these instances, you round the number to cut off the run of decimal places. To round a decimal, decide which decimal place you want to round to. (The first place to the right of the decimal point is the tenths place; the second is the hundredths; the third is the thousandths; and so on.) Look one place to the right of your desired rounding place; if the number there is lower than five, the number in your rounding place stays the same. If the number to the right of the rounding place is five or higher, the number in your rounding place bumps up to the next higher number.

For the purposes of a military flight aptitude test, you can round extended decimals to the nearest hundredth. For example, 5/9 is 5 divided by 9, or 0.555... (a bar written over the 5 means that number repeats to infinity). To round it to the hundredths place, you check the third number to the right of the decimal place (thousandths); that number is 5, so you round up to the next number to get 0.56.

Adding and subtracting decimals

Addition and subtraction are easy with decimals. You just have to remember to line up the columns correctly before you start. Align the terms so that the decimal points match up, and you'll be fine.

Multiplying decimals

When multiplying decimals, you multiply as if you're multiplying whole numbers and then adjust the placement of the decimal point based on the total number of decimal places in the factors. For example, to multiply 1.111 x 1.03, you treat the problem as 1111 x 103 = 114433.
Next, you add up the number of decimal places in the problem: 1.111 has three decimal places, and 1.03 has two, for a total of five places. Starting at the farthest right decimal place in the answer, you move five places to the left and drop in the decimal point to get 1.14433, which you can round off to 1.14.

Dividing decimals

When dividing a decimal by a whole number, you first move the decimal point in the dividend (the number being divided) to the right until you have a whole number; perform the division with the whole numbers and then move the decimal place back to the left the same number of spaces you moved it to the right. For example, to divide 0.3 by 2, you move the decimal point in 0.3 one space to the right so that the equation is 3 / 2 = 1.5. Then you move the decimal point back the same amount, or one space, to the left to get your final answer of 0.15.

If the decimal is divided by another decimal, you move the decimal point in the divisor (the number you are dividing by) just enough to make it a whole number. Move the decimal point in the dividend the same number of spaces (even if the dividend won't be a whole number) and divide as you normally would. For example, take .002 / .08. First, you move the point out by two decimal places to get 0.2 / 8, which is the same as 0.200 / 8, or 0.025. (You can add zeros to the right of a decimal point without changing the value of the number.) In this case, you don't have to move the decimal point in the final answer.
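The worked examples in this section can be verified directly (a Python sketch; round() is used only to absorb binary floating-point noise):

```python
# Multiplying decimals: 1.111 x 1.03 has 3 + 2 = 5 decimal places in total
assert round(1.111 * 1.03, 5) == 1.14433
assert round(1.111 * 1.03, 2) == 1.14

# Dividing a decimal by a whole number: 0.3 / 2
assert round(0.3 / 2, 3) == 0.15

# Dividing a decimal by a decimal: .002 / .08
assert round(0.002 / 0.08, 3) == 0.025

# Rounding 5/9 to the nearest hundredth
assert round(5 / 9, 2) == 0.56
```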
[SOLVED] Distance between a point and a line

May 24th 2006, 11:40 AM

Alright well I've been having a lot of trouble trying to figure out how to find the distance between a line and a point not on the line. So if anyone could help me I'd really like that. Here are some sample problems:
Sample 1: (6,1) and -2x+4=4
Sample 2: (-7,-2) and y=7x-3
If you could take them step by step for me I'd really like that.

May 24th 2006, 12:23 PM

Originally Posted by Silly.girl
Alright well I've been having a lot of trouble trying to figure out how to find the distance between a line and a point not on the line. So if anyone could help me I'd really like that. Here are some sample problems: Sample 1: (6,1) and -2x+4=4 Sample 2: (-7,-2) and y=7x-3 If you could take them step by step for me I'd really like that.

There is a problem with sample 1: -2x + 4 = 4 isn't a line! Generally what you want is the "perpendicular distance" between a point and a line. There is a general formula out there (I don't have it handy), but this is the basic logic of how to find it if you don't have the formula:
1. Solve the equation of the line (line 1) in slope-intercept form.
2. You want to find the equation of a line (line 2) passing through your point that is perpendicular to the original line (line 1). Thus the slope of line 2 will be -1/m(1).
3. Now you need to find the point of intersection of the two lines.
4. Now find the distance between the point of intersection and your point.
It's long and it's messy, but if you do it step by step it's really not that bad.

May 24th 2006, 12:31 PM

Originally Posted by Silly.girl
Alright well I've been having a lot of trouble trying to figure out how to find the distance between a line and a point not on the line. So if anyone could help me I'd really like that. Here are some sample problems: Sample 1: (6,1) and -2x+4=4 Sample 2: (-7,-2) and y=7x-3 If you could take them step by step for me I'd really like that.
Let $(x,y)$ be a point on the line. Then the distance $d$ from the point $(-7,-2)$ to the point $(x,y)$ satisfies:

$d^2=(x+7)^2+(y+2)^2$

but on the line $y=7x-3$, so:

$d^2=(x+7)^2+(7x-1)^2$

Now the point on the line closest to $(-7,-2)$ will minimise $d^2$, so we want to find the $x$ such that $\frac{d}{dx}d^2=0$, which is equivalent to:

$\frac{d}{dx}[(x+7)^2+(7x-1)^2]=2(x+7)+14(7x-1)=0$, or $x=0$.

So the closest point to the given point is $(0,-3)$. Hence the required distance is $\sqrt{50}$.

May 24th 2006, 01:18 PM

Given a line with equation $Ax+By+C=0$ and the point $(x_0,y_0)$, the distance between them is

$d=\frac{|Ax_0+By_0+C|}{\sqrt{A^2+B^2}}$

June 1st 2006, 05:25 AM

Originally Posted by topsquark
There is a problem with sample 1: -2x + 4 = 4 isn't a line!

-2x+4=4 represents a line. If you solve it you will get x = 0, which represents the y-axis.

June 1st 2006, 05:30 AM

Originally Posted by Silly.girl
Alright well I've been having a lot of trouble trying to figure out how to find the distance between a line and a point not on the line. So if anyone could help me I'd really like that. Here are some sample problems:

Let your line be Ax + By + C = 0 and the point be (a,b). Since the shortest distance in Euclidean geometry is the perpendicular distance, we will find the equation of the perpendicular line, which will be of the form Bx - Ay + k = 0. The rest you can try yourself.
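The closed-form perpendicular-distance formula can be checked numerically against the calculus answer above (a Python sketch; the function name is ours):

```python
from math import sqrt

def point_line_distance(a, b, c, x0, y0):
    """Distance from the point (x0, y0) to the line ax + by + c = 0."""
    return abs(a * x0 + b * y0 + c) / sqrt(a * a + b * b)

# Sample 2: y = 7x - 3 rewritten as 7x - y - 3 = 0, point (-7, -2)
d2 = point_line_distance(7, -1, -3, -7, -2)   # sqrt(50), matching the thread

# Sample 1 read as x = 0 (the y-axis, per the last reply), point (6, 1)
d1 = point_line_distance(1, 0, 0, 6, 1)       # 6.0
```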
Factoring Trinomials, a = 1 - Problem 7

Factoring is a process of turning a sum or difference into a product. If the first term of the trinomial is x squared, then we know that the first term in each factor must be x. If the "c" term of the trinomial is positive and "b" is negative, like in these examples, then we know our remaining terms must both be negative. We look for integers that multiply to the "c" term and add to the "b" term, being careful with the positive and negative sign placement. Sometimes it is useful to make a table of possible factors, which we demonstrate here.
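The search for the two integers can be automated with a brute-force helper (an illustrative Python sketch, not part of the lesson). For the hypothetical example x^2 - 7x + 12, it finds -3 and -4: both negative, since c is positive and b is negative, exactly as the lesson predicts.

```python
def factor_pair(b, c):
    """Find integers p, q with p + q = b and p * q = c, so that
    x^2 + b*x + c = (x + p)(x + q); returns None if no pair exists."""
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return p, q
    return None
```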
Posts by nick
Total # Posts: 1,018

world history
1) Who became Da Vinci's rival in Florence? - Michelangelo
2) What was the mistake Leonardo made with his work The Last Supper? - Leonardo chose to sit Judas in with the rest of the disciples
3) What was the name Da Vinci gave to his apprentice? Can you check my answer? And he...

4,915 people were surveyed about their recent purchases. 1,027 of those people surveyed bought a lawn mower within the last year, 918 bought a snow blower within the last year, and 3,387 didn't buy either of the two. a) How many people bought a lawn mower and a snow blower...

Algebra II
1. A sound has an intensity of 5.92 e 1025 W/m2. What is the loudness of the sound in decibels? Use L = 10 log(I/I0), where I0 = 10^-12 W/m2. 2. Suppose you decrease the intensity of a sound by 45%. By how many decibels would the loudness be decreased?

social studies
mummy-shaped figures that worked in the afterlife's fields for the mummy

A +30 µC charge is placed 44 cm from an identical +30 µC charge. How much work would be required to move a +0.69 µC test charge from a point midway between them to a point 14 cm closer to either of the charges?

Find the average of 26, 42, 57, 49, and 16.

Earth Science
True or false? Although there is scientific evidence and many scientific theories about the origin of the universe, how it all happened remains a question. Which statement is true? The perfect cosmological principle is the ba...

spanish/check please
Describe a typical day in your life from beginning to end using eight different reflexive verbs. Give the time you do each of those activities as well. 1- Yo me levanto a las seis de la mañana. 2- Entonces me lavo la cara a las seis y cuarto. 3- Después de que me pongo la ropa a las seis...

I have been asked "Determining whether amounts are in conformity with GAAP addresses the proper measurements of assets, liabilities, revenues, and expenses which includes all of the following except: a.
the reasonableness of management's accounting principles. b. prop...

English 7 - Journal Entry (plz read)
This is great!! :)

I'm not 100 percent sure, but I believe it goes like this. The coach could have runners A B C run. He could also have runners A B D run it. Yet he could have runners C A D run. He also may have runners B D C run.

THANKS FOR THE ANSWER, BUT I DON'T KNOW HOW TO WRITE THE PROBLEM TO ARRIVE AT THAT ANSWER

Spanish 1
ESTÁ - remember the accent over the "A"

Spanish 1
¿DÓNDE ESTÁ LA SILLA? - WHERE IS THE CHAIR?

SANTA WAS NUMBERING THE PAGES OF HIS ORDER BOOK. HE WROTE 690 DIGITS. HOW MANY PAGES WERE IN HIS ORDER BOOK?

Am I right to believe that the rights and obligations assertion applies to balance sheet items only? Because my homework assignment asks, "The rights and obligations assertion applies to: a. current liability items only. b. revenue and expense items only. c. both ...

linear equations
Write the equation of a line in slope-intercept form that is perpendicular to the line and passes through the point (-9, 4).

Sin would be 1/2 + 360n? Find the exact values of the inverse function arcsin 1/2.

What is an economist's view on this statement: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest."

I need help with this. I was presented with this question: Analytical procedures show that inventory turnover decreased from 31 34 days to 27 days, and gross margins declined to the lowest level in five years. What might this indicate about the risk of misstatement with r...

HARRIET BOUGHT CHRISTMAS CARDS. SHE BOUGHT 1 BOX WITH 15 CARDS FOR $18.00, 1 BOX WITH 12 CARDS FOR $20.00, AND 2 BOXES WITH 10 CARDS FOR $8.00 EACH. WHAT WAS THE AVERAGE PRICE SHE PAID PER CARD?

interm. algebra
3e^-5x = 48; solve

Sam and Nora leave their business office at the same time. Sam drives north at 53 km/h, Nora drives south at 47 km/h. How long until they are 50 km apart?
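The page-numbering question above (690 digits written) can be settled by counting digits directly: pages 1-9 use 9 digits, pages 10-99 use 180 more, and the remaining 501 digits cover 167 three-digit pages, for 266 pages in all. A Python sketch:

```python
def digits_used(pages):
    """Total digits written when numbering pages 1..pages."""
    return sum(len(str(n)) for n in range(1, pages + 1))

def pages_for_digits(total):
    """Page count whose numbering uses exactly `total` digits
    (None if no page count lands exactly on `total`)."""
    pages, used = 0, 0
    while used < total:
        pages += 1
        used += len(str(pages))
    return pages if used == total else None
```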
Why does the electoral college not allow citizens living in U.S. territories to vote, as it is the only place in the universe (astronauts have voted) that U.S. citizens can't vote. if 16 tablespoons = 1 cup how many tablespoons 3/4 of a cup Write limits of integration for the integral \int_W f(x,y,z)\,dV, where W is the quarter cylinder shown, if the length of the cylinder is 1 and its radius is 1. add (5y^2-7y)+(3y-y^2) |2x+5| -1 < 6 How can estimating help you add two-digit numbers? A 220-kg crate rests on a surface that is inclined above the horizontal at an angle of 19.4°. A horizontal force (magnitude = 525 N and parallel to the ground, not the incline) is required to start the crate moving down the incline. What is the coefficient of static fricti... A rocket of mass 4.45E+5 kg is in flight. Its thrust is directed at an angle of 51.1° above the horizontal and has a magnitude of 7.64E+6 N. Calculate the magnitude (enter first) and direction of the rocket's acceleration. Give the direction as an angle above the horiz... Why is there a gain or loss in a pension expense calculation? I am more confused about this every time I read about this topic. Can somebody explain this to me? I have researched and read and still do not understand why. calculate the frequency of an xray having a wavelength of 3.5*10to the negative seven m. (meters). A dynamics cart carrying a black 10.0 cm strip accelerates uniformly and passes through two photogates. The time for the strip through the first photogate, time in Gate1, is 0.250 s and through the second photogate, time in Gate2, is 0.180 s. (1) Determine the average velocity... A woman stands on a scale in a moving elevator. Her mass is 57.0 kg, and the combined mass of the elevator and scale is an additional 811 kg. Starting from rest, the elevator accelerates upward. During the acceleration, the hoisting cable applies a force of 9391 N. What does t... A 71.0 kg person steps on a scale in an elevator. The scale reads 722 N. 
What is the magnitude of the acceleration of the elevator? A 31.0 kg object slides down a slope which is inclined at an angle of 20.0o to the horizontal. What is the normal force exerted by the slope? A rocket of mass 4.45E+5 kg is in flight. Its thrust is directed at an angle of 51.1° above the horizontal and has a magnitude of 7.64E+6 N. Calculate the magnitude (enter first) and direction of the rocket's acceleration. Give the direction as an angle above the horiz... If 0.120 moles of water are produced in the reaction H+(aq)+OH-(aq)=H2O deltaH=-56.2kj/mol what is qrxn The highest barrier that a projectile can clear is 15.8 m, when the projectile is launched at an angle of 13.4° above the horizontal. What is the projectile's launch speed? An Olympic basketball player shoots towards a basket that is 5.64 m horizontally from her and 3.05 m above the floor. The ball leaves her hand 1.71 m above the floor at an angle of 41.0o above the horizontal. What initial speed should she give the ball so that it reaches the b... john bought a bag, pai oh shoes ad a watch for $160 altogethe. the bag cost $15 more than the pair of shoes and the watch cost $20 less than the bag. What was the cost of the bag? its better to round to the hundreds because if you round to the thousands, it would be really far away from the real answer. for example, if you round 7,127 to the thousands place, it would be 7,000. but if you round it to the hundreds place, it would be 7,100 which is closer ... cool :P by the way, i was just curious. LOL are you a teacher or a random student with that name? thanks that really helped :D <3 what are dependent and independent clauses? Social Studes i will answer the first part for you, a backcountry is a sparsely populated rural region remote from a settled area. the volume will be 216 cm cubed. about 2 quarts are in a liter reading Plz Help i agree with you reading Plz Help i read the book 2 times, but i think that they dont mention it in the whole book. 
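For the scale-in-an-elevator question above (a 71.0 kg person, scale reading 722 N), Newton's second law applied to the person gives N - mg = ma, so a = N/m - g. A Python sketch, assuming g = 9.80 m/s^2:

```python
m = 71.0        # kg, the person's mass
normal = 722.0  # N, the scale reading (normal force on the person)
g = 9.80        # m/s^2 (assumed value)

# N - m*g = m*a  =>  a = N/m - g; a positive result means the
# elevator is accelerating upward
a = normal / m - g   # about 0.37 m/s^2
```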
but i think that mayella is lying but it doesn't mention anything???? i think that mayella's father told her to make that up

reading Plz Help
In the novel "To Kill a Mockingbird", is Tom Robinson guilty, or is it Mayella Ewell?

The total cost of a meal at a banquet hall is $20 per person plus a $500 charge for renting. Suppose the total was $2160. Find the number of guests.

What is the average of 12, 43, 37, 40 please?

From the position vs time graph of an object moving with constant acceleration, how could you find the velocity?

I do not understand why stock option compensations need to be included as an expense when calculating the company's net income?

number of moles of NaOH required to titrate 38.05 mL of a 1.500 M H3PO4 solution

A truck is stopped for a red light at Ridge Road on Higgins line. At the instant the light turns green, the truck pulls away with a constant acceleration of 3.3 m/s2. A cyclist approaching the truck with a constant velocity of 12.0 m/s is 8.60 m behind the truck when the light...

what maximum height will a 311.2 m drive reach if it is launched at an angle of 17.0 degrees to the ground?

Consider a particle undergoing positive acceleration. (1) What is the shape of the position-time graph? (2) How do we determine an instantaneous velocity from the position-time graph?

Accounting-HELP ME!
A company purchased machinery for $660,000 on May 1, 2008. It is estimated that it will have a useful life of ten years, salvage value of $45,000, production of 350,000 units, and working hours of 60,000. During 2009 the company uses the machinery 3,050 hours, and the machinery ...
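The banquet-hall question above reduces to solving 20n + 500 = 2160 for the number of guests n (a Python sketch):

```python
total, rental, per_person = 2160, 500, 20

# 20n + 500 = 2160  =>  n = (2160 - 500) / 20
guests = (total - rental) // per_person   # 83 guests
```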
Brenda dog is 5 times as long as hugh dog together the dogs are 48 inches long how many inches long is hugh dog ENGLISH 111 In the following sentence identify the appositive or appositive phrase and the noun or pronoun defined by the appositive. We often contribute to the Salvation Army, one of our favorite charities. An automobile and train move together along parallel paths at 31.6 m/s. The automobile then undergoes a uniform acceleration of −4 m/s2 because of a red light and comes to rest. It remains at rest for 44 s, then accelerates back to a speed of 31.6 m/s at a rate of 2.22 m... SraJMcGin, Thank you for the information, but I still do not know how to set this problem up. The site was no help. I have tried to figure out how to do this for four days and my textbook does not give an example on how to do this problem. Lower-of-Cost-or-Market Corrs Company began operations in 2007 and determined its ending inventory at cost and at lower of cost or market at December 31,... The primary substance from which all other chemical compounds and substances are made is a____________. A) element B) compound C) mixture D) pure substance A UPS employee has a 9 hour shift, where he has on average 1 container of packages containing a variety of basic, business, and oversized packages to process every 3 hours. As soon as he processes a package he passes to the next employee who loads it on a truck. The time is ta... Operations Management Fresh Sub hires three workers during the peak hours. When customers arrive, one worker is dedicated to order taking and preparing the bread and the meat; then the customer is passed to the second worker, who asks the customer about the vegetable selection and specializes in as... use the numbers 3, 5, 6, 2, 54, and 5 in that to write an expression that has a value of 100. Cruise industries purchased $10,800 of merchandise on February 1, 2007, subject to a discount trade of 10% and with credit terms of 3/15, n/60. 
It returned $2,500 (gross price before trade or cash discount) on February 4. The invoice was paid on February 13. At what amount wou... for every two M&Ms you buy, you get two free. If one M&Ms cost .79 cents how many can you buy for 6 dollars? can somebody please explain to me how to do this problem: Cruise Industries purchased $10,800 of merchandise on February 1, 2007, subject to a trade discount of 10% and with credit terms of 3/15, n/ 60. It returned $2,500 (gross price before trade or cash discount) on February ... Is there a relationship between ethical behavior in academia and ethical behavior in the business world? If I am correct I would have to say that there is a relationship between ethical behavior in academia and ethical behavior in the business world. I am I correct? Considering how organization must manage cash, receivables, and inventories. Which of the three variables is the most important to manage? In my opinion I would have to say that the cash would be the most important to manage. Am I right? Sets A, B, and C have 6 members in common. Sets A and B have a total of 17 members in common. Sets B and C have a total of 10 members in common. If each member of set B is contained in at least one of the other two sets, how many members are in set B? Yolanda walked a distance of 3 miles in 90 minutes. If her speed for the first mile was 6 miles per hour, how many minutes did it take her to walk the rest of the distance. whts the answer Nine less than 7 times a number Nine less than 7 times a number What is the journal entry to record a sale but no cash comes in? Is it possible to solve that with different variables? how do I solve the problem? a/b = c/d You are making ballon bunches to attach to tables for a charity event. You plan on using 8 ballons in each bunch. Write a rule for the total number of balloons used as a function of the number of bunches created. Identify the independent and dependent variables. How many ballo... 
give vaccinations to 50 of the 100 rats and then expose all 100 to the disease Environmental problems How did pre-modern Europeans understand and deal with what we would today call environmental problems? What are some key elements of activity-based costing? I already know what activity-based costing is, I just do not understand what the key elements are. For the first eleven years of its existence, the United States was a _____. chem 150 calculate the molarity of 700grams of NaOH in 2 liters of solution Selling price of a bond: Problem type 1 On December 31, 2008, $140,000 of 9% bonds were issued. The market interest rate at the time issuance was 11%. The bonds pay on June 30 and December 31 and mature in 10 years. Compute the selling price of a single $1,000 bond on December... How do I figure FUTA/SUTA earnings? This is what my book gives: James Company has three employees .Its payroll information is given below. Employee Earnings Prior October to October Earnings Donald Robinson: $6050 $1200 OASDI Earnings FUTA/SUTA Earnings $1200 $950 Tax Rate Cei... Oh ok. I was wondering if anyone liked it. Thanks Does anyone on here use Writepoint to check your work?? Is this sentence proper? They had swum across the lake several times.
[FOM] Simple historical question H. Enderton hbe at math.ucla.edu Fri Jul 20 14:09:29 EDT 2007 Credit for showing the undecidability of true arithmetic surely goes to Alonzo Church, "An unsolvable problem of elementary number theory," Amer. J. of Math., vol. 58 (1936), 345-363. This is the same paper that presents Church's Thesis for the first time, so it would have been hard for anyone (Goedel, Post, Turing, ...) to have formulated the result before then. As for your pedagogical point, I quite agree. Now that we have a robust theory of computability, I think we can say that the heart of Goedel's first incompleteness theorem lies in the fact that true arithmetic is not computably enumerable. Of course, in 1931 the computability concepts were unavailable. --Herb Enderton On Thu, 19 Jul 2007, Michael Sheard wrote: > ... > To whom should we rightly give first credit for the result/observation > that Th(N,0,S,+,*,<) is undecidable? > ... > This question arose in a pedagogical context. In a survey of > undecidability results, I find it much simpler and more effective to > mention the undecidability of Th(N,...) rather than of PA, or any other > axiomatized theory, especially for a general audience. > Many thanks, > Michael Sheard More information about the FOM mailing list
2.4. Optimizing code

The first thing to look for is algorithmic optimization: are there ways to compute less, or better? For a high-level view of the problem, a good understanding of the maths behind the algorithm helps. However, it is not uncommon to find simple changes, like moving computation or memory allocation outside a for loop, that bring in big gains.

In both examples above, the SVD - Singular Value Decomposition - is what takes most of the time. Indeed, the computational cost of this algorithm is roughly cubic in the size of the input data. However, in both of these examples, we are not using all the output of the SVD, but only the first few rows of its first return argument. If we use the svd implementation of scipy, we can ask for an incomplete version of the SVD. Note that implementations of linear algebra in scipy are richer than those in numpy and should be preferred.

In [3]: %timeit np.linalg.svd(data)
1 loops, best of 3: 14.5 s per loop

In [4]: from scipy import linalg

In [5]: %timeit linalg.svd(data)
1 loops, best of 3: 14.2 s per loop

In [6]: %timeit linalg.svd(data, full_matrices=False)
1 loops, best of 3: 295 ms per loop

In [7]: %timeit np.linalg.svd(data, full_matrices=False)
1 loops, best of 3: 293 ms per loop

We can then use this insight to optimize the previous code:

In [1]: import demo

In [2]: %timeit demo.
demo.fastica    demo.np      demo.prof.pdf   demo.py        demo.pyc
demo.linalg     demo.prof    demo.prof.png   demo.py.lprof  demo.test

In [2]: %timeit demo.test()
ica.py:65: RuntimeWarning: invalid value encountered in sqrt
  W = (u * np.diag(1.0/np.sqrt(s)) * u.T) * W  # W = (W * W.T) ^{-1/2} * W
1 loops, best of 3: 17.5 s per loop

In [3]: import demo_opt

In [4]: %timeit demo_opt.test()
1 loops, best of 3: 208 ms per loop

Real incomplete SVDs, e.g. computing only the first 10 eigenvectors, can be computed with arpack, available in scipy.sparse.linalg.eigsh.

Computational linear algebra

For certain algorithms, many of the bottlenecks will be linear algebra computations.
In this case, using the right function to solve the right problem is key. For instance, an eigenvalue problem with a symmetric matrix is easier to solve than with a general matrix. Also, most often, you can avoid inverting a matrix and use a less costly (and more numerically stable) operation. Know your computational linear algebra. When in doubt, explore scipy.linalg, and use %timeit to try out different alternatives on your data.
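To make these recommendations concrete, here is a small self-contained sketch (using random data rather than the `demo` module above): it computes a truncated SVD with ARPACK via `scipy.sparse.linalg.svds`, and solves a linear system with `linalg.solve` instead of forming an explicit inverse:

```python
import numpy as np
from scipy import linalg
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
data = rng.standard_normal((300, 100))

# Economy SVD: all min(n, p) singular values/vectors, descending order.
u, s, vt = linalg.svd(data, full_matrices=False)

# Truncated SVD via ARPACK: only the k largest singular values
# (svds returns them in ascending order).
u10, s10, vt10 = svds(data, k=10)
assert np.allclose(np.sort(s10), s[:10][::-1])

# Avoid explicit inversion: solve(A, b) is cheaper and more numerically
# stable than inv(A) @ b.
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)
b = rng.standard_normal(100)
x = linalg.solve(A, b)
assert np.allclose(A @ x, b)
```

As always, `%timeit` both variants on your own data before committing to one.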
Math Forum » Discussions » Policy and News » geometry.announcements

Topic: The Solomon Temple's Mathematical Codes - Part 21: Before us, on this Earth, it was another Highly Spiritual advanced Civilization who knew that the Human Voice has a Mathematical Frequency
Replies: 0
Posted: May 5, 2011 8:22 AM

Copyright 2011 by Ion Vulcanescu.

Researching the Solomon Temple, and the Cheops Pyramid I came to the realization that BEFORE US, ON THIS EARTH WHO KNEW THAT and in this article you shall see The research data indicates that the Designers of Cheops Pyramid, implemented it for us - the future generations of after them, in AND HER PYRAMIDON, and THE OLD HEBREWS by designing THE NAMES of the Solomon Temple's 3 rooms: ULAM, HEKAL, and DEBIR alike their ancestors in knowledge - the Egyptian sages, they left it after them encoded in THE MASTER FREQUENCIES of The Solomon Temple's 3 rooms A SQUARE PERIMETER (separated in 8 parts).

WELCOME to the 2010-2011 edition of the miniseries of articles: "The Solomon Temple's Mathematical Codes - Part 21".

Please be advised that this article contains not only the Research, but also my Opinions as a philosopher, both fully protected by the Federal Laws of the United States, and by the International Copyrights Laws.
If you intend to use this Research in part or in whole, you can use it under the Fair Use Rights of the Copyrights Laws, or by the Grant I have indicated in the Copyrights Notice that can be found at the end of the article: "The Condemnation by the Paris Livre of the French Academy of Sciences". Ion Vulcanescu's previous years' research articles were entrusted by me as author to USA Drexel University's Mathforum, and can be found at: As here and there may be typing errors, or other errors, this article shall be continually corrected and re-edited until it remains faithful to the intended form. Due to this Effect the researchers who shall find this article on the Internet are asked always to return to Mathforum and consider only the last corrected edition. Years back during researching the Cheops Pyramid, I realized that taking 100% the "ancient writings" about the Egyptians and the Jews I would be fooled by ...and as the years passed (I am now 62 years old) I also realized that if I have to reach the understanding of the Cheops Pyramid's Time Message, I have to look to OTHERS that may have used as a "Point of Anchorage" the Cheops Pyramid, and may have left after them Mathematical Time-Messages through which as researcher I would be able to reach not only THE OLD HEBREWS, but also THE CHEOPS PYRAMID. As I have shown in previous published articles here on Mathforum, the Cheops Pyramid is not a True Pyramid, but a Bent Pyramid. The Proof that Cheops Pyramid is a BENT PYRAMID lies in the difference between:
- PYRAMIDON Angle of Inclination = 48 deg. 8.46 min, 21 sec.
- PYRAMID Angle of Inclination = 51 deg, 49 min, 2.48 sec.
The researchers in Egyptology, as also the Historians, knew that after Zoser's reign time, Egypt entered an era in which we see an Effect through which Sneferu built the Bent Pyramids of Meidum and Dashur.
The scholars of today after researching these pyramids jumped to the "conclusion" that these "Bent Pyramids" were "bent" for that during the constructions they develped "cracks". As researcher I did not fully understood the Effect through which these "bent Pyramids" appeared, and for years myself too, I was fooled by these "experts" that the pyramids built by Sneferu were bent for that "they developed cracks". The understanding that taking such "design" of builting BENT PYRAMIDS was was realized by me only after I understood that even Cheops Pyramid was a Bent Pyramid, and I saw through THE ANGLES OF INCINATION. It was observed by me that the designers of Cheops Pyramid further refined the Design by creating "The Benting Effect" by just playing with the angle of inclination of A SINGLE STONE - THE PYRAMIDON ! As the research went ahead, I realized that in the now Trigonometric Effect known in trigonometry as COSINE ANGLE where the numerical value of the angle can be obtained by making the ratio between Adjacent side and Hypotenuse was hidden "Something of Value" that the ancient Egyptians appears that know it under other then Trigonometric Concept proven to me to have been THE SECOND OF THE ANGLE, AND ENCODING IT found by me as to be that they encoded again through yet another Concept herein on Mathforum named by me as My then Feeling it was that did the designers of Cheops Pyramid CHOSEN these angles. Then I realized a Strange Effect that I can express it so: who appears that knew that and that THE OLD HEBREWS through the designing of for Cesar and then "seating" in the Roman Alphabet the 3 sounds known today as J,U, and W, that later in time they became to be as they are seen today as Single Letters, they lett in time the Time-Message that THEY KNEW...THE HUMAN VOICE MATHEMATICAL FREQUENCY ! 
Now, I had a hint that the Old Hebrew sages from the Court of Cesar, did a Historical Trick of Untold Proportions by letting the Jews know that in the Jewish Alphabet there were incorporated Time-Messages, when in fact NOT IN JEWISH ALPHABET ( of then), but, IN THE ROMAN ALPHABET( later moved and imposed on the Roman occupied England) were incorporatted THE REALLY OLD HEBREWS TIME-MESSAGES ! ...and as I begun to advance the research, and finding all kind of Frequencies as: Alphabetical, Numerical, Sex and ending in the now MASTER FREQUENCY, the PROOF that apeared in Full Light showing the historical Effect seen as: but before them and they lett it be known this Effect in AND HER PYRAMIDON. ...ande herenow I present the Six Samples that hold THE PROOF that before us on this Earth there was another Civilization Spiritual higher advanced than us! The First Sample "(SERKET + SCORPION + NARMER) - ((SAQQARA STEP PYRAMID VOLUME'S (Large Part) : DECIMAL SYSTEM))= = (Cheops Pyramid's ANGLE OF INCLINATION + +Pyramidon's ANGLE OF INCLINATION ) - - ( Cheops Pyramid ANGLE OF INCLINATION - Pyramidon's ANGLE OF INCLINATION)" Let's read this Time-Message in the Concept of the Mathematical Model of the Master Frequency: "(16384 + 47524 + 19044) - ((330400 - 7840) : 10))= =(( 8640 + 4608+7744+4356) + ( 4900+6724+8836+4356))- -((8640+4608+7744+4356) - ( 4900+6724+8836+4356))" (1) - The names as SERKET, SCORPION , NARMER were read from the Egyptian Hireogliphs. Their Translation in the now sound it is made only through the Roman Alphabet, that today it is..the English Alphabet. (2) -The research by me and the realization that THE SAQQARA STEP PYRAMID (of King Zoser) have to be researched as having two volumes "322560 + 7840 = 330400= Total Volume". This research I presented partial on Mathforum in 2003. 
The Second Sample "(((The Geometric Point of the SEXAGESSIMAL SYSTEM) + + (SINE + COSINE + TANGENTA + COTANGENTA) - - ( SERKET + SCORPION + NARMER)) + + 8 (parts of the Geometric Point)) x 8 ( parts of the Geometric Point) = = ((The SUM of all 4 faces ANGLE OF INCLINATION ( of the Pyramid) + = The Sum of all 4 faces ANGLE OF INCLINATION ( of the Pyramidon))" Let's again read the Time-Message in the Concept of the Mathematical Model of the Master Frequency: "(((60 x 8) + (8836+16900+26244+39204) - (16384+47524+19044) = = ((4356 + 4356) +8)) x8 = (5776 x 4) + (11664 x 4)" HISTORICAL PROOF : The names: "SINE, COSINE, TANGENTA, and COTANGENTA" were not known to have been in use during the time of the builting of the Cheops Pyramid ! The Third Sample "((((The Sum of all 4 faces ANGLE OF INCLINATION( of the Pyramid) + + The Sum of all faces ANGLE OF INCLINATION ( of the Pyramidon)) ; : 8 (parts of the Geometric Point) ))- 8 (parts of the Geometric Point))))x x 2 (the PYRAMID and her PYRAMIDON = CHEOPS". Let's again read the Time-Message in the Concept of the Mathematical Model of the Master Frequency: "((((5776 x 4) + (11664 x 4)) :8)))+ 8))))x 2 = 17424" HISTORICAL PROOF: The historical evidence indicates that the name "CHEOPS" was coined only later in time under the Greeks ! The Fourth Sample "(((GEMATRIA - ( ULAM + HEKAL + DEBIR)) x 4))) + 360 deg. = = ( The Year 2012 AD in the Sosigenes Calendar - - The Year 2012 AD in the Old Hebrews calendar) x x 8 ( parts of the Geometric Point)". Let's again read the Time-Message in the Concept of the Mathematical Frequency of the Master Frequency: "(((21904 -(8836 + 5476+5776)) x 4))) + 360 = (6725 - 5772) x 8 HISTORICAL PROOF:: The word "GEMATRIA" ( do not confuse with "GEOMETRIA"), it is known to have come toward us as a Time-Message of THE OLD HEBREWS. It reffers to the fact that the TORAH Time-Messages could be decoded through the use of the Numbers. 
The Fifth Sample "( The Sum of all 4 faces ANGLE OF INCLINATION ( of the Pyramid) + = The Sum of all 4 faces ANGLE OF INCLINATION ( of the Pyramidon) + + (( 1/8 of The Geometric Point of the Master Frequency of the Sum all all 4 of the PYRAMID + HER PYRAMIDON) )- SWASTIKA = = (ULAM + HEKAL + DEBIR ) x 3 (rooms of the Solomon Temple)" Let's again read the Time Message through the Mathematical Model of the Master Frequency: "( 5776 x4) + (11664 x4) + (4032 :8) - 10,000 = ((8836 + 5476 + 5776) x 3" The Sixth Sample "(((The Geometric Point of the 4 faces ANGLE OF INCLINATION of the Pyramid plus THE 4 FACES OF THE angle of inclination of the Pyramidon - ((The Geometric Point of: THE SOLOMON TEMPLE'S Interior Base Perimeter +THE SOLOMON TEMPLE interior Volume + THE SOLOMON TEMPLE interior Total Area))x 3 rooms))) x 8 - - THE ARTABA))) : 8)))) + THE DECIMAL SYSTEM = The Year 2012 AD in the Sosigenes Concept of Year Zero as 4713 BC + The Year 2012 AD in the Concept of THE OLD HEBREWS Calendar of Year Zero as 3760 BC" ...and again let's read the Time-Message through the Mathematical Model of the Master Frequency: "(((69760 - 60264)x 8 - ((8836 + 5476+5776)) x 3)))x 8 - - 29160))) : 8 + 10 = 6725 - 5772" HISTORICAL PROOF: The Time was separated in BC and AD, only later in the 6th century AD by Dionysius Exigus, thus AD and BC was not known during the design and builting of Cheops Pyramid. ...and how the Civilization in the time of AFTER CESAR ERA, LOST this Highly Advanced Knowledge, and we ended in this era in time when we do not know anything from the Time-Messages of the Designers of Cheops Pyramid, or of the Solomon Temple, you may have a hint looking in the Evolution of the History and the destruction of the Jewish sages who appears that were THE LAST (in Time) KEEPERS OF THIS KNOWLEDGE ! The publication of the Research Data of this miniserie of articles it is now intrerupted until sometimes in November/December 2011 when I intend to return to Mathforum. 
The interested researchers can reach the future research data by checking the posting of future articles on Mathforum, at:

Ion Vulcanescu - Philosopher
Independent Researcher in Geometry
Author, Editor, and Publisher of
May 5, 2011
Sullivan County, State of New York
Contents: Introduction; Literature Review; Wireless local area networks; Kriging interpolation; Artificial neural networks in GIS; Methodology; The measurements; Data preparation and use of geographic information systems; Application of Kriging interpolation; Programming a neural networks tool in GIS for spatial interpolation; The topology of multilayer feed-forward back-propagation artificial neural network; Programming the application with Visual Basic in GIS; the ANN interface; ANN interpolation pattern; Results and Discussions; Coverage results; Comparison between ANN prediction and Kriging interpolation method; Conclusions; References and Notes; Figures and Tables

The analysis of indoor radio propagation is essential for the maintenance of a sustainable and effective communication link in wireless systems. Sufficient radio propagation must be maintained to avoid the interruption of data transmission; therefore, the indoor radio propagation of wireless communication systems is modeled and analyzed by Artificial Neural Networks (ANN) and Kriging spatial interpolation methods in Geographic Information Systems (GIS). With the use of the GIS model, an efficient wireless network can be designed.

The huge demand in the wireless industry has accelerated studies on the development of accurate indoor radio wave propagation prediction methods. Hence indoor radio wave propagation modeling is quite a new and still rapidly developing area, and it has become essential with the installation of Wireless Local Area Networks (WLAN) inside buildings [1].

The indoor propagation channel differs considerably from the outdoor one. The distance between transmitter and receiver is shorter, due to the high attenuation caused by internal walls and furniture, and often also because of the lower transmitter power. Before any propagation computation can be performed, these surfaces must be characterized. It appears that accurate geographical information of this nature must be obtained [2].
Many additional effects can occur within buildings, making indoor propagation a very complex and fascinating process whose exact description in rigorous mathematical formulations is not feasible [1]. However, GIS including spatial interpolation patterns is very efficient for studying the radio wave propagation environment, the number, position and transmitter power of access points, the electromagnetic coverage and the radiation level values.

Spatial interpolation is used to estimate values at locations within the area covered by existing observations and is an important function of a GIS analysis. For this reason a wide variety of interpolation methods are practised and developed. Since it is essential to run an accurate interpolation method, several techniques based on data type are compared. There are different classes of interpolation methods, such as geometrical nearness (e.g. the Voronoi approach), statistical methods (e.g. natural neighbor interpolation, weighting inverse distances, Kriging), methods using basis functions (e.g. trend surface analysis, regularized smoothing spline with tension, method of local polynomials) and the Artificial Neural Networks (ANN) method [3].

Some comparisons between ANN and traditional interpolation methods have been done in different studies. For example, ANN interpolation performed better than Kriging in predicting precipitation levels in high altitude regions [4]. Xingong Li reported the use of a feed-forward neural network for precipitation estimation, and the neural network performed the interpolation consistently well in contrast to other methods such as Voronoi cells, trend surface analysis, inverse distance weighted and ordinary Kriging [3, 5]. The difficulties that existing methods have in representing complex nonstationary relationships are listed by Rigol et al. [6]. Snell et al.
used a multilayer feed-forward back-propagation ANN for the spatial interpolation of daily maximum surface air temperatures, and in 94% of case comparisons the predictive accuracy of the ANN was superior to the benchmark methods (spatial average, nearest neighbor and inverse distance methods) [7]. The numerical results, the advantages and the drawbacks of ANN were also discussed by Bollivier et al. [8]. ANN models provided greater accuracy than the inverse distance and average methods for estimating daily weather variables [9]. The use of a back-propagating feed-forward multilayer ANN with a sigmoidal function produced significantly better results compared with other spatial interpolation methods, and was better than linear and log-linear models [10]. However, Pariente maintained that Hopfield neural nets (Hopfield 84) were much more precise than feed-forward nets and the other interpolation methods.

Few studies have been done on the use of ANNs as tools for GIS, and it is obvious that future GIS implementations should have ANN modules; more research activity must be performed, especially for spatial interpolation [3, 12]. In this study, GIS was used to represent the indoor radio wave propagation environment and electromagnetic coverage by means of ANN and Kriging interpolation patterns with geographical features. In order to illustrate the approach, electromagnetic field values were measured at the entrance floor of the T-Block building, where one of the wireless communication systems is available in Yildiz Technical University, for analyzing the indoor radio wave propagation of WLAN (2.4 GHz). The proposed GIS also ensures 3-dimensional modeling of the study area, the number, position and transmitter power of access points, and the electromagnetic radiation level.
The main goal of this paper is to integrate a multilayer feed-forward back-propagation ANN module in the GIS software (ArcGIS) by programming ArcObjects with Visual Basic, to interpolate the indoor electromagnetic field measurements for propagation analysis. The accuracy was compared with the geostatistical Kriging available in ArcGIS using measures such as the Root Mean Square Error and the Mean Absolute Error. It was demonstrated that the feed-forward back-propagation ANN for spatial interpolation of electromagnetic field measurements provides adequate accuracy. However, the predictions of the Kriging interpolation are more accurate than those of the selected ANN model.

A wireless LAN (WLAN) is a wireless local area network, allowing users to connect directly to a distribution system without interconnecting wires and cables. WLANs utilize spread-spectrum technology based on radio waves, whose frequency is much lower than that of visible light, to enable communication between devices in a limited area, also known as the Basic Service Set (BSS). The primary reasons for the popularity of wireless LANs are their convenience, cost efficiency, and ease of integration with other networks and network components [13]. Figure 1 illustrates a WLAN architecture using the BSS infrastructure. The server and wired workstations are connected to a distribution system called the wired LAN, and access points are connected to this distribution system. Access points provide BSS communication areas between devices. All BSS areas constitute an Extended Service Set (ESS). The connections to the end-users in wireless LANs are established via an air interface, and the communication is maintained by an electromagnetic coverage area through the WLAN Access Point (AP). With the rapid growth of wireless communications, cell sizes are getting smaller and site-specific propagation information is needed for the design of mobile systems.
Coverage is simply the distance over which a wireless network can transmit data at a given data rate, subject to the regulations in its frequency band and the standard under which it operates. Indoor electromagnetic coverage is a primary consideration in the implementation of indoor wireless networks, especially in the frequency range from 500 MHz to 5 GHz. Indoor coverage is important for WLANs, where it directly impacts the critical capacity and cost. WLANs are mostly implemented in indoor environments and a circular coverage is expected, but the pattern of the coverage can usually be affected in a destructive or a constructive way. Thus, the coverage area, the range and the radiation pattern of a WLAN communication system probably differ from the theoretical prediction [14, 15]. An indoor environment is usually very changeable, due to moving people, doors, windows, lifts, furniture and equipment. Indoor signal measurement and prediction are therefore still a kind of a ghost story [1]. The mechanisms behind electromagnetic wave propagation are diverse, but can generally be attributed to reflection, penetration, diffraction and scattering. Most mobile wireless communication systems operate in areas where there is no line-of-sight path between transmitter and receiver. Due to multiple reflections from various objects, the electromagnetic waves travel along different paths of varying lengths. The interaction between these waves causes multipath fading at a specific location, and the strengths of the waves decrease as the distance between the transmitter and receiver increases. Aside from direct application in propagation modeling, GIS functionality is clearly essential in preparing data for the construction of a propagation-specialized database. Many models require detailed geographic information, i.e. location and material parameters [2].
The Cisco Aironet 1100 Series Access Point is used for the WLAN communication system at the entrance floor of the T-Block building in Yildiz Technical University. The indoor electromagnetic field measurements and the coverage area analysis were implemented according to these access point positions. The Access Point has the following main features [25]:
- 2.4 GHz IEEE 802.11g radio standard
- Configurable output power up to 100 mW
- Physical dimensions: 10.4 cm wide, 20.5 cm high, 3.8 cm deep
- Integrated 2.2 dBi dipole antennas
- Up to 54 Mbps data rate for a range of 27 m

With the development of science and technology, wireless LANs, Global System for Mobile communications (GSM), TV-radio transmitters, base stations etc. are commonly used for personal, industrial and commercial aims at every step of life. The risk factor of electromagnetic pollution for the environment and human health has been discussed by many scientists, and a lot of research has been done in developed countries. As a result of this, the electromagnetic radiation, density and frequency of sources must be kept under control as described in standards. This study also interrogates the radiation level of WLAN (2.4 GHz) and provides some insight for these areas of research.

Spatial interpolation is a procedure for estimating the values of properties at unsampled locations based on a set of observed values at known locations. A large number of interpolation methods (Inverse Distance Weighted, Spline, Natural Neighbors, Kriging, etc.) have been developed [16]. It is not possible to measure every point to obtain data; therefore, values at unmeasured points are predicted by interpolation methods. Interpolation methods must be chosen according to the type of the modeled data in order to obtain more accuracy. Besides, an adequate number and an efficient distribution of measurements provide reliable results.
The Kriging method assumes that the distance or direction between sample points reflects a spatial correlation that can be used to explain variation in the surface. Kriging is a multi-step process; it includes exploratory statistical analysis of the data, variogram modeling, creating the surface and, optionally, exploring a variance surface. Inverse Distance Weighted and Spline are referred to as deterministic interpolation methods, because they are directly based on the surrounding measured values or on specified mathematical formulas that determine the smoothness of the resulting surface. A second family of interpolation methods consists of geostatistical methods such as Kriging, which are based on statistical models that include autocorrelation (the statistical relationship among the measured points). Because of this, these techniques not only have the capability of producing a prediction surface, but can also provide some measure of the certainty or accuracy of the predictions. There are two important Kriging methods in use: Ordinary Kriging and Universal Kriging. Ordinary Kriging is the most general and widely used of the Kriging methods. It assumes that the constant mean is unknown. Universal Kriging assumes that there is an overriding trend in the data and that it can be modeled by a deterministic function, a polynomial. This polynomial is subtracted from the original measured points, and the autocorrelation is modeled from the random errors. Once the model is fit to the random errors, the polynomial is added back before making a prediction, to give meaningful results [17].
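As an illustration of the ordinary Kriging predictor, the following is a minimal sketch (not the ArcGIS implementation used in the study): it assumes an exponential variogram with hypothetical sill, range and nugget parameters, and solves the ordinary Kriging system directly; in practice the variogram model and its parameters are fitted to the data first.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=1.0, nugget=0.0):
    """Ordinary Kriging prediction at xy0 from samples (xy, z),
    assuming the exponential variogram gamma(h) = nugget + sill*(1 - exp(-h/rng))."""
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-h / rng))

    n = len(z)
    # Pairwise distances between the sample points.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary Kriging system: [[Gamma, 1], [1^T, 0]] [w, mu] = [gamma_0, 1];
    # the last row/column enforces that the weights sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)

# Four samples at the corners of a unit square; predict at the centre.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
centre = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

With a zero nugget the predictor interpolates exactly at the sample points, and by symmetry the centre prediction here equals the sample mean.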
The basic processing elements of an ANN are the neurons (units). A neuron has five basic parts: input, weight, summation function, activation function and output, as shown in Figure 2a. These units are interconnected by weighted links to form a network. The multi-layer ANN model is typically composed of three parts: an input layer, one or more hidden layers, and an output layer, as shown in Figure 2b. The weights are the connections between neurons, while the activation functions are linear or non-linear algebraic functions. When a pattern is presented to the network, weights and neurons are adjusted so that a particular output is obtained. Neural networks provide a learning rule for modifying their weights and neurons. Once a neural network is trained to a satisfactory level, it can be used with novel data. Training techniques can be either supervised or unsupervised; supervised training methods are adopted for the interpolation problem [12]. ANNs have recently started to be used for spatial data interpolation in an attempt to overcome some of the limitations of more traditional methods [6]. New solutions for spatial interpolation in GIS should be produced with more tools modeling different ANNs, and their results need to be discussed. In the general ANN form, a unit in the network sums the weighted inputs from the links feeding into it. The summation function is

$NET_j^a = \sum_{k=1}^{n} A_{kj} C_k^a$  (1)

where $A_{kj}$ and $C_k^a$ are the matrices of weights and outputs, respectively. The activation function applied to both the hidden and output layers, a non-linear sigmoid function, is

$C_j^a = \dfrac{1}{1 + e^{-(NET_j^a + \beta_j^a)}}$  (2)

where $\beta_j^a$ is the threshold unit, whose output is constant and equal to one. The output is then fed to the other units linked to it. In this study, during the training of a feed-forward network the weights of the network are adjusted in a process called back-propagation.
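The summation and activation functions above can be combined into a single forward pass. A minimal sketch of the 3-15-1 layout used later in the study, with random (untrained) weights for illustration:

```python
import numpy as np

def sigmoid(net, beta):
    """Activation from the text: C = 1 / (1 + exp(-(NET + beta)))."""
    return 1.0 / (1.0 + np.exp(-(net + beta)))

def forward(x, W1, beta1, W2, beta2):
    """One forward pass through a 3-15-1 feed-forward network.

    Each layer computes the weighted sum NET_j = sum_k A_kj * C_k
    and then applies the sigmoid activation above.
    """
    hidden = sigmoid(W1 @ x, beta1)      # 15 hidden-unit outputs
    return sigmoid(W2 @ hidden, beta2)   # single output (field value)

# random illustrative weights; a real network would be trained first
rng = np.random.default_rng(0)
W1, beta1 = rng.normal(size=(15, 3)), np.zeros(15)
W2, beta2 = rng.normal(size=(1, 15)), np.zeros(1)
y = forward(np.array([0.1, 0.2, 0.3]), W1, beta1, W2, beta2)
print(y)   # a value in (0, 1), the sigmoid's range
```

Because the output unit is also a sigmoid, the network's raw output lies in (0, 1); in practice the target field values would be scaled into that range before training and rescaled afterwards.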
As the algorithm's name implies, the errors (and therefore the learning) propagate backwards from the output nodes to the inner nodes so as to minimize the error, which is the difference between the output of the net and the desired output. Technically speaking, back-propagation is used to calculate the gradient of the error of the network with respect to the network's modifiable weights. This gradient is almost always then used in a simple stochastic gradient descent algorithm to find weights that minimize the error. The implementation of an ANN requires three main steps: model and architecture selection, training (also called learning) and independent performance assessment (testing). First the appropriate network model and architecture are selected [6]. In order to determine the best network topology, the samples chosen as input data, the number of neurons in the hidden layer, the number of iterations, and the learning and momentum rates are varied over several combinations until an acceptable accuracy is obtained. The network with the lowest error is selected. ANN techniques are reviewed in detail by Freeman and Skapura [18] and Bishop [19]. The study area is the entrance floor of Yildiz Technical University's T-Block building at the Besiktas Campus in Istanbul, Turkey. In order to produce a map and a 3-dimensional model of the study area, the T-Block building, the observation points and the details inside the building were surveyed by geodetic methods using a total station. All details of the T-Block building (classrooms, corridors, stairs, doors, columns, central heating radiators and the access points of the WLAN (2.4 GHz)) were surveyed. In addition, the electromagnetic field values at 1085 observation points inside the T-Block building were measured. The electromagnetic field measurements, which are used for analyzing and predicting the electromagnetic coverage area, were performed on the entrance floor of the T-Block building.
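The gradient calculation described at the start of this section can be sketched as one explicit back-propagation update step for a 3-15-1 sigmoid network. This is a minimal illustration with invented data, not the Visual Basic/ArcObjects implementation used in the study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One back-propagation update for a 3-15-1 sigmoid network.

    Computes the gradient of the squared error with respect to the
    weights and takes one gradient-descent step; returns the updated
    parameters and the error before the update.
    """
    h = sigmoid(W1 @ x + b1)                   # forward pass, hidden layer
    y = sigmoid(W2 @ h + b2)                   # forward pass, output
    err = 0.5 * ((y - t) ** 2).item()
    d_out = (y - t) * y * (1.0 - y)            # output-layer delta
    d_hid = (W2.T @ d_out) * h * (1.0 - h)     # error propagated backwards
    W2 = W2 - lr * np.outer(d_out, h)
    b2 = b2 - lr * d_out
    W1 = W1 - lr * np.outer(d_hid, x)
    b1 = b1 - lr * d_hid
    return W1, b1, W2, b2, err

# train on a single invented sample: normalized coordinates -> field value
rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.5, size=(15, 3)), np.zeros(15)
W2, b2 = rng.normal(scale=0.5, size=(1, 15)), np.zeros(1)
x, t = np.array([0.2, 0.5, 0.8]), np.array([0.7])
errs = []
for _ in range(200):
    W1, b1, W2, b2, e = backprop_step(x, t, W1, b1, W2, b2)
    errs.append(e)
```

Repeating the step drives the error down; the learning rate of 0.5 and the 200 iterations here are illustrative choices, the latter matching the iteration count reported later in the study.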
In order to cover the floor symmetrically, 217 points were chosen along straight lines in the corridor, which has an area of 150 square meters. To analyze the 3-dimensional electromagnetic coverage, the measurements were repeated at five different height levels: 50 cm, 100 cm, 140 cm, 215 cm and 290 cm above the floor. Electromagnetic measurements were performed with an EMR-300 radiometer at every single point, and the device was fixed in a constant position on a tripod. The EMR-300 radiometer is a versatile system for measuring electromagnetic fields. After the measurement system was set up, the device was left on for at least three minutes at each position to obtain the average electromagnetic field value in units of volts per meter (V/m). The same measurement procedure was repeated at every point. The investigated Cisco Aironet 1100 Series Access Point is near the top center of the corridor, attached to the outside walls of the classrooms at a height of 290 cm above the floor. A GIS is a computer system capable of capturing, storing, analyzing and displaying geographically referenced information, that is, data identified according to location. The power of GIS comes from the ability to relate different pieces of information in a spatial context and to reach conclusions about these relationships [20]. A GIS is built around an integrated database that supports the functions of all units that need spatial processing or even mapping [21]. Although numerous definitions of geographic information and GIS can be found in the literature, all focus on the concept of geo-referencing: the association of locations in the geographic domain with the properties of those locations [22].
In this study, the proposed GIS includes the maps of the study area, a 3-dimensional model of the electromagnetic propagation environment, the electromagnetic field values of the observation points, the electromagnetic coverage represented by interpolation patterns, the number, position and transmitter power of the access points, and information about electromagnetic pollution. ArcGIS, an integrated collection of GIS software products, was used for this study. ArcGIS Desktop provides a collection of software products that create, edit, import, map, query, analyze and publish geographic data. The T-Block building and the observation points were mapped in the national coordinate system. A personal geodatabase was created, and the electromagnetic field value data were stored in that database. Figure 3 illustrates an attribute table of the points at 100 cm above the floor: electromagnetic field values in units of volts per meter (V/m) and electromagnetic power values in units of decibels (dB) calculated by equation (3). The power received at distance d can be calculated in terms of the power flux density and the effective aperture of the receiving antenna. The relation between the electric field and the received power is given by

$P(d)_{dB} = 10 \log \left( \dfrac{|E(d)|^2 G_r \lambda^2}{480 \pi^2} \right)$  (3)

where $G_r$ is the receiver antenna gain, $\lambda = c / f_0$ is the wavelength, $c = 3 \times 10^8$ m/s is the velocity of light and $f_0 = 2.4$ GHz is the operating frequency of the wireless transmitter. In this calculation the receiver antenna gain is assumed to be unity. In addition, the parameters of the materials (iron, steel, wood, glass, concrete, etc.) used in the construction of the T-Block were stored in the database in order to analyze reflection, penetration, diffraction and scattering effects. All the details were determined and transferred into the GIS in order to present data about the propagation environment. Thus, the proposed system makes it possible to perform queries and analyses and to use the results.
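The field-to-power conversion used for the attribute table can be evaluated directly. A minimal sketch, assuming unity receiver gain and f0 = 2.4 GHz as stated in the text:

```python
import math

def received_power_db(e_field, f0=2.4e9, g_r=1.0):
    """Convert a measured electric field E (V/m) to received power (dB)
    via P(d)_dB = 10*log10(|E|^2 * Gr * lambda^2 / (480 * pi^2)),
    with unity receiver gain and f0 = 2.4 GHz as in the text.
    """
    lam = 3.0e8 / f0                     # wavelength, lambda = c / f0
    return 10.0 * math.log10(e_field ** 2 * g_r * lam ** 2
                             / (480.0 * math.pi ** 2))

# the mapped field values span roughly 0.15-0.38 V/m
for e in (0.15, 0.25, 0.38):
    print(round(received_power_db(e), 2))
```

A field reading of 0.25 V/m, for example, converts to a power value in the high minus-sixties of dB, consistent with the power ranges quoted for the interpolation maps.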
A map of the T-Block entrance floor, the access point (AP) and the electromagnetic measurement points (observation points) in the corridor at 100 cm above the floor, with their values classified into five different colors, is shown in Figure 4. The colors range from 0.15 V/m to 0.38 V/m; yellow indicates lower electromagnetic field values and red indicates higher values. The access point is near the top center of the corridor wall, 290 cm above the floor, and provides the wireless communication. The propagation environment was also represented in 3 dimensions. The T-Block building was modeled in order to provide 3-dimensional viewing and presentation, as shown in Figure 5. The 1085 observation points at five different height levels were added to the 3D model, as shown in Figure 5c. The measurement points were interpolated separately for the five height levels (50, 100, 140, 215 and 290 cm above the floor) by the Kriging method. The Kriging interpolation tool is found under the "Interpolate to Raster" menu in the 3D Analyst module of ArcGIS. The Kriging tool uses two functions for selecting the neighboring points in the interpolation: fixed and variable. In addition, one of two Kriging methods (ordinary and universal) and one of several semivariogram models (spherical, circular, exponential, Gaussian and linear) are chosen according to the data type and distribution. The properties of the Kriging interpolation tool are reviewed in detail by Bratt and Booth [17]. Figure 6 illustrates the Kriging interpolation pattern of the power values (dB) of the observation points at 100 cm above the floor and the position of the access point (AP). The colors range from -68.86 dB to -64.97 dB; blue indicates lower electromagnetic power values and red indicates higher values. For this study, ordinary Kriging with a spherical semivariogram model was chosen.
The graph of the spherical semivariogram model (major range: 11.129; partial sill: 1.7416; nugget: 0.89699), along with the experimental points at 100 cm above the floor, is shown in Figure 7. In this study, a multilayer feed-forward back-propagation ANN module was integrated into the GIS by programming ArcObjects with Visual Basic to interpolate the indoor electromagnetic field measurements. Indoor radio wave propagation was modeled with a 3-dimensional GIS dataset in order to analyze the electromagnetic coverage pattern through the neural network interface. In contrast to the Kriging interpolation, the measurements at all five height levels enter the ANN interpolation together, and users can query any altitude. ArcObjects is a set of programmable objects, and Visual Basic is an object-oriented programming language that comes included with ArcGIS. ArcObjects is a set of computer objects specifically designed for programming applications; it can also be used to program other application elements such as toolbars, buttons, tools, menus and commands [24]. In this study, an artificial neural network (ANN) composed of one input layer with (k=3) neurons representing the x-y-z coordinates, one hidden layer with (j=15) neurons and an output layer with (m=1) a single neuron representing the electromagnetic field value (V/m) was used. In addition, threshold matrices were applied to the hidden and output layers. The back-propagation training algorithm was implemented on the feed-forward network. The x-y-z coordinates were used as input data; they were reduced by translating one point to the origin (0-0-0) of the coordinate system so as to condition the inputs for the transfer function, and the other measurement points were then referenced to that point. The transfer function applied to both the hidden and output layers was a non-linear sigmoid function. Figure 8 represents the topology of the neural network.
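The fitted spherical semivariogram quoted above (nugget 0.89699, partial sill 1.7416, major range 11.129) can be evaluated directly; a minimal sketch:

```python
def spherical_gamma(h, nugget=0.89699, psill=1.7416, a=11.129):
    """Spherical semivariogram with the parameters fitted in the study
    (nugget, partial sill and major range from Figure 7)."""
    if h == 0.0:
        return 0.0                      # gamma(0) = 0 by convention
    if h >= a:
        return nugget + psill           # the sill, reached at the range
    return nugget + psill * (1.5 * h / a - 0.5 * (h / a) ** 3)

print(spherical_gamma(11.129))  # sill = nugget + partial sill
```

Beyond the major range the semivariance stays at the sill, meaning points farther apart than about 11.1 units contribute no additional spatial correlation to the Kriging estimate.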
The 1085 measurement points were separated into two groups: training data (672 points) and test data (413 points). First, the neural network was trained with the 672 input points, and the back-propagation calculation was performed for every training point in order to distribute the errors to the weights. After 200 iterations the final updated weight matrix was found. The 413 test points were then fed to the updated network with the optimized weight matrices, and the average error and accuracy of the neural network were calculated. The accuracy values of the selected 3-15-1 ANN model and of some of the other neural networks trained for spatial interpolation are shown in Table 1. The accuracy of the results was assessed by the root mean square error (RMSE) and the mean absolute error (MAE): the RMSE is the square root of the mean squared difference between the predicted and observed electromagnetic power, and the MAE is the mean of the absolute errors. The proposed neural network (3-15-1) module for spatial interpolation was programmed with the Visual Basic Editor. The neural network interface provides electromagnetic field and power values with adequate accuracy for any coordinate (x-y-z) input within the boundary of the measurement area. The ANN module programmed in the GIS has two parts: the "Enter xyz" and "Run ANN" buttons. The "Enter xyz" button is used to select points from the map and assign the point coordinates to the ANN. When a point is selected on the map screen with the cursor, the x and y coordinates are assigned automatically; the z coordinate is then entered through an input box called "Enter Altitude", as shown in Figure 9a, because an altitude value cannot be selected with the cursor from the 2-dimensional map view. The user can enter an altitude value between 93.069 and 95.571, which are the limits of the measurement interval. If the user enters a value outside this interval, a warning message is given, as shown in Figure 9b.
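The two error measures used to assess accuracy can be computed as follows; the prediction/observation pairs here are invented for illustration:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    d = np.asarray(pred, dtype=float) - np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(pred, obs):
    """Mean absolute error between predicted and observed values."""
    d = np.asarray(pred, dtype=float) - np.asarray(obs, dtype=float)
    return float(np.mean(np.abs(d)))

# invented power values (dB) for illustration
pred = [-66.9, -65.2, -68.1]
obs = [-66.5, -65.8, -67.9]
print(rmse(pred, obs), mae(pred, obs))
```

RMSE is never smaller than MAE and penalizes large individual errors more heavily, which is why the two are usually reported together.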
After the coordinates of the point are entered, the ANN interpolation interface is executed with the "Run ANN" button. Figure 10 illustrates the ANN interface and the performance assessment of the test results by error values (V/m). The user interface consists of the following sections, as shown in Figure 10a. The files containing the 672 training points, the 413 test points and the observed electromagnetic field values are loaded from the computer. After training, the 413 test points are fed to the updated network with the optimized weight matrices, and the average error and accuracy of the neural network are calculated by equation (4). The "Result" button executes another interface, shown in Figure 10b, which is used to compare the ANN outputs with the observed (expected) electromagnetic field values of the test data. The coordinates of the points are added from the map screen, as mentioned before, and can be rearranged in the XYZ boxes. The electromagnetic field (V/m) and power (dB) values are then predicted. The performance rate of the ANN (3-15-1 network) is calculated by

$P = \dfrac{D}{T} \cdot 100$  (4)

where D is the number of accurate predictions of the ANN output compared with the observed (expected) electromagnetic field values of the test data and T is the number of test points (413). As a result, the expected accuracy of the network is roughly between 85% and 90%, and this level of error is acceptable for the interpolation of electric field values and coverage prediction. The interpolation pattern of the multilayer feed-forward back-propagation ANN (3-15-1 network) for the observation points at 100 cm above the floor and the position of the access point (AP) are shown in Figure 11. The colors range from -68.73 dB to -64.62 dB; blue indicates lower electromagnetic power values and red indicates higher values. The neural network is finally formed with the optimized weight matrices, and these matrices are set in the feed-forward network.
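The performance rate counts "accurate" predictions, but the text does not state the exact accuracy criterion, so the tolerance in the sketch below is an assumed parameter, and the field values are invented:

```python
def performance_rate(pred, obs, tol):
    """Performance rate P = (D / T) * 100, where D counts predictions
    within `tol` of the observed value and T is the number of test
    points. The tolerance is an assumed criterion; the text does not
    state the exact definition of an 'accurate' prediction.
    """
    d = sum(1 for p, o in zip(pred, obs) if abs(p - o) <= tol)
    return 100.0 * d / len(obs)

# invented field values (V/m): two of three predictions within 0.01 V/m
print(performance_rate([0.251, 0.300, 0.190], [0.250, 0.320, 0.185], 0.01))
```

With the study's 413 test points and an 85-90% performance rate, D would correspond to roughly 350-372 predictions meeting whatever criterion the authors applied.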
After setting up the final neural network, the WLAN coverage was analyzed at the 100 cm altitude level, which represents the usual height of a WLAN receiver. The coordinate values (x-y-z) defining the 100 cm level were applied to the input nodes of the network, and the predicted electric field values were given by the output node. The corresponding outputs for the input coordinate values were converted to units of received power (dB) and then plotted as Figure 11, representing the cross-sectional radiation pattern of the WLAN access point. The predicted coverage figure shows a linear propagation pattern varying between -68.73 dB and -64.62 dB. In several trials, it was noticed that various types of WLAN adapters could access the system even below the -70 dB threshold. Thus, within a range of 27 m, the radiating WLAN access point can cover almost the whole corridor and sustain up to 54 Mbps communication with an IEEE 802.11g compliant WLAN adapter [25]. However, actual throughput may vary with numerous environmental factors, and the most efficient communication data rate cannot be achieved at the low power level points shown in Figure 11. Moreover, this electromagnetic coverage does not lead to electromagnetic pollution, owing to the low power levels [26]. The electromagnetic coverage in the propagation environment can now be modeled by both the ANN prediction and the Kriging interpolation method. The network architecture selected in this case was 3-15-1, that is, three input nodes, 15 hidden nodes and one output node. The Kriging interpolation pattern in Figure 6 shows that the electromagnetic power values of the WLAN (2.4 GHz) change between -68.86 dB and -64.97 dB at 100 cm above the floor, with sudden changes in the radio wave propagation due to environmental effects (reflection, penetration, diffraction and scattering).
In contrast, the ANN prediction pattern in Figure 11 shows a linear propagation varying between -68.73 dB and -64.62 dB at the same height. It appears that the feed-forward back-propagation ANN (3-15-1 network) used for spatial interpolation generalizes according to what the network has learned, whereas Kriging captures the sudden changes in the electromagnetic power distribution. The predictive power of the two interpolation models was compared using the root mean square error (RMSE) and the mean absolute error (MAE): the RMSE is the square root of the mean squared difference between the predicted and observed electromagnetic power, and the MAE is the mean of the absolute errors. Table 2 shows the RMSE and MAE of the fully trained 3-15-1 network prediction and of the Kriging interpolation of the electromagnetic power values at the 1085 observed points; Kriging interpolation proves more accurate than ANN interpolation for these electromagnetic field measurements. An advantage of the ANN module programmed in the GIS is that the ANN prediction uses a back-propagation algorithm, updating itself by optimizing the weight matrices, which enables a three-dimensional (3D) query. In this study, a multilayer feed-forward back-propagation neural network was developed to interpolate the electromagnetic field measurements by programming a tool with Visual Basic in GIS, and coverage prediction was investigated. The selected ANN and Kriging were compared using the procedures described above. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation; however, Kriging interpolation is more accurate than the ANN predictions. Concerning the interpolation patterns, the ANN (3-15-1), composed of one input layer with (k=3) neurons representing the x-y-z coordinates, one hidden layer with (j=15) neurons and an output layer with (m=1) a single neuron representing the electromagnetic field value (V/m), generalized the interpolated data.
Kriging, by contrast, captures the sudden changes in the electromagnetic power distribution. The expected accuracy of the neural network is roughly between 85% and 90%, and this level of error is acceptable for the interpolation of electromagnetic field values and coverage prediction. This paper demonstrated that spatial interpolation with neural networks is a viable technique for electromagnetic power estimation. The proposed GIS provides the indoor radio wave propagation environment and electromagnetic coverage, 3-dimensional modeling of the study area, the number, position and transmitter power of the access points, and the electromagnetic radiation level. With the GIS, it is possible to obtain information about the power of the wireless communication and the efficiency of the access points. It was observed that the radiating WLAN access point can cover almost the whole study area and that this electromagnetic coverage does not lead to electromagnetic pollution, owing to the low power levels. As a result, the proposed GIS with ANN prediction helps a telecom radio frequency designer make queries about the current electromagnetic coverage, analyze pollution in a given propagation environment and determine the communication signal quality. Future research may address a number of open issues: other ANNs, such as Hopfield networks, can be developed as spatial interpolation tools in GIS, and the electromagnetic coverage of GSM, TV and radio transmitters and base stations, together with their effects on human health in cities, can be analyzed with GIS using ANNs.
binomial expansion
February 2nd 2010, 08:06 AM
Hi, could somebody please help me? I really don't understand how to approach this question. The coefficient of x^2 is 3/8 in the expansion of (1+x/n)^n. Find n. Any sort of help would be much appreciated.
February 2nd 2010, 08:34 AM
The binomial expansion of $\bigl(1+\tfrac xn\bigr)^n$ is $\Bigl(1+\frac xn\Bigr)^n = 1 + {n\choose1}\Bigl(\frac xn\Bigr) + {n\choose2}\Bigl(\frac xn\Bigr)^2 + \ldots + \Bigl(\frac xn\Bigr)^n.$ You can see from that what the coefficient of $x^2$ is (in terms of n). Put it equal to 3/8 and solve for n.
1 Aug 2010 15:06
Problem using solve
Julien Martin <balteo <at> gmail.com> 2010-08-01 13:06:09 GMT
I am trying to solve the following equation using Maxima:
Using the following command: solve([eq], [SS]); I get a solution that itself contains SS!! i.e.
What am I getting wrong?
Thanks in advance,
Maxima mailing list
Maxima <at> math.utexas.edu
Topic: Least-square optimization with a complex residual function
Replies: 4   Last Post: Oct 30, 2010 11:16 AM
elgen — Re: Least-square optimization with a complex residual function — Posted: Oct 28, 2010 10:20 PM
On 10-10-28 09:38 PM, kym@kymhorsell.com wrote:
> elgen<sket16@no.spam.hotmail.com> wrote:
>> I have a question on the least-square optimization with a complex residual function. The residual function is r(z_1, z_2), in which z_1 and z_2 are complex variables.
> [...]
>> In my case r(z_1, z_2) is a complex function. If I use the Euclidean norm (conjugated inner product), the cost function becomes \sum_i conj(r)r
> So your resid is real now? OK. Change your mind, that's alright. :)
>> I am stuck on how to calculate the gradient of this cost function as conj(r) is not an analytic function and the gradient needs to take the derivative with respect to z_1 and z_2.
> Ahhh. At worst you can treat the resid as 2 SoS -- the real parts of r and the imag parts of r.
> For more general resid functions maybe think in terms of Euclid form.
I understand that you refer residual to conj(r) r in my case. How would I proceed to calculate its gradient? Would you mind being more specific? What is "SoS"?
2nd Midterm Sample Exam by Knicholls
Sample Term Test 2A
1. A variable X has a distribution which is described by the density curve shown below: What proportion of values of X fall between 1 and 6? (A) 0.550 (B) 0.575 (C) 0.600 (D) 0.625 (E) 0.650
2. Which of the following statements about a normal distribution is true? (A) The value of µ must always be positive. (B) The value of σ must always be positive. (C) The shape of a normal distribution depends on the value of µ. (D) The possible values of a standard normal variable range from −3.49 to 3.49. (E) The area under a normal curve depends on the value of σ.
3. A variable X follows a uniform distribution, as shown below: The distribution of X has an interquartile range equal to 4 (since the middle 50% of the data are contained between the values 2 and 6). Consider the variables with the distributions shown below (assume that the heights of the curves are such that they are both valid density curves): The interquartile range of density curve (I) is ___ and the interquartile range of density curve (II) is ___. (A) (I) less than 4, (II) greater than 4 (B) (I) greater than 4, (II) less than 4 (C) (I) equal to 4, (II) equal to 4 (D) (I) less than 4, (II) less than 4 (E) (I) greater than 4, (II) greater than 4
4. A variable X has a uniform distribution on the interval from 2 to 6. P(4.2 < X < 5.7) is equal to: (A) 0.375 (B) 0.475 (C) 0.575 (D) 0.675 (E) 0.775
5. A variable Z has a standard normal distribution. What is the value b such that P(−0.37 ≤ Z ≤ b) = 0.5749? (A) 2.02 (B) 1.48 (C) 0.97 (D) 0.63 (E) 1.72
6. What is P(Z > −0.75)? (A) 0.2266 (B) 0.7734 (C) 0.0401 (D) 0.9599 (E) 0.4289
7. A variable X has a normal distribution with mean 100. It is known that about 47.5% of the values of X fall between 85 and 100. What is the approximate value of the standard deviation σ? (A) 5 (B) 7.5 (C) 12.5 (D) 15 (E) 30
8. Suppose that the variable Z follows a standard normal distribution. If P(−b < Z < b) = 0.92, then b...
Trigo surds question
October 12th 2012, 11:51 PM #1
(a) Prove that sin 3A = 3 sin A - 4 sin^3 A. (b) If A = 36^o, show that sin 3A = sin 2A. (c) Deduce that cos 36^o = (1 + 5^(1/2))/4.
I've managed to solve (a) & (b), but I'm stuck at (c). I've figured out (c) has something to do with the fact that cos 36^o = cos (63 - 27)^o and that the angles 63^o, 27^o and 90^o form a right angle, which matches the hypotenuse (5^(1/2)), adjacent (1) and opposite (2) somehow. I tried substituting (a) and (b) into the working for (c) too, and I get the left hand side as 0.5(3 - 4 sin^2 A) and the right hand side as 0.5(sin A - 1)/cos A. Thank you!
October 13th 2012, 12:16 AM #2
Re: Trigo surds question
Using parts a) and b), we may write:
$3\sin A-4\sin^3A=2\sin A\cos A$
Since $\sin A \ne 0$ we may divide through by this to get:
$3-4\sin^2A=2\cos A$
Now, using a Pythagorean identity, we have:
$3-4(1-\cos^2A)=2\cos A$
Now, arrange this as a quadratic in $\cos A$ and take the positive root.
Merion Park, PA Trigonometry Tutor Find a Merion Park, PA Trigonometry Tutor ...Two years in this program not only helped me better my communication of science to youth, but gave me more experience in tutoring high school level science. In addition, local families contacted me to tutor students in high school level chemistry and help with AP Chemistry exam preparations. Pr... 9 Subjects: including trigonometry, chemistry, algebra 2, geometry ...My background in academia and industry allows me to teach calculus from either a theoretical or an applied approach, depending on student needs and interests. I studied statistics as part of the actuarial exam process. Two of the early exams focused on statistical methods, including regression, parameter fitting, Bayesian, and non-Bayesian techniques. 18 Subjects: including trigonometry, calculus, statistics, geometry ...I studied at West Chester University of Pennsylvania for Physics and am currently at Philadelphia University for Mechanical Engineering. I have worked with students in math and science for over two years from middle school to college levels. I specialize in math and science classes from algebra up to calculus and physics. 9 Subjects: including trigonometry, calculus, physics, geometry ...I received an A. I used these topics in many chemical engineering courses after that. I received a Bachelor's in chemical engineering at Rensselaer Polytechnic Institute in 2010. 25 Subjects: including trigonometry, chemistry, calculus, physics ...I have worked three semesters as a computer science lab TA at North Carolina State University, as well as three semesters as a general math tutor for the tutoring center at the Community College of Philadelphia. I have tutored privately in both these subjects for many years. I have had the opportunity to work with a wide variety of students from all backgrounds and age groups. 
22 Subjects: including trigonometry, calculus, geometry, statistics
Puzzling Adventures: The Bermuda Toy Car Race Main Puzzle Solution 1. We can express the entire trajectory in terms of A and B. It will depend on the car speed b, the spinning speed s of the disks and the distances d, e and r. See: http://cs.nyu.edu/cs/faculty/shasha/papers/eddyfig1.doc Assuming angle A from the start point, we want to calculate a distance to the edge of the disk. Because the disk moves slowly compared to the car, we could tentatively approximate the distance from the start point to the edge of the disk to be d/cos(A). (This distance assumes that the triangle from the origin point to the top of the disk and then to the point where the trajectory from the origin hits the disk is a right triangle, whereas in fact it's a slightly oblique triangle.) Therefore, the time to travel from the start to the disk is approximately d/(b cos(A)). We will correct this approximation later. This result implies that C = arcsin ((d tan(A))/r). Similarly, D = arcsin ((e tan(B))/r) and the distance to the destination from the disk is e/cos(B). The time to travel from the disk to the destination is e/(b cos(B)). Now, using our chord approach, and with reference to the figure below, we will seek an angle H such that the time it takes to traverse the chord in the frame of the rotating disk equals the time it takes the disk to rotate appropriately. Thus, H is uniquely determined by the constraint that we want the car to displace itself in the absolute plane by (180 – (C + D)) degrees. We'll call this the target angle displacement, or tad for short. The tad will be equal to the angle subtended by the chord (achord = 180 – 2H) plus the amount of rotation the disk does as the chord is being traversed. adisk = angle by which disk rotates while chord is traversed = s * time to traverse chord where time to traverse chord = length of chord/b and where length of chord = 2r cos(H). So, adisk = (2rs cos(H))/b and we have adisk + achord = tad. This result constrains H.
So, the total time equals the time to go from the start to the disk, plus the time to traverse the chord of the disk, plus the time to go from the disk to the destination. All of these times can be found, given A and B. Because the computer doesn't complain, I tried many values of A and B until I got the best value.

Now let's revisit that approximation. The distance from the source to the disk is a bit more complicated, in fact. To figure it out, note that the distance from the center of the disk to Y is r. Let X be the point where the perpendicular from Y to the dashed line hits the dashed line. [See the figure eddyfig1.doc.] Therefore, r cos(C) is the distance from the center of the disk to point X. (We don't know C yet, but we will figure it out by iteration.) Recall that the distance from the origin to the center of the disk is d + r: the distance from the origin to the topmost point of the disk plus the radius. Therefore, the distance from the origin to point X, denoted dsp for short, is dsp = d + (r − r cos(C)). This implies that the length of the line segment from the source to the disk is [d + (r − r cos(C))]/cos(A).

To determine the angle C by iteration, we start with the initial estimate dsp = d (which corresponds to the slanted distance d/cos(A) used above). This estimate, together with A, gives an estimate of the distance between X and Y: dsp tan(A). That in turn gives an estimate of C, namely C = arcsin((dsp tan(A))/r). This value of C allows us to recalculate dsp as d + (r − r cos(C)), and the cycle repeats until the values settle down.

The numerical conclusions are that the total time it takes, using the best route, is approximately 56.8 seconds. (It would be an even 60 seconds if the disk didn't move at all.) A and B are each 13 degrees. Coincidentally, C and D are each about 13.7 degrees. That means it is necessary to travel nearly 153 degrees around the disk, but because the disk is spinning quite fast, the angle subtended by the chord is only 114 degrees.
The time on the disk is just under 25.2 seconds, in which time the disk spins a bit more than 37.7 degrees. So the numbers match up to within about 1 degree, which is good enough for a race.

2. The highest average value of A and B that I get is about 15 degrees. This occurs at spin rates of between 125 and 145 degrees per minute. Note that higher rates will shrink the A and B values. At 10,000 degrees per second, A and B decrease to about 1 degree.
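The fixed-point iteration for C can be sketched as follows, writing dsp for the distance along the dashed line from the origin to X, so the slanted source-to-disk segment has length dsp/cos(A) and the X-Y distance is approximately dsp tan(A). It converges in a handful of iterations for modest A; the numbers in the example are made up, since the puzzle's actual d and r are not restated here.

```python
import math

def solve_C(d, r, A_deg, iters=30):
    """Iterate for the entry angle C.  dsp is the distance along the
    dashed line from the origin to point X; the slanted source-to-disk
    segment has length dsp / cos(A), and the X-Y distance is
    approximately dsp * tan(A)."""
    A = math.radians(A_deg)
    dsp = d                       # first pass: ignore the disk's curvature
    C = 0.0
    for _ in range(iters):
        C = math.asin(dsp * math.tan(A) / r)   # sin C = XY / r
        dsp = d + (r - r * math.cos(C))        # refined origin-to-X distance
    return math.degrees(C), dsp
```

Each pass enlarges dsp slightly (by r(1 − cos C)), which in turn enlarges C, so the converged C always exceeds the first-pass estimate arcsin((d tan A)/r).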
Paper 2: Digital Electronics

Next: Paper 2: Operating Systems Up: Michaelmas Term 2009: Part Previous: Paper 1: Discrete Mathematics Contents

This course is not taken by NST or PPST students.

Lecturer: Dr I.J. Wassell
No. of lectures and practical classes: 11 + 7
This course is a prerequisite for Operating Systems and Computer Design (Part IB).

The aims of this course are to present the principles of combinational and sequential digital logic design and optimisation at a gate level. The use of transistors for building gates is also covered.

• Introduction. Semiconductors to computers. Logic variables. Examples of simple logic. Logic gates. Boolean algebra. De Morgan's theorem.
• Logic minimisation. Truth tables and normal forms. Karnaugh maps.
• Number representation. Unsigned binary numbers. Octal and hexadecimal numbers. Negative numbers and 2's complement. BCD and character codes. Binary adders.
• Combinational logic design: further considerations. Multilevel logic. Gate propagation delay. An introduction to timing diagrams. Hazards and hazard elimination. Fast carry generation. Other ways to implement combinational logic.
• Introduction to practical classes. Prototyping box. Breadboard and dual in-line (DIL) packages. Wiring. Use of oscilloscope.
• Sequential logic. Memory elements. RS latch. Transparent D latch. Master-slave D flip-flop. T and JK flip-flops. Setup and hold times.
• Sequential logic. Counters: ripple and synchronous. Shift registers.
• Synchronous state machines. Moore and Mealy finite state machines (FSMs). Reset and self-starting. State transition diagrams.
• Further state machines. State assignment: sequential, sliding, shift register, one-hot. Implementation of FSMs.
• Circuits. Solving non-linear circuits. Potential divider. N-channel MOSFET. N-MOS inverter. N-MOS logic. CMOS logic. Logic families. Noise margin.
[2 lectures]

At the end of the course students should

• understand the relationships between combinational logic and Boolean algebra, and between sequential logic and finite state machines;
• be able to design and minimise combinational logic;
• appreciate tradeoffs in complexity and speed of combinational designs;
• understand how state can be stored in a digital logic circuit;
• know how to design a simple finite state machine from a specification and be able to implement this in gates and edge-triggered flip-flops;
• understand how to use MOS transistors.

Recommended reading

* Harris, D.M. & Harris, S.L. (2007). Digital design and computer architecture. Morgan Kaufmann.
Katz, R.H. (2004). Contemporary logic design. Benjamin/Cummings. The 1994 edition is more than sufficient.
Hayes, J.P. (1993). Introduction to digital logic design. Addison-Wesley.

Books for reference:

Horowitz, P. & Hill, W. (1989). The art of electronics. Cambridge University Press (2nd ed.) (more analog).
Weste, N.H.E. & Harris, D. (2005). CMOS VLSI Design: a circuits and systems perspective. Addison-Wesley (3rd ed.).
Mead, C. & Conway, L. (1980). Introduction to VLSI systems. Addison-Wesley.
Crowe, J. & Hayes-Gill, B. (1998). Introduction to digital electronics. Butterworth-Heinemann.
Gibson, J.R. (1992). Electronic logic circuits. Butterworth-Heinemann.
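As a quick illustration of the "Boolean algebra / De Morgan's theorem" lecture topic, both laws can be verified exhaustively over all truth assignments (a sketch, not part of the official course material):

```python
from itertools import product

# Check both De Morgan laws over every truth assignment:
#   not (a and b) == (not a) or (not b)
#   not (a or b)  == (not a) and (not b)
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
```

The same exhaustive-truth-table approach is exactly what a Karnaugh map organizes by hand for minimisation.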
Merging asymptotic expansions for cooperative gamblers in generalized St. Petersburg games. (English) Zbl 1199.60052

Merging asymptotic expansions are established for the distribution functions of suitably centered and normed linear combinations of winnings in a full sequence of generalized St. Petersburg games, where a linear combination is viewed as the share of any one of the cooperative gamblers, who play with a pooling strategy. The expansions are given in terms of Fourier-Stieltjes transforms and are constructed from suitably chosen members of the classes of subsequential semistable infinitely divisible asymptotic distributions for the total winnings of the players and from their pooling strategy, where the classes themselves are determined by the two parameters of the game. For all values of the tail parameter, the expansions yield best possible rates of uniform merge. Surprisingly, it turns out that for a subclass of strategies, not containing the averaging uniform strategy, the merging approximations reduce to asymptotic expansions of the usual type, derived from a proper limiting distribution. The Fourier-Stieltjes transforms are shown to be numerically invertible in general, and it is also demonstrated that the merging expansions provide excellent approximations even for very small

MSC:
60F05 Central limit and other weak theorems
60E07 Infinitely divisible distributions; stable distributions
60G40 Stopping times; optimal stopping problems; gambling theory
60G50 Sums of independent random variables; random walks
91A60 Probabilistic games; gambling

References:
[1] S. Csörgő, Rates of merge in generalized St. Petersburg games, Acta Sci. Math. (Szeged), 68 (2002), 815–847.
[2] S. Csörgő, A probabilistic proof of Kruglov's theorem on the tails of infinitely divisible distributions, Acta Sci. Math. (Szeged), 71 (2005), 405–415.
[3] S. Csörgő, Merging asymptotic expansions in generalized St. Petersburg games, Acta Sci. Math. (Szeged), 73 (2007), 297–331.
[4] S. Csörgő, Fourier analysis of semistable distributions, Acta Appl. Math., 96 (2007), 159–175. · Zbl 1117.60015 · doi:10.1007/s10440-007-9111-4
[5] S. Csörgő and R. Dodunekova, Limit theorems for the Petersburg game, in: Sums, Trimmed Sums and Extremes (M. G. Hahn, D. M. Mason and D. C. Weiner, eds.), Progress in Probability 23, Birkhäuser (Boston, 1991), pp. 285–315.
[6] S. Csörgő and Z. Megyesi, Merging to semistable laws, Teor. Veroyatn. Primen., 47 (2002), 90–109. [Theory Probab. Appl., 47 (2002), 17–33.]
[7] S. Csörgő and G. Simons, The two-Paul paradox and the comparison of infinite expectations, in: Limit Theorems in Probability and Statistics (I. Berkes, E. Csáki and M. Csörgő, eds.), Vol. I, János Bolyai Mathematical Society (Budapest, 2002), pp. 427–455.
[8] S. Csörgő and G. Simons, Laws of large numbers for cooperative St. Petersburg gamblers, Period. Math. Hungar., 50 (2005), 99–115. · Zbl 1113.60026 · doi:10.1007/s10998-005-0005-9
[9] S. Csörgő and G. Simons, Pooling strategies for St. Petersburg gamblers, Bernoulli, 12 (2006), 971–1002. · Zbl 1130.91018 · doi:10.3150/bj/1165269147
[10] J. Gil-Pelaez, Note on the inversion theorem, Biometrika, 38 (1951), 481–482.
[11] P. Kevei, Generalized n-Paul paradox, Statist. Probab. Lett., 77 (2007), 1043–1049. · Zbl 1138.60026 · doi:10.1016/j.spl.2006.08.027
[12] A. Martin-Löf, A limit theorem which clarifies the 'Petersburg paradox', J. Appl. Probab., 22 (1985), 634–643. · Zbl 0574.60032 · doi:10.2307/3213866
[13] Z. Megyesi, A probabilistic approach to semistable laws and their domains of partial attraction, Acta Sci. Math. (Szeged), 66 (2000), 403–434.
[14] G. Pap, The accuracy of merging approximation in generalized St. Petersburg games, preprint.
[15] V. V. Petrov, Limit Theorems of Probability Theory, Oxford Studies in Probability 4, Clarendon Press (Oxford, 1996).
[16] B. Rosén, On the asymptotic distribution of sums of independent identically distributed random variables, Ark. Mat., 4 (1962), 323–332. · Zbl 0103.11902 · doi:10.1007/BF02591508
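As background for readers unfamiliar with the game, here is a minimal simulation of the classical St. Petersburg game, the simplest member of the generalized family reviewed above (the generalized game's two-parameter payoff scheme is not reproduced here): a fair coin is tossed until the first head appears, at toss k, and the payoff is 2**k, so the expected winnings are infinite and sample means of pooled winnings drift upward with the number of games.

```python
import random

def petersburg(rng):
    """One play of the classical St. Petersburg game: toss a fair coin
    until the first head, at toss k; the payoff is 2**k."""
    k = 1
    while rng.random() < 0.5:
        k += 1
    return 2 ** k

rng = random.Random(12345)
winnings = [petersburg(rng) for _ in range(10000)]
assert all(w >= 2 and w & (w - 1) == 0 for w in winnings)  # payoffs are powers of two
```

The heavy right tail of these payoffs is what forces the semistable (rather than normal) limit theory that the expansions above refine.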
MathGroup Archive: August 1992 [00038]

[no subject]

• To: mathgroup at yoda.physics.unc.edu
• From: twj
• Date: Fri, 7 Aug 92 13:47:17 CDT

In reply to E.A.Kumm (kumm at mitlns.mit.edu)

>I have two Mathematica questions for the assembled multitudes.
> (1) I would like to take two quadric surfaces, e.g. two paraboloids
> 8-(x^2)-(y^2), (x^2)+3(y^2), graph the two together AND then
> graph the intersection on the xy plane "below" the two surfaces.
> In general, how do we get the surfaces to "cast a shadow".
> (2) Even if I calculate the intersection of the above myself,
> (x^2)+2(y^2)=4, how can I get Mma to graph this. In general,
> how do we graph functions expressed implicitly?

Look at the Guide to Standard Packages. There is an ImplicitPlot package. Alternatively you can use ContourPlot:

ContourPlot[ (x^2)+2(y^2), {x,-2,2}, {y,-2,2}, Contours -> {4.0}]

To project the intersection on the surface there is no built-in method which does exactly what you want. You can use the functions in Graphics`Graphics3D` to project Graphics3D objects onto the walls of the bounding box. Alternatively, with Mathematica V2.1 you can get the Graphics primitives such as Polygons and Lines which make up the ContourGraphics. With this lots of interesting things can be done:

p1 = Plot3D[ 8-(x^2)-(y^2), {x,-3,3}, {y,-3,3}]
p2 = Plot3D[ (x^2)+3(y^2), {x,-3,3}, {y,-3,3}]
conts = ContourPlot[ (x^2)+2(y^2), {x,-3,3}, {y,-3,3}, Contours -> {4.0}]
conts = Graphics[ conts]  (* Needs V2.1 to work !! *)
conts = Drop[ Part[ conts, 1], 1] ;
conts = conts //. {
  Polygon[ { a1___, {x_, y_}, a2___}] -> Polygon[ { a1, {x, y, 37.15}, a2}],
  Line[ { a1___, {x_, y_}, a2___}] -> Line[ { a1, {x, y, 37.15}, a2}] } ;
(* This makes the primitives into 3D primitives *)
Show[ p1, p2, Graphics3D[ conts]]

This puts the intersection on the top of the bounding box.

Tom Wickham-Jones
FluTE, a Publicly Available Stochastic Influenza Epidemic Simulation Model

Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.

Author Summary

Computer simulations can provide valuable information to communities preparing for epidemics. These simulations can be used to investigate the effectiveness of various intervention strategies in reducing or delaying the peak of an epidemic. We have made a detailed influenza epidemic simulator for the United States publicly available so that others may use the software to inform public policy or adapt it to suit their needs.

Citation: Chao DL, Halloran ME, Obenchain VJ, Longini IM Jr (2010) FluTE, a Publicly Available Stochastic Influenza Epidemic Simulation Model. PLoS Comput Biol 6(1): e1000656. doi:10.1371/
Editor: Angela R. McLean, University of Oxford, United Kingdom
Received: July 17, 2009; Accepted: December 22, 2009; Published: January 29, 2010
Copyright: © 2010 Chao et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the National Institute of General Medical Sciences MIDAS grant U01-GM070749, the National Institute of Allergy and Infectious Diseases grant R01-AI32042, and the Los Angeles County Department of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them [1]–[9]. Detailed computer simulations will play an important role in evaluating containment and mitigation strategies for future epidemics [8]. Although many simulation models have been described in the literature, few are publicly available. Releasing the source code of models would allow others to evaluate the quality of the simulation, replicate results, and alter and improve the model. We have released the source code for a new stochastic model of influenza epidemics, FluTE.

FluTE is an individual-based model capable of simulating the spread of influenza across major metropolitan areas or the continental United States. The model's structure is based on previously published work [3],[6], but FluTE incorporates a more sophisticated natural history of influenza, more realistic intervention strategies, and can run on a personal computer. Here, we describe the new model and illustrate how it can be used to study the dynamics of an epidemic and to investigate the population-level effects of interventions.

FluTE is an individual-based simulation model of influenza epidemics.
In this section, we describe the model's community structure, natural history of influenza, and simulated interventions. Briefly, all individuals in the model are members of social mixing groups, within which influenza is transmitted by random mixing. The model can simulate several intervention strategies, and these can either change the transmission characteristics of influenza (e.g., vaccination) or change the contact probabilities between individuals (e.g., social distancing). Interventions can occur before the epidemic or in response to an ongoing epidemic.

Community structure and social contacts

The simulation creates synthetic populations based on typical American communities. The population is divided into census tracts, and each tract is subdivided into communities of 500–3000 individuals based on earlier models [6],[10]. Each community is populated by randomly generated households of size 1–7 using the US-wide family size distribution from the 2000 Census (Table 1). The household is the closest social mixing group, within which contacts between individuals occur most frequently and thus influenza is transmitted most often. The population is organized as a hierarchy of increasingly large but less intimate mixing groups, from the household cluster (sets of four socially close households) to neighborhoods (1/4 of a community) to the community. Although the model results are not sensitive to the exact size of these groups, including such groups creates a realistic contact network for disease transmission [11].

At night, everyone can make contact with other individuals in their families, household clusters, home neighborhoods, and home communities. In the daytime, individuals might interact with additional groups. During the day, most children attend school or a playgroup, where there is a relatively high probability of transmission.
Preschool-age children usually belong to either a playgroup of four children or a neighborhood preschool, which typically has 14 students. Each community has mixing groups that represent two elementary schools, one middle school, and one high school, which typically have 79, 128, and 155 students, respectively.

Table 1. Frequency of household sizes.

Most working-age adults (about 72% of 19–64 year-olds) are employed. Employment rates are determined on a tract-by-tract basis using data from the US Census 2000's Summary File 3, table PCT35. Employed individuals often work outside of their home communities. Each employed individual is assigned to work in a destination census tract based on commuting data taken from Part 3 of the Census Transportation Planning Package (http://www.fhwa.dot.gov/ctpp/dataprod.htm), which provides information on the home and destination census tracts of workers in the United States. We eliminated commutes over 100 miles from the data, as in [6], because many of these trips represent sporadic long-distance travel rather than daily commutes. Working individuals are assigned to communities and neighborhoods within their destination tracts to simulate casual community contacts during the day, and to a work group of about 20 people to represent their close contacts at the workplace. Unemployed individuals remain in their home communities and do not have close daytime contacts except with members of their households who are not employed or enrolled in school.

Individuals can engage in short-term, long-distance domestic travel to represent vacations and other trips. Travel in our model is based on the implementation in [6], which uses data from the 1995 American Travel Survey, available from the U.S. Department of Transportation, Bureau of Transportation Statistics (http://www.bts.gov/publications/national_transportation_statistics/).
Each day, an individual starts a trip with a fixed, age-specific probability: 0.0023 for 0–4 year olds, 0.0023 for 5–18, 0.0050 for 19–29, 0.0053 for 30–64, and 0.0028 for 65 and older. The traveler will stay at the destination for 0–11 nights, with 23.9% of trips lasting for a single day (and no nights), 50.2% including 1–3 nights away, 18.5% including 4–7 nights away, and 7.4% lasting 8–11 nights. We do not include differences in travel frequency or duration during different times of the year (e.g., summer and holiday trips). The destination is a randomly selected census tract, in which a random community, neighborhood, and workplace (if the traveler is between 19 and 64 years old) are assigned to be the traveler's mixing groups. A random member of this community is assigned to be the traveler's contact person, and at night the traveler behaves as if he/she belongs to the contact's household, household cluster, and neighborhood. The traveler may withdraw to this household if ill. The exact implementation of short-term, long-distance travel is not important, but some long-distance travel is required in large populations for the epidemic to spread in a realistic manner. For simulations of smaller regions, such as a single county, there is no need to include long-distance travel.

New infected individuals are introduced to a simulation by infecting randomly selected people. This epidemic seeding process can occur once at the beginning of a simulation or daily. In addition, one can simulate an epidemic that is seeded by international travelers. In this scenario, randomly selected individuals in the counties with one of the United States' 15 busiest international airports are infected each day, proportional to the daily traffic of these airports (see Table 2).

Table 2. International traffic to the 15 US airports built into FluTE.
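The daily travel draw described above can be sketched as follows. The trip-start probabilities and duration shares are taken from the text; drawing the number of nights uniformly within each published bracket is our assumption, not necessarily the paper's.

```python
import random

# Daily probability of starting a trip, by age bracket (values from the text).
TRIP_PROB = [((0, 4), 0.0023), ((5, 18), 0.0023), ((19, 29), 0.0050),
             ((30, 64), 0.0053), ((65, 120), 0.0028)]

# (share, nights_low, nights_high): duration brackets from the text.
# Drawing uniformly within a bracket is our assumption, not the paper's.
DURATION = [(0.239, 0, 0), (0.502, 1, 3), (0.185, 4, 7), (0.074, 8, 11)]

def starts_trip(age, rng):
    """Bernoulli draw for whether this individual begins a trip today."""
    for (lo, hi), p in TRIP_PROB:
        if lo <= age <= hi:
            return rng.random() < p
    return False

def trip_nights(rng):
    """Sample the number of nights away, from the bracketed shares."""
    u = rng.random()
    for share, lo, hi in DURATION:
        if u < share:
            return rng.randint(lo, hi)
        u -= share
    return rng.randint(8, 11)  # guard against rounding in the published shares
```

The four duration shares sum to 1.000, so the final guard clause is only reached through floating-point rounding.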
Influenza natural history and transmission

The current modeling of the natural history of influenza is as follows. An individual is infectious for six days starting the day after becoming infected. The individual's infectiousness is proportional to the log of the daily viral titers taken from a randomly chosen one of the six experimentally infected patients described in [12],[13] (Figure 1). An individual is asymptomatic during the incubation period, which lasts one, two, or three days (with 30%, 50%, and 20% probabilities, respectively). After incubation, the individual has a 67% chance of becoming symptomatic [14],[15]. Symptomatic individuals are twice as infectious as asymptomatic people and may withdraw to the home after 0 to 2 days [16] (with probabilities summarized in Table 3). People who withdraw interact only with their households. Six days after infection, an individual recovers and is no longer susceptible.

Figure 1. The natural history of influenza of simulated individuals in FluTE. When a susceptible individual is infected (at time 0), that person will be infectious for six days with infectiousness proportional to his or her viral load. The six possible viral load trajectories are plotted. Most individuals become symptomatic, which occurs after a 1, 2, or 3 day incubation period. Symptomatic individuals are twice as infectious as asymptomatic individuals (i.e., infectiousness is proportional to twice the viral load). Individuals recover six days after infection and are immune.

Table 3. Probabilities that an individual will withdraw to the home 0, 1, or 2 days after becoming symptomatic.

The simulation runs in discrete time, with two time steps per simulated day to represent daytime and nighttime social interactions. The contact probability of two individuals in the same mixing group is the probability that they will have sufficient contact for transmission during a time step.
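The natural-history sampling can be sketched as below. The incubation weights, 67% symptomatic probability, six infectious days, and the doubling for symptomatic cases come from the text; the viral-load curve is a made-up placeholder, since the per-patient titer data from [12],[13] are not reproduced here.

```python
import random

def sample_natural_history(rng):
    """Sample one person's course following the text: incubation of 1, 2
    or 3 days (30/50/20%), then a 67% chance of becoming symptomatic;
    infectious on days 1-6 after infection, with symptomatic people twice
    as infectious.  The viral-load curve below is a made-up placeholder
    for the per-patient titer data FluTE actually uses."""
    incubation = rng.choices([1, 2, 3], weights=[0.30, 0.50, 0.20])[0]
    symptomatic = rng.random() < 0.67
    base_load = [1.0, 2.0, 1.5, 1.0, 0.5, 0.25]   # placeholder, days 1..6
    infectiousness = [
        (2.0 if symptomatic and day + 1 > incubation else 1.0) * load
        for day, load in enumerate(base_load)]
    return incubation, symptomatic, infectiousness
```

Note that the doubling only applies from the end of the incubation period onward, since individuals are asymptomatic while incubating.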
Contact probabilities of individuals within families were tuned so that the simulated household secondary attack rates match estimates from [17] (Table 4). Contact probabilities within other mixing groups were tuned so that the final age-specific illness attack rates were similar to those of past influenza pandemics (Table 5), particularly Asian A(H2N2) and 2009 novel influenza A(H1N1), and so that the percentage of transmissions attributable to each mixing group matched those in [6],[18]–[20], although these values depend on the transmissibility of the disease (Table 6). These contact probabilities are in general agreement with other simulation models [8] and with a recent study of physical contacts between individuals [21]. Contact probabilities for all types of mixing groups are summarized in Table 7.

Table 4. Estimates of secondary household attack rates from [17] and illness attack rates using FluTE, stratified by the ages of the index and secondary cases.

Table 5. Age-specific influenza illness attack rates in past influenza epidemics (from [46]) and in a simulation of metropolitan Seattle.

Table 6. Major sources of influenza transmission in simulations of metropolitan Seattle.

Table 7. Person-to-person contact probabilities for all social mixing groups in FluTE.

Transmission probabilities in the simulation are adjusted by multiplying all contact probabilities by a scalar to obtain the desired R0, the basic reproductive number, which is defined as the average number of secondary infections from a typical infected individual in a fully susceptible population [22]. To derive the relationship between this scalar and R0, we infected a single randomly selected person in an otherwise fully susceptible 2000-person community with a 74% working-age adult employment rate and counted the number of individuals that person infected, repeating this procedure 1,000 times for several values of the scalar.
The relationship between the scalar and the average number of secondary cases was approximately linear over a biologically plausible range of values (Figure 2). However, the average number of secondary cases was higher when the index case was a child, because children tend to infect more individuals (and become infected more often) than adults. Therefore, in a procedure borrowed from [6], we measured the age distribution of secondary cases when the index case was randomly selected and used this distribution to weight the contributions of the various age groups to the calculation that defines R0. This definition of R0 applies to a population with no pre-existing immunity, an assumption that may be violated for seasonal influenza. One can use the model to simulate seasonal influenza epidemics by substituting for R0 the desired R, the average number of people a typical infected case infects in a population with pre-existing immunity.

Figure 2. Influenza transmission properties in the simulation. (A) Observed secondary cases vs. the transmission scalar, by the age of the index case, and the weighted average. (B) Average case generation time vs. R0.

The simulated case generation time, or the time between infection of an individual and transmission to susceptibles, was 3.4 days for a wide range of R0 values in a fully susceptible population (Figure 2B). This is consistent with other estimates for seasonal and pandemic influenza [20],[23].

Simulated interventions

The primary pharmaceutical intervention is vaccination. Vaccinated individuals in the simulation have a reduced probability of becoming infected (VE[S]), of becoming ill given infection (VE[P]), and of transmitting infection (VE[I]) [24]. In the model, these efficacy parameters are implemented by multiplying the transmission probability per time step by (1−VE[S]) if the susceptible individual is vaccinated and by (1−VE[I]) if the infectious individual is vaccinated.
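A minimal sketch of this multiplicative adjustment (covering VE[S] and VE[I] only; the function name and signature are our own, not FluTE's API):

```python
def transmission_prob(base_p, sus_vaccinated, inf_vaccinated, ve_s, ve_i):
    """Per-time-step transmission probability after the multiplicative
    vaccine adjustments described above: multiply by (1 - VEs) if the
    susceptible is vaccinated and by (1 - VEi) if the infector is."""
    p = base_p
    if sus_vaccinated:
        p *= 1.0 - ve_s
    if inf_vaccinated:
        p *= 1.0 - ve_i
    return p
```

Because the two factors multiply, vaccinating both parties compounds the protection: a 40% VE[S] and 50% VE[I] together reduce the per-contact probability to 30% of baseline.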
The probability of vaccinated individuals becoming symptomatic (ill) after they are infected is the baseline probability (67%) multiplied by (1−VE[P]). Vaccines do not reach full efficacy immediately: their protective effects may gradually increase over several weeks. The default behavior in the model is that the vaccine takes two weeks to reach maximum efficacy, with the efficacy increasing exponentially starting the day after the vaccination. Because of the delay in reaching maximum efficacy, it may be necessary to vaccinate the population early. In the simulation, vaccines can be administered at least four weeks before the epidemic (i.e., pre-vaccination), during the epidemic (reactive), or one dose can be administered at least three weeks before the epidemic with the boost administered reactively (prime-boost).

Antiviral agents (neuraminidase inhibitors) can be used for treatment of cases and for prophylaxis of susceptibles. A single course of antiviral agents is enough for 10 days of prophylaxis or 5 days of treatment. In the model, 5% of individuals taking antiviral agents prophylactically stop after 2 days and 5% taking them for treatment stop after 1 day [19]. As with vaccines, individuals taking antiviral agents can have reduced susceptibility (AVE[S]), reduced probability of becoming ill given infection (AVE[P]), and reduced infectiousness (AVE[I]). However, unlike vaccines, the protective effects of the antiviral agents last only as long as they are being taken (5 to 10 days). When a case is ascertained, the individual is treated with antiviral agents, and that individual's household members will also each be given a course if household targeted antiviral prophylaxis (HHTAP) is in effect.

Several non-pharmaceutical interventions can be simulated in the model.
School closures are simulated by eliminating school group contacts (including preschools and daycares but not playgroups) for those enrolled in school, but adding daytime contacts with other household members not in school or at work and doubling their daytime neighborhood and community contact probabilities to account for their non-school activities. Schools can be closed when cases are ascertained in communities or in the schools, and they can be closed for a fixed number of days or for the duration of the epidemic.

During an epidemic, individuals may be requested to stay at home if they become ill. When simulating isolation of cases, individuals withdraw to the home one day after becoming symptomatic (with a pre-set compliance probability). This eliminates any daytime social contacts they have other than with household members who are not working or at school. We simulate a liberal leave policy in a similar manner: employed individuals withdraw to the home for one week, with a pre-set compliance probability, one day after becoming symptomatic.

During an epidemic, those living with symptomatic individuals may be requested to stay home [25]. In simulations of household quarantine, family members of symptomatic individuals independently decide (based on a compliance probability) whether to obey quarantine for 7 days, one day after the first individual becomes symptomatic. Individuals electing to quarantine themselves withdraw to the household and interact only with household members. If other family members become ill during quarantine, household members again independently decide whether to obey quarantine for 7 days, one day after each individual becomes symptomatic.

Implementation of the stochastic model

FluTE is written in C/C++ and is released under the GNU General Public License (GPLv3, see http://www.gnu.org/licenses/gpl.html).
The source code is available at http://www.csquid.org/software, https://www.epimodels.org/midas/flute.do, and the Models of Infectious Disease Agent Study (MIDAS) repository [26]. The software includes two source code files that are also freely distributable but may come with different licenses because they were written by others: one for the pseudorandom number generator (SIMD oriented Fast Mersenne Twister (SFMT) pseudorandom number generator [27]) and one to generate binomially distributed random numbers (from Numerical Recipes in C [28]). Version 1.11 of FluTE was used to produce the results in this manuscript. A configuration file is used to specify the population to use for the simulation, the parameters for starting the epidemic, the transmissibility of the infectious agent, and the desired intervention strategies. The configuration file is text-based and can be typed in by a user or generated with a script. The simulation outputs results to text files, which can be easily parsed for plotting or statistical analysis. A parallelized version of the code supports simulations of large populations (up to the entire continental United States). This version of the program assigns the populations of different counties to different processors, and OpenMPI is used to update the status of individuals who travel between communities that are located on different processors and to update the global status of the epidemic and the interventions (e.g., the total number of vaccines used). The simulation uses approximately 80 megabytes of memory per million simulated individuals. The simulation was written with several competing goals: to explicitly represent each individual in the population, to conserve memory, to run quickly, and to be (relatively) easy to read and modify. 
Each simulated individual is represented by a C structure that includes unique identifiers for the person and for each of the social mixing groups to which that person belongs, the age of the individual, the person's infection and vaccination status and dates, and other attributes. For each infected individual, the simulation identifies all susceptible individuals in that person's community who share a common mixing group, then uses the infectiousness of the infected individual and the susceptibility of the susceptible to determine the probability that transmission takes place at each time step. Although comparing each individual with every other within a community results in the number of comparisons increasing with the square of the number of individuals, community sizes are always smaller than 3,000 residents. Therefore, the number of comparisons made between individuals scales approximately linearly with the number of individuals in the simulation. More sophisticated algorithms could improve the simulation's performance, but might do so at the expense of the code's flexibility and readability. The running time depends on the number of individuals infected during the course of a simulation. Simulating an epidemic in a population of 10 million people can take up to two hours (on a single processor of an Intel Core2 Duo T9400), but it may take only seconds if the virus is not highly transmissible (low R0) or if there are effective interventions (e.g., high vaccination rates). On a cluster of 32 processors, simulating an epidemic covering the continental United States (population of 280 million) takes about 6 hours (192 hours of total CPU time). We illustrate the use of the model by simulating epidemics in metropolitan Seattle, a major metropolitan area with a population of approximately 560,000 according to the US 2000 Census.
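The pairwise comparison described above can be sketched as a simple loop. This is an illustrative reimplementation, not FluTE's source code: scaling a base transmission probability by the pair's relative infectiousness and susceptibility is the idea, while FluTE's actual contact-probability model is more detailed.

```python
import random

def infection_events(infecteds, susceptibles, p_base, rng=random.Random(0)):
    """Sketch of the per-time-step loop: for each infected/susceptible pair
    sharing a mixing group, draw a Bernoulli transmission with probability
    p_base scaled by relative infectiousness and susceptibility.
    `infecteds` and `susceptibles` are lists of (id, weight) pairs."""
    newly_infected = set()
    for inf_id, infectiousness in infecteds:
        for sus_id, susceptibility in susceptibles:
            if sus_id in newly_infected:
                continue  # already infected this time step
            if rng.random() < p_base * infectiousness * susceptibility:
                newly_infected.add(sus_id)
    return newly_infected
```

Because community sizes are bounded, the quadratic pair loop stays cheap, which is why the overall cost scales roughly linearly with population size.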
We ran simulations with different values of R0, starting with ten infected individuals chosen at random, and found that the epidemic could peak as early as 45 days after the start if R0 is high (Figure 3A). Pre-vaccination (with vaccine efficacies of VE[S] = 40%, VE[P] = 67%, VE[I] = 40%, which correspond to a well-matched seasonal influenza vaccine [29]) is likely to both lower and delay the epidemic peak (Figure 3B). Use of antivirals alone (AVE[S] = 30%, AVE[P] = 60%, and AVE[I] = 62% [11]) did not greatly reduce the epidemic peak, but they could reduce illness and mortality in an epidemic. Non-pharmaceutical interventions could be quite effective, but the epidemic may spike immediately upon ending the intervention (compare permanent school closure with school closure for 60 days in Figure 3B). Figure 3. Illness attack rates and daily prevalence of influenza in simulations of metropolitan Seattle. (A) Daily prevalence of symptomatic influenza in simulations of metropolitan Seattle for various values of R0 and (B) for a fixed R0 with various interventions. The interventions, which begin 30 days after the first case is detected, are: giving a course of antiviral agents to ascertained cases, closing schools either permanently or for 60 days, and pre-vaccination of 50% of the population with a well-matched seasonal influenza vaccine. (C) Final illness attack rates (180 days) vs R0 for FluTE (simulating metropolitan Seattle) and a model with random mixing. Results for all panels are from one run of metropolitan Seattle for each R0 or intervention strategy, except for one simulation in panel (A), which was run 5 times with different random number seeds and plotted to show stochastic variability. The illness attack rates in the simulation are lower than those in an SIR model with random mixing (where AR = 1 − exp(−R0·AR) [30], AR is the infection attack rate, and the illness attack rate is 0.67 AR) (Figure 3C).
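The random-mixing benchmark in Figure 3C comes from the standard SIR final-size relation, AR = 1 − e^(−R0·AR) [30], which has no closed-form solution but is easily solved by fixed-point iteration:

```python
import math

def sir_final_size(r0, tol=1e-12):
    """Solve the random-mixing final-size relation AR = 1 - exp(-R0 * AR)
    by fixed-point iteration.  AR is the infection attack rate; the text's
    illness attack rate is 0.67 * AR."""
    ar = 0.9  # start away from the trivial root AR = 0
    for _ in range(10000):
        nxt = 1.0 - math.exp(-r0 * ar)
        if abs(nxt - ar) < tol:
            break
        ar = nxt
    return ar

# e.g. r0 = 2.0 gives AR ≈ 0.797, so illness attack rate ≈ 0.534
```

FluTE's community structure yields attack rates below this random-mixing curve, as the comparison in Figure 3C shows.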
As observed in earlier studies, models with community structure have lower attack rates than those with random mixing [31]–[33]. Simulated epidemics struck school-age children earlier than adults, as had been observed in earlier studies [6],[34]. Therefore, we predict that early in an epidemic, the proportion of cases who are school-age children will be higher than later in the epidemic (Figure 4). This phenomenon might affect the accuracy of estimates of R0 in unfolding epidemics. For example, most confirmed cases in the recent novel influenza A(H1N1) outbreaks in the United States have been school-age children [35], and several early estimates of R0 have been above 2 [36],[37]. In our model, we observed that infected children generate more secondary cases than infected adults (Figure 2A). For example, infected school-age children transmit to more other individuals, on average, than infected adults in a simulated epidemic. Therefore, estimates of R0 could be high early in an epidemic when a disproportionate number of infections are in children. Figure 4. The ratio of cumulative illness attack rates between school-age children (ages 5–18) and adults (ages 19–64) over time in simulated epidemics. Results plotted are from one simulation of metropolitan Seattle for each value of R0. One can simulate the population of the entire continental US using the parallel version of FluTE (mpiflute). The continental US had 280 million people in 64,735 census tracts in 2000, based on the US 2000 Census. In our simulations, we found the final illness attack rates for the US to be nearly identical to those of metropolitan Seattle, but the epidemic peak for a given R0 is later for the United States (e.g., 94 vs 65 days for the same R0) (Figure 5).
Therefore, simulations of a sufficiently large metropolitan area may be adequate for determining the effect of a strategy on national final illness attack rates, but the nation-wide peak of the epidemic may come later than in the major metropolitan areas because of the time it takes the epidemic to reach outlying areas. Figure 5. The prevalence of influenza in a single simulation of the United States 100 days after the start of an influenza epidemic. The color of each dot corresponds to the illness prevalence in a census tract. Image created using ArcGIS (Environmental Systems Research Institute, Inc.). We have described a new publicly available influenza epidemic simulator, FluTE. It explicitly represents every individual in the simulation, so simulated epidemics can be studied in detail, even tracing individual transmission events. We illustrated the use of FluTE with examples in which we explored the effect of various intervention strategies on influenza epidemics in the United States and showed how transmissibility can be over-estimated early in an epidemic. The simulation was written so that one can easily set the transmissibility, vaccination policies (e.g., the fraction of the population to vaccinate), and other reactive strategies (e.g., school closures). These settings can be used to investigate questions such as: 1) What fraction of the population will become infected or ill? 2) How much vaccine coverage is required to mitigate an epidemic with a given R0? 3) What segment of the population should be vaccinated to reduce overall illness attack rates the most? 4) How long can one wait before reacting to an epidemic? and 5) What range of R0 can be managed by a particular pandemic strategy? We have used FluTE to investigate some of these questions by simulating vaccinating children against seasonal and pandemic influenza [38] and pandemic mitigation [20].
The model was calibrated to simulate epidemics of a virus similar to 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1). We attempted to model realistic pharmaceutical and non-pharmaceutical interventions, but their effects on an epidemic have not been well quantified. The model's results are plausible and likely to be qualitatively correct, but there is insufficient data to calibrate it to produce quantitatively accurate results for the various possible disease parameters and mitigation strategies. Although the model generates realistic population-level results, the spatial dynamics of the epidemics it produces should be used for illustrative purposes only. When using the model to evaluate mitigation strategies, it is important to consider one's goals. For example, using antiviral agents to treat cases does not greatly reduce the final illness attack rate in the simulation, but it could greatly reduce mortality. The model does not directly evaluate the cost of interventions, but the numbers of cases in a simulated epidemic can be linked to cost and healthcare utilization data [39]. Differential equation models are the most popular approach to disease modeling. The simplest of these (such as the SIR model [40]) can be used to study epidemics analytically, and more complex versions have been used to model the dynamics of epidemics on a global scale [41],[42]. However, if one wants to include a complicated natural history of disease or detailed intervention strategies, individual-based models, such as FluTE, may be more suitable. The current software supports a limited set of configuration options and is intended for batch runs using a scripting language. Using the model for scenarios not supported by the existing code, such as testing a novel intervention strategy or altering the contact parameters for a different attack rate pattern, would require modification of the source code, which we have released so that others can make such changes if needed. 
We decided to adopt the GNU General Public License (GPL), so that the source code of derivative works must be released. We believe this will facilitate the sharing of improvements. The availability of source code allows others to adapt the model to simulate outbreaks of other airborne infectious diseases such as smallpox [3],[43],[44] or to simulate other regions of the world with different social structures [3]. In the future, we would like to make our model more accessible to non-programmers. This may involve developing a user interface or adding new parameters to the configuration file. We would also like to include intervention strategies that best reflect government pandemic mitigation plans. Achieving these goals would depend upon close collaboration with public health officials to better understand their needs and to carefully simulate existing pandemic mitigation plans and capacities. Although we have calibrated our model to the best available data, more detailed and reliable information on the natural history of influenza, influenza transmission, human behavior in response to infection, and vaccine efficacy is needed. Sensitivity analyses of similar epidemic models have shown that results are robust to uncertainty in many parameters [3],[5],[6],[11]. However, more accurate model inputs would improve the quantitative predictions. Well-designed studies are needed to acquire these data. We thank Brandon Dean for helpful discussions and Jon Sugimoto for producing the image in Figure 5. Author Contributions Conceived and designed the experiments: DLC MEH IMLJ. Performed the experiments: DLC. Analyzed the data: DLC. Contributed reagents/materials/analysis tools: DLC VJO. Wrote the paper: DLC MEH VJO
String Theory

String theory (also termed "superstring" theory) is a mathematical attempt to describe all fundamental forces and particles as manifestations of a single, underlying entity, the "string." String theory's predictions are consistent with all known experimental data, and it is felt by some physicists to be a candidate for the long-sought "theory of everything" (i.e., a single theory describing all fundamental physical phenomena); however, string theory has proved difficult to subject to definitive experimental tests, and therefore actually remains a speculative hypothesis. Physics as defined before the twentieth century, "classical" physics, envisioned the fundamental particles of matter as tiny, solid spheres. Quantum physics, which originated during the first 40 years of the twentieth century, envisioned the fundamental particles simultaneously as particles and as waves; both mental pictures were necessary to make sense out of certain experimental results. String theory, the first forms of which were developed in the late 1960s, proposes that the fundamental unit of everything is the "string," pictured as a bit of taut wire or string on the order of 10^-33 cm in length (a factor of 10^-20 smaller than a proton). These strings may be "open," like a guitar string, or form loops like rubber bands; also, they may merge with other strings or divide into substrings. Beginning a few years after its initial formulation, string theory stagnated for a decade because of mathematical difficulties, but exploded in the 1980s with the discovery that the theory actually possesses a highly desirable mathematical feature termed E(8)×E(8) symmetry. Several major theoretical victories were won by string theorists in the 1990s, and intense efforts to extend string theory continue today.
String theory is not the product of a single mind, like the theory of relativity, but has been produced by scores of physicists refining each other's ideas in stages. Like the "waves" or "particles" of traditional quantum mechanics, of which string theory is an extension or refinement, "strings" are not objects like those found in the everyday world. A string-theory string is not made of any substance in the way that a guitar string, say, may be made of steel; nor is it stretched between anchor-points. If string theory is right, a fundamental string simply is. Not only does string theory propose that the string is the fundamental building block of all physical reality, it makes this proposition work, mathematically, by asserting that the Universe works not merely in the four dimensions of traditional physics—three spatial dimensions plus time—but in 10 or 11 dimensions, 6 or 7 of which are "hidden" from our senses because they "curled up" to subatomic size. Experimental proof of the existence of these extra dimensions has not yet been produced. Although the "strings" of string theory are not actual strings or wires, the "string" concept is nevertheless a useful mental picture. Just as a taut string in the everyday world is capable of vibrating in many modes and thus of producing a number of distinct notes (harmonics), the vibrations of an elementary string manifest, the theory proposes, as different particles: photon, electron, quark, and so forth. The string concept also resolves the problem of the "point particle" in traditional quantum physics. This arises during the mathematical description of collisions between particles, during which particles are treated as mathematical points having zero diameter. 
Because the fields of force associated with particles, such as the electric field that produces repulsion or attraction of charges, vary as 1/r, where r is the distance to the particle, the force associated with a zero-diameter particle goes to infinity during a collision as r → 0. The infinities in the point-particle theory have troubled quantum physicists' efforts to describe particle interactions for decades, but in the mathematics of string theory they do not occur at all. In the Standard Model, quantum physicists' systematic list of all the fundamental particles and their properties, the graviton (the particle that mediates the gravitational force) is tacked on as an afterthought because it is hypothesized to exist, not because the equations of the Standard Model explicitly predict its existence; in string theory, however, a particle having all the properties required of the graviton is predicted as a natural consequence of the mathematical system. In fact, when the existence of this particle was calculated by early string-theory workers, they did not recognize that it might be the graviton, for it had not occurred to them that their new theory might be powerful enough to resolve the biggest problem of modern fundamental physics, the split between general relativity (the large-scale theory of space, time, and gravity) and quantum mechanics (the small-scale theory of particles and of all forces except gravity). String theory—or, rather, the string theories, as a variety of different versions of string theory have been put forward—thus not only predict all the particles and forces catalogued by the Standard Model, but may offer a theory of "quantum gravity," a long-sought goal of physics. Doubt lingers, however, as to whether string theory may be too flexible to fulfill its promise.
If it cannot be cast into a form specific enough to be tested against actual data, then its mathematical beauty may be a valuable tool for exploring new ideas, but it will fail to constitute an all-embracing theory of the real world, a "theory of everything." Excitement and skepticism about string theory both, therefore, continue to run high in the world of professional physics. Barnett, Michael R., Henry Möhry, and Helen R. Quinn. The Charm of Strange Quarks: Mysteries and Revolutions of Particle Physics. New York: Springer-Verlag, 2000. Kaku, Michio, and Jennifer Thompson. Beyond Einstein. New York: Anchor Books, 1995. Taubes, Gary, "String Theorists Find a Rosetta Stone." Science Vol. 285, No. 5427 (23 July 1999): 512-517.
The difference between speed and velocity There is a small difference between the two concepts of velocity and speed. Suppose you stand on the x-axis, 3 meters from the origin of a Cartesian coordinate system, and you walk to the origin. Then you walk another 4 meters along the y-axis. Your displacement is 5 meters, by Pythagoras' theorem, but the distance you have covered is 7 meters. In other words, displacement is a vector, so it depends only on two points (in this case, the final point minus the initial point). Distance is what you actually walked. Therefore velocity is the time rate of change of displacement, and speed is the time rate of change of distance.
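The 3-4-5 example can be checked in a few lines of Python:

```python
import math

# Walk from (3, 0) to the origin, then 4 m along the y-axis to (0, 4).
start, corner, end = (3.0, 0.0), (0.0, 0.0), (0.0, 4.0)

distance = math.dist(start, corner) + math.dist(corner, end)   # path length
displacement = math.dist(start, end)                           # straight line

print(distance)      # 7.0 metres walked
print(displacement)  # 5.0 metres (3-4-5 right triangle)
```

Dividing each by the elapsed time gives average speed and the magnitude of average velocity, respectively.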
Wolfram Demonstrations Project Natural Modes of Vibrations for Some Simple Systems This Demonstration displays the animation of eigenmodes of some simple vibrating systems. The eigenfunctions and eigenvalues are evaluated by solving the eigenvalue problem associated with the one- or two-dimensional wave equation. The systems considered are a fixed-fixed (left boundary condition-right boundary condition) string, fixed-fixed 1D rod, a 1D rod free at both ends, and a fixed square membrane. The rod in consideration is a 1D thin rod with no flexural bending. As the time slider moves, the evolution of the eigenmode in time is simulated using the general form of the solution to the wave equation. The sliders for mode 1 and mode 2 simulate the different eigenmodes—the first 10 modes for the system. Mode 2 is specifically for the simulation of the membrane, and is disabled for the one-dimensional systems.
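For the fixed-fixed string, the eigenvalue problem has well-known closed-form modes sin(nπx/L) with angular frequencies ω_n = nπc/L. The sketch below shows the time evolution the Demonstration animates, assuming unit length, unit wave speed, and a pure cosine time factor (the Demonstration's general solution may combine sine and cosine terms):

```python
import math

def fixed_fixed_mode(n, x, t, length=1.0, c=1.0):
    """Eigenmode n of a fixed-fixed string: shape sin(n*pi*x/L), angular
    frequency omega_n = n*pi*c/L.  Standard separation-of-variables result
    for the 1-D wave equation with fixed ends."""
    omega = n * math.pi * c / length
    return math.sin(n * math.pi * x / length) * math.cos(omega * t)
```

The displacement vanishes at both ends for every mode and time, which is exactly the fixed-fixed boundary condition.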
Math Forum Discussions

Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Galois group of a quartic over Q
Replies: 4   Last Post: Dec 29, 2007 5:32 AM

Galois group of a quartic over Q
Posted: Dec 27, 2007 10:06 PM

Problem: Let f(x) = x^4 + a*x^2 + b be a polynomial with coefficients in Q, the rational numbers. Assume f(x) is irreducible over Q (and has no multiple roots). I want to find its Galois group over Q. It is very easy to show that it is a subgroup of D_4, the dihedral group of order 8. And indeed it is not hard to see that if b is a square in Q, then the Galois group must be (Z/2Z)^2, the Klein four-group. With a little more effort I can show that if b(a^2 - 4*b) is a square in Q, then the Galois group must be Z/4Z. Finally, I think that if neither b nor b(a^2 - 4*b) is a square in Q, then the Galois group must be D_4, and that's where I got stuck.

Attempt: We know the roots are {u, -u, v, -v}. Then (u^2)(v^2) = b. Since f(x) is irreducible, [Q(u) : Q] = 4. If I can show that v is not in Q(u), then I am all set, since that forces [Q(u,v) : Q] = 8, and all subgroups of S_4 of order 8 are isomorphic to D_4. But how can I show that?

Date / Subject / Author
12/27/07  Galois group of a quartic over Q  tianran.chen@gmail.com
12/28/07  Re: Galois group of a quartic over Q  Jose Carlos Santos
12/28/07  Re: Galois group of a quartic over Q  I.M. Soloveichik
12/28/07  Re: Galois group of a quartic over Q  tianran.chen@gmail.com
12/29/07  Re: Galois group of a quartic over Q  quasi
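For integer coefficients, the classification stated in the original post is easy to check mechanically. The helper below assumes irreducibility rather than testing it, and only handles the integer case (a rational b can be scaled to this case by clearing denominators of squares):

```python
import math

def is_perfect_square(q):
    """An integer is a square in Q iff it is a perfect square;
    negative integers are never squares."""
    if q < 0:
        return False
    r = math.isqrt(q)
    return r * r == q

def galois_group_of_biquadratic(a, b):
    """Classify Gal(x^4 + a*x^2 + b over Q), assuming irreducibility."""
    if is_perfect_square(b):
        return "V4"   # Klein four-group (Z/2Z)^2
    if is_perfect_square(b * (a * a - 4 * b)):
        return "Z4"   # cyclic of order 4
    return "D4"       # dihedral of order 8
```

For example, x^4 + 1 gives V4, x^4 + 4x^2 + 2 gives Z4, and x^4 - 2 gives D4, matching the well-known Galois groups of those quartics.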
Keenan Crane - Trivial Connections on Discrete Surfaces Meshes are courtesy of the AIM@Shape Project, the Stanford 3D Scanning Repository, Jotero GbR, and Hugues Hoppe. Thanks to Felix Kälberer, Matthias Nieser, and Konrad Polthier for parameterizing the Aphrodite model. This research was partially funded by NSF grants (CCF-0635112, CCF-0811373, CMMI-0757106, and CCF-1011944), the Center for the Mathematics of Information at Caltech, and the IAS at TU München.
Website Detail Page

written by the U.S. Metric Association, Inc. and Dennis Brownridge

This resource is a comprehensive guide on SI for teachers, students, and professionals. It contains tables, charts, and text explanations of base units, derived units, equivalencies, common quantities, and prefixes. To aid the learner in visualizing quantities, drawings of common multiples are provided for each of the base units. The materials are written so that a user with no prior experience with metrics can comprehend the measurements and how they are applied mathematically. This resource is excerpted from a larger set of materials on the metric system.

Subjects: General Physics - Measurement/Units
Levels: High School, Lower Undergraduate
Resource Types: Reference Material - Activity
Appropriate Courses: Physical Science, Physics First, Conceptual Physics, Algebra-based Physics, AP Physics, New teachers

Intended Users:
Access Rights: Limited free access. Freely available online; hard copy textbooks are available for a cost.
© 2007 Dennis Brownridge. Additional information is available.
International System, SI, derived unit, measurement, metric, metric equivalencies, metric system, units
Record Creator: Metadata instance created July 23, 2007 by Emma Smith
Record Updated: January 13, 2013 by Caroline Hall
Last Update when Cataloged: February 1, 2001
Other Collections:

AAAS Benchmark Alignments (2008 Version)
9. The Mathematical World
9A. Numbers
• 3-5: 9A/E3. Specifying a quantity requires both a number and a unit.
• 6-8: 9A/M3b. How a quantity is expressed depends on how precise the measurement is and how precise an answer is needed.
12. Habits of Mind
12B. Computation and Estimation
• 3-5: 12B/E9. Use appropriate units when describing quantities.
• 6-8: 12B/M7b. Convert quantities expressed in one unit of measurement into another unit of measurement when necessary to solve a real-world problem.
• 6-8: 12B/M8.
Decide what degree of precision is adequate and round off the result of calculator operations to enough significant figures to reasonably reflect those of the inputs. • 9-12: 12B/H1. Use appropriate ratios and proportions, including constant rates, when needed to make calculations for solving real-world problems. • 9-12: 12B/H9. Consider the possible effects of measurement errors on calculations. Common Core State Standards for Mathematics Alignments Measurement and Data (K-5) Solve problems involving measurement and conversion of measurements from a larger unit to a smaller unit. (4) • 4.MD.1 Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two column table. Ratios and Proportional Relationships (6-7) Understand ratio concepts and use ratio reasoning to solve problems. (6) • 6.RP.3.d Use ratio reasoning to convert measurement units; manipulate and transform units appropriately when multiplying or dividing quantities. Analyze proportional relationships and use them to solve real-world and mathematical problems. (7) • 7.RP.1 Compute unit rates associated with ratios of fractions, including ratios of lengths, areas and other quantities measured in like or different units. Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12 Range of Reading and Level of Text Complexity (6-12) • RST.9-10.10 By the end of grade 10, read and comprehend science/technical texts in the grades 9—10 text complexity band independently and proficiently. This resource is part of 3 Physics Front Topical Units. Measurement and the Language of Physics Unit Title: Physics Terminology As the name implies, this web page offers non-scientists a guide to SI. 
The materials are written so that a user with little prior experience with metrics can comprehend the measurements and how they are applied mathematically. ***NOTE: This is a 1.5MB download.

Link to Unit: Measurement and the Language of Physics
Unit Title: Units of Measure
This comprehensive guide on SI encompasses tables, charts, drawings, and a coherent set of text explanations of base units, derived units, common quantities, equivalencies, and prefixes. It is written so that a user with no prior experience with metrics can comprehend the measurements and how they are applied mathematically.

Link to Unit: Measurement and the Language of Physics
Unit Title: For the New Teacher
This resource is a comprehensive guide on SI for teachers and students. It contains tables, charts, and text explanations of base units, derived units, equivalencies, common quantities, and prefixes. To aid the learner in visualizing quantities, drawings of common multiples are provided for each of the base units. The materials are written so that a user with no prior experience with metrics can comprehend the measurements and how they are applied mathematically.

Citation: U.S. Metric Association, Inc., and Dennis Brownridge. A Practical Guide to the International System of Units. February 1, 2001. http://lamar.colostate.edu/~hillger/brownridge.html
Comb and multiplexed wavelet transforms and their applications to signal processing - IEEE Trans. Signal Processing, 1998. Cited by 14 (7 self).
In this paper, we extend the definition of dyadic wavelets to include frequency warped wavelets. The new wavelets are generated and the transform computed in discrete-time by alternating the Laguerre transform with perfect reconstruction filterbanks. This scheme provides the unique implementation of orthogonal or biorthogonal warped wavelets by means of rational transfer functions. We show that the discrete-time warped wavelets lead to well-defined continuous-time wavelet bases, satisfying a warped form of the two-scale equation. The shape of the wavelets is not invariant by translation. Rather, the "wavelet translates" are obtained from one another by allpass filtering. We show that the phase of the delay element is asymptotically a fractal. A feature of the warped wavelet transform is that the cut-off frequencies of the wavelets may be arbitrarily assigned while preserving a dyadic structure. The new transform provides an arbitrary tiling of the time-frequency plane, which can be designed by selecting as little as a single parameter. This feature is particularly desirable in cochlear and perceptual models of speech and music, where accurate bandwidth selection is an issue. As our examples show, by defining pitch-synchronous wavelets based on warped wavelets, the analysis of transients and denoising of inharmonic pseudo-periodic signals is greatly enhanced.

- Proc. of 4th Int. Conf. of Spoken Language Processing. Cited by 10 (0 self).
In an effort to provide a more efficient representation of the acoustical speech signal in the pre-classification stage of a speech recognition system, we consider the application of the Best-Basis Algorithm of Coifman and Wickerhauser. This combines the advantages of using a smooth, compactly-supported wavelet basis with an adaptive time-scale analysis dependent on the problem at hand.

- EURASIP JASP. Cited by 5 (3 self).
Voiced musical sounds have nonzero energy in sidebands of the frequency partials. Our work is based on the assumption, often experimentally verified, that the energy distribution of the sidebands is shaped as powers of the inverse of the distance from the closest partial. The power spectrum of these pseudo-periodic processes is modeled by means of a superposition of modulated 1/f components, that is, by a pseudo-periodic 1/f-like process. Due to the fundamental selfsimilar character of the wavelet transform, 1/f processes can be fruitfully analyzed and synthesized by means of wavelets. We obtain a set of very loosely correlated coefficients at each scale level that can be well approximated by white noise in the synthesis process. Our computational scheme is based on an orthogonal P-band filter bank and a dyadic wavelet transform per channel. The P channels are tuned to the left and right sidebands of the harmonics so that sidebands are mutually independent. The structure computes the expansion coefficients of a new orthogonal and complete set of harmonic-band wavelets. The main point of our scheme is that we need only two parameters per harmonic in order to model the stochastic fluctuations of sounds from a pure periodic behavior.

- PROCEEDINGS OF THE 99 DIGITAL AUDIO EFFECTS WORKSHOP, 1999. Cited by 4 (3 self).
Voiced musical sounds have non-zero energy in sidebands of the frequency partials. Our work is based on the assumption, often experimentally verified, that the energy distribution of the sidebands is shaped as powers of the inverse of the distance from the closest partial. The power spectrum of these pseudo-periodic processes is modeled by means of a superposition of modulated 1/f components, i.e., by a pseudo-periodic 1/f-like process. Due to the fundamental selfsimilar character of the wavelet transform, 1/f processes can be fruitfully analyzed and synthesized by means of wavelets, obtaining a set of very loosely correlated coefficients at each scale level that can be well approximated by white noise in the synthesis process. Our computational scheme is based on an orthogonal P-band filter bank and a dyadic wavelet transform per channel. The P channels are tuned to the left and right sidebands of the harmonics so that sidebands are mutually independent. The structure computes the e...

- Cited by 3 (2 self).
The aim of this paper is to present results on digital processing of sounds by means of both dispersive delay lines and pitch-synchronous transforms in a unified framework. The background on frequency warping is detailed and applications of this technique are pointed out with reference to the existing literature. These include transient extraction, pitch shifting, harmonic detuning and auditory modeling.

- Cited by 2 (1 self).
Dispersive tapped delay lines are attractive structures for altering the frequency content of a signal. In previous papers we showed that in the case of a homogeneous line with first order all-pass sections, the signal formed by the output samples of the chain of delays at a given time is equivalent to computing the Laguerre transform of the input signal. However, most musical signals require a time-varying frequency modification in order to be properly processed. Vibrato in musical instruments or voice intonation in the case of vocal sounds may be modeled as small and slow pitch variations. Simulations of these effects require techniques for time-varying pitch and/or brightness modification that are very useful for sound processing. In our experiments the basis for time-varying frequency warping is a time-varying version of the Laguerre transformation. The corresponding implementation structure is obtained as a dispersive tapped delay line, where each of the frequency dependent delay elem...

- Cited by 1 (0 self).
In this paper, an automatic system is presented for word recognition using real Turkish word signals. This paper especially deals with the combination of feature extraction and classification from real Turkish word signals. A Discrete Wavelet Neural Network (DWNN) model is used, which consists of two layers: a discrete wavelet layer and a multi-layer perceptron. The discrete wavelet layer is used for adaptive feature extraction in the time-frequency domain and is composed of the Discrete Wavelet Transform (DWT) and wavelet entropy. The multi-layer perceptron used for classification is a feed-forward neural network. The performance of the system is evaluated using noisy Turkish word signals. Test results showing the effectiveness of the proposed automatic system are presented in this paper. The rate of correct recognition is about 92.58% for the sample speech signals. Key words: word recognition, Turkish word signal, feature extraction, DWT, entropy, wavelet neural networks, automatic system.
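As a rough illustration of the feature-extraction stage just described — one discrete wavelet level followed by an entropy measure — here is a minimal sketch (my own simplification using the Haar wavelet, not the authors' DWNN):

```python
import math

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT (even-length input assumed)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_entropy(coeffs):
    """Shannon entropy of the normalized coefficient energies."""
    energies = [c * c for c in coeffs]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

a, d = haar_dwt([4.0, 4.0, 2.0, 0.0])
# The transform preserves energy (Parseval), so a + d carry the same
# total energy as the input; the entropy of the coefficient energies
# is one candidate scalar feature for a classifier.
print(wavelet_entropy(a + d))
```

A multi-scale version would iterate `haar_dwt` on the approximation coefficients and feed the per-scale entropies to the classifier stage.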
Functions: having trouble with definition in my book

iceking: I have a definition for "functions" in my math book, but I don't understand it. The definition is "a relation in which each first coordinate is paired with exactly one second coordinate". Can someone explain it?

Got ya covered right here:

stapel_eliz: Yes. The x-value is the "independent" variable: you, independently, choose the value of x. The y-value is the "dependent" variable: it depends on what you picked for x.

iceking: Thank you! Now I understand it! Now I have another definition, "dependent variable" (regarding functions); the definition is: "the variable representing the second elements of the ordered pairs in a function; the outputs." Does this basically mean "the y-values in an equation regarding that function"?
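That definition — each first coordinate paired with exactly one second coordinate — can also be checked mechanically. A small sketch (the function and data names here are my own):

```python
def is_function(pairs):
    """A relation is a function when each first coordinate is paired
    with exactly one second coordinate."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # the same x paired with two different y's
        seen[x] = y
    return True

print(is_function([(1, 2), (2, 4), (3, 6)]))  # True: each x has one y
print(is_function([(1, 2), (1, 3)]))          # False: x=1 has two y's
```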
Boca Raton Trigonometry Tutor I have been tutoring in Boca Raton for the last 10 years and references would be available on request. I basically tutor Math, mostly junior high and high school subjects and also tutor college prep, both ACT and SAT. I have also tutored SSAT. 10 Subjects: including trigonometry, geometry, algebra 1, algebra 2 Hello, my name is Jose. I have taught Mathematics and Physics in High School and College for over 10 years, I have also taught Spanish privately in recent years. I have a degree in Physics and post-degree studies in Geophysics and Information Systems. 10 Subjects: including trigonometry, Spanish, physics, geometry ...If I accept you as a pupil, I am willing to GUARANTEE you will see an improvement in your grade.Algebra 1 is the foundation for the math courses that follow. In order to do well, students must UNDERSTAND the concepts presented not merely “get the answer.” I have had great success teaching the c... 7 Subjects: including trigonometry, calculus, geometry, algebra 1 ...I specifically focus on organization first. The structuring of notebooks and assignment logs for each course. This can be set up with a sign off list for parents or tutors to verify and be checked against teacher records. 53 Subjects: including trigonometry, chemistry, English, reading ...My name is Paul M. I graduated from college in 3 years with a double major in Math and Psychology. I was a Magna Cum Laude graduate as an undergrad while playing basketball for an NCAA National 30 Subjects: including trigonometry, calculus, geometry, GRE
Re: st: for commands

[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]

From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: for commands
Date Tue, 4 Nov 2003 22:56:04 -0000

Andrew Eggers

> I am used to using the "for" command to write a command that calls
> members from more than one list in order. For example:
> for any a b c \ num 1/3 \ any g h i: gen XYZ = 0
> would generate three variables named a1g, b2h, and c3i, each filled
> with zeros.
> I am trying to graduate from <for> to <forvalue> and <foreach>, and
> wondered how I can do as above with these newer commands.

I wrote about this on 16 October 2002 and more recently in Stata Journal 3(2):185--202 (2003). That article was a sequel to the article in Stata Journal 2(2):202--222 (2002), a shortened version of which is available at the URL kindly cited by Dimitriy Masterov in a previous posting. (The printed version is much fuller than the internet version, apart from a partly accidental, partly contrived triple pun on fortitude (forty-twode) - page 42 - the answer to the question in Douglas Adams' book.)

Be that as it may, the sequel says much more about Andrew's problem than does the prequel. In essence, -for- stars at the kind of parallel problem cited by Andrew, whereas -foreach- stars at nested problems. (This distinction is expanded in the 16 Oct 2002 posting.)

Apart from the answer you don't want

foreach v in a1g b2h c3i {
        gen `v' = 0
}

there are various ways to do it, discussed in SJ 3(2): 185--202. Here is one:

local Y "1 2 3"
local Z "g h i"
local i = 1
foreach x in a b c {
        local y : word `i' of `Y'
        local z : word `i++' of `Z'
        gen `x'`y'`z' = 0
}

You don't like it? I don't blame you (although it's more long-winded than it need be). Here's another one.
As so often, we could exploit whatever structure happens to exist:

local Z "g h i"
local i = 0
foreach x in a b c {
        local z : word `++i' of `Z'
        gen `x'`i'`z' = 0
}

No, it's not much better. However,

1. -for- is optimised for parallel problems; -foreach- is optimised for nested problems. Each is better playing at home, not surprisingly.

2. I have used -foreach- fairly intensively over the last couple of years, and I'd say that parallel problems like Andrew's (i.e. selecting one value from each of 3 or more lists) arise in practice pretty rarely. Your experience may differ.

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
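For readers coming from general-purpose languages, the parallel-lists pattern that Nick contrasts with nesting can be sketched in Python (an illustrative aside, not Stata; the variable names are mine):

```python
# Parallel lists: pick the i-th element from each list simultaneously,
# mirroring Stata's -for any a b c \ num 1/3 \ any g h i-.
letters1 = ["a", "b", "c"]
numbers  = ["1", "2", "3"]
letters2 = ["g", "h", "i"]

names = [x + y + z for x, y, z in zip(letters1, numbers, letters2)]
print(names)  # ['a1g', 'b2h', 'c3i']

# Nested iteration, by contrast, combines every element with every other:
nested = [x + y for x in letters1 for y in numbers]  # 9 names, not 3
```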
Covariant derivative

Q: Hi, is it true that the covariant derivative on any vector bundle over a manifold X comes from some connection on the bundle of linear frames of that vector bundle? This statement is true for the case of the tangent bundle, but I am not sure if it is true in general or not. If it is true, then can someone please suggest some reference for its proof.

Comment: If you know how each vector gets transported along every curve, then you certainly know how a set of basis vectors gets transported, right? So I believe the answer is yes. – John Jiang Jan 5 '12 at

Comment: This is the kind of thing you really should try to work out yourself. It's OK to consult references to get a rough idea of what's going on, but you should try to fill in all the details yourself. There's no trickery or ingenuity needed at all. – Deane Yang Jan 5 '12 at 11:04

Answer (accepted): If $E \to X$ is a (finite-dimensional) vector bundle and $P$ is the principal $GL(n)$ bundle of frames, then there is a one-to-one correspondence between covariant derivatives on $E$ and principal connections on $P$. A good reference for details is Proposition 4.4 of Lawson and Michelsohn's "Spin Geometry" (unfortunately the relevant part is not on Google Books). If $U$ is an open set of $X$ on which $E$ is trivial, then relative to some local frame over $U$ we have the connection one-forms in $\Omega^1(U; \mathfrak{gl}(n))$. You can pull these back to $U \times GL(n)$, which then determines a $\mathfrak{gl}(n)$-valued one-form on $P\vert_U$, since the frame gives an isomorphism $P\vert_U \simeq U \times GL(n)$. Then one can show that these locally defined forms on $P$ piece together to form a global connection form.

Comment: thanks for the answer and for the reference. – dushya Jan 5 '12 at 6:33

Answer: I don't know much about infinite dimensional things. I am not sure about the right answer, but maybe the following may be useful...
Once I saw the following book and statement: see page 4 of the book "Lectures on Closed Geodesics" by W. Klingenberg, where he says: "Whereas for Euclidean vector bundles over Euclidean manifolds such a map (covariant derivative) $\nabla$ always defines a connection $K$, in our more general situation (that is, the loop space, a Hilbert manifold) this need not always be true; see [FK] for further details. See also [El 3] for a more general setting."

[El 3]: Eliasson, H.: On the geometry of manifolds of maps. J. Diff. Geom. 1, 165-194 (1967).

So, as far as I know, in infinite dimensions we can define a covariant derivative which doesn't come from any so-called connection.

Comment: thanks for the references. – dushya Jan 5 '12 at 6:30
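The local correspondence described in the accepted answer can be written out schematically (a standard sketch, not quoted from the answer itself):

```latex
% In a local frame \{e_a\} of E over U, a covariant derivative \nabla
% determines connection one-forms \omega^{b}{}_{a} \in \Omega^1(U) by
\nabla e_a = \omega^{b}{}_{a} \otimes e_b .
% Under a change of frame e'_a = g^{b}{}_{a} e_b, with g : U \to GL(n),
% the forms transform as
\omega' = g^{-1}\,\omega\,g + g^{-1}\,dg ,
% which is exactly the compatibility required for the locally defined
% forms to glue into a principal connection on the frame bundle P.
```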
Question: Generate the first 5 terms of this sequence: f(1) = 0 and f(2) = 1, f(n) = f(n - 1) + f(n - 2), for n > 2.
A. 0, -1, 1, 0, 2
B. 0, 1, 1, 2, 3
C. 0, 1, 2, 2, 3
D. 0, 1, 1, 2, 2
Please helppp! Thanks!

Helper: f(n)=f(n-1)+f(n-2). Try replacing n with 3, like so: f(3)=f(3-1)+f(3-2), so f(3)=f(2)+f(1). You know what f(2) equals and you know what f(1) equals: f(2)=1 and f(1)=0. Replace f(2) and f(1) with 1 and 0 respectively and then add those values to get f(3).

Student: Um... f(3) = f(1) + f(0). So, f(3) equals 1? How do I get all those other numbers in the choices? :o

Helper: you keep going up with n

Helper: find f(4) using the same f(n)=f(n-1)+f(n-2) formula

Helper: then do f(5) and so on...

Student: ... But what //is// f(5)? Idk I've been doing this for like an hour and the lesson is horrible at describing it. :(((

Student: And I'm not supposed to find f(5), I think... I'm supposed to find 5 terms or something.

Helper: f(1), f(2), f(3), f(4), f(5) are the numbers in your sequence. f(1) and f(2) are given; we just found f(3) given f(n)=f(n-1)+f(n-2). Then we do the same for f(4) and f(5). Evaluate: f(4)=f(4-1)+f(4-2) and f(5)=f(5-1)+f(5-2).

Helper: You must do f(4) before you do f(5) though
Helper: Since to evaluate f(5), you will need to know f(4) and f(3)

Student: f(4) = f(4-1) + f(4-2), f(4) = f(3) + f(2), f(4) = 5? f(5) = f(5-1) + f(5-2), f(5) = f(4) + f(3), f(5) = 7? Is this correct?

Student: ...? I don't understand. D:

Helper: So f(1)=0, f(2)=1, and f(3)=f(3-1)+f(3-2) = f(2)+f(1). But we are given that f(2) is 1 and f(1) is 0, so we replace f(2) with 1 and we replace f(1) with 0, like so: = 1+0 = 1. So we just concluded, as we concluded earlier, that f(3) is 1, not 3.

Helper: Now to find f(4), you use the same formula f(n)=f(n-1)+f(n-2); this is the same formula we used to evaluate f(3). We will use this formula over and over until we are done finding numbers in the sequence (which goes on forever). So anyways, f(4): replace n with 4, f(4)=f(4-1)+f(4-2) = f(3)+f(2). Recall we just found f(3), and f(2) was given to us.

Student: ... I am so confused. Okay so the formula comes out to ... f(4) = f(1) + f(1)? ??? ?? So the answer is 2? Does that make the answer to the entire problem C? :S

Helper: well f(3) is just one, not f(1), and f(2) is just one, not f(1)

Helper: f(1) is not equal to f(2)

Helper: but 1 is equal to f(2)

Helper: but yes 1+1 is equal to 2

Student: But in the problem... it says "f(2) = 1"?

Helper: yes f(2)=1, but f(2) does not equal f(1). f(1) is 0; 0 is never 1, so f(2) is never f(1), because they hold different values

Student: Oh.
I thought they were the same lol w/e. So is the answer C or do I have to waste my time farther into the problem?? :o

Helper: I'm pretty sure that is not what we got...

Helper: the first five terms are f(1), f(2), f(3), f(4), f(5); we have that f(1)=0, f(2)=1, f(3)=1, f(4)=2, ...

Student: Ohhh or it could be B or D. I know it isn't A...

Student: Ugh so I have to do more. x_x Math sucks omfg. Okay so lemme figure out the 5 term thingy.

Helper: You don't have to do more.

Helper: You just aren't realizing what we got for the first 4 numbers I guess

Student: Well two choices have "0, 1, 1, 2" as the first four numbers...

Helper: Yep that is right

Helper: So yeah we do have to go one more, because B and D both begin with that (sorry, didn't realize)

Helper: I leave finding f(5) to you

Student: Whats the equation thingy??

Helper: f(n)=f(n-1)+f(n-2); this is the same one we have been using

Helper: If we want to find f(5) and we are given f(n)=f(n-1)+f(n-2), then in order for f(n) and f(5) to be the same, n would have to be 5. So replace all the n's you see in f(n)=f(n-1)+f(n-2) with 5.

Student: f(5)=f(5-1)+f(5-2), f(5) = 4 + 3, f(5) = 2 + 1, f(5) = 3. So it's "B," right?
Helper: You mean that f(5-1) is equal to f(4) and that f(5-2) is equal to f(3); you only performed the inside operation, you haven't even used f. So you should have f(5)=f(5-1)+f(5-2) = f(4)+f(3), because 5-1 is 4 and 5-2 is 3, = 2+1, because f(4) is 2 and f(3) is 1, = 3. The answer is right; just the work was a little off. But yes, now we can say what the answer is.

Student: Yeah I don't need to show my work. Math is stupid so as long as I have the right answer, I'll pass. :) Hey, can you help me check the answer I found on the first part of a different question? Just so I know I'm on the right path to the correct answer?

Helper: Math isn't stupid.

Helper: Math is in everything.

Student: Lol I'm more of a calculator type of girl, myself. I'm a writer so it's not as important to me (unless someday a girl behind the cash register at the grocery store makes me solve a complicated equation about substitutions for x or whatever (with work shown!) LOL) So yeah, anyways, can you help just check my answer for the first part of this thing? I need to get this work done so I don't fail World History. D:

Student: (I'm homeschooled so my douchy World History teacher is going to call me in like 15 minutes and ask a bunch of questions I'll end up blundering on x_x. Next semester I'm going back to brick and mortar before I flunk World History and Algebra completely. lol)

Helper: Well, I would have to say it would help you and me if you had a more positive attitude about math.

Helper: And any of your future math tutors.
Student: Yeah, any positivity on the subject is more along the lines of me looking forward to the bright horizon: the time that I'm allowed to stop taking math classes. Lol. I guess what motivates me is the "It'll be over soon~" idea, like soldiers on the battlefield or somebody starving to death in the wilderness. I'd like to get it done and not have math bring down my 4.0 GPA. x_x I've always sucked at it, no matter how hard I try, and I don't like the subject in general (In my opinion, it's a bit impractical. Nobody I've ever met that isn't in a field like architecture or something has ever meticulously plotted graph points. And the only fractions I've ever found to be actually necessary in real life are 1/2, 1/3, and 1/4, basically.) I guess I'll never be a mathematician. *shrugs* Oh well. Lol.

Helper: Go ahead and post your other problem.

Student: Okay. :) "Find f(5) for this sequence: f(1) = 2 and f(2) = 5, f(n) = f(1) + f(2) + f(n - 1), n > 2. f(5) = ______" This is the problem. I made this equation: f(3) = f(1) + f(2) + f(3 - 1). So, with this, I ended up making: f(3) = 2 + 5 + f(3 - 1). The "f(3 - 1)" part is confusing me. I don't know what to do with this section as I try to evaluate the terms. D:

Helper: 3-1 equals 2

Helper: so what is f(3-1) equal to? f(3-1) is equal to f(2), and f(2) is given to you in the problem

Student: Oooh! Okay, I didn't think about it that way. f(3) = 2 + 5 + 5, f(3) = 12. Is this correct? (My original answer wasn't at all; I got 9 somehow. Lol)

Helper: that is right! good job!

Student: Okay! :)) Now one last question; how do I translate that into finding the 4th term?
f(4) = f(1) + f(2) + f(4 - 1), f(4) = 2 + 5 + 12. Does it look like this to solve it? :3

Helper: Yep! That is great. I think you are getting it.

Student: Okay! Thanks for all your help! :D

Helper: I'm really happy that you are happy that you did it. :) Now I must go. Good luck on everything.

Helper: I gave you a medal for understanding by the way. :)

Student: Great, thanks! :))
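The two recurrences worked through in this thread can be checked with a short script (a sketch; the function names are mine):

```python
def fib_like(n, f1=0, f2=1):
    """f(1)=0, f(2)=1, f(n)=f(n-1)+f(n-2): return the first n terms."""
    terms = [f1, f2]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

def second_sequence(n):
    """f(1)=2, f(2)=5, f(n)=f(1)+f(2)+f(n-1) for n>2."""
    f = {1: 2, 2: 5}
    for k in range(3, n + 1):
        f[k] = f[1] + f[2] + f[k - 1]
    return f[n]

print(fib_like(5))        # [0, 1, 1, 2, 3] -- answer B
print(second_sequence(3)) # 12, matching the worked example
```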
The notation used to describe observational relations in cosmological models varies greatly in the literature. Therefore, in order to maintain some consistency at least among the papers which deal directly with relativistic cosmology, here I shall attempt to follow as much as possible the definitions and notation similar to the ones used by Ellis (1971, p. 144).

Let us call $F$ the bolometric flux as measured by the observer. This is the rate at which radiation crosses unit area per unit time in all frequencies. Then $F_G$ will be the bolometric galaxy flux measured across a unit sphere located in a locally Euclidean space at rest with the galaxy.^(2)

The distance definitions used here are three: i) the observer area distance $r_0$ is the area distance of a source as measured by the observer;^(3) ii) the galaxy area distance $r_G$ is defined as the area distance to the observer as measured from the distant galactic source. This quantity is unobservable, by definition; iii) the luminosity distance $d_\ell$ is the distance measured by the observer as if the space were flat and nonexpanding, that is, as if the space were stationary and Euclidean. These three definitions of distance are related to each other by Etherington's reciprocity theorem (Ellis 1971, p. 153; Schneider et al. 1992, p. 111, 116),

$$ d_\ell = (1+z)^2 \, r_0 = (1+z) \, r_G, $$

where $z$ is the redshift of the source. Notice that all these distances tend to the same Euclidean value as $z \rightarrow 0$, but greatly differ at large $z$.

Let us now call $L$ the bolometric source luminosity, that is, the total rate of radiating energy emitted by the source and measured through a unit sphere located in a locally Euclidean spacetime near the source. Let us also call $\nu_0$ the observed frequency of the radiation, and $\nu_G$ the emitted frequency, that is, the frequency of the same radiation at the rest frame of the source. The source spectrum function $J(\nu_G)$ gives the proportion of radiation emitted by the source at a certain frequency $\nu_G$ as measured at the rest frame of the source.
This quantity is a property of the source, and since it gives the percentage of emitted radiation, it obeys the following normalization condition:

    integral from 0 to infinity of J(nu_G) d(nu_G) = 1.

Considering this definition, L_{nu_G} = L J(nu_G) is the specific source luminosity, and gives the rate at which radiation is emitted by the source at the frequency nu_G at its locally Euclidean rest frame. Then, to summarize, the following expressions give the relationship between flux and luminosity:

    F_G = L / (4 pi),        F = L / (4 pi d_L^2).

The redshift z is, by definition, given by

    1 + z = nu_G / nu_0,

and from this expression it follows that nu_0 = nu_G / (1 + z). The observed flux, redshift and observer area distance are then related by the equation below (Ellis 1971, p. 156):

    F = L / [4 pi r_0^2 (1+z)^4].

In the context of astronomical measurements, the observed flux is called the observed luminosity of the source, and the bolometric apparent magnitude of the source is defined by

    m_bol = -2.5 log_10 F + constant.

The distance modulus is defined by

    mu = m_bol - M_bol = 5 log_10 d_L + 25        (d_L in Mpc),

where M_bol is the bolometric absolute magnitude.

The underlying spacetime geometry appears in the expressions for the redshift and the different definitions of distance. That can be seen if we remember that in the general geometric case the redshift is given by (Ellis 1971, p. 146)

    1 + z = (u^a k_a)_source / (u^a k_a)_observer,

where u^a is the observer's four-velocity, and k^a is the tangent vector of the null geodesic connecting source and observer, that is, the past light cone. This expression allows us to calculate z for any given spacetime geometry. Similarly, the observer area distance r_0 is, by definition, given by

    r_0^2 = dA_0 / dOmega_0,

where dA_0 is the cross-sectional area of a bundle of null geodesics diverging from the observer at some point, and dOmega_0 is the solid angle subtended by this bundle (Ellis 1971, p. 153; Schneider et al. 1992, p. 110). This quantity can in principle be measured, but it can also be obtained from the assumed spacetime geometry, especially in spherically symmetric metrics, from where it can be easily calculated.
For instance, in the Einstein-de Sitter metric, the observer area distance is straightforwardly obtained as

    r_0 = (2c / H_0) (1+z)^(-1) [1 - (1+z)^(-1/2)],

where H_0 is the Hubble constant.

^2 Throughout this paper I will generically call any cosmological source by the term ``galaxy''. Back.

^3 This definition of distance has different names in the literature. It is the same as Weinberg's (1972) angular diameter distance, and Kristian & Sachs' (1966) corrected luminosity distance. Back.
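As a quick numerical illustration of these relations (my own sketch, not from the original paper; the function names and the value of H_0 are assumptions), the Einstein-de Sitter observer area distance and Etherington's reciprocity theorem can be coded directly:

```python
import math

C = 299792.458   # speed of light in km/s
H0 = 70.0        # assumed Hubble constant in km/s/Mpc

def observer_area_distance_eds(z):
    """Observer area distance r_0 (Mpc) in the Einstein-de Sitter model."""
    return (2.0 * C / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z)) / (1.0 + z)

def luminosity_distance(r0, z):
    """Etherington reciprocity: d_L = (1 + z)^2 r_0."""
    return (1.0 + z) ** 2 * r0

def galaxy_area_distance(r0, z):
    """Etherington reciprocity: r_G = (1 + z) r_0."""
    return (1.0 + z) * r0

# At low redshift all the distances approach the Euclidean value c z / H_0.
z = 1e-4
r0 = observer_area_distance_eds(z)
print(r0, C * z / H0)   # both approximately 0.428 Mpc
```

The low-z check makes the "same Euclidean value as z -> 0" remark above concrete, while at large z the three distances diverge from one another by the (1+z) factors.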
Topological Group Examples

There are many natural examples of topological groups; a few standard ones follow.

1. The set of all real numbers with addition as the group product; likewise the complex numbers.
2. Any group becomes a topological group if it is given the discrete topology.
3. A field F that is a topological group relative to addition, in which points are closed sets; for example, we could take F = R.

Because the topology of a topological group is determined by the neighbourhoods of the identity, many of its properties can be verified locally at the identity. Less elementary questions treated in the literature include the Markov topology of infinite permutation groups, and whether spaces such as the unit interval I = [0, 1] can be the underlying space of a topological group, or even the quotient space of a topological group modulo a subgroup.
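As a worked check of the first example above (my own sketch, not part of the cited excerpts), continuity of addition and inversion on the reals follows directly from the definition:

```latex
% Definition: a topological group is a group $(G,\cdot)$ equipped with a
% topology making multiplication $\mu(g,h) = g \cdot h$ and inversion
% $\iota(g) = g^{-1}$ continuous.
%
% Claim: $(\mathbb{R}, +)$ with the usual topology is a topological group.
Given $\varepsilon > 0$, if $|x - x_0| < \varepsilon/2$ and
$|y - y_0| < \varepsilon/2$, then
\[
  \bigl| (x + y) - (x_0 + y_0) \bigr| \le |x - x_0| + |y - y_0| < \varepsilon,
\]
so addition is jointly continuous. For inversion,
\[
  \bigl| (-x) - (-x_0) \bigr| = |x - x_0|,
\]
so $x \mapsto -x$ is continuous as well. Hence $(\mathbb{R}, +)$ is a
topological group; the same estimates work verbatim for $(\mathbb{C}, +)$.
```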
Consider The Cart Shown. There Is No Friction Or ... | Chegg.com

Consider the cart shown. There is no friction or damping in the system. The position of the cart is represented by x and the position of the end of the spring by y. The spring is under neither compression nor tension when both x and y are zero. The initial position and velocity of the cart are zero. Starting at time zero, the right end of the spring is given y = 0.1t u(t) m. The cart has a mass of 2 kg, and the spring constant is 18 N/m. Write the differential equation and initial conditions that represent the motion of the cart.

So far this is what I have. I know the only horizontal force will be the spring, which gives

    Summation Fx = Fk
    Fk = -k(x - y)
    m d2x/dt2 = -k(x - y)

Now I am unsure how to finish it.

Mechanical Engineering
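Completing the setup (my own sketch, not part of the original question thread): with m = 2 kg, k = 18 N/m and y = 0.1t for t > 0, the equation is 2 x'' + 18 x = 18(0.1 t), x(0) = x'(0) = 0, whose exact solution is x(t) = 0.1t - (1/30) sin(3t). A quick numerical check with a hand-rolled RK4 integrator:

```python
import math

# 2 x'' = -18 (x - 0.1 t)  =>  x'' = 9 (0.1 t - x), with x(0) = x'(0) = 0.

def accel(t, x):
    return 9.0 * (0.1 * t - x)

def simulate(t_end, dt=1e-3):
    """Integrate the first-order system (x, v) with classical RK4."""
    t, x, v = 0.0, 0.0, 0.0
    while t < t_end - 1e-12:
        k1x, k1v = v, accel(t, x)
        k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
    return x

def exact(t):
    # Particular solution 0.1 t plus homogeneous part fitted to the ICs.
    return 0.1 * t - math.sin(3.0 * t) / 30.0

print(simulate(2.0), exact(2.0))  # both approximately 0.2093
```

Substituting the exact solution back in confirms it: x'' + 9x = 0.3 sin(3t) + 0.9t - 0.3 sin(3t) = 0.9t, as required.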
Scheduling Concurrent Bag-of-Tasks Applications on Heterogeneous Platforms
Anne Benoit, Loris Marchal, Jean-François Pineau, Yves Robert, Frédéric Vivien
IEEE Transactions on Computers, vol. 59, no. 2, pp. 202-217, February 2010, doi:10.1109/TC.2009.117

Scheduling problems are already difficult on traditional parallel machines, and they become extremely challenging on heterogeneous clusters. In this paper, we deal with the problem of scheduling multiple applications, made of collections of independent and identical tasks, on a heterogeneous master-worker platform. The applications are submitted online, which means that there is no a priori (static) knowledge of the workload distribution at the beginning of the execution.
The objective is to minimize the maximum stretch, i.e., the maximum ratio between the actual time an application has spent in the system and the time this application would have spent if executed alone. On the theoretical side, we design an optimal algorithm for the offline version of the problem (when all release dates and application characteristics are known beforehand). We also introduce a heuristic for the general case of online applications. On the practical side, we have conducted extensive simulations and MPI experiments, showing that we are able to deal with very large problem instances in a few seconds. Also, the solution that we compute totally outperforms classical heuristics from the literature, thereby fully assessing the usefulness of our approach.

Index Terms: Scheduling and task partitioning, online computation, parallelism and concurrency, measurement, evaluation, modeling, simulation of multiple-processor systems.
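To make the stretch objective from the abstract concrete (a sketch of the metric only, with invented numbers; it says nothing about the paper's algorithms): for each application with release date r, completion time C, and time-alone t, the stretch is (C - r) / t, and the objective is the maximum over applications:

```python
def max_stretch(apps):
    """apps: list of (release, completion, time_alone) triples.

    Stretch of one application = (completion - release) / time_alone;
    the objective minimized by the paper is the maximum of these.
    """
    return max((c - r) / alone for r, c, alone in apps)

# Two applications sharing a platform; the second is delayed by the first.
apps = [
    (0.0, 10.0, 10.0),  # ran as if alone: stretch 1.0
    (2.0, 14.0,  4.0),  # waited behind the first: stretch (14 - 2) / 4 = 3.0
]
print(max_stretch(apps))  # 3.0
```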
Unifying parts of structures

Those with even a passing familiarity with Prolog should recognise statements like [H|T] = [1,2,3]. In particular, = here is not “is equal to” but rather “unifies with”. So that statement causes the variable H to unify with 1, and T with the rest of the list, [2, 3]. Clojure’s abstract bindings provide much the same capability – (let [[h & t] '(1 2 3)] <do stuff>) – modulo the difference between pattern matching and unification, of course.

There’s a subtlety in something like [H|T] = [1,2,3], at least if your lists aren’t built of nested cons cells. Consider Smalltalk arrays. Suppose we have some ListUnifier that will rip a SequenceableCollection‘s head off, like #(1 2 3). We’d like the tail to unify with #(2 3), in other words. But that’s not a node in the original structure – it’s an entirely artificial node we wish to construct from the original collection. Firstly, how can we unify with only part of a structure and, secondly, how can we determine a solution from that partition?

Let’s try to model the parts:

    DestructuringUnifier subclass: #ListUnifier
        instanceVariableNames: 'head tail'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Unification-Destructuring'.

    ListUnifier >> head: anObject tail: anotherObject
        head := anObject.
        tail := anotherObject.

    ListUnifier class >> headNamed: headSymbol tailNamed: tailSymbol
        ^ self new head: headSymbol asVariable tail: tailSymbol asVariable.

    "Various helper constructors like #head:tailNamed:, #head:tail:, etc. elided for brevity."

Then we can write the original Prolog statement as

    (ListUnifier headNamed: #x tailNamed: #y) =? #(1 2 3)

As mentioned above, first we want to be able to construct an equivalence relation on the above (or, expressed differently, partition the set of nodes in the structure together with the artificial nodes we create) such that #x asVariable and 1 are in the same class, and ditto for #y asVariable and #(2 3).
    unificationClosureWith: anObject in: termRelation
        | h t partition |
        anObject isMetaVariable ifTrue: [^ termRelation union: self with: anObject].
        anObject isCollection ifFalse: [^ self failToUnifyWith: anObject].
        anObject isEmpty ifTrue: [^ self failToUnifyWith: anObject].
        h := head isCollection
            ifTrue: [anObject first: head size]
            ifFalse: [anObject first].
        t := head isCollection
            ifTrue: [anObject allButFirst: head size]
            ifFalse: [anObject allButFirst].
        partition := head unificationClosureWith: h in: termRelation.
        ^ tail unificationClosureWith: t in: partition.

The mild complication around head isCollection lets us support a head that is itself a collection. So let’s check that we can construct a partition using parts of things:

    | left right partition |
    left := ListUnifier headNamed: #x tailNamed: #y.
    right := #(1 2 3).
    partition := VariableTrackingUnionFind
        usingArrayType: PersistentCollection
        partitioning: Dictionary new.
    partition := (partition find: left)
        unificationClosureWith: (partition find: right)
        in: partition.
    partition elementsOfClass: #x asVariable. "=> {1 . (#Variable #x)}"
    partition elementsOfClass: #y asVariable. "=> {#(2 3) . (#Variable #y)}"

We see that the partition that originally would only hold nodes in the structures may now hold parts of the original structure.

The original algorithm for determining the most general unifier from some partition, as described in Baader & Snyder (pp. 461-462), runs the solution finder starting from the left operand in the unification. Consider the partition we have above. What elements are in the equivalence class of the ListUnifier? Well, just the ListUnifier itself! Clearly we need to adjust the solution finder a bit.
The obvious approach would be to start the solution-finding from an element in each class, and merge the partial solutions:

    findSolutionFor: aVariableAvoidingUnionFind
        ^ aVariableAvoidingUnionFind
            inject: MostGeneralUnifier new
            into: [:mgu :node |
                mgu addAll: (self new findSolutionFor: aVariableAvoidingUnionFind starting: node)]

where #addAll: merges the various MostGeneralUnifiers generated and #inject:into: folds over the representative node in each equivalence class. (Remember, a union-find always has a representative for each equivalence class, namely, myPartition find: someObject.) And it works, at the cost of turning a linear algorithm into a (worst case) quadratic one:

    | left right |
    left := ListUnifier headNamed: #x tailNamed: #y.
    right := #(1 2 3).
    left =? right
    "=> MostGeneralUnifier((#Variable #x)->1 (#Variable #y)->#(2 3) )"

But “finding a solution” really means “to what must we assign each variable?”. So we can at least speed things up by only solution-finding in those classes in which variables occur:

    findSolutionFor2: aVariableAvoidingUnionFind
        ^ aVariableAvoidingUnionFind variableContainingClasses
            inject: MostGeneralUnifier new
            into: [:mgu :node |
                mgu addAll: (self new findSolutionFor: aVariableAvoidingUnionFind starting: node)]

This makes finding a solution O(NM), where N is the number of nodes in the structure and M the number of classes containing variables.
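For readers without a Smalltalk image handy, here is a rough Python analogue of the head/tail destructuring idea (my own sketch: it substitutes a plain bindings dictionary for the post's union-find machinery, and all names are invented):

```python
class Var:
    """A metavariable, identified by name."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

class ListUnifier:
    """Unifies a head pattern with the first element of a sequence and a
    tail pattern with the artificial rest-of-sequence node."""
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

    def unify(self, seq, bindings=None):
        bindings = dict(bindings or {})
        if not isinstance(seq, (list, tuple)) or not seq:
            return None  # failure: nothing to destructure
        h, t = seq[0], list(seq[1:])
        for pattern, value in ((self.head, h), (self.tail, t)):
            if isinstance(pattern, Var):
                if pattern.name in bindings and bindings[pattern.name] != value:
                    return None  # clash with an earlier binding
                bindings[pattern.name] = value
            elif pattern != value:
                return None  # a literal in the pattern failed to match
        return bindings

# [H|T] = [1,2,3]
print(ListUnifier(Var("x"), Var("y")).unify([1, 2, 3]))
# {'x': 1, 'y': [2, 3]}
```

The tail binding [2, 3] is exactly the "entirely artificial node" discussed above: it is constructed from the original sequence rather than found inside it.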
Meeting Details For more information about this meeting, contact Jason Morton. Title: Eigenvectors of tensors and Waring decomposition Seminar: Applied Algebra Seminar Speaker: Luke Oeding, U.C. Berkeley Waring’s problem for polynomials is to write a given polynomial as a minimal sum of powers of linear forms. The minimal number of summands required in a Waring decomposition (the Waring rank) is related to secant varieties. I will explain recent work of Landsberg and Ottaviani that unified and generalized many constructions for equations of secant varieties via vector bundle techniques. With Ottaviani we have turned this construction into effective algorithms to actually find the Waring decomposition of a polynomial (provided the Waring rank is below a certain bound). Our algorithms generalize Sylvester’s algorithm for binary forms, using an essential new ingredient – eigenvectors of tensors. Of course a naive algorithm always exists, but is rarely effective. I will explain how computations using linear algebra make our algorithms effective. Given time, I will demonstrate our Macaulay2 implementations. Room Reservation Information Room Number: MB106 Date: 01 / 16 / 2013 Time: 04:40pm - 05:30pm
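As a toy instance of the Waring problem mentioned in the abstract (my own illustration, unrelated to the talk's algorithms): the binary quadratic xy has Waring rank 2, since xy = ((x+y)^2 - (x-y)^2) / 4 writes it as a combination of two squares of linear forms. A quick numerical check:

```python
def waring_lhs(x, y):
    return x * y

def waring_rhs(x, y):
    # Combination of squares of the linear forms x + y and x - y.
    return ((x + y) ** 2 - (x - y) ** 2) / 4

# The two polynomials agree identically, so check a grid of sample points.
for x in range(-3, 4):
    for y in range(-3, 4):
        assert waring_lhs(x, y) == waring_rhs(x, y)
print("xy = ((x+y)^2 - (x-y)^2)/4 verified")
```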
Ellen Hayes
Calculus with Applications: An Introduction to the Mathematical Treatment of Science
Allyn and Bacon, Boston, 1900

This little book has been written for two classes of persons: those who wish, for purposes of culture, to know, in as simple and direct a way as possible, what the calculus is and what it is for; and students primarily engaged in work in chemistry, astronomy, economics, etc., who have not time or inclination to take long courses in mathematics, yet who would like "to know how to use a tool as fine as the calculus." The "pure" mathematician will note the omission of various subjects that are important from his point of view; but for him there are admirable and lengthy treatises on pure calculus. Also the student whose experience has led him to conceive of mathematical study as the doing of interminable lists of exercises, will be surprised and, possibly, disappointed. This book is a reading lesson in applied mathematics. Fancy exercises have been avoided. The examples are, for the most part, real problems from mechanics and astronomy. This plan has been pursued in the conviction that such problems are just as good as make-believe ones for purposes of discipline, and a good deal better for purposes of knowledge. The time-honored method of presenting calculus is much as if travelers should be stopped and made to pound stone on the highway, so that they never get anywhere or even know what the road is for. The following pages are a protest against the conventional method; for I am wholly in sympathy with a remark made by Professor Lester F. Ward, in his Outlines of Sociology: "There is no more vicious educational practice, and scarcely any more common one, than that of keeping the student in the dark as to the end and purpose of his work. It breeds indifference, discouragement, and despair."
A chapter on analytic geometry has been introduced, in the hope that teachers will try the plan of presenting the elements of the calculus and of analytic geometry together. There is no good reason either for keeping them distinct or for presenting analytic geometry first.

To three works I have to express my deep obligation. The spirit manifest in them has been my chief encouragement in preparing this book. I refer to Greenhill's Differential and Integral Calculus, Perry's Calculus for Engineers, and Nernst and Schönflies' Einführung in die mathematische Behandlung der Naturwissenschaften. We have in these works, let us hope, an indication of the role which calculus is to play in schemes for liberal and scientific education in the not far distant future.

Wellesley College, September, 1900

Contents

1. Differentiation and Integration
   Differentiation of algebraic functions
   Implicit functions. Exercises
   Differentiation of trigonometric functions
   Differentiation of exponential and logarithmic functions
   Second derivatives. Partial and total differentials
   Taylor's theorem
   Maclaurin's theorem
   Binomial theorem. Converging series
   Indeterminate forms
   Exercises. Logarithms

2. The Graph
   Cartesian system of coordinates
   Geometric meaning of dy/dx. Exercises
   Maxima and minima. Exercises
   Examples in maxima and minima
   Polar coordinates

3. Applications
   Velocity and acceleration
   Simple harmonic motion
   Falling bodies
   Rectilinear motion
   Parabolic motion
   Motion in a vertical curve
   Simple pendulum
   Areas. Examples. Mean values
   Length of curves
   Volumes and surfaces of revolution
   Double and triple integrals
   Perfect differential
   Moment of inertia. Examples
   Kepler's laws

4. Analytic Geometry
   The equation f(x,y) = 0
   Changes of axes
   Condition of parallelism, and of perpendicularity
   Straight line in terms of slope and intercept
   Straight line in terms of two points
   Straight line in terms of one point and slope
   Distance between two points
   Distance from point to line
   The ellipse
   The hyperbola
   The parabola
   Tangent and normal to a curve
   Path of middle point of ellipse-chord
   Determination of center and axes of ellipse
   Tangent in terms of slope and intercept
   Space coordinates
   Distance between two points in space
   Equation to a plane surface

5. Formulas
   Fundamental integrals
   Other integrals
   Miscellaneous formulas

6. Index
Burien, WA Algebra 1 Tutor

Find a Burien, WA Algebra 1 Tutor

...My name is Bill, and I have been working as a chemist for several years. I have also been teaching chemistry part-time at Tacoma Community College. I have degrees from Santa Clara University and the University of Washington in Chemistry - majoring in Organic Chemistry.
12 Subjects: including algebra 1, chemistry, geometry, algebra 2

...I also have a strong background in speaking, writing, and teaching German. Throughout high school, I took German classes and then afterwards spent a brief time in Germany. When I got back and went to Washington State University, I minored in German.
12 Subjects: including algebra 1, reading, geometry, ASVAB

...I am a perfect-score MCAT instructor, and I'd like to give you a free trial session. My first time taking the MCAT, I scored in the 99th percentile, and subsequently I've consistently scored perfect on repeat exams. Tutoring is my full-time gig.
16 Subjects: including algebra 1, geometry, Chinese, GRE

...I learned this to be true. A profound composition can achieve so much. I worked in the writing center at my university, and there we helped students edit for content, using the details of a subject to back up a clear and persuasive argument.
22 Subjects: including algebra 1, reading, chemistry, English

...Thank you for your interest! 4.0 in Differential, Integral, Vector, and Multi-variable Calculus. At the college level, I have taken a differential equations course and earned a 4.0. I also have taken single variable, multi-variable, and vector calculus. I have been tutoring on campus for nearly two years as well, in both mathematics and logic.
10 Subjects: including algebra 1, physics, calculus, algebra 2
Hello Fellow Smart Ones! Hey people! I'm Lazernugget, I'm kinda young, but my love for Math, Physics, Astronomy, Inventing, Math, Physics, Astrono-Wait...I already said those! XD...So yeah....I LOVE advanced math...even though that math is like 5 grades ahead of me...I know 44 digits to Pi, (Currently) and just....well...love math! I want to be an astronomer and inventor, and understand most physics, ALL about astronomy, and I find Algebra section 2 being more fun than algebra section 1! I know geometry, and combination math, and (What I'm proud of,) I know how to do SOME Trigonometry! You will learn more of my personality soon, so I hope everyone's friendly and helpful! See ya! MATH......that is all. Re: Hello Fellow Smart Ones! Hi Lazernugget; Welcome to the forum! How young or how old are you? Why astronomy? Ever thought about a career in math? If you love it then that is where you ought to be, maybe? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Hello Fellow Smart Ones! I am 10..funny right? I just love the idea of Astronomy...It has A LOT to do with math...and physics...Looking and finding new planets and stars...mapping the universe...My inventing comes in many ways...I have satellite blueprints, a new space-craft design, and such...yeah... MATH......that is all. Re: Hello Fellow Smart Ones! No that is not funny. I am impressed! Hope to see what you can do. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Hello Fellow Smart Ones! P.S. What are these for? I seek knowledge! Here: ∴ ∑ ∫ MATH......that is all. Re: Hello Fellow Smart Ones! 
Thanks Bobbym, Good to see a respecting online Mod like you...mods on another forum I know of are too strict and un-understanding of people...

I agree. I am subject to it too. They are bashing me horribly on another forum. I had posts deleted from my score there to prevent me from being considered for a moderator. I would only be a moderator there to protect myself and for no other reason. I am laughed at, ridiculed, called names by the mods. 2 or 3 people think that the moderation here is too stern. I only wish to the three of them that I could sign them up over there. They deserve each other.

Re: Hello Fellow Smart Ones!

Again, Can you explain what these are called? Here: ∴ ∑ ∫

Re: Hello Fellow Smart Ones!

That is the therefore sign. The next is sigma or sum of. The next is an integral.

Re: Hello Fellow Smart Ones!

ahhh, now I remember! Great! Knowledge!!! I like to feed my brain...

Re: Hello Fellow Smart Ones!

Well do not feed it too much. I knew a guy who fed his brain and it became fat. So he had to put it on a diet.

Re: Hello Fellow Smart Ones!

That's funny. BRAIN FOOD....

Re: Hello Fellow Smart Ones!

Hi Lazernugget ... welcome! Astronomy is fascinating. And yes, there is lots of mathematics in it!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman

Re: Hello Fellow Smart Ones!

Oh Hi MIF! I learned a lot yesterday from your website! I'm a 10 year old who gets Sin/cosine/tangents! I find math awesome...

Re: Hello Fellow Smart Ones!

Hi Lazernugget, Belated welcome to the forum! You seem amazingly talented!

Lazernugget wrote: I know 44 digits to Pi, (Currently) and just....well...love math!

I remember 34 digits only to Pi!

Character is who you are when no one is looking.

Re: Hello Fellow Smart Ones!

hi Lazernugget If you want to combine maths with astronomy here's a suggestion that shouldn't cost you too much. "A field guide to the stars and planets" by Donald Menzel. I bought this book 1966 edition but it's been re-printed lots of times and seems to be available second hand for a few cents. It has accurate diagrams of the visible planetary orbits along with the data you need to calculate which one(s) will be in the night sky and in what constellation. I find it very satisfying to work out what I will be able to see on a given date, and then go out and confirm it by making the observation. A wonderful combination of mathematical modelling and science in action. You can also automate the calculations by setting up the formulas in a spreadsheet program like Excel. Every year Nortons Star Atlas is published. It too is full of useful stuff for the star gazer. Couldn't find a 2011 edition on-line (surely it's out there somewhere?) but 2010 certainly exists. Worth a search for the latest I think. later edit: Looks like it isn't published every year. My mistake. So 2010 is the latest. This will cost you a bit.

Last edited by bob bundy (2011-01-23 22:05:10)

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Hello Fellow Smart Ones!

Cool! Thanks everyone!

Re: Hello Fellow Smart Ones!
hello, nice to meet you! all i can say is, learn as much as you can but also have fun, use khanacademy, examsolutions etc and as many youtube videos as you can, read books, find good math teachers, use mathisfun and physicsforums and WA to learn. when you don't understand just ask someone, other people never had this kind of thing when they were young, make the most of it

Re: Hello Fellow Smart Ones!

Hi dolgopolov, Welcome to the forum!

Re: Hello Fellow Smart Ones!

Hello There...

Re: Hello Fellow Smart Ones!

Hi clayton20101, Welcome to the forum!

Re: Hello Fellow Smart Ones!

Hi clayton20101; Welcome to the forum.

Super Member

Re: Hello Fellow Smart Ones!

44 digits? Quite impressive. You may break my record soon!

I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann

Re: Hello Fellow Smart Ones!

Hi guys; That is impressive. I got to 50 once.

Star Member

Re: Hello Fellow Smart Ones!

Nice cartoon!

"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."

Re: Hello Fellow Smart Ones!

hi everyone! I'm Paul Travis.
I am new here. Nice to know that many do love Math nowadays! That's so nice!

Last edited by paul50nine (2011-02-11 14:43:12)
TGIF problem - median of 5

In discussing this problem I looked up some of the possible solutions, and discovered a "median of medians" algorithm that intrigued me. Its basis is to divide the dataset into groups of 5 and calculate the median of each group, so my first thought was: why 5? Since the next logical set for a median would be 11, I wonder if this is a throwback to the days when machines only had 8 registers available? If anybody has any other ideas, I'd love to know. And is there any advantage of using 5 over 7 as the base?

However, that aside, it got me thinking; because it's a problem that (oddly enough) I've never had to deal with: what is the most efficient way of determining the median of specifically a group of 5 values? I've been mulling it over for a few hours and think I'm pretty close to the best (mine involves at most 6 comparisons); but I'm interested in other suggestions, because I've been known to miss the obvious. I'm also finding it darn difficult to prove to myself that my solution works empirically.

Isn't it funny how there's always time and money enough to do it WRONG?
Articles by Winston can be found here

Winston Gutkowski wrote: In discussing this problem I looked up some of the possible solutions, and discovered a "median of medians" algorithm that intrigued me. Its basis is to divide the dataset into groups of 5 and calculate the median of each group, so my first thought was: why 5? Since the next logical set for a median would be 11, I wonder if this is a throwback to the days when machines only had 8 registers available? If anybody has any other ideas, I'd love to know. And is there any advantage of using 5 over 7 as the base?

My first question too is why only groups of 5? Why not 3 or 7 or 9 or 11 or whatever? What is the reason for having 5 elements in a group?
Winston Gutkowski wrote: However, that aside, it got me thinking; because it's a problem that (oddly enough) I've never had to deal with: what is the most efficient way of determining the median of specifically a group of 5 values? I've been mulling it over for a few hours and think I'm pretty close to the best (mine involves at most 6 comparisons); but I'm interested in other suggestions, because I've been known to miss the obvious. I'm also finding it darn difficult to prove to myself that my solution works empirically.

As far as I can remember from my school days, a median is the value that lies midway between a list of sorted values. So, each sub group needs to be sorted, its median found and then a continuous group of such medians is to be formed. Then the median of these medians is to be found. This would be the pivot element. And you are attempting to do that in the least possible time. Am I correct? What is the objective of doing so much work just to find a "median of medians" which would only serve to be the "pivot" for the partition algorithm later? In other words, what I am asking is what does this algorithm achieve over and above the one that I implemented in my code? ~ Mansukh

Mansukhdeep Thind wrote: And you are attempting to do that in the least possible time. Am I correct?

Yup, and since the base group is 5, it makes sense to start with that problem; and it's not as easy as it looks.

Mansukhdeep Thind wrote: What is the objective of doing so much work just to find a "median of medians" which would only serve to be the "pivot" for the partition algorithm later?

Because partition() does a lot of work itself, so it makes sense to make it as effective as possible. If you're unlucky enough to choose your pivots badly, the same element can get swapped many times by Quicksort (hence the worst case time of O(n^2)).
The "median of medians, grouped by 5" is also extremely fast, since at each stage of recursion you're reducing by a power of 5 rather than the usual 2.

Mansukhdeep Thind wrote: what I am asking is what does this algorithm achieve over and above the one that I implemented in my code?

Well, according to the Wiki page, it improves worst-case time to O(n*log(n)) - although they also say that, in practice, it generally doesn't make much difference. I'm also darn sure that 5 wasn't chosen by accident, and that it probably gives the best "bang for the buck" in terms of efficiency; but exactly why I honestly don't know.

Looks like having blocks of 3 would not guarantee O(n): the reason is that by choosing blocks of 3, we might lose the guarantee of having an O(n) time algorithm. For blocks of 5, the time complexity is T(n) = T(n/5) + T(7n/10) + O(n). For blocks of 3, it comes out to be T(n) = T(n/3) + T(2n/3) + O(n). Check this out: Source: http://stackoverflow.com/questions/3908073/optimal-median-of-medians-selection-3-element-blocks-vs-5-element-blocks

What is the most efficient way of determining the median of specifically a group of 5 values? You are, in fact, asking about the most efficient way to sort five elements, right? As the median will always be the third value in a sorted set of 5.

Paul Mrozik wrote: You are, in fact, asking about the most efficient way to sort five elements, right? As the median will always be the third value in a sorted set of 5.

Possibly, but I suspect not. Quickselect, which finds the kth element of n, only needs to do a partial sort (the "partition" method we've been talking about), since you already know which partition the result should be in. The problem is that the first partition of 5 requires 4 comparisons and at least 2 swaps. I'm trying to work out if there's a better alternative than Quickselect for specifically 5 items.
Just a fun little problem that I've never really thought about before.

Winston
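Since the thread never posts concrete code, here is a sketch of both ideas in Python (not Java, and not Winston's actual scheme, which he never shared): a median-of-5 routine that uses exactly 6 comparisons in every case, plus a minimal median-of-medians quickselect using groups of 5. All names are my own.

```python
from itertools import permutations

def median5(a, b, c, d, e):
    """Median of 5 values using exactly 6 comparisons."""
    if a > b: a, b = b, a                 # 1: now a <= b
    if c > d: c, d = d, c                 # 2: now c <= d
    if a > c: a, b, c, d = c, d, a, b     # 3: now a <= b and a <= c <= d
    # a is <= three other elements, so it cannot be the 3rd smallest of 5;
    # the median is therefore the 2nd smallest of the remaining {b, c, d, e}.
    if b > e: b, e = e, b                 # 4: now b <= e
    if b <= c:                            # 5: b is the smallest of the four
        return c if c < e else e          # 6
    else:                                 # 5: c is the smallest of the four
        return b if b < d else d          # 6

def mom_select(xs, k):
    """k-th smallest element (0-based) via a median-of-medians pivot."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    groups = [sorted(xs[i:i + 5]) for i in range(0, len(xs), 5)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = mom_select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    hi = [x for x in xs if x > pivot]
    eq = len(xs) - len(lo) - len(hi)      # copies of the pivot itself
    if k < len(lo):
        return mom_select(lo, k)
    if k < len(lo) + eq:
        return pivot
    return mom_select(hi, k - len(lo) - eq)

# Winston's "darn difficult to prove empirically" worry is easy to settle
# by brute force: check every ordering of five distinct values.
assert all(median5(*p) == 2 for p in permutations(range(5)))
```

The exhaustive check over all 120 orderings is exactly the kind of empirical proof the thread asks about; for distinct values, 6 comparisons is also the known lower bound for median-of-5 selection.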
Keeping track of the circle for integral representations of π

The fundamental constant π is characterized in many ways. Historically, it all began from tentative measurements of the circle's perimeter or area and gradually shifted into more mathematics, in such a way that the link between the circle and modern characterizations of π faded away; see, for example, the formulas presented at Wolfram MathWorld. In the preceding posts, I mentioned infinite products as approximations for π. These may be seen geometrically as exhaustion methods, where the area of a polygon approaches the circular area alternately from above and from below. In order to view the role of the circle in integral representations of π, we need to switch to alternative ways to visualize math functions.

As an example, let's take the constant function y=f(x)=2. The function f maps an element x from a domain to the element y of the target. In this case, for every x, the target y has the constant value 2. With Cartesian coordinates, we are used to represent this function as a horizontal straight line, like in Figure 1a. If however we write it as R=f(r)=2, where the function f maps any circle of radius r of the domain to a target circle of radius R=2, the same function can be viewed as a circle of constant radius, like in Figure 1b. So the same function f can be equally well viewed as a straight line or as a circle (x, y, r or R are only dummy variables). Now if we take another example, the linear function, y=f(x)=2x, we are often used to view it in Cartesian coordinates as a straight line with slope 2, like in Figure 1c. In the circular representation R=f(r)=2r, this works however differently. Because we are relating circles of the input domain to other circles of the target, for each circle of radius r, we need to draw the target circle of radius 2r. A single line won't do. For one value of r, we need to draw two circles.
If we use blue circles for elements of the input domain and red circles for elements of the target, we could visualize it for successive values of r as an animation like in Figure 1d. In that way, we view the progression of the target circle as the input circle becomes larger. Unlike the Cartesian representation, which shows the progression of a function in a static graph, this circular representation needs a dynamic or recurrent process to get a grip on the progression of the function. Therefore it isn't very adapted for illustrations in print media. On the other hand, it has the advantage of keeping track of the geometrical form of the circle. And that's exactly what we need in order to perceive the circular nature when π shows up in mathematical functions.

The relation of the integral of the Cauchy-Lorentz distribution 1/(1+r²) with the circle can then be seen with the help of the geometric counterparts of arithmetic operations like addition, squaring and dividing. A convenient procedure is illustrated in the successive steps of Figure 2.

Step 1. Draw the input circle of radius r and the reference circle of radius unity.
Step 2. Determine r².
Step 5. Find the target ring related to the input ring ranging over [r, r + dr]. This yields a ring of width dr/(1+r²). The location of this ring depends on the relative progression rates of r and r² (I've not yet found a straightforward explanation for this determination).
Step 6. Integrate dr/(1+r²) for r running over all space. For r becoming larger and larger, the summed area tends towards the area of a circle of radius 1. For the positive half plane, this corresponds to the π/2 value found analytically.

The tricky step seems to be how to relate the progression between r and 1/(1+r²) in steps 5 and 6. One can verify, for example, the value of the integral at intermediate steps. For the integral from r=0 to 1, the value in the positive half plane must be π/4, which can be verified on the figure below.
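The two checkpoints in the last paragraph (π/4 for the integral over [0, 1], π/2 over the whole positive half line) are easy to confirm numerically. A quick sketch using a plain midpoint sum, which is my own check rather than anything from the post:

```python
import math

def integral(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# the Cauchy-Lorentz integrand discussed above
f = lambda r: 1.0 / (1.0 + r * r)

quarter = integral(f, 0.0, 1.0)      # should approach pi/4
half = integral(f, 0.0, 1000.0)      # should approach pi/2 as the cutoff grows

print(quarter, math.pi / 4)
print(half, math.pi / 2)
```

The finite cutoff at r = 1000 leaves out a tail of about arctan(1/1000) ≈ 0.001, which is why the second value only matches π/2 to three decimal places; analytically the antiderivative is arctan(r), so the limits are exact.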
In order to gain more insight on π, it could be of interest to develop skills for this circular representation.
Poisson distribution

Topic: Poisson distribution Related Topics In the News (Thu 17 Apr 14)

Poisson Distributions
The Poisson distribution describes a wide range of phenomena in the sciences. However, the important property of processes described by the Poisson distribution is that the SD is the square root of the total counts registered. To illustrate, the table shows the results of counting our radioactive sample for different time intervals (with some artificial variability thrown in). Bernoulli distribution: This is used to describe discrete outcomes in trials such as coin-flipping, dice throwing, or the number and probabilities of DNA base pair changes.
www.bio.cmu.edu /Courses/03438/PBC97Poisson/PoissonPage.html (1934 words)

Definition: Poisson distribution (January 19, 2007)
For the Poisson distribution, the variance, λ, is the same as the mean, so the standard deviation is √λ. Professor Mean explains that the Poisson distribution often arises when you are counting events in a certain area or time interval. Poisson data tends to have a distribution that is skewed to the right, though it becomes closer to symmetric as the mean of the distribution increases.
www.childrens-mercy.org /stats/definitions/poisson.htm (1309 words)

Poisson distribution
The binomial distribution with parameters n and λ/n, i.e., the probability distribution of the number of successes in n trials, with probability λ/n of success on each trial, approaches the Poisson distribution with expected value λ as n approaches infinity. The higher moments of the Poisson distribution are Touchard polynomials in λ, whose coefficients have a combinatorial meaning. All of the cumulants of the Poisson distribution are equal to the expected value λ.
publicliterature.org /en/wikipedia/p/po/poisson_distribution.html (817 words)

Tales of Statisticians | Siméon-Denis Poisson
Poisson was born to modestly situated parents, and owed his career to the new scientific institutions created by the Revolution, which systematically sought and advanced students of promise. Poisson's Law of Large Numbers (1835), a generalization of Bernoulli and an advance on de Moivre, was the direct inspiration for Quetelet, and determined the direction of what is called the Continental school of statistics. This was the Poisson distribution, which predicts the pattern in which random events of very low probability occur in the course of a very large number of trials.
www.umass.edu /wsp/statistics/tales/poisson.html (790 words)

Poisson distribution Summary
The Poisson distribution is a mathematical rule that assigns probabilities to the number of occurrences of a certain event. Albert Einstein used Poisson noise to show that matter was composed of discrete atoms and to estimate Avogadro's number; he also used Poisson noise in treating blackbody radiation to demonstrate that electromagnetic radiation was composed of discrete photons. For temporally distributed events, the Poisson distribution is the probability distribution of the number of events that would occur within a preset time, the Erlang distribution is the probability distribution of the amount of time until the nth event.
www.bookrags.com /Poisson_distribution (2272 words)

Poisson biography
Poisson was named deputy professor at the École Polytechnique in 1802, a position he held until 1806 when he was appointed to the professorship at the École Polytechnique which Fourier had vacated when he had been sent by Napoleon to Grenoble.
The Poisson distribution describes the probability that a random event will occur in a time or space interval under the conditions that the probability of the event occurring is very small, but the number of trials is very large so that the event actually occurs a few times. Poisson never wished to occupy himself with two things at the same time; when, in the course of his labours, a research project crossed his mind that did not form any immediate connection with what he was doing at the time, he contented himself with writing a few words in his little wallet.
www-groups.dcs.st-and.ac.uk /~history/Biographies/Poisson.html (2581 words)

Poisson Distribution
The poisson distribution is used to model rates, such as rabbits per acre, defects per unit, or arrivals per hour. For a random variable to be poisson distributed, the probability of an occurrence in an interval must be proportional to the length of the interval, and the number of occurrences per interval must be independent. The poisson cumulative distribution function is simply the sum of the poisson probability density function from 0 to x.
www.engineeredsoftware.com /lmar/poisson.htm (495 words)

News | TimesDaily.com | TimesDaily | Florence, AL
The Poisson distribution is sometimes called a Poissonian, analogous to the term Gaussian for a Gauss or normal distribution. The Poisson distribution can be derived as a limiting case to the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed. For temporally distributed events, the Poisson distribution is the probability distribution of the number of events that would occur within a preset time, the Erlang distribution is the probability distribution of the amount of time until the nth event.
www.timesdaily.com /apps/pbcs.dll/section?category=NEWS&template=wiki&text=Poisson_distribution (1983 words)

Poisson Process
The Poisson random variable is characterized by a value λ = μ (the mean) which indicates the average number of events in the given time interval. The distribution associated with the Poisson random variable is the Poisson distribution. MEDICINE - The Poisson Distribution can be used to count the number of victims of specific diseases, such as the number of cancer deaths per house, or the number of malaria deaths per year.
www2.umassd.edu /CISW3/coursepages/pages/cis362/problems/poisson/explanation.html (752 words)

1.3.6.6.19. Poisson Distribution
The Poisson distribution is used to model the number of events occurring within a given time interval. Note that because this is a discrete distribution that is only defined for integer values of x, the percent point function is not smooth in the way the percent point function typically is for a continuous distribution. Most general purpose statistical software programs, including Dataplot, support at least some of the probability functions for the Poisson distribution.
www.itl.nist.gov /div898/handbook/eda/section3/eda366j.htm (175 words)

The Poisson Distribution
The Poisson Distribution is a discrete distribution which takes on the values X = 0, 1, 2, 3,... It is often used as a model for the number of events (such as the number of telephone calls at a business or the number of accidents at an intersection) in a specific time period. The Poisson distribution is determined by one parameter, lambda.
www.math.csusb.edu /faculty/stanton/m262/poisson_distribution/Poisson_old.html (141 words)

The Poisson Distribution online.
For the statistical analysis of rare events and accidents
The main difference between the poisson distribution and the binomial distribution is that in the binomial all eligible phenomena are studied, whereas in the poisson distribution only the cases with a particular outcome are studied. One assumption in this application of the poisson distribution is that the chance of having an accident is randomly distributed: every individual has an equal chance. Mathematically this is expressed in the fact that the variance and the mean for the poisson distribution are equal.
home.clara.net /sisa/poishlp.htm (761 words)

Poisson Distribution
Other phenomena that often follow a poisson distribution are death of infants, the number of misprints in a book, the number of customers arriving, and the number of activations of a Geiger counter. However, if we want to use the binomial distribution we have to know both the number of people who make it safely from A to B, and the number of people who have an accident while driving from A to B, whereas the number of accidents is sufficient for applying the poisson distribution. Thus, the poisson distribution is cheaper to use because the number of accidents is usually recorded by the police department, whereas the total number of drivers is not.
www.berrie.dds.nl /poisson.html (324 words) Arizona Rangelands: Inventory and Monitoring: Poisson Distribution Poisson distributions are special sampling distributions generated when discrete individuals are counted from a series of sample units. Poisson distributions are similar to the binomial distribution, except there are more than two alternative outcomes associated with the attribute. Sample data following a Poisson distribution cannot be analyzed using conventional inferential statistical procedures, which assume that data fits a normal distribution. ag.arizona.edu /agnic/az/inventorymonitoring/poisson.html (163 words) Distribution Fitting To determine this underlying distribution, it is common to fit the observed distribution to a theoretical distribution by comparing the frequencies observed in the data to the expected frequencies of the theoretical distribution (i.e., a Chi-square goodness of fit test). The major distributions that have been proposed for modeling survival or failure times are the exponential (and linear exponential) distribution, the Weibull distribution of extreme events, and the Gompertz distribution. The beta distribution arises from a transformation of the F distribution and is typically used to model the distribution of order statistics. www.statsoft.com /textbook/stdisfit.html (1769 words) CHU - Motivating the Poisson Process in Queuing Models. The appropriateness of the exponential and Poisson distributions, their linkage and their properties which lead to simple analytics, often escape our students as textbooks rarely provide empirical evidence to justify them. Another example is Schmuland (2001), who uses the Poisson model to explain the phenomena of bursts in shark attacks and the scoring patterns of ice hockey legend Wayne Gretzky. 
Based on his observation that the Poisson distribution provides a good fit for goals scored in ice hockey games, Berry (2000) assumes an exponential distribution for the times between goals to estimate the strategic time to "pull the goalie" when a team is down in a game.
ite.pubs.informs.org /Vol3No2/Chu/index.php (2484 words)

Poisson distribution
The poisson distribution is an appropriate model for count data. The poisson distribution was derived by the French mathematician Poisson in 1837, and the first application was the description of the number of deaths by horse kicking in the Prussian army (Bortkiewicz, 1898). The poisson distribution is a mathematical rule that assigns probabilities to the number of occurrences.
www.stattucino.com /berrie/poisson.html (229 words)

Poisson and Negative Binomial Regression
This test tests equality of the mean and the variance imposed by the Poisson distribution against the alternative that the variance exceeds the mean. Poisson Regression Overview, that is, the log of the mean, m, is a linear function of independent variables. Instead of assuming as before that the distribution of Y, number of occurrences of an event, is Poisson, we will now assume that Y has a negative binomial distribution.
www.uky.edu /ComputingCenter/SSTARS/P_NB_3.htm (741 words)

PoissonDistributionImpl (Math 1.1 API)
Create a new Poisson distribution with the given mean. Calculates the Poisson distribution function using a normal approximation. The normal distribution is used to approximate the Poisson distribution.
jakarta.apache.org /commons/math/api-1.1/org/apache/commons/math/distribution/PoissonDistributionImpl.html (229 words)

Normal & Poisson Distribution
The normal distribution is symmetric and is used to approximate the binomial distribution when p = q = 1/2. Haldane and Kosambi used the Poisson distribution to adjust the observed number of crossover events to the map distance.
A characteristic of the Poisson distribution is that the population mean and variance are equal.
www.ag.ndsu.nodak.edu /plantsci/adv_genetics/genetics/np/np02.htm (174 words)

[No title]
The Poisson distribution is used for counting discrete occurrences within a specified time interval. To estimate the number of topological domains, it is assumed that the nicks are introduced in a Poisson distribution (1, 38, 39). Using the Poisson formula, theoretical curves are calculated to determine the best fit with the experimental data points (described in the legend to Fig.
www.lycos.com /info/poisson-distribution.html?page=2 (347 words)

MMU - Research Design, Biol Sci Stats and RD: Poisson distribution
The binomial distribution is used to determine the probability of obtaining a particular number of successes for a binary event. The Poisson distribution is associated with events that have a small probability of occurring. Poisson probabilities are defined by the mean alone since, in a Poisson distribution, µ = σ² (i.e. the mean equals the variance).
obelia.jde.aca.mmu.ac.uk /rd/poisson.htm (313 words)
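Two claims repeated throughout these excerpts, that the mean and variance of a Poisson distribution are equal, and that it arises as the limit of a binomial with parameters n and λ/n, can be checked directly from the probability mass function p(k) = e^(−λ) λ^k / k!. A small numerical sketch of my own, not taken from any of the quoted sources:

```python
from math import comb, exp, factorial

lam = 4.0
ks = range(80)  # 80 terms is plenty: for lam = 4 the tail beyond this is negligible
pmf = [exp(-lam) * lam**k / factorial(k) for k in ks]

# mean and variance computed straight from the pmf; both should equal lam
mean = sum(k * p for k, p in zip(ks, pmf))
var = sum((k - mean) ** 2 * p for k, p in zip(ks, pmf))

# Binomial(n, lam/n) approaches Poisson(lam) as n grows
n = 20_000
p = lam / n
binom = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in ks]
max_gap = max(abs(a - b) for a, b in zip(pmf, binom))

print(mean, var, max_gap)
```

The largest pointwise gap between the two mass functions shrinks roughly like λ²/n, which is the usual quantitative form of the "limiting case of the binomial" statement above.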
Posts by (Total # Posts: 9)

List two ways bacteria and blue-green bacteria differ. Any help would be appreciated. :)

Algebra 1
Use the power of a product property to simplify the expression. (2)^3 I'm not sure whether you would just have it be 2^3 or just plain old 8. Any help would be appreciated, please.

Try distributing the 19. Also, +(-2) could be written as just w-2 to make it simpler. (19*7)+19w-(19*-2) 133+19w-38

Explain how gravitational force keeps planets revolving around the sun.

Take the distance you traveled (10,000 steps) and then multiply it by how far you moved each step. You should be able to figure that out quite easily. :)

I was just wondering if anyone would be able to give out some interesting facts on clean technology (ie solar power, wind power, etc). It's for a speech I have to give at a local college, so any facts would be greatly appreciated. :)

Excuse the part about the link there, was helping someone else and that got in there somehow.

First, to determine if a given individual bond is polar, you need to know the electronegativity of two atoms involved in that bond. To find the electronegativities of all the elements, look at the periodic table (follow the link below this answer under Web Links). If the elect...

The sum of twenty and five equals twenty-five.
Of, relating to, or based upon the number sixty. Sexagesimal refers especially to the number system with base 60. The Babylonians began using such a scheme around the beginning of the second millennium B.C. in what was the first example of a place-value system. Our degree of 60 minutes, minute of 60 seconds (in both time and angle measure), and hour of 60 minutes hark back to this ancient method of numeration. Quite why the Babylonians counted using sexagesimal isn't known, but 60 certainly has more factors than any other number of comparable size.
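Place-value conversion to base 60 is mechanical; here is a short sketch (the function name is my own invention, not from the encyclopedia entry):

```python
def to_sexagesimal(n):
    """Base-60 digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, d = divmod(n, 60)
        digits.append(d)
    return digits[::-1]

# 3661 seconds is 1 hour, 1 minute, 1 second:
print(to_sexagesimal(3661))  # [1, 1, 1]

# 60 is unusually divisible, one suggested reason for the Babylonian choice:
print(sum(1 for d in range(1, 61) if 60 % d == 0))  # 12 divisors
```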
Monochromatic light of wavelength 513 nm is incident on a narrow slit. On a screen 1.00 m away, the distance between the second diffraction minimum and the central maximum is 2.00 cm. (a) Calculate the angle of diffraction. (b) Find the width of the slit.
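This is not a posted solution, just a sketch of the standard single-slit treatment, in which minima satisfy a·sin θ = mλ (the variable names are mine):

```python
import math

wavelength = 513e-9  # m
L = 1.00             # slit-to-screen distance, m
y = 2.00e-2          # second minimum to central maximum, m
m = 2                # order of the minimum

# (a) angle of diffraction of the second minimum
theta = math.atan(y / L)  # about 0.02 rad, i.e. roughly 1.15 degrees

# (b) single-slit minima: a * sin(theta) = m * wavelength
a = m * wavelength / math.sin(theta)  # slit width, about 5.13e-5 m

print(math.degrees(theta), a)
```

At such a small angle sin θ ≈ tan θ = y/L, so a ≈ mλL/y gives essentially the same answer.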
Physics Forums - View Single Post - Multiplication (powers) of fractions

I agree with you, Integral. But I don't care if it's new or old. I just want to enjoy mathematics.

I've not heard of the development of a completely new area of mathematics in a very, very long time. Even quite revolutionary areas of mathematics like probability or calculus, which have only really come to light in the last few hundred years, are still very much based on the mathematics that already existed. A lot of mathematics is just building on old mathematics, so you need to have a good foundation in a lot of mathematics before you can start to understand the new stuff. I suggest you stick around on this forum, help people when you can, and try to soak in as much maths as you can beyond the level you are being taught. I've certainly built up my maths way beyond my peers by doing this.
Students will build new mathematical knowledge through problem solving.
2.PS.1 Explore, examine, and make observations about a social problem or mathematical situation
2.PS.2 Interpret information correctly, identify the problem, and generate possible solutions

Students will solve problems that arise in mathematics and in other contexts.
2.PS.3 Act out or model with manipulatives activities involving mathematical content from literature and/or story telling
2.PS.4 Formulate problems and solutions from everyday situations (e.g., counting the number of children in the class, using the calendar to teach counting)

Students will apply and adapt a variety of appropriate strategies to solve problems.
2.PS.5 Use informal counting strategies to find solutions
2.PS.6 Experience teacher-directed questioning process to understand problems
2.PS.7 Compare and discuss ideas for solving a problem with teacher and/or students to justify their thinking
2.PS.8 Use manipulatives (e.g., tiles, blocks) to model the action in problems
2.PS.9 Use drawings/pictures to model the action in problems

Students will monitor and reflect on the process of mathematical problem solving.
2.PS.10 Explain to others how a problem was solved, giving strategies and justifications

Students will recognize reasoning and proof as fundamental aspects of mathematics.
2.RP.1 Understand that mathematical statements can be true or false
2.RP.2 Recognize that mathematical ideas need to be supported by evidence

Students will make and investigate mathematical conjectures.
2.RP.3 Investigate the use of knowledgeable guessing as a mathematical tool
2.RP.4 Explore guesses, using a variety of objects and manipulatives

Students will develop and evaluate mathematical arguments and proofs.
2.RP.5 Justify general claims, using manipulatives
2.RP.6 Develop and explain an argument verbally or with objects
2.RP.7 Listen to and discuss claims other students make

Students will select and use various types of reasoning and methods of proof.
2.RP.8 Use trial and error strategies to verify claims

Students will organize and consolidate their mathematical thinking through communication.
2.CM.1 Understand how to organize their thought processes
2.CM.2 Verbally support their reasoning and answer

Students will communicate their mathematical thinking coherently and clearly to peers, teachers, and others.
2.CM.3 Share mathematical ideas through the manipulation of objects, drawings, pictures, charts, and symbols in both written and verbal explanations

Students will analyze and evaluate the mathematical thinking and strategies of others.
2.CM.4 Listen to solutions shared by other students
2.CM.5 Formulate mathematically relevant questions

Students will use the language of mathematics to express mathematical ideas precisely.
2.CM.6 Use appropriate mathematical terms, vocabulary, and language

Students will recognize and use connections among mathematical ideas.
2.CN.1 Recognize the connections of patterns in their everyday experiences to mathematical ideas
2.CN.2 Understand and use the connections between numbers and the quantities they represent to solve problems
2.CN.3 Compare the similarities and differences of mathematical ideas

Students will understand how mathematical ideas interconnect and build on one another to produce a coherent whole.
2.CN.4 Understand how models of situations involving objects, pictures, and symbols relate to mathematical ideas
2.CN.5 Understand meanings of operations and how they relate to one another
2.CN.6 Understand how mathematical models represent quantitative relationships

Students will recognize and apply mathematics in contexts outside of mathematics.
2.CN.7 Recognize the presence of mathematics in their daily lives
2.CN.8 Recognize and apply mathematics to solve problems
2.CN.9 Recognize and apply mathematics to objects, pictures and symbols

Students will create and use representations to organize, record, and communicate mathematical ideas.
2.R.1 Use multiple representations, including verbal and written language, acting out or modeling a situation, drawings, and/or symbols as representations
2.R.2 Share mental images of mathematical ideas and understandings
2.R.3 Use standard and nonstandard representations

Students will select, apply, and translate among mathematical representations to solve problems.
2.R.4 Connect mathematical representations with problem solving

Students will use representations to model and interpret physical, social, and mathematical phenomena.
2.R.5 Use mathematics to show and understand physical phenomena (e.g., estimate and represent the number of apples in a tree)
2.R.6 Use mathematics to show and understand social phenomena (e.g., count and represent sharing cookies between friends)
2.R.7 Use mathematics to show and understand mathematical phenomena (e.g., draw pictures to show a story problem or show number value using fingers on your hand)

Students will understand numbers, multiple ways of representing numbers, relationships among numbers, and number systems.
Number Systems
2.N.1 Skip count to 100 by 2’s, 5’s, 10’s
2.N.2 Count back from 100 by 1’s, 5’s, 10’s using a number chart
2.N.3 Skip count by 3’s to 36 for multiplication readiness
2.N.4 Skip count by 4’s to 48 for multiplication readiness
2.N.5 Compare and order numbers to 100
2.N.6 Develop an understanding of the base ten system: 10 ones = 1 ten, 10 tens = 1 hundred, 10 hundreds = 1 thousand
2.N.7 Use a variety of strategies to compose and decompose two-digit numbers
2.N.8 Understand and use the commutative property of addition
2.N.9 Name the number before and the number after a given number, and name the number(s) between two given numbers up to 100 (with and without the use of a number line or a hundreds chart)
2.N.10 Use and understand verbal ordinal terms
2.N.11 Read written ordinal terms (first through ninth) and use them to represent ordinal relations
2.N.12 Use zero as the identity element for addition
2.N.13 Recognize the meaning of zero in the place value system (0-100)

Number Theory
2.N.14 Use concrete materials to justify a number as odd or even

Students will understand meanings of operations and procedures, and how they relate to one another.
2.N.15 Determine sums and differences of number sentences by various means (e.g., families, related facts, inverse operations, addition doubles, and doubles plus one)
2.N.16 Use a variety of strategies to solve addition and subtraction problems using one- and two-digit numbers with and without regrouping
2.N.17 Demonstrate fluency and apply addition and subtraction facts up to and including 18
2.N.18 Use doubling to add 2-digit numbers
2.N.19 Use compensation to add 2-digit numbers
2.N.20 Develop readiness for multiplication by using repeated addition
2.N.21 Develop readiness for division by using repeated subtraction, dividing objects into groups (fair share)

Students will compute accurately and make reasonable estimates.
2.N.22 Estimate the number in a collection to 100 and then compare by counting the actual items in the collection

Students will perform algebraic procedures accurately.
Equations and Inequalities
2.A.1 Use the symbols <, >, = (with and without the use of a number line) to compare whole numbers up to 100

Students will recognize, use, and represent algebraically patterns, relations, and functions.
Patterns, Relations, and Functions
2.A.2 Describe and extend increasing or decreasing (+,-) sequences and patterns (numbers or objects up to 100)

Students will use visualization and spatial reasoning to analyze characteristics and properties of geometric shapes.
2.G.1 Experiment with slides, flips, and turns to compare two-dimensional shapes
2.G.2 Identify and appropriately name two-dimensional shapes: circle, square, rectangle, and triangle (both regular and irregular)
2.G.3 Compose (put together) and decompose (break apart) two-dimensional shapes

Students will identify and justify geometric relationships, formally and informally.
Geometric Relationships
2.G.4 Group objects by like properties

Students will apply transformations and symmetry to analyze problem solving situations.
Transformational Geometry
2.G.5 Explore and predict the outcome of slides, flips, and turns of two-dimensional shapes
2.G.6 Explore line symmetry

Students will determine what can be measured and how, using appropriate methods and formulas.
Units of Measurement
2.M.1 Use non-standard and standard units to measure both vertical and horizontal lengths
2.M.2 Use a ruler to measure standard units (including whole inches and whole feet)
2.M.3 Compare and order objects according to the attribute of length
2.M.4 Recognize mass as a qualitative measure (e.g., Which is heavier? Which is lighter?)
2.M.5 Compare and order objects, using lighter than and heavier than

Students will use units to give meaning to measurements.
2.M.6 Know and recognize coins (penny, nickel, dime, quarter) and bills ($1, $5, $10, and $20)
2.M.7 Recognize the whole dollar notation as $1, etc.
2.M.8 Identify equivalent combinations to make one dollar
2.M.9 Tell time to the half hour and five minutes using both digital and analog clocks

Students will develop strategies for estimating measurements.
2.M.10 Select and use standard (customary) and non-standard units to estimate measurements

Students will collect, organize, display, and analyze data.
Collection of Data
2.S.1 Formulate questions about themselves and their surroundings
2.S.2 Collect and record data (using tallies) related to the question

Organization and Display of Data
2.S.3 Display data in pictographs and bar graphs using concrete objects or a representation of the object

Analysis of Data
2.S.4 Compare and interpret data in terms of describing quantity (similarity or differences)

Students will make predictions that are based upon data analysis.
Predictions from Data
2.S.5 Discuss conclusions and make predictions from graphs
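Several of the computation-readiness standards above (compensation, repeated addition, repeated subtraction) describe concrete procedures, which can be sketched directly. The helper names here are mine, not part of the standards:

```python
def add_by_compensation(a, b):
    """2.N.19-style compensation: round b to a friendly ten, then adjust."""
    nearest_ten = round(b, -1)                   # e.g. 27 -> 30
    return a + nearest_ten + (b - nearest_ten)   # 38 + 27 = 38 + 30 - 3

def multiply_by_repeated_addition(a, times):
    """2.N.20: multiplication readiness via repeated addition."""
    total = 0
    for _ in range(times):
        total += a
    return total

def divide_by_repeated_subtraction(total, group_size):
    """2.N.21: division readiness via repeated subtraction (fair share)."""
    groups = 0
    while total >= group_size:
        total -= group_size
        groups += 1
    return groups

print(add_by_compensation(38, 27))            # 65
print(multiply_by_repeated_addition(4, 3))    # 12
print(divide_by_repeated_subtraction(12, 4))  # 3
```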
exact functor

A left/right exact functor is a functor that preserves finite limits/finite colimits.

The term originates in homological algebra, see remark 2 below, where a central role is played by exact sequences (originally of modules, more generally in any abelian category) and the fact that various functors preserve or destroy exactness of sequences to some extent gave vital information on those functors. In this context, one says that an exact functor is one that preserves exact sequences. However, many functors are only "exact on one side or the other". For instance, for all modules $M$ and short exact sequences $0 \to A \to B \to C \to 0$ of modules (over some ring $R$), the sequence $0 \to Mod_R(M, A) \to Mod_R(M,B) \to Mod_R(M,C)$ is exact – but note that there is no 0 on the right-hand side. Thus $F(-) = Mod_R(M,-)$ converts an exact sequence into a left exact sequence; such a functor is called a left exact functor. Dually, one has right exact functors.

It is easy to see that an additive functor between additive categories is left exact in this sense if and only if it preserves finite limits. Since merely preserving left exact sequences does not require a functor to be additive, in a non-additive context one defines a left exact functor to be one which preserves finite limits, and dually. Below we give the general definition and then discuss the relation to the concept in homological algebra in the section Properties - On abelian categories.

A functor between finitely complete categories is called left exact (or flat) if it preserves finite limits. Dually, a functor between finitely cocomplete categories is called right exact if it preserves finite colimits. A functor is called exact if it is both left and right exact. Specifically, Ab-enriched functors between abelian categories are exact if they preserve exact sequences.
• A functor $F : C \to D$ between finitely cocomplete categories is right exact if and only if for all objects $d \in D$ the comma category $F/d$ is filtered.

• A functor $F : C \to D$ between finitely complete categories is left exact if and only if for all objects $d \in D$ the opposite comma category $(d/F)^{op}$ is filtered.

In other language, this says that a functor between finitely complete categories is left exact if and only if it is (representably) flat. Conversely, one can show that a representably flat functor preserves all finite limits that exist in its domain.

A functor between categories with finite limits preserves finite limits if and only if it preserves terminal objects, binary products, and equalizers. Since these conditions frequently come up individually, it may be worthwhile listing them separately:

• $F: C \to D$ preserves terminal objects if $F(t_C)$ is terminal in $D$ whenever $t_C$ is terminal in $C$;

• $F: C \to D$ preserves binary products if the pair of maps $F(c) \stackrel{F(\pi_1)}{\leftarrow} F(c \times d) \stackrel{F(\pi_2)}{\to} F(d)$ exhibits $F(c \times d)$ as a product of $F(c)$ and $F(d)$, where $\pi_1: c \times d \to c$ and $\pi_2: c \times d \to d$ are the product projections in $C$;

• $F: C \to D$ preserves equalizers if the map $F(i): F(e) \to F(c)$ is the equalizer of $F(f), F(g): F(c) \stackrel{\to}{\to} F(d)$, whenever $i: e \to c$ is the equalizer of $f, g: c \stackrel{\to}{\to} d$ in $C$.

Between categories of modules

Right exact functors between categories of modules are characterized by the Eilenberg-Watts theorem. See there for more details.

On abelian categories / in homological algebra

In the context of homological algebra, the notion of left/right exact functors is considered specifically in abelian categories. In this context the above definition can be equivalently formulated in terms of the behaviour of the functor on short exact sequences. We now discuss this case.
A functor $F : C \to D$ between abelian categories is left exact if and only if it preserves direct sums and kernels. A functor $F : C \to D$ between abelian categories is right exact if and only if it preserves direct sums and cokernels.

In particular, if $0 \to A \to B \to C \to 0$ is an exact sequence in the abelian category $C$, we have that

• if $F$ is left exact then $0 \to F(A) \to F(B) \to F(C)$ is an exact sequence in $D$;

• if $F$ is right exact then $F(A) \to F(B) \to F(C) \to 0$ is an exact sequence in $D$;

• if $F$ is exact then $0 \to F(A) \to F(B) \to F(C) \to 0$ is an exact sequence in $D$.

Also: if $F$ is exact then it preserves chain homology.

We discuss the first case; the second is formally dual, and the third combines the two. Notice that $0 \to A \stackrel{i}{\to} B \stackrel{p}{\to} C \to 0$ being an exact sequence is equivalent to $i$ being a monomorphism and $p$ being an epimorphism, hence to $0 \to A$ being the kernel of $i$, $i$ being the kernel of $p$, and $C \to 0$ being the cokernel of $p$. Since the functor $F$ is assumed to preserve kernels but not necessarily cokernels, it follows that $F(0) \to F(A)$ is the kernel of $F(A) \stackrel{F(i)}{\to} F(B)$ and that $F(i)$ is the kernel of $F(p)$, but $F(p)$ need not be an epimorphism. This means that $0 \to F(A) \to F(B) \to F(C)$ is an exact sequence, as claimed.

An early use of left exact and exact is in:

• A. Grothendieck, 1959, Technique de descente et théorèmes d'existence en géométrie algébrique. II. Le théorème d'existence en théorie formelle des modules, in Séminaire Bourbaki, Vol. 5, Exp. No. 195, 369–390, Soc. Math. France Numdam, Paris.

A general discussion is for instance in section 3.3 of

A detailed discussion of how the property of a functor being exact is related to the property of it preserving homology in generalized situations is in

• Michael Barr, Preserving homology, Theory and Applications of Categories, Vol. 16, 2006, No. 7, pp 132-143.
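A classical concrete instance of the left exactness of $Mod_R(M,-)$ discussed above (this specific example is mine, not taken from the text): apply $Mod_{\mathbb{Z}}(\mathbb{Z}/2, -)$ to the short exact sequence of abelian groups

```latex
0 \to \mathbb{Z} \xrightarrow{\;\cdot 2\;} \mathbb{Z} \to \mathbb{Z}/2 \to 0 .
```

Since $\mathbb{Z}$ is torsion-free, $Mod_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}) = 0$, while $Mod_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}/2) = \mathbb{Z}/2$, so the image sequence is

```latex
0 \to 0 \to 0 \to \mathbb{Z}/2 ,
```

which is exact, but the last map is not an epimorphism: the identity of $\mathbb{Z}/2$ does not lift through $\mathbb{Z} \to \mathbb{Z}/2$. Hence $Mod_{\mathbb{Z}}(\mathbb{Z}/2,-)$ is left exact but not exact.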
(TAC) Discussion of left exactness (or flat functor) in the context of (∞,1)-category theory is in
Mathematics Classroom Assessments
Illinois Learning Standards, Stage F - Mathematics
Classroom assessments aligned to the Illinois Learning Standards

Note: All documents are in PDF format; visit http://www.adobe.com to download the most current version of Adobe Reader.

The assessments are coded according to learning standard and stage. Example: 6A.F is aligned to standard 6A, stage F (sixth grade). Sample student work, when available, will follow the assessment.

- Determine the approximate value of each marking on a number line using fraction and decimal notation.
- Demonstrate the meaning of multiplying fractions using color tiles.
- Create and solve number sentences on a calculator.
- Create a new deck of less than 52 cards showing given percents, explain how the created deck meets the criteria and how the sum of the percents can be more than 100%.
- Estimate distance, weight, temperature, and elapsed time using reasonable units, and determine the level of accuracy by measuring with an appropriate tool.
- Draw a diagram of a rectangular prism-shaped and a triangular prism-shaped fish tank, labeling reasonable dimensions for a fish tank that might be purchased for a home. Calculate the surface area and volume, justifying the dimensions, procedures and calculations.
- Compare the costs of two cellular phone plans and determine the best rate.
- Determine how the weights of 7 objects on a seesaw can vary and still be in balance, and write equations in algebraic terms.
- Determine the circumference and diameter of 6 different objects; calculate the sum, difference, product, and quotient of the circumference and diameter, and from these calculations determine which relationship produces the value of pi.
- Construct a building using snap cubes and an incomplete building plan (only two views) and draw the missing view.
- Find the measure of the interior angles of given regular polygons and create a formula for finding the measure of the interior angles of any polygon.
- Construct a convincing argument that the same amount of fabric is needed for each color of a quilt being made in a specific design and that one template will make both parts of the design.
- Justify the relationship between vertical angles and that the sum of the interior angles of a triangle is 180º.
- Calculate theoretical and empirical probability by dropping coins on a grid and recording data, and compare the theoretical to the empirical probability.
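The interior-angle task above has a compact closed form: the interior angles of an n-sided polygon sum to (n - 2) × 180°, so each angle of a regular n-gon measures (n - 2) × 180 / n. A quick sketch (not part of the assessment materials):

```python
def interior_angle(n):
    """Degrees in each interior angle of a regular n-gon (n >= 3).

    Any n-gon's interior angles sum to (n - 2) * 180 degrees; in a regular
    polygon the n angles are all equal.
    """
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return (n - 2) * 180 / n

for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(name, interior_angle(n))  # 60.0, 90.0, 120.0
```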
[R] distance coefficient for a matrix with negative values

R. Michael Weylandt michael.weylandt at gmail.com
Thu Oct 6 16:16:44 CEST 2011

Did you read any of the comments I made regarding working examples,
meaningful question asking, or replying to the entire list? If you look
at the code, you'll see pco is just a very elementary wrapper for
cmdscale, the author of which is active on this list and could have seen
your question and replied to it with a hundredfold more speed and
knowledge than myself, ... had you replied to the entire list.

Looking further into cmdscale, you can see that it is not designed to
return variable loadings directly (to be honest, I'm not particularly
familiar with PCO and I'm not sure that the method provides such
loadings, but I'm assuming you have reason to at least think they exist)
so you'll have to calculate them directly. Read the documentation of
cmdscale to understand what the various things being returned are. If
another function in the labdsv package seems to calculate them, perhaps
you can pilfer some code from there.

On Tue, Oct 4, 2011 at 1:58 PM, dilshan benaragama <benaragamad at yahoo.com> wrote:
> Hi,
> As you mentioned, I was able to run the pco ignoring the warning message
> about negative values. The next problem I have is how to get the loadings
> for each variable, as it will not give the summary output or loadings as
> we get for pca.
> Thanks.
> From: R. Michael Weylandt <michael.weylandt at gmail.com>
> To: dilshan benaragama <benaragamad at yahoo.com>; r-help
> <r-help at r-project.org>
> Sent: Monday, October 3, 2011 11:05:19 PM
> Subject: Re: [R] distance coefficient for a matrix with negative values
> Comments inline:
> On Mon, Oct 3, 2011 at 11:27 PM, dilshan benaragama
> <benaragamad at yahoo.com> wrote:
>> Yes I think you did not get my problem.
> No, you did not state your problem. I have replied to everything you
> have actually included to this point.
Admittedly, I have failed to
> reply to things you did not say...
>> Actually I want to run PCO with
>> (labdsv). To do that I am trying to get the distance matrix using the
>> following functions with library (vegan).
> This is now the 7th email in this chain. You should mention the
> packages and functions you are using in the FIRST email of the chain.
> This is mentioned in the posting guide which you apparently have still
> not yet read.
>> pca.gower<- vegdist(envt[,2:9],method="gower")
>> pca.eucl<-vegdist(envt[,2:9],method="euclidean")
>> pca.chi<-vegdist(envt[,2:9],method="chi.square")
>> pca.mahal<-vegdist(envt[,2:9],method="mahal")
>> pca.bray<-vegdist(envt,method="bray")
>> However none of the functions work
> They all work for any data I put in. This is perhaps when that minimal
> working example, which you also should have included, is necessary.
> The append at the end of each of the 7 emails in this chain that tells
> you to read the posting guide also asks for this, as did I explicitly.
>> (gives an error saying that it is not
>> working due to negative values)
> No, they each give warnings. Warnings are not errors. They are
> warnings and they say "warning". Perhaps unsurprisingly, errors say
> "error". If you are using an old version of vegan that throws an
> error, you should always update before seeking help. Not surprisingly,
> a certain document suggests this.
>> except euclidean distance for the raw data
>> set as the raw data has negative values for some variables. There is no
>> point in using a euclidean matrix with PCO as we can do the same thing
>> from PCA. So
>> I need to find a way I can run PCO with a different dissimilarity matrix
>> for this data. It will be a great help if you can help me on this
> Actually read the warning message: it warns you that you have given
> negative data to an ecological function and suggests this might be a
> point you look into as this usually suggests a user-end problem.
It > does not fail to work in any sense of the word as evidence by the > output of distances. If negative data is nonsense, you should heed > this warning; if you know its not, disregard it. > More importantly, as I said in my initial response, any distance > metric worth its salt is translation invariant. To wit, > x <- matrix(rnorm(50),5) > d1 = vegdist(x, method="gower") > d2 = vegdist(x + abs(min(x))*3, method="gower") > all.equal(as.numeric(d1), as.numeric(d2)) > TRUE > In fairness, I'll admit this does not seem to work for the bray > distance. I am not an ecologist and I do not know why this would be -- > it does leave me somewhat confused as to what sort of space motivates > the bray metric, but that's a discussion for another time and place -- > but the function still returns a valid dist object for both d1 and d2. >> Thanks, >> From: R. Michael Weylandt <michael.weylandt at gmail.com> >> To: dilshan benaragama <benaragamad at yahoo.com>; r-help >> <r-help at r-project.org> > You will note that I include the r-help list on each email on this > chain while you have not; this is mentioned in the posting guide. >> Sent: Monday, October 3, 2011 10:00:53 PM >> Subject: Re: [R] distance coefficient for amatrix with ngative valus >> You still haven't explained what's wrong with *almost every metric >> there is*, but if you want other distance metrics have you considered >> those in the package you are using, via the function dsvdis(). >> Consider, for example: >> library(labdsv) >> X <- get(data(bryceveg)); >> X[, sample(NROW(X))] <- (-1)*X[, sample(NROW(X))] # Put some negative >> values in all willy nilly like.... >> Y <- pco( dsvdis(X, index="bray/curtis") ) >> print(any(X < 0)) >> If you want more explanation, please provide actual details of what >> you are asking, as requested in my first email. >> Michael Weylandt >> On Mon, Oct 3, 2011 at 9:23 PM, dilshan benaragama >> <benaragamad at yahoo.com> wrote: >>> I am using (labdsv). 
If I can use euclidean distance I can do it with PCA
>>> instead of PCO, so I am trying an alternative to PCA, but I cannot find a
>>> dissimilarity coefficient for that.
>>> From: R. Michael Weylandt <michael.weylandt at gmail.com>
>>> To: dilshan benaragama <benaragamad at yahoo.com>; r-help
>>> <r-help at r-project.org>
>>> Sent: Monday, October 3, 2011 3:27:53 PM
>>> Subject: Re: [R] distance coefficient for a matrix with negative values
>>> One order of the usual coming right up!
>>> 1 course of "Why does XXX not work for you?" a la francaise, where XXX
>>> is, in your case, the Euclidean distance. Specifically, any metric
>>> worth its salt (in a normed space) satisfies dist(a,b) = dist(a+c,b+c)
>>> so why are negative values a problem?...
>>> 2 sides: a "Minimal Working Example" with a light buttery sauce and a
>>> fried "what package/code are you using"
>>> and, for dessert, a Winsemian special of: "read the posting guide!"
>>> Michael Weylandt, who is putting together a menu for a fancy dinner
>>> even as he types
>>> On Mon, Oct 3, 2011 at 12:55 PM, dilshan benaragama
>>> <benaragamad at yahoo.com> wrote:
>>>> Hi,
>>>> I need to run a PCoA (PCO) for a data set which has both positive and
>>>> negative values for variables. I could not find any distance coefficient
>>>> other than euclidean distance running for the data set. Are there any
>>>> other
>>>> coefficients that work with negative values? Also I cannot get the
>>>> summary output (the
>>>> eigenvalues) for PCO as for PCA.
>>>> Thanks.
>>>> Dilshan
>>>> [[alternative HTML version deleted]]
>>>> ______________________________________________
>>>> R-help at r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide
>>>> http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained, reproducible code.
> Would you care to elaborate further as to what the actual problem
> entails, with a minimal working example?
> More generally, might I suggest you learn how these metrics work and
> then apply the most appropriate one rather than groping blindly after
> something solely on the criterion of it being non-Euclidean. If you
> need other metrics, look into the various p-norms, all of which are
> implemented directly in R by way of the dist() function as are a few
> other norms with which I am not immediately familiar.
> Regards,
> Michael Weylandt
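Michael's claim upthread — that any metric worth its salt satisfies dist(a, b) = dist(a+c, b+c) — is easy to check for the p-norm distances he mentions. A generic sketch in plain Python (deliberately not using the R vegan/labdsv functions from the thread):

```python
def minkowski(a, b, p):
    """p-norm (Minkowski) distance between equal-length vectors, p >= 1."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

a = [1.0, -2.0, 3.0]
b = [0.5, 4.0, -1.0]
shift = 7.25  # translate both points by the same constant vector

for p in (1, 2, 3):
    d_before = minkowski(a, b, p)
    d_after = minkowski([x + shift for x in a], [x + shift for x in b], p)
    assert abs(d_before - d_after) < 1e-12  # distance unchanged by translation
print("translation invariance holds for p = 1, 2, 3")
```

This is why simply shifting data to be non-negative cannot change a Euclidean (or any Minkowski) distance matrix — though, as the thread notes for the Bray-Curtis index, ecological dissimilarities of that kind are not translation invariant.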
Polchinski 2.4.1 change of coordinates

I've been working through Polchinski on my own, and I have a really basic question. How would one derive equation 2.4.1? What does it even mean to write the stress tensor with the new indices z and z-bar? I thought this was obvious, but it isn't working out for me.
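I can't confirm exactly what Polchinski's equation (2.4.1) says, but the mechanical content of writing the stress tensor with $z$, $\bar z$ indices is just the ordinary tensor transformation law under the change of coordinates $z = \sigma^1 + i\sigma^2$, $\bar z = \sigma^1 - i\sigma^2$:

```latex
T_{zz} = \frac{\partial \sigma^a}{\partial z}\,\frac{\partial \sigma^b}{\partial z}\, T_{ab}
       = \frac{1}{4}\left( T_{11} - 2 i T_{12} - T_{22} \right),
```

using $\partial\sigma^1/\partial z = \tfrac{1}{2}$, $\partial\sigma^2/\partial z = -\tfrac{i}{2}$, and the symmetry $T_{12} = T_{21}$. Likewise $T_{\bar z \bar z}$ is the complex conjugate expression, and $T_{z\bar z} = \tfrac{1}{4}(T_{11} + T_{22})$, which vanishes for a traceless stress tensor.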
Flatlands, New York, NY New York, NY 10016 GRE, GMAT, SAT, NYS Exams, and Math ...I specialize in tutoring math and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on high school proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged,... Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/Flatlands_New_York_NY_Geometry_tutors.aspx","timestamp":"2014-04-17T22:49:37Z","content_type":null,"content_length":"61512","record_id":"<urn:uuid:dcd15463-eb8b-4e99-9e66-c4edde7bbe18>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
volume of a spherical cap
September 30th 2008, 03:21 PM #1
i need help finding the volume of a spherical cap. the only information i know is that the rim of the cap is 5m and its height is 2m. I have tried integrating using the formula for a sphere but the problem is I don't know the radius of the sphere from which the cap came and I can't figure out where to go from there. please help. thanks

September 30th 2008, 03:53 PM #2 (Senior Member, Feb 2008)
You don't need the radius to be given; you can determine it from the information provided, using the Pythagorean theorem. For the volume, I suggest cylindrical coordinates:
$V=4\int_0^{\frac{\pi}{2}}\int_{\frac{25}{16\pi^2}-1}^{\frac{25}{16\pi^2}+1}\int_0^{\frac{5}{2\pi}}r\;dr\;dz\;d\theta$
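The closed-form cap formulas give the same answer without setting up an integral. A sketch, reading "the rim is 5m" as the rim's circumference (the 5/(2π) bound in the reply above suggests the same reading — this is an assumption, since "rim" could also mean its diameter):

```python
import math

h = 2.0                  # cap height (m)
a = 5.0 / (2 * math.pi)  # rim radius, assuming the 5 m is the rim circumference

# Sphere radius from the right triangle relating rim radius a, height h, R:
# a^2 + (R - h)^2 = R^2  =>  R = (a^2 + h^2) / (2h)
R = (a * a + h * h) / (2 * h)

# Two equivalent standard spherical-cap volume formulas:
v1 = math.pi * h * h * (3 * R - h) / 3
v2 = math.pi * h * (3 * a * a + h * h) / 6
assert abs(v1 - v2) < 1e-9   # v1 == v2 ~ 6.178 m^3 under this reading
```

Note h > R here is fine: a cap may be taller than a hemisphere, up to h = 2R.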
{"url":"http://mathhelpforum.com/calculus/51391-volume-spherical-cap.html","timestamp":"2014-04-18T05:40:42Z","content_type":null,"content_length":"34356","record_id":"<urn:uuid:af1c7e50-9b57-4fa3-97bd-88cc80a2b21f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
FourFours in Haskell
FourFours in Common Lisp
I came across the FourFours problem recently. Stated succinctly, it asks: what are the ways to calculate each of the integers from 1 to 100 with formulas which use the digit four exactly four times? (No digits other than four can be used at all.)
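The search space is small enough for a naive enumeration. A minimal sketch (Python, not the blog's Haskell or Lisp, and restricted to +, -, *, / — the blog's solution may allow concatenation, powers, etc.):

```python
from fractions import Fraction

def values(n):
    # Exact values reachable using the digit 4 exactly n times,
    # combining subexpressions with +, -, *, / only (no "44", sqrt, ...).
    if n == 1:
        return {Fraction(4)}
    out = set()
    for i in range(1, n):
        for a in values(i):
            for b in values(n - i):
                out.update((a + b, a - b, a * b))
                if b != 0:
                    out.add(a / b)
    return out

four = values(4)
# e.g. 7 = (4 + 4) - 4/4 and 1 = (4/4) * (4/4)
assert Fraction(7) in four and Fraction(1) in four
integers_hit = sorted(v for v in four if v.denominator == 1 and 1 <= v <= 100)
```

Using `Fraction` keeps the arithmetic exact, so 4/4 is recognized as the integer 1 rather than a float.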
{"url":"http://www.joachim-breitner.de/blog/archives/210-FourFours-in-Haskell.html","timestamp":"2014-04-17T09:35:42Z","content_type":null,"content_length":"47233","record_id":"<urn:uuid:d5c0c029-1202-48e5-aaf5-07d9bdf0e24f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Equivalence of Classical Klein-Gordon Field Theory to Correspondence-Principle First Quantization of the Spinless Relativistic Free Particle
Authors: Steven Kenneth Kauffmann
It has recently been shown that the classical electric and magnetic fields which satisfy the source-free Maxwell equations can be linearly mapped into the real and imaginary parts of a transverse-vector wave function which in consequence satisfies the time-dependent Schrödinger equation whose Hamiltonian operator is physically appropriate to the free photon. The free-particle Klein-Gordon equation for scalar fields modestly extends the classical wave equation via a mass term. It is physically untenable for complex-valued wave functions, but has a sound nonnegative conserved-energy functional when it is restricted to real-valued classical fields. Canonical Hamiltonization and a further canonical transformation map the real-valued classical Klein-Gordon field and its canonical conjugate into the real and imaginary parts of a scalar wave function (within a constant factor) which in consequence satisfies the time-dependent Schrödinger equation whose Hamiltonian operator has the natural correspondence-principle relativistic square-root form for a free particle, with a mass that matches the Klein-Gordon field theory's mass term. Quantization of the real-valued classical Klein-Gordon field is thus second quantization of this natural correspondence-principle first-quantized relativistic Schrödinger equation. Source-free electromagnetism is treated in a parallel manner, but with the classical scalar Klein-Gordon field replaced by a transverse vector potential that satisfies the classical wave equation. This reproduces the previous first-quantized results that were based on Maxwell's source-free electric and magnetic field equations.
Comments: 8 pages, Also archived as arXiv:1012.5120 [physics.gen-ph].
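For reference, the free-particle Klein-Gordon equation the abstract describes — the classical wave equation "modestly extended via a mass term" — reads, in standard notation (factors of c and ħ depend on unit conventions, and the paper's own conventions may differ):

```latex
\frac{1}{c^{2}}\frac{\partial^{2}\phi}{\partial t^{2}}
\;-\; \nabla^{2}\phi
\;+\; \left(\frac{mc}{\hbar}\right)^{2}\phi \;=\; 0 ,
```

which reduces to the classical wave equation when $m = 0$. For a real-valued field $\phi$, the nonnegative conserved-energy functional the abstract mentions takes the schematic form

```latex
E \;=\; \tfrac{1}{2}\int
\left[\, \frac{1}{c^{2}}\left(\frac{\partial\phi}{\partial t}\right)^{2}
+ \left|\nabla\phi\right|^{2}
+ \left(\frac{mc}{\hbar}\right)^{2}\phi^{2} \right] d^{3}x
```

(up to overall normalization constants).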
Download: PDF
Submission history: [v1] 24 Dec 2010
{"url":"http://vixra.org/abs/1012.0050","timestamp":"2014-04-20T18:31:17Z","content_type":null,"content_length":"8890","record_id":"<urn:uuid:c24d0631-14b2-4f54-a835-ae13571102a2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
Gay-Lussac's law
Gay-Lussac's law is one of two laws named after the French chemist Joseph Louis Gay-Lussac, which relate to the properties of gases and are known by the same name.
Law of combining volumes
Gay-Lussac's law, known as the law of combining volumes, states that:
The ratio between the combining volumes of gases and the product, if gaseous, can be expressed in small whole numbers.
Gay-Lussac discovered this law in 1809. This played a major role in the development of modern gas stoichiometry because in 1811, Avogadro used Gay-Lussac's Law to form Avogadro's hypothesis.
Other law
The other law, discovered in 1802, states that:
The pressure of a fixed amount of gas at fixed volume is directly proportional to its temperature in kelvins.
It is expressed mathematically as:
$\frac{P}{T}=k$
where P is the pressure of the gas, T is the temperature of the gas (measured in kelvins), and k is a constant.
This law holds true because temperature is a measure of the average kinetic energy of a substance; as the kinetic energy of a gas increases, its particles collide with the container walls more rapidly, thereby exerting increased pressure. Simply put, if you increase the temperature you increase the pressure.
For comparing the same substance under two different sets of conditions, the law can be written as:
$\frac{P_1}{T_1}=\frac{P_2}{T_2} \qquad \mathrm{or} \qquad {P_1}{T_2}={P_2}{T_1}$
Charles's Law was also known as the Law of Charles and Gay-Lussac, because Gay-Lussac published the law in 1802 using much of Charles' unpublished data from 1787. However, in recent years the term has fallen out of favor since Gay-Lussac has the second but related law presented here attributed to him.
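The two-state form P1/T1 = P2/T2 turns directly into a one-line calculation. A small sketch with made-up numbers (not from the article):

```python
def gay_lussac_p2(p1, t1, t2):
    # P1/T1 = P2/T2 at fixed volume and amount  =>  P2 = P1 * T2 / T1.
    # Temperatures must be absolute (kelvins), never degrees Celsius.
    if t1 <= 0 or t2 <= 0:
        raise ValueError("temperatures must be positive kelvins")
    return p1 * t2 / t1

# Hypothetical example: gas at 100 kPa and 300 K heated to 450 K.
p2 = gay_lussac_p2(100.0, 300.0, 450.0)
assert abs(p2 - 150.0) < 1e-9  # pressure rises in proportion to T
```

The guard against non-positive temperatures encodes the most common mistake with this law: plugging in Celsius values.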
This related form of Gay-Lussac's Law, Charles's Law, and Boyle's law form the combined gas law. The three gas laws in combination with Avogadro's Law can be generalized by the ideal gas law.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Gay-Lussac's_law". A list of authors is available in Wikipedia.
{"url":"http://www.chemeurope.com/en/encyclopedia/Gay-Lussac's_law.html","timestamp":"2014-04-18T18:24:42Z","content_type":null,"content_length":"50508","record_id":"<urn:uuid:59fd99fb-df64-477f-8f1d-d2fe7d08624e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
, Spring 2009 Math 330: Introduction to Higher Math (aka Number Systems, or simply Proofs) Section 1, Spring 2009 Marco Varisco, marco@math.binghamton.edu [how to email a professor], math.binghamton.edu/marco/proofs/ Office: LN-2208A, Office Hours: MW 3:00–4:30 or by appointment. MWF 10:50–11:50 in SW-325 and R 10:05–11:05 in LN-G335. N.B.: The Thursday meeting is not a discussion session. You are expected to attend all class meetings. The maximum number of absences permitted to receive credit for this course is 5 (five). Excessive tardiness may count as absence. [University Bulletin] Math 222: Calculus II with a grade of C or better. [University Bulletin] “Careful discussion of the real numbers, the rational numbers and the integers, including a thorough study of induction and recursion. Countable and uncountable sets. The methodology of mathematics: basic logic, the use of quantifiers, equivalence relations, sets and functions. Methods of proof in mathematics. Training in how to discover and write proofs.” [University Bulletin] The Art of Proof: A Concrete Gateway to Mathematics, Matthias Beck and Ross Geoghegan, 2009. This textbook is published by Binghamton University and is only available from local bookstores. Grading & Examinations When calculating your course grade there is one more rule: if your homework score is an F then your course grade is an F; in this case I will ignore your midterm, project, and final exam scores. Of course, you are expected to obey the Student Academic Honesty Code. # Due on Problems 1 R 1/29 Prove propositions 1.7 & 1.10(ii). 2 M 2/02 Prove propositions 1.14, 1.15, & 1.20. 3 W 2/04 Prove propositions 1.16 & 1.19, and read chapter 2. 4 R 2/05 Reread chapter 2, and do projects 2.1 & 2.2. 5 M 2/09 Retake (but do not hand in) quiz #1, and do this. 6 R 2/12 Prove proposition 3.10(ii). An integer n is called even if it is divisible by 2, i.e., if there exists an integer k such that n=2k. Question: Is 0 even? Why or why not? 
Prove the following statements: 7 F 2/13 [A] If m and n are even then m+n and mn are even. [B] For any natural number n, n^2+n is even. [C] For any natural number n, n is even or n+1 is even. [D] For any integer n, n is even or n+1 is even. Prove propositions 3.17, 3.19, 3.20, & 3.23. An integer n is called odd if it is not even. Question: Is 1 odd? Why or why not? 8 W 2/18 Prove the following statements: [E] For any integer n, either n is even or n+1 is even, but not both. [F] If n is an odd integer, then there exists an even integer m such that n=m-1. (You may use statements [A], [B], [C], & [D] from the previous assignment.) 9 W 2/25 Retake (but do not hand in) quiz #2, and prove that for any natural number k, Find a formula for 10 W 3/04 for any natural number k, and prove that your formula is correct. 11 R 3/05 Prove that for all natural numbers x and y, there exists a natural number n such that nx≥y. 12 W 3/11 Prove theorem 6.8. Prove that any non-empty, bounded above set of integers has a largest element. 13 F 3/27 Prime time! 14 M 3/30 Retake (but do not hand in) quiz #3, and prove propositions 10.6 & 10.12 (you may use proposition 10.11). [1] Prove that the sequence (n-1)/n converges to 1. 15 M 4/20 [2] Write down explicitly what it means for a sequence to be divergent. [3] Prove that any convergent sequence of real numbers is bounded above. [4] Prove that any non-decreasing and bounded above sequence of real numbers is convergent. 16 W 4/22 Retake (but do not hand in) quiz #5, and prove that the sequence x[n]=(-1)^n diverges. 17 M 4/27 Prove that the sequence defined recursively by x[1]=2 and x[n+1]=(x[n]+6)/2 is convergent. This syllabus is subject to change. All official announcements and assignments are given in class, and this web page may not be up to date. Marco Varisco - May 12, 2009.
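Assignment 17 above asks for a convergence proof for the recursion x1 = 2, x_{n+1} = (x_n + 6)/2. A quick numeric illustration (not a proof — the proof would use monotone convergence) showing the two facts the proof rests on:

```python
def iterate(x0, steps):
    # Assignment 17's recursion: x_{n+1} = (x_n + 6) / 2.
    # The only fixed point solves x = (x + 6)/2, i.e. x = 6.
    xs = [x0]
    for _ in range(steps):
        xs.append((xs[-1] + 6) / 2)
    return xs

xs = iterate(2.0, 30)
# The iterates are strictly increasing and bounded above by 6 --
# exactly what a monotone-convergence argument would establish.
assert all(a < b < 6 for a, b in zip(xs, xs[1:]))
assert abs(xs[-1] - 6) < 1e-8   # the limit is the fixed point 6
```

Each step halves the distance to 6, so thirty iterations already land within 10^-8 of the limit.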
{"url":"http://www.albany.edu/~mv312143/bu/proofs/","timestamp":"2014-04-20T09:15:14Z","content_type":null,"content_length":"9675","record_id":"<urn:uuid:a455e8d6-d972-47d0-ad6a-87c90f97ee30>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the possible number of positive, negative, and complex zeros of f(x) = 3x^4 - 5x^3 - x^2 - 8x + 4 ?
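One standard way to answer this is Descartes' rule of signs, which no reply in the original thread spells out. A sketch:

```python
def sign_changes(coeffs):
    # Descartes' rule of signs: count sign alternations among the
    # nonzero coefficients, listed from highest degree to the constant.
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

f = [3, -5, -1, -8, 4]  # 3x^4 - 5x^3 - x^2 - 8x + 4
# Coefficients of f(-x): flip the sign of each odd-degree term.
f_neg = [c * (-1) ** (len(f) - 1 - i) for i, c in enumerate(f)]

pos = sign_changes(f)      # positive real zeros: this many, or fewer by 2
neg = sign_changes(f_neg)  # same rule applied to f(-x) for negative zeros
assert (pos, neg) == (2, 2)
# Degree 4, non-real zeros come in conjugate pairs, so the possibilities
# are (pos, neg, complex) = (2,2,0), (2,0,2), (0,2,2), or (0,0,4).
```

So there are 2 or 0 positive zeros, 2 or 0 negative zeros, and correspondingly 0, 2, or 4 non-real complex zeros.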
{"url":"http://openstudy.com/updates/5122640fe4b06821731d590e","timestamp":"2014-04-18T08:21:08Z","content_type":null,"content_length":"42179","record_id":"<urn:uuid:1b985016-56f4-46a9-8be8-ee955a077854>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the amount of x/2r
02-25-2012, 02:23 PM #1 (abtia, New Member)
Hello everyone, please solve the following question: in the following image EC=FC=DF and ABCD is the square, and we have AB=AD=BC=DC=x (x is a parameter), so please say what the amount of x/2r is.
Last edited by abtia; 02-25-2012 at 03:44 PM.
Reply: Please share your work with us, indicating exactly where you are stuck - so that we may know where to begin to help you.
abtia: Actually I don't know how to start and solve it. So in what class should I start it? (I am new on this website.)
Reply: In addition, what is [TEX]r[/TEX] supposed to be?
abtia: I think you did not read my explanation carefully, because I explained that AB=BC=DC=AD=x. Furthermore, there are many people on this website who solve problems, even those which are too basic, without directing them to the classroom; so if sending me to the classroom is the only solution you know, please let others who are more generous solve my problem.
{"url":"http://www.freemathhelp.com/forum/threads/74640-what-is-the-amount-of-x-2r","timestamp":"2014-04-16T13:03:28Z","content_type":null,"content_length":"73174","record_id":"<urn:uuid:9abc3748-c1e2-4ba9-b79d-7e068d2ab1c7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Running ADMB-executables In a DOS window Under linux Command line option: -ilmn 5. Results: Computation times Model description Stochastic volatility models are used in mathematical finance to describe the evolution of asset returns, which typically exhibit changing variances over time. As an illustration we use a dataset previously analyzed by Harvey et al. (1994), and later by several other authors. The data consist of a time series of daily pound/dollar exchange rates {z[t]} from the period 01/10/81 to 28/6/85. The series of interest are the daily mean-corrected returns {y[t]}, given by the transformation y[t] = log(z[t])-log(z[t-1]) - average[logz[i]-logz[i-1]]. The stochastic volatility model allows the variance of y[t] to vary smoothly with time. This is achieved by assuming that y[t] ~ N(0,s[t]), where s[t] = exp{-0.5(m[x]+x[t])}. Here, the smoothly varying component {x[t]} is assumed to follow an autoregression x[t] = bx[t-1] + e[t], where e[t] ~ N(0,s^2). Further details about the model can be found here: sdv.pdf.
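A minimal forward simulation of the model as written can make the notation concrete (Python rather than ADMB; the parameter values below are made up, and s[t] is treated as the *variance* of y[t], which is an assumption about the text's notation):

```python
import math
import random

def simulate_sv(n, b=0.95, sigma=0.2, mu_x=-1.0, seed=1):
    # Latent log-volatility:  x_t = b * x_{t-1} + e_t,  e_t ~ N(0, sigma^2)
    # Observed return:        y_t ~ N(0, s_t),  s_t = exp(-0.5 * (mu_x + x_t))
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x = b * x + rng.gauss(0.0, sigma)
        s = math.exp(-0.5 * (mu_x + x))          # smoothly varying variance
        ys.append(rng.gauss(0.0, math.sqrt(s)))  # gauss takes a std dev
    return ys

returns = simulate_sv(1000)
assert len(returns) == 1000
```

Because x follows an AR(1) process with b close to 1, the simulated series shows the slowly drifting variance ("volatility clustering") the model is designed to capture.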
{"url":"http://otter-rsch.com/admbre/examples/sdv/sdv.html","timestamp":"2014-04-20T22:10:44Z","content_type":null,"content_length":"5392","record_id":"<urn:uuid:c7d08b7b-38ff-46e1-9206-75ee768c4a02>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
A Theory of Program Size Formally Identical to Information Theory
Results 1 - 10 of 249

- IBM Journal of Research and Development, 1977. Cited by 320 (19 self). Abstract: This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.

- IEEE Transactions on Information Theory, 1992. Cited by 158 (13 self). Abstract: The problem of predicting the next outcome of an individual binary sequence using finite memory is considered. The finite-state predictability of an infinite sequence is defined as the minimum fraction of prediction errors that can be made by any finite-state (FS) predictor. It is proved that this FS predictability can be attained by universal sequential prediction schemes. Specifically, an efficient prediction procedure based on the incremental parsing procedure of the Lempel-Ziv data compression algorithm is shown to achieve asymptotically the FS predictability. Finally, some relations between compressibility and predictability are pointed out, and the predictability is proposed as an additional measure of the complexity of a sequence. Index Terms: predictability, compressibility, complexity, finite-state machines, Lempel-Ziv algorithm.

- Information and Computation, 2003. Cited by 93 (10 self). Abstract: A constructive version of Hausdorff dimension is developed using constructive supergales, which are betting strategies that generalize the constructive supermartingales used in the theory of individual random sequences. This constructive dimension is used to assign every individual (infinite, binary) sequence S a dimension, which is a real number dim(S) in the interval [0, 1].

- Sequences. Cited by 79 (21 self). Abstract: The set A is low for Martin-Löf random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely at most K(n) + O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kučera [2], showing that each low for Martin-Löf random set is Δ⁰₂. Our class induces a natural intermediate Σ⁰₃ ideal in the r.e. Turing degrees (which generates the whole class under downward closure).

- SIAM Journal on Computing, 2004. Cited by 79 (29 self). Abstract: The two most important notions of fractal dimension are Hausdorff dimension, developed by Hausdorff (1919), and packing dimension, developed independently by Tricot (1982) and Sullivan (1984). Both dimensions have the mathematical advantage of being defined from measures, and both have yielded extensive applications in fractal geometry and dynamical systems. Lutz (2000) has recently proven a simple characterization of Hausdorff dimension in terms of gales, which are betting strategies that generalize martingales. Imposing various computability and complexity constraints on these gales produces a spectrum of effective versions of Hausdorff dimension, including constructive, computable, polynomial-space, polynomial-time, and finite-state dimensions. Work by several investigators has already used these effective dimensions to shed significant new light on a variety of topics in theoretical computer science. In this paper we show that packing dimension can also be characterized in terms of gales. Moreover, even though the usual definition of packing dimension is considerably more complex than that of Hausdorff dimension, our gale characterization of packing dimension is an exact dual.

- IEEE Transactions on Information Theory, 1998. Cited by 67 (7 self). Abstract: The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random, relative to every contemplated hypothesis and also these hypotheses are random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets then application of the ideal principle turns into Kolmogorov's mi...

- 2002. Cited by 62 (20 self). Abstract: We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal nonincremental universal search to build an incremental universal learner that is able to improve itself through experience.

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002. Cited by 59 (6 self). Abstract: We studied a number of measures that characterize the difficulty of a classification problem, focusing on the geometrical complexity of the class boundary. We compared a set of real-world problems to random labelings of points and found that real problems contain structures in this measurement space that are significantly different from the random sets. Distributions of problems in this space show that there exist at least two independent factors affecting a problem's difficulty. We suggest using this space to describe a classifier's domain of competence. This can guide static and dynamic selection of classifiers for specific problems as well as subproblems formed by confinement, projection, and transformations of the feature vectors. Index Terms: classification, clustering, complexity, linear separability, mixture identifiability.

- 1976. Cited by 59 (4 self). Abstract: Loveland and Meyer have studied necessary and sufficient conditions for an infinite binary string x to be recursive in terms of the program-size complexity relative to n of its n-bit prefixes x_n. Meyer has shown that x is recursive iff there is a c such that K(x_n/n) ≤ c for all n, and Loveland has shown that this is false if one merely stipulates that K(x_n/n) ≤ c for infinitely many n. We strengthen Meyer's theorem. From the fact that there are few minimal-size programs for calculating a given result, we obtain a necessary and sufficient condition for x to be recursive in terms of the absolute program-size complexity of its prefixes: x is recursive iff there is a c such that K(x_n) ≤ K(n) + c for all n. Again Loveland's method shows that this is no longer a sufficient condition for x to be recursive if one merely stipulates that K(x_n) ≤ K(n) + c for infinitely many n.

- J. Symbolic Logic. Contents fragment: 2. Sets, measure, and martingales; 2.1. Sets and measure; 2.2. Martingales ...
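The incremental (Lempel-Ziv) parsing referenced in the prediction abstract above is simple to sketch: scan left to right, cutting off the shortest prefix not yet seen as a phrase. The resulting phrase count is a crude proxy for a string's compressibility/complexity (this LZ78-style sketch is illustrative, not the papers' exact construction):

```python
def lz78_phrases(s):
    # Incremental parsing: repeatedly emit the shortest prefix of the
    # remaining input that has not occurred as a phrase before.
    seen, phrases, cur = set(), [], ""
    for ch in s:
        cur += ch
        if cur not in seen:
            seen.add(cur)
            phrases.append(cur)
            cur = ""
    if cur:
        phrases.append(cur)  # trailing (possibly previously seen) phrase
    return phrases

assert lz78_phrases("abab") == ["a", "b", "ab"]
# A highly regular string yields few, rapidly growing phrases:
assert len(lz78_phrases("a" * 32)) == 8
```

Fewer phrases for a given length indicate a more regular (more compressible, more predictable) sequence, which is the bridge between compressibility and predictability the 1992 abstract draws.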
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.52.7440","timestamp":"2014-04-20T09:49:47Z","content_type":null,"content_length":"37085","record_id":"<urn:uuid:e1f8115e-eef4-4a17-a5dd-aed1e4bea746>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Determine the solution y(x) of y'' + 2y' + 2y = 0 satisfying y(1) = 2 and y'(1) = 1. What technique would I use?

Best Response 1:
That's a nice constant-coefficient, homogeneous ordinary differential equation. We can get its characteristic polynomial: \[r^2+2r+2=0\] Then solve for the polynomial's roots (r1 and r2). The fundamental set of solutions will then be of the form \[y_1(t)= e^{r_1 t}, \quad y_2(t) = e^{r_2 t}\] If we get a single repeated root r1, the fundamental solution set will be \[y_1(t)= e^{r_1 t}, \quad y_2(t) = t\,e^{r_1 t}\] Then we write the general solution as \[y(t)=c_1 y_1(t) + c_2 y_2(t)\] and finally use the initial conditions to solve for c1 and c2.

Best Response 2:
\[r=\frac{-2\pm \sqrt{2^{2}-4\cdot 1\cdot 2}}{2\cdot 1}=-1 \pm i\] The solution is \[y=e^{-x}\left[ c_1 \cos x+c_2 \sin x \right]\] Find c1 and c2 from the initial conditions.
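The constants can be pinned down numerically from the closed form given in the replies, and the result checked against the ODE by finite differences. A sketch:

```python
import math

# y = e^{-x} (c1 cos x + c2 sin x), with c1, c2 fixed by
#   y(1)  = e^{-1}(c1 cos 1 + c2 sin 1)               = 2
#   y'(1) = e^{-1}[(c2 - c1) cos 1 - (c1 + c2) sin 1] = 1
a11, a12 = math.exp(-1) * math.cos(1), math.exp(-1) * math.sin(1)
a21 = math.exp(-1) * (-math.cos(1) - math.sin(1))
a22 = math.exp(-1) * (math.cos(1) - math.sin(1))
det = a11 * a22 - a12 * a21
c1 = (2 * a22 - 1 * a12) / det          # Cramer's rule on the 2x2 system
c2 = (a11 * 1 - a21 * 2) / det

def y(x):
    return math.exp(-x) * (c1 * math.cos(x) + c2 * math.sin(x))

assert abs(y(1) - 2) < 1e-9
# Finite-difference check that y'' + 2y' + 2y ~ 0 at a few points:
h = 1e-4
for x in (0.0, 1.0, 2.5):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    assert abs(d2 + 2 * d1 + 2 * y(x)) < 1e-5
```

This confirms the recipe end to end: roots -1 ± i, general solution e^{-x}(c1 cos x + c2 sin x), constants from the initial data.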
{"url":"http://openstudy.com/updates/5251ccd5e4b02a152c3c101c","timestamp":"2014-04-18T20:56:07Z","content_type":null,"content_length":"30640","record_id":"<urn:uuid:fd9cf626-4804-4c2d-ba58-da46ad173e36>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Aaron Greicius Fall 2013. This fall I am teaching Math 131-Applied Calculus I, and Math 162-Calculus II. Websites for these courses are hosted at sakai.luc.edu. • Office: 516 BVM Hall • Office hours: Tu: 10-12:30 and 2-3 Elementary number theory videos. In the Fall 2012 semester, Professor Steve Doty and I collaborated on a “flipped” number theory course. Lectures took the form of videos that the students could watch at their leisure. Class time was spent instead with students presenting proofs or working problems in groups. Steve and I generated a lot of course material during this time, most of which can be found on the Math 201-Elementary number theory page. Previous courses. Spring 2013. Classes taught: Math 131-Applied Calculus I, Math 132-Applied Calculus II. Fall 2012. Classes taught: Math 132-Applied Calculus II, Math 201-Number Theory. Spring 2012. Classes taught: Math 132-Applied Calculus II, Math 201-Number Theory. Fall 2011. Classes taught: Math 100-Intermediate Algebra, Math 131-Applied Calculus, and Stat 103-Fundamentals of Statistics.
{"url":"http://greicius.wordpress.com/","timestamp":"2014-04-20T03:18:28Z","content_type":null,"content_length":"17275","record_id":"<urn:uuid:5021b1a6-f171-4f91-a805-0574eed8a57c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - announcement: Maxima 5.28
Date: Aug 27, 2012 8:24 PM
Author: Robert Dodier
Subject: announcement: Maxima 5.28

Please distribute this message as you see fit.

Announcing Maxima 5.28

Maxima is a GPL'd branch of Macsyma, the venerable symbolic computation system. Maxima 5.28 is a bug fix and feature enhancement release. The current version is 5.28.0.

Maxima has functions to work with polynomials, matrices, finite sets, integrals, derivatives and differential equations, linear algebra, plotting, floating point and arbitrary-precision arithmetic, etc. Maxima can run on MS Windows and various flavors of Unix, including MacOS X. There is a precompiled executable installer for Windows, RPM's for Linux, and source code tar.gz. Maxima is implemented in Common Lisp; several Lisps can compile and run Maxima, including CMUCL, SBCL, Clisp, GCL, and ECL.

The Maxima project welcomes new participants. You can contribute in several ways: reporting bugs, fixing bugs, writing new add-on packages, revising core functions, user interfaces, or documentation. Why not see what's happening on the mailing list and consider how you might contribute to the project.

Thanks to everyone who contributed to the 5.28 release.

Robert Dodier
Maxima developer and 5.28 release manager

Project page:
Bug reports. You must create a Sourceforge login before filing a bug report (anonymous reports are no longer allowed).
Mailing list. Please sign up before posting a message.
Download page:
Project home page:
Change log:

Maxima 5.28 change log

Backwards-incompatible changes:
* package stats: removed function simple_linear_regression (superseded by linear_regression)

New items in core:
* new function generalized_lambert_w
* new functions zn_mult_table, zn_power_table
* new functions for Galois fields: gf_set, gf_char, gf_prim, etc.

New items in share:
* package descriptive: new function principal_components
* package descriptive: new histogram style 'density'
* package stats: new function linear_regression

Other changes:
* revise system for building Maxima on MS Windows
* function gamma_incomplete: improve accuracy for complex bigfloats
* function expintegral_e: improved calculation for large imaginary

Bug fixes:
3539699: limit of atan2
3538167: Wrong result for definite integral
3534858: wrong answer: limit
3533723: abs_integrate causes stack overflow
3530767: integrate changes k[0] --> k(0)
3530272: nthroot, bad error msg
3529992: Shi (sinh integral) wrong branch, integrate inconsistent
3529144: Error integrating exp(-x)*sinh(sqrt(x)) with domain: complex
3526359: gamma_incomplete(1/5,-32.0) not accurate
3526111: float erf (%i) not working
3522750: assume & integrate
3521596: atan2(sqrt(1-u)*(u-1),1); /* hangup */
3517785: Wrong sign in exponential integral
3517034: polarform error on simple case
mailing list 2012-04-09: Loading gentran
mailing list 2012-03-27: bug in net present value
unnumbered: inequality facts being forgotten
unnumbered: limit(erfc(z), z, inf)
unnumbered: bigfloats parsed incorrectly when ibase is not 10
Some Spaces of Double Sequences Obtained through Invariant Mean and Related Concepts
Abstract and Applied Analysis, Volume 2013 (2013), Article ID 507950, 11 pages
Research Article
Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
Received 29 November 2012; Accepted 11 March 2013
Academic Editor: Elena Braverman
Copyright © 2013 S. A. Mohiuddine and Abdullah Alotaibi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We introduce some double sequence spaces involving the notions of invariant mean (or -mean) and -convergence for double sequences; the idea of -convergence for double sequences was introduced by Çakan et al. in 2006, using the notion of invariant mean. We determine here some inclusion relations and topological results for these new double sequence spaces.
1. Preliminaries, Background, and Notation
In 1900, Pringsheim [1] presented the following notion of convergence for double sequences. A double sequence is said to converge to the limit in Pringsheim’s sense (shortly, -convergent to ) if for every there exists an integer such that whenever . In this case, is called the -limit of . A double sequence of real or complex numbers is said to be bounded if . The space of all bounded double sequences is denoted by . If and is -convergent to , then is said to be boundedly -convergent to (shortly, -convergent to ). In this case, is called the -limit of . The assumption of boundedness was made because a -convergent double sequence is not necessarily bounded.
In general, for any notion of convergence , the space of all -convergent double sequences will be denoted by , the space of all -convergent to double sequences by , and the limit of a -convergent double sequence by -, where . Let denote the vector space of all double sequences with the vector space operations defined coordinatewise. Vector subspaces of are called double sequence spaces. In addition to the above-mentioned double sequence spaces, we consider the double sequence space of all absolutely summable double sequences. All considered double sequence spaces are supposed to contain , where . We denote the pointwise sums ,, and by ,, and , respectively. Let be a one-to-one mapping from the set of natural numbers into itself. A continuous linear functional on the space of bounded single sequences is said to be an invariant mean or a -mean if and only if (i) when the sequence has for all , (ii) , where , and (iii) for all . Throughout this paper we consider the mapping which has no finite orbits, that is, for all integers and , where denotes the th iterate of at . Note that a -mean extends the limit functional on the space of convergent single sequences in the sense that for all (see [2]). Consequently, is the set of bounded sequences all of whose -means are equal. We say that a sequence is -convergent if and only if . Using this concept, Schaefer [3] defined and characterized -conservative, -regular, and -coercive matrices for single sequences. If is the translation mapping, then is reduced to the set of almost convergent sequences [4]. Recently, Mohiuddine [5] has obtained an application of almost convergence for single sequences in approximation theorems and proved some related results. In 2006, Çakan et al. [6] presented the following definition of -convergence for double sequences, which was further studied by Mursaleen and Mohiuddine [7–9]. A double sequence of real numbers is said to be -convergent to a number if and only if , where , while here the limit means -limit.
Let us denote by the space of -convergent double sequences . For , the set is reduced to the set of almost convergent double sequences [10]. Note that . Maddox [11] has defined the concepts of strong almost convergence and -convergence for single sequences and established inclusion relation between strong almost convergence, -convergence, and almost convergence for single sequence. Başarir [12] extended the notion of strong almost convergence from single sequences to double sequences and proved some interesting results involving this idea and the notion of almost convergence for double sequences. In the recent past, Mursaleen and Mohiuddine [13] presented the notions of absolute and strong -convergence for double sequences. A bounded double sequence is said to be strongly -convergent if there exists a number such that while here the limit means -limit. In this case, we write -. Let us denote by the set of all strongly -convergent sequences . If is translation then is reduced to the set of strong almost convergence double sequences due to Başarir [12]. For more details of spaces for single and double sequences and related concepts, we refer to [14–31] and references therein. In this paper, we define and study some new spaces involving the idea of invariant mean and -convergence for double sequences and establish a relation between these spaces. Further, we extend above spaces to more general spaces by considering the double sequences such that for all , and and prove some topological results. 2. The Double Sequence Spaces We construct the following spaces involving the idea of invariant mean and -convergence for double sequences: where with for all ; Remark 1. If -, that is, as , uniformly in ,; then We remark that by using Abel’s transformation for single series We get Abel’s transformation for double series where In the recent past, Altay and Başar [32] also presented another form of Abel’s transformation for double series. 3. 
Inclusion Relations In the following theorem, we establish a relationship between spaces defined in Section 2. Before proceeding further, first we prove the following lemmas which will be used to prove our inclusion Lemma 2. Consider that - if and only if (L1)-; (L2)as (uniformly in , ); (L3)as (uniformly in , ); (L4)as (uniformly in , ), Proof. Suppose that -. Thus, we have -, that is, (L1) holds. We see that conditions (L2) and (L3) follows from the Remark 1. Write By our assumption, that is, -, as uniformly in ,. The condition (L1) implies that tends to zero as tending to uniformly in ; therefore as uniformly in and as uniformly in by the conclusion (L2) and (L3), respectively. Thus, (14) tends to zero as uniformly in , that is, (L4) holds. Conversely, let (L1)–(L4) hold. Then, () uniformly in . Lemma 3. One has Proof. Since First, we solve the expression in the first bracket Now, the expression in the second bracket Substituting (18) and (19) in (17), we get We know that From (21), we have Thus, (20) becomes Also (22) can be written as Similarly, we can write Using (24) and (25) in (23), we get This implies that Theorem 4. One has the following inclusions and the limit is preserved in each case:(i), (ii) if the conditions (L2) and (L3) of Lemma 2 hold, (iii). Proof. (i) Let with -, say. Then, This implies that Also, we have Hence, and (ii) We have to show that . If , then we have as , uniformly in , ; and that is, -. In order to prove that , it is enough to show that condition (L4) of Lemma 2 holds. Now, Replacing and by and , respectively, we have By Lemma 3, we have So that we have By using Abel’s transformation for double series in the right hand side of above equation, we have 0as, uniformly in (by (32)). Hence, by Lemma 2, . (iii) Let , and we have to show that where is an absolute constant. Since , there exist integers such that Hence, it is left to show that for fixed From (40), we have for every fixed , and for all ,. 
Since Accordingly, This implies that Using (42) and (45), we have for every fixed , and for all ,, where is a constant depending upon . Now, for any given infinite double series denoted as “”, let us write and be monotonically increasing. For simplicity in notation, we denote Again from the definition of , it is easy to obtain for all and with ,, , . Further, we calculate Thus, we have Hence, it follows from (46) that for each fixed ,, Hence, it follows from (52) that where is independent of . By (49), we have Also from (43) and (54), we have 4. Topological Results Here, we extend the spaces ,, to more general spaces, respectively, denoted by ,,, where is a double sequence of positive real numbers for all , and . First, we recall the notion of paranorm as follows. A paranorm is a function defined on a linear space such that for all (P1) if (P2)(P3)(P4)If is a sequence of scalars with and with in the sense that , then , in the sense that . A paranorm for which implies is called a total paranorm on , and the pair is called a total paranormed space. Note that each seminorm (norm) on is a paranorm (total) but converse needs not be true. A paranormed space is a topological linear space with the topology given by the paranorm . Now, we define the following spaces: We remark that if is a constant sequence, then we write in place of . Theorem 5. Let be a bounded double sequence of strictly positive real numbers. Then, is a complete linear topological space paranormed by where . If then is a Banach space and is a -normed space if . Proof. Let and be two double sequences. Then, where and . Since therefore where and . From (60), we have that if , then . Thus, is a linear space. Without loss of generality, we can take Clearly, ,. From (58) and Minkowski’s inequality, we have Hence, Since is bounded away from zero, there exists a constant such that for all ,. Now for , and so that is, the scaler multiplication is continuous. Hence, is a paranorm on . 
Let be a Cauchy sequence in , that is, Since it follows that In particular, Hence, is a Cauchy sequence in . Since is complete, there exists such that coordinatewise as . It follows from (66) that given , there exists such that for . Now taking and in (69), we have for . This proves that and . Hence is complete. If is a constant then it is easy to prove the rest of the theorem. Theorem 6. One has the following: is a complete paranormed space, paranormed by which is defined on . If then is a Banach space and if , is -normed space. Proof. In order for the paranorm in (70) be defined, we require that which is proved in the next theorem (i.e., Theorem 7). Using the standard technique as in the previous theorem, we can prove that is subadditive. Now, we have to prove the continuity of scalar multiplication. Suppose that . Then, for there exist integers such that If , then by (72) we have Since for fixed as , it follows from (73) and (74) that for fixed , as . Also, since implies that It follows that for fixed , as . This proves the continuity of scalar multiplication. Hence, is a paranorm. The proof of the completeness of can be achieved by using the same technique as in Theorem 5. Theorem 7. Suppose that is bounded double sequence of strictly positive real numbers. Then,(i)
Surfaces of constant curvature and geometric interpretations of the Klein-Gordon, Sine-Gordon and Sinh-Gordon equations
Boris A. Rosenfeld and Nadezhda E. Maryukova
Abstract: It is well known that the Sine-Gordon equation (SGE) $u_{xx}-u_{yy} = \sin u$ admits a geometric interpretation as the differential equation which determines surfaces of constant negative curvature in the Euclidean space $R^3$. This result can be generalized to the elliptic space $S^3$ and the hyperbolic space $H^3$. These results are analogous to the results of Chern that SGE also admits a geometric interpretation as the differential equation which determines spacelike surfaces of constant negative curvature in pseudo-Riemannian spaces $V_1^3$ of constant curvature, that is in the pseudo-Euclidean space $R^3_1$, in the pseudoelliptic space $S^3_1$, and in the pseudohyperbolic space $H^3_1$, and that the Sinh-Gordon equation (SHGE) $u_{xx}-u_{yy} = \sinh u$ admits geometric interpretations as the differential equation which determines timelike surfaces of constant positive curvature in the same spaces.
In this paper it is proved also that the Klein-Gordon equation (KGE) $u_{xx}-u_{yy} = m^2u$ admits analogous geometric interpretations in the {\it Galilean space} $\Gamma^3$, and in the {\it pseudo-Galilean space} $\Gamma^3_1$, that is, in the affine space $ E^3 $ whose plane at infinity is endowed with the geometry of the Euclidean plane $ R^2 $ and of the pseudo-Euclidean plane $R^2_1$, respectively, in the {\it quasielliptic space} $S^{1,3}$, in the {\it quasihyperbolic space} $H^{1,3}$, in the {\it quasipseudoelliptic space} $S_{01}^{1,3}$, and in the {\it quasipseudohyperbolic space} $H^{1,3}_{01}$, that is, in the projective space $P^3$ whose collineations preserve two conjugate imaginary planes and two conjugate imaginary points on the line of their intersection, two conjugate imaginary planes and two real points on the line of their intersection, two real planes and two conjugate imaginary points on the line of their intersection, and two conjugate imaginary planes and two real points on the line of their intersection, Classification (MSC2000): 53C35 Full text of the article: Electronic fulltext finalized on: 1 Nov 2001. This page was last modified: 6 Feb 2002. © 2001 Mathematical Institute of the Serbian Academy of Science and Arts © 2001--2002 ELibM for the EMIS Electronic Edition
GEN05, GEN07
GEN Routines: Line/Exponential Segment Generators

f # time size 5 a n1 b n2 c . . .
f # time size 7 a n1 b n2 c . . .

These subroutines are used to construct functions from segments of exponential curves (GEN05) or straight lines (GEN07).

size – number of points in the table. Must be a power of 2 or power-of-2 plus 1 (see f statement).

a, b, c, etc. – ordinate values, in odd-numbered pfields p5, p7, p9, . . . For GEN05 these must be nonzero and must be alike in sign. No such restrictions exist for GEN07.

n1, n2, etc. – length of segment (no. of storage locations), in even-numbered pfields. Cannot be negative, but a zero is meaningful for specifying discontinuous waveforms (e.g. in the example below). The sum n1 + n2 + .... will normally equal size for fully specified functions. If the sum is smaller, the function locations not included will be set to zero; if the sum is greater, only the first size locations will be stored.

If p4 is positive, functions are post-normalized (rescaled to a maximum absolute value of 1 after generation). A negative p4 will cause rescaling to be skipped.

Discrete-point linear interpolation implies an increase or decrease along a segment by equal differences between adjacent locations; exponential interpolation implies that the progression is by equal ratio. In both forms the interpolation from a to b is such as to assume that the value b will be attained in the n + 1th location. For discontinuous functions, and for the segment encompassing the end location, this value will not actually be reached, although it may eventually appear as a result of final scaling.

f 1 0 256 7 0 128 1 0 -1 128 0

This describes a single-cycle sawtooth whose discontinuity is mid-way in the stored function.
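The two interpolation rules can be sketched outside Csound. The following Python function is my own illustration, not part of the Csound distribution (the name `gen_segments` is invented): it fills a table from value/length pairs the way GEN07 (linear) and GEN05 (exponential) are described above, stepping each n-point segment from a toward b as if b were reached at the (n + 1)-th location, with zero-length segments producing jumps and unfilled locations left at zero.

```python
def gen_segments(size, pairs, exponential=False):
    """Fill a table of `size` points from breakpoint data [a, n1, b, n2, c, ...].

    Each segment of n points steps from a toward b, with b attained only at
    the (n+1)-th location, mirroring the GEN07/GEN05 semantics above.
    exponential=True uses equal ratios (GEN05: values nonzero, same sign);
    the default uses equal differences (GEN07).
    """
    table = [0.0] * size
    pos = 0
    for i in range(0, len(pairs) - 2, 2):
        a, n, b = pairs[i], pairs[i + 1], pairs[i + 2]
        for j in range(n):          # n == 0 yields a discontinuity (jump)
            if pos >= size:
                return table        # extra breakpoints beyond size are dropped
            if exponential:
                table[pos] = a * (b / a) ** (j / n)   # equal ratio
            else:
                table[pos] = a + (b - a) * j / n      # equal difference
            pos += 1
    return table

# The manual's sawtooth: f 1 0 256 7 0 128 1 0 -1 128 0
saw = gen_segments(256, [0, 128, 1, 0, -1, 128, 0])
```

Running it on the manual's example gives a ramp from 0 toward 1 over the first 128 points, a jump to -1 at location 128, and a ramp back toward 0, i.e. the single-cycle sawtooth with its discontinuity mid-table.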
Interesting math/programming problem.... July 20th 2013, 02:06 PM #1 Junior Member Jan 2010
The problem is simple, let me describe it: Suppose I ask 5 questions to 3 students and they give their answers. Every student is able to see what every other student answered. All the questions have a YES/NO answer. The correct answers are not given by me to the students, but I tell all students how many questions each student got right.
•Example: for the questions Q1, Q2, Q3, Q4, Q5, let's suppose the correct answers are 11001 (1 for YES and 0 for NO, so question 1 has the answer YES, while question 4 has the answer NO and question 5 also has the answer YES) and the students don't know this.
•Also let's suppose student-1 answered all questions as 10000, student-2 gave 11111 and student-3 gave 10110, and all 3 students know the answers all the others gave.
•So now I give the students the information that student-1 had 3 correct answers, student-2 also had 3 correct answers, while student-3 had only 1 correct answer.
My question is: ►Can a student deduce the correct answers from this information?
IMPORTANT NOTE: I'm NOT speaking about this specific example, I'm interested in the general problem! So let's see the general problem: Let's say we have N questions being asked to K students. So we get the answers Ak1, Ak2, ..., AkN from the k-th student, with k from 1 to K and Akx = 0 or 1 for every k from 1 to K and x from 1 to N. The number of each student's correct answers is denoted by Cj, with j from 1 to K.
Example (6 questions, 4 students, random answers, random Cj numbers of correct answers). We are given the information:
Student-1: 100111
Student-2: 100001
Student-3: 001101
Student-4: 010111
C1 = 4 (i.e. student-1 answered 4 questions correctly)
C2 = 4
C3 = 1
C4 = 5
►Given that information, can we deduce in a GENERAL and STRAIGHTFORWARD (i.e. programmable) way what all the correct answers are? I.e. find the string of 1's and 0's of length N that validates the information given above.
►Is the aforementioned string unique?
(Note: any Cj equal to 0 or N gives us the desired string with the correct answers immediately.)
Special cases of the problem can be solved by taking multiple different cases and obtaining the solution by deductive reasoning, but that's not a general solution, since we have to use different cases and different types of reasoning each time depending on what the N, K, Cj numbers are. I tried to solve the above using the XOR operator but I couldn't get anywhere. Can you help me find the solution to the ► questions?
Last edited by ChessTal; July 20th 2013 at 02:08 PM.
Re: Interesting math/programming problem....
"Given that information, can we deduce with a GENERAL and STRAIGHTFORWARD(i.e to be programmed) way what all the correct answers are? I.e to find the string of 1's and 0's with length N, that validates the above information that had been given."
No, not always. For example, it might be that the n students all answer the m questions in exactly the same way. Knowing that each had i questions correct and m - i questions wrong tells us nothing about which answers were correct and which wrong.
Re: Interesting math/programming problem....
Last edited by zzephod; July 20th 2013 at 10:48 PM.
Re: Interesting math/programming problem....
"Given that information, can we deduce with a GENERAL and STRAIGHTFORWARD(i.e to be programmed) way what all the correct answers are?
I.e to find the string of 1's and 0's with length N, that validates the above information that had been given." No, not always. For example, it might be that the n students all answer the m questions in exactly the same way. Knowing that each had i questions correct and m - i questions wrong tells us nothing about which answers were correct and which wrong.
Indeed, but in that case I should transform my question to ask for a GENERAL and STRAIGHTFORWARD (i.e. programmable) way to find whether the string of the correct answers can be determined or not and, if it can, to find it (in a general and straightforward way)....
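The general question lends itself to a direct computational check: enumerate all 2^N candidate keys and keep those reproducing every reported score. That answers both ► questions at once — the key is deducible exactly when one candidate survives. A sketch in Python (my own, not from the thread; the function name `candidate_keys` is invented):

```python
from itertools import product

def candidate_keys(answers, scores):
    """Return every length-N answer key consistent with the reported scores.

    answers: K strings of '0'/'1', one per student.
    scores:  K ints; scores[k] = number of questions student k got right.
    The key is uniquely deducible exactly when the returned list has length 1.
    """
    n = len(answers[0])
    return [''.join(key)
            for key in product('01', repeat=n)
            if all(sum(a == b for a, b in zip(student, key)) == s
                   for student, s in zip(answers, scores))]

# The 5-question, 3-student example from the post (true key 11001):
keys = candidate_keys(['10000', '11111', '10110'], [3, 3, 1])
```

On the post's example the true key 11001 is among the consistent candidates, and the length of the returned list settles uniqueness. This brute force is exponential in N; the thread offers no polynomial method and none is claimed here.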
partial differential equation, complete integral November 11th 2012, 07:26 PM #1 Jul 2009
This is my question. Find the complete integral of the PDE $\frac{\partial^2 u}{\partial x^2} + 2\frac{\partial^2 u}{\partial x \partial y} + \frac{\partial^2 u}{\partial y^2} = x e^{x+y}$ involving arbitrary functions f1 and f2. A little nudge towards the right direction would be of great help. I'm sure I can take it from there...
Re: partial differential equation, complete integral
Undetermined coefficients gets you there, doesn't it? Assume that $u(x,y) = f(x)e^{x+y},$ substitute, and deduce a second order linear ode for $f(x).$
Re: partial differential equation, complete integral
Thank you BobP. I could get as far as factoring the auxiliary equation and getting two equal roots as 1, and -1. I guess the solution should be of the form $(A+Bx)e^{-x}$. But I'm not able to get the next step to solve for the complete integral.
Re: partial differential equation, complete integral
I think that your Auxiliary Equation is wrong, I think it should be $m^{2}+4m + 4 = 0$ having equal roots -2, -2. For the particular integral, you can use the undetermined coefficients method, let $f(x)=Ax+B.$
Re: partial differential equation, complete integral
BobP, can you post a link from where I can review my notes on finding the auxiliary equation for such partial differential equations? That'd be of great help as I can improvise and find the right set of solutions...
Re: partial differential equation, complete integral
Hi! The three main steps for solving the PDE are summarized in the attachment (without the whole calculus, which you certainly can do by yourself).
Re: partial differential equation, complete integral
Hi JJacquelin, I solved the homogeneous equation and got -1 and -1 as the two roots. Does this mean that the complementary function of the pde is $\phi_{1}(y-x) +x \phi_{2}(y-x)$ ?
I guess it could also be $\phi_{1}(x-y) +x \phi_{2}(x-y)$. As far as the particular integral is concerned, I'm a little at sea when I try to follow the image file you attached. Wish I was smart enough to get the gist of what you meant to...
Re: partial differential equation, complete integral
F(y-x) is a solution of the homogeneous equation. But since F is any function, F(y-x) is the same as G(x-y) with any function G, and the same as H(exp(y-x)) with any function H, and the same as... many others. So you can choose any one of these functions containing (y-x). The function F(x-y)=F(-(y-x)) is one of them. Now, to find a particular solution of the complete PDE, the method is already given in the preceding post (part 2). Just apply it: bring back u=(ax+b)exp(x+y) into the PDE.
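Carrying the hints in the thread through to the end, here is a worked sketch of the full solution (my own working, following BobP's and JJacquelin's outline; not posted in the thread):

```latex
% Complementary function: the operator factors as (D_x + D_y)^2, a repeated
% factor, giving
%   u_c = \phi_1(y - x) + x\,\phi_2(y - x).
%
% Particular integral: substituting u_p = f(x)e^{x+y} into
% u_{xx} + 2u_{xy} + u_{yy} = x e^{x+y} gives
%   (f'' + 2f' + f) + 2(f' + f) + f = x,
% i.e. f'' + 4f' + 4f = x, whose auxiliary equation m^2 + 4m + 4 = 0 has the
% equal roots -2, -2 (as BobP said). Trying f(x) = Ax + B:
%   4A + 4(Ax + B) = x \;\Rightarrow\; A = \tfrac{1}{4},\; B = -\tfrac{1}{4}.
u(x,y) = \phi_1(y-x) + x\,\phi_2(y-x) + \tfrac{1}{4}(x-1)\,e^{x+y}
```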
Integration.. mistake? February 23rd 2011, 08:27 AM Integration.. mistake? Question: The region R is bounded by the x-axis and the line $x=16$ and the curve with the equation $y=6-\sqrt{x}$ where $0\leq{x}\leq36$. Find in terms of $\pi$ the volume of the solid generated when R is rotated through one revolution about the x axis. Here's what I've done: $\int (6-\sqrt{x})^2 \pi dx$ (Integrating between the bounds 16 and 0 didn't know how to display that with latex) After integrating I have $[\frac {1}{2} (36-12\sqrt{x}+x]^2 \pi$ again for x= 16 and 0 solving that gives me the wrong answer any ideas? February 23rd 2011, 08:37 AM Prove It You have set up the integral correctly, but your evaluation of the integral is wrong. It looks like you've expanded the brackets, but kept the square there as well, and then tried to use the chain rule... $\displaystyle \int{(6-\sqrt{x})^2\,dx} = \int{36 - 12\sqrt{x} + x\,dx}$ $\displaystyle = \int{36 - 12x^{\frac{1}{2}} + x\,dx}$. Now just integrate term-by-term. February 23rd 2011, 09:09 AM So using the chain rule here is wrong? February 23rd 2011, 09:18 AM Prove It
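Following Prove It's term-by-term route to the end (a check of my own, not posted in the thread): the antiderivative of 36 - 12x^(1/2) + x is 36x - 8x^(3/2) + x²/2, which over [0, 16] gives 192, so the volume comes out to 192π. A quick numeric cross-check in Python:

```python
import math

def volume():
    # V = pi * integral from 0 to 16 of (6 - sqrt(x))^2 dx, term by term:
    # antiderivative of 36 - 12*x**0.5 + x is 36*x - 8*x**1.5 + x**2/2.
    F = lambda x: 36 * x - 8 * x ** 1.5 + x ** 2 / 2
    return math.pi * (F(16) - F(0))

# Cross-check against a midpoint Riemann sum of the original integrand.
n = 100_000
dx = 16 / n
riemann = math.pi * sum((6 - math.sqrt((i + 0.5) * dx)) ** 2 * dx
                        for i in range(n))
```

Both routes agree on 192π, confirming that once the square is expanded the chain rule plays no part.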
Spring Cleaning Data: 3 of 6 - The Little but Big Correction
April 10, 2013 By 0utlieR
Building on the previous posts (post 1, post 2), I found there were 12 instances of the type of credit marked "Primary*", which means the lender borrowed twice in the same day, in the 2010 q4 data. It would seem simple enough in Excel: use the filter, find the 12 instances, delete the "*" and be done. For R this turned out to be much more difficult to do, mainly because the "*" is an operator used in multiplication, so it is hard to get rid of. What seemed to be a minor correction turned into something much bigger. Why would I go through all the trouble? Mainly because I know there will be cases where I will be dealing with data much larger than this one, that will not fit into Excel, where I will have to be able to make corrections in R because I will not be able to do it otherwise. Plus, it is a good way to hone my data manipulation skills.
I used the following, but failed each time (suggestions for improvement are welcomed):
• gsub() - did not work because of the '*'
• Variable[condition]<-replacement - looking for a numerical value, of which there is none
• ifelse() - did not find the difference between Primary and Primary*
• Used the stringr package to try and delete the last character, but it did not work
So what I did was use the subset() function to flush the problem rows out. Knowing there were 12, I replaced the wrong values with the correction using the rep() function. Then I made sure it was a factor using as.factor().
#Correcting the Primary Credit*
#Step 1 isolate the rows of data
tmp<-subset(dw.2010.q4, subset=type.credit=='Primary Credit*')
#Step 2 change the column of data to the correct label
tmp$type.credit<-rep('Primary Credit', 12)
#Step 3 make sure the data is a factor
tmp$type.credit<-as.factor(tmp$type.credit)
Then I added the cleaned up rows to the data. I am not particularly interested in keeping the data in any particular order, as the date will keep things in chronological order.
#Step 4 remove the Primary Credit*
#rows of data from original dataframe
dw.2010.q4<-dw.2010.q4[-which(dw.2010.q4$type.credit=='Primary Credit*'),]
#Step 5 add the corrected data back into dataframe
dw.2010.q4<-rbind(dw.2010.q4, tmp)
#Check to see if correct
[SciPy-dev] Arrays as truth values?
Perry Greenfield perry at stsci.edu
Tue Nov 8 09:35:42 CST 2005

On Nov 8, 2005, at 9:21 AM, Ed Schofield wrote:
> I agree with this reasoning, but I'd like to illustrate a drawback to the
> new behaviour:
>>>> a1 = array([1,2,3])
>>>> a2 = a1
>>>> if a1 == a2:
> ... print "equal"
> ...

What about:

>>> a1 = array([[1,2,3],[1,2,3]])
>>> a2 = array([1,2,3])
>>> a1 == a2

These two arrays are not the same shape, but because of broadcasting they will show as equal. Is this what you intended? Some might, some might not. Robert has already pointed out that lots of people want == to result in an array of booleans (most, I'd argue) rather than a single boolean. And if you wanted to use the current Numeric behavior, then

>>> array([0,0]) == array([0,1])

will not do what you wish, since there is at least one equal element, so it is treated as true. (Again reiterating Robert's point.) Your example illustrates exactly why allowing this behavior is dangerous. Two different people looking at this may expect two different results.
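Perry's point is easy to demonstrate with modern NumPy, where `==` returns a boolean array and the two possible readings of "equal" must be requested explicitly (a sketch of my own, not part of the original thread):

```python
import numpy as np

a1 = np.array([0, 0])
a2 = np.array([0, 1])

# Elementwise comparison yields a boolean array, not a single truth value.
eq = a1 == a2                      # array([ True, False])

# The two meanings Perry contrasts are spelled out with any()/all():
any_equal = eq.any()               # True  (the old Numeric-style reading)
all_equal = eq.all()               # False (strict elementwise equality)

# Broadcasting makes differently shaped arrays comparable, as in the post:
b1 = np.array([[1, 2, 3], [1, 2, 3]])
b2 = np.array([1, 2, 3])
shapes_differ_but_equal = (b1 == b2).all()   # True
```

In today's NumPy the bare truth test (`if a1 == a2:`) raises a ValueError for arrays with more than one element, which is how the ambiguity debated in this thread was ultimately resolved.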
Taming turbulence: Understanding the equations Fabian Waleffe. Almost every flow we know is turbulent, he says, citing not only the weather, but other common examples such as blood flow, boiling water, air rushing around a moving vehicle, and oil traveling through a pipeline.
One of a series of 10 Leonardo da Vinci "deluge" drawings done circa 1515.
But while turbulence is ubiquitous, it is not well understood, despite studies that began as early as the 1500s, with artist Leonardo da Vinci’s observations and drawings of water flows. Then in the 1880s, scientists Claude-Louis Navier and George Stokes derived the equations that govern fluid flow and describe it well, but which are tremendously complicated to solve, even with the help of a supercomputer. “We’re not able to derive from the equations when turbulence occurs, or what is turbulence, or how to describe it,” says Waleffe. As a result, engineers must resort to ad hoc empirical formulas of limited validity and scientists must study turbulence at its most basic, or slowest-moving, level. “Even if you just walk around, we cannot fully calculate the air flow around you,” he says. He is using the Navier-Stokes equations to calculate solutions that describe the turbulent structures scientists have long observed in channel flows and boundary layers. Although the very idea of turbulence suggests randomness and disorder, Waleffe says scientists’ experiments half a century ago pointed to an underlying order. “Most natural turbulent flow shows eddies—like you see when there’s a lot of wind and you see leaves spiraling around each other,” he says. “The flow’s clearly not random because it has these organized motions—such as the eddies—which we call coherent structures.” And Waleffe’s studies reveal that the equations have underlying coherent solutions that capture the average features of turbulence. In other words, there’s an order to the disorder.
That’s why, he says, scientists can observe structures embedded within the turbulence—like wavy streaks and horseshoe vortices—in one area of a flow and then watch them disappear and return elsewhere. Given 40 years of supporting experimental evidence, pinpointing these coherent structures in the equations is a big step forward, he says, because experimentalists argued about what they were seeing. “It turned out to be all different pieces of the same structure,” says Waleffe. “Once you put it into mathematical terms and once you can actually compute the solution, everything sort of falls into place.” Now that a mathematical description exists, many scientists are rethinking their notions of this order and how it affects flow. “It’s really these underlying coherent structures that increase the transport—of momentum and heat, for instance—and the disorder that comes on top of it actually reduces the transport,” he says. The next step in Waleffe’s research, which is funded mainly by the National Science Foundation Division of Mathematical Sciences, is to investigate the relationship between a coherent structure and the disorder, and to try to learn why a flow becomes turbulent. Like the most recent discoveries, the answer won’t come easily or quickly. “Turbulence is a very hard problem to crack,” he says. “I think of it as a succession of hard shells, and we certainly have broken a shell. Now we can dig deeper into the problem, but we may not get to the core. There may be another shell to crack.”
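For reference, the equations the article describes can be written in their standard incompressible form (quoted here from the textbook literature, not from the article itself):

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},\\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
```

where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density, and $\nu$ the kinematic viscosity. The nonlinear term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ is what makes the system so difficult to solve and is the mathematical source of the turbulent behavior discussed above.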
Confused about Special Theory of Relativity?

Hi, I have a question (or maybe more) about special relativity. I'm reading The Fabric of the Cosmos by Brian Greene right now, and what I got from one of the sections is confusing. I might be understanding it wrong, or maybe what he's saying is wrong, or maybe it's just right. Well, since that is a popular science book, it is probably a mixture of all of the above ;) So this is what I'm confused about: Greene says that if we run away from light, it will stay at the same velocity in relation to us. This is because velocity is just distance/time, and if we measure time as running slower while we're going faster, we will measure light as going faster. Everything will even out, and we will measure light as going at the same speed. Well, if this is true, then wouldn't this be true for all things coming towards us? Wouldn't everyday objects maintain a constant speed (in relation to you) as well? Sorry if I just got all of this completely wrong, I'm relatively new to physics.

... time dilation and length contraction happen to keep the speed of light constant. Other things are going slower, so you see the times and lengths associated with them adjusted by a different amount. The amount of the adjustment depends on the relative speed.

On a sort of unrelated note, why is our four-velocity "c"?

It isn't. It is the magnitude of the 4-velocity which is a constant. We usually choose the constant to be 1 and measure distances in terms of the speed of light ... so if time is in seconds, then distance is in light-seconds, and |V|=1. That happens because time dilation and length contraction are complementary terms. I take it the book does not have a lot of math in it?
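The question above can be checked numerically with the relativistic velocity-addition formula, w = (u + v)/(1 + uv/c²): combining any speed with c gives back c, while ordinary speeds combine almost exactly classically. The script below is an illustrative sketch, not something from the thread:

```python
C = 299_792_458.0  # speed of light in m/s

def relativistic_sum(u, v, c=C):
    """Combine two collinear speeds u and v relativistically."""
    return (u + v) / (1 + u * v / c**2)

# Chasing a light beam at half the speed of light: it still recedes at c.
print(relativistic_sum(0.5 * C, C))   # 299792458.0, i.e. c again

# Two everyday speeds (30 m/s each) add almost exactly classically.
print(relativistic_sum(30.0, 30.0))   # just a hair under 60 m/s
```

So everyday objects do not keep a constant speed relative to you; the correction to ordinary velocity addition is of order uv/c², which is utterly negligible at everyday speeds but total at the speed of light.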
Search results 26-31 of 31

26. CJM 2004 (vol 56 pp. 344) Predual of the Multiplier Algebra of $A_p(G)$ and Amenability
For a locally compact group $G$ and $1
Keywords: Locally compact groups, amenable groups, multiplier algebra, Herz algebra

27. CJM 2003 (vol 55 pp. 1000) Some Convexity Results for the Cartan Decomposition
In this paper, we consider the set $\mathcal{S} = a(e^X K e^Y)$ where $a(g)$ is the abelian part in the Cartan decomposition of $g$. This is exactly the support of the measure intervening in the product formula for the spherical functions on symmetric spaces of noncompact type. We give a simple description of that support in the case of $\SL(3,\mathbf{F})$ where $\mathbf{F} = \mathbf{R}$, $\mathbf{C}$ or $\mathbf{H}$. In particular, we show that $\mathcal{S}$ is convex. We also give an application of our result to the description of singular values of a product of two arbitrary matrices with prescribed singular values.
Keywords: convexity theorems, Cartan decomposition, spherical functions, product formula, semisimple Lie groups, singular values
Categories: 43A90, 53C35, 15A18

28. CJM 2002 (vol 54 pp. 795) Structure Theory of Totally Disconnected Locally Compact Groups via Graphs and Permutations
Willis's structure theory of totally disconnected locally compact groups is investigated in the context of permutation actions. This leads to new interpretations of the basic concepts in the theory and also to new proofs of the fundamental theorems and to several new results. The treatment of Willis's theory is self-contained and full proofs are given of all the fundamental results.
Keywords: totally disconnected locally compact groups, scale function, permutation groups, groups acting on graphs
Categories: 22D05, 20B07, 20B27, 05C25

29. CJM 2000 (vol 52 pp. 633) Chern Characters of Fourier Modules
Let $A_\theta$ denote the rotation algebra---the universal $C^\ast$-algebra generated by unitaries $U,V$ satisfying $VU=e^{2\pi i\theta}UV$, where $\theta$ is a fixed real number. Let $\sigma$ denote the Fourier automorphism of $A_\theta$ defined by $U\mapsto V$, $V\mapsto U^{-1}$, and let $B_\theta = A_\theta \rtimes_\sigma \mathbb{Z}/4\mathbb{Z}$ denote the associated $C^\ast$-crossed product. It is shown that there is a canonical inclusion $\mathbb{Z}^9 \hookrightarrow K_0(B_\theta)$ for each $\theta$ given by nine canonical modules. The unbounded trace functionals of $B_\theta$ (yielding the Chern characters here) are calculated to obtain the cyclic cohomology group of order zero $\HC^0(B_\theta)$ when $\theta$ is irrational. The Chern characters of the nine modules---and more importantly, the Fourier module---are computed and shown to involve techniques from the theory of Jacobi's theta functions. Also derived are explicit equations connecting unbounded traces across strong Morita equivalence, which turn out to be non-commutative extensions of certain theta function equations. These results provide the basis for showing that for a dense $G_\delta$ set of values of $\theta$ one has $K_0(B_\theta)\cong\mathbb{Z}^9$ and is generated by the nine classes constructed here.
Keywords: $C^\ast$-algebras, unbounded traces, Chern characters, irrational rotation algebras, $K$-groups
Categories: 46L80, 46L40

30. CJM 1999 (vol 51 pp. 96) Partial Characters and Signed Quotient Hypergroups
If $G$ is a closed subgroup of a commutative hypergroup $K$, then the coset space $K/G$ carries a quotient hypergroup structure. In this paper, we study related convolution structures on $K/G$ coming from deformations of the quotient hypergroup structure by certain functions on $K$ which we call partial characters with respect to $G$. They are usually not probability-preserving, but lead to so-called signed hypergroups on $K/G$. A first example is provided by the Laguerre convolution on $\left[ 0,\infty \right[$, which is interpreted as a signed quotient hypergroup convolution derived from the Heisenberg group. Moreover, signed hypergroups associated with the Gelfand pair $\bigl( U(n,1), U(n) \bigr)$ are discussed.
Keywords: quotient hypergroups, signed hypergroups, Laguerre convolution, Jacobi functions
Categories: 43A62, 33C25, 43A20, 43A90

31. CJM 1998 (vol 50 pp. 342) Shape fibrations, multivalued maps and shape groups
The notion of shape fibration with the near lifting of near multivalued paths property is studied. The relation of these maps---which agree with shape fibrations having totally disconnected fibers---with Hurewicz fibrations with the unique path lifting property is completely settled. Some results concerning homotopy and shape groups are presented for shape fibrations with the near lifting of near multivalued paths property. It is shown that for this class of shape fibrations the existence of liftings of a fine multivalued map is equivalent to an algebraic problem relative to the homotopy, shape or strong shape groups associated.
Keywords: Shape fibration, multivalued map, homotopy groups, shape groups, strong shape groups
Categories: 54C56, 55P55, 55Q05, 55Q07, 55R05
Reality Conditions

Loops 07: Conference report (part 3, including discussion session)

Saturday 30/06

There were only two plenary talks this morning, followed by a discussion session. The first was by John Stachel, who is a specialist on the philosophy and history of physics (with special reference to Einstein and relativity). He introduced a general philosophy called "measurability analysis", which is based on analyzing and defining possible measuring processes and abstracting from them the quantities that need to be quantized (transformed into non-commuting operators). His analysis of GR suggests, to him at least, that the projective and the conformal structures of spacetime geometry are "what needs to be quantized" in quantum gravity.

The second was by Michael Reisenberger, who sketched with admirable clarity a canonical formalism for GR in which the initial data live on two intersecting null hypersurfaces. The advantage is that null initial data are free, not subject to constraints. He provided a definition of the Poisson bracket in this formalism and suggested that quantization leads to area discretization, though this is not yet on solid ground.

In the discussion session, Carlo Rovelli followed the same procedure used in the Zakopane lectures and read selected questions from a notebook that had circulated among the audience on the previous days for people to write their questions in. Obviously, dozens of questions were written, and Rovelli, given the time constraints, had to select only a handful of them for discussion. These are the ones that made the cut:

1) Do we expect topology change in Quantum Gravity?

Oriti said that he expects a general framework to allow for topology change, but probably the classical limit can only be recovered on a sector that disallows it.
Ashtekar was of the opinion that the canonical LQG framework must allow macroscopic topology change (if lots of spins become trivial, we have macroscopically a "branching" spacetime).

2) What is the relation of Quantum Gravity to the foundational questions in Quantum Mechanics?

Obviously, a question that provokes a lot of discussion. Thiemann and Rovelli are conservatives who think that QG and foundations of QM can be treated separately (for Rovelli's reasons, see what he said in Zakopane). Bianca Dittrich thinks that we need to develop our understanding of relational observers. Lucien Hardy said that GR is at least as radical as QM, so it is unlikely that it can be treated with the standard QM framework. John Donoghue disagreed: according to him, effective field theory shows that GR breaks down at high energies, so the sensible thing is to modify it and keep QM. (As I said in a previous post, this overlooks the fact that in this context what is meant by GR is not the exact Einstein theory, but the conceptual fact that spacetime is dynamical and not fixed.)

3) Could there be experimental consequences of fluctuating causal structure?

Sabine Hossenfelder mentioned the possible consequences for the arrival time of photons, but stressed that this comes only from a phenomenological model with no relation to an underlying theory. Hardy said that a "fluctuation", as a superposition between two classical states, would need some kind of interference experiment to observe, which is very difficult to realize in practice. I think Ashtekar got into a discussion with him here, but I couldn't follow it well enough to take notes; does anybody remember? Martin Reuter said that causal structure may be different for the different observables used to probe it, and especially for the scale of these observables.

4) What is finite in spin foam models?

Alejandro Perez gave a rather technical answer, of which the only notes I managed to take say: "some models (in 4D) are finite, some are not".
Whoa, informative. Sorry.

5) Do we expect the fundamental theory to be combinatorial, or to be embedded in a pre-existing manifold?

Rovelli pointed out the conflict between Thiemann's new "Algebraic Quantum Gravity" approach, which is purely combinatorial, and Smolin's program to recover matter from graph braiding, which requires graphs to be embedded. Thiemann said that matter can be included in the algebraic approach, just as a part of the complete Hamiltonian. (Obviously, it would be more appealing if we could derive matter instead of putting it in by hand; but can we?) José Antonio Zapata said that the basic thing we need to do is to understand how to build up a quantum theory on a differential manifold (one not previously equipped with a metric structure, I gather).

And now the last question. It asked all plenary speakers to state their "dream for Loops '17"; that is, on their most optimistic possible view, what is the title and abstract of the talk they imagine themselves presenting within ten years? Many of the answers were predictable and variations of a basic template: abstracts saying "we present a complete theory of quantum gravity with testable (or, in the most ambitious cases, confirmed) predictions." Ashtekar said something like this, adding that his estimated probability for this scenario was 0%. (But he also gave the in my opinion rather optimistic figure of 50% for the probability of having some experimental evidence to start resolving ambiguities.) Reuter had one of the most concrete dreams: "It is shown that LQG is equivalent to Asymptotic Safety, and that the quantization ambiguities in it are finite in number and equivalent to the dimensionality of the Non-Gaussian Fixed Point."
And finally, there was an extremely amusing exchange between Thiemann and Alejandro Perez, which is a fitting conclusion to this series of posts:

Thiemann (reading his dream abstract): "We present quantum gravity corrections to the electron fine structure, and find that they are in agreement with experiments carried out by the author" [laughter from the audience]

Perez (reading his dream abstract): "We show that Thiemann's calculations are totally wrong." [laughter from the audience]

Labels: physics

This is the second part of my conference report on Loops 07. The first part was here. Remember that you can download the slides or the audio for most of the talks from the conference webpage.

Wednesday 27/06

Plenary talks

Moshe Rozali started by giving an excellent talk about background independence in string theory, a topic that has been the subject of legendarily long discussions on Cosmic Variance and other blogs. The main points of his talk were: a) Perturbative string theory is in fact background independent, being a generalization of GR in a background field gauge; it's just that the perturbative framework makes the background independence non-manifest. b) Holographic dualities provide a way of achieving background independence in a more explicit way. In AdS/CFT, a gauge theory on the boundary can be manifestly diffeomorphism invariant and be equivalent to quantum gravity in the bulk of AdS. Rozali stressed that only the asymptotics of AdS (i.e. a particular negative value of the cosmological constant) need to be fixed; the interior geometry is completely dynamical. Ashtekar seemed to disagree about the extent of this statement and tried to press for a discussion in the question session, but it was interrupted for lack of time.

Klaus Fredenhagen talked about QFT in curved spacetime as a route to quantum gravity.
He extolled the virtues of Algebraic Quantum Field Theory and the techniques of microlocal analysis to provide a sound axiomatic foundation to QFT in curved spacetime, and explained some recent results proven in this area. Then he discussed the application of this formalism to the graviton field treated as a perturbation around a classical background, and wondered about its relation to the methods of effective field theory.

My namesake and compatriot Alejandro Perez (with whom I have been confused a couple of times for those two reasons, though we look nothing alike) gave a rather technical talk involving strings, BF theory and path-integral sum over topologies. I wish I could say more about it, but I got lost soon after the introduction.

Martin Reuter gave an exceptionally clear and compelling presentation on Asymptotic Safety in Quantum Gravity. It covered more or less all the ground that he had covered in the Zakopane lectures in March, which I will not summarise again (click on the link), but also a few tantalizing new implications for cosmology. If the results of the "Einstein-Hilbert truncation" are accepted as approximately true, then the physical cosmological constant "runs" with the scale in the following way: it is constant (at its currently observed tiny value) at lengthscales larger than 10^(-3) cm, and then starts growing as the fourth power of momentum (inverse length) until the Planck scale is reached, and from there on it grows quadratically. This means that in the early universe it was much larger than in the present but decreasing as the universe increased its scale. This provides a natural mechanism for inflation without any driving field. The inflation was driven by the same cosmological "constant" that we see today, and was due to the intrinsic running with scale of this parameter. Reuter had some calculations that seemed to show his model gives good results for the entropy of the universe, as well as a scale-invariant perturbation spectrum.
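Reuter's described running can be sketched as a simple piecewise function. Everything below is an illustrative toy: the units, the matching constants and the crossover scales are placeholders I made up to mirror the verbal description, not values from Reuter's actual calculation:

```python
K0 = 1.0         # crossover scale, the inverse of ~10^-3 cm (arbitrary units, assumed)
K_PLANCK = 1e30  # Planck momentum scale in the same arbitrary units (assumed)
LAMBDA_OBS = 1.0 # today's observed cosmological constant (arbitrary units)

def running_lambda(k):
    """Cosmological 'constant' vs. momentum scale k, as described in the talk:
    constant below K0, growing like k^4 up to the Planck scale, like k^2 above."""
    if k <= K0:
        return LAMBDA_OBS
    if k <= K_PLANCK:
        return LAMBDA_OBS * (k / K0) ** 4
    # matched for continuity at the Planck scale, then quadratic growth
    return LAMBDA_OBS * (K_PLANCK / K0) ** 4 * (k / K_PLANCK) ** 2
```

In such a picture the early universe (large k) has an enormous effective cosmological constant, which is Reuter's proposed driver for inflation.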
This is obviously the kind of thing that is either brilliantly right, or completely wrong. The "dark matter + small cosmological constant + inflation" model that is accepted in conventional cosmology gives predictions of extraordinary accuracy for many different observations (at least with respect to its first two elements). A lot of care would be needed to examine if Reuter's model can really emulate all the confirmed predictions, and whether it can make new ones that are testable. But if Reuter is right, then his talk was by a large margin the most important of the conference.

There were no parallel sessions on Wednesday afternoon, which was a free afternoon.

Thursday 28/06

Plenary talks

Daniele Oriti talked about Group Field Theory (GFT). According to him, GFTs (nonlocal field theories on group manifolds) can be interpreted as "second-quantized quantum gravity". They can be used as a general framework in which to rewrite discrete quantum gravity approaches such as LQG and spin foams. Oriti hopes that the elusive semiclassical limit of these theories may be more tractable with GFT methods. Instead of studying e.g. coherent semiclassical spin network superpositions, take a hugely populated "multi-particle state" of the GFT. The techniques of statistical field theory, used for the semiclassical limit of quantum mechanics in condensed matter theory, are well suited to be applied to GFTs. In this way one may even hope to define notions of "temperature" and "phases" as they apply to quantum spacetime. One interesting result that he mentioned near the end, without much explanation, is that GFTs must be Fermi-quantized in the Lorentzian case and Bose-quantized in the Riemannian. Can anyone explain to me what he meant by this?

By this time I was feeling ill and with a bit of a temperature (I had been warned against the local food, but...), so I went back to my hotel room to take some medicine and rest an hour or so.
I thus missed David Rideout's talk on supercomputers and came back for Martin Bojowald's on effective field theory applied to LQG, for which I had high expectations. Bojowald rewarded these expectations by dedicating one slide of his talk to quoting this blog…

…well, not exactly. The idea of the talk was to replace exact equations for quantum states by semiclassical, effective equations for a finite number of moments of a state (expectation value, fluctuation, etc.). This method is applied successfully to quantum cosmology. He hinted at the end at possible observable consequences in the inflation perturbation spectrum and at computable corrections to the Newtonian potential (meaning the 00 component of the metric in FRW cosmology). These do not seem to match those computed in Donoghue's ordinary effective field theory, but I'm not sure if this isn't because this is a different meaning of "Newtonian potential".

I kept feeling ill and missed almost all the other talks of the day, and didn't take notes in the few I attended. These included talks in the parallel sessions by Sundance Bilson-Thompson and fellow blogger Yidun Wan on models in which spin network braids are standard model particles. The next day would see Lee Smolin champion the same idea in a plenary talk. I returned early to rest in my hotel room and watch Argentina beat USA by 4-1 at football.

Friday 29/06

Plenary talks

As I was still not feeling perfectly well, I slept till late and attended only the last two morning talks. The first was by blogfriend Sabine Hossenfelder, on Phenomenological Quantum Gravity. She has written up the introduction to the talk in this post, so I can do nothing better than recommend that you read it. The rest of the talk examined the generic predictions made by models such as Minimal Length, Generalised Uncertainty Principle and Deformed Special Relativity.
According to her, the main problem with all these models is an insufficient connection with fully developed fundamental theories.

Lee Smolin, as I said, talked on braided QG structures as elementary particles. He started making the point that for LQG and related models of quantum spacetime to work, one needs to explain how low-energy excitations (gravitons, photons, etc.) can propagate through the spacetime foam without decohering with it. That is, one needs to identify "noiseless subsystems" and a ground state on which they propagate coherently, protected by an emergent symmetry. Then he presented the main result: a class of spin network models exists whose simplest coherent excitations (braided, embedded framed graphs) match the quantum numbers of the Standard Model's first generation of fermions. Higher generations can be included, at the cost of some exotic states. Interactions can be included. (But he did not say the crucial thing: whether these "interactions" match, or can be made to match, the U(1)xSU(2)xSU(3) gauge structure of the Standard Model.) Open problems are to include symmetry breaking and masses (all these degrees of freedom are massless), and to find momentum eigenstates and conservation laws. I can understand Smolin's excitement about these ideas, but for the moment I remain highly skeptical about them. The Standard Model is a lot more than a table with quantum numbers, and without much more development it will be hard to convince me that the behaviour of some pretty knots can reproduce the rich mathematical structure of Quantum Field Theory.

Parallel sessions

I chose to go to the sessions centred on black holes. William Donnelly gave a talk on entanglement entropy of spin networks, and its use in the calculation of black hole entropy.
Ashtekar expressed skepticism, saying that those calculations did not include the fact that the surface used is a black hole horizon; Donnelly answered that he assumes that any surface will have entropy for some observer accelerating in such a way that the surface is a horizon to him.

Daniel Terno talked on how the bulk entropy of a graph scales with its boundary, hoping to identify a "holographic regime" of LQG. The conclusion is that LQG will not be holographic, unless the Hamiltonian constraint reduces dramatically the allowed graph complexity. Bad news, I guess.

Yidun Wan talked a second time, this time giving the talk of his colleague Mohammed Ansari who couldn't make it to the conference. It was on an alternative framework to the "isolated horizons" one for dealing with quantum black holes. By a reasoning I could not follow, macroscopic corrections to Hawking radiation were predicted; Ashtekar was again skeptical.

Another talk worth mentioning was Jacobo Diaz-Polo's on the old problem of the black hole area spectrum in LQG. Jacobo and his collaborators did exact numerical calculations of the area degrees of freedom, without the approximations used for analytical calculations. They obtain, as usual, the Bekenstein-Hawking entropy as leading term (up to a choice of the Immirzi parameter) and a universal logarithmic correction with prefactor -1/2. The number of states as a function of the area has an interesting structure with evenly spaced peaks of degeneracy. If as a first approximation one considers only the states on the peaks, one gets an equidistant area spectrum and the Bekenstein-Mukhanov effect. Of course, all of this is purely kinematical (Jacobo himself stressed it) and the question of how to incorporate the dynamical constraint seems to remain as elusive as always.

This will be enough for today. My next and last post on the conference will describe the last day's two plenary talks and the discussion session that closed the conference.
As always, if anyone has anything to add to my summaries, thinks I forgot something important, or wants to correct some egregious mistake, they are more than invited to do so.

I apologize to all eager readers who will have to wait a few days more for the second part of my Loops 07 report. After some hesitation, I have decided to go to the Low-Energy Quantum Gravity Workshop at York on Thursday and Friday. And then Saturday will obviously be dedicated to reading Harry Potter and the Deathly Hallows. Quantum gravity blogging will probably not resume until Monday or Tuesday. After (or perhaps before) finishing off the Loops 07 posts I may say something about the York workshop if there was material of bloggable interest in it.

By the way, #1: The slides of my talk are now available at the conference webpage, so you can check by yourself whether my handwriting is really illegible.

By the way, #2: Pandagon and Matthew Yglesias and this excellent article by Michael Bérubé discuss the "backlash" against the Harry Potter books by "serious" literary critics. I think there is a point missed in many of the discussions. Quite beyond the literary quality of the books themselves, there is genuine cultural value in sharing an experience with millions of other people. It is the reason I am not ashamed of eagerly waiting to read the next Harry Potter and then discuss it with people and see all the online fans' reactions to it, while I am blissfully ignorant of many other fantasy series which are probably better written. In the same way that I watch every football match I can catch during the World Cup and then forget almost completely about the sport for four years. (Yeah, Alejandro, nice one. Trying to pretend that you have forgotten about football just after Argentina has been ignominiously defeated 3-0 by Brazil. Very convenient!)

This will be a very long post, or more likely, the first of a series of very long posts.
So let me skip quickly over the praises for the quality of the conference and the people present (I met many old friends, both from the real and the virtual worlds) and go directly into the physics. Remember that you can go beyond my comments and get both the slides and audio for most of the talks at the conference website.

Monday, 25/06

Plenary talks

Lucien Hardy talked on the causaloid formalism for quantum gravity. It was actually a foundations of quantum mechanics talk, based on an "operationalist" philosophy: data are recorded, and physics tries to predict probability correlations among data. These probabilities behave differently if data are from "causally connected regions" or not; this allows a definition of what is meant by causal connection in background-independent theories. I found the talk interesting but think that concrete progress in quantum gravity is unlikely to come from such an extremely "top-down" approach. As a matter of philosophical principle, I am suspicious of theories motivated by philosophical principles.

Rafael Sorkin talked on "anhomomorphic logic". Another foundations of QM talk. Sorkin favours a quantum logic interpretation, in which propositions describing unobserved microevents (e.g. "the particle passed through the lower slit") are assigned truth values that behave according to axioms different from classical logic. Besides what I said above on Hardy, I was especially suspicious of this approach because it "adds structure" that is not present in the bare quantum mechanics formalism, for reasons that I find unmotivated.

John Donoghue talked next on Effective Field Theory of General Relativity. This was a much-anticipated talk, and it was also referenced by many of the following speakers. It was an introduction to effective field theory and the way it provides a consistent perturbative theory of quantum gravity for sub-Planckian energy scales.
Donoghue emphasized that scattering amplitudes and physical results such as the first quantum correction to the Newtonian potential can be calculated unambiguously and independently of the high-energy completion of the theory, and that any theory that claims to provide this completion (such as LQG) must recover these corrections as well as the classical "zeroth-order" theory. According to Donoghue, the Problem with a capital P is not "reconciling GR and QM" but finding the fundamental high-energy theory that completes quantum GR at the Planck scale. I think, however, that when most people in what is loosely called the "LQG community" talk about reconciling GR and QM, they understand it as implying much more than what is provided by effective field theory. What is wanted is a quantum theory in which spacetime is fully dynamical, and the EFT results (while important, and truly a nontrivial check for any proposed theory) are still very far from this, as they are based on the perturbative framework of QFT.

Parallel sessions

My own talk "The transition rate of an Unruh detector in a general spacetime" was scheduled for one of the Monday afternoon sessions. It went quite well, with only a brief question by Jerzy Lewandowski during the presentation, and no questions afterwards (though a couple of persons came to talk to me expressing interest later). I suspect many people couldn't understand much, between my illegible handwriting in the transparencies (I promise to use software next time!), the bad quality of the projector, and the high speed of my speaking due to nerves. I was feeling uncommonly nervous, both before and after the talk, and I didn't take many notes on other talks that afternoon.

I have some notes on Rodolfo Gambini's talk, about how quantum mechanics is modified when instead of an abstract time variable we use a physical clock, subjected to decoherence, in the Schroedinger equation; of course, "unitarity" in this time variable is lost.
Then he argued that there are fundamental limitations to any clock at the Planck scale, and therefore quantum mechanics would need modifications there. I think that a "timeless" formalism of QM (such as Rovelli, Oeckl and others have tried to build) is needed before one can assess these arguments.

Guillermo Mena-Marugan and Iñaki Garay talked about quantizations of restricted classical solutions of GR, the Gowdy model and Einstein-Rosen waves respectively; Iñaki had some nice plots of quantum solutions exhibiting both classical and non-classical behaviour.

Garrett Lisi then talked on his ambitious "theory of everything" that attempts to describe the whole Standard Model, gravity included, with a single Lie group, E8. I thought when hearing it that it was just a formal game, and was surprised to see Lee Smolin ask interested questions, and even more when I saw that John Baez had written a whole TWF column on this theory.

Tuesday, 26/06

Plenary talks

This was "the big LQG day", with talks by heavyweights Thiemann, Ashtekar and Rovelli. There was also a talk by Jan Ambjorn about the discrete sum-over-histories approach, but I missed it.

Thomas Thiemann gave a summary of things known and unknown in Loop Quantum Gravity. For me it added little to what he had covered in the more complete series of lectures in Zakopane. "Secured land" includes the kinematical framework, the LOST theorem, the area operator spectrum, and kinematical coherent states. "Uncharted territory" includes his more recent Master Constraint Operator (M) to define physical states and the checking of its good semiclassical behaviour.
"Open problems" are whether 0 is in the spectrum of M or whether there are anomalies; the resolution of quantization ambiguities in the definition of M; a systematic calculational framework for physical states; a connection with quantum field theory in curved space, with perturbation theory, and a definition of gravitons and Feynman graphs; and conceptual issues related to the problem of time and relational observables. In response to a question from the audience, he admitted that little or no work had been done to connect LQG with the effective field theory results. I think that everyone came out of the conference agreeing that this is an extremely important thing to do.

Abhay Ashtekar gave a summary of results in symmetry-reduced models: loop quantum cosmology and "loop quantum black holes". He started by arguing that while results in symmetric models do not prove generic validity, they cannot be dismissed a priori either; witness the example of the hydrogen atom spectrum, predicted correctly from a symmetric model, against the complexity of solving full QED. He next summarised the by now familiar results of LQC: the Big Bang singularity is replaced by a bounce, both in the zero and positive curvature cases. An important feature is that the correct semiclassical limit heavily constrains how ambiguities in the Hamiltonian are resolved. Similar bounces avoid the singularity in black hole spacetimes, showing that there is no information loss and that evolution is deterministic throughout the quantum regime into a new classical region. It is also known that these bounces are stable against small perturbations.

Carlo Rovelli asked a question at the end of Ashtekar's talk, one that has worried me for a long time, that I discussed briefly here about a year ago, and that has recently been discussed at Cosmic Variance (see previous post here for the link; I can't access CV now).
In our universe, the Big Bang was a state of uncommonly low entropy; this ensures the existence of an arrow of time, because entropy has naturally grown since then. If there was a collapsing phase and a bounce before the Big Bang, what was happening to entropy in it? Symmetry seems to demand that it decrease, but "naturally" a gravitational collapse increases entropy to a maximum, as in a black hole. The collapsing universe would need to be extremely fine-tuned for entropy to decrease in it. I couldn't follow Ashtekar's answer to Rovelli, but later I found an opportunity to pose the question again to him in a coffee break. He said that while matter entropy is very difficult to analyze in the simple models that have been studied so far, gravitational entropy (the "likeliness" of the gravitational state) does indeed seem to behave symmetrically in the bounce models. The quantum regime near the singularity is a very special, intrinsically low-entropy state. I have been convinced by Frank (fh) in the discussion here a year ago that if this is so, the most natural description of the situation is not "a previously collapsing universe with decreasing entropy followed by an expanding universe with increasing entropy" but "a low entropy state that expands, increasing entropy, in both time directions". In other words, it seems more natural to define the "positive time direction" at each of the two stages by the increase of entropy, even if this gives two different results and time is no longer a "line" but a "double arrow". Surely, if there were observers in the (from our point of view) "collapsing" phase, they would take themselves to live in an expanding universe, if, as seems almost certain, the psychological arrow of time is tied to the thermodynamical one. Ashtekar, however, didn't seem to think much of this point of view (probably dismissing it as too philosophical).
For him the scalar field that serves as "internal time" in these quantum cosmology models is the true "clock", and it is monotonically increasing. I am still puzzled, however, about what happens with entropy in the closed universe model (positive curvature without dark energy). Under quantization this one becomes cyclical, expanding and contracting again at a regular rate. What happens when the apex of the expansion is reached? Does entropy reverse itself suddenly, as in Gold's old cosmology? But how can this be, if the moment of maximum expansion is completely classical and localized systems should follow ordinary mechanical and thermodynamical laws without knowing about the cosmological turnaround? I find this very perplexing. A possible way out is that the existence of dark energy with its actual value, which accelerates the expansion and ensures that the universe is not cyclical, is somehow not an accidental but a necessary feature of the universe, so that the cyclical model will ultimately be shown to be inconsistent. But this is only a personal hope. See also my old review of Price's book on the arrow of time for more discussion of these questions.

Going on with the conference: Carlo Rovelli talked next about the new spinfoam vertex, an improved model intended to replace the Barrett-Crane one. He discussed at length the graviton propagator calculation he and his collaborators did a couple of years ago, explaining that since then the nondiagonal terms of the propagator had been computed and found to be wrong, but only because the Barrett-Crane model was used! Using the new model the problem is solved. The key difference is that the second-class simplicity constraints are imposed weakly rather than strongly. In the improved model the boundary states of spin foams match exactly the spin network states of canonical LQG, and intertwiner degrees of freedom remain free.
(There were some technicalities in all this that I couldn't follow, but if you are interested, download the slides and audio; it was a very clearly delivered talk.) The conclusion was optimistic: Carlo believes that this model may be the key to reconciling the "canonical" LQG approach and the "covariant" spin foam one.

Parallel sessions

The talks I attended this afternoon were mostly about highly technical aspects of LQG and spin foams, and I don't want to bore either myself or you by writing much about them. I will comment only on two of them, which were of special importance, to me at least.

Kristina Giesel talked about the work she did with Thiemann on Algebraic Quantum Gravity, a new version of LQG which is defined in a purely "combinatorial" way; spin networks are abstract graphs and are not embedded in any pre-existing manifold. Semiclassical analysis, however, can be done by specifying a 3-manifold and a classical phase space point in it, and constructing coherent states peaked on that geometry. The zeroth-order and first-order terms in hbar of the expectation value of the master constraint in these states come out correct; what is unknown is whether there are anomalies in M or whether 0 is in its spectrum.

The second talk I want to remark upon was Eugenio Bianchi's, on work related to the graviton propagator calculations. He showed computations of large-scale area correlations in spin foam models, for boundary states peaked on a classical geometry, and showed that they agree exactly with those computed in perturbative Regge calculus. The point is that correlations calculated in a semiclassical state of the full, nonperturbative theory are here compared with correlations in the vacuum state of the perturbative theory around a corresponding classical solution. Finding agreement is a nontrivial check for the spin foam model. In this case the model was Barrett-Crane, but Eugenio thinks the results still hold in the "improved" model Rovelli had talked about.
And this is enough for today. The rest of the conference will be covered in one, or perhaps two, following post(s). As usual, stay tuned!

Labels: physics

I am adamant in my decision of not writing physics posts while on holiday, so the report about the talks will have to wait. But my self-imposed ban does not cover pictures and links:

The cathedral of Morelia, just a block away from the university centre where the conference was.
Ileana and her unicorn-shaped balloon.
Frank expresses his ignorance (of what, I don't know).
Cecilia "la Madonna" Fiori.

Bee has already posted on the conference, including a photo of yours truly.

My detailed report will start coming in about one week. Meanwhile you can see the slides and audio of many of the talks by clicking around from here. My slides are not there yet (I used transparencies and didn't get around to scanning them yet).

Tangentially related is this Cosmic Variance post, on a subject that fascinates me and that came up during the conference. More on this later.
Re: Derivative

Here is a formula for this type of problem;

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
Here's the question you clicked on:
The shoe store has twice as many black shoes as it does brown shoes. The total number of shoes is 66. How many brown shoes are there?
@SomeGirl1999 • 9 months ago
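For the record, the question above is a one-equation word problem: with b brown shoes, there are 2b black shoes, so b + 2b = 66. A minimal sketch of that arithmetic:

```python
# Let b be the number of brown shoes; the black shoes are twice that.
# Then b + 2b = 3b = 66, so b = 66 / 3.
total = 66
brown = total // 3
black = 2 * brown

print(brown, black)  # 22 44
```

So there are 22 brown shoes (and 44 black ones, which indeed sum to 66).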
g01eec  Upper and lower tail probabilities and probability density function for the beta distribution
g01emc  Computes probability for the Studentized range statistic
g01erc  Computes probability for von Mises distribution
g01jcc  Computes probability for a positive linear combination of χ^2 variables
g01jdc  Computes lower tail probability for a linear combination of (central) χ^2 variables
g01kac  Calculates the value for the probability density function of the Normal distribution at a chosen point
g01kfc  Calculates the value for the probability density function of the gamma distribution at a chosen point
g01kkc  Calculates a vector of values for the probability density function of the gamma distribution at chosen points
g01kqc  Calculates a vector of values for the probability density function of the Normal distribution at chosen points
g01sac  Computes a vector of probabilities for the standard Normal distribution
g01sbc  Computes a vector of probabilities for Student's t-distribution
g01scc  Computes a vector of probabilities for χ^2 distribution
g01sdc  Computes a vector of probabilities for F-distribution
g01sec  Computes a vector of probabilities for the beta distribution
g01sfc  Computes a vector of probabilities for the gamma distribution
g08cjc  Calculates the Anderson–Darling goodness-of-fit test statistic and its probability for the case of uniformly distributed data
g08ckc  Calculates the Anderson–Darling goodness-of-fit test statistic and its probability for the case of a fully-unspecified Normal distribution
g08clc  Calculates the Anderson–Darling goodness-of-fit test statistic and its probability for the case of an unspecified exponential distribution

© The Numerical Algorithms Group Ltd, Oxford UK. 2012
Chi-Square Variance Ratio Proof

May 13th 2010, 06:17 AM
How would you go about proving that $[(n - 1) s^2] / \sigma^2$ has a Chi-Square Distribution?

May 13th 2010, 01:55 PM
Some information about $s^2$ is necessary. Anyway, I can just guess what it is. Note that $aX \sim N(0, a^2)$, where $X \sim N(0,1)$. Now think about what happens if you divide a normal distribution $N(0, \sigma^2)$ by $\sigma^2$! And finally, remember that a chi-square distribution is the sum of squares of iid random variables following an $N(0,1)$.

May 14th 2010, 03:14 PM
One way to get at this is to assume $\sigma^2 = 1$, prove it for the case n = 2, then induct by showing that
$(n - 1) S^2_n = (n - 2) S^2_{n - 1} + \left(\frac{n - 1}{n}\right)(X_n - \bar{X}_{n - 1})^2$,
which will get you where you need, since you will have the sum of two independent chi-squares. Some facts that you will use are that the sum of independent chi-squares is chi-square, that the square of an $N(0, 1)$ is $\chi^2_1$, and that $\bar{X}_n \perp S^2_n$. The definitions are:
$S^2_n = \frac{\sum_{i = 1}^n (X_i - \bar{X}_n)^2}{n - 1}$
$\bar{X}_n = \sum_{i = 1}^n \frac{X_i}{n}$
Once you have this, it's just a matter of undoing the assumption that $\sigma^2 = 1$, which isn't really a big deal. To be honest, if this is just for kicks, it's actually more of a pain in the head than it appears, particularly in proving the identity above.
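A simulation is no substitute for the proof sketched above, but as a numerical sanity check one can verify that $(n-1)s^2/\sigma^2$ has the mean $n-1$ and variance $2(n-1)$ of a $\chi^2_{n-1}$ distribution. A small Monte Carlo sketch (the sample sizes here are arbitrary choices):

```python
import numpy as np

# Monte Carlo check that (n-1) * s^2 / sigma^2 behaves like a chi-square
# with n-1 degrees of freedom, i.e. mean n-1 and variance 2(n-1).
rng = np.random.default_rng(0)
n, sigma, trials = 5, 3.0, 200_000

samples = rng.normal(0.0, sigma, size=(trials, n))
s2 = samples.var(axis=1, ddof=1)        # unbiased sample variance per row
stat = (n - 1) * s2 / sigma**2

print(stat.mean())   # should be close to n - 1 = 4
print(stat.var())    # should be close to 2(n - 1) = 8
```

With 200,000 trials the empirical mean and variance land within a few hundredths of 4 and 8, consistent with the claim.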
Today complex numbers have such widespread practical use--from electrical engineering to aeronautics--that few people would expect the story behind their derivation to be filled with adventure and enigma. In An Imaginary Tale, Paul Nahin tells the 2000-year-old history of one of mathematics' most elusive numbers, the square root of minus one, also known as i. He recreates the baffling mathematical problems that conjured it up, and the colorful characters who tried to solve them. In 1878, when two brothers stole a mathematical papyrus from the ancient Egyptian burial site in the Valley of Kings, they led scholars to the earliest known occurrence of the square root of a negative number. The papyrus offered a specific numerical example of how to calculate the volume of a truncated square pyramid, which implied the need for i. In the first century, the mathematician-engineer Heron of Alexandria encountered i in a separate project, but fudged the arithmetic; medieval mathematicians stumbled upon the concept while grappling with the meaning of negative numbers, but dismissed their square roots as nonsense. By the time of Descartes, a theoretical use for these elusive square roots--now called "imaginary numbers"--was suspected, but efforts to solve them led to intense, bitter debates. The notorious i finally won acceptance and was put to use in complex analysis and theoretical physics in Napoleonic times. Addressing readers with both a general and scholarly interest in mathematics, Nahin weaves into this narrative entertaining historical facts and mathematical discussions, including the application of complex numbers and functions to important problems, such as Kepler's laws of planetary motion and ac electrical circuits. This book can be read as an engaging history, almost a biography, of one of the most evasive and pervasive "numbers" in all of mathematics.
"A book-length hymn of praise to the square root of minus one."--Brian Rotman, Times Literary Supplement
"An Imaginary Tale is marvelous reading and hard to put down. Readers will find that Nahin has cleared up many of the mysteries surrounding the use of complex numbers."--Victor J. Katz, Science
"[An Imaginary Tale] can be read for fun and profit by anyone who has taken courses in introductory calculus, plane geometry and trigonometry."--William Thompson, American Scientist
"Someone has finally delivered a definitive history of this 'imaginary' number. . . . A must read for anyone interested in mathematics and its history."--D. S. Larson, Choice
"Attempting to explain imaginary numbers to a non-mathematician can be a frustrating experience. . . . On such occasions, it would be most useful to have a copy of Paul Nahin's excellent book at hand."--A. Rice, Mathematical Gazette
"Imaginary numbers! Threeve! Ninety-fifteen! No, not those kind of imaginary numbers. If you have any interest in where the concept of imaginary numbers comes from, you will be drawn into the wonderful stories of how i was discovered."--Rebecca Russ, Math Horizons
"There will be something of reward in this book for everyone."--R.G. Keesing, Contemporary Physics
"Nahin has given us a fine addition to the family of books about particular numbers. It is interesting to speculate what the next member of the family will be about. Zero? The Euler constant? The square root of two? While we are waiting, we can enjoy An Imaginary Tale."--Ed Sandifer, MAA Online
"Paul Nahin's book is a delightful romp through the development of imaginary numbers."--Robin J. Wilson, London Mathematical Society Newsletter

A Selection of the Library of Science Book Club
Paperback: Not for sale in South Asia
Jackson Heights, New York, NY
New York, NY 10016
Is math confusing? I've been in your shoes. Let me help!
...During my time there I received both individual and team awards at the Florida region, state, and national level.
SUBJECTS: At this time I am available to tutor Prealgebra, Algebra 1, Geometry and Algebra 2.
HOURS: I am available to tutor Monday-Friday.
Offering 4 subjects including algebra 2
From Encyclopedia of Mathematics

The arithmetical operation opposite to addition, i.e. finding one of the terms from the given sum and the given other term. The given sum is named the minuend, the given term is known as the subtrahend, while the term to be found is called the difference. Subtraction is denoted by $-$ (minus). Thus, in the expression $a - b = c$,
$a$ is the minuend, $b$ is the subtrahend and $c$ is the difference.

[a1] P.M. Cohn, "Algebra", 1, Wiley (1982) pp. 136

How to Cite This Entry:
Subtraction. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Subtraction&oldid=31373
Least-Squares Finite Element Methods: Theory and Applications (Part I of II)
Thursday, July 16, 10:30 AM-12:30 PM
Room: Sidney Smith 2118

As a rule, variational formulations of systems of partial differential equations (for example, the Stokes problem) lead to saddle-point optimization problems. Although approximation of such problems is now well understood, their numerical solution may still be difficult and computationally demanding. This minisymposium will focus on important new developments in finite element methods of least-squares type. Such methods involve minimization of problem-dependent least-squares functionals and offer many attractive theoretical and computational advantages that are not present in other discretization schemes, e.g., mixed Galerkin methods. Most notably, least-squares methods circumvent stability conditions such as the inf-sup condition, lead to symmetric and positive definite algebraic systems, and allow one to enforce essential boundary conditions in a weak, variational sense. The speakers in this minisymposium will present new results in mathematical approaches, numerical algorithms and applications of least squares methods. This minisymposium will bring together experts in the field with extensive applied and industrial background and will provide the audience with a broad perspective of the current state of the theory and applications of least squares. See Part II, MS56.

Organizer: Pavel B. Bochev, University of Texas, Arlington

10:30 A Negative Norm Least Squares Approach to Div-Curl Systems in Two and Three Dimensions
James Bramble and Xuejun Zhang, Texas A&M University, College Station

11:00 Analysis and Numerical Experiments in 2 and 3-D by Piecewise Linear Element to Stokes with Velocity Boundary Condition
Ching L. Chang, Cleveland State University

11:30 Adaptive Refinement and Singular Functions with First-Order Systems Least Squares Functionals
Thomas A. Manteuffel, Markus Berndt, Steve McCormick, and Gerhard Starke, University of Colorado, Boulder

12:00 Least-Squares Solution of Boundary Value Problems
George Pinder, J. P. Laible and D. G. Zeitoun, University of Vermont

LMH Created: 3/18/98; MMD Updated: 6/22/98
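The "symmetric and positive definite algebraic systems" mentioned in the session description can already be seen in the simplest discrete setting: minimizing a least-squares functional ||Ax − b||² leads to the normal equations AᵀAx = Aᵀb, and AᵀA is symmetric positive definite whenever A has full column rank. A minimal numpy sketch (the random matrix here is purely illustrative, not an actual finite element discretization):

```python
import numpy as np

# Minimizing ||A x - b||^2 gives the normal equations (A^T A) x = A^T b.
# When A has full column rank, A^T A is symmetric positive definite,
# so solvers like Cholesky or conjugate gradients apply directly.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))     # overdetermined system, full column rank
b = rng.standard_normal(20)

M = A.T @ A                          # symmetric ...
assert np.allclose(M, M.T)
assert np.all(np.linalg.eigvalsh(M) > 0)   # ... and positive definite

x = np.linalg.solve(M, A.T @ b)      # normal-equations solution
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x, x_ref)         # agrees with the library least-squares solver
print(x)
```

This is the algebraic core that least-squares finite element methods exploit; the papers in the session build the functional from the PDE residuals rather than from a plain matrix.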
Turbulent mixing in stably stratified flows

Abstract (Summary)

High resolution direct numerical simulations are used to investigate the dynamics of turbulence in flows subject to strong stable stratification, which are common in natural settings. Results are presented for two categories of simulations, uniform and non-uniform density stratification. For all simulated flows, the density stratification was held constant in time, and there was no ambient shear. Flows with uniform density stratification are first analyzed to help provide clear insight into physical processes, followed by flows with non-uniform density stratification, which better represent the stratification occurring in nature. Areas of non-uniform density stratification include thermohaline staircases and atmospheric layer transitions.

For uniform density gradient flows, it is observed that the Froude-Reynolds number scaling developed by Riley and de Bruyn Kops [2003] is similar to the buoyancy Reynolds number, Re_b = ε/(νN²). This supports the use of two dimensionless parameters obtained from dimensional analysis to predict turbulence in a density stratified flow. Also, due to the intermittent nature of density stratified flows, an auto-correlation length scale may be more appropriate than the typical advective length scale L_a = [Special characters omitted.]. Finally, the common assumption that the kinetic energy dissipation rate ε can be approximated by the vertical shear is shown to be valid only when Re_b ≤ O(1).

Non-uniformly stratified flows are often characterized simply by the average density change with height, which may not adequately describe the flow. For simulated wake flows with the same average density stratification, but altered vertical stratification profiles, the flow dynamics are seen to depend on the ratio ξ = δ_u/δ_ρ, where δ_u and δ_ρ are characteristic wake and stratification vertical length scales.
When ξ is monotonically increased from 0.01 (near linear stratification) to 2 (wake height is twice the stratification height), typical stratified flow behavior is observed, such as reduced decay of kinetic energy and inhibited vertical motion. In contrast, when ξ > 2, a transition occurs and the flow demonstrates non-stratified qualities, including rapid decay of kinetic energy and minimal inhibition of vertical motion.

In addition, a method for calculating available potential energy in non-uniform density stratified flows has been developed. It will be shown that mixing of available potential energy is confined to the stratification layer, which supports the observation of large mixing in regions of salt fingering found by St. Laurent and Schmitt [1999] and Schmitt [2003].
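To make the buoyancy Reynolds number from the abstract concrete, here is a small sketch evaluating Re_b = ε/(νN²). The numerical values are illustrative order-of-magnitude choices, not data from the thesis:

```python
# Buoyancy Reynolds number Re_b = epsilon / (nu * N^2), as defined above.
def buoyancy_reynolds(epsilon, nu, N):
    """epsilon: kinetic energy dissipation rate [m^2/s^3],
    nu: kinematic viscosity [m^2/s],
    N: buoyancy (Brunt-Vaisala) frequency [1/s]."""
    return epsilon / (nu * N**2)

# Illustrative, made-up magnitudes roughly in an oceanic range:
Re_b = buoyancy_reynolds(epsilon=1e-8, nu=1e-6, N=1e-2)
print(Re_b)  # ≈ 100
```

Values of Re_b well above one indicate turbulence energetic enough to overturn the stratification; the abstract's point is that approximating ε from vertical shear alone only holds when Re_b is of order one or less.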
Angus J. MacIntyre: How much has mathematics been affected by Gödel's work?

Gödel's dramatic work seems to have affected mathematics (beyond logic) very little. In particular, it fits badly with the geometrical ideas widespread in contemporary mathematics. Of course, in set theory his influence has been profound, and thanks to him and Cohen, and many very gifted followers, one knows wonderful things about models of ZF. But this has not led to organic connections to other parts of mathematics, and indeed set theory is remote from the centre of mathematics. In number theory and geometry, no one has ever detected a hint of the Incompleteness Phenomena around problems perceived by practitioners as central or natural. What we now know about unprovability of consistency, undecidability, or the effect of large cardinals on the unsolvability of diophantine equations, has induced neither paralysis nor anxiety in mathematicians, nor a rush to understand the fine detail of large cardinals. Technical ideas of Gödel (outside set theory) remain useful, for example around functional interpretations (relevant to the unwinding of proofs in the area where number theory meets ergodic theory and hard analysis). Refinements of his use of the Chinese Remainder Theorem were central to the negative solution of Hilbert's 10th problem (though quite new ideas, and probably geometric ones, appear to be needed for the problem over the rationals). Even in the integer case, the result is about situations far removed from foreseeable concerns of number theorists. Again, in combinatorial group theory one saw first a wave of negative results (à la Gödel) around decidability, then a brilliant twist by Higman to get hold of subgroups of finitely presented groups. But now these ideas fade, and the geometrical ideas of Gromov dominate. It is to be noted that large parts of mathematics are provably non-Gödelian.
Notable examples are the "o-minimal" universes, relating to a slogan of Grothendieck about "topologie moderee", where one has enough expressive power to do geometrical things freely, but not so much that one gets dragged into the geometrically irrelevant pathologies of set-theoretic analysis. Tarski's work on real-closed fields provided the first example, and now many richer ones are known. Gödel's work seems irrelevant here.
Objective Type

Re: Objective Type

Hi ganesh

The limit operator is just an excuse for doing something you know you can't.
"It's the subject that nobody knows anything about that we can all talk about!" ― Richard Feynman
"Taking a new step, uttering a new word, is what people fear most." ― Fyodor Dostoyevsky, Crime and Punishment

Re: Objective Type

Hi bobbym and anonimnystefy,

The answers 53 and 54 are correct. Well done!

55. Given f(x) = 3x + 2, f(0) = ____________________
(a) 3  (b) 2  (c) 5  (d) 8

56. = __________________
(d) None of these

Character is who you are when no one is looking.

Re: Objective Type

Hi ganesh;

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Objective Type

Hi anonimnystefy and bobbym,

The answers 55 and 56 are correct. Well done!

57. If cos θ = 1/2, then θ = ________________
(a) 30°  (b) 60°  (c) 45°  (d) 90°

58. The probability of getting a red card from a pack of cards is ________________
(a) 1/26  (b) 1/13  (c) 1/2  (d) 2/13

Re: Objective Type

Hi bobbym and anonimnystefy,

The answers 57 and 58 are correct. Well done!

59. In a Geometric Progression, when r < 1, = __________________
(a) a/(1 - r)  (b) a/(r - 1)  (d) None

60. The total surface area of a hemisphere of radius 5 cm is __________________

Re: Objective Type

Hi bobbym,

The answers 59 and 60 are correct. Well done!

61. Given f(x) = x^2 and g(x) = 3x, (f o g)(2) = ______________________
(a) 36  (b) 9  (c) 18  (d) 2

62. Square root of is _____________________
(a) (x - 2)(y + 3)  (b) (x + 2)(y + 3)  (c) (x - 2)(y + 3)  (d) None

Re: Objective Type

Hi bobbym,

The answer 61 is correct.

63. The roots of the equation x^2 + x - 6 = 0 are ______________
(a) 3, -2  (b) -3, 2  (c) 2, 3  (d) None

64. A point on 4x - 3y ≤ 10 is ___________________
(a) (2, 0)  (b) (3, -1)  (c) (2, -3)  (d) None

Re: Objective Type

Hi bobbym and anonimnystefy,

The answers 63 and 64 are correct. Well done!

65. The x-intercept of the line 3x + 5y = 15 is ___________________
(a) 5  (b) 3  (c) 15  (d) None

66. The lines 2x + 3y = 5 and 4x + y = 7 are ________________
(a) parallel  (b) perpendicular  (c) same  (d) None

Re: Objective Type

Hi bobbym and anonimnystefy,

The answers 65 and 66 are correct. Well done!

67. The value of 1 - tan 45° = _______________
(a) 1 + √3  (b) 2  (c) 0  (d) None of these

68. The variance is 9, then its Standard Deviation is _________________
(a) √3  (b) 3  (c) 81  (d) None of these

Re: Objective Type

Hi anonimnystefy and bobbym,

The answers 67 and 68 are correct. Well done!

69. 1 + 3 + 5 + ........ + 11 = __________________

70. The Curved Surface Area of a hemisphere of radius 'r' units is __________________

Re: Objective Type

Hi bobbym and anonimnystefy,

The answers 69 and 70 are correct. Well done!

71. A point on 4x + y < 5 is __________________
(a) (1, 1)  (b) (2, 0)  (c) (0, 7)  (d) None of these

72. Two similar triangles have corresponding sides proportional to 3:4. Then their areas are proportional to __________________
(a) 3:4  (b) 4:3  (c) 9:16  (d) 16:9
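Several of the multiple-choice answers confirmed in the thread can be double-checked numerically. A quick standalone sketch (the function names f55, f61, g61 are ours, not from the thread):

```python
# Quick numeric checks for several of the quiz answers confirmed above.
import math

f55 = lambda x: 3 * x + 2          # Q55: f(0) = 2, choice (b)
assert f55(0) == 2

assert math.isclose(math.cos(math.radians(60)), 1 / 2)   # Q57: choice (b)

assert 26 / 52 == 1 / 2            # Q58: 26 red cards in 52, choice (c)

f61 = lambda x: x ** 2
g61 = lambda x: 3 * x
assert f61(g61(2)) == 36           # Q61: (f o g)(2) = 6^2 = 36, choice (a)

roots = (-3, 2)                    # Q63: x^2 + x - 6 = (x + 3)(x - 2), choice (b)
assert all(r ** 2 + r - 6 == 0 for r in roots)

assert abs(1 - math.tan(math.radians(45))) < 1e-12   # Q67: choice (c)

assert math.sqrt(9) == 3           # Q68: variance 9 -> standard deviation 3, choice (b)

assert sum(range(1, 12, 2)) == 36  # Q69: 1 + 3 + 5 + ... + 11 = 6^2 = 36

assert (3 / 4) ** 2 == 9 / 16      # Q72: areas scale with the square of sides, choice (c)
```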
Volume of a Cone

Date: 02/08/2002 at 01:09:37
From: Henry
Subject: Another method of calculating volume of cone

Doctor Math:

Regarding the formula for calculating the volume of a cone, I have observed that a cone consists of a right-angle triangle and a circular base. I have come up with another method of calculating the volume of a cone, which is:

(the area of the right-angle triangle) * (the circumference of the base circle)

r = radius of the base circle
h = height of the triangle

But solving the above does not give me the formula (1/3)πr²h. I am wondering if my formula is wrong. Please help correct it.

Date: 02/08/2002 at 09:08:08
From: Doctor Peterson
Subject: Re: Another method of calculating volume of cone

Hi, Henry.

Something close to your method works. The problem is that although the volume of a triangle moved in a straight line (a triangular prism) is the product of its area and the distance moved, when the triangle moves in a circle, not all of it moves the same distance, so you can't use the OUTER circumference as if the whole triangle moved that far.

But it turns out (and can be proved with some advanced geometry, or with calculus) that the volume of such a "solid of revolution" will be the product of the area of the triangle and the distance moved by the center of gravity (centroid) of the triangle. Since this is 1/3 of the way out from the axis, this multiplies your formula by 1/3 and makes it correct.

You can read about this "Theorem of Pappus" at Eric Weisstein's World of Mathematics.

- Doctor Peterson, The Math Forum
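Doctor Peterson's correction can be verified numerically: multiply the triangle's area by the circumference traced by its centroid, which sits one third of the way out from the axis, and the standard formula falls out. A short sketch (the function and variable names are ours):

```python
import math

def cone_volume_pappus(r, h):
    """Volume of a cone via the Theorem of Pappus: the area of the
    generating right triangle times the distance travelled by its
    centroid, which lies at distance r/3 from the axis of rotation."""
    triangle_area = 0.5 * r * h
    centroid_path = 2 * math.pi * (r / 3)   # NOT the outer circumference 2*pi*r
    return triangle_area * centroid_path

def cone_volume_formula(r, h):
    """The textbook formula (1/3) * pi * r^2 * h."""
    return math.pi * r ** 2 * h / 3

# The two agree; using the outer circumference (Henry's original idea)
# would overstate the volume by a factor of exactly 3.
assert math.isclose(cone_volume_pappus(2.0, 5.0), cone_volume_formula(2.0, 5.0))
```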
Snohomish Trigonometry Tutor

...I'm a native Cantonese and Mandarin Chinese speaker. I studied Chinese for over ten years before I moved to the U.S. I graduated from the University of Washington with a B.A. degree in business.
13 Subjects: including trigonometry, geometry, Chinese, algebra 1

...I took three years of college-level German at the University of Washington, with a 4.0 GPA in all classes. I've also studied abroad in Germany. I can work with German students at the beginner and intermediate levels.
32 Subjects: including trigonometry, English, reading, writing

...It's a beautiful and important foundational subject for so many interesting topics, and it deserves a solid understanding. I received a 5 on the physics AP test, and later a 4.0 in general college-level physics. I find physics super fun and satisfying, and I'm always eager to tackle problems.
18 Subjects: including trigonometry, chemistry, physics, geometry

Hi, my name is George. I graduated from Bergen Community College, NJ, in 2009 with an Associate in Science degree in Engineering Science. I earned my Bachelor of Science degree in Mechanical and Aerospace Engineering from Rutgers University (New Brunswick, NJ) in 2012.
11 Subjects: including trigonometry, calculus, algebra 1, Arabic

...I was often selected for literary competitions and won high prizes. When I came to the US, I taught Russian to individual students as part of our family company, Russian Language Tours. I also worked as a Russian tutor for the Pacific Language Institute.
20 Subjects: including trigonometry, reading, GED, algebra 1
Results 1 - 10 of 72

- IEEE Transactions on Computers, 1986 — Cited by 2927 (46 self)
  "In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach. Index Terms: Boolean functions, symbolic manipulation, binary decision diagrams, logic design verification."

- ACM Computing Surveys, 1992 — Cited by 879 (11 self)
  "Ordered Binary-Decision Diagrams (OBDDs) represent Boolean functions as directed acyclic graphs. They form a canonical representation, making testing of functional properties such as satisfiability and equivalence straightforward. A number of operations on Boolean functions can be implemented as graph algorithms on OBDD ..."

- In DAC, 1985 — Cited by 58 (2 self)
  "In this paper we describe a data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations of Lee and Akers, but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms are quite efficient as long as the graphs being operated on do not grow too large. We present performance measurements obtained while applying these algorithms to problems in logic design verification."

- 1994 — Cited by 45 (8 self)
  "We investigate the use of oblivious, read-once decision graphs as structures for representing concepts over discrete domains, and present a bottom-up, hill-climbing algorithm for inferring these structures from labelled instances. The algorithm is robust with respect to irrelevant attributes, and experimental results show that it performs well on problems considered difficult for symbolic induction methods, such as the Monk's problems and parity. 1 Introduction: Top-down induction of decision trees [25, 24, 20] has been one of the principal induction methods for symbolic, supervised learning. The tree structure, which is used for representing the hypothesized target concept, suffers from some well-known problems, most notably the replication problem and the fragmentation problem [23]. The replication problem forces duplication of subtrees in disjunctive concepts, such as (A ∧ B) ∨ (C ∧ D); the fragmentation problem causes partitioning of the data into fragments, when a high-arity ..."

- International Workshop on Logic and Synthesis (IWLS01), Lake Tahoe, CA, 2001 — Cited by 30 (26 self)
  "A realization of multiple-output logic functions using a RAM and a sequencer is presented. First, a multiple-output function is represented by an encoded characteristic function for non-zeros (ECFN). Then, it is represented by a cascade of look-up tables (LUTs). And finally, the cascade is simulated by a RAM and a sequencer. Multiple-output functions for benchmark functions are realized by cascades of LUTs, and the number of LUTs and levels of cascades are shown. A partition method of outputs for parallel evaluation is also presented. A prototype has been developed by using RAM and ..."

- Proc. of Advance Research in VLSI, C. Seitz Ed., 1989 — Cited by 30 (2 self)
  "This article describes the use of if-then-else dags for multi-level logic minimization. A new canonical form for if-then-else dags, analogous to Bryant's canonical form for binary decision diagrams (bdds), is introduced. Two-cuts are defined for binary decision diagrams, and a relationship is exhibited between general if-then-else expressions and the two-cuts of a bdd for the same function. The canonical form is based on representing the lowest non-trivial two-cut in the corresponding bdd, instead of the highest two-cut, as in Bryant's canonical form. The definitions of prime and irredundant expressions are extended to if-then-else dags."

- 4905 of LNCS, 2008 — Cited by 23 (7 self)
  "Abstract. Cellular signalling pathways, where proteins can form complexes and undergo a large array of post-translational modifications, are highly combinatorial systems sending and receiving extra-cellular signals and triggering appropriate responses. Process-centric languages seem apt to their representation and simulation [1–3]. Rule-centric languages such as κ [4–8] and BNG [9, 10] bring in additional ease of expression. We propose in this paper a method to enumerate a superset of the reachable complexes that a κ rule set can generate. This is done via the construction of a finite abstract interpretation. We find a simple criterion for this superset to be the exact set of reachable complexes, namely that the superset is closed under swap, an operation whereby pairs of edges of the same type can permute their ends. We also show that a simple syntactic restriction on rules is sufficient to ensure the generation of a swap-closed set of complexes. We conclude by showing that a substantial rule set (presented in Ref. [4]) modelling the EGF receptor pathway verifies that syntactic condition (up to suitable transformations), and therefore despite its apparent complexity has a rather simple set of reachables."

- IEEE Trans. on CAD/ICAS, 1995 — Cited by 21 (6 self)
  "Spectral methods for analysis and design of digital logic circuits have been proposed and developed for several years. The widespread use of these techniques has suffered due to the associated computational complexity. This paper presents a new approach for the computation of spectral coefficients with polynomial complexity. Usually, the computation of the spectral coefficients involves the evaluation of inner products of vectors of exponential length. In the new approach, it is not necessary to compute inner products; rather, each spectral coefficient is expressed in terms of a measure of correlation between two Boolean functions. This formulation, coupled with compact BDD representations of the functions, reduces the overall complexity. Further, some computer-aided design applications are presented that can make use of the new spectrum evaluation approach. In particular, the basis for a synthesis method that allows spectral coefficients to be computed in an iterative manner ..."

- In Proceedings of ICALP'96, Lecture Notes in Computer Science, 1996 — Cited by 19 (9 self)
  "We define the notion of a randomized branching program in the natural way, similar to the definition of a randomized circuit. We exhibit an explicit function f_n for which we prove that: 1) f_n can be computed by a polynomial-size randomized read-once ordered branching program with a small one-sided error; 2) f_n cannot be computed in polynomial size by deterministic read-once branching programs; 3) f_n cannot be computed in polynomial size by deterministic read-k-times ordered branching programs for k = o(n/log n) (the required deterministic size is exp(Ω(n/k)))."
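The reduce/apply machinery the first two abstracts describe fits in a few dozen lines: keeping the diagram reduced makes it canonical for a fixed variable order, so semantic equivalence becomes a simple equality check, and `apply` combines two diagrams with cost proportional to the product of their sizes. A minimal sketch (the tuple encoding, the function names, and the variable order are our own illustration, not taken from any of the papers):

```python
# A minimal ordered, reduced BDD with a Bryant-style "apply" operation.
# Terminals are Python booleans; internal nodes are tuples (var, low, high).

def var(i):
    """Diagram for the single variable with index i (smaller index = nearer the root)."""
    return (i, False, True)

def apply_op(op, u, v, memo=None):
    """Combine two ordered BDDs with a boolean operator, memoising on
    pairs of subgraphs so the cost is proportional to |u| * |v|."""
    if memo is None:
        memo = {}
    if isinstance(u, bool) and isinstance(v, bool):
        return op(u, v)
    key = (u, v)
    if key in memo:
        return memo[key]
    ui = u[0] if not isinstance(u, bool) else float("inf")
    vi = v[0] if not isinstance(v, bool) else float("inf")
    i = min(ui, vi)                              # split on the topmost variable
    u0, u1 = (u[1], u[2]) if ui == i else (u, u)
    v0, v1 = (v[1], v[2]) if vi == i else (v, v)
    lo = apply_op(op, u0, v0, memo)
    hi = apply_op(op, u1, v1, memo)
    result = lo if lo == hi else (i, lo, hi)     # reduction rule: drop redundant tests
    memo[key] = result
    return result

AND = lambda a, b: a and b
OR = lambda a, b: a or b

# Because the reduced representation is canonical for a fixed variable order,
# two equivalent formulas build structurally identical diagrams:
A, B, C, D = var(0), var(1), var(2), var(3)
f1 = apply_op(OR, apply_op(AND, A, B), apply_op(AND, C, D))
f2 = apply_op(OR, apply_op(AND, C, D), apply_op(AND, B, A))
assert f1 == f2   # the equivalence test is just (deep) tuple equality
```

A production implementation would add a global unique table (hash consing) so equality is pointer comparison rather than structural comparison, but the canonicity argument is the same.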
logarithmic forms of exponential equations: 6 = 2^x, etc.

Hi

I think I may have this right, but can you please give me advice on whether I am going in the right direction? I have to convert these problems into logarithmic form. Can you tell me if problem (a) is correct?

a. 6 = 2^x

For this, the ANSWER is log_2(6) = x. The base is 2 and the exponent is x.

Last edited by isabelrachel on Tue Jul 14, 2009 3:55 am, edited 6 times in total.

There is nothing to "do" or "explain" or "show" here. These exercises are asking you to demonstrate your knowledge of the equivalence between the logarithmic and exponential forms. For an explanation of what they're expecting you to know, try here.
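The equivalence being practiced here — b^x = y exactly when x = log_b(y) — can also be checked numerically. A small sketch (Python's `math.log` takes an optional second argument giving the base):

```python
import math

# 6 = 2**x  is equivalent to  x = log_2(6)
x = math.log(6, 2)              # logarithm of 6, base 2
assert math.isclose(2 ** x, 6)  # raising the base to that exponent recovers 6

# The round trip works for any base/value pair:
for base, value in [(2, 6), (10, 1000), (3, 81)]:
    exponent = math.log(value, base)
    assert math.isclose(base ** exponent, value)
```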