aroma.cn citation info

Please cite aroma.cn using one or more of the appropriate references below:

- H. Bengtsson, P. Neuvial and T.P. Speed. TumorBoost: Normalization of allele-specific tumor copy numbers from a single pair of tumor-normal genotyping microarrays. BMC Bioinformatics, 2010, 11(245). doi:10.1186/1471-2105-11-245. http://www.biomedcentral.com/1471-2105/11/245/
- H. Bengtsson, A. Ray, P. Spellman and T.P. Speed. A single-sample method for normalizing and combining full-resolution copy numbers from multiple platforms, labs and analysis methods. Bioinformatics, 2009, 25(7). doi:10.1093/bioinformatics/btp074
- H. Bengtsson, K. Simpson, J. Bullard and K. Hansen. aroma.affymetrix: A generic framework in R for analyzing small to very large Affymetrix data sets in bounded memory. Tech Report 745, Department of Statistics, University of California, Berkeley, February 2008
- H. Bengtsson, R. Irizarry, B. Carvalho and T.P. Speed. Estimation and assessment of raw copy numbers at the single locus level. Bioinformatics, 2008, 24(6)
Here's the question you clicked on: Does anyone know TI BASIC programming? I would really appreciate some help fixing a problem.
Charge Distribution on Conductor

Usually people attack this kind of problem using so-called "moment methods". There's a good book by R.F. Harrington on the subject. In general, you know that the charges on the conductor are all going to reside on the surface. You also know that the potential at the conductor's surface (and inside its volume) must be constant. One application of the method of moments divides the surface up into many smaller subsurfaces, and assumes a constant charge density on each subsurface. You can then solve for the potential at, say, the center of any subsurface due to the charge at a single subsurface. Superposition also applies. You wind up with a matrix equation:

Vm = Lmn·qn (summation over n implied)

where Vm is the potential at subsurface m, qn is the charge density at subsurface n, and Lmn is the matrix connecting them. You invert this matrix to find the q's for constant V's. The accuracy depends on how finely you chop up the surface. It can be a very messy problem, especially for complex geometries, and requires a computer to help with the matrix inversions.

As a simple example, you might want to consider a thin square metal plate. Divide it up into, say, a 4x4 array of smaller metal squares. Solving this will show you that the charge tends to pile up in the corners.
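The thin-plate example can be sketched numerically. This is a minimal moment-method sketch, not Harrington's full formulation: units are chosen so that 1/(4πε₀) = 1, distant patches are approximated as point charges, and the self term uses the standard closed form for the potential at the center of a uniformly charged square (4h·ln(1+√2) per unit charge density, where h is the patch side).

```python
import numpy as np

# Method-of-moments sketch for a unit square plate held at constant potential,
# in units where 1/(4*pi*eps0) = 1.
N = 8                      # plate divided into an N x N grid of square patches
side = 1.0                 # plate side length
h = side / N               # patch side
area = h * h

# Patch centres
xs = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(xs, xs)
cx, cy = X.ravel(), Y.ravel()

M = N * N
L = np.empty((M, M))
for m in range(M):
    r = np.hypot(cx - cx[m], cy - cy[m])
    r[m] = 1.0                  # placeholder; the self term is overwritten below
    L[m, :] = area / r          # distant patches treated as point charges
    # Self term: potential at the centre of a uniformly charged square patch
    L[m, m] = 4.0 * h * np.log(1.0 + np.sqrt(2.0))

V = np.ones(M)                  # plate held at unit potential everywhere
sigma = np.linalg.solve(L, V).reshape(N, N)

# The charge density is largest in the corner patches, as the post describes.
print(sigma[0, 0] / sigma[N // 2, N // 2])
```

With only an 8×8 grid the numbers are coarse, but the qualitative result (corner-to-centre density ratio well above 1) is already visible; refining the grid sharpens it.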
Illustration 17.7: Group and Phase Velocity

So what do we mean by the velocity of a wave? This may seem like a simple question. When we talk about a wave on a string (or a sound wave) we can talk about the velocity as v = λf. We can rewrite this expression in terms of the wave's wave number, k, and angular frequency, ω, given that λ = 2π/k and that f = ω/2π. We therefore find that v = ω/k. We note here that the velocity of the wave is also fundamentally related to the medium in which the wave propagates.

But what happens when we want to add several traveling waves together? In this case we are interested in several waves traveling in the same direction. We can change the wave number and angular frequency for each wave, but we must ensure that the wave speeds are identical. In this animation we add the red wave to the green wave to form the resulting blue wave (position is given in meters and time is given in seconds).

Consider what happens when we change k_1 to 8 rad/m and ω_1 to 8 rad/s. Note the interesting pattern that develops in the superposition. Notice that there is an overall wave pattern that modulates a finer-detailed wave pattern. The overall wave pattern is defined by the propagation of a wave envelope with what is called the group velocity. The wave envelope has a wave inside it that has a much shorter wavelength that propagates at what is called the phase velocity. For these values of k and ω, the phase and group velocities are the same.

Now consider k_1 = 8 rad/m and ω_1 = 8.4 rad/s. What happens to the wave envelope now? It does not move! This is reflected in the calculation of the group velocity. The finer-detailed wave has a phase velocity of 1.02 m/s.

Now consider k_1 = 8 rad/m and ω_1 = 8.2 rad/s. The group velocity is now about half that of the phase velocity (certain water waves have this property). Now consider k_1 = 8 rad/m and ω_1 = 7.6 rad/s. The group velocity is now about twice that of the phase velocity.

For a superposition of two waves the group velocity is defined as v_group = Δω/Δk and the phase velocity as v_phase = ω_avg/k_avg. In general, the group velocity is defined as v_group = ∂ω/∂k and the phase velocity as v_phase = ω/k. So what velocity do we want? The physical velocity is that of the wave envelope, the group velocity. For waves on strings we got lucky: the phase and group velocities are the same (these are harmonic waves in a nondispersive medium).
Random Variable. I don't know where to start, please help me, thanks.

I assume that the two numbers are on the faces of the dice. Let $E=\{1,\dots,6\}$ and $\mathcal{E}=2^E$; then $E$ is the codomain of $X$. The definition says that $X:\Omega\to E$, $X(x,y)=\max(x,y)$ is an $(E,\mathcal{E})$-valued random variable if $X^{-1}(B)\in\mathcal{F}_2$ for all $B\in\mathcal{E}$. That is, for any subset $B$ of values that $X$ may have, the set of outcomes from $\Omega$ that produce an answer in $B$ is a legitimate event, i.e., belongs to the set of events $\mathcal{F}_2$. For this example, this fact is obvious because $\mathcal{F}_2$ includes all possible subsets of $\Omega$. In contrast, $X$ is not a random variable with respect to $(\Omega,\mathcal{F}_1,\mathbb{P})$ and $(E,\mathcal{E})$. E.g., consider $X^{-1}(\{2\})$.

No, here $X=\max$ and $\max^{-1}(\{2\})=\{(1,2),(2,1),(2,2)\}$ because the maximum of all three pairs is 2. Now, $\{(1,2),(2,1),(2,2)\}\notin\mathcal{F}_1$.

If I wanted to describe the distribution function $F_X$, I take it I do a step function?

Yes, so the cumulative distribution function $F_X$ is from {1, ..., 6} to [0, 1]; $F_X(n)=\Pr(X\le n)$. So, $F_X(1)=\Pr(X=1)=1/36$, $F_X(2)=\Pr(X=1\mbox{ or }X=2)=1/36+3/36$, $F_X(3)=\Pr(X=1\mbox{ or }X=2\mbox{ or }X=3)=1/36+3/36+5/36$, etc.

emakarov's correction: looking at the Wikipedia page, it seems that a CDF is always defined on the real numbers, even for discrete random variables. So, $F_X(x)=0$ for $x<1$, $F_X(x)=1/36$ for $1\le x<2$, $F_X(x)=1/36+3/36$ for $2\le x<3$, ..., $F_X(x)=1$ for $x\ge 6$.
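The step-function CDF discussed in the thread can be tabulated exactly by brute force over the 36 equally likely outcomes:

```python
from fractions import Fraction

# Exact CDF of X = max of two fair dice, over all 36 equally likely outcomes.
outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]

def F(n):
    """F_X(n) = Pr(max(x, y) <= n)."""
    hits = sum(1 for x, y in outcomes if max(x, y) <= n)
    return Fraction(hits, 36)

for n in range(1, 7):
    print(n, F(n))   # F(n) = n^2/36: 1/36, 4/36, 9/36, ...
```

The values match the thread's partial sums: F(2) = 1/36 + 3/36 and F(3) = 1/36 + 3/36 + 5/36, since P(X = n) = (2n-1)/36.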
Posts by Total # Posts: 130

Chip's Home Brew Whiskey management forecasts that if the firm sells each bottle of Snake-Bite for $20, then the demand for the product will be 15,000 bottles per year, whereas sales will be 91 percent as high if the price is raised 11 percent. Chip's variable cost p...

5. Capital Co. has a capital structure, based on current market values, that consists of 21 percent debt, 9 percent preferred stock, and 70 percent common stock. If the returns required by investors are 10 percent, 12 percent, and 17 percent for the debt, preferred stock, and c...

use substitution to solve this linear system: 8x + y = -458, -5x + 3y = 221. no idea. please help!

Model this situation w/ a linear system: Melissa borrowed $10,000 for her university tuition. She borrowed part of the money at an annual interest rate of 2.4% and the rest of the money at an annual rate of 4.5%. Her total annual interest payment is $250.50. a.) a + b = 10 ...

Sales clerks at an appliance store have a choice of 2 payment plans: Plan A: $580 every 2 weeks plus 4.2% commission on all sales; Plan B: $880 every 2 weeks plus 1.2% commission on all sales. a) Write a linear system to model this situation b) Graph the linear system in part a c) Us...

will it work if i write: they have the same slope and are identical. Both have the slope of 2/5?

Explain what happens when you try to solve this linear equation using an elimination strategy. What does this tell you about the graphs of these equations? -8x + 20y = -40, 24x - 60y = 120

Model this situation w/ a linear system: Melissa borrowed $10,000 for her university tuition. She borrowed part of the money at an annual interest rate of 2.4% and the rest of the money at an annual rate of 4.5%. Her total annual interest payment is $250.50. please get back asap

is the answer 3x - 2y - 17 = 0? write an equation in general form for the line that passes through A(3,-4) and B(11, 8)

Create a linear system to model this situation.
Then use substitution to solve the linear system to solve the problem. - Bobbie has been saving dimes & quarters to buy a new toy. She has a total of 28 dimes and quarters, with a value of $4.30. How many of each type of coin doe... cashew nuts sell for $21.00/kg. Brazil nuts sell for $15.00/k. A distributor sold a total of 120 kg of nuts and Brazil nuts for $2244. What mass of each type of nut was sold? algebra 2 How do you find the points of intersection, if any, of the equations: x^2 + y^2 = 5 x - y = 1 us history how was president Woodrow Wilson truly a progressive president? world history i don't get the question because at the time in which they developed, the US didn't even exist. world history the question is: Why is England and France developing into nations important to us? i need websites where i can get information on this please I figured out the first one...still struggling with the 2nd... Q1. The blood volume in a cancer patient was measured by injecting 5.0 mL of Na2SO4(aq) labeled with 35S (t1/2 = 87.4 d). The activity of the sample was 300 µCi. After 22 min, 12.9 mL of blood was withdrawn from the man and the activity of that sample was found to be 0.7... I also posted the above well before Dr. Bob had answered one of my other questions... I wasn't talking about DrBob being rude...that would be bobpursely...DrBob has been nothing but helpful while I have used this site as a resource. One last question...even though no one seems to be willing to help me out without being rude. The blood volume in a cancer patient was measured by injecting 5.0 mL of Na2SO4(aq) labeled with 35S (t1/ 2 = 87.4 d). The activity of the sample was 300 µCi. After 22 min, 12.9 ... Could you please just explain what I am doing wrong so that I can know how to approach these problems? I appreciate the advice and am well aware I need to go to bed...but I have to get these finished and my professor is out of town attending a memorial service and unable to an... 
Sleep is a bit of a luxury.....unfortunately I'm one of those students that doesn't have an option of taking a lighter load or working less....luckily I graduate in a few weeks. In the meantime I need to finish these problems so that I can go to bed....but they are due... Another attempt I made gave me relatively "reasonable" answers...but still doing something wrong. I converted the minutes to days and used that in the rate to find k via k= rate/N ; N being 3.25E18 atoms. this calculated k to be .00145. I plugged this into N'=Ne^... Oh and when I try to put this into my calculator I get 0... sorry if I am not catching on fast enough for some....I worked graveyard last night and Haven't slept in over 35 hours... A radioactive sample contains 3.25 1018 atoms of a nuclide that decays at a rate of 3.4 1013 disintegrations per 26 min. (a) What percentage of the nuclide will have decayed after 159 d? % (b) How many atoms of the nuclide will remain in the sample? atoms (c) What is the half-... A sample of pure cobalt-60 has an activity of 8 µCi. (t1/2 cobalt-60 = 5.26 a) (a) How many atoms of cobalt-60 are present in the sample? atoms (b) What is the mass in grams of the sample? g The way I approached part a was to first convert the microCi into Bq (for a valu... A certain Geiger counter is known to respond to only 1 of every 1000 radiation events from a sample. Calculate the activity of each radioactive source in curies, given the following data. 580 clicks in 104 s My text barely attempts to explain how to do this problem, and there ... A certain Geiger counter is known to respond to only 1 of every 1000 radiation events from a sample. Calculate the activity of each radioactive source in curies, given the following data. My text barely attempts to explain how to do this problem, and there is no example. I am ... no problem :) Think of the equations you have learned and which variables you have been given. 
It helps to keep a list of them in front of you so you can quickly figure it out. Then just plug in. If you have P, V, mass, and T....you're going to use PV=nRT...just solve for P PV=nRT is an... Nevermind...I realized that the problem I was using as an example called for 1 g of the reactant used....this didn't....so I have my correct answer now. Thanks! Still stumped on my earlier question How much energy is emitted in each α decay of plutonium-234? (234Pu, 234.0433mu; 230U, 230.0339mu; 4He = 4.0026 mu). I tried finding the difference in mass first: (230.0339 + 4.0026) - (234.0433) = -.0068 mu I then converted from amu to kg by multiplying by 1.6605E-27. Th... Hmm...I don't know. My webassign is telling me it's not a neutron. Thanks Oh i just realized how you typed the equation has the product flipped. so it's 1,1,H + 1,1P --> 2,1H + ? order is A,Z,X where A is mass and X is the particle or element neutron was my first answer...i am using +1 e^0 to be a positron though, not electron. I know they need to balance out on both sides. I am just confused because I am pretty sure it should be 0,+1, e ...e being X. Webassign is telling me this is wrong but if that's true I don't understand why and need that explained.... for A, Z, and X I'm typing it in the form A,Z,X (or X as a particle) since I obviously can't do sub or superscript... 1,1,H + 1,1,p --> 2.1,H + ? I thought initially the missing particle at the end would be either 0,-1,e or 0,1,e (beta emission/decay) but apparently... How can you tell if [ MnCl6]4- or [ Mn(CN)6 ] 4- has the longer wavelength of light? and then how do you figure out how many unpaired electrons each has? Is Zn+2 diamagnetic or paramagnetic? I am a little confused on how to figure this out and I'm finding inconsistent answers when I tried to look this up. Thanks! How can you tell if [ MnCl6]4- or [ Mn(CN)6 ] 4- has the longer wavelength of light? and then how do you figure out how many unpaired electrons each has? 
Trying to name this compound and figure out its oxidation state [ CoBr2(NH3)3(H2O) ]+ I came up with triammineaquadibromocobalt(II) ion with a +3 oxidation state...but not sure if this is right... similarily for [Fe(ox)(Cl)4]3- I came up with tetrachlorooxalatoferrate(IV) ion ... Which one of the following combinations of reagents will result in a spontaneous reaction? A) Al(s) + CuCl2(aq) B) Al(s) + CaCl2(aq) C) Zn(s) + CaCl2(aq) D) Cu(s) + ZnCl2(aq) I know the answer is A, but what I don't know is why, or how to figure this out. Thanks! You know it's second order so your rate=k[NO2]^2 All you do is like you said, find moles, use that to find molarity (convert mL to L) and then plug the concentration into the equation. You're given the k. Same thing with part B, just changing the concentration :) All you have to do is find the concentration of OH- and solve for pH (pOH= -log [OH-] nad then 14-pOH=ph [OH-} = Ksp/[M+} NVM i was using the pKA instead of pH....i had it right afterall...guess it's jsut time for bed.... I figured out the correct new pH for each respective part, but apparently the initial pH i found for the buffer solution is incorrect. I just need help in figuring that so I can calculate the change. I initially thought the buffer pH was 7.38. Thanks! for part a I tried using the HH equation, found moles of NaOH and used that for [base] and moles Na2HPO4 as [acid]...i used pKa 7.21...that was incorrect. For part b I used a similar method...plugging in HNO3 as the [acid] and Na2HPO4 moles as [base] the other pka values for H... A buffer solution of volume 100.0 mL is 0.150 M Na2HPO4(aq) and 0.100 M KH2PO4(aq). Refer to table 1. (a) What are the pH and the pH change resulting from the addition of 80.0 mL of 0.0500 M NaOH(aq) to the buffer solution? pH pH change (include negative sign if appropriate) (... I'm confused how I would use the Ka equation in the HH equation??? I thought HH was just pH=pka + log([base]/[acid]) ... 
Calculate the pH of the solution that results from mixing the following reagents. Refer to table 1. mix 0.189 L of 0.019 M (CH3)2NH(aq) with 0.327 L of 0.011 M (CH3)2NH2Cl(aq) I tried using the Henderson-Hasselbach equation by finding the pKa but it didn't work....I am sti... Nevermind I got it...I just mixed up my variables a little. thanks! I am pretty stuck...not sure if I used the Henderson-Hasselbach the right way and I'm a little confused on how to set up the Ka expression... I am trying to use pH=pKa + log(acid/base) to see where that gets me...I'm attempting to find the [base] using the given pH and pKa of 8.69 (for HBrO)...then use the concentration to find the new The pH of 0.50 M HBrO(aq) is 4.50. Calculate the change in pH when 6.50 g of sodium hypobromite is added to 110. mL of the solution. Ignore any change in volume. Ka of HBrO is 2E-9 I found the amount of moles used for each (.0547 mol NaBrO, and .055 mol HBrO soln) I'm not ... 1x10-2 mol of MgSO4 (Ksp = 9) is added to 1 L of water. Will 1x10-5 mol of CaSO4 (Ksp = 10-6) completely dissolve in the solution? I think the answer is yes...because the Q will be less than Ksp of 10^-6?? Which of the following salts would you expect to result in the greatest... Chemistry (or Physio/Biochem) Asthma affects about seven percent of the population. During an asthma attack smooth muscle cells in the bronchi constrict, the airways become inflamed and swollen, and breathing becomes difficult. What effect might an asthma attack have on blood pH? A. pH goes down B. pH goes... 100 mL of each of the following solutions is mixed; which one of the mixed solutions is a buffer? A) 1.0 M NH3(aq) + 0.6 M KOH(aq) B) 1.0 M NH4Cl(aq) + 1.0 M KOH(aq) C) 1.0 M NH3(aq) + 0.4 M HCl(aq) D) 1.0 M NH4Cl(aq) + 0.4 M HCl(aq) E) 1.0 M NH3(aq) + 1.0 M HCl(aq) I am unsur... Which of the following systems could act as a buffer? A. HCl/Cl- B. NaOH/OH- C. HSO4-/SO4-2 D. H3O+/OH- E. 
All of the above my guess is C because A and B are too strong, but I am unsure of D. Can someone explain how to think thru this? Given the equilibrium rxn CF4(g) + 2H2O(l) ⇔ CO2(g) + 4HF(g) ΔH < 0 if pressure is increased by adding argon gas to the equilibrium, I know it doesn't decrease the amount of HF. But if a random gas is added for any equilibrium rxn, will it always cause an inc... If a weak acid is say only 5% ionized at equilibrium, then the ionization rxn would be reactant favored, correct? I am a little on the fence about understanding this if anyone could better explain why this would be? I am assuming because it is not completely ionized, the amoun... I see how there are the numbers of particles in solution, I am a little confused on how that relates to these properties. My text is pretty brief over this section. I am confused about why this is the answer to a practice problem I was given. (answer is c). Could someone explain why/why not for each answer? 2) Solution A is made from 1 L of water and 0.1 moles of Na(CH3COO), and Solution B is made from 1 L of water and 0.1 moles of Na2(SO... world history ancient greece i need to do an outline of how to organize things for my research report on the Ancient Olympics. How cna i divide up my report? Right...I am confused on the solving for x part. Before when I have used this I have always been given ka or kb in order to solve for x. This time I'm not--so I don't know how to set up my equation to find x. Once I have x I know how to do everything else Calculate the initial molarity of KNH2 and the molarities of K+, NH3, OH-, and H3O+ in an aqueous solution that contains 0.75 g of KNH2 in 0.255 L of solution. You may ignore the reaction of NH3 with water. I already found the initial molarity, but I am not sure how to find ea... Calculate the initial molarity of KNH2 and the molarities of K+, NH3, OH-, and H3O+ in an aqueous solution that contains 0.75 g of KNH2 in 0.255 L of solution. 
You may ignore the reaction of NH3 with water. I already found the initial molarity, but I am not sure how to find ea... Well I'm not actually making it in a lab...I just don't fully understand the concept of this In a .1M aq. soln of NaCN, how would you determine the major species (other than water)? And how would you determine the pH (not exact pH, I just want to know how to know if it is acidic ie. <7, etc). Thanks in advance Physical Chemistry I figured out what I was doing...I ended up using the Van't Hoff Equation and the enthalpies of formation to solve for K at 100 C. This stuff is so time consuming... Physical Chemistry Calculate the equilibrium constant at 25°C and at 100.°C for each of the following reactions, using data available in Appendix 2A. Remember that the organic molecules are in a separate section behind the organic molecules in Appendix 2A. (a) HgO(s) Hg(l) + O2(g) at 25&... Chemistry Question!! When 1.00 g of gaseous I2 is heated to 1000. K in a 1.00 L sealed container, the resulting equilibrium mixture contains 0.830 g of I2. Calculate Kc for the dissociation equilibrium below. I2 <--> 2 I What I did was converted the initial I2 into moles then molarity since... Physical Chemistry When 1.00 g of gaseous I2 is heated to 1000. K in a 1.00 L sealed container, the resulting equilibrium mixture contains 0.830 g of I2. Calculate Kc for the dissociation equilibrium below. I2 <--> 2 I What I did was converted the initial I2 into moles then molarity since ... I've heard this one before: "A meteor is a flash of light made by a falling meteorite" nevermind--after 50 hours of pulling my hair out I finally arrived at the answer :) And when solving for molality do I need to use the formula dT=Kf*molality? Or do I find it independently of this equation? I am confused by the 1% by mass part...I don't understand what it really even means or if I am multiplying something by 1% or what? 
I did eventually realize I had to multiply by 3--I have never seen a problem like that before so I was slow to catch on. Thanks for your help A 1.00% by mass MgSO4(aq) solution has a freezing point of -0.192°C. Water has a kf of 1.86 (K·kg)/mol. (a) Estimate the van't Hoff i factor from the data. (b) Determine the total molality of all solute species. I can probably figure out the second part of the q... I still need help with this problem... Calculate the concentration of the solution the molality of hydroxide ions in a solution prepared from 9.16 g of barium hydroxide dissolved in 179 g of water I do not know how to approach this...I keep doing something wrong. Physical Chemistry the answer for a was actually incorrect Physical Chemistry Got it. for A i was using molar mass for FeCl3 instead of water..oops. I solved C...but I am still confused with the second question of anyone can help me out... Thanks! Physical Chemistry Calculate the concentrations of each of the following solutions. (a) the molality of chloride ions in an aqueous solution of iron(III) chloride for which xFeCl3 is 0.0195 (b) the molality of hydroxide ions in a solution prepared from 9.16 g of barium hydroxide dissolved in 179... Physical Chemistry Just want to double check...so for KCl, because it is endothermic, my q should then be negative? Physical Chemistry Ok, thanks for your help Physical Chemistry no I didn't get that, but I was adding both 18g of the salt and 190 g of water for the m used in the equation. I'm still a little confused with the signs...I thought if it is releasing heat then q would be negative? I did manage to get the same value for q I just had i... Physical Chemistry Determine the temperature change when 18.0 g of the following salt is dissolved in 190. g of water. The specific heat capacity of water is 4.184 J·K-1g-1. The enthalpies of solution in the table are applicable. Do not take the added mass of the salt into account when ca... 
I mean x is greater than or equal to, not less than. sorry.

Solve the inequality: (2x-1)/(3x-2) < 1 [the < is actually less than or equal to]. I got one of the answers, x is less than or equal to 1, but there's another. How do you find that answer? Thanks

american lit. [[10th grade]] i need to write an essay about how holden caulfield from the catcher in the rye is fascinated with childhood and innocence. what can i include as examples for that?

english/ literature/ essay oh thank you very much! :)

english/ literature/ essay any other reasons? i had that reason well something like it...

english/ literature/ essay hi i need help with this.. what is the importance of recognizing cultural differences as a young person in today's global society?

algebra review! it says write the equation of the vertical line that passes through the point (7,-3). what do i need to start it off? find the equation for the line with undefined slope and passing through the point (-1,-2). what do i need to start that off? what equation do i use to find the...

algebra review find the y-intercept of a line that passes through (2,1) and has a slope of -5. i forgot what equation i have to use to find this out. can someone give me which equation i have to use? or all the equations that deal with slopes please

science- bio how can traits on a particular chromosome be determined? how can these traits determine the characteristics of an organism? what could happen if a base is out of order?

not sure wat subject i need two examples of situations where you would think help/charity should be given but it didnt happen as you thought

hi. i would like to know if someone can send me websites other than wikipedia that might be able to help me find out stories of how the earth and the things on earth were made... oh ok...thank you very much
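The tuition word problem appears twice in the posts above. As a quick check of the arithmetic, the system a + b = 10000 (total borrowed) and 0.024a + 0.045b = 250.50 (total annual interest) can be solved directly:

```python
import numpy as np

# Melissa's tuition loan as a linear system:
#   a + b = 10000          (amount at 2.4% plus amount at 4.5%)
#   0.024a + 0.045b = 250.50   (total annual interest)
A = np.array([[1.0, 1.0],
              [0.024, 0.045]])
rhs = np.array([10000.0, 250.50])
a, b = np.linalg.solve(A, rhs)
print(a, b)   # 9500.0 borrowed at 2.4%, 500.0 at 4.5%
```

Substitution gives the same result by hand: 0.024a + 0.045(10000 - a) = 250.50, so 450 - 0.021a = 250.50 and a = 9500.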
N Miami Beach, FL
Fort Lauderdale, FL 33311
Caring, Fantastic Teacher who specializes in Reading, Writing and Math
...have spent my whole life helping people. Presently, I am a full-time GED teacher who has helped students improve their writing, reading and math skills. I have over five years of professional experience, and I have been helping people learn my whole life. I am experienced...
Offering 10+ subjects including prealgebra
number of class

September 20th 2012, 08:29 AM, #1

In one sophomore class, 1/3 of the students are honor students and 2/7 are varsity athletes. If there are four athletes in the class, how many students are there in the class? Can anyone help with this?

Re: number of class

Your question is not clear.

If 2/7 of the HONOR STUDENTS are athletes, then if S is the number of students:
(1/3)(2/7)S = 4
2S/21 = 4
2S = 84
S = 42

If 2/7 of ALL the students are athletes, then 2S/7 = 4, and S = 14. Quite frankly, this makes no sense, as the information about the honor students is then irrelevant and/or contradictory, since 14 isn't divisible by 3.

If "varsity athlete" differs from "athlete" (regardless of whether it's 2/7 of the honor students, or 2/7 of all students, that are VARSITY athletes), there is not enough information to solve the problem.

Precision in asking questions MATTERS (or else you get an answer to a DIFFERENT question, perhaps, than the one you wanted answered).

EDIT: please post in the proper section. Your question, no matter how difficult it may seem to you, is not "undergraduate university algebra" (linear algebra, group theory, ring theory, Galois theory, etc.) and belongs in the pre-university section (even if it is for a college course). If you have doubts about where to post, message a moderator; it's part of their job to assist you with such things.

Last edited by Deveno; September 21st 2012 at 02:26 AM.
September 21st 2012, 02:20 AM, #2
MHF Contributor, Mar 2011
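The first reading of the problem (2/7 of the honor students are athletes) can be checked with exact rational arithmetic, along with the contradiction in the second reading:

```python
# Check both readings of the word problem with exact fractions.
from fractions import Fraction

athletes = 4

# Reading 1: 2/7 of the HONOR students are athletes: (1/3)(2/7)S = 4
S = athletes / (Fraction(1, 3) * Fraction(2, 7))   # 4 * 21/2 = 42
assert S == 42

# Consistency: the honor-student count is then a whole number too
honor_students = Fraction(1, 3) * S
assert honor_students == 14

# Reading 2: 2/7 of ALL students are athletes gives S = 14,
# but 14 is not divisible by 3, so "1/3 are honor students" fails.
S_alt = athletes / Fraction(2, 7)
assert S_alt == 14 and S_alt % 3 != 0
print(S)
```
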
DSpace at IIT Bombay: Hilbert polynomials and powers of ideals

Please use this identifier to cite or link to this item: http://dspace.library.iitb.ac.in/jspui/handle/10054/5262

Title: Hilbert polynomials and powers of ideals
Authors: HERZOG, J; PUTHENPURAKAL, TJ; VERMA, JK
Keywords: Castelnuovo-Mumford regularity; symbolic blow-ups; asymptotic behavior; bigraded algebras
Issue Date: 2008
Publisher: CAMBRIDGE UNIV PRESS
Citation: MATHEMATICAL PROCEEDINGS OF THE CAMBRIDGE PHILOSOPHICAL SOCIETY, 145, 623-642

Abstract: The growth of Hilbert coefficients for powers of ideals is studied. For a graded ideal I in the polynomial ring S = K[x_1, ..., x_n] and a finitely generated graded S-module M, the Hilbert coefficients e_i(M/I^k M) are polynomial functions of k. Given two families of graded ideals (I_k)_{k>=0} and (J_k)_{k>=0} with J_k ⊆ I_k for all k, with the property that J_k J_l ⊆ J_{k+l} and I_k I_l ⊆ I_{k+l} for all k and l, and such that the algebras A = ⊕_{k>=0} J_k and B = ⊕_{k>=0} I_k are finitely generated, we show that the function k ↦ e_0(I_k/J_k) is of quasi-polynomial type, say given by the polynomials P_0, ..., P_{g-1}. If J_k = J^k for all k, for a graded ideal J, then we show that all the P_i have the same degree and the same leading coefficient. As one of the applications it is shown that lim_{k→∞} ℓ(Γ_m(S/I^k))/k^n ∈ ℚ if I is a monomial ideal. We also study analogous statements in the local case.

ISSN: 0305-0041
Appears in: Article
Files in This Item: There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
Thinning measurement models and questionnaire design
Ricardo Silva
In: Neural Information Processing Systems 2011, 12-15 December 2011, Granada, Spain.

Inferring key unobservable features of individuals is an important task in the applied sciences. In particular, an important source of data in fields such as marketing, the social sciences and medicine is questionnaires: answers in such questionnaires are noisy measures of target unobserved features. While comprehensive surveys help to better estimate the latent variables of interest, aiming at a high number of questions comes at a price: refusal to participate in surveys can go up, as can the rate of missing data; the quality of answers can decline; and the costs associated with administering such questionnaires can increase. In this paper, we cast the problem of refining existing models for questionnaire data as follows: solve a constrained optimization problem of preserving the maximum amount of information found in a latent variable model using only a subset of existing questions. The goal is to find an optimal subset of a given size. To that end, we first define an information-theoretical measure for quantifying the quality of a reduced questionnaire. Three different approximate inference methods are introduced to solve this problem. Comparisons against a simple but powerful heuristic are presented.
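The subset-selection idea can be illustrated with a toy greedy heuristic. This is not the paper's algorithm or its information measure; the stand-in objective below is the log-determinant of the answers' covariance submatrix (a D-optimality-style proxy), and the covariance matrix is invented for illustration:

```python
# Toy stand-in for question-subset selection (NOT the paper's method):
# greedily pick a fixed-size subset of questions maximizing the
# log-determinant of their covariance submatrix.
import math

def det(m):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def log_det_sub(cov, idx):
    sub = [[cov[i][j] for j in idx] for i in idx]
    return math.log(det(sub))

def greedy_select(cov, k):
    chosen = []
    while len(chosen) < k:
        rest = [i for i in range(len(cov)) if i not in chosen]
        best = max(rest, key=lambda i: log_det_sub(cov, chosen + [i]))
        chosen.append(best)
    return chosen

# Hypothetical covariance of answers to 3 questions
cov = [[3.0, 0.1, 0.1],
       [0.1, 2.0, 0.1],
       [0.1, 0.1, 1.0]]
print(greedy_select(cov, 2))   # -> [0, 1], the two most informative questions
```
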
Results 1 - 10 of 16

1. In Schichtenberg and Steinbruggen [16], 2001. Cited by 47 (3 self).
Introduction. SAMSON ABRAMSKY (samson@comlab.ox.ac.uk), Oxford University Computing Laboratory. Game Semantics has emerged as a powerful paradigm for giving semantics to a variety of programming languages and logical systems. It has been used to construct the first syntax-independent fully abstract models for a spectrum of programming languages ranging from purely functional languages to languages with non-functional features such as control operators and locally-scoped references [4, 21, 5, 19, 2, 22, 17, 11]. A substantial survey of the state of the art of Game Semantics circa 1997 was given in a previous Marktoberdorf volume [6]. Our aim in this tutorial presentation is to give a first indication of how Game Semantics can be developed in a new, algorithmic direction, with a view to applications in computer-assisted verification and program analysis. Some promising steps have already been taken in this...

2. In ICAIL, 1999. Cited by 46 (0 self).
We provide a formalism for the study of dialogues, where a dialogue is a two-person game, initiated by the proponent who defends a proposed thesis. We examine several different winning criteria and several different dialogue types, where a dialogue type is determined by a set of positions, an attack relation between positions and a legal-move function. We examine two proof theories, where a proof theory is determined by a dialogue type and a winning criterion. For each of the proof theories we supply a corresponding declarative semantics. 1 Introduction. Artificial intelligence has long dealt with the challenge of modeling argumentation ([Tou84], [Fel84], [Vre97]). Abstract argumentation and formal dialectics have been developed in noteworthy works such as [Dun95], [Vre93], [KT96], [PS96], [PS97], [Ver96] and [Lou98a]. These fields are useful for the purpose of decision-making and discussion among intelligent agents, such as in [Ree97] and [PJ98]. In addition, they are important in the...

3. Thesis, 1998. Cited by 32 (5 self).
This thesis examines the use of denotational semantics to reason about control flow in sequential, basically functional languages. It extends recent work in game semantics, in which programs are interpreted as strategies for computation by interaction with an environment. Abramsky has suggested that an intensional hierarchy of computational features such as state, and their fully abstract models, can be captured as violations of the constraints on strategies in the basic functional model. Non-local control flow is shown to fit into this framework as the violation of strong and weak 'bracketing' conditions, related to linear behaviour. The language µPCF (Parigot's λµ with constants and recursion) is adopted as a simple basis for higher-type, sequential computation with access to the flow of control. A simple operational semantics for both call-by-name and call-by-value evaluation is described. It is shown that dropping the bracketing condition on games models of PCF yields fully abstract models of µPCF.

4. In Proceedings of the 33rd IEEE International Symposium on Multiple-Valued Logic, 2003. Cited by 8 (4 self).
Building on a version of Lorenzen's dialogue foundation for intuitionistic logic, we show that a suitable game of communicating parallel dialogues is sound and complete for Gödel-Dummett logic G. Among other things, this provides a computational interpretation of Avron's hypersequent calculus for G.

5. Bulletin of Symbolic Logic, 2001. Cited by 5 (1 self).
We recall some of the early occurrences of the notions of interactivity and symmetry in the operational and denotational semantics of programming languages. We suggest some connections with ludics.

6. Bulletin of Symbolic Logic, 1997. Cited by 5 (1 self).
A new games model of the language FPC, a type theory with products, sums, function spaces and recursive types, is described. A definability result is proved, showing that every finite element of the model is the interpretation of some term of the language. 1. Introduction. The work of Lorenzen [24, 23] proposed dialogue games as a foundation for intuitionistic logic. The idea is simple: associated to a formula A is a set of moves for two players, each of which is either an attack on A (an attempt to refute its validity) or a defence. The players, O who wants to refute A and P who wants to prove A, take turns to make moves according to some rules. The rules determine which player has won when play ends, and the formula A is semantically valid if there is a strategy by which P can always win: a winning strategy. More recently, games of this kind have been applied in computer science to give programming languages a new kind of semantics with a strong intensional flavour. The game in...

7. Proceedings of TLCA 97, LNCS 1210, 1997. Cited by 3 (0 self).
We present a game model for classical PCF, a finite version of PCF extended by a catch/throw mechanism. This model is built from E-dialogues, a kind of two-player game defined by Lorenzen. In the E-dialogues for classical PCF, the strategies of the first player are isomorphic to the Böhm trees of the language. We define an interaction in E-dialogues and show that it models the weak-head reduction in classical PCF. The interaction is a variant of Coquand's debate and the weak-head reduction is a variant of the reduction in Krivine's Abstract Machine. We then extend E-dialogues to a kind of games similar to Hyland-Ong's games. Interaction in these games also models weak-head reduction. In the intuitionistic case (i.e. without the catch/throw mechanism), the extended E-dialogues are Hyland-Ong's games where the innocence condition on strategies is now a rule. Our model for classical PCF is different from Ong's model of Parigot's lambda-mu-calculus. His model works by adding new moves t...

8. GaLoP 2005: Games for Logic and Programming Languages, 2005. Cited by 3 (0 self).
Abstract. Theorem proving, or algorithmic proof-search, is an essential enabling technology throughout the computational sciences. We explain the mathematical basis of proof-search as the combination of reductive logic together with a control régime. Then we present a games semantics for reductive logic and show how it may be used to model two important examples of control, namely backtracking and uniform proof.

9. Proceedings of TABLEAUX 2003, Automated Reasoning with Analytic Tableaux and Related Methods, 2003. Cited by 3 (2 self).
Abstract. A parallel version of Lorenzen's dialogue-theoretic foundation for intuitionistic logic is shown to be adequate for a number of important intermediate logics. The soundness and completeness proofs proceed by relating hypersequent derivations to winning strategies for parallel dialogue games. This also provides a computational interpretation of hypersequents.

10. Cited by 2 (2 self).
After informally reviewing the main concepts from game semantics and placing the development of the field in a historical context, we examine its main applications. We focus in particular on finite-state model checking, higher-order model checking and more recent developments in hardware design. 1. Chronology, methodology, ideology. Game Semantics is a denotational semantics in the conventional sense: for any term, it assigns a certain mathematical object as its meaning, which is constructed compositionally from the meanings of its sub-terms in a way that is independent of the operational semantics of the object language. What makes Game Semantics particular, peculiar maybe, is that the mathematical objects it operates with
Ludwig Josef Johann Wittgenstein, 1889 - 1951

Ludwig Wittgenstein was a philosopher who worked on the foundations of mathematics and on mathematical logic.

Full MacTutor biography [Version for printing]
List of References (16 books/articles)
Additional Material in MacTutor: Some Quotations (23); Obituary: The Times; A Poster of Ludwig Wittgenstein
Other Web sites: Encyclopaedia Britannica; Stanford Encyclopedia of Philosophy; NNDB; Mathematical Genealogy Project; Internet Encyclopedia of Philosophy

JOC/EFR © October 2003
SAS-L archives -- October 2002, week 3 (#420) LISTSERV at the University of Georgia
Date: Fri, 18 Oct 2002 12:42:57 GMT
Reply-To: "Jerry W. Lewis" <post_a_reply@NO_E-MAIL.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: "Jerry W. Lewis" <post_a_reply@NO_E-MAIL.COM>
Organization: AT&T Broadband
Subject: Re: Least Squares Linear Regression
Content-Type: text/plain; charset=us-ascii; format=flowed

Using Excel for statistical analysis is NOT for the novice, because so many obvious approaches are poorly implemented. However, Excel does do some things better than SAS. For instance, a linked example gives an extremely ill-conditioned polynomial fit problem. As noted there, the chart-based polynomial trendline computes all coefficients correctly to 9 figures, which is far better than I know how to do in SAS 8.2. Please note that this is a discussion of numerical capabilities, not the wisdom of fitting a high-order polynomial to a limited number of data points over a narrow range.

David L. Cassell wrote:
> First, *never* use Excel to do statistical analysis, unless
> you can afford to get the occasional drastically wrong answer.
> [If you are commanded to do so for a homework assignment, then
> the failure of Excel clearly won't be counted against you.]
> If you're asking in a SAS newsgroup/list, you should be using
> SAS to make sure that ill-conditioned data don't cause your
> analysis software to fail in embarrassing ways.
> Second, your so-called 'Multiple R' is really just the square
> root of your typical R-squared value from the regression.
> That's all. You can get the formula for R-squared from any
> intro textbook.
> Go Anteaters!
> David
> --
> David Cassell, CSC
> Cassell.David@epa.gov
> Senior computing specialist
> mathematical statistician
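The point about "Multiple R" is easy to verify numerically: for a single-predictor regression it coincides with sqrt(R²), i.e. with |Pearson r|. A quick check on made-up data:

```python
# For simple (one-predictor) least squares, 'Multiple R' = sqrt(R^2) = |r|.
# The x/y values below are invented toy data.
import math

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

slope = sxy / sxx
intercept = my - slope * mx
sse = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
r_squared = 1 - sse / syy              # coefficient of determination
multiple_r = math.sqrt(r_squared)      # the reported 'Multiple R'
pearson_r = sxy / math.sqrt(sxx * syy)

assert abs(multiple_r - abs(pearson_r)) < 1e-12
print(round(multiple_r, 4))
```
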
twin paradox I haven't been able to decrypt what you're saying here. What two events? I assume that one is (20,16) in A's frame, but what's the other one? Also, a world line is a curve that represents the motion of an object. You want to imagine something moving from (20,16) to some other event? Why? The two events are "A" and "B" at the time when "B" turns around. Since there is no separation speed there should be no simultaneity disagreement, both should (with the application of SR calculations) agree that these events are (t=20,x=0), "A"'s position according to "A" and (12,0), "B"'s position according to "B". (Couplets are: (20,0), (20,16) and either (12,-9.6), (12,0) or (12,0),(12,9.6) depending on "B"'s choice of origin since both choices have some merit even if the former is standard.) The something moving from one event to another event is the information which I have discussed previously, a signal. A signal moving from "A" to "B". My contention is that even if "B" undergoes a change of frame, the calculations which "B" uses should not be used in such a way to indicate that this signal sent by "A" at (20,0) was simultaneous with any event at "B" earlier than (12,0). That is a consequence of the implication in your diagram ( ) that according to "B", "A" suddenly ages 25.6 years. With the information that "B" has to hand, there is no need to make such a ridiculous claim - even if it may be standard simultaneity fare. You're much too focused on those signals. I don't see how they are relevant at all. You seem to think that they somehow forbid us from using the inertial frame associated with B's return trip, but you haven't given us a reason for that. The signals are an attempt to get you to understand that it is unreasonable and unnecessary to state that "A" suddenly ages 25.6 years. The signals are also representative of the information flow from "A" to "B". 
There is real information about "A" which is accessible to "B", but it is speed limited so "B" will never get it instantaneously. The best "B" can do is use the information received to make projections which are valid for the prevailing frame. The way you talk about these things is pretty strange to me. How is it not a characteristic of the real universe that two different global coordinate systems disagree about stuff? And what "discontinuity" are you talking about? What function is supposed to be discontinous? All we have here are (at least) two inertial frames that describe things differently. This sudden ageing of 25.6 years is the discontinuity that I am referring to. The 25.6 years is based on realigning the frames with the end result, so that "A" is a nice 40 years old when "B" gets there. However, it is not real. The clock I discussed with Matheinste won't suddenly scroll forward from 7.2 to 32.8 years. And here is why not ... the 32.8 year figure is based on "A" not moving at all during the 20 years. That means that the clock would have to somehow predict the future. This is totally separate from the issue that the calculation behind the 32.8 years is based on a combination of situations, the bastard son of two frames, and that the calculation totally ignores how information flows in the universe. "B" should, at the turnaround, make a projection that "A" has aged a total of 20 years. Not 32.8 years. PS Perhaps you might like to create a chart which maps the "A" events which are, according to "B", simultaneous with "B" events. Make all the events ageing events, ie '"B" has aged x days, this is simultaneous, according to "B" with "A" having aged y days' and plot y against x. In my version, there will be a straight line (with a little bump in the middle if I am going to be pedantic), since "B" effectively maintains the same speed the whole time (0.8c) and I will not be ignoring the information that "B" receives. 
In your version, there will be three straight lines: (0,0) to (12,7.2), (12,7.2) to (12,32.8) and (12,32.8) to (24,40). Which sounds more representative of a real-world situation? PPS phyti has approximately the right sort of diagram. His figures are for a shorter trip and show the situation in a different way, but at each end of phyti's diagonal lines are the simultaneous events which I suggest you chart, Fredrik.
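For reference, the numbers being debated, computed under the standard Lorentz simultaneity convention (the very convention this poster objects to), with v = 0.8c and a turnaround after 20 years of A's coordinate time:

```python
# The disputed twin-paradox numbers from the standard Lorentz analysis.
import math

v = 0.8                                       # speed as a fraction of c
gamma = 1 / math.sqrt(1 - v * v)              # 5/3
b_age_at_turnaround = 20 / gamma              # 12 years of B's proper time
a_age_total = 40                              # two 20-year legs for A

# A's age simultaneous with the turnaround, in B's outbound frame:
a_age_outbound = b_age_at_turnaround / gamma  # 7.2 years
# ...and in B's inbound frame (by symmetry of the two legs):
a_age_inbound = a_age_total - a_age_outbound  # 32.8 years

jump = a_age_inbound - a_age_outbound         # the disputed 25.6 years
print(a_age_outbound, a_age_inbound, jump)
```
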
Wolfram Demonstrations Project Astronomical Clock This Demonstration illustrates the working of an astronomical clock. To visualize the movement of the Sun and Moon on the celestial sphere, astronomical clocks use stereographic projection. The Earth's projection is at the center of the clock, surrounded by the concentric circles of the Equator and the tropics. The blue ring is the projection of the ecliptic with the Sun and the Moon. The ecliptic ring is divided into sectors defining the four seasons and the 12 months of the year. The red circle is the projection of the horizon dividing the celestial sphere into day and night. Change the date to show the position of the Sun and the position and phase of the Moon for that day. Change the time to move the ecliptic to its corresponding position around the Earth. Setting the observer's latitude moves the horizon and defines the times when the Sun and Moon will rise or set. A detailed explanation of the astronomical clock in Prague, which inspired this Demonstration, can be found in [1]. The dial of the astronomical clock features several circles that are the stereographic projections of circles on the celestial sphere and of the supposed circular orbits of the Sun and Moon around the Earth. The Earth is supposed to be at and the projection plane is . Since the stereographic projection of a circle on the sphere is a circle in the plane, it is sufficient to calculate the centers and radii of the projected circles. The projection of the Earth is at the center of the clock. The three concentric circles around it are the projections of the Equator and the tropics. The largest circle just under the Roman numerals is the Tropic of Cancer. The dashed smallest circle is the Tropic of Capricorn and the middle circle is the Equator. These circles are fixed and stay fixed on the dial. The red circles are the projection of the almucantars, that is, the lines of fixed solar altitude on the celestial sphere. 
The fat circle of altitude zero is the horizon. This circle divides the celestial sphere into day and night. The smallest tiny circle is the zenith and represents altitude 90°. The other almucantars are at 9° altitude intervals. The dark disk and circle at altitude define the area of complete darkness or astronomical night. The almucantars are only dependent on the latitude of the observer. The projection of the ecliptic is represented by a ring of two concentric blue circles. This ring revolves around the center of the clock, showing the Earth's rotation around its axis. Its eccentricity to the center of the clock is the result of the Earth's axial tilt of 23.45°. Four spokes divide the ecliptic ring into four unequal parts representing the four seasons. The shortest spoke cuts the ecliptic at the winter solstice, the longest at the summer solstice. The two other spokes of equal length cut the ecliptic at the equinoxes. The ring is further divided into 12 sectors marking the months of the year. The Sun travels counterclockwise along the ecliptic ring during one year. While crossing the horizon it will rise and set in the East or West depending on the observer's hemisphere. East is to the left of the clock, West is to the right. South is at the top of the clock and North is at the bottom. The Moon travels along the ecliptic ring at a higher speed and goes to different phases depending on its position in relation to the Sun. The calibration for the Moon phases through the end of 2015 was made by setting a full Moon on April 6, 2012 and a new Moon on December 11, 2015 (see Related Links) and using linear interpolation in between. This corresponds to an average of approximately 12.36 lunations per year. The Prague clock uses gears with 379 and 366 teeth and hence gets lunations. Apparent solar time can be read by the Sun hand on the 2×12 hour Roman numeral dial. The division between am and pm hours depends on the observer's hemisphere. 
For this Demonstration, solar time is computed from the local (clock) time and depends on the observer's longitude, time zone, and the use of daylight saving time. The approximate date, or at least the month, can be read by estimating the sector of the ecliptic ring covered by the Sun hand. Snapshot 1: at the equator with 12 hours of daylight year round Snapshot 2: at the North pole in December and no daylight all day long Snapshot 3: Anchorage on December 21 with only a small time of daylight around solar noon Snapshot 4: Auckland, New Zealand in June with short days and long nights [1] V. Sedláček. "The Astronomical Clock in Prague (A Detailed, Illustrated Description)." Agentura ProVás . (Aug 2010)
The Global Weak Solution for a Generalized Camassa-Holm Equation

Abstract and Applied Analysis, Volume 2013 (2013), Article ID 838302, 6 pages

Research Article

Department of Applied Mathematics, Southwestern University of Finance and Economics, Chengdu, Sichuan 610074, China

Received 25 October 2012; Accepted 24 December 2012

Academic Editor: Yong Hong Wu

Copyright © 2013 Shaoyong Lai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract: A nonlinear generalization of the famous Camassa-Holm model is investigated. Provided that initial value and satisfies an associated sign condition, it is shown that there exists a unique global weak solution to the equation in space in the sense of distribution, and .

1. Introduction

In recent years, much work has been carried out to investigate the Camassa-Holm equation [1], which is a completely integrable equation. In fact, the Camassa-Holm equation arises as a model describing the unidirectional propagation of shallow water waves over a flat bottom [1-3]. The equation was originally derived much earlier as a bi-Hamiltonian generalization of the Korteweg-de Vries equation (see [4]). Johnson [2] and Constantin and Lannes [5] derived models which include the Camassa-Holm equation (1). It has been found that (1) conforms with many conservation laws (see [6, 7]) and possesses smooth solitary wave solutions if [3, 8] or peakons if [3, 9]. Equation (1) is also regarded as a model of the geodesic flow for the right-invariant metric on the Bott-Virasoro group if and on the diffeomorphism group if (see [10-14]). The well-posedness of local strong solutions for generalized forms of (1) has been given in [15-17].
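The peakons mentioned above can be illustrated numerically (this check is not part of the paper): the peaked traveling wave u(x,t) = c·e^{-|x-ct|} of the classical Camassa-Holm equation simply translates, so its H¹-type energy ∫(u² + u_x²) dx = 2c² is the same at every time. A crude quadrature confirms this:

```python
# Numerical check that the peakon u(x,t) = c*exp(-|x-ct|) carries a
# constant H^1-type energy E = int(u^2 + u_x^2) dx = 2c^2.
import math

def peakon(x, t, c=1.5):
    return c * math.exp(-abs(x - c * t))

def energy(t, c=1.5, lo=-40.0, hi=40.0, n=100_000):
    """Trapezoidal quadrature of u^2 + u_x^2 over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        u = peakon(x, t, c)
        ux = u if x < c * t else -u       # d/dx of c*e^{-|x-ct|} (a.e.)
        w = 0.5 if k in (0, n) else 1.0
        total += w * (u * u + ux * ux) * h
    return total

e0, e1 = energy(0.0), energy(2.0)
print(round(e0, 3), round(e1, 3))          # both close to 2*c^2 = 4.5
assert abs(e0 - 4.5) < 1e-2 and abs(e0 - e1) < 1e-4
```
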
The sharpest results for the global existence and blow-up solutions are found in Bressan and Constantin [18, 19]. Recently, Li et al. [20] studied the following generalized Camassa-Holm equation: where is a natural number. Obviously, (2) reduces to (1) if . The authors applied the pseudoparabolic regularization technique to build the local well-posedness for (2) in Sobolev space with via a limiting procedure. Provided that the initial value satisfies a sign condition and , it is shown that there exists a unique global strong solution for (2) in space . However, the existence and uniqueness of the global weak solution for (2) is not investigated in [20]. The objective of this paper is to establish the well-posedness of global weak solutions for (2). Using the estimates in with , which are derived from the equation itself, we prove that there exists a unique global weak solution to (2) in space with if , and satisfies an associated sign condition. The structure of this paper is as follows. The main result is given in Section 2. Several lemmas are given in Section 3. Section 4 establishes the proof of the main result. 2. Main Results Firstly, we give some notations. The space of all infinitely differentiable functions with compact support in is denoted by . is the space of all measurable functions such that . We define with the standard norm . For any real number , we let denote the Sobolev space with the norm defined by where . For and nonnegative number , let denote the Frechet space of all continuous -valued functions on . We set . Defining and letting with and (convolution of and ), we know that for any with . Notation (or equivalently ) means that (or equivalently ) for an arbitrary sufficiently small . For the equivalent form of (2), we consider its Cauchy problem Definition 1. A function is called a global weak solution to problem (5) if for every , , and all , it holds that with . Now, we give the main result of this work. Theorem 2. 
Let , , , and (or equivalently , ). Then, problem (5) has a unique global weak solution in the sense of distribution, and . 3. Several Lemmas Lemma 3 (see [20]). Let with . Then, the Cauchy problem (5) has a unique solution where depends on . Lemma 4 (see [20]). Let , , and (or equivalently , . Then, problem (5) has a unique solution satisfying Using the first equation of system (5) derives from which one has the conservation law Lemma 5 (see [20]). Let , and the function is a solution of problem (5) and the initial data . Then, the following inequality holds: For , there is a constant such that For , there is a constant such that For (2), consider the problem Lemma 6 (see [20]). Let , , and let be the maximal existence time of the solution to problem (5). Then, problem (14) has a unique solution . Moreover, the map is an increasing diffeomorphism of with for . Differentiating (14) with respect to yields which leads to The next lemma is reminiscent of a strong invariance property of the Camassa-Holm equation (the conservation of momentum [21]). Lemma 7 (see [20]). Let with , and let be the maximal existence time of the problem (5). It holds that where and . Lemma 8. If , , such that , (or equivalently, ), then the solution of problem (5) satisfies Proof. Using , it follows from Lemma 7 that . Letting , we have from which we obtain On the other hand, we have The inequalities (19), (20), and (21) derive that inequality (18) is valid. Similarly, if , , we still know that (18) is valid. Lemma 9. For , , it holds that where is a constant independent of . The proof of this lemma can be found in Lai and Wu [15]. From Lemma 3, it derives that the Cauchy problem has a unique solution depending on the parameter . We write to represent the solution of problem (23). Using Lemma 3 derives that since . Lemma 10. Provided that , , , and (or equivalently , ), then there exists a constant independent of such that the solution of problem (23) satisfies Proof. 
Using identity (10) and Lemma 9, if with , we have where is independent of . From Lemma 8, we have which completes the proof. Lemma 11. For any , with , it holds that The proof of this lemma can be found in [15].
4. Existence and Uniqueness of Global Weak Solution
Provided that , for problem (23), applying Lemmas 5, 9, and 10, and Gronwall's inequality, we obtain the inequalities where , and is a constant independent of . It follows from Aubin's compactness theorem that there is a subsequence of , denoted by , such that and their temporal derivatives are weakly convergent to a function and its derivative in and , respectively, where is an arbitrary fixed positive number. Moreover, for any real number , is convergent to the function strongly in the space for and converges to strongly in the space for .
4.1. The Proof of Existence for Global Weak Solution
For an arbitrary fixed , from Lemma 10, we know that is bounded in the space . Thus, the sequences , , , and are weakly convergent to , , , and in for any , separately. Using , we know that satisfies the equation with and . Since is a separable Banach space and is a bounded sequence in the dual space of , there exists a subsequence of , still denoted by , weakly star convergent to a function in . As weakly converges to in , it results that almost everywhere. Thus, we obtain . Since is an arbitrary number, we complete the proof of global existence of weak solutions to problem (5).
Proof of Uniqueness. Suppose that there exist two global weak solutions to problem (5) with the same initial value. We consider the associated regularized problem (23). Letting , from Lemma 10, we get bounds which are independent of . Still denoting , and , it holds that Multiplying both sides of (30) by , we get Using , , , , we have Applying Lemma 11 repeatedly, we have For , using Lemma 11 derives Using (32)–(34), we get Applying results in . Consequently, we know that the global weak solution is unique.
This work is supported by the Fundamental Research Funds for the Central Universities (JBK120504).
How much does one's individuality cost? At least 2.77544 bits of information
Forum Junior, Join Date Jul 2008. June 22nd, 2012, 10:43 AM
Imagine there is a population/database/dictionary and we would like to distinguish its elements. So for each element, let us somehow encode its individual features (e.g. using a hash function) as a bit sequence - the most dense way is to use a sequence of uncorrelated P(0)=P(1)=1/2 bits. We can now create the minimal prefix tree required to distinguish these sequences, like in the figure below. For such an ensemble of random trees with a given number of leaves, it turns out that the information content asymptotically grows at an average of 2.77544 bits per element. The calculations can be found here: [1206.4555] Optimal compression of hash-origin prefix trees. Is it the minimal cost of distinguishability/individuality? How to understand it?
ps. related question: can anyone evaluate the sum for the average depth of a node?
The first thought about this distinctiveness is probably the n! combinatorial term while increasing the size of the system, but n! is about the order, and its logarithm grows faster than linearly. Distinctiveness is something much more subtle. It is well seen in equation (10) from the paper. So this equation says: distinctiveness/individualism + order = total amount of information. Distinctiveness grows linearly with n (2.77544 asymptotic linear coefficient); information about their ordering grows faster.
Update: Slides about it and graph compression: http://dl.dropbox.com/u/12405967/hashsem.pdf
Last edited by Jarek Duda; October 13th, 2012 at 01:52 AM.
July 1st, 2012, 06:46 AM
Even individuality is being devalued in our times - to about 2.3327464 bits per element/specimen (thanks to James Dow Allen): http://groups.google.com/forum/#!top...on/j7i-eTXR14E But this is the final minimal price tag. Specifically, for the minimal prefix tree, a random sequence (representing the individual features of a specimen) has about 0.721 probability of being identified as belonging to the population ... so if we are interested only in distinguishing inside the population, we can afford to increase this probability up to 1. To reduce the amount of information in the minimal prefix tree, let us observe that if there appears a degree-1 node inside the tree, all sequences from the population going through that node will certainly go in the corresponding direction - so we can save 1 bit of information about which exactly this direction is. In a standard prefix tree these degree-1 nodes were the places where it could turn out that an outsider does not belong to the population - removing this information raises the false positive probability from 0.721 to 1. So if for sequences (0101.., 1010.., 1001..) the minimal prefix tree remembers (without ordering!) (0....., 101..., 100...), such a reduced tree remembers only (0....., 1.1..., 1.0...), which decreases its asymptotic cost from 2.77544 bits/specimen to about 2.332746.
July 16th, 2012, 11:04 AM
If we can save lg(n!) bits of information about the order of unordered elements, one can ask if we can do the same with graphs having unlabeled vertices. I've just found a recent paper by Yongwook Choi, "Fast Algorithm for Optimal Compression of Graphs", about Erdos-Renyi graphs (for each two vertices, with probability p there is an undirected edge between them). So straightforward compression would use h(p)*n*(n-1)/2 bits for encoding the adjacency matrix, but we would like to subtract about lg(n!)
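As a small illustrative sketch (not the algorithm from the papers), the minimal distinguishing prefixes for the example sequences in the post can be computed directly:

```python
def minimal_prefixes(seqs):
    """For each sequence, return the shortest prefix that no other
    sequence in the population shares -- the leaves of the minimal
    prefix tree described in the post. Assumes the sequences are
    distinct and none is a prefix of another (true for fixed-length
    random hashes)."""
    result = []
    for i, s in enumerate(seqs):
        others = [t for j, t in enumerate(seqs) if j != i]
        k = 1
        while any(t[:k] == s[:k] for t in others):
            k += 1
        result.append(s[:k])
    return result

# The example from the post: sequences (0101.., 1010.., 1001..)
# should yield the prefixes (0, 101, 100).
print(minimal_prefixes(["0101", "1010", "1001"]))  # → ['0', '101', '100']
```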
bits about the order of vertices (neglecting ...). The paper uses quite a natural method for representing the graph (explained in a complex way):
- choose an order of vertices(!!!)
- assign a length k-1 bit sequence to the k-th vertex: 0 on the i-th position if it's adjacent to the i-th vertex, 1 otherwise (e.g. 010 if it's fourth and adjacent to first and third, not second)
- create a binary tree from these sequences (fig. 4 in the paper)
- encode this tree - exactly like in my paper.
I find the first step extremely suspicious - choosing a different order should usually lead to a different tree and so a different encoding sequence. So this encoding seems to be far from being a bijection from equivalence classes to encoding sequences, which is required for optimality. If each of the n! orderings led to a different tree/encoded sequence, we should completely lose the lg(n!) income ... does anybody understand why this algorithm works so well?? How could it be improved? The encoding algorithm is fine, but we need to make it independent of the ordering and essentially use this independence to get better probabilities. One way to do it is sorting the elements, like sorting numbers and then encoding the small nonnegative differences. Sorting the vertices of a graph with unlabeled vertices is faaaar from simple - having a polynomial algorithm for it would solve the graph isomorphism problem in polynomial time ... but there are algorithms that come very close: for example, to each vertex assign a vector whose k-th coordinate is the number of length-k cycles from it ((M^k)_ii), and sort these vectors lexicographically ( [0804.3615] Combinatorial invariants for graph isomorphism problem ). However, there are really nasty graphs these invariants don't distinguish: so-called strongly regular graphs... The other problem is using this ordering to reduce the required amount of information ... which generally seems extremely difficult.
So let us start with a simpler vertex ordering - just according to their degree (k=2). This way we know the expected probability distributions varying with the depth of nodes: the deeper the node, the more sequences go right. It would save some information, especially for sparse graphs. Has anyone any idea how to do it in a really optimal way?
Data transformation
File: Population vs area.svg - A scatterplot in which the areas of the sovereign states and dependent territories in the world are plotted on the vertical axis against their populations on the horizontal axis. The upper plot uses raw data. In the lower plot, both the area and population data have been transformed using the logarithm function.
In statistics, data transformation is an aspect of data processing and refers to the application of a deterministic mathematical function to each point in a data set: that is, each data point z[i] is replaced with the transformed value y[i] = f(z[i]), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs. Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on peoples' incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.
Reasons for transforming data
Guidance for how data should be transformed, or whether a transform should be applied at all, should come from the particular statistical analysis to be performed.
For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval can then be transformed back to the original scale using the inverse of the transformation that was applied to the data. Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g. to thousand square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.
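The transform-then-back-transform recipe for a confidence interval can be sketched as follows. The income values are invented for illustration, and note that exponentiating an interval built for the mean of the logs yields an interval for the geometric mean rather than the arithmetic mean:

```python
import math
import statistics

# Hypothetical right-skewed incomes (illustrative values only).
incomes = [12_000, 18_000, 22_000, 25_000, 31_000, 40_000, 55_000, 90_000, 250_000]

# Log-transform to reduce skew, then form mean +/- 2 standard errors
# on the log scale.
logs = [math.log(x) for x in incomes]
m = statistics.mean(logs)
se = statistics.stdev(logs) / math.sqrt(len(logs))

# Map the interval endpoints back with exp, the inverse transform.
lo, hi = math.exp(m - 2 * se), math.exp(m + 2 * se)
print(f"approximate 95% interval: ({lo:.0f}, {hi:.0f})")
```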
A final reason that data can be transformed is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon." However if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by the reciprocal function, yielding liters per kilometer, or gallons per mile. Data transformation in regression Linear regression is a statistical technique for relating a dependent variable Y to one or more independent variables X. The simplest regression models capture a linear relationship between the expected value of Y and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity. Another assumption of linear regression is that the variance be the same for each possible expected value (this is known as homoskedasticity). Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss-Markov theorem). However confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. This can be assessed empirically by plotting the fitted values against the residuals, and by inspecting the normal quantile plot of the residuals. Note that it is not relevant whether the dependent variable Y is marginally normally distributed. Examples of logarithmic transformations Equation: $Y = a + bX$ Meaning: A unit increase in X is associated with an average of b units increase in Y. 
Equation: $\log(Y) = a + bX$ (From taking the log of both sides of the equation: $Y = e^a e^{bX}$) Meaning: A unit increase in X is associated with an average of 100b% increase in Y. Equation: $Y = a + b \log(X)$ Meaning: A 1% increase in X is associated with an average b/100 units increase in Y. Equation: $\log(Y) = a + b \log(X)$ (From taking the log of both sides of the equation: $Y = e^a X^{b}$) Meaning: A 1% increase in X is associated with a b% increase in Y. Common transformations The logarithm and square root transformations are commonly used for positive data, and the multiplicative inverse (reciprocal) transformation can be used for non-zero data. The power transform is a family of transformations parametrized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transform, thereby identifying the transform that is approximately the most appropriate in a given setting. Since the power transform family also includes the identity transform, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the Box-Cox technique. The reciprocal and some power transformations can be meaningfully applied to data that include both positive and negative values (the power transform is invertible over all real numbers if λ is an odd integer). However when both negative and positive values are observed, it is more common to begin by adding a constant to all values, producing a set of non-negative data to which any power transform can be applied. A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. 
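A rough sketch of estimating the Box-Cox parameter λ by a grid search over the profile log-likelihood. This is a simplified illustration of the technique, not a production implementation, and the grid range is an assumption:

```python
import math

def box_cox(x, lam):
    """Power transform: (x**lam - 1)/lam, with log(x) as the lam -> 0 limit."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def box_cox_loglik(data, lam):
    """Profile log-likelihood of lam under a normal model for the
    transformed data (constant terms dropped)."""
    y = [box_cox(v, lam) for v in data]
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    # The Jacobian of the transform contributes (lam - 1) * sum(log x).
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in data)

def best_lambda(data, grid=None):
    """Pick the grid value of lam maximizing the profile log-likelihood."""
    grid = grid or [i / 10 for i in range(-20, 21)]  # lam in [-2, 2] (assumed range)
    return max(grid, key=lambda lam: box_cox_loglik(data, lam))
```

Note that the grid contains λ = 0, so the logarithm is considered as a special case, and λ = 1 (essentially the identity) lets the search indicate that no transform is needed.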
Many physical and social phenomena exhibit such behavior: incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes." The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X / Y, then if X < Y, the ratio is in the unit interval (0,1), whereas if X > Y, the ratio is in the half-line (1,∞), where the ratio of 1 corresponds to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X / Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio is equidistant from zero as in the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations). If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞).
Transforming to normality
It is not always necessary or desirable to transform a data set to resemble a normal distribution. However if symmetry or normality are desired, they can often be induced through one of the power transformations. To assess whether normality has been achieved, a graphical approach is usually more informative than a formal statistical test. A normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed, such as having skewness in the range of −0.8 to 0.8 and kurtosis in the range of −3.0 to 3.0.
Transforming to a uniform distribution
If we observe a set of n values X[1], ..., X[n] with no ties (i.e.
there are n distinct values), we can replace X[i] with the transformed value Y[i] = k, where k is defined such that X[i] is the k^th largest among all the X values. This is called the rank transform, and creates data with a perfect fit to a uniform distribution. This approach has a population analogue. If X is any random variable, and F is the cumulative distribution function of X, then as long as F is invertible, the random variable U = F(X) follows a uniform distribution on the unit interval [0,1]. From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G^−1(U) has G as its cumulative distribution function.
Variance stabilizing transformations
Main article: Variance-stabilizing transformation
Many types of statistical data exhibit a "mean/variance relationship", meaning that the variability is different for data values with different expected values. As an example, in many parts of the world incomes follow an increasing mean/variance relationship. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances. A variance-stabilizing transformation aims to remove a mean/variance relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box-Cox transformation for regression analysis and the arcsine square root transformation or angular transformation for proportions (binomial data).
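A minimal sketch of the rank transform and of inverse-CDF sampling described above, using the exponential distribution (whose CDF F(x) = 1 − e^(−x) is invertible) as the target; the sample values are arbitrary:

```python
import math
import random

def rank_transform(xs):
    """Replace each value by its rank k, where the k-th largest value
    gets rank k. Assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def sample_exponential(rate=1.0):
    """Inverse-CDF sampling: if U ~ Uniform(0,1), then
    G^{-1}(U) = -ln(1 - U)/rate has an Exponential(rate) distribution."""
    u = random.random()
    return -math.log(1.0 - u) / rate

# 7.5 is the largest (rank 1), 0.2 the smallest (rank 4).
print(rank_transform([3.1, 0.2, 7.5, 1.4]))  # → [2, 4, 1, 3]
```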
Transformations for multivariate data
Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data are observed as random vectors X[i] with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, use the Cholesky decomposition to express Σ = A A'. Then the transformed vector Y[i] = A^−1X[i] has the identity matrix as its covariance matrix.
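The Cholesky decorrelation step can be checked at the matrix level for an assumed 2×2 covariance matrix: factor Σ = A Aᵀ and verify that the covariance of Y = A⁻¹X, namely A⁻¹ Σ A⁻ᵀ, is the identity. The matrix values below are invented for illustration:

```python
import math

# Illustrative 2x2 positive-definite covariance matrix (assumed values).
sigma = [[4.0, 2.0],
         [2.0, 3.0]]

# Cholesky factor A (lower triangular) with sigma = A A^T.
a11 = math.sqrt(sigma[0][0])
a21 = sigma[1][0] / a11
a22 = math.sqrt(sigma[1][1] - a21 * a21)
A = [[a11, 0.0], [a21, a22]]

# Inverse of the lower-triangular A (closed form for the 2x2 case).
inv = [[1.0 / a11, 0.0],
       [-a21 / (a11 * a22), 1.0 / a22]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):
    return [[P[j][i] for j in range(2)] for i in range(2)]

# The covariance of Y = A^{-1} X is A^{-1} sigma A^{-T}, which should be I.
cov_y = matmul(matmul(inv, sigma), transpose(inv))
print(cov_y)  # ≈ identity matrix
```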
sine function transforms
February 1st 2008, 11:19 AM #1
say i have two functions, $y=3^x$ and $y=x+1.2$ and i wanted to find where they intersect. since $y=x+1.2$ is a line, if i could rotate both functions so that $y=x+1.2$ became the x-axis, $y=0$, then to find the intersection i would solve the now rotated $y=3^x$ by setting $y=0$, then un-rotating the solutions. firstly, how can i accomplish this rotation transform, and secondly, can i do more complex transforms, like $y=2^x$ and $y=x^2$ transformed so that $y=x^2$ becomes the x-axis? again, i would also have to de-transform the solutions. if i was at all fluent in calculus (took 4 semesters worth of it, haha, just don't remember much), then i'm sure i'd have a better idea on how to figure this out, or at least ask better questions. for this one, i visualize flattening out one curve, and the other one obeys the same movements. mathematically i think it would be taking lines perpendicular to $x^2$ and rotating each one into $y=0$ as in the first case. and for the real kicker (or maybe you'll just kick me), how about $y=sin(\pi*2^x)$ and $y=sin(\pi*2^{1-x})$? ..and again, de-transformation. transforming certain functions would cause the other to no longer be a function..not a good idea haha. given the range $0<x<1$ could these last functions be modeled by a polynomial? thanks for any help or ideas!
Last edited by pinion; February 2nd 2008 at 05:36 PM.
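For what it's worth, the rotation idea can be sketched numerically: the line $y=x+1.2$ has slope 1, so rotating the plane by −45° sends it to a horizontal line, and the intersections become level crossings of the rotated curve. The sketch below also finds the same intersections directly by bisection on $3^x-(x+1.2)$ for comparison (the brackets were chosen by inspecting signs; tolerances are assumptions):

```python
import math

def rotate(x, y, theta):
    """Rotate a point about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return c * x - s * y, s * x + c * y

# Rotating by -45 degrees sends the slope-1 line y = x + 1.2 to the
# horizontal line y' = 1.2/sqrt(2); e.g. take the point (0, 1.2):
_, level = rotate(0.0, 1.2, -math.pi / 4)

def bisect(f, lo, hi, tol=1e-10):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Direct approach for comparison: roots of 3^x - (x + 1.2).
g = lambda x: 3 ** x - (x + 1.2)
r1 = bisect(g, -1.0, -0.5)   # g(-1) > 0, g(-0.5) < 0
r2 = bisect(g, 0.0, 0.5)     # g(0) < 0, g(0.5) > 0

# At each intersection, the rotated curve point sits exactly at `level`.
for r in (r1, r2):
    _, yr = rotate(r, 3 ** r, -math.pi / 4)
    print(r, yr, level)
```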
A relation where every x-value produces a unique y-value is called a ___________. (domain / range / function / relation)
The New Prime Theorem (641)-(690)
Authors: Chun-Xuan Jiang
Using Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. In this paper using Jiang function J[2](ω) we prove that the new prime theorems (641)-(690) contain infinitely many prime solutions and no prime solutions. From (6) we are able to find the smallest solution, π[k](N[0],2) ≥ 1. This is the Book theorem.
Comments: 71 pages
Submission history: [v1] 11 Sep 2010
Nonstandard Card-Based Probability Question
September 20th 2011, 04:27 PM #1
This isn't a homework problem that I need help with or anything of that nature. This is more of a personal math-related problem. I play a collectable card game, and I am trying to determine which of my decks would benefit the most from a certain rare card that I possess. As part of this analysis I am trying to figure out the probability of drawing certain cards needed for turn-one or turn-two combos that can be played only if I draw the rare card in question. In all likelihood I cannot rely upon these cards being in my opening hand with any reliability given the fact that the chances of drawing a five- or six-card combo will be ludicrously low, so this is really something I am looking at more for fun than anything else. I already know the probabilities of drawing one of the given cards of a combo in question in a seven-card opening hand, but unfortunately the course I took that dealt with probability did not deal with anything as complex as figuring out the odds of having multiple events occur at once. Even if it did, I no longer own the textbook and it was years ago. Let's say that one of the combos requires five specific cards in an opening hand of seven cards, and the probabilities of drawing each card on the first card drawn are as follows:
□ Card 1 (the rare card in question): 1/60 chance of being drawn
□ Card 2: 21/60 chance of being drawn. Two of this card in an opening hand are required for the combo, so the probability of a second one being drawn ranges from 20/59 to 20/54 depending on the number of cards drawn so far
□ Card 4: 4/60 chance of being drawn
□ Card 5: 3/60 chance of being drawn
Obviously the probability of drawing one of the desired cards increases with each card drawn. Additionally, only five out of seven cards in an opening hand need to be the desired cards.
The other two cards drawn can be anything without affecting the combo's success. Finally, the five cards can be drawn in any order without affecting the combo's success. Most of the combos I am looking at are five cards, but there is one that requires six cards (which has eight drawn cards instead of seven to draw from since it is a second-turn combo) and a couple that require four cards. I am hoping that whatever solution I am looking for can somehow be applied to any number of needed cards. I'm afraid that I am wholly unsure of where to begin, and I'm hoping that the solution I am looking for is not horribly complicated. Thank you! Re: Nonstandard Card-Based Probablility Question I think you need to study this web page. Re: Nonstandard Card-Based Probablility Question Hi Moxen, This is a fairly complicated problem if I understand it correctly. To recap my understanding, you have a non-standard 60-card deck with 1 One 21 Twos 4 Fours 3 Fives 31 Others and you would like to know the probability of drawing a 7-card hand with at least 1 One, 2 Twos, 1 Four, and 1 Five. There are $\binom{60}{7} = 382,606,920$ possible 7-card hands, all of which are equally likely. We need to count the number of acceptable hands. As a first step, let's list all the possibilities in terms of the numbers of Ones, Twos, etc. Let's say there are a Ones, b Twos, c Fours, d Fives, and e Others. Then the possibilities for an acceptable 7-card hand are a b c d e As you can see, there are 10 combinations of (a,b,c,d,e) possible. For each of these possibilities, the number of hands is $\binom{1}{a} \binom{21}{b} \binom{4}{c} \binom{3}{d} \binom{31}{e}$ Sum up these products for the 10 (a,b,c,d,e) possibilities. (I used a spreadsheet.) If you do it correctly, I think you will find the sum is 1,980,720. So the probability of drawing an acceptable hand is which is approximately 0.00512. If you are not familiar with the notation above, $\binom{n}{m} = \frac{n!}{m! 
(n-m)!}$ is the number of combinations of n objects taken m at a time. September 20th 2011, 05:08 PM #2 September 22nd 2011, 06:05 PM #3
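The spreadsheet step in the answer above can be reproduced in a few lines of Python; the deck counts are taken from the recap in the answer:

```python
from math import comb
from itertools import product

# Deck recap from the answer: 1 One, 21 Twos, 4 Fours, 3 Fives, 31 Others (60 cards).
# Count 7-card hands with at least 1 One, 2 Twos, 1 Four and 1 Five.
total = comb(60, 7)

acceptable = 0
for b, c, d in product(range(2, 8), range(1, 5), range(1, 4)):
    e = 7 - 1 - b - c - d              # a = 1: only one copy of the rare card exists
    if 0 <= e <= 31:                   # the rest of the hand is filled with Others
        acceptable += comb(1, 1) * comb(21, b) * comb(4, c) * comb(3, d) * comb(31, e)

print(acceptable, total, acceptable / total)
```

Running this reproduces the 1,980,720 acceptable hands out of 386,206,920, i.e. a probability of roughly half a percent.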
Here's the question you clicked on: help me to sketch graph for this question: f(x) = (2/3)x^3 + (5/2)x^2 - 3x
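The question above was left unanswered on the page. A minimal sketch of the standard approach (find where f'(x) = 0 and classify each root with the sign of f''(x)) looks like this in Python:

```python
from math import sqrt

# f(x) = (2/3)x^3 + (5/2)x^2 - 3x, as given in the question.
def f(x):
    return (2/3) * x**3 + (5/2) * x**2 - 3 * x

def fprime_roots(a=2.0, b=5.0, c=-3.0):
    """Roots of f'(x) = 2x^2 + 5x - 3 via the quadratic formula."""
    disc = sqrt(b * b - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

for x in fprime_roots():                                  # critical points: x = -3 and x = 1/2
    kind = "local min" if 4 * x + 5 > 0 else "local max"  # sign of f''(x) = 4x + 5
    print(f"x = {x}: f(x) = {f(x):.4f} ({kind})")
```

So the graph rises to a local maximum of 13.5 at x = -3, falls to a local minimum of -19/24 at x = 1/2, and rises afterwards; since f has no constant term, x = 0 is one of the x-intercepts.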
I'm Having Trouble Figuring This Problem Out. How ... | Chegg.com
I'm having trouble figuring this problem out. How do I find the damping ratio for this third-order polynomial? I feel like I can get it once I find the damping ratio and natural frequency.
Design a compensator such that the unit-step response curve will exhibit a maximum overshoot of 25% or less and a settling time of 5 sec or less.
G(s) = 1/(s^2*(s+4))
I got an answer of K(s) = 13.333(s+8), but I think that is really wrong.
Electrical Engineering
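A first step for the question above is to translate the two specs into a target damping ratio and natural frequency, using the standard second-order approximations Mp = exp(-zeta*pi/sqrt(1-zeta^2)) for peak overshoot and Ts ≈ 4/(zeta*wn) for the 2% settling-time criterion. This only derives the targets, not the compensator itself:

```python
from math import log, pi, sqrt

def zeta_from_overshoot(Mp):
    """Damping ratio giving fractional peak overshoot Mp (0 < Mp < 1)."""
    L = -log(Mp)
    return L / sqrt(pi**2 + L**2)

def wn_from_settling(zeta, Ts, band=4.0):
    """Natural frequency needed for settling time Ts; band=4 is the 2% criterion."""
    return band / (zeta * Ts)

zeta = zeta_from_overshoot(0.25)   # about 0.404 for 25% overshoot
wn = wn_from_settling(zeta, 5.0)   # about 1.98 rad/s for Ts = 5 s
print(zeta, wn)
```

The dominant closed-loop poles then need to sit near s = -zeta*wn ± j*wn*sqrt(1-zeta^2); whether the posted K(s) = 13.333(s+8) achieves that for G(s) = 1/(s^2(s+4)) still has to be checked against the full third-order closed loop, since the second-order formulas are only an approximation.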
Physics Forums - differentiation of a vector field

klabautermann Mar2-12 04:41 AM
differentiation of a vector field
1 Attachment(s)
I flipped through my notes from a class on general relativity this morning and found an expression which doesn't make sense to me. I'm not sure if I don't understand the last term in the last equality or if it just doesn't make sense. I would appreciate your opinion. a, b are abstract indices; everything else are coordinate indices.

Re: differentiation of a vector field
You should explain your notation...

klabautermann Mar5-12 02:43 AM
Re: differentiation of a vector field
Of course. As I said, a and b are abstract indices; i, j, m, k are components with respect to a basis. Barred and unbarred components and differential operators correspond to different coordinate systems.

Re: differentiation of a vector field
Ok, but what is [itex]\partial[a][/itex] in the quotation? Is x a vector?
Plotting points
I have lots of coordinates to plot into CAD - is there an easy way to do this without having to type in each of the points individually? Such as using the data from a spreadsheet.

Using vanilla AutoCAD? How are the coordinates ordered? N,E,Z? Try ASCPOINT.LSP at http://www.caddzone.com/free.htm
R.K. McSwain CAD Panacea | Hot Tip Harry | Cadalyst Cadtips

I was looking for a bit of lisp code for that very purpose also. In the end I came up with IMPORTXYZ.lsp. If you look in the lisp forum, there is a link there and another piece of lisp code that also imports points from a .csv file.

How do you want them plotted - as individual points? Connected by a polyline? If you get your coordinates in a single column in Excel, in the format x,y,z, you can copy the whole column of data and paste it into AutoCAD when prompted to enter or pick a point.

Digging up an old topic because I'm just full of dumb questions today... though this may not be so dumb... I've got a huge list of points from a scene survey that I need to input into AutoCAD. Normally this would be easy: just make an Excel spreadsheet with all the X,Y,Z points, run the LISP routine, and badda-bing badda-boom... But these points use a vector location (i.e. I have horizontal angle, slope distance and vertical angle data, not X-, Y-, and Z-distances). The horizontal angle, as best I can guess, is from the positive X-axis (around the Z-axis), and the vertical angle is from the positive Z-axis (around the X-axis?). The slope distance is the distance from the total station (origin?) to the point of interest. I know this can be done fairly simply with lines: make a line at the desired angle of the desired distance, then make a point at the endpoint of the line, and erase the line... but I've got 250+ points, and well, that would take all weekend, haha.
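If the coordinates are already reduced to x,y,z, one more route besides pasting into the point prompt is to generate an AutoCAD script (.scr) of POINT commands and run it with the SCRIPT command. A sketch in Python; the sample data below is made up:

```python
import csv
import io

def csv_to_script(csv_text):
    """Turn lines of "x,y,z" into an AutoCAD script of POINT commands."""
    commands = []
    for row in csv.reader(io.StringIO(csv_text)):
        x, y, z = (float(v) for v in row[:3])
        commands.append(f"POINT {x},{y},{z}")
    return "\n".join(commands) + "\n"

sample = "1.0,2.0,0.0\n3.5,4.25,1.0\n"
print(csv_to_script(sample), end="")   # save this as e.g. points.scr and run SCRIPT
```

In practice you would read the text exported from Excel instead of the `sample` string; the file name `points.scr` is just an example, not something from the thread.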
I found something in AutoCAD Help that was something along the lines of making a point with a string [dist<angle1<angle2]; however, when I tried it, it spat out "Invalid point." So... is there a command to make a point with "coordinates" in a vector system?

Sounds like you have survey data that is not "reduced"; it has the raw data from the survey. If you aren't able to get the reduced data, you can reduce it yourself (not recommended if it really is land survey data - that should be done by the surveyor). *IF* all the points are relative to a single station location, it may not be too difficult. I suggest you do the comps in Excel first to create the XYZ data, then import as you usually do. You're correct, the slope angle is relative to the x-y plane. So first get the horizontal distance (projected on the x-y plane) using the cosine of the vertical angle. The sine of the angle times the slope distance is the delta z value. Now with the horizontal angles and the horizontal distances, calculate the delta x's and delta y's.

*yawns...* This is gonna be a long night... this has to be done in CAD by noon on Monday, so I probably will not be able to get reduced data from the surveyor. Well, it's scene survey data from an accident scene. Points were taken at the edge of the road, skidmarks, impact location, etc. The complete set of data I have reads something like this:
Pt#: 1, HzAngle: 165.0224, SlpDist: 463.900, VtAng: 92.5940, ParOff: 0.000, PerpOff: 0.000, TgtHt: 5.000, Description: EP
That's just point 1. It goes all the way to 272... I guess I'd better get back to work then, eh?

Well, once you get the formulas figured out, 272 points is not much worse than 2 points. Be careful on the vertical angle: 0 degrees is usually straight up, so for angles greater than 90 the delta z will be negative. And be sure all numbers to the right of a decimal are decimal fractions, and are not (for angles) degree-min-secs.
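The reduction described in the thread can be scripted. The sketch below assumes the conventions flagged by the replies: angles recorded as DDD.MMSS, a zenith vertical angle (0 = straight up, so delta z goes negative past 90 degrees), and a horizontal angle measured from the +X axis. Check the instrument's actual conventions before trusting the output:

```python
from math import sin, cos, radians

def dms_to_decimal(dms):
    """Convert an angle written as DDD.MMSS (e.g. 165.0224 = 165 deg 02 min 24 sec) to decimal degrees."""
    deg = int(dms)
    frac = round((dms - deg) * 10000)   # two digits of minutes, two of seconds
    minutes, seconds = divmod(frac, 100)
    return deg + minutes / 60 + seconds / 3600

def reduce_shot(hz_dms, slope_dist, vt_dms):
    """Return (dx, dy, dz) from the instrument for one shot."""
    hz = radians(dms_to_decimal(hz_dms))
    vt = radians(dms_to_decimal(vt_dms))   # zenith angle: 0 = straight up
    horiz = slope_dist * sin(vt)           # projection onto the x-y plane
    dz = slope_dist * cos(vt)              # negative below the horizon (vt > 90)
    return (horiz * cos(hz), horiz * sin(hz), dz)

# Point 1 from the thread: HzAngle 165.0224, SlpDist 463.900, VtAng 92.5940
print(reduce_shot(165.0224, 463.900, 92.5940))
```

Feeding the 272 rows through `reduce_shot` produces the x,y,z list that can then be imported the usual way.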
Patent US5786857 - Image processing system

Reference numerals and symbols as used in the drawings: 1: image-processing device, 2: subtracter, 3: FDCT conversion unit, 4: quantization unit, 5: reverse quantization unit, 6: IDCT unit, 7: adder, 8: filtering unit, 9: memory, 10: movement compensation unit

FIG. 1 is a block diagram of one embodiment of an image-processing system in accordance with the invention. In general, the encoding has a loop structure in moving-image compression processing because of movement correction, and this invention has two practical variants, placing the circuit within or outside the loop. The explanation below focuses on the embodiment of FIG. 1, in which the circuit is placed within the loop; the other case, in which the circuit is placed outside the loop (FIG. 11), is explained later.

The image-processing system 1 of FIG. 1 includes an FDCT conversion unit 3 that inputs image information in blocks of 8×8 pixels. The FDCT conversion unit 3 converts the input image information to signals represented in the frequency domain by the discrete cosine transform (DCT) and outputs image data showing 64 DCT coefficients, corresponding to the number of pixels, to a quantization unit 4. The quantization unit 4 generates quantized data to be output to a reverse quantization unit 5, and in the reverse quantization unit 5, non-zero coefficient parameters (NZCL information) are formed. In the quantization unit 4, a quantization threshold value for a filtering unit 8 is also formed. The reverse quantization unit 5 is connected to an IDCT unit 6, where the reverse-quantized image data is inverse-DCT converted and output to the filtering unit as image information through an adder 7. The filtering unit 8 carries out noise reduction based on the non-zero coefficient parameters from the reverse quantization unit 5. Incidentally, the output of the filtering unit 8 is externally displayed as images, which is not shown in the figure.
Furthermore, in this embodiment the filtering unit 8 receives the quantization threshold value from the quantization unit 4, but it is also possible to input a suitable intermediate value from the outside. This image-processing system 1 also includes a memory unit 9 and a movement compensation unit 10 for movement compensation. The movement compensation unit 10 resolves a moving image into blocks of about 16×16 pixels. Subsequently, for each block, the signal whose form is closest is searched for among the previous frame signals (in the memory unit 9), for which the encoding has been completed and an encoded image has been obtained, and by repeating the procedure over the whole frame, the frame signals most closely approximating the input frame signals being encoded are synthesized. The synthesized signals are subtracted from the input frame signals (subtracter 2) to compress the information; conversely, the information is expanded by adding them back (adder 7) afterwards.

The optimization of quantization has been known to be identical to the minimization of the overall signal-reproduction error. In this case, the mean square error (MSE) of the quantized noise can be represented as follows. ##EQU1## In equation (1), r is a constant related to the pdf of the transform coefficients: for a Laplacian pdf with a Max quantizer, r = 4.5, and for a Gaussian pdf, r = 2.7. Under the condition ##EQU2## the above equation provides a constant value, and the minimization of D yields the following equation. ##EQU3## In general, under such bit assignment the MSE distortions can be shown to be uniformly distributed over all coefficients. When such quantized coefficients are actually encoded, the coefficients over the minimum quantization level (threshold value) are encoded, and the remaining coefficients are encoded as a series of 0s.
This threshold-value quantization and encoding have been known to be well suited to video compression using orthogonal transforms, and they are used in most currently available video compression devices. When a 16×16 macroblock is quantized, the same quantization step (threshold value), for example Qstep, is used for all four of its 8×8 blocks ("quantization step" and "quantization weight" are explained later). The respective coefficients Org_{u,v} are quantized to, for example, Dequant_{u,v}. ##EQU4## In this case, the arithmetic noises are generally considered to be white, and the reproduced image signals are as follows. ##EQU5## In equation (5), Pred_{i,j}^{(0)} is a predicted value, and Recon_{i,j}^{(1)} is a reproduced value. The true value of the signal, TrueValue_{i,j}, is therefore found somewhere in the following range. ##EQU6## In equation (6), θ_{u,v} = Org_{u,v} - Dequant_{u,v} is the quantized noise. The reverse DCT is defined as follows. ##EQU7## In formula (7), C(u or v) = 1/sqrt(2) when u or v = 0, and 1 in the other cases. If the quantized noise (in coefficient space) is white, the dispersion σ²_{m,n} of the decoded noise in the pixel region is given by the following formula. ##EQU8## If the quantized-noise dispersion is equal for all coefficients (quantization with a very small step), i.e. Θ²_{u,v} = Θ², it can be shown that

σ²_{m,n} = Θ² (constant) (9)

This corresponds to the case in which the dispersion of the quantized noise is the same for all the blocks, as shown in FIG. 2(a). FIGS. 2(a) and 2(b) are drawings showing quantized-noise dispersion within a block: FIG. 2(a) shows white noise of equal dispersion, and FIG. 2(b) shows white noise with the nonlinear effects of quantization explained below. If the quantization step becomes large, as is the case in actual video compression, many high-frequency coefficients are smaller than the quantization step height (the threshold effect), and when they are quantized they become 0 coefficients.
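The threshold effect is easy to see numerically. The sketch below is not the patent's quantizer (equation (4) is only a placeholder in this text); it applies a generic round-to-nearest uniform quantizer to a made-up row of coefficients, and everything smaller than half the step collapses to zero while the surviving coefficients carry noise of up to half a step:

```python
# A generic uniform (round-to-nearest) quantizer; qstep plays the role of Qstep.
def quantize(coeff, qstep):
    return round(coeff / qstep) * qstep

coeffs = [310.0, -48.0, 22.0, 7.0, 4.0, 2.5, 1.0, 0.3]   # made-up "DCT" row, low to high frequency
qstep = 16.0

dequant = [quantize(c, qstep) for c in coeffs]   # the high-frequency tail collapses to 0.0
noise = [c - d for c, d in zip(coeffs, dequant)]
print(dequant)
print(noise)                                     # never larger than qstep / 2
```

The coefficients that fall below the step behave like the "group B" coefficients of the text, carrying only their own small magnitudes as error, while the surviving "group A" coefficients carry errors on the order of the quantization step.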
Quantization noises corresponding to such coefficients are smaller than those of the non-zero quantized coefficients, as shown in FIG. 3. FIG. 3 explains this by the power spectrum of video signals. If the power spectrum of typical video signals (solid line in FIG. 3, shown in one dimension) is quantized over frequency (since DCT coefficients are also frequency components), the result is the stepped dotted line shown in FIG. 3, but the noise power differs in the portions above and below the threshold value (thick dotted line). Namely, there are two groups of coefficients: the coefficients of group A contain quantized noises having the same dispersion, and in group B the dispersion is small. In general, the higher the frequency, the lower the dispersion. Therefore, there are DCT coefficients of 2 different groups in each quantized block. The coefficients of one group (group A) include noises having dispersions determined by the quantization step, and the coefficients of the other group (group B) include relatively small quantized noises. To carry out a stricter analysis, the power spectrum of each frame of the DCT coefficients is defined by the following equation. E[(s-p)(s-p).sup.T] = [T][R][T].sup.T (10) In equation (10), [T] is a 64×64 DCT transform matrix, and s and p are 64-element vectors representing the DCT coefficients of the original block and the predicted block. The [R] on the right side of the equation is a correlation matrix for each frame of the original video signals (dispersion = 1.0), as shown in the following equation. ##EQU9## In equation (11), ρ is an autocorrelation coefficient of the still image, and (V.sub.x, V.sub.y) is a certain movement in it. It is assumed, as a model, that the frame differential is generated by image movement of a Markov process. The power spectrum matrix defined by the left side of Equation (10) is denoted R.sub.inter below.
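The Markov correlation model of Equation (11) and the coefficient dispersions of Equation (10) can be sketched in one dimension (an illustration, not the patent's two-dimensional movement-compensated version):

```python
import math

def markov_correlation(n, rho):
    """First-order Markov image model behind Equation (11): R[i][j] = rho^|i-j|."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

# Orthonormal 8-point DCT matrix [T]
T = [[(math.sqrt(1 / 8) if u == 0 else math.sqrt(2 / 8))
      * math.cos((2 * j + 1) * u * math.pi / 16)
      for j in range(8)]
     for u in range(8)]

R = markov_correlation(8, 0.95)

# Diagonal of [T][R][T]^T: the per-coefficient dispersions of Equation (10)
spec = [sum(T[u][i] * R[i][j] * T[u][j]
            for i in range(8) for j in range(8))
        for u in range(8)]
```

For ρ = 0.95 the dispersion concentrates in the low-frequency coefficients, as the spectra of FIG. 4 suggest.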
This 64×64 matrix shows the correlation between pairs of DCT coefficients, and its diagonal terms show the dispersion of each DCT coefficient itself. The high-frequency noises can be regarded as white noises. The diagonal terms are rearranged into an 8×8 array φ.sup.2.sub.u,v (u, v = 0, 1, . . . , 7). This defines the movement-compensated power spectrum of the DCT coefficients for the predicted error of each frame. In this case, (Vv, Vh) corresponds to an error of movement estimation. FIG. 4 shows some examples of spectra corresponding to an image of ρ = 0.95. FIG. 4 is a drawing of the energy spectra of movement-compensated prediction error DCT coefficients; the three coordinates are vertical frequency (left side), horizontal frequency (right side), and spectrum amplitude (vertical). The movement shown is in pixel/frame units and corresponds to an estimated error of movement. The average of the spectrum amplitude is equal to the dispersion of the predicted error in signal space. The spectrum is found to concentrate in characteristic coefficients corresponding to the movement. The quantization is a nonlinear processing defined as follows. φ.sup.2.sub.u,v = λ.sub.u,v.sup.2 (only when φ.sup.2.sub.u,v > λ.sub.u,v.sup.2) (12) In formula (12), λ.sub.u,v.sup.2 is the white noise dispersion introduced by quantization and the quantization weight. The quantization noises are evaluated against the dispersion (= 1.0) of the original image signals. Specifically, the following equation is obtained. ##EQU10## The quantization acts as a clipping action against the spectra shown in FIGS. 4(a)-4(c). FIGS. 5(a)-5(c) show examples for various quantized SNR (signal/noise ratio). FIGS. 5(a)-5(c) are drawings showing examples of quantized noise spectra; only the cases of Vv = 2 and Vh = 2 are shown. If the quantization threshold value (step) is large, many high-frequency coefficients are smaller than the threshold value and are forced to take a value of 0.
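One plausible reading of the clipping in Equations (12)-(13) is the following sketch: coefficients whose power exceeds the quantizer noise floor contribute the floor, while coefficients thresholded to zero turn their whole power into noise (the sample values are invented):

```python
def quantized_noise_spectrum(phi2, lam2):
    """One reading of Equations (12)-(13): coefficients whose power exceeds
    the quantizer noise floor lam2 contribute lam2 of noise, while those
    below it are thresholded to zero, so their whole power becomes noise."""
    return [[min(p, lam2) for p in row] for row in phi2]

noise = quantized_noise_spectrum([[4.0, 0.5], [1.5, 0.1]], 1.0)
```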
As the quantization threshold value is lowered, more coefficients are quantized to non-zero values, and the whole block becomes covered by uniform quantized noises. Quantized noises in signal space can be calculated by substituting the noise spectra obtained above for θ.sup.2.sub.u,v in Equation (8). As that equation shows, if the quantized noises are uniform, as in FIG. 5(c), the quantized noises in signal space are similarly uniform; consequently, one can do nothing to improve decoded images except restore whole images from ones buried in white noises. However, as is common among all highly efficient video compression devices, the quantization threshold value is far larger than the amplitude of many coefficients, and the quantized noise spectra are then not uniform, as shown in FIGS. 5(a) and 5(b). In most cases, quantized noises are distributed densely at specific sites inside a block, and other sites suffer less deterioration from noises. Therefore, it becomes possible to improve decoded images effectively by restoration. FIGS. 6(a)-6(f) show the results of some computations carried out on quantized noise distributions inside a block under various conditions. The results indicate that the quantized noises are concentrated at the block boundary when the image signal correlation coefficient is large (>0.5). Since this is so for many image signals, a lattice pattern is observed at the block boundaries if the compression ratio is high. According to the results of certain research, the lattice pattern has been explained as a discrepancy in DC coefficients due to coarse quantization. Since in most cases the DC coefficients are encoded with minimal losses, however, that theory is not correct. Rather, the pattern has to be attributed to an increased level of white noises at specific sites inside a block. The white noises in this case are called mosquito noises in the field of image processing.
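The boundary concentration seen in FIGS. 6(a)-6(f) can be reproduced numerically: confining the coefficient noise to the low-frequency corner of the block and propagating it through Equation (8) piles the pixel-domain noise up at the block corners (a sketch):

```python
import math

def c(u):
    return 1 / math.sqrt(2) if u == 0 else 1.0

def basis_sq(m, n, u, v):
    b = (c(u) * c(v) / 4.0
         * math.cos((2 * m + 1) * u * math.pi / 16)
         * math.cos((2 * n + 1) * v * math.pi / 16))
    return b * b

def pixel_var(theta2):
    # Equation (8) applied to an arbitrary coefficient-noise spectrum
    return [[sum(basis_sq(m, n, u, v) * theta2[u][v]
                 for u in range(8) for v in range(8))
             for n in range(8)]
            for m in range(8)]

# Noise confined to the low-frequency (group A) corner, as for a highly
# correlated image quantized with a large step.
theta2 = [[1.0 if (u <= 2 and v <= 2) else 0.0 for v in range(8)]
          for u in range(8)]
sig2 = pixel_var(theta2)
```

The corner pixels carry more noise than the block center, and the four corners agree, which is the symmetry stated in Equation (14).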
When edges and high-contrast outlines move, a large amount of quantized noise is generated at the outline portion of a block. If an edge or outline coincides with this portion, the noise cannot be recognized clearly; if, however, it lies away from the site by a certain distance, the noise appears as an effect as if mosquitoes were flying. On the other hand, the following equation is always valid, as is apparent from Equation (8). σ.sub.m,n.sup.2 = σ.sub.7-m,n.sup.2 = σ.sub.m,7-n.sup.2 = σ.sub.7-m,7-n.sup.2 (14) Most image signals have positive correlation coefficients. However, if interlaced scanning is carried out, a zigzag pattern is generated at the boundary of an object, and as a result a negative correlation coefficient is generated at that position. (In MPEG2 such a block is mostly encoded in field mode, but there are still many similar cases.) In the case of positive correlation signals, the quantized noise spectrum is a simply decreasing function, but in the case of negative signals it is a simply increasing function. Therefore, in the case of positive correlation signals, non-zero quantized coefficients tend to be distributed around (0,0), as shown in FIG. 6, but in the case of negative correlation signals they spread around (7,7). Once the quantization threshold value (Qstep) is determined, the quantized noise spectrum can be estimated from the localization (two-dimensional shape) of the coefficients limited to region A, represented by (b.sub.v, b.sub.h) in the figure. This parameter is called non-zero coefficient localization (NZCL). FIG. 7 is a drawing estimating the boundary between the coefficients of groups A and B of FIG. 3, and it shows that the non-zero coefficient localization is an important parameter for noise reduction. The NZCL parameter has a value between (0,0) and (7,7), and thus there are 64 different possible values. When an 8×8 block is encoded, the related NZCL is confirmed immediately from the non-zero coefficient distribution.
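A minimal sketch of extracting the NZCL parameter, reading (b.sub.v, b.sub.h) as the largest vertical and horizontal indices of any non-zero quantized coefficient (this reading is an assumption consistent with the (0,0)-(7,7) range given above):

```python
def nzcl(levels):
    """Non-zero coefficient localization (b_v, b_h): taken here as the largest
    vertical and horizontal indices of any non-zero quantized coefficient."""
    bv = bh = 0
    for v in range(8):
        for h in range(8):
            if levels[v][h] != 0:
                bv, bh = max(bv, v), max(bh, h)
    return bv, bh

block = [[0] * 8 for _ in range(8)]
block[0][0], block[2][1], block[1][3] = 9, -1, 2
```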
The noises of this block encoding can be easily calculated as the difference between the decoded signals and the original image signals. Therefore, by using NZCL, the encoding noises can be classified into 64 different cases. FIG. 8 is a drawing showing the mean noise dispersion of 8×8 blocks, corresponding to the theoretical results shown in FIGS. 6(a)-6(f). FIGS. 6(a)-6(f) are drawings showing quantized noises (dispersion) of decoded 8×8 blocks; the noises concentrate at the block boundary when the correlation coefficient of the original signals is large (>0.5). In accordance with the invention, the following interesting observations were made. If either b.sub.v or b.sub.h is less than 4 and: (1) b.sub.v = b.sub.h, the encoded noises have the shape of FIG. 6(b); (2) b.sub.v > b.sub.h, the encoded noises have the shape of FIG. 6(a); (3) b.sub.v < b.sub.h, the encoded noises have the shape of FIG. 6(c); and if both b.sub.v and b.sub.h are larger than 4, the encoded noises have the shape of FIG. 6(e) or 6(f). Therefore, if the NZCL parameter is disregarded and all encoded noises are averaged, the noises appear to be distributed rather uniformly over the whole block (and thus the whole image). On the other hand, FIG. 8 shows actual data determined using 150 frames of the mobile and calendar sequence, a standard test image of ISO/MPEG. The basic difference between the mobile and calendar sequence and other sequences is a difference in the quantization step and not in the noise distribution. In the mobile and calendar sequence, the quantization step remains roughly constant (1015/255.4 Mbps). Incidentally, FIG. 8 shows all 64 cases, with the horizontal and vertical axes showing the 8×8 blocks connected according to the NZCL parameter. The noise quantity is shown as a dispersion value. The filter used to reduce noise is explained as follows. The quantization process may be represented as follows. ##EQU11## In formula (15), X and Z are the original and decoded signals, Θ denotes the quantized noises, and H denotes the DCT transform.
The linear arithmetic noises HΘH.sup.T are not correlated with X. If the signal dispersion is S.sub.m,n (ω, ν) and the noise dispersion is N.sub.m,n (ω, ν), the Wiener filter is represented by the following equation. ##EQU12## In formula (16), X.sub.m,n (ω, ν) denotes the power spectral density of X.sub.m,n, and N.sub.m,n (ω, ν) corresponds to the noise term [HΘH.sup.T](ω, ν); as explained in the previous section, it is determined from the NZCL information. To realize equation (16), Z.sub.m,n is processed first with a directional low-pass filter DL(ω, ν), and the signals obtained as a result are made to contain most of X.sub.m,n (ω, ν) and least of N.sub.m,n (ω, ν). The equivalent procedure may be carried out by a one-dimensional filtering action in the direction of the minimum dispersion of Z.sub.m,n. For the subsequent second-stage filtering action, DL(ω, ν) has to be a local filter extracting the DC component of Z.sub.m,n. Four separate directions are searched by determining the minimum dispersion of Z.sub.m,n. FIG. 9 shows an example of a five-tap filter. In FIG. 9, the center position shows the pixel whose noises are to be removed by filtering, and the optimal one-dimensional direction for filtering is searched for around this pixel. FIG. 9 shows a directional low-pass filter; the filter coefficients are all 1/5 in this example of a five-tap filter. For the 4 separate directions, the minimum residual dispersion is determined. If a Wiener filter is applied, the following equation is obtained. ##EQU13## In formula (17), δz.sub.m,n is the high-pass component of Z.sub.m,n, and A is a normalizing constant represented by the following equation when DL(ω, ν) is a mean FIR filter (equal coefficients) of (2N+1) taps. ##EQU14## In equation (17), the shape of the low-pass filter is hypothesized as a function of the ratio of E[(δz.sub.m,n).sup.2] and σ.sup.2.sub.m,n.
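The attenuation prescribed by Equation (16) can be sketched per frequency (hypothetical spectra; the NZCL-based noise estimate is taken as given):

```python
def wiener_gain(S, N):
    """Per-frequency Wiener attenuation S/(S+N) of Equation (16): close to 1
    where the signal dominates, close to 0 where the noise dominates."""
    return [[s / (s + n) if (s + n) > 0 else 0.0
             for s, n in zip(srow, nrow)]
            for srow, nrow in zip(S, N)]

g = wiener_gain([[3.0, 1.0]], [[1.0, 0.0]])
```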
Since σ.sup.2.sub.m,n is a set value, the normalizing constant A corrects the energy density value. If τ.sub.m,n = Aσ.sup.2.sub.m,n / E[(δz.sub.m,n).sup.2], the shape of the low-pass filter f(τ.sub.m,n) is optimized by simulation. FIG. 10 shows one example of the overall filter structure. In this filter structure, DL is a directional low-pass filter as shown in FIG. 9, and F is a Wiener filter. Its characteristics are determined by the size of Qstep and the NZCL information, and it is likewise a low-pass filter. The broken-line block shows a noise source. The signals Z.sub.m,n correspond to the output signals of the adder 7 in the embodiment of FIG. 1. The low-pass filter F is an adaptive filter: for a quantized coefficient block, its Qstep and NZCL information determine the estimated quantized noises at the respective pixel sites. If δz.sub.m,n is white, the optimal shape of F is represented by the following equation. ##EQU15## In this case, the size of Qstep is taken into consideration in the σ.sup.2.sub.m,n normalization. By using the information obtained about the noise size at each pixel site, a strong low-pass filtering action is applied to flat image regions having a large amount of noise, and a weak filtering action is applied to high-contrast regions and edges having a small amount of noise. As a result, most characteristic DCT noises can be removed from the flat regions adjoining edges and outlines. On the other hand, if the filter is placed outside the encoding-decoding loop, it can be constructed as a post filter (filter unit 8) after the decoder, as shown in FIG. 11. Alternatively, it may be inserted before the adder 7 inside the decoding loop (after IDCT 6). This new noise-reducing filter is always applicable to decoded images regardless of the kind (I, P, or B) of image. Furthermore, the action of this new filter requires no additional information that would increase the encoded data on the decoder side.
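The directional search of FIG. 9 can be sketched as follows: evaluate the 5-tap average along four directions and keep the one with minimum dispersion (the border handling and the omitted Wiener stage are illustrative simplifications):

```python
def directional_lowpass(img, m, n):
    """Sketch of the FIG. 9 idea: among four 5-tap directions through (m, n),
    pick the one with minimum dispersion and average along it (taps all 1/5).
    Border pixels simply use the taps that fall inside the image."""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]   # H, V and two diagonals
    best_var, best_mean = None, None
    for dm, dn in directions:
        taps = [img[m + k * dm][n + k * dn]
                for k in (-2, -1, 0, 1, 2)
                if 0 <= m + k * dm < len(img) and 0 <= n + k * dn < len(img[0])]
        mean = sum(taps) / len(taps)
        var = sum((t - mean) ** 2 for t in taps) / len(taps)
        if best_var is None or var < best_var:
            best_var, best_mean = var, mean
    return best_mean

# A vertical edge: filtering along the edge direction leaves the pixel intact.
img = [[0, 0, 0, 10, 10, 10] for _ in range(6)]
```

At a pixel on the edge, the vertical direction has zero dispersion, so averaging along it preserves the edge while flat noisy regions would be smoothed.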
All information required can be extracted from the results of encoding, and the decoder side can carry out processing synchronized with that of the encoder side. As discussed above, it has been shown to be possible to remove or whiten the characteristic noises met in image compression based on DCT. The extent of quality improvement obtained as a result is significant. For the optimal design of noise reduction filters, it is necessary to consider both the quantization and the reverse DCT. Although this invention has been described in connection with a specific embodiment, it will be understood that the scope of this invention is not necessarily limited to that embodiment. According to this invention, an image processing system has been provided in which harmful noises arising at the time of image compression, such as mosquito and block noises, can be effectively reduced.
FIG. 1 is a block diagram of an image-processing system in an embodiment of this invention.
FIGS. 2(a) and 2(b) illustrate the dispersion of quantized noises within a block.
FIG. 3 is a graph of amplitude in relation to frequency for explaining the quantized noise dispersion of FIGS. 2(a) and 2(b) by power spectra of video signals.
FIGS. 4(a)-4(c) are schematic drawings showing energy spectra of movement-compensated estimated error DCT coefficients.
FIGS. 5(a)-5(c) are schematic drawings showing quantized noise spectra.
FIGS. 6(a)-6(f) are schematic drawings showing quantized noises (dispersion) of an encoded block.
FIG. 7 is a schematic drawing showing the boundary of the coefficients of groups A and B of FIG. 3.
FIG. 8 is a schematic drawing showing the mean noise dispersion of 8×8 blocks connected with NZCL as a parameter.
FIG. 9 is a schematic drawing of a filtering grid for explaining the filtering action to determine the minimum dispersion of Z.sub.m,n.
FIG. 10 is a schematic diagram of a filter structure used in an embodiment of this invention.
FIG. 11 is a block diagram of a decoding unit of the image-processing system in another embodiment of this invention.
This invention pertains to image processing for carrying out digital processing of image information and, in particular, to an image processing system including noise reduction. Research and development on multimedia systems handling various data, such as images and sounds, in a unified manner have been carried out actively in recent years, and as a result it has become necessary to store and transmit digitized still and/or moving images. However, if image information is digitized, the volume of data is very large compared with that of sound data. For example, if 720 images are digitized, a high-speed data rate of several hundred Mb/sec is required, and realizing this poses problems with respect to transmission speed and the recording medium. To solve such problems, the technology of compressing image information has been developed. International standards are being established: the CCITT standard in the case of video telephony and audiovisual remote conferencing, CMTT/2 for television transmission, and, in the case of recording media, JPEG (Joint Photographic Expert Group) for still images and MPEG (Moving Picture Expert Group) for moving images. With highly efficient video compression by orthogonal transformation (e.g., the discrete cosine transformation) and movement-compensated encoding becoming usable as compression arts, the greatest concern in recent years has been the removal of the characteristic distortions (mosquito and block noises) caused by crude quantization of the transformed coefficients. Mosquito noises are hazy noises at a small distance from edges, and block noises are block-shaped noises generated at the transformation block boundaries. Ideas have been proposed previously to solve these problems, but they have not been satisfactory.
It is an object of this invention to provide an image processing system in which harmful noises, such as mosquito and block noises, arising in video compression are substantially reduced. The image-processing system in accordance with the invention comprises: first transformation means for inputting m pixels × n lines of image information and transforming the image information to data represented in the frequency domain; means for forming quantized data based on the frequency-domain data transformed by the first transformation means; reverse quantization means connected to the quantized-data formation means; second transformation means connected to the reverse quantization means for transforming the reverse-quantized frequency-domain data back to image information; and filtering means for filtering the image information from the second transformation means based on the non-zero coefficient parameter formed by the reverse quantization means. This application is a continuation of application Ser. No. 08/316,764, filed Oct. 3, 1994, now abandoned.
Convert to JOptionPane
March 9th, 2011, 11:06 PM
Convert to JOptionPane
I'm extremely new to Java as well as programming so please bear with me. I'm writing a test program that asks the user to input 10 numbers then outputs the smallest number. I'm trying to convert this to an input dialog box...
Code :
public class SmallestElement {
    public static void main(String[] args) {
        double[] numbers = new double[10];
        java.util.Scanner input = new java.util.Scanner(System.in);
        System.out.print("Enter ten numbers: ");
        for (int i = 0; i < numbers.length; i++)
            numbers[i] = input.nextDouble();
        System.out.println("The minimum value is " + min(numbers));
    }

    public static double min(double[] array) {
        double min = array[0];
        for (int i = 1; i < array.length; i++)
            if (min > array[i]) {
                min = array[i];
            }
        return min;
    }
}
I tried to convert it over to JOptionPane but when the box pops up to enter 10 numbers, it's not exactly accepting the input. Here's the code for trying to change it over.
Code :
import javax.swing.JOptionPane;

public class JSmallestElement {
    public static void main(String[] args) {
        double[] numbers = new double[10];
        String input = JOptionPane.showInputDialog("Enter ten numbers: ");
        for (int i = 0; i < numbers.length; i++)
            numbers[i] = Double.parseDouble(input);
        JOptionPane.showMessageDialog(null, "The minimum value is " + min(numbers));
    }

    public static double min(double[] array) {
        double min = array[0];
        for (int i = 1; i < array.length; i++)
            if (min > array[i]) {
                min = array[i];
            }
        return min;
    }
}
I obviously have no clue what I'm doing so any help would be greatly appreciated! I cannot get past how confusing this stuff is. :confused: Thank you!
March 10th, 2011, 08:52 AM
Re: Convert to JOptionPane
it's not exactly accepting the input.
What do you mean by this? Does it throw an exception? Not behave properly? If a user enters a String - something like "10 5 1 100.....43", you need to get each number individually.
This is where spending some time reviewing the API is helpful, as there are several ways to do this (see String (Java Platform SE 6)): for example indexOf, split, or a StringTokenizer. Once you have these individual values you can parse them using Double.parseDouble.
Fractal Art
The first set of images are coloured random fractal patterns. They were generated with a scientific intent: we needed to project fractal patterns onto smooth surfaces in order to get better triangulation off them when using stereo range finding. We had previously used Gaussian noise patterns, but I thought that coloured fractals should be better suited to our range mapping process, which worked by pyramid decomposition of left and right stereo images. The coloured patterns that I generated turned out to be much more attractive to look at than Gaussian speckle patterns. The picture above was generated using a recursive subdivision of each colour plane into rectangles. Rectangles are repeatedly divided into two rectangles a, b at a random position along either the vertical or horizontal axis. As this is done, a random offset, the mean of whose probability density function is proportional to the square root of the area of the rectangle, is added to the brightness level of a and subtracted from the brightness of b. The picture above achieves a more painting-like effect by selecting a random dividing line P Q for each rectangle as before, but then raising or lowering the dividing line in brightness space by adding a random variable with mean 0 and σ ≈ k |PQ|. The two sub-rectangles then become inclined planes in brightness space which are further subdivided. When working with such fractals one can vary the degree of contrast by altering the constant k, which controls the degree of randomness in the colour space. The image on the left has a larger k than that on the right.
Harmonic Warps
The next series of images are the result of performing warping operations on source images. The warps in question are ones that map the entire infinite Euclidean plane into either a rectangle or an ellipse. The distance-compressing warp that I use is d(x) = 1 - 1/(1+x), which tends to 1 as x tends to infinity.
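The recursive subdivision described above for the first set of images can be sketched as follows (one colour plane only; the Gaussian offset, the constant k and the 1-pixel cutoff are illustrative choices, not the exact parameters used for the pictures):

```python
import random

def fractal_plane(width, height, k=1.0, seed=1):
    """Sketch of the recursive subdivision: split a rectangle at a random
    position, add a random offset (spread growing with sqrt(area)) to one
    half and subtract it from the other, then recurse on both halves."""
    rng = random.Random(seed)
    img = [[0.0] * width for _ in range(height)]

    def split(x0, y0, x1, y1):
        w, h = x1 - x0, y1 - y0
        if w <= 1 and h <= 1:
            return
        offset = rng.gauss(0.0, k * (w * h) ** 0.5)
        if w >= h and w > 1:                       # vertical cut
            cut = rng.randint(x0 + 1, x1 - 1)
            for y in range(y0, y1):
                for x in range(x0, cut):
                    img[y][x] += offset
                for x in range(cut, x1):
                    img[y][x] -= offset
            split(x0, y0, cut, y1)
            split(cut, y0, x1, y1)
        else:                                      # horizontal cut
            cut = rng.randint(y0 + 1, y1 - 1)
            for y in range(y0, cut):
                for x in range(x0, x1):
                    img[y][x] += offset
            for y in range(cut, y1):
                for x in range(x0, x1):
                    img[y][x] -= offset
            split(x0, y0, x1, cut)
            split(x0, cut, x1, y1)

    split(0, 0, width, height)
    return img

plane = fractal_plane(16, 16)
```

Running one such plane per colour channel gives the coloured patterns described above.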
Thus a position x, y in the positive Cartesian quadrant can be mapped to a point d(x), d(y) in the unit square. If we imagine the plane being tiled by repeated occurrences of a photograph, we can then squash the infinite tiled surface into a small rectangle. The picture above illustrates this infinite tiling process applied to the fractal image fs-64-100 shown above. But the same thing can be done using photographs, as is shown in this tiling with the famous Lena image. We can also transform more natural photos to get a mirrored effect.
Polar Harmonic transforms
The following transforms have enhanced symmetry, with infinity all round the margin of the disk. The mapping used to derive them is based on the use of polar coordinates with an inverse harmonic transform on the radius.
Symmetries and exchange relations
An artwork on the subject of money and exchange, featuring the monetary system introduced by the British colonialists to Nigeria, the Kola nut, and its most famous commodity derivative. The image is produced by a sequence of unitary rotations and scale-preserving translations in the non-Euclidean space of the Poincare disk. To read more about this click here.
Ellipse Kunst, from an original image by Lee Cockshott
A harmonic polar tiling of the iconic Lena image, which is a metaphor for how the fovea is much more sensitive to detail than the periphery of our eyes.
If we apply the process to very simple and regular patterns like national flags we can get an intuitive understanding of the geometrical process underlying this. Take the crosses of St George and St Andrew and we get:
St Andrew’s Cross
St Andrew’s Poles
St George’s Cross
St George’s Parallels
Union Flag
Ever Wider Union Flag
Infinite Poles
In the harmonic warps used above, the spatial dimensions are treated separately, so that we are treating the underlying space as a Manhattan rather than a Euclidean space.
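The warp and its tiling can be sketched directly from d(x) = 1 - 1/(1+x), whose inverse is u/(1-u) (nearest-pixel sampling only; the mirrored variants shown in the pictures are left out):

```python
def d(x):
    """The distance-compressing warp d(x) = 1 - 1/(1+x)."""
    return 1.0 - 1.0 / (1.0 + x)

def harmonic_unwarp(u):
    """Inverse of d: maps [0, 1) back onto [0, infinity)."""
    return u / (1.0 - u)

def warped_tile_sample(src, u, v):
    """Sample the infinite tiling of src at the plane point that the harmonic
    warp sends to (u, v) in the unit square, 0 <= u, v < 1."""
    rows, cols = len(src), len(src[0])
    x = harmonic_unwarp(u) * cols
    y = harmonic_unwarp(v) * rows
    return src[int(y) % rows][int(x) % cols]

src = [[1, 2], [3, 4]]
```

Evaluating warped_tile_sample over every pixel of the output rectangle squashes the infinite tiled plane into it, as in the pictures above.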
The street lines of the Manhattan grid would be preserved as parallel lines, as is shown by the picture St George’s Parallels. Lines at 45 degrees are transformed into curves, though within the transformed Manhattan metric these are still parallels that meet only at infinity – this is illustrated by the St Andrew’s Poles. What happens if we transform the warp so that the infinities, instead of being at the edges of the picture, are shrunk to the 4 polar cardinal points of the compass? We get something very like the hyperbolic space of General Relativity. In hyperbolic geometry the sum of the angles of a triangle is always less than 180 degrees. This can be represented in the Euclidean plane as a disk with infinity at the circumference of the circle; this is known as the Poincare disk. A problem with this representation of hyperbolic space is that straight lines map to circles in the Poincare disk. The X and Y axes correspond to circles of infinite radius, so that they remain straight lines, but a line offset from the origin will always be a circle. We can map positions in the disk to x, y coordinates in Cartesian space by finding the circles which cut the x and y axes at 90 degrees and pass through a given point. Let these intercepts with the x and y axes be called hx, hy and the original coordinates within the Poincare disk px, py. Then the two are related by the rules: If we subsequently perform an inverse harmonic map on hx, hy we end up with positions in Cartesian 2D space. What does this do to straight lines at different angles? Well, we can look again at the national flags. Note that I have retained the original aspect ratios for the flags after the inverse harmonic map, so the Poincare disc becomes an ellipse.
The original cross of St Andrew survives in the midst of polar collapse on either side.
St George’s Ellipses
The rather more sinister image of the Union flag on the Poincare ellipse.
Escher’s chess board
Htt Chess
HYP Chess
Expanding Chess
Belly world
Fractal Limit
Miscalculating the odds - Obama Conspiracy Theories
I used the non-specific words “wildly implausible” directed at commenter John here on this forum to describe his speculation on various fraud scenarios in Hawaii. I actually did a calculation once on the probability that Obama was born in Kenya, given the fact that his literary agent said otherwise in a brochure of biographical information about several authors. (I came up with a generous
Well, that crazy Christopher “Lord” Monckton has crunched his own numbers on whether Obama’s long form birth certificate is legitimate, reports WorldNetDaily. His answer is 0.0000000000000000000016. Monckton’s argument basically assumes probabilities for various anomalies in the long form PDF, and then multiplies them all together. Monckton obviously is not a mathematician (his college degree is in Classics) because he makes some obvious mistakes. I spotted two right off the bat: his numbers are wrong and his math is wrong. First, unless two outcomes are independent (not correlated), you can’t simply multiply the probabilities. My article (linked above) actually shows how correlated events are properly treated. Let me give an example: what is the probability that someone is a male freemason? I’ll just make up the individual probabilities and say that the chance that someone is a man is .5 and the chance that someone is a freemason is .0001. You can’t multiply those probabilities and get .00005, because all freemasons are men (the two are correlated). The second big mistake is to assign a probability to certain characteristics of the certificate PDF. For example, Monckton says that the chance that the registrar date stamp is in a separate clipping region is 1 in 100. This is a made-up number. In fact, given the hardware and software combination, the chances should be 1 or 0 for a given document, since the process is algorithmic.
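The freemason example can be made concrete with a tiny population model (all numbers invented, as in the post):

```python
# A tiny population model of the freemason example: multiplying marginal
# probabilities is wrong when the events are correlated.
population = 1_000_000
males = 500_000        # P(male) = 0.5
freemasons = 100       # P(freemason) = 0.0001, and every one of them is male

p_naive  = (males / population) * (freemasons / population)  # 0.00005
p_actual = freemasons / population                           # 0.0001

# The naive product understates the true joint probability by a factor of 2.
```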
The big fallacy, and I don’t know a formal name for it, is to take any event that already happened and then assign a probability to it having happened. Say the winning lottery number is 1 4 6 9 18 21 22 5 7. It makes no sense to say that the odds against that number coming up were a zillion to one and conclude that the lottery must be rigged. Monckton is fairly unfamiliar with the facts; for example, he assigns a 1 in 25 chance to the certificates being out of sequence (numbers matching birth order) in “error.” However, we know that the certificates were not in sequence for anybody. If anything, they were in alphabetic order by last name, and Obama’s number is correct. The neonatal death is the only one out of alphabetic sequence (its number is much higher). He also assigns a probability to Obama’s father’s birth date being wrong, but Obama Sr. put different birth dates on a host of government forms. Monckton’s analysis is just pseudomathematical gobbledygook, perfect for birthers.
38 Responses to Miscalculating the odds
1. donna September 26, 2012 at 9:46 pm
the non-lord was the republicans’ only witness at a climate change hearing in 2010
GOP Chooses Non-Scientist Lord Monckton as Sole Expert Witness at Climate Change Hearing
he gets around
2. Squeeky Fromm, Girl Reporter September 26, 2012 at 10:08 pm
Well synchronicity strikes again. I have just finished one on this, too. What are the odds of that happening???
Squeeky Fromm
Girl Reporter
3. Andrew Vrba, PmG September 26, 2012 at 10:39 pm
Anyone else find it humorous that a man who falsely uses the title “Lord”, despite not being a member of the House of Lords, has the balls to call someone else a phony?
4. Dave B. September 26, 2012 at 10:50 pm
Monckton in motion! (down at the bottom of the page).
5. Slartibartfast September 26, 2012 at 11:04 pm
OMG! Squeeky is really Doc C! The probability of this is definitely 1 (or 0… I forget which)
Squeeky Fromm, Girl Reporter: Well synchronicity strikes again.
I have just finished one on this, too. What are the odds of that happening???
Squeeky Fromm
Girl Reporter
6. Paul September 26, 2012 at 11:15 pm
Ok, I’ve seen this guy quoted on FauxNews and the WingNut Interwebs. But seriously, who the F#*K is “Lord Monkton”, and why the F#*K should any of us care what the F#*K he thinks?!
7. brygenon September 27, 2012 at 12:06 am
Technically astute, as usual, Dr. C, but I think there are yet bigger sources of error. Even independent and correct probabilities multiplied together are not the net probability of the default hypothesis. Any particular observable outcome has a probability less than one, so it’s a one-way trip arbitrarily close to zero whether the document is authentic or not. When we go looking for anything we can always find something. When we look in many places for the unusual, we will usually find it. See the recent Ig Nobel prize winner, “Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction”. Birthers fail to apply proper multiple comparison correction around their dead fish. Perhaps the neatest refutation is to turn the method around and use it to assess the probability that Obama’s long form birth certificate is fake. What percentage of fake birth certificates are linked by the state authority that produces real ones with the description “a certified copy of his original Certificate of Live Birth”?
8. Greenfinches September 27, 2012 at 1:20 am
There are many people with the title ‘Lord’ who are not members of the House of Lords – but Monckton is at fault for claiming he is a member of that House, when he isn’t. As I recall there is a copy of a letter to him from the authorities at the House of Lords on their website, so that people will be able to see that Monckton knows he lies….. He talks a lot of nonsense about many things that he knows nothing of – climate change for one. Why, I don’t know. Follow the money????
9.
Slartibartfast September 27, 2012 at 1:53 am # If you’d like to see what a real expert has to say about an algorithm being used to generate the LFBC image, check out this article at John Woodman’s site:

10. Dr. Conspiracy September 27, 2012 at 2:00 am # Well, given that we probably get our ideas from similar sources, it’s not all that remarkable. However, there can be some really crazy arguments made that everything that happens is wildly improbable. One might quote a really tiny probability that a team of 100 monkeys with keyboards will reproduce the works of William Shakespeare in a year; however, they will produce something and that something is no more probable than anything else. Squeeky Fromm, Girl Reporter: Well synchronicity strikes again. I have just finished one on this, too. What are the odds of that happening???

11. Squeeky Fromm, Girl Reporter September 27, 2012 at 2:37 am # Dr. C: I guess you are right. I was real excited for a while because I thought I discovered a new mathematical thingie. Which is if you have 2 sets of 5 cards each, and 3 cards in each set is a face card, then what are the odds you can draw 1 card from each set and get a face card??? Which is 3/5 X 3/5 = 9/25. But, if you have 10 cards and 6 of them are face cards, and you draw twice, then you should have 6/10 x 5/9 = 30/90 or 1/3. But 9/25 is MORE than 1/3, which would take 27 on bottom. Sooo I got to thinking maybe I had found all that missing stuff in the universe??? But I didn’t. Squeeky Fromm Girl Reporter

12. Slartibartfast September 27, 2012 at 3:25 am # I just think it’s good that you were excited when you thought you had discovered a new mathematical thingy—I felt exactly the same way when I thought I had figured out how to trisect an angle in high school geometry class. Squeeky Fromm, Girl Reporter: Dr. C: I guess you are right.
I was real excited for a while because I thought I discovered a new mathematical thingie. Which is if you have 2 sets of 5 cards each, and 3 cards in each set is a face card, then what are the odds you can draw 1 card from each set and get a face card??? Which is 3/5 X 3/5 = 9/25. But, if you have 10 cards and 6 of them are face cards, and you draw twice, then you should have 6/10 x 5/9 = 30/90 or 1/3. But 9/25 is MORE than 1/3, which would take 27 on bottom. Sooo I got to thinking maybe I had found all that missing stuff in the universe??? But I didn’t. Squeeky Fromm Girl Reporter

13. The Magic M September 27, 2012 at 4:32 am # > The big fallacy, and I don’t know a formal name for it, is to take any event that already happened and then assign a probability to it having happened. Say the winning lottery number is 1 4 6 9 18 21 22 5 7. It makes no sense to say that the odds against that number coming up were a zillion to one and conclude that the lottery must be rigged. That’s basically the same fallacy as the assumption that because the chances that the initial conditions of the universe resulted in life coming to existence are so mathematically small, it cannot have happened without outside interference (“God”).

Squeeky Fromm, Girl Reporter: But 9/25 is MORE than 1/3 … because there is a difference between having two independent sets and having a single (self-dependent) set. Drawing one face card from the 10 card set reduces your chances of drawing another face card (60% to 55.5…%) whereas with the 5 card sets, the chance for the second draw remains the same, regardless of what you draw from the first. I think this can be related to the famous Monty Hall problem which is also about (the illusion of) independent choice. Also, you might try to calculate the number of balls in the Ross-Littlewood paradox if you have an afternoon to spare. But these things remind me why I never took a statistics course when I studied math.
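Squeeky's card arithmetic above can be checked exactly with Python's `fractions` module (a quick sketch, not part of the original thread):

```python
from fractions import Fraction

# Two independent 5-card sets, 3 face cards each: the draws don't interact.
p_two_sets = Fraction(3, 5) * Fraction(3, 5)   # 9/25

# One 10-card set with 6 face cards, two draws without replacement:
# the first draw removes a face card from the pool, lowering the second odds.
p_one_set = Fraction(6, 10) * Fraction(5, 9)   # 30/90 = 1/3

print(p_two_sets, p_one_set, p_two_sets > p_one_set)
```

This prints `9/25 1/3 True`, confirming that the independent-sets probability really is larger, exactly as The Magic M explains below.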
My girlfriend at the time did, and I had a hard time understanding the stuff because I cannot switch off my intuition (and statistics is pretty often very counter-intuitive). I have no problems visualizing curves in tangent bundles over bounded manifolds, but statistics is my kryptonite.

14. Keith September 27, 2012 at 5:43 am # The Magic M: I think this can be related to the famous Monty Hall problem which is also about (the illusion of) independent choice. Oh yes, please let’s go off on the Monty Hall Problem. Because the MSM (Main Street Mathematicians) have it wrong and are all complicit in a giant conspiracy to make us believe that a 50-50 choice is really a one third v two third choice after one third of the choices have been removed. It’s a conspiracy I tell you! I’m right and the MSM are all ignorant ignoramuses.

15. Slartibartfast September 27, 2012 at 5:50 am # The Magic M: > The big fallacy, and I don’t know a formal name for it, is to take any event that already happened and then assign a probability to it having happened. Say the winning lottery number is 1 4 6 9 18 21 22 5 7. It makes no sense to say that the odds against that number coming up were a zillion to one and conclude that the lottery must be rigged. I don’t know that there is a name for it—it’s kind of like a cross between the Texas Sharpshooter fallacy and the probabilistic analog of lies, damn lies, and statistics. Formally, while the odds that a particular number came up were small, the odds that some number would come up were 1. That’s basically the same fallacy as the assumption that because the chances that the initial conditions of the universe resulted in life coming to existence are so mathematically small, it cannot have happened without outside interference (“God”). Yeah, the birthers often use the same tactics favored by creationists… (they like God of the gaps arguments as well).
… because there is a difference between having two independent sets and having a single (self-dependent) set. Drawing one face card from the 10 card set reduces your chances of drawing another face card (60% to 55.5…%) whereas with the 5 card sets, the chance for the second draw remains the same, regardless of what you draw from the first. I think this can be related to the famous Monty Hall problem which is also about (the illusion of) independent choice.

Yes—the Monty Hall problem is the extreme case (the fewest choices). You are given the choice of three doors—behind one of which is a prize. After you choose, Monty (the host of the game show “Let’s Make a Deal” which operated this way) would open a door that didn’t contain the prize and offer to let you switch. Mathematically, you should always switch (you have a 67% chance of winning instead of a 33% chance). Psychologically, people are very biased towards going with their original choice (Mythbusters found that all of 20 people tested stayed with their first choice), and functionally, Monty said that (a) he knew where the prize was; and (b) he didn’t have to offer a deal (he could just open up the door and show you that you lost). When you consider the fact that he was a master at reading people, his suggestion that if you were offered money to switch (which he would sometimes do), you should go for the bird in the hand seems like a good one…

Also, you might try to calculate the number of balls in the Ross-Littlewood paradox if you have an afternoon to spare. I’ll have to check it out…

But these things remind me why I never took a statistics course when I studied math. My girlfriend at the time did, and I had a hard time understanding the stuff because I cannot switch off my intuition (and statistics is pretty often very counter-intuitive). I have no problems visualizing curves in tangent bundles over bounded manifolds, but statistics is my kryptonite.
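The always-switch claim is easy to verify by simulation (a sketch added for illustration; the door labels, seed, and trial count are arbitrary):

```python
import random

def monty_trial(switch: bool, rng: random.Random) -> bool:
    """One round of the Monty Hall game; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty, who knows where the prize is, opens a losing door you didn't pick.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
print(sum(monty_trial(True, rng) for _ in range(n)) / n)   # close to 2/3 when switching
print(sum(monty_trial(False, rng) for _ in range(n)) / n)  # close to 1/3 when staying
```

Switching wins whenever the first pick was wrong (probability 2/3), because Monty's opened door then leaves only the prize door to switch to.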
There’s your problem—you should have studied probability (these are all probabilistic fallacies), not statistics. If you can handle tensors on compact manifolds you could probably understand probability* (especially if you were good with PDEs as well—Brownian motion is a consequence of the heat equation), but statistics is something very different (although it uses probability to do all of the heavy lifting…). Types of mathematical intuition, like other sorts of intuition, are developed by training—it’s probably more accurate to say that your statistical intuition wasn’t very good because you never took a statistics course—just my $0.02. * pun intended

16. Slartibartfast September 27, 2012 at 5:58 am # You don’t want to cross the MSM—have you ever wondered what happens when the conspiracy theorist gets it right? Bwa-ha-ha-HA-ha! Keith: It’s a conspiracy I tell you! I’m right and the MSM are all ignorant ignoramuses.

17. Thrifty September 27, 2012 at 7:06 am # So I assume that in Birther circles, Mr. Monckton is a Certified Mathematical and Probability Expert?

18. Slartibartfast September 27, 2012 at 7:55 am # Right—except that the birthers probably fawn over his made-up title in a stunning display of thoughtless hypocrisy… So I assume that in Birther circles, Mr. Monckton is a Certified Mathematical and Probability Expert?

19. Tarrant September 27, 2012 at 8:34 am # This tirade by Mr. Monckton made me laugh – not because of the way he multiplied all these correlated events together (although that was good), but in the haphazard way he just assigns probabilities: “Let’s say the odds of the stamp showing up in a separate clipping layer are 100 to 1.” No justification, no nothing. He pulls the numbers for everything quite literally out of his ass and then concludes by arguing that the math is conclusive and straightforward. I mean I can say “Let’s say the odds of me having breakfast today are 1000000:1.” But I did have breakfast today…OMG it’s a million to one rarity!
And EVERY DAY I hit that million to one jackpot! I’m the luckiest person alive! Or perhaps those odds are wrong, or it’s an event for which probability really doesn’t apply. Nah, couldn’t be.

20. The Magic M September 27, 2012 at 8:52 am # Thrifty: So I assume that in Birther circles, Mr. Monckton is a Certified Mathematical and Probability Expert? Makes perfect sense. In their bizarro world, everything is upside-down. They would likely hire Stephen Hawking to play on their basketball team and give the science department to Mike Tyson. No, wait, I forgot, Tyson’s black… Slartibartfast: Psychologically, people are very biased towards going with their original choice Psychologically, people are conditioned to know that if the host offers you money to switch, the odds are you picked the winner and he’s trying to lure you away. That’s something that probably outweighs any probability evaluations the candidates might have made. Slartibartfast: especially if you were good with PDEs as well No, PDEs also weren’t my strongest suit, but I never had to take any exams there either. I love number theory and got my degree in differential geometry and computer science.

21. Scientist September 27, 2012 at 8:58 am # Tarrant: This tirade by Mr. Monckton made me laugh – not because of the way he multiplied all these correlated events together (although that was good), but in the haphazard way he just assigns probabilities. But, I think we could come up with some very good numbers for the probability of a US citizen living in Hawaii travelling to Kenya to give birth in 1961. There were 4.28 million births in the US in 1961. According to the State Department report that Doc posted here a while ago, there was 1 birth to a US citizen in all of East Africa in 1961 (and that was almost certainly to a mother who lived there rather than a tourist). So that places the odds of the President being born in Kenya at <1:4.28 million. Unlike Monckton's, that is a real number.

22.
misha September 27, 2012 at 9:48 am # Lord Monckton is Sacha Baron Cohen: http://www.youtube.com/watch?v=w833cAs9EN0

23. Thrifty September 27, 2012 at 10:22 am # It kinda reminds me of Ziggy, the supercomputer from Quantum Leap, who was always calculating arbitrary probabilities of events that really had no quantitative values to calculate a probability on. “Ziggy computes that there’s an 86% chance that if you arrest this criminal, the victim’s wife will live happily ever after” and so on. Tarrant: This tirade by Mr. Monckton made me laugh – not because of the way he multiplied all these correlated events together (although that was good), but in the haphazard way he just assigns probabilities.

24. John Potter September 27, 2012 at 10:59 am # Thrifty: So I assume that in Birther circles, Mr. Monckton is a Certified Mathematical and Probability Expert? I mentioned elsewhere that his improbable blunderings had cost him dearly once (see the Eternity Puzzle). His mistake there was pandering to people of greater intelligence than his (and putting his own skin in the game!). He’s wised up and moved on to much safer endeavors … namely, pandering to fools on their dime.

25. The Magic M September 27, 2012 at 11:09 am # One of my favourite examples of miscalculating the odds in a legal context involved a murder case (don’t remember the country, I think it was England, but do they have trial by jury?) with DNA as the prime evidence. The scientific result was that one in a billion people had the DNA sequence that was found at the crime scene (and also in the defendant). Now the proper conclusion is that there are 7 people in the world who could have been the killer, therefore a low, but still pretty good chance that the defendant was innocent. The DA however presented this to the jury as “there is a 1:1 billion chance that the defendant wasn’t the killer”.
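The fallacy in that DNA argument (the "prosecutor's fallacy") is plain arithmetic, assuming, purely for illustration, a uniform prior over a population of 7 billion possible culprits:

```python
population = 7_000_000_000   # hypothetical pool of possible culprits
match_freq = 1e-9            # 1-in-a-billion DNA profile frequency

expected_matches = population * match_freq   # about 7 people match the profile
p_guilty_given_match = 1 / expected_matches  # roughly 1/7, not 1 - 1e-9

print(expected_matches, p_guilty_given_match)
```

The 1:1 billion figure is P(match | innocent person chosen at random), not P(innocent | match); conflating the two is exactly the error the comment describes.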
(In related news, I always chuckle when people boast how their provider guarantees them a 99% yearly uptime, not realizing this means their online shop can be down for 3.5 days in a row, possibly in the middle of Christmas sales. My current employer – not a run-of-the-mill web hoster – guarantees a 97% service uptime. Clients are happy. Go figure.)

26. John Potter September 27, 2012 at 11:17 am # Monckton’s masturbatory “mathematics” are a better approximation of the likelihood of the birther explanation, as the facts he assigns values to are themselves unlikely birther memes. The Magic M: not realizing this means their online shop can be down for 3.5 days in a row 3.6525 days … c’mon now, M, you’re spotting them nearly 4 hours!

27. Dr. Conspiracy September 27, 2012 at 11:32 am # DNA evidence becomes more and more problematic when there are more and more people in DNA databases. What does a one in a million match mean when there are 10 million DNA results in the database? If you’re thinking of the same case in England that I am, the defendant whose DNA matched with a high probability was a victim of Parkinson’s disease that confined him to a wheelchair, but it was alleged that he committed a crime that required him to climb up a wall and enter a second-story window despite the fact he lived 200 miles away and had an alibi. He was not convicted. The Magic M: The scientific result was that one in a billion people had the DNA sequence that was found at the crime scene (and also in the defendant).

28. The Magic M September 27, 2012 at 12:16 pm # Dr. Conspiracy: If you’re thinking of the same case in England that I am Not sure, just that one statistical fallacy remained in memory.

29. foreigner September 27, 2012 at 2:08 pm # If he were a mathematician he would have realized what interesting properties Eternity had for algorithmic examination and teaching.
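The uptime arithmetic in that exchange checks out (a throwaway sketch; the helper name is made up):

```python
def max_downtime_days(uptime: float, days_per_year: float = 365.25) -> float:
    """Worst-case contiguous downtime permitted by a yearly uptime guarantee."""
    return (1.0 - uptime) * days_per_year

print(max_downtime_days(0.99))  # 3.6525 days for a 99% guarantee
print(max_downtime_days(0.97))  # almost 11 days for a 97% guarantee
```

So John Potter's 3.6525-day figure is right for a 365.25-day year, and a 97% guarantee allows nearly three times as much downtime.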
It turned out to be a really interesting task for programmers and (non-deliberately) well in the small window of solvable but not too easy. (Not the ~10^25 years needed to solve E2.) Monckton should have (co)written an article in some scientific journal about it. But all he was interested in was how to make it attractive for hand-solvers and to get good publicity in the newspapers.

30. foreigner September 28, 2012 at 3:25 am # None of the mathbots noticed the calculation error that WND even put into the headline?

31. RuhRoh September 28, 2012 at 7:22 am # Monckton is going to speak at Southeastern Louisiana University about another subject he knows nothing about – climate change! http://www.desmogblog.com/2012/09/25/

32. Bran Mak Morn September 28, 2012 at 7:56 am # Monckton is the biggest fraud of them all. Which is what is so ironic about birthers loving him. Just look at his Eternity Puzzle and what happened with that: http://www.scotsman.com/news/scottish-news/top-stories/aristocrat-admits-tale-of-lost-home-was-stunt-to-boost-puzzle-sales-1-679237 He does publicity stunts … you know.. lies…

33. Bran Mak Morn September 28, 2012 at 7:57 am # Also, with the Eternity Puzzle, his math was shown to be wrong… again.. he thought it wouldn’t be able to be solved in the time limit he gave for the prize! LOL.

34. misha September 28, 2012 at 8:39 am # RuhRoh: Monckton is going to speak at Southeastern Louisiana University about another subject he knows nothing about Bran Mak Morn: Monckton is the biggest fraud of them all. Which is what is so ironic about birthers loving him…He does publicity stunts … you know.. lies… Monckton is Sacha Baron Cohen: http://www.youtube.com/watch?v=w833cAs9EN0

35. John Potter September 28, 2012 at 9:17 am # RuhRoh: Monckton is going to speak at Southeastern Louisiana University about another subject he knows nothing about – climate change!
Another subject on which the willing are eager to hear they are right rather than the truth, even willing to pay for the privilege of their own stupefaction. I wish I could burden my opponents in Civ 5 with similar plagues of the stupid. Gin up a few demagogues and go to town!

36. The Magic M September 28, 2012 at 9:39 am # John Potter: Gin up a few demagogues and make money!

37. G September 28, 2012 at 10:57 am # There is a point where the nonsense is so clearly bogus that you don’t even waste your time. All of his numbers are pulled out of his arse and his correlations are fictitious. At that point, you just roll your eyes and dismiss complete garbage as not worthy of paying it any attention. Random stupidity cobbled together isn’t a real calculation in the first place. None of the mathbots noticed the calculation error that WND even put into the headline?

38. gorefan September 28, 2012 at 11:59 am # Dr. C “If anything they were in alphabetic order by last name, and Obama’s number is correct. The neonatal death is the only one out of alphabetic sequence (much higher).” According to Verna Lee, the BCs were collected monthly and sorted geographically. I agree that they were then sorted alphabetically. The neonatal was not born at Kapiolani or even in the city of Honolulu, but at Wahiawa General, which is 20 miles away. There were 15578 births in Hawaii in 1961. If the first certificate in January was 00001 and adding the monthly totals, the last BC issued in July, 1961 was 09942. The first child in August would have number 09943. The WND girl was 09945 (the third issued in August) and Sunahara was 11080 (the 1138th issued in August). According to the NCHS the geographic area of Honolulu County was divided into the city of Honolulu and the rest of the county. I think they collected the BCs monthly, separated them by geographic area and then alphabetized them within their respective geographic area.
If we test this with Edith Coates BC from 1962, we see that she was born June 15th at the same hospital as Sunahara and her cert # is 8498. Between January 1 and May 31st, 1962 there were 7400 births. That would mean that there were 1097 certificate numbers issued before hers. With a last name that begins with “C”, she should have a lower cert #. But if all of the city of Honolulu births for June were numbered ahead of hers, a higher number makes sense.
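The certificate-number arithmetic in these two comments can be checked in a few lines (figures taken from the comments themselves):

```python
last_july_1961 = 9942   # last certificate number issued through July 1961
wnd_girl = 9945
sunahara = 11080

print(wnd_girl - last_july_1961)   # 3: the third certificate issued in August
print(sunahara - last_july_1961)   # 1138: Sunahara's position in August's batch

# Edith Coates, 1962: certificate 8498, with 7400 births January through May,
# so June numbering starts at 7401.
coates, jan_to_may = 8498, 7400
print(coates - jan_to_may - 1)     # 1097 June certificates numbered before hers
```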
algebraic K-theory and tensor products

Algebraic K-theory defines a functor $K$ taking commutative rings to $E_\infty$ ring spectra. I'm interested in which pushouts (tensor/smash products) $K$ preserves. For example, if $R$ is a regular noetherian ring then (I believe) $K(R[t,t^{-1}]) \simeq K(R) \vee \Sigma K(R) \simeq K(R) \wedge_{K(\mathbb{Z})} K(\mathbb{Z}[t,t^{-1}])$. On the other hand, $K(\mathbb{Q}) = K(\mathbb{Q} \otimes \mathbb{Q})$ is not the same as $K(\mathbb{Q}) \wedge_{K(\mathbb{Z})} K(\mathbb{Q})$, as you can check by computing

Are there useful conditions under which K-theory preserves pushouts?

Edit: I'm equally interested in more general positive answers and more geometric counterexamples. For example, what is an example of smooth schemes $X$ and $Y$ over $\mathrm{Spec}\,k$ such that $K(X) \wedge_{K(k)} K(Y) \to K(X \times_k Y)$ is not an equivalence? Also, what if I only cared about $K_0$? Is the product map more often an isomorphism then? More generally, is there a spectral sequence to compute the K-theory of a fiber product of schemes?

2 Answers

For smooth varieties $X$ and $Y$ over a field $k$, it is pretty rare to have that the product map $K_0(X)\otimes K_0(Y)\to K_0(X\times_kY)$ is an isomorphism (or surjective). For instance, suppose $X$ is smooth and proper over $k$, and the product map above is surjective for $Y=X$. Consider the class of the diagonal $\Delta_X$ in $K_0(X\times_kX)$ (this makes sense because $K_0$ is the same for vector bundles and coherent sheaves). Then the class of $\Delta_X$ is expressible as a finite sum $\sum_ia_i\otimes b_i$ with $a_i,b_i\in K_0(X)$. View elements $\alpha\in K_0(X\times_kX)$ as correspondences, i.e., as endomorphisms of $K_0(X)$, given by $x\mapsto (p_2)_*(p_1^*(x)\cdot\alpha)$, where $\cdot$ is the multiplication in $K_0$ (the direct image $(p_2)_*$ exists because $X$ is proper).
The class of the diagonal gives the identity endomorphism, while the class of an element which is the image of $a\otimes b$ is rather special: it is of the form $x\mapsto \chi(x\cdot a)b$, where $\chi:K_0(X)\to {\mathbb Z}$ is the Euler characteristic ($\chi$ is the direct image map $K_0(X)\to K_0(k)$). Using this, one can deduce that $K_0(X)$ must be free abelian of finite rank, and $a_i$ (and also $b_i$) form a $\mathbb Z$-basis (and in fact, for the non-degenerate pairing $(x,y)\mapsto \chi(x\cdot y)$, they form dual bases). This argument, in some form, is well-known in some circles; this ``dual bases'' formulation appears in work of Ivan Panin.

Thus it is easy to give examples of such $X$ for which $K_0(X)\otimes K_0(X)\to K_0(X\times_kX)$ is not surjective, e.g., a smooth projective curve of positive genus over an algebraically closed field, or (more tricky) a complex Enriques surface ($K_0$ is finitely generated, but has torsion). I do not know a counterexample to the following: let $X$ be a smooth (say, proper) variety over a field $k$ for which $K_0(X)\otimes K_0(X)\to K_0(X\times_kX)$ is an isomorphism; then for any smooth variety $Y$, the product map $K_0(X)\otimes K_0(Y)\to K_0(X\times_kY)$ is an isomorphism.

Welcome to MO ! – Chandan Singh Dalawat Mar 15 '13 at 17:36

I don't have a complete answer to this. However, there is an argument (which I have not checked carefully, but I believe it works) to prove that $K(X\times Y) = K(X) \wedge^L K(Y)$ when $X$, $Y$ are smooth schemes over $k$, and one of them (say $Y$) is a linear variety. Here $\wedge^L$ is the derived smash product over $K(\mathrm{Spec}\,k)$. The class of linear varieties is the smallest class of quasi-projective varieties such that 1. Affine spaces are linear, 2. Let $X$ be a variety, $U$ an open subvariety and $Y$ its closed complement. If $Y$ and either $U$ or $X$ is linear, so is the other. For example, any toric variety is linear.
Now using the localization exact triangle for the variety $Y$, the homotopy invariance of K-theory of smooth schemes (i.e. $K(X\times\mathbb{A}^k) = K(X)$) and the fact that derived-smashing with $K(X)$ preserves exact triangles, I believe one can use an inductive five-lemma argument to show that $K(X\times Y) = K(X) \wedge^L K(Y)$. Maybe this argument can be extended to deal with more general fibre products over a general base $S$. But as it uses homotopy invariance of K-theory, which does not hold for singular schemes, and as $X\times_S Y$ may be singular, this might lead to trouble. This is a very special case though ($Y$ is very special). For a general $Y$ this result will not be true.
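For reference, the dual-bases argument in the first answer can be condensed as follows (a sketch, keeping the notation of the answer):

```latex
% If the diagonal class decomposes in K_0(X \times_k X), then acting on
% x \in K_0(X) by the correspondence [\Delta_X] = \sum_i a_i \otimes b_i gives
\[
  x \;=\; (p_2)_*\bigl(p_1^*(x)\cdot[\Delta_X]\bigr)
    \;=\; \sum_i \chi(x \cdot a_i)\, b_i
  \qquad \text{for all } x \in K_0(X),
\]
% so the b_i generate K_0(X), and the pairing
% <x,y> := \chi(x \cdot y) exhibits (a_i) and (b_i) as dual bases.
```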
STP677: Semi-Elliptical Cracks in a Cylinder Subjected to Stress Gradients

Heliot, J – Research engineer and scientific manager, Creusot-Loire, Paris
Labbens, RC – Research engineer and scientific manager, Creusot-Loire, Paris
Pellissier-Tanon, A – Research consultant, France
Pages: 24
Published: Jan 1979

The calculation of stress intensity factors in three-dimensional situations, under any stresses, is an engineering necessity. The authors presented at the 9th National Symposium on Fracture Mechanics, Pittsburgh, 1975, a method for calculating three-dimensional weight functions by finite elements. But the computer time was found to be too long for engineering applications. In this study the three-dimensional problem is limited to symmetrical problems with applied stresses expressed by a polynomial in one coordinate. Calculations are performed on semi-elliptical cracks in the meridional plane of a cylinder, and the applied stress is expressed by a polynomial of the fourth degree in the coordinate in the radial direction (see nomenclature and Fig. 1). The method could be extended to other symmetrical geometries and loads. So-called “polynomial influence functions” are defined and correspond to the terms of the polynomial. These functions depend on the radii ratio, the shape, and the depth of the crack; once these parameters are fixed, they are functions of the eccentric angle that defines a point on the crack front. The polynomial influence functions are computed by the boundary integral equation method. The method was first tested on a penny-shaped crack for which the known weight function allowed a direct computation of the polynomial influence functions. The accuracy was found sufficient to apply the method to more difficult problems.
These functions were then calculated for semi-elliptical cracks in cylinders. The results are presented in the form of curves; they are discussed and compared with the results published by other authors.

Keywords: crack propagation, fracture parameters, stress intensity factor, three-dimensional problems, semi-elliptical cracks, cylinders, boundary integral equation method, fatigue (materials)

Paper ID: STP34922S
Committee/Subcommittee: E08.05
DOI: 10.1520/STP34922S
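The "polynomial influence function" idea described in the abstract is commonly written in the following form (a hedged sketch; the exact normalization used by the authors may differ from this):

```latex
% Applied stress expanded to fourth degree in the radial coordinate x,
% with crack depth a; each polynomial term gets its own influence function i_j:
\[
  \sigma(x) \;=\; \sum_{j=0}^{4} \sigma_j \left(\frac{x}{a}\right)^{j},
  \qquad
  K_I(\varphi) \;=\; \sqrt{\pi a}\; \sum_{j=0}^{4} \sigma_j\, i_j(\varphi),
\]
% where \varphi is the eccentric angle locating a point on the crack front.
```

Because the stress intensity factor is linear in the applied stress, tabulating the $i_j(\varphi)$ once per geometry lets one evaluate $K_I$ for any fourth-degree stress gradient without rerunning the boundary integral computation.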
Randomized Rounding And Discrete Ham-Sandwich Theorems: Provably Good Algorithms for Routing and Packing Problems

Prabhakar Raghavan
EECS Department, University of California, Berkeley
Technical Report No. UCB/CSD-87-312

This thesis deals with the approximate solution of a class of zero-one integer programs arising in the design of integrated circuits, in operations research, and in some combinatorial problems. Our approach consists of first relaxing the integer program to a linear program, which can be solved by efficient algorithms. The linear program solution may assign fractional values to some of the variables, and these values are 'rounded' to obtain a provably good approximation to the original integer program. We first consider the problem of global routing in gate-arrays. This problem has important applications in the design of integrated circuits, and can be formulated as a zero-one integer program. We introduce a technique we call randomized rounding for producing a provably good approximation to this integer program from the solution to its relaxation. In order to prove the quality of this approximation, we make use of some new bounds on the tail of the binomial distribution. We present the results of experiments conducted on industrial gate-arrays using our methods; these are encouraging and call for further work. We then show that our randomized rounding technique can be applied to some problems in combinatorial optimization and operations research. We also describe the relation between the problems we study and a class of combinatorial results known as "discrete ham-sandwich theorems". This leads to the problem of rounding linear program solutions deterministically in polynomial time. We invoke an interesting "method of conditional probabilities" for this purpose. An extension of this method shows that it is possible to deterministically mimic the randomized algorithm in a certain precise sense.
This leads us to the development of a deterministic polynomial time rounding algorithm that yields the same performance guarantees as the randomized method.

Advisor: Clark D. Thompson

BibTeX citation:

@phdthesis{Raghavan:CSD-87-312,
  Author = {Raghavan, Prabhakar},
  Title = {Randomized Rounding And Discrete Ham-Sandwich Theorems: Provably Good Algorithms for Routing and Packing Problems},
  School = {EECS Department, University of California, Berkeley},
  Year = {1986},
  Month = {Jul},
  URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1986/5986.html},
  Number = {UCB/CSD-87-312}
}

EndNote citation:

%0 Thesis
%A Raghavan, Prabhakar
%T Randomized Rounding And Discrete Ham-Sandwich Theorems: Provably Good Algorithms for Routing and Packing Problems
%I EECS Department, University of California, Berkeley
%D 1986
%@ UCB/CSD-87-312
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1986/5986.html
%F Raghavan:CSD-87-312
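The thesis's core rounding step can be sketched in a few lines (illustrative only; the function name and example vector are invented, and the Chernoff-style tail bounds that make the rounding "provably good" are not reproduced here):

```python
import random

def randomized_round(x_star, seed=None):
    """Round a fractional LP solution x* in [0,1]^n to {0,1}^n by setting
    each coordinate to 1 independently with probability x*_i, so that the
    expected value of each rounded variable equals its LP value."""
    rng = random.Random(seed)
    return [1 if rng.random() < xi else 0 for xi in x_star]

# Example: a fractional solution from some routing/packing relaxation.
x_star = [0.0, 0.25, 0.5, 0.75, 1.0]
print(randomized_round(x_star, seed=7))
```

Because each constraint's value after rounding is a sum of independent indicator variables with the right expectation, tail bounds on the binomial distribution bound how far the rounded solution can drift from the LP optimum, which is the heart of the analysis the abstract describes.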
How to calculate THD in spectre

1. 28th February 2007, 12:01 #1 Junior Member level 3 Join Date Jun 2005 1 / 1 spectre thd
Hi all, I want to simulate the THD using spectre. I've done the Transient analysis and want to use Calculator -> Special functions -> thd; there are four parameters to fill in (the first two must be start and stop frequency, I think, plus Number of Samples and Fundamental). What's the relation between them and THD? And how do I find out the THD in the Calculator?

2. 28th February 2007, 13:48 #2 Junior Member level 2 Join Date Feb 2007 5 / 5 thd in spectre
If you have a current in your transient response, e.g. I2, then you first press wave, it, and then you press the waveform in the transient response so the expression is in the buffer. After that select thd from special functions. At this point you have to set, as you said, 4 parameters. At boxes From & To you have to write something in time, for example 3u to 5u (meaning usec). These values depend on the frequency of your output waveform, so it is good to select two periods of your signal. At Fundamental choose 0. Now for Number of Samples you can write 64, 512, 1024, but it depends on the step you have in transient analysis. Check the cadence help for the calculator; I think you will find it in calc.pdf. There you will find the expression in order to select the right No of samples. If you have a problem send a photo with your signal or just send the frequency of your signal. If your waveform is voltage, press wave, vt and then the waveform. 2 members found this post helpful.

Junior Member level 3 Join Date Jun 2005 1 / 1 cadance calc.pdf
Thanks laoud! Would you pls send me the calc.pdf? There's no cadence manual on my computer.

Advanced Member level 1 Join Date Jul 2005 44 / 44 thd analysis spectre
When you plot a waveform, you can find the calculator in your waveform window. It is in the Tools button.
You can find the Thd function in the calculator Junior Member level 3 Join Date Dec 2006 5 / 5 cadence spectre calc.pdf And the this is the step by step method 1 members found this post helpful. 6. 19th February 2010, 08:36 #6 How to calculate THD in spectre the result got from thd simulation is the real value, or with unit%? 2 members found this post helpful. 7. 22nd February 2010, 14:04 #7 Advanced Member level 5 Join Date Sep 2008 1746 / 1746 Re: How to calculate THD in spectre the result got from thd simulation is the real value, or with unit%? (H)SPICE and Spectre results are always presented as real values, small or big values designated by an appropriate prefix or by an E notation. 8. 22nd February 2010, 17:55 #8 Re: How to calculate THD in spectre thanks..i just find it so strange that not only THD but also HD3, they does not behave the normal way, i mean normally, the third harmonics increase faster than signal amplitude when increasing the input signal, so HD3 should decrease till 0 with increasing output amplitude(so called IIP3 point),but in my case the curve is like a wave, very bad.lz se the atattched □ 22nd February 2010, 17:55 9. 23rd February 2010, 08:32 #9 Advanced Member level 5 Join Date Sep 2008 1746 / 1746 Re: How to calculate THD in spectre ... the third harmonics increase faster than signal amplitude when increasing the input signal so HD3 should decrease till 0 with increasing output amplitude(so called IIP3 point) No, this is a misunderstanding. See iamlearning's contribution from Thu, 18 Feb 2010 19:01, saying "IIP3 determines the input power after which gain for the 3rd harmonic is greater than that of the Fundamental". See also this wiki for a good explanation of IIP3. but in my case the curve is like a wave, very bad.lz se the atattched I do not quite understand your curve. From your text, it should be HD3 vs. HD1 (the Fundamental), right? 
Then - apart from its strange shape - how can you get an HD3 output signal between 60 and 90 dB from a fundamental signal between -70 and -20dB ? At a fundamental input signal of -60dB (1mV, in case of dbV), an HD3 output of 70dBV would mean an HD3 gain of 130dB (≈ 3e6) or an HD3 output voltage of > 3kV. There definitely something must be wrong with the size of the units. But may be this is my mis-interpretation? □ 23rd February 2010, 08:32 10. 23rd February 2010, 08:40 #10 Re: How to calculate THD in spectre Hi, right the x axis is the HD1, while the y axis is HD1-HD3, the distance between thrid order harmonics and fundamental signal. 11. 23rd February 2010, 09:54 #11 Advanced Member level 5 Join Date Sep 2008 1746 / 1746 Re: How to calculate THD in spectre Ok, then it seems you are still far, far away from the IIP3 point, which is HD1-HD3=0dB (see the wiki). So you should simulate the input range -20dB ≤ HD1 ≤ +10dB ! 1 members found this post helpful. 12. Re: How to calculate THD in spectre the result got from thd simulation is the real value, or with unit%? i am going through a linearity analysis right now, and it seems to me that the value given by the thd function is a percentage Newbie level 1 Join Date Jun 2009 0 / 0 Re: How to calculate THD in spectre yes it is, as can be seen in the wavescan user guide (calculator functions) for THD just multiply it with 100 and you can use it to calculate ENOB or SN(D)R Member level 3 Join Date Jan 2010 Waterloo, Ontario, Canada 9 / 9 Blog Entries How to calculate THD in spectre from the calculator use the Fourier transform then sum a few terms HD2, HD3 is usually sufficient good luck 15. 15th September 2010, 06:32 #15 Newbie level 6 Join Date Sep 2010 0 / 0 Re: How to calculate THD in spectre hello, is the thd in cadence calculator expressed in percentage? 16. 15th September 2010, 12:08 #16 Advanced Member level 5 Join Date Sep 2008 1746 / 1746 Re: How to calculate THD in spectre See the replies above yours! 17. 
16th September 2010, 08:53 #17 Newbie level 6 Join Date Sep 2010 0 / 0 Re: How to calculate THD in spectre but why my THD always seem to be about 8%, my specs requirement is 0.92%, what do u think is the reason? 18. 16th September 2010, 10:53 #18 Advanced Member level 5 Join Date Sep 2008 1746 / 1746 Re: How to calculate THD in spectre 19. 30th September 2010, 05:58 #19 Newbie level 6 Join Date Sep 2010 0 / 0 Re: How to calculate THD in spectre Hello everyone I am using the calculator in cadence to find the settling time for the output voltage waveform. There is this calculator function(Settlingtime) in cadence which I believe would be useful to me. However, there are 4 blanks to be filled up: Initial value type: Initial value: Final value type: Final value: Kindly advise. I tried doing it but the answer does not seem sensible. Advance thanks 20. 20th December 2011, 07:42 #20 Newbie level 1 Join Date Oct 2011 0 / 0 Re: thd in spectre If you have in your transient responce current i.e I2, then you first press wave,it and then you press the waveform in transient responce so the expression is in the buffer.After that select from special functions thd.At this point you have to set as you said 4 parameters. At boxes From & To you have to write something in time, for example 3u to 5u(meaning usec).These values depends on the frequency of your output waveform.So it is good to select two periods of your signal. At fundamental choose 0 .Now for number of samples you can write 64,512,1024 but it depends on the step you have in transient analysis.Check at cadence help for calculator.I think you will find it at calc.pdf . There you will find the expression in order to select the right No of samples. If you have problem send a photo with your signal or just send the frequency of your signal. If your waveform is voltage, press wave,vt and then the waveform
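For readers without calc.pdf at hand, the computation the thd function performs can be approximated offline: window an integer number of steady-state periods, take a DFT, and compare the harmonic magnitudes against the fundamental. A minimal sketch in Python (this illustrates the textbook THD definition, not Cadence's actual implementation; the waveform and distortion levels below are made up):

```python
import cmath, math

def thd(samples):
    """THD of a uniformly sampled waveform whose window spans an integer
    number of fundamental periods. Returns a fraction (x100 for percent)."""
    n = len(samples)
    # naive DFT magnitudes for the positive-frequency half (dependency-free)
    mags = [abs(sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)))
            for k in range(n // 2)]
    fund = max(range(1, len(mags)), key=lambda k: mags[k])  # fundamental bin
    harmonics = [mags[k] for k in range(2 * fund, len(mags), fund)]
    return math.sqrt(sum(m * m for m in harmonics)) / mags[fund]

# Synthetic "transient result": 1 V fundamental plus 5% HD2 and 2% HD3,
# sampled over exactly two periods, as the posts above recommend.
theta = [2 * math.pi * 2 * j / 256 for j in range(256)]
y = [math.sin(t) + 0.05 * math.sin(2 * t) + 0.02 * math.sin(3 * t)
     for t in theta]

distortion = thd(y)                       # a fraction, ~0.054 here
sinad_db = -20 * math.log10(distortion)   # distortion-only SINAD in dB
enob = (sinad_db - 1.76) / 6.02           # usual ENOB conversion
```

With the 5% second and 2% third harmonics above, the function returns sqrt(0.05^2 + 0.02^2) ≈ 0.054, i.e. about 5.4% after multiplying by 100, which connects the fraction-vs-percent discussion in the replies to the ENOB/SN(D)R use mentioned in post 13.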
The Sun-Planet Worm Gear block represents a two-degree-of-freedom planetary gear built from carrier, sun, and planet gears. By type, the sun and planet gears are crossed helical spur gears arranged as a worm-gear transmission, in which the planet gear is a worm. Such transmissions are used in the Torsen type 1 differential. When transmitting power, the sun gear can be independently rotated by the worm (planet) gear, by the carrier, or by both.

You specify a fixed gear ratio, which is determined as the ratio of the worm angular velocity to the sun gear angular velocity. You control the direction by setting the worm thread type, left-handed or right-handed. Rotation of the right-handed worm in the positive direction causes the sun gear to rotate in the positive direction too. The positive directions of the sun gear and the carrier are the same.

C, W, and S are rotational conserving ports. They represent the carrier, worm (planet), and sun shafts, respectively.

Sun-Planet Worm Gear Model

Model Variables

R[WG]    Gear, or transmission, ratio, determined as the ratio of the worm angular velocity to the gear angular velocity. The ratio is positive for the right-hand worm and negative for the left-hand worm.
ω[S]    Angular velocity of the sun gear
ω[P]    Planet (that is, worm) angular velocity
ω[C]    Carrier angular velocity
ω[SC]    Angular velocity of the sun with respect to the carrier
α    Normal pressure angle
λ    Worm lead angle
L    Worm lead
d    Worm pitch diameter
τ[S]    Torque applied to the sun shaft
τ[P]    Torque applied to the planet shaft
τ[C]    Torque applied to the carrier shaft
τ[loss]    Torque loss due to meshing friction. The loss depends on the device efficiency and the power flow direction. To avoid an abrupt change of the friction torque at ω[S] = 0, the friction torque is introduced via a hyperbolic function.
τ[instfr]    Instantaneous value of the friction torque added to the model to simulate friction losses
τ[fr]    Steady-state value of the friction torque
k    Friction coefficient
η[WG]    Efficiency for worm-gear power transfer
η[GW]    Efficiency for gear-worm power transfer
ω[th]    Absolute angular velocity threshold
μ[SC]    Sun-carrier viscous friction coefficient
μ[WC]    Worm-carrier viscous friction coefficient

Ideal Gear Constraints and Gear Ratio

The sun-planet worm gear imposes one kinematic constraint on the three connected axes:

ω[S] = ω[P]/R[WG] + ω[C] .

The gear has two independent degrees of freedom. The gear pair is (1,2) = (S,P). The torque transfer is:

R[WG]τ[P] + τ[S] – τ[loss] = 0 ,  τ[C] = – τ[S] ,

with τ[loss] = 0 in the ideal case.

Nonideal Gear Constraints

In a nonideal gear, the angular velocity and geometric constraints are unchanged, but the transferred torque and power are reduced by:

● Coulomb friction between thread surfaces on W and G, characterized by friction coefficient k or constant efficiencies [η[WG], η[GW]]
● Viscous coupling of driveshafts with bearings, parametrized by viscous friction coefficients μ[SC] and μ[WC]

The torque transfer for a nonideal gear has the general form:

τ[S] = – R[WG](τ[P] – μ[WC]ω[P]) + τ[instfr] ,
τ[instfr] = τ[fr]·tanh(4ω[SC]/ω[th]) + μ[SC]ω[SC] .

The hyperbolic tangent regularizes the sign change in the friction torque when the sun gear velocity changes sign.

┃ Condition │ Friction Torque τ[fr] ┃
┃ ω[SC]τ[S] > 0 │ |τ[S]|·(1 – η[GW]) ┃
┃ ω[SC]τ[S] ≤ 0 │ |τ[S]|·(1 – η[WG])/η[WG] ┃

Because the transmission incorporates a worm gear, the efficiencies differ for direct and reverse power transfer. The following table shows the value of the efficiency for all combinations of the power transfer.
┃ Driving shaft │ Driven shaft ┃
┃ ├────────┬─────────┬─────────┨
┃ │ Planet │ Sun │ Carrier ┃
┃ Planet │ n/a │ η[WG] │ η[WG] ┃
┃ Sun │ η[GW] │ n/a │ No loss ┃
┃ Carrier │ η[GW] │ No loss │ n/a ┃

Geometric Surface Contact Friction

In the contact friction case, η[WG] and η[GW] are determined by:

● The worm-gear threading geometry, specified by lead angle λ and normal pressure angle α.
● The surface contact friction coefficient k.

η[WG] = (cosα – k·tanλ)/(cosα + k/tanλ) ,
η[GW] = (cosα – k/tanλ)/(cosα + k·tanλ) .

Constant Efficiencies

In the constant efficiency case, you specify η[WG] and η[GW] directly, independently of geometric details.

Self-Locking and Negative Efficiency

If you set the efficiency for the reverse power flow to a negative value, the train exhibits self-locking: power cannot be transmitted from the sun gear to the worm, or from the carrier to the worm, unless some torque is applied to the worm to release the train. In this case, the absolute value of the efficiency specifies the ratio at which the train is released. The smaller the train lead angle, the smaller the reverse efficiency.

Meshing Efficiency

The efficiencies η of meshing between worm and gear are fully active only if the absolute value of the gear angular velocity is greater than the velocity tolerance. If the velocity is less than the tolerance, the actual efficiency is automatically regularized to unity at zero velocity.

Viscous Friction Force

The viscous friction coefficients of the worm-carrier and sun-carrier bearings control the viscous friction torque experienced by the carrier from lubricated, nonideal gear threads. For details, see the Nonideal Gear Constraints section.
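The kinematic constraint and the meshing-efficiency formulas are easy to sanity-check numerically. A small sketch (not the Simscape implementation; the lead angle, pressure angle, and friction coefficient below are arbitrary example values):

```python
import math

def sun_velocity(w_P, w_C, R_WG):
    """Kinematic constraint of the block: w_S = w_P / R_WG + w_C."""
    return w_P / R_WG + w_C

def worm_gear_efficiencies(lam, alpha, k):
    """Meshing efficiencies from lead angle lam and normal pressure angle
    alpha (both in radians) and surface contact friction coefficient k."""
    eta_WG = (math.cos(alpha) - k * math.tan(lam)) / \
             (math.cos(alpha) + k / math.tan(lam))   # worm driving the gear
    eta_GW = (math.cos(alpha) - k / math.tan(lam)) / \
             (math.cos(alpha) + k * math.tan(lam))   # gear driving the worm
    return eta_WG, eta_GW

# Example: 15 deg lead angle, 20 deg pressure angle, k = 0.05
eta_WG, eta_GW = worm_gear_efficiencies(math.radians(15),
                                        math.radians(20), 0.05)
```

With these numbers the worm-driving efficiency comes out higher than the gear-driving one, and shrinking the lead angle eventually drives η[GW] negative, which is exactly the self-locking regime described above.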
Wolfram Demonstrations Project

Newton's Pi Approximation

This Demonstration gives Newton's approximation of π, based on calculating the area of a semicircle using an integral. The area under the arc (light blue) is equal to the area of the sector minus the area of the triangle (light green), that is, π/24 − √3/32. On the other hand, the semicircle has equation y = √(x − x²), so the area under the arc is also ∫₀^{1/4} √(x − x²) dx. Expanding √(x − x²) = x^{1/2}(1 − x)^{1/2} with the binomial series and integrating term by term gives a rapidly converging series for this integral. Therefore, π = 24 ∫₀^{1/4} √(x − x²) dx + 3√3/4. Newton presented π to 16 decimal places using 20 terms of the binomial series [1, p. 177].

[1] W. Dunham, Journey through Genius, New York: Penguin Books, 1990, pp. 174–177.
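The series Newton summed can be reproduced in a few lines of Python (a sketch following the derivation above, not the Demonstration's own code):

```python
import math

def newton_pi(terms=20):
    """pi = 24 * integral_0^{1/4} sqrt(x - x^2) dx + 3*sqrt(3)/4, with the
    integrand x^(1/2) * (1 - x)^(1/2) expanded by the binomial series for
    (1 - x)^(1/2) and integrated term by term."""
    area = 0.0
    c = 1.0  # coefficient of x^n in the series for (1 - x)^(1/2)
    for n in range(terms):
        # integral of c * x^(n + 1/2) from 0 to 1/4
        area += c * 0.25 ** (n + 1.5) / (n + 1.5)
        c *= (n - 0.5) / (n + 1)   # recurrence for the binomial coefficient
    return 24.0 * area + 3.0 * math.sqrt(3.0) / 4.0
```

Because each term carries a factor (1/4)^(n + 3/2), the series converges quickly: twenty terms already agree with π to better than 12 decimal places, consistent with Newton's 16-place computation.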
Efficient Computation of Robust Weighted Low-Rank Matrix Approximations Using the L_1 Norm

A. Eriksson and A. van den Hengel, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1681-1690, Sept. 2012, doi:10.1109/TPAMI.2012.116

Abstract: The calculation of a low-rank approximation to a matrix is fundamental to many algorithms in computer vision and other fields. One of the primary tools used for calculating such low-rank approximations is the Singular Value Decomposition, but this method is not applicable in the case where there are outliers or missing elements in the data. Unfortunately, this is often the case in practice. We present a method for low-rank matrix approximation which is a generalization of the Wiberg algorithm.
Our method calculates the rank-constrained factorization, which minimizes the L[1] norm and does so in the presence of missing data. This is achieved by exploiting the differentiability of linear programs, and results in an algorithm that can be efficiently implemented using existing optimization software. We show the results of experiments on synthetic and real data.

Index Terms: Robustness, Approximation algorithms, Equations, Least squares approximation, Computational efficiency, Optimization, L_1-minimization, Low-rank matrix approximation
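The robustness argument in the abstract can be seen in the simplest possible setting: approximating a data vector by a single constant (a rank-zero fit). Under the L2 norm the optimal constant is the mean, which a single outlier drags arbitrarily far; under the L1 norm it is a median, which barely moves. A toy sketch (illustrative data, not from the paper):

```python
def best_constant_l2(xs):
    """argmin_c sum (x - c)^2  ->  the mean."""
    return sum(xs) / len(xs)

def best_constant_l1(xs):
    """argmin_c sum |x - c|  ->  a median."""
    s = sorted(xs)
    return s[len(s) // 2]

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
dirty = clean + [1000.0]   # one gross outlier

l2_clean, l2_dirty = best_constant_l2(clean), best_constant_l2(dirty)
l1_clean, l1_dirty = best_constant_l1(clean), best_constant_l1(dirty)
```

The L2 fit jumps from about 10 to 175 when the outlier is added, while the L1 fit stays near 10; the paper's L1 Wiberg method generalizes this behavior from a single constant to rank-constrained factorizations with missing data.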
taught : caught :: length
Number of results: 17,912

5th grade: taught is to caught as length is to ???? (Tuesday, December 9, 2008 at 4:43pm, by Rasea)

5th grade: what is taught caught length (Tuesday, December 9, 2008 at 4:43pm, by tymaris)

math: Andrea caught seven more fish than Micah. Micah caught twice as many as Peter. If they caught a total of 37 fish, how many did each catch? (Sunday, January 13, 2013 at 10:48pm, by Wilton)

math: About 310 000 t of Atlantic cod were caught in 1991. In 2001, the mass of cod caught was 87 percent less. How much cod was caught in 2001, and how do you find the answer? (Tuesday, June 15, 2010 at 6:56pm, by caitlin)

math: Ned caught 1/3 pound of fish. Sarah caught 5/12 pound of fish. Jessica caught 1/6 pound of fish. Which diagram shows how to find how many pounds of fish they caught altogether? 2/6 2/6 1/6 3/12 5/12 6/12 1/3 5/12 1/6 1/12 5/12 1/12 (Thursday, January 26, 2012 at 9:26pm, by Maya)

English: These 5 sentences are about the same topic. By comparing them, which sentence would you conclude to be biased? 1. Roughly ten illegal immigrants are caught at the border each day. 2. In ten days, about one hundred illegal immigrants were stopped at the border. 3. Say agent ... (Friday, May 15, 2009 at 2:49pm, by Taylor)

English: Ned caught 1/3 pound of fish. Sarah caught 5/12 pound of fish. Jessica caught 1/6 pound of fish. How much fish did all three of them catch? (Wednesday, October 24, 2012 at 8:59pm, by Jennifer)

8th grade: what are the major commercial fish caught in the Bahamas, and tell where they are caught (Saturday, October 3, 2009 at 1:28pm, by Anonymous)

English: All OK except for "caught him out herself." = It's sufficient to say "she caught him..." Sra (Friday, January 29, 2010 at 10:05am, by SraJMcGin)

analogy: consonant blend for caught length with word that begins with scr, spr, str or thr (Wednesday, January 25, 2012 at 7:29pm, by nicole)

English: Yes, you should definitely redo it. But since I don't know your assignment or prompt, I can't really make any constructive comments. Of course, we're not all taught right and wrong. Some parents don't follow society's norms about right and wrong. This may be because they're so... (Wednesday, December 3, 2008 at 8:10pm, by Ms. Sue)

math: On his last deep sea fishing trip, Greg caught a grouper that measured 20.716 inches in length. Rules require that grouper less than 22 inches in length be released back into the wild. How much longer, in inches, did Greg's fish need to be for him to keep it? (Saturday, December 15, 2012 at 1:39pm, by Rishad Alam)

Algebra word problem: Megalodon Lives. Angler went fishing and caught a monster of a fish! The head of the fish was 10 feet long. The length of the tail was equal to the length of the head plus half of the body. The body was as long as the head and the tail together. What was the total length of the... (Tuesday, December 6, 2011 at 3:04pm, by Linda)

Biology - Scientific Method: An experimenter sets out to find out how many hours of sleep a dog needs in order to catch the highest number of balls without a bounce. To test this, the experimenter uses 5 dogs in the experiment and allows each dog to sleep a predetermined amount of time: dog 1: 4 hrs, dog 2... (Thursday, January 30, 2014 at 4:29pm, by Emily)

English: The burglar climbed hastily out of the window, terrified that he'd be caught. He didn't notice the water-butt and eventually got stuck in it, and he was caught by the passing policeman. (Sunday, February 28, 2010 at 9:45pm, by stephanie)

english: accept is to except; threw is to ? mountain is to fountain; change is to ? boil is to soil; goat is to ? thought is to caught; length is to ? wouldn't is to couldn't; green is to ? I need help with these. (Tuesday, January 16, 2007 at 6:16pm, by Jillian)

values and morals: I think that while most people are taught values and morals, it's nearly impossible to say that everyone is taught values and morals. Many are taught, but never apply the principles; others unfortunately I think grow up without truly understanding the ... (Wednesday, December 3, 2008 at 8:15pm, by anonymoys)

physics: A projectile is caught at the same height from which it was launched. If it is caught with a velocity of -15 m/s, what is the initial velocity? (Tuesday, October 19, 2010 at 8:14pm, by Katsu)

12th grade physics AP question: A baseball with a mass of 120 g is hit 21.8 m into the air. The ball is caught by the third baseman 2.5 m above the ground. What potential energy did the ball develop at its highest point? How much potential energy did it still have when it was ... (Wednesday, October 24, 2012 at 7:38pm, by Ravin)

english: Mark:Marker::Spread:Spreader; taught:caught::length:strength; Grass:Lawn::Toss:throw; wouldn't:couldn't::green:screen; farm:Ranch::boulevard:Street; boil:soil::goat:throat; nail:hammer::screw:screwdriver; whirlwind:breeze::downpour:sprinkle; Fifteen:five::nine:three; clarify:explain::... (Tuesday, January 16, 2007 at 6:16pm, by Codes)

math: Ned caught 1/3 pound of fish. Sarah caught 5/12 pound of fish. Jessica caught 1/6 pound of fish. 1/3 = 4/12, 5/12 = 5/12, 1/6 = 2/12; sum = 11/12. NONE are correct. (Thursday, January 26, 2012 at 10:05pm, by Damon)

english: Analogies that begin with scr, spr, str, & thr: mark:marker::spread:spreader; taught:caught::length:strength; grass:lawn::toss:throw; wouldn't:couldn't::green:screen; farm:ranch::boulevard:street; boil:spoil::goat:throat; Nail:hammer::screw:screwdriver; whirlwind:breeze::downpour:... (Tuesday, January 16, 2007 at 6:16pm, by Tracie)

physics: A stone is thrown upward with a speed of 20 m/s. It is caught on its way down at a point 5.0 m above where it is thrown. A) How fast was it going when it was caught? B) How long did the trip take? (Wednesday, December 12, 2012 at 8:30pm, by Aaron)

English: Two men went on a fishing trip on the Gulf of Mexico and caught 25 fish. The verbs in this phrase are "went" and "caught", right, because it shows the action of what they did?? (Tuesday, May 11, 2010 at 10:04am, by Fabian)

Physics: 4. A football player passes the ball. The ball leaves his hand and is caught 1.7 seconds later by a receiver 30 meters away. (You may assume the ball was caught at the same height from which it was thrown.) a) What is the vertical component of the velocity of the ball just ... (Friday, October 5, 2012 at 1:14am, by Anonymous)

Grade 12 English: This is very repetitive. You also need specific examples of how the past has taught you valuable lessons which you can apply in the future. An example might be: Did you say something ugly to your best friend and as a result lose that friendship? What did you learn; how can ... (Wednesday, September 16, 2009 at 9:18am, by GuruBlue)

Self-Taught Mathematicians: Those of us who love math certainly know two basic facts about Mr. Gottfried Wilhelm Leibniz: (1) he and Newton (our calculus guy) did not get along, AND (2) Mr. Leibniz was a self-taught mathematician. My question is for all tutors here who love math. What is your view of ... (Friday, February 22, 2008 at 4:26pm, by Guido)

exceptional children: appropriate social skills in young children: A. are acquired over time. B. are easily taught. C. cannot be taught. or D. are not influenced by a child's temperament. (Tuesday, October 9, 2012 at 4:00pm, by tonia)

math help: Two hundred fish caught in Cayuga Lake had a mean length of 14.1 inches. The population standard deviation is 2.9 inches. (Give your answer correct to two decimal places.) (a) Find the 90% confidence interval for the population mean length. Lower Limit / Upper Limit (b) Find the... (Sunday, June 16, 2013 at 12:29pm, by Tom)

two girls caught 25 frogs. lisa caught four times as many as jen did. how many frogs did jen catch (Thursday, February 3, 2011 at 9:53pm, by bob)

two girls caught 25 frogs. Lisa caught four times as many as Jen did. how many frogs did Jen catch (Thursday, February 3, 2011 at 9:47pm, by bob)

maths: out of 45 people, 37 caught the train, 15 the bus, 5 walked. how many caught both bus and train? (Saturday, August 11, 2012 at 3:39am, by don)

math: two house spiders caught 14 flies in their respective webs over a week's time. The larger spider caught 2 more flies than the smaller spider. (Thursday, November 24, 2011 at 6:41am, by Anonymous)

physics: A ball is tossed vertically upward from the ground and caught at a height of 15 m after 1.85 seconds. a) What is the velocity with which it was thrown? b) What maximum height does it reach? c) What was the velocity when it was caught? (Monday, September 24, 2012 at 4:51am, by Julius muson)

And, ignore how I said that I do it. Do it your way, however you were taught. Your way IS correct. I just prefer to do it the way I was taught. I don't want to confuse you. :) (Tuesday, January 25, 2011 at 10:30pm, by helper)

physics and math: g = -9.8 m/s^2 if the positive direction for the vertical axis is defined as up. g = +9.8 m/s^2 if the positive direction for the vertical axis is defined as down. You can do it either way. (a) Assume that the keys are caught while moving up. If Y is measured upwards from the ... (Wednesday, October 8, 2008 at 11:42pm, by drwls)

x = frogs Jen caught; 4x = frogs Lisa caught; x + 4x = 25. Solve for x. (Thursday, February 3, 2011 at 9:47pm, by helper)

physics: A baseball is hit vertically upward and is caught by the catcher 2.68 s later. Assume the ball was hit and caught at a height of 1.00 m. What is the velocity of the ball when it is hit off the bat? What is the ball's maximum displacement? (Wednesday, February 26, 2014 at 9:28pm, by Anonymous)

On the first day of fishing season, 70% of the 125 people fishing caught fish. Find what percent of the people caught fish. (Tuesday, February 25, 2014 at 9:15pm, by Alexa Zimmerman)

On the first day of fishing season, 70% of the 125 people fishing caught fish. Find what percent of the people caught fish. (Tuesday, February 25, 2014 at 10:03pm, by keri smith)

English: Because the burglar was terrified he'd be caught, he climbed hastily out of the window, not noticing the water-butt. Because the burglar got stuck in the water-butt, he was caught by a passing policeman. Sra (Sunday, February 28, 2010 at 8:46pm, by SraJMcGin)

College Algebra 2: To determine the number of deer in a game preserve, a conservationist catches 810 deer, tags them and lets them loose. Later 684 deer are caught; 171 of them are tagged. How many deer are in the... (Sunday, September 25, 2011 at 9:57pm, by Francine)

math: A baseball is struck by a bat and 3 seconds later it is caught 30 m away. If it is one meter above the ground when struck and caught, find the greatest height it reached above the ground. (Monday, January 9, 2012 at 5:21pm, by ellah)

Which of the following statements is true? a. Language is learned only when it is taught b. Language cannot be taught c. Maturation alone accounts for the development of language d. Most children are born with a potential for language. I think a; not sure though, am I correct? (Monday, February 18, 2013 at 5:24pm, by tim)

English: Thank you very much. Here are some sentences I'd like you to check. I included my doubts in brackets. 1) They trained him to be good and not evil. (bad?) Miranda taught Caliban how to put his thoughts in(to) words without gabbling like an animal. 2) Prospero taught him the ... (Friday, April 29, 2011 at 3:57am, by Henry1)

We cannot see what the plots in your figure look like. The boat will travel backwards between the time the sack is thrown and when it is caught. While the sack is in the air, the boat will have a constant negative velocity. After it is caught, its velocity will again be zero. (Wednesday, February 9, 2011 at 5:04am, by drwls)

math: Together, Misty and Keith caught 29 fish. Keith caught 5 fewer fish than Misty. How many did they each catch? (Wednesday, October 10, 2012 at 5:42pm, by Bert)

physics: If a rod is moving at a velocity equal to 1/2 the speed of light parallel to its length, what will a stationary observer observe about its length? The length of the rod will become exactly half of its original value. The length of the rod remains the same. The length of the ... (Monday, October 22, 2012 at 4:39pm, by laurie)

physics: A student throws a set of keys vertically upward to her sorority sister, who is in a window 3.80 m above. The keys are caught 1.80 s later by the sister's outstretched hand. (b) What was the velocity of the keys just before they were caught? (Wednesday, September 11, 2013 at 2:49pm, by Kelsey)

Exactly what are your instructions for this assignment? What have you been taught about how to organize your thoughts before starting to write any paper? What have you been taught about how to be concise and avoid wordiness? (Monday, January 28, 2013 at 11:59am, by Writeacher)

In NX3 the N would be +3 and the X would be +1 or -1. This means that it would have to be HX and OX2, because it would have H+ and X- and O2- X-. I'm not sure how you were taught, but I was taught to cross the charges. (Tuesday, February 25, 2014 at 9:08pm, by Anonymous)

Math...For All Math Tutors: I have taught basic statistics as part of my intro psychology course at the community college level, where I taught for almost 40 years. I am currently retired. I enjoy helping others understand more about their world, even math. (Friday, February 15, 2008 at 10:29am, by PsyDAG)

A friend claims that the average length of trout caught in this lake is 19 inches. To test this claim we find that a sample of 13 trout has a mean length of 18.1 inches with a sample standard deviation of 3.3 inches. The population standard deviation is unknown. If you assume ... (Saturday, December 22, 2012 at 9:49pm, by Alison)

I was basically an English and social studies teacher in 7th and 8th grade. Occasionally, I also taught classes in 7th grade math. I taught in a small district in Michigan for 32 years. (Thursday, January 17, 2013 at 7:10pm, by Ms. Sue)

physics: Billy kicks a soccer ball with an initial velocity of 24 ft/sec into the goal. How long was the ball in the air? If the goalie caught the ball at a height of 4 feet from the ground, how long was the ball in the air before it was caught? (Tuesday, May 14, 2013 at 11:51am, by Vanessa)

5th grade math: I taught math for over 35 years and I don't have a clue what you mean by "regroup when multiplying a 3 digit number by a 2 digit number". What new-fangled way are they using now, and where is that being taught? (Friday, October 10, 2008 at 5:38pm, by Reiny)

English: Make two clear sentences out of these five, avoiding 'so' and 'then'. The burglar climbed hastily out of the window. The burglar was terrified he'd be caught. The burglar didn't notice the water-butt. The burglar got stuck in the water-butt. The burglar was caught by a passing... (Sunday, February 28, 2010 at 8:46pm, by Sara)

If the thesis is that children should be taught to appreciate nature, I don't think the author has proven it. I think the author has proved the negative -- what happens when children are not taught or experienced nature. I don't believe he's proven the thesis -- that children ... (Saturday, March 15, 2008 at 9:09pm, by Ms. Sue)

physics: You should have been taught about this formula for the wave speed, V: V = sqrt(T/d), where T = tension = 49 N and d = density per length = 0.04/8 = 0.005 kg/m, so V = sqrt(49/0.005) = 99 m/s. (Thursday, March 22, 2012 at 7:34am, by drwls)

The speed of waves in the rope depends upon the mass per unit length of the rope and the tension in the rope. That is probably not something they have taught you at this point. The mass per length is constant and the tension probably is also, but that depends upon how the ... (Saturday, April 5, 2008 at 2:50am, by drwls)

Two girls caught 25 frogs. Lisa caught four times as many as Jen did. How many frogs did Jen catch? Lisa => 4x, Jen => x; 4x + x = 25; x = 5; Lisa => 20, Jen => 5. Jen caught 5 frogs. (Thursday, December 28, 2006 at 1:13am, by kievah)

physics: If a rod is moving at a velocity equal to 1/2 the speed of light parallel to its length, what will a stationary observer observe about its length? The length of the rod will become exactly half of its original value. The length of the rod remains the same. The length of the ... (Tuesday, October 23, 2012 at 10:24am, by Laurie)

6/10 is how long he taught 8th grade, so 10 - 6 = 4 is how long he taught something else. The new fraction would be 4/10, but if you simplify it, 2 being the common factor in both, you would get 2/5. (4 = 2 * 2, 10 = 2 * 5) (Sunday, December 12, 2010 at 2:39pm, by Robert)

7th grade math: I taught in a rural school in southwestern Michigan. Mostly I taught middle school English and social studies with an occasional 7th grade math class to keep me on my toes.
:-) Monday, May 20, 2013 at 7:45pm by Ms. Sue Can someone please help me with the following 2 questions? I wasn't sure if I was grasping the whole idea. **Consider a baseball that is caught and then thrown at the same speed. Which case illustrates the greatest change in momentum, which requires the greatest impulse: the ... Wednesday, November 18, 2009 at 5:05pm by Amanda Write an equation for the problem. And solve it. Sierra caught 3 times as many fish as Lily. They caught a total of 20 fish. 3x*2=20 ---is this right? if it is plz help me solve it thanks u You can say 3n+n=20 n= lily's fish. Friday, November 24, 2006 at 2:24pm by Maria college physics a tennis ball is thrown straight up with an initial velocity of 22.5 m/s. it is caught at the same distance above the ground. a). how long does it take to reach its maximum height? b). how high does the ball rise? c). at what speed does it hit the ground? d). what total length... Tuesday, April 2, 2013 at 11:53pm by confused The length of the rod will become exactly half of its original value. The length of the rod remains the same. The length of the rod will decrease. The length of the rod will increase. Wouldn't the length double, thus being answer D? Tuesday, October 23, 2012 at 10:26am by Laurie The length of the rod will become exactly half of its original value. The length of the rod remains the same. The length of the rod will decrease. The length of the rod will increase. Wouldn't the length double, thus being answer D? Wednesday, October 24, 2012 at 5:07pm by Laurie fine arts 1.Because of the style of some sculptures found in a Romanesque church, they are sometimes called: A. Bibles in Stone B. minaret C. illuminated manuscripts D. Psalters i think its either A or C 2.In which of the following painting is pigment applied to plaster? A. Maest'a B. ... Thursday, January 8, 2009 at 6:13pm by y912f If 5 fishermen catch 5 fish in 5 minutes, how many minutes will it take 50 fishermen to catch 50 fish? 
If 5 fishermen catch 5 fish in 5 minutes, then the average time it takes each fisherman is 5 minutes to catch one fish. It will take 50 fishermen 5 minutes to catch 50 fish... Tuesday, September 19, 2006 at 9:45am by Sam As a baseball is being caught, its speed goes from 30.0m/s to 0m/s in about 0.0050s. The mass of the baseball is 0.145kg A) what is the baseball's acceleration B) what are the magnitude and direction of the force acting on it? C) what is the magnitude and direction of the ... Thursday, January 23, 2014 at 9:30pm by Sandara a 2.5 kg ball moving at 7.50 m/s is caught by a 70.0 kg man while the man is standing on ice. how fast will the man / ball combination be moving after the ball is caught by the man? Thursday, February 20, 2014 at 8:16pm by Gaby Math (logic and problem solving) a rubber ball is dropped from a building that is 16 meters high. Each time the ball bounces, it bounces up half as high as the previous bounce. It is caught by Romer, the wonder dog, when it bounces 1 meter high. How many meters did the ball travel before it was caught? Sunday, October 30, 2011 at 10:30pm by sally College Physics Ok, the radius of rotation is length*sin30 where length is the length of string. looking at vectors, mg is down, and mv^2/r is outward, so tan30=mv^2/(r*mg) tan 30= v^2/(length*sin30*g) but v= 2PI (length*sin30) square that, put it in the equation tan30=4PI^2*length*sin30/g ... Wednesday, November 10, 2010 at 1:47pm by bobpursley The length of the rod will become exactly half of its original value. The length of the rod remains the same. The length of the rod will decrease. The length of the rod will increase. Wouldn't the length double, thus being answer D? No one has answered this question yet. Monday, October 22, 2012 at 8:35pm by Laurie PE highest: mgh PE at catch: mgh For the last two, you have to know more, like the angle it was hit at. Let's assume then it had no horizontal velocity, so all KE was vertical velocity.
KE caught then is the difference between PE highest and PE caught mg(21.8-2.5) Velocity ... Wednesday, October 24, 2012 at 7:38pm by bobpursley a ball bounces three times before it is caught. it initially falls 5 m, then rises back up to 4m, then falls 4m, rises back up to 3m, then falls 1.5m where it is caught. what is the ball's displacement from its original position 5m above the ground? Wednesday, February 12, 2014 at 5:06pm by cathy A quarterback throws a football with a velocity vo at an angle of 45o with the horizontal. At the same instant a receiver standing 20 ft in front starts running down the field at 15 ft/s and catches the ball. What is the distance of the receiver from the quarterback when the ... Monday, January 30, 2012 at 12:22am by ali 1) He tells how his ship was caught in a violent storm, then by mist and snow and finally was surrounded by ice. You have a three-part series here, but the second part in the series has no verb. You have this: "caught ... , _____ by mist and snow, and ... surrounded ..." AND ... Wednesday, November 16, 2011 at 3:36pm by Writeacher English(Review Paper) "her mother passing" = her mother's passing away? "and her friend’s right" = her friends "drains all off" OR "drains off all...?" Because you began in the Present Tense, you should stay there. "decides" they got caught = they get caught? and decide... There are several ... Wednesday, December 7, 2011 at 7:37pm by SraJMcGin World 201 8. Confucius taught that filial piety had to do with the care and provision parents religiously gave to their children? False 9. Daoism is the belief that all people should reject social harmony and work to maintain a good and strong government? False 10. Legalism was a school... Sunday, March 8, 2009 at 3:13pm by amy Three hundred fish are tagged and released into a pond. A month later 100 fish are caught and then released back into the same pond. Of those 100 fish caught only 15 had tags. How many fish are in the pond?
Is the answer 2000 fish? Tuesday, May 1, 2012 at 10:40pm by Donna classroom instruction I taught the noncontroversial subjects as I would math -- with wrong and right answers. With controversial subjects, I taught the basics of the different points of view and invited serious class discussions on these subjects. It's important that the students realize that there... Wednesday, March 10, 2010 at 9:40pm by Ms. Sue p = 2*width + 2*length 24x + 18y + 32 = 2(3x + 4y + 1) + 2*length 24x + 18y + 32 = 6x + 8y + 2 + 2*length 18x + 10y + 30 = 2*length 9x + 5y + 15 = length Monday, December 19, 2011 at 12:49pm by Steve biology 110 Absorbance is a term used by chemists. It is the log to base 10 of Io/I, where Io is the intensity of light (or infrared radiation) entering a sample and I is what comes out at a specific wavelength. It is related to path length (cm), species concentration (moles/liter) and a ... Saturday, February 23, 2008 at 9:06am by drwls how to rewrite rule in symbols: At a certain time of day, the length of a shadow s cast by an object is twice the length of the object L each length h in a copy 1/100 the length k in the original Tuesday, November 30, 2010 at 6:57pm by ursula how to rewrite rule in symbols: At a certain time of day, the length of a shadow s cast by an object is twice the length of the object L each length h in a copy 1/100 the length k in the original Tuesday, November 30, 2010 at 9:34pm by ursula A ball is thrown upward. After reaching a maximum height, it continues falling back toward Earth. On the way down, the ball is caught at the same height at which it was thrown upward. If the time (up and down) the ball remains in the air is 2.3 s, find its speed when it ... Thursday, September 15, 2011 at 4:12pm by margaret I am stuck on this one. Would you like to read this detective novel? I am torn between adjective and noun. Noun because it is a direct object or adjective because it describes you? And this one.
My grandmother taught me to make lentil soup. I am thinking adverb because it ... Wednesday, November 28, 2012 at 5:50pm by Kelley v = length*4*1 = 4*length mass = volume x density mass = 4*length*7.87 = 31.48*length Friday, August 27, 2010 at 11:47pm by DrBob222 At her wedding, Jennifer lines up all the single females in a straight line away from her in preparation for the tossing of the bridal bouquet. She stands Kelly at 1.0 m, Kendra at 1.5 m, Mary at 2.0 m, Kristen at 2.5 m, and Lauren at 3.0 m. Jennifer turns around and tosses ... Wednesday, October 14, 2009 at 7:48pm by jamie A student throws a set of keys vertically upward to her sorority sister, who is in a window 4.30 m above. The keys are caught 1.30 s later by the sister's outstretched hand. (a) With what initial velocity were the keys thrown? _____________ m/s upward (b) What was the velocity... Thursday, October 8, 2009 at 7:03pm by blair How do I find a missing length to a triangle? The lengths I have are 18 for one side 27 for the other and (x) for the bottom length. Then for the other triangle given the lengths are 14 for one length, 21 for the other length and 28 for the bottom length. I am trying to solve ... Saturday, October 17, 2009 at 1:11pm by Anonymous Any verb using "have" or "has" as its auxiliary verb is in the present perfect tense. has walked have walked has taught have taught has gone have gone etc. Any verb using "had" as its auxiliary verb is in the past perfect tense. had walked had taught had gone etc. The same can... Tuesday, February 14, 2012 at 8:56pm by Writeacher Lang. Arts Which of the following groups of words in bold is nonrestrictive and should therefore be set off with commas in the sentence? A. My yarn which is blue and gray will make a very pretty scarf. B. Hats knitted by my mother are worn all winter by my family. C. The person who taught ... Thursday, February 21, 2013 at 11:51pm by Cassie
What does the Jacobian of a curve tell us about the curve? A natural object in the study of curves is the Jacobian of a curve. What are some natural geometric properties of the curve that the Jacobian encapsulates? In other words, what can the Jacobian tell us about the curve that we didn't know already? Note, I am asking for concrete examples, statements like "The Jacobian having property blah implies the curve has property blah." Ideally these will be statements that are easier to prove using the Jacobian (whose construction is not so easy!) rather than directly from the curve. (Also, if this question is more appropriate for math.se, I'd be happy to delete it.) Tags: ag.algebraic-geometry, algebraic-curves. Comments: Since there isn't a single correct answer, and could generate a big list, this should probably be a community wiki. – Karl Schwede Aug 10 '13 at 23:12 | By Torelli's theorem, everything. – Felipe Voloch Aug 10 '13 at 23:31 | @FelipeVoloch: not necessarily. You need the Jacobian plus the theta divisor (or polarization) to tell you everything - there could be two non-isomorphic curves with isomorphic Jacobians. – Abhinav Kumar Aug 11 '13 at 2:17 | You could also look at the question mathoverflow.net/questions/128593/… – Dan Petersen Aug 11 '13 at 6:55 | Abhinav: I believe "the Jacobian" usually denotes the polarized abelian variety, so I agree with Felipe. – roy smith Aug 12 '13 at 5:25 2 Answers: The question seems fine to me. Off the top of my head: 1) The Jacobian is a group, and in fact an abelian variety, whereas the curve usually isn't. This gives you a lot of structure to play with that you didn't have initially. For example, to show that a general curve doesn't map onto a curve of smaller positive genus, you can use the fact that the Jacobian of such a curve is simple. 2) The Jacobian is the motive of the curve, loosely speaking.
In particular, all cohomological information about the curve can be read off from its Jacobian. E.g. étale cohomology $H^1(X,\mathbb{Z}/n)$ is just the group of $n$-torsion points (up to twist if you're a stickler). I believe that Weil first constructed the Jacobian in the abstract setting precisely for this reason. 3) It has not just one but two universal properties. It's the universal abelian variety the curve maps to, and it's also the universal parameter space for divisor classes of degree $0$ (i.e. it's both $Alb$ and $Pic^0$). Could it be possible to formulate a universal property in terms of cohomology or even motives? (i.e. it's the universal thing the curve maps into inducing an isomorphism on first cohomology in all cohomology theories). – David Corwin Aug 11 '13 at 8:44 I think at least historically the Jacobian is related to the function theory over a curve, which was one of the main areas of research back in the 19th century. At that time, given a compact Riemann surface $X$ over $\mathbb{C}$, the question was to understand the behavior of holomorphic and meromorphic functions on this curve. If we have two effective divisors $D$ and $E$ on $X$, when is $D-E$ the divisor of zeros and poles of a meromorphic function $f$ on $X$? Let the genus of $X$ be $g$. Then there are $g$ basis elements of the vector space of differential forms on $X$. The clever solution that Abel proposed for this question was this: let $\omega_{1}$,...,$\omega_{g}$ be the generators of $\Omega(X)$, the space of holomorphic differentials of $X$. Given a closed path $\gamma$ in $X$, the set $L=\{(\int_{\gamma}\omega_{1},...,\int_{\gamma}\omega_{g})\}$, taken over all closed paths $\gamma$, is additive in $\mathbb{C}^{g}\cong\Omega(X)$ because of the additivity property of integrals, and in fact is a lattice. Therefore we can quotient out and get a group $\mathbb{C}^{g}/L$.
We also get a map $A:X\rightarrow J(X)$ by choosing a base point $p_{0}$ and sending each point $p\in X$ to $(\int_{p_{0}}^{p}\omega_{1},...,\int_{p_{0}}^{p}\omega_{g})$ mod $L$. Abel realized that two divisors $D$ and $E$ (viewed as collections of points on $X$) are linearly equivalent if and only if they have the same image under the map $A$. Note that the map $A:X\rightarrow J(X)$ is in itself a very interesting map: we have constructed an almost natural holomorphic map from $X$ to a variety that has the structure of a group. At first glance it is not at all clear that we can have such a map. The second funny property is that this map is not injective if and only if $X\cong \mathbb{P}^{1}$. Comment: to add to this nice answer, note that the Abel map induces one on every symmetric product X^(d)-->J, whose fibers are linear systems ≈ P^r, and hence whenever d > dim J = g(X), there must be divisors of degree d that are linearly equivalent. In particular dim h^0(D) ≥ d-g. – roy smith Aug 16 '13 at 1:35
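Abel's criterion described in this answer can be written out compactly; the following display is a restatement in the answer's own notation, added only for readability:

```latex
A(p) \;=\; \Big(\textstyle\int_{p_0}^{p}\omega_1,\ \ldots,\ \int_{p_0}^{p}\omega_g\Big) \bmod L \;\in\; J(X),
\qquad
\sum_i p_i \;\sim\; \sum_i q_i
\;\iff\;
\sum_i A(p_i) \;=\; \sum_i A(q_i) \ \text{in } J(X),
```

for effective divisors $\sum_i p_i$ and $\sum_i q_i$ of the same degree.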
[SciPy-user] [newb] how to create arrays Anne Archibald peridot.faceted@gmail.... Thu Jan 3 01:56:05 CST 2008 On 02/01/2008, Neal Becker <ndbecker2@gmail.com> wrote: > How would I create a vector of complex random variables? > I'm thinking the best way is to create a complex vector, then > assign to the real and imag parts (using e.g., random.standard_normal). > I don't see any way to create an uninitialized array. I guess I'd have to > use zeros? Is there any way to avoid the wasted time of initializing just > to write over it? You can create an uninitialized array if you want to. But, from your question, you may well be thinking about your problem in the wrong way. If all you want to do is store a whole bunch of values you compute in a loop, use a python list. Python lists are really quite efficient and convenient. The point of scipy is that it lets you operate on the whole vector at once: a = numpy.linspace(0,numpy.pi,1000) b = numpy.sin(a) print numpy.average(b) The main reason for this is conceptual clarity: you can start thinking of array operations as single steps in your program, allowing you to do more with the same size of program. Secondarily, using the numpy functions allows the looping to be done in compiled code, which is much faster than python code. If you are filling an array, element by element, with a loop in python, the time the python code takes to run will be much longer than the time spend initializing with zeros. Thus numpy.empty() gets much less use than you might expect. Look into using numpy functions - linspace, arange, zeros, ones, exp/sin/arctan/etc. to create your array. Good luck, More information about the SciPy-user mailing list
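To make the reply concrete, here is a minimal sketch of both approaches discussed in the thread (the vector length is illustrative; `standard_normal` and `empty` are the NumPy functions named above):

```python
import numpy as np

n = 1000

# One-step, vectorized construction of a complex random vector:
# the looping happens in compiled code, as the reply recommends.
z = np.random.standard_normal(n) + 1j * np.random.standard_normal(n)

# The "uninitialized array" route the original poster asked about:
# np.empty allocates without initializing, then we fill both parts.
w = np.empty(n, dtype=complex)
w.real = np.random.standard_normal(n)
w.imag = np.random.standard_normal(n)
```

In practice the one-line vectorized form is usually preferred; the time saved by `np.empty` over `np.zeros` is rarely noticeable next to the cost of any Python-level loop.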
2275 -- Flipping Pancake Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 1393 Accepted: 609 Special Judge We start with a stack of n pancakes of distinct sizes. The problem is to convert the stack to one in which the pancakes are in size order with the smallest on the top and the largest on the bottom. To do this, we are allowed to flip the top k pancakes over as a unit (so the k-th pancake is now on top and the pancake previously on top is now in the k-th position). For example: This problem is to write a program which finds a sequence of at most (2n - 3) flips that converts a given stack of pancakes to a sorted stack. Each line of the input gives a separate data set as a sequence of numbers separated by spaces. The first number on each line gives the number, N, of pancakes in the data set. The input ends when N is 0 (zero) with no other data on the line. The remainder of the data set are the numbers 1 through N in some order giving the initial pancake stack. The numbers indicate the relative sizes of the pancakes. N will be, at most, 30. For each data set, the output is a single-space separated sequence of numbers on a line. The first number on each line, K, gives the number of flips required to sort the pancakes. This number is followed by a sequence of K numbers, each of which gives the number of pancakes to flip on the corresponding sorting step. There may be several correct solutions for some datasets. For instance 3 3 2 3 is also a solution to the first problem below. Sample Input Sample Output
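A standard approach that meets the (2n - 3) bound is a selection sort over flips: repeatedly bring the largest out-of-place pancake to the top, then flip it down into position. The sketch below is one such solution (since the judge is special, any valid flip sequence is accepted), not the reference implementation:

```python
def pancake_sort(stack):
    """Return a list of flip sizes that sorts `stack` (a permutation of
    1..n) so the smallest pancake ends on top, the largest on the bottom."""
    a = list(stack)          # index 0 is the top of the stack
    flips = []

    def flip(k):             # reverse the top k pancakes
        a[:k] = reversed(a[:k])
        flips.append(k)

    # Place pancakes n, n-1, ..., 2 in turn.  Each needs at most two
    # flips, and the last one placed (size 2) needs at most one, so
    # the total is at most 2(n - 2) + 1 = 2n - 3 flips.
    for size in range(len(a), 1, -1):
        pos = a.index(size)
        if pos == size - 1:  # already in place
            continue
        if pos != 0:
            flip(pos + 1)    # bring pancake `size` to the top...
        flip(size)           # ...then flip it down into position
    return flips
```

For the instance `[1, 3, 2]` this yields the flips `2 3 2`, i.e. three flips, within the 2n - 3 = 3 bound for n = 3.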
Calculation of roof members It is important to calculate the dimension of the true length of a roof member. These dimensions are essential to determine the accurate span of rafters, overhangs, underpurlins, fan struts etc. Click here if you have forgotten the trigonometry functions. You can also use the Roof Multiplier Table, which you can print from the web page. Find the span of a rafter. The AS 1684.2 – 2006 Residential timber-framing construction clearly differentiates between spacing and span (see Fig. 2.18 and Paragraph 2.7.5). 2.7.5.2 Spacing: the centre-to-centre distance between structural members, unless otherwise indicated. 2.7.5.3 Span: the face-to-face distance between points capable of giving full support to structural members or assemblies. In particular, rafter spans are measured as the distance between points of support along the length of the rafter and not as the horizontal projection of this distance. 2.7.5.4 Single span: the span of a member supported at or near both ends with no intermediate supports. 2.7.5.5 Continuous span: the term applied to members supported at or near both ends and at one or more intermediate points such that no span is greater than twice another. Span/length calculation for roof members. As can be seen, the span of the rafter in the figure below is not quite in agreement with the Code, but we will use the figures for the rafter span as calculated below. We use trigonometric functions for the calculation: the angle is the roof pitch, the opposite side is the rise of the roof, the adjacent side is the rafter run, and the hypotenuse is the true length of the rafter. The example shows brick veneer construction (see Figure 1): the roof pitch is 20°, the rafter run is 3.340 metres and, for an eaves width of 0.6 metres, the actual measurement to calculate the overhang is in this case 0.76 metres (0.60 + 0.16 face brick + cavity). Therefore the span of the rafter is 3.554 metres.
Please remember that the rafter span according to the Code is the face-to-face distance and that our calculated distance is 96 mm longer in this case (see Figure 2). This provides a safety margin, and in a borderline case, where the span is just a couple of mm too short, you may use the face-to-face distance for the span. Calculation of roof overhang You need to distinguish between clad timber framing and a brick-veneer construction to calculate the eaves overhang. Look also at Figure 2.18 (b) in Section 2 of the code regarding the length of the overhang. Let's assume an eaves width of 600 mm. The horizontal dimension for the overhang is then 600 mm plus the 110 mm brick wall plus the 50 mm cavity, which equals 760 mm. Now do the same as for the rafter (the overhang is in this case the hypotenuse): 0.760 / cos 20° = 0.809. The eaves overhang for the brick veneer building with 600 mm eaves width is 0.809 metres. Find the length of a strut If a rafter is supported at two points only (single span) and an appropriate size cannot be found in the tables, then an additional support (underpurlin) is needed. Underpurlins must be supported by struts. Struts may be arranged vertically as shown in Figure 4 (a) or perpendicular to the rafter (b). The position of the underpurlin must be determined before you can calculate the length of the strut. To utilise the continuous span of a rafter the position of the underpurlin must be in the middle one-third of the rafter as shown in Figure 5. Remember that a continuous span member is a member whereby no span is greater than twice another (Section 2, Paragraph 2.7.5.5).
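The rafter-span and overhang calculations above can be checked with a few lines; the figures are the worked example's (20° pitch, 3.340 m rafter run, 0.760 m horizontal overhang dimension), and note that `math` trig functions take radians:

```python
import math

pitch = math.radians(20)       # roof pitch
run = 3.340                    # rafter run in metres (adjacent side)
overhang_horizontal = 0.760    # 600 eaves + 110 brick + 50 cavity, in metres

# True lengths are hypotenuses: divide the horizontal dimension by cos(pitch).
rafter_span = run / math.cos(pitch)
overhang = overhang_horizontal / math.cos(pitch)

print(round(rafter_span, 3))   # 3.554
print(round(overhang, 3))      # 0.809
```

This reproduces the 3.554 m rafter span and 0.809 m eaves overhang quoted in the text.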
Roof struts can be applied in various ways, some examples are: Verticle struts Perpendicular to the roof Roof struts 7.2.15.1 Where necessary, struts shall be provided to support roof members, such as underpurlins, ridgeboards and hip and valley rafters. Ridge strut This strut support the ridge down the center of the roof. If a ridge is strutted, then you need to find the length of the ridge strut. The length of the strut is found by using the opposite calculation: Vertical strut If the underpurlin is positioned at midspan, then the vertical underpurlin strut length equals half of the length of the ridge strut. Alternatively,by calculation ½ of the rafter run multiplied by the tangent of the roof pitch. Therefore the length of the vertical strut is: [ ] Perpendicular strut To calculate the length of a strut perpendicular to the rafter you need to calculate the rafter span (Hypotenuse) first. The rafter span (strut at midspan) is is half of the rafter run (3340/2) divided by cos 20°. Now you can use the tan-function again to calculate the strut perpendicular to the rafter.[] The length of the perpendicular strut is therefore Fan strut The span of an underurlin can be reduced if a fan-strut is used instead of a single strut. Paragraph 7.2.15.3 in AS 1684.2 - 2006 stipulates that the angle of a fan-strut should not exceed 45°. (A single strut should not be less than 30° from the vertical.) Fan struts are more effective with steeper roof pitches where the length of the strut is notable. To reduce the span of an underpurlin effectively the fan strut should have an angle of 45° because this results in a maximum spread of the fan strut. Find the spread of a fan strut. Geometrically the fan strut should consists of two isosceles right angle triangles as shown in Figure 6. As both angles are the same therefore both sides must be the same. 
Check on your calculator the sin of 45° (= 0,707) and cos 45° (= 0.707) and you will see that you will get equal figure for both (sin and cos). In the previous calculation the length of the vertical strut is 0.608 m (at midspan) and the length of the perpendicular strut 0.647 m (at midspan) The calculation of load width and roof area supported is easily understood if you consider the load on a structural member. Ask yourself what load is going onto a member. Study Section 2 of the code and look at the Figure 2.10 & 2.11 Floor Load Width (FLW), Figure 2.12 Ceiling Load Width (CLW) and Figure 2.13 - 2.16 Roof Load Width (RWL) and make sure you understand the significance of the load width. If you have queries regarding this matter seek clarification in class. Roof area supported For dimensioning of strutting beams (7.3.11), combined counter-strutting beams (7.3.10) or combined strutting/hanging beams (7.3.9) you need to know the roof area supported (RAS). The area can be easily found by be multiplying the RLW with the length of the underpurlin that the strut supports. Usually we select only one size for the underpurlin and therefore only the worst case need to be considered. Find out how many strutting beams are needed. As soon you have determined how many strutting beams are required identify the worst case. This situation will be used for the size selection of the underpurlin. Figure 7 below is an example that illustrate the process to find RAS. RAS equals the RLW of the underpurlin multiplied by the longest span of the underpurlin. Refer to Figure 7 to find the worst case in the roof structure (longest underpurlin span). The strut on the left is vertical because the span on the left side of the strut is less then the reduced span 1. The span between the struts supported on walls is excessive and a strutting beam or combined counter-strutting beam is required. 
A fan struts have been chosen (see Figure 7) to reduce the span of the underpurlin even more (span u/p 1 and span u/p 2). As can be seen span u/p 1 has been reduced by ½ spread of the fan strut resulting in a reduce span 1. Span u/p 2 is reduced by the spread of the fan struts (left and right side) i.e. reduce span = span u/p 2 minus strut height &times2 ). Figure 7 Alternative strutting system Where it is not possible to support underpurlins off walls or struts some alternatives can be applied as shown in Figure 8. Often underpurlins are projected (cantilevered) more than 25% of the maximum allowable span then you may reinforce the hiprafter with a tie-bolt truss system. The hiprafter in this case will support the underpurlin. Figure 8 Span and spacing The Figure 8 shows you the difference between span and spacing of members and the load width (e.g. FLW in this case) for the middle bearer. The load area supported by the middle stump would be the floor load width (L1/2+L2 /2) &times Bearer span (i.e. half of the bearer span to the left and half of the bearer span to the right, as indicated by the blue area). Figure 9 Calculation set out All calculations should be done on a separate A4 sheet . Make sure it's logical set out because you may need to refer to previous calculation figures. Write all dimensions down as well as you calculated figures. Follow a similar procedure as shown below: 1. Rafter run = external width between the wall plates divided by two. 2. Rafter span = rafter run divided by cos 3. Overhang = eaves width divided by cos (add dimensions for brick veneer). 4. Ridge strut = rafter run times tan 5. Decide whether an underpurlin is needed; if it is place it at mid-span. 6. New rafter span = rafter span found in 2) divided by two. 7. Vertical strut to underpurlin = ridge strut length divided by 2 (if u/p positioned at midspan). 8. Strut perpendicular to rafter = rafter span time tan 9. Determine the position of struts (usually on supporting walls). 10. 
If the distance between supporting walls is excessive a strutting beam may be needed. 11. Span of underpurlin can also be reduced if fan-strut is used. 12. Determine the length of the strut and the dimensions between the struts (or fan-struts). 13. Roof load width (RLW) = rafter span (if placed at midspan) otherwise ˝ span1 + ˝ span2. 14. Roof load area = RLW &times (˝ u/p span left + ˝ u/p span right) or with fan struts RLW &times (˝ u/p span left + ˝ u/p span right + spread of fan strut). 15. Hanging beams are required if ceiling joist span is excessive. 16. Place hanging beams in center of room or if needed divide room length/width by 3 (4) and space them equally. Click here for a Calculation Template that you can print and use back to Timber Framing contents page
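Steps 1 to 4 above are simple trigonometry. As a quick illustrative sketch (the function name and the sample dimensions are mine, not from the worksheet):

```python
import math

def rafter_setout(external_width, pitch_deg, eaves_width):
    """Sketch of calculation steps 1-4 above; all names are illustrative."""
    pitch = math.radians(pitch_deg)
    rafter_run = external_width / 2             # step 1
    rafter_span = rafter_run / math.cos(pitch)  # step 2
    overhang = eaves_width / math.cos(pitch)    # step 3
    ridge_strut = rafter_run * math.tan(pitch)  # step 4
    return rafter_run, rafter_span, overhang, ridge_strut

# e.g. a 7.2 m wide building with a 25° pitch and 450 mm eaves
run, span, overhang, ridge = rafter_setout(7.2, 25, 0.45)
```

The remaining steps (underpurlin placement, strut lengths, load widths) follow the same pattern of halving spans and applying the pitch angle.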
MR336014 (49 #790) 35-06 (76.35)
Nonlinear wave motion. Proceedings of the Summer Seminar, sponsored by the American Mathematical Society and the Society for Industrial and Applied Mathematics, held at Clarkson College of Technology, Potsdam, N.Y., 1972. Edited by Alan C. Newell. Lectures in Applied Mathematics, Vol. 15. American Mathematical Society, Providence, R.I., 1974. viii+229 pp.
Probability Distributions
December 10th 2008, 08:15 AM #1 Dec 2008
A coin is tossed 8 times. Calculate the probability of:
a) tossing 6 heads and 2 tails.
b) tossing 7 heads and 1 tail.
c) tossing at least 6 heads.
Reply: Drawing a tree diagram will help you visualise and solve problems a and b fairly easily, but for problem c you have to use the binomial distribution: P(at least 6 heads) = 1 - P(X < 6). Hope it helps.
Could you go into greater detail please?
Reply: I guess drawing a tree diagram with 8 stages might be a little too complicated, but it will greatly help to analyse and solve the problem at hand. Anyway, for problem a the probability of obtaining a head or a tail on each toss is one half. Each particular sequence of 6 heads and 2 tails therefore has probability (0.5)^6 x (0.5)^2, and since there are C(8,6) = 28 such sequences, the answer to part a is C(8,6) x (0.5)^6 x (0.5)^2 = 28/256. Hope you will be able to do parts b and c.
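A quick numerical check of all three parts (Python, using `math.comb` from the standard library; the helper name is mine):

```python
from math import comb

def prob_heads(n, k, p=0.5):
    """P(exactly k heads in n tosses of a coin with head-probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

a = prob_heads(8, 6)                          # 6 heads, 2 tails -> 28/256 = 0.109375
b = prob_heads(8, 7)                          # 7 heads, 1 tail  ->  8/256 = 0.03125
c = sum(prob_heads(8, k) for k in (6, 7, 8))  # at least 6 heads -> 37/256 = 0.14453125
```

Part c sums the exact-count probabilities for 6, 7 and 8 heads, which is the same as 1 - P(X < 6).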
Common Names: Thickening
Brief Description
Thickening is a morphological operation that is used to grow selected regions of foreground pixels in binary images, somewhat like dilation or closing. It has several applications, including determining the approximate convex hull of a shape, and determining the skeleton by zone of influence. Thickening is normally only applied to binary images, and it produces another binary image as output. The thickening operation is related to the hit-and-miss transform, and so it is helpful to have an understanding of that operator before reading on.
How It Works
Like other morphological operators, the behavior of the thickening operation is determined by a structuring element. The binary structuring elements used for thickening are of the extended type described under the hit-and-miss transform (i.e. they can contain both ones and zeros). The thickening operation is related to the hit-and-miss transform and can be expressed quite simply in terms of it. The thickening of an image I by a structuring element J is:
thicken(I, J) = I ∪ hit-and-miss(I, J)
Thus the thickened image consists of the original image plus any additional foreground pixels switched on by the hit-and-miss transform. In everyday terms, the thickening operation is calculated by translating the origin of the structuring element to each possible pixel position in the image, and at each such position comparing it with the underlying image pixels. If the foreground and background pixels in the structuring element exactly match foreground and background pixels in the image, then the image pixel underneath the origin of the structuring element is set to foreground (one). Otherwise it is left unchanged. Note that the structuring element must always have a zero or a blank at its origin if it is to have any effect. The choice of structuring element determines under what situations a background pixel will be set to foreground, and hence it determines the application for the thickening operation.
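The matching procedure just described can be sketched in pure Python. This is an illustrative toy (function names and the tiny structuring element are mine, not the HIPR implementation): structuring-element entries are 1 (must be foreground), 0 (must be background) or None (don't care), with the origin at the element's centre.

```python
def hit_and_miss(img, se):
    """Return the set of (row, col) positions where the structuring
    element se exactly matches img, with the SE origin at its centre.
    Pixels outside the image are treated as background."""
    h, w = len(img), len(img[0])
    sh, sw = len(se), len(se[0])
    oy, ox = sh // 2, sw // 2  # origin of the structuring element
    hits = set()
    for r in range(h):
        for c in range(w):
            ok = True
            for i in range(sh):
                for j in range(sw):
                    want = se[i][j]
                    if want is None:
                        continue  # don't-care position
                    y, x = r + i - oy, c + j - ox
                    val = img[y][x] if 0 <= y < h and 0 <= x < w else 0
                    if val != want:
                        ok = False
            if ok:
                hits.add((r, c))
    return hits

def thicken_once(img, se):
    """One thickening pass: the original image plus the hit-and-miss hits."""
    hits = hit_and_miss(img, se)
    return [[1 if (r, c) in hits else img[r][c]
             for c in range(len(img[0]))]
            for r in range(len(img))]
```

For example, the 2x1 element [[1], [0]] (foreground above, background at the origin) switches on any background pixel directly below a foreground pixel. Iterating `thicken_once` until the image stops changing gives thickening to convergence.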
We have described the effects of a single pass of a thickening operation over the image. In fact, the operator is normally applied repeatedly until it causes no further changes to the image (i.e. until convergence). Alternatively, in some applications, the operations may only be applied for a limited number of iterations. Thickening is the dual of thinning, i.e. thinning the foreground is equivalent to thickening the background. In fact, in most cases thickening is performed by thinning the background.
Guidelines for Use
We will illustrate thickening with two applications: determining the convex hull, and finding the skeleton by zone of influence, or SKIZ. The convex hull of a binary shape can be visualized quite easily by imagining stretching an elastic band around the shape. The elastic band will follow the convex contours of the shape, but will `bridge' the concave contours. The resulting shape will have no concavities and contains the original shape. Where an image contains multiple disconnected shapes, the convex hull algorithm will determine the convex hull of each shape, but will not connect disconnected shapes, unless their convex hulls happen to overlap (e.g. two interlocked `U'-shapes). An approximate convex hull can be computed using thickening with the structuring elements shown in Figure 1. The convex hull computed using this method is actually a `45° convex hull' approximation, in which the boundaries of the convex hull must have orientations that are multiples of 45°. Note that this computation can be very slow.
Figure 1 Structuring elements for determining the convex hull using thickening.
During each iteration of the thickening, each element should be used in turn, and then in each of their 90° rotations, giving 8 effective structuring elements in total. The thickening is continued until no further changes occur, at which point the convex hull is complete. The example input is an image containing a number of cross-shaped binary objects.
Applying the 45° convex hull algorithm described above yields the convex hull of each cross. This process took a considerable amount of time --- over 100 thickening passes with each of the eight structuring elements! Another application of thickening is to determine the skeleton by zone of influence, or SKIZ. The SKIZ is a skeletal structure that divides an image into regions, each of which contains just one of the distinct objects in the image. The boundaries are drawn such that all points within a particular boundary are closer to the binary object contained within that boundary than to any other. As with normal skeletons, various possible distance metrics can be used. The SKIZ is also sometimes called the Voronoi diagram. One method of calculating the SKIZ is to first determine the skeleton of the background, and then prune this until convergence to remove all branches except those forming closed loops, or those intersecting the image boundary. Both of these concepts are described (applied to foreground objects) under thinning. Since thickening is the dual of thinning, we can accomplish the same thing using thickening. The structuring elements used in the two processes are shown in Figure 2.
Figure 2 Structuring elements used in determining the SKIZ. 1a and 1b are used to perform the skeletonization of the background. Note that these elements are just the duals of the corresponding skeletonization-by-thinning elements.
On each thickening iteration, each element is used in turn, and in each of its 90° rotations. Thickening is continued until convergence. When this is finished, structuring elements 2a and 2b are used in similar fashion to prune the skeleton until convergence and leave behind the SKIZ. We illustrate the SKIZ using the same starting image as for the convex hull: first the skeleton of the background is found, and after pruning until convergence what remains is the SKIZ of the original image.
Since the SKIZ considers each foreground pixel as an object to which it assigns a zone of influence, it is rather sensitive to noise. If we, for example, add some `salt noise' to the above image and compute its SKIZ, we not only have a zone of influence for each of the crosses, but also one for each of the noise points. Since thickening is the dual of thinning, it can be applied to the same range of tasks as thinning. Which operator is used depends on the polarity of the image, i.e. if the object is represented in black and the background is white, the thickening operator thins the object.
Interactive Experimentation
You can interactively experiment with this operator by clicking here.
1. What would the convex hull look like if you used the structuring element shown in Figure 3? Determine the convex hull of the example image using this structuring element and compare it with the result obtained with the structuring element shown in Figure 1.
Figure 3 Alternative structuring element to determine the convex hull. This structuring element is used together with its 90° rotations.
2. Why is finding the approximate convex hull using thickening so slow?
3. Can you think of (or find out about) any uses for the SKIZ?
4. Use thickening and other morphological operators (e.g. erosion and opening) to process the example image: reduce all lines to a single pixel width and try to obtain their maximum length.
References
R. Gonzalez and R. Woods, Digital Image Processing, Addison-Wesley, 1992, pp. 518-548.
R. Haralick and L. Shapiro, Computer and Robot Vision, Vol. 1, Addison-Wesley, 1992, Chap. 5, pp. 168-173.
A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989, Chap. 9.
Local Information
Specific information about this operator may be found here. More general advice about the local HIPR installation is available in the Local Information introductory section.
©2003 R. Fisher, S. Perkins, A. Walker and E. Wolfart.
Self-study of Mathematics
September 22nd 2013, 09:12 AM #1
From the list below, what is the most effective way to do a deep self-study of math?
-answering math questions in forums
-learning from a textbook chapter by chapter
-learning from watching math video clips on YouTube
-help from an in-person tutoring service
Notice I did not list returning to school. This is about increasing personal math skills without going to college. What do you honestly think about a self-study of math?
Re: Self-study of Mathematics
As I have been studying on my own for the better part of 15 years, I'd say all of those courses of action would be great. It's not just about the text or how often you study; it's also about making sure you are getting the best benefit from your time. I've gotten to the point where I'm only studying one topic at a time. I think the best approach is to spend lots and lots of time with the text and to ask questions of anyone who is capable of correcting you or giving you advice. I'd also like to say that, depending on the material, you might well find that you can only get through the text so far. That's when some sort of tutoring (or an online forum or such) is probably your best bet.
Re: Self-study of Mathematics
The forum is for enrichment, but it could be a learning tool depending on how much time you spare for it. Learning from a textbook depends on how much time you spend on it and what book you are using. Most math books are not written at a respectable level, and many of them contain errors or false pretenses.
Watching videos is for passing the time. Keep in mind that learning mathematics is like a love-story movie. Person-to-person tutoring is good, but you will need a good book, and it is very expensive.
Re: Self-study of Mathematics
You said something interesting: studying one topic at a time. For example, if you decide to review the law of sines, how much time is sufficient before you move on to a new topic? Another question: why have you been studying math on your own for 15 years? Do you have a goal in mind? Is math just a passion? Is it a hobby? Thanks...
Re: Self-study of Mathematics
I find that many textbooks contain errors in the answer section of the book. I also dislike the fact that many, if not all, only contain answers to the odd-numbered exercises. I love the David Cohen Precalculus textbook. The questions are challengingly fun. Are you familiar with David Cohen?
Re: Self-study of Mathematics
As it happens I'm reasonably good at math, but I like physics more.
Re: Self-study of Mathematics
I admire your passion for math and physics. I also like physics. I just love equations.
I have two college degrees from two different CUNY schools in areas other than math. My greatest mistake was not to major in math. I will forever regret not majoring in something that I love. Just like you, I help students with math questions in various forums. However, there are questions and topics that I have forgotten how to do. That is why I joined this great site. Most of the questions I post here will be from textbooks. I have several math books covering algebra, geometry, trig, and calculus 1 and 2. I am learning calculus on my own. Right now, I am reviewing the product rule. Very cool stuff. The other day, my friend requested math help for her daughter in 9th grade. My mistake was to upload a picture of a math homework sheet revealing her name. I noticed that you became a little upset about my double posting. I really would like my recently uploaded pictures deleted. God forbid that my friend or her daughter should visit this site and see her name on the internet. I could get in trouble. I love helping students online. Lastly, I will post questions covering high school algebra through my self-study of calculus. Like I said before, I am learning calculus on my own and having lots of fun. Why take out a school loan at age 48 to take courses I can learn on my own with help from sites like this one? What do you say?
Re: Self-study of Mathematics
I just had the chance to peruse through it. I don't know if more than a thousand pages are justified for a pre-calculus book.
Re: Self-study of Mathematics
If I had the opportunity to teach a math course, it would undoubtedly be pre-calculus. This course covers a little bit of everything taught in high school in preparation for calculus 1. The topics in pre-calculus are sufficiently interesting and challenging. Pre-calculus questions are cool. It covers trigonometry as well. It covers matrix algebra.
It covers geometry and college algebra.
Re: Self-study of Mathematics
For learning methods, including ways to help retain learned material, it might be helpful to look at this blog, which is by a guy who learned MIT's four-year computer science course in one year using their free online course material. Here is his YouTube channel as well, which is partly methods of learning and partly progress updates: Scott Young - YouTube
Re: Self-study of Mathematics
As a tutor, I would say you can't beat one-to-one interaction and learning (my biased view of course), but self-study is easier now than it used to be because of the Internet/technology etc. I think YouTube and online videos are fantastic for learning mathematics.
Re: Self-study of Mathematics
I use the internet quite often to review math, but there are questions that I need help with beyond the video clip. This is why I decided to post questions here.
Re: Self-study of Mathematics
Thanks for your basic tutorial; I can now start on mathematics, and this post is a great help to me.
Recursive coalgebras from comonads - In APLAS, 2005
Cited by 18 (3 self)
Abstract. We propose a novel, comonadic approach to dataflow (stream-based) computation. This is based on the observation that both general and causal stream functions can be characterized as coKleisli arrows of comonads and on the intuition that comonads in general must be a good means to structure context-dependent computation. In particular, we develop a generic comonadic interpreter of languages for context-dependent computation and instantiate it for stream-based computation. We also discuss distributive laws of a comonad over a monad as a means to structure combinations of effectful and context-dependent computation. We apply the latter to analyse clocked dataflow (partial stream based) computation.
- In International Conference on Mathematics of Program Construction (MPC), Québec City, QC, 2010
Cited by 5 (1 self)
Abstract. Lenses are one of the most popular approaches to define bidirectional transformations between data models. A bidirectional transformation with view-update, denoted a lens, encompasses the definition of a forward transformation projecting concrete models into abstract views, together with a backward transformation instructing how to translate an abstract view to an update over concrete models.
In this paper we show that most of the standard point-free combinators can be lifted to lenses with suitable backward semantics, allowing us to use the point-free style to define powerful bidirectional transformations by composition. We also demonstrate how to define generic lenses over arbitrary inductive data types by lifting standard recursion patterns, like folds or unfolds. To exemplify the power of this approach, we “lensify” some standard functions over naturals and lists, which are tricky to define directly “by hand” using explicit recursion.
Cited by 3 (1 self)
Abstract. We study general structured corecursion, dualizing the work of Osius, Taylor, and others on general structured recursion. We call an algebra of a functor corecursive if it supports general structured corecursion: there is a unique map to it from any coalgebra of the same functor. The concept of antifounded algebra is a statement of the bisimulation principle. We show that it is independent from corecursiveness: neither condition implies the other. Finally, we call an algebra focusing if its codomain can be reconstructed by iterating structural refinement. This is the strongest condition and implies all the others.
- Mathematics of Program Construction, 8th International Conference, MPC 2006
Cited by 2 (0 self)
Dynamic programming is an algorithm design technique which makes it possible to improve efficiency by avoiding re-computation of identical subtasks. We present a new recursion combinator, the dynamorphism, which captures the dynamic programming recursion pattern with memoization, and identify some simple conditions under which functions defined by structured general recursion can be redefined as a dynamorphism. The applicability of the new recursion combinator is demonstrated on classical dynamic programming algorithms: Fibonacci numbers, binary partitions, edit distance and longest common subsequence.
Cited by 2 (1 self)
We instantiate the general comonad-based construction of recursion schemes for the initial algebra of a functor F to the cofree recursive comonad on F. Differently from the scheme based in a similar fashion on the cofree comonad on F, this scheme allows not only recursive calls on elements structurally smaller than the given argument, but also subsidiary recursions. We develop a Mendler formulation of the scheme via a generalized Yoneda lemma for initial algebras involving strong dinaturality and hint at a relation to circular proofs à la Cockett, Santocanale.
- In Mathematically Structured Functional Programming, Proceedings, Electronic Workshops in Computing. British Computer Society, 2006
Cited by 2 (1 self)
The design of programs as the composition of smaller ones is a widespread approach to programming. In functional programming, this approach raises the necessity of creating a good amount of intermediate data structures with the only aim of passing data from one function to another. Using program fusion techniques, it is possible to eliminate many of those intermediate data structures by an appropriate combination of the codes of the involved functions. In the standard case, no mention of the eliminated data structure remains in the code obtained from fusion. However, there are situations in which parts of that data structure become an internal value manipulated by the fused program. This happens, for example, when primitive recursive functions (so-called paramorphisms) are involved. We show, for example, that the result of fusing a primitive recursive function p with another function f may give as result a function that contains calls to f. Moreover, we show that in some cases the result of fusion may be less efficient than the original composition. We also investigate a general recursive version of paramorphism. This study is strongly motivated by the development of a fusion tool for Haskell programs called HFUSION.
Folds over inductive datatypes are well understood and widely used. In their plain form, they are quite restricted; but many disparate generalisations have been proposed that enjoy similar calculational benefits.
There have also been attempts to unify the various generalisations: two prominent such unifications are the ‘recursion schemes from comonads’ of Uustalu, Vene and Pardo, and our own ‘adjoint folds’. Until now, these two unified schemes have appeared incompatible. We show that this appearance is illusory: in fact, adjoint folds subsume recursion schemes from comonads. The proof of this claim involves standard constructions in category theory that are nevertheless not well known in functional programming: Eilenberg-Moore categories and bialgebras. The link between the two schemes is provided by the fusion rule of categorical fixed-point calculus.
Sorting algorithms are an intrinsic part of functional programming folklore as they exemplify algorithm design using folds and unfolds. This has given rise to an informal notion of duality among sorting algorithms: insertion sorts are dual to selection sorts. Using bialgebras and distributive laws, we formalise this notion within a categorical setting. We use types as a guiding force in exposing the recursive structure of bubble, insertion, selection, quick, tree, and heap sorts. Moreover, we show how to distill the computational essence of these algorithms down to one-step operations that are expressed as natural transformations. From this vantage point, the duality is clear, and one side of the algorithmic coin will neatly lead us to the other “for free”. As an optimisation, the approach is also extended to paramorphisms and apomorphisms, which allow for more efficient implementations of these algorithms than the corresponding folds and unfolds.
Dynamic programming algorithms embody a widely used programming technique that optimizes recursively defined equations that have repeating subproblems. The standard solution uses arrays to share common results between successive steps, and while effective, this fails to exploit the structural properties present in these problems. Histomorphisms and dynamorphisms have been introduced to express such algorithms in terms of structured recursion schemes that leverage this structure. In this paper, we revisit and relate these schemes and show how they can be expressed in terms of recursion schemes from comonads, as well as from recursive coalgebras. Our constructions rely on properties of bialgebras and dicoalgebras, and we are careful to consider optimizations and efficiency concerns. Throughout the paper we illustrate these techniques through several worked-out examples discussed in a tutorial style, and show how a recursive specification can be expressed both as an array-based algorithm as well as one that uses recursion schemes.
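As a rough illustration of the memoization idea these abstracts revolve around (this sketch is mine, not from any of the cited papers), Python's `functools.lru_cache` can play the role of the memo table that a histomorphism or dynamorphism threads through a structured recursion, turning the exponential naive Fibonacci recursion into a linear-time computation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Fibonacci by structured recursion; the cache shares the results of
    repeating subproblems, as an explicit array would in classic DP."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)  # each subresult is computed only once

# fib(10) == 55
```

The recursion-scheme formulations in the papers make this sharing part of the combinator itself rather than an external cache.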
Hi MaplePrimers! I have a simulation in MapleSim, exported as a compiled procedure in Maple using -LinkModel() and -GetCompiledProc. I'm trying to do parameter estimation on my MapleSim model. Within an optimization scheme, I call the MapleSim model, and it will output a curve. Using a least-squares method, I compare these measurements to synthetic experimental data (I know the actual values) and generate an objective function. The optimization algorithm will try different parameter values and try to minimize the objective function. When the curves are exactly the same, the objective function will be zero.
The problem I am having is that certain parameter sets will cause the model to require very small steps. I wish to put a timeout on these experiments, because speed is important. However, I would also like to see the results up to the point of requiring very small steps. For the timeout, I was using code along the lines of:
out := timelimit(30, cProc(params = PData)); # simulate with a 30 s limit
where PData are the parameter guesses, and cProc is the compiled MapleSim model. I would like 'out' to be assigned whatever the results were after 30 seconds, even if the model had not finished integrating.
Thanks in advance for any help!
Algorithms and Theory

Our dual objective is to pursue basic research on a broad range of theoretical topics and to impact real-world issues by applying our expertise to solving problems for IBM and its clients. We do basic research in a number of areas of theoretical computer science, including approximation algorithms, combinatorics, complexity theory, computational geometry, distributed systems, learning theory, online algorithms, cryptography and quantum computing. IBM researchers have access to an extensive array of challenging problems that motivate innovative solutions and, at the same time, constantly push the theoretical state of the art with the development of new algorithms and new optimization techniques. We provide innovative, custom solutions to business and industrial problems that are at the boundaries of what can be solved today.

Postdoctoral Fellows
• Qin Zhang: Algorithms for massive, streaming, or distributed data; external memory algorithms.

See also: Quantum Computing
Formality of Ext algebras and direct sums

Does taking direct summands/sums preserve formality of Ext-algebras? More precisely: given an abelian category, say linear over a field and with enough injectives, one gets an $A_\infty$-structure on the $Ext$-algebras of its objects. Let $X,Y$ be objects of our category. What is the relation between the following assertions (with additional assumptions if necessary)?

1) $Ext^\bullet(X,X)$ is formal and $Ext^\bullet(Y,Y)$ is formal
2) $Ext^\bullet(X\oplus Y,X \oplus Y)$ is formal
3) Two out of $Ext^\bullet(X,X)$, $Ext^\bullet(Y,Y)$ and $Ext^\bullet(X\oplus Y,X \oplus Y)$ are formal

1 Answer

My understanding is that formality of the DGA $Ext^\bullet(X\oplus Y,X\oplus Y)$ implies 1), but also formality of $Ext^\bullet(X,Y)$ and $Ext^\bullet(Y,X)$ as bimodules over $Ext^\bullet(X,X)$ and $Ext^\bullet(Y,Y)$, and that this is a much stronger condition. For instance, let $(E,p)$ be an elliptic curve over a field, work in the abelian category of coherent sheaves, let $X=\mathcal{O}$ and let $Y=\mathcal{O}_p$ be the skyscraper at $p$. Then $Ext^\bullet(X,X)$ and $Ext^\bullet(Y,Y)$ are both (intrinsically) formal, but $Ext^\bullet(X\oplus Y,X\oplus Y)$ knows the affine coordinate ring of $E\setminus\{p\}$ for the cubic embedding into $\mathbb{P}^2$. That's because one can iteratively build $\mathcal{O}(np)$ for $n>0$ as a twisted complex in $X$ and $Y$ (namely, $\mathcal{O}((n+1)p)$ is the twist of $\mathcal{O}(np)$ along the spherical object $Y$). Over an algebraically closed field, this gives a $j$-line of quasi-isomorphism classes of $A_\infty$-algebras $Ext^\bullet(X\oplus Y,X\oplus Y)$.

As requested, a bit more detail on why 2) implies 1), probably by too clunky an argument. Let $A=Ext^\bullet(X\oplus Y, X\oplus Y)$.
We can regard this as an ordinary graded $K$-algebra, in which case non-formality of the $A_\infty$-structure is detected by the primary deformation class in $HH^\bullet_K(A,A)$. That is: after transferring the DG structure to a minimal $A_\infty$-structure on $A$ using homological perturbation theory, the composition $\mu^3$ defines a Hochschild cocycle. If it is a coboundary then we can kill $\mu^3$ by a gauge transformation which leaves $\mu^1$ and $\mu^2$ untouched, whereupon $\mu^4$ is a cocycle; and so on. If the structure is not formal, one will eventually obtain a non-trivial Hochschild class, called the primary deformation class. We can alternatively regard $A$ as a 2-object graded-linear category, i.e., an algebra over $R=K\oplus K$, in which case non-formality is detected by a primary class in $HH^\bullet_R(A,A)$, defined similarly. But one checks using the bar resolution that $HH^\bullet_R(A,A)\cong HH^\bullet_K(A,A)$ as $K$-modules. Hence, if the algebra is formal, then so is the category; the restriction of the categorical primary deformation class to endomorphisms of $X$ is then trivial. The references I tend to use for this sort of thing are the first chapter of Seidel's book "Fukaya categories and Picard-Lefschetz theory", and also his paper "Homological mirror symmetry for the quartic surface", but there are certainly other possibilities.

Thanks for this fast answer! Can you explain how formality of $Ext^\bullet(X\oplus Y, X\oplus Y)$ implies formality of $Ext^\bullet(X,X)$? Also I must admit that I don't know what formality of $Ext^\bullet(X,Y)$ means. – Jan Weidner May 7 '12 at 19:59

In fact, in Tim Perutz's example, the $A_{\infty}$-algebra $Ext^{\bullet}(X\oplus Y, X \oplus Y)$ knows $D^{b}_{Coh}(E)$ and hence $E$, since once you can build powers of an ample globally generated line bundle, you can build the whole derived category.
– Chris Brav May 7 '12 at 20:06

The formality of the cross-terms was carelessly phrased, but it means formality as bimodules. I've added something about why 2) implies 1). – Tim Perutz May 7 '12 at 21:38

Chris: that's true, but it sounds a bit back-to-front to me: the derived category isn't a complete invariant for projective varieties in general, but once you have those powers you have the homogeneous coordinate ring. – Tim Perutz May 7 '12 at 21:55

Tim: yes, I agree. The choice of a generator for the derived category gives more information. But if $X$ is a smooth projective variety and I give you $D^{b}_{Coh}(X)$ as an abstract $A_{\infty}$-category (with no t-structure or symmetric monoidal structure) together with a generating set of the form $\mathcal{O}_{X}, \mathcal{O}_{X}(1), \cdots, \mathcal{O}_{X}({\rm dim}\; X)$, then you have no way of knowing that the generator is of this form (using only the $A_{\infty}$-structure), and so I think no way of constructing the homogeneous coordinate ring. – Chris Brav May 8 '12 at 6:29
Ehrenfest Urn Problem with Applications

Daniel J. Castellano
December 18, 1997

1 Introduction

The Ehrenfest urn problem was originally proposed as a model for dissipation of heat, but has since come to be applied in a wide variety of fields, thanks in part to generalizations and variations of the problem, and also, no less importantly, to visualizing the exact original problem in a different light. To illustrate, we will consider specific examples and analyze them both algebraically and geometrically, and also use physical analogies to demonstrate the relevance of this model to the pure sciences. First, we shall consider the original problem as formulated by the Ehrenfests, and derive some of its basic results and implications. We will then recast the problem as a random walk on a hypercube, and consider it yet again as a Markov chain. Finally, we will look at a couple of variations of the problem, checking that the results of the Ehrenfest model are confirmed, and then expound further physical implications of these modified urn problems.

2 The original Ehrenfest urn model

Consider two urns A and B. Urn A contains N marbles and Urn B contains none. The marbles are labelled 1, 2, ..., N. In each step of the algorithm, a number between 1 and N is chosen randomly, with all values having equal probability. The marble corresponding to that value is moved to the opposite urn. Hence the first step of the algorithm will always involve moving a marble from A to B. What will the two urns look like after k steps? If k is sufficiently large, we may expect the urns to have equal populations, as the probabilities of drawing a marble from A or from B become increasingly similar. After the first step, for example, the odds of putting the marble in Urn B back into A is 1/N. Going a step further, the probability of Urn B having three marbles after three steps is (N−1)/N · (N−2)/N.
For N large, this is a very high probability, but as we multiply repeatedly by (N−k)/N for increasingly large k, the probability of moving marbles from A to B diminishes over time, as we would expect intuitively. States in which one urn has many more marbles than the other may be said to be unstable, as there is an overwhelming tendency to move marbles to the urn that contains fewer. As the populations equalize, the transition probabilities from A to B and B to A approach each other, creating a stable, or stationary, state of roughly equal populations thereafter. These qualitative results are in perfect accordance with familiar concepts in thermodynamics. Two bodies of different temperatures eventually approach the same intermediate temperature when in contact with each other for an extended period of time. Since thermal motion is random, we would expect an urn model which gives equal probabilities to each unit of heat transfer to be the most appropriate. To proceed to more quantitative results, it is helpful to recast the problem in a more convenient form.

2.1 Random walk on a hypercube

Each of the N marbles may be in one of 2 states: it is in A or it is in B. In each step of the algorithm we reverse the state of one of the marbles, an action which is independent of the states of the other marbles, hence the states are orthogonal. If we consider each marble to correspond to a coordinate axis, and states A and B are treated as values 0 and 1, then the state of the system at any point in time may be expressed as the vertex of an N-dimensional unit hypercube. Executing a step in the algorithm is analogous to moving along an edge to an adjacent vertex. Our condition that all marbles be chosen with equal probability means that each edge connected to the vertex has equal chance of being traversed. What we have is a uniform random walk on the hypercube.
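The approach to equal populations is easy to check numerically. The sketch below is mine (the function name and parameters are not from the article); it simulates the classical urn process directly and averages the final count in urn A over many independent runs.

```python
import random

def ehrenfest(N, steps, seed=0):
    """Classical Ehrenfest urn: all N marbles start in urn A; each step
    picks a marble uniformly at random and moves it to the other urn."""
    rng = random.Random(seed)
    in_A = N
    for _ in range(steps):
        if rng.randrange(N) < in_A:   # picked a marble currently in A
            in_A -= 1
        else:                         # picked a marble currently in B
            in_A += 1
    return in_A

# Average over independent runs: the mean settles near N/2.
runs = [ehrenfest(50, 2000, seed=s) for s in range(200)]
mean = sum(runs) / len(runs)
```

With N = 50 the long-run mean settles near N/2 = 25, in line with the equalization argument above.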
The state of the system may be expressed as a binary N-tuple if we align the edges of the hypercube with the standard basis in ℝ^N. These states form a group under binary vector addition. We may define a probability measure P on this space which describes the first step of the algorithm. This would be P(x) = 1/N if x = e_i, where e_i is the ith basis vector. However, to remove parity problems arising from an odd number of steps, it is better to use P(x) = 1/(N+1) if x = e_i or x = 0. (In both cases, P is zero-valued elsewhere, of course.) As in all random walk problems, further steps are expressed by convoluting the probability function. The second step is

P^{∗2}(x) = Σ_y P(x−y) P(y)   (1)

where P(y) is the probability distribution of the first step. For the kth step, we have:

P^{∗k} = P ∗ P^{∗(k−1)}   (2)

As k approaches infinity, this expression converges to the uniform distribution. That is, after a sufficiently long random walk, each vertex on the hypercube has an equal probability of being the walker's location. Thus the expectation value of the sum of the coordinates of the corresponding N-tuple is N/2. This reaffirms our thermodynamic model. An interesting question is that of how big k must get in order to have a uniform distribution. The transition from an unstable to a stable configuration is actually quite dramatic. For k greater than (N/4) log N, the distribution rapidly approaches uniformity. To show that P^{∗k} tends to uniformity for this value, a lengthy proof involving Fourier transforms is required.^1 (Taking the Fourier transform of a convolution gives a normal product.)

2.1.1 Electric model

Random walks may also be considered by treating each edge as a resistor of unit resistance. A linear random walk, for instance, could never go off toward infinity because the resistors add in series to give infinite resistance (a random walk in a plane, however, is a different story, both electrically and mathematically).
Our hypercube may be thought of as a highly symmetric parallel circuit between the origin and the opposite vertex. Using this method gives some interesting results. For example, the expectation value of the number of steps it takes for an urn lacking one marble to get the last one is 2^N − 1.^2

2.2 The urn problem as a Markov chain

Discussion to this point has made the problem a bit more complicated than it needs to be, as far as physical considerations are concerned. For one thing, we have 2^N possible states, but many of these are degenerate. If we are considering the diffusion of gases, for example, with each molecule acting as a marble (so N is on the order of Avogadro's number), we really don't care which molecule goes where, but only the relative quantities in each region. So our hypercube collapses considerably, with vertices that have the same coordinate sum collapsing into a single vertex. The number of edges should remain the same, in order to preserve the correct relative probabilities, and it is easy to see that geometrically this becomes something of a mess, even though our new graph has many fewer vertices than the hypercube. It may be advisable to abandon our geometric model at this point, and consider instead a further simplification. The random walk model expresses each state as being determined by all previous states. This is a consequence of labelling marbles only in the initial state. If instead, we relabel the marbles after each step, flaunting our disregard for the identity of each marble, the state of the system after k steps is determined only by the state after k−1 steps. Instead of 2^N states, we have only N+1 states (0, 1, ..., N marbles in Urn A). This is a birth-and-death Markov chain, since each step can only increase or decrease the number of marbles in A by 1. With each state being determined only by the previous state, we need only concern ourselves with the probability of transition from one state to the next.
These are easily computed: the chain moves from state i to i+1 with probability (N−i)/N and from i to i−1 with probability i/N, with zero probability for all other states. As in most Markov chain problems, this has a variation cutoff; in other words, a sharp transition to a stationary state. This cutoff is at (N/4) log N, as before. The cutoff phenomenon was made mathematically precise by Diaconis and Aldous (1987), and led to the result that a 52-card deck needs only 7 shuffles to achieve uniform mixing. Here we have two-way mixing; we exchange the order of cards in a single switch rather than moving a card from one pile to another, as in the Ehrenfest model. Thus the cutoff is only at (N/2) log N, which for N = 52 gives 102.7. At 13 switches per shuffle (26 pairs, half of which are switched), we have a maximum of 7.9 shuffles (in practice, there are slightly more switches per shuffle).

3 Variations of the Ehrenfest model

The treatment of urn problems as Markov chains lends itself readily to generalizations and modifications of the model. We will consider a couple of these below.

3.1 The Krafft-Schaefer generalization^4

Fundamental to the Ehrenfest model is the condition that marbles be chosen with equal probability. However, the properties of the urns (e.g., permeability) may be generalized as follows. If the marble chosen is in Urn A, then it will be placed in B with probability s; otherwise, it remains in A. Similarly, a marble in Urn B is moved to A with conditional probability t. The original model may be retrieved by simply setting s = t = 1. The transition probabilities for the Markov chain are P(i → i+1) = s(N−i)/N, P(i → i−1) = ti/N, and P(i → i) = 1 − [s(N−i) + ti]/N (zero otherwise). For the one-parameter case s = t, we have:

P(i → i) = 1 − s   (10)

In the two-parameter case, the matrix of transition probabilities has N+1 distinct eigenvalues λ_j = 1 − 2j/N, where j = 0, 1, …, N.

3.2 Uppuluri-Wright variation^6

In another twist of the urn problem, we have a single urn with w_0 white balls and b_0 black balls. Once again, balls are chosen at random with equal probability.
If the ball is white, it is replaced with a black ball with probability α_1; otherwise, the system is unchanged. Similarly, if the ball is black, it becomes white with probability α_2. This model is probabilistically similar to the Krafft-Schaefer model, but it is presented in such a way that lends itself to explicit analysis. The expectation values of the numbers of white and black balls after k steps are given by μ_k = (I + (1/N)A)^k μ_0, where A is the 2×2 matrix with rows (−α_1, α_2) and (α_1, −α_2), and μ_0 = (w_0, b_0)^T. The kth power of the matrix can be evaluated using Blatz's (1968) result.^7 Thus we have an explicit computation of the probability of the final state from an arbitrary initial state without anything more involved than finding the eigenvalues of a two-by-two matrix and raising them to the kth power. For the classical Ehrenfest model, α_1 = α_2 = 1, and I've computed the eigenvalues of I + (1/N)A to be 1 and 1 − 2/N.^8

4 Conclusion

It took some doing, but we finally arrived at a completely generalized formula that solves the Ehrenfest urn problem explicitly. More importantly, we've covered various geometric and physical interpretations of the problem which have rendered certain aspects of it more intelligible and at the same time illustrated the applications of the model to other fields. This is but one urn model among many, and one of the simplest ones at that. Nonetheless, it has continued to provoke significant developments even in recent years, exposing some highly fertile ground in combinatorics and other applied mathematics.

Notes

Such a proof is given in Diaconis, P. (1991). Finite Fourier Methods: Access to Tools, Proceedings of Symposia in Applied Mathematics, 44, 174-175.

This, and more general results, may be found in Palacios, J. L. (1994). Another look at the Ehrenfest urn via electric networks, Advances in Applied Probability, 26, 820-824.

Krafft, O. and Schaefer, M. (1993). Mean passage times for triangular transition matrices and a two parameter Ehrenfest urn model, Journal of Applied Probability, 30, 964-970.

Krafft, O.
and Schaefer, M. (1993). Mean passage times for triangular transition matrices and a two parameter Ehrenfest urn model, Journal of Applied Probability, 30, 964-970.

Uppuluri, V. R. R. and Wright, T. (1981). A note on a further generalization of the Ehrenfest urn model, Proceedings of the American Statistical Association, Sampling Survey Section, 564-569.

Uppuluri, V. R. R. and Wright, T. (1981). A note on a further generalization of the Ehrenfest urn model, Proceedings of the American Statistical Association, Sampling Survey Section, 564-569.

Blatz, P. J. (1968). On the arbitrary power of an arbitrary 2 x 2 matrix, American Mathematical Monthly, 75, 57-58.

Notes attached.

This document was translated from LaTeX by HEVEA.

© 1997, 2006 Daniel J. Castellano. All rights reserved. http://www.arcaneknowledge.org
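As a numerical appendix, the two-by-two eigenvalue computation for the classical case of the Uppuluri-Wright model (section 3.2) can be reproduced directly. This is an illustrative sketch under one assumption of mine, since the matrix is garbled in the source text: that the expectation recursion uses A with rows (−α₁, α₂) and (α₁, −α₂). With α₁ = α₂ = 1, the matrix I + (1/N)A should then have eigenvalues 1 and 1 − 2/N.

```python
def eig2(a, b, c, d):
    # Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via the characteristic
    # polynomial x^2 - (a + d) x + (a d - b c).
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

N = 10
alpha1 = alpha2 = 1.0
# M = I + (1/N) A with A = [[-alpha1, alpha2], [alpha1, -alpha2]] (assumed form)
M = [[1 - alpha1 / N, alpha2 / N],
     [alpha1 / N, 1 - alpha2 / N]]
lam_small, lam_big = eig2(M[0][0], M[0][1], M[1][0], M[1][1])
# expected: lam_small == 1 - 2/N, lam_big == 1
```

Raising these eigenvalues to the kth power then gives the explicit expectation values without any matrix machinery beyond a quadratic formula.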
MathGroup Archive: January 2009

Simplifying and Rearranging Expressions

• To: mathgroup at smc.vnet.net
• Subject: [mg95889] Simplifying and Rearranging Expressions
• From: "David Park" <djmpark at comcast.net>
• Date: Thu, 29 Jan 2009 05:55:14 -0500 (EST)

I want to start a thread on this because I believe many MathGroup people will have some useful things to say. A common task for Mathematica users is to obtain an expression that is in a particular form. For students and teachers this may often be a textbook form, or there may be other reasons that a particular form is desired. It might be thought that this should be an easy task but quite often it can be a very difficult task, even involving mathematical derivation and many of the capabilities of Mathematica. Not obtaining a specific form may be a matter of not knowing how to solve the problem in the first place. Nevertheless, even simple rearrangement can be difficult. I sometimes think of it as doing surgery on expressions. I believe it is generally desirable to use Mathematica to rearrange an expression and not retype the expression. Retyping is too error prone. Simplify and FullSimplify are amazingly useful but it is difficult to control them and obtain a precise result. One will often have to do additional piecemeal operations. One downside of Simplify and FullSimplify is that they can return different forms with different Mathematica versions. Then any additional operations in an old notebook may no longer work. It would be nice if there was a method of using these commands that would be more version independent. Various routines such as Together, Apart, Factor, TrigReduce, TrigFactor, TrigExpand, TrigToExp, GroebnerBasis etc., can be useful in getting a specific form. MapAt is very useful for doing surgery on specific parts of an expression. Mathematica often gets two factors that have extra minus signs.
You can correct that by mapping Minus onto the two factors. For integrals in the wrong form you could cheat by trying to find the constant by which they differ by subtracting and simplifying, and then use that in the derivation. Over the years I've collected a number of routines that aid in manipulating expressions and have included them in the Presentations package. Some of these are: CompleteTheSquare, FactorOut (any 'factor' expression with ability to hold results such as factoring from a matrix), MultiplyByOne (a common mathematical technique), LinearBreakout, PushOnto (much better than Through), HoldOp (hold a specific operation but evaluate the arguments), CreateSubexpression (creates a tooltip and holds expressions together with a tag so they won't get split by routines like Simplify), ReleaseSubexpressions, MaplevelParts (apply an operation to a subset of level parts, for example Factor three out of five terms in a sum), MapLevelPatterns, EvaluateAt (evaluate specific parts of held expressions), EvaluateAtPattern. SymbolsToPatterns, LHSSymbolsToPatterns (convert specific derived rules to general patterned rules). It is very useful to get Mathematica generated expressions into the form that one wants. I believe that this is probably a sticking point with many users. In general it is not a trivial topic. Others may have some good general ideas that I don't know about. Someday someone may even write a good tutorial on it. David Park djmpark at comcast.net <http://home.comcast.net/~djmpark> http://home.comcast.net/~djmpark/
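A small aside on MultiplyByOne, since the parenthetical calls it "a common mathematical technique": the hand manipulation it packages is multiplying an expression by a cleverly chosen ratio equal to one. A textbook instance (my own example, not taken from the Presentations package documentation) is rationalizing a denominator:

```latex
\frac{1}{\sqrt{2}-1}
  = \frac{1}{\sqrt{2}-1}\cdot\frac{\sqrt{2}+1}{\sqrt{2}+1}
  = \frac{\sqrt{2}+1}{(\sqrt{2})^{2}-1^{2}}
  = \sqrt{2}+1
```

Here the factor (√2+1)/(√2+1) is the "one" being multiplied in; a routine that automates this only has to be told which factor to use.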
In future we would like to apply this method to a range of time-dependent systems, such as the shear flow system with a shear rate which is a combination of a constant and oscillatory component, and an oscillatory colour field of different frequencies, as well as to apply the extended phase space method to generalise the Kawasaki response formula.

What computational techniques are used? Equilibrium and nonequilibrium molecular dynamics simulation methods are being used and developed. Supercomputers are required to obtain statistically valid data for small systems and due to large system size requirements.

Petravic, J. and Evans, D. J., Nonlinear response for time dependent external fields, Phys. Rev. Lett., 78, 1199-1202 (1997).

Evans, D.J. and Searles, D.J., Causality and response theory, in Proceedings of 1st Tohwa University International Meeting on Statistical Physics, Fukuoka, Japan, November 1995 (Ed. M. Tokuyama, Kyoto Institute for Theoretical Physics), Butsusei Kenkyu, 66(3), 452-454 (1996).

Evans, D.J. and Searles, D.J., Causality, response theory and the second law of thermodynamics, Physical Review E, 53, 5808-5815 (1996).

Searles, D.J. and Evans, D.J., On the lifetimes of antisteady states, Australian Journal of Physics, 49, 39-49 (1996).

Searles, D. J., Evans, D. J. and Isbister, D. J., The number dependence of the maximum Lyapunov exponent, Physica A, accepted for publication (1996).
The Physics in Games – Real-Time Simulation Explained

Check out this great talk at http://channel9.msdn.com/Showpost.aspx?postid=314874. Ever find yourself wondering about the math behind your favorite simulation game? Did you know that the motion physics of a car are much more complicated than those of an airplane? Brian Beckman, physicist, programmer and Channel 9 celebrity sure does. Besides spending time innovating programming languages and tools, Brian spends time working on the mathematics behind real-time physics simulation. Most recently, he worked on the math behind the tire physics of the popular racing game Forza. Simulation, by definition, needs to be accurate. Otherwise, well, it's not simulating reality, really, which is of course the idea of simulation. Games like Forza in fact simulate the real physics of racing in a predictable and highly mathematically precise manner. That's exactly why Forza is a real-time automobile racing simulation game. The past, present and future of computer simulation of real-time physical events (or simply computer-based simulations that involve highly accurate representations of things moving and changing in space and time, precisely affected by multiple variables like wind, rain, gravity, mud, oil, planets, waves, etc.) are very fascinating topics for gamers (many may not realize this explicitly, but they sure experience it!), mathematicians, programmers and physicists alike. Heck, anybody who thinks about the thinking behind things that they experience in a simulated environment should watch/listen to this interview (available in podcast form as well as video). Towards the end of this conversation, Brian mentions Rigs of Rods and Plasma Pong. Check out the Rigs of Rods simulation demo at 00:58:11! Thanks for the link Walter!
nV News Forums - View Single Post - why do fx cards perform so badly in 3dmark05

Originally Posted by zakelwe
From your table:
2 vertex shaders offer a loss of 300 to 500 points
8 pixel shaders offer a loss of 900 to 1400 points
So your conclusion of "The fact that 2 vertex units can offer a similar loss to 8 pixel shader units" does not match your own table. Your table shows a 2x-3x greater loss when losing 8 pixel pipelines than 2 vertex units. Like I said at the very beginning, better to gain 4 pixel pipelines when modding a 6800 than the 1 vertex unit for 3dmark05.

You realise there are 8 pixel shader units in 1 quad? 2 quads is 16 shader units. You're arguing a point you simply don't seem to understand. 1 quad should have a higher effect than 1 vertex unit, because of the pixel shader architecture of a quad. The graph here shows how 2 vertex units can offer a similar performance deficit to 1 quad (i.e. 4 shader units and 4 non-dedicated shader units, 8 shader units per quad, including a 25% reduction in fillrate).
Help - my grandson came up with this question: Prove that every natural number is either even or odd.

- his teacher suggested proof by induction

i gave a solution to this problem a few semesters ago in some math class, but my professor didn't like it <.< I'm going to post it to see what others think. one sec, let me see if i can find it.

i saw what was wrong with my answer, never mind. I would just say that it is impossible to solve the equation \[2n=2m+1\] with natural numbers n and m, so it's impossible to have a natural number that is both even and odd. Not too sure how you would do that by induction, since a number being even or odd doesn't really depend on previous numbers being even or odd.

let n be a number from the set of natural numbers N; then even numbers have the form 2n and odd numbers the form 2n+1: 1 = 2·0 + 1, 2 = 2·1, 3 = 2·1 + 1, 4 = 2·2.

Alrighty, let's give induction a go. First, we need to define even and odd. The standard definitions are as follows:

An integer n is even if it can be expressed in the form n = 2k, where k is some integer.
An integer n is odd if it can be expressed in the form n = 2k + 1, where k is some integer.

We are now equipped to prove the statement via induction. We assume that the natural number n is either even or odd. We seek to show that this implies that the natural number (n+1) is also either even or odd.

Case 1: n is an even number. If n is an even number, then it can be expressed as n = 2k, with k some integer. Thus, n+1 = 2k+1, and by the definition of an odd number, n+1 is odd.
Case 2: n is an odd number. If n is an odd number, then it can be expressed as n = 2k + 1, with k some integer. Thus, n+1 = 2k+1+1 = 2k+2 = 2(k+1). Since k is an integer, k+1 is an integer, and thus by the definition of an even number, n+1 is even.

In either of these two cases, n+1 is either even or odd. Finally, we show that n = 1 is either even or odd. Since k = 0 is an integer, note that 1 can be expressed as n = 2k+1 -> 1 = 2(0) + 1. Thus n is odd, and more generally, n = 1 is either even or odd. By induction, we may say that any natural number n is either even or odd.

thnx guys - number theory is not my thing
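The induction can also be phrased constructively (a sketch of mine, not from the thread, and using 0 as the base case for convenience): each loop iteration performs exactly the case split used in the inductive step above, building the witness k along the way.

```python
def parity_decomposition(n):
    """Return (k, r) with n == 2*k + r and r in {0, 1}, built by induction."""
    k, r = 0, 0              # base case: 0 = 2*0 + 0
    for _ in range(n):       # inductive step: from n = 2k + r to n + 1
        if r == 0:           # even case: n + 1 = 2k + 1 is odd
            r = 1
        else:                # odd case:  n + 1 = 2(k + 1) is even
            k, r = k + 1, 0
    return k, r
```

Running this for every n up to some bound checks that each natural number in that range really is even (r = 0) or odd (r = 1), with an explicit k witnessing it.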
Math Envy and CoffeeScript's Foibles, Part 2

In the previous post I presented the basics of operational semantics and showed how derivation trees can be used to differentiate two terms that are syntactically similar. This post develops the closing thoughts further with the introduction of type rules, example tools for automating evaluation and type derivation, and a concrete definition of semantic ambiguity. The primary goal is to establish the best way to detect ambiguous term pairings and then outline what will work for a tool that can be generalized beyond the CoffeeScript subset.

Type Rules

Type rules are similar in construction to evaluation rules, consisting of a premise and a conclusion. As with evaluation rules the premise establishes the preconditions for the conclusion. Again, each rule is tagged with a name for reference, but preceded by t- in this case to distinguish them from the inference rules (e-).

Type rules without a premise, like t-true and t-false, are taken to be true out of hand. That is, the terms true and false both have the type Bool. The others are more complicated.

t-lambda illustrates how to determine the type of a lambda term like (-> true). The premise above the line states that if the subterm t has the concrete type T, then the conclusion λt has the type X -> T. Here X is a type variable because we don't know whether the lambda will be evaluated with the invocation operator () or applied to an argument. T will be concrete because it can be determined from the body of the lambda expression. For example, in (-> true) the subterm true has the type Bool, so the lambda term has the type X -> Bool.

t-inv shows how to determine the type of a lambda invocation like (-> true)(). The premise states that if the lambda term has the type X -> T, then the term λt() has the type T. For example, (-> true)() evaluates to true and has the type Bool. It's worth noting that X is constrained to be the Unit or empty type since no argument is used.
t-app is the type rule for lambda applications, e.g. (-> true) false. The premise says that if the lambda term λt on the left has the type X -> T, then the conclusion is that the application will have the result type T of the lambda term. Again, the type of an application, like invocation, is only concerned with the type of the first lambda's subterm t, and it ignores the type of the argument that it's applied to.

Type Rule Stacking

This notation makes it easy to establish the type of a term by stacking the type rules on one another in the same fashion as evaluation rules. Taking a very simple example, some diagrams will illustrate how this works:

This highlights how to derive the type at the bottom from its subterms. Typing the innermost subterm true with t-true can be "stacked" by using it to replace the premise of type rule t-lambda. The type derivation expands from the subterm to establish each subsequent parent term's type. Another more complex derivation:

The second subterm of the application is unimportant where the type of the term is concerned, and as a result it's wholly ignored. Working on the left term, the tree extends upward until it reaches the atomic value type, false : Bool. The complexity of nested lambdas and invocation makes for a taller stack of type rules to reach the atomic false when compared with the previous example.

Not Quite There

At this point the type rules can describe the original issue. A derivation tree based on the typing rules highlights that the term is untypable. Taking our canonical example, (-> true)() -> false:

Once the derivation tree reaches the outermost term it breaks. There is no type rule for the application of something with type Bool to something with type X -> Bool, since t-app requires that the first term have the type X -> T in its premise. It's a type error. Previously we saw that this would result in a type error under evaluation by the CoffeeScript interpreter.
We also saw that it was easy to construct a term that suffered the same semantic confusion without the type error: (-> (-> true))() -> false. This issue applies to the type derivation as well. In addition we saw that it's possible to construct terms, albeit in the boolean example language, that might produce the same value through different evaluation paths. That is, they had different derivation trees in the evaluation relation but the same evaluation result. This issue also applies to type derivations. In both cases useful information is lost when the derivation is discarded in favor of the final value or type.

The advantage with the type information is obviously that no evaluation is required to determine if two terms are "different" in some way other than their syntax. The disadvantage is that not all languages make determining type information easy. Ultimately the type information provides a second way to differentiate syntactically similar terms. Indeed there are cases where both the evaluation and type information are necessary to distinguish terms. For example ((x, y) -> x + y)(1, 1) has a type derivation identical to ((x, y) -> x * y)(1, 2) and the same evaluation result, but it clearly takes a different evaluation path [1].

Note: The next five sections cover an implementation of a lexer, parser, evaluator and mechanisms for type/evaluation derivation. If you'd rather just read about how the generated evaluation and type derivations are used to find confusing term pairings you can skip to Detecting Ambiguity.

Happy Parsing

It's time to build something concrete from the formal notion of evaluation and types. An AST for this CoffeeScript subset will provide enough information to perform evaluation and establish derivation trees for both evaluation and types. I've chosen Haskell along with the Alex and Happy tools to implement a simple lexer and parser.
As you would expect the parser grammar definition looks very similar to the grammar definition presented in the previous post:

white { Whitespace }
bool  { Boolean $$ }
'()'  { Unit }
'->'  { Arrow }
'('   { LeftParen }
')'   { RightParen }

Expr : Value           { $1 }
     | Lambda '()'     { Invoke $1 }
     | Expr white Expr { Apply $1 $3 }

Lambda : '()' white '->' white Expr { Lambda $5 }
       | '->' white Expr            { Lambda $3 }
       | '(' Lambda ')'             { $2 }

Value : bool   { BooleanExpr $1 }
      | Lambda { $1 }

You can view the full lexer and parser implementations here. There are two differences from the original grammar definition. Lambda terms in parentheses are just a convenience for readability. More importantly a correction must be made to the application of two terms, allowing for any term as the left side (the applicand) [2]. This enables the grammar to reproduce the original issue, since (-> true)() -> false translates to an invocation applied to a lambda term. The corrected grammar:

Also, a correction and an addition must be made to the inference rules presented in the previous post. This will ensure that any term type is permitted as the left half of an application, and that it is fully evaluated before applying it. Where e-arg-eval ensures that the argument of an application is fully evaluated, e-app-eval ensures that the applicand is fully evaluated before the application takes place.

Matching Rules

The abstract representation produced by the parser is a simple tree structure built with Haskell types. Pattern matching can be used with the type and inference rules to produce evaluation and derivation results. To start let's look at a simple evaluator and derivation builder.

-- an enumeration of each inference rule
data InfRule = Inv | App | ArgEval | AppEval

The InfRule Haskell type is a simple enumeration of the tags belonging to each inference rule. e-inv corresponds to Inv and so on.
-- an intermediate form for performing derivation and evaluation
data RuleMatch = None | RuleMatch InfRule (Maybe Expr) Expr

-- match a rule and provide the relevant sub terms for action
matchRule :: Expr -> RuleMatch

Both the evaluator and the derivation builder will operate based on the inference rules that apply to each term and its subterms. The function matchRule takes an expression, Expr, and provides three pieces of information in a RuleMatch result: the inference rule that applies to the term, an optional term for the premise of an inference rule pulled from the body of the parent term, and a term for the conclusion of the inference rule also pulled from the body of the parent term. There are pattern matching definitions for each rule.

matchRule (BooleanExpr _) = None
matchRule (Lambda _) = None

The value terms true, false and (-> x) are the base case of matchRule. That is, whenever another function requests a rule match on the value terms, None is provided to signal that the term has been fully evaluated.

-- Rule: e-inv
matchRule (Invoke (Lambda t)) = RuleMatch Inv Nothing t

Invocation can only be applied to a lambda term and the result of the invocation is the lambda's subterm, e.g. (-> true)() evaluates to true. An invocation on anything else will simply drop through this match and ultimately to the catch-all error case. For example the CoffeeScript true() is invalid. Its abstract representation from the parser is Invoke (BooleanExpr True), which clearly won't match here. On a match, the RuleMatch result contains the rule tag for invocation, Inv, nothing for an inference rule premise since there isn't one for e-inv, and the subterm t for further derivation in the conclusion.

-- Rule: e-app
matchRule (Apply (Lambda t) (BooleanExpr _)) = RuleMatch App Nothing t
matchRule (Apply (Lambda t) (Lambda _)) = RuleMatch App Nothing t

Like invocation, e-app only works with lambda terms, but it carries the additional requirement that the argument be a value term.
The grammar shows that the only v (value) terms are lambdas and boolean values, so there's a match for those cases here. When there's a match the rule tag is App and the lambda subterm is again provided for possible further inspection/operation.

-- Rule: e-arg-eval
matchRule (Apply t i@(Invoke _)) = RuleMatch ArgEval (Just i) t
matchRule (Apply t a@(Apply _ _)) = RuleMatch ArgEval (Just a) t

-- Rule: e-app-eval
matchRule (Apply i@(Invoke _) t) = RuleMatch AppEval (Just i) t
matchRule (Apply a@(Apply _ _) t) = RuleMatch AppEval (Just a) t

e-arg-eval and e-app-eval are more complicated than either e-inv or e-app, which makes sense when comparing them as inference rules. Both e-arg-eval and e-app-eval carry a premise. Both rules require that some evaluation take place on one of the subterms. More importantly the shape of the term remains the same. Neither e-arg-eval nor e-app-eval changes the shape of the term to which they apply, only the shape of the subterms. This is in contrast to e-inv and e-app, which discard the invocation operator and second term respectively. As a result the RuleMatch contains the subterm that needs to be evaluated further and the other subterm that remains stagnant. Note that in the function definition the e-arg-eval rule is matched first so that the e-app-eval rule can ignore the second subterm under the assumption that it's a value term (not Invoke or Apply).

matchRule t = error $ "No inference rule applies for: " ++ (show t)

Finally, in situations like true() or true (-> true) where no rule applies, an error is raised.

Evaluating the Options

The information contained in a RuleMatch instance can be used to evaluate or derive a given term. Evaluation is a simple matter of applying the rules recursively.
-- perform a single evaluation step
eval :: Expr -> Expr
eval t = case (matchRule t) of
  None -> t
  (RuleMatch _ Nothing t1) -> t1
  (RuleMatch ArgEval (Just t1) t2) -> Apply t2 (eval t1)
  (RuleMatch AppEval (Just t1) t2) -> Apply (eval t1) t2

eval performs a single step of evaluation according to the inference rules. The first case match returns the original term t because None is the match for fully evaluated value terms like true, false, and (-> x). The second match handles both Inv and App by returning the subterm of the invoked or applied lambda term. The matchRule function does a bit of evaluation for these two rules by stripping the applied lambda term. For example, (-> true) true and (-> true)() become true.

(RuleMatch ArgEval (Just t1) t2) -> Apply t2 (eval t1)
(RuleMatch AppEval (Just t1) t2) -> Apply (eval t1) t2

For ArgEval and its cousin AppEval the subterm that needs further evaluation gets it, and then the whole term is reassessed. The order in which evaluation happens is preserved here by recursion. If the argument in an application needs more than one evaluation step, eval will continue to work on it until the result is returned to the original invocation. Subsequently if the applicand needs evaluation it will do the same. For example, in (-> true) (-> true)() the second term is evaluated with an Inv and then the boolean result is the argument to the first lambda term.

-- reduce an expression to a value term
fullEval :: Expr -> Expr
fullEval t = case (matchRule t) of
  None -> t
  _ -> fullEval $ eval t

fullEval simply applies eval to t until it reaches a value term.

Automating Evaluation Derivation

The RuleMatch instance is primarily geared toward building derivation trees. That's why the structure appears so awkward in use with eval.
data Derivation = Empty | Derivation InfRule Derivation Derivation Expr

The Derivation data type is comprised of a tag from the InfRule enumeration, one possible derivation as a premise, the final derivation as the conclusion, and the expression representing the state of evaluation at a given moment. Taking the derivation tree of a simple example, (-> (-> true))() false, which is parsed to:

Apply (Invoke (Lambda (Lambda (BooleanExpr True)))) (BooleanExpr False)

In English, the application of an invocation of a lambda with a lambda subterm to a boolean value. The resulting tree in the original notation takes the form:

The Derivation instance has to work from the outside in, so it's much harder to read than the notation, but it contains the same information:

Derivation AppEval
  -- premise: the e-inv value
  (Derivation Inv Empty Empty (Lambda (BooleanExpr True)))
  -- conclusion: the e-app value
  (Derivation App Empty Empty (BooleanExpr True))
  -- the e-app-eval value
  (Apply (Lambda (BooleanExpr True)) (BooleanExpr False))

It's clear that the applicand (-> (-> true))() needs evaluation using e-app-eval before it can be applied to the argument false. The premise of e-app-eval requires that the applicand take a step, and here that means an invocation with e-inv. Finally the result of the invocation (-> true) is applied to the false with e-app as the "conclusion" of the e-app-eval. In reality, e-app is applied to the result of the first derivation tree as it is with the logic notation.

-- build a derivation from an expression
derive :: Expr -> Derivation
derive t =
  case matchRule t of
    None                         -> Empty
    (RuleMatch rule Nothing t1)  -> Derivation rule Empty (derive t1) evald
    (RuleMatch rule (Just t1) _) -> Derivation rule (derive t1) (derive evald) evald
  where
    evald = eval t

The derive function works in a similar fashion to eval. For a value/None result from matchRule there are no inference rules that apply.
For e-inv or e-app, derive can recurse and build a derivation from the lambda's subterm. For e-arg-eval or e-app-eval the premise must be further derived, and the conclusion is a derivation for the original term t with one evaluation step applied. That is, evaluating the subterm t1 once inside the original term t. The use of eval to do that may look funny but it's just a convenience.

Automating Type Derivation

Deriving the type for a term in the CoffeeScript subset is slightly less complex than deriving the evaluation. Again, a type rule is matched to each valid AST construction.

data RuleMatch = None | RuleMatch TypeRule Expr

-- match a rule and provide the relevant sub terms for action
matchRule :: Expr -> RuleMatch

The RuleMatch definition for types requires one less Expr. The derive and fixType definitions for types only require the first subterms in each expression. This is in contrast to eval, which required both the conclusion and premise terms.

-- t-true & t-false
matchRule b@(BooleanExpr True) = RuleMatch TrueType b
matchRule b@(BooleanExpr False) = RuleMatch FalseType b

-- t-lambda
matchRule (Lambda t) = RuleMatch LambdaType t

-- t-inv
matchRule (Invoke (Lambda t)) = RuleMatch Inv t

-- t-app
matchRule (Apply t@(Lambda _) _) = RuleMatch App t
matchRule (Apply t@(Invoke _) _) = RuleMatch App t
matchRule (Apply t@(Apply _ _) _) = RuleMatch App t

The Apply matches capture only valid applicands and let the rest fall through to the error case. It's also worth noting that each of the Apply matches discards the argument term because it's unnecessary to the type of the expression. This fits with the definition of the type rules.

Fixing the type of a given expression is a simple recursive effort on applications. The Type data type captures both the Bool result and the recursive Arrow type. For example (-> (-> true)) has the type Arrow (Arrow Bool).
-- the two possible types for a given expression
data Type = Bool | Arrow Type

-- determines the type of a given expression
data Derivation = Empty | Derivation TypeRule Derivation Type

fixType :: Expr -> Type
fixType t = case (matchRule t) of
  (RuleMatch TrueType _) -> Bool
  (RuleMatch FalseType _) -> Bool
  (RuleMatch LambdaType t1) -> Arrow $ fixType t1
  (RuleMatch Inv t1) -> fixType t1
  (RuleMatch App _) -> fixType $ eval t

The type of an invocation is determined by the lambda's subterm, so matchRule provides that as t1 here for further type information. The type of an application is dependent on the type of its first argument, so we cheat a bit here and use the single step eval to get at the result of the application.

derive :: Expr -> Derivation
derive t = case (matchRule t) of
  (RuleMatch TrueType _) -> Derivation TrueType Empty $ fixType t
  (RuleMatch FalseType _) -> Derivation FalseType Empty $ fixType t
  (RuleMatch rule t1) -> Derivation rule (derive t1) $ fixType t

The type rules are much easier to apply; they simply descend into the terms to build up the type, providing the fixed type at each step as the conclusion. Taking the same example from the evaluation rules earlier, (-> (-> true))() false, which is parsed to:

Apply (Invoke (Lambda (Lambda (BooleanExpr True)))) (BooleanExpr False)

The type derivation using logic notation looks like:

The Derivation instance corresponding to the logic notation is again much larger but captures the same information (formatting added after the fact):

Derivation App
  (Derivation Inv
    (Derivation LambdaType
      (Derivation TrueType Empty Bool)
      (Arrow Bool)) -- Lambda type is X -> <subterm type>
    (Arrow Bool))   -- Inv type is its subterm's type
  Bool              -- App type is T in the applicand's X -> T

As noted in the comments, each step in the derivation resolves the type at that step based on the type rules.
Detecting Ambiguity

So far we've seen that it's possible to build an understanding of evaluation and typing that provides more information than just the evaluation result or the fixed type for a term. Capturing that extra information, a term can be represented by a triple (S, E, T), where S is the syntax string of the term, E is the evaluation derivation, and T is the type derivation. This triple can be used to determine whether two terms will cause confusion. One approach is to first compare the S values for two terms and then determine if the E and T values match. Terms with "similar" S values but different E or T values might be ambiguous and could be flagged for review. Using the Levenshtein distance keeps the calculation for similarity simple:

lev is the Levenshtein distance function and dist is just the ratio of the distance between the two strings to the maximum length of both. This is sometimes referred to as the Levenshtein ratio. For (-> true)() -> false and (-> true) () -> false:

A relative value for string distance that can be used as a threshold "setting" makes building a tool for automating the process easier. That is, if two terms are deemed "close enough" by virtue of their dist value being below a predetermined threshold and they have different information in either E or T, then they might be flagged [3].

Fuzzy Search

We now have enough information to define a system that will automate the exploration of the "term space" (all term combinations), and run a check against existing known terms for ambiguous pairs for each generated term. Storing the triple of known terms for comparison is fairly easy with the text search capabilities available in most modern databases. One might even implement the Levenshtein distance function and use it to check a new term against known terms.
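To make lev and dist concrete, here is a short Python sketch (mine, not from the post; only the names lev and dist come from the text):

```python
def lev(a, b):
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def dist(a, b):
    """Levenshtein ratio: distance over the maximum length of the pair."""
    return lev(a, b) / max(len(a), len(b))

s1 = "(-> true)() -> false"
s2 = "(-> true) () -> false"
print(dist(s1, s2))  # the strings differ by one inserted space: 1/21, about 0.048
```

A pair like this lands well under any reasonable threshold, so differing E or T derivations would get it flagged.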
It may be that a purpose-built data structure for storage and retrieval based on a text search algorithm would perform better, but a good all-purpose RDBMS would be fine for a first pass. More interesting is the generation of terms for a non-trivial language. A term generator would start with atomic types and successively wrap them in terms defined to have subterms. That part can likely be performed with nothing more than knowledge of the grammar.

There are two issues with this. First, the complexity of many programming languages makes re-examining the same terms an enormous waste of time. Tracking the explored terms and "resuming" the exploration process would have a lot of value. Second, generating the derivations to store and compare along with the syntax is an involved effort. Again, it's easy to tag a piece of syntax with the result of execution or typing, but information is lost.

Quick and Dirty

A less complicated representation of a term might still be effective, and could avoid the extra effort required of the language creator in generating the evaluation and type derivations. For example the tuple (S, A), where S remains the syntax of the term and A is the AST representation.

( "(-> true)() -> false",
  Apply (Invoke (Lambda (BooleanExpr True))) (Lambda (BooleanExpr False)) )

( "(-> true) () -> false",
  Apply (Lambda (BooleanExpr True)) (Lambda (BooleanExpr False)) )

It's obvious that the abstract representations capture the issue at hand even if there is some information lost [4]. Best of all, the AST for a term is available regardless of the host language and serialization is the only extra requirement. Having a term generator that works with an (E)BNF, a way to generate the AST for a term (presumably through the language parser), and a database equipped with the ability to find like terms, it seems entirely possible to alert the language creator of complex or convoluted pairings.
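A rough sketch of the (S, A) check in Python (my illustration - the ASTs are nested tuples standing in for the Haskell constructors, and the similarity measure here is Python's difflib rather than the Levenshtein ratio):

```python
import difflib

# (syntax, AST) pairs; tuples stand in for Apply/Invoke/Lambda/BooleanExpr
term1 = ("(-> true)() -> false",
         ("Apply", ("Invoke", ("Lambda", ("BooleanExpr", True))),
                   ("Lambda", ("BooleanExpr", False))))
term2 = ("(-> true) () -> false",
         ("Apply", ("Lambda", ("BooleanExpr", True)),
                   ("Lambda", ("BooleanExpr", False))))

def syntax_dist(a, b):
    """0.0 for identical strings, rising toward 1.0 as they diverge."""
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def maybe_ambiguous(t1, t2, threshold=0.1):
    """Flag pairs whose syntax is 'close enough' but whose ASTs differ."""
    (s1, a1), (s2, a2) = t1, t2
    return syntax_dist(s1, s2) < threshold and a1 != a2

print(maybe_ambiguous(term1, term2))  # -> True
```

Serializing the parser's real AST into any comparable form gives the same effect without hand-writing the tuples.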
Further Work

First I have to apologize for not building out a tool for generating terms or a schema for term storage. I wanted to do the automated evaluation and type derivations to get a feel for the effort involved, and the result was an exceptionally long post. If I find the time to return to this I'd like to build out the term generator and couple it with a simple database. I think that going through the process of building a BNF parser would be a lot of fun by itself.

In the course of these two posts we've seen what it looks like to formalize both the evaluation and type semantics of a simple programming language. We've also come to a relatively satisfying formalization of semantic ambiguity that could be used in conjunction with a common language definition form (BNF, EBNF) to alert a language designer of potential issues [5] [6].

1. It might be that when a function identifier is the only difference between terms, here * and +, it's reasonable to ignore ambiguous terms. In this case, because the total string length for both terms is small, it might be that a single character difference is enough to break some arbitrary threshold. I'm leaving this for further consideration.

2. The implementation in Haskell forced these issues out into the open. I'm curious if proving progress and preservation would have pointed out the flaws in my approach (this may be obvious one way or another to a better educated reader).

3. Assuming it's possible, it's interesting to think about what the inverse result means. That is, when two terms are very syntactically different but have identical types/evaluation derivations. This might signal the two terms or the parent language as antithetical to Python's slogan of "one and only one way to do it".

4. For example, the AST doesn't capture the type of lambda form that was used. This may be useful information even if this particular example doesn't require it.

5.
Though it would be infinitely more satisfying if we could build a tool based on the ideas and arrive at the same conclusions about this CoffeeScript subset and a few other BNF-friendly languages.

6. It's worth pointing out that the CoffeeScript issue with lambdas and invocation has been/was known to Jeremy. It was simply a choice in favor of flexibility. I like to think that the hypothetical tool presented here would be useful in cases where ambiguous term pairings are less obvious and for people who may want less flexibility.

7. A special thanks to keyist for proofreading.

09 Jan 2013
TransLucid and Cartesian Programming

This release fully supports higher-order functions for TransLucid. It is available on sourceforge. The blog's address is now cartesianprogramming.com.

The word "TransLucid" comes from an essay by Ralph Waldo Emerson:

This insight, which expresses itself by what is called Imagination, is a very high sort of seeing, which does not come by study, but by the intellect being where and what it sees; by sharing the path or circuit of things through forms, and so making them translucid to others.

The second release of TransLucid, version 0.2.0, is out. It is available at the following link. It includes intensions as first-class values, and higher-order functions fully work.

Tournament computation can also take place in two dimensions. Here, tournamentOp₂ applies a quaternary function g to a 2-dimensional variable X, and keeps doing this to the results until there is a single result.

fun tournamentOp₂.d₁.d₂.n.g X = Y @ [d <- 0]
  dim t <- ilog.n ;;
  var Y = fby.t X (g.(NWofQuad.d₁.d₂ Y).(NEofQuad.d₁.d₂ Y).
                    (SWofQuad.d₁.d₂ Y).(SEofQuad.d₁.d₂ Y)) ;;
end ;;

As for the single-dimensional case, it is useful to have a way of filling a two-dimensional grid with a neutral element if we do not have a grid whose extent in every dimension is the same power of 2.

fun default₂.d₁.m₁.n₁.d₂.m₂.n₂.val X = Y
  var Y [d₁ : m₁..n₁, d₂ : m₂..n₂] = X ;;
  var Y [d₁ : nat, d₂ : nat] = val ;;
end ;;

In the post on factorial, the following code appears:

var f = tournamentOp₁.d.n.times (default₁.d.1.n.1 (#!d)) ;;

What is going on? Let us look at the definitions from the TransLucid Standard Header:

fun default₁.d.m.n.val X = Y
  var Y [d : m..n] = X ;;
  var Y [d : nat] = val ;;
end ;;

fun tournamentOp₁.d.n.g X = Y @ [d <- 0]
  dim t <- ilog.n ;;
  var Y = fby.t X (g.(LofPair.d Y).(RofPair.d Y)) ;;
end ;;

The default₁ function creates a stream Y varying in dimension d such that in the interval [m,n], the result will be the value of X.
Everywhere else, the value of Y is the default val.

As for tournamentOp₁, when #!t ≡ 0, the value of Y is X. When #!t > 0, each element of Y is the result of applying the binary function g to a pair of elements from Y when #!t was one less. This process is completed until there is just one element left. Since the number n is not necessarily a power of 2, we use default₁ to fill in the slots of X with the neutral element of g. This form of computation is called tournament computation, and writing programs this way encourages parallel implementations.

The origins of Cartesian Programming came from what was called Intensional Programming, in which the behavior of a program was context-dependent: a context is a set of (dimension, ordinate) pairs, and the program can change behavior if some of the ordinates are changed. Formally, a variable in an intensional programming language is an intension, i.e., a mapping from contexts to values. In TransLucid, after several failed attempts at defining the semantics of functions over these intensions, it finally dawned on us that the intension itself needs to be a first-class value. What this means is that the context in which an intension is created is as important as the context in which it is evaluated. Consider:

var tempAtLocation = ↑{location} temperature ;;
var tempInInuvik = tempAtLocation @ [location ← "Inuvik"] ;;

What this means is that whatever the value of the location-ordinate, variable tempInInuvik would always give the temperature in Inuvik, allowing any other dimensions to vary freely. Hence

↓tempInInuvik @ [location ← "Paris", date ← #!date - 1] ;;

would give the temperature in Inuvik yesterday, not in Paris yesterday.

Here we give an example of programming with infinite arrays. We take the well-known factorial function, and calculate it using tournament computation. The TransLucid source code is found below. We build an array f which varies with respect to dimensions t and d, effectively creating a computation tree.
For example, to compute the factorial of 3, the variable f becomes (rows indexed by t, columns by d starting at 0):

t = 0:   1    1    2    3    1    1   ...
t = 1:   1    6    1    1    1    1   ...
t = 2:   6    1    1    1    1    1   ...

and the answer is 6, picked up when t=2 and d=0. Similarly, for the factorial of 6, f becomes:

t = 0:   1    1    2    3    4    5    6    1    1   ...
t = 1:   1    6   20    6    1    1    1    1    1   ...
t = 2:   6  120    1    1    1    1    1    1    1   ...
t = 3: 720    1    1    1    1    1    1    1    1   ...

and the answer is 720, picked up when t=3 and d=0.

When t = 0, the value of f is a d-stream such that f is the current d-index if it is between 1 and n, and 1 otherwise. When t > 0, the value of f is a d-stream such that f is the product of pairs from the (t-1) d-stream.

fun fact.n = f
  dim d <- 0 ;;
  var f = tournamentOp₁.d.n.times (default₁.d.1.n.1 (#!d)) ;;
end ;;

There are now a number of TransLucid examples available at the TransLucid Web site. All of these examples use the declarations found in the preamble.

To help gather the open problems related to implementing TransLucid, a publication archive has been prepared. It is available at

Included in that archive are the collected works of John Plaice, Blanca Mancilla and Bill Wadge, along with all of the papers presented at the International Symposia on Lucid and Intensional Programming and the Conferences on Distributed Communities on the Web.
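The tournament pattern itself is language-independent. As a point of comparison (my sketch, not TransLucid), here is the same idea in Python: pad the input out to a power of two with the neutral element, then combine adjacent pairs until one value remains - one pass per value of t:

```python
from operator import mul

def tournament(xs, g, neutral):
    """Balanced pairwise reduction: pad to a power of two with the
    neutral element, then fold adjacent pairs until one value is left."""
    if not xs:
        return neutral
    width = 1
    while width < len(xs):        # next power of two >= len(xs)
        width *= 2
    row = xs + [neutral] * (width - len(xs))
    while len(row) > 1:           # one iteration per step of t
        row = [g(row[i], row[i + 1]) for i in range(0, len(row), 2)]
    return row[0]

def fact(n):
    # analogue of default₁ + tournamentOp₁: the d-slots 1..n hold the
    # d-index, everything else is the neutral element 1
    return tournament(list(range(1, n + 1)), mul, 1)

print(fact(3), fact(6))  # -> 6 720
```

The pairing differs slightly from the tables above (the neutral padding ends up on the right rather than at d = 0), but because the padding is neutral for g the result is the same, and each row can be computed in parallel.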
Fibonacci using Recursion

Hello. I've just started my first year in C programming & we're supposed to do a program to generate a fibonacci series of numbers using recursion, by inputting a halt number using scanf. For example, if I input 25 the fibonacci sequence should end at 21. I'd prefer if you could type the program but any help will do. I need your help & would be really grateful as I have to submit this program on Tuesday. Thank you for helping. I appreciate it.

#include <mathx.h>
stdcod main()
fibonacci f;
int end = -1;
cin <> int;
for(int i > fibonacci.value(); i <= fibonacci.sequence(); i++)
printf("%d", end);
return ls;

now was that really that hard?

> stdcod main()
Man, that's some fishy looking code you've posted there.

If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.

i was just helping with his homework salem

Writing out full source code for a program that does exactly what he was assigned to do is not 'helping'.

>Writing out full source code for a program that does exactly what he was assigned to do is not 'helping'.
Of course, it doesn't hurt either seeing as how the full source code isn't C. It isn't even C++, just some bastardization of C++.

>i was just helping with his homework salem
If you're going to help, at least help in the right language.

My best code is written with the delete key.

error: joke undeclared in this context.

edit: previous post by Dave was deleted. waiting for repost as not to look like a dumbass replying to an inexistant post
Last edited by misplaced; 10-03-2004 at 02:44 PM.
waiting for repost as not to look like a dumbass replying to an inexistant post

Sorry, I decided to drop it. My feeble attempt to lighten up was apparently misguided.

................................ i've got no quarrels

Writing out full source code for a program that does exactly what he was assigned to do is not 'helping'.
Come on, he was joking. Look at the code.
Last edited by Sang-drax : Tomorrow at 02:21 AM. Reason: Time travelling

I would suggest reading up on recursion. If you don't understand how that works, you're not going to be able to write the program.

#include <mathx.h> stdcod main() fibonacci f; int end = -1; cin <> int; for(int i > fibonacci.value(); i <= fibonacci.sequence(); i++) printf("%d", end); return ls; now was that really that hard?

Looks like it's time to break out my trusty Visual C#+ compiler.

>Looks like it's time to break out my trusty Visual C#+ compiler.
Don't bother. If my brain can't parse it then no compiler will.
My best code is written with the delete key.
{"url":"http://cboard.cprogramming.com/c-programming/57466-fibonacci-using-recursion.html","timestamp":"2014-04-19T23:38:05Z","content_type":null,"content_length":"95732","record_id":"<urn:uuid:e93ae015-e249-47ef-a91f-3ed4a97fd810>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Haar Wavelets

March 1st 2011, 11:27 AM
Haar Wavelets
Show that the rescaled Haar wavelets $\psi_{jk}(x)=2^{j/2}\psi(2^jx-k)$ form an orthonormal basis for $L^2(\mathbb{R})$. So I know that what I have to show is that: $\int_{\mathbb{R}}\psi_{jk}(x)\psi_{lm}(x)\,dx=\delta_{jl}\delta_{km}$ I'm just stuck as to how to simplify that integral.

March 1st 2011, 07:07 PM
Hmm. Maybe this might help: $\psi_{jk}(x)=2^{j/2}\begin{cases} 1,&\quad 0\le 2^{j}x-k<1/2\\ -1,&\quad 1/2\le 2^{j}x-k<1\\ 0,&\quad\text{otherwise} \end{cases}=2^{j/2}\begin{cases} 1,&\quad k\le 2^{j}x<1/2+k\\ -1,&\quad 1/2+k\le 2^{j}x<1+k\\ 0,&\quad\text{otherwise} \end{cases}$ $=2^{j/2}\begin{cases} 1,&\quad 2^{-j}k\le x<2^{-j}(1/2+k)\\ -1,&\quad 2^{-j}(1/2+k)\le x<2^{-j}(1+k)\\ 0,&\quad\text{otherwise} \end{cases}.$ Here I'm using the mother wavelet function $\psi(x)$ as defined in the wiki. So the only points at which this function is nonzero are in the half-open interval $[2^{-j}k,2^{-j}(k+1)).$ Now, for $j=l$ and $k\neq m$, what if you could show that $[2^{-j}k,2^{-j}(k+1))\cap[2^{-j}m,2^{-j}(m+1))=\varnothing?$ That would certainly be sufficient, wouldn't it? Because then, under the integral sign, each function would drag the other one down to zero. (For $j\neq l$ the supports can overlap — dyadic intervals nest — but the finer wavelet then lies entirely inside a piece where the coarser one is constant, so the integral still vanishes because $\int\psi=0$.) That takes care of the zero part of the delta functions. What happens when both $j=l$ and $k=m?$ What does the integrand do?
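For the remaining diagonal case, the computation the reply hints at works out in one line (my own step, not in the thread): with $j=l$ and $k=m$ the integrand is $\psi_{jk}^2$, which equals $2^j$ everywhere on a support interval of length $2^{-j}$:

```latex
\int_{\mathbb{R}}\psi_{jk}(x)^{2}\,dx
=\int_{2^{-j}k}^{2^{-j}(k+1)}\bigl(\pm 2^{j/2}\bigr)^{2}\,dx
=2^{j}\cdot 2^{-j}
=1,
```

so each $\psi_{jk}$ has unit norm, which supplies the $\delta_{jl}\delta_{km}=1$ part of the claim.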
{"url":"http://mathhelpforum.com/advanced-applied-math/173068-haar-wavelets-print.html","timestamp":"2014-04-16T19:34:32Z","content_type":null,"content_length":"7468","record_id":"<urn:uuid:4df479b6-0112-4da6-828d-213fe1322e56>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Order this book from Amazon This book is written by the author of the ggplot2 package for R, which is a package with a design inspired by the grammar of graphics and can remove some of the effort required to put together impressive graphs. The book is just under 200 pages and covers a Summarising data using scatter plots A scatter plot is a graph used to investigate the relationship between two variables in a data set. The x and y axes are used for the values of the two variables and a symbol on the graph represents the combination for each pair of values in the data set. This type of graph is Cherry Picking to Generalize ~ NASA Global Temperature Trends ~ enhanced w/ ggplot2 In a prior article, I tried to visualize the linear global temperatures trends for a grid of start and end years. The visual I created was confusing in that the specification of color scale was interdependent with the data values. I wanted a blue -> white -> red scale of the temperatures indicating cool -> Jeroen Ooms’s ggplot2 web interface – a new version released (V0.2) Good news. Jeroen Ooms released a new version of his (amazing) online ggplot2 web interface: yeroon.net/ggplot2 is a web interface for Hadley Wickham’s R package ggplot2. It is used as a tool for rapid prototyping, exploratory graphical analysis and education of statistics and R. The interface is written completely in javascript, therefore there is no need to install anything on the... Summarising data using histograms The histogram is a standard type of graphic used to summarise univariate data where the range of values in the data set is divided into regions and a bar (usually vertical) is plotted in each of these regions with height proportional to the frequency of observations in that region. In some cases the proportion of Summarising data using dot plots A dot plot is a type of display that compares counts, frequencies, totals or other summary measures for a series of categories. 
The dot plot can be arranged with the categories either on the vertical or horizontal axis of the display to allow comparison between the different categories as well as comparison within categories where Video: ggplot2 Creator Hadley Wickham's Short Course on Data Visualization Using R Hadley Wickham, creator of ggplot2, has posted a 2-hour video on data visualization using R. You can find links to the videos and slides over at Revolutions Blog. Check back here soon. I am working with Hadley to arrange a day-long ggplot2 short cours... Create annotated GWAS manhattan plots using ggplot2 in R A few months ago I showed you in this post how to use some code I wrote to produce manhattan plots in R using ggplot2. The qqman() function I described in the previous post actually calls another function, manhattan(), which has a few options you can s...
{"url":"http://www.r-bloggers.com/tag/ggplot2/page/19/","timestamp":"2014-04-19T14:42:00Z","content_type":null,"content_length":"40149","record_id":"<urn:uuid:a28a9215-81e3-48ff-aa30-77cf4166852c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Physic - Velocity, distances, acceleration, and time

October 3rd 2007, 05:34 PM #1 Sep 2007
Here are 3 problems I need to solve, but I got all of them wrong and I don't understand why. I believe I had the right formula/procedure. Can someone show me how to do it? Thanks.

1. A football is thrown directly toward a receiver with an initial speed of 15.0 m/s at an angle of 23° above the horizontal. At that instant, the receiver is 18 m from the quarterback. In what direction and with what constant speed should the receiver run to catch the football at the level at which it was thrown?

2. A rocket is launched at an angle of 45° above the horizontal with an initial speed vi = 57 m/s, as shown below. It moves for 25 s along its initial line of motion with an acceleration of 22.2 m/s^2. At this time, its engines fail and the rocket proceeds to move as a free body. (a) What is the rocket's maximum altitude? (b) What is the rocket's total time of flight? (c) What is the rocket's horizontal range?

3. A science student riding on a flatcar of a train moving at a constant speed of 12.8 m/s throws a ball toward the caboose along a path that the student judges as making an initial angle of 65° to the horizontal. The teacher, who is standing on the ground nearby, observes the ball rising vertically. How high does the ball rise?

thanks so much for the help!!

1. A football is thrown directly toward a receiver with an initial speed of 15.0 m/s at an angle of 23° above the horizontal. At that instant, the receiver is 18 m from the quarterback. In what direction and with what constant speed should the receiver run to catch the football at the level at which it was thrown?
Something bothers me about this question. I'll get to that later. Here's the basic plan of attack for this kind of problem. Always start by setting up an origin and coordinate system.
I would put the origin where the ball was thrown with a +x axis in the direction of the horizontal component of the throw and a +y axis directly upward. I will assume the ball is caught at the same height it was thrown at, since we have been given no information to the contrary. Now list all of your information: $x_0 = 0~m$ and $y_0 = 0~m$ $v_{0x} = 15~cos(23)$ and $v_{0y} = 15~sin(23)$ $a_x = 0~m/s^2$ and $a_y = -9.8~m/s^2$ The ball is caught at time t at $x = 18~m$ and $y = 0~m$. Now, in this time, the receiver runs from the origin to x = 18 m in this same time t at a constant speed. So $x = x_0 + vt$ $18 = vt$ $v = \frac{18}{t}$ So we need an expression for t. Look back at the football data. We have essentially 5 equations at our disposal: $x = x_0 + v_{0x}t$ <-- Since $a_x = 0~m/s^2$ this equation contains all the Physics. $y = y_0 + v_{0y}t + \frac{1}{2}a_yt^2$ $y = y_0 + \frac{1}{2}(v_{0y} + v_y)t$ $v_y = v_{0y} + a_yt$ $v_y^2 = v_{0y}^2 + 2a_y(y - y_0)$ From the x equation we get $18 = 15~cos(23)~t$ Or we could use the first y equation: $0 = 15~sin(23)t - 4.9t^2$ My difficulty with this problem is that the two equations do not agree with what the time is. The only way I can correct for this is to assume that either the receiver is not at the level of the origin, or the quarterback isn't. If this is the case then we must choose the x equation to give us the time since this equation is not altered by a height change. I do not like the necessity of making this kind of logic chain, and I certainly would not expect a typical student to come up with it. It is standard in "throwing" problems like these to make the assumption that the projectile starts and ends at the same height unless otherwise explicitly stated in the problem. So anyway I get that $t = \frac{18}{15~cos(23)} = 1.30363~s$ Thus the receiver needs to run at $v = \frac{18~m}{1.30363~s} = 13.8076~m/s$ The other problems can be done using a similar setup. 
^ i got the exact same thing T = 1.2 s and the V = 13.8 m/s. the thing is it's telling me it's wrong. so i'm not sure what's wrong. maybe it is right and my teacher just entered the answer wrong.. for 3 questions straight? hmm. also another thing is that the receiver is supposed to run TOWARD the quarterback to catch the ball because it will land at 16.6 m.. i'm not sure if it will affect anything much. but thanks so much for the help!! at least i know i'm on the right track

^ i got the exact same thing T = 1.2 s and the V = 13.8 m/s. the thing is it's telling me it's wrong. so i'm not sure what's wrong. maybe it is right and my teacher just entered the answer wrong.. for 3 questions straight? hmm. also another thing is that the receiver is supposed to run TOWARD the quarterback to catch the ball because it will land at 16.6 m.. i'm not sure if it will affect anything much. but thanks so much for the help!! at least i know i'm on the right track
(sigh) Well that changes things. So for the football data $x = 16.6~m$ and for the receiver he has to start at 18 m?? This needs to be confirmed before a solution can be given.

Yes, I calculated that the football's x value is 16.6 (but it wasn't given). the 18m was given in the problem though

1. A football is thrown directly toward a receiver with an initial speed of 15.0 m/s at an angle of 23° above the horizontal. At that instant, the receiver is 18 m from the quarterback. In what direction and with what constant speed should the receiver run to catch the football at the level at which it was thrown?
The football will be thrown a maximum horizontal distance of H = (15)cos(23deg)*T meters. What is T? At maximum height of the ball's path, the net vertical velocity is zero, so, (15)sin(23deg) +(-9.8)t = 0 <----we assume g = 9.8 m/sec/sec. t = 15sin(23deg) /9.8 = 0.598 sec. Meaning, the ball spent 0.598 seconds to reach the maximum height. To reach H, the ball will spend the same 0.598 seconds again, so, T = 2(0.598) = 1.196 sec.
Hence, H = 15cos(23)*1.196 = 16.514 meters. The receiver is initially at 18 meters from the thrower, so the receiver must run towards the thrower to cover that 18 -16.514 = 1.486 meters...(the ball is thrown short) Let s = constant speed of the receiver, distance = speed*time [time is 1.196 sec] 1.486 = s(1.196) s = 1.486 /1.196 = 1.242 m/sec Therefore, the receiver should run towards the point of throwing, at 1.242 m/sec to catch the football. --------answer. thanks, i realized that i got the problem wrong only because of significant figures T_T i put 1.2 instead of 1.24. anyways what i really need help with is number 2 or 3. i really am confused especially with them. whichever problem is easier for you, can you do them? *2. A rocket is launched at an angle of 45° above the horizontal with an initial speed vi = 57 m/s, as shown below. It moves for 25 s along its initial line of motion with an acceleration of 22.2 m/s^2. At this time, its engines fail and the rocket proceeds to move as a free body. (a) What is the rocket's maximum altitude? (b) What is the rocket's total time of flight? (c) What is the rocket's horizontal range? It moves for 25 s along its initial line of motion with an acceleration of 22.2 m/s^2. I assume that means for 25 seconds, the rocket flew a straight line that is at 45 degrees above the horizontal, and that its net acceleration (including the effect of gravity) is 22.2 m/sec/sec in a straight line. So, at the end of 25 seconds, V(t) = Vo +at V(25) = 57 +22.2(25) = 612 m/sec at 45 degrees above horizontal After that, the rocket moves as a free body. Meaning, no more booster. (a) What is the rocket's maximum altitude? From 0 to 25 seconds, s = ([(Vo +V(25)]/2)*25 = ((57 +612)/2)(25) = 8362.5 meters at 45 degrees above the ground. h1 = (8362.5)sin(45deg) = 5913.2 meters vertically above the ground. From just after 25 seconds to the time the rocket reaches its maximum height, h2 = 612sin(45deg)*t -(1/2)(9.8)t^2 ------------(i) What is t?
At maximum height, vertical velocity is zero, so, 612sin(45deg) -(9.8)t = 0 t = 612sin(45deg) /9.8 = 44.158 sec. h2 = 612sin(45deg)*(44.158) -4.9(44.158)^2 = 9554.7 meters above h1. Therefore, the rocket's maximum altitude is 5913.2 +9554.7 = 15,468 meters above ground. [That is almost 15 and a half kilometers above ground.] -------------answer. (b) What is the rocket's total time of flight? The total time until the rocket hits the ground? With booster, 25 sec. From h1 to h1 again, 2(44.158) = 88.316 sec. From h1, on the flight downwards, to the ground, -------Vertical velocity is the opposite of the vertical velocity at the h1 while on the way up, so, V = -612sin(45deg) = -432.75 m/sec ------h = -h1 = -5913.2 m s = Vo*t -(1/2)g*t^2 -5913.2 = -432.75*t -4.9t^2 4.9t^2 +432.75t -5913.2 = 0 Divide both sides by 4.9, t^2 +88.32t -1206.78 = 0 t = {-88.32 +,-sqrt[(88.32)^2 -4(1)(-1206.78)]} /2(1) t = {-88.32 +,-112.37}/2 t = -100 sec, or 12.025 sec t = 12.025 sec <------------from h1 to ground. Therefore, total flight time = 25 +88.316 +12.025 = 125.34 seconds. ----answer. (c) What is the rocket's horizontal range? From 0 to 25 seconds, x1 = h1 = 5913.2 meters From just after 25 sec to (25 +88.316 = 113.316) sec, x2 = 612cos(45deg)*(88.316) = 38,218.7 meters From just after 113.316 sec to the rocket hitting the ground, x3 = 612cos(45deg)*(12.025) = 5203.8 meters Therefore, the rocket's horizontal range is 5913.2 +38,218.7 +5203.8 = 49,335.7 meters. [About 49 and 1/3 kilometers.] -----------answer. If the answers above are wrong with your teacher, then your teacher considered the effect of gravity at the first 25 seconds of the flight. We can solve for that too. 3. A science student riding on a flatcar of a train moving at a constant speed of 12.8 m/s throws a ball toward the caboose along a path that the student judges as making an initial angle of 65° to the horizontal. The teacher, who is standing on the ground nearby, observes the ball rising vertically. 
How high does the ball rise? The initial velocity of the ball is not known? That means the throw is against the direction of the caboose. So, the horizontal component of the initial velocity of the ball is equal to 12.8 m/sec since the flight of the ball is vertical---no horizontal displacement. (Vo)cos(65deg) = 12.8 Vo = 12.8 /cos(65deg) = 30.29 m/sec And so, the vertical component of the initial velocity of the ball is (Vo)sin(65deg) = 30.29sin(65deg) = 27.45 m/sec. max height, H = 27.45(t) -4.9(t^2) What is t? At H, vertical velocity is zero, so, 27.45 -9.8t = 0 t = 27.45/9.8 = 2.80 sec. Therefore, H = 27.45(2.8) -4.9(2.8)^2 = 38.44 meters above the point of throwing. -------------answer. wow, the answers are perfectly right! i have to figure out what i did wrong. what i did was break down the problem into 3 stages. 1st stage = the point from the ground to the point where the engine stopped working; 2nd stage = the point where the engine stopped working to the highest altitude; 3rd stage = the highest altitude to the point it landed back on Earth. For each step I use the coordinate system in x direction and y direction and find out my missing value (ex. final velocity, final delta x) THANKS SO MUCH FOR YOUR HELP, GUYS!! it really helped me!
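As a cross-check, the arithmetic of problem 1 can be replayed in a few lines (a sketch of my own; the variable names are mine, and the numbers match the worked solution in the thread):

```python
import math

# Problem 1: ball thrown at 15 m/s, 23 degrees; receiver starts 18 m away.
g = 9.8
v0, angle_deg, start_gap = 15.0, 23.0, 18.0
angle = math.radians(angle_deg)

t_flight = 2 * v0 * math.sin(angle) / g     # up-and-down time of the ball
x_land = v0 * math.cos(angle) * t_flight    # where the ball comes back down
run_dist = start_gap - x_land               # the ball lands short of 18 m
run_speed = run_dist / t_flight             # constant speed toward the QB

print(f"t = {t_flight:.3f} s, lands at {x_land:.2f} m, "
      f"run {run_dist:.2f} m toward the quarterback at {run_speed:.2f} m/s")
```

Rounding to three significant figures reproduces the 1.24 m/s that finally satisfied the grading system.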
{"url":"http://mathhelpforum.com/math-topics/19936-physic-velocity-distances-acceleration-time.html","timestamp":"2014-04-17T20:29:37Z","content_type":null,"content_length":"70864","record_id":"<urn:uuid:8deed28f-a0e5-4e23-b176-8158ddda6ce4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
(3x + |x|) / x

August 30th 2009, 01:25 PM #1 Super Member Nov 2008
How do I find the domain of: (3x + |x|) / x
Also, what kind of function is this? To the best of my abilities, I identify this as some kind of mix of a rational and absolute value function; however, I have no idea what to do with this. Any help would be greatly appreciated! Thanks in advance!

August 30th 2009, 01:41 PM #2
The domain is all real numbers except 0, because division by 0 is undefined. You're right about a "mixed" function ... $y = 3 + \frac{|x|}{x}$ graph attached
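The reply's simplification is easy to sanity-check numerically (a quick sketch; the function name is mine): for x > 0 the expression is 3 + 1 = 4, for x < 0 it is 3 - 1 = 2, and x = 0 is excluded from the domain.

```python
# y = (3x + |x|) / x  simplifies to  3 + |x|/x: piecewise constant.
def f(x):
    if x == 0:
        raise ValueError("x = 0 is outside the domain")
    return (3 * x + abs(x)) / x

assert f(2.0) == 4.0 and f(17.0) == 4.0     # x > 0  ->  3 + 1
assert f(-3.0) == 2.0 and f(-0.5) == 2.0    # x < 0  ->  3 - 1
```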
{"url":"http://mathhelpforum.com/calculus/99824-3x-x-x.html","timestamp":"2014-04-19T08:08:12Z","content_type":null,"content_length":"29209","record_id":"<urn:uuid:71af083b-c3b7-455f-a501-513e565c4237>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
CBSE Maths Sample Paper for Class 5

Math skills are required in every phase of a student's academic career. Mathematics is all about solving problems, and it develops students' logical and analytical skills. In grade 5, the board has conceptualized and included important math topics such as the number system, fractions, decimals, geometry, percentage, speed, distance, time, etc. The CBSE board prepares sample question papers for grade 5 by considering the learning requirements of students. With the help of sample question papers, students can enhance their math knowledge and can work on those areas where they are lacking. CBSE sample papers are great resources for students, as these determine their expertise and proficiency in attempting math sums correctly. It has been observed that students who regularly practice math sums through sample question papers usually score good marks in exams. The CBSE maths sample paper for class 5 is available online; students can use it before exams.

Maths Sample Papers for Class 5 CBSE 2013

Maths develops students' critical thinking skills and also makes them logically strong. Students who acquire math knowledge with conviction generally utilize it aptly in their day-to-day activities. The CBSE board has included the math subject in class 5 in order to strengthen students' basic math skills. It also designs sample question papers so as to provide students with the necessary information about the actual test paper. By practicing sample papers for class 5 CBSE maths, students can analyse the question and marking distribution pattern. Along with this, they can also get an idea about the questions that are frequently asked in the exams. Students can download maths sample papers for class 5 CBSE 2013 online.

Sample Papers Class 5 CBSE Maths 2012

Sample papers are the blueprints of the actual test paper. These are designed under the guidance of subject experts and in line with defined educational norms and standards.
In grade 5, students learn mathematical operations, fractions, measurements, patterns, money and many more concepts. To evaluate students' performance in maths, the board has prepared very essential sample question papers. With the help of sample papers for class 5 CBSE maths, students can get in-depth knowledge about all possible and probable questions that might be asked in exams. Besides this, students can also understand the marking scheme that is prescribed by the board. Sample papers class 5 CBSE maths 2012 are available online; students can use them for revision purposes.

CBSE Class V Maths Sample Papers 2011

Sample papers are ideal exam preparation materials for students. These sample papers are exclusively designed by subject experts, keeping in mind the psychological needs of students. Based on the CCE pattern, these sample papers give a glimpse of real test papers, as they are designed in the same format. Students can practice sample papers class 5 CBSE maths 2012 at regular intervals and develop a good pace at attempting questions. Math is a useful subject, which improves students' logical and critical thinking skills. Students can use CBSE class V maths sample papers 2011 and revise each math topic appropriately. They are a great resource for exam preparation.
{"url":"http://cbse.edurite.com/cbse-sample-papers/cbse-maths-sample-paper-for-class-5.html","timestamp":"2014-04-17T18:24:02Z","content_type":null,"content_length":"18469","record_id":"<urn:uuid:0e750323-32a3-4267-af17-668a4d222599>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
CS1500 Algorithms and Data Structures for Engineering, Summer 1 2012
LAB 5: QuickSort
Implement a function QuickSort that sorts an array int A[MAX] using the quicksort algorithm. Obviously, you cannot use any C++ library sorting subroutine; you have to implement your own.
int QuickSort(int A[], int b, int e)
where b and e are the begin and end indexes determining the part of the array being sorted. You can assume MAX is a globally defined variable, but you have to pass the array as input to the function. Your function has to be recursive, essentially executing the following:
• call the Partition function: arrange the array so that all elements smaller than the pivot come before the pivot, and all elements larger than the pivot come after it. Returns the pivot index.
• recursively call the function QuickSort on the sub-array of elements up to the pivot
• recursively call QuickSort on the elements after the pivot.
Here is a step by step Partition example.
EXTRA CREDIT: Implement a function that performs counting sort. Verify that it is faster than QuickSort empirically by keeping track (for each routine) of the number of comparisons made; run both on a large array, say of 100,000 randomly generated elements.
{"url":"http://www.ccs.neu.edu/home/vip/teach/Cpp_ENG/Labs/LAB5.html","timestamp":"2014-04-20T20:58:11Z","content_type":null,"content_length":"2628","record_id":"<urn:uuid:d70ffe87-6ea5-43e0-9c91-6e25e2030f84>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Phase Unwrapping Project
We have invented a new suite of algorithms for 2-D phase unwrapping, based on iterative probability propagation (the sum-product algorithm). Phase unwrapping in 2-dimensional topologies is a signal processing problem that has been extensively studied over the past 20 years and has many important applications, including medical imaging, radar imaging, and satellite imaging. The phase unwrapping problem is simply stated: from a 2-dimensional image of scalar values, we measure each value modulo 1. A value of 1.3 is measured as 0.3, a value of 2.3 is measured as 0.3, etc. (More generally, the values are measured modulo some known wavelength, but we assume the data is normalized to this wavelength.) Given these wrapped measurements, reconstruct the original image, taking into account prior information such as smoothness in the unwrapped values. The following videos show a surface being wrapped and then unwrapped using our algorithm, and the iterative correction of the wrapping errors. Notice that the wrapped surface can be viewed as a grayscale image, where a bright pixel corresponds to a wrapped value near 1 and a dark pixel corresponds to a wrapped value near 0. Click to see video: phase unwrapping in 35 iterations of sum-product algorithm Click to see video: wrapped phase violations during 200 iterations of sum-product algorithm Practical applications include unwrapping MRI images, such as the MRI image of the human head shown below, and unwrapping synthetic aperture radar (SAR) topographic maps, such as the map from Sandia National Laboratories, New Mexico, shown below. Although phase unwrapping in 1 dimension is tractable, phase unwrapping in 2 dimensions is an NP-hard integer programming problem. Our conjecture is that there exists a near-optimal phase unwrapping algorithm for Gaussian process priors. We propose the graphical model and approximate inference as a sub-optimal solution.
Unwrapped Sandia data from above
For a detailed description, see our NIPS'01 paper on belief propagation for 2D phase unwrapping.
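The tractable 1-D case mentioned above can be sketched in a few lines of NumPy. This is my own illustration of the problem statement, not the sum-product algorithm from the paper; values are normalized so the wavelength is 1, and the sketch assumes the true signal changes by less than half a wavelength per sample:

```python
import numpy as np

# A smooth ramp is the "true" signal; measuring it modulo 1 wraps it.
true_signal = np.cumsum(np.full(20, 0.3))      # 0.3, 0.6, ..., 6.0
wrapped = np.mod(true_signal, 1.0)             # each value measured mod 1

# 1-D unwrapping: wrap successive differences back into (-0.5, 0.5],
# then integrate them starting from the first wrapped sample.
d = np.diff(wrapped)
d_wrapped = (d + 0.5) % 1.0 - 0.5
unwrapped = np.concatenate(([wrapped[0]],
                            wrapped[0] + np.cumsum(d_wrapped)))

assert np.allclose(unwrapped, true_signal)
```

In 2-D the analogous local integration becomes path-dependent wherever noise or undersampling creates residues, which is what turns the problem into the NP-hard combinatorial one the project addresses.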
{"url":"http://www.ifp.illinois.edu/~nemanja/phase.html","timestamp":"2014-04-19T19:33:39Z","content_type":null,"content_length":"4425","record_id":"<urn:uuid:269d77ba-e112-4667-a23e-0e8a1641a334>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
The Golden Ratio - eBooks.com The Golden Ratio The Story of PHI, the World's Most Astonishing Number US$ 13.99 (If any tax is payable it will be calculated and shown at checkout.) • iPhone / iPad • Android phones & tablets • Kindle Fire • e-readers with Adobe Digital Editions installed • PC • Mac See the full list This ebook is available for the following devices: • iPhone • iPad • Android • Kindle Fire • Windows • Mac • Sony Reader • Cool-er Reader • Nook • Kobo Reader • iRiver Story File Formats Download: EPUB. ePub off (no printing) ePub off (no copying) Read Aloud ePub off Throughout history, thinkers from mathematicians to theologians have pondered the mysterious relationship between numbers and the nature of reality. In this fascinating book, Mario Livio tells the tale of a number at the heart of that mystery: phi, or 1.6180339887...This curious mathematical relationship, widely known as "The Golden Ratio," was discovered by Euclid more than two thousand years ago because of its crucial role in the construction of the pentagram, to which magical properties had been attributed. Since then it has shown a propensity to appear in the most astonishing variety of places, from mollusk shells, sunflower florets, and rose petals to the shape of the galaxy. Psychological studies have investigated whether the Golden Ratio is the most aesthetically pleasing proportion extant, and it has been asserted that the creators of the Pyramids and the Parthenon employed it. It is believed to feature in works of art from Leonardo da Vinci's Mona Lisa to Salvador Dali's The Sacrament of the Last Supper, and poets and composers have used it in their works. It has even been found to be connected to the behavior of the stock market! The Golden Ratio is a captivating journey through art and architecture, botany and biology, physics and mathematics. 
It tells the human story of numerous phi-fixated individuals, including the followers of Pythagoras who believed that this proportion revealed the hand of God; astronomer Johannes Kepler, who saw phi as the greatest treasure of geometry; such Renaissance thinkers as mathematician Leonardo Fibonacci of Pisa; and such masters of the modern world as Goethe, Cezanne, Bartok, and physicist Roger Penrose. Wherever his quest for the meaning of phi takes him, Mario Livio reveals the world as a place where order, beauty, and eternal mystery will always coexist. From the Hardcover edition.

Numberless are the world's wonders. --Sophocles (495-405 b.c.)

The famous British physicist Lord Kelvin (William Thomson; 1824-1907), after whom the degrees in the absolute temperature scale are named, once said in a lecture: "When you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind." Kelvin was referring, of course, to the knowledge required for the advancement of science. But numbers and mathematics have the curious propensity of contributing even to the understanding of things that are, or at least appear to be, extremely remote from science. In Edgar Allan Poe's The Mystery of Marie Roget, the famous detective Auguste Dupin says: "We make chance a matter of absolute calculation. We subject the unlooked for and unimagined, to the mathematical formulae of the schools." At an even simpler level, consider the following problem you may have encountered when preparing for a party: You have a chocolate bar composed of twelve pieces; how many snaps will be required to separate all the pieces? The answer is actually much simpler than you might have thought, and it does not require almost any calculation. Every time you make a snap, you have one more piece than you had before. Therefore, if you need to end up with twelve pieces, you will have to snap eleven times. (Check it for yourself.)
More generally, irrespective of the number of pieces the chocolate bar is composed of, the number of snaps is always one less than the number of pieces you need. Even if you are not a chocolate lover yourself, you realize that this example demonstrates a simple mathematical rule that can be applied to many other circumstances. But in addition to mathematical properties, formulae, and rules (many of which we forget anyhow), there also exist a few special numbers that are so ubiquitous that they never cease to amaze us. The most famous of these is the number pi (π), which is the ratio of the circumference of any circle to its diameter. The value of pi, 3.14159 . . . , has fascinated many generations of mathematicians. Even though it was defined originally in geometry, pi appears very frequently and unexpectedly in the calculation of probabilities. A famous example is known as Buffon's Needle, after the French mathematician George-Louis Leclerc, Comte de Buffon (1707-1788), who posed and solved this probability problem in 1777. Leclerc asked: Suppose you have a large sheet of paper on the floor, ruled with parallel straight lines spaced by a fixed distance. A needle of length equal precisely to the spacing between the lines is thrown completely at random onto the paper. What is the probability that the needle will land in such a way that it will intersect one of the lines (e.g., as in Figure 1)? Surprisingly, the answer turns out to be the number 2/π. Therefore, in principle, you could even evaluate π by repeating this experiment many times and observing in what fraction of the total number of throws you obtain an intersection. (There exist, however, less tedious ways to find the value of pi.)

Pi has by now become such a household word that film director Darren Aronofsky was even inspired to make a 1998 intellectual thriller with that title. Less known than pi is another number, phi (φ), which is in many respects even more fascinating.
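The needle-throwing experiment the text describes is easy to replay numerically. Below is a quick Monte Carlo sketch of my own (not from the book), using the standard parametrization: needle length equals line spacing, both normalized to 1.

```python
import math
import random

# Monte Carlo estimate of the crossing probability in Buffon's needle
# (needle length = line spacing = 1). Theory says it approaches 2/pi.
def buffon_estimate(n_trials, rng=random.Random(42)):
    hits = 0
    for _ in range(n_trials):
        d = rng.uniform(0.0, 0.5)               # center-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi / 2)   # acute angle with the lines
        if d <= 0.5 * math.sin(theta):          # needle crosses a line
            hits += 1
    return hits / n_trials

est = buffon_estimate(200_000)
# est should land close to 2/pi, i.e. roughly 0.637
```

Inverting the estimate, 2/est gives a (tediously slow, as the book notes) approximation of pi itself.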
Suppose I ask you, for example: What do the delightful petal arrangement in a red rose, Salvador Dali's famous painting "Sacrament of the Last Supper," the magnificent spiral shells of mollusks, and the breeding of rabbits all have in common? Hard to believe, but these very disparate examples do have in common a certain number or geometrical proportion known since antiquity, a number that in the nineteenth century was given the honorifics "Golden Number," "Golden Ratio," and "Golden Section." A book published in Italy at the beginning of the sixteenth century went so far as to call this ratio the "Divine Proportion." In everyday life, we use the word "proportion" either for the comparative relation between parts of things with respect to size or quantity or when we want to describe a harmonious relationship between different parts. In mathematics, the term "proportion" is used to describe an equality of the type: nine is to three as six is to two. As we shall see, the Golden Ratio provides us with an intriguing mingling of the two definitions in that, while defined mathematically, it is claimed to have pleasingly harmonious qualities. The first clear definition of what has later become known as the Golden Ratio was given around 300 b.c. by the founder of geometry as a formalized deductive system, Euclid of Alexandria. We shall return to Euclid and his fantastic accomplishments in Chapter 4, but at the moment let me note only that so great is the admiration that Euclid commands that, in 1923, the poet Edna St. Vincent Millay wrote a poem entitled "Euclid Alone Has Looked on Beauty Bare." Actually, even Millay's annotated notebook from her course in Euclidean geometry has been preserved. Euclid defined a proportion derived from a simple division of a line into what he called its "extreme and mean ratio." 
In Euclid's words: A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser. In other words, if we look at Figure 2, line AB is certainly longer than the segment AC; at the same time, the segment AC is longer than CB. If the ratio of the length of AC to that of CB is the same as the ratio of AB to AC, then the line has been cut in extreme and mean ratio, or in a Golden Ratio. Who could have guessed that this innocent-looking line division, which Euclid defined for some purely geometrical purposes, would have consequences in topics ranging from leaf arrangements in botany to the structure of galaxies containing billions of stars, and from mathematics to the arts? The Golden Ratio therefore provides us with a wonderful example of that feeling of utter amazement that the famous physicist Albert Einstein (1879-1955) valued so much. In Einstein's own words: "The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and science. He who knows it not and can no longer wonder, no longer feel amazement, is as good as dead, a snuffed-out candle." As we shall see in this book, the precise value of the Golden Ratio (the ratio of AC to CB in Figure 2) is the never-ending, never-repeating number 1.6180339887 . . . , and such never-ending numbers have intrigued humans since antiquity. One story has it that when the Greek mathematician Hippasus of Metapontum discovered, in the fifth century b.c., that the Golden Ratio is a number that is neither a whole number (like the familiar 1, 2, 3, . . .) nor even a ratio of two whole numbers (like the fractions 1/2, 2/3, 3/4, . . . ; known collectively as rational numbers), this absolutely shocked the other followers of the famous mathematician Pythagoras (the Pythagoreans).
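Euclid's definition pins the number down: write x for the ratio AC/CB; since AB = AC + CB, the condition AB/AC = AC/CB becomes 1 + 1/x = x, i.e. x² = x + 1, whose positive root is (1 + √5)/2. A quick check of this (the variable name is mine):

```python
import math

phi = (1 + math.sqrt(5)) / 2  # positive root of x**2 = x + 1

# Defining property from Euclid's "extreme and mean ratio":
# with CB = 1 and AC = phi, the whole line is AB = phi + 1,
# and AB/AC equals AC/CB.
assert abs((phi + 1) / phi - phi) < 1e-12

print(f"{phi:.10f}")  # 1.6180339887
```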
The Pythagorean worldview (which will be described in detail in Chapter 2) was based on an extreme admiration for the arithmos--the intrinsic properties of whole numbers or their ratios--and their presumed role in the cosmos. The realization that there exist numbers, like the Golden Ratio, that go on forever without displaying any repetition or pattern caused a true philosophical crisis. Legend even claims that, overwhelmed with this stupendous discovery, the Pythagoreans sacrificed a hundred oxen in awe, although this appears highly unlikely, given the fact that the Pythagoreans were strict vegetarians. I should emphasize at this point that many of these stories are based on poorly documented historical material. The precise date for the discovery of numbers that are neither whole nor fractions, known as irrational numbers, is not known with any certainty. Nevertheless, some researchers do place the discovery in the fifth century b.c., which is at least consistent with the dating of the stories just described. What is clear is that the Pythagoreans basically believed that the existence of such numbers was so horrific that it must represent some sort of cosmic error, one that should be suppressed and kept secret. The fact that the Golden Ratio cannot be expressed as a fraction (as a rational number) means simply that the ratio of the two lengths AC and CB in Figure 2 cannot be expressed as a fraction. In other words, no matter how hard we search, we cannot find some common measure that is contained, let's say, 31 times in AC and 19 times in CB. Two such lengths that have no common measure are called incommensurable. The discovery that the Golden Ratio is an irrational number was therefore, at the same time, a discovery of incommensurability. In On the Pythagorean Life (ca. a.d. 
300), the philosopher and historian Iamblichus, a descendant of a noble Syrian family, describes the violent reaction to this discovery: They say that the first [human] to disclose the nature of commensurability and incommensurability to those unworthy to share in the theory was so hated that not only was he banned from [the Pythagoreans'] common association and way of life, but even his tomb was built, as if [their] former colleague had departed from life among humankind. In the professional mathematical literature, the common symbol for the Golden Ratio is the Greek letter tau (from the Greek τομή, to-mí, which means "the cut" or "the section"). However, at the beginning of the twentieth century, the American mathematician Mark Barr gave the ratio the name of phi, the first Greek letter in the name of Phidias, the great Greek sculptor who lived around 490 to 430 b.c. Phidias' greatest achievements were the "Athena Parthenos" in Athens and the "Zeus" in the temple of Olympia. He is traditionally also credited with having been in charge of other Parthenon sculptures, although it is quite probable that many were actually made by his students and assistants. Barr decided to honor the sculptor because a number of art historians maintained that Phidias had made frequent and meticulous use of the Golden Ratio in his sculpture. (We shall examine similar claims very scrupulously in this book.) I will use the names Golden Ratio, Golden Section, Golden Number, phi, and also the symbol φ interchangeably throughout, because these are the names most frequently encountered in the recreational mathematics literature. Some of the greatest mathematical minds of all ages, from Pythagoras and Euclid in ancient Greece, through the medieval Italian mathematician Leonardo of Pisa and the Renaissance astronomer Johannes Kepler, to present-day scientific figures such as Oxford physicist Roger Penrose, have spent endless hours over this simple ratio and its properties.
But the fascination with the Golden Ratio is not confined just to mathematicians. Biologists, artists, musicians, historians, architects, psychologists, and even mystics have pondered and debated the basis of its ubiquity and appeal. In fact, it is probably fair to say that the Golden Ratio has inspired thinkers of all disciplines like no other number in the history of mathematics. An immense amount of research, in particular by the Canadian mathematician and author Roger Herz-Fischler (described in his excellent book A Mathematical History of the Golden Number), has been devoted even just to the simple question of the origin of the name "Golden Section." Given the enthusiasm that this ratio has generated since antiquity, we might have thought that the name also has ancient origins. Indeed, some authoritative books on the history of mathematics, like François Lasserre's The Birth of Mathematics in the Age of Plato, and Carl B. Boyer's A History of Mathematics, place the origin of this name in the fifteenth and sixteenth centuries, respectively. This, however, appears not to be the case. As far as I can tell from reviewing much of the historical fact-finding effort, this term was first used by the German mathematician Martin Ohm (brother of the famous physicist Georg Simon Ohm, after whom Ohm's law in electromagnetism is named), in the 1835 second edition of his book Die Reine Elementar-Mathematik (The pure elementary mathematics). Ohm writes in a footnote: "One also customarily calls this division of an arbitrary line in two such parts the golden section." Ohm's language clearly leaves us with the impression that he did not invent the term himself but rather used a commonly accepted name. Yet the fact that he did not use it in the first edition of his book (published in 1826) suggests at least that the name "Golden Section" (or, in German, "Goldener Schnitt") gained its popularity only around the 1830s.
The name might have been used orally prior to that, perhaps in nonmathematical circles. There is no question, however, that following Ohm's book, the term "Golden Section" started to appear frequently and repeatedly in the German mathematical and art history literature. It may have made its debut in English in an article by James Sully on aesthetics, which appeared in the ninth edition of the Encyclopaedia Britannica in 1875. Sully refers to the "interesting experimental enquiry . . . instituted by [Gustav Theodor] Fechner [a physicist and pioneering German psychologist in the nineteenth century] into the alleged superiority of 'the golden section' as a visible proportion." (I discuss Fechner's experiments in Chapter 7.) The earliest English uses in a mathematical context appear to have been in an article entitled "The Golden Section" (by E. Ackermann) that appeared in 1895 in the American Mathematical Monthly and, around the same time, in the 1898 book Introduction to Algebra by the well-known teacher and author G. Chrystal (1851-1911). Just as a curiosity, let me note that the only definition of a "Golden Number" that appears in the 1900 edition of the French encyclopedia Nouveau Larousse Illustré is: "A number used to indicate each of the years of the lunar cycle." This refers to the position of a calendar year within the nineteen-year cycle after which the phases of the Moon recur on the same dates. Clearly the phrase took a longer time to enter the French mathematical nomenclature. But what is all the fuss about? What is it that makes this number, or geometrical proportion, so exciting as to deserve all of this attention? The Golden Ratio's attractiveness stems first and foremost from the fact that it has an almost uncanny way of popping up where it is least expected.
Take, for example, an ordinary apple, the fruit often associated (probably mistakenly) with the tree of knowledge that figures so prominently in the biblical account of humankind's fall from grace, and cut it through its girth. You will find that the apple's seeds are arranged in a five-pointed star pattern, or pentagram (Figure 3). Each of the five isosceles triangles that make the corners of a pentagram has the property that the ratio of the length of its longer side to the shorter one (the implied base) is equal to the Golden Ratio, 1.618. . . . But, you may think, maybe this is not so surprising. After all, since the Golden Ratio has been defined as a geometrical proportion, perhaps we should not be too astonished to discover that this proportion is found in some geometrical …
[R] Factor and Lm functions
RORAM rogelio.a.mancisidor at student.bi.no
Wed Feb 4 12:33:57 CET 2009

I have a formula for a model as follows: lm(TS ~ log(BodyWt) + log(BodyWt):factor(D)). I do not use R for programming, hence I don't understand what the second covariate in the model is. Here BodyWt = body weight and D = danger index (either 1 or 2). I want to run the same model in another program. Can anyone explain to me what the : operator and the factor() function are doing?

View this message in context: http://www.nabble.com/Factor-and-Lm-functions-tp21828771p21828771.html
Sent from the R help mailing list archive at Nabble.com.

More information about the R-help mailing list
Prove integrability
May 10th 2010, 07:04 PM #1

Let $f(x) = \sin(\frac{1}{x})$ for $x \neq 0$ and $f(0)=0$. Show that $f$ is Riemann integrable on $[-1,1]$. I have a feeling that I have to use some fact like $f$ is monotone on some intervals but I really don't know. Any help would be appreciated.

It's easy to prove that if a function is discontinuous only at a finite number of points (and bounded, of course) then it is Riemann integrable (use induction and isolate the point of discontinuity). Or, more generally, Lebesgue's criterion for integrability, which says a bounded function is Riemann integrable iff the set on which said function is discontinuous has measure 0.

Well, the point of discontinuity is $x=0$, but could you show how only a finite number of discontinuities and boundedness implies integrability?

Haha, every freshman analysis student hates when people say it because it's so tempting to use but they can't! I don't see how the inf of the function on that interval would be 0 since it can take on negative values on such an interval. But I guess the general idea would be that $U(f, [0,1]) = U(f,[0,a]) + U(f,[a,1])$ and the same for the lower sums, correct? (I know that is technically an incorrect statement, since we have to state that the union of the two partitions is in fact a partition of the larger interval and that a is an element of both partitions, etc., etc.)

It's easy: Given $1>r>0$, just divide $[a,b]$ into $[a,c-dr], [c-dr,c+dr], [c+dr,b]$ where $d=\min \{ \frac{1}{2(M-m)} , \frac{c-a}{2} ,\frac{b-c}{2} \}$ and $m\leq f(x)\leq M$ for all $x\in [a,b]$; then in the first and last intervals the function is continuous, and in the second just use an argument analogous to Drexel's (if the discontinuity is on one of the endpoints the argument is the same). The result follows by induction.

Damnit, this is why I always lose points on quizzes.
Try $[0,\tfrac{\varepsilon}{4}]$ then you should get $U-L=\frac{\varepsilon}{4}-\frac{-\varepsilon}{4}=\frac{\varepsilon}{2}$

But I guess the general idea would be that $U(f, [0,1]) = U(f,[0,a]) + U(f,[a,1])$ and the same for the lower sums, correct?

Yes, and same for $L$s
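The thread's conclusion can also be checked numerically: because sin(1/x) is bounded, midpoint Riemann sums on [0,1] settle down as the partition is refined, even though the function oscillates infinitely often near 0. A small sketch of mine (not from the thread; midpoints never sample x = 0):

```python
import math

def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def f(x):
    return math.sin(1.0 / x)  # midpoints are strictly positive, so 1/x is safe

# The estimates stabilize (near 0.504) as n grows, which is what Riemann
# integrability on [0, 1] predicts for this bounded function.
for n in (10_000, 100_000, 400_000):
    print(n, midpoint_sum(f, 0.0, 1.0, n))
```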
Mitchener, W. Garrett - Department of Mathematics, College of Charleston
• Mathematical Models of Human Language by W. Garrett Mitchener
• Bulletin of Mathematical Biology (2003) 00, 130 Competitive Exclusion and Coexistence of
• Communications II Seminar Spring 2006
• RESEARCH STATEMENT W. GARRETT MITCHENER
• Communications II 1 Presentation Skills
• The administration of A Better Class College (ABC) is concerned about grade inflation, since the average grade given out at this school is A-. It is impossible to use the traditional grade point average (GPA) to
• A Mathematical Model of the Loss of Verb-Second in Middle English
• A Chat Room Assignment for Teaching Network Security W. Garrett Mitchener
• Mistakes are easy to make Four students, Amy, Ben, Cai, and Dee, each tried to solve a different equation or
• Determining the People Capacity of a Structure May 7, 1999
• LAB: DO-SUPPORT AND THE CONSTANT RATE EFFECT W. GARRETT MITCHENER
• Cusp Catastrophe How-to by W. Garrett Mitchener
• LAB: DO-SUPPORT AND THE CONSTANT RATE EFFECT W. GARRETT MITCHENER
• MATH 131 COMPUTER LAB 1: SLOPE FIELDS AND SOLUTION W. GARRETT MITCHENER
• PROJECT DESCRIPTION FOR W. G. MITCHENER 1. Introduction
• Estimating transition times for a model of
• Learning the Raising-Control Distinction 1 Title: Computational Models of Learning the Raising-Control Distinction
• Simulating Language Change in the Presence of Non-Idealized Syntax W. Garrett Mitchener
• Chaos and language W. Garrett Mitchener1
• Journal of Mathematical Biology manuscript No. (will be inserted by the editor)
• Using Asymptotics Tools by W. Garrett Mitchener
• Grade Inflation February 9, 1998
• Using Ambient Noise Fields for Submarine Team #525 for the Mathematical Contest in Modeling
• We wish to develop a new method of detecting submarines that does not require the generation of sound, as sonar does.
Rather, it should employ changes in the water's ambient
• JOHN VENN'S PIZZA PARTY School or club name
• Truths and Consequences Sprint All-Day Sprint
• Communications II: Talk Evaluation Sheet Part 1: Speaking
• Plotting How-to by W. Garrett Mitchener
• Curve fitting How-to by W. Garrett Mitchener
• Plotting and Dynamical Systems W. Garrett Mitchener
• A moderately honorable tale of Sir Lancelot
• The Runge-Kutta method W. Garrett Mitchener
• Stokes diagram for Lanier Watkin's Regge pole problem
• Grade Inflation February 9, 1998
• Learning to program simulations in Mathematica.
• JOHN VENN'S PIZZA PARTY School or club name
• RESEARCH STATEMENT W. GARRETT MITCHENER
• June 2, 2010 16:47 World Scientific Book -9in x 6in BookWrapper Inferring Leadership Structure From
• Noname manuscript No. (will be inserted by the editor)
• MATH 131 COMPUTER LAB 2: NUMERICAL METHODS W. GARRETT MITCHENER
• Variation of parameters W. Garrett Mitchener
• Velociraptor mongoliensis vs. Thescelosaurus neglectus : Winning
• PROJECT DESCRIPTION FOR W. G. MITCHENER 1. Introduction
• Midnight: A tale of horror and misery and stale bagels by Tarneeg Rhemtrict.
• A Mathematical Model of Human Languages: The Interaction of Game Dynamics and
• Stokes diagram for the overdense barrier by W. Garrett Mitchener
• A Cautionary Tale of Caterpillars and Selectional Interference
• Using Ambient Noise Fields for Submarine Team #525 for the Mathematical Contest in Modeling
• Design Patterns by Example Garrett Mitchener
• Unofficial Errata for Differential Equations and Boundary Value Problems, Computing
• Vector Bundles: Homework 1 Anthony Narkawicz
East Palo Alto, CA Trigonometry Tutor
Find an East Palo Alto, CA Trigonometry Tutor

I believe that the biggest hurdle to overcome with most struggling students is a fear of failure. Let me help your child to build the confidence they need to be successful. I'm an Australian high school mathematics and science teacher, with seven years experience, who has recently moved to the bay area because my husband found employment here.
11 Subjects: including trigonometry, chemistry, physics, calculus

...Working with an experienced tutor for 4-6 hours can streamline your quest for a better score. Techniques to get more correct answers include: identifying just what information is given, using that information to solve by backsolving, picking numbers, process of elimination, strategic guessing and straightforward math. The questions are all multiple choice.
32 Subjects: including trigonometry, reading, calculus, English

...I have worked with students who have weak math skills and are currently struggling to keep up, students who are doing well and want to do advanced work, as well as students who fall somewhere in between. I have tutored in all junior high and high school math subject areas. I am comfortable with and have ample experience tutoring students of all ages.
5 Subjects: including trigonometry, geometry, algebra 2, prealgebra

...Due to traffic, I could only travel in the neighborhood (Dublin, Pleasanton, Livermore and San Ramon). Weekdays 11am-3pm remain mostly open. Thank you very much for your support. I am an experienced Math tutor (4 recent years, 50+ students), college instructor and software engineer.
15 Subjects: including trigonometry, calculus, GRE, algebra 1

...I want to be known as a great engineer that helped mankind in some small but meaningful way. My greatest passion is to build a spacecraft to take humanity to the stars. I love to tutor math and science to people in hopes of inspiring them to join me in engineering!
20 Subjects: including trigonometry, English, physics, reading
Derivation of Planck's Law

About this page

This page provides a brief derivation of Planck’s law from basic statistical principles. For more information, the reader is referred to the textbook by Rybicki and Lightman (Radiative Processes in Astrophysics, Wiley, 2004) http://books.google.com/books?id=LtdEjNABMlsC&dq=isbn:0471827592&ei=0KPFSOKvE4mIjwGQ2Oj3BA. The reader might also find interest in the historical development of early research in radiation physics as surveyed by Barr http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=AJPIAS000028000001000042000001&idtype=cvips&gifs=yes.

Photon gas in a box

First, consider a cubic box with each side of length L that is filled with electromagnetic (EM) radiation (a so-called ‘photon gas’) that forms standing waves whose allowable wavelengths are restricted by the size of the box. We will assume that the waves do not interact and therefore can be separated into the three orthogonal Cartesian directions such that the allowable wavelengths are: $\lambda_i = \frac{2L}{n_i}$ where $n_i$ is an integer greater than zero, and i represents one of the three Cartesian directions—x, y, or z. From quantum mechanics, the energy of a given mode (i.e., an allowable set $n_x, n_y, n_z$) can be expressed as $E(N) = \left( N + \frac{1}{2} \right) \frac{hc}{2L} \sqrt{n_x^2 + n_y^2 + n_z^2}$ where h is Planck’s constant ($6.626\times10^{-34}$ J s). The number N represents the number of such modes, or photons, of the given energy. Importantly, unlike electrons, an unlimited number of modes, or photons, of a given energy can exist; thus, photons are governed by Bose-Einstein statistics.

Statistical mechanics of the photon gas

To derive the energy density in this photon gas, we first need to know the relative probability with which a given energy state E(N) is occupied at a given temperature.
Here, we turn to statistical mechanics, which reveals this probability as $P_N = \frac{\exp (-\beta E(N))}{Z(\beta)}$ where β is the inverse of thermal energy, or $\beta = (k_B T)^{-1}$, and Z(β) is a factor, called the partition function, that normalizes the probability as (measuring energies relative to the constant zero-point term) $Z(\beta) = \sum_{N=0}^{\infty} \exp (-\beta N \varepsilon) = \frac{1}{1-\exp (-\beta \varepsilon)}$ where $\varepsilon = \frac{hc}{2L} \sqrt{n_x^2 + n_y^2 + n_z^2} = \frac{hc}{\lambda}$ is the energy of a single photon, and the latter equality derives from the relationship between the wavelength λ and the $n_i$ indices of the EM waves in the box. This wavelength is related to the speed of light c and frequency ν through the familiar relation $\frac{c}{\lambda} = \nu \Rightarrow \varepsilon = h\nu$ Again from statistical mechanics (and specifically Bose-Einstein statistics), the average energy within a given mode (which is related to the average number of photons N) can be expressed as $\langle E(N) \rangle = - \frac{d \ln Z}{d \beta} = \frac{\varepsilon}{\exp (\beta \varepsilon) - 1}$

Energy density of the photon gas

Now that we have an expression for the average energy of a given mode, we can sum (integrate) over all modes to find the total energy within the photon gas. The total energy can be expressed as an integral over all energies as $U = \int_0^\infty \langle E \rangle g(\varepsilon) d\varepsilon = \int_0^\infty \frac{\varepsilon}{\exp (\beta \varepsilon) - 1} g(\varepsilon) d\varepsilon$ where $g(\varepsilon)$ is an important function called the density of states. This function gives the number of allowed modes per unit energy within an interval between $\varepsilon$ and $\varepsilon + d \varepsilon$.
This function can be derived from the allowable wavelengths and ‘n’ indices as $g(\varepsilon) d \varepsilon = \frac{8\pi L^3}{h^3 c^3} \varepsilon^2 d \varepsilon$ Thus, the energy per unit volume can be expressed as $\frac{U}{L^3} = \int_0^\infty \frac{8\pi}{h^3 c^3}\frac{\varepsilon^3}{\exp (\beta \varepsilon) - 1} d\varepsilon$ where the integrand is the spectral energy density u. This function can be expressed in terms of energy, wavelength, or frequency through the relation $\varepsilon = hc/\lambda$ such that different forms of u are commonly used. However, they are each integrands in expressions that are used to calculate the overall energy density as $\frac{U}{L^3} = \int_0^\infty u(\varepsilon, T) d\varepsilon = \int_0^\infty u(\lambda, T) d\lambda = \int_0^\infty u(\nu, T) d\nu$ The corresponding expressions for spectral energy density follow: $u(\varepsilon, T) = \frac{8\pi}{h^3 c^3}\frac{\varepsilon^3}{\exp \left( \frac{\varepsilon}{k_B T} \right) - 1}$ $u(\lambda, T) = \frac{8\pi h c}{\lambda^5} \frac{1}{\exp \left( \frac{hc}{\lambda k_B T} \right) - 1}$ $u(\nu, T) = \frac{8\pi h \nu^3}{c^3} \frac{1}{\exp \left( \frac{h\nu}{k_B T} \right) - 1}$

Blackbody emission intensity

Now assume that a small hole is cut into the box. All radiation emanating from this hole will be moving at the speed of light c. Also, the radiation will be uniformly distributed throughout the hemisphere of solid angles (2π steradians), and one half of the energy will be oriented such that it can move outward through the hole. The spectral radiation intensity is defined as the rate of energy emitted per unit area per unit solid angle and per unit wavelength. The rate of energy emitted per area is simply the product of the energy density derived above and the speed of light (i.e., the distance swept by a ray per unit of time).
Therefore, the spectral intensity becomes $I(\lambda, T) = \frac{1}{2} \left[ \frac{u(\lambda, T) c}{2 \pi} \right] = \frac{2 h c^2}{\lambda^5} \frac{1}{\exp \left( \frac{hc}{\lambda k_B T} \right) - 1}$ Similarly, the spectral intensity (per unit frequency instead of wavelength) is $I(u, T) = \frac{1}{2} \left[ \frac{u(u, T) c}{2 \pi} \right]= \frac{2 h u^3}{c^2} \frac{1}{\exp \left( \frac{hu}{k_B T} \right) - 1}$
{"url":"http://nanohub.org/topics/DerivationofPlancksLaw","timestamp":"2014-04-25T02:53:56Z","content_type":null,"content_length":"29380","record_id":"<urn:uuid:17adb813-6780-40cc-adc4-b2b6808d6263>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Dutch National Flag Problem

I added code for this problem in an earlier post. In this article, I mainly add more pictures to help explain the analysis and thinking process behind this question. When beginners understand the whole process of this problem, they improve greatly at using pointers, indexes, and boundaries in an array.

We can divide an array into four parts — Negative, Zero, Unknown, and Positive — which makes it clear when to swap and when to advance a pointer. This is also a general method of analyzing a problem: fix a particular intermediate state, then reason about how to go further from that state. Assume that three pointers — front, i, and back — point to three positions. Which positions? As the following picture shows, they point to the positions immediately after or before the boundaries between Negative and Zero, Zero and Unknown, and Unknown and Positive. Pretty straightforward when looking at the picture.

When i (the index scanning the unknown part) finds a zero, we simply merge that zero into the zero part of the array and move i to the next element of the unknown part. When i finds a negative number, we swap it with the first zero, which front points to, so that both the negative and zero parts remain intact. After swapping, we move both front and i one step forward so that they keep pointing at their boundaries. When i finds a positive number, similarly, we swap it with the element back points to (array[back]) and then move back one step backward. Here we need to pay attention: since we don't know whether the element i points to after the swap is negative, positive, or zero, i must remain unchanged for the next round of checking.

Now we are almost done. When do we stop? When the length of the unknown part is zero — at that point i has moved past back, and we finish the while loop.
I like this question very much, since this method solves the problem in O(n) time. I can't think of any method that does better — the alternatives I come up with are O(n^2) or even O(n^3), and none of them beats this one.

Emma Y. Guo
1/18/2013 8:58 AM
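The three-pointer walk described above maps directly to code. Since the post's original code isn't reproduced here, the following is my own sketch of the same one-pass O(n) partition:

```python
def dutch_flag_partition(a):
    """One-pass, in-place partition into negatives | zeros | positives.

    Invariant (as in the write-up): a[:front] is negative, a[front:i] is
    zero, a[i:back+1] is unknown, and a[back+1:] is positive.
    """
    front, i, back = 0, 0, len(a) - 1
    while i <= back:  # stop once the unknown region is empty
        if a[i] == 0:
            i += 1                      # zero: just grow the zero block
        elif a[i] < 0:
            a[front], a[i] = a[i], a[front]
            front += 1                  # negative: swap with the first zero;
            i += 1                      # both boundaries advance
        else:
            a[i], a[back] = a[back], a[i]
            back -= 1                   # positive: swap to the back; the new
                                        # a[i] is still unknown, so keep i
    return a

print(dutch_flag_partition([3, -1, 0, 2, 0, -5, 1]))  # [-5, -1, 0, 0, 2, 1, 3]
```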
UV and irradiance

Dear forum members, I would like to ask for some help regarding an irradiance calculation. I am a high school student doing a project in biology in which I expose Drosophila m. (fruit flies) to UVA radiation, and I would like to mention in my paper the actual dose that was used. I know that...

UV dose (J/m^2) = irradiance (W/m^2) x exposure time (s)

W/m^2 refers to irradiance; this is the formula that I found for calculating it:

E = dΦ / dA_d

1. A_d is the area which is exposed to the radiation. What does the d in front of A_d stand for?

dΦ = E_0 cos θ dA_d

So, the formula above gives the radiant power Φ (W)...

2. E_0 is the irradiance of light incident upon the material; how do I calculate that?

From here I found the information: (on the 5th page) Original Philips UVA Type HP3147/A 220V ~ 50 Hz 75W

Thank you for any advice and help!
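On the dose arithmetic itself: once an irradiance in W/m² is known (measured at the flies' position, or taken from the lamp's data sheet), the quoted dose formula is just a product. (The lower-case d's in E = dΦ/dA are calculus differentials — power per unit area — not separate quantities.) A minimal sketch; the 10 W/m² and 10-minute exposure below are made-up illustration values, not this lamp's actual output:

```python
def uv_dose_j_per_m2(irradiance_w_per_m2, exposure_s):
    """Dose (J/m^2) = irradiance (W/m^2) * exposure time (s)."""
    return irradiance_w_per_m2 * exposure_s

# Hypothetical example: 10 W/m^2 of UVA for 10 minutes.
print(uv_dose_j_per_m2(10.0, 10 * 60))  # 6000.0
```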
Vlookup with no #N/A? I'm not sure if there is an easier way, but I use an 'IF' together with 'ISNA'. Basically, if the VLOOKUP returns #N/A, the true part of the IF statement returns "" (blank); otherwise it returns the value of the VLOOKUP itself. The pattern looks something like =IF(ISNA(VLOOKUP(A1,range,2,FALSE)),"",VLOOKUP(A1,range,2,FALSE)), with the lookup cell and range swapped for your own. I'm not doing very well posting advice on here but I hope this helps anyway!
13 September 2004 Vol. 9, No. 37 THE MATH FORUM INTERNET NEWS

Dr. Math Introduces Geometry | Dr. Math Book Reviews | Irene Etkowicz Eizen Grant - MET

DR. MATH INTRODUCES GEOMETRY
by the Math Forum @ Drexel
John Wiley & Sons

For almost a decade, kids have been writing to Dr. Math at the Math Forum Web site with questions about their math problems. And the math doctors at The Math Forum @ Drexel have been replying with clear explanations and helpful hints. "Dr. Math Introduces Geometry" is the third book in our series. The book includes dozens of letters from kids who've had trouble understanding the basic math concepts used in geometry, along with answers from trained volunteers drawn from a pool of college students, mathematicians, teachers, and professionals from the mathematical community. Topics covered include introduction to 2-D geometric figures, area and perimeter of 2-D geometric figures, circles, introduction to 3-D geometric figures, symmetry, and much more. For more information and a link to purchase the book from Amazon.com, please visit: We hope the books will find a place in classroom, library, and home collections. We invite you to display a link from your school's website to our book information page. Please follow the instructions given on this page:

DR. MATH BOOK REVIEWS
Read This! The MAA Online Book Review Column

Hema Gopalakrishnan, assistant professor of mathematics at Sacred Heart University, wrote a review of our first book in the Dr. Math series, "Dr. Math Gets You Ready for Algebra." Janie P. Bower, of Hattiesburg High School Freshman Academy (Hattiesburg, MS), wrote a review of "Dr. Math Gets You Ready for Algebra" on page 520 of the May, 2004, issue of the NCTM journal Mathematics Teaching in the Middle School. David Ebert, of Oregon High School (Oregon, WI), wrote a review of "Dr. Math Explains Algebra" published on page 142 of the September, 2004, issue of the NCTM journal Mathematics. We invite you to post a review on the Amazon.com site: Dr.
Math Gets You Ready for Algebra Dr. Math Explains Algebra Dr. Math Introduces Geometry The Irene Etkowicz Eizen Grant for Emerging Leaders in Elementary School Mathematics, with a maximum award of $6000, will be given to a classroom teacher working collaboratively with other K-5 teachers to improve mathematics instruction in a school or district. Applications must be postmarked by December 3, 2004. For additional information on grants and scholarships, view: - Mathematics Education Trust - Tips for Writing Successful Proposals CHECK OUT OUR WEB SITE: The Math Forum http://mathforum.org/ Ask Dr. Math http://mathforum.org/dr.math/ Problems of the Week http://mathforum.org/pow/ Mathematics Library http://mathforum.org/library/ Math Tools http://mathforum.org/mathtools/ Teacher2Teacher http://mathforum.org/t2t/ Discussion Groups http://mathforum.org/discussions/ Join the Math Forum http://mathforum.org/join.forum.html Send comments to the Math Forum Internet Newsletter editors Donations http://deptapp.drexel.edu/ia/GOL/giftsonline1_MF.asp Ask Dr. Math Books http://mathforum.org/pubs/dr.mathbooks.html _o \o_ __| \ / |__ o _ o/ \o/ __|- __/ \__/o \o | o/ o/__/ /\ /| | \ \ / \ / \ /o\ / \ / \ / | / \ / \
SIMD Within A Register (e.g., using MMX) Next Previous Contents SIMD (Single Instruction stream, Multiple Data stream) Within A Register (SWAR) isn't a new idea. Given a machine with k-bit registers, data paths, and function units, it has long been known that ordinary register operations can function as SIMD parallel operations on n, k/n-bit, integer field values. However, it is only with the recent push for multimedia that the 2x to 8x speedup offered by SWAR techniques has become a concern for mainstream computing. The 1997 versions of most microprocessors incorporate hardware support for SWAR: There are a few holes in the hardware support provided by the new microprocessors, quirks like only supporting some operations for some field sizes. It is important to remember, however, that you don't need any hardware support for many SWAR operations to be efficient. For example, bitwise operations are not affected by the logical partitioning of a register. Although every modern processor is capable of executing with at least some SWAR parallelism, the sad fact is that even the best SWAR-enhanced instruction sets do not support very general-purpose parallelism. In fact, many people have noticed that the performance difference between Pentium and "Pentium with MMX technology" is often due to things like the larger L1 cache that coincided with appearance of MMX. So, realistically, what is SWAR (or MMX) good for? • Integers only, the smaller the better. Two 32-bit values fit in a 64-bit MMX register, but so do eight one-byte characters or even an entire chess board worth of one-bit values. Note: there will be a floating-point version of MMX, although very little has been said about it at this writing. Cyrix has posted a set of slides, ftp://ftp.cyrix.com/developr/mpf97rm.pdf, that includes a few comments about MMFP. 
Apparently, MMFP will allow two 32-bit floating-point numbers to be packed into a 64-bit MMX register; combining this with two MMFP pipelines will yield four single-precision FLOPs per clock.

• SIMD or vector-style parallelism. The same operation is applied to all fields simultaneously. There are ways to nullify the effects on selected fields (i.e., equivalent to SIMD enable masking), but they complicate coding and hurt performance.

• Localized, regular (preferably packed), memory reference patterns. SWAR in general, and MMX in particular, are terrible at randomly-ordered accesses; gathering a vector x[y] (where y is an index array) is prohibitively expensive.

These are serious restrictions, but this type of parallelism occurs in many parallel algorithms - not just multimedia applications. For the right type of algorithm, SWAR is more effective than SMP or cluster parallelism... and it doesn't cost anything to use it. The basic concept of SWAR, SIMD Within A Register, is that operations on word-length registers can be used to speed-up computations by performing SIMD parallel operations on n k/n-bit field values. However, making use of SWAR technology can be awkward, and some SWAR operations are actually more expensive than the corresponding sequences of serial operations because they require additional instructions to enforce the field partitioning. To illustrate this point, let's consider a greatly simplified SWAR mechanism that manages four 8-bit fields within each 32-bit register. The values in two registers might be represented as:

          PE3       PE2       PE1       PE0
Reg0  | D 7:0 | C 7:0 | B 7:0 | A 7:0 |
Reg1  | H 7:0 | G 7:0 | F 7:0 | E 7:0 |

This simply indicates that each register is viewed as essentially a vector of four independent 8-bit integer values. Alternatively, think of A and E as values in Reg0 and Reg1 of processing element 0 (PE0), B and F as values in PE1's registers, and so forth.
The remainder of this document briefly reviews the basic classes of SIMD parallel operations on these integer vectors and how these functions can be implemented.

Polymorphic Operations

Some SWAR operations can be performed trivially using ordinary 32-bit integer operations, without concern for the fact that the operation is really intended to operate independently in parallel on these 8-bit fields. We call any such SWAR operation polymorphic, since the function is unaffected by the field types (sizes). Testing if any field is non-zero is polymorphic, as are all bitwise logic operations. For example, an ordinary bitwise-and operation (C's & operator) performs a bitwise and no matter what the field sizes are. A simple bitwise and of the above registers yields:

          PE3         PE2         PE1         PE0
Reg2  | D&H 7:0 | C&G 7:0 | B&F 7:0 | A&E 7:0 |

Because the bitwise and operation always has the value of result bit k affected only by the values of the operand bit k values, all field sizes are supported using the same single instruction.

Partitioned Operations

Unfortunately, lots of important SWAR operations are not polymorphic. Arithmetic operations such as add, subtract, multiply, and divide are all subject to carry/borrow interactions between fields. We call such SWAR operations partitioned, because each such operation must effectively partition the operands and result to prevent interactions between fields. However, there are actually three different methods that can be used to achieve this effect.

Partitioned Instructions

Perhaps the most obvious approach to implementing partitioned operations is to provide hardware support for "partitioned parallel instructions" that cut the carry/borrow logic between fields. This approach can yield the highest performance, but it requires a change to the processor's instruction set and generally places many restrictions on field size (e.g., 8-bit fields might be supported, but not 12-bit fields).
The AMD/Cyrix/Intel MMX, Digital MAX, HP MAX, and Sun VIS all implement restricted versions of partitioned instructions. Unfortunately, these different instruction set extensions have significantly different restrictions, making algorithms somewhat non-portable between them. For example, consider the following sampling of partitioned operations:

| Instruction         | AMD/Cyrix/Intel MMX | DEC MAX | HP MAX | Sun VIS |
| Absolute Difference |                     | 8       |        | 8       |
| Merge Maximum       |                     | 8, 16   |        |         |
| Compare             | 8, 16, 32           |         |        | 16, 32  |
| Multiply            | 16                  |         |        | 8x16    |
| Add                 | 8, 16, 32           |         | 16     | 16, 32  |

In the table, the numbers indicate the field sizes, in bits, for which each operation is supported. Even though the table omits many instructions including all the more exotic ones, it is clear that there are many differences. The direct result is that high-level languages (HLLs) really are not very effective as programming models, and portability is generally poor.

Unpartitioned Operations With Correction Code

Implementing partitioned operations using partitioned instructions can certainly be efficient, but what do you do if the partitioned operation you need is not supported by the hardware? The answer is that you use a series of ordinary instructions to perform the operation with carry/borrow across fields, and then correct for the undesired field interactions. This is a purely software approach, and the corrections do introduce overhead, but it works with fully general field partitioning. This approach is also fully general in that it can be used either to fill gaps in the hardware support for partitioned instructions, or it can be used to provide full functionality for target machines that have no hardware support at all. In fact, by expressing the code sequences in a language like C, this approach allows SWAR programs to be fully portable. The question immediately arises: precisely how inefficient is it to simulate SWAR partitioned operations using unpartitioned operations with correction code?
Well, that is certainly the $64k question... but many operations are not as difficult as one might expect. Consider implementing a four-element 8-bit integer vector add of two source vectors, x+y, using ordinary 32-bit operations. An ordinary 32-bit add might actually yield the correct result, but not if any 8-bit field carries into the next field. Thus, our goal is simply to ensure that such a carry does not occur. Because adding two k-bit fields generates an at most k+1 bit result, we can ensure that no carry occurs by simply "masking out" the most significant bit of each field. This is done by bitwise anding each operand with 0x7f7f7f7f and then performing an ordinary 32-bit add.

t = ((x & 0x7f7f7f7f) + (y & 0x7f7f7f7f));

That result is correct... except for the most significant bit within each field. Computing the correct value for each field is simply a matter of doing two 1-bit partitioned adds of the most significant bits from x and y to the 7-bit carry result which was computed for t. Fortunately, a 1-bit partitioned add is implemented by an ordinary exclusive or operation. Thus, the result is

(t ^ ((x ^ y) & 0x80808080))

Ok, well, maybe that isn't so simple. After all, it is six operations to do just four adds. However, notice that the number of operations is not a function of how many fields there are... so, with more fields, we get speedup. In fact, we may get speedup anyway simply because the fields were loaded and stored in a single (integer vector) operation, register availability may be improved, and there are fewer dynamic code scheduling dependencies (because partial word references are avoided).

Controlling Field Values

While the other two approaches to partitioned operation implementation both center on getting the maximum space utilization for the registers, it can be computationally more efficient to instead control the field values so that inter-field carry/borrow events should never occur.
For example, if we know that all the field values being added are such that no field overflow will occur, a partitioned add operation can be implemented using an ordinary add instruction; in fact, given this constraint, an ordinary add instruction appears polymorphic, and is usable for any field sizes without correction code. The question thus becomes how to ensure that field values will not cause carry/borrow events. One way to ensure this property is to implement partitioned instructions that can restrict the range of field values. The Digital MAX vector minimum and maximum instructions can be viewed as hardware support for clipping field values to avoid inter-field carry/borrow. However, suppose that we do not have partitioned instructions that can efficiently restrict the range of field values... is there a sufficient condition that can be cheaply imposed to ensure carry/borrow events will not interfere with adjacent fields? The answer lies in analysis of the arithmetic properties. Adding two k-bit numbers generates a result with at most k+1 bits; thus, a field of k+1 bits can safely contain such an operation despite using ordinary instructions. Thus, suppose that the 8-bit fields in our earlier example are now 7-bit fields with 1-bit "carry/borrow spacers":

            PE3            PE2            PE1            PE0
Reg0 | D' | D 6:0 | C' | C 6:0 | B' | B 6:0 | A' | A 6:0 |

A vector of 7-bit adds is performed as follows. Let us assume that, prior to the start of any partitioned operation, all the carry spacer bits (A', B', C', and D') have the value 0. By simply executing an ordinary add operation, all the fields obtain the correct 7-bit values; however, some spacer bit values might now be 1. We can correct this by just one more conventional operation, masking-out the spacer bits. Our 7-bit integer vector add, x+y, is thus:

((x + y) & 0x7f7f7f7f)

This is just two instructions for four adds, clearly yielding good speedup.
The sharp reader may have noticed that setting the spacer bits to 0 does not work for subtract operations. The correction is, however, remarkably simple. To compute x-y, we simply ensure the initial condition that the spacers in x are all 1, while the spacers in y are all 0. In the worst case, we would thus get:

(((x | 0x80808080) - y) & 0x7f7f7f7f)

However, the additional bitwise or operation can often be optimized out by ensuring that the operation generating the value for x used | 0x80808080 rather than & 0x7f7f7f7f as the last step. Which method should be used for SWAR partitioned operations? The answer is simply "whichever yields the best speedup." Interestingly, the ideal method to use may be different for different field sizes within the same program running on the same machine.

Communication & Type Conversion Operations

Although some parallel computations, including many operations on image pixels, have the property that the ith value in a vector is a function only of values that appear in the ith position of the operand vectors, this is generally not the case. For example, even pixel operations such as smoothing require values from adjacent pixels as operands, and transformations like FFTs require more complex (less localized) communication patterns. It is not difficult to efficiently implement 1-dimensional nearest neighbor communication for SWAR using unpartitioned shift operations. For example, to move a value from PEi to PE(i+1), a simple shift operation suffices. If the fields are 8-bits in length, we would use:

(x << 8)

Still, it isn't always quite that simple. For example, to move a value from PEi to PE(i-1), a simple shift operation might suffice... but the C language does not specify if shifts right preserve the sign bit, and some machines only provide signed shift right.
Thus, in the general case, we must explicitly zero the potentially replicated sign bits:

((x >> 8) & 0x00ffffff)

Adding "wrap-around connections" is also reasonably efficient using unpartitioned shifts. For example, to move a value from PEi to PE(i+1) with wraparound:

((x << 8) | ((x >> 24) & 0x000000ff))

The real problem comes when more general communication patterns must be implemented. Only the HP MAX instruction set supports arbitrary rearrangement of fields with a single instruction, which is called Permute. This Permute instruction is really misnamed; not only can it perform an arbitrary permutation of the fields, but it also allows repetition. In short, it implements an arbitrary x[y] operation. Unfortunately, x[y] is very difficult to implement without such an instruction. The code sequence is generally both long and inefficient; in fact, it is sequential code. This is very disappointing. The relatively high speed of x[y] operations in the MasPar MP1/MP2 and Thinking Machines CM1/CM2/CM200 SIMD supercomputers was one of the key reasons these machines performed well. However, x[y] has always been slower than nearest neighbor communication, even on those supercomputers, so many algorithms have been designed to minimize the need for x[y] operations. In short, without hardware support, it is probably best to develop SWAR algorithms as though x[y] wasn't legal... or at least isn't cheap.

Recurrence Operations (Reductions, Scans, etc.)

A recurrence is a computation in which there is an apparently sequential relationship between values being computed. However, if these recurrences involve associative operations, it may be possible to recode the computation using a tree-structured parallel algorithm. The most common type of parallelizable recurrence is probably the class known as associative reductions.
For example, to compute the sum of a vector's values, one commonly writes purely sequential C code like:

t = 0;
for (i=0; i<MAX; ++i) t += x[i];

However, the order of the additions is rarely important. Floating point and saturation math can yield different answers if the order of additions is changed, but ordinary wrap-around integer additions will yield the same results independent of addition order. Thus, we can re-write this sequence into a tree-structured parallel summation in which we first add pairs of values, then pairs of those partial sums, and so forth, until a single final sum results. For a vector of four 8-bit values, just two addition steps are needed; the first step does two 8-bit adds, yielding two 16-bit result fields (each containing a 9-bit result):

t = ((x & 0x00ff00ff) + ((x >> 8) & 0x00ff00ff));

The second step adds these two 9-bit values in 16-bit fields to produce a single 10-bit result:

((t + (t >> 16)) & 0x000003ff)

Actually, the second step performs two 16-bit field adds... but the top 16-bit add is meaningless, which is why the result is masked to a single 10-bit result value. Scans, also known as "parallel prefix" operations, are somewhat harder to implement efficiently. This is because, unlike reductions, scans produce partitioned results. For this reason, scans can be implemented using a fairly obvious sequence of partitioned operations.

For Linux, IA32 processors are our primary concern. The good news is that AMD, Cyrix, and Intel all implement the same MMX instructions. However, MMX performance varies; for example, the K6 has only one MMX pipeline - the Pentium with MMX has two. The only really bad news is that Intel is still running those stupid MMX commercials.... ;-) There are really three approaches to using MMX for SWAR:

1. Use routines from an MMX library.
In particular, Intel has developed several "performance libraries," http://developer.intel.com/drg/tools/ad.htm, that offer a variety of hand-optimized routines for common multimedia tasks. With a little effort, many non-multimedia algorithms can be reworked to enable some of the most compute-intensive portions to be implemented using one or more of these library routines. These libraries are not currently available for Linux, but could be ported. 2. Use MMX instructions directly. This is somewhat complicated by two facts. The first problem is that MMX might not be available on the processor, so an alternative implementation must also be provided. The second problem is that the IA32 assembler generally used under Linux does not currently recognize MMX instructions. 3. Use a high-level language or module compiler that can directly generate appropriate MMX instructions. Such tools are currently under development, but none is yet fully functional under Linux. For example, at Purdue University ( http://dynamo.ecn.purdue.edu/~hankd/SWAR/) we are currently developing a compiler that will take functions written in an explicitly parallel C dialect and will generate SWAR modules that are callable as C functions, yet make use of whatever SWAR support is available, including MMX. The first prototype module compilers were built in Fall 1996, however, bringing this technology to a usable state is taking much longer than was originally expected. In summary, MMX SWAR is still awkward to use. However, with a little extra effort, the second approach given above can be used now. Here are the basics: 1. You cannot use MMX if your processor does not support it. The following GCC code can be used to test if MMX is supported on your processor. It returns 0 if not, non-zero if it is supported. 
inline extern
int mmx_init(void)
{
	int mmx_available;

	__asm__ __volatile__ (
		/* Get CPU version information */
		"movl $1, %%eax\n\t"
		"cpuid\n\t"
		"andl $0x800000, %%edx\n\t"
		"movl %%edx, %0"
		: "=q" (mmx_available)
		: /* no input */
	);
	return mmx_available;
}

2. An MMX register essentially holds one of what GCC would call an unsigned long long. Thus, memory-based variables of this type become the communication mechanism between your MMX modules and the C programs that call them. Alternatively, you can declare your MMX data as any 64-bit aligned data structure (it is convenient to ensure 64-bit alignment by declaring your data type as a union with an unsigned long long field).

3. If MMX is available, you can write your MMX code using the .byte assembler directive to encode each instruction. This is painful stuff to do by hand, but not difficult for a compiler to generate. For example, the MMX instruction PADDB MM0,MM1 could be encoded as the GCC in-line assembly code:

__asm__ __volatile__ (".byte 0x0f, 0xfc, 0xc1\n\t");

Remember that MMX uses some of the same hardware that is used for floating point operations, so code intermixed with MMX code must not invoke any floating point operations. The floating point stack also should be empty before executing any MMX code; the floating point stack is normally empty at the beginning of a C function that does not use floating point.

4. Exit your MMX code by executing the EMMS instruction, which can be encoded as:

__asm__ __volatile__ (".byte 0x0f, 0x77\n\t");

If the above looks very awkward and crude, it is. However, MMX is still quite young.... future versions of this document will offer better ways to program MMX SWAR.
Exp((-i*pi)/4) = ?

Not really homework, but part of a homework problem I am working on. I know that [tex]e^{i\pi}+1=0[/tex] (Euler's Identity) and also that [tex]e^{i \pi} = e^{-i \pi}[/tex], but I'm having trouble understanding [tex]e^{\frac{-i \pi}{4}}[/tex]. In the complex plane this is a clockwise rotation around the origin of [tex]\frac{\pi}{2}[/tex] radians.

No, the rotation is by [tex]\frac{\pi}{4}[/tex] radians. Was the 2 in the denominator a typo?

But I think it should reduce to some real constant which I am having trouble finding.

Why would you think that? Just because [tex]e^{-i \pi}[/tex] is a real constant doesn't mean that the other one is also a real constant.

In Mathematica, I get two different answers... [tex]N[e^{\frac{-i \pi}{4}}] = 0.707107 - 0.707107 i[/tex] This implies that [tex]e^{\frac{-i \pi}{4}} = \frac{1}{\sqrt{2}}(1-i)[/tex] which seems wrong to me.

These are two representations of the same complex number. The first is an approximation and the second is exact.

The other answer given is: Simplify[tex][e^{\frac{-i \pi}{4}}] = -(-1)^{\frac{3}{4}}[/tex] I am not familiar with Mathematica, so I don't know the difference between the N command and the Simplify command. But this reduces to 1, which I believe is probably the correct answer.

No it doesn't, and 1 is not the correct answer. Let's look at [tex]-(-1)^{\frac{3}{4}}[/tex] a little more closely. Before the final sign change, you have (-1) to the 3/4 power. That is the same as -1 cubed (still -1), which we take the 4th root of. This is not a real number, since there is no real number that when squared, and then squared again, yields a negative number. The final step is to change the sign of this (nonreal) number, which still doesn't give us 1, as you claimed.

Is the first result just spurious rounding? Can I just write the following identity as true? [tex]e^{\frac{-i \pi}{4}} = 1[/tex]

Absolutely not.
Function problem (recursion)

05-27-2010 #1 Registered User Join Date May 2010

I want to calculate the formula: 1+2+3+4+............+n = n(n+1)/2. Will anyone help me find the error in this program? It just shows a wrong result........

// Function (RECURSION)
# include<stdio.h>

int funct1(int n);

void main()
{
    int n,t;
    printf("Enter the value until which you want to sum up\n");
    scanf("%n", &n);
    t= funct1(n);
    printf(" summation= %d", t );
}

int funct1(int n)
{
    if (n>0)
        return (n+ funct1(n-1));
}

Last edited by chaklader; 05-27-2010 at 11:19 PM.

According to the scanf docs there's no such specifier as "%n". There are different ones to use depending on exactly what you need; the documentation describes them all. scanf - C++ Reference

There's no 'bottom case' (can't remember what it's called) for your recursive function. You need to specify something to return for when it does get to zero.

Everything is OK other than the '%n' in the scanf. You have to use '%d':

# include<stdio.h>

int funct1(int n);

int main()
{
    int n,t;
    printf("Enter the value until which you want to sum up\n");
    scanf("%d", &n);
    t= funct1(n);
    printf(" summation= %d", t );
}

int funct1(int n)
{
    if (n>0)
        return (n+ funct1(n-1));
}

You forgot the base case.

int funct1(int n)
{
    if(n > 0)
        return n + funct1(n-1);
    return 0;
}

Btw, do you enable compiler warnings?

Last edited by Bayint Naung; 05-27-2010 at 11:52 PM.

DeadPlanet, karthigayan and Bayint Naung @ it's working now. Thank you all.
How do I help my child with basic math facts? | Ask Mrs. Brooke - Kirkland Reporter

Hi Mrs. Brooke, My daughter is a second grader and I am wondering about how to help her with basic math facts. In her homework, I can see they are working on a variety of strategies (such as making 10) but she continues to rely on her fingers. She does count on from the larger number. I am wondering if using fingers is okay given her age? In addition, what strategies do you suggest for helping make the facts stick? Katie Vice Trierweiler

Dear Katie, This is a great question, and as you know, a very important one, since knowing basic math facts will matter so much as your daughter is met with more difficult math problems in school and in everyday life. Typically the developmental continuum for solving basic math problems and improving computational fluency moves from concrete to abstract: from using fingers, to objects, to pictures, to symbols, and then to memorization. Your daughter's current strategies are very developmentally appropriate, especially given that she's already counting on from the larger number. By the end of second grade, students should be fluent in their facts to 20, which typically requires that students are not relying heavily on their fingers as tools. I am encouraged to hear that your child's teacher is teaching a variety of strategies for basic math facts rather than just relying on flash cards or timed tests. Although those methods might be effective for maintaining knowledge or improving computational fluency, they are not the most effective methods for building a student's understanding of math facts. Second grade teacher Andrea Rulon says, "The most important thing that I stress is that for most kids, simply memorizing facts with flash cards is not going to help them really learn or understand the facts." Below are some strategies for teaching mastery of basic facts.
These are what many second grade teachers I know, including Mrs. Rulon and the second grade team at Ben Franklin Elementary, teach students and recommend to parents to help with this basic fact understanding.

Adding zero
When you add zero you add nothing. Make sure this understanding is in place.

Adding one (counting up)
Adding one means saying the larger number, then jumping up one number, or counting up one number. This happens every time you add one; it never changes. Never recount the larger number, just say it and count up one. Examples: 6+1: say 6, then 7. 44+1: say 44, then 45.

Adding two (count up two)
Adding two means saying the larger number, then jumping up or counting up twice. Examples: 9+2: say 9, then 10, then 11. 45+2: say 45, then 46, then 47.

Commutative property
You also have to teach or review the commutative property. The answer will be the same regardless of the order you add the two numbers: 9+2=2+9. Order doesn't matter.

Adding 10
Adding 10 means jumping up 10 (think of a hundreds chart). The ones digit stays the same but the tens digit increases by one. Examples: 5+10=15, 10+7=17. For older students you can relate this to higher numbers: 23+10=33, 48+10=58.

Adding 9
Adding 9 makes sense if students understand adding 10. It sounds more difficult than it actually is. Remind students of the jump of 10: 5+10=15. A student would say (in their head) "5+10=15." The 5 and 15 are naming the same number of ones. With the nines, a student must count down one in the ones. A student would say "5+9=14." Work with lots of examples until the idea is understood: 5+10=15, 5+9=14; 7+10=17, 7+9=16.

Adding 9 another way
It should be pointed out to students that when adding nine, the ones digit in the sum is always one less than the number added to 9. For example, 7+9=16; the 6 is one less than 7. Another example: 5+9=14.

Adding 8
This works exactly the same, only a child must think 2 less.
Using the examples above, students would say: 5+10=15, so 5+8=13 (2 less); 7+10=17, so 7+8=15 (2 less).

Double numbers
To add double numbers there are a couple of strategies that might help students. When you add a double you are counting by that number once. For example, 4+4: think "4, 8," counting by fours. Practice skip counting by each number in turn: 2, 4; 3, 6; 4, 8; and so on. This gets harder with the higher numbers, but skip counting is an important skill for students to have. Doubles occur everywhere in life. For example: an egg carton is 6+6, two hands are 5+5, a 16-pack of crayons is 8+8, two weeks is 7+7 days, and the legs on an insect (3 on each side) are 3+3.

Near doubles
To use the near doubles strategy a student first has to master the doubles. Then, if the double is known, they use that and count up or down one to find the near double. Example: 4+4=8, so 5+4=9 (count up one). Or: 4+4=8, so 4+3=7 (count down one).

Doubles plus two
This method works when the addends differ by two. When this occurs it is possible to subtract 1 from one addend and add 1 to the other, which results in a doubles fact that has already been memorized: 7+5 becomes 6+6.

Adding 5
Adding five has a strategy that is helpful but not completely effective, as it is a bit tricky; you can decide whether it is helpful. To add fives, look for the five in both numbers to make a 10, then count on the extra digits. Examples: 5+7=5+5+2=12, 5+8=5+5+3=13. Students who can see the five inside the other number should have no difficulty; students who can't visualize numbers will find this hard. Most students can be taught to do this with some extra work.

Also, as a teacher and a parent, I remind myself whenever a child in my class, or even my own child, is not catching on as quickly as I hoped: maybe I need to stop and think about this individual child and how that child learns best.
Howard Gardner, a Harvard researcher, believes that there are eight intelligences, or ways kids learn best: musical, spatial, logical-mathematical, linguistic, bodily, intrapersonal, interpersonal and naturalist. So, for instance, maybe a math fact song would work if your child is musical. Does your child love the outdoors and is more of a naturalist? Go on a walk, collect sticks, and use the math strategies with the rocks by the river. If she is artistic, get out the paint and have her create a number story and visually see the connection. Have her draw pictures and eventually move to symbols like tally marks, which are faster to draw and count. Is your child interpersonal and craving social interaction with another? Play games! It doesn't have to be anything fancy or expensive; there are many math games that teach the basic math facts and require only a typical deck of cards or regular dice. Here are just a few I used with my second grade students:

Mental Math with Playing Cards (Number Sense)
Predetermine the "rule" of the game, such as "Add 5" or "Double it." Prepare a deck of cards by removing all the face cards and jokers. Then have the child turn over one card at a time, apply the "rule," and give the answer.

Find Ten (STRAND: Number Sense-Addition: Finding Tens)
This is a math game similar to Concentration. In this game, children try to make a 10 by turning over combinations of cards that total 10. Variation: use jokers or face cards as wild cards.

Other games include playing the card game War with two cards instead of one, Yahtzee, and rolling two dice and adding them together. Games are always an effective tool to teach, maintain, reinforce, and, most of all, keep learning fun!

Although research has failed to identify any difference between girls' and boys' math skills, studies have found that girls often receive less encouragement in math than boys.
They are also affected more than boys by female role models and teachers in their lives who display anxiety about math. The fact that you are willing as a mother to reach out and learn strategies in order to help your daughter master her basic math facts may have more of an impact than you know. Your eagerness and positive approach toward math could ultimately alleviate years of anxiety and produce a child who loves math, enabling her to enter and find success in fields of technology, science, engineering, and math in the future if she so chooses. As your child's first and most important teacher, with your continued support and encouragement in math, your child will stop counting on her fingers and instead be counting endless opportunities.

Joy Brooke is the first and most important teacher of her 5-year-old son and 3-year-old daughter. She resides in downtown Kirkland with her husband and two children. Brooke is a National Board Certified teacher in Literacy: Reading-Language Arts/Early and Middle Childhood, and holds a B.A. in Educational Studies and an M.A. in Educational Policy and Management from the University of Oregon. The opinions provided in this column do not reflect those of the LWSD or any other organization with which she is affiliated.
SOS: how to calculate the mean and variance of the Beta-Pascal distribution?

How can we calculate the mean and variance of the Beta-Pascal distribution? The Beta-Pascal distribution is:

I've never seen this distribution before; I just came across it.
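One route, assuming the "Beta-Pascal" here is the usual beta negative binomial — a Pascal (negative binomial) count $X$ whose success probability $p$ is itself $\mathrm{Beta}(\alpha,\beta)$ distributed — is to condition on $p$. (Check this against your text's parameterization; Pascal/negative binomial conventions vary, so treat the following as a sketch for that one convention, with $X \mid p \sim \mathrm{NB}(r,p)$ counting failures, so $\mathbb{E}[X \mid p] = r(1-p)/p$.) By the law of total expectation, using $\mathbb{E}[1/p] = (\alpha+\beta-1)/(\alpha-1)$ for the Beta distribution:

```latex
\mathbb{E}[X] \;=\; \mathbb{E}\bigl[\mathbb{E}[X \mid p]\bigr]
\;=\; r\,\mathbb{E}\!\left[\frac{1-p}{p}\right]
\;=\; r\left(\frac{\alpha+\beta-1}{\alpha-1}-1\right)
\;=\; \frac{r\beta}{\alpha-1}, \qquad \alpha > 1,
```

and a similar but longer computation with the law of total variance (it needs $\mathbb{E}[1/p^{2}]$) gives, for $\alpha > 2$,

```latex
\operatorname{Var}(X) \;=\; \frac{r\beta\,(r+\alpha-1)(\beta+\alpha-1)}{(\alpha-2)(\alpha-1)^{2}}.
```

Compare these against the beta negative binomial entry in a standard reference to match the symbols to your book's formula.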
Alternative Parametric Boundary Reconstruction Method for Biomedical Imaging

Journal of Biomedicine and Biotechnology, Volume 2008 (2008), Article ID 623475, 7 pages

Research Article

^1Department of Mathematics, College of Science and Technology, The University of Southern Mississippi, Hattiesburg, MS 39406-0001, USA
^2QinetiQ PLC, Malvern, Worcestershire WR14 3PS, UK

Received 2 September 2007; Accepted 15 January 2008

Academic Editor: Halima Bensmail

Copyright © 2008 Joseph Kolibal and Daniel Howard. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Determining the outline or boundary contour of a two-dimensional object, or the surface of a three-dimensional object, poses difficulties, particularly when there is substantial measurement noise or uncertainty. By adapting the mathematical approach of stochastic function recovery to this task, it is possible to obtain usable estimates for these boundaries, even in the presence of large amounts of noise. The technique is applied to parametric boundary data and has potential applications in biomedical imaging. It should be considered as one of several techniques to improve the visualization of images.

1. Introduction

Three-dimensional (3D) computer reconstruction of a target volume or of a surface is an important activity in modern biomedical imaging. Accurate anatomical reconstruction in trauma, or for use in image-guided intervention, relies on mathematical imaging technology; this paper develops the mathematical technique of stochastic function recovery [1] and illustrates its use for noisy boundary reconstruction.
This is an alternative approach to the standard polynomial-based methods, which we see as an add-on or complement to other techniques in use or being developed to improve the reconstruction of noisy boundary data and provide enhanced biomedical visualization. The ability to distinguish features related to boundaries is intrinsic to visualization technology. For example, in MRI imaging, a range of specialized methods have been developed for extracting information from signals so as to reconstruct images representing internal body structures [2]. Boundary recovery techniques apply to complex surgical procedures, as with electroanatomical mapping, which tracks the position of catheters inside the body with sparse signals recorded from electrodes at the tip of the catheter. The resulting surface maps must integrate real-time measurements with preoperative MR or CT images, and account for mapping data errors in registration and error due to patient movement [3]. In general, when signals are affected by noise, the noise must be effectively removed in order to improve the visualization; compared with other medical imaging modalities, ultrasound images suffer from speckle noise that often makes for weak or incomplete boundaries [4]. Our approach is to use stochastic convolution-deconvolution operators [1, 5–7], which have useful statistical properties, to smooth noisy surface data in a manner that does not obliterate detail and that effectively removes Gaussian noise. The motivation for the approach is that, intrinsically, stochastic interpolation using probabilistic kernels for the generating function of the row space of the linear operator performs well at removing noise when used to approximate data. However, the difficulty in applying these methods directly is that they bias the data toward a mean of zero. In approximating one-dimensional data, this is not usually a difficulty.
However, in approximating multidimensional data, this can cause an apparent shift in the approximant when working in parametric coordinates. This is because the Gaussian kernels smooth positive values toward their mean value, thus potentially shifting the coordinates nearer the origin. When we approximate in one dimension, the data values are shifted toward the mean; however, the coordinates are not touched. If we are using parametric data, the coordinates themselves are the data values, and this means that the data can be shifted in x or y space. The less data there is, or the greater the smoothing, the larger this shift will be. For example, when data is taken from a circle centered on the origin, the approximating curve is a circular curve centered on the origin but of smaller radius, and the greater the smoothing applied in the approximation (if the data is very noisy), the smaller the area circumscribed by the approximating curve. As the number of sample points is increased, the approximation improves. However, for coarse surface data that is only smoothly varying, this can cause difficulties. One approach is to make use of stochastic interpolation: create a dense noisy data set from the sparse noisy data by interpolation, then approximate this fine data set to recover the smoothed surface curve, thereby mitigating the shifting of the mean. While workable, this approach has several disadvantages, most notably that it requires a more costly interpolation step. It also requires the application of the technique more than once, and in the second or subsequent applications the approximation must be done on a fine data set, meaning that many more points require approximation, again incurring a larger computational cost. The solution is to make use of the convolution-deconvolution properties of stochastic interpolation, combining the densification step with the approximation step. This would still be expensive.
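The circle-shrinkage effect described above is easy to demonstrate numerically. The following Python/NumPy sketch is an illustration under stated assumptions: the plain Gaussian kernel, the bandwidth `sigma`, and the 12-point circle are ours, standing in for the paper's Bernstein-function operator. Each smoothed point is a convex combination of points on the circle, so it necessarily lands strictly inside it.

```python
import numpy as np

def row_stochastic_gaussian(t_out, t_in, sigma):
    """Gaussian kernel weights, normalized so each row sums to 1 (row stochastic)."""
    W = np.exp(-((t_out[:, None] - t_in[None, :]) / sigma) ** 2)
    return W / W.sum(axis=1, keepdims=True)

n = 12
t = np.linspace(0.0, 1.0, n, endpoint=False)         # parametric coordinate
x = 5.0 * np.cos(2.0 * np.pi * t)                    # circle of radius 5
y = 5.0 * np.sin(2.0 * np.pi * t)

A = row_stochastic_gaussian(t, t, sigma=0.15)        # approximating (not interpolating)
xs, ys = A @ x, A @ y                                # smooth each coordinate separately

r = np.hypot(xs, ys)
print(r.mean())   # noticeably less than 5: the smoothed curve contracts toward the origin
```

Because the parameterization is not treated as periodic here, the first and last smoothed points also drift apart slightly, which mirrors the endpoint mismatch the paper discusses later for its Figure 1(a).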
However, we introduce an approximate means for doing the interpolation that interpolates for smooth data but approximates for noisy data, thus avoiding the costly need to construct the inverse operator needed for interpolation.

2. Development

Consider the task of sampling a known function at n points so as to determine its value at intermediate points. The stochastic interpolant [5] to the data is given by (1), where the vector being operated on is the data vector and the operator is built from a row-stochastic matrix whose coefficients consist of the values of the generating functions. Choosing the generator of the row space to be the Bernstein functions [1] (so named because this form can be derived from the Bernstein polynomials), we have (2), evaluated on the partition of the interval and normalized so as to yield a stochastic matrix. Evaluating (2) at the data nodes generates the entries of the system matrix, and evaluating it at any set of output nodes generates the coefficients of the evaluation matrix; that is, the coefficients of the evaluation matrix are constructed in the same manner as those of the system matrix, except that the nodes at which it is evaluated may differ from the values at which the data are given. While any probability density function (pdf) can be used, appropriately replacing the mean and variance of the Gaussian in (2), a pdf based on the normal distribution is consistent with the problem of filtering Gaussian noise. In stochastic interpolation, with the coefficients generated by (2), we can interpret the first stage as the discrete deconvolution of the data, yielding the preimage; this preimage is then convolved to yield a vector of values that interpolates the data at the output coordinates, namely those that were used to generate the coefficients of the evaluation matrix. It is for this reason that we have elected to represent that matrix using another symbol, since it is desirable to emphasize its role in convolution. With uniformly spaced nodes and a constant kernel width, the coefficient structure is that of a diagonal matrix times a symmetric Toeplitz matrix.
Inversion or solution of these matrices can be accomplished efficiently [8]; however, in the cases of interest to us, this is not necessary. It is possible to do better using an approximate inverse in which the row space is generated directly. While evaluating the approximate inverse has the same asymptotic cost, it is significantly faster than the Toeplitz matrix inversion. The approximate inverse is an approximation precisely because it does not invert the convolution exactly; that is, in applying stochastic interpolation to the data, there is an error, given by (3). Thus the interpolant to the data can be expressed using successive corrections to the errors, as in (4). Substituting recursively gives (5), and truncating the sum gives a working formula in which the residual remains small even for larger values of the smoothing parameter, so that the method nearly interpolates. Truncating the series at a fixed number of corrections and applying the formula at the output nodes gives (6). Provided that the smoothing parameter in (2) is small, the error in this construction is small; if larger values are used, then greater smoothing is applied to the data, and the use of (6) becomes necessary once the parameter exceeds 0.05. However, it will become apparent that it is precisely because of this greater smoothing that it is unnecessary to apply the corrections in (6), and thus the direct computation is found to be convenient and efficient, requiring only a single matrix multiply. In working with stochastic data recovery, using the Bernstein functions mollifies the data vector, providing an approximation vector of output values from the initial data vector. Consider the evaluation in which the generators of the row spaces of both matrices are given by (2). This interpolates the data provided that the variance of the Gaussian pdf used to generate the deconvolution step is the same as the variance used to generate the convolution step. If instead the convolution step uses the larger variance, then the preimage will be oversmoothed when the convolution is applied, and the result will be approximation.
Similarly, if the convolution step uses the smaller variance, then the preimage will not be smoothed sufficiently, and the data will be roughened, or, more appropriately, deconvolved. With statistical errors present in multidimensional data, interpolation in parametric coordinates may yield an extremely complex curve, and the errors may cause the curve to wiggle excessively, often crossing over on itself. Clearly, some form of smoothing is necessary. However, as noted in Section 1, simply approximating the data so as to smooth these errors may introduce translation errors in the approximating surface representation, particularly when the number of data points is small or the specified smoothing is large. Since the approximation is convergent, a simple work-around can be achieved by densification of the data by interpolation, as this will minimize the shift on subsequent smoothing. This leads to a computationally efficient approach to surface recovery that avoids translation errors while smoothing the noise, in which the number of output points representing the boundary is significantly greater than the number of input data points. To demonstrate this form, note that the intended construction is to first densify the data onto a much larger set of nodes, and then apply smoothing to this densified data. In effect, this interpolates the data and then smooths it by applying the approximation to the densified data. Thus a two-step algorithm can be described: densify by near-interpolation, then smooth by approximation. This algorithm is equivalent to a single combined operator; that is, there exist choices of the two smoothing parameters such that applying the combined operator to the data is the same, or nearly the same, as applying the two steps in succession. The use of the approximate inverse for a wide range of parameter values introduces some additional smoothing, allowing for less smoothing to be used on the convolution step, that is, when applying the evaluation matrix. Since it saves an unnecessary matrix multiply, it is clearly faster.
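The two-step scheme just described can be sketched numerically. The Python/NumPy fragment below is a hedged illustration, not the paper's exact operator: the plain Gaussian kernel, the bandwidths `sigma_fine` and `sigma_smooth`, and the node counts are all illustrative assumptions standing in for the Bernstein-function matrices. A small bandwidth nearly interpolates the data onto a dense grid; a larger bandwidth then approximates (smooths) the dense data.

```python
import numpy as np

def row_stochastic(t_out, t_in, sigma):
    """Row-stochastic matrix generated by a Gaussian kernel (each row sums to 1)."""
    W = np.exp(-((t_out[:, None] - t_in[None, :]) / sigma) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def densify_and_smooth(t_in, values, N=200, sigma_fine=0.01, sigma_smooth=0.05):
    """Two-step recovery: near-interpolating densification, then approximation."""
    t_out = np.linspace(t_in.min(), t_in.max(), N)
    B = row_stochastic(t_out, t_in, sigma_fine)     # densify: nearly interpolates
    A = row_stochastic(t_out, t_out, sigma_smooth)  # smooth the densified data
    return t_out, A @ (B @ values)

# Noisy samples of a smooth curve
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
noisy = np.sin(2.0 * np.pi * t) + 0.3 * rng.standard_normal(40)

t_out, recovered = densify_and_smooth(t, noisy)
# `recovered` tracks sin(2*pi*t) much more closely than the raw noisy samples do
```

In practice the two matrices can be premultiplied once, so each coordinate vector needs only a single matrix-vector multiply, which is the economy the text points out.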
The construction of the approximate inverse is not difficult, remarkably being given elementwise by the reciprocals of the coefficients of the generator of the row space. It is for this reason that the direct inversion, or solution of the system using efficient Toeplitz solvers, is not needed; the only exception may be when the data is free of noise and exact interpolation without any smoothing is required.

The reconstruction of surface data is done using a parametric representation, with coordinate pairs for two-dimensional data and coordinate triplets for three-dimensional data. The recovery of the data is done using (1), or its modification using an approximate inverse, applied successively to the data pairs in two dimensions and to the data triplets in three dimensions. For example, in the case of interpolating two-dimensional data, the operator is applied once to obtain the interpolant to the x position vector and once to obtain the interpolant to the y position vector, or the approximate-inverse form is applied to obtain the approximate interpolants. In reconstructing a parametrically represented surface generated from image data (from pixel values, for instance), an algorithm for boundary detection and sorting is necessary. In our analysis it is assumed that this is available; however, the errors generated in parameterizations of the surface may not be entirely random, and thus systematic errors in surface representation will also be introduced. For the purpose of assessing the performance of the boundary recovery algorithms, it will be assumed that the errors are random Gaussian with a mean of zero and variable variance, generated from a random variable constructed from two random numbers uniformly distributed in the unit interval. The parametrically ordered data are thus perturbed to yield the noisy data sets. In applying stochastic data recovery to the problem of finding the shape of a parametrically defined boundary, the problem of closed curves needs to be addressed.
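The description above (zero-mean Gaussian errors built from two uniform random numbers) matches the standard Box-Muller construction, so a plausible sketch of the perturbation step is the following Python fragment; the Box-Muller reading, the function names, and the 24-point test circle are our assumptions, not a transcription of the paper's formula.

```python
import math
import random

def gaussian_deviate(sigma):
    """Box-Muller: two uniforms on (0, 1] -> one N(0, sigma^2) sample."""
    u1 = 1.0 - random.random()   # in (0, 1], avoids log(0)
    u2 = random.random()
    return sigma * math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def perturb_boundary(points, sigma, seed=42):
    """Add independent zero-mean Gaussian noise to each coordinate of ordered boundary points."""
    random.seed(seed)
    return [(x + gaussian_deviate(sigma), y + gaussian_deviate(sigma))
            for (x, y) in points]

# Example: scatter a parametrically ordered circle of radius 5
circle = [(5.0 * math.cos(2.0 * math.pi * k / 24), 5.0 * math.sin(2.0 * math.pi * k / 24))
          for k in range(24)]
noisy = perturb_boundary(circle, sigma=0.5)
```

Because the noise is seeded, reruns with different `sigma` perturb the same underlying deviates, which is exactly what lets the paper compare Figures 4, 6, and 8 directly.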
In the presence of a large amount of noise, the two endpoints of the parametrically defined curves may not match: while ideally the first and last points coincide in the case of a closed loop, in the presence of errors this will not be the case. Finally, in implementing the algorithm, it was found that the dependence of (2) on the number of nodes in the denominator made the choice of the smoothing parameter dependent on the data density: as the boundary data density increased, the recovery using any given parameter value produced increasingly rougher curves, and thus it was found convenient to evaluate the matrices using a constant effective kernel width. Additionally, the algorithm was applied to all of the boundary data associated with a parametric data set to construct the boundary curve, rather than decomposing the data into overlapping blocks. The merit of using all of the data is a slight improvement in accuracy, while the drawback is that the cost of evaluating the algorithm increases with the block size; this should be borne in mind in applying the algorithm to large, complex three-dimensional data sets. The choice of smoothing depends on the smoothness of the desired boundary curve: the larger the smoothing parameter, the smoother the results. The values used for the two steps depend on the amount of noise, as well as on the presumed smoothness of the boundary data. While this seems to present difficult choices, it is less complicated than it appears, as the choice for the densification step will usually be any sufficiently small value, which allows for representing the data as nearly piecewise linear along the boundaries. In contrast, the choice for the smoothing step requires some evaluation, as this determines the smoothness of the recovered boundary.

3. Results and Discussion

We begin by applying the process to the recovery of the boundary of a disk defined parametrically by 12 uniformly distributed points with no random errors in the data, as shown in Figure 1. The figure clearly illustrates the aforementioned difficulty of attempting to approximate smooth parametric data.
In examining Figure 1(a), it is also important to observe that the beginning and the end of the curve are joined by a straight line in this example. The reason is that the slope of the approximant at the start of the data is not the same as the slope at the end, and thus in this example the first approximated point does not agree with the last approximated point; the two are joined graphically with a straight line to close the curve. A solution to the mismatch is to overlap the curves during reconstruction, that is, to begin the construction several points away from the last point and end it several points away from the first point, then use only the curve from the first to the last point. In all of the studies presented, there is no attempt to overlap the curves, in order to magnify these boundary effects and to demonstrate that they are mostly negligible, as seen in Figure 1(b), whenever the construction is done correctly. For large data sets, where it may be computationally advantageous to block the data, overlapping the endpoints is readily accomplished. It is important to realize that if the number of points representing the disk were much more than 12, then it would have been difficult to visualize the contraction of the recovered boundary curve, since the approximation is convergent: for a sufficiently large number of data points, the difference between the approximating curve and the curve itself becomes arbitrarily small. Recovering the boundary of a disk becomes more difficult, as shown in Figure 2, particularly if the amount of noise is quite large. In this example, the Gaussian noise for a disk of radius 5 is specified so as to yield an extensively scattered data set.
While the level of noise in this illustrative example is much higher than would be expected in any realistic imaging situation, the example serves a two-fold purpose: (1) on the one hand, it shows the robustness of the method at recovering a reasonable representation of the surface from data that is more consistent with noise than signal, and (2) it shows that the effects of errors in constructing the parameterization are less of an issue than might be presumed. In the figure, the connectivity of the boundary data is not ordered in moving counterclockwise around the circle, and so it is quite likely that any minor errors in parameterization, for example using even the simplest unconstrained nearest-neighbor search of the data, would cause unrecoverable errors in the recovered surface representation. As expected, reducing the noise significantly improves the recovery, even for half as many points, as shown in Figure 3. While both of these figures are typical, the symmetry and the smoothness of the boundary of the disk make the recovery somewhat less challenging than for a more complex geometrical shape. Thus the performance of the algorithm is examined on a star-shaped region generated from the vertex data (8,65), (72,65), (92,11), (112,65), (174,65), (122,100), (142,155), (92,121), (42,155), (60,100), (8,65). Note that the recovery uses the same smoothing on both steps as that used in recovering the boundary of the disk. The shape of the star in Figure 5 has become more wiggly with the smoothing parameters kept the same as in the case of lesser noise. The problem is a classical one: there is no means to discern the shape of the figure from the noisy data, except to note that an acceptable shape is determined by the smoothness of the boundary that is intrinsic to the figure. In this case, changing the smoothing can accommodate this subjective assessment, as illustrated in Figure 6.
Note that the recovered boundary is consistent with the curve recovered from the less noisy data set: compare Figures 4 and 6, as shown in Figure 7. The effects of doubling the smoothing are clearly evident. Since the noise in both cases was generated using the same seed, it is only the magnitude of the excursions away from the star's boundary that changes, and hence the figures are directly comparable. This perhaps most plainly illustrates the interaction between noise and smoothing, as the two curves are nearly identical. Even when the noise is doubled again, the shape of the recovered curve is remarkably consistent, as shown in Figure 8. At this level of noise there is some loss of resolution of the limbs; however, the recovered boundary is recognizable as being related to the two boundaries obtained in Figures 4 and 6. While this is artificial, in that the amount of noise in the data in real problems is not known, it does demonstrate the robustness of the algorithm at consistently recovering the boundary data irrespective of the added noise. A final remark on computing the approximate interpolant is the following. Since the approximate interpolant fails to interpolate when the smoothing is large or, more appropriately, fails to interpolate rapidly varying data, quite some effort was expended in developing mechanisms for being able to use large smoothing and yet still maintain some fidelity with the boundary data while constructing the preimage. As it developed, this iterative correction was not necessary: the algorithm performed well even without the introduction of any corrections. In part, this is due to the rather simple shapes and the relatively large amounts of noise that were examined. In the case when a surface is oscillating rapidly, with the noise much less than this surface oscillation, it is clearly necessary to implement the algorithm using this additional correction.
Indeed, given sufficiently rough surface data, it may be necessary to use stochastic interpolation, as otherwise some high-frequency surface details will be oversmoothed by the approximate inverse.

4. Conclusions

Examination of several attempts at domain boundary reconstruction in two dimensions using the stochastic data recovery techniques is encouraging. In particular, qualitative assessments demonstrate the viability of utilizing stochastic function recovery methods for reconstructing parametrically defined edges. Since the approach is intrinsically one dimensional, its extension to three dimensions is not difficult, and thus it can easily be implemented and tested on more realistic problems. Moving to three dimensions poses no additional cost other than that more data has to be processed; that is, the algorithmic costs scale directly with the number of lines being evaluated and the number of points on each line at which the algorithm is applied. It should be noted that the proposed technique does not solve all problems involving noisy surface reconstruction; however, it does provide an additional tool for analysis. The Gaussian-based kernels (the Bernstein functions), which were used to generate the row space of the matrices, were remarkably effective at cancelling the noise even under the most extreme conditions, where the noise essentially obliterated the image shape. Of course, this noise was Gaussian, and so it is only reasonable that the proposed approach would work well in these circumstances. For other types of noise, the use of alternative probability density functions for generating the mollifiers is easily accomplished; thus the method has substantial design flexibility, and these options need to be explored in detail to ascertain their utility at cancelling these other types of noise. The computational advantages of the technique are that it requires only two matrix multiplies of the data vector, and thus the approach is relatively cost effective.
Moreover, the method is easily implemented in parallel, and further computational gains in efficiency can be achieved by blocking the data, since typically only a segment of the entire data vector is needed to recover the data in any region. If fixed block sizes can be implemented, then the cost of the two matrix-vector multiplies can be further reduced, as the matrix-matrix multiply needs to be done only once, and so the algorithm reduces to a single block matrix-vector multiply.

References

1. J. Kolibal and D. Howard, "The novel stochastic Bernstein method of functional approximation," in Proceedings of the 1st NASA/ESA Conference on Adaptive Hardware and Systems (AHS '06), pp. 97–100, Istanbul, Turkey, June 2006.
2. I. Kastanis, S. R. Arridge, A. M. S. Silver, D. L. Hill, and R. Ravazi, "Reconstruction of the heart boundary from undersampled cardiac MRI using Fourier shape descriptors and local basis functions," in Proceedings of the 2nd IEEE International Symposium on Biomedical Imaging: Macro to Nano (ISBI '04), vol. 2, pp. 1063–1066, Arlington, Va, USA, April 2004.
3. Z. Malchano, "Image guidance in cardiac electrophysiology," Massachusetts Institute of Technology, Cambridge, Mass, USA, 2006.
4. J. Xie, Y. Jiang, and H.-T. Tsui, "Segmentation of kidney from ultrasound images based on texture and shape priors," IEEE Transactions on Medical Imaging, vol. 24, no. 1, pp. 45–57, 2005.
5. D. Howard and J. Kolibal, "Image analysis by means of the stochastic matrix method of function recovery," in ECSIS Symposium on Bio-Inspired, Learning, and Intelligent Systems for Security (BLISS '07), pp. 97–101, Edinburgh, UK, August 2007.
6. J. Kolibal and D. Howard, "Implications of a novel family of stochastic methods for function recovery," in Proceedings of the 2nd International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP '06), pp. 495–498, Pasadena, Calif, USA, December 2006.
7. J. Kolibal and D. Howard, "MALDI-TOF baseline drift removal using stochastic Bernstein approximation," EURASIP Journal on Applied Signal Processing, vol. 2006, Article ID 63582, 9 pages, 2006.
8. W. F. Trench, "An algorithm for the inversion of finite Toeplitz matrices," SIAM Journal on Applied Mathematics, vol. 12, no. 3, pp. 515–522, 1964.
Excel EXPON.DIST Function
The Excel EXPON.DIST Function

Exponential Distribution
The exponential distribution is a continuous probability distribution, which is often used to model the time between events. The probability density function for the exponential distribution is given by the formula:

f(x) = λe^(−λx)   for x ≥ 0

and the cumulative exponential distribution is given by the formula:

F(x) = 1 − e^(−λx)   for x ≥ 0

where x is the independent variable and λ is the parameter of the distribution.

Basic Description
For a given value of x and parameter λ, the Excel EXPON.DIST function calculates the value of the probability density function or the cumulative distribution function for the exponential distribution. The function is new in Excel 2010, so is not available in earlier versions of Excel. However, the Expon.Dist function is simply a renamed version of the Expondist function, which is available in earlier versions of Excel.

The syntax of the function is:

EXPON.DIST( x, lambda, cumulative )

where the function arguments are:

x - A positive number, denoting the value that the exponential distribution is to be evaluated at
lambda - The parameter of the distribution (must be > 0)
cumulative - A logical argument that specifies the type of distribution to be calculated. This can have the value TRUE or FALSE, meaning:
TRUE - calculate the cumulative distribution function
FALSE - calculate the probability density function

[Chart: probability density of the exponential distribution with λ = 0.5, 1 and 2]

Expon.Dist Function Examples
Example 1 - Probability Density Function
The chart on the right shows the probability density functions for the exponential distribution with the parameter λ set to 0.5, 1, and 2. If you want to calculate the value of the function with λ = 1 at the value x = 0.5, this can be done using the Excel Expon.Dist function as follows:

=EXPON.DIST( 0.5, 1, FALSE )

which gives the result 0.60653066.
[Chart: cumulative exponential distribution with λ = 0.5, 1 and 2]

Example 2 - Cumulative Distribution Function
The chart on the right shows the cumulative exponential distribution functions with the parameter λ equal to 0.5, 1 and 2. If you want to calculate the value of the function with λ = 1 at the value x = 0.5, this can be done using the Excel Expon.Dist function as follows:

=EXPON.DIST( 0.5, 1, TRUE )

This gives the result 0.39346934. Further examples of the Excel Expon.Dist function can be found on the Microsoft Office website.

Expon.Dist Function Errors
The following table lists the reasons for the most commonly encountered Excel errors when using the Excel Expon.Dist function:

Common Errors
#NUM! - Occurs if either:
- the supplied value of x is negative
- the supplied lambda argument is ≤ 0
#VALUE! - Occurs if any of the supplied arguments are non-numeric
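The two worked results can be reproduced outside Excel. Here is a small Python sketch of the same density and cumulative formulas (a stand-alone re-implementation, not a wrapper around Excel itself):

```python
import math

def expon_dist(x: float, lam: float, cumulative: bool) -> float:
    """Mirror of Excel's EXPON.DIST(x, lambda, cumulative)."""
    if x < 0 or lam <= 0:
        # Excel reports #NUM! for these argument ranges.
        raise ValueError("require x >= 0 and lambda > 0")
    if cumulative:
        return 1.0 - math.exp(-lam * x)   # CDF: F(x) = 1 - e^(-lambda*x)
    return lam * math.exp(-lam * x)       # PDF: f(x) = lambda * e^(-lambda*x)

print(round(expon_dist(0.5, 1, False), 8))  # 0.60653066, as in Example 1
print(round(expon_dist(0.5, 1, True), 8))   # 0.39346934, as in Example 2
```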
Projectile Trajectory Equation

When a projectile moves near the surface of the earth, the acceleration can be taken as constant, directed down, and equal to the free fall acceleration of any object. (We ignore wind resistance.) The free fall acceleration is taken to have magnitude g. The direction of the acceleration is down; the negative y direction for the coordinate system shown in the figure.

Eliminating time from the equations of motion gives the trajectory: the equation that gives the vertical displacement (Y) as a function of horizontal displacement (X). We can think of this equation as having the x and y components of initial velocity (V sub 0) as parameters. Alternatively, we can write the parameters in terms of the initial velocity magnitude and direction. Both parameterizations are shown in the figure.
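Both parameterizations can be written out explicitly. The following Python sketch uses the standard textbook forms of the trajectory equation (the function and symbol names are mine, chosen to match the text):

```python
import math

G = 9.81  # free-fall acceleration magnitude, m/s^2

def trajectory_xy(x: float, v0x: float, v0y: float) -> float:
    """y(x) with the velocity components as parameters:
    y = (v0y/v0x)*x - g*x^2 / (2*v0x^2)."""
    return (v0y / v0x) * x - G * x * x / (2.0 * v0x * v0x)

def trajectory_polar(x: float, v0: float, theta: float) -> float:
    """Same curve with speed v0 and launch angle theta as parameters:
    y = x*tan(theta) - g*x^2 / (2*v0^2*cos^2(theta))."""
    return x * math.tan(theta) - G * x * x / (2.0 * v0 * v0 * math.cos(theta) ** 2)

# The two parameterizations agree, and y returns to 0 at the range
# R = v0^2 * sin(2*theta) / g.
v0, theta = 20.0, math.radians(30.0)
R = v0 * v0 * math.sin(2 * theta) / G
assert abs(trajectory_polar(R, v0, theta)) < 1e-9
assert abs(trajectory_polar(5.0, v0, theta)
           - trajectory_xy(5.0, v0 * math.cos(theta), v0 * math.sin(theta))) < 1e-9
```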
Math Tools Discussion: Understanding Distance, Speed, and Time Relationships Using Simulation Software tool
Topic: Runners Activity: Some Observations
Related Item: http://mathforum.org/mathtools/tool/13171/

Subject: RE: Runners Activity: Some Observations
Author: ihor
Date: Jun 28 2006

Craig wrote:
...perpendicular lines on a graph are pretty much for aesthetics only, and a tie-in to geometry. However, in the context of a distance versus time graph, perpendicularity is practically meaningless...

I'm not sure that Rene Descartes would completely agree with you, but I get your point and it is a significant area of learning for students. Looking through the algebra 1 lens there's a strong (maybe too strong) inclination on my part to keep the X and Y axes the same.
What is $A+A^T$ when $A$ is row-stochastic?

If $A\in{\bf M}_n({\mathbb R})$ is row-stochastic (entrywise non-negative, and $\sum_j a_{ij}=1$ for all $i$), then $M:=A+A^T$ is
• symmetric,
• entrywise non-negative.
One finds easily the
• additional property that $$\sum_{i\in I}\sum_{j\in J}m_{ij}\le|I|+|J|$$ for all index subsets $I$ and $J$,
• equality in the extremal case: $$\sum_{i,j=1}^nm_{ij}=2n.$$
My question is whether all these four properties imply in turn that $M$ has the form $A+A^T$ for some row-stochastic $A$.

Edit. The answer is Yes when $n=2$ (obvious) or $n=3$ (more interesting).

matrices inequalities convex-polytopes

This smells like "matrix majorization" to me; unfortunately, I don't have time to dig up more on this. Please add your $n=3$ argument to the question if possible, or as a partial answer to this question. Thanks! – Suvrit Jan 5 '13 at 12:23

Lovely question! Each of the two sets of matrices forms a convex polytope. It will suffice to show that each vertex of the second polytope (the symmetric matrices satisfying your conditions) lies in the first polytope. This might help because I suspect that the vertices have quite special form, namely $A+A^T$ where each row of $A$ has a single 1. I didn't prove that, though. – Brendan McKay Jan 5 '13 at 12:33

2 Answers

My answer builds on Brendan McKay's idea. We will show the vertices have the form he describes. It's obvious that vertices in the polytope of row-stochastic matrices have this form, because each row is independent and the equations for each row form a simplex. So it's enough to show that vertices in the polytope of symmetric matrices satisfying these conditions have the analogous form. First we are going to show that integer symmetric matrices (with even diagonal entries) satisfying these inequalities have the desired form.
Then we are going to show that vertices are always integer matrices. To show the first thing, use the following algorithm: Whenever any row sums to $1$, remove that row and the corresponding column, and put a $1$ in the corresponding place in the row-stochastic matrix to account for it. Whenever any row sums to $0$, derive a contradiction: that row and all previously removed rows consist of $k+1$ rows and $k+1$ columns that together contribute only $2k$ to $\sum_{i,j} m_{ij}$, so the remaining $n-k-1$ rows and $n-k-1$ columns, when intersected, contribute $2n-2k$, which is more than $(n-k-1)+(n-k-1)$. When this process is complete, every row sums to at least $2$, and the average row sums to $2$, so every row sums to $2$. Then you have the adjacency matrix of a graph where every vertex has valence 2 - a union of disjoint cycles. Choose an orientation of each cycle, and complete the stochastic matrix by adding the corresponding oriented adjacency matrix. So it's enough to show that every vertex has integer entries. Suppose not. Call a pair $I,J$ tight if $\sum_{i\in I} \sum_{j\in J} m_{i,j}=|I|+|J|$. Note that the intersection of two tight pairs is tight, by the following inequality: $$\sum_{i\in I_1} \sum_{j\in J_1} m_{i,j} + \sum_{i\in I_2} \sum_{j\in J_2} m_{i,j} \leq \sum_{i\in I_1 \cup I_2} \sum_{j\in J_1 \cup J_2} m_{i,j} + \sum_{i\in I_1 \cap I_2} \sum_{j\in J_1 \cap J_2} m_{i,j}$$ Call a tight pair integral if $m_{i,j}\in \mathbb Z$ for all $i,j \in I,J$ (and diagonal entries are even). If some $m_{i,j}$ is not integral, let $I,J$ be a minimal non-integral tight pair. It exists because the set of all indices is always a non-integral tight pair. There must be at least one non-integral entry in $I \times J$, but the sum of all the entries is an integer, so there is another non-integral entry. Or there is an odd diagonal entry. Assume there are two entries, and that they are not just the same entry reflected around the diagonal.
Then you can increase one entry and decrease the other by some small amount $\epsilon$. This will preserve all the equalities and inequalities: If $\epsilon$ is small enough it will preserve all the inequalities that are not currently tight. $m_{i,j}\geq 0$ is not tight for either of these because they are nonzero. Since $I,J$ is a minimal non-integral tight pair, any tight pair containing one contains the other, and so every tight inequality remains tight. Since one could just as well decrease one and increase the other, this shows that $M$ is not a vertex. The remaining case to consider is if the minimal non-integral tight pair has only $m_{i,j}\not \in \mathbb Z$ and $m_{j,i} \not\in \mathbb Z$ and is otherwise integral, or has only an odd diagonal entry and is otherwise integral. If $i\neq j$ then $m_{i,j}$ is clearly a half-integer, so either way the total contribution of $m_{i,j}$ and $m_{j,i}$ is odd. But since $J,I$ is also a tight pair, $I\cap J$, $I\cap J$ is also a tight pair, and contains $m_{i,j}$, so it is the same as $I,J$ and $I=J$, so $|I|+|J|$ is even. So the total contribution is even, which means there must be another non-integral or odd diagonal entry, and we are done.

Great answer! Thanks. – Denis Serre Jan 14 '13 at 8:46

My solution for $n=3$ (upon Suvrit's request): To begin with, we solve $A+A^T=M$ together with $A{\bf1}={\bf1}$ (no inequality for the moment). This is a linear system in $A$, which consists in $9$ equations in $9$ unknowns. However, it is not Cramer, because the set of skew-symmetric matrices $B$ such that $B{\bf1}=0$ is one-dimensional, spanned by $$\begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}.$$ In particular, there is a condition for solvability in $A$, but this condition is met by the assumption that $\sum_{i,j}m_{ij}=6$. Notice that $a_{ii}=\frac12m_{ii}$. Thus there is a solution $A$, and every solution is of the form $A+aB$.
It remains to find $a$ so as to satisfy the inequality $A+aB\ge0_n$. For this, let us denote $\mu$ the lower bound of $(a_{12},a_{23},a_{31})$, and $\nu$ that of $(a_{21},a_{13},a_{32})$. Claim: we have $\nu+\mu\ge0$. This inequality allows us to find an $a$ such that $a_{12}+a,a_{23}+a,a_{31}+a,a_{21}-a,a_{13}-a,a_{32}-a\ge0$, which solves the problem. Proof of the claim: we have $a_{12}+a_{21}=m_{12}\ge0$, $a_{12}+a_{13}=1-\frac12m_{11}\ge0$ because of the assumption that $m_{ij}\le2$, and finally $$a_{12}+a_{32}=a_{12}+a_{21}+a_{32}+a_{23}-a_{21}-a_{23}=m_{12}+m_{23}+\frac12m_{22}-1.$$ From the assumption, this is equal to $$2-m_{13}-\frac12(m_{11}+m_{33})\ge0.$$ Finally, every sum $a_{ij}+a_{ji}$, $a_{ij}+a_{ik}$ and $a_{ij}+a_{kj}$ of elements of both sets is non-negative, hence $\mu+\nu\ge0$. Q.E.D.

Adapting this proof to higher $n$ seems difficult, but not impossible. Let us define $$s_A(I,J)=\sum_{i\in I,j\in J}a_{ij}.$$ If $A$ is any solution of $A+A^T=M$ and $A{\bf1}={\bf1}$, where $M$ meets the assumptions above, then for every $I,J$, we have $$s_A(I,J^c)+s_A(J,I^c)=|I|+|J|-s_M(I,J)\ge0.$$ Likewise, $a_{ij}+a_{ji}=m_{ij}\ge0$ for every $i,j$. We have therefore reduced our question to the following one:

Suppose that a matrix $A\in M_n({\mathbb R})$ satisfies $a_{ij}+a_{ji}\ge0$ for every $i,j$, and $s_A(I,J^c)+s_A(J,I^c)\ge0$ for all index sets $I,J$. Is it true that there exists a skew-symmetric matrix $B$, satisfying $B{\bf1}={\bf0}$, such that $A+B$ is entrywise non-negative? (Remark that for such $B$, one has $s_B(I,J^c)+s_B(J,I^c)\equiv0$.)

A side remark: this set of assumptions about $A$ is redundant.
All of them derive from the smaller set of inequalities $$a_{ij}+a_{ji}\ge0,\quad\forall i,j,\qquad s_A(I,I^c)\ge0,\quad\forall I.$$ As a matter of fact, one has $$s_A(I,J^c)+s_A(J,I^c)=s_A(I\setminus J,I\setminus J)+s_A(J\setminus I,J\setminus I)+s_A(I\cap J,(I\cap J)^c)+s_A((I\cup J)^c,I\cup J)$$ and $s_A(K,K)\ge0$ follows from $a_{ij}+a_{ji}\ge0$.
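The easy direction of the question — that any $M=A+A^T$ with $A$ row-stochastic satisfies the four listed properties — can be checked by brute force for small $n$. A NumPy sketch (random $A$, all subset pairs enumerated, feasible only for small $n$):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.random((n, n))
A /= A.sum(axis=1, keepdims=True)   # make A row-stochastic
M = A + A.T

# 1) symmetric, 2) entrywise non-negative
assert np.allclose(M, M.T)
assert (M >= 0).all()

# 3) sum over I x J of m_ij <= |I| + |J| for every pair of index subsets
subsets = [s for r in range(n + 1) for s in itertools.combinations(range(n), r)]
for I in subsets:
    for J in subsets:
        assert M[np.ix_(list(I), list(J))].sum() <= len(I) + len(J) + 1e-9

# 4) equality in the extremal case I = J = {1, ..., n}
assert abs(M.sum() - 2 * n) < 1e-9
```

This only verifies necessity of the conditions, of course; the converse is exactly what the question (and the accepted vertex argument) is about.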
more linear algebra help please?
Let S = {(1,2,3,4), (4,3,2,1)} and W the subspace of R^4 generated by S.
Find an orthogonal basis of the orthogonal complement W^ of W.
Find the shortest distance from u = (1,2,-2,1) to W^.
Obviously the vectors of S are meant to be written as one column instead of 4.
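Reading the garbled vectors as v1 = (1,2,3,4), v2 = (4,3,2,1) and u = (1,2,−2,1) (a guess from the question's formatting), both parts can be checked numerically: the orthogonal complement is the null space of the matrix with rows v1, v2, and the distance from u to W^⊥ is the norm of the projection of u onto W. A NumPy sketch:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0, 4.0])
v2 = np.array([4.0, 3.0, 2.0, 1.0])
u = np.array([1.0, 2.0, -2.0, 1.0])   # assumed reading of the garbled "u"

# W^perp is the null space of the 2x4 matrix with rows v1, v2.
S = np.vstack([v1, v2])
_, _, Vt = np.linalg.svd(S)
basis_perp = Vt[2:]                   # last 4 - rank(S) = 2 right singular vectors

# Rows of Vt are orthonormal, so this is already an orthogonal basis of W^perp.
assert np.allclose(basis_perp @ basis_perp.T, np.eye(2))
assert np.allclose(S @ basis_perp.T, 0)

# Distance from u to W^perp equals the norm of the projection of u onto W,
# since W and W^perp are orthogonal complements.
coeffs = np.linalg.lstsq(S.T, u, rcond=None)[0]
proj_W = S.T @ coeffs
dist = np.linalg.norm(proj_W)
print(dist)
```

With these vectors the projection works out to (1.1, 0.7, 0.3, −0.1), so the distance is sqrt(1.8).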
Hancock, PA Math Tutor Find a Hancock, PA Math Tutor ...If you can understand the building blocks, you can build anything. I'll help you translate the facts of your coursework into knowledge. I really want to work with students from middle school to adults returning to school. 15 Subjects: including algebra 1, elementary math, SAT reading, SAT math ...I am a patient, flexible, and encouraging tutor, and I'd love to help you or your child gain confidence and succeed academically. I adapt my teaching style to students' needs, explaining difficult concepts step by step and using questions to "draw out" students' understanding so that they learn ... 38 Subjects: including calculus, composition (music), ear training, elementary (k-6th) ...I also tutored students in Organic Chemistry, for two years, helping them prepare their entrance exam to Medical School, or Nursing school. Presently I am a successful Senior Analytical Chemist, working in Research and Development, in the Pharmaceutical industry, and my work is very interesting,... 7 Subjects: including geometry, prealgebra, precalculus, trigonometry ...I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely score 800/800 on practice tests. 19 Subjects: including algebra 1, algebra 2, calculus, geometry ...I scored best on the writing section of my SATs, scoring a 680/800. I love reading, writing and grammar components. I played lacrosse for 6 years up until my junior year of high school. 
12 Subjects: including algebra 1, prealgebra, reading, German
An Onofri-type Inequality on the Sphere with Two Conical Singularities
Canad. Math. Bull. 55(2012), 663-672
Printed: Sep 2012
• Chunqin Zhou

In this paper, we give a new proof of the Onofri-type inequality \begin{equation*} \int_S e^{2u} \,ds^2 \leq 4\pi(\beta+1) \exp \biggl\{ \frac{1}{4\pi(\beta+1)} \int_S |\nabla u|^2 \,ds^2 + \frac{1}{2\pi(\beta+1)} \int_S u \,ds^2 \biggr\} \end{equation*} on the sphere $S$ with Gaussian curvature $1$ and with conical singularities divisor $\mathcal A = \beta\cdot p_1 + \beta \cdot p_2$ for $\beta\in (-1,0)$; here $p_1$ and $p_2$ are antipodal.

MSC Classifications:
53C21 - Methods of Riemannian geometry, including PDE methods; curvature restrictions [See also 58J60]
35J61 - Semilinear elliptic equations
53A30 - Conformal differential geometry
prove nullity
November 13th 2008, 12:53 AM

Let V1, V2, W1 and W2 be vector spaces over a field F. Let T ∈ L(V1, V2), U1 ∈ L(W1, V1) and U2 ∈ L(V2, W2). Suppose that nullity(T) is finite and U1 and U2 are isomorphisms. (For full marks, do not assume that the vector spaces Vj and Wj are finite-dimensional.) Prove that nullity(U1TU2) = nullity(T).
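In the finite-dimensional case the statement can at least be sanity-checked numerically. Note that with the given domains and codomains, the composition that typechecks is U2∘T∘U1 : W1 → W2, and that is what the sketch below tests; the dimensions and random matrices are my own illustrative choices, not part of the problem:

```python
import numpy as np

rng = np.random.default_rng(7)

def nullity(mat: np.ndarray) -> int:
    """dim ker(mat) = number of columns minus rank."""
    return mat.shape[1] - np.linalg.matrix_rank(mat)

dimW1 = dimV1 = 5      # U1 an isomorphism forces dim W1 = dim V1
dimV2 = dimW2 = 4      # likewise for U2

# Build T: V1 -> V2 with rank 3, hence nullity 5 - 3 = 2.
T = rng.standard_normal((dimV2, 3)) @ rng.standard_normal((3, dimV1))

# Random square matrices are invertible with probability 1.
U1 = rng.standard_normal((dimV1, dimW1))   # W1 -> V1
U2 = rng.standard_normal((dimW2, dimV2))   # V2 -> W2
assert np.linalg.matrix_rank(U1) == dimV1
assert np.linalg.matrix_rank(U2) == dimV2

composite = U2 @ T @ U1                    # W1 -> W2
assert nullity(composite) == nullity(T) == 2
```

The underlying reason is the one the proof should formalize: composing with isomorphisms on either side changes neither the rank nor the dimension of the kernel.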
I needed to plot data from a program in real-time for a demo, that is, as the program generates the data, I wanted it to show in some nice diagram. However, I needed to plot several streams in a single window, which that version lacked. So I extended that program with the ability to plot several streams into a single window (download). For this, you specify both the number of streams you want to plot and the number of windows you want to show. Furthermore, you specify in which window each stream is plotted. I kept as much as possible of the original command line interface to ease the transition to this new version. The result looks like this: Three data streams plotted in two windows. You see the three data streams sin, cos, and log plotted in two windows. sin and cos are plotted in the first window, log is plotted in the second window. Also note that the data rate of log is different from the data rate of sin and cos. You could also plot all three data streams in a single window. The input (on stdin or from a file) looks like this, as for the original version: The number before the colon specifies to which stream the data point belongs, the number after the colon is the (next) data point. Note that in this example, the data points for stream 0 and 1 (sin and cos in the example above) come twice as fast as the data points for stream 2 (log in the example above). The command line to generate these plots looks like this: perl ./driveGnuPlotStreams.pl 3 2 \ # number of streams and windows 50 50 \ # width of sliding window in each window -1 1 -2 6 \ # min/max values for each window 500x300+0+0 500x300+500+0 \ # geometry of each window 'sin' 'cos' 'log' \ # title of each stream 0 0 1 # in which window to plot each stream The first two numbers on the command line give the number of streams and number of windows to plot, respectively. The second line gives the number of data points to plot, that is, the size of the sliding window. 
The third line gives the min/max values for each window. The fourth line gives the geometry of the windows. The fifth line gives the titles of the streams. And the sixth and last line gives the window number in which each stream is to be plotted. Compared to the original program, I changed the position of the geometry specification and the titles, so that the options for the windows and the streams are each grouped together. The command prints its status on stdout like this: Will display 3 Streams in 2 windows... Window 0 will use a window of 50 samples Window 1 will use a window of 50 samples Window 0 will use a range of [-1, 1] Window 1 will use a range of [-2, 6] Window 0 will use a geometry of '500x300+0+0' Window 1 will use a geometry of '500x300+500+0' Stream 0 will use a title of 'sin' Stream 1 will use a title of 'cos' Stream 2 will use a title of 'log' Stream 0 will be plotted in window 0 Stream 1 will be plotted in window 0 Stream 2 will be plotted in window 1 It starts to plot the windows as soon as the data arrives. The program expects its data on stdin or from a file/pipe given after all the command line options. For example, the plots above where generated like this: (perl -e ' for (1..100) { print "0:", sin($i), "\n1:", cos($i), "\n"; if ($_%2) { print "2:", log(2*$_), "\n" system "sleep 0.1" }' ; read) | \ perl ./driveGnuPlotStreams.pl 3 2 50 50 -1 1 -2 6 500x300+0+0 500x300+500+0 'sin' 'cos' 'log' 0 0 1 The result looks like this:
{"url":"http://www.lysium.de/blog/index.php?/archives/234-Plotting-data-with-gnuplot-in-real-time.html","timestamp":"2014-04-18T18:48:53Z","content_type":null,"content_length":"7168","record_id":"<urn:uuid:05597aba-abe2-4c2f-b45e-2981eb7b07b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Publications of Daniel W. Lozier [FLR03] B.R. Fabijonas, D.W. Lozier and J.M. Rappoport, Algorithms and Codes for the Macdonald Function: Recent Progress and Comparisons, Journal of Computational and Applied Mathematics 161 (2003), pp. 179-192. Preprint: [nistir6596.pdf] [Loz03] D.W. Lozier, NIST Digital Library of Mathematical Functions, Annals of Mathematics and Artificial Intelligence 38,1-3 (May 2003), pp. 105-119. Preprint: [Linz01.pdf] [BDL+01] R.F. Boisvert, M.J. Donahue, D.W. Lozier, R. McMichael and B.W. Rust, Mathematics and Measurement, NIST Journal of Research 106,1(Jan-Feb 2001), pp. 293-313. [BL01] R.F. Boisvert and D.W. Lozier, Handbook of Mathematical Functions, in D.R. Lide, ed., A Century of Excellence in Measurements, Standards, and Technology, CRC Press, 2001, pp. 135-139. Also printed as NIST Special Publication 958, Jan. 2001. [LO00] D.W. Lozier and F.W.J. Olver, Numerical Evaluation of Special Functions (Version 2), December 2000, 48 pages. Update of NISTIR 5383 through 1999; see [LO94] for the original version. [Loz00] D.W. Lozier, The DLMF Project: A New Initiative in Classical Special Functions, in C. Dunkl, M. Ismail and R. Wong, eds., Special Functions: Proceedings of the International Workshop, World Scientific (Singapore), 2000, pp. 207-220. [HongKong99.pdf] [LMS99] D.W. Lozier, B.R. Miller and B.V. Saunders, Design of a Digital Mathematical Library for Science, Technology and Education, in Proc. IEEE Forum on Research and Technology Advances in Digital Libraries, May 19-21, 1999, Baltimore, Maryland. IEEE Computer Society Press, Los Alamitos, California, 1999, pp. 118-128. Preprint: NISTIR 6297, Feb. 1999, 13 pages. [nistir6297.pdf]. [Loz97b] D.W. Lozier, Toward a Revised NBS Handbook of Mathematical Functions, NISTIR 6072, September 1997, 8 pages. [nistir6072.pdf], [Loz97a] D.W. Lozier, A Proposed Software Test Service for Special Functions, in R.F. 
Boisvert, ed., The Quality of Numerical Software: Assessment and Enhancement, Chapman and Hall, London, 1997, pp. 167-178. Preprint: NISTIR 5916, October 1996, 11 pages. [nistir5916.pdf], [ALST96] M.A. Anuta, D.W. Lozier, N. Schabanel and P.R. Turner, Basic Linear Algebra Operations in SLI Arithmetic, Proc. Euro-Par'96, LNCS 1124, vol. 2, Springer-Verlag, 1996, pp. 193-202. Preprint: NISTIR 5811, March 1996, 15 pages. [nistir5811.pdf]. [ALT96] M.A. Anuta, D.W. Lozier and P.R. Turner, The MasPar MP-1 as a Computer Arithmetic Laboratory, J. Res. Nat. Inst. Standards and Technology 101,2 (March-April 1996), pp. 165-174. [Loz96] D.W. Lozier, Software Needs in Special Functions, J. Comput. Appl. Math. 66(1996) pp. 345-358. Preprint: NISTIR 5490, August 1994, 16 pages. [nistir5490.pdf] [LT96] D.W. Lozier and P.R. Turner, Error-Bounding in Level-Index Computer Arithmetic, in G. Alefeld and J. Herzberger, eds., Numerical Methods and Error Bounds, Akademie Verlag, Berlin, 1996, pp. 138-145. Preprint: [oldenburg95.pdf] [LO94] D.W. Lozier and F.W.J. Olver, Numerical Evaluation of Special Functions, NISTIR 5383, March 1994, 47 pages. For an update through 1999 see [LO00]. [nistir5383.pdf] [LO93] D.W. Lozier and F.W.J. Olver, Airy and Bessel Functions by Parallel Integration of ODEs, in R.F. Sincovec, et al., eds., Proc. Sixth SIAM Conference on Parallel Processing for Scientific Computing, vol. 2, SIAM, 1993, pp. 530-538. [Loz93] D.W. Lozier, An Underflow-Induced Graphics Failure Solved by SLI Arithmetic, in E. Swartzlander Jr. et al., eds., Proc. 11th Symposium on Computer Arithmetic, IEEE, 1993, pp. 10-17. [LT92b] D.W. Lozier and P.R. Turner, Symmetric Level-Index Arithmetic in Simulation and Modeling, J. Res. Nat. Inst. Standards and Technology 97 (1992), pp. 471-485. [LT92a] D.W. Lozier and P.R. Turner, Robust Parallel Computation in Floating-Point and SLI Arithmetic Computing 48 (1992), pp. 239-257. [LO90] D.W. Lozier and F.W.J. 
Olver, Closure and Precision in Level-Index Arithmetic, SIAM J. Numer. Anal. 27 (1990), pp. 1295-1304. [LR89] D.W. Lozier and R.G. Rehm, Some Performance Comparisons for a Fluid Dynamics Code, Parallel Comput. 11 (1989), pp. 305-320. [CLOT86] C.W. Clenshaw, D.W. Lozier, F.W.J. Olver and P.R. Turner, Generalized Exponential and Logarithmic Functions, Comput. Math. Appl. 12B (1986), pp. 1091-1101. FullText [Loz83] D.W. Lozier, The Use of Floating-Point and Interval Arithmetic in the Computation of Error Bounds, IEEE Trans. Comp. C-32 (1983), pp. 411-417. [SOL81] J.M. Smith, F.W.J. Olver and D.W. Lozier, Extended-Range Arithmetic and Normalized Legendre Polynomials, ACM Trans. Math. Software 7 (1981), pp. 93-105, 141-146. [Loz80] D.W. Lozier, Numerical Solution of Linear Difference Equations, NBSIR80-1976, March 1980, 174 pages. [Loz78] D.W. Lozier, A Universal Set of Test Data for Computer Implementations of Elementary Mathematical Functions, NISTIR 78-1478, May 1978, 24 pages. [WLO76] W.T. Wyatt, D.W. Lozier and D.J. Orser, A Portable Extended-Precision Arithmetic Package and Library with Fortran Precompiler, ACM Trans. Math. Software 2 (1976), pp. 209-231. [LMS73] D.W. Lozier, L.C. Maximon and W.L. Sadowski, A Bit Comparison Program for Algorithm Testing, Comput. J. 16 (1973), pp.111-117.
68Qxx Theory of computing
• 68Q01 General
• 68Q05 Models of computation (Turing machines, etc.) [See also 03D10, 68Q12, 81P68]
• 68Q10 Modes of computation (nondeterministic, parallel, interactive, probabilistic, etc.) [See also 68Q85]
• 68Q12 Quantum algorithms and complexity [See also 68Q05, 81P68]
• 68Q15 Complexity classes (hierarchies, relations among complexity classes, etc.) [See also 03D15, 68Q17, 68Q19]
• 68Q17 Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.) [See also 68Q15]
• 68Q19 Descriptive complexity and finite models [See also 03C13]
• 68Q25 Analysis of algorithms and problem complexity [See also 68W40]
• 68Q30 Algorithmic information theory (Kolmogorov complexity, etc.) [See also 03D32]
• 68Q32 Computational learning theory [See also 68T05]
• 68Q42 Grammars and rewriting systems
• 68Q45 Formal languages and automata [See also 03D05, 68Q70, 94A45]
• 68Q55 Semantics [See also 03B70, 06B35, 18C50]
• 68Q60 Specification and verification (program logics, model checking, etc.) [See also 03B70]
• 68Q65 Abstract data types; algebraic specification [See also 18C50]
• 68Q70 Algebraic theory of languages and automata [See also 18B20, 20M35]
• 68Q80 Cellular automata [See also 37B15]
• 68Q85 Models and methods for concurrent and distributed computing (process algebras, bisimulation, transition nets, etc.)
• 68Q87 Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) [See also 68W20, 68W40]
• 68Q99 None of the above, but in this section
Solving Word Problems
Re: Solving Word Problems
Hi soroban;

Rule #7 rules!!! That's what powers Eddington's monkey. If you type long enough and fast enough, not only will you solve the word problem but every other problem, even ones that haven't been thought of yet.

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Find sequence Term For set of Numbers
February 14th, 2013, 01:27 PM #1
Junior Member, Join Date Feb 2013

How can we find a sequence term for a set of numbers?
1 - the numbers are always in order
2 - if we have n numbers, n/2 numbers are always present
for example we have: And Result:
I think this is very difficult. What's your comment? What computer field or algorithm can help me? Thank you.

Re: Find sequence Term For set of Numbers
It fully depends upon what operations you are permitted to use. Just arithmetic ones???

Re: Find sequence Term For set of Numbers
The general term for inferring a relationship between two variables based only on a number of observations is regression. For the example you listed, it can be expressed as a line equation (i.e. y = a*x + b). Linear regressions are quite simple to calculate. See https://en.wikipedia.org/wiki/Simple_linear_regression

For relationships that cannot be expressed in that format, you can try using non-linear regression. The algorithms for such regressions are approximate and not always guaranteed to find an optimal solution. Nevertheless, they can be (and are) used effectively in virtually all fields of science and engineering. In general, though, you'll need to have an idea of the functional form (and perhaps some guesses at the approximate constants) to generate high quality fits. You can read about the issue in detail at the Wikipedia page: https://en.wikipedia.org/wiki/Regression_analysis

Best Regards,

All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
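For the linear case the reply mentions, an ordinary least-squares fit is only a few lines. A NumPy sketch (the observations here are invented for illustration, since the original post's numbers did not survive):

```python
import numpy as np

# Hypothetical observations that lie near the line y = 3x + 2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.1, 7.9, 11.0, 14.1, 16.9])

# Fit y = a*x + b by least squares (a degree-1 polynomial fit).
a, b = np.polyfit(x, y, deg=1)
print(a, b)        # slope and intercept, close to 3 and 2

# Use the fitted line to predict the next term of the "sequence".
print(a * 6 + b)
```

For relationships that are not lines, the same idea generalizes to higher-degree polynomials (larger `deg`) or to nonlinear least squares, as the reply notes.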
Items tagged with: simulation

Hi, I need to write some Maple functions for the equations below, and I am not sure whether these should be arrays or not. Please see the question below.

Consider a factory which manufactures only one product. Raw material is bought from an external supplier and stored until required. Finished items are held in a warehouse. The operation of the factory and its warehouse can be modelled as a set of equations as follows.

Let us define, at time t:
R(t) = raw material stored (units)
F(t) = finished goods stock (units)
B(t) = order backlog (units)
T(t) = target stock level for finished goods (units)
All variables defined above give quantities at the start of week t.

X(t, t+1) = weekly orders received from customers
M(t, t+1) = raw material supplied per week
P(t, t+1) = production per week
D(t, t+1) = amount dispatched to customers per week
All variables defined above give quantities over week t to t+1 (i.e. over week t).

Backlog and stock position:
(1) B(t+1) = B(t) + X(t,t+1) - D(t,t+1)
(2) T(t+1) = (m+1)/m * (X(t,t+1) + X(t-1,t) + ... + X(t-m+1,t-m+2))
    (assuming the company wishes to maintain m weeks' stock of finished items (suppose m = 5), so the target level is m times the average of the last m-1 weeks)
(3) R(t+1) = R(t) + M(t,t+1) - P(t,t+1)
(4) F(t+1) = F(t) + P(t,t+1) - D(t,t+1)
(5) D(t,t+1) = B(t) if B(t) < F(t), otherwise F(t)
(6) M(t,t+1) = P(t-1,t)
(7) P(t,t+1) = T(t) - F(t) + D(t,t+1), capped at R(t) if the result exceeds R(t), and set to 0 if the result is negative

Given initial values for the variables, it is possible to simulate this system to study how it responds to the order rate. Suppose that all is calm, and the factory has operated as follows for the last five weeks:

Target warehouse stock = 250
Finished goods stock = 250
Raw material stock = 150
Production rate = 50/week
Material supply rate = 50/week
Order rate = 50/week
Order backlog = 50

Suppose this behaviour continues for the first week of the simulation, but that during the next week orders double due to a sales promotion. During the third week orders drop to zero, as all of the previous week's demand was satisfied. For the fourth and succeeding weeks, demand returns to an order rate of 50/week. What happens elsewhere in the system? A deterministic simulation will provide the answer. For this, compute the following:

i) the values of equations (1)-(4) at the start of week t;
ii) the values of equations (5)-(7), i.e. the new values of the rates during the following week;
iii) move simulation time to the start of the next week.

The simulation should be presented in tabular form; plot the production and demand rates to examine the performance of the system.

All help will be much appreciated. Best Regards,