find the domain and solve polynomial

October 1st 2008, 02:31 PM — #1
Please help me on this.
1. Solve x^6 + 3x^3 - (3/4) = 0. This is what I've got for this problem: x^3(x^3 + 3) - (3/4) = 0. What should I do next? Quadratic formula???
2. Find the domain of sqrt((x - 3)/(x + 2)) + 1. Sorry, I don't know how to begin this problem.

October 1st 2008, 02:34 PM — #2
Um, the quadratic formula only works for quadratics, and you do not have a quadratic in this simplification. Write it this way:

$(x^3)^2 + 3(x^3) - \frac 34 = 0$

Aha! Now we have a quadratic! (To see this, replace x^3 with y.)

As for the domain of sqrt((x - 3)/(x + 2)) + 1: the domain is the set of x-values for which the function is defined. Usually the easiest way to find the domain is to find the x-values that don't work, and say the domain is all x-values but those. So, what can x NOT be here?

October 1st 2008, 08:58 PM — #3
1. x = ± 3sqrt((-3 + sqrt(12))/2)
2. The domain is (2x - 1)/(x + 2) greater than or equal to 0.
Are these right? Thank you for your help.

October 1st 2008, 10:41 PM — #4 (Junior Member, Seattle, Washington)
Hello Kenny:
I checked your two solutions for the first exercise by substituting them into the original polynomial and evaluating. I did not get a value of zero from either of them. (Your positive solution yields 17.4135 and your negative solution yields -0.6953.) Check your work on solving the quadratic equation, and make sure that you correctly take the cube root of each result.
On the domain exercise, nice try (heh, heh), but your instructor probably won't accept an unsolved inequality as an acceptable answer. Whenever you report a domain, you need to state a set of numbers, because that is what the domain is: a set of numbers. (You're kinda on the right track.) The radicand must be non-negative, so you need to solve the following inequality:

(x - 3)/(x + 2) ≥ 0

In other words, the set of numbers for x that satisfy this inequality is the domain of the original function.
If you need more help with either of these exercises, then please continue to post your work and try to say something about why you're stuck.
~ Mark
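For anyone following along later, here is a sketch of where the hints lead (standard working, not quoted from the posters):

$$y^2 + 3y - \tfrac{3}{4} = 0 \quad\Longrightarrow\quad y = \frac{-3 \pm \sqrt{9+3}}{2} = \frac{-3 \pm \sqrt{12}}{2}.$$

With $y = x^3$, the two real solutions are $x = \sqrt[3]{\frac{-3+\sqrt{12}}{2}}$ and $x = \sqrt[3]{\frac{-3-\sqrt{12}}{2}}$. The poster had the correct quadratic roots; the slip was taking three times the square root ("3sqrt") instead of the cube root, and cube roots keep the sign of their argument, so no "±" is needed. For the domain exercise, a sign chart on $\frac{x-3}{x+2} \ge 0$ (numerator zero at $x = 3$, denominator zero at $x = -2$, which must be excluded) gives $(-\infty, -2) \cup [3, \infty)$.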
{"url":"http://mathhelpforum.com/algebra/51564-find-domain-solve-polynomial.html","timestamp":"2014-04-21T16:45:40Z","content_type":null,"content_length":"39364","record_id":"<urn:uuid:8c4e4560-fd66-4270-8d59-5e4b35572ac8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
All the World Famous Mathematicians

A topic that has been debated constantly since the beginning of time: can a woman do a man's job? Can a man do a woman's job? Sure, there are some things that men can't do and some things that women can't do, but is it really impossible?

Who Can Be a World Famous Mathematician?

This is in some cases a matter of opinion, but if we look at history our question will be answered: BOTH MEN AND WOMEN can be world famous mathematicians. I have seen both very successful men and very successful women in the math world. So for me this is no longer a debate at all.

Talk about finishing school early: this famous mathematician was lecturing when he was nineteen years of age. Joseph Louis Lagrange was born in Italy, but was known as the Italian-born French mathematician. He worked in the area of calculus and impressed many with his knowledge of the subject. This world-famous mathematician took over from Euler as Director of Mathematics at the Berlin Academy on November 6, 1766, but later moved on to the Paris Academy of Science, where he stayed for the rest of his career.

"Before we take to sea we walk on land, Before we create we must understand."
"When we ask advice, we are usually looking for an accomplice."

These are some of the famous quotes that Lagrange, as one of the world's famous mathematicians, was known for.
{"url":"http://worldfamousmathematicians.blogspot.co.uk/","timestamp":"2014-04-18T05:30:06Z","content_type":null,"content_length":"41280","record_id":"<urn:uuid:bea368ab-09cb-408e-9aa8-29cb3e05cc6d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
If a and n are positive numbers, does 2a^{2x} = n?

PraPon wrote:
If a and n are positive numbers, does $2a^{2x}=n$?
(1) $a^x+\frac{1}{a^x}=\sqrt{n+2}$
(2) $x > 0$

We know that $(a^x+a^{-x})^2 = a^{2x}+a^{-2x}+2$. From statement 1, we have $n+2 = a^{2x}+a^{-2x}+2$, or $n = a^{2x}+a^{-2x}$. Thus, the question stem is asking whether $a^{2x}+a^{-2x} = n$, i.e. whether $a^{2x}+a^{-2x} = 2a^{2x}$. If that has to be true, then $a^{-2x} = a^{2x}$, or $a^{4x} = 1$. Now, for $x = 0$, we get a YES. But depending on the values of a and x, this will change. Thus, insufficient.

From statement 2, we only have $x > 0$. Clearly insufficient.

Combining both, we know that x is not equal to zero. However, if $a = 1$, we can still get $a^{4x} = 1$. Insufficient, so the answer is E.
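A concrete pair of cases (my own numbers, not from the original post) makes the combined insufficiency easy to see:

$$a = 1,\ x = 1:\quad n = 1 + 1 = 2,\qquad 2a^{2x} = 2 \ \Rightarrow\ \text{YES}$$
$$a = 2,\ x = 1:\quad n = 4 + \tfrac14 = 4.25,\qquad 2a^{2x} = 8 \ \Rightarrow\ \text{NO}$$

Both cases are consistent with statement (1) (with n defined accordingly) and satisfy statement (2), yet they answer the question differently — confirming E.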
{"url":"http://gmatclub.com/forum/if-a-and-n-are-positive-numbers-does-2a-2x-n-148696.html","timestamp":"2014-04-19T15:19:24Z","content_type":null,"content_length":"155454","record_id":"<urn:uuid:4d5901c3-2241-4c45-9357-0269d674fc01>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Session V – B: Corporate Finance: Beta, Calculating WACC or Weighted Average Cost of Capital

Concept Title: Weighted Average Cost of Capital (WACC)
Description: Explains WACC and how to calculate it

Cost of capital is a concept that can be derived from the discussion we have had about opportunity cost. From that discussion, we know that there are two ways in which capital can be raised. We can either invest on our own account, which we call equity investment, or we can go out and borrow money from friends, family and the neighborhood bank. We call this debt. The proportion of debt to equity is called leverage.

If a business raises 100 dollars in capital, of which 70 dollars is an equity investment and 30 dollars is debt, then what is the leverage ratio? If we divide the 30 dollars of debt by the 70 dollars of equity, we find that for every dollar of equity there are about 43 cents of debt, so the leverage for this business is roughly 43%. An alternate definition takes the opposite view and divides the 70 dollars of equity by the 30 dollars of debt to determine that for every dollar of debt there are 2.33 dollars of equity available.

What about the cost of capital for this combination of funds? How much return should the business generate on this hundred-dollar investment to break even? To discuss these issues, let's take a step-by-step approach. This way we'll be able to build on concepts explained earlier in the session.

Step A – Simple Average Calculation

One way is to do simple weighted averaging. Let's assume that you have to pay 10% interest on the 30 dollars you have borrowed; this is the cost of debt. The risk of taking on equity is a little higher than average, which is why it costs more; as such, the cost of equity is 20%. Both of these are required returns. What is the correct required average return? In other words, what is the average cost of capital?

(30 × 10%) / (30 + 70) + (70 × 20%) / (30 + 70)

This expression can be simplified to (30 × 10% + 70 × 20%) / 100. If we solve it we get (3 + 14) / 100 = 17%.

Using this approach, the above venture should earn 17% on every hundred dollars invested to satisfy the stated or required needs of its debt and equity holders. Any return less than that would mean that investors in the firm actually lost money, as this is the cost at which capital is available to us.

Step B – Issues affecting the interest rate on debt

Example A is interesting, but the question for a practitioner is how one can calculate the required rate of return for debt as well as equity.

The required return on debt depends on two factors:

A) Government Debt Rate

The first factor is the rate that the government is offering on its debt. Accepted wisdom is that since the government has the power to print currency, any amount loaned to the government will definitely be paid back. In fact, the government is the ideal borrower every banker and investor would like to have, since it is almost certain that such a loan will be repaid. The government is not dumb: in return for being such a high-profile borrower, it also ensures that it pays the lowest possible rate on traditional debt. US government debt is issued as treasury bills, bonds and notes, and the rate on treasury bonds (bills and notes) is used as a proxy for the true risk-free rate. The risk-free rate is the rate of interest that would exist on a riskless security.
As we have stated above, government securities and bonds are a classic case of this kind of security, which is why the rate the government expects to pay on its debt is generally taken to be the risk-free rate of return.

B) Credit Risk

The second factor is what is commonly referred to as credit risk. Credit risk is an assessment of the chance that the borrower will default on his loan and will not be able to pay the interest due (in a best case scenario) or will not be able to repay the original principal, or both (in a worst case scenario).

Assessing credit risk is a major industry, with firms like Moody's, Standard & Poor's, etc. acting as guardians over public issues of debt. They generally rate debt within a range of letter grades indicating how strongly they feel about the chances of the borrower honoring his obligations. AAA ratings are great; anything beyond the Ds means you are in trouble and should have paid more attention in your credit assessment classes. The ratings can change quite dramatically over a short period: an event, a change in direction, or market sentiment can place an issuer of bonds (debt) in a situation where he may not be able to meet his commitments.

The higher your credit is rated (AAA and above), the lower the interest rate you have to pay on your debt. The lower your credit rating (Bs, Cs, Ds), the higher the rate people will expect your bonds to pay.

But it is not likely that you're rated by Moody's or Standard & Poor's if you're borrowing for the first time. The person or institution offering you the debt may instead evaluate your credit by looking at the mortgage payments on your house, the lease payments on your cars and the payment of your other obligations, like utility bills.

In short, the required rate of return on a bond is composed of two factors. The first is the rate on government bonds, referred to as the risk-free rate. The second depends on how high your credit rating is.

Quiz: KnowItAll, Inc., your new startup, currently has equity worth $20 million and no debt on its books. You need money for your company's growth, and decide to issue debt worth $10 million. What is the new leverage of your company?

Leverage is the debt-to-equity ratio. Hence in this case:
Debt = $10.0 million
Equity = $20.0 million
Leverage = 10/20 = 50%

Step C – Issues affecting the required rate of return on equity

The required rate of return on an equity investment follows the same intuitive reasoning as above, but this time there are three components.

Risk-Free Rate of Interest

The first is our old friend, "the risk-free rate of interest."

Market Rate of Return

The second is the market rate of return. This is the average return earned by the total market of equity securities. As calculating, updating and reviewing this figure is a tough job, we use proxies; for example, one can use the return on the S&P 500 Index (the Standard and Poor's index of 500 companies – the index tracks the performance of these 500 companies) to track the market rate of return.

Beta

The third component is called Beta. Beta for any equity security is a factor that indicates how that security will react when the market moves in a certain direction. A Beta of 1 for Avicena's shares means that if the market increases by 50%, Avicena's share price will increase by 50%. A Beta of –1 means that if the market increases by 50%, Avicena's shares will fall by 50%. A Beta of 2 means that a 50% move in the market will result in a 100% increase in Avicena's share price.
We now want to pull these three components together to get an idea of how to calculate the required rate of return for an equity investment. But first we need to define one more term.

Risk Premium

This is calculated as follows:

Risk Premium = (Market rate of return – Risk-free rate of return)

In simpler terms, this is the difference between the return on the total market of equity securities and the return on treasury bonds. It's called the risk premium because it indicates the compensation that has to be provided to an investor as he takes on the incremental risk of equity securities. It is basically the price charged for the risk that the investor is taking on by investing in a certain equity security. Historically this "risk premium" has ranged between 5% and 8%, depending on who you speak to. Another simple way to remember this is that it is the difference between the return on equity securities and the return on bonds.

The required return on an equity investment can then be written as:

Required Return on Equity = Risk-free rate of interest + Beta × (Risk Premium)

Comments

1. Hello, how do I solve for the WACC if the proportion of debt is 0.10, the cost of debt is 3.7% and, lastly, the cost of equity is 11.1%?

2. Also: actual 2008 net sales are $1,400,000 and gross profits are $390,000. Projections for 2009: net sales grow by 15%, cost of goods sold is $1,135,050, beginning inventory is $100,000 and ending inventory is $150,000. Calculate the gross profits for 2009. Thanks again.

3. @Alfas: The answer to the WACC question is as follows:
Weight of debt wd = 0.10, so the weight of equity we will be 1 − 0.10 = 0.90
Cost of debt rd = 3.7%; cost of equity re = 11.1%
WACC = wd·rd + we·re = 0.1 × 3.7% + 0.9 × 11.1% = 10.36%

4. @Alfas: The answer to your gross profits question is:
Gross Profits (2009) = Net Sales (2009) − Cost of Goods Sold (2009)
Cost of Goods Sold (2009) is given as 1,135,050
Net Sales (2009) = Net Sales (2008) × 1.15 = 1,400,000 × 1.15 = 1,610,000
Hence, Gross Profits (2009) = 1,610,000 − 1,135,050 = 474,950
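For readers who would rather check these numbers in code, here is a small Python sketch of the two formulas used in this session (the function and variable names are mine, not part of the original article):

import math

def wacc(debt, equity, cost_of_debt, cost_of_equity):
    # Weighted average cost of capital: each source of funds is
    # weighted by its share of total capital.
    total = debt + equity
    return (debt / total) * cost_of_debt + (equity / total) * cost_of_equity

def required_return_on_equity(risk_free, beta, market_return):
    # CAPM-style formula: risk-free rate plus beta times the risk premium.
    return risk_free + beta * (market_return - risk_free)

# Step A example: $30 of debt at 10%, $70 of equity at 20%.
print(wacc(30, 70, 0.10, 0.20))        # 0.17 -> 17%

# Comment #3 example: weights 0.10 / 0.90, costs 3.7% / 11.1%.
print(wacc(0.10, 0.90, 0.037, 0.111))  # 0.1036 -> 10.36%

Both printed values match the worked examples above (17% and 10.36%).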
{"url":"http://financetrainingcourse.com/education/2010/02/session-v-b-corporate-finance-beta-calculating-wacc-or-weighted-average-cost-of-capital/","timestamp":"2014-04-21T07:18:05Z","content_type":null,"content_length":"61251","record_id":"<urn:uuid:bc60bcdb-34ca-4ed4-aeb9-7207e613c8ec>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Biology Department

GRAPHING BASICS

What are graphs and why do we use them?

Graphs help us to display relationships between numbers (data sets) by representing them in the form of a picture, diagram, or drawing. There are many different types of graphs, each having a specific use for representing data. To generate a graph, data must be analyzed to determine labels and values as well as the appropriate scale. When done correctly, a graph will be able to show trends and patterns in information gathered from experimenting. It will also allow you to make decisions or answer a question relating to your data. Using a graph, multiple variables can be examined to understand how they relate to each other.

Types of Graphs:

While there are many different types of graphs, we will look at how to create three of the most common: bar graphs, line graphs, and circle graphs (pie charts).

BAR GRAPHS: A bar graph shows values or amounts by using vertical or horizontal bars. Bar graphs are generally used to show frequencies or to make comparisons — to show "how much of something." A bar graph has two axes: the x-axis, which is horizontal, and the y-axis, which is vertical. The x-axis is the trait used to sort. The y-axis is the actual count. The bars on your graph will be directly proportional to frequency, meaning the higher the bar, the higher the frequency of occurrence.

(Sample bar graph)

LINE GRAPHS: A line graph shows change over time or continuous data. This style of graph helps the observer make counts, compare frequency, determine highs and lows, and look for data patterns. Line graphs are great for predicting future events based on patterns of the past. To make a line graph, you need two separate scales of numbers that organize the data: one on the x-axis and one on the y-axis.

(Sample line graph)

CIRCLE OR PIE GRAPHS: The purpose of a circle or pie graph is to show how parts relate to a whole. This is done by showing percentages or fractions of the whole, in this case a circle. The entire circle represents all of the data, or 100 percent. Each part, when the circle is broken down, represents a group within the data.

(Sample circle/pie graph)

How do I make a graph?

Before you can begin a graph, make sure that you have a complete data table. Your graph will take the values from your data table and make sense of them.

STEP 1: Determine the type of graph appropriate for your observations and measurements.
Bar graphs – show relationships between variables
Line graphs – show change over time, continuous data
Circle/pie graphs – show percentages, how parts equal a whole

STEP 2: Create your graph. The easiest way to create a graph is to use a computer program that allows you to make a table and then generate a graph from the information in your table. Using a computer is helpful because you can easily make more than one type of graph by highlighting the data you wish to use. If you do not have a computer available, follow the simple steps below.

• Bar Graph:
1. Using graph paper, draw a set of axes (the x-axis being horizontal and the y-axis being vertical).
2. Give your bar graph a title that clearly states what the data is showing. (Place your title at the top of the graph.)
3. The x-axis will be your independent variable. Label the axis with the variable (example: Light Environment) and the values you used (example: Sun, Shade, etc.).
4. Label the vertical y-axis with your dependent variable
(example: Height (cm)). This axis will also need a scale that includes all of the values of your dependent variable. The scale should have increment marks that are evenly spaced and in increasing order.
5. For each independent variable, draw a bar from the minimum value on your y-axis up to the height of your collected value. Do this for all of your values.

• Line Graph:
1. Using graph paper, draw a set of axes (the x-axis being horizontal and the y-axis being vertical).
2. Give your line graph a title that clearly states what the data is showing. (Place your title at the top of the graph.)
3. The x-axis will be your independent variable. Label the axis with the variable (example: Months) and the unit of measure (example: January, February, etc.).
4. Label the vertical y-axis with your dependent variable (example: Height (cm)). This axis will also need a scale that includes all of the values of your dependent variable. The scale should have increment marks that are evenly spaced and in increasing order.
5. For each value, plot a point on your graph.
6. Once all points have been plotted, connect them with a line.

• Circle/Pie Graph:
1. Draw a circle using a compass to make sure that it is even all the way around. (Make your circle large enough to display your data clearly.)
2. Give your circle/pie graph a title that clearly states what the data is showing. (Place your title at the top of the graph.)
3. Make a small mark in the very center of your circle to show where each "slice" or wedge will begin.
4. Determine what size wedge will be needed to show each level of your independent variable. To do this, you will need to convert your data from percentages to angle degrees. Example: if 30% of spicebush is growing in the sun, the wedge would need to be 30% of a 360-degree circle. This value can be determined by multiplication (360 × 0.3 = 108).
5. Draw your wedges using a protractor. Place the protractor at the center point of the circle. Mark 0 degrees and the degree that matches your value by drawing points on the edge of the circle. Draw a line from each point to the center of the circle.
6. Label the wedge (include its percentage).
7. Begin your next wedge from the edge of the first. If done correctly, your entire circle will be filled up, your wedges will add up to 360 degrees, and your percentages will add up to 100 when you are done.

STEP 3: Review your work. Use the "TAILS" method to double-check yourself.
Title – shows the relationship between the x and y axes
Axes – x-axis on the horizontal, records the independent variable; y-axis on the vertical, records the dependent variable
Intervals – same size, in order
Labels – axes, units of measure
Scale – 50% or more of the axis is used

This material is based upon work supported by the National Science Foundation under Grant No. 0442049. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
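If you later want to make these same three graph types on a computer, a plotting library can handle the scales and labels automatically. Below is a brief sketch using Python's matplotlib; the sample data is invented for illustration and is not part of the lesson above.

import matplotlib.pyplot as plt

# Bar graph: trait on the x-axis, count on the y-axis.
plt.bar(["Sun", "Shade"], [12, 7])
plt.title("Plant Count by Light Environment")
plt.xlabel("Light Environment")
plt.ylabel("Count")
plt.show()

# Line graph: change over time.
plt.plot(["Jan", "Feb", "Mar", "Apr"], [5, 9, 14, 21])
plt.title("Height Over Time")
plt.xlabel("Month")
plt.ylabel("Height (cm)")
plt.show()

# Circle/pie graph: parts of a whole (percentages).
plt.pie([30, 70], labels=["Sun (30%)", "Shade (70%)"])
plt.title("Where Spicebush Grows")
plt.show()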
{"url":"http://muhlenberg.edu/main/academics/biology/nsf/ret/ret_graph.html","timestamp":"2014-04-20T13:55:12Z","content_type":null,"content_length":"30510","record_id":"<urn:uuid:85d82540-6dc6-4baf-9894-6deffbcd6ca7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
OmniGraphSketcher 1.2 betas featuring log scales - The Omni Group Forums

If you're not following @omnigs on twitter, you may not be aware that our work on log axes (and scientific notation) in OmniGraphSketcher has reached the beta stage and we're looking for feedback. You can grab beta 3 of OmniGraphSketcher for Mac here. Release notes are below.

OmniGraphSketcher 1.2 beta 1 - April 5, 2011

Logarithmic Axes
• Lin-Log and Log-Log: OmniGraphSketcher supports logarithmic scales on either or both axes. You can easily switch back and forth between linear and logarithmic axes using new controls in the Axes Inspector. Dragging, nudging, snapping, sketch recognition, axis manipulation, importing, exporting, scale-to-fit, etc. have all been updated to work as you'd expect.
• Configurable: Logarithmic axes support any mathematically plausible min/max values and tick spacing. (Because zero is infinitely far away in logarithmic space, the min/max values for logarithmic axes must both be more than zero, or both less than zero. OmniGraphSketcher will automatically fix any such issues when switching to a logarithmic axis and will prevent you from accidentally setting parameters that are mathematical impossibilities.)
• Double precision: To accommodate data sets that span many orders of magnitude, axes and data points now support up to 13 significant digits, and they can hold values as large as 10^300.
• Scientific notation: To accommodate these large numbers, both linear and logarithmic axes now automatically use scientific notation ("1.23 x 10^45") when values are larger than 10,000,000 or smaller than 0.001. On logarithmic scales, simplified powers of ten are used when possible ("10^9"). You can edit these values or enter new data points by using the shortened "E" notation: "1.23E45" or "6E-7".
• Line interpolation: Lines in OmniGraphSketcher are defined to connect two or more data points as smoothly as possible, regardless of the axis type. The underlying data points are mapped accurately to linear or logarithmic space, but intermediate points on the line are not comparable. To accurately map a line's shape between linear and logarithmic axes, you can now choose Arrange > Interpolate Line, which will sample along the vertical grid lines (x-values). Now when you switch to a logarithmic axis, you will see how each part of the shape adjusts to logarithmic space.

Tick Labeling
• When there is not enough room to label every tick mark, tick labels are now always evenly spaced (skipping tick marks in multiples of two, five, or ten). On logarithmic axes, tick marks representing powers of ten are most likely to be labeled.
• Where there are at least 5 tick marks between tick labels, OmniGraphSketcher now uses major/minor tick marks to distinguish between labeled ticks (long) and unlabeled ticks (short). This works even if tick labels are not visible.
• Thanks to these improvements, automatic tick spacing is now 1 in more cases, allowing more easily understandable scales.
• Tick labels (and everything else in the app that displays numeric data values) now choose an appropriate number of significant digits based on the range of each axis. For example, an axis spanning 0 to 100 will display up to four significant digits ("1.234" or "93.47"), whereas an axis spanning 842 to 843 will use up to six significant digits ("842.551"). This is just for display; all data points are stored with full precision.
• New menu option: Arrange > Add Jitter (cmd-shift-J) applies a small amount of random vertical noise to the selected data points, using a standard normal distribution. This can be useful for making data points visible that might be hidden behind each other. Apply it multiple times to increase the amount of jitter.
• Data importing and the Scale to Fit command now choose better automatic axis ranges.
• Tick label distance and axis title distance are now recorded as part of the "Make Current Styles Default" command.
• Automatic margins now take into account any arrowheads on the ends of axes.
• Lines with straight segments at sharp angles now have gentler, rounded joints.
• Grid lines now adjust to the nearest pixel so that they look sharper on-screen.
• Fixed an issue where the version number in the About OmniGraphSketcher panel was sometimes in the wrong place.
• Fixed several issues that could occasionally lead to crashes when reverting, closing, or switching between documents.
• Smaller fixes and improvements.

OmniGraphSketcher 1.2 beta 2 - May 17, 2011
• Fixed an issue where switching between logarithmic and linear axes would not update the display immediately.
• Adjusted the Axis Inspector layout.
• Improved the crash reporter.

OmniGraphSketcher 1.2 beta 3 - June 22, 2011

Logarithmic Axes
• Axis ranges up to 10^300 now work for real.
• Improved support for logarithmic axes that span more than 30 orders of magnitude. Tick labels now appear where expected (for example, at 10^30, 10^40, 10^50...) and the axis uses major and minor ticks accordingly. Intermediate tick marks no longer appear in situations where they would be too close together to be visually salient.
• Improved tick mark choices for logarithmic axes that range between approximately 1 and 50.
• Updated the tick labeling algorithm to skip intermediate labels greater than 5 (in any power of ten) when there is not enough space for all labels. (This looks cleaner and matches Matlab's behavior.)
• Major/minor tick marks now always work correctly on logarithmic axes where some tick marks are not labeled.
• Fixed an issue where tick marks were inconsistent near the ends of logarithmic axes whose min or max were not powers of ten.

Scientific Notation
• You can now override the automatic scientific notation settings with new menu options: View > Scientific Notation.
• Logarithmic axes now automatically use scientific notation when the max is greater than 100,000 (unlike linear axes, which wait until 10,000,000).

• Fixed an issue with automatic window resizing when running the application on Mac OS X 10.7 Lion.
• Fixed a regression where strings such as "5k" were interpreted as "5" instead of as a custom label.
• The custom tick spacing field now behaves as expected when you tab through it without changing its value.
• Shift-clicking two tick labels on a logarithmic axis now selects all of the labels in-between.
• Smaller fixes and improvements.
{"url":"http://forums.omnigroup.com/showthread.php?p=100633","timestamp":"2014-04-16T04:13:10Z","content_type":null,"content_length":"40360","record_id":"<urn:uuid:89e96856-e564-4348-b251-0aa55b728951>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Implicit Function Theorem

October 14th 2010, 06:50 PM — #1
Hey everyone, I have a question on the proof of the Implicit Function Theorem. My book says, "Suppose that z is given implicitly as a function z = f(x,y) by an equation of the form F(x,y,z) = 0. This means that F(x,y,f(x,y)) = 0 for all (x,y) in the domain of f. If F and f are differentiable, then we can use the Chain Rule to differentiate the equation F(x,y,z) = 0 as follows:

$\frac{\partial F}{\partial x}\frac{\partial x}{\partial x} + \frac{\partial F}{\partial y}\frac{\partial y}{\partial x} + \frac{\partial F}{\partial z}\frac{\partial z}{\partial x}=0$

But $\frac{\partial}{\partial x}(x) = 1$ and $\frac{\partial}{\partial x}(y) = 0$, so the equation becomes

$\frac{\partial F}{\partial x} + \frac{\partial F}{\partial z}\frac{\partial z}{\partial x}=0$"

My question is, how does $\frac{\partial}{\partial x}(y) = 0$? It doesn't make sense. Is it just saying to let/assume $\frac{\partial}{\partial x}(y) = 0$? Confused. Thanks in advance!

October 14th 2010, 07:24 PM — #2
Hey, look at the conditions set at the beginning. y has no x terms in it (it is not a function of x), so if you differentiate it with respect to x, it is regarded as a constant and goes to 0. The same does not happen for z, because z is a function of x (z = f(x,y)).

October 14th 2010, 07:25 PM — #3
It is implicitly assumed (pun intended) that x and y are independent variables, so both $\frac{\partial y}{\partial x}=\frac{\partial x}{\partial y}=0$.

October 15th 2010, 06:06 AM — #4
That kind of makes sense, but can't you solve F(x,y,f(x,y)) for y? Let's do an example. Let's say F(x,y,f(x,y)) = 3x + y + z = 0, or something.
Now if you take $\frac{dx}{dx}$ you would get:
$x = \frac{-y-z}{3}$
$x = 0?$

October 15th 2010, 03:35 PM — #5
Sorry, I should have made it clearer. I think a main point of confusion here is that we're dealing with partial derivatives (i.e. dy/dx is a partial derivative, and so is dz/dx).

What does it mean that z is defined implicitly as a function of x and y? For your equation, it means z must be a function of x and y that satisfies 3x + y + z = 0. So z = -(3x + y). That is the explicit equation of your F(x,y,f(x,y)) = 0.

Now note that the derivative $\frac{dz}{dx} = -3$.

So if we go with what you suggested, that we solve F(x,y,f(x,y)) = 0 for y: y = -3x - z. Differentiating with respect to x gives you

$\frac{dy}{dx} = -3 - \frac{d}{dx}z = -3 - (-3) = 0$

Do you see now why $\frac{dy}{dx}$ will always be zero?

October 15th 2010, 04:38 PM — #6
:O! Yes it does! This seems a bit recursive though. We're basically saying:
y = -3x - z, where z = -(3x+y),
and by direct substitution we get y = -3x + (3x+y) = y.
The partial derivative of y = y with respect to x is 0. In that case, wouldn't dx/dx equal 0 as well?

October 15th 2010, 05:24 PM — #7
It is a bit circular, but you would expect it to be. After all, F(x,y,f(x,y)) = 3x + y + f(x,y) = 0 describes the exact same thing as f(x,y) = -3x - y, so interchanging the two would seem recursive.

As for dx/dx: dx/dx always equals one. Let's try this again on your example. From 3x = -y - z,

$3\frac{d}{dx} x = 0 - \frac{d}{dx}z = -(-3)$

$\frac{dx}{dx} = 1$

To see what I mean, I'll give a worded illustration. It is easy to get caught up in just the maths and forget why we study it in the first place. Say we have a container filled with two miscible liquids (two liquids which mix evenly). Suppose z is the amount of liquid flowing out of the container, x is the amount of liquid A flowing out, and y is the amount of liquid B flowing out. Now dy/dz means: for every amount dz of total liquid flowing out, a dy amount of liquid B flows out. Similarly, dx/dz is the proportion of liquid A compared to the combined liquid. What does it mean to have dz/dz, dx/dx or dy/dy? For every dz amount of total liquid flowing out, you'll have a dz amount of liquid flowing out. The proportion of total liquid out to total liquid out is 1.
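For reference, the clean general statement the thread is circling around (a standard textbook identity, stated here for completeness):

$$\frac{\partial z}{\partial x} = -\frac{F_x}{F_z}, \qquad \frac{\partial z}{\partial y} = -\frac{F_y}{F_z} \qquad (F_z \neq 0).$$

Applied to the example $F(x,y,z) = 3x + y + z$: $\partial z/\partial x = -3/1 = -3$, matching the explicit solution $z = -(3x + y)$.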
{"url":"http://mathhelpforum.com/calculus/159679-implicit-function-theorem.html","timestamp":"2014-04-20T20:09:10Z","content_type":null,"content_length":"58587","record_id":"<urn:uuid:7f5f722b-b37e-479d-80fa-1a216e00d925>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Cost Minimization

We can conceptually divide the profit maximization problem into two sub-problems:
• What is the lowest cost combination of inputs that will produce a level of output equal to y?
• What is the output level y that will maximize profits?

Cost minimization is a necessary (but not sufficient) condition for profit maximization. Even when a producer is not a price taker in the output market, or when the solution to the profit maximization problem is not well defined (say, due to increasing returns), the producer must still minimize costs.

Setup:
• The production function is y = f(x[1],x[2]). The production function is assumed to be quasi-concave.
• The producer is a price taker in the input market.

The producer chooses x[1] and x[2] (the choice variables) to minimize C(x[1],x[2]) (the objective function):

minimize C(x[1],x[2]) = w[1] x[1] + w[2] x[2]
subject to the constraint f(x[1],x[2]) = y

The parameters of this problem are w[1], w[2], and y.

Form the Lagrangian function:

L = w[1] x[1] + w[2] x[2] + λ ( y - f(x[1],x[2]) )

The variable λ is called the Lagrange multiplier for the constraint. The FOC for this minimization problem are:

∂L/∂x[1] = w[1] - λ f[1](x[1],x[2]) = 0
∂L/∂x[2] = w[2] - λ f[2](x[1],x[2]) = 0
∂L/∂λ = y - f(x[1],x[2]) = 0

A sufficient SOC for minimization is that all the border-preserving principal minor determinants of the matrix

        [ -λ f[11]   -λ f[12]   -f[1] ]
H =     [ -λ f[21]   -λ f[22]   -f[2] ]
        [ -f[1]      -f[2]       0    ]

are negative.

Interpretation: The FOC imply w[1]/w[2] = f[1]/f[2]. That is, the ratio of factor prices is equal to the marginal rate of technical substitution. The assumption of quasi-concavity guarantees that the SOC is satisfied. Alternatively, if we abandon the assumption of quasi-concavity, the SOC implies that the production function is locally quasi-concave at the optimal input combination. Here, the quasi-concavity condition is equivalent to the condition of diminishing MRTS. The SOC does not say anything about returns to scale. Even if we have an increasing returns to scale production function (e.g., y = x[1] x[2]), the cost-minimization problem still has a well defined solution.

The FOC can be considered as three equations in three unknowns (x[1], x[2], λ). Given any parameter values for w[1], w[2], and y, we can solve these equations for the unknowns. Of course, the solution values depend on the values of the parameters. If x[1]^~, x[2]^~, and λ^~ are the solutions to the cost minimization problem, we emphasize the dependence of these optimal choice functions on the parameters by writing x[1]^~ = x[1]^~(w[1],w[2],y) and x[2]^~ = x[2]^~(w[1],w[2],y). These functions are different from the factor demand functions derived from the profit maximization problem. We call them cost-minimizing factor demand functions or conditional factor demand functions.

By definition, if you substitute the optimal choice functions into the FOC equations, the equations are always satisfied. We therefore write:

w[1] - λ^~(w[1],w[2],y) f[1]( x[1]^~(w[1],w[2],y), x[2]^~(w[1],w[2],y) ) ≡ 0
w[2] - λ^~(w[1],w[2],y) f[2]( x[1]^~(w[1],w[2],y), x[2]^~(w[1],w[2],y) ) ≡ 0
y - f( x[1]^~(w[1],w[2],y), x[2]^~(w[1],w[2],y) ) ≡ 0

Consider how w[1] affects factor demand.
Using the identities above, differentiating with respect to w[1], and employing the chain rule of differentiation, we get:

1 - λ f[11] (∂x[1]^~/∂w[1]) - λ f[12] (∂x[2]^~/∂w[1]) - f[1] (∂λ^~/∂w[1]) = 0
0 - λ f[21] (∂x[1]^~/∂w[1]) - λ f[22] (∂x[2]^~/∂w[1]) - f[2] (∂λ^~/∂w[1]) = 0
0 - f[1] (∂x[1]^~/∂w[1]) - f[2] (∂x[2]^~/∂w[1]) - 0 (∂λ^~/∂w[1]) = 0

In matrix notation:

[ -λ f[11]   -λ f[12]   -f[1] ] [ ∂x[1]^~/∂w[1] ]   [ -1 ]
[ -λ f[21]   -λ f[22]   -f[2] ] [ ∂x[2]^~/∂w[1] ] = [  0 ]
[ -f[1]      -f[2]       0    ] [ ∂λ^~/∂w[1]    ]   [  0 ]

Use Cramer's rule to get:

                | -1   -λ f[12]   -f[1] |
∂x[1]^~/∂w[1] = |  0   -λ f[22]   -f[2] |  /  | H |
                |  0   -f[2]       0    |

                       | -λ f[22]   -f[2] |
              = (-1) × |                  |  /  | H |
                       | -f[2]       0    |

The determinant in the numerator is a border-preserving principal minor determinant, so by the SOC it is negative (verify!). Similarly, the SOC also requires that | H | < 0. Therefore ∂x[1]^~/∂w[1] < 0 (downward sloping conditional factor demand curve).

We can also solve for ∂x[2]^~/∂w[1]:

                | -λ f[11]   -1   -f[1] |
∂x[2]^~/∂w[1] = | -λ f[21]    0   -f[2] |  /  | H |
                | -f[1]       0    0    |

                    | -λ f[21]   -f[2] |
              = 1 × |                  |  /  | H |
                    | -f[1]       0    |

The determinant in the numerator is not a border-preserving principal minor determinant, so in general its sign is undetermined. But for the two-input case, this determinant is negative, so ∂x[2]^~/∂w[1] > 0. (Why? Theory implies ∂x[1]^~/∂w[1] < 0. So when w[1] increases, we use less x[1], but we still want to produce the same amount of output y. This is achieved by increasing the use of x[2].)

If you are curious, you can do a similar comparative statics analysis for w[2]. You will then verify that

∂x[1]^~/∂w[2] = ∂x[2]^~/∂w[1]

Let's also try the comparative statics for y. Differentiate the FOC with respect to y and write in matrix notation:

[ -λ f[11]   -λ f[12]   -f[1] ] [ ∂x[1]^~/∂y ]   [  0 ]
[ -λ f[21]   -λ f[22]   -f[2] ] [ ∂x[2]^~/∂y ] = [  0 ]
[ -f[1]      -f[2]       0    ] [ ∂λ^~/∂y    ]   [ -1 ]

             |  0   -λ f[12]   -f[1] |
∂x[1]^~/∂y = |  0   -λ f[22]   -f[2] |  /  | H |
             | -1   -f[2]       0    |

                    | -λ f[12]   -f[1] |
           = (-1) × |                  |  /  | H |
                    | -λ f[22]   -f[2] |

The determinant in the numerator is not a border-preserving principal minor determinant, so this derivative cannot be signed. If ∂x[1]^~/∂y > 0, then x[1] is a "normal factor." If ∂x[1]^~/∂y < 0, then we call it an "inferior factor."

We have not derived any comparative statics for the λ^~(w[1],w[2],y) function. But if we do, we will see that all the partial derivatives have ambiguous signs.

Finally, look at the FOC equations again and consider the effect of changing all input prices from (w[1], w[2]) to (tw[1], tw[2]) while keeping the parameter y unchanged. The FOC become:

tw[1] - λ f[1](x[1],x[2]) = 0
tw[2] - λ f[2](x[1],x[2]) = 0
y - f(x[1],x[2]) = 0

If ( x[1]^~, x[2]^~, λ^~ ) solve the original FOC equations, then ( x[1]^~, x[2]^~, tλ^~ ) must solve the new set of equations. We therefore conclude that

x[1]^~(tw[1], tw[2], y) ≡ x[1]^~(w[1], w[2], y)
x[2]^~(tw[1], tw[2], y) ≡ x[2]^~(w[1], w[2], y)

That is, the conditional input demand functions are homogeneous of degree 0 in (w[1], w[2]) (but not in y). (Since λ^~(tw[1], tw[2], y) ≡ t λ^~(w[1], w[2], y), we also see that the λ^~() function is homogeneous of degree 1.)

SUMMARY: Properties of conditional input demand functions
• x[i]^~(w[1], ..., w[n], y) is homogeneous of degree 0 in the input prices.
• ∂x[i]^~/∂w[i] < 0 (downward sloping conditional factor demand curve)
• ∂x[i]^~/∂w[j] = ∂x[j]^~/∂w[i] (symmetry)

Relationship between cost minimization and profit maximization

If we compare the FOCs for the profit-maximization problem with the FOCs for the cost minimization problem, we can see that they will give the same solution values for x[1] and x[2] if the value of the Lagrange multiplier is λ = p. From introductory microeconomics, we know that a condition for profit maximization is Marginal Cost = p. This is a hint that the Lagrange multiplier can be interpreted as Marginal Cost.

The FOC imply λ = w[1]/f[1] = w[2]/f[2]. What is the marginal cost of producing one more unit of output? Well, we can produce more output by using more x[1]. If the marginal product of x[1] is f[1], we need 1/f[1] units of x[1] to produce one more unit of output. Each unit of x[1] costs w[1], so the cost of 1/f[1] units of x[1] is w[1]/f[1]. This is another hint that λ can be interpreted as Marginal Cost. More on this later.

If you want to produce y = y*(p,w[1],w[2]) units of output, the cost-minimizing input bundle must be the same as the profit-maximizing input bundle. So we must have

x[1]^~( w[1], w[2], y*(p,w[1],w[2]) ) ≡ x[1]*(p,w[1],w[2])

Differentiate the identity with respect to, say, w[1], to get

∂x[1]^~/∂w[1] + (∂x[1]^~/∂y)(∂y*/∂w[1]) = ∂x[1]*/∂w[1]

To find out what ∂x[1]^~/∂y is, we differentiate the identity with respect to p to get

(∂x[1]^~/∂y)(∂y*/∂p) = ∂x[1]*/∂p

So ∂x[1]^~/∂y = (∂x[1]*/∂p) / (∂y*/∂p). Substitute this expression back in:

∂x[1]^~/∂w[1] - ∂x[1]*/∂w[1] = - (∂x[1]*/∂p)(∂y*/∂w[1]) / (∂y*/∂p)

Since the symmetry condition ensures that ∂x[1]*/∂p = -∂y*/∂w[1], the numerator becomes a square and must be positive. The denominator is also known to be positive, so the whole term is positive. I.e., ∂x[1]^~/∂w[1] > ∂x[1]*/∂w[1]. But because demand curves are negatively sloped, this means that the derivative of the profit-maximizing demand function is larger in absolute value than the derivative of the cost-minimizing demand function.

Consider the effect of an increase in w[1] on the profit-maximizing choice of x[1]:

∂x[1]*/∂w[1] = ∂x[1]^~/∂w[1] + (∂x[1]^~/∂y)(∂y*/∂w[1])
               \______________/  \______________________/
                 substitution          scale effect
                    effect

The substitution effect is always negative. The scale effect is also always negative (for "normal factors," ∂x[1]^~/∂y > 0 and ∂y*/∂w[1] < 0; for "inferior factors," ∂x[1]^~/∂y < 0 and ∂y*/∂w[1] > 0). The two effects reinforce each other.
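As a sanity check on these sign predictions, the conditional demands can be derived explicitly for a concrete technology. Here is a short SymPy sketch (my own illustration, not part of the notes) using the production function y = (x[1] x[2])^(1/2):

import sympy as sp

w1, w2, y, x1 = sp.symbols("w1 w2 y x1", positive=True)

# For y = sqrt(x1*x2) the constraint pins down x2 = y**2/x1,
# so cost can be minimized over x1 alone.
cost = w1*x1 + w2*y**2/x1

# FOC: d(cost)/dx1 = 0. The symbols are declared positive, so
# solve keeps only the positive root: the conditional demand.
x1_tilde = sp.solve(sp.diff(cost, x1), x1)[0]

print(x1_tilde)                            # y*sqrt(w2/w1)
print(sp.simplify(sp.diff(x1_tilde, w1)))  # negative: downward-sloping demand
print(sp.simplify(sp.diff(x1_tilde, y)))   # positive: x1 is a "normal factor"

Repeating the same check with y = x[1] x[2] (increasing returns) still yields a well defined conditional demand, echoing the earlier point that the cost-minimization problem is well posed even when the profit-maximization problem is not.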
{"url":"http://www.econ.hku.hk/~wsuen/teaching/micro/costmin.html","timestamp":"2014-04-20T08:15:19Z","content_type":null,"content_length":"20891","record_id":"<urn:uuid:8d55ec44-4740-4996-906c-42891791797c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
LinuxCommand.org: Tips, News And Rants

A few weeks ago, I was cruising the Ubuntu forums and came across a question from a poster who wanted to find the average of a series of floating-point numbers. The numbers were extracted from some other command and were output in a column. He wanted a command line incantation that would take the column of numbers and return the average. Several people answered this query with clever one-line solutions; however, I thought that this problem would be a good task for a script. Using a script, one could have a solution that was a little more robust and general purpose. I wrote the following script, presented here with line numbers:

 1  #!/bin/bash
 2
 3  # average - calculate the average of a series of numbers
 4
 5  # handle cmd line option
 6  if [[ $1 ]]; then
 7          case $1 in
 8                  -s|--scale)     scale=$2 ;;
 9                  *)              echo "usage: average [-s scale]" >&2
10                                  exit 1 ;;
11          esac
12  fi
13
14  # construct instruction stream for bc
15  c=0
16  { echo "t = 0; scale = 2"
17    [[ $scale ]] && echo "scale = $scale"
18    while read value; do
19
20          # only process valid numbers
21          if [[ $value =~ ^[-+]?[0-9]*\.?[0-9]+$ ]]; then
22                  echo "t += $value"
23                  ((++c))
24          fi
25    done
26
27    # make sure we don't divide by zero
28    ((c)) && echo "t / $c"
29  } | bc

This script takes a series of numbers from standard input and prints the result. It is invoked as follows:

average -s scale < file_of_numbers

where scale is an integer containing the desired number of decimal places in the result and file_of_numbers is a file containing the series of numbers we desire to average. If scale is not specified, then the default value of 2 is used.

To demonstrate the script, we will calculate the average size of the programs in the /usr/bin directory:

me@linuxbox:~$ stat --format "%s" /usr/bin/* | average
81766.66

The basic idea behind this script is that it uses the arbitrary precision calculator program bc to figure out the average. We need to use something like bc, because arithmetic expansion in the shell can only handle integer math. To perform our calculation, we need to construct a series of instructions and pipe them into bc. This task comprises the bulk of our script.

In order to do something that complicated, we employ a shell feature known as a group command. Starting with line 16 and ending with line 29, we capture all of the standard output and consolidate it into a single stream. That is, all of the standard output produced by the commands on lines 16-29 is treated as though it came from a single command and is piped into bc on line 29. We'll look at our group command piece by piece.

As you know, an average is calculated by adding up a series of numbers and dividing the sum by the number of entries. In our case, the number of entries is stored in the shell variable c and the sum is stored (within bc) in the variable t. We start our group command (line 16) by passing some initial values to bc. We set the initial value of the total t to zero and the value of scale to our default value of two (the default scale of bc is zero). On line 17, we evaluate the scale variable to see if the -s command line option was used and, if so, pass the new value on to bc.

Next, we start a loop that reads entries from our standard input. Each iteration of the loop causes the next entry in the series to be assigned to the variable value.

Lines 20-24 are interesting. Here we test to see if the string contained in value is actually a valid floating point number. To do this, we employ a regular expression that will only match if the number is properly formatted.
The regular expression says that, to match, value may start with a plus or minus sign, followed by zero or more numerals, followed by an optional decimal point, and ending with one or more numerals. If value passes this test, an instruction telling bc to add it to the total is inserted into the stream (line 22) and we increment c (line 23); otherwise, value is ignored.

After all of the numbers have been read from standard input, it's time to perform the calculation. First, we test to see that we actually processed some numbers. If we did not, then c would equal zero and the resulting calculation would cause a "division by zero" error, so we test the value of c and only if it is not equal to zero do we insert the final instruction for bc.

This script would make a good starting point for a series of statistical programs. The most significant design weakness of the script as written is that it fails to check that the value supplied to the scale option is really an integer. That's an improvement I will leave to my faithful readers...

Further Reading

The following man pages:
• bc
• bash (the "Compound Commands" section covers group commands and the [[ ]] and (( )) compound commands)

The Linux Command Line:
• Chapter 20 (regular expressions)
• Chapter 28 (if command, [[ ]] and (( )) compound commands and && and || control operators)
• Chapter 29 (the read command)
• Chapter 30 (while loops)
• Chapter 35 (arithmetic expressions and expansion, bc program)
• Chapter 33 (positional parameters)
• Chapter 37 (group commands)
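For readers taking up that challenge, here is one possible fix (my sketch, not from the original post): validate the option argument with another regular expression before using it.

# handle cmd line option (now with integer validation for scale)
if [[ $1 ]]; then
        case $1 in
                -s|--scale)
                        if [[ $2 =~ ^[0-9]+$ ]]; then
                                scale=$2
                        else
                                echo "average: scale must be an integer" >&2
                                exit 1
                        fi
                        ;;
                *)
                        echo "usage: average [-s scale]" >&2
                        exit 1
                        ;;
        esac
fi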
{"url":"http://lcorg.blogspot.com/2010/04/script-average.html","timestamp":"2014-04-19T14:33:29Z","content_type":null,"content_length":"81116","record_id":"<urn:uuid:53497649-64be-491e-9992-618c344f47fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Giant Snowball

Hint: Find velocity as a function of height using conservation of energy. Use what you know about centripetal forces to find the answer.

My work so far is:

v² = (2mgR - mgR·cos(angle)) / m

I got this using conservation of energy. Is this what you mean by finding velocity as a function of height, OlderDan? The centripetal acceleration should be 0 when the skier loses contact, but when I put a = 0 I get 1 = cos(angle). What am I doing wrong here?
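The thread breaks off there, so here is a sketch of the standard resolution (assuming the usual setup: the skier starts from rest at the very top of a sphere of radius R):

Conservation of energy measured from the top gives
$$\tfrac12 mv^2 = mgR(1-\cos\theta) \quad\Longrightarrow\quad v^2 = 2gR(1-\cos\theta),$$
which differs from the poster's $v^2 = (2mgR - mgR\cos\theta)/m$. And it is the normal force N, not the centripetal acceleration, that vanishes at the moment of leaving the surface; at that point gravity alone supplies the centripetal force:
$$mg\cos\theta = \frac{mv^2}{R} = 2mg(1-\cos\theta) \quad\Longrightarrow\quad \cos\theta = \tfrac23.$$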
{"url":"http://www.physicsforums.com/showthread.php?t=80176","timestamp":"2014-04-17T18:28:10Z","content_type":null,"content_length":"42608","record_id":"<urn:uuid:bce5ac3d-bfd5-4278-93b8-a06bef6f2e3f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
The potential energy between two bound quarks grows linearly with the distance between the quarks:

V(x) = kx

where V(x) is the potential energy, x is the distance between the quarks, and k is a constant. The force between the quarks is proportional to the derivative (rate of change) of the potential:

F ∝ dV/dx = k

Therefore, the strong force between two quarks stays constant as the quarks are pulled apart.
{"url":"http://www.learner.org/courses/physics/unit/math_includes/unit4/math_2.html","timestamp":"2014-04-18T13:43:25Z","content_type":null,"content_length":"3087","record_id":"<urn:uuid:d032b974-1d3b-402e-8a4f-d67f158a6264>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Python Programming

Operators work with one or more objects and can perform tasks such as math, comparison, and inspection. There are standard operators for arithmetic:

Math Operators

Operator    Description
+           Addition
-           Subtraction
*           Multiplication
/           Division
//          Integer Division
%           Modulus Division
**          Exponent

Additionally, you can modify a named value and assign the output of an operator to the name in one line with inline assignment operators:

>>> a_number = 1
>>> a_number += 1
>>> a_number
2
>>> a_number *= 8
>>> a_number
16
>>> a_number **= 2
>>> a_number
256
>>> a_number /= 2
>>> a_number
128
>>> a_number %= 3
>>> a_number
2
>>> 3.123 // a_number
1.0

Exercises

1. Run Python commands to add, subtract, multiply and divide numbers from the Python interpreter command line.
2. Create a simple Python script program to perform numeric calculations and output the results (e.g. calculating the area of a rectangle or circle).

Further Reading
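Returning to exercise 2, one possible script (a sketch in the same Python 2 style used in the comments below) might look like this:

# area.py - compute the area of a rectangle and a circle
import math

length = float(raw_input('Rectangle length: '))
width = float(raw_input('Rectangle width: '))
print 'Rectangle area:', length * width

radius = float(raw_input('Circle radius: '))
print 'Circle area:', math.pi * radius ** 2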
The big key for me was seeing the difference of calling the function: which shoves the two values into the function and gives variable area work inside as a local var. The second half of my slow realization came when I saw that the value inside not only needed a return from inside, but that the call of the function needed to be assigned to the global area variable. area = 0 # initializes the global variable def area_rectangle(length,width): # vars pulled into the function area = length * width print "The area of the rectangle is: ", area # local "inside" var return area # The next line sort of works, getting the values into the function. # area_rectangle(5,7) # but it doesn't work to capture the return attempt # The big realization was that the inside return needed to be assigned # to the outside (global) variable, EVEN IF THEY WERE THE SAME VAR NAME area = area_rectangle(5,7) # this assignment captures the inside local # via the return to the outside, global area var # The inside area isn't the same as the outside one even though they # share the same human-readable name. print "The area (global) is: ", area # Over and over this stayed zero, until I saw the need to assign the # return value of the inside (local) to the outside (global) # during the function call. A more readable version is here: https://gist.github.com/algotruneman/5868694 Thank you, canuckfan34. Your example helped me significantly advance my understanding of Python. on June 26, 2013, 11:42 a.m. in reply to canuckfan34 Iftekhar Mohammad said: The code is: Dennis Daniels said: I did mine in Ipython3. The code is here. I randomized the values to make it a little more interesting. Wouter Tebbens said: #Use the theorem of pythagoras about rectangular triangles: a = raw_input ('size of side a: ') b = raw_input ('size of side b: ') print "c = ", round((float(a)**2+float(b)**2)**0.5,2) #calculate the surface of a circle d = raw_input('The diameter is: ') print "The circle's surface is: ", round(3.1415*float(d)**2/4,4) #calculate the surface of a circle import math d = raw_input('The diameter is: ') print "The circle's surface is: ", round(math.pi*float(d)**2/4,4) #calculate the surface of a rectangle length = raw_input('length: ') width = raw_input('width: ') print "The surface of the rectangle is: ", float(length)*float(width) rinckemi said: FInding the circumference of a circle: print "Let's find the circumference of a circle!\n"; r_prompt='First, what is the radius of your circle?\n'; print 'So, if the radius of the circle is',radius,'that means the diameter of this circle is',diameter; print 'Which means that the circumference is',circumference drediamond said: How to calculate the area of a rectangle: j= 'What is the base of the rectangle?' k= raw_input (j) l='What is the side of your rectangle?' m=raw_input (l) print int(m)*int(k) 0quesevayantodos0 said: a = 'What is the radius of your circle? ' b = raw_input(a) pi = 3.14159265359 area = float(pi)*float(b)**2 print 'The area of your circle is ', area Youngestprof said: #Below is my practice import math radius = raw_input('Radius: ') radius = float(radius) area = 2 * math.pi * radius print 'Area: ',area Youngestprof said: Anyone pls help. How do i compile and run my program without using the command line that just gives results line by line? Tyler Cipriani said: You just need to save your script in a file with the extension .py then run the file using python on Aug. 28, 2012, 3:42 p.m. 
in reply to Youngestprof

Ariel said: hi, you can write a .py file in a text editor, and then you can run the file with a double click.
on Aug. 28, 2012, 3:55 p.m. in reply to Youngestprof

drediamond said: I use TextWrangler as a free editor.
on Nov. 18, 2012, 4:58 p.m. in reply to Youngestprof

saravanan said:

    # Calculate the Area of a circle
    import math
    print('To Calculate the Area of a Circle')
    radius = float(input('Enter the Radius of a circle:'))
    area = math.pi * radius ** 2
    print('The Area of a Circle is:', format(area, '.2f'))

Sample run:

    To Calculate the Area of a Circle
    Enter the Radius of a circle:5
    The Area of a Circle is: 78.54

Rob said: script for calculating volume of sphere can be found here:

smk said: Calculate the area of a circle (with pi as 3.14159265359):

    # Calculate the area of a circle
    pi = 3.14159265359
    radius = float(raw_input("Enter the radius of the circle: "))
    print "The area is: ", pi * radius ** 2

Inkbug said: It would be nice to have both Python 2 and 3 links.

Inkbug said: Simple exponent calculator (Python 3):

    x = input("Base: ")
    y = input("Power: ")
    print(float(x) ** float(y))

ionut.vlasin said: Ex1 and 2 from the book and computation of rectangle area:

Andres Kwan said: Calculate volume and surface of a cone.

motorjunky2000 said: Here is my simple program to calculate area and volume of a cylinder:

Carl Burkart said: Here is my program to calculate the circumference and the area of a circle given the radius. I used 22/7 for pi because I wasn't sure how to express it in python. Corrections welcome.

pannix said: You need to import the math module and then you can use math.pi. See http://pastebin.com/EFhE4Vsa
on May 9, 2012, 11:55 a.m. in reply to Carl Burkart
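Pulling the thread's advice together, here is one complete take on exercise 2, in the same Python 2 style used in the examples above; the prompts and variable names are just illustrative:

    # Exercise 2: area of a rectangle and of a circle (Python 2 style,
    # matching the raw_input/print examples in this thread)
    import math

    height = float(raw_input('Height of the rectangle: '))  # raw_input returns
    width = float(raw_input('Width of the rectangle: '))    # a string: convert!
    print "The area of the rectangle is:", height * width

    radius = float(raw_input('Radius of the circle: '))
    print "The area of the circle is:", math.pi * radius ** 2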
{"url":"https://p2pu.org/en/groups/python-programming-101/content/operators/?pagination_page_number=1","timestamp":"2014-04-18T19:17:03Z","content_type":null,"content_length":"59765","record_id":"<urn:uuid:48f95d35-7696-4fb7-af11-6645d1fca859>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Contents: Introduction; Mathematical Theory; Results and Discussion (Influence of Electrostatic Excitation on the Force Amplification; Size-Dependent Amplification Factor in the Electrostatically Excited Microcantilever Sensor); Conclusions; References; Figures and Tables

Introduction

Micro- and nano-sensors, especially microcantilever sensors, have attracted considerable interest for the recognition of target analytes in biological, chemical, and force sensing because they are fast, easy to use, and inexpensive [1-3]. Despite the promising characteristics of the microcantilever sensor, the achievable detection limit is a barrier in some applications. For example, microcantilever-based electronic noses have difficulty detecting down to the parts-per-trillion (10^-12) level, even under highly optimized conditions, whereas the canine nose can work down to parts-per-quadrillion (ppq) levels. Consequently, trained dogs are currently the "gold standard" method for analyte detection [4]. As another example, in some cases surface-stress microcantilever sensors cannot be used to measure low concentrations of biomolecular species [5,6]. These examples show some of the challenges in developing applications of microcantilever sensors. To increase the sensitivity of microcantilever sensors, and therefore to overcome many of these challenges, a number of methods have been developed [7,8]. They can be categorized into: (1) geometric optimization of sensors [9-20]; (2) improvements to the materials used in the fabrication of sensors [21-26]; (3) use of more precise methods to detect microcantilever bending [27-29]; and (4) improvements to the biological binding in order to increase the exerted biological force [30-32]. These categories do not include improvements in readout circuit systems.

Several groups have published reports on the best microcantilever shape for achieving maximum sensitivity. Louia and coworkers designed, fabricated, and tested five piezoresistive cantilever configurations to investigate the effect of shape and piezoresistor placement on the sensitivity of microcantilevers [11]. Sukuabol et al. [12] used various cantilever shapes and found that the long-base U-shape and inverse-T-shape provide optimum geometries for SU-8 microcantilever sensitivity. Decreasing the thickness of the microcantilevers is another common strategy for increasing their sensitivity [13]. Using finite element analysis, Chivukula et al. [14] showed that optimizing the device dimensions is useful, to a great extent, in increasing the sensitivity of the device. Another traditional shape-optimization method for enhancing piezoresistive detection sensitivity is based on stress concentration regions (SCRs), which have been studied by many groups [15-18]. Yang et al. [19] designed and fabricated a quad-cantilever sensor with a four-cantilever half-sensitive Wheatstone bridge for improving trace chemical sensing performance. In [20] a double-microcantilever design was developed to overcome the thermal stress effect. The double microcantilever is composed of a top immobilized microcantilever and a bottom sensing microcantilever; together these two microcantilevers could increase the sensitivity by more than two orders of magnitude and minimize the induced thermal effects. Conventionally, microcantilever sensors are fabricated on a silicon substrate [21]. Recently, polymeric microcantilevers have been developed with a much lower Young's modulus than conventional Si microcantilevers [22,23], which can improve the sensitivity of the sensor.
In addition, SiO2-based microcantilevers are good candidates for higher sensitivity because they are made of materials with a lower Young's modulus (57-70 GPa) than that of Si (170 GPa). For example, Li et al. [25,26] showed that piezoresistive microcantilevers made of silicon dioxide are more sensitive than silicon-based microcantilevers. The embedded piezoresistor is made of single-crystal silicon and is fully insulated from the surrounding environment by SiO2, resulting in lower electric noise.

The current detection methods in microcantilever biosensors include piezoelectric or piezoresistive detectors for tension sensing and optical or capacitive detectors for displacement measurement. Displacement detectors usually have a higher sensitivity and can respond to very weak input signals. However, the limitation of working in liquid media, which is essential for biological sensors, is the main drawback of displacement detectors. To address this problem, metal-oxide-semiconductor field-effect transistors (MOSFETs) have been used by Shekhawat and coworkers to achieve a higher sensitivity in microcantilever biosensors [27].

A successful method for increasing the biological force has been implemented in the force-amplified biological sensor under development at the Naval Research Laboratory [32]. This instrument uses forces produced by micron-sized labeled magnetic particles on biological receptors to pull on biomolecules; the external magnetic field then produces piconewton-level forces with sufficient sensitivity to be detected by piezoresistive microcantilevers. Unfortunately, the cost, size, and mechanical complexity of such labeled sensors often preclude their use [32].

Conventional microcantilever sensors work in a linear mode of operation, but recently the nonlinear operation of sensors, especially in resonator-based microdevices [33], has received considerable attention. The geometrically nonlinear deformation of beams can be used to improve the signal-to-noise ratio and robustness of sensors, as in a mass sensor based on parametric resonance [34] and parametric amplification in a microelectromechanical system (MEMS) gyroscope [35].

In this paper a novel microcantilever sensor with electrostatic excitation is proposed that is more sensitive than traditional rectangular microcantilevers. The basic idea comes from the nonlinear electrostatic force

    F_e = \frac{\epsilon_0 b V^2}{2 (g - w)^2}    (1)

where \epsilon_0 = 8.854 x 10^{-12} F/m is the permittivity of vacuum, V is the applied voltage, and g is the initial gap between the movable electrode and the ground electrode. In Equation (1) the electrostatic force is inversely related to the (squared) distance between the two electrode surfaces. Therefore, if a load on the microcantilever of width b reduces the distance between the two electrode surfaces, the electrostatic force increases and hence the displacement of the microcantilever, w, continuously increases. Based on this phenomenon, the electrostatic force can amplify other sources of load, so that very low forces or surface stresses become observable. The proposed microcantilever sensor, which is similar to a microswitch, could be fabricated by most micromachining processes. An advantage of this sensor over the plain microcantilever is that this approach can amplify the input load without the need for labeling. In addition, many other methods for increasing the sensitivity of microcantilever sensors can be incorporated into the proposed method simultaneously.
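To make the amplification intuition behind Equation (1) concrete, here is a small numeric sketch. It is not from the paper; the dimensions and voltage are arbitrary illustrative values. It simply shows how the electrostatic force grows as the deflection w closes the gap g:

    # Illustrative only: arbitrary dimensions, not the device of the paper.
    eps0 = 8.854e-12            # vacuum permittivity, F/m
    b, g, V = 50e-6, 2e-6, 5.0  # width (m), initial gap (m), voltage (V)

    def F_e(w):
        """Electrostatic force per unit length, Eq. (1)."""
        return eps0 * b * V**2 / (2.0 * (g - w)**2)

    for frac in (0.0, 0.25, 0.5, 0.75):
        w = frac * g
        print("w = %.2f*g  ->  F_e = %.3e N/m" % (frac, F_e(w)))
    # The force quadruples by w = g/2: a small extra deflection caused by an
    # external load is met by a disproportionately larger electric force.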
In the following section, the nonlinear Euler-Bernoulli beam equation for the proposed microcantilever sensor is obtained. The proposed model is solved by the Green's function method, and the results for the pull-in voltage and the displacement under electrostatic force are verified. In Section 3, the numerical analysis and the comparison of the sensitivity of traditional microcantilever sensors with the proposed electrostatically excited microcantilever sensor are discussed. In addition, the influence of geometrical factors, including the initial gap, width, length, and thickness, on the sensitivity of the microcantilever sensor is explored. We close the paper with concluding remarks in Section 4.

Mathematical Theory

An electrostatically excited microcantilever sensor is composed of a microcantilever beam separated by a dielectric spacer from a fixed ground plane (Figure 1). Based on the operating principle of the proposed sensor, the microcantilever deflects toward the underlying fixed ground plane due to attractive electrostatic forces. Near the "pull-in" voltage, the microcantilever sensor, subjected to an external load (force or moment), amplifies the displacement. For the performance analysis of the proposed sensor, two different applications of microcantilevers are dealt with here. The tip force applied to the microcantilever in Figure 1 models the first application, which is the original function of microcantilevers as a force or deflection sensor, as seen in atomic force microscopes (AFMs). The second application of the proposed sensor is in biosensing, where isotropic surface stresses are encountered. Based on Yin Zhang's assumption [36], the surface stress effect is modeled as a distributed moment m applied along the microcantilever (see Figure 1). The following relation between the surface stress \sigma and the uniformly distributed bending moment m along the microcantilever can be established:

    m = \frac{\sigma b t}{2 L}    (2)

To study the nonlinear behavior of the electrostatically excited microcantilever sensor, a beam model is derived for a microcantilever of length L with a uniform cross section of width b and thickness t. Based on Euler-Bernoulli beam theory, the governing equation may be written as

    E I \frac{d^4 w}{dx^4} = F_e + (m + F)\,\delta(x - L)    (3)

and the associated boundary conditions are

    w(0) = \frac{dw}{dx}(0) = 0, \qquad \frac{d^2 w}{dx^2}(L) = 0, \qquad \frac{d^3 w}{dx^3}(L) = 0    (4)

where F_e is the electrostatic force per unit length of the microcantilever, formulated in Equation (1), w is the deflection of the microcantilever, x is the position along the microcantilever measured from the clamped end, E is the Young's modulus, and I is the microcantilever's second moment of area, which, for a rectangular cross section, is

    I = \frac{b t^3}{12}    (5)

For convenience, the model is formulated in nondimensional form by introducing the nondimensional variables

    u = \frac{w}{g}, \qquad z = \frac{x}{L}    (6)

The following nondimensional equation is obtained:

    \frac{d^4 u}{dz^4} = F(z) = \frac{\epsilon_0 b V^2 L^4}{2 E I g^3 (1 - u(z))^2} + \left( \frac{F L^3}{g E I} + \frac{m L^3}{g E I} \right) \delta(z - 1)    (7)

and the associated boundary conditions are

    u(0) = \frac{du}{dz}(0) = 0, \qquad \frac{d^2 u}{dz^2}(1) = 0, \qquad \frac{d^3 u}{dz^3}(1) = 0    (8)

According to the definition of the nondimensional variables, physically meaningful solutions exist in the region 0 < u < 1, where u is the nondimensional deflection, and in particular u at z = 1 is the deflection of the cantilever tip.
Integral equation representations are useful for understanding the response of a system to a concentrated load, since, from the theoretical point of view, the solution for an arbitrary load can be constructed using only the known load and the solution for a concentrated load [37]. The concentrated load at z = \xi is modeled using the Dirac delta function \delta(z - \xi). Replacing F(z) with \delta(z - \xi) and u with G in Equation (7), one obtains

    \frac{d^4 G}{dz^4} = \delta(z - \xi)    (9)

which models a microcantilever beam with a concentrated load at z = \xi. The solution to this problem, called the Green's function, is

    G(z, \xi) = a_0 z^3 + a_1 z^2 + a_2 z + a_3  for 0 <= z < \xi,
    G(z, \xi) = b_0 z^3 + b_1 z^2 + b_2 z + b_3  for \xi < z <= 1    (10)

The coefficients a_i and b_i (i = 0, 1, 2, 3) in Equation (10) are unknown constants. The boundary conditions (fixed at z = 0 and free at z = 1) are imposed:

    G(0) = \frac{dG}{dz}(0) = 0, \qquad \frac{d^2 G}{dz^2}(1) = 0, \qquad \frac{d^3 G}{dz^3}(1) = 0    (11)

Equation (10) still has four unknown constants, to be determined from the continuity of the solution and its first and second derivatives at \xi, together with the unit jump of the third derivative there:

    G(\xi^-) = G(\xi^+), \quad \frac{dG}{dz}(\xi^-) = \frac{dG}{dz}(\xi^+), \quad \frac{d^2 G}{dz^2}(\xi^-) = \frac{d^2 G}{dz^2}(\xi^+), \quad \frac{d^3 G}{dz^3}(\xi^+) - \frac{d^3 G}{dz^3}(\xi^-) = 1    (12)

The deflection of a microcantilever beam under a concentrated load of unit strength at point \xi is therefore

    G(z, \xi) = -\frac{z^3}{6} + \frac{\xi z^2}{2}  for 0 <= z < \xi,
    G(z, \xi) = \frac{\xi^2 z}{2} - \frac{\xi^3}{6}  for \xi < z <= 1    (13)

Now the derived Green's function is used to construct the solution of our nonuniformly distributed loading problem. Multiplying Equation (9) by u, Equation (7) by G, subtracting the two equations, and integrating from z = 0 to z = 1, one obtains

    \int_0^1 \left( G \frac{d^4 u}{dz^4} - u \frac{d^4 G}{dz^4} \right) dz = \int_0^1 \left( F G - u \delta \right) dz    (14)

This is the integral representation of the nonlinear differential Equation (7). In this way, the Green's function is used to turn the nonlinear differential Equation (7) into a nonlinear integral equation. Integrating the left side of Equation (14) by parts and applying the boundary conditions, Equations (8) and (11), all contributions from these terms vanish and one is left with u(\xi) = \int_0^1 F(z)\, G(z, \xi)\, dz. Noting that G(z, \xi) is a symmetric function of z and \xi, one may rename the variables and write

    u(z) = \int_0^1 F(\xi)\, G(z, \xi)\, d\xi    (15)

The closed-form solution for the deflection of the microcantilever tip (i.e., the maximum deflection) is

    u(z = 1) = \int_0^1 F(\xi)\, G(1, \xi)\, d\xi    (16)

which is obtained by substituting z = 1 in Equation (15). No solution is possible without assuming a shape function for u(\xi). The deflection of the microcantilever can be approximated by the following quadratic function [38] satisfying the geometrical boundary conditions:

    u(\xi) = u_0 \xi^2    (17)

Substituting Equation (17) into Equation (16) leads to

    u_0 = \int_0^1 \left[ \frac{\epsilon_0 b V^2 L^4}{2 E I g^3 (1 - u(\xi))^2} + \left( \frac{F L^3}{g E I} + \frac{m L^3}{g E I} \right) \delta(\xi - 1) \right] \left[ \frac{\xi^2}{2} - \frac{\xi^3}{6} \right] d\xi    (18)

Evaluating the integrals on the right side of Equation (18), and inserting I from Equation (5) into Equation (18), one obtains (writing h for the thickness t)

    u_0 = \frac{\epsilon_0 V^2 L^4}{E h^3 g^3 u_0^2} \left[ \frac{1}{2} \log\frac{1}{1 - u_0} - \frac{1}{2 (1 - u_0)} + \frac{1}{2} + \frac{3}{2 \left( \frac{1}{u_0} - 1 \right)} + \frac{3 \sqrt{u_0}}{4} \log\frac{1 - \sqrt{u_0}}{1 + \sqrt{u_0}} \right] + \frac{4 F L^3}{g E b h^3} + \frac{4 m L^3}{g E b h^3}    (19)

By solving Equation (19) with Newton's method, or any other method for solving nonlinear algebraic equations, the nondimensional microcantilever tip deflection u_0 is obtained, which is due to the electrostatic pre-excitation force, the applied tip force, and the distributed moment.
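Equation (19) is a single scalar nonlinear equation in u_0, so it is straightforward to check numerically. The sketch below is mine, not the authors': the geometry, material, and voltage are arbitrary illustrative values, and SciPy's Brent root-finder stands in for the Newton iteration mentioned in the text.

    import numpy as np
    from scipy.optimize import brentq

    eps0 = 8.854e-12                                   # F/m
    E, b, h, L, g = 169e9, 50e-6, 2e-6, 300e-6, 2e-6   # illustrative Si-like beam
    V, F, m = 4.0, 0.0, 0.0                            # bias voltage, tip force, moment

    def rhs(u):
        """Right-hand side of Eq. (19)."""
        s = np.sqrt(u)
        bracket = (0.5 * np.log(1.0 / (1.0 - u)) - 0.5 / (1.0 - u) + 0.5
                   + 1.5 / (1.0 / u - 1.0) + 0.75 * s * np.log((1.0 - s) / (1.0 + s)))
        return (eps0 * V**2 * L**4 / (E * h**3 * g**3 * u**2)) * bracket \
               + 4.0 * (F + m) * L**3 / (g * E * b * h**3)

    # Below pull-in, u = rhs(u) has a stable root; bracket it by scanning for
    # the first sign change (no sign change means V is above pull-in).
    us = np.linspace(1e-4, 0.99, 500)
    res = np.array([u - rhs(u) for u in us])
    i = int(np.argmax(res > 0))
    u0 = brentq(lambda u: u - rhs(u), us[max(i - 1, 0)], us[i])
    print("nondimensional tip deflection u0 = %.5f" % u0)

With these made-up numbers the root lands near u0 of roughly 0.1; raising V toward pull-in pushes the root up and makes the response to an added tip load increasingly steep, which is exactly the amplification the paper exploits.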
The second and third terms on the right-hand side of Equation (19) are the well-known solution of the microcantilever deformation equation without electrostatic excitation. We can separate this part of the solution as

    u_{st} = \frac{4 (F + m) L^3}{g E b h^3}    (20)

Because the applied tip force and the distributed moment have similar influences on the microcantilever displacement, as seen in Equation (20), the rest of the paper investigates only the effect of the applied tip force.

To ascertain the validity of the proposed model, Table 1 compares the experimental, analytical, and simulation results for the deflection of a microcantilever below the pull-in voltage under electrostatic pre-excitation. Table 2 clearly shows that the deflection results of the present work agree with the experimental results better than the analytical results of [39] for the same system configuration. In addition, the pull-in voltage obtained experimentally in [39] is 68.5 V, which is close to the pull-in voltage estimated using the proposed model (69.6 V). The pull-in results of the present work are clearly in better agreement with the experimental results than the analytical results of [40] and [41], which are 66.4 V and 66.78 V, respectively. This comparison shows that the proposed modeling and simulation results have good accuracy relative to other references. Now we can use this model to determine the performance of the proposed electrostatically excited microcantilever sensor.

Based on the concept developed in this paper, the external load applied to the microcantilever sensor in the presence of the nonlinear electrostatic excitation should be amplified. To confirm the proposed idea, the amplification factor AF is defined as

    AF = \frac{u_0 - u_{es}}{u_{st}}    (21)

The amplification factor is the ratio of the deflection of the proposed electrostatically pre-excited microcantilever, due to the tip force or distributed moment, to the deflection of a simple microcantilever sensor. In Equation (21), u_{es} is the pre-excited nondimensional tip deflection due only to the electrostatic excitation. To calculate u_{es}, the external applied tip force and distributed moment are set to zero, and Equation (19) is then solved for u_0 by Newton's method or any other method for solving nonlinear algebraic equations. The numerator of Equation (21) is therefore the total nondimensional microcantilever deflection u_0 (due to the electrostatic pre-excitation, the tip force, and the distributed moment) minus the nondimensional deflection u_{es} (due only to the electrostatic pre-excitation); this term describes the deflection of the microcantilever, after pre-excitation, due to the tip force or distributed moment. The denominator of Equation (21) is the nondimensional deflection of a simple microcantilever without electrostatic pre-excitation, calculated using Equation (20).
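Given a root-finder for Equation (19), the amplification factor of Equation (21) follows in a few lines. Again, this is my own self-contained sketch with the same made-up dimensions as above, not the paper's reported results:

    import numpy as np
    from scipy.optimize import brentq

    eps0, E = 8.854e-12, 169e9
    b, h, L, g = 50e-6, 2e-6, 300e-6, 2e-6             # same illustrative beam

    def tip_deflection(V, load):
        """Solve Eq. (19) for u0 with tip force F = load (and m = 0)."""
        def rhs(u):
            s = np.sqrt(u)
            br = (0.5 * np.log(1 / (1 - u)) - 0.5 / (1 - u) + 0.5
                  + 1.5 / (1 / u - 1) + 0.75 * s * np.log((1 - s) / (1 + s)))
            return (eps0 * V**2 * L**4 / (E * h**3 * g**3 * u**2)) * br \
                   + 4 * load * L**3 / (g * E * b * h**3)
        us = np.linspace(1e-4, 0.99, 2000)
        res = np.array([u - rhs(u) for u in us])
        i = int(np.argmax(res > 0))                    # first sign change: stable root
        return brentq(lambda u: u - rhs(u), us[max(i - 1, 0)], us[i])

    F = 1e-9                                           # a 1 nN tip load
    u_st = 4 * F * L**3 / (g * E * b * h**3)           # Eq. (20)
    for V in (2.0, 5.0, 6.5):                          # pull-in is just under 7 V here
        AF = (tip_deflection(V, F) - tip_deflection(V, 0.0)) / u_st
        print("V = %.1f V  ->  AF = %.2f" % (V, AF))
    # AF -> 1 as V -> 0 and blows up as V approaches the pull-in voltage.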
{"url":"http://www.mdpi.com/1424-8220/11/11/10129/xml","timestamp":"2014-04-16T04:57:05Z","content_type":null,"content_length":"104698","record_id":"<urn:uuid:5fe7c878-47e7-401d-bead-78b8cdf60ef7>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question: find the slope: x + 3/-5 + y - 1/2 = 1

Best Response:

Just get y alone; the coefficient that ends up in front of x will be your slope. Reading the equation literally as typed, x + 3/(-5) + y - 1/2 = 1, subtract x, add 3/5, and add 1/2 to each side:

y = -x + 1 + 3/5 + 1/2 = -x + 21/10

The y-intercept is 21/10, and since the coefficient of x is -1, the slope is -1. (If the problem actually means (x + 3)/(-5) + (y - 1)/2 = 1, clear the denominators instead: -2(x + 3) + 5(y - 1) = 10 gives y = (2/5)x + 21/5, and the slope would be 2/5.)
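Since the equation is ambiguous as typed, here is a quick SymPy check (my addition, not part of the original thread) of the slope under both readings:

    from sympy import symbols, solve, Rational

    x, y = symbols('x y')

    # Reading 1: x + 3/(-5) + y - 1/2 = 1
    expr1 = x + Rational(3, -5) + y - Rational(1, 2) - 1
    print(solve(expr1, y)[0])        # -> -x + 21/10, so the slope is -1

    # Reading 2: (x + 3)/(-5) + (y - 1)/2 = 1
    expr2 = (x + 3) / (-5) + (y - 1) / 2 - 1
    print(solve(expr2, y)[0])        # -> 2*x/5 + 21/5, so the slope is 2/5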
{"url":"http://openstudy.com/updates/4f558a26e4b0862cfd06fc1e","timestamp":"2014-04-21T15:48:48Z","content_type":null,"content_length":"27776","record_id":"<urn:uuid:9bdb02cb-69e9-4794-bd47-4c0ace4e0bd1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Raise 10 to a power in javascript, are there better ways than this

I have a need to create an integer value to a specific power (that's not the correct term, but basically I need to create 10, 100, 1000, etc.). The "power" will be specified as a function parameter. I came up with a solution but MAN does it feel hacky and wrong. I'd like to learn a better way if there is one, maybe one that isn't string based? Also, eval() is not an option. Here is what I have at this time:

    function makeMultiplierBase(precision)
    {
        var numToParse = '1';
        for(var i = 0; i < precision; i++)
            numToParse += '0';
        return parseFloat(numToParse);
    }

I also just came up with this non-string based solution, but it still seems hacky due to the loop:

    function a(precision)
    {
        var tmp = 10;
        for(var i = 1; i < precision; i++)
            tmp *= 10;
        return tmp;
    }

BTW, I needed this to create a rounding method for working with currency. I had been using var formatted = Math.round(value * 100) / 100, but this code was showing up all over the place and I wanted a method to take care of rounding to a specific precision. So I created this:

    Math.roundToPrecision = function(value, precision)
    {
        Guard.NotNull(value, 'value');
        var b = Math.pow(10, precision);
        return Math.round(value * b) / b;
    };

Thought I'd include this here as it's proven to be handy already.

Tags: javascript exponentiation

so, why are you making it a float and not instead making it an integer? – Shad Jun 8 '11 at 23:51
@Shad You are referring to the parseFloat() call? If so, I suppose that's just an oversight; Number(numToParse) would work too – Steve K Jun 8 '11 at 23:52

6 Answers

Top answer:

    Math.pow(10, precision); // fill up space

Sad to say, I wasn't aware of the pow() method, it's exactly what I needed. However, I'd still like to learn how it's done; any idea what Math.pow() may be doing behind the covers? – Steve K Jun 9 '11 at 0:08

@Steve, like most math libraries they probably implement numerical approximations of functions with the standards-required precision. Since Math.pow isn't just for integers, its implementation has to be generic for all "real" numbers, so it's not something similar to your loop implementation (which wouldn't work on a base of, say, 5.672 for example); maybe a Taylor expansion or something similar. – davin Jun 9 '11 at 0:12

@Davin - I googled "taylor expansion" and upon reading the first paragraph from Wikipedia blood began trickling from my ears as my vision blurred... I will just accept that Math.pow() is cool. ;0) – Steve K Jun 9 '11 at 0:19

@Steve, hmmm, so you can rework the equation as follows: we want the value of y = x^n, so we write ln(y) = ln(x^n) = n*ln(x), and raising both sides to the power of the natural base e we get y = exp(n*ln(x)). Now you can calculate ln(x) with a Taylor expansion to any required precision in a predetermined number of steps, multiply by n, and calculate the exp function of the product, again with a Taylor expansion to the required number of terms, i.e., the required precision. – davin Jun 9 '11 at 0:22

@Steve, yeah, those wiki articles are written to make us all feel dumb :) Or to make their authors feel smart. In any case, if you pick up a textbook on real analysis there should be sufficient material to understand Taylor expansions to quite some depth, although that blood-from-the-ears symptom might return. – davin Jun 9 '11 at 0:25
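davin's exp(n*ln(x)) recipe from the comments is easy to demo. Here is a small Python sketch of the idea (illustrative only; real math libraries use far more careful algorithms than these truncated series, and this version only handles x > 0):

    def ln_series(x, terms=60):
        # ln(x) = 2*atanh(t) with t = (x-1)/(x+1); converges for any x > 0
        t = (x - 1.0) / (x + 1.0)
        return 2.0 * sum(t**(2*k + 1) / (2*k + 1) for k in range(terms))

    def exp_series(x, terms=80):
        # exp(x) as the Taylor sum of x**k / k!
        total, term = 0.0, 1.0
        for k in range(terms):
            total += term
            term *= x / (k + 1)
        return total

    def my_pow(x, n):
        # x**n computed as exp(n*ln(x)), per the comment above
        return exp_series(n * ln_series(x))

    print(my_pow(5.672, 3), 5.672**3)   # close agreement
    print(my_pow(10, 6), 10**6)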
Answer:

    function precision(x) {
        return Math.pow(10, x);
    }

Answer: if all you need to do is raise 10 to different powers, or any base to any power, why not use the built-in Math.pow(10, power); unless you have some specific need or reason to reinvent the wheel.

Answer: This has the same result as your function, but I still don't understand the application/intention.

    function makeMultiplierBase(precision, base){
        return Math.pow(base || 10, precision);
    }

Why not just return Math.pow(base||10, precision); – davin Jun 9 '11 at 0:01
@davin True, just an initialization habit =) – Shad Jun 9 '11 at 1:11
{"url":"http://stackoverflow.com/questions/6286589/raise-10-to-a-power-in-javascript-are-there-better-ways-than-this/6286608","timestamp":"2014-04-18T06:35:10Z","content_type":null,"content_length":"94277","record_id":"<urn:uuid:82295ebd-a296-4893-87c2-053ebc4d62f0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
M05 #16 Author Message This post received \frac{(40 * x^2 * y^2)}{z} is divisible by prime number Q. Is Q an even prime number? 1. Product of x^2 * y^2 is even snowy2009 2. Intern z * Q = 6 Joined: 19 Jun 2008 (C) 2008 GMAT Club - Posts: 20 m05#16 Followers: 0 Kudos [?]: 9 [1] , given: 0 Source: GMAT Club Tests - hardest GMAT questions Statement 1 does not help by itself. It tells us that x or y or both are even. Statement 2 states that either z or Q is equal to 2. Combining the two statements does not provide the answer either, we can figure out that either x or y has to be divisible by 3. To make sure, we can pick numbers 4 for x and 3 for y. When we divide by 6, there is still no way to tell if z or Q is equal to 2. I don't understand why z or Q has to be equal to 2; can't Q be 11 and z equal 6/11? Kaplan GMAT Prep Discount Codes Knewton GMAT Discount Codes Manhattan GMAT Discount Codes Manager SomeWheer must mentioned in the question x,y & z are integers or whole numbers Joined: 10 Aug 2008 Posts: 76 Followers: 1 Kudos [?]: 13 [0], given: 0 make sure that the posted question is correct in wording. i think that there has to some information about the x,y &z. 321kumarsushant if we say that x,y & z are integers, Manager then, Joined: 01 Nov 2010 statement 1: Product of x^2 * y^2 is even Posts: 193 doesn't make out any solution. Location: India because it doesnot tell anything about z. Concentration: Technology, whether it ll be divisible by Q or not. Statement 2: z * Q = 6 GMAT Date: 08-27-2012 this statement doesn't say anything about the x & y. GPA: 3.8 WE: Marketing (Manufacturing) using both statement alone or together we cant answer the question. Followers: 5 correct my explanation if wrong. Kudos [?]: 18 [0], given: 26 _________________ kudos me if you like my post. Attitude determine everything. all the best and God bless you. Rephrasing the question: Is Q = 2? Joined: 23 Oct 2010 S1: for x^2 *y^2 to be even any of x or y should be even or both can be even. This does not help nail the value of Q as z could be odd or even and based on that Q could Posts: 87 be even or odd S2 : this shows that either of z and Q can be 2 or 3 Location: India Together too they don't answer the question as z could be odd or even and based on that Q could be even or odd. Followers: 3 Kudos [?]: 19 [0], given: 6 snowy2009 wrote: Joined: 23 Oct 2010 I don't understand why z or Q has to be equal to 2; can't Q be 11 and z equal 6/11? Posts: 87 even if we assume that there is no mention of the fact that x, y and z are integers and your solution is correct the answer still remains E. However the explanation provided by GMAT club can be questioned in that case. Location: India In all probability it should have been mentioned that all variables in this question should be assumed to be integers. Followers: 3 Kudos [?]: 19 [0], given: 6 Manager I got it down to an unsure C or E and guessed E. :/ Status: Trying to get into _________________ the illustrious 700 club! I'm trying to not just answer the problem but to explain how I came up with my answer. If I am incorrect or you have a better method please PM me your thoughts. Thanks! Joined: 18 Oct 2010 Posts: 79 Followers: 1 Kudos [?]: 18 [0], given: 58 saurabhgoel Using both stmt Q can be either 3 or 2 . Manager Now --> 40 = 8 * 5 Joined: 31 May 2010 So, it depends upon the values of X and Y, it is only provided that either of X an Y is even, we cant confirm the value of Q . 
Posts: 97 So, E Followers: 1 _________________ Kudos [?]: 18 [0], given: 23 Kudos if any of my post helps you !!! That 40*x2*y2 is even is evident even without looking at stmt 1. Stmt 1 is insuff as it doesn't say anything abt z or q Intern z*q = 6 means either z or q is 2 or 3. If z is 2 then q is 3, an odd prime. But if z is 3 then q is 2, an even prime. But stmt 2 doesn't say anything conclusively. So stmt 2 insuff Joined: 25 Apr 2010 Together, they don't indicate the value of z or q either Posts: 6 Hence, the ans is E Followers: 0 Kudos [?]: 0 [0], given: 0 here we use the formula srivicool dividend = divisor*quotient+remainder and we shud find if Q=2?? Manager here remainder = 0 Joined: 01 Apr 2010 since 40*x^2*y^2 = even = z*Q Posts: 165 (1) is not sufficient. Followers: 3 (2) says z*Q = 6 which leaves the option of Q to be 2 or 3. Kudos [?]: 15 [0], given: 6 (1) and (2) together does not make any difference. So E Senior Manager sonnco wrote: Joined: 19 Oct 2010 I got it down to an unsure C or E and guessed E. :/ Posts: 272 Me too.. haha.. Location: India Must say it was a nervous few seconds before I clicked "Reveal". GMAT 1: 560 Q36 V31 _________________ GPA: 3 petrifiedbutstanding Followers: 6 Kudos [?]: 25 [0], given: 26 The question is asking if q is 2. In order to know that, we must figure out what z is (or q, but they dont usually give that directly. Joined: 21 Nov 2010 S1: Says nothing about the denominator. S2: gives us Z and Q together, but we don't know which is 3 and which is 2. Posts: 133 Together, no help. Answer is E. Followers: 0 Kudos [?]: 5 [0], given: 12 Current Student Affiliations: UWC The link to this thread seems outdated.. Joined: 09 May 2012 I came here to M5 #16 looking for: Posts: 400 If x and y are positive integers and xy is divisible by prime number p . Is p an even number? Location: India 1 x^2 * y^2 is an even number GMAT 1: 620 Q42 V33 GMAT 2: 680 Q44 V38 2 xp = 6 GPA: 3.43 WE: Engineering (Entertainment and Sports) Followers: 20 Kudos [?]: 148 [0], given: This post received Expert's post macjas wrote: The link to this thread seems outdated.. I came here to M5 #16 looking for: If x and y are positive integers and xy is divisible by prime number p . Is p an even number? 1 x^2 * y^2 is an even number 2 xp = 6 Here is a solution for that problem. If x and y are positive integers and xy is divisible by prime number p. Is p an even number? Notice that as given that is a prime number and the only even prime is 2, then the question basically asks whether x^2 * y^2 is an even number --> (this means that at least one of the unknowns is even). We have that some even number is divisible by prime number , not sufficient to say whether , for example if can be ether 2 or 3. Bunuel (2) Math Expert xp = 6 Joined: 02 Sep 2009 --> since Posts: 17321 x Followers: 2875 is a positive integer and Kudos [?]: 18401 [2] , p given: 2350 is a prime number then either (answer NO) or (answer YES). Not sufficient. (1)+(2) If , so the first statement is satisfied irrespective of the value of and thus we have no constraints on its value. So from (2) can take any of the two values 2 or 3, which means that can also take any of the two values 2 or 3, respectively. Not sufficient. Answer: E. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. 
Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Current Student Affiliations: UWC Thanks Bunuel, Joined: 09 May 2012 I kinda forgot that E/E and E/O can yield an integer if the numerator is a multiple of the denominator. The GMAT Club questions are great memory joggers! Posts: 400 Location: India GMAT 1: 620 Q42 V33 GMAT 2: 680 Q44 V38 GPA: 3.43 WE: Engineering (Entertainment and Sports) Followers: 20 Kudos [?]: 148 [0], given: OldFritz Basically asking whether Q=2 Senior Manager Statement 1 not very useful since it is mum on what Q could be. Not sufficient. Joined: 15 Sep 2009 Statement 2 implies that Q is 2 or 3. Not sufficient. Posts: 271 Combining leaves us still unsure since x^2y^=even could mean that x^2y^2 could be a multiple of 2 or 3, thus masking the identity of Q, which could still be a 2 0r 3. GMAT 1: 750 Q V _________________ Followers: 8 +1 Kudos me - I'm half Irish, half Prussian. Kudos [?]: 49 [0], given: 6 Chilldowngmat Why OA E IMO E Joined: 19 Dec 2012 x^2.y^2=even mean the multiple of these two is an integer (can be a factor of 3) so 40*even integer = even INTEGER Posts: 8 Q*z=6 if Q is an prime number it is also an integer thus z must be an integer so when we take these facts together Q can be (1,2,3, NOT 6 THAT IS WHY Z IS AN INTEGER) Followers: 0 1,2,3 Q can be so both statements together insufficient Kudos [?]: 0 [0], given: 12 Similar topics Author Replies Last post 19 m05 #22 vignesh53 26 21 Sep 2008, 10:08 6 M05 #4 dczuchta 21 24 Sep 2008, 09:57 3 m05 #10 echizen 20 11 Oct 2008, 08:25 7 m05#06 scorcho 25 14 Oct 2008, 18:08 m05 bibha 3 13 Jul 2010, 08:39
{"url":"http://gmatclub.com/forum/m05-71333.html?oldest=1","timestamp":"2014-04-19T22:29:07Z","content_type":null,"content_length":"179811","record_id":"<urn:uuid:54fc8541-fe57-4439-a48c-8c7b2559de87>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] sorting -inf, nan, inf Tim Hochberg tim.hochberg at ieee.org Tue Sep 19 16:04:49 CDT 2006 Charles R Harris wrote: > On 9/19/06, *Tim Hochberg* <tim.hochberg at ieee.org > <mailto:tim.hochberg at ieee.org>> wrote: > A. M. Archibald wrote: > > On 19/09/06, Tim Hochberg <tim.hochberg at ieee.org > <mailto:tim.hochberg at ieee.org>> wrote: > > > >> Keith Goodman wrote: > >> > >>> In what order would you like argsort to sort the values -inf, > nan, inf? > >>> > >>> > >> Ideally, -inf should sort first, inf should sort last and nan > should > >> raise an exception if present. > >> > >> -tim > >> > > > > Mmm. Somebody who's working with NaNs has more or less already > decided > > they don't want to be pestered with exceptions for invalid data. > Do you really think so? In my experience NaNs are nearly always > just an > indication of a mistake somewhere that didn't get trapped for one > reason > or another. > > I'd > > be happy if they wound up at either end, but I'm not sure it's worth > > hacking up the sort algorithm when a simple isnan() can pull > them out. > > > Moving them to the end seems to be the worst choice to me. Leaving > them > alone is fine with me. Or raising an exception would be fine. Or doing > one or the other depending on the error mode settings would be even > better if it is practical. > > What's happening now is that NaN<a, NaN==a, and NaN>a are all > false, > > which rather confuses the sorting algorithm. But as long as it > doesn't > > actually *break* (that is, leave some of the non-NaNs incorrectly > > sorted) I don't care. > > > Is that true? Are all of numpy's sorting algorithms robust against > nontransitive objects laying around? The answer to that appears to be > no. Try running this a couple of times to see what I mean: > n = 10 > for i in range(10): > for kind in 'quicksort', 'heapsort', 'mergesort': > a = rand(n) > b = a.copy() > a[n//2] = nan > a.sort(kind=kind) > b.sort(kind=kind) > assert alltrue(a[:n//2] == b[:n//2]), (kind, a, b) > The values don't correctly cross the inserted NaN and the sort is > incorrect. > Except for heapsort those are doing insertion sorts because n is so > small. Heapsort is trickier to understand because of the way the heap > is formed and values pulled off. I'm not sure where the breakpoint is, but I was seeing failures for all three sort types with N as high as 10000. I suspect that they're all broken in the presence of NaNs. I further suspect you'd need some punishingly slow n**2 algorithm to be robust in the presence of NaNs. > I'll look into that. I bet searchsorted gets messed up by nan's. Do > you think it worthwhile to worry about it? No. But that's because it's my contention is that any sequence with NaNs in it is *not sorted* and thus isn't suitable input for searchsorted. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-September/010808.html","timestamp":"2014-04-20T13:34:56Z","content_type":null,"content_length":"6770","record_id":"<urn:uuid:c8b09bce-2d7d-4e1e-99a4-bf2701424f18>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] question about the use of bfloat Stavros Macrakis macrakis at alum.mit.edu Sun Jan 27 17:21:09 CST 2008 On Jan 26, 2008 1:16 PM, S. Newhouse <sen1 at math.msu.edu> wrote: > >> Is there a way to declare all variables to be evaluated using 'bfloat' > >> without simply applying 'bfloat' to every occurrence? bfloat has nothing to do with variables or evaluation; it is a function that converts *values* to their bfloat equivalent. In Maxima (unlike some languages), assigning a value to a variable, or getting the value of a variable, never converts the value. On the other hand, Maxima *does* have the concept of type contagion for numbers. For all arithmetic operations Op, Op(rational,float) => float; Op(rational,bfloat) => bfloat; Op(float,bfloat) => bfloat. The glitch in this scheme is that *symbolic* constants like %pi and %e and exact numeric expressions like sqrt(2) do not participate in this scheme -- though arguably any operation between an exact and an approximate quantity should result in an approximate one, e.g. sqrt(2)+1.0 => 2.414. On the need for bfloat, the issue is speed. I want to use maxima as an > aid to rigorous computation for Dynamical Systems (i.e., iteration of > maps and solutions of differential equations) A few comments on your code: > Nest(f, x, n) := block(for i thru n do x : f(x), val : x); Why does this function modify the global variable "val"? Is this a return-value convention in some other language? Also, in Maxima, the iteration variable is local. So you can simply write Nest(f, x, n) := ( for i thru n do x : f(x), x ); > (%i284) f(x):= 4*x*(1-x); > (%i285) ff(x):= bfloat(4)*bfloat(x)*(bfloat(1)-bfloat(x)); (%i287) Nest(f,1/5,19)$ > Evaluation took 38.75 seconds (38.75 elapsed) using 24.966 MB. Yes, this is giving you the *exact* result. It is the price of precision... > (%i288) Nest(ff,bfloat(.2),19)$ > Evaluation took 0.01 seconds (0.00 elapsed) using 74.375 KB. Try Nest(f,bfloat(1/5),19). This will be as fast as your ff. I wonder if the problem might be that in rational arithmetic repeated > many times, the rational numbers get very large numerators and > denominators. Would this be related to the slowness? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.math.utexas.edu/pipermail/maxima/attachments/20080127/237c70cc/attachment.htm More information about the Maxima mailing list
{"url":"http://www.ma.utexas.edu/pipermail/maxima/2008/009962.html","timestamp":"2014-04-19T01:58:59Z","content_type":null,"content_length":"5192","record_id":"<urn:uuid:6215e53b-cbca-422d-875c-ac345b9170d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Fraction Booklet Project Project Due Dec. 6, 2010 6th grade Math Project: Fraction Booklet Students will create a fraction review booklet. The booklet should include: 1. Creative Cover 2. Table of Contents 3. Topics include: • Factors • Prime/Composite Numbers/Prime Factorization • Multiples • Reducing fractions • Equivalent Fractions • Adding/Subtracting Fractions and Mixed Numbers • Multiplying Fractions and Mixed Numbers • Dividing Fractions and Mixed Numbers 4. Explain in detail how to perform each step of the above topic. Show how the topics are related. For example you reduce fractions by finding the GCF. 5. Vocabulary words 6. Create at least 3 examples for each topic. Can use diagrams if necessary. 7. Books should be neat, organized, and colorful. The book is due on December 6, 2010. No late assignments will be accepted. Any questions please ask or email me through Engrade. 6th grade Math Project: Fraction Booklet Students will create a fraction review booklet. The booklet should include: 1. Creative Cover 2. Table of Contents 3. Topics include: • Factors • Prime/ Composite Numbers/Prime Factorization • Multiples • Reducing fractions • Equivalent Fractions • Adding/Subtracting Fractions and Mixed Numbers • Multiplying Fractions and Mixed Numbers • Dividing Fractions and Mixed Numbers 4. Explain in detail how to perform each step of the above topic. Show how the topics are related. For example you reduce fractions by finding the GCF. 5. Vocabulary words 6. Create at least 3 examples for each topic. Can use diagrams if necessary. 7. Books should be neat, organized, and colorful. The book is due on December 6, 2010. No late assignments will be accepted. Any questions please ask or email me through Engrade. Project Due Dec. 6, 2010
{"url":"https://wikis.engrade.com/fractions1","timestamp":"2014-04-21T02:04:47Z","content_type":null,"content_length":"9893","record_id":"<urn:uuid:895907aa-eedc-443c-bc24-dfb9e716530e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
all 2 comments [–]tusksrus1 point2 points3 points ago sorry, this has been archived and can no longer be voted on If I have a parametric equation given by r(t)=(x(t),y(t)), the tangent line at the point r(t) (so fixed t) is r(t) + s r'(t) where r'(t) is the derivative of r at t (to differentiate a vector you just differentiate the coordinates as normal). In the tangent line equation, s is the variable and t is fixed -- t determines which point on the curve r(t) we're at, and s parametrizes the line. But that's not vital, what's important is that the tangent lines be parallel, ie when do they have the same slope? The slope is given by the derivative r'(t). So the question is asking you to differentiate the functions and equate them. Essentially, when is the derivative of t ln(t) equal to the derivative of t^t? There should be two solutions (according to the question, at least, I haven't solved the problem myself). Then for b) and c) it's asking you for the explicit values of the derivative.
{"url":"http://www.reddit.com/r/cheatatmathhomework/comments/191i30/calculus_tangent_lines_parametric_equations/","timestamp":"2014-04-17T13:23:51Z","content_type":null,"content_length":"51980","record_id":"<urn:uuid:59e453ed-d96d-4697-bd12-2900205c963a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Mineralization of terbuthylazine This example is part of the NCEAS non-linear modelling project By Anders Nielsen Terbuthylazine is a herbicide used in agriculture. It is a so-called s-triazin like atrazine, which has been banned in Denmark after suspicion of causing cancer. Terbuthylazine can be bound to the soil, but free terbuthylazine can be washed into the drinking water. Some bacteria can mineralize it. This data is part of a larger experiment to determine the ability of certain bacteria to mineralize terbuthylazine, and to estimate the mineralization rate. This is a fairly straightforward nonlinear least-squares problem, with normally distributed residuals and no random effects or latent variables. The deterministic part of the model is the solution to a set of coupled ordinary differential equations (ODEs) for the concentrations in different compartments. Because the ODEs are linear, the deterministic solution can be found directly in terms of a matrix exponential, for which functions exist in all three of ADMB, BUGS, and R. From there it is simply a matter of defining a normal likelihood, or equivalently a least-squares expression, and minimizing it. The main differences appear in the speed and robustness of the matrix exponential formulations in different software tools. Source code
{"url":"http://www.admb-project.org/examples/differential-equations/mineralization-of-terbuthylazine","timestamp":"2014-04-21T14:47:30Z","content_type":null,"content_length":"27746","record_id":"<urn:uuid:68daf82b-dc70-4277-a22b-adcdcab6c1b8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Thinking about Graduate School? Consider Mathematical Computer Science at UI Chicago! It’s that time of year where senior undergraduates are considering whether to go to graduate school. And I wouldn’t be surprised if many students were afraid of the prospect, perhaps having read that popular genre of articles these days that tell you graduate school will turn you into an emotional wreck and that only a psychopathic masochist would put themselves through it. The problem with these articles is they’re usually written by both outliers and those who put themselves in situations with no other options. I’ve felt my time at UI Chicago, however, has provided me nothing but options and excitement! So if you’re thinking about graduate school in mathematics or theoretical computer science, here’s my pitch for Why you should come to UI Chicago and study theoretical computer science We’re social. In fact, UI Chiago’s mathematics department is the most social of any math department I’ve ever heard of. I think this is the biggest benefit for me. On my first day here, I was surprised that everyone was totally normal and not the typical weird antisocial stereotype one associates with people who like math. Our department has a huge list of seminars going on every day of the week, and a small party every Friday called “Tea” that has a large attendance. We often go out to bars and restaurants, and have other outings. We even have a Facebook group (for grad students only) and a ping pong league that the professors sometimes join. We currently have over 150 graduate students in our department, and I know around 70 by name. We have world-class faculty. Some of my colleagues came to UIC specifically to work with David Marker on model theory, or Lou Kauffman on knot theory. At least one researcher here has over two hundred publications! We have big names in algebraic geometry, hypergraph combinatorics, dynamical systems, low-dimensional topology, and a very active logic group. Our theoretical computer science group (mixed with our combinatorics group) is small but vibrant and growing fast. We just got three new mathematical computer science students this year, and I’m doing everything I can to convert some of the other students over to our We’re in the middle of a thriving intellectual community. Chicago is the center of the Midwest US, and there are a ton of universities not only in the city but within a few hours drive. There are regular seminars and colloquia at the University of Chicago, Northwestern, and other smaller institutions like the Toyota Technical Institute (which has very strong researchers). Then there are the universities of Wisconsin, Indiana, and Michigan which all have strong theoretical computer science groups (and of course other mathematics groups) and we get together for conferences like Midwest Theory Day. Our department is not cutthroat competitive. I hear rumors about top mathematics and computer science programs that (unintentionally) pit students against each other for the attention of a few glorified professors. That simply doesn’t happen here. Everyone is friendly and people regularly collaborate. You can approach any professor and ask to do a reading course with them or ask them what kinds of open problems they’re thinking about, and most of them will gladly sit down with you and explain all the neat ideas in their heads. Even the hardest, most sarcastic professors genuinely care about their students. 
I think, along with being social, this makes our department one of the friendliest and most stress-free places to get a PhD. We’re in a great city. Chicago is really fun! I don’t know what else to say about this. Our department staff is very supportive. Our director and assistant director of graduate studies are extremely helpful at getting new students situated and ensuring they have funding. It’s not uncommon for students who start in the PhD program to decide after one or two years that a PhD is not right for them. Usually they will stop with the requirements for a master’s degree, and there are no hard feelings. Students who do this are even encouraged to return if they decide they want to finish their PhD later. In the mean time, our department guarantees tuition waivers and stipends to all of its teaching assistants (and there are alternatives to teaching as well), so you can focus on your studies and not have to think too much about money. And even more, if you decide to study theoretical computer science at UI Chicago you get a whole bunch of other benefits: You get to hang out and do research with me! (Okay maybe that’s not a serious benefit to consider) Your post-grad school job opportunities widen. Jobs are hard to come by for the purest of pure mathematics researchers. Research positions are in short supply, and unless you want to go into industry with an applied math degree the remaining option is to teach at a 4-year institution. But if you study theoretical computer science, now you are qualified to do all kinds of things. Work at industry research labs like Microsoft Research, Google Research, or Yahoo! Research. Work at government labs like Lincoln Labs and Lawrence Livermore National Labs, both of which I interned at. You can shoot for a professorship or do a postdoc like a regular mathematics PhD would. If you’re hand with Python you could go into the software industry and get a high-demand job at any major company in cryptography or operations research (both of which depend on ideas from TCS). And you always keep the option of teaching at a 4-year. You have many options for internships during summers. I, my colleagues, and even my advisor did research internships during the Summers at various research labs and industry companies. This is a particularly nice benefit of doing mathematical computer science in grad school, because it augments your normal graduate student stipend by enough to live much more comfortably than otherwise (that being said, for extra money a lot of my pure math colleagues will tutor on the side, and tutoring comes at a high price these days). It’s not uncommon to receive additional funding through these opportunities as well. You get to travel a lot. The main publication venue in computer science is the conference, and that means there are conferences happening all over the world all the time. In fact, I just got back from a conference in Aachen, Germany, earlier this year I was at Berkeley and Stanford, I am helping to run a conference in Florida early next year, and I am looking at conferences in Beijing and Barcelona next Summer. All of the trips you take to present your published research is paid for, so it’s just pure awesome. You enjoy the breadth of problems in computer science. Computer science is unique in that it connects to almost every field of mathematics. 1. Like statistics? There’s statistical machine learning and randomized algorithm design. 2. Like real analysis and dynamical systems? 
There’s convex optimization, support vector machines, and tons of computational aspects of PDE’s. 3. Like algebra or number theory? There’s cryptography. 4. Like combinatorics? There’s combinatorial optimization. 5. Like game theory? I just got back from a conference on algorithmic game theory. 6. Like geometry and representation theory? There’s a Geometric Complexity Theory program working toward P vs NP. 7. Like logic? You might be surprised to know that the cleanest proofs of the incompleteness theorems are via Turing machines. 8. Like topology? There are researchers (not at UIC) working on computational topology, like persistent homology which we’ve been slowly covering on this blog. The list just goes on and on, and this isn’t even mentioning the purely pure theoretical computer science topics which have a flavor of their own. Programming options exist, but you aren’t forced to write programs. Some of the greatest computer science researchers cannot write simple computer programs, and if you’re just interested in theory there is plenty of theory to go around. On the other hand, we have researchers in our department studying aspects of supercomputing, and options for collaboration with researchers in the (engineering) computer science department. Over there they’re studying things like biological networks, machine learning and robotics, and all kinds of hands-on applied stuff that you might be interested in if you read this blog. So if you’re interested in joining us for next year and have any questions, feel free to drop me or the professors in the MCS group or the director of graduate studies an email. 2 thoughts on “Thinking about Graduate School? Consider Mathematical Computer Science at UI Chicago!” 1. I already chose to apply to UIC in part due to following your blog :-) But I don’t know that I’d end up working in theoretical computer science (aside from programming projects for math research, I haven’t done much computer science, I’m leaning more towards topology.) □ I was also leaning toward topology when I applied :)
{"url":"http://jeremykun.com/2013/11/03/thinking-about-graduate-school-consider-mathematical-computer-science-at-ui-chicago/","timestamp":"2014-04-20T10:46:45Z","content_type":null,"content_length":"83025","record_id":"<urn:uuid:6d05b24c-20b4-4152-8c26-71c72ee4f43e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
EconPort - Glossary

WACM
abbreviation for the Weak Axiom of Cost Minimization
wage curve
A graph of the relation between the local rate of unemployment, on the horizontal axis, and the local wage rate, on the vertical axis. Blanchflower and Oswald show that this relation is downward sloping. That is, locally high wages and locally low unemployment are correlated.
Wallis statistic
A test for fourth-order serial correlation in the residuals of a regression, from Wallis (1972) Econometrica 40:617-636. Fourth-order serial correlation comes up in the context of quarterly data; e.g., seasonality. Formally, the statistic is:
d[4] = (sum from t=5 to t=T of: (e[t]-e[t-4])^2) / (sum from t=1 to t=T of: e[t]^2)
where the series of e[t] are the residuals from a regression. Tables for interpreting the statistic are in Wallis (1972).
Walrasian auctioneer
A hypothetical market-maker who matches suppliers and demanders to get a single price for a good. One imagines such a market-maker when modeling a market as having a single price at which all parties can trade. Such an auctioneer makes the process of finding trading opportunities perfect and cost free; consider by contrast a "search problem" in which there is a stochastic cost of finding a partner to trade with and transactions costs when one does meet such a partner.
Walrasian equilibrium
An allocation vector pair (x,p), where x are the quantities held of each good by each agent, and p is a vector of prices for each good, is a Walrasian equilibrium if (a) it is feasible, and (b) each agent is choosing optimally, given that agent's budget. In a Walrasian equilibrium, if an agent prefers another combination of goods, the agent can't afford it.
Walrasian model
A competitive markets equilibrium model "without any externalities, asymmetric information, missing markets, or other imperfections." (Romer, 1996, p 151) 'In this general equilibrium model, commodities are identical, the market is concentrated at a single point [location] in space, and the exchange is instantaneous. [Individuals] are fully informed about the exchange commodity and the terms of trade are known to both parties. [No] effort is required to effect exchange other than to dispense with the appropriate amount of cash. [Prices are] a sufficient allocative device to achieve highest value uses.' (North, 1990, p. 30.)
WAPM
abbreviation for the Weak Axiom of Profit Maximization
WARP
WARP is an acronym for the Weak Axiom of Revealed Preference. This axiom states that when a consumer selects consumption bundle 'a' when bundle 'b' is available, the consumer will not select 'b' when 'a' is available. This axiom has two extensions: the Strong Axiom of Revealed Preference (SARP) and the Generalized Axiom of Revealed Preference (GARP).
wavelet
A wavelet is a function which (a) maps from the real line to the real line, (b) has an average value of zero, (c) has values very near zero except over a bounded domain, and (d) is used for the purpose, analogous to Fourier analysis, implied by the following paragraphs. Unlike sine waves, wavelets tend to be irregular, asymmetric, and to have values that die out to zero as one approaches positive and negative infinity. "Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet." By decomposing a signal into wavelets one hopes not to lose local features of the signal and information about timing.
These contrast with Fourier analysis, which tends to reproduce only repeated features of the original function or series.
weak form
Can refer to the weak form of the efficient markets hypothesis, which is that any information in the past prices of a security is fully reflected in its current price. Fama (1991) broadens the category of tests of the weak form hypothesis under the name of 'tests for return predictability.'
weak incentive
An incentive that does not encourage maximization of an objective, because it is ambiguous or satisfice-able. For example, payment of weekly wages is a weak incentive since by construction it does not encourage maximum production, but rather the minimal performance of showing up every work day. This can be the best kind of incentive in a contract if the buyer doesn't know exactly what he wants or if output is not straightforwardly measurable. Contrast strong incentive.
weak law of large numbers
Quoted right from Wooldridge chapter: A sequence of random variables {z[t]} for t=1,2,... satisfies the weak law of large numbers if these three conditions hold: (1) E[|z[t]|] is finite for all t, (2) as T goes to infinity, the limit of the average of the first T elements of {z[t]} 'exists' [unknown: that means it's fixed and finite, right?], (3) as T goes to infinity, the probability limit of the average of the first T elements of the series [z[t] - E(z[t])] is zero. The most important point (I think) is that the weak law of large numbers holds iff the sample average is a consistent estimate for the mean of the process. Laws of large numbers are proved with Chebyshev's inequality.
weak stationarity
synonym for covariance stationarity. A random process is weakly stationary iff it is covariance stationary.
weakly consistent
synonym for consistent.
weakly dependent
A time series process {x[t]} is weakly dependent iff these four conditions hold: (1) {x[t]} is essentially stationary, that is, E[x[t]^2] is uniformly bounded. In any such process, the following 'variance of partial sums' is well defined, and it will be used in the following conditions. Define s[T]^2 to be the variance of the sum from t=1 to t=T of x[t]. (2) s[T]^2 is O(T). (3) s[T]^-2 is O(1/T). (4) The asymptotic distribution of the sum from t=1 to t=T of (x[t]-E(x[t]))/s[T] is N(0,1). These conditions rule out random processes which are serially correlated too positively or negatively or whose partial sums are near zero. Example 1: An iid process IS weakly dependent. (Domowitz, in class 4/14/97.) Example 2: A stable AR(1) (|r|<1) with iid innovations.
weakly ergodic
A stochastic process may be weakly ergodic without being strongly ergodic.
weakly Pareto Optimal
An allocation is weakly Pareto optimal (WPO) if a feasible reallocation would be strictly preferred by all agents. WPO <=> SPO if preferences are continuous and strictly increasing (that is, locally nonsatiated).
WebEc
A Web site with indexes to World Wide Web Resources in Economics.
wedge
The gap between the price paid by the buyer and price received by the seller in an exchange. Might be caused by a tax paid to a third party.
Weibull distribution
in at least one 'standard' specification, has pdf: f(x) = T x^(T-1) exp(-x^T), where T stands for theta. T=1 is the simplest case. It looks like the pdf is zero for x<0 in that case.
Weierstrass
Theorem that a continuous function on a closed and bounded set will have a maximum and a minimum.
This theorem is often used implicitly, in the assumption that some set is compact, meaning closed and bounded. Examples that may help clarify: Example 1: Consider a set which is unbounded, like the real line. Say variable x has any value on the real line, and we wish to maximize the function f(x)=2x. It doesn't have a maximum or minimum because values of x further from zero have more and more extreme values of f(x). Example 2: Consider a set which is not closed, like (0,1). Again, let f(x) be 2x. Again this function has no maximum or minimum because there is no largest or smallest value of x in the set.
Weighted attributes
If the combined weights of a novel object's attributes' relevance for conferring family resemblance to the category exceed a certain level (the membership criterion), that object will be considered an instance of the category (Medin, 1983).
weighted least squares
A way of choosing an estimator. Makes a weighted tradeoff between the error in an estimator due to bias and that due to variance. Putting equal weights on the two is the mean square error criterion.
welfare capitalism
The practice of employers' voluntary provision of nonwage benefits to their blue-collar employees.
WesVar
A software program for computing estimates and variance estimates from potentially complicated survey data. Made by Westat.
white noise process
a random process of random variables that are uncorrelated, have mean zero, and a finite variance (which is denoted s^2 below). Formally, e[t] is a white noise process if E(e[t]) = 0, E(e[t]^2) = s^2, and E(e[t]e[j]) = 0 for t<>j, where all those expectations are taken prior to times t and j. A common, slightly stronger condition is that they are independent from one another; this is an "independent white noise process." Often one assumes a normal distribution for the variables, in which case the distribution is completely specified by the mean and variance; these are "normally distributed" or "Gaussian" white noise processes.
White standard errors
Same as Huber-White standard errors.
Wiener process
A continuous-time random walk with random jumps at every point in time (roughly speaking).
window width
Synonym for bandwidth in the context of kernel estimation.
winner's curse
That a winner of an auction may have overestimated the value of the good auctioned. "The winner's curse arises in an auction when the good being sold has a common value to all the bidders (such as an oil field) and each bidder has a privately known unbiased estimate of the value of the good (such as from a geologist's report): the winning bidder [may] be the one who most overestimated the value of the good; this bidder's estimate itself may be unbiased but the estimate conditional on the knowledge that it is the highest of n unbiased estimates is not." -- Gibbons and Katz
within estimator
synonym for fixed effects estimator
Within subjects design
In a within subjects design the values of the dependent variable for an item or a set of items (e.g., the experimental items) are compared with the values for another item or another set of items (e.g., the control items) within one person.
WLLN
stands for the weak law of large numbers
wlog
abbreviation for "without loss of generality". This phrase is relevant in the context of a proof or derivation in which the notation becomes simpler, or there are fewer cases to demonstrate, by making an innocuous assumption, for example that the data are in a certain order.
Wold decomposition
Any zero mean, covariance stationary process can be represented as a moving average sum of white noise processes plus a linearly deterministic component that is a function of the index t. That form of expressing the process is its Wold decomposition. Roughly: x[t] = (sum from j=0 to infinity of: b[j]e[t-j]) + k[t], where {e[t]} is a white noise process and k[t] is the linearly deterministic component.
Wold's theorem
That any covariance stationary stochastic process with mean zero has a moving average representation, called its Wold decomposition. Let {x[t]} be that process. See Sargent, 1987, p 286-288 for the complete theorem, assumptions, and proof.
World Bank
A collection of international organizations to aid countries in their process of economic development with loans, advice, and research. It was founded in the 1940s to aid Western European countries after World War II with capital.
world systems theory
[What follows is the editor's best understanding, but not definitive.] A category of sociological/historical description and analysis in which aspects of the world's history are thought of as byproducts of the world being an organic whole. Key categories are core and periphery. Core countries, economies, or societies are richer, have more capital-intensive industry, skilled labor and relatively high profits. In a way they exploit the poorer peripheral societies but it may not be a deliberate collusion.
WPO
stands for Weakly Pareto Optimal
{"url":"http://www.econport.org/econport/request?page=web_glossary&glossaryLetter=W","timestamp":"2014-04-16T18:56:46Z","content_type":null,"content_length":"49889","record_id":"<urn:uuid:57a774f3-91cd-4cbc-9202-984e7b18b95d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Baking with Rose Discussion Forums | Roses Sour Cream Coffee Cake I made it the other day, and the flavor was fabulous but I was wondering if anyone else found it dry like I did. When I measured out the flour, I may have lost count and put in an extra 1/4 cup of flour (I know, my boyfriend has recommended measuring by weight instead of volume before). So that could be the culprit, but did anyone else find the cake not as moist as hoped? If so, I thought some added milk or eggs next time could help.
{"url":"http://www.realbakingwithrose.com/index_ee.php/forums/viewthread/29/","timestamp":"2014-04-20T11:50:17Z","content_type":null,"content_length":"76084","record_id":"<urn:uuid:5f39e708-290e-4253-a298-0b64ad0d7acd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
thermo-ish diffusion/wave equation - metal plate and temperature difference
1. The problem statement, all variables and given/known data
The edges of a thin plate are held at the temperature described below. Determine the steady-state temperature distribution in the plate. Assume the large flat surfaces to be insulated.
If the plate is lying along the x-y plane, then one corner would be at the origin. The height of the plate would be 1m along the y-axis and the length would be 2m along the x-axis. The edge along the y-axis is being held at 0 C. The edge along the x-axis is being held at 0 C. The edge parallel to the x-axis is being held at 0 C. The edge parallel to the y-axis is being held at 50sin(pi*y) C.
2. Relevant equations
So I'm assuming this question is actually just a diffusion equation or a wave equation, because that's what the rest of our homework was on.
Alpha^2 u[xx] = u[t]
3. The attempt at a solution
So I tried to solve this like the wave equations and it seems to just be blowing out of proportion and not making sense... Also... I think we need to consider a third position variable here, we need x, y AND t. I don't know how to do this at all :(
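One hedged way to read the problem (the interpretation is an assumption, not stated in the thread): "steady state" means the time derivative vanishes, so instead of a diffusion or wave equation the temperature u(x, y) satisfies Laplace's equation in x and y alone, and the sine edge condition picks out a single separated mode:

\[
\begin{aligned}
&u_{xx} + u_{yy} = 0, \qquad 0 \le x \le 2,\ \ 0 \le y \le 1,\\
&u(0,y) = u(x,0) = u(x,1) = 0, \qquad u(2,y) = 50\sin(\pi y),\\
&\Rightarrow\quad u(x,y) = 50\,\frac{\sinh(\pi x)}{\sinh(2\pi)}\,\sin(\pi y).
\end{aligned}
\]

Substituting back confirms all four boundary conditions and u_{xx} + u_{yy} = 0, so no third variable t is needed.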
{"url":"http://www.physicsforums.com/showthread.php?t=624611","timestamp":"2014-04-16T19:07:13Z","content_type":null,"content_length":"21464","record_id":"<urn:uuid:3f84a183-d9cc-49c2-bc9d-8a576b0cf568>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditional sampling from multivariate normal dist
Peter Perkins <Peter.Perkins@MathRemoveThisWorks.com> wrote in message <hpvk44$2if$1@fred.mathworks.com>...
> On 4/12/2010 12:41 PM, Tomaz wrote:
> > Peter thanks, but is this also useful when dealing with more than 2
> > independent variables? And I guess that there is no 'straightforward'
> > way of doing this in Matlab?
> Look closer at those formulas, and the definitions above them: the formula is entirely general, and it is simple to implement in MATLAB. I'm guessing someone has already posted something like this to the MATLAB Central File Exchange, but I haven't checked.
> if you have N variables, then "1" and "2" in the formula represent subsets of 1:N. That Wikipedia page happens to have things set up so that the conditioning variables are all at the end (i.e., "2" corresponds to (q+1):N) and the unobserved variables are all at the beginning (1:q), but that's just to make the notation simpler.
> Given a row vector mu and a cov matrix Sigma, define i2 as the coordinates that you are conditioning on, and i1 as everything else. Then let mu1 = mu(i1), Sigma11 = Sigma(i1,i1), etc., and apply those formulas. Two things:
> 1) You'll want to do something like
> Sigma1_2 = Sigma11 - Sigma21*(Sigma22\Sigma12)
> and similarly for mu, rather than explicitly using INV. Type "help slash".
> 2) You might have trouble because that Wikipedia page has the MVN in terms of col vectors. You'll want to use row vectors. And so:
> mu1_2 = mu1 - ((a-mu2)/Sigma22)*Sigma21
Thank you Peter! I appreciate your effort and I believe I will be able to solve my problem now (with some effort). Could you please just tell me what would be the 'statistical expression' that describes my problem the best? Is it 'Conditional sampling', 'Conditional distributions' or something else? Any synonyms/alternatives? I am asking this to be able to search for related data more easily.
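For searching, the usual name for what is wanted here is the conditional distribution of a multivariate normal (sampling from it is sometimes called conditional simulation). Below is a minimal NumPy sketch of the recipe discussed above, written with the conventional plus sign in the conditional mean, mu1 + Sigma12*inv(Sigma22)*(a - mu2), and with the Sigma12/Sigma21 ordering that makes the dimensions work out; the function and variable names are illustrative, not from any toolbox:

import numpy as np

def conditional_mvn(mu, Sigma, i2, a):
    # Mean and covariance of x[i1] given x[i2] = a, for x ~ N(mu, Sigma).
    mu = np.asarray(mu, dtype=float)
    Sigma = np.asarray(Sigma, dtype=float)
    i2 = np.asarray(i2)
    i1 = np.setdiff1d(np.arange(len(mu)), i2)   # the unobserved coordinates
    S11 = Sigma[np.ix_(i1, i1)]
    S12 = Sigma[np.ix_(i1, i2)]
    S22 = Sigma[np.ix_(i2, i2)]
    # use solve() rather than an explicit inverse, as the thread advises
    w = np.linalg.solve(S22, np.asarray(a, dtype=float) - mu[i2])
    mu_c = mu[i1] + S12 @ w                     # note the plus sign
    Sigma_c = S11 - S12 @ np.linalg.solve(S22, S12.T)
    return mu_c, Sigma_c

# one conditional draw:
# mu_c, Sigma_c = conditional_mvn(mu, Sigma, i2, a)
# x1 = np.random.default_rng().multivariate_normal(mu_c, Sigma_c)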
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/279152","timestamp":"2014-04-18T03:10:33Z","content_type":null,"content_length":"65561","record_id":"<urn:uuid:63d9aa69-5cba-49e6-9b80-c1ab450f7940>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Martin Odersky on "What's wrong with Monads"
Anton Kholomiov anton.kholomiov at gmail.com
Sun Jun 24 11:41:10 CEST 2012

Here is a half-baked idea for how to make monads more functional. It's too wild to be implemented in Haskell. But maybe you are interested more in ideas than implementations, so let's start with the monad class

class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b

I think the monad's methods are misleading, let's rename them

class Monad m where
  idM :: a -> m a
  (*$) :: (a -> m b) -> m a -> m b

We can see that `return` is a monadic identity and the `bind` is an application in disguise. So now we have two applications: the standard `($)` and the monadic `(*$)`. But they are both application. Well, isn't it something like `plusInt` and `plusDouble`? Maybe we can devise a single class for application. Let's imagine a special class `App`

class App ?? where
  ($) :: ???

As you can see it's defined so that we can fit monads and plain functions in this framework. Moreover, if we redefine this class then whitespace is redefined automatically! So `($)` really means *white space* in Haskell. `idM` is interesting too. In the standard world we can safely put `id` in any expression. So when we write

f = a + b

we can write

f = id (a + b)

or even

f = id ((id a) + (id b))

and the meaning doesn't change. So if we have a special class `Id`

class Id f where
  id :: ???

Again you can see that monads fit nicely in the type. Why do we need this class? Whenever the compiler gets a type mismatch, it tries to apply a method from the `Id` class, if it's defined of course. But we have a class called `Category`, and `id` belongs to it:

class Category (~>) where
  id :: a ~> a
  (>>) :: (a ~> b) -> (b ~> c) -> (a ~> c)

Let's pretend that `(>>)` is reversed composition `(.)`. It's interesting to note that there is another formulation of the 'Monad' class. It's called the Kleisli category.

class Kleisli m where
  idK :: a -> m a
  (>>) :: (a -> m b) -> (b -> m c) -> (a -> m c)

Here again let's forget about the monad's `(>>)` for a moment; here it's composition. `Kleisli` is equivalent to `Monad`. If we can define a `Category` instance for `Kleisli`, so that somehow these classes become unified on the type level, we can define application in terms of composition like this:

f $ a = (const a >> f) ()

And we can get application for monads (or Kleislis :) ). Maybe someday you wrote a function like this:

foo :: Boo -> Maybe Foo
foo x = case x of
  1 -> Just ...
  2 -> Just ...
  3 -> Just ...
  4 -> Just ...
  5 -> Just ...
  6 -> Just ...
  7 -> Just ...
  _ -> Nothing

With the `idM` rule you can skip all the Just's.

You can use white space as monadic bind. So functional application can become monadic on demand. Just switch the types. I've tried to unify `Category` and `Kleisli` with no luck. Here are the closest sketches:

the simplest sketch requires type functions :(

instance Monad m => Category (\a b -> a -> m b) where

the other one too :(

class Category (~>) where
  type Dom (~>) :: * -> *
  type Cod (~>) :: * -> *
  id :: Dom (~>) a -> Cod (~>) a
  (>>) :: (Dom (~>) a ~> Cod (~>) b) -> (Dom (~>) b ~> Cod (~>) c) -> ...

type Id a = a -- :(

instance Monad m => Category (a -> m b) where
  type Dom (a -> m b) = Id
  type Cod (a -> m b) = m
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2012-June/101934.html","timestamp":"2014-04-17T22:21:58Z","content_type":null,"content_length":"6665","record_id":"<urn:uuid:afa6cba4-b9a8-412d-a231-02b78934c951>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

The number sequence \(a_0, a_1, a_2, \ldots\) is defined by the recursion formula:
\[a_1 = 1\]
\[a_{n+1} = a_n - \frac{1}{n(n+1)}, \quad n \ge 1\]
Find the closed formula for the sequence and prove it by induction.

we will get a0 as 0 if we put n = 1

ok this is what we do when we have just \(a_{n+1}\) and \(a_n\) in the recursion formula
\[a_{n+1} - a_n = \frac{1}{n(n+1)}\]
\[a_n - a_{n-1} = \frac{1}{n(n-1)}\]
\[\ldots\]
\[a_2 - a_1 = \frac{1}{2}\]
and add them all

sorry cant find a0

what mukushla has done is correct
then you will get a(n+1) = 1 - sum of terms of rhs

for finding the sum of terms on rhs write each term like this
\[\frac{1}{(n)(n-1)} = \frac{n - (n-1)}{(n)(n-1)} = \frac{1}{n-1} - \frac{1}{n}\]

now when you add, consecutive terms will get cancelled and you will be left with
\[1 - \frac{1}{n+1}\]

did you understand frx?

Well, not really yet but thank you, will have to sit down and think about what you guys just wrote :)

I can't see what the closed formula is? Is it \[1 - \frac{1}{n+1}\]?

another way of seeing this is writing it in the form
\[a_{n+1} - a_n = -\frac{1}{n(n+1)} = \frac{1}{n+1} - \frac{1}{n}\]
and guessing a closed form like
\[a_n = \frac{1}{n}\]
then prove it by induction, because it says find the closed formula (even by guessing) then prove by induction

yeah what mukushla is saying is better i think

I really don't get it :/ I get the first part when subtracting a_n from both sides, but is 1/(n+1) - 1/n equal to the first expression? I can't see it

\[\frac{1}{n+1} - \frac{1}{n}\]
Looking at this expression, why should I guess the closed form is \[a_n = \frac{1}{n}\]?
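For completeness, a sketch of the argument the thread converges on (the telescoping sum and the induction step, with the signs handled carefully):

\[
a_n = a_1 - \sum_{k=1}^{n-1}\frac{1}{k(k+1)} = 1 - \sum_{k=1}^{n-1}\left(\frac{1}{k} - \frac{1}{k+1}\right) = 1 - \left(1 - \frac{1}{n}\right) = \frac{1}{n},
\]

and for the induction step, if \(a_n = 1/n\) then
\[
a_{n+1} = \frac{1}{n} - \frac{1}{n(n+1)} = \frac{(n+1) - 1}{n(n+1)} = \frac{1}{n+1}.
\]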
{"url":"http://openstudy.com/updates/506c0dd1e4b088f3c14cc7cc","timestamp":"2014-04-21T04:39:37Z","content_type":null,"content_length":"56774","record_id":"<urn:uuid:d6b5e252-5deb-4521-8966-0e4d4e527abd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Barnes' integral representation of the hypergeometric function

When \(a\), \(b\), and \(c\) are complex numbers and \(z\) is a complex number such that \(-\pi < \arg(-z) < +\pi\), and \(C\) is a contour in the complex \(s\)-plane which goes from \(-i\infty\) to \(+i\infty\), chosen such that the poles of \(\Gamma(a+s)\Gamma(b+s)\) lie to the left of \(C\) and the poles of \(\Gamma(-s)\) lie to the right of \(C\), then

\[\int_{C}{\Gamma(a+s)\Gamma(b+s)\over\Gamma(c+s)}\Gamma(-s)(-z)^{s}\,ds = 2\pi i\,{\Gamma(a)\Gamma(b)\over\Gamma(c)}\,F(a,b;c;z)\]
{"url":"http://planetmath.org/barnesintegralrepresentationofthehypergeometricfunction","timestamp":"2014-04-17T16:00:58Z","content_type":null,"content_length":"50314","record_id":"<urn:uuid:2a225a90-83a2-4096-88f1-35c44996439c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Downsample from 48khz to 8khz

I was tinkering. Wrote some code to low-pass filter and downsample a 48khz audio signal to only 8khz. Here's some example output. It filters out audio above 2.7khz or so, and then downsamples to 8khz. The audio is just a snippet of Holst's Mars, meant only as a test.

#include <stdio.h>
#include <sndfile.h>

/* links against libsndfile, e.g.: gcc downsample.c -lsndfile */

#define NTAPS (55)
#define SKIP 6

float coef[55] = {
 1.06012E-04,  2.57396E-04,  4.39093E-04,  5.24290E-04,  3.41603E-04,
-2.52071E-04, -1.25563E-03, -2.42069E-03, -3.23354E-03, -3.03919E-03,
-1.31274E-03,  1.99613E-03,  6.19226E-03,  9.80711E-03,  1.09302E-02,
 7.88736E-03,  8.59963E-05, -1.12812E-02, -2.30159E-02, -3.04057E-02,
-2.84214E-02, -1.33337E-02,  1.57597E-02,  5.60551E-02,  1.01222E-01,
 1.42716E-01,  1.71912E-01,  1.82477E-01,  1.71912E-01,  1.42716E-01,
 1.01222E-01,  5.60551E-02,  1.57597E-02, -1.33337E-02, -2.84214E-02,
-3.04057E-02, -2.30159E-02, -1.12812E-02,  8.59963E-05,  7.88736E-03,
 1.09302E-02,  9.80711E-03,  6.19226E-03,  1.99613E-03, -1.31274E-03,
-3.03919E-03, -3.23354E-03, -2.42069E-03, -1.25563E-03, -2.52071E-04,
 3.41603E-04,  5.24290E-04,  4.39093E-04,  2.57396E-04,  1.06012E-04,
} ;

int
main(void)
{
    float input[NTAPS] = { 0.0f } ;   /* delay line, starts out silent */
    SNDFILE *inp, *out ;
    SF_INFO inpinfo ;
    SF_INFO outinfo ;
    int i, j ;
    float sum ;

    inp = sf_open("input.wav", SFM_READ, &inpinfo) ;

    outinfo.samplerate = 8000 ;
    outinfo.channels = 1 ;
    outinfo.format = SF_FORMAT_WAV | SF_FORMAT_PCM_16 ;

    out = sf_open("output.wav", SFM_WRITE, &outinfo) ;

    fprintf(stderr, "%d samples/second file, %lld seconds\n",
            inpinfo.samplerate,
            (long long) (inpinfo.frames / inpinfo.samplerate)) ;

    for (i=0; i+SKIP < inpinfo.frames; i += SKIP) {
        /* pull in SKIP new samples and compute one filtered output... */
        sf_read_float(inp, input+NTAPS-SKIP, SKIP) ;
        sum = 0.0 ;
        for (j=0; j<NTAPS; j++)
            sum += coef[j] * input[j] ;
        sf_write_float(out, &sum, 1) ;
        /* ...then slide the delay line over by SKIP */
        for (j=0; j+SKIP<NTAPS; j++)
            input[j] = input[j+SKIP] ;
    }

    sf_close(inp) ;
    sf_close(out) ;
    return 0 ;
}

Addendum: Alan wanted to know how I computed the filter coefficients. Excellent question! I used Ken Steiglitz's METEOR program. Here's the input file:

55 55 smallest and largest length
left direction of pushed specs
2 number of specs pushed
3 4 specs pushed
500 number of grid points
limit spec
+ upper limit
arithmetic interpolation
not hugged spec
0.00000E+00 5.62500E-02 band edges
1.00100E+00 1.00100E+00 bounds
limit spec
- lower limit
arithmetic interpolation
not hugged spec
0.00000E+00 5.65200E-02 band edges
9.99000E-01 9.99000E-01 bounds
limit spec
+ upper limit
arithmetic interpolation
not hugged spec
2.50000E-01 5.00000E-01 band edges
1.00000E-04 1.00000E-04 bounds
limit spec
- lower limit
arithmetic interpolation
not hugged spec
2.50000E-01 5.00000E-01 band edges
0.00000E+00 0.00000E+00 bounds

The file is a little odd, but basically I set up unity gain on the interval running from 0 to 0.05625 (which is 0 to 2.7khz at a sampling rate of 48khz), and then I set the response from .25 to .5 (12khz to 24khz) to less than -40db. I then set the filter length, and told the design program to "push" the edge of constraints 3 and 4 to the left. Here's the resulting spectrum in linear scale and then log scale:

Staring at this now, I realize that this probably isn't quite working the way I intended. I need to trim all freqs above 4khz to sample at 8khz. This still has significant frequency content at 5khz, which could result in significant aliasing. Increasing the size of the filter to length 95, I ended up with the following log response. Still not quite good enough. I'll have to work on this some more.

Comment from Alan Yates
Time 7/17/2009 at 9:06 pm
How did you compute the filter kernel?
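If you want a quick check of the tap set's response without METEOR, zero-padding the taps and taking an FFT works. A sketch in Python/NumPy rather than C (paste the 55 taps from the C listing above into coef; the 4096-point padding is an arbitrary choice):

import numpy as np

coef = [ ... ]  # the 55 taps from the C listing above

H = np.fft.rfft(coef, 4096)              # zero-padded frequency response
f = np.fft.rfftfreq(4096, d=1.0/48000)   # frequency axis in Hz at 48khz
db = 20 * np.log10(np.abs(H) + 1e-12)    # magnitude in dB

# worst-case attenuation above 4khz, the new Nyquist after decimation:
print(db[f >= 4000].max())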
{"url":"http://brainwagon.org/2009/07/14/downsample-from-48khz-to-8khz/","timestamp":"2014-04-16T07:13:02Z","content_type":null,"content_length":"44576","record_id":"<urn:uuid:4b1429f7-28aa-4243-ae11-4a9f9b37066e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
fundamental theorem of calculus part 2
Correct me if I'm wrong, but isn't it this?
I think you're thinking of e
Anyway, every continuous function can be integrated, it's just that not all of them can be integrated algebraically. If you wanted to find ∫e dx, you'd need to take each and every real value of x and work out the integral at that point using numerical methods. I'd advise telling a computer to do it.
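For reference, the statement the title refers to is one of the two standard parts of the fundamental theorem (the numbering varies by textbook), both stated for a function f continuous on the interval in question:

\[
\frac{d}{dx}\int_a^x f(t)\,dt = f(x)
\qquad\text{or}\qquad
\int_a^b f(x)\,dx = F(b) - F(a), \ \text{ where } F' = f.
\]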
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=25396","timestamp":"2014-04-18T21:48:48Z","content_type":null,"content_length":"13098","record_id":"<urn:uuid:613ead70-4a3a-45d9-990c-0f16fa76155d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
EIGENQL

ABSOLUTE
Set this keyword to sort the eigenvalues by their absolute value (their magnitude) rather than by their signed value.

ASCENDING
Set this keyword to return eigenvalues in ascending order (smallest to largest). If not set or set to zero, eigenvalues are returned in descending order (largest to smallest). The eigenvectors are correspondingly reordered.

EIGENVECTORS
Set this keyword equal to a named variable that will contain the computed eigenvectors in an n by n array. The i-th row of the returned array contains the eigenvector corresponding to the i-th eigenvalue. This keyword must be initialized to a nonzero value before calling EIGENQL if the eigenvectors are desired. If no variable is supplied, the array will not be computed.

OVERWRITE
Set this keyword to use the input array for internal storage and to overwrite its previous contents.

RESIDUAL
Use this keyword to specify a named variable that will contain the residuals for each eigenvalue/eigenvector (λ/x) pair. The residual is based on the definition Ax - λx = 0 and is an array of the same size as A and the same type as Result. The rows of this array correspond to the residuals for each eigenvalue/eigenvector pair. This keyword must be initialized to a nonzero value before calling EIGENQL if the residuals are desired.

Define an n by n real, symmetric array:
A = [[ 5.0, 4.0, 0.0, -3.0], $
residual = 1 & evecs = 1
; The variables that will contain the residuals and eigenvectors must be initialized as nonzero values prior to calling EIGENQL.
eigenvalues = EIGENQL(A, EIGENVECTORS = evecs, RESIDUAL = residual)
; Compute the eigenvalues and eigenvectors.
PRINT, eigenvalues
; Print the eigenvalues.
12.0915 6.18661 1.00000
0.721870 -0.554531 -0.554531 -0.241745
0.571446 0.342981 0.342981 -0.813186
0.321646 0.707107 -0.707107 -2.58096e-08
0.00000 0.273605 0.273605 0.529422 0.754979
The accuracy of each eigenvalue/eigenvector (λ/x) pair may be checked by printing the residual array:
The RESIDUAL array has the same dimensions as the input array and the same type as the result. The residuals are contained in the rows of the RESIDUAL array. All residual values should be floating-point zeros.
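For comparison outside IDL, the same decomposition and the same residual check can be sketched in Python/NumPy. Note that eigh returns eigenvalues in ascending order and eigenvectors in columns rather than rows; the matrix here is a stand-in, since the example's full array was truncated above:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0                  # any real, symmetric test matrix

w, V = np.linalg.eigh(A)             # ascending eigenvalues; column eigenvectors
for lam, x in zip(w, V.T):
    r = A @ x - lam * x              # residual  A x - lambda x
    print(lam, np.abs(r).max())      # residuals should be floating-point zeros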
{"url":"http://www.astro.virginia.edu/class/oconnell/astr511/IDLresources/idl_5.1_html/idl89.htm","timestamp":"2014-04-16T10:54:30Z","content_type":null,"content_length":"8800","record_id":"<urn:uuid:1c321617-35f9-460d-9909-2260f73f9a40>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Three-Dimensional (3D) Graphing Tutorial

Objective: Learn how to investigate large data sets with three-dimensional (3D) graphs.

What is a 3D graph?
A 3D graph has three axes, which represent variables in the experiment or equation. The type of 3D graph we are going to be investigating has independent variables on all the axes, and the dependent variable is represented by color. This differs from surface graphs, which have the dependent variable on one axis and the independent variables on the other axes. A surface graph shows the surface of a function and is a type of 3D graph, but not the type that we are going to investigate. In this tutorial a 3D graph will have three independent variables on three axes and the dependent variable will always be represented by color. A more detailed description of the difference between a surface graph and a 3D graph is also available.

What can 3D graphing do for you?
Three-dimensional graphing allows you to:
see how the three independent variables influence one dependent variable.
view any slice of the 3D cube of data where one variable is fixed.
look into the 3D figure using "voxel volume translucency."

To email Elsa Laughlin, the SURP (Summer Undergraduate Research Program) student who wrote the tutorial: emlaughl@mhc.mtholyoke.edu
To email Dr. Ron Kriz, the Virginia Tech faculty member who will continue to expand the site after the summer of 1996: kriz@wave.esm.vt.edu
Last revised July 23, 1996
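The kind of display the tutorial describes, three independent axes plus color for the dependent variable, is easy to sketch with modern tools; for instance in Python/matplotlib (the variables here are made-up illustration data):

import numpy as np
import matplotlib.pyplot as plt

x, y, z = np.random.rand(3, 500)          # three independent variables
w = np.sin(2 * np.pi * x) * y + z         # dependent variable, mapped to color

ax = plt.figure().add_subplot(projection='3d')
sc = ax.scatter(x, y, z, c=w)
plt.colorbar(sc, label='dependent variable')
plt.show()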
{"url":"http://www.sv.vt.edu/classes/surp/surp96/laughlin/stat/3D_tutor/3D_tutor.html","timestamp":"2014-04-19T05:10:43Z","content_type":null,"content_length":"2463","record_id":"<urn:uuid:a4429e41-aa6a-47c1-a2a9-ef8a3720b9b2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Sine wave with dc offset
But if the 2 portions of a single period aren't identical, how does this affect the function? That's what's stopping me getting round to the integration part
The function does not have to be symmetrical within its period, it only has to be identical from period to period. If you write the expression for the full, untruncated-at-zero function, then integrate over only the sections that are greater than zero, then you'll be okay. For the graph that I posted above, that function is:
[itex] f(t) = A sin(\frac{2 \pi}{35} t - \phi) + Vo [/itex]
And the integration bounds would go from 0 to 22 (milliseconds) to include just the pulse.
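For the record, that integral has a closed form; a sketch, using the f(t) above with t in milliseconds and \(\omega = 2\pi/35\):

\[
\int_0^{22}\Big(A\sin(\omega t - \phi) + V_o\Big)\,dt
= \frac{35A}{2\pi}\Big(\cos\phi - \cos\big(\tfrac{44\pi}{35} - \phi\big)\Big) + 22\,V_o.
\]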
{"url":"http://www.physicsforums.com/showthread.php?t=569749","timestamp":"2014-04-16T16:16:56Z","content_type":null,"content_length":"58839","record_id":"<urn:uuid:a62e8f64-43d4-4701-86be-b83f817d38b3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
A time-scale analysis of systematic risk: wavelet-based approach
Khalfaoui Rabeh, K and Boutahar Mohamed, B (2011): A time-scale analysis of systematic risk: wavelet-based approach.

The paper studies the impact of different time-scales on the market risk of individual stock market returns and of a given portfolio in the Paris stock market by applying wavelet analysis. To investigate the scaling properties of stock market returns and the lead/lag relationship between them at different scales, wavelet variance and cross-correlation analyses are used. According to the wavelet variance, stock returns exhibit long-memory dynamics. The wavelet cross-correlation analysis shows that comovements between stock returns are stronger at higher scales (lower frequencies), i.e. scales corresponding to periods of 4 months and longer (scales 7 and 8). The wavelet analysis of systematic risk shows that all individual assets and the diversified portfolio have a multi-scale behavior, which indicates that the systematic risk measured by beta in the market model is not stable over time. The analysis of VaR at different time scales shows that risk is more concentrated at the higher-frequency dynamics (lower time scales) of the data.

Item Type: MPRA Paper
Original Title: A time-scale analysis of systematic risk: wavelet-based approach
English Title: A time-scale analysis of systematic risk: wavelet-based approach
Language: English
Keywords: Wavelets; Systematic risk; Value-at-Risk
Subjects: C - Mathematical and Quantitative Methods > C0 - General > C02 - Mathematical Methods
G - Financial Economics > G1 - General Financial Markets > G12 - Asset Pricing; Trading Volume; Bond Interest Rates
G - Financial Economics > G3 - Corporate Finance and Governance > G32 - Financing Policy; Financial Risk and Risk Management; Capital and Ownership Structure; Value of Firms; Goodwill
Item ID: 31938
Depositing User: KR KHALFAOUI
Date Deposited: 30 Jun 2011 13:20
Last Modified: 16 Feb 2013 18:41

References:
Coifman, R. R., Donoho, D. L., 1995. Translation-invariant de-noising. Lecture Notes in Statistics: Wavelet and Statistics, 125–150.
Daubechies, I., 1992. Ten lectures on wavelets. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.
Durai, S. R. S., Bhaduri, S. N., 2009. Stock prices, inflation and output: Evidence from wavelet analysis. Economic Modelling 26 (5), 1089–1092.
Fernandez, V., 2006. The CAPM and value at risk at different time-scales. International Review of Financial Analysis 15 (3), 203–219.
Gallegati, M., 2005. Stock market returns and economic activity: evidence from wavelet analysis. Computing in Economics and Finance 2005 273, Society for Computational Economics.
Gençay, R., Selçuk, F., Whitcher, B., 2002. An Introduction to Wavelets and Other Filtering Methods in Finance and Economics. Academic Press.
Gençay, R., Selçuk, F., Whitcher, B., 2005. Multiscale systematic risk. Journal of International Money and Finance 24 (1), 55–70.
He, K., Xie, C., Chen, S., Lai, K. K., 2009. Estimating VaR in crude oil market: A novel multi-scale non-linear ensemble approach incorporating wavelet analysis and neural network. Neurocomputing 72 (16-18), 3428–3438.
Heni, B., Boutahar, M., 2011. A wavelet-based approach for modelling exchange rates. Statistical Methods and Applications 20, 201–220.
In, F. H., Kim, S., 2006. Multiscale hedge ratio between the Australian stock and futures markets: Evidence from wavelet analysis. Journal of Multinational Financial Management 16 (4), 411–423.
Jorion, P., 1996.
In: Risk and Turnover in the Foreign Exchange Market. National Bureau of Economic Research, Inc.
Kim, S., In, F., 2007. On the relationship between changes in stock prices and bond yields in the G7 countries: Wavelet analysis. Journal of International Financial Markets, Institutions and Money 17 (2), 167–179.
Kim, S., In, F. H., 2005. The relationship between stock returns and inflation: new evidence from wavelet analysis. Journal of Empirical Finance 12 (3), 435–444.
Lintner, J., 1965. The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. The Review of Economics and Statistics 47 (1), 13–37.
Mallat, S. G., 1989. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 674–693.
Masih, M., Alzahrani, M., Al Titi, O., 2010. Systematic risk and time scales: New evidence from an application of wavelet approach to the emerging gulf stock markets. International Review of Financial Analysis 19 (1), 10–18.
Mondal, D., Percival, D. B., 1995. On estimation of the wavelet variance. Biometrika 82, 619–631.
Nason, G. P., Silverman, B. W., 1995. The stationary wavelet transform and some statistical applications. Springer-Verlag, pp. 281–300.
Norsworthy, J. R., Li, D., Gorener, R., 2000. Wavelet-based analysis of time series: an export from engineering to finance. IEEE Engineering Management Society 2, 126–132.
Percival, D. B., 1995. On estimation of the wavelet variance. Biometrika 82, 619–631.
Percival, D. B., Guttorp, P., 1994. Long-memory processes, the Allan variance and wavelets. Wavelets in Geophysics.
Percival, D. B., Mofjeld, H. O., 1997. Analysis of subtidal coastal sea level fluctuations using wavelets. Journal of the American Statistical Association 92 (439), 868–880.
Percival, D. B., Walden, A. T., 2000. Wavelet methods for time series analysis. Cambridge University Press.
Pesquet, J. C., Krim, H., Carfantan, H., 1996. Time invariant orthonormal wavelet representations. IEEE Transactions on Signal Processing 44, 1964–1970.
Ramsey, J. B., Zhang, Z., 1997. The analysis of foreign exchange data using waveform dictionaries. Journal of Empirical Finance 4 (4), 341–372.
Sharkasi, A., Crane, M., Ruskin, H. J., Matos, J. A., 2006. The reaction of stock markets to crashes and events: A comparison study between emerging and mature markets using wavelet transforms. Physica A: Statistical Mechanics and its Applications 368 (2), 511–521.
Sharpe, W. F., 1963. A simplified model for portfolio analysis. Journal of Financial and Quantitative Analysis 09 (02), 277–293.
Sharpe, W. F., 1964. Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance 19, 425–442.
Sortino, F. A., Meer, R. v. d., 1991. Downside risk. The Journal of Portfolio Management 17 (4), 27–31.
Tibiletti, L., Farinelli, S., 2003. Upside and downside risk with a benchmark. Atlantic Economic Journal 31 (4), 387–387.
Treynor, J. L., 1961. Market value, time, and risk. Unpublished manuscript.
Whitcher, B., Guttorp, P., Percival, D. B., 2000a. Wavelet analysis of covariance with application to atmospheric time series. Journal of Geophysical Research 105 (11), 941–962.

URI: http://mpra.ub.uni-muenchen.de/id/eprint/31938
{"url":"http://mpra.ub.uni-muenchen.de/31938/","timestamp":"2014-04-21T12:14:50Z","content_type":null,"content_length":"29172","record_id":"<urn:uuid:6467a513-83ab-45f1-98d8-687aae7b5b70>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH
Publications of (and about) Paul Erdös

Zbl.No: 686.05029
Author: Erdös, Paul; Pach, János; Pollack, Richard; Tuza, Zsolt
Title: Radius, diameter, and minimum degree. (In English)
Source: J. Comb. Theory, Ser. B 47, No.1, 73-79 (1989).
Review: The authors prove that the diameter of a connected graph G with n vertices and minimum degree \delta \geq 2 is bounded from above by [3n/(\delta+1)]-1, and that this bound is asymptotically sharp when \delta is fixed and n tends to infinity. They show an analogous result for the radius of G, and also give upper bounds for triangle-free and C^4-free connected graphs.
Reviewer: Ch.Schulz
Classif.: * 05C35 Extremal problems (graph theory)
05C38 Paths and cycles
Keywords: diameter; minimum degree; radius
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
{"url":"http://www.emis.de/classics/Erdos/cit/68605029.htm","timestamp":"2014-04-18T13:49:32Z","content_type":null,"content_length":"3382","record_id":"<urn:uuid:5c799044-44c0-4203-948a-03b9b0b1a22f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
The Computer as Crucible: An Introduction to Experimental Mathematics -- from Wolfram Library Archive
Publisher: A K Peters (Wellesley, MA)

Contents: What Is Experimental Mathematics? | What Is the Quadrillionth Decimal Place of π? | What Is That Number? | The Most Important Function in Mathematics | Evaluate the Following Integral | Serendipity | Calculating π | The Computer Knows More Math Than You Do | Take It to the Limit | Danger! Always Exercise Caution When Using the Computer | Stuff We Left Out (Until Now) | Answers and Reflections | Final Thought

For a long time, pencil and paper were considered the only tools needed by a mathematician (some might add the waste basket). As in many other areas, computers play an increasingly important role in mathematics and have vastly expanded and legitimized the role of experimentation in mathematics. How can a mathematician use a computer as a tool? What about as more than just a tool, but as a collaborator? Keith Devlin and Jonathan Borwein, two well-known mathematicians with expertise in different mathematical specialties but with a common interest in experimentation in mathematics, have joined forces to create this introduction to experimental mathematics. They cover a variety of topics and examples to give the reader a good sense of the current state of play in the rapidly growing new field of experimental mathematics. The writing is clear and the explanations are enhanced by relevant historical facts and stories of mathematicians and their encounters with the field over the years.

Keywords: Mathematics, Quadrillionth, Serendipity, Calculating, Boolean, Four color theorem, Gauss
{"url":"http://library.wolfram.com/infocenter/Books/7791/","timestamp":"2014-04-20T18:42:45Z","content_type":null,"content_length":"35380","record_id":"<urn:uuid:646d7da0-a7ac-493a-957a-28f3c5501b3d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
PHI103: Informal Logic

Posted by Ahford university on Monday, September 9, 2013 at 3:17pm.

1) In the truth table for an invalid argument,
Student Answer: on at least one row, where the premises are all true, the conclusion is true. CORRECT
on at least one row, where the premises are all true, the conclusion is false.
on all the rows where the premises are all true, the conclusion is true.
on most of the rows, where the premises are all true, the conclusion is true.
Instructor Explanation: The answer can be found in Chapter Six of An Introduction to Logic.

2. Question: The truth table for a valid deductive argument will show
Student Answer: CORRECT wherever the premises are true, the conclusion is true.
that the premises are false.
INCORRECT that some premises are true, some premises false.
wherever the premises are true, the conclusion is false.
Instructor Explanation: The answer can be found in Chapter Six of An Introduction to Logic.

3. Question: Truth tables can be used to examine
Student Answer: INCORRECT inductive arguments.
CORRECT deductive arguments.
abductive arguments.
All of the above
Instructor Explanation: The answer can be found in Chapter Six of An Introduction to Logic.

4. Question: A conditional sentence with a false antecedent is always
Student Answer: CORRECT true.
Cannot be determined.
not a sentence.
Instructor Explanation: The answer can be found in Chapter Six of An Introduction to Logic.

5. Question: Truth tables can determine which of the following?
Student Answer: CORRECT If an argument is valid
If an argument is sound
If a sentence is valid
All of the above

6. Question: "P v Q" is best interpreted as
Student Answer: P or Q but not both P and Q
CORRECT P or Q or both P and Q
Not both P or Q
P if and only if Q

7. Question: What is the truth value of the sentence "P v ~P"?
Student Answer: CORRECT True
INCORRECT False
Cannot be determined
Not a sentence

8. Question: One of the disadvantages of using truth tables is
Student Answer: INCORRECT it is difficult to keep the lines straight
T's are easy to confuse with F's.
CORRECT they grow exponentially and become too large for complex arguments.
they cannot distinguish strong inductive arguments from weak inductive arguments.

9. Question: "Julie and Kurt got married and had a baby" is best symbolized as
Student Answer: M v B
CORRECT M & B
M → B
M ↔ B

10. Question: If P is false, and Q is false, the truth-value of "P ↔ Q" is
Student Answer: false.
CORRECT true.
Cannot be determined.
All of the above.

• PHI103: Informal Logic - Writeacher, Monday, September 9, 2013 at 3:21pm
What is this? An exam??
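For question 10, writing out the full table makes the answer easy to check; a biconditional is true exactly when both sides have the same truth value:

P | Q | P ↔ Q
T | T |   T
T | F |   F
F | T |   F
F | F |   T

With P false and Q false, the last row applies, so "P ↔ Q" is true.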
{"url":"http://www.jiskha.com/display.cgi?id=1378754252","timestamp":"2014-04-20T22:41:48Z","content_type":null,"content_length":"10869","record_id":"<urn:uuid:30d40171-3f0a-41b3-8f80-de2940b0c784>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Springfield, VA Algebra Tutor
Find a Springfield, VA Algebra Tutor

...Typically, during our first meeting, we'll establish what those needs are, then determine what strategies will work best for your unique case. In the past, I've built specific resources, tools, and worksheets for each of my students. My teaching is highly individualized, and I do my very best to serve each and every student; your success is my success!
21 Subjects: including algebra 1, algebra 2, Spanish, reading

...In short, I know what colleges look for during the application process, and strongly feel I can help current high school students get into college. I have quite a lot of experience utilizing Microsoft Outlook. For one summer, I interned in an office that required nearly constant use of Microsoft Outlook to coordinate projects, manage schedules, and keep track of task progress.
33 Subjects: including algebra 1, algebra 2, English, reading

...In addition I have approximately 40 years of work experience in the aerospace field, most of it designing hardware, which always involves significant math. I am currently retired but have always enjoyed working with younger engineers and have often acted as their mentor.
12 Subjects: including algebra 1, algebra 2, physics, geometry

...Thank you. I have studied Econometrics I and Econometrics II at the PhD level at Georgetown University, and got an A in both classes. I studied Matlab Programming for Signal Processing for several years, when I was studying Electrical Engineering at Korea Advanced Institute of Science and Technology.
14 Subjects: including algebra 2, algebra 1, geometry, precalculus

...After that, I got my master's degree from George Mason University's School of Public Policy in 2010. I have been tutoring since high school. I know French, Spanish and Portuguese.
19 Subjects: including algebra 2, algebra 1, Spanish, English
{"url":"http://www.purplemath.com/springfield_va_algebra_tutors.php","timestamp":"2014-04-17T19:36:34Z","content_type":null,"content_length":"24209","record_id":"<urn:uuid:bbf25833-bf20-44a4-8e16-2086f2172510>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplying Monomials and/or Binomials and FOIL - Concept

Multiplying rational expressions is basically two simplifying problems put together. When multiplying rationals, factor both numerators and denominators and identify equivalents of one to cancel. Dividing rational expressions is the same as multiplying with one additional step: we take the reciprocal of the second fraction and change the division to multiplication.

One of the most common problem types you're going to see in your study of polynomials is going to ask you to multiply either binomials, trinomials, monomials, stuff like that, so before we get into binomials let's talk about monomials. The way you work with monomials is using the same processes you would use for distributing. You take that monomial and you multiply it by everything in the other polynomial you're multiplying by; it's like distributing. When you come to binomials it's tricky because it's like double distributing, so if the double distributing makes sense you could do it that way, or a lot of people use this acronym FOIL to help them with multiplying two binomials. FOIL is an acronym for how to multiply binomials. First of all, an acronym means the letters stand for processes; each letter in the word FOIL stands for some process we're going to do. Find the products of the First terms (that's what F is), the Outer terms, the Inner terms, and the Last terms, and write them as a polynomial; that's what FOIL stands for. It only works for a product of binomials, meaning you're multiplying two things that have two terms each; that's when you'll use the FOIL process. You could also do this with the area model for multiplying polynomials, which you guys are going to see somewhere, and we'll get into that during other videos, but FOIL is what I really want you guys to remember. It's pretty commonly used, but just please, please, please only use FOIL when multiplying two binomials.
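A quick worked instance of the process (any pair of binomials works the same way):

\[
(x+2)(x-3) = \underbrace{x\cdot x}_{\text{First}} + \underbrace{x\cdot(-3)}_{\text{Outer}} + \underbrace{2\cdot x}_{\text{Inner}} + \underbrace{2\cdot(-3)}_{\text{Last}} = x^2 - x - 6.
\]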
{"url":"https://www.brightstorm.com/math/algebra/polynomials-2/multiplying-monomials-and-or-binomials-and-foil/","timestamp":"2014-04-16T19:36:52Z","content_type":null,"content_length":"61497","record_id":"<urn:uuid:d61986ec-edfb-4d3f-9e52-2898cb447129>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
AW: efficient dc-removal filter
Duane Wise dwise at wholegrain-ds.com
Thu Apr 2 10:22:49 EST 1998

> y[n] + e[n] = (1+p)/2*x[n] - (1+p)/2*x[n-1] + p*y[n-1] + e[n-1]
>and y[n] = floor( y[n] + e[n] )
>y[n] and e[n] are, essentially, the signed MSW and unsigned LSW comprising
>the double word resulting in the full precision accumulation of the filter.
>the floor() function just truncates the LSW. p is the 1st order pole and
>should be very close to one. i think the correct formula for p is
> p = 1/cos(w) - tan(w)
>where w is the normalized radian frequency of the -3 dB corner frequency.

The formula for p is correct.

>for fixed point arithmetic, it's important that the differentiator (the
>zero) precedes the pole (the leaky integrator) otherwise the leaky
>integrator will saturate. the problem is, even though the differentiator
>kills all the DC in the input signal, the integrator produces its own DC
>due to the limit-cycling due to rounding and there is no differentiator to
>kill that DC. so, it appears to me, that the differentiator followed by
>the leaky integrator with noise shaping, is the only simple fixed-point
>solution. i think if you use this trick, you can let the pole (p) get as
>close to DC as you want.

Robert has a good point here. There are two other ways I know of to eliminate limit cycles.

1) Dithering, just like noise shaping except that the LSW added in is uniformly distributed unsigned noise calculated separately. Is there much of a difference between dithering and noise shaping? Probably not, but it makes for good debate.

2) Sign-magnitude truncation, or truncation towards zero. Truncate positive results as usual (floor function), but employ a ceiling function for negative results. This nonlinearly affects the result, however, and more so as the result gets smaller.

Enjoy,
Duane Wise (dwise at wholegrain-ds.com)
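For anyone wanting to experiment, here is a floating-point sketch of the error-feedback (noise-shaped) scheme quoted at the top, in Python rather than fixed-point code; the default pole value and the integer output grid are illustrative assumptions:

import numpy as np

def dc_block(x, p=0.999):
    # implements: y[n] + e[n] = (1+p)/2*(x[n] - x[n-1]) + p*y[n-1] + e[n-1]
    #             y[n] = floor(y[n] + e[n])
    g = (1.0 + p) / 2.0
    y = np.zeros(len(x))
    e = 0.0                       # carried quantization error (the "LSW")
    x_prev = 0.0
    y_prev = 0.0
    for n, xn in enumerate(x):
        acc = g * (xn - x_prev) + p * y_prev + e   # full-precision accumulate
        yn = np.floor(acc)        # truncate to the integer grid (the "MSW")
        e = acc - yn              # feed the dropped fraction back next sample
        y[n] = yn
        x_prev, y_prev = xn, yn
    return y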
{"url":"http://music.columbia.edu/pipermail/music-dsp/1998-April/053279.html","timestamp":"2014-04-19T17:18:09Z","content_type":null,"content_length":"4654","record_id":"<urn:uuid:c7a5e709-2cd6-4617-bed7-0d619791268d>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Gauss hypergeometric function 2F1: Series representations

Series representations
  Generalized power series
    Expansions at generic point z==z[0]
      For the function itself
    Expansions on branch cuts
      For the function itself
    Expansions at z==0
      For the function itself
        General case
        Generic formulas for main term
    Expansions at z==1
      For the function itself
        General case
        Logarithmic cases
        Generic formulas for main term
    Expansions at z==infinity
      For the function itself
        The general formulas
        Case of simple poles
        Case of double poles
        Case of canceled double poles
        Generic formulas for main term
    Expansions at z==infinity for polynomial cases
      For the function itself
  Residue representations
    General case
    Logarithmic cases
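As a quick illustration of the "Expansions at z==0" entry above, the defining Gauss series can be summed directly and checked against a library value (a throwaway sketch; the parameter values are arbitrary and the helper name is mine):

    import mpmath

    def hyp2f1_series(a, b, c, z, terms=60):
        # Gauss series around z == 0: sum_n (a)_n (b)_n / (c)_n * z^n / n!
        s, t = mpmath.mpf(0), mpmath.mpf(1)
        for n in range(terms):
            s += t
            t *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
        return s

    print(hyp2f1_series(0.5, 1.5, 2.0, 0.25))   # truncated series
    print(mpmath.hyp2f1(0.5, 1.5, 2.0, 0.25))   # reference value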
{"url":"http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/06/ShowAll.html","timestamp":"2014-04-18T05:50:04Z","content_type":null,"content_length":"83298","record_id":"<urn:uuid:c55501f8-e6b6-45ef-80e9-0fc4e3ffbb86>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
help with transformation

- let me see your file.
- about the line y = x, because two of the coordinates lie on the line itself and the other two are mirror images of each other about the line y = x
- what does that mean?
- i think D.
- im thinking d too
- yes because it will rotate due same points on x-axis.
- well D won't work because rotation by 180 would result in reflection about y axis
- then whats the answer?
- No it doesn't, @harsimran_hs4. Consider rotating the line y=x 180 degrees wrt the origin; you don't get the line y = -x
- im thinking d makes sense
- well does it?
- I vote yes :)
- well then tell me what is the reflection of point (-3,1) about line y = x ?
- ill notify you if you are correct or not okay?
- Easier to see that the reflection of the point (3,2) about the line y=x is (2,3), which is not on the parallelogram.
- well point is (-3, 1) not (3, 2) i am considering
- Well, now that you mention it, the image is (1,-3)
- (-3, -3) and (3,3) lie on the line, so after reflection they will also lie on the line and remain where they were, and (-3, 1) and (3, -1) interchange their position
- But if it's to be the reflection with respect to the line y=x, then the image of each and every point must still be on the parallelogram, and the image of (-3,1), which is (1,-3), is not on the parallelogram.
- agreed my bad!! but how does d come out to be the solution?
- Because A, B, and C were wrong? lol
- LOL!! expected some good arguments though
- I'm really not good at explaining geometric concepts, these things just come to me sorta instinctively :(
- cool got hold of it ...... FINALLY I AGREE D IS RIGHT!!
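A quick numeric check of where the thread lands, assuming the parallelogram's vertices are the four points discussed, (-3,-3), (3,3), (-3,1) and (3,-1):

    vertices = {(-3, -3), (3, 3), (-3, 1), (3, -1)}

    def reflect_y_eq_x(p):
        return (p[1], p[0])          # reflection across the line y = x

    def rotate_180(p):
        return (-p[0], -p[1])        # 180-degree rotation about the origin

    # Reflection fails: (-3, 1) maps to (1, -3), which is not a vertex.
    print(all(reflect_y_eq_x(v) in vertices for v in vertices))   # False
    # The 180-degree rotation maps the vertex set onto itself (answer D).
    print(all(rotate_180(v) in vertices for v in vertices))       # True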
{"url":"http://openstudy.com/updates/5120e7f9e4b06821731cdc81","timestamp":"2014-04-21T02:41:41Z","content_type":null,"content_length":"88307","record_id":"<urn:uuid:827449f1-ee72-408f-a4c7-9ff9c49463c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
A-level Physics (Advancing Physics)/Electric Potential Energy

Just as an object at a distance r from a sphere has gravitational potential energy, a charge at a distance r from another charge has electrical potential energy ε[elec]. This is given by the formula:

$\epsilon_{elec} = V_{elec}q$, where V[elec] is the potential difference between the two charges Q and q.

In a uniform field, voltage is given by:

$V_{elec} = E_{elec}d$, where d is distance, and E[elec] is electric field strength.

Combining these two formulae, we get:

$\epsilon_{elec} = qE_{elec}d$

For the field around a point charge, the situation is different. By the same method, we get:

$\epsilon_{elec} = \frac{-kQq}{r}$

If a charge loses electric potential energy, it must gain some other sort of energy. You should also note that force is the rate of change of energy with respect to distance, and that, therefore:

$\epsilon_{elec} = \int{F\; dr}$

The Electronvolt

The electronvolt (eV) is a unit of energy whose value in joules is numerically equal to the elementary charge (the charge of a proton or a positron) in coulombs. It is defined as the kinetic energy gained by an electron which has been accelerated through a potential difference of 1 V:

1 eV = 1.6 x 10^-19 J

For example: if a proton has an energy of 5 MeV, then in joules it will be 5 x 10^6 x 1.6 x 10^-19 = 8 x 10^-13 J. Using eV is an advantage when high-energy particles are involved, as in particle accelerators.

Summary of Electric Fields

You should now know (if you did the electric fields section in the right order) about four attributes of electric fields: force, field strength, potential energy and potential. These can be summarised by the following table:

Force: $F_{elec} = \frac{-kQq}{r^2}$            → integrate with respect to r →   Potential Energy: $\epsilon_{elec} = \frac{-kQq}{r}$
   ↓ per unit charge                                                              ↓ per unit charge
Field Strength: $E_{elec} = \frac{-kQ}{r^2}$    → integrate with respect to r →   Potential: $V_{elec} = \frac{-kQ}{r}$

This table is very similar to that for gravitational fields. The only difference is that field strength and potential are per unit charge, instead of per unit mass. This means that field strength is not the same as acceleration. Remember that integrate means 'find the area under the graph' and differentiate (the reverse process) means 'find the gradient of the graph'.

Questions

k = 8.99 x 10^9 N m^2 C^-2

1. Convert 5 x 10^-13 J to MeV.
2. Convert 0.9 GeV to J.
3. What is the potential energy of an electron at the negatively charged plate of a uniform electric field when the potential difference between the two plates is 100 V?
4. What is the potential energy of a 2 C charge 2 cm from a 0.5 C charge?
5. What is represented by the gradient of a graph of electric potential energy against distance from some charge?
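A two-line conversion utility makes the worked example and question 1 easy to check (the constant is the value used in this chapter; the function names are mine):

    EV_IN_JOULES = 1.6e-19   # 1 eV, as defined above

    def ev_to_joules(e_ev):
        return e_ev * EV_IN_JOULES

    def joules_to_ev(e_j):
        return e_j / EV_IN_JOULES

    print(ev_to_joules(5e6))     # 5 MeV -> 8e-13 J, matching the example
    print(joules_to_ev(5e-13))   # question 1: about 3.1e6 eV, i.e. roughly 3.1 MeV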
{"url":"https://en.m.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Electric_Potential_Energy","timestamp":"2014-04-16T04:22:53Z","content_type":null,"content_length":"18833","record_id":"<urn:uuid:f6782a3a-4df9-48a9-9cc2-c3aaf5ef280a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
problems with nurbs [Archive] - OpenGL Discussion and Help Forums

i've only learned about nurbs a short time ago, but it seems to me that there are differences between some math books and OpenGL. i don't understand why the number of knots is order + number of control points; i think it should be order + number of control points - 1.

a short example why: i want to describe a cubic bezier curve as a nurbs. for the cubic bezier curve i have 4 control points; the knot vector for the nurbs should be (0,0,0,1,1,1), since i want endpoint approximation and it is cubic. but now the formula used by OpenGL is 6 (knots) =? 3 (order) + 4 (control points). obviously there is something wrong, but what?? thanks for any help.
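[Editorial note appended to this archive copy, not a reply from the thread: under the convention OpenGL and most B-spline texts use, "order" means degree + 1, so a cubic has order 4, not 3. A minimal check of the bookkeeping:]

    def required_knots(order, control_points):
        # Standard B-spline (and gluNurbsCurve) convention:
        #   number of knots = order + number of control points
        return order + control_points

    # A cubic Bezier segment: degree 3 => order 4, with 4 control points.
    # Its clamped knot vector is (0,0,0,0,1,1,1,1), which is 8 knots.
    print(required_knots(4, 4))   # -> 8

    # The 6-entry vector (0,0,0,1,1,1) instead matches order 3
    # (a quadratic) with 3 control points.
    print(required_knots(3, 3))   # -> 6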
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-152864.html","timestamp":"2014-04-17T01:16:46Z","content_type":null,"content_length":"3848","record_id":"<urn:uuid:500db259-64fa-41c6-83f9-8b55afd0cbf5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Where did the "leading 1 means negative number" idea for signed ints arise from?

Even though I have read a number of articles saying that mostly 2's complement is used to represent negative numbers in a signed integer, and that it is the best method, for some reason I have this (below) stuck in my head and can't get rid of it without knowing its history: "Use the leading bit as 1 to denote negative numbers when using signed int." I have read many posts online and on StackOverflow saying that 2's complement is the best way to represent negative numbers. But my question is not about the best way; it is about the history: where did the "leading bit" concept arise and then disappear? P.S.: Also, it is not just me; a bunch of other folks were also getting confused by this.

Edit - 1: The so-called leading-1 method I mentioned is described with an example in this post: Why is two's complement used to represent negative numbers? Now I understand: the MSB of 1 signifies negative numbers. This is by the nature of 2's complement and not any special scheme. E.g., if not for the 1st bit, we can't say whether 1011 represents -5 or +11. Thanks to jamesdlin, Oli Charlesworth, and Mr Lister for asking probing questions to make me realize the correct answer.

Rant: I think there are a bunch of groups/folks who have been taught or been made to think (incorrectly) that 1011 evaluates to -3, with 1 denoting - and 011 denoting 3. The folks who ask "what my question was.." were probably taught the correct 2's complement way from the first instance they learnt it and weren't exposed to these wrong answers.

math binary unsigned signed

Some decent pointers to the history of two's and one's complement arithmetic are in this closed SO question: stackoverflow.com/questions/8041674/twos-complement-history – Michael Burr May 3 '12 at 3
Ask yourself this: you have an integer type of n bits, but you want to store negative and non-negative numbers. How would you do it? It'd make sense to divide your integer range in half and map half to negatives, half to non-negatives. Representing non-negative numbers is straightforward, and there's no reason why that shouldn't just be a direct mapping. That's one half of your range. The other half happens to have 1 as the most significant bit. – jamesdlin May 3 '12 at 6:14
I don't understand what the question is here. Are you asking "why/when was sign-magnitude invented?". Well, it's such a natural mechanism it was probably invented multiple times. And as for "disappear", well, it hasn't disappeared. – Oli Charlesworth May 3 '12 at 7:33
Agreed with @OliCharlesworth. Please clarify which system of representing signed ints is being discussed. All the answers thus far discuss two's complement because that is mentioned in the question, but the 'leading bit' term may or may not refer to two's complement, since one's complement, two's complement, sign-magnitude, and bias/excess representations ALL end up signaling the sign of the number using the most significant bit. – Brian L May 4 '12 at 0:54
2's complement also follows from basic number theory. If you consider the ring of integers mod 2**n, the larger numbers are congruent to the negative numbers. By choosing the partition between positive and negative just right, the top bit ends up indicating sign. – phkahler May 4 '12 at 19:35

5 Answers

There are several advantages to the two's-complement representation for signed integers. Let's assume 16 bits for now.
1. Non-negative numbers in the range 0 to 32,767 have the same representation in both signed and unsigned types. (Two's-complement shares this feature with ones'-complement and sign-magnitude.)

2. Two's-complement is easy to implement in hardware. For many operations, you can use the same instructions for signed and unsigned arithmetic (if you don't mind ignoring overflow). For example, -1 is represented as 1111 1111 1111 1111, and +1 as 0000 0000 0000 0001. If you add them, ignoring the fact that the high-order bit is a sign bit, the mathematical result is 1 0000 0000 0000 0000; dropping all but the low-order 16 bits gives you 0000 0000 0000 0000, which is the correct signed result. Interpreting the same operation as unsigned, you're adding 65535 + 1 and getting 0, which is the correct unsigned result (with wraparound modulo 65536).

You can think of the leading bit not as a "sign bit", but as just another value bit. In an unsigned binary representation, each bit represents 0 or 1 multiplied by the place value, and the total value is the sum of those products. The lowest bit's place value is 1, the next bit is 2, then 4, etc. In a 16-bit unsigned representation, the high-order bit's place value is 32768. In a 16-bit signed two's-complement representation, the high-order bit's place value is -32768. Try a few examples, and you'll see that everything adds up nicely. See Wikipedia for more information.

Yeah, one just needs to think about negative numbers like this: what is -1? It's what you add 1 to to get 0. Naturally, in binary it looks like this: 11...11 + 00...01 = 00...00 (just as in your example), and all negative numbers have the most significant bit set to 1 in this representation. – Alexey Frunze May 3 '12 at 8:26

It's not just about the leading bit. It's about all the bits.

Starting with addition

First let's look at how addition is done in 4-bit binary for 2 + 7:

      10    (2)
   + 111    (7)
   -----
    1001    (9)

It's the same as long addition in decimal: bit by bit, right to left.
• In the rightmost place we add 0 and 1; it makes 1, no carry.
• In the second place from the right, we add 1 and 1; that makes 2 in decimal, or 10 in binary. We write the 0, carry the 1.
• In the third place from the right, we add the 1 we carried to the 1 already there; it makes binary 10. We write the 0, carry the 1.
• The 1 that just got carried gets written in the fourth place from the right.

Long subtraction

Now we know that binary 10 + 111 = 1001, we should be able to work backwards and prove that 1001 - 10 = 111. Again, this is exactly the same as in decimal long subtraction.

    1001    (9)
   -  10    (2)
   -----
     111    (7)

Here's what we did, working right to left again:
• In the rightmost place, 1 - 0 = 1; we write that down.
• In the second place, we have 0 - 1, so we need to borrow an extra bit. We now do binary 10 - 1, which leaves 1. We write this down.
• In the third place, remember we borrowed an extra bit, so again we have 0 - 1. We use the same trick to borrow an extra bit, giving us 10 - 1 = 1, which we put in the third place of the result.
• In the fourth place, we again have a borrowed bit to deal with. Subtract the borrowed bit from the 1 already there: 1 - 1 = 0. We could write this down in front of the result, but since it's the end of the subtraction there's no need.

There's a number less than zero?!

Do you remember how you learnt about negative numbers? Part of the idea is that you can subtract any number from any other number and still get a number. So 7 - 5 is 2; 6 - 5 is 1; 5 - 5 is 0; what is 4 - 5?
Well, one way to reason about such numbers is simply to apply the same method as above to do the subtraction. As an example, let's try 2 - 7 in binary:

        10    (2)
   -   111    (7)
   -------
   ...1011

I started in the same way as before:
• In the rightmost place, subtract 1 from 0, which requires a borrowed bit. 10 - 1 = 1, so the last bit of the result is 1.
• In the second-rightmost place, we have 1 - 1 with an extra borrow bit, so we have to subtract another 1. This means we need to borrow our own bit, giving 11 - 1 - 1 = 1. We write 1 in the second-rightmost spot.
• In the third place, there are no more bits in the top number! But we know we can pretend there's a 0 in front, just like we would do if the bottom number ran out of bits. So we have 0 - 1 - 1 because of the borrow bit from second place. We have to borrow a bit again! Anyway, we have 10 - 1 - 1 = 0, which we write down in the third place from the right.
• Now something very interesting has happened: both the operands of the subtraction have no more digits, but we still have a borrow bit to take care of! Oh well, let's just carry on as we have been doing. We have 0 - 0, since neither the top nor bottom operand has any bits here, but because of the borrow bit it's actually 0 - 1. (We have to borrow again! If we keep borrowing like this we'll have to declare bankruptcy soon.) Anyway, we borrow the bit, and we get 10 - 1 = 1, which we write in the fourth place from the right.

Now anyone with half a mind is about to see that we are going to keep borrowing bits until the cows come home, because there ain't no more bits to go around! We ran out of them two places ago, if you forgot. But if you tried to keep going it'd look like this:
• In the fifth place we get 0 - 0 - 1, and we borrow a bit to get 10 - 0 - 1 = 1.
• In the sixth place we get 0 - 0 - 1, and we borrow a bit to get 10 - 0 - 1 = 1.
• In the seventh place we get 0 - 0 - 1, and we borrow a bit to get 10 - 0 - 1 = 1.
...And so it goes on for as many places as you like.

By the way, we just derived the two's complement binary form of -5. You could try this for any pair of numbers you like, and generate the two's complement form of any negative number. If you try to do 0 - 1, you'll see why -1 is represented as ...11111111. You'll also realise why all two's complement negative numbers have a 1 as their most significant bit (the "leading bit" in the original question).

In practice, your computer doesn't have infinitely many bits to store negative numbers in, so it usually stops after some more reasonable number, like 32. What do we do with the extra borrow bit in position 33? Eh, we just quietly ignore it and hope no one notices. When someone does notice that our new number system doesn't work, we call it integer overflow.

Final notes

This isn't the only way to make our number system work, of course. After all, if I owe you $5, I wouldn't say that your current balance with me was $...999999995. But there are some cool things about the system we just derived, like the fact that subtraction gives you the right result in this system, even if you ignore the fact that one of the numbers is negative. Normally, we have to think about subtractions with conditional steps: to calculate 2 - 7, we first have to figure out that 2 is less than 7, so instead we calculate 7 - 2 = 5, and then stick a minus sign in front to get 2 - 7 = -5. But with two's complement we just go ahead and do the subtraction and don't care about which number is bigger, and the right result comes out by itself.
And others have mentioned that addition works nicely, and so does multiplication.

An interesting read, but not an answer to the leading bit, which was the question. – Matsemann May 3 '12 at 9:18
@Matsemann, the answer to why the "leading bit" is 1 for negative numbers is implied in the fourth-last paragraph. I concede I should have made it more obvious. – Brian L May 4 '12 at 0:41
One of my favorite observations related to two's-complement math is that if one uses the power-series function to compute the value of ...11111 [i.e. 1+2+4+8+16+32+64...] the non-converging series sums to -1. – supercat Jun 20 '13 at 23:28

You don't use the leading bit, per se. For instance, in an 8-bit signed char, 11111111 represents -1. You can test the leading bit to determine if it is a negative number. There are a number of reasons to use 2's complement, but the first and greatest is convenience. Take the above number and add 2. What do we end up with?

      11111111    (-1)
    + 00000010    (+2)
    ----------
      00000001    (+1, with the carry out of the top bit discarded)

You can add and subtract 2's complement numbers basically for free. This was a big deal historically, because the logic is very simple; you don't need dedicated hardware to handle signed numbers. You use fewer transistors, you need a less complicated design, etc. It goes back to before 8-bit microprocessors, which didn't even have multiply instructions built in (even many 16-bit ones didn't have them, such as the 65C816 used in the Apple IIGS and Super NES). With that said, multiplication is relatively trivial with 2's complement also, so that's no big deal.

Not an answer to the question. – Matsemann May 3 '12 at 7:06

Complements (including things like 9s complement in decimal, as used in mechanical calculators / adding-machines / cash registers) have been around forever. In nines complement with four decimal digits, for instance, values in the range 0000..4999 are positive while values in 5000..9999 are negative. See http://en.wikipedia.org/wiki/Method_of_complements for details. This directly gives rise to 1s complement in binary, and in both 1s and 2s complement, the topmost bit acts as a "sign bit". Thus, while this does not explain exactly how computers moved from ones' complement to two's complement (I use Knuth's apostrophe convention when spelling these out as words with apostrophes, by the way), it does show that complement arithmetic long predates electronic computers.

In a logical sense, it does not matter which bit you use to represent signs, but for practical purposes, using the top bit, and two's complement, simplifies the hardware. Back when transistors were expensive, this was pretty important. (Or even tubes, although I think most if not all vacuum-tube computers used ones' complement. In any case they predated the C language by rather a lot.) In summary, the history goes back way before electronic computers and the C language, and there was no reason to change from a good way of implementing this mechanically, when converting from mechanical calculators to vacuum-tube ENIACs to transistorized computers and then on to "chips", MSI, LSI, VLSI, and onward.

Well, it had to work such that 2 plus -2 gives zero. Early CPUs had hardware addition and subtraction, and someone noticed that complementing all the bits to change the "sign" of the value (one's complement, the original system) allowed the existing addition hardware to work properly—except that sometimes the result was negative zero. (What is the difference between -0 and 0? On such machines, it was indeterminate.)
Someone soon realized that by using twos-complement (convert a number between negative and positive by inverting the bits and adding one), the negative zero problem was avoided. So really, it is not just the sign bit which is affected by negatives, but all of the bits except the LSB. However, by examining the MSB, one can immediately determine whether the signed value there is negative.

Complementary arithmetic surely predates electronic computers. See h2g2.com/dna/h2g2/A1920511 and infohost.nmt.edu/~borchers/tenscomp.pdf – Michael Burr May 3 '12 at 6:43
Also, you sound like they gave up the "one bit for the sign" concept long ago. However, that is not the case. Half your computer still works like that, and yes, it can have 0 and -0! – Mr Lister May 3 '12 at 6:45
@MrLister: I did not intend any such thing. I worked with a ones-complement machine in the 1970s, and it sometimes sucked to check for both -0 and +0. I don't understand your other statements. What is the twos-complement representation of an 8-bit negative zero? – wallyk May 3 '12 at 7:00
@MichaelBurr: Yes it does. While adding machines did something like nines-complement arithmetic for subtraction and negative numbers, they weren't concerned about the time and space requirements of checking variations of zero. So it did not compel a better system. – wallyk May 3 '12 at 7:05
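A short sketch that ties the answers above together: it reads a 16-bit pattern with the place values from the accepted answer, shows the wraparound example, and reproduces the ...1011 tail derived for 2 - 7 (the code is mine, not from the thread):

    def to_signed(pattern, width=16):
        # Two's-complement place values: the top bit is worth -2**(width-1),
        # every other bit keeps its ordinary positive place value.
        top = 1 << (width - 1)
        return (pattern & (top - 1)) - (pattern & top)

    print(to_signed(0xFFFF))              # -> -1   (1111 1111 1111 1111)
    print((0xFFFF + 1) & 0xFFFF)          # -> 0    (the 17th bit is dropped)
    print(to_signed((2 - 7) & 0xFFFF))    # -> -5
    print(format((2 - 7) & 0xF, '04b'))   # -> 1011, the 4-bit tail of ...1011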
{"url":"http://stackoverflow.com/questions/10425979/where-did-the-leading-1-means-negative-number-in-signed-int-arise-from","timestamp":"2014-04-18T06:24:34Z","content_type":null,"content_length":"103980","record_id":"<urn:uuid:7d86f65c-9cbc-404a-b36e-58641c72958d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
2011 October—Wolfram|Alpha Blog

With Halloween around the corner, everyone's thinking about costumes, trick-or-treating, and jack-o'-lantern carving and figuring out what to do with a 1,818 pound pumpkin. While the latter might only be true for the owners of this year's largest pumpkin, Wolfram|Alpha has something for everyone this Halloween. The nearly one-ton squash belongs to a farmer from Quebec, Canada. Besides carving it into a giant jack-o'-lantern, the next best thing to do with that much pumpkin is make enough pumpkin pie for a small town. A common recipe for a pumpkin pie calls for two cups of pumpkin. Using Wolfram|Alpha, we find that 1,818 pounds of pumpkin will allow us to make 3,550 pumpkin pies. Hopefully you are in a giving mood, so you can cut each pie into eight slices to come up with just enough to share with the entire town of Allen Park, Michigan. With 28,210 people in Allen Park and 28,400 slices of pie, you're still left with 190 slices to put in the freezer for later.
More » As most Wolfram|Alpha blog readers know, the engine behind the Wolfram|Alpha computational knowledge engine is Wolfram Research’s powerful mathematics and computation software, Mathematica. Ironically, while Wolfram|Alpha contains thousands of datasets on diverse and sundry subject areas, until very recently, its computable knowledge of the Mathematica language itself has been somewhat limited. More » Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more. Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies… Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes! Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon? Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
{"url":"http://blog.wolframalpha.com/2011/10/","timestamp":"2014-04-16T05:18:27Z","content_type":null,"content_length":"44057","record_id":"<urn:uuid:6ab13cf0-6624-4975-8d84-cef410600d0f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
2010 Chevrolet Malibu - Problems, Statistics, and Analysis

181 problems have been reported for the 2010 Chevrolet Malibu. The following chart shows the 21 most common problems for the 2010 Chevrolet Malibu. The number one most common problem is associated with the vehicle's air bags (38 problems). The second most common problem is associated with the vehicle's electrical system (18 problems).

In our research we use the PPMY index to compare the reliability of vehicles. The PPMY index is defined as the number of problems reported per thousand vehicles per year. Total sales of the 2010 Chevrolet Malibu in the United States are 198,770 units [1]. The total number of problems reported by Chevrolet Malibu owners in the last 3 years is 181, and the age of the vehicle is 3 years, so the PPMY index can be calculated as

PPMY Index = 181 / 198,770 / 3 * 1000 = 0.30

Also see a study of reliability comparison across Chevrolet Malibu model year vehicles.

The following chart illustrates the problems reported during each of the service years since the debut of the 2010 Chevrolet Malibu in 2010.

Table 2. Number of problems in the service years of the 2010 Chevrolet Malibu (columns: Service Year, Number of Problems)

When shopping for a new or used Chevrolet Malibu, make sure to check out the following table and see how the 2010 Chevrolet Malibu measures up against other Chevrolet Malibu production years. We note that the number of problems reported for the 2010 Malibu is 181, while the average number of problems reported for the 17 model years of the Chevrolet Malibu is 472.
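The PPMY formula quoted above is easy to reproduce (a throwaway sketch; the function name is mine):

    def ppmy_index(problems, units_sold, age_years):
        # Problems reported Per thousand vehicles Per Year
        return problems / units_sold / age_years * 1000

    print(round(ppmy_index(181, 198770, 3), 2))   # -> 0.3, as on this page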
{"url":"http://www.carproblemzoo.com/chevrolet/malibu/2010/","timestamp":"2014-04-20T08:14:10Z","content_type":null,"content_length":"33128","record_id":"<urn:uuid:9014a4ec-d866-4245-9be9-caeb89139c1b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Relevance Logic
First published Wed Jun 17, 1998; substantive revision Mon Mar 26, 2012

Relevance logics are non-classical logics. Called 'relevant logics' in Britain and Australasia, these systems developed as attempts to avoid the paradoxes of material and strict implication. Among the paradoxes of material implication are
• p → (q → p).
• ¬p → (p → q).
• (p → q) ∨ (q → r).
Among the paradoxes of strict implication are the following:
• (p & ¬p) → q.
• p → (q → q).
• p → (q ∨ ¬q).
Many philosophers, beginning with Hugh MacColl (1908), have claimed that these theses are counterintuitive. They claim that these formulae fail to be valid if we interpret → as representing the concept of implication that we have before we learn classical logic. Relevance logicians claim that what is unsettling about these so-called paradoxes is that in each of them the antecedent seems irrelevant to the consequent.

In addition, relevance logicians have had qualms about certain inferences that classical logic makes valid. For example, consider the classically valid inference

The moon is made of green cheese. Therefore, either it is raining in Ecuador now or it is not.

Again here there seems to be a failure of relevance. The conclusion seems to have nothing to do with the premise. Relevance logicians have attempted to construct logics that reject theses and arguments that commit "fallacies of relevance".

Relevant logicians point out that what is wrong with some of the paradoxes (and fallacies) is that the antecedents and consequents (or premises and conclusions) are on completely different topics. The notion of a topic, however, would seem not to be something that a logician should be interested in — it has to do with the content, not the form, of a sentence or inference. But there is a formal principle that relevant logicians apply to force theorems and inferences to "stay on topic". This is the variable sharing principle. The variable sharing principle says that no formula of the form A → B can be proven in a relevance logic if A and B do not have at least one propositional variable (sometimes called a proposition letter) in common, and that no inference can be shown valid if the premises and conclusion do not share at least one propositional variable. (A mechanical check of this principle is sketched at the end of this introduction.)

At this point some confusion is natural about what relevant logicians are attempting to do. The variable sharing principle is only a necessary condition that a logic must have to count as a relevance logic. It is not sufficient. Moreover, this principle does not give us a criterion that eliminates all of the paradoxes and fallacies. Some remain paradoxical or fallacious even though they satisfy variable sharing. As we shall see, however, relevant logic does provide us with a relevant notion of proof in terms of the real use of premises (see the section "Proof Theory" below), but it does not by itself tell us what counts as a true (and relevant) implication. It is only when the formal theory is put together with a philosophical interpretation that it can do this (see the section "Semantics for Relevant Implication" below).

In this article we will give a brief and relatively non-technical overview of the field of relevance logic. Our exposition of relevant logic is backwards compared to most found in the literature. We will begin, rather than end, with the semantics, since most philosophers at present are semantically inclined. The semantics that I present here is the ternary relation semantics due to Richard Routley and Robert K. Meyer.
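The variable sharing principle lends itself to a mechanical check. A minimal sketch, assuming propositional variables are single lowercase letters (the representation and names are mine, not from the literature):

    import re

    def prop_vars(formula):
        return set(re.findall(r'[a-z]', formula))

    def shares_variable(antecedent, consequent):
        # Necessary condition for A -> B to be provable in a relevance logic.
        return bool(prop_vars(antecedent) & prop_vars(consequent))

    print(shares_variable('p & ~p', 'q'))         # False: (p & ~p) -> q fails the test
    print(shares_variable('p', 'q -> q'))         # False: p -> (q -> q) fails it too
    print(shares_variable('p', '(p -> q) -> q'))  # True: sharing is necessary, not sufficient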
This semantics is a development of Alasdair Urquhart's “semilattice semantics” (Urquhart 1972). There is a similar semantics (which is also based on Urquhart's ideas), due to Kit Fine, that was developed at the same time as the Routley-Meyer theory (Fine 1974). And there is an algebraic semantics due to J. Michael Dunn. Urquhart's, Fine's, and Dunn's models are very interesting in their own right, but we do not have room to discuss them here. The idea behind the ternary relation semantics is rather simple. Consider C.I. Lewis' attempt to avoid the paradoxes of material implication. He added a new connective to classical logic, that of strict implication. In post-Kripkean semantic terms, A ⊰ B is true at a world w if and only if for all w′ such that w′ is accessible to w, either A fails in w′ or B obtains there. Now, in Kripke's semantics for modal logic, the accessibility relation is a binary relation. It holds between pairs of worlds. Unfortunately, from a relevant point of view, the theory of strict implication is still irrelevant. That is, we still make valid formulae like p ⊰ (q ⊰ q). We can see quite easily that the Kripke truth condition forces this formula on us. Like the semantics of modal logic, the semantics of relevance logic relativises truth of formulae to worlds. But Routley and Meyer go modal logic one better and use a three-place relation on worlds. This allows there to be worlds at which q → q fails and that in turn allows worlds at which p → (q → q) fails. Their truth condition for → on this semantics is the following: A → B is true at a world a if and only if for all worlds b and c such that Rabc (R is the accessibility relation) either A is false at b or B is true at c. For people new to the field it takes some time to get used to this truth condition. But with a little work it can be seen to be just a generalisation of Kripke's truth condition for strict implication (just set b = c). The ternary relation semantics can be adapted to be a semantics for a wide range of logics. Placing different constraints on the relation makes valid different formulae and inferences. For example, if we constrain the relation so that Raaa holds for all worlds a, then we make it true that if (A → B) & A is true at a world, then B is also true there. Given other features of the Routley-Meyer semantics, this makes the thesis ((A → B) & A) → B valid. If we make the ternary relation symmetrical in its first two places, that is, we constrain it so that, for all worlds a, b, and c, if Rabc then Rbac, then we make valid the thesis A → ((A → B) → B). The ternary accessibility relation needs a philosophical interpretation in order to give relevant implication a real meaning on this semantics. Recently there have been three interpretations developed based on theories about the nature of information. One interpretation of the ternary relation, due to Dunn, develops the idea behind Urquhart's semilattice semantics. On Urquhart's semantics, instead of treating indices as possible (or impossible) worlds, they are taken to be pieces of information. In the semilattice semantics, an operator ° combines the information of two states — a°b is the combination of the information in a and b. The Routley-Meyer semantics does not contain a combination or “fusion” operator on worlds, but we can get an approximation of it using the ternary relation. On Dunn's reading, ‘Rabc’ says that “the combination of the information states a and b is contained in the information state c” (Dunn 1986). 
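To see the ternary truth condition in action, here is a toy evaluator over a finite frame; the frame and valuation are contrived (my own illustration, not from the literature) so that q → q fails at one world and p → (q → q) therefore fails at another:

    def true_at(w, formula, val, R):
        """Formulas are tuples: ('var', 'p') or ('imp', A, B)."""
        if formula[0] == 'var':
            return w in val[formula[1]]
        if formula[0] == 'imp':
            # Routley-Meyer clause: A -> B holds at a iff for all b, c
            # with Rabc, either A fails at b or B holds at c.
            return all((not true_at(b, formula[1], val, R))
                       or true_at(c, formula[2], val, R)
                       for (a, b, c) in R if a == w)
        raise ValueError(formula[0])

    R = {(0, 0, 1), (1, 0, 1)}      # ternary accessibility triples Rabc
    val = {'p': {0}, 'q': {0}}      # q holds at world 0 only

    q_imp_q = ('imp', ('var', 'q'), ('var', 'q'))
    print(true_at(1, q_imp_q, val, R))                         # False: q -> q fails at world 1
    print(true_at(0, ('imp', ('var', 'p'), q_imp_q), val, R))  # False: the paradox is blocked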
Another interpretation is suggested in Jon Barwise (1993) and developed in Restall (1996). On this view, worlds are taken to be information-theoretic "sites" and "channels". A site is a context in which information is received and a channel is a conduit through which information is transferred. Thus, for example, when the BBC news appears on the television in my living room, we can consider the living room to be a site and the wires, satellites, and so on, that connect my television to the studio in London to be a channel. Using channel theory to interpret the Routley-Meyer semantics, we take Rabc to mean that a is an information-theoretic channel between sites b and c. Thus, we take A → B to be true at a if and only if, whenever a connects a site b at which A obtains to a site c, B obtains at c.

Similarly, Mares (1997) uses a theory of information due to David Israel and John Perry (1990). In addition to other information, a world contains informational links, such as laws of nature, conventions, and so on. For example, a Newtonian world will contain the information that all matter attracts all other matter. In information-theoretic terms, this world contains the information that two things' being material carries the information that they attract each other. On this view, Rabc if and only if, according to the links in a, all the information carried by what obtains in b is contained in c. Thus, for example, if a is a Newtonian world and the information that x and y are material is contained in b, then the information that x and y attract each other is contained in c.

Another interpretation is developed in Mares (2004). This interpretation takes the Routley-Meyer semantics to be a formalisation of the notion of "situated implication". This interpretation takes the "worlds" of the Routley-Meyer semantics to be situations. A situation is a perhaps partial representation of the universe. The information contained in two situations, a and b, might allow us to infer further information about the universe that is contained in neither situation. Thus, for example, suppose in our current situation that we have the information contained in the laws of the theory of general relativity (this is Einstein's theory of gravity). Then we hypothesise a situation in which we can see a star moving in an ellipse. Then, on the basis of the information that we have and the hypothesised situation, we can infer that there is a situation in which there is a very heavy body acting on this star.

We can model situated inference using a relation I (for "implication"). Then we have IabP, where P is a proposition, if and only if the information in a and b together licenses the inference to there being a situation in which P holds. We can think of a proposition itself as a set of situations. We set A → B to hold at a if and only if, for all situations b in which A holds, Iab|B|, where |B| is the set of situations at which B is true. We set Rabc to hold if and only if c belongs to every proposition P such that IabP. With the addition of the postulate that, for any set X of propositions each of whose members P satisfies IabP, the intersection ∩X also satisfies Iab∩X, we find that the implications that are made true on any situation using the truth condition that appeals to I are the same as those that are made true by the Routley-Meyer truth condition. Thus, the notion of situated inference gives a way of understanding the Routley-Meyer semantics.
(This is a very brief version of the discussion of situated inference that is in chapters 2 and 3 of Mares (2004).)

By itself, the use of the ternary relation is not sufficient to avoid all the paradoxes of implication. Given what we have said so far, it is not clear how the semantics can avoid paradoxes such as (p & ¬p) → q and p → (q ∨ ¬q). These paradoxes are avoided by the inclusion of inconsistent and non-bivalent worlds in the semantics. For, if there were no worlds at which p & ¬p holds, then, according to our truth condition for the arrow, (p & ¬p) → q would also hold everywhere. Likewise, if q ∨ ¬q held at every world, then p → (q ∨ ¬q) would be universally true.

An approach to relevance that does not require the ternary relation is due to Routley and Loparic (1978) and Priest (1992) and (2008). This semantics uses a set of worlds and a binary relation, S. Worlds are divided into two categories: normal worlds and non-normal worlds. An implication A → B is true at a normal world a if and only if for all worlds b, if A is true at b then B is also true at b. At non-normal worlds, the truth values for implications are random. Some may be true and others false. A formula is valid if and only if it is true at the normal worlds of every such model. This division of worlds into normal and non-normal, and the use of random truth values for implications at non-normal worlds, enables us to find countermodels for formulas such as p → (q → q). Priest interprets non-normal worlds as the worlds that correspond to "logic fictions". In a science fiction, the laws of nature may be different from those in our universe. Similarly, in a logic fiction the laws of logic may be different from our laws. For example, A → A may fail to be true in some logic fiction. The worlds that such fictions describe are non-normal worlds.

One problem with the semantics without the ternary relation is that it is difficult to use it to characterize as wide a range of logical systems as can be done with the ternary relation. In addition, the logics determined by this semantics are quite weak. For example, they do not have as a theorem the transitivity of implication — ((A → B) & (B → C)) → (A → C). Like the ternary relation semantics, this semantics requires some worlds to be inconsistent and some to be non-bivalent.

The use of non-bivalent and inconsistent worlds requires a non-classical truth condition for negation. In the early 1970s, Richard and Val Routley invented their "star operator" to treat negation. The operator is an operator on worlds. For each world a, there is a world a*. And ¬A is true at a if and only if A is false at a*. Once again, we have the difficulty of interpreting a part of the formal semantics. One interpretation of the Routley star is that of Dunn (1993). Dunn uses a binary relation, C, on worlds. Cab means that b is compatible with a. a*, then, is the maximal world (the world containing the most information) that is compatible with a.

There are other semantics for negation. One, due to Dunn and developed by Routley, is a four-valued semantics. This semantics is treated in the entry on paraconsistent logics. Other treatments of negation, some of which have been used for relevant logics, can be found in Wansing (2001).

Proof Theory

There is now a large variety of approaches to proof theory for relevant logics. There is a sequent calculus for the negation-free fragment of the logic R due to Gregory Mints (1972) and J.M.
Dunn (1973) and an elegant and very general approach called "Display Logic" developed by Nuel Belnap (1982). For the former, see the supplementary document: Logic R. But here I will only deal with the natural deduction system for the relevant logic R due to Anderson and Belnap.

Anderson and Belnap's natural deduction system is based on Fitch's natural deduction systems for classical and intuitionistic logic. The easiest way to understand this technique is by looking at an example:

1. A[{1}]              Hyp
2. (A → B)[{2}]        Hyp
3. B[{1,2}]            1,2, → E

This is a simple case of modus ponens. The numbers in set brackets indicate the hypotheses used to prove the formula. We will call them 'indices'. The indices in the conclusion indicate which hypotheses are really used in the derivation of the conclusion. In the following "proof" the second premise is not really used:

1. A[{1}]              Hyp
2. B[{2}]              Hyp
3. (A → B)[{3}]        Hyp
4. B[{1,3}]            1,3, → E

This "proof" really just shows that the inference from A and A → B to B is relevantly valid. Because the number 2 does not appear in the subscript on the conclusion, the second "premise" does not really count as a premise.

Similarly, when an implication is proven relevantly, the assumption of the antecedent must really be used to prove the conclusion. Here is an example of the proof of an implication:

1. A[{1}]                    Hyp
2. (A → B)[{2}]              Hyp
3. B[{1,2}]                  1,2, → E
4. ((A → B) → B)[{1}]        2,3, → I
5. A → ((A → B) → B)         1,4, → I

When we discharge a hypothesis, as in lines 4 and 5 of this proof, the number of the hypothesis must really occur in the subscript of the formula that is to become the consequent of the implication.

Now, it might seem that the system of indices allows irrelevant premises to creep in. One way in which it might appear that irrelevances can intrude is through the use of a rule of conjunction introduction. That is, it might seem that we can always add in an irrelevant premise by doing, say, the following:

1. A[{1}]              Hyp
2. B[{2}]              Hyp
3. (A & B)[{1,2}]      1,2, &I
4. B[{1,2}]            3, &E
5. (B → B)[{1}]        2,4, → I
6. A → (B → B)         1,5, → I

To a relevance logician, the first premise is completely out of place here. To block moves like this, Anderson and Belnap give the following conjunction introduction rule:

From A[i] and B[i] to infer (A & B)[i].

This rule says that two formulae to be conjoined must have the same index before the rule of conjunction introduction can be used. There is, of course, a lot more to the natural deduction system (see Anderson and Belnap 1975 and Anderson, Belnap, and Dunn 1992), but this will suffice for our purposes. The theory of relevance that is captured by at least some relevant logics can be understood in terms of how the corresponding natural deduction system records the real use of premises.

In the work of Anderson and Belnap the central systems of relevance logic were the logic E of relevant entailment and the system R of relevant implication. The relationship between the two systems is that the entailment connective of E was supposed to be a strict (i.e. necessitated) relevant implication. To compare the two, Meyer added a necessity operator to R (to produce the logic NR). Larisa Maksimova, however, discovered that NR and E are importantly different — that there are theorems of NR (on the natural translation) that are not theorems of E. This has left some relevant logicians with a quandary.
They have to decide whether to take NR to be the system of strict relevant implication, or to claim that NR was somehow deficient and that E stands as the system of strict relevant implication. (Of course, they can accept both systems and claim that E and R have a different relationship to one another.)

On the other hand, there are those relevance logicians who reject both R and E. There are those, like Arnon Avron, who accept logics stronger than R (Avron 1990). And there are those, like Ross Brady, John Slaney, Steve Giambrone, Richard Sylvan, Graham Priest, Greg Restall, and others, who have argued for the acceptance of systems weaker than R or E. One extremely weak system is the logic S of Robert Meyer and Errol Martin. As Martin has proven, this logic contains no theorems of the form A → A. In other words, according to S, no proposition implies itself and no argument of the form 'A, therefore A' is valid. Thus, this logic does not make valid any circular arguments. For more details on these logics see the supplements on the logic E, logic R, logic NR, and logic S.

Among the points in favour of weaker systems is that, unlike R or E, many of them are decidable. Another feature of some of these weaker logics that makes them attractive is that they can be used to construct a naïve set theory. A naïve set theory is a theory of sets that includes as a theorem the naïve comprehension axiom, viz., for all formulae A(y), ∃x∀y(y ∈ x ↔ A(y)). In set theories based on strong relevant logics, like E and R, as well as in classical set theory, if we add the naïve comprehension axiom, we are able to derive any formula at all. Thus, naïve set theories based on systems such as E and R are said to be "trivial". Here is an intuitive sketch of the proof of the triviality of a naïve set theory using principles of inference from the logic R. Let p be an arbitrary proposition:

1. ∃x∀y(y ∈ x ↔ (y ∈ y → p))               Naïve Comprehension
2. ∀y(y ∈ z ↔ (y ∈ y → p))                 1, Existential Instantiation
3. z ∈ z ↔ (z ∈ z → p)                     2, Universal Instantiation
4. z ∈ z → (z ∈ z → p)                     3, df of ↔, &-Elimination
5. (z ∈ z → (z ∈ z → p)) → (z ∈ z → p)     Axiom of Contraction
6. z ∈ z → p                               4,5, Modus Ponens
7. (z ∈ z → p) → z ∈ z                     3, df of ↔, &-Elimination
8. z ∈ z                                   6,7, Modus Ponens
9. p                                       6,8, Modus Ponens

Thus we show that any arbitrary proposition is derivable in this naïve set theory. This is the infamous Curry Paradox.

The existence of this paradox has led Grishin, Brady, Restall, Priest, and others to abandon the axiom of contraction ((A → (A → B)) → (A → B)). Brady has shown that by removing contraction, plus some other key theses, from R we obtain a logic that can accept naïve comprehension without becoming trivial (Brady 2005). In terms of the natural deduction system, the presence of contraction corresponds to allowing premises to be used more than once. Consider the following proof:

1. A → (A → B)[{1}]            Hyp
2. A[{2}]                      Hyp
3. A → B[{1,2}]                1,2, → E
4. B[{1,2}]                    2,3, → E
5. A → B[{1}]                  2–4, → I
6. (A → (A → B)) → (A → B)     1–5, → I

What enables the derivation of contraction is the fact that our subscripts are sets. We do not keep track of how many times (more than once) a hypothesis is used in its derivation. In order to reject contraction, we need a way of counting the number of uses of hypotheses. Thus natural deduction systems for contraction-free systems use "multisets" of relevance numerals instead of sets — these are structures in which the number of occurrences of a particular numeral counts, but the order in which they occur does not.
Even weaker systems can be constructed, which keep track also of the order in which hypotheses are used (see Read 1986 and Restall 2000).

Apart from the motivating applications of providing better formalisms of our pre-formal notions of implication and entailment and providing a basis for naïve set theory, relevance logic has been put to various uses in philosophy and computer science. Here I will list just a few.

Dunn has developed a theory of intrinsic and essential properties based on relevant logic. This is his theory of relevant predication. Briefly put, a thing i has a property F relevantly iff ∀x(x=i → F(x)). Informally, an object has a property relevantly if being that thing relevantly implies having that property. Since the truth of the consequent of a relevant implication is by itself insufficient for the truth of that implication, things can have properties irrelevantly as well as relevantly. Dunn's formulation would seem to capture at least one sense in which we use the notion of an intrinsic property. Adding modality to the language allows for a formalisation of the notion of an essential property as a property that is had both necessarily and intrinsically (see Anderson, Belnap, and Dunn 1992, §74).

Relevant logic has been used as the basis for mathematical theories other than set theory. Meyer has produced a variation of Peano arithmetic based on the logic R. Meyer gave a finitary proof that his relevant arithmetic does not have 0 = 1 as a theorem. Thus Meyer solved one of Hilbert's central problems in the context of relevant arithmetic; he showed using finitary means that relevant arithmetic is absolutely consistent. This makes relevant Peano arithmetic an extremely interesting theory. Unfortunately, as Meyer and Friedman have shown, relevant arithmetic does not contain all of the theorems of classical Peano arithmetic. Hence we cannot infer from this that classical Peano arithmetic is absolutely consistent (see Meyer and Friedman 1992).

Anderson (1967) formulated a system of deontic logic based on R and, more recently, relevance logic has been used as a basis for deontic logic by Mares (1992) and Lou Goble (1999). These systems avoid some of the standard problems with more traditional deontic logics. One problem that standard deontic logics face is that they make valid the inference from A's being a theorem to OA's being a theorem, where 'OA' means 'it ought to be that A'. The reason that this problem arises is that it is now standard to treat deontic logic as a normal modal logic. On the standard semantics for modal logic, if A is valid, then it is true at all possible worlds. Moreover, OA is true at a world a if and only if A is true at every world accessible to a. Thus, if A is a valid formula, then so is OA. But it seems silly to say that every valid formula ought to be the case. Why ought it to be the case that either it is now raining in Ecuador or it is not? In the semantics for relevant logics, not every world makes true every valid formula. Only a special class of worlds (sometimes called "base worlds" and sometimes called "normal worlds") make true the valid formulae. Any valid formula can fail at a world. By allowing these "non-normal worlds" in our models, we invalidate this problematic rule.

Other sorts of modal operators have been added to relevant logic as well. See Fuhrmann (1990) for a general treatment of relevant modal logic and Wansing (2002) for a development and application of relevant epistemic logic.
Routley and Val Plumwood (1989) and Mares and André Fuhrmann (1995) present theories of counterfactual conditionals based on relevant logic. Their semantics adds to the standard Routley-Meyer semantics an accessibility relation that holds between a formula and two worlds. On Routley and Plumwood's semantics, A>B holds at a world a if and only if for all worlds b such that SAab, B holds at b. Mares and Fuhrmann's semantics is slightly more complex: A>B holds at a world a if and only if for all worlds b such that SAab, A → B holds at b (also see Brady (ed.) 2002, §10 for details of both semantics). Mares (2004) presents a more complex theory of relevant conditionals that includes counterfactual conditionals. All of these theories avoid the analogues of the paradoxes of implication that appear in standard logics of counterfactuals.

Relevant logics have been used in computer science as well as in philosophy. Linear logic — a branch of logic initiated by Jean-Yves Girard — is a logic of computational resources. Linear logicians read an implication A → B as saying that having a resource of type A allows us to obtain something of type B. If we have A → (A → B), then we know that we can obtain a B from two resources of type A. But this does not mean that we can get a B from a single resource of type A, i.e. we don't know whether we can obtain A → B. Hence, contraction fails in linear logic. Linear logics are, in fact, relevant logics that lack contraction and the distribution of conjunction over disjunction ((A & (B ∨ C)) → ((A & B) ∨ (A & C))). They also include two operators (! and ?) that are known as "exponentials". Putting an exponential in front of a formula gives that formula the ability to act classically, so to speak. For example, just as in standard relevance logic, we cannot usually merely add an extra premise to a valid inference and have it remain valid. But we can always add a premise of the form !A to a valid inference and have it remain valid. Linear logic also has contraction for formulae of the form !A, i.e., it is a theorem of these logics that (!A → (!A → B)) → (!A → B) (see Troelstra 1992). The use of ! allows for the treatment of resources "that can be duplicated or ignored at will" (Restall 2000, p. 56). For more about linear logic, see the entry on substructural logic.

An extremely good, although slightly out of date, bibliography on relevance logic was put together by Robert Wolff and is in Anderson, Belnap, and Dunn (1992). What follows is a brief list of introductions to and books about relevant logic and works that are referred to above.

• Anderson, A.R. and N.D. Belnap, Jr., 1975, Entailment: The Logic of Relevance and Necessity, Princeton: Princeton University Press, Volume I. Anderson, A.R., N.D. Belnap, Jr. and J.M. Dunn (1992) Entailment, Volume II. [These are both collections of slightly modified articles on relevance logic together with a lot of material unique to these volumes. Excellent work and still the standard books on the subject. But they are very technical and quite difficult.]
• Brady, R.T., 2005, Universal Logic, Stanford: CSLI Publications. [A difficult, but extremely important book, which gives details of Brady's semantics and his proofs that naïve set theory and higher order logic based on his weak relevant logic are consistent.]
• Dunn, J.M., 1986, "Relevance Logic and Entailment" in F. Guenthner and D. Gabbay (eds.), Handbook of Philosophical Logic, Volume 3, Dordrecht: Reidel, pp. 117–24.
[Dunn has rewritten this piece together with Greg Restall and the new version has appeared in volume 6 of the new edition of the Handbook of Philosophical Logic, Dordrecht: Kluwer, 2002, pp. 1–128.]
• Mares, E.D., 2004, Relevant Logic: A Philosophical Interpretation, Cambridge: Cambridge University Press.
• Mares, E.D. and R.K. Meyer, 2001, "Relevant Logics" in L. Goble (ed.), The Blackwell Guide to Philosophical Logic, Oxford: Blackwell.
• Paoli, F., 2002, Substructural Logics: A Primer, Dordrecht: Kluwer. [Excellent and clear introduction to a field of logic that includes relevance logic.]
• Priest, G., 2008, An Introduction to Non-Classical Logic: From If to Is, Cambridge: Cambridge University Press. [A very good and extremely clear presentation of relevant and other non-classical logics that uses a tableau approach to proof theory.]
• Read, S., 1988, Relevant Logic, Oxford: Blackwell. [A very interesting and fun book. Idiosyncratic, but philosophically adept and excellent on the pre-history and early history of relevance logic.]
• Restall, G., 2000, An Introduction to Substructural Logics, London: Routledge. [Excellent and clear introduction to a field of logic that includes relevance logic.]
• Rivenc, François, 2005, Introduction à la logique pertinente, Paris: Presses Universitaires de France. [In French. Gives a "structural" interpretation of relevant logic, which is largely proof theoretic. The structures involved are structures of premises in a sequent calculus.]
• Routley, R., R.K. Meyer, V. Plumwood and R. Brady, 1983, Relevant Logics and its Rivals (Volume I), Atascadero, CA: Ridgeview. [A very useful book for formal results especially about the semantics of relevance logics. The introduction and philosophical remarks are full of "Richard Routleyisms". They tend to be Routley's views rather than the views of the other authors and are fairly radical even for relevant logicians. Volume II updates Volume I and includes other topics such as conditionals, quantification, and decision procedures: R. Brady (ed.), Relevant Logics and their Rivals (Volume II), Aldershot: Ashgate, 2003.]
• Goldblatt, R., 2011, Quantifiers, Propositions and Identity: Admissible Semantics for Quantified Modal and Substructural Logics, Cambridge: Cambridge University Press. [A detailed account of the admissible semantics for quantified logic, applied to both modal and relevance logic, and provides a new type of semantics for quantified relevance logic, the "cover semantics".]
• Anderson, A.R., 1967, "Some Nasty Problems in the Formal Logic of Ethics," Noûs, 1: 354–360.
• Avron, Arnon, 1990, "Relevance and Paraconsistency — A New Approach," The Journal of Symbolic Logic, 55: 707–732.
• Barwise, J., 1993, "Constraints, Channels and the Flow of Information," in P. Aczel, et al. (eds.), Situation Theory and Its Applications (Volume 3), Stanford: CSLI Publications, pp. 3–27.
• Belnap, N.D., 1982, "Display Logic," Journal of Philosophical Logic, 11: 375–417.
• Brady, R.T., 1989, "The Non-Triviality of Dialectical Set Theory," in G. Priest, R. Routley and J. Norman (eds.), Paraconsistent Logic, Munich: Philosophia Verlag, pp. 437–470.
• Dunn, J.M., 1973, (Abstract) "A 'Gentzen System' for Positive Relevant Implication," The Journal of Symbolic Logic, 38: 356–357.
• Dunn, J.M., 1993, "Star and Perp," Philosophical Perspectives, 7: 331–357.
• Fine, K., 1974, "Models for Entailment," Journal of Philosophical Logic, 3: 347–372.
• Fuhrmann, A., 1990, "Models for Relevant Modal Logics," Studia Logica, 49: 501–514.
• Goble, L., 1999, "Deontic Logic with Relevance" in P. McNamara and H. Prakken (eds.), Norms, Logics and Information Systems, Amsterdam: ISO Press, pp. 331–346.
• Grishin, V.N., 1974, "A Non-Standard Logic and its Application to Set Theory," Studies in Formalized Languages and Non-Classical Logics (Russian), Moscow: Nauka.
• Israel, D. and J. Perry, 1990, "What is Information?," in P.P. Hanson (ed.), Information, Language, and Cognition, Vancouver: University of British Columbia Press, pp. 1–19.
• MacColl, H., 1908, "'If' and 'imply'," Mind, 17: 151–152, 453–455.
• Mares, E.D., 1992, "Andersonian Deontic Logic," Theoria, 58: 3–20.
• Mares, E.D., 1997, "Relevant Logic and the Theory of Information," Synthese, 109: 345–360.
• Mares, E.D. and A. Fuhrmann, 1995, "A Relevant Theory of Conditionals," Journal of Philosophical Logic, 24: 645–665.
• Meyer, R.K. and H. Friedman, 1992, "Whither Relevant Arithmetic?," The Journal of Symbolic Logic, 57: 824–831.
• Rantala, V., 1982, "Quantified Modal Logic: Non-Normal Worlds and Propositional Attitudes," Studia Logica, 41: 41–65.
• Restall, G., 1996, "Information Flow and Relevant Logics," in J. Seligman and D. Westerstahl (eds.), Logic, Language and Computation (Volume 1), Stanford: CSLI Publications, pp. 463–478.
• Routley, R. and A. Loparic, 1978, "Semantical Analysis of Arruda-da Costa P Systems and Adjacent Non-Replacement Relevant Systems," Studia Logica, 37: 301–322.
• Troelstra, A.S., 1992, Lectures on Linear Logic, Stanford: CSLI Publications.
• Urquhart, A., 1972, "Semantics for Relevant Logics," The Journal of Symbolic Logic, 37: 159–169.
• Wansing, H., 2001, "Negation," in L. Goble (ed.), The Blackwell Guide to Philosophical Logic, Oxford: Blackwell, pp. 415–436.
• Wansing, H., 2002, "Diamonds are a Philosopher's Best Friends," Journal of Philosophical Logic, 31: 591–612.

Related entries: logic: modal | logic: paraconsistent | logic: substructural | mathematics: inconsistent
{"url":"http://plato.stanford.edu/entries/logic-relevance/index.html","timestamp":"2014-04-16T07:36:31Z","content_type":null,"content_length":"58457","record_id":"<urn:uuid:56e6b51e-7e6a-438e-8959-05dce66d0d73>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Are Weber Polynomial Coefficients Smaller than Hilbert Polynomial Coefficients?

The title says it all. Singular moduli of the j-function satisfy polynomials, but as the class number grows, these polynomial coefficients become very large. Weber functions are modular (not over the full modular group), and their values also satisfy polynomials. But the Weber polynomials tend to have much smaller coefficients. Why?

Tags: nt.number-theory, class-field-theory

Comment: The question "why?" sounds too philosophical. One simply tries to determine the corresponding size (for level 4 it is done in [V.D. Mirokov, Math. Notes 86 (2009) 216–233; dx.doi.org/10.1134/S0001434609070244] while classical level 1 is treated in [P. Tretkoff, Math. Proc. Cambridge Philos. Soc. 95 (1984) 389–402; dx.doi.org/10.1017/S0305004100061697]) and then compare the results. – Wadim Zudilin Dec 22 '10 at 23:28
Nice references. Thank you, Wadim. – Steven Heston Dec 24 '10 at 21:35

Accepted answer: Simply because they satisfy an equation of the form $P(f)-fj$ for some polynomial $P$. This immediately implies that the height of $f(z)$ will be around $1/\deg(P)$ of the height of $j(z)$, or more precisely, asymptotic to it as the discriminant of $z$ goes to infinity. See A. Enge and F. Morain's "Comparing invariants for class fields of imaginary quadratic fields", ANTS-V 2002.
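One concrete relation behind this answer, added here for illustration (it is the classical identity for Weber's function $\mathfrak{f}$, not something stated in the thread):

$$j \;=\; \frac{(\mathfrak{f}^{24} - 16)^3}{\mathfrak{f}^{24}},$$

so $\mathfrak{f}$ satisfies a polynomial relation of degree $72$ over $\mathbb{Q}(j)$, and heuristically the logarithmic height of $\mathfrak{f}(z)$ is about $1/72$ that of $j(z)$. That factor is of the right size to account for the dramatic shrinkage observed in Weber class polynomial coefficients.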
{"url":"http://mathoverflow.net/questions/50195/why-are-weber-polynomial-coefficients-smaller-than-hilbert-polynomial-coefficien?sort=newest","timestamp":"2014-04-17T12:45:29Z","content_type":null,"content_length":"52541","record_id":"<urn:uuid:2de73912-5d02-439a-8950-c29b95e55ee8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
gadem: A Genetic Algorithm Guided Formation of Spaced Dyads Coupled with an EM Algorithm for Motif Discovery

Genome-wide analyses of protein binding sites generate large amounts of data; a ChIP dataset might contain 10,000 sites. Unbiased motif discovery in such datasets is not generally feasible using current methods that employ probabilistic models. We propose an efficient method, GADEM, which combines spaced dyads and an expectation-maximization (EM) algorithm. Candidate words (four to six nucleotides) for constructing spaced dyads are prioritized by their degree of overrepresentation in the input sequence data. Spaced dyads are converted into starting position weight matrices (PWMs). GADEM then employs a genetic algorithm (GA), with an embedded EM algorithm to improve starting PWMs, to guide the evolution of a population of spaced dyads toward one whose entropy scores are more statistically significant. Spaced dyads whose entropy scores reach a pre-specified significance threshold are declared motifs. GADEM performed comparably on 500 sets of simulated "ChIP" sequences with embedded known P53 binding sites. The major advantage of GADEM is its computational efficiency on large ChIP datasets compared to competitors. We applied GADEM to six genome-wide ChIP datasets. Approximately 15 to 30 motifs of various lengths were identified in each dataset. Remarkably, without any prior motif information, the expected known motif (e.g., P53 in P53 data) was identified every time. GADEM discovered motifs of various lengths (6–40 bp) and characteristics in these datasets containing from 0.5 to >13 million nucleotides with run times of 5 to 96 h. GADEM can be viewed as an extension of a well-known algorithm and is an efficient tool for de novo motif discovery in large-scale genome-wide data. The software is available at

Key words: ChIP, de novo motif discovery, expectation-maximization, genetic algorithm, k-mer, spaced dyad
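As a rough illustration of the prioritization step described in the abstract, here is a hedged Python sketch: it ranks k-mers by a z-score of overrepresentation against an i.i.d. background. The statistic is a simplification chosen for exposition; it is not claimed to be the scoring rule GADEM itself uses.

```python
from collections import Counter
from itertools import product
from math import sqrt

def kmer_zscores(seqs, k=4):
    """Rank k-mers by (observed - expected) / sd under an i.i.d. background."""
    counts, base_freq, positions = Counter(), Counter(), 0
    for s in seqs:
        s = s.upper()
        base_freq.update(b for b in s if b in "ACGT")
        for i in range(len(s) - k + 1):
            w = s[i:i + k]
            if set(w) <= set("ACGT"):
                counts[w] += 1
                positions += 1
    total = sum(base_freq.values())
    p = {b: base_freq[b] / total for b in "ACGT"}
    scores = {}
    for w in ("".join(t) for t in product("ACGT", repeat=k)):
        pw = 1.0
        for b in w:
            pw *= p[b]
        mu = positions * pw                   # expected count
        sd = sqrt(positions * pw * (1 - pw))  # binomial approximation
        scores[w] = (counts[w] - mu) / sd if sd > 0 else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(kmer_zscores(["ACGTACGTTTTTACGT", "GGACGTACGTCCACGT"], k=4)[:3])
```

Words that float to the top of such a ranking would then seed the spaced dyads and starting PWMs, as the abstract describes.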
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC2756050/?lang=en-ca","timestamp":"2014-04-16T12:52:59Z","content_type":null,"content_length":"149292","record_id":"<urn:uuid:811bf396-1cac-4c53-9982-f3089d673422>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/chad159753/asked","timestamp":"2014-04-16T10:40:04Z","content_type":null,"content_length":"122207","record_id":"<urn:uuid:9d50cb80-a19b-4549-b917-90b2d2aeb74e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] intuitions of logic in Helsinki and Cambridge
Gabriel Stolzenberg gstolzen at math.bu.edu
Mon Feb 27 23:47:18 EST 2006

This is in reply to Panu Raatikainen's comments of February 27 about my posting, "intuitions of logic in Chicago and Cambridge." Panu raises excellent points. I see now that I failed to make clear that, unlike the members of this list, the folks who I am talking about don't spend their time thinking about foundations of mathematics.

Panu begins by quoting me and then comments on what I say.

> > In fact, classical mathematicians sometimes use their logical
> > intuitions to "prove" the law of excluded middle. Although they
> > don't realize it, they use excluded middle reasoning to prove the
> > statement of the law of excluded middle.

> Isn't this a little bit uncharitable. Even if some have proceeded like
> this, an adherent of classical logic certainly need not to do that.

As I indicated above, I'm not talking about people who think about the law of excluded middle. I'm talking about folks who reason according to it without thinking about it, without even being aware that they are reasoning according to it. So I don't think they are what you mean by an "adherent." Psychologically, such reasoning seems to be an involuntary and unreflective response to a certain kind of challenge, a response that usually begins with "Suppose not." My point was that, because this kind of excluded middle reasoning is involuntary and unreflective, it sometimes is evoked inappropriately, e.g., by a challenge to prove the law of excluded middle.

> Rather, one can derive LEM from the Principle of Bivalence, which
> in turn seems to be analytically built in to the classical, realist
> conception of truth.

I didn't mean to suggest that I was seriously challenging classical mathematicians to prove the law of excluded middle. (If I was, then Kreisel might have been right when he told me that I was mentally ill. This was on the basis of my review of Bishop's book in the Bulletin of the AMS. I found this fascinating, so I asked him what his method of diagnosis was. He said, "Statistical." At this point, I realized that I shouldn't be having this conversation, so I kicked him out of the room. Verbally, not physically.) I just wanted to see how, in certain situations (chatting in a common room, over dinner in a restaurant, etc.), classical mathematicians would respond if they thought that this was what I was doing. And, in my very small sample, I found that it was the involuntary, unreflective response that I described above.

> However, I think that it is very difficult to argue against these
> ideas without already presupposing the intuitionistic interpretation
> of logical constants.

If, by "to argue against these ideas," you mean arguing in favor of rejecting the law of excluded middle, recall that, in constructive mathematics, the law of excluded middle is happily neither accepted nor rejected. If you start out as a classical mathematician, as I did, you don't acquire a constructive mindset by rejecting the law of excluded middle. It doesn't work that way! (What intuitionists do is another matter.)

Gabriel Stolzenberg

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-February/010088.html","timestamp":"2014-04-21T03:07:56Z","content_type":null,"content_length":"5826","record_id":"<urn:uuid:404b36cd-00a3-4316-9a8e-c9acfac812bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Let f: A → B be a given function. Prove that f is one-to-one (injective) ⇔ f(C ∩ D) = f(C) ∩ f(D) for every pair of sets C and D in A.

- Let f: A → B be a given function. Prove that f is one-to-one (injective) ↔ f(C ∩ D) = f(C) ∩ f(D) for every pair of sets C and D in A.
- i was just rewriting so i could read it, i am not sure i know how to do it
- well one way is trivial, since f(A ∩ B) ⊂ f(A) ∩ f(B) for any f
- or does that need clarification as well? we can write it out if you like
- yes we need to write it..
- why letter is not separated
- suppose z ∈ f(A ∩ B). then z = f(x) for some x ∈ A ∩ B, making x ∈ A and x ∈ B, so z ∈ f(A) and z ∈ f(B), therefore z ∈ f(A) ∩ f(B)
- this shows for any f you have f(A ∩ B) ⊂ f(A) ∩ f(B)
- now we need to prove that if f is injective, we have f(A ∩ B) = f(A) ∩ f(B). since we already have containment one way, this amounts to showing f(A) ∩ f(B) ⊂ f(A ∩ B)
- pick a z ∈ f(A) ∩ f(B). so there exists an x₁ in A with f(x₁) = z, and likewise there is an x₂ in B with z = x₂. now comes the "injective" part: since f is injective, and f(x₁) = f(x₂) = z, we know x₁ = x₂, and so z ∈ f(A ∩ B)
- typo there, i meant "likewise there exists x₂ ∈ B with f(x₂) = z" sorry
- so that is the proof one way, that "if f is injective, then f(A ∩ B) = f(A) ∩ f(B)"
- other way is easier, since a singleton is a set
- A = {x}, B = {y}, you mean like this one..
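Pulling the thread together, here is the whole argument in one place (a cleaned-up restatement of the proof sketched above, with the converse direction filled in via the singleton hint):

(⇒, containment for any $f$) If $z \in f(C \cap D)$ then $z = f(x)$ for some $x \in C \cap D$, so $z \in f(C)$ and $z \in f(D)$; hence $f(C \cap D) \subseteq f(C) \cap f(D)$ always.

(⇒, reverse containment using injectivity) If $z \in f(C) \cap f(D)$, pick $x_1 \in C$ and $x_2 \in D$ with $f(x_1) = f(x_2) = z$; injectivity forces $x_1 = x_2 \in C \cap D$, so $z \in f(C \cap D)$.

(⇐, the "singleton" direction) Suppose the equality holds for all pairs and $f(x_1) = f(x_2)$ with $x_1 \neq x_2$. Take $C = \{x_1\}$ and $D = \{x_2\}$: then $f(C) \cap f(D) = \{f(x_1)\}$ is nonempty while $f(C \cap D) = f(\emptyset) = \emptyset$, contradicting the assumed equality. Hence $f$ is injective.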
{"url":"http://openstudy.com/updates/507f5ccae4b0b8b0cacd4276","timestamp":"2014-04-19T15:24:00Z","content_type":null,"content_length":"64674","record_id":"<urn:uuid:a79122b4-374c-43ba-8cc7-6ed86161cb54>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
how to take the derivative of e to a polynomial

October 23rd 2012, 09:47 PM #1
how to take the derivative of e to a polynomial
Last edited by kingsolomonsgrave; October 23rd 2012 at 09:55 PM.

October 23rd 2012, 10:01 PM #2
Super Member
Re: how to take the derivative of e to a polynomial
Hello, kingsolomonsgrave!
Recall this formula: if $y = e^u$, then $y' = e^u u'$.
Therefore, the derivative of $e^{x^2+1}$ is $e^{x^2+1}\cdot 2x = 2x e^{x^2+1}$.
{"url":"http://mathhelpforum.com/calculus/205987-how-take-derivative-e-polynomial.html","timestamp":"2014-04-17T08:14:04Z","content_type":null,"content_length":"34286","record_id":"<urn:uuid:cddc890b-f312-4ee5-ab6d-bb5289ae1047>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Second order ODE with non-constant coefficients I am trying to solve y''+(3-1/x^2)y=0. Is it possible to solve by separation? Or will this require a series solution? If by "separation" you mean writing it as $\frac{d^2y}{dx^2}= (\frac{1}{x^2}- 3)y$ and then to $\frac{d^2y}{y}= (\frac{1}{x^2}- 3)dx^2$, then, no, you cannot "separate" a second order derivative like you can a first order derivative. The "differentials" dy and dx are defined in Calculus, but such things as " $d^2y$" and " $dx^2$" are NOT defined. Last edited by HallsofIvy; October 21st 2011 at 01:54 PM.
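For what it is worth, this equation also has a closed form in terms of Bessel functions; the following is stated as a standard fact (see, e.g., Abramowitz and Stegun 9.1.49), not something derived in the thread. Equations of the shape

$$y'' + \left(\lambda^2 - \frac{\nu^2 - \tfrac14}{x^2}\right) y = 0$$

have general solution $y = \sqrt{x}\,\bigl(c_1 J_\nu(\lambda x) + c_2 Y_\nu(\lambda x)\bigr)$. Matching $3 - 1/x^2$ gives $\lambda = \sqrt{3}$ and $\nu^2 - \tfrac14 = 1$, i.e. $\nu = \sqrt{5}/2$, so a term-by-term series solution can be bypassed in favor of known special functions.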
{"url":"http://mathhelpforum.com/differential-equations/190967-second-order-ode-non-constant-coefficients.html","timestamp":"2014-04-16T05:22:28Z","content_type":null,"content_length":"40977","record_id":"<urn:uuid:4a73ae48-d1de-42f6-a7ae-06b803c27024>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Image compression by backpropagation: A demonstration of extensional programming
Results 1 - 10 of 15

- COGNITIVE SCIENCE, 1990. Cited by 1533 (21 self).
Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.

- Advances in Neural Information Processing Systems 5, 1993. Cited by 106 (1 self).
A method for creating a non–linear encoder–decoder for multidimensional data with compact representations is presented. The commonly used technique of autoassociation is extended to allow non–linear representations, and an objective function which penalizes activations of individual hidden units is shown to result in minimum dimensional encodings with respect to allowable error in reconstruction. 1

- IEEE Transactions on neural networks, 1995. Cited by 56 (4 self).
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) back-propagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on back-propagation networks and a unified view of all unsupervised algorithms. Keywords--- linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation I. Introduction This paper addresses the problems of

- 1997. Cited by 30 (4 self).
The problem of dimension reduction is introduced as a way to overcome the curse of the dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves and methods based on topologically continuous maps, such as Kohonen's maps or the generalised topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.

- 1996. Cited by 23 (1 self).
In this article, we review unsupervised neural network learning procedures which can be applied to the task of preprocessing raw data to extract useful features for subsequent classification. The learning algorithms reviewed here are grouped into three sections: information-preserving methods, density estimation methods, and feature extraction methods. Each of these major sections concludes with a discussion of successful applications of the methods to real-world problems.

- 2003. Cited by 6 (2 self).
A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the lowdimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low–dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin–Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering "feature space" as a two-dimensional linear subspace in the highdimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two dimensional but curved; in this way, we can capture 90 % of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as "integrate and fire," the HH model is not an integrator nor is it well described by a single

- 1951. Cited by 5 (0 self).
We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggest that perceived constancy across musical variation is a natural result of a reductionist mechanism for p...

- 1999.
"... KEYWORDS: speaker verification; autoassociative neural network; distribution estimation; matching technique; dimensionality reduction. ..."

- 1994. Cited by 2 (2 self).
In this paper, we propose a new general framework for learning and recognizing spatiotemporal events (or patterns) from intensity image sequences. This scheme is general in that it does not impose any motion model on the input. A multiclass, multivariate discriminant analysis technique has been used to automatically select the most discriminating features (MDF) which is shown to be better suited for classification due to its capability to automatically discount factors that are irrelevant to classification. The space partition tree introduced here achieves a logarithmic time complexity for a database of n items. A general interpolation scheme is employed for inference and generalization in the MDF space based on a small number of training samples. The system is tested to recognize 28 different hand signs. The experimental results show that the learned system can achieve a 98% recognition rate for test sequences that have not been used in the training phase. 1 1 Introduction Temporal...
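Several of the abstracts above describe autoassociative ("bottleneck") networks for dimension reduction, which is also the idea behind the title paper's image compression by backpropagation. A minimal runnable sketch of a linear autoencoder trained by gradient descent (dimensions, learning rate, and data here are arbitrary choices for illustration, not taken from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                  # 200 samples, 8-dim inputs
X[:, 3:] = X[:, :3] @ rng.normal(size=(3, 5))  # true structure is ~3-dimensional

W_enc = rng.normal(scale=0.1, size=(8, 3))     # encoder: 8 -> 3 (the bottleneck)
W_dec = rng.normal(scale=0.1, size=(3, 8))     # decoder: 3 -> 8
lr = 0.01
for _ in range(2000):
    H = X @ W_enc                      # hidden code
    X_hat = H @ W_dec                  # reconstruction
    E = X_hat - X                      # reconstruction error
    g_dec = (H.T @ E) / len(X)         # gradient of mean squared error
    g_enc = (X.T @ (E @ W_dec.T)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print("reconstruction MSE:", float((E ** 2).mean()))
```

Because the data really lie near a 3-dimensional subspace, the 3-unit bottleneck can drive the reconstruction error close to zero, which is the sense in which autoassociation discovers a compact code.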
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1745133","timestamp":"2014-04-18T19:40:02Z","content_type":null,"content_length":"37155","record_id":"<urn:uuid:089a1d56-1a5d-4c1c-800f-4c302b9c67d8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
OK Corral: Local versus non-local QM 1. That "exercise for the reader" IS Bell's Theorem. wm is asserting that A and B work, therefore it works in all situations. I think you're confusing the issue by using A and B to represent both specific angles and general variables representing arbitrary angles chosen by each detector. It would be simpler if you said that was an arbitrary angle chosen by the left detector, an arbitrary angle chosen by the right one, then you could have A, B, and C be specific choices of angles for either detector. What wm attempted to do was give a general proof that for arbitrary angles , in his classical experiment the expectation value for the product of the two results would be -cos(a - b). This would cover all specific angles you chould choose--for example, if =B and =C, then the expectation value for a large set of trials with these angles would be -cos(B - C); if =C and =A, then the expectation value for a large set of trials with these angles would be -cos(C - A); and so forth. I disagree that "Bell's theorem" primarily revolves around picking specific angles, if that's what you mean by "That 'exercise for the reader' IS Bell's Theorem". The proof involves finding an inequality that should hold for angles under local realism; then it's just a fairly simple final step to note that the inequality can be violated using some specific angles in some specific quantum experiment, but this last step is hardly the "meat" of the theorem. For example, look at the CHSH inequality. This inequality says that if the left detector has a choice of two arbitrary angles , the right detector has a choice of two arbitrary angles , then the following inequality should be satisfied under local realism: -2 <= E(a, b) - E(a, b') + E(a', b) + E(a', b') <= 2 Now, suppose wm were correct that he had a classical experiment satisfying the conditions of Bell's theorem such that the expectation value E(a, b) would equal -cos(a - b). In this case it we could pick some specific angles a = 0 degrees, b = 0 degrees, a' = 30 degrees and b' = 90 degrees; in this case we have E(a, b) = - cos(0) = -1, E(a, b') = -cos(90) = 0, E(a', b) = -cos(30) = -0.866, and E (a', b') = -cos(60) = -0.5. So E(a, b) - E(a, b') + E(a', b) + E(a', b') would be equal to -1 - 0 - 0.866 - 0.5 = -2.366, which violates the inequality. The hard part was the proof that the expectation value was -cos(a - b), just as in QM; once we have this expectation value, it's a pretty trivial exercise for the reader to find some specific angles which allow the inequality to be violated, just as in QM. Again, the problem here is that wm did not actually replicate the conditions assumed in Bell's theorem, where each measurement can only yield two possible answers rather than a continuous spectrum of answers, and also his derivation of the expectation value seems to be flawed, my math suggested the expectation value would actually be E(a, b) = (-1/2)*cos(a - b). 3. He doesn't consider the A/B/C condition. It is not possible to provide a counter-example to Bell, because Bell is itself a counter-example. The only way to disprove Bell would be to show that the counter-example is flawed. I don't understand what you mean by "counter-example" here. Bell provides a general proof that a certain inequality can never be violated under local realism, a statement of the form "for all experiments obeying local realism and satisfying certain conditions, this inequality will be satisfied". 
Logically, any statement of the form "for all X, Y is true" can be disproved with a single counterexample of the form "there exists on X such that Y is false". And that's what wm tried to do--find a single example of a local realist experiment which would satisfy Bell's conditions and yet violate an inequality. But he did it incorrectly, because he didn't satisfy the conditions, and his math for the expectation value was wrong anyway, with the correct expectation value I don't think you could violate any inequality using his experiment. For example, consider the "theory" that there no primes larger than 13. Bell comes along and says, whoa! what about 17? Now wm comes along and say Bell is wrong, look at 2, 3, 5, 7, 11, 13 as my proof. No, he must show that 17 is NOT a prime to make his case. But I disagree, wm came along and to show a classical example that would satisfy Bell's conditions and yet give an expectation value which, with the correct choice of angles, could violate an inequality (like my choice of angles for the CHSH inequality above). If he had actually satisfied Bell's conditions and if his calculation of the expectation value were correct, this disprove Bell's theorem; but of course he didn't do this, and since I can follow Bell's theorem and see that it is logically airtight, I am totally confident he'll never be able to do this, just like I'm confident no one will find a counterexample to the statement "there are no even prime numbers larger than 2". A review of that shows it as a fine counter-example to the original theory (local realism). And guess what? wm now must prove this wrong too, because it too is a counter-example to be contended with. Well, in what sense is this a counter- to local realism, as opposed to a general proof that local realism cannot replicate quantum predictions? Again, when I use the word counter-example, I'm thinking of disproving a statement of the form "for all X, Y is true" by coming up with an example of the form "there exists a particular X such that Y is false". I guess you could say that one agrees with Bell's theorem, then local realism makes the prediction that "for all experiments satisfying X conditions, inequality Y will be satisfied". And in this case, QM can give an example of the form "here's an experiment satisfying X conditions which violates inequality Y", thus proving QM is incompatible with local realism. But the problem here is that wm believes there's a flaw in Bell's theorem, so he does not agree that local realism makes the prediction "for experiments satisfying X conditions, inequality Y will be satisfied" in the first place; he's trying to disprove Bell's theorem by showing that local realism can also give an example of the form "here's an experiment satisfying X conditions which violates inequality Y". As a general approach to disproving Bell's theorem this makes sense, it's just that he thinks he's found such an example but he actually hasn't, because his example does not actually satisfy the X conditions of Bell's theorem (specifically the one about each experiment yielding one of two possible answers), and also his math for the expectation value is wrong, with the correct expectation value I'm not sure he could violate any Bellian inequality even if you ignore the first
{"url":"http://www.physicsforums.com/showthread.php?p=1253652","timestamp":"2014-04-17T07:29:51Z","content_type":null,"content_length":"128171","record_id":"<urn:uuid:a064e096-7a65-435a-a0d9-a63321248d0e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Ted Bunn’s Blog I’m not going to write about politics in this blog — there’s plenty of that out there already. And I’m certainly not going to engage in any sort of political advocacy. But I thought of an interesting application of probability theory to the upcoming election, and I thought I’d summarize it here. Remember back in 2004, when it seemed like every Democratic voter was basing his or her choice on which candidate was most “electable”? People’s gut feelings about this sort of thing are generally pretty unreliable, so it’s kind of interesting to look for a data-based answer to the electability question. It occurred to me recently that the various political futures markets provide a good way to answer that question for the upcoming election. For those who don’t know about the political futures markets, they’re basically a way that people can bet on various political events, including the upcoming US Presidential race. Slate has a good description with lots of nice graphs. The odds on all of these bets can be interpreted as giving the probabilities of various outcomes in the race, as estimated by the community of bettors. These probabilities give enough raw data to measure each candidate’s “electability.” By electability I mean the probability that a candidate will win in the general election, given that he (or she) gets the nomination. One of the futures markets (Intrade) lets people bet on both who will get the nomination and who will win the general election. The ratio of these for any given candidate is the electability. It’s just Bayes’s Theorem: P(Hillary wins the presidency) = P(Hillary gets the nomination) * P(Hillary wins the presidency | Hillary wins the nomination). [In case it's not familiar notation, P(y | x) means the probability that y occurs given that x occurs.] The last factor on the right is Hillary’s electability. The futures market tells us the other two probabilities for each candidate. So we can find the electabilities of all the candidates by simple division. Before looking below, take a guess about which leading candidates in the two parties are most electable. Candidate Electability Obama 66.1% Clinton 65.9% Edwards 46.1% McCain 39.1% Giuliani 36.4% Huckabee 32.4% Romney 35.0% Paul 41.9% Thompson 23.1% By the way, these are sorted within each party from highest to lowest probability of getting the nomination (according to Intrade), and only candidates with decent probabilities (so that Slate’s data table has more than one significant figure) are listed. The data are from January 6 (after Iowa, before New Hampshire). If I knew what I was doing, I could make slick graphs showing how these numbers change with time and have them continuously update themselves, but that’d take acual effort, and I should probably do my job instead. [...] candidate is most “electable,” at least on the Democratic side. As I noted in an earlier post, the political futures market gives a way to assess electability. Here are some graphs showing what
{"url":"http://blog.richmond.edu/physicsbunn/2008/01/07/electability/","timestamp":"2014-04-18T07:37:14Z","content_type":null,"content_length":"19550","record_id":"<urn:uuid:5447cfa4-1624-4e05-ab33-a5bf6d577b2b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
uniformly convergent for function

November 5th 2012, 02:32 PM #1
Junior Member
Oct 2012

uniformly convergent for function
define the functions fn:[0,1]→R by fn(x) = (n^p) x exp(−(n^q) x), where p, q > 0. then fn→0 pointwise on [0,1] as n→∞, with pointwise limit f(x) = 0, and sup|fn(x)| = n^(p−q)/e.
assume that ε is in (0,1). does fn converge uniformly on [1−ε, 1]? how about on [0, 1−ε]?
my idea was to check whether the pointwise limit f is continuous on the intervals above. but f is obviously continuous, so fn would be uniformly convergent, and i think that reasoning is wrong.
can someone give me any idea?

Last edited by cummings123321; November 5th 2012 at 02:39 PM.
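One way to settle both intervals, reading them as $[1-\varepsilon, 1]$ and $[0, 1-\varepsilon]$ (a sketch using the sup value already computed in the post): the maximizer of $f_n$ is $x_n = n^{-q}$, where $f_n(x_n) = n^{p-q}/e$. On $[1-\varepsilon, 1]$ we have $x_n < 1-\varepsilon$ for large $n$ and $f_n$ is decreasing past $x_n$, so $\sup_{[1-\varepsilon,1]} |f_n| = f_n(1-\varepsilon) \le n^p e^{-n^q(1-\varepsilon)} \to 0$: the convergence is uniform there for all $p, q > 0$. On $[0, 1-\varepsilon]$ the maximizer eventually lies inside the interval, so $\sup_{[0,1-\varepsilon]} |f_n| = n^{p-q}/e$, which tends to $0$ if and only if $p < q$; for $p \ge q$ the convergence is pointwise but not uniform. (Continuity of the limit is necessary for uniform convergence of continuous functions, but not sufficient, which is why the argument in the post fails.)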
{"url":"http://mathhelpforum.com/calculus/206823-uniformly-convergent-function.html","timestamp":"2014-04-18T03:17:38Z","content_type":null,"content_length":"29051","record_id":"<urn:uuid:5791a025-bb1e-4a1f-8cf5-01b1998bb2c6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Curvature in Cartesian Plane
Replies: 6  Last Post: Nov 15, 2012 8:53 AM

Curvature in Cartesian Plane
Posted: Nov 13, 2012 6:23 PM

I expect that this is true... We have three points on a Cartesian x-y plane, and the circle that passes through these three points has a constant curvature of k. If we have a doubly differentiable curve in the x-y plane that passes through these points, is there always some point on the curve which has curvature k? I am finding it tough to prove this. Any help appreciated.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2414982&messageID=7922812","timestamp":"2014-04-17T08:42:08Z","content_type":null,"content_length":"23222","record_id":"<urn:uuid:763fc24f-f048-4b0f-87e6-0b4e5798ea5a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
derivatives of e^(2/x), e^(x/2) [Archive] - Free Math Help Forum

09-13-2006, 08:48 PM
I am having a little trouble with the derivatives of e^(2/x) and e^(x/2).
I know the derivative of e^x is e^x, but I'm not sure what happens when there are numbers directly affecting the x.
I'm guessing that the derivative of e^(2/x) would be (1/2)e^(2/x), and I'm also guessing the derivative of e^(x/2) would be (2)e^(x/2).
Is this correct?
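For reference, since no reply appears in this excerpt, here is the chain rule applied to both functions (note that neither guess above is quite right):

$$\frac{d}{dx} e^{2/x} = e^{2/x} \cdot \frac{d}{dx}\!\left(\frac{2}{x}\right) = -\frac{2}{x^2}\, e^{2/x}, \qquad \frac{d}{dx} e^{x/2} = \frac{1}{2}\, e^{x/2}.$$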
{"url":"http://www.freemathhelp.com/forum/archive/index.php/t-45917.html","timestamp":"2014-04-18T03:09:58Z","content_type":null,"content_length":"3723","record_id":"<urn:uuid:d9f811a0-1735-4a92-ac25-84298102a927>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Is my answer right? Describe the transformations required to obtain the graph of the function f(x) from the graph of the function g(x). f(x) = cos x/2; g(x) = cos x. Answer: Horizontal stretch by a factor of 2

- no, I don't think this one is a stretch...
- hmmm there seems to be a mistake with the wording of the question. Take a look at ur question again
- Well that is what I was thinking too, but my book gives me the following options: Horizontal stretch by a factor of 2; Vertical stretch by a factor of 2; Vertical shrink by a factor of 1/2; Horizontal shrink by a factor of 1/2. Originally I chose vertical shrink, but it came back wrong, so I did horizontal shrink, and it also came back wrong. So I don't know at this point.
- Woops. Thanks @swissgirl
- ohh there we gooo
- @Caolco I had to call for help again ;)
- hmm its def horizontal cuz its x/2. Ohhh I see there is 1 cycle in 720 degrees so its a horizontal stretch by a factor of 2
- oooooohhhhhhhhhhhhh. I completely did not see that. Also, I misread the question :(
- There is only a half of one cycle in 360 degrees, or another way of saying it is that one cycle takes 720 degrees
- Ok so I was correct then?
- Horizontal stretch by a factor of 2
- the graph gets stretched: instead of taking 360 degrees to finish one cycle it takes 720 degrees to finish the cycle. It takes double the time
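A one-line check of the conclusion reached above: $\cos(x/2)$ has period $\dfrac{2\pi}{1/2} = 4\pi$ (that is, $720^\circ$), twice the $2\pi$ period of $\cos x$, which is exactly a horizontal stretch by a factor of 2.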
{"url":"http://openstudy.com/updates/507b6b75e4b07c5f7c1f2de4","timestamp":"2014-04-18T00:16:17Z","content_type":null,"content_length":"54364","record_id":"<urn:uuid:bef5bde6-e944-4706-a63f-f70cd534594f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding where the definition of E(X^2) comes from

April 24th 2013, 05:09 PM #1
Apr 2013
The internet

Understanding where the definition of E(X^2) comes from
In the middle of trying to understand how to find E(X^2). I have read that the definition of this expectation is the integral from a to b of x^2*f(x), where f(x) is the pdf of a random variable. What direction should I take in developing my understanding of the background to this definition? Many thanks in advance all!

April 24th 2013, 08:18 PM #2
MHF Contributor
Sep 2012

Re: Understanding where the definition of E(X^2) comes from
Hey Resuscitative.
The easiest way to understand E[g(X)] is to think in terms of a random variable g(X) and then looking at its mean. So in the case of g(X) = X^2, we take our random variable and square it and this becomes a new random variable. Then we look at the mean of this random variable and this is equivalent to E[g(X)], or E[Y] if Y = g(X) = X^2.
That's to give some intuition, but algebraically you can think of these as moments, which actually have an interpretation in terms of the frequency spectrum of the random variable. I would use the explanation above for intuition and then just think about the complicated expressions in terms of algebra (try not to think too hard about things like say E[log(X)] or E[X^2 + X]).
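A quick worked instance of that definition, added for concreteness: if $X \sim \mathrm{Uniform}(0,1)$ then $f(x) = 1$ on $[0,1]$, so

$$E(X^2) = \int_0^1 x^2 \, dx = \tfrac{1}{3},$$

while $(E\,X)^2 = \tfrac{1}{4}$; the gap $\tfrac{1}{3} - \tfrac{1}{4} = \tfrac{1}{12}$ is exactly $\mathrm{Var}(X)$, which shows why $E(X^2)$ is worth having as its own quantity.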
{"url":"http://mathhelpforum.com/statistics/218128-understanding-where-definition-e-x-2-comes.html","timestamp":"2014-04-16T07:00:20Z","content_type":null,"content_length":"33121","record_id":"<urn:uuid:e8d19a6d-7281-40f9-a0b7-a59f929b7e55>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
North Plainfield, NJ Trigonometry Tutor Find a North Plainfield, NJ Trigonometry Tutor Uri has a Bachelor’s degree from New Jersey Institute of Technology and Certified NJ License in Mathematics. He has been a full-time teacher for 2 years, including 2 years as a substitute classroom teacher in Middlesex County for grades K-12. Uri also tutored Spanish to children and adults of all levels, as well as other subjects at Middlesex County College. 19 Subjects: including trigonometry, Spanish, statistics, geometry Hi, my name is Donna. I have a dual Bachelor's of science degree in mathematics and computer science. I love working with students to get to know their individual needs and concerns. 13 Subjects: including trigonometry, geometry, algebra 1, statistics ...I bring a level of excitement that is absolutely contagious, so even if you dread standardized tests, you will feel that the sessions are far more interesting than you would otherwise expect. I am a former premed student with over 8 years of experience tutoring the science subjects for the MCAT ... 27 Subjects: including trigonometry, chemistry, physics, writing If you're looking for somebody to either help you catch up in math, or get ahead in math, I'm your guy. I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. 26 Subjects: including trigonometry, calculus, physics, GRE ...I hold a standard certificate from the State of New Jersey as an elementary school teacher in grades K-5 as well as a teacher of mathematics in grade K-12. I have experience teaching various mathematics courses, including Introduction to Prealgebra, Prealgebra, Algebra I, and Honors Algebra. Though briefly, I have even tutored a college level algebra course. 18 Subjects: including trigonometry, reading, calculus, geometry
{"url":"http://www.purplemath.com/North_Plainfield_NJ_Trigonometry_tutors.php","timestamp":"2014-04-16T19:42:00Z","content_type":null,"content_length":"24604","record_id":"<urn:uuid:5b2ddca9-d9b1-4e43-add3-f1455c5ef4ca>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Warsito, Budi and Tarno, Tarno and SUGIHARTO, ARIS (2008) THE RAINFALL PREDICTION AS A BASE OF PLANNING THE RICE AND CROPS PLANTING SYSTEM USE GENERAL REGRESSION NEURAL NETWORK MODEL. Project Report. Lembaga Penelitian Undip, Undip. (Unpublished)
Official URL: http://stat.undip.ac.id

This paper discusses General Regression Neural Network (GRNN) modelling of rainfall data for some territories in Central Java that depend on rainfall for irrigation in the planting system, i.e. Musuk, Ngaringan and Jakenan. The data used are dasarian (ten-day) totals, while the model inputs are chosen from an ARIMA model via the ACF and PACF plots. The in-sample results show that the GRNN model has high precision, although its out-of-sample predictions are not yet guaranteed to be better than ARIMA's. The model is then used to forecast several steps ahead. The rainfall forecasts suggest that each territory would do better to apply a rice-crops-crops planting pattern, taking estimated climate anomalies into account when choosing the start of the planting time.

Key Words: GRNN, ARIMA, rainfall, planting system
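For readers unfamiliar with the model: a GRNN is essentially Nadaraya-Watson kernel regression, a weighted average of training targets with Gaussian weights on input distance. A minimal sketch (the smoothing parameter and toy data below are placeholders, not the study's):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_new, sigma=1.0):
    """GRNN prediction: kernel-weighted average of training targets."""
    d2 = np.sum((x_train - x_new) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian pattern weights
    return float(np.dot(w, y_train) / np.sum(w))

# toy example: predict the next dasarian rainfall from the two previous values
X = np.array([[10.0, 12.0], [0.0, 1.0], [30.0, 25.0]])
y = np.array([15.0, 0.5, 28.0])
print(grnn_predict(X, y, np.array([11.0, 11.0]), sigma=5.0))
```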
{"url":"http://eprints.undip.ac.id/3523/","timestamp":"2014-04-17T08:41:15Z","content_type":null,"content_length":"17662","record_id":"<urn:uuid:36ea4925-17eb-4cfd-b18f-ef2a8589b173>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Projection onto Linearly Dependent Vectors
Now consider an example in which a vector x is projected onto two vectors that lie along the same line, and the two projections are summed. The sum of the projections does not equal x. Something went wrong, but what? It turns out that for the sum of projections to reconstruct the original vector, the set of vectors projected onto must be linearly independent. In general, a set of vectors is linearly independent if none of them can be expressed as a linear combination of the others in the set. What this means intuitively is that they must ``point in different directions'' in the space. Two vectors lying along the same line are linearly dependent: one is a linear combination (a scalar multiple) of the other.
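As a quick numerical sketch (the vectors here are illustrative, not the ones from the stripped-out example):

import numpy as np

def proj(x, s):
    # orthogonal projection of x onto the span of s
    return (np.dot(x, s) / np.dot(s, s)) * s

x  = np.array([1.0, 2.0])
s0 = np.array([1.0, 1.0])
s1 = np.array([2.0, 2.0])        # s1 = 2*s0, so {s0, s1} is linearly dependent

print(proj(x, s0) + proj(x, s1)) # [3. 3.], not x: reconstruction fails

t0 = np.array([1.0, 1.0])        # an orthogonal (hence independent) pair
t1 = np.array([1.0, -1.0])
print(proj(x, t0) + proj(x, t1)) # [1. 2.], equal to x

Independence is the necessary condition discussed above; the exact reconstruction in the second case also uses the fact that t0 and t1 are orthogonal.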
{"url":"https://ccrma.stanford.edu/~jos/mdft/Projection_onto_Linearly_Dependent.html","timestamp":"2014-04-18T09:36:46Z","content_type":null,"content_length":"8403","record_id":"<urn:uuid:5a00058d-a7f9-462c-8062-78a013a0a669>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Acta Cryst. (2013). E69, o1822-o1823 [ doi:10.1107/S1600536813031462 ]
Benzene-1,3,5-triyl tribenzoate
P. W. R. Corfield and A. M. Balija
Abstract: The title compound, C[27]H[18]O[6], commonly known as phloroglucinol tribenzoate, is a standard unit for the family of benzyl ether dendrimers. The central phloroglucinol residue is close to planar, with out-of-plane distances for the three oxygen atoms of up to 0.095 (3) Å, while the three attached benzoate groups are approximately planar. One benzoate group is twisted [C-C-O-C torsion angle = 98.2 (3)°] from the central plane, with its carbonyl O atom 2.226 (4) Å above that plane, while the other two benzoate groups are twisted in the opposite direction [C-C-O-C torsion angles = 24.7 (2) and 54.8 (2)°], so that their carbonyl O atoms are on the other side of, and closer to the central plane, with distances from the plane of 1.743 (4) and 1.206 (4) Å. One benzoate group is disordered between two conformers, with occupancies of 86.9 (3) and 13.1 (3)%, related by a 143 (1)° rotation about the bond to the central benzene ring. The phenyl groups of the two conformers occupy the same space. The molecule packs in the crystal with two of the three benzoate phenyl rings stacked parallel to symmetry-related counterparts, with perpendicular distances of 3.715 (5) and 3.791 (5) Å. The parallel rings are slipped away from each other, however, with centroid-centroid distances of 4.122 (2) and 4.363 (2) Å, respectively.
Copyright © International Union of Crystallography
{"url":"http://journals.iucr.org/e/issues/2013/12/00/pk2503/pk2503abs.html","timestamp":"2014-04-19T20:11:34Z","content_type":null,"content_length":"3006","record_id":"<urn:uuid:d4db1729-63fc-410a-8cf5-97d6e10c66fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
The Coplanar Waveguide Transmission Line block models the coplanar waveguide transmission line described in the block dialog box in terms of its frequency-dependent S-parameters. A coplanar waveguide transmission line is shown in cross-section in the following figure. Its physical characteristics include the conductor width (w), the conductor thickness (t), the slot width (s), the substrate height (d), and the relative permittivity constant (ε). The block lets you model the transmission line as a stub or as a stubless line.
Stubless Transmission Line
If you model a coplanar waveguide transmission line as a stubless line, the Coplanar Waveguide Transmission Line block first calculates the ABCD-parameters at each frequency contained in the modeling frequencies vector. It then uses the abcd2s function to convert the ABCD-parameters to S-parameters. The block calculates the ABCD-parameters using the physical length of the transmission line, d, and the complex propagation constant, k, using the following equations:
A = cosh(kd)
B = Z[0] sinh(kd)
C = sinh(kd) / Z[0]
D = cosh(kd)
Z[0] and k are vectors whose elements correspond to the elements of f, a vector of modeling frequencies. Both can be expressed in terms of the specified conductor strip width, slot width, substrate height, conductor strip thickness, relative permittivity constant, conductivity and dielectric loss tangent of the transmission line, as described in [1].
Shunt and Series Stubs
If you model the transmission line as a shunt or series stub, the Coplanar Waveguide Transmission Line block first calculates the ABCD-parameters at each frequency contained in the vector of modeling frequencies. It then uses the abcd2s function to convert the ABCD-parameters to S-parameters.
Shunt ABCD-Parameters
When you set the Stub mode parameter in the mask dialog box to Shunt, the two-port network consists of a stub transmission line that you can terminate with either a short circuit or an open circuit as shown here. Z[in] is the input impedance of the shunt circuit. The ABCD-parameters for the shunt stub are calculated as
A = 1
B = 0
C = 1/Z[in]
D = 1
Series ABCD-Parameters
When you set the Stub mode parameter in the mask dialog box to Series, the two-port network consists of a series transmission line that you can terminate with either a short circuit or an open circuit as shown here. Z[in] is the input impedance of the series circuit. The ABCD-parameters for the series stub are calculated as
A = 1
B = Z[in]
C = 0
D = 1
[1] Gupta, K. C., Ramesh Garg, Inder Bahl, and Prakash Bhartia, Microstrip Lines and Slotlines, 2nd Edition, Artech House, Inc., Norwood, MA, 1996.
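The abcd2s step can be sketched outside MATLAB as well. The following Python fragment is an illustration, not the MathWorks implementation; the line parameters are invented. It applies the standard ABCD-to-S conversion to a lossless 50-ohm line:

import numpy as np

def abcd2s(A, B, C, D, z0=50.0):
    # standard ABCD -> S-parameter conversion for reference impedance z0
    den = A + B / z0 + C * z0 + D
    return np.array([[(A + B / z0 - C * z0 - D) / den, 2 * (A * D - B * C) / den],
                     [2 / den, (-A + B / z0 - C * z0 + D) / den]])

Z0 = 50.0                        # invented characteristic impedance
kd = 1j * np.pi / 3              # lossless line, electrical length 60 degrees
A, D = np.cosh(kd), np.cosh(kd)
B, C = Z0 * np.sinh(kd), np.sinh(kd) / Z0
print(np.abs(abcd2s(A, B, C, D)))   # |S11| ~ 0 and |S21| = 1: a matched line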
{"url":"http://www.mathworks.com.au/help/simrf/ref/coplanarwaveguidetransmissionline.html?nocookie=true","timestamp":"2014-04-24T02:22:21Z","content_type":null,"content_length":"44743","record_id":"<urn:uuid:e990a05c-1813-418b-8d33-c1317b12c1cb>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
cis is often used by mathematicians as a shorthand: cis(x) = cos(x) + i*sin(x). This function is often used when speaking of vectors in a 2-dimensional plane representing the set of complex numbers. In this use (keeping with the example above) x represents the angle a unit vector makes with the positive real axis and cis(x) its complex representation. Perhaps one of its most famous uses is in Euler's formula, relating the exponentiation function to the trigonometric functions.
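A small illustrative sketch in Python:

import cmath, math

def cis(x):
    # cis(x) = cos(x) + i*sin(x)
    return complex(math.cos(x), math.sin(x))

theta = math.pi / 3
print(cis(theta))             # (0.5+0.866...j): a unit vector at 60 degrees
print(cmath.exp(1j * theta))  # Euler's formula: e^(i*x) equals cis(x)
print(abs(cis(theta)))        # 1.0: cis(x) always lies on the unit circle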
{"url":"http://everything2.com/title/CIS","timestamp":"2014-04-17T18:54:05Z","content_type":null,"content_length":"23540","record_id":"<urn:uuid:271399eb-ec34-40e1-b7e2-f28799eae7db>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
ChemistryOpt::CoordinateModelInterface Interface Reference Public Member Functions int initialize () Registers and gets ports, and requests Model object(s) from the ModelFactory component(s). int finalize () Releases and unregisters ports. void set_model (in Chemistry.QC.ModelInterface model) Sets the contained chemistry Model object (currently unused as the chemistry Model object is normally obtained from a ModelFactory during initialization). Chemistry.QC.ModelInterface get_model () Returns the contained chemistry Model object. int get_n_coor () Returns the number of coordinates. array< double, 1 > get_coor () Returns the array of (cartesian or internal) coordinates which are being optimized. double get_energy (in array< double, 1 > x) Returns the energy of the currently contained model with the values of the optimization coordinates given in x. array< double, 1 > get_gradient (in array< double, 1 > x) Returns the energy gradient of the currently contained model with the values of the optimization coordinates given in x. array< double, 2 > get_hessian (in array< double, 1 > x) Returns the energy Hessian of the currently contained model with the values of the optimization coordinates given in x. void get_energy_and_gradient (in array< double, 1 > x, out double f, in array< double, 1 > g) Sets f and g to the energy and energy gradient, respectively, of the chemistry model at x. void guess_hessian_solve (in array< double, 1 > effective_grad, in array< double, 1 > effective_step, in opaque first_geom) Returns the product of the guess hessian inverse and an effective gradient. void checkConvergence (inout int flag) Determines if the optimization has converged, flag is set to 1 if convergence has been achieved and 0 otherwise. void monitor () For visualization, possibly unused (?). Chemistry-specific optimization tasks are performed by this component. These tasks include internal coordinate generation, coordinate transformations, convergence checking, and the updating of coordinate values.
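To make the intended call sequence concrete, here is a hypothetical driver loop. Only the method names come from the interface above; the Python-style binding, the return conventions, and the toy steepest-descent step are invented for illustration (the real component uses CCA ports):

# hypothetical client of a CoordinateModelInterface-like object `cm`
cm.initialize()                  # registers/gets ports, obtains a Model
x = cm.get_coor()                # coordinates being optimized

flag = 0
while not flag:
    f, g = cm.get_energy_and_gradient(x)  # assumed Pythonic return of (f, g)
    x = x - 0.1 * g                       # toy steepest-descent step
    flag = cm.checkConvergence(flag)      # inout flag: set to 1 on convergence

cm.finalize()                    # releases and unregisters ports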
{"url":"http://www.scl.ameslab.gov/Projects/borges/cca-gen2/interfaceChemistryOpt_1_1CoordinateModelInterface.html","timestamp":"2014-04-20T06:07:00Z","content_type":null,"content_length":"26373","record_id":"<urn:uuid:2c7879b3-4d89-422a-a2d9-e2ec062d7155>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Laplace, Pierre Simon Laplace
Pierre-Simon Laplace was a mathematician who firmly believed the world was entirely deterministic. Like a man with a hammer to whom everything was a nail, to Laplace the universe was nothing but a giant problem in calculus. Laplace's Mécanique Céleste (Celestial Mechanics) essentially translated the geometrical study of mechanics by Newton to one based on calculus. Napoleon asked Laplace why there was not a single mention of God in Laplace's entire five-volume work explaining how the heavens operated. (Newton, a man of science who believed in an omnipresent God, had even posited God's periodic intervention to keep the universe on track.) Laplace replied to Napoleon that he had "no need for that particular hypothesis". Laplace also used calculus, among other things, to explore probability theory. Laplace considered probability theory to be simply "common sense reduced to calculus", which he systematized in his "Essai Philosophique sur les Probabilités" (Philosophical Essay on Probability, 1814). Laplace's contention that the universe and all it contained were deterministic machines was thoroughly over-turned by the discoveries of twentieth century physics.
About the Image: Laplace is portrayed with what is possibly the most celebrated differential equation ever devised -- Laplace's partial differential equation, commonly referred to as Laplace's Equation, shown here in the form of a Laplacian operator. Laplace's partial differential equation has been successfully used for tasks as diverse as describing the stability of the solar system, the field around an electrical charge, and the distribution of heat in a pot of food in the oven. Laplace's image has been transformed by a Laplacian operator. The Laplacian of an image highlights regions of rapid intensity change and is suitable for edge detection (critical in almost all image analysis applications, and extending to areas such as robotic vision). Inscribed over the portrait of Laplace is the Laplacian distribution curve. The Laplacian probability density function has found digital age applicability in data compression. The background to Laplace's portrait is a graphic derived from a solution to Laplace's equation.
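The edge-detection use of the Laplacian mentioned above can be sketched in a few lines of Python (the toy image and kernel choice are illustrative):

import numpy as np
from scipy.ndimage import convolve

# 3x3 discrete Laplacian kernel, approximating d^2/dx^2 + d^2/dy^2
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]])

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                      # toy image: a bright square
edges = convolve(image, kernel, mode="constant")
print(np.nonzero(edges))                   # nonzero only along the square's edges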
{"url":"http://www.mathematicianspictures.com/Mathematicians/Laplace.htm","timestamp":"2014-04-21T07:29:20Z","content_type":null,"content_length":"69769","record_id":"<urn:uuid:77ebe8d7-7be1-4d72-bf68-3d2976e9d65e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
module Numeric.Band.Rectangular ( Rect(..) ) where

import Numeric.Algebra.Class
import Numeric.Algebra.Idempotent
import Data.Semigroupoid

-- | a rectangular band is a nowhere commutative semigroup.
-- That is to say, if ab = ba then a = b. From this it follows
-- classically that aa = a and that such a band is isomorphic
-- to the following structure
data Rect i j = Rect i j deriving (Eq,Ord,Show,Read)

instance Semigroupoid Rect where
  -- composition takes its first index from the right operand
  -- and its second index from the left operand
  Rect _ i `o` Rect j _ = Rect j i

instance Multiplicative (Rect i j) where
  -- the product takes its first index from the left factor and
  -- its second index from the right factor
  Rect i _ * Rect _ j = Rect i j

-- idempotent by construction: Rect i j * Rect i j = Rect i j
instance Band (Rect i j)
{"url":"http://hackage.haskell.org/package/algebra-0.7.1/docs/src/Numeric-Band-Rectangular.html","timestamp":"2014-04-19T10:38:12Z","content_type":null,"content_length":"4529","record_id":"<urn:uuid:fdb82223-24e4-458b-9d54-ca9570c534d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
irrational between any two rational proof
March 14th 2008, 07:11 PM
irrational between any two rational proof
Hey guys, new to the forum, and I have a question of my own. I need to prove that for rational x and y with x < y, there exists an irrational z with x < z < y. The previous part of the question was to find the smallest positive integer n such that sqrt(2)/n < 0.00001. I think I'm supposed to use this and contradiction to prove one exists, as contradiction was used in the proofs in the question before. Any help will be greatly appreciated.
March 15th 2008, 01:04 AM
March 15th 2008, 08:45 PM
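One standard route, sketched (the intended solution may phrase the contradiction differently). The first part shows you can make $\sqrt{2}/n$ as small as you like, so:

Choose $n$ with $\dfrac{\sqrt{2}}{n} < y - x$, and let $z = x + \dfrac{\sqrt{2}}{n}$. Then $x < z < y$. If $z$ were rational, then $\sqrt{2} = n(z - x)$ would be rational, a contradiction; hence $z$ is irrational.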
{"url":"http://mathhelpforum.com/calculus/31015-irrational-between-any-two-rational-proof-print.html","timestamp":"2014-04-17T15:05:58Z","content_type":null,"content_length":"5663","record_id":"<urn:uuid:7e93f555-323e-47dc-9619-e4fb65f1b5a2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Drexel Hill ACT Tutor Find a Drexel Hill ACT Tutor ...In addition to the usual subjects, I am qualified to tutor actuarial math, statistics and probability, theoretical computer science, combinatorics and introductory graduate topics in discrete mathematics. I am willing to tutor individuals or small groups. I am most helpful to students when the tutoring occurs over a longer period of time. 18 Subjects: including ACT Math, calculus, geometry, statistics ...My students often hear that their application essays stand out from the pack. I deeply enjoy helping students pick the best university for their learning and career goals. I have Pennsylvania Level II teaching certifications in English 7-12, Math 7-12, and Health K-12. 47 Subjects: including ACT Math, chemistry, English, writing ...Previously, I completed undergraduate work at North Carolina State University for a degree in Philosophy. Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects of mat... 22 Subjects: including ACT Math, calculus, geometry, statistics ...With over 15 years of teaching experience, I thoroughly enjoy working with students of all ages. I have taught various grades in private schools, charter schools and district schools. I have also worked in schools as a Reading Specialist and Special Education teacher. 17 Subjects: including ACT Math, reading, writing, algebra 1 I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 9 Subjects: including ACT Math, geometry, algebra 1, GRE
{"url":"http://www.purplemath.com/drexel_hill_act_tutors.php","timestamp":"2014-04-16T19:10:58Z","content_type":null,"content_length":"23866","record_id":"<urn:uuid:476c3a67-ac32-4568-81c2-3fce99e8edb1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics and Data Analysis
Essential Questions:
• How is information conveyed through statistics?
• How does the format of the statistic affect the message communicated?
• How can you critically evaluate statistical information?
• Definition of: mean, median, mode, range, and outlier; explain how to find each
• How an outlier may impact the mean
• When a median or mode might be more accurate or informative than the mean (Bill Gates example)
• How to talk back to a statistic. What questions should you ask?
• The difference between a random and biased sample
• How a graph can be misleading, and how it could be made honest
• The impact of sample size on statistics
• Design considerations when developing a survey or gathering data
Skills and Processes:
*Review decimal arithmetic, especially division and estimation
*Calculate mean, median, mode, and range for a set of data in numerical list and graph forms
*Identify population & sample size
*Calculate the percent of a total and create a pie chart representation
*Review rounding decimal numbers, using a protractor to measure degrees, and fraction-decimal-% conversions
*Extract information from a bar, line, or pie graph, and use it to make decisions
*Create misleading bar, line, and pictographs
*Distinguish between unbiased and biased surveys
Written Test:
Statistics Survey: Choose a question to investigate, conduct a survey, create honest & misleading graphs; present the results
Graph Evaluation: Find three misleading graphs (newspaper, Internet); describe how it is misleading, how to make it more misleading, & how to make it more truthful
The Sneaker Problem and McDonalds (ranking information and making decisions)
Worksheets on mean, median, mode, outliers, line/bar/pie graphs
How to Lie with Statistics by Darrell Huff
Is Democracy Fair? by Leslie Johnson Nielsen and Michael de Villiers (different voting systems and ways of counting ballots)
USA Today web site Snapshots
Misleading graph images (online and printed)
GraphMaster software for creating line, bar, and pie graphs
Integrated Learning: Social Studies and Science examples of misleading and appropriate statistics
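A small illustration of the outlier point above (Python; the data are made up):

from statistics import mean, median, mode

# illustrative salaries with one extreme outlier
data = [40, 45, 47, 50, 50, 52, 55, 100000]
print(mean(data))             # 12542.375 -- pulled far upward by one outlier
print(median(data))           # 50.0      -- barely affected; more informative here
print(mode(data))             # 50        -- the most frequent value
print(max(data) - min(data))  # 99960     -- the range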
{"url":"http://www.catlin.edu/curriculum/unit/statistics","timestamp":"2014-04-16T19:10:59Z","content_type":null,"content_length":"20039","record_id":"<urn:uuid:cb311391-50dc-4f85-bcfe-d110c9b338e6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
October 10th 2006, 03:35 PM #1 Sep 2006
Question 1: In this question, use pi = 3.14 and assume the earth to be a sphere of radius 6370 km. The towns A and B are both on the circle of latitude 24° N. The longitude of A is 108° E and the longitude of B is 75° E. Calculate, correct to the nearest kilometre,
a) the radius of the circle of latitude 24° N
b) the shortest distance between A and B, measured along the circle of latitude 24° N.
Question 2: In this question assume that the earth is a sphere of radius 6370 km. The four arcs on the diagram represent the equator, the Greenwich Meridian, latitude 6° N and latitude 52° N.
a) The Greenwich Meridian passes through London (52° N, 0) and Accra (6° N, 0).
i) Calculate, to the NEAREST kilometre, the shortest distance between London and Accra along their common circle of longitude. Use pi = 3.14.
c) Tropical Storm Kyle was reported to be located 5470 km due west of Accra.
i) Calculate the radius of the circle of latitude on which K lies.
Question 1: In this question, use pi = 3.14 and assume the earth to be a sphere of radius 6370 km. The towns A and B are both on the circle of latitude 24° N. The longitude of A is 108° E and the longitude of B is 75° E. Calculate, correct to the nearest kilometre,
a) the radius of the circle of latitude 24° N
I believe that latitude is measured by the arc from the latitude line to the equator. With this in mind, draw a circle P. Draw the diameter of the circle. Now label the points that are formed A and E. Now draw a ray from point P that crosses the circle at point B such that APB = 24 degrees. Now draw a ray from point P that crosses the circle at point D such that DPE = 24 degrees (this line needs to be on the same side of the diameter as ray PB). Now draw a line connecting points B and D. That line represents a side view of latitude 24. Now draw a line that's perpendicular to AE and intersects BD at point C. Notice that that line is also perpendicular to BD. Thus PCD is 90 degrees and CPD is 66 degrees. And you know that PD is 6370 km. Thus CD is sin(66) times 6370 km. Note that CD is the radius of the circle formed at 24 degrees north. Thus the radius is 6370 sin(66), which equals approximately 5819 km.
Question 1 only
Question 1: In this question, use pi = 3.14 and assume the earth to be a sphere of radius 6370 km. The towns A and B are both on the circle of latitude 24° N. The longitude of A is 108° E and the longitude of B is 75° E. Calculate, correct to the nearest kilometre,
a) the radius of the circle of latitude 24° N
b) the shortest distance between A and B, measured along the circle of latitude 24° N.
a) I've attached a diagram to show you how I calculated the radius of the circle at 24° latitude: r = R * cos(24°). Plug in the value you know and you'll get: r = 6370 km * cos(24°) = 5819 km, as Quick has already calculated.
b) The points A and B lie on the circumference of the circle at latitude 24°. The difference between these points is 33°. The circumference of this circle is: c = 2 * pi * 5819 km = 36543 km. Let d be the distance between A and B. Now you have the proportion: c / 360° = d / 33°. Solve for d: d = (c * 33°) / (360°) = 3350 km.
Question 2 only
Question 2: In this question assume that the earth is a sphere of radius 6370 km. The four arcs on the diagram represent the equator, the Greenwich Meridian, latitude 6° N and latitude 52° N.
a) The Greenwich Meridian passes through London (52° N, 0) and Accra (6° N, 0).
i) Calculate, to the NEAREST kilometre, the shortest distance between London and Accra along their common circle of longitude. Use pi = 3.14.
c) Tropical Storm Kyle was reported to be located 5470 km due west of Accra.
i) Calculate the radius of the circle of latitude on which K lies.
Use the method I've shown to you in my previous reply: Distance between London (L) and Accra (A) is d. The difference between L and A is 46°. You have the proportion: d / 46° = (2 * pi * 6370 km) / 360°. Solve for d. (5112 km)
The radius of the circle at latitude 6° N is calculated: r = 6370 km * cos(6°) = 6335 km
October 10th 2006, 05:46 PM #2 October 11th 2006, 01:49 AM #3 October 11th 2006, 01:56 AM #4
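The two formulas used in the replies package up neatly; a small Python sketch with the values from the problem:

import math

R = 6370.0   # earth's radius in km, as given

def latitude_circle_radius(lat_deg):
    # r = R * cos(latitude), as in part (a)
    return R * math.cos(math.radians(lat_deg))

def arc_along_latitude(lat_deg, dlon_deg, pi=math.pi):
    # fraction dlon/360 of the latitude circle's circumference
    return 2 * pi * latitude_circle_radius(lat_deg) * dlon_deg / 360.0

print(round(latitude_circle_radius(24)))        # 5819, matching Question 1(a)
print(round(arc_along_latitude(24, 33, 3.14)))  # 3350, matching Question 1(b)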
{"url":"http://mathhelpforum.com/trigonometry/6315-trigonometry.html","timestamp":"2014-04-18T17:55:16Z","content_type":null,"content_length":"45007","record_id":"<urn:uuid:8a7801d3-09b8-4c22-8abe-3030ae35eaa5>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph theory
March 27th 2012, 10:08 PM #1 Super Member Feb 2008
Graph theory
Let G be a simple triangle-free graph. Prove that the minimum size of an edge cover, β'(G), is greater than or equal to the minimum degree of G, δ(G). I don't have any idea how to prove this. Can someone show the proof? Thanks a lot!
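A sketch of one standard argument (check it against your course's definitions of β' and δ):

Let $v$ be a vertex of maximum degree $\Delta(G)$. Since $G$ is triangle-free, the neighbourhood $N(v)$ is an independent set, so no single edge can cover two neighbours of $v$. An edge cover must cover every vertex of $N(v)$, so it uses at least $|N(v)| = \Delta(G)$ edges. Hence $\beta'(G) \ge \Delta(G) \ge \delta(G)$.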
{"url":"http://mathhelpforum.com/advanced-math-topics/196500-graph-theory.html","timestamp":"2014-04-19T19:50:20Z","content_type":null,"content_length":"28896","record_id":"<urn:uuid:d9d82a23-bbe3-46ee-9b8b-1107fefee400>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone do the math for me? I'd appreciate it if someone did the math on this one for me. I'm also just checking this out because I posted an evasive Garchomp moveset. If a Garchomp with sandveil is holding brightpowder, and is in a sandstorm, then maxes his evasiveness (uses double team 6 times) what is the chance (%) of a move with 100 accuracy hitting him?
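Rough numbers, hedged: the exact formula depends on the game generation, but assuming the commonly quoted Gen IV multipliers (an accuracy factor of 3/9 against +6 evasion, 0.8 from Sand Veil in a sandstorm, 0.9 from Bright Powder), the arithmetic is:

move_accuracy = 1.00        # a 100-accuracy move
evasion_factor = 3 / 9      # accuracy multiplier vs. +6 evasion (assumed)
sand_veil = 0.8             # Sand Veil in sand (assumed multiplier)
bright_powder = 0.9         # Bright Powder (assumed multiplier)
print(move_accuracy * evasion_factor * sand_veil * bright_powder)  # 0.24

So under those assumptions the move lands roughly 24% of the time; different generation mechanics shift the figure, but the multiply-the-modifiers structure stays the same.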
{"url":"http://pokemondb.net/pokebase/82100/can-someone-do-the-math-for-me?show=82104","timestamp":"2014-04-16T23:55:38Z","content_type":null,"content_length":"31794","record_id":"<urn:uuid:f1437ef5-c571-4a33-98b4-b897142d9d46>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Method - Macaulay Method
Describes the Macaulay Method for calculating the deflection of Beams.
Macaulay's Method - Introduction
Macaulay's method (the double integration method) is a technique used in structural analysis to determine the deflection of Euler-Bernoulli beams. Use of Macaulay's technique is very convenient for cases of discontinuous and/or discrete loading. Using Calculus to find expressions for the deflection of loaded beams (see Deflection of Beams Part 1), it is normally necessary to have a separate expression for the Bending Moment for each section of the beam between adjacent concentrated loads or reactions. Each section will produce its own equation with its own constants of integration. It will be appreciated that in all but the simplest cases the work involved will be laborious; the separate equations being linked together by equating slopes and deflections given by the expressions on either side of each "junction point". However, a method devised by Macaulay enables one continuous expression for bending moment to be obtained, and provided that certain rules are followed the constants of integration will be the same for all sections of the beam. It is advisable to deal with each different type of load separately.
Concentrated Loads
A beam is a horizontal structural element that is capable of withstanding load primarily by resisting bending. The bending force induced into the material of the beam as a result of the external loads, own weight, span and external reactions to these loads is called a bending moment. For a beam carrying concentrated loads $W_1, W_2, \ldots$ at distances $a_1, a_2, \ldots$ from the origin, the bending moment in the last section of the beam (enclosing all the loads) can be written, for $x$ greater than every $a_i$, as
$M = R\,x - W_1[x - a_1] - W_2[x - a_2] - \cdots$
Subject to the condition that all terms for which the quantities in the square brackets are negative are omitted (i.e. given a value of zero), this equation may be said to represent the bending moment for all values of $x$. The brackets are integrated as a whole, i.e.
$\int [x-a]\,dx = \frac{[x-a]^2}{2}$ and $\int \frac{[x-a]^2}{2}\,dx = \frac{[x-a]^3}{6}$
By doing so it can be shown that the constants of integration are common to all sections of the beam: at a junction point $x = a$ the bracketed terms vanish, so the slope and deflection expressions on either side of the junction agree automatically.
Uniformly Distributed Loads
Supposing that a uniformly distributed load $w$ is applied from a distance $a$ to the end of the beam. Hence its contribution to the bending moment, for $x > a$, is
$-\frac{w[x-a]^2}{2}$
Each length of the loading acts at its centre of gravity. The square brackets are interpreted as before. For $x > a$ this reduces to the ordinary expression $-w(x-a)^2/2$, which is clearly correct. The remaining steps of integration are the evaluation of the constants, and proceed as before.
Example - Example 1
A simply supported beam of length $l$ carries a concentrated load $W$ at a distance $a$ from one end (with $b = l - a$ and $a \ge b$). Find the position and magnitude of the maximum deflection and show that the position is always approximately within $0.08\,l$ of the centre of the beam.
The maximum deflection (i.e. zero slope) will occur on the length $a$, since $a \ge b$. Taking the axes as shown in the diagram, with the reaction at the origin equal to $Wb/l$:
$EI\,\frac{d^2y}{dx^2} = \frac{Wb}{l}x - W[x-a]$
Integrating
$EI\,\frac{dy}{dx} = \frac{Wb}{l}\frac{x^2}{2} - \frac{W[x-a]^2}{2} + A$
And integrating again gives:
$EI\,y = \frac{Wb}{l}\frac{x^3}{6} - \frac{W[x-a]^3}{6} + Ax + B$
At $x = 0$, $y = 0$, so $B = 0$. At $x = l$, $y = 0$. From which:
$A = -\frac{Wb(l^2-b^2)}{6l}$
We need to find the value of $x$ at which $dy/dx = 0$; since this occurs on the length $a$, the bracketed term is omitted. Hence
$x = \sqrt{\frac{l^2-b^2}{3}}$
at the point of maximum deflection. To find the value of this deflection, substitute into the deflection equation (again omitting the bracket):
$y_{max} = -\frac{Wb\,(l^2-b^2)^{3/2}}{9\sqrt{3}\,EI\,l}$
This gives the distance from the centre of the beam to be:
Distance from Centre $= \sqrt{\frac{l^2-b^2}{3}} - \frac{l}{2}$
which has a maximum value, as $b \to 0$, of $l/\sqrt{3} - l/2 \approx 0.0774\,l$.
• The distance from the centre of the beam is therefore always less than about $0.08\,l$.
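A quick numerical check of the closing claim (a sketch: it just scans the position formula derived above over all admissible load positions):

import numpy as np

# zero slope at x = sqrt((l^2 - b^2)/3) on the longer segment (a >= b, so b <= l/2)
l = 1.0
b = np.linspace(1e-6, l / 2, 1001)
x = np.sqrt((l**2 - b**2) / 3.0)
print(np.max(np.abs(x - l / 2)))   # ~0.0774: always within ~0.08*l of the centre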
{"url":"http://www.codecogs.com/library/engineering/materials/beams/macaulay-method.php","timestamp":"2014-04-18T03:24:12Z","content_type":null,"content_length":"42631","record_id":"<urn:uuid:7e390459-70c3-4984-8fd1-470aa76da27e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Income Inequality Data Has Flaws
In the Wall Street Journal yesterday, Alan Reynolds pointed out some of the flaws in the data being used in the income inequality debate. Far too many policymakers, analysts, and reporters assume that the data showing rising inequality is carved in stone, but it isn't. Some portion of the supposed change in income inequality in recent decades is a statistical artifact due to changes in marginal tax rates and other factors. One of Alan's points is that fluctuations in capital gains (CG) realizations by the top 1 percent of earners play an important role in that group's measured income share out of total American income. I constructed two charts with Alan's data to illustrate the point. The two charts are scatter plots using data from 1979 to 2009.
Chart 1: Lower CG Tax Rates Lead to Higher CG Realizations for the Top 1%. In years when we had a higher 28 percent CG tax rate, the share of high earners' income from CG is lower. In years when we've had lower 15 and 20 percent CG rates, the share is higher.
Chart 2: Higher CG Realizations Increase the Measured Share of the Top Earners' Income. In years with lower CG tax rates, high earners realize more CG, and that inflates their measured share of total American income.
(Note for data wonks: Regressions on these two relationships were highly statistically significant, i.e. high F-statistics).
{"url":"http://www.downsizinggovernment.org/print/income-inequality-data-has-flaws","timestamp":"2014-04-17T03:49:44Z","content_type":null,"content_length":"9813","record_id":"<urn:uuid:0a590684-d9f4-4836-880f-63df186244d0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - Single slit diffraction
GeneralOJB Jan3-13 01:42 AM
Single slit diffraction
I'm confused about the single slit diffraction pattern. Why are there light and dark patterns? Where is the constructive and destructive interference occurring if there is just one wave?
Drakkith Jan3-13 01:56 AM
Re: Single slit diffraction
The wave interferes with itself. You can consider the wave to be composed of many smaller "wavelets" and these all add up to give the interference pattern. See the following article.
GeneralOJB Jan3-13 01:59 AM
Re: Single slit diffraction
So when doing the double slit experiment, one will see two diffraction patterns on top of each other then?
vanhees71 Jan3-13 03:08 AM
Re: Single slit diffraction
The most simple picture about diffraction comes from using the Fraunhofer case (both source and detection screen at infinity) and Kirchhoff's approximate formula. Then the diffraction pattern seen at the screen turns out to be given by the Fourier transform of the openings, i.e., the electric field is proportional to this Fourier transform. The physical picture behind this is that any point of the opening is the source of a wave, and at the infinitely far away screen you can approximate the spherical wave by a plane wave (Fraunhofer approximation).
You find the math in great detail at the Wikipedia link in GeneralOJB's posting.
Drakkith Jan3-13 03:20 AM
Re: Single slit diffraction
Quote by vanhees71 (Post 4216616)
You find the math in great detail at the Wikipedia link in GeneralOJB's posting.
I think he means my post. :wink:
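Quantitatively, summing the wavelets in the Fraunhofer limit gives the intensity pattern I(theta) = I0 * sinc^2(a*sin(theta)/lambda), with dark fringes where a*sin(theta) = m*lambda. A small numerical sketch (slit width and wavelength are invented values):

import numpy as np

wavelength = 500e-9          # 500 nm light (illustrative)
a = 5e-6                     # 5 micrometre slit width (illustrative)

def intensity(theta):
    # Fraunhofer single-slit pattern; np.sinc(x) = sin(pi*x)/(pi*x)
    return np.sinc(a * np.sin(theta) / wavelength) ** 2

first_min = np.arcsin(wavelength / a)        # a*sin(theta) = 1*lambda
print(intensity(0.0), intensity(first_min))  # 1.0 (bright centre), 0.0 (dark fringe)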
{"url":"http://www.physicsforums.com/printthread.php?t=662161","timestamp":"2014-04-16T07:41:43Z","content_type":null,"content_length":"7135","record_id":"<urn:uuid:d65ba262-db25-4e05-943d-41d20a8061ae>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Polygonizations
In November 2000, I ran into the following question. I was sure that it had been explored before, but I had trouble finding information on it, so I created this webpage.
How many simple polygons on n points can there be?
A simple polygon is a closed chain of straight line segments that does not cross itself. More formally,
What is the number of simple polygonizations of a set of points, maximized over all sets of n points?
A simple polygonization of a set of points is a simple polygon whose vertices are precisely that set of points. Simple polygonizations are also called simple polygonalizations, spanning cycles, Hamiltonian polygons, planar tours, or planar traveling salesman (TSP) tours.
The goal would be to give asymptotic bounds on this number, call it P(n), in terms of n. One trivial bound is that P(n) is at most n!, because every polygonization induces an order on the n points. But one would expect that most orders induce crossings, and so P(n) is much smaller. Indeed, it turns out that the asymptotics of P(n) is b^n for some constant b. What remains open is the exact base b of the exponentiation. Here is a brief history of upper and lower bounds on b; please let me know if I missed any.
Table 1: Approximate Lower Bounds.
Year  Bound on Base  Reference
1979  2.27           [Akl 1979]
1980  2.15           [Newborn and Moser 1980]
1987  3.26846179     [Hayward 1987]
1998  3.60501960     [García and Tejel 1998]
1995  4.642          [García, Noy, and Tejel 1995]
These lower bounds are generally based on counting a subset of polygonizations of a particular arrangement of points.
Table 2: Approximate Upper Bounds.
Year  Bound on Base  Reference
1982  10^13          [Ajtai, Chvátal, Newborn, and Szemerédi 1982]
1989  1,384,000      [Smith 1989; García, Noy, and Tejel 1995]
1997  53,000         [Pach and Tóth 1997; Ajtai et al. 1982]
1998  38,837         [Seidel 1998; García, Noy, and Tejel 1995]
1997  2,226          [Denny and Sohler 1997; García, Noy, and Tejel 1995]
1999  1,888          [Denny and Sohler 1997; Dumitrescu 1999]
1999  936            [Denny and Sohler 1997; Alt, Fuchs, and Kriegel 1999]
2003  199            [Santos and Seidel 2003; Alt, Fuchs, and Kriegel 1999]
2005  87             [Sharir and Welzl 2005]
2009  70             [Buchin, Knauer, Kriegel, Schulz, Seidel 2007; Sharir and Sheffer 2009]
2011  56             [Sharir, Sheffer, Welzl 2011]
Several of these upper-bound papers are not directly about the polygonization problem, but rather are about one of two other problems that have been shown to be related in the sense that a bound on the related problem induces a bound on the polygonization problem. Specifically, those results marked with an [Ajtai et al. 1982] reference are based on lower bounds on the crossing number of a graph, whose relation is shown in [Ajtai et al. 1982]. Another approach [Sharir and Welzl 2005] uses a bound on the number of crossing-free matchings. Finally, the results marked with a [García, Noy, and Tejel 1995], [Dumitrescu 1999], [Alt, Fuchs, and Kriegel 1999], or [Sharir and Sheffer 2009] reference are based on upper bounds on the number of triangulations of n points. The first bound along these lines says that, if there are at most c^n triangulations, then there are at most (8 c)^n polygonizations [García, Noy, and Tejel 1995]. This bound was later improved to (6.75 c)^n [Dumitrescu 1999]. Eppstein (personal communication, July 2002) points out that an upper bound of (4 c)^n is straightforward, because there are only 2^2n ways to color the 2n triangles as inside or outside the polygon.
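For very small n, P can be computed directly by brute force. A minimal sketch (exponential time, general position assumed, and the point set is purely illustrative):

from itertools import permutations

def ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def cross(p, q, r, s):
    # proper crossing of segments pq and rs (general position assumed)
    return ccw(p, q, r) * ccw(p, q, s) < 0 and ccw(r, s, p) * ccw(r, s, q) < 0

def is_simple(poly):
    n = len(poly)
    e = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    return not any(cross(*e[i], *e[j])
                   for i in range(n) for j in range(i + 2, n)
                   if (i, j) != (0, n - 1))      # skip adjacent edge pairs

def count_polygonizations(pts):
    # fixing pts[0] kills rotations; p[0] < p[-1] kills reflections
    return sum(is_simple((pts[0],) + p)
               for p in permutations(pts[1:]) if p[0] < p[-1])

print(count_polygonizations(((0, 0), (4, 0), (1, 1), (2, 3))))  # prints 3

A triangle with one interior point has exactly three polygonizations, which the sketch confirms; already around n = 10 the factorial blow-up makes this hopeless, which is what makes the b^n bounds discussed here interesting.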
The bound of (3.3636 c)^n follows from an upper bound of 3.3636^n on the number of noncrossing cycles in a planar graph (triangulation) [Alt, Fuchs, and Kriegel 1999]. This bound on the number of noncrossing cycles was improved to 30^n/4 ≈ 2.3404^n [Buchin, Knauer, Kriegel, Schulz, and Seidel 2007], implying an upper bound of (2.3404 c)^n on the polygonization problem, which is the basis for the current best upper bound. At the time this paper was written, however, the best upper bound value on c was 59 [Alt, Fuchs, and Kriegel 1999], so the resulting upper bound of 138^n was worse than the already known bound of 87^n obtained by different techniques [Sharir and Welzl 2005].
The bounds on the number of triangulations are of interest in their own right, with applications in mesh encoding and graphics. The current best bound is 30^n [Sharir and Sheffer 2009]. Combined with [Buchin, Knauer, Kriegel, Schulz, and Seidel 2007], we obtain the current best bound on the polygonization problem: 70.2104^n. The latest upper bound, by Sharir, Sheffer, and Welzl [2011], uses support weighted counting and a weighted Kasteleyn method.
Related Problems
Classes of Polygons. Instead of counting the number of simple polygonizations, researchers have considered the number of certain types of simple polygonizations:
• Monotone Polygons: every vertical line intersects the boundary of the polygon at most twice. Here nearly tight bounds are known [Meijer and Rappaport 1990]. If the points have distinct x coordinates, the base of the exponent is between 1.618033989 and 2; and in general the base is between 2.058171027 and 2.236067977.
• Starshaped Polygons: every vertex of the polygon can be seen from a common point interior to the polygon, without obstruction by polygon edges. Nothing is known directly about this class. If the set of possible common interior points forms a positive-area kernel, then there are only polynomially many such polygons; specifically, there is an upper bound of O(n^4) [Deneen and Shute 1988]. This bound is claimed optimal in [Deneen and Shute 1988].
Other Subgraphs. Another avenue of research is to look at certain types of noncrossing subgraphs of an embedded complete graph, such as spanning trees and matchings, or even a general subgraph of the complete graph. See [Dumitrescu 1999] and [Sharir and Welzl 2005] for related results.
Decision Problems. Given a set of points in the plane, it is trivial to decide whether it has a simple polygonization: the answer is yes unless all the points happen to be collinear. On the other hand, given a set of points in the plane, deciding whether there is an orthogonal polygonization (all edges must be horizontal or vertical) is NP-complete if 180-degree angles are allowed [Rappaport 1986] and solvable in O(n log n) time otherwise [O'Rourke 1988]. In the latter case, any polygonization is unique [O'Rourke 1988].
Optimization Problems. Given a set of points in the plane, finding the simple polygonization with either minimum area or maximum area is NP-complete [Fekete 2000].
Acknowledgments
Thanks to David Eppstein for pointing out the reference to [Alt, Fuchs, and Kriegel 1999], which for several years led to the best upper bounds. Thanks to Michael Hoffmann for pointing out [Santos and Seidel 2003] which was also a part of the best upper bound for a time. Thanks to Piotr Rudnicki for pointing out an error in the paragraph on decision problems (need to assume noncollinearity).
References
[Ajtai, Chvátal, Newborn, and Szemerédi 1982] M. Ajtai, V. Chvátal, M. M. Newborn, and E.
Szemerédi, “Crossing-free subgraphs”, in Theory and Practice of Combinatorics, volume 12 of Annals of Discrete Mathematics and volume 60 of North-Holland Mathematics Studies, 1982, pages 9–12. [Akl 1979] Selim G. Akl, “A lower bound on the maximum number of crossing-free Hamiltonian cycles in a rectilinear drawing of K[n]”, Ars Combinatoria, volume 7, 1979, pages 7–18. [Alt, Fuchs, and Kriegel 1999] Helmut Alt, Ulrich Fuchs, and Klaus Kriegel, “On the number of simple cycles in planar graphs”, Combinatorics, Probability & Computing, volume 8, number 5, September 1999, pages 397–405. [Buchin, Knauer, Kriegel, Schulz, and Siedel 2007] Kevin Buchin, Christian Knauer, Klaus Kriegel, André Schulz, Raimund Seidel, “On the number of cycles in planar graphs”, in Proceedings of the 13th International Computing and Combinatorics Conference, 2007, pages 97–107. [Deneen and Shute 1988] Linda Deneen and Gary Shute, “Polygonizations of point sets in the plane”, Discrete & Computational Geometry, volume 3, number 1, 1988, pages 77–87. [Denny and Sohler 1997] M. Denny and C. Sohler, “Encoding a triangulation as a permutation of its point set”, Proceedings of the 9th Canadian Conference on Computational Geometry, 1997, pages 39–43. [Dumitrescu 1999] Adrian Dumitrescu, “On two lower bound constructions”, Proceedings of the 11th Canadian Conference on Computational Geometry, Vancouver, 1999. http://www.cs.ubc.ca/conferences/CCCG/elec_proc/ [Fekete 2000] S. P. Fekete, “On simple polygonalizations with optimal area,” Discrete & Computational Geometry, volume 23, number 1, 2000, pages 73–110. [García, Noy, and Tejel 1995] A. García, M. Noy, and A. Tejel, “Lower bounds on the number of crossing-free subgraphs of K[n]”, Proceedings of the 7th Canadian Conference on Computational Geoemtry, 1995, pages 97–102. [García and Tejel 1998] A. García and J. Tejel, “A lower bound for the number of polygonizations of N points in the plane”, Ars Combinatoria, volume 49, 1998, pages 3–19. [Hayward 1987] Ryan B. Hayward, “A lower bound for the optimal crossing-free Hamiltonian cycle problem”, Discrete & Computational Geometry, volume 2, number 4, 1987, pages 327–343. [Newborn and Moser 1980] Monroe Newborn and W. O. J. Moser, “Optimal crossing-free Hamiltonian circuit drawings of K[n]”, Journal of Combinatorial Theory, Series B, volume 29, 1980, pages 13–26. [O'Rourke 1988] Joseph O'Rourke, “Uniqueness of orthogonal connect-the-dots”, in Computational Morphology, North-Holland, 1988, pages 97–104. [Pach and Tóth 1997] János Pach and Géza Tóth, “Graphs drawn with few crossings per edge”, Combinatorica, volume 17, number 3, 1997, pages 427–439. [Rappaport 1986] David Rappaport, “On the complexity of computing orthogonal polygons from a set of points”, Technical Report SOCS-86.9, McGill University, Montréal, 1986. [Santos and Seidel 2003] Francisco Santos and Raimund Seidel, “A better upper bound on the number of triangulations of a planar point set”, Journal of Combinatorial Theory, Series A, volume 102, number 1, 2003, pages [Seidel 1998] Raimund Seidel, “On the number of triangulations of planar point sets”, Combinatorica, volume 18, number 2, 1998, pages 297–299. [Sharir and Sheffer 2009] Micha Sharir and Adam Sheffer, “Counting triangulations of planar point sets”, arXiv:0911.3352. First version November 2009, revision January 2010. [Sharir, Sheffer, and Welzl 2011] Micha Sharir, Adam Sheffer, and Emo Welzl. 
“Counting plane graphs: perfect matchings, spanning cycles, and Kasteleyn's technique”, arXiv:1109.5596, September 2011. [Originally announced during an invited talk by Emo Welzl at 23rd Canadian Conference on Computational Geometry, Toronto, Canada, August 2011.] [Sharir and Welzl 2005] Micha Sharir and Emo Welzl, “On the number of crossing-free matchings, cycles, and partitions”, Manuscript, July 2005. [Smith 1989] W. D. Smith, “Studies in Computational Geometry motivated by mesh generation”, Ph.D. Thesis, Princeton University, 1989.
{"url":"http://erikdemaine.org/polygonization/","timestamp":"2014-04-16T07:37:13Z","content_type":null,"content_length":"17384","record_id":"<urn:uuid:22f23810-e263-4db9-b593-9f01577001b9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
I've been given a spreadsheet which comes out of a time tracking application. The spreadsheet lists projects and resources (people), and the time expected and actually spent. Currently a lot of manual changes are being done once this report is generated. The report is always in the same format, but because of the number of resources, the rows where specific data appears will vary from month to month.
Row 1 = Project Name
Row 2 = Resource Name 1 (this row also contains the hours for resource1)
Row 3 = Resource Name 2 (this row also contains the hours for resource2)
Row 4 = Totals (totals up the number of resource hours for each week)
My problem is that the values reported are in a Text Format. I've created a worksheet that's grabbing the data from the original report, but when I attempt to sum the totals for each week to get the totals for the month I get 0, because the values on the main worksheet are text. Also, I want the user to be able to paste in the next month's report without having to convert manually (hoping I can do a conversion formula on my raw data worksheet). I'm using a lot of IF formulas to pull specific data from the report, and I've pulled over my project names and the totals per week, but now due to the conversion problem I can't sum for the month. I've used a similar formula in the past but it's for the opposite situation (number to text), =TEXT(A1,"0000000000") - is there a formula that converts it to numeric form? sorry I couldn't post an
Thanks in advance for any help.
Special Note: I hope the Title was accurately written for this post, tried to make it descriptive enough, thanks again for your help.
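One standard approach, offered as a sketch since exact behavior varies by Excel version: wrap each imported cell in the VALUE function, e.g. =VALUE(A1), or coerce text to a number arithmetically with =A1*1 or the double negation =--A1, then SUM the resulting range. For a one-time in-place conversion, copy a cell containing 1 and use Paste Special with the Multiply operation over the pasted text values.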
{"url":"http://www.knowexcel.com/view/1401313-converting-number-in-text-to-actual-numbers.html","timestamp":"2014-04-16T21:53:49Z","content_type":null,"content_length":"56612","record_id":"<urn:uuid:36a547c2-35fa-4d17-958f-1603a7ae6b0c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Roslyn, NY Math Tutor Find a Roslyn, NY Math Tutor
...It is never too late to learn. We use reading every day. Those of us who do it well are not aware of it.
10 Subjects: including prealgebra, reading, English, study skills
...I plan on working in the fields of nutrition and epigenetics. I am currently working in a Drosophila (fruit-fly) toxicology lab, where I will start my own research project in the Spring semester of 2014. Research, along with teaching, will probably be in my future line of work.
14 Subjects: including ACT Math, SAT math, writing, geometry
...I am also teaching at an after school program. The after school program focuses on building and strengthening students' academic skills. Students I have taught range in age from five years old to adult.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...Chemistry, organic chemistry, and biological sciences are among my favorite subjects to tutor. However, I am very comfortable with math up to and including pre-calculus. As a tutor I believe that anyone can learn any subject, so long as the material is broken down into smaller units for optimal and maximum comprehension.
20 Subjects: including trigonometry, statistics, English, algebra 1
Hi eager student or parent, First, congratulations for deciding to better yourself! I promise, tutoring can be rewarding and fun by bringing out the best in you or your student. A little about myself: I studied Brain and Cognitive Science at the Massachusetts Institute of Technology (MIT) with a minor in Theater Arts.
25 Subjects: including prealgebra, elementary math, SAT math, English
{"url":"http://www.purplemath.com/roslyn_ny_math_tutors.php","timestamp":"2014-04-18T15:56:23Z","content_type":null,"content_length":"23668","record_id":"<urn:uuid:702de36b-1deb-4214-8876-7733de2a4422>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
How I Teach Calculus: A Comedy (u-substitution)
This post is part of a larger series documenting the changes I am making to my calculus course. My goals are to implement standards-based grading and to introduce genuine applications of the concepts being taught. I'm not suffering any delusions that any of this is all that ground-breaking, I just want to log the comedy that ensues:
I wanted to introduce integrating more complicated functions in a way that was more meaningful than just saying, "Hey, I bet you can't get this one!"
My students and I have officially gotten on the calculus shuttle at this point. They have a firm grasp of differential and integral calculus conceptually, and have seen that, just like most other operators, the derivative has an inverse. (the integral!) They like this, and we build a sort of family tree of math where [ + ] and [ - ] live together, and [ * ] and [ / ] live together, and ^ with log, finally d/dx with the integral. The time now comes to flesh out our ability to do integrals. It is obvious to the kids that they can really only handle polynomials. I wanted to introduce integrating more complicated functions in a way that was more meaningful than just saying, "Hey, I bet you can't get this one!" That method works pretty well, but it was the first day of the year where temperatures were going to break 70 degrees, so I pretty much had to go outside. (It's a law in Iowa that you must celebrate with song and victuals the first day of the year that doesn't require a coat.) Here's what I (always) think to myself, "What will necessitate the usage of more complicated functions than polynomials, that will also be outside?" Answer: I have no idea! I check out a class set of digital cameras, and away we went. Their instructions: Take pictures of things you think are beautiful. Cornally gambles. This could most definitely blow up in my face. What if their pictures don't give any inspiration back to calculus? What if I can't figure out how to relate anything? What if they don't take this seriously, and as soon as we're out the door they scramble away into the hills never to be seen again except for when they scavenge a few sheep from a local farmer in the wee hours of the morn? These are the things teachers think about. The kids just said, "OK" and they participated just fine. We walked around our campus and the surrounding area, which is fairly rural, for about 20 minutes. We talked about all sorts of things. When do frogs come back from under the mud? Why do streams have ripples on the bottom?
What makes a good photo? We talked about Gestalt principles a bit, framing, and composition flow. In the end they took pictures of things like trees, streams, parking lot cracks, or whatever else they wanted. Being outside was refreshing, too (and legally mandated by NCLB…) We went back to the room and they downloaded their images. I asked the question, "Do you see any math?" A few stares, but when they realized I was serious, they started looking. Many of them found sine/cosine waves in things, some saw parabolas and circles. One that struck gold was this: Super simple. This student's goal with the photo, as he said, "I wanted to show the crack as the rising action from left to right, it looks kind of like root [square root function] but also looks like a letter y." "How minimalist of you," I replied. "What root function is that?" So we busted out Grapher (or whatever graphing tool) and they started trying to fit a function to this simple crack in the sidewalk. A student bubbled, "What if they wanted to fix the crack, Mr. C? What would they do?" I said, "I'm not sure, I bet they break up the whole square and replace it." "Couldn't they just cut out the crack and pour new concrete and have it join to the old concrete?" I said, "I'm not sure if wet cement bonds to cured cement." Another student pipes up, as they're working in Grapher, "I read this article about that for my chemistry project. They made a self-healing concrete. Part of it stays dry after it sets." "Sweet, how much of that do we need? I bet it's expensive," says the original photographer. I don't answer. They start thinking. I didn't plan this, I have no idea what's happening. I know your pancreas is probably knotting at this interchange, but this actually happened. I guess I can put those original teacher neuroses to bed a bit; if you let them, kids will come up with some awesome stuff. The story continues:
Student A: Hey, if I found the area from the crack on down that's the area of the chunk we'd need.
Student B: If we drew a line just under the crack we could save that good concrete and just fill a little gap.
Cornally: How can you find the area of a little swatch like that?
Student A: Just find the bigger area and minus the littler area.
Student B: Yeah. Something like this:
…and another lesson is in the books. Area between two curves. I love it when they do my job for me. So, the students continued on trying to fit functions to this picture. They ended up going outside again to measure the crack so they could get the numbers right (This was my idea, admittedly). Here are the screen shots from Grapher of a couple things they tried:
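(The Grapher screenshots aren't reproduced here.) In symbols, the students' "bigger area minus littler area" idea is the usual area-between-curves setup; writing $f$ for the upper curve and $g$ for the lower one over the crack's span $[a, b]$:

$A = \int_a^b f(x)\,dx - \int_a^b g(x)\,dx = \int_a^b \left[f(x) - g(x)\right]\,dx$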
The students think, and deftly say, “The thing in the root.” I pondered just letting them try to figure this out on their own, but experience has shown me that this specific kind of freedom rarely fruits. So I Pied-Pipered a bit. The total problem we decided to set up and solve was: We attacked the first integral. (I’m going to solve this problem for those of you who are students that happen to arrive here for help. If you’re a teacher, you can probably skip to the end.). The integrand is a composition of functions. You needs to sub-out the inner composed function, like: The problem most students have arises here and for some unknown reason is poorly explained. You absolutely cannot do this integral because it is mixing variables. It is asking you to perform an integral with respect to the variable x, but you have the variable u. This makes no mathematical sense. You must create a du. The only way we know how to do this is by finding the derivative of u, good thing that we defined u earlier! You might be asking yourself what about our limits of integration (a and b). They most certainly are left over from the world of x. We can leave them until later, when we rid ourselves of the integral and have plugged the x‘s back in. Notice that the goal of substitution is to create an integral that is doable. We can most certainly do the integral of root u with respect to u. Now let’s plug the x’s back in: We then finished our initial area problem for the concrete. We also took into account that the concrete is probably a few inches thick to find volume. They looked up how much concrete was, and we solved some of their questions. After this introduction, we moved through the standard treatment of u-substitution, and all of the formalities therein. I make an special point to discuss problems where the substitution for dx doesn’t cancel anything, and you have to use your definition of u again. This blog is for the things that I’m doing that I think you all might want to hear about. It is not exhaustive of my total classroom behaviors. All in all, this lesson was probably the biggest gamble of anything I’ve done this year. I asked for a lot of maturity and really had no idea where it was headed. I almost didn’t write about it for fear that you’d all find this ridiculous or unhelpful, but I’d really like to add evidence to the lessons-don’t-need-to-be-totally-figured-out position. Sometimes fluidity is freeing. Also, 70 degree days in Iowa are fleeting. Inquiry Stylee: Logistics WCYDWT: Wowza Google! Comments are disabled. 4 thoughts on “ How I Teach Calculus: A Comedy (u-substitution) • This was another fantastic lesson, Shawn. I love stepping into a class and just riffing on the natural curiosity of my students. It’s liberating and always surprising where they will take you. Now I’m curious to see what physics my students would find if I turned them loose with cameras. □ @Brian: Thanks! It was by far the least planned. Physics and cameras is a magical marriage. I just got a grant for a high speed digital camera. I’m blogging about the insanity of that right now. Thanks for the comment! • >I check out a class set of dig­i­tal cam­eras, and… Mmm, lucky guy. That won’t be happening where I teach. Hard for me to even imagine a class set of digital cameras. Thanks for writing this up. Maybe I’ll remember this post one day, and be very brave. (I always write x=-4.5 and x=0, when I have x end-points on a u integral. Or else I change them to u values. 
4 thoughts on “How I Teach Calculus: A Comedy (u-substitution)”

• This was another fantastic lesson, Shawn. I love stepping into a class and just riffing on the natural curiosity of my students. It’s liberating and always surprising where they will take you. Now I’m curious to see what physics my students would find if I turned them loose with cameras.

□ @Brian: Thanks! It was by far the least planned. Physics and cameras is a magical marriage. I just got a grant for a high speed digital camera. I’m blogging about the insanity of that right now. Thanks for the comment!

• >I check out a class set of digital cameras, and… Mmm, lucky guy. That won’t be happening where I teach. Hard for me to even imagine a class set of digital cameras. Thanks for writing this up. Maybe I’ll remember this post one day, and be very brave. (I always write x=-4.5 and x=0, when I have x end-points on a u integral. Or else I change them to u values. I mess up too often my own darn self, so I know my students will mess up if the x’s don’t announce their x-status.)

□ Sue: They’re pretty bad cameras, and my classes aren’t that big. The cameras can only hold 10 pictures at a few megapixels.
{"url":"http://shawncornally.com/wordpress/?p=493&cpage=1","timestamp":"2014-04-19T22:05:56Z","content_type":null,"content_length":"27772","record_id":"<urn:uuid:b79a2949-ea98-48be-8f9b-66e2c8bda40d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Are the NFL gambling lines consistent with each other?

According to this old Boston Globe article, Daryl Morey discovered that the Pythagorean Projection for the NFL should use the exponent 2.37. That means that from the Vegas betting line and the over/under, we should be able to come up with an estimate of the probability of winning the game. For instance, last night, the Patriots were favored by 13.5 points over the Giants, and the over/under was 46.5 points. That means that the expected score, in a sense, was Patriots 30, Giants 16.5. Using Pythagoras on that score, we get that New England should have had an 80.5% chance of winning the game. But the market prediction, from tradesports.com, was 88.0%. (Sorry, no link.) So why the difference? I can think of a couple of possible reasons:

1. Pythagoras doesn't work well on such heavy favorites;
2. The distributions are not symmetrical, so even though the *median* score is 30-16.5, the *mean* score is something else;
3. The market for outright wins is less efficient than the point-spread market.

I'd bet #2 is the correct answer: that strategic differences (such as the leading team taking time off the clock instead of going for more points) make the comparison inaccurate. In any case, I doubt #3: if it were that easy to make money by betting the heavy underdog to win, someone would have noticed by now.

P.S. This NYT article says that as of last week, the Patriots were 1:8 favorites to win the Super Bowl. That can't be right – those would be the odds of winning one game against a mediocre team, not three straight against quality opponents. The betting is at about even odds at TradeSports.

Labels: football, forecasting, gambling, NFL, pythagoras
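The arithmetic above is easy to check; here is a minimal sketch (the function name is mine) of Morey's Pythagorean projection applied to the implied score:

    def pythagorean_win_prob(points_for, points_against, exponent=2.37):
        pf = points_for ** exponent
        pa = points_against ** exponent
        return pf / (pf + pa)

    # Spread 13.5, over/under 46.5  ->  implied score Patriots 30, Giants 16.5
    print(pythagorean_win_prob(30, 16.5))   # ~0.805, the 80.5% quoted above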
{"url":"http://blog.philbirnbaum.com/2007/12/are-nfl-gambling-lines-consistent-with.html","timestamp":"2014-04-19T04:57:59Z","content_type":null,"content_length":"62483","record_id":"<urn:uuid:d3c5f90d-efcb-4d51-a5c7-01debdd1d8ff>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
The Math Forum: Philadelphia Engineering / Math Challenge Quiz Bowl

Number of team members participating: 8 students
Length of competition: 45 minutes

Materials Needed for Competition at Drexel:
1. pencils
2. paper will be provided.
3. dry erase board will be provided.
4. dry erase marker will be provided.

Materials Needed at School Prior to Competition:
1. practice problems provided by Philadelphia EMC organizers

All eight students will solve problems as part of a quiz bowl. Students will work together to answer questions and compete head to head against other teams. Teams will be seated at tables. A moderator will ask a question using a microphone. Each team will write an answer to each question on a dry erase board. Team captains will then hold up their boards for the audience and judges to see.

1. The team captain should be seated in the middle of the team.
2. Teams should write answers clearly and neatly so that judges can see answers. Teams with unclear answers will not receive points.
3. Time will begin once the moderator finishes reading the question.
4. Teams should NOT include computational or scratch work on the board; only answers should be written on the board.
5. Once a team holds up a board, the team may NOT change their answer.
6. Teams will be provided with scratch paper, two dry erase markers, a board, and an eraser to clear the board.
7. NO books, notes, calculators, or electronic devices, such as cell phones, may be used. Cell phones must be turned off.

General Guidelines for All Answers
Print answers clearly and legibly. Unclear or illegible answers will not be scored. Do not write decimal approximations to numbers such as π and √2. Leave expressions such as π and √2 in your answer. Simplify answers as much as possible. For example, 6/4 should be simplified to 3/2, and square roots of integers should not appear in the denominator. Perfect squares should be removed from radicals. Frequently, several equivalent expressions will be considered correct. For example, 3/2, 1 1/2, and 1.5 could be correct.

Common Core: Standards for Mathematical Practice
• CCSS.Math.Practice.MP1 Make sense of problems and persevere in solving them.
• CCSS.Math.Practice.MP2 Reason abstractly and quantitatively.
• CCSS.Math.Practice.MP4 Model with mathematics.
• CCSS.Math.Practice.MP5 Use appropriate tools strategically.

Round 1
Round 1 will consist of 15 questions. All teams that provide the correct answer to a question posed by a moderator will earn 5 points.
What is the largest 3-digit prime number?
How many integers between 0 and 100 are divisible by 3 or 7?
A license plate is 2 letters followed by 3 digits and none of the digits or letters repeat. How many different license plates are possible?
Jabril currently has enough money to buy 45 books. If the cost of each book was 10 cents less, Jabril could buy 5 more books. How much money does Jabril have to spend on books?
At a formal dinner, guests were seated around a circular table for six. Before the dinner, the host asked each guest to shake hands (once) with everyone at the table. How many handshakes were made at the table?
What is the greatest common factor of 42, 126, and 210?
What is the value of x when 2x + 3 = 3x – 4?
If 40% of a given number is 8, then what is 15% of the given number?
Michaela received a 10% raise each month for 3 consecutive months. What was her salary after the three raises if her starting salary was $1000 per month?

Round 2
Round 2 will consist of 5 questions that each have multiple answers.
Each team will earn 1 point for each correct answer and 1 bonus point if the team provides all of the possible answers.
Which two-digit numbers from 10 to 90 have the property that both digits are perfect squares? For example, 10 is the smallest such number and 90 is the largest.
List the whole numbers between -3 and 3.
When 2 lines are cut by a transversal, name as many as you can of the angle pairs that are formed.
Name the 6 trigonometric relationships.
List all the perfect cubes between 1 and 1000 inclusive.
List as many Pythagorean triples as you can where the values of all 3 sides are less than 50.

Round 3
Round 3 will consist of 10 questions. The first team with a correct answer will earn 3 points. The first team with the correct answer will then receive a follow-up question for 2 bonus points.
What is the value of any number raised to the power of zero?
SAMPLE FOLLOW-UP TO PROBLEM 1
What is 0!?
When two exponential expressions with a common base are multiplied together, what operation is done to the exponents to combine the expressions?
SAMPLE FOLLOW-UP TO PROBLEM 2
Simplify: 5x^7y^3 · 9x^-4y^10
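A few quick sanity checks on the sample questions above, in Python (this snippet is mine, not part of the official competition materials; note that "between 0 and 100" is read here as excluding both endpoints):

    from math import comb

    # Integers strictly between 0 and 100 divisible by 3 or 7:
    # inclusion-exclusion gives 33 + 14 - 4 = 43.
    print(sum(1 for n in range(1, 100) if n % 3 == 0 or n % 7 == 0))  # 43

    # License plates: 2 distinct letters, then 3 distinct digits.
    print(26 * 25 * 10 * 9 * 8)  # 468000

    # Handshakes among six dinner guests: choose 2 of 6.
    print(comb(6, 2))  # 15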
{"url":"http://mathforum.org/emc/quiz.html","timestamp":"2014-04-18T18:57:10Z","content_type":null,"content_length":"11758","record_id":"<urn:uuid:2e30ccfa-464c-473c-a0e3-9ae6753f6c7b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Relativity and The Stopped Clock Paradox
James S Saint

Okay, so you are saying there, that “if Einstein could have seen a clock at the stop-button, that clock would have been reading [-9.815] at Einstein’s t=0 because he would see a distance of [19.630] and was traveling at .5c.”

No, that's not what I'm saying. We are not now talking about clocks that are at rest in the station frame but clocks that are at rest in Einstein's frame which are traveling with respect to the stop-button, or more precisely, the stop-button is traveling past a whole bunch of Einstein's coordinate clocks.

How is Einstein's "coordinate clocks" any different than "what Einstein would see of the other clocks"? It seems to be the same thing to me. If there is a difference, I seriously need to know what that is.

If you want to calculate how Einstein determines what the time was on the stop-clocks when the button was pressed, you need to start with the event of Einstein being colocated with the stop-button. In the station frame this event is [34,17] and transforms to [29.445,0] in Einstein's frame. The delta between this time and the event of the button press in Einstein's frame is 29.445-6.351=23.094. Dividing this by gamma gives us 20. Now we subtract 20 from 34 and we get 14.

Do you see the significance of the 6.351? I had asked for the origin of the 6.351. Your explanation of its "significance" makes it seem like merely a number injected so as to justify the chosen conclusion. How did you arrive at it?

…and btw, my method for getting to the conundrum is much simpler. I am looking for anything that would indicate what my error might be.
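The numbers quoted in this exchange can be checked with a standard Lorentz transformation (a sketch of mine, with c = 1 and v = 0.5, so gamma = 1/sqrt(1 - 0.25) ≈ 1.1547):

    v = 0.5
    gamma = 1 / (1 - v ** 2) ** 0.5     # ~1.1547
    t, x = 34, 17                       # station-frame event [34, 17]
    t_prime = gamma * (t - v * x)       # ~29.445, as quoted
    x_prime = gamma * (x - v * t)       # 0.0, as quoted
    print(t_prime, x_prime)
    print(t_prime - 6.351)              # ~23.094
    print((t_prime - 6.351) / gamma)    # ~20, so 34 - 20 = 14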
{"url":"http://www.physicsforums.com/showpost.php?p=3783934&postcount=11","timestamp":"2014-04-19T12:35:15Z","content_type":null,"content_length":"9367","record_id":"<urn:uuid:c26205f7-c752-4d61-98d1-6a349cc7afa2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Knowledge representation and classical logic

In Proceedings of the International Conference on Knowledge Representation and Reasoning (KR), 2008. Cited by 11 (5 self).
Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models by Ferraris et al. Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models.

In: ICLP11 Workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP11), Jul 2011. Cited by 10 (1 self).
The stable model semantics treats a logic program as a mechanism for specifying its intensional predicates. In this paper we discuss a modification of that semantics in which functions, rather than predicates, are intensional. The idea of the new definition comes from nonmonotonic causal logic.

In Proceedings of International Conference on Principles of Knowledge Representation and Reasoning (KR), 2012. Cited by 4 (4 self).
"Answer Set Programming Modulo Theories (ASPMT)" is a recently proposed framework which tightly integrates answer set programming (ASP) and satisfiability modulo theories (SMT). Its mathematical foundation is the functional stable model semantics, an enhancement of the traditional stable model semantics to allow defaults involving functions as well as predicates. This talk will discuss how ASPMT can provide a way to overcome limitations of the propositional setting of ASP, how action language C+ can be reformulated in terms of ASPMT, and how it can be …

In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2005. Cited by 3 (1 self).
Lin and Zhao's theorem on loop formulas states that in the propositional case the stable model semantics of a logic program can be completely characterized by propositional loop formulas, but this result does not fully carry over to the first-order case. We investigate the precise relationship between the first-order stable model semantics and first-order loop formulas, and study conditions under which the former can be represented by the latter. In order to facilitate the comparison, we extend the definition of a first-order loop formula which was limited to a nondisjunctive program, to a disjunctive program and to an arbitrary first-order theory. Based on the studied relationship we extend the syntax of a logic program with explicit quantifiers, which allows us to do reasoning involving non-Herbrand stable models using first-order reasoners. Such programs can be viewed as a special class of first-order theories under the stable model semantics, which yields more succinct loop formulas than the general language due to their restricted syntax.

Cited by 2 (0 self).
Abstract. This paper is about the functionality of software systems used in answer set programming (ASP). ASP languages are viewed here, in the spirit of Datalog, as mechanisms for characterizing intensional (output) predicates in terms of extensional (input) predicates. Our approach to the semantics of ASP programs is based on the concept of a stable model defined in terms of a modification of parallel circumscription.

Generalized relational theories with null values in the sense of Reiter are first-order theories that provide a semantics for relational databases with incomplete information. In this paper we show that any such theory can be turned into an equivalent logic program, so that models of the theory can be generated using computational methods of answer set programming. As a step towards this goal, we develop a general method for calculating stable models under the domain closure assumption but without the unique name assumption.

Recently there has been an increasing interest in incorporating "intensional" functions in answer set programming. Intensional functions are those whose values can be described by other functions and predicates, rather than being pre-defined as in the standard answer set programming. We demonstrate that the functional stable model semantics plays an important role in the framework of "Answer Set Programming Modulo Theories (ASPMT)"—a tight integration of answer set programming and satisfiability modulo theories, under which existing integration approaches can be viewed as special cases where the role of functions is limited. We show that "tight" ASPMT programs can be translated into SMT instances, which is similar to the known relationship between ASP and SAT.

2013.
Several extensions of the stable model semantics are available to describe "intensional" functions—functions that can be described in terms of other functions and predicates by logic programs. Such functions are useful for expressing inertia and default behaviors of systems, and can be exploited for alleviating the grounding bottleneck involving functional fluents. However, the extensions were defined in different ways under different intuitions. In this paper we provide several reformulations of the extensions, and note that they are in fact closely related to each other and coincide on large syntactic classes of logic programs.
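The abstracts above all lean on the notion of a stable model. As a rough illustration (my own sketch, not taken from any of the cited papers), here is a brute-force stable-model checker for a tiny ground program, using the standard reduct construction:

    from itertools import combinations

    # A rule is (head, positive_body, negative_body).
    program = [('p', [], ['q']),   # p :- not q.
               ('q', [], ['p'])]   # q :- not p.

    atoms = {a for h, pos, neg in program for a in [h, *pos, *neg]}

    def least_model(reduct):
        # Least fixpoint of the negation-free reduct.
        model = set()
        changed = True
        while changed:
            changed = False
            for head, pos in reduct:
                if set(pos) <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def is_stable(candidate):
        # Drop rules whose negative body intersects the candidate,
        # then check that the candidate is exactly the least model.
        reduct = [(h, pos) for h, pos, neg in program
                  if not (set(neg) & candidate)]
        return least_model(reduct) == candidate

    for size in range(len(atoms) + 1):
        for subset in combinations(sorted(atoms), size):
            if is_stable(set(subset)):
                print(set(subset))   # prints {'p'} and {'q'}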
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=7756940","timestamp":"2014-04-24T06:06:34Z","content_type":null,"content_length":"34337","record_id":"<urn:uuid:bfadbb8c-8bcd-4d75-80e8-109c11e03842>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Huntington Park
Pasadena, CA 91101

Math Professor Available for Tutoring Math, SAT, ACT

I'm a former tenured Community College Professor with an M.A. degree in Mathematics from UCLA. I have also taught university-level mathematics at UCLA, the University of Maryland, and the U.S. Air Force Academy. I love working with students and have experience teaching...

Offering 9 subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/Huntington_Park_Math_tutors.aspx","timestamp":"2014-04-16T05:50:53Z","content_type":null,"content_length":"60997","record_id":"<urn:uuid:5beda4bf-bd59-4c75-94e0-d2aafc132150>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
IQ Test Labs - free online testing. A normal distribution of data means that most of the examples in a set of data are close to the "average," while relatively few examples tend to one extreme or the other. Let's say you are writing a story about nutrition. You need to look at people's typical daily calorie consumption. Like most data, the numbers for people's typical consumption probably will turn out to be normally distributed. That is, for most people, their consumption will be close to the mean, while fewer people eat a lot more or a lot less than the mean. When you think about it, that's just common sense. Not that many people are getting by on a single serving of kelp and rice. Or on eight meals of steak and milkshakes. Most people lie somewhere in If you looked at normally distributed data on a graph, it would look something like this: The x-axis (the horizontal one) is the value in question... calories consumed, dollars earned or crimes committed, for example. And the y-axis (the vertical one) is the number of data points for each value on the x-axis... in other words, the number of people who eat x calories, the number of households that earn x dollars, or the number of cities with x crimes committed. Now, not all sets of data will have graphs that look this perfect. Some will have relatively flat curves, others will be pretty steep. Sometimes the mean will lean a little bit to one side or the other. But all normally distributed data will have something like this same "bell curve" shape. The standard deviation is a statistic that tells you how tightly all the various examples are clustered around the mean in a set of data. When the examples are pretty tightly bunched together and the bell-shaped curve is steep, the standard deviation is small. When the examples are spread apart and the bell curve is relatively flat, that tells you you have a relatively large standard deviation. Computing the value of a standard deviation is complicated. But let me show you graphically what a standard deviation represents... One standard deviation away from the mean in either direction on the horizontal axis (the red area on the above graph) accounts for somewhere around 68 percent of the people in this group. Two standard deviations away from the mean (the red and green areas) account for roughly 95 percent of the people. And three standard deviations (the red, green and blue areas) account for about 99 percent of the people. If this curve were flatter and more spread out, the standard deviation would have to be larger in order to account for those 68 percent or so of the people. So that's why the standard deviation can tell you how spread out the examples in a set are from the mean. Why is this useful? Here's an example: If you are comparing test scores for different schools, the standard deviation will tell you how diverse the test scores are for each school. Let's say Springfield Elementary has a higher mean test score than Shelbyville Elementary. Your first reaction might be to say that the kids at Springfield are smarter. But a bigger standard deviation for one school tells you that there are relatively more kids at that school scoring toward one extreme or the other. By asking a few follow-up questions you might find that, say, Springfield's mean was skewed up because the school district sends all of the gifted education kids to Springfield. 
Or that Shelbyville's scores were dragged down because students who recently have been "mainstreamed" from special education classes have all been sent to Shelbyville. In this way, looking at the standard deviation can help point you in the right direction when asking why data is the way it is. The standard deviation can also help you evaluate the worth of all those so-called "studies" that seem to be released to the press every day. A large standard deviation in a study that claims to show a relationship between eating Twinkies and killing politicians, for example, might tip you off that the study's claims aren't all that trustworthy.

Here is one formula for computing the standard deviation. A warning, this is for math geeks only! Writers and others seeking only a basic understanding of stats don't need to read any further in this chapter. Remember, a decent calculator and stats program will calculate this for you...

Terms you'll need to know:
x = one value in your set of data
x̄ = the mean (average) of all values x in your set of data
n = the number of values x in your set of data

For each value x, subtract x̄ from x, then multiply that value by itself (otherwise known as determining the square of that value). Sum up all those squared values. Then multiply that sum by 1/(n-1). Finally, take the square root of the resulting value. That's the standard deviation of your set of data. In symbols:

    s = sqrt( (1/(n-1)) · Σ(x − x̄)² )
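The recipe above translates directly into a few lines of Python (the variable names here are mine, and the test scores are made up):

    import statistics

    def sample_std_dev(data):
        n = len(data)
        mean = sum(data) / n
        sum_sq = sum((x - mean) ** 2 for x in data)   # sum of squared deviations
        return (sum_sq / (n - 1)) ** 0.5              # multiply by 1/(n-1), take the square root

    scores = [82, 91, 75, 88, 94, 69]
    print(sample_std_dev(scores))
    print(statistics.stdev(scores))   # the standard library agrees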
{"url":"http://intelligencetest.com/stan-deviation.htm","timestamp":"2014-04-16T13:15:21Z","content_type":null,"content_length":"14177","record_id":"<urn:uuid:d8ab023b-6919-420e-a32f-f3737946a26a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Legal Theory Lexicon 007: The Prisoners' Dilemma

One of the most useful tools in analyzing legal rules and the policy problems to which they apply is game theory. The basic idea of game theory is simple. Many human interactions can be modeled as games. To use game theory, we build a simple model of a real world situation as a game. Thus, we might model civil litigation as a game played by plaintiffs against defendants. Or we might model the confirmation of federal judges by the Senate as a game played by Democrats and Republicans. This week's installment of the Legal Theory Lexicon discusses one important example of game theory, the prisoner's dilemma. This introduction is very basic--aimed at a first year law student with an interest in legal theory.

An Example

Ben and Alice have been arrested for robbing Fort Knox and placed in separate cells. The police make the following offer to each of them. "You may choose to confess or remain silent. If you confess and your accomplice remains silent I will drop all charges against you and use your testimony to ensure that your accomplice gets a heavy sentence. Likewise, if your accomplice confesses while you remain silent, he or she will go free while you get the heavy sentence. If you both confess I get two convictions, but I'll see to it that you both get light sentences. If you both remain silent, I'll have to settle for token sentences on firearms possession charges. If you wish to confess, you must leave a note with the jailer before my return tomorrow morning."

This is illustrated by Table One. Ben's moves are read horizontally; Alice's moves read vertically. Each numbered pair (e.g. 5, 0) represents the payoffs for the two players. Alice's payoff is the first number in the pair, and Ben's payoff is the second number. Larger numbers represent more utility (a better payoff); so 5 is best, then 3, then 1, then 0 (the worst).

Table One: Example of the Prisoner's Dilemma.

                              Ben confesses    Ben remains silent
    Alice confesses              (1, 1)             (5, 0)
    Alice remains silent         (0, 5)             (3, 3)

Suppose that you are Ben. You might reason as follows. If Alice confesses, then I have two choices. If I confess, I get a light sentence (to which we assign a numerical value of 1). If Alice confesses and I do not confess, then I get the heavy sentence and a payoff of 0. So if Alice confesses, I should confess (1 is better than 0). If Alice does not confess, I again have two choices. If I confess, then I get off completely and a payoff of 5. If I do not confess, we both get token sentences and a payoff of 3. So if Alice does not confess, I should confess (because 5 is better than 3). So, no matter what Alice does, I should confess. Alice will reason the same way, and so both Ben and Alice will confess. In other words, one move in the game (confess) dominates the other move (do not confess) for both players. But both Ben and Alice would be better off if neither confessed. That is, the dominant move (confess) will yield a lower payoff to Ben and Alice (1, 1) than would the alternative move (do not confess), which yields (3, 3). By acting rationally and confessing, both Ben and Alice are worse off than they would be if they both had acted irrationally.

The Real World

The prisoner's dilemma is not just a theoretical model. Here is an example from Judge Frank Easterbrook's opinion in United States v. Herrera, 70 F.3d 444 (7th Cir. 1995):

Cynthia LaBoy Herrera survived a nightmare. She and her husband Geraldo Herrera were arrested after a drug transaction. The couple, separated by the agents, then played and lost a game of Prisoner's Dilemma. See Page v. United States, 884 F.2d 300 (7th Cir.1989); Douglas G. Baird, Robert H. Gertner & Randal C. Picker, Game Theory and the Law 312-13 (1994).
Cynthia told agents who their suppliers were. Learning of this, Geraldo talked too. When both were out on bond, Geraldo decided that Cynthia should pay for initiating the revelations. Geraldo clobbered Cynthia on the back of her head with a hammer; while she tried to defend herself, Geraldo declared that she talked too much to the DEA. As Cynthia grappled with the hand holding the hammer, Geraldo used his free hand to punch her in the face. Geraldo got the other hand free and hit Cynthia repeatedly with the hammer; she lapsed into unconsciousness.

Communication and Bargains

How can we overcome a prisoner's dilemma? You have probably noticed that the prisoner's dilemma assumed that the two prisoners were isolated from each other. This was not an accident. If the two prisoners can communicate with each other, then they might reach an agreement. Alice might say to Ben, "I won't confess if you won't," and Ben might say, "I agree." Of course, this might not solve the prisoner's dilemma. Why not? Suppose they do agree not to confess, but each is then taken to a separate room and given a confession to sign. Ben might reason as follows, "If I keep the bargain, and Alice does not, then she will get off while I get a heavy sentence." So Ben may be tempted to defect from their agreement. And Alice may reason in exactly the same way. On the other hand, it may be that Ben and Alice have a reason to trust one another. For example, they may have had prior dealings in which each proved trustworthy to the other. Of course, trust can be established in another way. If each party can make a credible threat of retaliation against the other, then those threats may change the payoff structure in such a way as to make the cooperative strategy dominant. One situation in which the threat of retaliation is built into the model is the iterative (repeated) prisoner's dilemma.

Iterated Game

As described above, the prisoner's dilemma is a one-shot game. But in the real world, many prisoner's dilemmas involve repeated plays. You can imagine a series of moves, for example:

Round One--Alice Confesses, Ben Does Not Confess
Round Two--Alice Confesses, Ben Confesses
Round Three--Alice Does Not Confess, Ben Does Not Confess

We can imagine various strategies of play for Ben and Alice. One of the most important strategies is called tit for tat. Alice might say to herself, "If Ben confesses, then I will retaliate and confess, but if Ben does not confess, then neither will I." Add one more element to this strategy. Suppose both Ben and Alice say to themselves, on the first round of play, I will cooperate and not confess. Then we would get the following pattern:

Round One--Alice Does Not Confess, Ben Does Not Confess
Round Two--Alice Does Not Confess, Ben Does Not Confess
Round Three--Alice Does Not Confess, Ben Does Not Confess

Thus, if both Ben and Alice play tit for tat, the result might be a stable pattern of cooperation, which benefits both Ben and Alice. If you want to get a really good feel for the iterative prisoner's dilemma, go to this website, where you can actually try out various strategies.

One more twist. Suppose that this game is finite, i.e. it has a fixed number of moves, e.g. ten. How will Ben and Alice play in the "end game"? Ben might reason as follows.
If I defect and confess on the tenth move, Alice cannot retaliate on the eleventh move (because there is no eleventh round of play). And Alice might reason the same way, leading both Ben and Alice to confess in the final round of play. But now Ben might think, since it is rational for both of us to defect in the tenth round, I need to rethink my strategy in the ninth round. Since I know that Alice will confess anyway in the tenth round, I might as well confess in the ninth round. But once again, Alice might reason in exactly this same way. Before we know it, both Alice and Ben have decided to defect in the very first round.

This has been a very basic introduction to the prisoner's dilemma, but I hope that it has been sufficient to get the basic concept across. As a first year law student, you are likely to run into the prisoner's dilemma sooner or later. If you have an interest in this kind of approach to legal theory, I've provided some references to much more sophisticated accounts. Happy modeling!

(Last revised on April 7, 2013.)
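To see how tit for tat plays out concretely, here is a small Python simulation using the payoffs from Table One (this sketch is mine, not part of the original Lexicon entry; 'S' stands for remaining silent and 'C' for confessing):

    # Payoffs from Table One; the first number in each pair is the payoff
    # to the player whose move is listed first.
    PAYOFF = {('S', 'S'): (3, 3), ('S', 'C'): (0, 5),
              ('C', 'S'): (5, 0), ('C', 'C'): (1, 1)}

    def tit_for_tat(my_history, their_history):
        # Cooperate (stay silent) on the first round, then copy the
        # opponent's previous move.
        return 'S' if not their_history else their_history[-1]

    def always_confess(my_history, their_history):
        return 'C'

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)
            b = strategy_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a += pa
            score_b += pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))     # (30, 30): stable cooperation
    print(play(tit_for_tat, always_confess))  # (9, 14): one exploited round, then mutual defection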
{"url":"http://lsolum.typepad.com/legal_theory_lexicon/2003/10/legal_theory_le.html","timestamp":"2014-04-16T11:19:52Z","content_type":null,"content_length":"44171","record_id":"<urn:uuid:bb0d661f-db66-437a-a5eb-1e9dd3a7b844>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Set Types - The GNU Pascal Manual

6.2.11.6 Set Types

set_type_identifier = set of set_element_type;

set_type_identifier is a set of elements from set_element_type, which is either an ordinal type, an enumerated type or a subrange type. Set element representatives are joined together into a set by

[set_element, ..., set_element]

[] indicates the empty set, which is compatible with all set types.

Note: Borland Pascal restricts the maximal set size (i.e. the range of the set element type) to 256; GNU Pascal has no such restriction.

The number of elements a set variable is holding can be determined by applying the intrinsic set function Card (which is a GNU Pascal extension; in Extended Pascal and Borland Pascal you can use SizeOf instead, but note that it then gives the element type size in bytes) to the set.

There are four intrinsic binary set operations: the union +, the intersection *, the difference - and the symmetric difference ><. The symmetric difference is an Extended Pascal extension.

See also Card, SizeOf
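A minimal sketch of these declarations and operations, written in Pascal since that is what this section documents (the example is mine, not from the manual, and it uses the GPC-specific Card function described above):

    program SetDemo;
    type
      Digit = 0 .. 9;
      DigitSet = set of Digit;
    var
      A, B: DigitSet;
    begin
      A := [1, 2, 3];
      B := [3, 4];
      WriteLn (Card (A + B));   { union [1, 2, 3, 4], prints 4 }
      WriteLn (Card (A * B));   { intersection [3], prints 1 }
      WriteLn (Card (A - B));   { difference [1, 2], prints 2 }
      WriteLn (Card (A >< B))   { symmetric difference [1, 2, 4], prints 3 }
    end.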
{"url":"http://www.gnu-pascal.de/gpc/Set-Types.html","timestamp":"2014-04-19T01:47:54Z","content_type":null,"content_length":"3371","record_id":"<urn:uuid:2c3ac6d3-24e1-49bd-a752-ad4742c1e313>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
R. J. E. Smith Publications (32)106.59 Total impact [show abstract] [hide abstract] ABSTRACT: We present the results of a search for gravitational waves associated with 223 gamma-ray bursts (GRBs) detected by the InterPlanetary Network (IPN) in 2005-2010 during LIGO's fifth and sixth science runs and Virgo's first, second and third science runs. The IPN satellites provide accurate times of the bursts and sky localizations that vary significantly from degree scale to hundreds of square degrees. We search for both a well-modeled binary coalescence signal, the favored progenitor model for short GRBs, and for generic, unmodeled gravitational wave bursts. Both searches use the event time and sky localization to improve the gravitational-wave search sensitivity as compared to corresponding all-time, all-sky searches. We find no evidence of a gravitational-wave signal associated with any of the IPN GRBs in the sample, nor do we find evidence for a population of weak gravitational-wave signals associated with the GRBs. For all IPN-detected GRBs, for which a sufficient duration of quality gravitational-wave data is available, we place lower bounds on the distance to the source in accordance with an optimistic assumption of gravitational-wave emission energy of $10^{-2}M_{\odot}c^2$ at 150 Hz, and find a median of 13 Mpc. For the 27 short-hard GRBs we place 90% confidence exclusion distances to two source models: a binary neutron star coalescence, with a median distance of 12Mpc, or the coalescence of a neutron star and black hole, with a median distance of 22 Mpc. Finally, we combine this search with previously published results to provide a population statement for GRB searches in first-generation LIGO and Virgo gravitational-wave detectors, and a resulting examination of prospects for the advanced gravitational-wave detectors. [show abstract] [hide abstract] ABSTRACT: We report results from a search for gravitational waves produced by perturbed intermediate mass black holes (IMBH) in data collected by LIGO and Virgo between 2005 and 2010. The search was sensitive to astrophysical sources that produced damped sinusoid gravitational wave signals, also known as ringdowns, with frequency $50\le f_{0}/\mathrm{Hz} \le 2000$ and decay timescale $0.0001\lesssim \tau/\mathrm{s} \lesssim 0.1$ characteristic of those produced in mergers of IMBH pairs. No significant gravitational wave candidate was detected. We report upper limits on the astrophysical coalescence rates of IMBHs with total binary mass $50 \le M/\mathrm{M}_\odot \le 450$ and component mass ratios of either 1:1 or 4:1. For systems with total mass $100 \le M/\mathrm {M}_\odot \le 150$, we report a 90%-confidence upper limit on the rate of binary IMBH mergers with non-spinning and equal mass components of $6.9\times10^{-8}\,$Mpc$^{-3}$yr$^{-1}$. We also report a rate upper limit for ringdown waveforms from perturbed IMBHs, radiating 1% of their mass as gravitational waves in the fundamental, $\ell=m=2$, oscillation mode, that is nearly three orders of magnitude more stringent than previous results. [show abstract] [hide abstract] ABSTRACT: We present an implementation of the $\mathcal{F}$-statistic to carry out the first search in data from the Virgo laser interferometric gravitational wave detector for periodic gravitational waves from a priori unknown, isolated rotating neutron stars. 
We searched a frequency $f_0$ range from 100 Hz to 1 kHz and the frequency dependent spindown $f_1$ range from $-1.6\, (f_0/100\,{\rm Hz}) \times 10^{-9}\,$ Hz/s to zero. A large part of this frequency - spindown space was unexplored by any of the all-sky searches published so far. Our method consisted of a coherent search over two-day periods using the $\mathcal{F}$-statistic, followed by a search for coincidences among the candidates from the two-day segments. We have introduced a number of novel techniques and algorithms that allow the use of the Fast Fourier Transform (FFT) algorithm in the coherent part of the search resulting in a fifty-fold speed-up in computation of the $\mathcal{F} $-statistic with respect to the algorithm used in the other pipelines. No significant gravitational wave signal was found. The sensitivity of the search was estimated by injecting signals into the data. In the most sensitive parts of the detector band more than 90% of signals would have been detected with dimensionless gravitational-wave amplitude greater than $5 \times 10^{-24}$. [show abstract] [hide abstract] ABSTRACT: During the LIGO and Virgo joint science runs in 2009-2010, gravitational wave (GW) data from three interferometer detectors were analyzed within minutes to select GW candidate events and infer their apparent sky positions. Target coordinates were transmitted to several telescopes for follow-up observations aimed at the detection of an associated optical transient. Images were obtained for eight such GW candidates. We present the methods used to analyze the image data as well as the transient search results. No optical transient was identified with a convincing association with any of these candidates, and none of the GW triggers showed strong evidence for being astrophysical in nature. We compare the sensitivities of these observations to several model light curves from possible sources of interest, and discuss prospects for future joint GW-optical observations of this type. The Astrophysical Journal Supplement Series 02/2014; 211(1):25. · 16.24 Impact Factor [show abstract] [hide abstract] ABSTRACT: The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave astrophysics communities. The purpose of NINJA is to study the ability to detect gravitational waves emitted from merging binary black holes and recover their parameters with next-generation gravitational-wave observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete binary black hole hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a "blind injection challenge" similar to that conducted in recent LIGO and Virgo science runs, we added 7 hybrid waveforms to two months of data recolored to predictions of Advanced LIGO and Advanced Virgo sensitivity curves during their first observing runs. The resulting data was analyzed by gravitational-wave detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. 
We also perform a large-scale monte-carlo study to assess the ability to recover each of the 60 hybrid waveforms with early Advanced LIGO and Advanced Virgo sensitivity curves. Our results predict that early Advanced LIGO and Advanced Virgo will have a volume-weighted average sensitive distance of 300Mpc (1Gpc) for $10M_{\odot}+10M_{\odot}$ ($50M_{\odot}+50M_{\odot}$) binary black hole coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. [Abstract abridged for ArXiv, full version in PDF] [show abstract] [hide abstract] ABSTRACT: Cosmic string cusps produce powerful bursts of gravitational waves (GWs). These bursts provide the most promising observational signature of cosmic strings. In this letter we report stringent limits on cosmic string models obtained from the analysis of 625 days of observation with the LIGO and Virgo GW detectors. A significant fraction of the cosmic string parameter space is ruled out. This result complements and improves existing limits from searches for a stochastic background of GWs using cosmic microwave background and pulsar timing data. In particular, if the size of loops is given by gravitational back-reaction, we place upper limits on the string tension $G\mu$ below $10^{-8}$ in some regions of the cosmic string parameter space. Physical Review Letters 10/2013; · 7.94 Impact Factor [show abstract] [hide abstract] ABSTRACT: Long gamma-ray bursts (GRBs) have been linked to extreme core-collapse supernovae from massive stars. Gravitational waves (GW) offer a probe of the physics behind long GRBs. We investigate models of long-lived (~10-1000s) GW emission associated with the accretion disk of a collapsed star or with its protoneutron star remnant. Using data from LIGO's fifth science run, and GRB triggers from the swift experiment, we perform a search for unmodeled long-lived GW transients. Finding no evidence of GW emission, we place 90% confidence level upper limits on the GW fluence at Earth from long GRBs for three waveforms inspired by a model of GWs from accretion disk instabilities. These limits range from F<3.5 ergs cm^-2 to $F<1200 ergs cm^-2, depending on the GRB and on the model, allowing us to probe optimistic scenarios of GW production out to distances as far as ~33 Mpc. Advanced detectors are expected to achieve strain sensitivities 10x better than initial LIGO, potentially allowing us to probe the engines of the nearest long GRBs. Physical Review D 09/2013; 88:122004. · 4.69 Impact Factor [show abstract] [hide abstract] ABSTRACT: We present the results of a directed search for continuous gravitational waves from unknown, isolated neutron stars in the Galactic Center region, performed on two years of data from LIGO's fifth science run from two LIGO detectors. The search uses a semi-coherent approach, analyzing coherently 630 segments, each spanning 11.5 hours, and then incoherently combining the results of the single segments. It covers gravitational wave frequencies in a range from 78 to 496 Hz and a frequency-dependent range of first order spindown values down to -7.86 x 10^-8 Hz/s at the highest frequency. No gravitational waves were detected. We place 90% confidence upper limits on the gravitational wave amplitude of sources at the Galactic Center. 
Placing 90% confidence upper limits on the gravitational wave amplitude of sources at the Galactic Center, we reach ~3.35x10^-25 for frequencies near 150 Hz. These upper limits are the most constraining to date for a large-parameter-space search for continuous gravitational wave signals. Physical Review D 09/2013; · 4.69 Impact Factor [show abstract] [hide abstract] ABSTRACT: We present the results of searches for gravitational waves from a large selection of pulsars using data from the most recent science runs (S6, VSR2 and VSR4) of the initial generation of interferometric gravitational wave detectors LIGO (Laser Interferometric Gravitational-wave Observatory) and Virgo. We do not see evidence for gravitational wave emission from any of the targeted sources but produce upper limits on the emission amplitude. We highlight the results from seven young pulsars with large spin-down luminosities. We reach within a factor of five of the canonical spin-down limit for all seven of these, whilst for the Crab and Vela pulsars we further surpass their spin-down limits. We present new or updated limits for 172 other pulsars (including both young and millisecond pulsars). Now that the detectors are undergoing major upgrades, and, for completeness, we bring together all of the most up-to-date results from all pulsars searched for during the operations of the first-generation LIGO, Virgo and GEO600 detectors. This gives a total of 195 pulsars including the most recent results described in this paper. The Astrophysical Journal 09/2013; 785(2):18. · 6.73 Impact Factor [show abstract] [hide abstract] ABSTRACT: Nearly a century after Einstein first predicted the existence of gravitational waves, a global network of Earth-based gravitational wave observatories1, 2, 3, 4 is seeking to directly detect this faint radiation using precision laser interferometry. Photon shot noise, due to the quantum nature of light, imposes a fundamental limit on the attometre-level sensitivity of the kilometre-scale Michelson interferometers deployed for this task. Here, we inject squeezed states to improve the performance of one of the detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) beyond the quantum noise limit, most notably in the frequency region down to 150 Hz, critically important for several astrophysical sources, with no deterioration of performance observed at any frequency. With the injection of squeezed states, this LIGO detector demonstrated the best broadband sensitivity to gravitational waves ever achieved, with important implications for observing the gravitational-wave Universe with unprecedented sensitivity. Nature Photonics 07/2013; 7:613. · 27.25 Impact Factor [show abstract] [hide abstract] ABSTRACT: Compact binary systems with neutron stars or black holes are one of the most promising sources for ground-based gravitational wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any detection candidate events. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and in the calibration of the detectors. 
Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a "blind injection" where the signal was not initially revealed to the collaboration. We exemplify the ability to extract information about the source physics on signals that cover the neutron star and black hole parameter space over the individual mass range 1 Msun - 25 Msun and the full range of spin parameters. The cases reported in this study provide a snap-shot of the status of parameter estimation in preparation for the operation of advanced detectors. [show abstract] [hide abstract] ABSTRACT: We present a possible observing scenario for the Advanced LIGO and Advanced Virgo gravitational wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We determine the expected sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. For concreteness, we focus primarily on gravitational-wave signals from the inspiral of binary neutron star (BNS) systems, as the source considered likely to be the most common for detection and also promising for multimessenger astronomy. We find that confident detections will likely require at least 2 detectors operating with BNS sensitive ranges of at least 100 Mpc, while ranges approaching 200 Mpc should give at least ~1 BNS detection per year even under pessimistic predictions of signal rates. The ability to localize the source of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and can be as large as thousands of square degrees with only 2 sensitive detectors operating. Determining the sky position of a significant fraction of detected signals to areas of 5 sq deg to 20 sq deg will require at least 3 detectors of sensitivity within a factor of ~2 of each other and with a broad frequency bandwidth. Should one of the LIGO detectors be relocated in India as expected, many gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone. [show abstract] [hide abstract] ABSTRACT: The coalescence of a stellar-mass compact object into an intermediate-mass black hole (intermediate mass-ratio coalescence; IMRAC) is an important astrophysical source for ground-based gravitational-wave interferometers in the so-called advanced configuration. However, the ability to carry out effective matched-filter based searches for these systems is limited by the lack of reliable waveforms. Here we consider binaries in which the intermediate-mass black hole has mass in the range 24 - 200 solar masses with a stellar-mass companion having masses in the range 1.4 - 18.5 solar masses. In addition, we constrain the mass ratios, q, of the binaries to be in the range 1/140 < q < 1/10 and we restrict our study to the case of circular binaries with non-spinning components. We investigate the relative contribution to the signal-to-noise ratio (SNR) of the three different phases of the coalescence: inspiral, merger and ringdown. We show that merger and ringdown contribute to a substantial fraction of the total SNR over a large portion of the mass parameter space, although in a limited portion the SNR is dominated by the inspiral phase. 
We further identify three regions in the IMRAC mass-space in which: (i) inspiral-only searches could be performed with losses in detection rates L in the range 10% < L < 27%, (ii) searches based on inspiral-only templates lead to a loss in detection rates in the range 27% < L < 50%$, and (iii) templates that include merger and ringdown are essential to prevent losses in detection rates greater than 50%. We investigate the effectiveness with which the inspiral-only portion of the IMRAC waveform space is covered by comparing several existing waveform families in this regime. Our results reinforce the importance of extensive numerical relativity simulations of IMRACs and the need for further studies of suitable approximation schemes in this mass range. Physical review D: Particles and fields 02/2013; 88(4). [show abstract] [hide abstract] ABSTRACT: We present the first multi-wavelength follow-up observations of two candidate gravitational-wave (GW) transient events recorded by LIGO and Virgo in their 2009-2010 science run. The events were selected with low latency by the network of GW detectors (within less than 10 minutes) and their candidate sky locations were observed by the Swift observatory (within 12 hr). Image transient detection was used to analyze the collected electromagnetic data, which were found to be consistent with background. Off-line analysis of the GW data alone has also established that the selected GW events show no evidence of an astrophysical origin; one of them is consistent with background and the other one was a test, part of a 'blind injection challenge'. With this work we demonstrate the feasibility of rapid follow-ups of GW transients and establish the sensitivity improvement joint electromagnetic and GW observations could bring. This is a first step toward an electromagnetic follow-up program in the regime of routine detections with the advanced GW instruments expected within this decade. In that regime, multi-wavelength observations will play a significant role in completing the astrophysical identification of GW sources. We present the methods and results from this first combined analysis and discuss its implications in terms of sensitivity for the present and future instruments. The Astrophysical Journal Supplement Series 12/2012; 203(2). · 16.24 Impact Factor [show abstract] [hide abstract] ABSTRACT: We present the results of a search for gravitational waves associated with 154 gamma-ray bursts (GRBs) that were detected by satellite-based gamma-ray experiments in 2009-2010, during the sixth LIGO science run and the second and third Virgo science runs. We perform two distinct searches: a modeled search for coalescences of either two neutron stars or a neutron star and black hole, and a search for generic, unmodeled gravitational-wave bursts. We find no evidence for gravitational-wave counterparts, either with any individual GRB in this sample or with the population as a whole. For all GRBs we place lower bounds on the distance to the progenitor, under the optimistic assumption of a gravitational-wave emission energy of 10{sup -2} M {sub Sun} c {sup 2} at 150 Hz, with a median limit of 17 Mpc. For short-hard GRBs we place exclusion distances on binary neutron star and neutron-star-black-hole progenitors, using astrophysically motivated priors on the source parameters, with median values of 16 Mpc and 28 Mpc, respectively. 
These distance limits, while significantly larger than for a search that is not aided by GRB satellite observations, are not large enough to expect a coincidence with a GRB. However, projecting these exclusions to the sensitivities of Advanced LIGO and Virgo, which should begin operation in 2015, we find that the detection of gravitational waves associated with GRBs will become quite possible. The Astrophysical Journal 11/2012; 760(1). · 6.73 Impact Factor [show abstract] [hide abstract] ABSTRACT: Accurate parameter estimation of gravitational waves from coalescing compact binary sources is a key requirement for gravitational-wave astronomy. Evaluating the posterior probability density function of the binary's parameters (component masses, sky location, distance, etc.) requires computing millions of waveforms. The computational expense of parameter estimation is dominated by waveform generation and scales linearly with the waveform computational cost. Previous work showed that gravitational waveforms from non-spinning compact binary sources are amenable to a truncated singular value decomposition, which allows them to be reconstructed via interpolation at fixed computational cost. However, the accuracy requirement for parameter estimation is typically higher than for searches, so it is crucial to ascertain that interpolation does not lead to significant errors. Here we provide a proof of principle to show that interpolated waveforms can be used to recover posterior probability density functions with negligible loss in accuracy with respect to non-interpolated waveforms. This technique has the potential to significantly increase the efficiency of parameter estimation. Physical review D: Particles and fields 11/2012; 87(12). [show abstract] [hide abstract] ABSTRACT: We report a search for gravitational waves from the inspiral, merger and ringdown of binary black holes (BBH) with total mass between 25 and 100 solar masses, in data taken at the LIGO and Virgo observatories between July 7, 2009 and October 20, 2010. The maximum sensitive distance of the detectors over this period for a (20,20) Msun coalescence was 300 Mpc. No gravitational wave signals were found. We thus report upper limits on the astrophysical coalescence rates of BBH as a function of the component masses for non-spinning components, and also evaluate the dependence of the search sensitivity on component spins aligned with the orbital angular momentum. We find an upper limit at 90% confidence on the coalescence rate of BBH with non-spinning components of mass between 19 and 28 Msun of 3.3 \times 10^-7 mergers /Mpc^3 /yr. [show abstract] [hide abstract] ABSTRACT: This paper presents results of an all-sky searches for periodic gravitational waves in the frequency range [50, 1190] Hz and with frequency derivative ranges of [-2 x 10^-9, 1.1 x 10^ -10] Hz/s for the fifth LIGO science run (S5). The novelty of the search lies in the use of a non-coherent technique based on the Hough-transform to combine the information from coherent searches on timescales of about one day. Because these searches are very computationally intensive, they have been deployed on the Einstein@Home distributed computing project infrastructure. The search presented here is about a factor 3 more sensitive than the previous Einstein@Home search in early S5 LIGO data. The post-processing has left us with eight surviving candidates. We show that deeper follow-up studies rule each of them out. 
Hence, since no statistically significant gravitational wave signals have been detected, we report upper limits on the intrinsic gravitational wave amplitude h0. For example, in the 0.5 Hz-wide band at 152.5 Hz, we can exclude the presence of signals with h0 greater than 7.6 × 10^-25 with a 90% confidence level.

ABSTRACT: Pulsar Timing Arrays are a prime tool to study unexplored astrophysical regimes with gravitational waves. Here we show that the detection of gravitational radiation from individually resolvable super-massive black hole binary systems can yield direct information about the masses and spins of the black holes, provided that the gravitational-wave induced timing fluctuations both at the pulsar and at the Earth are detected. This in turn provides a map of the non-linear dynamics of the gravitational field and a new avenue to tackle open problems in astrophysics connected to the formation and evolution of super-massive black holes. We discuss the potential, the challenges and the limitations of these observations. Physical Review Letters 07/2012; 109(8). Impact Factor: 7.94.

ABSTRACT: We present the results of the first search for gravitational wave bursts associated with high energy neutrinos. Together, these messengers could reveal new, hidden sources that are not observed by conventional photon astronomy, particularly at high energy. Our search uses neutrinos detected by the underwater neutrino telescope ANTARES in its 5-line configuration during the period January - September 2007, which coincided with the fifth and first science runs of LIGO and Virgo, respectively. The LIGO-Virgo data were analysed for candidate gravitational-wave signals coincident in time and direction with the neutrino events. No significant coincident events were observed. We place limits on the density of joint high energy neutrino - gravitational wave emission events in the local universe, and compare them with densities of merger and core-collapse events.

Affiliation (2011–2013): University of Birmingham, School of Physics and Astronomy, Birmingham, England, United Kingdom
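The truncated singular value decomposition mentioned in the parameter-estimation abstract above can be illustrated with a short, self-contained sketch. To be clear, this is a generic demonstration of the reconstruct-from-a-truncated-basis idea, not the authors' LIGO/Virgo pipeline; the toy "waveforms" (damped chirps) and the mass-like parameter grid below are invented purely for illustration.

```python
import numpy as np

# Toy "template bank": damped chirps whose sweep rate varies with a
# mass-like parameter m. Purely illustrative stand-ins for real waveforms.
t = np.linspace(0.0, 1.0, 512)
params = np.linspace(1.0, 2.0, 40)
bank = np.array([np.exp(-3.0 * t) * np.sin(2 * np.pi * (20.0 * m) * t * t)
                 for m in params])

# Truncated SVD: keep only the first k basis vectors.
U, s, Vt = np.linalg.svd(bank, full_matrices=False)
k = 10
coeffs = U[:, :k] * s[:k]   # projection coefficients, one row per template

# Reconstruct an "in-between" waveform by linearly interpolating the
# coefficients of the two neighboring templates, then expanding in the basis.
m_new = 1.512
i = np.searchsorted(params, m_new) - 1
w = (m_new - params[i]) / (params[i + 1] - params[i])
c_new = (1 - w) * coeffs[i] + w * coeffs[i + 1]
h_interp = c_new @ Vt[:k]

# Compare against the directly generated waveform at the same parameter.
h_true = np.exp(-3.0 * t) * np.sin(2 * np.pi * (20.0 * m_new) * t * t)
err = np.linalg.norm(h_interp - h_true) / np.linalg.norm(h_true)
print(f"relative reconstruction error: {err:.3e}")
```

The key point, matching the abstract, is that once the basis Vt is fixed, generating a new waveform costs only one coefficient interpolation and one small matrix product, independent of how expensive the original waveform model is.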
{"url":"http://www.researchgate.net/researcher/59355791_R_J_E_Smith","timestamp":"2014-04-19T15:37:43Z","content_type":null,"content_length":"450035","record_id":"<urn:uuid:df20f9d6-360e-465f-8f1e-f8a1dd8d0da1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Crystal Fantry—Wolfram|Alpha Blog

Blog posts from this author:

Are you looking for a great way to spend your summer? We are happy to announce the Mathematica Summer Camp 2012! Held at Curry College in Milton, Massachusetts, students will have the opportunity to learn Mathematica's language, apply their skills in other disciplines, and program their very own Wolfram Demonstrations! Students will also work individually and in groups to hone their Mathematica skills. This unique, two-week overnight camp is designed for students entering their junior and senior years in high school. We look forward to seeing all the most talented high school students at camp this year! More »

Do you need to work with numbers that are of the magnitude of thousands, millions, or even billions? How about the thousandths, millionths, or billionths? Scientists and engineers need to work with really large and really small numbers every day. Now Wolfram|Alpha can help put all of those large and small numbers into scientific notation. For example, the Earth's mass is about 5973600000000000000000000 kg, but it is nicely represented in scientific notation as 5.9736×10^24 kg.

The real line runs from negative to positive infinity and consists of rational and irrational numbers. It generally appears horizontally, and every point corresponds to a real number. Also known as a number line in school, the real line is said to be one of the most useful ways to understand basic mathematics. Wolfram|Alpha can now aid you in learning the difference between x<-5 and x>5, or Abs[x]. Wolfram|Alpha now graphs inequalities and points on the real line. This new feature in Wolfram|Alpha allows you to plot a single inequality or a list of multiple inequalities. Let's start off simply and try "number line x<100". You can easily see that this is the set of all real numbers from negative infinity to, but not including, 100. What if you need to plot a more difficult inequality, like "number line 3x<7x^2+2"? This plot will show that the solutions to this inequality are all real numbers between negative and positive infinity. More »

A new school year is here, and many students are diving into new levels of math. Fortunately, this year, you have Wolfram|Alpha to help you work through math problems and understand new concepts. Wolfram|Alpha contains information from the most basic math problems to advanced and even research-level mathematics. If you are not yet aware of Wolfram|Alpha's math capabilities, you are about to have a "wow" moment. For the Wolfram|Alpha veterans, we have added many math features since the end of the last school year. In this post, we're highlighting some existing Wolfram|Alpha math essentials, such as adding fractions, solving equations, and statistics, and examples from new topic areas like cusps and corners, stationary points, asymptotes, and geometry. You can access the computational power of Wolfram|Alpha through the free website, via Wolfram|Alpha Widgets, or with the Wolfram|Alpha App for iPhone, iPod touch, and the iPad! Even better, the Wolfram|Alpha Apps for iPhone and iPod touch and for the iPad are now on sale in the App Store for $0.99 through September 12. If you need to brush up on adding fractions, solving equations, or finding a derivative, Wolfram|Alpha is the place to go. Wolfram|Alpha not only has the ability to find the solutions to these math problems, but also to show one way of reaching the solution with the "Show Steps" button. Check out the post "Step-by-Step Math" for more on this feature.
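As a quick check on that last answer, the claim that 3x < 7x^2 + 2 holds for every real x follows from a standard discriminant argument, sketched here for readers who want the reasoning behind the plot. Rearranging gives

$7x^2 - 3x + 2 > 0,$

and the discriminant of $7x^2 - 3x + 2$ is $(-3)^2 - 4 \cdot 7 \cdot 2 = 9 - 56 = -47 < 0$, so the quadratic has no real roots. Since the leading coefficient $7$ is positive, the parabola lies entirely above the $x$-axis, and the inequality is satisfied for all real $x$, exactly as the number-line plot shows.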
You can find this widget, and many others, in the Wolfram|Alpha Widget Gallery. Customize or build your own to help you work through common math problems. Then add these widgets to your website or blog, and share them with friends on Facebook and other social networks. Of course, Wolfram|Alpha also covers statistics and probability. For example, Wolfram|Alpha can compute coin-tossing probabilities such as "probability of 21 coin tosses", and provides information on the normal distribution. More »

Steven Strogatz, a professor of applied mathematics at Cornell University, is currently blogging for The New York Times about issues "from the basics of math to the baffling". It's been a fascinating series, starting with preschool math and progressing through subtraction, division, complex numbers, and more. As Wolfram|Alpha is such a powerful tool for working with mathematical concepts, we thought it'd be fun to show how to use it to explore some of the topics in Strogatz's blog. First up is Strogatz's post on "Finding Your Roots". For a brief introduction to Wolfram|Alpha's ability to find roots, try "root of 4x+2". Here we found the one and only root of 4x+2, but what if there is more than one root? Not a problem for Wolfram|Alpha—try "4x^2 + 3x - 4". More »

Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.

Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, the weight of the national debt in pennies…

Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!

Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?

Search a large database of reactions and classes of chemical reactions – such as combustion or oxidation – and see how to balance chemical reactions step by step.
{"url":"http://blog.wolframalpha.com/author/crystal-fantry/page/2/","timestamp":"2014-04-18T23:29:43Z","content_type":null,"content_length":"44176","record_id":"<urn:uuid:21bcd59d-b322-4ddc-9d26-149dfbdfc250>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate Heat Loss Through Windows

A spectacular view may cost more than you think. Between 12 and 30 percent of your yearly heating bill goes to make up for heat lost through windows, estimates the University of Wisconsin Cooperative Extension. Knowing the annual cost of the heat lost through the glass of each window can help you take steps to reduce heating bills. A savvy homeowner can calculate this loss and, coupled with proper window treatments, make a substantial change in the cost of heating the home. Proper window treatments can reduce a heating bill by up to 25 percent, the U.S. Department of Energy notes.

1. Measure and record the width and length of the window in inches. Calculate the square footage of the window by multiplying the width by the length and dividing by 144.

2. Retrieve the cost per unit of heating product from your heating bill. For example, electricity cost is per kilowatt hour (kWh), oil is per gallon, and natural gas is per 100 cubic feet. Record this figure.

3. Determine the heating degree days (DD) for your area, summed over the year. Each day contributes the number of degrees that its average temperature falls below 65 degrees Fahrenheit; one day of 20-degree weather, for example, provides 45 DD. The provider of your heating fuel, a weather office or an airport can typically provide this figure.

4. Calculate the cost of heat lost per square foot of the window by multiplying your fuel cost per unit by the number of degree days. Multiply the result by 38.82 for electricity, 1.57 for oil or 2.03 for natural gas, and divide the result by 10,000. The end figure is the annual cost of heat lost per square foot of a double-paned window. For example: a cost of electricity of 98 cents per kWh, multiplied by 5,000 DD, equals 4,900; multiplied by 38.82, equals 190,218; divided by 10,000, equals $19.02 per square foot of window annually. For triple-paned windows, multiply the result by 0.65. For a single-glass window, multiply the result by 2.27.

Things You Will Need
• Tape measure
• Heating bill
• Calculator

Tip
• Wood and coal are excluded from these calculations because the heat production of these fuels is not consistent.
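The arithmetic above is easy to script. Below is a minimal sketch that reproduces the article's electricity example; the fuel multipliers (38.82, 1.57, 2.03) and the pane factors (2.27, 1, 0.65) are taken directly from the steps above, while the function names and interface are my own invention.

```python
def window_area_sqft(width_in, length_in):
    """Window area in square feet from inch measurements (step 1)."""
    return width_in * length_in / 144

def heat_loss_cost_per_sqft(fuel_cost, degree_days, fuel="electricity", panes=2):
    """Annual cost of heat lost per square foot of window (steps 2-4).

    fuel_cost   -- price per unit: kWh (electricity), gallon (oil),
                   or 100 cubic feet (natural gas)
    degree_days -- annual heating degree days for your area
    panes       -- 1, 2, or 3 panes of glass
    """
    multiplier = {"electricity": 38.82, "oil": 1.57, "gas": 2.03}[fuel]
    pane_factor = {1: 2.27, 2: 1.0, 3: 0.65}[panes]
    return fuel_cost * degree_days * multiplier / 10000 * pane_factor

# The article's worked example: $0.98/kWh electricity and 5,000 degree days
# give about $19.02 per square foot of double-paned window per year.
cost = heat_loss_cost_per_sqft(0.98, 5000)
print(f"${cost:.2f} per square foot per year")

# Annual loss for a hypothetical 36" x 48" double-paned window:
print(f"${cost * window_area_sqft(36, 48):.2f} per year for that window")
```

Multiplying the per-square-foot figure by each window's area, as in the last line, gives the whole-house picture window by window.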
{"url":"http://homeguides.sfgate.com/calculate-heat-loss-through-windows-26110.html","timestamp":"2014-04-17T13:28:46Z","content_type":null,"content_length":"32068","record_id":"<urn:uuid:a52c6952-85c2-4bbb-a901-49c5cd56f79d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
We all played tag when we were kids. What most of us don't realize is that this simple chase game is in fact an application of pursuit theory, and that the same principles at work in games like tag, dodgeball, and hide-and-seek also appear in military strategy, high-seas chases by the Coast Guard, and even romantic pursuits. In Chases and Escapes, Paul Nahin gives us the first complete history of this fascinating area of mathematics, from its classical analytical beginnings to the present day. Drawing on game theory, geometry, linear algebra, target-tracking algorithms, and much more, Nahin also offers an array of challenging puzzles with their historical background and broader applications. Chases and Escapes includes solutions to all problems and provides computer programs that readers can use for their own cutting-edge analysis. Now with a gripping new preface on how the Enola Gay escaped the shock wave from the atomic bomb dropped on Hiroshima, this book will appeal to anyone interested in the mathematics that underlies pursuit and evasion.

Paul J. Nahin is the best-selling author of many popular math books, including Mrs. Perkins's Electric Quilt, Digital Dice, Dr. Euler's Fabulous Formula, When Least Is Best, and An Imaginary Tale (all Princeton). He is professor emeritus of electrical engineering at the University of New Hampshire.

"In the 18th century, mathematicians began to tease apart how best to track down and intercept prey, inspired by pirate ships bearing down on merchant vessels. The mathematics is by no means trivial, and quickly becomes fiendish if the merchant ship takes evasive action. This is just one of the colorful problems in Paul Nahin's fascinating history of the mathematics of pursuit, in which he guides us masterfully through the maths itself--think lions and Christians, submarines and torpedoes, and the curvaceous flight of fighter aircraft."--New Scientist

"This is a highly readable book that offers several colorful applications of calculus and differential equations, and good examples of non-trivial integrals for calculus students. It would be a good source of examples for the classroom or a starting point for an independent project."--Bill Satzer, MAA Review

"This book contains a well-written, well-organized collection of solutions to twenty-one challenging calculus and differential equation problems that concern pursuit and evasion, as well as the historical background of each problem type."--Mathematics Teacher

"I am sure that this book will appeal to everyone who is interested in mathematics and game theory. Excellent work."--Prabhat Kumar Mahanti, Zentralblatt Math

"Chases and Escapes is a wonderful collection of interesting and classic pursuit and evasion problems. . . . If you are interested in dogs chasing ducks, pirates chasing merchants, and submarines hiding, then this book is for you."--Mathematics Teacher

"Nahin provides beautiful applications of calculus, differential equations, and game theory. If you are pursuing an enjoyable collection of mathematical problems and the stories behind them, then your search ends here."--Arthur Benjamin, Harvey Mudd College

Series: Princeton Puzzlers
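The pure-pursuit setup at the heart of the book's subject, a pursuer that always steers straight at a moving target, is simple to simulate numerically. The sketch below is a generic discrete-time illustration of that classic problem, not code from the book; the speeds, starting positions, and capture threshold are arbitrary choices for the demo.

```python
import numpy as np

dt = 0.01
pursuer = np.array([0.0, 0.0])     # pursuer starts at the origin
v_p, v_t = 1.5, 1.0                # pursuer is faster, so capture is assured

for step in range(10_000):
    # Target runs along the horizontal line y = 1 at constant speed.
    target = np.array([v_t * step * dt, 1.0])
    to_target = target - pursuer
    dist = np.linalg.norm(to_target)
    if dist < 1e-2:
        print(f"caught at t = {step * dt:.2f} near {np.round(target, 3)}")
        break
    # Pure pursuit: always step directly toward the target's current position.
    pursuer = pursuer + v_p * dt * to_target / dist
else:
    print("no capture within the simulated window")
```

Tracing the pursuer's positions over time yields the classic curved "pursuit curve"; making the target swerve, as in the New Scientist review's evasive merchant ship, is a one-line change to the target's trajectory and quickly shows why the general problem gets fiendish.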
{"url":"http://press.princeton.edu/titles/9700.html","timestamp":"2014-04-18T03:01:14Z","content_type":null,"content_length":"20099","record_id":"<urn:uuid:c03a50c6-cdf0-4c7d-afae-11b32e3a49fe>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00296-ip-10-147-4-33.ec2.internal.warc.gz"}