Thermistor signal conditioning: Dos and Don'ts, Tips and Tricks

Posted by Jason Sachs on Jun 15 2011 under Tutorials | Circuit Design | Thermistor

In an earlier blog entry, I mentioned this circuit for thermistor signal conditioning:

Thermistor signal conditioning is worth a little more explanation: it's something that's often done poorly, even though it's among the easiest applications for signal conditioning. The basic premise here is that there are two resistors in a voltage divider: Rth is the thermistor, and Rref is a reference resistor. Here Rref is either R3 alone, or R3 || R4, depending on the gain. This is only one possible circuit. There are many others, but please use the following guidelines. Just to be clear, we're going to talk about using an NTC (negative-temperature-coefficient) thermistor in an embedded system with an analog-to-digital converter (ADC) and a processor.

• Don't linearize your analog circuitry

Linearization is the use of additional circuitry (usually 1-3 additional fixed resistors placed in series or parallel with the thermistor or reference resistor) to produce a voltage that is a more linear function of temperature. There are a number of application notes on linearization from manufacturers like Maxim, Microchip, and EPCOS. In purely analog circuits, linearization is necessary. For example, thermocouple circuits often require cold-junction compensation to correct for errors caused by the changing temperature of the "cold junction", which is where the thermocouple wires are attached to the signal conditioning circuitry. This often uses a thermistor, and the nonlinear response of the thermistor needs to be conditioned into a linear adjustment to the thermocouple amplifier. In an embedded system using ADCs and a processor, however, linearization is both unnecessary and wasteful. Linearization comes at a cost.
The sensitivity of the voltage output is reduced, and there are more components involved, which means more opportunities for component tolerance to contribute to temperature error. There is absolutely no reason why, in such an embedded system, the processor should not do the linearization in software. There are some very simple and fast ways to handle the nonlinear conversion of ADC counts to temperature, which we'll discuss later.

• Don't use excess signal conditioning

One system I worked on was designed by a contractor, and had a linearization circuit which reduced the sensitivity of the thermistor output by about a factor of 10, followed by an amplifier circuit that amplified the thermistor output by a factor of 10 relative to a reference voltage. I looked at the schematic and shook my head. Please realize that each stage of signal conditioning introduces the chance for errors. In analog circuitry, component tolerances and noise sensitivity cause these errors: resistors and capacitors have value tolerances; op-amps have specs like offset voltage and current; ADCs have gain and offset errors, and integral and differential nonlinearity (INL and DNL). These all add up. My rule of thumb is that unless you keep things very simple, use good components, and design carefully, it is difficult to keep net voltage errors below 1% of the ADC fullscale -- excluding the sensor itself, and this goes for any sensor, not just thermistors. If errors concern you, then you really need to anticipate what kind of errors to expect in your system, and find ways to deal with them. For a basic resistor divider, you can use 0.1% resistors: these have really come down in price over the last 10 years -- a single 10K 0.1% 0603 resistor can be purchased from Digikey for $0.25, with prices of about $0.10 in 1K quantities. As far as op-amps go, so many manufacturers make CMOS op-amps with picoamp input currents that it's easy to forget about input current errors.
Typical offset voltages are now in the 2-5mV range. For a buffering application in a 3.3V system this is about 0.1-0.2% of fullscale; applications with gains higher than 1 are worse, and you may have to use more costly precision op-amps. (In digital signal processing, there are no component tolerances or noise, but in each stage of computation there is an opportunity to introduce errors when doing multiplication or division. PCs use double-precision math (64 bits or more); embedded systems often limit this precision to 32-bit or 16-bit fixed point math. The errors are very small and predictable compared to the errors of analog signal conditioning, but they still exist.) In any case, keep it simple and you'll save yourself trouble.

• Do understand your requirements

This is probably the most important and understated part of the design process, for any circuit, not just thermistors. The three most important requirements for thermistor temperature sensing are:

• temperature sensing range
• temperature sensing resolution
• temperature sensing accuracy

Let's look at the "Z" curve for Quality Thermistors, in a voltage divider. Shown above are the output voltages in a voltage divider (as a fraction of the total voltage across the voltage divider) where the reference resistor R[ref] is either 0.2R[25], 1.0R[25], or 5.0R[25] (corresponding to 2K, 10K, and 50K for R[25]=10K). These curves are very easy to calculate from the manufacturer's thermistor curves: the voltage divider ratio α = R[th]/(R[ref] + R[th]) = 1 - R[ref]/(R[ref]+R[th]). You'll notice that for each reference resistor, there's a range of about 60-80 degrees C over which α drops from 0.9 to 0.1. Outside those ranges, the sensed voltage changes very little with temperature. So if your range of interest is, for example, 25 C - 100 C, you might want to use R[ref] of 0.2R[25] and live with the low resolution in sensed temperature above and below that range.
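These divider-ratio curves are easy to reproduce in software. The sketch below uses the common Beta-model approximation of an NTC's resistance-temperature curve (the B value of 3950 is an assumed, typical figure, not the actual Quality Thermistors data), and reports the temperature span over which α stays between 0.1 and 0.9 for each choice of R[ref]:

```python
import math

R25 = 10e3     # nominal resistance at 25 C
BETA = 3950.0  # assumed Beta value; use the manufacturer's data in practice

def thermistor_r(t_c):
    """NTC resistance via the Beta model (an approximation)."""
    t_k = t_c + 273.15
    return R25 * math.exp(BETA * (1.0 / t_k - 1.0 / 298.15))

def divider_ratio(t_c, rref):
    """alpha = Rth / (Rref + Rth): fraction of the divider voltage."""
    rth = thermistor_r(t_c)
    return rth / (rref + rth)

for k in (0.2, 1.0, 5.0):  # Rref = 0.2*R25, 1.0*R25, 5.0*R25
    rref = k * R25
    usable = [t for t in range(-40, 151)
              if 0.1 <= divider_ratio(t, rref) <= 0.9]
    print(f"Rref = {k}*R25: alpha in [0.1, 0.9] from "
          f"{usable[0]} C to {usable[-1]} C")
```

Real parts deviate from the Beta model over wide temperature ranges, so for design work, substitute the manufacturer's R-T table for `thermistor_r`.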
If you need to sense temperature over a wide range, you need to make a system that either can accurately read voltage to deal with the low sensitivity at cold and hot temperatures, or you need to switch R[ref] between ranges, like we talked about in my previous article (with the choice of either R3 alone, or R3 || R4). Another way of looking at this circuit is to analyze the sensitivity itself (for those of you familiar with calculus, it's the first derivative with respect to temperature dα/dT): If you are using an 8-bit ADC, this graph shows the sensitivity (in counts per degree) of temperature sensing at various temperatures. Sensitivity and resolution are related: the more sensitive a circuit is, the higher resolution it provides. Quantitatively, the two are related inversely: 3 counts per degree means 0.33 degree resolution, whereas 10 counts per degree means 0.1 degree resolution. If you only need 1 degree C resolution, you're probably fine with an 8-bit ADC. If you need 0.1 degree C resolution, you'll want a 10-bit ADC, or you'll want a way to amplify ranges of the thermistor voltage. Resolution is a distinct requirement from accuracy. A digital thermometer, even if it's off by 2 degrees C, can easily show you temperature in 0.1 degree steps: that's a resolution of 0.1 degree but an accuracy of 2 degrees C. Accuracy is much more difficult to analyze, as it has many contributing factors, but the biggest cause is the accuracy of the thermistor itself. Many thermistors are specified with a 5% resistance tolerance at 25 C; the inherent temperature error is this relative accuracy divided by the thermistor's temperature coefficient. Quality Z's thermistor table shows a tempco of 4.4%/degree at 25 C, so 5% / 4.4% = 1.1 degree C accuracy. In general, it is very difficult or expensive to get high accuracy thermistors without some sort of calibration step. 
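The sensitivity and resolution figures above can be estimated numerically. This sketch (again using an assumed Beta-model thermistor with B = 3950 rather than the actual Quality Thermistors data) differentiates α and scales by the ADC fullscale to get counts per degree; around 25 C with R[ref] = R[25] it lands in the same ballpark as the 3-counts-per-degree example above:

```python
import math

R25 = 10e3
BETA = 3950.0  # assumed Beta value, for illustration only

def divider_ratio(t_c, rref=R25):
    """alpha = Rth/(Rref + Rth), with Rth from the Beta model."""
    rth = R25 * math.exp(BETA * (1.0 / (t_c + 273.15) - 1.0 / 298.15))
    return rth / (rref + rth)

def counts_per_degree(t_c, bits=8, rref=R25, dt=0.01):
    """|d(alpha)/dT| times ADC fullscale: ADC counts per degree C."""
    dadt = (divider_ratio(t_c + dt, rref)
            - divider_ratio(t_c - dt, rref)) / (2.0 * dt)
    return abs(dadt) * ((1 << bits) - 1)

for t in (0, 25, 50, 100):
    s = counts_per_degree(t)
    print(f"{t:4d} C: {s:.2f} counts/deg  ({1.0 / s:.2f} deg resolution)")
```

Changing `bits` from 8 to 10 scales the whole curve by about 4, which is exactly the resolution improvement described above.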
• Do use ratiometric circuits A ratiometric circuit is one in which the ratio of voltages or currents is sensed, rather than their absolute values. Resistor dividers and Wheatstone bridges are both examples of ratiometric circuits. Sensors which use ratiometric circuits are ideal, because it means that the accuracy of the quantity you are sensing is independent -- or at least nearly independent -- of the accuracy of voltage or current references in your circuit. If a reference voltage or current has variation with time or temperature, so does the sensed voltage or current, and this variation cancels out when the ratio is calculated. Nearly all ADCs and DACs are ratiometric as well: the sampled digital output of an ADC is the ratio of its analog input voltage to its reference voltage, and the analog output of a DAC is the fraction of its reference voltage selected by its digital input. Three of the circuits I have mentioned (resistor dividers as a function of their resistances, ADCs, and DACs) are what I would call strongly ratiometric: the ratios of these circuits are unitless functions of resistance or voltage ratios, and they range from essentially 0 to essentially 1. In other words, the gain is almost exactly predictable. (ADCs and DACs have slight gain error, and resistor dividers of low values may have parasitic series resistance that causes errors, but otherwise their transfer functions are very close to 1.) Watch out with your ADCs and DACs, though: some of them take in half-scale reference voltages, so there is an indirect gain of 2 somewhere in the circuit, caused by resistor or capacitor or transistor area matching, and the ICs that do this sometimes don't specify very well the tolerance of this gain. Some examples of this are the ADCs in TI's 28xxx DSP family, and the MAX5322 DAC. Most sensors are ratiometric but not strongly ratiometric: for a fixed sensing quantity (temperature, strain, humidity, etc.) 
their output is ratiometric to a supply voltage or reference voltage, but the sensor gain is not unitless, and has part-to-part variation. For strain gages, as an example, there is a gain from strain to resistance that depends on the manufacturing tolerances and material properties of the strain gage. So the gain of a strain gage will not be affected by changes in supply voltage, but it will vary from part to part. Often the gain will need to be calibrated out. In any case, the #1 thing you should remember when you have a sensor that is ratiometric, used with an ADC that is ratiometric, is to use the same reference voltage! (Or reference voltages with very tightly coupled ratios.) Otherwise you are throwing away free accuracy. As an example, if you have a precision 3V reference driving a voltage divider, but you use an inaccurate 3.3V analog power supply on an ADC, as the power supply varies you will see different readings from the ADC. • Do treat your ADC input channels properly Don't just hook up a resistor divider to an ADC input channel without analyzing what is going on. I may write a future article going into more detail, but briefly speaking, there are two important characteristics of ADC input channels that you need to be aware of: input leakage, and input capacitance. Input leakage is a parasitic resistance or current flow between the ADC input and one or more other circuit nodes within the ADC. It's the same idea as input current offset in an op-amp. The resulting current, times your circuit's equivalent resistance, produces an undesired offset error. You may have to buffer the voltage into your ADC to minimize this error. Input capacitance is a much more subtle issue. Many ADCs use an internal sample-and-hold capacitor: the ADC hooks this capacitor up to your input voltage using internal switches, then disconnects the capacitor from the input and uses a state machine and comparators and what-not to convert that capacitor voltage to a digital reading. 
(If you're curious, look up successive-approximation converters in Wikipedia.) So the input stage of an ADC looks like a capacitor that appears and disappears, and in a multiple-channel ADC this capacitor transfers charge between inputs. This happens every time you sample the input voltage before a conversion, and your external circuit needs to transfer charge to/from the sampling capacitor until the voltage stabilizes. The best solution is to put a unity-gain buffer (which solves the input leakage and part of the input capacitance issue) followed by a small RC low-pass filter, in front of the ADC input. This filter provides a stiff source of charge (via the external capacitor) to the ADC sample-and-hold capacitor, and the resistor isolates the op-amp from a capacitive load. Usually this RC is in the 100-1000 ohm and 100-1000 pF range, so its time constant is under 1 µs.

• Do understand the thermal properties of thermistors

Thermistors aren't perfect. They have two characteristics -- self-heating and conduction through leads -- that can ruin your day if you're not careful. A thermistor, like any other resistor, dissipates power = I^2R. Power dissipation causes the thermistor to heat up to a temperature that is slightly higher than the one you want to sense: in other words, this causes sensor error. Most thermistor datasheets will give you a self-heating thermal coefficient, like 2mW/C, which means that for every 2mW you dissipate in the thermistor, its temperature will be off by 1 degree C. This is measured in still air; in moving air or a liquid the self-heating is smaller, because the power dissipation is conducted or convected away more easily. So the good news is that the self-heating constant in the datasheet is a maximum amount of self-heating (well, unless you surround it by insulation).
The bad news is that the actual change in temperature due to self-heating depends on airflow and is therefore usually unpredictable, which means you can't reliably write an algorithm to compensate for self-heating error. On the other hand, even though self-heating temperature rise is unpredictable, it is easy to calculate how much self-heating power can occur in a voltage divider. The worst-case amount of self-heating is when the thermistor and reference resistor are equal values, at which point the power dissipation in each is V[ref]^2/R[ref]/4. At lower temperatures, when the thermistor's resistance increases, the current through the pair of resistors drops. At higher temperatures, when the thermistor's resistance decreases, current increases but most of the power dissipation is across the reference resistor. There are two things you can do to reduce self-heating: One is to use a higher-valued thermistor, e.g. 100K rather than 10K nominal -- I'm not sure why 10K is the standard value, but it's a poor choice in many applications. The other is to use a smaller reference voltage. This makes the voltage sensitivity of your circuit smaller, but it may lower the total error if the self-heating can be greatly reduced. The other important thermal issue to note, besides self-heating, is the thermal conduction through the thermistor's leads. There has to be an electrical connection between a thermistor's sensing element and your circuit. Circuit boards and components use copper. Copper is a great electrical conductor, and it's also a great conductor of heat. So there's also an unwanted thermal connection between a thermistor's sensing element and your circuit. 
For example, if you're measuring hot air at 80 C slowly moving through a pipe with a thermistor that has its leads soldered to a circuit board on a 30 C heat sink, the thermal conductivity between the thermistor and the air might be 50 times better than the thermal conductivity between the thermistor and the circuit board, but 50 isn't infinity, so the thermistor would read 49/50 * 80 + 1/50 * 30 = 79 C: you'll see inaccuracies caused by relative variation between the two temperatures. This is one reason why thermistor leads and copper traces may need to be very thin: to minimize parasitic thermal conduction through the thermistor leads.

Tips and Tricks

There are two other tips and tricks I'll share with you. One is on the analog side, and the other is on the digital side.

• Tip: ADC gain/offset autocalibration

ADCs have gain and offset error. You're stuck with it, and it's usually specified in LSBs (multiples of 1 ADC count). For example, Microchip's MCP3201 (a 12-bit ADC) is spec'd at +/-3 LSB (= 0.07% of fullscale) offset and +/-5 LSB (= 0.12% of fullscale) gain error. Let's use a 4:1 analog multiplexer to measure 4 different things into the ADC channel:

• two voltage dividers (from two thermistors)
• a 3-resistor voltage divider with taps that are close to the upper and lower rails.

This lets us measure two voltage divider ratios, and two reference ratios. The reference ratios are so close to 0 and 1 that they are very insensitive to resistor tolerance. A 1:100:1 voltage divider that uses 1% tolerance resistors has ratios of approx 0.0098 and 0.9902, with worst-case ranges of 0.0096-0.0100 and 0.9900-0.9904. That's +/-0.02% of fullscale accuracy out of 1% resistors! These inaccuracies are smaller than the errors caused by the gain and offset error of the MCP3201, so we can measure its inputs at 1% of fullscale (nominally 41 counts) and 99% of fullscale (nominally 4055 counts) and use the readings to compensate for gain and offset error.
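In code, the two reference readings might be applied like this (a sketch: the function name and the example raw readings are hypothetical, and the nominal tap ratios are the 0.0098 and 0.9902 of the 1:100:1 divider above):

```python
FULLSCALE = 4095   # 12-bit ADC, e.g. MCP3201
LO_FRAC = 0.0098   # nominal lower-tap ratio of the 1:100:1 divider
HI_FRAC = 0.9902   # nominal upper-tap ratio

def autocal_correct(raw, read_lo, read_hi):
    """Two-point (gain + offset) correction of a raw ADC reading,
    using the measured readings of the two reference taps.
    Assumes read_hi > read_lo (a working ADC and divider)."""
    lo_ideal = LO_FRAC * FULLSCALE
    hi_ideal = HI_FRAC * FULLSCALE
    gain = (hi_ideal - lo_ideal) / (read_hi - read_lo)
    return (raw - read_lo) * gain + lo_ideal

# hypothetical readings: the ADC reports 38 and 4060 at the two taps
print(autocal_correct(2048, 38, 4060))
```

By construction, the correction maps the two tap readings back to their nominal values; differential and integral nonlinearity remain uncorrected.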
(We're still stuck with differential and integral nonlinearity.) A high-ratio voltage divider also allows for a very low output impedance with low power dissipation (100 ohm output impedance but 10.2K fullscale resistance in the above circuit).

• Tip: Convert directly from resistor ratio to temperature

Once you've measured the ADC reading of a thermistor voltage divider, and compensated for ADC gain and offset, there are a number of ways you can convert that ADC reading to a temperature. (Remember, the relationship between voltage divider output and NTC thermistor temperature is nonlinear.) Two ways not to do this are:

• lookup tables
• ADC reading -> resistance -> temperature

Lookup tables are simple. They're arrays of numbers that convert an index to an output. But they're also space hogs for the amount of accuracy you get out of them. With a 12-bit ADC, you're either going to need a 4096-element lookup table, or you're going to have to interpolate between elements of a smaller lookup table, in which case you need to do some multiplication. By the time you commit to using multiplication, you're generally better off just using a polynomial. So unless you have a lot of extra RAM or ROM (or an 8-bit ADC, which only needs a 256-element lookup table) and are using a processor where hardware multiplication isn't available (no "multiply" assembly instruction), a lookup table is a poor choice for converting ADC voltage to temperature. The other important thing to note is that while you can convert between ADC reading and thermistor resistance, and then from thermistor resistance to temperature, in almost all cases this two-step process is unnecessary and a poor choice. You don't really care what the thermistor's resistance is! You care what temperature it is.
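As a sketch of what "converting directly" can look like offline, the script below fits a cubic mapping the divider ratio α straight to temperature over a modest 10-60 C range. The thermistor is an assumed Beta-model part (B = 3950), not real calibration data, so the numbers are illustrative only:

```python
import math
import numpy as np

R25, BETA, RREF = 10e3, 3950.0, 10e3   # assumed part values

def divider_ratio(t_c):
    """alpha = Rth/(Rref + Rth), Beta-model approximation."""
    rth = R25 * math.exp(BETA * (1.0 / (t_c + 273.15) - 1.0 / 298.15))
    return rth / (RREF + rth)

# Tabulate alpha vs. temperature over the range of interest,
# then fit temperature directly as a cubic polynomial in alpha.
t = np.linspace(10.0, 60.0, 101)
a = np.array([divider_ratio(tc) for tc in t])
coeffs = np.polyfit(a, t, 3)

max_err = np.max(np.abs(np.polyval(coeffs, a) - t))
print(f"cubic fit, 10-60 C: max error {max_err:.3f} deg C")
```

At runtime the same cubic is evaluated (typically in fixed point, via Horner's rule) on the gain/offset-corrected ADC ratio. Widen the range or the fit error grows, which is where raising the degree or splitting the range comes in.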
And on top of that, if you try to compute the thermistor's resistance, it varies over several orders of magnitude: it's a nearly exponential relationship with temperature, and exponentials are bad things to have to calculate with fixed-point math. (Thought experiment: a Quality Z 10K thermistor at 25 C will be 32.6K at 0 C, and 679 ohms at 100 C. If you want 1% numerical accuracy, that means a dynamic range of 32.6K / (1% of 679 ohms) = 4800:1, which is possible in 16-bit math but doesn't leave much room in case you suddenly find out you need to sense temperature from -10 C or -20 C.) There's a general lesson here: whenever you have your choice of computations, pick the one that is the most linear possible. What does this mean? That's another story for a future article, but briefly: From a qualitative standpoint, take a function you have to calculate and graph it. If it looks kind of like a line, or a line with a little bit of curvature, you're in good shape. If it has sharp corners or cusps or quickly turns from steep to shallow, it's going to be difficult to calculate accurately. From a quantitative standpoint, prefer the lowest-order polynomial that approximates a computation. In this case, if I did need to know the thermistor's resistance, I would probably express it in logarithmic terms, e.g. calculate log R[th] rather than R[th] itself. This is because the relationship between temperature and log R[th] is closer to a quadratic or cubic polynomial, whereas the relationship between temperature and R[th] is exponential. It turns out that in most cases the relationship between voltage divider ratio α and temperature is not that nonlinear. Depending on the temperature range you care about, you may be able to get away with a 3rd-order polynomial or even a quadratic. The approach for doing this approximation is fairly simple: 1. Compute the nominal ADC voltage for a given temperature. (Alternatively: calibrate your system by measuring ADC voltage at various temperatures.) 2.
Use your favorite math software (MATLAB/Octave/SciLab/Mathematica/MathCAD/etc., or if you really must, use Excel) to fit a polynomial to the curve x = ADC voltage, y = temperature. 3. Calculate the error between the actual temperature and the predicted temperature based on the polynomial. 4. If the error is lower than your requirements, you're done; otherwise, you may need to either increase the degree of the polynomial or split the range into pieces. As a rule of thumb, polynomials of degree higher than 5 should be avoided; the higher the polynomial degree, the more difficult it is to avoid overflow and underflow errors.

At this point, you may be wondering: how did such a simple problem get so complicated? Such is engineering! If it were easy, it wouldn't be interesting.

posted by Jason Sachs

Jason has 17 years of experience in signal conditioning (both analog + digital) in motion control + medical applications. He likes making things spin.

Comments:

I am using a simple thermistor with a resistor to form a divider and this is fed directly to my ADC (LPC ARM from NXP). I don't need any signal conditioning. I just use a lookup table to catch the nonlinearity of the sensor and convert to the right temperature value. It works fine... has been tested from -30 to +70... gets a little inaccurate at the extremes, but that's good enough for my application. It is dead on everywhere else. Doesn't need any signal conditioning? Is your application special or something that it requires conditioning? Thanks -Donald (2 years ago)

Hi, nice article. The hyperlink you've given in the first line links to an address which doesn't contain the article. Instead, it should link to http://www.embeddedrelated.com/showarticle/81.php (3 months ago)
Bolinas Precalculus Tutor

Find a Bolinas Precalculus Tutor

...I help at all levels of math, including test preparation for the quantitative part of the GRE. I have had several GRE students, and they all exceeded their initial expectations in the math portion. "Excellent tutor improved my GRE score!" - Megan G., Oakland, CA: "Andreas was a huge help to me in preparing for my GRE."
41 Subjects: including precalculus, calculus, geometry, statistics

...From the nerve-racking experience of not understanding something because of the poor sound quality, to the anxious wait until the results are in, I've been through it all. And very successfully! Since then, living in the US has improved my English proficiency even more, making me an even better tutor now than I was before.
32 Subjects: including precalculus, chemistry, Spanish, calculus (Can tutor any subject in Spanish!) Availability: M, Th, F: 5-9 pm

I am a scientific researcher at UC Berkeley with experience teaching and tutoring basic and complex scientific subjects. I am a trained engineer, with an M.S. from UC Berkeley, and a B.S. from the University of Illinois at Urbana-C...
15 Subjects: including precalculus, Spanish, calculus, ESL/ESOL

...I enjoy the real analysis aspect of calculus, including precise definitions of limits, the definition of a power with an irrational exponent, and functions of rational numbers. Also the origin of e and the transcendental topic of infinite series. I have the 5th edition of the Stewart textbook, an excellent book used by many schools, and the student solutions manual, 5th edition.
14 Subjects: including precalculus, physics, calculus, geometry

...Paul Church. Hence, I'm confident that I'm completely able to help students understand the lectures, do their homework and assignments correctly, and improve their grades significantly. In addition, I can also help students understand the basic concepts of Physics, like motion, pressure, force, waves, energy, and light.
18 Subjects: including precalculus, calculus, trigonometry, statistics
Measuring Up: Prototypes for Mathematics Assessment (uncorrected OCR, page 53 ff.)

Hexarights

Apply familiar concepts in unfamiliar settings
Apply mathematical definitions and explore their subtleties
Choose tools to help in problem-solving

Suggested time allotment: One class period
Student social organization: Students working alone or in pairs

Task

Assumed background: This task assumes that the children are familiar with area and perimeter of plane figures and with perpendicular lines. In particular, the children should have had some experience in measuring lengths of line segments, in drawing right angles (using an L-square or similar tool), and in drawing objects (like rectangles) that meet predetermined criteria — for example, with given areas or perimeters. It is also assumed that the students have had some experience in dealing with newly defined geometric objects. The assessment activity does not, however, assume any familiarity with hexarights. Indeed, part of what is being assessed is children's abilities to grapple with a concept that is new to them.

Presenting the task: The teacher should first be sure that everyone recalls that a hexagon is a 6-sided plane figure, and that adjacent sides are sides that touch.
Then he or she should distribute copies of the student sheet and read through the first item to ensure that everyone has a beginning understanding of what a hexaright is. As always, tools for drawing should be available for children to use as they see the need. The tools should include rulers, L-squares (which provide an easy way to draw right angles), centimeter tiles, and centimeter graph paper. An L-square is provided on the back cover of this book.

Student assessment activity: See the following pages. Note: the student work pages have been drawn to fit the 7" × 10" page of this volume. Reproduction of these pages for student use may affect the scale of the centimeter graph paper in questions 2 and 3 and the hexaright in question 4. The teacher may choose to redraw these diagrams to achieve proper scale.

Name _________________________________________ Date _____________

We made up a new kind of shape and made up a name for it: hexaright. Here's the definition: A hexaright is a hexagon in which each pair of adjacent sides is perpendicular. Here are some examples of hexarights: This is not a hexaright because not all pairs of adjacent sides are perpendicular.

1. This is not a hexaright either. Why not?

2. This hexaright has been drawn on some centimeter graph paper. Find the perimeter and the area of this hexaright: Perimeter: ___________________ Area: ___________________

3. This hexaright has also been drawn on some centimeter graph paper. Find the perimeter and area of this hexaright: Perimeter: ___________________ Area: ___________________

4. Here's a hexaright with a perimeter of 24 cm. What is the area of this hexaright?

5. There are lots of different hexarights with a perimeter of 24 cm.
On a separate piece of paper, draw two different hexarights, each one with a perimeter of 24 cm. (Be sure to put your name on the paper!)

6. Draw one more hexaright with perimeter 24 cm, and make the area as large as you can. You can draw it in the space below or on another piece of paper. What is the area of the hexaright you just drew? _______

7. What did you find out about the areas of hexarights with a perimeter of 24 cm?

Rationale for the mathematics education community

An important feature of this task is that it incorporates basic ideas from geometry and measurement — perpendicularity, line segments, area, perimeter — to define a new geometric figure previously unknown to students — hexarights. It is not, however, hexarights in and of themselves that are important. Rather, it is the ability of students to use some familiar mathematical ideas to define and explore a new class of mathematical objects. The purpose of the task is to assess how well students can deal with this new concept, in the sense of (a) being able to distinguish what fits the definition and what does not and (b) constructing examples that fulfill given constraints. The ideas suggested by hexarights are particularly rich from a mathematical standpoint. The task provides an opportunity for students to explore the idea of maximizing one property — area in this case — while keeping other properties — perimeter here — fixed. Again, it is neither hexarights nor area and perimeter that make the task noteworthy; it is the mathematical investigation of interrelated properties that is an important part of mathematical power. Similarly, the task asks students to think about what hexarights are not. The point is that the little "bite" that makes the figure a hexaright (rather than a rectangle) can be made as small as one wishes; but once the bite disappears, so too does the hexaright!
Hence, while the area can be arbitrarily close to 36 cm2, it cannot equal 36 cm2. Still another important reason for including this task is that the students are given the opportunity to select the tools that they need or want to use to do the drawings. Some children will prefer to use paper ruled in centimeters (which in fact may limit their drawings to hexarights with integral sides); others will use a centimeter L-square, which is a very appropriate tool in this case. It is important for teachers and texts to leave the selection of tools for particular tasks to the students, particularly by the time they reach fourth grade. The selection of the proper tool from several possibilities is a vital part of problem-solving because each tool may have its own advantages and disadvantages in a given setting.

Task design considerations: Several features of this task deserve highlighting: Note that hexarights are formally defined in words, as opposed to being implicitly "defined" solely through lots of examples and non-examples. (Of course it may well be that some children of this age will form the "hexaright" concept through the pictures rather than through the words. If the task were used with older children, it might be appropriate to convey the concept without the pictures. The students would then be expected to generate whatever pictures they would need to solidify the concept in their own minds.) Note also that the shape was intentionally given a name that suggests its meaning, rather than an invented nonsense word (as is typical of some materials that aim at concept formation through examples and non-examples). All the pictures of hexarights are drawn so their sides are not parallel with the edges of the paper.
This is in an effort to combat one of the most prevalent misconceptions about geometry: that the properties of being a rectangle or square, or (more generally) the relations of parallel and perpendicular, have something to do with the orientation of the lines with respect to the edges of the page or the chalkboard. Even in the cases of questions 2 and 3, the pieces of centimeter graph paper are shown as if they had been torn from a larger sheet and placed obliquely with respect to the edges of the page.

Question 5 deliberately does not leave sufficient space to answer the question, and instead calls for the student to use a separate piece of paper. The purpose of this is to force the child to decide what kind of paper to use. Centimeter graph paper will be helpful to some students and a distraction for others.

Similarly, in question 7, no lines are provided for the student to write on. Some children will want to describe their findings in words, while others will want to explain via pictures. No method is best a priori. (Students should be encouraged to use a sheet of lined paper, however, if they ask to do so.) Filling the page with lines to write on might convey the message that pictures are not appropriate.

A note on the definition of "hexaright"

Although three technical words are involved in the definition — "hexagon," "adjacent," and "perpendicular" — they are all useful in a variety of settings, mathematical and non-mathematical. In any case, teachers are asked to make sure everyone understands these terms. Notice also the introductory sentence that says that "hexaright" is a made-up term. One would also not want teachers or other adults to think that a hexaright is some concept from high school geometry that they have forgotten! An alternative would have been to define the hexaright as a hexagon all of whose angles are right angles.
Some may object to this form of the definition, thinking that "angle" connotes "interior angle." Since the measure of one interior angle of a hexaright is 270°, confusion might result.

Variants and extensions

The difficulty of the task can be varied by asking intermediate questions. For example, even before question 1, the student might be asked to draw some hexarights (without size conditions) or to describe in their own words what hexarights look like.

One variant of the task is to use numbers that are not integers for the areas or perimeters of the figures. This might help some children to see that there is no largest hexaright with a perimeter of 24 cm; for others, however, the task would become more difficult.

A natural extension of the task is to consider octarights (or, more generally, polyrights) and their areas and perimeters. Interestingly, octarights occur in three basic shapes (as opposed to hexarights' single basic L-shape). An extension in another direction is to consider the 3-dimensional analog of a hexaright. If a rectangular box in 3 dimensions is the analog of a rectangle in 2 dimensions, what does the 3-dimensional analog of a hexaright look like? What are the corresponding roles of volume and surface area?

Protorubric

Characteristics of the high response:

The high-level response is one that demonstrates an overall facility for dealing with the newly-defined hexaright, and that shows some understanding of the complementary roles of perimeter and area in the task — that is, that there are hexarights with perimeter 24 cm and areas that are very close (but not equal) to 36 square centimeters. The student's responses to questions 1 through 4 are correct, although the indications of "cm" and "square cm" (or "cm²") may be missing.
The hexarights drawn for questions 5 and 6 are close to being completely accurate, with virtually right angles and side lengths within 0.5 cm of being correct. (An alternative, which in some ways is more sophisticated, is a sketch, drawn without a straightedge, that indicates clearly the dimensions.) The area of the hexaright for question 6 is accurate, is at least 34 square cm, and is as large as the ones in question 5.

The highest level response to question 7 says something to the effect that there is no hexaright with perimeter 24 cm and area 36 cm². (No fourth grader should be expected to justify this fact completely.) Full credit should be given to any response that refers to a square of perimeter 24 cm, or that says there are hexarights of perimeter 24 cm and area arbitrarily close to 36 cm². Somewhat less credit is assigned to a response that simply refers to some (finite) sequence of hexarights of perimeter 24 cm and increasing area.

Characteristics of the medium response:

The responses to questions 1-4 show an understanding of what a hexaright is and what the perimeter and area are. There may be a flaw in one of the calculations. The responses to question 5 provide supporting evidence of understanding, even if there are miscalculations. The figures drawn in questions 5 and 6 are hexarights, although the lengths of the sides may be up to a centimeter wrong in either direction, the angles may not be accurately drawn right angles, and the perimeter may not be exactly 24 cm. (Of course the nature of the errors will depend on the kinds of tools, including type of paper, that the student selects.)

In question 6, the partial understanding that is typical of the medium-level response can be shown in a variety of ways. For example, the student could draw a hexaright with a relatively small area (perhaps 30 cm²), but report the area accurately (within perhaps 1 cm² of the correct area).
Alternatively, an appropriate hexaright could be drawn very well, but the area misidentified as 23 cm² rather than 35 cm². The response to question 7 is something that is true (for example, that there are lots of hexarights of different areas, for a fixed perimeter) but that does not address any connections between or among hexarights with perimeter 24 cm.

Characteristics of the low response:

Little understanding is evinced of what a hexaright is, or of what area and perimeter are. The drawings are done without regard to accuracy, either in making straight line segments or in making right angles. Moreover, in a low response, the student's answer to question 7 is not related to the problem.
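The claim that the area can approach but never reach 36 cm², and the figure of 35 cm² as the best integer-sided case, can be checked with a short enumeration. This sketch is an editorial illustration, not part of the original task materials; it assumes every hexaright is an L-shape (the single basic shape mentioned above): a bounding rectangle a x b with a rectangular c x d "bite" removed from one corner, so the perimeter is 2(a + b) regardless of the bite while the area is ab - cd.

```python
# Enumerate integer-sided L-shaped hexarights with perimeter 24 cm.
# Removing a c x d bite from an a x b rectangle leaves the perimeter
# at 2*(a + b) but reduces the area from a*b to a*b - c*d.

def hexaright_areas(perimeter):
    areas = []
    half = perimeter // 2          # a + b must equal this
    for a in range(1, half):
        b = half - a
        for c in range(1, a):      # the bite must be strictly smaller
            for d in range(1, b):  # than the bounding rectangle
                areas.append(a * b - c * d)
    return areas

areas = hexaright_areas(24)
print(max(areas))   # -> 35 (a 6 x 6 square minus a 1 x 1 bite)
print(36 in areas)  # -> False: the bite always costs some area
```

With fractional side lengths the bite can be shrunk toward zero, so the area approaches, but never equals, 36 cm², exactly as the discussion above says.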
estimating K and Lambda from an extreme value distribution

Gordon D. Pusch g_d_pusch_remove_underscores at xnet.com
Thu Mar 4 12:57:28 EST 2004

Kevin Karplus <karplus at cheep.cse.ucsc.edu> writes:

> In article <giu11czfui.fsf at pusch.xnet.com>, Gordon D. Pusch wrote:
>> Just to stir the pot a little about the near-universal abuse of extreme
>> value theory that routinely occurs in bioinformatics: Since a "random
>> sequence" model underlies the derivation of the so-called
>> "Karlin-Altschul distribution" used by BLAST (whose correct name is the
>> "Gumbel distribution," since Gumbel discovered it and the other two
>> asymptotic classes of extreme value distribution decades before Karlin
>> and Altschul), should not this exact same objection also be equally
>> true of the "standard" P-values returned by BLAST --- which everyone
>> still uses on a routine basis ??? >:-I
>
> The derivation of the Gumbel distribution may not be rock
> solid---indeed there are problems with the null model assumptions used
> in BLAST, but the authors of BLAST have continued to improve the
> composition and length corrections, and have good empirical evidence
> that the Gumbel distribution is a good fit to their scores.

...Except for the minor detail that, if they had bothered to actually _read_ Gumbel's 1958 monograph "Statistics of Extremes," they would have (or at least _should_ have) immediately realized that they had committed =THE= classic Cardinal Sin of Extreme Value Theory, namely: Using An EVD Corresponding To The Wrong Class Of Asymptotic Behavior For The PDF's Tail.
There is not one, but _THREE_ different "Extreme Value" distributions:

* the Frechet Distribution, corresponding to the distribution of the extreme value of a set of IIDRVs drawn from a PDF with a power-law tail;

* the Gumbel Distribution, corresponding to the distribution of the extreme value of a set of IIDRVs drawn from a PDF with an exponential tail; and

* the Weibull Distribution, corresponding to the distribution of the extreme value of a set of IIDRVs drawn from a PDF whose domain is bounded from above.

(NOTE: The derivation of the three families of Extreme-Value Distributions assumes that the IIDRVs are _REAL-VALUED_ variables, whereas it is a common practice in bioinformatics to artificially _force_ the scores to be integer valued; in the discussion below, I will ignore this technical point, since IMO it does not change my primary conclusion.)

Now, in any problem of bioinformatic interest, one is aligning a query sequence with a FINITE number of characters against a database of sequences all of which also have a FINITE number of characters; furthermore, each character in the query sequence can be aligned with at _MOST_ one character in each database sequence. Hence, if all the entries in the scoring matrix are finite, and all the "gap penalties" are finite, then it is trivially obvious that the score of the alignment must be a sum of a finite number of finite terms, less a finite number of finite gap penalties, and therefore must be finite. Furthermore, a small amount of additional thought will show that the alignment score cannot possibly exceed the smaller of the two scores obtained by aligning the query sequence with itself and the database sequence with itself.

It therefore follows that the set of scores obtained by aligning a query sequence with every sequence in _ANY_ sequence database is bounded from above, with an upper bound given by the score of the alignment of the query sequence with itself.
Since the distribution of possible scores is bounded from above, it therefore immediately and trivially follows from extreme value theory that the correct extreme value distribution is the _WEIBULL_ distribution, =NOT= the Gumbel distribution. The Gumbel distribution, by contrast, would require a distribution of possible scores that is _UNBOUNDED_, with an exponential tail; this makes absolutely no sense unless one assumes that =BOTH= the query =AND= the database sequence can in principle be _INFINITELY LONG_ with a geometric (exponential) length distribution, and that one is _IGNORANT_ of the query sequence length and composition at the time of the query --- which is patently absurd!

The error committed by Karlin and Altschul is that, being Frequentists, the only way they could think of to treat the alignment problem was to (incorrectly!) map it onto the alignment of a _SEMI-INFINITE SEQUENCE WITH ITSELF_, which Karlin had earlier solved using "Martingale" theory. However, contrary to Altschul's invalid hand-waving claims, in which he attempts to argue that the assumption of a "semi-infinite" sequence can be justified by imagining that all the sequences in the database are "concatenated end to end," in fact it is _FALSE_ to assume that the limit in which the number of sequences in the database becomes unboundedly large is equivalent to assuming that the _length_ of the query and database sequences becomes unboundedly large.

The Karlin-Altschul distribution provided the `answer' to the _WRONG STATISTICAL QUESTION_: It is a good example of what happens when all one has is a hammer, and one then proceeds to falsely assume that "Everything Is A Nail."

> A good paper to read is
>
> @article{improved-psiblast-2001,
>   author={Sch\"affer, Alejandro A. and Aravind, L.
>           and Madden, Thomas L. and Shavirin, Sergei
>           and Spouge, John L. and Wolf, Yuri I.
>           and Koonin, Eugene and Altschul, Stephen F.},
>   title ="Improving the accuracy of {PSI-BLAST} protein database
>           searches with composition-based statistics and other refinements",
>   journal="Nucleic Acids Research",
>   volume=29, number=14,
>   year=2001,
>   pages="2994-3005"
> }

...Which unfortunately _still_ is solving the wrong statistical problem, since it is implicitly assuming that one knows _NOTHING_ about the properties of the query sequence, but is instead comparing pairs of sequences drawn "at random" !!!

> Of course, the real reason that people routinely use the BLAST
> e-values is not because the authors of BLAST have been very careful to
> make the e-values as accurate as they can (although they have been
> careful), but because many biologists have blind faith in their
> computational tools.

...Or at least, in the _authors_ of their computational tools... >:-I

Furthermore, in my experience, many of the biologists I've worked with do not even assume that the E-values or P-values returned by such tools are "statistically accurate" --- their level of statistical sophistication stops at the level of "Small P-values Are Good, and Big P-values Are Bad."
(Indeed, one biologist I worked with did not even =LOOK= at the P-values or the E-values returned by BLAST or FASTA, but simply assumed that any sequence "near the top of the list" was "closely related," whereas any sequence near the bottom of the list was "probably unrelated" --- in spite of having been _REPEATEDLY_ told that the set of hits in this particular display had been "truncated" to show _ONLY THE BEST HITS FROM EACH ORGANISM IN THE NON-REDUNDANT DATABASE_, and that in principle it was possible for =ALL= the hits in such a "truncated" display to be statistically insignificant.)

> One would expect wet-lab scientists to have a healthy scepticism of any
> results, knowing how often experiments fail, and how much bad data has
> made it out into the literature, but many seem to have an almost mystical
> faith in anything produced by computation.

This sort of "mystical faith in the results of computation" is not limited to "wet-lab" biologists: I saw much the same sort of "mystical faith" exhibited by _experimental physicists_ toward the results of the "Monte Carlo" simulations they made of the "expected backgrounds" of their high-energy and nuclear physics experiments !!! 8-(

> (On the other hand, computational people seem to have an almost mystical
> faith in wet-lab verification---expecting experiments to be neat, quick
> deterministic tests like "if" statements in code.)

<*shrug*> Programmers tend to think "deterministically," not "experimentally." (What fraction of the programmers that you know also have training in Experimental Analysis --- or even in basic statistical theory ??? Now consider that, given your research subfield, the odds are that this fraction is _MUCH_ larger than the fraction among programmers in general...)

--
Gordon D. Pusch

perl -e '$_ = "gdpusch\@NO.xnet.SPAM.com\n"; s/NO\.//; s/SPAM\.//; print;'

More information about the Comp-bio mailing list
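The tail-class distinction argued above is easy to see numerically. The sketch below is an editorial illustration (plain Python standard library, unrelated to BLAST's actual code): maxima of IID samples from an exponential-tailed PDF drift upward without bound, roughly like ln n, which is the Gumbel regime; maxima of IID samples from a PDF bounded above crowd against the bound, which is the Weibull regime the post argues applies to finite alignment scores.

```python
import math
import random

random.seed(42)

def sample_maxima(draw, n, trials):
    """Maximum of n IID draws, repeated over many trials."""
    return [max(draw() for _ in range(n)) for _ in range(trials)]

n, trials = 1000, 500

# Exponential tail (unbounded above): the mean maximum grows like
# ln(n) + Euler's gamma, and its fluctuations are Gumbel-distributed.
exp_max = sample_maxima(lambda: random.expovariate(1.0), n, trials)
print(sum(exp_max) / trials)   # near ln(1000) + 0.5772, i.e. about 7.5

# Bounded support (uniform on [0, 1]): the maxima pile up just under
# the upper bound, the Weibull class, like bounded alignment scores.
uni_max = sample_maxima(random.random, n, trials)
print(sum(uni_max) / trials)   # near n/(n+1), i.e. about 0.999
```

Doubling n pushes the exponential-tail maxima up by roughly ln 2 but only squeezes the uniform maxima closer to 1, which is the qualitative difference between the two asymptotic classes.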
Cribler les entiers sans grand facteur premier, Philos - ALGORITHMIC NUMBER THEORY, 2008

- CHARACTERS AND THE POLYA-VINOGRADOV THEOREM: "... A central problem in analytic number theory is to gain an understanding of character sums χ(n), n ≤ x ..."

- Proceedings of the Millennial Conference on Number Theory, 2002 (Cited by 3 (1 self)): "This paper presents lower bounds and upper bounds on the distribution of smooth integers; builds an algebraic framework for the bounds; shows how the bounds can be computed at extremely high speed using FFT-based power-series exponentiation; explains how one can choose the parameters to achieve any desired level of accuracy; and discusses several generalizations."

- (Cited by 3 (1 self)): "Abstract. We consider some questions related to the signs of Hecke eigenvalues or Fourier coefficients of classical modular forms. One problem is to determine to what extent those signs, for suitable sets of primes, determine uniquely the modular form, and we give both individual and statistical results. The second problem, which has been considered by a number of authors, is to determine the size, in terms of the conductor and weight, of the first sign-change of Hecke eigenvalues. Here we improve the recent estimate of Iwaniec, Kohnen and Sengupta."
Pretty Correlation Map of PIMCO Funds

June 14, 2012 By klr (Timely Portfolio)

As PIMCO expands beyond fixed income, I thought it might be helpful to look at the correlation of PIMCO mutual funds to the S&P 500. Unfortunately, due to the large number of funds, I cannot use chart.Correlation from PerformanceAnalytics. I think I have made a pretty correlation heatmap of PIMCO institutional share funds with inception prior to 5 years ago. Of course this eliminates many of the new strategies, but it is easy in the code to adjust the list.

I added the Vanguard S&P 500 fund (VFINX) as a reference point. Then, I ordered the correlation heat map by correlation to VFINX. As expected, there are two fairly distinct groups of funds: those (mostly fixed income) with negative/low correlation to the S&P 500 and those with strong positive correlation.

Here is the more standard heat map with dendrogram ordering, which has its purpose but gets a little busy. If we are only interested in the correlation to the S&P 500 (VFINX), then this might be more helpful.
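The ordering step described above, sorting the rows and columns of the correlation matrix by each fund's correlation to the reference series, is language-independent. Here is a minimal sketch of the idea in plain Python rather than the post's R, with invented fund names and return series standing in for the actual PIMCO data:

```python
# Order a correlation matrix by each series' correlation to a
# reference column ("VFINX" here), as the post does for PIMCO funds.
# The fund names and return series below are made up for illustration.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

returns = {
    "VFINX":      [0.01, -0.02, 0.03, 0.01, -0.01],  # reference (S&P 500)
    "EquityFund": [0.02, -0.03, 0.04, 0.01, -0.02],  # moves with stocks
    "BondFund":   [-0.01, 0.02, -0.02, 0.00, 0.01],  # moves against stocks
}

# Sort fund names by correlation to the reference, most correlated first,
# then build the reordered correlation matrix row by row.
order = sorted(returns, key=lambda f: pearson(returns["VFINX"], returns[f]),
               reverse=True)
corr = [[round(pearson(returns[a], returns[b]), 2) for b in order]
        for a in order]

print(order)  # reference first, then equity-like, then bond-like funds
for name, row in zip(order, corr):
    print(name, row)
```

With real data this ordering makes the two groups the post mentions (low/negative correlation vs. strongly positive correlation to the S&P 500) sit in contiguous blocks of the heatmap.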
Twist on Monty Hall

Date: 03/31/2001 at 17:27:39
From: rick singh
Subject: Twist on Monty Hall

Here is a twist on the classic Monty Hall problem. There are three doors. Each door contains a prize. Your goal is to maximize the probability of getting the best of the three prizes. You can open as many of the doors as you like, but you have to stick with the last door that you open. When you open a door, you do not know whether or not it is the best. Should you open only one door, or should you open one and then open the second and stick with the second - or should you open all three and stick with the third? I understand Monty Hall, but this one has elements I can't figure out. Thanks for your help.

Date: 04/01/2001 at 16:41:37
From: Doctor Schwa
Subject: Re: Twist on Monty Hall

Hi Rick,

The classic problem can be found in the Dr. Math FAQ: The Monty Hall Problem

Yours is a really interesting type of problem! I first heard it in the context of a princess choosing among a hundred suitors. At any time she can accept the proposal of the suitor she's seeing, but once she rejects one, he won't come back. How can she maximize her chances of getting the best one?

The answer, of course, depends on the relative quality of the suitors. I think it turns out that the best strategy for her is to let the first 37 suitors go by, and then if she ever sees a suitor better than the best she's seen so far, keep him. Why 37? Well, that's a long story. (Roughly: the optimal cutoff turns out to be about n/e of the candidates, and 100/e is about 37.)

Luckily, your problem has only three doors instead of a hundred suitors, so it's a bit easier to analyze.

If you keep the first door, you have a 1/3 chance of getting the best prize. If you go to the second door, and just keep it no matter what, it still has a 1/3 chance of being best. Same with the third door.

BUT, when you open the second door, you already have a bit of information: is the second door better than the first? If it is better than the first, keep it. If it's worse than the first, go on to the third and take whatever's there.
This strategy gives you a 50% chance of getting the best prize. Let's let A stand for the best prize, and B the second best, and C the worst. Then the possibilities (listed in door order) are:

   ABC   ACB   BAC   BCA   CAB   CBA

With the strategy described above, you win whenever the second prize is best (BAC and CAB), but you also win in the case BCA, because with the second prize worse than the first, you go on to the third and win.

So, half the time you get the best prize; in two cases out of six, ACB and CBA, you end up with the middle-quality prize (in ACB, since the second prize is worse than the first, you try the third; in CBA, since the second prize was better than the first, you stay with it). Only in case ABC will you get stuck with the worst prize.

- Doctor Schwa, The Math Forum
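Dr. Schwa's case analysis can be verified by brute force. This short sketch (an editorial addition, not part of the original exchange) enumerates all six prize arrangements under the "keep door 2 only if it beats door 1" strategy:

```python
from itertools import permutations

# Prizes ranked 3 (best, "A") down to 1 (worst, "C").  Strategy: open
# door 1, then door 2; keep door 2 if it beats door 1, otherwise move
# on and accept whatever is behind door 3.
def chosen_prize(doors):
    return doors[1] if doors[1] > doors[0] else doors[2]

wins = worst = 0
for doors in permutations([3, 2, 1]):   # all six arrangements
    got = chosen_prize(doors)
    wins += (got == 3)    # got the best prize (A)
    worst += (got == 1)   # got the worst prize (C)

print(wins, "of 6 arrangements yield the best prize")   # 3 of 6 -> 50%
print(worst, "of 6 leave you with the worst prize")     # only case ABC
```

The counts match the letter-by-letter analysis above: three wins (BAC, BCA, CAB), two middle prizes (ACB, CBA), and one worst-case outcome (ABC).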
Text::NSP::Measures - Perl modules for computing association scores of Ngrams. This module provides the basic framework for these measures.

  use Text::NSP::Measures::2D::MI::ll;

  my $npp = 60;
  my $n1p = 20;
  my $np1 = 20;
  my $n11 = 10;

  $ll_value = calculateStatistic( n11=>$n11,
                                  n1p=>$n1p,
                                  np1=>$np1,
                                  npp=>$npp);

  if( ($errorCode = getErrorCode()) )
  {
    print STDERR $errorCode." - ".getErrorMessage()."\n";
  }
  else
  {
    print getStatisticName()." value for bigram is ".$ll_value."\n";
  }

These modules provide perl implementations of mathematical functions (association measures) that can be used to interpret the co-occurrence frequency data for Ngrams. We define an Ngram as a sequence of 'n' tokens that occur within a window of at least 'n' tokens in the text; what constitutes a "token" can be defined by the user.

Further discussion about the measures implemented in this distribution can be found in their respective documentations.

This module also provides a basic framework for building new measures of association for Ngrams. The new measure should inherit from either the Text::NSP::Measures::2D or the Text::NSP::Measures::3D module, depending on whether it is a bigram or a trigram measure. Both these modules implement methods that retrieve observed frequency counts and marginal totals, and also compute expected values. They also provide error checks for these counts.

You can either write your new measure as a new module, or you can simply write a perl program. Here we will describe how to write a new measure as a Perl module.

1. To create a new Perl module for the measure, issue the following command (replace 'NewMeasure' with the name of your measure):

  h2xs -AXc -n Text::NSP::Measures::2D::NewMeasure   (for bigram measures)

  h2xs -AXc -n Text::NSP::Measures::3D::NewMeasure   (for trigram measures)

This will create a new folder, namely...

  Text-NSP-Measures-2D-NewMeasure   (for bigram)

  Text-NSP-Measures-3D-NewMeasure   (for trigram)

This will create an empty framework for the new association measure.
Once you are done completing the changes, you will have to install the module before you can use it. To make changes to the module, open:

  Text-NSP-Measures-2D-NewMeasure/lib/Text/NSP/Measures/2D/NewMeasure/NewMeasure.pm

or

  Text-NSP-Measures-3D-NewMeasure/lib/Text/NSP/Measures/3D/NewMeasure/NewMeasure.pm

in your favorite text editor, and do as follows.

2. Let us say you have named your module NewMeasure. The first line of the file should declare that it is a package. Thus the first line of the file NewMeasure.pm should be:

  package Text::NSP::Measures::2D::NewMeasure;   (for bigram measures)

  package Text::NSP::Measures::3D::NewMeasure;   (for trigram measures)

To inherit the functionality from the 2D or 3D module you need to include it in your NewMeasure.pm module. A small code snippet to ensure that it is included is as follows:

  use Text::NSP::Measures::2D;   (for bigram measures)

  use Text::NSP::Measures::3D;   (for trigram measures)

You also need to insert the following lines to make sure that the required functions are visible to the programs using your module. These lines are the same for bigrams and trigrams. The "no warnings 'redefine';" statement is used to suppress perl warnings about method overriding.

  use strict;
  use Carp;
  use warnings;
  no warnings 'redefine';

  require Exporter;

  our ($VERSION, @EXPORT, @ISA);

  @ISA = qw(Exporter);
  @EXPORT = qw(initializeStatistic calculateStatistic
               getErrorCode getErrorMessage getStatisticName);

3. You need to implement at least one method in your package: calculateStatistic(). This method is passed a reference to a hash containing the frequency values for an Ngram as found in the input Ngram file. The method calculateStatistic() is expected to return a (possibly floating point) value as the value of the statistical measure calculated using the frequency values passed to it.

There exist three methods in the modules Text::NSP::Measures::2D and Text::NSP::Measures::3D to help calculate the ngram statistic.
These methods return the observed and expected values of the cells in the contingency table. A 2D contingency table looks like:

              | word2 | not-word2 |
   word1      |  n11  |   n12     | n1p
   not-word1  |  n21  |   n22     | n2p
              -----------------------
                 np1      np2       npp

Here the marginal totals are n1p, np1, n2p, np2; the observed values are n11, n12, n21, n22; and the expected values for the corresponding observed values are represented using m11, m12, m21, m22 (m11 represents the expected value for the cell (1,1), m12 for the cell (1,2), and so on). Before calling either computeObservedValues() or computeExpectedValues() you MUST call computeMarginalTotals(), since these methods require the marginals to be set.

The computeMarginalTotals() method computes the marginal totals in the contingency table based on the observed frequencies. It returns an undefined value in case of some error; in case of success it returns '1'. An example of usage for the computeMarginalTotals() method is:

  my %values = @_;

  if( !(Text::NSP::Measures::2D::computeMarginalTotals(\%values)) ) {
    return;
  }

Here @_ holds the parameters passed to calculateStatistic(). After this call the marginal totals will be available in the variables $n1p, $np1, $n2p, $np2 and $npp.

computeObservedValues() computes the observed values of an ngram. It can be called using the following code snippet. Please remember that you should call computeMarginalTotals() before calling computeObservedValues():

  if( !(Text::NSP::Measures::2D::computeObservedValues(\%values)) ) {
    return;
  }

Here %values is the same hash that was initialized earlier for computeMarginalTotals(). If successful it returns 1, otherwise an undefined value is returned. The computed observed values will be available in the variables $n11, $n12, $n21 and $n22.

Similarly, computeExpectedValues() computes the expected values for each of the cells in the contingency table. You should call computeMarginalTotals() before calling computeExpectedValues(). The following code snippet demonstrates its usage:
  if( !(Text::NSP::Measures::2D::computeExpectedValues()) ) {
    return;
  }

If successful it returns 1, otherwise an undefined value is returned. The computed expected values will be available in the variables $m11, $m12, $m21 and $m22.

4. The last lines of a module should always return true. To achieve this, make sure that the last two lines of the module are:

  1;
  __END__

Please note that you can put in documentation after these lines.

5. There are four other methods that are not mandatory, but may be implemented:

  i)   initializeStatistic()
  ii)  getErrorCode()
  iii) getErrorMessage()
  iv)  getStatisticName()

statistic.pl calls initializeStatistic() before calling any other method. If there is no need for any measure-specific initialization, you need not define this method, and the initialization will be handled by the Text::NSP::Measures module's initializeStatistic() method.

The getErrorCode() method is called immediately after every call to the method calculateStatistic(). This method is used to return the error code, if any, from the previous operations. To view all the possible error codes and the corresponding error messages, please refer to the Text::NSP documentation (perldoc Text::NSP). You can create new error codes in your measure if the existing error codes are not sufficient.

The Text::NSP::Measures module implements both the getErrorCode() and getErrorMessage() methods, and these implementations will be invoked if the user does not define these methods. But if you want to add some other actions that need to be performed in case of an error, you must override these methods by implementing them in your module. You can invoke the Text::NSP::Measures getErrorCode() method from your measure's getErrorCode() method.
An example of this is below:

  sub getErrorCode
  {
    my $code = Text::NSP::Measures::getErrorCode();

    # your code here

    return $code;   # (or any other value)
  }

  sub getErrorMessage
  {
    my $message = Text::NSP::Measures::getErrorMessage();

    # your code here

    return $message;   # (or any other value)
  }

The fourth method that may be implemented is getStatisticName(). If this method is implemented, it is expected to return a string containing the name of the statistic being implemented. This string is used in the formatted output of statistic.pl. If this method is not implemented, then the statistic name entered on the command line is used in the formatted output.

Note that all the methods described in this section are optional. So, if the user elects to not implement these methods, no harm will be done. The user may implement other methods too, but since statistic.pl is not expecting anything besides the five methods above, doing so would have no effect on statistic.pl.

6. You will need to install your module before you can use it. Change to the base directory for the module (Text-NSP-Measures-2D-NewMeasure or Text-NSP-Measures-3D-NewMeasure, as created above), then issue the following commands:

  perl Makefile.PL
  make
  make test
  make install

or, to install under a different directory:

  perl Makefile.PL PREFIX=<destination directory>
  make
  make test
  make install

If you get any errors in the installation process, please make sure that you have not made any syntactical error in your code, and also make sure that you have already installed the Text-NSP package.

To tie it all together, here is an example of a measure that computes the sum of ngram frequency counts:

  package Text::NSP::Measures::2D::sum;

  use Text::NSP::Measures::2D;
  use strict;
  use Carp;
  use warnings;
  no warnings 'redefine';

  require Exporter;

  our ($VERSION, @EXPORT, @ISA);

  @ISA = qw(Exporter);
  @EXPORT = qw(initializeStatistic calculateStatistic
               getErrorCode getErrorMessage getStatisticName);

  $VERSION = '0.01';

  sub calculateStatistic
  {
    my %values = @_;

    # Computes the marginal totals from the frequency combination
    # values. Returns undef if there is an error in the computation
    # or the values are inconsistent.
    if( !(Text::NSP::Measures::2D::computeMarginalTotals(\%values)) ) {
      return;
    }

    # Computes the observed values from the frequency combination
    # values. Returns undef if there is an error in the computation
    # or the values are inconsistent.
    if( !(Text::NSP::Measures::2D::computeObservedValues(\%values)) ) {
      return;
    }

    # Now for the actual calculation of the association measure.
    my $NewMeasure = 0;

    $NewMeasure += $n11;
    $NewMeasure += $n12;
    $NewMeasure += $n21;
    $NewMeasure += $n22;

    return ( $NewMeasure );
  }

  sub getStatisticName
  {
    return "Sum";
  }

  1;
  __END__

If the new measure does not work, check the following:

1. The Text-NSP package is not installed - Make sure that the Text-NSP package is installed and that you have inherited the correct module (Text::NSP::Measures::2D or Text::NSP::Measures::3D).

2. The five methods (1 mandatory, 4 non-mandatory) must have their names match EXACTLY with those shown above. Again, names are all case sensitive.

3. This statement is present at the end of the module:

  1;

initializeStatistic() - Provides an empty method which is called in case the measures do not override this method. If you need some measure-specific initialization, override this method in the implementation of your measure.

  INPUT PARAMS  : none
  RETURN VALUES : none

calculateStatistic() - Provides an empty framework. Your measure should override this method.

  INPUT PARAMS  : none
  RETURN VALUES : none

getErrorCode() - Returns the error code in the last operation, if any.

  INPUT PARAMS  : none
  RETURN VALUES : errorCode .. the current error code.

getErrorMessage() - Returns the error message in the last operation, if any, and resets the string to ''.

  INPUT PARAMS  : none
  RETURN VALUES : errorMessage .. the current error message.

getStatisticName() - Provides an empty method which is called in case the measures do not override this method.
INPUT PARAMS : none RETURN VALUES : none Ted Pedersen, University of Minnesota Duluth <tpederse@d.umn.edu> Satanjeev Banerjee, Carnegie Mellon University <satanjeev@cmu.edu> Amruta Purandare, University of Pittsburgh <amruta@cs.pitt.edu> Bridget Thomson-McInnes, University of Minnesota Twin Cities <bthompson@d.umn.edu> Saiyam Kohli, University of Minnesota Duluth <kohli003@d.umn.edu> Last updated: $Id: Measures.pm,v 1.15 2006/03/25 04:21:22 saiyam_kohli Exp $ Copyright (C) 2000-2006, Ted Pedersen, Satanjeev Banerjee, Amruta Purandare, Bridget Thomson-McInnes and Saiyam Kohli This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to The Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. Note: a copy of the GNU General Public License is available on the web at http://www.gnu.org/licenses/gpl.txt and is included in this distribution as GPL.txt.
pointer 2d array

im having trouble compiling this,

    #include <stdio.h>
    #include <math.h>

    double matnorm1(double **A, int m, int n)
    {
        double norm = 0.0;
        double temp;
        int x;
        int y;
        for (x = 0; x < n; x++)
            for (y = 0; y < m; y++)
                if (temp > norm)
                    norm = temp;
    }

    double matnorminf(double **A, int m, int n)
    {
        double norm = 0.0;
        double temp;
        int x;
        int y;
        for (x = 0; x < m; x++)
            for (y = 0; y < n; y++)
                if (temp > norm)
                    norm = temp;
    }

    int main()
    {
        double A[2][2] = {{1.0, 2.0}, {3.0, 4.0}};
        printf("The matnorm1 is %g\n", matnorm1(A, 2, 2));
        printf("The matnorminf is %g\n", matnorminf(A, 2, 2));
    }

when i do i get the following error message-

hw2-3.c:44: error: cannot convert 'double (*)[2]' to 'double**' for argument '1' to 'double matnorm1(double**, int, int)'
hw2-3.c:45: error: cannot convert 'double (*)[2]' to 'double**' for argument '1' to 'double matnorminf(double**, int, int)'

and i have no idea how to correct this. any help would be appreciated.
Magnitude and direction of net force

Asked Oct 12, 2009, 04:15 PM — 1 Answer

Two cars hit a stop sign at the same time. The first car is heading south with a force of 40N and the second car is heading west with a force of 25N. Determine the magnitude and direction of the net force on the stop sign.

Check out some similar questions!

Finding the Magnitude and Direction of Force One with the Resultant Given [ 14 Answers ]
Determine the magnitude and direction theta of F1 so that the resultant force is directed vertically upward and has a magnitude of 800N. I have difficulty knowing where to start the problem. Do I sum up the forces of the x axis with unknowns in it and forces of the y axis with the unknown...

Magnitude and direction of resultant force [ 1 Answers ]
I don't know how to determine what is an x and y component. I am trying to solve the following problem. Three people are pulling on a tree. The first person pulls with 15N at 65 degrees; the second with 16N at 135 degrees; the third with 11N at 195 degrees. What is the magnitude and direction...

Velocity (magnitude and direction) [ 1 Answers ]
Hi.. Can u help me to solve this problem of mine.. Can you show me the solution of this problem.. 'A canoe has a velocity of 0.40m/s southeast relative to the earth. The canoe is on a river that is flowing 0.50m/s east relative to the earth. Find the velocity (magnitude and direction) of...

Magnitude and Direction [ 1 Answers ]
How can I find the magnitude and direction of the velocity of the bodies A and B if, before, A is in motion and moving toward B? A: m = 0.5 kg, v = 9.0 m/s; B: m = 2.5 kg

Magnitude and direction of the resultant [ 1 Answers ]
I am in a high school physics course and NEED some help with this problem.
Four forces act on a hot-air balloon: Force upward = 5120N, Force downward = 4050N, Force westward = 1520N, Force eastward = 950N. Find the magnitude and direction of the resultant force...
Media, PA Algebra 2 Tutor

Find a Media, PA Algebra 2 Tutor

...I have helped many high school and college students with organization, listening, reading, note-taking, outlining, summarization and review. Some students who are having trouble do not have a clear plan for approaching each of their courses. I can help them establish a plan.
32 Subjects: including algebra 2, English, geometry, chemistry

...These students were discouraged by the amount of time it took them to complete assignments. Once we identified that the problem was not in their understanding of the new material, but with their more basic skills, they were able to improve their performance. I can also help students improve their strategies on standardized tests.
18 Subjects: including algebra 2, calculus, statistics, geometry

...I also love teaching students of all ages. I have been tutoring students from elementary age to adult on a daily basis for more than 15 years. Individualized support for a student is the most effective and efficient way to gain confidence and mastery in any subject.
23 Subjects: including algebra 2, reading, writing, geometry

I am a certified elementary teacher who has also been teaching Chinese for more than five years. I am currently teaching in a public high school and I also teach heritage language programs on weekends. My students come from different backgrounds, abilities and ages, and I can tailor the lessons to meet your language learning goals.
7 Subjects: including algebra 2, geometry, Chinese, ESL/ESOL

...The exception is when a student has achieved well in high school mathematics, has taken pre-calculus, and does not score well on standardized tests. When this is the case, the student will likely do better on the ACT, probably because the questions are more straightforward. The ACT math section covers trigonometry and elements of pre-calculus while the SAT goes only through algebra 2.
23 Subjects: including algebra 2, English, calculus, geometry
the definition of Heine-Borel theorem

noun Mathematics.

the theorem that in a metric space every covering consisting of open sets that covers a closed and compact set has a finite subcollection that also covers the given set. Also called Borel-Lebesgue theorem.

named after Eduard Heine (1821–81), German mathematician, and Émile Borel (1871–1956), French mathematician
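Stated symbolically (a sketch of the covering property the entry describes, for a compact set K in a metric space X):

```latex
% Heine-Borel / Borel-Lebesgue covering property:
% every open cover of a compact set K admits a finite subcover.
K \subseteq \bigcup_{i \in I} U_i
\quad (U_i \text{ open in } X)
\;\Longrightarrow\;
\exists\, i_1, \ldots, i_m \in I \ \text{ with } \
K \subseteq U_{i_1} \cup \cdots \cup U_{i_m}
```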
Plotting numbers in the complex plane?

March 30th 2011, 04:02 AM, #1, Junior Member

(I'm unsure as to whether this question can be regarded as calculus; however, I am currently studying Differential Calculus at university as a first year subject so I just made the assumption there - this question came from my course's textbook.)

"Find all solutions of z^4 = 8 + 8√3i, and plot them in the complex plane."

First of all, I'm unsure with how you would be able to find the solutions for the equation. I understand that there are 4 roots to the equation, one for linear equations in z, two for quadratics in z^2, and so on. I know that you need to express z^4 = 8 + 8√3i in the form r(cos t + i sin t) and then use de Moivre's theorem from there. So, from doing that, how would you be able to get the points that need to be plotted? Although, I'm still confused as to how you would work out the entire question. This question was given as an example in my textbook; however, I wasn't able to follow it quite clearly. What I am most particularly confused about is how you would plot the points. Apparently you can find one solution given that the other solutions of z^n = k form a regular polygon around the origin (which confuses me even more...) So, is anyone able to provide a helpful step-by-step procedure with how to get the answer? Any help would be greatly appreciated. Thank you.

$8+8\sqrt{3}i=16[\cos(\pi/3)+i\sin(\pi/3)]$. Now, apply:

$\sqrt[4]{8+8\sqrt{3}i}=2\left[\cos\left(\dfrac{\pi}{12}+\dfrac{2k\pi}{4}\right)+i\sin\left(\dfrac{\pi}{12}+\dfrac{2k\pi}{4}\right)\right],\quad (k=0,1,2,3)$

First, $8+8\sqrt{3}i=16\,\text{cis}\left(\frac{\pi}{3}\right)$.

One fourth root of that is clearly $2\,\text{cis}\left(\frac{\pi}{12}\right)$. Now the other three roots are equally distributed about the circle at angles of $\dfrac{2\pi}{4}$ apart. So another root is $2\,\text{cis}\left(\frac{7\pi}{12}\right)$.
Can you find the other two?

Last edited by Plato; March 30th 2011 at 05:56 AM.

I was able to figure out that when k = 0, the root is 2(cis(pi/12)). However, my answers for the other values of k were:

k = 1: 2(cis(7pi/12))
k = 2: 2(cis(13pi/12))
k = 3: 2(cis(19pi/12))

So, none of my other roots matched up as being 2(cis(5pi/12)). So, I'm pretty sure I'm doing something wrong there. What I did was basically sub in the values of k from FernandoRevilla's answer:

2(cos(pi/12 + 2kpi/4) + i sin(pi/12 + 2kpi/4))

Another similar example from my textbook did the same thing and its answers were correct. So, I'm unsure with what I did wrong. Or, could it be that k = 1 gives 2(cis(5pi/12))? Although then again, I don't know how that would work out with the other values of k.

That typo has been corrected.

Apparently the values plotted on a complex plane lie at the corners of a square. However, I really don't have a clue with how I can possibly plot these values on the complex plane. Would I have to change it back into cartesian form to be able to plot all the values on the complex plane then?

As I said above, the numbers are on a circle. In the first graphic the four fourth roots are plotted. In the second graphic the six sixth roots are plotted.

Thank you! That solves my question.
Frederic W. Shultz
Professor of Mathematics
Email: fshultz [at] wellesley [dot] edu
Department of Mathematics, Wellesley College
Address: 106 Central Street, Wellesley, MA 02481
Office: Science Center 374B
Phone: (781)-283-3118
Fax: (781)-283-3642

B.S., California Institute of Technology
Ph.D., University of Wisconsin

Research Interests

My current research involves state spaces of operator algebras and related topics in quantum information theory.

Publications and Preprints

• Axioms for quantum mechanics: a generalized probability theory. Ph.D. dissertation, University of Wisconsin, Madison, 1972.
• A characterization of state spaces of orthomodular lattices, Journal of Combinatorial Theory 17 (1974) 317-328.
• (with E. Alfsen): On the geometry of non-commutative spectral theory, Bulletin American Mathematical Society 81 (1975) 893-895.
• (with E. Alfsen): Non-commutative spectral theory for affine function spaces on convex sets, Memoirs Amer. Math. Soc. 172, Providence, 1976, 120 pp.
• Events and observables in axiomatic quantum mechanics, International J. of Theoret. Physics 16 (1977) 259-272.
• (with E. Alfsen and E. Stormer): A Gelfand-Neumark theorem for Jordan algebras, Advances in Math 28 (1978) 11-56.
• (with E. Alfsen): State spaces of Jordan algebras, Acta Math. 140 (1978) 155-190.
• On normed Jordan algebras which are Banach dual spaces, J. Functional Analysis 31 (1979) 360-376.
• (with E. Alfsen and H. Hanche-Olsen) State spaces of C*-algebras, Acta Mathematica 144 (1980) 267-305.
• Dual maps of Jordan homomorphisms and *-homomorphisms between C*-algebras, Pacific J. Math. 93 (1981) 435-441.
• Pure states as a dual object for C*-algebras, Commun. Math. Phys. 82 (1982) 497-509.
• Pure states as a dual object for C*-algebras, Proc. Symp. Pure Math 38 (1982) 413-417 (summary of preceding paper).
• (with B. Iochum) Normal state spaces of Jordan and von Neumann algebras, J. Functional Analysis 50 (1983) 317-328.
• (with C. Akemann) Perfect C*-algebras, Memoirs Amer. Math. Soc. 326, 1985, 117 pp.
• (with R. J. Archbold) Characterization of C*-algebras with continuous trace by properties of their pure states, Pacific J. Math. 136 (1989) 1-13.
• (with A. Shuchat) The Joy of Mathematica, Addison-Wesley, 1994; Japanese translation 1995; 2nd edition: Harcourt-Brace/Academic Press, 2000.
• (with E. Alfsen) Orientation in operator algebras, Proc. Natl. Acad. Sci. USA 95 (1998) 6596-6601.
• (with E. Alfsen) On Orientation and Dynamics in Operator Algebras, Part I, Commun. Math. Phys. 194 (1998) 87-108.
• (with E. Alfsen) State Spaces of Operator Algebras: Basic Theory, Orientations, and C*-products, Birkhauser Boston, 2001.
• (with E. Alfsen) Geometry of State Spaces of Operator Algebras, Birkhauser Boston, 2003.
• Dimension groups for interval maps, New York Journal of Mathematics 11 (2005) 1-41. (pdf)(NYJM)

The articles below may be downloaded for personal use only. Any other use requires prior permission of the author and the publisher.

• (with V. Deaconu) C*-algebras associated with interval maps, Trans. Amer. Math. Soc. 359 (2007) 1889-1924. (pdf)
• Dimension groups for interval maps: the transitive case. Ergodic Theory and Dynamical Systems 27 (2007) 1287-1321. (pdf)
• (with E. Alfsen) Unique decompositions, faces, and automorphisms of separable states, J. Math. Phys. 51 (2010) 52201 (pdf) (Copyright (2010) American Institute of Physics. The published article may be found at http://link.aip.org/link/doi/10.1063/1.3399808)
• (with E. Alfsen) Finding decompositions of a class of separable states, Linear Algebra and its Applications 437 (2012) 2613-2629 (pdf) (Copyright (2012) Elsevier Inc. The published article may be found at http://dx.doi.org/10.1016/j.laa.2012.06.018)
• (with V. Paulsen) Complete positivity of the map from a basis to its dual basis, J. Math. Phys. 54 (2013) 072201 (pdf) (Copyright (2013) American Institute of Physics. The published article may be found at http://dx.doi.org/10.1063/1.4812329)
• (with J. Chen, H. Dawkins, Z. Ji, N. Johnston, D. Kribs, B. Zeng) Uniqueness of quantum states compatible with given measurement results. Phys. Rev. A 88 (2013) 012109. (pdf) (Copyright (2013) American Physical Society. Published article may be found at http://dx.doi.org/10.1103/PhysRevA.88.012109)
• Arveson's work on entanglement, Complex Anal. Oper. Theory (initial electronic version Nov 2013; print version expected fall 2014) (pdf) (Copyright (2013) Birkhauser. The "electronic first" published article may be found at http://dx.doi.org/10.1007/s11785-013-0342-2)
A remarkable change in the turbulent dynamo action occurs if the turbulence is helical. This can be clearly seen, for example, in the simulations by Brandenburg [19], where a large scale field, on the scale of the box, develops when a helical forcing is employed, even though the forcing itself is on a scale about 1/5th the size of the box. The large scale field, however, in these closed box simulations develops only on the long resistive timescales. It is important to understand how such a field develops and how one can generate a large scale field on a faster timescale. The possible importance of helical turbulence for large-scale field generation was proposed by Parker [20], and is in fact discussed in text books [21]. We summarize briefly below the theory of the mean-field dynamo (MFD) as applied to magnetic field generation in disk galaxies, and then turn to several potential problems that have been recently highlighted and their possible resolution.

Suppose the velocity field is split as U = Ū + u, the sum of a mean, large-scale velocity Ū and a turbulent, fluctuating velocity u. The induction equation becomes a stochastic partial differential equation. Let us split the magnetic field also as B = B̄ + b, into a mean field B̄ = ⟨B⟩ and a fluctuating component b. Here the average ⟨·⟩ is defined either as a spatial average over scales larger than the turbulent eddy scales (but smaller than the system size) or as an ensemble average. Taking the average of the induction equation 2.1, one gets the mean-field dynamo equation for B̄,

∂B̄/∂t = ∇ × (Ū × B̄ + Ē − η ∇ × B̄).

This averaged equation now has a new term, the mean electromotive force (emf) Ē = ⟨u × b⟩, and the problem is to express this mean emf in terms of the mean field itself [21]. For isotropic, homogeneous, helical 'turbulence', in the approximation that the correlation time τ is small (uτ/l << 1, where u is the typical turbulent velocity and l the eddy scale), one employs what is known as the first order smoothing approximation (FOSA) to write

Ē = α B̄ − η_t ∇ × B̄.

Here α = α_K = −(τ/3) ⟨u · (∇ × u)⟩ and η_t = (τ/3) ⟨u²⟩ [21].
In the context of disk galaxies, the mean velocity Ū is that of differential rotation, and the α effect, acting together with this shear, can amplify the mean field: the α effect twists the toroidal field to generate poloidal field, while the shear regenerates toroidal field from poloidal field. Exponential growth of the mean field obtains provided the dynamo number D = |α_0 G h³ η_t⁻²| > D_crit ~ 6 [11, 22], a condition which can be satisfied in disk galaxies. (Here h is the disk scale height, G the galactic shear, and α_0 a typical value of α, taken to be positive.) The mean field grows typically on time-scales a few times the rotation time scales, of order 3-10 × 10^8 yr.

This picture of the galactic dynamo faces several potential problems. Firstly, while the mean field dynamo operates to generate the large-scale field, the fluctuation dynamo is producing small-scale fields at a much faster rate. Also the correlation time of the turbulence, measured by uτ/l, is not likely to be small in turbulent flows, so the validity of FOSA is questionable. Indeed, based on specific imposed (kinematic) flow patterns it has been suggested that there is no simple relation of the FOSA form between the mean emf and the mean field [23]. In order to clarify the existence of R_m dependence, even in the kinematic limit, we have recently measured the turbulent transport coefficients directly in simulations of isotropic, homogeneous, helical turbulence [24]. These simulations reach up to a modest R_m ~ 220. We find, somewhat surprisingly, that for isotropic homogeneous turbulence the high conductivity results obtained under FOSA are reasonably accurate up to the moderate values of R_m that we have tested. A possible reason for this might be that the predictions of FOSA are very similar to a closure called the minimal tau approximation (MTA) [27, 2], where the approximation of neglecting nonlinear terms made in FOSA is not done, but replaced by a closure hypothesis. But MTA is also not well justified, although numerical simulations lend it some support for moderate values of R_m [28]. Interestingly, this agreement of α and η_t directly measured from the simulation, with that expected under FOSA, is obtained even in the presence of a small-scale dynamo, where b is growing exponentially. This suggests that the exponentially growing part of the small-scale field does not make a contribution to the mean emf. These results still need to be extended to larger R_m, but the preliminary results are quite encouraging.
Another potential problem with the mean field dynamo paradigm is that magnetic helicity conservation puts severe restrictions on the strength of the α effect [2]. The magnetic helicity associated with a field B = ∇ × A is defined as H_t = ∫ A · B dV, where A is the vector potential [21]. Note that this definition of helicity is only gauge invariant (and hence meaningful) if the domain of integration is periodic, infinite, or has a boundary where the normal component of the field vanishes. H_t measures the linkages and twists in the magnetic field. From the induction equation one can easily derive the helicity conservation equation,

dH_t/dt = −2η ∫ B · (∇ × B) dV.

So in ideal MHD, with η = 0, H_t is strictly conserved. However, unlike the magnetic energy, whose Joule dissipation rate (dE_B/dt)_Joule = −(4πη/c²) ∫ J² dV can remain finite even in the limit η → 0 (because the current density J can grow as η^(−1/2) as the field develops ever finer structure), the helicity dissipation rate involves J · B and not J². So in many astrophysical conditions where R_m is large (η small), H_t is almost independent of time, even when the magnetic energy is dissipated at finite rates.

The operation of any mean-field dynamo automatically leads to the growth of linkages between the toroidal and poloidal mean fields and hence a mean field helicity. In order to satisfy total helicity conservation this implies that there must be equal and oppositely signed helicity being generated in the fluctuating field. What leads to this helicity transfer between scales? To understand this, we need to split the helicity conservation equation into evolution equations of the sub-helicities associated with the mean field, say H̄_t = ∫ Ā · B̄ dV, and the fluctuating field, h_t = ∫ ⟨a · b⟩ dV. The evolution equations for H̄_t and h_t are [2]

dH̄_t/dt = 2 ∫ Ē · B̄ dV − 2η ∫ B̄ · (∇ × B̄) dV,
dh_t/dt = −2 ∫ Ē · B̄ dV − 2η ∫ ⟨b · (∇ × b)⟩ dV.

Here, we have assumed that the surface terms can be taken to vanish (we will return to this issue below). We see that the turbulent emf transfers helicity between the large and small scales while conserving the total H_t = H̄_t + h_t. Note that in the limit when the fluctuating field has reached a stationary state, so that dh_t/dt = 0, one has ∫ Ē · B̄ dV = −η ∫ ⟨b · (∇ × b)⟩ dV, which tends to zero as R_m → ∞, a feature also borne out in periodic box simulations [19, 25].
To make the above integral constraint into a local constraint requires one to be able to define a gauge invariant helicity density for at least the random small-scale field. Such a definition has indeed been given, using the Gauss linking formula for helicity [26]. In physical terms, the magnetic helicity density h of a random small scale field (in contrast to the total helicity h_t) is the density of correlated links of the field. This notion can be made precise, and a local conservation law can be derived (see [26] for details) for the helicity density h, which is approximately ⟨a · b⟩.

As the small-scale helicity builds up, the mean-field dynamo can be catastrophically quenched unless the helicity is removed from the dynamo region [29, 30, 31]. This conclusion can be understood more physically as follows: as the large-scale mean field grows, the turbulent emf transfers an equal and opposite amount of helicity to the small-scale field, whose associated current helicity increasingly opposes the kinetic α effect. Blackman and Field (in [29]) first suggested that the losses of the small-scale magnetic helicity through the boundaries of the dynamo region can be essential for mean-field dynamo action. Such a helicity flux can result from the anisotropy of the turbulence combined with large-scale velocity shear, or from the non-uniformity of the α effect [29]. Another type of helicity flux is simply advection of the small scale field and its associated helicity out of the system, with a flux density h Ū [30]. This effect naturally arises in spiral galaxies, where some of the gas is heated by supernova explosions, producing a hot phase that leaves the galactic disc, dragging along the small-scale part of the interstellar magnetic field.

In order to examine the effect of helicity fluxes in more detail, one also needs to fold in a model of how the dynamo coefficients get altered due to Lorentz forces. Closure models, using either the EDQNM closure, the MTA, or quasi-linear theory, suggest that the turbulent emf gets renormalized, with α = α_K + α_m, where α_m = (τ/3) ⟨(∇ × b) · b⟩ / (4πρ) is proportional to the current helicity of the small-scale field [2, 27, 32]. The turbulent diffusion η_t is left unchanged to the lowest order, although to the next order there arises a non-linear hyperdiffusive correction [16].
Some authors have argued against an α_m contribution, suggesting instead that the α effect is suppressed in some other fashion [33]. To clarify this issue, we have studied the nonlinear α effect in the limit R_m, Re << 1, using FOSA applied to both the induction and momentum equations [34]. We show explicitly that in this limit one can indeed express α = α_K + α_m, as above. Adopting α = α_K + α_m, one can now look for a combined solution to the helicity conservation equation (5.3) and the mean-field dynamo equation [2, 35], after relating the current helicity arising in α_m to the magnetic helicity density h. The effect of the advective flux in resolving the quenching of the dynamo was worked out in detail in Ref. [30]. In the absence of an advective flux, the initial growth of the magnetic field is catastrophically quenched and the large-scale magnetic field decreases at about the same rate as it grew. The initial growth occurs while the current helicity builds up to cancel the kinetic α effect.

We have emphasized so far the role of helicity for the dynamo generation of large-scale fields. Intriguingly, several recent simulations show that the generation of large-scale fields may arise even in non-helical turbulence in the presence of strong enough shear [36]. It is at present an important open question as to the exact cause of such large-scale field generation.
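For reference, the kind of combined system described above is often written schematically as below. This is a sketch of one standard "dynamical quenching" form; the precise coefficients depend on the closure and on the helicity-flux model adopted, so it should not be read as the exact system solved in [2, 35]. Here k_f is the forcing wavenumber, B_eq the equipartition field strength, and F_h a possible small-scale helicity flux (e.g. the advective flux h Ū mentioned above):

```latex
% Mean-field dynamo with dynamical alpha-quenching (schematic form):
% alpha = alpha_K + alpha_m, with alpha_m tied to the small-scale
% current helicity; F_h denotes a small-scale helicity flux.
\frac{\partial \overline{\mathbf{B}}}{\partial t}
  = \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
    + \alpha \overline{\mathbf{B}}
    - (\eta + \eta_t)\, \nabla \times \overline{\mathbf{B}} \right),
\qquad \alpha = \alpha_K + \alpha_m,
\\[4pt]
\frac{\partial \alpha_m}{\partial t}
  = -2 \eta_t k_f^2 \left(
      \frac{ \overline{\boldsymbol{\mathcal{E}}} \cdot \overline{\mathbf{B}} }
           { B_{\mathrm{eq}}^2 }
      + \frac{\alpha_m}{R_m} \right)
    - \nabla \cdot \mathbf{F}_h
```

In a closed box (F_h = 0), α_m grows to cancel α_K on the resistive timescale, reproducing the catastrophic quenching described above; a finite helicity flux allows the mean field to saturate at dynamically significant strengths.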
Parallel Computational Fluid Dynamics '97 Recent Developments and Advances Using Parallel Computers Edited by • D. Emerson, Daresbury Laboratory, Daresbury, Warrington, WA4 4AD, Cheshire, UK • A. Ecer, Deparment of Mechanical Engineering, Indiana University Purdue, University Indianapolis, IN, USA • P. Fox, Purdue School of Engineering and Technology, 799 West Michigan Street, Indianapolis, IN 46202-5160, USA • J. Periaux, Dassault-Aviation, 78 Quai Marcel Dassault, 92214 Saint Cloud, France • N. Satofuka, Kyoto Institute of Technology, Matsugasaki, Sakyo-ku, Kyoto 606, Japan Computational Fluid Dynamics (CFD) is a discipline that has always been in the vanguard of the exploitation of emerging and developing technologies. Advances in both algorithms and computers have rapidly been absorbed by the CFD community in its quest for more accurate simulations and reductions in the time to solution. Within this context, parallel computing has played an increasingly important role. Moreover, the uptake of parallel computing has brought the CFD community into ever-closer contact with hardware vendors and computer scientists. The multidisciplinary subject of parallel CFD and its rapidly evolving nature, in terms of hardware and software, requires a regular international meeting of this nature to keep abreast of the most recent developments. Parallel CFD '97 is part of an annual conference series dedicated to the discussion of recent developments and applications of parallel computing in the field of CFD and related disciplines. This was the 9th in the series, and since the inaugural conference in 1989, many new developments and technologies have emerged. The intervening years have also proved to be extremely volatile for many hardware vendors and a number of companies appeared and then disappeared. However, the belief that parallel computing is the only way forward has remained undiminished. 
Moreover, the increasing reliability and acceptance of parallel computers has seen many commercial companies now offering parallel versions of their codes, many developed within the EC-funded EUROPORT activity, but generally for more modest numbers of processors. It is clear that industry has not moved to large-scale parallel systems, but it has shown a keen interest in more modest parallel systems, recognising that parallel computing will play an important role in the future. This book forms the proceedings of the Parallel CFD '97 conference, which was organised by the Computational Engineering Group at Daresbury Laboratory and held in Manchester, England, on May 19-21, 1997. The sessions involved papers on many diverse subjects including turbulence, reactive flows, adaptive schemes, unsteady flows, unstructured mesh applications, industrial applications, developments in software tools and environments, climate modelling, parallel algorithms, evaluation of computer architectures, and a special session devoted to parallel CFD at the AEREA research centres. This year's conference, like its predecessors, saw a continued improvement in both the quantity and quality of contributed papers. Since the conference series began, many significant milestones have been achieved. For example, in 1994 Massively Parallel Processing (MPP) became a reality with the advent of the Cray T3D. This, of course, has brought with it the new challenge of scalability for both algorithms and architectures. In the 12 months since the 1996 conference, two more major milestones were achieved: microprocessors with a peak performance of a Gflop/s became available and the world's first Tflop/s calculation was performed. In the 1991 proceedings, the editors indicated that a Tflop/s computer was likely to be available in the latter half of this decade. On December 4th 1996, Intel achieved this breakthrough on the Linpack benchmark using 7,264 (200MHz) Pentium Pro microprocessors as part of the ASCI Red project.
With the developments in MPP, the rapid rise of SMP architectures and advances in PC technology, the future for parallel CFD looks both promising and challenging.

Published: April 1998
Imprint: North-Holland
ISBN: 978-0-444-82849-1

Invited Papers.

Adaptive Schemes. A generic strategy for dynamic load balancing of distributed memory parallel computational mechanics using unstructured meshes (A. Arulananthan et al.). Communication cost function for parallel CFD using variable time stepping algorithms (Y.P. Chien et al.). Dynamic load balancing for adaptive mesh coarsening in computational fluid dynamics (T. Gutzmer). A parallel unstructured mesh adaptation for unsteady compressible flow simulations (T. Kinoshita, O. Inoue). A fully concurrent DSMC implementation with adaptive domain decomposition (C.D. Robinson, J.K. Harvey). Parallel dynamic load-balancing for the solution of transient CFD problems using adaptive tetrahedral meshes (N. Touheed et al.). Parallel dynamic load-balancing for adaptive unstructured meshes (C. Walshaw et al.).

Combustion and Reactive Flows. Convergence and computing time acceleration for the numerical simulation of turbulent combustion processes by means of a parallel multigrid algorithm (A. Bundschuh et al.). Coupling of a combustion code with an incompressible Navier-Stokes code on MIMD architecture (G. Edjlali et al.). Parallel simulation of forest fire spread due to firebrand transport (J.M. McDonough et al.).

Association of European Research Establishments in Aeronautics Special Session. Comparisons of the MPI and PVM performances by using structured and unstructured CFD codes (E. Bucchignani et al.). Three-dimensional simulation on a parallel computer of supersonic coflowing jets (O. Louedin, J. Ryan). Navier-Stokes algorithm development within the FAME mesh environment (S.H. Onslow et al.). Partitioning and parallel development of an unstructured, adaptive flow solver on the NEC SX-4 (H. van der Ven, J.J.W. van der Vegt).
Distributed Computing. Parallel workstation clusters and MPI for sparse systems in computational science (A. Berner, G.F. Carey). Integration of an implicit multiblock code into a workstation cluster environment (F. Cantariti et al.). Parallel solution of Maxwell's equations on a cluster of workstations in a PVM environment (U. Glucat et al.). Application of the networked computers for numerical investigation of 3D turbulent boundary layer over complex bodies (S.V. Peigin, S.V. Timchenko).

Unsteady Flows. Simulation of acoustic wave propagation within unsteady viscous compressible gas flows on parallel distributed memory computer systems (A.V. Alexandrov et al.). Parallel solution of hovering rotor flow (C.B. Allen, D.P. Jones). High accuracy simulation of viscous unsteady gasdynamic flows (A.N. Antonov et al.). RAMSYS: A parallel code for the aerodynamic analysis of 3D potential flows around rotorcraft configurations (A. D'Alascio et al.). Multistage simulations for turbomachinery design on parallel architectures (G. Fritsch, G. Möhres).

Applications on Unstructured Meshes. Massively parallel implementation of an explicit CFD algorithm on unstructured grids, II (B.L. Bihari et al.). Towards the parallelisation of pressure correction method on unstructured grids (Y.C. Chuang et al.). Parallel implementation of a discontinuous finite element method for the solution of the Navier-Stokes equations (A. Codenotti et al.). Hybrid cell finite volume Euler solutions of flow around a main-jib sail using an IBM SP2 (N.C. Rycroft et al.). Development of a parallel unstructured spectral/hp method for unsteady fluid dynamics (S.J. Sherwin et al.). Parallel building blocks for finite element simulations: application to solid-liquid mixture flows (D. Vanderstraeten, M. Knepley). Parallel CFD computation on unstructured grids (Y.F. Yao, B.E. Richards).

Parallel Algorithms. A domain decomposition based parallel solver for viscous incompressible flows (H.U. Akay et al.).
Parallelisation of the discrete transfer radiation model (N.W. Bressloff). Study of flow bifurcation phenomena using a parallel characteristics based method (D. Drikakis, A. Spentzos). Efficient parallel computing using digital filtering algorithms (A. Ecer et al.). Parallel implicit PDE computations: algorithms and software (W.D. Gropp et al.). Parallel controlled random search algorithms for shape optimization (Y.F. Hu et al.). Performance of ICCG solver in vector and parallel machine architecture (K. Minami et al.). Parallel iterative solvers with localized ILU preconditioning (K. Nakajima et al.). Last achievements and some trends in CFD (Y.D. Shevelev). The effective parallel algorithm for solution of parabolic partial differential equations system (S.V. Timchenko). Multioperator high-order compact upwind methods for CFD parallel calculations (A.I. Tolstykh).

Evaluation of Architecture and Machine Performance. FLOWer and CLIC-3D, a portable flow solving system for block structured 3D-applications: status and benchmarks (H.M. Bleecke et al.). Delft-hydra - an architecture for coupling concurrent simulators (I.J.P. Elshoff et al.). A 3D free surface flow and transport model on different high performance computational architectures (R. Hinkelmann et al.). Recent progress on numerical wind tunnel at the National Aerospace Laboratory, Japan (N. Hirose et al.). Performance comparison of the Cray T3E/512 and the NEC SX-4/32 for a parallel CFD-code based on message passing (J. Lepper et al.). About some performance issues that occur when porting LES/DNS codes from vector machines to parallel platforms (M. Porquie et al.). Microtasking versus message passing parallelisation of the 3D-combustion code AIOLOS on the NEC SX-4 (B. Risio et al.). Parallel performance of domain decomposition based transport (P. Wilders).

Navier-Stokes Applications. Portable parallelization of a 3-D flow solver (T. Bönisch, R. Rühle).
Implementation of a Navier-Stokes solver on a parallel computing system (G. Passoni et al.). Parallel application of a Navier-Stokes solver for projectile aerodynamics (J. Sahu et al.). Incompressible Navier-Stokes solver on massively parallel computer adopting coupled method (K. Shimano et al.).

Industrial Applications. A multi-platform shared- or distributed-memory Navier-Stokes code (F. Chalot et al.). Predictions of external car aerodynamics on distributed memory machines (H. Schiffermüller et al.). Industrial flow simulations using different parallel architectures (J.B. Vos et al.).

Software Tools, Mappings and Environments. On the use of Cray's scientific libraries for Navier-Stokes algorithm for complex three-dimensional geometries (V. Botte et al.). Automatic generation of multi-dimensionally partitioned parallel CFD code in a parallelisation tool (E.W. Evans et al.). ELMER - an environment for parallel industrial CFD (H. Hakula et al.). Semi-automatic parallelisation of unstructured mesh codes (C.S. Ierotheou et al.). Modelling continuum mechanics phenomena using three dimensional unstructured meshes on massively parallel processors (K. McManus et al.). An object-oriented programming paradigm for parallel computational fluid dynamics on memory distributed parallel computers (T. Ohta).

Turbulence. Numerical study of separation bubbles with turbulent reattachment followed by a boundary layer relaxation (M. Alam, N.D. Sandham). Efficient parallel-turbulence simulation using the combination method on workstation-clusters and MIMD-systems (W. Huber). Industrial use of large eddy simulation (C.B. Jenssen). High performance computing of turbulent flows with a non-linear v2-f model on the CRAY T3D using SHMEM and MPI (F.S. Lien). Parallel computation of lattice Boltzmann equations for incompressible flows (N. Satofuka et al.). Numerical simulation of 3-D free shear layers (Y. Tsai). Data-parallel DNS of turbulent flow (R.W.C.P. Verstappen, A.E.P. Veldman).
Parallel implicit computation of turbulent transonic flow around a complete aircraft configuration (C. Weber).

Environmental and Climate Modeling. Parallel computing of dispersion of passive pollutants in coastal seas (S. Chumbe et al.). A parallel implementation of a spectral element ocean model for simulating low-latitude circulation system (H. Ma et al.). Modelling the global ocean circulation on the T3D (C.S. Richmond et al.).

Multidisciplinary and Complementary Applications. ZFEM: Collaborative visualization for parallel multidisciplinary applications (J.R. Cebral). Development of parallel computing environment for aircraft aero-structural coupled analysis (R. Onishi et al.). A parallel self-adaptive grid generation strategy for a highly unstructured Euler solver (K. Warendorf, R. Rühle).
Calculating Payback for a Photovoltaic System

Second, if you have gone with a grid-tied PV system, and most people these days do, then you have to consider how net metering works. All states are required under federal law to provide net metering, but each state is allowed to implement it differently. In some states you always get paid a flat retail rate regardless of when the electricity is generated. For example, if the utility rate was $.18 per kilowatt-hour, then you would pay them 18 cents per kilowatt-hour when you took energy off of the grid, and they would pay you 18 cents per kilowatt-hour when you generated excess energy back to the grid. Other states use a payback scheme called a time-of-use approach that is based on the time of day when the energy is generated. Under a time-of-use approach, rates are higher during the day, when demand is greatest, and lower in the evening, when demand is lowest. The time-of-use approach to compensation is particularly advantageous to the PV owner and can significantly shorten the payback period. This is because the utility company pays you for the excess energy you generate during the day at a high rate (and solar systems only generate energy during the day), and then you pay them for the energy you use from them in the evening at a lower rate. A final consideration in any long-term payback analysis is an assumption (or, more likely, a guess) as to what energy costs will be in the future, since PV systems will continue to provide power for 25 to 30 years at minimum once they are installed. As many of us have found out in recent years, predicting future prices for gas and electricity is not a simple matter. Everyone knows prices will continue to go up as we enter the era of post-peak oil, but how much they will go up is a hard call.
Calculating the Potential Monthly Savings from a PV System

Before we start figuring out the payback period over the life of the system, let's start with something simple: figuring out what the immediate impact on our electric bill will be once the solar system is installed. The calculations are contained in the attached spreadsheet. Here is the process that was used:

• Step 1: Find out your average monthly electric bill. - For most of us this is a fairly easy process. Look at your electric bill and find out what your average monthly electric bill is and your total annual bill. If your statement doesn't show the annual average monthly electric bill, then call the electric company and have them provide you that information. They are required to give it to you.

• Step 2: Determine how many kilowatt-hours you use per month. - Electric usage is measured in kilowatt-hours. Most monthly electric bills will show you both the number of kilowatt-hours you used that month plus your average monthly usage for the year. If you keep copies of your bills you can add them up for the last year and take the average. If you don't have them and it's not on your bill, call the utility company and have them give this to you. Most homeowners use somewhere between 600 and 1200 kilowatt-hours per month.

• Step 3: Find out the monthly output of the proposed system. - This number should be provided to you by the contractor who gave you the bid. If you have not yet received a bid it is still pretty easy to estimate. For the purpose of this analysis we will assume that you are planning on putting a 4-kilowatt (4000-watt) PV system into a home near Albany, New York. By the way, this example is covered in detail in our section called Typical PV System Costs. To determine the monthly output of the proposed system in kilowatt-hours, you must first multiply the 4 kilowatts times the number of hours of sun per day you receive in your location.
If you don't know what that is for your area, you can look it up in the solar maps section of our Website. According to the U.S. solar map, Albany, New York receives on average 4.3 hours of sun per day if we assume that the panels are mounted at latitude on a fixed mount on a roof. If you multiply the 4-kilowatt output of the panels times the 4.3 hours of sunlight per day, you get a daily output of 17.2 kilowatt-hours per day. Now multiply this by 30 days per month on average and you get 516 kilowatt-hours per month. (4 kW x 4.3 sun hours x 30 days = 516 kilowatt-hours per month)

• Step 4: Adjust estimate for real solar conditions - Solar panels are rated under ideal conditions in a laboratory setting. However, those conditions do not accurately reflect real-world conditions. In reality there will be occasional cloudy days, rain, and other conditions that keep performance from being optimal. Therefore we should adjust the earlier estimate of kilowatt-hours to account for this. For most locations in the U.S. an adjustment down of 20% should be sufficient. So multiply the 516 kilowatt-hours per month times .8 and we get about 413 kilowatt-hours per month.

• Step 5: Divide adjusted output hours by the actual average monthly use - To determine what percentage of your electric bill the PV system will cover, just divide the adjusted output (413 kilowatt-hours) by your average monthly kilowatt-hours of use, which you got from your electric bill. So, for example, if the average monthly use was 600 kilowatt-hours per month, the calculation is 413/600, which equals .69. This means that the proposed 4-kilowatt system would cover 69% of the electric needs of the household.

• Step 6: Multiply your average electric bill by the percentage - Since we now know that the proposed system will address 69% of our electric needs, we can multiply that times our monthly bill to find out the savings.
If we were paying $.22 per kilowatt-hour, our monthly bill would be about $132 per month excluding any special charges. If we multiply this by 69%, we can see that the system will save us about $91 per month ($132 x 69% = $91), or about $1,092 per year.

Calculating the Lifetime Payback for Your PV System

Now that we have calculated the monthly and annual savings, we can take a look at the savings over the lifetime of the system. One of the great things about PV systems is that they last a very long time. Most solar panels are warranted for 25 years and will probably perform even longer if properly cared for. Therefore, a payback calculation needs to look at the savings from the solar system across a 25-year period. This can be calculated by hand but is a whole lot easier with a spreadsheet. Click on the icon to see the analysis we have done and then see the explanation below as to how we calculated the values:

• Step 1: Determine initial annual savings from the PV system - We have just done an example of this in the section above, so let's use that example. By calculating the output of the proposed system in kilowatt-hours, we determined that the PV system will pay for 69% of our electric needs and therefore save us about $1,092 per year.

• Step 2: Set assumptions regarding future increases in electric costs - It seems highly likely that electricity rates will continue to increase, given that the fuels most often used to create electricity, such as natural gas and coal, are non-renewable. The question is by what amount we presume they will increase. In this example we have used a fairly conservative estimate of a 4% increase per year for the next 25 years.

• Step 3: Multiply the annual savings by the energy inflation rate - This step is a lot easier to do if you use the attached spreadsheet. Create a row of columns for years 1-25.
Take the amount you will save after year 1 and then multiply that savings by 1.04 for each subsequent year out to year 25 (keeping in mind that the solar panels are warranted for 25 years).

• Step 4: Accumulate the savings by year - Use a spreadsheet to calculate the accumulated savings by year. This is done by adding the prior year's savings (adjusted for inflation) to the current year's savings for each year.

• Step 5: Subtract the accumulated savings from the initial cost - To see how you are paying off your initial investment, you then create a row which subtracts the accumulated savings from the initial cost. Initially this will be positive because the initial system cost is more than the savings. However, after a certain number of years the accumulated savings will exceed the initial cost. The point at which this occurs is known as the payback period. Using the data from the prior example, all initial costs are paid back by year 14. Years 15 through 25 are all positive, until by the end of 25 years the system has earned us $17,782.

This example is a fairly simple one. For example, if you had to take out a homeowner's loan of some type to pay for the system, the cost of the interest on that loan would have to be accounted for in the payback scheme. We have also not compared the return on this investment to the return on other investments you might have made over the same time period. This type of comparative Return on Investment (ROI) analysis will be covered in more detail in an upcoming article.

The Effect of a PV Investment on Your Home's Resale Value

Finally, the biggest factor we have not yet discussed is the impact of this investment on the value of your home. Unfortunately there is very little good scientific data on the impact of PV systems on home values.
While the general consensus is that an investment in a PV system, like most home upgrades, increases its value, there doesn't seem to be a lot of good current data out there as to how much. Most recent studies we have seen were made several years ago when the housing market was booming. It would probably be unwise to presume those apply to the current slumping housing market. Nonetheless, there is some data that suggests that even in the current market PV can be a very good payback. At a recent Solar Conference in San Diego a group of new home builders reported that their new housing units with PV systems far outsold the units without PV even though they were more expensive. Also, the addition of PV systems seems to have a good fit with the growing movement around green remodeling. The McGraw-Hill Construction SmartMarket Report on Attitudes and Preferences for Remodeling and Buying Green Homes found that 73 percent of people surveyed listed potential higher resale value as one of the top four reasons to buy a green home. When all is said and done home values are very often a local phenomenon, particularly in a down housing market as we have now. If resale value is a major consideration for you we suggest that you talk to some local real estate agents and get their opinions as to the impact a PV system will have in your local housing market.
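The monthly-savings and lifetime-payback steps walked through above can be sketched in a few lines of code. This is a minimal illustration using the article's assumed figures (4 kW array, 4.3 sun-hours/day near Albany, 20% derating, 600 kWh/month of use, $0.22/kWh, 4%/yr rate inflation); the installed-cost figure lives in the article's "Typical PV System Costs" section, so it is left as a parameter here.

```python
# Assumed figures from the worked example above.
SYSTEM_KW = 4.0            # rated array size
SUN_HOURS_PER_DAY = 4.3    # average sun-hours for Albany, NY
DERATE = 0.80              # 20% adjustment for real-world conditions
MONTHLY_USE_KWH = 600      # household usage from the electric bill
RATE_PER_KWH = 0.22        # utility rate, $/kWh
INFLATION = 0.04           # assumed annual electricity price increase
PANEL_LIFE_YEARS = 25      # typical panel warranty period

# Steps 3-6: estimated output, coverage fraction, and first-year savings.
monthly_output_kwh = SYSTEM_KW * SUN_HOURS_PER_DAY * 30 * DERATE  # ~413 kWh
coverage = min(monthly_output_kwh / MONTHLY_USE_KWH, 1.0)         # ~0.69
annual_savings = coverage * MONTHLY_USE_KWH * RATE_PER_KWH * 12   # ~$1,090

def payback_year(install_cost):
    """First year in which inflation-adjusted accumulated savings
    cover the installed cost, or None if not within the warranty."""
    accumulated, this_year = 0.0, annual_savings
    for year in range(1, PANEL_LIFE_YEARS + 1):
        accumulated += this_year
        if accumulated >= install_cost:
            return year
        this_year *= 1 + INFLATION
    return None
```

For a hypothetical net installed cost of $10,000 (after any incentives), `payback_year(10000)` comes out to about eight years under these assumptions; the article's own example, with its particular cost figure, paid back in year 14.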
Defining functions.... True or False

Answer true or false to the following questions:
1. It is possible to uniquely define a quadratic function given 4 distinct points.
2. It is possible to uniquely define a constant function given 4 distinct points.
3. It is possible to uniquely define a quartic function given 4 distinct points.
4. It is possible to uniquely define a cubic function given 4 distinct points.
5. It is possible to uniquely define a quintic function given 4 distinct points.

Reply: The answer to all five would be TRUE if you are asking whether there exists at least one set of 4 distinct points that uniquely determines such a function. However, if you are asking whether it works for any 4 distinct points (with distinct x-coordinates), then 1, 2, 3 and 5 are all false, and only 4 is true. Think of the general polynomial equation and the number of constants you have to solve for: a degree-n polynomial has n+1 coefficients, so 4 points pin down exactly a cubic.

Follow-up: Isn't 2 true?
Assignment 1: Using the WEKA Workbench

A. Become familiar with the use of the WEKA workbench to invoke several different machine learning schemes. Use the latest stable version. Use both the graphical interface (Explorer) (here is a guide (pdf)) and the command line interface (CLI).

B. Use the following learning schemes, with the default settings, to analyze the weather data (in weather.arff). For test options, first choose "Use training set", then choose "Percentage Split" using the default 66% split. Report the model's percent error rate.
• ZeroR (majority class)
• OneR
• Naive Bayes Simple
• J4.8

ZeroR model: yes
Evaluate using training set: 5/14 ≈ 36% errors
Evaluate using split: 2/5 = 40% errors

OneR model:
sunny -> no
overcast -> yes
rainy -> yes
Evaluate using training set, error rate: 4/14 ≈ 29%
Evaluate using split, error rate: 3/5 = 60%

NaiveBayes (simple) model: (omitted to save space)
Evaluate using training set, error rate: 1/14 ≈ 7%
Evaluate using split, error rate: 2/5 = 40%

J48 pruned tree:
outlook = sunny
| humidity <= 75: yes (2.0)
| humidity > 75: no (3.0)
outlook = overcast: yes (4.0)
outlook = rainy
| windy = TRUE: no (2.0)
| windy = FALSE: yes (3.0)
Evaluate using training set, error rate: 0/14 = 0%
Evaluate using split, error rate: 3/5 = 60%

C. Which of these classifiers are you more likely to trust when determining whether to play? Why?
The one with the lower error on the separate test set, which is NaiveBayes.

D. What can you say about accuracy when using training set data and when using a separate percentage split to train?
When using only training data, a classifier that can build a more complex model, like the J4.8 decision tree, can fit the data closely. Accuracy on the training set is not a good predictor of accuracy on a separate test set.
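The ZeroR and OneR training-set error counts reported above are easy to reproduce by hand. Here is a small sketch (not WEKA itself) on the standard 14-instance nominal weather data; only the outlook attribute and the play class are needed, since the answer key's OneR rule happens to use outlook.

```python
from collections import Counter

# The 14-instance nominal weather data: (outlook, play)
data = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
        ("rainy", "yes"), ("rainy", "yes"), ("rainy", "no"),
        ("overcast", "yes"), ("sunny", "no"), ("sunny", "yes"),
        ("rainy", "yes"), ("sunny", "yes"), ("overcast", "yes"),
        ("overcast", "yes"), ("rainy", "no")]

labels = [play for _, play in data]

# ZeroR: always predict the majority class ("yes", 9 of 14)
majority = Counter(labels).most_common(1)[0][0]
zeror_errors = sum(play != majority for play in labels)       # 5 of 14

# OneR on 'outlook': predict each attribute value's majority class
rule = {v: Counter(play for o, play in data if o == v).most_common(1)[0][0]
        for v in {o for o, _ in data}}
oner_errors = sum(rule[o] != play for o, play in data)        # 4 of 14

print(zeror_errors, oner_errors)  # 5 4
```

These match the 5/14 and 4/14 training-set errors reported in the answers; the percentage-split figures depend on which 5 instances WEKA holds out.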
Journal of the Optical Society of America A

The scattering of electromagnetic waves by a dielectric structure that fills a slit aperture in a conducting screen is considered. The cavity consists of two zones of different materials separated by an arbitrarily shaped interface. A rigorous R-matrix multilayer modal method is applied, which gives numerical stability even for deep structures. Examples of the numerical results obtained are shown and discussed for both fundamental cases of polarization. © 1998 Optical Society of America

OCIS Codes
(050.1940) Diffraction and gratings : Diffraction
(050.2770) Diffraction and gratings : Gratings
(290.0290) Scattering : Scattering

Ricardo A. Depine and Diana C. Skigin, "Multilayer modal method for diffraction from dielectric inhomogeneous apertures," J. Opt. Soc. Am. A 15, 675-683 (1998)
A few flux questions

1. Let S be the surface $z=x$, $x^2+y^2\le 1$. Find $\iint_S (x^2+y^2)\,dS$.

I know the answer is supposed to be $\sqrt{2}\,\pi/2$ but can't seem to get there. I've tried evaluating it with the divergence theorem, but nothing I do seems to work correctly. I know that $x^2+y^2$ is equal to $F \cdot n$, and I have $n = \langle -1/\sqrt{2},\, 0,\, 1/\sqrt{2}\rangle$, which would make $F = \langle -\sqrt{2}\,x^2,\, 0,\, \sqrt{2}\,y^2\rangle$. Then the integral becomes $\sqrt{2} \int_0^{2\pi} \int_0^{1} \int_0^{r\cos\theta} (-r^2\cos^2\theta + r^2\sin^2\theta)\, r\,dz\,dr\,d\theta$. Am I doing this right? I can't figure out where I'm messing up.

These next two I have the same issues with:

2. Let S be the surface $z = x + y$, $0\le x \le 1$, $0\le y \le 1$. Find the upward flux of the vector field $F = \langle z,x,y\rangle$ across S. I think that the divergence is 0, and wouldn't that make the whole integral 0? But I'm told the correct answer is -1. Why is this?

3. Let S be the portion of the cylinder given by $0\le z \le 3$, $r=1$, $0\le \theta \le \pi/2$. Orient S by normal vectors pointing away from the z-axis and compute the flux of $F = \langle 2x,y,-3z\rangle$ across S. In this case, wouldn't the divergence also be 0, since $\nabla \cdot F = 2 + 1 - 3 = 0$? But I know the answer to be $9\pi/4$.

I'm obviously missing the same sort of thing in all these problems, so any help would be appreciated very much!

For the first one the divergence theorem does not apply: it is a scalar surface integral.

$dS=\sqrt{\left(\frac{\partial z}{\partial x}\right)^2+\left(\frac{\partial z}{\partial y}\right)^2+1}\;dA=\sqrt{2}\;dA$

This gives the integral in polar coordinates

$\sqrt{2}\int_{0}^{2\pi}\int_{0}^{1}r^2\, r\,dr\,d\theta$

For 2 the divergence theorem does not apply because the surface is not closed; use the definition of a surface integral. For 3, again, the surface is not closed, so the divergence theorem does not apply.
This can be parametrized by $\mathbf{r}(\theta,z)=\cos\theta\,\mathbf{i}+\sin\theta\,\mathbf{j}+z\,\mathbf{k}$.

Okay, I sort of understand this, but I still have some questions. For the second problem, since $F = z\mathbf{i} + x\mathbf{j} + y\mathbf{k}$ and $z = x+ y$, then $\frac{\partial z}{\partial x} =1$ and $\frac{\partial z}{\partial y} = 1$, so for the definition of a surface integral I have $\iint F\cdot dS = \iint (-P \frac{\partial z}{\partial x} - Q \frac{\partial z}{\partial y} + R)\, dA$ for $F = P\mathbf{i} + Q\mathbf{j} + R\mathbf{k}$. So my integral is $\iint -2x \,dx\,dy$. But how do I set up the limits?

And for the 3rd one, if $\mathbf{r}(\theta,z)=\cos\theta\,\mathbf{i}+\sin\theta\,\mathbf{j}+z\,\mathbf{k}$, then I'm getting $\mathbf{r}_{\theta} = <-\sin\theta,\cos\theta,0>$ and $\mathbf{r}_{z} = <0,0,1>$, and their cross product is $<\cos\theta,\sin\theta,0>$. How do I proceed from there?

For the 2nd problem the limits are given to you! You typed them in the problem statement. For the third problem, dot the vector you obtained from the cross product with the vector field and use the limits you gave in the problem statement.

Thank you very much!
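Since all three answers have closed forms, a quick numerical cross-check (mine, not part of the thread) can confirm two of them. This sketch uses a simple midpoint rule, with the $\sqrt{2}$ area element for problem 1 and the outward normal $<\cos\theta,\sin\theta,0>$ for problem 3:

```python
import math

def midpoint_2d(f, a, b, c, d, n=400):
    # Midpoint-rule approximation of a double integral over [a,b] x [c,d].
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        for j in range(n):
            y = c + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

# Problem 1: integrand (x^2 + y^2) with dS = sqrt(2) dA; in polar
# coordinates the area element is r dr dtheta, giving sqrt(2) * r^2 * r.
I1 = midpoint_2d(lambda r, th: math.sqrt(2) * r**3, 0, 1, 0, 2 * math.pi)
print(I1, math.sqrt(2) * math.pi / 2)   # both approximately 2.2214

# Problem 3: on the quarter cylinder with outward normal (cos t, sin t, 0),
# F.n = 2cos^2 t + sin^2 t = 1 + cos^2 t; integrate over t in [0, pi/2], z in [0, 3].
I3 = midpoint_2d(lambda th, z: 1 + math.cos(th)**2, 0, math.pi / 2, 0, 3)
print(I3, 9 * math.pi / 4)              # both approximately 7.0686
```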
T.-H. Lai, A.P. Sprague, "Placement of the Processors of a Hypercube," IEEE Transactions on Computers, vol. 40, no. 6, pp. 714-722, June 1991, doi:10.1109/12.90250.

Abstract: The authors formalize the problem of minimizing the length of the longest interprocessor wire as the problem of embedding the processors of a hypercube onto a rectangular mesh, so as to minimize the length of the longest wire. Where neighboring nodes of the mesh are taken as being at unit distance from one another, and where wires are constrained to be laid out as horizontal and vertical wires, the length of the wire joining nodes u and v of the mesh equals the graph-theoretic distance between u and v. The problem of minimizing delays due to interprocessor communication is then modeled as the problem of embedding the vertices of a hypercube onto the nodes of a mesh, so as to minimize dilation. Two embeddings which achieve dilations that (for large n) are within 26% of the lower bound for square meshes and within 12% for meshes with aspect ratio 2 are presented.

Index Terms: processors embedding; minimizing; longest interprocessor wire; hypercube; rectangular mesh; neighboring nodes; graph-theoretic distance; delays; graph theory; hypercube networks; minimisation of switching nets.
Determinant

Let $a_1,\cdots,a_n$ be given numbers. Compute the determinant of the $n\times n$ matrix $A=(a_{ij})$, where $a_{ij}=a_j^{i-1}$.

I don't understand what is meant by this: $a_{ij}=a_j^{i-1}$

Re: Determinant

Without being given any more context, I would assume it means $a_{ij}$ is the $(i-1)$-th power of $a_j$, so the first row should all be 1's. This kind of matrix is called a Vandermonde matrix. You'll probably arrive at a better understanding if you try it first for n = 2 and n = 3. Try to factor the result into binomials. If you are clever, and figure out (guess) the general form, you can actually prove it by induction on n.

Re: Determinant

That is the question 100% verbatim.

Re: Determinant

I see it as $a_j^{i-1} \in \{a_1,\cdots,a_n\}$

Re: Determinant

Well, it's an important matrix, both for the study of polynomials, alternating bilinear forms, and permutation groups. It crops up in a lot of different places. Time to roll up your sleeves and make some messy calcs...lol

@pickslides: I don't think so... there's nothing in the wording to indicate membership. And the calculation of such a determinant (a Vandermonde matrix) is the sort of thing you might encounter in a variety of different courses, including differential geometry, linear algebra and group theory. I think the first time I saw it was in a physics class.

Re: Determinant

I have used it before in my undergraduate linear alg. class. I am familiar with it. I just don't know it when I am presented with it.
Re: Determinant

$V = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ a_1 & a_2 & \cdots & a_n \\ a_1^2 & a_2^2 & \cdots & a_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ a_1^{n-1} & a_2^{n-1} & \cdots & a_n^{n-1} \end{bmatrix}$

Does it look better written out like this?

Re: Determinant

Fair enough. But perhaps if someone else peruses this thread one day, it will be helpful to them. No slight intended.
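As a sanity check on the hint to "factor the result into binomials" (this snippet is mine, not from the thread), the determinant of a small Vandermonde matrix can be compared against the well-known product formula $\prod_{i<j}(a_j - a_i)$ that the factoring leads to:

```python
from itertools import permutations

def det(M):
    # Leibniz expansion of the determinant; fine for small matrices.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # sign = (-1)^(number of inversions)
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = 1 if inv % 2 == 0 else -1
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def vandermonde(a):
    # Row i holds the (i-1)-th powers of a_1..a_n: the first row is all 1's.
    return [[x ** i for x in a] for i in range(len(a))]

a = [2, 3, 5, 7]
V = vandermonde(a)

product = 1
for j in range(len(a)):
    for i in range(j):
        product *= a[j] - a[i]   # product over i < j of (a_j - a_i)

print(det(V), product)   # 240 240
```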
In the diagram below, BD is parallel to XY. What is the value of y? (Picture below.)
Volume of a Cone

Date: 01/29/2001 at 21:55:37
From: cari
Subject: Volume of cones

Dr. Math, I know HOW to find the volume of a cone (1/3 × area of base × height), but my teacher wants to know WHY. We know that filling the cone with water can prove it, but how does it work with the actual shapes? If you put a cone inside a cylinder, then you obviously have extra room, but how do two other cones fit in there? I understand that they won't keep their regular shape, but please explain. Thank you,

Date: 01/29/2001 at 23:29:46
From: Doctor Peterson
Subject: Re: Volume of cones

Hi, Cari. We have explanations of many such formulas in our archives; but the cone is probably one of the hardest to explain without using calculus. Here's one answer I've given about pyramids, which are closely related:

Volume of a Pyramid

You'll see there how you can fit three pyramids with the same volume into a prism; from there, geometrical knowledge lets us build up to any pyramid, and then to the cone. The best proof I'm familiar with comes very close to the spirit of calculus, without requiring you to know any of it. Let's try doing this directly, rather than starting with a pyramid. You can't actually fit three cones together into a cylinder. Instead, we can dismantle a cone into lots of little near-cylinders. Picture one of those baby toys that look like a cone made up of several rings stacked up; or imagine a cone sliced like a pineapple, and the slices trimmed to make flat cylinders. You can imagine that if you make the slices thin enough, the scrap from the trimming will be as little as you like; so the sum of the volumes of the cylinders will be very close to the volume of the cone itself. How can we find the volume of those slices?
Here's a cross-section of the cone, showing the slices:

[ASCII diagram: a triangular cross-section of the cone, cut into horizontal slices of equal height h, with slice radii r1, r2, r3, r4 growing toward the base.]

If the cone has base radius R and height H, and we've cut it into N slices (including that empty slice at the top, with radius r0 = 0), then each cylinder will have height h = H/N, and radius r[k] = kR/N, where k is the number of the cylinder, starting with 0 at the top and ending with N-1 for the bottom cylinder. The volume of cylinder k will be

  pi r[k]^2 h = pi (kR/N)^2 (H/N) = pi R^2 H * k^2/N^3

The total volume will be the sum of these, for all k from 0 to N-1; since only k is different from one cylinder to the next, we can factor everything else out from the sum and get

  V = pi R^2 H / N^3 * Sum(k^2)
    = pi R^2 H / N^3 * (0 + 1 + 4 + ... + (N-1)^2)

At this point I have to either do some magic and tell you the formula for the sum of squares, and hope you trust me, or try to convince you. The formula is:

  0 + 1 + 4 + ... + N^2 = N(N+1)(2N+1)/6

I show a proof by induction in the page I referred to above; another proof can be found here:

Formula For the Sum Of the First N Squares

If we replace N with N-1, we get

  0 + 1 + 4 + ... + (N-1)^2 = (N-1)(N)(2N-1)/6

Put this into our formula and you get

  V = pi R^2 H (N-1)(N)(2N-1)/(6N^3)
    = pi R^2 H/6 * (N-1)/N * N/N * (2N-1)/N
    = pi R^2 H/6 * (1 - 1/N)(1)(2 - 1/N)

Now, if N is very large, 1/N is very small; in fact, as close to zero as you want if N is large enough. So to find the volume of the cone itself, we can just replace it with 0. (Proving this thoroughly is where calculus begins.) We get

  V = pi R^2 H/6 * (1)(1)(2) = 1/3 pi R^2 H

Whew! There's the formula.

- Doctor Peterson, The Math Forum
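The slicing argument is easy to check numerically. This sketch (not part of the original answer) sums the N inscribed-cylinder volumes exactly as constructed above and watches the sum approach (1/3) pi R^2 H:

```python
import math

def cone_slices(R, H, N):
    # Sum the volumes of N cylindrical slices: slice k has height H/N
    # and radius kR/N, matching the construction in the text.
    h = H / N
    return sum(math.pi * (k * R / N) ** 2 * h for k in range(N))

R, H = 2.0, 5.0
exact = math.pi * R**2 * H / 3
for N in (10, 100, 10000):
    print(N, cone_slices(R, H, N), exact)
# the slice sum approaches pi*R^2*H/3 (about 20.944 here) as N grows
```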
Topic: What is a vector field?

Re: What is a vector field?
Posted: Apr 22, 2013 6:38 PM

On 4/22/13 2:10 AM, Shmuel (Seymour J.) Metz wrote:
> In <ZPidnRFGArYb0O7MRVn_vwA@giganews.com>, on 04/20/2013
> at 09:22 PM, Tom Roberts <tjroberts137@sbcglobal.net> said:
>> Displacements from a given point form a vector space only when the
>> displacements are infinitesimal.
> I'm not sure what you mean by that.

Just that a set of all displacements from a given origin does not form a vector space unless there is an isomorphism to the tangent space at the origin.

>> Hmmmm. In physics, where we always have an ORIENTABLE manifold WITH
>> METRIC,
> Certainly at an undergraduate level, but once you get into things like
> gauge theories you need manifolds without a metric.

Yes. I meant spacetime manifold.

Tom Roberts
Re: st: Bootstrapped Standard Errors

From: Danny Dan <danny2011dan@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Bootstrapped Standard Errors
Date: Thu, 22 Mar 2012 18:52:51 -0500

Thank you so much Professor Kolenikov for answering my question in detail. Your points really make sense. I will certainly try to contact Professor King on this issue. Thank you so much for pointing out the uncertainty in using the cem_weights. I will thoroughly contemplate and try to solve this issue. If I get any solution, I will certainly update my post. In the meantime if any other member has any solution on this please let me know. My sincere thanks to Professor Kolenikov and Professor Millimet for their most helpful suggestions.

On Thu, Mar 22, 2012 at 1:23 PM, Stas Kolenikov <skolenik@gmail.com> wrote:
> This gets very complicated, in my opinion. Conceptually speaking, your
> standard errors should reflect all sources of uncertainty about the
> final treatment effect estimate you obtained. If by virtue of your
> sample design, the controls were oversampled, then it would have been
> relatively easier for you to find the controls for your treated, thus
> reducing your bias and mean squared error. The procedure to get the
> standard errors should take into account the fact that the original
> data were obtained by random sampling; replacing the sampling weights
> with CEM weights eliminates this source of uncertainty, and probably
> biases your resulting estimate.
If the bootstrap is to mimic the > sampling process, then it should remove some of the observations in > both the control and the treatment group, thus making it more > difficult to find matches (may be as far as having to drop the whole > CEM stratum if it does not have matches, which has a drastic effect on > the estimate and its degrees of freedom), and increasing your bias/MSE > within the bootstrap subsamples. I am not entirely sure I see how to > handle all of these issues and incorporate all different sources of > uncertainty. Following Don Rubin's tradition, Gary King does not > address the question of variance estimation, not even in the JASA > paper, let alone the Stata Journal paper. I guess you would want to > write to Gary King and ask about his advice on your data situation. > You can utilize -bsweights/bs4rw- with the CEM weights as the > baseline, but how far off your resulting standard errors would be from > the true sampling variability is beyond me. If I were you, I would set > up a simulation to see if the standard errors are at least > approximately correct; you would have to make some smart choices > regarding the sample design and the decisions within CEM steps, > though. In the best case, you will see 95% coverage of your confidence > intervals (rarely happened to me in my simulations). In the worst > case, you will have material for a paper titled "Failures of CEM > method: how on earth does one get the standard errors right???". > On Thu, Mar 22, 2012 at 12:22 PM, Danny Dan <danny2011dan@gmail.com> wrote: >> Hello Professor Kolenikov, >> I have seen your article in the Stata journal ("Resampling variance >> estimation for complex survey data") and was also trying to use it in >> my work. However, I am little confused about the use of the >> appropriate weights in using your tool as because I am not using the >> usual sampling weights as available in the raw data but using weights >> generated after implementing a matching method. 
>> I am trying to use weights that are generated after coarsened exact >> matching (CEM). After running CEM matching it gives both CEM_STRATA >> and CEM_WEIGHTS (where wt=1 for treated and some positive values for >> the controls). My question is can I use CEM_WEIGHTS and CEM_STRATA in >> using your tool? >> Please let me know whether this would be appropriate. >> Also for your information, I also have weights from my raw data >> (Primary sampling unit and strata for variance estimation (VARPSU and >> VARSTR)). >> How shall I set the svy (-svyset-)? Shall I do the following as shown >> in your example: >> egen upsu = group(strata psu) >> . svyset upsu [pw=finalwgt], strata(cstrata) >> In my case, shall I replace finalwt with cem_weights and cstrata with >> cem_strata and psu with varpsu. >> Please let me know. >> Thank you for your reply. >> Best , >> Dan >> On Wed, Mar 21, 2012 at 8:41 PM, Danny Dan <danny2011dan@gmail.com> wrote: >>> Hello All, >>> I have a question: >>> I know that bootstrapping cannot be applied with weights, however, is >>> there anyway in STATA that after doing a weighted regression (like >>> regress `depvar' `indepvar' [weight=wt], or, probit `depvar' >>> `indepvar' [weight=wt]) I can use bootstrapping option separately to >>> generate the standard errors? >>> I am asking this because I need to generate bootstrapped standard >>> errors because of its unknown structure in my model. To be more >>> precise, I am doing a 2-stage estimation and trying to use >>> bootstrapping to generate standard errors in the 2nd stage. >>> Please let me know if I am not clear with my question then I will try >>> to clarify it further for the ease of comprehensibility. >>> Please help. >>> Thank you. 
>>> Best,
>>> Dan
> --
> Stas Kolenikov, also found at http://stas.kolenikov.name
> Small print: I use this email account for mailing lists only.
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
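As an illustration of the resampling idea being debated (this is a hypothetical Python sketch, not Stata code; the uniform weights here are stand-ins for CEM weights and make no claim about whether the resulting standard errors reflect all sources of uncertainty Kolenikov lists):

```python
import random
import statistics

def weighted_mean(ys, ws):
    return sum(y * w for y, w in zip(ys, ws)) / sum(ws)

def bootstrap_se(ys, ws, reps=1000, seed=12345):
    # Resample (y, w) pairs with replacement, recompute the weighted
    # estimate each time, and take the std. dev. across replications.
    rng = random.Random(seed)
    n = len(ys)
    estimates = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        estimates.append(weighted_mean([ys[i] for i in idx],
                                       [ws[i] for i in idx]))
    return statistics.stdev(estimates)

rng = random.Random(1)
ys = [rng.gauss(0, 1) for _ in range(200)]
ws = [rng.uniform(0.5, 2.0) for _ in range(200)]   # made-up weights
print(weighted_mean(ys, ws), bootstrap_se(ys, ws))
```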
Electron. J. Diff. Eqns., Vol. 2008(2008), No. 107, pp. 1-30.

Homoclinic orbit solutions of a one dimensional Wilson-Cowan type model

Edward P. Krisner

Abstract: We analyze a time independent integral equation defined on a spatially extended domain which arises in the modelling of neuronal networks. In this paper, the coupling function is oscillatory and the firing rate is a smooth "heaviside-like" function. We will derive an associated fourth order ODE and establish that any bounded solution of the ODE is also a solution of the integral equation. We will then apply shooting arguments to prove that the ODE has N-bump homoclinic orbit solutions for any even-valued N>0.

Submitted May 14, 2008. Published August 7, 2008.
Math Subject Classifications: 45K05, 92B99, 34C25.
Key Words: Shooting; periodic; coupling; integro-differential equation; homoclinic orbit.

Show me the PDF file (563 KB), TEX file, and other files for this article.

Edward P. Krisner
B-18 Smith Hall
University of Pittsburgh at Greensburg
Greensburg, PA 15601, USA
email: epk15+@pitt.edu

Return to the EJDE web page
Directory tex-archive/fonts/arev

Package: arev
Maintainer: Stephen Hartke, lastname at gmail dot com

The package arev provides Type 1 fonts, virtual fonts and LaTeX packages for using Arev Sans for both text and mathematics. Arev Sans is a derivative of Bitstream Vera Sans created by Tavmjong Bah by adding support for Greek and Cyrillic characters. Bah also added a few variant letters that are more appropriate for mathematics. The primary purpose for using Arev Sans in LaTeX is presentations, particularly when using a computer projector. Arev Sans is quite readable for presentations, with large x-height, "open letters," wide spacing, and thick stems. The style is very similar to the SliTeX font lcmss, but Arev Sans is used for all letters and letter-like symbols, and MathDesign bold math fonts for Bitstream Charter are used for most geometric symbols. Fourier-GUTenberg is used for blackboard bold, Ralph Smith's Formal Script for script, and the AMS fonts for fraktur. Bera Mono is used for typewriter text. Arev Sans is released under the Bitstream Vera license. All files necessary to use Arev Sans with TeX were created by Stephen Hartke and are released under the LaTeX Project Public License, with the exception of ams-mdbch.sty, which is released under the GNU General Public License.

ChangeLog (1889 bytes, 2006-05-31 10:11:00) ==> /fonts/arev/doc/fonts/arev/ChangeLog
README (1315 bytes, 2006-05-31 10:11:00) ==> /fonts/arev/doc/fonts/arev/README
arevdoc.pdf (528466 bytes, 2006-05-31 10:11:00) ==> /fonts/arev/doc/fonts/arev/arevdoc.pdf

Download the complete contents of this directory in one zip archive (1.9M).

arev – Fonts and LaTeX support files for Arev Sans

The package arev provides type 1 and virtual fonts, together with LaTeX packages for using Arev Sans in both text and mathematics. Arev Sans is a derivative of Bitstream Vera Sans created by Tavmjong Bah, adding support for Greek and Cyrillic characters. Bah also added a few variant letters that are more appropriate for mathematics. The primary purpose for using Arev Sans in LaTeX is presentations, particularly when using a computer projector. In such a context, Arev Sans is quite readable, with large x-height, "open letters", wide spacing, and thick stems. The style is very similar to the SliTeX font lcmss, but heavier. Arev is one of a very small number of sans-font mathematics support packages.

Documentation: Readme
License: The LaTeX Project Public License
Maintainer: Stephen Hartke
Contained in: TeX Live as arev; MiKTeX as arev
Topics: fonts for use in mathematics; sans-serif font; fonts themselves
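A minimal usage sketch (mine, not from the package documentation); it assumes the package is loaded with `\usepackage{arev}`, which the description above implies but does not spell out:

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{arev}   % assumed interface: Arev Sans for text and math
\begin{document}
Sans-serif body text with matching math:
$f(x) = \alpha x^2 + \beta$, readable from the back of the room.
\end{document}
```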
Figure 3: Sensitivity indices for NPP based on (a) main effects using all inputs; (b) main effects using a reduced set of inputs; (c) interaction effects using all inputs; (d) interaction effects using a reduced set of inputs. The total effects (Sobol) index (TSI) is the sum of the main-effects index and the interaction index. The TSI for the full set of model inputs is computed as the sum of the index values shown separately in insets (a) (main-effects index) + (c) (interaction index) for a given parameter. Likewise, the TSI for the reduced set of model inputs is the sum of the indices shown in insets (b) (main-effects index) + (d) (interaction index) for a given parameter. TSI significance/importance is classified/ranked according to: TSI > 0.8 (very important), 0.5 < TSI < 0.8 (important), 0.3 < TSI < 0.5 (unimportant), and TSI < 0.3 (irrelevant).
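The classification rule in the caption can be written out directly. The sketch below uses made-up index values; note the caption leaves the boundary values (exactly 0.8, 0.5, 0.3) unspecified, and here they fall through to the lower class:

```python
def tsi_rank(main_effect, interaction):
    # Total effects (Sobol) index = main-effects index + interaction index,
    # classified with the thresholds stated in the caption.
    tsi = main_effect + interaction
    if tsi > 0.8:
        label = "very important"
    elif tsi > 0.5:
        label = "important"
    elif tsi > 0.3:
        label = "unimportant"
    else:
        label = "irrelevant"
    return tsi, label

# hypothetical index values, not taken from the figure
print(tsi_rank(0.55, 0.30))   # classified as 'very important'
print(tsi_rank(0.10, 0.15))   # classified as 'irrelevant'
```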
Write 2/(-3i), (5-3i)/(3+2i) in a+bi form; make y^2-8y sqr These are problems that are on the study guide for the next test. I need some help understanding them and working them. Write in a+bi form. First Problem: 2/(-3i) Second Problem: (5 - 3i)/(3 + 2i) Third Problem: Add a constant term to make the term a perfect square: y^2-8y Also is there a good book that explains Algebra concepts in simplistic terms? Re: Write 2/(-3i), (5-3i)/(3+2i) in a+bi form; make y^2-8y s dsparklez wrote:Write in a+bi form. First Problem: 2/(-3i) Second Problem: (5 - 3i)/(3 + 2i) To learn how to "rationalize" complex denominators, try here. (Scroll down about halfway for a very useful example for your "First Problem".) dsparklez wrote:Third Problem: Add a constant term to make the term a perfect square: y^2-8y You can learn the pattern for perfect-square trinomials here. (Scroll down toward the bottom to get to the part on perfect squares.) Once you know the pattern, I'll bet you can figure out what you need to do with the -8 to find the number you need to add. dsparklez wrote:Also is there a good book that explains Algebra concepts in simplistic terms? There are probably "Dummies" algebra books. Have you tried your local library or bookstore? If you get stuck working any of the three Problems you've posted, please reply showing your work (after you've studied the lessons, so you can get started). Thanks!
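All three study-guide problems can be verified mechanically. This sketch (not part of the original exchange) uses Python's complex type, with `j` playing the role of `i`:

```python
# First problem: 2/(-3i). Multiplying top and bottom by i gives 2i/3.
z1 = 2 / (-3j)
assert abs(z1 - (2 / 3) * 1j) < 1e-12          # 2/(-3i) = 0 + (2/3)i

# Second problem: rationalize with the conjugate 3 - 2i:
# (5 - 3i)(3 - 2i) = 15 - 10i - 9i + 6i^2 = 9 - 19i, and |3 + 2i|^2 = 13.
z2 = (5 - 3j) / (3 + 2j)
assert abs(z2 - (9 / 13 - (19 / 13) * 1j)) < 1e-12

# Third problem: completing the square on y^2 - 8y means adding
# (8/2)^2 = 16, so y^2 - 8y + 16 = (y - 4)^2. Spot-check at one value:
y = 3.7
assert abs((y ** 2 - 8 * y + 16) - (y - 4) ** 2) < 1e-12

print("all checks pass")
```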
The Möbius Challenge: Solution

Jan 2002

The best way to see what happens when you cut a Möbius strip in half is to imagine you're trimming off so much of the original strip that nothing is left in the middle. So you'll only be left with a loop of twice the length and half the width of the original strip, with four twists in it - the Möbius strip has vanished completely.

Now it's easy to work out what happens when you divide the width of your strip into n parts and cut along each of the dividing lines. First cut 1/n of the way in from the side: this will give you a Möbius strip that is (n-2)/n the width of the original, and a linked loop of double the length, and width 1/n, with four twists. Then you repeat the procedure with the Möbius strip, again trimming off a loop of double the length and width 1/n. You keep doing this until either:

• If n is even, you are left with a loop of width 2/n, and when you cut this, you finally end up with n/2 linked loops of width 1/n, each with four twists in it.
• If n is odd, you finally end up with a Möbius strip of width 1/n, and (n-1)/2 linked loops with twice the length of the original strip, width 1/n, and four twists.

Do the twist (again)

What you get when you bisect strips with more than one twist also depends on whether the number of twists is even or odd.

• If you bisect a strip with an even number of twists, you get two loops, each with that same number of twists. So a loop with 2 twists splits into two loops, each with 2 twists, and so on.
• If the number of twists n is odd, you get one loop with 2n + 2 twists. Checking this against the case we already know, this formula does indeed give us the right result - a strip with 1 twist becomes a loop with 2 + 2 = 4 twists.
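The case analysis above is mechanical enough to encode directly. A small sketch (the function names and return conventions are my own, not the article's):

```python
def cut_mobius_into_strips(n):
    """Result of dividing a Möbius strip's width into n parts and cutting
    along every division, per the rules above.

    Returns (mobius_left, linked_loops): whether a thin Möbius strip of
    width 1/n survives, and how many double-length, four-twist loops of
    width 1/n are produced.
    """
    if n % 2 == 0:
        return False, n // 2        # n even: the Möbius strip vanishes
    return True, (n - 1) // 2       # n odd: one thin Möbius strip remains

def bisect_twisted_loop(twists):
    """Bisect a loop with the given number of twists.

    Returns the twist count of each resulting loop.
    """
    if twists % 2 == 0:
        return [twists, twists]     # even: two loops with the same twists
    return [2 * twists + 2]         # odd: one loop with 2n + 2 twists
```

For n = 2 (cutting in half) this gives no surviving Möbius strip and one four-twist loop, matching the opening paragraph.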
Modeling dual-chamber air spring with orifice damper

I am trying to model an air spring consisting of two large air chambers, each about 20' by 20' by 35', connected by an orifice of a diameter yet to be decided. The gas is air with an initial pressure of 1 to 2 atm. One chamber's volume is altered by a piston-like mechanism providing an excursion of +-20 ft at a period of 15 seconds. I am thinking of modeling the orifice as a damper in parallel with one of the springs, and then in series with the second spring, like the following:

***spring1********spring2******MASS of piston **
-------------*
***dashpot**

Given the conditions, what is the proper way to model the parameters? Is the adiabatic formula for each chamber OK? How do I derive the damping coefficient of the orifice? Or is some more realistic model needed?
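A common first-cut model for the "adiabatic formula" part of the question — my own sketch, not an answer from the thread — linearizes each chamber: for small excursions, an adiabatic gas chamber behaves like a spring of stiffness k = γ·p0·A²/V0. The piston area and pressure below are illustrative assumptions only:

```python
import math

# Linearized adiabatic air-chamber stiffness, k = gamma * p0 * A**2 / V0.
# From p * V**gamma = const, a small change dV gives dp = -gamma*p0/V0 * dV;
# with dV = A*dx and F = A*dp, the restoring force is (gamma*p0*A**2/V0)*dx.
gamma = 1.4                  # ratio of specific heats for air
p0 = 101325.0                # initial pressure, Pa (1 atm -- illustrative)
ft = 0.3048                  # feet -> meters

V0 = (20 * ft) * (20 * ft) * (35 * ft)   # one chamber's volume, m^3
A = (20 * ft) * (20 * ft)                # assumed piston face area, m^2

k = gamma * p0 * A**2 / V0               # N/m, per chamber
```

The orifice's damping coefficient is the harder part: it depends on whether flow through the orifice is laminar or turbulent at these pressures and excursion rates, so in practice it is usually fitted empirically rather than derived in closed form.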
Andrew S. Glassner, "Spacetime Ray Tracing for Animation," IEEE Computer Graphics and Applications, vol. 8, no. 2, pp. 60-70, March/April 1988, doi:10.1109/38.504.

Techniques for the efficient ray tracing of animated scenes are presented. They are based on two central concepts: spacetime ray tracing, and a hybrid adaptive space subdivision/bounding volume technique for generating efficient, nonoverlapping hierarchies of bounding volumes. In spacetime ray tracing, static objects are rendered in 4-D spacetime using 4-D analogs to 3-D techniques. The bounding volume hierarchy combines elements of adaptive space subdivision and bounding volume techniques. The quality of the hierarchy and its nonoverlapping character make it an improvement over previous algorithms, because both attributes reduce the number of ray/object intersections that must be computed. These savings are amplified in animation because of the much higher cost of computing ray/object intersections for motion-blurred animation. It is shown that it is possible to ray trace large animations more quickly with spacetime ray tracing using this hierarchy than with straightforward frame-by-frame rendering.
Slitherlink Solution

Slitherlink is a loop-forming puzzle based on a rectangular lattice of dots containing clues in various places. The object is to link adjacent dots so the value of each clue equals the number of links surrounding it, and the solution forms a single continuous loop with no crossings or branches when the puzzle is completed. In this example we have a 5x5 Slitherlink puzzle with five columns and five rows. We now need to create a continuous loop according to the above rules… but how?

Starting techniques

Slitherlink puzzles always start off with some very easy moves, many of which are obvious at first glance. After solving the first clues, proceed to basic techniques to get more ideas how to solve the puzzles. Here are some ways of using the starting techniques:

1. No lines around a 0: When solving Slitherlink puzzles it is equally important to eliminate links where lines are NOT allowed. The best way to do this is by marking them with an X. Let's look at the 0 in the center of this example. The 0 means it is not surrounded by any lines, so we can place four X's around it to show all four links are excluded. This example also shows a 0 in a corner and a 0 on a side. In both cases, two additional X's are marked to show that lines are not allowed because they cannot be continued.

2. Adjacent 0 and 3: In this example there is an X above the 3, so there is only one way to loop around the 3: on the left, on the bottom and on the right, as shown below. We can now extend the loop with two additional lines, one to the right and one to the left, because there is only one way to continue in each direction. And finally, we can mark four X's next to the corners because having lines in these places will create branches (or crossings), which are not allowed according to the Slitherlink rules.

3. Diagonal 0 and 3: The combination of 0 and 3 also creates a special starting-technique situation when the clues are located diagonal to each other.
In this example the gray lines show the only two possible solutions. Any other way would lead to a conflict later on. Since two lines are common to both possibilities, we know they must be part of the loop in the puzzle solution.

4. Two adjacent 3's: There are only two solutions for adjacent 3's, as shown with the gray lines in this example. Any other way of drawing the lines will quickly lead to a conflict. Since three of the red lines are common to both solutions, we know they must be part of the loop. We also see that, regardless of which solution we end up with, the loop must always bend between the 3's. This means that two X's can be marked as well. This technique works the same way for any number of adjacent 3's.

5. Two diagonal 3's: There are several ways to loop around two diagonal 3's, one of which is shown with the gray lines below. However, there are four lines shown in red common to all solutions, which therefore must be part of the loop. To avoid crossing and branching, four X's are also marked. Excluding these links will be essential later on for the progress of the puzzle.

6. Any number in a corner: Any number in a corner will always provide some immediate starting points. The 0 is simple and has been described in starting technique 1. The 1 requires placing two X's in the corner because it is not possible to place just one line in that area. The 2 has two solutions, both of which connect the dots circled in red, so the two red lines must be part of the loop. And finally, there are only two ways to loop around the 3, and the red lines show which lines are common to both solutions.

Basic techniques

The next step in Slitherlink is to continue the partially solved lines. In some cases the progress is quick and easy because there is only one alternative, while in other cases there are several alternatives, making further analysis necessary. Here are some ways of using the basic techniques:

1. Constraints on a 3: Because 3 is the largest clue in Slitherlink, requiring lines around three of its sides, it often happens that a neighboring constraint helps show what some of these lines should be. In this example we have a 3 on the bottom edge of the puzzle, constrained on the right by an X. Similar to the corner situation of the starting techniques, the red lines show the paths common to all possible solutions, and which therefore must be part of the loop.

2. Loop reaching a 3: In this example the top part of the loop reaches the corner of a 3 with the possibility to continue in three directions: to the left, down, and to the right. However, the loop can continue around the 3 in only two possible ways. The red lines show paths common to the two possible solutions, and which therefore must be part of the loop. Since the loop must continue around the 3, it cannot branch to the left, and we must therefore add an X to show that this link is excluded.

3. Loop reaching a 1: Sometimes the loop reaches a 1 in such a way that it is forced to continue on one of its sides. In this example we see the loop reaching a 1 on the bottom edge of the puzzle. As a result, the loop can continue either upwards or to the right. This means the links on the right and on the top of the 1 are excluded, as shown with the red X's.

4. Constraints on a 2: Some of the most interesting situations in Slitherlink occur when the loop goes around a 2. By carefully examining this example one can see there are only two ways the loop can go around the 2. However, both solutions will always connect the two dots circled in red. This means there is only one way the loop can continue, as shown with the red line.

5. Avoiding a separate loop: The rule of Slitherlink says only one loop may be formed. This means we must never close smaller loops in the puzzle. In this example we have three ways to continue the loop in the bottom part of the puzzle: to the left, upwards and to the right.
However, if we make a line to the left, a separate loop will be formed, which is not allowed. Therefore this link is excluded with an X.

Advanced techniques

The techniques described so far won't be enough to solve hard puzzles. For this you will need advanced techniques to work out many special and interesting logic situations. Most advanced techniques use recursion, a looking-ahead process of making assumptions and checking for conflicts one or two steps ahead. Here are some examples of advanced techniques to solve special situations. You will develop many more of your own when solving hard Slitherlink puzzles by yourself:

Advanced technique 1: If the loop in the bottom-left corner is continued upwards, as shown in the left diagram, the next steps will quickly create a closed-loop conflict. Therefore the loop cannot be continued upwards, as marked with the X.

Advanced technique 2: If we mark an X next to the 3 in the left diagram, we will then have to add three lines around it as shown in the center diagram, which creates a conflict with the 3 in the top-right corner. Therefore there should be a line instead of the X, as shown in the right diagram.

Advanced technique 3: If the loop in the bottom-right corner is continued upwards as shown in the left diagram, we will be forced to create a small loop as shown in the center diagram. Therefore the loop cannot continue upwards, and an X should be marked instead.

Advanced technique 4: The two red links in the left diagram show how the loop goes around the 2's in the top-right corner. We don't know if the loop goes inwards to the center or outwards to the corner of the puzzle, but we do know the loop must connect to the dots with the small red circles. This means the two red lines shown in the right diagram are part of the solution.

Advanced technique 5: If we make a line under the 1 in the left diagram, we will then have to add three X's on the other sides as shown in the center diagram.
This, however, creates a conflict with the 3 in the top-left corner. Therefore there should be an X under the 1 instead of a line, as shown in the right diagram.

Advanced technique 6: If we mark the X shown in red in the left diagram, we will create a conflict around the 1 in the top-right corner: whichever way we make one line next to the 1, we will have to make a second line too. Therefore there should be a line instead of the X, as shown in the right diagram.
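All of these techniques enforce the same clue rule — each number must equal the count of drawn links on its four sides. A minimal sketch of that check (the edge encoding and function names are my own, not Conceptis's):

```python
# Edges of the dot lattice: h[r][c] is the horizontal link to the right of
# dot (r, c); v[r][c] is the vertical link below dot (r, c).
# 1 = line drawn, 0 = no line (or an X).
def links_around_cell(h, v, r, c):
    """Count the drawn links on the four sides of the cell at (r, c)."""
    return h[r][c] + h[r + 1][c] + v[r][c] + v[r][c + 1]

def clue_satisfied(h, v, r, c, clue):
    """True when the cell's clue equals its number of surrounding links."""
    return links_around_cell(h, v, r, c) == clue
```

A solver would combine this check with the single-loop and no-branching rules, pruning candidate edges exactly the way the X-marking techniques above do by hand.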
Math Forum Discussions

Topic: a simple question
Replies: 4   Last Post: Sep 27, 2013 2:31 AM

Re: a simple question
Posted: Sep 24, 2013 4:07 AM

Do you mean you want to solve that equation for h? (That's a different thing than "evaluating it", which would not be possible since the left-hand side is not the sort of expression to which you can assign a value -- and what = does is to assign a value, whereas == denotes an equality.) Perhaps you mean something like the following:

Solve[A Log[h]/h - x == 0, h]

The result you'll get gives the solution as:

h -> -((A*ProductLog[-(x/A)])/x)

This may or may not be of use to you, since ProductLog is not an "elementary" function. Do A and x have some specific values?

By the way, it's a bad idea to use names in Mathematica that are, or begin with, an upper-case letter -- because that risks confounding them with the names of built-in objects.

On Sep 23, 2013, at 9:59 PM, Dhaneshwar Mishra <dhaneshwarmishra@gmail.com> wrote:
> I would like to evaluate A*Log[h]/h - x = 0 for h; can anybody help me do so?
> I am a beginner with Mathematica.

Murray Eisenberg
Mathematics & Statistics Dept.
University of Massachusetts
Lederle Graduate Research Tower
710 North Pleasant Street
Amherst, MA 01003-9305
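Since the reply asks whether A and x have specific values, a quick numeric cross-check of the ProductLog (Lambert W) solution is easy to do outside Mathematica too. A sketch in Python, with illustrative values A = 3, x = 1 chosen by me so that a real root exists (the closed form needs -x/A >= -1/e for that):

```python
import math

def solve_h(A, x, h=2.0, tol=1e-12):
    """Newton's method on f(h) = A*log(h)/h - x."""
    for _ in range(100):
        f = A * math.log(h) / h - x
        fp = A * (1 - math.log(h)) / h ** 2   # derivative of A*log(h)/h
        step = f / fp
        h -= step
        if abs(step) < tol:
            return h
    return h

A, x = 3.0, 1.0
h = solve_h(A, x)   # numerically matches -(A/x)*ProductLog[-x/A]
```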
Gunter, the weightlifter, can lift a 230.0 kg barbell overhead on Earth. The acceleration due to gravity on the sun is 274 m/s^2. What is the weight of the barbell on the sun (if he was safe from the heat)? How much force does each arm carry?

Reply: Does g stand for the gravity?

Reply: Ok, I was totally wrong previously.... 1 kg on Earth is 1 kg on the sun.... The barbell's mass is 230 kg.... Its weight is 230*274 Newtons. The force through the guy's arm (assuming 1 arm and only vertical forces) is 230*274/2 Newtons.

Reply: *assuming 2 arms ><

Reply: Okay, thanks ^^

Reply: Sorry for that, I get confused with mass and weight sometimes (this time ;P)
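The arithmetic in the accepted answer, written out (the numbers come straight from the thread; the even two-arm split is the assumption the posters settled on):

```python
m = 230.0        # barbell mass, kg -- the same on Earth and on the Sun
g_sun = 274.0    # solar surface gravity, m/s^2

weight_on_sun = m * g_sun          # N; weight = mass * local gravity
force_per_arm = weight_on_sun / 2  # N, assuming two arms share the load
```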
Help - 850 mb Moisture Transport

The 850 mb moisture transport is the product of the wind speed (m/s) and the mixing ratio (g/g) at 850 mb. Values are scaled by a factor of 100, such that a 40 kt (~20 m/s) wind speed and a 12 g/kg mixing ratio (0.012 g/g) result in a moisture transport of 24 m/s (the first pink shade in the color fill). High values of moisture transport have been related to heavy rainfall potential with convective systems.

Reference: Junker, N. M., R. S. Schneider, and S. L. Fauver, 1999: A study of heavy rainfall events during the great Midwest flood of 1993. Wea. Forecasting, 14, 701-712.
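The quantity described above is just wind speed times mixing ratio, scaled by 100; a one-line check using the help text's own example numbers (the function name is mine):

```python
def moisture_transport(wind_speed_ms, mixing_ratio_g_per_kg):
    """850 mb moisture transport as plotted: wind speed (m/s) times the
    mixing ratio converted from g/kg to g/g, scaled by a factor of 100."""
    return wind_speed_ms * (mixing_ratio_g_per_kg / 1000.0) * 100.0

# The example from the help text: ~20 m/s wind, 12 g/kg mixing ratio -> 24.
mt = moisture_transport(20.0, 12.0)
```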
AVN-BASED MOS GUIDANCE - THE 0600/1800 UTC ALPHANUMERIC MESSAGES

J. Paul Dallavalle and Mary C. Erickson

This Technical Procedures Bulletin (TPB) describes the format and contents of the new AVN MOS alphanumeric messages generated during the 0600 and 1800 UTC forecast cycles. These messages contain forecasts of the max/min temperature; time-specific surface temperature and dew point; total sky cover; surface wind direction and wind speed; probability of precipitation (PoP) for 6- and 12-h periods; categories of quantitative precipitation for 6- and 12-h periods; probability of thunderstorms and conditional probability of severe thunderstorms for 6- and 12-h periods; conditional probability of precipitation type (freezing, snow, or liquid) and a corresponding category; snowfall amount; and categories of ceiling height, visibility, and obstruction to vision. Guidance is provided for projections of 6 to 72 hours for most weather elements. Note that a particular element line (see Sections 3 - 20) is not included in the message when all of the forecasts in that line are unavailable. These messages are scheduled for implementation during the fall of 2001. The weather element guidance described in the sections below will be added to the messages as the required MOS equations are developed and implemented.

2. MESSAGE HEADING

The message heading shown above (see Figs. 1 and 2 also) identifies the station for which the guidance is valid, the forecast cycle, and the day and hour for which the forecasts are valid. In this example, the message is valid for Albany, NY (KALB). All stations are identified by the ICAO four-character identifier. The "AVN MOS GUIDANCE" appearing on the same line as the station call letters identifies the message contents. The date of the forecast cycle during which the message is issued follows this information. The form mm/dd/yyyy is used, where mm is the month (1 through 12), dd is the day (1 through 31), and yyyy is the four-digit year.
The forecast cycle is identified by the standard 0600 or 1800 UTC. In this example, the MOS guidance for KALB was issued from the 0600 UTC forecast cycle of the AVN on October 24, 2001.

The DT and HR lines denote the date and hour at which the forecasts are valid. The DT line indicates the day of the month. Note that the month is denoted by the standard three or four letter abbreviation. Note, also, that the message for the 0600 UTC cycle does not contain the month indicator in the DT line for the last forecast period. For temperature, dew point, sky cover, wind direction and speed, precipitation type, ceiling height, visibility, and obstruction to vision, the date and hour denote the specific time that the forecasts are valid. These forecasts are valid every 3 hours until 60 hours after initial time, and then every 6 hours until 72 hours after initial time. For PoP, quantitative precipitation, thunderstorms, severe weather, and snowfall amount, the time indicates the end of the period over which the forecasts are valid. For the max/min temperature, the date group gives only the approximate ending time of the daytime and nighttime periods for which the max and min temperature guidance, respectively, are valid.

3. X/N - MAXIMUM/MINIMUM TEMPERATURE

The max/min surface temperature forecasts are displayed for projections of 18, 30, 42, 54, and 66 hours after the initial data time (0600 or 1800 UTC). Although the forecasts are presented at consecutive 12-h intervals, each forecast is actually valid for a daytime or nighttime period. For the AVN-based MOS guidance, daytime is defined as 7 a.m. to 7 p.m. Local Standard Time (LST). Nighttime is defined as 7 p.m. to 8 a.m. LST. Thus, the valid date in the appropriate column of the DT and HR lines must be converted by the forecaster to his/her local date. This local date then denotes the appropriate daytime or nighttime for the max or min temperature forecast.
For the 0600 UTC forecast cycle, the temperatures are shown in max/min (X/N) order and are valid for today's max, tonight's min, tomorrow's max, tomorrow night's min, and the day after tomorrow's max. For the 1800 UTC cycle, the temperatures are shown in min/max (N/X) order and are valid for tonight's min, tomorrow's max, tomorrow night's min, the day after tomorrow's max, and the night after tomorrow night's min. Each temperature forecast is presented to the nearest whole degree Fahrenheit, and three characters are allowed. A missing forecast is indicated by a 999.

4. TMP - SURFACE TEMPERATURE

Time-specific 2-m temperature forecasts are valid every 3 hours from 6 to 60 hours, and then every 6 hours to 72 hours after 0600 and 1800 UTC. These forecasts are valid at 1200, 1500,..., 0300, 0600 UTC, and so forth. Each temperature forecast is presented to the nearest whole degree Fahrenheit; a missing forecast is indicated by a 999. Only three characters are available for the temperature forecasts. Thus, two consecutive forecasts of 100 degrees or more, or of -10 degrees or less, appear with no spaces between them.

5. DPT - SURFACE DEW POINT

Time-specific 2-m dew point forecasts are valid every 3 hours from 6 to 60 hours, and then every 6 hours to 72 hours after 0600 and 1800 UTC. These forecasts are valid at 1200, 1500,..., 0300, 0600 UTC, and so forth. Each dew point forecast is presented to the nearest whole degree Fahrenheit; a missing forecast is indicated by a 999. Three characters are available for the dew point forecasts, so two consecutive forecasts of -10 degrees or less appear with no spaces between them.

6. CLD - TOTAL SKY COVER CATEGORIES

Forecast categories of total sky cover (see the following table) are available in plain language for projections at 3-h intervals from 6 to 60 hours, and then every 6 hours to 72 hours after the initial data times (0600 and 1800 UTC). All forecasts are valid for specific times (i.e., 1200, 1500, 1800, and so forth).
Two characters identify the category (CL - clear; SC - scattered; BK - broken; OV - overcast); a missing forecast is denoted by XX.

Total Sky Cover Categories
CL - clear;
SC - > 0 to 4 octas of total sky cover;
BK - > 4 to < 8 octas of total sky cover;
OV - 8 octas of total sky cover or totally obscured.

7. WDR - SURFACE WIND DIRECTION / WSP - SURFACE WIND SPEED

Surface wind direction (WDR) and speed (WSP) forecasts are given at 3-h intervals for projections of 6 to 60 hours, and then every 6 hours to 72 hours after the initial data times (0600 and 1800 UTC). These are forecasts of the 10-m winds (a 2-minute average) at specific times throughout each day (i.e., 1200, 1500, 1800 UTC, and so forth). The wind direction is given in tens of degrees and varies from 01 (10 degrees) to 36 (360 degrees). The normal meteorological convention for specifying wind direction is followed. The wind speed is given in knots; the maximum speed allowed in the message is 98 knots. For both direction and speed, missing forecasts are denoted by 99. A calm wind is indicated by a wind direction and speed of 00.

8. P06 - PROBABILITY OF PRECIPITATION IN A 6-H PERIOD

9. P12 - PROBABILITY OF PRECIPITATION IN A 12-H PERIOD

The P12 forecasts are for the probability of 0.01 inches or more of liquid-equivalent precipitation (PoP) occurring during a 12-h period. For nearly all stations, the 12-h PoP's are valid for intervals of 6-18, 18-30, 30-42, 42-54, and 54-66 hours after the initial data times (0600 and 1800 UTC). For stations in Hawaii, however, the 12-h PoP's are valid for intervals of 12-24, 24-36, 36-48, 48-60, and 60-72 hours after 0600 and 1800 UTC. In the message, the forecast values are displayed under the ending time of the 12-h period. The probability is given to the nearest percent. Values range from 0 to 100%. A missing forecast value is indicated by 999.

10. Q06 - QUANTITATIVE PRECIPITATION AMOUNT IN A 6-H PERIOD

Guidance for liquid-equivalent precipitation amount (QPF) accumulated during a 6-h period is presented in categorical form. These forecasts are available for projections of 6-12, 12-18, 18-24, 24-30, 30-36, 36-42, 42-48, 48-54, 54-60, 60-66, and 66-72 hours after the initial data time (0600 and 1800 UTC). The forecasts are displayed beneath the hour indicating the end of the 6-h period. The QPF guidance is a categorical forecast of liquid-equivalent precipitation equaling or exceeding certain specified amounts in the 6-h periods. The categories are as follows:

QPF Categories (6-h)
0 = no precipitation expected;
1 = 0.01 - 0.09 inches;
2 = 0.10 - 0.24 inches;
3 = 0.25 - 0.49 inches;
4 = 0.50 - 0.99 inches;
5 = >= 1.00 inches.

Missing forecasts are denoted by 9.

11. Q12 - QUANTITATIVE PRECIPITATION AMOUNT IN A 12-H PERIOD

Guidance for liquid-equivalent precipitation amount (QPF) accumulated during a 12-h period is presented in categorical form. These forecasts are available for projections of 6-18, 18-30, 30-42, 42-54, and 54-66 hours after the initial data time (0600 and 1800 UTC). For stations in Hawaii, however, the 12-h QPF's are valid for intervals of 12-24, 24-36, 36-48, 48-60, and 60-72 hours after 0600 and 1800 UTC. The forecasts are displayed beneath the hour indicating the end of the 12-h period. The QPF guidance is a categorical forecast of liquid-equivalent precipitation equaling or exceeding certain specified amounts in the 12-h periods. The categories are as follows:

QPF Categories (12-h)
0 = no precipitation expected;
1 = 0.01 - 0.09 inches;
2 = 0.10 - 0.24 inches;
3 = 0.25 - 0.49 inches;
4 = 0.50 - 0.99 inches;
5 = 1.00 - 1.99 inches;
6 = >= 2.00 inches.

Missing forecasts are denoted by 9.

12. T06 - PROBABILITY OF THUNDERSTORMS/CONDITIONAL PROBABILITY OF SEVERE THUNDERSTORMS IN A 6-H PERIOD

The T06 line represents forecasts for the probability of thunderstorms (to the left of the diagonal) and the conditional probability of severe thunderstorms (to the right of the diagonal) occurring during a 6-h period. The 6-h probability forecasts are valid for intervals of 6-12, 12-18, 18-24, 24-30, 30-36, 36-42, 42-48, 48-54, 54-60, and 66-72 hours after the initial data times (0600 and 1800 UTC). Because of the line width, the 60-66 h forecast is not available. In the message, the pair of forecast values are displayed under the ending time of the 6-h period. The thunderstorm probability is given to the nearest whole percent. Values range from 0 to 100%. A missing forecast value is indicated by 999. The conditional severe thunderstorm probability is given to the nearest whole percent. Values range from 0 to 98%. A missing forecast value is given by 99. Both the thunderstorm and conditional severe storm probabilities are available year-round for stations in the contiguous U.S. Note that these probabilities represent the likelihood of the event within a box approximately 47 km on a side and containing the station specified. Forecasts are unavailable for stations in Alaska, Hawaii, or Puerto Rico because reports from the National Lightning Detection Network used to define the thunderstorm predictand were unavailable for locations in those areas.

13. T12 - PROBABILITY OF THUNDERSTORMS/CONDITIONAL PROBABILITY OF SEVERE THUNDERSTORMS IN A 12-H PERIOD

The T12 line represents forecasts for the probability of thunderstorms (to the left of the diagonal) and the conditional probability of severe thunderstorms (to the right of the diagonal) occurring during a 12-h period. The 12-h probability forecasts are valid for intervals of 12-24, 24-36, 36-48, 48-60, and 60-72 hours after the initial data times (0600 and 1800 UTC).
In the message, the pair of forecast values are displayed under the ending time of the 12-h period. The thunderstorm probability is given to the nearest whole percent. Values range from 0 to 100%. A missing forecast value is indicated by 999. The conditional severe thunderstorm probability is given to the nearest whole percent. Values range from 0 to 98%. A missing forecast value is given by 99. Both the thunderstorm and conditional severe storm probabilities are available year-round for stations in the contiguous U.S. Note that these probabilities represent the likelihood of the event within a box approximately 47 km on a side and containing the station specified. Forecasts are unavailable for stations in Alaska, Hawaii, or Puerto Rico because reports from the National Lightning Detection Network used to define the thunderstorm predictand were unavailable for locations in those areas. 14. POZ - PROBABILITY OF FREEZING PRECIPITATION (CONDITIONAL) Conditional probability of freezing precipitation (given that precipitation is occurring) forecasts are available for specific times every 3 hours from 6 to 60 hours and then every 6 hours to 72 hours after 0600 and 1800 UTC. Freezing precipitation is defined as the occurrence of freezing rain or drizzle, ice pellets (sleet), or any mixture of freezing rain, drizzle, or ice pellets with other precipitation types. The probabilities are given to the nearest whole percent, and values range from 0 to 100%. Missing values are indicated by 999. These probabilities are used in producing the categorical TYP forecast described in Section 16. The POZ guidance is transmitted during the period of September 1 - May 31. Because of the rarity of the freezing rain event, many stations do not have forecast equations for the POZ category. In these cases, the POZ line will not appear in the message at any time of the year. 15. 
POS - PROBABILITY OF SNOW (CONDITIONAL) Conditional probability of snow (given that precipitation is occurring) forecasts are available for specific times every 3 hours from 6 to 60 hours and then every 6 hours to 72 hours after 0600 and 1800 UTC. Snow is defined as the occurrence of a pure snow event, that is, snow, snow showers, snow grains, or snow pellets or any combination of those elements. Snow mixed with rain is considered a liquid precipitation event. The probabilities are given to the nearest whole percent, and values range from 0 to 100%. Missing values are indicated by 999. These probabilities are used in producing the categorical TYP forecast described in Section 16. The POS guidance is transmitted only during the period of September 1 - May 31. Although the conditional probability of liquid precipitation is not given in the message, the probability can be inferred since the sum of the probability of freezing precipitation, snow, and liquid precipitation is 100%. 16. TYP - PRECIPITATION TYPE FORECASTS (CONDITIONAL) The TYP line represents forecasts of precipitation type (if precipitation occurs) for specific times every 3 hours from 6 to 60 hours, and then every 6 hours to 72 hours after the initial hour of 0600 or 1800 UTC. The forecast is indicated by one character where "Z" represents freezing precipitation (freezing rain, freezing drizzle, ice pellets (sleet), or any report of these elements mixed with other precipitation types), "S" represents snow (snow, snow grains, snow pellets, or snow showers), and "R" represents liquid precipitation (rain, drizzle, or a mixture of rain or drizzle with snow). A missing forecast is denoted by "X". The precipitation type guidance is transmitted only during the period of September 1 - May 31. 17. 
SNW - SNOWFALL AMOUNT CATEGORICAL FORECAST Categorical forecasts of snowfall amount are available in the message for 24-h periods ending approximately 30 and 54 hours after 0600 UTC and approximately 42 and 66 hours after 1800 UTC. Since observations from the cooperative observer network are used to define the event, the valid times are approximations. The categories are denoted as follows: Snowfall Amount Categories 0 = no snow or a trace expected; 1 = > a trace to < 2 inches expected; 2 = 2 to < 4 inches; 4 = >4 to < 6 inches; 6 = >6 to < 8 inches; 8 = > 8 inches. A missing forecast is denoted by 9; forecasts are disseminated only for the period of September 1 - May 31. 18. CIG - CEILING HEIGHT CATEGORICAL FORECASTS Forecasts of seven categories of ceiling height (see the following table) are available for specific times valid every 3 hours from 6 to 60 hours and then every 6 hours to 72 hours after 0600 and 1800 UTC. The forecasts are displayed beneath the time of the day for which they are valid. Values of 1 through 7 are allowed for the categorical guidance; a value of 9 denotes a missing forecast. The categories are as follows: Ceiling Height Categories 1 = ceiling height of < 200 feet; 2 = ceiling height of 200 - 400 feet; 3 = ceiling height of 500 - 900 feet; 4 = ceiling height of 1000 - 3000 feet; 5 = ceiling height of 3100 - 6500 feet; 6 = ceiling height of 6600 - 12,000 feet; 7 = ceiling height of > 12,000 feet or unlimited ceiling. The categorical guidance is prepared by using probability forecasts of the same categories. 19. VIS - VISIBILITY CATEGORICAL FORECASTS Forecasts of seven categories of visibility (see the following table) are available for specific times valid every 3 hours from 6 to 60 hours and then every 6 hours to 72 hours after 0600 and 1800 UTC. The forecasts are displayed beneath the time of the day for which they are valid. Values of 1 through 7 are allowed for the categorical guidance; a value of 9 denotes a missing forecast. 
The categories are as follows: Visibility Categories 1 = visibility < 1/4 mi; 2 = visibility of > 1/4 mi to < 1/2 mi; 3 = visibility of > 1/2 mi to < 1 mi; 4 = visibility of 1 to < 3 mi; 5 = visibility of 3 to 5 mi; 6 = visibility of 6 mi; 7 = visibility of > 6 mi. The categorical guidance is prepared by using probability forecasts of the same categories. 20. OBV - OBSTRUCTION TO VISION CATEGORICAL FORECASTS Forecasts of five categories of obstruction to vision (see the following table) are available for specific times valid every 3 hours from 6 to 60 hours and then every 6 hours to 72 hours after 0600 and 1800 UTC. The forecasts are displayed in plain language beneath the time of the day for which they are valid. The categories are denoted by the letters "N", "HZ", "BR", "FG", and "BL"; a value of "X" denotes a missing forecast. The categories are as follows: Obstruction to Vision Categories N = none of the following; HZ = haze, smoke, dust; BR = mist (fog with visibility > 5/8 mi); FG = fog or ground fog (visibility < 5/8 mi); BL = blowing dust, sand, snow. The categorical guidance is prepared by using probability forecasts of the same categories. In the equation development, cases of fog or mist were not stratified by the occurrence of precipitation. Thus, a forecast of fog can be coincidental with a forecast of precipitation. Lower visibilities caused exclusively by precipitation occurrence are not indicated by the obstruction to vision forecasts. 21. AVAILABILITY The 0600 and 1800 UTC AVN MOS guidance is available at approximately 1030 and 2230 UTC, respectively, in 10 alphanumeric messages transmitted to NWS AWIPS and Family of Services (FOS) circuits: six containing guidance for stations in the contiguous U.S., Puerto Rico, and the Virgin Islands; three containing guidance for Alaskan sites; and one containing guidance for stations in Hawaii. The following two-line WMO headers are used:
WMO Header - Region
FOPA20 KWNO - Pacific Region
FOUS21 KWNO - Northeast U.S.
FOUS22 KWNO - Southeast U.S.
FOUS23 KWNO - North Central U.S.
FOUS24 KWNO - South Central U.S.
FOUS25 KWNO - Rocky Mountain Region
FOUS26 KWNO - West Coast Region
FOAK37 KWNO - Southeast Alaska (Juneau)
FOAK38 KWNO - Central Alaska (Anchorage)
FOAK39 KWNO - Northern Alaska (Fairbanks)
The messages for a subset of the stations in the above collectives are also sent to AFWA for dissemination on military communication circuits. Twenty-seven messages contain guidance for stations in the contiguous U.S., three messages contain guidance for Alaskan sites, one message contains guidance for Hawaiian sites, and one message contains guidance for stations in Puerto Rico. The following WMO headers are used:
WMO Header - Region
FOUS30 KWNO - Contiguous U.S. MAVFxx, where xx=01 through 27
FOAK30 KWNO - Alaska MAVFxx, where xx=50, 51, or 52
FOPA30 KWNO - Hawaii
FOCA30 KWNO - Puerto Rico
22. STATION LIST As of August 2001, the AVN MOS guidance was available for 1060 stations in the ten bulletins transmitted to AWIPS and on the NWS FOS. Guidance for another 346 sites will be added in late 2001. As of September 2001, the AVN MOS guidance is available for 273 stations in the messages transmitted to AFWA. The user may check the following home pages for the station lists and corresponding WMO headers. The first address provides station lists for the AWIPS/FOS messages; the second address provides station lists for the military bulletins.
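As a worked illustration (not part of the official bulletin), several of the field conventions described above — 999 for a missing temperature, 99/99 and 00/00 for missing and calm winds, single-digit QPF categories with 9 meaning missing — lend themselves to a small decoder. The function names and field widths below are hypothetical; only the category table and sentinel values are taken from the text.

```python
# Hypothetical sketch of decoding a few AVN MOS field conventions.
# Function names and field layouts are illustrative, not official.

# Q06 categories per the bulletin; category 9 denotes a missing forecast.
Q06_CATEGORIES = {
    0: "no precipitation expected",
    1: "0.01 - 0.09 inches",
    2: "0.10 - 0.24 inches",
    3: "0.25 - 0.49 inches",
    4: "0.50 - 0.99 inches",
    5: ">= 1.00 inches",
}

def decode_temperature(field):
    """Return a whole-degree Fahrenheit value, or None if missing (999)."""
    value = int(field)
    return None if value == 999 else value

def decode_wind(direction_field, speed_field):
    """Direction is in tens of degrees (01-36), speed in knots.
    99/99 means missing; 00/00 means calm."""
    d, s = int(direction_field), int(speed_field)
    if d == 99 and s == 99:
        return None          # missing forecast
    if d == 0 and s == 0:
        return (0, 0)        # calm wind
    return (d * 10, s)       # (degrees, knots)

def decode_q06(field):
    """Map a Q06 category digit to its plain-language amount range."""
    cat = int(field)
    return None if cat == 9 else Q06_CATEGORIES[cat]
```

For example, under this sketch a WDR/WSP pair of "27" and "15" decodes to a 270-degree wind at 15 knots, and a TMP field of "999" decodes to a missing value.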
Cranston Calculus Tutor
Find a Cranston Calculus Tutor
...I also wrote workbook problems and test questions for the company's mock SAT and MCAS tests. At this job I gained an encyclopedic perspective on what shortcuts work for different types of students, and on what types of questions appear on the SAT. I'm inventive, and now I've advanced well beyond my old company's tutoring system.
14 Subjects: including calculus, geometry, algebra 2, algebra 1
...I can help you master trig! The SAT math covers material from basic arithmetic through algebra II. While there are all sorts of strategies that focus on test-taking tips and how to guess at multiple choice questions, I believe in learning the material.
19 Subjects: including calculus, chemistry, physics, geometry
...As an undergraduate I read extensively in philosophy, literature, and sociology. I have also worked in an undergraduate tutorial office for a year, and tutored high school students in English Language Arts (ELA). I have a BA in philosophy, and do well on standardized tests. As an undergraduate I did a lot of writing, and was published in the school's journal.
29 Subjects: including calculus, reading, English, geometry
...Thanks for the consideration. I have taught high school math for over 14 years and understand the importance of study skills. The key to doing well in any class is a good balance of effective note-taking, consistent homework completion, and competent test preparation. I am committed to personalizing the student's learning experience so that they can do well in school.
23 Subjects: including calculus, reading, writing, geometry
...Precalculus is a gateway course for more advanced mathematics -- so it's no surprise that many students, even those who have a track record of good grades in math, find themselves overwhelmed by both the depth and breadth of the material. I've tutored the subject since I was in high school and e...
47 Subjects: including calculus, English, reading, chemistry
Root Mean Square versus Root Sum Square - John Dunn, Consultant, Ambertec, P.E., P.C.
Consider some repetitive voltage waveform applied across a resistance of R:
At time T1, we have voltage E1 across resistance R which yields a power of E1² / R.
At time T2, we have voltage E2 across resistance R which yields a power of E2² / R.
At time T3, we have voltage E3 across resistance R which yields a power of E3² / R.
.... and so forth and so forth and .....
At time Tn, we have voltage En across resistance R which yields a power of En² / R.
First consider the root-mean-square (RMS) of these voltages as follows. We find the average power applied to R as:
( E1² / R + E2² / R + E3² / R + . . . + En² / R ) / n
Factoring out 1 / R, we re-write the average power as:
( ( E1² + E2² + E3² + . . . + En² ) / n ) / R
We know of course that power is the square of a voltage divided by resistance, so in this circumstance and with malice aforethought, we will call that squared voltage Erms². We therefore have:
Erms² = ( E1² + E2² + E3² + . . . + En² ) / n
where this Erms² is the mean, the average, of the sum of the squares of E1, E2, E3 and so on up to En. We then take the square roots of both sides of this equation:
Erms = sqrt ( ( E1² + E2² + E3² + . . . + En² ) / n )
Lo and behold, we call Erms the root-mean-square or the RMS voltage. It is the square root of the mean of the squares of the individual voltages. The power dissipation in R for all of those individual voltages over their applied time span is the same as the power dissipation in R for the application of Erms over that same time span.
Hold this thought and look next at the root-sum-square of these voltages. Instead of taking the average of the sum of the squares, we take just the sum of the squares and call that Erss². When we take the square root of that Erss², we get the root-sum-square, or Erss. We write:
Erss² = E1² + E2² + E3² + . . . + En²
and then
Erss = sqrt ( E1² + E2² + E3² + . . . + En² )
Now that we've looked at a root-mean-square (RMS) calculation and a root-sum-square (RSS) calculation apropos of voltage, we realize that we can do this mathematics for any parameter we choose. These two calculations can be done for voltages, for currents, for standing wave ratios, for tolerance values or whatever you happen to be devoting your attention to at the moment. Physical meanings are something else to consider, but the equations themselves are valid. RMS and RSS each have their own roles in this space-time continuum. Most commonly, RMS applies to applied voltage or current going to a load, while RSS is used in voltage standing wave ratio estimates in RF designs. Just be careful not to mistake one for the other. RMS is used primarily to find the "average" value of a continuous, periodic process. RSS is used to find the "average" of a statistical process, especially things like random noise.
Erss = sqrt ( E1² + E2² + E3² + . . . + En² ) is essentially what you do with vectors, such as the scalar value of, say, impedance. Thanks for clarifying this. This makes sense to me.
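A quick numerical sketch of the two formulas above, in plain Python (no libraries beyond the standard `math` module assumed):

```python
import math

def rms(values):
    """Root-mean-square: square root of the average of the squares."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def rss(values):
    """Root-sum-square: square root of the sum of the squares (no averaging)."""
    return math.sqrt(sum(v * v for v in values))

# The only difference is the division by n inside the square root,
# so for n samples, Erss = Erms * sqrt(n).
samples = [3.0, 4.0]
print(rms(samples))   # sqrt((9 + 16) / 2) = sqrt(12.5) ≈ 3.536
print(rss(samples))   # sqrt(9 + 16) = 5.0
```

The `Erss = Erms * sqrt(n)` relationship makes the point in the post concrete: the two quantities scale differently with the number of samples, which is exactly why mistaking one for the other is costly.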
Items where Year is 2004 Number of items: 79. Alarcon, T. and Byrne, H. M. and Maini, P. K. (2004) A mathematical model of the effects of hypoxia on the cell-cycle of normal and cancer cells. Journal of theoretical Biology, 229 (3). pp. 395-411. Alarcon, T. and Byrne, H. M. and Maini, P. K. (2004) Towards whole-organ modelling of tumour growth. Progress in Biophysics and Molecular Biology, 85 (2-3). pp. 451-472. Allwright, D. J. and Kaouri, K. (2004) Circulation in inviscid gas flows with shocks. Applied Mathematics Letters, 17 (7). pp. 767-770. Ball, J. M. (2004) Mathematical models of martensitic microstructure. Materials Science and Engineering A, 378 (1-2). pp. 61-69. Bampfylde, C. J. and Brown, N. D. and Gavaghan, D. J. and Maini, P. K. (2004) The role of small-scale spatial interactions on the coexistence of rain forest species: using neighbourhood techniques and simulation models. Journal of Agricultural Science Camb, 142 (2). p. 245. Barrett, John W. and Robson, Janice A. and Suli, Endre (2004) A posteriori error analysis of mixed finite element approximations to quasi-Newtonian incompressible flows. Technical Report. Unspecified. (Submitted) Barrett, John W. and Schwab, Christoph and Suli, Endre (2004) Existence of Global Weak Solutions for Some Polymeric Flow Models. Technical Report. Unspecified. (Submitted) Bernardi, Christine and Suli, Endre (2004) Time and space adaptivity for the second-order wave equation. Technical Report. Unspecified. (Submitted) Betcke, Timo and Trefethen, Lloyd N. (2004) Computations of eigenvalue avoidance in planar domains. Technical Report. Unspecified. (Submitted) Braun, H. T. F. (2004) Model Theory of Holomorphic Functions. PhD thesis, University of Oxford. Breward, C. J. W. and Howell, P. D. (2004) Straining flow of a micellar surfactant solution. European Journal of Applied Mathematics, 15 . pp. 511-531. 
Brezzi, Franco and Cockburn, Bernardo and Marini, Donatella and Suli, Endre (2004) Stabilization Mechanisms in Discontinuous Galerkin Finite Element Methods. Technical Report. Unspecified. Brezzi, Franco and Marini, Donatella and Suli, Endre (2004) Discontinuous Galerkin methods for first-order hyperbolic problems. Technical Report. Unspecified. (Submitted) Browning, T. D. and Heath-Brown, D. R. (2004) Equal Sums of three powers. Inventiones Mathematicae, 157 . pp. 553-573. ISSN 0020-9910 Bruin, N. and Flynn, E. V. (2004) Rational divisors in rational divisor classes. In: Algorithmic Number Theory. Lecture Notes in Computer Science, 3076 . Springer, Berlin, Germany, pp. 132-139. ISBN Burke, James and Greenbaum, Anne (2004) Some Equivalent Characterizations of the Polynomial Numerical Hull of Degree Technical Report. Unspecified. (Submitted) Buttle, D. (2004) Credit networks and agent games. PhD thesis, University of Oxford. Cangiani, Andrea and Suli, Endre (2004) A-posteriori error estimators and RFB. Technical Report. Unspecified. (Submitted) Cartis, Coralia (2004) Some Disadvantages of a Mehrotra-Type Primal-Dual Corrector Interior Point Algorithm for Linear Programming. Technical Report. Unspecified. (Submitted) Cropp, Roger and Norbury, John and Gabric, Albert J. and Braddock, Roger D. (2004) Modeling dimethylsulphide production in the upper ocean. Global Biogeochemical Cycles, 18 (GB3005). Dollar, H. Sue and Gould, Nicholas I. M. and Wathen, A. J. (2004) On implicit-factorization constraint preconditioners. Technical Report. Unspecified. (Submitted) Dollar, H. Sue and Wathen, A. J. (2004) Incomplete factorization constraint preconditioners for saddle-point matrices. Technical Report. Unspecified. (Submitted) Flynn, E. V. (2004) The Hasse principle and the Brauer-Manin obstruction for curves. Manuscripta Mathematica, 115 . pp. 437-466. ISSN 0025-2611 Giles, M. B. 
(2004) Sharp error estimates for a discretisation of the 1D convection/diffusion equation with Dirac initial data. Technical Report. Unspecified. (Submitted) Harriman, Kathryn and Gavaghan, D. J. and Suli, Endre (2004) Application of hpDGFEM to mechanisms at channel microband electrodes. Technical Report. Unspecified. (Submitted) Harriman, Kathryn and Gavaghan, D. J. and Suli, Endre (2004) Approximation of linear functionals using an hp-adaptive discontinuous Galerkin finite element method. Technical Report. Unspecified. Harriman, Kathryn and Gavaghan, D. J. and Suli, Endre (2004) Finite element solution of a membrane covered electrode problem. Technical Report. Unspecified. (Submitted) Harriman, Kathryn and Gavaghan, D. J. and Suli, Endre (2004) The importance of adjoint consistency in the approximation of linear functionals using the discontinuous Galerkin finite element method. Technical Report. Unspecified. (Submitted) Hauser, Raphael and Nedic, Jelena (2004) On the relationship between convergence rates of discrete and continuous dynamical systems. Technical Report. Unspecified. (Submitted) Heath-Brown, D. R. (2004) The average rank of elliptic curves. Duke Mathematical Journal, 122 . pp. 591-623. Heath-Brown, D. R. (2004) Rational points and analytic number theory. In: Arithmetic of higher-dimensional algebraic varieties (Palo Alto, CA, 2002). Birkhauser Boston, Boston, MA, USA, pp. 37-42. Heath-Brown, D. R. and Moroz, B. Z. (2004) On the representation of primes by cubic polynomials in two variables. Proceedings of the London Mathematical Society (3), 88 . pp. 289-312. Houston, P. and Robson, Janice A. and Suli, Endre (2004) Discontinuous Galerkin finite element approximation of quasilinear elliptic boundary value problems I: The scalar case. Technical Report. Unspecified. (Submitted) Howell, P. D. and Siegel, M. (2004) The evolution of a slender non-axisymmetric drop in an extensional flow. Journal of Fluid Mechanics, 521 . pp. 155-180. Howell, P. D. 
and Stone, H. A. (2004) On the absence of marginal pinching in thin free films. European Journal of Applied Mathematics . (Submitted) Howison, S. D. and Ockendon, J. R. and Oliver, J. M. (2004) Oblique slamming, planing and skimming. Journal of Engineering Mathematics, 48 (3-4). pp. 321-337. ISSN 0022-0833 Joyce, Dominic (2004) Special Lagrangian submanifolds with isolated conical singularities. I. Regularity. Annals of Global Analysis and Geometry, 25 . pp. 201-251. Joyce, Dominic (2004) Special Lagrangian submanifolds with isolated conical singularities. II. Moduli spaces. Annals of Global Analysis and Geometry, 25 . pp. 301-352. Joyce, Dominic (2004) Special Lagrangian submanifolds with isolated conical singularities. III. Desingularization, the unobstructed case. Annals of Global Analysis and Geometry, 26 . pp. 1-58. Kaouri, K. (2004) Secondary Sonic Boom. PhD thesis, University of Oxford. Korobeinikov, A. and Maini, P. K. (2004) A Lyapunov function and global properties for SIR and SEIR epidemiological models with nonlinear incidence. Mathematical Biosciences and Engineering, 1 (1). pp. 57-60. Kozyreff, G. and Howell, P. D. (2004) The instability of a viscous sheet floating on an air cushion. Journal of Fluid Mechanics . (Submitted) Lasis, Andris and Suli, Endre (2004) One-parameter discontinuous Galerkin finite element discretisation of quasilinear parabolic problems. Technical Report. Unspecified. (Submitted) Little, M. A. and Heesch, D. (2004) Chaotic root-finding for a small class of polynomials. Journal of Difference Equations and Applications, 10 (11). pp. 949-953. ISSN 1023-6198 Little, M. A. and Moroz, I. M. and McSharry, P. E. and Roberts, S. J. (2004) Variational integration for speech signal processing. In: Proceedings of IMA Conference on Mathematics in Signal Processing VI, December 2004, Cirencester, UK. Mack, Austin N. F. (2004) A Solenoidal Finite Element Approach for Prediction of Radar Cross Sections. Technical Report. Unspecified. 
(Submitted) Madzvamuse, A. and Maini, P. K. and Wathen, A. J. (2004) A moving grid finite element method for the simulation of pattern generation by Turing models on growing domains. Technical Report. Unspecified. (Submitted) Maini, P. K. (2004) The impact of Turing's work on pattern formation in biology. Mathematics Today, 40 (4). pp. 140-141. Maini, P. K. (2004) Using mathematical models to help understand biological pattern formation. Comptes Rendus Biologies, 327 (3). pp. 225-234. Maini, P. K. and McElwain, S. and Leavesley, D. (2004) A travelling wave model to interpret a wound healing migration assay for human peritoneal mesothelial cells. Tissue Engineering, 10 (3/4). pp. Maini, P. K. and McElwain, S. and Leavesley, D. (2004) Travelling waves in a wound healing assay. Applied Maths Letters, 17 (5). pp. 575-580. Maini, P. K. and Schnell, S. and Jolliffe, S. (2004) Bulletin of Mathematical Biology - facts, figures and comparisons. Bulletin of Mathematical Biology, 66 (4). pp. 595-603. McInerney, D. and Schnell, S. and Baker, Ruth E. and Maini, P. K. (2004) A mathematical formulation for the cell-cycle model in somitogenesis: analysis, parameter constraints and numerical solutions. Mathematical Medicine & Biology, 21 (2). pp. 85-113. Miller, Keith (2004) Computed tomography from X-rays: old 2-D results, new 3-D problems. Technical Report. Unspecified. (Submitted) Miura, T. and Maini, P. K. (2004) Periodic pattern formation in reaction-diffusion systems -an introduction for numerical simulation. Anatomical Science International, 79 (3). pp. 112-123. Miura, T. and Maini, P. K. (2004) Speed of pattern appearance in reaction-diffusion models: Implications in the pattern formation of limb bud mesenchyme cells. Bulletin of Mathematical Biology, 66 (4). pp. 627-649. Moinier, P. and Giles, M. B. (2004) Eigenmode analysis for turbomachinery applications. Technical Report. Unspecified. 
Quantum Algorithms: Qudit Naming
Sunday, June 19, 2005

Entanglement Made Simple (PhysicsWeb) begs the (un)important question: what do you call qudits with more dimensions than 3? The accepted names are:

2 (binary) qubit
3 (ternary) qutrit
D (arbitrary) qudit

Clearly the first two come from bit and trit, the common terms in the classical domain, which are loosely based (I assume) on the Latin names for bases:

2 binary
3 ternary
4 quaternary
5 quinary
6 senary
7 septenary
8 octal
9 nonary
10 decimal
11 undenary
12 duodecimal
16 hexadecimal
20 vigesimal
60 sexagesimal

(Eric W. Weisstein. "Base." From MathWorld--A Wolfram Web Resource.)

If you took the boring route and named them after the Latin words, this is what you might get (your results on this [fairly pointless] exercise may vary):

4 quaternary: quatrit
5 quinary: quinit
6 senary: qusenit
7 septenary: quseptit
8 octal: quoctit
9 nonary: quonit
10 decimal: qudecit
11 undenary: qundenit
12 duodecimal: quduodecit
16 hexadecimal: quhexadecit
20 vigesimal: quvigesit
60 sexagesimal: qusexagesit

Those are mostly pretty terrible, almost as bad sounding as qualgorithm. So here's a modest proposal: instead of invoking Latin, replace the D in quDit with a number (but keep qubit and qutrit since they're pretty well accepted), e.g.:

2 (binary) qubit
3 (ternary) qutrit
4 (quaternary) qu4it
5 (quinary) qu5it
6 (senary) qu6it
10 (decimal) qu10it
16 (hexadecimal) qu16it

A bit hard to pronounce maybe, but easily recognizable in print.
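The replace-the-D proposal is mechanical enough to express in a few lines of code. A minimal Python sketch (the function name `qudit_name` is my own, not from the post):

```python
def qudit_name(d):
    # keep the two well-established names, as the post suggests
    if d == 2:
        return "qubit"
    if d == 3:
        return "qutrit"
    # otherwise splice the dimension number into "quDit"
    return "qu%dit" % d

print([qudit_name(d) for d in (2, 3, 4, 10, 16)])
# → ['qubit', 'qutrit', 'qu4it', 'qu10it', 'qu16it']
```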
Third Degree Polynomial Function question
November 3rd 2009, 07:36 AM #1

I have been working on this for hours and I cannot seem to get all the numbers to work, please. How would I do this? Find a third degree polynomial function such that f(0)=3 and whose zeros are 1, 2, and 3.

So firstly, how can you write a cubic in terms of its zeroes? Once you do that, you then have to adjust somehow for f(0)=3. Once you answer my first question, plug in x=0 and see what you get.
Last edited by Jameson; November 3rd 2009 at 01:38 PM.

f(0) = 3 and whose zeros are 1, 2, and 3. (x - a)(x - b)(x - c) = x^3 - (a + b + c)x^2 + (ab + bc + ac)x - (abc) = 0? Am I supposed to randomly pick numbers and put them in a 3rd degree polynomial, and keep reworking the problem until I get the answers 1, 2, 3? I think this is where I am confused.

No, no. You're making this too hard. Think of a quadratic. You factor these all the time into something like (x+a)(x-b). This means that -a and b are zeroes of the function. Why? Because plugging in either of those for x makes one of the parentheses 0, which makes the whole thing 0 since everything is multiplied. You can go backwards too. If I said a quadratic had zeroes of x=2,1, what would this be? It is just y=(x-2)(x-1). If you expand that out, it will look like a normal quadratic. The same idea goes with a cubic and with all powers. You can write any polynomial as a product of its zeroes.

Polynomials can always be factored into products of polynomials of lesser degree. Look at this example: $(x+c)(x+d) = x^2 + (c+d)x + cd$ The two factors on the left are polynomials of degree 1, and their product is a polynomial of degree 2. Treat it like addition: 1 + 1 = 2. Here's another example: $(x+a)(x^2 + bx + c) = x^3 + (b+a)x^2 + (ab + c)x + ca$ Note that a polynomial of degree 1 multiplied by a polynomial of degree 2 gives, by this addition rule, 1 + 2 = 3, so we've got a polynomial of degree 3 out of it.
But of course, there are other ways to make three. What about 1 + 1 + 1? If we multiply three polynomials of degree 1 together, we're going to get a cubic (degree 3). Now recall that we know the three roots of our polynomial, which are 1, 2 and 3. Let's write out our polynomial factors: $f(x) = (x + a)(x + b)(x + c)$ We know that all we have to do is get one of those linear factors to equal 0 to find a root. If f(1) = 0, then one of those factors must be $(x+(-1))$. Going on, if f(2) = 0 as well, then one of the other factors must be $(x+(-2))$, and so on. In fact, the whole picture is: $f(x) = (x-1)(x-2)(x-3)$ If you've followed me thus far, you should be able to expand those terms. That's not quite the whole story, though. Although it will work for f(1), f(2), f(3), it might not for f(0), so you'll have to adjust for a constant term.
Last edited by rowe; November 3rd 2009 at 10:12 AM.

That's how you write the cubic with x=1,2,3 being zeroes. x=0 gives a y-value of -6 though, so we have to add 9 to this whole thing so that x=0 gives a y-value of 3. So I get y=(x-1)(x-2)(x-3)+9. That might be what you wrote, I just didn't expand it out.

I'll probably get shot for contradicting the Administrator, but when you add 9 you no longer have a polynomial that vanishes when x = 1, 2 or 3. What you need to do to the polynomial (x-1)(x-2)(x-3) is to multiply it by a suitable constant so that it takes the value 3 at x=0.

Duh! Thank you for the catch. No infractions this time. My mistake. I'll finish the problem correctly now since I messed it up before. Like I said, x=0 gives a y-value of -6, so we must now find a number that multiplied by -6 yields 9. So -6S=9 -> S=-3/2. Final solution should be $y=\left( -\frac{3}{2} \right) (x-1)(x-2)(x-3)$ Hopefully Opalg won't find any more mistakes...

Duh! Thank you for the catch. No infractions this time. My mistake. I'll finish the problem correctly now since I messed it up before.
Like I said, x=0 gives a y-value of -6, so we must now find a number that multiplied by -6 yields 9.

9? Shouldn't that be 3?? So -6S=3 -> S=-1/2, and the final solution should be $y=\left( -\frac{\color{red}1}{2} \right) (x-1)(x-2)(x-3)$

Hopefully Opalg won't find any more mistakes...
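Opalg's corrected answer is easy to confirm by direct substitution. A quick check (my own, not part of the thread) of $y = -\frac{1}{2}(x-1)(x-2)(x-3)$:

```python
def f(x):
    # the thread's final answer: the root form scaled so that f(0) = 3
    return -0.5 * (x - 1) * (x - 2) * (x - 3)

assert f(0) == 3.0                  # required value at x = 0
assert f(1) == f(2) == f(3) == 0.0  # the three prescribed zeros
print("all checks pass")
```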
Cutler Bay, FL Algebra Tutor
Find a Cutler Bay, FL Algebra Tutor

...Having true compassion for my students, I am always a favorite teacher. I am very understanding with my students so that they don't feel bored or discouraged by the complexity of a topic. Also, I have the ability to look at life in a different way and t...
23 Subjects: including algebra 1, algebra 2, chemistry, ACT Math

...I know the GED exam very well, and I can tutor all the different subjects that are included in the test. These subjects are math, reading, sciences, and history. In math we can study all about algebra 1 and algebra 2; in reading we can study grammar and English, and we can work to take your essays to the next level.
23 Subjects: including algebra 2, algebra 1, English, reading

...While studying I tutored high school students from Ateneo de Manila University in Algebra, Geometry, Trigonometry and Physics. I also worked in various banks as an IT Manager and Vice President. As a manager, I also mentored the staff on various aspects of systems and program development.
4 Subjects: including algebra 1, algebra 2, geometry, trigonometry

...I have also worked as a teacher's assistant, photographer, and counselor at various summer camps. I am comfortable tutoring any age and have a lot of patience. I have worked with many students who have different ways of learning, so I adjust my plans and teaching methods to their needs.
15 Subjects: including algebra 1, algebra 2, geometry, trigonometry

I look forward to assisting my students to achieve great success. I have over twenty-one years of full-time successful teaching experience in both the public and private sectors, in both parochial and non-parochial schools. I am Florida State certified to teach math grades 5-8, math grades 6-12, and economics grades 6-12.
16 Subjects: including algebra 1, algebra 2, geometry, trigonometry
transformation of partial derivatives into spherical coordinates
December 13th 2012, 06:01 AM #1

Hello, please excuse my title if I stated the topic incorrectly. I was given an assignment to derive the quantum mechanical operator for the z-component of the angular momentum in spherical coordinates. I have found the solution, and the derivation uses the following relationship:

$\frac{\partial}{\partial y} = \frac{\partial r}{\partial y}\frac{\partial}{\partial r} + \frac{\partial \theta}{\partial y}\frac{\partial}{\partial \theta} + \frac{\partial \phi}{\partial y}\frac{\partial}{\partial \phi}$

I was curious if anyone might be able to tell me what this relationship is derived from. Whenever I search "transformation to spherical coordinates" or something along those lines, I find explanations of transforming each Cartesian coordinate into its spherical representation, but I don't see any transformation for the partial derivative of a Cartesian coordinate into spherical coordinates. If anyone could help me with the "correct" term for the above relationship, or the mathematical technique used to derive it, I will gladly google the details myself. Thank you very much!

Re: transformation of partial derivatives into spherical coordinates

Hey blaisem. This is known as the total derivative and is based on the chain rule in multiple dimensions: Total derivative - Wikipedia, the free encyclopedia

Re: transformation of partial derivatives into spherical coordinates

Hey blaisem. This is known as the total derivative and is based on the chain rule in multiple dimensions: Total derivative - Wikipedia, the free encyclopedia

Hi chiro, thanks a lot for the information!
I have since found a couple of other resources that have helped me in this process, for anyone else interested: Wolfram Demonstrations Project, "Envisioning total derivatives of scalar functions f(x,y)".

I have a couple of questions after looking at this to help complete my understanding; if you or anyone else would be willing to take a stab at them, I'd appreciate it! I'll start by trying to outline my current understanding of the topic, then I'll summarize my questions at the end.

So, if I have understood this correctly, the relationship above means that the rate of change with respect to y is given by three "vector components" corresponding to r, theta, and phi: the rate of change with respect to each of these variables, multiplied respectively by the rate of change of that variable with respect to y. If this is correct, then I know why the total derivative is formulated the way it is.

Now, the total derivative is only used when the variables are not independent. My question would then be: in the relationship I have in my original post, why would the radius, theta, and phi necessarily be dependent on one another? Is it because they can all change at the same time, i.e. as each variable is varied, the others are not necessarily held constant? The problem with this explanation is that if I try to extend it to a general case, it seems to conflict with my understanding of partial derivatives: taking the partial derivative with respect to one variable only describes how the function changes when that variable varies and the rest are held constant.

What exactly does one accomplish, then, when one takes the partial derivative $\frac{\partial}{\partial(xy)}$ of a function f(x,y)? Since the partial derivatives with respect to both variables are taken, both variables are allowed to change (i.e. are not held constant). This is exactly what I am doing in my relationship from the original post. So I am currently stumbling on:

1) What is the purpose of taking the partial derivative of all variables vs. the total derivative?
My understanding of each seems to overlap in purpose.

2) Why are the variables in my relationship considered dependent on one another, and therefore characterized appropriately by the total derivative instead of partial derivatives? If I were to approach the problem of deriving the relationship above without having ever seen it before, I would not have been able to tell you whether the spherical coordinates should be handled as dependent on one another or independent.

I hope I adequately explained where I am coming from. Thanks for any advice again!

Re: transformation of partial derivatives into spherical coordinates

For 1), it depends on what you are doing, but the reason for the total derivative is that you want to find the overall vector derivative that takes into account all of the individual components (like x,y or r,theta, etc.). A normal function maps R^n -> R, and the total derivative tells us the total change by considering the vectors. Think of it like Pythagoras' theorem, where the square of the hypotenuse is the sum of the squares of the other sides (even in n dimensions, which this extends to in Euclidean/Cartesian space).

For 2), it depends on the nature of the function and the geometry, but essentially we look at the smallest number of independent components and then consider these in the context of the chain rule, where we consider transformations that take us to the atomic, simplest independent variables. Basically, as an example, consider u(v(w(x))), where x is the atomic variable and the composition functions are chained together, so that the final derivative is du/dv * dv/dw * dw/dx.

As for d/d(xy), you can only do this if xy is a single variable. So think of it in terms of Pythagoras' theorem, but instead of lengths of some solid rectangle, they are rates of change and behave according to the laws of differentiation.
In fact we generalize geometry in the exact same way by modelling ds^2, where ds is the rate of change of length of a vector in a generic co-ordinate system, and this is typically referred to as tensor analysis, Riemannian geometry, or differential geometry.
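chiro's total-derivative answer is easy to sanity-check numerically. The sketch below (my own illustration, not from the thread; all function and variable names are my own) compares a direct finite-difference ∂/∂y of a test function against the three-term chain-rule sum from the original post:

```python
import math

def to_spherical(x, y, z):
    # physics convention: r radial, theta polar, phi azimuthal
    r = math.sqrt(x*x + y*y + z*z)
    return r, math.acos(z / r), math.atan2(y, x)

def g(r, theta, phi):
    # arbitrary smooth test function of the spherical coordinates
    return r**2 * math.sin(theta) * math.cos(phi)

def f(x, y, z):
    # the same function viewed in Cartesian coordinates
    return g(*to_spherical(x, y, z))

def ddy(func, x, y, z, h=1e-6):
    # central difference in the Cartesian y direction
    return (func(x, y + h, z) - func(x, y - h, z)) / (2 * h)

x, y, z = 1.0, 2.0, 3.0
r, theta, phi = to_spherical(x, y, z)
h = 1e-6

# partial of each spherical coordinate with respect to y
dr_dy     = ddy(lambda *p: to_spherical(*p)[0], x, y, z)
dtheta_dy = ddy(lambda *p: to_spherical(*p)[1], x, y, z)
dphi_dy   = ddy(lambda *p: to_spherical(*p)[2], x, y, z)

# partial of g with respect to each spherical coordinate
dg_dr     = (g(r + h, theta, phi) - g(r - h, theta, phi)) / (2 * h)
dg_dtheta = (g(r, theta + h, phi) - g(r, theta - h, phi)) / (2 * h)
dg_dphi   = (g(r, theta, phi + h) - g(r, theta, phi - h)) / (2 * h)

direct = ddy(f, x, y, z)                                            # d/dy computed directly
chain  = dg_dr * dr_dy + dg_dtheta * dtheta_dy + dg_dphi * dphi_dy  # chain-rule sum

print(abs(direct - chain) < 1e-5)
```

With central differences at step h = 1e-6, both truncation and round-off error stay far below the comparison tolerance, so the two values agree.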
Kendall Park Algebra Tutor
Find a Kendall Park Algebra Tutor

With two children of my own, I understand the difficulties kids have in grasping math concepts. While earning my BS at Boston University, and my MS at Rochester Institute of Technology, both in Computer Engineering, I tutored both college students and area elementary/high school students in math an...
25 Subjects: including algebra 2, algebra 1, geometry, computer science

...I have worked with college, high school, and grade school students. I have a genuine love for educating others and for increasing my own knowledge. The moments when students understand concepts provide the greatest job satisfaction I can imagine.
17 Subjects: including algebra 1, algebra 2, reading, chemistry

...I have also completed an educational technology course through Rutgers. In addition, I have completed other college course work related to my subject area. I am a New Jersey certified ESL
21 Subjects: including algebra 1, reading, English, grammar

I am currently enrolled at Rutgers University in the School of Arts and Sciences. My major is Cell Biology and Neuroscience and my minor is Psychology. I have been a private tutor for over 9 years and have also worked with other tutoring companies.
40 Subjects: including algebra 1, algebra 2, chemistry, reading

I have tutored dozens of students in SAT math, ACT math, and high school math with excellent results. Additionally, I have helped candidates working on their GMAT, GED, Series 7, and Series 66 exams. One-on-one tutoring has always proven to be the most effective teaching method.
16 Subjects: including algebra 2, algebra 1, geometry, GED
math world problems Number of results: 235,400 world history Re: Industrial Revolution Apart from external problems including urban crowding, insanitation, pollution, and social and order problems, what may have also been (2) problems created by industrialization/urbanization during this time? Saturday, February 9, 2008 at 8:16pm by hellga grammar (again) There are other problems with this sentence: As students, we are endowed with many [what are leniencies???] which are taken for granted in the [real world]. What do you mean by "the real world"? Students don't live in the real world?? Please reconsider your wording. Wednesday, April 9, 2008 at 9:16am by Writeacher why did e. e. cummings write the poem Mariann Morre --M in a vicious world-to love virtue A in a craven world-to have courage R in a treacherous world-to prove loyal I in a wavering world-to stand firm A in a cruel world-to show mercy N in a biased world-to act justly N in a ... Tuesday, June 3, 2008 at 11:57am by Angela y is vertical, x is horzontal. When the rock is at y=0, you need to find x. 0=-.005x^2+.48x-6.7 0=x^2-96x+1340 so x comes out 17, and 79 (rounded) So there are two answers, and in the real world, he could have thrown it with two different velocities. Remember these problems ... Thursday, April 26, 2012 at 12:30pm by bobpursley world war 1 world war 1: (1) how did the cuban fight for independence affect the United States? Give 2examples. (2) describe two problems faced by the builders of the panama canal. How was each problem solved? Tuesday, February 24, 2009 at 7:57pm by 8th grade social how do you do these problems?? 5-26 2-2/9 8-2/3 4/3+6 But I can do these types pf problems ( don't need help with problems below) 3/4 + 1/10 and 5/6 - 1/4 Please Help!! Thanks!!! Thursday, December 2, 2010 at 7:03pm by Erin Yolanda completed 14/25 of her math problems before her break and 1/5 of her math problems after her break. What fraction of her math problems did she complete? 
Thursday, November 8, 2012 at 10:32pm by Bell A derivation of the equation for this situation can be found at http://www.real-world-physics-problems.com/elastic-collision.html Thursday, October 20, 2011 at 11:28pm by drwls there's a question that I don't understand and the direction says that 'solve problems as combination integer problems.remember to rewrite all double sign problems as combination problems before solving' and here's a question 50- -61+ - 170 = Tuesday, January 22, 2008 at 10:03pm by allie Studies of Society In addition of Australia's problems, the whole world would go into mass hysteria and the WHO (World Health Organization) would spring into action to try to prevent further spread. Friday, September 7, 2007 at 10:59pm by LehrerinSagt world civilization ( the roman empire) What political, social, and economic problems beset Rome in the third and fourth centuries c.e.? How did Diocletian and Constantine deal with them? Were they effective in stemming the tide of decline and disintegration in the Roman Empire? What problems were they unable to ... Monday, October 1, 2012 at 12:11pm by lashay Kenneth has 22 math problems to do for homework. He has 12 problems done.How many more problems does he left?if he completes 1 problem every minute,how many more minutes does he have to work? Tuesday, August 27, 2013 at 4:51pm by Tanny I am having problems understanding how to solve problems like the following: a2 + b2 = c2 Please help me where I can understand solving problems such as the one above. Monday, March 30, 2009 at 9:27pm by Jasmine select one region of the world and identify at least two serious environmental problems, such as soil degradation, air or water pollution, pesticide misuse, overpopulation, wildlife extinction or threatened biodiversity, and deforestation, that impact this region. What seem to... Monday, October 1, 2007 at 1:05pm by Bruce Maria answered all the problems on her math test. She answered 80 percent of the problems correctly. 
If she answered 6 problems incorrectly, how many problems were on the test? Thursday, July 19, 2012 at 7:27pm by Willie Good and famous writers just don’t fall from sky rather acquire their art of good writing from this very world. Explain in detail how world teaches them to create writing master pieces by avoiding certain frequent writing lapses and problems. Tuesday, January 17, 2012 at 4:50am by hamid "speed of solving math problems" = "how fast math problems can be solved" Why is "OR" in caps? 5. "...See if the experimental group solved math problems significantly more rapidly than the control group." There may be a difference, but it needs to be statistically significant... Monday, December 23, 2013 at 11:52am by PsyDAG geography-please help!! Welcome to the real world! :-) Sometimes solutions to real problems create more problems. Often there are no "good" answers, but "better" and "worse" answers. In China's case, its one-child policy undoubtedly helped the majority of people prosper while it took away freedoms ... Monday, November 10, 2008 at 9:31pm by Ms. Sue matthew got 36 problems right out of a total of 40 problems on a test. What percent of the problems did he get right? Sunday, November 14, 2010 at 7:52pm by Michelle kenneth has 22 math problems to do for homeworks he has 12 problems done. How many more problems does he have left? If he completes 1 problem every minute, how many more minutes does he have to work? Wednesday, October 5, 2011 at 4:54pm by wonderer 5th grade Math Please help me with these math problems. #1 Matt subtracted 1.9 from 20.8 and got 1.8. Explain why this is not reasonable? #2 Explain why it is easier to find 10 minus 1.9 mentally than with paper and pencil. #3 On May 27, 2001, a high school student ran a mile in 3 minutes 53... 
Wednesday, September 30, 2009 at 5:04pm by Tim Mathematical Methods Following completion of your readings, complete exercises 35 and 37 in the “Real World Applications” section on page 230 of Mathematics in Our World. For each exercise, specify whether it involves an arithmetic sequence or a geometric sequence and use the proper formulas where... Tuesday, March 22, 2011 at 12:25am by John Math Help please!! how do you do these problems?? 5-26 2-2/9 8-2/3 4/3+6 But I can do these types pf problems ( don't need help with problems below) 3/4 + 1/10 and 5/6 - 1/4 Please Help!! Thanks!!! Thursday, December 2, 2010 at 7:19pm by Erin Please try some of the following links: http://search.yahoo.com/search?fr=mcafee&p=What+problems+in+the+business+world+requre+you+to+multiply+whole+numbers+ Sra Monday, March 7, 2011 at 9:18pm by SraJMcGin world history I can think of several problems. What does your book say? Saturday, January 7, 2012 at 10:09pm by Ms. Sue What are the most large ecological problems in the world todayÉ Saturday, January 14, 2012 at 11:12pm by Rebecca MATH - Homework dumping Please show us how you think these problems should be solved -- or where you're stuck. Do not post any more problems until you've shown us some effort on these other 10 or so problems. Monday, February 13, 2012 at 6:05pm by Ms. Sue Eden took a quiz with 15 problems and got 3/5 of the problems correct.how many problems did she get correct. Wednesday, February 5, 2014 at 7:16pm by Nicole Mike completed 19 math problems before dinner for tonight’s homework, and he completed x problems after dinner. Write an expression to determine the total number of problems he completed. Sunday, November 6, 2011 at 4:43pm by adam Good answer by Guru. To expand on it a little bit more, knowledge in itself isn't bad. The problem comes in with how we use knowledge. Notice that after eating the fruit, they are ashamed of themselves. Before that time, the focus of the world was how beautiful the world is. ... 
Saturday, October 6, 2007 at 3:41pm by MattsRiceBowl math i need help please Makeeda -- please don't just post your problems without giving us some idea of how you think the problems should be solved. You are not getting any favors by posting problems and then getting answers. You're taking this class to learn math, not to copy other people's work. Monday, January 25, 2010 at 8:48pm by Ms. Sue So I've been having trouble solving some problems. Mainly problems that look like this 5-(t+3)=-1+2(t-3) what causes problems is the 5- part. I asked my teacher for help, and he gave me a hint, which is to distribute a -1 like 5+(-1)(t)+(-1)(3) But I still don't get it? Can ... Thursday, October 7, 2010 at 5:49pm by katherine in the addition problems below each leter represents the same digit in both problems. Replace each letter with a different digi, 1-9, so that both addition problems are true.(there are two possible answers.) A B C A D G +D E F +B E H ------ ------ G H I C F I Thursday, August 23, 2007 at 5:36pm by kenzie don't know, but you have your work cut out for you. All you can do is your best. In the future, if you can find the time, don't just do the 6 assigned homework problems. Do the whole set, and if you're still having problems, go get the Schaum Outline for Trigonometry and there... Saturday, February 23, 2013 at 5:52pm by Steve U.S. History Explain the domestic problems confronted by the United States during World War I. Friday, October 26, 2012 at 11:28am by Heather social studies I assume you're a teenager and definitely not of my generation. What problems do you see that need to be solved by your generation? * global warming? * alternative energy? * affordable health care? * medical care and food for impoverished people around the world? * stable ... Tuesday, November 11, 2008 at 2:16pm by Ms. Sue There are any types of math problems- unfortunately we cannot help you unless you give us something to work with, like a difficult math problem. 
I do not really get math either sometimes, Elizabeth, but as Ms. Sue said, if you post a couple of confusing problems, we'll try to ... Wednesday, September 26, 2012 at 5:33pm by Delilah Problems the government faced in mobilizaing the public and the economy and the rising of an army for world war 1 Wednesday, March 3, 2010 at 6:36pm by Joy sat essay Is it necessary for us to find new solutions to problems? Many people may argue that we don¡¯t need to since we are already able to solve the problems. As far as I am concerned, we should always try to find new solutions to problems. New solutions enable us to work more ... Wednesday, December 2, 2009 at 8:59am by Mercedes You'll find the answer after you've correctly solved your math problems. If you need help with one or two of these problems, please post them and we'll try to help you. Tuesday, August 21, 2012 at 6:16pm by Ms. Sue Keep working on your math problems and you'll find your answer. If you need help on one or two of the problems, please post them here, and we'll try to help you with them. Sunday, March 31, 2013 at 9:27pm by Ms. Sue social problems analyzing cultural traits and items. kinds of foods and eating patterns differ in other cultures. for ex. in many parts of the world people use only their right hand to eat. investigate an area of the world in which you are interested and compose a report on the foods eaten ... Monday, September 24, 2012 at 12:30am by donnie If you don't know how to do the math problems, I guess you take a zero. I hope you find out how to do them tomorrow. Or -- you could post a couple of your problems here and I'll be glad to help you understand them. Wednesday, August 22, 2012 at 10:14pm by Ms. Sue I'm having problems with subtracting 10 digit math problems.Can you give me any websites? Tuesday, September 2, 2008 at 5:57pm by Nehemiah world history Describe the economic, political, social, and cultural problems faced by the new nations of Africa? 
Monday, October 15, 2007 at 12:02pm by cowgirl world civilization NEED HELP A.S.A.P What solutions did Augustus provide for the political problems that had plagued the Roman Republic? Thursday, September 27, 2012 at 12:50pm by tisha English Language I asked this question earlier. I need to write 2 letters of complaint about problems I face at school and problems I face as a student in general but I can't find any problems, could you help me list some problems? Its really really urgent. Monday, February 9, 2009 at 7:57pm by PLEASE HELP ME!!! What kind of Math are you doing? I think for any kind of math though, doing problems from the book may be the best way to study for an exam, but what you sound about how "if your teacher throws something diffrent at you, you bomb it" that right there tells me that as much as ... Sunday, April 25, 2010 at 12:12pm by Kelli I have a few math questions I made them just like my actual problems so that I can go through the steps you have went through to answer my own problems...Again THESE ARE NOT THE REAL PROBLEMS JUST EXAMPLES LIKE MINE WITH DIFFERENT NUMBERS. I don't know how to complete these ... Wednesday, June 12, 2013 at 5:40pm by Anonymous world geography following the breakup of the Soviet Union, whatare some of the major problems faced by many of the new countries Sunday, April 18, 2010 at 4:01pm by lisa algebra world problems You're right. However your first equation is confusing. Please use a different letter than O which can be confused with 0. Monday, October 17, 2011 at 9:22pm by Ms. Sue World War 1 wait tabby, what do u mean tell serbia to deal with its own problems. the germnas were never allies with them Sunday, April 29, 2012 at 7:38pm by Stephanie SAT Writing I can't find the post regarding to the essay about "Nothing in the world can take the place of Persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. 
Education will not; the world is full ... Wednesday, August 28, 2013 at 7:28pm by Anonymous World History short essay decribing the economic, political,social, and cultural problems, faced by the new nations of Africa. Thursday, September 13, 2007 at 11:31am by Cody problems faced in conserving kalimantan rainforest The tropical rain forest aer home to over_of the world's species of life. Sunday, December 23, 2007 at 7:44am by Ruby Algebra 1 ( Inequalities and their Graphs) look at any of the word problems in your text. Those are examples of real-world situations where a function is involved. Thursday, November 14, 2013 at 1:53pm by Steve These are all multiplication problems. 1/3 * 6/1 = 6/3 = 2 I'll be glad to check your answers for the other problems. Monday, February 11, 2013 at 4:06pm by Ms. Sue What problems? I have no idea what helped you solve these unknown problems. Thursday, January 23, 2014 at 6:09pm by Ms. Sue Math - homework dumping You've posted 8 problems with no indication of how you want us to help you with them. What don't you understand about these problems? Monday, August 13, 2012 at 5:47pm by Ms. Sue You need to have two ways the variables are related. Take a look at some of the coin problems, mixture problems, tickets sold problems, etc. Do a search in the box at the top right and I'm sure you will see how they are done. For example, if you have 7 coins (dimes and ... Tuesday, February 12, 2013 at 12:03am by Steve explaining math problems Are you talking about word problems? Monday, September 15, 2008 at 9:43pm by yeh Amy finished 6/12 of the problems on her timed test. Jackson finished 4/6 of the problems on the timed test. Did they finish the same fraction of the problems? Explain. Monday, March 9, 2009 at 8:39pm by Lisa Health (Ms. Sue) It states: "It is important to have an efficient and effective public health system to prevent or manage an infectious disease outbreak. 
Even though great progress has been made in the ability to protect the public's health, the methods and financial resources needed for such ... Thursday, November 21, 2013 at 4:31pm by Anonymous Algebra 1 Explain why any two regular n-sided polygons would be considered to be similar and how knowing this information could help in solving real-world problems. Monday, March 3, 2014 at 5:25pm by Zoey Alegbra 1 Explain why any two regular n-sided polygons would be considered to be similar and how knowing this information could help in solving real-world problems. Monday, March 3, 2014 at 10:59pm by HELP English 10 Check Someone please check! * is the answer I chose. For each of the following pairs of topics select the one that is more focused. 1.)Topic. a.)Why beaches are fun *b.)Why I love Miami Beach 2.)Topic. a.) The wonderful world of vegetables *b.)The health benefits of carrots 3.)Topic... Sunday, June 13, 2010 at 3:02pm by Ariel * I try to make a better world by _______________. 1. recycling used materials. 2. building some senior centers after I become wealthy. 3. inventing useful robots that can do a good job for the world. 4. becoming the President of our country. 5. making our country rich so that... Sunday, May 31, 2009 at 5:22pm by John Can you help with three problems 1)2x^2+7x+3 2)6x^2+5x-4 3)4x^2+12x+9 I have to factor this problems.Please help Wednesday, March 17, 2010 at 10:06pm by lauren Thank you. If you dont mind can you look over some more problems if available? I just want to ensure that I am doing the problems correctly Saturday, March 20, 2010 at 6:20pm by Tasha Direct answer : George, in your life time you will never need to solve such problems, or similar problems. Friday, July 15, 2011 at 2:36am by Reiny math conversion factors 1. Homework problems solved in 2 hours 30 minutes at a rate of 26 problems an hour. Tuesday, February 5, 2013 at 10:37pm by rachel ENGLISH, HELP? 
World problems: Israeli-Palestinian conflict slavery drug cartels clean water for everyone global warming renewable resources Monday, March 18, 2013 at 4:19pm by Ms. Sue 8th Grade Algebra I am having problems understanding how to solve problems like the following: a2 + b2 = c2 Please help me where I can understand solving problems such as the one above. Wednesday, March 25, 2009 at 8:16pm by Jasmine Compare and contrast the western world view with the deep ecology world view. Challenge one of the world views presented in Visualizing Environmental Science. Which world view is closest to your own? Thursday, December 6, 2012 at 11:42pm by Jose AP World History You can post as many as you want. However, people who post six or more very similar questions (usually math problems) in a row may not receive answers. In other words, we tend to ignore what seems to be "homework dumping," but we're glad to help students who seem to be really ... Thursday, May 26, 2011 at 2:48pm by Ms. Sue What problems in the business world requre you to multiply whole numbers Monday, March 7, 2011 at 9:18pm by Twila math 012 Hi, I am trying to do homework for my math 012 class- solving linear equations- and I need help with 9 of the problems. I would really appreciate any help with this. I can do the problems and send them to you, or however you prefer. Thanks! Friday, January 30, 2009 at 11:53pm by amy Which region of the world have you selected? What have you found about its environmental problems? Monday, October 1, 2007 at 1:05pm by Ms. Sue world history this is my answer to the question: explain in detail 3 seperate causes of the bolshevik Revolution. There were many causes of the Bolshevik Revolution. One of these reasons was poor leadership from Czar Nicholas. He held very strict control over Russia. There were many ... Wednesday, January 12, 2011 at 5:59pm by billy world history explain in detail 3 seperate causes of the Bolshevik Revolution. 
is this answer good???: There were many causes of the Bolshevik Revolution. One of these reasons was poor leadership from Czar Nicholas. He held very strict control over Russia. There were many problems existing ... Wednesday, January 12, 2011 at 6:40pm by billy essay intro repost Biomedical engineering is a field that saves peoples’ lives by using advanced technology in medicine. Being a biomedical engineer requires being good at solving problems in biology and medicine. Engineers design new devices and instruments which they can use to do research or ... Monday, October 22, 2007 at 6:21pm by julie evaluating functions For some reason I can't get these 3 problems correct: g(x) = x^2(x-5) g(3/2) g(c) g(t+2) Any help would be greatly appreciated. I contribute as much as I can and I don't seem to get any help. Maybe these are too hard of problems? Friday, January 23, 2009 at 9:11pm by strawberryfields See your post with the detailed answer of how these type problems are done. All the problems you posted are done the same exact way. Time to get to work. Sunday, February 13, 2011 at 7:19pm by helper See your post with the detailed answer of how these type problems are done. All the problems you posted are done the same exact way. Time to get to work. Sunday, February 13, 2011 at 7:09pm by helper See your post with the detailed answer of how these type problems are done. All the problems you posted are done the same exact way. Time to get to work. Sunday, February 13, 2011 at 6:58pm by helper See your post with the detailed answer of how these type problems are done. All the problems you posted are done the same exact way. Time to get to work. Sunday, February 13, 2011 at 6:55pm by helper Sky, I've done two examples for you. All your problems are the same. Your turn to try the other problems. Post your answers and I will check them for you. Wednesday, February 23, 2011 at 2:03pm by Helper Math 8 - help!!! 
15 - x = 2 (x +3) 15y + 14 = 2(5y + 6) 1/2 (6x - 4) = 4x - 9 4(3d -2) = 8d - 5 please help me with these problems. I done other 20 problems just to let you know. Wednesday, October 17, 2012 at 9:42pm by Laruen Problems: How to get straight A's How to get along with siblings How to curb global warming How to provide clean water for everyone in the world How to eliminate malaria How to choose a college Wednesday, April 7, 2010 at 5:41pm by Ms. Sue I lost my paper for the Math Riddle Pizazz page 167 about solving systems of equations and I no longer have the problems. (It is the "did you hear about the farmer who gave birdseed to his cows..." page) What are the math problems for it? Monday, February 28, 2011 at 5:16pm by Caroline I appreciate all of your help. We had about 77 of these problems to do. These are the LAST two problems that I need some help on. Thank you so much for everything. 1/x-1 minus 2/x^2-1= -1/2 2/y plus y-1/3y= 2/5 Saturday, May 1, 2010 at 1:49pm by Brian I appreciate all of your help. We had about 77 of these problems to do. These are the LAST two problems that I need some help on. Thank you so much for everything. 1/x-1 minus 2/x^2-1= -1/2 2/y plus y-1/3y= 2/5 Saturday, May 1, 2010 at 4:12pm by Brian World History Evidentally he means that the problems come from little government power. He's advocating a strong federal government. Wednesday, September 30, 2009 at 7:47pm by Ms. Sue Ben can complete 3 math problems in 20 minutes. If he continues working at the same rate, how long will it take Ben to complete 15 math problems? Wednesday, January 26, 2011 at 2:55pm by Anonymous Ben can complete 3 math problems in 21 minutes. If he continues working at the same rate, how long will it take Ben to complete 16 math problems? thanks . Sunday, March 13, 2011 at 4:27pm by Imani Ben can complete 4 math problems in 20 minutes. If he continues working at the same rate, how long will it take Ben to complete 17 math problems? 
Tuesday, April 5, 2011 at 3:11pm by HELP I'm working on some review problems in Math and having trouble on the critical thinking one. The question says is it possible for 2 numbers to have the same LCM and GCF? Explain. I'm not sure where to begin on this. I know how to do both types of problems but don't know how to... Friday, August 10, 2012 at 7:39pm by Brandi World Religion Now, I am learning about HINDUISM. The question is " In what way do the Upanishads(Core teaching of Vedanta) speak of the problems and possibilities of human existance? Help me please. Saturday, October 17, 2009 at 12:47am by A-tan World History Some of the evils of the Industrial Revolution included dirt, pollution, child labor, and unsafe working conditions. What did England do to try and correct these problems? Saturday, September 29, 2012 at 1:15pm by Ms. Sue i have got a lot of math problems to do...if you can just help me with 3 of these problems...i could do the rest...thank you so much!! factor each expression::: 1. 20xy 2. 36g^2h^2 3. 44m^2n Wednesday, August 18, 2010 at 7:45pm by savannah
full bridge rectifier equations - All About Circuits Forum

Hi, I am having some trouble deriving or finding some help on this. I want to get an equation which describes the output of a full bridge rectifier. I have found the following information thus far:

$V_o = V_{peak} - 0.5V_{ripple}$

$V_{ripple} = \frac{V_{peak}}{fCR}$

When you calculate this, you will get a single DC output value. However, I want an equation which gives more detail, one that includes the ripple voltage. In my mind, this equation should be in a sinusoidal form, but I can't seem to see how to arrive at that. Does anybody know how I can do this analysis, or maybe have some references available for this?
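To make the formulas quoted in the post above concrete, here is a minimal Python sketch that evaluates them. The component values below are illustrative assumptions, not taken from the thread:

```python
def ripple_voltage(v_peak, f, c, r):
    """Peak-to-peak ripple estimate V_ripple = V_peak / (f * C * R),
    the approximation quoted in the post."""
    return v_peak / (f * c * r)

def dc_output(v_peak, f, c, r):
    """Average output V_o = V_peak - 0.5 * V_ripple."""
    return v_peak - 0.5 * ripple_voltage(v_peak, f, c, r)

# Illustrative values: 17 V peak, full-wave rectified 60 Hz mains
# (so ripple frequency is 120 Hz), 470 uF filter cap, 1 kOhm load.
v_peak, f, c, r = 17.0, 120.0, 470e-6, 1000.0
print(round(ripple_voltage(v_peak, f, c, r), 3))  # 0.301
print(round(dc_output(v_peak, f, c, r), 3))       # 16.849
```

The time-domain waveform the poster is after is usually modeled piecewise rather than with a single sinusoid: the capacitor voltage follows the rectified sine near each peak, then discharges exponentially through the load until the next half-cycle catches up with it; the formula above is the small-ripple limit of that piecewise model.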
[Numpy-discussion] ticket #605 Timothy Hochberg tim.hochberg@ieee.... Wed Apr 9 12:24:39 CDT 2008 On Wed, Apr 9, 2008 at 7:01 AM, David Huard <david.huard@gmail.com> wrote: > Hello Jarrod and co., > here is my personal version of the histogram saga. > The current version of histogram puts in the rightmost bin all values > larger than range, but does not put in the leftmost bin all values smaller > than bin, eg. > In [6]: histogram([1,2,3,4,5,6], bins=3, range=[2,5]) > Out[6]: (array([1, 1, 3]), array([ 2., 3., 4.])) > It discards 1, but puts 2 in the first bin, 3 in the second bin, and 4,5,6 > in the third bin. Also, the docstring says that outliers are put in the > closest bin, which is false. Another point to consider is normalization. > Currently, the normalization factor is db=bin[1]-bin[0]. Of course, if the > bins are not equally spaced, this will yield a spurious density. Also, I'd > argue that since the rightmost bin covers the space from bin[-1] to > infinity, it's density should always be zero. > Now if someone wants to explain all that in the docstring, that's fine by > me. I fully understand the need to avoid breaking people's code. I simply > hope that in the next big release, this behavior can be changed to something > that is simpler: bins are the bin edges (instead of the left edges), and > everything outside the edges is ignored. This would be a nice occasion to > add an axis keyword and possibly weights, and would make histogram > consistent with histogramdd. I'm willing to implement those changes, but I > don't know how to do so without breaking histogram's behavior. Here's one way which is more or less what they tend to do in the core Python to avoid breaking things. 1. Choose a new name for histogram with the desired behavior. 'histogram1D' for example. 2. 
Add the function with the new behavior to major release X and modify the old 'histogram' to produce a PendingDeprecationWarning (which by default does nothing; you need to change the warning filter to see it). 3. In major release X+1, change the PendingDeprecationWarning to a DeprecationWarning. Now people will start to see warnings when they use it. 4. In major release X+2, rip out histogram. So, if you got the new version into 1.1, in 1.2 it would start complaining when you used histogram and in 1.3 histogram would be gone, but the new version would be in its place. In this way, there's no point where the behavior of histogram just changes subtly; since it disappears, one is forced to figure out where it went and implement appropriate changes in one's code. > I just got Bruce's reply, so sorry for the overlap. > David > 2008/4/9, Jarrod Millman <millman@berkeley.edu>: > > > > Hello, > > > > I just turned this one into a blocker for now. There has been a very > > long and good discussion about this ticket: > > http://projects.scipy.org/scipy/numpy/ticket/605 > > > > Could someone (David?, Bruce?) briefly summarize the problem and the > > current proposed solution for us again? Let's agree on the problem > > and the solution. I want to have something similar to what is > > written about median for this release: > > http://projects.scipy.org/scipy/numpy/milestone/1.0.5 > > > > I agree with David's sentiment: "This issue has been raised a number > > of times since I follow this ML. It's not the first time I've proposed > > patches, and I've already documented the weird behavior only to see > > the comments disappear after a while. I hope this time some kind of > > agreement will be reached." > > > > If you give me the short summary I will make sure Travis or Eric > > respond (and I will put it in the release notes).
> > > > Thanks, > > > > > > -- > > Jarrod Millman > > Computational Infrastructure for Research Labs > > 10 Giannini Hall, UC Berkeley > > phone: 510.643.4014 > > http://cirl.berkeley.edu/ > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion@scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion
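The staged deprecation outlined in the message above can be sketched in Python. The function names and placeholder body below are illustrative, not the actual numpy code:

```python
import warnings

def histogram1d(data, bins):
    """Stand-in for the renamed function with the desired behavior."""
    return len(data), bins  # placeholder body for the sketch

def histogram(data, bins):
    """Old name, release X: emits PendingDeprecationWarning, which is
    silent unless the user changes the warning filter.  Release X+1
    would swap in DeprecationWarning; release X+2 removes this function."""
    warnings.warn("histogram is deprecated; use histogram1d",
                  PendingDeprecationWarning, stacklevel=2)
    return histogram1d(data, bins)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # surface the normally-silent warning
    histogram([1, 2, 3], bins=2)
print(caught[0].category.__name__)  # PendingDeprecationWarning
```

The design point is that callers never see a silent behavior change: the old name keeps its old semantics until it disappears outright, and the warning category is escalated one release at a time.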
Encinitas Science Tutor Find an Encinitas Science Tutor ...I am a biology student at UCSD and will be graduating in June. I passed my IB Biology HL exam while in high school, and received an A in my honors biology course in high school as well. I received 5's in both of the AP calculus tests, and am a UCSD biology student and so use Calculus on a regular basis in my classes. 42 Subjects: including ACT Science, reading, biology, writing ...I have extensive knowledge of developmental, neurobiological and learning disorders. I am able to tutor up to high school level in: Biology, Algebra, Chemistry, and Writing. I was diagnosed with ADHD in 8th grade and was fortunate enough to have supportive parents, teachers, and tutors who helped me to build up my confidence and reach my potential. 23 Subjects: including botany, psychology, nutrition, physics ...The CBEST tests basic Mathematical knowledge like multiplication in word problem format. I know how to take a student and have them be fluent in their Mathematical skills and pass the test. I have an Electrical Engineering degree (BSEE) from University of California Irvine. 28 Subjects: including chemistry, astronomy, golf, baseball ...I recently just retook calculus 1-3 and received an A as well as reviewed this subject for the GRE. I have run study groups and have tutored other math subjects. My major is currently in math, and I'm working to become a math teacher. My strength is breaking down what seems like big concepts and relating it to stuff students have seen before. 13 Subjects: including organic chemistry, statistics, chemistry, calculus ...I tutored the subject for a year, offering aid with understanding variability among microbial physiology and qualities for subverting host defenses. To anyone looking for help keeping track of the information, I can provide clear diagrams of how different microbes function. I studied Organic Chemistry for a year and earned an A in each of the three courses I took.
10 Subjects: including biology, genetics, algebra 1, algebra 2
Seminar Details: LANS Informal Seminar "A Continuous Multilevel Solver for the Vertex Separator Problem" DATE: July 28, 2011 TIME: 10:30:00 - 11:30:00 SPEAKER: James Hungerford, MCS Summer Student and Ph.D. Student, Dept of Mathematics, University of Florida LOCATION: Bldg 240 Conference Center 1416, Argonne National Laboratory Given an undirected graph G, the vertex separator problem is to find the smallest number of nodes whose removal disconnects the graph into disjoint subsets A and B, where A and B are subject to size constraints. We will show how this problem can be formulated as a continuous quadratic program. We use the QP as a local processor in a multilevel scheme for solving large scale instances of the problem. Numerical results will be presented. Please send questions or suggestions to Krishna: snarayan at mcs.anl.gov.
Matches for: AMS/IP Studies in Advanced Mathematics 2009; 491 pp; hardcover Volume: 45 ISBN-10: 0-8218-4823-2 ISBN-13: 978-0-8218-4823-4 List Price: US$119 Member Price: US$95.20 Order Code: AMSIP/45 This book consists of two independent works: Part I is "Solutions of the Einstein Vacuum Equations", by Lydia Bieri. Part II is "Solutions of the Einstein-Maxwell Equations", by Nina Zipser. A famous result of Christodoulou and Klainerman is the global nonlinear stability of Minkowski spacetime. In this book, Bieri and Zipser provide two extensions to this result. In the first part, Bieri solves the Cauchy problem for the Einstein vacuum equations with more general, asymptotically flat initial data, and describes precisely the asymptotic behavior. In particular, she assumes less decay in the power of \(r\) and one less derivative than in the Christodoulou-Klainerman result. She proves that in this case, too, the initial data, being globally close to the trivial data, yields a solution which is a complete spacetime, tending to the Minkowski spacetime at infinity along any geodesic. In contrast to the original situation, certain estimates in this proof are borderline in view of decay, indicating that the conditions in the main theorem on the decay at infinity on the initial data are sharp. In the second part, Zipser proves the existence of smooth, global solutions to the Einstein-Maxwell equations. A nontrivial solution of these equations is a curved spacetime with an electromagnetic field. To prove the existence of solutions to the Einstein-Maxwell equations, Zipser follows the argument and methodology introduced by Christodoulou and Klainerman. To generalize the original results, she needs to contend with the additional curvature terms that arise due to the presence of the electromagnetic field \(F\); in her case the Ricci curvature of the spacetime is not identically zero but rather represented by a quadratic in the components of \(F\). 
In particular the Ricci curvature is a constant multiple of the stress-energy tensor for \(F\). Furthermore, the traceless part of the Riemann curvature tensor no longer satisfies the homogeneous Bianchi equations but rather inhomogeneous equations including components of the spacetime Ricci curvature. Therefore, the second part of this book focuses primarily on the derivation of estimates for the new terms that arise due to the presence of the electromagnetic field. Titles in this series are co-published with International Press, Cambridge, MA. Graduate students and research mathematicians interested in general relativity. "Both parts are well written. ...the book should be of interest to anyone who is doing research in mathematical relativity." -- Mathematical Reviews
Decomposition of solvable Lie group

Suppose $G$ is a connected Lie group whose radical is $R$. It is known that the solvable group $R$ can always be decomposed as $R=UT$ where $U$ is a simply-connected normal subgroup of $R$ and $T$ is a compact abelian subgroup of $R$ with $U\cap T = 1_G$. We know that $R$ is a normal subgroup of $G$.

Question: Is $U$ necessarily a normal subgroup of $G$?

Tags: lie-groups, solvable-groups

Comment: Your statement "it is known..." is not true. Example: $G=R$ the quotient of the 3-dimensional Heisenberg group by an infinite discrete central subgroup. – Yves Cornulier Mar 25 '13 at 17:40

2 Answers

As Yves Cornulier already said, your presumed statement is wrong. Any connected, linear, solvable Lie group over the reals is the semidirect product of a compact abelian subgroup and a simply connected normal subgroup. (This holds more generally for algebraic, connected, solvable Lie groups over a field of characteristic 0, as can be found in Chevalley's "Théorie des groupes de Lie".) It is in general false for non-linear Lie groups, which explains Yves Cornulier's counterexample: the quotient of the Heisenberg group by its central discrete cyclic subgroup is not linear. (The non-linearity of this group is proved, e.g., in "The Structure of Compact Groups: A Primer for Students, a Handbook for the Expert" by Hofmann and Morris, page 169.)

Comment: Thank you. But if G is linear, is the answer to the question Yes? – Li Yu Mar 26 '13 at 2:45

In the case of connected linear algebraic groups it is true: any inner automorphism of $G$ is an algebraic group automorphism of $R$, and so it carries all the unipotent elements of $R$ to unipotent elements. (See Section 19, "Connected Solvable Groups", in J.E. Humphreys' textbook "Linear Algebraic Groups".)
Math Help

Post #1 (April 18th 2010, 08:14 AM; MHF Contributor, Mar 2010): An nxn matrix is said to be nilpotent if $A^k=0$ for some positive integer $k$. Show that all eigenvalues of a nilpotent matrix are 0. I have proved by mathematical induction that, for $m\geq1$, $\lambda^m$ is an eigenvalue of $A^m$. I don't know if that should help.

Post #2 (April 18th 2010, 08:22 AM; MHF Contributor, Apr 2005): Yes, that certainly does help! If $v$ is an eigenvector of $A$ corresponding to eigenvalue $\lambda$, then $v$ is also an eigenvector of $A^m$ corresponding to eigenvalue $\lambda^m$. In particular, if $\lambda$ is an eigenvalue of $A$, then $\lambda^k$ is an eigenvalue of $A^k$; that is, for some non-zero vector $v$, $A^k v= \lambda^k v$. But $A^k v= 0$ for any vector, so we have $\lambda^k v= 0$, with $v$ non-zero.

Post #3 (April 18th 2010, 08:29 AM; MHF Contributor, Mar 2010): So that is all it is then?

Post #4 (April 18th 2010, 08:59 AM; MHF Contributor, Apr 2005): Well, what do you conclude from $\lambda^k v= 0$?

Post #5 (April 18th 2010, 09:11 AM; MHF Contributor, Mar 2010): I should probably conclude lambda is zero.
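A quick numerical illustration of the thread's claim (this snippet is mine, not from the forum): the matrix $A=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$ satisfies $A^2=0$, so it is nilpotent with $k=2$, and both of its eigenvalues are 0.

```python
import cmath

def eigvals2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] as the roots of the
    characteristic polynomial x^2 - (a+d)x + (a*d - b*c)."""
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

# A = [[0, 1], [0, 0]] is nilpotent: A squared is the zero matrix.
print(eigvals2x2(0, 1, 0, 0))  # both eigenvalues are 0
```
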
Summary of the Properties of a Möbius band with T=1π
• Only a band with finite width/height is dissymmetric (chiral). It belongs to the C2 (Abelian) point group
• A band with finite width/height and T=1 has one C2 axis, which can bisect the (idealized) cycle at any arbitrary point
• The C2 axis intersects the cycle at precisely two points
• For a band with finite width/height, these points are not equivalent (the C2 Abelian point group has no degeneracies)
• A journey starting at any arbitrary point on the surface takes two traversals of the cycle to return to the starting point
• In the limit, a cycle with zero width/height lies exclusively in 2D space
© H. S. Rzepa
[racket] Looking for feedback on code style From: David Van Horn (dvanhorn at ccs.neu.edu) Date: Thu Sep 9 11:26:17 EDT 2010 On 9/9/10 10:04 AM, Prabhakar Ragde wrote: > I don't think vectors help very much in this case (median-finding). For > the given code, the O(n) access to the middle of the list is dominated > by the cost of the sorting, which is at least O(n log n) [*]. > It is theoretically possible to compute the median in O(n) time, but the > method is complicated and not very practical. But sorting definitely > does too much work. If only the median (or the kth largest, a problem > called "selection") is needed, a method which is both practical and of > pedagogical interest stems from adapting Quicksort, which is a good > exercise after section 25.2 of HtDP. This has expected cost O(n) on > random data, and vectors offer no asymptotic advantage over lists. --PR > [*] Technically, "at least Omega(n log n)". The original post got me interested in median algorithms and I started to read up on the selection problem. Wikipedia (I know, I know) says the same thing as you: medians can be computed in O(n) time and points to selection as the way to do it. But I don't see how to use selection to achieve an O(n)-time median algorithm -- selection (of the kth largest/smallest element) is O(n), but that's where k is some fixed constant. To compute the median, you let k=n/2 (right?), so it's no longer constant. Can you point me to (or sketch) an O(n) method? Or just correct me if my reasoning is going astray. Posted on the users mailing list.
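To illustrate the point at issue (my sketch, not from the thread): quickselect's expected-linear bound does not require k to be a fixed constant. The partition step is linear, and each recursive call works, on average, on a constant fraction of the remaining input regardless of which k is requested, so k = n/2 (the median) is covered by the same analysis. A minimal Python version:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs.

    Expected O(len(xs)): partitioning around a random pivot is linear,
    and on average the recursion keeps only a constant fraction of the
    list, whatever k is (including k = n // 2 for the median)."""
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))

def median(xs):
    """Upper median for even-length input; a convention choice."""
    return quickselect(xs, len(xs) // 2)

print(median([7, 1, 5, 3, 9]))  # 5
```
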
Costa Mesa Algebra 2 Tutor Find a Costa Mesa Algebra 2 Tutor ...I'm a local guy and graduated from Newport Harbor in 1993. Go Sailors! I've been tutoring math (algebra, geometry, algebra 2, trigonometry, precalculus, calculus, SAT Math, ACT Math) for most of my life. My schedule is wide open, so I am available on an hour's notice. 20 Subjects: including algebra 2, chemistry, geometry, algebra 1 ...I have experience working as a private math tutor and at an established math tutoring company. I am extremely patient and understanding, with an adaptable teaching style based on the student's needs. I specialize in high school math subjects like Pre-Algebra, Algebra, Algebra 2/Trigonometry, Precalculus and Calculus. 9 Subjects: including algebra 2, calculus, geometry, algebra 1 ...AP Calculus AB/BC Currently, I am helping a couple of high school students in AP Calculus AB/BC and preparing them for the AP Calculus test this May. I usually will assign my students some extra practice problems every week. 4. Statistics/Probability/AP Statistics I have advanced statistics cour... 19 Subjects: including algebra 2, calculus, geometry, statistics ...I have a GPA of 4.00 and I am a member of the Irvine Valley College's honor program. During my education, I have had great professors who made me feel the subjects, not memorize them; and I am doing my best to do the same for my students. As a tutor, I use different methods to teach each stud... 18 Subjects: including algebra 2, chemistry, geometry, biology ...Understanding each student's difficulties in mathematics is important. Every student has their own challenges and these must be addressed accordingly. I like to start from the beginning with a new student and find out exactly what is causing some lack of understanding. 14 Subjects: including algebra 2, calculus, statistics, SAT math
Tired of calculating your work experience every time you write a resume? Just enter the dates and it calculates your total experience at once. Recommended for job seekers and career changers. Planned updates: * Save career entries * Bulk delete. Developer contact:

RealCalc Plus is the enhanced version of Android's #1 Scientific Calculator, RealCalc - a fully featured scientific calculator which looks and operates like the real thing. RealCalc Plus includes the following features: * Traditional algebraic or RPN operation * Fraction calculations and conversion to/from decimal * Degrees/minutes/seconds calculations and conversion * Result history * User-customizable unit conversions * User-customizable constants * Percentages * 10 memories * Binary, octal, and hexadecimal (can be enabled in Settings) * Trig functions in degrees, radians or grads * Scientific, engineering and fixed-point display modes * Landscape mode * Configurable digit grouping and decimal point * Full built-in help * If you want data size conversions in multiples of 1024, use kibibytes, mebibytes, gibibytes, etc - see en.wikipedia.org/wiki/Kibibyte. * If the percent key appears to give wrong answers, make sure you are pressing '=' at the end, e.g. '25 + 10 % =' will give 27.5. * If sin/cos/tan functions don't give the answer you are expecting, make sure you are in the correct angle mode. Degrees, radians and grads are supported, indicated by DEG, RAD, GRAD in the display. Use the DRG key to change mode. * If any of the digit keys are disabled, or the decimal point doesn't work, or you have answers with letters in, or basic arithmetic appears to be wrong, then you are in binary, octal or hexadecimal mode. Press DEC to return to decimal operation. If you don't need these modes, please make sure that 'Enable Radix Modes' is disabled in the settings. * If you can't find HEX, BIN or OCT modes, go to the settings and make sure that 'Enable Radix Modes' is checked. Please read the help for more information.

The Panecal is a scientific calculator with editable expressions.
The Panecal shows expressions on a multi-line display, helping you prevent input mistakes. In addition, you can easily modify expressions by moving the cursor on the display. * Re-editable and re-callable expressions * Result and expression history * Decimal, binary, octal, and hexadecimal * Base conversions * Main memory and 6 variable memories * Percentages * Arithmetic, trigonometric, inverse trigonometric, exponential and logarithmic functions, powers, roots, factorial, and absolute value. * DEG, RAD, GRAD modes. * Floating-point, fixed-point, scientific and engineering display modes. * Configurable decimal separator and grouping separator * Configurable number of bits for base conversions * BS key, DEL key, INS key. * Landscape mode * Key input confirmation by vibration and orange colors [System environment] Android OS 2.1 to 2.3.x, Android OS 3.x, 4.x. APPSYS does not accept responsibility for any loss which may arise from reliance on the software or materials published on this site.

Powerful simulator of the classic calculators, with advanced features and easy to use. The same calculator we all know, now on your smartphone. * Percentages * Memories * Trig functions in degrees, radians or grads * Scientific, engineering and fixed-point display modes * Configurable digit grouping and decimal point

★ Several different calculators integrated into one, keeping accidental user errors to a minimum with the built-in calculator. Convenient and smart. ★ Integrated converter. The free version displays ads.
★ Converter configuration: 1) Simple: basic arithmetic functions 2) Scientific: arithmetic operations, trigonometric functions, mathematical functions 3) Statistics: mean, standard deviation 4) Notation: hexadecimal, decimal, octal, binary 5) Unit conversion: length, width, speed, weight and other unit conversions 6) Date calculation: date differences in years, months, weeks and days 7) Percent: various percentage calculations ★ Other user-friendly features: 1) History storage: specify how many history entries are kept 2) Memory management: store and manage results in memory 3) UNDO: step back to the previous state of a formula 4) Specify decimal places 5) Initial settings: specify the initial launch screen. Created by: Woo Sung App software, woocheol, kim

USA TODAY named Calculator Plus among its "25 Essential Apps", calling it the "handy calculator app that's garnered great user ratings". I'm Calculator Plus - the perfect calculator for Android. I'm easy to use and beautifully designed to do things better than your phone or handheld calculator ever did. I love saving you time and effort. I remember everything you calculate, and let you review it anytime, making me perfect for shopping, doing homework, balancing checkbooks, or even calculating taxes. And if you quit the calculator and go do something else, it's all still here when you come back. You'll never need to type the same calculation twice again.
I'm attractive and effective and I make great use of your big, beautiful display: - You'll never forget where you are in a calculation - I show you exactly what's happening at all times - I remember everything, so you can take a break, then come back later and pick up where you left off - I show your calculations in clear, elegant type that's easy to read, with commas just where they should be - You can use backspace anytime to correct a simple mistake, instead of starting over - Use memory to keep a running total you can actually see - My percentage key shows exactly what it did, so you're not left confused - Swipe memory keys aside for advanced math functions! - NEW! Full support for Samsung Multi-Window - true multitasking for your Galaxy device. - My intuitive, lovable design makes it simple to do everyday calculations on your phone or tablet. Let Calculator Plus and your phone or tablet finally put that handheld calculator to rest! This is an ad supported version - our ad-free version is also available. Calculator Plus (C) 2013 Digitalchemy, LLC

Universal free, everyday-use calculator with scientific features. One of the top. Good for simple and advanced calculations! * Math expression calculation (built on an RPN algorithm, but without an RPN-style UI!) * Percentages (calculate discount, tax, tip and more) * Radix mode (HEX/BIN/OCT) * Time calculation (two modes) * Trigonometric functions. Radians and degrees with DMS feature (Degree-Minute-Second) * Logarithmic and other functions * Calculation history and memory * Digit grouping * Cool color themes (skins) * Large buttons * Modern, easy and very user friendly UI * Very customizable! * NO AD! * Very small apk * More features will be added. Stay in touch! :) OLD NAME is Cube Calculator. PRO-version is currently available on Google Play.
KW: mobicalc, mobicalculator, mobi, calc, cubecalc, mobicalcfree, android calculator, percentage, percent, science, scientific calculator, advanced, sine, simple, best, kalkulator, algebra, basic

Do you want an Android app with a unique flavor? Do you want to show off to others? Do you want a nice calculator on your phone or tablet? Download my calculator and you will get all of the above! What's more, you can always have a convenient and helpful calculator with you! Do not hesitate to download. The calculator will be updated frequently; I aim to make it more and more beautiful and functional!

A calculator with 10 computing modes in one application + a handy scientific reference facility - different modes allow: 1) basic arithmetic (both decimals and fractions), 2) scientific calculations, 3) hex, oct & bin format calculations, 4) graphing applications, 5) matrices, 6) complex numbers, 7) quick formulas (including the ability to create custom formulas), 8) quick conversions, 9) solving algebraic equations & 10) time calculations.
Functions include: * General Arithmetic Functions * Trigonometric Functions - radians, degrees & gradients - including hyperbolic option * Power & Root Functions * Log Functions * Modulus Function * Random Number Functions * Permutations (nPr) & Combinations (nCr) * Highest Common Factor & Lowest Common Multiple * Statistics Functions - Statistics Summary (returns the count (n), sum, product, sum of squares, minimum, maximum, median, mean, geometric mean, variance, coefficient of variation & standard deviation of a series of numbers), Bessel Functions, Beta Function, Beta Probability Density, Binomial Distribution, Chi-Squared Distribution, Confidence Interval, Digamma Function, Error Function, Exponential Density, Fisher F Density, Gamma Function, Gamma Probability Density, Hypergeometric Distribution, Normal Distribution, Poisson Distribution, Student T-Density & Weibull Distribution * Conversion Functions - covers all common units for distance, area, volume, weight, density, speed, pressure, energy, power, frequency, magnetic flux density, dynamic viscosity, temperature, heat transfer coefficient, time, angles, data size, fuel efficiency & exchange rates * Constants - a wide range of inbuilt constants listed in 4 categories: 1) Physical & Astronomical Constants - press to include into a calculation or long press for more information on the constant and its relationship to other constants 2) Periodic Table - a full listing of the periodic table - press to input an element's atomic mass into a calculation or long press for more information on the chosen element - the app also includes a clickable, pictorial representation of the periodic table 3) Solar System - press to input a planet's orbit distance into a calculation or long press for more information on the chosen planet 4) My Constants - a set of personal constants that can be added via the History * Convert between hex, oct, bin & dec * AND, OR, XOR, NOT, NAND, NOR & XNOR Functions * Left Hand & Right Hand 
Shift * Plotter with a table also available together with the graph * Complex numbers in Cartesian, Polar or Euler Identity format * Fractions Mode for general arithmetic functions including use of parentheses, squares, cubes and their roots * 20 Memory Registers in each of the calculation modes * A complete record of each calculation is stored in the calculation history, the result of which can be used in future calculations. An extensive help facility is available which also includes some useful scientific reference sections covering names in the metric system, useful mathematical formulas and a detailed listing of physical laws containing a brief description of each law. A default screen layout is available for each function showing all buttons on one screen or, alternatively, all the functions are also available on a range of scrollable layouts which are more suitable for small screens - output can be set to scroll either vertically (the default) or horizontally as preferred - output font size can be increased or decreased by long pressing the + or - A full range of settings allows easy customisation - move to SD for 2.2+ users. Please email any questions that are not answered in the help section or any requests for bug fixes, changes or extensions regarding the functions of the calculator - glad to help wherever possible. This is an ad-supported app - an ad-free paid version is also available for a nominal US$ 0.99 - please search for Scientific Calculator (adfree)

1. Calculate Percents & Percentages 2. Percent Discounts (sale price) 3. Percent Markups (increase by) 4. Percent Margin (selling price) 5. Calculate Tips. 6. Percentage Difference (Change) 7. Percentage (what % of), i.e. x is what percentage of y. Enter any two values and the third is computed, e.g. [23]% of [x] = 115; x will be computed (500). Customize background colors.

This app is an app that lets you calculate while looking at the formula.
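The percentage relationships enumerated in the listing above (percent-of, "what % of", discount, markup) are each one-liners; a minimal Python sketch of them, with function names of my own choosing rather than anything from the app:

```python
def percent_of(p, b):
    """p% of b, e.g. 23% of 500 -> 115."""
    return b * p / 100

def what_percent(a, b):
    """What percent a is of b, e.g. 115 is 23% of 500."""
    return a / b * 100

def base_from(p, a):
    """Solve 'p% of x = a' for x, e.g. 23% of x = 115 -> 500."""
    return a * 100 / p

def discount(price, p):
    """Sale price after a p% discount."""
    return price * (1 - p / 100)

def markup(cost, p):
    """Price after increasing cost by p%."""
    return cost * (1 + p / 100)

print(percent_of(23, 500))   # 115.0
print(base_from(23, 115))    # 500.0
```

"Enter any two values and the third is computed" then amounts to picking whichever of `percent_of`, `what_percent`, or `base_from` matches the missing quantity.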
Calculations follow the laws of arithmetic. Unlike a typical calculator, expressions are evaluated in the correct order, respecting parentheses and the precedence of multiplication and division. If you make a mistake in a formula, you can correct it by tapping that part of the formula. The programming-style "slash /" for division and "asterisk *" for multiplication are also available.

Calculator++ is an advanced, modern and easy to use scientific calculator #1. Calculator++ helps you to do basic and advanced calculations on your mobile device. Discuss Calculator++ on Facebook: http://facebook.com/calculatorpp 1. Always check angle units and numeral bases: trigonometric functions, integration and complex number computation work only for RAD!!! 2. Application contains ads! If you want to remove them, purchase the special option from the application settings. Internet access permission is needed only for showing the ads. ADS ARE ONLY SHOWN ON THE SECONDARY SCREENS! If internet is off - there are no ads! ++ easy to use ++ home screen widget + no need to press the equals button any more - the result is calculated automatically + smart cursor positioning + copy/paste in one button + landscape/portrait orientations ++ drag buttons up or down to use special functions, operators etc ++ modern interface with possibility to choose themes + highlighting of expressions + history with all previous calculations and undo/redo buttons ++ variables and constants support (built-in and user-defined) ++ complex number computations + support for a huge variety of functions ++ expression simplification: use 'identical to' sign (≡) to simplify current expression (2/3+5/9≡11/9, √(8)≡2√(2)) + support for Android 1.6 and higher + open source NOTE ABOUT INTERNET ACCESS: Calculator++ (version 1.2.24) contains advertisement which requires internet access. To get rid of it, purchase a version without ads (can be done from the application's settings). How can I get rid of the ads?
You can do it by purchasing the special option in the main application preferences. Why does Calculator++ need the INTERNET permission? Currently the application needs this permission for only one purpose - to show ads. If you buy the special option, C++ will never use your internet connection. How can I use the functions written in the top right and bottom right corners of a button? Push the button and slide lightly up or down. Depending on the value shown on the button, that action will occur. How can I toggle between radians and degrees? To toggle between different angle units you can either change the appropriate option in the application settings or use the toggle switch located on the 6 button (the current value is highlighted in yellow). Also you can use the deg() and rad() functions and the ° operator to convert degrees to radians and vice versa. 268° = 4.67748 30.21° = 0.52726 rad(30, 21, 0) = 0.52971 deg(4.67748) = 268 Does C++ support %? Yes, the % function can be found in the top right corner of the / button. 100 + 50% = 150 100 * 50% = 50 100 + 100 * 50% * 50% = 125 100 + (100 * 50% * (25 + 25)% + 100%) = 150 100 + (20 + 20)% = 140, but 100 + (20% + 20%) = 124 100 + 50% ^ 2 = 2600, but 100 + 50 ^ 2% = 101.08 Does C++ support fractional calculations? Yes, you can type your fractional expression in the editor and use ≡ (in the top right corner of the = button). Also you can use ≡ to simplify an expression. 2/3 + 5/9 ≡ 11/9 2/9 + 3/123 ≡ 91/369 (6-t) ^ 3 ≡ 216 - 108t + 18t ^ 2 - t ^ 3 Does C++ support complex calculations? Yes, just enter a complex expression (using i or √(-1) as the imaginary unit). ONLY IN RAD MODE! (2i + 1) ^ 2 = -3 + 4i e ^ i = 0.5403 + 0.84147i Can C++ plot a graph of a function? Yes, type an expression which contains one undefined variable (e.g. cos(t)) and click on the result. In the context menu choose 'Plot graph'. Does C++ support matrix calculations?
No, it doesn't. Keywords: calculator++ calculator ++ engineer calculator, scientific calculator, integration, differentiation, derivative, mathematica, math, maths, mathematics, matlab, mathcad, percent, percentage, complex numbers, plotting graphs, graph plot, plotter, calculation, symbolic calculations, widget

I'm Fraction Calculator Plus and I'm the best and easiest way to deal with everyday fraction problems. Whether you're checking homework, preparing recipes, or working on craft or even construction projects, I can help: - Wish you could find the time to check your kids' math homework? Now checking fraction math takes just seconds. - Need to adjust recipe quantities for a larger guest list? Let me adjust your cup and teaspoon quantities. - Working on a craft or home project in inches? Stop double-or-triple calculating on paper - let me do it once, accurately. I'm attractive and effective and I make great use of either a phone or tablet display: - I show your calculations in crisp, clear, elegant type that you can read at a glance from a distance. - My innovative triple keypad display lets you type fast! (entering three and three quarters takes just 3 taps!) - Every fraction result gets automatically reduced to its simplest form to make your job easy. - NEW! Every result is also shown in decimal to make conversion a breeze. - It couldn't be easier to add, subtract, multiply, and divide fractions. Let Fraction Calculator Plus turn your phone or tablet into an everyday helping hand. This is an ad supported version - our ad-free version is also available. Fraction Calculator Plus (C) 2013 Digitalchemy, LLC

When cutting with a ball end mill, you can perform various calculations. 1) You can easily calculate the true working diameter of the ball end mill. - Real cutting diameter calculation. - Real cutting speed calculation. - Necessary spindle speed (number of revolutions).
- Necessary cut depth (from cutting speed). 2) Ball end mill cutting produces a theoretical surface roughness on the side. - Cutter outer diameter and pick feed: cutting time becomes longer as the pick feed is made smaller, so calculate both at the same time. 3) Cutting diameter * Work engagement [ae] -> Feed/tooth 4) Work corner R * Inside diameter cutting = Cutting speed up

PowerCalc is a powerful Android scientific calculator with a real look. It is one of the few Android calculators with support for complex number equations. Features: * Real equation view editor with brackets and operator priority support * Component or polar complex entry/view mode * Equation and result history * 7 easy to use memories * Large universal/physical/mathematical/chemical constant table * Degrees, radians and grads modes for trigonometric functions * Fixed, scientific and engineering view modes * Easy to use with a real look * Advertisement free! Would you like a multiline equation editor with equation syntax highlighting, active bracket highlighting and support for trigonometric functions of a complex argument? Upgrade to PowerCalc Pro. * Multiline equation editor * Equation syntax highlighting * Active bracket highlighting * Trigonometric functions with complex argument support Stay tuned! We are preparing new functionality: * Unit conversions * Radix modes * Help Found a bug? Please contact us to fix it. If you find PowerCalc useful, please upgrade to PowerCalc Pro to support further development. Thank you!

Scientific calculator with all functions, easy to use. It includes a screen where you can type unlimited characters and operations, use parentheses and operator precedence; the result is displayed on the second line of the display. You can modify or correct an operation. Kal Scientific calculator Features: * Allows function graphs * New functionality (FML), 100 built-in formulas * Typical operations (add, subtract, multiply, divide).
* Power functions (nth power, nth root, squaring, square root, cube root) * Logarithmic functions (log10, ln, powers of 10, exp) * Trigonometric functions (sin, cos, tan, including inverse and hyperbolic) * Three angle modes (DEG, RAD, GRA) * Random number generator * PI * Permutations (nPr) and combinations (nCr) * Absolute value, factorial. * Allows numbers in scientific notation. * Includes major scientific constants. * Basic operations and number-system conversion (decimal, hexadecimal, octal, binary) * Set decimal places. * History of the last 10 operations and results. * Memory storage of results. Statistical mode, only available in Kal Pro: * standard deviation * arithmetic mean * sum of values

★ MyCalc is a fully featured All-In-One calculator for your everyday calculations. ★

★★★★★ "It has everything from a currency converter with live rate updates to a scientific calculator that has a range of functions, unit conversions and constants to a percentage calculator that'll tell you how much a loan will cost. It's brilliant. Since moving from Windows Phone 8 I've been looking for a program that equals calculator² on that platform and I think I've found something even better. This is THE calculator app you need. I'll be deleting the others that just weren't quite good enough. This excels." - Davy Strange (Feb 21, 2014)

★★★★★ "Nice app. Works great, helps me very much in the office and on the go. Keep up the great work." - Randy Salazar (Nov 7, 2013)

★★★★★ "Very Well Done. I usually don't rate utility apps, but this calculator is great. I like the currency ticker on the currency calculator screen, I realized that currencies can be added instead of having a huge list to sort through. The discount calculator will probably be the most useful to me, it is very cleverly set up. I have not been through the entire calculator, but will update if my opinion changes as I go. Good Job, Dev!."
- Julie Hafford (Nov 6, 2013)

MyCalc includes: Scientific, Standard, Currency, Tip and Percent calculators, which can be accessed from the menu. ● My Calc Features: ✓ Result history ✓ Unit conversion calculator ✓ Physical constants table ✓ Traditional algebraic operation ✓ Permutations (nPr) & Combinations (nCr) ✓ Trigonometric and hyperbolic functions in radians, degrees and grads ✓ Scientific, engineering and fixed-point display modes ✓ Calculation memory support ✓ Full support for percentages (20 + 10% = 22) ✓ Decimal degrees into degrees, minutes, and seconds converter ✓ Tip calculator - calculates tips quickly and easily and splits the bill between any number of people. ✓ Calculator with percentage (calculate discount, tax, margin and more with Percent calc) ✓ Currency converter calculator - track currencies from around the world with live currency rates. Easily convert between your favorite currencies.

More from developer: A block puzzle game where you collect and remove 3 or more blocks by dragging them diagonally, horizontally, or vertically. You can also remove many blocks at once, and chaining combos earns higher scores: each combo in the chain doubles your score.
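Several of the calculators above (RealCalc, Calculator++, MyCalc) advertise conversion between decimal degrees and degrees/minutes/seconds. A minimal sketch of that conversion, assuming non-negative angles and rounding to the nearest whole second:

```python
import math

def to_dms(decimal_degrees):
    """Split non-negative decimal degrees into (deg, min, sec),
    rounded to the nearest whole second."""
    total = round(decimal_degrees * 3600)
    return total // 3600, (total % 3600) // 60, total % 60

def from_dms(d, m, s):
    """Combine degrees, minutes, seconds into decimal degrees."""
    return d + m / 60 + s / 3600

print(to_dms(30.35))                      # (30, 21, 0)
print(math.radians(from_dms(30, 21, 0)))  # ~0.5297 rad
```

Working in whole seconds (degrees x 3600) sidesteps the floating-point drift that a naive subtract-and-multiply split would show for values like 30.35.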
Method for calculating sedimentary rock pore pressure - Baroid Technology, Inc.

1. Field of Invention

The present invention relates to an improved method for calculating the pressure of fluid contained in a sedimentary rock which has been naturally compacted under the influence of gravity. A more accurate calculated pore pressure profile at various depth ranges, produced according to the method of this invention, yields valuable geological information useful in hydrocarbon recovery.

2. Background

Pore fluid pressure is the major factor affecting the planning and drilling of an oil well. The borehole fluid hydrostatic pressure must be greater than the formation pore fluid pressure if one is to avoid the possibly catastrophic risk of blowout. Likewise, the borehole fluid circulating pressure must be less than fracture propagation pressure if one is to avoid the risk of lost circulation. Several expensive casing strings are usually required so that an oil well can be drilled within these two pore fluid pressure and fracture propagation pressure limits. The present invention thus enhances the safety of oil or gas well drilling operations, and also reduces the overall cost of hydrocarbon recovery by providing more reliable information to a drilling operator and thus avoiding complicated correction operations. Because of its critical relationship to drilling operations, there are numerous techniques for calculating pore fluid pressure. All known petrophysical prior art methods calculate pore fluid pressure indirectly based upon measured rock properties, e.g., rock density or drilling rate of penetration. Most of these methods follow a calibration procedure which is not based on mechanical or physical information. Instead, these calibration procedures are generally based upon an observed empirical relationship between a measured physical parameter and a "normal" or hydrostatic compaction trend.
The "normal" trend line is the average value of the measured parameter, which changes as a function of depth. The change in the measured parameter according to these prior art techniques is thus related to a change in compaction of the sedimentary rock. Sedimentary rocks are compacted by the stress applied to their grain matrix framework, which is not solely a function of depth. When fluid pressure is approximately hydrostatic and the overburden is gradually increasing, both depth and stress are increasing. Under these conditions, depth behaves as a pseudo-stress variable. However, when pore pressure is elevated, effective stress and overburden gradients can be either increasing or decreasing, and depth is not a pseudo-stress variable. Most of the prior art methods for determining pore fluid pressure use depth as a pseudo-stress variable in both "normal" and "excess" pressured intervals, which results in significant pore pressure calculation errors. Another significant failing of prior art pore pressure calculation techniques is attributable to their basic formulation. According to prior art techniques, pore pressure (P) is calculated as a sum of "normal" hydrostatic fluid pressure (Pn), which is inferred from the compaction-depth trend, plus a differential or "excess" fluid pressure (ΔP), which is related to a measured difference from the "normal" trend. The equation expressing this relationship is: P = Pn + ΔP (1) Equation 1 is a physically incorrect mathematical formulation. In fact, Pascal's Principle requires that all of the fluids in a given local pore space or container be at the same pressure. Since the "excess" pressure term (ΔP) does not exist in nature, there is no way it can be physically related to a measured parameter. Calibrating a measured physical parameter to a quantity which does not exist (ΔP) is not sound.
The "normal compaction" vs depth trend line methods give the drilling operator a false sense of confidence based entirely upon the hydrostatic (Pn) calibration interval, wherein depth is a pseudo-stress variable. Pascal's Principle is not violated in the upper hydrostatic (Pn) interval because ΔP=0 and P=Pn. Unfortunately, this sense of confidence gained in the (Pn) calibration interval is then transferred to the associated empirical "excess" pressured (ΔP) calibration where two entirely different conditions apply. In the "excess pressured" interval, depth is not a pseudo-stress variable. Also, the change in the measured physical parameter, such as density, resistivity, or rate of penetration, is related according to this prior art technique to the positive (ΔP) term of Equation 1, which violates Pascal's Principle. Apparent success of pore pressure predictions derived from these methods below the base of the hydrostatically compacted interval may be due to a coincidence between pressure and depth which is peculiar to a given area or depth range. Any correspondence between calculated and observed pore pressures cannot be attributed to a physical relationship between the measured parameter and the excess fluid pressure, however, because such a relationship does not physically exist. Lacking a physical cause-effect relationship, these prior art methods have been judged on a raw observed pressure vs. hydrostatic fluid pressure (Pn) trend basis. The (ΔP) calibration correlates the difference between the observed measurement and the projected (Pn) normal compaction trend. There is no data to support the (Pn) projection below the top of the overpressured (ΔP) zone because known pressure (P) is above (Pn). Consequently, all these methods include depth below the base of (Pn) as a contributing calibration factor. To make these "calibrations" work, similar depth-pore pressure profiles are taken within a given study area. 
What is presumed to be pore pressure prediction accuracy using these methods is actually a raw vs. an averaged form of the same pressure data within a given area. The scatter of data about its own average trend is more commonly known as measurement precision. A narrow scatter within a given study area, such as reported in a 1965 article by Hottman et al., also means that depth-pore pressure profiles are similar within the area. In that case, the only measurement that is needed to successfully predict pore pressure is the measured depth to the top of the overpressured zone. A paper published that same year by Matthews et al shows both positively and negatively curving correlations of (ΔP) to resistivity in different study areas and depth ranges. If there were a direct correlation between resistivity and pore pressure, one would expect one relationship or the other, but not both. The dozens of pore pressure methods in practice today which follow the P = Pn + ΔP formulation violate a law of physics in their fluid pressure calibration and are flawed since they are not based on valid theories. U.S. Pat. No. 4,833,914 to Rasmus is an example of a P = Pn + ΔP method which violates Pascal's Principle. Rasmus volumetrically subdivides total rock porosity into overpressured porosity, effective porosity, and water porosity. A response equation solver then uses these terms to solve for pore pressure. As all fluid molecules are free to exchange position with each other through Brownian movement, there is no boundary between these artificially calculated pore volumes and no natural way to define them. The overpressured pore volume used by Rasmus is also a (ΔP) term which exists in the same total pore space as "normally pressured" pore volume, which further violates Pascal's Principle. The method uses complicated statistics to converge on these artificially calculated, physically non-existent pore volume terms.
This patent discloses pressure results being calculated in shales only from the "overpressured porosity" term. Although this calibration technique is performed statistically with a computer, it has the same physical shortcomings as the methods described in the previous paragraph. U.S. Pat. No. 5,081,612 to Scott et al discloses a method for determining formation pore pressure from remotely sensed seismic data. This particular method and the prior art methods cited in this patent depend upon a hydrostatically compacted reference velocity profile. Referring back to Equation 1, this profile is essentially an observed or inferred curved (Pn) velocity gradient. The Scott et al pore pressure gradient technique applies to only one lithology, which is common to most of the prior art methods using a P = Pn + ΔP formulation. Pore pressures are calculated with respect to the reference velocity gradient, which again is a violation of Pascal's Principle. A 1990 article by Haas presented a seismic data pore pressure method which accounts for the difference in formation velocity which is a function of lithology and not pore pressure. These lithologic changes are "normalized" out by either addition or subtraction to make a smooth (Pn) velocity trend. After normalization, a velocity overlay is developed which empirically relates P = Pn + ΔP by using lithology-normalized velocity as the measured parameter. To operate properly, this Haas method would require all lithologies to compact in the same manner after normalization. Different lithologies did not compact similarly before their transit time offset normalization, and there is no logical basis to presume that they would compact similarly after offset normalization. The Haas procedure does not make rock compactional sense, and results derived therefrom should be suspect.
There are at least three prior art methods for determining pore fluid pressure from petrophysical measurements which are based upon the effective stress law, first elucidated by Terzaghi in 1923 through compactional studies of marine sediments:

P = S - σ[v]   (2)

This relationship states that the fluid pressure in the pore space (P) can be calculated as the difference between the total overburden load (S) and the load borne by the sediment grain-grain contacts (σ[v]). In the science of rock and soil mechanics, this σ[v] term is known as the effective stress. The effective stress law is not widely used today for pore pressure prediction for various reasons, including the absence of an effective σ[v] calibration technique. Effective stress was ignored by most geologic compaction studies, which instead evaluated geologic compaction as depth-porosity functions. Overburden gradients which differ considerably from place to place were assumed to be equal or uniformly varying. Although pore pressure was mentioned as a possible explanation for porosity differences, it was not subtracted from the total overburden load (S) to calculate effective stress. The mechanical effective stress explanation for the differences in porosity vs depth trends was thus ignored by geologists. The differences between porosity vs depth compaction curves were instead attributed to geologic factors such as geologic age and temperature. Articles by Maxwell published in 1964, and by Schmoker et al. in 1988 and 1989, evidence this. A 1972 article by Baldwin et al. unified the compaction of shales worldwide through use of a power law solidity vs depth relationship. These researchers re-cast the then-standard shale porosity vs depth curves, noting that each of the compaction curves from 14 worldwide basins fell within 2% of the Baldwin et al. worldwide average power law solidity vs depth relationship.
These researchers then substituted effective stress (σ[v]) for depth in a power law equation of the same form:

σ[v] = σ[max] (Solidity)^(α+1)   (3)

In this equation, the σ[max] term is the power law intercept of the compaction curve with the 100% solidity axis. σ[max] is the effective stress that will cause complete compaction of the sedimentary particle mixture. α+1 is the slope of the power law compaction function for that granular material. This seemingly simple mathematical substitution transformed the Baldwin et al. unified depth (pseudo-stress) empirical compaction function into a mechanically sound stress-strain relationship. The critical difference between this and all other compaction functions is that effective stress is the load applied to the sedimentary rock grain matrix framework. Solidity is a linear function of the compactional strain experienced by that rock grain matrix framework. Calibration using this equation represents a sound cause-effect relationship based on valid mechanical theories. However, Baldwin et al. made no attempt to calculate pore pressure using this approach. The accompanying discussion of sandstone compaction curves in the Baldwin et al. article indicated that sandstone compaction was apparently not governed by power law functions. The wide variance observed between sandstone compaction curves from different basins apparently suggested to them that no unified sandstone compaction function was possible. A 1987 article by Holbrook et al. and U.S. Pat. No. 4,981,037 applied the effective stress law to pore pressure prediction using a power law effective stress compaction function. The initial σ[max] and α+1 constants used were those expressed in the Baldwin et al. article. The method was highly successful at predicting pore pressures in mid-shelf and off-shelf Gulf Coast sandstone-shale sequences.
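Equations 2 and 3 together give a direct route from measured solidity to pore pressure. The sketch below is illustrative only: the function names are hypothetical, the constants are the quartz sand entry from Table 1 of this disclosure, and the overburden and solidity inputs are arbitrary example values.

```python
# Sketch of the effective stress pore pressure calculation
# (Equations 2 and 3). Constants for quartz sand are taken from
# Table 1 of this disclosure; all other inputs are illustrative.

def effective_stress(solidity, sigma_max, alpha):
    """Equation 3: sigma_v = sigma_max * solidity**(alpha + 1)."""
    return sigma_max * solidity ** (alpha + 1)

def pore_pressure(overburden_psi, solidity, sigma_max, alpha):
    """Equation 2 (Terzaghi): P = S - sigma_v."""
    return overburden_psi - effective_stress(solidity, sigma_max, alpha)

# Quartz sand constants from Table 1: sigma_max = 130000 psi, alpha = 13.219
sv = effective_stress(0.80, 130000.0, 13.219)      # 20% porosity rock
p = pore_pressure(9000.0, 0.80, 130000.0, 13.219)  # assumed 9000 psi overburden
print(round(sv), round(p))                         # sv is roughly 5445 psi
```

Note how strongly the α+1 exponent amplifies small solidity changes into large effective stress changes, which is why accurate porosity data matters so much to this approach.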
However, very deep, highly sand prone wells forced a change of the effective stress constants σ[max] and α to higher values than suggested by Baldwin et al. The revised constants include the effects of pore pressure and are based upon calculated stress rather than pseudo-stress. The revised constants are more accurate and cover a broader depth and stress range than the Baldwin et al. constants. 1989 and 1992 articles by Alixant also disclose the use of the effective stress law for pore pressure prediction. However, Alixant used a single laboratory-derived compaction function, which he applied to shales only. In field testing, the compaction constant could not accurately cover the range of shale solidities. This method requires considerable changes in unrelated non-physical constants to match observed pore pressure data within a given local area. It is known, however, that strain hardening changes the compaction function of a rock. A constant laboratory compaction function can calculate stress from strain accurately only where the constant coincidentally matches the changing compaction function. Another 1989 article, by Bryant, also disclosed an attempt to use the effective stress law for pore pressure prediction. Bryant used an average exponential function to calculate overburden as a function of depth rather than data from the well. His results were inaccurate partially because of this average exponential function, and partially because he used the same compaction function for sandstones and shales. Bryant's method is not in common use today, possibly due to these large inaccuracies. Holbrook extended the effective stress concept to the prediction of vertical fracture propagation pressure in a 1989 article. This approach was at least 4 times more accurate than prior art fracture pressure methods. Leakoff tests calibrated using this effective stress method all fell at or below the calculated overburden for that depth.
Kehle noted in a 1964 article that all his observed onshore leakoff tests fell below the calculated overburden. However, neither the Kehle nor Holbrook articles used this observation as a feedback mechanism to improve the calculation of formation pore fluid pressure. The disadvantages of the prior art are overcome by the present invention, and improved methods and techniques are hereinafter disclosed for more accurately calculating pore pressure of sedimentary rock which has been naturally compacted under the influence of gravity. The techniques of the present invention provide more meaningful pore pressure profiles which are useful in hydrocarbon recovery. The present invention provides an improved technique based on sound mechanical theories for calculating the pressure of fluid contained in a sedimentary rock which has been naturally compacted under the influence of gravity. The effective stress portion of the method encompasses both internal and external measures of rock grain matrix strain. Thus the same effective stress calibration can be applied equally well to externally measured rock thickness data and petrophysically measured rock porosity data. The power law effective stress-strain relationship for any sedimentary rock can be determined from the weighted average of the power law functions of the minerals which compose that sedimentary rock. The overburden calibration portion of the method takes advantage of an upper leakoff test limit of sub-horizontal fracture propagation pressure and a lower leakoff test limit of sub-vertical fracture propagation pressure. All leakoff tests within a given well or local area can be used for calibration. Barring other mechanical problems, all measured leakoff tests should fall within these two borehole fluid pressure limits, which are related to the far field stresses.
It is an object of this invention to provide improved techniques both for calculating sedimentary rock pore pressure and for graphically depicting pore pressure data in a manner which facilitates understanding of geological factors and thus geophysical analysis. It is a feature of this invention that an initial overburden may be more accurately determined than in prior art techniques by utilizing leakoff pressure test data. It is a further feature of this invention that the maximum effective stress of a mineral is related as a power law function. Still a further feature of this invention is that a linear relationship of effective stress of a mineral may be used to accurately extrapolate the effective stress for a rock containing a mixture of minerals. A significant advantage of the present invention is that additional and costly equipment is not necessary in order to make more accurate determinations of sedimentary rock pore pressure. A further advantage of this invention is that the technique may be used for various combinations of rock containing different mineral compositions. These and further objects, features, and advantages of the present invention will become apparent from the following detailed description, wherein reference is made to the FIGURES in the accompanying drawings. FIG. 1 graphically displays, for a well bore, calibrated pore pressure, mud weight, sub-vertical fracture propagation pressure, and overburden pressure (which is equated with sub-horizontal fracture propagation pressure) according to the techniques of the present invention. FIG. 2 is a schematic representation of mechanical and chemical compaction mechanisms for rock comprising calcite grains, quartz grains, and shale particles. FIG. 3 graphically depicts average compaction curves for various lithologies from the Po Valley according to the prior art. FIG.
4 graphically depicts stress as a function of porosity for various lithologies from the Po Valley data according to the present invention. FIG. 5 graphically depicts input petrophysical well data displayed on the left side and the related critical pressure output data on the right side, measured and calculated according to the present invention. The practical application of the effective stress law to pore pressure prediction requires accurate estimates of overburden stress (S) and accurate estimates of effective vertical stress (σ[v]) from compactional strain data. An error of 200 psi or more in the combined pore pressure calculation would seriously limit the usefulness of any well-site pore pressure prediction method. Techniques for calculating or estimating these stress values are described separately below. 1. Overburden stress - leakoff test calibration. The most reliable known measurement of overburden stress requires the use of a borehole gravimeter, which must be clamped and held steady in a borehole for about 1/2 hour so that a stable measurement can be made. Two borehole gravimeter readings are used in this stress measurement technique to estimate overburden at any given depth. An initial calibration reading is required to measure the earth's gravity at the surface. If the well location is offshore, this surface is average sea level. At least one additional borehole gravimeter reading is needed from a depth that is close to the top of the available petrophysical log data. In a 1989 article, Mac Queen describes this method for inverting these borehole gravimeter measurements to determine overburden at the top of the petrophysically logged interval. Drilling operators rarely order this service, however, because it is very expensive and there is a high risk of getting a clamped tool stuck in the borehole, thereby incurring even more expense. Fortunately, drilling operators do routinely perform leakoff tests almost every time they set casing.
The principal application of this measurement is to test the casing seat cement job and to determine how far they can raise static mud weight before having to set another protective casing string. After cementing protective steel casing in place, the operator will usually drill the cement out of the casing plus an additional few feet of new formation. If the cement job is good (as it usually is), the leakoff test actually provides valuable information about earth far field stresses from the few feet of open borehole which is immediately below the casing shoe. In most drilling operations, the shallowest leakoff test and uppermost petrophysical measurements are usually taken hundreds to thousands of feet below the earth's surface. Initial overburden stress from the unlogged upper portion of the hole can easily vary by 200 psi from an average compaction curve depending on the overburden lithology, sediment compaction and initial formation pore pressure. Using the effective stress law, any error in initial overburden psi would be carried as a constant offset to all subsequent pore pressure calculations for that well. Within the petrophysically logged portion of a borehole, the additional incremental overburden stress can be calculated very accurately using Equation 1 and lithologic constants disclosed in U.S. Pat. No. 4,981,037. According to the present invention, continuous overburden stress, pore pressure and fracture propagation pressure logs can be constructed using these equations and methods, as shown in FIG. 1. By using the whole log, both leakoff tests and the lost circulation pressure of 15.4 ppg (pounds per gallon) at a depth of 10860 feet can be used as constraints on the initial overburden value of 15.0 ppg at 6986 feet. An initial overburden stress gradient of 15.0 ppg at 7370 feet resulted in a match between observed and calculated fracture pressure within 30 psi for all three leakoff test and lost circulation measurements. 
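The pressures in this discussion are quoted interchangeably as equivalent mud weight gradients (ppg) and as psi offsets. The sketch below shows the bookkeeping involved, using the standard oilfield conversion of 0.052 psi per foot per ppg (general industry practice, not something specific to this disclosure), together with a purely hypothetical brute-force search for the constant initial overburden offset that best honors a set of leakoff observations; all numeric data are invented for illustration.

```python
# Equivalent mud weight (ppg) to pressure (psi) at a true vertical
# depth in feet; 0.052 is the standard oilfield conversion factor.
def ppg_to_psi(gradient_ppg, depth_ft):
    return 0.052 * gradient_ppg * depth_ft

# The 15.4 ppg lost circulation pressure at 10860 feet:
lost_circ_psi = ppg_to_psi(15.4, 10860)   # about 8697 psi

# Hypothetical calibration sketch: treat the unknown initial overburden
# as a constant psi offset added to every calculated fracture pressure,
# and choose the offset that minimizes the worst-case mismatch against
# the observed leakoff / lost circulation pressures.
def calibrate_offset(observed_psi, calculated_psi, candidates):
    def worst(off):
        return max(abs(o - (c + off))
                   for o, c in zip(observed_psi, calculated_psi))
    return min(candidates, key=worst)

# Invented example: three observations, with calculations reading about
# 100 psi high (compare the 100 psi-low initial overburden of FIG. 1).
obs = [8697.0, 9251.0, 11353.0]
calc = [8800.0, 9348.0, 11455.0]
best = calibrate_offset(obs, calc, range(-300, 301, 10))
print(round(lost_circ_psi), best)  # prints "8697 -100"
```

The key point mirrored from the text: because the unlogged upper hole contributes a constant unknown, a single psi offset applied uniformly is all that leakoff calibration needs to resolve.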
The initial overburden stress selected from leakoff test comparison was 100 psi lower than that of an average normally compacted overburden containing 30% sandstone. A leakoff test measures the weakest point in the open borehole. If the casing cement job is good, the weakest point is usually an existing fracture in the few feet of open borehole. Natural fractures are caused by, and geometrically related to, the far field stresses. Bedding plane fractures are almost always present. Sub-horizontal bedding plane fractures have essentially no tensile strength and are held closed only by the maximum principal stress, which is overburden. Consequently, overburden is the upper leakoff pressure limit in an open borehole through sub-horizontally bedded rocks. Results and conclusions drawn from open borehole leakoff tests should not be confused with results derived from laboratory experiments. Laboratory test rocks must be specially machined to fit into triaxial cells. The samples are not representative of most subsurface rocks. Rocks which can be machined without falling apart are ordered from a few quarries which are well known to the laboratory experimenters. The machined samples are selected to avoid natural fractures. Consequently, laboratory experiments include the effects of rock tensile strength in their measured fracture pressures. Rock tensile strength is usually several hundred to several thousand psi, depending on lithology and average confining stress. Unfractured laboratory rocks exhibit a fracture pressure yield phenomenon similar to that observed during leakoff tests. However, laboratory measured pressures are much higher because the measurement includes rock tensile strength. Laboratory-equivalent fracture initiation pressures are thus hardly ever reached in open boreholes during leakoff tests because natural fractures are opened first at lower pressures. The open borehole leakoff test is usually stopped at this point, and no new fractures are initiated.
Pressures required to initiate new fractures which occur in laboratory experiments are hardly ever reached in the field. Leakoff tests are performed on natural rocks which usually contain abundant natural fractures, as more fully explained in a 1991 article by Lorenz et al. In addition to bedding plane fractures, there typically is also another set of sub-vertical tensile fractures which are oriented perpendicular to the least principal horizontal stress. If the short open borehole intersects one of these fractures, borehole leakoff pressure will be a measure of the minimum principal stress which holds these sub-vertical fractures closed. Frequently both maximum and minimum far field stresses are measured in the same leakoff test. The second leakoff test at 10608 feet depicted graphically in FIG. 1 is an example of such a case. The short open borehole below 10608 feet probably contained one or more sub-horizontal bedding plane fractures. The leakoff test reached a peak pressure of 16.77 ppg, which is very close to the calculated overburden gradient at that depth. This corresponds to the upper pressure (Fph) illustrated on the inset leakoff test graph. The escaping borehole fluid will follow its path of least resistance and propagate at the pressure that is holding that fracture closed. Sub-horizontal fractures are held closed by the maximum principal stress and sub-vertical fractures are held closed by the least principal stress. When borehole fluid traveling in a sub-horizontal fracture intersects a sub-vertical fracture, the path of least resistance will be the sub-vertical fracture. At that time, the borehole measured pressure will drop because the fluid has found a lower resistance path. If pumping is continued, borehole fluid will travel out into the formation at the propagation pressure of a sub-vertical fracture. This corresponds to (Fpv) on the inset leakoff test diagram. 
Usually leakoff tests are stopped well before this point to avoid unnecessary damage to the borehole. If the formation supports a constant bleed down pressure after the pumps are turned off, this fracture closure pressure is usually a good estimate of fracture propagation pressure (Fpv) and minimum horizontal stress. This is true because an existing fracture has essentially no tensile strength. In this test at 10608 feet (see FIG. 1), the observed bleed down pressure exactly matched the calculated fracture propagation pressure gradient of 15.7 ppg. A third initial overburden constraint occurred at 14180 feet when the operator raised mud weight from 15.0 to 15.4 ppg. Circulation was lost at this time, indicating borehole fluid was escaping into fractures that had opened somewhere in the open borehole. The minimum vertical fracture propagation pressure shown on FIG. 1 below the 10688 casing shoe is at 10860 feet. The fracture pressure there is 15.4 ppg. This value constrains the initial overburden to be 100 psi less than an average initial overburden column at 7370 feet. A higher overburden would have raised the calculated fracture pressure, and the well would not have lost circulation at 15.4 ppg pressure. The use of leakoff tests to calibrate initial overburden in this case resulted in an improvement of over 400% (30 psi error according to this improved technique vs 130 psi error using the prior art techniques of U.S. Pat. No. 4,981,037) in the value of calculated pore pressure and fracture pressure for the whole well. The resulting continuous pore pressure log on the left of FIG. 1 is within 200 psi of the equivalent mud weight pressure at the points where the operator raised mud weight due to hole response. This level of accuracy is highly desirable in order to use petrophysically calculated pore pressure to guide a drilling operation. 2.
Effective stress - mineralogic compaction function calibration. Each of the minerals which compose a sedimentary rock has its own characteristic compaction function. Sedimentary mineral grains compact through mechanical and chemical pressure solution processes. A mineral's overall compaction resistance is directly proportional to its hardness and inversely proportional to its solubility. Most of our knowledge about sandstone and limestone compaction comes from sedimentary petrographers. Compaction conclusions of these petrographers are principally related to the purpose of their study. Petrographers typically have no knowledge of the stress conditions around the sedimentary rock sample which is observed in petrographic thin section. Typically a petrographic microscope will show hundreds of individual grains in a 1/4 centimeter field of view. FIG. 2 conceptually shows the microscopic relationship between interpenetrating pressure solution surfaces for the most common sedimentary minerals: quartz, clay, and calcite. In FIG. 2, the harder, less soluble quartz grains form bridges, leaving porosity between the grains. The softer, more soluble clay and calcite grains are preferentially dissolved at points of contact and re-precipitated locally in the pore space. When observing these intergranular relationships, sedimentary petrographers broadly describe the quartz grains as load bearing. The calcite which occurs in the space between quartz grains is considered to be non-load bearing. This grossly oversimplifies the load bearing relationships between the minerals and leads to false conclusions about porosity and compaction. The space between quartz grains is called intergranular porosity, and this porosity is controlled by compaction of the quartz load bearing matrix. Calcite is 10,000 times softer than quartz and 20 times more soluble.
Explanations by sedimentary petrographers of how porosity is gained or lost generally focus on the presence or absence of calcite between the quartz grains. Calcite is characterized as a non-load bearing cement whose occurrence is controlled only by fluid chemical processes. A petrophysical logging instrument measures average porosity with accuracy approximately equal to that of a petrographic microscope, although the sample size is several cubic feet. This inherently broader viewpoint, combined with reasonable mineralogic stress-strain relationships, leads a geologist to some very different conclusions about the effect of mineralogy on rock porosity. Using petrophysical data and the effective stress law, a geologist can determine the load bearing capacity of individual minerals with sufficient accuracy to calculate pore pressure. FIG. 3 illustrates a set of mineralogic end member compaction curves measured from petrophysical logs, as published in 1987 by Gandino et al. These are typical of the non-mechanistic depth vs compaction functions prevalent in the geologic literature. The changes in observed bulk density that occur with depth are directly related to porosity because each mineral has a unique grain matrix density. The compaction functions are curved and widely spread, which would make it extremely difficult to construct a workable compaction function for mixed mineralogy rocks on the basis of this raw mono-mineralic petrophysical data. FIG. 4 shows the same Gandino et al. compaction data recast as mechanical power law effective stress - solidity (grain matrix compactional strain) functions according to the present invention. An effective stress data point was calculated at each kilometer of burial depth. Actual mineral grain and fluid densities were used to convert bulk density to porosity and its complement, solidity (solidity = 1.0 - porosity). The curved compaction functions of FIG. 3 thus become the straight power law lines of FIG. 4.
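The bulk-density-to-porosity conversion mentioned above is, under the usual petrophysical assumption of a two-component (grain plus fluid) rock, a simple linear mixing inversion. The function name and density values below are illustrative assumptions, not values taken from the Gandino et al. data.

```python
# Standard density-porosity relation for a grain + fluid mixture:
#   rho_bulk = (1 - phi) * rho_grain + phi * rho_fluid
# solved for porosity phi, with solidity as its complement.
def porosity_from_density(rho_bulk, rho_grain, rho_fluid):
    return (rho_grain - rho_bulk) / (rho_grain - rho_fluid)

# Typical illustrative values: quartz grains 2.65 g/cc, brine 1.05 g/cc
phi = porosity_from_density(2.30, 2.65, 1.05)
solidity = 1.0 - phi
print(round(phi, 3), round(solidity, 3))  # prints "0.219 0.781"
```

Because each mineral has a unique grain density, this inversion must use the actual mineral grain density for each lithology, as the text notes.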
The power law linear functions incorporate the observed strain hardening that occurs with each individual mineralogic end member. Strain hardening is the phenomenon of increased compaction resistance with decreasing porosity of granular solid materials. There is less than 2 porosity units of deviation of the power law functions from the input data over the whole compaction range of all the curves. Thus the power law function accurately captures the strain hardening phenomenon for naturally deposited and compacted mono-mineralic sedimentary granular solids. The intercept of each power law function with the 100% solidity axis represents the effective stress necessary to remove all porosity from naturally pure sedimentary particles of that granular solid mineral. The power law slope of each mono-mineralogic compaction function, i.e., Δlog(σ[v])/Δlog(solidity), is expressed simply as α. Table 1 shows the power law compaction functions for naturally sedimented pure minerals which have been naturally loaded during geologic burial. The halite compaction results were derived from the conversion of observed salt pan halite depth-porosity data published by Casas et al. in 1989. The pure quartz sandstone compaction data is from clean Louisiana sandstones published by Atwater et al. in 1965. The recast Gandino et al. Po Valley compaction constants from their 1987 article have been effective stress tested in the North Sea and are described in Table 1 as calcite sand. Anhydrite constants were derived from Pfeifle et al. laboratory compaction data published in 1981.
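Because Equation 3 is a straight line in log-log space, the σ[max] and α constants for a mono-mineralic end member can be recovered from effective stress - solidity data points by an ordinary least-squares fit. A minimal sketch using synthetic, noise-free data generated from the average shale constants of Table 1 (so the fit simply recovers them); the function name and sample solidities are assumptions for illustration.

```python
import math

# Equation 3 in log-log space:
#   log(sigma_v) = log(sigma_max) + (alpha + 1) * log(solidity)
# so a straight-line fit yields sigma_max (intercept) and alpha (slope - 1).
def fit_power_law(solidities, stresses):
    xs = [math.log10(s) for s in solidities]
    ys = [math.log10(v) for v in stresses]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar        # log10(sigma_max)
    return 10 ** intercept, slope - 1      # (sigma_max, alpha)

# Synthesize noise-free data from known constants, then recover them:
true_smax, true_alpha = 18461.0, 8.728     # average shale, Table 1
sol = [0.60, 0.70, 0.80, 0.90]
sv = [true_smax * s ** (true_alpha + 1) for s in sol]
smax, alpha = fit_power_law(sol, sv)
print(round(smax), round(alpha, 3))
```

With real petrophysical data the same fit applies point by point (the text computes one point per kilometer of burial), and the residual scatter is what the "less than 2 porosity units" deviation statement quantifies.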
TABLE 1 - Power Law Compaction Functions For Granular Naturally Sedimented Pure Minerals From Natural Gravitational Geologic Loading

mineral (or rock)   σ[max] (psi)   log(σ[max])   α        hardness (mohs)   solubility
Quartz Sand         130000         5.114         13.219   7.0               6
Average Shale       18461          4.266         8.728    3.0               20
Calcite Sand        12000          4.079         13.000   3.0               140
Anhydrite           1585           3.200         20.00    2.5               3000
Halite Sand         85             1.929         31.909   2.0               350000

The σ[max] values calculated according to the present invention, and the previously known hardness and solubility data also shown in Table 1, are all mineral surface properties which represent mechanical and/or chemical compaction resistance. The mineralogic rank ordering that would result from any one of the three possible classification criteria is the same, which strongly supports the calculated σ[max] values. Quartz is by far the mineral most resistant to compaction, and halite (NaCl salt) is by far the least resistant to compaction. The σ[max] coefficient is a physically meaningful mineralogic stress-strain compaction resistance parameter. Table 1 shows that σ[max] is positively related to mineral hardness, which should increase mechanical compaction resistance. The ability of a mineral to resist pressure solution compaction should decrease as the solubility of that mineral increases. A strong inverse relationship between σ[max] and mineral solubility is also evident in Table 1. The mineralogic σ[max] and α constants shown in Table 1 will yield good estimates of effective stress over a wide stress range. However, other constants can yield the same numeric results over relatively narrow ranges of effective stress. Any combination of σ[max] and α constants which is within ±2 porosity units of the preferred constants in the 1000 psi to 4000 psi stress range could produce an equivalent effective stress and pore pressure log. The reasonable range data in Table 2 below relates σ[max] and α values which would produce equivalent effective stress values under normal conditions.
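The claimed agreement between the three rank orderings can be checked directly from the Table 1 values (noting that average shale and calcite sand tie at 3.0 mohs hardness, so the hardness ordering is consistent with, rather than strictly distinguishable from, the other two):

```python
# Consistency check on Table 1: rank minerals by sigma_max (descending),
# by hardness (descending), and by solubility (ascending); the text
# claims all three orderings agree.
minerals = {
    "quartz sand":   dict(smax=130000, hardness=7.0, solubility=6),
    "average shale": dict(smax=18461,  hardness=3.0, solubility=20),
    "calcite sand":  dict(smax=12000,  hardness=3.0, solubility=140),
    "anhydrite":     dict(smax=1585,   hardness=2.5, solubility=3000),
    "halite sand":   dict(smax=85,     hardness=2.0, solubility=350000),
}
by_smax = sorted(minerals, key=lambda m: -minerals[m]["smax"])
by_hard = sorted(minerals, key=lambda m: -minerals[m]["hardness"])
by_sol  = sorted(minerals, key=lambda m: minerals[m]["solubility"])
# The shale/calcite hardness tie resolves stably to the same order.
print(by_smax == by_hard == by_sol)  # prints "True"
```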
TABLE 2 - Reasonable σ[max] and α Ranges For Naturally Sedimented Minerals

mineral (or rock)   σ[max] range (psi)   α range
Quartz Sand         130,000-60,000
Average Shale       20,000-9,000
Calcite Sand        15,000-9,000
Anhydrite           2,000-1,000
Halite Sand         200-60               35.0-10.0

The above compilation of pure mineralogic end member data is vital for determining mineral surface compaction resistance. However, these pure end members, e.g., pure quartz sand or pure calcite sand, rarely exist in nature. Rather, the most common case is that a sedimentary rock is a natural mechanical mixture of these common rock forming minerals. The individual mineral grains settle together in a particular chemical environment under the influence of gravity. They are usually naturally sorted into narrow particle size and mineralogic categories. Geologists describe these common associations as lithology or depositional facies. One overriding factor controlling the mineralogy of a sedimentary rock is chemical. Halite and anhydrite are precipitated from seawater under a very narrow range of basin geometric and arid climatic conditions. Calcite precipitates easily from warm seawater but is dissolved by cold seawater. Global climate, which has varied through geologic time, controls both average eustatic sea level and average water temperature. During upper and middle Cretaceous times, global climate was warm and there were no polar ice caps. Continental shelves were flooded with warm water shallow seas due to globally higher eustatic sea levels. Sedimentary rocks deposited during these warm sea climatic periods are dominantly mixtures of limestone and shale. The climatically associated higher sea level reduces quartz and clay input by reducing both the area and height of continental landmass which could contribute these sediments. Stratigraphic sequences deposited during these times are dominantly mixtures of calcite and clay, with sedimentary quartz being only a minor constituent.
Sedimentary rocks deposited in cold waters or in overall cold climatic periods are predominantly quartz sand - shale sequences. Polar ice caps store water, thus lowering eustatic sea level. This exposes greater land area to erosion and increases erosion rates due to steeper average land surface gradients. Quartz sediment supply is increased, and calcite precipitation is prevented by the lower sea water temperature. In today's oceans, calcite precipitated in the warm surface waters is dissolved as it falls through the cold water column. Calcite never reaches the deep ocean abyssal plains, which are red muds. The combined climatic eustatic sea level effects divide sedimentary rocks into two broad mineralogic mixture categories. Essentially binary calcite - clay sedimentary mixtures dominate during globally warm times. Cooler climates prevent calcite precipitation. Relatively calcite-free quartz sandstone - clay binary sedimentary mixtures dominate during these lowstand periods. There is controversy regarding the relationships governing the compaction resistance of granular mineralogic mixtures, which has significant implications for the calculation of pore pressure from petrophysical data. Marion et al. demonstrated in articles published in 1989 and 1992 that laboratory binary quartz sand - clay mixtures had a compactional porosity minimum between 10% and 40% clay at all levels of effective stress. The minimum porosity resulting from different packing relationships would appear to be a function of the different particle size distributions of the two minerals. However, Thomas et al. disclosed a linear relationship between shale content and porosity from petrophysical measurements of naturally sedimented and compacted quartz sand - clay mixtures in a 1975 article. Pittman et al. also disclosed a near linear relationship between percent ductile grains and porosity for laboratory compacted mixtures in a 1991 publication.
A linear relationship also exists between clay content and porosity at several different levels of effective stress in quartz sand - shale mixtures. In all three cases, higher ductile grain and clay content resulted in lower porosities upon compaction. Another linear relationship has been determined to be present between porosity and shale content in limestone - shale stratigraphic sequences in the North Sea. The relationship was between the two pure mineralogic end member porosities inferred from the Gandino et al. data published in 1987. In this case, the more soluble limestones had uniformly lower porosities upon compaction. These observations lead to the conclusion that compaction of these binary mineralogic mixtures is an approximately linear function of mineralogy. Given the power law linear mineralogic compaction functions shown in Table 1 and the apparently linear porosity relationships between the end members, a rational and accurate method has been developed for calculating effective stress, and consequently pore pressure, for sedimentary rocks of any mineralogy. This method involves three basic steps: 1. Calculate the σ[max] exponent for the mixed mineralogy rock as the weighted average of the logarithms of the pure end member σ[max] values shown in Table 1; 2. Calculate σ[max] for that mixed mineralogy rock by raising 10 to the σ[max] exponent; and 3. Calculate α for the mixed mineralogy rock as the weighted average of the individual pure end member α values. Following this procedure, porosity and its complement solidity will be an approximately linear function of mineralogy at all levels of effective stress for all natural sedimentary mixtures. When applied to pore pressure calculations, the same mineralogic power law function (Equation 3) is applied consistently to any mixed mineralogy sedimentary rock over various stress and depth ranges. This assures consistent, reproducible fluid pressure results from all lithologies under variable geologic conditions.
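The three steps above can be sketched directly: σ[max] mixes as a weighted average of logarithms (equivalently, a weighted geometric mean), while α mixes arithmetically. The 70/30 quartz-shale mixture below is an invented example; the end member constants come from Table 1.

```python
import math

# Three-step mixing rule for a mixed mineralogy rock:
# 1. weighted average of log10(sigma_max) over the end members,
# 2. raise 10 to that exponent to get the mixture's sigma_max,
# 3. weighted average of the end member alphas.
def mixed_constants(fractions, sigma_maxes, alphas):
    log_smax = sum(f * math.log10(s) for f, s in zip(fractions, sigma_maxes))
    sigma_max = 10 ** log_smax                             # steps 1 and 2
    alpha = sum(f * a for f, a in zip(fractions, alphas))  # step 3
    return sigma_max, alpha

# 70% quartz sand, 30% average shale (Table 1 constants):
smax, alpha = mixed_constants([0.7, 0.3], [130000.0, 18461.0], [13.219, 8.728])
print(round(smax), round(alpha, 3))
```

Mixing the logarithms rather than the raw σ[max] values keeps the mixture's compaction function a power law of the same Equation 3 form, which is what lets the method apply one consistent function to any lithology.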
Following this method and approach, geologic compaction is explained mechanistically in terms of sedimentary rock physical properties and stress. The prior art relative compactional depth-porosity relationships explained as temperature - geologic age functions are equally well based upon sedimentary rock physical properties using the mechanically based mineralogy - effective stress relationships shown on Table 1. The latter approach has the advantage of relating stress to the intrinsic physical properties (mineralogy, porosity) of the sedimentary rock material. Higher geothermal gradients and temperatures are associated with higher compaction through the physical relationship between higher overburden density and thermal conductivity. The thermal conductivity of a sedimentary rock can be calculated as a weighted average of the individual mineral and fluid thermal conductivities, according to a 1990 publication by Briguad et al. Higher compaction is associated with higher temperature through higher overburden and effective stress. Temperature cannot be ruled out as a contributing factor to sedimentary rock compaction. However, its effect on compaction is probably minor compared to the stress applied to the grain matrix. The melting points of the common sedimentary minerals listed on Table 1 are seven or more times higher than the temperatures these minerals experience during compaction to zero porosity. Individual mineral mechanical crystal lattice strength is probably not affected significantly at these relatively low compaction temperatures. With the exception of anhydrite, individual mineral solubility generally increases with temperature. Pressure solution compaction might be enhanced by increased temperature. However, the temperature effect cannot be properly evaluated unless one also considers compactional pressure, i.e., effective stress effects.
If temperature were a significant controlling factor over compaction, one would not see the many compaction reversals which have been observed and are related to pore fluid pressure. Temperature almost always increases steadily with depth, while compaction of the same mineral increases and decreases considerably. The geologic age of a mineral has absolutely no effect on either its solubility or hardness. The law of superposition dictates that older rocks will underlie younger rocks. By this definition, both depth and geologic age are pseudo-stress variables. Older rocks are found on average to be more compacted because they are deeper. Older rocks are also under higher effective stress. In no way do these average depth relationships imply that geologic age is affecting compaction. Neither does geologic age control the rate of compaction. Pore fluids will obey the universal gas law and bear a mechanical load at elevated pressure for an infinite time if the fluid escape path is blocked. The compactional time dependence observed during the production of a reservoir is so fast that it is difficult to measure. The measured compaction of the Ekofisk field during 20 years of production from a 400 foot reservoir is 50 feet. The producing Ekofisk chalk apparently compacts almost as rapidly as the fluid is withdrawn. The load which was borne by pore fluids for over 60 million years in the Ekofisk formation was transferred to the grain matrix as increased effective stress when fluid from the reservoir was produced. There is thus no apparent compactional time delay on the 20 year time scale. The effective stress natural compactional equilibration time for any rock is probably less than 100 years. Essentially every rock is in compaction equilibrium with its effective stress environment when it is initially cut by a drill bit. Beyond 100 years, geologic age is not a factor which affects compaction.
The three step mineralogic effective stress constant weighted averaging method described above is a significant improvement compared to previous compaction techniques. Although general end member compaction characteristics were known in the prior art, the interactions between compacting minerals were not known. The discovery of linear mineralogic mixing relationships is thus of tremendous importance. The inference that all mineralogic mixing is approximately linear and can be expressed as a simple weighted average is a significant extension of the observations. The compactional characteristics of the two evaporite minerals, halite and anhydrite, have not yet been studied. FIG. 5 graphically depicts information from a well in two different forms. The data on the left side of FIG. 5 represents input and intermediate calculated petrophysical data. The raw measured gamma ray data and normalized gamma ray shale volume are shown as two separate traces. Rock porosity calculated from resistivity data using an input water conductivity profile is also shown. The latter two parameters are used to calculate effective stress for given low gamma lithology constants α and σ[max]. The right side drawings represent the calculated critical pressure output curves. In each case (and proceeding left to right), the first trace line is pore pressure, the second trace line corresponds to mud weight, the third trace is the fracture propagation pressure, and the fourth trace represents the overburden pressure. The units are the same as those provided in FIG. 1. The data itself is not the significant point. What is important is that it is clear that the calculated pore pressure trace line in FIG. 5 is both more accurate and more meaningfully displayed than the calculated pore pressure trace line shown in FIG. 1. Even those unskilled in the petrophysical pore pressure art will appreciate the benefits of the displayed right side information in FIG. 5 compared to the left side information in FIG. 5. Drilling operators and well planners would clearly rather make determinations based on the calculated critical pressures rather than on petrophysical data. An exemplary procedure according to the present invention for calculating pore pressure and for providing additional useful information to a geologist or a well planner will now be described. The background for this procedure assumes that a borehole has been drilled from the earth's surface through compacted sedimentary rock for the purpose of recovering hydrocarbons. In a manner analogous to prior art techniques, the overburden will normally be calculated as a function of the depth of the rock (and, if applicable, a column of water above the rock for offshore applications). While an overburden log may be generated with this procedure, it should be understood that the overburden calculations are based solely on depth and the known or presumed rock composition at various depths. This overburden estimate procedure is not based on any measurement of overburden, but rather assumes that a certain type of rock, e.g., shale, likely will produce a range of overburden pressures at a certain depth. While various techniques may be used to calculate this assumed overburden, the most commonly used prior art technique is based on water column, sediment column, and rock makeup information. With this assumed overburden information, a fracture pressure log may be generated to give the well planner some initial guidance as to the maximum borehole pressure the well bore is capable of withstanding at any depth prior to formation fracture, so that both an initial overburden and fracture pressure log may be generated as a function of depth.
Each time a new casing string is set in the well bore, the drilling operator will normally conduct one or more leakoff tests to test the casing cement job and determine how far static mud weight can be raised before setting another casing string. According to the present invention, this leakoff test information is used to accurately determine overburden at one or more of these setting depths. If three casing strings are thus set in a well, all available leakoff test data from each casing setting will preferably be used. The propagation pressure of a subhorizontal fracture, or overburden, is then substantially equated to the maximum pressure obtained at a certain casing setting depth. The logical assumption is that this maximum leakoff pressure was the pressure required to "lift" the overlying rock sufficiently to open an existing subhorizontal fracture, and thereby lose fluid pressure. This maximum leakoff pressure is thus substantially equal to the overburden pressure. Similarly, the minimum leakoff pressure at a given setting depth when circulation is lost is equated to the subvertical fracture pressure, since this lower pressure is the minimum pressure required to "open" a subvertical fracture. Between these maximum and minimum pressures, various other fractures at that setting depth may be opened. With this information, the initial overburden and fracture pressure logs may then be adjusted by constant amounts, so that all leakoff test pressures fall within the constant offset continuous logs. The leakoff test procedure as described above is different from prior art procedures for calculating overburden in that actual overburden pressure is measured. It should be understood, however, that other techniques may also be used for directly measuring the overburden pressure. An example of a less favored technique utilizing a gravimeter was previously described.
With this more accurate technique for calculating overburden pressure and fracture pressure at various setting depths, a revised set of continuous logs may thus be generated using additional petrophysical measurements conventionally taken at well site. While petrophysically calculated data is thus "filled in" between setting depths based on information which is not a function of actual overburden pressure, the procedure is significantly more accurate since the data may be adjusted to fit known instead of presumed pressure data at certain depths. Using this procedure, rock porosity may be determined based upon a conventional resistivity sensor or bulk density sensor run in the well bore. Those skilled in the art will appreciate that solidity is the complement of porosity and equals 1.0 minus porosity. The volume or percent volume of a specific mineral, such as shale, limestone, or sandstone, may also be determined by conventional techniques for each interval depth of the borehole. One available technique for making this determination utilizes a gamma ray sensor to detect radioactive potassium which evidences shale content. Cutting or core samples may also be used for determining the volume of other minerals in the rock. This technique is frequently used, for example, to determine whether the mineral mixed with the shale is calcite limestone or quartz sandstone. Regardless of the technique utilized to determine the volume of the specific minerals in the rock at each interval depth, a grain density for pure minerals is generally known. Exemplary values for typical minerals are as follows: quartz--2.65 g/cc; calcite or shale--2.71 g/cc; anhydrite--2.96 g/cc; halite--2.15 g/cc. Using this information, the average rock grain density p[g] may be calculated based upon the mineral volume determinations and known mineral grain density values at each interval depth. 
To calculate the true bulk density at each interval depth for both the rock and the fluid within the rock, information regarding the fluid and its characteristics, as well as the porosity of the rock, is taken into account. Assuming for example that the fluid in the rock at a specific depth is known or presumed to be water, the density of the water may be calculated as a function of the liquid pressure (which corresponds to the pore fluid pressure), liquid volume (which presumably is a function of porosity), the temperature of the liquid, and the characteristics of the liquid. The conventional well bore conductivity tool may be used to determine the salinity of water in the rock, and conventional temperature sensors may be used to determine temperature at a specific depth, so that this information can then be used to calculate the rock bulk density as a function of both the specific minerals in the rock and the fluid within the rock at each depth interval. Other techniques may be used for determining the characteristics of the fluid at each interval and thus the density of the fluid. For example, the salinity of water may alternatively be determined from produced water samples. It should be understood that this procedure for adjusting the density of a rock as a function of not only the specific minerals in the rock at each depth but also as a function of the density of the fluid in the rock at that depth may not be essential for all operations, particularly if the rock has a low level of porosity and thus a low volume of fluid. With these bulk density calculations at each interval depth, the overburden at each depth below a specific setting depth may be determined as a function of the calculated bulk density and depth to generate a continuous revised overburden log.
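As a rough sketch of the density bookkeeping described above, using the grain densities quoted in the text (the mineral volume fractions and the brine density in the default argument are illustrative assumptions, not values from the specification):

```python
# Grain densities in g/cc, as quoted in the text; the mineral volume
# fractions and the fluid density default below are illustrative assumptions.
GRAIN_DENSITY = {
    "quartz": 2.65,
    "calcite": 2.71,
    "shale": 2.71,
    "anhydrite": 2.96,
    "halite": 2.15,
}

def grain_density(volumes):
    """Average grain density from mineral volume fractions (fractions sum to 1)."""
    return sum(v * GRAIN_DENSITY[m] for m, v in volumes.items())

def bulk_density(volumes, porosity, fluid_density=1.03):
    """Bulk density = solidity * grain density + porosity * fluid density."""
    return (1.0 - porosity) * grain_density(volumes) + porosity * fluid_density
```

Summing bulk density times interval thickness down the hole would then give the revised continuous overburden described above.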
Those skilled in the art will also appreciate that the procedure for determining bulk density as described above is based upon the volume of the specific minerals in the rock at each depth, but this bulk density determination could be generated based upon another characteristic of the mineral, such as its mass or weight, which is directly related to its volume. Research by the inventor has shown that the logarithm of the effective stress for a mineral plotted as a function of the logarithm of solidity is substantially a linear relationship, as shown in FIG. 4. With this information, the line intercept with the hundred percent solidity axis may be used to determine the logarithm of the maximum effective stress σ[max] for a specific mineral. Referring to FIG. 4, the logarithm of the maximum effective stress for limestone (calcite sand) is shown to be approximately 4.0. Revised plots and calculations for the maximum effective stress for various minerals are supplied in Table 1, and a reasonable range for those values is supplied in Table 2. Similarly, the compaction exponent α for various pure minerals is the slope of the line depicted graphically in FIG. 4, and currently preferred compaction exponent values and a reasonable range of compaction exponent values for various minerals were previously set forth. A particular feature of the present invention is that these maximum effective stress and compaction exponent values may be used to calculate the actual effective stress and compaction exponent values for rock of various combinations of minerals, as explained above.
A weighted average of the maximum effective stress for a specific rock comprising determined or presumed volumes of specific minerals may thus be determined by the following equation:

log10 σ[max] = Σ V[i] log10 σ[max,i] (4)

where V[i] is the volume fraction of mineral i. With the calculation of the logarithm of maximum effective stress of the rock at each interval depth, e.g., one foot depth, the maximum effective stress for the rock at that depth may be easily determined by simply raising 10 to the power of the maximum effective stress value. The weighted average of the whole rock compaction exponent α may similarly be determined as a function of the volume of each mineral in the rock at each specific depth and the previously referenced compaction exponent values for a pure mineral. Equation 5 thus expresses this relationship:

α = Σ V[i] α[i] (5)

Using the above information, the effective stress at each depth interval σ[v] may be calculated as follows:

σ[v] = σ[max] (Solidity)^α (6)

The overburden is then set as the upper physical limit for effective vertical stress. Any calculated value of effective stress higher than the overburden is not physically reasonable, and probably resulted from an error in the estimated or measured petrophysical parameters. The lower physical limit for effective stress may be set at 0, since subsurface rock is never in a state of vertical tension. A log may thus be generated of continuous pore pressure P using the relationship:

P = Overburden - σ[v] (7)

As previously noted, the refined and more accurate technique for calculating pore pressure and generating the pore pressure log according to the present invention has particular utility for geologists and well planners. With the above information, additional information may also be readily generated. A continuous effective horizontal stress log may be obtained as a function of the solidity and effective stress values. It is important that effective horizontal stress is a function of solidity because effective stresses are transmitted only through the solid fraction of the rock.
The first order effective horizontal stress can be calculated from Equation 8:

σ[h] = (Solidity) σ[v] (8)

A continuous log of fracture propagation pressure Fpv may also be obtained using the effective stress law relationship:

Fpv = P + σ[h] (9)

These critical calculated pressures may then be used to either modify a well plan or alter drilling practice. The well plan or drilling practice should be carried out such that the drilling fluid pressure gradient in the open hole is greater than the continuous pore pressure log and less than the continuous fracture propagation pressure log. The drilling fluid pressure gradient should be maintained above the highest calculated pore pressure. Protective casing should be set when a higher drilling fluid pressure gradient would fracture the weakest open hole formation. The weighted average mineralogic method is a significant departure from the techniques primarily used today by geologic researchers familiar with compaction and pore pressure. As explained above, conventional geologic technology involves controlling factors such as depth, temperature and geologic age which are non-mechanistic and unsound. The position that rock composition (mineralogy and porosity) and not these other factors is controlling compaction and can be used to accurately calculate pore pressure is highly significant to the hydrocarbon recovery industry. This information should lead to many new and useful relationships which can be employed by geologists in the oil and gas industry. The foregoing disclosure and description of the invention is illustrative and explanatory thereof, and various changes in the method steps and techniques described therein may be made within the scope of the appended claims without departing from the spirit of the invention.
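Equations 4 through 9 can be collected into a short computational sketch. The end member constants below are placeholders only, since the actual Table 1 values are not reproduced in this text; take the structure of the calculation from this, not the numbers:

```python
# PLACEHOLDER end member constants -- the patent's Table 1 values are not
# reproduced in this text, so these numbers are illustrative assumptions.
END_MEMBERS = {
    "quartz": {"log10_sigma_max": 4.5, "alpha": 8.0},
    "shale":  {"log10_sigma_max": 4.2, "alpha": 6.0},
}

def effective_stress(volumes, porosity):
    """Equations 4-6: volume-weighted sigma_max exponent and alpha, then
    sigma_v = sigma_max * solidity ** alpha."""
    log_smax = sum(v * END_MEMBERS[m]["log10_sigma_max"]
                   for m, v in volumes.items())          # Eq. 4
    sigma_max = 10.0 ** log_smax                          # raise 10 to the exponent
    alpha = sum(v * END_MEMBERS[m]["alpha"]
                for m, v in volumes.items())              # Eq. 5
    solidity = 1.0 - porosity
    return sigma_max * solidity ** alpha                  # Eq. 6

def critical_pressures(volumes, porosity, overburden):
    """Equations 7-9, with effective stress clamped to [0, overburden]."""
    sigma_v = min(max(effective_stress(volumes, porosity), 0.0), overburden)
    pore = overburden - sigma_v                           # Eq. 7
    sigma_h = (1.0 - porosity) * sigma_v                  # Eq. 8
    frac_prop = pore + sigma_h                            # Eq. 9
    return pore, sigma_h, frac_prop
```

Running this per depth interval with logged porosity and shale volume is what produces the continuous pore pressure and fracture propagation traces discussed above.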
Abstract Heresies

As I mentioned in my last post, P and NP are commonly encountered complexity classes. Knowing what complexity class a problem is in can give us some idea of how difficult it may be to solve the problem. So what's the difference between P and NP? I don't want to get into the technical definition of P and NP, so I'm going to make broad generalizations that aren't technically rigorous, but will give you a feel for the difference.

Problems in class P are generally considered “tractable” by computer scientists. By this we mean that a computer can probably solve such a problem efficiently and relatively quickly. Recall that the complexity classes relate the ‘number of pieces’ to the difficulty. If you can solve a particular problem in P, then a similar problem with a ‘larger number of pieces’ is likely within your grasp (and if not, it is usually a question of upgrading to a better computer). Problems in class P are the ‘meat and potatoes’ of computer programs.

Before I discuss NP, let me introduce the complexity class EXPTIME. EXPTIME is where you find some really hard problems, like computer chess or Go. It is true that there are computers that can play chess really well, but if we were to modify the rules of chess to add a couple of new pieces and make the board 9x9 instead of 8x8, it would greatly increase the difficulty of the game. Problems in EXPTIME are generally considered “intractable”. By this we mean that a computer will have a hard time solving such a problem efficiently and quickly. And if you are able to solve a particular problem in EXPTIME, then a similar problem with just ‘a few more pieces’ is likely beyond your grasp, no matter how fancy a computer you might buy. Problems in class EXPTIME take a lot of time and money to solve.

So what about NP? Problems in NP are quite curious. They seem to be very difficult to solve, much like EXPTIME problems, but they have the unusual property that it is very easy to check if a solution is correct.
Let me illustrate: if I showed you a chess position and claimed that it was a good position for white, you'd have to do a lot of work to verify whether my claim was true. In fact, it would have to be about the same amount of work it took me to come up with the position in the first place. On the other hand, if I were to show you a jigsaw puzzle and claim that I had finished it, you could tell at a glance whether my claim were true. Problems in NP seem to be much harder to solve than problems in P, but as easy to verify as problems in P. That is a little weird.

Problems in NP are often encountered in computer programs, and many of these kinds of problems, although very difficult to solve, have approximate solutions that are relatively easy to compute. In some cases, when a perfect solution is not needed, one that is ‘good enough’ will be a lot easier to compute. Another weird thing is that a lot of problems in NP don't sound at all like they'd be hard to solve. Here are some examples:

• Take a group of people and divide them into two subgroups such that both subgroups have exactly the same amount of change in their pockets. (No, you aren't allowed to move the change around.)

• Find the shortest path that visits all the major landmarks in a city.

• Check if two computer programs always produce the same output. Whoops! I blew this one. Check the next posting.

There is one final bit of weirdness. It is easy to prove that EXPTIME problems are much harder to solve than P problems, but no one has proven that NP problems are harder than P problems. (Or that they are not harder than P problems!) This isn't for lack of trying. Many smart people have worked on a proof for some time. There's a million dollar prize for the first person to prove this one way or the other. Recently, Vinay Deolalikar of HP Labs claimed that he has a proof. Unfortunately, other experts in the field have pointed out flaws in the proof that may invalidate it.

1 comment: jkff said...
The "check if two computer programs always produce the same output" is not NP. It's simply undecidable and cannot be solved in any amount of time.
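The pocket-change puzzle from the post (the partition problem) makes the verify-versus-solve asymmetry concrete. This is just an illustrative sketch, not code from the original post:

```python
from itertools import combinations

def verify_split(amounts, group):
    """Checking a proposed split is cheap (linear time):
    sum one side and compare it to half the total."""
    return 2 * sum(amounts[i] for i in group) == sum(amounts)

def find_split(amounts):
    """Finding a split naively tries every subset --
    exponential in the number of people."""
    for r in range(len(amounts) + 1):
        for group in combinations(range(len(amounts)), r):
            if verify_split(amounts, group):
                return group
    return None
```

`verify_split` runs in linear time, while `find_split` may examine up to 2^n subsets; whether a polynomial-time finder exists is exactly the P-versus-NP question discussed above.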
Re: ambiguity mickunas@mickunas.cs.uiuc.edu (Dennis Mickunas) 25 May 1997 13:28:24 -0400 From comp.compilers Related articles: ambiguity kauer@paxp01.mipool.uni-jena.de (Stefan Kauer) (1997-05-22); Re: ambiguity mickunas@mickunas.cs.uiuc.edu (1997-05-25) From: mickunas@mickunas.cs.uiuc.edu (Dennis Mickunas) Newsgroups: comp.compilers Date: 25 May 1997 13:28:24 -0400 Organization: University of Illinois at Urbana References: 97-05-270 Keywords: parse, theory, bibliography

Stefan Kauer <kauer@paxp01.mipool.uni-jena.de> writes:
>I have a rather theoretical question, for which I found no answer in
>several standard books on compiler writing.
>I have a context free, unambiguous grammar, which contains left
>recursion. When the left recursion is removed (by any well known
>standard algorithm), is it always the case that the new grammar is also
>unambiguous?
>If not, I'd like to see an example. If yes, I'd like to see the proof
>(or a reference to a book or paper).

Even better--cover grammars. From a tech report by Anton Nijholt:

A. Nijholt, "A Survey of Normal Form Covers For Context Free Grammars," Informatica Rapport 49, Vrije Universiteit (February, 1979)

"Any e-free CFG G (cycle-free, no useless symbols) can be transformed to a NLR [non left-recursive] grammar G' such that G'[r/r]G and G'[l/r]G. This result first appeared in Nijholt [7]. Soisalon-Soininen [11] gave a more simple proof of this result. One of the transformations which is used in the latter paper is based on an idea of Kurki-Suonio [5]. This trick can also be used for a transformation presented in Wood [14] and which is due to J.M.

[5] Kurki-Suonio, R., "On top-to-bottom recognition and left recursion," CACM 9 (1966), pp. 527-528.
[7] Nijholt, A., "On the covering of left-recursive grammars," 4th POPL (1977), pp. 86-96.
[11] Soisalon-Soininen, E., "On the covering problem for left-recursive grammars," Theor. Comput. Science 8 (1979), pp. 1-12.
[14] Wood, D., "The normal form theorem -- another proof," Computer Journal 12 (1966), pp. 139-147.

M. Dennis Mickunas Department of Computer Science 1304 W. Springfield Ave. University of Illinois Urbana, Illinois 61801 mickunas@cs.uiuc.edu (217) 333-6351
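For readers following the thread, the standard immediate-left-recursion removal the poster refers to can be sketched as follows (a textbook construction, not code from the thread):

```python
def remove_left_recursion(nt, productions):
    """Rewrite A -> A a1 | ... | A am | b1 | ... | bn  as
       A  -> b1 A' | ... | bn A'
       A' -> a1 A' | ... | am A' | epsilon
    productions: list of tuples of symbols; nt: the nonterminal's name.
    Assumes no epsilon-productions or cycles, per the usual preconditions."""
    recursive = [p[1:] for p in productions if p and p[0] == nt]
    others = [p for p in productions if not (p and p[0] == nt)]
    if not recursive:
        return {nt: productions}
    new_nt = nt + "'"
    return {
        nt: [b + (new_nt,) for b in others],
        new_nt: [a + (new_nt,) for a in recursive] + [()],  # () is epsilon
    }
```

For E -> E + T | T this yields E -> T E' and E' -> + T E' | epsilon; whether such a transformation preserves unambiguity is precisely the cover-grammar question addressed in the references above.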
1) The output voltage of an AC generator is given ...

(a) impedance of the circuit ___________ Ω
(b) rms current in the circuit __________ A
(c) average power delivered to the circuit ___________ W

3) A generator delivers an AC voltage of the form Δv = (76 V) sin(95t) to a capacitor. The maximum current in the circuit is 1.50 A. Find the following.

(a) rms voltage of the generator ___________ V
(b) frequency of the generator
(c) rms current _________ A
(d) reactance ___________ Ω
(e) value of the capacitance

4) A series circuit contains the following components: = 4.00 µF, and a generator with ΔV operating at 60 Hz. Calculate the following.

(a) inductive reactance __________ Ω
(b) capacitive reactance _____________ kΩ
(c) impedance _____________ kΩ
(d) maximum current _____________ mA
(e) phase angle between the current and generator voltage ____________ °
(f) Calculate the individual maximum voltages across the resistor, inductor, and capacitor.

resistor ___________ V
inductor __________ V
capacitor ___________ V
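The reactance and impedance formulas these problems exercise can be sketched as follows. Only the 4.00 µF capacitance and the 60 Hz frequency come from problem 4; the R and L values are illustrative assumptions, since the problem statement above is truncated:

```python
import math

# Only the 4.00 uF capacitance and 60 Hz frequency come from problem 4;
# R and L below are illustrative assumptions (the statement is truncated).
f = 60.0        # Hz
R = 1.0e3       # ohms (assumed)
L = 0.5         # henries (assumed)
C = 4.00e-6     # farads

w = 2.0 * math.pi * f
X_L = w * L                        # inductive reactance, ohms
X_C = 1.0 / (w * C)                # capacitive reactance, ohms
Z = math.hypot(R, X_L - X_C)       # series RLC impedance, ohms
phase = math.degrees(math.atan2(X_L - X_C, R))  # phase angle, degrees
```

With a known peak generator voltage ΔV_max, the maximum current would then be ΔV_max / Z, and each element's maximum voltage is that current times R, X_L, or X_C respectively.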
Woburn Trigonometry Tutor Find a Woburn Trigonometry Tutor ...Later on I participated in the design of the space shuttle, and the development of the first GPS operating software. Thus I have an informed perspective regarding both teaching and application of these disciplines. Recently I have been accepting some on-line tutoring requests in order to evalua... 7 Subjects: including trigonometry, calculus, physics, algebra 1 ...I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/discrete variety. I got an A in undergraduate linear algebra. I have also absorbed many additional linear algebra concepts in the process of taking graduate classes in functional analysis and abstract algebra. 14 Subjects: including trigonometry, calculus, geometry, GRE ...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. 16 Subjects: including trigonometry, French, elementary math, algebra 1 ...I have taken many classes in biostatistics - basics through survival analysis, logistic and other regression analysis, and some factor analysis. I have about five years of experience working with public health and medical students, as well as practicing doctors, one-on-one to teach biostatistics... 18 Subjects: including trigonometry, English, writing, statistics I'm a very experienced and patient Math Tutor with a wide math background and a Ph.D. in Math from West Virginia University. I teach high school through college students and can teach in person or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not doing that! 
14 Subjects: including trigonometry, calculus, geometry, GRE
# misc/umfpack.tgz moved to linalg/umfpack.shar file besz.c for fast zero finder for bessel functions J_nu(x) lang c by E. Onofri (onofri@parma.infn.it) gams c10a3 file bitnet for a list of bitnet sites. file blocksolve.tgz for BlockSolve is a software library module for solving large, , sparse, symmetric systems of linear equations on parallel , computers. The package contains source code, Unix "man" , pages and a manual. To achieve portability, BlockSolve uses , the Chameleon package developed by Bill Gropp and others at , Argonne National Laboratory. BlockSolve has been tested on the , Intel DELTA, the IBM SP series, and a network of Sun , workstations, but can be expected to run without modification , of source code on other architectures. , This file CANNOT be retrieved by email. by Mark Jones jones@cs.utk.edu, Paul Plassmann plassman@mcs.anl.gov contact jones@cs.utk.edu prec single/double gams d2b4 lang c age research file contrib for a checklist for netlib contributors file dlamch.f for determines double precision machine parameters. gams r1 file fft.f for yet another fft subroutine, this one from Ferziger's text file gmcmc.for for GMCMC, General Markov Chain Monte Carlo routine in Fortran by Guthrie Miller, December 28, 2013 lang fortran file gn/GN_ReadMe.pdf for The GN (Gauss Newton) algorithm does nonlinear least squares minimization with finite-difference derivatives. , It uses an augmented Gauss-Newton, Levenberg-Marqardt method. , GN README. file gn/GN.FOR for The GN (Gauss Newton) algorithm does nonlinear least squares minimization with finite-difference derivatives. , It uses an augmented Gauss-Newton, Levenberg-Marqardt method. , The Fortran contains 323 executable lines. by Kenneth Klare, Guthrie Miller, February 21, 2013 lang fortran file groups for list of people interested in na at various centers.
by Gene Golub, Stanford age old file ickp.tar.z for for checkpointing programs on the Intel iPSC/2 & iPSC/860 by James Plank, Princeton University gams z file instab.tgz for Instab is a software package for automatically detecting instability , in numerical algorithms. Instab implements functional stability analysis, , which uses the relationship between the forward error, the backward error, , and a problem's condition to define a function that estimates a lower bound , on the backward error. The subplex optimization method then maximizes the function. , A numerical algorithm is unstable if the maximization shows that the backward error , can become large. Since numerical algorithms are treated as black boxes, Instab , normally requires little more than an executable version of a numerical algorithm , to determine if it is unstable. by Tom Rowan <na.rowan@na-net.ornl.gov> lang fortran size 24K file intel/README.txt for README for Intel(R) Decimal Floating-Point Math Library file intel/IntelRDFPMathLib10U1.tar for Intel(R) Decimal Floating-Point Math Library file iqpack for Fortran subroutines for the weights of interpolatory quadratures , the package is an implementation of the method described in , "Calculation of the Weights of Interpolatory Quadratures", , J. Kautsky and S. Elhay, Numer Math 40 (1982) 407-422, by S. Elhay, Feb 1988. gams h2c lang fortran file jet-lag-diet for Argonne's Anti-Jet-Lag Diet helps travelers adjust to new time zones. file jgraph.readme gams q file jgraph.tgz for program to plot graphs in Postscript by Jim Plank, Princeton University gams q file lis for Lis, a Library of Iterative Solvers for linear systems, , is a parallel library for solving linear equations and , eigenvalue problems that arise in the numerical solution , of partial differential equations using iterative methods. file machar for this program prints hardware-determined , machine constants obtained by smchar, a subroutine due to , w. j. cody. 
, descriptions of the machine constants are
, given in the prologue comments of smchar.
, subprograms called smchar
version Fri Oct 25 22:54:26 BST 1985
lang C
by Tim Hopkins
, Computing Laboratory, University of Kent, Canterbury CT2 7NF Kent U.K.

lib mglab
for tutorial 1d multigrid
gams i1b1

file mpsim
for Portable (to most UNIX systems) message-passing simulator
, that supports both C and FORTRAN. Uses forks and pipes
, to support up to 8 to 16 processes on UNIX bsd 4.x, SYS V 3.x,
, DYNIX, Encore, Ultrix, Sun, XENIX, Tek UNIX, 3B2s. Simulates
, Intel iPSC/1 and iPSC/2 hypercubes and produces trace file of message
, events. Trace-file analyzers produce tabular or graphical
, summaries. (Encore and Sequent versions will utilize
, multiple processors.)
by Tom Dunigan, Oak Ridge Nat Lab, 2/89, dunigan@msr.epm.ornl.gov
gams z

file mus
for A package for solving two-point BVPs. Double precision version.
by R.M.M. Mattheij, G.W.M. Staarink.
, G.W.M. Staarink; Economisch Instituut; Thomas van Aquinostraat 6;
, 6525 GD Nijmegen; The Netherlands

file nanet
for an introduction to NAnet, after its move from Stanford to ORNL

file nanet.tgz
for the software for running the na-net.

file netlib
for The source and scripts for netlib.

file netlib-paper
for The troff form of the paper describing netlib.

file nonsymdc
for This software is a sequential version of a parallel algorithm for
, computing the eigenvalues and eigenvectors of a
, non-symmetric matrix. The algorithm is based on a
, divide-and-conquer procedure and uses an iterative
, refinement technique.
, see tennessee/ut-cs-91-137.ps for details on the approach.
by Jack Dongarra <dongarra@cs.utk.edu>
, Majed Sidani <sidani@cray.com>

file randnum-cray
for vectorised random number generator for the CRAY X-MP.
by Oscar Buneman, 10/16/86
gams l6a21

file rktec.c
for computes the truncation error coefficients, tecs, of a Runge-Kutta
, formula, or a pair of formulas, specified in an input file.
, Version 2.1
by Mike Hosea (na.hosea@na-net.ornl.gov) June 6, 1994
gams i1c

file slamch.f
for determines single precision machine parameters.
gams r1

file sledge
for These routines estimate eigenvalues, eigenfunctions and/or
, spectral density functions for Sturm-Liouville problems.
by Steven Pruess <spruess@mines.colorado.edu>,
, Charles Fulton <fulton@zach.fit.edu>
lang fortran
size 195k

file sleign
for eigenvalues and eigenfunctions of regular and singular Sturm-Liouville
, boundary value problems. The package is a modification and
, extension of the code developed by Bailey, Gordon, and Shampine,
, described in ACM-TOMS 4(1978). (Burt Garbow, ANL, 11/29/88)
gams i1b3
lang fortran

file syevj
for the code implements the accurate symmetric eigensolver
, which consists of the symmetric indefinite decomposition
, followed by implicit Jacobi iteration.
by "Dr. Ivan Slapnicar" <islapnicar@uni-zg.ac.mail.yu>
, 3 Dec 1992
gams d4a1
lang fortran

file xplayer.tgz
for trace2au is a tool that takes a trace stream as input and outputs sounds
, on a Sun workstation. E.g., can be used with a trace file produced by
, picl to identify communication patterns, hotspots, and bottlenecks.
by Jean Yves Peterschmitt <jypeters@cs.utk.edu> and
, Bernard Tourancheau <btouranc@cs.utk.edu>
gams n1, s3

file trace2au_report.tgz
for utility for xplayer

file trace2au_tools.tgz
for utility for xplayer

file tymnet
for list of tymnet numbers around the country.

file vrend.tgz
for 3-D volume-renderer (using ray-casting); a PVM 2.4 implementation.
by Hugh Caffey, caffey@tc.cornell.edu Dec 10, 1992
gams q
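Several entries above (machar, dlamch.f, slamch.f) determine machine constants empirically at run time. A rough sketch of the idea for one such constant, the double-precision machine epsilon, might look like this in Python; this is only an illustration of the probing approach, not the machar algorithm itself, which also detects parameters such as the radix and the exponent range:

```python
def machine_epsilon():
    """Smallest power of two eps such that 1.0 + eps is distinguishable from 1.0."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

print(machine_epsilon())  # 2**-52 on IEEE 754 double-precision hardware
```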
A cylindrical tank has a radius of 23 m and a height of 52 m. What is its lateral area?
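For reference, the lateral (side) area of a cylinder is the circumference times the height, 2 x pi x r x h. A quick check of the numbers from the question in Python:

```python
import math

def lateral_area(radius, height):
    """Lateral surface area of a cylinder: circumference times height."""
    return 2 * math.pi * radius * height

area = lateral_area(23, 52)   # 2 * pi * 23 * 52 = 2392 * pi
print(round(area, 1))         # about 7514.7 square meters
```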
Re: A

From: Paul <paul_at_test.com>
Date: Sun, 10 Jul 2005 12:08:32 +0100
Message-ID: <42d101b1$0$2907$ed2e19e4@ptn-nntp-reader04.plus.net>

Jan Hidders wrote:
>>> In some sense you might say that is is "too large" to be a set. The
>>> collection of all relations has the same problem.
>> I'm skeptical to this, but if it is too difficult to explain (or to
>> give an example of a problem), I'll let it be for the moment.
>
> I'll give another hint. Since unary relations are similar to sets you
> can get Cantor's paradox.

Doesn't this only apply if you are considering the set of all relations over all domains? What if you restrict yourself to a finite set of domains? I can't see how Cantor's Paradox would apply in this case.

So rather than having a domain of "the set of all relations", which can't exist, you could have a domain of "the set of all relations over a specified finite set of domains". Or even an infinite set of domains, I suppose, providing it's still a well-defined set.

So the "size explosion" problem here is with the domains rather than the relations?

Paul.

Received on Sun Jul 10 2005 - 06:08:32 CDT
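For readers who want to see the obstruction being hinted at: Cantor's argument shows that for any set S and any map f from S into its power set, the diagonal set {x in S : x not in f(x)} is never hit by f, which is what rules out a set of all (unary) relations. A tiny finite illustration in Python, purely to make the diagonal construction explicit; it proves nothing about infinite domains, of course:

```python
def diagonal_set(S, f):
    """Cantor's diagonal set: the elements not contained in their own image f(x)."""
    return frozenset(x for x in S if x not in f(x))

S = {0, 1, 2}
# An arbitrary attempt at a map S -> P(S):
f = {0: frozenset(), 1: frozenset({1}), 2: frozenset({0, 1})}.__getitem__
D = diagonal_set(S, f)
print(D in {f(x) for x in S})  # False: f misses the diagonal set, whatever f is
```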
inertia stratification

Let $X$ be a nice algebraic variety (say smooth, projective) over a field of characteristic 0. Let $G$ be an abelian group acting on $X$. For each subgroup $H$ of $G$, denote by $X^H$ the closed subvariety of points fixed by $H$.

Proposition. $X^H$ is smooth.

Of course, the intersection between different $X^H$ could be non-empty. That's why people look at the "inertia stratification" $X_H=X^H- \bigcup_{H' \subsetneq H} X^{H'}$.

My question is: assuming the above proposition, why is each $X_H$ smooth? Thanks for your help.

ag.algebraic-geometry group-actions

$X_H$ is an open subset of $X^H$. I voted to close as "too localized". – Angelo Apr 7 '13 at 17:05

@Angelo, so the argument is just that an open of a smooth is smooth? – inert89 Apr 7 '13 at 17:11
Category Archives: Code

After entirely too long, I am happy to announce the beta release of boolean3, an R package for modeling causal complexity. The package can be downloaded at the following links: Unix/Linux: boolean3_3.0.20.tar.gz; Windows: boolean3_3.0.20.zip (Please let me know if you have any … Continue reading

John Cook has three entries up on his blog discussing the pitfalls of calculating the sample variance using the mathematical textbook definitions. He provides a Monte Carlo comparison of methods here, and a theoretical discussion here. He also provides a … Continue reading

I have fixed a small bug in mtable-ext that prevented asterisks from being printed for negative coefficients in mixed effects models output by lme4. Thanks to Reinhold Kliegl and Martin Elff for pointing out the bug and for providing the … Continue reading

I finally got around to organizing and packaging my complete set of extended model support for mtable in Martin Elff's memisc library. Here is a list of the models supported: coxph, survreg – Cox proportional hazards models and parametric survival … Continue reading

I have recently discovered memisc, an extremely useful R package by Martin Elff (see his memisc page here). The package contains any number of useful functions, and is particularly good at helping one manage and recode survey data. However, by far my … Continue reading

Some time ago I found myself in need of daily exchange rates for the Slovenian Tolar (though I can't now remember why). Unfortunately, I wasn't able to find the data in a readily usable format at the Bank of Slovenia … Continue reading
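(The numerically stable alternative that usually comes out of discussions like Cook's is Welford's one-pass update. Since the packages above are in R, the following Python sketch is only an illustrative translation of the idea, not code from any of these posts.)

```python
def welford_variance(xs):
    """One-pass, numerically stable sample variance (Welford's algorithm)."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # second factor uses the updated mean
    return m2 / (n - 1)            # sample variance; requires n >= 2

print(welford_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # 32/7, about 4.5714
```

Unlike the textbook sum-of-squares formula, this never subtracts two large nearly equal quantities, which is where the catastrophic cancellation comes from.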
Homewood, IL Math Tutor Find a Homewood, IL Math Tutor ...I am currently teaching Decision Science, which is a applied Linear Algebra class in the Business Department. I have an MBA in Marketing from Keller Graduate School, plus over thirty years of marketing experience as president of a manufacturers' representative firm, and am a current member of th... 11 Subjects: including statistics, probability, algebra 1, algebra 2 ...These three are my absolute passion and I would love to help others with these subjects as well. I love teaching others; I teach my friends whenever they need help grasping and understanding something, or just a friendly helping hand. I'm very sociable and easy to get along with. 16 Subjects: including geometry, SAT math, English, algebra 1 ...I tutor the way that I teach--present the material in a way that it can understood, practice with my clients, and give them opportunities to try themselves with me right there to troubleshoot. Whether you're in Chicago or the suburbs, I will put in the work to give you what you need. Working towards success in mathematics! 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I have received many awards and certificates for excellence in tutoring. I have also received extensive tutoring training that highly qualifies me to help students with learning disabilities succeed in math. Biology is my passion! 22 Subjects: including statistics, precalculus, prealgebra, algebra 2 I know it may seem weird to many, but I absolutely love math, especially algebra. :) Something that I may love even more than algebra itself though is helping students learn algebra. It is my pleasure to help those in need, and I am very patient in working with students. I am open to learners at all levels and have no problem going back to whatever basics are necessary. 
8 Subjects: including precalculus, softball, discrete math, logic
Express 5,981,025,000 in exponential form using standard scientific or "e" notation (for example, 105 = 1.05e2). And this one too please, I'm horrible with exponents: Express .0370700 in exponential form, using standard scientific or "e" notation (for example, 105 = 1.05e2). Enter the correct number of significant figures.

It's 5.981025e9 (and the other one is 3.707e-2).

Scientific form is a way of representing a number by two numbers: the mantissa and exponent (mantissa before the e, exponent after). To get back to the real value, you multiply the mantissa by 10^exponent (10 to the power of the exponent, or multiply by 10 exponent-times). For a number to be in scientific form, the mantissa must be between 1 and 10 (unless the number is 0 - but otherwise ignore this). So you effectively just move the decimal point so that the number becomes a number between 1 and 10; then, the exponent is how many places you moved it.

So for example, with 5,981,025,000, for the mantissa to be between 1 and 10 we must move the decimal point to just after the 5. (It will always be after the first non-zero digit, unless the whole number is 0.) We had to move the decimal point 9 places to the left, so to get from 5.981025 to 5,981,025,000 we must move the decimal point 9 places to the right: this means that the exponent is +9 - we must multiply the mantissa by 10 nine times to get the original number. i.e. the answer is 5.981025e9.

Similarly with the other one, we move the decimal point to just after the 3 to get the mantissa; to get back to the original number we must then divide by 10^2 (divide by 10 twice), so the exponent is -2 (same as multiplying by 1/100 or 10^-2). So this answer is 3.707e-2.

You do not need significant figures... scientific notation doesn't necessarily involve rounding the number. Hope that helped.

Before we answer we need to know how many significant figures you need:
2 sig figs: 6.0e9 & 3.7e-2
3 sig figs: 5.98e9 & 3.71e-2
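The move-the-decimal-point rule described above is exactly what standard "e"-format printing does, so answers like these can be checked mechanically. A quick look in Python (note that Python pads the exponent to two digits with a sign, so 5.981025e9 prints as 5.981025e+09; same value):

```python
def to_sci(x, decimals=6):
    """Format x in 'e' notation with the given number of mantissa decimals."""
    return f"{x:.{decimals}e}"

print(to_sci(5_981_025_000))   # 5.981025e+09
print(to_sci(0.0370700, 3))    # 3.707e-02
```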
Will I Ever Use Factoring in Real Life? | The Classroom | Synonym

Factoring refers to the separation of a formula, number or matrix into products. For example, 49 can be factored into two 7s, or x^2 - 9 can be factored into x - 3 and x + 3. This is not a procedure used commonly in everyday life. Part of the reason is that the examples given in algebra class are so simple and that equations do not take such simple form in higher-level classes. Another reason is that everyday life does not require use of physics and chemistry calculations, unless it is your field of study or profession.

High School Science

Second-order polynomials--e.g., x^2 + 2x + 4--are regularly factored in high school algebra classes, usually in ninth grade. Being able to find the zeros of such formulas is basic to solving problems in high school chemistry and physics classes in the following year or two. Second-order formulas come up regularly in such classes.

Quadratic Formula

However, unless the science instructor has heavily rigged the problems, such formulas will not be as neat as they are presented in math class when simplification is used to help focus students on factoring. In physics and chemistry classes, the formulas are more likely to come out as 4.9t^2 + 10t - 100 = 0.
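Carrying out the quadratic formula on an equation like that takes only a few lines of code; here is an illustrative Python sketch (not from the article itself):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # no real roots
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

t_neg, t_pos = quadratic_roots(4.9, 10, -100)
print(round(t_pos, 3))  # the positive root is the physically meaningful time
```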
The polynomial is of high order, for example, with an interest term with exponent 360 for a 30-year mortgage. This is not a formula that can be factored. Instead, if the interest needs to be calculated, it is solved for by computer or calculator. Numerical Analysis This brings us into a field of study called numerical analysis. These methods are used when the value of an unknown can’t be solved for simply (e.g., by factoring) but must instead be solved for by computer, using approximation methods that estimate the answer better and better with each iteration of some algorithm such as Newton’s method or the bisection method. These are the sorts of methods used in financial calculators to calculate your mortgage rate. Matrix Factorization Speaking of numerical analysis, one use of factorization is in numerical computations to split a matrix into two product matrices. This is done to solve not a single equation but instead a group of equations simultaneously. The algorithm to perform the factorization is itself far more complex than the quadratic formula. The Bottom Line Factorization of polynomials as it is presented in algebra class is effectively too simple to be used in everyday life. It is nevertheless essential to completing other high school classes. More advanced tools are needed to account for the greater complexity of equations in the real world. Some tools can be used without understanding, e.g., in using a financial calculator. However, even entering the data in with the correct sign and making sure the right interest rate is used makes factoring polynomials simple by comparison. Style Your World With Color • Burden and Faires; Numerical Analysis; 1987
{"url":"http://classroom.synonym.com/ever-use-factoring-real-life-2459.html","timestamp":"2014-04-19T06:53:50Z","content_type":null,"content_length":"32305","record_id":"<urn:uuid:132d5563-3f69-4059-bbda-3747d4480e03>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 76 - 100 of 408 76. CMB 2011 (vol 56 pp. 593) On the $p$-norm of an Integral Operator in the Half Plane We give a partial answer to a conjecture of DostaniÄ on the determination of the norm of a class of integral operators induced by the weighted Bergman projection in the upper half plane. Keywords:Bergman projection, integral operator, $L^p$-norm, the upper half plane Categories:47B38, 47G10, 32A36 77. CMB 2011 (vol 56 pp. 184) On Some Non-Riemannian Quantities in Finsler Geometry In this paper we study several non-Riemannian quantities in Finsler geometry. These non-Riemannian quantities play an important role in understanding the geometric properties of Finsler metrics. In particular, we study a new non-Riemannian quantity defined by the S-curvature. We show some relationships among the flag curvature, the S-curvature, and the new non-Riemannian quantity. Keywords:Finsler metric, S-curvature, non-Riemannian quantity Categories:53C60, 53B40 78. CMB 2011 (vol 56 pp. 225) On the Notion of Visibility of Torsors Let $J$ be an abelian variety and $A$ be an abelian subvariety of $J$, both defined over $\mathbf{Q}$. Let $x$ be an element of $H^1(\mathbf{Q},A)$. Then there are at least two definitions of $x$ being visible in $J$: one asks that the torsor corresponding to $x$ be isomorphic over $\mathbf{Q}$ to a subvariety of $J$, and the other asks that $x$ be in the kernel of the natural map $H^1(\ mathbf{Q},A) \to H^1(\mathbf{Q},J)$. In this article, we clarify the relation between the two definitions. Keywords:torsors, principal homogeneous spaces, visibility, Shafarevich-Tate group Categories:11G35, 14G25 79. CMB 2011 (vol 56 pp. 39) Comparison Theorem for Conjugate Points of a Fourth-order Linear Differential Equation In 1961, J. 
Barrett showed that if the first conjugate point $\eta_1(a)$ exists for the differential equation $(r(x)y'')''= p(x)y,$ where $r(x)\gt 0$ and $p(x)\gt 0$, then so does the first systems-conjugate point $\widehat\eta_1(a)$. The aim of this note is to extend this result to the general equation with middle term $(q(x)y')'$ without further restriction on $q(x)$, other than Keywords:fourth-order linear differential equation, conjugate points, system-conjugate points, subwronskians Categories:47E05, 34B05, 34C10 80. CMB 2011 (vol 56 pp. 395) Coessential Abelianization Morphisms in the Category of Groups An epimorphism $\phi\colon G\to H$ of groups, where $G$ has rank $n$, is called coessential if every (ordered) generating $n$-tuple of $H$ can be lifted along $\phi$ to a generating $n$-tuple for $G$. We discuss this property in the context of the category of groups, and establish a criterion for such a group $G$ to have the property that its abelianization epimorphism $G\to G/[G,G]$, where $[G,G]$ is the commutator subgroup, is coessential. We give an example of a family of 2-generator groups whose abelianization epimorphism is not coessential. This family also provides counterexamples to the generalized Andrews--Curtis conjecture. Keywords:coessential epimorphism, Nielsen transformations, Andrew-Curtis transformations Categories:20F05, 20F99, 20J15 81. CMB 2011 (vol 56 pp. 510) Linear Forms in Monic Integer Polynomials We prove a necessary and sufficient condition on the list of nonzero integers $u_1,\dots,u_k$, $k \geq 2$, under which a monic polynomial $f \in \mathbb{Z}[x]$ is expressible by a linear form $u_1f_1+\dots+u_kf_k$ in monic polynomials $f_1,\dots,f_k \in \mathbb{Z}[x]$. This condition is independent of $f$. We also show that if this condition holds, then the monic polynomials $f_1,\ dots,f_k$ can be chosen to be irreducible in $\mathbb{Z}[x]$. 
Keywords:irreducible polynomial, height, linear form in polynomials, Eisenstein's criterion Categories:11R09, 11C08, 11B83 82. CMB 2011 (vol 56 pp. 412) Structure in Sets with Logarithmic Doubling Suppose that $G$ is an abelian group, $A \subset G$ is finite with $|A+A| \leq K|A|$ and $\eta \in (0,1]$ is a parameter. Our main result is that there is a set $\mathcal{L}$ such that \begin {equation*} |A \cap \operatorname{Span}(\mathcal{L})| \geq K^{-O_\eta(1)}|A| \quad\text{and}\quad |\mathcal{L}| = O(K^\eta\log |A|). \end{equation*} We include an application of this result to a generalisation of the Roth--Meshulam theorem due to Liu and Spencer. Keywords:Fourier analysis, Freiman's theorem, capset problem 83. CMB 2011 (vol 56 pp. 442) Closed Left Ideal Decompositions of $U(G)$ Let $G$ be an infinite discrete group and let $\beta G$ be the Stone--Ä ech compactification of $G$. We take the points of $Ä ta G$ to be the ultrafilters on $G$, identifying the principal ultrafilters with the points of $G$. The set $U(G)$ of uniform ultrafilters on $G$ is a closed two-sided ideal of $\beta G$. For every $p\in U(G)$, define $I_p\subseteq\beta G$ by $I_p=\bigcap_{A\ in p}\operatorname{cl} (GU(A))$, where $U(A)=\{p\in U(G):A\in p\}$. We show that if $|G|$ is a regular cardinal, then $\{I_p:p\in U(G)\}$ is the finest decomposition of $U(G)$ into closed left ideals of $\beta G$ such that the corresponding quotient space of $U(G)$ is Hausdorff. Keywords:Stone--Ä ech compactification, uniform ultrafilter, closed left ideal, decomposition Categories:22A15, 54H20, 22A30, 54D80 84. CMB 2011 (vol 56 pp. 400) A Factorization Theorem for Multiplier Algebras of Reproducing Kernel Hilbert Spaces Let $(X,\mathcal B,\mu)$ be a $\sigma$-finite measure space and let $H\subset L^2(X,\mu)$ be a separable reproducing kernel Hilbert space on $X$. We show that the multiplier algebra of $H$ has property $(A_1(1))$. 
Keywords:reproducing kernel Hilbert space, Berezin transform, dual algebra Categories:46E22, 47B32, 47L45 85. CMB 2011 (vol 56 pp. 326) Restricting Fourier Transforms of Measures to Curves in $\mathbb R^2$ We establish estimates for restrictions to certain curves in $\mathbb R^2$ of the Fourier transforms of some fractal measures. Keywords:Fourier transforms of fractal measures, Fourier restriction Categories:42B10, 28A12 86. CMB 2011 (vol 56 pp. 272) On Super Weakly Compact Convex Sets and Representation of the Dual of the Normed Semigroup They Generate In this note, we first give a characterization of super weakly compact convex sets of a Banach space $X$: a closed bounded convex set $K\subset X$ is super weakly compact if and only if there exists a $w^*$ lower semicontinuous seminorm $p$ with $p\geq\sigma_K\equiv\sup_{x\in K}\langle\,\cdot\,,x\rangle$ such that $p^2$ is uniformly Fréchet differentiable on each bounded set of $X^*$. Then we present a representation theorem for the dual of the semigroup $\textrm{swcc}(X)$ consisting of all the nonempty super weakly compact convex sets of the space $X$. Keywords:super weakly compact set, dual of normed semigroup, uniform Fréchet differentiability, representation Categories:20M30, 46B10, 46B20, 46E15, 46J10, 49J50 87. CMB 2011 (vol 56 pp. 258) The Smallest Pisot Element in the Field of Formal Power Series Over a Finite Field Dufresnoy and Pisot characterized the smallest Pisot number of degree $n \geq 3$ by giving explicitly its minimal polynomial. In this paper, we translate Dufresnoy and Pisot's result to the Laurent series case. The aim of this paper is to prove that the minimal polynomial of the smallest Pisot element (SPE) of degree $n$ in the field of formal power series over a finite field is given by $P (Y)=Y^{n}-\alpha XY^{n-1}-\alpha^n,$ where $\alpha$ is the least element of the finite field $\mathbb{F}_{q}\backslash\{0\}$ (as a finite total ordered set). 
We prove that the sequence of SPEs of degree $n$ is decreasing and converges to $\alpha X.$ Finally, we show how to obtain explicit continued fraction expansion of the smallest Pisot element over a finite field. Keywords:Pisot element, continued fraction, Laurent series, finite fields Categories:11A55, 11D45, 11D72, 11J61, 11J66

88. CMB 2011 (vol 56 pp. 251) Sign Changes of the Liouville Function on Quadratics Let $\lambda (n)$ denote the Liouville function. Complementary to the prime number theorem, Chowla conjectured that \begin{equation*} \label{a.1} \sum_{n\le x} \lambda (f(n)) =o(x)\tag{$*$} \end{equation*} for any polynomial $f(x)$ with integer coefficients which is not of form $bg(x)^2$. When $f(x)=x$, $(*)$ is equivalent to the prime number theorem. Chowla's conjecture has been proved for linear functions, but for degree greater than 1, the conjecture seems to be extremely hard and remains wide open. One can consider a weaker form of Chowla's conjecture. Conjecture 1. [Cassaigne et al.] If $f(x) \in \mathbb{Z}[x]$ and is not in the form of $bg^2(x)$ for some $g(x)\in \mathbb{Z}[x]$, then $\lambda (f(n))$ changes sign infinitely often. Clearly, Chowla's conjecture implies Conjecture 1. Although weaker, Conjecture 1 is still wide open for polynomials of degree $\gt 1$. In this article, we study Conjecture 1 for quadratic polynomials. One of our main theorems is the following. Theorem 1 Let $f(x) = ax^2+bx+c$ with $a\gt 0$ and $l$ be a positive integer such that $al$ is not a perfect square. If the equation $f(n)=lm^2$ has one solution $(n_0,m_0) \in \mathbb{Z}^2$, then it has infinitely many positive solutions $(n,m) \in \mathbb{N}^2$. As a direct consequence of Theorem 1, we prove the following. Theorem 2 Let $f(x)=ax^2+bx+c$ with $a \in \mathbb{N}$ and $b,c \in \mathbb{Z}$. Let \[ A_0=\Bigl[\frac{|b|+(|D|+1)/2}{2a}\Bigr]+1.
\] Then either the binary sequence $\{ \lambda (f(n)) \}_{n=A_0}^\infty$ is a constant sequence or it changes sign infinitely often. Some partial results of Conjecture 1 for quadratic polynomials are also proved using Theorem 1. Keywords:Liouville function, Chowla's conjecture, prime number theorem, binary sequences, changes sign infinitely often, quadratic polynomials, Pell equation Categories:11N60, 11B83, 11D09

89. CMB 2011 (vol 56 pp. 388) Application of Measure of Noncompactness to Infinite Systems of Differential Equations In this paper we determine the Hausdorff measure of noncompactness on the sequence space $n(\phi)$ of W. L. C. Sargent. Further we apply the technique of measures of noncompactness to the theory of infinite systems of differential equations in the Banach sequence spaces $n(\phi)$ and $m(\phi)$. Our aim is to present some existence results for infinite systems of differential equations formulated with the help of measures of noncompactness. Keywords:sequence spaces, BK spaces, measure of noncompactness, infinite system of differential equations Categories:46B15, 46B45, 46B50, 34A34, 34G20

90. CMB 2011 (vol 56 pp. 354) The Sizes of Rearrangements of Cantor Sets A linear Cantor set $C$ with zero Lebesgue measure is associated with the countable collection of the bounded complementary open intervals. A rearrangement of $C$ has the same lengths of its complementary intervals, but with different locations. We study the Hausdorff and packing $h$-measures and dimensional properties of the set of all rearrangements of some given $C$ for general dimension functions $h$. For each set of complementary lengths, we construct a Cantor set rearrangement which has the maximal Hausdorff and the minimal packing $h$-premeasure, up to a constant. We also show that if the packing measure of this Cantor set is positive, then there is a rearrangement which has infinite packing measure.
Keywords:Hausdorff dimension, packing dimension, dimension functions, Cantor sets, cut-out set Categories:28A78, 28A80

91. CMB 2011 (vol 56 pp. 500) The Lang--Weil Estimate for Cubic Hypersurfaces An improved estimate is provided for the number of $\mathbb{F}_q$-rational points on a geometrically irreducible, projective, cubic hypersurface that is not equal to a cone. Keywords:cubic hypersurface, rational points, finite fields Categories:11G25, 14G15

92. CMB 2011 (vol 56 pp. 292) Quasisymmetrically Minimal Moran Sets M. Hu and S. Wen considered quasisymmetrically minimal uniform Cantor sets of Hausdorff dimension $1$, where at the $k$-th set one removes from each interval $I$ a certain number $n_{k}$ of open subintervals of length $c_{k}|I|$, leaving $(n_{k}+1)$ closed subintervals of equal length. Quasisymmetrically minimal Moran sets of Hausdorff dimension $1$ considered in the paper are more general than uniform Cantor sets in that neither the open subintervals nor the closed subintervals are required to be of equal length. Keywords:quasisymmetric, Moran set, Hausdorff dimension Categories:28A80, 54C30

93. CMB 2011 (vol 56 pp. 265) Embedding Distributions of Generalized Fan Graphs Total embedding distributions have been known for a few classes of graphs. Chen, Gross, and Rieper computed it for necklaces, close-end ladders and cobblestone paths. Kwak and Shim computed it for bouquets of circles and dipoles. In this paper, a splitting theorem is generalized and the embedding distributions of generalized fan graphs are obtained. Keywords:total embedding distribution, splitting theorem, generalized fan graphs

94. CMB 2011 (vol 56 pp. 127) Evolution of Eigenvalues along Rescaled Ricci Flow In this paper, we discuss monotonicity formulae of various entropy functionals under various rescaled versions of Ricci flow.
As an application, we prove that the lowest eigenvalue of a family of geometric operators $-4\Delta + kR$ is monotonic along the normalized Ricci flow for all $k\ge 1$ provided the initial manifold has nonpositive total scalar curvature. Keywords:monotonicity formulas, Ricci flow Categories:58C40, 53C44 95. CMB 2011 (vol 55 pp. 842) The Rank of Jacobian Varieties over the Maximal Abelian Extensions of Number Fields: Towards the Frey-Jarden Conjecture Frey and Jarden asked if any abelian variety over a number field $K$ has the infinite Mordell-Weil rank over the maximal abelian extension $K^{\operatorname{ab}}$. In this paper, we give an affirmative answer to their conjecture for the Jacobian variety of any smooth projective curve $C$ over $K$ such that $\sharp C(K^{\operatorname{ab}})=\infty$ and for any abelian variety of $\operatorname{GL}_2$-type with trivial character. Keywords:Mordell-Weil rank, Jacobian varieties, Frey-Jarden conjecture, abelian points Categories:11G05, 11D25, 14G25, 14K07 96. CMB 2011 (vol 56 pp. 283) Transcendental Solutions of a Class of Minimal Functional Equations We prove a result concerning power series $f(z)\in\mathbb{C}[\mkern-3mu[z]\mkern-3mu]$ satisfying a functional equation of the form $$ f(z^d)=\sum_{k=1}^n \frac{A_k(z)}{B_k(z)}f(z)^k, $$ where $A_k (z),B_k(z)\in \mathbb{C}[z]$. In particular, we show that if $f(z)$ satisfies a minimal functional equation of the above form with $n\geqslant 2$, then $f(z)$ is necessarily transcendental. Towards a more complete classification, the case $n=1$ is also considered. Keywords:transcendence, generating functions, Mahler-type functional equation Categories:11B37, 11B83, 11J91 97. CMB 2011 (vol 56 pp.
366) Multiple Solutions for Nonlinear Periodic Problems We consider a nonlinear periodic problem driven by a nonlinear nonhomogeneous differential operator and a Carathéodory reaction term $f(t,x)$ that exhibits a $(p-1)$-superlinear growth in $x \in \mathbb{R}$ near $\pm\infty$ and near zero. A special case of the differential operator is the scalar $p$-Laplacian. Using a combination of variational methods based on the critical point theory with Morse theory (critical groups), we show that the problem has three nontrivial solutions, two of which have constant sign (one positive, the other negative). Keywords:$C$-condition, mountain pass theorem, critical groups, strong deformation retract, contractible space, homotopy invariance Categories:34B15, 34B18, 34C25, 58E05 98. CMB 2011 (vol 56 pp. 3) Semiclassical Limits of Eigenfunctions on Flat $n$-Dimensional Tori We provide a proof of a conjecture by Jakobson, Nadirashvili, and Toth stating that on an $n$-dimensional flat torus $\mathbb T^{n}$, the Fourier transforms of squares of the eigenfunctions $|\varphi_\lambda|^2$ of the Laplacian have uniform $l^n$ bounds that do not depend on the eigenvalue $\lambda$. The proof is a generalization of an argument by Jakobson et al. for the lower dimensional cases. These results imply uniform bounds for semiclassical limits on $\mathbb T^{n+2}$. We also prove a geometric lemma that bounds the number of codimension-one simplices satisfying a certain restriction on an $n$-dimensional sphere $S^n(\lambda)$ of radius $\sqrt{\lambda}$, and we use it in the proof. Keywords:semiclassical limits, eigenfunctions of Laplacian on a torus, quantum limits Categories:58G25, 81Q50, 35P20, 42B05 99. CMB 2011 (vol 56 pp.
203) Productively Lindelöf Spaces May All Be $D$ We give easy proofs that (a) the Continuum Hypothesis implies that if the product of $X$ with every Lindelöf space is Lindelöf, then $X$ is a $D$-space, and (b) Borel's Conjecture implies every Rothberger space is Hurewicz. Keywords:productively Lindelöf, $D$-space, projectively $\sigma$-compact, Menger, Hurewicz Categories:54D20, 54B10, 54D55, 54A20, 03F50 100. CMB 2011 (vol 55 pp. 233) On Algebraically Maximal Valued Fields and Defectless Extensions Let $v$ be a Henselian Krull valuation of a field $K$. In this paper, the authors give some necessary and sufficient conditions for a finite simple extension of $(K,v)$ to be defectless. Various characterizations of algebraically maximal valued fields are also given which lead to a new proof of a result proved by Yu. L. Ershov. Keywords:valued fields, non-Archimedean valued fields Categories:12J10, 12J25
Two-Dimensional Runs in Bernoulli Trials Suppose we have a tray with a 20x20 grid of "dimples" onto which marbles can be placed. We dip this tray into a large barrel of red and blue marbles, with a population density of a = 0.95 red and b = 0.05 blue marbles, and each dimple acquires a marble. What is the probability of achieving at least one 5x5 square region of all red marbles? In principle, given an m-by-m square grid, we can determine the probability of at least one n-by-n region of “successes” (i.e., red marbles) by direct application of inclusion-exclusion. For example, if m = n+1 there would be only four possible n-by-n regions, and we could represent these by unions of the nine mutually exclusive regions shown in the figure below. Letting the letters denote the event that the respective regions consist of nothing but successes (i.e., red marbles), the event of having one or more n-by-n regions of successes has a Boolean expression as a union of these events. If we now let the letters signify the probabilities of the respective events, we can apply inclusion-exclusion. The probabilities of the individual events are simple powers of a, and from them we obtain the probability P[1] of one or more n-by-n runs in a region of size (n+1) by (n+1). In the same way we can determine the probability P[2] of one or more n-by-n runs in a region of size (n+2) by (n+2), and likewise the probability P[3] of one or more n-by-n runs in a region of size (n+3) by (n+3). These results suggest the form of the first several terms of P[k], so if a is sufficiently small (say around 0.5 or lower) these terms could be used to approximate the answer. For example, if our original problem had specified a probability of 0.6 for the red marbles (instead of 0.95), we could accurately compute the result 0.00062 using these few terms. Unfortunately for values of a approaching 1, the result doesn't converge unless nearly all of the terms are included.
Since the number of terms seems to at least double with each increment of k, it seems there would be well over 100,000 terms in the full expression for P[15]. Of course, one very simple way of placing a lower limit on the answer for large values of a is to consider mutually exclusive regions. We know that the probability of any particular 5x5 region containing nothing but red marbles is (0.95)^25 = 0.2773..., and this applies to each of the 16 mutually exclusive 5x5 regions. The probability of any k of these events is simply the kth power of the probability of one of them, so the probability of the union is 1 - (1 - 0.2773...)^16 = 0.9944... This represents a lower bound on the probability that any of these mutually exclusive 5x5 regions is all red. To improve this lower bound, recall that in the one-dimensional case we can express the probability of a run of n consecutive successes (each with probability e) in a sequence of m Bernoulli trials by a simple recurrence relation. Letting p[j] denote the probability that the sequence of trials from 1 to j contains at least one run of n consecutive successes, we have p[1] = p[2] = ... = p[n-1] = 0, and p[n] = e^n. For each subsequent trial the probability p[j] of having at least one run to that point equals the previous probability p[j-1] plus the probability that [the preceding j-1 trials did not contain a run but the entire sequence up to the jth trial does contain a run]. The event in square brackets is true if and only if the last n trials (i.e., the trials j, j-1, j-2, …, j-n+1) were all successes and the trial j-n was a failure and the sequence of trials up to j-n-1 did not contain a run. Thus the probability p[j] can be expressed as p[j] = p[j-1] + (1 - p[j-n-1]) (1-e) e^n. This formula enables us to recursively compute the probability of one or more runs in a sequence of any given length. The ranges of the sequence involved in this formula are illustrated in the figure below for the case n = 5.
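The one-dimensional recurrence just described is straightforward to compute directly; here is a short Python sketch (the function and argument names are mine, not from the article):

```python
def prob_run(m, n, e):
    """Probability of at least one run of n consecutive successes in m
    Bernoulli trials, each succeeding with probability e, using the
    recurrence p[j] = p[j-1] + (1 - p[j-n-1]) * (1 - e) * e**n."""
    p = [0.0] * (m + 1)      # p[0] .. p[n-1] are all zero
    if m >= n:
        p[n] = e ** n
    for j in range(n + 1, m + 1):
        p[j] = p[j - 1] + (1.0 - p[j - n - 1]) * (1.0 - e) * e ** n
    return p[m]
```

For example, with e = 0.95**5 and 20 trials, prob_run(20, 5, 0.95**5) reproduces the value Q = 0.880786... computed later in the article for a 20x5 stripe.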
The recurrence formula asserts that the probability for the entire range up to the blue cell to contain a run equals the probability for the range up to the adjacent yellow cell, plus the probability that the blue and yellow cells are all successes and the pink cell is a failure and the entire green range does not contain a run. Using this result we can improve the lower bound on the two-dimensional problem by partitioning the original 20x20 grid into four mutually exclusive 20x5 stripes. If Q denotes the probability that a stripe contains one or more 5x5 squares of pure red, then a lower bound on the overall probability is given by 1 - (1 - Q)^4. To determine Q we can treat a stripe as a one-dimensional sequence of Bernoulli trials. Each 5x1 row of a stripe has a probability of e = (0.95)^5 = 0.7737... of being all red, and so Q is the probability of getting one or more "runs" of length 5 in a sequence of 20 trials. Letting p[j] denote the probability of one or more runs of 5 in j trials, we have the recursive relation p[j] = p[j-1] + (1 - p[j-6]) (1-e) e^5 with the initial values p[1] = p[2] = p[3] = p[4] = 0 and p[5] = e^5. From this we can compute Q = p[20] = 0.880786..., and therefore we have the lower bound for the entire 20x20 square of 1 - (1 - 0.880786)^4 = 0.99980... To verify that these lower bounds are valid, we can estimate the result numerically by means of a Monte Carlo simulation. Simulating a 20x20 array of dimples, each with probability a of being red, we find that the approximate probabilities of at least one 5x5 region of pure red marbles for various values of a are as tabulated below. These numerical results show that, if each dimple has probability of 0.95 of being filled by a red marble, then the probability of at least one 5x5 region of all red marbles differs from 1 by only about 1.8E-6. Also, the probability rises rapidly from near zero to near unity as a increases from about 0.70 to about 0.90.
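The Monte Carlo check described above can be sketched as follows (the grid size, square size, and a values are the article's; the implementation details and names are mine):

```python
import random

def has_square(grid, m, n):
    """Return True if the m-by-m boolean grid contains an n-by-n
    block consisting entirely of True (all-red) cells."""
    for i in range(m - n + 1):
        for j in range(m - n + 1):
            if all(grid[i + di][j + dj] for di in range(n) for dj in range(n)):
                return True
    return False

def mc_estimate(a, m=20, n=5, trials=10000, seed=1):
    """Monte Carlo estimate of the probability of at least one
    n-by-n all-success square in an m-by-m Bernoulli(a) grid."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < a for _ in range(m)] for _ in range(m)]
        hits += has_square(grid, m, n)
    return hits / trials
```

With a = 0.95 essentially every simulated tray contains a 5x5 all-red square, consistent with the article's figure of roughly 1 - 1.8E-6, while for a near 0.70 the estimate drops toward zero.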
One possible approach to computing the exact answer explicitly is to determine a two-dimensional recurrence relation analogous to the recurrence for one-dimensional runs. In general, given an m-by-m square array of Bernoulli trials with probability a of success on each individual trial, we seek the probability that the array will contain at least one n-by-n square of successes. Letting p[i,j] denote the probability that a rectangle of i columns and j rows contains at least one such n-by-n two-dimensional run, we obviously have p[i,j] = 0 for all i,j such that i < n or j < n, and we have p[n,n] = a^(n^2). Incidentally, in the special case n = 1 we have p[i,j] = 1 - (1-a)^(ij), so in the following we assume n > 1. Now, we already know how to compute the values of p[n,n+j] = p[n+j,n] for any j by means of the one-dimensional recurrence described previously. At this point we have the values depicted in the figure below for the case n = 5. We next need to determine the value of p[n+1,n+1]. By reasoning analogous to the one-dimensional case, we can say that this equals the probability of one or more runs in the square region extending to the cell [n+1,n+1] but excluding that cell, plus the probability that a run is completed by that cell but there were no other runs in that square region. It might seem as if the first quantity would be given by p[n+1,n] + p[n,n+1] - p[n,n], but this doesn't fully account for all the inter-dependencies of overlapping runs. To determine the correct expression, note that there are only three possible positions for an n-by-n run in the region excluding the cell [n+1,n+1], and the probability of the union is given by inclusion-exclusion. To this we must add the probability that a run is completed by the cell [n+1,n+1] and there are no other runs in the overall region.
This condition entails that all n^2 cells leading up to the subject corner are successes, and there is at least one failure in each of the three overlapping regions marked with blue, green, and red bars in the figure below. We can partition those cells into five groups, designated a through e as shown below. Letting the letters denote the probabilities of the individual components of the Boolean expression for the required failures, the probability of the overall event is given by inclusion-exclusion. In each region we require that at least one cell is a failure, i.e., not all the cells are successes, which determines the individual component probabilities. Inserting these values into the preceding expression, multiplying the result by a to the power n^2, and adding it to expression (1), we arrive at the probability P[1], which is naturally identical to the expression derived previously. The analogy with the one-dimensional recurrence is clarified if we rewrite this in terms of P*, the probability of one or more 5x5 red regions in the overall square region less the 5x5 square at the outer corner. For this particular case, P* = 0, but in general it is not zero. To calculate the probabilities of all the cells in the entire 20x20 grid, we need to generalize this formula to accommodate the general case illustrated below. Given the probabilities for the rectangular regions extending from the outer boundary to each of the green, pink, and yellow cells, we wish to compute the probability for the blue cell. Just as in the one-dimensional case, this will equal the probability for the region extending from the boundary up to the adjacent yellow cells, plus the probability that the blue and yellow cells are all successes and the pink cells are not contained in any runs and there are no runs in the green cells.
The task therefore reduces to determining a general expression for the probability that the union of two overlapping rectangular regions contains a run, given the probabilities of containing a run for each of the rectangular regions within that union. Another approach would be to regard the m^2 trials as a linear sequence (reading left to right and top to bottom), and treat the squares of successes as linear patterns, e.g., with m=5, n=2 the squares are just linear sequences of the form 1100011 (where 0 means "don't care"). However, some way of excluding "wrap-around" squares would be needed, and there are complications in the recurrence relation for runs containing "don't care" elements.
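As a sanity check on the closed-form and recurrence approaches, the exact probability can also be computed by brute-force enumeration for small grids; this sketch (function name mine) simply sums the probability of every outcome that contains at least one all-success square:

```python
from itertools import product

def exact_prob(a, m, n):
    """Exact probability that an m-by-m grid of Bernoulli(a) trials
    contains at least one n-by-n square of successes, by enumerating
    all 2**(m*m) outcomes (feasible only for small m)."""
    total = 0.0
    for outcome in product([True, False], repeat=m * m):
        grid = [outcome[r * m:(r + 1) * m] for r in range(m)]
        if any(all(grid[i + di][j + dj] for di in range(n) for dj in range(n))
               for i in range(m - n + 1) for j in range(m - n + 1)):
            k = sum(outcome)  # number of successes in this outcome
            total += a ** k * (1 - a) ** (m * m - k)
    return total
```

For instance, exact_prob(a, n, n) reduces to a**(n*n), and the trivial n = 1 case matches 1 - (1-a)**(m*m).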
Contents: Target Population, Matched Samples, Independent Samples, Random Sampling, Simple Random Sampling, Stratified Sampling, Cluster Sampling, Quota Sampling, Spatial Sampling, Sampling Variability, Standard Error, Bias, Precision. Target Population The target population is the entire group a researcher is interested in; the group about which the researcher wishes to draw conclusions. Suppose we take a group of men aged 35-40 who have suffered an initial heart attack. The purpose of this study could be to compare the effectiveness of two drug regimes for delaying or preventing further attacks. The target population here would be all men meeting the same general conditions as those actually included in the study. Matched Samples Matched samples can arise in the following situations: a. Two samples in which the members are clearly paired, or are matched explicitly by the researcher. For example, IQ measurements on pairs of identical twins. b. Those samples in which the same attribute, or variable, is measured twice on each subject, under different circumstances. Commonly called repeated measures. Examples include the times of a group of athletes for 1500m before and after a week of special training; or the milk yields of cows before and after being fed a particular diet. Sometimes, the difference in the value of the measurement of interest for each matched pair is calculated, for example, the difference between before and after measurements, and these figures then form a single sample for an appropriate statistical analysis. Independent Samples Independent samples are those samples selected from the same population, or different populations, which have no effect on one another. That is, no correlation exists between the samples. Random Sampling Random sampling is a sampling technique where we select a group of subjects (a sample) for study from a larger group (a population).
Each individual is chosen entirely by chance and each member of the population has a known, but possibly non-equal, chance of being included in the sample. By using random sampling, the likelihood of bias is reduced. Compare simple random sampling. Simple Random Sampling Simple random sampling is the basic sampling technique where we select a group of subjects (a sample) for study from a larger group (a population). Each individual is chosen entirely by chance and each member of the population has an equal chance of being included in the sample. Every possible sample of a given size has the same chance of selection; i.e. each member of the population is equally likely to be chosen at any stage in the sampling process. Compare random sampling. Stratified Sampling There may often be factors which divide up the population into sub-populations (groups / strata) and we may expect the measurement of interest to vary among the different sub-populations. This has to be accounted for when we select a sample from the population in order that we obtain a sample that is representative of the population. This is achieved by stratified sampling. A stratified sample is obtained by taking samples from each stratum or sub-group of a population. When we sample a population with several strata, we generally require that the proportion of each stratum in the sample should be the same as in the population. Stratified sampling techniques are generally used when the population is heterogeneous, or dissimilar, where certain homogeneous, or similar, sub-populations can be isolated (strata). Simple random sampling is most appropriate when the entire population from which the sample is taken is homogeneous. Some reasons for using stratified sampling over simple random sampling are: a. the cost per observation in the survey may be reduced; b. estimates of the population parameters may be wanted for each sub-population; c. increased accuracy at given cost. 
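Reason (c) — increased accuracy at given cost — can be illustrated with a small simulation. Everything below (the two strata, their means, the sample sizes) is an invented example, not from the glossary:

```python
import random
import statistics

random.seed(0)
# Hypothetical population with two strata whose means differ a lot
# (think of two cow breeds with different typical milk yields).
strata = [
    [random.gauss(20, 2) for _ in range(7000)],  # stratum A: 70% of herd
    [random.gauss(35, 2) for _ in range(3000)],  # stratum B: 30% of herd
]
population = strata[0] + strata[1]
N = len(population)

def srs_mean(n=100):
    """Estimate the population mean from a simple random sample."""
    return statistics.fmean(random.sample(population, n))

def stratified_mean(n=100):
    """Estimate using proportional allocation across the strata."""
    total = 0.0
    for s in strata:
        k = round(n * len(s) / N)
        total += statistics.fmean(random.sample(s, k)) * len(s)
    return total / N

# Repeat each scheme many times and compare the standard errors.
srs_se = statistics.stdev(srs_mean() for _ in range(400))
strat_se = statistics.stdev(stratified_mean() for _ in range(400))
```

Because stratification removes the between-stratum component of variability from the sampling error, strat_se comes out several times smaller than srs_se at the same total sample size.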
Suppose a farmer wishes to work out the average milk yield of each cow type in his herd which consists of Ayrshire, Friesian, Galloway and Jersey cows. He could divide up his herd into the four sub-groups and take samples from these. Cluster Sampling Cluster sampling is a sampling technique where the entire population is divided into groups, or clusters, and a random sample of these clusters are selected. All observations in the selected clusters are included in the sample. Cluster sampling is typically used when the researcher cannot get a complete list of the members of a population they wish to study but can get a complete list of groups or 'clusters' of the population. It is also used when a random sample would produce a list of subjects so widely scattered that surveying them would prove to be far too expensive, for example, people who live in different postal districts in the UK. This sampling technique may well be more practical and/or economical than simple random sampling or stratified sampling. Suppose that the Department of Agriculture wishes to investigate the use of pesticides by farmers in England. A cluster sample could be taken by identifying the different counties in England as clusters. A sample of these counties (clusters) would then be chosen at random, so all farmers in those counties selected would be included in the sample. It can be seen here then that it is easier to visit several farmers in the same county than it is to travel to each farm in a random sample to observe the use of pesticides. Quota Sampling Quota sampling is a method of sampling widely used in opinion polling and market research. Interviewers are each given a quota of subjects of specified type to attempt to recruit. For example, an interviewer might be told to go out and select 20 adult men and 20 adult women, 10 teenage girls and 10 teenage boys so that they could interview them about their television viewing.
It suffers from a number of methodological flaws, the most basic of which is that the sample is not a random sample and therefore the sampling distributions of any statistics are unknown. Spatial Sampling This is an area of survey sampling concerned with sampling in two (or more) dimensions. For example, sampling of fields or other planar areas. Sampling Variability Sampling variability refers to the different values which a given function of the data takes when it is computed for two or more samples drawn from the same population. Standard Error Standard error is the standard deviation of the values of a given function of the data (parameter), over all possible samples of the same size. Bias Bias is a term which refers to how far the average statistic lies from the parameter it is estimating, that is, the error which arises when estimating a quantity. Errors from chance will cancel each other out in the long run; those from bias will not. The following illustrates bias and precision, where the target value is the bullseye: [figure: four targets showing the combinations of biased/unbiased and precise/imprecise shots] The police decide to estimate the average speed of drivers using the fast lane of the motorway and consider how it can be done. One method suggested is to tail cars using police patrol cars and record their speeds as being the same as that of the police car. This is likely to produce a biased result, as any driver exceeding the speed limit will slow down on seeing a police car behind them. The police then decide to use an unmarked car for their investigation using a speed gun operated by a constable. This is an unbiased method of measuring speed, but is imprecise compared to using a calibrated speedometer to take the measurement. See also precision. Precision Precision is a measure of how close an estimator is expected to be to the true value of a parameter. Precision is usually expressed in terms of imprecision and related to the standard error of the estimator. Less precision is reflected by a larger standard error.
See the illustration and example under bias for an explanation of what is meant by bias and precision.
Homewood, IL Math Tutor Find a Homewood, IL Math Tutor ...I am currently teaching Decision Science, which is an applied Linear Algebra class in the Business Department. I have an MBA in Marketing from Keller Graduate School, plus over thirty years of marketing experience as president of a manufacturers' representative firm, and am a current member of th... 11 Subjects: including statistics, probability, algebra 1, algebra 2 ...These three are my absolute passion and I would love to help others with these subjects as well. I love teaching others; I teach my friends whenever they need help grasping and understanding something, or just want a friendly helping hand. I'm very sociable and easy to get along with. 16 Subjects: including geometry, SAT math, English, algebra 1 ...I tutor the way that I teach--present the material in a way that it can be understood, practice with my clients, and give them opportunities to try themselves with me right there to troubleshoot. Whether you're in Chicago or the suburbs, I will put in the work to give you what you need. Working towards success in mathematics! 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I have received many awards and certificates for excellence in tutoring. I have also received extensive tutoring training that highly qualifies me to help students with learning disabilities succeed in math. Biology is my passion! 22 Subjects: including statistics, precalculus, prealgebra, algebra 2 I know it may seem weird to many, but I absolutely love math, especially algebra. :) Something that I may love even more than algebra itself though is helping students learn algebra. It is my pleasure to help those in need, and I am very patient in working with students. I am open to learners at all levels and have no problem going back to whatever basics are necessary.
8 Subjects: including precalculus, softball, discrete math, logic
It is difficult to grasp the source type from the standard focal mechanism plot, and decompositions of the deviatoric component are non-unique: the DC and CLVD decomposition followed here could be replaced by two DCs (Julian et al., 1998). Following the source-type analysis described in Hudson et al. (1989) we calculate 2ε and k, which are given by ε = −m′_N / max(|m′_T|, |m′_P|) and k = M_iso / (|M_iso| + max(|m′_T|, |m′_N|, |m′_P|)), where m′_T, m′_N, and m′_P are the deviatoric principal moments for the T, N, and P axes, respectively, and M_iso is the isotropic moment, where M_iso = trace(M)/3. 2ε is a measure of the departure of the deviatoric component from a pure double-couple mechanism, and is 0 for a pure double-couple and ±1 for a pure CLVD. k is a measure of the volume change, where k = +1 would be a full explosion and k = −1 a full implosion. We calculate the source-type plot parameters for 12 earthquakes, 17 explosions and three collapses (one cavity collapse and two mine collapses) and produce the source-type plot (Figure 2.25). The nuclear tests occupy the region where k > 0, the earthquakes cluster near the origin (with some interesting deviations), and the collapses plot almost exactly at (1, −5/9), which is the location for a closing crack in a Poisson solid. The populations of earthquakes, explosions, and collapses separate in the source-type plot. These initial results are very encouraging and suggest a discriminant that employs the 2ε, k parameters.
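A Hudson-style computation of these parameters from the principal moments can be sketched as follows; the sign convention for epsilon varies between papers, so treat the signs here as an assumption, but the k = −5/9 value for a closing crack in a Poisson solid comes out as quoted above:

```python
def source_type(m1, m2, m3):
    """Compute (2*epsilon, k) from the principal moments of a moment
    tensor, using Hudson-style definitions (assumed conventions):
      M_iso = (m1 + m2 + m3) / 3,  m'_i = m_i - M_iso,
      epsilon = -m'_N / max(|m'_T|, |m'_P|),
      k = M_iso / (|M_iso| + max_i |m'_i|)."""
    m_iso = (m1 + m2 + m3) / 3.0
    p, n, t = sorted(m - m_iso for m in (m1, m2, m3))  # P <= N <= T
    denom = max(abs(t), abs(p))
    eps = 0.0 if denom == 0 else -n / denom  # guard: pure isotropic source
    k = m_iso / (abs(m_iso) + max(abs(p), abs(n), abs(t)))
    return 2 * eps, k
```

For example, source_type(-1, -1, -3) — the principal moments of a closing crack with lambda = mu — gives k = −5/9 with |2ε| = 1, a pure double-couple (1, 0, −1) maps to the origin, and a pure explosion (1, 1, 1) gives k = 1.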
{"url":"http://seismo.berkeley.edu/annual_report/ar06_07/node81.html","timestamp":"2014-04-17T11:10:32Z","content_type":null,"content_length":"8328","record_id":"<urn:uuid:89290a97-aadb-4737-b5ff-ecbe42330d04>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
algebra — branch of mathematics concerned with operations on sets of numbers or other elements that are often represented by symbols. Algebra is a generalization of arithmetic and gains much of its power from dealing symbolically with elements and operations (such as addition and multiplication) and relationships (such as equality) connecting the elements. Thus the relationships it establishes hold no matter what numbers the symbols stand for. Principles of Classical Algebra In elementary algebra letters are used to stand for numbers. For example, in the equation ax^2+bx+c=0, the letters a, b, and c stand for various known constant numbers called coefficients and the letter x is an unknown variable number whose value depends on the values of a, b, and c and may be determined by solving the equation. Much of classical algebra is concerned with finding solutions to equations or systems of equations, i.e., finding the roots, or values of the unknowns, that upon substitution into the original equation will make it a numerical identity. For example, x=-2 is a root of x^2-2x-8=0 because (-2)^2-2(-2)-8=4+4-8=0; substitution will verify that x=4 is also a root of this equation. The equations of elementary algebra usually involve polynomial functions of one or more variables (see function). The equation in the preceding example involves a polynomial of second degree in the single variable x (see quadratic). One method of finding the zeros of the polynomial function f(x), i.e., the roots of the equation f(x)=0, is to factor the polynomial, if possible. The polynomial x^2-2x-8 has factors (x+2) and (x-4), since (x+2)(x-4)=x^2-2x-8, so that setting either of these factors equal to zero will make the polynomial zero. In general, if (x-r) is a factor of a polynomial f(x), then r is a zero of the polynomial and a root of the equation f(x)=0.
To determine if (x-r) is a factor, divide it into f(x); according to the Factor Theorem, if the remainder f(r)—found by substituting r for x in the original polynomial—is zero, then (x-r) is a factor of f(x). Although a polynomial has real coefficients, its roots may not be real numbers; e.g., x^2-9 separates into (x+3)(x-3), which yields two zeros, x=-3 and x=+3, but the zeros of x^2+9 are imaginary numbers. The Fundamental Theorem of Algebra states that every polynomial f(x)=a[n]x^n+a[n-1]x^(n-1)+ … +a[1]x+a[0], with a[n]≠0 and n≥1, has at least one complex root, from which it follows that the equation f(x)=0 has exactly n roots, which may be real or complex and may not all be distinct. For example, the equation x^4+4x^3+5x^2+4x+4=0 has four roots, but two are identical and the other two are complex; the factors of the polynomial are (x+2)(x+2)(x+i)(x-i), as can be verified by multiplication. Principles of Modern Algebra Modern algebra is yet a further generalization of arithmetic than is classical algebra. It deals with operations that are not necessarily those of arithmetic and that apply to elements that are not necessarily numbers. The elements are members of a set and are classed as a group, a ring, or a field according to the axioms that are satisfied under the particular operations defined for the elements. Among the important concepts of modern algebra are those of a matrix and of a vector space. See M. Artin, Algebra (1991). The Columbia Electronic Encyclopedia Copyright © 2004. Licensed from Columbia University Press
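The Factor Theorem test described above is easy to mechanize; this sketch (helper names mine) evaluates the remainder f(r) by Horner's rule and declares (x - r) a factor when it vanishes:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial at x by Horner's rule; coeffs are listed
    from the highest-degree term down, e.g. [1, -2, -8] for x^2-2x-8."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def is_factor(coeffs, r):
    """By the Factor Theorem, (x - r) divides f(x) iff f(r) == 0."""
    return poly_eval(coeffs, r) == 0
```

With f = [1, -2, -8] this confirms the roots -2 and 4 from the example, and poly_eval([1, 0, 9], 3j) == 0 verifies that x^2+9 has imaginary roots.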
Re: st: Interval censoring using intcens

From    Patrick Munywoki <pmunywoki@gmail.com>
To      statalist@hsphsun2.harvard.edu
Subject Re: st: Interval censoring using intcens
Date    Wed, 1 Aug 2012 11:58:21 +0100

Many thanks for the suggestions. The main problem in my dataset is that I do not have an exact date/time of when the study participants either started or stopped shedding the respiratory virus of interest. Note that I sample participants twice a week, hence there are intervals of 3 to 4 days (longer in cases where a sample was not collected) between sample collections for all the participants. Any further ideas on how to analyse these data are welcome. I am currently thinking of using imputation techniques to determine when the infection episodes started and ended before I proceed with the survival analysis. Your thoughts on this approach are also welcome.

On 29 July 2012 13:02, <S.Jenkins@lse.ac.uk> wrote:
> Steve Samuels provided very good advice. Some other reflections from me:
> -intcens- (on SSC) is a program that fits parametric _continuous_
> survival time distributions to interval-censored survival time data
> (a.k.a. grouped or discrete time data). The program doesn't allow
> time-varying covariates. It has one row per spell/obs -- convenient for
> the maximisation by -ml-.
> I'm not sure that -stpm- (which you ask about) is appropriate for
> interval-censored data. I would check further if I were you. (If it is,
> then also check out -stpm2- which is more flexible and faster. Use
> -findit- to get the latest version -- it's from SJ or SSC.)
> You could think more generally about models for interval-censored data
> -- see the MS and lessons off my survival analysis webpages (URL below)
> for discussion and references. This shows how you can fit models which
> make no assumption about the shape of the underlying survival time
> distribution. (You can assume shapes for the interval-hazard if you
> wish; but can also assume interval-specific values if you wish and your
> data allow it.) And time-varying covariates can be easily incorporated.
> More complicated is what to do with multiple spells. (You don't mention
> them explicitly, but it sounds as if you have them according to your
> description.) The key issue is non-independence across spells from the
> same person. Steve Samuels remarked on this and suggested clustering the
> standard errors (persons as clusters). An alternative is to assume some
> parametric form for the individual-specific effect that generates the
> non-independence across spells from the same person -- this is 'frailty'
> a.k.a. 'unobserved heterogeneity'. The most straightforward way of
> handling this would be:
> * Reorganise (expand) your data so that you have one row in the data set
> for each interval that each person is at risk of infection, and create
> an event occurrence indicator y_it for person i and interval t (see my
> Lessons)
> * Create any time-varying covariates required. At minimum, this will be
> some specification for the duration dependence of the interval hazard
> * Fit a -xtcloglog- model with the binary outcome variable being y_it.
> This assumes that the person-specific frailty is normal (Gaussian). Or
> just fit a -cloglog- model if you want to ignore frailty. Either way,
> you would be fitting the interval-censored model corresponding to an
> underlying continuous time model that satisfies the proportional hazards
> assumption. (That assumption can be tested using interactions between
> explanatory variables and the variables summarising duration
> dependence.)
> An alternative would be -xtlogit- and -logit- applied to data
> organised in the same way.
> [Cf. -pgmhaz8- and -hshaz- (on SSC) which also fit discrete time
> proportional hazards models with frailty (Gamma, and discrete mass
> point, respectively), but only to single spell data. -xtcloglog- and
> -xtlogit- work with multiple spell data because the frailty is
> integrated out numerically.]
> Stephen
> -------------------------------------
> Professor Stephen P. Jenkins <s.jenkins@lse.ac.uk>
> Department of Social Policy
> London School of Economics and Political Science
> Houghton Street, London WC2A 2AE, U.K.
> Tel: +44 (0)20 7955 6527
> Changing Fortunes: Income Mobility and Poverty Dynamics in Britain, OUP
> 2011, http://ukcatalogue.oup.com/product/9780199226436.do
> Survival Analysis using Stata:
> http://www.iser.essex.ac.uk/survival-analysis
> Downloadable papers and software: http://ideas.repec.org/e/pje7.html
> ----------------------------------------------------------------------
> Date: Sat, 28 Jul 2012 09:29:15 +0100
> From: Patrick Munywoki <pmunywoki@gmail.com>
> Subject: st: Interval censoring using intcens
> Hi,
> I have been attempting to analyse interval-censored time-to-event data
> with the 'intcens' ado (Griffin et al 2006). My data arise from a
> longitudinal household-based study with nasal swab collections twice a
> week for a duration of 26 weeks, regardless of symptoms. I want to be
> able to estimate the duration of the infectious period for one of the
> viruses we detected. I have reduced the data to one observation per
> infection episode in order to use the 'intcens' command, with t0 being
> the date of the last positive sample and t1 the date of the next
> negative sample. I hope this data conversion to single observation per
> infection episode data is alright?
> My questions:
> 1. How do I interpret the coefficient given in the results below?
> intcens t0 t1 male, dist(exp) time nolog
> Stata output:
> Exponential distribution - log acceleration factors
> Uncensored          0
> Right-censored      0
> Left-censored       0
> Interval-censored 188
> Number of obs = 188
> Wald chi2(1) = 0.00
> Log likelihood = -1796.982   Prob > chi2 = 0.9990
>          Coef.       Std. Err.   z       P>z     [95% Conf. Interval]
> male    -.0001871    .1470683   -0.00    0.999   -.2884356   .2880615
> _cons    9.817517    .2234524   43.94    0.000    9.379558   10.25548
> Note the actual interval between the dates t0 and t1 is on average (sd)
> 3.6 (0.98) days; median (IQR) 4 (3-4) days; and range 2-7 days.
> 2. Whenever I try using any other distribution this error message pops
> up. What could be the problem here?
> intcens t0 t1 male, dist(weib) time nolog
> initial values not feasible
> r(1400);
> 3. Is there an alternative method to the interval censoring which
> allows me to use the multiple records per person while accounting for
> the interval censoring? I have tried -stpm- but am not sure whether it
> allows for this.
> I would greatly appreciate your help,
> Many thanks,
> --
> Patrick Munywoki
> Please access the attached hyperlink for an important electronic communications disclaimer: http://lse.ac.uk/emailDisclaimer
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/

Patrick Munywoki

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
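Jenkins' first bullet in the thread above, reorganising the data so there is one row per person per interval at risk with an event indicator y_it, can be sketched outside Stata too. The fragment below is only an illustration of that reshaping step (the field names `id`, `n_intervals`, `event` are our own hypothetical choices, not from the thread):

```python
def expand_to_person_periods(spells):
    """Reorganise one-row-per-spell data into one row per person-interval at risk,
    with an event indicator y_it: 1 only in the interval where the spell ends
    with an observed event, 0 everywhere else (and 0 throughout if censored)."""
    rows = []
    for sp in spells:
        for t in range(1, sp['n_intervals'] + 1):
            rows.append({
                'id': sp['id'],
                't': t,  # interval index; duration-dependence covariates built from this
                'y': 1 if (sp['event'] == 1 and t == sp['n_intervals']) else 0,
            })
    return rows

# Person 1: event observed in the 3rd interval; person 2: censored after 2 intervals.
spells = [{'id': 1, 'n_intervals': 3, 'event': 1},
          {'id': 2, 'n_intervals': 2, 'event': 0}]
rows = expand_to_person_periods(spells)  # 5 rows; y == 1 only for (id=1, t=3)
```

A binary model (cloglog for the proportional-hazards interpretation Jenkins mentions) would then be fitted to `y` on the expanded rows.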
The fundamental theorem of calculus

This applet is a variant of Applet 6, Complex Integration. The main differences are that (a) all the functions f(z) have a primitive F(z), and (b) an integral from z_1 to z_2 will start from F(z_1) (and hence end up at F(z_2)) rather than starting at 0 as in Applet 6 (in which case the integral would end up at F(z_2) - F(z_1), as per the fundamental theorem of calculus). If the mouse is at a location z, and you drag the mouse by a small amount dz, then the integral moves from F(z) to F(z + dz). Since F'(z) = f(z), the net change in the integral is roughly f(z) dz - the same as in Applet 6. As before, the cyan and green lines indicate the direction the integral would move in if you moved z rightward or upward respectively; they represent the complex numbers f(z) and i f(z) respectively. When you integrate f on a closed loop, you always get 0 - provided that the function f has an anti-derivative on all of the loop. In the applet below, there are some cases (when f(z) = 1/z) in which the function F(z) fails to be an anti-derivative, in which case the right-hand screen fails to correctly compute the integral.
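The behaviour the applet displays can be reproduced numerically. The sketch below is our own illustration (not the applet's code): it approximates a contour integral by a midpoint Riemann sum. For f(z) = 2z with primitive F(z) = z^2, a path from 1 to i gives F(i) - F(1) = -2; for f(z) = 1/z, which has no single-valued primitive on a loop around 0, the closed-loop integral comes out as 2*pi*i rather than 0:

```python
import cmath

def path_integral(f, path, n=2000):
    """Midpoint Riemann sum of f(z) dz along a parameterised path z(t), t in [0, 1]."""
    total = 0j
    for k in range(n):
        z0, z1 = path(k / n), path((k + 1) / n)
        total += f(path((k + 0.5) / n)) * (z1 - z0)
    return total

segment = lambda t: (1 - t) + t * 1j             # straight path from 1 to i
circle = lambda t: cmath.exp(2j * cmath.pi * t)  # closed loop around 0

I1 = path_integral(lambda z: 2 * z, segment)     # approximately F(i) - F(1) = -2
I2 = path_integral(lambda z: 1 / z, circle)      # approximately 2*pi*i, not 0
```

The nonzero loop integral of 1/z is exactly the failure mode the last paragraph describes: without an anti-derivative on the whole loop, the closed-loop integral need not vanish.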
Summation Program

I'm supposed to be making a program that can find the summation of a function (such as x^5+10) from x=1 to n, which is inputted by the user. I'm supposed to do this once using only "for" loops, and once using only "while" loops. But so far, I don't even know how to get that far. This is what I have, but it doesn't do anything, primarily because I have no idea how to do a summation in C++.

#include <iostream> /* included to allow for cout/cin to be used */
#include <cmath>    /* for pow() */
using namespace std;

int main() {
    char cont;   // char, not int: we read a 'Y'/'N' answer into it
    int n;
    int sum = 0;

    cout << "Do you want to find a summation? (Y/N) \n";
    cin >> cont;

    if (cont == 'N' || cont == 'n') {  // compare against the character 'N',
        return 0;                      // not an uninitialized int N
    }

    cout << "Enter an n value. n= ";
    cin >> n;

    // sum the function x^5 + 10 from x = 1 to n
    for (int x = 1; x <= n; x++) {
        sum += pow(x, 5) + 10;
    }
    cout << "Sum = " << sum << "\n";

    return 0;
}

Thanks for any help.

You just need to use a loop that runs from 1 to n (the user's input). Inside the loop, find the value of the function at that value for X (1 through n) and add it to sum. For instance, in the function you gave, it would be something like

    sum += pow(x, 5) + 10;

Start with doing this in a for loop, then changing it to a while loop should be trivial.

Topic archived. No new replies allowed.
4.4.2 Ising Model

Next: 4.4.3 Potts Model Up: 4.4 Spin Models Previous: 4.4.1 Introduction

The Ising model is the simplest model for ferromagnetism that predicts phase transitions and critical phenomena. The spins are discrete and have only two possible states. This model, introduced by Lenz in 1920 [Lenz:20a], was solved in one dimension by Ising in 1925 [Ising:25a], and in two dimensions by Onsager in 1944 [Onsager:44a]. However, it has not been solved analytically in three dimensions, so Monte Carlo computer simulation methods have been one of the methods used to obtain numerical solutions. One of the best available techniques for this is the Monte Carlo Renormalization Group (MCRG) method [Wilson:80a], [Swendsen:79a].

The Ising model exhibits a second-order phase transition in d=3 dimensions at a critical temperature T_c: as T approaches T_c the correlation length diverges, and at T_c the pair correlation function decays with distance r as a power law, defining a critical exponent. In 1984, a high-statistics MCRG calculation was done by Pawley, Swendsen, Wallace and Wilson [Pawley:84a] in Edinburgh on the ICL DAP computer. They ran on four lattice sizes. Our calculation improves on theirs in several ways; among them, we use the cluster algorithm of Swendsen and Wang [Swendsen:87a], implemented according to Wolff [Wolff:89b], and we would like to try to measure another critical exponent more accurately: the correction-to-scaling exponent.

The idea behind MCRG is that the correlation length diverges at the critical point, so that certain quantities should be invariant under ``renormalization'', which here means a transformation of the length scale. On the lattice, we can double the length scale by, for example, ``blocking'' the spin values on a square plaquette into a single spin value on a lattice with 1/4 the number of sites. For the Ising model, the blocked spin value is given the value taken by the majority of the 4 plaquette spins, with a random tie-breaker for the case where there are 2 spins in either state. Since quantities are only invariant under this MCRG procedure at the critical point, this provides a method for finding the critical point.
In order to calculate the quantities of interest using MCRG, one must evaluate a set of spin operators. In [Pawley:84a], the calculation was restricted to seven even spin operators and six odd; we evaluated 53 and 46, respectively [Baillie:91d]. Specifically, we decided to evaluate the most important operators identified in [Baillie:88h]. To determine the critical coupling (or inverse temperature), we simulate a larger lattice L and a smaller lattice S, blocking each through several levels.

The Distributed Array Processor (DAP) is a SIMD computer. Our hybrid update consists of Metropolis [Metropolis:53a] spin updates followed by one cluster update using Wolff's single-cluster variant of the Swendsen and Wang algorithm: we use 10 Metropolis sweeps plus one cluster update in place of 100 sweeps of Metropolis alone. Measuring the spin operators costs about as much as 100 Metropolis updates; therefore, our hybrid of 10 Metropolis sweeps plus one cluster update takes about the same time as a measurement. On a DAP 510, this hybrid update takes on average 127 secs (13.5 secs) for the larger (smaller) lattice.

In analyzing our results, the first thing we have to decide is the order in which to arrange our 53 even and 46 odd spin operators. Naively, they can be arranged in order of increasing total distance between the spins [Baillie:88h] (as was done in [Pawley:84a]). However, the ranking of a spin operator is determined physically by how much it contributes to the energy of the system. Thus, we did our analysis initially with the operators in the naive order to calculate their energies, then subsequently we used the ``physical'' order dictated by these energies. This physical order of the first 20 even operators is shown in Figure 4.12 with 6 of Edinburgh's operators indicated; the 7th Edinburgh operator (E-6) is our 21st. This order is important in assessing the systematic effects of truncation, as we are going to analyze our data as a function of the number of operators included.
Specifically, we successively diagonalize truncated matrices with increasing numbers of operators included.

Figure 4.12: Our Order for Even Spin Operators

We present our results in terms of the eigenvalues of the even and odd parts of the linearized renormalization-group transformation. The leading even eigenvalue is shown on successive blocking levels for the smaller lattice in Figure 4.13, and on the first five blocking levels for the larger lattice in Figure 4.14. Similarly, the leading odd eigenvalues are shown in Figures 4.15 and 4.16, respectively. First of all, note that there are significant truncation effects: the values of the eigenvalues do not settle down until at least 30 and perhaps 40 operators are included. We note also that our value agrees with Edinburgh's when around 7 operators are included; this is a significant verification that the two calculations are consistent. With most or all of the operators included, our values on the two different lattice sizes agree, and the agreement improves with increasing blocking levels. Thus, we feel that we have overcome the finite size effects. This can be seen in Figures 4.14 and 4.16: there, we can perform one more blocking level, which reveals that the results on the fourth and fifth blocking levels are consistent. This means that we have eliminated most of the transient effects near the fixed point in the MCRG procedure. We also see that the main limitation of our calculation is statistics: the error bars are still rather large for the highest blocking level.

Now, in order to obtain values for the critical exponents we must extrapolate from a finite number of blocking levels to an infinite number. This is done by fitting the corresponding eigenvalues.

Figure 4.13: Leading Even Eigenvalue (smaller lattice)
Figure 4.14: Leading Even Eigenvalue (larger lattice)
Figure 4.15: Leading Odd Eigenvalue (smaller lattice)
Figure 4.16: Leading Odd Eigenvalue (larger lattice)

Finally, perhaps the most important number, because it can be determined the most accurately, is the critical coupling itself. Thus, MCRG calculations give us very accurate values for the three critical parameters, and the systematic errors at each blocking level have been reduced to below the statistical errors. Future high-statistics simulations should reduce the remaining statistical errors further.

Guy Robinson Wed Mar 1 10:19:35 EST 1995
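The Metropolis update and the 2x2 majority-rule blocking described above can be sketched in a few lines. This is only an illustrative Python toy (the original calculation ran on SIMD DAP hardware), not the production code:

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep of the 2-D Ising model with periodic boundaries."""
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * s * nn  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -s

def block_spins(spins, L, rng):
    """Majority-rule blocking of 2x2 plaquettes onto a lattice with 1/4 the
    sites, with a random tie-breaker when the plaquette is split 2-2."""
    half = L // 2
    blocked = [[0] * half for _ in range(half)]
    for i in range(half):
        for j in range(half):
            total = (spins[2 * i][2 * j] + spins[2 * i + 1][2 * j]
                     + spins[2 * i][2 * j + 1] + spins[2 * i + 1][2 * j + 1])
            if total > 0:
                blocked[i][j] = 1
            elif total < 0:
                blocked[i][j] = -1
            else:
                blocked[i][j] = rng.choice((-1, 1))
    return blocked
```

Iterating `block_spins` gives the successive blocking levels on which the eigenvalues above are measured; a cluster update would be added alongside `metropolis_sweep` in the hybrid algorithm.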
Method and system for producing 3-D carved signs using automatic tool path generation and computer-simulation techniques - Patent # 5703782

5703782  Method and system for producing 3-D carved signs using automatic tool path generation and computer-simulation techniques (15 images)

Inventor: Dundorf; David M. (Ridgewood, NJ)
Date Issued: December 30, 1997
Application: 08/507,153
Filed: July 26, 1995
Primary Examiner: Gordon; Paul P.
Attorney Or Agent: Hopgood, Calimafde, Kalil & Judlowe
U.S. Class: 700/182; 700/184
Field Of Search: 364/474.24; 364/468; 364/474.22; 364/474.26; 364/191; 395/119; 395/120; 219/121.69; 318/567; 318/569; 318/570
U.S. Patent Documents: 1718333; 3650178; 3742816; 3827334; 3843875; 3857025; 3860050; 3915061; 3927599; 4393450; 4404507; 4430548; 4458133; 4533286; 4535408; 4546427; 4556957; 4558977; 4559601; 4561814; 4589062; 4606386; 4617623; 4624609; 4641236; 4663720; 4714920; 4739468; 4739489; 4757461; 4825377; 4834595; 4868761; 4888713; 4893251; 4905158; 4907164; 4945487; 4956787; 4972323; 5043906; 5070464; 5150305

Other References:
Parametric Spline Curves and Surfaces, IEEE Computer Graphics and Applications, B.A. Barsky, 8 pages (33-40), Feb. 1986.
Ship Hulls, B-Spline Surfaces, and CAD/CAM, IEEE Computer Graphics and Applications, D. Rogers, 9 pages (37-45), Dec. 1983.
A Procedure for Generating Contour Lines From a B-Spline Surface, IEEE Computer Graphics and Applications, D. Rogers, 5 pages (71-75), Apr. 1985.
Computer-Integrated Manufacturing of Surfaces Using Octree Encoding, K. Yamaguchi and T. Kunii, 5 pages (60-65), Sep. 1983.
System 48(TM)... The Cutting Edge of Signmaking Technology, Gerber Scientific Products, Inc., sales brochure, 8 pages, date unknown.
CSF 300 Computerized Sign Fabrication System, Gerber Scientific Products, Inc., sales brochure, 5 pages, date unknown.
CNC Machine and Electric Spindle Speed Sign Making, American Machinist & Automated Manufacturing, Aug. 1986, 1 page.
Ray Tracing Free-Form B-Spline Surfaces, IEEE Computer Graphics and Applications, M. Sweeney, 7 pages (41-48), Feb. 1986.
It's Only Natural: Wood Signs for Retailers, E. Golliher, 6 pages (62-67), Aug. 1987.
Approximating Complex Surfaces by Triangulation of Contour Lines, E. Keppel, IBM J. Res. Develop., 10 pages (2-11), Jan. 1975.
Surface Analysis Methods, IEEE Computer Graphics and Applications, 13 pages (18), Dec. 1986.
Cartesian 5: High Speed Heavy Duty Machining Systems, literature of Thermwood Corporation, 8 pages, date unknown.

Abstract: The present invention concerns computer-produced carved signs and methods and apparatus for making the same. A computer-produced carved sign embodying a signage work having three-dimensional surfaces is produced by a method which comprises designing, on a computer-aided design system, a three-dimensional graphical model of the signage work having three-dimensional surfaces to be carved in a signboard. On the computer-aided design system, a desired mathematical representation of the three-dimensional graphical model of the signage work to be carved in the signboard is determined, and the desired mathematical representation is provided to a computer-aided machining system having a carving tool. Material constituting the signboard is removed using the carving tool moving under the controlled guidance of the computer-aided machining system, to leave in the signboard a three-dimensional carved pattern corresponding to the three-dimensional graphical model of the signage work, wherein the three-dimensional carved-pattern in the signboard has three-dimensional surfaces corresponding to the three-dimensional surfaces of the three-dimensional graphical model of the signage work.

Claim: What is claimed is: 1.
A method of producing a 3-D signage work in a signboard formed of constituting material, said method comprising the sequence of steps:

(a) on a computer-graphics modelling workstation, creating a 3-D computer-graphics model of a signboard of predetermined dimensions and a 3-D computer-graphics model of an axially rotating carving tool to be moved relative to said 3-D computer-graphics model of said signboard in order to produce a 3-D computer-graphics model of a 3-D signage work having 3-D surfaces to be formed in said signboard using said axially rotating carving tool associated with a computer-controlled carving machine capable of simultaneously moving said axially rotating carving tool along at least three coordinate axes referenceable to said signboard;

(b) automatically determining a tool path along which said axially rotating carving tool is to be moved relative to said signboard during sign carving operations carried out by said computer-controlled carving machine in order to form said 3-D signage work in said signboard;

(c) simulating the carving of said 3-D signage work in said signboard by generating on said computer-graphics modelling workstation, a 3-D computer-graphics model of the process of forming 3-D surfaces in said 3-D computer-graphics model of said signboard as said 3-D computer-graphics model of said axially rotating carving tool is moved relative to said 3-D computer-graphics model of said signboard along said automatically determined tool path;

(d) graphically displaying said 3-D computer-graphics model of said 3-D surfaces formed in said 3-D computer-graphics model of said signboard during step (c); and

(e) during said sign carving operations, removing constituting material of said signboard by moving said axially rotating carving tool relative to said signboard along said tool path under the control of said computer-controlled carving machine in order to form in said signboard, a 3-D carved-pattern corresponding to said 3-D graphical model of said signage work, wherein said 3-D carved-pattern formed in said signboard has 3-D surfaces corresponding to said 3-D surfaces of said 3-D computer-graphics model of said signage work.

2. The method of claim 1, wherein step (b) further comprises storing a library of 3-D computer-graphics models of a plurality of axially rotating carving tools that can be modelled during step (a) and used to remove constituting material of said signboard during step (e).

3. The method of claim 1, which further comprises after step (d), applying gold-leaf material to the 3-D surfaces of said 3-D carved-pattern formed in said signboard.

4. A system for producing a 3-D signage work in a signboard formed of constituting material, said system comprising:

a computer-graphics modelling workstation, for creating a 3-D computer-graphics model of a signboard of predetermined dimensions and a 3-D computer-graphics model of an axially rotating carving tool to be moved relative to said 3-D computer-graphics model of said signboard in order to produce a 3-D computer-graphics model of a 3-D signage work having 3-D surfaces to be formed in said signboard using said axially rotating carving tool operably associated with a computer-controlled carving machine capable of simultaneously moving said axially rotating carving tool along at least three coordinate axes referenceable to said signboard,

said computer-graphics modelling workstation including tool path generation means for automatically generating a tool path along which said axially rotating carving tool is to be moved relative to said signboard during sign carving operations carried out by said computer-controlled carving machine in order to form said 3-D signage work in said signboard,

computer-graphics simulation means for simulating the carving of said 3-D signage work in said signboard by generating a 3-D computer-graphics model of the process of forming 3-D surfaces in said 3-D computer-graphics model of said signboard as said 3-D computer-graphics model of said axially rotating carving tool is moved relative to said 3-D computer-graphics model of said signboard along said automatically determined tool path, and

graphical display means for graphically displaying said 3-D computer-graphics model of said 3-D surfaces formed in said signboard as a result of said axially rotating carving tool being moved along said automatically determined tool path; and

a computer-controlled carving machine, operably associated with said computer-graphics modelling workstation, for removing the constituting material from said signboard by moving said axially rotating carving tool relative to said signboard along said tool path so as to form in said signboard, a 3-D carved-pattern corresponding to said 3-D graphical model of said signage work, wherein said 3-D carved-pattern formed in said signboard has 3-D surfaces corresponding to said 3-D surfaces of said 3-D computer-graphics model of said 3-D signage work.

5. The system of claim 4, wherein said computer-graphics modelling workstation comprises means for storing a library of 3-D computer-graphics models of a plurality of axially rotating carving tools for movement along said automatically determined tool path referenced with respect to said signboard, during sign carving operations.

6. The system of claim 4, wherein said signage work comprises letters.

7. The system of claim 4, which further comprises means for applying gold-leaf material to the 3-D surfaces of said 3-D carved-pattern formed in said signboard.

Description:

FIELD OF THE INVENTION

The present invention relates generally to methods and apparatus for producing carved signs, and more particularly to methods and apparatus for producing carved signs using computers.

BACKGROUND OF THE INVENTION

Carving, dating long before paper was invented, can be considered one of the earliest forms of writing.
Letters carved in wood provide a sense of warmth and a feeling of permanence, and can focus the attention of viewers in a most dramatic way. Dating well beyond the Colonial Period, traditional hand-carved wood signs having gold-leafed lettering had found a deep-rooted place in our culture, and over the years the manufacture of such signs has become a time-honored craft of the signmaking arts. Wood chisels and special knives are the wood crafter's basic carving tools used in the time-consuming process of hand carving signage works in both relieved and incised modes of carving. Traditionally gold or silver leaf coatings have been applied to the relieved and/or incised surfaces of signage works, so that natural as well as artificial light favorably reflects therefrom to improve the visibility of the signage work, and to display a sense of richness and accentuate the artistic beauty of a signage work itself. The conventional process for producing these hand-carved gold-leafed wood signs is manual, slow and laborious, and although expensive, they are of distinct beauty and treasured by many. Yet while hand-carved wood signs with gold-leaf lettering are highly desired articles of manufacture, the traditional process by which they have been made has tended to make them time intensive, too expensive and thus out of reach for the greater number of persons who otherwise would desire to own such a sign customized to their needs, interests and taste. Hitherto, the art of making gold-leafed hand-carved wood signs has retained its traditional method of manufacture, with the exception of a minor development involving the use of an overhead projector to transfer a layout pattern to prepared wood. Such a layout transfer technique is described in Volume 15 of Fine Woodworking, March 1979, in an article at pages 72-73 entitled "Routed Signs: Overhead Projector Transfers Layout To Prepared Wood" by Frederick Wilbur.
Using architectural stick-on letters, a few parallel lines and a design concept, a sign layout is mocked-up on a piece of transparent plastic film. Using an overhead projector, the layout is transferred onto the prepared wood. In contrast with wood carving signmakers universally eschewing, as a matter of convention, any and all computer-assistance in practicing conventional methods of manufacturing gold-leafed carved wooden signs, the signmaking industry in general has nevertheless been affected by the application of computer-aided design, computer-aided manufacturing and computerized numerical control technology. Hitherto, several computer-aided signmaking systems employing computer-aided design (CAD) and computer-aided manufacturing and computer numerical control (CNC) based technology have been developed and are presently available. However, such signmaking systems and methods using CAD/CAM technology have been limited to the production of routed and cut-out type signs. In contrast, because of its nature, the art of carving traditional gold-leafed wood carved signs has remained in the field of art wherein wood carvers use only gouges, knives, chisels and hammers. Thus, it is now in order to briefly describe in the following paragraphs, these inherently limited CAD/CAM signmaking systems and methods. Prior art computer-aided signmaking systems allow a signmaker to design two-dimensional signage works on two-dimensional CAD systems, and to cut out or route in characters, shapes, designs and parts thereof so designed, using cutting tools moving under the guidance of a computer-aided machining system, which includes a computerized numerically controlled (CNC) axially rotating routing tool. However, the cutting and routing functions achieved by the prior art CAD/CAM signmaking systems are limited in several significant ways.
In general, signage works formed into signboards by prior art CAD/CAM signmaking systems are routed thereinto by operation of a routing tool moving in a single plane, with single-pass operations. The outlines of the characters are formed by a rotating router tool bit moving in a plane, routing out uniform grooves in the signboard within the plane. Notably, the uniform grooves formed in the signboard have the cross-sectional shape of the rotating tool bit performing the routing operation, and are identical along the entire lengths of the members of alphanumeric characters. In some cases, multiple passes of the routing tool along the character outlines is effected, often using tool bit offsetting, to provide desired finished edges, slightly modifying the original uniform groove so formed coextensively within a single plane. These routed signs bear little if any resemblance to, and lack the surface features of, traditional gold-leafed wood carved signs, the subject to which the present invention is directed. One example of such prior art signmaking apparatus is described in the sales brochure for the "System 48 Plus" of Gerber Scientific Products, Inc. of Manchester, Conn., wherein a computer-aided signmaking system is disclosed. Specifically, the "System 48 Plus" signmaking system comprises a computer-aided manufacturing system which includes a gantry-type cutting machine which can cut or route out letters up to 24" high, or stencil-cut sign faces for backlighting. The characters so formed from the system are square cut or beveled, with an optional finish cut. Also, the system provides control for specifying the total depth of cut, and depth of each pass of the router head. (See pages 4.74-4.76, IV System Operation of Gerber Scientific Products' System 48 User's Manual, Document No. 599-020174, January 1986).
However, while the "System 48 Plus" signmaking system allows an operator to make any number of passes from 0" to 2" deep for efficient routing and finer surface finishes, the system is incapable of carving into a signboard a signage work comprising characters and designs having three-dimensional incised and/or relieved surfaces for which hand-crafted gold-leafed wood carved signs are noted. In particular, the Gerber "System 48 Plus" is limited to 2½ axes of simultaneous cutting tool motion. Another example of prior art signmaking apparatus is described in the sales brochure for the "CSF 300 Computerized Sign Fabrication System" of Cybermarion Inc. of Cambridge, Mass. The brochure discloses a CAD/CAM signmaking system including a router head mounted to the carriage of a CNC gantry-type machine which is limited to 2½ axes of simultaneous motion. Sign layouts, either computer-designed or conventionally laid out, are programmed and can be called up at the machine by an operator. While the system has a library of pre-programmed geometric parts (i.e., letters and numbers in various typestyles) requiring the operator to enter only the desired dimensions, such parts do not have the three-dimensional features characteristic of traditional gold-leafed hand-carved wood signs, nor is the CSF 300 system capable of carving signs having such surface characteristics and features. Thus, in the art of computer-assisted design and manufacture of signage works, the convention has been to use CAD systems to design two-dimensional layouts of signage works to be cut out of, or simply routed in, various signboard materials. In the latter instance, the routed surfaces formed within a single plane of a signboard are limited to the cutting dimensions of the tool bit employed and moving in the plane thereof.
Therefore, there is no teaching or suggestion of a computer-aided method or system for producing carved signs embodying signage works which have three-dimensional surfaces akin to those characteristic of traditional hand-crafted gold-leafed wood carved signs. Accordingly, it is a primary object of the present invention to provide a way of doing by computers and machines that which was done by hand, in order to produce carved signs having three-dimensional surfaces akin to those characteristic of hand-carved gold-leafed wood carved signs. Another object of the present invention is to provide a computer-aided method of producing carved signs which embody signage works having three-dimensional incised and/or relieved surfaces, characteristic of traditional gold-leafed hand-carved wood signs. It is a further object of the present invention to provide a method of producing carved signs resembling traditional hand-carved gold-leafed wood signs, wherein the method uses an integration of computer-aided design (CAD), computer-aided machining (CAM), and computerized numerical control (CNC) technology. The present invention provides a design and manufacturing method for providing computer-produced carved signs embodying signage works having complex three-dimensional surfaces. A principal advantage of the method hereof is that it allows production of a prototype carved sign within only a few minutes after the design has been completed. As for small volume or customized production, the method requires at most only a few hours of design time and a few minutes of manufacturing time per carved sign. Another object of the present invention is to provide a carved sign embodying a signage work formed in a signboard by an axially rotating carving tool simultaneously moving along at least three programmable axes under the controlled guidance of a computer-aided machining system.
A further object of the present invention is to provide a computer-aided method of producing carved signs embodying signage works comprising characters, shapes and designs having three-dimensional incised and/or relieved complex surfaces. According to the present method, the characters are designed on a computer-aided design system by creating a three-dimensional geometric model thereof, and are carved into a signboard using a carving tool moving under the guidance of a computer-aided machining system. Another object of the present invention is to provide a carved sign produced by such computer-aided method of design and manufacture. It is an even further object of the present invention to provide a CAD/CAM system for producing carved signs embodying signage works having three-dimensional incised and/or relieved curved surfaces. An advantage of the design and manufacturing method of the present invention is that a signage work represented by a three-dimensional graphical and numeric model can be exactly reproduced, as a carving in signboards, thereby allowing the use of such three-dimensional signage works as trademarks and service marks registered with the United States Patent and Trademark Office. A further object of the present invention is to provide a method of generating on a computer-aided design system, three-dimensional computer graphic (or geometric) models (and numerical coordinate data files for corresponding three-dimensional carving tool paths) of three-dimensional characters generated from traditional two-dimensional characters. Such computer-aided design method can be used with the method and system for producing carved signs hereof.
Another object of the present invention is to provide a method of designing three-dimensional graphical models (i.e., representations) and numerical coordinate data files of three-dimensional characters generated from two-dimensional characters, using parametric spline-curve and/or spline-surface representations in interpolating curves and surfaces. Another object of the present invention is to provide a method of manufacturing carved signs embodying signage work having been recorded from preexisting physical objects using three-dimensional surface coordinate measuring methods and apparatus (e.g., instrumentation), based on principles including laser-ranging and holography. An even further object of the present invention is to provide a method of generating three-dimensional graphical representations and corresponding numerical coordinate data files of a signage work, wherein such method employs a computer-aided three-dimensional solid image processing program on the CAD system hereof. This method provides a designer with the capability of precisely mathematically subtracting (e.g., using a computational process on the CAD system) three-dimensional solid stock material from a three-dimensional solid model of a signboard which is in mathematical union with the solid model of a carving tool that is translatable within the CAD system's three-dimensional coordinate system, using a three-dimensional or two-dimensional stylus or a mouse. In particular, this method involves providing a solid geometric model (i.e., three-dimensional solid graphical representation) of a carving tool and of signboard constituting material, and performing therewith three-dimensional solid-image processing.
A principal advantage of this CAD method is that it provides a highly flexible way in which to render a desired three-dimensional model (e.g., graphical representation) from which can be generated numerical coordinate data file(s) for a three-dimensional composite tool path corresponding to a signage work to be carved in a real signboard using a particular carving tool or tools of the present invention. Yet a further object of the present invention is to provide a computer-aided carved sign design and manufacturing system on which the methods hereof can be computer-programmed, and wherein the design and manufacturing system comprises, in part, a computer-aided design system that can automatically generate and display a computer simulation of the carving tool motion required to produce the desired signage work carved in a signboard. The design and manufacturing system of the present invention also includes a computer-aided carving system having at least a three-dimensional numerical control (NC) machining (i.e., tool path) program, supported by a CAD/CAM computer. Other and further objects will be explained hereinafter, and will be more particularly delineated in the appended claims, and other objects of the present invention will in part be obvious to one with ordinary skill in the art to which the present invention pertains, and will, in part, appear obvious hereinafter. SUMMARY OF THE INVENTION The present invention uses an integration of computer-aided design, computer-aided manufacturing, and computer numerical control technology to provide a computer-aided design and manufacturing process for producing carved signs having surface properties and features characteristic of traditional hand-crafted gold-leafed wood carved signs.
In accordance with the principles of the present invention, the method for producing carved signs hereof comprises designing on a computer-aided design (CAD) system, a three-dimensional graphical model (i.e., representation) of a signage work having three-dimensional surfaces to be carved in a signboard. On the computer-aided design system, a desired mathematical (e.g., numerical) representation of the signage work is determined. Thereafter, the desired mathematical representation, which can be in one of many possible and desirable formats, is provided to a computer-aided machining (CAM) system including a CNC machine tool having a carving tool. The material constituting the signboard is removed using the carving tool moving under the controlled guidance of the computer-aided machining system, to leave in the signboard a three-dimensional carved pattern corresponding to the three-dimensional graphical model of the signage work. The three-dimensional carved pattern in the signboard has three-dimensional surfaces corresponding to the three-dimensional surfaces of the three-dimensional graphical model of the signage work. DESCRIPTION OF THE DRAWINGS For a further understanding of the objects of the present invention, reference is made to the following detailed description of the preferred embodiment, which is to be taken in connection with the accompanying drawings, wherein: FIG. 1 is a perspective view of an example of the design and manufacturing equipment required to provide a carved sign manufactured in accordance with the preferred embodiment of the design and manufacturing method of the present invention; FIG. 2 is a schematic block diagram of the computer-aided design and manufacturing system for producing carved signs hereof, shown in FIG. 1; FIG. 3A is a perspective view of a carved sign produced by the method hereof, showing the emulated geometrical features of traditional hand-carved wood signs; FIG.
3B is an elevated cross-sectional side view of a carved signboard embodying a signage work produced by the method hereof, illustrating the three-dimensional nature of the "center line" curves of the carved grooves incised therein; FIG. 4A is a plan view of a two-dimensional graphical model (i.e., representation) of a layout of an alphanumerical signage work displayed on the high-resolution color graphics display terminal of the computer-aided design system hereof; FIGS. 4B and 4C are different scaled perspective views of a three-dimensional graphical model of components of the signage work "SAGAMORE" shown in FIG. 3A, which are typically displayed on the color graphics display terminal during the process of generating three-dimensional graphical representations of alphanumerical characters from two-dimensional graphical representations (e.g., characteristic outlines) thereof, in accordance with the principles of the present invention; FIGS. 4D and 4E are different scaled perspective views of three-dimensional composite carving tool paths, shown in association with respective characteristic outlines of the three-dimensional graphical models of the alphabetical characters "SA" illustrated in FIGS. 4B and 4C; FIG. 4F is a plan view of a three-dimensional graphical model of the numerical character "40" of the signage work of FIGS. 3A and 4A hereof; FIG. 4G is a perspective view of the three-dimensional graphical model of the numerical character "40" illustrated in FIG. 4F; FIG. 4H is a side view of the three-dimensional graphical model of the numerical character illustrated in FIGS. 4F and 4G; FIGS. 4I and 4J are different perspective views of three-dimensional composite carving tool paths graphically shown in association with the characteristic outlines of the three-dimensional graphical models of the numerical character "40" illustrated in FIGS. 4F, 4G, and 4H hereof; FIG.
4K is a perspective view of the three-dimensional composite carving tool paths graphically shown in association with respective characteristic outlines of three-dimensional graphical models of the three alphanumeric characters "SA 4" illustrated in FIGS. 4A through 4J hereof; and FIG. 5 is a chart showing several conventional sweeps of gouges and chisels positioned alongside corresponding tool bits for use with the axially rotating carving tool hereof, so as to emulate conventional carving operations using the method and apparatus of the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT It is now in order to describe in a best mode embodiment, the details of the design and manufacturing method and apparatus for producing carved signs embodying signage works having three-dimensional incised and/or relieved carved surfaces, in accordance with the principles of the present invention. Referring now to FIG. 1, therein is shown an example of a computer-produced carved sign (CPCS) design and manufacturing system 1, although many different system configurations are possible and would be evident hereinafter to those skilled in the art. For purposes of illustration, the CPCS system 1 includes a computer-aided design/computer-aided machining (CAD/CAM) work station 2, a CAD/CAM computer 3 including software packages, and a CAM system 4. The CAD/CAM work station 2 includes a keyboard 5 for providing instructions and data to the CAD/CAM computer 3 via a connection 6. For reviewing the design, a three-dimensional high-resolution color graphics display unit 7 having a view screen 8 is part of the CAD/CAM work station 2. In the preferred embodiment, the three-dimensional high-resolution color graphics display terminal can be the Iris 3030 workstation from Silicon Graphics, Inc. of Mountain View, Calif. As illustrated in FIG.
2, the CAD/CAM work station 2 can be designated as having several other computer-assisted design tools, such as three- and two-dimensional "object" coordinate measuring apparatus, and methods used in connection therewith. An example of two-dimensional coordinate measuring apparatus would be a digitizer tablet 9, and an example of three-dimensional coordinate measuring apparatus 25 would be the Cyberscan.TM. laser-based non-contact height profiling system, available from Cyberoptics, Inc. of Minneapolis, Minn. As illustrated in FIG. 1, the CAD/CAM computer 3 is shown as a single unit, although it may comprise separate systems available from many different manufacturers. The CAD/CAM computer 3 is connected by a connection 10 to the CAM system 4. Information developed on the computer 3 can be optionally transported to the CAM system 4 on standard commercial magnetic media in the appropriate numerically controlled computerized language formats. Alternatively, connection 10 can be realized using a modem in accordance with conventional telecommunication principles (e.g., using telephonic circuits, microwave and/or satellite links). As will be discussed in greater detail hereinafter, the CAD/CAM computer 3 can be used for either manual, semi-manual, or automatic generation of carving tool paths, based on the geometry of a part developed in the CAD/CAM computer during the CAD phase. The CAM system as defined herein, is shown in the preferred embodiment as having a gantry-type carving tool 11 mounted over a vacuum-type work table 12 on the order of the size of a typical signboard used in outdoor commercial environments, such as in front of a law office or other professional building, but it can be much larger or smaller.
The carving tool 11 in the preferred embodiment comprises an axially rotating carving tool, such as an electric or pneumatic router head, which is mounted to a carriage 13 that moves along the gantry structure 14 in response to three-dimensional "carving tool path" instructions provided thereto. In the preferred embodiment, the carving tool 11 is provided with five programmable axes of simultaneous motion. In order to properly practice the computer-assisted design and manufacturing method of the present invention, the carving tool 11 need only have at least three programmable axes of simultaneous motion. However, while in the preferred embodiment of the present invention three programmable axes of simultaneous carving tool movement can be employed, five or seven programmable axes of simultaneous carving tool movement can provide certain advantages when carving particular types of three-dimensional signage works. Three-, five-, and seven-axis gantry-type machine tools are available from Thermwood Corporation, of Dale, Ind. In particular, the Thermwood Cartesian 5 Aerospace model having five axes of programmable motion, features a computer numerical controller (i.e., machine control unit) from the Allen-Bradley Corporation, having bubble memory and milling software. The table size available with such a model is 7½ feet by almost 16 feet, the vacuum feature making it most suitable for accurately holding down a signboard with repeatability. The CAM system 4 also includes a computer numerical controller (CNC) referred to hereinafter as the machine control unit (M.C.U.) shown in FIG. 2. The CAM system 4 can also include other mechanical material removal systems such as drills, routers, sanders and the like, which can find auxiliary application in carved sign manufacturing operations. Referring now to FIG. 2, there is shown a schematic block representational diagram of the CPCS design and manufacturing system 1 of the present invention. As shown in FIG. 1, the system of FIG.
2 also comprises the CAD/CAM work station 2, the CAD/CAM computer 3, machine control unit 15, gantry-type carving tool with axially rotating carving tool 11 and also a "post processor" 16. It also is shown to include a Graphics Library 17, realized as a computer data base in communication with the CAD/CAM computer 3. In order to provide hardcopy print-outs (i.e., plots) of three-dimensional graphical or numerical models of signage works, a plotter/printer unit 20 can be provided. Alternatively, screen image reproductions can be provided by photographic equipment. The Graphics Library 17 contains symbolic representations, such as numerical coordinate tool path data files, three-dimensional geometrical and graphical (e.g., curve, surface, and solid) models, design documentation and the like, of signage parts including characters, shapes and designs previously designed or otherwise provided. The symbolic representations stored in the Graphics Library 17 hereof can be (i) generated on the CAD/CAM system 1 in accordance with the computer-assisted (and automated) design methods of the present invention, and then (ii) stored in the computer data base 17. Alternatively, the symbolic representations in Graphics Library 17 can be produced with the aid of three- and two-dimensional coordinate measuring methods and apparatus to be described in detail hereinafter. Thereafter they can be called up by a designer at the work station 2 and concatenated with others, using the keyboard of the workstation to display inventory files on the viewing screen of the 3-D color graphics display unit 7.
Alternatively, the symbolic representations of characters, shapes and designs, after having been generated in accordance with the methods hereof, can be copied, post-processed, and used on other CAD/CAM systems once the original design work has been achieved. Greater details regarding use of the Graphics Library 17 in the step involving designing signage works to be carved in signboards will be given in a later section hereof. Referring to FIG. 3A, there is shown a perspective view of a signboard embodying a three-dimensional carved pattern produced by the design and manufacturing method and using the apparatus of the present invention. FIG. 3A illustrates how, with the computer-assisted carving method of the present invention, the "width" of carved grooves can be made to vary in the x-y plane. In FIG. 3B, a cross-sectional view of the carved sign of FIG. 3A taken along the line 3B--3B, is shown. This cross-sectional view illustrates the potentially complex nature of the surfaces. More particularly, this view illustrates how the depth of carved "V" and other shaped grooves of a signage work can be made to vary along the z axis as a function of x, y coordinates in the x-y plane. Using the design and manufacturing method of the present invention, virtually any type of signage work having simple or complex three-dimensional surfaces can be represented as a three-dimensional graphical model on the CAD system of the present invention, and carved into a signboard using the carving tool of the computer-aided machining system thereof, governed by a desired mathematical representation generated from the three-dimensional graphical model.
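The depth-versus-width relationship described above can be made concrete. For a V-shaped cutter bit, the groove width at the surface follows from the plunge depth and the bit's included angle: depth = (width/2)/tan(angle/2). The following Python sketch is offered only as an illustration; the function name and interface are hypothetical, not taken from this disclosure:

```python
import math

def v_groove_depths(widths, included_angle_deg):
    """Compute the z-axis plunge depth needed at each center-line point so
    that a V-shaped cutter with the given included angle opens the groove
    to the desired width in the x-y plane.  Hypothetical helper.
    """
    half_angle = math.radians(included_angle_deg) / 2.0
    # depth = (half width) / tan(half of the included angle)
    return [(w / 2.0) / math.tan(half_angle) for w in widths]
```

Varying the widths along the stroke of a letter while holding the bit angle fixed yields the varying-depth "V" grooves of the kind illustrated in FIG. 3B; a wider included angle reaches the same groove width at a shallower depth.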
At this juncture, it is now in order to briefly describe the mathematical basis underlying the geometrical and graphical modeling and graphical display of curves, surfaces, and solids comprising the computer-generated three-dimensional graphical models of the present invention in particular, and three-dimensional mathematical representations of signage works and components thereof, in general. In the preferred embodiment, curve, surface and solid generation facilities are provided for representing curved lines, surfaces, and solids drawn in three-dimensional space. The following section hereof describes the mathematical basis for the three-dimensional curve and surface facilities of the system of the present invention. For purposes of illustration and not of limitation, the CAD/CAM computer system and work station of the present invention can be realized (i.e., implemented) using the CAMAND 3000 Series.TM. CAD/CAM System by Camax Systems, Inc. of Minneapolis, Minn. The CAMAND.TM. 3000 Series CAD/CAM Computer System can include the 3030 Iris Series super workstation from Silicon Graphics of Mountain View, Calif., providing state-of-the-art capabilities for high-level CAD/CAM usage. This three-dimensional engineering/designing workstation can provide the user with a rapid response time with real-time color graphics display, shading capabilities, multi-windowing, and multi-tasking capabilities. The CAMAX CAD/CAM System includes CAMAND.TM. Software that provides sufficient CAD/CAM capabilities for the design and manufacturing of computer-produced carved signs having surface features characteristic of traditional gold-leafed hand-carved wood signs. CAMAND.TM. Software includes comprehensive features which are suitable for three-dimensional graphic (or geometric) modeling, design analysis, documentation, and multi-axis numerical control programming of carved signage works to which the present invention is directed. As an alternative to the CAMAND 3000 Series.TM.
System from CAMAX Systems, Inc., the CAD system of the CPCS System hereof can be realized (i.e., implemented) using the ANVIL.TM.-5000 CADD/CAM Software System including the OMNISOLID.TM. Solid Modeling Software System of Manufacturing and Consulting Services, Inc. (hereinafter MCS) of Irvine, Calif. The MCS ANVIL.TM.-5000 CADD/CAM System is a fully integrated 3-D CADD/CAM software package which provides wireframe, surface and solid modeling, finite-element mesh generation, analysis, drafting, and numerical control using the same integrated database structure and the same interactive interfaces. MCS's OMNISOLIDS.TM. Solids Modelling Software module is a Constructive Solid Geometry (CSG)/Boundary-Representation (B-REP) hybrid system which allows full use of all sculptured surfaces. The data structure of the OMNISOLIDS.TM. Solid Modelling Software Module is a CSG/B-REP hybrid. CSG is a method of storing a solid as a series of unions, intersections and differences of simpler solids, or primitives. B-REP, Boundary Representation, is a method of storing the faces (i.e., surfaces) of the solids. The OMNISOLIDS.TM. Solid Modelling Software Module utilizes a combination of these two storage techniques. The mathematical basis for the three-dimensional curve facility of the preferred embodiment hereof is now given with respect to the Iris.TM. curve facility of the CAMAND 3000 Series.TM. CAD/CAM Computer System. A curve segment is drawn by specifying a set of four "control points", and a "basis" function which defines how the control points will be used to determine the shape of the curve segment. Complex curved lines in three dimensions representative of carving tool paths (e.g., character "center lines") and the like, can be created by joining several curve segments end-to-end. The curve facility provides the means for making smooth joints between the curve segments.
For purposes of the present disclosure, the term "center line" will be hereinafter used much in the way that it is conventionally referred to in Fine Woodworking's On Carving and How to Carve Wood, both works published by Taunton Press. The mathematical basis for the curve facility of the preferred embodiment can be the parametric cubic curve. The curves in the present application which correspond to the three-dimensional "centerline" trough (of carved grooves in the signboard) are often too complex to be represented by a single curve segment and instead must be represented by a series of curve segments joined end-to-end. In order to create smooth joints, it is necessary to control the positions and curvatures at the end points of curve segments to be joined. Parametric cubic curves are the lowest-order representation of curve segments that can provide continuity of position, slope, and curvature at the point where two curve segments meet. A parametric cubic curve has the property that x, y, z can be defined as third-order polynomials for some variable t:

x(t) = a_x·t³ + b_x·t² + c_x·t + d_x
y(t) = a_y·t³ + b_y·t² + c_y·t + d_y
z(t) = a_z·t³ + b_z·t² + c_z·t + d_z

A cubic curve segment is defined over a range of values for t (usually 0 ≤ t ≤ 1), and can be expressed as a vector product:

C(t) = [t³ t² t 1]·M

The curve facility hereof can approximate the shape of a curve segment with a series of line segments. The end points for all the line segments can be computed by evaluating the vector product C(t) for a series of t values between 0 and 1. The shape of the curve segment is determined by the coefficients of the vector product, which are stored in a column vector M. These coefficients can be expressed as a function of a set of four control points. Thus, the vector product becomes

C(t) = [t³ t² t 1]·B·G

where G is a set of four control points, or the "geometry", and B is a matrix called the "basis". The basis matrix B is determined from a set of constraints that express how the shape of the curve segment relates to the control points.
For example, one constraint might be that one end point of the curve segment is located at the first control point. Another constraint could be that the tangent vector at that end point lies on the line segment formed by the first two control points. When the vector product C is solved for a particular set of constraints, the coefficients of the vector product are identified as a function of four variables (the control points). Then, given four control point values, the vector product C(t) can be used to generate the points on the curve segment. For a detailed discussion of the various classes of cubic curves, including Cardinal Spline, B-Spline and Bezier Spline curve representations, reference can be made to the publication "Parametric Curves, Surfaces, and Volumes in Computer Graphics and Computer-Aided Geometric Design" (November, 1981) by James H. Clark, Technical Report No. 221, Computer Systems Laboratory, Stanford University, Stanford, Calif. Attention is now accorded to the mathematical basis for the surface facility of the present invention, which in the preferred embodiment can be the Iris.TM. surface facility. Three-dimensional surfaces, or patches, are represented by a "wireframe" of curve segments. A patch is drawn by specifying a set of sixteen control points, the number of curve segments to be drawn in each direction of the patch (i.e., precision), and the two "bases" which define how the control points determine the shape of the patch. Complex surfaces can be created by joining several patches into one large patch using the surface facility. The method for drawing three-dimensional surfaces is similar to that of drawing curves. A "surface patch" appears on the viewing screen as a "wire frame" of curve segments. The shape of the patch is determined by a set of user-defined control points. A complex surface consisting of several joined patches can be created by using overlapping sets of control points and B-spline and Cardinal spline curve bases.
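The cubic-curve evaluation described above can be sketched in a few lines of code. The following Python fragment illustrates the standard C(t) = T·B·G formulation; the Bezier basis matrix shown is textbook material, and the function names are illustrative, not taken from this disclosure:

```python
# Textbook Bezier basis matrix B, for use in C(t) = T . B . G.
BEZIER_BASIS = [
    [-1.0,  3.0, -3.0, 1.0],
    [ 3.0, -6.0,  3.0, 0.0],
    [-3.0,  3.0,  0.0, 0.0],
    [ 1.0,  0.0,  0.0, 0.0],
]

def eval_cubic(t, basis, geometry):
    """Evaluate one point on a parametric cubic curve segment.

    geometry is a list of four (x, y, z) control points G; the result is
    the (x, y, z) point at parameter t in [0, 1].
    """
    T = [t**3, t**2, t, 1.0]
    # row vector T . B  (the polynomial weights applied to each control point)
    weights = [sum(T[i] * basis[i][j] for i in range(4)) for j in range(4)]
    # point = (T . B) . G, applied per coordinate
    return tuple(sum(weights[j] * geometry[j][k] for j in range(4))
                 for k in range(3))
```

Sampling t at a series of values between 0 and 1 yields the polyline approximation the curve facility draws on the screen; substituting a B-Spline or Cardinal Spline basis matrix for the Bezier one changes only how neighboring segments join.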
The mathematical basis for the surface facility of the present invention can be the parametric bicubic surface. Bicubic surfaces can provide continuity of position, slope, and curvature at the points where two patches meet. The points on a bicubic surface are defined by parametric equations for x, y, and z. The parametric equation for x is:

x(s,t) = Σ (i=0..3) Σ (j=0..3) a_ij·sⁱ·tʲ

(the equations for y and z are similar). The points on a "bicubic patch" are defined by varying the parameters s and t from 0 to 1. If one parameter is held constant, and the other is varied from 0 to 1, the result is a cubic curve. Thus, a wire frame patch can be created by holding s constant at several values, and using the facility hereof to draw curve segments in one direction, and doing the same for t in the other direction. There are five steps in drawing a surface patch: (1) The appropriate curve bases are defined. The Bezier basis provides "intuitive" control over the shape of the patch, whereas the Cardinal Spline and B-Spline bases allow smooth joints to be created between patches. (2) A basis for each of the directions in the patch, u and v, must be specified. Notably, the u-basis and v-basis do not have to be the same. (3) The number of curve segments to be drawn in each direction is specified. (4) The "precisions" for the curve segments in each direction (i.e., u and v) must be specified. The precision is the minimum number of line segments approximating each curve segment and can be different for each direction. To guarantee that the u and v curve segments forming the wire frame actually intersect, the actual number of line segments is selected to be a multiple of the number of curve segments being drawn in the opposing direction. (5) Using appropriate "patch" commands, as for example of the Iris.TM. Graphics Library, the surface is actually drawn.
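The bicubic equation and the wire-frame construction above can be sketched as follows. In this Python fragment (illustrative only; the 4×4 coefficient matrices would in practice be derived from the sixteen control points and the chosen u and v bases, and the function names are not from this disclosure) each coordinate is evaluated as a double sum over i, j = 0..3:

```python
def eval_bicubic(s, t, coeffs):
    """coeffs is a 4x4 coefficient matrix a[i][j]; returns the scalar
    value sum(a[i][j] * s**i * t**j) at parameters (s, t)."""
    return sum(coeffs[i][j] * s**i * t**j
               for i in range(4) for j in range(4))

def wireframe(coeffs_x, coeffs_y, coeffs_z, n=5, precision=8):
    """Approximate the constant-s half of a patch wireframe: hold s fixed
    at n values and sample each resulting cubic curve in t with
    `precision` line segments (the t-direction curves follow by symmetry).
    """
    curves = []
    for i in range(n):
        s = i / (n - 1)
        pts = []
        for k in range(precision + 1):
            t = k / precision
            pts.append((eval_bicubic(s, t, coeffs_x),
                        eval_bicubic(s, t, coeffs_y),
                        eval_bicubic(s, t, coeffs_z)))
        curves.append(pts)
    return curves
```

Holding s constant reduces the double sum to a cubic polynomial in t, which is exactly why each wire of the frame is itself a cubic curve segment.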
The arguments to the "patch" command contain the sixteen control points that govern the shape of the patch, and with the x, y, and z coordinates of the sixteen control points there is associated a 4×4 matrix, respectively. Patches can be joined together to create the complex surfaces of three-dimensional signage works by using, for example, the Cardinal Spline or B-Spline bases, and overlapping sets of control points. In addition, curves and surfaces can be "blended", smoothed, filled and trimmed by mathematical processing. For a discussion of the mathematical basis for the solid model facility of the preferred embodiment hereof, reference can be made to Chapter 3, Subchapter 4, entitled "Parametric Volumes", of James H. Clark's Technical Report No. 221, Computer Systems Laboratory, Stanford University, referred to hereinbefore. Attention is now given to designing a signage work on the computer-aided design system hereof in accordance with the principles of the present invention. In realizing the design and manufacturing method of the present invention, one of several techniques can be used to design on the CAD system hereof, three-dimensional graphical models (e.g., three-dimensional geometrical representations and/or carving tool path data files) of a signage work to be carved in a signboard. In each embodiment of the method, however, there exists a step of modeling in some form or another the geometry of the components of a three-dimensional signage work, and determining an appropriate three-dimensional carving tool path provided by NC programming, to render the carved signage work in the signboard. In developing the computer-assisted design and manufacturing method of the present invention, careful study has been accorded to the traditional tools of the wood carving signage craft. As illustrated in FIG.
5, such tools include wood carving chisels and gouges of various sweeps and sizes, and in particular, study has been given to the ways in which the various carving functions (i.e., involving traditional wood carving tools) can be emulated using, for example, the axially rotating carving tool 11 having a selected tool bit geometry, moved in three-dimensional space under the controlled guidance of the CAM system of the present invention.

Additionally, recognition is given to the fact that wood carvers have cut the sides of the grooves (i.e., gouges) of letters at angles ranging from 90° to 120° in order to form the "V"-shaped grooves of many traditionally hand-carved incised letters. Notably, different wood carvers often select different angles to form the "V" so as to reflect light in a preferred manner. In connection therewith, FIGS. 4C, 4D, 4E and 4K in particular clearly illustrate how the width of a three-dimensional carved pattern (such as a groove) can be varied along a three-dimensional center line interposed between the inner and outer character outlines, by simultaneously controlling along the z axis the cutting depth (e.g., z coordinates) of a cutter bit as it is moved along the three-dimensional carving tool path in the x-y plane of a three-dimensional coordinate system.

Hereinbelow is described one method in particular which has been developed for carving letters and other alphanumeric characters using the CPCS design and manufacturing system 1 and the carving tool bits illustrated in FIG. 5.
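The relationship between groove width and cutting depth for a V-shaped bit follows directly from the included angle: a bit of included angle θ plunged to depth z cuts a groove of top width w = 2·z·tan(θ/2). A minimal Python sketch (my own illustrative helper names, not part of the described CAM system) inverts that relation to obtain the z coordinate at each point of a center line from the desired local groove width:

```python
from math import radians, tan

def v_bit_depth(width, included_angle_deg):
    """Plunge depth (along z) for a V-shaped cutter so the groove it cuts
    has the given top width: width = 2 * depth * tan(angle / 2)."""
    return width / (2.0 * tan(radians(included_angle_deg) / 2.0))

def centerline_z(widths, included_angle_deg=90.0):
    """z coordinate at each center-line point, given the desired groove
    width there (negative z meaning cutting below the surface)."""
    return [-v_bit_depth(w, included_angle_deg) for w in widths]
```

For a 90° bit, tan(45°) = 1, so the groove is exactly twice as wide as it is deep: a 10 mm-wide stroke needs a 5 mm plunge, which is why varying z along the center line varies the stroke width.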
This computer-assisted design method has been discovered to be a highly effective and most efficient method for designing three-dimensional computer graphic models and three-dimensional carving tool paths (including numerical coordinate data) for characters, to be used in producing three-dimensional carved patterns of three-dimensional signage works in signboards, wherein the carved patterns have incised and/or relieved surfaces characteristic of traditional gold-leafed wood carved signs. This particular method will now be described below.

Referring to FIG. 4A, a two-dimensional computer graphic model (i.e., representation) of a layout of a three-dimensional signage work is presented in plan view as it would appear on the display terminal 7. FIGS. 4B and 4C illustrate in greater detail two characters (i.e., components or parts) of a three-dimensional signage work whose geometry is being modelled on the CAD system. The three-dimensional graphical representations of the signage work of FIGS. 4B through 4J are preferably displayed on the viewing screen 8 using high-resolution color graphics software.

Referring to FIGS. 4A and 4F through 4J, there are illustrated several principal steps comprising a method of generating three-dimensional graphical and numerical models of three-dimensional characters from traditional or novel two-dimensional characters or shapes, having "outer" (and sometimes "inner") characteristic outlines 18 and 19, respectively. The sequence of steps for this computer-aided design method will now be described in detail.

As indicated in FIG. 4A, a two-dimensional graphical representation (e.g., the "4" of "40 SAGAMORE") having "inner" and "outer" characteristic outlines 19 and 18, is displayed (i.e., plotted) in two dimensions (e.g., the x-y plane) on the CAD system, such system preferably having high-resolution color graphics capabilities. As a matter of design choice, the characteristic outlines can be designated a particular color, such as yellow.
As indicated in FIG. 4F, a plurality of substantially similar outlines 18A of the two-dimensional character (e.g., "4") are generated from the "outer" characteristic outline 18, and a plurality of substantially similar outlines 19A from the "inner" characteristic outline 19 thereof, at a predetermined offset (in millimeters) in a direction towards the inside (i.e., towards the centerline) of the two-dimensional character. These "offsetted" characteristic outlines 18A and 19A can be designated as purple, for example.

As indicated by the characters of FIG. 4F in particular, there can arise from this computer graphic design process the formation of what will hereinafter be termed "islands", designated by 21A, 21B, and 21C of the character "4" in FIG. 4F. In accordance with the present invention, "island formations" can be thought of as the void or vacant two-dimensional spaces remaining within the space between the characteristic outer and inner outlines 18 and 19, respectively, that is, after the outer and inner characteristic outlines 18 and 19 converge to within a distance apart equivalent to the offset distance. Notably, the character "0" of FIG. 4F has no island formations.

When island formations arise in the process of generating three-dimensional characters from two-dimensional characteristic outlines of characters, shapes, designs and the like, then either manual or programmed generation of "local" characteristic outlines, e.g., 22A, 22B and 22C, for the "islands" 21A, 21B and 21C, respectively, must be performed. This procedure ensures that complete three-dimensional graphical models of signage works and components thereof can be provided. In such instances, the island characteristic outlines 22A, 22B and 22C can be offset to generate a plurality of island characteristic outlines as illustrated in FIG. 4F. The plurality of "inner" and "outer" similar outlines (i.e., offsets) illustrated in FIG.
4F in particular, are then displayed on the CAD system's color graphics viewing screen for review. The general appearance of these geometrically similar outlines is that of the contour lines of a contour map. But as will be illustrated in the description of this particular method, providing such similar outlines principally, although not solely, serves to help the designer determine on the CAD system (i) the depth (e.g., z coordinates) and (ii) the location (e.g., x, y coordinates) of the three-dimensional "center line" curve of the three-dimensional character produced from a transformed two-dimensional character projected into the third dimension.

As illustrated in FIGS. 4G and 4H, each of the geometrically similar outlines is then translated (i.e., projected) a predetermined distance along the third dimension (e.g., z axis) of the CAD system's three-dimensional coordinate system. As mentioned hereinabove, this step is helpful in assisting the designer to determine the location where the three-dimensional "center line" of the two-dimensional character will be drawn.

Thereafter, using the three-dimensional graphical model of FIG. 4G, a plurality of points are interactively introduced in the three-dimensional coordinate system, at locations corresponding to points lying along what can be visualized to be a three-dimensional tool path, along which the apex (i.e., tip) of an axially rotating cutting tool of predetermined cutting dimensions moves under the guidance of the CAM system hereof. The interactive introduction of points can be achieved using a "stylus" or "mouse" device well known in the computer-aided design arts. These points are selected so that when the axially rotating cutting tool 11 is moved along the three-dimensional tool path, a desired three-dimensional carved pattern having desired three-dimensional surfaces of a visualized signage work is formed in a signboard.
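The inward-offset step of FIG. 4F can be illustrated with a deliberately simplified case. Real characters require offsetting arbitrary closed outlines (typically with a geometry kernel), but an axis-aligned rectangle is enough to show both the contour-map appearance of successive offsets and the point at which opposite sides converge, where "islands" would arise. The following Python sketch uses my own hypothetical function name:

```python
def inward_offsets(width, height, step):
    """Generate successive inward offsets of an axis-aligned rectangular
    outline, as (width, height) pairs.  Offsetting stops when the outline
    collapses -- the condition under which 'islands' (vacant regions where
    opposite sides of a character have converged) appear in a real glyph."""
    outlines = []
    d = step
    while width - 2 * d > 0 and height - 2 * d > 0:
        outlines.append((width - 2 * d, height - 2 * d))
        d += step
    return outlines
```

A 10 × 6 outline offset in steps of 1 yields two nested contours, (8, 4) and (6, 2), before the short sides converge; a 4 × 4 outline offset in steps of 2 collapses immediately, leaving none.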
Notably, the three-dimensional surfaces of the carved pattern will correspond to the three-dimensional surfaces of the three-dimensional graphical model (i.e., representation) of the three-dimensional alphanumerical character. As discussed in the curve mathematics section provided hereinbefore, the plurality of points are then appropriately interpolated using parametric spline-curve representations, to render the coordinates of a composite three-dimensional carving tool path 23 illustrated in FIGS. 4I and 4J. The carving tool path 23, when taken with a three-dimensional graphic model of a carving tool, corresponds to the three-dimensional carved pattern that is associated with the so-designed three-dimensional graphical model of the three-dimensional character. Thereafter, the interactively introduced points can be erased for display purposes.

In connection with the above-described method of the present invention, a three-dimensional graphical model and corresponding numerical coordinate tool path data file(s) can be generated on the CAD system hereof, from a corresponding two-dimensional graphical model (e.g., characteristic outline) of the alphanumeric character. The alphanumerical character can be of any sort regardless of type style or font, and with or without serifs, a serif being a feature such as a fine cross-stroke at the top or bottom of a letter. The three-dimensional graphical representations, numerical coordinate carving tool paths, and other mathematical representations derived therefrom, once having been generated, can be stored in non-volatile memory (e.g., ROM) and can be used to create the data base of the Graphics Library of the present invention, as discussed hereinbefore with reference to FIG. 2B.
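The spline interpolation of the interactively introduced points can be sketched with a Cardinal spline of tension 0.5 (the Catmull-Rom form), one of the bases named in the curve mathematics section above. This Python fragment is an illustrative stand-in for the system's own interpolation, not its actual code: it passes a smooth curve through every introduced (x, y, z) point and returns a dense polyline approximating the composite tool path.

```python
def catmull_rom(p0, p1, p2, p3, u):
    """One coordinate of a Catmull-Rom (Cardinal, tension 0.5) spline
    segment between p1 and p2, at parameter u in [0, 1]."""
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * u +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * u ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * u ** 3)

def interpolate_path(points, samples_per_segment=8):
    """Interpolate interactively introduced (x, y, z) points into a dense
    polyline approximating the composite carving tool path."""
    path = []
    n = len(points)
    for i in range(n - 1):
        p0 = points[max(i - 1, 0)]            # duplicate endpoints at the ends
        p1, p2 = points[i], points[i + 1]
        p3 = points[min(i + 2, n - 1)]
        for k in range(samples_per_segment):
            u = k / samples_per_segment
            path.append(tuple(catmull_rom(p0[c], p1[c], p2[c], p3[c], u)
                              for c in range(3)))
    path.append(points[-1])
    return path
```

Because the Catmull-Rom segment reproduces p1 at u = 0 and p2 at u = 1, the interpolated tool path passes exactly through each introduced point, as the method requires.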
The tool path data file so generated by the above-described design method is then subject to post processing, an operation which involves processing the tool path data file to produce complete, machine-ready files, expressed in machine (i.e., assembly) or binary logical languages. In the post processor, the tool path data is matched (i.e., interfaced) to a particular CNC machine tool and machine control unit (MCU) combination. The output of the post processor can be generated for paper tape, magnetic memory storage, or direct numerical control (DNC).

Notwithstanding post processing being a subject well known and understood in the art of NC programming, reference is made to a paper entitled "G-Posting To NC Flexibility", by the Computer Integrated Manufacturing Company of Irving, Tex., and reprints from Modern Machine Shop of Cincinnati, Ohio. This paper provides a further discussion of the "generalized post-processor approach" utilized in simplifying NC workpiece programming and in making such programs function on different makes of similar types of machine tools.

In the preferred embodiment, the output of the post processor corresponds to a three-dimensional composite tool path data file and three-dimensional graphical representations (i.e., models) of each alphanumerical character. The post processor output can also be used to create the extensive Graphics Library of numerous sets of three-dimensional alphanumerical characters of distinct typestyles (i.e., fonts). The computer-software based Graphics Library of the CAD/CAM sign carving system of FIG. 2B can provide a robust inventory of three-dimensional characters. The data files of these three-dimensional characters can be simply accessed by a designer at the work station 2, for purposes of designing a three-dimensional layout of a three-dimensional signage work.
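The post processor's role of matching generic tool path data to a particular machine/MCU combination can be sketched as follows. This is a toy emitter for a hypothetical three-axis controller using common G-code words (G21, G90, G0, G1); an actual post processor for the machines described would differ in detail.

```python
def post_process(tool_path, feed_rate=600, safe_z=5.0):
    """Turn a generic list of (x, y, z) tool path points into machine-ready
    G-code lines -- the role of the post processor, which matches tool path
    data to a particular CNC machine tool / MCU combination."""
    lines = ["G21",                               # millimetres
             "G90",                               # absolute coordinates
             f"G0 Z{safe_z:.3f}"]                 # retract to clearance plane
    x, y, z = tool_path[0]
    lines.append(f"G0 X{x:.3f} Y{y:.3f}")         # rapid to the start point
    lines.append(f"G1 Z{z:.3f} F{feed_rate}")     # plunge at feed rate
    for x, y, z in tool_path[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f}")
    lines.append(f"G0 Z{safe_z:.3f}")             # retract when done
    return lines
```

Swapping this emitter for one targeting a different controller, while keeping the upstream tool path unchanged, is precisely the "generalized post-processor approach" the cited paper discusses.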
Once designed, the three-dimensional graphical model of the signage work can be displayed, reconfigured, and transformed to the designer's liking, and after generation of three-dimensional tool path data files and post processing thereof, provided to the CAM system 4 in order to carve the corresponding three-dimensional signage work into a signboard, by taking necessary and sufficient steps.

In addition to the above-described method of designing three-dimensional graphical models and tool path data files of three-dimensional alphanumerical characters derived from two-dimensional alphanumerical characters, an alternative method of achieving the same has been developed. This alternative method will now be described and explained below, after making a few preliminary remarks appropriate at this juncture.

As discussed hereinbefore, the methods thus far described require that, prior to carving any form of three-dimensional signage work in a signboard, the geometry of the design of the signage work first be specified by a computer graphic model, from which thereafter a numerical coordinate (three-dimensional tool path) model is produced. In the present invention, the computer graphic and numerical coordinate tool path models of a signage work are prepared using computer-aided design and manufacturing techniques, all of which are based upon computer graphics and computational geometry, the latter being a subject which is given treatment in "Computational Geometry for Design and Manufacture" (1980) by I. D. Faux and M. J. Pratt, published by John Wiley and Sons.

Notably, in the field of geometric (to be contrasted with graphical) design, if the design of a three-dimensional signage work has complex surfaces, as do many wood carved signage works, then precise surface descriptions would need to be given for those complex surfaces, prior to the determination of tool paths and the output of the post processor.
This therefore makes geometric modeling using geometric primitives a potentially time-consuming process in some cases, as the nature and precision of the surface description given to a signage work is a question of mathematical form. Mathematical form, on the other hand, is a matter regarding the type of mathematical functions used to describe complex three-dimensional curves, surfaces and solids of signage works, wherein the three-dimensional surfaces thereof are characteristic of traditional gold-leafed hand-carved wood signs, and which are to be machine-carved in a signboard in accordance with the present invention.

In contrast with geometric design, graphical design on the CAD/CAM system of the present invention can employ three-dimensional coordinate measuring methods and apparatus, which usually do not require production of geometric descriptions (i.e., functions) and can produce numeric models of three-dimensional objects to be carved in a signboard in accordance with the principles of the present invention. The advantages of each type of model used in computer-aided sign carving according hereto will hereinafter appear obvious to those with ordinary skill in the art to which the present invention pertains.

It is also within the contemplation of the present invention that there can appear at times the need to employ additional modeling techniques based on alternative mathematical structures and processes operationally supported within the CAD system of the CPCS design and manufacturing system hereof. It has been discovered that this is especially the case when desiring to produce carved signs embodying signage works having three-dimensional surfaces akin to those characteristic of traditional hand-crafted gold-leafed wood carved signs in particular, and having relieved and/or incised carvings of characters and designs, in general.
In particular, in the IEEE Computer Graphics and Applications Journal of January 1984, a paper is presented entitled "Computer-Integrated Manufacturing of Surfaces Using Octree Encoding" by Yamaguchi et al. The paper presents an algorithm for automatically generating, from an octree description, numerical coordinate tool paths containing the data that a numerical control milling machine requires to manufacture a part. The octree data structure, representing a three-dimensional object by hierarchically organized cubes of various sizes, facilitates the performance of Boolean operations and tool and work piece "interference" checking, and provides an approximate representation of smooth surfaces to any required accuracy. Also, since the octree model has a very simple data structure, the automatic generation of various types of carving tool paths is possible. Accordingly, octree data structures, operations, and algorithms can be used with the CPCS design and manufacturing system hereof, to design three-dimensional graphical models of signage works having three-dimensional incised and/or relieved surfaces.

When graphically modeling signage works having certain surface topologies, it has been discovered that other CAD methods can be advantageously employed in designing and manufacturing carved signs in accordance with the principles of the present invention. Additionally, as discussed hereinbefore, the method of the present invention can make use of parametric spline-curve, spline-surface, and spline-volume (i.e., solid) representations as mathematical structures for geometric modeling of the three-dimensional surfaces of a signage work. Examples of such spline-curve and surface representations are defined and described in the IEEE Computer Graphics and Applications Journal, in the following articles: "Parametric Spline Curves and Surfaces" by B. A.
Barsky, February 1986; "Rational B-Splines for Curve and Surface Representation" by Wayne Tiller, September 1983; "Rectangular V-Splines" by G. M. Nielson, February 1986; "A Procedure for Generating Contour Lines from a B-Spline Surface" by S. G. Satterfield and D. F. Rogers, April 1985.

Herebelow, using one of several known or yet-to-be-discovered parametric spline curve or surface representations, an alternative method is presented for generating, on the CAD system, a three-dimensional graphical model (i.e., representation) of a two-dimensional shape having at least one characteristic outline. This method comprises displaying in two dimensions on the CAD system, the two-dimensional graphical representation of the characteristic outline of the shape. From this two-dimensional graphical representation, the surface within the "characteristic outline" thereof is subdivided into a plurality of "surface patches", each of which can be independently created and smoothly connected together using surface mathematics as hereinbefore described. A spline surface representation of a particular type can be selected as a basis for the patches of the three-dimensional curved surfaces of the three-dimensional graphical model (i.e., representation) generated from the two-dimensional character. Interactively, an array of control points can then be introduced in three-dimensional space, to control the desired shape of the parametric spline-surface representations so as to design the "surface patches" comprising the three-dimensional graphical model generated from the two-dimensional shape or character. The array of control points for each surface patch is then interpolated using a spline surface representation to thereby generate the individual surface patches comprising the three-dimensional graphic model. From the resulting three-dimensional graphical model, a corresponding tool path can be automatically or interactively (i.e., manually) generated.
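Returning to the octree representation discussed earlier, its hierarchically organized cubes make Boolean operations nearly trivial, which is what makes it attractive for interference checking and automatic tool path generation. The toy Python sketch below (class and state names are my own, with a fixed-depth, like-for-like node layout assumed) shows the Boolean difference a − b that carving amounts to:

```python
class Octree:
    """Minimal octree node: a cube is FULL, EMPTY, or MIXED, a MIXED
    node being subdivided into eight child octants of half the edge."""
    def __init__(self, state="EMPTY", children=None):
        self.state = state            # "FULL", "EMPTY", or "MIXED"
        self.children = children or []

def subtract(a, b):
    """Boolean difference a - b on two octrees describing the same cube --
    the kind of operation the octree structure makes cheap."""
    if b.state == "FULL" or a.state == "EMPTY":
        return Octree("EMPTY")        # everything here is removed (or was void)
    if b.state == "EMPTY":
        return a                      # nothing removed in this octant
    # b is MIXED: expand a's implicit children if a is a uniform cube
    a_kids = a.children if a.children else [Octree(a.state) for _ in range(8)]
    kids = [subtract(ka, kb) for ka, kb in zip(a_kids, b.children)]
    if {k.state for k in kids} == {"EMPTY"}:
        return Octree("EMPTY")        # collapse uniform results
    return Octree("MIXED", kids)
```

Because each octant is handled independently and uniform octants resolve in one comparison, the cost of the operation tracks the complexity of the carved boundary rather than the resolution of the whole signboard.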
In connection with the design and manufacturing method of producing carved signs in accordance with the present invention, there are two prior art computer-aided methods which can be used in the process of designing, from two-dimensional alphanumerical characters, three-dimensional graphical models thereof.

U.S. Pat. No. 4,589,062 to Kishi et al., incorporated herein by reference, discloses a method of creating curved surfaces which can be used in the design step involving the formation of three-dimensional graphical models of components of three-dimensional signage works. In particular, the method of U.S. Pat. No. 4,589,062 is an "interactive" method, which involves defining, on a first section curve (e.g., characteristic outline), a first correspondence point which corresponds to a second correspondence point on a second section curve (e.g., center line), and then generating intermediate section curves in accordance with the first and second correspondence points. In essence, such method involves moving and transforming a first section curve of two given section curves until the first section curve is superposed on a second section curve. The major advantage thereof is that curved surfaces featuring subtle changes can be generated with increased degrees of freedom and created with accuracy. According to the present invention, the method of U.S. Pat. No. 4,589,062 can be employed in the process of producing a three-dimensional graphical model (i.e., representation) of a signage work in general, and a three-dimensional graphical model of a three-dimensional character generated from a two-dimensional character having at least one characteristic outline, in particular.

Another method which can be used in the design step of the method of the present invention involves automatically creating three-dimensional sculptured surfaces from sectional profiles designated on design drawings only. FAPT DIE-II software from General Numeric of Elk Grove Village, Ill., provides such facility.
For sectional profiles, curves on an optional plane in a space are classified into basic curves and drive curves. For example, assume that one basic curve and two (i.e., first and second) drive curves are designated on a design drawing. Sculptured surfaces are created by gradually changing the profile of the first drive curve into the second drive curve as the first drive curve moves toward the second drive curve along the basic curve. As applied to the present invention, the first and second drive curves can represent the effective cross-sections of an axially rotating carving tool disposed at two different points along the z axis herein. The basic curve can represent the center line of a carved groove in a signboard.

While the above methods of generating three-dimensional graphical models of characters may satisfy most designers of computer-produced carved signs, especially those designing signage works limited to lettering, the present invention recognizes that there are, nevertheless, CAD designers who desire to feature in their three-dimensional signage works shapes and designs other than alphanumerical characters, such as those commonly seen in hand-crafted "chip" carvings. In such situations, the designer will need to generate on the CAD system three-dimensional graphical models having complex three-dimensional surfaces. In such an event, the designers will require certain computer-assisted geometric modeling and NC tool path generation capabilities. This is to ensure that complex signage work components can be efficiently and effectively designed, composite tool path graphics displayed, and composite tool path numerical data generated therefrom and proven by computer simulation on the CAD system or by carving signboards with the CAM system of the present invention.
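The drive-curve technique described above, gradually changing one sectional profile into another while sweeping along the basic curve, can be read (in its simplest form) as a blend between the two profiles. The Python sketch below is my own linear-blend reading, not the FAPT DIE-II implementation: each profile is a list of sampled section heights, and intermediate sections are produced at successive stations along the basic curve.

```python
def blend_profiles(first, second, n_steps):
    """Sweep a sectional profile along a basic curve, gradually changing it
    from the first drive curve to the second: a linear blend between two
    equally sampled profiles, one intermediate section per station."""
    sections = []
    for k in range(n_steps + 1):
        w = k / n_steps                       # 0 at the first drive curve, 1 at the second
        sections.append([(1 - w) * a + w * b
                         for a, b in zip(first, second)])
    return sections
```

Applied to sign carving as the text suggests, first and second would be the effective tool cross-sections at two z positions, and the stations would lie along the center line of the groove.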
In accordance with the principles of the present invention, the components of a complex signage work can be modelled with any combination of "wire frame" and surface (or solid) primitives, including spline curve and surface representations. From the Graphics Library 17 in the system diagram of FIG. 2B, a designated computer program can access previously recorded two- and three-dimensional graphical designs for creation of tool paths which can be dynamically displayed and interactively joined and edited. This provides a visual representation of the exact tool paths relating to the graphically designed part. The NC tool path data can be in one of several formats, and an appropriate post processor will produce either paper tape, or magnetic recordings, or direct output for controlling the axially rotating carving tool 11 hereof, preferably having five programmable axes of simultaneous movement as described hereinbefore.

The present invention also contemplates that there are instances when a designer will desire additional freedom in designing a three-dimensional graphical model of a signage work, that is, as compared with the above-described computer-aided design methods. It has been discovered that in such instances, it may even be desired to have the capability of representing three-dimensionally on the CAD system hereof the removal of "solid" signboard constituting material, as does a carver skillfully utilizing conventional tools of the trade, such as chisels, gouges and hammers. In connection with such design capability, an alternative computer-aided design method has been developed and will be described hereinbelow.

This alternative computer-assisted design and NC programming method teaches "mathematically" subtracting (using Boolean operations) solid "stock material" (i.e., signboard material) representations from a signboard represented in the three-dimensional CAD system, which uses a computer-aided carving tool.
Therein, the carving tool(s) is (are) represented on the CAD system in the form of a "solid" three-dimensional graphical structure representing the "effective" solid geometry of a specified tool bit in operation. The carving tool is also displayed on the visual display unit of the CAD system, and can be moved on the screen using a joystick, light pen or other conventional device. Between the three-dimensional images of the solid signboard and carving tool bit, a computational-based "three-dimensional image subtraction" process comprising "Boolean operations" is performed to generate a three-dimensional graphical representation of a signage work. Therefrom, tool path data associated with a particular three-dimensional, graphically represented carving tool is automatically generated. The steps of the process are described below.

Using solid geometry, the designer models (i.e., represents) on the CAD system the carving tool as well as the signboard, and then removes (i.e., mathematically subtracts) from the solid model of the signboard the graphically represented stock material of the solid signboard model over which the solid models (i.e., numeric and graphics-based three-dimensional graphical representations) of the carving tool bit and signboard overlap. As the three-dimensional carved patterns are being defined, both the tool path graphics data and the tool itself can be displayed. At the same time or thereafter, tool path numerical data files thereof can be automatically generated using known computational processes.

The process described hereinabove involves three-dimensional solid-image subtraction and has the advantage of automatic tool path generation. Thus, this method of designing three-dimensional models of a signage work requires implementation of a three-dimensional image subtraction technique realized by a computer-aided process on the CAD/CAM computer 3.
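The solid-image subtraction step can be made concrete with a voxel-grid stand-in for the solid models (a genuine solid modeler would use boundary or CSG representations; the names below are mine). The signboard and the tool at its current position are each a set of occupied voxels, and "carving" removes every signboard voxel in union (overlap) with the tool:

```python
def carve(signboard, tool_voxels):
    """Three-dimensional 'image subtraction': remove from the solid signboard
    model every voxel in union (i.e., overlapping in 3-D space) with the
    solid model of the carving tool at its current position."""
    return signboard - tool_voxels

# Solids as sets of (i, j, k) voxel coordinates: a small slab of stock
# and a tool bit overlapping three of its voxels.
board = {(i, j, k) for i in range(4) for j in range(4) for k in range(2)}
tool = {(1, 1, 0), (1, 1, 1), (2, 1, 0)}
carved = carve(board, tool)
```

Repeating the subtraction as the tool model is dragged across the screen accumulates the carved pattern, which is the analogue of the carver's chisel strokes that the method aims to reproduce.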
The computer-aided process effectuates the removal of three-dimensionally represented "solid" stock material in "union" (i.e., overlapping in 3-D space) with the position of the solid geometrical model of a carving tool (e.g., the axially rotating carving tool bit). With this process, the removal of solid stock material in "union" with the solid carving tool model is achieved by mathematical subtraction (i.e., difference calculations) from a solid geometrical model of the signboard, and in a manner which is analogous in some respects to the modus operandi of sign carvers employing manual, time-honored carving tools and procedures.

In realizing the above-described method, an enhanced version of one of the CAMAX CAMAND™ and the MCS ANVIL-5000 OMNISOLIDS™ solid (i.e., volume) modeling computer software program packages can be used to implement the hereinabove described design process of the present invention. With such a process, a means is provided for mathematically or "computer graphically" carving signage works and automatically generating numerical coordinate tool path data therefor on the CAD/CAM system hereof. In implementing the above-described three-dimensional solid-image subtraction/automatic tool path generation process, advantages can be derived by using work station software from Weber Systems Inc. of Brookfield, Wis. In particular, the work station software can allow an operator/designer practicing the present invention to simultaneously view four different views of the Boolean-based computational process involving solid models of the carving tool and stock material (e.g., signboard).

In connection with the CAD method hereinbefore described, focus is now given to FIG. 5, wherein examples of carving tool bits of various geometries are illustrated, and which can be used with the design and manufacturing method of the present invention.
Therein, the chart shows several conventional sweeps and gouges and chisels positioned alongside corresponding tool bits for use with the axially rotating carving tool 11, which are capable of emulating conventional hand carving operations in accordance with the principles of the present invention. Also, as illustrated in FIG. 2, three-dimensional solid graphical (and numerical) models of the various carving tool bits illustrated in FIG. 5 can be stored in memory 24, and called up when desired by a designer or program.

The present invention also contemplates that there are instances when a designer will desire to design (i.e., define) a geometric model of a signage work using at least one or more of the parametric curve, surface, and solid generation facilities of the system hereof, and allow the CAD/CAM computer 3 to automatically generate the tool path parameters (e.g., carving tool specifications, numerical coordinate tool path data, spindle and feed speeds, etc.), tool entry methods, and clearance planes, in a language compatible with the post-processor.

There will also be times when a computer-assisted designer may desire to carve a three-dimensional pattern or design of a preexisting "physical" object, alongside or around carved lettering comprising, in combination therewith, a composite signage work. Realizing that creating a graphical (or geometrical) model of preexisting physical objects requires substantial time at the work station 2, a three-dimensional graphical and numerical model of such a signage work can be designed (i.e., provided) by recording the coordinates of the three-dimensional surfaces of the physical object to be carved in the signboard, so as to produce a three-dimensional graphical and numerical model of such signage work or component thereof. Using automatic or manual tool path generation techniques and one of several carving tools, a numerical coordinate data file of a composite tool path therefor can be produced.
This CAD technique offers the advantage of obviating the need to manually generate a three-dimensional graphical model of the physical object using computational geometry and the like, but rather utilizes three-dimensional surface coordinate measuring methodologies, based in part on principles of holographic imaging and optical memory storage. In such instances, "three-dimensional coordinate measuring" methods and apparatus can be used in the step of designing (i.e., producing or providing) a three-dimensional graphical model of a signage work, in accordance with the design and manufacturing method of the present invention. In particular, a laser-based non-contact height profiling system can be employed to carry out methods of measuring three-dimensional coordinates of the surfaces of a low-profiled physical object (i.e., digitizing three-dimensional objects) to be carved in a signboard. An example of such a three-dimensional coordinate measuring apparatus 25, diagrammatically illustrated in FIG. 2, is the Cyberscan™ profiling system available from Cyberoptics Inc. of Minneapolis, Minn., and as the corporate name suggests, optical principles can be applied to achieve control processes. In the case of the present invention, the control processes would be the CAM system 4 guiding the carving tool 11 in accordance with carving tool paths generated from a three-dimensional graphical model of the preexisting physical object.

Another approach using three-dimensional coordinate measuring methods and apparatus can involve utilization of holographic recording methods and equipment. In such instances, a three-dimensional graphical model can be produced by holographically recording a physical object to be carved in a signboard, using holographic equipment. The holographically recorded image of the physical object can be stored and digitally processed to provide, in a suitable computer graphic format, a three-dimensional graphical model of the physical object.
From this three-dimensional graphical model, suitable carving tool paths (i.e., numerical data files) can be generated using either manual, semi-manual or automatic tool path generation. Alternatively, a hand-held stylus called the "3 Space Digitizer" from Polhemus Navigation Sciences, of Colchester, Vt., can be used to enter x, y, and z coordinate data of three-dimensional physical objects or models, into a properly interfaced CAD/CAM system. Using a Unigraphics.TM. CAD/CAM workstation from the McDonnell Douglas Corporation, an alphanumeric terminal initiates the digitizer task, and the 3 Space Digitizer can be used to enter complex geometry of non-metallic objects (e.g., to determine the x, y, and z coordinates of points located on a 3D model or object). The 3D Space Digitizer transmits this data to a host computer which includes a C.P.U., tape drive, and disk drive, and stores data in user-specified part files and interfaces with the Unigraphics.TM. workstation. The 3D Space Digitizer can be used to measure the coordinates (i.e., digitize the space dimensions) of three-dimensional physical objects that are to be made part of signage works, employing one or more of incised, relieved, or applique modes of carving. From so produced numerical models of these objects, a three-dimensional graphical model thereof can be displayed, and numerical coordinate tool path data files generated. Two-dimensional recording of surface coordinates of preexisting physical objects can also be performed using 2-D coordinate measuring methods and apparatus to provide two-dimensional characteristic outlines thereof. Thereafter, characteristic outlines so produced, can be used to generate therefrom, three-dimensional graphical models in accordance with the methods described hereinbefore.
OPERATION OF PREFERRED EMBODIMENT HEREOF

It is appropriate at this juncture, having described hereinbefore methods and apparatus of the present invention, to now describe the operation of the preferred embodiment of the CAD/CAM design and manufacturing system 1 of the present invention during an exemplary design and manufacturing cycle based on the principles thereof. Visualizing in one's mind a signage work to be carved on a signboard, a designer using the design and manufacturing method hereof, has great flexibility and numerous design tools from which to choose. More specifically, an operator using the CPCS CAD/CAM system hereof has several options in producing a three-dimensional graphical model of a signage work to be carved in a signboard. One method of designing a three-dimensional graphical model of a signage work is to apply at the workstation 2, one of the various computer-aided design methods described hereinbefore. For example, using, on the CAD system hereof, the method of generating three-dimensional alphanumerical characters from corresponding two-dimensional alphanumerical characters can produce a three-dimensional graphical (and numerical) model of a composite signage work comprising such characters. Alternatively, three-dimensional coordinate measuring methods and apparatus can be used through the workstation 2, to provide a three-dimensional graphical model of a physical object to be used as a signage work which is intended to be carved in a signboard according to principles of the present invention. Yet, on the other hand, a designer using one of the computer-aided design methods described hereinbefore can visualize a signage work and, applying such design methods, produce a three-dimensional graphical model of the signage work.
From the three-dimensional graphical model however produced, a mathematical representation of the signage work, such as a numerical coordinate (tool path) data file, can be generated and provided to the CAM system 4 having carving tool 11. The material constituting the signboard is then removed using the carving tool 11 moving under the controlled guidance of the CAM system 4, to leave in the signboard, a three-dimensional carved pattern corresponding to the signage work. Notably, the three-dimensional carved pattern in the signboard will have three-dimensional surfaces corresponding to the three-dimensional surfaces of the three-dimensional graphical model of the signage work. It is herein noted that during the machine carving operation, tool change may be required according to the designed carving program (e.g., tool path data file) which has been provided to the Post Processor 16 of the CAM system 4. In such instances, carving tool bits of the type illustrated in FIG. 5, can be accessed from tool storage 26 during a carving operation, and changed in accordance with the carving program whereafter the carving operation can resume. Tool change can occur as often as desired. Also in instances where "chisel or gouge markings" formed in the three-dimensional carved grooves are desired, an approach employing several levels of carving processes (and thus multiple composite carving tool paths) can be adopted and CNC programmed. In such a multi-stage carving process, the later stages of the carving process can include carving tool movement to create the chisel and/or gouge markings, as to emulate the textural appearance of such traditional hand-carved wood signs. After a signage work has been carved into the signboard using the computer-aided design and manufacturing method of the present invention, finishing operations can then be performed on the carved sign according to conventional principles and techniques.
For example, the carved signboard can be prepared for painting and gold leafing. In cases where the signboard is constituted of wood, conventional wood finishing techniques can be employed. Examples of such techniques can be found in How to Carve Wood by Richard Butz cited hereinbefore. Thereafter, gold-leaf material can be applied to the signboard in accordance with techniques known in the traditional wood carving arts. Discussion of such applicable techniques can be found in Chapter IX entitled "Laying and Burnishing Gold" of Writing & Illuminating & Lettering (1983) by Edward Johnston, published by Adam & Charles Black of London, England, and by the Taplinger Publishing Co., Inc. of New York, N.Y. In the case where vinyl or like plastic is used as signboard constituting material, conventional gold-leafing can be obviated, and chrome or gold spray or deposition processes can be used. Alternatively where the signboard is constituted of metal, electroplating processes can be used to deposit light reflective coatings over three-dimensional carved surfaces. Attention is now accorded to the types of materials out of which the signboards may be constituted. It has been discovered that aside from woods such as for example, mahogany, pine, redwood and cedar, other materials such as acrylic, vinyl, polycarbonate, styrene, aluminum, brass and foam board, also provide suitable signboard materials for practicing the method of the present invention. There are several parameters which should be considered prior to carving using the design and manufacturing method of the present invention. Specifically, as regards spindle speeds (i.e., of the axially rotating carving tool 11), it has been discovered that speeds within the range of 15,000 to 24,000 RPM have provided excellent results when computer-carving mahogany wood. However, when using wood, cutting directions of the axially rotating carving tool hereof must also be considered in view of the grain of the wood.
It has been discovered that information regarding "grain" of particular wood signboards to be carved using the methods hereof, can be modeled on the CAD system and used to generate tool paths which consider the grain of the wood signboard. In the present invention sanding operations can be executed using axially rotating sanding tools of appropriately configured dimensions, which are moved in the three-dimensional carved grooves of signage works, under the guidance of the NC programmed CAM system hereof. It would be within the scope and spirit of the present invention to also provide computer-produced sternboards for boats, yachts and the like, as well as computer-produced tombstones using the design and manufacturing method of the present invention. In the case of tombstones, the signboard can be a stone material such as granite, marble, sandstone or other suitable material, and the carving tool bit can be "diamond tipped" or made of material appropriate for carving stone under the guidance of the CAM system hereof. Using the method and apparatus of the present invention, names and patterns typically cut into tombstones by conventional waterjet cutting, sandblasting, chiseling and routing processes can be carved by way of an axially rotating cutting tool having at least three programmable axes of simultaneous movement, under the guidance of the CAM system hereof. It would also be within the scope and spirit of the present invention to utilize one of laser and sandblasting principled devices as the carving tool of the method and apparatus of the present invention. In the case where laser devices are used, a laser beam of sufficient energy to burn away wood or other signboard constituting material can be controllably moved simultaneously in at least three programmable axes under the controlled guidance of the CAM system hereof.
Such controlled movement of laser beams can remove signboard constituting material as to leave three-dimensional carved patterns in the signboard, which correspond to the three-dimensional surfaces of the three-dimensional graphical model of the signage work to be carved therein. One example of laser cutting techniques is illustrated in U.S. Pat. No. 4,430,548 to Macken wherein laser apparatus and a process for cutting paper is disclosed. In the case where sandblasting devices are used, a focused pressurized stream of sand or like particles to blast away wood or other signboard constituting material, can be controllably moved simultaneously in at least three programmable axes under the controlled guidance of the CAM system hereof. However, in both the laser cutting and sandblasting processes described hereinabove, controlling the cutting depth of the laser beam in the case of the laser cutting process, and the sand stream in the case of the sandblasting process, is extremely difficult. In both cases, the post processor must take into consideration (i) the physical properties of the signboard material, and (ii) the precise energy (i.e., heat or momentum) of the cutting process utilized so that precise cutting depths can be obtained. Further modifications of the present invention herein disclosed will occur to persons skilled in the art to which the present invention pertains and all such modifications are deemed to be within the scope and spirit of the present invention defined by the appended claims. * * * * *
Sun Earth Angles

The angle between the earth-sun line and the earth's equatorial plane is known as the angle of declination. This varies with the date; and the orbital velocity of the earth traveling around the elliptic plane also varies slightly. The two angles that completely describe the sun position are the solar altitude b, measured from 0° to 90° above the horizon, and the solar azimuth f, measured from 0° to 180° from the south with positive sign eastwards and negative sign westwards. To determine these two angles from data on latitude, date and time, the following calculation is carried out. Instead of being expressed in time units, true solar time can be expressed in angular terms related to the earth's rotation as the hour angle, H, where

H = 0.25 × (number of minutes from true solar noon)

since in one minute the earth rotates 0.25°. Values a.m. are +ve and p.m. -ve. Then,

sin b = cos L cos d cos H + sin L sin d
sin f = (cos d sin H) / cos b

where
L = latitude, degrees
d = declination, degrees (northern hemisphere = +ve, southern hemisphere = -ve)

Values of b and f may be available from tables in publications, or graphical techniques for finding them are also available.
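As a quick check of the altitude formula (the latitude value here is illustrative, not from the text): at true solar noon H = 0, so cos H = 1 and the expression collapses to a single cosine:

```latex
\sin b = \cos L \cos d + \sin L \sin d = \cos(L - d)
\;\Longrightarrow\; b = 90^{\circ} - |L - d| .
% Example: L = 40^{\circ}\,\mathrm{N} at the June solstice, d = +23.45^{\circ}:
% b = 90^{\circ} - 16.55^{\circ} = 73.45^{\circ};
% and since H = 0, \sin f = 0, i.e. f = 0^{\circ} (sun due south).
```

This matches the familiar rule that the noon sun stands highest when the declination equals the latitude.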
Here's the question you clicked on:

Find an equation of the line through the given points. Write the equation in standard form. Through (2, 0) and (-7, 8)
a. 2x - 15y = -106
b. -2x + 15y = -106
c. -8x + 9y = 16
d. 8x + 9y = 16

Best Response:
Find it with the help of the standard equation of a line: the slope through the two points is -8/9, which leads to 8x + 9y = 16, choice d.
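Worked through with the point-slope form (the "standard eqn of a line" the reply points to), the computation is:

```latex
m = \frac{8 - 0}{-7 - 2} = -\frac{8}{9}, \qquad
y - 0 = -\frac{8}{9}\,(x - 2)
\;\Longrightarrow\; 9y = -8x + 16
\;\Longrightarrow\; 8x + 9y = 16 \quad (\text{choice d}).
% Check with the second point: 8(-7) + 9(8) = -56 + 72 = 16.
```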
Questions In Matrix Multiplication

Hello there i have managed to multiply matrices with this (as shown at rosetta code):
Code: Select all
(define (matrix-multiply matrix1 matrix2)
  (map
   (lambda (row)
     (apply map
            (lambda column
              (apply + (map * row column)))
            matrix2))
   matrix1))
where my map looks like this:
Code: Select all
;;map: act to a list with a procedure
;;proc ---> lst
(define (map proc lst)
  (if (null? lst)
      '()
      (cons (proc (car lst)) (map proc (cdr lst)))))
I have tested it and it runs great! But i have some questions since i'm still learning:
1) In the first procedure (line 6) we call "map * row column" but my map is like (map proc lst). In line 6 there is one more argument. So i can't understand why and how it works.
2) In the second lambda why is "column" not inside parentheses? Isn't the format of a lambda something like:
Code: Select all
((lambda (x) (+ x x)) (* 3 4))
and this returning 24? I haven't found anywhere a lambda without parentheses around its arguments.
Any kind of info should be useful since i'm a learner. Even the slightest. Thanx in advance.

Re: Questions In Matrix Multiplication

sepuku wrote:1) In the first procedure (line 6) we call "map * row column" but my map is like (map proc lst). In line 6 there is one more argument. So i can't understand why and how it works.

In general, the number of lists supplied when using 'map' needs to match the number of arguments expected by the function. For example, if you were to map 'cons' then you would need to supply two lists:
Code: Select all
(map cons '(1 2 3) '(4 5 6)) ==> ((1 . 4) (2 . 5) (3 . 6))
The multiplication operator accepts any number of arguments, so you are free to pass as many lists as you want; all the cars of the lists will be multiplied, all the cadrs will be multiplied together, all the caddrs... -- producing a new list.
Code: Select all
(map * '(1 2 3) '(4 5 6) '(7 8 9)) ==> (28 80 162) ;; I.E., ((* 1 4 7) (* 2 5 8) (* 3 6 9))

sepuku wrote:2) In the second lambda why is "column" not inside parentheses? Isn't the format of a lambda something like:
Code: Select all
((lambda (x) (+ x x)) (* 3 4))
and this returning 24? I haven't found anywhere a lambda without parentheses around its arguments.

That is a special notation for passing a variable number of arguments to a procedure (it is equivalent to "(define (foo . x) ..." -- if that form is familiar to you). For example
Code: Select all
(define foo (lambda x (display x)))
(foo 1 2 3) ==> (1 2 3)
(foo 1) ==> (1)
(foo) ==> () ;; i.e. the empty list
(define matrix1 '((11 12) (21 22)))
(foo matrix1) ==> (((11 12) (21 22)))

Re: Questions In Matrix Multiplication

First of all thank you for your reply. Thank you for explaining the map procedure. So far i was not aware that the number of arguments of map depends on the procedure we map (although i should have). Now this gives birth to a new question: The multiplication operator accepts any number of arguments -- but to make a multiplication we need at least 2 arguments, right? When we (map * row) what multiplication is done? If i understand well the substitutions, this is like saying (map * matrix2). Is that correct? I can not understand how the substitution works in all the program... maybe it's because of the lambdas, for example when we have this:
Code: Select all
(matrix-multiply '((1 2) (3 4)) '((-3 -8 3) (-2 1 4)))
We want the first element of the resulting matrix to be (+ (* 1 -3) (* 2 -2)) and this happens in
Code: Select all
(apply + (map * row column))
But how does it understand what "row" and "column" are every time?

Re: Questions In Matrix Multiplication

Ahhh wrong!!! It's not (map * row), it's (map * row column). But is my substitution correct?

Re: Questions In Matrix Multiplication

sepuku wrote:to make a multiplication we need at least 2 arguments right?

Technically, no.
If passed no arguments, the '*' function in Scheme returns "1"; if passed one argument, it returns that argument.

sepuku wrote:When we (map * row) what multiplication is done?

Perhaps this would be made clear by considering the following implementation of a multiply procedure that accepts a variable number of arguments, but relies upon a 'mult2' function which requires exactly two arguments (ignoring that Scheme already provides this functionality). When you map 'my-*' to a single list of numbers, each of those numbers gets multiplied by "1" to form a new list. In other words, nothing really happens except that you've created a new copy of the original list. The behavior of the '*' function is identical (we could have named our function '*' but then we'd need to ensure that mult2 didn't reference it as such).

sepuku wrote:If i understand well the substitutions, this is like saying (map * matrix2). Is that correct? I can not understand how the substitution works in all the program... maybe it's because of the lambdas, for example when we have this:
Code: Select all
(matrix-multiply '((1 2) (3 4)) '((-3 -8 3) (-2 1 4)))
We want the first element of the resulting matrix to be (+ (* 1 -3) (* 2 -2)) and this happens in
Code: Select all
(apply + (map * row column))
But how does it understand what "row" and "column" are every time?

Actually, I think the secret to understanding this lies in how "(apply map (lambda columns" works. The "apply +" is not evaluated until after both lambdas are created. Consider the evaluation of "(apply map list '((-3 -8 3) (-2 1 4)))": a list is created from the cars of each of the two lists, from each of the cadrs, and from each of the caddrs. The actual breakdown of this is a little bit involved, but fairly straightforward; I will leave that for you to investigate on your own. Our main interest is how this transforms when our internal lambda is substituted for the 'list' function.
If we use the "identity" lambda function (i.e., a lambda that when invoked returns its argument), we get the same result as for the above code, but now we have a way to "do stuff" with the arguments other than just inserting them in a list. So our (inner) lambda has been passed each of those three lists (of two numbers) and is tasked with the job of multiplying them, element-by-element, with one of the 'row's of 'matrix1' (in particular, the row that is being mapped as an argument to the outer lambda).

Re: Questions In Matrix Multiplication

First of all i'm really sorry for the late answer. But i study and write code in my free time and i have not had much lately. Secondly i want to thank you for the posted examples. Although it should be obvious, before your post it wasn't. Once again thank you for your time.
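The 'my-*'/'mult2' implementation described in the second reply appears to have been dropped when the thread was archived. Here is a minimal sketch of what it plausibly contained: the names my-* and mult2 come from the reply, but the body of mult2 is an assumption (a plain two-argument wrapper over the primitive *, since its definition is not given):

```scheme
;; Two-argument multiply; stands in for the primitive '*'.
;; (Its actual body is not given in the thread -- assumed here.)
(define (mult2 a b) (* a b))

;; Variadic multiply built on mult2.  With no arguments it
;; returns 1; with one argument it returns that argument
;; (i.e., the argument "multiplied by 1"), as the reply says.
(define (my-* . args)
  (if (null? args)
      1
      (mult2 (car args) (apply my-* (cdr args)))))

;; (my-*)       ==> 1
;; (my-* 5)     ==> 5
;; (my-* 2 3 4) ==> 24
```

Mapping my-* over a single list then just copies the list, element by element -- which is exactly the point the reply makes about what (map * row) does.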
Question from a test i had today about group proof
February 3rd 2010, 04:13 PM

Okay here is the question, and i'll give the answer i put; i don't know if it's sufficient or if there was another way.
Given G is a finite group whose only subgroups are {e} and G itself.
A) Show G is cyclic
There are 2 choices for G: either G = {e}, and G is cyclic, or there are more elements in G. Since the only subgroups are the ones mentioned, any element other than e must generate the entire group (or else it would generate another subgroup), so there is a generator and G is cyclic.
B) Show G is of prime order
G is cyclic, so G is isomorphic to Zn; then we know G has a subgroup of each order dividing the order of G. There are only those 2 subgroups, so the order of G is divisible only by 1 and itself, so n is prime.

February 3rd 2010, 05:37 PM

If $G=\{e\}$ we are done. Otherwise, let $g\in G$, $g\neq e$; then clearly $\left\langle g\right\rangle\leqslant G$ and since it's non-trivial it must be improper. Thus, $\left\langle g\right\rangle =G$

B) Show G is of prime order
G is cyclic, so G is isomorphic to Zn; then we know G has a subgroup of each order dividing the order of G. There are only those 2 subgroups, so the order of G is divisible only by 1 and itself, so n is prime.

That's good!
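The divisor step in part B can be written out explicitly (assuming $G \neq \{e\}$, since the trivial group has order 1, which is not prime):

```latex
G \cong \mathbb{Z}_n .\quad
\text{For every divisor } d \mid n,\ \text{the element } n/d
\text{ generates a subgroup with }
\bigl|\langle\, n/d \,\rangle\bigr| = d .
% Since the only subgroups are \{e\} and G, the only divisors
% of n are 1 and n, i.e. n is prime.
```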
FOM: unexplained provocative statements
john.kadvany@us.pwcglobal.com
Fri Mar 5 19:13:50 EST 1999

Following are my (John Kadvany's) replies to Harvey Friedman's posting FOM: unexplained provocative statements 03/04/99 06:58:39. The text is structured as: Harvey's selection of something I wrote, followed by Harvey's question marked HF, followed by my current response marked JK. Harvey is rightly concerned that my earlier responses are not self-contained, but I was responding to a reading of my paper, and I cannot repeat the paper. I'm doing my best here to balance effort and accuracy in the world of email sound bites.

>1. My take on pomo [postmodernism] is that the debates thoroughly confuse
>normative
>and descriptive ideas.

HF: Give an example of such confusions.

JK: Read just about any heated discussion on the Sokal hoax, or accounts in the New York Times. We can distinguish between describing certain areas of knowledge as having a postmodern character from the normative endorsement of that condition as a desirable end. The Marxist Fredric Jameson developed lots of the ideas about what postmodernism is, as a description of contemporary consumer culture, but is normatively opposed to much of it, including its ahistoricism, meaning the absence of a critical and positive relationship to the past and how we see ourselves as moving forward as a society; I agree with that basic division, and I think much of the descriptions of contemporary culture are valid. His book is Postmodernism: The Cultural Logic of Late Capitalism. But note that it is not even so obvious that disunified science is a bad thing; see Peter Galison's The Disunity of Science for a substantive discussion.

>There is also a lot of bad writing and hype
>associated with the subject, but that's true of lots of intellectual

HF: Give examples of bad writing and hype.
JK: For bad writing, see the writings of Judith Butler, recently discussed by Martha Nussbaum (U Chicago) in The New Republic. For hype see Sokal after his hoax and others who fan the reactionary flames against the perceived threats of postmodernism.

>Pomo also has many senses and uses, like marxism, historicism,
>and other isms. My descriptive use is that lots of areas of
>knowledge, culture, and science are fragmented, not unified by a
>global paradigm, and without evident criteria of progress that look
>anything like traditional standards, or any standards worth embracing.

HF: High level f.o.m. *is* so unified and has evident criteria of progress by traditional standards.

JK: Please then identify the criteria by which people will abandon work on any number of incompatible set theories and settle down with just one, for example. Maybe that is an out-moded view of fom, but I think it is common enough to justify as a starting point. I apologize for not knowing much about recent developments and a more nuanced view.

>This should be an uncontroversial observation/definition.

HF: Is my previous statement noncontroversial?

JK: It probably is controversial; I admit I don't know enough to pursue it. This is a good issue for future discussion, also related to whether Godel's program of looking for low-level consequences of set-theoretic axioms needs to be revisited and improved upon.

>In my Godel
>paper I wanted to provoke the reader by the descriptive claim that
>foundational work in logic now has this character: we have many
>foundations, and the whole problem of "foundations" is very much a
>mathematical and no longer a philosophical problem of great weight.

HF: By many foundations, what do you mean? What are these many foundations? You could be referring to any number of kinds of things here. The readers of FOM cannot be expected to tell which. There are many ways to look at "foundations" so that it is very much a philosophical problem of great weight. What do you mean by the "problem of foundations?"
What do you mean by the "problem of foundations?" JK: I mean the basic idea of multiple incompatible set theories for reconstructing most of mathematics. This perhaps is a bad start but it is also fairly common. It could be more nuanced. >Two questions arise: what general aspects of mathematical theory may >underly this condition? Second, for those who don"t like a normative >pomo slant, which here would mean just reveling in every kind of >foundational topic you can pick, regardless of some kind of progress >or contradictions with the next guy"s "foundations", what kind of >"antitdote" is possible, but without invoking a traditional dogmatic >and naive search for foundations? My answer, in short, is an >historical understanding of mathematical theorem-proving and >concept-formation. Here"s how I get to that position. HF: This paragraph makes absolutely no sense to me, and I suspect, to anyone on the FOM. Please explain. JK: I?m trying to make a lengthy discussion accessible through shorthands; this is necessary for this type of discussion. Beside, what you quote is an introductory paragraph, which was then followed by the promised explanation, and implicit in what follows below. >2. The answer to the descriptive condition of multiple, competing >foundations is that foundational practices are characterized in many >ways by the methods of ancient skeptics, who e.g., can be credited >with the discovery of the informal idea of undecidability in what is >called isostheneia. HF: What multiple competing foundations are you referring to? Who calls JK: Again, just the old idea of incompatible, typically incomplete theories, like incompatable extensions of ZF. Isostheneia is the old Greek term, used by the skeptics, for what is an undecidable statement relative to some theory in modern times. Undecidablility relative to a theory is a great conceptual discovery of the ancient skeptics. 
>As described in the paper, the heuristic
>structure of Godel's proofs can be seen to implicitly make use of
>formalized versions of several key skeptical "tropes," as they are
>called, and which you summarized.

HF: What are these "skeptical "tropes""?

JK: Steve summarized some of these. Some of this is explained below.

>That is again just a description of
>the logic of Godel's theorems; I think it might qualify as an example
>of what Kreisel called "informal rigor," the translation of informal
>philosophical arguments into precise mathematical problems.

HF: What "translation of informal philosophical arguments into precise mathematical problems" are you referring to?

JK: Primarily skeptical arguments for isostheneia/undecidability, but also including other informal devices which get formal treatment in Godel's proofs. Again, this is getting detailed and is the topic of the paper.

>Vis a vis
>Pyrrhonism, it should be recognized that it is one of the major,
>major influences in the development of modern science, as discussed in
>Richard Popkin's classic The History of Scepticism; this is standard
>history of science and ideas, not a minor offshoot.

HF: Explain Pyrrhonism.

JK: Pyrrhonism is a set of general methodological ideas for dealing with undecidability and the criticism of so-called dogmatically asserted claims, like certain foundations in the mind, physical world, senses, or logic, and coherently defusing such claims. Godel's theorems provide a paradigm case; that's what I try to explain in the paper. This is a bit like trying to explain marxism or Aristotle or a difficult proof or whatever; it's just too much for an email. I suggest reading Popkin or Burnyeat. If you don't like these books I will buy them from you second-hand.

>In the 20th
>century Pierre Duhem was very influenced by Pyrrhonism; so were Paul
>Feyerabend and Imre Lakatos.
>By the way, this simultaneous
>characterization of postmodern "chaos," Godelian method, and
>skepticism shows that the epistemic structure of pomo is mostly just a
>version of skepticism; no big new ideas, and its "effects" are
>predictable; the conceptual analogy to Godel is made precise and
>completely explained.

HF: What conceptual analogy? Also, I rarely see anything philosophical "completely explained."

JK: The idea here is that Godel's proofs have long been taken up to show all kinds of speculative ideas about knowledge, with some justification but lots of confusion. Once you see the skeptical structure underlying the heuristics of the proofs, then it is clear where people get some of these ideas and how to clarify them. I still think I've pretty much explained what is going on here. There's also no basis for popular ideas about Godel's proofs and the limitations or capabilities of the human mind. You can argue for that kind of thing, but then your argument would probably apply just as well to ancient skepticism, which seems odd.

>3. Now, assume for the sake of argument that Godel's proofs do have
>the skeptical heuristic structure I describe.

HF: What skeptical heuristic structure?

JK: Key heuristics of Godel's proofs (construction of undecidable sentences, development of undecidability within a number system like PM or PA without appeal to an external standard of truth, proof of the second incompleteness theorem by reflecting on the construction of the first theorem, etc.) turn out to be precise formal equivalents of several of the most important Pyrrhonian tropes, or methodological techniques for destroying claims to dogmatic truth, with that position being filled in by Principia Mathematica, ZF, or a Hilbert-like stand-in or whatever. I really would like people to refute this analysis if it is incorrect; it seems almost obvious to me once understood.
>Once you see how the different skeptical tropes are applied (crudely:
>to move in and out of various "foundations," seeing where they lead,
>but then criticizing their "dogmatic" status; the details go much
>further), it is easy to see the fragmentation of mathematical
>foundational studies as a kind of skeptical practice, and regardless
>of the intentions of the participants, of course.

HF: What "fragmentation of mathematical foundational studies"?

JK: Here you can talk to Steve Simpson on fragmentation; he seemed to recognize something. The skeptical practice is the ability we now have to work among a variety of incompatible foundations, with foundation being taken in the old Principia sense, but without having to really decide among them. This is the so-called postmodern condition. There is lots of work being done on constructible sets, versions of the axiom of determinacy, large cardinals and so on: What would eventually decide among all these in some philosophical-foundational sense, and what would that mean? Mathematically there may be much interest in pursuing all or various subsets, but that's not the issue.

>Again, this is just descriptive. Godel's "legacy" in my title,
>therefore, is this broad skeptical practice of creating foundations
>and taking them apart, moving from one to another, and never settling
>down.

HF: Who is "creating foundations and taking them apart, moving from one to another, and never settling down" and where?

JK: See previous. There is also a skeptical idea of ataraxia, which refers to the goal of people calming down and not getting so worked up about foundational issues. The Pyrrhonists were actually therapist-types, sort of consultants, who worked in the medical community in Alexandria where there were heated debates about theoretical versus empirical medicine. FOM could use a bit of that, in my humble opinion.

>"Skeptic" meant "searcher" in Greek, and this describes that
>postmodern condition of knowledge.
We
>can argue about how good a description that is for fom, how far it
>goes, and so on, but let's just accept it for the moment. I think it
>is true enough.

HF: Do you mean "skeptic" is a description of fom? Explain.

JK: Yes, fom practice is skeptical in this way. It's not the practice of creating a single theorem, but the pattern of mathematical interest and fragmented progress across the board, and across time, as reflected in publications. Godel's methodological legacy included techniques for generating a kind of implicit skeptical practice in the development of mathematical theories, and this has generated a kind of postmodern condition of knowledge. Part of this is to take the perspective of stepping back and looking at mathematics as a longer-term process, not a bunch of static results, though the latter form an essential part of mathematical content also.

>4. What's the problem with pomo as a normative perspective? Well,
>you just get this mess in fom. What's the point of it all? It's
>nihilistic, it appears to have no meaning or purpose, it's not making
>progress, etc. It's up to the individual to decide how bad a problem
>this really is in fom, and that's a good side-debate on its own,
>related to the more general question of whether much of contemporary
>mathematics has just become hugely irrelevant to outsiders, even
>other mathematicians, but also many scientists. Again, I want to
>provoke people into thinking about this situation via the "pomo"
>epithet.

HF: Why don't you explain what pomo is in simple clear terms?

JK: I did in an earlier email. Ask Steve Simpson, who seems to know, or ask Sokal, who is so worked up over it. It's not that deep an idea and is expressed in several of the above answers.

>5. Then what is a response for fom in particular?
>In my paper, I wanted to show that Godel's theorems, when you look at
>them closely and non-superficially, don't just "give us"
>incompleteness and the unprovability of inconsistency, and skeptical
>method: there are important choices which have to be made to get the
>theorems to "work" as we want, namely the correct formulation of the
>formalized consistency statement, and identification of the HBLob
>conditions.

HF: What important choices? What substantive issues do you see here?

JK: Choices are the selection of the Hilbert-Bernays-Lob conditions as characterizing the desirable properties of a proof predicate. This to me is one of the most interesting and important developments in all 20th century

>little history I provide, from Godel to Rosser to Lob to Kreisel and
>Feferman and then to Kripke and Solovay is just to show that one of
>the greatest pieces of formal logic is itself a piece of informal
>mathematics which had to go through its own historical development.

HF: Do you mean "informal" in the trivial sense that any really interesting theorem is?

JK: No, it means that the theorem-proving process, and the theorem's conceptual content, contains important features which are not at all characterized by a formalism, or at least a single formalism. This is getting detailed and I refer you to Lakatos' Proofs and Refutations. The point is when such informal features are missed and divert attention from new ideas which may be outside the bounds of contemporary mathematics, but which could be usefully incorporated.

>The key word is "historical." Postmodernism, as many marxists argued,
>is normatively unattractive because it picks and chooses from the
>past, like much modern popular culture, without any principled view
>of where that is taking us into the future. The normatively
>unattractive aspect of postmodern is its ahistoricism, and Godel's
>theorems almost seem to "provide" it; my little history is an
>argument against that.
HF: This paragraph makes no sense to me.

JK: Many paragraphs in mathematical papers make no sense to me until I read them several times, discuss them with my friends, or follow up on the references indicated in the paper. Also, you can say instead "Please explain this paragraph with concrete examples."

>6. Hence the dilemma I try to force on the reader, and those
>interested in fom is: Take your pick, either the "chaos" of
>postmodernism induced by skeptical practices implicit in the Godelian
>metamathematical paradigm, OR mathematical historicism (a la Lakatos
>in Proofs and Refutations). I opt for the latter, myself. There is
>no "foundation" in the classical dogmatic sense, but a historical
>view of the problem of foundations in the history of mathematics,
>just like we have for algebra, geometry, probability, etc.

HF: What is "the Godelian metamathematical paradigm" and what is "mathematical historicism" and what is "foundation" in the classical dogmatic sense and what is "the problem of foundations in the history of mathematics" and what is "the problem of foundations for algebra, geometry, probability, etc."?

HF: "the Godelian metamathematical paradigm":

JK: Lots of techniques and approaches for coding up large chunks of math and specific problems as (meta)mathematical problems of various types. Most of this depends on routine applications of metamathematical ideas and limits due to unprovability or undecidability created by Godel, at least to a first approximation.

HF: What is "mathematical historicism":

JK: See Proofs and Refutations. Mathematics is a thoroughly historical subject, and requires philosophical approaches which are historically oriented, all in contrast to most ideas of a priori knowledge, and most philosophy of mathematics.

HF: what is "foundation" in the classical dogmatic sense:

JK: You expect Principia Mathematica or whatever to provide some kind of certain basis or framework which justifies mathematical knowledge in a very strong way.
HF: what is "the problem of foundations in the history of mathematics":

JK: The emergence of foundational issues through the 19th century and mostly after 1900, culminating perhaps in the Hilbert programme, but not

HF: what is "the problem of foundations for algebra, geometry, probability, etc."?:

JK: Geometry and probability used to be much more informal, were the topics of great philosophical debate and use, such as Kant's use of geometry, but have now become largely normalized mathematical subjects. There are still important philosophical questions, but not so much the old ones, and the concepts have changed quite a bit through this process of mathematicization. I don't think this is sufficiently recognized vis a vis logic and foundations and this perspective is hindering progress.

>7. Now, you say that [the following internal quote is Steve Simpson:]
>"In my opinion, Kadvany is barking up the wrong tree here, because
>there is no serious challenge to the naturalness of the standard
>proof predicate, and if any doubt remains, the Hilbert-Bernays
>derivability conditions take care of it." In my paper I don't claim
>the HBLob conditions are wrong; the idea is that they are the result
>of the same kind of trial and error learning as found, say, in the
>development of theories of the integral, or trigonometric series, or
>zillions of other mathematical concepts and theorems, again in the
>spirit of Lakatos' Proofs and Refutations.

HF: What is the import of your invoking Lakatos in this?

JK: Lakatos is all about the heuristic-historical development of mathematical proofs. See Hersh and Davis, The Mathematical Experience, for an introduction.

>I think that's an accurate description of the history, even if it is
>not such a big deal mathematically; I refer to Feferman's work, and
>Kreisel's comments as justification that at least some people thought
>it was worth being careful about.

HF: What work of Feferman are you referring to? He has written many papers.

JK: Sorry.
'Arithmetization of metamathematics in a general setting,' Fundamenta Math. 195?.

>My personal belief is that the intensional structure represented by
>the HBLob conditions has mathematical content yet to be discovered
>and exploited.

HF: What do you mean by intensional structure?

JK: The HBLob conditions are characterized by modal-logical conditions formalized using Kripke semantics, and modal logic is all about formalized intensional relations. Intensional (Feferman noted this in his Fund. Math. paper, but it is general usage) means "depends on the description used to define the set or predicate," versus extensional, meaning does not so depend. So we have to get the description of the proof predicate correct, not just its extension, to get the incompleteness theorems to work as we want. This is very unusual for mathematical practice, where nearly all reasoning is extensional.

>I criticize Boolos' exceptionally fine Unprovability of Consistency
>for just his way of imposing on the reader the "naturalness" of the
>canonical proof predicate, when he should do a better job on the
>historical evolution.

HF: Was he writing history?

JK: He should have been, a bit. The book suffers in this respect. I'm not saying write 20 pages per chapter. I'm saying probably 10 pages for the whole book, and more importantly, to provide the right heuristic discussion motivating the definitions and proofs. "Naturalness" is a sign that a difficult and important series of steps has been omitted. Also, does anybody know why this book is out of print? What can be done about that?

>In a small way, this is a kind of antihistoricism which I believe is
>bad for mathematical practice.

HF: Why?

JK: It buries the heuristic structure of the theory, and prevents progress and hinders learning. See Proofs and Refutations, appendix 2.
This also has a long history in the Greek tradition of analysis-synthesis, with Descartes, John Wallis and others complaining about discovery methods of the ancients being covered up.

>This kind of antipathy toward getting the history of theorems and
>proofs right is a big point of Lakatos' Proofs and Refutations; it is
>interesting to see it happening in mathematical logic itself, which
>just reinforces Lakatos' claim that logic is just more informal
>mathematics.

HF: Why should Boolos - or anyone else - write about all aspects of things they are writing about some aspects of things? What do you think of the patently absurd and/or meaningless claim "that logic is just more informal mathematics"? In fact, what does this mean?

JK: Just because his book was imperfect in important ways, that does not mean writing a whole other book. Boolos and all mathematicians have the responsibility to make their public work of maximum social and pedagogical value, even if only to the mathematical community, and to be conscious about the implications of their approach. I think lots of mathematicians today recognize the need to revise the way math books are written, at least compared to say twenty years ago. On logic being just more informal math: this is mentioned above. Informality of logic can't be that obviously absurd, since the default assumption of many philosophers is to characterize mathematics and even all of human language only through some type of formal system. You miss much of great importance by that sometimes scientistic assumption. See Proofs and Refutations. To me the interesting issue here is the relationship between informal mathematics and its formal representations; I think the intensionality of Godel's second theorem is at just this boundary, and bears closer attention for that reason to understand better the relationship between extensional and intensional reasoning.

>8.
>You can seek for a foundation for math outside of mathematics, as you
>suggest, and I can't prove that's impossible to find, but there sure
>are a heck of a lot of arguments about why about a zillion approaches
>will not work.

HF: What do you mean by "outside of mathematics"? Depending on what you mean, we obviously have it, or it's impossible to have.

JK: Like in the physical world, or the human mind, or the senses, or some metaphysical doctrine. We do not obviously have or not have such foundations, and they are not logically impossible, if that is what you mean.

John Kadvany
Applied Decision Analysis, Inc.
Menlo Park, CA 94025
Q: How do you solve this circuit with a diode? The circuit is in series, made up of a 5 V battery; its positive terminal is connected to a 10 ohm resistor; the resistor is connected to the negative side of the diode; the diode is then connected to the positive side of an 8 V battery, which is connected back to the negative terminal of the 5 V battery. Are the results the same with the ideal model? How about with the 0.7 V model?

A1: Please give the circuit.

A2: I guess you refer to the n-zone (cathode) and p-zone (anode) of the diode when you say negative side and positive side. When in a circuit, diodes have electrodes named anode and cathode. A description like "the positive and negative sides" of a diode is misleading, as the sign of the voltage at its terminals is determined by the circuit itself. The circuit you describe can be solved assuming it is a Si diode and we can use the following model: \[I_D=\frac{ 8-5-V_D }{ 10 } \approx \frac{ 3-0.7 }{ 10 }=0.23\ \mathrm{A}\] A more accurate calculation would be: \[10 \cdot I_D+V_D=3\rightarrow 10 \cdot I_S(e^{\frac{ V_D }{ V_T }}-1)+V_D=3\] which is a bit more complex to solve and requires that you know I_S. This one can be solved with numerical methods or graphical methods.
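Not part of the original thread, but the three diode models mentioned in the answer can be checked numerically. In this sketch the saturation current `I_S` and thermal voltage `V_T` used for the Shockley model are assumed typical silicon values, not given in the question:

```python
import math

# Series loop: the 8 V source drives current through the diode and a
# 10-ohm resistor against the 5 V source, so KVL gives
# 10*I_D + V_D = 8 - 5 = 3.
R = 10.0           # ohms
V_NET = 8.0 - 5.0  # net driving voltage around the loop, volts

# Ideal diode model: V_D = 0
i_ideal = V_NET / R            # 0.30 A

# Constant-drop model: V_D = 0.7 V
i_const = (V_NET - 0.7) / R    # 0.23 A

# Shockley model: solve 10*I_S*(exp(V_D/V_T) - 1) + V_D = 3 for V_D
# by bisection. I_S and V_T are assumed typical Si values.
I_S = 1e-12    # saturation current, amps (assumed)
V_T = 0.025    # thermal voltage, volts (assumed)

def loop_residual(v_d):
    """KVL residual; monotonically increasing in v_d."""
    return R * I_S * (math.exp(v_d / V_T) - 1.0) + v_d - V_NET

lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2.0
    if loop_residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid
v_d = (lo + hi) / 2.0
i_shockley = (V_NET - v_d) / R

print(f"ideal: {i_ideal:.3f} A, 0.7 V model: {i_const:.3f} A, "
      f"Shockley: {i_shockley:.3f} A (V_D = {v_d:.3f} V)")
```

All three models agree the current is a couple hundred milliamps; the Shockley solution lands between the ideal and constant-drop answers.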
Patent US7284185 - Puncturing/depuncturing using compressed differential puncturing pattern The present invention relates generally to puncture patterns for forward error correcting codes and, more particularly, to a method of puncturing and de-puncturing convolutional codes using a compressed differential puncturing pattern. The ultimate purpose of a digital communication system is to transmit information from an information source to a destination over a communication channel. In many types of communication channels, such as radio channels, the inherent noise causes bit errors to occur during transmission. In order to reduce bit errors, digital communication systems typically employ both error detecting and error correcting codes. These error control codes introduce controlled redundancy into the information transmitted over the communication channel, which can be used at the destination to detect and/or correct errors in the received signal. Convolutional codes are one type of forward error correcting codes used in digital communication systems. The code rate k/n of a convolutional code indicates the number of output bits n produced by the encoder for each k input bits. In general, the complexity of the encoder and decoder increases as the number of input bits k increases. Consequently, convolutional encoders with code rates 1/n are desirable from a complexity point of view. However, if k is constrained to “1,” the highest code rate that can be obtained is 1/2. Puncturing is a technique of constructing new higher rate convolutional codes from an original lower rate convolutional code. For a given low rate convolutional code, it is possible to obtain a plurality of higher rate codes by selectively puncturing coded bits output from the encoder. The punctured bits are not transmitted. The higher rate convolutional codes created by puncturing can be decoded using essentially the same trellis as the original code from which the punctured code is derived. 
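As an illustration of how puncturing raises the code rate (a sketch of ours, not from the patent; the keep-pattern shown is a common textbook choice, not one specified here): puncturing a rate-1/2 code with a repeating keep-pattern of 1, 1, 1, 0 transmits three of every four coded bits, so every 2 input bits yield 3 transmitted bits, a rate-2/3 code.

```python
from fractions import Fraction

def punctured_rate(k, n, keep_pattern):
    """Effective code rate of a rate-k/n code after puncturing with a
    repeating keep pattern (1 = transmit, 0 = puncture)."""
    kept = sum(keep_pattern)
    # Per pattern period: len(keep_pattern) coded bits come from
    # len(keep_pattern) * k / n input bits; 'kept' of them are sent.
    input_bits = Fraction(len(keep_pattern) * k, n)
    return input_bits / kept

print(punctured_rate(1, 2, [1, 1, 1, 0]))  # rate 1/2 punctured to 2/3
```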
Further, a single encoder/decoder can be used to provide a range of code rates so that the code rate in use can be varied, depending upon channel conditions and other factors.

To implement puncturing, a puncture pattern is stored in memory and used to puncture coded bits output by the encoder. One technique used in the prior art is to store an index for each bit to be transmitted. The index is a numerical value that identifies the bit position for each transmitted bit. Depending upon the length of the codeword to be transmitted, this technique requires a very significant amount of storage for the puncturing patterns. For example, an adaptive full rate 8-PSK wideband speech frame contains 1,467 bits after coding but before puncturing. According to the GSM specifications, 1,344 bits of the original 1,467 coded bits are transmitted. Storing a 16-bit index for each transmitted bit would require storage of 21,504 bits.

Another technique for storing puncturing patterns is to store a bit map containing 1 bit for each coded bit output from the encoder. Each bit in the bit map corresponds to a single coded output bit. A bit value of "0" in the bit map indicates that the corresponding bit output by the encoder is punctured, while a bit value of "1" indicates that the corresponding bit is to be transmitted, or vice versa. While this technique reduces the memory requirements for storing puncturing patterns, the processor must compare every coded bit output by the encoder with the bit map to see whether the bit is to be transmitted or punctured. The code needed to perform these comparisons consumes processor cycles, increasing the demand on the signal processor.

Because some cellular communication systems may use more than 100 logical channels with different puncturing patterns, there is a need for techniques that further reduce the memory requirements for storing the puncturing patterns and that reduce the number of processor cycles needed for carrying out puncturing/depuncturing operations.
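The storage figures quoted above follow from simple arithmetic (a quick check, using the frame sizes given in the text):

```python
coded_bits = 1467        # codeword length before puncturing
transmitted_bits = 1344  # bits actually sent per the GSM spec

# Index-list approach: one 16-bit index per transmitted bit.
index_storage_bits = transmitted_bits * 16
print(index_storage_bits)  # 21504

# Bitmap approach: one bit per coded bit.
bitmap_storage_bits = coded_bits
print(bitmap_storage_bits)  # 1467
```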
The present invention is a channel coder/decoder that performs puncturing/depuncturing of forward error correcting codes. The channel coder/decoder uses differential puncture patterns that can be compressed and stored in memory. The differential puncture patterns result in relatively low complexity puncturing and depuncturing algorithms that conserve processing power. The differential puncture patterns are derived from an absolute bit index sequence that indicates the bits of a codeword transmitted over a communication channel from a transmitter to a receiver. At the transmitter, the differential puncture pattern is used to select the bits of the codeword to be transmitted. At the receiver, the differential puncture pattern is used to insert the received bits into the correct bit positions in a depunctured codeword. The differential puncture pattern comprises a sequence of successive differential indexes or offsets corresponding to the differences between successive bit indexes in the bit index sequence. The differential puncture pattern can be compressed to reduce memory requirements for storage of the differential puncture pattern. To compress the differential puncture patterns, repeating sub-patterns in the differential puncture pattern are identified and stored along with the number of sub-patterns, the length of the sub-patterns, the number of repetitions of the sub-pattern, and the starting index. FIG. 1 is a block diagram of a mobile communication device in which the channel coder/decoder of the present invention could be used. FIG. 2 is a block diagram of an exemplary channel coder according to the present invention. FIG. 3 is a block diagram of an exemplary channel decoder according to the present invention. FIG. 4 is a block diagram of an apparatus for generating compressed differential puncture patterns according to one embodiment of the present invention. FIG. 
5 is a flow chart illustrating a method of generating compressed differential puncture patterns according to one embodiment of the present invention. FIG. 6 is a block diagram of an exemplary puncturer for a channel coder according to one embodiment of the present invention. FIG. 7 is a flow chart of an exemplary puncturing method using differential puncture patterns according to one embodiment of the present invention. FIG. 8 is a block diagram of an exemplary de-puncturer for a channel decoder according to one embodiment of the present invention. FIG. 9 is a flow chart of an exemplary de-puncturing method using differential puncture patterns according to one embodiment of the present invention.

FIG. 1 illustrates a mobile communication device generally indicated by the numeral 10. The mobile communication device 10 comprises a system controller 12 to control the overall operation of the mobile communication device 10, a memory 14 to store programs and data needed for operation, a transmitter 20 to transmit signals, and a receiver 30 to receive signals. The transmitter 20 and receiver 30 are coupled to a common antenna 18 by a duplexer 16 that permits full duplex operation. The transmitter 20 receives a source data stream from an information source, processes the source data stream to generate a transmit signal suitable for transmission over a radio channel, and modulates the transmit signal onto an RF carrier. The transmitter 20 includes a source encoder 22, a channel coder 24 and a modulator 26. The source encoder 22 removes redundancy or randomizes the source data stream to produce an information sequence that has been optimized for maximum information content. The information sequence from the source encoder 22 is passed to the channel coder 24. The channel encoder 24 introduces an element of redundancy into the information sequence supplied by the source encoder 22 to generate a coded output.
The redundancy added by the channel coder 24 serves to enhance the error correction capability of the communication system. By making use of the redundant information, a receiver 30 can detect and correct bit errors that may occur during transmission. The output of the channel coder 24 is the transmit bit sequence. The modulator 26 receives the transmit bit sequence from the channel coder 24 and generates waveforms that both suit the physical nature of the communication channel and can be efficiently transmitted over the communication channel. Typical modulation schemes used in mobile communication devices 10 include 16QAM, 8-PSK, 4-PSK, and the like. The receiver 30 receives signals transmitted from a far-end device that has been corrupted by passage through the communication channel. The function of the receiver is to reconstruct the original source data stream from the received signal. The receiver 30 includes a demodulator 32, channel decoder 34, and source decoder 36. The demodulator 32 processes the received signal and generates a received bit sequence, which may comprise hard or soft values for each received bit. If the received signal is transmitted without error through the communication channel, the received bit sequence would be identical to the transmit bit sequence at the transmitter. In actual practice, the passage of the received signal through the communication channel introduces bit errors into the received signal. The channel decoder 34 uses the redundancy added by the channel coder 24 at the transmitter 20 to detect and correct the bit errors. A measure of how well the demodulator 32 and decoder 34 perform is the frequency with which bit errors occur in the decoded sequence. As a final step, a source decoder 36 reconstructs the original signal from the information source. The difference between the reconstructed information signal and the original information signal is a measure of the distortion introduced by the communication system. FIGS. 
2 and 3 illustrate an exemplary channel coder 24 and decoder 34, respectively. The channel coder 24 (FIG. 2) includes an encoder 40 to encode an information sequence from the source encoder 22 and a puncturer 42 to puncture coded bits output by the encoder 40. The encoder 40 may, for example, comprise a 1/2 rate convolutional encoder that produces two coded bits for each input bit. The code bits output by the encoder 40 form a codeword that is input to the puncturer 42. The puncturer 42 selects the coded bits to be transmitted based on a puncture pattern stored in memory 14. The channel decoder 34 (FIG. 3) comprises a de-puncturer 60 followed by a decoder 62. The de-puncturer 60 determines the correct bit positions for the received bits and outputs a de-punctured codeword to the decoder 62. As explained in more detail below, the de-punctured codeword is not the same as the original codeword because the punctured bits are replaced with neutral values in the de-punctured codeword. The neutral values may be "0s" if the received bits are represented by +/−1, or soft values symmetrically distributed around "0." The decoder 62 may comprise an MLSE decoder (i.e., Viterbi decoder), a MAP decoder, or any other known type of decoder. According to the present invention, the puncture pattern is compressed to reduce storage requirements and decompressed to perform puncturing and de-puncturing. The compression technique generates a differential puncture pattern that is used to directly select the bits for transmission at the transmitter 20, and to determine the correct bit positions of the received bits in a de-punctured codeword at the receiver 30. At the transmitter 20, the puncturer 42 directly selects bits to be transmitted without performing operations on the punctured bits. The direct selection of the transmitted bits significantly reduces the number of instructions that must be executed by the puncturer 42.
At the receiver 30, the de-puncturer 60 determines the bit positions of the received bits in a de-punctured codeword and inserts the received bits in the determined positions. The channel coding/decoding method employed in the present invention is described below using the wideband full rate speech channel (O-TCH/WFS23.85) as an example. The desired channel coding for this channel is described in the GSM specification 45.003, Release 5. According to the GSM specification, the codeword output by the encoder 40 has 1,467 bits before puncturing. The codeword is divided into two blocks called the Pg block and the Pb block. The Pg block comprises 896 of the original 1,467 bits. The Pb block comprises 448 bits of the original 1,467 bits. A total of 1,344 bits are transmitted in the Pg and Pb blocks, so 123 bits are punctured. The bits of the Pg block are given by the bit index sequence:

2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, 18, 20 ... 1343, 1346, 1347, 1349, 1350, 1352, 1353, 1355, 1358, 1359, 1361, 1362, 1364, 1365, 1367 ...

Each bit index in the index sequence corresponds to one transmitted bit. A bit index value of "2," for example, indicates that the second bit in the original codeword is transmitted. The bit index sequence is an absolute puncture pattern since it consists of absolute bit indexes for each transmitted bit. According to the present invention, a differential puncture pattern is computed based on the bit index sequence and then compressed to reduce memory requirements. The differential puncture pattern is computed by calculating the differences between successive bit indexes in the original bit index sequence. For the bit index sequence given above, the corresponding differential puncture pattern is:

1, 2, 1, 2, 1, 2 ... 3, 1, 2, 1, 2, 1, 2, 3, 1, 2, 1, 2, 1, 2 ...

Each element of the differential puncture pattern is a differential index or offset that functions as a relative bit index.
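The differencing step described above can be sketched as follows (the function name is ours; the index values are the first few from the Pg-block example):

```python
def differential_pattern(bit_indexes):
    """Convert an absolute bit index sequence into a starting index
    plus a list of offsets between consecutive transmitted bits."""
    start = bit_indexes[0]
    offsets = [b - a for a, b in zip(bit_indexes, bit_indexes[1:])]
    return start, offsets

# First few Pg-block indexes from the example above.
indexes = [2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, 18, 20]
start, offsets = differential_pattern(indexes)
print(start)    # 2
print(offsets)  # [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
```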
The offset represents the difference between the absolute indices of two consecutive transmitted bits. The differential puncture pattern is referenced to a starting index that indicates the bit position of the first transmitted bit in the codeword. In the example given above, the starting index is "2." With knowledge of the starting index and the differential puncture pattern, the absolute bit index for each transmitted bit in a codeword can be determined. Due to the strong regularity of the puncture pattern for the Pg block, the entire differential puncture pattern for the Pg block can be represented by three differential sub-patterns: P1=[1, 2] repeated 171 times, P2=[3, 1, 2, 1, 2, 1, 2] repeated 78 times, and P3=[3, 1, 2, 1, 2, 3, 3] repeated once. After computing the differential puncture pattern, the differential puncture pattern can be compressed by identifying the sub-patterns. A compressed version of the differential puncture pattern can be created by storing the following information:

a0 - the absolute bit index value of the first code bit to be transmitted
SN - the number of sub-patterns in the differential puncture pattern
an array containing the number of repetitions for each sub-pattern
an array containing the length of each sub-pattern
an array containing the sub-patterns

The compressed differential puncture pattern requires 37 16-bit words to store in memory 14. For comparison, storing the bitmap would require 184 16-bit words. Given that some mobile communication devices 10 may use more than 100 puncture patterns, the present invention substantially reduces the memory 14 required to store the puncture patterns.
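Expanding such a compressed pattern back into absolute bit indexes is straightforward. As a sketch (the function name is ours; the Pg-block sub-patterns and repetition counts are from the text), note that the expansion reproduces exactly the 896 transmitted bits of the Pg block:

```python
def expand(a0, sub_patterns, repetitions):
    """Expand a compressed differential puncture pattern into the
    absolute bit indexes of the transmitted bits."""
    indexes = [a0]
    for pattern, reps in zip(sub_patterns, repetitions):
        for _ in range(reps):
            for offset in pattern:
                indexes.append(indexes[-1] + offset)
    return indexes

# Pg-block sub-patterns P1, P2, P3 from the example above.
subs = [[1, 2], [3, 1, 2, 1, 2, 1, 2], [3, 1, 2, 1, 2, 3, 3]]
reps = [171, 78, 1]
bits = expand(2, subs, reps)
print(len(bits))   # 896 transmitted bits in the Pg block
print(bits[:6])    # [2, 3, 5, 6, 8, 9]
```

The length check (1 + 171*2 + 78*7 + 7 = 896) confirms the sub-patterns are internally consistent with the Pg-block size.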
The apparatus 80 comprises a pattern generator 82 and a compression module 88. The pattern generator 82 includes a delay element 84 and a subtractor 86. An absolute bit index sequence indicating the absolute bit indices for transmitted bits in a codeword is input to the pattern generator 82 one bit index at a time. Delay element 84 introduces a one element delay in the bit index sequence. Subtractor 86 subtracts the previous bit index (b[n−1]) at time n−1 from the current bit index (b[n]) to compute a differential index or offset. The sequence of differential indices or offsets output from the subtractor 86 comprises the differential puncture pattern. The differential puncture pattern is input to the compression module 88, which compresses the differential puncture pattern. FIG. 5 is a flow diagram illustrating a method of generating compressed differential puncture patterns according to some embodiments of the invention indicated generally at 100. To begin generation of a compressed differential puncture pattern, the absolute bit index sequence for the puncture pattern is input to the pattern generator 82 (block 102). The pattern generator 82 computes the differential puncture pattern (block 104) as previously described. The compression module 88 then processes the differential puncture pattern to identify repeating sub-patterns (block 106). The compression module 88 determines the starting index for the differential puncture pattern (block 108), the number of sub-patterns (block 110), the length of the sub-patterns (block 112), and the number of repetitions of each sub-pattern (block 114). The compressed differential puncture pattern is then stored in memory 14 (block 116). The compressed differential puncture pattern of the present invention not only conserves memory, but also lends itself to efficient puncturing and de-puncturing techniques that significantly reduce the number of instructions or operations that must be executed. 
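The stored form (starting index plus repeated sub-patterns) can be expanded back into the absolute bit indices with a simple accumulation loop. The sketch below is my own illustration of that expansion, not the patent's implementation:

```python
def decompress(a0, sub_patterns, repetitions):
    """Expand a compressed differential puncture pattern into the absolute
    bit indices of the transmitted bits.

    a0           -- absolute bit index of the first transmitted bit
    sub_patterns -- list of sub-patterns (lists of offsets)
    repetitions  -- number of times each sub-pattern is repeated
    """
    indices = [a0]
    acc = a0
    for pattern, reps in zip(sub_patterns, repetitions):
        for _ in range(reps):
            for offset in pattern:
                acc += offset          # accumulate the next absolute index
                indices.append(acc)
    return indices

# First few transmitted-bit positions of the Pg block, using only the
# P1 = [1, 2] sub-pattern from the example above (three repetitions shown):
idx = decompress(2, [[1, 2]], [3])
# idx == [2, 3, 5, 6, 8, 9, 11]
```

Note that the result reproduces the start of the absolute bit index sequence given earlier (2, 3, 5, 6, 8, 9, 11, ...), which is the consistency check one would apply when verifying a compressed pattern.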
For puncturing, the differential puncture pattern is decompressed and the decompressed differential puncture pattern is used to sequentially select coded bits for transmission. For de-puncturing, the differential puncture pattern is used to insert the received bits in the correct positions in a de-punctured codeword. FIG. 6 illustrates an exemplary puncturer 42 in block diagram form. The puncturer 42 may be implemented in hardware, firmware, software or a combination thereof. In one exemplary embodiment, the puncturer 42 is implemented by software stored in a computer readable media and executed by a processor. As shown in FIG. 6, the puncturer 42 includes a controller 44, an accumulator 46, a selector 48 and an output buffer 50. The absolute bit index, a0, for the first output bit is used to initialize the accumulator 46. The absolute bit index is also supplied to the selector 48, which selects the first coded bit to be transmitted and writes it to the first position in the output buffer 50, which contains the transmit bit sequence. After the first coded bit is written to the output buffer 50, the differential puncture pattern is used to select the remaining coded bits to be transmitted. The differential indices or offsets of the differential puncture pattern are processed one at a time and used to select the coded bits to be transmitted (i.e., the transmit bits) using an accumulate and select technique. The differential indices or offsets are input to the accumulator 46, which maintains an accumulated bit index. The offset is added to the accumulated bit index to get a new accumulated bit index and the new accumulated bit index is output to the selector 48. The selector 48 uses the accumulated bit index output by the accumulator 46 to select the coded bits for transmission. For example, if the accumulated bit index at time t is “64,” the selector 48 selects the 64th coded bit and writes it to the next location in the output buffer. 
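The accumulate-and-select technique can be sketched as follows (a minimal illustration with 1-based bit indices, as in the example above; not the patent's actual software):

```python
def puncture(codeword, a0, offsets):
    """Accumulate-and-select puncturing: pick the transmitted bits out of
    the coded bit sequence. Bit indices are 1-based."""
    out = [codeword[a0 - 1]]           # first output bit, selected by a0
    acc = a0                           # accumulator initialized with a0
    for offset in offsets:
        acc += offset                  # new accumulated bit index
        out.append(codeword[acc - 1])  # selector picks the coded bit
    return out

# Toy codeword: letters stand in for coded bits so the selection is visible.
codeword = list("abcdefghijkl")
tx = puncture(codeword, 2, [1, 2, 1, 2, 1, 2])
# tx == ['b', 'c', 'e', 'f', 'h', 'i', 'k']
```

The selected positions (2, 3, 5, 6, 8, 9, 11) match the absolute bit index sequence of the example, so bits at positions 1, 4, 7, and 10 are the punctured ones in this toy case.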
This process is repeated, selecting coded bits for transmission one at a time until the entire transmit bit sequence is complete. FIG. 7 is a flow diagram illustrating the puncturing operation indicated generally at 200. To begin the puncturing operation, the controller 44 selects the appropriate compressed differential puncture pattern from memory 14 (block 202) and uses the absolute bit index for the first transmitted bit to initialize the accumulator 46 (block 204) and select the first output bit (block 206). After writing the first output bit to the output buffer 50, the controller 44 selects the first sub-pattern in the compressed differential puncture pattern (block 208) and then selects the first element of the first sub-pattern (block 210). Those skilled in the art will recognize that the sub-pattern elements are the differential indices or offsets that comprise the differential puncture pattern. The controller 44 sequentially inputs the sub-pattern elements to the accumulator 46, which calculates the accumulated bit index (block 212). The selector 48 uses the accumulated bit index output by the accumulator 46 to select the next coded bit for transmission (block 214). After each element in the differential puncture pattern is processed, the controller 44 determines if it was the last element in the sub-pattern (block 216). If not, the controller 44 selects the next sub-pattern element (block 218) and repeats the accumulate and select process (blocks 212, 214). When the last element in a sub-pattern is reached, the controller 44 determines whether to repeat processing of the sub-pattern (block 220). If the desired number of repetitions has not been performed, the controller 44 increments the repetition count (block 222) and iteratively processes the sub-pattern until the last repetition is complete (blocks 210-218). When the last repetition of a sub-pattern is complete, the controller 44 determines if all the sub-patterns have been processed (block 224).
If not, the controller 44 selects the next sub-pattern (block 226) for processing as previously described (blocks 210-222). Processing of the differential puncture pattern ends (block 228) when the last sub-pattern is processed. FIG. 8 illustrates a de-puncturer 60 in block diagram form according to some embodiments of the invention. The de-puncturer 60 may be implemented in hardware, firmware, software or a combination thereof. In one exemplary embodiment, the de-puncturer 60 is implemented by software stored in a computer readable media and executed by a processor. As shown in FIG. 8, the de-puncturer 60 includes a controller 64, an accumulator 66, an inserter 68 and an output buffer 70. The length of the output buffer 70 is the same as the length of the original codeword before puncturing. The elements of the output buffer 70 are initialized with neutral values, such as “0” if the received bits have values of +/−1, or soft values distributed around “0.” The absolute bit index, a0, for the first output bit is used to initialize the accumulator 66. The absolute bit index is also supplied to the inserter 68, which inserts the first received bit into the output buffer 70 at the bit position indicated by the starting index. For example, if the starting index is “2,” the inserter 68 inserts the first received bit into the second bit position in the output buffer 70. After the first received bit is written to the output buffer 70, the differential indices or offsets of the differential puncture pattern are processed one at a time and used to insert the received bits into the output buffer 70. The differential indices or offsets comprising the differential puncture pattern are input to the accumulator 66, which maintains an accumulated bit index. Each offset is added to the accumulated bit index to get a new accumulated bit index and the new accumulated bit index is output to the inserter 68.
The inserter 68 uses the accumulated bit index output by the accumulator 66 to determine the bit position in the output buffer for the next received bit. For example, if the accumulated bit index output at time t is “64,” the inserter 68 inserts the next received bit into the 64th bit position in the output buffer 70. This process is repeated, inserting received bits one at a time into the output buffer 70 until the entire received bit sequence is processed. The contents of the output buffer 70 comprise a de-punctured codeword, corresponding to the original codeword before puncturing. The de-punctured codeword, however, is not identical to the original codeword because all of the punctured bits in the original codeword are replaced with neutral soft values. FIG. 9 is a flow diagram illustrating the de-puncturing operation indicated generally by 300. To begin the de-puncturing operation, the controller 64 selects the appropriate compressed differential puncture pattern from memory 14 (block 302) and uses the absolute bit index for the first transmitted bit to initialize the accumulator 66 (block 304). The absolute bit index is also input to the inserter 68, which inserts the first received bit into the output buffer 70 at the bit position indicated by the absolute bit index (block 306). After writing the first received bit to the output buffer 70, the controller 64 selects the first sub-pattern in the compressed differential puncture pattern (block 308) and then selects the first element of the first sub-pattern (block 310). The controller 64 sequentially inputs the sub-pattern elements to the accumulator 66, which calculates the accumulated bit index (block 312). The inserter 68 uses the accumulated bit index output by the accumulator 66 to insert the next received bit into the output buffer 70 at the bit position indicated by the accumulated bit index (block 314). 
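The mirror-image accumulate-and-insert operation can be sketched the same way (again an illustrative sketch with 1-based indices and hard-decision bits; soft values would work identically):

```python
def depuncture(received, a0, offsets, codeword_length, neutral=0.0):
    """Accumulate-and-insert de-puncturing: place received bits at their
    original 1-based positions; punctured positions keep a neutral value."""
    buf = [neutral] * codeword_length   # output buffer initialized neutral
    buf[a0 - 1] = received[0]           # first received bit at the starting index
    acc = a0
    for offset, bit in zip(offsets, received[1:]):
        acc += offset                   # new accumulated bit index
        buf[acc - 1] = bit              # inserter writes at that position
    return buf

rx = depuncture([1, -1, 1, -1], a0=2, offsets=[1, 2, 1], codeword_length=8)
# rx == [0.0, 1, -1, 0.0, 1, -1, 0.0, 0.0]
```

As the text notes, the result is a codeword-length buffer in which the punctured positions carry only the neutral value, ready to be handed to the decoder.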
After each element in the differential puncture pattern is processed, the controller 64 determines if it was the last element in the sub-pattern (block 316). If not, the controller 64 selects the next sub-pattern element (block 318) and repeats the accumulate and insert process (blocks 312, 314). When the last element in a sub-pattern is reached, the controller 64 determines whether to repeat processing of the sub-pattern (block 320). If the desired number of repetitions has not been performed, the controller 64 increments the repetition count (block 322) and processes the sub-pattern again (blocks 310-318). When the last repetition of a sub-pattern is complete, the controller 64 determines if all the sub-patterns have been processed (block 324). If not, the controller 64 selects the next sub-pattern (block 326) for processing as previously described (blocks 310-322). Processing of the differential puncture pattern ends (block 328) when the last sub-pattern is processed. The present invention may, of course, be carried out in other specific ways than those herein set forth without departing from the spirit and essential characteristics of the invention. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
Developments and Trends in Infinite-Dimensional Lie Theory, 1st edition, by Neeb | 9780817647407 | Chegg.com

Developments and Trends in Infinite-Dimensional Lie Theory: This collection of invited expository articles focuses on recent developments and trends in infinite-dimensional Lie theory, which has become one of the core areas of modern mathematics. The book is divided into three parts: infinite-dimensional Lie (super-)algebras, geometry of infinite-dimensional Lie (transformation) groups, and representation theory of infinite-dimensional Lie groups.

Part (A) is mainly concerned with the structure and representation theory of infinite-dimensional Lie algebras and contains articles on the structure of direct-limit Lie algebras, extended affine Lie algebras and loop algebras, as well as representations of loop algebras and Kac-Moody superalgebras.

The articles in Part (B) examine connections between infinite-dimensional Lie theory and geometry. The topics range from infinite-dimensional groups acting on fiber bundles, corresponding characteristic classes and gerbes, to Jordan-theoretic geometries and new results on direct-limit groups.

The analytic representation theory of infinite-dimensional Lie groups is still very much underdeveloped. The articles in Part (C) develop new, promising methods based on heat kernels, multiplicity freeness, Banach Lie-Poisson spaces, and infinite-dimensional generalizations of reductive Lie groups.

Contributors: B. Allison, D. Beltiță, W. Bertram, J. Faulkner, Ph. Gille, H. Glöckner, K.-H. Neeb, E. Neher, I. Penkov, A. Pianzola, D. Pickrell, T.S. Ratiu, N.R. Scheithauer, C. Schweigert, V. Serganova, K. Styrkas, K. Waldorf, and J.A. Wolf.
Average Word Problems (with worked solutions & videos)

Algebra: Average Word Problems

There are three main types of algebra average word problems commonly encountered in school or in tests like the SAT: Average (Arithmetic Mean), Weighted Average and Average Speed.

Related Topics: More Algebra Word Problems

Average (Arithmetic Mean)

The average (arithmetic mean) uses the formula:

Average = (Sum of terms) / (Number of terms)

The formula can also be written as:

Sum of terms = Average × Number of terms

The average (arithmetic mean) of a list of 6 numbers is 20. If we remove one of the numbers, the average of the remaining numbers is 15. What is the number that was removed?

Step 1: The removed number can be obtained as the difference between the sum of the original 6 numbers and the sum of the remaining 5 numbers, i.e. sum of original 6 numbers – sum of remaining 5 numbers.

Step 2: Using the formula:
sum of original 6 numbers = 20 × 6 = 120
sum of remaining 5 numbers = 15 × 5 = 75

Step 3: Using the formula from step 1:
Number removed = sum of original 6 numbers – sum of remaining 5 numbers = 120 – 75 = 45

Answer: The number removed is 45.

The following video gives an introduction to averages and solves some algebra problems involving averages. The following videos give examples of how to solve algebra average problems.

If the average (arithmetic mean) of 8, 11, 25, and p is 15, find 8 + 11 + 25 + p and then find p.

If a = 3b = 6c, what is the average (arithmetic mean) of a, b and c in terms of a?

Weighted Average

Another type of average problem involves the weighted average, which is the average of two or more terms that do not all have the same number of members. To find the weighted term, multiply each term by its weighting factor, which is the number of times each term occurs. The formula for weighted average is:

Weighted Average = (Sum of weighted terms) / (Total number of terms)

A class of 25 students took a science test. 10 students had an average (arithmetic mean) score of 80. The other students had an average score of 60. What is the average score of the whole class?
Step 1: To get the sum of weighted terms, multiply each average by the number of students that had that average and then sum them up.
80 × 10 + 60 × 15 = 800 + 900 = 1700

Step 2: Total number of terms = Total number of students = 25

Step 3: Using the formula:
Average = 1700 / 25 = 68

Answer: The average score of the whole class is 68.

Be careful! You will get the wrong answer if you add the two average scores and divide the answer by two.

This video tutorial shows how to calculate a weighted mean (weighted average).

Fifteen accounting majors have an average grade of 90. Seven marketing majors averaged 85, and ten finance majors averaged 93. What is the weighted mean for the 32 students?

The following shows how to use weighted average to calculate the average score of a student.

Average Speed

Computation of average speed is a trickier type of average problem. Average speed uses the formula:

Average Speed = Total Distance / Total Time

John drove for 3 hours at a rate of 50 miles per hour and for 2 hours at 60 miles per hour. What was his average speed for the whole journey?

Step 1: The formula for distance is Distance = Rate × Time.
Total distance = 50 × 3 + 60 × 2 = 270

Step 2: Total time = 3 + 2 = 5

Step 3: Using the formula:
Average Speed = 270 / 5 = 54

Answer: The average speed is 54 miles per hour.

Be careful! You will get the wrong answer if you add the two speeds and divide the answer by two.

How to calculate average speed: The Hubble Space Telescope was launched 21 years ago and now needs repairing. A rocket was sent out and made good progress, traveling at an average of 3000 mph. On the way back it traveled at 1000 mph. What was the average speed for the round trip?
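The three formulas above can be checked with a few lines of Python (a quick sketch; the helper names are mine):

```python
def average(terms):
    """Arithmetic mean: sum of terms divided by number of terms."""
    return sum(terms) / len(terms)

def weighted_average(values, weights):
    """Sum of weighted terms divided by the total number of terms."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def average_speed(distances, times):
    """Total distance divided by total time -- NOT the mean of the speeds."""
    return sum(distances) / sum(times)

# The worked examples above:
assert average([8, 11, 25, 16]) == 15           # so p = 16 in that problem
print(weighted_average([80, 60], [10, 15]))     # 68.0, the class average
print(average_speed([50 * 3, 60 * 2], [3, 2]))  # 54.0 mph for John's trip
```

Note that `average_speed([3000, 1000], ...)` would need the actual distances and times; averaging the two speeds to get 2000 mph is exactly the mistake the "Be careful!" warnings describe.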
(Prepared as starting points for early NKS briefings and media discussions.)

Mathematical equations do not capture many of nature's most essential mechanisms

For more than three centuries, mathematical equations and methods such as calculus have been taken as the foundation for the exact sciences. There have been many profound successes, but a great many important and obvious phenomena in nature remain unexplained, especially ones where more complex forms or behavior are observed. A New Kind of Science builds a framework that shows why equations have had limitations, and how by going beyond them many new and essential mechanisms in nature can be captured.

Thinking in terms of programs rather than equations opens up a new kind of science

Mathematical equations correspond to particular kinds of rules. Computer programs can embody far more general rules. A New Kind of Science describes a vast array of remarkable new discoveries made by thinking in terms of programs, and how these discoveries force a rethinking of the foundations of many existing areas of science.

Even extremely simple programs can produce behavior of immense complexity

Everyday experience tends to make one think that it is difficult to get complex behavior, and that to do so requires complicated underlying rules. A crucial discovery in A New Kind of Science is that among programs this is not true, and that even some of the very simplest possible programs can produce behavior that in a fundamental sense is as complex as anything in our universe. There have been hints of related phenomena for a very long time, but without the conceptual framework of A New Kind of Science they have been largely ignored or misunderstood. The discovery now that simple programs can produce immense complexity forces a major shift in scientific intuition.
Simple programs can yield behavior startlingly like what we see in nature

How nature seems so effortlessly to produce forms so much more complex than in typical human artifacts has long been a fundamental mystery, often discussed for example in theological contexts. A New Kind of Science gives extensive evidence that the secret is just that nature uses the mechanisms of simple programs, which have never been captured in traditional science.

Simple programs can do much more than typical programs written by programmers

A New Kind of Science shows that extremely simple programs, picked for example at random, can produce behavior that is far more complex than typical programs intentionally set up by programmers. The fundamental engineering concept that one must always be able to foresee the outcome of programs one writes has prevented all but a tiny fraction of all possible programs from being considered. The idea of allowing more general programs has great potential significance for technology.

Simple computer experiments reveal a vast world of new phenomena

In their times both telescopes and microscopes revealed vast worlds that had never been seen before. Through the ideas of A New Kind of Science, computer experiments now also reveal a vast new world, in many ways more diverse and surprising even than the world seen in astronomy, or than the flora and fauna discovered by explorers of the Earth in past centuries. Many of the basic experiments in A New Kind of Science could in principle have been done by mosaic makers thousands of years ago. But it took new intuition and new tools to unlock what was needed to do the right experiments and understand their significance.

Randomness in physics can be explained by mechanisms of simple programs

Despite attempts from approaches like chaos theory, no fundamental explanation has ever been found for randomness in physical phenomena such as fluid turbulence or patterns of fracture.
A New Kind of Science presents an explanation based on simple programs that for example predicts surprising effects such as repeatable randomness.

Thermodynamic behavior can be explained by mechanisms of simple programs

The Second Law of Thermodynamics (Law of Entropy Increase) has been a foundational principle in physics for more than a century, but no satisfactory fundamental explanation for it has ever been given. Using ideas from studying simple programs, A New Kind of Science gives an explanation, and in doing so shows limitations of the Second Law.

Complexity in biology can be explained by mechanisms of simple programs

From traditional intuition one expects that the observed complexity of biological organisms must have a complex origin, presumably associated with a long process of adaptation and natural selection. A New Kind of Science shows how complex features of many biological organisms can be explained instead through the inevitable behavior of simple programs associated with their growth and development. This implies that biology need not just reflect historical accidents, and that a general study of simple programs can lead to a predictive theory of at least certain aspects of biology.

Simple programs may lay the groundwork for new insights about financial systems

The underlying mechanism that leads for example to seemingly random fluctuations in prices in markets has never been clear. Discoveries about simple programs, such as the phenomenon of intrinsic randomness generation, provide potentially important new insights on such issues.

Our whole universe may be governed by a single underlying simple program

In its recent history, physics has tried to use increasingly elaborate mathematical models to reproduce the universe.
But building on the discovery that even simple programs can yield highly complex behavior, A New Kind of Science shows that with appropriate kinds of rules, simple programs can give rise to behavior that reproduces a remarkable range of known features of our universe, leading to the bold assertion that there could be a single short program that represents a truly fundamental model of the universe, and which if run for long enough would reproduce the behavior of our universe in every detail.

Underlying space there may be a simple discrete structure

Throughout almost the entire history of science, space has been viewed as something fundamental and typically continuous. A New Kind of Science suggests that space as we perceive it is in fact not fundamental, but is instead merely the large-scale limit of an underlying discrete network of connections. Models constructed on this basis then lead to new ideas about such issues as the origins of gravity and general relativity, the true nature of elementary particles and the validity of quantum mechanics.

Time may have a fundamentally different nature from space

The standard mathematical formulation of relativity theory suggests that despite our everyday impression time should be viewed as a fourth dimension much like space. A New Kind of Science suggests however that time as we perceive it may instead emerge from an underlying process that makes it quite different from space. And through the concept of causal invariance the properties of time seem to lead almost inexorably to a whole collection of surprising results that agree with existing observations in physics, including the special and general theories of relativity, and perhaps also quantum mechanics.

Systems with exceptionally simple rules can be universal computers

Seeing the complicated circuitry of existing computers, one would think that it must take a complicated system to be able to do arbitrary computation.
But A New Kind of Science shows that this is not the case, and that in fact universal computation can be achieved even in systems with very simple underlying rules. As a specific example, it gives a proof that the so-called rule 110 cellular automaton, whose rules are almost trivial to describe, is universal, so that in principle it can be programmed to perform any computation. And as a side result, this leads to by far the simplest known universal Turing machine.

Many systems in nature are capable of universal computation

If universal computation required having a system as elaborate as a present-day computer, it would be inconceivable that typical systems in nature would show it. But the surprising discovery that even systems with very simple rules can exhibit universality implies that it should be common among systems in nature, leading to many important conclusions about a host of fundamental issues in science, mathematics and technology.

The Principle of Computational Equivalence provides a broad synthesis

Many of the discoveries in A New Kind of Science can be summarized in the bold new Principle of Computational Equivalence, which states in essence that processes that do not look simple almost always correspond to computations of exactly equivalent sophistication. This runs counter to the implicit assumption that different systems should do all sorts of different levels and types of computations. But the Principle of Computational Equivalence has the remarkable implication that instead they are almost all equivalent, leading to an almost unprecedentedly broad unification of statements about different kinds of systems in nature and elsewhere.

Many systems in nature are computationally equivalent to us as humans

We would normally assume that we as humans are capable of much more sophisticated computations than systems in nature such as turbulent fluids or collections of gravitating masses.
But the discoveries in A New Kind of Science imply that this is not the case, yielding a radically new perspective on our place in the universe.

Many systems in nature can show features like intelligence

Statements like "the weather has a mind of its own" have usually been considered not scientifically relevant. But the Principle of Computational Equivalence in A New Kind of Science shows that processes like the flow of air in the atmosphere are computationally equivalent to minds, providing a major new scientific perspective, and reopening many debates about views of nature with an animistic character.

Extraterrestrial intelligence is inevitably difficult to define and recognize

It has usually been assumed that detecting extraterrestrial signals from a sophisticated mathematical computation would provide evidence for extraterrestrial intelligence. But the discoveries in A New Kind of Science show that such computation can actually be produced by very simple underlying rules of kinds that can occur in simple physical systems with nothing like what we normally consider intelligence. The result is a new view of the character of intelligence, and a collection of ideas about the nature of purpose, and recognizing it in ultimate extrapolations of technology.

It is easy to make randomness that we cannot decode

One might have thought that we would always be able to recognize signs of the simplicity of an underlying program in any output it produces. But A New Kind of Science studies all the various common methods of perception and analysis that we use, and shows that all of them are ultimately limited to recognizing only specific forms of regularity, which may not be present in the behavior of even very simple programs, with implications for cryptography and for the foundations of fields such as statistics.

Apparent complexity in nature follows from computational equivalence

We tend to consider behavior complex when we cannot readily reduce it to a simple summary.
If all processes are viewed as computations, then doing such reduction in effect requires us as observers to be capable of computations that are more sophisticated than the ones going on in the systems we are observing. But the Principle of Computational Equivalence implies that usually the computations will be of exactly the same sophistication, providing a fundamental explanation of why the behavior we observe must seem to us complex.

Many important phenomena are computationally irreducible

Most of the great successes of traditional exact science have ultimately come from finding mathematical formulas to describe the outcome of the evolution of a system. But this requires that the evolution be computationally reducible, so that the computational work involved can just be reduced to evaluation of a formula. A New Kind of Science shows however that among most systems computational reducibility is rare, and computational irreducibility is the norm. This explains some of the observed limitations of existing science, and shows that there are cases where theoretical prediction is effectively not possible, and that observation or experiment must inevitably be used.

Apparent free will can arise from computational irreducibility

For centuries there has been debate about how apparent human free will can be consistent with deterministic underlying laws in the universe. The phenomenon of computational irreducibility described in A New Kind of Science finally provides a scientifically based resolution of this apparent dichotomy.

Undecidability occurs in natural science, not just mathematics

The phenomenon of formal undecidability discovered in mathematics in the 1930s through Gödel's Theorem has normally been viewed as esoteric, and of little relevance to ordinary science.
A New Kind of Science shows however that undecidability is not only possible but actually common in many systems in nature, leading to important philosophical conclusions about what can and cannot be known in natural science.

The difficulty of doing mathematics reflects computational irreducibility

Mathematical theorems such as Fermat's Last Theorem that are easy to state often seem to require immensely long proofs. In A New Kind of Science this fundamental observation about mathematics is explained on the basis of the phenomenon of computational irreducibility, and is shown to be a reflection of results like Gödel's Theorem being far more significant and widespread than has been believed before.

Existing mathematics covers only a tiny fraction of all possibilities

Mathematics is often assumed to be very general, in effect covering any possible abstract system. But the discoveries in A New Kind of Science show that mathematics as it has traditionally been practiced has actually stayed very close to its historical roots in antiquity, and has failed to cover a vast range of possible abstract systems, many of which are much richer in behavior than the systems actually studied in existing mathematics. Among new results are unprecedentedly short representations of existing formal systems such as logic, used to show just how arbitrarily systems like these have in effect been picked by the history of mathematics. The framework created in A New Kind of Science provides a major generalization of mathematics, and shows how fundamentally limited the traditional theorem-proof approach to mathematics must ultimately be.
Studying simple programs can form a basis for technical education

As a vehicle for teaching precise analytical thinking, A New Kind of Science represents a major alternative to existing mathematics, with such advantages as greater explicitness and visual appeal, more straightforward applicability to certain issues in natural science, and the side benefits of learning practical computer science and programming.

Mechanisms from simple programs suggest new kinds of technology

In existing technology, complex tasks tend to be achieved by systems with elaborately arranged parts. But the discoveries in A New Kind of Science show that complex behavior can be achieved by systems with an extremely simple underlying structure that is, for example, potentially easy to implement at an atomic scale. Many specific systems studied in A New Kind of Science, such as cellular automata, are likely to find their way into a new generation of technological systems.
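As an illustration of the kind of simple program discussed above, here is a small sketch (not taken from the book; the rule number, grid width, and step count are arbitrary choices) of an elementary cellular automaton. Such systems are a standard example of computational irreducibility: as far as is known, the only general way to find the state after n steps is to actually run all n steps.

```python
# Minimal elementary cellular automaton (rule 30 here, with wrap-around
# boundaries). Each cell's next state is determined by looking up the
# 3-cell neighborhood (left, center, right) as a bit index into the rule
# number -- the standard Wolfram numbering of elementary CA rules.

def step(cells, rule=30):
    """Apply one CA step to a tuple of 0/1 cells."""
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def run(width=31, steps=15):
    """Start from a single 1 in the center and return the full history."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Despite the few lines of code, the pattern produced by rule 30 shows no evident regularity, which is precisely the behavior-from-simple-rules phenomenon the text describes.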
{"url":"http://wolframscience.com/reference/quick_takes.html","timestamp":"2014-04-17T16:03:21Z","content_type":null,"content_length":"25094","record_id":"<urn:uuid:67f88127-592c-4aaa-82cd-bcffa5f8a886>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Running time-series regressions on a dataset with large gaps: is it legitimate?

August 29th 2013, 06:14 AM #1

I would like to run a simple time-series regression, to estimate the sensitivity of my dependent variable to a set of explanatory variables. However, instead of running the regression on the entire time-series, I would like to run it only over specific time periods, all stacked together as one continuous dataset. For example, a time-series of stock returns could be subdivided into a bull-market subset and a bear-market subset. I would like to run the regression over bull-market periods only, or over bear-market periods only.

The issue is that these market conditions (regimes) are discontinuous. In other words, you have a time period of bull market, followed by one of bear market, then bull, then bear, ... The gap between two bull-market periods or two bear-market periods could be several years.

Is it legitimate to stack all the subsets of the same regime, e.g. bull, and run the regression? The purpose is to get a regime-specific sensitivity to the explanatory variables.

Thank you

Re: Running time-series regressions on a dataset with large gaps: is it legitimate?

Hey mariamb.

The question you should first answer is whether the response variable(s) are continuous. If this is the case, the predictors can be of any kind (continuous, ordinal, categorical, etc.). If you want to run a conditional regression, then you can absolutely do this (and fit many advanced models, particularly in SAS). If there are discontinuities at the boundaries of each season (which I think is what you are concerned about), where prices "jump", then you can add regression terms to your model that factor in this discontinuity. All you have to do is add an extra term to your regression. The term is typically something like (x-a)+, which equals 0 if x < a and x - a if x >= a.
If you add these kinds of terms you can adjust for the discontinuities or "jumps" between seasons. Alternatively, you can use the season directly as an ordinal variable.
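As a concrete illustration of the stacking approach discussed in this thread, here is a minimal sketch in Python/NumPy (all names and the toy data are made up for illustration, not from the thread): select only the observations belonging to one regime, stack them, and fit a single OLS regression on the stacked sample.

```python
# Sketch: estimate a regime-specific sensitivity (beta) by stacking all
# sub-periods of one regime and running OLS on the stacked observations.
# Note: stacking discontinuous sub-periods is fine for a static slope
# like this, but would NOT be fine for a model with dynamics (lags,
# autocorrelation terms) that span the gaps between sub-periods.

import numpy as np

def regime_beta(y, x, regime, target):
    """OLS [intercept, slope] of y on x using only rows where regime == target."""
    mask = regime == target
    X = np.column_stack([np.ones(mask.sum()), x[mask]])
    coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return coef

# Toy data: the true slope is 2 in "bull" periods and -1 in "bear" periods,
# with regimes alternating in 50-observation blocks.
rng = np.random.default_rng(0)
x = rng.normal(size=400)
regime = np.where(np.arange(400) % 100 < 50, "bull", "bear")
y = np.where(regime == "bull", 2.0 * x, -1.0 * x) + 0.01 * rng.normal(size=400)

print(regime_beta(y, x, regime, "bull"))  # slope close to 2
print(regime_beta(y, x, regime, "bear"))  # slope close to -1
```

The same idea extends to the "jump" adjustment mentioned above: a hinge term max(x - a, 0) can simply be appended as an extra column of X.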
{"url":"http://mathhelpforum.com/advanced-statistics/221489-running-time-series-regressions-dataset-large-gaps-legitimate.html","timestamp":"2014-04-19T05:06:16Z","content_type":null,"content_length":"33984","record_id":"<urn:uuid:b9b16673-4596-4e39-975f-fa918d664a0a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
MCS-287 Notes for 2006-03-17

Tucker and Noonan's presentation of the Fibonacci example in Section 3.4.3 has lots of glitches. Since I don't particularly care for this example, I'd rather work a different one than fix theirs up. Specifically, let's consider a procedure for multiplying two integers, the first of which we will assume is not negative. Of course, the simplest way to do this would be using *, but let's do it with one arm tied behind our back. (The resulting algorithmic ideas apply more realistically in other settings.)

A straightforward Java version of this procedure is attached. The first thing we might do would be to add in some assertions for the procedure's precondition, postcondition, and loop invariant. The version with those few assertions is also attached. Finally, if we wanted to really show off our understanding of axiomatic semantics, we could create a version with a lot more assertions added to nail down all the details of the correctness proof.

It is worth remembering that we are actually only proving partial correctness. To make that point, we can consider a variant, also available with no assertions, a few assertions, or the whole nine yards. This variant has the virtue not only that it has a simpler loop invariant, but also that it doesn't require the precondition that the first argument be greater than or equal to zero. However, think carefully about what we prove: this procedure, unlike the first one, won't return a wrong answer when given a negative first argument. But that doesn't mean it will return the right answer!

For a more complicated example, which will also bring in the proof rule for conditionals, we can consider making the procedure more efficient by cutting the multiplier in half (when possible) rather than just subtracting one. This more advanced version is also available with no assertions, a few assertions, or every conceivable assertion.
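Since the attached Java files are not reproduced here, the following is a rough Python sketch of two of the procedures discussed, with the precondition, loop invariant, and postcondition written out as assertions (the exact invariants in the attached versions may differ):

```python
def multiply(a, b):
    """Return a * b using only addition and subtraction; requires a >= 0."""
    assert a >= 0                                  # precondition
    result = 0
    remaining = a
    while remaining > 0:
        # loop invariant: result + remaining * b == a * b
        assert result + remaining * b == a * b
        result += b
        remaining -= 1
    assert result == a * b                         # postcondition
    return result

def multiply_fast(a, b):
    """Same contract, but halves the multiplier when it is even.

    The conditional inside the loop is what brings in the proof rule
    for conditionals: both branches must preserve the invariant.
    """
    assert a >= 0                                  # precondition
    result, remaining, addend = 0, a, b
    while remaining > 0:
        # loop invariant: result + remaining * addend == a * b
        assert result + remaining * addend == a * b
        if remaining % 2 == 0:
            remaining //= 2
            addend += addend
        else:
            result += addend
            remaining -= 1
    assert result == a * b                         # postcondition
    return result
```

Each assertion checks the invariant dynamically; the axiomatic-semantics exercise is to prove, rather than test, that the invariant holds on every iteration.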
Course web site: http://www.gustavus.edu/+max/courses/S2006/MCS-287/ Instructor: Max Hailperin <max@gustavus.edu>
{"url":"https://gustavus.edu/+max/courses/S2006/MCS-287/notes/2006-03-17/","timestamp":"2014-04-16T22:13:39Z","content_type":null,"content_length":"2992","record_id":"<urn:uuid:5a8e7e44-5318-4550-a203-052d96c4feec>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Readability vs speed in R
February 19, 2011 By Martin Scharm

I have bad news for those of you trying to produce lucid code! In his blog, Radford M. Neal, Professor at the University of Toronto, published an article with the headline Two Surprising Things about R. He worked out that parentheses in mathematical expressions slow down the run-time dramatically! In contrast, it seems to be less time consuming to use curly brackets. I verified these circumstances to be true:

> x = 10
> f <- function (n) for (i in 1:n) 1/(1*(1+x))
> g <- function (n) for (i in 1:n) (((1/(((1*(((1+x)))))))))
> system.time(f(10^6))
   user  system elapsed
  2.231   0.000   2.232
> system.time(g(10^6))
   user  system elapsed
  3.896   0.000   3.923
>
> # in contrast with curly brackets
> h <- function (n) for (i in 1:n) 1/{1*{1+x}}
> i <- function (n) for (i in 1:n) {{{1/{{{1*{{{1+x}}}}}}}}}
> system.time(h(10^6))
   user  system elapsed
  1.974   0.000   1.974
> system.time(i(10^6))
   user  system elapsed
  3.204   0.000   3.228

As you can see, adding extra parentheses is not really intelligent concerning run-time, and not in a negligible way. This fact shocked me, because I always tried to group expressions to increase the readability of my code! Using curly brackets speeds up the execution in comparison to parentheses. Both observations are surprising to me! So the conclusion is: try to avoid redundant parentheses and/or brackets! To learn more about the why, you are referred to his article. He also made an interesting observation about squares. In a further article he presents some patches to speed up R.
{"url":"http://www.r-bloggers.com/readability-vs-speed-in-r/","timestamp":"2014-04-19T04:24:29Z","content_type":null,"content_length":"43114","record_id":"<urn:uuid:04bedc86-4b4d-4d0b-b3ac-21c938a5b629>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
proposition for syntax for initialisation of multidimensional lists
Nils Grimsmo nils.grimsmo at idi.ntnu.no
Tue Oct 12 23:02:24 CEST 2004

i always have trouble explaining why creation of multidimensional lists is not as straightforward as it could be in python. list comprehensions are ugly if you are new to the language. i really would like to see it made easy. i propose using tuples in the same way as integers are now:

>>> [0] * (2,3,4)
[[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]

this is even less "mathematically bad" syntax than multiplication with an integer, as multiplication of vectors is only well defined if they are of the same length. [1] * 7 could be interpreted as a vector of length one multiplied with a scalar 7, resulting in the vector [7]. (this was not meant as a proposition to remove the current syntax.)

it would be easy to implement:

class mylist(list):
    def __mul__(self, dims):
        if isinstance(dims, tuple):
            if len(dims) > 1:
                return [self.__mul__(dims[1:]) for i in range(dims[0])]
            return list.__mul__(self, dims[0])
        return list.__mul__(self, dims)

li = mylist([0])
print li * (2,3,4)

what do you think?

klem fra nils

More information about the Python-list mailing list
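For comparison with the proposal, here is a short illustrative sketch of how such nested shapes are built in standard Python, including the aliasing trap that makes naive `*` repetition wrong for mutable nested lists, which is part of why the comprehensions the post calls "ugly" are needed.

```python
# Nested comprehension: every inner list is a distinct object, so writes
# to one cell do not leak into the others.
a = [[[0 for _ in range(4)] for _ in range(3)] for _ in range(2)]
a[0][0][0] = 1
assert a[1][0][0] == 0        # other rows unaffected

# Naive repetition: the outer list holds three references to the SAME
# inner list, so a write through one row is visible through all of them.
b = [[0] * 4] * 3
b[0][0] = 1
assert b[1][0] == 1           # all three rows share one list object
```

A tuple-aware `*` like the one proposed would sidestep this trap, since each level of nesting would be built as fresh lists.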
{"url":"https://mail.python.org/pipermail/python-list/2004-October/249743.html","timestamp":"2014-04-19T16:33:02Z","content_type":null,"content_length":"3890","record_id":"<urn:uuid:167b2a41-90ea-4d8a-b84a-0b1c02470434>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00082-ip-10-147-4-33.ec2.internal.warc.gz"}