Can't get the loop to work properly
August 8th, 2012, 09:34 PM #1
Junior Member
Join Date
Aug 2012
Thanked 0 Times in 0 Posts
*****Solved it myself, thanks for help*****
Hey there,
I've got a program which I need to test whether an integer n is the exact sum of k consecutive integer numbers starting from 1.
e.g. 6 is 1 + 2 + 3 and therefore k = 3, and if I put in 4 then it returns false. But all I've been able to achieve so far is to get the program to add the numbers 1 to n together, so if I enter 6 I
get 21 because 1 + 2 + 3 + 4 + 5 + 6 = 21, but this isn't what I need the program to do. :/ I'm kinda stuck, I've tried a few other things but they aren't working.
import java.util.Scanner;

public class ExactSum {
    /** @param args */
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        System.out.println("Type an integer:");
        int n = keyboard.nextInt();
        System.out.println("The number "+n+" is not an exact sum");
        else {System.out.println("The number "+n+" is the sum of integers from 1 to "+k+" ");
Any help is appreciated!
Last edited by thegreatzo; August 10th, 2012 at 10:06 AM. Reason: Solved
My advice would be to drop the code for now. Back up and make a pseudocode. Once you get the ideas down, turning that into code is much easier.
To start you off:
You need to test whether an integer n is the exact sum of k consecutive integers.
So you need to get two inputs, both of which should be an integer value greater than 0.
You can break these following steps down more:
do some math..
decide what to output
Once you get each step worked out, go over the steps again and see if you can break any one step down into more than one smaller step. Like getting an integer greater than zero could involve
multiple steps which might include:
-output to user what information you want
-get an integer back
-verify the integer is greater than zero
Hope that gets you back under way
If you don't understand my answer, don't ignore it, ask a question.
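One way the steps above could translate into Java is the following sketch (the class and method names are illustrative, not taken from the original post):

```java
// Illustrative sketch: test whether n is the exact sum 1 + 2 + ... + k.
public class ExactSumSketch {

    // Returns k if n == 1 + 2 + ... + k for some k >= 1, otherwise -1.
    public static int exactSumK(int n) {
        int sum = 0;
        for (int k = 1; sum < n; k++) {
            sum += k;              // running total 1 + 2 + ... + k
            if (sum == n) {
                return k;          // n is exactly the sum up to k
            }
        }
        return -1;                 // the total passed n without hitting it
    }

    public static void main(String[] args) {
        System.out.println(exactSumK(6));   // 6 = 1 + 2 + 3, so prints 3
        System.out.println(exactSumK(4));   // no such k, so prints -1
    }
}
```

The key difference from summing 1 to n is that the loop stops as soon as the running total reaches or passes n, instead of always running n times.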
Center for Philosophy of Science ::: and HPS speakers
Measurement and Limits in the Principia, Section 10
Chris Smeenk and George Smith
This paper has two related aims: first, to elucidate the methodological
sophistication displayed in Newton’s treatment of constrained motion in
Section 10 of the Principia, and second, to discuss measurement and limit-case reasoning more generally, drawing on our reading of Newton.
Section 10 of Book 1 of the Principia includes 10 theorems regarding
constrained motion – motion on inclined planes, more general constraint
surfaces, and the oscillating motion of pendulums – that significantly extend
earlier results obtained by Galileo, Huygens, and others. These topics
were of central importance to the history of mechanics before and after the
Principia. Newton’s treatment differs from the earlier mechanical tradition
in two striking ways: first, pathwise independence of acquired velocity
follows directly from the Laws of Motion, rather than being assumed as
a separate principle; second, Galilean gravity, a force with constant magnitude
directed along parallel lines, is replaced with gravity treated as a
centripetal force. Newton proved a more general version of the key result
of Huygens’s Horologium: he stated a necessary and sufficient condition
for isochrony, and established that oscillations along a (generalized) cycloid
are isochronous for a force law varying as f(r)/r.
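For reference, the Huygens result in question can be stated compactly: a bob constrained to a cycloid generated by a circle of radius a, under uniform gravity g, oscillates with an amplitude-independent period (this is the standard textbook statement, not a quotation from the paper):

```latex
T = 4\pi\sqrt{\frac{a}{g}} = 2\pi\sqrt{\frac{\ell}{g}}, \qquad \ell = 4a,
```

where \ell is the length of the equivalent simple pendulum; Newton's more general isochrony condition for forces of the form f(r)/r recovers this uniform-gravity case as a limit.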
The results of Section 10 are particularly striking when one considers
why Newton pursued them and how they relate to the Principia as a whole.
This section is where Newton considers limiting cases most carefully, and
thus it illustrates an under-appreciated aspect of his methodology. We will
argue that Newton’s work in Section 10 establishes that the central results
of the Horologium survive the conceptual transition from Huygensian mechanics
to Newton’s more general framework. The fact that Huygens’s
results can be recovered as well-defined limit cases justifies Newton’s reliance
on these results, most prominently Huygens’s measurement of surface
gravity. In addition, Newton's careful treatment ensures that evidence
in favor of Galilean mechanics carries over to the more general theoretical
framework of the Principia, and there will be no grounds for objecting to
the new physics on the basis of earlier results regarding constrained motion.
Turning to more general questions, our discussion of Newton’s method
addresses two philosophical issues. First, Newton’s approach to measurement
requires that a real system used to measure a quantity can approach
ideal precision in specific circumstances. Assessing whether the requirement
holds depends on the theory itself, not on external stipulations regarding
observability and measurement. In this case, Newton showed
that the actual motions of a cycloidal pendulum bob approximate a motion
that would be exactly periodic in specific circumstances, and he further
quantified departures from isochronism. We will illustrate the importance
of considerations along these lines by contrasting this case with
an example in which this requirement does not hold. Second, Newton’s
results in Section 10 make it possible to characterize the relationship between
Galilean and Newtonian mechanics quite clearly. Galileo's theory is
a suitable approximation of Newtonian theory in a specific circumstance,
namely near the surface of the earth (on the assumption that it has uniform
density). The limiting relationship between theories is not simply
characterized as holding between fundamental equations, but instead depends
on the descriptions of particular situations. Furthermore, various
lawlike relationships in Galilean mechanics retain their lawlike force, as
Newton’s limit-case reasoning shows. More generally, we will argue that
such limit-case reasoning provides a convincing reply to (one aspect of)
Kuhnian worries regarding incommensurability.
Lattice Points and Boundary Lattice Points
Date: 08/30/98 at 23:09:52
From: Bernard Doria
Subject: Lattice Points
What is an interior lattice point and a boundary lattice point of a
given shape (triangle, circle, rectangle, etc.)?
For example, an isosceles right triangle whose legs are 3 units long.
The teacher wrote that there are roughly 2 interior lattice points and
8 boundary lattice points, but the teacher isn't here to explain it.
A friend of mine tried explaining it to me, but I was still confused.
Can anyone help me on this?
Date: 08/31/98 at 03:03:20
From: Doctor Pat
Subject: Re: Lattice Points
Lattice points are the points on the regular x,y coordinate plane that
have an integer value for both x and y. When you draw the shape by
drawing the edges, if the line contains one of these lattice points,
it is a boundary lattice point. If a lattice point is totally inside the shape's edges, it is an interior lattice point. The number of
lattice points on the boundary and inside depend on where you put the
shape and how it is rotated. In some positions it may have more
boundary points and in others, more interior points.
If you draw the isosceles right triangle whose legs are 3 units long
with the right angle at the vertex (0,0) and the two legs along the x
and y axes, then there would be the following boundary lattice points:
(0,0), (0,1), (0,2), (0,3), (1,0), (2,0), (3,0), (2,1), and (1,2)
The only interior lattice point in this case would be (1,1). Plot
these and it will help.
But if you slide the triangle to the left 1/4 of a unit and down 1/4 of a unit, suddenly there are no boundary points and 6 interior points.
Again, try plotting these. I think your teacher is working up to a
beautiful mathematical idea called Pick's theorem, a way to find the area of such a shape just by counting its interior and boundary lattice points.
I hope this helps. Good luck.
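For the triangle discussed above (vertices (0,0), (3,0), (0,3)), the counts can be checked by brute force; the sketch below is illustrative:

```java
// Count interior and boundary lattice points of the triangle
// x >= 0, y >= 0, x + y <= 3 by checking every candidate grid point.
public class LatticeCount {

    // Returns { interior, boundary }.
    public static int[] countPoints() {
        int interior = 0, boundary = 0;
        for (int x = 0; x <= 3; x++) {
            for (int y = 0; y <= 3; y++) {
                if (x + y > 3) continue;                        // outside the triangle
                boolean onEdge = (x == 0) || (y == 0) || (x + y == 3);
                if (onEdge) boundary++; else interior++;
            }
        }
        return new int[] { interior, boundary };
    }

    public static void main(String[] args) {
        int[] c = countPoints();
        System.out.println(c[0] + " interior, " + c[1] + " boundary");
        // prints: 1 interior, 9 boundary
    }
}
```

This also matches Pick's theorem: area = I + B/2 - 1 = 1 + 9/2 - 1 = 4.5, which is indeed (1/2)*3*3.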
- Doctor Pat, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Comparison between Analytical Calculus and FEM for a Mechanical Press Bed
Engineering Faculty, University “Constantin Brâncuşi” of Târgu-Jiu, Romania
In the first part of this paper a method for calculating the stress in a press bed is presented, based on an extension of the classical methodology: a reduced frame, determined by the points of
application of force and the line of gravity centers, is used to determine the sectional geometry and the maximum stress. The calculus is extended by considering cross sections of the frame every
15°, providing more information on both the maximum values and the distribution of these tensions. The values obtained confirm the assumption that the simplified structure yields generally large
results, this calculation being usually used for verification. By this method one can obtain stresses in different sections, not only the maximum value. Complete stress values and their distribution
require a more complex calculation, furthermore allowing dimensional optimization, such as FEM. Second, a step-by-step method is presented for modeling the frame of the mechanical press studied,
using Pro/Engineer, in order to perform subsequent static or dynamic analysis based on FEM, using COSMOS/M. The stages of defining the mesh, the environment bonds, and the loads are presented,
followed by the analysis and the interpretation of results. The FEA results show a continuous distribution of displacements and stresses that validates the model. At the end, considerations and a
comparison between the results of the analytical method and FEM are presented, regarding stress values and their distribution.
Keywords: CAD/FEA software, complex structures, mechanical press, modeling, FEM, stress distribution
American Journal of Mechanical Engineering, 2013 1 (1), pp 6-13.
DOI: 10.12691/ajme-1-1-2
Received December 29, 2012; Revised January 16, 2013; Accepted February 26, 2013
© 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
• Iancu, Cătălin. "Comparison between Analytical Calculus and FEM for a Mechanical Press Bed." American Journal of Mechanical Engineering 1.1 (2013): 6-13.
1. Introduction
The press frame is a basic element of the press, aiming to support all of the machine kinematics and force transmission from the press to the work-piece. In this paper is considered the frame for a
crank press, one of the most common types of such machines. According to working characteristics and for functional reasons, as specified in OSHA 1910.217 ^[1], crank mechanical press frames are made
in two models: half frames (open) and complete frames (closed).
Open frames are distinguished by the shape of the normal section made through the column at table level, and they can be of mono-block or assembled construction.
The column can have one or two uprights, the second version allowing, through the opening between the columns, an increased workspace of the die and easier parts and waste disposal. Columns or
pillars are secured by means of ribs and braces. Common variants of the cross sections are: closed contour, half contour, open contour, or T-profile cross-section. Frames with closed-contour sections
are used for fixed presses, and the semi-closed or open contours for both fixed and folding presses.
As specified in the Machine-tools, typification rules ^[2], the materials used in performing frames are cast iron and steel castings Fc 250...Fc 350 (Ft 25D…Ft 35D according to NF-A 35-501-77) or OT
(cast steel) class for cast frames, and steel type OL 37, OL 42.2k, OL 44.2k and OL 52.2k (E 24-2, E 26-2, E 30-2 and E 36-2 according to NF-A 35-501-77) for welded frames of plates.
The selection of construction materials is determined, however, largely by the pressing force. Thus, for forces ranging from 63 kN to 1600 kN, open frames with two columns (pillars), reclining or
tilt-bed, can be used, or open frames with two fixed columns and a folding table. Open frames with a closed-contour section are used in the range of 63 kN...1600 kN, but can be used even for higher
nominal forces. Within the same size limits, closed frames are approx. 15 % more rigid than open frames, the decisive factor in choosing the technology and construction of the frame being
technical-economic considerations.
2. Crank Mechanical Press Bed Calculus
For presses with capacities up to a nominal force of 5000 kN, such as universal or blanking presses, eccentric or crank drive is still the most effective drive system ^[3]. The calculation of the
crank mechanical press bearing pedestal is based on the determination of stresses and strains that occur under full load, at the nominal pressing force.
In terms of static calculation performed, types of frames for these presses are divided into three general categories ^[4]:
1. Frame with a single upright, considered a straight beam eccentrically loaded, the crankshaft being of end-eccentric type, oriented transversely (perpendicular to the machine).
2. Frame with a single, double upright, usually tilting, regarded as a curved beam eccentrically loaded, the crankshaft being oriented longitudinally (parallel to the machine).
3. Frame with two uprights, loaded symmetrically, for closed presses, the crankshaft being placed longitudinally or transversely.
As Tabără et al. showed in ^[5], and Tschaetsch in ^[6], the calculation considers flat loading, with concentrated forces acting in the plane of symmetry of the bearing pedestal.
In the following, the half-frame (open) bearing pedestal will be treated, with the crankshaft oriented longitudinally (parallel to the machine). In this case, the reduced scheme of the framework must
be determined, defined by the points of application of force and by the line of the centers of gravity specific to the sections of the frame.
The reduced frame scheme in Figure 1 shows that in a certain section A-A the bed is subject to a bending moment M = F[N A-A]·Y, where F[N A-A] is the nominal deformation force corresponding to the
calculation section, and Y is the distance from the axis of the force to the center of gravity of the calculation section.
Because along the beam represented by the frame the bending moment remains constant, the considered cross-section is loaded in tension on the interior side and in compression on the outside.
The resistance modules for the two parts of the section are W[1] = I[z]/y[1] and W[2] = I[z]/y[2], and the corresponding bending stresses are σ[b1] = M/W[1] and σ[b2] = M/W[2].
By overlapping these with the constant tensile stress σ[t] = F[N]/A, the maximum tensions arise at the external fibers of the considered section: σ[max] = σ[t] + σ[b].
This calculation is illustrated through a numerical example, as stated by Iancu ^[7], represented by a crank-type press frame, type PMCR-63 (Figure 2), a mechanical press with half (open) tilting
frame, adjustable stroke and a nominal force of 630 kN, the crankshaft oriented longitudinally.
The classic calculation is made only for the maximum force, along its direction. This calculation is extended, as specified by Iancu in ^[8], by considering cross sections of the frame every 15°,
which allows estimation of tensions along several directions, providing information on both the maximum values and the distribution of these tensions, making sizing more accurate.
It is also mentioned that the calculation assumes the tensile and compressive efforts and deformations to be perpendicular to the calculating sections, from direction A-A up to G-G. The scheme for
calculating the bed is shown in Figure 3. The frame is made of sheet plates of OL 44.2k in welded construction. According to the literature, the wall thickness is between 8 and 60 mm. Based on
similar machine tools made worldwide and on the required characteristics, the initial dimensions were settled from structural and functional conditions, to be finally checked by the subsequent
calculations.
It is also noted that, due to the symmetry of the bearing pedestal (of course, at this stage the various technological cutouts are not considered), the calculation will be performed for half of the
considered sections (see Figure 4).
The sectional area of the composite section is A = Σ A[i].
The position of the center of gravity of the section is y[G] = Σ(A[i]·y[i]) / Σ A[i].
The moment of inertia of the composite section is I[z] = Σ(I[i] + A[i]·d[i]²).
The resistance modules are W[1] = I[z]/y[1] and W[2] = I[z]/y[2].
For section A-A, which is perpendicular to the direction of the force of the press, the nominal force of the press F[N] is considered.
The bending moment acting on the considered section is M = F[N]·Y.
The constant tensile stress is σ[t] = F[N]/A.
The tensile stress, i.e. compression stress, due to bending loads is σ[b] = M/W.
The maximum stress at the outer fiber of the considered section is σ[max] = σ[t] + σ[b].
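The sequence of steps above can be sketched numerically; the inputs in this example are hypothetical round numbers, not the paper's actual PMCR-63 section dimensions:

```java
// Combined tension + bending stress in one frame section, following
// sigma_max = F/A + M/W with M = F*Y. Inputs are illustrative only.
public class SectionStress {

    // F: press force [N], Y: force-to-centroid distance [m],
    // A: section area [m^2], W: section modulus [m^3]
    public static double maxStress(double F, double Y, double A, double W) {
        double M = F * Y;              // bending moment on the section
        double sigmaTension = F / A;   // uniform tensile component
        double sigmaBending = M / W;   // bending component at the outer fiber
        return sigmaTension + sigmaBending;
    }

    public static void main(String[] args) {
        // 630 kN nominal force with hypothetical section geometry:
        // 12.6e6 + 31.5e6 = 4.41e7 N/m^2 for these inputs
        double sigma = maxStress(630e3, 0.5, 0.05, 0.01);
        System.out.printf(java.util.Locale.US, "max stress = %.3e N/m^2%n", sigma);
    }
}
```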
Following this model, the calculation is performed for the other considered sections, B-B...G-G, determining for each: the area and center of gravity of the section, the moment of inertia and the
resistance modulus, the force component acting on the considered section, and the maximum stress for the considered section.
These elements are summarized in Table 1.
As seen in Figure 5, which presents the distribution of these stresses, the highest values are obtained in section A-A and in section D-D, inclined at 45°, where, for an incorrectly sized frame,
cracks may appear in the welds or even in the base material. Note that the absolute values of these stresses are quite small compared with the allowable material resistance, which shows that the
frame is really oversized.
3. Modeling Frame Strategy for FEA
3.1. General Elements
For the discretization of any structure, the most suitable finite elements must be selected, regarding the geometry, the loads, the desired precision and many other conditions.
In the majority of cases the structure geometry determines the finite element type to be used, as Imbert states in ^[9]. It is recommended to use a single type of finite element, but for complex
structures two or even more types of finite elements may be used.
One-dimensional finite elements are used when the geometry, material properties, etc. can be described by a single space coordinate.
Bi-dimensional finite elements are used when two space coordinates can describe the geometry, material properties, etc.
Three-dimensional finite elements are used when the structure is massive and the geometry, material properties, etc. must be described by three space coordinates. The base element is the
tetrahedron, but other shapes may be used, like cuboids, prisms, and hexahedra.
At the same time, the size of the elements is very important in order to achieve correct results with the desired precision. Regarding this aspect, a compromise must be made between the size and the
number of elements, because increasing the number of elements enlarges the problem and increases the solving time.
If the analyzed structure has no discontinuities in geometry, material properties or loads, it can be divided into approximately equal finite elements, and the node distances will be uniform. The
number of elements is connected with the desired precision, but there is a certain number of elements for which the precision-solving time relation is optimized, as stated by Rao in ^[10].
3.2. Discretization Variants
As Neumann and Hahn stated in ^[11], machine frames are generally complex, three-dimensional structures, which require a detailed analysis of the modeling and discretization variants.
Theoretically can be used three possibilities of discretization, and thus obtaining three model variants:
- beam type elements, which have the advantage of easy modeling and a reduced solving time, but cannot accurately model concentrator zones;
- plate type elements, which have the advantage of a detailed examination of local phenomena, but a longer solving time;
- solid (three-dimensional) elements, which model the whole structure very exactly, but have a long solving time.
Obviously, various types of elements can be combined when necessary.
Analyzing the geometry of the structure leads to the conclusion that it can be discretized mostly with plate type finite elements. Since the majority of the bed elements are welded plates, it is
clear that plate elements can model the structure. The modeling is done on the middle plane, with respect to the thickness of the plates. The bed plates have been discretized with SHELL elements
with 3 nodes, the thickness of a plate being constant. For a realistic transfer of forces, the cantilevers and the bosses on the upper side have been discretized with SOLID elements.
The version of the COSMOS/M software ^[12] used for solving permits three important classes of finite elements:
- One-dimensional (BEAM class);
- Bi-dimensional (PLANE2D class);
- Three-dimensional (SOLID class).
So the COSMOS/M software permits any variant of structure discretization, in order to achieve a very realistic model and accurate results, especially when analyzing complex structures like a press
bed.
3.3. Modeling and Processing the Bed Model
The solving time is the main motivation in choosing a discretization variant and the corresponding software. The appearance of a new solving technique (FFE - Fast Finite Element), developed by
Structural Research & Analysis Corporation, was decisive in choosing COSMOS/M ^[13].
For solving a problem of structural analysis and optimization of a complex structure like a mechanical press bed, the following phases must be completed:
a- completing the geometric model; b- establishing the analysis type; c- defining the finite element type; d- defining the mesh; e- defining material characteristics; f- defining geometric
characteristics.
a) Completing geometric model
First, a geometric model of the bed as accurate as possible must be realized, using dedicated CAD software or even the geometric modeler of COSMOS/M. This modeler is quite cumbersome, so dedicated
CAD software was chosen (software that can perform parametric modeling and work integrated with the FEA software is recommended). Such a program is Pro/Engineer ^[14], produced by the American
corporation Parametric Technology Corp. The problems and phases of modeling are not detailed here.
b) Establishing the analysis type
The loading of such a structure is complex. Practically it is a dynamic load, and the natural frequencies are very important. In this paper, the modeling and only the static analysis of the
structure are presented, made mostly for validating the geometric model.
c) Defining finite element type
After completing the geometric model, presented in Figure 6, the meshing must be done for the subsequent FEA. As mentioned, the discretization can be done using plate elements (SHELL3) or solid
elements (TETRA4). Since the majority of the structure is made of welded plates of different thicknesses, between 10 and 80 mm, shell type elements will be used for the whole structure, and
tetrahedron elements for the cantilevers and the bosses.
d) Defining the mesh
Using these types of finite elements, the discretization of the structure has been done, obtaining the mesh presented in Figure 7. The mesh was created directly in Pro/Engineer, because importing
the geometry into COSMOS/M and then creating the mesh in that software revealed a series of inconsistencies resulting from the different precision of coordinates in the two programs. In addition,
Pro/Engineer has special facilities for refining the initial mesh.
e) Defining material characteristics
The refined mesh presented in Figure 7 has been imported into COSMOS/M, at this stage the compatibility being complete. For defining the material characteristics, the SI-system material library of
the software has been used, with the possibility of defining every property individually, in measurement units according to the SI system.
The bed is made of OL 44.2k, and the table is made of OL 52.2k, STAS 500/2-80. The material chosen from a library is Steel (plain steel), which has the usual characteristics of this material.
f) Defining geometric characteristics
For this type of structure, the geometric characteristics refer to the thickness of every plate. When importing the mesh into COSMOS/M for FEA, finite element groups have been defined, as well as
the thickness and FE type attached to every real constant (the initial thickness of every plate of the bed).
4. Static Analysis of PMCR-63 Mechanical Press Frame
The model of the analyzed complex structure, completed and prepared as shown before, is now ready for finite element analysis. For FEA, either static or dynamic, in COSMOS/M the following phases
must be followed: defining the mesh, defining the environment bonds (boundary conditions), defining the loads, performing the analysis and interpreting the results.
4.1. Defining the Mesh
The way of obtaining the mesh was presented before. The mesh has 10307 nodes, 25734 finite elements and 103590 degrees of freedom (DOF), with element type SHELL3 for the discretization of all bed
plates and type TETRA4 for the discretization of the cantilevers and the bosses (Figure 8).
4.2. Defining the Environment Bonds
The bed structure could be studied as a half model, given the symmetry with respect to the YOZ plane. This symmetry regards the geometry, the loads and the environment bonds. However, for a more
accurate evaluation of the model, modeling the whole structure was chosen. The environment bonds apply to the nodes in the zone where the bed rests on the foundation. In this zone all DOF are
blocked (3 translations and 3 rotations). In Figure 9 the environment bonds are presented.
4.3. Defining the Loads
Defining the loads of the analyzed structure can be approached from several points of view:
- defining the real loads (shock type, of short duration, with hard-to-estimate damping), in which case a detailed dynamic analysis is needed;
- defining static loads, as a simplification of the loads to the static domain, when this is realistic enough.
The forces developed during operation are generated by the rod-crank mechanism. Their effect is transmitted through the upper bosses and through the bed table into the whole structure. Therefore, on
the bed act:
- action forces on the bed table, with a maximum value of 63 tf (630 kN);
- reaction forces on the upper bosses, of the same value but opposite direction.
Since the action of these forces is not concentrated, the forces on the upper bosses were considered as a pressure uniformly distributed over the bosses' bearing width, p = F[N]/A[boss], and the
force on the bed table as a pressure distributed over a circular surface of 300 mm diameter, p = F[N]/(π·d²/4) ≈ 8.9·10^6 N/m².
In Figure 10 and Figure 11 these pressures are shown. To these loads the weight of the structure is added, considered by an automated option in the software.
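The table pressure implied by the figures above (630 kN spread over a 300 mm diameter circle) can be checked directly; the helper below is illustrative:

```java
// Pressure from a force spread over a circular surface: p = F / (pi * d^2 / 4).
public class TablePressure {

    public static double pressure(double forceN, double diameterM) {
        double area = Math.PI * diameterM * diameterM / 4.0; // circle area [m^2]
        return forceN / area;                                // pressure [N/m^2]
    }

    public static void main(String[] args) {
        // 630 kN over a 0.3 m diameter circle: roughly 8.9e6 N/m^2
        System.out.println(pressure(630e3, 0.3));
    }
}
```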
4.4. Performing Analysis and Result Interpretation
The model prepared for analysis as shown was studied with COSMOS/M, with the FFE (Fast Finite Element) solving option. The results show both displacements and stresses, concerning maximum values and
distribution. In Figure 12 the stress distribution according to the von Mises theory is presented.
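For reference, the equivalent stress evaluated by the von Mises criterion used here is computed from the principal stresses as follows (a general formula, independent of this particular model):

```java
// Von Mises equivalent stress from the three principal stresses [N/m^2]:
// sigma_vm = sqrt( ((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 2 ).
public class VonMises {

    public static double equivalent(double s1, double s2, double s3) {
        return Math.sqrt(((s1 - s2) * (s1 - s2)
                        + (s2 - s3) * (s2 - s3)
                        + (s3 - s1) * (s3 - s1)) / 2.0);
    }

    public static void main(String[] args) {
        // In a uniaxial state the equivalent stress equals the applied stress;
        // a purely hydrostatic state gives zero equivalent stress.
        System.out.println(equivalent(1.82e8, 0.0, 0.0));
        System.out.println(equivalent(5e7, 5e7, 5e7));
    }
}
```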
The admissible strengths considered are: bending strength 2E+08 N/m^2 and traction-compression strength 1.8E+08 N/m^2.
It can be noticed that the stress values are generally low compared with the traction-compression strength of the bed material, σ[Von] = 1.8E+08 N/m^2, higher values being recorded only locally, in
the bed table zone. The maximum stress is 1.82E+08 N/m^2, and because the structure is stressed in traction-compression compounded with bending, the admissible strength considered is
σ[Von max] = 2E+08 N/m^2, so the stress is below the admissible strength.
5. Conclusions
This calculation presents a novelty compared with the classical calculus because different sections of the frame are considered, every 15°, which allows estimation of stresses along several
directions, providing more information on both the maximum values and the distribution of the stresses.
For more accurate and complete stress values and their distribution, a more complex calculation such as FEM is clearly required, which eventually allows a dimensional optimization considering the
real dynamic load, as stated by Neumann in ^[15].
So in the second part of the paper the stages needed for modeling complex structures, such as a mechanical press bed, are presented through a numeric example, together with the preparation of the
model for the subsequent FEA analysis.
It has to be mentioned that the specialty literature is relatively poor in offering complex stress calculations on models realized mostly with SHELL elements, for structures made of plates.
Based on the FEA application, the results show a continuous distribution of displacements and stresses that validates the model, proving it correct. It can be noticed that the stress values are
generally low compared with the traction-compression strength of the bed material, higher values being recorded only locally.
The maximum stress is 1.82E+08 N/m^2, and because the structure is stressed in traction-compression compounded with bending, the admissible strength considered is σ[Von max] = 2E+08 N/m^2, so the
stress is really below the admissible strength.
In Figure 13 and Figure 14 the stress values obtained by FEA in sections A-A and D-D, respectively, are presented. Table 2 presents the comparison between the values obtained by the classical
calculus and by FEA.
It can be noticed that the stress values obtained by the classical calculus are higher than the values obtained by FEA, confirming the assumption that a calculation based only on the simplified
structure leads to an oversized structure, this calculation method being usually used for verification.
Also, with this type of analysis it becomes possible to know the values at every point of interest in the bed, preparing the way for the sizing optimization of such a complex structure, an
optimization considering the dynamic load too.
[1] OSHA 1910.217, Mechanical power press requirements and platen presses, USA, 1991.
[2] I.C.P.M.U.A., Sibiu Subsidiary, Machine tools, typification rules, Romania, 2000.
[3] Schuler Inc., Metal forming handbook, Springer, Germany, 1998, 33-44.
[4] Smith, D.A., Mechanical press types and nomenclature, MI, USA, 2005, 1-27.
[5] Tabără, V., Catrina, D. and Ghana, V., Calculus, design and adjustment of presses, Technical Publishing House, Bucureşti, Romania, 1976, 71-80.
[6] Tschaetsch, H., Metal forming practice, Springer, Germany, 2005, 295-310.
[7] Iancu, C., Dimensional optimization of mechanical press, Ed. MJM, Craiova, Romania, 2002, 30-50.
[8] Iancu, C., Contributions to dimensional optimization of mechanical press in dynamic regime, Ph.D. Thesis, University of Pitesti, Romania, 2002, 20-39, 160-190.
[9] Imbert, J.F., Analyse de structure par elements finis, Ecole Nationale Superieure de l'Aeronautique et de l'Espace, Cepadues-Edition, Toulouse, France, 1991.
[10] Rao, S.S., The finite element method in engineering, fifth edition, Elsevier, 2011, 53-63.
[11] Neumann, M., and Hahn, H., "Computer simulation and dynamic analysis of a mechanical press based on different engineer models", Mathematics and Computers in Simulation, Vol. 46, Issues 5-6,
1998, 559-574.
[12] Cosmos/M -Finite Element Analysis System, User Guide, Structural Research & Analysis Corp., Los Angeles, CA, USA, 2002.
[13] Cosmos/M -Finite Element Analysis System, Basic FEA System, Structural Research & Analysis Corp., Los Angeles, CA, USA, 2002.
[14] Pro/Engineer -User guide, Parametric Technology Corporation, Waltham, MA, USA, 2001.
[15] Neumann, M., Dynamic analysis of mechanical presses, RTS Scientific rapport, Kassel University, Germany, 1994. | {"url":"http://pubs.sciepub.com/ajme/1/1/2/index.html","timestamp":"2014-04-20T20:57:24Z","content_type":null,"content_length":"105076","record_id":"<urn:uuid:38034580-ee95-4b43-ae6f-9f80395f6577>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
Energy consumption and economic growth: evidence from nonlinear panel cointegration and causality tests
Omay, Tolga and Hasanov, Mubariz and Ucar, Nuri (2012): Energy consumption and economic growth: evidence from nonlinear panel cointegration and causality tests.
Download (324Kb) | Preview
In this paper, we propose a nonlinear cointegration test for heterogeneous panels where the alternative hypothesis is an exponential smooth transition (ESTAR) model. We apply our tests for
investigating cointegration relationship between energy consumption and economic growth for the G7 countries covering the period 1977-2007. Moreover, we estimate a nonlinear Panel Vector Error
Correction Model in order to analyze the direction of the causality between energy consumption and economic growth. By using nonlinear causality tests we analyze the causality relationships in low
economic growth and high economic growth regimes. Furthermore, we deal with the cross section dependency problem in both nonlinear panel cointegration test and nonlinear Panel Vector Error Correction
Item Type: MPRA Paper
Original Energy consumption and economic growth: evidence from nonlinear panel cointegration and causality tests
English Energy Consumption and Economic Growth: Evidence from Nonlinear Panel Cointegration and Causality Tests
Language: English
Keywords: Nonlinear panel cointegration, nonlinear Panel Vector Error Correction Model, cross section dependency
C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C13 - Estimation: General
Subjects: C - Mathematical and Quantitative Methods > C2 - Single Equation Models; Single Variables > C23 - Models with Panel Data; Longitudinal Data; Spatial Time Series
C - Mathematical and Quantitative Methods > C3 - Multiple or Simultaneous Equation Models; Multiple Variables > C33 - Models with Panel Data; Longitudinal Data; Spatial Time Series
Item ID: 37653
Depositing Tolga Omay
Date 26. Mar 2012 14:15
Last 13. Feb 2013 13:53
Aloui, C., Jammazi, R. 2009. The effects of crude oil shocks on stock market shifts behaviour: A regime switching approach. Energy Economics 31, 789-799
Akinlo, A.E., 2008. Energy consumption and economic growth: evidence from 11 Sub-Sahara African countries. Energy Economics 30 (5), 2391–2400.
Ang, J.B., 2008. Economic development, pollutant emissions and energy consumption in Malaysia. Journal of Policy Modeling 30, 271–278.
Apergis, N., Payne, J.E., 2009. Energy consumption and economic growth in Central America: evidence from a panel cointegration and error correction model. Energy Economics 31, 211–216.
Balcilar M., Ozdemir, Z.A., Arslanturk, Y., 2010. Economic growth and energy consumption causal nexus viewed through a bootstrap rolling window. Energy Economics 32, 1398-1410
Bai, J., Ng, S., 2004. A PANIC Attack on Unit Roots and Cointegration. Econometrica 72(4), 1127-1177.
Bai, J., Kao, C., Ng, S., 2009. Panel cointegration with global stochastic trends. Journal of Econometrics 149(1), 82-99.
Banerjee, A., Marcellino, M., Osbat, C., 2004. Some cautions on the use of panel methods for integrated series of macroeconomics data. Econometrics Journal, 7, 322-340.
Beaudreau, B.C., 2005. Engineering and economic growth. Structural Change and Economic Dynamics 16, 211–220.
Belloumi, M., 2009. Energy consumption and GDP in Tunisia: cointegration and causality analysis. Energy Policy 37 (7), 2745–2753.
Boden, T.A., G. Marland, and R.J. Andres. 2010. Global, Regional, and National Fossil-Fuel CO2 Emissions. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S.
Department of Energy, Oak Ridge, Tenn., U.S.A. doi 10.3334/CDIAC/00001_V2010.
Chang, Y., 2004. Bootstrap unit root tests in panels with cross-sectional dependency. Journal of Econometrics, 120(2), pages 263-293.
Cheng-Lang, Y., Lin, H.-P., Chang, C.-H., 2010. Linear and nonlinear causality between sectoral electricity consumption and economic growth: Evidence from Taiwan. Energy Policy Volume 38
(11), 6570-6573
Chiou-Wei, S.Z., Chen, Ching-Fu, Zhu, Z., 2008. Economic growth and energy consumption revisited—evidence from linear and nonlinear Granger causality. Energy Economics 30(6),3063–3076.
Costantini, V., Martini, C., 2010. The causality between energy consumption and economic growth: A multi-sectoral analysis using non-stationary cointegrated panel data. Energy Economics
32, 591-603
Engle, R.F., Granger, C.W.J., 1987. Co-integration and error correction: representation, estimation, and testing. Econometrica 55: 251-276
Erdal, G., Erdal, H., Esengün, K., 2008. The causality between energy consumption and economic growth in Turkey. Energy Policy 36(10), 3838–3842.
Francis, B.M., Moseley, L., Iyare, S.O., 2007. Energy consumption and projected growth in selected Caribbean countries. Energy Economics 29, 1224–1232.
Gabreyohannes, E., 2010. A nonlinear approach to modelling the residential electricity consumption in Ethiopia. Energy Economics 32, 515-523
Ghali, K.H., El-Sakka, M.I.T., 2004. Energy use and output growth in Canada: a multivariate cointegration analysis. Energy Economics 26 (2), 225–238.
Glasure, Y.U., 2002. Energy and national income in Korea: further evidence on the role of omitted variables. Energy Economics 24, 355–365.
Gonzalez, A., Teräsvirta, T., Dijk, D. 2005. Panel smooth transition regression models, Working Paper Series in Economics and Finance No 604, Stockholm School of Economics, Sweden.
Granger, C.W.J., Teräsvirta, T., 1993. Modelling nonlinear economic relationships. Advanced Texts in Econometrics. Oxford University Press, New York, USA.
Greene, W.H., 1997. Econometric Analysis, Third Edition. Prentice Hall, New Jersey, USA.
Halicioglu, F., 2009. An econometric study of CO2 emissions, energy consumption, income and foreign trade in Turkey. Energy Policy 37, 1156–1164.
Hamilton, J.D. 2003. What is an oil shock? Journal of Econometrics 113, 363-398
Hansen, B. 1999. Threshold effects in non-dynamic panels: estimation, testing and inference. Journal of Econometrics 93(2), 345-368
Hasanov, M., Telatar, E. 2011. A re-examination of stationarity of energy consumption: Evidence from new unit root tests. Energy Policy 39, 7726–7738
Heston, A., Summers, R., and Aten, B. (2009) Penn World Table Version 6.3. Center for International Comparisons of Production, Income and Prices at the University of Pennsylvania, August
Ho, C-Y., Siu, K.W., 2007. A dynamic equilibrium of electricity consumption and GDP in Hong Kong: an empirical investigation. Energy Policy 35 (4), 2507–2513.
Huang, B.N., Hwang, M.J. Yang, C.W., 2008. Does more energy consumption bolster economic growth? An application of the nonlinear threshold regression model. Energy Policy 36, 755-767
Im, K.S., Pesaran, H., Shin, Y., 2003. Testing for unit roots in heterogeneous panels. Journal of Econometrics 115, 53–74.
Kapetanios, G., Shin, Y., Snell, A. 2003. Testing for a unit root in the nonlinear STAR framework. Journal of Econometrics 112, 359-79.
Kapetanios, G., Shin, Y., Snell, A. 2006. Testing for cointegration in nonlinear smooth transition error correction models. Econometric Theory 22, 279-303.
Kapetanios, G., Pesaran, M. H., Yamagata, T. 2011. Panels with non-stationary multifactor error structures. Journal of Econometrics, 160(2), 326-348.
Kraft, J., Kraft, A., 1978. On the relationship between energy and GNP. Journal of Energy and Development 3, 401–403.
Lee, C.C., 2006. The causality relationship between energy consumption and GDP in G-11 countries revisited. Energy Policy 34, 1086–1093.
Lee, C.C., Chien, M.S., 2010. Dynamic modelling of energy consumption, capital stock, and real income in G-7 countries. Energy Economics 32, 564-581
Lee, C.C., Chang, C.P., 2007. Energy consumption and GDP revisited: a panel analysis of developed and developing countries. Energy Economics 29, 1206–1223.
Lee, C.C., Chang, C. P. 2008. Energy consumption and economic growth in Asian economies: A more comprehensive analysis using panel data. Resource and Energy Economics 30, 50–65
Li, J. 2006. Testing Granger causality in the presence of threshold effects. International Journal of Forecasting 22, 771–780.
Luukkonen, R., Saikkonen, P., Teräsvirta, T. 1988. Testing linearity against smooth transition autoregressive models. Biometrika 75, pp. 491-99
Maddala, G. S., Wu, S. 1999. A Comparative Study of Unit Root Tests with Panel Data and New Simple Test. Oxford Bulletin of Economics and Statistics 61, 631-652.
Maki, D. 2010. An alternative procedure to test for cointegration in STAR models. Mathematics and Computers in Simulation 80, 999-1006
Mehrara, M., 2007. Energy consumption and economic growth: the case of oil exporting countries. Energy Policy 35 (5), 2939–2945.
Moon, Y.S., Sonn, Y.H., 1996. Productive energy consumption and economic growth: an endogenous growth model and its empirical application. Resource and Energy Economics 18, 189–200.
Moon, H.R., Hyungsik, R., Perron, B., 2004. Testing for a unit root in panels with dynamic factors. Journal of Econometrics, 122(1), 81-126.
Narayan, P.K., Smyth, R., Prasad, A., 2007. Electricity consumption in the G7 countries: a panel cointegration analysis of residential demand elasticities. Energy Policy 35 (9),
Narayan, P.K., Smyth, R., 2008. Energy consumption and real GDP in G7 countries: new evidence from panel cointegration with structural breaks. Energy Economics 30, 2331–2341.
Oh, W., Lee, K., 2004. Causal relationship between energy consumption and GDP: the case of Korea 1970–1999. Energy Economics 26 (1), 51–59.
Omay, T., Kan, E.O., 2010. Re-examining the Threshold Effects in the Inflation-Growth Nexus: OECD Evidence. Economic Modelling 27 (5), 995-1004.
Ozturk, I., 2010. A literature survey on energy–growth nexus. Energy Policy 38, 340–349.
Payne, J.E., 2009. On the dynamics of energy consumption and output in the US. Applied Energy 86 (4), 575–577.
Pedroni, P., 1999. Critical values for cointegration tests in heterogeneous panels with multiple regressors. Oxford Bulletin of Economic and Statistics 61, 653–678.
Pesaran, M.H. 2004. General diagnostic tests for cross-section dependence in panels. Cambridge Working Papers in Economics 0435, Faculty of Economics, University of Cambridge, UK.
Pesaran, M.H. 2007. A simple panel unit root test in the presence of cross-section dependence. Journal of Applied Econometrics 22(2), 265-312.
Rahman, S., Serletis, A. 2010 The Asymmetric Effects of Oil Price and Monetary Policy Shocks: A Nonlinear VAR Approach, Energy Economics 32, 1460-1466
Soytas, U., Sari, R., 2003. Energy consumption and GDP: causality relationship in G-7 countries and emerging markets. Energy Economics 25, 33–37.
Soytas, U., Sari, R., 2006. Energy consumption and income in G7 countries. Journal of Policy Modeling 28, 739–750
Stern, D.I., 2000. A multivariate cointegration analysis of the role of energy in the US macroeconomy. Energy Economics 22, 267–283.
Teräsvirta, T., 1994. Specification, estimation, and evaluation of smooth transition autoregressive models. Journal of the American Statistical Association 89, 208–218.
Teräsvirta, T., Anderson, H.M., 1992. Characterizing nonlinearities in business cycles using smooth transition autoregressive models. Journal of Applied Econometrics 7, S119–S136.
Ucar N. and Omay T.,(2009) “Testing For Unit Root In Nonlinear Heterogeneous Panels” Economics Letters. 104(1), 5-7.
Wolde-Rufael, Y., 2004. Disaggregated industrial energy consumption and GDP: the case of Shanghai. Energy Economics 26, 69–75.
Zachariadis, T., 2007. Exploring the relationship between energy use and economic growth with bivariate models: new evidence from G-7 countries. Energy Economics 29 (6), 1233–1253.
Zamani, M., 2007. Energy consumption and economic activities in Iran. Energy Economics 29 (6), 1135–1140.
Zhang, X.P., Cheng, X.M., 2009. Energy consumption, carbon emissions, and economic growth in China. Ecological Economics 68 (10), 2706–2712.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/37653 | {"url":"http://mpra.ub.uni-muenchen.de/37653/","timestamp":"2014-04-16T07:25:44Z","content_type":null,"content_length":"39753","record_id":"<urn:uuid:c1742845-26e3-4e00-a229-2568ea6c3b4d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monads, Vector Spaces and Quantum Mechanics pt. II
Back from wordpress.com:
I had originally intended to write some code to simulate quantum computers and implement some quantum algorithms. I'll probably eventually do that but today I just want to look at quantum mechanics
in its own right as a kind of generalisation of probability. This is probably going to be the most incomprehensible post I've written in this blog. On the other hand, even though I eventually talk
about the philosophy of quantum mechanics, there's some Haskell code to play with at every stage, and the codeives the same results as appear in physics papers, so maybe that will help give a handle
on what I'm saying.
First get some Haskell fluff out of the way:
> import Prelude hiding (repeat)
> import Data.Map (toList,fromListWith)
> import Complex
> infixl 7 .*
Now define certain types of vector spaces. The idea is the a
W b a
is a vector in a space whose basis elements are labelled by objects of type
and where the coefficients are of type
> data W b a = W { runW :: [(a,b)] } deriving (Eq,Show,Ord)
This is very similar to standard probability monads except that I've allowed the probabilities to be types other than
. Now we need a couple of ways to operate on these vectors.
allows the application of a function transforming the probabilities...
> mapW f (W l) = W $ map (\(a,b) -> (a,f b)) l
applies a function to the basis element labels.
> instance Functor (W b) where
> fmap f (W a) = W $ map (\(a,p) -> (f a,p)) a
We want our vectors to support addition, multiplication, and actually form a monad. The definition of
is similar to that for other probability monads. Note how vector addition just concatenates our lists of probabilities. The problem with this is that if we have a vector like a+2a we'd like it to be
reduced to 3a but in order to do that we need to be able to spot that the two terms a and 2a both contain multiples of the same vector, and to do that we need the fact that the labels are instances
. Unfortunately we can't do this conveniently in Haskell because of the lack of
restricted datatypes
and so to collect similar terms we need to use a separate
> instance Num b => Monad (W b) where
> return x = W [(x,1)]
> l >>= f = W $ concatMap (\(W d,p) -> map (\(x,q)->(x,p*q)) d) (runW $ fmap f l)
> a .* b = mapW (a*) b
> instance (Eq a,Show a,Num b) => Num (W b a) where
> W a + W b = W $ (a ++ b)
> a - b = a + (-1) .* b
> _ * _ = error "Num is annoying"
> abs _ = error "Num is annoying"
> signum _ = error "Num is annoying"
> fromInteger a = if a==0 then W [] else error "fromInteger can only take zero argument"
> collect :: (Ord a,Num b) => W b a -> W b a
> collect = W . toList . fromListWith (+) . runW
Now we can specialise to the two monads that interest us:
> type P a = W Float a
> type Q a = W (Complex Float) a
is the (hopefully familiar if you've read Eric's recent posts) probability monad. But
allows complex probabilities. This is because quantum mechanics is a lot like probability theory with complex numbers and many of the rules of probability theory carry over.
Suppose we have a (non-quantum macroscopic) coin that we toss. It's state might be described by:
> data Coin = Heads | Tails deriving (Eq,Show,Ord)
> coin1 = 0.5 .* return Heads + 0.5 .* return Tails :: P Coin
Suppose that if Albert sees a coin that is heads up he has a 50% chance of turning it over and if he sees a coin that is tails up he has a 25% chance of turning it over. We can describe Albert like
> albert Heads = 0.5 .* return Heads + 0.5 .* return Tails
> albert Tails = 0.25 .* return Heads + 0.75 .* return Tails
We can now ask what happens if Albert sees a coin originally turned up heads n times in a row:
> repeat 0 f = id
> repeat n f = repeat (n-1) f . f
> (->-) :: a -> (a -> b) -> b
> g ->- f = f g
> (-><) :: Q a -> (a -> Q b) -> Q b
> g ->< f = g >>= f
> albert1 n = return Heads ->- repeat n (->< albert) ->- collect
Let me explain those new operators.
is just function application written from left to right. The
in the middle is intended to suggest the direction of data flow.
is just
but I've written it this way with the final
intended to suggest the way a function
a -> M b
'fans out'. Anyway, apropos of nothing else, notice how Albert approaches a steady state as n gets larger.
Quantum mechanics works similarly but with the following twist. When we come to observe the state of a quantum system it undergoes the following radical change:
> observe :: Ord a => Q a -> P a
> observe = W . map (\(a,w) -> (a,magnitude (w*w))) . runW . collect
Ie. the quantum state becomes an ordinary probabilistic one. This is called
wavefunction collapse
. Before collapse, the complex weights are called 'amplitudes' rather than probabilities. The business of physicists is largely about determining what these amplitudes are. For example, the well
Schrödinger equations
is a lot like a kind of probabilistic diffusion, like a random walk, except with complex probabilities instead of amplitudes. (That's why so many physicists have been hired into finance firms in
recent years - stocks follow a random walk which has formal similarities to quantum physics.)
The rules of quantum mechanics are a bit like those of probability theory. In probability theory the sum of the probabilites must add to one. In addition, any process (like
) must act in such a way that if the input sum of probabilities is one, then so is the output. This means that probabilistic process are
. In quantum mechanics the sum of the squares of the magnitudes of the amplitudes must be one. Such a state is called 'normalised'. All processes must be such that normalised inputs go to normalised
outputs. Such processes are called
There's a curious subtlety present in quantum mechanics. In classical probability theory you need to have the sum of the probabilities of your different events to sum to one. But it's no good having
events like "die turns up 1", "die turns up 2", "die turns up even" at the same time. "die turns up even" includes "die turns up 2". So you always need to work with a mutually exclusive set of
events. In quantum mechanics it can be pretty tricky to figure out what the mutually exclusive events are. For example, when considering the spin of an electron, there are no more mutually exclusive
events beyond "spin up" and "spin down". You might think "what about spin left?". That's just a mixture of spin up and spin down - and that fact is highly non-trivial and non-obvious. But I don't
want to discuss that now and it won't affect the kinds of things I'm considering below.
So here's an example of a quantum process a bit like
above. For any angle $latex \theta$,
turns a boolean state into a mixture of boolean states. For $latex \theta=0$ it just leaves the state unchanged and for $latex \theta=\pi$ it inverts the state so it corresponds to the function
. But for $latex \theta=\pi/2$ it does something really neat: it is a kind of square root of
. Let's see it in action:
> rotate :: Float -> Bool -> Q Bool
> rotate theta True = let theta' = theta :+ 0
> in cos (theta'/2) .* return True - sin (theta'/2) .* return False
> rotate theta False = let theta' = theta :+ 0
> in cos (theta'/2) .* return False + sin (theta'/2) .* return True
> snot = rotate (pi/2)
> repeatM n f = repeat n (>>= f)
> snot1 n = return True ->- repeatM n snot ->- observe
We can test it by running
snot1 2
to see that two applications take you to where you started but that
snot1 1
gives you a 50/50 chance of finding
. Nothing like this is possible with classical probability theory and it can only happen because complex numbers can 'cancel each other out'. This is what is known as 'destructive interference'. In
classical probability theory you only get constructive interference because probabilities are always positive real numbers. (Note that
is just a monadic version of repeat - we could have used it to simplify
above so there's nothing specifically quantum about it.)
Now for two more combinators:
> (=>=) :: P a -> (a -> b) -> P b
> g =>= f = fmap f g
> (=><) :: P (Q a) -> (a -> Q b) -> P (Q b)
> g =>< f = fmap (>>= f) g
The first just uses
to apply the function. I'm using the
sign as a convention that the function is to be applied not at the top level but one level down within the datastructure. The second is simply a monadic version of the first. The reason we need the
latter is that we're going to have systems that have both kinds of uncertainty - classical probabilistic uncertainty as well as quantum uncertainty. We'll also want to use the fact that
is a monad to convert doubly uncertain events to singly uncertain ones. That's what
> join :: P (P a) -> P a
> join = (>>= id)
OK, that's enough ground work. Let's investigate a physical process that can be studied in the lab: the
Quantum Zeno effect
, otherwise known as the fact that a watched pot never boils. First an example related to
> zeno1 n = return True ->- repeatM n (rotate (pi/fromInteger n)) ->- collect ->- observe
The idea is that we 'rotate' our system through an angle $latex \pi/n$ but we do so in n stages. The fact that we do it in n stages makes no difference, we get the same result as doing it in one go.
The slight complication is this: suppose we start with a probabilistic state of type
P a
. If we let it evolve quantum mechanically it'll turn into something of type
P (Q a)
. On observation we get something of type
P (P a)
. We need
to get a single probability distribution of type
P a
. The
is nothing mysterious, it just combines the outcome of two successive probabilistic processes into one using the usual laws of probability.
But here's a variation on that theme. Now we carry out n stages, but after each one we observe the system causing wavefunction collapse:
> zeno2 n = return True ->- repeat n (
> \x -> x =>= return =>< rotate (pi/fromInteger n) =>= observe ->- join
> ) ->- collect
Notice what happens. In the former case we flipped the polarity of the input. In this case it remains closer to the original state. The higher we make n the closer it stays to its original state.
(Not too high, start with small n. The code suffers from combinatorial explosion.)
's a paper describing the actual experiment. Who needs all that messing about with sensitive equipment when you have a computer? :-)
A state of the form
P (Q a)
is called a mixed state. Mixed states can get a bit hairy to deal with as you have this double level of uncertainty. It can get even trickier because you can sometimes observe just
of a quantum system rather than the whole system like
does. This inevitably leads mixed states. von Neumann came up with the notion of a
density matrix
to deal with this, although a
P (Q a)
works fine too. I also have a hunch there is an elegant way to handle them through an object of type
P (Q (Q a))
that will eliminate the whole magnitude squared thing. However, I want to look at the quantum Zeno effect in another way that ultimately allows you deal with mixed states in another way.
Unfortunately I don't have time to explain this elimination today, but we can look at the general approach.
In this version I'm going to consider a quantum system that consists of the logical state in the Zeno examples, but also include the state of the observer. Now standard
says you can't can't form quantum states out of observers. In other words, you can't form
Q Observer
is the state of the observer. It says you can only form
P Observer
. Whatever. I'm going to represent an experimenter using a list that representing the sequence of measurements they have made. Represent the complete system by a pair of type
. The first element of the pair is the experimenter's memory and the second element is the state of the boolean variable being studied. When our experimenter makes a measurement of the boolean
variable, its value is simply prepended to his or her memory:
> zeno3 n = return ([],True) ->- repeatM n (
> \(m,s) -> do
> s' <- rotate (pi/fromInteger n) s
> return (s:m,s')
> ) ->- observe =>= snd ->- collect
Note how we now delay the final observation until the end when we observe both the experimenter and the poor boolean being experimented on. We want to know the probabilities for the final boolean
state so we apply
so as to discard the state of the observer's memory. Note how we get the same result as
. (Note no mixed state, just an expanded quantum state that collapses to a classical probabilistic state.)
There's an interesting philosophical implication in this. If we model the environment (in this case the experimenter is part of that environment) as part of a quantum system, we don't need all the
intermediate wavefunction collapses, just the final one at the end. So are the intermediate collapses real or not? The interaction with the environment is known as
and some hope that wavefunction collapse can be explained away in terms of it.
Anyway, time for you to go and do something down-to-earth like gardening. Me, I'm washing the kitchen floor...
I must mention an important cheat I made above. When I model the experimenter's memory as a list I'm copying the state of the measured experiment into a list. But you can't simply copy data into a
quantum register. One way to see this is that unitary processes are always invertible. Copy data into a register destroys the value that was there before and hence is not invertible. So instead,
imagine that we really have an array that starts out zeroed out and that each time something is added to the list, the new result is xored into the next slot in the array. The list is just
non-unitary looking, but convenient, shorthand for this unitary process.
4 comments:
This reminds me of Youssef's theory that bayesian probabilities should actually really be complex numbers, and that quatum phyisics makes more sense if they are. I've never quite got my head
round it, though. You can find it via Youseff's web page:
I've met Youssef's stuff before though I've not read it properly. But I do agree very strongly with his opening sentence
"If it weren’t for the weight of history, it would seem natural to take quantum mechanical phenomena as an indication that something has gone wrong with probability theory and to attempt to
explain such phenomena by modifying probability theory itself, rather than by invoking quantum mechanics."
Except that it seems to me that quantum mechanics is a modified probability theory. Skimming ahead I did notice that like me, he eliminates mixed states. I did it more out of computational
convenience that anything - but on thinking further I suspect I'm doing the same thing as Youssef. To be honest, I wasn't trying to do anything non-standard, I just wanted to do some textbook
standard QM and show it it formally looks just like probability theory to the point where you can share code between a probability and quantum monad.
Really nice review! Unboxing is the best part of getting something! thanks for a nice post.
Quantum mechanics can already be seen as a theory of projection-valued measures (ie. a theory where probability measures take "non-commuting values"). This point of view has been well established
since Von Neumann.
Answers to questions "is my spin up" or "is the particle within the Borel set A" are answered with a closed subspace (equivalently with the projection onto this closed space), such that familiar
laws of measure theory hold : P(\cup_i A_i) = \sum_i P(A_i) for disjoint A_i, P(never) = the zero projector, P(always) = identity.
Such projection valued measures naturally yield self adjoint operators (and conversely, by the Spectral Theorem) which are the "observables" that we know about.
Heck, one could even speak of toposes here, with the projections of some fixed Hilbert space as the subobject classifier and Hilbert tensor products as products (objects, ie. state spaces are
Hilbert spaces), particular subspaces as equalizers, C as an initial object, etc.
Projection valued measures form some sort of monad: a measure is a device which integrates functions, its type is something like (a -> r) -> r (r fixed, here to a vector space of projectors), and
the "monad of measures" is therefore a kind of Cont monad...
This should perhaps make sense in Haskell. Thanks again for this blog. | {"url":"http://blog.sigfpe.com/2007/03/monads-vector-spaces-and-quantum.html","timestamp":"2014-04-21T12:09:56Z","content_type":null,"content_length":"78626","record_id":"<urn:uuid:7263f119-2b39-4638-bcaf-d62a8ff1da0b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: how to fix subscript indices must either be real positive integers or logicals?
Replies: 4 Last Post: Aug 28, 2013 2:06 PM
Messages: [ Previous | Next ]
dpb Re: how to fix subscript indices must either be real positive integers
or logicals?
Posts: 7,876 Posted: Aug 28, 2013 1:49 PM
Registered: 6/7/07
On 8/28/2013 12:18 PM, p v bhumireddi wrote:
> im new to matlab.im executing program and im getting repeatedly the
> above mentioned error
> my code is w=[2:5];
> int=0;
> for k=1:w
> e(int)=[exp(i*k) exp(2*i*k)];
> int=int+1;
> end can anyone help me.urgent
What's the value of 'int' the first pass thru when it is used as the
index to the array e? Matlab arrays are 1-based.
One line of code rearrangement will resolve that problem at which time
you'll discover a few more.
for k=1:w
won't return what you expect
doc for
for why and how to correct it.
Date Subject Author
8/28/13 how to fix subscript indices must either be real positive integers or logicals? p v bhumireddi
8/28/13 Re: how to fix subscript indices must either be real positive integers or logicals? Alan
8/28/13 Re: how to fix subscript indices must either be real positive integers or logicals? Alan
8/28/13 Re: how to fix subscript indices must either be real positive integers or logicals? Curious
8/28/13 Re: how to fix subscript indices must either be real positive integers dpb
or logicals? | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2592310&messageID=9236899","timestamp":"2014-04-20T19:40:13Z","content_type":null,"content_length":"21461","record_id":"<urn:uuid:703b5ad5-d7e4-4f63-a5c2-049c1f84f6e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Residuals from sequence of regressions
Re: st: Residuals from sequence of regressions
From: DBernstein@lecg.com
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Residuals from sequence of regressions
Date: Mon, 18 Oct 2004 13:38:13 -0400
I'm new to replying to messages, so I hope I post this correctly.
Anyway, here is a solution to the residual question that should work:
If anyone has a more efficient solution, I would be interested in seeing it:

levels year, local(Y)
gen residual=.
foreach y of local Y {
    reg y x1 x2 x3 if year==`y'
    predict temp, residual
    replace residual=temp if year==`y'
    drop temp
}
David J. Bernstein, Ph.D.
Managing Economist
LECG Inc.
Office: 215-546-4950 ext. 239
Fax: 215-546-3568
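For readers outside Stata, the same per-year pattern can be sketched in Python with NumPy; the variable names and data layout below are invented for illustration and are not part of the thread:

```python
import numpy as np

def groupwise_residuals(year, y, X):
    """OLS of y on X (with a constant) within each year; residuals are
    written back into a full-length array, mirroring the foreach loop."""
    year = np.asarray(year)
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    resid = np.full(y.shape, np.nan)
    for g in np.unique(year):
        m = year == g                                  # rows for this year
        Xg = np.column_stack([np.ones(m.sum()), X[m]])  # add intercept
        beta, *_ = np.linalg.lstsq(Xg, y[m], rcond=None)
        resid[m] = y[m] - Xg @ beta                    # store residuals
    return resid
```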
Subject: st: Residuals from sequence of regressions
I'm relatively new to Stata, and have a seemingly simple question but,
despite consulting physical and on-line manuals, I have been unable to
answer it so assistance would be appreciated.
I have a large balanced panel of N observations over T years. I would like
to do a cross-sectional regression for each year, and record the residuals
for the regression for each of the N observations for that year. E.g. for
1960, I want each of the N observations to have a new variable which is the
residual from the 1960 cross-sectional regression across the N observations.
Then I want the same for 1961 etc.
I know how to run individual regressions for each year, but I am unsure how
to store the residuals from the regression for each year. To run the
regressions, I type:
sort year
by year: quietly reg y x1 x2 x3

Normally, after a regression you can type "predict res, residual", but I am
not sure how to incorporate it in the above framework. Stata doesn't seem to
let me type more than one command after the "by" function. E.g. it won't let
me type something like

by year: quietly reg y x1 x2 x3, predict res, residual
and typing "predict res, residual" at the end merely gives me the residuals
for the cross-sectional regression for the last year.
Any help would be gratefully received.
Thanks, Alex
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
End of statalist-digest V4 #1765
The 20-lb Block A And The 30-lb Block B Are Supported ... | Chegg.com
Image text transcribed for accessibility:

1. The 20-lb block A and the 30-lb block B are supported by an incline that is
held in the position shown. Knowing that the coefficient of static friction is
0.15 between the two blocks and zero between block B and the incline, determine
the value of theta for which motion is impending.

2. The 20-lb block A and the 30-lb block B are supported by an incline that is
held in the position shown. Knowing that the coefficient of static friction is
0.15 between all surfaces of contact, determine the value of theta for which
motion is impending.

3. The coefficients of friction are mu_s = 0.40 and mu_k = 0.30 between all
surfaces of contact. Determine the smallest force P required to start the 30-kg
block moving if cable AB (a) is attached as shown, (b) is removed.

4. A 40-kg packing crate must be moved to the left along the floor without
tipping. Knowing that the coefficient of static friction between the crate and
the floor is 0.35, determine (a) the largest allowable value of a, (b) the
corresponding magnitude of the force P.
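All of these statements hinge on the impending-motion condition F = mu_s * N. As a minimal illustration, simpler than any of the set-ups above (a single block on a bare incline, with no second block, cable, or applied force), slip impends when tan(theta) equals mu_s:

```python
import math

def impending_angle_deg(mu_s):
    # Single block on an incline: along the slope W*sin(theta) = F,
    # normal to it W*cos(theta) = N; at impending slip F = mu_s * N,
    # hence tan(theta) = mu_s.
    return math.degrees(math.atan(mu_s))

print(impending_angle_deg(0.15))  # about 8.53 degrees
```

The two-block problems above add a second force balance (block A resting on block B) before this same condition is applied at each contact surface.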
Mechanical Engineering
getters & setters are bad, design patterns
I mean how to compute the angle between them
This is the proper definition of a dot product:

u dot v = u_1 v_1 + u_2 v_2 + u_3 v_3 = |u| |v| cos \theta

I am using this notation rather than u = (u_x, u_y, u_z).

Calculating the dot product by multiplying its components is much easier than using the magnitudes.
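To make the "angle between them" computation concrete, here is a small sketch (the helper names are made up, not from the thread):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def angle_between(u, v):
    # theta = arccos( u.v / (|u| |v|) ), straight from the definition above.
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

print(angle_between((1, 0, 0), (0, 1, 0)))  # pi/2, i.e. 90 degrees
```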
So it's pretty obvious that they can be badly abused.
That example is particularly bad because it's straight assignment, and completely negates the ideas of deposit or withdrawal.
But it doesn't mean that they can't be written properly with checks & validation.
Maybe this topic fits into the same category as a beginner routinely writing an endless loop because they can't be bothered thinking of an end condition, but there are situations where an infinite loop done properly is a good thing.
Can anyone show how they think a Circle class should look? So that a DrawTangentLine calculation can be done with 2 circles? Without get functions? There is no need to show any calculations - I would
just like to see how the data is accessed.
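One way such a class could look, sketched in Python for brevity (the same shape carries over to a C++ class; the names below are invented, not from the thread). The data stays inside the class and the tangent computation is a method taking the other circle, so no get functions are exposed:

```python
import math

class Circle:
    def __init__(self, x, y, r):
        # Leading underscores: internal state by convention, no getters.
        self._x, self._y, self._r = x, y, r

    def centre_distance(self, other):
        return math.hypot(other._x - self._x, other._y - self._y)

    def external_tangent_length(self, other):
        # Length of an external tangent segment between two circles:
        # sqrt(d^2 - (r1 - r2)^2); None if one circle lies inside the other.
        d = self.centre_distance(other)
        s = d * d - (self._r - other._r) ** 2
        return math.sqrt(s) if s >= 0 else None

print(Circle(0, 0, 1).external_tangent_length(Circle(5, 0, 2)))  # sqrt(24)
```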
Does anyone have any thoughts on the GUI problem I mentioned above?
Any way it's late again - so I will see what anyone has to say tomorrow :)
Edit: Really slow typing again. I didn't see the last 2 posts while I was doing mine.
- you are a champion - I will look at that info tmrw.
Proof using the Sum Rule
October 17th 2010, 12:50 PM #1
Oct 2010
Proof using the Sum Rule
I'm studying for a test and I came across this exercise...
Let a be contained in C (the complex numbers) and let p be a polynomial; then p(x)-->p(a) as x-->a. The hint given says to use the Sum Rule and induction on the degree, but I am still not sure how to get started. Any guidance would be appreciated.
Base Case:
Let $P(x) = c_1x + c_0$ with $c_1 \neq 0$ (if $c_1 = 0$, then $P$ is constant and the claim is immediate), and let $\epsilon > 0$ be given. Take x such that $|x - a| < \frac{\epsilon}{|c_1|}$.
Then, $|P(a) - P(x)| = |c_1a + c_0 - (c_1x + c_0)| = |c_1(a - x)| = |c_1|*|a - x| < |c_1| * \frac{\epsilon}{|c_1|} = \epsilon$.
Alright cool. That makes sense. Thanks.
Let a be contained in C(complex numbers) and p is a polynomial, then p(x)-->p(a) as x-->a.
Since a polynomial is built from constants, x, addition and multiplication, this statement is a corollary of two facts: $\lim_{x\to a}(f(x)+g(x))=\lim_{x\to a}f(x) + \lim_{x\to a}g(x)$, and $\
lim_{x\to a}(f(x)\cdot g(x))=(\lim_{x\to a}f(x))\cdot (\lim_{x\to a}g(x))$. (Well, technically, you also need $\lim_{x\to a}c=c$ for a constant $c$ and $\lim_{x\to a}x=a$.) It is suggested that
you break a polynomial into a sum of monomials and use the fact about the sum of limits above. Also, it is suggested that you show that $\lim_{x\to a} x^n=a^n$ by induction on $n$ using the fact
about the product of limits above.
Whether the facts about the sum and product are taken for granted or need a proof depends on your course.
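A sketch of the suggested inductive step for the powers, using the product rule quoted above:

$\lim_{x\to a} x^{n+1} = \lim_{x\to a}(x^n\cdot x) = \left(\lim_{x\to a} x^n\right)\cdot\left(\lim_{x\to a} x\right) = a^n\cdot a = a^{n+1}$,

with base case $\lim_{x\to a} x = a$. The product rule also gives $\lim_{x\to a} c_k x^k = c_k a^k$ for each monomial, and the Sum Rule over the monomials then yields $p(x) \to p(a)$.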
Mead, CO Algebra Tutor
Find a Mead, CO Algebra Tutor
...I have taught all levels of elementary school math from kindergarten through 6th grade and beyond! I love working with students to discover where challenges lie and finding fun and engaging
ways to help them learn! I have taught all types of elementary school and middle school science for 19 years in the public school classroom and as a tutor.
14 Subjects: including algebra 1, algebra 2, reading, geometry
...I am a senior enrolled at the University of Colorado at Boulder studying International Affairs and Political Science with a minor in economics. I have earned a degree in Biotechnology at the
Delaware Technical Community College. I have extensive course work in chemistry, biology, politics, economics, and physics with experience in many other fields.
39 Subjects: including algebra 2, algebra 1, chemistry, reading
...I love math, and I enjoy helping students understand it as well. When working with a student, I usually try to show the student how I understand the material first. Then upon understanding the
student's learning style, I will incorporate a variety of methods to help the student.
17 Subjects: including algebra 1, algebra 2, chemistry, physics
...The absolute proudest moment (and the biggest surprise) of my career so far came when one of these students ran to me from across a dance floor and threw her arms around my neck, saying she'd
scored a 32! As a multidisciplinary learner, I believe adamantly in the educational power of the arts, a...
31 Subjects: including algebra 1, English, reading, Spanish
...I hold a PhD degree in Mathematics, and my research interests include Differential Equations. I have taught Differential Equations to a broad range of students including Computer Scientists,
Math/Physics/Material Sciences students. I also have tutored 12th graders as well as undergraduate students in Differential Equations for at least five years.
15 Subjects: including algebra 2, algebra 1, calculus, French
Subject Guides
General Resources
Reference Tools
Professional Associations
Provides full text for nearly 3,200 scholarly publications covering academic areas of study including social sciences, humanities, education, computer sciences, engineering, language and linguistics,
arts & literature, medical sciences, and ethnic studies.
Applied Science and Technology Abstracts
Applied Science and Technology Abstracts covers every area of science, engineering and technology. Sources are indexed from 1983 on and include trade and industrial publications, journals issued by
professional and technical societies, and specialized subject periodicals. Citations begin with 1983; Abstracts begin with 1994. The CUNY+ version, TECH, contains only citations.
General Science Abstracts
General Science Abstracts indexes articles of at least one column in length from English-language periodicals published in the United States and Great Britain beginning with 1984. Topics include
Astronomy, Biology, Chemistry, Conservation, Earth Science, Environment, Health, Mathematics, Medicine, Nutrition, and Physics. Abstracting coverage begins with periodicals published in March 1993.
This electronic journal service provides full text access to more than 250 journals in language and literature, Latin American and Asian studies, political science, history, business, arts and
sciences, and other fields in the social sciences and humanities. Journal coverage begins with the first issue of each journal, in some cases going as far back as the 1800s. Current journals are not
entered into the database for 2 to 5 years.
LEXIS-NEXIS Academic Universe
LEXIS-NEXIS Academic Universe is a full-text database that provides access to a wide range of news, business, legal and reference information compiled specifically for the research needs of academic
institutions. LEXIS-NEXIS may be searched for general, company, industry, legal and government news, company financial information, country and state profiles, biographical information, medical
abstracts, law reviews, U.S. code, constitution and court rules, and state legal
MasterFILE Premier
Provides full text for over 1,880 periodicals covering nearly all subjects including general reference, business, health, and much more.
netLibrary is a collection of full-text e-books, including 860 titles purchased by CUNY and the entire 4,000 volume netLibrary Public eBook Collection. netLibrary titles are listed in the CUNY+
catalog. netLibrary requires each user to register before checking out titles.
Readers Guide Abstracts
Readers Guide Full Text, is a database containing comprehensive indexing and abstracting of popular general-interest periodicals published in the United States and Canada. Coverage begins with 1983.
WWW version of the OCLC union catalog containing bibliographic information on over 39 million items from books and journals to CD-ROMs and videos. Includes holding information from libraries around
the world. Part of FirstSearch (OCLC).
A+ Math
This site shows that math can be fun. There are exercises ranging from basic to more advanced, in the game room section you can practice math, and you can create your own math problems.
Algebra Story and Word Problems
“Collection of simple Word Problems illustrating concept-based problem solving. These are typical of problems in textbooks, in a variety of competency tests and real-life problems.”
Cornell Theory Center
Links to resources in mathematics and science for teachers and students.
Curious and Useful Math
Interesting math problems, tricks and puzzles
ENC Online (Eisenhower National Clearinghouse)
Resources for science and math teachers, ideas for improving K-12 mathematics, and links to consortia and federal agencies.
Exploring Fractals
Site introduces students to fractals.
Fantastic Fractals
An in-depth introduction to the world of fractals
Graphing Linear Equations
This site teaches the graphing of linear equations.
Links to math sites.
Lesson Plans and Classroom Activities for Math Teachers
Articles, lesson plans, and ideas for classroom activities
Math Goodies
Over 400 pages of math activities and resources for students, teachers, and parents
Mathematics Archive WWW Server
Catalog of math books for all areas of mathematics and links to resources for teaching and learning.
Mathematics WWW Virtual Library
The links on this site include resources such as bibliographies, sites for teaching math, and links to books and journals.
NOVA Online – The Proof
“For over 350 years, some of the greatest minds of science struggled to prove what was known as Fermat's Last Theorem -- the idea that a certain simple equation had no solutions. Now hear from the
man who spent seven years of his life cracking the problem, read the intriguing story of an 18th century woman mathematician who hid her identity in order to work on Fermat's Last Theorem, and
demonstrate that a related equation, the Pythagorean Theorem, is true.”
This is Mega-Mathematics!
Mathematical problems and coloring, games, mathematics of knots, and algorithms
Electronic Journal of Differential Equations
All topics related to differential equations and their applications. Articles are available free of charge.
Electronic Journal of Probability
Publishes papers in all areas of probability. The electronic version of the journals is free.
Electronic Research Announcements of the American Mathematical Society
Publishes research announcements of significant advances in all branches of mathematics. Journal available free on AMS website.
Electronic Transactions on Numerical Analysis
Publishes significant new and important developments in numerical analysis and scientific computing.
Geometry and Topology
Publishes articles about geometry and topology.
Journal of Inequalities in Pure and Applied Mathematics
All areas of mathematics relating to inequalities.
Journal of Online Mathematics and its Applications (JOMA)
Publishes articles on design and use of online materials.
Convert-me.com: Interactive measurements calculator, weights and measures
Allows user to convert many measurements including weight, distance, speed, and temperature.
Dictionary of Measures, Units and Conversions
Allows user to convert common units such as length, area, volume, density, flow rate, spread rate, viscosity, etc.
Eric Weissenstein’s World of Mathematics
Mathematics encyclopedia.
Martindale’s Reference Desk: Calculators On-Line
Various calculators for mathematics.
Math tables.
Mathematical Atlas: A Gateway to Mathematics
Short articles designed to provide an introduction to the areas of modern mathematics.
Measurement Conversions
Tables for hundreds of unit conversions.
MegaConverter 2
Allows user to convert anything from angles to wire resistance
Metric Conversion Chart
Conversions for length, area, mass, and volumes
On-Line Encyclopedia of Integer Sequences
Browse the database to look at sequences or get information about a particular number sequence.
PRIME Mathematics Encyclopedia
Articles on mathematical concepts.
ProBrewer Brewer’s Tools
Volume Unit Conversion Calculator
S.O.S. Math
Tables and formulas for algebra, trigonometry, calculus, differential equations, complex variables, and matrix algebra.
Online algebra lessons, calculators, and interactive worksheets.
Algebra Homework Help at Algebra.com
Pre-Algebra, algebra I and II.
Visual Calculus
“A collection of modules that can be used in the studying or teaching of calculus.”
American Mathematical Society (AMS)
Mathematical Association of America (MAA)
National Council Of Teachers Of Mathematics
Independence Friendly Logic
1. The work of providing solid foundations to analysis was initiated by Augustin Cauchy (1789–1857).
2. For references to uses of the device ‘depends on…but not on…’ in actual mathematical literature, see Dechesne (2005), Hintikka (1996), Hintikka & Kulas (1983), Hodges (1997a, 1997b), Mann et al.
(2011), Mostowski & Krynicki (1995), Pietarinen (2001); for dependent quantifiers, see also Quine (1970).
3. In order to properly assess the origins of IF first-order logic in light of available documents, also the following should be noted. Hintikka writes (1991: 7): ‘IF first-order languages are
introduced and studied in my paper, "What is Elementary Logic?" (forthcoming).’ This paper appeared as Hintikka (1995). Hintikka (1991: 8,67) refers to the papers published in Sandu's Ph.D. thesis,
saying that in them Sandu has also studied independence friendly languages; he mentions specifically Sandu's study of informationally independent connectives in Sandu & Väänänen (1992), as well as
Hintikka & Sandu (1989). In Hintikka & Sandu (1997: 368) it is stated that IF languages were introduced in Hintikka & Sandu (1989) – contradicting what is said in Hintikka (1991). Hintikka (1996: 52)
mentions Sandu (1993) and Hintikka (1995) as sources for earlier discussions on IF first-order logic; Hintikka & Sandu (1989) is not mentioned. In Sandu & Hyttinen (2001: 37) it is said that IF
first-order languages were introduced in Hintikka (1996), but on the next page it is affirmed that these languages were introduced in Hintikka & Sandu (1989).
The slash notation used for indicating quantifier independence appears already in Hintikka (1988: 22). A general notation for separately indicating those preceding operators of which a given one is
independent – more general than the notation of partially ordered quantifiers – was proposed in connection with epistemic logic even earlier, in Hintikka (1982: 164).
4. Thoralf Skolem (1887–1963) introduced Skolem functions in Skolem (1920). For uses of Skolem functions in model theory, see e.g. Hodges (1993). They have also found use in automated theorem
proving; cf. Portoraro's entry on Automated Reasoning in this encyclopedia. On how they are employed in linguistics, cf. footnote 90.
5. I.e., a function with zero arguments.
6. Here and henceforth, ‘iff’ abbreviates ‘if and only if.’
7. The function f yields a witness for ∃y depending on the interpretation of ∀x, and similarly g provides a witness for ∃w depending on the interpretation of ∀z.
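As an illustration of this dependence pattern (the sentence under discussion is not reproduced in these notes, so what follows is the standard Henkin-quantifier example): a branching-quantifier sentence with rows ∀x∃y and ∀z∃w applied to a matrix φ(x, y, z, w) is true just in case

(∃f)(∃g)(∀x)(∀z) φ(x, f(x), z, g(z))

is true, where f takes only x as argument and g only z.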
8. Henkin introduced the general idea of dependent quantifier Q[m, n, d], which in effect leads to the notion of (arbitrary) partially ordered quantifier (p.o.q.) with m universal quantifiers and n
existential quantifiers, where d is a function that determines for each existential quantifier on which universal quantifiers it depends; m and n may be any cardinal numbers. The notion of finite
p.o.q. results by letting m and n be finite. Well-known papers on finite p.o.q.s include Enderton (1970), Walkoe (1970), Barwise (1979), Krynicki & Lachlan (1979), Blass & Gurevich (1986), Krynicki
(1993); for an overview, see Krynicki & Mostowski (1995). In Barwise (1979) generalized partially ordered quantifiers were also studied.
9. FPO is not syntactically closed under negation, conjunction, or disjunction.
10. For a critique, see Hodges (2001), Marion (2009).
11. Let us agree that player 1 is male (‘he’) and player 2 female (‘she’). In the literature, the players have been called by many names: Myself (or I) and Nature, ∃ and ∀, Eloise and Abelard, as
well as the initial Verifier and the initial Falsifier (by reference to the players' initial roles).
12. Henceforth, whenever M is a model, M is its domain.
13. Note that while quantifier moves pertain to ‘objects out there’ (elements of the domain), moves for conjunction and disjunction concern syntactic items (conjuncts and disjuncts).
14. For abstract and strategic meaning, cf. also Hintikka & Sandu (1997: 397–398). For research on GTS and natural language, see Saarinen (1979), Hintikka & Sandu (1991), Sandu (1991), Pietarinen
(2001), Clark (2012); Hintikka & Sandu (1997) offers an overview.
15. In any event, this is the way in which Hintikka uses the word ‘Skolem function,’ see, e.g., Hintikka (1996: 31; 2002). Cf. Sandu (2001), where ‘two ways of Skolemizing’ are discussed. One of the
two ways corresponds to Hintikka's notion of Skolem function, the other corresponding to a generalized notion of strategy function. What results from the latter choice is, in Hodges's terminology, a
slash logic, not an IF logic; cf. the quote below.
16. Excluding slash logic still underdetermines the choice between several proposed formulations of IF first-order logic. However, all these formulations use in their semantics Skolem functions, or
strategy functions all of whose arguments are moves by the opponent. Publications in which a slash logic is actually formulated – yet referred to as an IF logic – include Hyttinen & Sandu (2000),
Pietarinen (2001), Väänänen (2002), Sandu (2007), Caicedo et al. (2009) and Mann et al. (2011).
17. For the term ‘regular formula,’ see Caicedo et al. (2009). The logical symbols of first-order logic considered here are negation (~), disjunction (∨), conjunction (∧), existential quantifier (∃),
universal quantifier (∀), and parentheses. A vocabulary (signature, non-logical terminology) is any countable set τ of relation symbols (each of which carries a fixed arity), function symbols (again
each with a fixed arity) and constant symbols. See Shapiro's entry on Classical Logic in this encyclopedia for a detailed presentation of first-order logic.
Note that implication is not among the logical symbols of IF first-order logic. Material implication (φ → ψ) can be defined in the so-called extended IFL (Subsect. 3.4) by the formula (¬φ ∨ ψ). In
the special case that φ is ‘truth equivalent’ to a first-order formula (see Subsect. 4.2), it is possible to express (φ → ψ) in IFL by the formula (~φ ∨ ψ). For different ways of treating
conditionals in GTS, including conditionals in natural language, see e.g. Hintikka & Kulas (1983; Ch. 3), Hintikka & Sandu (1991).
18. In the literature, the syntax of IFL is invariably given for formulas. By contrast, the semantics is often formulated for sentences only (see Hintikka 1991, 1995, 1996; Hintikka & Sandu 1996,
1997; Sandu 1993, 1994). In Sandu (1996, 1998), the semantics is given for arbitrary formulas.
19. See, e.g., Hintikka (1991, 1995, 1996), Sandu (1993, 1994); for defining GTS for FO in this way, see e.g. Hintikka & Kulas (1983), cf. Hintikka (1968, 1973a). The semantics of IFL to be presented
will be its game-theoretical semantics, sketched in Hintikka & Sandu (1989) and Hintikka (1991) and presented in a more detailed fashion e.g. in Hintikka (1995, 1996), Sandu (1993, 1994). The
semantics is defined explicitly to arbitrary formulas and variable assignments in Sandu (1996, 1998). The original approach to the GTS of first-order logic in Hintikka (1968) avoided variable
assignments by letting the players name the objects they had chosen; this approach could be adapted to IFL as well. Janssen (2002) and Janssen & Dechesne (2006) address the issue of whether the
game-theoretical semantics of IFL does justice to the idea of ‘informational independence’ among logical operators.
20. Models (τ structures) are termed interpretations in Shapiro's entry on Classical Logic in this encyclopedia. Recall that in the degenerate case that n = 0, the expression (∃x/∀y[1],…,∀y[n])
equals by stipulation (∃x), and similarly in the case of the other connectives followed by a slash. This is why we need not give separate clauses for (∃x), ∨, (∀x), and ∧. If g is a variable
assignment, g[x/a] is the variable assignment which is otherwise like g but maps the variable x to the object a.
21. The notion of the strategy of a player must not be confused with the game-theoretical notion of strategy profile. The latter means a set of strategies, one strategy for each player. For semantic
games, a strategy profile would be a set {F[1], F[2]}, where each F[i] itself is a set, namely a set of strategy functions of player i.
22. The term ‘dissatisfaction’ is used as a dual of ‘satisfaction.’ The distinction between these notions generalizes the distinction between ‘truth’ and ‘falsity.’ Hodges (1997a) conceptualizes a
related distinction in his compositional framework by speaking of ‘trumps’ and ‘cotrumps.’
23. To be precise, below game G(φ, M) equals by definition game G(φ, M, g[0]), where g[0] is an arbitrarily chosen variable assignment.
24. When negation symbol is allowed to appear in arbitrary syntactic positions in FO formulas, it is no longer the case that universal quantifiers and conjunction symbols are automatically of
universal force while existential quantifiers and disjunction symbols are of existential force. The force of an operator depends on the number of negation signs in whose syntactic scope it lies. If
we wish to retain in the general setting the characteristic feature distinguishing IFL from slash logic – namely the requirement that strategy functions of a player only take as arguments moves made
by the adversary – the syntax must be formulated paying attention to the force that each quantifier has (cf. however Sandu 1993, 1994; Hintikka & Sandu 1997: 373). In the general formulation of IFL,
hence formulated, an operator O can be marked as independent from any preceding quantifiers of the opposite force, but not from any preceding quantifier having the same force as O. For example (∀x)~
(∃y)(∀z/∀x,∃y) S(x, y, z) would be well formed, unlike ~(∀x)~(∃y/∀x) R(x, y).
25. See e.g. Hintikka (1991: 147, 2002: 409), Hodges (1997a: 546), Sandu (1993, 1994).
26. For non-determinacy and IF logic, see Hintikka (1996), Burgess (2003), Väänänen (2006).
27. No third truth-value is stipulated in the semantics of IFL. Truth-value gaps may arise when neither truth nor falsity can be meaningfully ascribed to a sentence (because a presupposition of the
sentence is not satisfied). This is not what non-determinacy means. Non-determinacy is a model-relative complex negative property, which can be ascribed to a sentence. A sentence has this property
iff neither of the simple positive properties of truth and falsity can be correctly ascribed to the sentence. An ascription of non-determinacy is correct or incorrect depending exclusively on the
model relative to which the ascription is effected.
28. For ‘truth equivalence’ Hintikka uses the term ‘weak equivalence’ and for ‘logical equivalence’ the term ‘strong equivalence’ (cf. Hintikka 1996: 150). He characterizes strong equivalence as the
preservation of truth and falsity. Thus if ψ and χ are strongly equivalent, nothing prevents ψ from being non-determined in a model, as long as χ also is. (On other occasions, e.g. in Hintikka 1991:
20, Hintikka defines ‘strong equivalence’ differently, by the requirement that either ψ and χ are both true or both false; this precludes the possibility that one or both of two strongly equivalent
sentences would be non-determined.) The term ‘truth equivalence’ used by Dechesne (2005: 32) is more descriptive. The term ‘falsity equivalence’ also stems from Dechesne. She follows Hintikka in
terming ‘strong equivalence’ what is here for simplicity referred to as ‘logical equivalence’.
29. Cf. Hintikka (1996: 147), Hodges (1997a: 546), Sandu (1993, 1994).
30. In Hintikka (1996: 149) this logic is termed truth-functionally extended IF first-order logic – in contradistinction to ‘extended IF first-order logic,’ which in Hintikka (1991: 47, 1996: 149)
and Hintikka & Sandu (1997) is defined as containing, in addition to IFL formulas, only formulas ¬φ, where φ is an IFL formula (or sentence). The considerations that motivate the introduction of ¬ in
the first place, actually call for the more general syntax (witness Hintikka 1996: 149, 195–196), which is why it is natural to define extended IF first-order logic as it is done here. For extended
IF first-order logic, see also Hintikka (1991: 49; 1997; 2004). For the crucial restriction that ¬ may not occur in the scope of a quantifier, see Hintikka (1991: 49; 1996: 148). For counterexamples
to this restriction see, however, Hintikka (1996: 148; 2002c) and especially Hintikka (2006b) in which the so-called fully extended IF first-order logic is considered. In fully extended IF
first-order logic any occurrences of ¬ are allowed which are subject to the following syntactic condition: if (Qx/W) is a quantifier in the syntactic scope of an occurrence of ¬, then all quantifiers
listed in W are likewise in the syntactic scope of that occurrence of ¬.
31. Cf. Hintikka (1991: 49; 1996: 147–149; 2002), Hintikka & Sandu (1997: 375), Sandu (1993, 1994).
32. Cf. Hintikka (1991, 1995, 1996, 1997), Hintikka & Sandu (1996), Sandu (1993, 1996, 1998). Analogous properties of (what on substantial criteria is) slash logic are discussed in Sandu (2007) and
Mann et al. (2011); recall Hodges's distinction between independence friendly logic and slash logic (see the beginning of Sect. 3). Properties of another related logic, Väänänen's dependence logic,
are in a mathematically detailed fashion covered in Väänänen (2007).
33. For (existential) second-order logic, see Enderton's entry on Second-order and Higher-order Logic in this encyclopedia. See also van Benthem and Doets (2001).
34. For the Axiom of Choice (AC), see Jech's entry on Set Theory in this encyclopedia.
35. In Hodges (2006: 527; 2001), it is pointed out that a strengthened version of this theorem can be proven if a weakened notion of strategy is applied (cf. also Hintikka 2006a: 536). Related
observations are made in Forster (2006). Namely, the standard notion of strategy employed in GTS is that of deterministic strategy: a set of single-valued functions yielding for any given tuple of
arguments a unique move. A nondeterministic strategy of player 2 consists of a set of instructions, each instruction yielding a set of moves for any tuple of earlier moves by player 1. A
nondeterministic strategy of player 2 is winning if it guarantees a win for player 2, regardless of which move, among those given by the strategy, player 2 makes in each case when it is her turn to move.
Theorem (without AC). (Hodges 2001) Let τ be any vocabulary, M any τ structure, g any variable assignment and φ any FO[τ] formula. Then M, g ⊨ φ holds in the standard sense iff there is a
nondeterministic winning strategy for player 2 in game G(φ, M, g).
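The two notions of strategy can be made concrete with a small sketch (the game, the predicate, and all names below are invented for illustration; they are not from the entry). In the one-move semantic game for (∃x) P(x), player 2 simply picks a witness; a deterministic strategy names one move, while a nondeterministic strategy names a set of admissible moves and is winning only if every admissible move wins:

```python
# Toy sketch (invented example): in the semantic game for (Ex) P(x) over a
# finite domain, player 2 makes a single move and wins iff it satisfies P.

def deterministic_wins(P, move):
    # A deterministic strategy yields exactly one move.
    return P(move)

def nondeterministic_wins(P, allowed_moves):
    # A nondeterministic strategy yields a set of moves; it is winning iff
    # it allows at least one move and every allowed move leads to a win.
    return bool(allowed_moves) and all(P(x) for x in allowed_moves)

is_even = lambda x: x % 2 == 0

assert deterministic_wins(is_even, 2)
assert nondeterministic_wins(is_even, {0, 2})
assert not nondeterministic_wins(is_even, {0, 1})  # the move 1 loses
```

Conversely, to extract a deterministic winning strategy from a nondeterministic one, a single move must be chosen at each decision point — which is where the Axiom of Choice enters in the general (infinite) case.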
For finite partially ordered quantifiers and AC, see Krynicki & Mostowski (1995; Sect. 2).
36. The intertranslatability of the two logics is discussed in Hintikka (1991, 1996, 1997), Hintikka & Sandu (1997), Sandu (1998).
37. A Hamiltonian path on a graph is a path visiting every node exactly once. For Fagin's theorem and references to NP-complete problems, see e.g. Ebbinghaus & Flum (1999), Grädel et al. (2007),
Papadimitriou (1994). For partially ordered quantifiers, partially ordered connectives, and NP-complete problems, see Blass & Gurevich (1986), Hella & Sandu (1995).
38. If φ and ψ are IFL formulas, let us write (φ → ψ) for (~φ ∨ ψ), and (φ ↔ ψ) for ((φ → ψ) ∧ (ψ → φ)); cf. footnote 17.
39. For a discussion, see Hintikka (1991, 1995, 1996), Hintikka & Sandu (1997), Sandu (1993). Paying due attention to Hodges's warning against confusing slash logic and IF logic, one may reconstruct
proofs for various properties of IFL on the basis of Väänänen's results concerning his dependence logic (Väänänen 2007). The results pertaining to (what in substance is) slash logic discussed in
(Sandu 2007) and in (Mann et al. 2011) can likewise be compared to the analogous results concerning IFL.
40. A complete disproof procedure is a procedure for recursively enumerating all inconsistent sentences. The term ‘complete proof procedure for inconsistency’ might involve a smaller risk of being
misunderstood (cf., e.g., Quine 1970: 90). In any event, disproving must be understood as ‘establishing inconsistent,’ not as ‘establishing not provable’ (or ‘establishing not valid’).
41. For the fact that ESO is not closed under negation, cf. e.g. Hinman (1978: 82).
42. For a proof phrased in terms of FPO, see Barwise (1979: 56, 73–74).
43. Actually, it is non-determined in all domains with at least two elements.
44. Fragments of ESO are subject to lively interest in finite model theory and complexity theory, see e.g. Ebbinghaus & Flum (1999), Grädel et al. (2007), Papadimitriou (1994). For related work on
partially ordered quantifiers and connectives, see Blass & Gurevich (1986), Hella & Sandu (1995), Hella, Sevenster & Tulenheimo (2008). For the P = NP problem, see e.g. Fortnow (2009).
45. That there is a sound and complete proof procedure for FO was first proven by Kurt Gödel (1930). There are different variants of axiomatizability. Let L be a logic, A a set of sentences of L, and
P a proof procedure for L. Then A is an axiomatization of L relative to P, if the sentences of L provable in P from the sentences of A are precisely the valid sentences of L. The logic L is finitely
axiomatizable (resp. recursively axiomatizable, resp. axiomatizable) relative to P, if it has a finite (resp. recursive, resp. recursively enumerable) axiomatization A relative to P.
46. See Hintikka (1991, 1995, 1996, 2000).
47. The theorem was proven by Boris Trakhtenbrot (1950). For Trakhtenbrot's theorem, see Ebbinghaus, Flum & Thomas (1984). Andrzej Mostowski (1957) proved, without recourse to Trakhtenbrot's theorem,
that extending FO with a quantifier capable of saying that a formula of one free variable is satisfied by finitely many elements only, results in a logic that is not axiomatizable.
48. The fact that FPO fails to have a sound and complete proof procedure can be seen by reference to Trakhtenbrot's theorem. In 1958 Ehrenfeucht showed that φ[inf] can be expressed in FPO; see Henkin
(1961). From this he concluded – making use of Mostowski's result mentioned in footnote 47 – that the Boolean closure of FPO (i.e., its closure under the operations ¬, ∧, and ∨) is not axiomatizable.
From Mostowski's result the non-axiomatizability of FPO itself does not follow (since the finiteness of an extension is not expressible in FPO).
49. See e.g. Hintikka (2006a: 65), Sandu & Hintikka (2001: 49). For compositionality and GTS, see Hintikka & Kulas (1983), Hintikka & Sandu (1997). For a discussion on different senses of
compositionality and IFL, see Hintikka (1996), Sandu & Hintikka (2001). Cf. also Szabó's entry on Compositionality in this encyclopedia.
50. See e.g. Hintikka & Kulas (1983), Hintikka (1988, 1996, 2000, 2006a).
51. See e.g. Hintikka & Kulas (1983), Hintikka (1988, 1996, 2000).
52. See e.g. Hintikka & Kulas (1983), Hintikka (1988, 1996, 2000), Hintikka & Sandu (1997).
53. See e.g. Hintikka & Kulas (1983), Hintikka (1988, 1995, 1996).
54. Barwise (1979) suggested that a compositional semantics must satisfy the condition that the relation ‘M ⊨ φ’ be an inductive verifiability relation in the sense of Barwise & Moschovakis (1978).
He proved that FPO does not admit of a compositional semantics subject to this condition. Hodges (1997a) finds the condition suggested by Barwise unmotivated. It rather begs the question: in order
for a relation to be an inductive verifiability relation, the relevant inductive clauses must be first-order. That they cannot be first-order in connection with FPO is hardly surprising; cf. Hodges
(2007: 118).
Hintikka had on many occasions stated that there is no realistic hope of formulating compositional truth-conditions for IFL sentences (Hintikka 1991, 1995, 1996; see also Hintikka & Sandu 1997).
Hodges (1997a,b) wanted to prove Hintikka's statement wrong. The syntax Hodges opted for is that of slash logic (cf. Sect. 3 and Subsect. 6.1). Since Hodges considers variants of slash logic, his
results are not, strictly speaking, about IFL. However, it is clear that the framework of Hodges (1997b) can be adapted to the syntax of IFL proper.
55. It was pointed out already in Hintikka & Kulas (1983: 20) that at least in some cases compositionality can be restored by resorting to higher-order entities, specifically to functions embodying
strategies of player 2. In the same connection it was noted that such higher-order entities are much less realistic philosophically and psycholinguistically than the original individuals. See also
Zadrozny (1995), Janssen (1997), Hodges (1998); cf. Westerståhl (1998), Sandu & Hintikka (2001).
56. Note that (N, TRUE^N) is an expansion of N to the vocabulary τ ∪ {TRUE}.
57. For details about truth-definitions of FO, see Hodges's entry on Tarski's Truth Definitions in this encyclopedia; further, see e.g. Kaye (1991), Sandu (1998), Väänänen (2007).
58. The proof of this result is sketched in Hintikka (1991, 1996), and later in more detail in Sandu (1998). See also Hintikka (1998, 2001), Hyttinen & Sandu (2000), Sandu & Hyttinen (2001), Väänänen
(2007). For a technically analogous result, see Hodges's comments about self-applied truth-predicates for Σ[n] formulas in Azriel Levy's set-theoretical hierarchy (Hodges's entry on Tarski's Truth
Definitions in this encyclopedia); see also de Rouilhan and Bozon's (2006: 700) comments about John Myhill's and Craig Smorynski's similar results.
59. If one attempts to construct Liar's paradox for IFL, the assertion that the Liar sentence is true will itself be non-determined (Hintikka 1991: 44–51, 1996: 142; cf. Väänänen 2007: 108–109).
60. See also Hintikka (1996: 106–108; 2001: 23–24; 2006a: 65–66).
61. See Hintikka (1996: 123–125; 2001), Hintikka & Sandu (1997).
62. Enderton (1970) proved that the logic L* obtained from FO by closing it under arbitrary Henkin quantifiers, as well as under the operations ¬, ∧, and ∨, can be translated into the so-called Δ^1[2] fragment of second-order logic. (For this fragment, see, e.g., Hinman 1978.) Marcin Mostowski (1991) showed that in fact L* does not suffice to cover Δ^1[2]; cf. Krynicki & Mostowski (1995: 217).
Now, (1) EIFL has the same expressive power as the Boolean closure of ESO (i.e., its closure under ¬, ∧, and ∨), (2) the Boolean closure of ESO has the same expressive power as the Boolean closure of
FPO, and (3) the Boolean closure of FPO is a fragment of L*. Therefore we may conclude that EIFL is strictly weaker than the Δ^1[2] fragment of second-order logic.
63. The standard interpretation in the sense of Henkin (1950) construes second-order quantifiers as follows. If R is an n-ary relation symbol, f is an n-ary function symbol, and M is a domain, then
under the standard interpretation of second-order logic, the second-order quantifiers (∃R) and (∀R) range over all (extensionally possible) subsets of M^n, and the second-order quantifiers (∃f) and
(∀f) range over all (extensionally possible) functions from M^n to M.
64. Similar requirements need to be expressed concerning eventual second-order quantifiers (∃f).
65. See Hintikka (1955; 1991: 47–48; 1995: 12–13; 2006a: 476–477), and especially Hintikka (1996: 194–195). For details about the reduction of the validity problem of full second-order logic to that
of ESO, see e.g. Leivant (1994).
66. For an algebraic study of slash logic, see Mann (2007).
67. For the relation of Jónsson and Tarski's Theorem 3.14 to modal logic, cf. Copeland's entry on Arthur Prior in this encyclopedia.
68. For these observations relating the propositional part of EIFL, the modal logic S4, and intuitionistic propositional logic, see Hintikka (2004b; 2006a: 471).
69. See Hintikka (1991; 1995; 1996: 129,190,198,201,205; 2000: 133; 2002a: 409; 2006a: 472). It may be of interest to note that Quine (1970: 90) affirms without hesitation that FPO formulas are not
ontologically committed to higher-order entities (even if their expressive power goes beyond FO). On the same basis Quine would, no doubt, have accepted that IFL is only committed to first-order
entities. (Quine's reasons for not considering FPO as a logic were related to its non-axiomatizability; cf. Subsect. 4.3.) For ontological commitments of FPO formulas, see also Patton (1991). For the
status of FPO as a first-order logic, cf. Hintikka & Kulas (1983: 50).
70. See Westerståhl's entry on Generalized Quantifiers in this encyclopedia.
71. For, in a play of the semantic game for the formula Q[≥κ]z P(z) and the model M, player 2 would choose a function f : κ → M, whereafter player 1 would choose distinct ordinals α, β < κ. Player 2
would win iff f(α) ≠ f(β) and the individual f(α) satisfies the predicate P.
72. Cf. Subsect. 4.6. For details, see e.g. Leivant (1994).
73. Both sets are Π[2] complete; for Π[2] completeness, see e.g. Väänänen (2007; Ch. 7).
74. Cf. Feferman (2006: 461,463), Hintikka (1996: 129,193,198).
75. Cf., e.g., Hintikka (1991; 1995; 2002: 407; 2006a: 40–41).
76. Together with witnesses for disjunction symbols, likewise depending in general on the values chosen for the preceding universal quantifiers.
77. For the argument, see Hintikka (1996; 1998; 2004a; 2006a: 74).
78. For limited formal truth definitions for set theory, see Hodges's entry on Tarski's Truth Definitions in this encyclopedia.
79. These individuals are ‘sets’ only in the sense that they belong to a domain of a model of an axiomatic set theory; they are not sets in the sense in which the entities over which second-order
quantifiers range are sets (namely sets of individuals of the domain considered).
80. See e.g. Hintikka (1996: 172–177; 1998: 319–323, 326–32; 2006a: 74). For the continuum hypothesis, and Gödel's and Cohen's independence results, see Jech's entry on Set Theory in this encyclopedia.
81. See Hintikka (1991: 47–48; 1996: 192–193); see also Väänänen (2007: 139–141).
82. See Hintikka (1986; 1993; 1996: 208–209). The axiom of completeness appears in the second edition (1903) of Hilbert (1898). Saying that a model M is maximal requires, at least on the face of it,
quantification over individuals (or sets) outside the domain of M.
83. Actually, Hintikka sees it as a conceptual confusion to speak of ‘axiomatizations’ in connection with logic as well as mathematical theories. In the former case, when possible, an axiomatization
is simply a recursive enumeration of validities of the logic considered. By contrast, one can view an axiomatization of a mathematical theory as a way of compressing all the truths (not logical
truths or validities) about the subject matter of the theory into a finite or recursively enumerable set of axioms (Hintikka 1996: 1–4).
84. Quantifiers occurring in conjoint constituents, quantifiers occurring in different nested ‘of’ phrases, and relative clauses with several antecedents; see Hintikka (1973a, 1976a, 1979).
85. For a discussion, see e.g. Peters & Westerståhl (2006: 66–72) and the literature referred to there.
86. This is because the set of valid FPO sentences is not recursively enumerable. As Hintikka himself recognizes, to produce a rigorous argument, it would be necessary to show for an arbitrary FPO
sentence Qφ, not only that there is an English sentence that is naturally represented using the prefix Q, but also that the grammatical constructions used allow an English counterpart of φ: whether
the formula Qφ can be translated to FO crucially depends on the matrix φ.
87. The example is from Hintikka (1982); in this paper Hintikka does not use the slash sign. By reference to Hintikka's theory of the semantics of questions (Hintikka 1976b), Carlson and ter Meulen
(1979) called attention to wh-questions formed out of suitable sentences with a branching quantifier reading, and argued that the desideratum of such a question will involve informational
independence of the usual quantifiers from the intensional know-construction.
88. In epistemic logic, the knowledge operator K[a] is construed as a universal modal operator ranging over scenarios compatible with the knowledge of the agent a considered. For further information,
see Hendricks & Symons's entry on Epistemic Logic in this encyclopedia.
89. The impossibility to express the condition K[I] (∀x)(∃y/K[I]) admires(x, y) in FPO is pointed out in both Carlson & ter Meulen (1979) and Hintikka (1982). Here the existential quantifier depends
on the universal quantifier but not on the knowledge operator, while the knowledge operator must precede the universal quantifier in order for the formula to imply the presupposed de dicto sentence
‘I know that everybody admires someone.’ According to the formula (∀x) K[I] (∃y/K[I]) admires(x, y), the agent's knowledge would pertain to all actual individuals and it would merely imply that the
agent knows of all actually existing individuals that each of them admires someone. If it is compatible with the agent's knowledge that there are further individuals beside those which happen to
exist actually, this would not guarantee the truth of the de dicto sentence.
90. Cf. Hintikka (1982, 1987, 1990), Engdahl (1986). For the uses of Skolem functions and related ideas in linguistics – notably the so-called choice functions – see e.g. Reinhart (1997), von
Heusinger (2004), Steedman (2012). In logical terms, a choice function is a function taking a formula of one free variable as its argument and returning as its value an individual that satisfies this
formula – if at least one such individual exists.
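A choice function in this sense is easy to sketch over a finite domain (a minimal illustration with invented names; a formula of one free variable is modeled as a Python predicate):

```python
# Minimal sketch (names invented): a choice function over a fixed finite
# domain takes a one-free-variable formula -- modeled as a predicate --
# and returns some individual satisfying it, if one exists.

def make_choice_function(domain):
    def choose(formula):
        for individual in sorted(domain):
            if formula(individual):
                return individual
        return None   # no satisfier: the choice function is undefined here
    return choose

f = make_choice_function({1, 2, 3, 4})
assert f(lambda x: x > 2) == 3        # some satisfier of "x > 2"
assert f(lambda x: x > 10) is None    # nothing satisfies "x > 10"
```

Which satisfier gets returned is immaterial to the logical notion; the sketch just picks the least one for determinism.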
91. In Hintikka & Sandu (1989: 575–576) it is said that informational independence is a relation between applications of game rules, but also that it is a relation that a move prompted by an
expression in a semantic game bears to a number of earlier moves prompted by further expressions (cf. also Hintikka & Sandu 1997: 367). The latter construal complies with the way informational
independence is understood in game theory, while the former led to some confused examples in Hintikka and Sandu's paper (examples about neg-raising, pp. 577–580, see also Hintikka 1990, Sandu 1993).
92. The belief operator B[a] is a universal modal operator, just like the knowledge operator K[a]; its range is the set of scenarios compatible with all that the agent a believes in the scenario
relative to which the sentence is evaluated (cf. footnote 88).
93. Cf. Hintikka & Sandu (1989: 584–585, 587–589), Hintikka (1990). Some exceptions were noted in Hintikka (1973a), cf. footnote 84. However, even when informational independence can be syntactically
indicated, the indicators are different from case to case.
Encinitas Algebra 1 Tutor
...More, I organize a German language table to optimize the language learning skills of our German language students at UCSD. Please feel free to contact me if you have any questions or concerns! I have tutored many high school students in algebra and other areas of math. I have gone beyond the high school level to taking higher level calculus courses at my university, where I study biology.
42 Subjects: including algebra 1, reading, Spanish, writing
...Thanks for visiting! Sincerely, Doug. I majored in Biology and pre-med in college. I have taught High School Chemistry for over 20 years.
15 Subjects: including algebra 1, chemistry, MCAT, biology
...I have developed the patience and caring attitude that must be present in every tutor in order to conduct an informative and educated tutoring session. I will provide the best knowledge for
your child from what I gained and will answer all of their questions. I will diligently review the necess...
10 Subjects: including algebra 1, chemistry, reading, physics
...I have nearly fifteen years experience in teaching, coaching, and personal development. I am very motivated and thoroughly committed to seeing young people succeed. I look forward to being of
service or answering any questions you may have.
6 Subjects: including algebra 1, calculus, geometry, algebra 2
...This test is so important! It doesn't just show colleges how much information you have retained from high school, it also shows how much you prepared for the test. It is important to do the
very best you can and show colleges your true potential.
9 Subjects: including algebra 1, algebra 2, economics, ACT Math
Michi’s blog
My two high-school kids came by today. We’ve been trying to get a new teaching session together since early February, but they had a hell of a time all through February, and all our appointments
ended up canceled with little or no notice; and then I spent March and April on tour.
We pressed on with knot theory. Today, we discussed knot sums, prime knots, knot tabulation, behavior of the one invariant (n-colorability) we know so far under knot sums, Dowker codes, and we got
started on Conway codes for knots. Next week, I plan for us to finish up talking about the Conway knot notation, get the connection between rational knots and continued fractions down pat, and start
looking into new invariants.
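Tricolorability — the n-colorability invariant for n = 3 — is easy to brute-force from a crossing list. The sketch below uses an ad hoc encoding invented here for illustration (arcs numbered 0..n−1, each crossing a triple of arc numbers), where every crossing (over, under_in, under_out) imposes 2·c(over) ≡ c(under_in) + c(under_out) (mod 3) and a valid coloring must use at least two colors:

```python
from itertools import product

# Sketch (encoding invented for illustration): a diagram is tricolorable
# iff some assignment of colors {0,1,2} to its arcs satisfies
#   2*color(over) == color(under_in) + color(under_out)   (mod 3)
# at every crossing and uses at least two colors.

def tricolorable(num_arcs, crossings):
    for colors in product(range(3), repeat=num_arcs):
        if len(set(colors)) < 2:
            continue  # constant colorings are ruled out
        if all((2 * colors[o] - colors[i] - colors[j]) % 3 == 0
               for (o, i, j) in crossings):
            return True
    return False

trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # 3 arcs, 3 crossings
assert tricolorable(3, trefoil)               # the trefoil is 3-colorable
assert not tricolorable(1, [])                # the unknot is not
```

Brute force is fine here since a diagram with n arcs has only 3^n candidate colorings.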
Verifying validity of a statement involving infinite compact sets
September 29th 2010, 08:29 AM #1
Theorem: Show that an infinite set $S$ is compact if and only if every infinite subset of $S$ has an accumulation point that lies in $S$.
My attempt of proof:
Suppose $S$ is infinite and compact but that there exists an infinite subset that does not contain any accumulation points in $S$. Call this subset $S'$ and let an accumulation point of this
subset be $a$. Since $a$ is an accumulation point of $S'$, there is a sequence of points in $S'$ such that the limit of that sequence is $a$. Clearly this sequence lies in $S$ as well since $S' \subset S$, so we have that $a$ is an accumulation point of $S$, and since $S$ is compact, $a\in S$, which is a contradiction. So all infinite subsets of $S$ have an accumulation point which lies
in $S$.
Now suppose every infinite subset of $S$ has an accumulation point in $S$ and that $S$ is not compact. Then $S$ is either unbounded or not closed. If $S$ is unbounded, then there exists a sequence in $S$, say $\{x_{n}\}$, such that for all $n\in \mathbb{N}$, $x_{n} \ge n$, so $\lim x_{n} = \infty$. So let $S''$ be the set of these sequence elements. By construction, this is an infinite set, yet it has no accumulation point since no matter the ordering of the terms, there is no point $b$ such that infinitely many sequence terms lie arbitrarily close to $b$. This contradicts our hypothesis, so $S$ is bounded. Suppose $S$ is not closed. Then there exists a point $c$ such that $c\in \partial S$ yet $c\notin S$. Since $c\in \partial S$, there exists a
sequence of points in $S$, say $\{y_{n}\}$ such that $\lim y_{n} = c$. Let $S'''$ be the set of these sequence elements. By construction, this is an infinite set, yet its only accumulation point
is $c$ since no matter the ordering of the terms, $c$ is the only point to which infinitely many terms of the sequence come arbitrarily close. Again, this contradicts the hypothesis, so $S$ must
be compact. $\blacksquare$
Is this proof valid, the only part of the proof I am really unsure of is in the second paragraph with the construction of those sequences. Is my claim valid that no matter the ordering, there are
no other accumulation points then the ones I have specified? It seems obvious/true to me (I think) but is it a false claim or is it a claim that needs proof? Thank you.
Theorem: Show that an infinite set $S$ is compact if and only if every infinite subset of $S$ has an accumulation point that lies in $S$.
Suppose $S$ is infinite and compact but that there exists an infinite subset that does not contain any accumulation points in $S$. Call this subset $S'$ and let an accumulation point of this
subset be $a$.
Look at the statements in blue and red.
You cannot have it both ways: either the set has an accumulation point or it does not.
So try again. If it has no accumulation point, then there is an open infinite cover which does have a finite sub-cover.
Well, since $S'$ is infinite and is a subset of a bounded set, the subset itself is bounded, so it MUST have an accumulation point (which is another theorem with another proof). I'm trying to
prove that this accumulation point must lie in $S$, so I'm assuming to the contrary that the accumulation point of $S'$, which exists, does not lie in $S$, which will lead to a contradiction with the assumption that $S$ is compact.
Read what you posted very carefully. If you already know from another theorem every infinite bounded set has a limit point, then there is nothing to prove in that direction. So you have wasted a
lot of words.
Yes, but that theorem says nothing about where that accumulation point lies. Yes, every bounded infinite set has an accumulation point, so every infinite subset of a compact set has an
accumulation point, but it is not immediate that these accumulation points lie in $S$ for ALL infinite subsets. So I assume to the contrary that there is one infinite subset that has a limit
point NOT in $S$ and arrive at a contradiction to the hypothesis that $S$ is bounded AND closed.
The other theorem makes no assumption about being closed, only bounded and infinite. I guess that an alternate proof to the current theorem could be that the accumulation point belongs to $cl(S')$ and then prove that $cl(S') \subset S$ by assuming to the contrary and deriving a contradiction (or directly works just as well). In any case, I have to prove that the accumulation point of an
infinite subset, making no assumption about whether this subset is closed or not, of $S$ has an accumulation point that lies in $S$.
Look, you are going in circles.
If a set $S$ is compact then it is closed and bounded.
By the B-W theorem any infinite subset has a limit point in $S$.
The other way.
Suppose that every infinite subset of $S$ has a limit point in $S$.
If $\left\{ {O_n } \right\}$ is an infinite open cover for $S$ which has no finite sub-cover work for a contradiction.
September 29th 2010, 10:02 AM #2
September 29th 2010, 10:05 AM #3
September 29th 2010, 10:15 AM #4
September 29th 2010, 10:19 AM #5
September 29th 2010, 10:29 AM #6
September 29th 2010, 12:14 PM #7
September 29th 2010, 12:31 PM #8
Ozone Park Algebra 1 Tutor
...Proficiency with Physics Laws and Concepts. Prepare a perfect base knowledge to absorb complex problems in further studies. Habit of keeping myself updated, learning something new every day. Competitive exams have always drawn the best output from me.
9 Subjects: including algebra 1, physics, geometry, GRE
...I taught precalculus as a high school math teacher and am extremely comfortable with all the material. Even though the material might look hard initially, with the proper instruction and right
practice, you will master the concepts and even enjoy solving problems! Trigonometry might seem like a...
26 Subjects: including algebra 1, calculus, geometry, writing
...We review important concepts such as: the basics of functions including: graphing, finding their inverses and compositions of functions; graphing transformations of basic functions; complex
numbers; the unit circle; maximizing word problems and other word problems. Through the several Statistics...
21 Subjects: including algebra 1, calculus, physics, statistics
...Whether a student needs to learn addition or upper level algebra, basic reading skills or SAT level English, I can help. I will methodically and patiently work step by step to make the material
easy. I instruct students ranging from pre-K to adult.
30 Subjects: including algebra 1, English, reading, writing
...I am a highly qualified biology tutor who helped students very successfully achieve a high grade. I am a PhD graduate from Weill Cornell Medical School. I have experience tutoring science
(Math, Biology, Chemistry) at all levels, all ages, all skills.
18 Subjects: including algebra 1, chemistry, calculus, physics
hardware skinning how-to [Archive] - OpenGL Discussion and Help Forums
03-04-2007, 03:53 AM
atm i'm computing animation in software, but i'd like to do it in a vertex shader. a model has an array of joint matrices and every vertex has an id which refers to a position in that array.
i want to upload the matrices as an array of uniform mat4 to the vertex shader and pass the joint id as a vertex attribute:
attribute int a_JointId;
uniform mat4 u_JointMatrices[32];

however, glslvalidate complains about:

'attribute' : cannot be bool or int

mpfh. so what can i do? thanks!
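For what it's worth, the usual workaround in that generation of GLSL is to pass the joint id as a float attribute and convert it inside the shader. A sketch along those lines (untested here; it reuses the names from the post and assumes the legacy built-ins are available):

```glsl
// Sketch of a common workaround (names reused from the post, legacy
// built-ins assumed): pass the joint id as a float and convert it to an
// int inside the shader.

attribute float a_JointId;          // uploaded from the app as a float
uniform mat4 u_JointMatrices[32];

void main()
{
    int joint = int(a_JointId);     // exact for small integral values
    gl_Position = gl_ModelViewProjectionMatrix
                * (u_JointMatrices[joint] * gl_Vertex);
}
```

On the application side the id array is then uploaded through a float attribute pointer (e.g. glVertexAttribPointer with GL_FLOAT) instead of an integer type.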
Number of results: 36,553
English grammar
Michael: Email me please Bobpursley@gmail.com
Wednesday, November 21, 2007 at 6:41pm by bobpursley
1/2=14x to solve for x, divide both sides by 14 1/28=x
Monday, January 12, 2009 at 5:08pm by bobpursley
Michael: Please email me Bobpursley@gmail.com
Wednesday, November 21, 2007 at 9:05pm by bobpursley
dividing by 6 is the same thing as multiplying both sides by 1/6
Monday, January 12, 2009 at 5:08pm by bobpursley
bobpursley: math cont
y=mx+b y=x+b -3=-3+b b=0 so the equation is y=x I have no idea what you did.
Tuesday, February 23, 2010 at 4:26pm by bobpursley
Chemistry - bobpursley pls help
didn't I just demonstrate it is conc HCl?
Tuesday, September 21, 2010 at 4:47pm by bobpursley
to: bobpursley
all were correct. see the post about squaring 3
Sunday, September 8, 2013 at 8:22pm by bobpursley
how come when I put into my calculator (sqrt5 - sqrt 3)^2 i get .2540333076 not 5-3 or simply 2...?????????
Sunday, May 31, 2009 at 7:31pm by TO BOBPURSLEY
7th grade science bobpursley
Go to the head of the class. All correct.
Wednesday, January 16, 2013 at 9:12pm by bobpursley
If you are interested in being a regular volunteer, please email me. Bobpursley@gmail.com
Sunday, November 1, 2009 at 11:20am by bobpursley
to bobpursley
TTT HHH THH HtH HHt tth tht htt I count eight.
Tuesday, December 1, 2009 at 4:23pm by bobpursley
Chem- bobpursley
1.67E-27kg*(.0015/9.11E-31) =2.75 so three protons.
Tuesday, September 21, 2010 at 8:39pm by bobpursley
Math BOBPURSLEY HELP HELP HELP HELP
Why did you make it so complicated? 1/2 m v^2 = GMe m/re, so v^2 = 2GMe/re; take the square root of each side.
Monday, May 2, 2011 at 11:35am by bobpursley
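A quick numeric check of that last line (a sketch using standard textbook constants, not values from the thread) lands on the familiar ~11.2 km/s:

```python
from math import sqrt

# Quick check of v = sqrt(2*G*Me/re); constants are the usual textbook
# values, not taken from the thread.
G  = 6.674e-11      # m^3 kg^-1 s^-2
Me = 5.972e24       # kg
re = 6.371e6        # m

v = sqrt(2 * G * Me / re)
print(round(v))     # about 11186 m/s, i.e. ~11.2 km/s
```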
Post the web address with spaces, such as this.. w w w . H o t s a u s a g e . D e f ?
Wednesday, October 15, 2008 at 9:08pm by bobpursley
good work, watch sig digits.
Wednesday, October 15, 2008 at 9:55pm by bobpursley
makes sense to me but is my answer also correct?
Wednesday, May 27, 2009 at 2:59pm by TO BOBPURSLEY
Grammar and Compositon Bobpursley
ok, give me a few minutes.
Wednesday, November 4, 2009 at 11:25am by bobpursley
math confused for bobpursley
BobPursley is taking a couple of days off from the Jiskha Board.
Thursday, May 22, 2008 at 8:27pm by Ms. Sue
everything made sense, I just don't see why .5 m must be half a wavelength
Sunday, November 15, 2009 at 3:19pm by TO BOBPURSLEY
Language Arts
Jim, email me if you are interested in being a regular volunteer here. BobPursley@gmail.com
Saturday, October 31, 2009 at 2:05pm by bobpursley
I have no idea what the items were, that was what I was pointing out to you. The assignment said cover ....., and I can't help if those are not listed.
Friday, April 9, 2010 at 6:00pm by bobpursley
math 117 for bobpursley
ally, settle down. The answer is obviously 30. Surely you can verify that with your calculator.
Wednesday, February 23, 2011 at 7:56pm by bobpursley
to bobpursley
See my post. Anytime you have a number being taken to a power, the law of exponents holds.
Sunday, September 8, 2013 at 8:30pm by bobpursley
I would rather eat Tacos.
Saturday, March 20, 2010 at 7:44pm by bobpursley
Technology: ms.sue, bobpursley
Thursday, April 8, 2010 at 5:58pm by bobpursley
To bobpursley
I had a simple answer like that, but Ms. Sue wanted me to be more specific, so thats what I wrote. Thank you so much bobpursley:-)
Sunday, October 25, 2009 at 6:18pm by Sara
Physics-bobpursley please check
Your second solution makes no sense. Why would you set the KE of an object equal to its PE?
Thursday, November 4, 2010 at 9:18pm by bobpursley
chem -repost for bobpursley
on a per mole basis, you release 4 moles of new gas. At STP, it occupies 2*22.4 liters
Sunday, April 17, 2011 at 8:53pm by bobpursley
I dont see any unanswered. I am not a biologist....electrical engineering, physics, geek stuff mainly. I specialize in irritating Georgia Peaches.
Tuesday, March 23, 2010 at 12:41pm by bobpursley
Math BOBPURSLEY HELP HELP HELP HELP
Then set the initial kinetic energy equal to the potential energy at the surface of Earth: 1/2 mv^2 = G*Me*m/re; solve for escape velocity, v.
Monday, May 2, 2011 at 11:35am by bobpursley
Algebra 2 ..........BOBPURSLEY
What graph? There is no y in the equation (x-2)^3 -2=0
Saturday, January 26, 2013 at 7:58pm by bobpursley
Please email me if you are interested in being a "regular" volunteer tutor. Bobpursley@gmail.com Your responses are very excellent, and helpful. Thanks for all you do.
Sunday, July 5, 2009 at 4:36pm by bobpursley
It depends on the age of the child, but nearly always includes thoughts, daily activites, or even short essays. My youngest (7) often wrote letters in the journal.
Wednesday, August 13, 2008 at 8:15pm by bobpursley
PC: email me if you are interested in becoming a regular volunteer teacher. Bobpursley@gmail.com
Friday, May 29, 2009 at 9:02am by bobpursley
the first is correct. On the second, I have no idea what the diagram is, however, I suspect your "ancestor" is correct.
Thursday, December 12, 2013 at 11:02am by bobpursley
Bobpursley plz help
Please don't put my name in the subject box. Others here can help better than I, and I am only online a few minutes. I have another life also. Yes on the first. Yes on the second.
Monday, August 24, 2009 at 8:27pm by bobpursley
The question is odd. You are correct. I am not certain why the author of the question worded it all this way.
Wednesday, October 15, 2008 at 10:16pm by bobpursley
physics- response to bobpursley
II is wrong, III is wrong. If I were you I would discuss it with your instructor.
Wednesday, June 24, 2009 at 8:48pm by bobpursley
To bobpursley
thanks so much bobpursley:-)
Friday, October 9, 2009 at 10:12pm by sara
Tuesday, October 26, 2010 at 6:29pm by bobpursley
Professor, please email me at BobPursley@gmail.com I need to have your email address in order to enable you to post hot links. Thanks.
Sunday, April 6, 2008 at 3:02pm by bobpursley
If 1/5 m= 22, then m = 22 (5). If 2/3c=5.9, then c = 5.9 (3/2) Even though I am not bobpursley, I hope this helps. thanks for asking.
Monday, January 12, 2009 at 5:31pm by PsyDAG
Matt, I need your new email address. BobPursley@gmail.com
Friday, November 13, 2009 at 10:09pm by bobpursley
Univesity of Phoenix
I believe that there are two things that motivate humans: the need to grow, and the need to avoid pain. That theory is mine, and it has served me well in application. Bobpursley
Sunday, October 24, 2010 at 6:06pm by bobpursley
you're pretty good at history and literature as well... i'm not sure about Reiny, and bobpursley, and damon..and GuruBlue i've noticed bobpursley answer many different type of questions; science,
lit, history, ..
Sunday, February 22, 2009 at 10:16pm by Bella
Thanks for helping me with al those parts bobpursley. If you have time, can you also check my questions please. Thank you very, very much in return. My post is on the second page.
Tuesday, April 6, 2010 at 9:49pm by Sara
work is ok, for the net forces add the two forces, the 365 and the 374N forces. Watch significant digits.
Wednesday, October 15, 2008 at 9:29pm by bobpursley
physics repost bobpursley
This is a simple math problem. c = 300/300 * 4.18 * 10/50 = 0.8 J/gC, about. Now if you do specific heat in calories, you get 0.2. Numbers without units are meaningless.
Monday, June 22, 2009 at 10:48pm by bobpursley
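The number comes from the heat balance m_water * c_water * dT_water = m_alloy * c_alloy * dT_alloy. A sketch of the same arithmetic with the thread's values:

```python
# heat lost by the alloy = heat gained by the water
m_water, dT_water = 300.0, 10.0  # grams, degrees C (30 -> 40)
m_alloy, dT_alloy = 300.0, 50.0  # grams, degrees of cooling
c_water_cal = 1.0                # cal/(g C)

c_alloy_cal = (m_water * c_water_cal * dT_water) / (m_alloy * dT_alloy)
c_alloy_J = c_alloy_cal * 4.18   # convert calories to joules

print(c_alloy_cal)  # 0.2 cal/(g C)
print(c_alloy_J)    # ~0.84 J/(g C)
```
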
7th grade science bobpursley
Please don't put a particular teacher's name in the subject box. What often happens is that if that teacher is not online, other teachers don't look at it, and you MISS OUT on help. Thanks.
Wednesday, January 16, 2013 at 9:12pm by bobpursley
Physics-bobpursley can u check this again please
Dang. I forgot about the PE (potential energy) gain. F(net)*d = 1/2 kx^2 + 1/2 mv^2 + mgx. Sorry, it has been a long day for me (since three am). Check my thinking.
Thursday, October 28, 2010 at 10:54pm by bobpursley
I think bobpursley meant that: pOH = -log[OH-] and pH = 14 - pOH Use these relationships to get the pH after finding the molarity of KOH = [OH-] as bobpursley suggested.
Wednesday, August 5, 2009 at 1:35am by GK
I don't have a clue about what you are on about; I bet bobpursley agrees with that too. Well I totally agree with bobpursley. Thanks. P.S: what does S.S mean for the subject??!!
Monday, October 29, 2007 at 8:08am by Danielle
Go with bobpursley - math
Never mind my solution, go with bobpursley Found my "silly error" , just noticed I had my fractions upside - down. e.g. speed for shorter trip = x/4 , etc. Can I blame it on the heat affecting my
brain ?
Friday, July 22, 2011 at 9:38am by Reiny
Please bobpursley help.algebra 2
The first is a function of x as there is one y value for all x. At any x, y=3. The second is not a function because there is no value of y defined for any x. At x=3, y can be any value, and at other
x values, y is undefined.
Monday, August 27, 2007 at 8:46pm by bobpursley
sra McGuin
Scan and attach to an email to me Bobpursley@gmail.com and I will get it to Sra McGuin
Thursday, January 1, 2009 at 9:31pm by bobpursley
ATTN bobpursley
Saturday, October 4, 2008 at 2:04pm by bobpursley
Wednesday, January 14, 2009 at 6:26pm by bobpursley
go with bobpursley - algebra
ARRGGGHHHH!!!! looks like I messed up in my 45-49 could be done with 1 quarter, 2 dimes, and 4 pennies. Nowhere else do we actually need 2 nickels so as bobpursley said 3 quarters, 2 dimes 1 nickel 4
pennies for a total of 10 coins
Monday, May 7, 2012 at 6:40pm by Reiny
bobpursley please help
Hmmmm, it appears to me you did too much rounding, and I do not understand all the conversions. wf^2 = wi^2 + 2*a*d; 0 = 16.1^2 + 2(-1.93)d; d = 16.1^2/3.86 radians. distance = angular displacement * r = 16.1^2/3.86 * 0.0330/2 = 1.11 m. Check that.
Sunday, November 28, 2010 at 4:55pm by bobpursley
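The steps in that reply (wf^2 = wi^2 + 2*a*theta, then arc length theta*r) can be rechecked numerically. A sketch with the thread's numbers, reading the 0.0330 m figure as a diameter so that r = 0.0330/2:

```python
w_i = 16.1      # initial angular speed, rad/s
alpha = 1.93    # magnitude of the angular deceleration, rad/s^2
r = 0.0330 / 2  # radius, m (0.0330 m treated as a diameter)

# wf^2 = wi^2 + 2*alpha*theta with wf = 0
theta = w_i**2 / (2 * alpha)  # angular displacement, rad
distance = theta * r          # arc length, m
print(round(distance, 2))  # ~1.11
```
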
Physics(thank you for the help)
Physics(thank you for the help) - bobpursley, Sunday, May 27, 2012 at 4:53pm I am uncertain what part b is. I am uncertain what F is in the ratio F/Fa If F is the total force, then F/Fa=k by
definition. So frankly, I have no idea what the question is asking.
Sunday, May 27, 2012 at 4:45pm by bobpursley
Chemistry Bobpursley
On b, think what a buffer does. On c, if a very large amount of strong acid, it is most likely to make the solution more acid, decreasing, I agree with that. The issue is what is a very large amount.
Wednesday, November 7, 2012 at 10:08pm by bobpursley
physics-bobpursley/drwls please check
bobpursley/ drwls please help me with my post on Sunday, April 15 at 10:16pm and 10:19pm. Thanks!
Monday, April 16, 2007 at 7:24am by Mary
acceleration on moon = G*Massmoon/radius^2 = G*7.35E22/(1.74E6)^2 = G*2.43E10 = 6.67E-11*2.43E10 = 1.62 m/s^2, or as a gravitational field constant, 1.62 N/kg. I have no idea why you used the mass of Earth.
Where did you get a = G*m*r^-2 from? I thought that the equation was net force = ...
Saturday, September 12, 2009 at 5:03pm by TO BOBPURSLEY
You don't get it. Divide the force of gravity by the mass of the object which is being accelerated. F/massyou = your acceleration. Of course, forces come in couplets, so F/massmoon = the moon's acceleration toward you, where F is the force of gravity; it operates on you and the moon, ...
Saturday, September 12, 2009 at 5:03pm by bobpursley
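The lunar surface gravity computed earlier in this thread, g = G*M_moon/r^2, checks out numerically with the quoted values:

```python
G = 6.67e-11      # N m^2/kg^2
M_moon = 7.35e22  # kg
r_moon = 1.74e6   # m

g_moon = G * M_moon / r_moon**2
print(round(g_moon, 2))  # ~1.62 m/s^2
```
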
BobPursley, HELP! Please!!!!
I am soooo confused!!
Wednesday, March 25, 2009 at 7:53pm by BobPursley, HELP! Please!!!!
chem -repost for bobpursley
2. Yes, it is acidic because it is a salt made from a strong acid and a weak base. 3. Catalysts speed a reaction by providing an alternate reaction pathway; since they speed it, the rate constant is greatly changed. Good thinking.
Saturday, April 9, 2011 at 2:34pm by bobpursley
Thanking MS Sue
Thank you Ms Sue for all you do here. We appreciate it. You have helped students all your life, and I am certain many of them haven't forgotten you. Thank you from all of us. BobPursley
Sunday, September 11, 2011 at 10:02pm by bobpursley
Be sure you're paying attention to what Bobpursley has written. The NCLB regulations are not the only laws that the feds have passed that affect education, even though education is not in the
Constitution or Amendments. Bobpursley has listed other federal laws that affect ...
Tuesday, August 5, 2008 at 12:32am by Writeacher
Grammar and Compositon Bobpursley
I think I would work on his personal characteristics; rich is just wealth. Is he kind, charitable, sociable, community oriented? What is his legacy? Rich is awfully shallow for a legacy, as is poor. Humans are worth more than their bank account.
Wednesday, November 4, 2009 at 11:25am by bobpursley
You didn't read bobpursley's original answer very well. math - bobpursley, Tuesday, January 26, 2010 at 8:35pm I wonder how tall and wide the door is. If the door is wide and tall enough, the tallest
cabinet is 8 feet.
Tuesday, January 26, 2010 at 9:12pm by Ms. Sue
Algebra - for bobpursley
Now any UTex student worth his salt knows -2(4-3k) = -8+6k. Must have been Aggie thinking here. With that change, it becomes 14-8k-16 = -8+6k; subtracting 6k from each side, -2-14k = -8; adding two to each side, -14k = -6; k = 6/14, or 3/7.
Thursday, September 26, 2013 at 6:16pm by bobpursley
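The result k = 3/7 can be confirmed by substituting it back into the rewritten equation 14 - 8k - 16 = -8 + 6k:

```python
from fractions import Fraction

k = Fraction(3, 7)
lhs = 14 - 8 * k - 16  # left side of the rewritten equation
rhs = -8 + 6 * k       # right side
print(lhs, rhs)        # both -38/7
assert lhs == rhs
```
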
math - bobpursley, Tuesday, May 14, 2013 at 7:41pm: 125 = M*e^(.045*40), so M = 125/e^1.8 = 20.66236 thousand. 125 = M(1+.05)^40; log 125 = log M + 40 log 1.05; M = 17.75571 thousand.
Tuesday, May 14, 2013 at 7:22pm by bobpursley
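Both discounted principals in that post can be evaluated directly (continuous compounding at 4.5% and annual compounding at 5%, 40 years, 125 thousand target):

```python
import math

target = 125.0  # thousands

# 125 = M * e^(0.045 * 40)  =>  M = 125 / e^1.8
M_continuous = target / math.exp(0.045 * 40)

# 125 = M * 1.05^40  =>  M = 125 / 1.05^40
M_annual = target / 1.05**40

print(round(M_continuous, 3))  # 20.662
print(round(M_annual, 3))      # 17.756
```
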
Math CONTINUATION OF QUESTION FROM bobpursley
Thank you for answering the question before bobpursley, I understand how you did the deriv of inside, outside, but how do I now do the second derivative since you have the extra -18x^2 in the
numerator?? Thank you very much! (I suppose I have to brush up on my derivatives) ...
Thursday, March 10, 2011 at 9:05pm by KIKSY
physics - bobpursley, Tuesday, April 2, 2013 at 3:42pm: deltaLength = 1000ft * coeffexpansion * 33. Look up the coefficient of linear expansion for STEEL (railroad tracks are steel, not iron), figure delta length, add that to the original 1000 ft. It scares me to think of a train running ...
Tuesday, April 2, 2013 at 3:35pm by bobpursley
Anonymous and bobpursley
can you please Anonymous and bobpursley explain this to me. Please and thank you
Thursday, November 4, 2010 at 9:26pm by anna
The steps? You have to have a physician with admitting privileges admit you. It can occur in his office, at your home, in the emergency room. If the physician admits you, you are admitted. This can occur in a wide variety of circumstances. The process of the admitting ...
Thursday, April 1, 2010 at 9:29am by bobpursley
To Bobpursley
Ok, I finally understand your work and it makes sense to me, but I guess it's wrong and I don't see where I followed your work and got the same answer as you, positive one... The only thing is I put the problem into my calculator and get negative one, and the back of the book gives...
Sunday, November 15, 2009 at 8:31pm by To Bobpursley (Wrong answer???)
Math 9th
I got it. bobpursley was REALLY close for b. the only difference is that there's no decimal. also, c=225. just use the substitution method and you should be fine. If you're tired Kat, especially
since you've been staring at it a while, i suggest forgetting about it now and ...
Friday, January 23, 2009 at 9:13pm by nattily
YEs, it is the correct balanced equation bobpursley A 2.80 mol/L solution of iron (II) dichromate is combined with 5.00L of a 2.25 mol/L acidic solution of tin (II) bromide. Calculate the volume of
iron (II) dichromate required. FeCr2O7 + SnBr2 --> SnCr2O7 + FeBr2 Is this ...
Wednesday, May 16, 2007 at 6:32pm by bobpursley
algebra 2
How is this problem done? IF f(x)=2x^2-8x-3 FIND f(-2) algebra 2 - bobpursley, Saturday, January 16, 2010 at 1:42pm Put in for x the value -2, and compute f(-2) algebra 2 - hellogoodbie, Saturday,
January 16, 2010 at 1:47pm so is the answer f=21/2? if that is right, thank you ...
Saturday, January 16, 2010 at 2:04pm by hellogoodbie
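To settle the follow-up question in that thread: direct substitution gives f(-2) = 2(-2)^2 - 8(-2) - 3 = 8 + 16 - 3 = 21, not 21/2. A one-line check:

```python
def f(x):
    return 2 * x**2 - 8 * x - 3

print(f(-2))  # 21
```
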
Goodness. forcegravity = G*Massmoon*massyou/r^2; forcegravity/massyou = G*Massmoon/r^2; "g" = G*Massmoon/r^2
Saturday, September 12, 2009 at 5:03pm by bobpursley
Chemistry!! - bobpursley please
Calculate the maximum wavelength of light capable of removing an electron for a hydrogen atom from the energy state characterized by the following when n=4 =__________nm Responses Chemistry!! -
bobpursley, Wednesday, October 22, 2008 at 4:05am So the two energy states are n=4 ...
Wednesday, October 22, 2008 at 6:40pm by Giznelbell
Grammar and Composition Bobpursley
On the last, your thesis did not say anything about businessman,you said he was rich. So we disagree on that. On the first, you are right, I numbered the paragraphs on my printout,and the spaces
separating the paragraphs did not print. I see now. The third point, in paragraph ...
Wednesday, November 4, 2009 at 5:21pm by bobpursley
A vertical spring with a spring constant of 450 N/m is mounted on the floor. From directly above the spring, which is unstrained, a 0.30 kg block is dropped from rest. It collides with and sticks to
the spring, which is compressed by 3.0 cm in bringing the block to a momentary...
Friday, April 6, 2007 at 8:42pm by Mary
Algebra 1 Please bobpursley
Hi Mary, I understand from your post that you are the mother of the student. I really can't tell at what grade level this is supposed to be, but I agree with Bobpursley's earlier comment when you
first posted this question. I also cannot make any sense of this problem. I have ...
Saturday, September 12, 2009 at 2:53pm by Reiny
Ok, assume it equals zero. 2x + 3y - 6 = 0: subtract 3y from each side, add six to each side, divide both sides by two. It's how it is in the book; I'm supposed to solve for x, but how? How do I solve the following: Directions: solve each literal equation for the indicated variable...
AlgebrAlgebra-Mathmate or BobPursley or Reiny
I read it as the logarithm of x. If you read it that way also, it is a logarithmic equation. Very few mathematicians get overly concerned with notation, however, I have known many high school
teachers make a big deal of Log(x) and other notation. I still remember when one ...
Tuesday, December 14, 2010 at 3:15pm by bobpursley
Math CONTINUATION OF QUESTION FROM bobpursley
I don't think bobpursley is online, so permit me to continue ... from Bob, y' = -18x^2(1+2x^3)^-2, so using the product rule and chain rule combination y'' = (-18x^2)(-2)(1+2x^3)^-3(6x^2) + (1+2x^3)^-2(-36x) = 216x^4(1+2x^3)^-3 - 36x(1+2x^3)^-2 = 36x(1+2x^3)^-3 [ 6x^3 - (1+2x^3...
Thursday, March 10, 2011 at 9:05pm by Reiny
Algebra 1 Please bobpursley
Mary, and Mary's mom: One of our other math teachers wrote this to me in an email: "I cannot make much sense out of the problem, even after reading it a few times. It sounds to me students are
supposed to make the "identical" creatures with scissors and paper and use the ...
Saturday, September 12, 2009 at 2:53pm by bobpursley
physics repost bobpursley
The energy given off by 300 grams of an alloy as it cools through 50°C raises the temperature of 300 grams of water from 30°C to 40°C. The specific heat of the alloy (in cal/g C ) is: i know Q=CpM
(delta T) but what is what? Responses * physics - bobpursley, Monday, June 22, ...
Monday, June 22, 2009 at 10:48pm by jane
Reiny or Damon or Bobpursley
Express the series 11 + 18 + 27 + ... + 171 using sigma notation. Usually you would take 27-18 and that's the step between numbers, right? (I think it's the k-value?) But 27-18=9 and 18-11=7. Are we stepping by k+2 here? Like k, k+2, k+4, k+6...? Please help me. I'm not confident ...
Friday, April 27, 2012 at 6:09pm by Tabby
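The differences 7, 9, 11, ... grow by 2, so the terms are quadratic rather than arithmetic. One indexing that fits (my suggestion, not stated in the thread) is k^2 + 2 for k = 3 to 13, since 3^2 + 2 = 11 and 13^2 + 2 = 171. A sketch checking that reading:

```python
# terms of the series under the proposed indexing sum_{k=3}^{13} (k^2 + 2)
terms = [k**2 + 2 for k in range(3, 14)]
print(terms[:3], terms[-1])  # [11, 18, 27] 171
print(sum(terms))            # 836
```

Every listed term (11, 18, 27, ..., 171) is reproduced, which supports writing the series as sigma from k=3 to 13 of (k^2 + 2).
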
acceleration estimate(bobpursley)
Well, bobpursley explained this to me. I didn't understand this well. Since there was an error in the data, I am posting it again:
t (sec): 1, 1.5, 2, 2.5
v (ft/sec): 12.2, 13, 13.4, 13.7
velocity of an object moving along a line at various times. How do I estimate the object's ...
Calculus, bobpursley!
Tuesday, September 21, 2010 at 8:01pm by bobpursley
"STEVE", and anonomous responder
your response shows care and insight to this question: http://www.jiskha.com/display.cgi?id=1324045462 Steve, several of your replies have been very good. If you are interested in joining our
Volunteer Jiska staff, please email me at Bobpursley@gmail.com Bob
Friday, December 16, 2011 at 1:27pm by bobpursley
Physics-bobpursley please check
First, you have so many math errors it hurts. If 1/2 mv^2 = .9 J, then v^2 = 2*.9/.035, so v = 7.17 m/s. I have no idea where you got 2.215 m/s. You have told me several things about the initial PE: a) it was a height of .5 m (PE = .035*9.8*.5 = .17 J) (you indicated in your solution mass was ....
Thursday, November 4, 2010 at 9:18pm by bobpursley
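The corrected numbers in that reply can be confirmed: with KE = 0.9 J and m = 0.035 kg, v = sqrt(2*0.9/0.035), and the PE at a height of 0.5 m follows from mgh:

```python
import math

KE = 0.9   # kinetic energy, joules
m = 0.035  # mass, kg

v = math.sqrt(2 * KE / m)  # from 1/2 m v^2 = KE
print(round(v, 2))  # ~7.17 m/s

PE = m * 9.8 * 0.5  # potential energy at 0.5 m height
print(round(PE, 2))  # ~0.17 J
```
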
as I did.
Sunday, September 13, 2009 at 9:38pm by bobpursley
Friday, July 23, 2010 at 8:54pm by bobpursley
For bobpursley
ok, thanks.
Saturday, February 12, 2011 at 8:34pm by bobpursley
Formal Reduction of the General Combinatorial Decision Problem
Results 11 - 20 of 69
, 2011
Cited by 13 (7 self)
The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought
that one day machines may become more intelligent than people, many of its critics view such a prospect with horror. Partly because these controversies attract so much attention, one of the most
important accomplishments of AI has gone largely unnoticed: the fact that many of its advances can also be used directly by people, to improve their own human intelligence. Chief among these advances
is Computational Logic. Computational Logic builds upon traditional logic, which was originally developed to help people think more effectively. It employs the techniques of symbolic logic, which has
been used to build the foundations of mathematics and computing. However, compared with traditional logic, Computational Logic is much more powerful; and compared with symbolic logic, it is much
simpler and more practical. Although the applications of Computational Logic in AI require the use of mathematical notation, its human applications do not. As a consequence, I have written the main
part of this book informally, to reach as wide an audience as possible. Because human thinking is also the subject of study in many other fields, I have drawn upon related studies in Cognitive
Psychology, Linguistics, Philosophy, Law, Management Science and English
, 1999
Cited by 11 (8 self)
In this paper we briefly introduce a Wide Spectrum Language and its transformation theory and describe a recent success of the theory: a general recursion removal theorem. This theorem includes as special cases the two techniques discussed by Knuth [12] and Bird [7]. We describe some applications of the theorem to cascade recursion, binary cascade recursion, Gray codes, the Towers of Hanoi problem, and an inverse engineering problem. 1 Introduction In this paper we briefly introduce some of the ideas behind the transformation theory we have developed over the last eight years at Oxford and Durham Universities and describe a recent result: a general recursion removal theorem. We use a Wide Spectrum Language (called WSL), developed in [19,20,21] which includes low-level programming constructs and high-level abstract specifications within a single language. Working within a single language means that the proof that a program correctly implements a specification, or that a specification correct...
- International Journal of Bifurcation and Chaos , 1995
Cited by 11 (0 self)
. Finding a natural meeting ground between the highly developed complexity theory of computer science ---with its historical roots in logic and the discrete mathematics of the integers--- and the
traditional domain of real computation, the more eclectic less foundational field of numerical analysis ---with its rich history and longstanding traditions in the continuous mathematics of
analysis--- presents a compelling challenge. Here we illustrate the issues and pose our perspective toward resolution. This article is essentially the introduction of a book with the same title (to
be published by Springer) to appear shortly. Webster: A public declaration of intentions, motives, or views.
- Logic, Methodology and Philosophy of Science: Proceedings of the Twelfth International Congress , 2005
Cited by 10 (1 self)
Abstract. Formal syntax has hitherto worked mostly with theoretical frameworks that take grammars to be generative, in Emil Post’s sense: they provide recursive enumerations of sets. This work has
its origins in Post’s formalization of proof theory. There is an alternative, with roots in the semantic side of logic: model-theoretic syntax (MTS). MTS takes grammars to be sets of statements of
which (algebraically idealized) well-formed expressions are models. We clarify the difference between the two kinds of framework and review their separate histories, and then argue that the
generative perspective has misled linguists concerning the properties of natural languages. We select two elementary facts about natural language phenomena for discussion: the gradient character of
the property of being ungrammatical and the open nature of natural language lexicons. We claim that the MTS perspective on syntactic structure does much better on representing the facts in these two
domains. We also examine the arguments linguists give for the infinitude of the class of all expressions in a natural language. These arguments turn out on examination to be either unsound or lacking
in empirical content. We claim that infinitude is an unsupportable claim that is also unimportant. What is actually needed is a way of representing the structure of expressions in a natural language
without assigning any importance to the notion of a unique set with definite cardinality that contains all and only the expressions in the language. MTS provides that.
- In: Artificial Intelligence and Neural Networks: Steps toward Principled Integration. Honavar , 1994
Cited by 9 (6 self)
1 An understanding of learning -- the process by which a learner acquires and refines a broad range of knowledge and skills -- is central to the enterprise of building truly adaptive, flexible,
robust, and creative intelligent systems. Significant theoretical and empirical contributions to the characterization of learning in computational terms have emerged from research in a number of
disparate research paradigms. The limitations of individual paradigms and of particular classes of techniques within each paradigm are beginning to be recognized. Converging lines of evidence from
multiple sources, both theoretical as well as empirical, suggest that artificial intelligence systems, in order to be able to deal with complex tasks such as recognizing and describing 3-dimensional
objects, or communicating in natural language, must be able to effectively utilize a range of learning algorithms operating with an adequate repertoire of representational structures. This paper
draws on a broad ran...
, 1996
"... This paper concerns the formal study on the generative powers of extended splicing ..."
, 1993
Cited by 7 (3 self)
In this paper we briefly introduce a Wide Spectrum Language and its transformation theory and describe a recent success of the theory: a general recursion removal theorem. Recursion removal often
forms an important step in the systematic development of an algorithm from a formal specification. We use semantic-preserving transformations to carry out such developments and the theorem proves the
correctness of many different classes of recursion removal. This theorem includes as special cases the two techniques discussed by Knuth [13] and Bird [7]. We describe some applications of the
theorem to cascade recursion, binary cascade recursion, Gray codes, and an inverse engineering problem.
- AI Communications , 1994
Cited by 7 (7 self)
An earlier article [25] discusses the proposition that the storage and processing of information in computers and in brains may often be understood as information compression. A subsequent article
[15] criticises the computing aspects of [25] and research on the more specific conjecture that all forms of computing and formal reasoning may usefully be understood as information compression. The
present article, which is intended to be intelligible without recourse to earlier articles, answers the main points in [15], tries to correct the many inaccuracies and misconceptions in that article,
and discusses related issues. Topics which are discussed include: the way theories are or should be developed; the role of evidence in motivating research; apparent shortcomings in the Turing machine
concept as a reason for seeking new principles of computing; the apparent conflict between the idea of `computing as compression' and the fact that computers may create redundancy - and how the
, 1994
Cited by 4 (0 self)
Classifier systems are sub-symbolic or dynamic approaches to machine learning. These systems have been studied rather extensively. In this thesis some theoretical results about the long-term
behaviour and the computational abilities of classifier systems are derived. Then some experiments are undertaken. The first experiment entails the implementation of a simple logic function, a
multiplexer in a simple classifier system. It is shown that this task can be learned very well. The second task that is taught to the system is a mushroom-classification problem that has been
researched with other learning systems. It is shown that this task can be learned. The last problem is the parity problem. First it is shown that this problem does not scale linearly with its number
of bits in a straightforward classifier system. An attempt is made to solve it with a multilayer classifier-system, but this is found to be almost impossible. Explanations are given of why this
should be the case. Then some ... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=38649&sort=cite&start=10","timestamp":"2014-04-17T19:53:27Z","content_type":null,"content_length":"38180","record_id":"<urn:uuid:38c298e1-fb49-4c76-81d5-7a65ec498688>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
WyzAnt Resources
Latest answer by
Parviz F.
Woodland Hills, CA
The goal is to simplify the trig indentity expression as much as possible using the different fundamentals of identities like reciprocal identities, quotient identities, and pythagorean identitie...
Top voted answer by
Steve S.
Westford, MA
Top voted answer by
Mona B.
West Orange, NJ
a family uses 12.5lb of paper in a week and recycles about 3/4 of its waste. how many pounds of paper does the family recycle?
Latest answer by
Vivian L.
Middletown, CT
if a parked car were hit directly from behind by another car what direction would it move and why? is it possible for it to turn sideways on its own?
Latest answer by
Vivian L.
Middletown, CT
We define the following functions: F (x)=2x+5 g(x)=x^2-3 h(x)=(7-x)/3 Compute (f – h) (4). Evaluate the following two compositions: A: (f°g) (x) B: (h°g) (x) Graph...
Latest answer by
Shelly J.
Stamford, CT
Jason has $20 more than Darcy and Maria has twice as much money as Jason. How much money do Darcy and Jason have altogether?
Top voted answer by
Steve S.
Westford, MA
Can someone please help with this, find the factor s^2(t – u) – 9t^2(t – u)
Latest answer by
Steve S.
Westford, MA
f(x)= (6x-6)(x+3)/(-6x-2)(3x-6) what are the vertical asymptotes? what are the horizontal asymptotes? what are the x-intercepts? what are the y-intercepts?
Latest answer by
Steve S.
Westford, MA
please answer me i really need help on my homework
Latest answer by
Steve S.
Westford, MA
What are the factors of x^3-10x^2+17x+28
Latest answer by
Steve S.
Westford, MA
For the following functions, indicate that the x-axis is an asymptote, a horizontal asymptote other than the axis, a vertical asymptote, a slanted asymptote, or no asymptotes at all. f(x)=...
Latest answer by
Steve S.
Westford, MA
For the following expression indicate if the x-axis is an asymptote, if there is a horizontal asymptote, vertical asymptote, slanted asymptote, or not asymptotes at all f(x)= 1/x f(x)= x3...
Latest answer by
Steve S.
Westford, MA
Solve each of the following equations. (i) 3cos2 2x+ 4 sin2x =1 , 0° < x < 360° (ii) sec x (tan x -2) = 2 cosec x , 0° < x < 360° (iii)...
Top voted answer by
Kenneth G.
Tarzana, CA
Ok, I was watching TV and saw hold'em poker playing. In that game three cards are flipped over in the center of the table. If one was to give the cards valuations (Ace=1, 2=2,....and so forth
Latest answer by
Elijah C.
Indio, CA
Y=6 Y=5x+7
Latest answer by
Kevin C.
Kensington, MD
consider the equation f(x)=-6-1 and g(x)=4x^2 select the solution for (fg)(x)
Top voted answer by
Kenneth G.
Tarzana, CA
How to solve an acute triangle with a missing side by using the law of cosines.... Segment CB is 45, segment BA is 43, segment CA is unknown "b", <B is 88 degrees. I am supposed to find...
Latest answer by
Steve S.
Westford, MA
Simplify the following expression 4^1/3 4^1/6 what is the value of 4^ 1 4^1 ...
Latest answer by
Steve S.
Westford, MA
Find a symbolic Representation for f ^-1(x) f(x) = 1 x - 12 f ^-1(x) = ?
Top voted answer by
Steve S.
Westford, MA
Find a Symbolic Representation for F^-1(x): F(x) = ∛(10x), f^-1(x) = ?
(Source: http://www.wyzant.com/resources/answers/questions?f=new-answers&pagesize=20&pagenum=125)
integration by parts for the fractional Laplacian
Is there an integration by parts formula for fractional laplacians in $L^p(\mathbb{R}^N)$, something like $$ s\in(0,1),\qquad\int\limits_{\mathbb{R}^N}f[(-\Delta)^sg] =\int\limits_{\mathbb{R}^N}[(-\
Delta)^{s}f]g $$ or an intermediate formula involving "lower derivatives"? Typically, I would like to know if $$ \int\limits_{\mathbb{R}^N}f\cdot[(-\Delta)^sf] dx\geq 0 $$ still holds true as for the
usual Laplacian (say for well-behaved $f$)? Computing formally in the Fourier space with $\widehat{(-\Delta)^sf}(\xi)=|\xi|^{2s}\hat{f}(\xi)$ it seems obvious, but it is not clear to me from the
Riesz potential representation of $(-\Delta)^{-s}f$. Also, what kind of regularity/decay at infinity do I need in order not to bother with boundary terms at infinity?
fa.functional-analysis ap.analysis-of-pdes
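For completeness, the Fourier-side computation the asker alludes to can be written out. Writing $\widehat{(-\Delta)^s f}(\xi)=|\xi|^{2s}\hat{f}(\xi)$ and using Plancherel (say for real Schwartz $f$):

```latex
\int_{\mathbb{R}^N} f\,[(-\Delta)^s f]\,dx
  = \int_{\mathbb{R}^N} \overline{\hat f(\xi)}\,|\xi|^{2s}\,\hat f(\xi)\,d\xi
  = \int_{\mathbb{R}^N} |\xi|^{2s}\,|\hat f(\xi)|^{2}\,d\xi \;\ge\; 0 .
```

This is the rigorous version of the "formal computation in Fourier space" mentioned in the question; sufficient decay (e.g. $f\in\mathcal{S}(\mathbb{R}^N)$) removes any worry about boundary terms at infinity.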
$(-\Delta)^{s}$ is positive if and only if $0<s\le 1$, and it generates a positive heat semigroup $e^{-t(-\Delta)^{s}}$. – user23078 May 6 '13 at 15:49
and I guess the intermediate formula involving "lower derivatives" looks like $$ \int\limits_{\mathbb{R}^d}f[(-\Delta)^s g]=-\int\limits_{\mathbb{R}^d}(-\Delta)^{s/2} f(-\Delta)^{s/2} g $$ ??? –
leo monsaingeon May 6 '13 at 18:44
These formulae are all correct, and the easiest way to realize this is to use the Fourier space representation, the usual function space is the $H^s$ space. – Ray Yang May 7 '13 at 17:29
1 Answer
You can integrate by parts:
$$ \int_{\mathbb{R}^d} (-\Delta)^s f(x) g(x)dx=\int_{\mathbb{R}^d} (-\Delta)^s g(x) f(x)dx. $$ Using Fourier and $L^2$ the equality is obvious. Let's do it "by hand" in $d=1$ and $s=1/2$
(the other cases follow the same idea):
(accepted answer) You have $$ \int_{\mathbb{R}} (-\Delta)^{1/2} f(x) g(x)dx=\int_\mathbb{R} g(x)\,P.V.\int_\mathbb{R} \frac{f(x)-f(y)}{|x-y|^2}dydx $$ $$ =\int_\mathbb{R} P.V.\int_\mathbb{R} \frac{g(y)(f(y)-f(x))}{|x-y|^2}dydx=-\int_\mathbb{R} P.V.\int_\mathbb{R} \frac{g(y)(f(x)-f(y))}{|x-y|^2}dydx. $$ From here $$ \int_{\mathbb{R}} (-\Delta)^{1/2} f(x) g(x)dx=\frac{1}{2}\int_\mathbb{R} P.V.\int_\mathbb{R} (g(x)-g(y))\frac{f(x)-f(y)}{|x-y|^2}dydx $$ $$ =\frac{1}{2}\int_\mathbb{R} (-\Delta)^{1/2}g(x)f(x)dx+\frac{1}{2}\int_{\mathbb{R}}P.V.\int_{\mathbb{R}} -f(y)\frac{g(x)-g(y)}{|x-y|^2}dydx. $$ Now you can change
variables again in the last integral and conclude the result.
Thank you, very instructive. This is precisely the computation I wanted to see! – leo monsaingeon May 16 '13 at 20:26
(Source: http://mathoverflow.net/questions/129830/integration-by-parts-for-the-fractional-laplacian)
Institute for Mathematics and its Applications (IMA)
- 2012 PI Summer Graduate Program: Algebraic Geometry for Applications
This program is primarily for graduate students of IMA Participating Institutions. Support for a limited number of students at other US universities will be provided through funding from the National Science Foundation. In order to participate, students need to complete the online application form, provide a personal statement, and supply (1) a letter of nomination from the PI chair (for students from an IMA PI) or (2) a recommendation letter (for students from institutions that are not IMA PIs).
From Monday, June 18, 2012 through Friday, July 6, 2012, Georgia Institute of Technology will be the host of the IMA PI Summer Graduate Program in Mathematics.
Techniques and algorithmic methods from algebraic geometry are making an impact on applied mathematics and engineering. Algebraic techniques provide new approaches to hard computational problems.
They deliver exact answers desired in many applications through symbolic computation, and enhance approximate numerical solving.
The program will be structured around a course in computational algebraic geometry, treating foundational material as well as current applications. The foundations will include introductions to
Gröbner bases, numerical algebraic geometry, semidefinite programming and real algebraic geometry, and tropical geometry. We will include tutorials and lab sessions on computer algebra systems. There
will be several guest lectures by the leading researchers in the field.
Main Lecture Topics
• Polynomial Optimization and Real Algebraic Geometry
• Computer Algebra and Numerical Algebraic Geometry
• Tropical Geometry
(Source: http://ima.umn.edu/2011-2012/PISG6.18-7.6.12/)
Re: Trigonometry
Hi bobbym and Maiya,
The solution T#20 is 200 √3 + 200 meters = 546.4 meters approximately.
Excellent, bobbym! Keep trying, Maiya!
T#21. If the value of cos A = 1/√2, find the value of tan A.
Character is who you are when no one is looking.
Re: Trigonometry
Hi ganesh;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Trigonometry
hello Ganesh;
Re: Trigonometry
Hi bobbym and Maiya,
The solution T#21 is correct. Good work!
T#22. A lighthouse was observed from two points in a line with it, but on opposite sides of it. The distance between the points is 120 meters. If the angles of elevation are 30^o and 45^o, find the
height of the lighthouse.
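For the record (the thread keeps its worked answers in hidden spoiler boxes), one way to derive T#22: with height $h$ and the two observation points on opposite sides of the lighthouse,

```latex
\frac{h}{\tan 45^\circ} + \frac{h}{\tan 30^\circ} = 120
\;\Longrightarrow\; h\,\bigl(1+\sqrt{3}\bigr) = 120
\;\Longrightarrow\; h = \frac{120}{1+\sqrt{3}} = 60\,\bigl(\sqrt{3}-1\bigr) \approx 43.9 \text{ meters}.
```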
Re: Trigonometry
Hi ganesh;
Re: Trigonometry
Hi bobbym and Maiya,
The solution T#22 is
meters. Neat job, bobbym!
T#23. Two men are on the opposite sides of a tower. They measure the angles of elevation of the top of the tower as 30° and 45° respectively. If the height of the tower is 150 meters, find the
distance between the men.
Re: Trigonometry
Re: Trigonometry
Hey guys! I haven't been online... here's a couple answers
"Have you ever had a dream that you were so sure was real? What if you were unable to wake from that dream? How would you know the difference between the dream world and the real world? "
Re: Trigonometry
"A couple"... lol, my bad.
Re: Trigonometry
Hi bobbym and Stormtangent,
bobbym : The solution T#23 is perfect. Excellent!
Stormtangent : The solutions T#21, T#22, and T#23 are perfect. Brilliant!
T#24. From the top of a lighthouse, the angles of depression of two ships floating in a straight line on either side of the lighthouse are observed as 30^o and 45^o. If the height of the lighthouse
is 200 meters, find the distance between the ships.
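Similarly for T#24 (200 m lighthouse, ships on either side, depressions 30° and 45°), the distance between the ships works out as:

```latex
d = \frac{200}{\tan 45^\circ} + \frac{200}{\tan 30^\circ}
  = 200 + 200\sqrt{3} \approx 546.4 \text{ meters},
```

which matches the pattern of the T#20 answer quoted earlier in the thread.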
Re: Trigonometry
Hi ganesh;
Re: Trigonometry
Hi bobbym and Maiya,
bobbym : The solution T#24 is correct. Excellent!
Maiya : The solutions T#23 and T#24 are correct. Excellent!
T#25. If cos A = ½, then tan 2A = ?
Re: Trigonometry
Hi ganesh;
Re: Trigonometry
Hi bobbym and Maiya,
The solution T#25 is correct. Stupendous!
T#26. Find the value of
Re: Trigonometry
Re: Trigonometry
Hello Ganesh;
Last edited by Maiya (2011-11-29 22:41:40)
Re: Trigonometry
Hi bobbym and Maiya,
The solution T#26 is correct, bobbym! Superlative!
T#27. What is the value of sin30sin45sin60 (in degrees)?
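T#27 is a direct evaluation (the thread's answer is hidden in a spoiler):

```latex
\sin 30^\circ \,\sin 45^\circ \,\sin 60^\circ
= \frac{1}{2}\cdot\frac{\sqrt{2}}{2}\cdot\frac{\sqrt{3}}{2}
= \frac{\sqrt{6}}{8}.
```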
Re: Trigonometry
Hi ganesh;
Real Member
Re: Trigonometry
hi ganesh
Last edited by anonimnystefy (2011-12-29 02:27:28)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
(Source: http://www.mathisfunforum.com/viewtopic.php?pid=194800)
Department of Mathematics
Current Students
MAT courses
APM courses
Help and Resources
• The Preparing for University Math Program (PUMP) is a non-credit course specifically designed to introduce the participants to the background knowledge required for entry-level Mathematics
courses in calculus and linear algebra at the University of Toronto. PUMP is for those who want to close any existing gap between high school math and university level math courses. It covers
algebra, geometry, trigonometry and calculus at the high school level. PUMP would also be beneficial to those who wish to review high school math before attempting university-level math or other
science courses.
• See link for a site we created for in-coming first year students to help review material before taking courses like MAT133Y, MAT135H/MAT136, MAT221, and MAT223: http://uoft.me/precalc
Introduction to proofs is a half-credit course that was designed for:
• Students who intend to register in analytical courses but who have not been introduced to proofs in their high school program
• Mathematics and Science students who are planning to take some of our more demanding courses like MAT137Y, MAT157, MAT237Y, MAT240, etc.
• Students currently registered in other programs who are interested in completing a mathematical course on solving proofs
• Students who complete the first-year computational mathematics courses but who hope to transition into more challenging mathematics courses
The course will provide students with plenty of practice on reading and reasoning in mathematics, finding and writing proofs and doing general problem solving.
Some elements of logic, sets and cardinality, relations and functions, geometry and number theory will be introduced during the course.
(Source: http://www.math.toronto.edu/cms/current-students-ug/)
Introduction and Basic Definitions
The concept of a limit is fundamental to Calculus. In fact, Calculus without limits is like Romeo without Juliet. It is at the heart of so many Calculus concepts like the derivative, the integral,
etc. So what is a limit?
Maybe the best example to illustrate limits is through average and instantaneous speeds: Let us assume you are traveling from point A to point B while passing through point C. Then we know how to
compute the average speed from A to B: it is simply the ratio between the distance from A to B and the time it takes to travel from A to B. Though we know how to compute the average speed this has no
real physical meaning.
Indeed, let us suppose that a policeman is standing at point C checking for speeders going through C. Then the policeman does not care about the average speed. He only cares about the speed that you
see on the speedometer, the one that the car actually has when crossing C. That one is real.
How do we compute this "instantaneous speed"? That's not easy at all! Naturally one way to do this is to compute the average speed from C to points close to C. In this case, the distance between
these points and C is very small as well as the time taken to travel from them to C. Then we look at the ratio: Do these average speeds over small distances get close to a certain value? If so, that
value should be called be the instantaneous speed at C. In fact, this is exactly how the policeman's radar computes the driver's speed!
Let us express this more mathematically. If s(t) is the function that gives the position of the moving object, assume that at time t_0 the moving object is at C. At
Then we study these numbers when
to indicate the instantaneous speed at C.
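The displayed formulas here were images on the original page and did not survive extraction; the standard quantities being described are the average speed over $[t_0, t_0+h]$ and its limit:

```latex
v_{\mathrm{avg}} = \frac{s(t_0+h) - s(t_0)}{h},
\qquad
v(t_0) = \lim_{h \to 0} \frac{s(t_0+h) - s(t_0)}{h}.
```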
Before we state the formal definition of the limit, let us consider the function
Clearly this function makes sense as long as the input is not equal to 0. In other words, we can take as an input any number close enough to 0, but not 0 itself.
It is clear by looking at the outputs that, when x gets close to 0, f(x) goes to 0, and we write
You have to be very careful when you use calculators not to jump to conclusions too quickly. Quantities may be getting close to each other up to a certain point but then they may move further away
from each other again. This happens frequently when dealing with chaotic systems, for example. Most of the calculators do computations up to nine digits or so. So two numbers with the same nine
decimals are equal (according to the calculator). Be aware of the dangers from these shortcomings of calculating devices! But in the above statement, we mean that x extremely close to 0. In other
words, for a given error x is close enough to 0, we will have
How do we express: "x very close to 0"? Simply by saying that there exists
Definition of limit. Let f(x) be a function defined around a point c, maybe not at c itself. We have
iff for any
The number L is called the limit of f(x) when x goes to c.
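The displayed inequalities in this definition were images on the original page and were lost in extraction; the standard statement being paraphrased is:

```latex
\lim_{x \to c} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
0 < |x - c| < \delta \;\Longrightarrow\; |f(x) - L| < \varepsilon .
```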
Sometimes the function is not defined around the point c but only to the left or right of c. Then we have the concepts of left-limit and right-limits at c.
(i) L is the left-limit of f(x) at c iff for any
and write
(ii) L is the right-limit of f(x) at c iff for any
and write
Of course, if a function has a limit as x gets closer to c from both sides, then the left and right limits exist and are equal to the limit at the point, i.e. if
The following joke comes to my mind: An engineer, a physicist and a mathematician take a train ride through the Scottish countryside. Suddenly they see a sheep outside in a meadow. The engineer says:
"Wow, in Scotland all sheep are black!" The physicist replies: "Not really; there is at least one black sheep in Scotland!" - The mathematician smiles and replies: "There is at least one sheep in
Scotland with at least one black side." (My apologies to all engineers, who seem to be at the receiving end of most math jokes!).
What's the point? Whether you want to look at the limit world through the eyes of the "physicist" or the "mathematician" depends on your and your teacher's expectations! Maybe it suffices to stay
with the "getting closer"-idea, maybe you need to dig into the workings of the formal
Example. Consider the function
So obviously we have
which implies that
Example. Consider the following function as x gets closer and closer to 0. For example we have
We see that f(x) does not get close to anything, even when xis close to 0 from the right, or the left. Thus
Example. Let f(x) = x^2. It is easy to see that
Let us show this through the formal definition. Indeed, letting x get close to 2, we restrict ourselves to x between 1 and 3, i.e.
This finishes the proof of our claim.
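The algebra this example refers to can be reconstructed (a standard computation; the original displayed steps were images): restricting to $1 < x < 3$,

```latex
|x^2 - 4| = |x-2|\,|x+2| \le 5\,|x-2|,
\qquad\text{so choosing } \delta = \min\!\left(1, \tfrac{\varepsilon}{5}\right):
\quad 0 < |x-2| < \delta \;\Longrightarrow\; |x^2 - 4| \le 5\,|x-2| < \varepsilon .
```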
Note that it was quite easy in this example to find a suitable bound in terms of |x-c| through some algebraic manipulations. Let us illustrate this with an example:
Example. Consider the function c = 9. Then we obviously have
Indeed, let
So if
So if we choose
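The missing algebra can be reconstructed (standard computation; the original displayed steps were images): for x near 9,

```latex
|\sqrt{x} - 3| = \frac{|x - 9|}{\sqrt{x} + 3} \le \frac{|x - 9|}{3},
\qquad\text{so } \delta = 3\varepsilon \text{ works: }
|x - 9| < 3\varepsilon \;\Longrightarrow\; |\sqrt{x} - 3| < \varepsilon .
```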
In the following example, we discuss a limit at a "generic" point c.
Example. Let f(x) = x and g(x) = C, where C is a constant. Then for any point a, we have
You may want to check these two statements by going through the
Exercise 1. Evaluate
Exercise 2. Which of the statements below are true knowing that
(c) If L < 3.01.
Exercise 3. If
What about the converse?
Mohamed A. Khamsi
Helmut Knaust
Copyright © 1999-2014 MathMedics, LLC. All rights reserved.
(Source: http://www.sosmath.com/calculus/limcon/limcon01/limcon01.html)
NCERT Solutions for Class 11th Maths: Chapter 16 - Probability
National Council of Educational Research and Training (NCERT) Book Solutions for Class 11th. Subject: Maths. Chapter: 16 – Probability.
Class 11th Maths Chapter 16 Probability NCERT Solution is given below.
Click Here to view All Chapters Solutions for Class 11th Maths
(Source: http://schools.aglasem.com/?p=1344)
Explanation of the digit movies
These movies illustrate the neural network described in the paper:
Hinton, G. E., Osindero, S. and Teh, Y. (2006)
A fast learning algorithm for deep belief nets.
Neural Computation 18, pp 1527-1554. [ps.gz] [pdf]
The network learns a generative model that looks like this:
10 label units <--> 2000 top-level units <--> 500 high-level units --> 500 units --> 784 pixels.
To generate from this model we first use alternating Gibbs sampling to get a sample from the top-level associative memory that consists of 2000 top-level units symmetrically connected to the 500
high-level units + 10 label units. Then we use the directed connections to stochastically generate pixel probabilities from the sampled binary states of the 500 high-level units. Notice how the
random initial configuration of the high-level associative memory gradually settles into a free energy ravine that is preferred by the clamped label.
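The alternating Gibbs sampling described above can be sketched in code. This is not Hinton's actual implementation — the layer sizes, weights, and seeding below are invented for illustration; it only shows one stochastic layer-to-layer step (h ~ Bernoulli(σ(Wv + b)), and the reverse step is the same call with the transposed weight matrix):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Logistic sigmoid: the activation probability of a binary stochastic unit.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Sample one binary layer given the other: out[j] ~ Bernoulli(sigmoid(W[j].in + bias[j])).
// For the downward pass, call this again with the transposed weight matrix.
std::vector<int> sample_layer(const std::vector<std::vector<double>>& W,
                              const std::vector<int>& in,
                              const std::vector<double>& bias,
                              std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<int> out(W.size());
    for (std::size_t j = 0; j < W.size(); ++j) {
        double a = bias[j];
        for (std::size_t i = 0; i < in.size(); ++i)
            a += W[j][i] * in[i];            // total input to unit j
        out[j] = (u(rng) < sigmoid(a)) ? 1 : 0;  // stochastic binary state
    }
    return out;
}
```

One full Gibbs iteration of the top-level associative memory would alternate `sample_layer` up and down between the 2000 top-level units and the 510 units below (500 high-level + 10 clamped label units).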
To recognize an image, we use bottom-up "recognition" connections to produce binary activities in the two lower hidden layers and then perform alternating Gibbs sampling in the top two layers.
Movies of the network generating and recognizing digits
Click on a class label then click play to generate an image from that class.
Click on an image then click play to provide the input for recognition.
Additional digit movies
These movies show 10 different runs in parallel, with a different class label clamped on in each run. Each movie starts with random binary states for the 2000 top-level neurons and then performs 300
iterations of alternating Gibbs sampling between the top two layers. Every 3 iterations, the directed connections down to the pixels are used to compute the probabilities of each pixel turning on and
these probabilities are displayed. 300 iterations is not quite long enough for the model to always find the right free energy ravine.
movie1 movie2 movie3
The three movies are just three different runs from random initial states.
(Source: http://www.cs.toronto.edu/~hinton/digits.html)
Is it good?
05-22-2009 #1
Registered User
Join Date
May 2009
Is it good?
#include <string>
#include <vector>
using std::string;
using std::vector;
using std::cout;
using std::endl;
#define MAX_ITEMS 100
#define MAX_INV 30
class Items
{
    int ID;
    int mainClass;
    int subClass;
    int grade;
    int quality;
    int degree;
    int bonding;
    int price;
    int stackSize;
    int primDmg[3];
    int sconDmg[3];
    int resist[3];
    int bnsOpt[10];
    string name;
    string desc;
    double attSpeed;
    double defSpeed;
public:
    Items()
    {
        ID = NULL;
        mainClass = NULL;
        grade = NULL;
        quality = NULL;
        degree = NULL;
        bonding = NULL;
        price = NULL;
        attSpeed = NULL;
        defSpeed = NULL;
    }
    void ItemsInfo(int A, int B, int C, int D, int E, int F, int G, int H, int I, int J, int K,
                   int L, int M, int N, int O, int P, int Q, int R, int S, int T, int U, int V, int W, int X,
                   int Y, int Z, int AA, int AB, string AC, string AD, double AE, double AF)
    {
        ID = A;
        mainClass = B;
        subClass = C;
        grade = D;
        quality = E;
        degree = F;
        bonding = G;
        price = H;
        stackSize = I;
        primDmg[0] = J; // first element directly adds to the needed stat
        primDmg[1] = K;
        primDmg[2] = L;
        sconDmg[0] = M; // first element directly adds to the needed stat
        sconDmg[1] = N;
        sconDmg[2] = O;
        resist[0] = P;
        resist[1] = Q;
        resist[2] = E;
        bnsOpt[0] = S; // each array here is directly added to the player stats
        bnsOpt[1] = T;
        bnsOpt[2] = S;
        bnsOpt[3] = U;
        bnsOpt[4] = V;
        bnsOpt[5] = W;
        bnsOpt[6] = Y;
        bnsOpt[7] = Z;
        bnsOpt[8] = AA;
        bnsOpt[9] = AB;
        name = AC;
        desc = AD;
        attSpeed = AE;
        defSpeed = AF;
    }
    double Quality()
    {
        if (quality==0) return 0.7;
        if (quality==1) return 1.0;
        if (quality==2) return 1.3;
        if (quality==3) return 1.7;
        if (quality==4) return 2.1;
        if (quality==5) return 2.6;
        if (quality==6) return 3.1;
    }
    double Grade()
    {
        if (grade==0) return 1.0;
        if (grade==1) return 1.1;
        if (grade==2) return 1.2;
        if (grade==3) return 1.4;
        if (grade==4) return 1.6;
        if (grade==5) return 2.0;
        if (grade==6) return 2.4;
        if (grade==7) return 3.2;
    }
    double Degree()
    {
        if (degree==0) return 1.0;
        if (degree==1) return 1.1;
        if (degree==2) return 1.2;
        if (degree==3) return 1.4;
        if (degree==4) return 1.6;
        if (degree==5) return 1.9;
        if (degree==6) return 2.2;
        if (degree==7) return 2.6;
        if (degree==8) return 3.0;
        if (degree==9) return 3.5;
        if (degree==10) return 4.0;
        if (degree==11) return 4.6;
        if (degree==12) return 5.2;
        if (degree==13) return 5.9;
        if (degree==14) return 6.6;
    }
    int ArmRes(){return (resist[0]*(int)Quality())*(int)Grade();}
    int EleRes(){return (resist[1]*(int)Quality())*(int)Grade();}
    int MagRes(){return (resist[2]*(int)Quality())*(int)Grade();}
    int MinDmg(){return (primDmg[1]*(int)Quality())*(int)Grade();}
    int MaxDmg(){return (primDmg[2]*(int)Quality())*(int)Grade();}
    int Min2Dmg(){return (primDmg[1]*(int)Quality())*(int)Grade();}
    int Max2Dmg(){return (primDmg[2]*(int)Quality())*(int)Grade();}
    void showOpts()
    {
        string type;
        for (int iii=0; iii!=10; iii++)
        {
            if (bnsOpt[iii]!=0)
            {
                if (iii==0)
                    type = "DEX";
                else if (iii==1)
                    type = "STR";
                else continue;
                cout << "Adds" << bnsOpt[iii] << type << endl;
            }
        }
    }
    void descrWin()
    {
    }
} itemsDB[MAX_ITEMS], charInv[MAX_INV], charEqup[10];
Well, the good thing is it works fine, but I'm still wondering if I can improve it somehow. Any suggestions? But don't mention the args.
Definitely, having 20+ arguments to a function is BAD. Not naming them something sensible makes it DOUBLY bad.
Since some of those are arrays, why not pass in an array/vector instead?
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
ID = NULL;
mainClass = NULL;
NULL is for pointers (although in C++ it is defined as 0).
but don't mention the argu's.
My compiler says that parameters R and X are unused.
double Grade()
if (grade==0) return 1.0;
if (grade==1) return 1.1;
if (grade==2) return 1.2;
if (grade==3) return 1.4;
if (grade==4) return 1.6;
if (grade==5) return 2.0;
if (grade==6) return 2.4;
if (grade==7) return 3.2;}
This and similar functions don't handle the case where grade is not one of the values. (assert / throw an exception / return some default value like 0.0)
int ArmRes(){return (resist[0]*(int)Quality())*(int)Grade();}
In this and similar functions. Why make Quality and Grade return a double in the first place, if you are always truncating it to int before using? Perhaps the intention was to make the
calculation with doubles, and only cast the final result?
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
well i did something like that in the past but i didn't like the fact that i had to use multiple functions for one object, and is it the only thing i should improve?
well if it is the same as 0 its ok
oh, then maybe its just some little typo i had
the value is not user-input, its a fixed value, so theres no need to do the assert IMO
the double value is for some other purpose, in that case i need a whole number
Anon gives a few other items that you should improve.
My point is that I have worked with functions that take 8-10 parameters (Win32 CreateProcess, StretchBitBlt for example), and those are hard to work with, when it comes to keeping track of which
parameter is which and what to pass. This function has over 30 parameters. None of which are called anything remotely meaningful.
I'd say, generally, if a function takes more than about 5 arguments, it's likely good to either split the function, or pass the arguments in groups (struct, vector, array) so that the number of
things to pass are fewer.
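The suggestion above — pass the arguments in groups — might look roughly like this. The type and field names (`ItemCore`, `ItemCombat`, `ItemText`) are invented for illustration, not taken from the original code:

```cpp
#include <cassert>
#include <string>

// Hypothetical grouping of the 30+ ItemsInfo parameters into small aggregates.
struct ItemCore {
    int id = 0, mainClass = 0, subClass = 0;
    int grade = 0, quality = 0, degree = 0;
    int bonding = 0, price = 0, stackSize = 0;
};

struct ItemCombat {
    int primDmg[3] = {0, 0, 0};
    int sconDmg[3] = {0, 0, 0};
    int resist[3]  = {0, 0, 0};
    int bnsOpt[10] = {0};
};

struct ItemText {
    std::string name;
    std::string desc;
};

class Item {
public:
    // Three named groups instead of 32 positional parameters: each argument
    // is self-describing at the call site, and adding a field later does not
    // shift every other position.
    Item(const ItemCore& core, const ItemCombat& combat, const ItemText& text)
        : core_(core), combat_(combat), text_(text) {}

    int price() const { return core_.price; }
    const std::string& name() const { return text_.name; }

private:
    ItemCore core_;
    ItemCombat combat_;
    ItemText text_;
};
```

A caller then fills in only the fields it cares about (`ItemCore c; c.price = 100;`) and passes the structs, which also makes mix-ups like the `resist[2] = E` / doubled `S` bugs far harder to write.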
All these things produce compiler warnings, so they are not completely OK.
well if it is the same as 0 its ok
It takes more typing and it doesn't mean the same thing semantically.
the value is not user-input, its a fixed value, so theres no need to do the assert IMO
The value comes from the outside and is not checked anywhere. Asserts like this help you catch accidental errors when your mind wanders.
well i think i will change it to 0 then (tho i always loved the NULL keyword)
hmm i guess ill use it then, it makes more sense, well thanks for the help, I really appreciate it.
An easier way would be to have the function that sets grade enforce the condition (grade >= 0 && grade <= 7). That allows Grade() to assume the values are in range. Or, in other words, enforce
your class invariants in setters rather than getters.
The philosophy is to detect an error as early as possible - and then either report an error, ignore it, recover from it - as early as possible. Practically, in many circumstances, the earliest
point where that is possible is when changing a value, not retrieving it.
A simpler implementation of Grade() - again assuming the error check is done elsewhere - is
double Grade() const
{
    const double data[] = {1.0, 1.1, 1.2, 1.4, 1.6, 2.0, 2.4, 3.2};
    return data[grade];
}
Since the function doesn't actually change the object it acts on, it is appropriate that it be const.
Last edited by grumpy; 05-22-2009 at 05:59 PM.
Right 98% of the time, and don't care about the other 3%.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Going completely off-topic here, but the US TV program(me) Mythbusters showed that you can. I think lion poo was one of the better ones to work on... [the principle is one of a Japanese art of
polishing balls of mud/clay or some such].
The fact that a turd MAY be polished to a gloss doesn't fix the problems with too many arguments tho'.
Interesting, LOL!!!
I didn't see that episode!
long time; /* know C? */
Unprecedented performance: Nothing ever ran this slow before.
Any sufficiently advanced bug is indistinguishable from a feature.
Real Programmers confuse Halloween and Christmas, because dec 25 == oct 31.
The best way to accelerate an IBM is at 9.8 m/s/s.
recursion (re - cur' - zhun) n. 1. (see recursion)
It appears to be on Youtube here:
YouTube - MYTHBUSTERS are going to Polish ........ they have gone nuts
I haven't looked through it.
Need help with salary calculator!
There are a couple of ways to solve this problem but first I think you should make sure what your instructor wants.
Does he mean that if you work more than 40 hours you get paid 1.5 times the rate on all hours worked, or just on the hours worked past 40?
I.e., would an employee who worked 45 hours get paid (40 * 1.0 * $10/hr) + (5 * 1.5 * $10/hr),
or does he want an employee who worked 45 hours to be compensated (45 * 1.5 * $10/hr), versus an employee who worked 39 hours being compensated (39 * 1.0 * $10/hr)?
The way your loops are set up is bad, though: your rate gets changed on every loop iteration where hours is greater than 40. You should probably only change the rate once, if at all.
You could do a simple while loop like this (assuming your instructor wants different compensation rates for hours below 40 and above 40)
while (counter <= hours)
{
    salary = salary + rate; // adds one hour's worth of wage to salary for each iteration
    if (counter > 40) { salary = salary + rate/2.0; } // adds the bonus wage for hours worked over 40
    counter++;
}
You'd probably want to change your initializations to
double salary=0, rate=0, hours=0, counter=1;
Integral of line
April 25th 2009, 08:28 AM #1
Super Member
Jun 2008
Calculate the line integral
$\int_c F \cdot dr$
, where
$F(x,y) = \sqrt{x^2+y^2}$
and c is the circle $x^2+y^2=1$.
How do I calculate a line integral?
Yes the answer is $2 \pi$
In English we call it a line integral
The question only makes sense if F is a vector field.
Do you mean F = r = xi +yj ?
In which case use x = cos(t), y = sin(t) to parameterize the curve
r = cos(t) i + sin(t) j, dr/dt = -sin(t) i + cos(t) j
On this curve F = cos(t) i + sin(t) j
F * dr/dt = 0
Therefore the line integral is 0
If you want, check out the line integral page on my website for the general method:
Line Integrals
dr is understood to be the vector dx i +dy j
Recall: to actually calculate line integrals we use
the integral of F(x(t),y(t)) * dr/dt as t varies from a to b
F*dr is just notation rarely used just like the notation integral(fdx+gdy)
where F = f i + g j
We use integral of F(x(t),y(t)) *dr/dt
Is this correct?
$\int_0^{2 \pi} \sqrt{\cos(t)^2 + \sin(t)^2} \cdot (-\sin(t),\cos(t))\,dt$
what you have makes no sense --F must be a vector field!!!!
So what is F ?
Sorry, what are the steps to solve a line integral?
Is it possible that what you want is $\int_c F\,dx$, $\int_c F\,dy$ or $\int_c F\,ds$?
You start with a vector field F = f(x,y) i + g(x,y) j
parameterize the curve r = x(t) i +y(t) j
Then the line integral is the integral of F(x(t),y(t)) * dr/dt, where * is the dot product.
Integrate F(x(t),y(t))* dr/dt from a to b
This line integral is:
$\int_0^{2 \pi} \sqrt{\cos(t)^2}\,(-\sin(t))\,dt + \int_0^{2 \pi} \sqrt{\sin(t)^2}\,\cos(t)\,dt$
$= \frac{-1}{2} \frac{\cos(t)^3}{3}\Big|_0^{2 \pi} + \frac{-1}{2} \frac{\sin(t)^3}{3}\Big|_0^{2 \pi} = 0$
What is the vector field F you start with? ---If we don't know F we can't proceed!!!!!!!!!!!!!!!
F= (x^2 +y^2)^(1/2) is not a vector field
Look at the problem again there must be a vector field F !!!!!!!!!!!!!!
I found this problem on the internet, but it is not correct.
This problem is from the book:
1) C is the curve represented by the equations:
$x=2t; y=3t^2; (0 \leq t \leq 1)$
Calculate the line integral along C
a) $\int_c (x-y)ds$
b) $\int_c (x-y)dx$
c) $\int_c (x-y)dy$
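As a hedged sketch of the method on part (b) — worth checking against your book's answer — substitute the parametrization $x=2t$, $y=3t^2$, so $dx = 2\,dt$:

$\int_c (x-y)\,dx = \int_0^1 (2t - 3t^2)(2)\,dt = \int_0^1 (4t - 6t^2)\,dt = \left[2t^2 - 2t^3\right]_0^1 = 0.$

Parts (a) and (c) follow the same recipe with $ds = \sqrt{(dx/dt)^2+(dy/dt)^2}\,dt$ and $dy = 6t\,dt$ respectively.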
Posts about Problems for Fun on Random Math
The purpose of this post is to prove Chevalley’s theorem: If ${f: X \rightarrow Y}$ is a finite surjective morphism of noetherian separated schemes, with ${X}$ affine, then ${Y}$ is affine.
We will follow the outline in Hartshorne (III.3 Problems 1 & 2 and III.4 Problems 1 & 2).
Theorem 1 Let ${f: X \rightarrow Y}$ be an affine morphism of noetherian schemes. Then for any coherent sheaf ${\mathcal F}$ on ${X}$, there are natural isomorphisms for all ${i \geq 0}$,
$\displaystyle H^i(X, \mathcal F) \simeq H^i(Y, f_* \mathcal F).$
Proof: According to (II, Ex. 5.17), when ${f}$ is affine, the direct image functor ${f_*}$ induces an equivalence from the category of coherent ${\mathcal O_X}$-modules to the category of coherent $
{f_*\mathcal O_X}$-modules. Moreover, an equivalence ${\tau : A \rightarrow B}$ of abelian categories (i.e. an additive functor which is also an equivalence) is exact. Therefore, if ${F: B \
rightarrow \text{Ab}}$ is a left additive functor, by the uniqueness of the ${\delta}$-functor extending a given left additive functor, it follows that there exists a natural isomorphism ${R^i(F \
circ \tau) \simeq R^i F \circ \tau}$ for each ${i}$. $\Box$
Theorem 2 Let ${X}$ be a noetherian scheme. Then ${X}$ is affine if and only if ${X_{\text{red}}}$ is.
Proof: Clearly ${X_\text{red}}$ is affine if ${X}$ is affine.
Conversely, suppose ${X_{\text{red}}}$ is affine. We prove that ${X}$ has cohomological dimension ${0}$, hence it is affine by Serre’s theorem (III.3.7). Let ${\mathcal F}$ be a quasi-coherent sheaf
on ${X}$. As indicated in the hint, we let ${\mathcal N}$ denote the sheaf of nilpotents of ${X}$ and we consider the filtration
$\displaystyle \mathcal F \supseteq \mathcal N \cdot \mathcal F \supseteq \mathcal N^2 \cdot \mathcal F \supseteq \dots$
of ${\mathcal F}$. Since ${X}$ is noetherian, there exists an ${n>0}$ such that ${\mathcal N^n = 0}$, so the filtration is finite.
We prove by descending induction on ${j}$ that ${ \mathcal N^j \cdot \mathcal F}$ is acyclic. For ${j=n}$, it is trivial. Now consider the exact sequence of quasi-coherent sheaves on ${X}$,
$\displaystyle 0 \rightarrow \mathcal N^j \cdot \mathcal F\rightarrow \mathcal N^{j-1} \cdot \mathcal F \rightarrow (\mathcal N^{j-1} \cdot \mathcal F) / (\mathcal N^j \cdot \mathcal F) \rightarrow 0.$
The quasi-coherent sheaf ${(\mathcal N^{j-1} \cdot \mathcal F) / (\mathcal N^j \cdot \mathcal F)}$ is naturally a quasi-coherent ${\mathcal O_X / \mathcal N \simeq \mathcal O_{X_{\text{red}}}}$
-module, and its cohomology can be calculated either as an ${\mathcal O_X}$-module or as an ${\mathcal O_{X_{\text{red}}}}$ module by Theorem 1 (using the fact that the reduction morphism ${X_{\text
{red}} \to X}$ is affine). Therefore, it is acyclic, since ${X_{\text{red}}}$ is affine by assumption. The sheaf ${\mathcal N^j \cdot \mathcal F}$ is acyclic by the inductive hypothesis. By the long
exact sequence of cohomology, we see that ${\mathcal N^{j-1} \cdot \mathcal F}$ is also acyclic. $\Box$
Theorem 3 Let ${X}$ be a reduced scheme. Then ${X}$ is affine if and only if each irreducible component of ${X}$ is affine.
Proof: The irreducible components of ${X}$ are closed subschemes of ${X}$, hence they are affine if ${X}$ is affine. Conversely, suppose that every irreducible component of ${X}$ is affine. We prove
that ${X}$ has cohomological dimension ${0}$.
We proceed by induction on the number of irreducible components of ${X}$. If ${X}$ is irreducible, then the statement is vacuously true. Now suppose it holds for noetherian schemes with ${n-1}$
irreducible components. Suppose that ${X}$ has ${n}$ irreducible components, and write it as ${X=Y \cup X'}$ where ${Y}$ is irreducible. Let ${\mathcal F}$ be a quasi-coherent sheaf on ${X}$. Denote
${\tau}$ the inclusion ${Y \hookrightarrow X}$ and ${\iota}$ the inclusion ${X' \hookrightarrow X}$, where each closed subscheme is given the canonical reduced closed subscheme structure. Since ${Y}$
is Noetherian, ${ \tau_* \tau^* \mathcal F}$ is also a quasi-coherent sheaf on ${X}$, supported on ${Y}$. There is a canonical morphism ${\mathcal F \rightarrow \tau_* \tau^* \mathcal F}$, and ${\
mathcal F \rightarrow \iota_* \iota^* \mathcal F }$. (Each of these two morphisms is a unit of the “inverse image – direct image” adjunction). Let
$\displaystyle g : \mathcal F \rightarrow \tau_* \tau^* \mathcal F \oplus \iota_* \iota^* \mathcal F$
be their sum. It is easy to see that this morphism is surjective, and an isomorphism away from the intersection. Let ${\mathcal G= \ker g}$. Then ${\mathcal G}$ is quasi-coherent and supported in ${Y
\cap X'}$. Therefore we have an exact sequence
$\displaystyle 0 \rightarrow \mathcal G \rightarrow \mathcal F \rightarrow \tau_* \tau^* \mathcal F \oplus \iota_* \iota^* \mathcal F \rightarrow 0$
Since ${X'}$ is affine by the induction hypothesis, ${Y \cap X'}$ is affine, being a closed subscheme of an affine scheme. Now, since ${\text{Supp }\mathcal G \subseteq Y \cap X'}$, the cohomology of
${\mathcal G}$ can be calculated either as an ${\mathcal O_{(Y \cap X')}}$-module or as an ${\mathcal O_X}$-module, and therefore it vanishes. Similarily the sheaf ${\tau_* \tau^* \mathcal F \oplus \
iota_* \iota^* \mathcal F}$ is acyclic because ${Y}$ and ${X'}$ are affine. Therefore, by the long exact sequence of cohomology, ${\mathcal F}$ is also acyclic. $\Box$
Lemma 4 Let ${f: X \rightarrow Y}$ be a finite surjective morphism of integral noetherian schemes. Then there is a coherent sheaf ${\mathcal M}$ on ${X}$, and a morphism of sheaves ${\alpha : \
mathcal O_Y^r \rightarrow f_* \mathcal M}$ for some ${r>0}$, such that ${\alpha}$ is an isomorphism at the generic point of ${Y}$.
Proof: Let ${L}$ be the function field of ${X}$ and ${K}$ be the function field of ${Y}$. Then the morphism ${f}$ gives rise to a field homomorphism ${K \hookrightarrow L}$. Since ${f}$ is finite and
surjective, ${L}$ is finite over ${K}$, say of degree ${r}$. Let ${\{x_1, \dots, x_r\}}$ be a basis for ${L}$ over ${K}$. Each ${x_j}$ can be represented as a section ${s_j}$ of ${\mathcal O_X}$ over
an open set ${U_j}$. Let ${\tau_j : U_j \hookrightarrow X}$ be the inclusion. Let ${\mathcal E_j}$ be the sheaf ${\mathcal E_j = s_j \cdot \mathcal O_{U_j}}$ on ${U_j}$. Obviously ${\mathcal E_j}$ is
coherent (in fact free of rank ${1}$). Let ${\mathcal F_j = (\tau_j)_*(\mathcal E_j)}$. Then ${\mathcal F_j}$ is quasi-coherent on ${X}$ since ${U_j}$ is noetherian; since ${f}$ is finite, ${\mathcal
F_j}$ is in fact coherent. Let ${\mathcal M = \bigoplus_j \mathcal F_j}$. Define the morphism ${\alpha : \mathcal O^r_Y \rightarrow f_*\mathcal M}$ by the global sections ${x_j}$ of ${f_*\mathcal M}$
(using the fact that ${\mathcal O_Y}$ represents the global sections functor ${\Gamma(Y, -)}$). Then, by construction, ${\alpha}$ is an isomorphism of ${K}$-vector spaces ${K^r \cong L}$ at the
generic point of ${Y}$. $\Box$
Lemma 5 Let ${f: X \rightarrow Y}$ be a finite surjective morphism of integral noetherian schemes. Then for any coherent sheaf ${\mathcal F}$ on ${Y}$, there exists a coherent sheaf ${\mathcal G}
$ on ${X}$, and a morphism ${\beta : f_* \mathcal G \rightarrow \mathcal F^r}$ which is an isomorphism at the generic point of ${Y}$.
Proof: We take ${\beta = \mathcal{H}\text{om}(\alpha, \mathcal F)}$, where ${\mathcal{H}\text{om}}$ is the sheaf ${\mathcal{H}\text{om}}$ and ${\alpha}$ is the morphism of Lemma 4:
$\displaystyle \beta: \mathcal{H}\text{om}(f_*\mathcal M, \mathcal F) \rightarrow \mathcal{H}\text{om}(\mathcal O_Y^r, \mathcal F).$
Remark that ${\mathcal{H}\text{om}(\mathcal O_Y^r, \mathcal F) \simeq \mathcal F^r}$. Moreover, the sheaf ${\mathcal{H}\text{om}(f_*\mathcal M, \mathcal F)}$ naturally has a structure of ${f_*\
mathcal O_X}$-module. By (II, Ex. 5.17), when ${f}$ is an affine morphism, ${f_*}$ induces an equivalence between the category of coherent ${\mathcal O_Y}$-modules and the category of coherent ${f_*\
mathcal O_X}$-modules. Therefore ${\mathcal{H}\text{om}(f_*\mathcal M, \mathcal F)}$ is isomorphic to an ${\mathcal O_Y}$-module of the form ${f_*\mathcal G}$, where ${\mathcal G}$ is a coherent ${\
mathcal O_X}$-module. Thus ${\beta}$ has the form ${f_* \mathcal G \rightarrow \mathcal F^r}$.
Moreover, it follows from the fact that a coherent sheaf on a noetherian scheme is finitely presented that on such a scheme, taking sheaf ${\mathcal{H}\text{om}}$ commutes with taking stalks of
morphisms; therefore ${\beta}$ is also an isomorphism at the generic point of ${Y}$. $\Box$
Now we are ready to prove Chevalley’s theorem.
Theorem 6 (Chevalley’s theorem). Let ${f: X \rightarrow Y}$ be a finite surjective morphism of noetherian separated schemes, where ${X}$ is affine. Then ${Y}$ is affine.
Proof: By Theorems 2 and 3, we may suppose that ${X}$ and ${Y}$ are reduced and irreducible. We prove by contradiction that ${Y}$ is affine. Let ${\Sigma}$ be the collection of closed subschemes of ${Y}$ which are not affine. Suppose it is not empty; then it contains a minimal element ${Z \hookrightarrow Y}$, which we may view as having the reduced induced subscheme structure. Since finite
morphisms are stable under base change, we may in fact suppose that ${Z=Y}$ (what this means is that we are replacing ${f}$ by its restriction to ${f^{-1}(Z)}$ if necessary). Therefore, we suppose
that every proper closed subscheme of ${Y}$ is affine.
Let ${\mathcal F}$ be a coherent sheaf on ${X}$. By Lemma ${5}$, there exists a coherent sheaf ${\mathcal G}$ on ${X}$ and a morphism ${\beta: f_* \mathcal G \rightarrow \mathcal F^r}$ which is
generically an isomorphism (and which is therefore surjective, since ${Y}$ is irreducible). Thus, if ${\mathcal D = \ker \beta}$, we have an exact sequence of sheaves on ${Y}$
$\displaystyle 0 \rightarrow \mathcal D \rightarrow f_* \mathcal G \rightarrow \mathcal F^r \rightarrow 0.$
Now, as in the proof of Theorem 3, we view ${\mathcal D}$ as a quasi-coherent sheaf on the proper closed subscheme ${\text{Supp }\mathcal D}$. By the minimality of ${Y}$, ${\text{Supp }\mathcal D}$
is affine and therefore ${\mathcal D}$ is acyclic. Moreover, since a finite morphism is affine, we can apply Theorem 1 to see that ${f_* \mathcal G}$ is also acyclic. Therefore, by the long exact
sequence of cohomology, ${\mathcal F^r}$ is acyclic, so ${\mathcal F}$ is acyclic. $\Box$
Therefore, ${Y}$ has cohomological dimension ${0}$, which contradicts the assumption that it is not affine.
A Noetherian and Hausdorff space is finite
In this post, I will prove that a Noetherian and Hausdorff topological space is finite (and therefore has the discrete topology, being Hausdorff). The proof is very short and pleasant.
Proof: Let ${X}$ be such a space, and suppose that it is infinite. Let ${\Sigma}$ be the collection of infinite closed subsets of ${X}$. It is nonempty since ${X \in \Sigma}$, and therefore has a
minimal member ${Z}$ by the Noetherian assumption. Let ${p,q}$ be distinct points of ${Z}$ (possible since ${Z}$ is infinite), and ${U,V}$ be disjoint open neighborhoods of ${p}$ and ${q}$ respectively (such ${U}$ and ${V}$ exist by the
Hausdorff assumption). Then ${X = (X-U) \cup (X-V)}$ since ${U}$ and ${V}$ are disjoint, so ${Z = (Z \cap (X-U)) \cup (Z \cap (X-V))}$. Now each of ${Z \cap (X-U)}$ and ${Z \cap (X-V)}$ is closed in
${X}$, and is properly contained in ${Z}$ (the first one doesn’t contain ${p}$, and the second one doesn’t contain ${q}$). Therefore, by minimality of ${Z}$, each must be finite, and therefore ${Z}$
is also finite, which is a contradiction. $\Box$
Corollary: in any infinite Hausdorff space, there exists a strictly descending infinite chain of closed subsets $Z_1 \supset Z_2 \supset Z_3 \dots$. The proof above can be easily adapted to construct
such a sequence.
Ph.D. Comprehensive exam practice problems, Round 2
Exercise 1 Let ${V}$ be the vector space of continuous real-valued functions on the interval ${[0,\pi]}$. Then, for any ${f \in V}$,
$\displaystyle 2 \int_0^\pi f(x)^2 \sin x dx \geq \left(\int_0^\pi f(x) \sin x dx\right)^2.$
Proof: Let ${d\mu}$ be the measure ${\frac{\sin x dx}{2}}$ on ${[0,\pi] = X}$. Then ${(X, d\mu)}$ is a probability space, ${f}$ is Lebesgue-integrable on ${X}$ and ${t \mapsto t^2}$ is a convex
function ${\mathbf R \rightarrow \mathbf R}$. By Jensen’s inequality,
$\displaystyle \int_0^\pi f(x)^2 d\mu \geq \left(\int_0^\pi f(x) d\mu\right)^2.$
Multiplying throughout by ${4}$ we get the claimed inequality.
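Not part of the proof, but the inequality is easy to sanity-check numerically for a particular continuous $f$ — here $f(x)=x$ — with a crude midpoint rule (the quadrature routine below is just a throwaway sketch):

```python
import math

def integral(g, a, b, n=20000):
    # simple midpoint rule; plenty accurate for these smooth integrands
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x  # any continuous f works; x is a simple test case

lhs = 2 * integral(lambda x: f(x) ** 2 * math.sin(x), 0, math.pi)
rhs = integral(lambda x: f(x) * math.sin(x), 0, math.pi) ** 2

# For f(x) = x: lhs = 2(pi^2 - 4) ~ 11.74, rhs = pi^2 ~ 9.87
assert lhs >= rhs
```

The exact values $2(\pi^2-4) \geq \pi^2$ for this choice of $f$ confirm the Jensen bound with room to spare.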
Exercise 2 Let ${T}$ be a linear operator on a finite-dimensional vector space ${V}$. (a) Prove that if every one-dimensional subspace of ${V}$ is ${T}$-invariant, then ${T}$ is a scalar multiple
of the identity operator. (b) Prove that if every codimension-one subspace of ${V}$ is ${T}$-invariant, then ${T}$ is a scalar multiple of the identity operator.
Proof: (a) The hypothesis means that every nonzero vector of ${V}$ is an eigenvector of ${T}$. Suppose ${v_1, v_2}$ are eigenvectors of ${T}$ with eigenvalues ${\lambda_1}$, ${\lambda_2}$. Since, by
assumption ${v_1+v_2}$ is also an eigenvector, and ${v_1}$ and ${v_2}$ are independent, we can read the eigenvalue of ${v_1 + v_2}$ off either coefficient in the equation ${T(v_1+v_2)= \lambda_1 v_1 + \lambda_2 v_2}$, and therefore ${\lambda_1 = \lambda_2}$. Therefore ${T}$ is a multiple of the identity operator.
(b) Let ${T^\vee}$ be the dual operator on ${V^\vee}$. We claim that ${T^\vee}$ satisfies the condition of ${(a)}$. First, we have the following:
Lemma 1 Two functionals ${f, g : V \rightarrow k}$ (where ${k}$ is the ground field) have the same kernel if and only if they are multiples of each other.
Proof: Indeed, it is trivial if either of ${f}$ or ${g}$ is ${0}$ (in which case both are zero), so suppose neither is ${0}$. Recall that if ${W \subseteq V^\vee}$ and we define ${\mathrm{Ann}(W) = \
{v \in V : w(v) = 0\: \forall w \in W\}}$, then we have a canonical isomorphism ${\mathrm{Ann}(W) \cong (V^\vee/W)^\vee}$, which in particular implies ${\dim \mathrm{Ann}(W) = \mathrm{codim}(W\
subseteq V^\vee)}$. If we apply this to ${W=\left<f,g\right>}$, we have, under assumption,
$\displaystyle \mathrm{Ann}(W) = \ker f \cap \ker g = \ker f = \ker g$
which has codimension ${1}$ since ${f,g \neq 0}$. Therefore ${W}$ has dimension ${1}$, and ${f}$ and ${g}$ are scalar multiples of each other. $\Box$
Now, back to (b). Suppose that ${0 \neq f \in V^\vee}$. Then ${\ker f}$ has codimension ${1}$ in ${V}$, and therefore, under the hypothesis of (b), ${T(\ker f) \subseteq \ker f}$. This implies ${\ker T^\vee(f) \supseteq \ker f}$; indeed, if ${v \in \ker f}$, then ${T^\vee(f)(v) = f(Tv) = 0}$ since ${Tv \in \ker f}$. Since ${\ker f}$ has codimension ${1}$, we either have equality, or ${T^\vee(f) = 0}$. If there is equality, then ${T^\vee(f)}$ and ${f}$ have the same kernel and therefore they are proportional, i.e. ${f}$ is an eigenvector of ${T^\vee}$. If ${T^\vee(f)=0}$ then ${f}$ is trivially an eigenvector of ${T^\vee}$. In every case, we see that ${f}$ is an eigenvector of ${T^\vee}$. By (a), ${T^\vee}$, and therefore ${T}$, is a multiple of the identity operator. $\Box$
Exercise 3 Let ${T}$ be a linear operator on a finite-dimensional inner product space ${V}$.
□ (a) Define what is meant by the adjoint ${T^*}$ of ${T}$.
□ (b) Prove that ${\ker T^* = \mathrm{im}(T)^\perp}$.
□ (c) If ${T}$ is normal, prove that ${\ker T = \ker T^*}$. Give an example when the equality fails (and, of course, ${T}$ is not normal).
• (a) It is the unique linear operator ${T^*}$ on ${V}$ such that ${\left<Tv, w\right> = \left<v, T^*w\right>}$ for every ${v, w \in V}$.
• (b) Indeed,
$\displaystyle \begin{array}{rcl} v \in \ker T^* &\Leftrightarrow& \left<w, T^*v\right> = 0 \: \forall w \in V \\ &\Leftrightarrow& \left<Tw, v\right> = 0 \: \forall w \in V \\ &\Leftrightarrow& v \perp T(w)\: \forall w \in V. \end{array}$
• (c) A normal operator is one which commutes with its adjoint, i.e. ${TT^* = T^*T}$. Thus,
$\displaystyle \begin{array}{rcl} v \in \ker T^* &\Leftrightarrow& \left<T^*v, T^*v\right> = 0\\ &\Leftrightarrow& \left<TT^*v, v\right> = 0 \\ &\Leftrightarrow& \left<T^*Tv, v\right> = 0 \\ &\
Leftrightarrow& \left<Tv, Tv\right> = 0\\ &\Leftrightarrow& Tv=0. \end{array}$
An example where the equality fails is supplied by the operator ${T=\left(\begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right)}$ acting on ${(\mathbf R^2, \bullet)}$ in the standard way. The vector ${(1,0)}$ is in the kernel of ${T}$ but not of ${T^*}$.
Ph.D. Comprehensive exam practice problems, Round 1
In May, I will be taking the qualifying exams for my Ph.D. Over the next few weeks, I will be posting practice problems and my solutions to them. Until the end of February, I will be reviewing
linear algebra, single variable real analysis, complex analysis and multivariable calculus. In March and April, I will be focusing on algebra, geometry and topology.
Here are three problems to start.
Problem: Suppose that ${A}$ is an ${n \times n}$ real matrix with ${n}$ distinct real eigenvalues. Show that ${A}$ can be written in the form ${\sum_{j=1}^n \lambda_j I_j}$ where each ${\lambda_j}$
is a real number and the ${I_j}$ are ${n\times n}$ real matrices with ${\sum_{j=1}^n I_j = I}$, and ${I_jI_l = 0}$ if ${j \neq l}$. Give a ${2 \times 2}$ real matrix ${A}$ for which such a
decomposition is not possible and justify your answer.
Solution: For each ${j}$, let ${E_j}$ denote the matrix with a ${1}$ on the entry ${(j,j)}$ and zeroes everywhere else. Then ${\sum_j E_j = I}$ and ${E_jE_l= 0}$ when ${j \neq l}$. Since ${A}$ has ${n}$
distinct real eigenvalues ${\lambda_1, \dots, \lambda_n}$, it is diagonalizable over ${\mathbf R}$, so there is a real matrix ${P}$ such that ${P^{-1}AP = D}$, where ${D=\mathrm{diag}(\lambda_1, \
dots, \lambda_n) = \sum_j \lambda_j E_j }$. Let ${I_j = PE_jP^{-1}}$. Then
$\displaystyle \sum_j \lambda_j I_j = P\left(\sum_j \lambda_j E_j\right) P^{-1} = PDP^{-1} = A.$
Moreover, for ${j \neq l}$ we have ${I_jI_l = PE_jE_lP^{-1} = 0}$.
For the second part, notice that if the matrix ${A}$ is decomposed in the manner described above, the numbers ${\lambda_j}$ are necessarily eigenvalues of ${A}$. Indeed, multiplying the equality ${\
sum I_j = I}$ by ${I_l}$ and using that ${I_lI_j = 0}$ when ${l \neq j}$, we find that ${I_l^2=I_l}$. Hence, let ${v \in \mathbf R^n}$ be any nonzero vector. Since ${\sum_j I_j v = v}$, at least one of the terms in the sum is nonzero, say ${I_l v \neq 0}$. Then
$\displaystyle AI_lv = \sum_j \lambda_j I_j I_lv = \lambda_l I_l^2v = \lambda_l I_lv,$
and therefore ${I_lv}$ is an eigenvector of ${A}$ with eigenvalue ${\lambda_l}$. Thus, it is impossible for the matrix ${A}$ to have such a decomposition if, say, it has no real eigenvalues, for
$\displaystyle A=\left(\begin{array}{ll} 0 & -1 \\ 1 & 0 \end{array}\right).$
A divisibility identity for Euler’s totient function
In this note I will give a Galois-theoretic proof that for a prime ${p}$ and positive integer ${n}$,
$\displaystyle n \mid \frac{\varphi(p^n-1)}{\varphi(p-1)}.$
I’d love to see a more elementary proof if you can come up with one.
First we need the following:
Lemma 1 Let ${Z_n}$ be the cyclic group with ${n}$ elements. Let ${m}$ be a positive divisor of ${n}$, and consider ${Z_m}$ as a subgroup of ${Z_n}$. Then the number of automorphisms of ${Z_n}$
which fix ${Z_m}$ pointwise is equal to ${\varphi(n)/\varphi(m)}$ (which, in particular, is an integer).
Proof of the Lemma: Note that any automorphism of ${Z_n}$ fixes ${Z_m}$, though not necessarily pointwise: indeed ${Z_n}$ has a unique subgroup of order ${m}$, and thus any automorphism of ${Z_n}$
must take this subgroup to itself. Thus we have a group homomorphism ${\text{Aut}(Z_n) \rightarrow \text{Aut}(Z_m)}$ which is easily seen to be surjective; its kernel is precisely the subgroup
consisting of those automorphisms of ${Z_n}$ which fix ${Z_m}$ pointwise. The statement follows by comparing orders. ${\square}$
Now to prove the initial claim, consider the field extension ${\mathbf{F}_{p^n}/\mathbf{F}_p}$. Basic Galois theory tells that this is a Galois extension of degree ${n}$. Consider the canonical
$\psi: \displaystyle \text{Gal}(\mathbf{F}_{p^n}/\mathbf{F}_p) \rightarrow \text{Aut}(\mathbf{F}_{p^n}^\times)$
which restricts an ${\mathbf{F}_p}$-automorphism ${\sigma}$ to the group of units of ${\mathbf{F}_{p^n}}$. Clearly it is an injective homomorphism since ${\sigma}$ is completely determined by where
it sends the units. Moreover, for any ${\sigma \in \text{Gal}(\mathbf{F}_{p^n}/\mathbf{F}_p)}$, $\psi(\sigma)$ lies in the subgroup of ${\text{Aut}(\mathbf{F}_{p^n}^\times)}$ consisting of those automorphisms fixing
pointwise the cyclic subgroup ${\mathbf{F}_{p}^\times}$ of order ${p-1}$, because the Galois group consists of $\mathbf{F}_p$-homomorphisms. By the lemma the subgroup of these automorphisms has order
${\frac{\varphi(p^n-1)}{\varphi(p-1)}}$, whereas ${\text{Gal}(\mathbf{F}_{p^n}/\mathbf{F}_p)}$ has order ${n}$. This does it.
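Short of an elementary proof, the identity is at least easy to spot-check by brute force, using a naive totient (fine at these small sizes):

```python
from math import gcd

def phi(m):
    # naive Euler totient: count 1 <= k <= m coprime to m
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# check n | phi(p^n - 1) / phi(p - 1) for small primes p and exponents n
for p in (2, 3, 5, 7):
    for n in range(1, 6):
        q, r = phi(p ** n - 1), phi(p - 1)
        assert q % r == 0, (p, n)            # the quotient is an integer
        assert (q // r) % n == 0, (p, n)     # and n divides it
```

For example, with $p=3$, $n=4$: $\varphi(80)/\varphi(2) = 32/1 = 32$, which is divisible by $4$.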
The Problem of Misaddressed Letters
I have decided to switch the focus of this blog. Instead of expository write-ups, I will be posting mostly tidbits of fun mathematics, possibly without relation to one another.
In this post, I want to talk about the problem of derangements, first considered by Nicolaus Bernoulli (1687-1759), solved by him and, later, independently by Euler. If I write letters to ${100}$
different friends, and send the letters randomly among them, what is the probability that none of my friends will receive the letter personally addressed to them? Since there are ${100!}$ different
ways of sending the letters, this probability equals ${!100/100!}$, where ${!100}$ denotes the number of ways of rearranging ${100}$ objects in such a way that no object is left in the same position.
Such a permutation is called a derangement.
How can we calculate ${!100}$? First, note that any permutation of ${n}$ objects fixes certain elements, and deranges the others. The number of permutations of ${n}$ fixing exactly ${k}$ elements is
equal to
$\displaystyle {n \choose k} !(n-k).$
Therefore, the total number of permutations is
$\displaystyle n! = \sum_{k=0}^n{n \choose k} !(n-k).$
In the language of species, we can say that the species of permutations is the product of the species of derangements and of the identity species.
In the language of generating functions, this translates to
$\displaystyle \frac{1}{1-x} = e^x D(x)$
where ${D(x)= \sum_{n=0}^\infty !n \frac{x^n}{n!}}$.
$\displaystyle D(x)=\frac{e^{-x}}{1-x}$
and from this we read off the formula
$\displaystyle !n = \sum_{k=0}^n {n \choose k}(-1)^k (n-k)!.$
In fact, rearranging this shows that
$\displaystyle \frac{!n}{n!} = \sum_{k=0}^n \frac{(-1)^k}{k!},$
which is simply the truncated Taylor series for ${e^{-x}}$, evaluated at ${1}$. Hence we see that
$\displaystyle \frac{!n}{n!} - e^{-1} \rightarrow 0.$
But even more is true: using Taylor’s remainder formula, we see that for all ${n>0}$,
$\displaystyle \left |\frac{!n}{n!} - e^{-1} \right | < \frac{1}{(n+1)!}.$
Hence, in fact, ${!n}$ is the nearest integer to ${n!/e}$, for all ${n}$.
Furthermore, as a consequence of the identity ${\frac{1}{1-x} = 1+\frac{x}{1-x}}$, we see that ${!n}$ satisfies the recurrence relation
$\displaystyle !n = n\times !(n-1) + (-1)^n.$
Note that this is the same recurrence as satisfied by ${n!}$, with an extra ${(-1)^n}$ term.
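The recurrence, the alternating-sum formula, and the nearest-integer description can be cross-checked in a few lines:

```python
import math

def subfactorial(n):
    # recurrence !n = n * !(n-1) + (-1)^n, with !0 = 1
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

for n in range(1, 15):
    # inclusion-exclusion formula: sum_k (-1)^k C(n,k) (n-k)!
    formula = sum((-1) ** k * math.comb(n, k) * math.factorial(n - k)
                  for k in range(n + 1))
    assert subfactorial(n) == formula
    # !n is the nearest integer to n!/e
    assert subfactorial(n) == round(math.factorial(n) / math.e)
```

(The first few values are $!1=0$, $!2=1$, $!3=2$, $!4=9$, $!5=44$.)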
The derangement numbers appear in certain integrals. For instance, for ${n\geq 0}$,
$\displaystyle \int_1^e (\log u)^n \,\mathrm{d}u = (-1)^n\left(e \cdot {!n} - n!\right).$
This gives another proof that ${!n - n!e^{-1} \rightarrow 0}$ (and that it oscillates around ${0}$), since clearly the integral is positive and converges to ${0}$. It also gives a continuation of the
function ${!n}$ to complex values.
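The integral identity can also be checked numerically. The sketch below (mine, not the post's) compares a composite Simpson approximation of $\int_1^e (\log u)^n\,\mathrm{d}u$ against $(-1)^n(e\,!n - n!)$ for small ${n}$:

```python
import math

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule on [a, b] with 2*m subintervals."""
    h = (b - a) / (2 * m)
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

def subfactorial(n):
    """!n via the recurrence !n = n*!(n-1) + (-1)^n."""
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

for n in range(6):
    lhs = simpson(lambda u: math.log(u) ** n, 1.0, math.e)
    rhs = (-1) ** n * (math.e * subfactorial(n) - math.factorial(n))
    assert math.isclose(lhs, rhs, rel_tol=0.0, abs_tol=1e-9)
```

For example, at ${n=1}$ the integral is exactly ${1}$, matching ${-(e\cdot!1 - 1!) = 1}$.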
A cute problem
I came up with this little problem last night. It’s not very difficult to prove but still fun (I think). Here it is: let $A$ be a commutative ring, and let $h(u,v) \in A[u,v]$. Suppose that, for any
polynomials $f(u), g(u) \in A[u]$, we have $h(f(u), g(u))=h(g(u), f(u))$. Then $h(u,v)=h(v,u)$.
I’ll post my solution in a couple of days to see if anyone can come up with an alternative solution in the meantime. :) | {"url":"http://mathramble.wordpress.com/category/problems-for-fun/","timestamp":"2014-04-19T19:33:41Z","content_type":null,"content_length":"131240","record_id":"<urn:uuid:75e14343-b8be-41cc-9315-4caeb01259b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question on autonomous differential equation
June 8th 2012, 02:08 PM
Question on autonomous differential equation
I have the differential equation:
I have to find the equilibria and solve them.
Second I have to answer the question:
Let x_1(t) be its particular solution such that x_1(t_1) = -7 for some time instant t_1. Is it possible that x_1(t) = 2 for some t? Explain.
To be honest, I have no clue about how to answer to the second question...
Setting dx/dt = 0, I find x = -6 or x = 1, so these are the equilibrium points. Evaluating df/dx at these points I get 17 for x = -6 and 7 for x = 1, so both points are unstable.
Is this true? - How can I answer the second question.
Another quick question:
Is the differential equation linear? Is it separable?
No it's not linear because it contains the second power of x. It's not separable, because it can not be written as dx/h(x)=t*g(t)
Is this the right answer?
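(A side note, not part of the thread.) The equation itself did not survive in the post above, but the replies are consistent with an autonomous equation of the form dx/dt = f(x) = (x + 6)(x - 1), which has equilibria at x = -6 and x = 1 with f'(-6) = -7. Under that assumption, the standard linearization test can be sketched in a few lines of Python:

```python
# Hypothetical reconstruction: f is NOT quoted from the post, only inferred
# from the replies (equilibria at x = -6 and x = 1, f'(-6) = -7).

def f(x):
    return (x + 6) * (x - 1)

def f_prime(x):
    return 2 * x + 5

for x_eq in (-6, 1):
    slope = f_prime(x_eq)
    kind = "stable" if slope < 0 else "unstable"
    print(f"x = {x_eq}: f(x) = {f(x_eq)}, f'(x) = {slope} -> {kind}")
# x = -6: f'(-6) = -7 -> stable; x = 1: f'(1) = 7 -> unstable
```

This also points at the second question: solutions of an autonomous first-order equation cannot cross an equilibrium (by uniqueness of solutions), so a solution with x_1(t_1) = -7 < -6 stays below -6 for all time and can never reach 2.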
June 8th 2012, 11:01 PM
Re: Question on autonomous differential equation
For x = -6 you get df/dx = -7, so x = -6 is a stable point.
So I would guess (I took my ODE courses 3-4 years ago, so I am a bit rusty) that if at some time our solution equals -7, then, because x = -6 is a stable equilibrium, solutions will be attracted to
it, thus there cannot be a time t such that x_1(t)=2. | {"url":"http://mathhelpforum.com/differential-equations/199814-question-autonomous-differential-equation-print.html","timestamp":"2014-04-19T03:43:57Z","content_type":null,"content_length":"4754","record_id":"<urn:uuid:1dc3dcb7-befb-4369-91c6-ed11c1408855>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: re: Troubleshooting 'not sorted' and 'not regularly spaced' errors
From Michael Hanson <mshanson@mac.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: re: Troubleshooting 'not sorted' and 'not regularly spaced' errors
Date Mon, 13 Oct 2008 14:19:31 -0400
Kit is, of course, correct that one can use -ivreg2- (from SSC; runs in Stata 9) or -ivregress- (Stata 10 only) to mimic the -newey- and -newey2- estimators, as all allow you to get Newey-West style
heteroskedasticity and autocorrelation consistent (HAC) standard errors. There are a few additional points worth noting in order to make sure these various Stata commands are delivering the HAC
estimator that you have in mind. And when you have time series with "gaps" and use lagged regressors, you may need to pay close attention to whether you -tsset- or -xtset- to a Stata date variable
(with gaps) or a linear time trend (without gaps).
1. If m is the number of lags you would specify in -newey- or - newey2-, then use m for the -ivregress, vce(hac nw m)- command, but n=m+1 for the -ivreg2, bw(n)- command. As you point out, it takes
some effort or experimentation to extract this information from the -ivreg2- help file.
2. To match the HAC SE estimates of -newey- or -newey2-, you will need to add the options -robust small- to an -ivreg2- estimation and the option -small- to an -ivregress- estimation.
3. If you -tsset- (or -xtset-) your data to a Stata date variable with gaps, you may not get the same HAC SE estimates from -newey2, force- as from -ivreg2- or -ivregress- when lagged regressors are
included. You can modify my earlier example program to demonstrate that fact (left as an exercise). This problem may or may not occur in your dataset.
4. If you instead -tsset- (or -xtset-) to a sequential linear time trend (-gen t = _n-), then you will get the same results for all three commands -- but you will have, in effect, told Stata that
there are no gaps in your data. Indeed, you likely will have included a larger set of observations in your regressions, so both the coefficient estimates and the SE estimates are likely to differ
from those in (3).
Which of (3) or (4) above is preferable depends on how you think the "gaps" should be handled. Stata constructs lags by strictly using the prior observation at the frequency specified. For example,
if you have business daily data (as in the example program in my prior message) and -tsset- to a daily Stata date variable, Stata will not construct any lags for a Monday, as the prior daily
observation (Sunday) is missing in that financial data set. However, you may well want Stata to treat Friday as the prior observation -- which will require you to pay close attention to the
construction of your data and then to -tsset- to a time trend. (Other statistical packages that specialize in time series estimation, such as RATS and (IIRC) EViews, have an explicit "business daily"
frequency as this problem arises regularly in applications with financial data. One still has to think about how to handle holidays in that case.) If instead you have data with regular gaps -- such
as every two or every five years of annual data -- you can use the -delta- option of - tsset- or -xtset- to have Stata treat the prior two- or five-year observation as the lagged value.
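For readers working outside Stata, the same Bartlett-kernel (Newey-West) calculation can be written out by hand. The sketch below is plain Python, not Stata, and covers only the simplest case — one regressor, no intercept, no gaps, no small-sample correction — but it makes the lag/bandwidth bookkeeping explicit: with m lags the Bartlett weights are w_l = 1 - l/(m+1), i.e. bandwidth = lags + 1, the -newey- vs. -ivreg2- translation noted in point 1 above.

```python
def newey_west(x, y, lags):
    """OLS slope and its HAC variance for y = b*x + u (no intercept).
    `lags` corresponds to -newey-'s lag count m; the Bartlett
    bandwidth passed to -ivreg2-'s bw() would be m + 1."""
    sxx = sum(xi * xi for xi in x)
    b = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    g = [xi * (yi - b * xi) for xi, yi in zip(x, y)]  # scores x_t * u_t
    s = sum(gi * gi for gi in g)                      # lag-0 (White/HC0) term
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1)                      # Bartlett weight
        s += 2.0 * w * sum(g[t] * g[t - l] for t in range(l, len(g)))
    return b, s / sxx ** 2

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 1.9, 3.2, 3.9, 5.3, 5.8]
b, v0 = newey_west(x, y, lags=0)  # lags=0 collapses to plain White/HC0
b, v2 = newey_west(x, y, lags=2)  # 2 lags, Bartlett bandwidth 3
```

No degrees-of-freedom correction is applied here, which is one reason the -small- option in point 2 above matters when comparing commands.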
On Oct 13, 2008, at 12:27 PM, Thomas Jacobs wrote:
Thanks for introducing me to ivregress and ivreg2 for use in newey
west estimation. I was unfamiliar with both commands. I confirmed
they run just as you indicated with both the examples Michael Hanson
provided as well as the questions from my original post. No problems
with either time series commands or gaps.
Is the translation between newey west lags and bartlett kernel
bandwidth simply bandwidth = lags +1 as it appears? I was unsure from
the ivreg2 help file. Thanks again.
On Sat, Oct 11, 2008 at 3:07 PM, Kit Baum <baum@bc.edu> wrote:
Re Michael Hanson's comments on David Roodman's -newey2-: David has on more than one occasion suggested that those wanting to calculate Newey-West standard errors in an OLS regression should
just use -ivreg2- of Baum, Schaffer, Stillman. Despite its name it is happy to estimate OLS models without any instruments, and it can estimate Newey-West standard errors as
well as a variety of other HAC models. E.g.
ivreg2 irx t, robust bw(9)
Notice that it reports the presence of gaps but does not choke on them.
-ivreg2- requires Stata 9.2.
In fact, contradictory to its syntax diagram, -ivregress- can do this as
well. The -ivregress- syntax states that the (varlist2 = varlist_iv)
component is mandatory, but the program does not enforce this:
ivregress 2sls irx t, vce(hac bartlett 8)
works, making no mention of gaps. -ivregress- requires Stata 10.
Kit Baum, Boston College Economics and DIW Berlin
An Introduction to Modern Econometrics Using Stata:
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-10/msg00702.html","timestamp":"2014-04-19T01:54:45Z","content_type":null,"content_length":"12237","record_id":"<urn:uuid:e07a3f87-0a01-47de-9e0f-fc7a4a6c76cf>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Elizabeth, NJ Algebra 1 Tutor
Find a North Elizabeth, NJ Algebra 1 Tutor
...I look forward to joining you on your journey to enrichment and discovery!I was accepted into every college I applied to, including MIT, Yale, and UCLA. I have helped several seniors with
their college essays and decision process. I have also helped high school students plan their courses and extracurricular activities to best prepare them for their desired futures.
25 Subjects: including algebra 1, English, reading, biology
...The subjects where I can provide the most help would be the elementary school classes and advanced grammar, mathematics, MS-Excel, religion and French. I look forward to hearing from you if I
can help. Have a great day!
36 Subjects: including algebra 1, English, chemistry, precalculus
...My style of teaching is to treat the student as an equal. I try to help them gain a grasp on the material, but do not belittle them for any questioned asked or if it takes a while to
understand the material. The difference between a tutor and a student is time.
7 Subjects: including algebra 1, physics, calculus, astronomy
...Thus they are my strongest talking points. As an undergraduate I have had two years of Biology, paleontology, and even Dinosaurs. A very fun class that I would be willing to recreate for
persons of any age group.
13 Subjects: including algebra 1, reading, chemistry, geometry
...I approach each client differently. Everyone learns differently.I completed High School algebra. I completed all coursework for high school algebra.
18 Subjects: including algebra 1, chemistry, calculus, geometry
Related North Elizabeth, NJ Tutors
North Elizabeth, NJ Accounting Tutors
North Elizabeth, NJ ACT Tutors
North Elizabeth, NJ Algebra Tutors
North Elizabeth, NJ Algebra 2 Tutors
North Elizabeth, NJ Calculus Tutors
North Elizabeth, NJ Geometry Tutors
North Elizabeth, NJ Math Tutors
North Elizabeth, NJ Prealgebra Tutors
North Elizabeth, NJ Precalculus Tutors
North Elizabeth, NJ SAT Tutors
North Elizabeth, NJ SAT Math Tutors
North Elizabeth, NJ Science Tutors
North Elizabeth, NJ Statistics Tutors
North Elizabeth, NJ Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Bayway, NJ algebra 1 Tutors
Bergen Point, NJ algebra 1 Tutors
Chestnut, NJ algebra 1 Tutors
Elizabeth, NJ algebra 1 Tutors
Elizabethport, NJ algebra 1 Tutors
Elmora, NJ algebra 1 Tutors
Greenville, NJ algebra 1 Tutors
Midtown, NJ algebra 1 Tutors
Pamrapo, NJ algebra 1 Tutors
Parkandbush, NJ algebra 1 Tutors
Peterstown, NJ algebra 1 Tutors
Roseville, NJ algebra 1 Tutors
Townley, NJ algebra 1 Tutors
Union Square, NJ algebra 1 Tutors
Weequahic, NJ algebra 1 Tutors | {"url":"http://www.purplemath.com/North_Elizabeth_NJ_algebra_1_tutors.php","timestamp":"2014-04-18T13:50:12Z","content_type":null,"content_length":"24283","record_id":"<urn:uuid:0883004b-2def-42a8-a3c5-e57849bcb2b1>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding volume of a solid
May 21st 2009, 06:38 PM #1
Junior Member
Jan 2009
Finding volume of a solid
Find the volume of a solid formed by revolving the region bounded by the graphs of y=x^3+x+1,y=1, and x=1 about the line x=2.
did you graph the region? i would use the shell method here.
$V = 2 \pi \int \text{radius} \cdot \text{height}~dx$
here we have $V = 2 \pi \int_0^1 (2 - x)(x^3 + x)~dx$
any questions?
no, thanks!!
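A quick numerical cross-check (my addition, not part of the thread): carrying out the integration, $2\pi\int_0^1 (2-x)(x^3+x)~dx = \frac{29\pi}{15} \approx 6.0737$, and a midpoint-rule sum agrees:

```python
import math

def shell_volume(radius, height, a, b, n=20000):
    """Shell method, V = 2*pi * integral_a^b radius(x)*height(x) dx,
    approximated by a midpoint Riemann sum."""
    h = (b - a) / n
    total = sum(radius(a + (i + 0.5) * h) * height(a + (i + 0.5) * h)
                for i in range(n))
    return 2 * math.pi * total * h

# radius = distance from x to the axis x = 2; height = (x^3 + x + 1) - 1
V = shell_volume(lambda x: 2 - x, lambda x: x**3 + x, 0.0, 1.0)
print(V, 29 * math.pi / 15)  # both ~ 6.0737; the exact value is 29*pi/15
```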
May 21st 2009, 08:15 PM #2
May 22nd 2009, 06:00 AM #3
Junior Member
Jan 2009 | {"url":"http://mathhelpforum.com/calculus/90014-finding-volume-solid.html","timestamp":"2014-04-19T20:36:23Z","content_type":null,"content_length":"35364","record_id":"<urn:uuid:85157e9f-082d-4253-a909-f6453d18c6a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Story about infinity
Some infinities are larger than other infinities.
If you have an infinite number of hands, then the number of fingers on those hands is also infinite, but the finger infinity is five times as great as the hand infinity.
When I was a kid I thought about infinity and other stuff, like does the Universe come to an end, and if there is a wall at the end of the Universe, how thick is the wall and what is on the other
side. I thought of infinity as an endless train. You are standing at a train crossing, and there is one car after another going by and they just keep going on forever. You never get to see the end.
Then I thought that this train had a beginning, but no end. Suppose, instead, the train never began, and never ended? It had just been going by forever, and will continue to go on forever. That made
me think that there are two types of infinity, which I called "one-tail infinity," and "two-tail infinity."
I wonder, though, if there actually exists any type of population with an infinite number of members? If there is nothing in existence which is actually infinite, then perhaps there really is no such
thing as infinity, other than as a concept which we have created with our minds, which would explain how come there can be such paradoxes as infinities which are greater than other infinities. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=46245","timestamp":"2014-04-18T16:31:21Z","content_type":null,"content_length":"16898","record_id":"<urn:uuid:f1f9810b-8b53-4cd8-bdac-1b3989a24994>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Multivariate calculation. Use of the continuous groups.
(English) Zbl 0561.62048
Springer Series in Statistics. New York etc.: Springer-Verlag. XVI, 376 p. DM 138.00 (1985).
This book contains a selection of mathematical topics needed for multivariate statistical inference, in particular for the solution of sampling distributions problems which lead to multivariate
normal density functions.
After a very useful chapter of introduction and survey, there are 13 chapters headed as follows: Transforms (2); Locally compact groups and Haar measure (3); Wishart’s paper, dealing with the paper
Biometrika 20, 32-52 (1928) (4); The Fubini-type theorems of Karlin (5); Manifolds and exterior differential forms (6); Invariant measures on manifolds (7); Matrices, operators, null sets (8);
Examples using differential forms (9); Cross-sections and maximal invariants (10); Random variable techniques (11); Zonal polynomials (12,13), and Multivariate inequalities (14). Regarding these
contents, the author comments that “Chapters 2,4,5,9,10 and 11 are directly concerned with distinct techniques of computing or otherwise determining density functions, while Chapters 3,6,7 and 8 give
the development of needed mathematical background”.
This book is an extension, revision and correction of another one by the same author, Techniques of multivariate calculation. (1976; Zbl 0337.62033). The level of the present book is advanced, and
assumes a great deal of its readers. This includes measure theory, measures in metric spaces, locally compact groups and Hausdorff spaces, plus other standard techniques like quadratic forms,
positive definite matrices and canonical forms.
The book is intended mainly to be used by researchers and advanced students interested in the mathematical aspects of some parts of multivariate statistical analysis. For this audience the material
is excellent, the presentation aims at completeness, and references to the main sources are frequent and detailed. Some references to statistical problems (analysis of variance or canonical
correlations) are given; however this is not a book on statistics but on mathematical methods which are potentially useful for some statistical problems.
In conclusion, this is a mathematically advanced treatment of approaches, methods and techniques to be used by mathematical statisticians working in some important areas of multivariate inference.
The presentation of the material (and of the book) is very good, and for the indicated audience the availability of a single source containing this material will no doubt be a great asset.
62H05 Characterization and structure theory (Multivariate analysis)
62-02 Research monographs (statistics)
28C10 Set functions and measures on topological groups or semigroups, etc.
62H10 Multivariate distributions of statistics
62A01 Foundations and philosophical topics in statistics
58A15 Exterior differential systems (Cartan theory) | {"url":"http://zbmath.org/?q=an:0561.62048","timestamp":"2014-04-16T04:13:26Z","content_type":null,"content_length":"23827","record_id":"<urn:uuid:c3c55bac-c9cc-45a8-94b7-f241665c35c7>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calcul de Constructions Infinies et son Application à la Vérification de Systèmes Communicants
- In Proceedings of Theorem Proving in Higher Order Logics (TPHOLs'98), number 1479 in LNCS, 1998
"... Over the last decade, the increasing demand for the validation of safety critical systems lead to the development of domain-specific programming languages (e.g. synchronous languages) and
automatic verification tools (e.g. model checkers). Conventionally, the verification of a reactive system is imp ..."
Cited by 9 (4 self)
Add to MetaCart
Over the last decade, the increasing demand for the validation of safety critical systems lead to the development of domain-specific programming languages (e.g. synchronous languages) and automatic
verification tools (e.g. model checkers). Conventionally, the verification of a reactive system is implemented by specifying a discrete model of the system (i.e. a finite-state machine) and then
checking this model against temporal properties (e.g. using an automata-based tool). We investigate the use of a theorem prover, Coq, for the specification of infinite state systems and for the
verification of co-inductive properties.
"... Abstract. Some total languages, like Agda and Coq, allow the use of guarded corecursion to construct infinite values and proofs. Guarded corecursion is a form of recursion in which arbitrary
recursive calls are allowed, as long as they are guarded by a coinductive constructor. Guardedness ensures th ..."
Cited by 6 (3 self)
Add to MetaCart
Abstract. Some total languages, like Agda and Coq, allow the use of guarded corecursion to construct infinite values and proofs. Guarded corecursion is a form of recursion in which arbitrary
recursive calls are allowed, as long as they are guarded by a coinductive constructor. Guardedness ensures that programs are productive, i.e. that every finite prefix of an infinite value can be
computed in finite time. However, many productive programs are not guarded, and it can be nontrivial to put them in guarded form. This paper gives a method for turning a productive program into a
guarded program. The method amounts to defining a problem-specific language as a data type, writing the program in the problem-specific language, and writing a guarded interpreter for this language.
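Outside a total language the guardedness discipline is not enforced, but the productivity idea can be imitated with lazy streams. The Python sketch below (an informal analogy of my own — Agda and Coq check guardedness statically, which Python does not) defines the Fibonacci stream corecursively, with the recursive call guarded by two emitted elements, so every finite prefix is computable in finite time:

```python
from itertools import islice, tee

def fibs():
    """fibs = 0 : 1 : zipWith (+) fibs (tail fibs), transcribed to a
    generator; the recursive calls are 'guarded' by the two yields."""
    yield 0
    yield 1
    a, b = tee(fibs())   # two independent views of the stream itself
    next(b)              # b is now the tail of the stream
    yield from (x + y for x, y in zip(a, b))

print(list(islice(fibs(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

An unguarded analogue such as `def bad(): yield from bad()` emits no element at all before recursing — the kind of non-productive definition that a guardedness checker rejects.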
"... Abstract. The recent success of languages like Agda and Coq demonstrates the potential of using dependent types for programming. These systems rely on many high-level features like datatype
definitions, pattern matching and implicit arguments to facilitate the use of the languages. However, these fe ..."
Cited by 3 (0 self)
Add to MetaCart
Abstract. The recent success of languages like Agda and Coq demonstrates the potential of using dependent types for programming. These systems rely on many high-level features like datatype
definitions, pattern matching and implicit arguments to facilitate the use of the languages. However, these features complicate the metatheoretical study and are a potential source of bugs. To
address these issues we introduce ΠΣ, a dependently typed core language. It is small enough for metatheoretical study and the type checker is small enough to be formally verified. In this language
there is only one mechanism for recursion—used for types, functions and infinite objects— and an explicit mechanism to control unfolding, based on lifted types. Furthermore structural equality is
used consistently for values and types; this is achieved by a new notion of α-equality for recursive definitions. We show, by translating several high-level constructions, that ΠΣ is suitable as a
core language for dependently typed programming. 1
, 2009
"... Purely inductive definitions give rise to tree-shaped values where all branches have finite depth, and purely coinductive definitions give rise to values where all branches are potentially
infinite. If this is too restrictive, then an alternative is to use mixed induction and coinduction. This techn ..."
Cited by 2 (0 self)
Add to MetaCart
Purely inductive definitions give rise to tree-shaped values where all branches have finite depth, and purely coinductive definitions give rise to values where all branches are potentially infinite.
If this is too restrictive, then an alternative is to use mixed induction and coinduction. This technique appears to be fairly unknown. The aim of this paper is to make the technique more widely
known, and to present several new applications of it, including a parser combinator library which guarantees termination of parsing, and a method for combining coinductively defined inference systems
with rules like transitivity. The developments presented in the paper have been formalised and checked in Agda, a dependently typed programming language and proof assistant.
"... Abstract. It is natural to present subtyping for recursive types coinductively. However, Gapeyev, Levin and Pierce have noted that there is a problem with coinductive definitions of non-trivial
transitive inference systems: they cannot be “declarative”—as opposed to “algorithmic ” or syntax-directed ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. It is natural to present subtyping for recursive types coinductively. However, Gapeyev, Levin and Pierce have noted that there is a problem with coinductive definitions of non-trivial
transitive inference systems: they cannot be “declarative”—as opposed to “algorithmic ” or syntax-directed—because coinductive inference systems with an explicit rule of transitivity are trivial. We
propose a solution to this problem. By using mixed induction and coinduction we define an inference system for subtyping which combines the advantages of coinduction with the convenience of an
explicit rule of transitivity. The definition uses coinduction for the structural rules, and induction for the rule of transitivity. We also discuss under what conditions this technique can be used
when defining other inference systems. The developments presented in the paper have been mechanised using Agda, a dependently typed programming language and proof assistant. 1
- in FoSSaCS'99 (ETAPS) Conf. Proc., W.Thomas ed., Springer LNCS 1578, 1983
"... We introduce a coinductive logical system à la Gentzen for establishing bisimulation equivalences on circular non-wellfounded regular objects, inspired by work of Coquand, and of Brandt and
Henglein. In order to describe circular objects, we utilize a typed language, whose coinductive types involve ..."
Cited by 1 (1 self)
Add to MetaCart
We introduce a coinductive logical system à la Gentzen for establishing bisimulation equivalences on circular non-wellfounded regular objects, inspired by work of Coquand, and of Brandt and Henglein.
In order to describe circular objects, we utilize a typed language, whose coinductive types involve disjoint sum, cartesian product, and finite powerset constructors. Our system is shown to be
complete with respect to a maximal fixed point semantics. It is shown to be complete also with respect to an equivalent final semantics. In this latter semantics, terms are viewed as points of a
coalgebra for a suitable endofunctor on the category Set of non-wellfounded sets. Our system subsumes an axiomatization of regular processes, alternative to the classical one given by Milner. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2705926","timestamp":"2014-04-16T07:09:21Z","content_type":null,"content_length":"26764","record_id":"<urn:uuid:9b5c74f5-4bc8-4fdc-b1ba-6cfbd6073c6f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kennesaw Statistics Tutor
Find a Kennesaw Statistics Tutor
...I have taught students of diverse ages and backgrounds, including underprivileged and learning-disabled students. Before WyzAnt I worked with some of the highest quality personal educational
service companies available. My areas of expertise include the following: standardized test preparation ...
31 Subjects: including statistics, English, reading, chemistry
...I took Geometry in high school and it was my first taste of what it meant to write a proof -- and I loved it! I took Pre-Algebra in the accelerated math program in 6th grade and did well
enough in the course to maintain my status in the accelerated program. I took Pre-Calculus while I was still in the military and earned an A in the class.
15 Subjects: including statistics, calculus, geometry, algebra 1
...I love helping students understand Economics! Tutored fellow MBA students in Finance. Awarded Mason Gold Standard Award for contributing to the academic achievement of my peers.
28 Subjects: including statistics, calculus, GRE, physics
...I tutor students from elementary school through Algebra II and Trigonometry. I get to know students, their strengths and their challenges. I help them to see how what is new builds on what
they have already learned.
8 Subjects: including statistics, algebra 1, trigonometry, algebra 2
...I have a masters degree in education and have minored in Public Speaking.In my positions as math/science coach,Executive Director of Employee Relations, and Assistant Supt., I have been
responsible for all in service training programs. I have made hundreds of speaking engagements before adults, ...
47 Subjects: including statistics, reading, English, biology
Nearby Cities With statistics Tutor
Acworth, GA statistics Tutors
Austell statistics Tutors
Canton, GA statistics Tutors
Cartersville, GA statistics Tutors
Doraville, GA statistics Tutors
Duluth, GA statistics Tutors
Dunwoody, GA statistics Tutors
East Point, GA statistics Tutors
Hiram, GA statistics Tutors
Mableton statistics Tutors
Marietta, GA statistics Tutors
Milton, GA statistics Tutors
Norcross, GA statistics Tutors
Smyrna, GA statistics Tutors
Woodstock, GA statistics Tutors | {"url":"http://www.purplemath.com/Kennesaw_statistics_tutors.php","timestamp":"2014-04-17T07:17:21Z","content_type":null,"content_length":"23792","record_id":"<urn:uuid:d0597d13-5337-49fd-a820-7af343ade169>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
September 2010
Math Digest
Summaries of Media Coverage of Math
Edited by Mike Breen and Annette Emerson, AMS Public Awareness Officers
Contributors: Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson (AMS), Brie Finegold (University of Arizona), Baldur Hedinsson
(Boston University), Allyn Jackson (Deputy Editor, Notices of the AMS), and Adriana Salerno (Bates College)
September 2010
"Letters to the Editor." Providence Journal, 30 September 2010 and 11 October 2010.
An interesting exchange took place in the Letters to the Editor section of the Providence Journal. In the first letter, Richard Heath wrote regarding whether algebra should be required in school: "I
am 83 and took algebra in public school. I have never used it! I have six children who have never used it." His grandchildren haven't used it either. On October 11, Susan Osberg replied with six
questions (e.g. "Have you ever had to sort through the fiscal claims of candidates for public office?"), writing that perhaps generations of the Heath family had used algebra without knowing it.
Osberg, president of the Rhode Island Mathematics Teachers Association, finished with: "To answer your question about 'whether the public schools need to teach algebra,' I give you a resounding yes."
--- Mike Breen
Return to Top
"Making Math Lessons as Easy as 1, Pause, 2, Pause ...," by Winnie Hu. The New York Times, 30 September 2010.
Schools in the US have been working hard for decades to improve the math skills of their students. They have done this by adopting new programs, like "new math" in the 60s, which focused on abstract
theories and created a back-to-basics backlash, and "reform math", which focuses on conceptual understanding and problem solving, but doesn't lack its critics. Recently, more and more schools have
been adopting a program known as Singapore math, based on the country's success. Singapore students have ranked at or near the top on international math exams since the 1990s.
The innovative program devotes more time to fewer topics, and thus addresses one of the big difficulties in teaching math: all children learn differently. By slowing down the learning process,
students get a solid math foundation upon which they can build increasingly complex skills. Even though they start out slow in kindergarten (by spending a week on the numbers 1 and 2 for example) the
pace can be increased in fourth and fifth grade, putting children as much as a year ahead of students in other math programs. Bill Jackson, a math coach for the Scarsdale district, says that in
Singapore math, the students move through a three-step learning process: concrete, pictorial, and abstract. Most American math programs typically skip the middle steps and lose students in the
process, says Jackson. The main criticism of the program, and the main reason some schools have dropped it after a few years, is that it is not easy or cheap to successfully adopt. School board
members and parents are also reluctant to adopt a foreign math program. Training teachers can be expensive, and in some cases teachers themselves lacked a sufficiently strong math background to be
able to implement the program adequately. But recent suggests that students who are taught Singapore math score higher on standardized tests and it even helps young children develop confidence in
their math skills, so it might just be worth the trouble.
--- Adriana Salerno
Return to Top
"In Character with Tom Henderson," by Peter Korn. The Portland Tribune, 23 September 2010.
In this brief and amusing article, Peter Korn interviews mathematician Tom Henderson, an adjunct professor of mathematics at Portland State University in Portland, Oregon, as well as an improv
comedian. Henderson is currently raising money for a book he intends to write, Punk Mathematics, using the Kickstarter.com website. (As of this writing, Henderson has raised almost $29,000.) During
the interview, Henderson speaks about the affinity between punk and mathematics: punk is "very critical of social norms" while, in mathematics, "you can verify everything for yourself if you spend
long enough on a problem. Those two can behave very nicely (together)." When asked about the hardest math problem he has faced, Henderson says a few words about the Goldbach Conjecture, and, at the
interviewer's request, finishes with a joke about an engineer, a scientist, and a mathematician. (No, the setting is not a bar but a hotel room that catches on fire.) You can read the interview, and
hear Henderson and fellow math aficionado Nick Horton discuss infinity, probability, game theory, and other topics related to mathematics.
--- Claudia Clark
Return to Top
"Sizing Up Consciousness by Its Bits," by Carl Zimmer. The New York Times, 20 September 2010.
Measuring a person's level of consciousness in a manner akin to measuring their blood pressure is a lofty goal, but one that mathematician David Balduzzi and acclaimed neuroscientist Giulio Tononi
have begun to move towards. Recently, they wrote a paper entitled "Qualia: The Geometry of Integrated Information," which discusses means of viewing the integrated information generated during a
conscious experience as a geometric object whose height indicates the quantity of information or level of consciousness associated to that experience. As Carl Zimmer, the author of the New York Times
article explains, information technology distinguishes between the aggregate of many disjoint bits of information such as a photograph formed by many pixels, and the integrated information that
results from a network of neurons that can communicate with each other. While our brains work by sharing information from one part with another, this type of shared information is harder to quantify
than the many pixels in a photo. Consider for example that "simply linking all the parts in every possible way does not raise phi (the level of consciousness) much." Rather, connecting every neuron
to every other neuron in a system creates one giant on/off switch. “It’s either all on, or all off,” says Tononi of such a system.
While researchers can assess models of very small neural systems (such as that of a worm) to measure levels of consciousness, an actual model of the human brain is not very near. But Tononi argues that the
theoretical implications of his theory would do a better job explaining transitions in and out of consciousness than some of the current theories. Another approach Tononi is taking in an effort to
measure consciousness is to record the brain's reaction to a stimulus provided by a magnetic pulse. The duration of the brain waves that echo varies according to how anesthetized the brain is,
indicating that a less conscious individual's brain might have fewer reverberations. More experiments on individuals in various states of sleep and wakefulness will soon be performed, giving a firmer
shape to Tononi's theory.
--- Brie Finegold
"Fibbing With Numbers": Review of Proofiness: The Dark Arts of Mathematical Deception by Charles Seife. Reviewed by Steven Strogatz. The New York Times, 19 September 2010.
"Go figure" (also a review of Proofiness). Reviewed by Jordan Ellenberg. The Boston Globe, 26 September 2010.
Proofiness (noun): the art of using bogus mathematical arguments to prove something that you know in your heart is true—even if it is not. The concept is the heart (and the title) of a new book by
Charles Seife that explores how politicians and marketers use misleading and even inaccurate numbers to support their cases. Seife’s book looks at the perils of ascribing too much importance to
numbers that are derived from sloppy or even nonexistent calculations, including vote counting in the recent Minnesota Senate race, in which the observed errors were larger than the difference in the
number of votes. He also provides examples of how cherry picking data and assigning high importance to the average of a data set with extreme outliers have been used to mislead the public. Reviewer
Steven Strogatz notes that although several recent books have looked at the deceptive power of numbers, Seife’s Proofiness stands out for its strong examples of the effect of this deception on
--- Lisa DeKeukelaere
"After Cracking a Theoretical Bottleneck, a Math Prodigy Arrives at the U. of Chicago," by Paul Basken. The Chronicle of Higher Education, 17 September 2010, page A4.
The Chronicle of Higher Education reports on the outstanding mathematician Ngô Bao Châu becoming the newest professor of mathematics at the University of Chicago. Châu is famous for proving an
important theorem, known as the "fundamental lemma," which is a crucial part of a set of conjectures known as the "Langlands Program." The proof had eluded mathematicians ever since Robert P.
Langlands conjectured the theorem in 1969. Langlands himself spent more than ten years trying to prove it without success and a considerable body of subsequent work depended on its validity. Châu’s
insight that principles of a seemingly unrelated field of mathematics would help in solving this great mathematical challenge along with four years of working with Langlands at the Institute for
Advanced Study in Princeton ultimately led to a proof in 2009.
--- Baldur Hedinsson
"Tutors Made to Measure," by Maggie Jones. The New York Times, 16 September 2010.
Online tutoring sites have been around for a while. But virtual tutors, with a face, a gender, a race, and an emotional response to their students’ progress are a relatively new development. The
Wayang Outpost, an online program designed by Beverly Park Woolf and Ivon M. Arroyo, two University of Massachusetts, Amherst, researchers, was originally developed to encourage middle- and
high-school girls to embrace mathematics. These virtual tutors, or “affective pedagogical agents,” are designed to read students’ emotional cues, like boredom, frustration, anxiety and nervousness.
The students are hooked up to sensors monitoring sweat, pressure placed on the mouse, and fidgeting. A small camera monitors facial expressions. This information is then used to tailor the tutor’s
encouragement. And even though these computer-generated tutors are never going to be a perfect replacement for human teachers, as long as they keep students engaged and motivated they are doing
--- Adriana Salerno
Blog: Math Goes Pop! Stand Up to Questionable Odds, by Matthew Lane. 15 September 2010.
Matt Lane, a graduate student at UCLA, documents some of the meetings of pop culture and math in his blog, Math Goes Pop! His goal is to "make mathematics more exciting and less terrifying to a
general audience." While some of his entries focus on likely sources of mathematical subjects like Futurama and A Beautiful Mind, he also illuminates less obvious mathematical content present in
other media. For example, he discusses the physics of the epic punch delivered to character Scott Pilgrim by Todd Ingram, a punch that would have thrown Michael Cera's character almost 700 ft up in
the air. He also questions a graph in Slate magazine that records the possibly diminishing revenue taken in by 3D films. Lane also points out the moments when mathematics takes the spotlight like it
did in August due to the proof that the Rubik's cube can always be solved in 20 moves or less.
Even a Public Service Announcement for the campaign to Stand Up To Cancer uses a long list of statistics on everything from bowling a perfect game to tripping while texting to highlight the high
probability of the viewer getting cancer in his/her lifetime. While this is an effective strategy for getting the American Cancer Society's point across, Mr. Lane wonders where those other
statistics came from, and he follows up to determine their believability. Such a combination of reporting and mathematical insight is characteristic of the entries in Math Goes Pop!
--- Brie Finegold
"Change your math attitude, and daughter's," by Leanna Landsmann. Detroit Free Press, 8 September 2010.
In her weekly column, A+ Advice for Parents, education writer and editor Leanna Landsmann answers parents' questions about their children's education. In this week's column, a mother writes that she
has told her seventh-grade daughter that it's OK to get a D in math because she herself did poorly in math--"like most women." She asks Landsmann if her daughter, who "hates math" and wants to be a
dress designer, really needs math. Landsmann answers in the affirmative and provides several reasons and resources for this parent. These include reading "The I Hate Mathematics Book" by Marilyn
Burns to change her own attitude about math, setting higher expectations for her daughter and getting her extra academic support if needed, using "real-life examples to demonstrate the concepts,"
teaching her daughter to ask "Is my answer logical?", making use of resources on the web, and having her daughter teach math mini-lessons to her.
--- Claudia Clark
"Bringing the Beauty of Math to Life," by Faiza Elmasry. Voice of America, 3 September 2010.
For this article, Faiza Elmasry interviews Alex Bellos, a reporter with degrees in mathematics and philosophy from Oxford University. "After 20 years as a reporter," Bellos "traveled around the
world--and back in time--to uncover fascinating stories of mathematical achievements and to profile people whose lives are intertwined with numbers." Bellos wrote about his adventures in the
recently-published book, Here's Looking at Euclid: A Surprising Excursion Through the Astonishing World of Mathematics. Bellos describes to Elmasry a little of what he learned and a few of the people
he met. For instance, he found that cultural attitudes toward mathematics, mathematicians, and mathematical ability vary greatly around the world: in France (but not in the U.K., according to Bellos)
it's "quite cool" to be a mathematician, while in India "being good at arithmetic" is "almost seen as a badge of national pride." In Japan, Bellos discussed the pleasures of mathematical games with
the creator of the Sudoku puzzles, "spent time with a guru of origami" outside of Tokyo, and met "the world's most numeric chimpanzee." Elmasry finds that Bellos's book does what Bellos intended:
proves "that mathematics is not a dry field of learning."
--- Claudia Clark
"Rummaging for a Final Theory," by Zeeya Merali. Scientific American, September 2010, pages 14-17.
In July, mathematicians and physicists met at the Banff International Research Station to discuss unifying gravity and the Standard Model. The Standard Model of particle physics uses a combination of
three Lie groups to connect all known elementary particles, electromagnetism, the strong force (that binds atomic nuclei), and the weak force (associated with radioactive decay). Physicists Roberto
Percacci and Fabrizio Nesti think a larger Lie group would incorporate gravity into the Standard Model, while Oregon State University mathematician Tevian Dray and physicist Corinne Manogue are using
octonions to describe some properties of particles, such as spin. Although the research is just beginning, Dray says that "We are starting to get glimmers of the properties that a final theory must
have." Others are not so optimistic, countering that the theory would predict particles whose existence would violate previous experiments.
--- Mike Breen
Fourier Transform Problem
May 9th 2010, 05:36 AM #1
I have got the following Fourier problem:
The Fourier transform of a function $f(t)$ is defined as:
$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\,dt$
I have worked through it and got two different answers depending on whether I used a positive contour or negative contour.
For the positive contour:
$j\sqrt{\frac{\pi}{2}} e^\omega$
For the negative contour:
$j\sqrt{\frac{\pi}{2}} e^{-\omega}$
I'm not sure whether these need to be combined or which one to use for the final answer. Looked at Wolfram Alpha and this gave something completely different.
Can someone please point me in the right direction.
Just post the question as it is given please.
The fourier transform of a function $f(t)$ is defined as
$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\,dt$
use this formula and residue calculus to compute the Fourier transform of the function
$f(t) = \frac{t}{1 + t^2}$
Think about which way you traverse the two contours and what that means for the part of the integral along the real axis. Also, which contour you use depends on the sign of $\omega$: the contour must be chosen so that the integral over the semicircular arc goes to zero. See the very similar problem on the Wikipedia page.
Last edited by CaptainBlack; May 10th 2010 at 06:20 AM.
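As a sanity check (my addition, not part of the original thread): per the advice above, each contour handles one sign of $\omega$, and the two contour results fit together into a single answer of the form $F(\omega) = -j\,\mathrm{sgn}(\omega)\sqrt{\pi/2}\,e^{-|\omega|}$; the overall sign depends on the orientation convention used for the contours, so a sign flip relative to the posted results is not alarming. Since $f(t)=t/(1+t^2)$ is odd, only the sine part of $e^{-j\omega t}$ survives, which makes a numerical check with SciPy's oscillatory quadrature straightforward:

```python
import numpy as np
from scipy.integrate import quad

def F(omega):
    """Fourier transform of f(t) = t/(1+t^2) with the (2*pi)^(-1/2) convention.

    f is odd, so the cosine part of e^{-j w t} integrates to zero and
    F(w) = -j * sgn(w) * (2/sqrt(2*pi)) * integral_0^inf t*sin(|w| t)/(1+t^2) dt.
    """
    I, _ = quad(lambda t: t / (1.0 + t * t), 0.0, np.inf,
                weight='sin', wvar=abs(omega))
    return -1j * np.sign(omega) * np.sqrt(2.0 / np.pi) * I

for w in (0.5, 1.0, 2.0, -1.0):
    closed_form = -1j * np.sign(w) * np.sqrt(np.pi / 2.0) * np.exp(-abs(w))
    print(w, F(w), closed_form)
```

The `weight='sin'` option invokes QUADPACK's Fourier-integral routine, which handles the slowly decaying oscillatory tail that a plain `quad` call would struggle with.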
2. Given the following triangle, solve for side XY. Help? Please?
Louviers Math Tutor
Find a Louviers Math Tutor
...I can help students with more than just math! I am very good with the reading/writing portions of standardized tests, such as the SAT and GRE. I can help students with reading comprehension
and essay writing.
27 Subjects: including algebra 1, algebra 2, vocabulary, Microsoft Excel
...I am passionate about the subject and love helping students understand difficult concepts. I have tutored general chemistry at a community college for over 6 years. In order to solve a
chemistry problem, it is important to have a clear understanding of how to approach the problem.
7 Subjects: including algebra 2, geometry, algebra 1, chemistry
My name is Kevin, and I received a Bachelor of Science Degree in Applied and Computational Mathematics from the University of Southern California (USC). I was a member of the Pi Mu Epsilon Math
Honors Society at USC. Since receiving my degree, I have continued my education in mathematics by comple...
21 Subjects: including statistics, calculus, ACT Math, actuarial science
...Thoroughly trained in the quantitative skills, I have spent my career perfecting and teaching effective writing and presentation skills. While I am a full time businessman, my greatest
satisfaction comes from tutoring my students on evenings and weekends. I have accrued hundreds of hours of tut...
24 Subjects: including algebra 2, geometry, trigonometry, probability
...I am a spirit-filled Christian and I pray and seek God daily for an understanding of his Word. I have facilitated seminars and classes for more than 30 years. I was a part of the Learning and
Development program at Aurora Loan Services where I taught in front of large groups as a result of my work with the Implementation Team.
43 Subjects: including calculus, elementary (k-6th), physics, statistics
Nearby Cities With Math Tutor
Bennett, CO Math Tutors
Blackhawk, CO Math Tutors
Bow Mar, CO Math Tutors
Central City, CO Math Tutors
Columbine Valley, CO Math Tutors
Commerce City Math Tutors
Elizabeth, CO Math Tutors
Foxfield, CO Math Tutors
Henderson, CO Math Tutors
Indian Hills Math Tutors
Larkspur, CO Math Tutors
Morrison, CO Math Tutors
Sedalia, CO Math Tutors
Shawnee, CO Math Tutors
Watkins, CO Math Tutors
San Marcos, CA Algebra 2 Tutor
Find a San Marcos, CA Algebra 2 Tutor
...Many times, simply removing the anxiety from math itself has worked wonders: former failing students become top performers who are no longer terrified by tests and quizzes. This is all
accomplished in a relaxed, safe, and trusting environment where the student is not afraid to make mistakes. I encourage them to believe in their ability to conquer any problem with hard work and
6 Subjects: including algebra 2, calculus, algebra 1, geometry
...Furthermore, I've been a full-time tutor since I graduated in 2009 and I see Calculus students all the time; usually about a third of my clientele are studying Calculus. That means you can
count on me to communicate concepts well. Geometry is a source of frustration for many students, because it's not like other math that you see in high school.
12 Subjects: including algebra 2, calculus, geometry, GRE
...I recalled my own experience as a prospective student, my anxieties as well as my successes, and sought to connect with the students. By speaking with many prospective students for two years,
I broadened my understanding of college admissions generally and better understood the character of part...
54 Subjects: including algebra 2, reading, English, chemistry
...My teaching experience comes from graduate school, where I was required to teach, and through coaching my children's Math League and Science Olympiad. I have also tutored children in the No
Child Left Behind Program. Surprisingly, I enjoyed these experiences and would like to continue them.
23 Subjects: including algebra 2, chemistry, physics, calculus
...I am confident that I can help you understand the material that you are struggling with. I have been teaching organic chemistry for 8 years at the college level. I am excellent at teaching
this material, and have had many students email me about how well I prepared them for the 4 year university or for medical school.
9 Subjects: including algebra 2, chemistry, physics, biology
Related San Marcos, CA Tutors
San Marcos, CA Accounting Tutors
San Marcos, CA ACT Tutors
San Marcos, CA Algebra Tutors
San Marcos, CA Algebra 2 Tutors
San Marcos, CA Calculus Tutors
San Marcos, CA Geometry Tutors
San Marcos, CA Math Tutors
San Marcos, CA Prealgebra Tutors
San Marcos, CA Precalculus Tutors
San Marcos, CA SAT Tutors
San Marcos, CA SAT Math Tutors
San Marcos, CA Science Tutors
San Marcos, CA Statistics Tutors
San Marcos, CA Trigonometry Tutors
Integrating by Parts
$\int x(\arctan x)^2\, dx$. No work to show so far; not sure which substitutions would be the best to make here.
If I have done this correctly, $du=2(\arctan x)\frac{1}{1+x^2}dx, ~v=\frac{1}{2}x^2$ the integral is then $\frac{1}{2}x^2(arctan x)^2 - \int \frac{1}{2}x^22(\arctan x)\frac{1}{1-x^2}dx$ I am then
stuck on what to do with that integral..should I make the substitution u = arctan x and then do parts again?
$\frac{1}{2}x^2(\arctan x)^2 - \int x^2(\arctan x)\frac{1}{1+x^2}\,dx$
$= \frac{1}{2}x^2(\arctan x)^2 - \int \left(1 - \frac{1}{1+x^2}\right)\arctan{x} \, dx$
$= \frac{1}{2}x^2(\arctan x)^2 - \int \arctan{x} \, dx + \int \frac{\arctan{x}}{1+x^2} \, dx$
continue ...
Originally posted by Em Yeu Anh: "If I have done this correctly, the integral is then $\frac{1}{2}x^2(\arctan x)^2 - \int \frac{1}{2}x^2 \cdot 2(\arctan x)\frac{1}{1{\color{red}-}x^2}dx$"
The red one should be "+" not "-". Also the 2's will cancel, and the integral will be: $\int \tan^{-1}(x)\, \frac{x^2}{x^2+1}\, dx$. Clearly $\frac{x^2}{x^2+1}=1-\frac{1}{x^2+1}$ <--- Do you know why? Substituting this in the integral: $\int \tan^{-1}(x) \left(1-\frac{1}{x^2+1}\right) dx = \int \tan^{-1}(x)\,dx-\int \frac{\tan^{-1}(x)}{x^2+1}\,dx$. The first one can be solved using integration by parts. The second one can be solved by using a substitution.
the dual space of C(X) (X is noncompact metric space)
It is well known that when $X$ is a compact space (or a locally compact space) and $C(X)=\{f : X\rightarrow \mathbb{R} \mid f \text{ is continuous and bounded}\}$, the dual space $C(X)^{*}$ corresponds to $M(X)$, the space of Radon measures with bounded variation.
However, to my knowledge, few books discuss the case where $X$ is noncompact, for example a complete separable metric space.
Even for the simplest example, taking $X = \mathbb{R}$, the real line, what does $(C(X))^{*}$ mean?
Any advice and references will be appreciated.
fa.functional-analysis real-analysis
I think you should specify what you mean by $C(X)$ when $X$ is non-compact. For example, in the case of $X={\mathbb R}$ you can mean by $C(\mathbb R)$ the space of all continuous functions on $\mathbb R$ endowed with the compact-open topology, and after that the dual space $C(\mathbb R)^*$ becomes exactly the space of all measures with compact support. – Sergei Akbarov Nov 1 '12
@Sergei: I completely agree. This should have been said right from the start. – Alain Valette Nov 1 '12 at 19:24
3 Answers
What you state in the first paragraph is the Riesz Representation Theorem (see http://en.wikipedia.org/wiki/Riesz_representation_theorem#The_representation_theorem_for_linear_functionals_on_Cc.28X.29). This is valid for all locally compact Hausdorff spaces; so in particular for $\mathbb R$ (ah, I guess, if you look at $C_0(\mathbb R)^*$).
If $X$ is any topological space, then of course we can talk of $C^b(X)$ (the bounded continuous functions on $X$). This is still a commutative C$^*$-algebra, and so is isomorphic to $C(K)$, where $K$ is some compact Hausdorff space. The process of moving from $X$ to $K$ is functorial; purely at the topological level it corresponds to constructing the Stone-Cech compactification (see http://en.wikipedia.org/wiki/Stone_cech_compactification ). Point evaluation at $x\in X$ induces a character on $C^b(X) = C(K)$ and hence a point $k$ of $K$; we thus get a (continuous) map $X\rightarrow K$. This is injective if $X$ is completely regular; but it can fail to be injective (basically, we might lack enough continuous functions to separate points of $X$).
Back to your question: $C^b(X)^* = C(K)^* = M(K)$. For $\mathbb R$, we find that $K$ is nothing but $\beta\mathbb R$, the Stone-Cech compactification (quite a large space!)
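For reference, the two dualities invoked above can be stated in one place (a summary added here with the usual regularity hypotheses; it is not part of the original answer):

```latex
\[
  C_0(X)^{*} \;\cong\; M(X) \qquad (X \text{ locally compact Hausdorff}),
\]
where $M(X)$ is the space of finite regular Borel (Radon) measures on $X$
with the total-variation norm, and
\[
  C^{b}(X) \;\cong\; C(\beta X)
  \quad\Longrightarrow\quad
  C^{b}(X)^{*} \;\cong\; M(\beta X),
\]
with $\beta X$ the Stone--\v{C}ech compactification (for completely regular $X$).
```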
A nice reference, taken from my answer to another question:
V. S. Varadarajan, MEASURES ON TOPOLOGICAL SPACES, AMS Transl. 48 (1965) 161--228.
Measures on topological spaces as dual to continuous functions on the space, or to bounded continuous functions on the space. (Also, beware of an error in the appendix.)
The problem of obtaining a useful generalisation of the Riesz representation theorem for non-compact spaces was addressed in the 50's by R.C. Buck, amongst others. It was clear that it was necessary to leave the context of Banach spaces for a nice theory. Buck introduced the so-called strict topology on the space of bounded, continuous functions on a locally compact space and showed that the dual is the space of bounded Radon measures on the underlying space. This was generalised to the case of completely regular spaces in the 60's using the theory of mixed topologies or Saks spaces which had been developed by the Polish school. The most succinct definition of the resulting topology on the above space is that it is the finest locally convex topology which agrees with compact convergence on bounded sets. There is a relatively complete theory---in particular, the Riesz representation theorem holds in its natural form.
Interesting! This is also related to a question that I asked recently (mathoverflow.net/questions/105147). Could you give some more precise references, perhaps here or over at the other
question? – Igor Khavkine Nov 1 '12 at 12:48
There is a fairly systematic treatment in the book "Saks spaces and applications to functional analysis". – jbc Nov 1 '12 at 12:55
Patent application title: ENCODING AND DECODING USING ELASTIC CODES WITH FLEXIBLE SOURCE BLOCK MAPPING
Data can be encoded by assigning source symbols to base blocks, assigning base blocks to source blocks, and encoding each source block into encoding symbols, where at least one pair of source blocks is such that the two blocks have at least one base block in common and each has at least one base block not in common with the other source block of the pair. The encoding of a source block can be independent of the content of other source blocks. Decoding to recover all of a desired set of the original source symbols can be done from a set of encoding symbols from a plurality of source blocks, wherein the amount of encoding symbols from the first source block is less than the amount of source data in the first source block, and likewise for the second source block.
1. A method for encoding data to be transmitted over a communications channel that could possibly introduce errors or erasures, wherein source data is represented by an ordered plurality of source
symbols and the source data is recoverable from encoding symbols that are transmitted, the method comprising: identifying a base block for each symbol of the ordered plurality of source symbols,
wherein the identified base block is one of a plurality of base blocks that, collectively, cover the source data to be encoded; identifying, from a plurality of source blocks and for each base block,
at least one source block that envelops that base block, wherein the plurality of source blocks includes at least one pair of source blocks that have a characteristic that there is at least one base
block that is enveloped by both source blocks of the pair and at least one base block for each source block of the pair that is enveloped by that source block and not by the other source block of the
pair; and encoding each of the plurality of source blocks according to an encoding process, resulting in encoding symbols, wherein the encoding process operates on one source block to generate
encoding symbols, with the encoding symbols being independent of source symbol values of source symbols from base blocks not enveloped by the one source block, wherein the encoding is such that the
portion of the source data that is represented by the union of the pair of source blocks is assured to be recoverable from a combination of a first set of encoding symbols generated from the first
source block of the pair and a second set of encoding symbols generated from the second source block of the pair, wherein the amount of encoding symbols in the first set is less than the amount of
source data in the first source block and the amount of encoding symbols in the second set is less than the amount of source data in the second source block.
2. The method of claim 1, wherein the encoding process is such that, when the encoding symbols and the source symbols have the same size, when the first set of encoding symbols comprises M1 encoding
symbols, the first source block comprises N1 source symbols, the second set of encoding symbols comprises M2 encoding symbols, the second source block comprises N2 source symbols, and when the
intersection of the first and second source blocks comprises N3 source symbols with N3 greater than zero, then recoverability of the union of the pair of source blocks is assured beyond a
predetermined threshold probability if M1+M2=N1+N2-N3 for at least some combinations of values of M1<N1 and M2<N2.
3. The method of claim 2, wherein the recoverability of the union of the pair of source blocks is assured beyond a predetermined threshold probability if M1+M2=N1+N2-N3 for all combinations of values of M1 and M2 such that M1≤N1 and M2≤N2.
4. The method of claim 2, wherein the recoverability of the union of the pair of source blocks is certain if M1+M2=N1+N2-N3 for all combinations of values of M1 and M2 such that M1≤N1 and M2≤N2.
5. The method of claim 2, wherein recoverability of the union of the pair of source blocks is assured with a probability higher than a predetermined threshold probability if M1+M2 is larger than N1+N2-N3 by less than a predetermined percentage but smaller than N1+N2 for at least some combinations of values of M1 and M2.
6. The method of claim 1, wherein at least one encoding symbol generated from a source block is equal to a source symbol from the portion of the source data that is represented by that source block.
7. The method of claim 1, wherein the encoding is such that the portion of the source data that is represented by the first source block of the pair is assured to be recoverable from a third set of
encoding symbols generated from the first source block, wherein the amount of encoding symbols in the third set is no greater than the amount of source data represented in the first source block.
8. The method of claim 1, wherein the encoding is such that the portion of the source data that is represented by the first source block of the pair is assured to be recoverable with a probability
higher than a predetermined threshold probability from a third set of encoding symbols generated from the first source block, wherein the amount of encoding symbols in the third set is only slightly
greater than the amount of source data represented in the first source block.
9. The method of claim 1, wherein the number of distinct encoding symbols that can be generated from each source block is independent of the size of the source block.
10. The method of claim 1, wherein the number of distinct encoding symbols that can be generated from each source block depends on the size of the source block.
11. The method of claim 1, wherein identifying base blocks for symbols is performed prior to a start to encoding.
The method of claim 1, wherein identifying source blocks for base blocks is performed prior to a start to encoding.
The method of claim 1, wherein at least one encoding symbol is generated before a base block is identified for each source symbol or before the enveloped base blocks are determined for each of the
source blocks or before all of the source data is generated or made available.
The method of claim 1, further comprising: receiving receiver feedback representing results at a decoder that is receiving or has received encoding symbols; and adjusting one or more of membership of
source symbols in base blocks, membership of base blocks in enveloping source blocks, number of source symbols per base block, number of symbols in a source block, and/or number of encoding symbols
generated from a source block, wherein the adjusting is done based on, at least in part, the receiver feedback.
The method of claim 14, wherein adjusting includes determining new base blocks or changing membership of source symbols in previously determined base blocks.
The method of claim 14, wherein adjusting includes determining new source blocks or changing envelopment of base blocks for previously determined source blocks.
The method of claim 1, further comprising: receiving data priority preference signals representing varying data priority preferences over the source data; and adjusting one or more of membership of
source symbols in base blocks, membership of base blocks in enveloping source blocks, number of source symbols per base block, number of symbols in a source block, and/or number of encoding symbols
generated from a source block, wherein the adjusting is done based on, at least in part, the data priority preference signals.
The method of claim 1, wherein the number of source symbols in the base blocks enveloped by each source block is independent, as between two or more of the source blocks.
The method of claim 1, wherein source symbols identified to a base block are not consecutive within the ordered plurality of source symbols.
The method of claim 1, wherein the source symbols identified to a base block are consecutive within the ordered plurality of source symbols.
The method of claim 20, wherein source symbols identified to the base blocks enveloped by a source block are consecutive within the ordered plurality of source symbols.
The method of claim 1, wherein the number of encoding symbols that can be generated for a source block is independent of the number of encoding symbols that can be generated for other source blocks.
The method of claim 1, wherein the number of encoding symbols generated for a given source block is independent of the number of source symbols in the base blocks enveloped by the given source block.
The method of claim 1, wherein encoding comprises: determining, for each encoding symbol, a set of coefficients selected from a finite field; generating the encoding symbol as a combination of source
symbols of one or more base blocks enveloped by a single source block, wherein the combination is defined, in part by the set of coefficients.
The method of claim 1, wherein the number of symbol operations to generate an encoding symbol from a source block is much less than the number of source symbols in the portion of the source data that
is represented by the source block.
A method for decoding data received over a communications channel that could possibly include errors or erasures, to recover source data that was represented by a set of source symbols, the method
comprising: identifying a base block for each source symbol, wherein the identified base block is one of a plurality of base blocks that, collectively, cover the source data; identifying, from a
plurality of source blocks and for each base block, at least one source block that envelops that base block, wherein the plurality of source blocks includes at least one pair of source blocks that
have a characteristic that there is at least one base block that is enveloped by both source blocks of the pair and at least one base block for each source block of the pair that is enveloped by that
source block and not by the other source block of the pair; receiving a plurality of received symbols; for each received symbol, identifying the source block for which that received symbol is an encoding symbol; and decoding a set of source symbols from the plurality of received symbols, wherein the portion of the source data that is represented by the union of the pair of source blocks
is assured to be recoverable from a combination of a first set of received symbols corresponding to encoding symbols that were generated from the first source block of the pair and a second set of
received symbols corresponding to encoding symbols that were generated from the second source block of the pair, wherein the amount of received symbols in the first set is less than the amount of
source data in the first source block and the amount of received symbols in the second set is less than the amount of source data in the second source block.
The method of claim 26, wherein if N1 is the number of source symbols in the source data of the first source block, if N2 is the number of source symbols in the source data of the second source
block, if N3 is the number of source symbols in the intersection of the first and second source blocks with N3 greater than zero, if the encoding symbols and the source symbols have the same size, if
R1 is the number of received symbols in the first set of received symbols, if R2 is the number of received symbols in the second set of received symbols, then decoding the union of the pair of source
blocks from the first set of R1 received symbols and from the second set of R2 received symbols is assured beyond a predetermined threshold probability if R1+R2=N1+N2-N3, for at least one value of R1
and R2 such that R1<N1 and R2<N2.
The method of claim 27, wherein decoding the union of the pair of source blocks is assured beyond a predetermined threshold probability if R1+R2=N1+N2-N3 for all values of R1 and R2 such that R1≤N1 and R2≤N2.
The method of claim 27, wherein decoding the union of the pair of source blocks is certain if R1+R2=N1+N2-N3 for all values of R1 and R2 such that R1≤N1 and R2≤N2.
The method of claim 26, wherein the portion of the source data that is represented by the first source block of the pair is recoverable from a third set of encoding symbols generated from the first
source block, wherein the amount of encoding symbols in the third set is no greater than the amount of source data represented in the first source block.
The method of claim 26, wherein the number of distinct encoding symbols that can be generated from each source block is independent of the size of the source block.
The method of claim 26, wherein at least one of identifying base blocks for source symbols and identifying source blocks for base blocks is performed prior to a start to encoding.
The method of claim 26, wherein at least some encoding symbols are generated before a base block is identified for each source symbol and/or before the enveloped base blocks are determined for each
of the source blocks.
The method of claim 26, further comprising: determining receiver feedback representing results at a decoder based on what received symbols have been received and/or what portion of the source data is
desired at a receiver and/or data priority preference; and outputting the receiver feedback such that it is usable for altering an encoding process.
The method of claim 26, wherein the number of source symbols in the base blocks enveloped by each source block is independent, as between two or more of the source blocks.
The method of claim 26, wherein the source symbols identified to a base block are consecutive within the ordered plurality of source symbols.
The method of claim 26, wherein source symbols identified to the base blocks enveloped by a source block are consecutive within the ordered plurality of source symbols.
The method of claim 26, wherein decoding further comprises: determining, for each received symbol, a set of coefficients selected from a finite field; and decoding at least one source symbol from
more than one received symbol or previously decoded source symbols using the set of coefficients for the more than one received symbol.
The method of claim 26, wherein the number of symbol operations to recover a union of one or more source blocks is much less than the square of the number of source symbols in the portion of the
source data that is represented by the union of the source blocks.
An encoder that encodes data for transmission over a communications channel that could possibly introduce errors or erasures, comprising: an input for receiving source data that is represented by an
ordered plurality of source symbols and the source data is recoverable from encoding symbols that are transmitted; storage for at least a portion of a plurality of base blocks, wherein each base
block comprises a representation of one or more source symbol of the ordered plurality of source symbols; a logical map, stored in a machine-readable form or generatable according to logic
instructions, mapping each of a plurality of source blocks to one or more base block, wherein the plurality of source blocks includes at least one pair of source blocks that have a characteristic
that there is at least one base block that is enveloped by both source blocks of the pair and at least one base block for each source block of the pair that is enveloped by that source block and not
by the other source block of the pair; storage for encoded blocks; and one or more encoders that each encode source symbols of a source block to form a plurality of encoding symbols, with a given
encoding symbol being independent of source symbol values from source blocks other than the source block it encodes source symbols of, such that the portion of the source data that is represented by
the union of the pair of source blocks is assured to be recoverable from a combination of a first set of encoding symbols generated from the first source block of the pair and a second set of
encoding symbols generated from the second source block of the pair, wherein the amount of encoding symbols in the first set is less than the amount of source data in the first source block and the
amount of encoding symbols in the second set is less than the amount of source data in the second source block.
The encoder of claim 40, wherein the number of encoding symbols in the first set plus the number of encoding symbols in the second set is no greater than the number of source symbols in the portion
of the source data that is represented by the union of the pair of source blocks, if the encoding symbols and the source symbols have the same size.
The encoder of claim 40, wherein the portion of the source data that is represented by the first source block of the pair is recoverable from a third set of encoding symbols generated from the first
source block, wherein the amount of encoding symbols in the third set is no greater than the amount of source data represented in the first source block.
The encoder of claim 40, wherein the number of distinct encoding symbols that can be generated from each source block is independent of the size of the source block.
The encoder of claim 40, further comprising: an input for receiving receiver feedback representing results at a decoder that is receiving or has received encoding symbols; and logic for adjusting one
or more of membership of source symbols in base blocks, membership of base blocks in enveloping source blocks, number of source symbols per base block, number of symbols in a source block, and/or
number of encoding symbols generated from a source block, wherein the adjusting is done based on, at least in part, the receiver feedback.
The encoder of claim 40, further comprising: an input for receiving data priority preference signals representing varying data priority preferences over the source data; and logic for adjusting one
or more of membership of source symbols in base blocks, membership of base blocks in enveloping source blocks, number of source symbols per base block, number of symbols in a source block, and/or
number of encoding symbols generated from a source block, wherein the adjusting is done based on, at least in part, the data priority preference signals.
The encoder of claim 40, wherein the number of source symbols in the base blocks enveloped by each source block is independent, as between two or more of the source blocks.
The encoder of claim 40, wherein the source symbols identified to a base block are consecutive within the ordered plurality of source symbols.
The encoder of claim 40, wherein source symbols identified to the base blocks enveloped by a source block are consecutive within the ordered plurality of source symbols.
The encoder of claim 40, wherein the number of distinct encoding symbols that can be generated for a source block is independent of the number of encoding symbols that can be generated for other
source blocks.
The encoder of claim 40, wherein the number of distinct encoding symbols generated for a given source block is independent of the number of source symbols in the base blocks enveloped by the given
source block.
The encoder of claim 40, further comprising: storage for a set of coefficients selected from a finite field for each of a plurality of the encoding symbols; and logic for generating the encoding
symbol as a combination of source symbols of one or more base blocks enveloped by a single source block, wherein the combination is defined, in part by the set of coefficients.
CROSS REFERENCES
The present application for patent is related to the following co-pending U.S. patent applications, each of which is filed concurrently herewith, assigned to the assignee hereof, and expressly
incorporated by reference herein:
U.S. patent application entitled "Framing for an Improved Radio Link Protocol Including FEC" by Mark Watson, et al., having Attorney Docket No. 092888U1; and
U.S. patent application entitled "Forward Error Correction Scheduling for an Improved Radio Link Protocol" by Michael G. Luby, et al., having Attorney Docket No. 092888U2.
The following issued patents are expressly incorporated by reference herein for all purposes:
U.S. Pat. No. 6,909,383 entitled "Systematic Encoding and Decoding of Chain Reaction Codes" to Shokrollahi et al. issued Jun. 21, 2005 (hereinafter "Shokrollahi-Systematic"); and
U.S. Pat. No. 6,856,263 entitled "Systems and Processes for Decoding Chain Reaction Codes Through Inactivation" to Shokrollahi et al. issued Feb. 15, 2005 (hereinafter "Shokrollahi-Inactivation").
BACKGROUND
1. Field
The present disclosure relates in general to methods, circuits, apparatus and computer program code for encoding data for transmission over a channel in time and/or space and decoding that data,
where erasures and/or errors are expected, and more particularly to methods, circuits, apparatus and computer program code for encoding data using source blocks that overlap and can be partially or
wholly coextensive with other source blocks.
2. Background
Transmission of files between a sender and a recipient over a communications channel has been the subject of much literature. Preferably, a recipient desires to receive an exact copy of data
transmitted over a channel by a sender with some level of certainty. Where the channel does not have perfect fidelity (which covers most all physically realizable systems), one concern is how to deal
with data lost or garbled in transmission. Lost data (erasures) are often easier to deal with than corrupted data (errors) because the recipient cannot always tell when corrupted data is data
received in error. Many error correcting codes have been developed to correct for erasures and/or for errors. Typically, the particular code used is chosen based on some information about the
infidelities of the channel through which the data is being transmitted and the nature of the data being transmitted. For example, where the channel is known to have long periods of infidelity, a
burst error code might be best suited for that application. Where only short, infrequent errors are expected a simple parity code might be best.
In particular applications, there is a need for handling more than one level of service. For example, a broadcaster might broadcast two levels of service, wherein a device capable of receiving only
one level receives an acceptable set of data and a device capable of receiving the first level and the second level uses the second level to improve on the data of the first level. An example of this
is FM radio, where some devices receive only the monaural signal and others receive both the monaural and the stereo signal. One characteristic of this scheme is that the higher layers are not normally useful
without the lower layers. For example, if a radio received the secondary, stereo signal, but not the base signal, it would not find that particularly useful, whereas if the opposite occurred, and the
primary level was received but not the secondary level, at least some useful signal could be provided. For this reason, the primary level is often considered more worthy of protection relative to the
secondary level. In the FM radio example, the primary signal is sent closer to baseband relative to the secondary signal to make it more robust.
Similar concepts exist in data transport and broadcast systems, where a first level of data transport is for a basic signal and a second level is for an enhanced layer. An example is H.264 Scalable
Video Coding (SVC) wherein an H.264 base compliant stream is sent, along with enhancement layers. An example is a 1 megabit per second (mbps) base layer and a 1 mbps enhancement layer. In general, if
a receiver is able to decode all of the base layer, the receiver can provide a useful output and if the receiver is able to decode all of the enhancement layer the receiver can provide an improved
output, however if the receiver cannot decode all of the base layer, decoding the enhancement layer does not normally provide anything useful.
Forward error correction ("FEC") is often used to enhance the ability of a receiver to recover data that is transmitted. With FEC, a transmitter, or some operation, module or device operating for the
transmitter, will encode the data to be transmitted such that the receiver is able to recover the original data from the transmitted encoded data even in the presence of erasures and or errors.
Because of the differential in the effects of loss of one layer versus another, different coding might be used for different layers. For example, the data for a base layer might be transmitted with
additional data representing FEC coding of the data in the base layer, followed by the data of the enhanced layer with additional data representing FEC coding of the data in the base layer and the
enhanced layer. With this approach, the latter FEC coding can provide additional assurances that the base layer can be successfully decoded at the receiver.
While such a layered approach might be useful in certain applications, it can be quite limiting in other applications. For example, the above approach can be impractical for efficiently decoding a
union of two or more layers using some encoding symbols generated from one of the layers and other encoding symbols generated from the combination of the two or more layers.
SUMMARY
Data can be encoded by assigning source symbols to base blocks, assigning base blocks to source blocks and encoding each source block into encoding symbols, where at least one pair of source blocks
is such they have at least one base block in common with both source blocks of the pair and at least one base block not in common with the other source block of the pair. The encoding of a source
block can be independent of content of other source blocks. Decoding to recover all of a desired set of the original source symbols can be done from a set of encoding symbols from a plurality of
source blocks wherein the amount of encoding symbols from the first source block is less than the amount of source data in the first source block and likewise for the second source block.
In specific embodiments, an encoder can encode source symbols into encoding symbols and a decoder can decode those source symbols from a suitable number of encoding symbols. The number of encoding
symbols from each source block can be less than the number of source symbols in that source block and still allow for complete decoding.
In a more specific embodiment where a first source block comprises a first base block and a second source block comprises the first base block and a second base block, a decoder can recover all of
the first base block and second base block from a set of encoding symbols from the first source block and a set of encoding symbols from the second source block where the amount of encoding symbols
from the first source block is less than the amount of source data in the first source block, and likewise for the second source block, wherein the number of symbol operations in the decoding process
is substantially smaller than the square of the number of source symbols in the second source block.
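The recovery property summarized above can be illustrated with a toy construction. This is an illustrative assumption, not the code the application actually specifies: symbols are single bytes, each encoding symbol is the XOR of a random subset of one source block's symbols (a random linear combination over GF(2)), and decoding is Gaussian elimination. Source block 1 envelops base block A (symbols 0-3); source block 2 envelops base blocks A and B (symbols 0-7). Receiving 3 symbols from block 1 (fewer than its 4 source symbols) and 5 from block 2 (fewer than its 8) can still recover the full union of 8 symbols.

```python
import random

def gf2_solve(equations, n):
    """Gaussian elimination over GF(2). Each equation is (mask, value):
    bit i of mask means source symbol i contributes, and value is the
    XOR of those symbols' bytes. Returns the n symbol values, or None
    if the system is not full rank."""
    pivots = {}                        # pivot column -> (mask, value)
    for mask, val in equations:
        for col in sorted(pivots):     # reduce by existing pivots
            if (mask >> col) & 1:
                pmask, pval = pivots[col]
                mask ^= pmask
                val ^= pval
        if mask:
            pivots[(mask & -mask).bit_length() - 1] = (mask, val)
    if len(pivots) < n:
        return None
    sol = [0] * n
    for col in sorted(pivots, reverse=True):   # back-substitution
        mask, val = pivots[col]
        for c in range(col + 1, n):
            if (mask >> c) & 1:
                val ^= sol[c]
        sol[col] = val
    return sol

random.seed(1)
source = [random.randrange(256) for _ in range(8)]
block1 = range(4)   # source block 1 envelops base block A (symbols 0-3)
block2 = range(8)   # source block 2 envelops base blocks A and B (0-7)

def encode(block, count):
    """Each encoding symbol is the XOR of a random nonempty subset of
    one source block's symbols -- independent of all other blocks."""
    out = []
    while len(out) < count:
        mask = sum(1 << i for i in block if random.random() < 0.5)
        if mask:
            val = 0
            for i in block:
                if (mask >> i) & 1:
                    val ^= source[i]
            out.append((mask, val))
    return out

# 3 symbols from block 1 plus 5 from block 2: 3 + 5 = 8 = |union|, so
# the whole union is recoverable whenever the combined system has full
# rank (retry otherwise, as recovery is probabilistic for this toy code).
while True:
    sol = gf2_solve(encode(block1, 3) + encode(block2, 5), 8)
    if sol is not None:
        break
assert sol == source
```

Note that the symbols drawn from block 1 carry no information about base block B, yet they reduce the number of symbols needed from block 2, which is the essence of the overlapping-block property claimed above.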
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a communications system that uses elastic codes according to aspects of the present invention.
FIG. 2 is a block diagram of an example of a decoder used as part of a receiver that uses elastic codes according to aspects of the present invention.
FIG. 3 illustrates, in more detail, an encoder, which might be the encoder shown in FIG. 1, or one encoder unit in an encoder array.
FIG. 4 illustrates an example of a source block mapping according to elastic codes.
FIG. 5 illustrates an elastic code that is a prefix code with G=4.
FIG. 6 illustrates an operation with a repair symbol's block.
Attached as Appendix A is a paper presenting Slepian-Wolf type problems on an erasure channel, with a specific embodiment of an encoder/decoder system, sometimes with details of the present invention
used, which also includes several special cases and alternative solutions in some practical applications, e.g., streaming. It should be understood that the specific embodiments described in Appendix
A are not limiting examples of the invention and that some aspects of the invention might use the teachings of Appendix A while others might not. It should also be understood that limiting statements
in Appendix A may be limiting as to requirements of specific embodiments and such limiting statements might or might not pertain to the claimed inventions and, therefore, the claim language need not be
limited by such limiting statements.
To facilitate understanding, identical reference numerals have been used where possible to designate identical elements that are common to the figures, except that suffixes may be added, where
appropriate, to differentiate such elements. The images in the drawings are simplified for illustrative purposes and are not necessarily depicted to scale.
The appended drawings illustrate exemplary configurations of the disclosure and, as such, should not be considered as limiting the scope of the disclosure that may admit to other equally effective
configurations. Correspondingly, it has been contemplated that features of some configurations may be beneficially incorporated in other configurations without further recitation.
DETAILED DESCRIPTION
The present invention is not limited to specific types of data being transmitted. However, in the examples herein, it will be assumed that the data to be transmitted is represented by a sequence of one
or more source symbols and that each source symbol has a particular size, sometimes measured in bits. While it is not a requirement, in these examples, the source symbol size is also the size of
encoding symbols. The "size" of a symbol can be measured in bits, whether or not the symbol is actually broken into a bit stream, where a symbol has a size of M bits when the symbol is selected from
an alphabet of 2^M symbols.
In the terminology used herein, the data to be conveyed is represented by a number of source symbols, where K is used to represent that number. In some cases, K is known in advance. For example, when
the data to be conveyed is a file of known size that is an integer multiple of the source symbol size, K would simply be the integer that is that multiple. However, it might also be the case that K is
not known in advance of the transmission, or is not known until after the transmission has already started. For example, the transmitter might be transmitting a data stream as it receives the data, without any indication of when the data stream might end.
An encoder generates encoding symbols based on source symbols. Herein, the number of encoding symbols is often referred to as N. Where N is fixed given K, the encoding process has a code rate, r=K/N.
Information theory holds that if all source symbol values are equally likely, perfect recovery of the K source symbols requires at least K encoding symbols to be received (assuming the same size for source symbols and encoding symbols). Thus, the code rate using FEC is usually less than one. In many instances, lower code rates allow for more
redundancy and thus more reliability, but at a cost of lower bandwidth and possibly increased computing effort. Some codes require more computations per encoding symbol than others and for many
applications, the computational cost of encoding and/or decoding will spell the difference between a useful implementation and an unwieldy implementation.
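For concreteness, with hypothetical figures (not taken from the application): encoding K = 1000 source symbols into N = 1250 encoding symbols gives a code rate of r = K/N = 0.8, and an ideal code at that rate could tolerate up to N - K = 250 erasures.

```python
K, N = 1000, 1250      # hypothetical counts of source / encoding symbols
r = K / N              # code rate
max_erasures = N - K   # losses an ideal code at this rate could tolerate
assert r == 0.8 and max_erasures == 250
```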
Each source symbol has a value and a position within the data to be transmitted and they can be stored in various places within a transmitter and/or receiver, computer-readable memory or other
electronic storage, that contains a representation of the values of particular source symbols. Likewise, each encoding symbol has a value and an index, the latter being to distinguish one encoding
symbol from another, and also can be represented in computer- or electronically-readable form. Thus, it should be understood that often a symbol and its physical representation can be used
interchangeably in descriptions.
In a systematic encoder, the source symbols are part of the encoding symbols and the encoding symbols that are not source symbols are sometimes referred to as repair symbols, because they can be used
at the decoder to "repair" damage due to losses or errors, i.e., they can help with recovery of lost source symbols. Depending on the codes used, the source symbols can be entirely recovered from the
received encoding symbols which might be all repair symbols or some source symbols and some repair symbols. In a non-systematic encoder, the encoding symbols might include some of the source symbols,
but it is possible that all of the encoding symbols are repair symbols. So as not to have to use separate terminology for systematic encoders and nonsystematic encoders, it should be understood that
the term "source symbols" refers to symbols representing the data to be transmitted or provided to a destination, whereas the term "encoding symbols" refers to symbols generated by an encoder in
order to improve the recoverability in the face of errors or losses, independent of whether those encoding symbols are source symbols or repair symbols. In some instances, the source symbols are
preprocessed prior to presenting data to an encoder, in which case the input to the encoder might be referred to as "input symbols" to distinguish from source symbols. When a decoder decodes input
symbols, typically an additional step is needed to get to the source symbols, which is typically the ultimate goal of the decoder.
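The systematic-encoder terminology above can be made concrete with a minimal sketch, deliberately far simpler than the chain reaction codes discussed below: the encoding symbols are the K source symbols passed through unchanged, followed by a single XOR-parity repair symbol, which suffices to repair any one erasure.

```python
from functools import reduce

source = [5, 17, 42, 99]          # K = 4 source symbols (bytes)

# Systematic encoding: source symbols pass through unchanged, followed
# by one repair symbol (the XOR of all source symbols). Real systematic
# codes generate many repair symbols; one is enough to show the idea.
encoded = source + [reduce(lambda a, b: a ^ b, source)]

# The channel erases one symbol; None marks the erasure.
received = list(encoded)
received[2] = None

# The XOR of every encoded symbol is 0, so the XOR of the survivors
# reconstructs the missing symbol -- the repair symbol "repairs" it.
missing = received.index(None)
received[missing] = reduce(lambda a, b: a ^ b,
                           (s for s in received if s is not None))
assert received[:4] == source
```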
One efficient code is a simple parity check code, but the robustness is often not sufficient. Another code that might be used is a rateless code such as the chain reaction codes described in U.S.
Pat. No. 6,307,487, to Luby, which is assigned to the assignee hereof, and expressly incorporated by reference herein (hereinafter "Luby I") and the multi-stage chain reaction as described in U.S.
Pat. No. 7,068,729, to Shokrollahi et al., which is assigned to the assignee hereof, and expressly incorporated by reference herein (hereinafter "Shokrollahi I").
As used herein, the term "file" refers to any data that is stored at one or more sources and is to be delivered as a unit to one or more destinations. Thus, a document, an image, and a file from a
file server or computer storage device, are all examples of "files" that can be delivered. Files can be of known size (such as a one megabyte image stored on a hard disk) or can be of unknown size
(such as a file taken from the output of a streaming source). Either way, the file is a sequence of source symbols, where each source symbol has a position in the file and a value.
The term "file" might also, as used herein, refer to other data to be transmitted that is not organized or sequenced into a linear set of positions, but may instead have orderings in multiple dimensions, e.g., planar map data, or data that is organized along a time axis and along other axes according to priorities, such as video streaming data that is layered and has
multiple layers that depend upon one another for presentation.
Transmission is the process of transmitting data from one or more senders to one or more recipients through a channel in order to deliver a file. A sender is also sometimes referred to as the
transmitter. If one sender is connected to any number of recipients by a perfect channel, the received data can be an exact copy of the input file, as all the data will be received correctly. Here,
we assume that the channel is not perfect, which is the case for most real-world channels. Of the many channel imperfections, two imperfections of interest are data erasure and data incompleteness
(which can be treated as a special case of data erasure). Data erasure occurs when the channel loses or drops data. Data incompleteness occurs when a recipient does not start receiving data until
some of the data has already passed it by, the recipient stops receiving data before transmission ends, the recipient chooses to only receive a portion of the transmitted data, and/or the recipient
intermittently stops and starts again receiving data.
If a packet network is used, one or more symbols, or perhaps portions of symbols, are included in a packet for transmission and each packet is assumed to be either correctly received or not received at all. A transmission can be "reliable", in that the recipient and the sender will correspond with each other in the face of failures until the recipient is satisfied with the result, or unreliable, in that the
recipient has to deal with what is offered by the sender and thus can sometimes fail. With FEC, the transmitter encodes the data, providing additional information to make up for information that might be lost in transit; the FEC encoding is typically done without exact knowledge of the errors, attempting to guard against them in advance.
In general, a communication channel is that which connects the sender and the recipient for data transmission. The communication channel could be a real-time channel, where the channel moves data
from the sender to the recipient as the channel gets the data, or the communication channel might be a storage channel that stores some or all of the data in its transit from the sender to the
recipient. An example of the latter is disk storage or other storage device. In that example, a program or device that generates data can be thought of as the sender, transmitting the data to a
storage device. The recipient is the program or device that reads the data from the storage device. The mechanisms that the sender uses to get the data onto the storage device, the storage device
itself and the mechanisms that the recipient uses to get the data from the storage device collectively form the channel. If there is a chance that those mechanisms or the storage device can lose
data, then that would be treated as data erasure in the communication channel.
An "erasure code" is a code that maps a set of K source symbols to a larger (>K) set of encoding symbols with the property that the original source symbols can be recovered from some proper subsets
of the encoding symbols. An encoder will operate to generate encoding symbols from the source symbols it is provided and will do so according to the erasure code it is provided or programmed to
implement. If the erasure code is useful, the original source symbols (or, in some cases, less than complete recovery but enough to meet the needs of the particular application) are recoverable from a
subset of the encoding symbols that happened to be received at a receiver/decoder, if the subset is of size greater than or equal to the number of source symbols (an "ideal" code), or at least this
should be true with reasonably high probability. In practice, a "symbol" is usually a collection of bytes, possibly several hundred bytes, and all symbols (source and encoding) are the same size.
A "block erasure code" is an erasure code that maps one of a set of specific disjoint subsets of the source symbols ("blocks") to each encoding symbol. When a set of encoding symbols is generated
from one block, those encoding symbols can be used in combination with one another to recover that one block.
The "scope" of an encoding symbol is the block it is generated from and the block that the encoding symbol is used to decode, with other encoding symbols used in combination.
The "neighborhood set" of a given encoding symbol is the set of source symbols within the symbol's block that the encoding symbol directly depends on. The neighborhood set might be a very sparse
subset of the scope of the encoding symbol. Many block erasure codes, including chain reaction codes (e.g., LT codes), LDPC codes, and multi-stage chain reaction codes (e.g., Raptor codes), use
sparse techniques to generate encoding symbols for efficiency and other reasons. One example of a measurement of sparseness is the ratio of the number of symbols in the neighborhood set that an
encoding symbol depends on to the number of symbols in the block. For example, where a block comprises 256 source symbols (K=256) and each encoding symbol is an XOR of between two and five of those
256 source symbols, the ratio would be between 2/256 and 5/256. Similarly, where K=1024 and each encoding symbol is a function of exactly three source symbols (i.e., each encoding symbol's
neighborhood set has exactly three members), the ratio is 3/1024.
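As an arithmetic sketch of this sparseness measurement, the ratio can be computed directly from an encoding symbol's neighborhood set; the seeded selection scheme below is hypothetical, used only to produce a concrete neighborhood set:

```python
import random

def neighborhood(seed, K, d):
    """Choose a d-element neighborhood set from a block of K source
    symbols, deterministically from a seed (a hypothetical scheme)."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(K), d))

K = 256
nbrs = neighborhood(seed=7, K=K, d=3)   # an XOR of 3 of the 256 symbols
ratio = len(nbrs) / K                   # the sparseness ratio from the text
assert ratio == 3 / 256
```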
For some codes, such as Raptor codes, encoding symbols are not generated directly from source symbols of the block, but instead from other intermediate symbols that are themselves generated from
source symbols of the block. In any case, for Raptor codes, the neighborhood set can be much smaller than the size of the scope (which is equal to the number of source symbols in the block) of these
encoding symbols. In these cases where efficient encoding and decoding is a concern and the resulting code construction is sparse, the neighborhood set of an encoding symbol can be much smaller than
its scope, and different encoding symbols may have different neighborhood sets even when generated from the same scope.
Since the blocks of a block erasure code are disjoint, the encoding symbols generated from one block cannot be used to recover symbols from a different block because they contain no information about
that other block. Typically, the design of codes, encoders and decoders for such disjoint block erasure codes assumes that blocks do not share source symbols. If the encoders/decoders were simply
modified to allow for nondisjoint blocks, i.e., where the scope of a block might overlap another block's scope, encoding symbols generated from the overlapping blocks would not be usable to
efficiently recover the source symbols from the unions of the blocks, i.e., the decoding process does not allow for efficient usage of the small neighborhood sets of the encoding symbols when used to
decode overlapping blocks. As a consequence, the decoding efficiency of block erasure codes applied to overlapping blocks is much worse than their decoding efficiency when applied to what they were
designed for, i.e., decoding disjoint blocks.
A "systematic code" is one in which the set of encoding symbols contains the source symbols themselves. In this context, a distinction might be made between source symbols and "repair symbols" where
the latter refers to encoding symbols other than those that match the source symbols. Where a systematic code is used and all of the encoding symbols are received correctly, the extras (the repair
symbols) are not needed at the receiver, but if some source symbols are lost or erased in transit, the repair symbols can be used to repair such a situation so that the decoder can recover the
missing source symbols. A code is considered to be "nonsystematic" if the encoding symbols comprise the repair symbols and source symbols are not directly part of the encoding symbols.
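The systematic/nonsystematic distinction can be sketched minimally, assuming symbols are equal-length byte strings and using a toy XOR repair construction (not the document's specific code):

```python
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def systematic_encode(source, n_repair, seed=0):
    """Systematic encoding: the encoding symbols are the source symbols
    themselves, followed by repair symbols (here, XORs of random pairs)."""
    rng = random.Random(seed)
    repairs = []
    for _ in range(n_repair):
        i, j = rng.sample(range(len(source)), 2)  # sparse: two neighbors
        repairs.append(xor(source[i], source[j]))
    return source + repairs

src = [bytes([i]) * 4 for i in range(4)]
enc = systematic_encode(src, n_repair=2)
assert enc[:4] == src        # systematic: source symbols appear directly
assert len(enc) == 6         # the two extras are the repair symbols
```

A nonsystematic variant would return only the repair-style symbols, so the source symbols would not directly be part of the encoding symbols.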
With these definitions in mind, various embodiments will now be described.
Overview of Encoders/Decoders for Elastic Codes
In an encoder, encoding symbols are generated from source symbols, input parameters, encoding rules and possibly other considerations. In the examples of block-based encoding described herein, this
set of source symbols from which an encoding symbol could depend is referred to as a "source block", or alternatively, referred to as the "scope" of the encoding symbol. Because the encoder is
block-based, a given encoding symbol depends only on source symbols within one source block (and possibly other details), or alternatively, depends only on source symbols within its scope, and does
not depend on source symbols outside of its source block or scope.
Block erasure codes are useful for allowing efficient encoding and efficient decoding. For example, once a receiver successfully recovers all of the source symbols for a given source block, the
receiver can halt processing of all other received encoding symbols that encode for source symbols within that source block and instead focus on encoding symbols for other source blocks.
In a simple block erasure encoder, the source data might be divided into fixed-size, contiguous and non-overlapping source blocks, i.e., each source block has the same number of source symbols, all
of the source symbols in the range of the source block are adjacent in locations in the source data and each source symbol belongs to exactly one source block. However, for certain applications, such
constraints may lower performance, reduce robustness, and/or add to computational effort of encoding and/or decoding.
Elastic erasure codes are different from block erasure codes in several ways. One is that elastic erasure code encoders and decoders operate more efficiently when faced with unions of overlapping
blocks. For some of the elastic erasure code methods described herein, the generated encoding symbols are sparse, i.e., their neighborhood sets are much smaller than the size of their scope, and when
encoding symbols generated from a combination of scopes (blocks) that overlap are used to decode the union of the scopes, the corresponding decoder process is both efficient (leverages the sparsity
of the encoding symbols in the decoding process and the number of symbol operations for decoding is substantially smaller than the number of symbol operations needed to solve a dense system of
equations) and has small reception overhead (the number of encoding symbols needed to recover the union of the scopes might be equal to, or not much larger than, the size of the union of the scopes).
For example, the size of the neighborhood set of each encoding symbol might be the square root of K when it is generated from a block of K source symbols, i.e., when it has scope K. Then, the number
of symbol operations needed to recover the union of two overlapping blocks from encoding symbols generated from those two blocks might be much smaller than the square of K', where the union of the
two blocks comprises K' source symbols.
With the elastic erasure coding described herein, source blocks need not be fixed in size, can include nonadjacent locations, and can overlap such that a given source symbol is "enveloped" by more than one source block.
In embodiments of an encoder described below, the data to be encoded is an ordered plurality of source symbols and the encoder determines, or obtains a determination of, demarcations of "base blocks"
representing source symbols such that each source symbol is covered by one base block and a determination and demarcation of source blocks, wherein a source block envelops one or more base blocks
(and the source symbols in those base blocks). Where each source block envelops exactly one base block, the result is akin to a conventional block encoder. However, there are several useful and
unexpected benefits in coding when the source blocks are able to overlap each other, such that some base block might be in more than one source block, i.e., two source blocks have at least one base
block in their intersection and the union of the two source blocks includes more source symbols than are in either one of the source blocks.
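The base-block/source-block relationships just described can be sketched with sets; the demarcations below are illustrative, not taken from the document:

```python
# Base blocks partition the source symbol positions; source blocks are
# unions of base blocks and may overlap (a toy demarcation).
base_blocks = [range(0, 4), range(4, 8), range(8, 12)]
source_blocks = {
    "A": set(base_blocks[0]) | set(base_blocks[1]),   # envelops bases 0, 1
    "B": set(base_blocks[1]) | set(base_blocks[2]),   # envelops bases 1, 2
}

# Every source symbol is covered by exactly one base block...
counts = [sum(i in b for b in base_blocks) for i in range(12)]
assert all(c == 1 for c in counts)

# ...but base block 1 lies in both source blocks, so the two source
# blocks overlap and their union is strictly larger than either alone.
inter = source_blocks["A"] & source_blocks["B"]
union = source_blocks["A"] | source_blocks["B"]
assert inter == set(range(4, 8))
assert len(union) > max(len(source_blocks["A"]), len(source_blocks["B"]))
```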
If the encoding is such that the portion of the source data that is represented by the union of the pair of source blocks is recoverable from a combination of a first set of encoding symbols
generated from the first source block of the pair and a second set of encoding symbols generated from the second source block of the pair, it can be possible to decode using fewer received symbols
than might have been required if a simpler encoding process were used. In this encoding process, the resulting encoding symbols can, in some cases, be used in combination for efficient recovery
of source symbols of more than one source block.
An illustration of why this is so is provided below, but first, examples of implementations will be described. It should be understood that these implementations can be done in hardware, program code
executed by a processor or computer, software running on a general purpose computer, or the like.
Elastic Code Ideal Recovery Property
For block codes, ideal recovery is the ability to recover the K source symbols of a block from any received set of K encoding symbols generated from the block. It is well-known that there are block
codes with this ideal recovery property. For example, Reed-Solomon codes used as erasure codes exhibit this ideal recovery property.
A similar ideal recovery property might be defined for elastic codes. Suppose an elastic code communications system is designed such that a receiver receives some set of encoding symbols (where the
channel may have caused the loss of some of the encoding symbols, so the exact set might not be specifiable at the encoder) and the receiver attempts to recover all of the original source symbols,
wherein the encoding symbols are generated at the encoder from a set of overlapping scopes. The overlapping scopes are such that the received encoding symbols are generated from multiple source
blocks of overlapping source symbols, wherein the scope of each received encoding symbol is one of the source blocks. In other words, encoding symbols are generated from a set of T blocks (scopes) b_1, b_2, . . . , b_T, wherein each encoding symbol is generated from exactly one of the T blocks (scopes).
In this context, the ideal recovery property of an elastic erasure code can be described as the ability to recover the set of blocks b_{i_1}, . . . , b_{i_S} from a subset, E, of the received encoding symbols, for any S such that 1≦S≦T and for any subset {i_1, . . . , i_S} of {1, . . . , T}, if the following holds: for all s such that 1≦s≦S and for all subsets {i_1', . . . , i_s'} of {i_1, . . . , i_S}, the number of symbols in E generated from any of b_{i_1'}, . . . , b_{i_s'} is at most the size of the union of b_{i_1'}, . . . , b_{i_s'}, and the number of symbols in E generated from any of b_{i_1}, . . . , b_{i_S} is equal to the size of the union of b_{i_1}, . . . , b_{i_S}. Note that E may be a subset of the received encoding symbols, i.e., some received encoding symbols might not be considered when evaluating this ideal recovery definition to see if a particular set of blocks (scopes) is recoverable.
Ideally, recovery of a set of blocks (scopes) should be computationally efficient, e.g., the number of symbol operations that the decoding process uses might be linearly proportional to the number of
source symbols in the union of the recovered scopes, as opposed to quadratic, etc.
It should be noted that, while some of the descriptions herein might describe methods and processes for elastic erasure code encoding, processing, decoding, etc. that, in some cases, achieve the
ideal recovery properties described above, in other cases, only a close approximation of the ideal recovery and efficiency properties of elastic codes are achieved, while still being considered to be
within the definitions of elastic erasure code encoding, processing, decoding, etc.
System Overview
FIG. 1 is a block diagram of a communications system 100 that uses elastic codes.
In system 100, an elastic code block mapper ("mapper") 110 generates mappings of base blocks to source blocks, and possibly the demarcations of base blocks as well. As shown in FIG. 1, communications
system 100 includes mapper 110, storage 115 for source block mapping, an encoder array or encoder 120, storage 125 for encoding symbols, and transmitter module 130.
Mapper 110 determines, from various inputs and possibly a set of rules represented therein, which source blocks will correspond with which base blocks and stores the correspondences in storage 115.
If this is a deterministic and repeatable process, the same process can run at a decoder to obtain this mapping, but if it is random or not entirely deterministic, information about how the mapping
occurs can be sent to the destination to allow the decoder to determine the mapping.
As shown, a set of inputs (by no means an exhaustive set) is used in this embodiment for controlling the operation of mapper 110. For example, in some embodiments, the mapping might depend
on the values of the source symbols themselves, the number of source symbols (K), a base block structure provided as an input rather than generated entirely internal to mapper 110, receiver feedback,
a data priority signal, or other inputs.
As an example, mapper 110 might be programmed to create source blocks with envelopes that depend on a particular indication of the base block boundaries provided as an input to mapper 110.
The source block mapping might also depend on receiver feedback. This might be useful in the case where receiver feedback is readily available to a transmitter and the receiver indicates successful
reception of data. Thus, the receiver might signal to the transmitter that the receiver has received and recovered all source symbols up to an i-th symbol and mapper 110 might respond by altering
source block envelopes to exclude fully recovered base blocks that came before the i-th symbol, which could save computational effort and/or storage at the transmitter as well as the receiver.
The source block mapping can depend on a data priority input that signals to mapper 110 varying data priority values for different source blocks or base blocks. An example usage of this is in the
case where a transmitter is transmitting data and receives a signal that the data being transmitted is a lower priority than other data, in which case the coding and robustness can be increased for
the higher priority data at the expense of the lower priority data. This would be useful in applications such as map displays, where an end-user might move a "focus of interest" point as a map is
loading, or in video applications where an end-user fast forwards or reverses during the transmission of a video sequence.
In any case, encoder array 120 uses the source block mapping along with the source symbol values and other parameters for encoding to generate encoding symbols that are stored in storage 125 for
eventual transmission by transmitter module 130. Of course it should be understood that system 100 could be implemented entirely in software that reads source symbol values and other inputs and
generates stored encoding symbols. Because the source block mapping is made available to the encoder array and encoding symbols can be independent of source symbols not in the source block associated
with that encoding symbol, encoder array 120 can comprise a plurality of independently operating encoders that each operate on a different source block. It should also be understood that in some
applications each encoding symbol is sent immediately or almost immediately after it is generated, and thus there might not be a need for storage 125, or an encoding symbol might be stored within
storage 125 before it is transmitted for only a short duration of time.
Referring now to FIG. 2, an example of a decoder used as part of a receiver at a destination is shown. As illustrated there, a receiver 200 includes a receiver module 210, storage 220 for received
encoding symbols, a decoder 230, storage 235 for decoded source symbols, and a counterpart source block mapping storage 215. Not shown is any connection needed to receive information about how to
create the source block mapping, if that is needed from the transmitter.
Receiver module 210 receives the signal from the transmitter, possibly including erasures, losses and/or missing data, derives the encoding symbols from the received signal and stores the encoding
symbols in storage 220.
Decoder 230 can read the available encoding symbols and the source block mapping from storage 215, to determine which symbols can be decoded based on the mappings, the available encoding symbols and the previously decoded symbols in storage 235. The results of decoder 230 can be stored in storage 235.
It should be understood that storage 220 for received encoded symbols and storage 235 for decoded source symbols might be implemented by a common memory element, i.e., wherein decoder 230 saves the
results of decoding in the same storage area as the received encoding symbols used to decode. It should also be understood from this disclosure that encoding symbols and decoded source symbols may be
stored in volatile storage, such as random-access memory (RAM) or cache, especially in cases where there is a short delay between when encoding symbols first arrive and when the decoded data is to be
used by other applications. In other applications, the symbols are stored in different types of memory.
FIG. 3 illustrates in more detail an encoder 300, which might be the encoder shown in FIG. 1, or one encoder unit in an encoder array. In any case, as illustrated, encoder 300 has a symbol buffer 305
in which values of source symbols are stored. In the illustration, all K source symbols are storable at once, but it should be understood that the encoder can work equally well with a symbol
buffer that holds less than all of the source symbols. For example, a given operation to generate an encoding symbol might be carried out with the symbol buffer containing only one source block's worth of
source symbols, or even less than an entire source block's worth of source symbols.
A symbol selector 310 selects from one to K of the source symbol positions in symbol buffer 305 and an operator 320 operates on the operands corresponding to the selected source symbols, thereby generating
an encoding symbol. In a specific example, symbol selector 310 uses a sparse matrix to select symbols from the source block, or scope, of the encoding symbol being generated, and operator 320 performs a bit-wise exclusive OR (XOR) operation on the selected symbols to arrive at the encoding symbol. Other operations besides XOR are possible.
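A sketch of this selector-plus-operator path follows, assuming symbols are equal-length byte strings; the function names are illustrative, not from the document:

```python
def xor_all(symbols):
    """Operator role: bitwise XOR over the selected operands."""
    out = symbols[0]
    for s in symbols[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

def encode_symbol(buffer, selection):
    """Selector role: 'selection' lists the positions in the symbol
    buffer that become the encoding symbol's neighbors."""
    return xor_all([buffer[i] for i in selection])

buf = [b"\x01", b"\x02", b"\x04", b"\x08"]       # four 1-byte source symbols
assert encode_symbol(buf, [0, 1, 2]) == b"\x07"  # 0x01 ^ 0x02 ^ 0x04
```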
As used herein, the source symbols that are operands for a particular encoding symbol are referred to as that encoding symbol's "neighbors" and the set of all encoding symbols that depend on a given
source symbol are referred to as that source symbol's neighborhood.
When the operation is an XOR, a source symbol that is a neighbor of an encoding symbol can be recovered from that encoding symbol if all of the other neighbor source symbols of that encoding symbol are
available, simply by XORing the encoding symbol and the other neighbors. This may make it possible to decode other source symbols in turn. Other operations might have like functionality.
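The XOR recovery step can be sketched directly, again assuming byte-string symbols:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Encoding symbol e has neighbors s0, s1, s2; suppose s1 was erased.
s0, s1, s2 = b"\x10", b"\x22", b"\x35"
e = xor(xor(s0, s1), s2)

# XORing e with all the other (known) neighbors yields the missing one.
recovered = xor(xor(e, s0), s2)
assert recovered == s1
```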
With the neighbor relationships known, a graph of source symbols and encoding symbols would exist to represent the encoding relationships.
Details of Elastic Codes
Elastic codes have many advantages over block codes, convolutional codes and network codes, and easily allow for what is coded to change based on feedback received during encoding. Block
codes are limited due to the requirement that they code over an entire block of data, even though it may be advantageous to code over different parts of the data as the encoding proceeds, based on
known error-conditions of the channel and/or feedback, taking into consideration that in many applications it is useful to recover the data in prefix order before all of the data can be recovered due
to timing constraints, e.g., when streaming data.
Convolutional codes provide some protection to a stream of data by adding repair symbols to the stream in a predetermined patterned way, e.g., adding repair symbols to the stream at a predetermined
rate based on a predetermined pattern. Convolutional codes do not allow for arbitrary source block structures, nor do they provide the flexibility to generate varying amounts of encoding symbols from
different portions of the source data, and they are limited in many other ways as well, including recovery properties and the efficiency of encoding and decoding.
Network codes provide protection to data that is transmitted through a variety of intermediate receivers, and each such intermediate receiver then encodes and transmits additional encoding data based
on what it received. Network codes do not provide the flexibility to determine source block structures, nor are there known efficient encoding and decoding procedures that are better than brute
force, and network codes are limited in many other ways as well.
Elastic codes provide a suitable level of data protection while at the same time allowing for a real-time streaming experience, i.e., the coding introduced to protect against error conditions adds as little latency to the process as possible given the current error conditions.
As explained, an elastic code is a code in which each encoding symbol may depend on an arbitrary subset of the source symbols. One type of general elastic code is the elastic chord code, in
which the source symbols are arranged in a sequence and each encoding symbol is generated from a set of consecutive source symbols. Elastic chord codes are explained in more detail below.
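The elastic chord idea can be sketched with plain XOR standing in for the GF(256) sum defined later in this section; the window is the l consecutive source symbols ending at index e (the parameter names follow the document's Equation 1):

```python
def chord_repair(source, e, l):
    """Repair symbol over the l consecutive source symbols ending at
    index e (XOR stands in for the GF(256) sum of Equation 1)."""
    out = source[e - l + 1]
    for j in range(e - l + 2, e + 1):
        out = bytes(a ^ b for a, b in zip(out, source[j]))
    return out

src = [bytes([j]) * 2 for j in range(8)]
r = chord_repair(src, e=5, l=3)        # covers consecutive positions 3, 4, 5
assert r == bytes([3 ^ 4 ^ 5]) * 2
```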
Other embodiments of elastic codes are elastic codes that are also linear codes, i.e., in which each encoding symbol is a linear sum of the source symbols on which it depends. A GF(q) linear code
is a linear code in which the coefficients of the source symbols in the construction of any encoding symbol are members of the finite field GF(q).
Encoders and decoders and communications systems that use the elastic codes as described herein provide a good balance of minimizing latency and bandwidth overhead.
Elastic Code Uses for Multi-Priority Coding
Elastic codes are also useful in communications systems that need to deliver objects comprising multiple parts, where those parts may have different priorities of delivery and the priorities are
determined either statically or dynamically.
An example of static priority would be data that is partitioned into different parts to be delivered in a priority that depends on the parts, wherein different parts may be logically related or
dependent on one another, in either time or some other causality dimension. In this case, the protocol might have no feedback from receiver to sender, i.e., be open-loop.
An example of dynamic priority would be a protocol that is delivering two-dimensional map information to an end user dynamically, in parts, as the end user's focus on different parts of the map changes
dynamically and unpredictably. In this case, the priority of the different parts of the map to be delivered is not known a priori and only becomes known from feedback during
the course of the protocol, e.g., in reaction to changing network conditions, receiver input or interest, or other inputs. For example, an end user may change their interest in terms of which next
portion of the map to view based on information in their current map view and their personal inclinations and/or objectives. The map data may be partitioned into quadrants, and within each quadrant
to different levels of refinement, and thus there might be a base block for each level of each quadrant, and source blocks might comprise unions of one or more base blocks, e.g., some source blocks
might comprise unions of the base blocks associated with different levels of refinement within one quadrant, whereas other source blocks might comprise unions of base blocks associated with adjacent
quadrants of one refinement level. This is an example of a closed-loop protocol.
Encoders Using Elastic Erasure Coding
Encoders described herein use a novel coding that allows encoding over arbitrary subsets of data. For example, one repair symbol can encode over one set of data symbols while a second repair symbol
can encode over a second set of data symbols, in such a way that the two repair symbols together can recover from the loss of two source symbols in the intersection of their scopes, and each repair symbol
alone can recover from the loss of one data symbol that is in its scope but not in the scope of the other repair symbol. One advantage of elastic codes is that they can provide an
elastic trade-off between recovery capabilities and end-to-end latency. Another advantage of such codes is that they can be used to protect data of different priorities in such a way that the
protection provided solely for the highest priority data can be combined with the protection provided for the entire data to recover the entire data, even in the case when the repair provided for the
highest priority data alone is not sufficient for recovery of the highest priority data.
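The second case described above, where each repair symbol recovers a loss outside the other's scope, can be sketched with plain XOR; recovering two losses both inside the intersection would additionally require linearly independent coefficients, e.g., over GF(256):

```python
def xor_all(syms):
    out = syms[0]
    for s in syms[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

s = [bytes([v]) for v in (9, 14, 3, 5, 12, 7)]   # six 1-byte source symbols
r1 = xor_all(s[0:4])   # repair symbol with scope {s0..s3}
r2 = xor_all(s[2:6])   # repair symbol with scope {s2..s5}; overlap {s2, s3}

# Erase s1 (in r1's scope only) and s4 (in r2's scope only): each repair
# symbol recovers its own loss from the surviving neighbors.
rec_s1 = xor_all([r1, s[0], s[2], s[3]])
rec_s4 = xor_all([r2, s[2], s[3], s[5]])
assert (rec_s1, rec_s4) == (s[1], s[4])
```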
These codes are useful in complete protocol designs in cases where there is no feedback and in cases where there is feedback within the protocol. In the case where there is feedback in the protocol,
the codes can be dynamically changed based on the feedback to provide the best combination of provided protection and added latency due to the coding.
Block codes can be considered a degenerate case of elastic codes, having single source scopes: each source symbol belongs to only one source block. With elastic codes, source scope
determination can be completely flexible: source symbols can belong to multiple source scopes, and source scopes can be determined on the fly, in other than a pre-defined regular pattern, determined by the
underlying structure of the source data, by transport conditions, or by other factors.
FIG. 4 illustrates an example, wherein the lower row of boxes represents source symbols and the bracing above the symbols indicates the envelope of the source blocks. In this example, there are three
source blocks and thus there would be three encoded blocks, one encoding for each of the source blocks. In this example, if source blocks are formed from base blocks, there could be five base
blocks, with the base block demarcations indicated with arrows.
In general, encoders and decoders that use elastic codes would operate where each of the source symbols is within one base block but can be in more than one source block, or source scope, with some
of the source blocks being overlapping and at least in some cases not entirely subsets of other source blocks, i.e., there are at least two source blocks that have some source symbols in common but
also each have some source symbols present in one of the source blocks but not in the other. The source block is the unit from which repair symbols are generated, i.e., the scope of the repair
symbols, such that repair symbols for one source block can be independent of source symbols not in that source block, thereby allowing the decoding of source symbols of a source block using encoded,
received, and/or repair symbols of that source block without requiring a decoder to have access to encoded, received, or repair symbols of another source block.
The pattern of scopes of source blocks can be arbitrary, and/or can depend on the needs or requests of a destination decoder. In some implementations, source scope can be determined on-the-fly,
determined by underlying structure of source data, determined by transport conditions, and/or determined by other factors. The number of repair symbols that can be generated from a given source block
can be the same for each source block, or can vary. The number of repair symbols generated from a given source block may be fixed based on a code rate or may be independent of the source block, as in
the case of chain reaction codes.
In the case of traditional block codes or chain reaction codes, repair symbols that are used by the decoder in combination with each other to recover source symbols are typically generated from a
single source block, whereas with the elastic codes described herein, repair symbols can be generated from arbitrary parts of the source data, and from overlapping parts of the source data, and the
mapping of source symbols to source blocks can be flexible.
Selected Design Considerations
Efficient encoding and decoding is a primary concern in the design of elastic codes. For example, ideal efficiency might be found in an elastic code that can decode using a number of symbol operations
that is linear in the number of recovered source symbols; thus, any decoder that uses substantially fewer symbol operations for recovery than brute-force methods is preferable, where a
brute-force method typically requires a number of symbol operations that is quadratic in the number of recovered source symbols.
Decoding with minimal reception overhead is also a goal, where "reception overhead" can be represented as the number of encoding symbols, beyond the number of source symbols being recovered, that a decoder needs to
achieve the previously described ideal recovery properties. Furthermore, guaranteed recovery, high probability recovery, or, in general, high reliability recovery
is preferable. In other words, in some applications, the goal need not be complete recovery.
Elastic codes are useful in a number of environments. For example, with layered coding, a first set of repair symbols is provided to protect a block of higher priority data, while a second set of
repair symbols protects the combination of the higher priority data block and a block of lower priority data, requiring fewer symbols at decoding than if the higher priority data block and the lower
priority data block were each encoded separately. Some known codes provide for layered coding, but often at the cost of failing to achieve efficient decoding of unions of
overlapping source blocks and/or failing to achieve high reliability recovery.
The elastic window-based codes described below can achieve efficient and high reliability decoding of unions of overlapping source blocks at the same time and can also do so in the case of layered coding.
Combination with Network Coding [0098]
In another environment, network coding is used, where an origin node sends an encoding of source data to intermediate nodes that may experience different loss patterns, and the intermediate nodes send
encoding data generated from the portion of the encoding data that they received on to destination nodes. The destination nodes can then recover the original source data by decoding the encoding
data received from multiple intermediate nodes. Elastic codes can be used within a network coding protocol, wherein the resulting solution provides efficient and high reliability recovery of the
original source data.
Simple Construction of Elastic Chord Codes [0099]
For the purposes of explanation, assume an encoder generates a set of repair symbols as follows, which provides a simple construction of elastic chord codes. This simple construction can be extended
to provide elastic codes that are not necessarily elastic chord codes, in which case the identification of a repair symbol and its neighborhood set or scope is an extension of the identification
described here. Generate an m×n matrix, A, with elements in GF(256). Denote the element in the i-th row and j-th column by A
and the source symbols by S
for j=0, 1, 2, . . . . Then, for any tuple (e, l, i), where e, l and i are integers, e≧l>0 and 0≦i<m and a repair symbol R.sub.e,l,i has a value as set out in Equation 1.
R e
, l , i = j = e - l + 1 j = e A ij S jmodn ( Eqn . 1 ) ##EQU00001##
Note that for R_{e,l,i} to be well-defined, a notion of multiplication of a symbol by an element of GF(256) and a notion of summation of symbols should be specified. In the examples herein, elements
of GF(256) are represented as octets and each symbol, which can be a sequence of octets, is thought of as a sequence of elements of GF(256). Multiplication of a symbol by a field element entails
multiplication of each element of the symbol by that field element. Summation of symbols is simply the symbol formed from the concatenation of the sums of the corresponding field elements in the
symbols to be summed.
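The symbol operations just described might be sketched as follows. This is an illustrative sketch, not the patent's implementation; in particular, the irreducible polynomial 0x11B (the AES polynomial) is an assumption, since the text does not fix a representation of GF(256), and the function names are invented here.

```python
def gf_mul(a, b):
    """'Russian peasant' multiplication in GF(256).
    Assumes the irreducible polynomial x^8+x^4+x^3+x+1 (0x11B),
    one common choice; the text does not specify a polynomial."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B  # reduce modulo 0x11B
    return p

def symbol_sum(s1, s2):
    """Summation of symbols: XOR of corresponding octets."""
    return bytes(x ^ y for x, y in zip(s1, s2))

def symbol_scale(alpha, s):
    """Multiplication of a symbol by field element alpha:
    every octet of the symbol is multiplied by alpha."""
    return bytes(gf_mul(alpha, x) for x in s)
```

For instance, gf_mul(0x57, 0x83) evaluates to 0xC1 (the worked example in the AES specification), and summation of symbols is a plain per-octet XOR.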
The set of source symbols that appear in Equation 1 for a given repair symbol is known as the "scope" of the repair symbol, whereas the set of repair symbols in whose Equation 1 expression a given
source symbol appears is referred to as the "neighborhood" of that source symbol. Thus, in this construction, the neighborhood set of a repair symbol is the same as the
scope of the repair symbol.
The encoding symbols of the code then comprise the source symbols plus repair symbols, as defined herein, i.e., the constructed code is systematic.
Consider two alternative constructions for the matrix A, corresponding to two different elastic codes. For a "Random Chord Code", the elements of A are chosen pseudo-randomly from the nonzero
elements of GF(256). It should be understood herein throughout, unless otherwise indicated, where something is described as being chosen randomly, it should be assumed that pseudo-random selection is
included in that description and, more generally, that random operations can be performed pseudo-randomly. For a "Cauchy Chord Code", the elements of A are defined as shown in Equation 2, where k=
255-m, and g(x) is the finite field element whose octet representation is x.
A_{i,j} = (g(j mod k) ⊕ g(255-i))^{-1}  (Eqn. 2)
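Repair symbol generation per Equation 1 might then be sketched as below, using the Random Chord Code variant for the matrix A. This is a hypothetical sketch: the polynomial 0x11B for GF(256), the function names, and the use of Python's `random` module are all assumptions not taken from the text.

```python
import random

def gf_mul(a, b):
    # GF(256) multiply, assuming the polynomial 0x11B (a common choice)
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
    return p

def random_chord_matrix(m, n, seed=0):
    """m x n matrix with entries pseudo-randomly chosen from the
    nonzero elements of GF(256) ('Random Chord Code')."""
    rng = random.Random(seed)
    return [[rng.randrange(1, 256) for _ in range(n)] for _ in range(m)]

def repair_symbol(A, S, e, l, i):
    """R_{e,l,i} = sum over j = e-l+1 .. e of A[i][j mod n] * S[j]
    (Eqn. 1); the scope is the l source symbols S[e-l+1] .. S[e]."""
    n = len(A[0])
    r = bytes(len(S[0]))
    for j in range(e - l + 1, e + 1):
        coeff = A[i][j % n]
        r = bytes(x ^ gf_mul(coeff, y) for x, y in zip(r, S[j]))
    return r
```

As a sanity check, if every coefficient in a row of A equals 1, the repair symbol reduces to the XOR of the source symbols in its scope.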
Decoding Symbols from an Encoding Using a Simple Construction of Elastic Chord Codes
As well as the encoding symbols themselves, the decoder has access to identifying information for each symbol, which can just be an index, i.e., for a source symbol S_j, the identifying information
is the index j. For a repair symbol R_{e,l,i}, the identifying information is the triple (e, l, i). Of course, the decoder also has access to the matrix A.
For each received repair symbol, a decoder determines the identifying information and calculates a value for that repair symbol from Equation 1 using source symbol values if known and the zero symbol
if the source symbol value is unknown. When the value so calculated is added to the received repair symbol, assuming the repair symbol was received correctly, the result is a sum over the remaining
unknown source symbols in the scope or neighborhood of the repair symbol.
For simplicity, this description has a decoder programmed to attempt to recover all unknown source symbols that are in the scope of at least one received repair symbol. Upon reading this disclosure,
it should be apparent how to modify the decoder to recover less than all, or all with a high probability but less than certainty, or a combination thereof.
In this example, let t be the number of unknown source symbols that are in the union of the scopes of received repair symbols and let j_0, j_1, . . . , j_{t-1} be the indices of these unknown source
symbols. Let u be the number of received repair symbols and denote the received repair symbols (arbitrarily) as R_0, . . . , R_{u-1}.
Construct the u×t matrix E with entries E_{a,b}, where E_{a,b} is the coefficient of source symbol S_{j_b} in Equation 1 for repair symbol R_a, or zero if S_{j_b} does not appear in the equation.
Then, if S=(S_{j_0}, . . . , S_{j_{t-1}}) is the vector of the missing source symbols and R=(R_0, . . . , R_{u-1}) is the vector of the received repair symbols after applying step 1, the expression
in Equation 3 will be satisfied.

R = E·S  (Eqn. 3)
If E does not have rank u, then there exists a row of E that can be removed without changing the rank of E. Remove this row, decrement u by one and renumber the remaining repair symbols so that Equation
3 still holds. Repeat this step until E has rank u.
If u=t, then complete decoding is possible: E is square, of full rank and therefore invertible. Since E is invertible, S can be found as S = E^{-1}·R, and decoding is complete. If u<t, then complete
decoding is not possible without reception of additional source and/or repair symbols of this subset of the source symbols, or without other information about the source symbols from some other
avenue.
If u<t, then let E' be a u×u sub-matrix of E of full rank. With a suitable column permutation, E can be written as (E' U), where U is a u×(t-u) matrix. Multiplying both sides of Equation 3 by
E'^{-1}, the expression in Equation 4 can be obtained, which provides a solution for the source symbols corresponding to rows of E'^{-1}·R where E'^{-1}·U is zero.

E'^{-1}·R = (I | E'^{-1}·U)·S  (Eqn. 4)
Equation 4 allows simpler recovery of the remaining source symbols if further repair and/or source symbols are received.
Recovery of other portions of the source symbols might be possible even when recovery of all unknown source symbols that are in the scope of at least one received repair symbol is not possible. For
example, it may be the case that, although some unknown source symbols are in the scope of at least one received repair symbol, there are not enough repair symbols to recover the unknown source
symbols, or that some of the equations between the repair symbols and unknown source symbols are linearly dependent. In these cases, it may be possible to at least recover a smaller subset of the
source symbols, using only those repair symbols with scopes that are within the smaller subset of source symbols.
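When u=t, solving Equation 3 amounts to inverting E while applying the same row operations to the repair symbols. A minimal sketch of that solve step follows; the function names are illustrative, and GF(256) with polynomial 0x11B is an assumption (the text does not fix a representation).

```python
def gf_mul(a, b):
    # GF(256) multiply, assuming polynomial 0x11B
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
    return p

def gf_inv(a):
    # a^254 = a^{-1} in GF(256) for nonzero a (Fermat's little theorem)
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def solve(E, R):
    """Solve E*S = R (Eqn. 3) for square, full-rank E over GF(256).
    R is a list of symbols (bytes); returns the recovered symbols S.
    Every row operation on E is mirrored on the repair symbols."""
    u = len(E)
    E = [row[:] for row in E]
    R = list(R)
    for c in range(u):
        piv = next(r for r in range(c, u) if E[r][c] != 0)  # pivot row
        E[c], E[piv] = E[piv], E[c]
        R[c], R[piv] = R[piv], R[c]
        inv = gf_inv(E[c][c])
        E[c] = [gf_mul(inv, x) for x in E[c]]
        R[c] = bytes(gf_mul(inv, x) for x in R[c])
        for r in range(u):
            if r != c and E[r][c]:
                f = E[r][c]
                E[r] = [x ^ gf_mul(f, y) for x, y in zip(E[r], E[c])]
                R[r] = bytes(x ^ gf_mul(f, y) for x, y in zip(R[r], R[c]))
    return R
```

For example, with E = [[1,1],[1,2]] and repair symbols formed from source symbols 0x05 and 0x09, the solver returns those two source symbols.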
Stream Based Decoder Using Simple Construction of Elastic Chord Codes [0114]
In a "stream" mode of operation, the source symbols form a stream and repair symbols are generated over a suffix of the source symbols at the time the repair is generated. This stream based protocol
uses the simple construction of the elastic chord codes described above.
At the decoder, source and repair symbols arrive one by one, possibly with some reordering. As soon as a source or repair symbol arrives, the decoder can identify whether any lost source symbol
becomes decodable, then decode and deliver that source symbol to the decoder's output.
To achieve this, the decoder maintains a matrix (I | E'^{-1}·U) and updates it each time a new source or repair symbol is received according to the procedures below.
Let D denote the "decoding matrix", (I | E'^{-1}·U). Let D_{i,j} denote the element at position (i,j), let D_{*j} denote the j-th column of D and let D_{i*} denote the i-th row of D.
In the procedures described below, the decoder performs various operations on the decoding matrix. The equivalent operations are performed on the repair symbols to effect decoding. These could be
performed concurrently with the matrix operations, but in some implementations, these operations are delayed until actual source symbols are recovered in the RecoverSymbols procedure described below.
Upon receipt of a source symbol, if the source symbol is one of the missing source symbols, S_j, then the decoder removes the corresponding column of D. If the removed column was one of the first u
columns, then the decoder identifies the repair symbol associated with the row that has a
nonzero element in the removed column. The decoder then repeats the procedure described below for receipt of this repair symbol. If the removed column was not one of the first u columns, then the
decoder performs the RecoverSymbols procedure described below.
Upon receipt of a repair symbol, first the decoder adds a new column to D for each source symbol that is currently unknown, within the scope of the new repair symbol and not already associated with a
column of D. Next, the decoder adds a new row, D_{u*}, to D for the received repair symbol, populating this row with the coefficients from Equation 1.
For i from 0 to u-1 inclusive, the decoder replaces D_{u*} with (D_{u*} - D_{u,i}·D_{i*}). This step results in the first u elements of D_{u*} being eliminated (i.e., reduced to zero). If D_{u*} is
nonzero after this elimination step, then the decoder performs column exchanges (if necessary) so that D_{u,u} is nonzero and replaces D_{u*} with (D_{u,u}^{-1}·D_{u*}).
For i from u-1 to 0 inclusive, the decoder replaces D_{i*} with (D_{i*} - D_{i,u}·D_{u*}). This step results in the elements of column u being eliminated (i.e., reduced to zero) except for row u.
The matrix is now once again in the form (I | E'^{-1}·U) and the decoder can set u:=u+1.
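The row-addition and elimination steps for a received repair symbol might be sketched as follows. This is a simplified, hypothetical illustration that operates only on the coefficient matrix (the mirrored operations on the repair symbols themselves are omitted), and it assumes GF(256) with polynomial 0x11B.

```python
def gf_mul(a, b):
    # GF(256) multiply, assuming polynomial 0x11B
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
    return p

def gf_inv(a):
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def add_repair_row(D, new_row):
    """Add one repair-symbol row to decoding matrix D (a list of
    equal-length rows), restoring the form (I | E'^{-1} U).
    Returns False if the row eliminates to zero, i.e., the repair
    symbol is redundant and carries no new information."""
    u = len(D)
    row = list(new_row)
    for i in range(u):                     # forward elimination
        f = row[i]
        if f:
            row = [x ^ gf_mul(f, y) for x, y in zip(row, D[i])]
    piv = next((j for j in range(u, len(row)) if row[j]), None)
    if piv is None:
        return False                       # redundant repair symbol
    if piv != u:                           # column exchange if needed
        for r in D:
            r[u], r[piv] = r[piv], r[u]
        row[u], row[piv] = row[piv], row[u]
    inv = gf_inv(row[u])                   # normalize the pivot
    row = [gf_mul(inv, x) for x in row]
    for i in range(u):                     # back-eliminate column u
        f = D[i][u]
        if f:
            D[i] = [x ^ gf_mul(f, y) for x, y in zip(D[i], row)]
    D.append(row)
    return True
```

A row that eliminates to zero is exactly the "carries no new information" case discussed under Decoding Probability below, and the function reports it by returning False.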
To perform the RecoverSymbols procedure, the decoder considers each row of E'^{-1}·U that is zero, or all rows of D if E'^{-1}·U is empty. The source symbol whose column is nonzero in that row of D
can be recovered. Recovery is achieved by performing the stored sequence of operations upon the repair symbols. Specifically, whenever the decoder replaces row D_{i*} with (D_{i*} - β·D_{j*}), it
also replaces the corresponding repair symbol R_i with (R_i - β·R_j), and whenever row D_{i*} is replaced with (α·D_{i*}), it replaces repair symbol R_i with α·R_i.
Note that the order in which the operations are performed is important: it is the same as the order in which the matrix operations were performed.
Once the operations have been performed, then for each row of E'^{-1}·U that is zero, the corresponding repair symbol now has a value equal to that of the source symbol whose column is nonzero in
that row of D, and that symbol has therefore been recovered. This row and column can then be removed from D.
In some implementations, symbol operations are only performed when it has been identified that at least one symbol can be recovered. Symbol operations are performed for all rows of D but might not
result in recovery of all missing symbols. The decoder therefore tracks which repair symbols have been "processed" and which have not and takes care to keep the processed symbols up-to-date as
further matrix operations are performed.
A property of elastic codes, in this "stream" mode, is that dependencies may stretch indefinitely into the past and so the decoding matrix D may grow arbitrarily large. Practically, the
implementation should set a limit on the size of D. In practical applications, there is often a "deadline" for the delivery of any given source symbol--i.e., a time after which the symbol is of no
use to the protocol layer above or after which the layer above is told to proceed anyway without the lost symbol.
The maximum size of D may be set based on this constraint. However, it may be advantageous for the elastic code decoder to retain information that may be useful to recover a given source symbol even
if that symbol will never be delivered to the application. This is because the alternative is to discard all repair symbols with a dependency on the source symbol in question and it may be the case
that some of those repair symbols could be used to recover different source symbols whose deadline has not expired.
An alternative limit on the size of D is related to the total amount of information stored in the elastic code decoder. In some implementations, received source symbols are buffered in a circular
buffer and symbols that have been delivered are retained, as these may be needed to interpret subsequently received repair symbols (e.g., calculating values in Equation 1 above). When a source symbol
is finally discarded (due to the buffer being full) it is necessary to discard (or process) any (unprocessed) repair symbols whose scope includes that symbol. Given this fact, and a source buffer
size, perhaps the matrix D should be sized to accommodate the largest number of repair symbols expected to be received whose scopes are all within the source buffer.
An alternative implementation would be to construct the matrix D only when there was a possibility of successful decoding according to the ideal recovery property described above.
Computational Complexity [0132]
The computational complexity of the code described above is dominated by the symbol operations.
Addition of symbols can be the bitwise exclusive OR of the symbols. This can be achieved efficiently on some processors by use of wide registers (e.g., the SSE registers on CPUs following an x86
architecture), which can perform an XOR operation over 64 or 128 bits of data at a time. However, multiplication of symbols by a finite field element often must be performed byte-by-byte, as
processors generally do not provide native instructions for finite field operations and therefore lookup tables must be used, meaning that each byte multiplication requires several processor
instructions, including access to memory other than the data being processed.
At the encoder, Equation 1 above is used to calculate each repair symbol. This involves l symbol multiplications and l-1 symbol additions, where l is the number of source symbols in the scope of the
repair symbol. If each source symbol is protected by exactly r repair symbols, then the total complexity is O(rk) symbol operations, where k is the number of source symbols. Alternatively, if each
repair symbol has a scope or neighborhood set of l source symbols, then the computational complexity per generated repair symbol is O(l) symbol operations. As used herein, the expression O( ) should
be understood to be the conventional "on the order of" function.
At the decoder, there are two components to the complexity: the elimination of received source symbols and the recovery of lost source symbols. The first component is equivalent to the encoding
operation, i.e., O(rk) symbol operations. The second component corresponds to the symbol operations resulting from the inversion of the u×u matrix E, where u is the number of lost source symbols, and
thus has complexity O(u^2) symbol operations.
For low loss rates, u is small and therefore, if all repair symbols are used at the decoder, encoding and decoding complexity will be similar. However, since the major component of the complexity
scales with the number of repair symbols, if not all repair symbols are used, then complexity should decrease.
As noted above, in an implementation, processing of repair symbols is delayed until it is known that data can be recovered. This minimizes the symbol operations and so the computational requirements
of the code. However, it results in bursts of decoding activity.
An alternative implementation can smooth out the computational load by performing the elimination operations for received source symbols (using Equation 1) as symbols arrive. This results in
performing elimination operations for all the repair symbols, even if they are not all used, which results in higher (but more stable) computational complexity. For this to be possible, the decoder
must have information in advance about which repair symbols will be generated, which may not be possible in all applications.
Decoding Probability [0139]
Ideally, every repair symbol is either clearly redundant because all the source symbols in its scope are already recovered or received before it is received, or is useful for recovering a lost source
symbol. How frequently this is true depends on the construction of the code.
Deviation from this ideal might be detected in the decoder logic when a new received repair symbol results in a zero row being added to D after the elimination steps. Such a symbol carries no new
information to the decoder and thus is discarded to avoid unnecessary processing.
In the case of the random GF(256) code implementation, this may be the case for roughly 1 repair symbol in 256, based on the fact that when a new random row is added to a full-rank u×(u+1) matrix
over GF(256), the probability that the resulting (u+1)×(u+1) matrix does not have full rank is 1/256.
In the case of the Cauchy code implementation, when used as a block code and where the total number of source and repair symbols is less than 256, the failure probability is zero. Such a code is
equivalent to a Reed-Solomon code.
Block Mode Results [0143]
In tests of elastic chord codes used as a block code (i.e., generating a number of repair symbols all with scope equal to the full set of k source symbols), for fixed block size (k=256) and repair
amount (r=8), encode speed and decode speed are about the same for symbol sizes above about 200 bytes, but below that, speed drops. This is likely because for symbols smaller than about 200 bytes
(or some other threshold depending on conditions), the overhead of the logic required to determine the symbol operations is substantial compared to the symbol operations themselves, whereas for
larger symbol sizes the symbol operations themselves dominate.
In other tests, encoding and decoding speed as a function of the repair overhead (r/k) for fixed block and symbol size showed that encoding and decoding complexity is proportional to the number
of repair symbols (and so speed is proportional to 1/r).
Stream Mode Results [0145]
When the loss rate is much less than the overhead, the average latency is low but it increases quickly as the loss rate approaches the code overhead. This is what one would expect because when the
loss rate is much less than the overhead, then most losses can be recovered using a single repair symbol. As the loss rate increases, we more often encounter cases where multiple losses occur within
the scope of a single repair symbol and this requires more repair symbols to be used.
Another fine-tuning that might occur is to consider the effect of varying the span of the repair symbols (the span is how many source symbols are in the scope or neighborhood set of the repair
symbol), which was 256 in the examples above. Reducing the span, for a fixed overhead, reduces the number of repair symbols that protect each source symbol and so one would expect this to increase
the residual error rate. However, reducing the span also reduces the computational complexity at both encoder and decoder.
Window-Based Code that is a Fountain Block Code
In many encoders and decoders, the amount of computing power and time allotted to encoding and decoding is limited. For example, where the decoder is in a battery-powered handheld device, decoding
should be efficient and not require excessive computing power. One measure of the computing power needed for encoding and decoding operations is the number of symbol operations (adding two symbols,
multiplying, XORing, copying, etc.) that are needed to decode a particular set of symbols. A code should be designed with this in mind. While the exact number of operations might not be known in
advance, since it might vary based on which encoding symbols are received and how many encoding symbols are received, it is often possible to determine an average case or a worst case and configure
designs accordingly.
This section describes a new type of fountain block code, herein called a "window-based code," that is the basis of some of the elastic codes described further below that exhibit some aspects of
efficient encoding and decoding. The window-based code as first described is a non-systematic code, but as described further below, there are methods for transforming this into a systematic code that
will be apparent upon reading this disclosure. In this case, the scope of each encoding symbol is the entire block of K source symbols, but the neighborhood set of each encoding symbol is much
sparser, consisting of B<<K neighbors, and the neighborhood sets of different encoding symbols are typically quite different.
Consider a block of K source symbols. The encoder works as follows. First, the encoder pads (logically or actually) the block with B zero symbols on each side to form an extended block of K+2B
symbols, X_0, X_1, . . . , X_{K+2B-1}, i.e., the first B symbols and the last B symbols are zero symbols, and the middle K symbols are the source symbols. To generate an encoding symbol, the encoder
randomly selects a start position, t, between 1 and K+B-1 and chooses values α_0, . . . , α_{B-1} randomly or pseudo-randomly from a suitable finite field (e.g., GF(2) or GF(256)). The encoding
symbol value, ESV, is then calculated by the encoder using the formula of Equation 5, in which case the neighborhood set of the generated encoding symbol is selected among the symbols in positions t
through t+B-1 in the extended block.

ESV = Σ_{j=0}^{B-1} α_j·X_{t+j}  (Eqn. 5)
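The padding and Equation 5 might be sketched as below. The names are illustrative; choosing the coefficients nonzero and using GF(256) with polynomial 0x11B are assumptions for the sketch, since the text allows any suitable finite field.

```python
import random

def gf_mul(a, b):
    # GF(256) multiply, assuming polynomial 0x11B
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
    return p

def window_encode(source, B, rng):
    """One encoding symbol of the window-based code (Eqn. 5):
    pad the K source symbols with B zero symbols on each side,
    pick a start position t in 1..K+B-1, and combine the B symbols
    at positions t..t+B-1 with random coefficients."""
    K = len(source)
    zero = bytes(len(source[0]))
    extended = [zero] * B + list(source) + [zero] * B
    t = rng.randrange(1, K + B)
    alphas = [rng.randrange(1, 256) for _ in range(B)]
    esv = zero
    for j, a in enumerate(alphas):
        esv = bytes(x ^ gf_mul(a, y) for x, y in zip(esv, extended[t + j]))
    return t, alphas, esv
```

The pair (t, alphas) plays the role of the encoding symbol's identifying information: it determines the neighborhood set (positions t through t+B-1 of the extended block) and the coefficients.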
The decoder, upon receiving at least K encoding symbols, uses a to-and-fro sweep across the positions of the source symbols in the extended block to decode. The first sweep is from the source symbol
in the first position to the source symbol in the last position of the block, matching that source symbol, s, with an encoding symbol, e, that can recover it, and eliminating dependencies on s of
encoding symbols that can be used to recover source symbols in later positions, and adjusting the contribution of s to e to be simply s. The second sweep is from the source symbol in the last
position to the source symbol in the first position of the block, eliminating dependencies on that source symbol s of encoding symbols used to recover source symbols in earlier positions. After a
successful to-and-fro sweep, the recovered value of each source symbol is the value of the encoding symbol to which it is matched.
For the first sweep process, the decoder obtains the set, E, of all received encoding symbols. For each source symbol, s, in position i=B, . . . , B+K-1 within the extended block, the decoder selects
the encoding symbol e that has the earliest neighbor end position among all encoding symbols in E that have s in their neighbor set and then matches e to s and deletes e from E. This selection is
amongst those encoding symbols e for which the contribution of s to e in the current set of linear equations is non-zero, i.e., s contributes βs to e, where β≠0. If there is no encoding symbol e to
which the contribution of s is non-zero, then decoding fails, as s cannot be decoded. Once source symbol s is matched with an encoding symbol e, encoding symbol e is removed from the set E, Gaussian
elimination is used to eliminate the contribution of s to all encoding symbols in E, and the contribution of s to e is adjusted to be simply s by multiplying e by the inverse of the coefficient of
the contribution of s to e.
The second sweep process of the decoder works as follows. For each source symbol, s, in source position i=K-1, . . . , 0, Gaussian elimination is used to eliminate the contribution of s to all
encoding symbols in E matched to source symbols in positions previous to i.
The decoding succeeds in fully recovering all the source symbols if and only if the system of linear equations defined by the received encoding symbols is of rank K, i.e., if the received encoding
symbols have rank K, then the above decoding process is guaranteed to recover the K source symbols of the block.
The number of symbol operations per generated encoding symbol is B.
The reach of an encoding symbol is defined to be the set of positions within the extended block between the first position that is a neighbor of the encoding symbol and the last position that is a
neighbor of the encoding symbol. In the above construction, the size of the reach of each encoding symbol is B. The number of decoding symbol operations is bounded by the sum of sizes of the reaches
of the encoding symbols used for decoding. This is because, by the way the matching process described above is designed, an encoding symbol reach is never extended during the decoding process and
each decoding symbol operation decreases the sum of the sizes of the encoding symbol reaches by one. This implies that the number of symbol operations for decoding the K source symbols is O(KB).
There is a trade-off between the computational complexity of the window-based code and its recovery properties. It can be shown by a simple analysis that if B=O(K^{1/2}) and if the finite field size
is chosen to be large enough, e.g., O(K), then all K source symbols of the block can be recovered with high probability from K received encoding symbols, and the failure probability decreases
rapidly as a function of each additionally received encoding symbol. The recovery properties of the window-based code are similar to those of a random GF(2) code or random GF(256) code when GF(2) or
GF(256), respectively, is used as the finite field and B=O(K^{1/2}).
A similar analysis can be used to show that if B=O(ln(K/δ)/ε) then all K source symbols of the block can be recovered with probability at least 1-δ after K(1+ε) encoding symbols have been received.
There are many variations of the window-based codes described herein, as one skilled in the art will recognize. As one example, instead of creating an extended block of K+2B symbols, one can
generate encoding symbols directly from the K source symbols, in which case t is chosen randomly between 0 and K-1 for each encoding symbol, and the encoding symbol value is computed as shown in
Equation 6. One way to decode this modified window-based block code is to use a decoding procedure similar to that described above, except that at the beginning a consecutive set of B of the K
source symbols is "inactivated", the decoding proceeds as described previously assuming that these B inactivated source symbol values are known, a B×B system of equations between encoding symbols
and the B inactivated source symbols is formed and solved, and then, based on this and the results of the to-and-fro sweep, the remaining K-B source symbols are solved. Details of how this can work
are described in Shokrollahi-Inactivation.

ESV = Σ_{j=0}^{B-1} α_j·X_{(t+j) mod K}  (Eqn. 6)
Systematic Window-Based Block Code
The window-based codes described above are non-systematic codes. Systematic window-based codes can be constructed from these non-systematic window-based codes, wherein the efficiency and recovery
properties of the so-constructed systematic codes are very similar to those of the non-systematic code from which they are constructed.
In a typical implementation, the K source symbols are placed at the positions of the first K encoding symbols generated by the non-systematic code, decoded to obtain an extended block, and then
repair symbols are generated for the systematic code from the decoded extended block. Details of how this can work are described in Shokrollahi-Systematic. A simple and preferred such systematic code
construction for this window-based block code is described below.
For the non-systematic window-based code described above that is a fountain block code, a preferred way to generate the first K encoding symbols in order to construct a systematic code is the
following. Let B'=B/2 (assume without loss of generality that B is even). Instead of choosing the start position t between 1 and K+B-1, choose t=B', B'+1, . . . , B'+K-1 for the first K encoding
symbols. The generation of the first K encoding symbols is otherwise exactly as described above, with the possible exception, if it is not already the case, that the coefficient α_{B'} is chosen to
be a non-zero element of the finite field (making this coefficient non-zero ensures that the decoding process can recover the source symbol corresponding to this coefficient from this encoding
symbol). By the way that these encoding symbols are constructed, it is always possible to recover the K source symbols of the block from these first K encoding symbols.
The systematic code encoding construction is the following. Place the values of the K source symbols at the positions of the first K encoding symbols generated according to the process described in
the previous paragraph of the non-systematic window-based code, use the to-and-fro decoding process of the non-systematic window-based code to decode the K source symbols of the extended block, and
then generate any additional repair symbols using the non-systematic window-based code applied to the extended block that contains the decoded source symbols that result from the to-and-fro
decoding process.
The mapping of source symbols to encoding symbols should use a random permutation of the K positions to ensure that losses of bursts of consecutive source symbols (and other patterns of loss) do not affect the
recoverability of the extended block from any portion of encoding symbols, i.e., any pattern and mix of reception of source and repair symbols.
The systematic decoding process is the mirror image of the systematic encoding process. Received encoding symbols are used to recover the extended block using the to-and-fro decoding process of the
non-systematic window-based code, and then the non-systematic window-based encoder is applied to the extended block to encode any missing source symbols, i.e., any of the first K encoding symbols
that are missing.
One advantage of this approach to systematic encoding and decoding, wherein decoding occurs at the encoder and encoding occurs at the decoder, is that the systematic symbols and the repair symbols
can be created using a process that is consistent across both. In fact, the portion of the encoder that generates the encoding symbols need not even be aware that K of the encoding symbols will
happen to exactly match the original K source symbols.
Window-Based Code that is a Fountain Elastic Code
The window-based fountain block code can be used as the basis for constructing a fountain elastic code that is both efficient and has good recovery properties. To simplify the description of the
construction, we describe the construction when there are multiple base blocks X_1, . . . , X_L of equal size, i.e., each of the L base blocks comprises K source symbols. Those skilled in the art
will recognize that these constructions and methods can be extended to the case when the base blocks are not all the same size.
As described previously, a source block may comprise the union of any non-empty subset of the L base blocks. For example, one source block may comprise the first base block and a second source block
may comprise the first and second base blocks and a third source block may comprise the second and third base blocks. In some cases, some or all of the base blocks have different sizes and some or
all of the source blocks have different sizes.
The encoder works as follows. First, for each base block X_i, the encoder pads (logically or actually) the block with B zero symbols on each side to form an extended block of K+2B symbols X_{i,0}, X_{i,1}, . . . , X_{i,K+2B-1}, where the first B symbols and the last B symbols are zero symbols, and the middle K symbols are the source symbols of base block X_i.
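As a small illustration, the padding step might be sketched as follows; symbols are modeled here as integers, and the function name is illustrative rather than from the text:

```python
def extend_block(source_symbols, B):
    """Pad a base block with B zero symbols on each side, forming an
    extended block of K + 2B symbols (illustrative sketch)."""
    return [0] * B + list(source_symbols) + [0] * B

extended = extend_block([5, 7, 9], B=2)
# length is K + 2B = 3 + 4 = 7; the middle K symbols are the source symbols
```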
The encoder generates an encoding symbol for source block S as follows, where S comprises L' base blocks, and without loss of generality assume that these are the base blocks X_1, . . . , X_{L'}. The encoder randomly selects a start position, t, between 1 and K+B-1 and, for all i=1, . . . , L', chooses values α_{i,0}, . . . , α_{i,B-1} randomly from a suitable finite field (e.g., GF(2) or GF(256)). For each i=1, . . . , L', the encoder generates an encoding symbol value ESV_i based on the same starting position t, i.e., as shown in Equation 7.

ESV_i = α_{i,0}·X_{i,t} + α_{i,1}·X_{i,t+1} + . . . + α_{i,B-1}·X_{i,t+B-1} (Eqn. 7)
Then, the generated encoding symbol value ESV for the source block is simply the symbol finite field sum over i=1, . . . , L' of ESV_i, i.e., as shown in Equation 8.

ESV = ESV_1 + ESV_2 + . . . + ESV_{L'} (Eqn. 8)
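A sketch of the encoding-symbol generation of Equations 7 and 8 follows, assuming GF(2) (so field addition is XOR) and a window of B coefficients per basic block; the exact coefficient count is an assumption here, and all names are illustrative:

```python
import random

def encode_symbol(extended_blocks, B, K, seed=None):
    """One encoding symbol over GF(2) in the style of Eqns. 7 and 8.

    extended_blocks: L' extended blocks, each of length K + 2B.
    Assumes B coefficients per block (an assumption, not fixed by the
    text). GF(2) sum is XOR; a coefficient of 1 includes the symbol.
    """
    rng = random.Random(seed)
    t = rng.randrange(1, K + B)          # start position t in 1..K+B-1
    esv = 0
    for block in extended_blocks:
        esv_i = 0                        # Eqn. 7 for basic block i
        for j in range(B):
            alpha = rng.randrange(2)     # random coefficient in GF(2)
            if alpha:
                esv_i ^= block[t + j]
        esv ^= esv_i                     # Eqn. 8: sum of ESV_i over i
    return t, esv
```

Because every window index t+j is at most K+2B-2, the window always stays inside the extended block, which is the point of the zero padding.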
Suppose the decoder is used to decode a subset of the base blocks, and without loss of generality assume that these are the base blocks X_1, . . . , X_{L'}. To recover the source symbols in these L' base blocks, the decoder can use any received encoding symbol generated from source blocks that are comprised of a union of a subset of X_1, . . . , X_{L'}. To facilitate efficient decoding, the decoder arranges a decoding matrix, wherein the rows of the matrix correspond to received encoding symbols that can be used for decoding, and wherein the columns of the matrix correspond to the extended blocks for base blocks X_1, . . . , X_{L'} arranged in the interleaved order:

X_{1,0}, X_{2,0}, . . . , X_{L',0}, X_{1,1}, X_{2,1}, . . . , X_{L',1}, . . . , X_{1,K+2B-1}, X_{2,K+2B-1}, . . . , X_{L',K+2B-1}.
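The interleaved column ordering can be sketched as follows, under the assumption that blocks are numbered 1 to L' and extended-block positions run from 0 to K+2B-1:

```python
def interleaved_columns(L_prime, K, B):
    """Decoding-matrix column order: (block i, position j) pairs with
    the L' blocks interleaved at each extended-block position j."""
    return [(i, j) for j in range(K + 2 * B) for i in range(1, L_prime + 1)]

cols = interleaved_columns(L_prime=3, K=2, B=1)
# X_{1,0}, X_{2,0}, X_{3,0}, X_{1,1}, X_{2,1}, X_{3,1}, ...
```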
Similar to the previously described to-and-fro decoder for a fountain block code, the decoder uses a to-and-fro sweep across the column positions in the above described matrix to decode. The first
sweep is from the smallest column position to the largest column position of the matrix, matching the source symbol s that corresponds to that column position with an encoding symbol e that can
recover it, and eliminating dependencies on s of encoding symbols that can be used to recover source symbols that correspond to later column positions, and adjusting the contribution of s to e to be
simply s. The second sweep is from the largest column position to the smallest column position of the matrix, eliminating dependencies on the source symbol s that corresponds to each column position from encoding symbols used to recover source symbols in earlier positions. After a successful
to-and-fro sweep, the recovered value of each source symbol is the value of the encoding symbol to which it is matched.
For the first sweep process, the decoder obtains the set, E, of all received encoding symbols that can be useful for decoding base blocks X_1, . . . , X_{L'}. For each position i=L'B, . . . , L'(B+K)-1 that corresponds to source symbol s of one of the L' basic blocks, the decoder selects the encoding symbol e that has the earliest neighbor end position
among all encoding symbols in E that have s in their neighbor set and then matches e to s and deletes e from E. This selection is amongst those encoding symbols e for which the contribution of s to e
in the current set of linear equations is non-zero, i.e., s contributes βs to e, where β≠0. If there is no encoding symbol e to which the contribution of s is non-zero then decoding fails, as s
cannot be decoded. Once source symbol s is matched with an encoding symbol e, encoding symbol e is removed from the set E, Gaussian elimination is used to eliminate the contribution of s to all
encoding symbols in E, and the contribution of s to e is adjusted to be simply s by multiplying e by the inverse of the coefficient of the contribution of s to e.
The second sweep process of the decoder works as follows. For each position i=L'(B+K)-1, . . . , L'B that corresponds to source symbol s of one of the L' basic blocks, Gaussian elimination is used to
eliminate the contribution of s to all encoding symbols in E matched to source symbols corresponding to positions previous to i.
The decoding succeeds in fully recovering all the source symbols if and only if the system of linear equations defined by the received encoding symbols is of rank L'K, i.e., if the received encoding
symbols have rank L'K, then the above decoding process is guaranteed to recover the L'K source symbols of the L' basic blocks.
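The rank condition can be checked directly with a generic GF(2) rank computation; this is only a sketch of the success criterion, not the to-and-fro sweep itself, which is an efficiency optimization over generic elimination:

```python
def gf2_rank(rows):
    """Rank of a set of GF(2) equations; each row is an int bitmask of
    source-symbol coefficients. Decoding succeeds if and only if the
    rank equals the number of source symbols (L'K in the text)."""
    pivots = {}                         # highest set bit -> reduced row
    for row in rows:
        cur = row
        while cur:
            hb = cur.bit_length() - 1
            if hb in pivots:
                cur ^= pivots[hb]       # eliminate the leading coefficient
            else:
                pivots[hb] = cur
                break
    return len(pivots)

# three unknowns: rows 110, 011, 101 XOR to 0, so they are dependent
```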
The number of symbol operations per generated encoding symbol is BV, where V is the number of basic blocks enveloped by the source block from which the encoding symbol is generated.
The reach of an encoding symbol is defined to be the set of column positions between the smallest column position that corresponds to a neighbor source symbol and the largest column position that
corresponds to a neighbor source symbol in the decoding matrix. By the properties of the encoding process and the decoding matrix, the size of the reach of an encoding symbol is at most BL' in the
decoding process described above. The number of decoding symbol operations is at most the sum of the sizes of the reaches of the encoding symbols, as by the properties of the matching process
described above, the reach of encoding symbols are never extended beyond their original reach by decoding symbol operations and each decoding symbol operation decreases the sum of the sizes of the
encoding symbol reaches by one. This implies that the number of symbol operations for decoding the N=KL' source symbols in the L' basic blocks is O(NBL').
There is a trade-off between the computational complexity of the window-based code and its recovery properties. It can be shown by a simple analysis that if B=O(ln(L)·K^(1/2)) and if the finite field size is chosen to be large enough, e.g., O(LK), then all L'K source symbols of the L' basic blocks can be recovered with high probability if the recovery conditions of an ideal recovery elastic code described previously are satisfied by the received encoding symbols for the L' basic blocks, and the failure probability decreases rapidly as a function of each additionally received encoding symbol. The recovery properties of the window-based code are similar to those of a random GF[2] code or random GF[256] code when GF[2] or GF[256], respectively, is used as the finite field and B=O(ln(L)·K^(1/2)).
A similar analysis can be used to show that if B=O(ln(LK/δ)/ε) then all L'K source symbols of the L' basic blocks can be recovered with probability at least 1-δ under the following conditions. Let T
be the number of source blocks from which the received encoding symbols that are useful for decoding the L' basic blocks are generated. Then, the number of received encoding symbols generated from
the T source blocks should be at least L'K(1+ε), and for all S≦T, the number of encoding symbols generated from any set of S source blocks should be at most the number of source symbols in the union
of those S source blocks.
The window-based codes described above are non-systematic elastic codes. Systematic window-based fountain elastic codes can be constructed from these non-systematic window-based codes, wherein the
efficiency and recovery properties of the so-constructed systematic codes are very similar to those of the non-systematic code from which they are constructed, similar to the systematic construction
described above for the window-based codes that are fountain block codes. Details of how this might work are described in Shokrollahi-Systematic.
There are many variations of the window-based codes described herein, as one skilled in the art will recognize. As one example, instead of creating an extended block of K+2B symbols for each basic block, one can generate encoding symbols directly from the K source symbols of each basic block that is part of the source block from which the encoding symbol is generated, in which case t is chosen randomly between 0 and K-1 for each encoding symbol, and then the encoding symbol value is computed similarly to that shown in Equation 6 for each such basic block.
One way to decode for this modified window-based block code is to use a decoding procedure similar to that described above, except that at the beginning a consecutive set of L'B of the L'K source symbols are "inactivated"; the decoding proceeds as described previously assuming that these L'B inactivated source symbol values are known; an L'B×L'B system of equations between encoding symbols and the L'B inactivated source symbols is formed and solved; and then, based on this and the results of the to-and-fro sweep, the remaining L'(K-B) source symbols are solved. Details of how this can work are described in Shokrollahi-Inactivation.
There are many other variations of the window-based code above. For example, it is possible to relax the condition that each basic block comprises the same number of source symbols. For example,
during the encoding process, the value of B used for encoding each basic block can be proportional to the number of source symbols in that basic block. For example, suppose a first basic block
comprises K source symbols and a second basic block comprises K' source symbols, and let μ=K/K' be the ratio of the sizes of the blocks. Then, the value B used for the first basic block and the
corresponding value B' used for the second basic block can satisfy: B/B'=μ. In this variation, the start position within the two basic blocks for computing the contribution of the basic blocks to an
encoding symbol generated from a source block that envelopes both basic blocks might differ, for example the encoding process can choose a value φ uniformly between 0 and 1 and then use the start
position t=φ(K+B-1) for the first basic block and use the start position t'=φ(K'+B'-1) for the second basic block (where these values are rounded up to the nearest integer position). In this
variation, when forming the decoding matrix at the decoder comprising the interleaved symbols from each of the basic blocks being decoded, the interleaving can be done in such a way that the
frequency of positions corresponding to the first basic block to the frequency of positions corresponding to the second basic block is in the ratio μ, e.g., if the first basic block is twice the size
of the second basic block then twice as many column positions correspond to the first basic block as correspond to the second basic block, and this condition is true (modulo rounding errors) for any
consecutive set of column positions within the decoding matrix.
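The proportional choice of start positions for basic blocks of different sizes can be sketched as follows, rounding up as described in the text; the function and parameter names are illustrative:

```python
import math

def start_positions(phi, K, K_prime, B, B_prime):
    """Shared-fraction start positions for two basic blocks of
    different sizes. A single value phi in [0, 1) is scaled by each
    block's range K + B - 1 and rounded up (illustrative sketch)."""
    t = math.ceil(phi * (K + B - 1))
    t_prime = math.ceil(phi * (K_prime + B_prime - 1))
    return t, t_prime

# Block sizes in ratio mu = K/K' = 2, so B/B' = mu = 2 as well.
positions = start_positions(phi=0.5, K=100, K_prime=50, B=10, B_prime=5)
```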
There are many other variations as well, as one skilled in the art will recognize. For example, a sparse matrix representation of the decoding matrix can be used at the decoder instead of having to
store and process the full decoding matrix. This can substantially reduce the storage and time complexity of decoding.
Other variations are possible as well. For example, the encoding may comprise a mixture of two types of encoding symbols: a majority of a first type of encoding symbols generated as described above and a minority of a second type of encoding symbols generated sparsely at random. For example, the fraction of the first type of encoding symbols could be 1-K^(-1/3) and the reach of each first type encoding symbol could be B=O(K^(1/3)), and the fraction of the second type of encoding symbols could be K^(-1/3) and the number of neighbors of each second type encoding symbol could be K^(2/3). One advantage of such a mixture of two types of encoding symbols is that the value of B used for the first type to ensure successful decoding can be substantially smaller, e.g., B=O(K^(1/3)) when two types are used as opposed to B=O(K^(1/2)) when only one type is used.
The decoding process is modified so that in a first step the to-and-fro decoding process described above is applied to the first type of encoding symbols, using inactivation decoding to inactivate
source symbols whenever decoding is stuck to allow decoding to continue. Then, in a second step the inactivated source symbol values are recovered using the second type of encoding symbols, and then
in a third step these solved encoding symbol values together with the results of the first step of the to-and-fro decoding are used to solve for the remaining source symbol values. The advantage of
this modification is that the encoding and decoding complexity is substantially improved without degrading the recovery properties. Further variations, using more than two types of encoding symbols,
are also possible to further improve the encoding and decoding complexity without degrading the recovery properties.
Ideal Recovery Elastic Codes
This section describes elastic codes that achieve the ideal recovery elastic code properties described previously. This construction applies to the case when the source blocks satisfy the following
conditions: the source symbols can be arranged into an order such that the source symbols in each source block are consecutive, and so that, for any first source block and for any second source
block, the source symbols that are in the first source block but not in the second source block are either all previous to the second source block or all subsequent to the second source block, i.e.,
there is no first and second source blocks with some symbols of the first source block preceding the second source block and some symbols of the first source block following the second source block.
For brevity, herein such codes are referred to as a No-Subset Chord Elastic code, or "NSCE code." NSCE codes include prefix elastic codes.
It should be understood that the "construction" herein may involve mathematical concepts that can be considered in the abstract, but that such constructions are applied to a useful purpose and/or for
transforming data, electrical signals or articles. For example, the construction might be performed by an encoder that seeks to encode symbols of data for transmission to a receiver/decoder that in
turn will decode the encodings. Thus, inventions described herein, even where the description focuses on the mathematics, can be implemented in encoders, decoders, combinations of encoders and
decoders, processes that encode and/or decode, and can also be implemented by program code stored on computer-readable media, for use with hardware and/or software that would cause the program code
to be executed and/or interpreted.
In an example construction of an NSCE code, a finite field with n^(c(n)) field elements is used, where c(n)=O(n^C) and C is the number of source blocks. An outline of the construction follows, and implementation should be apparent to one of ordinary skill in the art upon reading this outline. This construction can be optimized to further reduce the size of the needed finite field, at least somewhat, in some cases.
In the outline, n is the number of source symbols to be encoded and decoded, C is the number of source blocks, also called chords, used in the encoding process, and c(n) is some predetermined value that is on the order of n^C. Since a chord is a subset (proper or not) of the n source symbols that are used in generating repair symbols and a "block" is a set of symbols generated from within the same domain, there is a
one-to-one correspondence between the chords used and the blocks used. The use of these elements will now be described with reference to an encoder or a decoder, but it should be understood that
similar steps might be performed by both, even if not explicitly stated.
An encoder will manage a variable, j, that can range from 1 to C and indicates a current block/chord being processed. By some logic or calculation, the encoder determines, for each block j, the number of source symbols, k_j, and the number of encoding symbols, n_j, associated with block j. The encoder can then construct a k_j×n_j Cauchy matrix, M_j, for block j. The size of the field needed for the base finite field to represent the Cauchy matrices is thus the maximum of k_j+n_j over all j. Let q be the number of elements in this base field.
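A Cauchy matrix over a prime base field can be constructed as follows; the choice of evaluation points here is an illustrative assumption, not necessarily the one used in the construction:

```python
def cauchy_matrix(k, n, q):
    """k x n Cauchy matrix over the prime field GF(q), with q >= k + n.

    Entry (i, j) = 1/(x_i - y_j) where x_i = i and y_j = k + j, so the
    x's and y's are all distinct and every difference is invertible.
    The evaluation points are an assumption for illustration.
    """
    assert q >= k + n, "the field must have at least k + n elements"
    inv = lambda a: pow(a, q - 2, q)      # inverse via Fermat's little theorem
    return [[inv((i - (k + j)) % q) for j in range(n)] for i in range(k)]
```

A defining property of Cauchy matrices, used implicitly in the text, is that every square submatrix is invertible.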
The encoder works over a larger field, F, with q^D elements, where D is on the order of q^C. Let ω be an element of F that is of degree D. The encoder uses (at least logically) powers of ω to alter the matrices to be used to compute the encoding symbols. For block 1 of the C blocks, the matrix M_1 is left unmodified. For block 2, the row of M_2 that corresponds to the i-th source symbol is multiplied by ω^i. For block j, the row of M_j that corresponds to the i-th source symbol is multiplied by ω^(i·q(j)), where q(j)=q^(j-2).

Let the modified matrices be M'_1, . . . , M'_C. These are the matrices used to generate the encoding symbols for the C blocks. A key property of these matrices flows from an observation explained below.
Suppose a receiver has received some mix of encoding symbols generated from the various blocks. That receiver might want to determine whether the determinant of the matrix M corresponding to the
source symbols and the received encoding symbols is nonzero.
Consider the bipartite graph between the received encoding symbols and the source symbols, with adjacencies defined naturally, i.e., there is an edge between an encoding symbol and a source symbol if
the source symbol is part of the block from which the encoding symbol is generated. If there is a matching within this graph where all of the source symbols are matched, then the source symbols
should be decodable from the received encoding symbols, i.e., the determinant of M should not be zero. Then, classify each matching by a "signature" of how the source symbols are matched to the
blocks of encoding symbols, e.g., a signature of (1, 1, 3, 2, 3, 1, 2, 3) indicates that, in this matching, the first source symbol is matched to an encoding symbol in block 1, the second source
symbol is matched to an encoding symbol in block 1, the third source symbol is matched to an encoding symbol in block 3, the fourth source symbol is matched to an encoding symbol in block 2, etc.
Then, the matchings can be partitioned according to their signatures, and the determinant of M can be viewed as the sum of determinants of matrices defined by these signatures, where each such
signature determinant corresponds to a Cauchy matrix and is thus not zero. However, the signature determinants could zero each other out.
By constructing the modified matrices M'_1, . . . , M'_C, a result is that there is a signature that uniquely has the largest power of ω as a coefficient of the determinant corresponding to that signature, and this implies that the determinant of M is not zero, since the determinant of this unique signature cannot be zeroed out by any other determinant. This is where the chord structure of the blocks is important.
Let the first block correspond to the chord that starts (and ends) first within the source symbols, and in general, let block j correspond to the chord that is the j-th chord to start (and finish)
within the source symbols. Since there are no subset chords, if any one block starts before a second one, it also has to end before the second one; otherwise the second one would be a subset.
Then, the decoder handles a matching wherein all of the encoding symbols for the first block are matched to a prefix of the source symbols, wherein all of the encoding symbols for the second block are matched to a next prefix of the source symbols (excluding the source symbols matched to the first block), etc. In particular, this matching will have the signature of e_1 1's, followed by e_2 2's, followed by e_3 3's, etc., where e_i is the number of encoding symbols that are to be used to decode the source symbols that were generated from block i. This matching has a signature that uniquely has the largest power of ω as a coefficient (similar to the argument used in Theorem 1 for the two-chord case), i.e., any other signature that corresponds to a valid matching between the source and received encoding symbols
will have a smaller power of ω as a coefficient. Thus, the determinant has to be nonzero.
One disadvantage with chord elastic codes occurs where subsets exist, i.e., where there is one chord contained within another chord. In such cases, a decoder cannot be guaranteed to always find a
matching where the encoding symbols for each block are used greedily, i.e., use all for block 1 on the first source symbols, followed by block 2, etc., at least according to the original ordering of
the source symbols.
In some cases, the source symbols can be re-ordered to obtain the non-contained chord structure. For example, if the set of chords according to an original ordering of the source symbols were such
that each subsequent chord contains all of the previous chords, then the source symbols can be re-ordered so that the structure is that of a prefix code, i.e., re-order the source symbols from the
inside to the out, so that the first source symbols are those inside all of the chords, followed by those source symbols inside all but the smallest chord, followed by those source symbols inside all
but the smallest two chords, etc. With this re-ordering, the above constructions can be applied to obtain elastic codes with ideal recovery properties.
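The inside-to-out re-ordering for a nested family of chords can be sketched as follows; chords are given innermost-first, and the naming is illustrative:

```python
def inside_out_order(chords):
    """Re-order source symbols for a nested chord family (each chord
    contains all previous ones) so that every chord becomes a prefix
    of the new order. Chords are sets of symbol indices, given
    innermost chord first (illustrative sketch)."""
    order, seen = [], set()
    for chord in chords:                  # innermost layer outward
        for s in sorted(chord - seen):    # symbols new at this layer
            order.append(s)
        seen |= chord
    return order

# chords: {3,4} inside {2,3,4,5} inside {1,...,6}
order = inside_out_order([{3, 4}, {2, 3, 4, 5}, {1, 2, 3, 4, 5, 6}])
# after reordering, each chord occupies a prefix of the new order
```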
Examples of Usage of Elastic Codes
In one example, the encoder/decoder are designed to deal with expected conditions, such as a round-trip time (RTT) for packets of 400 ms, a delivery rate of 1 Mbps (bits/second), and a symbol size of
128 bytes. Thus, the sender sends approximately 1000 symbols per second (1000 symbols/sec×128 bytes/symbol×8 bits/byte=1.024 Mbps). Assume moderate loss conditions of some light loss (e.g., at most
5%) and sometimes heavier loss (e.g., up to 50%).
In one approach, a repair symbol is inserted after each G source symbols; the maximum latency to recover from a loss can then be as little as G symbols, and X=1/G is the fraction of repair symbols that is allowed to be sent, which may not recover any source symbols. G can change based on current loss conditions, RTT and/or bandwidth.
Consider the example in FIG. 5, where the elastic code is a prefix code and G=4. The source symbols are shown sequentially, and the repair symbols are shown with bracketed labels representing the
source block that the repair symbol applies to.
If all losses are consecutive starting at the beginning, and one symbol is lost, then the introduced latency is at most G, whereas if two symbols are lost, then the introduced latency is at most 2×G,
and if i symbols are lost, the introduced latency is at most i×G. Thus, the amount of loss affects introduced latency linearly.
Thus, if the allowable redundant overhead is limited to 5%, say, then G=20, i.e., one repair symbol is sent for each 20 source symbols. In the above example, one symbol is sent per 1 ms, so that
would mean 20 ms between each repair symbol and the recovery time would be 40 ms for two lost symbols, 60 ms for three lost symbols, etc. Note that using just ARQ in these conditions, recovery time
is at least 400 ms, the RTT.
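The arithmetic in this example works out as follows; this is a sketch of the latency calculation, not implementation guidance:

```python
symbol_interval_ms = 1            # 1000 symbols/second, as in the example
overhead = 0.05                   # allowable redundant overhead (5%)
G = round(1 / overhead)           # one repair symbol per G source symbols

def worst_case_recovery_ms(lost_symbols):
    """Introduced latency grows linearly: at most i*G symbol times
    for i lost symbols (consecutive losses from the start)."""
    return lost_symbols * G * symbol_interval_ms

# 40 ms for two losses, 60 ms for three, versus >= 400 ms RTT with ARQ alone
```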
In that example, a repair symbol's block is the set of all previously sent symbols. Where simple reports back from the receiver are allowed, the blocks can be modified to exclude earlier source symbols
that have been received or are no longer needed. An example is shown in FIG. 6, which is a variation of what is shown in FIG. 5.
In this example, assume that the encoder receives from the receiver an SRSI indicator of the Smallest Relevant Source Index. The SRSI can increase each time all prior source symbols are received or are
no longer needed. Then, the encoder does not need to have any repair symbols depend on source symbols that have indices lower than the SRSI, which saves on computation. Typically, the SRSI is the
index of the source symbol immediately following the largest prefix of already recovered source symbols. The sender then calculates the scope of a repair symbol from the largest SRSI received from the receiver to the index of the last sent source symbol. This leads to exactly the same recovery properties as the no-feedback version, but lessens complexity/memory requirements at the sender and the
receiver. In the example of FIG. 6, SRSI=5.
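A minimal sketch of the repair-symbol scope computation with SRSI feedback follows; the function name and the last-sent index used in the example are illustrative assumptions (only SRSI=5 comes from the FIG. 6 example):

```python
def repair_scope(srsi, last_sent_index):
    """Indices of the source symbols a repair symbol depends on, given
    the largest SRSI reported by the receiver (illustrative sketch).
    Symbols before the SRSI are already recovered or no longer needed."""
    return range(srsi, last_sent_index + 1)

# FIG. 6 example: SRSI = 5; a last sent index of 12 is assumed here,
# so the repair symbol covers symbols 5..12 rather than 1..12.
scope = repair_scope(5, 12)
```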
With the feedback, prefix elastic codes can be used more efficiently and feedback reduces complexity/memory requirements. When a sender gets feedback indicative of loss, it can adjust the scope of
repair symbols accordingly. Thus, to combine forward error correction and reactive error correction, additional optimizations are possible. For example, the forward error correction (FEC) can be
tuned so that the allowable redundant overhead is high enough to proactively recover most losses, but not too high as to introduce too much overhead, while reactive correction is for the more rare
losses. Since most losses are quickly recovered using FEC, most losses are recovered without an RTT latency penalty. While reactive correction has an RTT latency penalty, its use is rarer.
Variations
Source block mapping indicates which blocks of source symbols are used for determining values for a set of encoding symbols (which can be encoding symbols in general or more specifically repair
symbols). In particular, a source block mapping might be stored in memory and indicate the extents of a plurality of base blocks and indicate which of those base blocks are "within the scope" of
which source blocks. In some cases, at least one base block is in more than one source block. In many implementations, the operation of an encoder or decoder can be independent of the source block
mapping, thus allowing for arbitrary source block mapping. Thus, while predefined regular patterns could be used, that is not required and in fact, source block scopes might be determined from
underlying structure of source data, by transport conditions or by other factors.
In some embodiments, an encoder and decoder can apply error-correcting elastic coding rather than just elastic erasure coding. In some embodiments, layered coding is used, wherein one set of repair
symbols protects a block of higher priority data and a second set of repair symbols protects the combination of the block of higher priority data and a block of lower priority data.
In some communication systems, network coding is combined with elastic codes, wherein an origin node sends an encoding of source data to intermediate nodes, and the intermediate nodes send encoding data generated from the portion of the encoding data that each intermediate node received (an intermediate node might not get all of the source data, either by design or due to channel errors). Destination nodes then recover the original source data by decoding the encoding data received from intermediate nodes, and then decoding again to recover the source data.
In some communication systems that use elastic codes, various applications can be supported, such as progressive downloading for file delivery/streaming when a prefix of a file/stream needs to be sent before it is all available. Such systems might also be used for PLP replacement or for object transport.
Those of ordinary skill in the art would further appreciate, after reading this disclosure, that the various illustrative logical blocks, modules, circuits, and algorithm steps described in
connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and
software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as
hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each
particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments of the invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a
Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic,
discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor
may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a
microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a
combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM
(EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can
read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an
ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored
on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium
that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such
computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or
store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if
the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as
infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-Ray® disc where disks usually reproduce data magnetically, while
discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosed exemplary embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these exemplary
embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the
invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Astronomy Answers
Positions in the Sky
Here it is explained how you can calculate the approximate position of the Sun or a planet as seen from another planet. Further down the page it is explained how you can calculate the approximate
position of the Moon as seen from the Earth.
1. Positions of the Sun and the Planets
If you want to calculate the position of a celestial body in the sky or as seen from the Sun (heliocentric), then you need information about that body and perhaps about the Earth. Which information
you need depends on what exactly you want to calculate and how accurate the results should be. This page explains how you can calculate the position in the sky as seen from a particular place on
Earth or from the Sun for a celestial body that orbits the Sun in a circle or ellipse. For this, you need the following information:
• the length a of the semimajor axis of the orbit, which indicates the size of the orbit. a should be measured in Astronomical Units (AU). Sometimes the perihelion distance q is given instead of a.
Then you can calculate a as follows:
(Eq. 1) a = q/(1 − e)
• the eccentricity e of the orbit, which indicates the shape of the orbit.
• the argument ω [omega] of the perihelion, which indicates how far the perihelion lies beyond the ascending node of the orbit. Warning: The symbol ω is also sometimes used for the sum of "our" ω
and Ω. The symbol ϖ is also sometimes used for that sum, and that symbol resembles ω. We use Π = ω + Ω for that sum.
• the ecliptic longitude Ω [large omega] of the ascending node of the orbit.
• the inclination i of the orbit, which indicates how far the orbit is tilted compared to the ecliptic.
• the mean anomaly M[0] at a specific date d[0], which indicates the position of the planet in its orbit at that date. Sometimes L[0] = M[0] + Ω + ω is given instead of M[0], and sometimes a date d
[perihelion] is given instead when the object was in its perihelion (where M is equal to 0). Here are the transformation formulas:
(Eq. 2) M[0] = L[0] − Ω − ω
(Eq. 3) M[0] = n (d[0] − d[perihelion])
In addition, you need the following information:
• the obliquity ε [epsilon] of the ecliptic
• your geographical longitude l (measured westward from Greenwich) and latitude φ [phi] (measured northward from the equator)
• your time zone t[z], measured as the number of hours that you have to add to your local clock time t to get Universal Time (UTC)
All in all, you need (besides the desired date and time) 16 separate numbers: 6 to describe the orbit and position of the planet in space, 6 to describe the orbit and position of the Earth in space,
and 4 that describe the orientation of the Earth and your position on Earth. If you only want to calculate the heliocentric position, then you need fewer numbers.
The orbital elements of the planets (at noon UTC on 1 January 2000, at Julian Day 2451545) are listed in the following table (with a measured in AU, and i, ω, Ω, and M[0] in degrees relative to the
equinox of that date).
Table 1: Planets: Orbital Elements
a e i ω Ω M[0]
Mercury 0.38710 0.20563 7.005 29.125 48.331 174.795
Venus 0.72333 0.00677 3.395 54.884 76.680 50.416
Earth 1.00000 0.01671 0.000 288.064 174.873 357.529
Mars 1.52368 0.09340 1.850 286.502 49.558 19.373
Jupiter 5.20260 0.04849 1.303 273.867 100.464 20.020
Saturn 9.55491 0.05551 2.489 339.391 113.666 317.021
Uranus 19.21845 0.04630 0.773 98.999 74.006 141.050
Neptune 30.11039 0.00899 1.770 276.340 131.784 256.225
Pluto 39.543 0.2490 17.140 113.768 110.307 14.882
For the Sun, a is equal to 0, so also x[Sun] = y[Sun] = z[Sun] = 0 so you can skip steps 1 through 4 below.
The inclination of the orbit of the Earth relative to the equinox of the date is zero by definition, so that orbit has no nodes and the value of the ecliptic longitude Ω of the ascending node is
immaterial. The value of Ω that I provide for the Earth in the table is the average of the values that you find a few years before and after 1 January 2000, when the inclination (compared to the
equinox of J2000.0) is different from zero.
If you multiply the inclination i by −1, add 180 degrees to Ω, and suitably adjust the argument ω of the perihelion, then you still have the same orbit (but with the roles of ascending and descending
nodes exchanged). For the other planets than the Earth, the inclination is very different from zero, so you can always choose it to be positive (by applying the factor of −1 when needed). Only for
the orbit of the Earth can the inclination be positive or negative, and one source can choose to take the future inclination to be positive while another source can take the past inclination to be positive.
When the inclination is very small, then the longitude of the ascending node is not determined very accurately, so the values that different sources give for the longitude of the ascending node of the
Earth can differ by many degrees (even if you take a possible sign difference for the inclination into account), but this does not make much difference to the position of the Earth that you calculate
using that value, because the ecliptic latitude cannot be greater (in absolute sense) than the inclination.
Here is a table with some more handy numbers for each planet that depend only on the orbital elements. n is measured in degrees per day, a(1−e^2) in AU, and Π in degrees. These numbers are explained below.
Table 2: Planets: Other Orbital Numbers
n a(1−e^2) Π
Mercury 4.092317 0.37073 77.456
Venus 1.602136 0.72330 131.564
Earth 0.985608 0.99972 102.937
Mars 0.524039 1.51039 336.060
Jupiter 0.083056 5.19037 14.331
Saturn 0.033371 9.52547 93.057
Uranus 0.011698 19.17725 173.005
Neptune 0.005965 30.10796 48.124
Pluto 0.003964 37.09129 224.075
The steps you must follow to calculate the position of a planet as seen from Earth at date d are then:
1. calculate the mean anomaly M
2. calculate the true anomaly ν
3. calculate the distance r to the Sun
4. calculate the rectangular heliocentric ecliptic coordinates
5. calculate the rectangular geocentric ecliptic coordinates
6. calculate the ecliptic longitude λ and latitude β
7. calculate the right ascension α and declination δ
8. calculate the sidereal time θ
9. calculate the hour angle H
10. calculate the altitude h and azimuth A
If you want to calculate the heliocentric position of a planet, then you should take (x[planet], y[planet], z[planet]) for (x, y, z) in step 5.
These steps are explained in more detail below. As an example, we'll calculate the position of Jupiter at 01:00 hours Central European Standard Time (0:00 hours UTC) on 1 January 2004 from 52° north
latitude and 5° east longitude (which is near Utrecht in the Netherlands).
The section about approximations details approximations that provide less accurate results with less calculations.
WARNING: These calculations are based on the assumption that the planets orbit around the Sun in elliptical orbits that never change. In reality these orbits do change, but slowly and not by much.
These formulas are very suitable to determine whether a particular planet will be visible at a certain time, or what star it will be close to in the sky, but not (for example) to calculate whether a
particular planet will eclipse a particular star. The accuracy of the results of these formulas is discussed later.
WARNING 2: Make sure that you use the right units for the trigonometric functions such as sin and arctan. In computer programs those functions commonly work with radians. Many calculators can be
configured to work with degrees or radians. Calculate sin(1) with your calculator or computer program. If the result is approximately 0.8415, then the unit is the radian. If the result is
approximately 0.01745, then the unit is the degree. To transform from degrees to radians, multiply the number of degrees by π/180 ≈ 0.01745329.
1.1. The Mean Anomaly
First, calculate which angle n the planet traverses on average per day, as seen from the Sun:
(Eq. 4) n = 0.9856076686°/(a√a)
and then the mean anomaly M for date d (measured in days) from the start date d[0], the mean anomaly M[0] on the start date, and n:
(Eq. 5) M = M[0] + n (d − d[0])
For d and d[0] you could very well use Julian Day Numbers. The values of n for the planets are shown in the second table at the top of this page.
0:00 UTC at 1 January 2004 is 1460.5 days after the time for which M[0] from the table is valid, so d − d[0] is equal to 1460.5. The mean anomaly of Jupiter at the desired time is equal to M[Jupiter]
= 20.020 + 0.083056 * 1460.5 = 141.324° and for the Earth it is M[Earth] = 357.529 + 0.985608 * 1460.5 = 357.009° (after subtracting multiples of 360°).
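The calculation of Eqs. 4-5 is easy to script. Here is a minimal Python sketch (the helper name `mean_anomaly` is my own, not from this article) that reproduces the values of the example above:

```python
def mean_anomaly(M0, n, days):
    """Eq. 5: M = M0 + n*(d - d0), reduced to the range 0..360 degrees."""
    return (M0 + n * days) % 360.0

# 0:00 UTC on 1 January 2004 is 1460.5 days after the table epoch
M_jupiter = mean_anomaly(20.020, 0.083056, 1460.5)   # about 141.324 deg
M_earth   = mean_anomaly(357.529, 0.985608, 1460.5)  # about 357.009 deg
```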
1.2. The True Anomaly
The true anomaly ν [nu] is the angle between the line from the focus of the orbit (the Sun) to the perihelion of the orbit and the line from the focus to the planet. To calculate the true anomaly,
you need to solve the Equation of Kepler. See the Page about the Equation of Kepler about this.
For Jupiter, the true anomaly that goes with M[Jupiter] = 141.324° and e[Jupiter] = 0.04849 is equal to ν[Jupiter] = 144.637°. For the Earth, the true anomaly that goes with M[Earth] = 357.009° and e
[Earth] = 0.01671 is equal to ν[Earth] = 356.907°.
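The Equation of Kepler has no closed-form solution, but Newton iteration converges quickly for planetary eccentricities. A sketch in Python (my own helper, not taken from the linked page), which reproduces the two values above:

```python
import math

def true_anomaly(M_deg, e):
    """Solve Kepler's equation M = E - e*sin(E) (angles in radians) by
    Newton iteration, then convert the eccentric anomaly E to the
    true anomaly nu."""
    M = math.radians(M_deg % 360.0)
    E = M  # a good starting guess when e is small
    for _ in range(20):
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))
    return math.degrees(nu) % 360.0

nu_jupiter = true_anomaly(141.324, 0.04849)  # about 144.637 deg
nu_earth   = true_anomaly(357.009, 0.01671)  # about 356.907 deg
```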
1.3. The Distance to the Sun
The eccentricity e indicates the shape of the orbit and cannot be negative. If e is equal to 0, then the orbit is a circle. If e is greater than 0 but less than 1, then the orbit is an ellipse. If e
is equal to 1, then the orbit is a parabola or a straight line. If e is greater than 1, then the orbit is a hyperbola.
The size a of the semimajor axis of the orbit indicates the size of the orbit. For a circular orbit, a is equal to the distance between the planet and the Sun. For an elliptic orbit, a is equal to
half of the length of the longest line segment that fits within the ellipse.
The distance r of the planet from the Sun is:
(Eq. 6) r = a (1 − e^2)/(1 + e cos ν)
The values of a(1−e^2) for the planets are shown in the second table at the top of this page.
The distance between Jupiter and the Sun, measured in AU, is then r[Jupiter] = 5.19037 / (1 + 0.04849 * cos(144.637°)) = 5.40406 and for the Earth r[Earth] = 0.99972 / (1 + 0.01671 * cos(356.907°)) =
0.98331 .
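Eq. 6 translates directly into code; this sketch (function name is my own) reproduces the distances just computed:

```python
import math

def sun_distance(a, e, nu_deg):
    """Eq. 6: r = a*(1 - e^2) / (1 + e*cos(nu)), in AU."""
    return a * (1.0 - e * e) / (1.0 + e * math.cos(math.radians(nu_deg)))

r_jupiter = sun_distance(5.20260, 0.04849, 144.637)  # about 5.40406 AU
r_earth   = sun_distance(1.00000, 0.01671, 356.907)  # about 0.98331 AU
```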
1.4. The Rectangular Heliocentric Coordinates
We can now calculate the rectangular heliocentric ecliptic coordinates, all measured in AU. Of those, x[planet] is the distance from the Sun as measured along the line from the Sun to the vernal
equinox, y[planet] the distance to the line from the Sun to the vernal equinox, measured in the plane of the ecliptic, and z[planet] the distance above the plane of the ecliptic.
(Eq. 7) x[planet] = r (cos Ω cos(ω + ν) − sin Ω cos i sin(ω + ν))
(Eq. 8) y[planet] = r (sin Ω cos(ω + ν) + cos Ω cos i sin(ω + ν))
(Eq. 9) z[planet] = r sin i sin(ω + ν)
For Jupiter we find x[Jupiter] = 5.40406 * (cos(100.464°) * cos(273.867° + 144.637°) − sin(100.464°) * cos(1.303°) * sin(273.867° + 144.637°)) = −5.04289, y[Jupiter] = 5.40406 * (sin(100.464°) * cos
(273.867° + 144.637°) + cos(100.464°) * cos(1.303°) * sin(273.867° + 144.637°)) = 1.93965, and z[Jupiter] = 5.40406 * sin(1.303°) * sin(273.867° + 144.637°) = 0.10478 and for the Earth x[Earth] =
0.98331 * (cos(174.873°) * cos(288.064° + 356.907°) − sin(174.873°) * cos(0°) * sin(288.064° + 356.907°)) = −0.16811 , y[Earth] = 0.98331 * (sin(174.873°) * cos(288.064° + 356.907°) + cos(174.873°) *
cos(0°) * sin(288.064° + 356.907°)) = 0.96884, and z[Earth] = 0.98331 * sin(0°) * sin(288.064° + 356.907°) = 0.00000
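Eqs. 7-9 are a rotation from orbital-plane coordinates into the ecliptic frame. A Python sketch (helper name mine) that reproduces the Jupiter values of the example:

```python
import math

def heliocentric_xyz(r, Omega_deg, i_deg, w_plus_nu_deg):
    """Eqs. 7-9: rectangular heliocentric ecliptic coordinates, in AU."""
    O = math.radians(Omega_deg)
    i = math.radians(i_deg)
    u = math.radians(w_plus_nu_deg)  # omega + nu
    x = r * (math.cos(O) * math.cos(u) - math.sin(O) * math.cos(i) * math.sin(u))
    y = r * (math.sin(O) * math.cos(u) + math.cos(O) * math.cos(i) * math.sin(u))
    z = r * math.sin(i) * math.sin(u)
    return x, y, z

x_j, y_j, z_j = heliocentric_xyz(5.40406, 100.464, 1.303, 273.867 + 144.637)
# about (-5.04289, 1.93965, 0.10478)
```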
1.5. The Rectangular Geocentric Coordinates
If you have done steps 1 through 4 for the planet and also for the Earth, then you have calculated x[planet], y[planet], z[planet], x[Earth], y[Earth], z[Earth]. Then we can calculate the coordinates
of the planet relative to the Earth, measured in AU:
(Eq. 10) x = x[planet] − x[Earth]
(Eq. 11) y = y[planet] − y[Earth]
(Eq. 12) z = z[planet] − z[Earth]
For the rectangular geocentric coordinates of Jupiter, measured in AU, we find x = −5.04289 − (−0.16811) = −4.87477 , y = 1.93965 − 0.96884 = 0.97081 , and z = 0.10478 − 0.00000 = 0.10478
1.6. The Ecliptic Longitude and Latitude
The ecliptic latitude β [beta] indicates how far the celestial body is from the ecliptic. The ecliptic longitude λ [lambda] shows how far the celestial body is from the vernal equinox, measured along
the ecliptic. For the distance Δ [large delta] of the planet from the Earth we find
(Eq. 13) Δ = √(x² + y² + z²)
and then
(Eq. 14) x = Δ cos λ cos β
(Eq. 15) y = Δ sin λ cos β
(Eq. 16) z = Δ sin β
It follows that
(Eq. 17) λ = arctan(y, x)
(Eq. 18) β = arcsin(z/Δ)
Here, arctan(y, x) is the special arc tangent function with two arguments that calculates the angle between the x axis and the line that goes from the point with coordinates (0,0) to the point with
coordinates (x,y). This special arc tangent function is usually not available under that name on electronic calculators, but is often available in programming languages (with a name like "atan2").
Some calculators have a function that transforms rectangular coordinates to polar ones (a distance and an angle), and you can use that to calculate the angle that goes with x and y, and that angle is
λ. That function often has a name like "R→P" or "Pol". To check: if you enter x = 1 and y = 0, then it should yield 0 for the angle, if you enter x = 0 and y = 1 then the result should be 90°, and if
you enter x = −1 and y = −1 then the result should be −135° or 225° (those differ by 360° so they indicate the same angle).
If that function is not on your calculator or in your programming language, then you can use the ordinary arc tangent function, but then you have to do some extra work. In that case, calculate
(Eq. 19) λ[0] = arctan(y/x)
If sin λ[0] has the same sign (plus or minus) as y and cos λ[0] the same sign as x, then λ = λ[0]. Otherwise, λ = λ[0] + 180°.
We find Δ = √((−4.87477)^2 + 0.97081^2 + 0.10478^2) = 4.97161, λ = arctan(0.97081, −4.87477) = 168.737° , and β = arcsin(0.10478⁄4.97161) = 1.208°.
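In code the two-argument arc tangent is `atan2`, so Eqs. 13, 17, and 18 become (Python sketch, helper name mine):

```python
import math

def ecliptic_polar(x, y, z):
    """Eqs. 13, 17, 18: distance Delta, longitude lambda, latitude beta."""
    delta = math.sqrt(x * x + y * y + z * z)
    lam = math.degrees(math.atan2(y, x)) % 360.0  # two-argument arc tangent
    beta = math.degrees(math.asin(z / delta))
    return delta, lam, beta

# geocentric Jupiter from Eqs. 10-12
delta, lam, beta = ecliptic_polar(-4.87477, 0.97081, 0.10478)
# about (4.97161, 168.737 deg, 1.208 deg)
```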
1.7. The Right Ascension and Declination
The axis of rotation of the Earth is not at right angles to the plane of the orbit of the Earth, so the equator of the Earth makes an angle with the plane of the orbit of the Earth. This angle ε
[epsilon] is called the obliquity of the ecliptic and its value at the beginning of 2000 was
(Eq. 20) ε = 23.4397°
The equatorial coordinate system in the sky is tied to the rotation axis of the Earth. The equatorial coordinates are the right ascension α [alpha] and the declination δ [delta]. The declination
shows how far the body is from the celestial equator and determines from which parts of the Earth the object can be visible. The right ascension shows how far the body is from the vernal equinox, as
measured along the celestial equator, and determines (together with other things) when the object is visible. We have:
(Eq. 21) sin α cos δ = sin λ cos β cos ε − sin β sin ε
(Eq. 22) cos α cos δ = cos λ cos β
(Eq. 23) δ = arcsin(sin β cos ε + cos β sin ε sin λ)
You can combine formulas 21 and 22 to:
(Eq. 24) α = arctan(sin λ cos ε − tan β sin ε, cos λ)
where the special arc tangent function was used again that was described at step 6. The right ascension is usually written as a time, with hours, minutes, and seconds. To transform it from a time
measured in hours to an angle measured in degrees (so you can use it in formulas with things such as sine functions), you should multiply it by 15.
For Jupiter we find α = arctan(sin(168.737°) * cos(23.4397°) − tan(1.208°) * sin(23.4397°), cos(168.737°)) = 170.120° = 11h20m29s and δ = arcsin(sin(1.208°) * cos(23.4397°) + cos(1.208°) * sin
(23.4397°) * sin(168.737°)) = 5.567°.
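Eqs. 23-24 in Python form (helper name mine), reproducing the Jupiter values just found:

```python
import math

def equatorial(lam_deg, beta_deg, eps_deg=23.4397):
    """Eqs. 23-24: ecliptic (lambda, beta) to equatorial (alpha, delta)."""
    lam = math.radians(lam_deg)
    beta = math.radians(beta_deg)
    eps = math.radians(eps_deg)
    alpha = math.atan2(math.sin(lam) * math.cos(eps) -
                       math.tan(beta) * math.sin(eps),
                       math.cos(lam))
    delta = math.asin(math.sin(beta) * math.cos(eps) +
                      math.cos(beta) * math.sin(eps) * math.sin(lam))
    return math.degrees(alpha) % 360.0, math.degrees(delta)

ra, dec = equatorial(168.737, 1.208)  # about (170.120 deg, 5.567 deg)
```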
If you calculate the right ascension, declination, and distance for the Sun and all planets from the table, then you find
Table 3: Planet Positions on 1-1-2004
Name right ascension declination distance
degrees time degrees AU
Sun 280.710 18h42m50s -23.074 0.98331
Mercury 268.693 17h54m46s -20.296 0.70403
Venus 316.189 21h04m45s -18.614 1.3061
Mars 8.335 0h33m20s +3.660 1.1115
Jupiter 170.120 11h20m29s +5.567 4.9716
Saturn 100.256 6h41m01s +22.420 8.0443
Uranus 333.148 22h12m36s -11.868 20.654
Neptune 313.525 20h54m06s -17.459 30.973
Pluto 260.277 17h21m07s -14.497 31.700
1.8. The Sidereal Time
Where a celestial body appears to be in the sky for you depends on the orientation of the Earth at your location compared to the stars. This angle is captured in the sidereal time θ (theta). The
sidereal time at a certain moment is equal to the right ascension that transits (culminates, passes through the celestial meridian) at that moment. If t is the local clock time and t[z] is how much
you have to add to the local time to get Universal Time (UTC), both measured in hours, then the sidereal time measured in degrees is equal to
(Eq. 25) θ = M[Earth] + Π[Earth] + 15° (t + t[z]) − l
(Eq. 26) Π[Earth] = Ω[Earth] + ω[Earth]
and Π[Earth] is shown in the second table. The t[z] is equal to −1 hour for Dutch and Belgian standard time (CET, Central European Time), and to −2 hours for Dutch and Belgian daylight savings time
(CEDT, Central European Daylight Savings Time). It is usually close to l/15° hours (reduced by one extra hour for daylight savings time).
The sidereal time is often written as a time rather than an angle, just like for the right ascension.
With the value of Π[Earth] from the second table, formula 25 becomes θ = M[Earth] + 102.937° + 15° (t + t[z]) − l. For 01:00 hours Central European Standard Time at 5° east longitude we have t = 1, t
[z] = −1 and l = −5°. In step 1 we found that M[Earth] = 357.009°. All in all, we find θ = 357.009° + 102.937° + 15° (1 + (−1)) − (−5°) = 104.946° = 6h59m47s (after subtracting multiples of 360° or 24 hours).
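Eq. 25 with Π[Earth] already filled in is a one-liner (Python sketch, helper name mine; note the sign conventions for t[z] and for the westward longitude l):

```python
def sidereal_time(M_earth, t, t_z, l):
    """Eq. 25 with Pi[Earth] = 102.937 deg.  t and t_z are in hours; l is
    the geographic longitude in degrees measured westward from Greenwich
    (so 5 deg east is l = -5).  Result in degrees, 0..360."""
    return (M_earth + 102.937 + 15.0 * (t + t_z) - l) % 360.0

theta = sidereal_time(357.009, 1, -1, -5)  # about 104.946 deg
```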
1.9. The Hour Angle
The hour angle H indicates how far the celestial body has passed beyond the celestial meridian. If the hour angle is zero, then the body transits and is highest in the sky, on the celestial meridian.
The hour angle is often written as a time rather than an angle, just like for the sidereal time and the right ascension. That time is approximately equal to how long ago the body was highest in the sky.
(Eq. 27) H = θ − α
For Jupiter, we find H = 104.946° − 170.120° = −65.174° = 6h59m47s − 11h20m29s = −4h20m42s so Jupiter will be highest in the sky about 4⅓ hours later.
1.10. The Height and Azimuth
The position of a celestial body in the sky is specified by its altitude h above the horizon, and its azimuth A. The altitude is 0° at the horizon, +90° in the zenith (straight over your head), and
−90° in the nadir (straight down). The azimuth is the direction along the horizon, which we measure from south to west. South has azimuth 0°, west +90°, north +180°, and east +270° (or −90°, that's
the same thing). The altitude and azimuth are the horizontal coordinates. If the Earth had no atmosphere, then we'd have, for the horizontal coordinates:
(Eq. 28) sin A cos h = sin H cos δ
(Eq. 29) cos A cos h = cos H cos δ sin φ − sin δ cos φ
(Eq. 30) sin h = sin φ sin δ + cos φ cos δ cos H
It follows that
(Eq. 31) A = arctan(sin H, cos H sin φ − tan δ cos φ)
(Eq. 32) h = arcsin(sin φ sin δ + cos φ cos δ cos H)
where the special arc tangent function is used which was described earlier.
The atmosphere of the Earth bends light upward a bit so that it seems as if celestial bodies are a bit higher in the sky than they would be without an atmosphere. This effect is on average about
0.57° at the horizon and decreases rapidly with height. Moreover, it varies significantly close to the horizon, depending on the circumstances in the atmosphere. If you want to apply a correction to
h for this refraction, then you can use the following formula (if you measure h in degrees):
(Eq. 33) h[apparent] = h + 0.017/tan(h + 10.26/(h + 5.10))
For Jupiter at 01:00 hours Central European Standard Time on 1 January 2004 as seen from 52° north latitude and 5° east longitude we find h = arcsin(sin(52°) sin(5.567°) + cos(52°) cos(5.567°) cos
(−65.174°)) = 19.495° and A = arctan(sin(−65.174°), cos(−65.174°) sin(52°) − tan(5.567°) cos(52°)) = −73.383°. Jupiter is then about 19.5 degrees above the horizon, about 16 degrees south of east.
The correction for refraction is about 0.05°.
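Eqs. 31-33 in Python form (helper names mine), reproducing the numbers of the Jupiter example:

```python
import math

def horizontal(H_deg, dec_deg, lat_deg):
    """Eqs. 31-32: hour angle and declination to azimuth (measured from
    south through west) and altitude, both in degrees."""
    H = math.radians(H_deg)
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    A = math.atan2(math.sin(H),
                   math.cos(H) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    h = math.asin(math.sin(lat) * math.sin(dec) +
                  math.cos(lat) * math.cos(dec) * math.cos(H))
    return math.degrees(A), math.degrees(h)

def apparent_altitude(h):
    """Eq. 33: average correction for atmospheric refraction (h in degrees)."""
    return h + 0.017 / math.tan(math.radians(h + 10.26 / (h + 5.10)))

A, h = horizontal(-65.174, 5.567, 52.0)  # about (-73.383 deg, 19.495 deg)
```

For Jupiter's altitude of about 19.5° the refraction correction from `apparent_altitude` is about 0.05°, as stated above.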
1.11. The Elongation
The elongation ψ of a planet is a measure for the distance of that planet from the Sun along the sky. You can measure that distance (1) directly between the planet and the Sun, or (2) only along the
ecliptic. These two methods in general give slightly different values, except for planets that are exactly on the ecliptic.
If you know the ecliptic longitude λ[Sun] of the Sun and also the ecliptic longitude λ and latitude β of the planet, then you can calculate elongation (1) with
(Eq. 34) ψ[1] = arccos(cos β cos(λ − λ[Sun]))
Elongation (2) is then equal to
(Eq. 35) ψ[2] = λ − λ[Sun]
It is customary to add or subtract multiples of 360° to or from ψ[2] until the value lies between −180° and 180°. Elongation (1) can only be positive or zero, but elongation (2) can be negative, too.
Positive means "east of the Sun" and negative "west of the Sun".
If you know the cartesian heliocentric ecliptic coordinates (x[Earth], y[Earth], z[Earth]) of Earth and the cartesian geocentric ecliptic coordinates (x, y, z) of the planet, then you can calculate
elongation (1) also like this:
(Eq. 36) Δ = √(x^2 + y^2 + z^2)
(Eq. 37) r[Earth] = √(x[Earth]^2 + y[Earth]^2 + z[Earth]^2)
(Eq. 38) ψ[1] = arccos(−(x[Earth] x + y[Earth] y + z[Earth] z)/(r[Earth] Δ))
We found earlier for Jupiter that λ = 168.737° and β = 1.208°. Similarly, we find for the Sun that λ[Sun] = 279.844°. It follows that ψ[1] = arccos(cos(1.208°) * cos(168.737° − 279.844°)) = 111.102°
and ψ[2] = 168.737° − 279.844° = −111.107°. Earlier, we found (x[Earth], y[Earth], z[Earth]) = (−0.16811, 0.96884, 0.00000) and (x, y, z) = (−4.87477, 0.97081, 0.10478) and, from that, Δ = 4.97161
and r[Earth] = 0.98331, and with these we now find ψ[1] = arccos(−(−0.16811 * −4.87477 + 0.96884 * 0.97081 + 0 * 0.10478) / (4.97161 * 0.98331)) = 111.102°, just like before.
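Eqs. 34-35 as a Python sketch (helper name mine), with ψ2 reduced to the customary −180°..180° range:

```python
import math

def elongations(lam, beta, lam_sun):
    """Eqs. 34-35: elongation measured directly (psi1) and along the
    ecliptic (psi2), in degrees; psi2 is reduced to -180..180."""
    psi1 = math.degrees(math.acos(math.cos(math.radians(beta)) *
                                  math.cos(math.radians(lam - lam_sun))))
    psi2 = (lam - lam_sun + 180.0) % 360.0 - 180.0
    return psi1, psi2

psi1, psi2 = elongations(168.737, 1.208, 279.844)
# about (111.102 deg, -111.107 deg): Jupiter is west of the Sun
```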
1.12. Accuracy
How accurate are the results that these formulas yield? I compared those results for every day from 1 January 1980 through 1 January 2020 with the results that you get if you do the calculations
using a truncated version of the VSOP theory according to the book "Astronomical Algorithms" by J. Meeus, who includes the effects of the mutual gravitational attraction of the planets, of
aberration, nutation, the travel time of light, and the difference between UTC and TDT. The greatest differences are shown in the next table.
Table 4: Planets: Position Calculation Accuracy
Name α δ Δ
degrees degrees AU
Sun 0.03 0.01 0.0000
Mercury 0.09 0.04 0.0013
Venus 0.17 0.05 0.0008
Mars 0.26 0.07 0.0018
Jupiter 0.32 0.12 0.0093
Saturn 1.08 0.43 0.049
Uranus 1.00 0.35 0.047
Neptune 0.68 0.2 0.072
1.13. Application
1.13.1. Transit
The culmination or transit of a celestial body is the moment at which the body passes through the celestial meridian and is highest in the sky. The hour angle H of the body is then 0. From this it
follows that
(Eq. 39) θ[transit] = α
(Eq. 40) t[transit] = (α + l − M[Earth] − Π[Earth])/15 − t[z]
if you measure the angles in degrees and the times in hours. With the value of Π[Earth] from the second table, this becomes
(Eq. 41) t[transit] = (α + l − M[Earth])/15 + 17h8m15s
if you measure t[transit] in Universal Time (UTC). For Central European Standard Time (which is valid in winter in most of western Europe, including the Netherlands and Belgium) you should add 1 hour
, but for Daylight Savings Time 2 hours.
For Jupiter on 1 January 2004 we found in step 7 that α is equal to 170.120°, and for the Earth we found in step 1 that M[Earth] is equal to 357.009°. For 5° east longitude (l = −5) we then find t
[transit] = (170.120 + (−5) − 357.009)/15 + 17h8m15s = 4h20m42s so Jupiter culminates (is highest in the sky) at 04:21 hours Universal Time, which is equivalent to 05:21 hours Central European
Standard Time.
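Eq. 41 as a Python sketch (helper name mine), reproducing the transit time of the example:

```python
def transit_time_utc(alpha, l, M_earth):
    """Eq. 41: culmination time in hours UTC.  The constant 17h8m15s
    absorbs Pi[Earth] = 102.937 deg divided by 15."""
    return ((alpha + l - M_earth) / 15.0
            + 17.0 + 8.0 / 60.0 + 15.0 / 3600.0) % 24.0

t_transit = transit_time_utc(170.120, -5, 357.009)
# about 4.345 h = 4h20m42s UTC
```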
1.13.2. Rise and Set
When the celestial body rises or sets, then it has a particular height h as calculated for an observer on a smooth spherical Earth without an atmosphere and for a celestial body that looks like a
point. The Earth does in fact have an atmosphere, and the other conditions need not be met, either, so h need not be equal to 0 when the celestial body rises or sets. We therefore define that the
rise or set happens when h is equal to some fixed value h[0]. This yields:
(Eq. 42) H[horizon] = arccos((sin h[0] − sin φ sin δ)/(cos φ cos δ))
(Eq. 43) t[rise] = t[transit] − H[horizon]/15
(Eq. 44) t[set] = t[transit] + H[horizon]/15
if you measure the times in hours and the hour angle in degrees.
For Jupiter we found, in step 7, that δ is equal to 5.567°. For 52° north latitude (φ = 52°) and with h[0] = 0 we then find H[horizon] = arccos(−tan(52°) * tan(5.567°)) = 97.167° = 6h28m40s , t[rise]
= t[transit] − H[horizon]/15 = 5h20m42s − 6h28m40s = −1h7m58s = 22h52m2s , and t[set] = t[transit] + H[horizon]/15 = 5h20m42s + 6h28m40s = 11h49m22s . I added 24 hours to get the final result for t
[rise], so that time falls on the previous day. All in all, Jupiter is above the horizon from 22h52m to 11h49m CET. There is no point in quoting these times more precisely than to the nearest minute.
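Eqs. 42-44 as a Python sketch (helper name mine). Note that `acos` raises a `ValueError` when its argument is outside −1..1, which corresponds to a body that never rises or never sets for the given latitude:

```python
import math

def rise_set(t_transit, dec_deg, lat_deg, h0_deg=0.0):
    """Eqs. 42-44: rise and set times (hours) from the transit time."""
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    cosH = (math.sin(math.radians(h0_deg)) -
            math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    H = math.degrees(math.acos(cosH))  # ValueError if circumpolar
    return (t_transit - H / 15.0) % 24.0, (t_transit + H / 15.0) % 24.0

# Jupiter from 52 deg north; transit at 5h20m42s CET
t_rise, t_set = rise_set(5 + 20 / 60 + 42 / 3600, 5.567, 52.0)
# about 22h52m and 11h49m CET
```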
1.14. Approximations
There are several approximations that one can make to simplify the calculations:
• approximate the elliptical orbit by a circular one (i.e., set e equal to 0). Then ν = M and r = a so steps 2 and 3 have become very simple.
• assume that the orbit is nearly a circle, so e is much smaller than 1. Then, approximately,
(Eq. 45) ν ≈ M + 2 e sin M + (5⁄4) e^2 sin 2M
and then step 2 has become simpler.
• ignore the angle between the plane of the orbit and the plane of the orbit of the Earth (i.e., set i equal to 0). Then step 4 becomes much simpler, namely
(Eq. 46) x[planet] = r cos(Ω + ω + ν)
(Eq. 47) y[planet] = r sin(Ω + ω + ν)
(Eq. 48) z[planet] = 0
where Π = Ω + ω and the value of Π for the planets is displayed in the second table.
This held for the Earth anyway (according to our model), so then we also have β equal to 0, and then step 7 becomes
(Eq. 49) α = arctan(sin λ cos ε, cos λ)
(Eq. 50) δ = arcsin(sin ε sin λ)
• approximate sines and cosines of ε with polynomials. That only yields short formulas if you also assume that i is equal to 0 (which holds, for example, for the Sun as seen from Earth). Then,
(Eq. 51) α ≈ λ − (¼) ε^2 sin 2λ
(Eq. 52) δ ≈ ε sin λ
• assume that h[0] is much smaller than 10°. Then, approximately,
(Eq. 53) H[horizon] ≈ arccos(−tan δ tan φ) − h[0]/√((cos δ)^2 − (sin φ)^2)
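As a quick numerical check of the near-circle approximation (Eq. 45): for the small eccentricity of the Earth it reproduces the exact true anomaly of the example to about a thousandth of a degree (Python sketch, helper name mine; the correction terms are angles in radians, hence the conversions):

```python
import math

def approx_true_anomaly(M_deg, e):
    """Eq. 45: nu ~ M + 2e sin(M) + (5/4) e^2 sin(2M), valid for small e."""
    M = math.radians(M_deg % 360.0)
    nu = M + 2.0 * e * math.sin(M) + 1.25 * e * e * math.sin(2.0 * M)
    return math.degrees(nu) % 360.0

nu_earth = approx_true_anomaly(357.009, 0.01671)  # about 356.907 deg
```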
The next table shows the largest differences between the results that you get using some of these approximations and the results if you use the complete formulas mentioned before the section about
approximations. The approximations of which the errors are shown here are (1) assume that the orbit is almost a circle, (2) ignore the angle between the orbit of the planet and the orbit of the
Earth, and (3) approximate the sines and cosines of ε with polynomials. In general, approximation 1 yields the greatest reduction in the calculation time.
Table 5: Planets: Position Approximation Accuracy
[α] (degrees) [δ] (degrees) [Δ] (AU)
1+2+3 1+2 1 none 1+2+3 1+2 1 none 1+2+3 1+2 1 none
Sun 0.14 0.032 0.032 0.032 0.26 0.011 0.011 0.011 0.0001 0.0001 0.0001 0.0001
Mercury 2.3 2.3 0.45 0.088 5.0 4.9 0.23 0.045 0.0084 0.0084 0.0056 0.0013
Venus 3.4 3.4 0.17 0.17 8.2 7.9 0.05 0.05 0.0045 0.0045 0.00078 0.00077
Mars 2.6 2.6 0.41 0.26 6.3 6.5 0.14 0.074 0.0028 0.0028 0.0026 0.0017
Jupiter 0.87 0.86 0.32 0.32 1.8 1.6 0.12 0.12 0.0095 0.0095 0.0094 0.0093
Saturn 1.2 1.2 1.1 1.1 3.2 3.0 0.44 0.43 0.049 0.049 0.049 0.049
Uranus 1.1 0.99 1.0 1.0 1.1 1.1 0.35 0.35 0.047 0.047 0.047 0.047
Neptune 1.1 1.1 0.68 0.68 1.4 1.4 0.27 0.27 0.072 0.072 0.072 0.072
For example: if you calculate the declination δ of Mercury without approximations 1 - 3 (but with the approximation that its orbit is a fixed ellipse, which is assumed everywhere on this page), then
the greatest difference relative to the results of the VSOP theory is 0.045°. If you use only approximation 1, then it becomes 0.23°. If you use approximations 1 and 2, then it becomes 4.9°. If you
use approximations 1 - 3 then the greatest difference is 5.0°.
2. Position of the Moon
The position of the Moon cannot be calculated in the same fashion as the positions of the planets, because the orbit of the Moon around the Earth itself spins around with a period of only about 18
years, so that the nodes of the orbit of the Moon shift once around the whole ecliptic in that period. If you want to calculate the position of the Moon very accurately, then you must take into
account thousands of terms and add them up. Here, I ignore all of those terms except for the very largest ones.
Use the numbers from the next table to calculate the mean geocentric ecliptic longitude L compared to the equinox of the date, the mean anomaly M, and the mean distance F of the Moon from its
ascending node, measured in degrees. The value of such a quantity is equal to c[0] + c[1] (d − d[0]) where c[0] and c[1] come from the table and d − d[0] is the number of days since 1 January 2000,
12:00 UTC, just like in the calculations for the planets.
c[0] c[1]
L 218.316 13.176396
M 134.963 13.064993
F 93.272 13.229350
Now calculate the geocentric ecliptic coordinates λ (longitude), β (latitude), and Δ (distance), with the angles in degrees and the distance in kilometers:
(Eq. 54) λ = L + 6.289 sin M
(Eq. 55) β = 5.128 sin F
(Eq. 56) Δ = 385001 − 20905 cos M
You can now use step 7 and later ones from the method for the planets to calculate the right ascension and declination or the height and azimuth.
We calculate the position of the Moon for 0:00 UTC on 1 January 2004, which is 1460.5 days after d[0]. Then L = 218.316 + 13.176396 * 1460.5 = 19462.44° = 22.44°, M = 134.963 + 13.064993 * 1460.5 =
19216.39° = 136.39° and F = 93.272 + 13.229350 * 1460.5 = 19414.74° = 334.74°. This yields λ = 22.44 + 6.289 * sin(136.39°) = 26.78°, β = 5.128 * sin(334.74°) = −2.19° and Δ = 385001 − 20905 * cos
(136.39°) = 400136 km.
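The whole truncated lunar theory of Eqs. 54-56 fits in a few lines of Python (helper name mine), reproducing the example values:

```python
import math

def moon_position(days):
    """Eqs. 54-56: approximate geocentric ecliptic longitude and latitude
    (degrees) and distance (km), with 'days' counted from 12:00 UTC on
    1 January 2000."""
    L = (218.316 + 13.176396 * days) % 360.0  # mean longitude
    M = (134.963 + 13.064993 * days) % 360.0  # mean anomaly
    F = (93.272 + 13.229350 * days) % 360.0   # mean distance from node
    lam = (L + 6.289 * math.sin(math.radians(M))) % 360.0
    beta = 5.128 * math.sin(math.radians(F))
    delta = 385001.0 - 20905.0 * math.cos(math.radians(M))
    return lam, beta, delta

lam, beta, delta = moon_position(1460.5)
# about (26.78 deg, -2.19 deg, 400136 km)
```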
I calculated the position of the Moon (λ, β, Δ) for every day between 1950 and 2050 according to the above method and also according to a much more accurate method (of which the above method includes
only the very largest terms). The largest error in λ was 2.57 degrees (standard deviation 1.04 degrees), that in β 0.81 degrees (standard deviation 0.31 degrees), and that in Δ 7645 km (standard
deviation 3388 km).
languages: [en] [nl]
Last updated: 2013-10-08
Apache Junction
Apache Junction, AZ 85118
Experienced Teacher: Alg I, II, College Alg, Geometry
I have enjoyed teaching mathematics to a wide variety of students over the past 30 years. I began teaching high school mathematics (I have a minor in secondary education). After four years I went on to teach all levels of mathematics through Calculus at the University...
Offering 5 subjects including algebra 1, algebra 2 and calculus
call maple routine in matlab
April 25th 2009, 04:32 AM #1
Apr 2009
OK, first off, I'm pretty new to Maple, but I turned to it when I needed some symbolic integration and summation done, and I don't have the symbolic maths package for MATLAB.
I have a short bit of maths that seems to work well in Maple: a double integral, then a summation. There are two values I would like to substitute into it (r and theta, polars) to find the value of the solution at a given point. How best can I go about substituting these values into the Maple expression and returning the answer to MATLAB? I can just use the evalf(subs()) expression in Maple and copy it across, but this is something I need to do thousands/millions of times in a loop.
Alternatively, (how) can I use loops in Maple to generate and process the data, then output the lot into a file I can import into MATLAB to process further?
If I've been too vague anywhere, feel free to ask for more details.
Quantum Quandaries: A Category-Theoretic Perspective
John Baez
April 24, 2007
Category theory is a general language for describing things and processes - called "objects" and "morphisms". In this language, the counterintuitive features of quantum theory turn out to be
properties that the category of Hilbert spaces shares with the category of cobordisms - in which objects are choices of "space", and morphisms are choices of "spacetime". In particular, both
these categories - but not the category of sets and functions - are noncartesian monoidal categories with duals. We show how this accounts for many of the famously puzzling features of quantum
theory: the failure of local realism, the impossibility of duplicating quantum information, and so on. We argue that these features only seem puzzling when we try to treat the category of Hilbert
spaces as analogous to the category of sets rather than the category of cobordisms, so that quantum theory will make more sense when regarded as part of a theory of spacetime. Moreover, the
common features of the category of Hilbert spaces and the category of cobordisms are best understood by studying categories of "spans" and "cospans".
Click here to see the slides of the talk:
• Quantum Quandaries: a Category-Theoretic Perspective, in PDF and Postscript
For more on this subject try these introductory papers:
Also try this somewhat more technical one:
Note: on page 10 of my talk, I fail to mention the requirement that Δ[x] and e[x] be symmetric monoidal natural transformations. For precise details, see page 47 and some preceding pages here:
Text © 2007 John Baez
Image © Aaron Lauda
improper integral
March 5th 2008, 09:09 AM #1
Aug 2007
improper integral
So, I need to either use tests or evaluation to decide whether this integral converges or diverges, but I'm not sure what to do!
integral from 0 to 1 of: 1/(t - sin(t))
I thought of doing a comparison to 1/t, but -sin(t) can take on both positive and negative values, so I'm not quite sure where to go.
March 5th 2008, 09:12 AM #2
Global Moderator
Nov 2005
New York City

Quote: So, I need to either use tests or evaluation to decide whether this integral converges or diverges... integral from 0 to 1 of: 1/(t - sin(t)) ... I thought of doing a comparison to 1/t, but -sin(t) can take on both positive and negative values, so I'm not quite sure where to go.

Note that $\frac{1}{x-\sin x} \geq \frac{1}{x}$ on $[\epsilon,1]$ where $\epsilon >0$, since $0 < x - \sin x \leq x$ there. And note $\int_0^1 \frac{dx}{x}$ diverges.
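The comparison can also be seen numerically: since t − sin t ≈ t³/6 near 0, the integrand grows like 6/t³, and the truncated integrals blow up as the lower limit shrinks. A quick midpoint-rule sketch (illustrative, not from the thread):

```python
import math

def truncated_integral(eps, n=100000):
    # midpoint rule for the integral of 1/(t - sin t) over [eps, 1]
    h = (1.0 - eps) / n
    total = 0.0
    for i in range(n):
        t = eps + (i + 0.5) * h
        total += h / (t - math.sin(t))
    return total

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, truncated_integral(eps))   # grows roughly like 3 / eps**2
```

Each factor-of-10 decrease in the lower limit multiplies the value by roughly 100, confirming divergence.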
Jim McFadden
CS 224n Final Project
Information Extraction with Hidden Markov Models
Information extraction is the process of extracting relations out of natural language text. For example, from the sentence, "The Palo Alto headquartered Redwood Software, Inc. has sold off its
internet branch BuyStuff.com to industry giant Huge Software Company for 30 million dollars.", the relation (BuyStuff.com, Huge Software Company, Redwood Software, 30 million) could be extracted into
a database with fields (Acquired Company, Buying Company, Selling Company, Dollar Amount).
A large majority of the information on the internet and even on computers in general, whether it’s in the form of web pages, emails, newsgroups, papers, or company documents, is just plain natural
language text. The problem of extracting more structured information out of this data, therefore, is an important one. Creating information extraction tools would be very useful.
The goal of this project was to create the best information extraction tool I could. As a starting point, I used the three papers that were handed out in class. The most appealing approach to me was
to build a maximum entropy model. These models allow the use of arbitrary features. For example, part of speech information can be combined with extraction patterns. For the sake of simplicity and
for ease of implementation, I decided to build a Hidden Markov Model instead. Compared to Maximum Entropy Models, Hidden Markov Models are not quite as versatile but easier to implement.
The basis of the Hidden Markov Model I created was Dayne Freitag and Andrew McCallum's paper, "Information Extraction with HMM Structures Learned by Stochastic Optimization". I originally intended to automatically learn HMM structures using the techniques they describe in the paper, but I found that just getting a hand-created structure to run was a lot of work for one person.
Hidden Markov Model Overview
A Hidden Markov Model is a finite-state automaton that has probabilities associated with each state transition and with what symbol is emitted at each state. A Hidden Markov Model, then, consists of
three parts:
-a set S of n states
-an n by n matrix of transition probabilities A from each state to each state
-an n by v matrix B of emission probabilities that expresses the probability of emitting each word in the vocabulary (of v words) from each state
Generally, the emission probabilities can depend on both the from state and the to state of a transition. I decided to have my emission probabilities depend only on the from state because it made
intuitive sense and made learning parameters easier.
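The three-part description above can be captured in a small container class. This is an illustrative sketch (the names are mine, not the author's code); rows of A and B start uniform and row-stochastic, and B is indexed by the "from" state only, as in the text.

```python
class HMM:
    """n states, an n-by-n transition matrix A, an n-by-v emission matrix B."""

    def __init__(self, n_states, vocab_words):
        self.n = n_states
        self.vocab = {w: i for i, w in enumerate(vocab_words)}
        v = len(vocab_words)
        # Start from uniform row-stochastic matrices; training reestimates them.
        self.A = [[1.0 / n_states] * n_states for _ in range(n_states)]
        self.B = [[1.0 / v] * v for _ in range(n_states)]
```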
Specific to the information extraction problem, I used two ideas from the Freitag and McCallum paper. First, for each field that I am trying to extract from a document, I built a separate HMM. This
is done so that each field is treated separately and can have parameters learned specific to that field. I think this would perform better than having only one HMM that had to try to encode the
structure of all of the fields. Second, there are two kinds of states: target states and background states. Only target states emit the "target" information, or the field we are trying to extract.
The way a HMM can be used to solve the information extraction problem is as follows. The input document is a sequence of words. It is then assumed that some HMM was used to generate this sequence.
The Viterbi algorithm, explained later, is used to find the most likely sequence of states that the HMM went through to output that document. Then, the words determined to be most likely emitted by
the target states are extracted as the field we are looking for.
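The decoding step just described can be sketched as follows. This is a hedged illustration, assuming all probabilities are strictly positive; `start` (initial state probabilities) and `is_target` (flags marking the target states) are assumptions of this sketch, and log-probabilities are used, as discussed in the implementation section.

```python
import math

def viterbi_extract(words, A, B, vocab, start, is_target):
    """Run Viterbi, then return the words aligned with target states."""
    n = len(A)
    delta = [math.log(start[s]) + math.log(B[s][vocab[words[0]]])
             for s in range(n)]
    back = []
    for w in words[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(n):
            best = max(range(n), key=lambda r: prev[r] + math.log(A[r][s]))
            delta.append(prev[best] + math.log(A[best][s])
                         + math.log(B[s][vocab[w]]))
            ptr.append(best)
        back.append(ptr)
    # Follow the back-pointers to recover the most likely state path.
    s = max(range(n), key=lambda r: delta[r])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    path.reverse()
    return [w for w, s in zip(words, path) if is_target[s]]
```

On a toy two-state model with one target state, the returned list is exactly the words aligned with the target state on the Viterbi path.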
This describes how a HMM is used to do extraction. Before we can use a HMM, we have to build it. For this project, I started with this hand-built structure from the Freitag and McCallum paper.
The circles represent background states and the octagons represent target states. The target states are connected to themselves and each of the other target states. Only one background state goes to
any target and it connects to all of them. The targets only connect to one background state and all of them connect to it. Intuitively, this structure seems like it would be good for extraction. The
top node that connects to itself is a garbage node that the HMM stays at for most of the document. It transitions to itself with high probability. The sequence of four states before the target
represents a prefix before the target. The four prefix nodes have high emission probabilities for words that are likely to precede the target. The four states after the target work similarly for words that commonly follow the target.
To build a working model, I start this structure with reasonable transition and emission probabilities. Then I use the Baum-Welch parameter estimation algorithm and a large set of tagged training
data to find better probabilities. This algorithm uses the current parameters and an output sequence to estimate new parameters. This algorithm converges to a local maximum in the space of parameters
so I was careful to select reasonable parameters to start with, rather than random ones.
The two main algorithms are the Viterbi algorithm for finding the most likely state sequence and the Baum-Welch algorithm for parameter estimation. I implemented these algorithms exactly as they are stated in Manning and Schütze, pp. 331-335, with the modification for target states described in Freitag and McCallum. In parameter estimation, we make the restriction that only target states can emit
target words and only background states can emit non-target words. To do this, at all steps we let the probability of being in state s at time t be zero if the type of s (target or background) does
not match the type of the output at the t position of the training sequence. For the Viterbi algorithm, there is no modification.
I implemented the HMM using the Java programming language. To run my extraction tool, use
java HMM <training file> <testing file> <field name>
where training file and testing file contained tagged data (one of whose tagged fields is field name). The data files contain documents separated by the token "ENDOFDOC". The HMM will train itself
using the tags in the training file and will then attempt to extract the given field from the documents in the testing file, and evaluate its performance against the tags in the testing file.
One decision I had to make was regarding how to do smoothing for the emission probabilities. If we let the probability of emission of unseen words be zero, then the HMM will not work as intended
because all state sequences will have a probability of zero for that input sequence. When reading the training document, I kept a count for each word in the vocabulary. I started the emission
probabilities for each word at each state as the count over the sample size with add-one smoothing. So unseen words are given a small probability. After training the emission probabilities change
depending on the state, but I let the probability of unseen words remain the same at its small value. This avoids the problem of zero probabilities.
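The initialization just described amounts to add-one smoothing over the vocabulary (a sketch with hypothetical helper names):

```python
def smoothed_emissions(counts, total, vocab_size):
    """counts: word -> corpus count; returns add-one-smoothed probabilities."""
    return {w: (c + 1) / (total + vocab_size) for w, c in counts.items()}

def unseen_prob(total, vocab_size):
    """Small fixed probability reserved for any word never seen in training."""
    return 1 / (total + vocab_size)
```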
The most significant problem I ran into was floating point underflow. Because the HMM algorithms multiply many probabilities together, the numbers being manipulated are extremely small. I tried
solving this problem using scaling factors. I would multiply the probabilities at each time by a constant depending exponentially on t. When computing probabilities across times the scaling factors
cancel. I was able to get this to work, but not satisfactorily. I found that the scaling factor had to be carefully chosen in a range that depended heavily on the size of the document. If the chosen
factor was too large, there would be floating point overflow. If it was too small, there would be underflow. Because the factor was dependent on the size of the document and the documents varied
widely in size, I couldn’t just find one scaling factor that worked. So I decided to work strictly with the logs of probabilities. It is easy to do this when multiplying probabilities; one must
simply add the logs. Taking sums of probabilities is more difficult. To do this, I used the log_add algorithm given in Manning and Schutze p. 337. This worked very well, although it did slow the
parameter estimation algorithm significantly.
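The log-space sum mentioned above (logsumexp for two arguments) can be written so the exponential never overflows; a sketch along the lines of the log_add routine in Manning and Schütze:

```python
import math

def log_add(x, y):
    """Return log(e**x + e**y) without leaving log space."""
    if x < y:
        x, y = y, x
    # Now x >= y, so e**(y - x) <= 1 and cannot overflow.
    return x + math.log1p(math.exp(y - x))
```

Products of probabilities then become sums of logs, and sums of probabilities go through log_add.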
I also ran into a problem with the structure of the HMM I was using. Extracting the acquired field from the sentence, "Company A acquired <acquired>B</acquired> and <acquired>C</acquired>.", my model would give a probability of zero. This was because the minimum prefix-plus-suffix between targets in my model was 9 words. To fix this, I added low-probability transitions in both directions between the garbage state and one of the targets. This fixed the problem.
Testing and Performance
I used the corporate acquisitions data set that consists of Reuters articles hand tagged by Dayne Freitag. I chose this data set because it was available and already tagged. This is a set of about
400 documents that each describe a corporate acquisition. They are tagged with a set of pertinent fields like seller, acquired, purchaser, dlramt (price of purchase), status (status of acquisition),
acqloc (location of acquired company), and acqbus (business that the acquired company is in).
I divided the data into two sets. I used a random selection of about two thirds of the documents for training and the remaining one third for testing. I tested the performance in two ways. I counted the number of documents for which the HMM perfectly extracted the exact set of correct words. For each document, I also calculated a "score" equal to (true positives / (size of correct set + false positives)). In the case that the HMM correctly extracted an empty set, the score is 1. This score does a pretty good job of evaluating performance because the higher the percentage of extracted words that are correct, the closer the score is to 1. Also, guessing at more words doesn't help because the score penalizes false positives.
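The per-document score described above can be written directly (an illustrative helper; set semantics are assumed):

```python
def score(predicted, correct):
    """true positives / (size of correct set + false positives).

    A correctly-predicted empty set scores 1, as described in the text.
    """
    predicted, correct = set(predicted), set(correct)
    if not predicted and not correct:
        return 1.0
    true_pos = len(predicted & correct)
    false_pos = len(predicted - correct)
    return true_pos / (len(correct) + false_pos)
```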
Here are the results:
│ Field │ % Perfect │ % Score │
│ acquired │ 16.4 │ 28.1 │
│ seller │ 25.4 │ 53.7 │
│ purchaser │ 23.2 │ 36.6 │
│ status │ 39 │ 44.2 │
│ dlramt │ 61.6 │ 62.6 │
│ acqbus │ 55.4 │ 58.6 │
│ acqloc │ 69.5 │ 71 │
│ Average │ 41.5 │ 50.7 │
The results for the fields that did well are a little misleading. Not all of the documents contain all of the fields. For the fields that did the best, a lot of the documents don't contain that field, so all the HMM had to do to get them right was correctly identify them as empty. At the same time, these documents were tagged with even more tags than are shown here. In addition to the acquired and seller fields, for instance, there were acqcode, acqabr, sellercode, and sellerabr fields that contained codes and abbreviations for these companies. This made it more difficult for the HMM because if it identifies what was really the acqabr as the acquired field, this is counted as wrong.
These results might seem somewhat discouraging: on average, only about 40% of fields are extracted correctly. But information extraction is a hard problem. For each field, only around three words must be identified out of a document of about 80-250 words. In the Freitag and McCallum paper, using the same data, they report that their "grown" HMM (which has a learned structure) gets only 41.3% on the acquired field and only 54.4% on the dlramt field, with average performance of only 57.2% over a variety of tasks.
transform x-2y>-4?
Hi there andrelt375, you need to employ your knowledge of algebra and inequalities here. We need to isolate $y$ in $x-2y>-4$. First, let's subtract $x$ from both sides: $x{\color{red}-x}-2y>-4{\color{red}-x}$, which gives us $-2y>-4-x$. Now we need to divide each side by $-2$. When we divide through by a negative value, the inequality sign flips the other way: $\frac{-2y}{{\color{red}-2}}<\frac{-4-x}{{\color{red}-2}}$, leaving $y<\frac{-4-x}{-2}$. This can be simplified further as $y<\frac{-4}{-2}+\frac{-x}{-2}$, so $y<2+\frac{x}{2}$. And we are finished.
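A quick numeric sanity check of the rearrangement (illustrative only): the original inequality and the final form should agree at every sample point, including the sign flip from dividing by −2.

```python
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    # x - 2y > -4 holds exactly when y < 2 + x/2
    assert (x - 2 * y > -4) == (y < 2 + x / 2)
print("all sample points agree")
```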
Last edited by pickslides; April 2nd 2010 at 01:53 PM. Reason: flipped some inequalities...
HELP! PLEASE! When a crew rows with the current, it travels 16 miles in 2 hours. Against the current, the crew rows 8 miles in 2 hours. Let x = the crew's rowing rate in still water and let y = the rate of the current. (Solve with calculator.) What is the input? What is the output? What is the rate of rowing? What is the rate of the current?
Alright, I'll set it up for you and see if you can figure it out! Rate x Time = Distance... so I'm gonna go to the drawing board now!
|dw:1354314486582:dw| so x = 6... now plug x back into the equation (next drawing)
|dw:1354314768442:dw| and that is that! You should be able to finish the rest yourself!
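The drawings set up the system rate x time = distance: (x + y) * 2 = 16 with the current and (x - y) * 2 = 8 against it, i.e. x + y = 8 and x - y = 4. A one-line check of the elimination step:

```python
# Adding x + y = 8 and x - y = 4 eliminates y: 2x = 12.
x = (8 + 4) / 2
y = 8 - x
print(x, y)   # rowing rate 6 mph, current 2 mph
```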
The Mathematics Blog by Nilo de Roock
Scenegraphica progress: programming ( how I prefer to do it anyway ) involves a lot of throwing away of code. Start again from scratch often but with the know-how ( and portions of code ) of the
previous version. Eventually entire classes survive and a system emerges. There must be a name for this 'method'. Probably something with agile in it.
I just started on the third cycle of scenegraphica: x3. Not much of x2 survived. I am concentrating on color, surface properties of objects, light, shadow and viewpoints in this release.
- I am redesigning the root scenegraph class. It gets a property of type collection to store all the light sources in the universe.
- Instead of using Mathematica's own List I am developing an OO-System List class. It will implement Java's Collection interface. Think of it as a little brother of ArrayList.
To get ideas for scenegraphica I look a lot at professional ( Mathematica ) generated graphics.
for example.
It will take a while ( if at all ) before I can create graphics like that. Because the author of the article, Jean-Pierre Hebert, was so kind as to provide his formulas, it inspired me to try something myself:
ContourPlot[{Sin[x + y + Sin[x] + Sin[y]]}, {x, 0, 12 \[Pi]}, {y, 0, 12 \[Pi]}, Frame -> False, ColorFunction -> "Rainbow"]
and this:
ContourPlot[{Sin[x + y + 3 Sin[x] + 4 Sin[y]]}, {x, 0, 4 \[Pi]}, {y, 0, 4 \[Pi]}, Frame -> False, ColorFunction -> "Rainbow"]
Note the usage of ColorFunction, an option for graphics functions that specifies a function to apply to determine the colors of elements. "Rainbow" is just a standard color function that comes with Mathematica. The fun starts when you start making your own color functions.
It would be nice if I could use these patterns as textures for 3D objects. Not sure yet how to do that though.
Alexandria, VA Math Tutor
Find an Alexandria, VA Math Tutor
...Currently, I have been working through WyzAnt for two years now, and I love every minute of working with these students! I was also a Teaching Assistant in RPI's Department of Mathematical
Sciences for Calculus 1 and 2 classes for my last 3 years as a student there. I taught smaller classes of ...
9 Subjects: including differential equations, algebra 1, algebra 2, calculus
Hello! My name is Tony and I graduated from Oklahoma State University with a degree in Agricultural Economics and Philosophy. I absolutely love school and learning, so I took lots of extra course
work in a variety of fields such as chemistry and programming.
32 Subjects: including algebra 1, SAT math, ACT Math, economics
...I have an engineering degree from UCLA and well over 15 years of full-time teaching & tutoring experience. Since these tests are for admissions to Catholic Schools, I thought I would also
mention I am a practicing Catholic. Thanks for your consideration.
28 Subjects: including differential equations, logic, algebra 1, algebra 2
...As for athletics, I am skilled in the following areas: Swimming: I was a competitive swimmer for three years and enjoy teaching stroke mechanics and customizing drills to help you progress
through swimming. Golf: I was on my high school varsity team for three years. I cover swing mechanics (I...
13 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...I attended convocation 3 times a week where scripture was read and taught. Not only that, but I've taught Bible lessons and been a part of Bible studies my entire life. I played volleyball in high school for about two years, and also played intramurals in college and grad school.
56 Subjects: including prealgebra, piano, English, writing
Related Alexandria, VA Tutors
Alexandria, VA Accounting Tutors
Alexandria, VA ACT Tutors
Alexandria, VA Algebra Tutors
Alexandria, VA Algebra 2 Tutors
Alexandria, VA Calculus Tutors
Alexandria, VA Geometry Tutors
Alexandria, VA Math Tutors
Alexandria, VA Prealgebra Tutors
Alexandria, VA Precalculus Tutors
Alexandria, VA SAT Tutors
Alexandria, VA SAT Math Tutors
Alexandria, VA Science Tutors
Alexandria, VA Statistics Tutors
Alexandria, VA Trigonometry Tutors
Nearby Cities With Math Tutor
Annandale, VA Math Tutors
Arlington, VA Math Tutors
Bethesda, MD Math Tutors
Fairfax, VA Math Tutors
Falls Church Math Tutors
Forest Heights, MD Math Tutors
Fort Washington, MD Math Tutors
Hyattsville Math Tutors
Oxon Hill Math Tutors
Silver Spring, MD Math Tutors
Springfield, VA Math Tutors
Temple Hills Math Tutors
Vienna, VA Math Tutors
Washington, DC Math Tutors
Woodbridge, VA Math Tutors
[Haskell-cafe] instance Monad m => Functor m
Hans Aberg haberg at math.su.se
Wed Apr 9 12:04:30 EDT 2008
On 9 Apr 2008, at 17:49, Henning Thielemann wrote:
> Additionally I see the problem, that we put more interpretation
> into standard symbols by convention. Programming is not only about
> the most general formulation of an algorithm but also about error
> detection. E.g. you cannot compare complex numbers in a natural
> way, that is
> x < (y :: Complex Rational)
> is probably a programming error. However, some people might be
> happy if (<) is defined by lexicgraphic ordering. This way complex
> numbers can be used as keys in a Data.Map. But then accidental uses
> of (<) could no longer be detected. (Thus I voted for a different
> class for keys to be used in Data.Map, Data.Set et.al.)
I think it might be convenient to have a total order defined on all types, for that data-map sorting purpose you indicate. But it would then be different from the semantic order that some types have, so the former should have a different name.
Also, one might have
Ordering(LT, EQ, GT, Unrelated)
so it can be used on all relations.
> Also (2*5 == 7) would surprise people, if (*) is the symbol for a
> general group operation, and we want to use it for the additive
> group of integers.
This is in fact as it should be; the idea is to admit such things:
class Group(a; unit, inverse, mult) ...
class (Group(a; 0, (-), (+)), Monoid(a; 1, (*))) => Ring(a; 0, 1, (-), (+), (*)) ...
-- (or better variable names).
instance Ring(a; 0, 1, (-), (+), (*)) => Integer
A group can be written additively or multiplicatively; (+) is often reserved for commutative operations. But there is no way to express that, unless one can write
class AbelianGroup(a; unit, inverse, mult) where
mult a b = mult b a
One would need to add pattern matching to Haskell in order to make this useful.
More information about the Haskell-Cafe mailing list
integrate (1+2e^x-e^(-x))^(-1)
Re: integrate (1+2e^x-e^(-x))^(-1)
Of course, I am just showing that it can also be done as zf. suggested. The partial fractions work better in this integral.
Last edited by anonimnystefy (2013-02-19 10:43:46)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
A Note on Generalized Hardy-Sobolev Inequalities
International Journal of Analysis
Volume 2013 (2013), Article ID 784398, 9 pages
Research Article
Department of Mathematics, Indian Institute of Science, Bangalore 560012, India
Received 16 September 2012; Accepted 20 November 2012
Academic Editor: Tohru Ozawa
Copyright © 2013 T. V. Anoop. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
We are concerned with finding a class of weight functions $g$ so that the following generalized Hardy-Sobolev inequality holds: $\int_\Omega g\,u^2\,dx \leq C \int_\Omega |\nabla u|^2\,dx$, $u \in H_0^1(\Omega)$, for some $C > 0$, where $\Omega$ is a bounded domain in $\mathbb{R}^2$. By making use of the Muckenhoupt condition for the one-dimensional weighted Hardy inequalities, we identify a rearrangement invariant Banach function space so that the previous integral inequality holds for all weight functions in it. For weights in a subspace of this space, we show that the best constant in the previous inequality is attained. Our method gives an alternate way of proving the Moser-Trudinger embedding and its refinement due to Hansson.
1. Introduction
In this paper, we are interested in finding the best suitable function space for the weights $g$ so that the following generalized Hardy-Sobolev inequality holds:
$$\int_\Omega g\,u^2\,dx \leq C \int_\Omega |\nabla u|^2\,dx, \quad \forall u \in H_0^1(\Omega), \tag{1}$$
for some $C > 0$, where $\Omega$ is a bounded domain in $\mathbb{R}^2$. We say that $g$ is admissible if the previous inequality holds. We are also interested in finding a class of admissible functions for which the best constant in (1) is attained.
Let us first recall the classical Hardy inequality:
$$\int_0^\infty \left(\frac{u(t)}{t}\right)^2 dt \leq 4 \int_0^\infty |u'(t)|^2\,dt. \tag{2}$$
By taking a suitable substitution in (2), we get (3). The higher dimensional analogue of the previous inequality is referred to as the Hardy-Sobolev inequality in the literature:
$$\int_\Omega \frac{|u|^2}{|x|^2}\,dx \leq \left(\frac{2}{N-2}\right)^2 \int_\Omega |\nabla u|^2\,dx, \tag{4}$$
where $\Omega \subseteq \mathbb{R}^N$ is a domain containing the origin and $N \geq 3$. Clearly (4) does not hold when $N = 1$ or $2$, since $\frac{1}{|x|^2}$ is not locally integrable on a domain that contains the origin.
For $N \geq 3$, the Hardy-Sobolev inequality is generalized mainly in two directions, namely, the generalized Hardy-Sobolev inequalities and the improved Hardy-Sobolev inequalities. The first one refers to inequalities of the form (1) for more general weights instead of the homogeneous weight $\frac{1}{|x|^2}$. The second one relies on the fact that the best constant in (4) is not attained in $H_0^1(\Omega)$, and hence one anticipates improving (4) by adding nonnegative terms on the left-hand side. The first major improvement of the Hardy-Sobolev inequality was obtained by Brézis and Vázquez in [1], who proved the following inequality:
$$\int_\Omega |\nabla u|^2\,dx \geq \left(\frac{N-2}{2}\right)^2 \int_\Omega \frac{u^2}{|x|^2}\,dx + \mu(\Omega) \int_\Omega u^2\,dx, \tag{5}$$
where $\mu(\Omega) > 0$ depends only on $N$ and $|\Omega|$. Motivated by this inequality, several improved Hardy-Sobolev inequalities have been proved; for example, see [2–5].
For $N = 2$, as we pointed out before, the Hardy potential $\frac{1}{|x|^2}$ is not admissible for any domain in $\mathbb{R}^2$ that contains the origin. In [6], Leray showed that $\frac{1}{\left(|x| \log \frac{1}{|x|}\right)^2}$ is the right admissible function (analogous to the Hardy potential) for $N = 2$. In this paper, we focus on finding a large class of admissible functions, including Leray's function or its improvements (obtained by adding nonnegative terms), for the generalized Hardy-Sobolev inequalities in bounded domains of $\mathbb{R}^2$.
The most general sufficient condition (for any dimension) for the generalized Hardy-Sobolev inequalities is given by Maźja [7], in terms of the capacity. We recall that for a compact set , the
relative capacity of with respect is defined as First, let us see that Maźja’s capacity condition is very much intrinsic on (1). Let be a positive weight satisfying (1), then for any compact subset ,
By taking the infimum, we get . Therefore Maźja proved that the previous condition is indeed sufficient for (1) (Theorem 1/2.4.1, page 128 of [7]). Since Maźja’s condition is necessary and
sufficient, all the improved Hardy-Sobolev inequalities follow directly from Maźja’s result. However, verifying Maźja’s capacity condition for a general weight function is not an easy task. Thus it
is of interest to find certain verifiable conditions for the generalized Hardy-Sobolev inequalities by other means.
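Since the displays were stripped, here is the notation usually used for Maźja's condition (assumed, not recovered from the source):

```latex
% Relative capacity of a compact set F \subset \Omega:
\operatorname{cap}(F,\Omega)
   = \inf\left\{ \int_{\Omega} |\nabla u|^{2}\,dx \;:\;
       u \in C^{\infty}_{c}(\Omega),\; u \ge 1 \text{ on } F \right\}.

% Maz'ja's condition on the weight g:
\sup\left\{ \frac{\int_{F} g\,dx}{\operatorname{cap}(F,\Omega)} \;:\;
    F \subset \Omega \text{ compact},\; \operatorname{cap}(F,\Omega) > 0 \right\}
   \;<\; \infty .
```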
One such verifiable condition for is obtained by Visciglia in [8]. He proved that (1) holds for weights in the Lorentz space . The embedding of into the Lorentz space played a key role in his result.
The case is more subtle; for example, (1) does not hold when and ; see [9]. In this paper we obtain a verifiable condition for admissible functions for bounded domains in , using one-dimensional
weighted Hardy inequalities and certain rearrangement inequalities.
A general one-dimensional weighted Hardy inequality has the following form: For an excellent review of the weighted Hardy inequalities, we refer to [10] by Maligranda et al. Many necessary and
sufficient conditions on , , , for (9) to hold are available in the literature; see [11–13]. Here we make use of the so-called Muckenhoupt condition [13] to obtain a class of weight functions
satisfying (1). For a measurable function , we denote its decreasing rearrangement by and the maximal function of is denoted by , that is, . Now we define Indeed, is a rearrangement invariant Banach
function space with the norm see [14]. More details of the space are given in Section 3. Now we state our main theorem.
Theorem 1. Let be a bounded domain in and let . Then is admissible and
Having obtained , a class of admissible functions, one would like to know for which among them the best constant in (1) is attained in . Many authors have addressed this question when ; see [8, 15]
and the references therein. For , Maźja has a sufficient condition (see 2.4.2 of [7]) in terms of capacity. Here we consider the weights in a subclass of so that the best constant in (1) is attained
in . For a bounded domain (analogous to the space in [15]) we define We show that the best constant in (1) is attained in when , where is the positive part of . More precisely, we have the following
Theorem 2. Let and . Define Then is attained for some .
The Moser-Trudinger embedding of into the Orlicz space , the Orlicz space generated by the Orlicz function , can be used to show that functions are admissible. An embedding of , finer than that of
Moser-Trudinger, was established independently by Hansson [16] and by Brézis and Wainger [17]. As anticipated, this embedding gives a bigger class of admissible functions than . In [18], the authors used
this finer embedding and showed that the Lorentz-Zygmund space is admissible. We are not going to use any embeddings for proving that functions are admissible, instead we use some rearrangement
inequalities and one dimensional weighted Hardy inequalities. We would like to stress that the admissibility of functions can be used to give alternate proofs for the Moser-Trudinger embedding and
its refinement due to Hansson.
The rest of the paper is organised as follows. In Section 2, we recall the definition and properties of decreasing rearrangement. Further, we state some classical inequalities that will be used in the
subsequent sections. We discuss the properties of the space and give examples of function spaces contained in in Section 3. In Section 4, we give a proof for Theorem 1. The last section contains a
proof of Theorem 2.
2. Preliminaries
In this section, we recall the definition and some of the properties of symmetrization and certain inequalities concerning symmetrization that we use in the subsequent sections. For further details
on symmetrization, we refer to the books [19–21].
Let be a domain. Let .
Then the distribution function of is defined as where denotes the Lebesgue measure of a set . Now we define the decreasing rearrangement as The Schwarz symmetrization of is given by where is the
measure of the unit ball in and is the open ball centered at the origin with the same measure as .
Next we give some inequalities concerning the distribution function and rearrangement of a function.
Proposition 3. Let be a domain and let and be measurable functions on . Then,(a), ;(b)if and only if.
The map is not subadditive. However, we obtain a subadditive function from , namely, the maximal function of defined by The subadditivity of with respect to helps us to define norms in certain
function spaces.
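The displayed definitions in this section were lost in extraction. The standard formulas for the distribution function, the decreasing rearrangement, the Schwarz symmetrization, and the maximal function are (notation assumed):

```latex
\mu_{f}(t) = \bigl|\{ x \in \Omega : |f(x)| > t \}\bigr| , \qquad
f^{*}(s) = \inf\{ t > 0 : \mu_{f}(t) \le s \} ,

f^{\star}(x) = f^{*}\bigl(\omega_{N}|x|^{N}\bigr) \quad \text{on } \Omega^{\star},

f^{**}(t) = \frac{1}{t}\int_{0}^{t} f^{*}(s)\,ds , \qquad
(f+g)^{**} \le f^{**} + g^{**} \quad \text{(subadditivity)} .
```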
Finally, in the following proposition we state two important inequalities concerning Schwarz symmetrization (decreasing rearrangement).
Proposition 4. Let be a domain in with . Let and be two measurable functions on and let . Then one has the following inequalities.(a)The Hardy-Littlewood inequality: (b)The Polya-Szegö inequality:
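The two displays of Proposition 4 were stripped; in their standard form they read:

```latex
% (a) Hardy--Littlewood inequality:
\int_{\Omega} |f(x)\,g(x)|\,dx \;\le\; \int_{0}^{|\Omega|} f^{*}(s)\,g^{*}(s)\,ds .

% (b) Polya--Szego inequality (u \in W^{1,p}_{0}(\Omega)):
\int_{\Omega^{\star}} |\nabla u^{\star}|^{p}\,dx
   \;\le\; \int_{\Omega} |\nabla u|^{p}\,dx .
```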
Next we state a necessary and sufficient condition for one dimensional weighted Hardy inequality due to Muckenhoupt (see 4.17, [10]).
Proposition 5. Let and be nonnegative measurable functions such that . Let and let be the conjugate exponent of . Then for any , holds for all measurable functions if and only if Moreover, if is the
best constant in (21), then
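For reference, Muckenhoupt's theorem for the Hardy operator $f \mapsto \int_0^x f$ is usually stated as follows; the weights u, v and exponents p, p' here are assumed to match the stripped statement.

```latex
% (21) holds for all measurable f if and only if
B \;:=\; \sup_{r>0}
   \left(\int_{r}^{\infty} u(x)\,dx\right)^{1/p}
   \left(\int_{0}^{r} v(x)^{1-p'}\,dx\right)^{1/p'} \;<\; \infty ,

% and the best constant C in (21) then satisfies
B \;\le\; C \;\le\; p^{1/p}\,(p')^{1/p'}\,B .
```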
3. A Space for Admissible Functions
In this section, we define the function space and give its relation with certain classical function spaces. For a bounded domain , we define One can verify that is a Banach function space with the
norm The nomenclature refers to the fact that is the maximal rearrangement invariant Banach function space (Lorentz M-space) on with the fundamental function . Next we give some examples of function
spaces in .
Recall that, for a bounded domain , is the Orlicz space generated by the Orlicz function , that is, An analysis similar to that in Lemma 6.2 of [14] gives Using this equivalent definition, we show that is
contained in .
Proposition 6. Let be a bounded domain in . Then .
Proof. Let , then using the definition of and the monotonicity of we obtain Now by taking the supremum over in the previous inequality, we obtain the desired fact.
This inclusion is strict as seen in the following example.
Example 7. Let . For small, let Then for , does not belong to but belongs to .
For a bounded domain , the Zygmund space is defined as From the following proposition we see that is contained in .
Proposition 8. Let be a bounded domain in . Then for a measurable function , the following inequality holds:
Proof. Let . Then and the last inequality follows as .
Remark 9. In [18], using Hansson’s embedding, the authors showed that functions are admissible. Since , their result also follows from Theorem 1 without using Hansson’s embedding. Since functions are
admissible, Theorem 2.5 of [22] shows that the spaces and are equivalent. Thus, Theorem 1 indeed follows from Hansson’s embedding as in [18]. However, our proof for Theorem 1 relies only on certain
classical rearrangement inequalities and the Muckenhoupt condition for one-dimensional weighted Hardy inequalities.
In the next proposition, we show that the weights considered in [2] for the improved Hardy-Sobolev inequalities are in and hence belong to as well.
Proposition 10. Let be a bounded domain in and let . Then for ,
Proof. Let , and . Note that , , and hence A straightforward calculation gives where the last inequality follows since . Therefore Hence the proof.
Next we verify that weights in satisfy Maźja’s capacity condition.
Proposition 11. Let be a bounded domain in , , and . Then satisfies Maźja’s capacity condition, that is,
Proof. Using the Polya-Szegö inequality, we can easily verify that . Further, (see, e.g., page 106 of [7]). Now for a compact set , Therefore,
In the next section, we see that the space almost characterizes the radial weights satisfying Maźja’s condition (Theorem 13).
4. The Generalized Hardy-Sobolev Inequalities
In this section, we give a proof for Theorem 1. Further, when for some , we show that all radial and radially decreasing admissible weights necessarily lie in . First we have the following theorem on
one-dimensional weighted Hardy inequalities.
Theorem 12. Let be a bounded domain in and let . Then
The previous inequality is known for more general weights (even for measures); see [11–13]. Note that when , satisfies the Muckenhoupt condition (22), and hence the inequality follows from Proposition 5.
We prove Theorem 12 by adapting the proof of Theorem 4, chapter 4 of [10].
Proof. For simplicity, we let and . Since , we have Now using the Hölder inequality we obtain Therefore, The equality in the last step follows from Fubini's theorem on the interchange of the
order of integration. Next we estimate the innermost integral on the right-hand side of the previous inequality using (41):
Now by substituting back into (43), we get the result.
Proof of Theorem 1. Let . From the inequalities given in Proposition 4, it is clear that the inequality holds, if Since , we can rewrite the previous inequality as Now by Theorem 12, we see that the
previous inequality holds with .
In the following theorem, we show that our condition is almost necessary for the generalized Hardy-Sobolev inequality.
Theorem 13. Let and let be such that is positive, radial, and radially decreasing. If is admissible, then .
Proof. Let be admissible and let be such that We use certain test functions in to estimate . For , let Clearly and Therefore Further, since is radial and radially decreasing, we get Now (48), (51),
and (52) yield Thus by substituting in the previous inequality and noting that , we get Hence and .
Next we see how one can obtain Moser-Trudinger embedding and Hansson’s embedding using Theorem 1.
Remark 14 (an alternate proof for some classical embeddings). From Theorem 1, for each , we have the generalized Hardy-Sobolev inequality (i)Moser-Trudinger embedding: since , there exists such that
Thus from (55) we have The previous inequality shows that for each is a continuous linear functional on with . In other words, (the dual space of ) and In particular, for each and also (ii)Hansson’s
embedding: since , there exists such that As before, for each with , that is, Thus for and
5. The Best Constant in the Hardy-Sobolev Inequalities
In this section, we give a proof of Theorem 2 using a direct variational method. For , we define the following: Recall that It is easy to see that , where . Note that, for an admissible and is the
best constant in (1). Thus the best constant in (1) is attained for some if has a minimizer on . We show that, under the assumptions of Theorem 2, admits a minimizer on .
First we prove the following compactness theorem.
Lemma 15. Let be a bounded domain in and let . Then the map is compact.
Proof. For we show that . First we estimate the following: Since is bounded, from (55) we get that is bounded. Thus that if . Let . For a given , since , we choose such that Therefore Now For
sufficiently large , the first integral can be made smaller than since is embedded compactly in and . Thus we conclude that .
Proof of Theorem 2. Since and , there exists such that (see, e.g., Proposition 4.2 of [23]) and and hence the set is nonempty. Let be a minimizing sequence of on , that is, Using the coercivity of
, the reflexivity of , and the compactness of the embedding of into , we can further assume that converges weakly and almost everywhere to some . Since , from the previous lemma we get For , we write
Now use the Fatou’s lemma to obtain Thus we get . Set . Now the homogeneity and the weak lower semicontinuity of yields the following: Thus equality must hold at each step and hence . This shows that
and .
Next we give an equivalent definition for the space . Recall that is the closure of in .
Theorem 16. Let be a bounded domain in . Then the following statements are equivalent: (i), (ii) and .
Proof. We show that . Let and in . Note that for each is bounded and hence Let be given. We show that there exists such that for all . Since in , there exists such that Using the subadditivity of the
maximal operator , we get Note that from (76), there exists such that for all . Now (78) yields the result as . Let be given. For proving , we show that there exists such that . Since , we get such
that Let Clearly . Since is continuously embedded in and is dense in , we can choose so that Next we estimate . Let us compute the distribution function : For computing the symmetric rearrangement ,
we set . If , for all , we get the desired result as . Next we calculate , when . For , note that implies that (if , then , a contradiction). Thus for , using (83) we have Now for , again using (83),
we see that , for all and hence , for all . Thus Therefore, Since , from (80), we have . Now (82) yields and hence the proof is done.
Next we give examples of functions in .
Example 17. We have already seen that (Proposition 6). In fact, since is dense in and the inclusion of in is continuous. Also, using Theorem 16, one can verify that , for .
Remark 18 (an open problem). It is not known whether all the admissible functions are in . By setting , the problem can be rephrased as whether implies or not.
Remark 19. For and (not necessarily bounded), using the Muckenhoupt condition (22) and the inequalities given in Proposition 4, one can show that (as in Theorem 1) is admissible if satisfies The
previous condition shows that one can obtain Visciglia's result [8] without using the Lorentz-Sobolev embedding. In fact, one can give an alternate proof for the Lorentz-Sobolev embedding of into the
Lorentz space using arguments similar to those in Remark 14.
Acknowledgments
The author thanks Professor Mythily Ramaswamy for her encouragement and useful discussions. He also thanks Prof. A. K. Nandakumaran and Prof. Prashanth K. Srinivasan for their continuous support and
interest. The author also thanks the anonymous referees for their suggestions that greatly improved this paper. This work has been supported by UGC under the Dr. D. S. Kothari postdoctoral fellowship
scheme, Grant no. 4-2/2006 (BSR)/13-573/2011(BSR).
1. H. Brezis and J. L. Vázquez, “Blow-up solutions of some nonlinear elliptic problems,” Revista Matemática de la Universidad Complutense de Madrid, vol. 10, no. 2, pp. 443–469, 1997.
2. Adimurthi, N. Chaudhuri, and M. Ramaswamy, “An improved Hardy-Sobolev inequality and its application,” Proceedings of the American Mathematical Society, vol. 130, no. 2, pp. 489–505, 2002.
3. N. Chaudhuri and M. Ramaswamy, “Existence of positive solutions of some semilinear elliptic equations with singular coefficients,” Proceedings of the Royal Society of Edinburgh A, vol. 131, no. 6, pp. 1275–1295, 2001.
4. S. Filippas and A. Tertikas, “Optimizing improved Hardy inequalities,” Journal of Functional Analysis, vol. 192, no. 1, pp. 186–233, 2002.
5. N. Ghoussoub and A. Moradifam, “On the best possible remaining term in the Hardy inequality,” Proceedings of the National Academy of Sciences of the United States of America, vol. 105, no. 37, pp. 13746–13751, 2008.
6. J. Leray, “Étude de diverses équations intégrales non linéaires et de quelques problèmes que pose l'hydrodynamique,” Journal de Mathématiques Pures et Appliquées, vol. 12, no. 9, pp. 1–82, 1933.
7. V. G. Maz'ja, Sobolev Spaces, Springer Series in Soviet Mathematics, Springer, Berlin, Germany, 1985. Translated from the Russian by T. O. Shaposhnikova.
8. N. Visciglia, “A note about the generalized Hardy-Sobolev inequality with potential in $L^{p,d}(\mathbb{R}^n)$,” Calculus of Variations and Partial Differential Equations, vol. 24, no. 2, pp. 167–184, 2005.
9. C. Cosner, K. J. Brown, and J. Fleckinger, “Principal eigenvalues for problems with indefinite weight function on $\mathbb{R}^n$,” Proceedings of the American Mathematical Society, vol. 109, no. 1, pp. 147–155, 1990.
10. L. Maligranda, L.-E. Persson, and A. Kufner, The Hardy Inequality: About its History and Some Related Results, 2007.
11. G. Talenti, “Osservazioni sopra una classe di disuguaglianze,” Rendiconti del Seminario Matematico e Fisico di Milano, vol. 39, pp. 171–185, 1969.
12. G. Tomaselli, “A class of inequalities,” Bollettino della Unione Matematica Italiana, vol. 2, no. 4, pp. 622–631, 1969.
13. B. Muckenhoupt, “Hardy's inequality with weights,” Studia Mathematica, vol. 44, pp. 31–38, 1972. Collection of articles honoring the completion by Antoni Zygmund of 50 years of scientific activity, I.
14. C. Bennett and R. Sharpley, Interpolation of Operators, vol. 129 of Pure and Applied Mathematics, Academic Press, Boston, Mass, USA, 1988.
15. T. V. Anoop, M. Lucia, and M. Ramaswamy, “Eigenvalue problems with weights in Lorentz spaces,” Calculus of Variations and Partial Differential Equations, vol. 36, no. 3, pp. 355–376, 2009.
16. K. Hansson, “Imbedding theorems of Sobolev type in potential theory,” Mathematica Scandinavica, vol. 45, no. 1, pp. 77–102, 1979.
17. H. Brézis and S. Wainger, “A note on limiting cases of Sobolev embeddings and convolution inequalities,” Communications in Partial Differential Equations, vol. 5, no. 7, pp. 773–789, 1980.
18. M. Krbec and T. Schott, “Superposition of imbeddings and Fefferman's inequality,” Bollettino della Unione Matematica Italiana B, vol. 2, no. 3, pp. 629–637, 1999.
19. D. E. Edmunds and W. D. Evans, Hardy Operators, Function Spaces and Embeddings, Springer Monographs in Mathematics, Springer, Berlin, Germany, 2004.
20. S. Kesavan, Symmetrization & Applications, vol. 3 of Series in Analysis, World Scientific Publishing, Hackensack, NJ, USA, 2006.
21. E. H. Lieb and M. Loss, Analysis, vol. 14 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, USA, 2nd edition, 2001.
22. D. E. Edmunds and H. Triebel, “Sharp Sobolev embeddings and related Hardy inequalities: the critical case,” Mathematische Nachrichten, vol. 207, pp. 79–92, 1999.
23. B. Kawohl, M. Lucia, and S. Prashanth, “Simplicity of the principal eigenvalue for indefinite quasilinear problems,” Advances in Differential Equations, vol. 12, no. 4, pp. 407–434, 2007. | {"url":"http://www.hindawi.com/journals/ijanal/2013/784398/","timestamp":"2014-04-21T05:31:03Z","content_type":null,"content_length":"819157","record_id":"<urn:uuid:fefd3166-26ef-4cb0-ab7f-74f7a9799edf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Radical Transformations
12.1: Radical Transformations
Created by: CK-12
This activity is intended to supplement Algebra I, Chapter 11, Lesson 1.
ID: 11574
Time Required: 15 minutes
Activity Overview
In this activity, students will use the transformational graphing application to examine how the square root function is transformed on the coordinate plane. As an extension, students will examine
similar transformations on a cube root function.
Topic: Radical Functions
• Domain, Range
• Square Root
• Transformations
Teacher Preparation and Notes
• Students will need the Transformation Graphing application installed on their graphing calculators.
• To download the calculator application, go to http://www.education.ti.com/calculators/downloads/US/Activities/Detail?id=11574 and select Transformation Graphing under the Applications header.
Associated Materials
Problem 1 – The General Radical Function
Students will examine the graph of $y=\sqrt{x}$.
Students will also conjecture if the graph is always in the first quadrant.
Problem 2 – Transformations
In Problem 2, students will change values in the general equation of a square root graph. Students will determine the domain and range of a square root function based on the general equation. They
will revisit the question of where the graph lies in the plane. They will also describe the transformation performed on the graph by changing each variable.
Discussion Questions:
• How does each variable affect the graph of the function?
• How can we algebraically show the domain and range of the function?
• Why does the graph “stop” (no longer exist) on one side?
• How does the variable a affect the graph?
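These questions can also be explored numerically. A small Python sketch, assuming the general form y = a·√(x − h) + k that the activity's answer key refers to:

```python
import math

def radical(x, a=1.0, h=0.0, k=0.0):
    """General square-root function y = a*sqrt(x - h) + k."""
    if x < h:
        raise ValueError("outside domain: the function requires x >= h")
    return a * math.sqrt(x - h) + k

# h shifts the graph horizontally: y = sqrt(x - 3) starts at the point (3, 0).
start = radical(3, h=3)
# k shifts the graph vertically: y = sqrt(x) + 3 starts at the point (0, 3).
shifted = radical(0, k=3)
# a < 0 reflects the graph so it opens downward.
flipped = radical(4, a=-1)
```

Evaluating `radical` at any x < h raises an error, mirroring the restricted domain x ≥ h (and, for a > 0, the range y ≥ k) discussed in the answers; this is why the graph "stops" on one side.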
Extension – Cube Root Functions
In this problem, students will repeat the exploration above for the cube root function and determine domain, range, and transformations on the graph.
Note: The cube root can be entered by pressing MATH and selecting 5: $\sqrt[3]{\;\;}$
1. Domain: $x \ge 0$$y \ge 0$
2. Sample Response: The function is not defined for values less than zero because the square root becomes negative.
3. Sometimes (Incorrect answer is okay. This is a conjecture question.)
4. Sample response: The graph is a straight line because the square root is multiplied by zero, making the function $y = 0$
5. Sometimes (Now students should have the correct answer.)
6. Sample responses must have $\sqrt{x-3}$
7. $x \ge -2$
8. $h$
9. Sample response must have $-2$
10. $y \ge 3$
11. $k$
12. Sample response: Positive $4$$4$
13. Sample response: Positive $4$$2$
14. Sample response: Flips the graph open up or open down. Makes the graph steeper as $|a|$
15. $x \ge h$
16. $y \ge k$
17. Domain and range: All real numbers
18. $h$$k$$a$
| {"url":"http://www.ck12.org/tebook/Texas-Instruments-Algebra-I-Teacher%2527s-Edition/r1/section/12.1/","timestamp":"2014-04-17T19:28:54Z","content_type":null,"content_length":"107040","record_id":"<urn:uuid:830a9cde-4cda-439c-8478-b4565a35ce7d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Don't detect singular sparse matrix
The matrix is square, so it's not sparse QR. You can ask MATLAB what it is using via:
Try this:
>> A=eye(5); A(1,5)=-1; A(5,1)=-1;
>> A
A =
1 0 0 0 -1
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
-1 0 0 0 1
>> inv(A)
Warning: Matrix is singular to working precision.
ans =
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
>> inv(sparse(A))
ans =
(1,1) 2
(5,1) 2
(2,2) 1
(3,3) 1
(4,4) 1
(1,5) 2
(5,5) 3
>> spparms('spumoni',2)
>> inv(sparse(A))
sp\: bandwidth = 4+1+4.
sp\: is A diagonal? no.
sp\: is band density (0.28) > bandden (0.50) to try banded solver? no.
sp\: is A triangular? no.
sp\: is A morally triangular? no.
sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? yes.
sp\: is CHOLMOD's symbolic Cholesky factorization (with automatic reordering) successful? yes.
sp\: is CHOLMOD's numeric Cholesky factorization successful? no.
sp\: A is not positive definite.
CHOLMOD version 1.7.0, Sept 20, 2008: : status: warning, matrix not positive definite
Architecture: unknown
sizeof(int): 4
sizeof(UF_long): 8
sizeof(void *): 8
sizeof(double): 8
sizeof(Int): 8 (CHOLMOD's basic integer)
sizeof(BLAS_INT): 8 (integer used in the BLAS)
Results from most recent analysis:
Cholesky flop count: 8
Nonzeros in L: 6
memory blocks in use: 13
memory in use (MB): 0.0
peak memory usage (MB): 0.0
maxrank: update/downdate rank: 8
supernodal control: 1 40 (supernodal if flops/lnz >= 40)
nmethods=0: default strategy: Try user permutation if given. Try AMD.
Select best ordering tried.
method 0: user permutation (if given)
method 1: AMD (or COLAMD if factorizing AA')
prune_dense: for pruning dense nodes: 10
a dense node has degree >= max(16,(10)*sqrt(n))
flop count: 8
nnz(L): 6
final_asis: TRUE, leave as is
dbound: LDL' diagonal threshold: 0
Entries with abs. value less than dbound are replaced with +/- dbound.
grow0: memory reallocation: 1.2
grow1: memory reallocation: 1.2
grow2: memory reallocation: 5
nrelax, zrelax: supernodal amalgamation rule:
s = # columns in two adjacent supernodes
z = % of zeros in new supernode if they are merged.
Two supernodes are merged if (s <= 4) or (no new zero entries) or
(s <= 16 and z < 80%) or (s <= 48 and z < 10%) or (z < 5%)
sp\: But A is real symmetric; try MA57.
MA57 Control Parameters
Ordering used: AMD ordering using MC47.
Block Size for Level 3 BLAS: 16
Merge assembly tree nodes if number of eliminations
is less than: 16
Matrix is scaled using symmetrized version of HSL code MC64.
Maximum iterative refinement steps: 10
Pivot Threshold: 1.000000e-02
Zero Pivot Tolerance: 1.000000e-20
sp\: is MA57's symbolic analysis successful? yes.
Forecast number of Real entries in factors: 6
Forecast number of Integer entries in factors: 16
Forecast maximum frontal size: 2
Number of nodes in Assembly tree: 4
Forecast length of FACT array (real)
without numerical pivot: 39
Forecast length of ifact array (integer)
without numerical pivot 44
Length of fact required for a successful
completion of the numerical phase allowing
data compression (without numerical pivoting) 39
Length of ifact required for a successful
completion of the numerical phase allowing
data compression (without numerical pivoting) 44
Number of data compresses 0
Forecast number of floating point additions
for the assembly processes 6.000000e+00
Forecast number of floating point operations
to perform the elimination operations
counting multiply-add pair as two operations 8.000000e+00
*** Return code from MA57AD: 1
sp\: is MA57's numeric Factorization successful? yes.
Number of entries in factors: 6
Storage for real data in factors: 6
Storage for integer data in factors: 19
Minimum length of fact required for a successful
completion of the numerical phase: 39
Minimum length of ifact required for a successful
completion of the numerical phase: 44
Minimum length of fact required for a successful
completion of the numerical phase without
data compression: 39
Minimum length of ifact required for a successful
completion of the numerical phase without
data compression: 44
Order of the largest frontal matrix: 2
Number of 2x2 numerical pivots: 0
Total number of fully-summed variables passed to
the father node because of numerical pivot: 1
Number of negative eigenvalues: 0
Rank of factorization of the matrix: 4
Pivot step where static pivot commences and
is set to zero if no modification performed: 0
Number of data compresses on real
data structures: 0
Number of data compresses on integer
data structures: 0
Number of block pivots in factors 5
Number of zeros in triangle of factors: 0
Number of zeros in rectangle of factors: 0
Number of zero columns in rectangle of factors: 0
Number of static pivots chosen: 0
Number of floating point additions
for the assembly processes: 6.000000e+00
Number of floating point operations
to perform the elimination operations
counting multiply-add pair as two operations: 7.000000e+00
Minimum value of the scaling factor (MC64): 1.000000e+00
Maximum value of the scaling factor (MC64): 1.000000e+00
Maximum modulus of matrix entry: 1.000000e+00
*** Return code from MA57BD: 4
sp\: is MA57's solve successful? yes.
ans =
(1,1) 2
(5,1) 2
(2,2) 1
(3,3) 1
(4,4) 1
(1,5) 2
(5,5) 3
>> full(ans)
ans =
2 0 0 0 2
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
2 0 0 0 3
From the MA57 spec sheet, at http://www.hsl.rl.ac.uk/specs/ma57.pdf this looks like a warning that the matrix is rank deficient:
+4 Matrix is rank deficient on exit from MA57B/BD. In this case, a decomposition will still have been produced that will enable the subsequent solution of consistent equations. INFO(25) will be set
to the rank of the factorized matrix.
And sure enough, it is rank deficient:
>> rank(A)
ans =
>> svd(A)
ans =
(the rank(A) of 4 and the return code of 4 are coincidental). So the bug is not MA57, but the interface to it (MATLAB) that returns inv(A) in spite of the fact that the matrix is rank deficient. | {"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/281020","timestamp":"2014-04-19T04:24:47Z","content_type":null,"content_length":"61708","record_id":"<urn:uuid:6d8f2f8c-f774-42ce-a763-397942220f86>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
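The rank deficiency reported above is easy to confirm outside MATLAB; a minimal sketch in Python/NumPy building the same matrix as in the transcript:

```python
import numpy as np

# A = eye(5) with A(1,5) = A(5,1) = -1, as in the MATLAB transcript.
A = np.eye(5)
A[0, 4] = -1.0
A[4, 0] = -1.0

rank = np.linalg.matrix_rank(A)  # 4, so the 5x5 matrix A is singular
singular_values = np.linalg.svd(A, compute_uv=False)
smallest_sv = singular_values.min()  # numerically zero
```

Rows 1 and 5 are negatives of each other, so the smallest singular value vanishes; any "inverse" returned for this matrix is meaningless, which is exactly the bug being reported.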
Electrical Engineering Archive | August 24, 2010 | Chegg.com
Hi, I am trying to find the laplace transforms of two functions.
The first function is f(t)=K^t, where K is a constant. I know that the answer needs to be
The second function is f(t)=sin(a*t+ß). I know that the answer needs to be [a*cos(ß)+s*sin(ß)]/(s^2+a^2)
For the first function I tried to integrate by parts, but what i was doing never seemed to approach the answer.
For the second function I don't even know where to begin because I think there is some trig identity that I don't know
Thanks for the help
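For what it's worth, both transforms follow from the definition plus one identity each. Assuming K > 0 and Re(s) > ln K for the first, a sketch of the derivations (standard textbook results, written in the poster's notation):

```latex
% First function: rewrite K^t as an exponential.
\mathcal{L}\{K^{t}\} = \mathcal{L}\{e^{t\ln K}\}
  = \int_{0}^{\infty} e^{-(s-\ln K)t}\,dt
  = \frac{1}{s-\ln K}, \qquad \Re(s) > \ln K .

% Second function: the angle-addition identity
% \sin(at+\beta) = \sin(at)\cos\beta + \cos(at)\sin\beta, then linearity:
\mathcal{L}\{\sin(at+\beta)\}
  = \cos\beta\cdot\frac{a}{s^{2}+a^{2}} + \sin\beta\cdot\frac{s}{s^{2}+a^{2}}
  = \frac{a\cos\beta + s\sin\beta}{s^{2}+a^{2}} .
```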
| {"url":"http://www.chegg.com/homework-help/questions-and-answers/electrical-engineering-archive-2010-august-24","timestamp":"2014-04-21T09:01:07Z","content_type":null,"content_length":"28410","record_id":"<urn:uuid:f57184b3-acec-4dd3-9778-871bdb010cd1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
A ball is thrown directly upward from ground level with an initial speed of 80 ft/s. How high will it go? When will it return to the ground? explain please.
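One standard way to work it out, assuming constant downward acceleration g = 32 ft/s² (the usual US-unit textbook value) and no air resistance; a short Python sketch:

```python
g = 32.0    # ft/s^2, downward acceleration due to gravity
v0 = 80.0   # ft/s, initial upward speed

# The ball rises until its velocity v0 - g*t reaches zero.
t_peak = v0 / g                                  # 2.5 s
max_height = v0 * t_peak - 0.5 * g * t_peak**2   # 100 ft

# The flight is symmetric: the trip down takes as long as the trip up.
t_return = 2 * t_peak                            # 5 s
```

Equivalently, the maximum height is v0²/(2g) = 6400/64 = 100 ft, and the return time is the nonzero root of v0·t − ½g·t² = 0.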
| {"url":"http://openstudy.com/updates/4ee699c4e4b023040d4bb189","timestamp":"2014-04-19T15:30:37Z","content_type":null,"content_length":"37241","record_id":"<urn:uuid:cf8c0208-696b-48af-b67e-9758840189bd>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
You can do arithmetic with Mathematica just as you would on an electronic calculator.
Spaces denote multiplication in Mathematica. The front end automatically replaces spaces between numbers with light gray multiplication signs.
Arithmetic operations in Mathematica.
Arithmetic operations in Mathematica are grouped according to the standard mathematical conventions. As usual, , for example, means , and not . You can always control grouping by explicitly using
parentheses.
representation, but gives a precise result. | {"url":"http://reference.wolfram.com/mathematica/tutorial/Arithmetic.html","timestamp":"2014-04-19T06:57:27Z","content_type":null,"content_length":"35801","record_id":"<urn:uuid:3c04e085-f22a-4462-80d2-f80e0a25ffb5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum mechanics as an operationally time symmetric probabilistic theory
The standard formulation of quantum mechanics is
operationally asymmetric with respect to time reversal---in the language of
compositions of tests, tests in the past can influence the outcomes of tests in
the future but not the other way around. The question of whether this represents
a fundamental asymmetry or it is an artifact of the formulation is not a new
one, but even though various arguments in favor of an inherent symmetry have
been made, no complete time-symmetric formulation expressed in rigorous
operational terms has been proposed. Here, we discuss such a possible
formulation based on a generalization of the usual notion of test. We propose
to regard as a test any set of events between an input and an output system
which can be obtained by an autonomously defined laboratory procedure. This
includes standard tests, as well as proper subsets of the complete set of
outcomes of standard tests, whose realization may require post-selection in
addition to pre-selection. In this approach, tests are not expected to be
operations that are up to the choices of agents---the theory simply says what
circuits of tests may occur and what the probabilities for their outcomes would
be, given that they occur. By virtue of the definition of test, the
probabilities for the outcomes of past tests can depend on tests that take
place in the future.
Such theories have been previously called non-causal, but
here we revisit that notion of causality. Using the Choi-Jamiolkowski
isomorphism, every test in that formulation, commonly regarded as inducing
transformations from an input to an output system, becomes equivalent to a
passive detection measurement applied jointly on two input systems---one from
the past and one from the future. This is closely related to the two-state
vector formalism, but it comes with a conceptual revision: every measurement is
a joint measurement on two separate systems and not on one system described by
states in the usual Hilbert space and its dual. We thus obtain a static picture
of quantum mechanics in space-time or more general structures, in which every
experiment is a local measurement on a global quantum state that generalizes
the recently proposed quantum process matrix. The existence of two types of
systems in the proposed formalism allows us to define causation in terms of
correlations without invoking the idea of intervention, offering a possible
answer to the problem of the meaning of causation. The framework is naturally
compatible with closed time-like curves and other exotic causal structures. | {"url":"http://perimeterinstitute.ca/videos/quantum-mechanics-operationally-time-symmetric-probabilistic-theory","timestamp":"2014-04-18T23:46:59Z","content_type":null,"content_length":"30112","record_id":"<urn:uuid:1c1a50e6-2489-4352-938b-0effa0a594ae>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Calculators
Algebra Calculators Index
Equation Calculator
Solves equations, showing work and detailed explanation.
Solves equations by factoring, resorting to other methods when necessary.
Completing the Square
Solves equations by completing the square, resorting to other methods when necessary.
Use this calculator if you only want to simplify, not solve an equation.
3D Equation Graphing Calculator
Plots equations and enables users to rotate the graphs 360 degrees in any direction.
Solving With Substitution
Use this calculator to substitute one or more variables before solving.
Factoring and Prime Factoring Calculator
Uses trial and error to find all of the prime and composite factors of any positive integer.
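For the curious, the trial-division idea behind such a factoring calculator can be sketched in a few lines of Python (an illustration, not the site's actual code):

```python
def all_factors(n):
    """Every divisor of n, found by trying each candidate in turn."""
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factors(n):
    """Prime factorization of n by repeated trial division."""
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as many times as it fits
            out.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        out.append(n)
    return out

print(all_factors(12))    # [1, 2, 3, 4, 6, 12]
print(prime_factors(60))  # [2, 2, 3, 5]
```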
Percentage Calculator
This collection of simple calculators will solve various percentage problems using proportions.
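The proportion method these calculators use, part/whole = percent/100, amounts to one line of arithmetic; a hypothetical Python sketch:

```python
def percent_of(part, whole):
    """Solve part/whole = p/100 for p."""
    return 100.0 * part / whole

print(percent_of(30, 120))  # 25.0
```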
Simplifying
Simplifies fractions and mixed numbers or converts between mixed numbers and improper fractions.
Will add, subtract, multiply, and divide fractions, mixed numbers, and improper fractions.
Comparison (Greater-Than/Less-Than) Determines whether a fraction, mixed number, or improper fraction is equal to, less than, or greater than another. | {"url":"http://www.algebrahelp.com/calculators/","timestamp":"2014-04-17T18:27:49Z","content_type":null,"content_length":"7286","record_id":"<urn:uuid:d1247c8a-32cb-45e6-b55f-8407d384b82e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenFOAM guide/The SIMPLE algorithm in OpenFOAM
From OpenFOAMWiki
The Navier-Stokes equations for a single-phase flow with a constant density and viscosity are the following:
$\nabla \cdot \left( \rho \vec{U} \right) = 0$
$\frac{\partial \vec{U}}{\partial t} + \nabla \cdot \left( \vec{U} \vec{U} \right) - \nabla \cdot \left( \nu \nabla \vec{U} \right) = - \nabla p$
The solution of this pair of equations is not straightforward, because an explicit equation for the pressure is not available. One of the most common approaches is to derive an equation for the pressure by taking the divergence of the momentum equation and substituting it into the continuity equation.
2 The pressure equation
The momentum equation can be re-written in a semi discretised form as follows:
$a_p \vec{U}_p = H(\vec{U}) - \nabla p \Longleftrightarrow \vec{U}_p = \frac{H(\vec{U})}{a_p} - \frac{\nabla p}{a_p}$
$H(\vec{U}) = - \sum_n a_n \vec{U}_n + \frac{\vec{U}^o}{\Delta t}$
The first term of $H(\vec{U})$ represents the matrix coefficients of the neighbouring cells multiplied by their velocity, while the second part contains the unsteady term and all the sources except
the pressure gradient.
The continuity equation is discretised as:
$\nabla \cdot \vec{U} = \sum_f \vec{S} \cdot \vec{U}_f = 0$
where $\vec{S}$ is the outward-pointing face area vector and $\vec{U}_f$ the velocity on the face.
The velocity on the face is obtained by interpolating the semi discretised form of the momentum equation as follows:
$\vec{U}_f = \left( \frac{H(\vec{U})}{a_p}\right)_f - \frac{\left( \nabla p\right)_f }{\left( a_p\right)_f}$
By substituting this equation into the discretised continuity equation obtained above, we obtain the pressure equation:
$\nabla \cdot \left( \frac{1}{a_p} \nabla p \right) = \nabla \cdot \left( \frac{H(\vec{U})}{a_p} \right) = \sum_f \vec{S} \cdot \left( \frac{H(\vec{U})}{a_p}\right)_f$
3 The SIMPLE algorithm
The SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm couples the Navier-Stokes equations through an iterative procedure, which can be summarized as follows:
1. Set the boundary conditions.
2. Solve the discretized momentum equation to compute the intermediate velocity field.
3. Compute the mass fluxes at the cells faces.
4. Solve the pressure equation and apply under-relaxation.
5. Correct the mass fluxes at the cell faces.
6. Correct the velocities on the basis of the new pressure field.
7. Update the boundary conditions.
8. Repeat till convergence.
The steps 4 and 5 can be repeated for a prescribed number of times to correct for non-orthogonality.
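The under-relaxation mentioned in step 4 blends the freshly solved field with the previous iterate to keep the outer iterations stable; schematically (a plain Python sketch of the arithmetic, not OpenFOAM code, with alpha the relaxation factor):

```python
def under_relax(old, new, alpha):
    """Relaxed update: alpha = 1 keeps the new value, alpha -> 0 freezes the field."""
    return old + alpha * (new - old)

print(under_relax(1.0, 2.0, 0.3))  # 1.3
```

In practice alpha is typically around 0.3 for pressure and 0.7 for velocity in steady-state SIMPLE runs, but the stable choice is case-dependent.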
4 Implementation of the SIMPLE algorithm in OpenFOAM
The SIMPLE algorithm can be implemented in OpenFOAM as follows (The complete implementation of the algorithm can be seen in the source code of the simpleFoam solver provided with OpenFOAM):
• Store the pressure calculated at the previous iteration, because it is required to apply under-relaxation
p.storePrevIter();
• Define the equation for U
tmp<fvVectorMatrix> UEqn
(
    fvm::div(phi, U)
  - fvm::laplacian(nu, U)
);
tmp< > is used to reduce peak memory.
• Under-relax the equation for U
UEqn().relax();
• Solve the momentum predictor
solve(UEqn() == -fvc::grad(p));
• Update the boundary conditions for p
• Calculate the $a_p$ coefficient and calculate U
volScalarField AU = UEqn().A();
U = UEqn().H()/AU;
phi = fvc::interpolate(U) & mesh.Sf();
adjustPhi(phi, U, p);
• Define and solve the pressure equation and repeat for the prescribed number of non-orthogonal corrector steps
fvScalarMatrix pEqn
(
    fvm::laplacian(1.0/AU, p) == fvc::div(phi)
);
pEqn.setReference(pRefCell, pRefValue);
pEqn.solve();
phi -= pEqn.flux();
• Calculate continuity errors
# include "continuityErrs.H"
• Under-relax the pressure for the momentum corrector and apply the correction
p.relax();
U -= fvc::grad(p)/AU;
• Check for convergence and repeat from the beginning until convergence criteria are satisfied.
Note: In OpenFOAM 1.6. and 1.6.x the convergence check has been implemented in simpleFoam by defining
□ eqnResidual: Initial residual of the equation
□ maxResidual: Maximum residual of the equations after one solution step
□ convergenceCriterion: Convergence criterion specified by the user
The value of the initial residual can be obtained when solving the corresponding equation using the initialResidual() method. Two syntaxes are possible:
eqnResidual = solve
(
    UEqn() == -fvc::grad(p)
).initialResidual();
or, equivalently, for the pressure equation, since it has been already defined,
eqnResidual = pEqn.solve().initialResidual();
5 References
J. H. Ferziger, M. Peric, Computational Methods for Fluid Dynamics, Springer, 3rd Ed., 2001.
H. Jasak, Error Analysis and Estimation for the Finite Volume Method with Applications to Fluid Flows, Ph.D. Thesis, Imperial College, London, 1996. | {"url":"http://openfoamwiki.net/index.php/The_SIMPLE_algorithm_in_OpenFOAM","timestamp":"2014-04-17T21:23:17Z","content_type":null,"content_length":"35429","record_id":"<urn:uuid:96adaa99-9346-4df2-87c6-74318601e1b0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
mouse position openGL [Archive] - OpenGL Discussion and Help Forums
so I've got this little game I'm working on, but the problem is the Macintosh toolbox routine GetMouse returns an integer Point structure: two numbers between 0 and 800 and 0 and 600 (on an 800 x 600 screen). But when you use the OpenGL calls they do things in terms of floating point. There is glVertex3i, but the numbers are still nasty. Up till now I've been multiplying and adding arbitrary values to convert where the mouse is in pixels into the floating points, and I'm sick of it. I'm sure there is some function that truly specifies what the perspective matrix does to the frame buffer or
whatever...i assume that -1.0f and 1.0f in the x plane are the left and right sides of the screen while z is equal to 0.0f. but what about when z = -5.0f or something??? | {"url":"https://www.opengl.org/discussion_boards/archive/index.php/t-158911.html","timestamp":"2014-04-20T10:56:18Z","content_type":null,"content_length":"4910","record_id":"<urn:uuid:07c12820-2415-4c0a-94dc-628e4d3d2d6b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
free algebra tutorial testing
Author Message
Blind-Sommet Posted: Wednesday 27th of Dec 13:34
Hello there, I am a high-school student and pretty soon I will have my exams in algebra. I was never into math, but now I am worried that I won't be able to finish this course. I
came across free algebra tutorial testing and other math issues that I can’t understand . These topics really made me panic : midpoint of a line and trigonometric functions .
paying for a tutor is not possible for me, because I don't have any money. Please help me!!
From: High
Wycombe, UK
espinxh Posted: Thursday 28th of Dec 14:25
Have you checked out Algebrator? This is a great help tool and I have used it several times to help me with my free algebra tutorial testing problems. It is really very simple -you
just need to enter the problem and it will give you a complete solution that can help solve your assignment . Try it out and see if it is useful .
From: Norway
3Di Posted: Friday 29th of Dec 10:16
Hey , Thanks for the prompt response . But could you give me the details of genuine websites from where I can make the purchase? Can I get the Algebrator cd from a local book mart
available near my residence ?
From: 45°26' N,
09°10' E
Gog Posted: Saturday 30th of Dec 07:02
Algebrator is a simple software and is surely worth a try. You will find quite a few exciting stuff there. I use it as reference software for my math problems and can say that it
has made learning math more fun .
From: Austin, TX
chotsmedh Posted: Sunday 31st of Dec 09:17
Can a program really help me excel my math? Guys I don’t need something that will solve problems for me, instead I want something that will help me understand the subject as well.
From: 67212
Dxi_Sysdech Posted: Monday 01st of Jan 11:28
It is available at http://www.polymathlove.com/dividing-decimals-by-whole-numbers.html and really is the easiest program to get up and running. You can start learning math within
minutes of downloading the software.
From: Right here,
can't you see me? | {"url":"http://www.polymathlove.com/polymonials/trigonometric-functions/free-algebra-tutorial-testing.html","timestamp":"2014-04-20T03:11:04Z","content_type":null,"content_length":"49922","record_id":"<urn:uuid:c9f9b365-50bf-4ef6-84fe-f48341e0588a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: levelsof for many categories without sorting
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: levelsof for many categories without sorting
Date Wed, 7 Sep 2005 15:12:16 +0100
Note for anyone interested:
-levelsof- as implemented in Stata 9 differs
subtly from -levels- as added to Stata 8
during its lifetime.
That aside, I am very surprised at Iwan's
report that -levelsof- reports categories
according to their order of occurrence in the data.
That contradicts not just the help file, but
also the code as I read it (and for that matter
as I wrote it, originally). StataCorp would like
to see evidence, I am sure. I suspect Iwan's
impression is mistaken, but I am not sure why
it arises.
The general problem to which -levelsof- is
one solution is discussed in
A fairly general strategy for going through all
possible levels
* according to their order of first occurrence
* in the data
is as follows.
(This circumvents problems arising when -levelsof-
cannot cope.)
Suppose we have an identifier, say -id-.
First generate an observation number:
gen long obs = _n
Now we sort by -id-, breaking ties by
-obs-. The first observation in each block
then carries information on first occurrence.
We copy the observation number of first
occurrence to each other occurrence of the same id.
bysort id (obs) : replace obs = obs[1]
Now we tag ids from 1 to whatever, according
to first occurrence:
bysort obs : gen group = _n == 1
replace group = sum(group)
Those familiar with -egen, group()- may
recognise the basic idea here.
Now the number of groups is identifiable from
su group, meanonly
local max = r(max)
Typically then you loop over groups:
forval i = 1/`max' {
Within that loop, a look-up technique to
get the identifier concerned is, for
a numeric identifier:
su id if group == `i', meanonly
All identifiers in each group are the same,
so it matters little whether we pick up
the minimum, the mean or the maximum:
local which = r(min)
will do, for example.
If the identifier -id- is a string variable, a little
more work is needed. Outside the loop,
replace obs = _n
Inside the loop,
su obs if group == `i', meanonly
local which = id[`r(min)']
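As an aside, the net effect of the tagging above, numbering distinct identifiers 1, 2, ... in order of first appearance, can be sketched outside Stata in a few lines of Python:

```python
def group_by_first_occurrence(ids):
    """Number each distinct id 1, 2, ... in order of first appearance."""
    seen = {}
    return [seen.setdefault(v, len(seen) + 1) for v in ids]

print(group_by_first_occurrence(["b", "a", "b", "c"]))  # [1, 2, 1, 3]
```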
Barankay, Iwan
> I find the command "levelsof" very useful to cut down the
> time on loops when I run through the category of a variable
> (e.g. the location_ids of a large survey).
> What I also like is that the local macro generated by
> levlesof is - so it seams to me - still in the order in which
> it appears in the data and does not sort it which is needed
> at times (even though the hlp file of levelsof says
> otherwise). When usually a list is entered into a local it is
> then sorted.
> The problem of course is that there are constraints on
> levelsof when it hits the character limit.
> My question is:
> What can I use instead of levelsof for (i) a large number of
> categories to avoid the character constraint but which (ii)
> also keeps the categories in the order it appears in the data
> and does not sort it.
> (i) is much more important than (ii) but if someone did an
> elegant solution for (ii) I would love to hear of it.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2005-09/msg00168.html","timestamp":"2014-04-20T14:05:04Z","content_type":null,"content_length":"7942","record_id":"<urn:uuid:c4f2bdc9-14d3-4b03-84e3-de16ec00bbf8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Mathematics
Differential Geometry
The striking feature of modern Differential Geometry is its breadth, which touches so much of mathematics and theoretical physics, and the wide array of techniques it uses from areas as diverse as
ordinary and partial differential equations, complex and harmonic analysis, operator theory, topology, ergodic theory, Lie groups, non-linear analysis and dynamical systems. Research at Notre Dame
covers the following areas at the forefront of current work in geometric analysis and its applications.
1. Geodesics, minimal surfaces and constant mean curvature surfaces.
The global structure of a space may be investigated by the extensive use of geodesics, minimal surfaces and surfaces of constant mean curvature; such surfaces are themselves of physical interest
(membranes, soap films and soap bubbles). An important problem in the area is the determination of conditions on a compact Riemannian space which ensure the existence of infinitely many geometrically
distinct closed geodesics. We have proved this for compact Riemannian spaces with positively pinched curvature and in another direction established that if two compact surfaces of negative curvature
and finite area have the same length data for marked closed geodesics then the two surfaces must be isometric. Our research on minimal surfaces has produced a series of outstanding results on what
have long been recognized as crucial problems for the theory. These include the first breakthrough to finiteness in the extension of the classical Bernstein Theorem, the recent proof of the
uniqueness of the helicoid as the only non-flat complete embedded simply-connected minimal surface in 3-space, and the first solution of the free boundary problem for polyhedral surfaces, the
prototype for Jost’s Theorem. Our far-reaching generalization of the classical work of Delaunay classified all complete constant mean curvature surfaces admitting a one-parameter group of isometries;
the new infinite families of such surfaces generated by this work are currently of interest in other areas of surface theory.
2. Classical surface theory.
Classical surface theory is the study of isometric immersions of surfaces into Euclidean 3-space. In this study the umbilic points have a special significance (both topologically and geometrically)
and the Caratheodory conjecture of eighty years standing is one of the most resistant of problems in this area. Beginning with a generic geometric solution to this conjecture and the establishing of
a remarkable connection with the theory of compressible plane fluid flow, we have made profound contributions to our understanding of this phenomenon, so that these purely mathematical results are
now being applied to the solution of fundamental problems in the theory of relativity. Our work is an integral part of Rozoy’s celebrated solution of the Lichnerowicz Conjecture that a static stellar
model of a (topological) ball of perfect fluid in an otherwise vacuous universe must be spherically symmetric; this includes, as a special case, Israel’s theorem that static vacuum black-hole
solutions of Einstein’s equations are spherically symmetric, i.e., Schwarzschild solutions.
3. Complex geometry and analysis on non-compact manifolds.
Our work in complex geometry includes the affirmative solution of the Bochner Conjecture on the Euler number of ample Kaehler manifolds, a solution of Bloch’s Conjecture (on the degeneracy of
holomorphic curves in subvarieties of abelian varieties) and the classification of complex surfaces of positive bi-sectional curvature. Our current research on this area focuses on complex manifolds
with non-positive curvature, exhibiting various manifestations of hyperbolicity and parabolicity. Much of the progress in Riemannian geometry that took place over the last decades has been made via
the use of deep analytic techniques on non-compact manifolds. The central object of study is the Laplace operator, acting on functions and on differential forms. Our work on the spectral theory of
the Laplacian uses techniques from quantum mechanical scattering theory. A recent example has been one proof that the Laplacian of the 4-dimensional hyperbolic space is rigid, in the Hilbert space
sense. Probabilistic methods, coming from the theory of Brownian motion, have also been used with success in our discovery of a new family of Liouville manifolds having a positive lower bound for the
Laplacian spectrum; these manifolds provided counter-examples to a conjecture of Schoen and Yau on Liouville manifolds. Another recent accomplishment in the study of Laplace operators has been a
vanishing theorem for $L^2$ cohomology and its applications, via index theory, to the Euler number of non-positively curved compact Kaehler manifolds.
4. Geometric analysis via Gromov’s methods.
Over the last thirty years Gromov has made important contributions to diverse areas of mathematics and pioneered new directions in mathematics such as filling Riemannian geometry, almost flat
manifolds, word-hyperbolic groups, Carnot geometry and applications to the rigidity of symmetric spaces, to name but a few. Our work on geometric analysis via Gromov’s approaches includes an
affirmative solution to Gromov’s minimal volume gap conjecture for compact manifolds of non-positive curvature, isoperimetric inequalities on singular spaces of non-positive curvature and the study
of harmonic functions on non-compact spaces with Gromov’s hyperbolicity.
5. Spaces of positive scalar curvature.
In the past ten years it has been observed that there are profound connections between the existence of metrics with positive scalar curvature on a given compact space and the topological structure
of the space. An outstanding problem in this area is the existence of metrics of positive scalar curvature on compact spin manifolds. Gromov-Lawson conjectured that any compact simply-connected spin
manifold with vanishing $\hat A$ genus must admit a metric of positive scalar curvature. The expert in this area at Notre Dame successfully solved this important problem by a detailed study of
positive scalar curvature metrics on quaternionic fibrations over compact manifolds. In addition, our researchers have been interested in the study of metrics of positive scalar curvature on certain
compact manifolds such as exotic spheres. It has also been found that the topological K-theory is closely related to the study of manifolds with non-positive sectional curvature. | {"url":"http://math.nd.edu/research/research-groups-in-mathematics/differential-geometry/","timestamp":"2014-04-18T20:42:46Z","content_type":null,"content_length":"24905","record_id":"<urn:uuid:6612774c-1a37-486b-881a-5114a1c3b56a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00221-ip-10-147-4-33.ec2.internal.warc.gz"} |
An implementation of linear hash tables. (See http://en.wikipedia.org/wiki/Linear_hashing). Use this hash table if you...
• care a lot about fitting your data set into memory; of the hash tables included in this collection, this one has the lowest space overhead
• don't care that inserts and lookups are slower than the other hash table implementations in this collection (this one is slightly faster than Data.HashTable from the base library in most cases)
• have a soft real-time or interactive application for which the risk of introducing a long pause on insert while all of the keys are rehashed is unacceptable.
Linear hashing allows for the expansion of the hash table one slot at a time, by moving a "split" pointer across an array of pointers to buckets. The number of buckets is always a power of two, and
the bucket to look in is defined as:
bucket(level,key) = hash(key) mod (2^level)
The "split pointer" controls the expansion of the hash table. If the hash table is at level k (i.e. 2^k buckets have been allocated), we first calculate b=bucket(level-1,key). If b < splitptr, the
destination bucket is calculated as b'=bucket(level,key), otherwise the original value b is used.
The split pointer is incremented once an insert causes some bucket to become fuller than some predetermined threshold; the bucket at the split pointer (*not* the bucket which triggered the split!) is
then rehashed, and half of its keys can be expected to be rehashed into the upper half of the table.
When the split pointer reaches the middle of the bucket array, the size of the bucket array is doubled, the level increases, and the split pointer is reset to zero.
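The addressing rule just described can be sketched in a few lines (a Python illustration of the scheme, not the package's Haskell):

```python
def bucket_index(level, split_ptr, h):
    """Destination bucket for a key with hash value h at the given level,
    with buckets below split_ptr already split."""
    b = h % (2 ** (level - 1))    # bucket(level-1, key)
    if b < split_ptr:             # this bucket was already split...
        b = h % (2 ** level)      # ...so use bucket(level, key)
    return b

# level 3, split pointer at 2: hashes landing in buckets 0 or 1 use mod 8
print(bucket_index(3, 2, 5))  # 5 % 4 = 1 < 2, so 5 % 8 = 5
print(bucket_index(3, 2, 6))  # 6 % 4 = 2, not < 2, so stays 2
```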
Linear hashing, although not quite as fast for inserts or lookups as the implementation of linear probing included in this package, is well suited for interactive applications because it has much
better worst case behaviour on inserts. Other hash table implementations can suffer from long pauses, because it is occasionally necessary to rehash all of the keys when the table grows. Linear
hashing, on the other hand, only ever rehashes a bounded (effectively constant) number of keys when an insert forces a bucket split.
Space overhead: experimental results
In randomized testing (see test/compute-overhead/ComputeOverhead.hs in the source distribution), mean overhead is approximately 1.51 machine words per key-value mapping with a very low standard
deviation of about 0.06 words, 1.60 words per mapping at the 95th percentile.
Unsafe tricks
When the unsafe-tricks flag is on when this package is built (and it is on by default), we use some unsafe tricks (namely unsafeCoerce# and reallyUnsafePtrEquality#) to save indirections in this
table. These techniques rely on assumptions about the behaviour of the GHC runtime system and, although they've been tested and should be safe under normal conditions, are slightly dangerous. Caveat
emptor. In particular, these techniques are incompatible with HPC code coverage reports.
• W. Litwin. Linear hashing: a new tool for file and table addressing. In Proc. 6th International Conference on Very Large Data Bases, Volume 6, pp. 212-223, 1980.
• P-A. Larson. Dynamic hash tables. Communications of the ACM 31: 446-457, 1988.
data HashTable s k v Source
Show (HashTable s k v) | {"url":"http://hackage.haskell.org/package/hashtables-1.0.1.7/docs/Data-HashTable-ST-Linear.html","timestamp":"2014-04-17T19:44:43Z","content_type":null,"content_length":"14304","record_id":"<urn:uuid:5a876ea1-6eb3-4a32-9464-eb23a60b209d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
@article {Keefe:1989-05-01T00:00:00:0003-7028:605, author = "Keefe, C. Dale and Comisarow, Melvin B.", title = "Exact Interpolation of Apodized, Magnitude-Mode Fourier Transform Spectra", journal =
"Applied Spectroscopy", volume = "43", number = "4", year = "1989-05-01T00:00:00", abstract = "A procedure is developed for the exact interpolation of apodized, magnitude-mode Fourier transform (FT)
spectra. The procedure gives the true center frequency, i.e., the location of the continuous peak, from just the largest three discrete intensities in the discrete magnitude spectrum. The procedure
is applicable for the peaks in the apodized magnitude spectrum of time signal of the form f(t) = cos(ωt) exp(–t/τ). There are no restrictions on the value of the damping ratio T/τ. The procedure is
demonstrated for the sine-bell and Hanning windows and is gener-alizable to other windows which consist of a sum of constants and sine/cosine terms. This includes the majority of commonly used
windows.", pages = "605-607", url = "http://www.ingentaconnect.com/content/sas/sas/1989/00000043/00000004/art00003", doi = "doi:10.1366/0003702894202418", keyword = "Computer applications,
Instrumentation, Fourier transform" } | {"url":"http://www.ingentaconnect.com/content/sas/sas/1989/00000043/00000004/art00003?format=bib","timestamp":"2014-04-24T00:07:43Z","content_type":null,"content_length":"1732","record_id":"<urn:uuid:26a928b6-d513-4e49-8294-1f88a26893cb>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
wapply: A faster (but less functional) 'rollapply' for vector setups
wapply: A faster (but less functional) ‘rollapply’ for vector setups
For some cryptic reason I needed a function that calculates function values on sliding windows of a vector. Googling around soon brought me to ‘rollapply’, which when I tested it seems to be a very
versatile function. However, I wanted to code my own version just for vector purposes in the hope that it may be somewhat faster.
This is what turned out (wapply for “window apply”):
wapply <- function(x, width, by = NULL, FUN = NULL, ...)
{
  FUN <- match.fun(FUN)
  if (is.null(by)) by <- width
  lenX <- length(x)
  SEQ1 <- seq(1, lenX - width + 1, by = by)
  SEQ2 <- lapply(SEQ1, function(x) x:(x + width - 1))
  OUT <- lapply(SEQ2, function(a) FUN(x[a], ...))
  OUT <- base:::simplify2array(OUT, higher = TRUE)
  return(OUT)
}
It is much more restricted than ‘rollapply’ (no padding, left/center/right adjustment etc).
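A quick sanity check in plain R, taking the mean over non-overlapping windows of width 5:

```r
wapply(1:20, width = 5, FUN = mean)
# [1]  3  8 13 18
```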
But interestingly, for some setups it is very much faster:
x <- 1:200000
large window, small slides:
> system.time(RES1 <- rollapply(x, width = 1000, by = 50, FUN = fun))
User System verstrichen
3.71 0.00 3.84
> system.time(RES2 <- wapply(x, width = 1000, by = 50, FUN = fun))
User System verstrichen
1.89 0.00 1.92
> all.equal(RES1, RES2)
[1] TRUE
small window, small slides:
> system.time(RES1 <- rollapply(x, width = 50, by = 50, FUN = fun))
User System verstrichen
2.59 0.00 2.67
> system.time(RES2 <- wapply(x, width = 50, by = 50, FUN = fun))
User System verstrichen
0.86 0.00 0.89
> all.equal(RES1, RES2)
[1] TRUE
small window, large slides:
> system.time(RES1 <- rollapply(x, width = 50, by = 1000, FUN = fun))
User System verstrichen
1.68 0.00 1.77
> system.time(RES2 <- wapply(x, width = 50, by = 1000, FUN = fun))
User System verstrichen
0.06 0.00 0.06
> all.equal(RES1, RES2)
[1] TRUE
There is about a 2-3 fold gain in speed for the above two setups but a 35-fold gain in the small window/large slides setup. Interesting…
I noticed that zoo:::rollapply.zoo uses mapply internally, maybe there is some overhead for pure vector calculations…
7 Responses to wapply: A faster (but less functional) ‘rollapply’ for vector setups
1. Might want to check the RcppRoll package too. Created because of zoo’s rolling being slow.
□ Thanks for the notice! Didn’t know of that one…
I had a look at it and it’s blazing fast! However, it doesn’t have a ‘by =’ argument which defines the sliding values,
so the function is always calculated on a +1 sliding window of size n along the vector, which restricts it a bit for some purposes.
Maybe it is easy to implement that in the Rcpp code, a good time to learn it anyway!
2. Using x from the post rollapply(x, 1000, mean) is about 40x faster than wapply(x, 1000, 1, mean) on my machine so you need to be careful about generalizations here.
□ Yes, you’re right…
I should have mentioned that for “mean”, “median” and “max”, ‘rollapply’ uses internal fast functions such as
But for other user defined function setups, I think it is a bit faster [trying to avoid generalization here ;-) ].
3. Could you please hint how to use wapply with width as a vector of length > 1? For now I'm sticking with mapply to apply a different window for different observations. length(x) is equal to length(width).
Is it possible to solve better than using mapply?
□ just to follow up my question with the answer:
☆ Hi MusX,
thanks for the question and your answer on SO! I think a good hybrid version would be one that does the windowing in C++ but lets you define the function in R. It should be possible, since minpack.lm (nonlinear regression) works this way: the minimization function is defined in R while the minimization itself runs in Fortran…
Anyone give it a go?
One-Zero String Damping Filter
Next | Prev | Up | Top | REALSIMPLE Top
The original EKS string-damping filter replaced the two-point average of the KS digitar algorithm by a weighted two-point average
y(n) = (1 - S) x(n) + S x(n - 1),
where 0 <= S <= 1 acts as a weighting control; at S = 1/2 this reduces to the plain two-point average of the original Karplus-Strong algorithm.
To control the overall decay rate, another (frequency-independent) gain multiplier rho is applied to the output of this filter.
Since this filter is applied once per period of the string loop, rho is chosen so that it accumulates to 0.001 (that is, -60 dB) after freq*t60 applications, giving a decay time of t60 seconds.
In Faust, we can calculate rho as:
t60 = hslider("decaytime_T60", 4, 0, 10, 0.01); // seconds
rho = pow(0.001,1.0/(freq*t60));
where freq is the fundamental frequency (computed from the MIDI key number in the example of Fig.9).
The following Faust code implements the original EKS damping filter in terms of a ``brightness'' parameter B on [0, 1]:
B = hslider("brightness", 0.5, 0, 1, 0.01); // 0-1
b1 = 0.5*B; b0 = 1.0-b1; // S and 1-S
dampingfilter1(x) = rho * (b0 * x + b1 * x');
Relating this to the weighted two-point average above, we have b1 = S = B/2 and b0 = 1 - S.
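As a quick numerical check (a Python sketch, not part of the original Faust code), the decay-time formula can be verified directly: applying the per-period gain rho once per period over t60 seconds accumulates to 0.001, i.e. -60 dB:

```python
def per_period_gain(freq, t60):
    # rho = 0.001^(1/(freq*t60)); applied once per period, i.e. freq*t60
    # times over t60 seconds, the accumulated gain is 0.001 (-60 dB).
    return 0.001 ** (1.0 / (freq * t60))

rho = per_period_gain(freq=440.0, t60=4.0)  # per-period loop gain
total = rho ** (440.0 * 4.0)                # gain accumulated after t60 seconds
# total equals 0.001 up to floating-point rounding
```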
Student Support Forum: 'Hybrid Optimization Techniques' topic
I was wondering if someone could help me out with this command and tell me why it is not working,
NMinimize[FindMinimum[
  E^Sin[50 i] + Sin[60 E^j] + Sin[70 Sin[i]] + Sin[Sin[80 j]] -
   Sin[10 (i + j)] + 1/4 (i^2 + j^2), {i, x}, {j, y}][[1]], {x, y},
 Method -> "DifferentialEvolution"]
Basically I am trying to analyze a hybrid meta-heuristic where I first use a local search technique to find the local minimum, then I use a meta-heuristic to find the global minimum.
Here is how I have the formula broken down,
FindMinimum[E^Sin[50 i] + Sin[60 E^j] + Sin[70 Sin[i]] + Sin[Sin[80 j]] -
  Sin[10 (i + j)] + 1/4 (i^2 + j^2), {i, x}, {j, y}][[1]]
should output just a number because I am using the Part function to extract the answer, of course this function alone will not work because the initial starting conditions are variables
(x,y), however if x and y are set to some number like 2 and 3 it works fine.
However, I would think that the command,
NMinimize[ "First Command", {x, y},
Method -> "DifferentialEvolution"]
would set the values of x and y to some guess so the FindMinimum command would be initialized with values for x and y. Am I missing something? Please help.
Shannon Bowling
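The hybrid idea being described, a global layer that proposes starting points plus a local solver that polishes each one, can be sketched in plain Python (the helpers below are hypothetical stand-ins for NMinimize and FindMinimum, applied to a simpler test surface than the posted function):

```python
import random

def local_search(f, x, y, step=0.5, iters=200):
    # Crude coordinate/pattern descent: try a small move in each direction,
    # keep any improvement, and halve the step whenever nothing improves.
    # (This stands in for the local solver, e.g. FindMinimum.)
    best = f(x, y)
    for _ in range(iters):
        moved = False
        for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            val = f(x + dx, y + dy)
            if val < best:
                best, x, y, moved = val, x + dx, y + dy, True
        if not moved:
            step *= 0.5
    return best, x, y

def hybrid_minimize(f, n_starts=30, seed=0):
    # Global layer (stands in for NMinimize's DifferentialEvolution):
    # scatter random starting points, polish each locally, keep the best.
    rng = random.Random(seed)
    results = [local_search(f, rng.uniform(-5, 5), rng.uniform(-5, 5))
               for _ in range(n_starts)]
    return min(results)

# A simple test surface (NOT the poster's function, just for illustration):
best, bx, by = hybrid_minimize(lambda x, y: (x - 1.0) ** 2 + (y + 2.0) ** 2)
```

Note the order of composition: the global method chooses the starts, and the local method runs on concrete numbers, so the local solver never sees symbolic starting values.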
Alan Baker
Born: 19 August 1939 in London, England
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
Alan Baker was educated at Stratford Grammar School. From there he entered University College London where he studied for his B.Sc., moving on to Trinity College Cambridge where he was awarded an
M.A. Continuing his research at Cambridge, Baker received his doctorate and was elected a Fellow of Trinity College in 1964.
From 1964 to 1968 Baker was a research fellow at Cambridge, then becoming Director of Studies in Mathematics, a post which he held from 1968 until 1974 when he was appointed Professor of Pure
Mathematics. During this time he spent time in the United States, becoming a member of the Institute for Advanced Study at Princeton in 1970 and visiting professor at Stanford in 1974.
Baker was awarded a Fields Medal in 1970 at the International Congress at Nice. This was awarded for his work on Diophantine equations. This is described by Turán in [5], who first gives the
historical setting:-
[Diophantine equations], carrying a history of more than one thousand years, was, until the early years of this century, little more than a collection of isolated problems subjected to ingenious
ad hoc methods. It was A Thue who made the breakthrough to general results by proving in 1909 that all Diophantine equations of the form
f (x, y) = m
where m is an integer and f is an irreducible homogeneous binary form of degree at least three, with integer coefficients, have at most finitely many solutions in integers.
Turán goes on to say that Carl Siegel and Klaus Roth generalised the classes of Diophantine equations for which these conclusions would hold and even bounded the number of solutions. Baker however
went further and produced results which, at least in principle, could lead to a complete solution of this type of problem. He proved that for equations of the type f (x, y) = m described above there
was a bound B which depended only on m and the integer coefficients of f with
max(|x₀|, |y₀|) ≤ B
for any solution (x₀, y₀) of f (x, y) = m. Of course this means that only a finite number of possibilities need to be considered so, at least in principle, one could determine the complete list
of solutions by checking each of the finite number of possible solutions.
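As a toy illustration of why such a bound settles the problem (using a small hypothetical bound, not Baker's actual B): once every solution is known to satisfy the bound, a finite computer search lists them all. For the Thue equation x^3 - 2y^3 = 1, a brute-force check finds its two solutions:

```python
def thue_solutions(m, bound):
    # Enumerate all integer solutions of x^3 - 2*y^3 = m with |x|, |y| <= bound.
    return sorted((x, y)
                  for x in range(-bound, bound + 1)
                  for y in range(-bound, bound + 1)
                  if x ** 3 - 2 * y ** 3 == m)

sols = thue_solutions(1, 50)
# finds the two solutions (-1, -1) and (1, 0)
```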
Baker also made substantial contributions to Hilbert's seventh problem, which asked whether or not a^q is transcendental when a is algebraic (with a ≠ 0, 1) and q is algebraic and irrational. Hilbert himself remarked that he expected this
problem to be harder than the solution of the Riemann conjecture. However it was solved independently by Gelfond and Schneider in 1934 but Baker ([4]):-
... succeeded in obtaining a vast generalisation of the Gelfond-Schneider Theorem ... From this work he generated a large category of transcendental numbers not previously identified and showed
how the underlying theory could be used to solve a wide range of Diophantine problems.
Turán [5] concludes with these remarks:-
I remark that [Baker's] work exemplifies two things very convincingly. Firstly, that beside the worthy tendency to start a theory in order to solve a problem it pays also to attack specific
difficult problems directly. ... Secondly, it shows that a direct solution of a deep problem develops itself quite naturally into a healthy theory and gets into early and fruitful contact with
significant problems of mathematics.
In addition to the honour of the Fields Medal, Baker has received many other honours including the Adams Prize from the University of Cambridge in 1972 and election to become a Fellow of the Royal
Society in 1973. In 1980 he was elected an honorary Fellow of the Indian National Science Academy.
Among his famous books are Transcendental number theory (1975), Transcendence theory : advances and applications (1977), A concise introduction to the theory of numbers (1984) and (with Gisbert
Wüstholz) Logarithmic forms and Diophantine geometry (2007). Yuri Bilu begins a review of this last mentioned work as follows:-
This long-awaited book is an introduction to the classical work of Baker, Masser and Wüstholz in a form suitable for both undergraduate and graduate students.
Baker also edited the important New advances in transcendence theory (1988).
In 1999 a conference was organised in Zurich to celebrate Baker's 60th birthday. Most of the lectures given at the meeting were published in A Panorama in Number Theory or The View from Baker's
Garden (2002). The Introduction to the book begins as follows:-
The millennium, together with Alan Baker's 60th birthday offered a singular occasion to organize a meeting in number theory and to bring together a leading group of international researchers in
the field; it was generously supported by ETH Zurich together with the Forschungsinstitut für Mathematik. This encouraged us to work out a programme that aimed to cover a large spectrum of number
theory and related geometry with particular emphasis on Diophantine aspects. ... The London Mathematical Society was represented by its President, Professor Martin Taylor, and it sent greetings
to Alan Baker on the occasion of his 60th birthday.
Outside of mathematics, Baker lists his interests as travel, photography and the theatre.
Article by: J J O'Connor and E F Robertson
List of References (5 books/articles)
Honours awarded to Alan Baker
BMC morning speaker 1970
Speaker at International Congress 1970
Fields Medal 1970
Fellow of the Royal Society 1973
JOC/EFR © September 2009 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
A high school basketball team has a budget specifically for towels and extra basketballs. A towel costs $6 and a basketball costs $30. Assume the budget is $300 and that 25 towels are needed for the
team for the entire season. Use a standard form equation to determine how many basketballs may be purchased by the team.
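One way to sketch the arithmetic (not a posted forum answer): in standard form the budget constraint is 6t + 30b = 300; fixing the required t = 25 towels leaves 30b = 300 - 150, so b = 5 basketballs. In code:

```python
def basketballs(budget, towel_cost, ball_cost, towels):
    # Standard form: towel_cost*t + ball_cost*b = budget. With t fixed,
    # solve for b: b = (budget - towel_cost*t) / ball_cost.
    return (budget - towel_cost * towels) // ball_cost

b = basketballs(budget=300, towel_cost=6, ball_cost=30, towels=25)
# 6*25 + 30*b = 300  ->  30*b = 150  ->  b = 5
```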
sab-R-metrics: Basics of LOESS Regression
May 11, 2011
By Millsy
Last week, I left you off at logistic regression. This week, I'll be pushing the limits of regression analysis a bit more with a smoothing technique called LOESS regression. There are a number of
smoothing methods that can be used, such as Smoothing Splines or simple Local Linear Regression; however, I'm going to cover LOESS (loess) here because it is very flexible and easy to implement in R.
Remember that here, I'm not going to cover too much of the quantitative portion of the methods. That means that if you plan on using loess in your own work, you should probably read up on what it is
actually doing. I'll begin with a brief, non-mathematical description.
When we ran regressions using OLS procedures, there is an assumption that the relationship between the X and Y variables is linear: constant in direction and rate across the entire domain and range (i.e. as X changes, Y changes at the same rate for all values of X and Y). Of course in the real world, this is not always the case. You can use polynomials in linear regression to address the issue,
but sometimes other methods may be necessary. This is where smoothing comes in.
Using smoothers, there is no restriction on the functional form between X and Y with respect to intensity of the relationship, or direction (positive or negative). Of course, this means our fits are
a bit more computationally intensive. And if not careful, it is very easy to overfit the data by trying to include every wiggle we see. But if done properly, one may be able to glean some extra
information from the data by using a smoother instead of a restrictive linear model.
So what is loess? Well, as I said there are a number of smoothers out there. The advantage of loess (with its predecessor 'LOWESS') is that it allows a bit more flexibility than some other smoothers.
The name 'loess' stands for Locally Weighted Least Squares Regression. So, it uses more local data to estimate our Y variable. But it is also known as a variable bandwidth smoother, in that it uses a
'nearest neighbors' method to smooth. If you are interested in the guts of LOESS, a Google search should do you just fine.
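For readers who want slightly more than a Google search, the core of "locally weighted least squares" can be sketched in a few lines of Python (a simplified illustration, not what R's loess() literally does; real loess adds nearest-neighbor spans, degree-2 polynomials, and robustness iterations): at each target point, fit a straight line by weighted least squares, down-weighting data far from the target with a tricube weight:

```python
def local_linear_smooth(xs, ys, x0, h):
    # Weighted least-squares line fit around x0, tricube weights, bandwidth h.
    pts = [(x, y, (1.0 - (abs(x - x0) / h) ** 3) ** 3)
           for x, y in zip(xs, ys) if abs(x - x0) < h]
    sw = sum(w for _, _, w in pts)
    xb = sum(w * x for x, _, w in pts) / sw     # weighted mean of x
    yb = sum(w * y for _, y, w in pts) / sw     # weighted mean of y
    num = sum(w * (x - xb) * (y - yb) for x, y, w in pts)
    den = sum(w * (x - xb) ** 2 for x, _, w in pts)
    slope = num / den if den else 0.0
    return yb + slope * (x0 - xb)               # fitted value at x0

xs = list(range(10))
ys = [2 * x + 1 for x in xs]                    # noise-free straight line
fit = local_linear_smooth(xs, ys, x0=5, h=4.0)  # recovers 2*5 + 1 = 11
```

Repeating this at every target point traces out the smooth curve; the bandwidth h plays the role of R's span argument.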
As usual, there is a nice easy function for loess in R. The first thing you'll need to do is download a new data set from my site called "
". This is a Pitch F/X data set from Joe Lefkowitz's site including all pitches by Jeremy Guthrie from 2008 through 2011 (as of May 11, 2011). If you are a Baseball Prospectus reader and ran across
Mike Fast's most recent article
, you'll understand why I think this is a nice data set for implementing loess...or at least you will by the end of this tutorial.
Once you have the data, go ahead and set your working directory and load it in. I'm naming my initial version of the data "
####set working directory and load data
setwd("c:/Users/Millsy/Dropbox/Blog Stuff/sab-R-metrics")
guth <- read.csv(file="guthrie.csv", h=T)
head(guth)
Now because I plan on working with pitch velocity data today, I want to make sure we're including pitches of a certain type. For this reason, I want to go ahead and subset the data into only fastball
variants thrown by Guthrie over this time period. That way, the data aren't contaminated with change-ups and curveballs of lower velocity. We want to look specifically at arm strength. The following
code should subset the data correctly. Remember that the "|" means "OR" in R.
##subset to just fastballs and fastball variants
guthfast <- subset(guth, guth$pitch_type=="FA" | guth$pitch_type=="FF" | guth$pitch_type=="FC" | guth$pitch_type=="FT" | guth$pitch_type=="SI")
Now, because we want this to be an ordered series of pitches across time, we'll have to create a new variable to represent this sequence. For this, we'll make use of a new function that comes very
much in handy when working with and visualizing smoothing analyses. It is called "seq()", and it creates a sequence of numbers. The first argument below, "from=", indicates the starting point of your sequence, and "to=" represents the endpoint. Finally, "by=" tells R the space between your points. You can put any number into these that you want. The smaller the "by=", the more points you will have. I'm going to keep it simple and use "by=1", so that we'll have a count of the pitches.
##create a time sequence for the pitches
guthfast$pitch_num <- seq(from=1, to=length(guthfast[,1]), by=1)
head(guthfast)
You'll notice in the "seq()" function above, I tell R to count up to the length of the dataset. By typing "to=length(guthfast[,1])" I am indicating that I want the number of rows (i.e. the 'length' of the first column in our data set). This way, if we count by 1, we know that every pitch will have a sequential integer-valued
number in the Pitch Number variable we appended onto the data set.
Now that we have this set up, let's take a quick look at Guthrie's pitch velocity over time. The code below should plot each pitch's speed as a function of the pitch number:
##plot all fastball pitch velocity by the pitch number variable
plot(guthfast$start_speed ~ guthfast$pitch_num, ylab="Speed out of Hand (Fastballs, MPH)", xlab="Fastball Count (2008-2011)", main="Pitch Speed by Sequential Fastball")
Looking at the plot above we can see some semblance of a pattern, but the data seem to be too noisy to really see what is going on. Perhaps if we use the average for each game we'll be able to see
something more useful. For this, we'll make use of the "tapply()" function again. If you don't remember this function, head back to the earlier sab-R-metrics posts and check it out. This function
allows us to quickly take the average fastball velocity by game using the following code:
##get mean fastball velocity by game
aggdat <- tapply(guthfast$start_speed, guthfast$gid, mean)
aggdat <- as.data.frame(aggdat)
head(aggdat)
This function spits out a vector with the game id's as the row names. However, we convert it to a data frame using the function "as.data.frame()" so that we can use our standard object calls and
variable names. Unfortunately, you'll see that we don't have the right variable name. It's just called "aggdat" for the average velocity for each game. We can use an easy function in R to fix this
up. But first, let's append a count of the game numbers just like we did with the pitch numbers so that we have them sequentially over time:
##create game numbers
aggdat$game_num <- seq(from=1, to=length(aggdat[,1]), 1)
##change column names
colnames(aggdat) <- c("start_speed", "game_num")
head(aggdat)
As you can see, our column names are what they should be now. Using this new data set, let's again plot the fastball velocity over time. Note that we reduced the data to only 101 data points (games):
###plot average velocity by game across time
plot(aggdat$start_speed ~ aggdat$game_num, ylab="Speed out of Hand (Fastballs, MPH)", xlab="Game Number (2008-2011)", main="Pitch Speed by Sequential Game")
Here, we see that there are many fewer data points than before. But it's still a bit tough to understand any pattern going on. We could start with an OLS model to fit the data linearly. Using the
code below, you should get output that tells you Guthrie's fastball velocity is decreasing over time and that this is significant at the 1% level. But is that really the case? Once the model is
fitted, go ahead and plot the regression line using the second bit of code. We'll save the standard errors from the model for later on.
##fit a linear model to the data
fit.ols <- lm(aggdat$start_speed ~ aggdat$game_num)
summary(fit.ols)
pred.ols <- predict(fit.ols, aggdat, se=TRUE)
##plot the regression line on the data
plot(aggdat$start_speed ~ aggdat$game_num, ylab="Speed out of Hand (Fastballs, MPH)", xlab="Game Number (2008-2011)", main="Pitch Speed by Sequential Game")
lines(pred.ols$fit, lty="solid", col="darkgreen", lwd=3)
We see the negative slope in the plot above, but do you think this is the best representation of the data? Using a loess regression, we may be able to improve on this. Unfortunately, the drawback
from the loess is that there isn't really a clean functional form like we get from OLS. That means no real 'coefficients' in a nice Y = mX + b form that we learned in algebra class. For the most
part, the best way to use loess is to look at it.
So, to fit a loess regression, we'll go ahead and stick with the game average data for now. Using the "loess()" function (doesn't get much easier than that!), we can apply the new analytical tool to our data. Let's try some basic code first. Below, I estimate the loess using a default smoothing parameter and
then predict values and plot it just like with the OLS model. Only this time, it looks a bit different.
##loess default estimation
fitd <- loess(aggdat$start_speed ~ aggdat$game_num)
my.count <- seq(from=1, to=101, by=1)
predd <- predict(fitd, my.count, se=TRUE)
plot(aggdat$start_speed ~ aggdat$game_num, pch=16, ylab="Speed out of Hand (Fastballs, MPH)", xlab="Pitch Count", main="Pitch Speed by Pitch Count")
lines(predd$fit, lty="solid", col="darkred", lwd=3)
In the code above, I kept things pretty basic. Usually, we want to use the argument "span=" in order to tell R how much smoothing we want. The larger the span, the more points that are included in
the weighted estimation, and the smoother the plot will look. Something to be careful with here, though, is making the span too small. We don't want to over fit the data, but want it to give us some
idea of the pattern that we see. If we fit every point on its own, we may as well just look at a scatterplot. On the other hand, if we smooth too much, we may as well just estimate an OLS regression.
The idea is to find a balance between the two using the smoothing parameter.
You can also identify a polynomial, which allows for more 'wigglyness' in your loess. For our purposes, I'm not going to bother with this. However, if you are fitting something that you believe needs
some serious wiggles, go ahead and fiddle around with the "degree=" argument in the "loess()" function. For the most part, I would not recommend going over 3 for the polynomial as you'll likely be bordering on over-fitting--but some data might well need further polynomials. The default in R for this function is "degree=2", and you can change it to "degree=1" if you like and you'll see your wigglyness--for the same span--decrease a bit. It all depends on your data. For the purposes of this post, we'll just 'eyeball it'. However, there are other ways to
optimize smoothing parameters in loess and other smoothing (and density estimation--see next week) methods. These include "rules of thumb", cross-validation methods, and so on. The default span in
this function in R is 0.75, but it should really depend on your data.
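One of those cross-validation ideas can be sketched briefly (a hypothetical helper using a simple box-kernel neighbor average rather than loess itself): hold each point out, predict it from neighbors within a candidate bandwidth, and keep the bandwidth with the smallest total squared error:

```python
def loo_cv_bandwidth(xs, ys, candidates):
    # Leave-one-out CV: for each candidate bandwidth h, predict every point
    # from its neighbors within h (the point itself held out), and return
    # the bandwidth with the smallest total squared prediction error.
    def predict(i, h):
        nbrs = [ys[j] for j in range(len(xs)) if j != i and abs(xs[j] - xs[i]) <= h]
        return sum(nbrs) / len(nbrs) if nbrs else ys[i]

    def score(h):
        return sum((ys[i] - predict(i, h)) ** 2 for i in range(len(xs)))

    return min(candidates, key=score)

xs = list(range(30))
ys = [(x - 15) ** 2 / 50.0 for x in xs]      # smooth, noise-free curve
best_h = loo_cv_bandwidth(xs, ys, candidates=[1, 3, 6, 10])
```

On noisy data the same procedure trades off bias against variance instead of simply preferring the smallest bandwidth.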
Now go ahead and add a parameter in your loess code. For this, just include the additional argument "span=" within the "loess()" function. Play around with it and see what happens. I'd say just work with values between 0.1 and 1.0. The code might look something like this:
##fiddling with the span
fit3 <- loess(aggdat$start_speed ~ aggdat$game_num, span=0.3)
pred3 <- predict(fit3, my.count, se=TRUE)
plot(aggdat$start_speed ~ aggdat$game_num, ylab="Speed out of Hand (Fastballs, MPH)", xlab="Sequential Game (2008-2011)", main="Pitch Speed by Pitch Count (Span=0.3)")
lines(pred3$fit, lty="solid", col="darkred", lwd=3)
Below I've embedded a quick video that shows how the loess line changes when we increase the span by 0.1 each time. I won't provide the code for the movie (it's just a repeat of the same code over
and over, and I made the movie in Windows Movie Maker with still images). In the future, I'll be sure to get into 'for loops' to generate multiple versions of the same plot while incrementally
changing a given parameter, but that's advanced for this post. Watch through the video and try to pick out what you think is the best representation of the data that does not over fit (over-wiggle).
You might be saying to yourself, "Well, it looks like the span of 0.6 smooths the best." If you said that, then I'd agree. Of course, if you disagree, that doesn't make you wrong. The default looks
pretty good as well (with a span of 0.75). Remember we want to get to a balance of fit and smooth and we're just eyeballing it. While it's somewhat subjective in this case, I imagine that we would
all come somewhere near a consensus on the range of acceptable smoothing.
You may also be saying to yourself, "How do I get those cool intervals?" If that's the case, then you're in luck. When we used the "predict()" function earlier, I made sure to tell R to keep the standard errors from the loess. This allows us to plot a standard 95% interval on our plot. We'll have to make use of some new functions here.
First, let's create two sequenced data sets for our predictions and interval construction:
##interval construction stuff
my.count <- seq(from=1, to=101, by=1)
my.count.rev <- order(my.count, decreasing=TRUE)
The second line simply reverses the order of the first. So each of these are vectors of the x-variable (game number) in increasing and decreasing order, respectively. From here, we can go ahead and
re-plot our span=0.6 version of the loess and add dashed lines for the confidence intervals using some basic math. After this, we'll add some fill color and apply what we learned about the RGB color scheme in the Bubble Plots tutorial.
##fit the span=0.6 model
fit6 <- loess(aggdat$start_speed ~ aggdat$game_num, span=0.6)
pred6 <- predict(fit6, my.count, se=TRUE)
##now plot itplot(aggdat$start_speed ~ aggdat$game_num, ylab="Speed out of Hand (Fastballs, MPH)", xlab="Sequential Game (2008-2011)", main="Pitch Speed by Pitch Count (Span=0.6)")
lines(pred6$fit, lty="solid", col="darkred", lwd=3)
##now add the confidence interval lines
lines(pred6$fit-1.96*pred6$se.fit, lty="dashed", col="blue", lwd=1)
lines(pred6$fit+1.96*pred6$se.fit, lty="dashed", col="blue", lwd=1)
You can see here that we use the 1.96 as an approximation of the 95% interval. In the plot above, we see the interval represented by the blue dashed lines. However, I really like the filled interval
look. For this, we'll need to use the "polygon()" function and the code below.
The first portion of the code below tells R that we want to create an outline of a polygon on the y-axis with the confidence bounds at each point along the two vectors we created above. The second
line just recreates the x-axis, but in increasing then decreasing form to get full coverage of the shape. Finally, we create a shape using the confidence bounds and fill it with a transparent color
(otherwise it will cover up everything). The "#00009933" indicates that we want it completely blue (99 in digits 5 and 6), with some transparency (33 in digits 7 and 8). As long as your plot is still up, this code will simply add the filled interval on top:
###create polygon bounds
y.polygon.6 <- c((pred6$fit+1.96*pred6$se.fit)[my.count], (pred6$fit-1.96*pred6$se.fit)[my.count.rev])
x.polygon <- c(my.count, my.count.rev)
##add this to the plot
polygon(x.polygon, y.polygon.6, col="#00009933", border=NA)
Notice how the interval bands flare out at the end. This is because there is less data at the endpoints (i.e. beyond the endpoints) of the data. Therefore, there is less certainty about the prediction here. This is a common problem in any regression (including linear regression), but it is exacerbated in smoothing because a single point at the edge of the distribution can have too much influence on the direction of the endpoints of the smoothed line. Just something to be aware of.
Lastly, if you're feeling lazy, you can always just use the "scatter.smooth()" function. This will automatically plot your loess, and takes the same arguments as "loess()". However, here you simultaneously provide the plotting parameters. See the code and plot below:
###show how scatter.smooth just does the plot automatically
scatter.smooth(aggdat$start_speed ~ aggdat$game_num, degree=2, span=0.6, col="red", main="Scatter Smooth Version (Span=0.6)", xlab="Sequential Game Number", ylab="Starting Speed (Fastballs, MPH)")
The above plot isn't as pleasing to me as the ones I made manually. In general, by doing things manually you will have more control over the look of things, but if you're looking for something quick, "scatter.smooth()" does just fine. One thing to remember, however, is that it has a different default for the polynomial than the "loess()" function, so if you want the same fit, you'll have to tell R that "degree=2".
There are other options in loess that I haven't covered today, including the ability to parametrically estimate some variables, while applying the loess function to others. This can come in handy if
you think only some variables are non-linear, while others are linear in nature. If this interests you, definitely check into it. Other packages, like mgcv, allow for similar model types using
slightly different smoothers and an extension of the smoothing function to Generalized Linear Models (binomial response, etc.). Hopefully this will give you a nice base to work with loess regression
in your own work, but keep in mind that these tutorials are not a replacement for understanding the underlying mathematics that create the pretty pictures. Loess can be terribly misused in the wrong
hands (especially with pitch-location smoothing), so it is important to understand WHY you are doing certain things, not just HOW to do it in R.
I don't currently have Pretty-R code up and running, as the Blogger HTML really effs with embedding it in here. All of the necessary code is included above (and remember if you want to save plots and
pictures, use a graphics device).
Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes
OK, I admit that I pulled a fast one. I never finished the last post as promised, so here it is.
Cochrane Continued
In the last post I alluded to the 2006 Cochrane Laetrile review, the conclusion of which was:
This systematic review has clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment.
I’d previously asserted that this conclusion “stand[s] the rationale for RCTs on its head,” because a rigorous, disconfirming case series had long ago put the matter to rest. Later I reported that
Edzard Ernst, one of the Cochrane authors, had changed his mind, writing, “Would I argue for more Laetrile studies? NO.” That in itself is a reason for optimism, but Dr. Ernst is such an exception
among “CAM” researchers that it almost seemed not to count.
Until recently, however, I’d only seen the abstract of the Cochrane Laetrile review. Now I’ve read the entire review, and there’s a very pleasant surprise in it (Professor Simon, take notice). In a
section labeled “Feedback” is this letter from another Cochrane reviewer, which was apparently added in August of 2006, well before I voiced my own objections:
The authors state that they: "[have] clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment." This is
to fail completely to understand the nature of oncology research in which agents are tested in randomized trials (“Phase III”) only after they have been successful in Phase I and II study. There
was a large Phase II study of laetrile (N Engl J Med. 1982 Jan 28;306(4):201-6) which the authors of the review do not cite, they merely exclude as being non-randomized. But the results of the
paper are quite clear: there was no evidence that laetrile had any effect on cancer (all patients had progression of disease within a few months); moreover, toxicity was reported. To expose
patients to a toxic agent that did not show promising results in a single arm study is clinical, scientific and ethical nonsense.
I would like to make a serious recommendation to the Cochrane Cancer group that no reviews on cancer are published unless at least one of the authors either has a clinical practice that focuses
on cancer or actively conducts primary research on cancer. My recollection when the Cochrane collaboration was established was that the combination of “methodologic” and “content” expertise was
Wow! That letter makes several of the same arguments that we’ve made here: that for both scientific and ethical reasons, scientific promise (including success in earlier trials) ought to be a
necessary pre-requisite for a large RCT; that the 1982 Moertel case series was sufficient to disqualify Laetrile; and that EBM, at least in this Cochrane review, suffers from “methodolatry.” It also
brings to mind Steven Goodman’s words:
An important problem exists in the interpretation of modern medical research data: Biological understanding and previous research play little formal role in the interpretation of quantitative
results. This phenomenon is manifest in the discussion sections of research articles and ultimately can affect the reliability of conclusions. The standard statistical approach has created this
situation by promoting the illusion that conclusions can be produced with certain “error rates,” without consideration of information from outside the experiment.
This method thus facilitated a subtle change in the balance of medical authority from those with knowledge of the biological basis of medicine toward those with knowledge of quantitative methods,
or toward the quantitative results alone, as though the numbers somehow spoke for themselves.
Perhaps most surprising about the ‘Feedback’ letter is the identity of its author: Andrew Vickers, a biostatistician who wrote the Center for Evidence-Based Medicine’s “Introduction to evidence-based
complementary medicine.” I’ve complained about that treatise before in this long series, observing that
There is not a mention of established knowledge in it, although there are references to several claims, including homeopathy, that are refuted by things that we already know.
Well, Dr. Vickers may not have considered plausibility when he wrote his Intro to EBCM, but he certainly seems to have done so when he wrote his objection to the Cochrane Laetrile review. Which is an
appropriate segue to a topic that Dr. Vickers hints at (“content expertise”), perhaps unintentionally, in the letter quoted above: Bayesian inference.
Bayes Revisited
A few years ago I posted three essays about Bayesian inference: they are linked below (nos. 2-4). The salient points are these:
1. Bayes’s Theorem is the solution to the problem of inductive inference, which is how medical research (and most science) proceeds: we want to know the probability of our hypothesis being true
given the data generated by the experiment in question.
2. Frequentist inference, which is typically used for medical research, applies to deductive reasoning: it tells us the probability of a set of data given the truth of a hypothesis. To use it to
judge the probability of the truth of that hypothesis given a set of data is illogical: the fallacy of the transposed conditional.
3. Frequentist inference, furthermore, is based on assumptions that defy reality: that there have been an infinite number of identically designed, randomized experiments (or other sort of random
sampling), without error or bias.
4. Bayes’s Theorem formally incorporates, in its “prior probability” term, information other than the results of the experiment. This is the sticking point for many in the EBM crowd: they consider
prior probability estimates, which are at least partially subjective, to be arbitrary, capricious, untrustworthy, and—paradoxically, because it is science that is ignored in the
5. Nevertheless, prior probability matters whether we like it or not, and whether we can estimate it with any certainty or not. If the prior probability is high, even modest experimental evidence
supporting a new hypothesis deserves to be taken seriously; if it is low, the experimental evidence must be correspondingly robust to warrant taking the hypothesis seriously. If the prior
probability is infinitesimal, the experimental evidence must approach infinity to warrant taking the hypothesis seriously.
6. Frequentist methods lack a formal measure of prior probability, which contributes to the seductive but erroneous belief that “conclusions can be produced…without consideration of information from
outside the experiment.”
7. The Bayes Factor is a term in the theorem that is based entirely on data, and is thus an objective measure of experimental evidence. Bayes factors, in the words of Dr. Goodman, “show that P
values greatly overstate the evidence against the null hypothesis.”
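Points 5 and 7 can be made concrete with a few lines of Python. The sketch below uses Goodman’s minimum Bayes factor for a two-sided P value, exp(−z²/2), which is the *most* the data can favor the alternative hypothesis; the priors are illustrative, not taken from any particular trial:

```python
import math

def posterior(prior_prob, bayes_factor):
    """Bayes' theorem in odds form: posterior odds = prior odds * Bayes factor."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1.0 + post_odds)

# Goodman's minimum Bayes factor against the null for a two-sided P value is
# exp(-z^2/2); for P = 0.05 (z = 1.96) the data favor the alternative by at
# most about 6.8 : 1.
z = 1.96
bf_for_h1 = math.exp(z * z / 2.0)

for prior in (0.5, 0.1, 0.001):  # plausible drug, long shot, homeopathy-like
    print(prior, round(posterior(prior, bf_for_h1), 4))
```

Even at its most favorable, a just-significant result takes a 50:50 prior only to about 0.87, and moves a 1-in-1000 prior to roughly 0.007—which is the arithmetic behind point 5.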
I bring up Bayes again to respond to Prof. Simon’s statements, recently echoed by several readers, that people may differ strongly in what they consider plausible, and that it is not clear how prior
probability estimates might be incorporated into formal reviews. I’ve discussed these issues previously (here and here, and in recent comments here and here), but it is worth adding a point or two.
First, it doesn’t really matter that people may differ strongly in what they consider plausible. What matters is that they commit to some range of plausibility—in public and with justifications, in
the cases of authors and reviewers, so that readers will know where they stand—and that everyone understands that this matters when it comes to judging the experimental evidence for or against a hypothesis.
An example will explain these points. Wayne Jonas was the Director of the US Office of Alternative Medicine from 1995 until its metamorphosis into the NCCAM in 1999. He is the co-author, along with
Jennifer Jacobs, of Healing with Homeopathy: the Doctors’ Guide (©1996), which unambiguously asserts that ultra-dilute homeopathic preparations have specific effects. Yet Jonas is also the co-author
(with Klaus Linde) of a 2005 letter to the Lancet that includes this statement, prefacing his argument that homeopathy, already subjected to hundreds of clinical trials, has not been disproved and
deserves further trials:
We agree that homoeopathy is highly implausible and that the evidence from placebo-controlled trials is not robust.
Bayes’s theorem shows that Jonas can’t have it both ways. Either he doesn’t really agree that homeopathy is highly implausible (which seems likely, unless he changed his mind between 1996 and
2005—oops, he didn’t); or, if he does, he needs to recognize that his statement quoted above is equivalent to arguing that the homeopathy ‘hypothesis’ has been disproved, at least to an extent
sufficient to discourage further trials.
Next, does it matter that we can’t translate qualitative statements of plausibility to precise quantitative measures? Does this mean that prior probability, in the Bayesian sense, is not applicable?
I don’t think so, and neither do many scientists and statisticians. Even “neutral” or “non-informative” priors, when combined with Bayes factors, are more useful than P values (see #7 above).
“Informative” priors—estimated priors or ranges of priors based on existing knowledge—are both useful and revealing: useful because they show how differing priors affect the extent to which we ought
to revise our view of a hypothesis in the face of new experimental evidence (see #5 above); and revealing of where authors and others really stand, and of the information that those authors have used
to make their estimates.
I believe that frequentist statistics has allowed Dr. Jonas and other “CAM” enthusiasts to project a posture of scientific skepticism, as illustrated by Jonas’s words quoted above, without having to
accept the consequences thereof. If convention had compelled him to offer a prior high enough to warrant further trials of homeopathy, Dr. Jonas would have revealed himself as credulous and foolish.
Finally, there is no reason that qualitative priors can’t be translated, if not precisely then at least usefully, to estimated quantitative priors. Sander Greenland, an epidemiologist and a Bayesian,
explains this in regard to household wiring as a possible risk factor for childhood leukemia. First, he argues that there are often empirical bases for estimating priors:
…assuming (an) absence of prior information is empirically absurd. Prior information of zero implies that a relative risk of (say) 10^100 is as plausible as a value of 1 or 2. Suppose the
relative risk was truly 10^100; then every child exposed >3 mG would have contracted leukaemia, making exposure a sufficient cause. The resulting epidemic would have come to everyone’s attention
long before the above study was done because the leukaemia rate would have reached the prevalence of high exposure, or ~5/100 annually in the US, as opposed to the actual value of 4 per 100,000
annually; the same could be said of any relative risk >100. Thus there are ample background data to rule out such extreme relative risks.
The same could be said for many “CAM” methods that, while not strictly subjects of epidemiology per se, have generated ample experimental data (see homeopathy) or have been in use by enough people
for enough time to have been noticed for substantial deviations from typical outcomes of universal diseases, should such deviations exist (see “Traditional [insert ethnic group here] Medicine”).
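Greenland’s back-of-the-envelope argument can be checked in a couple of lines, using only the numbers quoted above:

```python
observed_rate = 4 / 100_000  # actual US childhood leukaemia rate, per year
exposure_prev = 5 / 100      # prevalence of exposure > 3 mG, as quoted

# If the relative risk were extreme enough to make exposure a sufficient
# cause, every exposed child would contract leukaemia, so the population
# rate would be at least the exposure prevalence:
implied_rate = exposure_prev
print(round(implied_rate / observed_rate))  # -> 1250
```

An epidemic more than a thousand-fold larger than the observed rate could not have gone unnoticed, so "ample background data" do indeed rule out such extreme relative risks.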
Next, Greenland has no problem with non-empirically generated priors, because these are revealing as well:
Many authors have expressed extreme scepticism over the existence of an actual magnetic-field effect, so much so that they have misinterpreted positive findings as null because they were not
‘statistically significant’ (e.g. UKCCS, 1999). The Bayesian framework allows this sort of prejudice to be displayed explicitly in the prior, rather than forcing it into misinterpretation of the
By “misinterpretation,” Greenland is arguing not that the “positive findings” of epidemiologic studies have proven the existence of a magnetic field effect, but that the objections of extreme
skeptics must be made explicit: it is their presumed, if unstated, prior probability estimates that justify their conclusions about whether or not there is an actual magnetic field effect associated
with childhood leukemia; it is not the data collection itself. Prior probability estimates put people’s cards on the table.
I recommend the rest of Greenland’s article, which is full of interesting stuff. For example, he doesn’t agree that “objective” Bayesian methods, using non-informative priors (see my point #7 above)
are more useful than frequentist methods, since they are really doing the same thing:
…frequentist results are what one gets from the Bayesian calculation when the prior information is made negligibly small relative to the data information. In this sense, frequentist results are
just extreme Bayesian results, ones in which the prior information is zero, asserting that absolutely nothing is known about the [question] outside of the study. Some promote such priors as
‘letting the data speak for themselves’. In reality, the data say nothing by themselves: The frequentist results are computed using probability models that assume complete absence of bias and so
filter the data through false assumptions.
All for now. In the next post I’ll discuss another Cochrane review that has some pleasant surprises.
*The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:
1. Homeopathy and Evidence-Based Medicine: Back to the Future Part V
2. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”
3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued
4. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again
5. Yes, Jacqueline: EBM ought to be Synonymous with SBM
6. The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II
7. H. Pylori, Plausibility, and Greek Tragedy: the Quirky Case of Dr. John Lykoudis
8. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1
9. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2
10. Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?
11. Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?
12. Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research
13. Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes
14. Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes
15. Cochrane is Starting to ‘Get’ SBM!
16. What is Science?
63 thoughts on “Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes”
1. I think showing actual calculations of final probabilities starting with different Bayesian priors would be informative. If you have a really high prior, then equivocal studies drop it down
pretty fast. If you have a low prior, equivocal studies don’t bring it up. That is essentially a sensitivity analysis on how the prior affects the outcome.
If you tried to find a prior that would justify more studies given a bunch of equivocal studies, you can’t.
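The sensitivity analysis this comment describes is easy to run. Here is a sketch in Python, taking “equivocal” to mean a per-study Bayes factor modestly below 1 (0.8 is an arbitrary illustrative choice):

```python
def update(prior, bayes_factor):
    """One Bayesian update in odds form; returns the posterior probability."""
    odds = prior / (1.0 - prior) * bayes_factor
    return odds / (1.0 + odds)

EQUIVOCAL_BF = 0.8  # each study mildly fails to support the hypothesis

for prior in (0.9, 0.5, 0.1, 0.01):
    p = prior
    for _ in range(10):  # ten equivocal studies in a row
        p = update(p, EQUIVOCAL_BF)
    print(prior, round(p, 3))
```

A 0.9 prior falls below 0.5 after ten such studies, while a 0.1 prior ends up near 0.01—exactly the asymmetry described above, and no starting prior short of certainty survives a long enough run of equivocal results.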
2. One of the real obstacles to doing Bayesian analysis is that frequentist is easier. And there are more readily-available and usable tools to help with frequentist analysis, so in practice it’s
MUCH easier.
Which doesn’t even consider “that’s how it’s done.” One can’t even count on scientists to understand frequentist statistics – far too often, the answer given to “if I say that the 5% confidence
interval for alpha is 1.0 to 2.0, does that mean that there is a 95% chance alpha is within that range?” is “yes.” Even in physics, which is far more mathematically based than medicine. When the
level of statistical knowledge of the COMMON tools is that poor, how can we expect scientists to understand why Bayes is so important?
This sort of cultural change will be an uphill battle. Important, but very difficult.
3. Not bad. I take it as a call to map priors to posteriors, and that this should involve discussion of the reasonableness of the prior. Leukemia example was excellent.
I’d disagree with Scott’s argument based on popularity. The common tools are actually confusing when judged as models of learning – I’m not sure they even claim to model learning at all. In class
#1 for graduate biologists learning stats, I’d toss a coin one time, and we’d discuss what inferences could be made, and how we would gamble on the results of the next toss. The effects on the
opposition are like depleted uranium rounds. Students would, like the reviews criticized here, cry that the real problem was that more data was needed. Yes, more data always helps, and that will
never change (and get used to it), but stupid is stupid, and the stupid is just more evident when we look at examples where data is limited. Note that I’m not claiming that p’s and N’s aren’t
sufficient statistics for simple experiments – yes, some folks aren’t able to interpret them perfectly, I admit.
Can we agree that scientists need more training in this area?
Is my doc competent to read the medical literature, even now? (Perhaps only a subset of them that will publish expert summary advice need to be Jedis, if the rest will follow them, but will
4. @ rork:
Actually, you’re reinforcing my point as opposed to disagreeing with it. The fact that the tools don’t promote learning is part of the problem! People want a simple black box whose crank they can
turn and get an answer out without having to understand the statistics involved.
5. The usual application of frequentist methods in medicine also relies on randomization and patient selection to minimize effects of prior probabilities. Those same prior probabilities we are
minimizing in an RCT become important factors in framing decisions for individual patients. Some of the uncertainty faced in medical practice can be reduced with Bayes methods. The adoption of
Bayes methods can improve the quality of medical research, and reduce uncertainty of medical practice.
6. @ daedalus2u on 04 Mar 2011 at 7:35 am
I don’t know what kind of example you are thinking of, but here is one. We are dealing with test that says ‘yes’ correctly in 80% of the cases that reality is ‘yes’.
The same test says ‘no’ correctly in 95% of the cases that reality is ‘no’ (this corresponds to using p=0.05 as significance limit). For reasons I don’t understand this positive correct rate 80%
is called the power or sensitivity of the test.
You first calculate 80% divided by 5% (true positive rate divided by false positive rate). This is 16. Let’s call that the yes-factor of the test.
Now calculate 20% divided by 95%. This is 1 over 4.75. It is the false negative rate divided by the true negative rate.
Next you need the prior odds. Odds means the ratio [yes] : [no]. I like using ratios rather than quotients because I may want to talk about 0:1 or 1:0.
The square brackets mean: probability of. Before you start you have a prior probability of, say, 0.2. Now do the test. It comes out ‘yes’.
Multiply 0.2 times 16, equals 3.2
For safety you do another test, it also says yes. Multiply again by 16. Gives 51.2.
So now the posterior odds are 51.2. That amounts to [no] = 1/52.2. Each time the test comes out ‘yes’, the odds go up by a factor 16. Even if the prior odds are one millionth, ten ‘yes’ test
answers will raise the odds to one million.
But if the test says ‘no’, we have to use the other number. The prior odds are divided by 4.75. After one step they are 0.0421… , after the next step (assuming another ‘no’) they are 0.00886…
which is roughly also the posterior [yes] (more precisely 0.00878…).
So if the tests keep saying ‘no’ the posterior odds keep dropping.
If the yes and no answers both occur frequently, you are in trouble. The tests are wrong.
The problem is often that the prior odds are often only vaguely known.
Suppose we are dealing with the risk of HIV infection for a person not belonging to any known risk group. The prior odds are very small and unknown. But the yes-factor for an HIV-test is very
large and also unknown, so the posterior odds after a ‘positive’ result are very small times very large so not negligible but unknown.
Not only sometimes the prior odds aren’t known well, sometimes they aren’t even based on something that remotely can be called probabilities. For example ‘the probability that other life exists
in this galaxy’.
All this only applies to serious tests with a proper power and p-value. If one sets up an experiment with homeopathy nobody knows what effect to expect ‘if homeopathy works in this case’. So
there goes your power. Moreover lots of homeopathy trials are done by thinking up the outcome criterion after breaking the code (Texas sharpshooter technique) or other sins. I don’t know how
rampant this practice is in ordinary medicine. I fear the worst.
One solution is to see homeopathy trials as bets about a claim, something like the Randi Challenge. In that case the party offering the money is very careful. He doesn’t give out the million in
advance to the claimant hoping the claimant will get a nice publication with an acknowledgement for the funding, i.e. the NCCAM technique (am I wrong?).
7. Oops: I started with a prior probability of 1/6, so prior odds of 1/6 : 5/6, as a number: 0.2.
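The arithmetic in comments 6 and 7 can be sketched in a few lines of Python (sensitivity 80% and a 5% false positive rate, as stated; the “test” is hypothetical):

```python
SENS, FPR = 0.80, 0.05               # sensitivity and false positive rate
yes_factor = SENS / FPR              # 16: each 'yes' multiplies the odds by this
no_factor = (1 - SENS) / (1 - FPR)   # 1/4.75: each 'no' multiplies by this

odds = 0.2                           # prior probability 1/6, so prior odds 1:5
odds *= yes_factor                   # first 'yes': 3.2
odds *= yes_factor                   # second 'yes': 51.2
print(odds, 1 / (1 + odds))          # posterior odds, and [no] = 1/52.2

odds = 0.2
odds *= no_factor                    # first 'no': ~0.0421
odds *= no_factor                    # second 'no': ~0.00886
print(odds, odds / (1 + odds))       # posterior [yes] ~0.00878
```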
8. @Jan Willem Nienhuys on 04 Mar 2011 at 2:11 pm
Be careful about mixing decisions about scientific hypotheses from scientific observation, with decisions about whether the result of a clinical test means that a patient has a disease. In
scientific hypotheses, the problem is how to translate scientific concepts into a numeric figure of prior probability. If we allow several repetitions of scientific studies, the importance of the
estimate of prior probability becomes rapidly diminished (except for extreme values). Acceptance or rejection of a scientific hypothesis really fits the Bayesian perception of probability… it’s
not so much that there is real truth with some error in sampling, but the hypothesis either increases in acceptance, or decreases, based on any data that becomes available.
In the case of the decision for a patient we can’t translate basic science concepts into a figure representing the prior probability of disease. In the clinical setting we often don’t have the
luxury of repeated tests to diminish the bias of imprecise prior probabilities. The significance of “another hypothesis bites the dust” pales in comparison to the cost of being wrong for a
patient. We need more empirical estimates for clinical decisions. An experienced good clinician often has reliable estimates based on experience. Ideally, we would have more observational data to
give prior probabilities for clinical decisions, especially if we are going to reduce the training requirements for being a healthcare provider. In some ways the clinical decision fits the
frequentist view more than the Bayes view… there is truth that will be known for the patient, it will become known over time. The patient either has or has not been infected by HIV, we just may
not immediately know the truth.
9. @ JMB on 04 Mar 2011 at 10:37 pm
Hypotheses vs patients: the mathematics stays the same. But the difference is usually that in patients one has a reason (based on other symptoms, for instance) to order tests. I gave the HIV
example to show what happens if there is no reason whatsoever, or only that an insurance company demands such a test.
The same problem arises with mass screening.
Another difference between patients and theories is that in the case of theories there is actually seldom a prior probability in the ordinary sense. One does not have a series of comparable
situations and some statistics about how often ‘the theory’ happens to be correct or false. In patients prior symptoms establish a hint about the prior probability. And many tests do better than
an error rate of 5%. I imagine that a standard lab test for ‘blood glucose over 10 mmol/l’ will rarely be mistaken.
I don’t quite know what is meant by the frequentist or Bayes view. But if the Bayes view involves treating belief and feelings of (im)plausibilty as probabilities, then count me with the
frequentists. If ‘frequentist’ means sticking to the view that a probability in the end only can be based on counting how often something occurs in an ensemble of comparable situations, then I’m
a frequentist too. I cringe at book titles like The Probability of God (in my opinion a cheap Bayesian trick that can be summarized in one paragraph).
But if frequentist means doing experiments regardless of any prior plausibility, and accepting p<0.05 as irrefutable evidence that homeopathy works / cell phones cause cancer / pigs can fly /
substance X cures cancer or MS / prayer or acupuncture helps IVF – then count me out.
10. Francis Galton built a machine that illustrates Bayes’ theorem:
11. @JMB “if we are going to reduce the training requirements for being a healthcare provider”
Interesting point about a “shortcut” to clinical competency.
Do we really want to reduce training requirements though?
12. Scott, I struggle a bit with the detailed arguments in this area. Please, expand on why the person who answers ‘yes’ in your example is wrong.
“if I say that the 5% confidence interval for alpha is 1.0 to 2.0, does that mean that there is a 95% chance alpha is within that range?” is “yes.”
13. @ Badly Shaved Monkey on 06 Mar 2011 at 7:15 am
If the 5% confidence interval for a quantity A is [1.0, 2.0], it means that if A is actually outside this interval the outcome of the experiment (or one deviating even more) would have had a
computed chance of not more than 5% of happening.
The chance that A is somewhere (e.g. in the interval [1.0, 2.0]) is not even a well defined concept. A is not a random variable that may or may not take values inside that interval.
From Wikipedia:
if the statistical model is correct, then taken over all the data that might have been obtained, the procedure for constructing the interval for A would deliver a confidence interval that
included the true value of A 95% of the time.
So it is not A that is the random variable. It is the computed interval in an ensemble of repeats of the experimental process. The phrase would be somewhat more correct if it said:
Such confidence intervals have a chance of 95% of containing the true value of A.
Simplify: you write the number 1 on a piece of paper. Now you throw a die. It comes up 5. Can you say now: 5 has a probability of 0.16667 of being equal to the number on my paper? Or: the number
on my paper has a 1/6 probability of being equal to 5? Well, you may say that, but it is a mighty strange thing to say. Before you throw the die you can say something about the probability of the
event that you will throw ‘the number’ on your paper. But after the throw there is no more probability, regardless of whether you have turned that paper upside down or forgotten what’s on it.
Whenever you hear “chance of x% that event y happens”, always try to think what you would have to do in order to verify that actually y happens x times out of every 100 times.
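The point about what is and isn’t the random variable can be made by simulation. In the sketch below (Python, with made-up numbers: true mean 10, known σ = 2, n = 25), the true value is fixed; it is the computed interval that varies from experiment to experiment, and it covers the truth about 95% of the time:

```python
import random
import statistics

random.seed(1)
TRUE_MEAN, SIGMA, N, TRIALS = 10.0, 2.0, 25, 10_000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    half_width = 1.96 * SIGMA / N ** 0.5  # known-sigma 95% interval
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

print(covered / TRIALS)  # close to 0.95
```

Nothing here licenses the statement “there is a 95% chance the parameter is in [1.0, 2.0]” for any one computed interval; the 95% is a property of the procedure over repetitions, exactly as the Wikipedia passage says.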
14. @Jan Willem Nienhuys on 04 Mar 2011 at 2:11 pm
The only point I am trying to make is that in the case of medical research, imprecise probabilities derived from logical application of basic science principles are sufficient to win the
On the medical practice side, we would much prefer to have more precise direct measures of risks and benefits. In the absence of direct measures, we may fall back on less precise inferred
measures based on the Laplace strategy of Bayes.
15. http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0040168
Steven Goodman herein points to flaws in the work leading to Ioannidis’ unfortunate generalization that “most published research findings are wrong.”
16. God.
Watching y’all fumble around with math is entertaining. (Note to the MDs in the crowd – if you want to start a stand-up comedy routine, start talking about math or chemistry to someone who
actually knows something about it. The results are stupendously gratifying.)
Most of the words you use do not mean what you think they mean. (“Prior” and “posterior”, in particular.) In the same way that you deride the folks who spend fifteen minutes cutting up frogs and
doing research on Wikipedia, then proclaiming that they are ready to perform heart surgery, those of us who have some facility with numbers and mathematical concepts – or, God forbid, actual
*academic credentials* in the subject – roll our eyes at those of you who cheat your way through a “Math for Doctors” course and then proclaim that you know anything at all about statistics.
Here’s a hint – until you can actually re-derive Bayes’ Theorem from memory, you don’t know a damn thing about it. It’s not that hard, but I’m willing to bet that 90% of the people fulsomely
bloviating about it here cannot actually reproduce it without reference to Wikipedia. (Go ahead, try it. I’ll wait.)
(I will nod at one point, though, even though I’ve brought it up before: statistics does not provide capital-T Truth, “truth”, or even truthiness; it merely points an arrow, sometimes, at places
where truth might be found if one digs hard enough. Elucidation of the biochemical processes behind illnesses is Truth; all else is at best handwaving, or at worst High Wankery, whether it’s
homeopathy or pompous airs about the medical state-of-the-art.)
“hint: Bayes’ theorem doesn’t have anything to do with spam”
17. Bayes’ Theorem is not hard to derive indeed. The reason it looks so fearsome is that it is formulated in terms of probabilities instead of odds.
If you start with odds, then BT is:
posterior odds = prior odds times A/B
Never mind what A/B is for the moment. Suppose the posterior probability of something called S is denoted by X, and the prior probability by x. We have, by definition of odds: posterior odds = X/
(1-X); prior odds = x /(1-x).
Prior and posterior refer to before and after some kind of experiment E that can yield two answers: ‘yes’ and ‘no’. Instead of ‘after the experiment’ you also may think ‘under the condition that
experiment says so and so’.
Plug this in and solve for X. This is junior high school math; the only thing required is neat handwriting, so as not to confuse x and X. The result is:
(*) X = xA / (B(1-x) + xA)
Now for the meaning of these letters:
X = the probability of S under the condition that E yields ‘yes’
x = the prior probability of S
(1-x) = the probability of not-S
A = the probability that E yields ‘yes’, when S is true
B = the probability that E yields ‘yes’, when not- S holds
Now you have to devise a notation without any words in it for the above five sentences. Try things like P(S) and P(E | ~S) etc.
You may think I cheated, because I left out the hard part, namely the odds version of BT. Not so.
After doing experiment E there are basically 4 possibilities
1. S holds, E says ‘yes’ (xA of all cases)
2. S holds, E says ‘no’ (xC of all cases)
3. S is false, E says ‘yes’ ( (1-x)B of all cases)
4. S is false, E says ‘no’ ( (1-x)D of all cases)
(Exercise: what are the meanings of C and D?)
(Actually E doesn’t have to be an experiment, it can be any statement about the elements of the universe considered, just like S, as long as it is either true or false.)
The posterior odds is just the ratio of 1 and 3. Of course you can derive from this also the probability form of BT, namely as
(S holds, E says ‘yes’) divided by (totality of all cases where E says ‘yes’).
The advantage of the odds form is that it is easier to remember. I have a poor memory, and I never can recall BT in the probability form. It’s too complicated. It is easier to derive it when you
need it. In judging medical papers one has to deal with odds anyway, because of the ubiquitous odds ratio.
19. @Jan:
… That’s gibberish.
If I were to phrase that in the most charitable way, you are using highly nonstandard and incorrect terminology to describe something that is, at its core, simple. That you can’t remember it
indicates too much time spent with the chronic during your education, if indeed you had any mathematical education beyond basic addition.
If you can’t remember BT in the “probability form” [sic], then you don’t know what you’re talking about. End of story. It *ain’t* that hard.
“j’ f’n c”
20. SD,
Then please have the kindness to show us how it’s done. In Dutch of course, since Jan Willem Nienhuys was so considerate as to provide his explanation in English for you.
21. @Jan:
(side note: the derivation of that Godforsaken hash in the seventh paragraph actually *does* work out to Bayes’ theorem, but is expressed in an EXTREMELY overcomplicated way. Also, “odds” is a
term used by bookies. Statisticians work in terms of “probability”, unless they work in Las Vegas or Atlantic City.)
“for great justice”
22. @Alison:
I do Russian, Spanish and English. Dutch is outside my sphere; sorry.
However, that’s irrelevant; the terms of mathematics transcend linguistic boundaries. Difficult and foreign as it may seem to you, there is such a thing as standard notation in mathematics, and
it does not depend on the native language of the author.
And, just to be a dick: if he’s Dutch, then the Bayesian “prior probability” dictates that he was stoned beyond any capacity for short- or long-term recollection during his college years, since
weed is legal in the Netherlands (or at least, not officially criminalized). I defy you to rebut that assertion. >;->
23. … however:
“Probability” [sic] form of Bayes’ theorem:
P(A|B) = P(B|A) * P(A)/P(B)
P(A|B) = P(A U B) / P(B)
P(B|A) = P(B U A) / P(A)
P(A|B) * P(B) = P(A U B)
P(B|A) * P(A) = P(B U A)
P(A U B) = P(B U A) [symmetric property of set union]
P(A|B) * P(B) = P(A U B) = P(B|A) * P(A)
P(A|B) * P(B) = P(B|A) * P(A)
P(A|B) = P(B|A) * P(A) / P(B)
QED. []
This is accessible with high-school algebra. The implications, however, are not, necessarily.
“i’ve had scarier TA’s than you can even imagine”
24. @ Badly Shaved Monkey:
Jan got it exactly. The value of alpha is not the random variable; the computed confidence interval is. This confuses people because we only know the latter and not the former, so we tend to
assume that the thing we don’t know is the random bit.
25. I apologize for thinking I was against Scott in my comment #3, and appreciate (now) that “it’s going to be hard” may actually be more important than many other details. I will read more carefully
in future (he said for the 100th time).
Did the much-smarter-than-us SD just use unions everywhere rather than intersections? I’m talking about that last post – the one that actually included content.
26. Well, well. SD certainly knows how to impress people with his knowledge of mathematical notations and foreign countries.
He purports to prove P(A|B) = P(B|A) * P(A)/P(B), in other words
P(A|B) * P(B) = P(B|A) * P(A)
and he does so in a complicated way, by observing that both sides (by definition of the conditional probability P(A|B)) equal P(A U B), even invoking a set theoretic theorem about
the union of two sets. Why not include as well the set theoretic proof of A U B = B U A (using only the Zermelo-Fraenkel axioms of set theory and the logical principle of substitution of equals)?
But he is mistaken, it is the intersection, not the union. So put everywhere ∩ instead of ∪. Actually, when I wrote
S holds, E says ‘yes’ (xA of all cases)
i.e. in mathematician’s jargon: P(S ∩ E) = P(S)*P(E|S), I assumed this to be self evident (maybe I was too optimistic about my readers, but having taught set theory for many years to math students,
one can become an optimist, I admit).
I disapprove of using this kind of pompous notation outside of mathematics, and I find it somewhat funny that someone who is
(1) hiding behind a pseudonym
(2) rather harshly criticizes hypothetical people who can’t reproduce a proof of ‘the’ Bayes Theorem,
(3) then criticizes a proof using a notation that is not usually found in mathematics textbooks, and
(4) concludes with a proof in which he mixes up union and intersection, and
(5) as an extra claims that ‘odds’ is a term that should be restricted to bookies.
Maybe SD should read
but I doubt that he will do so, because he frowns upon Wikipedia as if it is a kind of Cannabis sativa. Maybe he should read
if he prefers an article written by named professors in statistics.
SD’s version of Bayes Theorem has that name, but in various texts one will find
P(A|B) = P(B|A) P(A) / (P(B|A) P(A) + P(B|~A) P(~A))
For example, on page 51 of The Probability of God, by Stephen D. Unwin (2003), who repeats this formula about a dozen times, but who in essence starts with prior odds of 1 (‘even bet’) that God
exists, then multiplies this by factors 2, 10, 1/2, 1/10 that express his beliefs in the various kinds of evidence pro and con, and then arrives at posterior odds 2 (0.666 probability that God
exists). I call that a pompous smokescreen. It gives mathematics a bad name (theologians probably aren’t too happy either).
Similar complicated formulas are also called Bayes Theorem. The one just mentioned is the one I tried to explain, and if you want to invoke the Supreme Being to denounce this jumble of letters
and brackets and symbols you have my deepest sympathy. I dislike having to remember any formula that I can’t visualize.
But please try to be a bit more civil about other people’s failings if you are confused about intersection and union.
27. EthanFoster on 07 Mar 2011 at 7:37 am
I would love to answer you to the best of my knowledge, but your post is a concatenation of quotes and some of the connecting text seems to be missing.
You wrote
Please, expand on why the person who answers ‘yes’ in your example is wrong.
I don’t know whether that question was directed to me. I am not aware of having mentioned people saying ‘yes’, only tests saying ‘yes’. That test can be some RCT (i.e. not a person) with as
result ‘yes, the treatment works, because the verum group improved more than the control group, p<0.001’, or it can be a laboratory test or diagnostic saying ‘yes, this person is infected with X, or
has prostate cancer, or whatever’.
In the latter case it makes a lot of difference whether there is any prior information (e.g. symptoms) that the person is suffering from the dreaded affliction. That is precisely the reason why
mass screening may be useless, even when individual diagnostics for people with symptoms make perfectly good sense.
In the former case it makes a lot of difference whether the RCT is the crowning piece of evidence after a great many other investigations, or just a piece of shoddy work that would imply
relegating 200 years of science to the dustbin if it were correct.
28. I would suggest that SD is doing the intellectual equivalent of flashing. Revealing himself with the intent to impress, but actually embarrassing himself deeply in the process.
so sad
29. micheleinmichigan,
I like the analogy. I knew a girl who devastated a flasher by saying “What’s that? It looks like a penis, only smaller.”
30. HAH! Ahh, I love being spectacularly wrong like that in public… Sure as hell won’t make *that* mistake ever again. Thanks for catching that union/intersection gaffe – that WAS stupid of me,
wasn’t it now? Looked a lot better when I was drinking, I’ll tell you that much. Probably had something to do with the fact that “union” is a lot easier to represent on a keyboard… I accept my
penance. (Part of which is developing even as we speak, a mild hangover.) Somebody tell Cde. Gorski, he’d love this, and he deserves a good laugh at my expense. >;->
I feel bad, since the point I was attempting to make (badly) deserved better, frankly, than me making stupid mistakes while demonstrating it. So yes, I screwed that one up. If you’re going to
embarrass yourself by being wrong, do it flamboyantly, I always say. *sigh*
So, turn all those “U”s upside down, and search and replace “union” with “intersection”. Much better.
So why do they use that definition for conditional probability? Well, first, it’s so that blowhards like me are encouraged to get drunk and swap symbols when flaming someone on the Internet,
leading to a temporary but significant rise in the self-esteem and mood of the audience and responders. And that should be reason enough. However, a more viable mathematical reason is that it
provides an intuitive description of what’s going on. I shall demonstrate.
Picture a Venn diagram (everybody’s favorite graph) that looks as follows:
[ (///////////////(XXXXX)\\\\\\\\\\\\\\\\\\\\) ]
consisting of two overlapping circles:
[ ] – universe
(/////) – event A
(\\\\\) – event B
(XXXXX) – *INTERSECTION* of A and B (see, I got it right that time!)
So what is the probability that, say, event A will occur? (*THAT* is the definition of “prior” probability; it has NOTHING TO DO with whether event A occurs “before” B or not. It is simply the
probability of A occurring, before consideration of any other factors.) It’s P(A). That’s it. Let’s give it the arbitrary value of 0.4, give P(B) the value of 0.5, and set the P(A ∩ B) at 0.1.
These are random values set just for the sake of argument and demonstration.
So what is the probability of A given B, or P(A|B)?
By definition, we either *know* or are assuming that B has happened – we’re “given” it. That is what “posterior” probability means; we are taking it into account *after* we reduce our statistical
universe of outcomes by considering something else. (Note carefully that this applies only to our calculations, and again has nothing to do with B occurring before A, or vice versa.) Our
statistical universe has just gone from “1” (everything is still on the table) to 0.5 (the probability that B happened). In this “new” universe – of “only B” – P(A ∩ B) occupies one-fifth of the
possible outcomes.
So, the probability of A given B is 0.2 (mostly because I picked the numbers to work out easily).
This also comes easily from the symbolic definition: P(A|B) = P(A ∩ B) / P(B) = 0.1 / 0.5 = 0.2.
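Those toy numbers can be checked mechanically. A minimal sketch, using exactly the arbitrary values chosen above (P(A) = 0.4, P(B) = 0.5, P(A ∩ B) = 0.1):

```python
# Toy check of conditional probability from the Venn-diagram example.
p_a = 0.4          # prior probability of event A
p_b = 0.5          # prior probability of event B
p_a_and_b = 0.1    # probability of the intersection A ∩ B

# Definition of conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)  # 0.2

# Bayes' theorem recovers the same value going the other way around:
p_b_given_a = p_a_and_b / p_a          # P(B|A) = 0.25
via_bayes = p_b_given_a * p_a / p_b    # P(A|B) = P(B|A) * P(A) / P(B)
print(via_bayes)
```

The second computation is just the theorem proved (eventually) above; with the same inputs it must agree with the direct definition.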
This description has the virtue of being “simple”; that’s why it’s used. It follows *directly* from the picture, and/or from the basic language used to describe our little toy statistical
universe to begin with. Now, you *can* chew through the calculations and recover this from Jan’s statement of Bayes’ theorem, or vice versa. (Already said that, in fact.) This is what Jan’s
statement works out to, in standard notation:
P(A|B) = P(A)*P(B|A) / ( P(~A)*P(B|~A) + P(A)*P(B|A) )
I will grant it its one useful advantage – it avoids entirely any explicit requirement for P(B) (the probability of B happening), preferring instead to use potentially more accessible variables,
such as “probability of B given A” and “probability of B given not-A”. (On the other hand, if you have access to those, you know what P(B) is anyway… For the reader: Why?) You can do all kinds of
tortured yoga on Bayes’ theorem to get it to fit any mold you need to pour it into, and as long as you do the math right, it works. This particular statement of it shows up a lot in that “Math
for Poets/Doctors/Biologists” course I was talking about. It makes it easy to do things like that calculation of the actual success rate of medical tests, f’rinstance.
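The two statements really are the same thing algebraically, which a quick numerical check makes concrete. The probabilities below are arbitrary, chosen only for illustration; the line computing P(B) is the law of total probability, which also answers the “for the reader” question above:

```python
# Check that the expanded-denominator form of Bayes' theorem matches
# the simple form. The input probabilities are arbitrary.
p_a = 0.3              # P(A)
p_b_given_a = 0.8      # P(B|A)
p_b_given_not_a = 0.1  # P(B|~A)

# Law of total probability: knowing P(B|A), P(B|~A) and P(A)
# means you know P(B) anyway.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

simple = p_b_given_a * p_a / p_b
expanded = (p_a * p_b_given_a
            / ((1 - p_a) * p_b_given_not_a + p_a * p_b_given_a))
print(simple, expanded)  # identical, up to floating point
```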
However, there are not “two probability theories”; everything that is true and expressible with Bayesian reasoning is also true and expressible using “frequentist” reasoning, since, at their
core, they use the same language and the same axioms of probability. This is Classic Coke and Diet Coke – two flavors of the same basic drink – rather than Catholicism vs. Protestantism. What
Bayes actually demonstrated: the two expressions for conditional probability of two variables, one conditioned on the other, are *related* by the ratio of the probability of one event to the
probability of the other. That’s it. No magic.
Now, Bayes is all the rage – especially here – because the assumption is that that “prior probability” for things like “non-sciency stuff” can be selected or assigned such that goofy shit like
homeopathy is sifted right out of the research pool, leaving more bux for Real Scientists(™). Yeah, great theory, might even work, but. Unfortunately, if you can’t do the job already with
frequentist statistics, you won’t be able to do it with Bayesian reasoning either, and sooner or later somebody will call you on your attempt. “Why has this prior probability been set so low?
Doesn’t that prejudice the outcome?” As much as you hem and haw about how “it all works out in the end, Bayes’ theorem says so”, I suspect that it will provide a convenient trap for the more
partisan, since their assignation of prior probability will reveal (in one convenient number) their political reliability. “Middle-of-the-road”, “compromise” numbers will be chosen, which prior
probabilities will lead to a conclusion that CAM has a definite but small effect. Somehow I doubt that was the effect you were looking for.
“ach, mein Kopf”
31. @michele:
“I would suggest that SD is doing the intellectual equivalent of flashing. Revealing himself with the intent to impress, but actually embarrassing himself deeply in the process.
so sad”
Yes, I freely admit that my attempt at a proof was typed in what is medically known as a “drunken haze”. (In which I violated one of my own cardinal rules, “*NEVER DO MATH WHILE DRINKING*”…)
Alcohol makes fools of us all. Ah, sweet, sweet correction…
“I like the analogy. I knew a girl who devastated a flasher by saying “What’s that? It looks like a penis, only smaller.””
Yeah. That was from my “unit” in toto spontaneously attempting to crawl back into my stomach from embarrassment. It’s a defense mechanism. Honest.
“and the air is really cold too! honest!”
32. I have read that the frequentist approach is equivalent to the Bayes approach with uniform priors. But the whole point of the discussion is that when priors deviate significantly from uniform,
then the decisions based on the frequentist approach will often be in error.
My preference for the Bayes view of probability comes from its use in classifiers and machine learning. That is what I used to program. I think the Bayes view of probability also has a more
direct relationship to information theory and entropy.
33. I was programming a Bayes classifier in 1989. I did not recently adopt the Bayes view because it is a current rage. The Bayes view of probability does allow application of the theorem to a wider
range of problems dealing with uncertainty. The fact that you can use a Bayes classifier as a spam filter is one advantage of the Bayes view of probability over the frequentist view of
probability. I still prefer the frequentist approach for proof of causation, and estimation of important parameters for clinical decisions.
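As a toy illustration of the spam-filter point, here is a minimal naive-Bayes-style score. This is only a sketch, not JMB’s actual classifier, and the word statistics are invented for the example:

```python
import math

# Toy naive-Bayes spam score: add up the log likelihood ratio of each word.
# The per-word probabilities below are made up purely for illustration.
p_word_given_spam = {"viagra": 0.30, "meeting": 0.02, "free": 0.20}
p_word_given_ham = {"viagra": 0.001, "meeting": 0.10, "free": 0.05}

def spam_log_odds(words, prior_odds=1.0):
    """Posterior log-odds of spam: log prior odds plus the sum of
    log likelihood ratios of the known words (Bayes in odds form)."""
    log_odds = math.log(prior_odds)
    for w in words:
        if w in p_word_given_spam:
            log_odds += math.log(p_word_given_spam[w] / p_word_given_ham[w])
    return log_odds

print(spam_log_odds(["free", "viagra"]) > 0)   # spam-flavored words
print(spam_log_odds(["meeting"]) > 0)          # "meeting" pulls toward ham
```

Note that this is exactly the “odds form” of Bayes applied repeatedly: each word’s likelihood ratio multiplies the running odds (added in log space).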
The answer to the question of how the Bayes view of probability varies from the frequentist view is a topic best left for the statisticians and mathematicians. I would suggest Bayes view of
probability is that you are standing on the ground wondering where you can run in order to avoid the bombs you saw dropped from that plane, and the frequentist view is from that plane wondering
whether the bombs just dropped will hit the intended target (a stationary building).
34. I would like to make a bit more propaganda for adopting the ‘odds’ version of Bayes.
Here is an example.
In a certain place it rains 5 days per year. The weather forecaster, who is right 90% of the time (whatever his prediction), says it is going to rain tomorrow. What is the probability – given
this forecast – that it rains tomorrow?
‘Odds’ version of the solution:
step 1. The prior odds (for ‘it’s positively raining’) are 5:360 (= 1:72).
step 2. The fraction ‘correct positive rate’/'false positive rate’ is 90%/10% (= 9).
step 3. Now multiply. The posterior odds are 9:72 (= 1:8).
step 4. Convert odds back to probability, gives 1/9.
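The four steps above can be carried out mechanically; a small sketch using exact fractions:

```python
from fractions import Fraction

# Jan's odds form of Bayes, applied to the rain-forecast example.
# Posterior odds = prior odds * (true positive rate / false positive rate).

prior_odds = Fraction(5, 360)        # step 1: rains 5 days, dry 360 days
likelihood_ratio = Fraction(90, 10)  # step 2: right 90% / wrong 10%

posterior_odds = prior_odds * likelihood_ratio       # step 3
probability = posterior_odds / (1 + posterior_odds)  # step 4: odds -> probability

print(posterior_odds)  # 1/8
print(probability)     # 1/9
```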
Compare this to exactly the same example (with horrible formula and desert wedding) on http://stattrek.com/Lesson1/Bayes.aspx
One doesn’t have to be a “world-class statistician” to perform the above steps, and one doesn’t even have to recall the formula. Even using the calculator on that site isn’t easy, because you’ll
have to recall the meanings of those letters. I think that non-mathematicians are easily distracted by letters with no intrinsic meaning, and even physicists hate letters with unconventional
meanings (they always want to denote time by t, temperature by T, pressure by p, volume by V, velocity by v and so on).
Of course you do have to remember something, namely the meaning of concepts like false positive (= some kind of test says positive/raining/sick/effective in the situation that the opposite is
true). If you are used to saying sensitivity instead of correct positive rate, you’ll probably have to remember false positive rate = 100% minus sensitivity. Personally I prefer ‘false positive
rate’ because the words convey the meaning by themselves.
See also http://en.wikipedia.org/wiki/Sensitivity_and_specificity
where the terms precision and positive predictive value are introduced for what I called the posterior probability (i.e. 1/9 in the above example). There you find a worked example with a fecal
occult blood screen test for bowel cancer. And you’ll find the additional terms ‘statistical power’ and ‘recall’ for correct positive rate.
35. SD:
Of course there are not two probability theories, but misapplications of frequentist statistics have led to erroneous conclusions and to irrational decisions. In this series I’ve been mainly
concerned with the myth that data can “speak for themselves,” which is a misapplication of frequentist statistics, but nevertheless a common one (at least in medical research–examples abound
throughout this series). Bayes forces authors and readers to confront external knowledge. I haven’t noticed that any of the authors or readers here think that the “prior” in “prior probability”
has special temporal meaning, by the way.
I agree that there will be a push to choose “compromise numbers,” if Bayes ever becomes the norm in medical research, but I still want justifications for priors to be made explicit, and I want
MDs to understand that the data from a particular experiment can’t “speak for themselves.” The players in academic medicine at least ought to understand what they’re arguing about. See my example
of Jonas, above.
There are other problems with frequentist inference that I haven’t discussed, having to do with misuses that stem from its long-run perspective. (Forgive me if you already know these things).
Goodman (pp. 999-1000, 1003) and others have described the “classical statistical puzzle” of two experiments, each comparing the same two hypotheses, with identical subjects, identical
treatments, and identical outcomes, that yield quite different P values—merely because each was performed by a different investigator with a different criterion for stopping (although, by
coincidence, each eventually stopped after the same number of trials).
Thus each investigator predicted different results if his experiment were repeated many times, as is the basis for frequentist calculations. P values are for the long-run only, but ‘everyone’
seems to use them as if they were also for the short run. There have been real-life arguments about P values when experiments have been stopped early because of large treatment (or toxic) effects
(cited in Goodman). Goodman:
Because frequentist inference requires the “long run” to be unambiguous, frequentist designs need to be rigid (for example, requiring fixed sample sizes and pre-specified stopping rules),
features that many regard as requirements of science rather than as artifacts of a particular inferential philosophy.
Goodman also shows, in his subsequent article, that in the case of the two identical experiments involved in the “classical statistical puzzle,” Bayesian statistics gets it right: each experiment
yields the same likelihood ratio (Bayes factor). That article discusses other examples of “problems that plague frequentist inference,” such as multiple comparisons—which I understand only
faintly, so I’ll merely quote Goodman without implying assertions of my own:
The frequentist solution…involves adjusting the P value for having looked at the data more than once or in multiple ways. But adjusting the measure of evidence because of considerations that
have nothing to do with the data defies scientific sense, belies the claim of “objectivity” that is often made for the P value, and produces an undesirable rigidity in standard trial design.
From a Bayesian perspective, these problems and their solutions are viewed differently: they are caused not by the reason an experiment was stopped but by the uncertainty in our background
knowledge. The practical result is that experimental design and analysis is far more flexible with Bayesian than with standard approaches.
Oh: “probability of B given A” times “probability of A”, plus “probability of B given not-A” times “probability of not-A”, is equal to “probability of B”
36. JMB wrote
the frequentist approach is equivalent to the Bayes approach with uniform priors
I don’t know what the author he read meant, but I think it has something to do with the definition of probability. Let me explain.
1. If you throw a well-formed die, the symmetry of the thing alone tells you that all sides have a probability of 1/6 of coming up. You can try to test that if you wish, but hardly anybody ever
does so.
2. If you throw a coin, the symmetry of the coin alone tells you that both sides have a probability of 1/2 of coming up. Actually some coins are not exactly symmetric because the ‘heads’ part is a
bit thicker. When you ‘throw’ the coin by letting it rapidly spin vertically on a very smooth table, you get rather large deviations from the 50/50 odds.
3. I experimented with a reduced cube. I had sawn off about 1/3 of one side, so I had a block of 42 x 42 x 27 mm. I threw it 100 times, and it fell 83 times with a 42×42 side (a flat side) on the
floor. I am too stupid to calculate the theoretical probability in this case (maybe a puzzle for Mr. S.D. Unisection?); I don’t even know how to go about it.
4. I presume that the frequentist view for establishing the true flat chance is that I should just continue throwing my flat die and that in the end the fraction of flat falls will converge to
The True Flat Probability F. Unfortunately I don’t have the time to do that.
5. However, I can make hypotheses. What would be the chance of obtaining exactly this result (namely 83 flats in 100 throws) if F=0.83 or any other value?
Here is a table:
suppose F=0.5: the probability of this result is 5.246 x 10⁻¹²
suppose F=0.6: the probability of this result is 4.410 x 10⁻⁷
suppose F=0.7: the probability of this result is 1.194 x 10⁻³
suppose F=0.7418: the probability of this result is 0.01148
suppose F=0.75: the probability of this result is 0.01652
suppose F=0.77: the probability of this result is 0.03556
suppose F=0.79: the probability of this result is 0.06362
suppose F=0.81: the probability of this result is 0.09245
suppose F=0.83: the probability of this result is 0.10567 (this is the maximum)
suppose F=0.85: the probability of this result is 0.09081
suppose F=0.87: the probability of this result is 0.05495
suppose F=0.89: the probability of this result is 0.02118
suppose F=0.8977: the probability of this result is 0.01261
suppose F=0.91: the probability of this result is 4.420 x 10⁻³
suppose F=0.95: the probability of this result is 7.184 x 10⁻⁶
suppose F=0.99: the probability of this result is 2.888 x 10⁻¹⁶
(If you think one doesn’t have to consider F=0.99, try throwing a standard matchbox, and count how often it lands standing on its smallest side.)
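The table is just the binomial likelihood of the observed result; a short sketch that reproduces a few rows and confirms that the maximum sits at F = 0.83:

```python
from math import comb

# Probability of exactly 83 flats in 100 throws, as a function of the
# assumed flat-chance F: the binomial likelihood C(100,83) F^83 (1-F)^17.
def likelihood(f, flats=83, throws=100):
    return comb(throws, flats) * f**flats * (1 - f)**(throws - flats)

for f in (0.5, 0.7, 0.83, 0.95):
    print(f, likelihood(f))  # compare with the table above

# The likelihood peaks at F = 83/100, the observed fraction.
best = max((i / 1000 for i in range(1, 1000)), key=likelihood)
print(best)  # 0.83
```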
6. We can of course suppose what we want, but what is the true value of F? The Bayesian idea is, I believe, to inject some kind of prior hunch about what F might be. You may think that all values
of F are equally likely (that is the uniform prior) or that values of F below 0.5 are extremely improbable. It really doesn’t matter. If you assume any prior distribution of F, you’ll have to multiply it with the
function given above to get the posterior distribution. The function given above has a rather narrow peak in the interval 0.77 to 0.91, so it really doesn’t matter what the prior distribution is
outside this interval.
7. If my experiment had consisted of 1000 throws (with 821 flats, say) then the peak would have been even narrower and the precise form of the prior distribution would have been even more
irrelevant. So if you can do very many experiments the prior really doesn’t matter anymore.
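Points 6 and 7 can be made concrete on a grid. A sketch comparing a uniform prior with a rather different prior that favors large F; with 83 flats in 100 throws the likelihood peak is narrow enough that the two posteriors nearly coincide:

```python
from math import comb

# Grid over candidate values of F, and the likelihood of 83/100 flats at each.
grid = [i / 200 for i in range(1, 200)]
lik = [comb(100, 83) * f**83 * (1 - f)**17 for f in grid]

def posterior_mean(prior):
    """Posterior mean of F: normalize prior * likelihood over the grid."""
    w = [p * l for p, l in zip(prior, lik)]
    total = sum(w)
    return sum(f * wi for f, wi in zip(grid, w)) / total

uniform = [1.0] * len(grid)  # flat prior over F
skewed = list(grid)          # a prior proportional to F (favors large F)
print(posterior_mean(uniform), posterior_mean(skewed))
# The two posterior means agree to about two decimal places.
```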
8. What would be the frequentist interpretation? I don’t know what a frequentist is, but I guess it is a person who prefers the following statement:
“Suppose F=0.7418, then the probability of having 83 or more flats is 0.025.
Suppose F=0.8977, then the probability of having 83 or fewer flats is 0.025.
Let us call the interval [0.7418 , 0.8977] the 95% confidence interval.”
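A sketch checking those endpoints: each end of the interval pins one binomial tail of the observed result at about 2.5% (the exact, Clopper-Pearson-style construction):

```python
from math import comb

def tail_at_least(f, k=83, n=100):
    """P(X >= k) when each of n throws lands flat with probability f."""
    return sum(comb(n, j) * f**j * (1 - f)**(n - j) for j in range(k, n + 1))

def tail_at_most(f, k=83, n=100):
    """P(X <= k) when each of n throws lands flat with probability f."""
    return sum(comb(n, j) * f**j * (1 - f)**(n - j) for j in range(0, k + 1))

print(tail_at_least(0.7418))  # about 0.025 at the lower endpoint
print(tail_at_most(0.8977))   # about 0.025 — equivalently, more than
                              # 83 flats has probability about 0.975
```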
9. I can’t see how frequentist = uniform prior (exactly), but in the case of very many experiments the narrow peak looks just like the standard normal curve and both methods give approximately
the same interval for ’95% confidence’.
But all this refers to the situation that you want to give an experimentally workable definition of probability. I mean a definition that tells you what to do when you want to determine an
unknown probability and how to interpret the results of a large but finite number of experiments.
37. @Jan
Uniform priors was suggested by Laplace for initial calculation. A uniform prior distribution is usually characterized as all possible results being equally probable. Since frequentist statistics
don’t consider prior probabilities, that is equivalent to assuming the hypothesis and null hypothesis are equally likely. In the case of balanced dice, that means that each number is equally likely.
The uniform prior distribution for the roll of an unbalanced die would require some pretty heavy-duty physics calculations regarding measuring the potential energy barrier from rolling from each
face of the die to each adjacent face of the die. If you plot bins on a line representing the size of the bins reflecting the potential energy barrier, then the probability will be flat and
uniform relative to the reference line, but the bins will have different probabilities based on their size. Now, I will probably be crucified by the physicists who may read this.
In any event, it is much easier to calculate the probability of a homeopathic dilution of 60C containing an active ingredient, than to calculate the probability distribution of an unbalanced die.
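That dilution calculation really is easy; a sketch, assuming the series starts from one mole of active ingredient:

```python
# Expected number of molecules remaining after a 60C homeopathic dilution,
# assuming the starting dose is one mole of active ingredient.
AVOGADRO = 6.022e23
dilution_factor = 100.0 ** 60  # 60 centesimal (1:100) steps = 10^120

expected_molecules = AVOGADRO / dilution_factor
print(expected_molecules)  # about 6e-97: effectively zero

# For a Poisson count with such a tiny mean, the chance that even ONE
# molecule survives is essentially that same number.
```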
38. @Jan
In your dice problem of determining the probability distribution of the sliced dice sides ending up, using the Bayes schema (this is not Bayes inference theorem), you could naively assign a
uniform distribution to the probability of each side ending up an equal number of times. After your first trial, the postulated probability distribution would be updated by the result. Since your
first result has such a low probability of occurrence based on the assumption of uniform distribution, the extent of the update of the probability distribution would be great (note the similarity
to information theory, the lower the probability of a message, the greater the information it contains). With this Bayes strategy (using the probability of the observed result as a weighting
factor for update of the believed distribution, useful in machine learning), you can very rapidly converge on a probability distribution that the frequentist approach will take a larger number of
trials to estimate.
39. JMB wrote: “calculate the probability of a homeopathic dilution of 60C containing an active ingredient”
Here is something we agree on. But those pesky homeopaths keep saying that their preparation process amplifies some kind of spiritual quality (which they call energy) that science doesn’t know
about. I find that ‘improbable’ (that such a crude preparation process would do something to an as yet undiscovered property of matter), but I am at a loss how to assign a credible number to
Come to think of it, that number is far less than 10⁻¹²⁰ anyway, so negligible in this case. Or is it?
40. One could actually, were one sufficiently motivated, assign a pretty good prior to the proposition that there exists such a new interaction. The fact that it hasn’t been observed in current
collider experiments would allow a quite nice constraint. Almost certainly much LESS than 10^-12.
And that’s before considering the odds that succussion actually manipulates said new interaction.
41. @ Scott
Astronomers seem pretty sure that in and around galaxies there is a huge amount of mass (bound only gravitationally) of an unknown nature. A typical galaxy has a density (in the sphere enclosing
it) of about 1000 atoms per cubic metre. So this, or a modest multiple of it, indicates the mass density of this totally unknown kind of mass. So the prior odds – given our knowledge of astronomy
– that there is some kind of unknown mass still to be discovered are quite high (of course there may be something seriously wrong with the theory of gravitation at long distances).
But as far as we know this mysterious mass has only gravitational interaction. Typical particles discovered in colliders only exist for very short times and then decay. The only non-decaying
elementary particles found by nuclear physics are the neutrinos (originally discovered because of failed energy-momentum balances), but the neutrinos we know are too light and energetic to be
gravitationally bound to galaxies.
But the interactions with the human body seen by homeopaths in their consulting rooms are much more impressive. The particles, or whatever does this interaction, are also very stable: homeopathic
medicine can keep for years. They are only destroyed by interaction with strong-smelling herbs, mint and menthol in toothpaste, and coffee. The only solution seems to be that it is all spiritual
and that no scientific or statistical or mathematical or logical laws apply to it, only the observations of paranormally or mystically gifted people.
42. @ JMB on 08 Mar 2011 at 2:02 pm
I am afraid you lost me. Can you illustrate what happens when you first throw
‘flat, flat’, then update, and then throw ‘side, flat’? In my version after the fourth throw the assumption F = x would imply the result to have a probability of: 4x³(1-x).
I already stumble when you ‘naively assign a uniform distribution to the probability of each side ending up an equal number of times’. That amounts to putting the full weight 1 on just the prior
F = 1/3, in other words, the odds of F being unequal to 1/3 equal to 0:1. I guess this is not your meaning, so if you would demonstrate how you go about it in this particular case (‘prior, flat,
flat, update, side, flat, second update’) it would be most welcome.
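For what it’s worth, a grid version of the update Jan asks about can be sketched as follows (my own sketch, not necessarily JMB’s schema; uniform prior over F, with the throws processed one at a time):

```python
# Sequential Bayesian update of the flat-chance F on a grid, starting from
# a uniform prior, for the throw sequence: flat, flat, side, flat.
grid = [i / 100 for i in range(1, 100)]  # candidate values of F
posterior = [1.0] * len(grid)            # uniform prior (unnormalized)

for outcome in ("flat", "flat", "side", "flat"):
    # Likelihood of this single throw under each candidate F.
    lik = [f if outcome == "flat" else 1 - f for f in grid]
    weights = [p * l for p, l in zip(posterior, lik)]
    total = sum(weights)
    posterior = [w / total for w in weights]  # renormalize after each throw

mean_f = sum(f * p for f, p in zip(grid, posterior))
print(mean_f)  # close to 2/3, the mean of the density proportional to f³(1−f)
```

The final posterior is proportional to f³(1−f), which matches Jan’s 4x³(1−x) likelihood up to normalization; the order of the updates makes no difference.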
43. @ Jan:
We have a high confidence that dark matter exists, yes. But an unknown *interaction* relevant at energy scales corresponding to biological processes is the relevant question, and that’s quite
independent of dark matter. And we can rule it out to a very high degree of confidence.
44. @ Scott
We almost totally agree. (Whatever happens in collider experiments is also irrelevant for biology energy scales.) But the point I wanted to make is that we cannot express this degree of
confidence in a numerical form. And that is precisely one of the criticisms against ‘doing Bayes’: that so-called priors are not actually chances (or odds) or that they are unknown or unknowable.
If we are dealing with rare events like lab mistakes or intentional fraud in science one can do statistics. These things happen. But it is impossible to examine a bunch of universes with human
life (‘as we know it’) in it and count how often homeopathy is true or false in them without scientists understanding how homeopathy could work, to arrive at credible prior odds.
So the judgement that research into homeopathy is of no use cannot be based on simply plugging in numbers into Bayes’ Theorem (incredibly small times anything is still incredibly small).
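The arithmetic behind "incredibly small times anything is still incredibly small" is just odds-form Bayes; a toy sketch (the numbers here are hypothetical, not taken from any study):

```python
# Posterior odds = prior odds x Bayes factor.  With a vanishingly small prior,
# even a strongly positive trial leaves the posterior essentially unchanged.
prior_odds = 1e-12        # hypothetical prior odds that the remedy works
bayes_factor = 20.0       # hypothetical evidence from one positive trial
posterior_odds = prior_odds * bayes_factor
print(posterior_odds)     # ~2e-11: still incredibly small
```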
In case of homeopathy the solution is simpler than with many other unproven healing methods. The basic tenet of homeopathy is that highly diluted stuff will produce symptoms in healthy people,
e.g. unbearable itch for Sulphur C200, or
‘Sentimental mood in moonlight, particularly ecstatic love’ for Antimonum crudum (stibnite, antimony sulphide, I don’t know which potency, see
http://www.homeoint.org/seror/cowperthwaite/antim_c.htm , note that the formula for Sb2S3 is rendered incorrectly but consistent with what Hahnemann thought; as antimony is poisonous, I guess
provings with it were only done with small doses; Hahnemann recommended in his later years that provings be done with C30, and this particular symptom doesn't appear in Hahnemann's writings, I think).
OK, if they believe that, and if they base treatment on comparing patients' complaints to that kind of symptoms, let them do a serious proving of any kind. The total of such symptoms runs
into the hundreds of thousands, let them take their pick. Table salt C30 has over 1000. To perform a serious proving is not difficult, and I gather that homeopaths often do that (it would usually be a
reproving) during their training periods. This is not a strange idea at all. Any science student starts doing small experiments that have been done often before, not only cutting up frogs, but
measuring gravitation or measuring the speed of sound or of heat loss, whatever.
So let those homeopaths first show a couple of successful reprovings done in cooperation with real scientists. But they won't. They refuse to do so. I can only think of one reason: they know they
will fail.
But then there is no need to take them seriously anymore.
(Whatever happens in collider experiments is also irrelevant for biology energy scales.)
Not true. High-energy experiments are illuminating about lower-energy phenomena, even though the reverse is not. Technically speaking, this is because you can do renormalization to absorb the
effects of higher energy scales into the coupling constants at lower energy scales. But that’s not true in reverse.
Any interaction which could do what’s claimed for homeopathy, much less reiki, would stick out like a sore thumb in such things as the decay cross-sections of the Z boson.
It could be done with lower-energy arguments too, I’ll grant you, but there’s a certain cachet about using the fun stuff.
But the point I wanted to make is that we cannot express this degree of confidence in a numerical form.
Also false. We can calculate the probability that an interaction having the characteristics claimed would not have been detected in existing experiments, based on the uncertainties therein. This
is a routine operation. Strictly speaking it’s normally done in the other direction – given the experimental constraints, what characteristics, in particular coupling constant (i.e. strength),
can a hypothetical new interaction have – but the math works the same way for ruling out a new interaction which would have to have a certain strength to do what’s claimed for it.
In particular, we can conclude that said hypothetical interaction must have (at a minimum) comparable strength to electromagnetism at biological energy scales (i.e. eV) since it would necessarily
have to be affecting chemical reactions to function, and hence EM cannot entirely dominate.
46. @Jan
Sorry, I assumed when you sawed off the side, there were still six flat faces to the die. I framed the problem as trying to predict the probability distribution of each side landing down, not
just the probability of the flat side landing down (this comes from my past focus on multiple disease possibilities). Updating the hypothesis is complicated in the way I have
framed the problem. Your way of framing the problem is better. The main point I was trying to make is that the update of the hypothesis is dependent on the probability of the observed data
resulting from the hypothesis. When the prior probability estimate is far from the observed data, there will be a greater revision of the hypothesis. Imprecise prior probability estimates are
revised rapidly after a few trials. In either the frequentist approach or the Bayesian approach, several trials are necessary to reduce errors in decisions. However, even with imprecise priors,
the Bayesian approach will more rapidly approach a correct probability estimate than the frequentist approach. Here is a paper (that I am in no position to determine its validity) that shows that
unless the true parameter is significantly out of the range of the uniform prior, the Bayes estimator of the parameter will outperform the frequentist estimator given a limited number of trials.
The other advantage of the Bayes approach is that for those hypotheses with prior probabilities in the extreme range (such as low probability of homeopathic preparations having an active
ingredient (whether 10^-120 or 10^-12 or 10^-6), or high probability that a parachute will reduce your chance of death after falling out of an airplane), even with an unexpected result, the
probability of the hypothesis will not be changed enough that decisions made from statistical analysis will be wrong. I would tend to say that any prior probability of less than .01 or greater
than .99 represents established scientific knowledge (maybe physicists and chemists can use .0001 and .9999).
I think Dr Atwood has made the most important point that decision making should be based on multiple trials. I would add that with multiple trials, errors resulting from imprecise priors are less
than residual errors from the frequentist approach (with a large number of trials, the Bayes and frequentist estimators converge).
there were still six flat faces to the die
I called both 42×42 faces of the 42×42×27 die the flat faces, because if the 'die' lies on such a side, I feel it is lying flat; in the other position it is 'upright'. Of course the four small
faces (42×27) have the same chance as have the two 42×42 faces. So in order to know the chance of landing on any face, one only has to determine ‘the combined chance of landing on any of the two
42×42 faces’. In an apparently failed attempt to be succinct, I called that the flat chance. I wonder whether making the die into a cylinder of about the same dimensions (making it easy to roll
on that face) would affect the flat chance a lot.
maybe physicists and chemists can use .0001 and .9999
In certain fields of physics it is customary to consider a signal as 'real' when it exceeds the noise by a factor of 5, in other words five standard deviations, corresponding to p=0.000 000 3. This is
done in fields where many data are collected, for example X-rays from space from any direction. If you collect a billion data per day, this still amounts to having to look at 300 ‘events’.
So if data are expensive to get, you settle for more modest p-values.
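The five-sigma numbers quoted above are easy to check (a sketch; `scipy.stats.norm.sf` is the one-sided Gaussian tail):

```python
# One-sided tail probability beyond five standard deviations, and the expected
# number of chance 'events' if a billion independent data points are examined.
from scipy.stats import norm

p = norm.sf(5)        # P(Z > 5) for a standard normal
print(p)              # ~2.87e-7, i.e. p = 0.000 000 3
print(1e9 * p)        # ~287 chance events per billion data
```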
Here is a paper (that I am in no position to determine its validity)
The paper seems to explain things more or less as I did. It does some experimenting.
You draw 10 times from a vase containing ten balls, b of which are black and the remainder is white. After each draw you put the drawn ball back into the vase. So the expected number of black
balls is b. In mathematical jargon this is 'drawing a random sample from the binomial (n=10, π)', where the funny symbol stands for b/10. May I use F instead of the funny symbol?
If one doesn’t know how large F is, one can try to guess F from the result. Naturally, one would just think that the number of black balls divided by the number of draws would be your best bet.
This is called the Frequentist view, with a capital F.
The Bayesian view, with a capital B, is to start believing (before you draw any balls) that black and white have the same chance, i.e. the belief that F=0.5, in other words the vase contains five
balls of either color. So in your mind you prefix two draws, one yielding black, one yielding white. At least that is how the authors interpret the Bayesian point of view. The Bayesian estimator
for F is 'number of black balls plus 1' divided by 'number of draws plus 2'.
The paper then proceeds to perform this experiment (ten draws from that vase) 2000 times in each of the cases that the vase has 1, 2, … 9 black balls.
I am not certain why one should do this by simulation, because the expected curve can easily be derived theoretically (the math required is of the kind I used to teach to freshmen for many
years). The theoretical curves (in a manner of speaking obtainable by a googol experiments) are just parabolas, see below. I am not really impressed if someone does things by simulation that can
be done with simple math.
Naturally, if you put in a modest amount of belief that there are 5 black balls, the result is that you will do slightly better in guessing if your prior guess was true. Surprise, surprise. I
predict that in case F = 0.5, prefixing 100 draws, 50 of which are black, will work even better!
Actually the real surprise is that the advantage of having those two imaginary extra draws in addition to the ten real draws works so well if the numbers of white and black balls are between 2
and 8.
We can even compute the result when there are no black balls! The Frequentist will estimate F= 0 every time and he will be entirely correct every time, mean square error is 0. The Bayesian will
estimate F = 1/12 each time and this is also the error, hence the mean square error will be 1/144 = 0.007. This allows you to extend the graph on page 10 of the quoted source a bit further.
Actually these curves are squares of errors, if we extract roots we see that the small ‘Bayesian’ advantage in the middle is offset by a much larger disadvantage on the sides.
This consideration determines where the mentioned parabolas intersect the left and right edges of a properly drawn graph (with 0 at left and 1 at right). The theoretical parabolas rise to 0.02500
and 0.01722 (=5/288).
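Those parabolas are indeed simple math; a sketch of the mean squared errors for ten draws (frequentist estimator X/10, Bayesian estimator (X+1)/12, as the paper interprets it):

```python
# MSE = variance + bias^2 for X ~ Binomial(10, F).
# Frequentist X/10:  unbiased, MSE = F(1-F)/10.
# Bayesian (X+1)/12: MSE = (10 F(1-F) + (1-2F)^2) / 144.
import numpy as np

F = np.linspace(0, 1, 11)
mse_freq = F * (1 - F) / 10
mse_bayes = (10 * F * (1 - F) + (1 - 2 * F) ** 2) / 144

print(mse_bayes[0])    # 1/144 ~ 0.00694 at F = 0, as computed above
print(mse_freq.max())  # 0.02500 at F = 0.5
print(mse_bayes[5])    # 5/288 ~ 0.01736 at F = 0.5
```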
It’s an example of the Law of Conservation of Misery: what you gain in one place (better estimates if reality conforms to your prior) you lose elsewhere, namely where reality doesn’t fit your prior.
In a world where you have to deal with unknown prior odds that may differ vastly from 1 (or may not even be real quotients of probabilities), it doesn’t really help you a lot if you assume prior
odds = 1.
49. @Jan
Thanks for your review of the paper. I guess I needed to attend your class. I knew there was an analytic solution to the distribution of results of trials in the binomial problem, but did not
think there was an analytic solution to errors of Bayesian estimators and frequentist estimators after one trial.
The paper does closely follow your mathematical formulation.
You do note that there is a range around the proposed prior in which the Bayes estimator more rapidly converges on the true parameter. So if we use a uniform prior, if the true parameter is in
the range of .2 to .8, then the Bayes estimator will work better than the frequentist estimator. But in the problem with the sawed-off die, wouldn’t it make more sense to use an informative
prior in which the estimate of prior probability is based on the fraction of the total face area that each face represents? While not as good as measuring the energy
required to displace the die from one face to another, it is still an informative prior based on our knowledge.
One simple definition of what differentiates a Bayesian approach from a frequentist approach is whether or not prior information is used:
Indeed, Fisher’s maxim, “Let the data speak for themselves,” seems to imply that it would be wrong (a violation of “scientific objectivity”) to allow ourselves to be influenced by other
considerations such as prior knowledge about H.
where H is a scientific hypothesis.
In the same paper the point is made that the experimental model we choose for a study does require prior information.
Yet the very act of choosing a model (i.e. a sampling distribution conditional on H) is a means of expressing some kind of prior knowledge about the existence and nature of H, and its
observable effects.
This was a point made by John Tukey (noted in the subsequent paragraph). In a totally unrelated point in my personal history, I originally entered radiology as a medical student to use their
research computer (a PDP-11) to program an integer version of the Fast Fourier Transform (the Cooley-Tukey algorithm) for calculation of computer-generated holograms.
if the true parameter is in the range of .2 to .8, then the Bayes estimator will work better than the frequentist estimator
Yes, but only if you do ten draws. If you do more draws then of course adding an imaginary extra two draws will still be better if your prior happens to match reality, but the difference will be
smaller and I guess that the range where it will work ‘better’ also will change. I can calculate it if you wish, I guess it becomes slowly smaller.
If you don’t do ten draws, but only one, then the Bayesian method works ‘better’ for parameter between 0.09175 and 0.90825.
In any case, the prior becomes less and less important the more data you collect.
Applying this to homeopathy: by now so many data have been collected that the prior would be not very relevant, if only it wouldn’t be so vanishingly small. Unfortunately the homeopaths keep
saying that homeopathy has adequate scientific proof by now!
51. @Jan
I wouldn’t ask you to calculate it. Usually medical hypotheses are promoted to the next level of study design after fewer than 10 positive studies. If several trials of a plausible
hypothesis fail (usually no more than 5 times), then the plausible hypothesis is dropped. Notable exceptions are certain implausible CAM hypotheses like homeopathy that have a religious
following. Acupuncture (the ancient mechanism description is implausible, but the process is not implausible) has undergone thousands of trials. The penultimate medical trial is the large-scale
RCT. There are few interventions that have undergone 10 large RCTs.
52. Unrelated linguistic aside:
The dictionary meaning of “penultimate” is “next-to-last.” It’s not a common word, so these days people are using it to mean “ultra-ultimate,” or “after-the-last.” At what point do the subculture
of people who use the dictionary meaning abandon the word as hopelessly ambiguous?
(The story of how I came to know the dictionary meaning. Friends who had been working in Tanzania told of the confusion over the pronunciation of the country’s name. It had previously been two
countries, Tanganyika and Zanzibar. When they merged, the names were also merged but people weren’t sure how to pronounce the new name. Was it TanZANia or TanzanIa? The government announced that
the correct pronunciation would be formally announced on the radio at such-and-such a time. The country eagerly tuned in. The radio announcer: “The correct pronunciation is with the emphasis on
the penultimate syllable.” Um. This is a very cruel and unhelpful answer for most Tanzanians. “… TanzanIa.” Ahh, that’s better.) (This is also the story of how I came to know whatever happened to
Tanganyika and where Zanzibar was.) (This is not a story about how elite I am but about how difficult it is to rely on an assumption of common history to support a common understanding of words.
It’s a wonder we manage to communicate at all.) (Back to our regularly scheduled programming.)
53. @Alison
The ultimate test of a medical intervention is the observed performance after it is approved (or accepted) for use. Usually this takes 10 or more years. Unfortunately this often ends up a legal
decision instead of a scientific decision (IUDs being an example).
k well i have a game
what i did was i have it so that when the ball hits a coin the variable coins goes up by 1 and when coins = 5 it goes to frame one but it always goes to frame one after i hit two or three.
A small tropical fish is at the center of a spherical fish bowl 1m in diameter. Determine the position and the lateral magnification of the image of the fish seen by an observer outside the bowl. The
refractive index of water is 4/3
@benpen @RaphaelFilgueiras @experimentX pls any ideas
Not sure about the answer ... this is just an attempt. [drawing]
[drawing] I hope this picture is correct.
oops ... this is a unit circle. The coordinate is \( (h, \sqrt{1 - h^2})\) [drawing]
the magnification must be \[ \frac{\tan(\theta_i - \theta_r) \sqrt{1 - h^2} + h}{h} \]
\[ \frac{\sin \theta_i}{\sin \theta_r} = \frac{4}{3}, \qquad \theta_r = \arctan \left( \frac{h}{\sqrt{1 - h^2}} \right) \] Therefore \[ \theta_i = \arcsin \left( \frac{4}{3} \sin \left( \arctan \left( \frac{h}{\sqrt{1 - h^2}} \right) \right) \right) \] Hence the magnification is \[ \frac{\tan \left( \theta_i - \theta_r \right) \sqrt{1 - h^2} + h}{h} \]
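A quick numeric check of this expression (a sketch; it is consistent with the textbook single-surface result that a small object at the centre of a refracting sphere is imaged at the centre with lateral magnification n = 4/3):

```python
# Magnification of a small fish of height h at the centre of a unit sphere of
# water, using the expression above: Snell's law at the surface plus a little
# trigonometry.  As h -> 0 the magnification tends to n = 4/3.
import numpy as np

def magnification(h, n=4.0 / 3.0):
    theta_r = np.arctan(h / np.sqrt(1 - h**2))   # ray angle inside the water
    theta_i = np.arcsin(n * np.sin(theta_r))     # refracted angle outside
    return (np.tan(theta_i - theta_r) * np.sqrt(1 - h**2) + h) / h

for h in [0.1, 0.01, 0.001]:
    print(h, magnification(h))                   # tends to 1.333...
```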
@experimentX in the diagram above pls make clear the angle \(\theta_1 - \theta_2\)
Also make clear the angle of incidence and that of refraction
theta_1 is the angle of refraction and theta_2 is the angle of incidence.
it seems that magnification is dependent on size of the fish. I'm not sure on this though. do you have an answer?
im on it..trying to figgy it out
how did you derive the magnification formula
which should i follow the first or the second
Image size/ object size ... isn't it? i haven't done this for quite a long time though.
yep u right
@experimentX how did u derive that magnification formula
all right ... I'm sleeping. carry on!!
@demitris yep i ve printed it.....but how would you define lateral magnification
so how did magnification become refractive index
pls dont delete
so you mean angle OA'A is the incidence angle while OAC is refracted angle?
im just trying to figgy why you are using sine L as ur incident or which formula are u using for n as refractive index
ur letters might be sending conflicting signals..it will be nice if they are legible...im just trying to make sure i get ur import
pls dont remove any thing...
i got it now @demitris thanks alot
@woleraymond what did you get?
@demitris actually uploaded a document where he explained how the refractive index equated to magnification
@woleraymond mind sharing it?
pls can i delete now?
just close it?
The first transfinite ordinal. If you wrote down all the natural numbers on a piece of paper and asked, "what comes next," the answer would be omega-null. It has cardinality aleph-null; however (and
here is the part that sucks), so does omega + 1, omega + 2, and so forth. But I'm not a mathematician, and I shouldn't even put that hat on for parlor tricks, so really you should read the writeup
below for the gory details.
Posts about Gradient expansion on The Gauge Connection
Ashtekar and the BKL conjecture
Abhay Ashtekar is a well-known Indian physicist working at Pennsylvania State University. He has produced a fundamental paper in general relativity that has been the cornerstone of the whole field of
research of loop quantum gravity. Whatever value loop quantum gravity may turn out to have (we will see in the future), this result of Ashtekar will stand as a fundamental contribution to general
relativity. Today on arXiv he, Adam Henderson, and David Sloan posted a beautiful paper where Ashtekar's approach is used to reformulate the Belinski-Khalatnikov-Lifshitz (BKL) conjecture.
Let me explain why this conjecture is important in general relativity. The question to be answered is the behavior of gravitational fields near singularities. About this, there exist some fundamental
theorems due to Roger Penrose and Stephen Hawking. These theorems just prove that singularities are an unavoidable consequence of the Einstein equations but are not able to state the exact form of
the solutions near such singularities. Vladimir Belinski, Isaak Markovich Khalatnikov and Evgeny Lifshitz put forward a conjecture that gave them the possibility to get the exact analytical behavior
of the solutions of the Einstein equations near a singularity: When a gravitational field is strong enough, as near a singularity, the spatial derivatives in the Einstein equations can be safely
neglected and only derivatives with respect to time should be retained. With this hypothesis, these authors were able to reduce the Einstein equations to a set of ordinary differential equations,
that are generally more treatable, and to draw important conclusions about the gravitational field in these situations. As you may note, they postulated a gradient expansion in a regime of strong coupling.
Initially, this conjecture met with skepticism. People simply had no reason to believe it and, apparently, there was no reason why spatial variations in a solution of a non-linear equation with a
strong non-linearity should have to be neglected. I had the luck to meet Vladimir Belinski at the University of Rome “La Sapienza”. I was there to follow some courses after my Laurea and Vladimir was
teaching a general relativity course that I took. The course showed the BKL approach and gravitational solitons (another great contribution of Vladimir to general relativity). Vladimir is also known
to have written some parts of the second volume of the books of Landau and Lifshitz on theoretical physics. After the lesson on the BKL approach I talked to him about the fact that I was able to get
their results as their approach was just the leading order of a strong coupling expansion. It was 1992 and I had just obtained the gradient expansion for the Schroedinger equation, also known in the
literature as the Wigner-Kirkwood expansion, through my approach to strong coupling expansion. The publication of my proof happened only in 2006 (see here), 14 years after our colloquium.
Back to Ashtekar, Henderson and Sloan’s paper, this contribution is relevant for a couple of reasons that go beyond application to quantum gravity. Firstly, they give a short but insightful excursus
on the current situation about this conjecture and how computer simulations are showing that it is right (a gradient expansion is a strong coupling expansion!). Secondly, they provide a sound
formulation of the Einstein equations using Ashtekar variables that is better suited for its study. In my proof too I use a Hamiltonian formulation, but through the ADM formalism. These authors have in
mind quantum gravity instead, and so the ADM formalism could not be the best for this aim. In any case, such a different approach could also prove useful for numerical simulations.
Finally, all this matter is strong support for my view, started with my 1992 paper in Physical Review A. Since then, I have produced a lot of work with a multitude of applications in almost all
areas of physics. I hope that the current trend of confirmations of the goodness of my ideas about perturbation theory will keep on. As a researcher, it is a privilege to be part of this adventure of
Ashtekar, A. (1986). New Variables for Classical and Quantum Gravity Physical Review Letters, 57 (18), 2244-2247 DOI: 10.1103/PhysRevLett.57.2244
Abhay Ashtekar, Adam Henderson, & David Sloan (2011). A Hamiltonian Formulation of the BKL Conjecture arxiv arXiv: 1102.3474v1
Marco Frasca (2005). Strong coupling expansion for general relativity Int.J.Mod.Phys. D15 (2006) 1373-1386 arXiv: hep-th/0508246v3
Frasca, M. (1992). Strong-field approximation for the Schrödinger equation Physical Review A, 45 (1), 43-46 DOI: 10.1103/PhysRevA.45.43
Paper replacement
I have updated the paper with the answer to Terry Tao on arXiv (see here). No correction was needed; rather, I have added a new result giving the next-to-leading order correction for the Yang-Mills
field. This result is important as it shows the right approximate solution, in an expansion into the inverse of the coupling constant, for the mapping between the scalar and the Yang-Mills field. As
we repeated a lot of times, Smilga's solutions are all that is needed to work out our argument, as this relies on a gradient expansion. A gradient expansion at the leading order has a solution depending
just on the time variable. But, as this has been a reason for discussion, I have also shown to what extent my approach applies to the solution of the quartic scalar field given in the form
$\phi(x) = \mu\left(\frac{2}{\lambda}\right)^{1\over 4}{\rm sn}(p\cdot x,i)$
with $p^2=\mu^2\left(\lambda/2\right)^{1\over 2}$, where $\mu$ is an integration constant and $\lambda$ the coupling. But I would like to emphasize that the relevance of these solutions for the Yang-Mills
case was just demanded by Tao's criticism and is not needed for my argument to work. So, the main result of this paper is that
As it has been noted elsewhere, higher order corrections are zero in the Lorenz gauge. This result is certainly not trivial and worth to be considered in a classical analysis of Yang-Mills equations.
Finally, we note that any concern about gauge invariance is just worthless. Smilga's solutions are exact solutions of the Yang-Mills equations. Casting doubt on them using gauge-invariance arguments
should be put on the same ground as casting doubt on the Kasner solution of the Einstein equations for general-covariance reasons. Nothing worth spending time on, but a poor excuse to ignore a good work.
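The quoted exact solution is easy to sanity-check numerically. A sketch (this checks the time-only reduction φ'' + λφ³ = 0, which is what the leading order of a gradient expansion uses): with the stated relation p² = μ²(λ/2)^{1/2}, energy conservation forces the oscillation amplitude to equal μ(2/λ)^{1/4}, exactly the prefactor of the sn solution.

```python
# Integrate phi'' = -lam * phi^3 from phi(0) = 0, phi'(0) = A*p (the slope of
# A*sn(p*t, i) at t = 0) and verify the amplitude equals A = mu*(2/lam)**0.25.
import numpy as np
from scipy.integrate import solve_ivp

mu, lam = 1.3, 0.7                  # arbitrary test values
A = mu * (2.0 / lam) ** 0.25        # predicted amplitude
p = mu * (lam / 2.0) ** 0.25        # from p^2 = mu^2 * sqrt(lam / 2)

def rhs(t, y):
    phi, dphi = y
    return [dphi, -lam * phi**3]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, A * p], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 50.0, 20001)
phi = sol.sol(t)[0]
print(np.max(np.abs(phi)), A)       # the two numbers agree
```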
Quantum field theory and gradient expansion
In a preceding post (see here) I showed how a covariant gradient expansion can be accomplished while maintaining Lorentz invariance during the computation. Now I discuss how to manage the corresponding
generating functional
$Z[j]=\int[d\phi]e^{i\int d^4x\frac{1}{2}[(\partial\phi)^2-m^2\phi^2]+i\int d^4xj\phi}.$
This integral can be computed exactly, the theory being free and the integral a Gaussian one, to give
$Z[j]=e^{\frac{i}{2}\int d^4xd^4yj(x)\Delta(x-y)j(y)}$
where we have introduced the Feynman propagator $\Delta(x-y)$. This is well-known matter. But now we rewrite the above integral introducing another spatial coordinate and write down
$Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-(\partial\phi)^2-m^2\phi^2]+i\int d\tau d^4xj\phi}.$
The Feynman propagator solving this integral is given by
$\Delta(p_\tau,p)=\frac{1}{p_\tau^2-p^2-m^2+i\epsilon}$
and a gradient expansion just means a series into $p^2$ of this propagator. From this we learn immediately two things:
• When one takes $p=0$ we get the right spectrum of the theory: a pole at $p_\tau^2=m^2.$
• When one takes $p_\tau=0$ and Wick-rotates one of the four spatial coordinates we recover the right Feynman propagator.
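Taking the five-dimensional propagator to be $\Delta(p_\tau,p)=1/(p_\tau^2-p^2-m^2+i\epsilon)$, with $p$ the Euclidean 4-momentum (the form that follows from the free five-dimensional action above; the $i\epsilon$ is dropped below), both limits can be verified symbolically, as a sketch:

```python
# Check the two limits of Delta = 1/(p_tau^2 - q^2 - m^2) with sympy,
# where q^2 = q1^2 + q2^2 + q3^2 + q4^2 is the Euclidean 4-momentum squared.
import sympy as sp

p_tau, E, m = sp.symbols('p_tau E m')
q1, q2, q3, q4 = sp.symbols('q1 q2 q3 q4')
Delta = 1 / (p_tau**2 - (q1**2 + q2**2 + q3**2 + q4**2) - m**2)

# (1) all spatial momenta to zero: poles at p_tau = +/- m, the right spectrum
poles = sp.solve(sp.denom(Delta.subs({q1: 0, q2: 0, q3: 0, q4: 0})), p_tau)
print(poles)

# (2) p_tau = 0 and Wick rotation q4 -> i E: the Feynman denominator returns
feyn = sp.simplify(Delta.subs({p_tau: 0, q4: sp.I * E}))
print(feyn)   # 1/(E^2 - q1^2 - q2^2 - q3^2 - m^2)
```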
All works fine and we have kept Lorentz invariance everywhere hidden into the Euclidean part of a five-dimensional theory. Neglecting the Euclidean part gives us back the spectrum of the theory. This
is the leading order of a gradient expansion.
So, the next step is to see what happens with an interaction term. I have already solved this problem, and the solution was published in Physical Review D (see here). In that paper I did not care about
Lorentz invariance as I expected it would be recovered at the end of the computations, as indeed happens. But here we can recover the main result of the paper keeping Lorentz invariance. One has
$Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-(\partial\phi)^2-m^2\phi^2-\frac{\lambda}{2}\phi^4]+i\int d\tau d^4xj\phi}$
and if we want something non-trivial we have to keep the interaction term in the leading order of our gradient expansion. So we will break the exponent as
$Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-\frac{\lambda}{2}\phi^4]-i\int d\tau d^4x\frac{1}{2}[(\partial\phi)^2+m^2\phi^2]+i\int d\tau d^4xj\phi}$
and our leading order functional is now
$Z_0[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-\frac{\lambda}{2}\phi^4]+i\int d\tau d^4xj\phi}.$
This can be cast into a Gaussian form since, in the infrared limit (the one of interest here), one can use the following small-time approximation
$\phi(x,\tau)\approx\int d\tau' d^4y \delta^4(x-y)\Delta(\tau-\tau')j(y,\tau')$
being now
that can be exactly solved, giving back all the results of my paper. When the Gaussian form of the theory is obtained one can easily show that, in the infrared limit, the quartic scalar field theory
is trivial, as we obtain again a generating functional in the form
$Z[j]=e^{\frac{i}{2}\int d^4xd^4yj(x)\Delta(x-y)j(y)}$
being now
after Wick-rotating a spatial variable and having set $p_\tau=0$. The spectrum is that proper to a trivial theory, being that of a harmonic oscillator.
I think that all this machinery works very well and is quite robust, opening up a lot of possibilities to have a look at the other side of the world.
Covariant gradient expansion
Due to the relevance of the argument, after a nice discussion with a contribution by Carl Brannen, I decided to pursue this matter further. Indeed, the only way to have a covariant formulation of a
gradient expansion is to add a time variable and take the true time variable Wick-rotated. In this way, for the d=1+1 wave equation you will use the d=2+1 wave equation, and so on. In d=3+1 you will
use the d=4+1 wave equation. Let me explain with some equations what I mean. I consider again the d=1+1 case as
but, instead of applying a gradient expansion to it, I apply it to the equation
being $\Delta_2 = \partial_{xx}+\partial_{yy}$. As usual, I rescale the time variable as $t\rightarrow\sqrt{\lambda}t$ and I take a solution series
Now I will get the set of equations
and so on. Let us note that, in this case, we can introduce two new spatial variables as $z=x+iy$ and $\bar z=x-iy$. These are conjugate variables, as you know. So, already at the leading order I have
solved my equation. Indeed, I note that
$\Delta_2=\partial_z\partial_{\bar z}$
and so the Laplacian has the solution $f(z)+g(\bar z)$, being f and g arbitrary functions. In this case the gradient expansion immediately gives the exact result, making its application trivial, as
it should be. Indeed, I take $t=0$ in the perturbation series and put $iy=t$ and I get
that is the exact solution. Nice, it works! This means that a quantum field theory using a gradient expansion exists and it is a strong coupling expansion. This result is surely less trivial than the
one obtained above.
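The factorization $\Delta_2=\partial_z\partial_{\bar z}$ and the resulting harmonic solutions $f(z)+g(\bar z)$ can be checked symbolically. Here is a minimal sketch (not part of the original post) using Python's sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f, g = sp.Function('f'), sp.Function('g')

# Harmonic ansatz f(z) + g(zbar) with z = x + i*y, zbar = x - i*y
phi = f(x + sp.I*y) + g(x - sp.I*y)

# Two-dimensional Laplacian Delta_2 = d_xx + d_yy
laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)

print(sp.simplify(laplacian))  # 0: any such f, g solves Laplace's equation
```

The two second derivatives cancel identically because $\partial_y^2$ brings down a factor $i^2=-1$, which is exactly the mechanism behind $\Delta_2=\partial_z\partial_{\bar z}$.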
Gradient expansions and quantum field theory
It is more than two years now that I have been working on quantum field theory in the strong coupling limit, and I am generally very satisfied with the acceptance by the community of my views. Of
course, these are new ideas and may take some time to be accepted. So, I keep on working on them, trying to clarify them as best I can so that people can have a clear understanding of their strengths
and weaknesses. One of the ways we researchers have to know how our colleagues consider our views is peer review. This system is indeed crucial to any serious scientific endeavor and, indeed, I am
proud of my achievements only when my peers agree about their value. But peer review is also useful to my work to know what the main objections to it are. It can happen that sometimes these objections
are deeply wrong, and it may be worthwhile to discuss them at length, also to get an idea of how such a prejudice arose.
We should know that when a mathematical theory enters into the description of nature, whatever mathematical method one uses to exploit it is always correct. So, natural laws in physics are described
by differential equations, and whatever method you know to solve them is good, provided it is also mathematically legal. You should consider mathematics for physicists as a severe judge that grants no
appeal. You are right or wrong depending on the correctness of your computation. But in physics there is something more, and these are the assumptions we start with. You can do the most beautiful
mathematics in the world, but if you started with a wrong concept about how nature works your computations are simply rubbish.
One of the criticisms I have received in trying to get my papers published is that one cannot do a gradient expansion because this breaks Lorentz/Poincaré invariance. This is completely wrong from a
mathematical standpoint. As an exercise you can consider the wave equation in two dimensions as
and consider the case where the spatial part is not so important. This can be easily obtained by rescaling time as $t\rightarrow\sqrt{\lambda}t$ and taking the limit $\lambda\rightarrow\infty$. One
gets the solution series
solving the equations
and so on. All this is perfectly legal from a mathematical standpoint and I get a true solution of the wave equation. But, as you can see, I have broken Lorentz invariance, a symmetry of this
equation. So, mathematics says yes while physics seems to say no. The answer is quite simple and has been known for a long time: the computation is right but Lorentz invariance is no longer manifest.
This is due to the fact that I have separated time and space. But if I am able to resum all the terms of the expansion series I will get the right answer
that is Lorentz invariant. So, both physics and mathematics give the same answer and it is a resounding yes: it works, and it works so well that we are left with a kind of strong coupling expansion.
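To make the exercise concrete, here is a sketch of what the rescaled expansion looks like. This is my reconstruction from the description above, not the post's original (elided) equations:

```latex
% Rescale t -> sqrt(lambda) t in the two-dimensional wave equation:
\lambda\,\partial_t^2\phi - \partial_x^2\phi = 0,
\qquad
\phi = \sum_{n\ge 0}\lambda^{-n}\phi_n .
% Matching powers of lambda gives the hierarchy
\partial_t^2\phi_0 = 0,\qquad
\partial_t^2\phi_{n+1} = \partial_x^2\phi_n \quad (n\ge 0).
```

Each order is a purely temporal equation sourced by spatial derivatives of the previous order, and resumming the whole series reproduces a Lorentz-invariant solution such as $f(x-t)+g(x+t)$ in the original variables.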
So, what should a smart referee do with such a doubt, admitting that a smart referee does not know such mundane facts of physics and mathematics? They should realize that here one is facing a really
interesting problem of physics: could we formulate a gradient expansion in such a way as to have Lorentz invariance manifest? I do not have an answer yet to this question, but I grant you that it is a
matter I would like to publish a paper about somewhere. This is an interesting mathematical problem as well. We know that people met a similar problem at the start of the deep understanding of QED
due to Feynman, Schwinger, Tomonaga and Dyson. I think that an answer to this question would have the same scientific value.
Gradient expansions, strong perturbations and classicality
It is a common view that when a very large term appears in an equation we cannot use any perturbation approach at all. This is a quite common prejudice and forced physicists, for a lot of years, to
invent exotic approaches with very little luck in unveiling the physics behind the equations. The reason for this relies on a simple trick generally overlooked by mathematicians and physicists, and here is my luck.
This idea can be easily exposed for the Schroedinger equation. So, let us consider the case
$(H_0+\lambda V)|\psi\rangle=i\hbar\frac{\partial|\psi\rangle}{\partial t}$
with $\lambda\rightarrow\infty$. This is a very unlucky case both for a physicist and a mathematician, as the only sure approach that comes to our rescue is a computer program, with all the difficulties
this implies. Of course, it would be very nice if we could find a solution in the form of an asymptotic series like
but we know quite well that if we insert such a solution into the Schroedinger equation we get meaningless results. But there is a very smart trick that can get us out of the dark and can produce
the required result. I have expounded this since 1992 in Physical Review A (see here) and this paper was not taken too seriously by the community, so that I had time enough to be able to apply this idea
to all fields of physics. The paper producing the turning point has been published in Physical Review A (thank you very much, Bernd Crasemann!). You can find it here and here. The point is that when
you have a strong perturbation, an expansion is not enough. You also need a rescaling in time like $\tau=\lambda t$. If you do this and insert the above expansion into the original Schroedinger
equation, this time you will get meaningful results: a dual Dyson series that, since the perturbation is now independent of time, becomes a well-known gradient expansion: the Wigner-Kirkwood series. But
this series is a semiclassical one and you get the striking result that a strongly perturbed quantum system is a semiclassical system! So, if you want to change a quantum system into a classical one,
just perturb it strongly. This is something that happens when one does a measurement in quantum mechanics using just electromagnetic fields, which are the only means we know to accomplish such a task.
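As a sketch of the trick just described (my reconstruction; the original papers should be consulted for the full expansion), substituting $\tau=\lambda t$ into the strongly perturbed Schroedinger equation gives

```latex
(H_0+\lambda V)|\psi\rangle = i\hbar\,\partial_t|\psi\rangle,
\qquad \tau=\lambda t
\;\Longrightarrow\;
\left(\tfrac{1}{\lambda}H_0 + V\right)|\psi\rangle
= i\hbar\,\partial_\tau|\psi\rangle ,
```

so for $\lambda\rightarrow\infty$ the leading-order problem is governed by $V$ alone and $H_0$ enters perturbatively in powers of $1/\lambda$: the roles of the "free" and "perturbation" terms are exchanged, which is the dual Dyson series mentioned above.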
This result about strong perturbations and semiclassicality has been published in a long-honored journal: Proceedings of the Royal Society A (see here and here). I am pleased by this also
because of my esteem for Michael Berry, the Editor. I met him at a conference at Lake Garda some years ago and I listened to a beautiful talk by him about the appearance of a classical
world out of the quantum conundrum. I remember he asked me how to connect to the internet from the conference site, but there was just a not-so-cheap machine from Telecom Italia and so my help was
quite limited.
So, I just removed a prejudice and was lucky enough to give sound examples in all branches of physics. Sometimes, looking in some dusty corners of physics and mathematics can be quite rewarding!
Classical scalar theory in D=1+1 and gradient expansion
As said before, for a PDE with a large parameter the spatial variations are negligible. Let us see this for a very simple case. We consider the following equation
$\frac{\partial^2\phi}{\partial t^2}-\frac{\partial^2\phi}{\partial x^2}-\lambda\phi^3=0$
with the conditions $\phi(0,t)=0$, $\phi(1,t)=0$ and $\phi(x,0)=x^2-x$, where the choice of a parabolic profile is arbitrary and can be changed. We also know that, if we can neglect the spatial part,
the solution can be written down analytically as (see here and here):
$\phi\approx (x^2-x){\rm sn}\left[(x^2-x)\sqrt{\frac{\lambda}{2}}t+x_0,i\right]$
being $x_0={\rm cn}^{-1}(0,i)$. Indeed, for $\lambda = 5000$ we get the following pictures
The agreement is excellent, confirming the fact that a strong coupling expansion is a gradient expansion. So, a large perturbation entering into a differential equation can be managed in much the same
way one does for a small perturbation. In the case of ODEs look at this post.
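The comparison described above can be sketched numerically. The following is an illustrative reconstruction, not the code behind the post's figures; in particular I use the sign convention phi_tt = phi_xx - lam*phi^3 (a defocusing quartic potential) so that the motion stays bounded for this short-time check, whereas the post's sign convention may differ:

```python
import numpy as np

# Compare the full wave equation  phi_tt = phi_xx - lam*phi^3  with the
# gradient-expansion leading order  phi_tt = -lam*phi^3  (spatial term
# dropped) at large coupling lam, over a short time.
lam = 5000.0
N, dx = 101, 0.01                 # grid on [0, 1]
dt, steps = 1e-4, 500             # total time T = steps*dt = 0.05
x = np.linspace(0.0, 1.0, N)
phi0 = x**2 - x                   # parabolic profile, phi_t(x, 0) = 0

def leapfrog(include_gradient):
    phi_prev = phi0.copy()
    # first step from phi_t(x,0)=0: phi(dt) = phi(0) + dt^2/2 * phi_tt(0)
    acc = -lam * phi_prev**3
    if include_gradient:
        acc[1:-1] += (phi_prev[2:] - 2*phi_prev[1:-1] + phi_prev[:-2]) / dx**2
    phi = phi_prev + 0.5 * dt**2 * acc
    phi[0] = phi[-1] = 0.0
    for _ in range(steps - 1):
        acc = -lam * phi**3
        if include_gradient:
            acc[1:-1] += (phi[2:] - 2*phi[1:-1] + phi[:-2]) / dx**2
        phi_new = 2*phi - phi_prev + dt**2 * acc
        phi_new[0] = phi_new[-1] = 0.0
        phi_prev, phi = phi, phi_new
    return phi

pde = leapfrog(True)    # full equation
ode = leapfrog(False)   # gradient term neglected
mid = N // 2
print(abs(pde[mid] - ode[mid]))  # small compared to the amplitude ~0.25
```

At this coupling the midpoint discrepancy stays far below the field amplitude, illustrating that the spatial term contributes only a small correction when the coupling is large.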
Physics laws and strong coupling
It is a well-established fact that all the laws of physics are expressed through differential equations, and our ability as physicists is to unveil their solutions in one way or another. Indeed, almost all
these equations are really difficult to solve in a straightforward way and are very far from the exercises of undergraduate courses. Over the centuries people invented several techniques to manage
such equations, and the most generally known is surely perturbation theory. Perturbation theory applies when a small parameter enters into the equations and a series solution is so allowed. I remember
I saw this method for the first time in the third year of my “laurea” course, and it was Giovanni Jona-Lasinio who showed it to me and other fellow students.
Presently, we see that small perturbation theory has become so pervasive that conclusions derived just at a perturbative level are sometimes believed to be always true. An example of this is the Landau pole
or, generally, what the renormalization program implies. It is not generally stated, but it is quite a common prejudice that when a large parameter enters into a differential equation we are stuck
and nothing can be done other than using our physical intuition or numerical computation. This is so despite the fact that the inverse of such a large parameter is indeed a small parameter, and most known
functions have both a small-parameter and a large-parameter series as well.
As I said elsewhere, this is just a prejudice and I have proved it wrong in a series of papers in Physical Review A (see here, here and here). I have given an overview in a recent paper. With such a
great innovation for solving differential equations at hand it is really tempting to try to apply it to all fields of physics. Indeed, I have worked for a lot of years in quantum optics, testing the
approach in a lot of successful ways, and I have also found applications to condensed matter physics that appeared in Physical Review B and Physica E.
The point is quite clear. How to apply all this to partial differential equations? What is the effect of a large perturbation on such equations? Indeed, I have had this understanding under my nose
since the start, but I was not able to catch it immediately. The reason is that the result is really counterintuitive. When a physical system is strongly perturbed, all the terms that imply
spatial variation can be neglected. So, a strong perturbation series is a gradient expansion, and the converse is true as well. I have proved it numerically in a quite easy way using two or three
lines of Maple. These results can be found in my very recent paper on quantum field theory (see here and here). Other results can be found by yourself with similar simple means and are very easy to obtain.
As strange as this conclusion may seem, it has obtained a striking confirmation through numerical computations in general relativity. Indeed, I have applied this method also to general relativity
(see here and here). Indeed this paper gives a sensible proof of the Belinski-Khalatnikov-Lifshitz or BKL conjecture on the behavior of space-time approaching a singularity. Indeed, the BKL conjecture
has been analyzed numerically by David Garfinkle in a very beautiful paper published in Physical Review Letters (see here and here). It is seen in a striking way how all the gradient contributions
from the Einstein equations become increasingly irrelevant as the singularity is approached. This is a clear proof of the BKL conjecture and of our approach of strong perturbations at work. Since then Prof.
Garfinkle has done a lot of other very good work on general relativity (see here).
We hope to show in future posts how this machinery works for PDEs. In the case of ODEs we have already posted about it (see here).
"n^2 and 2n^2 is an increase by a factor of 4": how??
In one of my math problems the answer stated that going from n^2 to 2n^2 is an increase by a factor of 4. I dont understand how this is. Can someone explain?
ScripterKitty wrote:In one of my math problems the answer stated that going from n^2 to 2n^2 is an increase by a factor of 4. I dont understand how this is. Can someone explain?
If the expressions are as posted -- n^2 and 2n^2 -- then the increase is not a factor of 4, but a factor of 2. If, on the other hand, the second expression is (2n)^2, then the expression simplifies to 4n^2, which is an increase by a factor of 4.
Re: "n^2 and 2n^2 is an increase by a factor of 4": how??
Thanks a lot. I thought about this last night and came to the same conclusion: I was "assuming" that the expression was 2(n^2) vs (2n)^2. Thanks!
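A quick numeric check of the distinction discussed in this thread (illustrative, not part of the original posts):

```python
n = 5
print(n**2)        # 25
print(2 * n**2)    # 50: 2n^2 is twice n^2
print((2 * n)**2)  # 100: (2n)^2 = 4n^2 is four times n^2
```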
A New Kind of Science: The NKS Forum - Prime Number Absolute Difference Graphic
Philip Ronald Dutton
Columbia, SC
Registered: Feb 2004
Posts: 172
primes: unfortunate confusion
The only reason people think about primes all the time is due to the "short cuts" created by the axioms related to multiplication and division. Indeed, I have never seen a definition of the
fundamental theorem of arithmetic which did not use the word "product." Isn't a product simply repetitive addition? The main reason primes appear to be so important is because we humans desperately
need that "short hand" notation. I would absolutely love to see any of the prominent number theorists rewrite the fundamental theorem of arithmetic in terms of addition and subtraction only! Please!
For the sake of mankind!
Moving along. I would not want to say to an employee, "I will pay you one,two,three,four, five, six, seven, eight,...,nine-hundred and ninety-nine, one-thousand dollars to work for me for one, two,
three, four, five, six, seven days." Ultimately, without short-hand, we would count to each other instead of exchange numbers.
Again I plead: Write the axioms down on paper and then erase the ones related to multiplication and division. Now what exactly is a prime number?
The prime number itself is not that special in terms of the number line. A prime number is simply a number which draws attention to the side effect of the language related to the short hand
functionality provided by those extra axioms.
Perhaps there is relationship to the form that is found within notions of sequence (ie: the number line).
Maybe one should consider NKS study of primes first by understanding the combination of counting, sequence, and short-hand in terms of NKS.
It seems that the form required for supporting notions of "sequence" is built right into 1D CAs. And counting is rather simple, with the rule that just moves the "on" cell left or right. As far as
notions of "short hand" go, the question becomes, "what exactly requires 'short hand'?" This is immediately a meta-mathematical question according to my interpretation. It is based on the
application. Our everyday application is the exchange of "numerical data" without the requirement to re-count.
It is unfortunate that there are not many good discussions about primes outside of the standard boring and unproductive context of distributions. I could not find any. (It is really difficult to find good
old-fashioned honest discussion about those pesky numbers in terms of the assumptions and human behaviors behind the writing of the axioms.) In my opinion the real meat of the discussion is in the
assumptions of the axioms and the interpretation of the language of the axioms and the need for short cuts when counting.
In a generic sense, counting is essentially required for multiplication. Computers couldn't care less about short cuts for addition (figuratively speaking, since there are some cool transistor logic
circuits which speedily compute stuff), so why did humans start to care about the short cuts? Obviously a rhetorical question (and easy to answer), but I bring it up because it is so easy to
forget the pre-tech-age human factor.
I think the prime number closet needs to be cleaned out before we should expect NKS to magically predict primes.
P h i l i p . R . D u t t o n
Last edited by Philip Ronald Dutton on 07-20-2007 at 05:47 AM
A Mollification Regularization Method for a Fractional-Diffusion Inverse Heat Conduction Problem
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 109340, 9 pages
Research Article
^1School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
^2School of Mathematics, Southwest Jiaotong University, Chengdu 610031, China
^3Department of Mathematical Sciences, Xidian University, Xi'an 710071, China
Received 12 June 2012; Revised 19 December 2012; Accepted 20 December 2012
Academic Editor: Fatih Yaman
Copyright © 2013 Zhi-Liang Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Summary: On Crossings in Geometric Proximity Graphs
Bernardo M. Ábrego
Ruy Fabila-Monroy
Silvia Fernández-Merchant
David Flores-Peñaloza
Ferran Hurtado
Vera Sacristán
Maria Saumell
We study the number of crossings among edges of some higher order proximity graphs of the
family of the Delaunay graph. That is, given a set P of n points in the Euclidean plane, we
give lower and upper bounds on the minimum and the maximum number of crossings that these
geometric graphs defined on P have.
1 Introduction
Let P be a set of n points in the plane in general position (no three are collinear). A geometric graph
on P is a graph with vertex set P and such that its edges are drawn as straight-line segments. When
two edges share an interior point we say that they give rise to a crossing.
The number of crossings is a parameter that has been attracting extensive studies in the context of
combinatorial graphs. Given a graph G, the crossing number of G, denoted by cr(G), is the minimum
number of crossings in any drawing of G, i.e., in any non-degenerate representation of the graph in the | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/357/1666088.html","timestamp":"2014-04-17T13:01:52Z","content_type":null,"content_length":"8307","record_id":"<urn:uuid:9fc4cc38-8d2f-4574-b4b2-de1ec07724b7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
March 6th 2006, 04:43 PM #1
hi i really need help
i know i got the wrong answer but i tried to find out several ways but i get the wrong answer " the answer suposed to be 1/648" how \HELP
hi i really need help
i know i got the wrong answer but i tried to find out several ways but i get the wrong answer " the answer suposed to be 1/648" how \HELP
I really cannot understand what the question is, can you be clearer?
hi i really need help
i know i got the wrong answer but i tried to find out several ways but i get the wrong answer " the answer suposed to be 1/648" how \HELP
Although it is easy to trace your question from the given answer, it is better if you present your questions in ways that can be understood by all. You tried, basing from your correct use of
parentheses, but the two "square" did not help. I assume you did not know yet that "a^2" is the symbol for "a,squared" or "a, square" or "square of a"
If you like to learn/use LaTex, the "a^2" will appear as the familiar "a, squared" as you would write it on paper.
(-1/3)[(1/6)^2]*[(-1/3)(1/6) -(-1/3)^2]
= (-1/3)(1/36)*[(-1/18) -(1/9)]
= (-1/108)*[(-1 +2(-1))/18]
= (-1/108)*[(-3)/18]
= (-1/108)(-1/6)
= 1/648 ------------------answer.
March 6th 2006, 06:16 PM #2
Global Moderator
Nov 2005
New York City
March 6th 2006, 10:48 PM #3
MHF Contributor
Apr 2005
Craig Freedman's SQL Server Blog
In my last post, I gave an overview of the PIVOT operator. In this post, I'm going to take a look at the query plans generated by the PIVOT operator. As we'll see, SQL Server generates a surprisingly
simple query plan that is essentially just a fancy aggregation query plan.
Let's use the same schema and queries from my previous post:
CREATE TABLE Sales (EmpId INT, Yr INT, Sales MONEY)
INSERT Sales VALUES(1, 2005, 12000)
INSERT Sales VALUES(1, 2006, 18000)
INSERT Sales VALUES(1, 2007, 25000)
INSERT Sales VALUES(2, 2005, 15000)
INSERT Sales VALUES(2, 2006, 6000)
INSERT Sales VALUES(3, 2006, 20000)
INSERT Sales VALUES(3, 2007, 24000)
INSERT Sales VALUES(2, 2007, NULL)
SELECT [2005], [2006], [2007]
FROM (SELECT Yr, Sales FROM Sales) AS s
PIVOT (SUM(Sales) FOR Yr IN ([2005], [2006], [2007])) AS p
This query generates the following query plan:
|--Compute Scalar(DEFINE:(
[Expr1006]=CASE WHEN [Expr1024]=(0) THEN NULL ELSE [Expr1025] END,
[Expr1007]=CASE WHEN [Expr1026]=(0) THEN NULL ELSE [Expr1027] END,
[Expr1008]=CASE WHEN [Expr1028]=(0) THEN NULL ELSE [Expr1029] END))
|--Stream Aggregate(DEFINE:(
[Expr1024]=COUNT_BIG(CASE WHEN [Sales].[Yr]=(2005) THEN [Sales].[Sales] ELSE NULL END),
[Expr1025]=SUM(CASE WHEN [Sales].[Yr]=(2005) THEN [Sales].[Sales] ELSE NULL END),
[Expr1026]=COUNT_BIG(CASE WHEN [Sales].[Yr]=(2006) THEN [Sales].[Sales] ELSE NULL END),
[Expr1027]=SUM(CASE WHEN [Sales].[Yr]=(2006) THEN [Sales].[Sales] ELSE NULL END),
[Expr1028]=COUNT_BIG(CASE WHEN [Sales].[Yr]=(2007) THEN [Sales].[Sales] ELSE NULL END),
[Expr1029]=SUM(CASE WHEN [Sales].[Yr]=(2007) THEN [Sales].[Sales] ELSE NULL END)))
|--Table Scan(OBJECT:([Sales]))
This is just a basic scalar aggregation query plan! It calculates one SUM aggregate for each year. Like any SUM aggregate, each aggregate actually computes both the count and the sum. If the count is
zero, the query plan returns NULL else it returns the sum. (The compute scalar handles this logic.)
The only twist is that each SUM aggregate is actually computed over a CASE statement that filters for those rows that match the year for which it is summing sales. The CASE statement returns the value
of the Sales column for those rows that match the year and NULLs for all other rows. To clarify what is happening, we can view the results of the CASE statements without the aggregation:
SELECT EmpId, Yr,
CASE WHEN Yr = 2005 THEN Sales END AS [2005],
CASE WHEN Yr = 2006 THEN Sales END AS [2006],
CASE WHEN Yr = 2007 THEN Sales END AS [2007]
FROM Sales
EmpId Yr 2005 2006 2007
----------- ----------- --------------------- --------------------- ---------------------
1 2005 12000.00 NULL NULL
1 2006 NULL 18000.00 NULL
1 2007 NULL NULL 25000.00
2 2005 15000.00 NULL NULL
2 2006 NULL 6000.00 NULL
3 2006 NULL 20000.00 NULL
3 2007 NULL NULL 24000.00
2 2007 NULL NULL NULL
When computing the sums of each of the year columns, the query plan relies on the fact that aggregate functions discard NULLs; that is, the NULL values are not included in the results. Although this
point may seem intuitive for a SUM aggregate, the significance is clearer for a COUNT aggregate:
CREATE TABLE T (A INT)
INSERT T VALUES(NULL)
-- Returns 1: the number of rows in table T
SELECT COUNT(*) FROM T
-- Returns 0: the number of non-NULL values of column A
SELECT COUNT(A) FROM T
Note that we could just as easily have written the original query as:
SELECT
SUM(CASE WHEN Yr = 2005 THEN Sales END) AS [2005],
SUM(CASE WHEN Yr = 2006 THEN Sales END) AS [2006],
SUM(CASE WHEN Yr = 2007 THEN Sales END) AS [2007]
FROM Sales
This query gets a nearly identical query plan. The only visible difference is the use of an extra compute scalar to evaluate the CASE statements.
|--Compute Scalar(DEFINE:(
[Expr1004]=CASE WHEN [Expr1013]=(0) THEN NULL ELSE [Expr1014] END,
[Expr1005]=CASE WHEN [Expr1015]=(0) THEN NULL ELSE [Expr1016] END,
[Expr1006]=CASE WHEN [Expr1017]=(0) THEN NULL ELSE [Expr1018] END))
|--Stream Aggregate(DEFINE:(
[Expr1013]=COUNT_BIG([Expr1007]), [Expr1014]=SUM([Expr1007]),
[Expr1015]=COUNT_BIG([Expr1008]), [Expr1016]=SUM([Expr1008]),
[Expr1017]=COUNT_BIG([Expr1009]), [Expr1018]=SUM([Expr1009])))
|--Compute Scalar(DEFINE:(
[Expr1007]=CASE WHEN [Sales].[Yr]=(2005) THEN [Sales].[Sales] ELSE NULL END,
[Expr1008]=CASE WHEN [Sales].[Yr]=(2006) THEN [Sales].[Sales] ELSE NULL END,
[Expr1009]=CASE WHEN [Sales].[Yr]=(2007) THEN [Sales].[Sales] ELSE NULL END))
|--Table Scan(OBJECT:([Sales]))
The other big difference between the PIVOT syntax and query plan and the alternative syntax and query plan is that the PIVOT query suppresses the following warning about NULLs:
Warning: Null value is eliminated by an aggregate or other SET operation.
We could also suppress this warning by executing the following statement:
SET ANSI_WARNINGS OFF
At this point, the query plan for a multi-row PIVOT operation should not come as a surprise:
SELECT EmpId, [2005], [2006], [2007]
FROM (SELECT EmpId, Yr, Sales FROM Sales) AS s
PIVOT (SUM(Sales) FOR Yr IN ([2005], [2006], [2007])) AS p
The query plan for this PIVOT operation uses CASE statements to compute the same intermediate result that we saw above. Then, like any other GROUP BY query, it uses either stream or hash aggregate to
group by EmpId and to compute the final result. In this case, the optimizer chooses a stream aggregate. Since we do not have an index to provide order, it must also introduce a sort.
|--Compute Scalar(DEFINE:(
[Expr1007]=CASE WHEN [Expr1025]=(0) THEN NULL ELSE [Expr1026] END,
[Expr1008]=CASE WHEN [Expr1027]=(0) THEN NULL ELSE [Expr1028] END,
[Expr1009]=CASE WHEN [Expr1029]=(0) THEN NULL ELSE [Expr1030] END))
|--Stream Aggregate(GROUP BY:([Sales].[EmpId]) DEFINE:(
[Expr1025]=COUNT_BIG(CASE WHEN [Sales].[Yr]=(2005) THEN [Sales].[Sales] ELSE NULL END),
[Expr1026]=SUM(CASE WHEN [Sales].[Yr]=(2005) THEN [Sales].[Sales] ELSE NULL END),
[Expr1027]=COUNT_BIG(CASE WHEN [Sales].[Yr]=(2006) THEN [Sales].[Sales] ELSE NULL END),
[Expr1028]=SUM(CASE WHEN [Sales].[Yr]=(2006) THEN [Sales].[Sales] ELSE NULL END),
[Expr1029]=COUNT_BIG(CASE WHEN [Sales].[Yr]=(2007) THEN [Sales].[Sales] ELSE NULL END),
[Expr1030]=SUM(CASE WHEN [Sales].[Yr]=(2007) THEN [Sales].[Sales] ELSE NULL END)))
|--Compute Scalar(DEFINE:([Sales].[EmpId]=[Sales].[EmpId]))
|--Sort(ORDER BY:([Sales].[EmpId] ASC))
|--Table Scan(OBJECT:([Sales]))
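Conceptually, this grouped plan just computes per-employee conditional sums. The same logic can be sketched in a few lines of Python (illustrative only; this is not what SQL Server executes internally, and a NULL Sales row is included here purely to show how aggregates discard NULLs):

```python
# One dictionary per employee; each year column starts out NULL (None).
# SUM(CASE WHEN Yr = y THEN Sales END) keeps a running sum per year and
# discards NULLs, exactly as the stream aggregate in the plan does.
sales = [
    (1, 2005, 12000), (1, 2006, 18000), (1, 2007, 25000),
    (2, 2005, 15000), (2, 2006, 6000),
    (3, 2006, 20000), (3, 2007, 24000),
    (2, 2007, None),          # a NULL Sales value, to show NULL handling
]

years = (2005, 2006, 2007)
pivot = {}
for emp, yr, amt in sales:
    row = pivot.setdefault(emp, {y: None for y in years})
    if amt is not None:       # aggregates eliminate NULL inputs
        row[yr] = (row[yr] or 0) + amt

for emp in sorted(pivot):
    print(emp, pivot[emp])
```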
To see how this query really is no different than any other GROUP BY query or to see some alternative query plans, try creating a clustered index on EmpId to eliminate the sort or using an OPTION
(HASH GROUP) hint to force a hash aggregate.
Meeting Details
For more information about this meeting, contact Victor Nistor, Jinchao Xu, Stephanie Zerby, Xiantao Li, Yuxi Zheng, Hope Shaffer.
Title: Scalable Implicit Solution Methods for Multiple-time-scale Multiphysics Systems: Applications from CFD, Transport/Reaction, and MHD*
Seminar: Computational and Applied Mathematics Colloquium
Speaker: J.N. Shadid, Sandia Natl. Lab.
A current challenge before the computational science and numerical mathematics community is the efficient computational solution of multiphysics systems. These systems are strongly coupled, highly
nonlinear and characterized by multiple physical phenomena that span a very large range of length- and time-scales. These interacting, nonlinear, multiple time-scale physical mechanisms can balance
to produce steady-state behavior, nearly balance to evolve a solution on a dynamical time scale that is long relative to the component time-scales, or can be dominated by just a few fast modes. These
characteristics make the scalable, robust, accurate, and efficient computational solution of these systems extremely challenging. This presentation will discuss issues related to the stable, accurate
and efficient time integration, nonlinear, and linear solution of multiphysics systems. The discussion will begin with an illustrative example that compares operator-split to fully-implicit methods.
The talk will then continue with an overview of a number of the important fully-coupled solution methods that our research group has applied to the solution of coupled multiple-time-scale
multi-physics systems. These solution methods include, fully-implicit time integration, direct-to-steady-state solution methods, continuation, bifurcation, and optimization techniques that are based
on Newton-Krylov iterative solvers. To enable robust, scalable and efficient solution of the large-scale sparse linear systems generated by the Newton linearization, fully-coupled multilevel
preconditioners are employed. The multilevel preconditioners are based on two differing approaches. The first technique employs a graph-based aggregation method applied to the nonzero block structure
of the Jacobian matrix. The second approach utilizes approximate block decomposition methods and physics-based preconditioning approaches that reduce the coupled systems into a set of simplified
systems to which multilevel methods are applied. The multilevel preconditioners are then compared to standard variable overlap additive one-level Schwarz domain decomposition type preconditioners. To
demonstrate the capability of these methods representative results are presented for the solution of transport/reaction and resistive magnetohydrodynamic systems with stabilized finite element
methods. In this context robustness, efficiency, and the parallel and algorithmic scaling of solution methods are discussed. These results will include the solution of systems with up to a billion
unknows on 100K cores of current large-scale parallel architectures. *This work was partially supported by the DOE office of Science Applied Math Program at Sandia National Laboratory. Sandia is a
multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract
Room Reservation Information
Room Number: MB106
Date: 11 / 02 / 2012
Time: 03:35pm - 04:25pm
ROW function
This article describes the formula syntax and usage of the ROW function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use
functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Excel.
Returns the row number of a reference.
The ROW function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.):
● Reference Optional. The cell or range of cells for which you want the row number.
Example 1
The example may be easier to understand if you copy it to a blank worksheet.
1. Select the example in this article. If you are copying the example in Excel Online, copy and paste one cell at a time.
Important: Do not select the row or column headers.
Selecting an example from Help
2. Press CTRL+C.
3. Create a blank workbook or worksheet.
4. In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Online, repeat copying and pasting for each cell in the example.
Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
After you copy the example to a blank worksheet, you can adapt it to suit your needs.
A B
Formula Description (Result)
=ROW() Row in which the formula appears (2)
=ROW(C10) Row of the reference (10)
Example 2
The example may be easier to understand if you copy it to a blank worksheet.
1. Select the example in this article. If you are copying the example in Excel Online, copy and paste one cell at a time.
Important: Do not select the row or column headers.
Selecting an example from Help
2. Press CTRL+C.
3. Create a blank workbook or worksheet.
4. In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Online, repeat copying and pasting for each cell in the example.
Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
After you copy the example to a blank worksheet, you can adapt it to suit your needs.
A B
Formula Description (Result)
=ROW(C4:D6) First row in the reference (4)
Second row in the reference (5)
Third row in the reference (6)
Note The formula in the example must be entered as an array formula (array formula: A formula that performs multiple calculations on one or more sets of values, and then returns either a single
result or multiple results. Array formulas are enclosed between braces { } and are entered by pressing CTRL+SHIFT+ENTER.). After copying the example to a blank worksheet, select the range A2:A4
starting with the formula cell. Press F2, and then press CTRL+SHIFT+ENTER. If the formula is not entered as an array formula, the single result is 4.
Applies to:
Excel 2010, Excel Web App, SharePoint Online for enterprises, SharePoint Online for professionals and small businesses
SRPP+ All-in-One & Impedance Multiplier Circuits
22 September 2009
Most classical music fans have heard of the curse of the ninth. The curse, which started with Beethoven and ended with Shostakovich, would claim the life of any composer who wrote a ninth symphony,
leaving them dead soon after its completion. Gustav Mahler, in a sneaky attempt to cheat death, titled his actual ninth symphony Das Lied von der Erde, but nonetheless died soon after writing his
official ninth symphony. Well, I am greatly relieved that there is no curse on writing about the same circuit topology nine times. For it there were, I would be in trouble, as I have written on the
famous SRPP circuit at least ten times so far.
At least that is as many instances as I could easily find via Google. Interestingly enough, prior to performing the search, I would have guessed that five was the total, as I have somewhat fallen
prey to partially believing the epitaph so many have stuck on me: He’s the anti-SRPP guy. In truth, I am not anti any topology, but I am very much anti-nescience in general and anti-unreasoning, as
it is applied to electronic circuits, in particular—in a nutshell, if a circuit can be designed, it can be understood and explained. (Have you ever noticed that those claiming complete understanding
of tube electronics while also claiming that this deep understanding is inexplicable are often the same fellows building and selling high-end tube audio gear?) And I am very much
anti-absolute-circuit, opposing any claim of universal perfection. It’s not the perfect that troubles me; it’s the universal, the implicit claim of perfection independent of (and unrelated to)
anything else. The circuit that worked beautifully as a phono pre-preamplifier is not likely to work as well as a subwoofer amplifier, for example.
The irony here is that I was once a huge SRPP zealot back in the early 1980s, using it as my key topology and promoting it to anyone who would listen—before I understood how the circuit worked. Today
I am much more circumspect, heedful of its assets and liabilities. When I have felt that it was the best circuit for a particular application, I have used it for that application. In other words, I
am not anti-SRPP by any means. In fact, I have recently felt the soft gnaw of nostalgia as I recall my college days, when I had built a 6DJ8-based SRPP headphone amplifier for my altogether wonderful
Sennheiser HD-414 headphones. Unlike the new HD-414 remake, the original headphones boasted a 2k impedance, which required only a 6µF coupling capacitor for bandwidth down to 20Hz. (I used my HD-414s
with my 200lb, coffin-sized subwoofer that crossed over at 80Hz, so I only needed a 1µF capacitor; an OTL with a 1µF coupling capacitor!) This simple headphone amplifier used two 6DJ8/6922 tubes and a
hefty 200V power supply and sounded amazingly sweet. Simple—when you can get away with it—is great. Thus, with my heart filled with warm, fuzzy feelings, I decided to lay out a new All-in-One PCB
that honored the SRPP circuit, but with a plus.
The plus takes the form of a topological variation I came up with over a decade ago and which appears in the May 2000 issue of the Tube CAD Journal.
Before explaining the SRPP+ topology, let’s do a quick review of how an SRPP circuit works.
In spite of its immense popularity and few circuit elements, few understand how the SRPP circuit works. The first step is to discern just what the SRPP’s function is. For example, is it a unity-gain
buffer? A voltage amplifier? A phase splitter? An argument could be made for all three answers, but none would prove completely satisfying. Yes, the SRPP appears to encompass a cathode follower of
sorts, making the unity-gain buffer answer seem at least partially right. And it does provide gain, making the voltage amplifier answer partially right. And it is capable of swinging positive and
negative current swings into a load in excess of its idle current, making push-pull operation possible and, thus, making a portion of the phase splitter answer seem reasonable. So what is its primary
The answer is that the SRPP is actually two elemental circuits, not one. It is a compound circuit that holds a simple grounded-cathode amplifier that provides voltage gain and an impedance-multiplier
that is neither a unity-gain buffer nor voltage amplifier. Two sub-circuits? The bottom triode is configured as a grounded-cathode amplifier that sees its plate loaded by a magnified load impedance
provided by the top triode, which is configured as an impedance-multiplier. What is an impedance-multiplier?
An impedance-multiplier is a circuit that effectively inflates the impedance presented by the load. An impedance doubler, as an example, doubles the effective impedance of the external load; for
example, 300 ohms will be reflected as being 600 ohms. Thus, a 1mA current flow into the impedance-multiplier will not produce the 0.3V voltage drop across the 300-ohm load resistor, but instead 0.6V
will develop across the resistor. So as far as the bottom tube in the SRPP is concerned, the 300-ohm load is now a 600-ohm load, which means that greater gain is now realizable by the
grounded-cathode amplifier.
We will tear away the impedance-multiplier portion of the SRPP, so that we can examine it in isolation, but first let’s examine a conceptually pure impedance-multiplier. The following may look like a
solid-state OpAmp circuit, but think of it as an idealized OpAmp, whose underlying technology is unspecified—possibly alien and unknown on our planet—but which adheres to the concept of a perfect
OpAmp: infinite open-loop gain, infinitely low output impedance, and infinitely wide frequency bandwidth, a true voltage amplifier, one capable of delivering infinite current into a load.
Resistors R1 and R2 set the impedance multiplication. If they equal one another, the multiplication will equal 2, doubling the load impedance. Using 1 ohm for R1 and R2 and an 8-ohm load, let’s
examine what happens when 1A of positive-going current is applied to the impedance-multiplier’s input.
Normally, 1A against 8 ohms equals a voltage drop of 8Vdc, but in our circuit the 8-ohm load sees 16Vdc developed across its leads. Where did the extra 1A of current come from? The super OpAmp must
keep both its input pins, the inverting and non-inverting pins, at the exact same voltage, which in this example is only possible when the OpAmp's output voltage equals the input voltage presented to the impedance-multiplier's input.
Wait a minute, didn’t we just apply a 1A current to the input, not a fixed DC voltage? Yes indeed, but as soon as any current flows into the impedance-multiplier’s input, a voltage develops across
the load resistor and resistor R2, which will then also appear at the OpAmp’s output. Any in-phase voltage on the OpAmp’s output means that the OpAmp is also delivering current into the load. As we
can see in this example, the OpAmp is slavishly matching the 1A of current into the load, thereby doubling the current into the load. (In other words, we could just as easily think of this the
impedance-multiplier circuit as a current-multiplier circuit.)
If the input current pulled down, rather than pushed up, the load would still see a 16V voltage drop, but with the current flowing in the opposite direction.
What would happen if we applied a fixed voltage to the impedance-multiplier’s input, instead of a fixed current flow? As far as the load resistor is concerned, not much would be different.
Once again, this circuit functions as an impedance multiplier, so the 17Vdc fixed voltage must also imply a current flow of 1A from the external voltage source. Well, why wasn’t 16Vdc applied, rather
than 17Vdc; isn’t twice 8 ohms equal to 16 ohms? The impedance-multiplier input impedance does not exactly equal twice load impedance, as resistor R2’s resistance is in series with the doubled load
impedance. Expressed as a formula:
Zin = R2 + Rload(R1 + R2) / R1
When R1 = R2,
Zin = R2 + 2Rload
While on the subject of formulas, we might as well fill in the blanks, for example the output impedance and the circuit’s “gain,” as seen by the load. Assuming a current source as the signal source
and assuming R1 equals R2:
Zo = Infinity
Voltage Gain = Current x 2Rload
Assuming a low-output-impedance voltage source as the signal source and assuming R1 equals R2:
Zo = R1 || R2
Voltage Gain = Vin x 2Rload / (R2 + 2Rload)
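Plugging the doubler example (R1 = R2 = 1 ohm, an 8-ohm load) into these formulas is easy to sketch in a few lines of Python (a check of the arithmetic in the text, not a circuit simulation):

```python
def input_impedance(r1, r2, rload):
    """Zin = R2 + Rload(R1 + R2)/R1, as seen by the signal source."""
    return r2 + rload * (r1 + r2) / r1

r1 = r2 = 1.0    # ohms
rload = 8.0      # ohms

zin = input_impedance(r1, r2, rload)
print(zin)       # 17.0 ohms, so a 1A input implies 17V at the input

# With a low-impedance voltage source driving the input, the gain into
# the load is 2*Rload / (R2 + 2*Rload): a small loss from the divider
# that R2 forms with the doubled load impedance.
gain = 2 * rload / (r2 + 2 * rload)
print(gain)      # about 0.941
```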
Yes indeed, there is a voltage loss when the impedance-multiplier’s input is a fixed voltage, as resistor R2 and twice the load impedance define a two-resistor voltage divider. In fact there is also
a voltage loss of sorts when the input signal derives from a current source, as the impedance-multiplier's input impedance is greater than just twice the load impedance, so the voltage drop across R2 represents a loss of voltage. The best course to travel is the one that avoids thinking about "gain," as voltage gain is not the impedance-multiplier's function, no more than it is a unity-gain buffer's.
The interesting result is the output impedance as seen by the load. With a current source as a signal source, the output impedance approaches infinity and the impedance-multiplier resembles a
current-output amplifier, like Nelson Pass’s First Watt amplifier. With the voltage source as a signal source, the output impedance is as low as resistors R1 and R2 in parallel and the
impedance-multiplier resembles a tube amplifier, with its low damping factor.
In other words, if you are only familiar with voltage power amplifiers, the impedance-multiplier circuit must seem very strange indeed. Sometimes it acts sort of like a power amplifier; at other
times, a unity-gain buffer of sorts, but not quite. Furthermore, a power amplifier’s output impedance is independent of the signal source’s output impedance, whereas the impedance-multiplier
circuit’s output is entirely dependent on the impedance presented to its input. Put another way, stop thinking power amplifier; think impedance-multiplier circuit.
Anticipate possible fault conditions and incorporate safety features. For example, if the impedance-multiplier circuit is fed from a current source, rather than a voltage source, an open circuit at
the impedance-multiplier circuit’s output spells danger, as any amount of current against infinity implies infinite voltage swing. (See Blog 98 for more information on fault conditions with a
current-output amplifier.)
So far, only impedance-multiplier circuits that doubled the load impedance have been covered. Now we move on to greater impedance-multiplication ratios. What happens if R2 is greater than R1 in the
following circuit?
The obvious answer is that the OpAmp will have to deliver more current into the load than the input signal source; but how much more?
In the example above, resistor R2 is twice R1’s value, so the current flow through R2 must be half that through R1, as both resistors see the same voltage differential; thus, the current ratio is
equal to R2/R1. But what is the impedance multiplication in this example?
Z = Rload(R1 + R2) / R1
This allows us to conclude that the input impedance presented by the impedance-multiplier circuit is:
Zin = R2 + Rload(R1 + R2) / R1
While we are at it, the “gain” and Zo, assuming a current source as the signal source are:
Voltage Gain = Current x Rload(R1 + R2) / R1
Zo = Infinity
Assuming a low-output-impedance voltage source as the signal source:
Voltage Gain = Vin x [Rload(R1 + R2) / R1] / [R2 + Rload(R1 + R2) / R1]
Zo = R1 || R2
By the way, if the current ratio is equal between impedance-multiplier circuit and signal source, then both must contribute equally in delivering power into the load. (The impedance-multiplier
circuit heats only R1 and the signal source heats only R2.) But if the ratio is in favor of the impedance-multiplier circuit, then the impedance-multiplier circuit must be providing more power into
the load impedance. For example, in the example of R1 = 1 and R2 = 2, the impedance multiplication is 3 and the impedance-multiplier circuit delivers twice the current that the signal source does
into the load, so the impedance-multiplier circuit’s power contribution must be twice that of the signal source. In this same example, 24Vpk develops across the load resistance, the result of 3A
against 8 ohms, which results in 36W of power (RMS) into the load, 24W from the impedance-multiplier circuit and 12W from the signal source.
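The R1 = 1 ohm, R2 = 2 ohm example above works out neatly in a short script (again just arithmetic on the formulas in the text, assuming sinusoidal peak values):

```python
r1, r2, rload = 1.0, 2.0, 8.0
i_source = 1.0                      # 1A peak from the signal source

mult = (r1 + r2) / r1               # impedance multiplication = 3
v_load = i_source * rload * mult    # 24V peak across the 8-ohm load
i_total = v_load / rload            # 3A peak total into the load

p_load = v_load * i_total / 2       # 36W RMS for sinusoidal peaks
p_source = v_load * i_source / 2    # 12W contributed by the signal source
p_imc = p_load - p_source           # 24W from the impedance-multiplier
print(mult, v_load, p_load, p_source, p_imc)
```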
This raises an interesting question, What happens if R1 equals zero ohms? The impedance multiplication ratio will equal infinity and the impedance-multiplier circuit must deliver infinitely more
power into the load than the signal source. This is impossible in reality, of course. In the real world, the impedance-multiplier circuit will always contain an implicit R1 resistance in the form of
the OpAmp’s output impedance, albeit that this impedance can be amazingly low with modern, high-powered, solid-state power amplifiers.
Okay, now switch mental gears and think back to April of 2008, in Blog 140, wherein I describe the New Hybrid SE OTL Design in audioXpress Magazine.
AJ van Doorn’s hybrid power amplifier also uses a high-powered solid-state power amplifier and low-powered SE tube to provide voltage gain and a wee amount of power into the load. With a few
adjustments and much editing, we end up with something like this.
The amplifier shown above could use a high-current triode, such as the 6C33, or pentode, such as the 6LF6. The tube would work in single-ended, class-A mode, with the constant-current source loading
its plate. Resistors R1 and R2 would have to be adjusted to provide the optimal impedance multiplication. And the input transformer would have to offer a large step-up ratio. Of course, the input
transformer could be eliminated, along with the negative power supply, if we used a coupling capacitor to bridge the tube amplifier to the impedance-multiplier circuit, but then the output tube would
demand a huge input voltage swing from the line-stage amplifier, something most tubes do plentifully well. The output impedance must be staggeringly high, even with a triode in place of a pentode. By
turning this circuit on its head, we can use a cathode follower to lower the output impedance.
Why the unhappy face? The above circuit will suffer from DC offset woes, as the output and the cathode cannot both be at zero volts while any current flows through the tube. One possible workaround
is to add a DC servo loop that would steer the power OpAmp’s output to a slightly negative voltage that would then offset the positive tug from the output tube, resulting in 0V at the loudspeaker
The big test for any possible solution is will it protect the loudspeaker when the tube is cold and not yet conducting or jiggled in its socket, breaking its contact with the rest of the circuit? In
this example, I am inclined to think everything will work out. In fact, the only problem I see is having to create a separate bipolar power supply for the DC-servo OpAmp, as few 8-pin IC OpAmps can
handle 64V of power supply voltage. So, is it possible to forgo the extra OpAmp?
The above circuit uses the power OpAmp itself as its own DC servo and the tube’s quasi constant-current source. One aspect of this and the previous variation that may escape notice is how the tube’s
idle current is not fixed at 100mA, although it must be more obvious with this last variation. The -32V bias voltage and the 30-ohm cathode resistor set the idle current for the output tube. This
might prove a big liability with some tubes; high-gm tubes are often quite squirrelly in terms of idle current. The next variation reintroduces the constant-current source and it will force the
output tube to conduct a specified idle current, even with variations in wall voltage or tube ageing.
Ok, let us step back a bit and think small. Few have the money, the expertise, and the guts to build a big power amplifier. How could we easily use an impedance-multiplier circuit and a single tube
to build a nice unity-gain line buffer? The following circuit uses an adjustable negative three-pin voltage regulator reconfigured as an impedance-multiplier circuit. The LM337 strives to keep a
fixed voltage differential of -1.25V across its adjustment pin and its output pin. If no external load is attached to the two 62.5-ohm resistors, this fixed voltage across 125 ohms of resistance would
define a constant-current source of 10mA. In other words, the tube must conduct 10mA of current and only 10mA at idle.
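The idle-current bookkeeping for this buffer is simple enough to check in code (values taken from the description above; this is arithmetic on the stated resistor values, not a model of the LM337 itself):

```python
v_ref = 1.25            # volts the LM337 holds between ADJ and OUT pins
r_total = 62.5 + 62.5   # the two 62.5-ohm resistors in series

i_idle = v_ref / r_total
print(i_idle * 1000)    # 10.0 mA idle current through the tube

# Class-A push-pull: the tube (and the LM337) can swing from 0 to twice
# the idle current, and the load sees the delta between the two.
i_load_peak = 2 * i_idle
print(i_load_peak * 1000)  # 20.0 mA peak into the load
```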
With a load impedance and an input signal, however, the load will divert current and the tube's current conduction can swing from 0A to 20mA, as can the LM337. The load sees the delta in conduction between tube and LM337. Thus, at idle, no current flows into the load (ignore the coupling capacitor). But as the tube's cathode swings up and the tube conducts more, the LM337 conducts equally less,
so the external load sees twice the current swing that the tube or LM337 sees; in this case -20mA to +20mA. In other words, we have a seemingly single-ended circuit (like the SRPP) that operates in a
push-pull fashion. As far as the tube is concerned, the external load impedance has been doubled (plus 62.5 ohms) and its distortion characteristic is very single-ended. As far as the LM337 is
concerned, any change in the voltage differential must be countered by varying current and the LM337 doesn’t know or care where that varying current ends up as long as the 1.25V voltage differential
is maintained.
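The push-pull bookkeeping described above can be sketched as simple current subtraction (the names are mine; the 10mA idle point and the 0A-to-20mA swing come from the text):

```java
public class PushPull {
    // The external load sees only the difference between what the tube
    // and the LM337 conduct.
    static double loadAmps(double tubeAmps, double lm337Amps) {
        return tubeAmps - lm337Amps;
    }

    public static void main(String[] args) {
        System.out.println(loadAmps(0.010, 0.010)); // at idle: no load current
        System.out.println(loadAmps(0.020, 0.000)); // tube full on: +20mA peak
        System.out.println(loadAmps(0.000, 0.020)); // tube cut off: -20mA peak
    }
}
```

Each device only swings 10mA about its idle point, yet the load sees a 20mA peak swing in each direction, which is the doubling the text describes.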
In my next post, I will explain how the above circuit works in great detail. By the way, expect this circuit to be invented in the near future (and patented) by one of the big tube-gear companies. Of
course, no mention will ever be made of me or the Tube CAD Journal and the name given to the circuit will be something like the Super-Lin Follower™. Maybe I should preemptively strike and name it the
“Inverted SRPP” or something more whimsical, say the “Somersault Circuit.” (My four-year-old daughter would love it, as she asked me the other day, “Daddy, what’s the etymology of somersault?” That’s
my daughter!)
Now let’s switch gears again and think back to the topic of power-booster amplifiers, covered in Blogs 157, 155, 154, and 153. The idea behind the power-booster amplifiers is that a small, flea-power
amplifier and a huge, heavy, power-booster amplifier are wed and the result is the ability to greatly magnify the wimpy amplifier’s output into hundreds of watts. The commercial example was the
Musical Fidelity 550K power-booster, which presented a 50-ohm load to the small power amplifier. I have no idea what the 550K schematic looks like, but I am sure that it does not represent a true
impedance-multiplier circuit. This is not a failing, just an observation. Now imagine a big impedance-multiplier circuit capable of delivering 500 watts into a loudspeaker and imagine a tube OTL
power amplifier capable of delivering 1Apk with 100Vpk voltage swings. Such an OTL would require a 100-ohm load, which the impedance-multiplier circuit could create out of an 8-ohm load. In other
words, the tube amplifier would deliver 1A and the solid-state impedance-multiplier circuit would provide 11A into the 8-ohm load. This could prove quite interesting, as a 12A peak works out to 576W into an 8-ohm load.
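The wattage figure checks out with the usual peak-to-RMS conversion; a sketch (the names are mine; the 1A + 11A current split and 8-ohm load are from the text):

```java
public class BoosterMath {
    // Average sine-wave power into a resistive load: (Ipk^2 / 2) * R,
    // since Irms = Ipk / sqrt(2).
    static double sineWatts(double peakAmps, double loadOhms) {
        return peakAmps * peakAmps / 2.0 * loadOhms;
    }

    public static void main(String[] args) {
        double tubeA = 1.0, boosterA = 11.0;       // current split from the text
        double peak = tubeA + boosterA;            // 12A peak into the speaker
        System.out.println(sineWatts(peak, 8.0));  // 576.0 watts into 8 ohms
        // Effective load seen by the tube amp: same voltage, 1/12 the current,
        // so roughly a twelvefold impedance multiplication of the 8-ohm load.
        System.out.println(peak * 8.0 / tubeA);    // 96 ohms, close to the 100-ohm figure
    }
}
```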
Or imagine a loudspeaker that held a high-efficiency tweeter and midrange drivers, say 96dB @2.8V, and a low-efficiency, 2-ohm, 10-inch car woofer (or four smaller 8-ohm woofers in parallel) in an
acoustic suspension enclosure, but with deep, deep bass extension. Such a speaker could never just be hooked up to a conventional power amplifier, as a 2-ohm load would provoke gross distortion from
most amplifiers. But if we added an impedance-multiplier circuit to only the woofer, the woofer could effectively see a fourfold increase in signal current, while still presenting an 8-ohm load to
the external power amplifier, which would bring its efficiency up to that of the rest of the system. It might be just the answer to many a tube-loving audiophile's dreams: a small speaker with thundering bass
and high efficiency. True, such a speaker would need to be plugged into a wall socket, but that might be a feature, as many would imagine that it was an electrostatic design.
Once again, as a testament to my lack of basic business sense, after many readers have given up, I now mention that the SRPP All-in-One PCBs will be offered for sale soon. Visit the GlassWare Yahoo
I am always amazed by how long these entries become and by how much I have left unsaid. And I have much, much more to write about. But for right now, this should be enough for most TCJers to digest
until next time. | {"url":"http://www.tubecad.com/2009/09/blog0171.htm","timestamp":"2014-04-20T21:30:47Z","content_type":null,"content_length":"47427","record_id":"<urn:uuid:e764baf6-1945-4088-9d88-f5f24a64350c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Salem, NH Precalculus Tutor
Find a Salem, NH Precalculus Tutor
...In the last 3 years we have upgraded to a chess team where we play different schools around the area. My rating, based on some online play, might be around 1500, give or take 100 points. I
have been teaching C++ at North Reading High School for 7 years.
19 Subjects: including precalculus, physics, algebra 2, algebra 1
...I taught over forty College Courses in Physics, Mathematics and Electrical Engineering. I have a CAGS degree in Educational Leadership (it is like a Master's degree in School Administration).
I enjoy teaching High/Middle school and College students, regardless of level. I recently taught a Col...
6 Subjects: including precalculus, physics, algebra 1, prealgebra
...In the beginning I had trouble understanding his heavy accent and considered changing sections. Had I changed sections I would have made a big mistake. During the first class, this professor
told us that he felt that American educators did not teach math the right way.
6 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I have over fifteen years of teaching experience, in the United States, Mainland China, and Hong Kong. I formerly served as Director of Program Planning for the World Affairs Council of
Pittsburgh and as an Adjunct Professor of Political Science for the University of Pittsburgh, teaching courses...
41 Subjects: including precalculus, English, algebra 1, reading
...I graduated Cum Laude and as a member of the national economics society. I double majored in economics and business management with a concentration in finance. Since college I have had a
year's worth of experience as a paraprofessional working with special education, particularly behavioral and emotional needs.
29 Subjects: including precalculus, English, finance, economics
Windham, NH precalculus Tutors | {"url":"http://www.purplemath.com/Salem_NH_precalculus_tutors.php","timestamp":"2014-04-18T13:29:11Z","content_type":null,"content_length":"23980","record_id":"<urn:uuid:bb6ad30e-493a-4581-8b74-e68677b3c602>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Java Dice Program
May 22nd, 2011, 07:44 PM #1
Junior Member
Join Date
May 2011
Thanked 0 Times in 0 Posts
public class RollDice2 {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int dice = 0, rolls = 0;

        System.out.println("Enter the number of dice you wish to use");
        dice = input.nextInt();
        System.out.println("Enter the number of rolls you wish to use");
        rolls = input.nextInt();

        int frequencies[] = new int[6 * dice + 1];
        // set each cell in frequencies to 0
        for (int i = 0; i < frequencies.length; i++)
            frequencies[i] = 0;

        // roll the dice
        for (int i = 0; i < rolls; i++) {
            int total = RollDice2.throwDice(dice);
            for (int s = 0; s < total; s++)
                System.out.println(total + "- ");
        }
    }

    public static int throwDice(int count) {
        count = (int) (1 + 6 * (Math.random()));
        return count;
    }
}
I am trying to write a program and the user inputs how many dice and rolls they want. The output is a histogram using dashes. I'm not sure what is wrong with my code at this point.
Last edited by helloworld922; May 22nd, 2011 at 08:02 PM.
Please properly format your post. Use highlight tags around your code. And please don't type in all caps. That usually has the connotation of shouting.
What is wrong? Are you getting an exception? If so, please post the error message. Otherwise, please post an example input and the output you're getting, along with a statement of what the
correct output should be.
It works when I compile and run it, except the dashes come after the number. Also, you should have an import statement at the top declaring that you're importing the Scanner class.
For great tutiorals on java programming, visit my blog: http://www.ridgehutchingsblog.blogspot.com/
2 -
3 --
4 ---
5 ----
6 -----
7 ------
8 -----
9 ----
10 --
11 --
12 -
the above is the correct output
this is the output i am getting is
Enter the number of dice you wish to use
Enter the number of rolls you wish to use
Press any key to continue . . .
import java.util.*;

public class RollDice2 {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int dice = 0, rolls = 0;

        System.out.println("Enter the number of dice you wish to use");
        dice = input.nextInt();
        System.out.println("Enter the number of rolls you wish to use");
        rolls = input.nextInt();

        int frequencies[] = new int[6 * dice + 1];
        // set each cell in frequencies to 0
        for (int i = 0; i < frequencies.length; i++)
            frequencies[i] = 0;

        // roll the dice
        for (int i = 0; i < rolls; i++) {
            int total = RollDice2.throwDice(dice);
            for (int s = 0; s < total; s++)
                System.out.println(total + "- ");
        }
    }

    public static int throwDice(int count) {
        count = (int) (1 + 6 * (Math.random()));
        return count;
    }
}
Last edited by ebone; May 22nd, 2011 at 08:41 PM.
Yes, it works fine when I do it too, but the results are not what they should be. I have the import statement at the top of my code; I just forgot to copy and paste it.
There are so many things wrong with your code. I'll start with the throwDice method.
Why bother passing in a value as a parameter if you immediately throw it away and assign a new value to the count variable?
Why are you not using the parameter in your formula to calculate the dice total?
The formula will not give correct results. The Math.random method generates numbers from 0.0 (inclusive) up to, but not including, 1.0. Plugging those values into your formula:
1 + (6 * 0.0) = 1
1 + (6 * 0.9999...) = 6.9999..., which the (int) cast truncates to 6
Therefore your method can only return values in the range of 1 (which is impossible for 2 or more dice) to 6. What happens when there are two or more dice? Where are the values of 7 and upwards?
Now back to the main method:
Why do you have the loop displaying the histogram inside the other loop? I would expect the program to display it once at the end.
Why is the histogram display loop iterating "total" number of times? If the total is 2 how is it supposed to display the histogram for all other values above 2?
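Putting those points together, here is one possible corrected version (a sketch, not the only way to do it; the class name and the sample dice/roll counts are mine): throwDice actually uses its parameter and sums `count` individual dice, and the histogram is tallied during the rolls but printed once at the end, one row per possible total.

```java
import java.util.Random;

public class RollDice2Fixed {
    private static final Random RNG = new Random();

    // Sum `count` independent six-sided dice, so totals run from count to 6*count.
    public static int throwDice(int count) {
        int total = 0;
        for (int i = 0; i < count; i++) {
            total += 1 + RNG.nextInt(6); // each die contributes 1..6
        }
        return total;
    }

    // Roll `dice` dice `rolls` times; frequencies[t] counts how often total t came up.
    public static int[] tally(int dice, int rolls) {
        int[] frequencies = new int[6 * dice + 1];
        for (int i = 0; i < rolls; i++) {
            frequencies[throwDice(dice)]++;
        }
        return frequencies;
    }

    public static void main(String[] args) {
        int dice = 2, rolls = 30; // sample values; read these with Scanner if you like
        int[] frequencies = tally(dice, rolls);
        // Print the histogram once, after all the rolls are done.
        for (int total = dice; total < frequencies.length; total++) {
            StringBuilder bar = new StringBuilder();
            for (int s = 0; s < frequencies[total]; s++) {
                bar.append('-');
            }
            System.out.println(total + " " + bar);
        }
    }
}
```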
May 22nd, 2011, 08:10 PM #2
Super Moderator
Join Date
Jun 2009
Thanked 619 Times in 561 Posts
Blog Entries
May 22nd, 2011, 08:21 PM #3
May 22nd, 2011, 08:38 PM #4
May 22nd, 2011, 08:42 PM #5
May 22nd, 2011, 11:07 PM #6
Grand Poobah
Join Date
Mar 2011
My Mood
Thanked 158 Times in 149 Posts | {"url":"http://www.javaprogrammingforums.com/whats-wrong-my-code/9025-java-dice-program.html","timestamp":"2014-04-16T16:01:44Z","content_type":null,"content_length":"78938","record_id":"<urn:uuid:7fe2df60-d00c-41ae-bd42-5fe06b615d99>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove that AC + BC = 2AB
October 5th 2012, 11:30 AM #1
Sep 2012
Prove that AC + BC = 2AB
Hey everyone,
I've got troubles solving this task, maybe you will be able to solve it
Points P, Q, R lie respectively on sides BC, CA and AB of triangle ABC (as shown in the picture). We know that AR = RP = PC and BR = RQ = QC.
Prove that AC + BC = 2AB.
My first idea was to prove it using vectors, but unfortunately I failed
I will try to prove it again, but maybe you will help me a bit
Thanks in advance!
Last edited by Lukaszm; October 5th 2012 at 10:16 PM.
Re: Prove that AC + BC = 2AB
Re: Prove that AC + BC = 2AB
Oops, it should read AC + BC = 2AB, sorry
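Not a proof, but a numeric sanity check of the statement can be helpful before attempting one. The sketch below (the coordinate setup and all names are mine) places A = (0,0) and B = (AB,0), solves AR = RP = PC = t by bisection, and then tests whether BR = RQ = QC can hold for the same point R. The mismatch vanishes only when AC + BC = 2AB:

```java
public class ExactSumOfSides {
    static double dist(double x1, double y1, double x2, double y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }

    // Build the triangle from its side lengths, solve AR = RP = PC = t by
    // bisection (R on AB at distance t from A, P on CB at distance t from C),
    // then return |RQ| - BR for the same R, where Q lies on CA with QC = BR.
    // A result near zero means both chains of equal segments share R.
    static double mismatch(double ab, double ac, double bc) {
        double cx = (ac * ac - bc * bc + ab * ab) / (2 * ab);
        double cy = Math.sqrt(ac * ac - cx * cx);
        double ubx = (ab - cx) / bc, uby = -cy / bc; // unit vector C -> B
        double uax = -cx / ac, uay = -cy / ac;       // unit vector C -> A
        double lo = 0, hi = ab;
        for (int i = 0; i < 60; i++) { // bisection on |RP| = t
            double t = (lo + hi) / 2;
            double rp = dist(t, 0, cx + t * ubx, cy + t * uby);
            if (rp > t) lo = t; else hi = t;
        }
        double t = (lo + hi) / 2;
        double s = ab - t; // BR = AB - AR
        double rq = dist(t, 0, cx + s * uax, cy + s * uay);
        return rq - s;
    }

    public static void main(String[] args) {
        // AC + BC = 2*AB (1.6 + 2.4 = 4 = 2*2): the mismatch vanishes.
        System.out.println(mismatch(2.0, 1.6, 2.4));
        // AC + BC != 2*AB: no single R satisfies both chains.
        System.out.println(mismatch(2.0, 1.6, 2.2));
    }
}
```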
October 5th 2012, 03:36 PM #2
October 5th 2012, 10:16 PM #3
Sep 2012 | {"url":"http://mathhelpforum.com/geometry/204707-prove-ac-bc-2ab.html","timestamp":"2014-04-19T21:16:04Z","content_type":null,"content_length":"34262","record_id":"<urn:uuid:c18d33b7-9d8a-4ae3-b812-20f0ce3d9b06>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |