Before decoupling, the matter in the universe has significant pressure because it is tightly coupled to radiation. This pressure counteracts any tendency for matter to collapse gravitationally.
Formally, the Jeans mass is greater than the mass within a horizon volume for times earlier than decoupling. During this epoch, density perturbations will set up standing acoustic waves in the
plasma. Under certain conditions, these waves leave a distinctive imprint on the power spectrum of the microwave background, which in turn provides the basis for precision constraints on cosmological
parameters. This section reviews the basics of the acoustic oscillations.
In their classic 1996 paper, Hu and Sugiyama transformed the basic equations describing the evolution of perturbations into an oscillator equation. Combining the zeroth moment of the photon Boltzmann
equation with the baryon Euler equation for a given k-mode in the tight-coupling approximation (mean baryon velocity equals mean radiation velocity) gives
where Θ_0 is the zeroth moment of the temperature distribution function (proportional to the photon density perturbation), R = 3ρ_b/(4ρ_γ) is proportional to the scale factor a, H = ȧ/a is the
conformal Hubble parameter, and the sound speed is given by c_s^2 = dP/dρ = 1/(3 + 3R). (All overdots are derivatives with respect to conformal time.)
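Schematically, and with the exact form of the driving terms depending on conventions for the metric potentials, the oscillator equation of Eq. (27) takes the standard forced-oscillator form:

```latex
\ddot{\Theta}_0 + \frac{\dot{R}}{1+R}\,\dot{\Theta}_0 + k^2 c_s^2\,\Theta_0 = F(\eta),
\qquad
c_s^2 = \frac{1}{3(1+R)}
```

where F(η) collects the gravitational driving terms sourced by the potentials.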
A WKB approximation to the homogeneous equation with no driving source terms gives the two oscillation modes (Hu and Sugiyama 1996)
where the sound horizon r[s] is given by
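In the standard notation of Hu and Sugiyama (1996), the two WKB modes of Eq. (28) and the sound horizon read:

```latex
\Theta_0(\eta) \;\propto\; (1+R)^{-1/4}\cos(k r_s)
\quad\text{and}\quad
(1+R)^{-1/4}\sin(k r_s),
\qquad
r_s(\eta) = \int_0^{\eta} c_s\, d\eta'
```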
Note that at times well before matter-radiation equality, the sound speed is essentially constant, c_s = 1/√3. A given wavenumber k will set up an oscillatory behavior in the primordial plasma described by a linear combination of the two modes in Eq. (28). The relative contribution of the modes will be determined by the initial conditions describing the perturbation.
Equation (27) appears to be simpler than it actually is, because the gravitational potentials themselves depend in part on Θ_0. At the expense of pedagogical transparency, this situation can be remedied by considering separately the potential from the
photon-baryon fluid and the potential from the truly external sources, the dark matter and neutrinos. This split has been performed by Hu and White (1996). The resulting equation, while still an
oscillator equation, is much more complicated, but must be used for a careful physical analysis of acoustic oscillations.
The initial conditions for radiation perturbations for a given wavenumber k can be broken into two categories, according to whether the gravitational potential perturbation from the baryon-photon fluid, Φ_bγ, is nonzero or zero. In the first case, termed ``adiabatic,'' n_b/n_γ, the ratio of baryon to photon number densities, is a constant in space. This case must couple to the cosine oscillation mode, since it requires Θ_0 to be nonzero at early times.
The other case is termed ``isocurvature'' since the fluid gravitational potential perturbation Φ_bγ, and hence the perturbations to the spatial curvature, are zero. In order to arrange such a
perturbation, the baryon and photon densities must vary in such a way that they compensate each other: n_b/n_γ varies, and thus these perturbations are in entropy, not curvature. At an early
enough time, the temperature perturbation in a given k mode must arise entirely from the Sachs-Wolfe effect, and thus isocurvature perturbations couple to the sine oscillation mode. These
perturbations arise from causal processes like phase transitions: a phase transition cannot change the energy density of the universe from point to point, but it can alter the relative entropy
between various types of matter depending on the values of the fields involved. The potentially most interesting cause of isocurvature perturbations is multiple dynamical fields in inflation. The
fields will exchange energy during inflation, and the field values will vary stochastically between different points in space at the end of the phase transition, generically giving isocurvature along
with adiabatic perturbations (Polarski and Starobinsky 1994).
The numerical problem of setting initial conditions is somewhat tricky. The general problem of evolving perturbations involves linear evolution equations for around a dozen variables, outlined in
Sec. 3.2. Setting the correct initial conditions involves specifying the value of each variable in the limit as conformal time goes to zero.
The characteristic ``acoustic peaks'' which appear in Figure 1 arise from acoustic oscillations which are phase coherent: at some point in time, the phases of all of the acoustic oscillations were
the same. This requires the same initial condition for all k-modes, including those with wavelengths longer than the horizon. Such a condition arises naturally for inflationary models, but is very
hard to reproduce in models producing perturbations causally on scales smaller than the horizon. Defect models, for example, produce acoustic oscillations, but the oscillations generically have
incoherent phases and thus display no peak structure in their power spectrum (Seljak et al. 1997). Simple models of inflation which produce only adiabatic perturbations ensure that all perturbations
have the same phase at early times.
A glance at the k dependence of the adiabatic perturbation mode reveals how the coherent peaks are produced. The microwave background images the radiation density at a fixed time; as a function of k,
the density varies like cos(kr[s]), where r[s] is fixed. Physically, on scales much larger than the horizon at decoupling, a perturbation mode has not had enough time to evolve. At a particular
smaller scale, the perturbation mode evolves to its maximum density in potential wells, at which point decoupling occurs. This is the scale reflected in the first acoustic peak in the power spectrum.
Likewise, at a particular still smaller scale, the perturbation mode evolves to its maximum density in potential wells and then turns around, evolving to its minimum density in potential wells; at
that point, decoupling occurs. This scale corresponds to that of the second acoustic peak. (Since the power spectrum is the square of the temperature fluctuation, both compressions and rarefactions
in potential wells correspond to peaks in the power spectrum.) Each successive peak represents successive oscillations, with the scales of odd-numbered peaks corresponding to those perturbation
scales which have ended up compressed in potential wells at the time of decoupling, while the even-numbered peaks correspond to the perturbation scales which are rarefied in potential wells at
decoupling. If the perturbations are not phase coherent, then the phase of a given k-mode at decoupling is not well defined, and the power spectrum just reflects some mean fluctuation power at that scale.
In practice, two additional effects must be considered: a given scale in k-space is mapped to a range of l-values; and radiation velocities as well as densities contribute to the power spectrum. The
first effect broadens out the peaks, while the second fills in the valleys between the peaks since the velocity extrema will be exactly out of phase with the density extrema. The amplitudes of the
peaks in the power spectrum are also suppressed by Silk damping, as mentioned in Sec. 3.5.
The mass of the baryons creates a distinctive signature in the acoustic oscillations (Hu and Sugiyama 1996). The zero-point of the oscillations is obtained by setting Θ_0 constant in Eq. (27): the
result is
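In the usual notation (up to sign conventions for the potential Ψ), the zero-point condition reads:

```latex
\Theta_0 = -(1+R)\,\Psi
```

so the apparent temperature Θ_0 + Ψ = −RΨ, and the oscillations are displaced from zero by an amount that grows with the baryon loading R.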
The photon temperature Θ_0 is not itself observable, but must be combined with the gravitational redshift to form the ``apparent temperature'' Θ_0 + Ψ. If the baryon density is negligible, R → 0, and the oscillations are effectively
about the mean temperature. The positive and negative oscillations are of the same amplitude, so when the apparent temperature is squared to form the power spectrum, all of the peaks have the same
height. On the other hand, if the baryons contribute a significant mass so that R
Venn Factors
by admin
Use the arrows to change the number to generate factors for. Drag and drop the tiles to the correct position on the Venn diagram. They will illuminate green when correctly placed and red when
incorrectly placed.
Go to interactive whiteboard resource.
A similar interactive whiteboard resource focussing on multiples rather than factors.
Another factor based resource but one that demonstrates prime factor trees.
4 comments for “Venn Factors”
Remind me to kill (thank) you! We just did GCF!
I think VennFactors & VennMultiples is easy to use, and also very good to learn maths with.
I like the idea of using the Venn Diagram to show common factors. It will certainly help my students to locate the greatest common factor by isolating the common factors first. Because the
factors are given to students, it will eliminate the error of missing a set of factors. Thanks.
Laguna Woods Math Tutor
Find a Laguna Woods Math Tutor
I have a doctorate in Nuclear Physics and many years of engineering experience designing electronics for industry. For the last two years, I have been teaching at the Community College Level as an
Adjunct Professor. I have volunteer tutored math at a local high school and tutored privately.
8 Subjects: including calculus, algebra 1, algebra 2, geometry
...Wanting to learn more about science, I obtained a second Bachelors degree in physics. I have worked in multiple areas including structural design, advanced spacecraft design, database design,
and programming. When I was a graduate student I taught engineering Statics, Dynamics, and Strength of Materials classes.
14 Subjects: including calculus, general computer, geometry, prealgebra
...That's not to say that an A is not important, as grades on your transcripts do evaluate your potential, but it is vital to find a balance between striving to learn and working for an A. I began
my academic journey studying economics at the University of California, Irvine. I completed my degree...
51 Subjects: including algebra 2, chemistry, ACT Math, reading
...I help them achieve high scores on their tests and quizzes. They went from a C to an A in a matter of weeks. I also tutor elementary and middle school kids for NHS (National Honor Society). I
have won the math competition in 8th grade and I was chosen by my school to represent as a spelling bee competitor; however, I was unable to go due to a wrestling tournament.
23 Subjects: including algebra 2, economics, SAT math, trigonometry
...I have taught proofreading to editorial staffs and to students from elementary school through college. Very few people possess this necessary skill. A good proofreader stands out from the crowd
not just in the classroom but in consideration for promotion—or termination—at work.
27 Subjects: including algebra 1, algebra 2, ACT Math, GRE
Problem in generating Fibonacci sequence in java
April 23rd, 2009, 01:52 PM
Problem in generating Fibonacci sequence in java
Hello all. I am working on the Fibonacci sequence constructor, and I was wondering, if I posted what I have so far, whether you could tell me if I'm on the right track or not and maybe give me a few pointers to help me get it right. Here's what I have for code, and the assignment so you can see what I have to do:
Code :
public class FibonacciGenerator
{
    public static int fib(int n)
    {
        int fold1 = 0, fold2 = 1;
        public getNumber()
        for (int i = 0; i < n; i++)
        {
            int savefold1 = fold1;
            fold1 = fold2;
            fold2 = savefold1 + fold2;
        }
        return fold1;
    }
}
Write a program that prompts the user for n and prints the nth value in the Fibonacci sequence. Use a class FibonacciGenerator with a method nextNumber .
There is no need to store all values for fn. You only need the last two values to compute the next one in the series:
fold1 = 1;
fold2 = 1;
fnew = fold1 + fold2;
After that, discard fold2 , which is no longer needed, and set fold2 to fold1 and fold1 to fnew .
April 24th, 2009, 03:23 AM
Freaky Chris
Re: I could use a few pointers with the Fibonacci sequence
Your code makes no sense. You have random function things in the middle of another function.
What I can tell you is that a Fibonacci sequence is denoted by the following mathematical expression:
F(n) = F(n-1) + F(n-2) : Where F(0) = 0, F(1) = 1
Using that, you can write either a recursive or a looped formulation to compute the answer.
Since you are trying to find the value of F(N), there is a mathematical equation that allows you to compute that and only that, which you could do as a side exercise for extra merit.
The equation is as such:
((((1 + ( 5^0.5 ) ) / 2)^n ) / ( 5^0.5 ) ) + 0.5
Then rounded by the rules of floor, which state the number is to be rounded down to the nearest whole number, NEVER up. So even 3.999999999 would be rounded to 3. This is often the effect given
via truncation when dealing with integer mathematics whilst programming.
If anyone reads this and is wondering about the solution using Math to find F(N) without computing the entire sequence up F(N) then here it is,
Code :
import java.util.Scanner;

public class FibonacciGenerator
{
    public static void main(String[] args)
    {
        System.out.println("Enter a number to compute the fibonacci number for:");
        System.out.println(fib(new Scanner(System.in).nextInt()));
    }

    public static int fib(double n)
    {
        return (int) ((Math.pow(((1 + Math.sqrt(5)) / 2), n) / Math.sqrt(5)) + 0.5);
    }
}
big_c, it should be noted, you cannot hand that in because it is not what your assignment asks for :)
April 24th, 2009, 08:52 AM
Freaky Chris
Re: I could use a few pointers with the Fibonacci sequence
Here are three different solutions, it is clear to see which are best. Of course the final looped solution is the best for the problem big_c is facing.
Code :
import java.util.Scanner;

public class FibonacciGenerator
{
    public static void main(String[] args)
    {
        System.out.println("Enter a number to compute the fibonacci number for:");
        int n = new Scanner(System.in).nextInt();
    }

    public static int fib(double n)
    {
        return (int) ((Math.pow(((1 + Math.sqrt(5)) / 2), n) / Math.sqrt(5)) + 0.5);
    }

    public static int fibR(int n)
    {
        if (n == 0) return 0;
        else if (n <= 2) return 1;
        return fibR(n - 1) + fibR(n - 2);
    }

    public static int fibL(int n)
    {
        int A = 0;
        int B = 1;
        int current = 0;
        if (n == 0) return 0;
        else if (n <= 2) return 1;
        for (int i = 0; i < n - 1; i++)
        {
            current = A + B;
            A = B;
            B = current;
        }
        return current;
    }
}
Wolfram Demonstrations Project
Root Routes
Historically, the search for the square root of minus one gave rise to the complex numbers. Typically we refer to √-1 as i. Perhaps the next obvious question is: what is √i? Do we need to invent another
number, or can √i be found in the complex plane?
The figure shows the complex number plane. The circle is the unit circle, with -1 and i labeled. As you drag the blue point, the red point shows you its squared value; so, of course, putting the blue point on i puts the red point on -1. Where do you put the blue point to put the red point on i? Can you find a second solution? Don't forget every nonzero number has two
square roots!
Snapshot 1: The blue point at i is the square root of the red point at -1.
Snapshot 2: Here is the second square root of -1.
It is remarkable that only i is needed to allow you to take any root of any complex number and get a complex number. Even more: over the complexes, every polynomial equation has a solution.
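For readers who want to check the puzzle's answer algebraically, writing i in polar form gives the two square roots directly (a standard calculation, not part of the Demonstration itself):

```latex
i = e^{i\pi/2}
\quad\Longrightarrow\quad
\sqrt{i} = \pm\, e^{i\pi/4} = \pm\,\frac{1+i}{\sqrt{2}}
```

Squaring either root confirms it: ((1+i)/√2)^2 = (1 + 2i + i^2)/2 = i.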
[Haskell-cafe] tuple and HList
David Menendez zednenem at psualum.com
Sun Mar 20 23:55:19 EST 2005
Frederik Eaton writes:
> > One way to make tuples into sugar for HLists would be to effectively
> > have a series of declarations like these:
> >
> > type (a,b) = TCons a (TCons b HNil)
> > type (a,b,c) = TCons a (TCons b (TCons c HNil))
> >
> > But then we can't use tuples in instance declarations. That is,
> > there isn't any way to desugar 'instance Functor ((,) a)' without
> > using a type lambda.
> I'm not sure I understand this, but the intent was that you'd use e.g.
> TCons instead of the tuple syntax in instance declarations.
Currently, '(,)' is a type constructor of kind * -> * -> * and '(a,b)'
is sugar for '(,) a b'. That means we can partially apply '(,)' in
instance declarations, for example:
instance Functor ((,) a) where
fmap f (x,y) = (x, f y)
If we get rid of '(,)' and redefine '(a,b)' as sugar for 'TCons a (TCons
b HNil)' (or whatever), then there is no way to declare the above
instance. I don't think that's a deal-killer, but it is a disadvantage.
David Menendez <zednenem at psualum.com> <http://www.eyrie.org/~zednenem/>
Dead Mathematicians' Society Schedules Free Lecture Nov. 19
Posted: November 13, 2013
The Dead Mathematicians' Society at Mt. Hood Community College (MHCC), invites the public to the first in a series of free and engaging presentations on math.
Each year, the college's math department schedules the Infinite Enrichment Series on topics that are not typically included in the math curriculum.
The first presentation of the academic year will be held Nov. 19, 3:15 - 4:15 p.m. in room AC2608 on the Gresham Campus, 26000 S.E. Stark St. Refreshments will be provided. Parking is free on all
campuses, no permit required.
Derek Garton, Ph.D., from Portland State University, will talk about the Goldbach Conjecture. He explained it this way:
• In 1742, Goldbach wrote a letter to Euler in which he claimed "every even integer greater than two can be written as the sum of two primes." Euler responded, "I regard this as a completely
certain theorem, although I cannot prove it." Despite Euler's certainty, the theorem is still unproven today.
• The Weak Goldbach Conjecture is the easier statement: "Every odd integer greater than three can be written as the sum of three primes."
• In May 2013, Harald Helfgott released a proof of this theorem.
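The strong form of the conjecture is easy to probe by brute force. Here is a minimal Java sketch (an illustrative check, not part of the lecture materials) that verifies it for small even numbers:

```java
public class GoldbachCheck {
    // Trial-division primality test, adequate for small n.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    // True if the even number n > 2 can be written as a sum of two primes.
    static boolean goldbach(int n) {
        for (int p = 2; p <= n / 2; p++) {
            if (isPrime(p) && isPrime(n - p)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        for (int n = 4; n <= 10000; n += 2) {
            if (!goldbach(n)) {
                System.out.println("Counterexample: " + n);
                return;
            }
        }
        System.out.println("Verified for all even n up to 10000");
    }
}
```

No counterexample is known; computer searches have verified the conjecture far beyond this range, but a proof remains open.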
Society Formed in 1986
Nick Chura, MHCC math instructor, leads the Dead Mathematicians' Society. Two MHCC math instructors, now retired, were instrumental in forming the Society in about 1986 to make math fun and
interesting: Bill Covell, who teaches part-time at MHCC, and Paul Porch.
Individuals requiring accommodations due to a disability may contact the MHCC Disability Services Office at 503-491-6923 or 503-491-7670 (TDD). Please call at least two weeks prior to the event to
ensure availability.
For more information, media professionals may contact the Office of College Advancement at 503-491-7204 or news@mhcc.edu.
Northridge Algebra 1 Tutor
Find a Northridge Algebra 1 Tutor
...I attended a private tutoring company ever since I was in grade school and, eventually, I started working there as a tutor starting my junior year in high school. I continued working there for
a couple years until I decided to become a full-time student to study psychology. I chose this major because I wanted to better understanding the dynamics of behavior and social interactions.
17 Subjects: including algebra 1, Chinese, SAT math, grammar
I have a great deal of experience dealing with students from different background ethnicity and level of education. I have been a physics and astronomy tutor for over 4 years now. I have also been
teaching physics and astronomy labs at California State University Northridge for over 3 years.
7 Subjects: including algebra 1, physics, algebra 2, trigonometry
...In addition, I am a patient, outgoing, honest, creative, detailed, patient, and dynamic educator. I really enjoy educating and mentoring students of all ages. I have taught and tutored the
following subjects: Elementary Math, Pre-Algebra, Algebra I, Algebra II, Geometry, Trigonometry, Pre-Calcu...
27 Subjects: including algebra 1, reading, English, geometry
Hello All! My name is Paul. I have a lot of experience with those who are attempting to excel to higher standards in whichever subject they desire!
27 Subjects: including algebra 1, reading, writing, geometry
...I graduated from Mt. St. Mary's College with a MS in Education.
6 Subjects: including algebra 1, special needs, autism, ADD/ADHD
Mathematics; Problems, exercises, etc
• Taylor and Francis 2010; US$ 146.95
Following the work of Yorke and Li in 1975, the theory of discrete dynamical systems and difference equations developed rapidly. The applications of difference equations also grew rapidly,
especially with the introduction of graphical-interface software that can plot trajectories, calculate Lyapunov exponents, plot bifurcation diagrams, and find basins... more...
• Cambridge University Press 2004; US$ 171.00
An introduction for graduate students, a guide for users, and a comprehensive resource for experts. more...
• Elsevier Science 2007; US$ 148.00
Difference equations appear as natural descriptions of observed evolution phenomena because most measurements of time evolving variables are discrete. They also appear in the applications of
discretization methods for differential, integral and integro-differential equations. The application of the theory of difference equations is rapidly increasing... more...
• Taylor and Francis 2010; US$ 109.95
Keeping the style, content, and focus that made the first edition a bestseller, Integral Transforms and their Applications, Second Edition stresses the development of analytical skills rather
than the importance of more abstract formulation. The authors provide a working knowledge of the analytical methods required in pure and applied mathematics,... more...
• World Scientific Publishing Company 2005; US$ 274.00
The Hilbert–Huang Transform (HHT) represents a desperate attempt to break the suffocating hold on the field of data analysis by the twin assumptions of linearity and stationarity. Unlike
spectrograms, wavelet analysis, or the Wigner–Ville Distribution, HHT is truly a time-frequency analysis, but it does not require an a priori functional basis and,... more...
• Springer-Verlag New York Inc 2006; US$ 84.95
Integrates both classical and modern treatments of difference equations. This third edition includes proofs, graphs, and applications. It contains: a chapter on Higher Order Scalar Difference
Equations, and also results on local and global stability of one-dimensional maps. It is useful for advanced undergraduate and beginning graduate students. more...
• Springer-Verlag New York Inc 2005; US$ 59.95
In this new text, designed for sophomores studying mathematics and computer science, the authors cover the basics of difference equations and some of their applications in computing and in
population biology. Each chapter leads to techniques that can be applied by hand to small examples or programmed for larger problems. Along the way, the reader will... more...
• Springer 2006; US$ 179.00
This book illustrates the basic ideas of regularity properties of functional equations by simple examples. It then treats most of the modern results about regularity of non-composite functional
equations of several variables in a unified fashion. A long introduction highlights the basic ideas for beginners and several applications are also included. more...
• Springer-Verlag New York Inc 2006; US$ 169.00
This book presents the author's new method of two-stage maximization of a likelihood function, which helps to solve a series of previously unsolved well-posed and ill-posed problems of
pseudosolution computing for systems of linear algebraic equations (or, in statistical terminology, parameters' estimators of functional relationships) and linear integral... more...
• Springer 2006; US$ 99.00
Presents the real inner product spaces of arbitrary (finite or infinite) dimension greater than or equal to 2. This book studies the sphere geometries of Möbius and Lie for these spaces, besides
euclidean and hyperbolic geometry, as well as geometries where Lorentz transformations play the key role. more...
The End of Insight
I worry that insight is becoming impossible, at least at the frontiers of mathematics. Even when we're able to figure out what's true or false, we're less and less able to understand why.
An argument along these lines was recently given by Brian Davies in the "Notices of the American Mathematical Society". He mentions, for example, that the four-color map theorem in topology was
proven in 1976 with the help of computers, which exhaustively checked a huge but finite number of possibilities. No human mathematician could ever verify all the intermediate steps in this brutal
proof, and even if someone claimed to, should we trust them? To this day, no one has come up with a more elegant, insightful proof. So we're left in the unsettling position of knowing that the
four-color theorem is true but still not knowing why.
Similarly important but unsatisfying proofs have appeared in group theory (in the classification of finite simple groups, roughly akin to the periodic table for chemical elements) and in geometry (in
the problem of how to pack spheres so that they fill space most efficiently, a puzzle that goes back to Kepler in 1611 and that arises today in coding theory for telecommunications).
In my own field of complex systems theory, Stephen Wolfram has emphasized that there are simple computer programs, known as cellular automata, whose dynamics can be so inscrutable that there's no way
to predict how they'll behave; the best you can do is simulate them on the computer, sit back, and watch how they unfold. Observation replaces insight. Mathematics becomes a spectator sport.
If this is happening in mathematics, the supposed pinnacle of human reasoning, it seems likely to afflict us in science too, first in physics and later in biology and the social sciences (where we're
not even sure what's true, let alone why).
When the End of Insight comes, the nature of explanation in science will change forever. We'll be stuck in an age of authoritarianism, except it'll no longer be coming from politics or religious
dogma, but from science itself.
Need to be able to set order of steps for matching
• Type:
• Status:
• Priority:
• Resolution: Fixed
• Affects Version/s: 2.2
• Component/s: None
We have run into some difficulty declaring steps that have similar wording such that the matching was incorrect for some of our scenarios. For example:
We have a line from one of scenarios that looks like this:
And the table.testtable with test_1_id of foo has exactly one test_2_id of bar
We want this line to match on the following step:
@Given("the $tableName with $whereColumnName of $whereColumnValue has exactly one $selectColumnName of $selectColumnValue")
But instead the match is always occuring on this step:
@Given("the $tableName with $whereColumnName of $whereColumnValue has $selectColumnName of $selectColumnValue")
where $selectColumnName gets interpreted as "exactly one test_2_id"
Can we have an annotation that indicates the order by which the matching occurs? That way, the first step in our example would always get compared first to our example input line instead of step two.
For example something like this:
@Given("the $tableName with $whereColumnName of $whereColumnValue has exactly one $selectColumnName of $selectColumnValue")
@Given("the $tableName with $whereColumnName of $whereColumnValue has $selectColumnName of $selectColumnValue")
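The ambiguity can be reproduced with plain java.util.regex, independently of JBehave's own pattern parser (an editorial sketch: parameter placeholders are modeled here as (.*) capture groups, which is only an approximation of the real matcher):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepMatchingAmbiguity {
    public static void main(String[] args) {
        String step = "the table.testtable with test_1_id of foo "
                + "has exactly one test_2_id of bar";

        // Greedy groups: the fourth capture swallows "exactly one",
        // so $selectColumnName comes out as "exactly one test_2_id".
        Matcher greedy = Pattern
                .compile("the (.*) with (.*) of (.*) has (.*) of (.*)")
                .matcher(step);
        if (greedy.matches()) {
            System.out.println(greedy.group(4)); // exactly one test_2_id
        }

        // Reluctant groups capture the same thing here, because only one
        // " of " remains after "has " -- so changing greediness alone
        // cannot disambiguate the two step patterns.
        Matcher reluctant = Pattern
                .compile("the (.*?) with (.*?) of (.*?) has (.*?) of (.*?)")
                .matcher(step);
        if (reluctant.matches()) {
            System.out.println(reluctant.group(4)); // exactly one test_2_id
        }
    }
}
```

This is consistent with the finding in the comments below that a non-greedy regex alone did not resolve the issue.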
Mauro Talevi added a comment -

I'm not sure that we'd gain much by adding an @Order annotation. Couldn't you just add a qualifier in the second Given step: e.g. adding "at least one" to match the "exactly one" of the other step, and thus distinguishing the two steps?
Douglas Padian added a comment -
In the end, this is what we did. But this then forces you to change your language to get around this issue. In my opinion, the code should be flexible enough to recognize the difference between two different steps. The @Order (or whatever you want to call that) would do this. There might be a cleaner way to do this, but this was the first thing that came to mind.
Would changing the match to use a non-greedy match not help?
Current implementation is greedy:
this.anyWordBeginningWithThePrefix = "(\\" + prefix + "\\w*)(\\W|\\Z)";
changing it to:
this.anyWordBeginningWithThePrefix = "(\\" + prefix + "\\w*?)(\\W|\\Z)"; // notice the ?
will make the matching stop at the first whitespace instead of the last possible.
Douglas Padian added a comment -
I think that might work – please run your proposed solution against the example that I posted and see if it works properly. If it does, that is a more elegant solution.
Mauro Talevi added a comment -
Added order_matching.scenario in trader example to reproduce simplified behaviour and allow investigation.
The non-greedy regex solution did not seem to work.
Mauro Talevi added a comment -
More investigation is needed, so descoping it from the 2.4 release.
Reclassified from bug to enhancement.
Mauro Talevi added a comment -
Added an optional priority attribute to the step method annotations (@Given, @When, @Then), to allow ordering or prioritisation of methods whose regex patterns both match the same text step. To prioritise the less-greedy pattern, simply give it a higher priority (the default is zero):
@Given(value = "the $tableName with $whereColumnName of $whereColumnValue has exactly one $selectColumnName of $selectColumnValue", priority=1)
will take precedence over
@Given("the $tableName with $whereColumnName of $whereColumnValue has $selectColumnName of $selectColumnValue")
Prove n is divisible by 2^m
March 23rd 2008, 05:51 PM
Prove that if n is a positive integer such that the integer which is made up from the last m digits of n in its decimal representation is divisible by 2^m then n is divisible by 2^m
March 23rd 2008, 11:13 PM
Decompose $n$ into a number $k_1$ made from its last $m$ digits and the number
made by its remaining digits:
$n=k_2 10^{m} +k_1$
Now we are told that $2^m|k_1$, so to complete this problem it is sufficient to show that $2^m|k_2 10^m$.
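To finish the argument, one can make the divisibility of the first term explicit by factoring the power of ten:

```latex
n = k_2 \cdot 10^m + k_1
  = k_2 \cdot 2^m 5^m + k_1
  = 2^m \left( k_2 5^m \right) + k_1 .
```

Since $2^m$ divides the first term for any $k_2$, and $2^m \mid k_1$ by hypothesis, it follows that $2^m \mid n$.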
March 24th 2008, 03:54 AM
I know what the problem is asking and I don't dispute the result, but I don't like how it's phrased. For example, 4|100, but "00" is divisible by anything. It probably should be mentioned as a special case for the problem to be stated correctly. (Yes, I'm in a picky mood this morning.)
Standard Deviation: 6 Steps to Calculation
The formula for the standard deviation depends on whether you are analyzing population data, in which case it is called σ, or estimating the population standard deviation from sample data, in which case it is called s.
1. Calculate the mean of the data set (x-bar or μ)
2. Subtract the mean from each value in the data set
3. Square the differences found in step 2
4. Add up the squared differences found in step 3
5. Divide the total from step 4 by either N (for population data) or (n – 1) (for sample data). (Note: at this point you have the variance of the data.)
6. Take the square root of the result from step 5 to get the standard deviation
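The six steps translate directly into code; here is a minimal sketch (the function and variable names are my own):

```python
import math

def std_dev(data, sample=True):
    """Follow the six steps: mean, deviations, squares, sum, divide, root."""
    n = len(data)
    mean = sum(data) / n                               # step 1
    squared_diffs = [(x - mean) ** 2 for x in data]    # steps 2-3
    total = sum(squared_diffs)                         # step 4
    variance = total / ((n - 1) if sample else n)      # step 5
    return math.sqrt(variance)                         # step 6

depths = [2, 3, 4, 5, 6]               # example data with mean 4
print(std_dev(depths))                 # sample standard deviation, about 1.581
print(std_dev(depths, sample=False))   # population standard deviation, about 1.414
```

The only difference between σ and s is the divisor in step 5, which the `sample` flag switches.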
Step 1: The average depth of this river, x-bar, is found to be 4’.
Step 5: The sample variance can now be calculated:
Step 6: To find the sample standard deviation, calculate the square root of the variance:
Domain with natural logs
June 24th 2012, 02:40 PM
I'm working with $f(x)=\ln(x)-\ln(1-4x)$ and I need to figure out the domain of this function. I know that the argument of a natural log cannot be zero or negative, but is there a trick to simplifying this so I can see the domain easily? Thanks.
June 24th 2012, 02:47 PM
Re: Domain with natural logs
You said yourself that we cannot take the log of a nonpositive number. That means we must have $x > 0$ and $1 - 4x > 0.$ Can you solve the second inequality for $x?$
June 24th 2012, 02:51 PM
Re: Domain with natural logs
note that both $x > 0$ and $1-4x > 0$
since $1-4x > 0 \implies x < \frac{1}{4}$ , the intersection of these two restrictions is $0 < x < \frac{1}{4}$ ... the domain.
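The domain restriction can also be checked numerically: Python's `math.log` raises `ValueError` exactly when its argument is not positive, so evaluating f outside 0 < x < 1/4 fails. (A small sketch; the function name is mine.)

```python
import math

def f(x):
    # ln(x) - ln(1 - 4x); math.log raises ValueError for arguments <= 0
    return math.log(x) - math.log(1 - 4 * x)

print(f(0.1))  # inside the domain 0 < x < 1/4, this evaluates fine

for bad in (-1.0, 0.0, 0.25, 0.5):
    try:
        f(bad)
    except ValueError:
        print(bad, "is outside the domain")
```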
9.1 miles, Kent, WA 98042
Serving Southeast King County
Physics and mathematics are subjects which can challenge even the brightest of students. You may need help understanding your current class, reviewing what you have forgotten from a prior class, or
preparing for an upcoming AP test. I will tailor my instruction...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
Sending multidimensional array from jQuery to php
I am passing an array from jQuery to php.
The array is generated from a table with this code:
var stitchChartArray = [];
var row = 0;
// turn stitch chart into array for php
$('#stitchChart').find('tr').each(function (index, obj) {
    // first row is table head - "Block #"
    if (index != 0) {
        var TDs = $(this).children();
        stitchChartArray[row] = [];
        $.each(TDs, function (i, o) {
            var cellData = [$(this).css('background-color'), $(this).find("img").attr('src')];
            stitchChartArray[row].push(cellData);
        });
        row++;
    }
});
In console it looks like this:
[[["rgb(75, 90, 60)", "symbols/177.png"], ["rgb(75, 75, 60)", "symbols/184.png"], ["rgb(75, 90, 60)", "symbols/177.png"], 7 more...], [["rgb(105, 105, 105)", "symbols/163.png"], ["rgb(75, 75, 60)", "symbols/184.png"], ["rgb(75, 90, 60)", "symbols/177.png"], 7 more...], [["rgb(105, 105, 105)", "symbols/163.png"], ["rgb(75, 90, 60)", "symbols/177.png"], ["rgb(75, 75, 60)", "symbols/184.png"], 7 more...], [["rgb(75, 90, 60)", "symbols/177.png"], ["rgb(75, 90, 60)", "symbols/177.png"], ["rgb(98, 119, 57)", "symbols/210.png"], 7 more...], [["rgb(105, 105, 105)", "symbols/163.png"], ["rgb(105, 105, 105)", "symbols/163.png"], ["rgb(150, 150, 195)", "symbols/72.png"], 7 more...], [["rgb(75, 165, 105)", "symbols/187.png"], ["rgb(134, 158, 134)", "symbols/64.png"], ["rgb(165, 180, 180)", "symbols/171.png"], 7 more...], [["rgb(60, 150, 75)", "symbols/189.png"], ["rgb(120, 120, 90)", "symbols/225.png"], ["rgb(143, 163, 89)", "symbols/209.png"], 7 more...]]
It represents each row of a table->each cell of row->[0]rgb value of bg of cell [1]icon in cell.
This jQuery code returns the correct element(and rgb value) from the array:
alert(stitchChartArray[1][1][0]); //row 1,cell 1, first value(rgb)
But when it gets sent to the php script with this:
$.post('makeChartPackage.php', {'stitchChart[]': stitchChartArray }, function(data){
    // ...
});
The php throws an error:
Cannot use string offset as an array in /Users/tnt/Sites/cross_stitch/makeChartPackage.php on line 33
$stitchChart = $_POST['stitchChart'];
echo $stitchChart[1][1][0]; //line 33
I am assuming I am either constructing the array incorrectly or passing it to the php script incorrectly.
EDIT: I did this to return the array to jQuery:
$stitchChart = $_POST['stitchChart'];
print_r($stitchChart);
And here was the result: Array ( [0] => rgb(75, 90, 60),symbols/177.png,rgb(75, 75, 60),symbols/184.png,rgb(75, 90, 60),symbols/177.png,rgb(98, 119, 57),symbols/210.png,rgb(180, 195, 105),symbols/
388.png,rgb(165, 165, 120),symbols/235.png,rgb(75, 75, 60),symbols/184.png,rgb(90, 90, 45),symbols/195.png,rgb(120, 120, 75),symbols/156.png,rgb(105, 105, 105),symbols/163.png [1] => rgb(105, 105,
105),symbols/163.png,rgb(75, 75, 60),symbols/184.png,rgb(75, 90, 60),symbols/177.png,rgb(75, 90, 60),symbols/177.png,rgb(165, 165, 120),symbols/235.png,rgb(120, 120, 75),symbols/156.png,rgb(75, 90,
60),symbols/177.png,rgb(75, 90, 60),symbols/177.png,rgb(105, 105, 105),symbols/163.png,rgb(120, 120, 90),symbols/225.png [2] => rgb(105, 105, 105),symbols/163.png,rgb(75, 90, 60),symbols/177.png,rgb
(75, 75, 60),symbols/184.png,rgb(75, 90, 60),symbols/177.png,rgb(98, 119, 57),symbols/210.png,rgb(75, 90, 60),symbols/177.png,rgb(75, 75, 60),symbols/184.png,rgb(105, 105, 105),symbols/163.png,rgb
(120, 120, 90),symbols/225.png,rgb(105, 105, 105),symbols/163.png
It appears the array is not multidimensional?
1 Answer
$_POST['stitchChart'] in the context you have addressed it there is (effectively) a JSON representation of a multidimensional array, stored as a string. When you treat a string as a
multidimensional indexed array in PHP, you will get that error. The first [x] is treated as a "string offset" - i.e. the character at position x - but the next and any subsequent [x]
addresses can only be treated as arrays (you cannot get a substring of a single character) and will emit the error you have received.
To access your data as an array in PHP, you need to use json_decode():
$stitchChart = json_decode($_POST['stitchChart'],TRUE);
echo $stitchChart[1][1][0];
Because the jQuery data argument seemingly can't deal with multidimensional arrays, you should use Douglas Crockford's JSON-js library and pass the result into data as a string. NB: use json2.js.
Here is how you could do this:
stitchChartArray = JSON.stringify(stitchChartArray);
$.post('makeChartPackage.php', {'stitchChart': stitchChartArray }, function(data){
If you use this code, my original PHP suggestion should work as expected.
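The reason the JSON.stringify / json_decode pairing works is that JSON preserves nesting, while form-encoded POST data flattens it. The round trip can be illustrated in any language with a JSON library; here is a sketch in Python (the data is a trimmed version of the example above):

```python
import json

stitch_chart = [[["rgb(75, 90, 60)", "symbols/177.png"],
                 ["rgb(75, 75, 60)", "symbols/184.png"]],
                [["rgb(105, 105, 105)", "symbols/163.png"],
                 ["rgb(75, 90, 60)", "symbols/177.png"]]]

wire = json.dumps(stitch_chart)   # what JSON.stringify produces client-side
restored = json.loads(wire)       # what json_decode($_POST[...], TRUE) produces server-side
assert restored == stitch_chart   # full nesting survives the round trip
print(restored[1][1][0])          # -> rgb(75, 90, 60)
```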
I thought that was going to be it! But now I am getting an error: PHP Warning: json_decode() expects parameter 1 to be string, array given in /Users/tnt/Sites/cross_stitch/
makeChartPackage.php on line 34 – maddogandnoriko Jan 1 '12 at 23:42
no matter what I do the array seems to be getting flattened down to just one dimension: array[0]"a string" [1]"a string"etc. when it should be: array[0][0]"rgb color" [1]"symbol path"
[1][0]"rgb color"[1]"symbol path" – maddogandnoriko Jan 2 '12 at 14:35
@maddogandnoriko See edit above... – DaveRandom Jan 2 '12 at 15:16
Tried that and almost the same error: json_decode() expects parameter 1 to be string, array given in /Users/tnt/Sites/cross_stitch/makeChartPackage.php on line 29. Un jsoned the array
looks a bit different: Array ( [0] => [[[\"rgb(75, 90, 60)\",\"symbols/177.png\"],[\"rgb(75, 75, 60)\",\"symbols/184.png\"],[\"rgb(75, 90, 60)\",\"symbols/177.png\"],[\"rgb(98, 119,
57)\",\"symbols/210.png\"],[\"rgb(180, 195, 105)\",\"symbols/388.png\"],[\"rgb(165, 165, 120)\",\"symbols/235.png\"],[\"rgb(75, 75, 60)\",\"symbols/184.png\"],[\"rgb(90, 90, 45)\",\
"symbols/195.png\"],[\"rgb(120, 1... – maddogandnoriko Jan 2 '12 at 19:23
@maddogandnoriko I was under this impression also, but the results you get seem to indicate to the contrary. You could try passing them as objects, but I sort of doubt this would fix it... But really, I would say that JSON is the way this sort of thing should be done anyway, especially when working in Javascript, the language from which JSON derives its syntax... – DaveRandom Jan 2 '12 at 19:53
Final velocity given acceleration and initial velocity
1. The problem statement, all variables and given/known data
A dynamite blast at a quarry launches a chunk of rock straight upward, and 2.1 s later it is rising at a speed of 19 m/s. Assuming air resistance has no effect on the rock, calculate its speed at (a)
at launch and (b) 5.2 s after the launch.
2. Relevant equations
v = v0 + at (I'm assuming this is the only relevant one, although I'll post two others in case they are needed)
x = x0 + v0t + a/2 t^2
v^2 = v0*2 + 2a(x-x0)
3. The attempt at a solution
I solved part A by plugging into the equation v = v0 + at
19 m/s = v0 + (-9.8 m/s^2)(2.1 s)
And I found that the initial velocity equals 40 m/s.
So, to solve part b, I should just have to plug the initial in and find the final. I tried that:
v = 40 m/s + (-9.8 m/s^2)(5.2 s)
v = -11 m/s
However, when I entered that solution in for the homework, I was told it was wrong. I'm not really sure how to go about doing the problem if that's incorrect. I thought maybe I could have an error in
rounding with significant figures.
Thanks for the help!
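A quick numeric check of the arithmetic above (a sketch; note that part (b) asks for the rock's speed, which is the magnitude of the velocity, so the expected answer may be the positive value 11 m/s rather than -11 m/s, which could be why the homework system rejected the negative sign):

```python
g = 9.8  # m/s^2, acceleration due to gravity (downward)

# Part (a): v = v0 + a*t  =>  v0 = v - a*t, with a = -g
v0 = 19 - (-g) * 2.1
print(round(v0, 1))       # 39.6 m/s, about 40 m/s at launch

# Part (b): velocity 5.2 s after launch
v = v0 - g * 5.2
print(round(v, 1))        # -11.4 m/s (negative: the rock is moving downward)

speed = abs(v)            # "speed" is the magnitude of velocity
print(round(speed))       # 11 m/s
```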
This game is a logical deduction puzzle. The colors make it a little bit more fun.
Game Rules:
Determine the correct sequence of colors. The white star tells you how many colors you have correct, and the black star tells you how many elements (color and position) are correct.
Math Skills:
Logical Deduction – The trick to solving a complex math problem is to turn it into smaller, simpler problems you already know how to solve. Just apply that principle here.
Fun Tips:
Develop an algorithm for solving the puzzle. For example, first determine the correct colors. After that, just determine the positions. Logical deduction is often used for solving complex math
problems. This game will improve your ability to draw logical conclusions, and therefore solve math problems.
Homework Help
Posted by Mya on Wednesday, April 14, 2010 at 11:46pm.
how do you solve this problem?? 78mi on 3gal (find the unit rate)
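A unit rate is found by dividing so the second quantity becomes 1; a quick sketch for these numbers:

```python
miles, gallons = 78, 3
unit_rate = miles / gallons
print(unit_rate, "miles per gallon")   # -> 26.0 miles per gallon
```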
Guide Entry 07.03.05
The Space Cadet's Laboratory: Using Electromagnetic Energy to Study Astronomy, by Jennifer B. Esty
Guide Entry to 07.03.05:
The electromagnetic spectrum is a basic science topic that is covered in many different ways in many different classes in school. This unit will look at how the electromagnetic spectrum is used to
study astronomy. This curriculum unit is written to be taught in a high school physics class; however, most of the ideas could be adapted for use in a middle school or a general science class. Many
of the students for whom this unit is intended struggle with basic algebra and read, write and think at about an eighth- or ninth-grade level. Most of these students have had very little background
in science of any kind, so there is a fair amount of basic and introductory information covered in this unit as well as some of the more advanced topics in the study of astronomy and energy.
This unit includes several hands-on activities intended to make the study of the electromagnetic spectrum more interesting. These activities include building and using a simple spectrophotometer, and
a new way to think about electron energy levels. All of the activities are suitable for high school level students and are probably adaptable for younger students as well as older ones.
(Recommended for Physics, General Science, and Astronomy, grades 9-12.)
normal line?
October 11th 2006, 09:41 PM #1
alright heres the question:
the point (1,2) lies on the curve described by the equation x^2 - xy + y^2 = 3.
find the point other than (1,2) where the normal line to the curve at (1,2) crosses the curve.
ok i really don't know what they're asking here... i mean i know how to find the slope of the normal line (the negative reciprocal of dy/dx for x^2 - xy + y^2 = 3),
but what do they mean by "find the point other than (1,2) where the normal line to the curve at (1,2) crosses the curve"?
how do i find that?
do they mean find an equation of a line... meaning find the normal line from the given point and equation?
your equation describes an ellipse with its centre in the origin.
At (1,2) the tangent to the ellipse has the slope zero, thus the normal line is a parallel to the y-axis. The other point which you were looking for has the same x-value as the given point. Plug
in x = 1 into your equation and solve for y. You'll get y = 2 (that's the given point) or y = -1.
So the unknown point is (1, -1).
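The solution can be verified numerically (a small sketch; the function names are mine):

```python
def on_curve(x, y, tol=1e-12):
    # does (x, y) satisfy x^2 - xy + y^2 = 3?
    return abs(x * x - x * y + y * y - 3) < tol

# Implicit differentiation of x^2 - xy + y^2 = 3 gives
#   2x - y - x*y' + 2y*y' = 0  =>  dy/dx = (y - 2x) / (2y - x)
def slope(x, y):
    return (y - 2 * x) / (2 * y - x)

assert on_curve(1, 2)
assert slope(1, 2) == 0      # horizontal tangent, so the normal is the vertical line x = 1

# Intersect x = 1 with the curve: 1 - y + y^2 = 3, i.e. y^2 - y - 2 = 0,
# so y = 2 (the given point) or y = -1.
assert on_curve(1, -1)
print("other intersection: (1, -1)")
```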
still a little lost?
Matrices Homework -- The Inverse of the Generic 2x2 Matrix
Today you are going to find the inverse of the generic 2×2 matrix. Once you have done that, you will have a formula that can be used to quickly find the inverse of any 2×2 matrix.
The generic 2×2 matrix, of course, looks like this:
$[A] = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
Since its inverse is unknown, we will designate the inverse like this:
$[A^{-1}] = \begin{bmatrix} w & x \\ y & z \end{bmatrix}$
Our goal is to find a formula for w in terms of our original variables a, b, c, and d. That formula must not have any w, x, y, or z in it, since those are unknowns! Just the original four variables in our original matrix [A]. Then we will find similar formulae for x, y, and z, and we will be done.
Our approach will be the same approach we have been using to find an inverse matrix. I will walk you through the steps—after each step, you may want to check to make sure you’ve gotten it right
before proceeding to the next.
Write the matrix equation that defines A-1 A-1 as an inverse of AA.
Now, do the multiplication, so you are setting two matrices equal to each other.
Now, we have two 2×2 matrices set equal to each other. That means every cell must be identical, so we get four different equations. Write down the four equations.
Solve. Remember that your goal is to find four equations: one for w, one for x, one for y, and one for z, where each equation has only the four original constants a, b, c, and d!
Now that you have solved for all four variables, write the inverse matrix $A^{-1}$.
$A^{-1} =$
As the final step, to put this in the form in which it is most commonly seen, note that all four terms have an $ad - bc$ in the denominator. (*Do you have a $bc - ad$ instead? Multiply the top and bottom by –1!) This very important number is called the determinant and we will see a lot more of it after the next test. In the meantime, note that we can write our answer much more simply if we pull out the common factor of $\frac{1}{ad - bc}$. (This is similar to "pulling out" a common term from a polynomial. Remember how we multiply a matrix by a constant? This is the same thing in reverse.) So rewrite the answer with that term pulled out.
$A^{-1} =$
You’re done! You have found the generic formula for the inverse of any 2x2 matrix. Once you get the hang of it, you can use this formula to find the inverse of any 2x2 matrix very quickly. Let’s try
a few!
The matrix $\begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$
• a. Find the inverse—not the long way, but just by plugging into the formula you found above.
• b. Test the inverse to make sure it works.
The matrix $\begin{bmatrix} 3 & 2 \\ 9 & 5 \end{bmatrix}$
• a. Find the inverse—not the long way, but just by plugging into the formula you found above.
• b. Test the inverse to make sure it works.
Can you write a 2×2 matrix that has no inverse?
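The formula you derived can be sanity-checked in code (a minimal sketch using plain lists; `inverse_2x2` and `mat_mul` are names I made up):

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the 1/(ad - bc) formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix has no inverse (determinant is zero)")
    f = 1 / det
    return [[f * d, -f * b], [-f * c, f * a]]

def mat_mul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [4, 5]]
print(mat_mul(A, inverse_2x2(2, 3, 4, 5)))   # the identity matrix

# A matrix with ad - bc = 0, e.g. [[1, 2], [2, 4]], has no inverse:
# inverse_2x2(1, 2, 2, 4) raises ValueError.
```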
A Dense Point-to-Point Alignment Method for Realistic 3D Face Morphing and Animation
International Journal of Computer Games Technology
Volume 2009 (2009), Article ID 609350, 9 pages
Research Article
College of Information Science and Technology, Beijing Normal University, Beijing 100875, China
Received 29 January 2009; Accepted 13 March 2009
Academic Editor: Suiping Zhou
Copyright © 2009 Yongli Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We present a new point matching method for the dense point-to-point alignment of scanned 3D faces. Instead of using the rigid spatial transformation of the traditional iterative closest point (ICP) algorithm, we adopt the thin plate spline (TPS) transformation to model the deformation between different 3D faces. Because TPS is a non-rigid transformation with good smoothness properties, it is suitable for formulating the complex variety of human facial morphology. A closest point searching algorithm is proposed to keep the mapping one-to-one, and the point matching is accelerated by a KD-tree method for efficiency. Having constructed the dense point-to-point correspondence of 3D faces, we create 3D face morphing and animation by key-frame interpolation and obtain realistic results. Compared with the ICP algorithm and the optical flow method, the presented point matching method achieves good matching accuracy and stability. The experimental results show that our method is efficient for the registration of dense point objects.
1. Introduction
Constructing alignments of 3D objects is a crucial element of data representation in computer vision and graphics. Generally, a dense alignment is a point-to-point mapping from one surface onto another, where each point is matched to its correspondent according to its inherent properties; for example, the points at the nose tip of different 3D faces are correspondent points according to the features of the human face. Applications of dense point correspondence have been increasing in recent years. The most straightforward application of dense alignment is to compute object morphing and animation. More importantly, once the point correspondence of a class of objects has been established, it becomes possible to construct a representation for those objects. The most typical and simple model is the linear combination model described in [1], where a 3D face morphable model was constructed from aligned 3D faces; given a facial image, the 3D face can be reconstructed by a model matching procedure. Other applications, involving object recognition based on 2D/3D images, shape retrieval, and 3D surface reconstruction in computer vision, all rely on dense surface correspondence.
For dense 3D objects, the complexity of the model structure and the sheer volume of data make it challenging to obtain a good correspondence result, especially for high-resolution scanned 3D faces. In fact, the correspondence of different 3D faces is not a well-defined problem. When two faces are compared, only some distinct feature points, such as the tip of the nose, the corners of the mouth, and the centers of the eyes, have clearly correspondent points, while it is difficult to define the correspondence for points in smooth regions such as the cheeks and the forehead. Even matching the distinct feature points can be difficult, because it involves many of the basic problems of computer vision and feature detection. To conquer the correspondence problem of dense 3D faces, we present a closest point matching method based on the thin plate spline (TPS) transformation. In this method, the source 3D face is first transformed onto the destination 3D face by a TPS transformation, which is constructed by interpolating feature points hand-placed on the source and target 3D faces. Then, using a revised closest point matching algorithm, the point-to-point alignment between 3D faces is obtained. We create 3D face morphing and animation by interpolating between the aligned 3D faces. The realistic deformation results, together with experiments comparing our method with related methods, show that our correspondence algorithm is an appropriate approach.
The remainder of the paper is structured as follows. In Section 2 we review some related work. In Section 3 the TPS transformation of 3D faces is described in detail. Then the point-to-point
alignment is established in Section 4. In Section 5, 3D face morphing and animation are implemented, and experimental results are given. Finally this work is concluded.
2. Related Work
In the past decades, many methods and algorithms have been presented to solve surface alignment and dense point correspondence for different applications. These efforts center on two fundamental problems of point matching: the spatial transformation and the search for feature correspondences. The former problem is to find a suitable transformation for the objects being aligned. Such spatial transformations can be classified into rigid and non-rigid transformations. Rigid transformations are generally used to align an object with itself, as with scenes from different viewpoints or the overlapping parts of one object. Non-rigid transformations, including affine transformations, spline functions, and radial basis functions, are now the dominant approach in cases involving non-rigid deformation. The latter issue of point alignment generally concerns how to determine the right correspondences from the inherent features of the objects, which commonly take the form of geometric properties, like points, lines, curves, and surfaces, or abstract measurements, such as moments, entropy, and mutual information. Several surveys [2–6] have given comprehensive reviews of this subject. The following are some typical works related to our method.
One of the most popular point matching methods is the iterative closest point (ICP) algorithm proposed by Besl and McKay [7]. It iteratively searches for closest points in two surface patches and optimizes a rigid transformation to minimize the average distance between these closest points. The original ICP algorithm demands adequate prealignment and does not usually guarantee a one-to-one correspondence; as a result, various improved ICP methods have been proposed. Rusinkiewicz and Levoy provide a good survey of these ICP variants [8]. Although these improvements have enhanced the convergence of ICP and achieved high registration accuracy, its reliance on a rigid transformation constrains its application: ICP is not suitable in nonrigid deformation cases such as 3D faces.
Blanz and Vetter established dense correspondence between 3D facial scans [1, 9], taking advantage of the fact that the radial coordinate of a Cyberware scan can be expressed as a height map image whose intensity represents the radius in a cylindrical coordinate system. They used an optical flow technique to establish correspondence between the texture images and the height map images, and the correspondence was refined by a bootstrapping method once a large number of prototypic scans had been obtained. A 3D face representation named the morphable model was constructed from the set of aligned 3D faces. More recently, they proposed a new dense 3D correspondence method [10] based on their 3D face database; in this method, a facial feature learning strategy and an automatic property extraction algorithm were used to optimize the alignment. Although their alignment produces convincing results, it demands large quantities of 3D facial scans, and some 3D information is lost when the alignment is derived from optical flow computed on 2D images.
Similarly, the notable TPS-RPM method of Chui and Rangarajan [11] incorporates TPS into an ICP-like framework for point matching. A binary correspondence matrix is used to record the matching relation of all points and to eliminate outliers, and point correspondences are computed iteratively with a soft-assign and deterministic annealing optimization. Although their experiments show good results on some sparse 2D/3D point sets, the method can easily get trapped in bad local minima if the objects are not approximately aligned initially [12]. Moreover, this method is not suitable for aligning 3D faces with large quantities of dense points, because of the limitation on the dimension of the correspondence matrix and the impracticality of applying TPS to the whole dense point set.
The interpolation idea in [13] is very close to our method. To synthesize facial expressions from photographs, a generic 3D facial model was fitted to individual faces using radial basis functions with 13 feature points [13]. However, that generic 3D facial model, created with Alias|Wavefront tools, is relatively sparse compared with dense 3D faces. In addition, the fitting procedure and its refinement differ from the closest point matching algorithm used here.
There is other research associated with surface or dense point correspondence, but the applications vary. Medical image registration may be the dominant domain; other applications include 3D object reconstruction, representation, and recognition. To get good correspondence results, many approaches require large amounts of training data. We instead focus on dense point correspondence between 3D faces and its application to 3D face morphing and animation, which requires only two objects.
3. 3D Face Deformation Based on Thin Plate Spline
To get more accurate point matching results, the prototypic objects are generally transformed toward a reference before alignment, using a rigid transformation, an affine transformation, or a nonaffine deformation. As 3D faces have complex shape features, it is difficult to find a rigid or affine transformation with good deformation results, so a nonaffine transformation is the proper mapping method. For scanned 3D faces with high-dimensional dense point sets, the data is too large to apply a global transformation to all points; the alternative is to use a subsampled sparse point set. Here we use an interactive tool to pick out 25 landmarks on the 3D faces to be aligned. Figure 1 shows the landmarks on the 3D faces. These landmarks are the main feature points corresponding to the morphological properties of the human face, and they serve as control points to constrain the TPS deformation between 3D faces in our method.
A frequent task in spline theory is to generate a smoothly interpolated mapping between two sets of landmark points, and we adopt TPS to model the deformation of 3D faces. TPS was introduced by Harder and Desmarais [14], and Bookstein [15] first used TPS for medical image registration. TPS is a class of nonrigid spline mapping functions with desirable properties: it is globally smooth and easily computable, and, most importantly, a TPS transformation can be separated into affine and nonaffine components. TPS has therefore been widely used in 2D image and 3D data registration for a variety of applications. The following describes the implementation of the TPS transformation for 3D faces in detail.
A TPS transformation can be regarded as a mapping from R^3 to R^3, so we denote TPS by f : R^3 → R^3. For convenience of exposition, let S and D denote the source and destination 3D faces to be aligned. S and D can be viewed as two point sets with the expressions
S = {p_i | i = 1, …, N_s},  D = {q_j | j = 1, …, N_d},  (1)
where N_s and N_d are the numbers of points of S and D (in general N_s ≠ N_d). The landmark point sets of S and D are denoted
P = {s_k | k = 1, …, n},  Q = {t_k | k = 1, …, n},  (2)
where n is the number of landmarks (here n = 25). These landmarks are the control points for the TPS transformation; that is, f satisfies the interpolation conditions at the landmark points
f(s_k) = t_k,  k = 1, …, n.  (3)
At the same time, f is restricted by a smoothness constraint, expressed as the minimization of the following bending energy function, the integral of the sum of squares of all second-order partial derivatives of f:
I(f) = ∫∫∫ Σ_{|α|=2} (∂²f / ∂x^α)² dx dy dz.  (4)
It has been proved that a TPS can be decomposed into an affine component and a nonaffine component [15]. This fact is generally represented by the formula
f(p) = p̃ A + U(p) W,  (5)
where p is a point on the source 3D face with homogeneous (row-vector) coordinates p̃ = (1, p_x, p_y, p_z), A is a 4 × 3 affine transformation matrix, U(p) = (U(|p − s_1|), …, U(|p − s_n|)) is the 1 × n vector of TPS kernel values with U the TPS radial kernel, and W is an n × 3 warping coefficient matrix representing the nonaffine deformation.
To obtain the TPS transformation, the matrices A and W must be determined. There are two solutions to this problem: the interpolating and the noninterpolating methods. If the TPS need not interpolate, that is, if formula (3) is not strictly satisfied, the following energy function can be minimized to find the optimal answer:
E(f) = Σ_{k=1}^{n} |t_k − f(s_k)|² + λ I(f),  (6)
where λ is the weight controlling the smoothness component; for a fixed λ there is a unique minimizer of the energy function.
In the interpolating case, formula (3) is satisfied exactly. Substituting (5) into (3), and confining W to the nonaffine part, that is, imposing P_h^T W = 0, leads to a direct solution for A and W given by the block matrix relation
[ K    P_h ] [ W ]   [ Q_m ]
[ P_h^T  0 ] [ A ] = [  0  ],  (7)
where P_h is the n × 4 matrix whose rows are the homogeneous coordinates of the source landmarks, Q_m is the n × 3 matrix whose rows are the coordinates of the target landmarks, and K is the n × n symmetric matrix representing the spatial relations between the landmark points of the source 3D face, with elements K_{ij} = U(|s_i − s_j|). In our work, the landmarks placed on the source and target 3D faces are regarded as corresponding points with the same facial features; hence condition (3) is satisfied, and the interpolating method is adopted to solve for the TPS transformation. From (7) the matrices A and W are determined, and the source 3D face is deformed by the TPS transformation; we denote the deformed version of S by S′. Figure 2 shows the TPS deformation of the source 3D face, comparing the deformed face with the source and destination 3D faces. The deformed source face is visibly closer to the destination face than the original source face, which leads to a more accurate point alignment. In the next section, the point-to-point correspondence between S′ and D is established by a closest point matching process.
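The interpolating solution above, assembling the block system (7) and solving for the warp and affine coefficients, can be sketched in a few lines of NumPy. The kernel choice U(r) = r and the row-vector layout of the coefficient matrices are illustrative assumptions (Bookstein's 2D formulation uses U(r) = r² log r instead):

```python
import numpy as np

def tps_kernel(r):
    # U(r) = r is a common kernel choice for 3D thin plate splines;
    # this is an assumption, not taken from the paper.
    return r

def fit_tps(src_landmarks, dst_landmarks):
    """Solve the interpolating TPS system (7) for warp W and affine A."""
    n = src_landmarks.shape[0]
    # Pairwise kernel matrix K (n x n) between source landmarks.
    diff = src_landmarks[:, None, :] - src_landmarks[None, :, :]
    K = tps_kernel(np.linalg.norm(diff, axis=2))
    # Homogeneous landmark coordinates P_h (n x 4).
    Ph = np.hstack([np.ones((n, 1)), src_landmarks])
    # Assemble the block system [[K, P_h], [P_h^T, 0]] [[W], [A]] = [[Q], [0]].
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = K
    L[:n, n:] = Ph
    L[n:, :n] = Ph.T
    rhs = np.zeros((n + 4, 3))
    rhs[:n] = dst_landmarks
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]          # W (n x 3), A (4 x 3)

def apply_tps(points, src_landmarks, W, A):
    """Warp arbitrary points with the fitted TPS transformation (5)."""
    d = points[:, None, :] - src_landmarks[None, :, :]
    U = tps_kernel(np.linalg.norm(d, axis=2))       # (m x n) kernel values
    Ph = np.hstack([np.ones((len(points), 1)), points])
    return U @ W + Ph @ A
```

Fitting on the 25 landmarks and then calling `apply_tps` on every vertex of the source face reproduces the deformation step of this section; by construction the fitted map reproduces the target landmarks exactly, which is condition (3).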
4. Dense Point Alignment by Closest Point Matching
Although the rigid transformation of the ICP algorithm is not used in our method, we adopt a closest point matching scheme similar to ICP: for each point on the deformed source 3D face S′, the closest point is found on the destination 3D face D. Before the closest point matching, the closest point criterion must be defined. The ICP algorithm generally defines the closest point by the distance between two points, or between a point and a point set, where the distance is the Euclidean distance. Here we define the closest point in the sense of the distance from a point to a point set: for a point p on S′, the corresponding point q* on D is determined by the minimum requirement
q* = argmin_{q ∈ D} dist(p, q),  (8)
where dist is a function defined to compute the distance between two points. As the deformation between 3D faces is a type of nonrigid transformation, the Euclidean distance used to determine closest points under rigid transformations is not appropriate in the nonrigid situation. Considering the modality of the human face, curvature is an important property related to the local surface features. We therefore define the distance as a weighted combination of the Euclidean distance and the difference of the mean curvatures of the points:
dist(p, q) = α ‖p − q‖ + (1 − α) |H(p) − H(q)|,  (9)
where α, with 0 ≤ α ≤ 1, is the weight balancing the Euclidean distance and the curvature difference, and H is the function computing the mean curvature of a point on a 3D face. In the following experiments α is set to a fixed value.
Having determined the closest point matching criterion, for each point on S′ a closest point search must be executed on the target 3D face D. Because of the huge amount of data in the source and target 3D faces, an exhaustive closest point search is very time consuming, with computational cost O(N_s N_d). To achieve high point matching efficiency, we adopt the k-dimensional binary search tree (KD-tree) technique in the point matching method. The KD-tree was introduced by Bentley [16] and has been widely utilized in nearest neighbor searching [17]. It is a binary search tree in which each node represents a partition of the k-dimensional space: the root node represents the entire space, and the leaf nodes represent subspaces containing mutually exclusive small subsets of the relevant points. The space partitioning is carried out in a recursive binary fashion, and the average cost of a KD-tree search is O(log N_d) per query.
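Because the combined distance (9) is not a plain coordinate metric, a practical scheme is to fetch the k nearest Euclidean candidates and re-rank them with the combined distance. The sketch below uses a brute-force `argpartition` step for the candidate search to stay self-contained; a KD-tree (e.g. `scipy.spatial.cKDTree`) would supply the same k Euclidean candidates in logarithmic time per query. The weight `alpha = 0.5` and the candidate count `k` are illustrative defaults, not values from the paper:

```python
import numpy as np

def closest_points(src_pts, src_curv, dst_pts, dst_curv, alpha=0.5, k=10):
    """For each source point, pick the destination point minimising a
    weighted combination of Euclidean distance and mean-curvature difference."""
    k = min(k, len(dst_pts))
    # Full Euclidean distance matrix (a KD-tree would avoid forming this).
    d = np.linalg.norm(src_pts[:, None, :] - dst_pts[None, :, :], axis=2)
    # k nearest Euclidean candidates per source point.
    cand = np.argpartition(d, k - 1, axis=1)[:, :k]          # (m x k)
    rows = np.arange(len(src_pts))[:, None]
    # Re-rank candidates with the combined distance of Eq. (9).
    combined = alpha * d[rows, cand] + (1.0 - alpha) * np.abs(
        dst_curv[cand] - src_curv[:, None])
    best = np.argmin(combined, axis=1)
    return cand[np.arange(len(src_pts)), best]
```

Restricting the curvature re-ranking to the Euclidean candidates assumes the true match is geometrically close, which holds after the TPS deformation has brought S′ near D.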
The other obstacle that must be settled for closest point matching is that the method so far does not preserve a one-to-one mapping. In fact, several points on the deformed 3D face S′ may be mapped onto the same point of the destination 3D face D. We call such points of D collision points: points that have more than one corresponding point on S′. Generally, collision points are produced by outliers or by points with locally complex geometric features; considering the high resolution of the 3D faces and the distribution of the collision points, the latter cause is the main concern. The distribution of the collision points on the destination 3D face is shown in Figure 3. To eliminate these collision points, a revised point matching algorithm is proposed. The main idea is to construct a distance list for every collision point, and only the corresponding point with the minimum distance is regarded as the true correspondence. The outline of the one-to-one point matching algorithm is as follows.
(1) Create a KD-tree for the destination 3D face D.
(2) For each unmatched point on the deformed source 3D face S′, search for its closest point on D.
(3) Detect the collision points on D; if none exist, go to (6).
(4) For each collision point q, find its corresponding points on S′ reversely, denote the one with minimum distance by p*, and record the corresponding pair (p*, q).
(5) Remove the point q from D by deleting its node from the KD-tree, then go to (2).
(6) Record the remaining corresponding pairs of points without collision.
With this revised closest point matching algorithm, the corresponding point search maintains a one-to-one mapping, though more computation is required.
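A minimal NumPy sketch of the collision-resolution loop might look like the following. Brute-force distances are used instead of a KD-tree with node deletion, purely for clarity, and it assumes the destination set has at least as many points as the source set:

```python
import numpy as np

def one_to_one_match(src_pts, dst_pts):
    """Greedy one-to-one closest point matching: when several source points
    claim the same destination point, only the closest claimant keeps it;
    the destination point is then removed and the losers search again among
    the remaining points. Assumes len(dst_pts) >= len(src_pts)."""
    m = len(src_pts)
    d = np.linalg.norm(src_pts[:, None, :] - dst_pts[None, :, :], axis=2)
    available = np.ones(len(dst_pts), dtype=bool)
    match = np.full(m, -1)
    unmatched = list(range(m))
    while unmatched:
        # Each unmatched source point claims its closest available target.
        claims = {}
        for i in unmatched:
            cols = np.where(available)[0]
            j = cols[np.argmin(d[i, cols])]
            claims.setdefault(j, []).append(i)
        next_round = []
        for j, claimants in claims.items():
            # Collision resolution: the claimant with minimum distance wins.
            winner = min(claimants, key=lambda i: d[i, j])
            match[winner] = j
            available[j] = False
            next_round.extend(i for i in claimants if i != winner)
        unmatched = next_round
    return match
```

Each round settles at least one pair and removes the matched target, so the loop terminates and the resulting mapping is injective by construction.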
5. Experimental Results of 3D Face Morphing and Animation
Once the point-to-point correspondence of the 3D faces is established, a direct application of the alignment is to create 3D face morphing and animation, which have wide applications in computer games, virtual reality, and the animation of actors in entertainment movies.
The scanned 3D faces we use come from the MPI Face Database [18] and the BJUT-3D Face Database [19]. The 3D facial scans have high resolution, generally with more than 70000 vertices and 140000 triangles plus texture information, so realistic animation results can be achieved if accurate point correspondence is obtained. Here we use a simple key-frame interpolation method to produce the face morphing and animation between the source and destination faces: the points of each key-frame 3D face are computed by linear interpolation between corresponding points, and the texture and geometric normals of the corresponding points are interpolated at the same time.
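The key-frame construction above reduces to a convex combination of corresponded points and per-point attributes. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def morph_frames(src_pts, dst_pts, src_attr, dst_attr, n_frames=10):
    """Generate key frames by linear interpolation between corresponded
    points; per-point attributes (texture colour, normals, ...) are
    interpolated the same way. Assumes row i of src and dst correspond.
    Note: interpolated normals should be renormalised before shading."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        pts = (1.0 - t) * src_pts + t * dst_pts
        attr = (1.0 - t) * src_attr + t * dst_attr
        frames.append((pts, attr))
    return frames
```

Rendering the returned frames in sequence produces the morph; using two scans of the same person with different expressions produces the animation.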
The face morphing experiment uses two 3D faces selected from the MPI Face Database, one female and one male. As the difference between the two faces is large enough to represent the variety of human face modality, a nonrigid transformation is required to handle the deformation. The face animation is created on 3D faces of the same person with different expressions, selected from the BJUT-3D Face Database. The sequences of key frames of the face morphing and face animation are shown in Figure 7. On the whole, the visual realism of the morphing and animation is satisfactory, though local areas with relatively complex shape features, and areas with points missing for scanning reasons, such as the mandible and ears, look less convincing.
To compare our TPS method with the original ICP algorithm [7] and the optical flow method [9], the MPI source 3D face is aligned to the target 3D face using the three methods, respectively. To compute the point correspondence with the optical flow method, the source and target 3D faces are unwrapped into texture and height map images (shown in Figure 4) by a cylindrical coordinate transformation. The facial texture and height map images are then aligned by an optical flow algorithm; here we adopt the optical flow algorithm proposed by Horn and Schunck [20]. Finally, the point correspondence of the 3D faces is obtained from the alignment of the 2D images by the inverse cylindrical coordinate transformation. In the ICP and TPS methods, the source 3D face is transformed by a rigid transformation and by the TPS deformation, respectively; then, using the proposed closest point searching method, the two transformed faces are aligned with the destination 3D face. To evaluate the alignment results of the three methods, the averages and standard deviations of the distances between the corresponding points on the source and destination 3D faces are computed.
The results of the three methods are shown in Table 1. Note that all the vertices of the 3D faces are normalized into a common interval before the experiment. The distances between corresponding points for the three methods are also visualized on the source 3D face (shown in Figure 5). The averages and standard deviations of the distances, together with their visualization in Figure 5, reveal that the TPS method has the best point matching accuracy, while the optical flow method performs poorly in dense point alignment, with ICP in between. Optical flow is generally used to perceive the movement of objects in video sequences [21]; when the difference between the facial images is too large to satisfy the continuity requirement of adjoining frames, the optical flow computation fails with obvious errors, which is the main reason for the poor results of the optical flow method. Nonrigid transformation is more suitable than rigid transformation for 3D face deformation, which is why the TPS method obtains better results than the ICP algorithm.
To examine the stability of the TPS method, we selected 30 3D faces from the BJUT-3D Face Database as an alignment set, and dense point alignment is performed on this set using the three methods above. The experiment is run with an increasing number of 3D faces in the alignment set: at first the set comprises two 3D faces, then 3D faces are added one by one until all 30 faces are included. At each step, the mean average and standard deviations of the corresponding point distances over the 3D faces in the alignment set are computed. Figure 6 shows how the mean average and standard deviations change with the increasing number of 3D faces for the optical flow method, the ICP algorithm, and the TPS method. The experimental results show that the mean average distance and its standard deviation converge toward a stable value for all three methods, and that the TPS method has better stability and correspondence accuracy than the ICP algorithm and the optical flow method.
6. Conclusion
In this paper, we describe a new dense point-to-point alignment method and apply it to scanned 3D faces. In the method, TPS is adopted to model the deformation of 3D faces, and a closest point matching algorithm is proposed that searches for corresponding points while simultaneously guaranteeing a one-to-one mapping. To reduce the closest point search time and obtain good point matching accuracy, a KD-tree technique and a user-defined distance function that takes local curvature into account are integrated into the point matching algorithm. The dense point alignment is used for 3D face morphing and animation by key-frame interpolation and achieves satisfying, realistic visual results. Compared with the ICP algorithm and the optical flow method, the error analysis on the selected pair of MPI 3D faces and the experiment on 30 BJUT 3D faces show that our method is effective for dense point correspondence. Furthermore, the method does not require a large facial database and can easily be extended to other densely sampled objects.
In our work, the landmarks on the 3D faces are picked with an interactive tool. Although the manual marking procedure is simple and takes little time, it limits the applicability of the method in many areas, such as real-time applications and situations with large numbers of objects. Future work will therefore first focus on a fully automatic point matching algorithm. An intuitive approach is to find a suitable automatic feature detection method, but that is itself a challenging problem in pattern recognition and computer vision. Additional points to be improved include refining the alignment accuracy by exploring a proper representation of the local geometric features, constructing a whole head model with hair for a more natural look, and building practical applications.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grants no. 60736008 and no. 60872127) and the Postdoctoral Science Foundation of China (Grant no. 20080430316). The 3D facial scans were provided by the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, and the Multimedia and Intelligent Software Technology Beijing Municipal Key Laboratory of Beijing University of Technology in Beijing, China.
References
1. V. Blanz and T. Vetter, “A morphable model for the synthesis of 3D faces,” in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99), pp. 187–194, Los Angeles, Calif, USA, August 1999.
2. L. G. Brown, “A survey of image registration techniques,” ACM Computing Surveys, vol. 24, no. 4, pp. 325–376, 1992.
3. J. B. A. Maintz and M. A. Viergever, “A survey of medical image registration,” Medical Image Analysis, vol. 2, no. 1, pp. 1–36, 1998.
4. M. A. Audette, F. P. Ferrie, and T. M. Peters, “An algorithmic overview of surface registration techniques for medical imaging,” Medical Image Analysis, vol. 4, no. 3, pp. 201–217, 2000.
5. B. Zitová and J. Flusser, “Image registration methods: a survey,” Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
6. R. Wan and M. Li, “An overview of medical image registration,” in Proceedings of the 5th International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '03), p. 385, Xi'an, China, September 2003.
7. P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
8. S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings of the 3rd International Conference on 3D Digital Imaging and Modeling, pp. 145–152, Quebec, Canada, May-June 2001.
9. T. Vetter and V. Blanz, “Estimating coloured 3D face models from single images: an example based approach,” in Proceedings of the 5th European Conference on Computer Vision (ECCV '98), vol. 2, pp. 499–513, Freiburg, Germany, June 1998.
10. F. Steinke, B. Schölkopf, and V. Blanz, “Learning dense 3D correspondence,” in Advances in Neural Information Processing Systems 19, pp. 1313–1320, MIT Press, Cambridge, Mass, USA, 2007.
11. H. Chui and A. Rangarajan, “A new point matching algorithm for non-rigid registration,” Computer Vision and Image Understanding, vol. 89, no. 2-3, pp. 114–141, 2003.
12. V. Jain and H. Zhang, “Robust 3D shape correspondence in the spectral domain,” in Proceedings of IEEE International Conference on Shape Modeling and Applications (SMI '06), pp. 118–129, Matsushima, Japan, June 2006.
13. F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, “Synthesizing realistic facial expressions from photographs,” in Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '98), pp. 75–84, Orlando, Fla, USA, July 1998.
14. R. L. Harder and R. N. Desmarais, “Interpolation using surface splines,” Journal of Aircraft, vol. 9, no. 2, pp. 189–191, 1972.
15. F. L. Bookstein, “Principal warps: thin-plate splines and the decomposition of deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 6, pp. 567–585, 1989.
16. J. L. Bentley, “Multidimensional binary search trees used for associative searching,” Communications of the ACM, vol. 18, no. 9, pp. 509–517, 1975.
17. M. Greenspan and M. Yurick, “Approximate K-D tree search for efficient ICP,” in Proceedings of the 4th International Conference on 3-D Digital Imaging and Modeling (3DIM '03), pp. 442–448, Banff, Canada, October 2003.
18. N. F. Troje and H. H. Bülthoff, “Face recognition under varying poses: the role of texture and shape,” Vision Research, vol. 36, no. 12, pp. 1761–1771, 1996.
19. Y. Hu, B. Yin, Y. Sun, and S. Cheng, “3D face animation based on morphable model,” Journal of Information and Computational Science, vol. 2, no. 1, pp. 35–39, 2005.
20. B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.
21. J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,” International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.
Solve word problems involving addition and subtraction of fractions referring to the same whole (2)
Common Core: 5.NF.2
Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers.
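As a small illustration of the standard (the word problem itself is made up, not taken from the lesson), fractions with unlike denominators can be added exactly with Python's fractions module:

```python
from fractions import Fraction

# Word problem: Ana ate 1/3 of a pizza and Ben ate 1/4 of the same pizza.
# How much of the pizza did they eat, and how much is left?
eaten = Fraction(1, 3) + Fraction(1, 4)   # unlike denominators: 4/12 + 3/12
left = 1 - eaten
print(eaten, left)                        # 7/12 5/12
```

Benchmark-fraction reasoning checks the answer: each share is less than 1/2, so the total eaten must be less than 1, and 7/12 is indeed a bit more than 1/2.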
Mathematics Weblog
Sunday 25 September 2005 at 6:17 pm
The notation (f ∘ g)(x) = f(g(x)) means “do g first, then f”, rather than the other way round, and is therefore counter-intuitive to the beginner.
One solution is to use a different notation for functions, writing the function to the right of its argument, so that composition reads “do f then g”. This seems to be much nicer if it weren’t for one thing. As far as I know, this notation is only taught in advanced courses (3rd year degree/postgraduate) and in books like Universal Algebra by P M Cohn, which is certainly not meant to be read by a novice. By this time, the
Does anyone know if the
Patent application title: INTERFERENCE ALIGNMENT FOR CHANNEL-ADAPTIVE WAVEFORM MODULATION
Embodiments provide an apparatus and method for interference alignment for channel-adaptive waveform modulation. The method includes obtaining at least a part of a first matrix and a part of a second
matrix for the impulse response function of a communication channel. The method further includes designing a set of one or more linearly independent waveforms based on at least the obtained parts of
the first and second matrices such that a first subspace spanned by the linearly independent waveforms when multiplied by the obtained part of the first matrix at least partially overlaps a second
subspace spanned by the linearly independent waveforms when multiplied by the obtained part of the second matrix.
A method for transmitting a sequence of data blocks, comprising: obtaining at least a part of a first matrix and a part of a second matrix for the impulse response function of a communication
channel, the part of the first matrix relating to channel-induced interference between a current data block and a previously transmitted first data block, the part of the second matrix relating to
channel-induced interference between the current data block and a previously transmitted second data block, the second data block being transmitted before the first data block; designing a set of one
or more linearly independent waveforms based on at least the obtained part of the first matrix and the obtained part of the second matrix for the impulse response function such that a first subspace
spanned by the linearly independent waveforms when multiplied by the obtained part of the first matrix at least partially overlaps a second subspace spanned by the linearly independent waveforms when
multiplied by the obtained part of the second matrix; and transmitting a sequence of the data blocks over the channel from a transmitter, each data block of the sequence being a weighted linear
superposition of the one or more waveforms of the designed set.
The method of claim 1, wherein the designing step designs the set of linearly independent waveforms such that the first subspace and the second subspace occupy a same linear space.
The method of claim 1, wherein the designed set includes a subset of eigenvectors of a product of (1) an inverse of the first matrix and (2) the second matrix.
The method of claim 3, wherein the subset comprises right eigenvectors of the product.
The method of claim 1, wherein the designing step further includes: obtaining eigenvectors and corresponding eigenvalues based on an eigenvector decomposition of a product of (1) an inverse of the
first matrix and (2) the second matrix; and selecting a subset of the obtained eigenvectors.
The method of claim 5, wherein the designing step further includes: configuring a second set of waveforms based on the selected subset, wherein the configured second set of waveforms is an orthogonal
complement of the product of the selected subset and either of the first or the second matrices.
The method of claim 1, wherein the first data block immediately precedes the current data block and the second data block immediately precedes the first data block.
The method of claim 1, further comprising: transmitting a set of pilot signals over the communication channel that is between the transmitter and a receiver, the part of the first matrix and the part
of the second matrix for the impulse response function being obtained responsive to measurements of said pilot signals.
The method of claim 1, further comprising: for each one of the data blocks of the sequence, modulating the waveforms of the designed set to have amplitudes responsive of a received input data symbol
and linearly superimposing the modulated waveforms to produce the each one of the data blocks.
The method of claim 1, wherein the designed set has a number of the waveforms equal to one half of a symbol interval when the delay spread is twice the symbol interval.
An apparatus for communicating data, comprising: a transmitter including an array of modulators, each modulator being configured to modulate an amplitude of a corresponding one of linearly
independent waveforms over a sequence of sampling intervals in response to receipt of each of a sequence of input data symbols; and an adder configured to form a sequence of data blocks, each data
block being a linear superposition of modulated transmitter waveforms produced by the modulators responsive to receipt of one of the input data symbols, the adder configured to transmit the data
blocks via a communication channel; and wherein the transmitter configures the modulated waveforms in a manner responsive to a part of a first matrix and a part of a second matrix for the impulse
response function of a communication channel, the part of the first matrix relating to channel-induced interference between a current data block and a previously transmitted first data block, the
part of the second matrix relating to channel-induced interference between the current data block and a previously transmitted second data block, the second data block being transmitted before the
first data block, wherein the transmitter configures the modulated waveforms such that a first subspace spanned by the modulated waveforms when multiplied by the part of the first matrix at least
partially overlaps a second subspace spanned by the modulated waveforms when multiplied by the part of the second matrix.
The apparatus of claim 11, wherein the transmitter configures the modulated waveforms such that the first subspace and the second subspace occupy a same linear space.
The apparatus of claim 11, where the modulated waveforms include a subset of eigenvectors of a product of (1) an inverse of the first matrix and (2) the second matrix.
The apparatus of claim 13, wherein the subset comprises right eigenvectors of the product.
The apparatus of claim 11, wherein the transmitter configures the modulated waveforms by obtaining eigenvectors and corresponding eigenvalues based on an eigenvector decomposition of a product of (1)
an inverse of the first matrix and (2) the second matrix and selects a subset of the obtained eigenvectors.
The apparatus of claim 15, wherein the transmitter configures the modulated waveforms by constructing a second set of waveforms based on the selected subset, wherein the constructed second set of
waveforms is an orthogonal complement of the product of the selected subset and either of the first or second matrices.
The apparatus of claim 11, further comprising: a receiver having an array of demodulators, the demodulators projecting the data blocks onto conjugate waveforms to produce estimates of a linear
combination of components of the input data symbol carried by the data blocks being demodulated.
The apparatus of claim 17, wherein the transmitter transmits a set of pilot signals over the communication channel that is between the transmitter and the receiver, the part of the first matrix and
the part of the second matrix for the impulse response function being obtained responsive to measurements of said pilot signals.
The apparatus of claim 11, wherein each modulator modulates the amplitude of the corresponding one of the linearly independent waveforms to be responsive to a received input data symbol, and the modulated waveforms are linearly superimposed to produce each one of the data blocks.
The apparatus of claim 11, wherein the number of modulated waveforms is equal to one half of a symbol interval when the delay spread is twice the symbol interval.
BACKGROUND [0001]
Inter-symbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon because the previous symbols act as noise, making the communication less reliable. One of the causes of ISI is multipath propagation, in which a wireless signal from a transmitter reaches the receiver via many different paths. The causes of this include reflection (e.g., the signal may bounce off buildings), refraction (such as through the foliage of a tree), and atmospheric effects such as atmospheric ducting and ionospheric reflection. Since all of these paths have different lengths, the different versions of the signal arrive at different times, resulting in ISI.
Data communication schemes have handled ISI by a variety of techniques. One such technique is known as Orthogonal Frequency Division Multiplexing (OFDM). OFDM uses modulation waveforms that enable the essential removal of ISI in a frequency-dependent channel. For example, in OFDM, each transmitted data block is a weighted superposition of OFDM modulation waveforms. The OFDM modulation waveforms form an orthonormal basis set over a time period (T_s - T_g), where T_s is the length of the OFDM block (also referred to as the symbol interval of duration T_s) and T_g is the duration of either a guard interval or a cyclic prefix, both expressed as a multiple of the sampling interval. Because ISI does not distort symbols separated by more than the communication channel's delay-spread T_D, the guard interval T_g is selected to be greater than or equal to the delay spread T_D in OFDM. In an OFDM block, the weights of the superposition define the data symbol being transmitted.
At the receiver, in OFDM, each transmitted data block is demodulated by projecting the received data block onto a basis set of conjugate OFDM modulation waveforms. Because the OFDM modulation waveforms are a basis set over the period (T_s - T_g), the projections may be performed over the last period (T_s - T_g) of the OFDM data blocks. That is, the projections do not need to use prefix portions of the OFDM data blocks. Because the channel memory is limited to a time of length T_D, an earlier transmitted OFDM block only produces ISI in the cyclic prefix or guard portion of the next received OFDM data block. Thus, by ignoring said cyclic prefix or guard portions of received OFDM data blocks, OFDM produces demodulated data that is free of distortion due to ISI. OFDM techniques may also effectively diagonalize the communication channel.
Unfortunately, cyclic prefix and guard portions of OFDM data blocks consume bandwidth that might otherwise be used to transmit data. As the communication channel's delay-spread T_D approaches the temporal length of the OFDM data block T_s, the bandwidth (proportional to T_s - T_g) remaining for carrying data shrinks to zero. For example, when the channel delay-spread is equal to the symbol interval, then OFDM is 0% efficient because the redundant cyclic prefix occupies the entire symbol interval. Increasing the symbol interval T_s would alleviate this problem, but this results in increased communication delay, which might not be tolerable depending on the application.
In order to overcome the bandwidth-deficient channel whose delay-spread approaches the length of the OFDM block, Chen et al. (U.S. Pat. No. 7,653,120) introduces a Channel Adaptive Waveform
Modulation (CAWM) that generates modulating waveforms from the channel impulse response itself. When the channel delay-spread is equal to the symbol interval, CAWM is 50% efficient because the number
of orthogonal data-symbol-bearing waveforms that can be created is equal to half the symbol-interval. When the delay-spread is equal to twice the symbol interval, then CAWM is 1/3 (33%) efficient.
This compares to a 0% efficiency of OFDM in both cases.
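The efficiency comparison above can be reproduced with a short sketch. The formulas below are inferred from the stated 0%, 50%, and 33% figures and are illustrative only; they are not taken from the cited patents.

```python
def ofdm_efficiency(T_s, T_D):
    # The guard interval must cover the delay spread, so at most T_s - T_D
    # sampling intervals per block remain for data (zero when T_D >= T_s).
    return max(T_s - T_D, 0) / T_s

def cawm_efficiency(T_s, T_D):
    # Assumed form matching the figures quoted in the text: 1/2 when
    # T_D = T_s and 1/3 when T_D = 2*T_s.
    return T_s / (T_s + T_D)

print(ofdm_efficiency(100, 100))   # OFDM: 0.0 when delay spread equals the symbol interval
print(cawm_efficiency(100, 100))   # CAWM: 0.5 in the same regime
```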
SUMMARY [0006]
Embodiments provide an apparatus and method for interference alignment for channel-adaptive waveform modulation.
The method includes obtaining at least a part of a first matrix and a part of a second matrix for the impulse response function of a communication channel. The part of the first matrix relates to
channel-induced interference between a current data block and a previously transmitted first data block, and the part of the second matrix relates to channel-induced interference between the current
data block and a previously transmitted second data block, the second data block being transmitted before the first data block.
The method further includes designing a set of one or more linearly independent waveforms based on at least the obtained part of the first matrix and the obtained part of the second matrix for the
impulse response function such that a first subspace spanned by the linearly independent waveforms when multiplied by the obtained part of the first matrix at least partially overlaps a second
subspace spanned by the linearly independent waveforms when multiplied by the obtained part of the second matrix.
In one embodiment, the designing step designs the set of linearly independent waveforms such that the first subspace and the second subspace occupy a same linear space. Further, the designed set may
include a subset of eigenvectors of a product of (1) an inverse of the first matrix and (2) the second matrix. The subset may comprise the right eigenvectors of the product.
In another embodiment, the designing step further includes obtaining eigenvectors and corresponding eigenvalues based on an eigenvector decomposition of a product of (1) an inverse of the first
matrix and (2) the second matrix, and selecting a subset of the obtained eigenvectors.
Also, the designing step may further include configuring a second set of waveforms based on the selected subset, where the configured second set of waveforms is an orthogonal complement of the
product of the selected subset and either of the first or the second matrices.
In one embodiment, the first data block immediately precedes the current data block and the second data block immediately precedes the first data block.
The method may further include transmitting a set of pilot signals over the communication channel that is between the transmitter and the receiver, where the part of the first matrix and the part of
the second matrix for the impulse response function are obtained responsive to measurements of said pilot signals.
The method further includes, for each one of the data blocks of the sequence, modulating the waveforms of the designed set to have amplitudes responsive to a received input data symbol and linearly superimposing the modulated waveforms to produce each one of the data blocks.
In one embodiment, the designed set has a number of waveforms equal to one half of a symbol interval when the delay spread is twice the symbol interval.
The apparatus includes a transmitter having an array of modulators, where each modulator is configured to modulate an amplitude of a corresponding one of linearly independent waveforms over a
sequence of sampling intervals in response to receipt of each of a sequence of input data symbols, an adder configured to form a sequence of data blocks, where each data block is a linear
superposition of modulated transmitter waveforms produced by the modulators responsive to receipt of one of the input data symbols and the adder is configured to transmit the data blocks via a
communication channel.
The transmitter configures the modulated waveforms in a manner responsive to a part of a first matrix and a part of a second matrix for the impulse response function of a communication channel. The
part of the first matrix relates to channel-induced interference between a current data block and a previously transmitted first data block, the part of the second matrix relates to channel-induced
interference between the current data block and a previously transmitted second data block, the second data block being transmitted before the first data block. The transmitter configures the
modulated waveforms such that a first subspace spanned by the modulated waveforms when multiplied by the part of the first matrix at least partially overlaps a second subspace spanned by the
modulated waveforms when multiplied by the part of the second matrix.
In one embodiment, the transmitter configures the modulated waveforms such that the first subspace and the second subspace occupy a same linear space. Further, the modulated waveforms may include a
subset of eigenvectors of a product of (1) an inverse of the first matrix and (2) the second matrix. The subset may comprise right eigenvectors of the product. The transmitter configures the
modulated waveforms by obtaining eigenvectors and corresponding eigenvalues based on an eigenvector decomposition of a product of (1) an inverse of the first matrix and (2) the second matrix and
selects a subset of the obtained eigenvectors.
The transmitter may configure the modulated waveforms by constructing a second set of waveforms based on the selected subset, where the constructed second set of waveforms is an orthogonal complement
of the product of the selected subset and either of the first or second matrices.
The apparatus may further include a receiver having an array of demodulators, where the demodulators project the data blocks onto conjugate waveforms to produce estimates of a linear combination of
the components of the input data symbols carried by the data blocks being demodulated.
The transmitter may transmit a set of pilot signals over the communication channel that is between the transmitter and the receiver, where the part of the first matrix and the part of the second matrix for the impulse response function are obtained responsive to measurements of said pilot signals.
Each modulator may modulate the amplitude of the corresponding one of the linearly independent waveforms to be responsive to a received input data symbol, and the modulated waveforms may be linearly superimposed to produce each one of the data blocks.
In one embodiment, the number of modulated waveforms is equal to one half of a symbol interval when the delay spread is twice the symbol interval.
BRIEF DESCRIPTION OF THE DRAWINGS [0024]
Example embodiments will become more fully understood from the detailed description given herein below and the accompanying drawings, wherein like elements are represented by like reference numerals,
which are given by way of illustration only and thus are not limiting, and wherein:
FIG. 1 illustrates a communication system 10 according to an embodiment;
FIG. 2 illustrates a data stream 19 that is transmitted over the channel 13 according to an embodiment;
FIG. 3 illustrates a method for performing interference alignment for channel-adaptive waveform modulation according to an embodiment;
FIG. 4 illustrates a method of constructing waveforms and conjugate waveforms such that inter-symbol interference is removed according to an embodiment; and
FIG. 5 illustrates a comparison of the number of orthogonal input waveforms L as a function of channel delay spread T_D achieved by the embodiments of the present invention (solid line), channel-adaptive waveform modulation (dashed line), and the OFDM method (dotted line).
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0030]
Various example embodiments will now be described more fully with reference to the accompanying drawings in which some example embodiments are shown. Like numbers refer to like elements throughout
the description of the figures.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to
distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope
of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an" and
"the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or
"including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other
features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be
executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example
embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the
context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the following description, illustrative embodiments will be described with reference to acts and symbolic representations of operations (e.g., in the form of flowcharts) that may be implemented as
program modules or functional processes that include routines, programs, objects, components, data structures, etc., that when executed perform particular tasks or implement particular abstract data
types and may be implemented using existing hardware at existing network elements. Such existing hardware may include one or more Central Processing Units (CPUs), digital signal processors (DSPs),
application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computers, or the like machines that once programmed become particular machines.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "obtaining", "designing", "configuring" or the like, refer to the action and processes of a computer system,
or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data
similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Below, parts of the description will use a complex baseband description of the channel and signals as discrete time variables. In this description, the various signals and channel quantities are
described as complex baseband functions whose values depend on the sampling interval. A sampling interval t refers to the temporal interval over which a modulator or demodulator applies one data
value to the signal being modulated or demodulated. The symbol interval refers to the duration (expressed in terms of the number of sampling intervals) T_s of one block of symbols. The delay spread T_D refers to the length (expressed again in terms of the number of sampling intervals) of a communication channel's memory. The embodiments and claims are meant to cover situations where frequency
up-conversion occurs in the transmitter and frequency down-conversion occurs in the receiver as well as situations where no such conversions occur.
Embodiments of the present disclosure employ interference alignment in the context of Channel Adaptive Waveform Modulation (CAWM), as discussed in U.S. Pat. No. 7,653,120, which is incorporated by
reference in its entirety. Interference alignment in the CAWM environment ensures that the inter-symbol interference occupies a relatively smaller subspace than it would otherwise occupy. For
example, multiple interfering symbols are aligned to fall into the same subspace at the receiver. As a result, an increased number of waveforms may be used in the symbol interval, improving the
efficiency of the scheme.
FIG. 1 illustrates a communication system 10 according to an embodiment. The communication system 10 includes a transmitter 11, a receiver 12, and a frequency-dependent communication channel 13. The
transmitter 11 includes a parallel array of L modulators 14, and an adder 15. In the array, each modulator 14 is configured to amplitude modulate a received component of an input data symbol onto a
waveform, wherein each waveform corresponds to one of the modulators 14. For example, the l-th modulator 14 modulates its waveform with the l-th component a_{l,q} of the q-th input data symbol [a_{1,q}, a_{2,q}, . . . , a_{L,q}] in response to the receipt of the q-th input data symbol in the transmitter 11. In the array, each modulator 14 modulates the input data symbol onto its waveform in parallel with the other modulators 14 of the array. Thus, the array formed by the modulators 14 will produce a temporally synchronized array of L modulated waveforms in response to the receipt of each input data symbol. The adder 15 is connected to sum the amplitude modulated waveforms of the array in a temporally aligned manner to produce a temporal sequence of output signals, e.g., . . . , s_{t-1}, s_t, s_{t+1}, . . . , for transmission over the communication channel 13. Each of the output signals is a superposition of waveforms modulated at the same sampling interval.
The communication channel 13 transports the signals from the transmitter 11 to the receiver 12. The communication channel 13 may be a wireless channel, an optical fiber channel, or a wire line
channel and may be operated in simplex or duplex mode, for example.
FIG. 2 illustrates a data stream 19 that is transmitted over the channel 13 according to an embodiment. The communication system 10 transmits a data stream 19 over the communication channel 13 as a
sequence of data blocks, e.g., consecutive data blocks (q-1), q, and (q+1). Each data block spans T_s contiguous, non-overlapping sampling intervals, and the different data blocks have equal temporal length. For this reason, it will be convenient to represent the values of any signal variable over a data block as a T_s-dimensional vector whose individual components represent the values of the signal variable at individual sampling intervals. That is, the components of such a vector group together the values of the signal variable at the sampling intervals of one data block. For that reason, each component of such a vector will be labeled by two integer indices. The first index will represent the position of the corresponding signal variable in a data block, e.g., an integer in [1, T_s], and the second integer index will represent the position of the data block in the data stream. For example, the (k, q) component of such a vector will be the value of the corresponding signal variable during the k-th sampling interval of the q-th data block, i.e., at sampling interval qT_s + k.
Referring back to FIG. 1, the transport over the communication channel 13 transforms each transmitted signal into a corresponding signal at the receiver 12, e.g., s_t into x_t for the signals corresponding to the sampling interval "t". The transport over the communication channel 13 effectively convolves the output signal, s_t, with the communication channel's impulse response, h_t, and adds a noise, w_t, so that the corresponding signal x_t received at the receiver 12 for the sampling interval "t" is given by:

x_t = Σ_{τ=0}^{T_D} h_τ s_{t-τ} + w_t   Eq. (1)
In Eq. (1), the integer T_D is the delay-spread of the communication channel 13. The delay-spread T_D determines the number of sampling intervals over which an earlier modulated and transmitted signal can produce interference in the received signal corresponding to a later modulated and transmitted signal.
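The channel model of Eq. (1) can be sketched directly in a few lines of Python (noise omitted; the sample values are illustrative):

```python
def channel_output(s, h):
    """Apply Eq. (1): convolve transmitted samples s with the impulse
    response h, whose length is T_D + 1 taps (noise term omitted)."""
    T_D = len(h) - 1
    x = []
    for t in range(len(s)):
        # x_t mixes the current sample with up to T_D previous samples,
        # which is exactly the source of inter-symbol interference.
        x_t = sum(h[tau] * s[t - tau] for tau in range(T_D + 1) if t - tau >= 0)
        x.append(x_t)
    return x

# A channel with delay spread T_D = 2 sampling intervals:
x = channel_output([1.0, 0.0, 0.0, 2.0], [0.5, 0.3, 0.2])
# x == [0.5, 0.3, 0.2, 1.0]: the first transmitted sample leaks into the
# two following received samples.
```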
The receiver 12 includes an input 16 and a parallel array of L demodulators 17. The number L of demodulators 17 is typically equal to the number of modulators 14. The input port 16 transmits the sequence of received signals, i.e., . . . , x_{t-1}, x_t, x_{t+1}, . . . , to the demodulators 17 of the array in parallel. Each demodulator 17 projects the received signals onto a conjugate waveform corresponding to the demodulator 17 to produce an estimate, e.g., y_{l,q}, of a linear combination of the components of the input data symbol carried by the data block being demodulated.
Each individual y_{l,q} provides an estimate of the component a_{l,q} of the input data symbol. Thus, the array of L demodulators 17 produces a temporally synchronized array of L estimates in response to receiving one data block from the communication channel 13, e.g., [y_{1,q}, y_{2,q}, . . . , y_{L,q}] in response to receiving the q-th data block [x_{1,q}, x_{2,q}, . . . , x_{T_s,q}].
The waveforms and the conjugate waveforms may be represented by:
Ψ = [ ψ_{1,1}    ψ_{1,2}    . . .  ψ_{1,L}
      ψ_{2,1}    ψ_{2,2}    . . .  ψ_{2,L}
      . . .
      ψ_{T_s,1}  ψ_{T_s,2}  . . .  ψ_{T_s,L} ]

and

Φ = [ φ_{1,1}    φ_{1,2}    . . .  φ_{1,L}
      φ_{2,1}    φ_{2,2}    . . .  φ_{2,L}
      . . .
      φ_{T_s,1}  φ_{T_s,2}  . . .  φ_{T_s,L} ]   Eq. (2)
Each column of
Ψ is an independent input waveform, and each column of Φ is an independent output waveform. When the delay spread T_D is greater than the symbol interval T_s, the waveform matrix Ψ, and the corresponding waveform matrix Φ, is based, at least, on the matrix elements of the part of the impulse response function that relates to interference between the current data block (q) and the first previous data block (q-1), and on the matrix elements of the part of the impulse response function that relates to interference between the current data block (q) and the second previous data block (q-2), e.g., H_1 and H_2, as further explained below.
Both the waveforms (Ψ) and the conjugate waveforms (Φ) may be selected to form orthonormal bases of dimension L over the complex space of dimension T_s. The orthonormality conditions on the waveforms and the conjugate waveforms are then described as follows:

Ψ†Ψ = I_{L×L} and Φ†Φ = I_{L×L}   Eq. (3)

Here, I_{L×L} is the L×L unit matrix, and the superscript "†" denotes "conjugate transpose". While such orthogonality and/or normality conditions are not required, they may be advantageous for modulating data onto the input waveforms and demodulating data from the output waveforms, as discussed below.
Each modulator 14 of the parallel array modulates a corresponding component of the input data symbol onto a preselected one of the waveforms in parallel with the other modulators 14. For example, in
response to the q-th input data symbol, the k-th modulator 14 will produce a temporal sequence of output signals represented by the column vector a_{k,q} [ψ_{1,k}, ψ_{2,k}, . . . , ψ_{T_s,k}]^T. Each of the output signals represents the form of the modulated k-th input waveform for one of the sampling intervals of one data block.
For each input data symbol, the adder 15 sums the L modulated input waveforms in a temporally aligned manner. For example, the adder 15 forms a weighted linear superposition of the waveforms, e.g.,
one data block for transmission. In the linear superposition, the starting sampling intervals of the individual modulated waveforms are temporally aligned. In response to the input data symbol a_q, the modulating and summing produce an output data block that may be represented by a T_s-dimensional column vector s_q. The column vector s_q may be written as:

s_q = a_{1,q} [ψ_{1,1}, ψ_{2,1}, . . . , ψ_{T_s,1}]^T + a_{2,q} [ψ_{1,2}, ψ_{2,2}, . . . , ψ_{T_s,2}]^T + . . . + a_{L,q} [ψ_{1,L}, ψ_{2,L}, . . . , ψ_{T_s,L}]^T = Ψ a_q   Eq. (4)
In Eq. (4), each term of the sum represents the synchronized output of a corresponding one of the modulators 14 of FIG. 1. The last form of Eq. (4) writes the output data block s_q in terms of the T_s×L complex matrix representation, Ψ, of the set of waveforms of Eqs. (2) and (3).
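The modulation of Eq. (4) is simply a matrix-vector product of the waveform matrix with the symbol vector. A minimal pure-Python sketch (the dimensions and values are illustrative):

```python
def modulate(psi, a):
    """Return s_q = Ψ a_q per Eq. (4), where psi has T_s rows of L entries
    (each column is one waveform) and a is the L-component symbol vector."""
    # Each modulator scales its column of psi by one symbol component;
    # the adder sums the scaled columns sample by sample.
    return [sum(row[l] * a[l] for l in range(len(a))) for row in psi]

# T_s = 3 sampling intervals, L = 2 waveforms.
psi = [[1.0, 0.0],
       [0.0, 1.0],
       [1.0, 1.0]]
s_q = modulate(psi, [2.0, 3.0])
# s_q == [2.0, 3.0, 5.0]
```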
For each input data symbol, the transmitter 11 transmits the data block over the communication channel 13 that couples the transmitter to the receiver 12. The communication channel 13 distorts the
data blocks due to its impulse response function and additive noise.
According to the embodiments, the delay-spread T_D of the communication channel 13 may be greater than each data block or symbol interval T_s of the data blocks, i.e., T_D > T_s. For that reason, ISI may result from not only the immediately adjacent transmitted data block (q-1), but also may result from the previous data blocks (q-2, q-3, . . . ). For ease of exposition, the following discussion assumes that the delay spread is at most twice the symbol interval, i.e., T_D ≤ 2T_s. The present disclosure is, however, not limited to this case, and the general situation will be discussed below. Under this assumption, ISI only results from the two previous data blocks (q-1 and q-2), and Eq. (1) simplifies when written in data block form so that the q-th transmitted data block, s_q, and the q-th received data block, x_q, are related as follows:
x_q = H_0 s_q + H_1 s_{q-1} + H_2 s_{q-2} + w_q   Eq. (5a)
    = H_0 Ψ a_q + H_1 Ψ a_{q-1} + H_2 Ψ a_{q-2} + w_q   Eq. (5b)
The T_s×T_s complex matrices H_0, H_1, and H_2 are formed from the impulse response function of the communication channel 13 and are given by:

H_0 = [ h_0          0            0            . . .  0
        h_1          h_0          0            . . .  0
        . . .
        h_{T_s-2}    h_{T_s-3}    h_{T_s-4}    . . .  0
        h_{T_s-1}    h_{T_s-2}    h_{T_s-3}    . . .  h_0 ]

H_1 = [ h_{T_s}      h_{T_s-1}    h_{T_s-2}    . . .  h_1
        h_{T_s+1}    h_{T_s}      h_{T_s-1}    . . .  h_2
        . . .
        h_{2T_s-2}   h_{2T_s-3}   h_{2T_s-4}   . . .  h_{T_s-1}
        h_{2T_s-1}   h_{2T_s-2}   h_{2T_s-3}   . . .  h_{T_s} ]

H_2 = [ h_{2T_s}     h_{2T_s-1}   h_{2T_s-2}   . . .  h_{T_s+1}
        0            h_{2T_s}     h_{2T_s-1}   . . .  h_{T_s+2}
        0            0            h_{2T_s}     . . .  h_{T_s+3}
        . . .
        0            0            0            . . .  h_{2T_s} ]   Eq. (6)
The matrices H_1 and H_2 produce the inter-data block interference between the current data block (q) and the previous data block (q-1) and the inter-data block interference between the current data block and the previous data block (q-2), respectively. In Eq. (5), the column vector w_q for the additive noise is given by w_q = [w_{1,q}, w_{2,q}, . . . , w_{T_s,q}]^T.
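The Toeplitz structure of Eq. (6) can be sketched with a small helper that builds H_0, H_1, and H_2 from the impulse response taps for the case T_D ≤ 2T_s (the function name and the tiny example channel are illustrative):

```python
def block_matrices(h, T_s):
    """Build the T_s x T_s matrices H_0, H_1, H_2 of Eq. (6) from the
    impulse response taps h = [h_0, ..., h_{T_D}]; taps beyond T_D are 0."""
    def tap(i):
        return h[i] if 0 <= i < len(h) else 0.0
    # Entry (r, c) of H_k is tap h_{k*T_s + r - c}, matching Eq. (6).
    H0 = [[tap(r - c) for c in range(T_s)] for r in range(T_s)]
    H1 = [[tap(T_s + r - c) for c in range(T_s)] for r in range(T_s)]
    H2 = [[tap(2 * T_s + r - c) for c in range(T_s)] for r in range(T_s)]
    return H0, H1, H2

# T_s = 2 and T_D = 4 = 2*T_s, i.e., five taps h_0 ... h_4:
H0, H1, H2 = block_matrices([1.0, 2.0, 3.0, 4.0, 5.0], 2)
# H0 == [[1.0, 0.0], [2.0, 1.0]]   (lower-triangular, intra-block)
# H1 == [[3.0, 2.0], [4.0, 3.0]]   (interference from block q-1)
# H2 == [[5.0, 4.0], [0.0, 5.0]]   (interference from block q-2)
```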
The receiver 12 estimates y_q by measuring correlations between the received data block and the conjugate waveforms. The measurement of each correlation involves evaluating an inner product between a received data block and each of the conjugate waveforms. In particular, the receiver 12 produces, for each input data symbol, a_q, an L-dimensional estimate vector, y_q, given by:

y_q = Φ† x_q = Φ† H_0 Ψ a_q + Φ† H_1 Ψ a_{q-1} + Φ† H_2 Ψ a_{q-2} + Φ† w_q   Eq. (7a)

Here, the last equation results from Eq. (5b) for the channel transformation of the transmitted data block. Φ† H_1 Ψ a_{q-1} is the interference term between the current data block (q) and the preceding data block (q-1), and Φ† H_2 Ψ a_{q-2} is the interference term between the current data block (q) and the preceding data block (q-2).
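The projection performed by each demodulator is an inner product of the received block with one conjugate waveform. A real-valued sketch (conjugation omitted for simplicity; Phi and the received block are illustrative):

```python
def demodulate(phi, x):
    """Project the received block x onto the columns of phi (the conjugate
    waveforms), producing the L-dimensional estimate vector y_q of Eq. (7a).
    Real-valued sketch: true conjugate-transpose projection would conjugate
    the entries of phi."""
    L = len(phi[0])
    return [sum(phi[t][l] * x[t] for t in range(len(x))) for l in range(L)]

# T_s = 3, L = 2; each column of phi is one conjugate waveform.
phi = [[1.0, 0.0],
       [0.0, 1.0],
       [0.0, 0.0]]
y_q = demodulate(phi, [4.0, 5.0, 6.0])
# y_q == [4.0, 5.0]: each demodulator picks out the component carried
# along its conjugate waveform.
```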
FIG. 3 illustrates a method for performing interference alignment for channel-adaptive waveform modulation according to an embodiment. The method may be performed by any type of transmitter 11 or
receiver 12 that is configured for the communication system 10. The term "device" may encompass the transmitter 11 or the receiver 12.
In step S21, the device obtains at least a part of a first matrix (H_1) and a part of a second matrix (H_2) for the impulse response function of the communication channel 13 between the transmitter 11 and the receiver 12. The part of the first matrix (H_1) relates to channel-induced interference between a current data block (q) and a previously transmitted first data block (q-1). The part of the second matrix (H_2) relates to channel-induced interference between the current data block (q) and a previously transmitted second data block (q-2).
However, the device may obtain channel-induced interference between the current data block and any other two previously transmitted blocks. In other words, the embodiments of the present disclosure are not limited to only the interference relating to the immediately preceding two data blocks. In particular, the embodiments also encompass the situations where the delay spread T_D is greater than twice the symbol interval T_s. As a result, additional matrices (H_3, H_4, . . . ), or any number of such matrices, may be present. As such, the device may obtain a part of these matrices (H_3, H_4, . . . ) for the impulse response function, where the part of the matrix H_k relates to channel-induced interference between the current data block (q) and the data block transmitted k blocks earlier (q-k).
The impulse response may be obtained by transmitting a sequence of pilot signals over the communication channel 13 between the transmitter 11 and the receiver 12. Both the transmitter 11 and the
receiver 12 know the sequence of transmitted pilot signals. For example, these pilot sequences may be programmed into these devices at their manufacture, installation, or upgrade. The pilot signals
are transmitted on the same communication channel 13 that will be used to transport data blocks in the communication phase. The pilot signals may be transmitted along the forward channel from the
transmitter 11 to the receiver 12 in the communication phase. In a duplex communication system, the pilot signals may alternately be transmitted along the reverse communication channel 13 provided
that the reverse and forward communications use the same physical channel and the same carrier frequency, e.g., as in time-division duplex communications.
Then, the device measures the received pilot signals to evaluate the impulse response function of the communication channel 13. The evaluation of the channel's impulse response function involves comparing received forms of the pilot signals to the transmitted forms of the same pilot signals. The comparison determines the values of part or all of the impulse response function, i.e., h_t, for different values of the delay "t" as measured in numbers of sampling intervals. The comparison determines, at least, the values of h_t that define the part of the impulse response function relating to interference between sampling intervals of adjacent data blocks, i.e., the matrices H_1 and H_2 described above. As such, the device obtains a measurement, at least, of the non-zero matrix elements of the channel's impulse response. Furthermore, the comparison may also determine the value of h_0 of Eq. (1), which is not related to such channel-induced inter-data-block interference. In other words, the above comparison yields the matrix H.
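The comparison of transmitted and received pilot forms can be sketched as a least-squares fit. The sketch below is illustrative only (NumPy, random data); the tap count and pilot length are assumptions for the example, not values from the application:

```python
import numpy as np

rng = np.random.default_rng(0)
taps = 4                          # assumed channel memory: h_0 .. h_3
h_true = rng.normal(size=taps)    # the unknown impulse response

pilot = rng.normal(size=64)       # pilot sequence known to both ends
received = np.convolve(pilot, h_true)[:len(pilot)]  # channel output (noiseless)

# Column t holds the pilot delayed by t sampling intervals, so that
# received = P @ h.  Solving the least-squares system recovers the taps.
P = np.column_stack([
    np.concatenate([np.zeros(t), pilot[:len(pilot) - t]])
    for t in range(taps)
])
h_est, *_ = np.linalg.lstsq(P, received, rcond=None)
print(np.allclose(h_est, h_true))   # True: taps recovered exactly
```

In a real receiver the received pilots would also carry noise, and the same least-squares step would then give a noise-averaged estimate of the taps.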
In S22, the device designs or configures a set of one or more linearly independent waveforms based on at least the obtained part of the first matrix (H_1) and the obtained part of the second matrix (H_2) for the impulse response function. For example, the device may design or configure the set of linearly independent waveforms such that a first subspace, spanned by the linearly independent waveforms when multiplied by the obtained part of the first matrix, at least partially overlaps a second subspace, spanned by the linearly independent waveforms when multiplied by the obtained part of the second matrix. In other words, the interference to the current block from the two preceding data blocks (q-1 and q-2) is aligned to occupy at least a portion of the same subspace. In one particular
embodiment, the first subspace and the second subspace occupy the same linear subspace. It is noted, however, that the embodiments encompass interference alignment for any two not necessarily
consecutive previously transmitted blocks (e.g., q-1 and q-3, or q-2 and q-3, etc.).
The set of waveforms may include the waveforms Ψ and the conjugate waveforms Φ. The waveforms Ψ and the conjugate waveforms Φ are designed such that the inter-block interference terms of Eq. (7a)
vanish, i.e., Φ†H_1Ψ=0 and Φ†H_2Ψ=0. In other words, all inter-block interference terms of Eq. (7a) vanish. Thus, Eq. (7b) may be used to obtain estimates of the linear combinations of the components of the input data.
Various methods for constructing input and output waveforms to ensure that the inter-block interference terms of Eq. (7a) vanish are explained with reference to U.S. Pat. No. 7,653,120. Also, the
embodiments may in addition diagonalize the matrix Φ†H
Ψ. Methods for diagonalizing the matrix Φ†H
Ψ are explained with reference to U.S. Pat. No. 7,653,120.
As discussed above, according to the embodiments, the waveforms Ψ and the conjugate waveforms Φ are designed such that the waveforms Ψ, when multiplied by H_1 and H_2, occupy the same subspace. This is further explained with reference to FIG. 4.
FIG. 4 illustrates a method of constructing the waveforms Ψ and the conjugate waveforms Φ such that inter-symbol interference is removed according to embodiments.
In one embodiment, the waveforms Ψ are selected to be a subset [u_1, . . . , u_i, . . . , u_L] of the eigenvectors of the product of the inverse of the first matrix (i.e., H_1^{-1}) and the second matrix (H_2) of the impulse response function of the communication channel. In one particular embodiment, the subset may be the right eigenvectors of H_1^{-1}H_2. The matrix H_1^{-1} exists assuming H_1 is full rank. The corresponding eigenvalues (d_1, . . . , d_i, . . . , d_L) may encompass any value. FIG. 4 illustrates the steps of configuring such waveforms.
In step S31, the device obtains eigenvectors and their corresponding eigenvalues based on an eigenvector decomposition of the matrix H_1^{-1}H_2. Embodiments of the present disclosure encompass any known technique for the decomposition of a matrix into eigenvalues and eigenvectors, such as the Cholesky decomposition or Hessenberg decomposition, for example.
In step S32, the device configures the waveforms Ψ based on the eigenvectors of the matrix H_1^{-1}H_2. For example, in one embodiment, the device selects a subset of the right eigenvectors for use as the waveforms Ψ.
In step S33, the device configures the conjugate waveforms Φ based on the configured waveforms Ψ. The total inter-symbol interference at the receiver is given by the following equation:
[Eq. (8)]
Additional terms corresponding to the matrices H_3, H_4, . . . may be present in Eq. (8).
For example, if Ψ is selected as a subset of the right eigenvectors of the matrix H_1^{-1}H_2, then the two interference terms corresponding to H_1 and H_2 in Eq. (8) span the same space of at most dimension L. The two interference terms are hence aligned in the same subspace. The conjugate waveforms Φ are chosen to be the projection onto a subspace of dimension L of the orthogonal complement of this interference subspace. This ensures that the inter-block interference terms of Eq. (7a) vanish, i.e., Φ†H_1Ψ=0 and Φ†H_2Ψ=0.
Because the two interference parts are aligned as described above, for certain values of the delay spread T of the communication channel, an increased number of waveforms L may be transmitted during any symbol interval T.
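The alignment property above can be checked numerically. In the sketch below (NumPy; the matrix sizes and the labels H1 and H2 for the first and second interference matrices are assumptions for illustration), the two interference terms H1Ψ and H2Ψ span one common subspace, and choosing Φ in its orthogonal complement cancels both:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 8, 3                       # block length and number of waveforms

H1 = rng.normal(size=(M, M))      # interference from block q-1 (invertible a.s.)
H2 = rng.normal(size=(M, M))      # interference from block q-2

# Step S31: eigen-decomposition of H1^{-1} H2.
_, vecs = np.linalg.eig(np.linalg.inv(H1) @ H2)

# Step S32: take L eigenvectors as the columns of Psi.
Psi = vecs[:, :L]

# Since H2 @ psi_i = d_i * H1 @ psi_i, both interference terms lie in the
# L-dimensional subspace spanned by the columns of H1 @ Psi.
S = H1 @ Psi

# Step S33: pick Phi as L directions orthogonal to that subspace.
Q, _ = np.linalg.qr(np.hstack([S, rng.normal(size=(M, M - L))]))
Phi = Q[:, L:2 * L]

print(np.allclose(Phi.conj().T @ H1 @ Psi, 0))
print(np.allclose(Phi.conj().T @ H2 @ Psi, 0))
```

Both inner-product tests come out numerically zero, which is exactly the vanishing of the Eq. (7a) interference terms that the construction is designed to achieve.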
Referring back to FIG. 3, in step S23, the device transmits a sequence of data blocks over the channel, where each data block of the sequence is a weighted linear superposition of the waveforms of
the designed set. In some embodiments, the step of transmitting includes, for each individual data block, amplitude modulating each waveform of the designed set responsive to receipt of an input data symbol and linearly superimposing the modulated waveforms to produce the individual block.
Similarly, the device (when operating as a receiver) may receive a sequence of transmitted data blocks. For example, the device estimates y by measuring correlations between the received data block and the conjugate waveforms. The measurement of each correlation involves evaluating an inner product between a received data block and each of the conjugate waveforms. In particular, the receiver 12 produces an L-dimensional estimate vector y for each input data symbol a, based on Eq. (7a). The conjugate waveforms are the configured conjugate waveforms Φ, as explained with reference to FIG. 4.
FIG. 5 illustrates a comparison of a number of orthogonal input waveforms L achieved by the embodiments (solid line), the channel-adaptive waveform modulation from U.S. Pat. No. 7,653,120 (dashed
line), and the OFDM method (dotted line) according to an embodiment of the present invention.
When comparing the OFDM method to the channel-adaptive waveform modulation scheme, the OFDM method performs relatively well only when T is substantially less than one. When T is greater than or equal to T, the OFDM method achieves 0% efficiency. When comparing the embodiments of the present disclosure to the previous channel-adaptive waveform modulation scheme, the two schemes have the same performance when T is less than or equal to T. However, when T is greater than 1.5T, embodiments of the present disclosure can yield better performance. In particular, for T, there is a 50% improvement in terms of available orthogonal input waveforms L.
Variations of the example embodiments are not to be regarded as a departure from the spirit and scope of the example embodiments, and all such variations as would be apparent to one skilled in the
art are intended to be included within the scope of this disclosure.
Solve Block Fails with Integral of Vector-Matrix
t is not defined. A constraint is not a definition.
Your other problem is that Mathcad does not do the sort of matrix calculations that you are asking for. The exp function cannot be applied to a matrix. And in your actual sheet, you are not even
attempting to do so -- if you look at the actual expression you will find that you have an implicit multiplication there, so you are multiplying exp by a matrix. That doesn't work as functions cannot
be used arithmetically. The parentheses that are used to denote arrays cannot also denote function application. You need separate parentheses for that. Not that it matters in this case, as the
exponential of a matrix is not built into Mathcad, and you would have to define it separately (I've posted a worksheet that shows how to do that at
However, even that will not solve your problems, as Mathcad does not integrate vectors. Only scalars. You must either reduce the problem to a set of scalar integrals or write your own integration routine.
Before you use an expression in a constraint in a solve block, you should make sure that it evaluates properly outside the solve block.
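To illustrate the point in ordinary code (Python/NumPy here, purely for illustration; Mathcad syntax differs), a user-defined matrix exponential can be built from a truncated Taylor series:

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential exp(A) = sum_k A^k / k!, truncated after `terms` terms."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # A^k / k!, built incrementally
        result = result + term
    return result

# exp of the rotation generator [[0, 1], [-1, 0]] is a rotation by 1 radian.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
expected = np.array([[np.cos(1), np.sin(1)], [-np.sin(1), np.cos(1)]])
print(np.allclose(expm_series(A), expected))   # True
```

For badly scaled matrices a scaling-and-squaring method is preferable to the raw series, but the raw series suffices to show that exp applied to a matrix must be defined elementwise in the series sense, not by applying exp arithmetically to a matrix argument.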
Tom Gutman
Almost 20 years ago I was asked a question by Herbert Kociemba, a computer scientist who has one of the best Rubik’s cube solving programs known. Efficient methods of storing permutations in $S_E$
and $S_V$ (the groups of all permutations of the edges $E$ and vertices $V$, respectively, of the Rubik’s cube) are needed, hence leading naturally to the concept of the complement of $S_m$ in $S_n$.
Specifically, he asked if $S_8$ has a complement in $S_{12}$ (this terminology is defined below). The answer is, as we shall see, “no.” Nonetheless, it turns out to be possible to introduce a slightly more general notion of a “$k$-tuple of complementary subgroups” for which the answer to the analogous question is “yes.”
This post is a very short summary of part of a paper I wrote (still unpublished) which can be downloaded here. This post explains the “no” part of the answer. For the more technical “yes” part of the answer, see the more complete pdf version.
Notation: If $X$ is any finite set then
• $|X|$ denotes the number of elements in $X$.
• $S_X$ denotes the symmetric group on $X$.
• $S_n$ denotes the symmetric group on $\{1,2,...,n\}$.
• $A_n$ denotes its alternating subgroup of even permutations.
• $C_n$ denotes the cyclic subgroup of $S_n$ generated by the $n$-cycle $(1,2,...,n)$.
• $M_{10}$ denotes ”the Mathieu group of degree $10$” and order $720=10!/7!$, which we define as the subgroup of $S_{10}$ generated by $(2,6,10,7)(3,9,4,5)$ and $(1,5)(3,8)(4,10)(7,9)$.
• $M_{11}$ denotes the Mathieu group of degree $11$ and order $7920=11!/7!$ generated by $(1,2,3,4,5,6,7,8,9,10,11)$ and $(3,7,11,8)(4,10,5,6)$.
• $M_{12}$ denotes the Mathieu group of degree $12$ and order $95040=12!/7!$ generated by $(2,3,5,9,8,10,6,11,4,7,12)$ and $(1,12)(2,11)(3,10)(4,9)(5,8)(6,7)$.
• For any prime power $q$, ${\Bbb F}_q$ denotes the finite field with $q$ elements.
• $AGL_n({{\Bbb F}}_q)$ denotes the affine group of transformations on ${\Bbb F}_q^n$ of the form $\vec{v}\longmapsto A\vec{v}+\vec{a}$, where $A\in GL_n({{\Bbb F}}_q)$ and $\vec{a} \in {\Bbb F}_q^n$.
If $G$ is a finite group and $G_1,G_2$ are subgroups then we say $G_2$ is a complement of $G_1$ in $G$ when
• $G_1\cap G_2=\{1\}$, the identity of $G$,
• $G=G_1\cdot G_2=\{g_1g_2\ |\ g_1\in G_1,\ g_2\in G_2\}$.
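As a concrete illustration of this definition (a brute-force check added here, not part of the original post), a few lines of Python verify the first entry of the table below, that $A_4$ is a complement of $S_2$ in $S_4$:

```python
from itertools import permutations

def compose(p, q):                  # (p∘q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def is_even(p):
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return inversions % 2 == 0

S4 = set(permutations(range(4)))
G1 = {(0, 1, 2, 3), (1, 0, 2, 3)}        # S_2, acting on the first two points
G2 = {p for p in S4 if is_even(p)}       # A_4, the even permutations

identity = (0, 1, 2, 3)
print(G1 & G2 == {identity})                           # trivial intersection
print({compose(a, b) for a in G1 for b in G2} == S4)   # G1·G2 = S4
```

Since $|G_1|\cdot|G_2| = 2\cdot 12 = 24 = |S_4|$ and the intersection is trivial, the $24$ products $g_1 g_2$ are distinct and exhaust $S_4$.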
Let $X$ denote a finite set. If $G$ is a subgroup of $S_X$ and $x\in X$ then we let $G_x$ denote the stabilizer of $x$ in $G$:
$G_x=\{ g\in G\ |\ g(x)=x\}.$
Let $G$ be a permutation group acting on a finite set $X$ (so $G$ is a subgroup of the symmetric
group of $X$, $S_X$). Let $k\geq 1$ be an integer and let
$X^{(k)}=\{{\rm distinct\ }k{\rm -tuples\ in\ }X\} =\{(x_1,x_2,...,x_k)\ |\ x_i\not= x_j,\ 1\leq i < j\leq k\}.$
We say $G$ acts $k$-transitively on $X$ if $G$ acts transitively on $X^{(k)}$ via the “diagonal” action
$g:(x_1,x_2,...,x_k)\longmapsto (g(x_1),g(x_2),...,g(x_k)).$
If $G$ acts transitively on $X$ and $G_x=1$ for some (hence all) $x\in X$ then we say $G$ acts regularly on $X$. If $G$ acts $k$-transitively on $X$ and acts regularly on $X^{(k)}$ then we say $G$
acts sharply $k$-transitively on $X$.
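For example (an illustrative check, not from the original post), $AGL_1({\Bbb F}_5)$, which appears in the table below, acts sharply $2$-transitively on ${\Bbb F}_5$: every ordered pair of distinct points is the image of $(0,1)$ under exactly one group element:

```python
p = 5
# AGL_1(F_5): the 20 maps x -> a*x + b with a in F_5^*, b in F_5.
group = [(a, b) for a in range(1, p) for b in range(p)]

def act(g, x):
    a, b = g
    return (a * x + b) % p

# Images of the fixed ordered pair (0, 1) under every group element:
images = [(act(g, 0), act(g, 1)) for g in group]
pairs = [(x, y) for x in range(p) for y in range(p) if x != y]

print(len(group))                          # 20 = 5!/3!
print(sorted(images) == sorted(pairs))     # bijection: sharply 2-transitive
```

The map $x \mapsto ax+b$ sends $(0,1)$ to $(b, a+b)$, so each target pair determines $(a,b)$ uniquely, which is exactly the regularity of the action on $X^{(2)}$.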
The classification of sharply $k$-transitive groups, for $k\geq 4$, is due to Jordan: A sharply $k$-transitive group, $k\geq 4$, must be one of the following.
• $k \geq 6$: $S_k$ , $S_{k+1}$ and $A_{k+2}$ only.
• $k = 5$: $S_5$ , $S_6$ , $A_7$ and the Mathieu group $M_{12}$.
• $k = 4$: $S_4$ , $S_5$ , $A_6$ and the Mathieu group $M_{11}$.
We give a table which indicates, for small values of $n$, which $S_m$ have a complement in $S_n$.

$n$ | $m$ | complement of $S_m$ in $S_n$
$4$ | $2$ | $A_4$
$4$ | $3$ | $C_4$
$5$ | $2$ | $A_5$
$5$ | $3$ | $\langle (1,2,3,4,5),(2,3,5,4)\rangle \cong AGL_1({{\Bbb F}}_5)$, size $20$
$5$ | $4$ | $C_5$
$6$ | $2$ | $A_6$
$6$ | $3$ | $\langle (2,3,4,5,6),(3,4,6,5)\rangle \cong PGL_2({{\Bbb F}}_5)$, size $120$
$6$ | $5$ | $C_6$
$7$ | $2$ | $A_7$
$7$ | $5$ | $\langle (1,2,6,5,3,7),(1,4,3,2,5,6)\rangle \cong AGL_1({{\Bbb F}}_7)$, size $42$
$7$ | $6$ | $C_7$
$8$ | $2$ | $A_8$
$8$ | $5$ | $\langle (1,5,8,6,7,4),(1,5,8,7)(2,3,4,6)\rangle \cong PGL_2({{\Bbb F}}_7)$, size $336$
$8$ | $6$ | $AGL_1({{\Bbb F}}_8)$, size $56$
$8$ | $7$ | $C_8$
$9$ | $2$ | $A_9$
$9$ | $6$ | $PGL_2({{\Bbb F}}_8)$, size $504$
$9$ | $7$ | $AGL_1({{\Bbb F}}_9)$, size $72$
$9$ | $8$ | $C_9$
$10$ | $2$ | $A_{10}$
$10$ | $7$ | $M_{10}$, size $720$ ($PGL_2({{\Bbb F}}_9)$ is another)
$10$ | $9$ | $C_{10}$
$11$ | $2$ | $A_{11}$
$11$ | $7$ | $M_{11}$, size $7920$
$11$ | $9$ | $AGL_1({{\Bbb F}}_{11})$, size $110$
$11$ | $10$ | $C_{11}$
$12$ | $2$ | $A_{12}$
$12$ | $7$ | $M_{12}$, size $95040$
$12$ | $9$ | $PGL_2({{\Bbb F}}_{11}) = \langle (1,2,3,4,5,6,7,8,9,10,11), (1,2,4,8,5,10,9,7,3,6), \dots\rangle$, size $1320$
$12$ | $11$ | $C_{12}$
Proposition: $S_m$ has a complement in $S_n$ if and only if there is a subgroup $H$ of $S_n$ such that
1. $H$ acts $(n-m)$-transitively on $\{1,2,…,n\}$,
2. $|H|=n(n-1)...(m+1)=n!/m!$.
Example: $S_{10}$ has not one but two non-isomorphic subgroups, $M_{10}$ and $PGL_2({{\Bbb F}}_9)$, of order $720=10!/7!$, each of which acts $3$-transitively on $\{1,2,...,10\}$. Thus $S_7$ has two
non-isomorphic complements in $S_{10}$.
The statement below is the main result.
Theorem: The following statements hold.
1. If $n>2$ is not a prime power or a prime power plus $1$ then the only $1<m<n$ for which $S_m$ has a complement in $S_n$ are $m=2$ and $m=n-1$.
2. If $n>12$ is a prime power and not a prime power plus $1$ then the only $1<m<n$ for which $S_m$ has a complement in $S_n$ are $m=2$, $m=n-2$ and $m=n-1$.
3. If $n>12$ is a prime power plus $1$ but not a prime power then the only $1<m<n$ for which $S_m$ has a complement in $S_n$ are $m=2$, $m=n-3$ and $m=n-1$.
4. If $n>12$ is both a prime power plus $1$ and a prime power then the only $1<m<n$ for which $S_m$ has a complement in $S_n$ are $m=2$, $m=n-3$, $m=n-2$ and $m=n-1$.
5. If $n\leq 12$, see the above table.
Grobner bases and permutation puzzles, according to Martin Kreuzer
This post is about some research Prof. Martin Kreuzer, at the Univ. of Passau, does which combines Grobner bases and solving permutation puzzles, such as the Rubik’s cube.
Non-commutative Grobner Bases and Twisty Puzzles was the title of his talk (the link takes you to his slides, which he kindly allowed me to post).
Grobner bases were introduced in the mid-1960s (by Bruno Buchberger and, independently, Heisuke Hironaka), but in hindsight it is amazing they weren’t invented earlier. Indeed, Grobner bases arise
when one tried to generalize long division of polynomials in one variable to polynomials of several variables. Non-commutative Grobner bases arise when one does not allow the variables to commute
with each other. The main setting is where
$G = \langle x_1,...,x_n\ |\ l_1=r_1,...,l_s=r_s\rangle$ is a finitely presented monoid,
$K[G] = \oplus_{g\in G} Kg = K\langle X \rangle/I$ is its monoid ring, and where
$I = \langle l_1-r_1,...,l_s-r_s\rangle$ is the two-sided ideal generated by these binomials.
First, Prof. Kreuzer introduces the non-commutative analogs of term ordering, leading term, and Grobner basis with respect to that ordering.
After developing enough theory, it turns out that the word problem in group theory can be re-expressed in terms of the membership problem for $K \langle X \rangle$.
Martin Kreuzer defines a twisty puzzle as a special form of a permutation puzzle. Its pieces (usually called cubies) can be permuted by twisting certain faces of the puzzle. The cubies carry stickers
defining their location in the solved puzzle. For example, with the Rubik’s Cube, the 54 stickers of a 3×3×3 cube can be permuted by 90 degree turns of the faces. The possible permutations form a
group called the Rubik’s cube group. Then he gives the following definition and remarkable Theorem.
Definition: Let G be a twisty puzzle group given via a set X of generating twists.
(a) The graph whose vertices are the elements of G and whose edges are pairs of positions which differ by just one twist is called the Cayley graph of G.
(b) The diameter δ(G) of the Cayley graph of G is called God’s number for the twisty puzzle.
(c) An algorithm which produces for every scrambled position p an element of X∗ whose length is the distance (in the Cayley graph) from p to the solved position is called a God’s algorithm for the
twisty puzzle.
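These definitions can be made concrete on a toy puzzle. The sketch below (plain Python, added for illustration and not part of the talk) runs a breadth-first search over the Cayley graph of $S_3$ generated by two adjacent transpositions and reads off God's number as the graph's diameter:

```python
from collections import deque

def compose(p, q):                    # apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

solved = (0, 1, 2)
twists = [(1, 0, 2), (0, 2, 1)]       # two "twists", each its own inverse

# Breadth-first search from the solved position over the Cayley graph.
dist = {solved: 0}
queue = deque([solved])
while queue:
    g = queue.popleft()
    for t in twists:
        h = compose(t, g)
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(len(dist))              # 6: the twists generate all of S_3
print(max(dist.values()))     # 3: God's number for this toy puzzle
```

Each shortest path found by the search is a God's-algorithm solution for the corresponding scrambled position; the Grobner-basis normal form in the theorem below plays the same role without building the whole graph.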
Theorem (God’s Algorithm via Grobner Bases)
Let X be a set of twists (and their inverses) which generate the group of a twisty puzzle. Given a scrambled position p of the puzzle, perform the following steps:
(1) Compute a Grobner basis G of the defining ideal of the twisty puzzle group ring w.r.t. a length compatible word ordering.
(2) Find any word w ∈ X∗ representing a sequence of twists that result in the position p.
(3) Compute NRG(w) ∈ X∗ and return this word.
This is an algorithm which computes a word representing a shortest possible sequence of twists that produces p from the solved position.
This has even been implemented on a computer in CoCoA, for a permutation puzzle with a very small group.
Another example of a Prohibition era cipher message
The following message, with Elizebeth Friedman‘s decryption, is a message from “Consolidated Exporters” (a Vancouver Canada company illegally importing liquor into the US) to one of its fleet of
about 50 “black ships”, the SS Noble. (They are called “black ships” because they “rum run” without lights at night to avoid detection.)
Many thanks to the George C. Marshall Foundation, Lexington, Virginia, for providing this reproduction!
If you were a math textbook …
If I were a Springer-Verlag Graduate Text in Mathematics, I would be David Eisenbud’s Commutative Algebra with a view towards Algebraic Geometry.
I am an attempt to write on commutative algebra in a way that includes the geometric ideas that played a great role in its formation; with a view, in short, towards Algebraic Geometry. I cover the
material that graduate students studying Algebraic Geometry – and in particular those studying the book Algebraic Geometry by Robin Hartshorne – should know. The reader should have had one year of
basic graduate algebra.
Which Springer GTM would you be? The Springer GTM Test
A 1932 memorandum on rum-runner’s cryptosystems
During the prohibition era, organized crime made phenomenal amounts of money through illegal smuggling. Eventually, their messages were enciphered. By the late 1920s and early 1930s, these
cryptosystems became rather sophisticated. Here is a memorandum from Elizebeth Friedman to CMDR Gorman in January of 1932 which gives an indication of this sophistication.
Many thanks to the George C. Marshall Foundation, Lexington, Virginia, for providing this reproduction!
A 1926 message from “I Am Alone”
The I Am Alone, flying a Canadian flag, was a rumrunner sunk by Coast Guard patrol boats in the Gulf of Mexico in March, 1929. The USCG knew that the ship was not a Canadian owned-and-controlled
vessel but the proof went down to the bottom of the ocean when it was sunk. The Canadian Government sued the United States for $365,000 and the ensuing legal battle brought world-wide attention.
Elizebeth Friedman decoded the messages transmitted by the I Am Alone, and those messages proved that the boat was, in fact, not a Canadian owned-and-controlled vessel. The case actually went to a
Commission, whose final report was issued in 1935. They found, thanks in part to Elizebeth Friedman, that the owners and controllers of the vessel were not Canadian and used the boat primarily for
illegal purposes.
The image below is a scan of an intercepted message, dated 1926-02-15, from the I Am Alone.
The writing is that of EF and you can make out her (mostly) deciphered message.
For more information, see, for example,
“All Necessary Force”: The Coast Guard And The Sinking of the Rum Runner “I’m Alone” by Joseph Anthony Ricci, 2011, or
“Listening to the rumrunners” by David Mowry, 2001.
The above image is courtesy of the Elizebeth S. Friedman Collection at the George C. Marshall Foundation, Lexington, Virginia. (If you make use of the image, please acknowledge the Marshall
Lester Hill’s “The checking of the accuracy …”, part 10
Construction of finite fields for use in checking
Let $F_\Gamma$ denote a finite algebraic field with $\Gamma$ elements. It is well-known that, for a given $\Gamma$, all fields $F_\Gamma$ are “simply isomorphic”, and therewith, for our purposes, identical. We shall consequently refer, without restraint, to “the field $F_\Gamma$.”
If $p$ is a prime positive integer greater than $1$, $F_p$ is called, according to the terminology of Section 8, Example 2, a “primary” field. Explicit addition tables, as was noted in Section 8, are hardly required in dealing with primary fields. The most useful of these fields, in telegraphic checking, are probably $F_{23}$, $F_{29}$, $F_{31}$, and $F_{101}$. The field $F_{101}$ will be considered in detail in what follows.
The number of elements in a non-primary finite algebraic field is a power of a prime. If we have
$q=p^k,$
where $p$ and $k$ are positive integers greater than $1$, and $p$ is prime, the non-primary field $F_q$ may be constructed very easily by algebraic extension of the field $F_p$. An explicit addition table is needed when working with a non-primary field. Otherwise, checking operations are exactly the same as in primary fields.
Example: The field $F_3$ with the elements (marks) $0,1,2$, has the tables
$\begin{array}{r|*{3}{r}} \multicolumn{1}{c|} +&0&1&2\\\hline {}0&0&1&2\\ {}1&1&2&0\\ {}2&2&0&1\\ \end{array}$
$\begin{array}{r|*{3}{r}} \multicolumn{1}{c|} \cdot &0&1&2\\\hline {}0&0&0&0\\ {}1&0&1&2\\ {}2&0&2&1\\ \end{array}$
By adjoining a root of the equation $x^2=2$, an equation which is irreducible in $F_3$, we easily obtain the field $F_9$ with marks
$\alpha j+\beta$
where $\alpha$ and $\beta$ denote elements of $F_3$. The marks of $F_9$ are thus
$0,\ 1,\ 2,\ j,\ j+1,\ j+2,\ 2j,\ 2j+1,\ 2j+2.$
These marks (elements) are combined, in the rational field operations of $F_9$, according to the reduction formula $j^2=2$. If we label the marks of $F_9$ as follows
$\begin{array}{ccccccccc} 0 & 1 & 2 & j & j+1& j+2& 2j& 2j+1& 2j+2\\ 0 & 1 & a & b & c & d & e & f & g\\ \end{array}$
the addition and multiplication tables of the field are given as in
Section 8, Example 1.
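The arithmetic of $F_9$ just described is easy to mechanize. In this sketch (an illustration, not part of Hill's paper), a mark $\alpha j+\beta$ is stored as the pair $(\alpha,\beta)$ with entries mod $3$, and products are reduced with $j^2=2$:

```python
# A mark alpha*j + beta of F_9 is the pair (alpha, beta) with entries mod 3.
def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def mul(x, y):
    a1, b1 = x
    a2, b2 = y
    # (a1 j + b1)(a2 j + b2) = a1 a2 j^2 + (a1 b2 + a2 b1) j + b1 b2,  j^2 = 2
    return ((a1 * b2 + a2 * b1) % 3, (2 * a1 * a2 + b1 * b2) % 3)

j, one = (1, 0), (0, 1)
print(mul(j, j) == (0, 2))        # True: j^2 = 2

# The nonzero marks form a cyclic group of order 8, so c^8 = 1 for c != 0.
c, power = (1, 1), one            # c = j + 1
for _ in range(8):
    power = mul(power, c)
print(power == one)               # True
```

The same two functions, with the reduction formula swapped, would serve for the $F_{27}$, $F_{32}$, and $F_{25}$ constructions that follow.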
In a like manner, $F_{27}$ can be obtained from $F_3$ by adjunction of a root of the equation $x^3=x+1$, which is irreducible in $F_3$ and $F_9$. The marks (elements) of $F_{27}$ are
$\alpha j^2+\beta j+\gamma,$
where $\alpha,\beta,\gamma$ are elements of $F_3$. They are combined, in the rational operations of $F_{27}$ according to the reduction formula $j^3=j+1$.
Example: The field $F_2$ with the elements (marks) $0,1$, has the tables
$\begin{array}{r|*{2}{r}} \multicolumn{1}{c|} + &0&1\\ \hline {}0&0&1\\ {}1&1&0\\ \end{array}$
$\begin{array}{r|*{2}{r}} \multicolumn{1}{c|} \cdot &0&1\\ \hline {}0&0&0\\ {}1&0&1\\ \end{array}$
By adjunction of a root of the equation $x^5=x^2+1$, which is irreducible in the fields $F_2$, $F_4$, $F_8$ and $F_{16}$, we easily obtain the field $F_{32}$. The marks of $F_{32}$ are
$\alpha j^4+\beta j^3+\gamma j^2+\delta j+\epsilon,$
where $\alpha,\beta,\gamma, \delta,\epsilon$ are elements of $F_2$; and these $32$ marks are combined, in the rational operations of $F_{32}$, according to the reduction formula $j^5=j^2+1$.
Example: The field $F_5$ with the elements (marks) $0,1,2,3,4$, has the tables
$\begin{array}{r|*{5}{r}} \multicolumn{1}{c|} +&0&1&2&3&4\\\hline {}0&0&1&2&3&4\\ {}1&1&2&3&4&0\\ {}2&2&3&4&0&1\\ {}3&3&4&0&1&2\\ {}4&4&0&1&2&3\\ \end{array}$
$\begin{array}{r|*{5}{r}} \multicolumn{1}{c|} \cdot &0&1&2&3&4\\\hline {}0&0&0&0&0&0\\ {}1&0&1&2&3&4\\ {}2&0&2&4&1&3\\ {}3&0&3&1&4&2\\ {}4&0&4&3&2&1\\ \end{array}$
By adjoining a root of the equation $x^2=2$, which is irreducible in $F_{5}$, we readily obtain the field $F_{25}$. The marks of $F_{25}$ are
$\alpha j+\beta ,$
where $\alpha,\beta$ are elements of $F_5$; and these $25$ marks are combined, in the rational operations of $F_{25}$, according to the reduction formula $j^2=2$.
Of the non-primary fields, $F_{25}$, $F_{27}$, $F_{32}$ are probably those which are most amenable to practical application in telegraphic checking.
mathematics problem 155
A colleague Bill Wardlaw (March 3, 1936-January 2, 2013) used to create a “Problem of the Week” for his students, giving a prize of a cookie if they could solve it. Here is one of them.
Mathematics Problem, #155
We can represent a triangle with sides of length a, b, c by the ordered triple (a, b, c). Changing the order of the sides doesn’t change the triangle, so (a, b, c), (b, a, c), (b, c, a), (c, b, a), (c, a, b), and (a, c, b) all represent the same triangle. To avoid confusion, let’s agree to write (a, b, c) with a ≤ b ≤ c. We say that a triangle (a, b, c) is integral if a, b, and c are integers. How many integral triangles are there with longest side less than or equal to 100?
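A brute-force count (added for illustration; certainly not the clever solution the cookie was meant for) takes a few lines of Python. It counts triples with a ≤ b ≤ c, so equilateral and isosceles triangles are included, and uses the triangle inequality a + b > c:

```python
def integral_triangles(n):
    """Count triples a <= b <= c <= n of positive integers with a + b > c."""
    return sum(1 for c in range(1, n + 1)
                 for b in range(1, c + 1)
                 for a in range(1, b + 1)
                 if a + b > c)

print(integral_triangles(2))     # 3: (1,1,1), (1,2,2), (2,2,2)
print(integral_triangles(100))   # the answer to the problem of the week
```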
Mathematics Problem 154
A colleague Bill Wardlaw (March 3, 1936-January 2, 2013) used to create a “Problem of the Week” for his students, giving a prize of a cookie if they could solve it. Here is one of them.
Mathematics Problem, #154
Find the volume of the intersection of three cylinders, each of radius a, which are centered on the x-axis, the y-axis, and the z-axis. That is, find the volume of the three dimensional region
E = {(x,y,z) | x^2 + y^2 < a^2, y^2 + z^2 < a^2, z^2 + x^2 < a^2}.
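A quick numerical check (a Python sketch added for illustration, not part of the problem) approximates the volume on a midpoint grid in the positive octant and compares it to the known closed form $8(2-\sqrt{2})a^3$ for this tricylinder Steinmetz solid:

```python
import numpy as np

a, n = 1.0, 300
t = ((np.arange(n) + 0.5) / n * a) ** 2      # squared grid midpoints on [0, a]
x2, y2, z2 = t[:, None, None], t[None, :, None], t[None, None, :]

# Indicator of E restricted to the positive octant, sampled at cell midpoints.
inside = (x2 + y2 < a**2) & (y2 + z2 < a**2) & (z2 + x2 < a**2)

volume = 8 * inside.mean() * a**3            # E is symmetric in all 8 octants
closed_form = 8 * (2 - np.sqrt(2)) * a**3    # ≈ 4.6863 for a = 1
print(abs(volume - closed_form) < 0.05)      # True
```

The broadcasting trick keeps memory modest: only the final boolean grid is full size, while the pairwise sums stay two-dimensional.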
Lester Hill’s “The checking of the accuracy …”, part 9
We may inquire into the possibility of undisclosed errors occurring in the transmittal of the sequence:
$\begin{array}{ccccccccccccccc} f_1 & f_2 & f_3 & f_4 & f_5 & f_6 & f_7 & f_8 & f_9 & f_{10} & f_{11} & f_{12} & c_1 & c_2 & c_3 \\ 5 & 17 & 13 & 21 & 0 & 8 & 6 & 0 & 11 & 0 & 11 & 11 & 6 & 15 & 2 \\
X & T & Y & P & V & Z & R & V & H & V & H & H & R & I & F \\ \end{array}$
Invoking the theorem established in sections 4 and 5, and formulated at the close of section 5, we may assert:
• (1) If not more than three errors are made in transmitting the fifteen letters of the sequence, and if the errors made affect the $f_i$ only, the $c_j$ being correctly transmitted, then the
presence of error is certain to be disclosed.
• (2) If not more than three errors are made, all told, but at least three of them affect the $f_i$, then the presence of error will enjoy only a
$1-{\rm in}-22^3\ \ \ \ \ (1-{\rm in}-10648)$
chance of escaping disclosure.
These assertions result at once from the theorem referred to. But a closer study of the reference matrix employed in this example permits us to replace them by the following more satisfactory
• (1′)
If errors occur in not more than three of the fifteen elements of the sequence $f_1f_2\dots f_{12}c_1 c_2 c_3$, and if at least one of the particular elements $f_{11}f_{12}c_2$ is correctly transmitted, the presence of error will certainly be disclosed. But if exactly three errors are made, affecting precisely the elements $f_{11}f_{12}c_2$, the presence of error will enjoy a $1$-in-$22^2$ ($1$-in-$484$) chance of escaping disclosure.
• (2′)
If more than three errors are made, then whatever the distribution of errors among the fifteen elements of the sequence, the presence of error will enjoy only a
$1$-in-$22^3$ ($1$-in-$10648$) chance of escaping disclosure.
Assertions of this kind will be carefully established below, when a more important finite field is under consideration. The argument then made will be applicable in the case of any finite field. But
it is worthwhile here to look more carefully into the exceptional distribution of errors which is italicized in (1′). This will help us note any weakness that ought to be avoided in the construction
of reference matrices.
Suppose that exactly three errors are made, affecting precisely $f_{11}f_{12}c_2$. If the mutilated message is to check up, and the errors to escape disclosure, we must have (for error notations, see
sections 4,5):
$11\epsilon_{11}+12\epsilon_{12}=0,\ \ \ 11^2\epsilon_{11}+12^2\epsilon_{12}=\delta_2,\ \ \ 11^3\epsilon_{11}+12^3\epsilon_{12}=0.$
These equations may be written:
$11\epsilon_{11}+12\epsilon_{12}=0,\ \ \ 6\epsilon_{11}+6\epsilon_{12}=\delta_2,\ \ \ 20\epsilon_{11}+3\epsilon_{12}=0.$
But $11\epsilon_{11}=-12\epsilon_{12}$ can be written $11\epsilon_{11}=11\epsilon_{12}$, or $\epsilon_{11}=\epsilon_{12}$.
Etc. In this way, we find that the errors can escape disclosure if and only if
$\epsilon_{11}=\epsilon_{12},\ \ \ {\rm and}\ \ \ \delta_{2}=12\epsilon_{11}.$
The error $\epsilon_{11}$ can be made quite arbitrarily. But then the values of $\epsilon_{12}$ and $\delta_2$ are then completely determined. There is evidently a $1$-in-$484$ chance – and no more –
that the errors will fall out just right.
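Hill's computations here are carried out in the field $F_{23}$ (note that $11^2 = 12^2 = 6$ and $11\cdot 12 = 17$ modulo $23$). An exhaustive check (added here for illustration, not part of the paper) confirms the claim: exactly $22$ error triples $(\epsilon_{11},\epsilon_{12},\delta_2)$ escape disclosure, namely those with $\epsilon_{12}=\epsilon_{11}$ and $\delta_2=12\epsilon_{11}$:

```python
p = 23   # the reference matrix in this example lives in F_23

escaping = [
    (e11, e12, d2)
    for e11 in range(1, p)            # nonzero error in f_11
    for e12 in range(1, p)            # nonzero error in f_12
    for d2 in range(1, p)             # nonzero error in c_2
    if (11 * e11 + 12 * e12) % p == 0
    and (11**2 * e11 + 12**2 * e12 - d2) % p == 0
    and (11**3 * e11 + 12**3 * e12) % p == 0
]

print(len(escaping))   # 22: one escaping triple per nonzero choice of e11
print(all(e12 == e11 and d2 == (12 * e11) % p for e11, e12, d2 in escaping))
```

Out of the $22^3 = 10648$ equally likely error triples, $22$ escape, which is precisely the $1$-in-$484$ chance stated above.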
The trouble arises from the vanishing, in our reference matrix, of the two-rowed determinant
$\left| \begin{array}{cc} 11 & 12 \\ 11^3 & 12^3 \end{array} \right| .$
Note that
$\left| \begin{array}{cc} 11 & 12 \\ 11^3 & 12^3 \end{array} \right| = 11\cdot 12\cdot \left| \begin{array}{cc} 1 & 1 \\ 11^2 & 12^2 \end{array} \right| = 17\, \left| \begin{array}{cc} 1 & 1 \\ 11^2 & 12^2 \end{array} \right| = 0,$
since $11^2=12^2$.
From the fact that $\left| \begin{array}{cc} 11 & 12 \\ 11^3 & 12^3 \end{array} \right|$ is the only vanishing determinant of any order in the matrix employed, all other assertions made in (1′) and (2′) are readily justified. This will be made clear in the following sections.
It will be advantageous, as shown more completely in subsequent sections, to employ reference matrices which contain the smallest possible number of vanishing determinants of any orders.
Prove V is not a vector space
September 19th 2013, 05:20 PM
Prove V is not a vector space
Let $V = \{x,y\}$ be a set with exactly two vectors, x and y. Define vector addition and scalar multiplication in V by the following rules:
Vector addition: $x+x=x$, $y+y=x$, $x+y=y$, and $y+x=y$.
Scalar multiplication: $cx=x$, and $cy=y$ for all $c \in \mathbb{R}$.
Prove that V is not a vector space by finding one axiom in the definition of a vector space that fails to hold. You must state the axiom clearly and show it does not hold.
I'm at a loss on this one.
September 19th 2013, 05:23 PM
Re: Prove V is not a vector space
Hey jeremy5561.
You need to evaluate the axioms one by one. The first step is to write all the axioms down. Once you've done that then check them one by one.
We will wait for you to write all the axioms down and then if needed, we can look at specific axioms if you need specific help.
September 19th 2013, 05:49 PM
Re: Prove V is not a vector space
Closure under addition:
Passes because no matter what combination of x and y are added, the result is either x or y, both of which are in V.
Closure under scalar multiplication.
Passes because cx=x and cy=y for all c in real so no matter what scalar multiple you use the result will be in V.
There exists a zero vector such that 0+v = v
The zero vector could be x because x+x=x and x+y=y.
There exists an additive inverse in V such that v + (-v) = 0 for all v in V.
If x and y are both zero vectors then this would work out. x and y would be the additive inverse for both x and y.
September 19th 2013, 06:00 PM
Re: Prove V is not a vector space
Commutativity: passes based on last 2 addition statements.
Distributivity over vector addition: true since cx = x and cy = y. That means c(x+y)=cx+cy, and any other such distributive equation you can come up with evaluates the same way on both sides.
Associativity: holds, e.g. (x+x)+y = x+y = y and x+(x+y) = x+y = y.
September 19th 2013, 06:24 PM
Re: Prove V is not a vector space
Multiplicative identity: any c can be used here as the multiplicative identity.
Multiplicative associativity (c1c2)v1=c1(c2v1): am pretty convinced this is true because both of these evaluate right to v1.
September 19th 2013, 06:38 PM
Re: Prove V is not a vector space
Oh I think I got it
a not= b
WAIT NEVER MIND x+x=x and y+y=x so x+x=y+y
It turns out that it is
(a+b)y = ay+by
a, b are real numbers and thus so is a+b, so the left side is (a+b)y = y
while the right side is ay + by = y + y = x
y = x (contradicts distributivity, since x and y are supposed to be distinct)
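For what it's worth, the operations in this problem are small enough to check mechanically. A quick sketch (my own addition, not from the thread) that tests the distributivity axiom (a+b)v = av + bv:

```python
# The two "vectors" and the operations from the problem statement.
add = {('x', 'x'): 'x', ('y', 'y'): 'x', ('x', 'y'): 'y', ('y', 'x'): 'y'}

def smul(c, v):
    # cx = x and cy = y for every real scalar c
    return v

# Distributivity over scalar addition: (a+b)v should equal av + bv.
a, b = 1.0, 1.0
for v in ('x', 'y'):
    lhs = smul(a + b, v)
    rhs = add[(smul(a, v), smul(b, v))]
    print(v, lhs, rhs, lhs == rhs)   # holds for x, fails for y (y vs x)
```

With a = b = 1, the left side for v = y is 2y = y while the right side is y + y = x, which is exactly the contradiction found in the thread.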
September 19th 2013, 06:57 PM
Re: Prove V is not a vector space
Just a bit curious, but how do I show whether x and y have additive inverses or not? I guess both x and y could be their own additive inverses?
Oh, I got it: since x is the zero vector, x + x = x (the zero vector) means x is its own additive inverse; the inverse exists because there is an item in V for which x + (additive inverse) = x (the zero vector). Likewise for y, since y + y = x.
September 19th 2013, 07:50 PM
Re: Prove V is not a vector space
I'm a bit confused. Why does showing x=y prove that V isn't a vector space? Why doesn't it just prove that x and y are the same?
September 20th 2013, 08:53 AM
Re: Prove V is not a vector space
Because by hypothesis, V has exactly two vectors (according to the first line in your first post). If this is true, then it can't be a vector space.
If you decide to ignore that statement, then you arrive at a vector space which is isomorphic to the vector space containing only a zero element, or $V = \{0\}$.
Modern factor analysis (2nd ed.)
- IEEE TRANS. NEURAL NETW , 1999
Cited by 511 (34 self)
Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as
possible. In this paper, we use a combination of two different approaches for linear ICA: Comon’s information-theoretic approach and the projection pursuit approach. Using maximum entropy
approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by
minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are
analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point
algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions very fast and reliably.
- Cognitive Science , 1998
Cited by 59 (6 self)
This paper has two objectives. First, we will argue that the mutability of conceptual fea- tures can be represented as a single, multiple-valued dimension. We will show that the fea- tures of a
concept can be reliably ordered with respect to the degree to which people are willing to transform the feature while retaining the integrity of a representation; i.e., that a number of conceptual
tasks, all of which require people to transform conceptual features, produce similar orderings. Following Medin and Shoben (1988), these tasks have in common that they ask people to consider an
object that is missing a feature but is otherwise intact (e.g., a real chair without a seat)
Computer Information Systems This study demonstrates that a commonly used type of factor analysis, principal components analysis, contributes to the amount of error in statistical analysis. In the study, which concerns email use, factor analyses were performed using several different factor analysis methods. The results show that using factors derived via principal components analysis as
dependent variables substantially increased the amount of error in regression analyses, and in several cases reduced the amount of explained variance.
We present a novel classification and regression method that combines exploratory projection pursuit (unsupervised training) with projection pursuit regression (supervised training), to yield a new family of cost/complexity penalty terms. Some improved generalization properties are demonstrated on real-world problems.
We propose an object recognition scheme based on a method for feature extraction from gray level images that corresponds to recent statistical theory, called projection pursuit, and is derived from a biologically motivated feature extracting neuron. To evaluate the performance of this method we use a set of very detailed psychophysical three-dimensional object recognition experiments (Bülthoff and Edelman 1992).
The individualism and collectivism constructs are theoretically analyzed and linked to certain hypothesized consequences (social behaviors, health indices). Study 1 explores the meaning of these
constructs within culture (in the United States), identifying the individual-differences variable, idiocentrism versus allocentrism, that corresponds to the constructs. Factor analyses of responses
to items related to the constructs suggest that U.S. individualism is reflected in (a) Self-Reliance With Competition, (b) Low Concern for Ingroups, and (c) Distance from Ingroups. A higher order factor analysis suggests that Subordination of Ingroup Goals to Personal Goals may be the most important aspect of U.S. individualism. Study 2 probes the limits of the constructs with data from two
collectivist samples (Japan and Puerto Rico) and one individualist sample (Illinois) of students. It is shown that responses depend on who the other is (i.e., which ingroup), the context, and the
kind of social behavior (e.g., feel similar to other, attentive to the views of others). Study 3 replicates previous work in Puerto Rico indicating that allocentric persons perceive that they receive
more and a better quality of social support than do idiocentric persons, while the latter report being more lonely than the former. Several themes, such as self-reliance, achievement, and
competition, have different meanings in the two kinds of societies, and detailed examinations of the factor patterns show how such themes
I throw a stone at 20 degrees; when the stone falls back to the ground, it lands 100 m away. Using CALCULUS methods, find the initial speed of the stone.
I assume this question is too hard so no one answers..
from the vertical motion (initial vertical velocity u·sin(theta)), use the first equation of motion to find the time it takes to get to the highest point; the time of flight will be twice that. Then take the horizontal motion (velocity u·cos(theta)) and plug the time of flight into the second equation of motion. You'll get a formula for the range in terms of theta, the initial velocity u, and g. It will look like this: RANGE = u^2 * sin(2*theta)/g
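Not from the thread, but here is the calculus route the asker wanted: integrating a(t) = -g twice gives y(t) = u·sin(theta)·t - g·t^2/2, so y(T) = 0 at the time of flight T = 2u·sin(theta)/g, and the range is x(T) = u·cos(theta)·T = u^2·sin(2*theta)/g. Inverting that for u (assuming g = 9.8 m/s^2):

```python
from math import sin, sqrt, radians, isclose

g = 9.8                 # m/s^2 (assumed value)
R = 100.0               # range in metres
theta = radians(20)

# R = u^2 * sin(2*theta) / g  =>  u = sqrt(R * g / sin(2*theta))
u = sqrt(R * g / sin(2 * theta))
print(round(u, 1))      # roughly 39.0 m/s

# sanity check: plugging u back in reproduces the range
assert isclose(u ** 2 * sin(2 * theta) / g, R)
```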
got it?
where is the calculus part..
The Functions of Mathematical Physics
This book, first published in 1971, is about what we sometimes call “special functions”. It emphasizes special functions of particular interest in physics. Many of these arise naturally as solutions
to ordinary differential equations. Yet there are a bewildering number of “standard” special functions and most of them have more than one definition. It can be quite a challenge for an author to get
the reader past the feeling that the subject is nothing but a chaotic collection of formulas. This book attempts to do that through careful selection and organization. It is by no means a
comprehensive study of special functions. Hochstadt instead chose his topics according to his estimation of their value in mathematical physics, and to some extent to follow his own interests.
Most of the subjects treated here arose from questions studied in the eighteenth or nineteenth centuries by the likes of Gauss, Euler, Bessel and Legendre, and nearly all of them were motivated by
physical problems. Now, when we tend to be fussier about the distinctions between pure and applied mathematics, we’d say that this book is split between topics of interest primarily in applied
mathematics and topics of purely mathematical interest.
The major topics are orthogonal polynomials (two chapters), hypergeometric functions (also two chapters), the Gamma, Legendre and Bessel functions, and spherical harmonics. The final chapter, a bit
of an outlier, focuses on Hill’s equation. This was first used to describe the stability of the moon’s orbit and later applied to the motion of an electron in a crystal. When this book was written,
Hill’s equation was not well known in the literature, and this chapter was partly intended to fill that gap.
Hochstadt does not assume any prior knowledge of special functions, but he does expect the reader to be very comfortable with real and complex analysis at or above the advanced undergraduate level.
Although the presentation is directed in part at physicists, engineers and other applied scientists, the treatment occasionally pushes the envelope with some of the discussions (for example, Riemann
surfaces and Schwarz-Christoffel transformations). The writing is fluid and arguments are generally easy to follow. However, there is very little attempt to put the discussions of special functions
in context or explain where and why they might be used. So, if readers don’t know why they should care about hypergeometric functions, they’re not going to learn it here. This is nonetheless a
valuable reference, a great place to browse or send students to learn about orthogonal polynomials, Legendre polynomials, Bessel functions and the like.
Bill Satzer (wjsatzer@mmm.com) is a senior intellectual property scientist at 3M Company, having previously been a lab manager at 3M for composites and electromagnetic materials. His training is in
dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.
A question about the limit of a specific function, but not sure how to formulate it
August 24th 2011, 12:45 AM #1
May 2011
A question about the limit of a specific function, but not sure how to formulate it
I was thinking of this problem for a couple of hours, but wasn't sure how to formulate it, hence I wasn't able to google it.
It's pretty simple to explain, though. Observe any function with a finite integral over the R line
$\int f(t)dt=A=const.$
And now look at the function
$G(\tau) = \int f(t - \tau)dt.$
Now, for any finite $\tau$, $G(\tau)=A$, obviously.
However, I asked myself what is the answer for $\lim_{\tau \to +\infty}G(\tau)$?
I'm not sure if I have or don't have the right to exchange the limit and the integral
$\lim_{\tau \to +\infty}\int f(t - \tau)dt =\int \lim_{\tau \to +\infty}f(t - \tau)dt$
mainly because I have no idea how to interpret this
$\lim_{\tau \to +\infty}f(t - \tau)$
So, what say you? I'm clueless.
Re: A question about the limit of a specific function, but not sure how to formulate
I understand you have no idea how to interpret $\lim_{\tau \to +\infty}f(t-\tau)$, since it's possible that this limit doesn't exist. For example, consider an even function $f$: for each $n \geq 1$, take $f$ affine on $\left[n-2^{-n},n\right]$ and $\left[n,n+2^{-n}\right]$ with $f(n)=1$, so that $\int_{n-2^{-n}}^{n+2^{-n}}f(t)\,dt=2^{-n}$, and put $f=0$ on the complement of these intervals: then $f$ is continuous and integrable but has no limit as $x\to \pm\infty$.
Re: A question about the limit of a specific function, but not sure how to formulate
Because the limiting process only ever considers finite (but large) $\tau$, the limit is $A$.
Re: A question about the limit of a specific function, but not sure how to formulate
In my opinion it is important to specify that you are dealing with a definite integral over the R line, that is...
$\int_{- \infty}^{+ \infty} f(t)\ dt = A= \text{const}$ (1)
Now if you define a function of $\tau$ as...
$G(\tau)= \int_{- \infty}^{+\infty} f(t-\tau)\ dt$ (2)
... it is evident that $G(\tau)=A= \text{const}$, so that $\lim_{\tau \rightarrow \infty} G(\tau)=A$...
Kind regards
Re: A question about the limit of a specific function, but not sure how to formulate
Hm, I think I get it now, hopefully. Thank you all!
@girdav Thanks for your reply bro, although to be honest - I didn't understand zip of it, I'm just a poor engineering student
The only thing that is still uncomfortable for me is that I can't seem to have an analytic form of the function I integrate, that is, $\lim_{\tau \to +\infty}f(t-\tau)$ as a function of t.
In other words, if I tell you the analytic form of the function $f(t)$, for example $f(t)=e^{-t^2}$, could you tell me the analytic form of the function $\lim_{\tau \to +\infty}f(t-\tau)$, as a
function of $t$?
Also, while we're on the subject, I haven't encountered interchange of limits and integrals like this before. It was usually the sequence of functions for me, and their interchange with the
integral if the sequence converges uniformly etc.
What would be the conditions for the interchange in a case like this?
Last edited by lajka; August 25th 2011 at 10:41 PM.
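Not part of the thread, but the example $f(t)=e^{-t^2}$ makes the tension concrete: $G(\tau)=\sqrt{\pi}$ for every $\tau$, yet $\lim_{\tau\to+\infty}f(t-\tau)=0$ pointwise, so swapping the limit and the integral would give $0$, not $\sqrt{\pi}$; the interchange is invalid here. (Dominated convergence fails: $\sup_\tau e^{-(t-\tau)^2}=1$ for every $t$, which is not integrable.) A quick numeric check that $G$ really is constant in $\tau$:

```python
from math import exp, pi, sqrt

def G(tau, lo=-60.0, hi=60.0, n=200_000):
    # midpoint Riemann sum for the integral of f(t - tau), with f(t) = exp(-t^2)
    h = (hi - lo) / n
    return h * sum(exp(-((lo + (k + 0.5) * h) - tau) ** 2) for k in range(n))

for tau in (0.0, 5.0, 25.0):
    print(tau, G(tau))    # each value is close to sqrt(pi) = 1.77245...
```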
Sun City West Calculus Tutor
...The solutions are very carefully detailed, and important concepts are particularly emphasized for attention. The student is urged to ask questions in discussing those problems, and, in turn, I
ask peripheral questions to ensure good basic comprehension. I use a modified Socratic method of teach...
30 Subjects: including calculus, chemistry, English, reading
...When I was in high school, my father bought me disks with math programs on them. The math content on the disks were Algebra 1, Algebra 2, and Geometry. I practiced and mastered them all during
my downtime after school.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
...It is my job as a tutor to guide the students to sift through the problems and obtain the correct answer. This is done by providing them with thoughtful sequential questions. This road map I
take with the students will enable them to continue the thought process without me there.
20 Subjects: including calculus, chemistry, physics, geometry
...I have a relaxed, but encouraging teaching style and I form strong relationships with my students. I am an Eagle Scout as well as a talented chess player and have coached chess for the past two
years. I have a passion for education and teaching and really enjoy working with students and helping them learn.
14 Subjects: including calculus, physics, geometry, algebra 1
...Prealgebra is such an important part of math. It is the fundamental building block of all advanced mathematics. Without a strong understanding of prealgebra, all other math is much more difficult.
20 Subjects: including calculus, physics, computer programming, C
Notation confusion; |\pi N; I, I_3> states
The N stands for Nucleon. The nucleon states with I=1/2 are the proton (I_3 = +1/2) and the neutron (I_3 = -1/2). So the first line says the combined state with I = 3/2, I_3 = 3/2 consists of a π^+ and a proton.
To get the last line, we apply the operator I_- that lowers the total I_3. It acts on both the pion and the proton. Lowering the proton to a neutron gives us the first term, while lowering the π^+ to a π^0 gives us the second term. As you say, the √'s in front of these terms are Clebsch-Gordan coefficients.
OK, so would a way to look at that operation be:
##I_- |\pi N; \frac{3}{2},\frac{3}{2}\rangle = I_- |\pi ;1,1\rangle \otimes | N; \frac{1}{2},\frac{1}{2}\rangle + |\pi ;1,1\rangle \otimes I_-| N; \frac{1}{2},\frac{1}{2}\rangle## ?
Which will give me
## k_1 |\pi^0 p \rangle + k_2 | \pi^+ n\rangle ## ?
where the coefficients are CG coefficients?
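For the record (my own check, not from the thread): applying the ladder-operator matrix element $I_-|j,m\rangle = \sqrt{(j+m)(j-m+1)}\,|j,m-1\rangle$ to both sides fixes the Clebsch-Gordan coefficients at $k_1=\sqrt{2/3}$ and $k_2=\sqrt{1/3}$. A small sketch:

```python
from math import sqrt, isclose

def lower(j, m):
    # matrix element <j, m-1 | I_- | j, m> = sqrt((j + m)(j - m + 1))
    return sqrt((j + m) * (j - m + 1))

# I_- |3/2, 3/2> = (I_- |pi+>) x |p>  +  |pi+> x (I_- |p>)
total = lower(1.5, 1.5)   # sqrt(3), multiplies |3/2, 1/2>
pion  = lower(1.0, 1.0)   # sqrt(2), produces the |pi0 p> term
nucl  = lower(0.5, 0.5)   # 1,       produces the |pi+ n> term

k1, k2 = pion / total, nucl / total
print(k1 ** 2, k2 ** 2)   # 2/3 and 1/3, up to floating-point rounding
```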
An everywhere locally trivial line bundle
Is there a variety $X$ over $\mathbb{Q}$ and a line bundle $L$ over $X$ (other than the trivial line bundle $\mathcal{O}_X$ ) such that $L_v$ is the trivial line bundle over $X_v=X\times_{\mathbb{Q}}
\mathbb{Q}_v$ for every place $v$ of $\mathbb{Q}$ ?
(Answer known. There is a pun on "locally trivial" in the title.)
nt.number-theory arithmetic-geometry
Forgive me for asking, but if the answer is known, could you show it to us? – Hailong Dao Jan 2 '10 at 16:18
I thought people would like to think about it. – Chandan Singh Dalawat Jan 3 '10 at 3:06
1 The moral of this one seems to me "don't let on that you know the answer" :-/ Maybe it's time you answered your own question? – Kevin Buzzard Jan 6 '10 at 16:00
Sorry for having kept everyone waiting ! I had to be away three days... – Chandan Singh Dalawat Jan 7 '10 at 9:32
2 Others may disagree, but I think it's against the spirit of things here to ask a question to which you already know the answer (however nice the question is). It might seem like an abuse of
people's willingness to help. – Tom Leinster Jan 7 '10 at 14:11
1 Answer
The following example was provided to me by Colliot-Thélène some years ago: Let $X$ be the complement in $\mathbb{P}_{1,\mathbb{Q}}$ of the three closed points defined by $x^2=13$, $x^2=17$, $x^2=221$. Then $\operatorname{Pic}(X)=\mathbb{Z}/2\mathbb{Z}$ but $\operatorname{Pic}(X_v)=0$ for every place $v$ of $\mathbb{Q}$.
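An illustrative check (my addition, not from the answer) of the arithmetic that makes the triple $\{13, 17, 221\}$ special: since $221 = 13\cdot 17$, the product of Legendre symbols $\left(\frac{13}{p}\right)\left(\frac{17}{p}\right)\left(\frac{221}{p}\right) = \left(\frac{221^2}{p}\right) = 1$ for every odd prime $p \nmid 442$, so at least one of the three is a square mod $p$, hence a square in $\mathbb{Q}_p$ for odd $p$ by Hensel's lemma:

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 for squares, p - 1 for non-squares
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def primes(n):
    sieve = [True] * (n + 1)
    out = []
    for p in range(2, n + 1):
        if sieve[p]:
            out.append(p)
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return out

for p in primes(1000):
    if p in (2, 13, 17):
        continue
    assert 1 in [legendre(a, p) for a in (13, 17, 221)], p
print("one of 13, 17, 221 is a quadratic residue mod every odd prime checked")
```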
Derivatives- Math
Posted by Mae on Monday, April 19, 2010 at 8:38pm.
Can someone explain how to find the derivative of :
1. y= 5^x / x
And the second derivative of:
y= xe^10x
For this question I got up to the first derivative and got this
y = e^10x + 10xe^10x but I can't seem to get the correct answer for the second derivative.
• sorry, the first question isn't showing up properly - Mae, Monday, April 19, 2010 at 8:41pm
It's supposed to be
5^(√x) / x
• Derivatives- Math - Reiny, Monday, April 19, 2010 at 9:48pm
y = (5^(√x))/x
I would take ln of both sides
lny = ln (5^(√x))/x
lny = √x(ln5) - lnx
lny = (ln5)(x^1/2) - lnx
y' / y = (1/2)ln5(x^(-1/2)) - 1/x
y' = [(5^(√x))/x][(1/2)ln5(x^(-1/2)) - 1/x]
(what a mess!)
for y = xe^10x
y' = e^10x + 10xe^10x is correct, now do it again
y'' = 10e^10x + 10(e^10x + 10xe^10x) , we just did that last part
= 20e^10x + 100xe^10x
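A quick numeric sanity check (my addition) of Reiny's result y'' = 20e^(10x) + 100x·e^(10x), comparing it against a central-difference second derivative of y = x·e^(10x):

```python
from math import exp

def y(x):
    return x * exp(10 * x)

def y2(x):
    # the claimed second derivative
    return 20 * exp(10 * x) + 100 * x * exp(10 * x)

x, h = 0.1, 1e-4
fd = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # central difference
print(y2(x), fd)    # both come out near 81.55
```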
Discrete Ricci curvature with applications
Seminar Room 1, Newton Institute
We define a notion of discrete Ricci curvature for a metric measure space by looking at whether "small balls are closer than their centers are". In a Riemannian manifold this gives back the usual Ricci curvature, up to scaling. This definition is very easy to apply in a series of examples such as graphs (e.g. the discrete cube has positive curvature). We are able to generalize several Riemannian theorems in positive curvature, such as concentration of measure and the log-Sobolev inequality. This definition also allows us to prove new theorems, both in the Riemannian and discrete cases: for example, improved bounds on the spectral gap of the Laplace-Beltrami operator, and fast convergence results for some Markov chain Monte Carlo methods.
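To make the graph case concrete (my sketch, not part of the abstract): under one common convention, m_v is the uniform measure on the neighbours of v, and the coarse Ricci curvature of an edge is kappa(u, v) = 1 - W1(m_u, m_v)/d(u, v). For uniform measures on equally many points, the optimal transport cost W1 is attained at a perfect matching (the extreme points of the transportation polytope are permutations), so tiny examples can be brute-forced:

```python
from itertools import permutations
from collections import deque

def dists(adj, s):
    # BFS distances from s in an unweighted graph
    d = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in d:
                d[w] = d[v] + 1
                q.append(w)
    return d

def kappa(adj, u, v):
    # coarse Ricci curvature of edge (u, v), with m_x uniform on neighbours of x
    A, B = adj[u], adj[v]
    assert len(A) == len(B)          # the matching trick needs equal-size supports
    D = {a: dists(adj, a) for a in A}
    w1 = min(sum(D[a][b] for a, b in zip(A, p)) for p in permutations(B)) / len(A)
    return 1 - w1 / dists(adj, u)[v]

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
square   = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(kappa(triangle, 0, 1), kappa(square, 0, 1))   # 0.5 and 0.0
```

The triangle edge comes out positively curved and the 4-cycle flat, in line with the "small balls are closer than their centers" picture.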
Properties and Examples of Gravitational Force
What is gravitational force? Mention its important properties. Give some examples of gravitational force.
Gravitational force is the force of mutual attraction between two bodies by virtue of their masses. It is a universal force: every body attracts every other body in the universe with this force.
According to Newton's law of gravitation, the gravitational attraction between two bodies of masses m_1 and m_2 separated by a distance r is given by
F = G m_1 m_2 / r^2
where G is the universal gravitational constant, equal to 6.67 × 10^-11 N m^2/kg^2.
Important properties of gravitational force:
1. It is a universal attractive force.
2. It is directly proportional to the product of the masses of the two bodies.
3. It obeys the inverse square law.
4. It is a long-range force and does not need any intervening medium for its operation.
5. Gravitational force between two bodies does not depend upon the presence of other bodies.
6. It is the weakest force known in nature.
7. It is a central force (i.e., it acts along the line joining the centres of the two bodies).
8. It is a conservative force (i.e., work done in moving a body against the gravitational force is path independent).
9. Gravitational force between two bodies is thought to be caused by an exchange of a particle called the graviton.
Examples of gravitational force:
1. All bodies fall because of the gravitational force of attraction exerted on them by the earth.
2. Gravitational force governs the motion of the moon and of artificial satellites around the earth, and the motion of the planets around the sun.
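A quick order-of-magnitude check of the formula (my numbers, using approximate values for the Earth): the force on a 1 kg mass at the Earth's surface should come out near 9.8 N.

```python
G = 6.67e-11          # N m^2 / kg^2
M_earth = 5.97e24     # kg (approximate)
r = 6.37e6            # m, mean radius of the Earth (approximate)
m = 1.0               # kg

F = G * M_earth * m / r ** 2   # Newton's law of gravitation
print(round(F, 2))             # roughly 9.81 N
```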
Re: st: RE: [Mata] naming matrices in a loop
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
Re: st: RE: [Mata] naming matrices in a loop
From Kit Baum <baum@bc.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: [Mata] naming matrices in a loop
Date Sat, 20 Mar 2010 07:10:40 -0400
On Mar 20, 2010, at 2:33 AM, Antoine wrote:
> I've tried pointers, but I must be doing something wrong. Must have a
> closer look at the manual.
> Since the matrices need not have the same dimensions, I cannot just
> populate a larger matrix with the results.
There is a worked example, with explanation, of the use of pointers to create a number of matrices (which indeed need not be of the same dimensions) in section 14.8, "Computing the seemingly unrelated regression estimator for an unbalanced panel", in An Introduction to Stata Programming. In this case one of the sets of matrices are the X'X matrices for each equation, and in SUR one can have different numbers of Xs in each equation, so the matrices are of different order.
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Which of the lines in the figure are parallel? Which is the transversal?
Parallel lines never intersect. We know that. Transversals are lines that intersect two or more (often parallel) lines. We can identify a and b as the parallel lines (if not by their lack of
intersection, then by the helpful arrowheads along the lines themselves) and c as the transversal (because it crosses the other two).
Math Help
1. March 8th 2009, 09:55 AM #1
2. March 10th 2009, 07:17 AM #2
A function from A to B assigns to each element of A one and only one element of B. If A has m elements and B has n elements, then each of the m elements of A can independently be assigned any of the n elements of B.
The number of possible functions is therefore $N = n^m$…
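A quick brute-force check of that count (an illustrative sketch, not part of the original post):

```python
from itertools import product

# A function f: A -> B is one choice of an element of B for each of the
# m elements of A, so enumerating all assignment tuples counts all functions.
def count_functions(m, n):
    return sum(1 for _ in product(range(n), repeat=m))

print(count_functions(3, 2))  # 8, i.e. n^m = 2^3
```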
Homework Help
Posted by Anonymous on Monday, March 14, 2011 at 1:47pm.
Suppose a 6 ft man is 10 ft away from a 24 ft tall lamp post. If the person is moving away from the lamp post at a rate of 2 ft/sec, at what rate is the length of his shadow changing?
• Calculus - Reiny, Monday, March 14, 2011 at 2:03pm
let the man be x ft from the lamppost.
At that time, let the length of his shadow be y ft
by ratios:
24/(x+y) = 6/y
24y = 6x+6y
3dy/dt = dx/dt
but dx/dt = 2
dy/dt = 2/3 ft/s
so his shadow is increasing in length at 2/3 ft/sec,
notice that the end of his shadow would be MOVING at 2 + 2/3 or 8/3 ft/sec
also notice that the fact that he is 24 feet from the post does not enter the picture at all.
• Calculus - Anonymous, Monday, March 14, 2011 at 2:08pm
Why is it "let the man be x ft from the lamppost. "
Why not 10 feet away?
& Why is it "also notice that the fact that he is 24 feet from the post does not enter the picture at all"
When the lamp post is 24 feet tall?
• Calculus - Reiny, Monday, March 14, 2011 at 5:11pm
I put the wrong number in when I made that last comment. It should have said " .... he is 10 feet from ..."
It has no effect on the solution.
We cannot use the 10 feet in the "general" case, the 10 feet is one specific instant.
in rate of change problems you never use the specific case data until you have differentiated the equation.
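Reiny's conclusion can be checked numerically. A small Python sketch (with the values assumed from this problem) differentiates the similar-triangle relation 24/(x+y) = 6/y, i.e. y = x/3:

```python
# The shadow length y depends only on the man's distance x via y = x/3,
# so dy/dt = (1/3) dx/dt no matter where he stands.
def shadow(x):
    return x / 3.0

dx_dt = 2.0   # ft/sec, the man's walking speed
h = 1e-6      # small time step for a finite-difference derivative
for x in (10.0, 50.0):  # the specific distance drops out
    dy_dt = (shadow(x + dx_dt * h) - shadow(x)) / h
    print(round(dy_dt, 4))  # 0.6667 ft/sec in both cases
```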
Decimal floating point in .NET
In my article on binary floating point types, I mentioned the System.Decimal (or just decimal in C#) type briefly. This article gives more details about the type, including its representation and
some differences between it and the more common binary floating point types. From here on, I shall just refer to it as the decimal type rather than System.Decimal, and likewise where float and double
are mentioned, I mean the .NET types System.Single and System.Double respectively. To make the article easier on the eyes, I'll leave the names in normal type from here on, too.
What is the decimal type?
The decimal type is just another form of floating point number - but unlike float and double, the base used is 10. If you haven't read the article linked above, now would be a good time to read it -
I won't go into the basics of floating point numbers in this article.
The decimal type has the same components as any other floating point number: a mantissa, an exponent and a sign. As usual, the sign is just a single bit, but there are 96 bits of mantissa and 5 bits
of exponent. However, not all exponent combinations are valid. Only values 0-28 work, and they are effectively all negative: the numeric value is sign * mantissa / 10^exponent. This means the maximum
and minimum values of the type are +/- (2^96-1), and the smallest non-zero number in terms of absolute magnitude is 10^-28.
The reason for the exponent being limited is that the mantissa is able to store 28 or 29 decimal digits (depending on its exact value). Effectively, it's as if you have 28 digits which you can set to
any value you want, and you can put the decimal point anywhere from the left of the first digit to the right of the last digit. (There are some numbers where you can have a 29th digit to the left of
the rest, but you can't have all combinations with 29 digits, hence the restriction.)
How is a decimal stored?
A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer
representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high
16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal) which returns an array of 4 ints.
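To illustrate that layout, here is a small Python sketch that rebuilds the numeric value from the four integers. The sample input is what GetBits would return for 1.25m, assuming the layout just described (a sketch, not the .NET implementation):

```python
def decimal_from_bits(bits):
    lo, mid, hi, flags = bits
    mantissa = (hi << 64) | (mid << 32) | lo   # the 96-bit mantissa
    exponent = (flags >> 16) & 0xFF            # bits 16-23 of the last int
    sign = -1 if flags & 0x80000000 else 1     # the top bit
    return sign * mantissa / 10 ** exponent

# 1.25m: mantissa 125, exponent 2, positive sign
print(decimal_from_bits([125, 0, 0, 0x00020000]))  # 1.25
```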
Formatting decimals
Unlike floats and doubles, when .NET is asked to format a decimal into a string representation, its default behaviour is to give the exact value. This means there is no need for a decimal equivalent
of the DoubleConverter code of the binary floating point article. You can, of course, ask it to restrict the value to a specific precision.
Keeping zeroes
Between .NET 1.0 and 1.1, the decimal type underwent a subtle change. Consider the following simple program:
using System;

public class Test
{
    static void Main()
    {
        decimal d = 1.00m;
        Console.WriteLine (d);
    }
}
When I first ran the above (or something similar) I expected it to output just 1 (which is what it would have been on .NET 1.0) - but in fact, the output was 1.00. The decimal type doesn't normalize
itself - it remembers how many decimal digits it has (by maintaining the exponent where possible) and on formatting, zero may be counted as a significant decimal digit. I don't know the exact nature
of what exponent is chosen (where there is a choice) when two different decimals are multiplied, divided, added etc, but you may find it interesting to play around with programs such as the following:
using System;

public class Test
{
    static void Main()
    {
        decimal d = 0.00000000000010000m;
        while (d != 0m)
        {
            Console.WriteLine (d);
            d = d/5m;
        }
    }
}
which prints the successive values of d until it reaches zero.
Everything's a number
The decimal type has no concept of infinity or NaN (not-a-number) values, and despite the above examples of the same actual number being potentially representable in different forms (eg 1, 1.0, 1.00)
the normal == operator copes with these and reports 1.0==1.00 etc.
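Python's decimal module follows a similar decimal floating point design, and shows the same pair of behaviours (trailing zeros preserved by formatting, ignored by equality):

```python
from decimal import Decimal

a = Decimal("1.0")
b = Decimal("1.00")
print(b)        # 1.00 - the number of decimal digits is remembered
print(a == b)   # True - equality compares the numeric value
```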
The decimal type has a larger precision than any of the built-in binary floating point types in .NET, although it has a smaller range of potential exponents. Also, many operations which yield
surprising results in binary floating point due to inexact representations of the original operands go away in decimal floating point, precisely because many operands are specifically represented in
source code as decimals. However, that doesn't mean that all operations suddenly become accurate: a third still isn't exactly representable, for instance. The potential problems are just the same as
they are with binary floating point. However, most of the time the decimal type is chosen for quantities like money, where operations will be simple and keep things accurate. (For instance, adding a
tax which is specified as a percentage will keep the numbers accurate, assuming they're in a sensible range to start with.) Just be aware of which operations are likely to cause inaccuracy, and which are not.
As a very broad rule of thumb, if you end up seeing a very long string representation (ie most of the 28/29 digits are non-zero) then chances are you've got some inaccuracy along the way: most of the
uses of the decimal type won't end up using very many significant figures when the numbers are exact.
Most business applications should probably be using decimal rather than float or double. My rule of thumb is that manmade values such as currency are usually better represented with decimal floating
point: the concept of exactly 1.25 dollars is entirely reasonable, for example. For values from the natural world, such as lengths and weights, binary floating point types make more sense. Even
though there is a theoretical "exactly 1.25 metres" it's never going to occur in reality: you're certainly never going to be able to measure exact lengths, and they're unlikely to even exist at the
atomic level. We're used to there being a certain tolerance involved.
There is a cost to be paid for using decimal floating point arithmetic, but I believe this is unlikely to be a bottleneck for most developers. As always, write the most appropriate (and readable)
code first, and analyse your performance along the way. It's usually better to get the right answer slowly than the wrong answer quickly - especially when it comes to money...
The concept of a linguistic variable and its applications to approximate reasoning
Results 1 - 10 of 595
, 1999
Cited by 102 (41 self)
The fuzzy linguistic approach has been applied successfully to many problems. However, there is a limitation on this approach, the loss of information. It appears due to its information
representation model (discrete terms) and the computational methods used when fusion and combination processes are performed on linguistic variables. In this contribution we propose a new fuzzy
linguistic representation model based on the concept of "Symbolic Translation" for dealing with linguistic information in a continuous domain. Together with this representation model we shall develop
a computational technique for fusing linguistic variables without loss of information. Keywords: Linguistic variables, linguistic modeling, fusion of linguistic information. 1 Introduction The
problems depending on their aspects can deal with different types of information. Usually, the problems present quantitative aspects that can be assessed by means of precise numerical values, but in
other cases the problems p...
- IEEE Trans. on Systems, Man and Cybernetics, Part A: Systems , 1997
Cited by 93 (56 self)
Abstract—The aim of this paper is to model the processes of the aggregation of weighted information in a linguistic framework. Three aggregation operators of weighted linguistic information are
presented: linguistic weighted disjunction (LWD) operator, linguistic weighted conjunction (LWC) operator, and linguistic weighted averaging (LWA) operator. A study of their axiomatics is presented
to demonstrate their rational aggregation. Index Terms — Aggregation operators, fuzzy linguistic quantifier, linguistic modeling. I.
- ARTIFICIAL INTELLIGENCE , 1997
Cited by 91 (3 self)
A framework for the qualitative representation of positional information in a two-dimensional space is presented. Qualitative representations use discrete quantity spaces, where a particular
distinction is introduced only if it is relevant to the context being modeled. This allows us to build a flexible framework that accommodates various levels of granularity and scales of reasoning.
Knowledge about position in large-scale space is commonly represented by a combination of orientation and distance relations, which we express in a particular frame of reference between a primary
object and a reference object. While the representation of orientation comes out to be more straightforward, the model for distances requires that qualitative distance symbols be mapped to geometric
intervals in order to be compared; this is done by defining structure relations that are able to handle, among others, order of magnitude relations; the frame of reference with its three components
(distance system, s...
- Appl. Math. Comput. Sci
Cited by 89 (3 self)
Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words
and propositions drawn from a natural language, e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there
will be a significant increase in the price of oil in the near future, etc. Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks
without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech and summarizing a
story. Underlying this remarkable capability is the brain’s crucial ability to manipulate perceptions – perceptions of distance, size, weight, color, speed, time, direction, force, number, truth,
likelihood and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing
with words provides a foundation for a computational theory of perceptions – a theory which may have an important bearing on how humans make – and machines might make – perception-based rational
decisions in an environment of imprecision, uncertainty and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp whereas perceptions are
fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men
to the moon; we can build computers
- Applied Intelligence , 1996
Cited by 74 (13 self)
In classical Constraint Satisfaction Problems (CSPs) knowledge is embedded in a set of hard constraints, each one restricting the possible values of a set of variables. However constraints in real
world problems are seldom hard, and CSP's are often idealizations that do not account for the preference among feasible solutions. Moreover some constraints may have priority over others. Lastly,
constraints may involve uncertain parameters. This paper advocates the use of fuzzy sets and possibility theory as a realistic approach for the representation of these three aspects. Fuzzy
constraints encompass both preference relations among possible instanciations and priorities among constraints. In a Fuzzy Constraint Satisfaction Problem (FCSP), a constraint is satisfied to a
degree (rather than satisfied or not satisfied) and the acceptability of a potential solution becomes a gradual notion. Even if the FCSP is partially inconsistent, best instanciations are provided
owing to the relaxation of ...
, 1994
Cited by 69 (45 self)
This paper presents a consensus model in group decision making under linguistic assessments. It is based on the use of linguistic preferences to provide individuals' opinions, and on the use of fuzzy
majority of consensus, represented by means of a linguistic quantifier. Several linguistic consensus degrees and linguistic distances are defined, acting on three levels. The consensus degrees
indicate how far a group of individuals is from the maximum consensus, and linguistic distances indicate how far each individual is from current consensus labels over the preferences. This consensus
model allows to incorporate more human consistency in decision support systems.
- Journal of the American Society for Information Science and Technology
Cited by 60 (35 self)
(IRS) defined using an ordinal fuzzy linguistic approach is proposed. The ordinal fuzzy linguistic approach is presented, and its use for modeling the imprecision and subjectivity that appear in the
user-IRS interaction is studied. The user queries and IRS responses are modeled linguistically using the concept of fuzzy linguistic variables. The system accepts Boolean queries whose terms can be
weighted simultaneously by means of ordinal linguistic values according to three possible semantics: a symmetrical threshold semantic, a quantitative semantic, and an importance semantic. The first one
identifies a new threshold semantic used to express qualitative restrictions on the documents retrieved for a given term. It is monotone increasing in index term weight for the threshold values that
are on the right of the mid-value, and decreasing for the threshold values that are on the left of the mid-value. The second one is a new semantic proposal introduced to express quantitative
restrictions on the documents retrieved for a term, i.e., restrictions on the number of documents that must be retrieved containing that term. The last one is the usual semantic of relative importance
that has an effect when the term is in a Boolean expression. A bottom-up evaluation mechanism of queries is presented that coherently integrates the use of the three semantics and satisfies the
separability property. The advantage of this IRS with respect to others is that users can express linguistically different semantic restrictions on the desired documents simultaneously, incorporating
more flexibility in the user–IRS interaction.
- Journal of Intelligent Manufacturing , 1995
Cited by 53 (9 self)
: This paper proposes an extension of the constraint-based approach to job-shop scheduling, that accounts for the flexibility of temporal constraints and the uncertainty of operation durations. The
set of solutions to a problem is viewed as a fuzzy set whose membership function reflects preference. This membership function is obtained by an egalitarist aggregation of local
constraint-satisfaction levels. Uncertainty is qualitatively described is terms of possibility distributions. The paper formulates a simple mathematical model of jobshop scheduling under preference
and uncertainty, relating it to the formal framework of constraint-satisfaction problems in Artificial Intelligence. A combinatorial search method that solves the problem is outlined, including fuzzy
extensions of well-known look-ahead schemes. 1. Introduction There are traditionally three kinds of approaches to jobshop scheduling problems: priority rules, combinatorial optimization and
constraint analysis. The first kind ...
- ASME JOURNAL OF MECHANISMS, TRANSMISSIONS, AND AUTOMATION IN DESIGN , 1989
Cited by 51 (18 self)
A technique to perform design calculations on imprecise representations of parameters has been developed and is presented. The level of imprecision in the description of design elements is typically
high in the preliminary phase of engineering design. This imprecision is represented using the fuzzy calculus. Calculations can be performed using this method, to produce (imprecise) performance
parameters from imprecise (input) design parameters. The Fuzzy Weighted Average technique is used to perform these calculations. A new metric, called the γ-level measure, is introduced to determine
the relative coupling between imprecise inputs and outputs. The background and theory supporting this approach are presented, along with one example.
Comparison of two proportions: parametric (Z-test) and non-parametric (chi-squared) methods
July 29, 2009
By Todos Logos
Consider for example the following problem.
The owner of a betting company wants to verify whether a customer is cheating or not. To do this he wants to compare the customer's number of successes with that of one of his employees, whom he is certain is not cheating. In a month, the employee places 74 bets and wins 30; the customer, in the same period of time, places 103 bets and wins 65. Is the customer a cheat or not?
A problem of this kind can be solved in two different ways: using a parametric and a non-parametric method.
* Solution with the parametric method: Z-test.
You can use a Z-test if you can make the following two assumptions: the common probability of success is approximately 0.5, and the number of games is very high (under these assumptions, the binomial distribution approximates a Gaussian distribution). Suppose that this is the case. In R there is no built-in function to calculate the value of Z, so we recall the mathematical formula and write our own function:
z.prop = function(x1, x2, n1, n2){
numerator = (x1/n1) - (x2/n2)
p.common = (x1+x2) / (n1+n2)
denominator = sqrt(p.common * (1-p.common) * (1/n1 + 1/n2))
z.prop.ris = numerator / denominator
return(z.prop.ris)
}
The z.prop function calculates the value of Z, receiving as input the numbers of successes (x1 and x2) and the total numbers of games (n1 and n2). We apply the function just written to the data of our problem:
z.prop(30, 65, 74, 103)
[1] -2.969695
We obtained a value of z whose absolute value (2.97) is greater than the tabulated z (1.96), which leads us to conclude that the customer really is a cheat, since his probability of success is significantly higher than that of the honest employee.
* Solution with the non-parametric method: Chi-squared test.
Suppose now that it can not make any assumption on the data of the problem, so that it can not approximate the binomial with a Gauss. We solve the problem with the test of chi-square applied to a 2x2
contingency table. In R there is the function prop.test.
prop.test(x = c(30, 65), n = c(74, 103), correct = FALSE)
2-sample test for equality of proportions without continuity correction
data: c(30, 65) out of c(74, 103)
X-squared = 8.8191, df = 1, p-value = 0.002981
alternative hypothesis: two.sided
95 percent confidence interval:
-0.37125315 -0.08007196
sample estimates:
prop 1 prop 2
0.4054054 0.6310680
Prop.test function calculates the value of chi-square, given the values of success (in the vector x) and total attempts (in the vector n). The vectors x and n can also be previously declared, and
then be retrieved as usual: prop.test (x, n, correct = FALSE).
In the case of small samples (low value of n), you must specify correct = TRUE, so as to adjust the computation of the chi-square with Yates's continuity correction:
prop.test(x = c(30, 65), n = c(74, 103), correct=TRUE)
2-sample test for equality of proportions with continuity correction
data: c(30, 65) out of c(74, 103)
X-squared = 7.9349, df = 1, p-value = 0.004849
alternative hypothesis: two.sided
95 percent confidence interval:
-0.38286428 -0.06846083
sample estimates:
prop 1 prop 2
0.4054054 0.6310680
In both cases, we obtained a p-value less than 0.05, which leads us to reject the hypothesis of equal probability. In conclusion, the customer is a cheat. For confirmation we compare the calculated chi-square value with the tabulated chi-square value, which we obtain in this way:
qchisq(0.950, 1)
[1] 3.841459
The qchisq function calculates the value of chi-square as a function of alpha and the degrees of freedom. Since the calculated chi-square is greater than the tabulated chi-square, we conclude by rejecting the hypothesis H0 (in agreement with the p-value and the parametric test).
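Both results can be cross-checked outside R. This Python sketch recomputes the statistic from the same counts; note that for a 2x2 table the uncorrected chi-square equals z squared:

```python
from math import sqrt

x1, n1 = 30, 74    # successes and trials, first group
x2, n2 = 65, 103   # successes and trials, second group
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)   # pooled proportion under H0

z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
print(f"z = {z:.4f}")                # z = -2.9697, as in R
print(f"chi-square = {z * z:.4f}")   # chi-square = 8.8191, as in prop.test
```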
for the author, please follow the link and comment on his blog:
Statistic on aiR
free +package
Free monads are useful for many tree-like structures and domain specific languages. A Monad n is a free Monad for f if every Monad homomorphism from n to another monad m is equivalent to a natural
transformation from f to m. Cofree comonads provide convenient ways to talk about branching streams and rose-trees, and can be used to annotate syntax trees. A Comonad v is a cofree Comonad for f if
every Comonad homomorphism from another comonad w to v is equivalent to a natural transformation from w to f. Version 4.2
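As a sketch of what the free monad gives you (assuming the Control.Monad.Free API with liftF and foldFree): define a DSL as a plain functor, get a Monad for free, and interpret it via a natural transformation into IO.

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free, liftF, foldFree)

-- a tiny teletype DSL as a functor
data TeletypeF next = PutLine String next | GetLine (String -> next)
  deriving Functor

putLine :: String -> Free TeletypeF ()
putLine s = liftF (PutLine s ())

readLine :: Free TeletypeF String
readLine = liftF (GetLine id)

-- the natural transformation TeletypeF ~> IO, lifted over
-- the whole program by foldFree
step :: TeletypeF a -> IO a
step (PutLine s next) = putStrLn s >> pure next
step (GetLine k)      = k <$> getLine

main :: IO ()
main = foldFree step (putLine "name?" >> readLine >>= putLine)
```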
A free functor is a left adjoint to a forgetful functor. It used to be the case that the only category that was easy to work with in Haskell was Hask itself, so there were no interesting forgetful
functors. But the new ConstraintKinds feature of GHC provides an easy way of creating subcategories of Hask. That brings interesting opportunities for free (and cofree) functors. The examples
directory contains an implementation of non-empty lists as free semigroups, and automata as free actions. The standard example of free higher order functors is free monads, and this definition can be
found in Data.Functor.HFree. Version 0.6.1.1
free-game is a library that abstracts graphical applications with simple interfaces. Twitter: #hs_free_game Version 0.9.4.3
The free-theorems library allows to automatically generate free theorems from Haskell type expressions. It supports nearly all Haskell 98 types except of type constructor classes, and in addition it
can also handle higher-rank functions. Free theorems are generated for three different sublanguages of Haskell, a basic one corresponding to the polymorphic lambda-calculus of Girard-Reynolds, an
extension of that allowing for recursion and errors, and finally a sublanguage additionally allowing seq. In the last two sublanguages, also inequational free theorems may be derived in addition to
classical equational results. Version 0.3.2.0
This program verifies (or calls into question) strictness conditions on free theorems that arise if a polymorphic lambda calculus is enriched by general recursion. Given a type, the program either returns an instance of the corresponding unrestricted free theorem that does not hold, thereby verifying the need for the additional restrictions, or it returns without finding such an instantiation, thereby suggesting (but not proving) that the strictness conditions are superfluous. The underlying algorithm is described in "Automatically Generating Counterexamples to Naive
Free Theorems" (FLOPS'10) by Daniel Seidel and Janis Voigtländer. A webinterface for the program is also available at http://www-ps.iai.uni-bonn.de/cgi-bin/exfind.cgi. Related to this package
you may be interested in the online free theorem generator at http://www-ps.iai.uni-bonn.de/ft that is also available offline via http://hackage.haskell.org/cgi-bin/hackage-scripts/package/
free-theorems-webui. Also interesting may be the tool polyseq that generates "optimal" free theorems in a polymorphic lambda calculus with selective strictness. Polyseq can be downloaded at
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/polyseq but the functionality is as well provided via a webinterface at http://www-ps.iai.uni-bonn.de/cgi-bin/polyseq.cgi. Version 0.3.1.0
Given a term, this program calculates a set of "optimal" free theorems that hold in a lambda calculus with selective strictness. It omits totality (in general, bottom-reflection) and other
restrictions when possible. The underlying theory is described in the paper "Taming Selective Strictness" (ATPS'09) by Daniel Seidel and Janis Voigtländer. A webinterface for the program is
running online at http://www-ps.iai.uni-bonn.de/cgi-bin/polyseq.cgi or available offline via the package http://hackage.haskell.org/package/free-theorems-seq-webui. Related to this package you may be
interested in the online free theorem generator at http://www-ps.iai.uni-bonn.de/ft that is also available offline via http://hackage.haskell.org/package/free-theorems-webui. Additionally interesting
may be the counterexample generator for free theorems that exemplifies the need for strictness conditions imposed by general recursion. It can be downloaded at http://hackage.haskell.org/package/
free-theorems-counterexamples or used via a webinterface at http://www-ps.iai.uni-bonn.de/cgi-bin/exfind.cgi. Version 1.0
This package provides access to the functionality of http://hackage.haskell.org/package/free-theorems through a web interface. An online version can be seen at http://www-ps.iai.uni-bonn.de/ft/,
where you can also find a more detailed description of the functionality. There is also a shell based interface: http://hackage.haskell.org/package/ftshell. The CGI binary is called "
free-theorems-webui.cgi". To start it locally for offline usage, just call "free-theorems-webui" after installation. (This needs python) Version 0.2.1.1
A soccer game. Version 0.1.2
Interface to the Kinect device. Currently supports depth perception. (No video or audio.) Version 1.0.2
This package provides a preprocessor executable, 'freesect', which implements a broad generalisation of sections (dubbed 'free sections') for partial application and higher-order style. Some examples
of free sections can be found in the included test suite; refer to the homepage for more info. Version 0.8
Wrapper around FreeType 2 library. Relevant excerpts from the FreeType 2 website: What is FreeType 2? FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable,
and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other
products as well. The following is a non-exhaustive list of features provided by FreeType 2. * FreeType 2 provides a simple and easy-to-use API to access font content in a uniform way, independently
of the file format. Additionally, some format-specific APIs can be used to access special data in the font file. * Unlike most comparable libraries, FreeType 2 supports scalable font formats like
TrueType or Type 1 natively and can return the outline data (and control instructions/hints) to client applications. By default, FreeType 2 supports the following font formats. * TrueType fonts (and
collections) * Type 1 fonts * CID-keyed Type 1 fonts * CFF fonts * OpenType fonts (both TrueType and CFF variants) * SFNT-based bitmap fonts * X11 PCF fonts * Windows FNT fonts * BDF fonts (including
anti-aliased ones) * PFR fonts * Type 42 fonts (limited support) From a given glyph outline, FreeType 2 is capable of producing a high-quality monochrome bitmap, or anti-aliased pixmap, using 256
levels of gray. This is much better than the 5 levels used by Windows 9x/98/NT/2000 or FreeType 1. FreeType 2 supports all the character mappings defined by the TrueType and OpenType specification.
It is also capable of automatically synthesizing a Unicode charmap from Type 1 fonts, which puts an end to the painful 'encoding translation' headache common with this format (of course, original
encodings are also available in the case where you need them). The FreeType 2 core API provides simple functions to access advanced information like glyph names or kerning data. FreeType 2 provides
information that is often not available from other similar font engines, like kerning distances, glyph names, vertical metrics, etc. FreeType 2 provides its own caching subsystem since release 2.0.1.
It can be used to cache either face instances or glyph images efficiently. Version 0.1.1
Based on the freetype-gl library, with large modifications. This is similar to the FTGL (http://hackage.haskell.org/package/FTGL) library, but avoids C++, which makes it easier to wrap and work with
in Haskell-land. Unfortunately, it seems not to perform as well as FTGL on some setups. NOTE: Most of the demos and C-side documentation are out-of-date, as the C side was heavily modified, without
updating many of the demos or the C documentation. Version 0.0.4
This package provides datatypes to construct Free monads, Free monad transformers, and useful instances. In addition it provides the constructs to avoid the quadratic complexity of left-associative bind, as explained in: * Janis Voigtländer, Asymptotic Improvement of Computations over Free Monads, MPC'08 Version 0.5.3
Plus, OpT, Yoneda, CoYoneda, Free, Cofree, Density, Codensity, CoT, CodensityAsk, Initialize, Finalize, Decompose, Recompose Version 0.1
A free indexed monad Version 0.3.1
Michael and Scott queues are described in their PODC 1996 paper: http://dl.acm.org/citation.cfm?id=248052.248106 These are single-ended concurrent queues based on a singly linked list and using
atomic CAS instructions to swap the tail pointers. As a well-known efficient algorithm they became the basis for Java's ConcurrentLinkedQueue. Version 0.2.0.2
The pointfree tool is a standalone command-line version of the pl plugin for lambdabot. Version 1.0.4.5
- Journal of Algorithms , 1985
Cited by 188 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our
book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co., New York, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by
their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder)
presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should
, 2002
Cited by 53 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to
counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show
that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds.
This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a
generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives
the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
, 2002
Cited by 24 (0 self)
Hex is a beautiful game with simple rules and a strategic complexity comparable to that of Chess and Go. The massive game-tree search techniques developed mostly for Chess and successfully used for
Checkers and a number of other games, become less useful for games with large branching factors like Hex and Go. In this paper, we describe deduction rules, which are used to calculate values of
complex Hex positions recursively starting from the simplest ones. We explain how this approach is implemented in HEXY---the strongest Hex-playing computer program, the Gold medallist of the 5th
Computer Olympiad in London, August 2000. 2001 Elsevier Science B.V. All rights reserved.
Cited by 3 (2 self)
This report introduces GAMESMAN, a system for generating graphical parametrizable game applications. Programmers write game modules for a specific game, which when combined with our libraries,
compile together to become standalone X-window applications as shown in Figure A.1 below. The modules only need contain information about the rules of the game and how the game ends. If the game is
small enough, it may be solved, and the computer can play the role of an oracle, or perfect opponent. This oracle can advise a novice player how to play, and teach the strategy of the game even
though none was programmed into the system! If a game is too large to be solved exhaustively, the game programmer can add heuristics to provide an imperfect computer opponent. Finally, the
application can provide a useful utility to two human players who are playing each other, since it can be a referee who constrains the users' moves to be only valid moves, can update the board to respond
to the move, and can signal when one of the players has won.
Cited by 2 (0 self)
Hex is an elegant and fun game that was first popularized by Martin Gardner [4]. The game was invented by Piet Hein in 1942 and was rediscovered by John Nash at Princeton in 1948. Two players
alternate placing white and black stones onto the hexagons of an N × N rhombus-shaped board. A hexagon may contain at most one stone. [Figure: a game of 7 × 7 Hex after three moves.]
White’s goal is to put white stones in a set of hexagons that connect the top and bottom of the rhombus, and Black’s goal is to put black stones in a set of hexagons that connect the left and right sides of the
rhombus. Gardner credits Nash with the observation that there exists a winning strategy for the first player in a game of hex. The proof goes as follows. First we observe that the game cannot end in
a draw, for in any Hex board filled with white and black stones there must be either a winning path for white, or a winning path for black [1, 3]. (This fact is equivalent to a version of the Brouwer
fixed point theorem, as shown by Gale [3].) Since the game is finite, there must be a winning strategy for either the first or the second player. Assume, for the sake of
- College Math. J., Special
Cited by 1 (1 self)
Forms of Nim have been played since antiquity and a complete theory was published as early as 1902 (see [3]). Martin Gardner described the game in one of his earliest columns [7] and returned to it
many times over the years ([8]–[16]). Central to the analysis of Nim is Nim-addition. The Nim-sum is calculated by writing the terms in base 2 and adding the columns mod 2, with no carries. A Nim
position is a winning position if and only if the Nim-sum of the sizes of the heaps is zero [2], [7]. Is there is a generalization of Nim in which the analysis uses the base-b representations of the
sizes of the heaps, for b > 2, in which a position is a win if and only if the mod-b sums of the columns are identically zero? One such game, Rim_b (an abbreviation of Restricted-Nim), exists, although it is complicated and not well known. It was introduced in an unpublished paper [6] in 1980 and is hinted at in [5]. Despite his interest in Nim, Martin Gardner never mentions Rim_b, nor does it appear in Winning Ways [2], which extensively analyzes Nim variants. In the present paper we focus on b = 10, and consider, not Rim_10 itself, but the arithmetic that arises if calculations, addition
and multiplication, are performed mod 10, with no carries. Along the way we encounter several new and interesting number sequences, which would have appealed
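The column-wise, carry-free addition described in this abstract is easy to compute. A small illustrative Python sketch (not from the paper): base 2 gives the classical Nim-sum, which coincides with bitwise XOR of the heap sizes, while base 10 gives the carry-free decimal arithmetic the paper studies.

```python
def carry_free_sum(heaps, base):
    """Write each heap size in the given base and add the columns mod base, no carries."""
    total, place = 0, 1
    while any(h > 0 for h in heaps):
        total += (sum(h % base for h in heaps) % base) * place
        heaps = [h // base for h in heaps]  # drop the lowest digit of each heap
        place *= base
    return total

# Base 2: the Nim-sum, identical to XOR of the heap sizes.
assert carry_free_sum([3, 5, 6], 2) == (3 ^ 5 ^ 6)  # a zero Nim-sum position
# Base 10: digit-wise sums mod 10 with no carries, e.g. 17 "+" 25 = 32.
assert carry_free_sum([17, 25], 10) == 32
```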
Thank you for giving place to articles on word problems [1, 2]. I am concerned with word problems very much and would like to say something about them also. First of all, in my opinion, any
discussion should include a definition of the subject and in such situations it is better to keep as close as possible to the exact meaning of the words. I suggest that a non-word problem is a
problem, which is formulated using mainly mathematical symbols and only a few special words like “Solve the equation... ” Correspondingly, a word problem is a problem which uses non-mathematical
words to convey mathematical meaning. At the K-12 level non-word problems tend to be technical exercises, which are necessary, but not exciting. It is only natural that most interesting and
non-standard problems are word problems. It does not mean that all word problems are difficult, but all of them need understanding of the natural language and ability to translate between different
modes of representation: in words, in symbols, in images. This is similar to Thomas' main idea [2], although I do not quite agree with his treatment of solving equations.
, 1998
"... this paper, resulting in greatly improved readability. ..."
"... II Implementation 6 ..."
In 1951 Shannon provided a simple analog heuristic for the connection game Bridg-It. Although this heuristic is based only on a simple network flow analysis, Shannon reported that it almost always
wins against human players when having the first move. In this note, we analyse this heuristic showing examples where the heuristic fails. Furthermore, we consider the question whether the first
player always wins if both players use Shannon’s heuristic.
Density of certain functions in $C_c^\infty(0,T;V)$ in the space $W(0,T) \approx H^1(0,T;V)$?
EDIT: I need to think more about the question I want to ask, given comments in the answer below. Please close the thread if required. I leave it undeleted because the answer is useful.
Let $V \subset H \subset V^*$ be separable Hilbert spaces with continuous and dense embeddings. Define the Hilbert space $$W(0,T) = \{u \in L^2(0,T;V) : u' \in L^2(0,T;V^*)\}$$ with inner product $$(u,v)_W = \int_0^T (u(t),v(t))_{V}\,dt + \int_0^T (u'(t), v'(t))_{V^*}\,dt.$$
I want to know whether the set of functions of the form $$w(t) = \sum_j \phi_j w_j, \qquad\text{where $\phi_j \in C_c^\infty(0,T)$ and $w_j \in V$}$$ are dense in $W(0,T).$
We know from Lions and Magenes that $\mathcal{D}([0,T];V) \subset W(0,T)$ is dense, so the above should hopefully be true. According to a book, the set of functions $$f(t) = \sum_j t^j w_j \quad \text{where $w_j \in V$}$$ are indeed dense in $W(0,T)$.
Does this imply the result I want? Can I approximate the $t^j$ by $C_c^\infty(0,T)$ functions or something like that? (I don't think so). Or is there another way to do this? I guess I may need to
replace $C_c^\infty(0,T)$ by $C_c^\infty[0,T]$..
I posted this at math.stackexchange.com but got no answers.
fa.functional-analysis ap.analysis-of-pdes measure-theory
See also this question: mathoverflow.net/questions/87486/… – András Bátkai Jul 18 '13 at 17:59
1 Answer
We do not know that $C^\infty_c(0,T;V)$ is dense in $W(0,T)$. Note that $W(0,T)$ embeds into $C([0,T],V^*)$ (actually even into $C([0,T],H)$). Since $W(0,T)$ clearly contains functions which do not vanish at the endpoints, no set of functions which are required to vanish at the endpoints can be dense.
Thanks for replying. But is not $C_c^\infty(0,T;V) = \mathcal{D}(0,T;V)$? The latter (defined as infinitely differentiable compactly-supported $V-$valued functions) is dense in $W(0,T)$
by a theorem of Lions and Magenes. – aere Jul 18 '13 at 18:16
Lions and Magenes use the notation ${\cal D}([a,b])$ to denote $C^\infty$ functions with compact support in the closed interval $[a,b]$. The condition of compact support is of course
redundant in this case unless the interval is infinite. This should not be confused with ${\cal D}(a,b)$, which is a set of functions which have support which is compact in the open
interval $(a,b)$. – Michael Renardy Jul 18 '13 at 19:11
Ah, I see. I will edit my post then. Thanks for pointing it out. – aere Jul 18 '13 at 21:07
Discrete Mathematics Using a Computer
Discrete Mathematics Using a Computer, by Cordelia Hall and John O'Donnell. Published by Springer, January 2000. Softcover, 360 pages, ISBN 1-85233-089-9. £16.95
This book introduces the main topics of discrete mathematics with a strong emphasis on applications to computer science. It uses computer programs to implement and illustrate the mathematical ideas,
helping the reader to gain a concrete understanding of the abstract mathematics. The programs are also useful for practical calculations, and they can serve as a foundation for larger software projects.
Designed for first and second year undergraduate students, the book is also ideally suited to self-study. No prior knowledge of functional programming is required; the book and the online
documentation provide everything you will need.
Software Tools for Discrete Mathematics
Download the Stdm software. There are also some older versions: Stdm1.lhs (oldest) and Stdm2.lhs
Using the Book: The Beseme Project at the University of Oklahoma
The Beseme Project at the University of Oklahoma Computer Science Department advocates Better Software Engineering through Mathematics Education. The Beseme project uses Discrete Mathematics Using a
Computer, and they provide excellent support materials on their Web page, including lecture slides, handouts, sample examinations and more. Rex Page is the Principal Investigator.
The Haskell Programming Language
The software is written in the standard functional language Haskell 98 . The book contains a self-contained introduction to Haskell, and the reader is not assumed to have any prior knowledge of
functional programming. Several good implementations of Haskell 98 are freely available, and nearly all computer platforms are supported, including Windows 2000/ME/NT/98/95, Macintosh, Unix and
Linux. The interactive Hugs implementation is recommended for use with the book.
Instructor's Guide and Teaching Materials
An Instructor's Guide is available online.
This page is maintained by John O'Donnell (jtod@dcs.gla.ac.uk), and was last modified February 22, 2000.
For the FizzBuzz program, I got only fizz and buzz and fizzbuzz for the output, but no numbers... What did I do wrong? I can't seem to figure out the issue here. Here is my code:

for i in range(1,101):
    s = str(i)
    if i % 3==0 or i % 5==0:
        s=''
    if i % 3==0:
        s=s+'FIZZ'
    if i % 5==0:
        s = s+'BUZZ'
    print s
when you enter your first if loop you are resetting the value of s to just be a blank line which effectively is dropping out your number from the beginning of the line
what should I have instead of s=''?
when your code hits s=str(i) the first time it sets s='1'. then you enter your if loop and have s='' which changes s='1' to s='' what your doing is called reinitializing a variable which is
causing your original s to forget the '1'. btw the only reason im not outright telling you the fix is because problem solving/debugging is one of the most important skills for programming
thanks I appreciate it...Id rather learn the process that just be told the answer any day. Let me tinker with this a little longer.
so here is what I went with and it worked like a charm. Thanks again for your help:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print 'FIZZBUZZ'
    elif i % 3 == 0:
        print 'FIZZ'
    elif i % 5 == 0:
        print 'BUZZ'
    else:
        print i
glad to see you got it worked out :D
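The thread's code is Python 2. A Python 3 sketch (illustrative, not from the thread) that keeps the original string-building approach and applies the minimal fix the first reply hints at: start s empty instead of wiping out str(i) later, and only fall back to the number at output time.

```python
def fizzbuzz(n):
    """FizzBuzz lines for 1..n, keeping the string-building style of the thread."""
    lines = []
    for i in range(1, n + 1):
        s = ''                 # start empty instead of resetting str(i) later
        if i % 3 == 0:
            s += 'FIZZ'
        if i % 5 == 0:
            s += 'BUZZ'
        lines.append(s if s else str(i))  # fall back to the number when s is empty
    return lines

print('\n'.join(fizzbuzz(15)))
```

This avoids the reinitialization bug entirely: the number is never stored in s, so nothing has to be remembered and then thrown away.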
Containment of an element to an operator system
This question will probably appeal to people in operator systems theory as it is very much related. However, I'm interested in down-to-earth concrete systems with finite dimensional Hilbert space
representations. Please suggest the adequate tags if you think I'm missing some.
Let $V\subset\mathcal B(\mathcal H)$ be a subspace of Hermitian operators in finite-dimensional Hilbert space $\mathcal H$, containing $1$. Let $\{X_i,Y_j\}$ (with $1\leq i\leq r$ and $1\leq j\leq d$) be a basis of $V$ and let $\{\tilde X_i,\tilde Y_j\}$ be the dual basis w.r.t. the Hilbert-Schmidt inner product $$\mathrm{tr}[X_i \tilde X_{i'}]=\delta_{ii'}, \quad \mathrm{tr}[Y_j \tilde Y_{j'}]=\delta_{jj'},\quad \mathrm{tr}[X_i \tilde Y_{j}]=0, \quad \mathrm{tr}[Y_i \tilde X_{j}]=0.$$ Note that defining $\alpha_i=\mathrm{tr}[\tilde X_i]$ and $\beta_j=\mathrm{tr}[\tilde Y_j]$ we have $$1=\sum_{i=1}^r \alpha_i X_i+\sum_{j=1}^d \beta_j Y_j,$$ however not all $\alpha_i$'s are zero, meaning that $1\notin\mathrm{span}\{Y_j\}$.
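As a concrete illustration (not part of the question), a dual basis with respect to the Hilbert-Schmidt inner product can be computed numerically by inverting the Gram matrix of the basis. A minimal NumPy sketch, where the basis of Hermitian 2×2 matrices is an arbitrary choice for demonstration:

```python
import numpy as np

# Illustrative only: an arbitrary (non-orthogonal) basis of a subspace V of
# Hermitian 2x2 matrices that contains the identity.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli sigma_x
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli sigma_z
basis = [I2, I2 + X, Z]
n = len(basis)

# Gram matrix of the Hilbert-Schmidt inner product <A, B> = tr(A B)
# (real-valued here, since all basis elements are Hermitian).
G = np.array([[np.trace(A @ B) for B in basis] for A in basis])

# Dual basis: tilde_a = sum_b (G^{-1})_{ab} B_b, so that tr(B_a tilde_b) = delta_ab.
Ginv = np.linalg.inv(G)
dual = [sum(Ginv[a, b] * basis[b] for b in range(n)) for a in range(n)]

# Biorthogonality check.
D = np.array([[np.trace(basis[a] @ dual[b]) for b in range(n)] for a in range(n)])
assert np.allclose(D, np.eye(n))
```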
Regard $\mathbb{R}^n\otimes\mathcal B(\mathcal H)$ as the algebra of $n$-tuples of elements from $\mathcal B(\mathcal H)$, as in operator systems theory. Let $\mathcal C\subset \mathbb{R}^r\otimes\mathcal B(\mathcal H)$ be the pointed cone defined by
$$\mathcal C=\left\{Z=\{Z_i\}\,\Big|\,\exists \{K_j\}\in\mathbb{R}^d\otimes\mathcal B(\mathcal H)\mathrm{~s.t.~} \sum_{i=1}^r X_i\otimes Z_i+\sum_{j=1}^d Y_j\otimes K_j\geq0\right\}.$$
Question: Under what conditions does $\{\tilde X_i^\top\}\in\mathcal C$?
Note: If $1\in\mathrm{span}\{Y_j\}$ the answer is trivial: Always. This is however, not my case.
oa.operator-algebras operator-theory operator-spaces fa.functional-analysis
Also asked in math.stackexchange.com: Containment of an element to an operator system – Alex Monras Nov 14 '12 at 10:24
The Hungarian Academy of Sciences
The Hungarian Academy of Sciences was named the Hungarian Scholarly Society when it was founded in Pest in 1825. It had a charter which stated that there could be no more than 42 members, of whom no
more than 18 could be from Pest and no more than 24 from outside Pest. The Society had six departments: history, law, linguistics, mathematics, philosophy and natural sciences. When the Society was
founded the mathematics department had a maximum of six members, at most three from Pest and at most three from outside. However these were not filled and in the first 45 years of the Society's
existence there were only two or three mathematicians.
The young Society had a number of issues which they deemed important to make rapid progress on. One was to develop a dictionary of Hungarian mathematical terms so that those writing in the Hungarian
language could use genuine Hungarian words and also achieve a consistency of Hungarian notation. The Society published an Hungarian Mathematical Dictionary in 1834 but it failed to achieve the aims
we have described [1]:-
The impact of the dictionary on the Hungarian mathematical language is negligible, having fallen short of its express aim of creating a unified Hungarian mathematical vocabulary. This was caused not so much by the defectiveness of terminology as by the editors' reluctance, voiced in the Preface, to take sides and to specify the principles to be followed in coining Hungarian terms.
The publication of Euclid's Elements in Hungarian by the Society in 1832 was probably more significant. Another aim was to bring into the Society international figures of high repute who would
enhance the reputation of the Society on the international stage. The first such foreign member was Babbage, elected in 1833, followed by Gauss and Poncelet in 1847 and John Herschel and Quetelet in
1858. Another aim of the young Society was to found a technical university where mathematics and the technical sciences would be taught to a high standard. The Society made a good start on
publishing, and they brought out Yearbooks from very early on, then from 1840 they started publication of a scientific journal Magyar Académiai Ertesito. The journal was split into two in 1859, then
three covering different areas.
The Academy hit real problems in 1849 following the Hungarian War of Independence. In that year the Hungarians had defeated the Habsburgs and declared Hungarian independence on 14 April. Following
this a combined force from Russia and Austria retook the country and the Hungarian army surrendered on 13 August. A period followed where many Hungarians were shot or imprisoned. Members of the
Academy fared particularly badly since many had actively supported the war of independence, while even those who had called for a Hungarian Technical University to be set up were forced into
voluntary exile. Vienna banned meetings of the Academy in the period immediately after the war and even when this was relaxed in the mid 1850s, since no new members were permitted to be elected, it
looked as though the Society would die a slow death. However from 1858 new members were admitted.
The Habsburg Empire was weakened by external conflicts over the decades following the Hungarian War of Independence. This allowed the Academy to begin again to support ventures to strengthen
Hungarian scholarship and, in 1860, a committee was set up to give special support to mathematics and natural science. In the Compromise of 1867 the Hungarian Kingdom and the Austrian Empire became
independent states within the Austro-Hungarian Monarchy. The Compromise led to rising standards of education in mathematics and the sciences and after pressure from the Academy, the Technical
University of Budapest was set up from the polytechnic school in 1871. In fact Budapest was created from the union of the towns of Pest, Buda, and Obuda in the following year.
The Academy continued with its policy of appointing foreign members, most of whom in the last quarter of the 19th century had direct connections to Hungarian scientists. In mathematics Cayley,
Hermite, Helmholtz, Kronecker, Du Bois-Reymond, Fuchs, Klein, Stäckel, Darboux and Mittag-Leffler all became members.
JOC/EFR August 2004 School of Mathematics and Statistics
University of St Andrews, Scotland
Cypress, TX Algebra Tutor
Find a Cypress, TX Algebra Tutor
...I have taught Algebra II for over 4 years with a high success rate. All of my students continued to Precalculus and were successful in both subjects. I teach several different methods so that
the students can have options on how to solve the problems.
8 Subjects: including algebra 1, algebra 2, physics, geometry
...I'm an experienced tutor and will effectively teach all subjects in a way that is easily understood. I specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2,
trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA programming. I'd love to talk mo...
17 Subjects: including algebra 1, algebra 2, reading, calculus
I am the Science Department Chair at a high school. I have five years of experience teaching high school and over seven tutoring the sciences. I have a large reserve of scientific knowledge and
enjoy helping students develop study skills and discover their own discipline.
18 Subjects: including algebra 2, algebra 1, chemistry, geometry
...I have also tutored many students. These students often need organization and study skills. I have a doctorate in education.
21 Subjects: including algebra 1, reading, English, study skills
...My specialty is algebra and trigonometry, therefore I am able to help with topics such as pre-calculus and calculus. I also have a very good track record of helping students understand the
mathematical concepts within chemistry. I hold an associate's degree in general mathematics and science fr...
7 Subjects: including algebra 1, algebra 2, chemistry, biology
Related Cypress, TX Tutors
Cypress, TX Accounting Tutors
Cypress, TX ACT Tutors
Cypress, TX Algebra Tutors
Cypress, TX Algebra 2 Tutors
Cypress, TX Calculus Tutors
Cypress, TX Geometry Tutors
Cypress, TX Math Tutors
Cypress, TX Prealgebra Tutors
Cypress, TX Precalculus Tutors
Cypress, TX SAT Tutors
Cypress, TX SAT Math Tutors
Cypress, TX Science Tutors
Cypress, TX Statistics Tutors
Cypress, TX Trigonometry Tutors
Nearby Cities With algebra Tutor
Brookshire algebra Tutors
Bunker Hill Village, TX algebra Tutors
Hedwig Village, TX algebra Tutors
Hilshire Village, TX algebra Tutors
Hockley algebra Tutors
Hockley Mine, TX algebra Tutors
Hufsmith algebra Tutors
Jacinto City, TX algebra Tutors
North Houston algebra Tutors
Oak Ridge N, TX algebra Tutors
Oak Ridge North, TX algebra Tutors
Pattison, TX algebra Tutors
Pinehurst, TX algebra Tutors
Prairie View, TX algebra Tutors
Stagecoach, TX algebra Tutors
East Palo Alto, CA Algebra Tutor
Find an East Palo Alto, CA Algebra Tutor
...Basic: I have taught students (many of them adults) the basics of Math (4 operations, fraction, decimal) systematically so they gained valuable skills in their jobs and lives. I can help in
these areas: pre-Algebra, (Honors) Algebra I & II, Geometry (with proofs), pre-Calculus, (AP, College, 3-D...
15 Subjects: including algebra 2, algebra 1, calculus, GRE
...All of the students that I have worked with have increased their scores. I have worked with students specifically in Elementary Math for 3 years. I use the coaching and teaching skills that I
developed while obtaining my certification to help students understand math concepts.
15 Subjects: including algebra 1, algebra 2, English, geometry
...My tutoring methods vary student-by-student, but I specialize in breaking down problems and asking questions to guide the student toward discovering and truly understanding concepts which
helps with retention and effective test-taking. I take pride in the success of each and every one of my stud...
17 Subjects: including algebra 2, algebra 1, chemistry, calculus
...PS: I have a PhD in theoretical physics, am a Phi Beta Kappa, graduated from the two best universities in China, and was once a NASA scientist.I have a PhD in theoretical physics which
requires comprehensive training in mathematical methods and have working experience with differential equations ...
15 Subjects: including algebra 1, algebra 2, calculus, physics
I am currently working as an R&D Propulsion engineer at Space Systems Loral supporting the design and manufacture of a next-generation propulsion system. I've graduated from USC with an M.S. in
Astronautical Engineering and living my dream of being a rocket scientist. I previously graduated from Cal Poly Pomona with a B.S. degree in Aerospace Engineering.
20 Subjects: including algebra 1, algebra 2, English, reading
Related East Palo Alto, CA Tutors
East Palo Alto, CA Accounting Tutors
East Palo Alto, CA ACT Tutors
East Palo Alto, CA Algebra Tutors
East Palo Alto, CA Algebra 2 Tutors
East Palo Alto, CA Calculus Tutors
East Palo Alto, CA Geometry Tutors
East Palo Alto, CA Math Tutors
East Palo Alto, CA Prealgebra Tutors
East Palo Alto, CA Precalculus Tutors
East Palo Alto, CA SAT Tutors
East Palo Alto, CA SAT Math Tutors
East Palo Alto, CA Science Tutors
East Palo Alto, CA Statistics Tutors
East Palo Alto, CA Trigonometry Tutors
standard dev
"Hi" - I hope somebody can help me with this: 2.0±0.2 - 1.0±0.2 (all related to weight). Thank you!
miavaro I'm not sure that anyone will be able to tell what your question is from this, but one interpretation (not the most likely, but this is the interval arithmetic solution) will give: $(2 \pm 0.2) - (1 \pm 0.2) = 1 \pm 0.4$ RonL
Last edited by CaptainBlack; April 24th 2006 at 11:43 PM.
CaptainBlack I would say that equally likely alternatives to CaptainBlack's answer would be:
1) $1.0 \pm 0.2$, assuming the standard error in the measuring instrument is 0.2
2) $1.0 \pm 0.3$, using the propagation of errors technique. (Define a function f = f(a, b), where we have measurements $a = \bar{a}+\delta a$ and $b=\bar{b}+\delta b$. Then $\bar{f} = f(\bar{a}, \bar{b})$ and $\delta f = \sqrt{ \left ( \frac{\partial f}{\partial a} \delta a \right ) ^2 + \left ( \frac{\partial f}{\partial b} \delta b \right ) ^2}$.)
Lacking any other information, my best guess is that the propagation of errors answer (2) is the answer, though there are many statistical reasons for choosing either of the other two. It depends entirely on how the original error estimates were made. -Dan
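For a concrete check of option (2): for f(a, b) = a - b both partial derivatives have magnitude 1, so the formula reduces to delta_f = sqrt(delta_a^2 + delta_b^2). A minimal sketch (the function name is ours, not from the thread):

```python
import math

# Propagation of errors for f(a, b) = a - b:
# delta_f = sqrt((df/da * delta_a)^2 + (df/db * delta_b)^2),
# and for a difference df/da = 1 and df/db = -1.
def propagated_error_of_difference(delta_a, delta_b):
    return math.sqrt(delta_a ** 2 + delta_b ** 2)

err = propagated_error_of_difference(0.2, 0.2)
# err is about 0.283, which rounds to the "1.0 +/- 0.3" quoted in option (2)
```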
Environment variables used by Sage
Sage uses several environment variables when running. These all have sensible default values, so many users won’t need to set any of these. (There are also variables used to compile Sage; see the
Sage Installation Guide for more about those.)
• DOT_SAGE – this is the directory, to which the user has read and write access, where Sage stores a number of files. The default location is ~/.sage/, but you can change that by setting this variable.
• SAGE_RC_FILE – a shell script which is sourced after Sage has determined its environment variables. This script is executed before starting Sage or any of its subcommands (like sage -i <package>
). The default value is $DOT_SAGE/sagerc.
• SAGE_STARTUP_FILE – a file including commands to be executed every time Sage starts. The default value is $DOT_SAGE/init.sage.
• SAGE_SERVER – if you want to install a Sage package using sage -i PKG_NAME, Sage downloads the file from the web, using the address http://www.sagemath.org/ by default, or the address given by
SAGE_SERVER if it is set. If you wish to set up your own server, then note that Sage will search the directories SAGE_SERVER/packages/standard/, SAGE_SERVER/packages/optional/, SAGE_SERVER/
packages/experimental/, and SAGE_SERVER/packages/archive/ for packages. See the script $SAGE_ROOT/spkg/bin/sage-spkg for the implementation.
• SAGE_PATH – a colon-separated list of directories which Sage searches when trying to locate Python libraries.
• SAGE_BROWSER – on most platforms, Sage will detect the command to run a web browser, but if this doesn’t seem to work on your machine, set this variable to the appropriate command.
• SAGE_ORIG_LD_LIBRARY_PATH_SET – set this to something non-empty to force Sage to set the LD_LIBRARY_PATH before executing system commands.
• SAGE_ORIG_DYLD_LIBRARY_PATH_SET – similar, but only used on Mac OS X to set the DYLD_LIBRARY_PATH.
• SAGE_CBLAS – used in the file SAGE_ROOT/devel/sage/sage/misc/cython.py. Set this to the base name of the BLAS library file on your system if you want to override the default setting. That is, if
the relevant file is called libcblas_new.so or libcblas_new.dylib, then set this to “cblas_new”.
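As an illustration of the DOT_SAGE defaulting described above, the lookup can be sketched in Python (this mimics the documented behavior only; it is not Sage's actual startup code):

```python
import os

def dot_sage_dir(environ=None):
    # Return DOT_SAGE if the user set it, otherwise the documented
    # default of ~/.sage (mirrors the description above, not Sage itself).
    env = os.environ if environ is None else environ
    return env.get("DOT_SAGE", os.path.join(os.path.expanduser("~"), ".sage"))
```

Passing a dict instead of always reading os.environ directly keeps the helper easy to test.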
Multiple Choice Quiz
Financial and Managerial Accounting: The Basis for Business Decisions, 12/e
Cost-Volume-Profit Analysis
Please answer all questions
Consider the following:
             Total of    Total of    Total of
   Units     Cost A      Cost B      Cost C
   1,000     $10,000     $10,000     $10,000
   2,000      10,000      20,000      14,000
   3,000      10,000      30,000      18,000
   4,000      10,000      40,000      22,000
Which of the following are true about the each of the costs?
A) Cost A is a fixed cost.
B) Cost B is a variable cost.
C) Cost C is a variable cost.
D) Cost B is a semivariable cost.
E) A and B are true.
At the point at which the total revenue line intersects the total fixed cost line in a graphic presentation of break-even point, the area above this intersection is which of the following?
A) Profit area
B) Profit area and variable costs beyond variable costs at break even
C) Operating income
D) Variable costs
E) Margin of safety
The selling price of Product A is $15. The variable costs to manufacture and sell the product are $6. What is the contribution margin ratio?
A) 60%
B) 40%
C) 35%
D) 50%
E) 80%
Target operating income is $450,000 and fixed costs are $60,000. The sales price per unit is $15, with a contribution margin of 40%. How many sales units are required to achieve the target
operating income?
A) 100,000 units
B) 85,000 units
C) 30,000 units
D) 34,000 units
E) None of the above.
Target operating income is $450,000 and fixed costs are $60,000. The sales price per unit is $15, with a contribution margin of 40%. What is the sales volume in dollars required to achieve the
target operating income?
A) $1,400,000
B) $750,000
C) $510,000
D) $474,000
E) $1,275,000
The contribution margin ratio is 40%. Operating income is $320,000. What is the margin of safety?
A) $800,000
B) $128,000
C) $672,000
D) Cannot be computed from information provided
E) None of the above
Product A sells for $24 and has a contribution margin ratio of 25%. Sales are expected to decline by 10,000 units over the next accounting period. What will be the loss in operating income?
A) $240,000
B) $250,000
C) $25,000
D) $60,000
E) $75,000
Fixed costs of $50,000 are expected to increase 20%. The contribution margin per unit of $6 on a sales price of $18 is expected to decrease by 25%. With a projected operating income of $300,000,
what must be the projected sales?
A) $1,081,081
B) $1,400,000
C) $1,440,000
D) $720,000
E) $1,240,000
The current sales price is $80 per unit. Variable costs are expected to increase from $65.00 to $67.50 per unit. Fixed costs of $300,000 will not change. How many additional sales units are
required in order to maintain an operating income of $360,000?
A) 8,000
B) 8,800
C) 10,800
D) 12,000
E) 2,800
Operating income is $240,000, fixed costs are $54,000, and 52,000 units are sold. What is the contribution margin per unit to the nearest cent?
A) $5.50
B) $4.65
C) $4.62
D) Cannot be determined from the information provided
E) None of the above
Rock-a-Way Gravel purchases raw gravel in 20-ton lots from which it manufactures 4 grades of gravel, which constitute its sales mix. The gravel grades are: 5 tons of Decorative Stone, 8 tons of
Road Stone, 4 tons of Pea Gravel, and 3 tons of Construction Gravel. The contribution margin ratio on each product is, respectively, 50%, 40%, 60%, and 75%. What is the average contribution
margin ratio for every 20 tons of gravel sold?
A) 56.25%
B) 51.75%
C) 75.0%
D) Greater than 75%
E) Less than 50%
Consider the following:
              Total             Total
   Date       Units Produced    Cost of A
   January        10,000        $14,000
   February       12,600         15,040
   March          13,400         15,360
   April          11,500         14,360
What is the fixed cost component of the cost? Use the high-low method of analysis to determine your answer.
A) $10,000
B) $14,000
C) $ 8,000
D) $12,000
E) $ 9,500
Which of the following is not an assumption underlying cost-volume-profit analysis?
A) Sales price per unit is assumed to be constant.
B) Fixed costs are assumed to remain constant at all levels of sales within a relevant range of activity.
C) Variable costs are assumed to remain constant as a percentage of sales revenue.
D) If more than one product is sold, the proportion of the various products sold is assumed to remain constant.
E) All the above are true.
The sales volume in units can be determined by dividing a denominator into the total of fixed costs plus the target operating income. What is the denominator?
A) Unit contribution margin
B) Margin of safety
C) Contribution margin ratio
D) Unit sales price
E) Sales
Operating income can be determined by multiplying the contribution margin ratio by what other factor?
A) Unit contribution margin
B) Margin of safety
C) Variable costs per unit
D) Unit sales price
E) Change in sales volume
A change in operating income can be determined by multiplying the contribution margin ratio by what other factor?
A) Unit contribution margin
B) Margin of safety
C) Variable costs per unit
D) Unit sales price
E) Change in sales volume
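Several of the numeric questions above reduce to the same few formulas; a sketch checking questions 4, 5, and the high-low question (the helper names are ours, not from the quiz):

```python
def required_units(fixed, target_income, price, cm_ratio):
    # units = (fixed costs + target operating income) / contribution margin per unit
    return (fixed + target_income) / (price * cm_ratio)

def required_sales_dollars(fixed, target_income, cm_ratio):
    # dollars = (fixed costs + target operating income) / contribution margin ratio
    return (fixed + target_income) / cm_ratio

def high_low_fixed_cost(units_lo, cost_lo, units_hi, cost_hi):
    # Variable cost per unit from the high and low activity points,
    # then back out the fixed component at either point.
    var_per_unit = (cost_hi - cost_lo) / (units_hi - units_lo)
    return cost_lo - var_per_unit * units_lo

units = required_units(60_000, 450_000, 15, 0.40)            # ~85,000 units (Q4, answer B)
dollars = required_sales_dollars(60_000, 450_000, 0.40)      # ~$1,275,000 (Q5, answer E)
fixed = high_low_fixed_cost(10_000, 14_000, 13_400, 15_360)  # ~$10,000 (high-low question, answer A)
```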
ICICI Prudential Retirement Income Solution
Hi Manish,
Thank you for this great website. I recently came across the ICICI Pru non unit linked retirement income solution product at one of their branches. They claim guaranteed payment of double the annual
payments for the first 10 years from the 20 year mark till end of life of policyholder which then passes to the life of spouse and eventually ends with a lumpsum payment to an offspring. They also
claim a 10x life insurance on this product as well. Is this a good plan? Are there others that are similar? Is building and actively managing a good SIP platform a better alternative or is it good to
have both?
Thank you so much,
complete or open Kähler manifold and simply connected
Is a complete or open Kähler manifold with positive definite Ricci tensor simply connected? Is there any counterexample?
Let $S\subset \mathbb{P}^1$ denote a finite subset consisting of $n\geq 2$ points. The Fubini-Study metric on $\mathbb{P}^1$ induces a Kahler metric on $X= \mathbb{P}^1\backslash S$ with a
positive-definite Ricci tensor. $X$ is open and Kahler, but $\pi_1(X, *)$ is free on $n-1$ generators. Maybe you want to take the manifold to be complete and non-compact? As you probably know, it is
a theorem of Kobayashi that a compact Kahler manifold with positive definite Ricci tensor is simply connected.
Dear Kevin, Thanks for your nice comment. Would you please explain with more details on your counterexample?
Hassan Jolany Jul 8 '12 at 11:39
Do you know the answer to the question if the metric is assumed to be complete (and of course the manifold is noncompact)?
YangMills Jul 8 '12 at 13:06
Dear YangMills. No, if you have any idea please write it
Hassan Jolany Jul 8 '12 at 13:13
@Haskell The Fubini-Study metric is well-known to be Kahler with positive-definite Ricci tensor. These properties are local and so are preserved if we restrict to an open subset, e.g. $\mathbb{P}^
1$ minus $n\geq 2$ points. This last space is biholomorphic to $\mathbb{C}$ minus $n-1$ points, which has the homotopy type of the wedge of $n-1$ circles. From this it follows that $\pi_1$ is free
on $n-1$ generators.
Kevin Jul 8 '12 at 16:34
@YangMills I can't think of any, at the moment. A quick Google search didn't turn up much either...
Kevin Jul 8 '12 at 16:35
consider two conditions and |x-2| - Homework Help - eNotes.com
Consider two conditions and |x-2|<α on a real number x, where α is a positive real number.
i. the range of values of α such that |x-2|<α is a necessary condition for is ???
2. the range of values of α such that |x-2|<α is a sufficient condition for is ???
3. the range of values of α such that |x-2|<α is a sufficient condition for is ???
This defines a domain for f,
`x in (2-alpha,2+alpha)`
`(i)` The function is defined on an open interval.
(ii) We can discuss continuity and differentiability on an open interval.
Please specify more about the function; then a more detailed answer is possible.
Consider two conditions `x^2-3x-10` and |x-2|<α on a real number x, where α is a positive real number
i. the range of values of α such that |x-2|<α is a necessary condition for `x^2-3x-10` is (A)
2. the range of values of α such that |x-2|<α is a sufficient condition for `x^2-3x-10` is (B)
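The question omits the inequality sign on the first condition; ASSUMING it is x^2 - 3x - 10 < 0 (equivalently -2 < x < 5, since x^2 - 3x - 10 = (x-5)(x+2)), the two ranges follow from interval containment. A numeric check under that assumption (function names are ours):

```python
def is_necessary(alpha):
    # |x-2| < alpha is NECESSARY for x^2-3x-10 < 0 exactly when the
    # solution set (-2, 5) is contained in (2 - alpha, 2 + alpha).
    return 2 - alpha <= -2 and 2 + alpha >= 5

def is_sufficient(alpha):
    # |x-2| < alpha is SUFFICIENT exactly when (2 - alpha, 2 + alpha)
    # is contained in (-2, 5).
    return 2 - alpha >= -2 and 2 + alpha <= 5

# Under the assumed inequality: necessary for alpha >= 4,
# sufficient for 0 < alpha <= 3.
```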
Physics Forums - View Single Post - trivial solution method for constant coeff case
more details:
method 1:
suppose the RHS of our ode is a polynomial of degree < n. Then D^n =0 on that function, so if the LHS factors with factors such as D-a, then since
(1-D)(1+D+D^2+D^3+...+D^[n-1]) = 1-D^n, then
also (1-D/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = 1-[D/a]^n,
hence a(1-D/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) =
(a-D)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = a(1-[D/a]^n).
Hence (D-a)(-1/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1]) = 1-[D/a]^n.
Thus if we want to solve (D-a)y = f where f is a polynomial of degree < n, then taking y = (-1/a)(1+D/a+[D/a]^2+[D/a]^3+...+[D/a]^[n-1])f,
gives us
(D-a)y = (1-[D/a]^n)f = f, since (D/a)^nf = 0.
Repeat for each factor (D-a) of the differential operator.
I.e. this method inverts any operator which is a product of operators of form D-a, i.e. any linear diff op with constant coefficients.
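The inversion described above is easy to mechanize for one factor (D - a) acting on a polynomial. A sketch, representing a polynomial as a coefficient list [c0, c1, ...] for c0 + c1*x + ... (the function names are ours):

```python
def deriv(p):
    # Formal derivative of a coefficient list.
    return [k * c for k, c in enumerate(p)][1:] or [0]

def solve_D_minus_a(f, a):
    # Particular solution y of (D - a) y = f for a polynomial f:
    # y = (-1/a)(1 + D/a + (D/a)^2 + ...) f -- the series terminates
    # because D^n kills a polynomial of degree < n.
    y = [0.0] * len(f)
    term = list(f)
    power = 1.0
    for _ in range(len(f)):
        for k, c in enumerate(term):
            y[k] += c / power
        term = deriv(term)
        power *= a
    return [-c / a for c in y]

# Example: f = x with a = 2 gives y = -x/2 - 1/4, and indeed y' - 2y = x.
```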
Orion HCCA Box plans....Can someone check it out?
09-12-2004 #1
Orion HCCA Box plans....Can someone check it out?
Okay...after lots of reading, searching and calculating I came up with this....
for a 2000 Orion HCCA 12D
2.5 cu/ft tuned to 30 HZ
14.5X2.5X35.49 (after port correction) internal port measurements
The box will be made of 3/4" MDF
outer dimensions
Height 16
Width 25.96672
Depth 19.25
I am mounting the sub on the long panel and having my slot on this plane running vertical using the side and back enclosure wall as part of the slot.
The port displacement is 871.8125 going back and then 90 degrees to the right is 765.31.
Bracing I will use 96 cu/in total (8 4" squares cut into triangles and used in corners) and the sub displaces 244 cu/in.
A total 1977.1225 cu/in = 1.144168 cu/ft of displacement.
Did I screw this up or am I right? Thanks in advance.
Re: Orion HCCA Box plans....Can someone check it out?
Lv = (36.25 * 1.84x10^8) / (2.5 * 1728 * (30/.159)^2) - (.823 * sqrt(36.25)) = 38.41532   [36.25 in^2 = port area]
then the flush-port correction: 38.41532 - 1.25 = 37.16
PLV = 37.16
thats what i get doing it by hand and using your port area of 14.5*2.5 and your box volume of 2.5 ft^3
means you port displacements are off also didn't look at your displacements but it is down the middle of the port
hope this helps
dodge dakota 2003 quad cab
CLARION DXZ545MP
Alpine MRV-T420 V12
6.5 boston comps in the door ( maybe focal soon or ID)
2 adire tempest 8 ft^3 @23hz
1 hifonics bxD 1500
1/0 power and stinger blocks terminals
AIM Id traumanurse06 add me
http://www.sounddomain.com/memberpage/681937/1 just set ti up so it is kinda lame
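The hand calculation above can be scripted; a sketch using the same formula and numbers (the function name is ours):

```python
import math

def slot_port_length(area_in2, box_ft3, tune_hz):
    # Lv (inches) = (Av * 1.84e8) / (Vb_in3 * (Fb / 0.159)^2) - 0.823 * sqrt(Av),
    # where the last term is the end correction used in the post above.
    raw = (area_in2 * 1.84e8) / (box_ft3 * 1728 * (tune_hz / 0.159) ** 2)
    return raw - 0.823 * math.sqrt(area_in2)

length = slot_port_length(14.5 * 2.5, 2.5, 30)   # ~38.42 in
build_length = length - 1.25                     # ~37.16 in after the flush-mount correction
```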
Re: Orion HCCA Box plans....Can someone check it out?
Using winISD Pro I got 36.74 as the length. I subtracted 1.25 from that. Is this software unreliable for calculating Port Length?????
I then tried http://www.carstereo.com/help/Articles.cfm?id=31 which gave me 37.2, which is very close to your number with correction. Math looks good on your equation....now I need to readjust my
width and will post when I have new numbers. Thanks.
(36.25 * 1.84 x 10^8)/ (2.5 * 1728 * (30/.159)^2 ) - (.823 * sqrt 36.25 (port area )
port correction
38.41532 - 1.25 = 37.16
PLV= 37.16
thats what i get doing it by hand and using your port area of 14.5*2.5 and your box volume of 2.5 ft^3
means you port displacements are off also didn't look at your displacements but it is down the middle of the port
hope this helps
Re: Orion HCCA Box plans....Can someone check it out?
yeah i always get diff numbers from winisd it it not real good for doing port length i just always do it by hand as most others do.
do you have the pro update its gets a little closer to the numbers done by hand
Re: Orion HCCA Box plans....Can someone check it out?
I have the latest version on the linearteam website. I tried to update it through the software but says its the latest version. So only if it was available elsewhere then I have the most recent.
Re: Orion HCCA Box plans....Can someone check it out?
ps remember when figuring your displacement it is down the center of the port
Re: Orion HCCA Box plans....Can someone check it out?
I have the latest version on the linearteam website. I tried to update it through the software but says its the latest version. So only if it was available elsewhere then I have the most recent.
you have version o.5a87?
if not here is the update link
Re: Orion HCCA Box plans....Can someone check it out?
Can you elaborate on that? I remember a thread with a picture.....the first part goes back down a 16" piece.....I measured 3.25X18.5X14.5(inner height) for the first part and the new second part
is 3.25X18.66X14.5
Re: Orion HCCA Box plans....Can someone check it out?
these two post will explain exactly what you need
Re: Orion HCCA Box plans....Can someone check it out?
you have version o.5a87?
if not here is the update link
Thanks for the link! Didn't have that version.
New version gives me 41.71 inches though!
Re: Orion HCCA Box plans....Can someone check it out?
heheh yeah its ghey just go by the hand numbers hehh
Re: Orion HCCA Box plans....Can someone check it out?
these two post will explain exactly what you need
First link doesn't work....thanks for these by the way.
Second link was the picture I was talking about. So....from that picture...port length starts at the opening in the middle and makes a left but your displacement doesn't start until the inisde of
where the MDF would be if it went straight across the front baffle?????
Re: Orion HCCA Box plans....Can someone check it out?
same + .75 if it does not butt against the baffle but sits on the outside; just set it behind the baffle, butted right against it. This way you get .75 inches more room to mount the subs
Re: Orion HCCA Box plans....Can someone check it out?
you have a PM
Have physicists discovered dark matter?
keithisco, on 21 February 2013 - 07:38 PM, said:
You are claiming that Gravity is a "Force" not a consequence of Einsteinian Spacetime curvature?
Gravity clearly is a force. Whether it is frame dependent or not is a different question.
keithisco, on 21 February 2013 - 07:38 PM, said:
Let me ask you a question: what is the speed of propagation of the Gravitic Force,
That depends on the medium. In a non-polarized vacuum it is be the speed of light.
keithisco, on 21 February 2013 - 07:38 PM, said:
over what distance would you expect attenuation of this force,
Any finite distance should give finite attenuation, just like all forces from finite sources.
In a dispersionless, sourceless, unpolarized medium the force attenuation law is r^(-2), where r is the distance from the source.
keithisco, on 21 February 2013 - 07:38 PM, said:
what is the Scaling of this force relative to Inertia?
It seems to be one-to-one.
keithisco, on 21 February 2013 - 07:38 PM, said:
You give me an Einstein thought experiment (maybe the Elevator accelerating away from the Earth at relativistic speed), and I will show you where it is inconsistent within its own F.O.R
You mean this? Then OK, please proceed.
String Theory Put to the Test
from the line-em-up-and-shoot-em-down dept.
writes to mention that scientists have come up with a definitive test that could prove or disprove string theory. The project is described as:
"Similar to the well-known U.S. particle collider at Fermi Lab, the Large Hadron Collider, scheduled for November 2007, is expected to be the largest and highest-energy particle accelerator in
existence; it will use liquid helium cooled superconducting magnets to produce electric fields that will propel particles to near light speeds in a 16.7 mile circular tunnel. They then introduce a
new particle into the accelerator, which collides with the existing ones, scattering many other mysterious subatomic particles about."
• You can't prove a theory (Score:5, Informative)
by hypnagogue (700024) on Wednesday January 24, 2007 @02:53PM (#17741060)
Welcome to slashdot; here's your junk science for the day.
You can't prove string theory through experimentation, all you can do is attempt to disprove it.
□ Re:You can't prove a theory (Score:4, Interesting)
by stevesliva (648202) on Wednesday January 24, 2007 @03:00PM (#17741176) Journal
Welcome to slashdot; here's your junk science for the day.
Welcome to Slashdot; here's your whining about semantics for the day. Pretty soon you're going to tell me that "subatomic particles" aren't actually particles, per se.
☆ Proofs are for mathematics (Score:5, Insightful)
by rumblin'rabbit (711865) on Wednesday January 24, 2007 @03:12PM (#17741398) Journal
I don't think it's whining. The public's confusion about science surely stems in part from sloppy reporting.
How often have we heard someone claim that we shouldn't allow something because it has never been proven to be safe? Such comments show serious misunderstanding about the nature of
○ Re: (Score:2, Funny)
by stevesliva (648202)
How often have we heard someone claim that we shouldn't allow something because it has never been proven to be safe?
Indeed, especially with regard to GMOs. Safety is a testable theory though, and "proven safe" is generally the third option of "lies, damn lies, and statistics."
○ Re: (Score:2, Informative)
by cwm9 (167296)
This is more than just "can't prove" in the "you can't prove you're alive" sense. It's more in line with the "you can't prove God exists" sense.
If you think gravity causes objects to attract one another, you can test the theory by putting two objects near each other and measuring their force upon one another. A big part of
your experiment is showing that it isn't an electrical or magnetic field that is causing the attraction. You show that the two objects attract one another in some new way outside of
○ Re: (Score:3, Interesting)
by gstoddart (321705)
How often have we heard someone claim that we shouldn't allow something because it has never been proven to be safe? Such comments show serious misunderstanding about the nature
of knowledge.
OK, then disallow them until they have rigorously been established as not being dangerous. We'll grant you your metaphysical wiggling and make it nice and obfuscated (but logically
and epistomoligically correct).
Way too many things have been released where the person says "it's perfectly safe" and has no evidence to ba
■ Re: (Score:3, Insightful)
by rumblin'rabbit (711865)
OK, then disallow them until they have rigorously been established as not being dangerous. We'll grant you your metaphysical wiggling and make it nice and obfuscated (but
logically and epistomoligically correct).
It's not "metaphysical wiggling". It goes right to the heart of how we make decisions as a society. We ignore a deep understanding of the nature of risk at our peril. And this
peril takes at least two forms: (1) avoiding beneficial practices because we mistakenly assume them to be too risky, a
□ Re:You can't prove a theory (Score:5, Insightful)
by Bastian (66383) on Wednesday January 24, 2007 @03:04PM (#17741238)
I wouldn't call that junk science so much as failure to make a pedantic distinction.
If experiment can show that string theory makes predictions more accurately than current models, I'd say that proven is a good enough word to describe what has happened. Not in the sense that
it's been shown to be an absolutely correct description of the machinations of the universe. Proven in the way that General Relativity was proven - decades before all of its predictions had
been tested. Proven as in "it's been shown to be a better model," i.e., proven in about the same sense a person can "prove himself."
☆ by PCM2 (4486)
If experiment can show that string theory makes predictions more accurately than current models, I'd say that proven is a good enough word to describe what has happened.
Part of the problem with string theory is that it makes no testable predictions at all. The experiment mentioned is intended to test some of the assumptions upon which string theory
rests. Disprove those, and expecting string theory to produce meaningful predictions starts to sound a little silly.
□ Re:You can't prove a theory (Score:4, Funny)
by giminy (94188) on Wednesday January 24, 2007 @03:06PM (#17741292) Homepage Journal
Thank you.
Please vote to give this article the scientificmethodcantproveonlydisprove tag :).
☆ Re: (Score:3, Funny)
by truthsearch (249536)
Please vote to give this comment the concatenated-words-need-hyphens-to-be-readable mod :).
□ Re:You can't prove a theory (Score:5, Informative)
by kripkenstein (913150) on Wednesday January 24, 2007 @04:31PM (#17742632) Homepage
Welcome to slashdot; here's your junk science for the day.
You can't prove string theory through experimentation, all you can do is attempt to disprove it.
Depends on what philosophy of science you subscribe to:
1. According to the 'old consensus' (e.g. the Logical Positivists, early 20th century), you can prove scientific theories.
2. According to Karl Popper, you cannot prove theories, you can only disprove them. It appears that you follow this approach.
3. According to W. V. Quine, you cannot prove or disprove theories, strictly speaking; evidence is taken along with previous information in order to arrive at conclusions.
4. And if you listen to Thomas Kuhn, you get a really different picture from all of these (which I won't go into).
Note that both Popper and Quine are among the most influential philosophers of the 20th century. It is of course legitimate that you are presenting the views of one of them. However, Slashdot
readers should be aware of the existence of other views, both in science and in philosophy.
• Bah (Score:5, Informative)
by Phanatic1a (413374) on Wednesday January 24, 2007 @02:53PM (#17741066)
It can't prove string theory. It can *support* it, or it can disprove it, falsify it, contradict it. But it can't confirm it. All the experimental data in the universe can't do that.
□ Re:Bah (Score:5, Funny)
by Bluesman (104513) on Wednesday January 24, 2007 @03:05PM (#17741276) Homepage
Actually, ALL of the experimental data in the universe could do that.
☆ Re: (Score:3, Insightful)
by Sunburnt (890890)
Actually, ALL of the experimental data in the universe could do that.
Of course, how would one know when they got there?
○ by gomoX (618462)
There is a "dead end" sign there.
■ Re: (Score:3, Funny)
by the_bard17 (626642)
Nah... there's a restaurant.
○ Re:Bah (Score:5, Funny)
by Dragonslicer (991472) on Wednesday January 24, 2007 @05:06PM (#17743178)
Of course, how would one know when they got there?
They'd see the sign for the restaurant. It's pretty tough to miss.
☆ Re:Bah (Score:5, Insightful)
by alienmole (15522) on Wednesday January 24, 2007 @03:18PM (#17741474)
Not so fast -- for a start, you'd need all data from the universe's future, too. But even then, you still won't have proved your theory, unless you count all possible parallel universes
too. Even if every event in the history of the universe fails to falsify a theory, it is still possible that you just got lucky, and nothing ever happened in such a way as to disprove the
theory. Of course, I'll concede that in that situation, you've got a pretty useful theory and the errors it contains are moot for someone living in the universe in question.
☆ by rumblin'rabbit (711865)
Unfortunately that collection of data would have to be part of the data itself, since it's part of the universe. And the part of the collection of data that represented the thing that
collected data would have to be part of both the original collection, and part of the collection that represented the collection of data. And so on.
Enough to give Bertrand Russell a splitting headache, whose memory would also be part of the collection.
□ Re:Bah (Score:5, Informative)
by radtea (464814) on Wednesday January 24, 2007 @03:40PM (#17741852)
The tests proposed would not "prove" string theory. They are only testing some of the fundamental assumptions on which string theory is based.
The assumptions are:
1) Lorentz invariance
2) Analyticity
3) Unitarity
The problem is that these are not exactly assumptions but rather desirable characteristics of any good theory in this domain, period. If anyone comes up with an alternative to string theory
that is even remotely within the bounds of conventional physics, it will also have these characteristics.
Lorentz invariance means that the theory is consistent with special relativity. Since our universe is manifestly correctly described by SR to a very high degree of accuracy, this is a
desirable property of any theory of everything.
Analyticity (am I spelling that right?) means that the theory is mathematically continuous, which is again something that seems to be highly desirable as our universe contains very few
(probably no) formal singularities. One major goal for theories of everything is to show that the singularities in general relativity are smoothed away at small enough scales.
Unitarity means that the propagator conserves what is being propagated, so spontaneous creation or destruction of stuff doesn't just happen. Again, this is considered a generally desirable
property, to the extent that any theory that lacked any of these three properties would be considered a very bad theory. The creator of such a theory would have to give some account as to why
it was ok for their theory to not be Lorentz invariant, analytic or unitary.
So this is not so much "testing string theory" as "testing some very basic assumptions about the constraints any good theory should fulfill." This is a good and worthy goal, but it is a very
weird bit of marketing to advertise it as "testing string theory" rather than putting it in its more fundamental context.
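Lorentz invariance in the sense used above can be illustrated numerically: a boost changes an event's coordinates, but leaves the interval t² − x² unchanged. A toy sketch in units where c = 1 (my own example, not from the paper under discussion):

```python
import math

def boost(t, x, v):
    """Lorentz boost (c = 1) of event (t, x) into a frame moving at velocity v."""
    g = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz factor gamma
    return g * (t - v * x), g * (x - v * t)

def interval(t, x):
    """Invariant interval s^2 = t^2 - x^2 (one spatial dimension)."""
    return t * t - x * x

t, x = 3.0, 1.0
tp, xp = boost(t, x, 0.6)
# The interval is the same in both frames (up to rounding):
print(interval(t, x), interval(tp, xp))  # both ≈ 8.0
```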
• Somewhat inaccurate title (Score:4, Insightful)
by ThinkFr33ly (902481) on Wednesday January 24, 2007 @02:55PM (#17741092)
The tests proposed would not "prove" string theory. They are only testing some of the fundamental assumptions on which string theory is based.
If the test shows that one or more of these assumptions is incorrect, however, then it would probably force a very fundamental rethinking of string theory... essentially disproving it.
• by eviloverlordx (99809) on Wednesday January 24, 2007 @02:55PM (#17741094)
Did anyone honestly think that the answer would be different?
□ by iggymanz (596061)
no, of course not, but the question might be different
• XKCD Has a great take on this... (Score:5, Funny)
by Acy James Stapp (1005) on Wednesday January 24, 2007 @02:55PM (#17741102)
http://www.xkcd.com/c171.html [xkcd.com]
• Large what collider? (Score:5, Funny)
by elliott666 (447115) on Wednesday January 24, 2007 @02:57PM (#17741128)
Oh...Large Hadron Collider. If it was in the Castro district I would really be suspicious.
• Hmm... (Score:2)
by rewt66 (738525)
Grinstein also noted that if their test does not substantiate what the theory predicts, one of the key mathematical assumptions about the current string theory would be incorrect.
As opposed to the whole idea being bogus? The difference is whether you go for the New, Improved String Theory, Now With Fewer Bogus Assumptions(TM), or whether you throw the whole thing out.
Sounds like the physicists want to try to tweak it rather than junk it, even if it fails the experiment.
Note that "starting over with
• Epicycles redux? (Score:5, Insightful)
by sjbe (173966) on Wednesday January 24, 2007 @02:59PM (#17741154)
I'm by no means an expert in string theory. I barely grasp the basic concepts. However I am an engineer who has taken a LOT of physics classes over the years and I'm not completely ignorant.
String theory has always struck me as a modern day version of epicycles before it was realized that planets follow ellipses instead of circles. It just seems like we're trying to fit the math to
the model instead of modifying the model so that the math makes sense. Add in the fact that it makes no testable predictions (not yet anyway) and it's bordering on not being science anymore.
Maybe technology advances will change that but then again maybe not.
Maybe string theory is right, I don't honestly know. But it seems like a lot of group think is going on and little progress is being made.
□ Re: (Score:3, Informative)
by stevesliva (648202)
String theory has always struck me as a modern day version of epicycles before it was realized that planets follow ellipses instead of circles
Epicycles were a way to explain why planets that were orbiting the earth apparently reversed their direction in our sky for certain periods of time.
□ Re:Epicycles redux? (Score:5, Interesting)
by Ambitwistor (1041236) on Wednesday January 24, 2007 @03:28PM (#17741634)
Everyone always seems eager to compare to epicycles any modern physics theory they don't care for. String theory, dark matter, what have you...
Physicists were led to string theory in a search for a consistent theory of quantum gravity, not in a search to make up the most complicated theory possible to fudge arbitrary data. For more
on why string theory should be taken seriously as a solution to this problem, you can read a long analysis in a previous post of mine here [slashdot.org]. String theory itself cannot be
modified to "fit" to a model; it is a unique theory with no adjustable parameters or interactions. However, you can construct various string models to fit observations, as you can presently
using quantum field theory models like the Standard Model.
It is also not correct that string theory doesn't make testable predictions. This whole story is about testing predictions of certain string models. However, we can't presently test
predictions of all string models at once, and thus rule out all of string theory. But then, the same is true of quantum field theory models as well; there are infinitely many such models that
could be true but which we can't yet test.
☆ Re:Epicycles redux? (Score:4, Insightful)
by Ambitwistor (1041236) on Wednesday January 24, 2007 @03:46PM (#17741974)
This whole story is about testing predictions of certain string models. However, we can't presently test predictions of all string models at once, and thus rule out all of string theory.
Shame on me for not RTFA. The story is about testing all string models at once. However, the tests are of a very general sort (e.g., "do probabilities add up to 1") so, with the possible
exception of Lorentz invariance (obeying special relativity at all scales), even non-string theorists would not bet highly on violations being seen.
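The "do probabilities add up to 1" check is just unitarity: a unitary map preserves the squared norm of a state vector. A toy sketch (my own, entirely unrelated to the actual scattering calculation):

```python
import math

def apply(U, psi):
    """Multiply a 2x2 matrix U by a 2-component complex vector psi."""
    return [U[0][0] * psi[0] + U[0][1] * psi[1],
            U[1][0] * psi[0] + U[1][1] * psi[1]]

def norm2(psi):
    """Total probability: sum of |amplitude|^2."""
    return sum(abs(a) ** 2 for a in psi)

theta = 0.7
# A rotation matrix is unitary; probabilities still sum to 1 after applying it.
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
psi = [3 / 5, 4j / 5]  # |3/5|^2 + |4/5|^2 = 1
print(norm2(psi), norm2(apply(U, psi)))  # both ≈ 1.0
```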
□ Re:Epicycles redux? (Score:5, Insightful)
by alienmole (15522) on Wednesday January 24, 2007 @03:29PM (#17741654)
I think you're quite right. The problem, though, is that we really don't know how else to do this kind of science at this point. We've reached the edges of our ability to test theories, not
just for want of bigger particle accelerators, but also because of more fundamental issues -- we're inside the universe, and there's no fundamental reason that we should be able to figure out
exactly how the universe works, from the inside, any more than a creature inhabiting the two-dimensional surface of a balloon can figure out that the balloon's surface is supported by air
pressure in a three-dimensional space.
So in a sense, string theory is just the cover story that scientists use to continue conducting research. It's something to focus energy around, like the space program was for 1960's America.
Eventually maybe we'll hit on some experimental data or a less unconstrained idea which gives us a clue as to how to proceed.
• The LHC is at CERN (Score:5, Interesting)
by Anonymous Coward on Wednesday January 24, 2007 @03:01PM (#17741182)
I think it's funny how the article forgets to mention that the LHC is located at CERN (the European nuclear physics institute). As a matter of fact, it is not only in Switzerland, but
extends to France as well. The article only mentions it is similar to the U.S. Fermilab accelerator, but then forgets to add that there are many kinds of accelerators world wide.
Funny, ain't it?
□ by GrayCalx (597428)
Funny? Funny how? Funny as in it personally offends you that its specific location isn't mentioned?
Funny like, you naturally assuming the editor purposefully left out the location because it was not an American location?
Or funny like "Hahahah that old woman slipped on some ice and broke her hip" funny? Cuuuuz I gotta tell you, I didn't laugh as hard as I did at that woman this morning.
• Nothing new (Score:4, Informative)
by forand (530402) on Wednesday January 24, 2007 @03:02PM (#17741200) Homepage
The tests being proposed by the physicists in this blog would not test string theory, in that it does not test any prediction of string theory but the underlying assumptions. The write up is very
misleading since Lorentz invariance has been tested throughout the past 80 years and always stood up to the tests. I suspect that someone wants to get more funding and mentioned testing string
theory to a funding agency.
• All the tests in the world can only do one thing with string theory: show that we haven't found a way to disprove it yet. All scientific theories are open to being disproven, that is the beauty
of science, that is why it is not a religion, as much as religious types would like it to be, and despite the fact that many so-called scientists actually use it as a religion. The best one can
hope for is that observation continues to bear out the predictive abilities of the theory. And you can consider a well te
• Black holes? (Score:3, Interesting)
by egrinake (308662) <erikg@cod3.14159epoet.no minus pi> on Wednesday January 24, 2007 @03:05PM (#17741274)
I remember hearing about plans to use the LHC to produce and study miniature black holes. These are supposed to evaporate nearly instantaneously due to Hawking radiation, but such radiation is
only a theory without any experimental verification, and apparently quite a few scientists are concerned it will just go ahead and gobble up the earth.
At least it will be quick :)
□ Why we musn't fear microscopic black holes (Score:5, Informative)
by benhocking (724439) <[moc.oohay] [ta] [gnikcohnimajneb]> on Wednesday January 24, 2007 @03:19PM (#17741500) Homepage Journal
The energies that will be created in the LHC happen on a daily basis in our upper atmosphere. The only difference is that we will have detectors in the immediate vicinity.
□ by David_Shultz (750615)
I remember hearing about plans to use the LHC to produce and study miniature black holes. These are supposed to evaporate nearly instantaneously due to Hawking radiation, but such radiation is
only a theory without any experimental verification, and apparently quite a few scientists are concerned it will just go ahead and gobble up the earth. At least it will be quick :)
First of all the black holes being created by the LHC are not intentionally being created -they are a predicted (by some) consequence o
☆ Re:Black holes? (Score:4, Insightful)
by vondo (303621) on Wednesday January 24, 2007 @03:36PM (#17741766)
Second of all, a miniature black hole, even if it didn't dissipate due to Hawking radiation, wouldn't gobble up the Earth. It would still have the gravity of a mere two protons, since
that is what constitutes its mass.
Not quite. The theorized micro-black-holes would have masses of about 1000 protons, the amount of energy available in the collision.
○ by David_Shultz (750615)
The theorized micro-black-holes would have masses of about 1000 protons, the amount of energy available in the collision. That's a neat trick -but it makes sense. Thanks.
• IANA Theoretical Physicist, but.... (Score:4, Interesting)
by hhr (909621) on Wednesday January 24, 2007 @03:07PM (#17741312)
"The canonical forms of string theory include three mathematical assumptions--Lorentz invariance, analyticity and unitarity. Our test sets bounds on these assumptions." --Benjamin Grinstein
Don't quantum mechanics and GRT also include the above? Meaning if the experiments don't confirm the above then more than just string theory is in trouble.
Of course analyticity probably has some very subtle meaning in string theory. Any one here in the know?
□ Re:IANA Theoretical Physicist, but.... (Score:4, Informative)
by Ambitwistor (1041236) on Wednesday January 24, 2007 @03:30PM (#17741684)
Yes, those assumptions are also shared by standard quantum field theory. (You can write down Lorentz-violating quantum field theories though.) So you're right, if those turn out to be wrong
it's a bigger deal than just ruining string theory.
• Bye, everyone! (Score:4, Funny)
by Rob T Firefly (844560) on Wednesday January 24, 2007 @03:07PM (#17741324) Homepage Journal
November 2007? Sure, what the hell, I've had a good life.
So, who wants to loan me large sums of money? Pay you back in December?
□ Oh, lighten up. (Score:4, Funny)
by Quiet_Desperation (858215) on Wednesday January 24, 2007 @03:47PM (#17741994)
We can have a raffle! How will the LHC destroy us? Check one:
[ ] Microscopic black holes
[ ] Trigger collapse of the false vacuum
[ ] Strange matter
[ ] Magnetic monopoles
[ ] Disruption of the Wigner observer cascade causes a universal system reset
[ ] God notices and stuffs us all into Carlsbad Caverns
• by Quantam (870027)
Oh ye of little faith
• debate still rages? (Score:5, Funny)
by mugnyte (203225) on Wednesday January 24, 2007 @03:09PM (#17741342) Journal
I thought this was cleared up years ago:
Scanning/Copying based on a terminator byte pattern is fraught with error and is definitely not secure.
Buffer sizes are terribly problematic when left to the caller to check on overflow. It must be in the methods, and thus part of the data structure. (see point above).
Strings these days are UTF-7 or 8, which makes them an even better candidate for an object-based construct rather than a memory map.
I'd like to point out the....oh, wait...
• If this is the same story referenced here [ucsd.edu], it's bogus [columbia.edu]. To quote Not Even Wrong [columbia.edu],
It is based on a paper which has nothing to with string theory and doesn't do a string theory calculation at all. The paper first appeared on the arXiv last April with the title Falsifying
String Theory Through WW Scattering, and was extensively discussed here. In October a new version of the paper was put on the arXiv, with a changed title Falsifying Models of New Physics via
WW Scattering (and this was discussed here). I'm gu
□ by LotsOfPhil (982823)
• by TheWoozle (984500) on Wednesday January 24, 2007 @03:18PM (#17741476)
In what other endeavor can you persuade people to spend hundreds of millions of dollars to build complicated machinery and pay you a salary based on the following (roughly paraphrased)
"You see, what we'll do is accelerate some shit up to within a hairs-breadth of the speed of light then smash it into some other shit and see what happens."
Gotta love those wacky physicists! ;-)
□ by Quiet_Desperation (858215)
"You see, what we'll do is accelerate some shit up to within a hairs-breadth of the speed of light then smash it into some other shit and see what happens."
Other than the speed of light part, most anyone who ever worked on a military weapons contract.
• arXiv link (Score:2, Informative)
by flawedconceptions (1000049)
http://arxiv.org/abs/hep-ph/0604255 [arxiv.org]
• In Brian Greene's book The Elegant Universe (1999), he claimed that the LHC would be able to find the existence of superparticles that were predicted by string theory. I'm unable to explain a lot
of the details there, but this new article seems pretty similar. 8 years ago we were waiting for the LHC to come along and have a chance of confirming string theory, and now some scientists tell
us to wait for the LHC to be able to prove string theory. It's not like we ran out of ways to prove/disprove string th
• Proven String Theory (Score:3, Funny)
by WED Fan (911325) <akahige@@@trashmail...net> on Wednesday January 24, 2007 @03:28PM (#17741628) Homepage Journal
String Theory was proven on July 16, 2003, and confirmed after peer review and over 20 separate duplicated efforts, including a lab in Dallas, Texas.
Proven: When you need a piece of string to tie something up, and you find a piece of string in a junk drawer, it will always be too short for use, or too long and when cut to the appropriate
length, the remaining piece will be too short for further use.
A similar, but as yet unproven theory is in testing: When you have a piece of string and measure it by "eyeballing" it will always be too short for actual use.
• Some questions: (Score:3, Insightful)
by Quiet_Desperation (858215) on Wednesday January 24, 2007 @03:32PM (#17741708)
1. Which string theory? There's a few. Anyone who says "M-Theory" will get slapped.
2. What predictions does the string theory in question make?
3. Are the predictions unique to string theory?
□ Re:Some questions: (Score:5, Informative)
by Ambitwistor (1041236) on Wednesday January 24, 2007 @03:42PM (#17741882)
Which string theory? There's a few. Anyone who says "M-Theory" will get slapped.
All of them. (And "M-theory" is a perfectly legitimate answer; you can't escape the fact that all the string "theories" are really just different regions of solution space of the same theory.)
What predictions does the string theory in question make?
In this case, unitarity, analyticity, Lorentz invariance, and crossing. (Or rather, that all those properties are obeyed to arbitrarily high energies.)
Are the predictions unique to string theory?
No, they're also axioms of standard relativistic quantum field theories.
☆ by Quiet_Desperation (858215)
Thanks, although define "arbitrarily high". ;-)
• by Ambitwistor (1041236)
If anyone cares to read a highly technical discussion of the paper by its first author (Jacques Distler), you can read his blog entries and the accompanying comments here [utexas.edu] and here
• Mythbusters (Score:5, Funny)
by Cervantes (612861) on Wednesday January 24, 2007 @03:44PM (#17741936) Journal
it will use liquid helium cooled superconducting magnets to produce electric fields that will propel particles to near light speeds in a 16.7 mile circular tunnel. They then introduce a new
particle into the accelerator, which collides with the existing ones, scattering many other mysterious subatomic particles about.
This is why the Mythbusters should not be allowed to design scientific equipment. I can picture Adam dancing about in girlish glee even now...
• Back when folks were still trying to figure out the Periodic Table of Elements, there was a promising idea which came out of the field of topology. It was based on the topology of knots, such as
one could visualize as closed loops of string. It seemed to "predict" chemical properties for elements as heavy as Calcium but broke down beyond that. The similarity of the two patterns turned
out to be only a coincidence, so the theory was discarded.
I predict that this new incarnation of "string theory" will b
□ by Ambitwistor (1041236)
Wow, a theory totally unrelated to modern string theory was once disproved, and you "predict" that string theory will also be disproved for similar reasons.
News at 11: phlogiston disproved, therefore string theory is wrong.
November 26th 2009, 04:52 PM #1
Junior Member
Nov 2009
Let B = {1, 2, 3, ..., m} and let R be an antisymmetric relation on B.
a) What is the biggest number of ordered pairs that can be in R?
b) How many antisymmetric relations on B have the size found in a)?
(a) Since the matrix representation M of R can be an upper triangular matrix, the biggest size is $\frac{m(m+1)}{2}$.
(b) Since the elements on the diagonal of M should all be 1, there are $2^{\frac{m(m-1)}{2}}$ distinct antisymmetric relations on B having the size found in (a).
The same problem was also considered here. If you have difficulties, I suggest starting with writing some examples of such relations with the maximum number of pairs for small $m$.
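Both counts can be verified by brute force for small $m$ (an illustrative sketch, not part of the original solution):

```python
from itertools import product

def antisymmetric_stats(m):
    """Brute-force check for small m: the largest antisymmetric relation
    on {1..m} has m(m+1)/2 pairs, and 2**(m(m-1)/2) relations attain it."""
    elems = range(1, m + 1)
    pairs = [(a, b) for a in elems for b in elems]
    best, count = 0, 0
    # Enumerate every relation as a subset of the m*m ordered pairs.
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, b in zip(pairs, bits) if b}
        if any((b, a) in rel for (a, b) in rel if a != b):
            continue  # not antisymmetric
        if len(rel) > best:
            best, count = len(rel), 1
        elif len(rel) == best:
            count += 1
    return best, count

print(antisymmetric_stats(3))  # (6, 8): 3*4/2 = 6 pairs, 2**3 = 8 relations
```

A maximal antisymmetric relation must contain the whole diagonal plus exactly one of each off-diagonal pair, which is where the two formulas come from.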
de Bruijn. λ-Calculus Notation with Nameless Dummies, A Tool for Automatic Formula Manipulation
, 1999
"... rule). This is also the case for NJ and LJ as defined in this formalisation. This is due to the particular nature of the logics in question, and does not necessarily generalise to other logics.
In particular, a formalisation of linear logic would not work in this fashion, and a more complex variable ..."
rule). This is also the case for NJ and LJ as defined in this formalisation. This is due to the particular nature of the logics in question, and does not necessarily generalise to other logics. In
particular, a formalisation of linear logic would not work in this fashion, and a more complex variable-referencing mechanism would be required. See Section 6 for a further discussion of this
problem. Other operations, such as substitutions (sub in Table 2) and weakening, require lift and drop operations as defined in [27] to ensure the correctness of the de Bruijn indexing.
, 1997
Cited by 1 (1 self)
. We present an overview of three approaches to formal metatheory: the formal study of properties of deductive systems. The approaches studied are: nameless dummy variables (also called de Bruijn
indices) [dB72], first order abstract syntax for terms with higher order abstract syntax for judgements [MP93, MP97], and higher order abstract syntax [Pfe91]. 1 Introduction Formal meta-theory, the
machine assisted proof of theorems about logical systems, is a relatively new field. While some approaches ([dB72]) have been known about for some time, large developments have been rare until
recently. Starting with [Alt93, Coq93] we have some formalisations of strong normalisation for natural deduction calculi using de Bruijn indices. The body of work in Elf [Pfe91] includes some formal
meta-theory using the higher order abstract syntax method which is integral to the LF approach. The work of McKinna, Pollack and others in [vBJMR94, MP93, MP97] demonstrates a slightly different
approach using a ...
, 1997
Cited by 1 (1 self)
We describe a formalisation of proof theory about sequent-style calculi, based on informal work in [DP96]. The formalisation uses de Bruijn nameless dummy variables (also called de Bruijn indices)
[dB72], and is performed within the proof assistant Coq [BB + 96]. We also present a description of some of the other possible approaches to formal meta-theory, particularly an abstract named syntax
and higher order abstract syntax. 1 Introduction Formal proof has developed into a significant area of mathematics and logic. Until recently, however, such proofs have concentrated on proofs within
logical systems, and meta-theoretic work has continued to be done informally. Recent developments in proof assistants and automated theorem provers have opened up the possibilities for
machine-supported meta-theory. This paper presents a formalisation of a large theory comprising of over 200 definitions and more than 500 individual theorems about three different deductive system. 1
The central dif...
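To make the nameless-dummy idea concrete, here is a minimal sketch (mine, not drawn from any of the formalisations above) of de Bruijn-indexed lambda terms with the lift and substitution operations the cited work depends on; the representation and the beta rule follow the standard textbook treatment:

```python
from dataclasses import dataclass

# De Bruijn terms: Var(k) refers to the k-th enclosing binder,
# so \x. x is Lam(Var(0)) and \x. \y. x is Lam(Lam(Var(1))).
@dataclass(frozen=True)
class Var:
    k: int

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def lift(t, d=1, cutoff=0):
    """Shift every free variable (index >= cutoff) by d."""
    if isinstance(t, Var):
        return Var(t.k + d) if t.k >= cutoff else t
    if isinstance(t, Lam):
        return Lam(lift(t.body, d, cutoff + 1))
    return App(lift(t.fn, d, cutoff), lift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t, lifting s under each binder."""
    if isinstance(t, Var):
        return s if t.k == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, lift(s)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

def beta(app):
    """One beta step: App(Lam(b), a) -> b with a substituted for index 0."""
    return lift(subst(app.fn.body, 0, lift(app.arg)), -1)

# (\x. \y. x) applied to a free variable: the free index is renumbered
# correctly under the remaining binder.
assert beta(App(Lam(Lam(Var(1))), Var(5))) == Lam(Var(6))
assert beta(App(Lam(Var(0)), Var(3))) == Var(3)
```

No alpha-renaming is ever needed: terms differing only in bound-variable names have literally the same representation, which is what makes the approach attractive for machine-checked metatheory.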
Updated March 28, 2011
a, b, c, x, y, z are used to denote variables and/or expressions below. n is used to denote an integer, V a vector space, and T a linear operator. { and } pairs are always used when appropriate, but they are sometimes optional, for example when a quantity has a subscript or superscript consisting of a single number or letter, as a^2 for a squared or x_i for x subscript i.
Common Functions
LaTeX Symbol Meaning
\frac{x}{y} fraction with numerator x and denominator y
\sqrt{x} square root of x
\sqrt[n]{x} n'th root of x
\sum_{a}^{b} x Sum from a to b of x
\prod_{a}^{b} x Product from a to b of x
\int_{a}^{b} x integral from a to b of x
Trig Functions
LaTeX Symbol Meaning
\cos x cosine of x
\sec x secant of x
\arccos x arc cosine of x
\cosh x hyperbolic cosine of x
\sin x sine of x
\csc x cosecant of x
\arcsin x arc sine of x
\sinh x hyperbolic sine of x
\tan x tangent of x
\cot x cotangent of x
\arctan x arc tangent of x
\tanh x hyperbolic tangent of x
\coth x hyperbolic cotangent of x
LaTeX Symbol Meaning
\arg(z) The argument of z; the angle in the polar form of the complex number z
\dim V The dimension of V; the dimension of the vector space V
\exp x exponential of x
\hom (G,H) The set of homomorphisms from G to H; the set of morphisms from G to H
\ker T The kernel of T; the kernel of the linear operator T
\lg x logarithm of x to base 10
\ln x natural logarithm of x
\log x logarithm to base 10 of x
\log_n x logarithm to base n of x
\varliminf (x_n) The limit of the sequence of infima of (x_n)
\varlimsup (x_n) The limit of the sequence of suprema of (x_n)
\varinjlim The injective limit (colimit) of a family
\varprojlim The projective limit (limit) of a family
\det determinant
\gcd(n,m) The greatest common divisor of n and m
\inf A The infimum of A; the greatest lower bound of A
\injlim The injective limit (colimit) of a family
\lim_{x\to y} z limit of z as x approaches y
\liminf (x_n) The limit of the sequence of infima of (x_n)
\limsup (x_n) The limit of the sequence of suprema of (x_n)
\max (x,y) maximum of x,y
\min (x,y) minimum of x,y
\projlim The projective limit (limit) of a family
\Pr X The probability of event X
\sup A The supremum of A; the least upper bound of A
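As a brief illustration (not part of the original tables), several of these commands combine in display math like so:

```latex
\documentclass{article}
\begin{document}
\[
  \lim_{x \to 0} \frac{\sin x}{x} = 1 , \qquad
  \sqrt[3]{x} = x^{1/3} , \qquad
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}
\]
\end{document}
```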
This work is licensed under a Creative Commons Attribution 3.0 Unported License
SQRT(3) Linux Programmer's Manual SQRT(3)
sqrt, sqrtf, sqrtl - square root function
SYNOPSIS
#include <math.h>
double sqrt(double x);
float sqrtf(float x);
long double sqrtl(long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
sqrtf(), sqrtl():
_BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 ||
_ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L;
or cc -std=c99
DESCRIPTION
The sqrt() function returns the nonnegative square root of x.
RETURN VALUE
On success, these functions return the square root of x.
If x is a NaN, a NaN is returned.
If x is +0 (-0), +0 (-0) is returned.
If x is positive infinity, positive infinity is returned.
If x is less than -0, a domain error occurs, and a NaN is returned.
ERRORS
See math_error(7) for information on how to determine whether an
error has occurred when calling these functions.
The following errors can occur:
Domain error: x less than -0
errno is set to EDOM. An invalid floating-point exception
(FE_INVALID) is raised.
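These special cases are easy to observe from an interpreter; the sketch below uses Python's math.sqrt, which wraps the C library function (CPython reports the EDOM case as a ValueError). This is an illustration, not part of the manual page.

```python
import math

# The documented special cases:
assert math.sqrt(+0.0) == 0.0
assert math.copysign(1.0, math.sqrt(-0.0)) == -1.0      # -0 maps to -0
assert math.sqrt(float("inf")) == float("inf")
assert math.isnan(math.sqrt(float("nan")))              # NaN maps to NaN

# x less than -0 is a domain error (EDOM, surfaced here as ValueError):
try:
    math.sqrt(-1.0)
except ValueError as err:
    print("domain error:", err)
```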
CONFORMING TO
C99, POSIX.1-2001. The variant returning double also conforms to
SVr4, 4.3BSD, C89.
SEE ALSO
cbrt(3), csqrt(3), hypot(3)
COLOPHON
This page is part of release 3.64 of the Linux man-pages project. A
description of the project, and information about reporting bugs, can
be found at http://www.kernel.org/doc/man-pages/.
Kripke resource models of a dependently-typed, bunched lambda-calculus
Ishtiaq, S. and Pym, D. J., 2002. Kripke resource models of a dependently-typed, bunched lambda-calculus. Journal of Logic and Computation, 12 (6), pp. 1061-1104.
The λΛ-calculus is a dependent type theory with both linear and intuitionistic dependent function spaces. It can be seen to arise in two ways. Firstly, in logical frameworks, where it is the language of the RLF logical framework and can uniformly represent linear and other relevant logics. Secondly, it is a presentation of the proof-objects of a structural variation, with Dereliction, of a fragment of BI, the logic of bunched implications. As such, it is also closely related to linear logic. BI is a logic which directly combines linear and intuitionistic implication and, in its predicate version, has both linear and intuitionistic quantifiers. The λΛ-calculus is the dependent type theory which generalizes both implications and quantifiers. In this paper, we study the categorical semantics of the λΛ-calculus, giving a theory of 'Kripke resource models', i.e. monoid-indexed sets of functorial Kripke models, in which the monoid gives an account of resource consumption. A class of concrete, set-theoretic models is given by the category of families of sets parametrized over a small monoidal category, in which the intuitionistic dependent function space is described in the established way, but the linear dependent function space is described using Day's tensor product.
Item Type Articles
Creators Ishtiaq, S.and Pym, D. J.
Departments Faculty of Science > Computer Science
Refereed Yes
Status Published
ID Code 5574
Additional Information ID number: ISI:000181160400007
Présentation de la théorie d'Arakelov. Current trends in arithmetical algebraic geometry
Bull. Amer. Math. Soc., 1992
Cited by 40 (3 self)
Abstract. In this paper we discuss the basic problems of algorithmic algebraic number theory. The emphasis is on aspects that are of interest from a purely mathematical point of view, and practical
issues are largely disregarded. We describe what has been done and, more importantly, what remains to be done in the area. We hope to show that the study of algorithms not only increases our
understanding of algebraic number fields but also stimulates our curiosity about them. The discussion is concentrated of three topics: the determination of Galois groups, the determination of the
ring of integers of an algebraic number field, and the computation of the group of units and the class group of that ring of integers. 1.
, 811
ABSTRACT. Let K be a number field and X1 and X2 two smooth projective curves defined over it. In this paper we prove an analogue of the Dyson Theorem for the product X1 × X2. If X_i = P^1 we recover the classical Dyson theorem. In general, it will imply a self-contained and easy proof of Siegel's theorem on integral points on hyperbolic curves and it will give some insight on effectiveness. This proof is new and avoids the use of the Roth and Mordell-Weil theorems, the theory of Linear Forms in Logarithms and the Schmidt subspace theorem.
1 Introduction. After the proof of the Mordell conjecture by Faltings (the first proof is in [Fa1], but [Fa2], [B2] and [Vo2] are nearer to the spirit of this paper), most of the qualitative results in the diophantine approximation of algebraic divisors by rational points over curves are solved.
Directory tex-archive/macros/mathematica
TeX/Mathematica is a set of tools that provide facilities of
Mathematica Notebooks in a UNIX environment, under GNU Emacs. They
permit interaction between a text and a Mathematica buffer and, if
desired, the use of TeX/LaTeX to annotate Mathematica-based
explorations and programs. Inclusion of Mathematica-generated
graphics in TeX/LaTeX documents printed using PostScript is
supported. The tools also support the automatic generation of
Mathematica packages from Mathematica documents.
With these tools one can interactively develop and refine teaching and
research documents. The interactive nature of the tools encourages
Mathematica-based exploration as a natural part of the writing
Getting TeX/Mathematica
The TeX/Mathematica tools are available from Internet host
`chem.bu.edu' [128.197.30.18] by anonymous `ftp' in directory
`/pub/tex-mathematica'. The author can be reached at Internet address
The `ftp' directory contains four files
* `README', which duplicates the information
in this node.
* `CHANGES', which describes changes that have been made.
* `tex-mma-j.ps.Z', the compressed PostScript documentation/example.
* `tex-mma.tar.Z', the TeX/Mathematica distribution kit (includes the
first three files).
LaTeX description of TeX/Mathematica
The PostScript document `tex-mma-j.ps.Z' is derived from the LaTeX
description of TeX/Mathematica. Transfer the document from the
`ftp' directory in binary (image) mode and then print it with
zcat tex-mma-j.ps.Z | lpr
The source for this document is the LaTeX file `tex-mma-j.tex' and
the BibTeX file `tex-mma-j.bib'. You can use `tex-mma-j.tex'
as a LaTeX example of a TeX/Mathematica document.
Files in the distribution kit
The file `tex-mma.tar.Z' contains the TeX/Mathematica
tools. These consist of the following:
Documentation things:
* `tex-mma.texinfo', Texinfo documentation of TeX/Mathematica
* `tex-mma.ps.Z', compressed Texinfo document
formatted with TeX into PostScript
* `tex-mma.info', Info file (this file) for on-line Emacs documentation
* `tex-mma-j.tex', LaTeX description/example of TeX/Mathematica
* `tex-mma-j.bib', BibTeX file for `tex-mma-j.tex'
* `tex-mma-j.ps.Z', compressed LaTeX documentation formatted into PostScript
* `sin3x.ps', Mathematica-generated figure included in
* `tex-mma-tex.tex', TeX example of TeX/Mathematica
* `texinfo.tex', TeX macros used to format `tex-mma.texinfo'
GNU Emacs things:
* `tex-mma.el', the `tex-mathematica' package
* `math.el', David Jacobson's Mathematica mode package (*Note math::)
* `unix-tex-mma.el', example implementation of cell-type `unix'
TeX/LaTeX things:
* `mathematica10pt.tex', TeX 10 point TeX/Mathematica interface
* `mathematica12pt.tex', TeX 12 point TeX/Mathematica interface
* `mathematica.sty', LaTeX generic TeX/Mathematica interface
* `mathematica.tex', macros for formatting TeX/Mathematica
documents, used by the preceding files
Shell scripts and Mathematica commands for processing graphics, and
shell script and template file for Mathematica package assembly:
* `PSTeX.m', generates `psfig'-adapted graphics to a file, without
PostScript prolog
* `addBBox', shell script called by `PSTeX'
* `PSTeXpro.m', generates `psfig'-adapted graphics to a file, with
PostScript prolog
* `addBBoxpro', shell script called by `PSTeXpro'
* `addBBoxpro.awk', `awk' script called by `addBBoxpro'
* `PSFile.m', generates full-page graphics to a file, with PostScript prolog
* `tex-mma-assemble-package', shell script for Mathematica package assembly
* `tex-mma-assemble-package.tmplt', Emacs Lisp template used by `tex-mma-assemble-package'
This is from Cameron Smith (*Note addBBox::):
* `mma.pro.1.2', Mathematica Version 1.2 PostScript prolog. Note
that a different prolog will be needed for Mathematica Version 2.0.
This will be available from the author (*Note Getting tex-math::).
These are from Trevor Darrell's `psfig/tex' distribution (*Note psfig::):
* `psfig.pro', `psfig' PostScript prolog.
* `psfig.sty', `psfig' for LaTeX.
* `psfig.tex', `psfig' for TeX.
Installation procedure
1. Transfer `tex-mma.tar.Z' in binary (image) mode into an empty
directory, and extract its contents, with (for example)
zcat tex-mma.tar.Z | tar xvf -
2. Install the Info on-line documentation file `tex-mma.info' where
your Emacs looks for Info files. If desired, add a pointer to it to your
Info directory file so that it will appear in the Info top-level menu.
The source for `tex-mma.info' is `tex-mma.texinfo'. The file
`tex-mma.ps.Z' is the compressed TeX formatted version, made
using the `texinfo.tex' included in the distribution (see the
beginning of `tex-mma.texinfo' for directions). The formatted
version can be printed with
zcat tex-mma.ps.Z | lpr
It contains complete details of the TeX/Mathematica tools.
3. Edit `tex-mma.el' to
* set `tex-mma-process-string' to the command
you use to start Mathematica (default is `math');
* set `tex-mma-info-file' to point to where you put the file `tex-mma.info'.
4. Run `M-x byte-compile' on `tex-mma.el' and store the resulting
`tex-mma.elc' wherever your GNU Emacs looks for files.
5. Add the following to the GNU Emacs initialization file
(autoload 'tex-mathematica "tex-mma"
"Major-mode for interaction with Mathematica from TeX." t)
(autoload 'plain-tex-mathematica "tex-mma"
"Major-mode for interaction with Mathematica from TeX." t)
(autoload 'latex-mathematica "tex-mma"
"Major-mode for interaction with Mathematica from TeX." t)
6. If you do not have David Jacobson's Mathematica mode package
`math.el', install it following the instructions in `math.el'.
If you do have `math.el', make sure your version is at least as
current as the one here, which contains changes necessary for use with
`tex-mma.el'. Make sure you place the file `math.el' (or,
better, the byte-compiled file `math.elc') where your Emacs looks
for libraries, so that `tex-mma.elc' will be able to load it if it
is not already loaded.
7. Edit `mathematica.tex' (at the end) to specify where the PostScript
prolog files `psfig.pro' and `mma.pro.1.2' will be, and put the
prolog files there.
8. If you are using a LaTeX *earlier* than 2 May 90, edit
`mathematica.sty' according to the comments there.
9. Put `mathematica.tex', `mathematica.sty', `mathematica12pt.tex',
`mathematica10pt.tex', `psfig.sty', and `psfig.tex' where your
TeX looks for macro files.
10. Put `PSTeX.m', `PSTeXpro.m' and `PSFile.m' where your
Mathematica looks for packages.
11. Edit `addBBoxpro' to specify which `awk' you use (default is GNU
awk, `gawk').
12. Edit `tex-mma-assemble-package' to set the variables `bindir', `tmpdir'
and `tmpdirsed' for your system, as described in the comments there
and then place `tex-mma-assemble-package.tmplt' in `bindir'.
13. Put `addBBox', `addBBoxpro', `addBBoxpro.awk' and
`tex-mma-assemble-package' on your system's binary search path, make sure
they have execute status, and then execute `rehash'.
You should be ready to go: start up Emacs, run `M-x tex-mathematica',
and have fun.
Discussing with a non-statistician colleague, it seems that logistic regression is not intuitive; some basic questions come up, like: Why not use the linear model? What is the logistic function? How
can we compute it by hand, step by step t...
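For the second of those questions, the logistic function itself is simple to compute; a quick sketch (in Python, purely for illustration):

```python
import math

def logistic(t):
    """The logistic (sigmoid) function: maps any real t into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

print(logistic(0))           # 0.5, the midpoint
print(0 < logistic(-5) < 1)  # True: output always stays within (0, 1)
```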
Creating a QGIS-Style (qml-file) with an R-Script
How to get from a txt-file with short names and labels to a QGIS-Style (qml-file)? I used the below R-script to create a style for this legend table where I copy-pasted the parts I needed to a
txt-file, like for the WRB-FULL (WRB-FULL: Full soil code o...
R/Finance 2013 Registration Open
The registration for R/Finance 2013 -- which will take place May 17 and 18 in Chicago -- is NOW OPEN! Building on the success of the previous conferences in 2009, 2010, 2011 and 2012, we expect more
than 250 attendees from around the world. R users from...
R / Finance 2013 Open for Registration
The announcement below just went to the R-SIG-Finance list. More information is, as usual, at the R / Finance page: Now open for registrations: R / Finance 2013: Applied Finance with R, May 17 and 18, 2013,
Chicago, IL, USA. The registration for R/Fin...
Veterinary Epidemiologic Research: GLM (part 4) – Exact and Conditional Logistic Regressions
Next topic on logistic regression: the exact and the conditional logistic regressions. Exact logistic regression When the dataset is very small or severely unbalanced, maximum likelihood estimates of
coefficients may be biased. An alternative is to use exact logistic regression, available in R with the elrm package. Its syntax is based on an events/trials formulation.
Veterinary Epidemiologic Research: GLM – Evaluating Logistic Regression Models (part 3)
Third part on logistic regression (first here, second here). Two steps in assessing the fit of the model: first is to determine if the model fits, using summary measures of goodness of fit or by
assessing the predictive ability of the model; second is to determine if there are any observations that do not fit the
The evolution of EU legislation (graphed with ggplot2 and R)
During the last half century the European Union has adopted more than 100 000 pieces of legislation. In this presentation I look into the patterns of legislative adoption over time. I tried to create
clear and engaging graphs that provide … Continue reading →
Veterinary Epidemiologic Research: GLM – Logistic Regression (part 2)
Second part on logistic regression (first one here). We used in the previous post a likelihood ratio test to compare a full and null model. The same can be done to compare a full and nested model to
test the contribution of any subset of parameters: Interpretation of coefficients Note: Dohoo do not report the
Veterinary Epidemiologic Research: GLM – Logistic Regression
We continue to explore the book Veterinary Epidemiologic Research and today we’ll have a look at generalized linear models (GLM), specifically the logistic regression (chapter 16). In veterinary
epidemiology, often the outcome is dichotomous (yes/no), representing the presence or absence of disease or mortality. We code 1 for the presence of the outcome and 0
Stop Sign Project Post1: Some GIS stuff done in R
(This article was first published on bRogramming, and kindly contributed to R-bloggers.)
{0,1} Maslov potentials on Legendrian knots
A Legendrian knot is a curve in $\mathbb{R}^3$ on which $dz - ydx$ vanishes identically. Its projection to the $x,z$ plane is called a front diagram; as we can recover $y = dz/dx$ this determines the
curve. Note that since $y$ is finite, we can never have vertical tangents; instead there are cusps in the diagram.
A $\mathbb{Z}$-Maslov potential is an assignment of integers to the arcs connecting cusps such that the number below a cusp is one less than the number above a cusp. The condition to admit such a
thing is that the 'rotation number' -- which can be computed by counting the difference between the number of times you go from the top to the bottom of a cusp minus the number of times you go from
the bottom to the top as you traverse the knot diagram -- be zero. Evidently a Maslov potential must take on at least two values.
Which rotation number zero Legendrian knots admit a diagram which admits a Maslov potential taking values in {$0,1$}?
For instance, the closure of any positive braid is such a knot.
Tags: knot-theory, sg.symplectic-geometry
1 Answer
I don't have anything close to a complete answer, but for many $tb$-maximizing knots we can get some obstructions from Legendrian contact homology. The generators for the DGA $A(K)$
associated to a given front diagram (as described by Ng) are right cusps, which have grading 1, and crossings, whose grading is the difference in Maslov potentials of the two strands through
a crossing, so if the Maslov potential is valued in {0,1} then every generator has grading 0 or ±1. In particular, any 2-graded normal ruling of the front is actually just a graded normal
ruling, and vice versa.
The classical invariants of $K$ satisfy the HOMFLY-PT bound $tb(K)+|r(K)| \leq -\deg_a P_K(a,z) - 1$, and Rutherford showed that $K$ has a 2-graded normal ruling if and only if $K$ achieves
equality and $r(K)=0$. Thus if $K$ achieves the bound with $r(K)=0$ and $K$ has a {0,1} Maslov potential, we see that $K$ actually has a graded normal ruling. A theorem of Fuchs now says
that $A(K)$ has an augmentation, so we can compute the linearized contact homology of $K$, which is generated over $\mathbb{Z}/2$ by crossings and right cusps whose gradings are bounded as
above. In particular, if $K$ has a {0,1} Maslov potential and $tb(K) = -\deg_a P_K(a,z)-1$ then $A(K)$ must have augmentations, and the linearized homology for any augmentation has to be
supported entirely in degrees -1, 0, 1.
We can use this on some examples in the Legendrian knot atlas as follows. The first $m(5_2)$ knot has linearized contact homology in degree 2, so it can't have a {0,1} Maslov potential,
whereas the second $m(5_2)$ clearly does. The second $m(9_{45})$ knot has two different linearized contact homologies, and one of them is supported in degrees -1,0,1 but the other isn't, so
it can't have a {0,1} Maslov potential either. Finally, the first three $7_4$ knots achieve the HOMFLY-PT bound ($tb=1$) but don't have any linearized contact homology, so they can't have
{0,1} Maslov potentials while the fourth $7_4$ does.
Math.round() question
Hello All,
I have read about the round() method in the K&B book and I'm a little confused
by the statement below:
"If the number after the decimal point is less than 0.5 then Math.round() is
equal to Math.floor(). If the number after the decimal point is greater than
or equal to 0.5 then Math.round() is equal to Math.ceil()."
Maybe I misunderstood the statement above. Can someone please give me
guidance on this topic?
1.) Math.round(-5.4) returns -5
    Math.floor(-5.4) returns -6
    Note: The number after the decimal point is less than 0.5, which means
    Math.round() should equal Math.floor(), but the results don't match?
2.) Math.round(-5.6) returns -6
    Math.ceil(-5.6) returns -5
    Note: The number after the decimal point is greater than or equal to 0.5,
    which means Math.round() should equal Math.ceil(), but the results don't match?
Thank you in advance,
Jonathan O.,
Welcome back to JavaRanch!
We ain't got many rules 'round these parts, but we do got one. Please change
your display name to comply with The JavaRanch Naming Policy. We'd like it if
your last name contained more than just one letter.
Thanks Pardner! Hope to see you 'round the Ranch!
[How To Ask Good Questions] [JavaRanch FAQ Wiki] [JavaRanch Radio]
It looks like Kathy and Bert weren't thinking about negative numbers for that
explanation of how Math.round() works.
I prefer how the Math class documentation describes the behavior of Math.round():
    Returns the closest int to the argument. The result is rounded to an
    integer by adding 1/2, taking the floor of the result, and casting the
    result to type int. In other words, the result is equal to the value of
    the expression:
    (int)Math.floor(a + 0.5f)
If it's not already reported, and it doesn't look as if it has been, you might
do us all a favor and email Kathy and/or Bert about this error.
[ January 07, 2004: Message edited by: Dirk Schreckmann ]
From the Math class documentation:
    public static long round(double a)
    Returns the closest long to the argument. The result is rounded to an
    integer by adding 1/2, taking the floor of the result, and casting the
    result to type long. In other words, the result is equal to the value of
    the expression:
    (long)Math.floor(a + 0.5d)
Which means that the statement from the book you have cited is correct for
positive numbers only (and it's not correct for negative ones).
Hi Jonathan,
Kathy and Bates did not include negative floats or doubles in the explanation
of the Math.round() method; I had to try and figure it out. Perhaps they
thought it was too obvious? But they did point out the all-interesting -0.5
behaviour:
int i = Math.round(-10.5f); // gives -10
int j = Math.round(10.5f);  // gives 11
Both are ceil()'ed.
This is my first post, so I couldn't resist pointing out the 0.5 behavior.
Yeah, I got confused too after reading that particular bit. In the end I just
use Math.floor(x + 0.5).
SCJP 1.4
Hello All, thank you for your replies.
Can I assume the following for Math.round()?
For positive numbers:
1. less than 0.5 = floor()
2. greater than or equal to 0.5 = ceil()
For negative numbers:
1. greater than 0.5 = floor()
2. less than or equal to 0.5 = ceil()
Thanks again,
After looking at the following results:
Math.round(5.4)  : 5
Math.round(5.5)  : 6
Math.round(5.9)  : 6
Math.round(-5.4) : -5
Math.round(-5.5) : -5
Math.round(-5.9) : -6
I will simply follow the rule of thumb: add +0.5 (no matter whether positive
or negative) and floor it.
Please correct me if I am wrong.
Thanks,
Sudhir
SCJP 1.4
sudhir, does round(-5.5) give -5?
Yes, it gave -5 for me:
System.out.println("Math.round(-5.5) :" + Math.round(-5.5)); // prints -5
I'm using JDK 1.4.2 (in case it matters).
quote:
    For negative numbers the following can be applied:
    1. greater than 0.5 = floor()
    2. less than or equal to 0.5 = ceil()
This makes the former math teacher in me cringe, because, since we're talking
about negative numbers, "greater than 0.5" is confusing. -0.4 is greater than
-0.5. I think what you mean is "larger magnitude than -0.5".
This is why it's probably always better to use the "add 0.5 and take the
floor" rule; it's less confusing.
I like... There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
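The "add 0.5 and take the floor" rule discussed in this thread can be checked directly. A quick sketch (written in Python here purely for illustration, mirroring the documented Java expression `(long)Math.floor(a + 0.5d)`):

```python
import math

def java_round(x):
    # Mirrors Java's documented Math.round behavior: floor(x + 0.5)
    return math.floor(x + 0.5)

for x in (5.4, 5.5, -5.4, -5.5, -5.6, -10.5, 10.5):
    print(x, "->", java_round(x))
# 5.4 -> 5, 5.5 -> 6, -5.4 -> -5, -5.5 -> -5, -5.6 -> -6, -10.5 -> -10, 10.5 -> 11
```

This reproduces every value quoted in the thread, including the surprising round(-5.5) == -5 and round(-10.5) == -10.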
Aurora, IL Statistics Tutor
Find an Aurora, IL Statistics Tutor
...Whether it is math abilities, general reasoning, or test taking abilities that need improvements, I can help you progress substantially. I work with systems of linear equations and matrices
almost every day. My PhD in physics and long experience as a researcher in theoretical physics make me well qualified for teaching linear algebra.
23 Subjects: including statistics, calculus, physics, geometry
...I have also tutored neighbors, both high school age and middle age adults, in math and physics. Before moving here to Bartlett, IL, I taught anatomy and physiology in a health professions
career school, where many of the students had their GED or high school diploma. I learned patience and an appreciation for different learning styles from those students.
17 Subjects: including statistics, chemistry, geometry, reading
...I have taught this course at Westwood College. Since this course is often for computer majors, I have also taught both computer programming and Numerical Methods at 2 other colleges. Since I
have a masters in math, this course is easy.
25 Subjects: including statistics, writing, calculus, geometry
...I look forward to passing along the skills I have learned in helping students succeed in taking the GRE. My undergraduate and graduate training in Psychology gave me the ideal foundation for
helping identify student's individual learning styles (visual, auditory/verbal, kinesthetic/experiential)...
11 Subjects: including statistics, writing, GRE, grammar
...In my current role I work in a problem solving environment where I pick up issues and deliver solutions by working with different groups and communicating to management level. This type of
work helped me to communicate better and to make people understand at all levels. My goal in teaching is to provide the nurturing environment that allows children to learn and grow
16 Subjects: including statistics, chemistry, physics, calculus
Related Aurora, IL Tutors
Aurora, IL Accounting Tutors
Aurora, IL ACT Tutors
Aurora, IL Algebra Tutors
Aurora, IL Algebra 2 Tutors
Aurora, IL Calculus Tutors
Aurora, IL Geometry Tutors
Aurora, IL Math Tutors
Aurora, IL Prealgebra Tutors
Aurora, IL Precalculus Tutors
Aurora, IL SAT Tutors
Aurora, IL SAT Math Tutors
Aurora, IL Science Tutors
Aurora, IL Statistics Tutors
Aurora, IL Trigonometry Tutors
Build your own cryptographically safe server/client protocol
1. Abstract
The age of information is also the age of digital information assets, where the professional programmer has to deal with cryptography. This article presents the theory, source code, and
implementation for variable key size RSA encryption/decryption, digital signing, multi precision library, Diffie-Hellman key exchange, entropy collection, pseudo random number generator, and more.
The article presents how to implement your own secure protocol using the IOCP technology, by presenting a secure chat client/server solution implementation.
2. Requirements
• The article expects the reader to be familiar with C++, TCP/IP, socket programming, MFC, and multithreading.
• Some mathematic knowledge about elementary number theory and statistics is required.
• The source code uses Winsock 2.0 and IOCP technology, and requires:
□ Windows NT/2000 or later: Requires Windows NT 3.5 or later.
□ Windows 95/98/ME: Not supported.
□ Visual C++ .NET or a fully updated Visual C++ 6.0.
3. Introduction
The age of information is also the age of digital information assets. The professional developer has to deal with cryptography to make data storage and transmission secure. The purpose of this
article is not to “reinvent the wheel” or implement home made, mathematically-unsafe cryptographic algorithms. This article focuses on the practical details concerning cryptographically-safe
protocols, and presents the theory and source code for a secure client/server solution that can be used for any type of client/server application.
To know cryptography in theory is essential for a developer, but to implement it in practice is difficult. Many security exploits, such as buffer overflow [1], arise from the
implementation of theoretically secure algorithms. Many commercial/free high-quality cryptography libraries exist in the market, such as Crypto++ [2], OpenSSL [3], and Crypttool [4]. To use these
libraries, the developer has to know the cryptography theories behind the implementation, and also be aware of “what is happening under the hood of the library”. This is not an easy task, because the
internal structure of these libraries can be complex, and the libraries contain unnecessary functionality that is not always needed.
This article briefly explains cryptographic theories involving cryptographically-safe communication protocols, and also presents how this is implemented by providing a secure chat client/server
4. Background
This section explains some of the cryptographic theories and terminology needed to understand the rest of the article, namely section 5. If you think that you have enough experience and knowledge and
want to get your hands dirty, please continue to the next section (5. Implementation).
By using cryptographic algorithms, data storage and transmission can be performed in a secure way. Using algorithms to change the data in such a way that only an authorized recipient is able to
reconstruct the data is called encryption. For an unauthorized recipient, the encrypted data looks like a meaningless and random sequence of bits. A cryptographic algorithm, also known as cipher, is
a mathematical function which uses plain text (the data) as the input and produces cipher text (the encrypted data) as the output (and vice versa). A secret key is used together with the cipher to
encrypt the plain text, and the same key or another key is used to decrypt the cipher text back to plain text.
Different cryptographic algorithms or ciphers have different mathematical properties and weaknesses. The details of a cipher is usually made public. It is in the secret key that the security of a
modern cipher lies, not in the details of the cipher. To get additional information and knowledge about security, please read and try Crypttool [4]. The Crypttool software demonstrates several
encryption attack implementations, and gives you additional information about the algorithms used in this article.
4.1. Symmetric Cryptography
In Symmetric Cryptography, the sender and recipient use the same secret key to encrypt and decrypt plain text. This means that the sender and recipient must be in possession of a common (secret) key
which they have exchanged before actually starting to communicate.
The advantage of symmetric algorithms is the high speed with which data can be encrypted and decrypted.
One disadvantage is the need for key management. In order to communicate with each other confidentially, the sender and the recipient must have exchanged a key using a secure channel, before actually
starting to communicate.
Most modern symmetric algorithms operate on blocks of plain text. The procedures usually consist of many complex rounds of bit shifts and transformations to gain protection against different
mathematical analysis attacks. In section 5, we will choose symmetric cryptography based on its properties and cryptographic strength.
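The defining property above, that the same secret key both encrypts and decrypts, can be illustrated with a deliberately trivial XOR "cipher" (a toy sketch only; repeating-key XOR is not a secure algorithm and merely stands in for real ciphers here):

```python
from itertools import cycle

def xor_cipher(data, key):
    # Toy symmetric "cipher": XOR each byte with a repeating key.
    # Applying it a second time with the same key recovers the plain text.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
cipher_text = xor_cipher(b"attack at dawn", key)
plain_text = xor_cipher(cipher_text, key)
print(plain_text)  # b'attack at dawn'
```

Note that sender and recipient must already share `key`, which is exactly the key-management disadvantage described above.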
4.2. Asymmetric Cryptography
Unlike symmetric encryption, in asymmetric encryption each subscriber has a personal pair of keys consisting of a secret key and a public key. The public key is public; this means that anyone can have it. The keys are
mathematically related, yet it is computationally difficult to deduce one from the other. Using the public key, anyone can encrypt the plain text, and only the one that has the private key can
decrypt the message.
Using asymmetric encryption, two entities on a network (let’s call them Alice and Bob) can communicate securely using the following simple protocol:
1. Bob and Alice exchange public keys.
2. Alice encrypts her message with Bob's public key and sends it to Bob.
3. Bob encrypts his message with Alice's public key and sends it to Alice.
4. Bob decrypts Alice's message with his private key, and Alice decrypts Bob’s message with her private key.
Now, they can communicate because they know their private keys, and no one else can decrypt the message because they do not have Alice and Bob's private keys. This is not entirely true because of the
“man in the middle attack” and practical reasons that we will discuss in section 4.3 and 4.5.
The advantage is that no secure channel is needed before messages are transmitted, because all the information required in order to communicate confidentially can be sent openly.
The disadvantage is that pure asymmetric procedures take a lot longer to perform than symmetric ones. Therefore, asymmetric encryption is used to exchange a session key that is then used with symmetric encryption.
The most well-known asymmetric procedure is the RSA algorithm [can be found in 5, 10], named after its developers Ronald Rivest, Adi Shamir, and Leonard Adleman. The RSA algorithm was published in
1978. The RSA algorithm can be used for both public key encryption and digital signatures. Its security is based on the difficulty of factoring large integers. We will discuss the RSA encryption
algorithm and its implementation in section 4.6.
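The hybrid idea above, asymmetric encryption carrying a symmetric session key, can be sketched with textbook RSA (Python for illustration only; the 4-digit primes below are this sketch's assumptions, far too small for real use):

```python
import secrets

# Bob's toy RSA key pair (real moduli are thousands of bits long).
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)              # modular inverse of e mod phi (Python 3.8+)

# Alice picks a random session key and sends it under Bob's public key (n, e).
session_key = secrets.randbelow(n - 2) + 2
encrypted_key = pow(session_key, e, n)

# Bob recovers it with his private key; fast symmetric encryption
# can now take over for the rest of the conversation.
recovered_key = pow(encrypted_key, d, n)
print(recovered_key == session_key)  # True
```

Without the signatures of section 4.5, this exchange is still vulnerable to the man-in-the-middle attack described next.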
4.3. Man-in-the-Middle Attack
The public asymmetric encryption secure communication protocol between Alice and Bob described above in section 4.2 is vulnerable to a man-in-the-middle attack.
Let's assume that Mallory is an enemy hacker and can:
• listen to the traffic between Alice and Bob.
• can modify, delete, and substitute Alice's and Bob's messages, and also introduce new ones.
Mallory can impersonate Alice when talking to Bob, and impersonate Bob when talking to Alice. This is possible in a real life communication over the Internet.
An example of the attacker:
1. Bob sends Alice his public key. Mallory intercepts the key and sends his own public key to Alice.
2. Alice generates a random session key, encrypts it with Bob’s public key (which is really Mallory's), and sends it to Bob.
3. Mallory intercepts the message. He decrypts the session key with his private key, encrypts it with Bob's public key, and sends it to Bob.
4. Bob receives the message thinking it came from Alice. He decrypts it with his private key, and obtains the session key.
Alice and Bob start exchanging messages using the session key. Mallory, who also has the session key, can now decipher the entire conversation. This can, of course, be solved by using One-Way Hash
functions (section 4.4) to digitally sign the message package (section 4.5 and 5.3).
4.4. Message Digest
A message digest, also known as a one-way hash function, is a mathematical function that takes a variable-length input and converts it into a fixed-length binary sequence called the hash value.
Another important property of a message digest is that it is hard to reverse the process. Given the hash value, it is hard or impossible to find the input. Furthermore, a good hash function also
makes it hard to find two different data inputs that produce the same hash value. Even a slight change in an input data causes the hash value to change drastically, therefore, a message digest is
used to check the integrity/correctness of the digital data.
This makes a one-way hash function a central notion in public-key cryptography, and is used when producing a digital signature (section 4.5 and 5.3) for a message. Message digests are also used to
distribute randomness. For example, an 8-bit true random number is consumed by a message digest and becomes 160 bits, where the randomness is spread among the 160 bits. The most popular one-way
hash algorithms are MD4 and MD5 (both producing a 128-bit hash value), and SHA, also known as SHA1 (producing a 160-bit hash value).
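Both properties described above, the fixed output length and the drastic change from a tiny input change, are easy to observe with Python's hashlib (used here purely for illustration):

```python
import hashlib

h1 = hashlib.sha1(b"pay Bob $100").hexdigest()
h2 = hashlib.sha1(b"pay Bob $900").hexdigest()

print(len(h1) * 4)   # 160: SHA-1 always yields 160 bits, whatever the input size
print(h1 == h2)      # False: one changed character alters the whole digest
```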
4.5. Digital Signing
The main concept of digital signing is that the receiver can verify that a document or message is from a certain person or company. Digital signing is made using asymmetric cryptography (usually RSA)
and message digest to sign and verify a message, as described in the procedure below. For this to be possible, we need a strong asymmetric public key (n,e) that we trust. If you use your private key
to encrypt a message, anyone could decrypt it using your public key. What good is that? Well, it proves you are the one that encrypted it and it has not been altered since you did so. This forms the
basis of the digital signature.
Normally, an independent third party that everyone trusts, whose responsibility is to issue certificates, is called a Certification Authority (CA). A certificate is a data package that completely
identifies an entity, and is issued by a CA only after that authority has verified the entity's identity. The certificate can hold this asymmetric public key that everybody trusts.
In this article, we hard-code the trusted asymmetric public key (n,e) into the client software (also discussed in 5.6.3), and are not using other parties. We will discuss this later in section 5.3.4.
The procedure of digital signing is described below:
4.5.1. Digital signing assumption
The signer A possesses the private key (n,d) and the public key (n,e). The recipient B already trusts A's public key (n,e).
4.5.2. The digital signature procedure
1. Create a message digest or hash of the information to be sent; call it m (assuming m < n).
2. The signer A uses the private key to compute the signature (s = m^d mod n).
3. Send this signature s and the message to the recipient B.
4.5.3. The signature verification procedure
Recipient B does the following:
1. Uses sender A's public key (n, e) to compute an integer (v = s^e mod n).
2. Independently computes the message digest m of the information that has been signed.
3. If v equals m, the signature is valid.
The implementation of this can be found in MyCryptLib::DemoDSA(..) in the source code provided.
4.6. The RSA Algorithm
In this section, we briefly describe how the RSA algorithm works; we will discuss the implementation later. The RSA asymmetric algorithm [can be found in 5] was named after its developers, Ronald Rivest, Adi Shamir, and Leonard Adleman. Almost all known commercial secure protocols depend on RSA public key exchange. The strength of RSA lies in the difficulty of factoring the product of two very large prime numbers, p and q.
4.7. The RSA Procedure
The security of the RSA algorithm depends, as with all public key methods, on the difficulty of calculating the private key, d, from the public key, (n, e). This is achieved by making it difficult to recover the two prime factors (p, q) from the public modulus n (described below, section 4.7.1). Therefore, the prime numbers (p, q) must be very large.
4.7.1. Standard RSA algorithm description
n = pq, where p and q are distinct primes.
phi = (p-1)(q-1)
e < n such that gcd(e, phi) = 1, usually e = 3.
d = e^-1 mod phi; d is the private key.
c = m^e mod n, 1 < m < n (encryption).
m = c^d mod n (decryption).
This means that we need to operate with large integers, and also generate big prime numbers. We will discuss the implementation of this algorithm later, in section 5.3.
4.8. Random Numbers - The Backbone of all Cryptographic Security
Random numbers are used to generate keys, prime numbers, etc. Therefore, random numbers, or random bits, are the backbone of all cryptographic security. If the numbers are not truly random, the security can easily be broken. The central problem is that computers are deterministic machines, unable to produce or handle true randomness on their own. In this section, we discuss what randomness is, and also how computers generate pseudo-random numbers.
4.8.1. Entropy, the mathematical name for statistical randomness
Statistics tells us a few things about random bits. Zeros ought to occur as often as ones. There are statistical tests, such as the chi-squared test, that you can perform on a stream of numbers to
show how random they are. The degree to which a stream of bits follows this statistical form is the degree to which it is said to be entropic. This is the strict mathematical definition.
4.8.2. Pseudo-random number generators
A pseudo-random number generator is an algorithm that generates a sequence of numbers that acts like random numbers, but is not. The algorithm starts with a “seed”, a number to start with, and then uses that seed to generate a series of numbers that “act” as if they were random.
Usually, all pseudo-random number generators have a period. This means that, after a while, they start to repeat themselves. There are other “bad” properties, such as:
• Shorter than expected periods for some seed states (the period is dependent on the seed value)
• Poor dimensional distribution
• Mutually dependent values
• Some bits being 'more random' than others
• Lack of uniformity (everything is not equally probable)
In late 1999, the ASF Software Texas Hold’em poker application used the standard rand() to shuffle the cards. Cigital [12] discovered that only five cards needed to be known to predict the remaining cards. Because of the aspects discussed here, the choice of pseudo-random number generator algorithm is very important. In this implementation, we use the Mersenne Twister pseudo-random number generator, developed in 1997 by Makoto Matsumoto and Takuji Nishimura; more details on this later, in section 5.
4.8.3. Collecting entropy
Collecting truly random numbers is hard, but necessary. There exist hardware devices that generate true noise, observe cosmic ray flux, etc. However, we need to use the equipment that we have, which is the computers and the users. Instructing the user to move the mouse around the screen in a random manner (as in RandDialog in the source code) or to type some random letters is a good way to generate data, but it takes too much time, and often we do not have a user (for servers). Using the existing hardware in the computer that has some entropy (randomness) is another way.
• The clock in the computer is a quite good source of entropy; finer-resolution clocks are better. But if we use, for example, only TickCount() to get a random seed, a brute-force search will find it quickly: there are only 31.5 million ticks in a year. This is why we combine different parts of the hardware.
• At the Crypto '94 conference, Davis, Ihaka, and Fenstermacher showed that air turbulence inside a disk drive creates enough randomness to make cryptographic-strength random numbers.
• Use a microphone. Reading the microphone that is built into many computers gives a source of apparently random data.
Once we have collected random data, we have to distribute the randomness, or distill it in a nice way so all bits are equally random. The best way to do this is to use some sort of message digest
(discussed in section 4.4).
4.8.4. Summary
│Symmetric Cryptography │When the same key is used for encryption/decryption. Symmetric ciphers are fast, but the problem is that the communicating entities need to exchange the key. │
│Asymmetric Cryptography │A public key is used to encrypt data, and another secret key is used to decrypt data. The algorithm used for this is mainly RSA, and it is based on the difficulty of │
│                        │factoring the product of two big prime numbers. It is hard to derive the secret key from the public key. The communication can be done in the open, but it is not safe │
│                        │against “man in the middle attacks”.                                                                                                                                  │
│                        │                                                                                                                                                                      │
│                        │Asymmetric encryption is slow, and is only used to exchange keys.                                                                                                     │
│Man-in-the-Middle Attack │It is possible to listen, modify, and delete data packages between two entities in a network. Therefore, it is possible for a hacker to impersonate others in a network │
│ │and be the “Man-in-the-Middle”. The solution to this is digital signing. │
│Message Digest │Is a very important one-way function that produces a hash value of a fixed size, given data of variable size. This is used to digitally sign data, compute checksum of │
│ │data, protect plain text passwords, and also distribute the randomness (entropy) in data. │
│Digital Signing │Can be based on the RSA algorithm. Is a way to determine that the data really comes from the sender (eliminates Man-in-the-Middle attacks). │
│Random Numbers - the │Random numbers are used to generate prime numbers, keys, etc. Therefore, they play an important role in cryptography. It is hard and time consuming to generate true │
│backbone of all │random numbers. And often, generating them needs special equipment. Seeded pseudo random number generators with good statistical properties can be used instead. The │
│cryptographic security │seed can be collected with much entropy using different sources, and be distilled with a message digest. │
5. Implementation
In this section, we describe the steps to implement a secure protocol. The reasons for the chosen methods and algorithms, and their reliability and performance, are discussed. Furthermore, the source code is briefly explained and presented.
5.1. The Protocol Design – General Overview
To implement a secure protocol, we need to have a symmetric cipher (section 4.1) and a key exchange procedure using asymmetric encryption (section 4.2).
Figure 1. The figure represents the dependencies between the classes involved in the implementation.
The protocol is implemented using three classes, namely MyCryptLib, CRijndael, and IOCPS/SecureChatIOCP (see figure 1). The MyCryptLib class contains the source code for all the details around the key exchange procedure and digital signing, CRijndael is the symmetric cipher class, and IOCPS/SecureChatIOCP is a comprehensive IOCP server class [6]. Read more about the IOCP technology in my other article “A simple IOCP Server/Client Class” [6]. The details around the implementation are given in this section.
5.2. The Symmetric Cipher
Rijndael was selected as the Advanced Encryption Standard (2000-10-02) [7]. The algorithm is designed with the state of the art in cryptographic research, is fast, and is resistant against cryptographic attacks such as linear cryptanalysis, differential cryptanalysis, etc. However, the Rijndael algorithm can be written as a system of multivariate quadratic equations. Solving these equations to obtain the key or plain text is difficult, because there are no efficient algorithms for solving such systems. There is, however, a lot of research in this area, and methods such as XL and XSL have been presented to solve big multivariate quadratic equation systems [8]. Therefore, 128-bit Rijndael may not be safe anymore. There are also proposals for algorithms that might break 256-bit Rijndael, but this is not certain [8]. For now (2005), 256-bit Rijndael is sufficiently secure for our task, but this might not hold in the future.
The symmetric 256-bit Rijndael cipher is implemented using the class CRijndael. The cipher is used with a 256-bit key, and encrypts/decrypts blocks of size 16 bytes.
Figure 2: The figure shows how data is encapsulated inside an encrypted package. The package size denotes the size of the package. The size of the data inside denotes the real size of the data. If
the data does not fit into 16 byte blocks, the data is padded with random data. Also, a CRC16 checksum is added to the block for more security.
The data is packaged into a number of blocks (figure 2), and if the data does not fit the blocks, it is padded with random numbers. A CRC16 checksum is also added to the block to prevent an attacker from changing the encrypted data, and also to determine whether the decryption was successful. The implementation of the encryption and decryption is done in the functions SecureChatIOCP::EnCryptBuffer(..) and SecureChatIOCP::DeCryptBuffer(..).
5.2.1. Implementation source code
For each package, the Cipher Block Chaining (CBC) is used to encrypt the data. In CBC mode, an encrypted block is obtained by first XORing the data block with the previous encrypted block and then
encrypting the resulting value. This means that we need to enter a 16 byte block along with a key to encrypt/decrypt. Each connection/client has an instance of the class CRijndael inside its context,
this class is initialized with the key and the initial chain block, with the function:
// key - The 128/192/256-bit user-key to use.
// chain - initial chain block for CBC and CFB modes.
// keylength - 16, 24 or 32 bytes
// blockSize - The block size in bytes
// of this Rijndael (16, 24 or 32 bytes).
void CRijndael::MakeKey(char const* key, char const* chain,
    int keylength = DEFAULT_BLOCK_SIZE, int blockSize = DEFAULT_BLOCK_SIZE)
The functions EnCryptBuffer(CIOCPBuffer *pBuff, ClientContext *pContext) and DeCryptBuffer(CIOCPBuffer *pBuff, ClientContext *pContext) in the class SecureChatIOCP are used to encrypt/decrypt
buffers, as in figure 2. For more information and details, please see the source code.
5.3. Asymmetric Encryption – The RSA/DSA Implementation
In sections 4.5-4.8, we discussed what RSA/DSA is, and we know that to use asymmetric encryption, we need to perform computations with very large numbers. This is, however, not directly possible, since the hardware only supports 32- or 64-bit operations. That is why we need to go back to our old, plain computing books to find numerical algorithms that implement computations with arbitrary bit lengths using the existing hardware. This is why we have implemented a multi-precision library. Furthermore, we need to generate very big prime numbers (section 4.6).
5.3.1. Multi-precision library
By using numerical algorithms [9], a set of functions denoted by BN*(…) (BN stands for Big Number) is implemented in the MyCryptLib class. The focus of the implementation has been simplicity, not performance. The functions implement all the operations needed for our task, using existing hardware that operates on the type DWORD. The BN*(…) functions can compute with numbers of arbitrary bit length. Knowing what happens under the hood is important here, since different implementation exploits can be used to hack the system (discussed in section 2).
The algorithms used to implement the different arithmetic operations, such as addition, division, and subtraction, are complicated; therefore, we explain only one of them here, namely addition.
Let's start by adding two binary bits. Since each bit has only two possible values, 0 or 1, there are only four possible combinations of inputs. These four possibilities, and the resulting sums, are:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (sum 0, carry 1)
The fourth line indicates that we have an overflow! We need two bits to store the output! Adding two-bit numbers can now be performed as in figure 3. The carry, which denotes the overflow in the one-bit adder, is fed into adder no. 2 to be summed into S2. By doing this, we can add two-bit numbers using two 1-bit adders.
Figure 3. The figure illustrates a two-bit adder. A and B are two-bit numbers, and S is a two-bit number. The carry out denotes whether we have an overflow or not.
In a similar fashion, we can use 32-bit hardware to operate with numbers of arbitrary lengths. Please see the source code below, for addition:
DWORD MyCryptLib::BNAdd(DWORD C[], const DWORD A[],
                        const DWORD B[], const UINT nSize)
{
    DWORD k = 0; // carry
    for (UINT i = 0; i < nSize; i++) {
        C[i] = A[i] + k;
        // Detect overflow from adding the carry: the sum wrapped
        // around iff the result is smaller than the carry itself.
        if (C[i] >= k) k = 0;
        else k = 1;
        C[i] += B[i];
        // Detect overflow again, now from adding B[i].
        if (C[i] < B[i])
            k = 1;
    }
    return k;
}
To get additional information about the implementation, please read the source code, and also check the DemoSimpleTest(..) function in the MyCryptLib class.
5.3.2. Prime number generation
To generate prime numbers, we need the following:
□ Collected entropy (true randomness) to feed the random generator with.
□ A fast pseudo-random generator with good statistical properties.
□ A fast function that determines whether a number is a prime number or not.
Collecting entropy
To collect entropy is not an easy task, as discussed in section 4.8.3. In our application, we collect entropy from the user using the class CRanDialog that collects random data when the user moves
the mouse randomly. If the user is not present, the entropy is collected from different hardware components, as described in the source code below:
BOOL MyCryptLib::MTCollectEntropy(BYTE *pRandomPool, UINT nSize)
{
    // (Abridged excerpt: declarations and bookkeeping omitted.)
    while (nSize - nCollected > 0) {
        // Hash the previous entropy bucket.
        // Distill the process ID.
        // Distill the thread ID.
        // Distill the system time.
        SystemTimeToFileTime(&st, &ft);
        // Distill the processor's tick count.
        dwTick = GetTickCount();
        // Distill the memory status.
        GlobalMemoryStatus(&ms);
        SHA1_Hash((BYTE*)&ms, sizeof(MEMORYSTATUS), &csha1);
        // Put the digest inside the bucket.
        // Copy the entropy to the pool; the last chunk may be
        // shorter than SHA1_DIGEST_SIZE.
    }
    return TRUE;
}
The random generator
The collected entropy (either from the user or the hardware, via MTCollectEntropy(..)) is fed into the Mersenne Twister random generator [11].
The generator has the incredible period 2^19937 − 1. This is a number with about 6000 decimal digits; the number of elementary particles in the universe is “only” estimated to be an 80-digit number. The algorithm is also very fast, and has very good statistical properties: the 32-bit random numbers exhibit the best possible equidistribution properties in dimensions up to 623. Please see the source code, DWORD MyCryptLib::MTRandom() and BOOL MyCryptLib::MTInit(BYTE *pRandomPool, UINT nSize), for additional information.
Rabin-Miller Probabilistic Primality Test
Rabin-Miller [11] is an algorithm that determines whether a given number is a prime number or not. The algorithm is probabilistic; this means it is not 100% certain, but it is very fast. We will use this algorithm to find prime numbers as follows:
1. Start with a random number A of size nSize.
2. Make it odd.
3. Use a small list of prime numbers and Rabin-Miller to check whether the number is composite.
4. If the number is not a prime number, add 2 to it, and jump to 3 until we find a prime number.
Please see the functions BNMakePrime(…), BNMakeRSAPrime(..), and RSAGenerateKey(..) to get additional information about prime number generation. As we can see in figure 4, it takes more time to find bigger prime numbers for two reasons: the computations involved in finding prime numbers take longer, and the prime numbers get less dense as they get bigger. It takes about 73 seconds to generate a 4096-bit RSA key on a Centrino laptop.
Figure 4. The figure represents the time it takes to generate an RSA key. The X-axis denotes the key size in bits, and the Y-axis denotes the time in seconds.
5.3.3. RSA Encryption/Decryption
The RSA encryption/decryption is discussed in section 4.7. To get more implementation details, please read the MyCryptLib::DemoRSA(..) function. One important aspect discussed here is the
performance. The time to encrypt data using different key lengths is between 0.01 ms to 1 ms, depending on the key length.
However, to decrypt a message takes much longer than to encrypt it (figure 5). The time can be reduced significantly using a different decryption method based on the Chinese Remainder Theorem (CRT) [10]. This algorithm (the red line in figure 5) is much faster than the standard decryption (m = c^d mod n) discussed in 4.7 (the blue line in figure 5).
Figure 5. In the figure, we can observe how long the decryption of the RSA algorithm takes for different key lengths. It can be observed that CRT gives much better performance than the standard
decryption algorithm (m = c^d mod n).
In the CRT algorithm, the private key is represented as a quintuple (p, q, dP, dQ, and qInv), where p and q are prime factors of n, dP and dQ are known as the CRT exponents, and qInv is the CRT
coefficient. The CRT method of decryption is four times faster, overall, than calculating m = c^d mod n (for large key lengths).
The extra values for the private key are:-
dP = (1/e) mod (p-1)
dQ = (1/e) mod (q-1)
qInv = (1/q) mod p where p > q
These are pre-computed and saved along with p and q, as private key. To compute the message m, given c, do the following:-
m1 = c^dP mod p
m2 = c^dQ mod q
h = qInv(m1 - m2) mod p
m = m2 + hq
Even though there are more steps in this procedure, the modular exponentiation to be carried out uses much shorter exponents, and so it is less expensive overall. The implementation of this can be
found in MyCryptLib::RSADecryptCRT(…).
5.3.4. Digital signing implementation
The details around digital signing are discussed in section 4.5. According to sections 4.5.2 and 4.5.3, a digital signature is implemented by the functions DigitalSignSHA1rDSA(..) and
DigitalVerifySHA1rDSA(..) in the MyCryptLib class. The function uses a SHA1 message digest to compute the checksum of a message, and then sign/verifies it. To get additional information and examples,
please see the DemoDSA(..) function inside the MyCryptLib class, or use Server->CryptoLibrary->GenerateKey.
To verify a signature is not CPU-expensive (it takes less than 1 ms, depending on the key size), but to compute a signature is very expensive, as shown in figure 6 below. We can observe (figure 6) that signing a message using a key length of 2688 bits takes 1 second. Therefore, it is not practical to sign every message sent to the client using a key of that size.
Figure 6. The figure shows the time, in milliseconds, to compute a signature using the function DigitalSignSHA1rDSA(..) with different key sizes. To sign a message using a key length of 2688 bits
takes 1 second.
5.4. Key Exchange Procedure
Whitfield Diffie, Martin E. Hellman, and Ralph Merkle developed the Diffie-Hellman (D-H) key exchange protocol at Stanford in 1976. The protocol protects against eavesdropping, meaning no one can learn the secret key that is exchanged, but there is no protection against the “Man in the Middle” attacks discussed in section 4.3. The security of the algorithm lies in the difficulty of solving the discrete logarithm problem, that is: given g = 2 or 5, the public prime number p, and A = g^a mod p, calculate the secret random key, a. This is very difficult if p is larger than 1024 bits; however, if the random secret key does not contain enough entropy, the key can be predicted, and then the algorithm is broken.
Description of the algorithm:
• The numbers g, p, A, and B are public.
• Alice generates a secret key, a, and computes A = g^a mod p.
• Bob generates a secret key, b, and computes B = g^b mod p.
• The keys A and B are exchanged.
• Alice computes S = B^a = g^(b*a) mod p.
• Bob computes S = A^b = g^(a*b) mod p.
• Alice and Bob obtain the same secret key because exponents commute: g^(b*a) mod p = g^(a*b) mod p.
The implementation and usage of this algorithm can be found in the DemoDiffieHellman(..) function in the MyCryptLib class. In the figure below, we can see the computation time needed to exchange the keys according to this algorithm.
Figure 7. The figure shows how long a secure key exchange takes (in milliseconds (y-axis)) compared to the length of the public key, p. We can see that, for a 2048 bit key, it takes 600 ms to
exchange the key.
5.5. Key Length
The security of all algorithms presented in the previous sections lies in the key length and the algorithm used to generate the keys. A rule of thumb is to have 10%-20% of the key size as entropy (true randomness) bits. In 1999, a 512-bit public key was factored with an implementation of the Number Field Sieve algorithm (GNFS), developed by Buhler, Lenstra, and Pomerance. It took five months to break this key. This means that a key of length 512 bits is no longer safe.
This made it evident that a modulus length of 512 bits no longer protects from attackers. Within the last 20 years, a lot of progress has been made. Estimates of the future development of the ability to factor public keys vary, and depend on some assumptions:
• Development in computing performance (Moore’s law: every 18 months, the computing power will double) and in grid computing.
• The factorization of large numbers is part of many topical research areas in number theory and computer science. The progress is bigger than estimated, because of new knowledge about prime numbers.
In practice, it is still effectively impossible for ordinary people to break keys of size 512 bits, but organizations like the NSA, with supercomputers, can probably break such keys. A typical secure key length is 1024 bits (year 2005), and it should be increased by about 24 bits/year. In figures 5-7, section 5.3, it is shown that the computation time for asymmetric encryption grows rapidly with larger prime numbers or keys.
"Oh, our system uses 4096-bits security”, may sound impressive, but using too large keys decreases the performance, and also gives a false sense of security because of security issues involved in
generating the key (10%-20% entropy bits of the key size, the properties of the random number generator, etc.).
In our system, we will use a key size greater than 2048 bits for digital signing. Digital signing is used to protect from “Man in the Middle” attacks (discussed in sections 4.5.2 and 4.5.3), and the signing key needs to be valid on the server side for a long time, since it is used by the clients to validate the sender. For the key exchange procedure (section 5.4), the server accepts keys greater than 1024 bits.
This means that the total key exchange procedure, including digital signing, takes approximately 600 ms (100 ms from figure 7, section 5.4, plus 500 ms from figure 6, section 5.3.4) on a Centrino laptop.
5.6. Putting it Together
The server is implemented using IOCP technology [6]. The communication protocol in the server and the client demo is implemented in the SecureChatIOCP class. The class is inherited from IOCPS [6]
that uses the IOCP technology. A storage structure (ClientContext found in iocps.h) is associated with each connection through IOCP. All the packages that are received from multiple connections are
handled with only one or more threads called IOWorkers. These threads call different functions, namely NotifyReceivedPackage(..), NotifyNewConnection(..), etc., and how these functions are implemented defines the communication protocol. Usually, a more sophisticated finite state machine model is used to implement complex protocols with IOCP; however, we will keep it as simple as we can.
Please read my article “A simple IOCP Server/Client Class” for more information and more details about the IOCP technology.
All the implementation is in the IOCPS class and SecureChatIOCP. By defining _IOCPClientMode_, we change the protocol behavior from the server to the client.
We start off by defining some packages, and how they are handled in NotifyReceivedPackage(..).
5.6.1. Defining the packages
The table below describes the different package structures. A package's first bytes start with the top of the table (yellow).
│ Package structure 1 │ Package structure 2 │ Package structure 3 │
│4 bytes, defining the size of a package │4 bytes, defining the size of a package │4 bytes, defining the size of a package │
│1 byte defining the package type. │1 byte defining the package type. │1 byte defining the package type. │
│Variable size, payload, usually contains another encrypted package or a │4 bytes defining the size of the payload (in number │4 bytes defining the size of the string (payload1) in bytes. │
│null terminated string. │of DWORDs). │ │
│ ├─────────────────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────┤
│ │Payload of variable size (usually contains a public │Payload1 of variable size. Usually contains a null terminated string │
│ │key or signature). │(username and password). │
│ │ ├──────────────────────────────────────────────────────────────────────┤
│ │ │4 bytes defining the size of the string (payload2) in bytes. │
│ │ ├──────────────────────────────────────────────────────────────────────┤
│ │ │Payload2 of variable size (usually contains a null terminated string).│
│Table 1: Describes the different package structures that are transferred between the client and the server. │
The packages have one of the structures described in the above table, and are usually built inside the functions BuildAndSend(..), SendPublicKey(..), SendTextMessage(..), and SendErrorMessageTo(..)
inside the SecureChatIOCP class.
The package types are defined in the table below, and are handled with the appropriate functions, OnXXXX(..), where XXXX denotes the package type (e.g., OnPKG_PUBLIC_KEYA(..)).
│Package (of enum type││ How the package is handled │
│ PackageTypes) ││ │
│PKG_ERRORMSG ││This type of a message contains an error message that is going to be sent to the client. The package is received by the client and the error message is presented. As soon as │
│ ││the package is sent, the connection is closed. │
│Encapsulated in ││ │
│structure 1 (Table 1)││ │
│PKG_PUBLIC_KEYP ││This package is sent by the client. The package contains the public key, P, according to the Diffie-Hellman (D-H) key exchange protocol (section 5.4). When the server │
│                     ││receives this package, it saves the public key, and generates a 512 bit private key, and computes and sends the public key, A, by ComputeAndSendPublicKeyA(..).            │
│Encapsulated in ││ │
│structure 2 (Table 1)││ │
│PKG_PUBLIC_KEYA ││This package is sent by the server, and contains the public key, A, according to the Diffie-Hellman (D-H) key exchange protocol (section 5.4). This package is received by the│
│ ││client, which generates a private key, and computes the session key (ComputeAndSetSessionKey(..)), and computes and sends the public key, B (of type PKG_PUBLIC_KEYB), with │
│Encapsulated in ││the function ComputeAndSendPublicKeyB(..). │
│structure 2 (Table 1)││ │
│PKG_PUBLIC_KEYB ││This package is sent by the client to the server (see PKG_PUBLIC_KEYA). The session key is computed, and a signature is computed and sent to the client │
│ ││(ComputeAndSendSignature(…)) which signs the public keys, A and B, if we have defined #define USE_SIGNATURE to protect the client against “Man in the Middle” attacks (section│
│Encapsulated in ││3.4 and 4). │
│structure 2 (Table 1)││ │
│PKG_SIGNATURE ││This package is sent by the server, and contains a signature that confirms the packages PKG_PUBLIC_KEYA and PKG_PUBLIC_KEYB. The client receives the package, and validates │
│ ││the server using the trusted static key SecureChatIOCP::m_PubN that is hard-coded into the client. If everything is fine, a PKG_USERNAME_PASSWORD is sent to server, otherwise│
│Encapsulated in ││a PKG_ERRORMSG, that also terminates the connection. │
│structure 2 (Table 1)││ │
│PKG_TEXT_TO_ALL ││This package is always sent after the key exchange procedure, and is encapsulated inside a PKG_ENCRYPTED package. │
│ ││ │
│Encapsulated in ││When received by the client, the content is shown in the text box of the chatting client. However, if it is received by the server, it is sent to everybody. │
│structure 1 (Table 1)││ │
│PKG_USERNAME_PASSWORD││This package is always sent as a response from the client that the client accepted the signature. This package is encapsulated inside a PKG_ENCRYPTED package. │
│ ││ │
│Encapsulated in ││ │
│structure 3 (Table 1)││ │
│PKG_ENCRYPTED ││This package is sent after the key exchange procedure, and is handled inside NotifyReceivedPackage(..). The contents of the package are decrypted and processed according to │
│ ││the defined packages in this table. │
Table 2: Describes the different types of packages and how they are handled.
5.6.2. The key exchange details
The techniques used to implement the secure chat are Digital Signing/Verification (section 5.3.4), Diffie-Hellman (D-H) key exchange protocol (section 5.4), and Rijndael explained in section 5.2.
The key exchange procedure is done in the following way (to read more about the implementation details, read the previous section 5.6.1, tables 1 and 2):
1. The client generates the public prime number, p; the generator, g, is always equal to five. A package of type PKG_PUBLIC_KEYP is sent to the server. This is done when the client connects to the server, and is performed in the function NotifyNewConnection(..).
2. The server receives the package PKG_PUBLIC_KEYP and sends the package PKG_PUBLIC_KEYA (read section 5.6.1).
3. The client receives the PKG_PUBLIC_KEYA. The client computes the session key, and sends PKG_PUBLIC_KEYB to the server according to section 5.6.1.
4. The server receives the PKG_PUBLIC_KEYB. The server computes the common session key, and sends a signature in PKG_SIGNATURE to the client.
5. The client validates the signature, and sends a PKG_USERNAME_PASSWORD package to the server.
Now, both the client and the server have a common secret session key, according to the Diffie-Hellman (D-H) key exchange algorithm, and the client is also protected against "Man in the Middle" attacks.
5.6.3. The digital signing details
To digitally sign the message, the client must trust a public key. This trusted public key is hard-coded in the software (even though it is public) as the global constant SecureChatIOCP::m_PubN, declared in the SecureChatIOCP class. The same public key and the secret key are also hard-coded into the server as SecureChatIOCP::m_PrivD and SecureChatIOCP::m_PubN. Remove the #define USE_SIGNATURE from SecureChatIOCP.h to use the protocol without the "Man in the Middle" protection and digital signing.
Observe! The digital signing in the demo is not secure, because everyone who downloads this demo knows the private key used to sign the data.
Use the Demo section and the “Generate DSA key” button in the server demo to generate your own private and public keys to be used for digital signing. Copy the generated keys to
SecureChatIOCP::m_PrivD and SecureChatIOCP::m_PubN to obtain your own secure client-server framework.
5.6.4. Different implementation approaches
As discussed in previous sections, there are many different implementation approaches. The key exchange implementation is quite slow; it takes about 600 ms to perform a safe key exchange (section 5.5), because of the digital signing computation. More efficient digital signing algorithms than the recursive RSA implementation can be used to decrease the key exchange time.
A nice alternative would be for the server to reuse the same public keys, P and A, and sign only the public key A. The key exchange procedure would then be much faster from the server's point of view, because the signing computation needs to be done only once at server startup, or at a certain interval (every few hours). As a result, a larger key size could easily be used, because the computation is not repeated for every connection. However, the security of the protocol would be compromised by this approach, even though it is a reasonable trade-off: imagine, for example, that the reused public key A is broken.
5.6.5. Special considerations
When you compile this source code for 64-bit processors, make sure that you change the parameters defined in MyCryptLib.h (e.g., _HIBITMASK_, MAXIMUMNR) to appropriate values.
For a commercial release, it is important to protect this software against buffer overflow attacks [1], especially regarding the class MyCryptLib. Use the /GZ compiler switch, found in Visual C++ 6.0, or the /GS switch of the Visual C++ .NET compiler, to help prevent buffer overflow attacks [1].
6. Future work
• Optimization of the multi-precision library used to implement asymmetric encryption and key exchange.
• Client authorization, using hashed SHA256 password / username.
• File transfer functionality between users.
7. F.A.Q.
Q: The protocol ensures that the client is protected from "Man in the Middle" attacks by using digital signing. On the server side, the client is not authorized in a similar way; why?
A: In practice, the most important thing is to protect the clients from being hacked. Also, the server usually authorizes the client using a name and a password. Here, it is important that the client is not "fooled" by a "Man in the Middle" attack into sending information such as its password to the attacker.
8. Revision History
• Initial release - 2006-06-08.
9. References
1. Compiler Security Checks In Depth, 2006-05-02
2. Crypto++® Library 5.2.1, 2006-05-02
3. OpenSSL Project, 2006-05-02
4. CrypTool, 2006-05-02
5. RSA Security, 2006-05-02
6. A simple IOCP Server/Client Class, 2006-05-02, Amin Gholiha
7. “Cryptanalysis of Block Ciphers with Overdefined Systems of Equations (or the XSL attack on block ciphers)”, Nicolas Courtois, Josef Pieprzyk, Asiacrypt 2002, LNCS 2501, pp. 267-287, Springer
8. "A report about the Courtois and Pieprzyk attack on AES", Leah McFall, Asiacrypt 2002 conference, Tuesday, 3 December 2002
9. The Art of Computer Programming, Knuth, Donald. 1968
10. RSA Encryption Standard, RSA Laboratories. PKCS #1 v2.1: June 2002.
11. The Art of Computer Programming, Vol 2: Seminumerical Algorithms, Knuth, Donald.E. Addison-Wesley, Reading, Mass., 1981
12. Reliable Software Technology or Cigital, 2006-06-07
Figure 6.
Distribution of the coefficients of variation of solution parameters. The coefficient of variation StdDev(log[10]K[i])/Mean(log[10]K[i]), where {K[i]}, i = 1...n, is any kinetic parameter, was
computed for every parameter across the entire ensemble of solution sets. Their distribution is shown (red line). For comparison the distribution of the coefficient of variation of a variable X is
shown (green line), where X is sampled from a uniform distribution in the interval [-5,0]. The distribution of the coefficient of variation can be approximated by a Gaussian density function N(μ, σ)
with μ = 2/σ = 0.07 (in blue): the μ value is the coefficient of variation of the uniform distribution, while σ is the standard deviation of a random set of coefficients of variation obtained by
sampling the uniform distribution in the interval [-5,0] (Fig. 7). A coefficient of variation smaller than 0.33 has a probability of random occurrence ≤ 0.0002, while the 17 parameters selected using
the genetic algorithm represent 6.5% of the whole (Fig. 7); that is, a coefficient of variation smaller than 0.33 has a probability of occurrence of 0.065 in the solution set.
Arisi et al. BMC Neuroscience 2006 7(Suppl 1):S6 doi:10.1186/1471-2202-7-S1-S6
Another Side of Light
C. Quanta and gas molecules
Instead of a furnace, consider another container, this one containing air. Most likely, the air molecules in it will be distributed almost evenly; it's extremely unlikely that all the air molecules
would be found in one small part of the container if they are free to move anywhere in it and their motions are independent. If there were just one air molecule in the whole container, it would be
found in the right half of the container about half the time, and in the left half of the container the other half of the time. If we added a second molecule, we'd expect to find both molecules in
the left half one fourth of the time since the first molecule would be there half the time, and the second molecule would be there with it only half of that time, and one half times one half equals
one fourth (one half squared). For three molecules, the chances of finding all three in the left half of the container should be one out of eight, one half times one half times one half again, or one
half to the third power. And for N molecules, the chances of finding all N of them in the left half at any given moment should be one half to the N^th power.
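The (1/2)^N counting argument in this paragraph can be checked directly. The short Python sketch below only restates the arithmetic from the text with exact fractions:

```python
# Probability that all n independently moving molecules are in the left half.
from fractions import Fraction

def p_all_left(n):
    """Chance that all n independent molecules sit in the left half: (1/2)**n."""
    return Fraction(1, 2) ** n

assert p_all_left(1) == Fraction(1, 2)
assert p_all_left(2) == Fraction(1, 4)   # "one fourth" from the text
assert p_all_left(3) == Fraction(1, 8)   # "one out of eight"
```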
Einstein found that light waves in a furnace should act much the same if they behave according to Planck's quantum law. As we noted above, the intensity of the lower-frequency light is described
pretty accurately by a theory that doesn't assume the light's energy comes in quanta. But the behavior of high-frequency light (Figure 3) is quite different, and it's here that Einstein focused his attention.
Figure 3. The upper curve is the Planck's-law curve of Figure 2; the lower curve is a close approximation for higher frequencies, so close that the curves can't be distinguished in this figure for
frequencies much higher than three units. Einstein found that, at least to the extent the light intensity follows the lower curve, quanta of light should behave in some ways like molecules of a gas.
For light waves of any of the highest frequencies f, the probability that all their quanta will be found at a given instant in the left half of the furnace turns out to be approximately one half to
the power E/hf,
where E is the total energy of all those light waves, f again is their frequency, and h is the ratio of the energy of one quantum to the frequency. So for light with frequency f, the chances of
finding all E/hf of its quanta in the same half of the furnace are about as small as the chances of finding all of a gas made of N=E/hf molecules in the same half of its container. Einstein's more
general version of this analysis shows that any distribution of higher-frequency light quanta is, approximately, as likely to happen as the same distribution for gas molecules, at least if the
frequencies of the light waves considered are high enough.
So, while the law Planck discovered was not what one would expect just from Maxwell's theory of light waves, it was at least close to what one might expect if light quanta had an atom-like
distribution in space. Furthermore, Einstein described three other phenomena that seemed better understood under the latter assumption. (.....continued)
Office of Educational Assessment
Understanding Correlations Reports
ScorePak® can compute Pearson Product Moment Correlation coefficients among any number of scores of any type. The results are presented within a square correlation matrix of up to ten variables each.
Several matrices will be produced if intercorrelations are requested among more than ten variables.
Sample correlation report (9K PDF*)
Correlation Coefficients
Correlation coefficients index the extent to which two scores are related, and the direction of that relationship. They reflect the tendency of the variables to "co-vary"; that is, for changes in the
value of one variable to be associated with changes in the value of the other. In interpreting correlation coefficients, two properties are important.
• Magnitude. Correlations range in magnitude from -1.00 to 1.00. The larger the absolute value of the coefficient (the size of the number without regard to the sign) the greater the magnitude of
the relationship. For example, correlations of .60 and -.60 are of equal magnitude, and are both larger than a correlation of .30. When there is no linear relationship, the correlation will be
0.00; when there is a perfect linear relationship (one-to-one correspondence between the values of the variables), the correlation will be 1.00 or -1.00.
• Direction. The direction of the relationship (positive or negative) is indicated by the sign of the coefficient. A positive correlation implies that increases in the value of one score tend to be
accompanied by increases in the other. A negative correlation implies that increases in one are accompanied by decreases in the other.
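The magnitude and direction properties can be illustrated with a minimal Pearson product-moment computation. The toy score lists below are invented; ScorePak's own implementation is not shown in this document:

```python
# Minimal Pearson product-moment correlation on toy score lists.
from math import sqrt

def pearson(x, y):
    """Pearson r of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
assert abs(pearson(x, [2, 4, 6, 8, 10]) - 1.0) < 1e-9   # perfect positive
assert abs(pearson(x, [10, 8, 6, 4, 2]) + 1.0) < 1e-9   # perfect negative
```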
Because ScorePak® scores are generally test scores, most of the relationships among them can be expected to be positive. The greater the degree to which the tests are measuring the same thing, the
stronger the relationship between them. Scores are often weighted and summed to create a composite score which is then used to assign grades. In such applications, moderately-sized positive
correlations (r>.30) among scores are desirable. Negative or small positive correlations (r<.20) among test scores imply that the composite score may be unreliable.
Missing Data
In computing correlations, ScorePak® includes pairs of observations for which neither test score is missing. However, ScorePak® does not delete an entire case just because data are missing on one or
more scores; if you are intercorrelating several scores, test scores for a particular individual will be included in those coefficients for which both scores are present, and excluded from those
coefficients for which one or both scores are missing.
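The pairwise-inclusion rule described above can be sketched as follows. The helper function is hypothetical, with None standing in for a missing score:

```python
# Pairwise inclusion: each correlation uses only the cases where neither
# of the two scores is missing, without dropping the whole case elsewhere.
def pairwise(x, y):
    """Keep only the (x, y) pairs where neither score is missing (None)."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    return [a for a, _ in pairs], [b for _, b in pairs]

scores_x = [10, None, 30, 40]
scores_y = [1, 2, None, 4]
assert pairwise(scores_x, scores_y) == ([10, 40], [1, 4])
```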
Composite scores are created by combining scores using one or more transformation steps. A composite score may or may not be missing if one or more of the scores on which it is based is missing.
Check the description of missing values for each transformation if you plan to correlate composite scores. In general, the correlations of a composite score with the scores from which it is derived
tend to be relatively large because of the shared variance of the scores with the composite score. However, these "part-whole" correlations can be misleadingly small if there is much missing data
within the scores making up the composite, and the composite score is not set to missing if it contains missing scores.
It is important to keep in mind that test scores are themselves unreliable to some extent. Only the reliable portions of two sets of scores can be correlated; the unreliable portion is random error
and thus will be uncorrelated. As a result, the magnitude of the correlation between any two test scores is limited or attenuated by the unreliability of each. If the reliability of the test scores
is known, the correlation can be corrected for attenuation. ScorePak® does not make this correction, because the reliability coefficient is not available at the time that the program computes the
correlations. However, if you are correlating raw scores, you can use the reliability coefficients given in the ScorePak® Item Analysis to correct the correlations according to the following formula:
rxy' = rxy / SQRT(rxx*ryy), where
rxy' = the corrected correlation of test score "x" with test score "y"
rxy = the uncorrected correlation
rxx = the reliability of test score "x"
ryy = the reliability of test score "y"
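The correction formula can be applied directly. The observed correlation and reliability values below are invented example numbers:

```python
# Correction for attenuation, exactly as in the formula above.
from math import sqrt

def corrected_r(rxy, rxx, ryy):
    """Disattenuated correlation: rxy' = rxy / sqrt(rxx * ryy)."""
    return rxy / sqrt(rxx * ryy)

# e.g. an observed r of .40 between tests with reliabilities .80 and .70:
print(round(corrected_r(0.40, 0.80, 0.70), 3))   # prints 0.535
```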
*Software capable of displaying a PDF is required for viewing or printing this document. Adobe Reader is available free of charge from the Adobe Web site at http://www.adobe.com/products/acrobat/
Publication List
Publication List of Hongwei Long
[1] A. Bensoussan, H. Long, S. Perera, and S. Sethi, Impulse control with random reaction periods: A central bank intervention problem, Operations Research Letters (2012), doi:10.1016/
[2] Hongwei Long, Parameter estimation for a class of stochastic differential equations driven by small stable noises from discrete observations, Acta Mathematica Scientia, 30B (2010), 645-663
[3] Zdzislaw Brzezniak, Hongwei Long and Isabel Simao, Invariant measures for stochastic evolution equations in M-type 2 Banach spaces, Journal of Evolution Equations, 10 (2010), 785-810.
[4] Yaozhong Hu and Hongwei Long, On the singularity of least squares estimator for mean-reverting α-stable motions, Acta. Math. Sci., 29B (2009), 599-608.
[5] Hongwei Long, Least squares estimator for discretely observed Ornstein-Uhlenbeck processes with small Levy noises, Statistics and Probability Letters, 79 (2009), 2076-2085.
[6] Yaozhong Hu and Hongwei Long, Least squares estimator for Ornstein-Uhlenbeck processes driven by α-stable motions, Stochastic Processes and their Applications, 119 (2009), 2465-2480.
[7] Michael A. Kouritzin and Hongwei Long, On extending the classical filtering equations, Statistics and Probability Letters 78 (2008), 3195-3202.
[8] Yaozhong Hu and Hongwei Long, Parameter estimation for Ornstein-Uhlenbeck processes driven by alpha-stable Levy motions, Communications on Stochastic Analysis, 1 (2007), 175-192.
[9] S. Kim, S. Li, H. Long, and R. Pyke, Analyzing network traffic for malicious activity, Canadian Applied Math. Quarterly 12 (2004), 479-489.
[10] Michael A. Kouritzin, Hongwei Long, and Wei Sun, Markov chain approximations to filtering equations for reflecting diffusion processes, Stochastic Processes and their Applications 110 (2004),
[11] Hongwei Long and I. Simao, A note on the essential self-adjointness of Ornstein-Uhlenbeck operators perturbed by a dissipative drift and a potential, Infinite Dimensional Analysis, Quantum
Probability and Related Topics 7 (2004), 249-259.
[12] Hongwei Long and I. Simao, Essential self-adjointness of Ornstein- Uhlenbeck operators perturbed by certain drifts and singular potentials, Communications in Applied Analysis 8 (2004), 167-184.
[13] S. Kim, M.A. Kouritzin, H. Long, J. McCrosky and X. Zhao, A stochastic grid filter for multi-target tracking, The Proceedings of SPIE Defense & Security Symposium on Signal Processing, Sensor
Fusion, and Target Recognition XIII, Volume 5429, Ed. by I. Kadar, Orlando, USA, 2004, pp. 245-253.
[14] Michael A. Kouritzin, Hongwei Long, and Wei Sun, Nonlinear filtering for diffusions in random environments, Journal of Theoretical Probability 16 (2003), 1-20.
[15] Michael A. Kouritzin, Hongwei Long and Wei Sun, On Markov chain approximations to semilinear partial differential equations driven by Poisson measure noise, Stochastic Analysis and Applications
21 (2003), 419-441.
[16] M. A. Kouritzin, H. Long, X. Ma and W. Sun, Non-recursive and Recursive methods for parameter estimation in filtering problems. The Proceedings of SPIE AeroSense Conference on Signal Processing,
Sensor Fusion, and Target Recognition XII, Volume 5096, Ed. by I. Kadar, 2003, pp. 585-596.
[17] D. Ballantyne, M.A. Kouritzin, H. Long, J. Hailes, and J. Wiersma, A hybrid weighted interacting particle filter for multi-target tracking. The Proceedings of SPIE AeroSense Conference on Signal Processing, Sensor Fusion, and Target Recognition XII, Volume 5096, Ed. by I. Kadar, 2003, pp. 244-255.
[18] Michael A. Kouritzin and Hongwei Long, Convergence of Markov chain approximations to stochastic reaction diffusion equations, The Annals of Applied Probability 12 (2002), 1039-1070.
[19] D. Ballantyne, M.A. Kouritzin, H. Long and W. Sun, Discrete-space particle filters for reflecting diffusions, The Proceedings of 2002 IEEE Aerospace Conference, Big Sky, MT.
[20] Hongwei Long and I. Simao, On the essential self-adjointness of perturbed Ornstein-Uhlenbeck operators on Hilbert spaces, Communications in Applied Analysis 5 (2001), 371-382.
[21] Hongwei Long, An approximation criterion for essential self-adjointness of Dirichlet operators on certain Banach spaces, Potential Analysis 13 (2000), 409-421.
[22] Hongwei Long, Necessary and sufficient conditions for the symmetrizability of differential operators over infinite dimensional state spaces, Forum Mathematicum 12 (2000), 167-196.
[23] Hongwei Long and Isabel Simao, Kolmogorov equations in Hilbert spaces with application to essential self-adjointness of symmetric diffusion operators, Osaka J. Math. 37 (2000), 185-202.
[24] Hongwei Long, Kato's inequality and essential self-adjointness of Dirichlet operators on certain Banach spaces, Stochastic Analysis and Applications 16 (1998), 1019-1047.
[25] Hongwei Long, Anticipating quadrant and symmetric integrals in the plane with application to Wiener space, Acta Math. Sci. 17 (1997), 1-11.
[26] Hongwei Long, On the rate of convergence in the central limit theorem for two-parameter martingale differences, Acta Math. Sci. 16 (1996), no. 3, 287-295.
[27] Hongwei Long, The pathwise uniqueness of solutions of non-Markovian stochastic differential equations with jumps in plane, Chinese Science Bulletin 39 (1994), 1853-1858.
[28] Hongwei Long, The approximation theorem of stochastic differential equations in the plane, Acta Math. Sci. 14 (1994), 272-282.
[29] Yaozhong Hu and Hongwei Long, Symmetric integral and the approximation theorem of stochastic integrals in the plane, Acta Math. Sci. 13 (1993), 153-166.
[30] Hongwei Long, On the -optimal control of stochastic differential equations in the plane, In: Development of Enterprises and System Engineering, Ed. by Chinese Association of System Engineering,
pp. 428-438, Chinese Science and Technology Press, 1992 (in Chinese).
[31] Hongwei Long and Lianfen Qian, Nadaraya-Watson estimator for stochastic processes driven by stable Levy motions. Submitted, 2012.
[32] H. Long, Y. Shimizu, and W. Sun, Least squares estimators for discretely observed stochastic processes driven by small Levy noises. Submitted, 2012.
Patent application title: PARALLEL IMPLEMENTATION OF MAXIMUM A POSTERIORI PROBABILITY DECODER
A MAP decoder may be implemented in parallel. In one implementation, a device may receive an input array that represents received encoded data and calculate, in parallel, a series of transition
matrices from the input array. The device may further calculate, in parallel, products of the cumulative products of the series of transition matrices and an initialization vector. The device may
further calculate, in parallel and based on the products of the cumulative products of the series of transition matrices and the initialization vector, an output array that corresponds to a decoded
version of the received encoded data in the input array.
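The key observation behind the claims is that the forward recursion of a MAP decoder, alpha_k = M_k alpha_{k-1}, is a chain of associative matrix products, so the cumulative products M_k...M_1 can be computed with a parallel scan. A minimal Python sketch follows; the 2-state matrices are toy stand-ins for the decoder's transition probabilities, and the sequential loop stands in for the log-depth parallel rounds:

```python
# Cumulative matrix products via a Hillis-Steele-style inclusive scan.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matvec(a, v):
    return [sum(a[i][k] * v[k] for k in range(len(v))) for i in range(len(a))]

def inclusive_matrix_scan(ms):
    """After log2(N) rounds, out[k] = M_k @ ... @ M_0."""
    out, step = list(ms), 1
    while step < len(out):
        out = [out[i] if i < step else matmul(out[i], out[i - step])
               for i in range(len(out))]
        step *= 2
    return out

# Toy 2-state example with 4 trellis steps
ms = [[[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.3, 0.7]],
      [[0.6, 0.4], [0.1, 0.9]], [[0.7, 0.3], [0.4, 0.6]]]
alpha0 = [1.0, 0.0]                                   # initialization vector
alphas = [matvec(p, alpha0) for p in inclusive_matrix_scan(ms)]

# Cross-check against the plain sequential forward recursion
a = alpha0
for m, got in zip(ms, alphas):
    a = matvec(m, a)
    assert all(abs(x - y) < 1e-12 for x, y in zip(a, got))
```

Because each scan round is data-parallel across all positions, the N serial multiply steps collapse to O(log N) rounds on parallel hardware, which is the speedup the application targets.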
A method, implemented by at least one device, the method comprising: receiving an input array that represents received encoded data, where the receiving is performed by the at least one device;
calculating, in parallel, a series of transition matrices from the input array, where the calculating the series of transition matrices is performed by the at least one device; calculating, in
parallel, products of the cumulative products of the series of transition matrices and an initialization vector, where the calculating the products of the cumulative products and the initialization
vector is performed by the at least one device; calculating, in parallel, based on the products of the cumulative products of the series of transition matrices and the initialization vector, an
output array that corresponds to a decoded version of the received encoded data in the input array, where the calculating the output array is performed by the at least one device; and outputting the
output array.
The method of claim 1, where values in the transition matrices represent probabilities that relate to state transitions in a Maximum A Posteriori Probability (MAP) decoder.
The method of claim 2, where the calculating the series of transition matrices and the calculating the products of the cumulative products of the series of transition matrices and the initialization
vector, is simultaneously performed for both alpha and beta parameters of the MAP decoder.
The method of claim 1, where calculating, in parallel, the products of the cumulative products of the series of transition matrices and the initialization vector includes: using a scan technique to
convert the series of transition matrices and the initialization vector into the products of the cumulative products of the series of transition matrices and the initialization vector.
The method of claim 4, where calculating, in parallel, the products of the cumulative products of the series of transition matrices and the initialization vector further includes: segmenting the
series of transition matrices into a plurality of sections; independently applying, as a first scan, the scan technique to each of the plurality of sections, where a full product of the scan of each
section is stored; applying, as a second scan, the scan technique to a series of the full products of the first scan; and distributing partial product scan results from the second scan.
The method of claim 5, where distributing partial product scan results from the second scan and forming the products of the cumulative products and the initialization vector includes preferentially
performing matrix by vector multiplications before performing matrix by matrix multiplications.
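Claims 5-6 (and their media counterparts 18-19) describe a two-level segmented scan: scan each section independently, scan the per-section totals, then distribute the carry-in products. A self-contained Python sketch of that structure, with toy integer matrices and sequential loops standing in for the parallel workers:

```python
# Two-level segmented inclusive scan of matrix products.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def seq_scan(ms):
    """Inclusive left-product scan of one section: out[k] = M_k ... M_0."""
    out = [ms[0]]
    for m in ms[1:]:
        out.append(matmul(m, out[-1]))
    return out

def segmented_scan(ms, nsec):
    size = len(ms) // nsec
    secs = [ms[i * size:(i + 1) * size] for i in range(nsec)]
    local = [seq_scan(s) for s in secs]            # first scan (per section)
    totals = seq_scan([l[-1] for l in local])      # second scan (totals)
    out = list(local[0])
    for i in range(1, nsec):                       # distribute carry-ins
        out += [matmul(x, totals[i - 1]) for x in local[i]]
    return out

ms = [[[1, 1], [0, 1]], [[2, 0], [1, 1]], [[1, 2], [1, 0]], [[0, 1], [1, 1]]]
assert segmented_scan(ms, 2) == seq_scan(ms)   # same cumulative products
```

The preference in claim 6 for matrix-by-vector over matrix-by-matrix work corresponds here to folding the initialization vector in as early as possible, since a matrix-vector product is far cheaper than a matrix-matrix one.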
The method of claim 1, where calculating, in parallel, the products of the cumulative product of the series of transition matrices and the initialization vector includes: implementing N/2 parallel
pipelines, where N represents a size of the input array, and each pipeline includes K stages, where K corresponds to 2*log2(N).
The method of claim 7, where operations performed in each stage of each of the pipelines include: matrix multiplication operations.
The method of claim 7, where operations performed in each stage of each of the pipelines include: matrix multiplication operations, in which the matrix multiplication operations are performed using a
Max-Log-MAP technique.
The method of claim 1, where outputting the output array includes: transmitting the output array to an interleaving component.
The method of claim 1, where the receiving the input array, calculating the series of transition matrices, calculating the products of the cumulative products of the series of transition matrices and
the initialization vector, calculating the output array, and outputting the output array, include operations to implement a Maximum A Posteriori Probability (MAP) decoder within a turbo decoder.
The method of claim 1, where the received input array includes data received over a noisy transmission channel.
The method of claim 1, where the device includes a multiple GPU device, and where the parallel calculation of the series of transition matrices, the products of the cumulative products of the series
of transition matrices and the initialization vector, and the output array, are performed by the multiple GPU device.
Computer-readable media comprising: one or more instructions, which when executed by one or more processors, cause the one or more processors to receive input data that represents received encoded
data; one or more instructions, which when executed by the one or more processors, cause the one or more processors to calculate, in parallel, a series of transition matrices from the received input
data; one or more instructions, which when executed by the one or more processors, cause the one or more processors to calculate, in parallel, products of the cumulative products of the series of
transition matrices and an initialization vector; one or more instructions, which when executed by the one or more processors, cause the one or more processors to generate, based on the products of
the cumulative products of the series of transition matrices and the initialization vector, output data that corresponds to a decoded version of the received data; and one or more instructions, which
when executed by the one or more processors, cause the one or more processors to output the output data.
The computer-readable media of claim 14, where values in the transition matrices represent probabilities that relate to state transitions in a Maximum A Posteriori Probability (MAP) decoder.
The computer-readable media of claim 14, where the calculation of the series of transition matrices and the calculation of the products of the cumulative products of the series of transition matrices
and the initialization vector, is simultaneously performed for both alpha and beta parameters of the MAP decoder.
The computer-readable media of claim 14, where the one or more instructions to calculate, in parallel, the products of the cumulative products of the series of transition matrices and the
initialization vector, further includes: one or more instructions to perform a scan technique to convert the series of transition matrices and the initialization vector into the products of the
cumulative products of the series of transition matrices and the initialization vector.
The computer-readable media of claim 17, where the one or more instructions to calculate, in parallel, the products of the cumulative products of the series of transition matrices and the
initialization vector further includes: one or more instructions to segment the series of transition matrices into a plurality of sections; one or more instructions to independently apply, as a first
scan, the scan technique to each of the plurality of sections, where a full product of the scan of each section is stored; one or more instructions to apply, as a second scan, the scan technique to a
series of the full products of the first scan; and one or more instructions to distribute partial product scan results from the second scan.
The computer-readable media of claim 18, where distributing partial product scan results from the second scan and forming the products of the cumulative products and the initialization vector
includes preferentially performing matrix by vector multiplications before performing matrix by matrix multiplications.
The computer-readable media of claim 14, where the parallel calculations include matrix multiplication operations.
The computer-readable media of claim 20, where the parallel calculations include matrix multiplication operations, in which the matrix multiplication operations are performed using a Max-Log-MAP technique.
A device comprising: a first Maximum A Posteriori Probability (MAP) decoder, including: a first plurality of parallel execution units to calculate, in parallel: a first series of transition matrices,
from a first input array, products of the cumulative products of the first series of transition matrices and a first initialization vector, and a first output array, based on the products of the
cumulative products of the first series of transition matrices and the first initialization vector; a second MAP decoder, including: a second plurality of parallel execution units to calculate, in
parallel: a second series of transition matrices, from a second input array, products of the cumulative products of the second series of transition matrices and a second initialization vector, and a
second output array, based on the products of the cumulative products of the second series of transition matrices and the second initialization vector; and one or more interleavers to interleave data
in the first output array and the second output array and to provide the interleaved data to the second MAP decoder and the first MAP decoder, respectively.
The device of claim 22, where the device implements a turbo decoder.
The device of claim 22, where values in the first and second series of transition matrices represent probabilities that relate to state transitions in the first and second MAP decoders, respectively.
The device of claim 22, where the first plurality of parallel execution units calculates, in parallel, the products of the cumulative products of the first series of transition matrices and the first
initialization vector using a scan technique, and where the second plurality of parallel execution units calculates, in parallel, the products of the cumulative products of the second series of
transition matrices and the second initialization vector using the scan technique.
The device of claim 25, where the first and second plurality of parallel execution units calculate the products of the cumulative products of the first and second series of transition matrices and
the first and second initialization vectors, respectively, by segmenting the first and second series of transition matrices into a plurality of sections and independently applying the scan technique
to each of the plurality of sections.
The device of claim 22, where operations performed in each of the first and second plurality of parallel execution units include: matrix multiplication operations.
A method, implemented by at least one device, the method comprising: receiving an input array that represents received encoded data, where the receiving is performed by the at least one device;
calculating, in parallel, a series of transition matrices from the input array, where the calculating the series of transition matrices is performed by the at least one device; calculating, in
parallel, cumulative products of the series of transition matrices, where the calculating the cumulative products is performed by the at least one device; calculating, in parallel, the products of
the cumulative products of the series of transition matrices and an initialization vector, where calculating the products of the cumulative products of the series of transition matrices and the
initialization vector is performed by the at least one device; calculating, based on the products of the cumulative products of the series of transition matrices and the initialization vector, an
output array that corresponds to a decoded version of the received encoded data in the input array, where the calculating the output array is performed by the at least one device; and outputting the
output array.
Computer-readable media comprising: one or more instructions, which when executed by one or more processors, cause the one or more processors to receive an input array that represents received
encoded data; one or more instructions, which when executed by one or more processors, cause the one or more processors to calculate, in parallel, a series of transition matrices from the input
array; one or more instructions, which when executed by one or more processors, cause the one or more processors to calculate, in parallel, cumulative products of the series of transition matrices;
one or more instructions, which when executed by one or more processors, cause the one or more processors to calculate, in parallel, products of the cumulative products of the series of transition
matrices and an initialization vector; one or more instructions, which when executed by one or more processors, cause the one or more processors to calculate, based on the products of the cumulative
products of the series of transition matrices and the initialization vector, an output array that corresponds to a decoded version of the received encoded data in the input array; and one or more
instructions, which when executed by one or more processors, cause the one or more processors to output the output array.
BACKGROUND
The maximum a posteriori probability (MAP) decoder, and/or variations of this decoder, is commonly used for signal processing. For instance, a MAP decoder may be used as part of a larger decoder, such as a turbo decoder, in a wireless communication device. The turbo decoder may be used to decode data that is received over a noisy channel, such as the radio interface of the wireless communication device.
A number of variations of the MAP decoder are known. The logarithmic version of the MAP decoder, for example, may be more feasible for practical hardware implementations. Whatever version of the MAP
decoder is used, however, it can be desirable to implement the MAP decoder as efficiently as possible, with respect to available hardware constraints.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more implementations described herein and, together with the description, explain
these implementations. In the drawings:
FIG. 1 is a diagram of an example system in which concepts described herein may be implemented;
FIG. 2 is a diagram of an example device that may correspond to a device in FIG. 1;
FIG. 3 is a diagram illustrating an example of a simplified trellis;
FIG. 4 is a diagram conceptually illustrating example components of a MAP decoder;
FIG. 5 is a diagram illustrating an example of the operation of the parallel execution units of FIG. 4 using a scan technique;
FIG. 6 is a flowchart illustrating an example process for the parallel implementation of a MAP decoder;
FIG. 7 is a diagram illustrating an alternative example implementation of the scan technique;
FIG. 8 is a flowchart illustrating an example process for generating a model that uses a MAP decoder;
FIG. 9 is a diagram illustrating an example system that may use a MAP decoder; and
FIG. 10 is a diagram illustrating an example implementation of a turbo decoder.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
Implementations described herein may relate to a parallel implementation of the MAP decoder. A number of processing units, such as hardware processing units in an electronic device, may efficiently
implement a MAP decoder, such as a MAP decoder implemented as part of a turbo decoder. In one implementation, the MAP decoder may be designed and/or deployed in a technical computing environment (TCE).
To implement the MAP decoder, a scan algorithm may be used for a parallel computation of intermediate results. For example, the scan algorithm may be used to calculate products of the cumulative
products of a series of transition matrices and an initialization vector. The scan algorithm, and hence the MAP decoder, may be performed by parallel processing units.
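As a hedged illustration (not part of the claimed implementation), the core step of the scan algorithm, computing all cumulative products of a series of matrices in log_2(N) parallel stages, can be sketched in Python. NumPy and the specific pairwise (Hillis-Steele style) scheduling are illustrative assumptions; matrix multiplication is associative, which is what lets the scan reorder the work:

```python
import numpy as np

def sequential_cumulative_products(mats):
    """Reference: left-cumulative products G_k * ... * G_1, one per symbol."""
    out, acc = [], np.eye(mats[0].shape[0])
    for g in mats:
        acc = g @ acc
        out.append(acc.copy())
    return out

def scan_cumulative_products(mats):
    """Inclusive scan in log2(N) steps. Within each step, every matrix
    multiplication is independent of the others, so each step could be
    dispatched across parallel execution units."""
    prods = [m.copy() for m in mats]
    step = 1
    while step < len(prods):
        prods = [prods[i] @ prods[i - step] if i >= step else prods[i]
                 for i in range(len(prods))]
        step *= 2
    return prods

rng = np.random.default_rng(0)
mats = [rng.random((2, 2)) for _ in range(8)]   # hypothetical G_1..G_8
ref = sequential_cumulative_products(mats)
par = scan_cumulative_products(mats)
assert all(np.allclose(a, b) for a, b in zip(ref, par))
```

For N=8 the scan finishes in three steps instead of seven sequential multiplications, at the cost of performing more total multiplications.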
DEFINITIONS
A Technical Computing Environment (TCE) may include any hardware and/or software based logic that provides a computing environment that allows users to perform tasks related to disciplines, such as,
but not limited to, mathematics, science, engineering, medicine, and business. The TCE may include text-based facilities (e.g., MATLAB® software), a graphically-based environment (e.g., Simulink®
software, Stateflow® software, SimEvents® software, etc., by The MathWorks, Inc.; VisSim by Visual Solutions; LabView® by National Instruments; etc.), or another type of environment, such as a hybrid
environment that includes one or more of the above-referenced text-based environments and one or more of the above-referenced graphically-based environments.
The TCE may be integrated with or operate in conjunction with a graphical modeling environment, which may provide graphical tools for constructing models, systems, or processes. The TCE may include
additional tools, such as tools designed to convert a model into an alternate representation, such as source computer code, compiled computer code, or a hardware description (e.g., a description of a
circuit layout). In one implementation, the TCE may provide this ability using graphical toolboxes (e.g., toolboxes for signal processing, image processing, color manipulation, data plotting,
parallel processing, etc.). In another implementation, the TCE may provide these functions as block sets. In still another implementation, the TCE may provide these functions in another way.
Models generated with the TCE may be, for example, models of a physical system, a computing system (e.g., a distributed computing system), an engineered system, an embedded system, a biological
system, a chemical system, etc.
System Description
FIG. 1 is a diagram of an example system 100 in which concepts described herein may be implemented. System 100 may include a personal computer or workstation 110. Workstation 110 may execute a TCE 120
that presents a user with an interface that enables design, analysis, and generation of, for example, technical applications, engineered systems, and business applications. For example, TCE 120 may
provide a numerical and/or symbolic computing environment that allows for matrix manipulation, plotting of functions and data, implementation of algorithms, creation of user interfaces, and/or
interfacing with programs in other languages. TCE 120 may particularly include a graphical modeling component and a component to convert graphic models into other forms, such as computer source code
(e.g., C++ code) or hardware descriptions (e.g., a description of an electronic circuit).
Workstation 110 may operate as a single detached computing device. Alternatively, workstation 110 may be connected to a network 130, such as a local area network (LAN) or a wide area network (WAN),
such as the Internet. When workstation 110 is connected to network 130, TCE 120 may be run by multiple networked computing devices or by one or more remote computing devices. In such an
implementation, TCE 120 may be executed in a distributed manner, such as by executing on multiple computing devices simultaneously. Additionally, in some implementations, TCE 120 may be executed over
network 130 in a client-server relationship. For example, workstation 110 may act as a client that communicates (e.g., using a web browser) with a server that stores and potentially executes
substantive elements of TCE 120.
As shown in FIG. 1, system 100 may include a remote TCE 140 (e.g., a remotely located computing device running a TCE) and/or a TCE service 160. TCE service 160 may include a server computing device
that provides a TCE as a remote service. For instance, a TCE may be provided as a web service. The web service may provide access to one or more programs provided by TCE service 160.
In one implementation, models created with TCE 120 may be executed at workstation 110 to present an interface, such as a graphical interface, to a user. In some implementations, TCE 120 may generate,
based on the model, code that is executable on another device, such as a target device 170. Target device 170 may include, for example, a consumer electronic device, a factory control device, an
embedded device, a general computing device, a graphics processing unit or device, a field programmable gate array, an application specific integrated circuit (ASIC), or any other type of
programmable device. In one implementation, target device 170 may particularly include a communication device or a semiconductor chip within a communication device, such as a wireless communication device.
Target device 170, workstation 110, and/or remote TCE 140 may include multiple, parallel processing engines. For example, workstation 110 may include a multicore processor. Similarly, target device 170 may include a multicore processor or may include parallel processing engines that may be used for signal processing tasks. As will be described in more detail below, multiple, parallel processing
engines of target device 170, workstation 110, and/or remote TCE 140, may be used to efficiently implement a MAP decoder.
Although FIG. 1 shows example components of system 100, in other implementations, system 100 may contain fewer components, different components, differently arranged components, and/or additional
components than those depicted in FIG. 1. Alternatively, or additionally, one or more components of system 100 may perform one or more other tasks described as being performed by one or more other
components of system 100.
FIG. 2 is a diagram of an example device 200 that may correspond to workstation 110, target device 170, or a remote device running remote TCE 140 or TCE service 160. As illustrated, device 200 may
include a bus 210, a processing unit 220, a main memory 230, a read-only memory (ROM) 240, a storage device 250, an input device 260, an output device 270, and/or a communication interface 280. Bus
210 may include a path that permits communication among the components of device 200.
Processing unit 220 may interpret and/or execute instructions. For example, processing unit 220 may include a general-purpose processor, a microprocessor, a multicore microprocessor, a data
processor, a graphical processing unit (GPU), co-processors, a network processor, an application specific integrated circuit (ASIC), an application specific instruction-set processor (ASIP), a
system-on-chip (SOC), a controller, a programmable logic device (PLD), a chipset, and/or a field programmable gate array (FPGA).
Memory 230 may store data and/or instructions related to the operation and use of device 200. For example, memory 230 may store data and/or instructions that may be configured to implement an
implementation described herein. Memory 230 may include, for example, a random access memory (RAM), a dynamic random access memory (DRAM), a static random access memory (SRAM), a synchronous dynamic
random access memory (SDRAM), a ferroelectric random access memory (FRAM), a read only memory (ROM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), an
electrically erasable programmable read only memory (EEPROM), and/or a flash memory.
Storage device 250 may store data and/or software related to the operation and use of device 200. For example, storage device 250 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of computer-readable medium, along with a corresponding drive. Memory 230 and/or storage device 250 may also include a storage device external to and/or removable from device 200, such as a Universal Serial Bus (USB) memory stick, a hard disk, etc. In an implementation, storage device 250 may store TCE 120.
Input device 260 may include a mechanism that permits an operator to input information to device 200, such as a keyboard, a mouse, a pen, a single or multi-point touch interface, an accelerometer, a gyroscope, a microphone, voice recognition and/or biometric mechanisms, etc. Output device 270 may include a mechanism that outputs information to the operator, including a display, a printer, a speaker, etc. In the case of a display, the display may be a touch screen display that acts as both an input and an output device. Input device 260 and/or output device 270 may be haptic type devices, such as joysticks or other devices based on touch.
Communication interface 280 may include any transceiver-like mechanism that enables device 200 to communicate with other devices and/or systems. For example, communication interface 280 may include mechanisms for communicating with another device or system via a network.
As will be described in detail below, device 200 may perform certain operations in response to processing unit 220 executing software instructions contained in a computer-readable medium, such as memory 230. For instance, device 200 may implement TCE 120 by executing software instructions from memory 230. A computer-readable medium may be defined as a non-transitory memory device, where the memory device may include a number of physical, possibly distributed, memory devices. The software instructions may be read into memory 230 from another computer-readable medium, such as storage device 250, or from another device via communication interface 280. The software instructions contained in memory 230 may cause processing unit 220 to perform processes that will be described later.
Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes described herein. Thus, implementations described herein are not limited
to any specific combination of hardware circuitry and software.
Although FIG. 2 shows example components of device 200, in other implementations, device 200 may contain fewer components, different components, differently arranged components, or additional
components than depicted in FIG. 2. Alternatively, or additionally, one or more components of device 200 may perform one or more tasks described as being performed by one or more other components of
device 200.
Parallel Implementation of MAP Decoder
In general, a MAP decoder may be used as a common decoding solution for an error-control coding system. A MAP decoder may implement a trellis-based estimation technique in which the MAP decoder
produces soft decisions relating to the state of a block of inputs. MAP decoders may be frequently used in the context of a larger decoder, such as a turbo decoder, where two or more component MAP
decoders may be used, and the coding may involve iteratively feeding outputs from the MAP decoders to one another until a final decision is reached on the state of the communicated information,
called the message.
FIG. 3 is a diagram illustrating an example of a simplified trellis 300. Trellis 300 may represent correspondences between codewords (i.e., an input sequence of data bits) and paths from the beginning of the trellis, shown as node 310, to the end of the trellis, shown as node 320. Trellis 300 may be considered to be a deterministic finite automaton with one start state and one finish state.
Given a received, possibly error-corrupted codeword, error probabilities may be associated with weights on the edges (the lines between the nodes) of trellis 300. A MAP decoder is one technique for
estimating the message or minimizing code symbol errors.
In FIG. 3, a transmitted length-N data block u_k (0≦k≦N) and a corresponding sequence c_k of extrinsic input data are shown. In this example, trellis 300 is illustrated as a two-state trellis, with states S0 and S1. In practice, trellis 300 may include additional states. In general, trellis 300 may be a sparse data structure in which not all states are connected by an edge.
FIG. 4 is a diagram conceptually illustrating example components of a MAP decoder 400. MAP decoder 400 may be implemented, for example, as a model in TCE 120 or as part of target device 170.
MAP decoder 400 may include pre-processor component 410, parallel execution units 420, and post-processing component 430. MAP decoder 400 may operate to compute the likelihood, such as the
Log-Likelihood Ratio (LLR), of each bit or symbol, of an input data block, being correct. MAP decoder 400 may receive, at pre-processor component 410, a length N data block 402 and may receive
extrinsic input data 404. Extrinsic input data 404 may include, for example, parity bits and/or LLR values from a previous iteration of MAP decoder 400 (or from another MAP decoder). MAP decoder 400
may output, from post-processor component 430, a length N output data block 432 and extrinsic output data 434. The extrinsic output data 434 may include, for example, updated LLR values.
The MAP decoding technique may be based on the calculation of a number of parameters, commonly called the alphas, α_k, the betas, β_k, and the gammas, γ_k. The alphas may be computed through a forward recursion operation, the betas may be computed through a backwards recursion operation, and the gammas may include the transition probability of a channel and transition probabilities of an encoder trellis. In one implementation, the alphas and betas may be defined as:

α_k(s) = Σ_{s′} γ_k(s′, s) · α_{k−1}(s′); and

β_k(s) = Σ_{s′} γ_k(s, s′) · β_{k+1}(s′),

where k indexes symbols in the received block, s and s′ may represent states of the decoder, and γ_k(s′, s) may represent the transition probability of the channel and transition probabilities of the encoder trellis. The gammas may be defined as:

γ_k(s′, s) = P(S_k = s, R_k | S_{k−1} = s′),

where S_k is the state at time k and the input block and parity sequence is {R_1, . . . , R_k, . . . , R_N}, with R_k denoting the received symbol at time k.
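As an illustrative sketch only (the gamma values below are hypothetical, not derived from any particular channel or encoder trellis), the forward and backward recursions above can be written directly in Python; a useful sanity check is that the dot product of alphas and betas at any time step is invariant:

```python
import numpy as np

# Hypothetical gammas for a two-state trellis over N=3 symbols; values are
# illustrative only. gamma[k][s_prev, s] ~ P(S_k = s, R_k | S_{k-1} = s_prev),
# with the gamma list 0-indexed here.
gamma = [np.array([[0.5, 0.1], [0.2, 0.6]]),
         np.array([[0.4, 0.3], [0.1, 0.7]]),
         np.array([[0.6, 0.2], [0.3, 0.5]])]
N, S = len(gamma), 2

# Forward recursion: alpha_{k+1}(s) = sum_{s'} gamma[k](s', s) * alpha_k(s').
alpha = [np.array([1.0, 0.0])]          # alpha_0: decoder starts in state s0
for k in range(N):
    alpha.append(np.array([sum(gamma[k][sp, s] * alpha[-1][sp]
                               for sp in range(S)) for s in range(S)]))

# Backward recursion: beta_k(s) = sum_{s'} gamma[k](s, s') * beta_{k+1}(s').
beta = [np.array([1.0, 1.0])]           # beta_N: uniform termination
for k in range(N - 1, -1, -1):
    beta.insert(0, np.array([sum(gamma[k][s, sp] * beta[0][sp]
                                 for sp in range(S)) for s in range(S)]))

# Invariant: sum_s alpha_k(s) * beta_k(s) is the same for every k.
totals = [float(alpha[k] @ beta[k]) for k in range(N + 1)]
assert all(abs(t - totals[0]) < 1e-12 for t in totals)
```

The invariant holds because each step of the backward recursion undoes exactly one step of the forward recursion when the two are contracted together.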
The forward recursion, the alphas, can be modeled as products of the cumulative matrix product of several square transition matrices (one matrix per received symbol) and an initialization vector.
Backwards recursion can be described as products of the right-to-left cumulative matrix product of different transition matrices and the initialization vector. For example, in a two state trellis,
each recursive computation may be described in matrix form, as in:
    [ α_k(s0) ]   [ γ_k(0,0)  γ_k(1,0) ]   [ α_{k−1}(s0) ]
    [ α_k(s1) ] = [ γ_k(0,1)  γ_k(1,1) ] * [ α_{k−1}(s1) ]     (Eq. 1)
These formulas can be equivalently written as

A_k = G_k * A_{k−1},

where A_k represents a column vector of the alphas and G_k represents a square matrix, which will also be referred to as a transition matrix herein. The transition matrices may generally be relatively sparse. Based on the above equations, the following equation can be derived:

A_k = G_k * G_{k−1} * . . . * G_1 * A_0.

From this equation, the forward recursion may be performed by left multiplying A_0, the initialization vector, by each element of the cumulative matrix product of {G_N, G_{N−1}, . . . , G_1}. For example, for three symbols, the alphas may be calculated as:

A_1 = G_1 * A_0, A_2 = G_2 * G_1 * A_0, and A_3 = G_3 * G_2 * G_1 * A_0.
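The equivalence between the step-by-step recursion and the cumulative-product form can be checked with a small sketch; the 2×2 matrices and the initialization vector below are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical transition matrices G_1..G_3 and initialization vector A_0;
# the values are illustrative only, not from any real trellis.
G = [np.array([[0.9, 0.2], [0.1, 0.8]]),
     np.array([[0.7, 0.4], [0.3, 0.6]]),
     np.array([[0.5, 0.5], [0.5, 0.5]])]
A0 = np.array([1.0, 0.0])

# Step-by-step forward recursion: A_k = G_k * A_{k-1}.
A, recursive = A0, []
for Gk in G:
    A = Gk @ A
    recursive.append(A)

# Equivalent form: left-multiply A_0 by each cumulative product G_k*...*G_1.
P, cumulative = np.eye(2), []
for Gk in G:
    P = Gk @ P
    cumulative.append(P @ A0)

# Both forms yield the same alphas at every step.
assert all(np.allclose(r, c) for r, c in zip(recursive, cumulative))
```

The cumulative-product form is what makes the recursion parallelizable: the matrix products can be computed by a scan, independently of the initialization vector.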
Consistent with aspects described herein, the products of the cumulative products of the transition matrices and the initialization vector, as included in these equations, may be efficiently calculated, in parallel, based on the scan algorithm.
The backwards recursion, the betas, can be similarly modeled as products of the cumulative matrix product of a second set of transition matrices (different than the transition matrices for the
alphas) and the initialization vector. The scan algorithm may also be used to efficiently calculate, in parallel, the products of the cumulative products of the transition matrices (for the betas)
and the initialization vector.
Pre-processor component 410 may receive input block 402 and extrinsic input data 404. Pre-processor component 410 initiates and controls the data flow through parallel execution units 420. In one
implementation, the quantity of the parallel execution units 420 may be equal to N/2, where N is the size of input block 402.
Parallel execution units 420 may include multiple, parallel executing processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), graphic processing units (GPUs), software threads running on a general processor, or other execution units. Parallel execution units 420 may calculate, in parallel, the transition matrices, G_k (for both the alphas and the betas); calculate, in parallel, each of the products of the cumulative products of the transition matrices and the initialization vector A_0; and convert, in parallel, the products of the cumulative products of the transition matrices and the initialization vector, for both the alphas and the betas, to an output vector. Parallel execution units 420 may perform these operations in a pipelined manner in which there is communication between different parallel execution units 420.
As previously mentioned, the calculation of the products of the cumulative products of the transition matrices and the initialization vector may be performed according to the scan algorithm. In one implementation, the initialization vector may be defined based on the particular MAP decoder being implemented. The initialization vector may be a constant-valued vector that is used by parallel execution units 420, and is illustrated as A_0 in the above equations. The scan algorithm may then be implemented, by parallel execution units 420, to calculate products of the cumulative products of the transition matrices and the initialization vector (for both the alphas and the betas).
Post-processor 430 may perform any final, serial processing of the results from parallel execution units 420, and may output block 432 and extrinsic output data 434.
Although FIG. 4 shows example components of MAP decoder 400, in other implementations, MAP decoder 400 may contain fewer components, different components, differently arranged components, and/or
additional components than those depicted in FIG. 4. Alternatively, or additionally, one or more components of MAP decoder 400 may perform one or more tasks described as being performed by one or
more other components of MAP decoder 400.
FIG. 5 is a diagram illustrating an example of the operation of parallel execution units 420 in calculating, based on the scan algorithm, partial products of the cumulative products of transition matrices. The operations shown in FIG. 5 may be separately performed, in parallel, to determine the partial products of the cumulative products of transition matrices for both the alphas and the betas.
In this example, four parallel execution units are shown, labeled as parallel execution units 510, 520, 530, and 540. For this example, assume that the input array includes eight transition matrices (i.e., N=8). The set of transition matrices includes the set: {G_1, G_2, G_3, G_4, G_5, G_6, G_7, G_8}. In a first pipeline stage 550 (i.e., the first step in the implementation of the scan algorithm), parallel execution unit 510 may receive the transition matrices G_1 and G_2, parallel execution unit 520 may receive the transition matrices G_3 and G_4, parallel execution unit 530 may receive the transition matrices G_5 and G_6, and parallel execution unit 540 may receive the transition matrices G_7 and G_8.

In a second stage 552 of the pipelines, parallel execution unit 510 may calculate the product of transition matrices G_2 and G_1 (i.e., G_2*G_1). Simultaneously, parallel execution unit 520 may calculate the product of transition matrices G_4 and G_3 (i.e., G_4*G_3); parallel execution unit 530 may calculate the product of transition matrices G_6 and G_5 (i.e., G_6*G_5); and parallel execution unit 540 may calculate the product of transition matrices G_8 and G_7 (i.e., G_8*G_7). Additionally, in the second stage of the pipelines, parallel execution unit 510 may store transition matrix G_1, parallel execution unit 520 may store transition matrix G_3, parallel execution unit 530 may store transition matrix G_5, and parallel execution unit 540 may store transition matrix G_7. As illustrated, each succeeding stage of the pipelines may involve one or more matrix product calculations or transfer a previous matrix product calculation to a different one of the pipelines implemented by the parallel execution units.
In the final stage of the pipeline, labeled as stage 554, each of parallel execution units 510, 520, 530, and 540 may output a portion of the cumulative products of the transition matrices, which together form the complete set of cumulative products. The matrix multiplication operations, H, illustrated in FIG. 5, may refer to operations other than standard matrix multiplication operations, depending on the version of the MAP decoder that is being implemented. For example, the original MAP decoder algorithm may be relatively computationally intensive. The Max-Log-MAP decoder technique is one known variation of the MAP decoder. In general, the Max-Log-MAP may be based on using the natural logarithm of the alphas, betas, and gammas. For the Max-Log-MAP implementation, scalar multiplication may be replaced with addition and scalar addition may be replaced with the maximum operation. With these replacements, the matrix multiplication may be performed as illustrated in FIG. 5. Other MAP decoder techniques, such as the Log-MAP, may alternatively be used.
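The Max-Log-MAP replacement described above, where scalar multiplication becomes addition and scalar addition becomes the maximum, amounts to matrix "multiplication" in the (max, +) semiring. A sketch, with hypothetical log-domain values chosen only for illustration:

```python
import numpy as np

def maxlog_matmul(A, B):
    """Matrix 'multiplication' in the (max, +) semiring, as used in the log
    domain by Max-Log-MAP: scalar multiply -> addition, scalar add -> max."""
    n, m = A.shape[0], B.shape[1]
    out = np.full((n, m), -np.inf)
    for i in range(n):
        for j in range(m):
            out[i, j] = np.max(A[i, :] + B[:, j])
    return out

# Sanity check with hypothetical probabilities: the (max,+) product of
# log-metrics lower-bounds the log of the ordinary matrix product, since it
# keeps only the largest term of each sum (the Max-Log approximation).
A = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
B = np.log(np.array([[0.6, 0.4], [0.3, 0.7]]))
approx = maxlog_matmul(A, B)
exact = np.log(np.exp(A) @ np.exp(B))
assert np.all(approx <= exact + 1e-12)
```

Because (max, +) matrix multiplication is associative, the same scan-based pipeline of FIG. 5 applies unchanged; only the element-wise operations differ.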
FIG. 6 is a flowchart illustrating an example process 600 for the parallel implementation of a MAP decoder. Process 600 may be performed by, for example, MAP decoder 400.
Process 600 may include receiving an input array that represents the encoded data (block 610). The input array may be a fixed length array and may include encoded data received over a noisy channel,
including parity bits added during the encoding (i.e., at the transmitting end of the noisy channel). The input array may also include extrinsic input data.
Process 600 may further include calculating the transition matrices (block 620). In one implementation, the transition matrices, G, may be calculated as discussed above with reference to equation
(1). The transition matrices may be calculated, in parallel, by parallel execution units 420. The transition matrices may be calculated for both the alphas and the betas.
Process 600 may further include, based on the transition matrices and using the scan algorithm, calculation of the products of the cumulative products of the transition matrices and an initialization
vector (block 630). The initialization vector may be a constant valued vector that is defined based on the particular MAP decoder that is being implemented. The calculation of block 630 may be
performed in parallel using the scan algorithm. In one implementation, the parallel processing may be performed, in a pipelined manner, using a quantity of processing units 420. The quantity of
processing units required for a maximally parallel implementation may be, for instance, N/2, where N may represent the number of transition matrices and each processing unit may implement a pipeline
having 2*log_2(N) stages. Block 630 may be performed, in parallel, for both the alphas and the betas.
Process 600 may further include generating, based on the products of the cumulative products of the transition matrices and the initialization vector, as calculated in block 630, the MAP decoder
output data (block 640). The calculation of block 640 may be performed, in parallel, by parallel execution units 420. The calculation of block 640 may include forming the output based on both sets
(i.e., the alpha and the beta sets) of the products of the cumulative products of the transition matrices and the initialization vector. The output data may generally correspond to a decoded version
of the received encoded data, such as output block 432 and extrinsic output data 434.
In one particular example of an implementation of process 600, process 600 may be implemented on target device 170 that includes multiple, parallel, GPUs. In some implementations, data sent to the
multiple GPUs may be sent in a "batch" mode to potentially hide memory latency and increase throughput.
FIG. 7 is a diagram illustrating an alternative example implementation of the scan technique. In the implementation illustrated in FIG. 7, the cumulative products of the transition matrices may be performed as a three stage operation.
As shown in FIG. 7, transition matrices 700 may be segmented into a number of independent subsegments 710-1 through 710-L (referred to collectively as "subsegments 710" or individually as "subsegment
710"). The scan operation may be applied to each of subsegments 710 and the full product of each scan may be stored, illustrated as full products 720-1 through 720-L (referred to collectively as
"full products 720" or individually as "full product 720"). The scan operation may then again be applied to full products 720 to obtain partial products 730. Partial products 730 may then be
distributed to obtain final cumulative products 740. Although not explicitly shown in FIG. 7, the initialization vector may also be multiplied as part of the operations in FIG. 7, such that the final
cumulative products 740 may represent the products of the cumulative products of the transition matrices and the initialization vector. By segmenting the transition matrices into a series of groups
of transition matrices, and then independently applying the scan operation to each group, as illustrated in FIG. 7, the scan operation can potentially be performed more efficiently and/or more quickly.
In the techniques shown in FIG. 7, the initialization vector may be multiplied after performing matrix by matrix multiplications or before performance of the matrix by matrix multiplications. In one
implementation, the matrix by vector multiplications (e.g., multiplications of matrices by the initialization vector) may be preferentially performed before matrix by matrix multiplications, which
may lead to a more computationally efficient process.
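The three-stage segmented scan of FIG. 7 can be sketched as follows; the segment count, matrix sizes, and values are illustrative assumptions, and the sketch is sequential where a real implementation would run segments on parallel execution units:

```python
import numpy as np

def segmented_scan_products(mats, num_segments):
    """Three-stage segmented scan (FIG. 7 style): (1) scan each segment
    independently, (2) scan the per-segment full products, (3) distribute
    each segment's left prefix back into that segment's local scan."""
    n = len(mats)
    seg = n // num_segments
    segments = [mats[i * seg:(i + 1) * seg] for i in range(num_segments)]

    # Stage 1: independent left-cumulative products within each segment.
    local = []
    for s in segments:
        acc, scans = np.eye(mats[0].shape[0]), []
        for g in s:
            acc = g @ acc
            scans.append(acc.copy())
        local.append(scans)

    # Stage 2: scan over the full per-segment products (scans[-1]).
    prefix, prefixes = np.eye(mats[0].shape[0]), []
    for scans in local:
        prefixes.append(prefix.copy())
        prefix = scans[-1] @ prefix

    # Stage 3: distribute prefixes to obtain the global cumulative products.
    return [scans_k @ prefixes[i] for i, scans in enumerate(local)
            for scans_k in scans]

rng = np.random.default_rng(1)
mats = [rng.random((2, 2)) for _ in range(8)]
result = segmented_scan_products(mats, 4)

# Reference: plain sequential left-cumulative products G_k * ... * G_1.
ref, acc = [], np.eye(2)
for g in mats:
    acc = g @ acc
    ref.append(acc.copy())
assert all(np.allclose(a, b) for a, b in zip(result, ref))
```

Multiplying the initialization vector into each segment before the matrix-by-matrix stages, as the text suggests, would turn most of these matrix products into cheaper matrix-by-vector products.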
FIG. 8 is a flow chart illustrating an example process 800 for generating a model that uses a MAP decoder. Process 800 may be performed by, for example, workstation 110, running TCE 120.
Process 800 may include receiving a model or otherwise enabling or facilitating the creation of a model (block 810). The model may include a MAP decoder component. The MAP decoder component may implement MAP decoding using multiple parallel processing units, such as processing units 420. In one implementation, the MAP decoder component for the model may include parameters that
allow a designer to specify the hardware elements that are to implement the parallel computations. The MAP decoder component may be implemented with other components to perform a larger or more
complex function. For example, a turbo decoder may be implemented using multiple MAP decoder components that are connected to one another using other model components, such as interleavers.
Process 800 may further include testing the model (block 820). For example, the model may be run by TCE 120 and values for parameters in the model may be observed. In response, the user may, for
example, interactively, through TCE 120, modify the operation of the model.
At some point, the user may determine that the model is ready for deployment in a target device. At this point, process 800 may further include generating code, to implement the model, on one or more
target devices (block 830). For example, the user may control TCE 120 to generate compiled code for target device 170. In another possible implementation, the generated code may be code that controls
programming of a hardware device, such as code that specifies the layout of an ASIC or FPGA.
FIG. 9 is a diagram illustrating an example system 900 that may use a MAP decoder. System 900 may be a communication system in which information is transmitted across a noisy channel. As shown,
system 900 may include turbo encoder 910, channel 920, and turbo decoder 930.
Turbo encoder 910 may operate to encode an input information signal, to include redundant data, to make the information signal resistant to noise that may be introduced through channel 920. For
example, turbo encoder 910 may include two recursive systematic convolutional (RSC) encoders that each generate parity bits that are included with the information signal when transmitted over channel 920.
Channel 920 may include a noisy channel that may tend to introduce errors into the signal output from turbo encoder 910. For example, channel 920 may be an over-the-air radio channel, optical-based
channel, or other channel that may tend to introduce noise.
Turbo decoder 930 may receive the encoded signal, after it is communicated over channel 920, and may act to decode the encoded signal, to ideally obtain the original input information signal. Turbo
decoder 930 may include multiple MAP decoders and one or more interleavers. A number of designs for turbo decoder 930 are known. One example of a design for a particular turbo decoder 930 is
described in more detail with respect to FIG. 10.
FIG. 10 is a diagram illustrating an example implementation of a turbo decoder, such as turbo decoder 930. Turbo decoder 930 may include a pair of MAP decoders 1010 and 1030 and a pair of
interleavers 1020 and 1040. Turbo decoder 930 may operate on blocks of data. MAP decoder 1010 may receive an initial block of data, including error correcting information, such as parity bits
(labeled as input Z). MAP decoder 1030 may also receive the initial data, or a version of the initial block of data, and the error correcting information (labeled as input Z').
The output of MAP decoders 1010 and 1030 may be forwarded through the pair of interleavers 1020 and 1040. Interleavers 1020 and 1040 may generally operate to reorder input data. Interleavers 1020 and
1040 may be matched as interleaver/de-interleaver pairs, so that the interleaving performed by one of interleavers 1020 and 1040 can be undone by the other.
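A matched interleaver/de-interleaver pair of the kind described can be sketched as a permutation and its inverse. This is an illustrative sketch only: practical turbo-code interleavers use carefully designed permutations, not a generic seeded shuffle.

```python
import random

def make_interleaver(n, seed=0):
    # A permutation of n positions and its inverse (the de-interleaver).
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

def apply_perm(data, perm):
    # Reorder data according to the permutation.
    return [data[p] for p in perm]
```

Applying `perm` and then `inv` returns the original ordering, which is the matched-pair property the decoder relies on.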
MAP decoders 1010 and 1030, and interleavers 1020 and 1040, may iteratively operate until the probabilities determined by MAP decoders 1010 and 1030, such as the LLR probabilities, converge.
The foregoing description of implementations provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and
variations are possible in light of the above teachings or may be acquired from practice of the invention.
For example, while a series of acts has been described with regard to FIGS. 6 and 8, the order of the acts may be modified in other implementations. Further, non-dependent acts may be performed in parallel.
Also, the term "user" has been used herein. The term "user" is intended to be broadly interpreted to include, for example, a workstation or a user of a workstation.
It will be apparent that embodiments, as described herein, may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual
software code or specialized control hardware used to implement embodiments described herein is not limiting of the invention. Thus, the operation and behavior of the embodiments were described
without reference to the specific software code--it being understood that one would be able to design software and control hardware to implement the embodiments based on the description herein.
Further, certain portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware, such as an application specific integrated circuit or a
field programmable gate array, software, or a combination of hardware and software.
Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of the invention. In fact,
many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one
other claim, the disclosure of the invention includes each dependent claim in combination with every other claim in the claim set.
No element, act, or instruction used in the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article
"a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. Further, the phrase "based on" is intended to mean "based, at least in part,
on" unless explicitly stated otherwise.
Patent applications by Halldor N. Stefansson, Natick, MA US
Patent applications by The MathWorks, Inc.
Noise functions
04-14-2004, 06:19 AM #1
Intern Contributor
Join Date
Jan 2004
Noise functions
Can anyone explain to me how to use the noise functions? Whatever the input is, the output is always 0.0.
I'm using GFX5200 (56.64 drivers)
Re: Noise functions
noise functions are not yet implemented in those drivers.
There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.
There is another theory which states that this has already happened...
Re: Noise functions
I was afraid of that. Do you know if they're going to support it in future drivers, or do I have to buy a new card?
Re: Noise functions
It's very possible that you don't get HW accelerated noise in current generation cards. ATI has noise implemented, but that throws the shader to software render. Why not upload a 3D texture and do
lookups until we know that noise works in HW?
Re: Noise functions
Why not upload a 3d texture and lookup until we know that noise works in hw?
Why isn't it possible that the driver does exactly this for you? So noise can also be used...
Re: Noise functions
Since that takes a texture slot? Well, I guess they could make it that way, but they haven't yet.
Re: Noise functions
I'm not sure the texture way would be conformant. It would be a repeating noise function. I think the spec requests a non-repeating noise function.
Re: Noise functions
Well, all noise implementations repeat at some point, it's just a question of what the period is.
We do have pixel shader implementations of real Perlin noise, but they take about 40 instructions. For now I recommend people just use 3D texture lookups.
Originally posted by Humus:
I'm not sure the texture way would be conformant. It would be a repeating noise function. I think the spec requests a non-repeating noise function.
Re: Noise functions
I want to use this Perlin Noise implementation for CPU based noise calculations. But this is a function from R^3 to R.
What would be the best way, to extend it for the GLSL suggested noise functions:
Code :
float noise1(float);
float noise1(vec2);
float noise1(vec3);
float noise1(vec4);
vec2 noise2(float);
vec2 noise2(vec2);
vec2 noise2(vec3);
vec2 noise2(vec4);
vec3 noise3(float);
vec3 noise3(vec2);
vec3 noise3(vec3);
vec3 noise3(vec4);
vec4 noise4(float);
vec4 noise4(vec2);
vec4 noise4(vec3);
vec4 noise4(vec4);
Re: Noise functions
Usually vector-valued noise is implemented on top of scalar noise by simply evaluating the noise function several times at n random offset positions, e.g.:
vec3 noise3(vec3 p)
{
    return vec3(noise(p),
                noise(p + vec3(31, 72, 54)),
                noise(p + vec3(156, 87, 99)));
}
This is obviously n times more expensive than scalar noise.
Perlin doesn't give the code, but the algorithm can be extended to other dimensions.
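In Python, the offset construction just described might look like the following. The scalar hash below is a cheap deterministic stand-in, not a real Perlin implementation, and the offsets are the ones from the snippet above.

```python
import math

def scalar_noise(x, y, z):
    # Deterministic hash-style value in [0, 1); a placeholder for
    # a proper scalar Perlin noise function.
    h = math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return h - math.floor(h)

def noise3(p):
    # Vector-valued noise from scalar noise sampled at fixed offsets.
    x, y, z = p
    return (scalar_noise(x, y, z),
            scalar_noise(x + 31, y + 72, z + 54),
            scalar_noise(x + 156, y + 87, z + 99))
```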
Originally posted by Hampel:
I want to use this Perlin Noise implementation for CPU based noise calculations. But this is a function from R^3 to R.
3770:1-13-04 Game number four.
(A) Title and term. Ohio lottery commission game number four, "Bowling For Bucks," shall be conducted at such times and for such periods as the commission may determine. For the purpose of this rule,
"sales cycle" shall mean any such period, including reprints, beginning on the date when ticket sales are commenced and continuing through the date established by the director as the date on which
agents are to make their final settlement with respect to tickets allocated to them during the period in game number four.
(B) General design.
(1) Game number four is an instant lottery game within which are contained three separate games with a chance to win once in each game. Each game will consist of one "your score," one "their score"
and one "prize" amount. The player is required to remove the covering to reveal three games, each of which will be in a separate horizontal line in the play area. If the player's "your score" is
higher than "their score" in any game across, the player wins the "prize" amount shown for that game. If a score of "300" is revealed in any "your score" spot, the player wins double the prize amount
for that game. The player can win up to three times on each ticket, but only once in each game.
(2) As used in this rule, "prize award" shall mean a free "ticket" of comparable price or one of the following monetary amounts which is the total of all winning prize values appearing on the ticket:
two dollars, four dollars, five dollars, ten dollars, twenty-five dollars, fifty dollars, three hundred dollars and three thousand dollars. Each ticket in game number four shall be imprinted in
such a manner that prize awards from the set listed above may be won.
(C) Price of tickets. The price of a ticket issued by the commission in game number four shall be one dollar.
(D) Structure, nature and value of prize awards.
(1) There shall be one type of prize in game number four called a "regular prize award."
(2) The only "prize values" which shall appear on a ticket in game number four are: free "ticket," one dollar, two dollars, three dollars, five dollars, ten dollars, fifty dollars, one hundred
dollars, three hundred dollars, one thousand dollars and three thousand dollars. Prize values shall be concealed by an opaque covering which may be scratched off by the holder of the ticket to reveal
the underlying prize values.
(a) Holder of a valid winning ticket on which "your score" is higher than "their score" in one game across for a free "ticket" shall win a regular prize award of a "free ticket" of comparable price.
(b) Holder of a valid winning ticket on which "your score" is higher than "their score" in two games across for one dollar two times shall win a regular prize award of two dollars;
(c) Holder of a valid winning ticket on which "your score" is "300" in one game across for one dollar doubled once shall win a regular prize award of two dollars;
(d) Holder of a valid winning ticket on which "your score" is higher than "their score" in two games across for two dollars two times shall win a regular prize award of four dollars;
(e) Holder of a valid winning ticket on which "your score" is "300" in one game across for two dollars doubled once shall win a regular prize award of four dollars;
(f) Holder of a valid winning ticket on which "your score" is higher than "their score" in all three games across for two dollars two times and one dollar once shall win a regular prize award of five
dollars;
(g) Holder of a valid winning ticket on which "your score" is higher than "their score" in one game across for five dollars once shall win a regular prize award of five dollars;
(h) Holder of a valid winning ticket on which "your score" is higher than "their score" in all three games across for five dollars once, three dollars once and two dollars once shall win a regular
prize award of ten dollars;
(i) Holder of a valid winning ticket on which "your score" is "300" in one game across for five dollars doubled once, shall win a regular prize award of ten dollars;
(j) Holder of a valid winning ticket on which "your score" is "300" in one game across for five dollars doubled once, and on which "your score" is higher than "their score" in two games across for
ten dollars once and five dollars once shall win a regular prize award of twenty-five dollars;
(k) Holder of a valid winning ticket on which "your score" is "300" in one game across for ten dollars doubled once, and on which "your score" is higher than "their score" in one game across for five
dollars once shall win a regular prize award of twenty-five dollars;
(l) Holder of a valid winning ticket on which "your score" is "300" in all three games across for ten dollars doubled two times and five dollars doubled once shall win a regular prize award of fifty
dollars;
(m) Holder of a valid winning ticket on which "your score" is higher than "their score" in one game across for fifty dollars once shall win a regular prize award of fifty dollars;
(n) Holder of a valid winning ticket on which "your score" is "300" in two games across for fifty dollars doubled two times, and on which "your score" is higher than "their score" in one game across
for one hundred dollars once shall win a regular prize award of three hundred dollars;
(o) Holder of a valid winning ticket on which "your score" is higher than "their score" in one game across for three hundred dollars once shall win a regular prize award of three hundred dollars;
(p) Holder of a valid winning ticket on which "your score" is higher than "their score" in all three games across for one thousand dollars three times shall win a regular prize award of three
thousand dollars; and
(q) Holder of a valid winning ticket on which "your score" is higher than "their score" in one game across for three thousand dollars once shall win a regular prize award of three thousand dollars.
(E) Number of prize awards. The number of prize awards in any sales cycle of game number four will depend upon the number of tickets sold during that cycle. Tickets shall be printed in accordance
with this rule using random techniques in order that:
(1) Combinations winning each prize award are randomly distributed throughout all tickets printed in any given ticket issuance; and
(2) Mathematical reasoning indicates the number of winning tickets sold per eight million tickets sold in the following prize categories will be as follows:
(a) Two tickets winning in one game across for three thousand dollars once to win three thousand dollars;
(b) Two tickets winning in three games across for one thousand dollars three times to win three thousand dollars;
(c) Twenty-four tickets winning in one game across for three hundred dollars once to win three hundred dollars; and
(d) Twenty-four tickets winning in all three games across for fifty dollars doubled two times and one hundred dollars once to win three hundred dollars.
(3) Mathematical reasoning indicates that over a sufficiently large number of tickets sold, the average number of winning tickets per five hundred thousand tickets sold will be as follows:
(a) Two hundred tickets winning in one game across for fifty dollars once to win fifty dollars;
(b) Two hundred tickets winning in all three games across for ten dollars doubled two times and five dollars doubled once, to win fifty dollars;
(c) One thousand sixty-four tickets winning in two games across for ten dollars doubled once and five dollars once to win twenty-five dollars;
(d) One thousand one hundred tickets winning in all three games across for five dollars doubled once, ten dollars once and five dollars once to win twenty-five dollars;
(e) Two thousand five hundred tickets winning in one game across for five dollars doubled once to win ten dollars;
(f) One thousand eight hundred seventy-five tickets winning in all three games across for five dollars once, three dollars once and two dollars once to win ten dollars;
(g) Three thousand one hundred twenty-five tickets winning in one game across for five dollars once to win five dollars;
(h) Three thousand tickets winning in all three games across for two dollars two times and one dollar once to win five dollars;
(i) Five thousand tickets winning in one game across for two dollars doubled once to win four dollars;
(j) Five thousand one hundred twenty-five tickets winning in two games across for two dollars two times to win four dollars;
(k) Fourteen thousand one hundred twenty-five tickets winning in one game across for one dollar doubled once to win two dollars;
(l) Thirteen thousand eight hundred seventy-five tickets winning in two games across for one dollar two times to win two dollars; and
(m) Fifty-one thousand six hundred twenty-five tickets winning in one game across for a free "ticket" to win a free ticket of comparable price.
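The per-500,000-ticket figures in paragraph (E)(3) imply an overall payout rate, which a short sketch can total. (Valuing the free one-dollar ticket at one dollar is an assumption, and the rarer prizes of paragraph (E)(2) are excluded here.)

```python
# (winning tickets per 500,000 sold, prize value in dollars)
table = [
    (200, 50), (200, 50),
    (1064, 25), (1100, 25),
    (2500, 10), (1875, 10),
    (3125, 5), (3000, 5),
    (5000, 4), (5125, 4),
    (14125, 2), (13875, 2),
    (51625, 1),   # free ticket, valued at the $1 ticket price
]
winners = sum(n for n, _ in table)
paid = sum(n * v for n, v in table)
payout_rate = paid / 500_000   # roughly 59% before the rarer prizes
```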
(F) Frequency of prize drawings.
(1) Random imprinting of prize awards on all tickets issued in game number four shall be accomplished in a manner which complies with the commission's rules and procedures.
(2) When a ticket issued in game number four is sold or deemed sold in accordance with this rule and the covering material over any of the numbers has been removed, the holder shall be deemed to have
drawn the numbers on that ticket which determine whether the holder is entitled to a regular prize award. All regular prize awards shall be deemed announced no later than the last day of the sales
cycle of game number four in which the ticket was sold.
(G) Special claim, entry, receipt and validation procedures. The director shall establish special claim, entry, receipt and validation procedures, including procedures for validation by agents of
tickets winning prizes which are to be paid by agents in accordance with commission rules. Prize awards shall be claimed within the time limits set forth by commission rules.
(H) Validity of tickets.
(1) A mechanical error in printing prize awards, symbols, words or other numbers on a ticket shall not automatically invalidate that ticket. To the extent feasible, the director shall establish
procedures by which the holder of any ticket on which information is incorrectly printed due to mechanical malfunction may be advised of correct information for the ticket. If it is not technically
feasible to recover the information from a mechanically misprinted ticket, the director may declare the ticket void and the holder shall be entitled to a return of the ticket price, or a replacement
ticket of comparable price.
(2) In addition to, but not in limitation of, all other power and authority conferred on the director by the commission's rules, the director may declare a ticket in game number four void if it is
stolen, unissued, deactivated, not sold or deemed not sold in accordance with commission rules; if it is illegible, mutilated, altered, counterfeit, misregistered, reconstituted, miscut, defective,
printed or produced in error or incomplete; or if the ticket fails any of the validation tests or procedures established by the director. The commission's liability and responsibility for a ticket
declared void, if any, is limited to refund of the retail sales price of the ticket or issuance of a replacement ticket of comparable price.
(I) Director's conduct of game number four.
(1) The director shall conduct game number four in a manner consistent with the Lottery Act, the rules of the commission, including without limitation, this rule and the regulations of the director.
As deemed necessary or advisable, the director shall adopt management regulations, orders or directives to implement and operate this lottery game. The director shall inform the public of the
provisions of this rule and the procedures established pursuant hereto which affect the play of game number four.
(2) Names and definitions of elements of game number four used in this rule are to be considered generic terms used solely for purposes of this rule. In actual operation, game number four and these
elements may be given names or titles chosen by the commission.
Eff 7-3-99
Rule promulgated under: RC 111.15
Rule authorized by: RC 3770.03
Rule amplifies: RC 3770.01 to 3770.08
Lone Oak Trigonometry Tutor
...My goal is to start and keep young people passionate about learning at a young age and hopefully they will maintain that passion for life. My educational background is technology driven. I
started early with a keen interest in computers and computer science.
32 Subjects: including trigonometry, reading, geometry, algebra 1
...1. Understanding the question; 2. Identifying the known information; 3. Determining the best problem-solving strategy. I find that clarity is key to success with math and science.
8 Subjects: including trigonometry, calculus, physics, algebra 1
...Thank you and I look forward to hearing from you. Jennifer. I am a certified teacher in the UK for mathematics in grades 5-10. To become certified I had to do two elementary school placements and
teach every subject.
13 Subjects: including trigonometry, reading, writing, geometry
I have loved math for as long as I can remember. I tutored many classmates while in middle/high school. I am currently pursuing my degree in mathematics at Texas A&M Commerce with an 8 - 12
teacher's certification.
14 Subjects: including trigonometry, reading, calculus, geometry
...One of my desires is to help people to reach their fullest potential. I enjoy working with Math in general. I really like working with numbers & formulas.
26 Subjects: including trigonometry, reading, English, geometry
the potential as a function of position x is shown in the graph...
i tried to draw a smooth curve
is the electric field zero at x=0? is the max magnitude at 15 or 5cm?
The electric field is defined as \[E=-\frac{dV}{dx}\] where V is the potential, so E is the negative of the slope of the potential curve. Note that the E found from this relation is the component in the x-direction. At x=0 the curve is parallel to the x-axis, that is, V is constant there, so E=0. At x=5 the slope is negative and largest in magnitude, so E will be at its maximum magnitude at x=5 cm.
how do you determine that the curve is parallel to x axis @ x=0? it doesn't look like it on my graph but in the actual graph, it looks parallel at x=10 too
i assumed E was zero at x=0 since d(v=0) should be zero so E=(d0)/dx=0
No, V=0 doesn't mean that E=0; it is the negative of the slope of V that gives E. Yes, at x=10 the slope is also zero, so E is zero there too.
hmm still confused. i need the negative of the slope of V to find where E is zero? or to find the max magnitude? and which direction is the E at the max point?
(drawing attached) Is this what the graph of E would look like?
Ok Elica, yes: the electric field is the negative of the potential's slope. When the potential is constant, i.e. parallel to the axis, its slope is zero and consequently E is zero.
ok i understand. so is the graph i did for E right?
If V is cos then E will be sine
x=0, v=constant so E will be zero
ok so it's not right. how do i determine the direction of E at x=0 or x=10?
E is in the x direction always:)
so right either way
Yeah, when E is negative that means it's in -x direction
thank you so much
Welcome, Did you understand?
yes, that was great help
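The slope relationship the thread settles on (E = -dV/dx, zero where V is flat, largest in magnitude where V is steepest) can be checked numerically. The cosine potential, its amplitude, and the 20 cm period below are assumed stand-ins for the graph in the question.

```python
import math

def V(x):
    # Assumed potential resembling the graph: V(x) = V0*cos(k*x), x in cm
    V0, k = 10.0, 2 * math.pi / 20.0
    return V0 * math.cos(k * x)

def E_field(x, h=1e-5):
    # E_x = -dV/dx, computed with a central finite difference
    return -(V(x + h) - V(x - h)) / (2 * h)
```

With these numbers, E_field(0) and E_field(10) come out essentially zero (the flat spots of the curve), while the magnitude peaks near x = 5, matching the discussion above.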
Does time stop for a photon? Why is that a nonsensical question?
frames or planes of simultaneity do not really exist, or do you disagree?
The planes of simultaneity certainly exist, at least to the same extent that the spacetime as a whole exists. If you adopt the viewpoint that spacetime, as a whole, is a 4-dimensional geometric
object, then obviously you can "cut" particular spacelike 3-surfaces out of that 4-dimensional object that are orthogonal to particular timelike worldlines at particular events. The worldlines and
the 3-surfaces themselves are coordinate-independent geometric objects, and they are as "real" as the overall geometric object that they are parts of.
Labeling the coordinate-independent geometric objects with particular coordinates is arbitrary and doesn't affect the physics. So I would agree that "frames", in the sense of particular coordinate
labelings, "do not really exist". But the things that the coordinates label do (at least in the same sense that spacetime itself does).
For instance asking "What would the rate of a clock be if we discount the light travel time of a given Doppler shift of an object which is in relative motion to us" is interesting for professors to
ask students in a test but apart from that what is the scientific value of those questions
If the question you quoted in the above is equivalent to the question "How much proper time elapses along this timelike worldline between events A and B?", then that question seems to me to have a
direct physical meaning, since the proper time in question is directly measurable by a clock traveling along the given worldline.
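That question also has a direct computational form in flat spacetime with inertial coordinates: integrate sqrt(1 - v^2/c^2) along the worldline. A sketch in units where c = 1 (the speed profile passed in is an arbitrary illustrative input):

```python
import math

def proper_time(v_of_t, t0, t1, n=100000, c=1.0):
    """Proper time along a timelike worldline with coordinate speed
    v(t), via a midpoint-rule integral of sqrt(1 - v^2/c^2) dt."""
    dt = (t1 - t0) / n
    tau = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        v = v_of_t(t)
        tau += math.sqrt(1.0 - (v / c) ** 2) * dt
    return tau
```

For a clock moving at a constant v = 0.6c for 10 units of coordinate time, this gives 8 units of proper time, the familiar time-dilation factor of 0.8.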
Questions about "proper length" and more generally about surfaces of simultaneity are more complicated to correlate to direct physical measurements, since you first have to talk about clock
synchronization and the relativity of simultaneity. But it can still be done. Whether or not it is *useful* to do it depends on the problem. Over small distances it seems to me to be useful; for
example, it's hard to talk about local inertial frames and what happens in them without talking about proper length measurements within those frames. But it can be problematic when people try to
extend it out over large distances, such as the recent threads about what is happening "now" on Mars or in the Andromeda galaxy. In those cases I agree that trying to assign some sort of "real
meaning" to a particular surface of simultaneity causes confusion and doesn't help with understanding the physics.
AC LCR circuit - All About Circuits Forum
Originally posted by Battousai@Mar 16 2004, 01:35 AM
Yes, there definitely is. If I'm not mistaken, any weird combination of L, C, and R will result in at least one resonant frequency where the inductor and capacitor cancel each other out. For your particular
case, the overall impedance is:
Zcap//(Zinductor+R) = (1/sC)//(R+sL) = (R+sL)/[1+sRC+(s^2)LC]
Ok I seem to have hit a mental block... But there is a resonant frequency and it's most likely w=1/(LC)^0.5
Here you are:
Take for example a series network of R, L & C
The impedance looking into this network is Zin = R + sL + 1/sC (They are in series)
Now, at the resonant frequency, the impedance of this network looks purely resistive right? Well in mathematical terms this means that Im{Zin(s)} = 0
The imaginary part of the impedance equals 0 and only the Real part remains (resistive)
So let s --> jw (for sinusoids)
Zin(jw) = R + jwL + 1/(jwC)
recall, 1/jwC = -j/wC
=> Zin(jw) = R + j(wL - 1/wC)
Now Set Im{Zin(jw)} = 0 and the Imaginary part is wL - 1/wC
So we solve wL - 1/wC = 0
=> wL = 1/wC
=> w^2 = 1/LC
=> w = Sqrt(1/LC)
And that's it.
When this network experiences a sinusoidal input with frequency w = sqrt(1/LC), The voltage and current will be in phase and the magnitudes predicted as if only the R were present.
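That series-RLC result is easy to sanity-check numerically. A short Python sketch (the component values here are my own arbitrary choices, not from the thread):

```python
import math

def z_series(w, R, L, C):
    """Impedance of a series RLC network at angular frequency w."""
    return R + 1j * w * L + 1 / (1j * w * C)

R, L, C = 50.0, 1e-3, 1e-6            # arbitrary illustrative values
w0 = math.sqrt(1 / (L * C))           # predicted resonant frequency

z = z_series(w0, R, L, C)
print(abs(z.imag) < 1e-6, z.real)     # purely resistive at resonance: True 50.0
```

At w0 the inductive and capacitive reactances cancel, leaving only R, exactly as the algebra predicts.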
NOW, following along with this same method, consider a network of an L in series with R, both in parallel with a C.
The Zin looking into this network is Zin(s) = [(sL+R)(1/sC)] / [(1/sC) + sL + R]
After some simplifying and s --> jw,
Zin(jw) = (jwL + R) / [ 1 - w^2LC + jwRC]
After much algebra, I finally get an expression for Im{Zin(jw)} and I set that equal to zero. Since the denominator is real and positive for all real values of w,R,L & C, I only need to solve for
what makes the numerator zero in this expression. When I do that, I get:
w = Sqrt[ (L - R^2C) / L^2C ]
I checked the answer with a few dummy values and it checked out ok.
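For what it's worth, the formula also checks out numerically. A quick Python sketch (my own dummy values, chosen so that L > R^2*C):

```python
import math

def z_parallel(w, R, L, C):
    """Impedance of (R in series with L), all in parallel with C."""
    zl = R + 1j * w * L
    zc = 1 / (1j * w * C)
    return zl * zc / (zl + zc)

R, L, C = 10.0, 1e-3, 1e-6                     # dummy values with L > R^2*C
w0 = math.sqrt((L - R**2 * C) / (L**2 * C))    # the derived resonant frequency

print(abs(z_parallel(w0, R, L, C).imag) < 1e-6)   # True: purely resistive
```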
BUT an interesting observation: a real-valued w only exists for this resonant frequency if L > R^2*C, which would imply that I can choose L, R and C in such a way that no resonant frequency exists,
which to me says there is something wrong because, I think, any second order system has a "natural" frequency. So I probably goofed somewhere in all the algebra.
but anyhow, in general that is how one solves resonant frequency problems algebraically.
Now that I think about it, EVERY second order system has a natural frequency. Because second order systems can always be written as a polynomial of second degree in the denominator (2 poles) and the
fundamental theorem of algebra states that every polynomial of degree n (n>1) has n zeros in the complex plane.
Build a Math Tool Kit for the PSAT/NMSQT
One of the first things that every do-it-yourselfer learns is that the proper tool makes all the difference. You don’t need a saw or a screwdriver on the PSAT/NMSQT, but a couple of special
techniques help you nail the math questions. What techniques? Read on.
Plugging in
Plugging in is a great technique for solving lots of PSAT/NMSQT problems, especially those involving percents and variables. To plug in, pick a number — almost any number — and work through the
problem with that number. Imagine a problem involving percents, such as this one:
A tasteful, orange-and-purple shirt is marked down 40%, but somehow it fails to sell. The store owner lowers the price by an additional 10%. What is the total discount on this fashion-forward shirt?
(A) 25%
(B) 30%
(C) 35%
(D) 46%
(E) 50%
The answer is Choice (D). The question doesn’t explain how much the shirt cost originally (or who chose the colors). No worries: Just choose a number. For percent problems, 100 is always a good bet.
Now work through the problem.
The original price is $100. The first discount is $40, so the new price is $60. The next discount is 10% of $60, or $6. Subtract $6 from $60, and the new price is $54. The original price was $100, so
the discount is $100 – $54, or $46. That means that the total discount is 46%, also known as Choice (D).
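The plug-in arithmetic above also shows why successive discounts don't simply add (40% plus 10% is not 50% off): each discount applies to the already-reduced price. A quick Python sketch of the same computation (the function is my own illustration):

```python
def total_discount(*discounts, price=100.0):
    """Total percent off after applying percent discounts in sequence."""
    final = price
    for d in discounts:
        final *= 1 - d / 100   # each discount hits the reduced price
    return 100 * (1 - final / price)

print(round(total_discount(40, 10), 6))   # 46.0, matching Choice (D)
```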
Here’s another example:
During the hours marked on Jeannie’s calendar as PSAT/NMSQT Prep, Jeannie actually spends ½ her time watching reality TV shows. She devotes 2/3 of the remaining prep time to shredding old love
letters. During what proportion of the time Jeannie claims to be studying is she actually preparing for the PSAT/NMSQT?
(A) 1/6
(B) 1/3
(C) 1/2
(D) 2/3
(E) 5/6
The answer is Choice (A). You can solve this problem with algebra, naming the time studying as x. However, you can also plug in. You don’t know how much time Jeannie said she was studying. Her mom
checks her calendar, so chances are it’s a respectable amount. Plug in a number.
Because you’re dealing with 1/2 and 2/3, you probably want those denominators to be factors of the number you select. How about 12? Jeannie said she would study for 12 hours, but she watched TV for 6
hours. Subtract 6 from 12, and you have 6 hours left for study. Jeannie shreds her letters for 2/3 of the remaining time, or 4 hours. She has 2 hours for study remaining.
Go back to your plug-in number, 12, and you see that Jeannie spent 2/12, or 1/6, of her time studying. Your answer is Choice (A).
A variation of plugging in is backsolving. This technique is great for simple equations or arithmetic problems. When you backsolve, you plug in the answer choices to see which one works.
Generally, the answer choices are listed in size order — from the smallest to the largest number. Start with Choice (C), which falls in the middle. When you try that answer, you may realize that
Choice (C) is too big, and then you know you have to try Choices (A) and (B). Or, you may discover that Choice (C) is too small, and then you can check Choices (D) and (E).
Take a look at these example problems, each answered by backsolving:
A number is tripled, increased by 4, and then halved. If the result is 8, what was the number?
(A) 2
(B) 4
(C) 8
(D) 12
(E) 16
The answer is Choice (B). You could solve with algebra, letting x represent the original number. However, backsolving works just fine. Try Choice (C), 8, as the original number and see what happens:
8 tripled is 24, which becomes 28 when increased by 4, and then 14 when halved.
Fourteen is too big, so try an answer that’s smaller than Choice (C); Choice (B) is a good next try. If the original number is 4, it becomes 12 when tripled, 16 when increased by 4, and then 8 when
halved — the result you want! The correct answer is Choice (B).
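Backsolving is essentially a brute-force loop, which makes it easy to mirror in code. A sketch of the same problem (the function name is my own):

```python
def transform(n):
    """Triple the number, add 4, then halve, as the problem describes."""
    return (3 * n + 4) / 2

choices = {"A": 2, "B": 4, "C": 8, "D": 12, "E": 16}

# Try each answer choice until the operations produce the target, 8.
answer = next(k for k, n in choices.items() if transform(n) == 8)
print(answer)   # B
```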
If f(x) = x^2 – 3x – 2, what value of x results in f(x) = 2?
(A) 1
(B) 2
(C) 3
(D) 4
(E) 5
The answer is Choice (D). You can answer this question by creating a quadratic equation and then factoring, but it may be easier for you to backsolve. As usual, start with Choice (C) and go from
there. If x is 3, you get f(3) = (3)^2 – 3(3) – 2 = 9 – 9 – 2 = –2. Uh-oh, –2 is too small. Try a larger answer, Choice (D). If x is 4, you get f(4) = (4)^2 – 3(4) – 2 = 16 – 12 – 2 = 2, the answer
you’re looking for!
Sketching a diagram
You know those annoying problems where one friend is driving west and the other is on a train heading east, both moving at different speeds? (Why doesn’t everyone just stay home? But back to math.)
You may find that a little sketch allows you to see the answer or at least the route to the answer. Here’s an example:
Stan and Evan leave school to bicycle home. Both boys ride at a rate of 15 miles per hour. Evan rides directly east for 12 minutes to get home, and Stan rides directly south for 16 minutes to get
to his home. How many miles apart are Evan’s and Stan’s homes?
(A) 4
(B) 5
(C) 10
(D) 15
(E) 20
The answer is Choice (B). Diagram time! Make sure you label your diagram so you get a good sense of what’s going on in the problem. But first, determine how far each of the boys live from school.
To get home, Evan rides for 12 minutes, or 1/5 of an hour, meaning that he travels (15 miles per hour) x (1/5 hour) = 3 miles. The formula is (rate) x (time) = distance. Stan rides for 16/60 of an
hour, so his distance is (15 miles per hour) x (16/60 hour) = 4 miles.
Hopefully you noticed that you have a right triangle, which means that you can use the Pythagorean theorem. Recall that a^2 + b^2 = c^2, where a and b are the legs of the triangle and c is the
hypotenuse. In this case, 3^2 + 4^2 = 5^2, so Stan and Evan live 5 miles apart, Choice (B).
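The rate-times-time arithmetic and the Pythagorean step can be bundled into a few lines of Python (a sketch of the worked solution above):

```python
import math

rate = 15.0                      # miles per hour, for both riders

evan = rate * (12 / 60)          # 12 minutes riding east
stan = rate * (16 / 60)          # 16 minutes riding south

apart = math.hypot(evan, stan)   # sqrt(evan**2 + stan**2), the Pythagorean theorem
print(round(evan, 9), round(stan, 9), round(apart, 9))   # 3.0 4.0 5.0
```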
Keeping it real
The PSAT/NMSQT doesn’t always give you real-world problems (not counting its role in ruining your life), but sometimes you can use your knowledge of how the world works to help you on the exam. If
you’re solving a problem involving decreasing prices, you know that you’re never going to get more than a 100 percent reduction. No store pays you to haul the stuff away!
Nor will you find that 110 students are studying Spanish if the problem tells you that the school has only 50 kids. Keep your eye on reality. If your answer doesn’t fit, go back and try again.
Using the booklet
Only your answer sheet is graded, but your question booklet is actually a valuable tool for PSAT/NMSQT math. As you read each question, circle key ideas (integers, largest, less than, and other such
words). The little circles help you focus on the question’s important elements.
Also, use the blank space around each question to jot down the calculations you’re doing to arrive at an answer. If you come up with –12 and none of the answer choices matches that number, you can
check your steps to see if you wrote a 2, for example, when you intended to write 4.
If you’ve spent more than a minute on one problem, even if you aren’t done with finding the answer, you should probably move on to the next. If you have time, you can return to that problem. Having
the steps written in your booklet helps you jump in where you left off.
Generalized Functions
Hey mbempeni. What definition do you have of a generalized function?
Thank you for your answer. I mean the Dirac δ function.
Can you use the limiting function of the normal distribution (as variance goes to zero) to prove this?
@chiro Can you explain what you mean? Thanks for your interest!
I mean the limiting formulation, like these: Dirac delta function - Wikipedia, the free encyclopedia. However, I think a better way would be to use Laplace transforms to do this: have you used these?
No, I haven't. Could you help me to solve it?
In Laplace transforms you have the shift theorem, which is referenced on the wiki site: Laplace transform - Wikipedia, the free encyclopedia. Consider transforming both to Laplace space (frequency
domain) and use the time-scaling and shift theorems to show that they are both equal Laplace transform functions. Once you show that they are equal, then it follows that the functions are also equal.
the first resource for mathematics
New York, NY: McGraw-Hill. xiv, 416 p. $ 44.95 (1987).
For the reviews of the first two editions (1966, 1974) see Zbl 0142.01701 and Zbl 0278.26001.
According to the preface: This third edition contains an entirely new chapter on differentiation. The basic facts about differentiation are now derived from the existence of the Lebesgue points which
in turn is an easy consequence of the so-called weak type inequality that is satisfied by the maximal functions of measures on Euclidean spaces. This approach yields strong theorems with minimal
effort. Even more important is that it familiarizes students with maximal functions, since these have become increasingly useful in several areas of analysis.
Also large parts of Chapters 11 and 17 were rewritten and simplified. Several smaller changes have been made in order to improve certain details.
00A05 General mathematics
26-01 Textbooks (real functions)
30-01 Textbooks (functions of one complex variable)
46-01 Textbooks (functional analysis)
Posts by JT
Total # Posts: 64
While taking a rest on a tree branch, a daring cowboy sees a wild horse running towards the tree he was on. He wants to land on the horse's back when it passes under the tree. If the horse is running
at a constant velocity of about 2 m/s and the vertical distance between the co...
A contestant projects a coin with a speed of 7 m/s at an angle of 60 degrees to the horizontal. When the coin leaves his hand, the horizontal distance between the coin and the dish is 2.8 m. The coin
lands in the dish. Calculate the horizontal component of the initial velocity o...
math pre calc
solve for X: matrices 2 0 5 4 -1 3 = 2 6 x -7
math pre calc
Gaussian elimination: x + y + z = -2, x - y + 5z = 22, 5x + y + z = -22
math pre calc
At the Pittsburgh zoo, children ride a train for 25 cents, adults pay $1, and seniors pay $0.75. On a given day, 1400 passengers paid a total of $740 for the rides. There were 250 more children riders
than adults and senior riders combined. Find the number of each type of rider. s...
im in an online school :p
Funny, that's a question on my last math test
Advanced Alegebra
How do I solve the following problems? My homework was to correct a quiz that I failed, but I do not know how to solve these problems... please help! Solving systems of equations using elimination: 7.)
10x - 10y = 0 5x - 4y = 8 Answer (8,8) I got the right answer because this on...
Suppose you got 75 items correct on a 100-item, six-alternative, multiple-choice exam. What would your score be after we corrected for guessing?
algebra 2
how can you tell when y is a function of x on a graph
algebra 2
If f(x) = int(x), find the given functional value. 1. f(-9.1)
c_hot = (4.184J/g*K)(18mL)(1g/mL) c_cold = (4.184J/g*K)(19mL)(1g/mL) Use: 0 = c_hot(T_f - T_initial,hot)+ c_cold(T_f - T_initial,cold) Solve for T_f and remember to convert temperature back to C from
Kelvin and vice versa. Best of luck, ChemVantage Loather
Two objects are connected by a light string that passes over a frictionless pulley, and two objects are on either side of the string. There is a hanging object off the vertical side of the incline
weighing 10.0 kg. The 3.50 kg object lies on a smooth incline of angle 37.0°...
how would I simplify 1089b^7t^5/891t^2b?
Explain the biological, emotional, cognitive, and behavioral components of postpartum depression
If the back of the truck is 1.5 m above the ground and the ramp is inclined at 24°, how much time do the workers have to get to the piano before it reaches the bottom of the ramp?
Strictly speaking, perfect competition has never existed and probably never will. Then why study it?
Managerial Economics
Strictly speaking, perfect competition has never existed and probably never will. Then why study it?
A car moves from a point located 26.0 m from the origin and at an angle of 35.0° from the x-axis to a point located 56.0 m from the origin and at an angle of 60.0° from the x-axis in 2.50 s. What is
the magnitude of the average velocity of the car?
3rd grade
There are 32 more apples than oranges at a fruit stand. How many apples and oranges could there be?
Physics Please Answer
1. To make a secure fit, rivets that are larger than the rivet hole are often used, and the rivet is cooled (usually in dry ice) before it is placed in the hole. A steel rivet 1.871 cm in diameter is
to be placed in a hole 1.869 cm in diameter at 20 degrees Celsius. To what temp...
1. To make a secure fit, rivets that are larger than the rivet hole are often used, and the rivet is cooled (usually in dry ice) before it is placed in the hole. A steel rivet 1.871 cm in diameter is
to be placed in a hole 1.869 cm in diameter at 20 degrees Celsius. To what temp...
A brass lid screws tightly onto a glass jar at 20 °C. To help open the jar, it can be placed into a bath of hot water. After this treatment, the temperature of the lid and the jar are both 60 °C. The
inside diameter of the lid is 8.0 cm at 20 °C. Find the size of the gap (dif...
Your company's sales are 50,000 units. The unit variable cost is $12. Your markup percent on sales is 40% and your fixed costs are $100,000. 1. What is your profit / loss?
Can an ANOVA be compared to a blood test?
I was thinking B
Which passage is correctly punctuated? a) In the middle of the night. My cat leaped onto my stomach. b) In the middle of the night, my cat leaped onto my stomach. c) In the middle of the night; my cat
leaped onto my stomach. d) My cat leaped onto my stomach. In the middle of the ...
So its B
Choose the sentence below that has no errors in pronoun case. a) Sharon and him drive to Las Vegas at least once a year. b) For Nancy and he, the movie was a boring waste of time. c) Our neighbor made
a casserole for Chad and me. d) Nick said that he would meet my mother and I at...
Which of the following word groups is punctuated correctly? a)It rained every day of our vacation nevertheless, we had a wonderful time. b)It rained every day of our vacation, nevertheless, we had a
wonderful time. c)It rained every day of our vacation, nevertheless; we had a ...
Why are these algebraic expressions used and explain the logical error: Since a = b, then a^2 = a * b. Subtract b^2 from both sides: a^2 - b^2 = a * b - b^2. Factor both sides:
(a + b)(a - b) = b(a - b). Divide both sides by (a - b): (a + b) = b. Since a ...
explain the logical error: Square both sides to get a^2 = b^2
Find the logical error: a^2 = b * b
8a^2 + 40ab + 50b^2 = 2(4a^2 + 20ab + 25b^2) = 2(2a + 5b)^2 How is this a perfect square?
Are these complete? m^2 + 4mn + 4n^2 = (m + 2n)(m + 2n) 3x^3y^2 - 3x^2y^2 + 3xy^2 = 3xy(x^2y - xy + y) Thank you
3m^3 + 27m = 3m(m^2 + 9) = 3m(m + 3)(m - 3) Am I missing something? Is this complete?
Factor polynomial completely 8a^2 + 40ab + 50b^2 =
Factor polynomial completely 8a^2 + 40ab + 50b^2 =
3m^3 + 27m = 3m(m^2 + 9) = 3m(m + 3)(m - 3) Is it not done? Is this a cube problem?
3m^3 + 27m = 3m(m^2 + 9) = 3m(m + 3)(m - 3)
8a^2 + 40ab + 50b^2 =
a^2 + 2a -24
Factor polynomial completely ax - 2a - 5x +10
Solve equation 2x^2 + 5x - 12 = 0
w^3 + 5w^2 - w = 5
Solve the equation (x-3)^2+(x+2)^2=17
Solve the equation (x-3)^2+(x+2)^2=17
The perimeter of this rectangular portion needs to be 14 yards and the diagonal is 5 yards. Can you help Mike determine the length and width of this new portion? Mike requested that you show him how
you arrive at the answers.
JT has $16000 saved. In 2 years' time his investment grew to $25000. Can you find the annual interest rate of his return by solving the following equation for him: 16000(1 + x)^2 = 25000
a road crew has 3/4 ton of stone to divide evenly among four sidewalks. How much stone does the crew use for each sidewalk?
7th grade
I have to write two paragraphs discussing the problem of winning support from other nations for the Patriot cause during the Revolutionary War. I don't know what the problem was. The French, Spanish,
and Germans were eager to fight against the English.
I would just like to know how each student can get their grades to 3.8 by the time each reaches 90 credits. Thanks--
Yeah, I am going to say they all want to raise their grade to 3.8. Is there some kind of matrix solution that these numbers can be plugged into to get an answer?
Students at ACC must earn 90 credits to obtain an Associate's degree. Three students find that they all have a GPA of 3.3 even though they do not have the same number of credits. The students hope to
increase their GPAs to a 3.8 by the time they have earned 90 credits. On...
Can you help me to find the product of -4x^2 and x^3 + 2x^2 - 5x + 3... and the solution to the equation x^2 + 30x = 1000? I came up with -20 and 50 but I think it is wrong. Also can you help me
to do the factorization of 6x^2 - 2x - 20... Thank you, please help with solutions
8th grade
can u help me to find the product of -4x^2 and x^3 + 2x^2 - 5x + 3 and the solution to the equation of x^2 + 30x = 1000...I came up with -20 and 50 but i dont think it is right..can u help also can
you help me to do the factorization of 6x^2 - 2x - 20.....Help please help
That's because you found pOH; subtract that from 14.
How does the use of antibiotics result in the evolution of resistant strains of bacteria?
statistics...Please help
A sample of 12 measurements has a mean of 24 and a standard deviation of 4.5. Suppose that the sample is enlarged to 14 measurements, by including two additional measurements having a common value of
24 each. Find the standard deviation of the sample of 14 measurements.
Actually, I just noticed that was for the -x direction, so change the sign on the acceleration: 35*sin(11)+(35)(-1)=Fx 35*sin(11)+(35)(-2)=Fx
This is kind of a late answer, but still... 35*sin(11)-Fx=0 This works for a = 0 35*sin(11)+ma-Fx=0 More complete answer... (a!=0) So, for an increasing rate of a = 1 m/s^2 This is going in the negative
direction, so 35*sin(11)+(35)(1)=Fx And for a = 2 m/s^2 35*sin(11)+(35)(2)=Fx Crunch in...
Type Sketch - Cynic
"There are people in the world who see things as they are and deal with them accordingly. These people are called cynics by those who do not possess the same capacity." - George Bernard Shaw
Counting Belief Propagation
Kristian Kersting, Babak Ahmadi and Sriraam Natarajan
In: UAI 2009, 18-21 Jul 2009, Montreal, Canada.
A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical
structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such
additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of clusternodes and clusterfactors, corresponding to sets of nodes and factors that
are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that
counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and boolean model counting, and that significant efficiency gains are obtainable, often by orders of
magnitude.
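The compression step described above can be sketched with the color-passing idea the paper builds on: nodes start with colors determined by their evidence, then are repeatedly recolored by their own color plus the multiset of their neighbors' colors until the partition stabilizes; nodes sharing a final color become one clusternode. The toy factor graph below is my own illustration, not an example from the paper:

```python
def compress(neighbors, evidence):
    """Group factor-graph nodes that are indistinguishable given the evidence.

    neighbors: dict mapping each node to the list of its neighbors
    evidence:  dict mapping observed nodes to their observed values
    Returns a dict mapping each node to a cluster id.
    """
    color = {v: hash(("init", evidence.get(v))) for v in neighbors}
    while True:
        new = {v: hash((color[v], tuple(sorted(color[u] for u in neighbors[v]))))
               for v in neighbors}
        refined = len(set(new.values())) > len(set(color.values()))
        color = new
        if not refined:          # partition stopped refining: converged
            break
    ids = {c: i for i, c in enumerate(sorted(set(color.values())))}
    return {v: ids[color[v]] for v in neighbors}

# Toy symmetric factor graph: variable A shares factor f1 with B and
# factor f2 with C. With no evidence, B and C are interchangeable.
nbrs = {"A": ["f1", "f2"], "B": ["f1"], "C": ["f2"],
        "f1": ["A", "B"], "f2": ["A", "C"]}
clusters = compress(nbrs, {})
print(clusters["B"] == clusters["C"])   # True: B and C form one clusternode
print(clusters["A"] != clusters["B"])   # True: A sits in its own cluster
```

Running BP on the compressed graph, with counts recording how many original nodes each cluster represents, is the second step the abstract describes.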
Generators in the sense of Freyd and Kelly
I am stuck trying to interpret a definition in the paper "Categories of continuous functors" by P. Freyd and M. Kelly.
They say:
A category $\cal A$ with a proper factorization system $(\mathfrak E, \mathfrak M)$ [i.e. a factorization system where the left class is contained in the class $Epi$ and the right class in the
class $Mono$ of monic arrows] has a generator when it has a small full subcategory $\cal G$ such that the family of all morphisms $G\to A$ with domain $G\in\cal G$ is in $\mathfrak E$. If $\cal
A$ admits coproducts, then $\cal G$ is a generator iff the canonical arrow $\coprod_{G\to A}G\to A$ lies in $\mathfrak E$ for any $A\in\cal A$.
Edit: Proving that the two conditions are equivalent in presence of coproducts seemed to be easy but trying to reproduce the argument I noticed that the diagram I used wasn't commutative. I wanted to
say that the diagram
gives, by the lifting property, the desired arrow to show that each $G\to A$ lies in $\mathfrak E = {}^\perp\mathfrak M$.
What seems incredible to me is that this notion is the right one to capture the notion of generator, or that of separator, in $\cal A$.
In fact, one of the main point of Freyd-Kelly's paper is that the two notions are not equivalent (as they are stated on the nlab or wikipedia, if I remember well): in a finitely complete -or
discrete-cocomplete- category a generator separates arrows; with a particular choice of the factorization system, a separator is a generator.
My problem is that if I interpret "the family of all morphisms $G\to A$ with domain $G\in\cal G$ is in $\mathfrak E$" in the only possible sense, I can't obtain what I expected: "each arrow $*\to
X$ is an epi in $\bf Set$" is a blatantly false statement, even if in that case the terminal object separates arrows.
Can you help me? I feel I'm lost in something easy, but I don't see where.
If Zhen Lin answered your question to your satisfaction, maybe you could accept his answer? I say this partly because the bots on this site will periodically bring back old questions to the top of
the stack if they think they haven't been answered. – Todd Trimble♦ Jan 28 at 16:26
Sure! sorry (to Zhen Lin and to everybody else)! – tetrapharmakon Jan 28 at 19:09
1 Answer
Given an orthogonal (resp. weak) factorisation system $(\mathcal{E}, \mathcal{M})$, we can straightforwardly define when a sink $( U_i \to V : i \in I )$ (where the morphisms $U_i \to V$
may be repeated) is in the left class $\mathcal{E}$: this happens when, for each morphism $X \to Y$ in $\mathcal{M}$, given a commutative square of the form below for each $i$ in $I$,
$$\begin{array}{ccc} U_i & \rightarrow & X \\ \downarrow & & \downarrow \\ V & \rightarrow & Y \end{array}$$
there is a unique (resp. at least one) morphism $V \to X$ that makes all the evident diagrams commute simultaneously. In the presence of sufficiently large coproducts, this happens if and only if the amalgamation $\coprod_{i \in I} U_i \to V$ is in $\mathcal{E}$.
Sure, this is exactly the remark at the end of page 177 in FK paper. But my problem is different: FK say that a generator for a category is the gadget I defined before, and under
reasonable hypotheses a generator/generating subcat is the same as a separator/separating subcat. But this is false! The point is a separator in Set, and nevertheless it is not true
that any arrow $G=*\to A$ is an epi. On the other hand it is true that the arrow $k_A\colon \coprod_{G\to A}G\to A$ is an epi; where am I lost? – tetrapharmakon Dec 29 '13 at 14:30
Please notice that I edited the OP. Thank you for your time! – tetrapharmakon Dec 29 '13 at 14:37
You have misunderstood something: we are not claiming that each individual morphism is in $\mathcal{E}$, but that they are "jointly" in $\mathcal{E}$. Your "proof" is also mistaken. –
Zhen Lin Dec 29 '13 at 14:39
The problem is precisely this, I misunderstood something as I'm left with something which is blatantly false: but how could one interpret the sentence "the family of all morphisms $G\to
A$ with domain $G\in\cal G$ is in $\mathfrak E$" in a different manner? The "family" taken as a single set? But it is not an object of $\cal A$, it doesn't make any sense. –
tetrapharmakon Dec 29 '13 at 14:45
Ah! I understood. FK define a precise notion of a family "being in $\mathfrak E$", which is precisely the one you are stating. Sorry, this question was boring and ill-posed. –
tetrapharmakon Dec 29 '13 at 14:51
Are there MDPs (Markov Decision Processes) which have a non-deterministic optimal policy?
I'm working on Markov Decision Processes and I have not yet found an example of an MDP that has a stochastic (non-deterministic) optimal policy. Are there MDPs that have a stochastic optimal policy,
or is it shown that an optimal policy is always deterministic?
If a stochastic policy exists, is it shown that some algorithms (like Q-learning) converge to this policy?
pr.probability markov-chains stochastic-processes
2 Answers
If there is an optimal policy, there is a deterministic optimal policy. Here is a sketch of the argument:
Start with an optimal policy within the class of deterministic optimal policies. By the one-deviation principle, you only have to check whether you can gain by randomizing after a
certain history of the process. If you randomize over two actions that do not lead to the same payoff, you could gain by putting more weight on the action with a higher payoff. So all
actions will give you the same payoff and you might as well choose a deterministic one. Now a standard result says that in MDPs, such a history-dependent strategy cannot improve on all
Markovian strategies. Therefore, an optimal, deterministic Markovian strategy exists.
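A toy illustration of the claim (my own example, not from the original answer): run value iteration on a small MDP and read off the greedy policy; it is deterministic and attains the optimal values, so randomization buys nothing.

```python
# Tiny two-state MDP with deterministic transitions, for illustration only.
GAMMA = 0.9
STATES = (0, 1)
ACTIONS = ("stay", "go")

def step(s, a):
    """Return (next_state, reward) for taking action a in state s."""
    if a == "stay":
        return s, (1.0 if s == 1 else 0.0)
    return 1 - s, 0.5

def q(s, a, V):
    """One-step lookahead value of action a in state s."""
    nxt, r = step(s, a)
    return r + GAMMA * V[nxt]

V = {s: 0.0 for s in STATES}
for _ in range(500):                      # value iteration to (near) convergence
    V = {s: max(q(s, a, V) for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: q(s, a, V)) for s in STATES}
print(policy)   # {0: 'go', 1: 'stay'} -- one action per state, no randomizing
```

Any stochastic policy could only mix over actions whose Q-values are already maximal, which gives the same expected value as the deterministic greedy choice.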
Thank you for your answer. So, if an optimal policy exists, is it always deterministic? If we model a game which has no pure Nash Equilibrium and only a mixed Nash Equilibrium, the
policy which optimizes the long-run reward cannot be deterministic (because it would mean that a pure NE corresponding to this policy exists), so can't we say that this policy is a
stochastic optimal one for this MDP? Or does it mean that there is no optimal policy for a game without a pure NE which is modeled as an MDP? – Lamine Nov 3 '10 at 15:35
Well, there might exist optimal stochastic policies, but they are essentially randomizations over deterministic optimal policies. There exists no stochastic policy that is strictly
better than every deterministic policy. In a mixed strategy equilibrium, a player is indifferent between all strategies in the support of her mixed strategy. – Michael Greinecker Nov 3
'10 at 21:22
I finally found the proof of this in "Markov Decision Processes -- Discrete Stochastic Dynamic Programming" by Martin L. Puterman (John Wiley and Sons). It is proved that if the reward
function is deterministic, the optimal policy exists and is also deterministic. But I don't know if this result can be generalized to MDPs with stochastic reward functions.
Hi, I know this was posted a long while ago, but I'd like to know exactly which chapter has the proof that a deterministic reward function implies the existence of an optimal deterministic
policy. I'm actually trying to tackle problems where the reward function is non-deterministic (a simulation where we are using SARSA(λ)) to learn a policy. But I'm thinking that it might
not be possible to find a deterministic policy? Thanks for your help – user15939 Jun 22 '11 at 13:03
It depends on whether you are looking for maximal discounted total reward or maximal average total reward. For the first, the beginning of chapter 6 progressively proves the existence of a
deterministic optimal policy. The first theorems and propositions prove the existence of an optimal policy under some assumptions. Then theorems 6.2.9 and 6.2.10 prove the existence of an
optimal deterministic policy under some reasonable assumptions. For instance, theorem 6.2.9 (p. 154) states that if an optimal policy exists, then an optimal deterministic policy also
exists (it may be the same or not). – Lamine Jun 24 '11 at 13:42
Theorem 6.2.10 asserts that if the set of available actions is finite for each state, then an optimal deterministic policy exists. However, you have to read (at least) all the beginning of
the chapter (and some previous chapters) to understand the proof. Of course, these theorems are valid under the assumptions provided at the beginning of the chapter (the set of states is
finite or countable, rewards are bounded, the discount factor is $0 \leq \lambda < 1$, and rewards and transition probabilities don't vary across decision epochs). – Lamine Jun 24 '11 at
If you are looking for an optimal policy under the other criteria, there are equivalent theorems and propositions at the beginning of chapters 7 and 8. Actually, even if the reward function
is random (but with a fixed distribution), there is a deterministic optimal policy. The only case where there is a stochastic optimal policy but not a deterministic one is when the
distribution of the reward function varies (for instance, if there are two players learning at the same time in a game without a pure Nash equilibrium). – Lamine Jun 24 '11 at 13:54
Which computer algebra system should I be using to solve large systems of sparse linear equations over a number field?
This is related to Noah's recent question about solving quadratics in a number field, but about an even earlier and easier step.
Suppose I have a huge system of linear equations, say ~10^6 equations in ~10^4 variables, and I have some external knowledge that suggests there's a small solution space, ~100 dimensional. Moreover,
the equations are sparse; in fact, the way I produce the equations gives me an upper bound on the number of variables appearing in each equation, ~10. (These numbers all come from the latest instance
of our problem, but we expect to want to try even bigger things later.) Finally, all the coefficients are in some number field.
Which computer algebra system should I be using to solve such a system? Everyone knows their favourite CAS, but it's often hard to get useful comparisons. One significant difficulty here is that even
writing down all the equations occupies a big fraction of a typical computer's available RAM.
I'll admit that so far I've only tried Mathematica; it's great for most of our purposes, but I'm well aware of its shortcomings, hence this question. A previous slightly smaller instance of our
problem was within Mathematica's range, but now I'm having trouble.
(For background, this problem is simply finding the "low weight spaces" in a graph planar algebra. See for example Emily Peters' thesis for an explanation, or our follow-up paper, with Noah Snyder
and Stephen Bigelow.)
computer-algebra planar-algebras linear-algebra number-fields
If you feel up to it, a minimal description of the syntax required to import a large system of sparse linear equations into your preferred CAS would be great! – Scott Morrison♦ Oct 25 '09 at 20:16
8 Answers
It probably goes without saying that solving linear systems over number fields is far from being among the most important user-level functionality of the main commercial
computer algebra systems. That said, I do know that this functionality in Maple was written by someone with a specific interest in this sort of thing. If you have access to a recent
version of Maple, take a look at the help page: ?SolveTools,Linear. 10^6 is pretty big, but it might still be within reach of the solver.

While I do not know much about how Mathematica does these things, I do know that in Maple sparse linear systems are more efficiently solved as polynomials (rather than sparse matrices),
since the underlying polynomial data structure turns out to be well suited to sparse system solving.

If Maple does not work for you (or you do not have access to it), this strikes me as exactly the sort of problem that MAGMA might be targeting.
This sounds like a problem you'll really need to run through a strong solver in C, and if you want it to pull in objects easily it'd probably be best to use a Perl C-wrapper with MPI and
Parallel LAPACK. I'll look into easily accessible sparse matrix solvers that are modular enough to change the inputs (oy, that's going to be interesting, but I'll take a look). Honestly,
at numbers that high I don't see any reasonably sized computer on which Matlab or Mathematica can solve it.

Will be back with results of the search soon.
LinBox may be quite helpful, as it contains routines for finite fields and has an interface to Maple (which is significantly easier to use than LinBox as a C++ library).
Hope that helps!
For a problem of this size, you should consider which algorithm to use before you consider which computer algebra system.
In floating point arithmetic, there is an excellent algorithm for solving a sparse system of linear equations. Even for dense matrices it is more robust than Gaussian elimination; for
sparse matrices it is better still. This is the conjugate gradient method. It is available in certain packages and certain computer algebra systems, but it is also not very hard to
implement from scratch. For instance, to me it feels simpler to code conjugate gradient than to code Gaussian elimination.
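To illustrate the point about how little code this takes, here is a minimal NumPy sketch of conjugate gradient for a symmetric positive-definite system. It is illustrative only (dense, not tuned for 10^6 equations); for a rectangular sparse system one common workaround is to apply it to the normal equations AᵀAx = Aᵀb.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A.
    For a general rectangular system, apply this to the normal
    equations (A^T A) x = A^T b."""
    n = len(b)
    if max_iter is None:
        max_iter = 10 * n
    x = np.zeros(n)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        # Update the search direction to stay A-conjugate to previous ones.
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In exact arithmetic this converges in at most n steps; in floating point it is usually stopped by the residual tolerance much earlier for well-conditioned sparse systems.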
If you find floating-point solutions to high precision, there are algorithms to convert them to elements of a number field with bounded complexity. (See the Inverse Symbolic Calculator,
etc.) Some of these also exist in computer algebra systems. If you use one of these solvers, you can then check that it is an exact solution to the original system of equations.
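When the entries happen to be rational, the float-to-exact step can be sketched with the standard library alone; recovering genuine algebraic numbers in a larger number field needs integer-relation algorithms such as PSLQ (the machinery behind tools like the Inverse Symbolic Calculator). Either way, any recovered value should be verified by substituting back into the exact system.

```python
from fractions import Fraction

def rationalize(x, max_den=10**6):
    # Best rational approximation to the float x with denominator <= max_den,
    # found via continued fractions (Fraction.limit_denominator).
    return Fraction(x).limit_denominator(max_den)

print(rationalize(1 / 3))   # 1/3
print(rationalize(2 / 7))   # 2/7
```

The denominator bound encodes the "bounded complexity" assumption: if the true solution has larger denominators than `max_den`, the guess will simply be wrong, which the back-substitution check catches.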
If the solution space is low-dimensional but not 1-dimensional, then you can run conjugate gradient repeatedly after adding constraints to eliminate the kernel found so far. Conjugate
gradient is an algorithm to minimize a positive-definite quadratic form. (If the system is not positive definite symmetric, you have to square or make some other change to fix that.) Thus
it still works if you add constraints.
If Magma can do this, you may well look at Sage, which is open source, remarkably powerful, and with support for sparse linear algebra.
Matlab might be a better choice. Be sure to use the sparse matrix functionality rather than just regular matrices. I haven't used Matlab for this purpose but I've seen people using it for
large systems.
Also, have you been using sparse matrices in Mathematica?
http://reference.wolfram.com/mathematica/howto/WorkWithSparseMatrices.html
This might improve the performance.
Embarrassingly, it occurs to me that I might not have tried using Mathematica's sparse matrices. I've actually been manipulating the equations themselves, using an ad-hoc mixture of
Mathematica's built-in "Solve" command and a few hand-written tricks. – Scott Morrison♦ Oct 26 '09 at 3:51
How does Matlab cope with coefficients in number fields? My minimal experience with Matlab didn't get that far. – Scott Morrison♦ Oct 26 '09 at 4:08
My Matlab knowledge is not huge, but my best guess would be that awareness of number fields is not a Matlab sort of functionality, as I would think it requires some sense of symbolic
manipulation. Noah's question seemed to suggest that involving number fields was an optional idea directed toward trying to speed things up. However, I think porting from Mathematica to
Matlab might entail some speedup. The black-box nature of Mathematica makes it harder (at least for me) to do optimizations in Mathematica than in Matlab. – Kim Greene Oct 26 '09 at 4:24
It's absolutely essential that we get exact answers. If matlab can't cope with number fields, it's useless to us. To clarify, in Noah's question it was also essential to end up with exact
answers (although I admit that our current technique is to solve things numerically then approximate by algebraic integers). Noah's point was that we have external knowledge that the
solution still lies in the same number field, even though we're solving quadratics and the field isn't algebraically closed! He was wondering if it's possible to translate this knowledge
into some sort of speedup. – Scott Morrison♦ Oct 26 '09 at 16:08
One idea is that it's easier to write tighter code in Matlab and therefore, you could get a faster but still numerical answer. The second idea is to optimize the Mathematica code. – Kim
Greene Oct 27 '09 at 3:39
A system that large is large enough that the speedup from skipping over all the layers of abstraction in most CAS packages is worth the trouble of writing your own custom code to solve the
problem.

If you are OK with a floating-point solution, ScaLAPACK or another linear algebra package that has algorithms and data structures for sparse matrices would be a lot better than (P)LAPACK,
which as far as I know uses only dense matrix data structures.
Python's scipy package also has sparse data structures and wrappers to call UMFPACK, and the syntax is easy enough that it wouldn't be significantly harder to use than a CAS program. It
would be easier than writing a custom Fortran/C program straight up.
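As a sketch of that scipy route (purely numerical, so it sidesteps the number-field requirement; the matrix and values below are made up):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Toy 3x3 square sparse system. The real problem would be a ~10^6 x 10^4
# rectangular system, handled with a sparse least-squares or null-space
# computation rather than a square direct solve.
row = np.array([0, 0, 1, 2, 2])
col = np.array([0, 2, 1, 0, 2])
val = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
A = csc_matrix((val, (row, col)), shape=(3, 3))
b = np.array([1.0, 6.0, 4.0])

x = spsolve(A, b)   # sparse LU under the hood (SuperLU; UMFPACK if available)
assert np.allclose(A @ x, b)
```

The triplet (row, col, val) construction mirrors how one would stream ~10 nonzeros per equation into the matrix without ever materializing a dense array.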
For solutions over the reals, there is an algorithm called the complete orthogonal decomposition (COD) that uses rank-revealing QR factorization (see, e.g., Golub's Matrix Computations).
This lets you separate out and project away the kernel of the problem, leaving behind only the part of the problem that lies in the range. I don't know if there is an analogue for arbitrary
fields, but since you appear to have a problem with a small rank, it may be worth your trouble to look into this.
I think here is someone who does related research:
http://www4.ncsu.edu/~kaltofen/bibliography/index.html
Class: gjr
Filter disturbances with GJR model
[V,Y] = filter(model,Z)
[V,Y] = filter(model,Z,Name,Value)
[V,Y] = filter(model,Z) filters disturbances, Z, to produce conditional variances and responses of a univariate GJR(P,Q) model.
[V,Y] = filter(model,Z,Name,Value) filters disturbances using additional options specified by one or more Name,Value pair arguments.
Input Arguments
model GJR model object, as created by gjr or estimate. The input model object cannot have any NaN values.
Z numObs-by-numPaths matrix of disturbances, z[t], used to drive the innovation process, ε[t]. For a variance process, the innovation process is given by ε[t] = σ[t]·z[t], where σ[t]^2 = v[t] is the conditional variance.
As a column vector, Z represents a single path of the underlying disturbance series. As a matrix, Z represents numObs observations of numPaths paths of the underlying disturbance series.
filter assumes that observations across any row occur simultaneously. The last row contains the most recent observation.
│ Note: NaNs indicate missing values. filter removes these values from Z by listwise deletion. That is, any row of Z with at least one NaN is removed, reducing the effective sample size. │
Name-Value Pair Arguments
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several
name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'Z0' Presample disturbances, providing initial values for the input disturbance series, Z. If Z0 is a column vector, then filter applies it to each output path. If Z0 is a matrix, then it must have
at least numPaths columns. If the number of columns exceeds numPaths, then filter uses only the first numPaths columns.
Z0 must have at least model.Q rows to initialize the conditional variance model. If the number of rows in Z0 exceeds model.Q, then filter uses only the most recent observations. The last row
contains the most recent observation.
Default: Necessary presample observations are set equal to an independent sequence of standardized disturbances drawn from the distribution in model.
'V0' Positive presample conditional variances, providing initial values for the model. If V0 is a column vector, then filter applies it to each output path. If V0 is a matrix, then it must have at
least numPaths columns. If the number of columns exceeds numPaths, then filter uses only the first numPaths columns.
V0 must have at least max(model.P,model.Q)rows to initialize the variance equation. If the number of rows in V0 exceeds the number necessary, then filter uses only the most recent observations.
The last row contains the most recent observation.
Default: Necessary presample observations are set equal to the unconditional variance of the process.
│ Notes │
│ │
│ ● NaNs indicate missing values. filter uses listwise deletion to remove missing values in the presample data, Z0 and V0. That is, Z0 and V0 are merged into a composite series, and any row │
│ of the combined series with at least one NaN is removed. │
│ │
│ ● filter assumes you synchronize presample data such that the last (most recent) observation of each presample series occurs simultaneously. │
Output Arguments
V numObs-by-numPaths matrix of conditional variances of the mean-zero, heteroscedastic innovations associated with Y.
Y numObs-by-numPaths time series matrix of response data. Y usually represents a mean-zero, heteroscedastic time series of innovations with conditional variances given in V. Y might also represent a
time series of mean-zero, heteroscedastic innovations plus an offset. The inclusion of an offset is signaled by a nonzero Offset value in the input model. If the input model includes an offset,
filter adds the offset to the underlying mean-zero, heteroscedastic innovations so that Y represents a time series of offset-adjusted innovations.
This example illustrates the relationship between simulate and filter.
Specify a GJR(1,1) model with Gaussian innovations.
model = gjr('Constant',0.005,'GARCH',0.8,'ARCH',0.1,...
Simulate the model via Monte Carlo simulation. Then, standardize the simulated innovations and filter.
[v,e] = simulate(model,100,'E0',0,'V0',0.05);
Z = e./sqrt(v);
[V,E] = filter(model,Z,'Z0',0,'V0',0.05);
Confirm that the outputs of simulate and filter are identical.
ans =

     1
The logical value 1 confirms the two outputs are identical.
[1] Bollerslev, T. "Generalized Autoregressive Conditional Heteroskedasticity." Journal of Econometrics. Vol. 31, 1986, pp. 307–327.
[2] Bollerslev, T. "A Conditionally Heteroskedastic Time Series Model for Speculative Prices and Rates of Return." The Review of Economics and Statistics. Vol. 69, 1987, pp. 542–547.
[3] Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.
[4] Enders, W. Applied Econometric Time Series. Hoboken, NJ: John Wiley & Sons, 1995.
[5] Engle, R. F. "Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation." Econometrica. Vol. 50, 1982, pp. 987–1007.
[6] Glosten, L. R., R. Jagannathan, and D. E. Runkle. "On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks." The Journal of Finance. Vol. 48, No. 5,
1993, pp. 1779–1801.
[7] Hamilton, J. D. Time Series Analysis. Princeton, NJ: Princeton University Press, 1994.
● The filter method generalizes the simulate method. Both methods filter a series of disturbances to produce output responses, innovations, and conditional variances. However, simulate
autogenerates a series of mean-zero, unit-variance, independent and identically distributed (iid) disturbances according to the distribution in the model object, model. In contrast, filter lets
you directly specify your own disturbances.
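As an illustration of this filtering semantics outside MATLAB, the following NumPy recursion sketches a GJR(1,1) filter. It is not the toolbox implementation; the leverage coefficient `gamma_` and the presample values are assumed purely for the example (they play the roles of 'V0'/'Z0' above).

```python
import numpy as np

def gjr_filter(z, omega=0.005, alpha=0.1, gamma_=0.05, beta=0.8, v0=0.05):
    """Filter standardized disturbances z through a GJR(1,1) recursion:
      v[t] = omega + alpha*e[t-1]^2 + gamma_*e[t-1]^2*1[e[t-1] < 0] + beta*v[t-1]
      e[t] = sqrt(v[t]) * z[t]
    All parameter values here are assumptions for illustration."""
    v = np.empty(len(z))
    e = np.empty(len(z))
    v_prev, e_prev = v0, 0.0       # presample conditional variance / innovation
    for t, zt in enumerate(z):
        v[t] = (omega + alpha * e_prev**2
                + gamma_ * e_prev**2 * (e_prev < 0)   # leverage: negative shocks
                + beta * v_prev)
        e[t] = np.sqrt(v[t]) * zt
        v_prev, e_prev = v[t], e[t]
    return v, e

# Example: filter a short disturbance path.
z = np.array([0.5, -1.2, 0.3])
v, e = gjr_filter(z)
```

Because omega > 0 and the other coefficients are nonnegative, every filtered conditional variance stays strictly positive, matching the positivity requirement on V0 above.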
See Also
estimate | forecast | gjr | infer | print | simulate
More About