content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
st: Fixed effects using binary numbers + F test for Fixed effects
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: Fixed effects using binary numbers + F test for Fixed effects
From "joelle farajallah" <joelle.farajallah@hec.ca>
To statalist@hsphsun2.harvard.edu
Subject st: Fixed effects using binary numbers + F test for Fixed effects
Date Thu, 1 Nov 2007 22:44:21 -0400 (EDT)
Hi everyone
I have quite a few questions about the fixed effects that might sound very
beginner level things but i'm getting really stuck with them
We have a certain model
lcrmrteit = â0 + â1 lprbarrit + â2 lprbconvit +d04 + d05 + uit.
The first question is to develop a model with fixed effects using binary
that can be estimated by MCO and estimate this model using Stata
The second question is to use an F test to validate the usage of fixed
effects (restricted F test)
Third question is to estimate the model using the fixed effects and use an
F statistic to validate the presence or absence of fixed effects ( using
integrated commands ?? )
And they finally ask us what the difference is between the 3 F statistics
obtained in these 3 questions , which for me kind of all seem to be the
Thanks in advance for your help.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2007-11/msg00034.html","timestamp":"2014-04-16T07:45:33Z","content_type":null,"content_length":"6132","record_id":"<urn:uuid:d980ae12-9943-469f-aba9-b0ac49bd4905>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools Discussion: All Tools in Algebra II on Sketchpad, Functions as objects?
Discussion: All Tools in Algebra II on Sketchpad
Topic: Functions as objects?
<< see all messages in this topic
next message >
Subject: Functions as objects?
Author: Susan
Date: Nov 29 2005
I was wondering if there is a way to plot a function in Sketchpad, and then have
it behave as an object. For example, I can graph f(x) = sqrt (x), but I would
like to make it an object so that I could move it freely. I would also like to
be able to reflect it over the x or y axis using the transformations menu. I
know that I can do this algebraically by writing and graphing a new function,
but I was thinking about just being able to do it geometrically. Any ideas?
Reply to this message Quote this message when replying?
yes no
Post a new topic to the All Tools in Algebra II on Sketchpad discussion
Visit related discussions:
Algebra II
Discussion Help
|
{"url":"http://mathforum.org/mathtools/discuss.html?context=cell&do=r&msg=21913","timestamp":"2014-04-17T21:47:13Z","content_type":null,"content_length":"15967","record_id":"<urn:uuid:4c6c1c85-2cb1-44d8-9dfc-0aef273c7e9e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts from April 18, 2009 on Two Guys Arguing
in bf, subtracting one element’s value from another is dead simple. to subtract the value of y in cell 1 from the value of x in cell 0, starting from cell 0:
this type of subtraction is useful for dealing with numerical values, but doesn’t help much when we need to compare numbers. I’d like to be able to tell if x is greater than y. Subtraction seems to
be the natural operation for this. Just subtract y from x and loop if x is positive. This, however, only works when we can be assured that y is less than or equal to x, in which case, our comparison
is probably moot. Negative numbers, of course are considered boolean true, so what we really want is a separate subtraction operation that won’t make x any less than 0.
this isn’t too hard to write in C:
while(x && y) {
x--; y--;
there is no && statement in bf, but a pair of nested whiles would do as well:
while(x) {
while(y) {
x--; y--;
which very easily translates into bf:
There is a problem with this loop, though. It doesn’t have a consistent exit point. If x is greater than y, the loop will exit with the pointer at y’s location. If y is greater than x, then the loop
will exit with the pointer at x’s location. To remedy this, we need to establish a fixed point to “rewind” to.
If we establish a 0 in cell 0 as our “anchor”, then put x in cell 1 and y in cell 2, after we finish subtracting y from x, we can “rewind” back to our anchor cell for a consistent starting and end
|
{"url":"https://twoguysarguing.wordpress.com/2009/04/18/","timestamp":"2014-04-17T18:23:41Z","content_type":null,"content_length":"29782","record_id":"<urn:uuid:5a96abca-5b94-4d36-a52c-59ea348e3f38>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interface Issues
Background jobs
Yes, a Sage job can be run in the background on a UNIX system. The canonical thing to do is type
nohup sage < command_file > output_file &
The advantage of nohup is that Sage will continue running after you log out.
Currently Sage will appear as “sage-ipython” or “python” in the output of the (unix) top command, but in future versions of Sage it will appears as sage.
Referencing Sage
To reference Sage, please add the following to your bibliography:
Stein, William, \emph{Sage: {O}pen {S}ource {M}athematical {S}oftware
({V}ersion 2.10.2)}, The Sage~Group, 2008, {\tt http://www.sagemath.org}.
Here is the bibtex entry:
Key = {Sage},
Author = {William Stein},
Organization = {The Sage~Group},
Title = {{Sage}: {O}pen {S}ource {M}athematical {S}oftware ({V}ersion 2.10.2)},
Note= {{\tt http://www.sagemath.org}},
Year = 2008
If you happen to use the Sage interface to PARI, GAP or Singular, you should definitely reference them as well. Likewise, if you use code that is implemented using PARI, GAP, or Singular, reference
the corresponding system (you can often tell from the documentation if PARI, GAP, or Singular is used in the implementation of a function).
For PARI, you may use
organization = "{The PARI~Group}",
title = "{PARI/GP, version {\tt 2.1.5}}",
year = 2004,
address = "Bordeaux",
note = "available from \url{http://pari.math.u-bordeaux.fr/}"
\bibitem{PARI2} PARI/GP, version {\tt 2.1.5}, Bordeaux, 2004,
(replace the version number by the one you used).
For GAP, you may use
[GAP04] The GAP Group, GAP -- Groups, Algorithms, and Programming,
Version 4.4; 2005. (http://www.gap-system.org)
key = "GAP",
organization = "The GAP~Group",
title = "{GAP -- Groups, Algorithms, and Programming,
Version 4.4}",
year = 2005,
note = "{\tt http://www.gap-system.org}",
keywords = "groups; *; gap; manual"}
The GAP~Group, \emph{GAP -- Groups, Algorithms, and Programming, Version 4.4}; 2005,
{\tt http://www.gap-system.org}.
For Singular, you may use
[GPS05] G.-M. Greuel, G. Pfister, and H. Sch\"onemann.
{\sc Singular} 3.0. A Computer Algebra System for Polynomial
Computations. Centre for Computer Algebra, University of
Kaiserslautern (2005). {\tt http://www.singular.uni-kl.de}.
author = {G.-M. Greuel and G. Pfister and H. Sch\"onemann},
title = {{\sc Singular} 3.0},
type = {{A Computer Algebra System for Polynomial Computations}},
institution = {Centre for Computer Algebra},
address = {University of Kaiserslautern},
year = {2005},
note = {{\tt http://www.singular.uni-kl.de}},
G.-M.~Greuel, G.~Pfister, and H.~Sch\"onemann.
\newblock {{\sc Singular} 3.0}. A Computer Algebra System for Polynomial Computations.
\newblock Centre for Computer Algebra, University of Kaiserslautern (2005).
\newblock {\tt http://www.singular.uni-kl.de}.
Logging your Sage session
Yes you can log your sessions.
(a) Modify line 186 of the .ipythonrc file (or open .ipythonrc into an editor and search for “logfile”). This will only log your input lines, not the output.
(b) You can also write the output to a file, by running Sage in the background ( Background jobs ).
(c) Start in a KDE konsole (this only work in linux). Go to Settings \(\rightarrow\) History ... and select unlimited. Start your session. When ready, go to edit \(\rightarrow\) save history as ....
Some interfaces (such as the interface to Singular or that to GAP) allow you to create a log file. For Singular, there is a logfile option (in singular.py). In GAP, use the command LogTo.
LaTeX conversion
Yes, you can output some of your results into LaTeX.
sage: M = MatrixSpace(RealField(),3,3)
sage: A = M([1,2,3, 4,5,6, 7,8,9])
sage: print latex(A)
1.00000000000000 & 2.00000000000000 & 3.00000000000000 \\
4.00000000000000 & 5.00000000000000 & 6.00000000000000 \\
7.00000000000000 & 8.00000000000000 & 9.00000000000000
At this point a dvi preview should automatically be called to display in a separate window the LaTeX output produced.
LaTeX previewing for multivariate polynomials and rational functions is also available:
sage: x = PolynomialRing(QQ,3, 'x').gens()
sage: f = x[0] + x[1] - 2*x[1]*x[2]
sage: h = f /(x[1] + x[2])
sage: print latex(h)
\frac{-2 x_{1} x_{2} + x_{0} + x_{1}}{x_{1} + x_{2}}
Sage and other computer algebra systems
If foo is a Pari, GAP ( without ending semicolon), Singular, Maxima command, resp., enter gp("foo") for Pari, gap.eval("foo")} singular.eval("foo"), maxima("foo"), resp.. These programs merely send
the command string to the external program, execute it, and read the result back into Sage. Therefore, these will not work if the external program is not installed and in your PATH.
Command-line Sage help
If you know only part of the name of a Sage command and want to know where it occurs in Sage, a new option for 0.10.11 has been added to make it easier to hunt it down. Just type sage -grep <string>
to find all occurences of <string> in the Sage source code. For example,
was@form:~/s/local/bin$ sage -grep berlekamp_massey
matrix/all.py:from berlekamp_massey import berlekamp_massey
matrix/berlekamp_massey.py:def berlekamp_massey(a):
matrix/matrix.py:import berlekamp_massey
matrix/matrix.py: g =
Type help(foo) or foo?? for help and foo.[tab] for searching of Sage commands. Type help() for Python commands.
For example
Help on function Matrix in module sage.matrix.constructor:
Matrix(R, nrows, ncols, entries = 0, sparse = False)
Create a matrix.
R -- ring
nrows -- int; number of rows
ncols -- int; number of columns
entries -- list; entries of the matrix
sparse -- bool (default: False); whether or not to store matrices as sparse
a matrix
sage: Matrix(RationalField(), 2, 2, [1,2,3,4])
[1 2]
[3 4]
sage: Matrix(FiniteField(5), 2, 3, range(6))
[0 1 2]
[3 4 0]
sage: Matrix(IntegerRing(), 10, 10, range(100)).parent()
Full MatrixSpace of 10 by 10 dense matrices over Integer Ring
sage: Matrix(IntegerRing(), 10, 10, range(100), sparse = True).parent()
Full MatrixSpace of 10 by 10 sparse matrices over Integer Ring
in a new screen. Type q to return to the Sage screen.
Reading and importing files into Sage
A file imported into Sage must end in .py, e.g., foo.py and contain legal Python syntax. For a simple example see Permutation groups with the Rubik’s cube group example above.
Another way to read a file in is to use the load or attach command. Create a file called example.sage (located in the home directory of Sage) with the following content:
print "Hello World"
print 2^3
Read in and execute example.sage file using the load command.
sage: load "example.sage"
Hello World
You can also attach a Sage file to a running session:
sage: attach "example.sage"
Hello World
Now if you change example.sage and enter one blank line into Sage, then the contents of example.sage will be automatically reloaded into Sage:
sage: !emacs example.sage& #change 2^3 to 2^4
sage: #hit return
Reloading 'example.sage'
Hello World
Installation for the impatient
We shall explain the basic steps for installing the most recent version of Sage (which is the “source” version, not the “binary”).
1. Download sage-*.tar (where * denotes the version number) from the website and save into a directory, say HOME. Type tar zxvf sage-*.tar in HOME.
2. cd sage-* (we call this SAGE_ROOT) and type make. Now be patient because this process make take 2 hours or so.
3. Optional: When the compilation is finished, type on the command line in the Sage home directory:
./sage -i database_jones_numfield
./sage -i database_gap-4.4.8
./sage -i database_cremona_ellcurve-2005.11.03
./sage -i gap_packages-4.4.8_1
This last package loads the GAP GPL’d packages braid, ctbllib, DESIGN, FactInt, GAPDoc, GRAPE, LAGUNA, SONATA 2.3, and TORIC . It also compiles (automatically) the C programs in GUAVA and GRAPE.
Other optional packages to install are at http://modular.math.washington.edu/sage/packages/optional/.
Another way: download packages from http://sage.scipy.org/sage/packages/optional/ and save to the directory SAGE_ROOT. Type
/sage -i sage-package.spkg
for each sage-package you download (use sage -f if you are reinstalling.) This might be useful if you have a CD of these packages but no (or a very slow) internet connection.
4. If you want to build the documentation, cd devel/doc and type ./rebuild. This requires having latex and latex2html installed.
Python language program code for Sage commands
Let’s say you want to know what the Python program is for the Sage command to compute the center of a permutation group. Use Sage’s help interface to find the file name:
sage: PermutationGroup.center?
Type: instancemethod
Base Class: <type 'instancemethod'>
String Form: <unbound method PermutationGroup.center>
Namespace: Interactive
File: /home/wdj/sage/local/lib/python2.4/site-packages/sage/groups/permgroup.py
Definition: PermutationGroup.center(self)
Now you know that the command is located in the permgroup.py file and you know the directory to look for that Python module. You can use an editor to read the code itself.
“Special functions” in Sage
Sage has several special functions:
• Bessel functions and Airy functions
• spherical harmonic functions
• spherical Bessel functions (of the 1st and 2nd kind)
• spherical Hankel functions (of the 1st and 2nd kind)
• Jacobi elliptic functions
• complete/incomplete elliptic integrals
• hyperbolic trig functions (for completeness, since they are special cases of elliptic functions)
and orthogonal polynomials
• chebyshev_T (n, x), chebyshev_U (n, x) - the Chebyshev polynomial of the first, second kind for integers \(n > -1\).
• laguerre (n, x), gen_laguerre (n, a, x) - the (generalized) Laguerre poly. for \(n > -1\).
• legendre_P (n, x), legendre_Q (n, x), gen_legendre_P (n, x), gen_legendre_Q (n, x) - the (generalized) Legendre function of the first, second kind for integers \(n > -1\).
• hermite (n,x) - the Hermite poly. for integers \(n > -1\).
• jacobi_P (n, a, b, x) - the Jacobi polynomial for integers \(n > -1\) and \(a\) and \(b\) symbolic or \(a > -1\) and \(b > -1\).
• ultraspherical (n,a,x) - the ultraspherical polynomials for integers \(n > -1\). The ultraspherical polynomials are also known as Gegenbauer polynomials.
In Sage, these are restricted to numerical evaluation and plotting but via maxima, some symbolic manipulation is allowed:
sage: maxima.eval("f:bessel_y (v, w)")
sage: maxima.eval("diff(f,w)")
sage: maxima.eval("diff (jacobi_sn (u, m), u)")
sage: jsn = lambda x: jacobi("sn",x,1)
sage: P = plot(jsn,0,1, plot_points=20); Q = plot(lambda x:bessel_Y( 1, x), 1/2,1)
sage: show(P)
sage: show(Q)
In addition to maxima, pari and octave also have special functions (in fact, some of pari‘s special functions are wrapped in Sage).
Here’s an example using Sage’s interface (located in sage/interfaces/octave.py) with octave (http://www.octave.org/doc/index.html).
sage: octave("atanh(1.1)") ## optional - octave
Here’s an example using Sage’s interface to pari‘s special functions.
sage: pari('2+I').besselk(3)
0.0455907718407551 + 0.0289192946582081*I
sage: pari('2').besselk(3)
What is Sage?
Sage is a framework for number theory, algebra, and geometry computation that is initially being designed for computing with elliptic curves and modular forms. The long-term goal is to make it much
more generally useful for algebra, geometry, and number theory. It is open source and freely available under the terms of the GPL. The section titles in the reference manual gives a rough idea of the
topics covered in Sage.
History of Sage
Sage was started by William Stein while at Harvard University in the Fall of 2004, with version 0.1 released in January of 2005. That version included Pari, but not GAP or Singular. Version 0.2 was
released in March, version 0.3 in April, version 0.4 in July. During this time, support for Cremona’s database, multivariate polynomials and large finite fields was added. Also, more documentation
was written. Version 0.5 beta was released in August, version 0.6 beta in September, and version 0.7 later that month. During this time, more support for vector spaces, rings, modular symbols, and
windows users was added. As of 0.8, released in October 2005, Sage contained the full distribution of GAP, though some of the GAP databases have to be added separately, and Singular. Adding Singular
was not easy, due to the difficulty of compiling Singular from source. Version 0.9 was released in November. This version went through 34 releases! As of version 0.9.34 (definitely by version
0.10.0), Maxima and clisp were included with Sage. Version 0.10.0 was released January 12, 2006. The release of Sage 1.0 was made early February, 2006. As of February 2008, the latest release is
Many people have contributed significant code and other expertise, such as assistance in compiling on various OS’s. Generally code authors are acknowledged in the AUTHOR section of the Python
docstring of their file and the credits section of the Sage website.
|
{"url":"http://sagemath.org/doc/constructions/interface_issues.html","timestamp":"2014-04-19T14:32:37Z","content_type":null,"content_length":"38955","record_id":"<urn:uuid:687514b7-34d9-463d-a985-4aa9aadedfae>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Count On - Spot the odds
Uncle Odd arrives
Suddenly there was a shout from behind the odd old tree. "I left this one out!" cried a voice. It was Uncle Odd.
Blue-bug was delighted. It was a cup with no spots. "10 plus 0 makes 10," she said.
"Now that I'm here let's have tea," said Uncle Odd.
"But we haven't enough cups!" said Big-bug, dismayed.
Uncle Odd smiled. "Don't worry," he said, "I've brought my own mug. It'll be the odd-bug-mug-out."
They all laughed.
"I've brought some biscuits too," said Uncle Odd.
Uncle Odd put the biscuits out on a large cloth.
"Everyone is to have the same number of biscuits each," said Uncle Odd.
"How many are there of us?" said Blue-bug, "and how many biscuits are there to share?"
The odd-bugs worked it out and then they had tea.
How many biscuits were there and how many did each odd-bug get?
|
{"url":"http://www.counton.org/magnet/minus2/oddbug/parcel2.html","timestamp":"2014-04-17T12:31:31Z","content_type":null,"content_length":"2300","record_id":"<urn:uuid:13b26a37-17b5-450f-8e7f-1336fe2f184c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Edward Witten on science, strings, himself
Two months ago, the Institute of Physics revealed this YouTube video:
Edward Witten, whom they still call "a 2010 Newton Medal Winner" rather than the "An Inaugural Milner Prize Winner" because they think that £1,000 with a stamp "IOP" on it (plus the name of Isaac
Newton, without his permission) is more than $3,000,000 ;-), is talking for 25 minutes about his CV, previous scholarly interests, as well as hot topics in string theory.
Edward Witten is known for having studied some social sciences – journalism, history, linguistics – and being a tool of the Democratic Party candidates (such as George McGovern 1972 who just died)
but he has been interested in physical sciences from his childhood. He was interested in astronomy but he was afraid that the job required him to be an astronaut. It is cute to mix astronomers and
astronauts. My dad doesn't distinguish astronomers from astrologers.
Of course, his father – a theoretical physicist – was probably affecting Edward Witten, too.
He has been interested in the peace in the Middle East. In fact, some of his $3 million Milner money is going to J Street, a left-wing NGO trying to create peace between the Israeli Arabs and Jews in
some of the most naive ways. Of course, just like anyone who takes string theory seriously, he shows an old picture of himself on a camel.
He talks about his wife, kids, and interests. His parents didn't believe in pushing kids too far too quickly. He got a standard theoretical physics education rather soon. Only when he was a postdoc,
his maths was getting deeper. Supersymmetry became essential when he was a student. It has played a key role in his research from the beginning.
Some extra remarks are dedicated to Einstein's general relativity, extra dimensions, and unification of all forces. He talks about his negative-energy instability of higher dimensions without SUSY.
He paints himself as a relative latecomer to string theory. Of course, it depends whom you compare him with. Witten compares the beauty of the sound of different musical instruments depending on the
admixtures of the higher Fourier modes.
In the early 1980s, he realized that the available consistent string vacua failed to violate the left-right symmetry (P and CP). At some moment in 1984, the first superstring revolution explodes and
it was the first string miracle that occurred when Witten was watching. That's why it was a signal from the Heaven for him. String theory got much more realistic.
The 1990s are the decade of dualities and M-theory. Who needed the other four string theories, and so on. From that time, he's been intrigued by the application of string/M-theoretical methods to
understand issues in "ordinary" established particle physics theories (why positive energy, why confinement, ...). This light that string theory manages to shine upon the established theories is
Witten's main reason to be convinced that string theory is on the right track. The elegance with which string theory sheds the light is another reason. Witten still calls our understanding of string
theory "the rough draft" but this rough draft has already led to amazing insights and Witten doesn't believe that such a chain of astonishing discoveries has happened by coincidence.
The last reason why string theory seems right to him is that it teaches us new and deeper things about the geometry – including things that surprised mathematicians and inspired those at the
frontier. He hadn't expected such a thing when he was young but these insights did materialize. Our confusion has actually helped to develop the new concepts.
A special discussion is dedicated to the big mystery what is the core principle underlying string theory much like the equivalence principle or spatial curvature underlying Einstein's general
relativity. What string theory really means? It fascinates him most. But for decades, the theory has been smarter than us and forced us to move in previously unanticipated direction with twists – and
that's probably still true today.
Witten isn't actively trying to solve the biggest questions. He says a thing often told by Andy Strominger as well – an important skill in the research is to choose a question small enough so that
you have a chance to answer it but big enough so that it is worth answering. The Khovanov issues are mentioned as an example. Witten couldn't understand what the stuff was about – but he did
understand it was physics-related (I am not that far). Witten makes it clear he realizes that most string theorists aren't interested in those things but he's independent enough not to care. Of
course, there's no guarantee this stuff will be important. He knows that but he suspects it will be important. ;-)
At the end, he compares the string theory research with the discovery of new continents and with finding a treasure underground that we don't fully understand but we see that pieces fit together.
New element
Some fun via Fred S. – a new densest element was just found.
snail feedback (32) :
Oh yeah, I look forward to watch this :-D
As the picture on a camel is a sign of coolness ;-), trying to play the fundamental physics prize down by not mentioning or ignoring it is quite a sourballish attitude :-/
...BTW Lumo, as a 7 year old I was in Isreal with my family for holidays, and on that occasion I had the oportunity to sit on a camel and having a picture taken too :-P ;-) :-D
That's cool it's from Israel - you're probably just like Witten. I was older and it was in Tunisia, not Israel. ;-)
J Street... *sigh*
I'm not sure whether to feel depressed or awe at Witten's natural talent: Up to his early 20s he didn't take much of an interest in physics/maths, he then takes a few maths/physics books home
from the library, studies them, then enrolls on a physics graduate course at Princeton. OMG... I just want to cry...
so his dad that if i remember correctly worked on supersymmetry didn't matter?
i personally prefer guys that are less good in mathematics than Witten
I have a picture astride a donkey in Ireland.
he is also not a libertarian!
Very nice interview... Witten makes String Theory glamorous : "...an underground treasure, we don't know what the treasure is... we just know that when we dig we find bits and pieces of the
On the Obamacronium I have a question : would a Mormon be a moron's super-partner ? ;-)
does anyone know if Ed Witten was also really good in economics? and if he was why he supported the Democratic party?
George is off again :-D
Lubos caused a serious anachronism here. I have seen this vid more than a year ago, indeed, it is from 2010.
This is why the ''fundamental physics prize'' is not mentioned :)
my brother just told me off that i should not make comments in particle physics blogs since i don't have a clue about particle physics!
so in case people read my comments. George and Kyriakos are different...i am the good looking one.
Absolutely, Shannon. In the Mormon tradition the word of God was brought down to Joseph Smith, the Latter Day Saints founder, by the angel Moroni. No doubt the angel Moron is the angel Moroni’s
This copy of the video was posted by IOP 2 months ago, August 10th, when the Milner Prize could have been mentioned in the description of the video etc.
Witten has an unusual personality, at lease in his public addresses, which I find remarkable. I'm not sure how to characterize it except as a kind of even-toned modesty in an almost feminine high
voice that seems guaranteed to not arouse envy in his colleagues. Is it true that he is almost in a league by himself? He's not from another planet is he? :)
The part "...My dad doesn't distinguish astronomers from astrologers..." was hilarious. I too find many people from my country do not have the capacity to differentiate between a science and a
pseudoscience. On the basis of this ignorance, they try to justify and unfortunately support age-old, exploitative practices.
I like what Witten said: an important skill in the research is to choose a question small enough
so that you have a chance to answer it but big enough so that it is
worth answering. Even the best mind won't challenge to answer the most difficult questions and that is what I would remember from this interview.
Hopefully, Witten's interest in "Khovanov issues" is not getting him stuck in an inversely similar way to how Einstein got stuck. %-}
Lol ! My intuition told me right.
Edward Witten, an excellent mathematician. I'm not sure I'd call him a physicist of any sort given that his entire career has been dedicated to mathematical abstractions whose connection to the
real world is non-existant.
I would point out his early work on the standard model... but with web critics of physics, one never knows just how much they want to reject. Do you reject quarks? Quantum mechanics? Or do you
just restrict your criticism to theories that haven't been verified yet?
May I kindly ask you to go away and troll somewhere else?
The language of physics IS math, like it or not !
i think that the solution discoverede by donaldson to the topology all totally smooth to the 4-dimensional manifolds are to the metrics of spacetime of einstein that are the part of the exotics
structures that g
has vector vector bundles not differentiables to the 4-dimensional topological geometry,that exotic structures can to be perceived by quantum fields or discreteness of time conected to the space
in spacetime continuos that generate the 4-dimensional manifolds( with chiralities) are the exotic structures defined to the connection of space and time
i think that the antimatter diesn't exist in the nature,the asymmetry between the particles and antiparticles are due the asymmetry of proper matter in the relativistic transformations of energy
into of mass and viceversa due the asymmetry of space and time( generating the spacetime continuos) that is associated the violation of CP or maximalmally PT that demonstrate the asymmetry of
mass and energy with the incresing of speeds between the relative inertial frames.the time dilatation and the contraction of space implies the existence of symmetry between the rotational
invariance broken´thence appear the the antiparticles as "energy locally bundled in the quantic vacuum.then the antiparticles appear as hypersymmetries of the dirac's equations.that implies in
the 5-dimensional spacetime continuos to the broken symmetry in the 4-dimensional manifolds with two opposite torsions in the hyperbolic and elliptic non-euclidean tpology geometries in the
conjugation of space and time in spacetime continuos with derivatives of violation of stronger of PT or to conservation of cp in the strong interactions that is associated to loops of spacetime
to electromagnetics fields deformeds
i thinh through STR.GTR,QUANTUM THEORY,AND STRING THEORY THAT THE UNIVERSE IS MADE NOT OF POINTS ,OR STRAIGHTS,BUT YES BY COLLECTIONS THAT ARE THREE SPECTRAL ENTITIES THAT CAN TO BE
THOUGHT STRUCTURES.
particles-antiparticles9 right-left symmetry) are property of the metrics of spacetime continuum.the violation of pt( or stronger CP) induce to the metrics of spacetime continuos,each one with an
only one frequency-where does appear the constancy of speed of light to measure the spacetime-to measure the space and time as openned and closed curvatures-the contraction of space and time
dilatation -that is due the violation of cp and the uniformity not of variation of spacetime with the incresing of speed,nearest to the speed of light,demonstrating that the violation of cp is
associated to the deformations of spacetime,it is the variability of both entities when connecteds.
the strings might be all equals? or there are of several forms,srtrucures,sizes?
is interesting to see that the 4-dimensional manifolds
is very rich in topologicaland algebrical ,geometry and therefore physical entities that does the constructions of spacetime,the metrics and non totally smooth that
is equivalent to structures multidimensional
i believe that exist antiparticles,antimatter,dark matter ,dark energy,or any others things.all is linked to miss of mathematical structures to physical theory yet,or
ours mathematical conceptions are faults yet
the STR,GTR,QUANTUM MECHANICS,STRING THEORIES AND OTHERS SUFFER OF PROBLEMS
|
{"url":"http://motls.blogspot.com/2012/10/edward-witten-on-science-strings-himself.html","timestamp":"2014-04-19T12:38:27Z","content_type":null,"content_length":"242968","record_id":"<urn:uuid:90894c02-555c-4856-bf79-0cbd19930593>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum cryptography leaves the lab
April 30th, 2007, 11:00 PM
Drakain Zeil
So the real question is how many ways can we make imprints on a photon? xD
Brute force is probably not going to be added off-the-bat. You heard it here first.
May 1st, 2007, 09:08 AM
Ok, the basic principle for quantum encryption relies on polarisation. A single photon vibrates in a single axis. Thus you can encode binary data on to photons by changing the axis of vibration.
E.g. Vertical vibration is 1 and horizontal vibration is 0. You then detect the data by passing it through a vertical (or a horizontal) polaroid filter. If the photon is "1" it will pass through
the filter and be recieved, if not it will be absorbed by the filter. Thus you build up a sequence of 1's and 0's.
Now the encryption bit. You change between encrypting on + axis and X axis at "random". Using | and / as 1, - and \ as 0. The receiving end knows your sequence of X and +, the key, and so can
receive the data. Malice intercepting your data has to play a 50-50 guessing game with his polaroid filters. if he puts in a | filter and recieves a / photon there is a 50-50 chance that the
photon will pass through the filter. Thus, every filter he gets wrong gives a 50% chance of receiving the wrong data. Additionally, when he sends the photon on he will encode the photon using the
same axis as his filter, giving a 50-50 chance that Bob the recipient will receive the wrong data. In this way not only does Malice not intercept the correct message but Bob is very much aware of
Malice's interference since the received message is corrupted.
Perhaps some day we will discover a way of identifying the vibration of a photon without destroying it, I don't deny that. The key point here is that the message is UNINTERCEPTABLE not
UNDECRYPTABLE. Once you receive the message its no problem to decrypt it. In fact, its in the clear. But first you have to get the key (so, weakness right there....simply get the key).
Other points:
1) this requires a dedicated fibre line. If you have a dedicated fibre line between two points its difficult to evesdrop anyway. This is not something that gets sent over the net for anyone with
a traffic sniffer in the right place to catch.
2) Its a bloody slow method of communicating. Sending single photons is slow especially allowing time for filter changes.
So yes, I do believe it is unbreakable encryption. I don't count getting the key as breaking the encryption. There is no way to "brute force" this. Its a laws of physics thing not a
"10,000,000,000 years with the fastest PC ever" thing.
|
{"url":"http://www.antionline.com/printthread.php?t=265225&pp=10&page=2","timestamp":"2014-04-17T08:30:20Z","content_type":null,"content_length":"10255","record_id":"<urn:uuid:67b4d841-6054-4504-8b36-989a13a4ae8a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Auburndale, MA Trigonometry Tutor
Find an Auburndale, MA Trigonometry Tutor
...I began using Microsoft Excel in the early 1990s as a graduate student at MIT, and since then I've used the software for a wide variety of personal, professional, and academic projects in
science, engineering, publishing, administration, finance, and management. I began tutoring Excel in 2004. ...
23 Subjects: including trigonometry, chemistry, biology, calculus
...As a result of this experience, my teaching philosophy places great emphasis on active learning. I have found with great consistency that students do not learn primarily by hearing
explanations. This is especially true in the hard sciences and in math, both of which are skill-intensive.
9 Subjects: including trigonometry, calculus, physics, geometry
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including trigonometry, English, reading, calculus
...I pride myself on my experience with younger children and adolescents through my work as a counselor in leadership camps, summer camps, after-school programs, Sunday school, school sponsored
tutoring programs and old fashioned child care. My experience in writing stems from my position as editor of my high school newspaper and through peer review. My strength is my ability to tutor
30 Subjects: including trigonometry, English, reading, elementary (k-6th)
...As someone who wasn't the strongest in science myself, I was pleasantly surprised when I took this section and found that it was much simpler than that. What the Science section of the ACT
actually tests is mostly interpreting the data in charts and tables, understanding experiments and evaluati...
28 Subjects: including trigonometry, English, reading, writing
|
{"url":"http://www.purplemath.com/Auburndale_MA_Trigonometry_tutors.php","timestamp":"2014-04-16T10:46:50Z","content_type":null,"content_length":"24520","record_id":"<urn:uuid:a2847e6a-ccb2-41f0-96e1-809d4e7d8ab9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Theoretical and Experimental Spinners
3.5: Theoretical and Experimental Spinners
Created by: CK-12
Practice Theoretical and Experimental Spinners
If you have a spinner that is divided into equal portions of red, green, purple, and blue, what is the probability that when you spin you will land on blue? If you spin the spinner 100 times will you
get this same result? What about 1,000 times?
Watch This
First watch this video to learn about theoretical and experimental spinners.
CK-12 Foundation: Chapter3TheoreticalandExperimentalSpinnersA
Then watch this video to see some examples.
CK-12 Foundation: Chapter3TheoreticalandExperimentalSpinnersB
The 2 types of probability are theoretical probability and experimental probability. Theoretical probability is defined as the number of desired outcomes divided by the total number of outcomes.
Theoretical Probability
$P(\text{desired}) = \frac{\text{number of desired outcomes}}{\text{total number of outcomes}}$
Experimental probability is, just as the name suggests, dependent on some form of data collection. To calculate the experimental probability, divide the number of times the desired outcome has
occurred by the total number of trials.
Experimental Probability
$P(\text{desired}) = \frac{\text{number of times desired outcome occurs}}{\text{total number of trials}}$
You can try a lot of examples and trials yourself using the NCTM Illuminations page found at http://illuminations.nctm.org/activitydetail.aspx?ID=79.
What is interesting about theoretical and experimental probabilities is that, in general, the more trials you do, the closer the experimental probability gets to the theoretical probability. We'll
see this as we go through the examples.
Example A
You are spinning a spinner like the one shown below 20 times. How many times does it land on blue? How does the experimental probability of landing on blue compare to the theoretical probability?
Simulate the spinning of the spinner using technology.
On the TI-84 calculator, there are a number of possible simulations you can do. You can do a coin toss, spin a spinner, roll dice, pick marbles from a bag, or even draw cards from a deck.
After pressing $\boxed{\text{ENTER}}$
Since we're doing a spinner problem, choose Spin Spinner.
In Spin Spinner, a wheel with 4 possible outcomes is shown. You can adjust the number of spins, graph the frequency of each number, and use a table to see the number of spins for each number. We want
to set this spinner to spin 20 times. Look at the keystrokes below and see how this is done.
In order to match our color spinner with the one found in the calculator, you will see that we have added numbers to our spinner. This is not necessary, but it may help in the beginning to remember
that 1 = blue (for this example).
Now that the spinner is set up for 20 trials, choose SPIN by pressing $\boxed{\text{WINDOW}}$
We can see the result of each trial by choosing TABL, or pressing $\boxed{\text{GRAPH}}$
And we see the graph of the resulting table, or go back to the first screen, simply by choosing GRPH, or pressing $\boxed{\text{GRAPH}}$
Now, the question asks how many times we landed on blue (number 1). We can actually see how many times we landed on blue for these 20 spins. If you press the right arrow $\left ( \boxed{\
blacktriangleright} \right )$
To go back to the question, how many times does the spinner land on blue if it is spun 20 times? The answer is 3. To calculate the experimental probability of landing on blue, we have to divide by
the total number of spins.
$P(\text{blue}) = \frac{3}{20} = 0.15$
Therefore, for this experiment, the experimental probability of landing on blue with 20 spins is 15%.
Now let's calculate the theoretical probability. We know that the spinner has 4 equal parts (blue, purple, green, and red). In a single trial, we can assume that:
$P(\text{blue}) = \frac{1}{4} = 0.25$
Therefore, for our spinner example, the theoretical probability of landing on blue is 0.25. Finding the theoretical probability requires no collection of data.
Example B
You are spinning a spinner like the one shown below 50 times. How many times does it land on blue? How about if you spin it 100 times? Does the experimental probability get closer to the theoretical
probability? Simulate the spinning of the spinner using technology.
Set the spinner to spin 50 times and choose SPIN by pressing $\boxed{\text{WINDOW}}$
You can see the result of each trial by choosing TABL, or pressing $\boxed{\text{GRAPH}}$
Again, we can see the graph of the resulting table, or go back to the first screen, simply by choosing GRPH, or pressing $\boxed{\text{GRAPH}}$
The question asks how many times we landed on blue (number 1) for the 50 spins. Press the right arrow $\left ( \boxed{\blacktriangleright} \right )$
Now go back to the question. How many times does the spinner land on blue if it is spun 50 times? The answer is 11. To calculate the probability of landing on blue, we have to divide by the total
number of spins.
$P(\text{blue}) = \frac{11}{50} = 0.22$
Therefore, for this experiment, the probability of landing on blue with 50 spins is 22%.
If we tried 100 trials, we would see something like the following:
In this case, we see that the frequency of 1 is 23.
So how many times does the spinner land on blue if it is spun 100 times? The answer is 23. To calculate the probability of landing on blue in this case, we again have to divide by the total number of
$P(\text{blue}) = \frac{23}{100} = 0.23$
Therefore, for this experiment, the probability of landing on blue with 100 spins is 23%. You can see that as we perform more trials, we get closer to 25%, which is the theoretical probability.
Example C
You are spinning a spinner like the one shown below 170 times. How many times does it land on blue? Does the experimental probability get closer to the theoretical probability? How many times do you
predict we would have to spin the spinner to have the experimental probability equal the theoretical probability? Simulate the spinning of the spinner using technology.
With 170 spins, we get a frequency of 42 for blue.
The experimental probability in this case can be calculated as follows:
$P(\text{blue}) = \frac{42}{170} = 0.247$
Therefore, the experimental probability is 24.7%, which is even closer to the theoretical probability of 25%. While we're getting closer to the theoretical probability, there is no number of trials
that will guarantee that the experimental probability will exactly equal the theoretical probability.
Guided Practice
You are spinning a spinner like the one shown below 500 times. How many times does it land on blue? How does the experimental probability of landing on blue compare to the theoretical probability?
Simulate the spinning of the spinner using technology.
In the list of applications on the TI-84 calculator, choose Prob Sim.
After pressing $\boxed{\text{ENTER}}$
Since we're doing a spinner problem, choose Spin Spinner.
In Spin Spinner, a wheel with 4 possible outcomes is shown. You can adjust the number of spins, graph the frequency of each number, and use a table to see the number of spins for each number. We want
to set this spinner to spin 500 times. To do this, choose SET by pressing $\boxed{\text{ZOOM}}$$\boxed{\text{GRAPH}}$
Remember that for this example, 1 = blue.
Now that the spinner is set up for 500 trials, choose SPIN by pressing $\boxed{\text{WINDOW}}$
We can see the result of each trial by choosing TABL, or pressing $\boxed{\text{GRAPH}}$
And we see the graph of the resulting table, or go back to the first screen, simply by choosing GRPH, or pressing $\boxed{\text{GRAPH}}$
Now, the question asks how many times we landed on blue (number 1). We can actually see how many times we landed on blue for these 500 spins. If you press the right arrow $\left ( \boxed{\
blacktriangleright} \right )$
To go back to the question, how many times does the spinner land on blue if it is spun 500 times? The answer is 123. To calculate the experimental probability of landing on blue, we have to divide by
the total number of spins.
$P(\text{blue}) = \frac{123}{500} = 0.246$
Therefore, for this experiment, the experimental probability of landing on blue with 500 spins is 24.6%.
Do you remember how to calculate the theoretical probability from Example A? We know that the spinner has 4 equal parts (blue, purple, green, and red). In a single trial, we can assume that:
$P(\text{blue}) = \frac{1}{4} = 0.25$
Therefore, for our spinner example, the theoretical probability of landing on blue is 0.25. As we pointed out in Example A, finding the theoretical probability requires no collection of data. It's
also worth mentioning that our experimental probability was slightly farther away from the theoretical probability with 500 spins that it was with 170 spins in Example C. While, in general,
increasing the number of spins will produce an experimental probability that is closer to the theoretical probability, as we've just seen, this is not always the case!
Interactive Practice
1. Based on what you know about probabilities, write definitions for theoretical and experimental probability.
1. What is the difference between theoretical and experimental probability?
2. As you add more data, do your experimental probabilities get closer to or further away from your theoretical probabilities?
3. Is spinning 1 spinner 100 times the same as spinning 100 spinners 1 time? Why or why not?
A spinner was spun 750 times using Spin Spinner on the TI-84 calculator, with 1 representing blue, 2 representing purple, 3 representing green, and 4 representing red as shown:
3. According to the following screen, what was the experimental probability of landing on blue?
5. According to the following screen, what was the experimental probability of landing on purple?
7. According to the following screen, what was the experimental probability of landing on green?
9. According to the following screen, what was the experimental probability of landing on red?
A spinner was spun 900 times using Spin Spinner on the TI-84 calculator, with 1 representing blue, 2 representing purple, 3 representing green, and 4 representing red as shown:
7. According to the following screen, what was the experimental probability of landing on blue?
9. According to the following screen, what was the experimental probability of landing on purple?
11. According to the following screen, what was the experimental probability of landing on green?
13. According to the following screen, what was the experimental probability of landing on red?
Files can only be attached to the latest version of Modality
|
{"url":"http://www.ck12.org/book/CK-12-Basic-Probability-and-Statistics-Concepts---A-Full-Course/r11/section/3.5/","timestamp":"2014-04-16T11:38:48Z","content_type":null,"content_length":"152070","record_id":"<urn:uuid:559c5de1-4c1a-4eaa-a010-6b0f0c3251e7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Filling the Gaps
Copyright © University of Cambridge. All rights reserved.
'Filling the Gaps' printed from http://nrich.maths.org/
It may be helpful to look at the problem
What Numbers Can We Make?
In which columns do the square numbers appear?
In which columns do the squares of even numbers appear? Can you explain why?
And the squares of odd numbers? Can you explain why?
How can we describe the numbers in a particular column?
You might want to start by looking at the completely empty columns.
|
{"url":"http://nrich.maths.org/7547/clue?nomenu=1","timestamp":"2014-04-19T07:09:06Z","content_type":null,"content_length":"3424","record_id":"<urn:uuid:98b3a102-c966-45be-999b-9fd01c1bc86b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gene order in rosid phylogeny, inferred from pairwise syntenies among extant genomes
Ancestral gene order reconstruction for flowering plants has lagged behind developments in yeasts, insects and higher animals, because of the recency of widespread plant genome sequencing,
sequencers' embargoes on public data use, paralogies due to whole genome duplication (WGD) and fractionation of undeleted duplicates, extensive paralogy from other sources, and the computational cost
of existing methods.
We address these problems, using the gene order of four core eudicot genomes (cacao, castor bean, papaya and grapevine) that have escaped any recent WGD events, and two others (poplar and cucumber)
that descend from independent WGDs, in inferring the ancestral gene order of the rosid clade and those of its main subgroups, the fabids and malvids. We improve and adapt techniques including the OMG
method for extracting large, paralogy-free, multiple orthologies from conflated pairwise synteny data among the six genomes and the PATHGROUPS approach for ancestral gene order reconstruction in a
given phylogeny, where some genomes may be descendants of WGD events. We use the gene order evidence to evaluate the hypothesis that the order Malpighiales belongs to the malvids rather than as
traditionally assigned to the fabids.
Gene orders of ancestral eudicot species, involving 10,000 or more genes can be reconstructed in an efficient, parsimonious and consistent way, despite paralogies due to WGD and other processes.
Pairwise genomic syntenies provide appropriate input to a parameter-free procedure of multiple ortholog identification followed by gene-order reconstruction in solving instances of the "small
phylogeny" problem.
Despite a tradition of inferring common genomic structure among plants, e.g., [1], and despite plant biologists' interest in detecting synteny, e.g., [2,3], the automated ancestral genome
reconstruction methods developed for animals [4-7] and yeasts [8-12] at the syntenic block or gene order levels, have yet to be applied to the recently sequenced plant genomes. Reasons for this
1. The relative recency of these data. Although almost twenty dicotyledon angiosperms have been sequenced and released, most of this has taken place in the last two years (at the time of writing) and
the comparative genomics analysis has been reserved by the various sequencing consortia for their own first publication, often delayed for years following the initial data release.
2. Algorithms maximizing a well-defined objective function for reconstructing ancestors through the median constructions and other methods are computationally costly, increasing both with n, the
number of genes orthologous across the genomes, and especially with , where d is the number of rearrangements occurring along a branch of the tree. Indeed, even with moderate values of , these
methods may fail to execute at all in reasonable time.
3. Whole genome duplication (WGD), which is rife in the plant world, particularly among the angiosperms [13,14], sets up a comparability barrier between those species descending from a WGD event and
species in all other lineages originating before the event [3]. This is largely due to the process of duplicate gene reduction, eventually affecting most pairs of duplicate genes created by the WGD,
which distributes the surviving members of duplicate pairs between two homeologous chromosomal segments in an unpredictable way, fractionation [15-19], thus scrambling gene order and disrupting the
phylogenetic signal. This difficulty is compounded by the residual duplicate gene pairs created by the WGD, complicating orthology identification essential for gene order comparison between species
descended from the doubling event and those outside it.
4. Global reconstruction methods are initially designed to work under the assumption of identical gene complement across the genomes, but if we look at dicotyledons, for example, each time we
increase the set of genomes being studied by one, the number of genes common to the whole set is reduced by approximately . Even comparing six genomes, retaining only the genes common to all six,
removes 85% of the genes from each genome, almost completely spoiling the study as far as local syntenies are concerned.
Motivated in part by these issues, we have been developing an ancestral gene order reconstruction algorithm PATHGROUPS, capable of handling large plant genomes, including descendants of WGD events,
as soon as they are released, using global optimization criteria, approached heuristically, but with well-understood performance properties [10,11]. The approach responds to the difficulties
enumerated above as follows:
1. The software has been developed and tested with all the released and annotated dicotyledon genome sequences, even though "ethical" claims by sequencing consortia leaders discourage the publication
of the results on the majority of them at this time. In this enterprise, we benefit from the up-to-date and well organized COGE platform [2,20], with its database of thousands of genome sequences and
its sophisticated, user-friendly SYNMAP facility for extraction of synteny blocks.
2. PATHGROUPS aims to rapidly reconstruct ancestral genomes according to a minimum total rearrangement count (using the DCJ metric [21]) along all the branches of a phylogenetic tree. PATHGROUPS'
speed is due to its heuristic approach (greedy search with look-ahead), which entails a small accuracy penalty as increases, but allows it to return a solution for values of where exact methods are
no longer feasible. The implementation first produces a rapid initial solution of the "small phylogeny" problem (i.e., where the tree topology is given and the ancestral genomes are to be
constructed), followed by an iterative improvement treating each ancestral node as a median problem (one unknown genome to be constructed on the basis of the three given adjacent genomes), using
techniques to avoid convergence to local minima.
3. The comparability barrier erected by a WGD event is not completely impenetrable, even though gene order fractionation is further confounded by genome rearrangement events. The WGD-origin duplicate
pairs remaining in the modern genome will contain much information about gene order in the ancestral diploid, immediately before WGD. The gene order information is retrievable through the method of
genome halving [22], which is incorporated in a natural way into PATHGROUPS, where it is combined with information on single-copy genes.
4. One of the main technical contributions of this paper is the feature of PATHGROUPS that allows the genome complement of the input genomes to vary. Where the restriction to equal gene complement
would lead to reconstructions involving only about 15% of the genes, the new feature allows close to 100% of the genes with orthologs in at least two genomes to appear in the reconstructions. The
other key innovation we put to phylogenetic use for the first time here is our "orthologs for multiple genomes" (OMG) method for combining the genes in the synteny block sets output by SYNMAP for
pairs of genomes, into orthology sets containing at most one gene from every genome in the phylogeny [23].
Both the PATHGROUPS and the OMG procedures are parameter-free. There are no thresholds or other arbitrary settings. We argue that the appropriate moment to tinker with such parameters is during the
synteny block construction and not during the orthology set construction nor the ancestral genome reconstruction. A well-tuned synteny block method goes a long way to attenuate genome alignment
problems due to paralogy. It is also the appropriate point to incorporate thresholds for declaring homology, since these depend on evolutionary divergence, which is specific to pairs of genomes.
Finally, the natural criteria for constructing pairwise syntenies do not extend in obvious ways to three or more genomes.
Six eudicotyledon sequences
There are presently almost twenty eudicotyledon genome sequences released. Removing all those that are embargoed by the sequencing consortia, all those who have undergone more than one WGD since the
divergence of the eudicots from the other angiosperms, such as Arabidopsis, and some for which the gene annotations are not easily accessible leaves us the six depicted in Figure 1, namely cacao [24
], castor bean [25], cucumber [26], grapevine [27,28], papaya [29] and poplar [30]. Of the two main eudicot clades, asterids and rosids, only the latter is represented, including the order Vitales,
considered the closest relative of the eurosids [13,31]. Poplar and cucumber are the only two to have undergone ancestral WGD since the divergence of the grapevine.
Figure 1. Eudicot phylogeny. Phylogenetic relationships among sequenced and non-embargoed eudicotyledon genomes (without regard for time scale). Poplar and cucumber each underwent WGD in their recent
lineages. Shaded dots represent gene orders reconstructed here, including the rosid, fabid, malvid and Malpighiales ancestors.
Formal methods
A genome is a set of chromosomes, each chromosome consisting of a number of genes linearly ordered. The genes are all distinct and each has positive or negative polarity, indicating on which of the
two DNA strands the gene is located.
Genomes can be rearranged through the accumulated operation of number of processes: inversion, reciprocal translocation, transposition, chromosome fusion and fission. These can all be subsumed under
a single operation called double-cut-and-join which we do not describe here. For our purposes all we need is a formula due to Yancopoulos et al. [21], stated below, that gives the genomic distance,
or length of a branch in a phylogeny, in terms of the minimum number of rearrangement operations needed to transform one genome into another.
Rearrangement distance
The genomic distance d(G[1], G[2]) is a metric counting the number of rearrangement operations necessary to transform one multichromosomal gene order G[1 ]into another G[2], where both contain the
same n genes. To calculate d efficiently, we use the breakpoint graph of G[1 ]and G[2], constructed as illustrated in Figure 2: For each genome, each gene g with a positive polarity is replaced by
two vertices representing its two ends, i.e., by a "tail" vertex and a "head" vertex in the order g[t], g[h]; for -g we would put g[h], g[t]. Each pair of successive genes in the gene order defines
an adjacency, namely the pair of vertices that are adjacent in the vertex order thus induced. For example, if i, j, -k are three neighbouring genes on a chromosome then the unordered pairs {i[h], j
[t]} and {j[h], k[h]} are the two adjacencies they define. There are two special vertices called telomeres for each linear chromosome, namely the first vertex from the first gene on the chromosome
and the second vertex from the last gene on the chromosome.
Figure 2. Steps in constructing breakpoint graph. Construction of the breakpoint graph. Left: Genomes G[1 ]and G[2], with "-" sign indicating negative polarity. Middle: Vertices and edges of
individual genome graphs. Right: Cycles in completed breakpoint graph. Adapted from [10], Figure 1.
If there are m genes on a chromosome, there are 2m vertices at this stage. As mentioned, the first and the last of these vertices are telomeres. We convert all the telomeres in genome G[1 ]and G[2 ]
into adjacencies with additional vertices all labelled T[1 ]or T[2], respectively. The breakpoint graph has a blue edge connecting the vertices in each adjacency in G[1 ]and a red edge for each
adjacency in G[2]. We make a cycle of any path ending in two T[1 ]or two T[2 ]vertices, connecting them by a red or blue edge, respectively, while for a path ending in a T[1 ]and a T[2], we collapse
them to a single vertex denoted "T".
Each vertex is now incident to exactly one blue and one red edge. This bicoloured graph decomposes uniquely into κ alternating cycles. If n' is the number of blue edges, then [21]:
The median problem and small phylogeny problem
Let G[1], G[2 ]and G[3 ]be three genomes on the same set of n genes. The rearrangement median problem is to find a genome M such that d(G[1], M) + d(G[2], M) + d(G[3], M) is minimal.
For a given unrooted binary tree T on N given genomes G[1], G[2], ⋯, G[N ](and thus with N - 2 unknown ancestral genomes M[1], M[2], ⋯, M[N - 2 ]and 2N - 3 branches) as depicted in Figure 3, the
small phylogeny problem is to infer the ancestral genomes so that the total edge length of T, namely
is minimal.
Figure 3. Reconstruction problems. Representation of median problem and small phylogeny problem. Red nodes represent ancestral genomes to be reconstructed. From [11], Figure 1.
The computational complexity of the median problem, which is just the small phylogeny problem with N = 3, is known to be NP-hard and hence so is that of the general small phylogeny problem.
The OMG problem
Pairwise orthologies
As justified in the Introduction, we construct sets of orthologous genes across the set of genomes by first identifying pairwise synteny blocks of genes. In our study, genomic data were obtained and
homologies identified within synteny blocks, using the SYNMAP tool in COGE[2,20]. This was applied to the six dicot genomes in COGE shown in Figure 1, i.e., to 15 pairs of genomes. We repeated all
the analyses to be described here using the default parameters of SYNMAP, with minimum block size 1, 2, 3 and 5 genes.
Multi-genome orthology sets
The pairwise homologies SYNMAP provided for all 15 pairs of genomes determine the set of edges E of the homology graph H = (V, E), where V is the set of genes in any of the genomes participating in
at least one homology relation.
The understanding of orthologous genes in two genomes as originating in a single gene in the most recent common ancestor of the two species, leads logically to transitivity as a necessary
consequence. If gene x in genome X is orthologous both to gene y in genome Y and gene z in genome Z, then y and z must also be orthologous, even if SYNMAP does not detect any homology between y and z
. The operational criteria for identifying homologs in SYNMAP, combining sequence similarity and syntenic context correspondences, may sometimes indicate that x is homologous to y and z, but not
necessarily that y and z are homologous. This may be due to threshold criteria, differing rates or durations of evolution, or simply statistical fluctuation. Nevertheless, it seems logical to extend
all homology relations by transitivity, so that in this example we will consider y to be homologous to z.
Ideally, then, all the genes in a connected component of H should be orthologous; insofar as SYNMAP resolves all relations of paralogy, we should expect at most one gene from each genome in such an
orthology set, or two for genomes that descend from a WGD event.
In practice, gene x in genome X may be identified as homologous to both y[1 ]and y[2 ]in genome Y. Or x in X is homologous both to gene y[1 ]in genome Y and gene z in genome Z, while z is also
homologous to y[2]. By transitivity, we again obtain that x is homologous to both y[1 ]and y[2 ]in the same genome. While one gene being homologous to several paralogs in another genome is
commonplace and meaningful, this should be relatively rare in the output from SYNMAP, where syntenic correspondence is a criterion for resolving paralogy. Aside from tandem duplicates, which do not
interfere with gene order calculations, and duplicates stemming from WGD events (which are handled separately by our methods [10]), we consider duplicate homologs in the same genome, inferred
directly by SYNMAP or indirectly by being members of the same connected component, as evidence of error or noise.
Suppose G = (V[G], E[G]) is a connected component of H with duplicate homologs in the same genome (or more than two in the case of a WGD descendant). We delete a subset of edges E' ⊂ E[G], so that
the remaining graph Q decomposes into smaller connected components, Q = Q[1 ]∪ ⋯ ∪Q[t], where each Q[i ]is free from (non-WGD) paralogy. To decide which edges to delete, we define an objective
function to be the total number of edges in the transitive closure of Q, i.e., in all the cliques generated by the components Q[i]. In other words, we seek to maximize , where Q[i ]= (V[i], E[i]).
We are not aware of any algorithm for this problem, aside from the heuristic we have recently developed [23], presented here in simplified form, but conjecture it to be NP-hard.
Let be the transitive closure of any graph P. To obtain we can raise its adjacency matrix M[P ](including 1's on the diagonal) to successively higher powers until a maximal set of non-zero
elements is attained. These non-zero elements correspond to the edges of the connectivity graph , which is the union of a set of disjoint cliques. In practice, the construction of can be made more
efficient using Warshall's algorithm [32].
The edges of , where G is a component of the homology graph H, define a single clique, since G is connected. These edges represent both given and indirectly inferred orthologies as discussed above,
but there may be paralogies. To remedy this by deleting edges from G to produce an optimal union of paralogy-free components Q = Q[1]∪ ⋯ ∪Q[t], we first examine the star subgraph s(v) of containing
ν(v) vertices, namely v, its ν(v) - 1 neighbours, and the ν(v) - 1 edges connecting the former to the latter.
Let c(v) ≥ 1 be the number of distinct genomes represented among the vertices in s(v). Let .
Without WGD descendants
1. set E' = ∅.
2. while there are still some v ∈ V where ν(v) >c(v),
(a) find the edge e ∈ E \ E' that maximizes
(b) if there are several such e, find the one that minimizes
(c) E ← E' ∪ {e}
3. relabel as Q[1], ⋯, Q[t ]the disjoint components created by deleting edges. These contain the vertices of the required components of Q.
Implicit in each greedy step is an attempt to create large orthology sets. If the deleted edges create two partitioned components, i.e., each with no internal paralogy, then the increment in F will
be proportional to the sum of the squares of the number of vertices in each one. This favours a decomposition into one large and one small component rather than two equal sized components.
WGD descendants allowed. To handle paralogs of WGD origin, the definition of c(v) must be amended to take account an allowance of 2 vertices from a single genome in s(v) if these are from the
appropriate genomes. And the condition in Step 2 must require that at most two vertices be contained in s(v) from any one genome, and only if these involve WGD descendants.
Note that it is neither practical nor necessary to deal with H in its entirety, with its hundred thousand or so edges. It suffices to delete edges, if necessary, from each connected component G
independently. Typically, this will contain only a few genes and very rarely more than 100. The output is a decomposition of G into two or more smaller sets with no undesired paralogy. These are the
orthology sets we input into the gene order reconstruction step.
For the small number (typically from 1 to 5) of very large components G we encounter, called "tangles" in [23], we break them into more tractable size sets by extracting genes with large numbers of
homologs, together with their immediate homologs, and treat them independently. This is done recursively on the remaining part of G until a small enough set of homologies is obtained that can be
handled by the procedure detailed above.
Though these procedures require only a few minutes of computation, there are a number of devices we employ to slash this time without materially affecting the end results of our analysis. One is
simply to remove at the outset all components G containing only two genes from two genomes separated by three or more ancestral nodes in the given phylogenetic tree. The algorithms later in our
pipeline would not infer an ortholog of such genes in the ancestral genomes, so there is no point in including them in the analysis. This step allows great computational saving when the minimum size
of syntenic blocks in SYNMAP is set to 1.
Once we have our solution to the OMG problem on the set of pairwise syntenies, we can proceed to reconstruct the ancestral genomes. First, we briefly review the PATHGROUPS approach (previously
detailed in [10,11]) as it applies to the median problem with three given genomes and one ancestor to be reconstructed, all having the same gene complement. The same principles apply to the
simultaneous reconstruction of all the ancestors in the small phylogeny problem, and to the incorporation of genomes having previously undergone WGD.
We redefine a path to be any connected subgraph of a breakpoint graph, namely any connected part of a cycle. Initially, each blue edge in the given genomes is a path. A fragment is any set of genes
connected by red edges in a linear order. The set of fragments represents the current state of the reconstruction procedure. Initially the set of fragments contains all the genes, but no red edges,
so each gene is a fragment by itself.
The objective function for the small phylogeny problem consists of the sum of a number of genomic distances, one distance for every branch in the phylogeny. Each of these distances corresponds to a
breakpoint graph. A given genome determines blue edges in one breakpoint graph, while the red edges correspond to the ancestral genome being constructed. For each such ancestor, the red edges are
identical in all the breakpoint graphs corresponding to distances to that ancestor.
A pathgroup is a set of three paths, all beginning with the same vertex, one path from each partial breakpoint graph currently being constructed. Initially, there is one pathgroup for each vertex.
Our main algorithm aims to construct three breakpoint graphs with a maximum aggregate number of cycles. At each step it adds an identical red edge to each path in the pathgroup, altering all three
breakpoint graphs, as in Figure 4. It is always possible to create one cycle, at least, by adding a red edge between the two ends of any one of the paths. The strategy is to create as many cycles as
possible. If alternate choices of steps create the same number of cycles, we choose one that sets up the best configuration for the next step. In the simplest formulation, the pathgroups are
Figure 4. Calculation of priorities. Priorities of all pathgroups of form [(x, a), (x, b), (x, c)] for inserting red edges, for each ancestral vertex in the median problem. Includes sketch of three
paths in "x" pathgroup plus other paths involved in calculating priority. For example, completing the pathgroup [(x, y), (x, y), (x, z)] by adding the red edge xy always produces two cycles, but can
set up a pathgroup with 3 potential cycles (priority 2), 2 potential cycles (priority 3) or 1 potential cycle (priority 4). From [11], Figure 2.
1. by the maximum number of cycles that can be created within the group, without giving rise to circular chromosomes, and
2. for those pathgroups allowing equal numbers of cycles, by considering the maximum number of cycles that could be created in the next iteration of step 1, in any one pathgroup affected by the
current choice.
By maintaining a list of pathgroups for each priority level, and a list of fragment endpoint pairs (initial and final), together with appropriate pointers, the algorithm requires O(n) running time.
In the current implementation of PATHGROUPS[11], much greater accuracy, with little additional computational cost, is achieved by designing a refined set of 163 priorities, based on a two-step
look-ahead greedy algorithm.
For completeness, we remark that some genomes are incompletely assembled and only available in the form of fragmented chromosomes. These are treated as full chromosomes by our procedures; for this
and other reasons the reconstructed ancestors may also be output as chromosomal fragments. To correct the distance between two such fragmented genomes, we note that part of the DCJ distance allows
for a number of chromosomal fusions or fissions to equalize the numbers of chromosomes in the two genomes. This number is a methodological artifact and should be removed from the DCJ score to
estimate the true evolutionary distance. Details of this correction have been published elsewhere [33].
Inferring the gene content of ancestral genomes
The assumption of equal gene content simplifies the mathematics of PATHGROUPS and allows for rapid computation. Unfortunately it also drastically reduces the number of genes available for ancestral
reconstruction, so that the method loses its utility when more than a few genomes are involved.
In this section, we address the problem of assigning gene content to the ancestral genomes, a question that was avoided previously when all genomes had the same content. Then in the next section we
show how to adapt PATHGROUPS to the unequal gene content median problem.
There are two natural ways to assign genes parsimoniously to ancestral genomes. One is to treat a different presence or absence status at the two ends of a branch of a phylogenetic tree as an
evolutionary event, and to minimize, by dynamic programming, the number of events for each gene. However, if we have a rooted tree, it is may be more appropriate to allow any number of loss events
for a gene but only one gain (innovation) event, since convergent evolution of a gene is unlikely. With real data sets, however, this rule (Dollo's principle) may be too restrictive. In our
implementation, we compromise, in allowing multiple gains but when there are equally costly choices during execution of the assignment algorithm, to choose the one that attributes the gain as early
in the tree as possible.
Using dynamic programming on unrooted trees, our assignment of genes to ancestors simply assures that if a gene is in at least two of the three adjacent nodes of an ancestral genome, it will be in
that ancestor. If it is in less than two of the adjacent nodes, it will be absent from the ancestor.
Median and small phylogeny problems with unequal genomes
To generalize our construction of the three breakpoint graphs for the median problem to the case of three unequal genomes, we set up the pathgroups much as before, and we use a slightly modified
priority structure. Each pathgroup, however, may have three paths, as before, or only two paths, if the initial vertex of the paths comes from a gene absent from one of the leaves. Moreover, when one
or two cycles are completed by drawing a red edge, this edge must be left out of the third breakpoint graph if the corresponding gene is missing from the third genome.
The consequence of this strategy is that some of the paths in the breakpoint graph will never be completed into cycles, impeding the evaluation of the objective function (1). We could continue to
search for cycles under a weakened definition, but this would be computationally costly to do in an exhaustive way, spoiling the linear run time property of the algorithm.
Nevertheless, we can quickly find "hidden" cycles resulting from the simple deletion of genes from one of the genomes, of an otherwise common gene sequence, a frequent occurrence. This is illustrated
in Figure 5, where knowledge from a limited search can be incorporated into the priority scheme when this vertex is missing from another breakpoint graph.
Figure 5. Extension to unequal gene complements. Handling pathgroups with unequal gene complement. Paths containing genes not in the median, such as gene 2 in the illustration, are "extended" by the
sequential addition of vertices from extra genes until a vertex from a median gene in encountered. In the depicted example, this shows that there is a second, hidden, cycle involving 1[h ]and 3[t].
In larger examples, this would affect the relative priority of this pathgroup. Whether or not there are hidden cycles is detected by a rapid search.
The small phylogeny problem can be formulated and solved using the same principles as the median problem, as with the case of equal genomes. The solution, however, only serves as an initialization.
As in [11], the solution can be improved by applying the median algorithm to each ancestral node in turn, based on the three neighbour nodes, and iterating until convergence of the total tree length
(2). At each step, the new median is accepted if the sum of the three branch lengths is no greater than the existing one. This strategy of allowing the median to change as long as it does not
increase total tree length is effective in exploring local solution space and avoiding local minima.
Coping with fractionation
As shown in Figure 6, PATHGROUPS integrates a descendant T of a WGD into a phylogeny by creating an immediate median-like ancestor node A in the tree where two of the paths (say G[1 ]and G[2 ]in
Figure 4 connect to T and the third (G[3]) to an ancestral node R in the phylogeny. Like all ancestral nodes, R is connected to two other nodes in the tree, leaves or ancestral.
Figure 6. Extension for incorporating WGD. PATHGROUP for WGD. A consisting of two identical genomes A' and A'' on branch between descendant T and ancestor median R. Shown are are two configurations
with different priorities.
There are some technical differences connected with avoiding the creation of circular chromosomes in PATHGROUPS for WGD. Our current implementation can only handle the case where T contains exactly
two copies of every gene in R. Thus we consider only the duplicate genes in T in constructing A during the small phylogeny analysis. After this is constructed single-copy genes are added to A in a
way that does not change the DCJ distance (1). This simply involves inserting in A each run of single-copy genes next to one of its adjacent (in T) double-copy genes g, and inserting the same run of
single-copy genes next to the duplicate of g as well. Sometimes both copies of g have adjacent single-copy runs in T, due to the process of fractionation. In this case the two single-copy runs must
be merged (or consolidated [15]). Using present methods, evidence from R does not contribute to how this merger proceeds, so that the gene order in this consolidated run may have a large random
component. This is particularly true of longer runs, with more than two or three genes.
Adding some randomness to a gene order will tend to create roughly one new rearrangement per added breakpoint [34] and fractionation tends to involve deletions of two or three consecutive WGD
paralogs [18], though many of the deletions will be adjacent, creating longer runs of single-copy genes.
This suggests that the distance between A and R may be exaggerated by a the addition of anywhere from s to s on the average for each single-copy run of length s for s larger than some cutoff value.
Therefore, as a crude correction, we deduct from the distance s for s > 3.
The two genomes for which this is pertinent are cucumber and poplar. Figure 7 shows the very different distributions of single-copy run lengths s for the two genomes, reflecting the relative recency
of the poplar WGD. The distance correction turns out to be 2524 for cucumber but only 536 for poplar. The distances portrayed in the next section incorporate these corrections.
Figure 7. Single-copy runs in WGD descendants. Distribution of length of runs of single-copy genes in cucumber and poplar genomes.
The Malpighiales
In the process of reconstructing the ancestors, we can also graphically demonstrate the great spread in genome rearrangement rates among the species studied, in particular the well-known conservatism
of the grapevine genome, as illustrated by the branch lengths in Figure 8.
Figure 8. Positioning the Malpighiales. Competing hypotheses for the phylogenetic assignment of the Malpighiales, with branch lengths proportional to genomic distances, following the reconstruction
of the ancestral genomes with PATHGROUPS. Red nodes indicate WGD event.
It has been suggested recently that the order Malpighiales should be assigned to the malvids rather than the fabids [35]. In our results, the tree supporting this suggestion is indeed more
parsimonious than the more traditional one. However, based on the limited number of genomes at our disposal, this is not conclusive.
Properties of the solution as a function of synteny block size
To construct the trees in Figure 8, from the 15 pairwise comparisons of the gene orders of the six dicot genomes, we identified some 18,000 sets of orthologs using SYNMAP and the OMG procedure. This
varied surprisingly little as the minimum size for a synteny block was set to 1, 2, 3 or 5, as in Figure 9. On the other hand, the total tree length was quite sensitive to minimum synteny block size.
This can be interpreted in terms of risky orthology identifications for small block sizes.
Figure 9. Role of minimum block length parameter. Left: Effect of minimum block size on number of orthology sets and total tree length. Right: Convergence behaviour as a function of minimum block
Of the 18,000 orthology sets, the number of genes considered on each branch ranged from 12,000 to 15,000. When the minimum block size is 5, the typical branch length over the 11 branches of the tree
(including one branch from each WGD descendant to its perfectly doubled ancestor plus one from that ancestor to a speciation node) is about 1600, so that is around 0.12, a low value for which
simulations have shown PATHGROUPS to be rather accurate, at least in the equal genomes context [11].
Figure 9 shows the convergence behaviour as the set of medians algorithms is repeated at each ancestral node. Each iteration required about 8 minutes on a MacBook.
Block validation
To what extent do the synteny blocks output by SYNMAP for a pair of genomes appear in the reconstructed ancestors on the path between these two genomes in the phylogeny? Answering this in a positive
way could validate the notion of syntenic conservation implicit in the block construction. If, however, the ancestors did not reflect the pairwise block construction due to conflicting homology
structure among other descendants of the same ancestors, we would be forced to discount the pairwise syntenies as artifactual.
Since our reconstructed ancestral genomes are not in the curated COGE database (and are lacking the DNA sequence version required of items in the database), we cannot use SYNMAP to construct synteny
blocks between modern and ancestor genomes. We can only see if the genes in the original pairwise syntenies tend to be colinear as well in the ancestor.
On the path connecting grapevine to cacao in the phylogeny in Figure 1, there are two ancestors, the malvid ancestor and the rosid ancestor. There are 308 syntenic blocks containing at least 5 genes
in the output of SYNMAP. A total of 11,229 genes are involved, of which 10,872 and 10,848 (97%) are inferred to be in the malvid and rosid ancestor respectively.
Table 1 shows that in each ancestor, roughly half of the blocks appear intact. This is indicated by the fact there are zero syntenic breaks in these blocks (no rearrangement breakpoints) and the
average amount of relative movement of adjacent genes within these blocks is less than one gene to the left or right of its original position almost all of the time. Most of the other blocks are
affected by one or two breaks, largely because the ancestors can be reconstructed with confidence by PATHGROUPS only in terms of a few hundred chromosomal fragments rather than intact chromosomes,
for reasons given in our detailed presentation of PATHGROUPS above. And it can be seen that the average shuffling of genes within these split blocks is little different from in the intact blocks.
Table 1. Integrity of cacao-grapevine syntenic blocks
We have developed a methodology for reconstructing ancestral gene orders in a phylogenetic tree, minimizing the number of genome rearrangements they imply over the entire tree. The input is the set
of synteny blocks produced by SYNMAP for all pairs of genomes. The two steps in this method, OMG and PATHGROUPS, are parameter-free; we argue that the proper moment for entering thresholds and other
parameters, as well as resolving paralogy, is in the pairwise synteny construction. Our method rapidly and accurately handles large data sets (tens of thousands of genes per genome, and potentially
dozens of genomes), although we have been constrained, for non-technical reasons (i.e., embargoes), to present the case of 6 genomes only. There is no requirement of equal gene complement.
For larger numbers of genomes, the quadratic increase in the number of pairs of genomes would become problematic, but this could be handled by extracting information from SYNMAP only from genomes
pairs that are relatively close phylogenetically.
Future work will concentrate first on ways to complete cycles in the breakpoint graph which are currently left as paths, without substantially increasing computational complexity. This will increase
the accuracy (optimality) of the results. Second, the incorporation of WGD descendants in the phylogeny will be upgraded to reflect the new unequal gene content techniques, in order to reduce the
crude correction terms now associated with single-copy regions. Third, to increase the biological utility of the results, a post-processing component will be added to differentiate regions of
confidence in the reconstructed genomes from regions of ambiguity.
The PATHGROUPS software, together with sample data, may be downloaded from http://137.122.149.195/IsbraSoftware/smallPhylogenyInDel.html webcite
The OMG software, together with sample data, may be downloaded from http://137.122.149.195/IsbraSoftware/OMGMec.html webcite
The data used here, as well as other genomic data, and the SynMap software for producing pairwise homology sets are available at http://genomevolution.org/CoGe/OrganismView.pl webcite and http://
genomevolution.org/CoGe/SynMap.pl webcite, respectively.
Because of the variety of formats in which genome data are released, and incorporated into COGE, the conversion of several SynMap pairwise homology outputs into a master homology graph, conserving
positional (on chromosome, fragment, contig, scaffold, pseudomolecule, etc.) information, at this time still requires short programs or scripts specific to the genomes under study.
This article has been published as part of BMC Bioinformatics Volume 13 Supplement 10, 2012: "Selected articles from the 7th International Symposium on Bioinformatics Research and Applications
(ISBRA'11)". The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S10.
Thanks to Victor A. Albert for advice, Eric Lyons for much help and Nadia El-Mabrouk for encouragement in this work. Research supported by a postdoctoral fellowship to CZ from the NSERC, and a
Discovery grant to DS from the same agency. DS holds the Canada Research Chair in Mathematical Genomics.
1. Moore G, Devos KM, Wang Z, Gale MD: Cereal genome evolution. Grasses, line up and form a circle.
Current Biology 1995, 5:737-739. PubMed Abstract | Publisher Full Text
2. Lyons E, Pedersen B, Kane J, Alam M, Ming R, Tang H, Wang X, Bowers J, Paterson A, Lisch D, Freeling M: Finding and comparing syntenic regions among Arabidopsis and the outgroups papaya, poplar,
and grape: CoGe with rosids.
Plant Physiology 2008, 148:1772-1781. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
3. Tang H, Wang X, Bowers J, Ming R, Alam M, Paterson A: Unraveling ancient hexaploidy through multiply-aligned angiosperm gene maps.
Genome Research 2008, 18:1944. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
4. Murphy WJ, Larkin DM, Everts-van der Wind A, Bourque G, Tesler G, Auvil L, Beever JE, Chowdhary BP, Galibert F, Gatzke L, Hitte C, Meyers SN, Milan D, Ostrander EA, Pape G, Parker HG, Raudsepp T,
Rogatcheva MB, Schook LB, Skow LC, Welge M, Womack JE, O'brien SJ, Pevzner PA, Lewin HA: Dynamics of mammalian chromosome evolution inferred from multispecies comparative maps.
Science 2005, 309:613-617. PubMed Abstract | Publisher Full Text
5. Ma J, Zhang L, Suh BB, Raney BJ, Burhans RC, Kent WJ, Blanchette M, Haussler D, Miller W: Reconstructing contiguous regions of an ancestral genome.
Genome Research 2006, 16:1557-1565. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
6. Ouangraoua A, Boyer F, McPherson A, Tannier E, Chauve C: Prediction of contiguous regions in the amniote ancestral genome. In Bioinformatics Research and Applications, 5th International Symposium
(ISBRA), Lecture Notes in Computer Science. Volume 5542. Edited by Mandoiu II, Narasimhan G, Zhang Y. Springer; 2009::173-185.
7. Gordon JL, Byrne KP, Wolfe KH: Additions, losses, and rearrangements on the evolutionary route from a reconstructed ancestor to the modern Saccharomyces cerevisiae genome.
PLoS Genetics 2009, 5:e1000485. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
8. Tannier E: Yeast ancestral genome reconstructions: The possibilities of computational methods. In Comparative Genomics, 7th International Workshop (RECOMB-CG), Lecture Notes in Computer Science.
Volume 5817. Edited by Ciccarelli F, Miklós I. Springer; 2009::1-12. Publisher Full Text
9. Zheng C: PATHGROUPS, a dynamic data structure for genome reconstruction problems.
Bioinformatics 2010, 26:1587-1594. PubMed Abstract | Publisher Full Text
10. Zheng C, Sankoff D: On the PATHGROUPS approach to rapid small phylogeny.
BMC Bioinformatics 2011, 12(Suppl 1):S4. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
11. Bertrand D, Gagnon Y, Blanchette M, El-Mabrouk N: Reconstruction of ancestral genome subject to whole genome duplication, speciation, rearrangement and loss. In Algorithms in Bioinformatics, 10th
International Workshop, (WABI), Lecture Notes in Computer Science. Volume 6293. Edited by Moulton V, Singh M. Springer; 2010::78-89. Publisher Full Text
12. Soltis DE, Albert VA, Leebens-Mack J, Bell CD, Paterson AH, Zheng C, Sankoff D, Depamphilis CW, Wall PK, Soltis PS: Polyploidy and angiosperm diversification.
American Journal of Botany 2009, 96:336-348. PubMed Abstract | Publisher Full Text
13. Burleigh JG, Bansal MS, Wehe A, Eulenstein O: Locating large-scale gene duplication events through reconciled trees: implications for identifying ancient polyploidy events in plants.
Journal of Computational Biology 2009, 16:1071-1083. PubMed Abstract | Publisher Full Text
14. Langham RJ, Walsh J, Dunn M, Ko C, Goff SA, Freeling M: Genomic duplication, fractionation and the origin of regulatory novelty.
Genetics 2004, 166:935-945. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
15. Thomas B, Pedersen B, Freeling M: Following tetraploidy in an Arabidopsis ancestor, genes were removed preferentially from one homeolog leaving clusters enriched in dose-sensitive genes.
Genome Research 2006, 16:934-946. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
16. Sankoff D, Zheng C, Zhu Q: The collapse of gene complement following whole genome duplication.
BMC Genomics 2010, 11:313. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
17. Wang B, Zheng C, Sankoff D: Fractionation statistics.
BMC Bioinformatics 2011, 12(Suppl 9):S5. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
18. Sankoff D, Zheng C, Wang B: A model for biased fractionation after whole genome duplication.
19. Lyons E, Freeling M: How to usefully compare homologous plant genes and chromosomes as DNA sequences.
Plant Journal 2008, 53:661-673. PubMed Abstract | Publisher Full Text
20. Yancopoulos S, Attie O, Friedberg R: Efficient sorting of genomic permutations by translocation, inversion and block interchange.
Bioinformatics 2005, 21:3340-3346. PubMed Abstract | Publisher Full Text
21. El-Mabrouk N, Sankoff D: The reconstruction of doubled genomes.
SIAM Journal on Computing 2003, 32:754-792. Publisher Full Text
22. Zheng C, Swenson KM, Lyons E, Sankoff D: OMG! Orthologs in multiple genomes - competing graph-theoretical formulations. In Algorithms in Bioinformatics - 11th International Workshop (WABI),
Lecture Notes in Computer Science. Volume 6833. Edited by Przytycka TM, Sagot MF. Springer; 2011::364-375. Publisher Full Text
23. Argout X, Salse J, Aury JM, Guiltinan MJ, Droc G, Gouzy J, Allegre M, Chaparro C, Legavre T, Maximova SN, Abrouk M, Murat F, Fouet O, Poulain J, Ruiz M, Roguet Y, Rodier-Goud M, Barbosa-Neto JF,
Sabot F, Kudrna D, Ammiraju JS, Schuster SC, Carlson JE, Sallet E, Schiex T, Dievart A, Kramer M, Gelley L, Shi Z, Berard A, Viot C, Boccara M, Risterucci AM, Guignon V, Sabau X, Axtell MJ, Ma Z,
Zhang Y, Brown S, Bourge M, Golser W, Song X, Clement D, Rivallan R, Tahi M, Akaza JM, Pitollat B, Gramacho K, D'Hont A, Brunel D, Infante D, Kebe I, Costet P, Wing R, McCombie WR, Guiderdoni E,
Quetier F, Panaud O, Wincker P, Bocs S, Lanaud C: The genome of Theobroma cacao.
Nature Genetics 2011, 43:101-108. PubMed Abstract | Publisher Full Text
24. Chan AP, Crabtree J, Zhao Q, Lorenzi H, Orvis J, Puiu D, Melake-Berhan A, Jones KM, Redman J, Chen G, Cahoon EB, Gedil M, Stanke M, Haas BJ, Wortman JR, Fraser-Liggett CM, Ravel J, Rabinowicz PD:
Draft genome sequence of the oilseed species Ricinus communis.
Nature Biotechnology 2010, 28:951-956. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
25. Huang S, Li R, Zhang Z, Li L, Gu X, Fan W, Lucas W, Wang X, Xie B, Ni P, Ren Y, Zhu H, Li J, Lin K, Jin W, Fei Z, Li G, Staub J, Kilian A, van der Vossen EA, Wu Y, Guo J, He J, Jia Z, Ren Y, Tian
G, Lu Y, Ruan J, Qian W, Wang M, Huang Q, Li B, Xuan Z, Cao J, Wu Z, Zhang J, Cai Q, Bai Y, Zhao B, Han Y, Li Y, Li X, Wang S, Shi Q, Liu S, Cho WK, Kim JY, Xu Y, Heller-Uszynska K, Miao H, Cheng
Z, Zhang S, Wu J, Yang Y, Kang H, Li M, Liang H, Ren X, Shi Z, Wen M, Jian M, Yang H, Zhang G, Yang Z, Chen R, Liu S, Li J, Ma L, Liu H, Zhou Y, Zhao J, Fang X, Li G, Fang L, Li Y, Liu D, Zheng
H, Zhang Y, Qin N, Li Z, Yang G, Yang S, Bolund L, Kristiansen K, Zheng H, Li S, Zhang X, Yang H, Wang J, Sun R, Zhang B, Jiang S, Wang J, Du Y, Li S: The genome of the cucumber, Cucumis sativus
Nature Genetics 2009, 41:1275-1281. PubMed Abstract | Publisher Full Text
26. Jaillon O, Aury JM, Noel B, Policriti A, Clepet C, Casagrande A, Choisne N, Aubourg S, Vitulo N, Jubin C, Vezzi A, Legeai F, Hugueney P, Dasilva C, Horner D, Mica E, Jublot D, Poulain J, Bruyere
C, Billault A, Segurens B, Gouyvenoux M, Ugarte E, Cattonaro F, Anthouard V, Vico V, Del Fabbro C, Alaux M, Di Gaspero G, Dumas V, Felice N, Paillard S, Juman I, Moroldo M, Scalabrin S, Canaguier
A, Le Clainche I, Malacrida G, Durand E, Pesole G, Laucou V, Chatelet P, Merdinoglu D, Delledonne M, Pezzotti M, Lecharny A, Scarpelli C, Artiguenave F, Pe ME, Valle G, Morgante M, Caboche M,
Adam-Blondon AF, Weissenbach J, Quetier F, Wincker P: The grapevine genome sequence suggests ancestral hexaploidization in major angiosperm phyla.
Nature 2007, 449:463-467. PubMed Abstract | Publisher Full Text
27. Velasco R, Zharkikh A, Troggio M, Cartwright DA, Cestaro A, Pruss D, Pindo M, Fitzgerald LM, Vezzulli S, Reid J, Malacarne G, Iliev D, Coppola G, Wardell B, Micheletti D, Macalma T, Facci M,
Mitchell JT, Perazzolli M, Eldredge G, Gatto P, Oyzerski R, Moretto M, Gutin N, Stefanini M, Chen Y, Segala C, Davenport C, Dematte L, Mraz A, Battilana J, Stormo K, Costa F, Tao Q, Si-Ammour A,
Harkins T, Lackey A, Perbost C, Taillon B, Stella A, Solovyev V, Fawcett JA, Sterck L, Vandepoele K, Grando SM, Toppo S, Moser C, Lanchbury J, Bogden R, Skolnick M, Sgaramella V, Bhatnagar SK,
Fontana P, Gutin A, Van de Peer Y, Salamini F, Viola R: A high quality draft consensus sequence of the genome of a heterozygous grapevine variety.
PLoS ONE 2007, 2:e1326. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
28. Ming R, Hou S, Feng Y, Yu Q, Dionne-Laporte A, Saw JH, Senin P, Wang W, Ly BV, Lewis KL, Salzberg SL, Feng L, Jones MR, Skelton RL, Murray JE, Chen C, Qian W, Shen J, Du P, Eustice M, Tong E,
Tang H, Lyons E, Paull RE, Michael TP, Wall K, Rice DW, Albert H, Wang ML, Zhu YJ, Schatz M, Nagarajan N, Acob RA, Guan P, Blas A, Wai CM, Ackerman CM, Ren Y, Liu C, Wang J, Wang J, Na JK,
Shakirov EV, Haas B, Thimmapuram J, Nelson D, Wang X, Bowers JE, Gschwend AR, Delcher AL, Singh R, Suzuki JY, Tripathi S, Neupane K, Wei H, Irikura B, Paidi M, Jiang N, Zhang W, Presting G,
Windsor A, Navajas-Perez R, Torres MJ, Feltus FA, Porter B, Li Y, Burroughs AM, Luo MC, Liu L, Christopher DA, Mount SM, Moore PH, Sugimura T, Jiang J, Schuler MA, Friedman V, Mitchell-Olds T,
Shippen DE, dePamphilis CW, Palmer JD, Freeling M, Paterson AH, Gonsalves D, Wang L, Alam M: The draft genome of the transgenic tropical fruit tree papaya (Carica papaya Linnaeus).
Nature 2008, 452:991-996. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
29. Tuskan GA, Difazio S, Jansson S, Bohlmann J, Grigoriev I, Hellsten U, Putnam N, Ralph S, Rombauts S, Salamov A, Schein J, Sterck L, Aerts A, Bhalerao RR, Bhalerao RP, Blaudez D, Boerjan W, Brun
A, Brunner A, Busov V, Campbell M, Carlson J, Chalot M, Chapman J, Chen GL, Cooper D, Coutinho PM, Couturier J, Covert S, Cronk Q, Cunningham R, Davis J, Degroeve S, Dejardin A, Depamphilis C,
Detter J, Dirks B, Dubchak I, Duplessis S, Ehlting J, Ellis B, Gendler K, Goodstein D, Gribskov M, Grimwood J, Groover A, Gunter L, Hamberger B, Heinze B, Helariutta Y, Henrissat B, Holligan D,
Holt R, Huang W, Islam-Faridi N, Jones S, Jones-Rhoades M, Jorgensen R, Joshi C, Kangasjarvi J, Karlsson J, Kelleher C, Kirkpatrick R, Kirst M, Kohler A, Kalluri U, Larimer F, Leebens-Mack J,
Leple JC, Locascio P, Lou Y, Lucas S, Martin F, Montanini B, Napoli C, Nelson DR, Nelson C, Nieminen K, Nilsson O, Pereda V, Peter G, Philippe R, Pilate G, Poliakov A, Razumovskaya J, Richardson
P, Rinaldi C, Ritland K, Rouze P, Ryaboy D, Schmutz J, Schrader J, Segerman B, Shin H, Siddiqui A, Sterky F, Terry A, Tsai CJ, Uberbacher E, Unneberg P, Vahala J, Wall K, Wessler S, Yang G, Yin
T, Douglas C, Marra M, Sandberg G, Van de Peer Y, Rokhsar D: The genome of black cottonwood, Populus trichocarpa (Torr. & Gray).
Science 2006, 313:1596-1604. PubMed Abstract | Publisher Full Text
30. Forest F, Chase MW: Eudicots. In The Timetree of Life. Edited by Hedges SB, Kumar S. Oxford University Press; 2009:169-176.
31. Warshall S: A theorem on Boolean matrices.
Journal of the ACM 1962, 9:11-12. Publisher Full Text
32. Muñnoz A, Sankoff D: Rearrangement phylogeny of genomes in contig form.
IEEE/ACM Transactions on Computational Biology and Bioinformatics 2010, 7:579-587. PubMed Abstract | Publisher Full Text
33. Sankoff D, Haque L: The distribution of genomic distance between random genomes.
Journal of Computational Biology 2006, 13:1005-1012. PubMed Abstract | Publisher Full Text
34. Shulaev V, Sargent DJ, Crowhurst RN, Mockler TC, Folkerts O, Delcher AL, Jaiswal P, Mockaitis K, Liston A, Mane SP, Burns P, Davis TM, Slovin JP, Bassil N, Hellens RP, Evans C, Harkins T, Kodira
C, Desany B, Crasta OR, Jensen RV, Allan AC, Michael TP, Setubal JC, Celton JM, Rees DJ, Williams KP, Holt SH, Ruiz Rojas JJ, Chatterjee M, Liu B, Silva H, Meisel L, Adato A, Filichkin SA,
Troggio M, Viola R, Ashman TL, Wang H, Dharmawardhana P, Elser J, Raja R, Priest HD, Bryant DW, Fox SE, Givan SA, Wilhelm LJ, Naithani S, Christoffels A, Salama DY, Carter J, Lopez Girona E,
Zdepski A, Wang W, Kerstetter RA, Schwab W, Korban SS, Davik J, Monfort A, Denoyes-Rothan B, Arus P, Mittler R, Flinn B, Aharoni A, Bennetzen JL, Salzberg SL, Dickerman AW, Velasco R, Borodovsky
M, Veilleux RE, Folta KM: The genome of woodland strawberry (Fragaria vesca).
Nature Genetics 2011, 43:109-116. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
Sign up to receive new article alerts from BMC Bioinformatics
|
{"url":"http://www.biomedcentral.com/1471-2105/13/S10/S9","timestamp":"2014-04-20T21:03:00Z","content_type":null,"content_length":"164878","record_id":"<urn:uuid:f750a3c2-ebd5-4460-ae71-de41ca683d14>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Do you know of the equivalence of recursivity and the following... ?
[FOM] Do you know of the equivalence of recursivity and the following... ?
tphhec01@degas.ceu.hu tphhec01 at degas.ceu.hu
Tue Jun 1 04:02:12 EDT 2004
Dear FOM-ers,
I cooked up the following rewording of recursivity, and I'd like to know
whether it is now a genuine result, or it is a well-known fact. In the
latter case, please be as kind as to give references.
Nb. the following stuff is presented in the scope of finite set theory, but
it would be easy to "port" the statement to arithmetic (a bit more common
playground), it's just then it would become slightly more technical.
* * *
"E" will denote the set-theoretic membership relation.
"V" will denote the standard model of finite set theory (the set of
hereditarily finite sets).
Def.: Let A, B be E-type structures (ie., their similarity type consists of
the binary rel. symbol "E"), and i: A -> B be an embedding. We say that i is
_faithful_, if for any a in A and b in B such that b E i(a) [here E is E of
B], there is an a' in A such that b = i(a'); that is, the i-image of A is a
downward closed subset of B wrt. the transitive closure of E.
Thm.: A subset X of V is recursive iff there is a first-order formula f(x)
such that
i) f(x) defines X in V;
ii) for any q in V, there is a finite subset F of V such that
* F includes q;
* for any faithful embedding i: F -> B (here B is an *arbitrary* E-type
structure, no set-theoretic axiom is required!)
V |= f(q) <=> B |= f(i(q)) .
(Intuitively, a set is recursive iff for any object there is a finite
witness showing that object belongs / doesn't belong to the set; this might
be used in some "pro Church Thesis" argument.)
Csaba Henk
"There's more to life, than books, you know but not much more..."
[The Smiths]
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-June/008239.html","timestamp":"2014-04-19T01:52:52Z","content_type":null,"content_length":"4043","record_id":"<urn:uuid:1bcff6e4-d1d3-46a4-bb6c-e5fb7e84af85>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Karen Smith
Biographies of Women Mathematicians
Tight Closure of Parameter Ideals and F-rationality
University of Michigan, 1993
Let R be a locally excellent domain of prime characteristic and let R^+ denote its integral closure in an algebraic closure of its fraction field. It is shown that the tight closure I^* of any local
parameter ideal I is equal to (IR^+ intersection R). It is shown that locally excellent rings in which all parameter ideals are tightly closed must be pseudorational. Applications to algebraic
varieties over fields of any characteristic are developed, including a tight closure method for proving that certain varieties over a field of characteristic 0 have rational singularities. Examples
are given to demonstrate the effectiveness of this method. It is also shown that the parameter test ideal behaves well under localization: if J is the parameter test ideal for a complete local
Cohen-Macaulay ring R, then JU^-1R is the parameter test ideal for U^-1R, where U is any multiplicative system in R.
|
{"url":"http://www.agnesscott.edu/lriddle/women/abstracts/ksmith_abstract.htm","timestamp":"2014-04-17T07:09:49Z","content_type":null,"content_length":"3376","record_id":"<urn:uuid:a1d63969-3098-4541-b600-4b078663b140>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by Elizabeth Wages—Wolfram|Alpha Blog
Blog Posts from this author:
Wolfram|Alpha is a powerful tool for finding information about the universe at large, but sometimes we are interested in a much smaller universe: our families. Genealogical research is an
increasingly popular hobby, and one which Wolfram|Alpha can make easier using features across several of its subject areas.
We blogged last year about how Wolfram|Alpha can map family relations, which can certainly be more helpful the further your genealogical research takes you from the trunk of your family tree.
Recently, another researcher (and previously unknown relative) contacted me. This new connection sent me straight to Wolfram|Alpha to determine our relationship. Her great grandfather was my great
grandfather’s brother and, thanks to Wolfram|Alpha, I learned that she is my third cousin.
Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…
Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!
Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?
Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
|
{"url":"http://blog.wolframalpha.com/author/elizabeth-wages/","timestamp":"2014-04-17T18:24:16Z","content_type":null,"content_length":"30786","record_id":"<urn:uuid:6502a0b3-f01f-402e-a79d-c95871f2c700>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Julia collects shells from the beach while on vacation with her family. Suppose each day, Julia collects 3 times as many shells as she did the day before. If Julia collects 1 shell on the first day,
how many shells will she collect on the fifth day of her vacation? 1. 3 2. 9 3. 27 4. 81
• one year ago
• one year ago
Best Response
You've already chosen the best response.
if she collects 1 on the first day, and she collects 3 times that on the second day, she collects 1x3=3 shells the second day. The third day, she collects 3 times that amount, so 3x3=9 shells.
The fourth day, she collects 3 times that amount, 3x9=27, and the fifth day, 27x3=81 shells
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50956eefe4b0d0275a3c8bb7","timestamp":"2014-04-16T10:36:56Z","content_type":null,"content_length":"28079","record_id":"<urn:uuid:6d4753fc-1f8e-4374-b490-9032146ed570>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is 53 KILOGRAMS IN POUNDS?
Mr What? is the first search engine of definitions and meanings, All you have to do is type whatever you want to know in the search box and click WHAT IS!
All the definitions and meanings found are from third-party authors, please respect their copyright.
© 2014 - mrwhatis.net
|
{"url":"http://mrwhatis.net/53-kilograms-in-pounds.html","timestamp":"2014-04-19T14:31:40Z","content_type":null,"content_length":"36831","record_id":"<urn:uuid:b0da9654-1e2b-42a9-9b09-f51d2b166d25>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Wikibooks, open books for an open world
Introduction to High School Geometry[edit]
The word geometry comes originally from Greek, meaning literally, to measure the earth. It is an ancient branch of mathematics, but its modern meaning depends largely on context.
• To the elementary or middle school student (ages six to thirteen in the U.S. school system), geometry is the study of the names and properties of simple shapes (e.g., the defining properties of
triangles, squares, rectangles, trapezoids, circles, prisms, etc., along with formulas for their areas or volumes).
• To the high school student (ages fourteen to seventeen in the U.S. system), geometry has two flavors: synthetic and analytic. Synthetic geometry uses deductive proof to study the properties of
points, lines, angles, triangles, circles, and other plane figures, roughly following the plan laid out by the Greek textbook writer Euclid around 300 B.C. Analytic geometry follows the
pioneering work of the French mathematicians René Descartes (1596-1650) and Pierre Fermat (1601-1665) to impose a coordinate grid on the plane, making it possible to study geometric objects
(e.g., lines, parabolas, and circles) by means of algebra (e.g., linear equations and quadratic equations) and vice versa.
• To the mathematical researcher geometry is a subject that has grown far from its roots, and he may refer to his field as modern geometry to distinguish it from the school subject. Modern geometry
comes in two main flavours: algebraic geometry (Algebraic curves and surfaces, algebraic varieties), and differential geometry (Riemannian geometry, Spin geometry, Lie groups and algebras).
Differential geometry is used in many ways; to describe the universe in the theory of general relativity (Lorentzian 4-manifolds), to describe soap bubble films (minimal surfaces) and as the
building blocks of particle physics (Lie groups).
The first part of this wiki textbook aspires to be a high school geometry text adequate to satisfy the California curriculum content standards.
Why invest the energy to learn geometry?[edit]
Basic geometry is a very powerful practical problem solver. It was used by the ancient Egyptians and Greeks for solving most problems and was the proto-type for rational thinking. Where we use
algebra today the Greeks used geometry then. It is still very current in all the building and fabrication trades. Before building something big and expensive it is better to work out the bugs in a
small scale model. Before expending a lot of energy in making a model it is best to do a drawing. With geometry the drawings become very accurate and can be used to predict measurements and costs.
Geometry can be easy to master; the proofs are more fun than sudoko; and its applications are as practical as a hammer and saw. It will give you a sophisticated visual intuition and a strong sense of
rational proof and a jumping-off place for some of the most abstract areas of pure mathematics. It’s hard to imagine a mathematical education without geometry.
What should I know before attempting this text?[edit]
• You should be able to do arithmetic up to addition of fractions without a calculator (1/4 + 2/5 = 13/20); and solve simple algebra problems (2x -1= 0 then x= 1/2) and (X^2-16=0). Often algebra
and geometry are taught together and the students would know enough algebra by the time they needed it. It wouldn't hurt to read the first few chapters in both arithmetic and beginning algebra
while doing geometry. Mathematics builds on itself. This means that only if you work all the examples and learn the definitions will later pages make sense to you. Keep a notebook and make lots
of drawings. Talking to others about what you learn is invaluable. You do your part and a good text book will get you there.
How difficult? Intuition vs. rigor:[edit]
Fifteen little boys sit in front of their first soccer coach, knowing nothing about the game. The coach stands up to teach them soccer. How does he begin?
Perhaps he lectures for several hours, explaining the rules of soccer along with common strategies and tactics. The little boys grow bored. Having no experience of the game, they have trouble
understanding the rules and even more trouble remembering them. By the end they have no sense of what a soccer game feels like.
On the other hand, perhaps the coach divides the boys into two teams, takes them to a soccer field, throws them a ball, tells them, "Try to get the ball into the other team's goal and keep it out of
your own," and then lets them play. Knowing no rules, the boys play a game little like soccer. They pick the ball up and run with it, paying no attention to the boundaries of the field. When they
finally get the idea to kick the ball, they have no skill at dribbling, passing, or working together as a team. They have some fun, but again they end up not knowing what a soccer game feels like.
The first approach (teaching all the rules before playing) is what mathematicians might call a rigorous approach to soccer while the second (just playing without worrying about rules) is what they
might call an intuitive approach. Rigor emphasizes the rules. Intuition emphasizes the play. Good teaching and good learning need a balance of both. A good soccer coach starts by teaching the boys
some of the most important rules, making them practice some basic drills, and letting them play a little. At first he probably emphasizes the intuitive, letting them play more, so that they get a
sense of how the game works and how much fun it can be. Later, as they develop a taste for the game, he focuses more on skills and drills, on rules, strategy, and tactics.
The teacher of beginning geometry students faces the same question as the soccer coach: emphasize rigor or emphasize intuition? Following the example of the good coach, this first chapter emphasizes
intuition while introducing a little rigor. Specifically this chapter introduces many of the elementary concepts and much of the terminology of geometry in the context of an intuitive treatment of
Euclidean geometry. Later chapters revisit these topics with greater rigor.
So, for instance, this chapter addresses Euclidean geometry without explaining what makes it Euclidean and without mentioning other sorts of geometry. Again, since elementary geometers think of
points as dots, this chapter explains that a point is like an infinitely small dot. It is a helpful mental image that has served geometers well for over 2000 years. Rigorous approaches to geometry
leave the term point undefined, but this text reserves that subtle and confusing convention for a later chapter. Again, this text assumes that its readers already understand something about area and
volume and the units for measuring them, so it does not try to define and explain standard units of area and volume.
• Introduction
• Geometry/Chapter 1 Definitions and Reasoning (Introduction)
• Geometry/Chapter 2 Proofs
• Geometry/Chapter 3 Logical Arguments
• Geometry/Chapter 4 Congruence and Similarity
• Geometry/Chapter 5 Triangle: Congruence and Similiarity
• Geometry/Chapter 6 Triangle: Inequality Theorem
• Geometry/Chapter 7 Parallel Lines, Quadrilaterals, and Circles
• Geometry/Chapter 8 Perimeters, Areas, Volumes
• Geometry/Chapter 9 Prisms, Pyramids, Spheres
• Geometry/Chapter 10 Polygons
• Geometry/Chapter 11 $R,$$R^2,$$R^3$
• Geometry/Chapter 12 Angles: Interior and Exterior
• Geometry/Chapter 13 Angles: Complementary, Supplementary, Vertical
• Geometry/Chapter 14 Pythagorean Theorem: Proof
• Geometry/Chapter 15 Pythagorean Theorem: Distance and Triangles
• Geometry/Chapter 16 Constructions
• Geometry/Chapter 17 Coordinate Geometry
• Geometry/Chapter 18 Trigonometry
• Geometry/Chapter 19 Trigonometry: Solving Triangles
• Geometry/Chapter 20 Special Right Triangles
• Geometry/Chapter 21 Chords, Secants, Tangents, Inscribed Angles, Circumscribed Angles
• Geometry/Chapter 22 Rigid Motion
• Geometry/Appendix A Formulas
• Geometry/Appendix B Answers to problems
• Appendix C. Geometry/Postulates & Definitions
|
{"url":"https://en.wikibooks.org/wiki/Geometry/Introduction","timestamp":"2014-04-21T12:11:52Z","content_type":null,"content_length":"54631","record_id":"<urn:uuid:70d3cf65-6c1c-44de-bf8f-d91046e53f3b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: MatLab Output Argument Error
Replies: 2 Last Post: Nov 6, 2012 9:45 AM
Messages: [ Previous | Next ]
James MatLab Output Argument Error
Posted: Nov 6, 2012 2:05 AM
Posts: 4
Registered: 11/1/12 This is my code:
function [COP,Qdot]=james(T1,T2)
temperature=menu('Temperature Scale','Fahrenheit','Centigrade','Kelvin','Rankine');
if temperature == 1
T_C= (5/9)*(T1-32)+273;
T_H= (5/9)*(T2-32)+273;
if temperature == 2
T_C= T1+273;
T_H= T2+273;
if temperature == 3
T_C= T1;
T_H= T2;
if temperature == 4
T_C= (5/9)*(T1);
T_H= (5/9)*(T2);
if T_H>=T_C
if COP >= 0
fprintf('The coefficient of performance is %f \n',COP)
disp('Ensure that T_Cold is not greater than T_Hot and that T_Cold and T_Hot are equal or greater than zero when converted to Kelvin')
if T_H>=T_C
if COP>=0
fprintf('The heat transfer due to convection is %f W \n', Qdot)
disp('Ensure that T1 is not greater than T2')
I enter this command:
and get:
Error in james (line 4)
temperature=menu('Temperature Scale','Fahrenheit','Centigrade','Kelvin','Rankine');
Output argument "COP" (and maybe others) not assigned during call to
Where am I going wrong in my code? I do not know where exactly I am going wrong because everything seems to be called for.
Date Subject Author
11/6/12 MatLab Output Argument Error James
11/6/12 Re: MatLab Output Argument Error Rick Branch
11/6/12 Re: MatLab Output Argument Error Steven Lord
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2413530&messageID=7918524","timestamp":"2014-04-19T09:47:04Z","content_type":null,"content_length":"19773","record_id":"<urn:uuid:c2be4a73-ddbd-4a55-ae14-0576afaf57c9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When can't MATLAB add up?
When can’t MATLAB add up?
I have got a nice, shiny 64bit version of MATLAB running on my nice, shiny 64bit Linux machine and so, naturally, I wanted to be able to use 64 bit integers when the need arose. Sadly, MATLAB had
other ideas. On MATLAB 2010a:
??? Undefined function or method 'plus' for input arguments of type 'int64'.
It doesn’t like any of the other operators either
>> a-b
??? Undefined function or method 'minus' for input arguments of type 'int64'.
>> a*b
??? Undefined function or method 'mtimes' for input arguments of type 'int64'.
>> a/b
??? Undefined function or method 'mrdivide' for input arguments of type 'int64'.
At first I thought that there was something wrong with my MATLAB installation but it turns out that this behaviour is expected and documented. At the time of writing, the MATLAB documentation
contains the lines
Note Only the lower order integer data types support math operations.
Math operations are not supported for int64 and uint64.
So, there you have it. When can’t MATLAB add up? When you ask it to add 64 bit integers!
Update: Just had an email from someone who points out that Octave (Free MATLAB Clone) can handle 64bit integers just fine
octave:1> a=int64(10);
octave:2> b=int64(20);
octave:3> a+b
ans = 30
Update 2: Since I got slashdotted, people have been asking why I (or, more importantly, the user I was helping) needed 64bit integers. Well, we were using the NAG Toolbox for MATLAB which is a MATLAB
interface to the NAG Fortran library. Some of its routines require integer arguments and on a 64bit machine these must be 64bit integers. We just needed to do some basic arithmetic on them before
passing them to the toolbox and discovered that we couldn’t. The work-around was simple – use int32s for the arithmetic and then convert to int64 at the end so no big deal really. I was simply
surprised that arithmetic wasn’t supported for int64s directly – hence this post.
Update 3: In the comments, someone pointed out that there is a package on the File Exchange that adds basic arithmetic support for 64bit integers. I’ve not tried it myself though so can’t comment on
its quality.
Update 4: Someone has made me aware of an interesting discussion concerning this topic on MATLAB Central a few years back: http://www.mathworks.com/matlabcentral/newsreader/view_thread/154955
Update 5 (14th September 2010): MATLAB 2010b now has support for 64 bit integers.
April 29th, 2010 at 13:55
Reply | Quote | #1
what is the point of having numerical data types that can’t be manipulated? What can matlab do with 64bit integers?
April 29th, 2010 at 13:56
Reply | Quote | #2
@tom Quite! It can store them. Move them around. Pass them to mex files. um….that’s it I think.
April 29th, 2010 at 18:44
Reply | Quote | #3
Laughs out loud…..Amazing! I hope they are not promoting the 64bit-ness!
May 3rd, 2010 at 02:00
Reply | Quote | #4
It’s a problem in the fact that they follow the IEEE std 754 which basically allows for only 52 of the 64 bits for the mantissa (full break down below):
1. Assume 64 bits are allocated to a number,
2. Assume the LSB is N1 (to match with Matlab/Octave nomenclature), MSB is N64
3. Then N1 is used as a sign bit
4. N2-N13 are used to store the exponent (ranging from -1021 to 1024)
5. The remaining bits 52 are used to store the mantissa (the fractional component)
It stinks – there are work around – but they aren’t pretty.
May 3rd, 2010 at 02:04
Reply | Quote | #5
As you surely know, MATLAB first and foremost is a double-precision floating point algorithm platform. Next in importance for its target market is its fixed-point support for real-world signal
processing and control design apps. But in such cases, widths larger than 32 bits are rare, and anything between 33 and 53 bits can be emulated quite efficiently in floating point.
It would be silly to assume that 64-bit integers would just “work” by recompiling the code on a 64-bit platform. (That is not an accusation that *you* are silly :)) They have to be implemented
explicitly, and of course they’ll only do so if their market research detects enough of a demand for them. (Indeed, templated code would simplify their implementation, but MATLAB is old enough that
I’d be surprised if they employ templates in their core computational code. I would *not* want them reinventing the wheel just to adopt the modern technologies, either.)
So while I admit the “64-bit” repetition in your blog text is cute, I’m curious as to precisely what application you’re thinking of 1) calls for 64-bit integer support, and 2) would be well suited
for MATLAB implementation (if it could be done). I’m going to guess that application is somewhat unique. And I’m glad to hear that Octave will do it for you.
May 3rd, 2010 at 02:10
Reply | Quote | #6
The documentation does mention this as a limitation:
“Note: Only the lower order integer data types support math operations. Math operations are not supported for int64 and uint64″
A bit more digging reveals that you can write them to a file (fwrite), use them with imreconstruct, and pass them off to mex, java and C#. I’m sure it’ll eventually turn up in a few releases time…
May 3rd, 2010 at 02:13
Reply | Quote | #7
OK, just want to make a couple things clear. First I don’t work for the Mathworks, never have. I do have an optimization toolbox developed on the platform, though, and I can tell you that I am as
frustrated sometimes by Matlab’s limitations and bugs as anyone. I’m in the process of spec-ing out the next version of my system so that it can be effectively divorced from Matlab.
And second I don’t wish to denigrate you or your work. I’m genuinely curious what applications would make heavy use of 64-bit integers. And I genuinely hope you’ll
My point is simply that Matlab’s 64-bit compilation and support for int64 computations are really two different issues. 32-bit Matlab should be able to support 64-bit computations, too, as long as
there are a lot less than (2^32-1)/8 of them!
May 3rd, 2010 at 02:14
Reply | Quote | #8
OK, just want to make a couple things clear. First I don’t work for the Mathworks, never have. I do have an optimization toolbox developed on the platform, though, and I can tell you that I am as
frustrated sometimes by Matlab’s limitations and bugs as anyone. I’m in the process of spec-ing out the next version of my system so that it can be effectively divorced from Matlab.
And second I don’t wish to denigrate you or your work. I’m genuinely curious what applications would make heavy use of 64-bit integers. And I genuinely hope you’ll find a good platform for your work
in that area.
My point is simply that Matlab’s 64-bit compilation and support for int64 computations are really two different issues. 32-bit Matlab should be able to support 64-bit computations, too, as long as
there are a lot less than (2^32-1)/8 of them!
May 3rd, 2010 at 03:05
Reply | Quote | #9
I believe the main (only?) advantage of 64bit Matlab is that gets past the 32bit memory limit. This enables you to work with much larger sets of data (not 64bit data apparently).
May 3rd, 2010 at 03:15
Reply | Quote | #10
>> methods int64
Methods for class int64:
abs diag ge le nnz reshape uplus
accumarray display gt linsolve nonzeros round xor
and eq imag logical not sort
bsxfun find isfinite lt nzmax sparsfun
ceil fix isinf max or transpose
conj floor isnan min permute tril
ctranspose full issorted ne real triu
>> methods uint64
Methods for class uint64:
abs bitxor find isinf min real uplus
accumarray bsxfun fix isnan ne reshape xor
and ceil floor issorted nnz round
bitand conj full le nonzeros sort
bitget ctranspose ge linsolve not sparsfun
bitor diag gt logical nzmax transpose
bitset display imag lt or tril
bitshift eq isfinite max permute triu
May 3rd, 2010 at 04:04
Reply | Quote | #11
On my version of GNU Octave (3.0.5, x86_64-pc-linux-gnu, Ubuntu Lucid)
octave:1> a = int64(10)
a = 10
octave:2> b = int64(10)
b = 10
octave:3> a+b
error: binary operator `+’ not implemented for `int64 scalar’ by `int64 scalar’ operations
error: evaluating binary operator `+’ near line 3, column 2
May 3rd, 2010 at 04:19
Reply | Quote | #12
Sorry, I naively assumed that lucid would have a recent version of octave. It seems octave 3.2 has been out for nearly a year, and does implement 64-bit int support.
May 3rd, 2010 at 04:34
Reply | Quote | #13
I actually wrote an extension to do that:
It does a pretty good job. Its not native matlab persay, but the code is competitivly fast and works with all data types. It also looks for overflows, etc…
May 3rd, 2010 at 05:49
Reply | Quote | #14
Geez, am I glad I switched to Python/NumPy:
>>> from numpy import *
>>> int64(10) + int64(20)
>>> _.dtype
May 3rd, 2010 at 05:57
Reply | Quote | #15
It runs in a 64 bit memory space, so you can have arrays bigger than 2^16 x 2^16. I don’t know if it can index beyond 2^32 … someone should test that. I think only 40 address bits are actually hooked
up on modern CPU’s ?
May 3rd, 2010 at 07:58
Reply | Quote | #16
Not so long ago, MATLAB couldn’t do arithmetic with any type except double (maybe also float, I don’t recall). At that time one could only perform arithmetic by first converting to double, though
this would be a lossy operation for 64-bit, unlike prior cases. So this shouldn’t surprise any MATLAB veteran.
One potential explanation might be that they are taking some time to support 64-bit ints in the JIT compiler (a feature that the much slower-performing Octave clearly lacks).
May 3rd, 2010 at 11:22
Reply | Quote | #17
Well, it’s not that much relevant how much is hooked up, for some algorithms ultra fast swap (be it RAM or flash based) can be fine enough.
May 3rd, 2010 at 12:29
Reply | Quote | #18
I’m pretty sure this is to do with the majority of MATLAB core functions being handled by the Intel MKL. To code around this, I expect there would have to be a heck of a lot of case handling that
would impact on performance. If you want arbitrary integer handling I would suggest casting / passing a pointer to something like the GNU MP Bignum Library to handle this case.
May 3rd, 2010 at 12:59
Reply | Quote | #19
Matlab supports 64 bit types, the built in functions just haven’t been expanded to properly manipulate them. Again, take a look at the link I put up earlier – you can do it and it will work. If you
need more you just need to do a bit of work on it…
Truthfully, I don’t think it would be all that difficult to add even bigger numbers than 64 bit. Again, the only problem is the built in functions depending on what you want to do and your skill set…
BTW – the JIT only works on m scripts. A great deal of the code that needs to be updated to fully support 64 bit isn’t m script – its C, C++, or Java.
May 3rd, 2010 at 17:52
Reply | Quote | #20
“On my version of GNU Octave (3.0.5, x86_64-pc-linux-gnu, Ubuntu Lucid)”
It works fine on GNU Octave 3.2.3
octave:1> a= int64(10)
a = 10
octave:2> b = int64(20)
b = 20
octave:3> a+b
ans = 30
“And second I don’t wish to denigrate you or your work. I’m genuinely curious what applications would make heavy use of 64-bit integers. And I genuinely hope you’ll find a good platform for your work
in that area.”
We use them a lot in orbital mechanics. If you use the center of the Earth as your origin, and need high precision in meters, (we do) then having double precision and 64 bit math is important. Right
now we do a lot of code in fortran and C/C++, but it would be nice to have more tools that have the capability. We do use Matlab as a solver, but it would be nice to be able to do more in Matlab.
May 3rd, 2010 at 18:43
Reply | Quote | #21
I’ve been following this discussion with interest, and wanted to chime in to help describe the different meanings of “64-bits” in MATLAB. You can use it to refer to the amount of memory available to
MATLAB or your operating system, the number of elements allowed in a matrix, or to an integer data type.
64-bit operating systems give us 64 bit address space. This gives MATLAB the ability to store many more arrays in memory, or even a few really large arrays. In practice, you need A LOT of memory to
support working with arrays this large:
>> x = zeros(2^31,1); % requires 17 GB
In addition, indices for arrays in MATLAB can now exceed the previous INTMAX (2^31-1) value of 32 bit OS’s. Now the maximum theoretical limit is 2^48-1 on 64-bit platforms, but not many people have
the memory to create:
>> x = zeros(2^45-1,1); % requires 280 Terabytes
??? Out of memory. Type HELP MEMORY for your options.
One easy way to observe the new index limit is in sparse matrices, which only store individual row indices:
>> S = sparse(2^48-1,1,17)
S =
(281474976710655,1) 17
Finally, there is the 64-bit integer data type, which currently exists as a storage class only in MATLAB, with several dozen non-mathematical operations like indexing, concatenation, reshaping,
permuting, and so on.
We’re interested in hearing customer requests for uses of 64-bit integer math in MATLAB that are not satisfied by the 32-bit integer or 64-bit floating point data types. Thank you for your interest
and feedback.
Penny Anderson
MATLAB Math
May 3rd, 2010 at 22:17
Reply | Quote | #22
Hi Penny
Thanks for the insightful comments. As I put in Update 2 of the original post, the only reason I needed 64bit integers was because an external library expected to have them as input arguments. In my
particular application I worked around the lack of 64bit operators by doing my arithmetic with int32s and then cast to int64 at the end. This worked fine but looked inelegant.
So, if I am being honest, I didn’t NEED 64bit integers – it just would have been nice. Reading through the comments both on here and on slashdot, it seems that others are not so lucky.
However, I wonder how much work would it be to implement addition and subtraction for int64s? If it would be difficult then could you share with us what the complications are please?
May 3rd, 2010 at 23:12
Reply | Quote | #23
I find all fuss about MATLAB amazing. OK, so someone found a problem with MATLAB that ought to be fixed. So what? Bugs are reported to the MW continuously on every imaginable MATLAB and Simulink
product – and there are close to 100 (R2010a). Report the bug or problem to the MW and they will do everything they can to fix the problem. I have developed MATLAB applications and Simulink models
professionally for +15 years. Every year I report at least one bug (that has yet to be reported) and each time the bug has been fixed in future versions. The MW goes out of its way to respond to
feedback of its customers and I am sure what has been posted here will be no exception.
May 3rd, 2010 at 23:23
Reply | Quote | #24
Hi Eddie
I completely agree – The Mathworks are extremely good at responding to feedback from customers. I just like to post some of my feedback in blog-posts on here if I think that it might be interesting
to other people. I report plenty of other bugs via the usual channels – most of them are not interesting enough to write about on here. Mathworks always fix them quickly and I’m a happy customer
(most of the time).
I find this sort of stuff intrinsicly interesting that’s all.
May 4th, 2010 at 20:11
Reply | Quote | #25
I agree that MathWorks does an extremely good work in fixing bugs in future versions. But who has the money to constantly buy new licenses?
That’s why A LOT of people are switching (or at least try) to python, sage etc.
P.S. Why they don’t make free patches for current versions? (correct me if I’m wrong)
May 6th, 2010 at 19:28
Reply | Quote | #26
No int64 arithmetic in Matlab? Maybe that is better than having slow int32 arithmetic.
Here is an extract from a comment by John D’Errico on Matlab’s File Exchange (FEX):
“Use of int32 instead of double would be a tremendously silly thing to do, since int32 operations are not even faster than operations on doubles! In fact, use of int32 would slow down this code!
>> N = int32(1:10000);
>> P = int32(23);
>> timeit(@() mod(N,P))
ans =
>> N = double(1:10000);
>> P = double(23);
>> timeit(@() mod(N,P))
ans =
In fact, arithmetic using doubles is FASTER than int32 arithmetic. int64 or uint64 arithmetic is not even defined in the target release, so the speed is incomparable there.”
Remember: In Matlab you can have any type you want as long as it’s a double.
Derek O’Connor
May 8th, 2010 at 07:55
Reply | Quote | #27
I had forgotten that I had my own example of how slow Matlab’s
int32 arithmetic can be:
function p = ShuffleP3264(n,wbits);
% This shows the difference between Matlab’s
% normal 64-bit double ‘integers’ and 32-bit int32 ‘integers’.
% Generate a random permutation vector p(1:n) using
% Knuth’s Algorithm P, Section 3.4.2, TAOCP, Vol 2, 2nd Ed.
% Derek O’Connor, 30 Mar 2010
if wbits == 32 % This type propagates below and
n = int32(n); % causes a big slowdown
p = 1:n;
for i = n:-1:2
r = floor(rand*i)+1; % random integer between 1 and i
t = p(r);
p(r) = p(i); % Swap(p(r),p(i))
p(i) = t;
return; % ShuffleP3264
% TEST
% clear all
% tic;p = ShuffleP3264(10^7,64);toc
% Elapsed time is 2.005458 seconds.
% clear all
% tic;p = ShuffleP3264(10^7,32);toc
% Elapsed time is 24.762971 seconds.
On this problem int32 arithmetic is 12 times slower than the default ‘double integers’. This means that I can have really huge int32 arrays but I can’t do anything with them.
Derek O’Connor.
Matlab 7.6.0.324 (R2008a) 64 bit
Dell Precision 690, Intel 2xQuad-Core E5345 @ 2.33GHz
16GB RAM, Windows7 64-bit Professional.
June 8th, 2010 at 14:21
Reply | Quote | #28
For those MathWorks customers on maintenance, try out the R2010b Prerelease, as announced yesterday:
64-bit integer arithmetic support is included.
Penny Anderson
June 8th, 2010 at 16:42
Reply | Quote | #29
Hi Penny
That’s great news – thanks :)
Best Wishes,
|
{"url":"http://www.walkingrandomly.com/?p=2629","timestamp":"2014-04-19T00:14:46Z","content_type":null,"content_length":"88646","record_id":"<urn:uuid:286c3807-d723-4d5b-b3d1-2a0b0fd56e76>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sweetwater, FL SAT Math Tutor
Find a Sweetwater, FL SAT Math Tutor
...I am required to maintain a 3.0 average and have a 3.7 unweighted and a 6.06 weighted GPA. I also have over 500 community service hours which I obtained from volunteering at a daycare in
Georgia over the course of 3 consecutive summers. I assisted teaching several of the classes for 10 hours a day, 5 days a week.
11 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I've also taken specialized courses in combinatorics, axiomatic set theory, graph theory, and number theory. I have then gained experience tutoring discrete mathematics by helping many
computer science majors pass this required course. It is this experience that has helped me to develop my ways of explaining the various concepts contained in the course.
25 Subjects: including SAT math, reading, statistics, Spanish
I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and
Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including SAT math, calculus, physics, geometry
...I have worked as a private tutor for 5 years in a variety of subjects. I am very patient and believe in teaching by example. My general teaching strategy is the following: I generally cover
the topic, then explain in detail, make the student do some problems or write depending on the subject, and finally I make them explain and teach the topic back to me.
30 Subjects: including SAT math, chemistry, English, geometry
...Prior to that I worked as an IT specialist at All Saints Episcopal Church for three years. I managed over 50 workstations, peripherals, and printers as well as their wired and wireless
networks. I have experience in tech support, virus removal, knowledge management, networking, and computer hardware repair.
23 Subjects: including SAT math, English, reading, biology
Related Sweetwater, FL Tutors
Sweetwater, FL Accounting Tutors
Sweetwater, FL ACT Tutors
Sweetwater, FL Algebra Tutors
Sweetwater, FL Algebra 2 Tutors
Sweetwater, FL Calculus Tutors
Sweetwater, FL Geometry Tutors
Sweetwater, FL Math Tutors
Sweetwater, FL Prealgebra Tutors
Sweetwater, FL Precalculus Tutors
Sweetwater, FL SAT Tutors
Sweetwater, FL SAT Math Tutors
Sweetwater, FL Science Tutors
Sweetwater, FL Statistics Tutors
Sweetwater, FL Trigonometry Tutors
Nearby Cities With SAT math Tutor
Coral Gables, FL SAT math Tutors
Doral, FL SAT math Tutors
El Portal, FL SAT math Tutors
Hialeah SAT math Tutors
Hialeah Gardens, FL SAT math Tutors
Hialeah Lakes, FL SAT math Tutors
Maimi, OK SAT math Tutors
Medley, FL SAT math Tutors
Mia Shores, FL SAT math Tutors
Miami Springs, FL SAT math Tutors
Olympia Heights, FL SAT math Tutors
Pinecrest, FL SAT math Tutors
South Miami, FL SAT math Tutors
West Miami, FL SAT math Tutors
Westchester, FL SAT math Tutors
|
{"url":"http://www.purplemath.com/Sweetwater_FL_SAT_Math_tutors.php","timestamp":"2014-04-19T20:07:06Z","content_type":null,"content_length":"24356","record_id":"<urn:uuid:3d05c7f2-808e-4e30-8c20-5c9656ee8546>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is a(n algebro-geometric) family of modular forms?
up vote 8 down vote favorite
We know that a family of elliptic curves is a morphism of schemes $f:X \to Y$ such that the fiber of every point of $Y$ is an elliptic curve (and we usually require the morphism to be smooth, proper,
Given such a family, we can take the $\ell$-adic representation associated to any given fiber, and in this sense we also have a "family" of Galois representations. (Alternatively, by the proper base
change theorem in étale cohomology, we can take $R^1f_*(\mathbb{Z}_{\ell})$, which is a sheaf on $Y$, and the stalks of this sheaf are the duals of the above Galois representations).
Now consider different question. Can we have a "family" of cuspidal eigenforms whose associated Galois representations fit into a family in the above sense?
I'll consider the case of weight 2 (though I'm most interested in higher weight). Then such family should lead to a family of RM abelian varieties, i.e. those associated to the weight 2 cusp forms.
Let's go back to the elliptic curve (or abelian variety) side a bit, and think about what this would mean. The level of a modular form corresponds to the conductor of the associated elliptic curve,
so the level of the modular forms in such a family should be just as bizarre a function of the base as is the conductor of a family of curves.
To try to engineer such a family, suppose we had a family of elliptic curves that were all known to be modular. I'm most interested in rational families, i.e. with open subsets of projective space as
bases. Then if the family is defined over $\mathbb{Q}$, we at least know that the fibers of rational points are modular, and we get a "family" of modular forms over the rational points. What would
this "family" look like? I have a feeling it would be pretty strange from the point of view of modular forms.
A different approach is to try to to construct a family of modular curves, then view a family of modular forms as a section of the relative cotangent sheaf or some power thereof. Maybe one could try
to make it an "eigensection" of some sort of relative Hecke operators. Of course, the very idea of a family of modular curves seems strange, as there are countably-many modular curves!
In fact, this points to a general problem with this attempt: modular forms are based on discrete data, a discrete set of levels, and a discrete set of eigenforms within each level.
I have a feeling that it's impossible to make this notion work, but please let me know if you have good ideas. In particular, it's possible that experts in modular forms and curves would have more
5 Katz's paper published in the Antwerpen III volume gives a detailed account of modular forms over a ring, more or less along the lines you expect. Of course, the level and the weight are fixed. In
the theory of $p$-adic modular forms, one defines analytic families of modular forms. There, the level is basically fixed but the weight varies. Such objects are $q$-expansions $\sum a_n(x)q^n$
parametrized by $p$-adic analytic functions such that at some dense set of $x$, the obtained $q$-expansion is the one of a true modular form. – ACL Feb 4 '13 at 10:18
and the Galois representations associated to these families will behave like those associated to families of varieties? – David Corwin Feb 4 '13 at 10:43
3 At ACL and Davidac897. Katz' modular forms over a ring are in general not families of eigenforms. Moreover, they more or less all come by base changes from families over some "trivial" ring such
as $\mathbb Z$, so they are in a sense "constant families". This is not what Davidac897 is looking for. – Joël Feb 4 '13 at 15:40
2 Are you aware of Hida's families of modular forms? They might not be exactly what you're interested in since their whole point is to make the weight vary (and hence you can't really force them to
be elliptic), but still : you were wondering about the problem of being based on discrete data, and that is an example where it is not annoying, since integers can vary continuously in a $p$-adic
setting! – Julien Puydt Feb 4 '13 at 17:18
@Joël. You're right for Katz's forms. But the analytic $p$-modular forms (of which you're much more aware than I am, of course) are genuinely more general objects. <br /> @Davidac897. Yes: the
Galois representations can be defined over the whole family extending the classical representations, at least in weight $\geq 2$ (cf. Mazur & Wiles's paper in Compositio). – ACL Feb 6 '13 at 17:28
show 2 more comments
2 Answers
active oldest votes
Perhaps it is easier to explain what's going on here in the context of Galois representations. I know of two almost wholly disjoint ideas that are, confusingly, both described using phrases
like "families of Galois representations".
• One can consider "families of representations" of any group, which are just homomorphisms from G to GL_n(A) for some (usually commutative) ring A, or more generally GL(V) for some
locally free sheaf V on a base scheme S, etc. These are "families" in the sense that the image of a group element is a matrix whose entries are functions on Spec(A) (resp. on S, etc).
Note that, intuitively, the group is fixed and the coefficients are varying.
• One can also consider the kind of "geometric" family you mentioned in your question and Michigan J Frog enlarged upon: given a family of geometric objects over a base S, you can do
up vote 3 various kinds of relative cohomology to give sheaves on S whose fibres have an action of some kind of Galois group depending on the fibre, and in particular the generic fibre has an
down vote action of something like the fundamental group of S. So here the group is, so to speak, varying in the family as well.
The first kind of family, over a p-adic base and with G being a Galois group, comes up a lot in the context of modular forms (Hida, Coleman-Mazur, etc). These can be viewed as sections of a
family of sheaves on a subvariety of the rigid-analytic space you get by analytifying the modular curve. Note that we are varying the coefficients and not the group, again.
The second kind of family doesn't come up so much in modular form theory, although it makes a notable appearance in Kato's work on Iwasawa theory for modular forms.
@David: Speaking about Kato's work, are you refering to his Asterisque paper? Can you expand a bit? Thanks. – Filippo Alberto Edoardo Feb 6 '13 at 14:44
Yes, I mean the Asterisque paper "P-adic Hodge theory and zeta functions of modular forms" – David Loeffler Feb 7 '13 at 7:09
add comment
$\newcommand\Q{\mathbf{Q}}$ $\newcommand\A{\mathbf{A}}$ $\newcommand\AQ{\A_{\Q}}$
How about instead of considering lisse sheaves, derived pushforwards, étale cohomology, modular forms, blah blah blah, you consider instead the following situation: over the affine line $\Q[t]
$, you can consider the equation $x^2 - t$; it's a family of quadratic extensions. If you like, you can turn this into a smooth map of curves (eliminating the bad fibre at $0$) $\pi: X \
rightarrow Y$, and you can consider the lisse sheaf $R^0 \pi_* \mathbf{Z}_l$, the stalks of which are Galois representations corresponding to the quadratic character of the associated
up quadratic extension (along with the trivial character, which I'll suppress). All the Galois representations corresponding to rational points are automorphic/modular for $\mathrm{GL}_1(\AQ)$ -
vote 3 that's a theorem of Gauss called quadratic reciprocity. What does this family look like? Well, it is what it is; the global data is related to the factorization of $t$ in the expected way, and
down it's not really a "family" is any analytic sense. There is, however, one useful fact to observe about how this family behaves as one varies $t$. Namely, the characters $\chi_t$ are locally
vote constant. That is, if $t$ and $s$ are close in $\Q_v$ for any place $v$, then the local characters $\chi_{t,v}$ and $\chi_{s,v}$ are equal. For example, if $t$ and $s$ are both the same sign,
then $\chi_{t,v}(-1) = \chi_{s,v}(-1)$. This is Krasner's lemma (we use smoothness here). It turns out that this "local constancy" of Galois representations is true more generally, I think
Kisin proved something along these lines (although you should think of that result as also being Krasner's Lemma).
Dear Michigan J. Frog: Kisin's result is that the isomorphism class of the fibers over rational points for a lisse $\ell$-adic or lcc abelian sheaf on a $p$-adic analytic space is "locally
constant" on the base. It is in his 1999 paper with the title "Local constancy in p-adic families..." on his webpage. The motivation is certainly Krasner's Lemma, but even the lcc case over
a disk lies somewhat deeper, requiring either more geometric or function-theortic input (roughly because the "local constancy" is just for an isom. class, without "canonicity"; Kisin
captures it via $\pi_1$'s). – user30379 Feb 13 '13 at 6:34
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry nt.number-theory modular-forms elliptic-curves galois-representations or ask your own question.
|
{"url":"http://mathoverflow.net/questions/120749/what-is-an-algebro-geometric-family-of-modular-forms?answertab=active","timestamp":"2014-04-17T01:35:31Z","content_type":null,"content_length":"70215","record_id":"<urn:uuid:3be2340c-7654-4d08-8c04-970a2aff3db6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exponential Distribution general and practical example
October 19th 2009, 12:16 PM #1
Mar 2009
Exponential Distribution general and practical example
(a)Suppose we obtain a random sample Y1, Y2, . . . , Yn from an Exponential(μ) distribution. Using the log–likelihood function, find an expression for the maximum likelihood estimator for μ.
(b) Suppose the time interval between arrivals of buses at rush–hour at a bus–stop is thought to follow an Exponential(μ) distribution. The first 10 inter–arrival times (in minutes) are observed
to be:
4.2, 3.6, 3.0, 9.0, 1.3, 0.8, 2.7, 2.1, 1.8, 1.5.
Find the maximum likelihood estimate for μ. What assumption have you made about the ten measurements?
Last edited by sirellwood; October 19th 2009 at 02:31 PM.
It makes a big difference how you write you exponential density.
October 19th 2009, 09:54 PM #2
|
{"url":"http://mathhelpforum.com/advanced-statistics/109028-exponential-distribution-general-practical-example.html","timestamp":"2014-04-20T02:37:47Z","content_type":null,"content_length":"33860","record_id":"<urn:uuid:b158e822-9f57-4a91-8251-238910d59a22>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Addition and Subtraction
This selection of spreadsheets, designed by the Primary National Strategies to aid teaching and learning, tackles addition and subtraction in a variety of ways.
Number line counting shows a number line of ten places, numbers between 0 and 99 can be chosen. The numbers on the line can be hidden or revealed.
Counting stick shows a counting stick with eleven numbers, which can be revealed or hidden. The starting number and steps between numbers can be chosen and hidden or revealed. Counting stick further
options has the same functions but can be used with visual representations, whole numbers, decimals, percentages, fractions, conversions between mass, length and capacity, and time.
Adding 1 and 2 digit numbers has a four by four grid with generates numbers. Numbers are then selected and the total sum of the numbers can be revealed.
Add and subtract 1, 10, 100, 1000 allows a number to be selected, which can be hidden or revealed, and then 1, 10, 100 or 1000 can be added or taken away by selecting a shape.
Add and subtract number sentences generates two random numbers and then the four possible addition and subtraction sums are shown. There are a variety of options for the sizes of the numbers.
Addition and subtraction missing sign shows either an addition or subtraction sum where all three numbers and the sign can be hidden or revealed.
Addition and subtraction facts shows a four by four addition grid where the numbers are randomly generated and the answers and the numbers can be hidden or revealed.
Addition and subtraction trios and Addition and subtraction flash cards shows a triangle of three numbers with an addition and two subtraction signs between them. Two numbers, which can be chosen
between one and a hundred, are added to give the third and therefore this can be subtracted from one original number to give the other. It shows the relationship between addition and subtraction. Any
of the numbers can be hidden or revealed. The flash card version includes the four possible calculations on flash cards.
Addition and subtraction pyramids shows an addition pyramid where the numbers can be randomly generated or chosen. The numbers can then all be hidden or revealed.
Complements is a timed activity where the complement of a number has to be found. The level of difficulty of the number and the time allowed can be varied.
Total 20 allows a differentiated approach to complements. There is a choice of two, three or four numbers that total 20, one missing number in each section has to be found. There is an extension
sheet where the total is a randomly generated number.
Please enable the macros to allow the resources to function correctly.
HEALTH and SAFETY
Any use of a resource that includes a practical activity must include a risk assessment. Please note that collections may contain ARCHIVE resources, which were developed at a much earlier date. Since
that time there have been significant changes in the rules and guidance affecting laboratory practical work. Further information is provided in our Health and Safety guidance.
The resource is part of Department for Education
|
{"url":"http://www.nationalstemcentre.org.uk/elibrary/resource/5112/addition-and-subtraction","timestamp":"2014-04-20T20:57:20Z","content_type":null,"content_length":"31471","record_id":"<urn:uuid:6cc4480a-1940-4605-b01d-119c4457ab40>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SPSSX-L archives -- March 2001 (#231)LISTSERV at the University of Georgia
Date: Tue, 20 Mar 2001 11:24:12 -0600
Reply-To: Sonya Premeaux <premeaux@MAIL.MCNEESE.EDU>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Sonya Premeaux <premeaux@MAIL.MCNEESE.EDU>
Subject: plotting interactions
Comments: To: SPSS list serve <SPSSX-L@uga.cc.uga.edu>
Content-Type: multipart/alternative;
I need to plot interaction effects for a regression. Can anyone help? My data looks something like this:
SM SU TR TM SE 1 1 1 1 1 2 2 2 2 2
My dep variable is SU. SM is a moderator of the relationships between SU and the other variables. I want to split SM into High and Low and plot regression equations. I would end up with 2 regression
equations on each chart so that my dep variable is on my Y axis, one of the IVs is on the X axis and there is a regression line for high SM and low SM. Can this be done? If so, how? If not, any
suggestions on how to do it elsewhere? Thanks.
|
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0103&L=spssx-l&D=0&P=24155&F=P","timestamp":"2014-04-16T21:52:21Z","content_type":null,"content_length":"10217","record_id":"<urn:uuid:f6a0088d-27cf-42a3-bb4c-4a729ca320ee>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
k-tuply monoidal (n,r)-category
Higher category theory
Basic concepts
Basic theorems
Universal constructions
Extra properties and structure
1-categorical presentations
Monoidal categories
With symmetry
With duals for objects
With duals for morphisms
With traces
Closed structure
Special sorts of products
In higher category theory
$k$-tuply monoidal $(n,r)$-categories
Two important periodic tables are the table of $k$-tuply monoidal $n$-categories and the table of $(n,r)$-categories. These can actually be combined into a single 3D table, which surprisingly also
includes $k$-tuply groupal $n$-groupoids.
A $k$-tuply monoidal $(n,r)$-category is a pointed $\infty$-category (which you may interpret as weakly or strictly as you like) such that:
• any two parallel $j$-morphisms are equivalent, for $j \lt k$;
• any $j$-morphism is an equivalence, for $j \gt r + k$;
• any two parallel $j$-morphisms are equivalent, for $j \gt n + k$.
Keep in mind that one usually relabels the $j$-morphisms as $(j-k)$-morphisms, which explains the usage of $r + k$ and $n + k$ instead of $r$ and $n$. As explained below, we may assume that $n \geq
-1$, $-1 \leq r \leq n + 1$, $0 \leq k \leq n + 2$, and (if convenient) $r + k \geq 0$.
To interpret this correctly for low values of $j$, assume that all objects ($0$-morphisms) in a given $\infty$-category are parallel, which leads one to speak of the two $(-1)$-morphisms that serve
as their common source and target and to accept any object as an equivalence between these. In particular, any $j$-morphism is an equivalence for $j \lt 1$, so if $r + k = 0$, then the condition is
satisfied for any smaller value of $r + k$. Thus, we may assume that $r + k \geq 0$. Similarly, since there is a chosen object (the basepoint), any parallel $j$-morphisms are equivalent for $j \lt 1$
The conditions that $j \lt k$ and that $j \gt n + k$ will overlap if $n \lt - 1$, so we don't use such values of $n$. In other words, any $k$-tuply monoidal $(-1,r)$-category is also a $k$-tuply
monoidal $(n,r)$-category for any $n \lt - 1$.
If any two parallel $j$-morphisms are equivalent, then any $j$-morphism between equivalent $(j-1)$-morphisms is an equivalence (being parallel to an equivalence for $j \gt 0$ and automatically for $j
\lt 1$). Accordingly, any $k$-tuply monoidal $(n,0)$-category is automatically also a $k$-tuply monoidal $(n,r)$-category for any $r \lt 0$, and any $k$-tuply monoidal $(n,r)$-category for $r \gt n +
1$ is also a $k$-tuply monoidal $(n,n+1)$-category. Thus, we don't need $r \lt -1$ or $r \gt n + 1$.
According to the stabilisation hypothesis, every $k$-tuply monoidal $(n,r)$-category for $k \gt n + 2$ may be reinterpreted as an $(n+2)$-tuply monoidal $(n,r)$-category. Unlike the other
restrictions on values of $n, r, k$, this one is not trivial.
Special cases
A $0$-tuply monoidal $(n,r)$-category is simply a pointed $(n,r)$-category. The restriction that $r + k \geq 0$ becomes that $r \geq 0$. This is why $(n,r)$-categories use $0 \leq r \leq n + 1$
rather than the restriction on $r$ given before.
A $k$-tuply monoidal $(n,0)$-category is a $k$-tuply monoidal $n$-groupoid. A $k$-tuply monoidal $(n,-1)$-category is a $k$-tuply groupal $n$-groupoid. This is why groupal categories? don't come up
much; the progression from monoidal categories to monoidal groupoids? to groupal groupoids? is a straight line up one column of the periodic table of monoidal? $(n,r)$-categories. (But if we moved to
a 4D table that required all $j$-morphisms to be equivalences for sufficiently low values of $j$, then groupal categories would appear there.)
A $k$-tuply monoidal $(n,n)$-category is simply a $k$-tuply monoidal $n$-category. A $k$-tuply monoidal $(n,n+1)$-category is a $k$-tuply monoidal $(n+1)$-poset. Note that a $k$-tuply monoidal $\
infty$-category and a $k$-tuply monoidal $\infty$-poset are the same thing.
A stably monoidal $(n,r)$-category, or symmetric monoidal $(n,r)$-category, is an $(n+2)$-tuply monoidal $(n,r)$-category. Although the general definition above won't give it, there is a notion of
stably monoidal $(\infty,r)$-category, basically an $(\infty,r)$-category that can be made $k$-tuply monoidal for any value of $k$ in a consistent way.
|
{"url":"http://www.ncatlab.org/nlab/show/k-tuply+monoidal+(n%2Cr)-category","timestamp":"2014-04-20T03:30:08Z","content_type":null,"content_length":"60922","record_id":"<urn:uuid:c98d8d30-03be-4c10-aea4-268030d2bc1a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sum with Period of Date (sum during a date period)
Hi John,
I am not sure that I understand you well. any way, I want to input something to be starting point for next tuning..
I assume that this the the time schedule, and want to share money,,,flat share...
I input 1 and 0 to indicate the on / off time attendant.
Then calculate in the table below..
here is sample for easy understand..-->
Hope this help.
Pichart Y
|
{"url":"http://www.nullskull.com/q/10450339/sum-with-period-of-date-sum-during-a-date-period.aspx","timestamp":"2014-04-20T13:34:06Z","content_type":null,"content_length":"28014","record_id":"<urn:uuid:fee8585b-3242-48ac-a5fa-08ca47d1e242>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra Tutors
Palo Alto, CA 94306
Math, Science, Engineering, and Technical Communication
...D from Stanford in electrical engineering, and have a very strong background in math and science. I have always enjoyed teaching, and have tutored throughout my time in college and graduate
school. As an undergrad at Harvey Mudd, I helped design and teach a class...
Offering 10+ subjects including algebra 2
|
{"url":"http://www.wyzant.com/Redwood_City_college_algebra_tutors.aspx","timestamp":"2014-04-21T05:04:40Z","content_type":null,"content_length":"60433","record_id":"<urn:uuid:53685664-212b-4df7-b2fb-4679a743b3f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Manchester, WA Science Tutor
Find a Manchester, WA Science Tutor
...Please let me know if I can be of help and I look forward to working with you! As an undergraduate, I received a degree in cell molecular biology and genetics. In addition, I also have a PhD
in microbiology and immunology and have peer reviewed published work in the field.
4 Subjects: including biology, chemistry, microbiology, genetics
...I work with students to familiarize them with the test format and test management strategies. We also work on content areas, reviewing math facts, learning vocabulary, and practicing critical
reading and essay writing. I've tutored all subjects on the ACT since 2003.
32 Subjects: including ACT Science, English, reading, geometry
...I have tutored or taught for the past thirty years. This has provided me with an extensive background for general mathematics. As an instructor, I strive to provide background information and
context to my students so that they not only are able to do the work, but have an understanding of the material as well.
12 Subjects: including organic chemistry, chemistry, geometry, algebra 2
...In addition to this, I can tutor elementary-high school students in Science, English and Algebra 1. Organic chemistry is a subject I highly enjoy, and would be able to guide my students
through the core principles to a higher understanding of organic chemistry. Finally, Biology was my major, my focus in college.
13 Subjects: including botany, vocabulary, grammar, reading
...If you think it would help you out, I'd love to share my passion and expertise with you!I completed the entirety of the calculus series at the University of Washington. I completed
algebra-based physics at the University of Washington. I tutored peers in junior high and high school in algebra.
16 Subjects: including biology, English, chemistry, writing
Related Manchester, WA Tutors
Manchester, WA Accounting Tutors
Manchester, WA ACT Tutors
Manchester, WA Algebra Tutors
Manchester, WA Algebra 2 Tutors
Manchester, WA Calculus Tutors
Manchester, WA Geometry Tutors
Manchester, WA Math Tutors
Manchester, WA Prealgebra Tutors
Manchester, WA Precalculus Tutors
Manchester, WA SAT Tutors
Manchester, WA SAT Math Tutors
Manchester, WA Science Tutors
Manchester, WA Statistics Tutors
Manchester, WA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/manchester_wa_science_tutors.php","timestamp":"2014-04-19T20:08:58Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:d9e73fcc-ece7-440a-96b3-98e82b4dad88>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
a question about invariant volume forms on homogeneous spaces.
up vote 1 down vote favorite
Here I consider $G$ a connected Lie group, which is assumed to be linear (i.e. embeddable in some $GL_n(\mathbb{R})$, and $X$ a homogeneous space under $G$. Fix a point $x\in X$, one considers the
map $m:G\rightarrow X$ sending $g$ to $g(x)$.
Does the left (or right) invariant volume form on $G$ passes to an invariant volume form on $X$, under the pushing forward along $m$? Here by pushing forward along $m$, I mean the measure $\mu$ on
$X$, such that for a continuous function $f$ of compact support, one has $$\int_X f(x)d\mu(x):=\int_G f(m(g))dg$$, $dg$ being the left (or right) Haar measure on $G$.
It seems that one needs to assume that the isotropy subgroup of $x$ in $G$ is compact. Does it matter if $G$ is not unimodular?
Many thanks.
lie-groups homogeneous-spaces
add comment
2 Answers
active oldest votes
If $X = G/H$ then it carries a $G$--invariant measure if and only if the quotient $\Delta_G/\Delta_H$ if the modular functions is equal to $1$. So for example if $G$ is unimodular
up vote 4 down then the condition is that $H$ be unimodular.
add comment
For a recent discussion of invariant measures on homogeneous spaces, see e.g. Appendix B in M. Bachir Bekka, Pierre de La Harpe, Alain Valette, Kazhdan's property (T), Cambridge Univ.
Press 2008 : http://perso.univ-rennes1.fr/bachir.bekka/KazhdanTotal.pdf
up vote 1 down The necessary and sufficient condition for the existence of an invariant measure on $X$, is that the restriction of the modular function of $G$ to $G_x$ (= the isotropy subgroup of
vote $x$),coincides with the modular function of $G_x$.
add comment
Not the answer you're looking for? Browse other questions tagged lie-groups homogeneous-spaces or ask your own question.
|
{"url":"http://mathoverflow.net/questions/77459/a-question-about-invariant-volume-forms-on-homogeneous-spaces/78279","timestamp":"2014-04-20T01:47:52Z","content_type":null,"content_length":"53208","record_id":"<urn:uuid:1717aed9-3b49-4bea-8521-3435c73a7856>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
// EVOGOD
// SciLab program to simulate the evolution of religion
// Version: 1.0
// Modification date: 1/28/2008
// Copyright 2008 James W. Dow
// This program is released under the GNU General Public License Version 2, June 1991. See http://www.gnu.org/licenses/gpl.txt
ns = 140; ip1 = 10; le = 65; mc = 5; dcr = 0.01; dcu = 0.01; dcur = 0.01; dfir = 0.010; dfiu = -0.005; dfisu = -0.02; dp = 0; lr = 0; ur = 1; lv = 0; uu = 1; dist = 0; popgrow = 0; greenbeard = 1; mult =1; cmr = 0.5; cmu = 0.5; icp = 0.2;
// MODEL
// Build the initial population matrix (a) of agents . We don't know the size yet. It is easier to build it and then compute the size. Each agent is a row.
nva = 6; // Number of variables (columns) for each agent vector
// The column variables for each agent are
// 1. age (ag)
// 2. initial probability of communicating a real experiences (ipr). A real experience is an experience that gives the holder knowledge that can enhance the fitness of others. This knowledge can be communicated.
// 3. initial probability of communicating unreal experiences (ipu) A number representing the probability that the agent will personally have an unreal experience. An unreal experience is something that can be communicated but contains no knowledge that can enhance fitness. The probability variables pr and pu represent biological evolution in the brain.
// 4. developing probability of communicating a real experience (cpr). This changes during the simulation
// 5. developing probability of communicating an unreal experience (cpu).
// 6.fitness (fi) The number of offspring produced over a lifetime.
// INITIALIZATION of ages and number of agents
// The number of agents in cohort k is ip1 - (k-1)*dp.
// Function for the number of agents in cohort k
function y = na(k), y = ip1 - (k-1) * dp, endfunction;
// Set up ages
// i indexes the cohort, j indexes the column of the a matrix.
// Do the first cohort. a(1) contains the age. The maximum number of cohorts is le, the initial life expectancy. However there are ilkley to be fewer if the population in each cohort decrements (dp) by a large amount
a = ones(na(1),nva);
// Do remaining cohorts
for i = 2:le;
if na(i) < 0 then break; end;
b = ones(na(i),nva) * i ;
a = cat(1,a,b);
// Determine the size of the population
// Function for the number of rows in a matrix.
function y = sz(x), y=size(x,'r'); endfunction;
// Set initial population size
ips = sz(a);
// Set up initial probalities of communicating a real experience, a(:,2) and, an unreal experience, a(:,3).
// Set up a random vector of probabilities. If dist = 0 use a uniform distribution. If dist = 1 use a binomial distribution.
select dist ;
case 0 then; // Uniform distribution
// Function to set up a random vector of uniformly distributed probabilities between l and u.
function y = rv(l,u), y = (u - l) * rand(ips,1) + l, endfunction;
a(:,2) = rv(lr,ur);
a(:,3) = rv(lv,uu);
case 1 then; // Binomial distribution
a(:,2) = grand(ips,1,'bin',ip1,ur) / ip1;
a(:,3) = grand(ips,1,'bin',ip1,uu) / ip1;
// Learned communication probabilities. Start with icp and add according to inherited fitness.
a(:,4:5) = ones(ips,2)*icp;
// INITIAL FITNESS
a(:,6) = ones(ips, 1);
// Set up recording for the steps.
rps = ones(ns,1); // Population size
mcp = zeros(ns,2); // Mean of the probability of commnicating real and unreal information
// Note that in this simulation it is possible for an agent to communicate with himself. This does not seem to be a conceptual problem and occurs in reality.
// Functions to provide a set of x random indices to agents in a. greenbeard determines the distribution from which the random indices are selected.
select greenbeard;
case 0 then ; // The probabilities of selection are distributed uniformly over the population
function y=ria(x), y1=grand(1,'prm',(1:sz(a))'), y=y1(1:x), endfunction;
case 1 then ; // The probabilities of selection are distributed according to the current probabilities, cpu, of generating unreal communications. The higher the cpu of an agent, the greater the chance of it being selected.
function y=ria(x);
ps = size(a,'r')
// Normalize the cpu so that they sum to 1
y1 = a(:,5) / sum(a(:,5))
// Form a cumulative distribution
y2 = cumsum(y1)
y = [] // Initialize y as a blank vector.
for i = 1:x
y3 = y2 < rand() // A T-F vector for lower values
y =[y, sum(y3)+1] // Counts agents lower in the cumulative distribution and adds the count to the vector, thus creating the index of the area within which the random number lies.
// START STEPPING
for s = 1:ns; // Do the step
// Calculate population size for this step
ps = sz(a);
nrc = 0; // monitor total real communications
nuc = 0; // monitor unreal communications
for j = 1:ps; // Do the agents
// Communicate REAL experiences
// The number of agents communicated with (nca) will depend on the communnicators probablity of communicating real information. It will be a proportion of mc
nca = round(a(j,4) * mc) ; // Get the number of agents to receive real communications from this one.
// Select a set of agents to receive real communications. Note that i is a vector of indices.
i = ria(nca); // select a subset of agents in a at random. i is a column vector. It can be empty
// Have them learn to communicate real experiences
if ~ (i == []) then;
// Change the probability of the receivers communicating real information. This will depend on the inherited probability of communicating unreal real information
a(i,4) = min(ones(size(i,'r'),1),a(i,4) + a(i,2)*dcr);
// Add to their fitness.
if mult == 0 then;
a(i,6) = a(i,6) + (cmr * dfir); // Multiply the amount by a constant, cmr
else; // Multiply by the receivers learned probability of communicating real information a(i,2)
a(i,6) = a(i,6) + (a(i,4) * dfir);
// Communicate UNREAL experiences. The number of agents contacted will depend on the probability of communicating unreal information
nca = a(j,5) * mc ; // Calculate number
nuc = nuc + nca; // Add to unreal communications
// Select a set of agents randomly
i = ria(nca); // note that i is a vector and possibly null if there are less than .5 agents to contact
// Are there agents to contact
if ~ (i == []) then;
// Change the probability of the receivers communicating unreal information
a(i,5) = min(ones(size(i,'r'),1),a(i,5) + a(i,3)*dcu);
// Change the probability of the receiver's communicating real information
a(i,4) = min(ones(size(i,'r'),1),a(i,4) + a(i,2)*dcur);
//Change the fitness of the receiver. dfiu will normally be negative
if mult == 0 then;
a(i,6) = a(i,6) + (cmu * dfiu); // Multiply by a constant
a(i,6) = a(i,6) + (a(i,5) * dfiu); //Multiply by the recivers learned probability of making unreal communications
// Make the sender pay a cost for sending unreal information. This is the cost of costly signaling, and can occur even if there are no receivers
a(j,6) = a(j,6) + dfisu; // Note dfisu is normally negative
end; // Agents are done. j is free
// EVOLVE THE POPULATION OF AGENTS
// Advance the ages
a(:,1) = a(:,1) + 1;
//Find agents who will die
ida = []; //Start with a blank index vector
for i = 1:sz(a);
if a(i,1) > le then; ida = [ida ; i]; // Add the index to ida
// Index of dead agents (ida) is ready
// Add offspring of dead agents
// Calculate the number of offspring of each dead agents
no = round(a(ida,6)); // vector of the number of offspring of agents indexed by ida
// Form new offspring with age 1 and fitness 1 (not inherited)
// Maintain the probabilities of communication of the parents
for i = 1:sz(ida); // Scan dead agents
j = ida(i) ; // index of the dead agent in a
for k=1:no(i); // Scan number of offspring
new = [1, a(j,2), a(j,3), icp, icp, 1];
// Catenate new agent at the end of a
a($+1,:) = new;
a(ida,:) = []; // Remove the dead ones from a.
// Function to randomly select x agents selected from a uniform distribution.
function y=riau(x), y1=grand(1,'prm',(1:sz(a))'), y=y1(1:x), endfunction;
select popgrow ;
case 0 then; // Population can grow or decline. Do nothing
case 1 then;// Control the increase or the decline
dps = sz(a) - ips; // Calculate the change in population size
if dps > 0 then ; // There is an increase in population
// Remove dps agents at random
i = riau(dps); // Get a uniform set at random
a(i,:) = [] ; // Kill them off
elseif dps < 0 then ; // There is a decrease in population
i = riau(-dps) ; // Pick a set of agents at random
// Have them reproduce
// Maintain the probabilities of communication of the parents
new = [ones(-dps,1), a(i,2:3), a(i,2:3), ones(-dps,1)];
// Catenate new agents at the end of a
a = [a ; new];
end; // End if
// There is neither an increase of a decrease
end; // End of population control
// save results for this step
ps = sz(a); // Calculate population size
rps(s,1) = ps; // Record the population size
mcp(s,:) = [mean(a(:,2)), mean(a(:,3))]; // record means of pr and pu
// immediate display
printf("step %i population %i \tmean pr %f mean pu %f \n",s,ps,mcp(s,1),mcp(s,2));
end; // Move to next step
// End of simulation
// PLOT RESULTS
scf(0); // Open first plot window.
plot2d(1:ns,rps); // Population size per step
ax = get("current_axes");
ax.x_label.text = "Step";
ax.x_label.font_size = 3;
ax.y_label.font_size = 3;
ax.sub_ticks = [3,1];
scf(1); // Open second plot window.
plot2d(1:ns,mcp); // Probability of real and unreal communications per step
ax = get("current_axes");
ax.x_label.text = "Step";
ax.x_label.font_size = 3;
ax.y_label.font_size = 3;
ax.sub_ticks = [3,1];
// Plot histograms of pr and pu
xbasc(2); scf(2); // Open third plot window
title("Final Distribution of ipr", 'fontsize', 4);
x = nfreq(int(a(:,2)*10));bar (x(:,1),x(:,2))
xbasc(3); scf(3); // Open fourth plot window
title("Final Distribution of ipu", 'fontsize', 4);
x = nfreq(int(a(:,3)*10));bar (x(:,1),x(:,2));
|
{"url":"http://jasss.soc.surrey.ac.uk/11/2/2/sim4.sce.html","timestamp":"2014-04-18T17:26:14Z","content_type":null,"content_length":"11715","record_id":"<urn:uuid:9f94beb5-8c6f-4c29-9577-c13b6cb2b085>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Projectile Motion Lab Practicum and Computational Modeling
In my AP Physics B class, I’m reviewing all of the material on the AP exam even though all of the students studied some of this materials last year in either Physics or Honors Physics. When we do
have a review unit, I try to keep it engaging for all students by studying the concepts from a different perspective and performing more sophisticated labs.
When reviewing kinematics, I took the opportunity to introduce computational modeling using VPython and the physutils package. I started with John Burk’s Computational Modeling Introduction and
extended it with my experiences at Fermilab where computational modeling plays a role in everything from the optics of interferometers to the distribution of dark matter in the galaxy. I then
provided students with a working example of a typical projectile motion model and let them explore. I encouraged them to extend the model to have the projectile launched with an initial vertical
Later that unit, I introduced the lab practicum which was based on a lab shared by my counterpart at our neighboring high school. The goal of the lab was to characterize the projectile launcher such
that when the launcher is placed on a lab table, the projectile will hit a constant velocity buggy driving on the floor, away from the launcher, at the specified location. The location would not be
specified until the day of the lab practicum. No procedure was specified and students decided what they needed to measure and how they wanted to measure it. I also used this as practice for writing
clear and concise lab procedures like those required on the free response section of the AP exam.
All groups figured out that they needed to determine the velocity of the car (which some had done the previous year) and the initial velocity of the projectile. Some groups used a technique very
similar to the previous year’s projectile motion lab where a marble is rolled down a ramp and launched horizontally. These groups fired the projectile horizontally from atop the table and measured
the horizontal displacement. Groups that calculated the flight time based on the vertical height were more accurate than those that timed the flight with a stopwatch. Another group fired the
projectile straight up, measured the maximum height, and calculated the initial velocity. This group was particularly successful. Another group attempted to use a motion sensor to measure the initial
velocity of the ball as they fired it straight up. The motion sensor had trouble picking up the projectile and this group’s data was suspect. A couple of other groups fired the projectile at a
variety of angles, timed the flight, and measured the horizontal displacement. Some of these groups later realized that they didn’t really need to perform measurements at a variety of angles. After
gathering data and calculating the initial velocity of the projectile as a group, I asked the students to practice calculating their launch angle based on a sample target distance. I hadn’t really
thought this lab through and didn’t appreciate how challenging it would be to derive an equation for the launch angle as a function of horizontal displacement when the projectile is launched with an
initial vertical displacement. It wasn’t until that night that I appreciated the magnitude of this challenge and then realized how this challenge could be used to dramatically improve the value of
this lab.
Most students returned the next day a bit frustrated but with an appreciation of how hard it is to derive this equation. One student, who is concurrently taking AP Physics B and AP Physics C, used
the function from his AP Physics C text successfully. Another student amazed me by completing pages of trig and algebra to derive the equation. No one tried to use the range equation in the text,
which pleased me greatly (the found candy discussion must have made an impact on them). As we discussed how challenging it was to solve this problem, I dramatically lamented, “if only there was
another approach that would allow us to solve this complex scenario…” The connection clicked and students realized that they could apply the computational model for projectile motion to this lab.
Almost all of the groups chose to use the computational model. One student wrote his own model in Matlab since he was more familiar with that than Python. With assistance, all groups were able to
modify the computational model and most were successful in hitting the CV buggy. One group dressed for the occasion:
Students’ reflections on this lab were very positive. They remarked how they appreciated learning that there are some physics problems that are not easily solved algebraically (they are accustomed to
only being given problems that they can solve). They also remarked that, while they didn’t appreciate the value of computational modeling at first, using their computational model in the lab
practicum showed them its value. I saw evidence of their appreciation for computational modeling a couple of weeks later when a few of the students tried to model an after-school Physics Club
challenge with VPython. For me, I was pleased that an oversight on my part resulted in a much more effective unit than what I had originally planned.
|
{"url":"http://pedagoguepadawan.net/204/projectile-motion-lab-practicum-and-computational-modeling/","timestamp":"2014-04-16T04:22:10Z","content_type":null,"content_length":"57829","record_id":"<urn:uuid:8b707557-561b-4b34-a5ce-9e16be06bff1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of symmetric
Symmetric-key algorithms are a class of algorithms for cryptography that use trivially related, often identical, cryptographic keys for both decryption and encryption.
The encryption key is trivially related to the decryption key, in that they may be identical or there is a simple transform to go between the two keys. The keys, in practice, represent a shared
secret between two or more parties that can be used to maintain a private information link.
Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-key and eventually private-key encryption. Use of the latter term does conflict with the term private key in
public-key cryptography.
Types of symmetric-key algorithms
Symmetric-key algorithms can be divided into
stream ciphers
block ciphers
. Stream ciphers encrypt the bits of the message one at a time, and block ciphers take a number of bits and encrypt them as a single unit. Blocks of 64 bits have been commonly used; the
Advanced Encryption Standard
algorithm approved by
in December
uses 128-bit blocks.
Some examples of popular and well-respected symmetric algorithms include Twofish, Serpent, AES (aka Rijndael), Blowfish, CAST5, RC4, TDES, and IDEA.
Symmetric vs. asymmetric algorithms
Unlike symmetric algorithms, asymmetric key algorithms use a different key for encryption than for decryption. I.e., a user knowing the encryption key of an asymmetric algorithm can encrypt messages,
but cannot derive the decryption key and cannot decrypt messages encrypted with that key. A short comparison of these two types of algorithms is given below:
Symmetric-key algorithms are generally much less computationally intensive than asymmetric key algorithms. In practice, asymmetric key algorithms are typically hundreds to thousands times slower than
symmetric key algorithms.
Key management
Main article: Key management
One disadvantage of symmetric-key algorithms is the requirement of a
shared secret key
, with one copy at each end. In order to ensure secure communications between everyone in a population of n people a total of
− 1)/2 keys are needed, which is the total number of possible communication channels. To limit the impact of a potential discovery by a cryptographic adversary, they should be changed regularly and
during distribution and in service. The process of selecting, distributing and storing keys is known as
key management
, and is difficult to achieve reliably and securely.
Hybrid cryptosystem
Main article: hybrid cryptosystem
In modern
designs, both asymmetric (public key) and symmetric algorithms are used to take advantage of the virtues of both. Asymmetric algorithms are used to distribute symmetric-keys at the start of a
session. Once a symmetric key is known to all parties of the session, faster symmetric-key algorithms using that key can be used to encrypt the remainder of the session. This simplifies the key
distribution problem, because asymmetric keys only have to be distributed authentically, whereas symmetric keys need to be distributed in an authentic and confidential manner.
Systems that use such a hybrid approach include SSL, PGP and GPG, etc.
Cryptographic primitives based on symmetric ciphers
Symmetric ciphers are often used to achieve other cryptographic primitives than just encryption.
Encrypting a message does not guarantee that this message is not changed while encrypted. Hence often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext
will be noted by the receiver. Message authentication codes can be constructed from symmetric ciphers (e.g. CBC-MAC). However, these messages authentication codes cannot be used for non-repudiation
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
Construction of symmetric ciphers
Main article: Feistel cipher
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction allows to build invertible functions from other functions that are itself not invertible.
Security of symmetric ciphers
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions
for each round can greatly reduce the chances of a success
Key generation
When used with asymmetric ciphers for key transfer,
pseudorandom key generators
are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their
initialization vectors
is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation uses a source of high entropy for its initialization.
|
{"url":"http://www.reference.com/browse/symmetric","timestamp":"2014-04-17T03:15:06Z","content_type":null,"content_length":"84838","record_id":"<urn:uuid:d754e05b-4fc6-42b8-a3e5-aa65eb1b83f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
jacobian inverse kinematics: Topics by Science.gov
Lifetimes of chiral candidate structures in {sup 103,104}Rh were measured using the recoil distance Doppler-shift method. The Gammasphere detector array was used in conjunction with the Cologne
plunger device. Excited states of {sup 103,104}Rh were populated by the {sup 11}B({sup 96}Zr,4n){sup 103}Rh and {sup 11}B({sup 96}Zr,3n){sup 104}Rh fusion-evaporation reactions in inverse kinematics.
Three and five lifetimes of levels belonging to the proposed chiral doublet bands are measured in {sup 103}Rh and {sup 104}Rh, respectively. The previously observed even-odd spin dependence of the B
(M1)/B(E2) values is caused by the variation in the B(E2) values, whereas the B(M1) values decrease as a function of spin.
Suzuki, T. [Cyclotron and Radio-isotope Center, Tohoku University, Sendai 980-8578 (Japan); Department of Physics, Tohoku University, Sendai 980-8577 (Japan); Rainovski, G. [St. Kliment Ohridski
University of Sofia, Sofia 1164 (Bulgaria); Department of Physics and Astronomy, SUNY, Stony Brook, New York 11794-3800 (United States); Koike, T. [Department of Physics, Tohoku University, Sendai
980-8577 (Japan); Ahn, T.; Costin, A. [Department of Physics and Astronomy, SUNY, Stony Brook, New York 11794-3800 (United States); Carpenter, M. P.; Janssens, R. V. F.; Lister, C. J.; Zhu, S.
[Physics Division, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Danchev, M. [Department of Physics, University of Tennessee Knoxville, Tennessee 37996 (United States);
Dewald, A. [Institute fuer Kernphysik der Universitaet zu Koeln, D-50937 Koeln (Germany); Joshi, P.; Wadsworth, R. [Department of Physics, University of York, Heslington YO10 5DD (United Kingdom);
Moeller, O. [Institute fuer Kernphysik der Universitaet zu Koeln, D-50937 Koeln (Germany); Institut fuer Kernphysik, TU Darmstadt, D-64689, Darmstadt (Germany); Pietralla, N. [Department of Physics
and Astronomy, SUNY, Stony Brook, New York 11794-3800 (United States); Institut fuer Kernphysik, TU Darmstadt, D-64689, Darmstadt (Germany); Shinozuka, T. [Cyclotron and Radio-isotope Center, Tohoku
University, Sendai 980-8578 (Japan); Timar, J. [Institute of Nuclear Research (ATOMKI), Pf. 51, 4001 Debrecen (Hungary); Vaman, C. [National Superconducting Cyclotron Laboratory, Michigan State
University, East Lansing, Michigan 48824 (United States)
|
{"url":"http://www.science.gov/topicpages/j/jacobian+inverse+kinematics.html","timestamp":"2014-04-17T07:13:00Z","content_type":null,"content_length":"1044840","record_id":"<urn:uuid:7cdbeef4-f9c9-42c8-9725-1cbca615c192>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This page is going to explain how to develope the concepts of maximizing volume and minizing surface area.
to volume of sphere
Surface Area and Volume Formulas
Objectives -To explore the ratios of surface area to volume.
-To develope the concepts of maximizing volume and minimizing surface area.
The surface area (S) and Volume (V) of a right rectangular prism with length ( l) width (w) and height (h) are
S= 2lh + 2wh + 2lh and V= lwh
The surface (S) area and volume (V) of a cube with side (s) are
S= 6s² and V= s³
A shipping company is choosing between two box designs. (Box A and Box B)
Which design has the greater surface area and requires more material for the same volume?
Both box's have a volume of 160 cubic inches.
SA of box A is: 2(8) (5 ) + 2(10) (8) = 184 square inches.
SA of box B is: 2(10) (8) + 2(2) (10) = 232 square inches.
Answer: Box B has the greater surface area.
Surface Area and Volume of Prisms
-Define and use a formula for finding the surface area of a right prism.
-Define and use a formula for finding the volume or a right prism.
-Use Cavalieri's Principle to develope a formula for the volume of a right or oblique prism.
of a prism is a segment hat has endpoints in the planes containing the bases and that is perpendicular to both planes.
of a prism is the length of an altitude.
Surface Area of Right Prisms
The surface area of a prism may be broken down into two parts: the area of the bases, or base area, and the area of the lateral faces, or lateral area.
Since the bases are congruent, the base area is twice the area of one base, or 2B, where B is the area of one base.
If the sides of the base are s¹, s², and s³ and the height is
, then the lateral area is given by the following formula:
L = s¹
+ s²
+ s³
(s¹ + s² + s³)
Because s¹ + s² + s³ is the perimeter of the base, we can write the lateral area as L =
, where
is the perimeter of the base.
- The surface area of a prism is the sum of the base area and the lateral area.
The surface area,
, of a right prism with lateral area
, base area
, perimeter
, and height
is :
S = L + 2 B or S = hp + 2B Example 1:
The area of each base is
= ½ (2) (21) = 21.
The perimeter of each base is
= 10 + 21 + 17 = 48,
so the lateral area is
= 30 (48) = 1440.
So then the surface area is
+ 2
= 1440 + 2 (21) = 1440 + 42 = 1482.
Volumes of Right Prisms
The volume of a right rectangular prism with length
, width
, and height
, is given by
. Because the base area,
, of this type of prism is equal to
, you can also write the formula for the volume as
Example 2:
An aquarium in the shape of a right rectangular prism has dimensions of 110 × 50 × 7 feet.
Given that 1 gallon is approximately 0.134 cubic feet, how many gallons of water will the aquarium hold?
Given 1 gallon of water is approximately 8.33 pounds, how much will the water weigh?
The volume of the aquarium is found by using the volume formula.
= (110) (50) (7) = 38,500 cubic feet.
To approximate the volume in gallons, divide by 0.134.
V = 38,500 ÷ 0.134 aprrox 287,313 gallons
To approximate the weight, multiply by 8.33.
weight approx (287,313) (8.33) approx 2,393,317 pounds.
Example 3:
An aquarium has the shape of a right regular hexagonal prism.
The base of the aquarium has a perimeter of (14) (6), or 84, inches and an apothem of 7÷3 inches, so the base area is found as follows:
= ½
= ½ (84) (7 ÷ 3) = 294 ÷ 3 approx 509.22 square inches
The volume is V = Bh = (294 ÷ 3) (48) = 14112 ÷ 3 approx 24,443 cubic inches.
Volumes of Oblique Prisms
In an oblique prism, the lateral edges are not perpendicular to the bases, and there is no simple general formula for surface area. Although, the formula for the volume is the same as that for a
right prism. To understand why this is true, consider the explanation below.
Stack a set of index cards in the shape of a right rectangular prism. If you push the stack into the shape of an oblique prism, the volume of the solid does not change because the number of cards
does not change.
Both stacks have the same number of cards, and each prism is the same height.
Because every card has the same size and shape, they all have the same area. Any card in either stack represents a cross section of each prism.
Cavalieri's Principle
If two solids have equal heights and the cross sections formed by every plane parrallel to the bases of both solid have equal areas, then the two solids have equal volumes.
Volume of a Prism
The Volume,
, of a prism with height
and base area
Surface Area and Volume of Pyramids
-Define and use a formula for the surface area of a regular pyramid.
-Define and use a formula for the volume of a pyramid.
is a polyhedron consisting of a
, which is a polygon, and three or more
lateral faces
The lateral faces are triangles that sharea a single vertex, called the
vertex of the pyramid
Each lateral face has one edge in common with the base, called a
base edge
The intersection of two lateral faces is a
lateral edge
of a pyramid is the perpendicular segment from the vertex to the plane of the base. The
of a pyramid is the length of its altitude.
regular pyramid
is a pyramid whose base is a regular polygon and whose lateral faces are congruent isosceles triangles. In a regular pyramid, all of the lateral edges area congruent, and the altitude intersects the
base at its center. The length of an altitude of a lateral face of a regular pyramid is called the
slant height
of the pyramid.
Pyramids, like prisms, are named by the shape of their base.
Surface Area of a Pyramid
Example 1:
Find the surface area of a regular square pyramid whose slant height is
and whose base edge length is
The surface area is the sum of the lateral areas and the base area.
B S
= 4 (½
) + s²
This can be rewritten as follows:
= ½
+ s².
Surface Area of a Regular Pyramid
The surface area,
, of a regular pyramid with lateral area
, base area
, perimeter of the base
, and slant height
= ½
B Example 2:
The roof of a gazebo is a regular octagonal pyramid with a base edge of 4 feet and a slant height of 6 feet.
Find the area of the roof. If roofing material costs $3.50 per square foot, find the cost of covering the roof with this material.
The area of the roof is the lateral area of the pyramid.
L = ½lp = ½ (6) (8 × 4) = 96 square feet
96 square feet × $3.50 per square foot = $336.00
Volume of a Pyramid
The volume,
, of a pyramid with height
and base area
V = 1/3
bh Example 3:
The pyramid of Khufu is a regular square pyramid with a base edge of approxiamtely
776 feet and an original height of 481 feet. The limestone used to construct the pyramid weighs
approxmately 167 pounds per cubic foot. Estimate the weight of the pyramid of Khufu. (Assume the pyramid is solid)
The volume of the pyramid is found as follows:
V = 1/3
approx 1/3 (776²) ( 481)
approx 96,548,885 cubic feet
The weight in pounds is 96,548,885 cubic feet × 167 pounds per cubic foot approx 16,123,663,850
pounds, or 8,061,831 tons.
Surface Area and Volume of Cylinders
-Define and use a formula for the surface area of a right cylinder.
-Define and use a formula for the volume of a cylinder.
is a solid that consists of a circular region and its translated
image on a parallel plane, with
lateral surface
connecting the circles.
The faces formed by the circular region and its translated image are called
of the cylinder.
of a cylinder is a segment that has endpoints in the planes containing the bases and is perpendicular to both planes. The
of a
cylinder is the length of an altitude.
of a cylinder is the segment joining the centers of the two bases.
If the axis of a cylinder is perpendicular to the bases, then the cylinder is
right cylinder
If not, it is an
oblique cylinder
Cylinders and Prisms
As the number of lateral faces of a regular polygonal prism increases, the figure becomes more and more
like a circle.
Similarity, as the number of lateral faces of a regular polygonal prism incresses, the figure becomes more and more like a cylinder.
This fact suggests that the formulas for surface areas and volumes of prisms and cylinders are similar.
Surface Area of a Right Cylinder
The surface area,
, of a right cylinder with lateral area
, base area
, radius
, and height
S= L + 2B or S = 2pie
+ 2pie
Example 1:
A penny
and a thickness of 1.55 millimeters. Ignoring the raised design, estimate
the surface area of a penny.
The radius of a penny is half of the diameter, or 9.525 millimeters. Use
the formula for the sirface area of a right cylinder.
S = 2pie
+ 2pie
S = 2pie (9.525) (1.55) + 2pie (9.525)² approx 663.46 square
Volume of a Cylinder
The volume,
, of a cylinder with radius
, height
, and base area
= pie
Example 2:
A tank has a length of 31 feet 6½ inches and an outer diameter of 8 feet 0 inches. Assuming a wall thickness of about 2 inches, what is the volume of the tank? At 15 gallons per car, how many car
tanks could be filled from the storage tank if it starts out completely full of gasoline?
The tank is not perfectly cylindrical, because of its hemispherical heads, but you can approximate its volume by a slightly shorter cylindrical tank, say 29 feet long. Subtracting the wall thickness
from the dimensions of the tank,
V = pie r² h approx pie (3.833)² (28.667) approx 1323 cubic feet
Convert from cubic feet to gallons.
1323 cubic feet × 7.48 gallons per cubic foot approx 9896 gallons
So the tank could deliver about 9896/15, or approx 660,15-gallon fill-ups.
Surface Area and Volume of Cones
- Define and use the fomulas for the surface area of a cone.
- Define and use the formula for the volume of a cone.
is a three-dimenstional figure that consists of a circular base and a curved
lateral surface
that connects the base to a single point not in the plane of the base, called the
of a cone is the perpendicular segment from the vertex to the plane of the base. The
of the cone is the length of the altitude.
If the altitude of a cone intersects the base of the cone at its center, the cone is a
right cone
. If not, it is an
oblique cone
Surface Area of a right cone Example 1:
Find the surface area of a right cone with the indicated measurements.
The circumfrence of the base is c = 2pie r = 14pie.
The lateral area is a sector of a circular region with curcumference C = 2pieL = 30pie.
The portion of the circular region occupied by the sector is c/C = 14pie/30pie = 7/15.
Calculate the area of the sector ( lateral area).
pieL² = 225pie
L = 7/15 • 225pie = 105pie
Calculate the base area and add the lateral area.
B = pier² = 49pie
B +
|
{"url":"http://whites-geometry-wiki.wikispaces.com/3kemjan?responseToken=3141b0619ce0e4b9c72fe2f487b65325","timestamp":"2014-04-23T07:35:13Z","content_type":null,"content_length":"65495","record_id":"<urn:uuid:0e00591e-5fe8-43ac-93b9-3f6fec2bbe51>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Penrose and Gurzadyan have rediscovered is the WMAP excess at L=40
...the abundance of all other patterns ("random concentric circles") they may be observing statistically coincides with the standard WMAP prediction...
I have been making more explicit and quantitative calculations of the claims by Penrose and Gurzadyan and I finally understood what they actually see.
The explanation why the effect is there remains mysterious but what the effect actually is, in the usual terminology of cosmologists, is totally clear to me now. First, let us ask what is the spacing
of their concentric circles.
Click to zoom in.
The graph above is borrowed from their paper. There are several similar graphs in the paper. You may see that the apparent radii of the concentric circles may be estimated as 4°, 9°, 14°, 19°. So the
spacing between the concentric circles' radii as seen in the skies is 5° or so.
Imagine that these "waves" with periodicity 5° are repeated across the sphere. How many periods can you get? Well, that's easy to calculate. There are 180° between a pole - the center of the
concentric circles - and the antipodal point on the sphere. And because 180/5=36, there will be around 36 concentric circles.
The angular separation is arguably a bit smaller than 5°, so we will say that there are 40 concentric circles or 40 periods on the sphere. Now imagine that you put the center of the concentric
circles at theta=0 of spherical coordinates and decompose the WMAP temperature anomaly into spherical harmonics.
See also Why Penrose and Gurzadyan cannot possibly "see" beyond the spherical harmonics.
Which spherical harmonics will be elevated because of this periodicity?
For your convenience, here you have the theta-dependence of the L=40 spherical harmonics for M=0,1,2,3,4,5 (much higher values of M lead to functions that nearly vanish near the pole - near the
center of concentric circles). The spherical harmonics are multiplied by sqrt(sin(theta)) to make the amplitude pretty much constant. Don't overlook that there are roughly 40 local extrema (20+20)
between theta=0 and theta=pi.
It's not hard to see the answer. They will be the L=40 spherical harmonics. Note that e.g. the L=40, M=40 spherical harmonic has 40 periods around the equator - because of the exp(40.i.phi) factor.
However, Penrose and Gurzadyan have drawn a "temperature variance" (essentially the sum of "(T_{point}-T_{average})^2" over all points in the rings) which is proportional to the squared amplitudes:
note that the y-axis of their graph has non-negative values. That's why there will really be 80 periods around the circle or 40 periods - concentric circles - per 180°.
(Of course, the real WMAP spherical harmonics will include mixtures with different values of M, and possibly L that slightly differ from L=40, so there won't be a single center - Matti! - but there
will still be the characteristic wave number seen in the patterns.)
Once again, if there are concentric circles in the temperature variance whose radii differ by multiples of slightly less than 5°, then they predict that the L=40 or so spherical harmonics should be
Well, let's look at the decomposition of the WMAP temperature variations into the spherical harmonics to see whether this prediction works out correctly.
Click to zoom in.
The black Φ-shaped symbols are the datapoints extracted from the WMAP observations. The red curve is a theoretical model. In general, you see a very nice agreement between the theory and the data up
to L=900 or so. In particular, you may check the "acoustic peaks", especially the first one at L=221. These peaks appear as results of sound waves that propagated through the cosmic plasma before the
microwave background was born at the age of 350,000 years; see a review of baryon acoustic oscillations.
However, there are also several points that don't work too well. The spherical harmonics with L=5 or less have too unsatisfactory statistics and other reasons why they don't agree too accurately.
But you may also see a deficit (black symbols below the red curve) at L=22 or so, and an even more significant (relatively to the uncertainty band) excess at L=40 or so. Can you see it? On the
x-axis, check the points L=10 and L=100 and how the interval between them is separated to 9 pieces with the spacing delta L=10. You can find L=40 now, can't you?
So this small bump - the single black data point above the red curve near L=40 - is exactly what Penrose and Gurzadyan have observed and interpreted in their crazy "alternative" way.
Now, the L=40 point shows a visible discrepancy but it is a part of the values of L that are otherwise beautifully explained by the standard Big Bang cosmology, combined with the initial conditions
produced by the cosmic inflation. The L=40 is just a single black swan, if you wish. The graph above also puts the Penrose-Gurzadyan "discovery" to its proper modest place. It's just a single
deviation from a curve that beautifully agrees with the theory - and that's totally ignored by Penrose and Gurzadyan.
The observed, black curve really looks "discontinuous" near L=40. No "continuous" model can explain the L=40 excess naturally. It is apparently not an "acoustic peak" of a similar type as the peak at
"L=221" (thanks, Tobias, for having spotted the typo!).
Penrose and Gurzadyan are not the first one who have noticed the L=40 peak (in their case, they just noticed something equivalent to it). For example, in 2003 and 2004, Stacy McGaugh has noticed the
excess at L=40, too. See her figure 7 in particular. The L=40 peak is gigantic, indeed.
See also Mortonson et al. 2009 for an attempt to find an inflationary explanation of the L=40 bump (and the related L=40 dip). The original papers that identified these features are listed in the
[17] S. Hannestad, JCAP 0404, 002 (2004), astro-ph/0311491.
[18] A. Shafieloo and T. Souradeep, Phys. Rev. D70, 043523 (2004), astro-ph/0312174.
[19] P. Mukherjee and Y. Wang, Astrophys. J. 599, 1 (2003), astro-ph/0303211.
[20] A. Shafieloo, T. Souradeep, P. Manimaran, P. K. Panigrahi, and R. Rangarajan, Phys. Rev. D75, 123502 (2007), astro-ph/0611352.
[21] G. Nicholson and C. R. Contaldi (2009), arXiv:0903.1106.
Clearly, if you want a satisfactory theory not only of the single peak but also the whole curve, you must continue to work with the standard cosmology - that explains the rest of the curve - and
abandon the whole crackpottish explanation by Penrose and Gurzadyan.
I wonder whether someone knows where the L=40 peak comes from physically. Because the L-dependence seems so discontinuous near L=40, I actually feel that it could make sense to describe the extra
effect in terms of additional concentric circles just like the two authors did. However, what surely doesn't make sense is to treat this little L=40 bump as the "zeroth order" hint about the right
theory of our cosmic origins and the rest of the WMAP graph, correctly predicted by TBBT cosmology, as a detail to be added to Penrose's divine visions. ;-)
And that's the memo.
Bonus I
This is the WMAP spectral graph above, including the L=40 bump, as seen and presented by the "science journalists". All the context and sensible quantitative perspective on the "big picture" is
totally erased and an infinitesimal bump is suddenly promoted to a proof that "upends" inflationary cosmology if not the Big Bang theory itself. ;-)
Most people are just way too gullible.
Bonus II
I have also tried to reverse-engineer their thinking a little bit. In my guess, they decided that there were "circles" in the WMAP picture at the beginning, and then they were trying to find them by
slightly more quantitative methods described in the article. If that is so, it is pretty much impossible to see the excess at L=40 with "bare eyes".
It would mean that even though the L=40 modes dominate the pictures they're showing (Figure 2 and 4 in particular), they had to be originally attracted by some more common features of the WMAP
picture that doesn't depend on the L=40 bump. And they simply incorrectly calculated the typical frequency of the concentric circles as predicted by the smooth model - the version of the "red"
acoustic WMAP curve.
It's even my guess that they have incorrectly assumed that the WMAP temperatures at random places in the skies are distributed as a "white noise", with different points being independent from each
other. Of course, this naive model is instantly falsified at almost any confidence level - which they may have interpreted as seeing statistical deviations from the Big Bang model.
In reality, they would have only seen the falsification of a (their) completely naive "white noise" model of the WMAP temperatures. The actual temperature variations in the WMAP data exhibit lots of
autocorrelations at longer wavelengths - which may inevitably be interpreted as waves of segments of concentric circles (or at least their arcs).
It's pretty much obvious that there can't exist any similar "qualitative patterns" - such as the excess of some random concentric circles - that wouldn't be captured in the WMAP spectral graph as a
function of L. Any pattern similar to "concentric circles" would always manifest itself as a rather simple feature of the spherical harmonics components.
So they don't use the standard spherical harmonics; and they don't have a robust enough statistical methodology that would allow them to say whether the abundance of their circles is unusual or not -
relatively to the smooth Big Bang model. (Their starting point is really to "deny" the whole Big Bang theory so they couldn't possibly used the right spectral curve - with the right L-profile and
acoustic peaks - to make the relevant statistical comparisons.)
At any rate, the paper describes no genuine new effect and the statement that they have actually been able to see the excess of the L=40 concentric circles may turn out to be too flattering.
snail feedback (5) :
The graph shows similar escapes below the model projections at around 0.2 and 20.
Why are these not of comparable interest to the one around 40?
You surely meant L=2 and not L=0.2, right? ;-) There is no L=0.2 mode. The quantum number is integer.
The dip at L=20 or 22 is discussed somewhere in the fast comments. It is surely comparably interesting. The dip is just a little bit less statistically significant than the L=40 bump. Note that
the uncertainty is higher for smaller L so deviations for low L are "more normal".
The anomalously low value for L=2 is discussed under the keyword "WMAP quadrupole". It could be just a chance - or some new effect that knows about the whole size of the visible Universe.
Frankly speaking, the main reason why the "mainstream" media haven't discussed the L=2 and L=20 dips is because Prof Roger Penrose didn't rediscover them by his heuristic or amateurish methods.
just in case
Сonsider natural logarithms mass of quarks and explore them:
Mu=1.5- 3 MeV; Md=3-7 MeV;
lnMu=0.4-1.09; lnMd=1.09-1.94
Ms=70-120 MeV; Mc=1160-1390 MeV;
lnMs=4.24-4.78; lnMc=7.05-7.23
Mb=4130-4270 MeV; Mt=170900- 177500MeV;
lnMb=8.32-8.35; lnMt=12.04-12.08
As we can see, natural logarithms of values of mass are as following (expressed in round numbers): 1;2;4;5;7;8;12. Note that numbers divisible by 3(3,6,9) are absent, except the last number N=12
. Why? It seems to me interesting question!
Ratio 3:1?
Very good Lubos. When you're right you're right.
It's creepy as hell, but the instant I clicked on the link, I thought, "This sounds like something Penrose would think..."
That's unfair, I know he's quite brilliant in many ways, but for the love of...
|
{"url":"http://motls.blogspot.com/2010/11/what-penrose-and-gurzadyan-have.html","timestamp":"2014-04-16T13:02:47Z","content_type":null,"content_length":"209386","record_id":"<urn:uuid:e91f6667-aa0c-4b1e-8fd3-a024080ee54e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: On the Set of Realizations of
Edge-Weighted Graphs in Euclidean Spaces
Abdo Y. Alfakih
Department of Mathematics and Statistics
University of Windsor
Windsor, Ontario N9B 3P4
February 22, 2005
AMS classication: 51K05, 90C22, 52A20, 05C50, 15A57
Keywords: graph realizations, Euclidean distance matrices, semidenite program-
ming, low rank solutions, Gale transform, convex sets.
Let G = (V; E; !) be an edge-weighted graph. A realization of G in < r is
a mapping of the vertices 1; 2; : : : ; n of G into points p 1 ; p 2 ; : : : ; p n in < r such
that jjp i p j jj 2 = ! ij for every edge (i; j) 2 E. In this paper, we study the
geometry of the set of all realizations of G, and we present a simple randomized
algorithm for obtaining realizations of G in low dimensional Euclidean spaces.
Some numerical results on randomly generated problems are also presented.
1 Introduction
Let G = (V; E; !) be a given simple connected undirected graph, where V =
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/788/3927166.html","timestamp":"2014-04-17T13:14:26Z","content_type":null,"content_length":"8069","record_id":"<urn:uuid:ed89987a-6ebd-4924-96e5-9bd00fb31428>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gurnee Algebra 2 Tutor
Find a Gurnee Algebra 2 Tutor
...I graduated with a BS in Biology (with minor in chemistry) and have taken some independent master's level biology courses as well. My specialization was cellular and molecular biology which
required multiple courses in genetics. As an undergraduate, I worked with my school, Towson University, t...
26 Subjects: including algebra 2, chemistry, geometry, algebra 1
...I have been successfully tutoring students on all sections of the ACT, including Mathematics - which is my favorite - for nearly seven years. In addition I hold a mathematics endorsement, and
served as a Mathematics Coach to make math teachers better at what they do. As a result, I have become very proficient at teaching math and particularly good at helping students that are having
20 Subjects: including algebra 2, reading, English, writing
...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help
all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon.
12 Subjects: including algebra 2, calculus, geometry, algebra 1
...I am also helping students who is planning to take the AP Calculus, ACT and SAT exams. Many students who hated math started liking it after my tutoring. That is my specialty.
12 Subjects: including algebra 2, calculus, trigonometry, statistics
...I have tutored students in Precalculus in a math lab setting. I can assist with any homework or class work you have or bring supplemental worksheets in areas you may need more assistance. I can
tutor using manipulatives and other helpful tools for struggling students, or I can help students to learn in the classic style.
8 Subjects: including algebra 2, geometry, algebra 1, SAT math
|
{"url":"http://www.purplemath.com/Gurnee_Algebra_2_tutors.php","timestamp":"2014-04-19T05:10:04Z","content_type":null,"content_length":"23980","record_id":"<urn:uuid:0d449933-7430-4ff8-a9f2-a03d2220b301>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mackey(also Green and Tambara) functors and Greenlees-May
up vote 4 down vote favorite
This is somewhat related to a question that I asked on Math.SE but, sadly, received no response. I apologize ahead of time if this is not appropriate for MO. Feel free to vote to close if this is the
I really have two (distinct) questions. The first is regarding a paper by Greenlees and May and the second is more of a "big-picture" question with no explicit relationship to their paper.
Let $G$ be a finite group.
In their paper Some Remarks On the Structure of Mackey functors , Greenlees and May define the functor:
$R: GMod \rightarrow M[G]$ where $GMod$ is the category of finite left $G$-modules and $M[G]$ is the category of $G$ Mackey functors by:
$RV(G/H) = V^H$ where $V$ is a $G$ module and $V^H$ is the $H$ fixed point set of $V$.
In their main theorem(Thm. 12) they consider the map $\eta: M \rightarrow RM(G/H_{j,k})$.
Question 1: In Theorem 12, why are $coker(\eta)$ and $ker(\eta)$ in $\mathcal{A}$? Unfortunately I don't see how this clearly follows from the induction hypothesis at the moment.
Question 2: In general, What are some examples of added benefits (aside from additional structure) that one obtains when it is known that you have a Green or Tambara functor rather than just a Mackey
at.algebraic-topology group-cohomology ct.category-theory equivariant-cohomology
add comment
2 Answers
active oldest votes
I thank you for the careful reading and apologize for the concision. This is a downwards induction on the size of subgroups. Using the explicit description of the $RV$ given top of page
239 and the conventions recalled at the bottom of page 240, we arrived at the description of the relevant $RV$ given just below Prop. 7. The map $\eta$ is the identity on $M(G/H_{j,k})$
and since $M(G/H_{j',k'}) = 0$ for $j'< j$ and for $j'=j$ and $k'< k$, the same is true of $RM(G/H_{j,k})$. Therefore $Ker(\eta)$ and $Coker(\eta)$ can only be non-zero on $G/H_{j',k'}$
up vote 12 where $j'>j$ or $j'=j$ and $k'>k$ and hence they are in $\mathcal{A}$ by the induction hypothesis.
down vote
accepted For the second question, I see no obvious relationship between our additive description of Mackey functors and any multiplicative structure that might be present. Anything anyone can say
about that would be welcome. I've not thought hard about the question.
@PeterMay Thank you for your response! That makes much more sense now. Personally, the definition on the bottom of page 238 (definition 1) of $RV$ in terms of the $H-$fixed points
seems more natural. Is there is a way to see $ker$ and $coker$ being in $\mathcal{A}$ without appealing to the explicit construction given on the top of page 239? My second question is
not directly related to your paper. I was simply asking (in general) as to what one gains from having a Green or Tambara functor instead of a run-of-the-mill Mackey functor. I will
edit as to make this distinction clear. Thanks! – confusedmath Sep 19 '12 at 2:07
@PeterMay Also, I was wondering if there are analogous theorems for Green/Tambara functors. Are all Green (resp.Tambara) functors built from "simpler" Green (resp. Tambara) functors? –
confusedmath Sep 19 '12 at 2:16
add comment
That is not what one expects from analogy with simpler structures. It would be of interest is to compute the ``box product'' $RV\Box RW$ in terms of the additive description of Mackey
functors that Greenlees and I gave. The point is that a Green functor $M$ is specified by a product $M\Box M \to M$. The calculation should not be hard in the simplest cases (cyclic groups
up vote 0 of prime order say), and as far as I know has not been done.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology group-cohomology ct.category-theory equivariant-cohomology or ask your own question.
|
{"url":"http://mathoverflow.net/questions/107491/mackeyalso-green-and-tambara-functors-and-greenlees-may","timestamp":"2014-04-20T09:01:46Z","content_type":null,"content_length":"58673","record_id":"<urn:uuid:b953b569-fe24-48e2-8bf0-84ed9d7ad2ea>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Filter list
Narrow the list of resources by…
• SUB TOPIC
• FORMAT
• Event
• Also in…
• WHOLE SCHOOL
• Most Popular
Lesson plan
What's in my bag? Simple Algebraicsubstitution.KS3. A PowerPoint starting with the simple rules of algebraic substitution and finishing with a puzzle where an algebraic code leads to objects
found in my bag. Objective Substitute num…
• Popular
Lesson plan
"finding the nth term" sequences level 6/7. nth term, sequences, position to term rule level 6/7, grade D/C powerpoint 97 lesson A few sequences… 9, 13, 17, 21…. ….. 25, 29 term to term rule: add
4 1 On mini whiteboards All students write …
• Popular
Lesson plan
Linear graphs. PowerPoint. Plenary. Algebra KS4.. Lesson on how to draw the graphs of linear function. Involves understanding of key words. 04/18/14 Presented By J. Mills-Dadson 1 Linear Graphs
Objectives: by the end of the lesson, student…
• Popular
Lesson plan, Activity
Sequences Lessons nth term linear. Here are 6 lessons in order with differentiated activities on linear sequences. If terms are next to each other they are referred to as consecutive terms. 1 4
25 16 9 How is the sequence being generated? …
• Popular
Lesson plan, Activity
Coordinates & Straight-Line Graphs. Spreadsheet for either MS Excel 98-2003 or MS Excel 2007-2013. Read and plot positive (x, y) coordinates in the first quadrant. Recognise straight-line graphs
parallel to the x or y axes and main diagona…
• Popular
Lesson plan
Collecting Like terms level Grade E D. Complete lesson with differentiated activities and answers so students can mark their own work Check out my other lessons Please let me know what you think?
Some: will be able to subtract terms to sim…
• Popular
Lesson plan
Expanding double bracket quadratics grade C. Great lessons with differentiated activities showing students how to expand double brackets using the grid method Worked brilliantly with my students
check out my other lesson let me know what y…
• Popular
Lesson plan, Worksheet
Name the Planet- Algebraic Substitution.Powerpoint. This PowerPoint substitutes numbers into algebraic expressions. Each answer is then converted back into a letter which then rearrange to make
the names of planets, dwarf planets or moo…
• Popular
Lesson plan, Worksheet
KS3 Substitution into simple expressions game. KS3 Maths game, worksheet and powerpoint on substitution. The resource includes a game and to play this students will need a dice. The game ends
when the first person gets to the finish. When …
• Popular
Lesson plan, Worksheet
KS3 Sequences Ppt Lesson and Worksheet. This series of sequences lessons are based on a serial burglar to attacks houses in a pattern. Moves onto Nth Term and sequences from geometric patterns.
Night House 1 5 2 9 3 13 4 17 5 21 6 7 8 9 10…
• Popular
Lesson plan, Worksheet
Substituting into formula crime scene. Substitution where unknown is not the subject crime scene investigation complete with investigation booklet. ÷ 2 ÷ 2 –3 –3 2 Last night a treasure chest was
taken from the school funds, police have na…
• Popular
Lesson plan, Activity
Speed, Distance, Time - interactive whiteboard act. Others will need more practice. Teacher Notes:Use the triangle to help pupils remember the equations. Speed = distance ÷ time. To find the
Distance = speed x time To find the Time = dista…
• Popular
Lesson plan, Worksheet
Expanding Single Brackets. Level 6 or Grade D lesson on expanding brackets. Some: will be able to expand 2 brackets and simplify the expression. Brackets Getting a little more difficult: 4(3x +
2) 3x(x + 5) 2x(y + 3) Questions Green 3(x + …
• Lesson plan, Leadership plan, Worksheet
KS4 Maths: Quadratic Graphs. KS4 Maths lesson powerpoint and worksheets. Created by J. Mills-Dadson 1 Quadratic Graphs y = x2 x -2 -1 0 1 2 x2 y Note: - X - = + So -22 = -2 X -2 = 4 Use it to
find the values of x2 4 1 0 1 4 What do you thi…
• Lesson plan, Activity
Pythagoras' Theorem. Spreadsheet for either MS Excel 98-2003 or MS Excel 2007-2013. Find length of a line joining two (x, y) points using Pythagoras. Solve problems involving cuboids using
Pythagoras. Investigate dimensions of cuboids usin…
• Lesson plan, Worksheet, Tutorial
introduction to algebra. A lesson to introduce algebra. (DIFFERENTIATION). Work in pairs Swap sheet and mark Address Misconceptions Praise work Write Mathematically: Independent Working Bring
class back together Class examples Write the we…
• Lesson plan, Other, Worksheet, Activity
Simultaneous Equations and Graphs - KS4. Full lesson with lesson plan on introducing solving simultaneous equations using graphical methods - starter is a kinaesthetic activity where pupils order
themselves according to the value of the ex…
• Lesson plan, Pupil assessment, Other, Worksheet
KS3 Number Patterns & Sequence (MEP– Yr 7– Unit 7). Maths worksheets and activities. Again, this topic is an important building block in mathematical understanding. The number square will help
you with some of them. Exercises 1. On a numbe…
• Lesson plan, Pupil assessment, Other, Worksheet
KS3 Areas and Perimeters (MEP – Year 7 – Unit 9). Worksheets, activities and lesson plans. We will also begin to develop a more algebraic approach to finding areas and perimeters. So, in total,
we have an area of 5 square units. (a) (b) (c…
• Lesson plan, Pupil assessment, Other, Worksheet
KS3 Algebra – Brackets (MEP – Year 8 – Unit 8). Worksheets and Activities. MEP Y8 Practice Book A 127 8 Algebra: Brackets 8.1 Expansion of Single Brackets In this section we consider how to
expand (multiply out) brackets to give two or mor…
|
{"url":"http://www.tes.co.uk/teaching-resources/secondary-44354/maths-46116/ks3-lesson-plan-60011/?SFBC_FilterOption=2¶metrics=46117","timestamp":"2014-04-21T14:57:41Z","content_type":null,"content_length":"67267","record_id":"<urn:uuid:f63983dc-6bcb-49a3-9f6f-cd3213ed6414>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
These four problems require some logical thinking and a willingness to work systematically. Routes to a solution are not immediately obvious so working on the problems could help students to develop
their resilience, an important quality for mathematical problem-solvers.
Possible approach
Before introducing the task:
"Mathematicians need to be resourceful problem-solvers who don't give up, who talk to each other and share good ideas, who work strategically and systematically. The answers won't be immediately
obvious, so these problems will test your ability to work as mathematicians."
The problems could be used in several ways:
Hand out this worksheet and invite students to work on all four problems in pairs.
Cut the worksheet into the four separate problems and invite different groups to work on different problems.
Work on one or two of the questions together as a class and then invite students to use each other's useful insights to solve the remaining problems.
While students are working, circulate and listen for useful insights. Where appropriate, bring the class together so that students can share successful strategies with the rest of the class.
The following key questions or prompts could be offered to students who are stuck:
What do you know? Write it down.
What can you deduce from what you know? Write it down.
Is there a good way of representing what you know, to make it clear?
Possible extension
For other problems that require systematic and logical thinking see Two and Two, Product Sudoku and Cinema Problem.
Possible support
Of the four problems, Football Champ and Hockey use very similar techniques, so one could be solved as a class before students are given the second to apply any useful methods they have devised.
|
{"url":"http://nrich.maths.org/8073/note?nomenu=1","timestamp":"2014-04-19T17:06:00Z","content_type":null,"content_length":"4972","record_id":"<urn:uuid:10502f34-6bb1-45e3-a2c0-2c9a9728af2c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Balanced Partitioning Dynamic Programming Problem
February 28th, 2013, 12:27 PM #1
Junior Member
Join Date
Feb 2013
Balanced Partitioning Dynamic Programming Problem
The problem is that you have a set of numbers and you need to divide that set into two subsets where the difference between the sums of the subset is minimal.
Example: a set of numbers {1,5,9,3,8}, now the solution is two subsets, one subset with elements {9,3} and the other {8,5,1} the sum of the first one is 13 and the sum of the second is 13 so the
difference between the sums is 0. The result shows the difference between the sums.
Another example: a set of numbers where the difference between the subsets cannot be zero, {9 51 308 107 27 91 62 176 28 6}, the minimal difference between the two subsets is 2.
Can someone please explain this code in detail? I've tried debugging it but i can't figure out how it produces the result. I've been searching for a solution for the problem and this is the code
that I stumbled upon. I want to know how the function finds the two subsets, it works great because I've tested it for up to 300 inputs which sum adds up to 100,000.
Many thanks.
p.s Don't worry about the memory leak or that you can only input 300 numbers.
#include <iostream>
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <limits.h>
using namespace std;
int BalancedPartition ( int a[] , int n )
int sum = 0;
for( int i = 0 ; i < n ; i++)
sum += a[i];
int *s = new int[sum+1];
s[0] = 1;
for(int i = 1 ; i < sum+1 ; i++) s[i] = 0;
int diff = INT_MAX , ans;
for(int i = 0 ; i < n ; i++)
for(int j = sum ; j >= a[i] ; j--)
s[j] = s[j] | s[j-a[i]];
if( s[j] == 1 )
if( diff > abs( sum/2 - j) )
diff = abs( sum/2 - j );
ans = j;
return sum-ans-ans;
int main()
int n,result, arr[300];
cin >>n;
for(int i = 0; i < n; i++)
result = BalancedPartition(arr,n);
cout <<abs(result); // The difference between the sums of the two subsets
return 0;
Re: Balanced Partitioning Dynamic Programming Problem
Shouldn't set the subsets be {1 3 9} and {8 5} to give each subset a total of 13? Also your program gives a difference of 1 not 2 for {9 51 308 107 27 91 62 176 28 6}. To understand how it
produces the result, try making s a pointer to an array of bool instead of int and replace 1 with true and 0 with false when using s. Also note that the number of elements of the s array is 1
more than the sum of the numbers entered. It iterates through the entered numbers from start to finish and for each number entered iterates through the s bool array backwards.
Can someone please explain this code in detail?
Which part of the code don't you understand as it's fairly straightforward c++. If you indicate which line(s) you don't understand I'll try to help to explain what that particular c++ code is
Re: Balanced Partitioning Dynamic Programming Problem
Yeah the example i wrote has many solutions i wrote one of them, yours is one of them too, about the code it is straightforward C++, but i do not see how is the s array used and how through it
the solutions are generated (that would be the 2 for loops)
Re: Balanced Partitioning Dynamic Programming Problem
clearly, if S is the total sum and P is the sum of a subset then the result R ( = the absolute value of the difference of P and the sum of its complement ) equals |S - 2*P|. The two loops "for(
... for( ... if( s[j] == 1 ){ /*test*/ } ..." just iterates all the possible values of P ( non optimally though ). Then, the "/*test*/" part above just select the P that minimizes R.
if you don't see how the iteration works follow the 2kaud's advice of considering "s" as an array of booleans, then follow the code and write down on paper the sequence of positions of s[]
elements that are set true at each iteration.
February 28th, 2013, 02:59 PM #2
Senior Member
Join Date
Dec 2012
February 28th, 2013, 05:04 PM #3
Junior Member
Join Date
Feb 2013
March 1st, 2013, 02:54 AM #4
Senior Member
Join Date
Oct 2008
|
{"url":"http://forums.codeguru.com/showthread.php?535049-Balanced-Partitioning-Dynamic-Programming-Problem&p=2107585","timestamp":"2014-04-17T00:59:02Z","content_type":null,"content_length":"82944","record_id":"<urn:uuid:357341d2-facd-413a-82b5-9bc4eeff46a0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lattice Stick Number vs. Stick Number of Knot
up vote 6 down vote favorite
Can the lattice stick number of a knot be bounded by the stick number of the knot?
The stick number $S(K)$ of a knot $K$ is the fewest number of segments needed to realize it by a simple 3D polygon. The lattice stick number $S_L(K)$ is the fewest segments in a realization in the
cubic lattice, with all segments parallel to a coordinate axis. For example, the stick number of the trefoil knot $K=3_1$ is 6, and its lattice stick number is 12 (the latter a result of Huh and Oh
from 2005).
My question is whether it is possible to bound $S_L(K)$ by $m S(K)$, where $m$ is some multiplier factor. Ideally $m$ would be a constant, but perhaps it is more realistic to expect it to depend on
the complexity of the knot (e.g., on its crossing number $cr(K)$). What I have in mind is replacing each stick in a stick realization by a bounded lattice path.
Addendum. Tracy Hall's clever example below indicates that it is unlikely that $m$ could be a constant.
knot-theory stick-knots geometry discrete-geometry
1 Clarification: for lattice stick number, can a "segment" by arbitrarily long, or are you counting the total number of unit segment lengths? (Both questions look equally interesting a priori.) –
Cap Khoury Jun 15 '10 at 12:50
@Cap Khoury: The former. In general, the length of the sticks is not part of the definition of $S_L(K)$. One stick can consist of several collinear unit-length segments. – Joseph O'Rourke Jun 15
'10 at 12:57
add comment
2 Answers
active oldest votes
I wouldn't be surprised by something like a quadratic bound, or possibly something reasonable in terms of another complexity measure for the knot, but I see no hope for making $m$
constant. Consider the following construction: given $m$, choose some large number like $N=(10m)^6$ of points uniformly at random in the unit sphere, and connect them sequentially in a
cycle with straight line segments to define a knot $K$. By construction $K$ has stick number no more than $N$, but each stick has a long narrow tunnel that it must traverse in a very
up vote 7 precise direction, which is difficult to do with only $m$ lattice sticks. Of course any one tunnel can be made shorter and wider with an affine transformation (or any small collection of
down vote tunnels, with a piecewise affine transformation) but I am convinced (without attempting a rigorous proof) that with probability approaching $1$ a knot so constructed has a lattice stick
accepted number much higher than $mS_L(K)$.
This is a convincing example, Tracy! Thanks! – Joseph O'Rourke Aug 4 '10 at 13:12
add comment
The lattice stick number is obviously bounded by some function of the stick number: there are finitely many graphs with stick number at most k (because there are finitely many line segment
arrangements in the plane and choices of over-under relationships on each crossing) so the lattice stick number of stick-number-k graphs is just the max of the lattice stick number of this
up vote 1 finite set of graphs. This argument doesn't give an explicit or good bound on the number, though.
down vote
Thanks, David, for that clear argument! – Joseph O'Rourke Aug 5 '10 at 16:22
add comment
Not the answer you're looking for? Browse other questions tagged knot-theory stick-knots geometry discrete-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/28241/lattice-stick-number-vs-stick-number-of-knot?sort=newest","timestamp":"2014-04-18T21:08:16Z","content_type":null,"content_length":"59399","record_id":"<urn:uuid:e3a2172e-1a57-4b7a-a9bd-48abee027f37>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Strategy of Cramming
Title The Strategy of Cramming
Publication Report
Year of 2001
Authors Wos, L
Series J. Automated Reasoning
Date 08/2001
Other ANL/MCS-P898-0801
Offered in this article is a new strategy, cramming, that can serve well in an attempt to answer an open question or in an attempt to find a shorter proof. Indeed, when the question can
be answered by proving a conjunction, cramming can provide substantial assistance. The basis of the strategy rests with forcing so many steps of a subproof into the remainder of the proof
that the desired answer is obtained. As for reduction in proof length, the literature shows that proof shortening (proof abridgment) was indeed of interest to some of the masters of
logic, masters that include C. A. Meredith, A. Prior, and I. Thomas. The problem of proof shortening (as well as other aspects of simplification) is also central to the recent discovery
by R. Thiele of Hilbert\'s twenty-fourth problem. Although that problem was not included in his 1900 Paris lecture (because he had not yet sufficiently formulated it), Hilbert stressed at
Abstract various times in his life the importance of finding simpler proofs. Because a sharp reduction in proof length (of constructive proofs) is correlated with a significant reduction in the
complexity of the object being constructed, the cramming strategy is relevant to circuit design and program synthesis. The most impressive success with the use of the cramming strategy
concerns an abridgment of the Meredith-Prior abridging of the Lukasiewicz proof for his shortest single axiom for the implicational fragment of two-valued sentential (or classical
propositional) calculus. In the context of answering open questions, the most satisfying examples to date concern the study of the right-group calculus and the study of the modal logic
C5. Various challenges are offered here.
PDF http://www.mcs.anl.gov/papers/P898.ps.Z
|
{"url":"http://www.mcs.anl.gov/publication/strategy-cramming","timestamp":"2014-04-24T08:40:07Z","content_type":null,"content_length":"25302","record_id":"<urn:uuid:4234435e-27ff-4ca9-87b4-8bd431163402>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
WPI REU in Industrial Mathematics and Statistics Alumni
The REU program at WPI creates lasting relationships between its participants. After spending an intense 8 weeks living and working together, they become almost like a family. We at WPI are proud of
all of our family members and would like to give you a glimpse of their work and accomplishments.
If you have any questions or need advice you can contact us by email.
Ane Coughlin
• 1999: Graduated from Bates College with B.S. in mathematics
• 2001: Graduated from Boston College with M.S. in mathematics
• Currently teaching math in Braintree High School, Braintree MA
Kristin Duncan
• May 1999: graduated from University of Dayton with B.S. in mathematics
• Summer 2000: won ASA Student Paper Competition
• June 2001: graduated from Ohio State University with M.S. in statistics
• Summer 2001, 2002: REU assistant for biostatistics at Ohio State University
• August 2002: married
• Currently working on Ph.D. in statistics at Ohio State University
Ellen Phifer
• Completed M.S. in Applied Math at U. of Delaware
• Now working in defense industry
Jonathan Van Haste
• 1999: graduated from Calvin College with a B.S. in mathematics
• 2001: graduated from Mont Clair University with M.S. in statistics
• Currently works at AC Nilson, a market research firm
Brian Ball
• Summer 1999: REU Assistant
• May 2001: graduated from WPI in Mathematics and Physics
• Currently pursuing graduate studies at North Carolina State University in Computational and Applied Mathematics
• Adopted a dog named Zoe
Chris Shane
• May 2000: graduated from Kansas State University with B.S. in Mathematics
• December 2002: graduated from Kansas State University with M.S. in Mathematics
• Currently Actuary for Lafayette Life Insurance, Indiana
Sarah Winnie
• 2000: graduated from Hamilton College with B.A in Mathematics
• 2002: graduated from University of East Anglia (England) with M.Sc. in environmental sciences (math modeling concentration)
Brooke Andersen, REU 2000
• January 2001: presented at AMS/MAA Joint Mathematics Meeting in New Orleans
• Summer 2001: attended summer institute at Carnegie Mellon University
• 2002: graduated from Centre College in Mathematics
• Currently teaching math at Lexington High School, Lexington MA
Michael Escovitz, REU 2000
Ed De Guzman, REU 2000
• Summer 2001: participated in Computer Science REU at UMass-Amherst while doing research for the Center for Intelligent Information Retrieval
• 2001-2002: was undergraduate researcher for the Vision, Interaction, Language, Logic, and Graphics Environment (VILLAGE) Research Group
• May 2002: completed a senior honors thesis in Computer Science and graduated from Rutgers University with a BS in Computer Science and a BA in Mathematics
• Currently is a Ph.D. student in the Department of Computer Science at the University of California-Berkeley
Barb Hess (Bennie), REU 2000
• January 2001: presented at AMS/MAA Joint Mathematics Meeting in New Orleans
• February 2001: participated in COMAP math modeling competition with outstanding rating
• May 2001: graduated from Bethel College
• Fall 2001: began graduate studies in Statistics at University of Minnesota
Garth Johnson, REU 2000
• 2000-2001: research at University of Central Arkansas was presented at MAA, Arkansas Academic of Sciences and Arkansas Space Grant Consortium conferences
• 2002: graduated
• 2001: attended joint conference in San Diego and presented his undergraduate thesis: "Exact Solutions to the Variable Speed Wave Equations Through Darboux Transformations"
Jon Kennedy, REU 1998 & REU 2000
• May 2000: graduated from WPI in physics
• Summer 2000: REU assistant at WPI
• January 2002: graduated from Boston University with M.A in Physics
• August 2002: began working on his PhD in Applied Math at WPI
Wendy Kooiman, REU 2000
• December 2000: graduated from Grand Valley State University in math
• May 2003: Graduated from Rensselaer Polytechnic Institute with M.S degree in Operations Research and Statistics
• January 2001: Working at Smiths Industries Aerospace in Kentwood, MI as an Integrated Navigation Systems Engineer
• January 2004: Began studying for M.Eng Electrical Engineering at Western Michigan University in Kalamazoo, MI
Doug Mitarotonda, REU 2000
• Fall 2000: research work on solar power lights in Nepal
• January 2001: presented at AMS/MAA Joint Mathematics Meeting in New Orleans
• Summer 2001: working for CIA in Washington, D.C.
• May 2002: graduated from Cornell University in Computer Science, Mathematics, and Asian Studies
• Fall 2002: attending Cornell University for Master of Engineering in Computer Science
• Summer 2002: worked at MIT Lincoln lab
Sandor Swartz, REU 2000
• Summer 2001: participating in research at U. of Missouri-Rolla
• December 2001: graduated from U. of Missouri-Rolla
Katie Tranbarger, REU 2000
• June 2001: graduated from California Polytechnic State University in statistics
• Fall 2001: began Ph.D. in statistics at UCLA
• Dec. 2002: graduated from UCLA with M.S in statistics
Rebecca Wasyk, REU 2000
• December 2000: graduated from James Madison University in Mathematics
• January 2001: presented at AMS/MAA Joint Math Meeting in New Orleans
• Summer 2001: REU assistant at WPI
• Fall 2001: began graduate studies at Brandeis University
• Fall 2003: began Ph.D. studies in Applied Mathematics at WPI
Blythe Ashcraft, REU 2001
• May 2002: graduated from Centre College, Danville, KY, with a B.S. in mathematics
• January 2002: presented at AMS/MAA Joint Mathematics Meeting in San Diego
• Fall 2002: began working toward a Ph.D. in Physical Chemistry from Wake Forest University of Winston-Salem, NC
Katherine Kline, REU 2001
• May 2002: graduated from Bryn Mawr College in Mathematics
• Fall 2002: began graduate studies in Systems Science and Mathematics at Washington University, St. Louis, MO
Ivan Ramler, REU 2001
• January 2002: presented at AMS/MAA Joint Mathematics Meeting in San Diego with Thomas Wakefield - won award
• May 2002: graduated from University of Minnesota-Morris in Mathematics and Statistics
• Summer 2002: REU assistant at WPI
• Fall 2002: began graduate studies in Statistics at Iowa State University
Erin Renk, REU 2001
• May 2002: graduated summa cum laude with departmental honors in economics and mathematics (B.Sc.) from the University of Pittsburgh
• Fall 2002: began work as a civilian Operations Research Analyst at the Naval Air Systems Command.
Jennica Sherwood, REU 2001
• December 2002: graduated magna cum laude with a degree in mathematics from the University of San Francisco
• May 2003: Participant in the 2003 Program for Women in Mathematics in Mathematical Biology, Institute for Advanced Study & Princeton University
• Fall 2003: began Ph.D. program in Neuroscience at Vanderbilt University Brain Institute
Thomas Wakefield, REU 2001
• May 2002: graduated from Youngstown State Unversity in Mathematics and Economics
• January 2002: presented at AMS/MAA Joint Mathematics Meeting in San Diego with Ivan Ramler - won award
• Fall 2002: began Ph.D. studies in Mathematics at Kent State University
Andrew Jalil, REU 2002
• 2004: graduating from Brown University with B.S in Mathematics and Economics
Alex Lenkoski, REU 2002
• Graduate Student in Statistics at University of Washington
Borislav Mezhericher, REU 2002
• 2003: graduating from Queens College with B.A in Mathematics
• January 2003: presented his REU project at the AMS meeting in Baltimore - won award
• graduated from Columbia University in 2008 with a Ph.D. in computational number theory
• quantitative researcher at Citadel (a financial firm in Chicago) since graduation
Sonja Petrovic, REU 2002
• Ph.D. student in Algebra & Number Theory at Univ. of Kentucky
Ravi Srinivasan, REU 2002
• Grad student in Applied Math at Brown University
David Stoltzfus, REU 2002
• 2003: graduating with a B.S in Mathematics from Asbury College
• 2006: Graduated with M.S in Applied Mathematics from WPI
Patricia Tong, REU 2002
• 2004: graduating from NYU with a B.S in Mathematics and Economics
Matthew Willyard, REU 2002
• 2003: will graduate from the University of Rochester
• January 2003: presented his REU project at the AMS meeting in Baltimore - won award
• Graduate student, Dept. of Mathematics, Florida State University
Amy Cohen, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ
• May 2004: graduated from the University of New Hampshire with a B.S. in Mathematics
• June 2004: began work as an Actuarial Assistant at Liberty Mutual in Boston
Michael Coleman, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ - won award
• May 2004: graduated from Boston University with a B.A. in Mathematics and a minor in Astrophysics
• Graduate student, Dept. of Mathematics, George Washington University
Adam Czernikowski, REU 2003
• May 2004: graduated from Oberlin College with a B.Sc. in Mathematics
• Graduate student in Industrial Engineering at University of Florida
Heather Griffin, REU 2003
• May 2004: graduated from the University of Arkansas with a B.S. in Mathematics and a B.S. in Physics
• Spring 2004: got married!
Mary Gruber, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ
• May 2004: graduated from Capital University
• Fall 2004: will attend Michigan State University to pursue a Master's degree in Industrial Mathematics
Erin Haller, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ - won award
• May 2004: graduated from the University of Missouri-Rolla
• Graduate Student in Math at University of Arkansas, Fayetteville
Jessica Jajosky, REU 2003
• Completed medical school at WVU
• Is currently an internal medicine resident at WVU
• Looking to switch to an anesthesia residency, also at WVU
Christopher Kim, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ - won award
• May 2004: graduated from Cornell University with a B.S. in Mechanical Engineering and a B.A. in Mathematics
• Fall 2004: will begin Ph.D. studies in Computational Neuroscience at the University of Chicago
Megan McKinney, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ
• May 2004: graduated from Slippery Rock University with a B.S. in Mathematics concentrating in Actuarial Mathematics, and a minors in Statistics and Economics
• Works at University of Pittsburgh Medical Center
Trinh Pham, REU 2003
• May 2004: graduated from Mills College
• Fall 2004: will attend UC Berkeley to pursue a Master's degree in Statistics
• Assistant Professor of Pharmacy Practice, Yale-New Haven Hospital, New Haven, CT
Sid Rupani, REU 2003
• January 2004: co-presented at AMS/MAA Joint Mathematics Meeting in Phoenix, AZ - won award
• May 2004: graduated from WPI with a B.S. in Mechanical Engineering
• Fall 2004: will pursue an MS in Mechanical Engineering at WPI
• Ph.D. Student in Engineering Systems at MIT
Erica Johnson, REU 2004
• Grad. Student in Math at Univ. of New Hampshire
Andrew Magyar, REU 2004
• Ph.D. student in Biostatistics at Rutgers University.
Kimberly Millard, REU 2004
• Graduate Student in Math at Florida State University
Jessica Scheld, REU 2004
• Graduater Student in Math at Uniersity of Vermont
Brian Skjerven, REU 2004
• May 2005 - Graduated from St. Mary's University of Minnesota, B.A. Mathematics
• July 2006 - Married
• June 2006-August 2007 - IBM intern in Cambridge
• May 2007 - Graduated from WPI, M.S. Applied Mathematics
• Currently - pursuing a Ph.D. in Scientific Computation at the University of Minnesota
Nathanial Burch, REU 2005
• Spring of 2006: Got his B.S. degree in Mathematics from Grand Valley State University
• Graduate Student in Math at Colorado State University
• Plans on finishing up his M.S. in the spring of 2008 and staying at Colorado State University for his Ph.D.
• Research interests include areas involving differential equations, numerical analysis, and probability. Currently working on a modeling project related to wireless ad hoc networks.
Sarah Hartenstein, REU 2005
• Undergraduate student at Carroll College
Razvan Ionescu, REU 2005
• Summer 2006: 4 months internship at New Reinsurance Company Geneva (Switzerland), worked on exposure curves (a pricing method) applied on the non-life contracts
• Spring 2007: Graduaded from ISFA (Institut de Sciences Financičres et Assurances) (Institute for Financial Science and Insurance) University Claude Bernard Lyon France in actuarial math
• Summer 2007: 6 months internship at SCOR Global Life (Paris France) (SCOR is the biggest Life Reinsurer in France, number 7 in the world), working on disability pensions.
Alex Mills, REU 2005
• Undergraduate student at College William and Mary
Christopher Mirabito, REU 2005
• Graduate Student in Math at Univ. of Texas at Austin
Eugene Quan, REU 2005
• May 2007: Graduated from Harvey Mudd in May
• July 2007: Moved to Chicago to work at Citadel Investment Group.
Wendy Chen, REU 2006
Rachel Danson, REU 2006
Dena Feldman, REU 2006
Erin Kiley, REU 2006
Mary Korch, REU 2006
Daniel Lawver, REU 2006
Jason Miller, REU 2006
• May 2007 - Graduated magna cum laude from Marist College with B.S. in Applied Mathematics
• Currently - Pursuing PhD in Mathematics at Tulane University, working on numerical methods for solving hyperbolic conservation laws
Jody Mullis, REU 2006
Melissa Moon, REU 2006
• May 2007: Graduated from Pacific Lutheran University with degrees in mathematics and physics
• June 2007-current: Working as an actuary for Milliman, Inc. in the healthcare practice area
• February 2009: Married husband, Danny
Alla Shved, REU 2006
Lingfeng Tang, REU 2006
• Was Assistant Director at Moody's Analytics, making software for structured finance transparency
• Is currently looking to either change careers or go to grad school
Daniel Smaltz, REU Assistant 2006
Jonathan Adler, REU 2007
Matthew Bader, REU 2007
• Started Venatic Outdoors, an outdoor video production company, in Fall 2009
• Currently works for Paychex Inc. in the Credit Risk department
• Has almost completed his MS in Banking and Finance at Boston University online
• Got married on July 24^th
Paul Bernhardt, REU 2007
Naomi Brownstein, REU 2007
• Currently pursuing a doctorate in Biostatistics at UNC Chapel Hill under an NSF graduate research fellowship.
Jaye Bupp, REU 2007
• Graduated from Alma College in 2008 with B.S. in Mathematics and a B.A. in Music
• Received M.S. in Applied Statistics from Purdue University-Indianapolis in May 2010
• Is currently job searching
Patrick Crutcher, REU 2007
Morgan Gieseke, REU 2007
Yu-Jay Huoh, REU 2007
Nathan Langholz, REU 2007
Sean Skwerer, REU 2007
• Currently a graduate student at UNC-CH department of statistics and operations research
• Is doing research in optimization and shape statistics, whose main applications are brain image analysis and phylogenetic trees
Christopher Steiner, REU 2007
• Completed Masters Degree in Economics
• Currently pursuing Ph.D. in Economics at UC San Diego
• Interested in the applications of mathematics in economic modeling
Grant Weller, REU 2007
• May 2008 - Completed B.A. in Mathematics and Economics at Concordia College
• 2007 - Named to the d3football.com All-America football team in 2007
• August 2008 - started graduate school Department of Statistics at Colorado State University
• December 2010 - Will finish M.S. degree. Plans to continue for a Ph.D. in Statistics.
Gerardo Hernández, REU Assistant 2007
Sara Adler, REU 2008
• Spring 2009 - Received B.A. in mathematics and economics from Case Western Reserve University
• Currently persuing a Ph.D. in economics at UC Santa Barbara
• Research focus is in Environmental Economics, specifically water pollution and water allocation
Jennifer Boyko, REU 2008
• Spring 2009 - Earned B.S. in Mathematics from Haverford College
• Currently a Ph.D. student in the Statistics dept at University of Connecticut.
David B. Brown, REU 2008
David P. Brown, REU 2008
Stephanie Browne-Schlack, REU 2008
Pulak Goswami, REU 2008
Xiao Han, REU 2008
Evan Herring, REU 2008
• Currently pursuing an MS in Math Finance at NYU's Courant Institute
• After graduation, plans to work as a "quant" for a bank in New York
• Currently going for a biostats minor
• Working towards an honors thesis in biology
• Is applying to medical school
• Also enjoys writing website articles, stories, and translating
Laura Hou, REU 2008
Thomas Seaquist, REU 2008
Erin Vinnedge, REU 2008
Jian Wang, REU 2008
Heather Standring, REU Assistant 2008
Maintained by webmaster@wpi.edu
Last modified: Sep 04, 2012, 15:27 EDT
|
{"url":"http://www.wpi.edu/academics/math/CIMS/REU/alumni.html","timestamp":"2014-04-16T07:35:27Z","content_type":null,"content_length":"42965","record_id":"<urn:uuid:9cec1488-e8b9-4b31-8251-baf1bf1a7716>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westvern, CA Algebra Tutor
Find a Westvern, CA Algebra Tutor
It may seem strange to begin with a statistic, but I trust you will agree it's not wholly inappropriate. During the 2009-2010 school year, my 11 AP Chemistry students had a 100% pass rate with
the following score distribution: Seven of them scored a 5, three scored a 4, and one scored a 3. The high scores are greater than three times the national average.
20 Subjects: including algebra 2, algebra 1, chemistry, reading
...I am currently in my second year of being a teacher's assistant for this subject at a university. I have taken many teaching classes and have found easy ways for this particular subject to be
learned. I have been tutoring various levels of math for 4+ years.
14 Subjects: including algebra 1, algebra 2, calculus, physics
...I completed a MA in education in curriculum and instruction and also the requirements for California state teaching credentials in social studies (history), mathematics, physics, and business.
Throughout this time, I have worked as a tutor of both high school and college students. I also worked as a longterm high school substitute teacher for literature, physics, math, and history.
19 Subjects: including algebra 1, algebra 2, reading, writing
...John's College for 2 years, a teaching assistant for astronomy at Oklahoma State University for 3 years, a teaching assistant for general physics at OSU for 3 years, and currently I am a
physics professor at Marymount College in Palos Verdes, CA I have had glowing student reviews throughout my t...
10 Subjects: including algebra 2, algebra 1, calculus, physics
Hello, I love math and transfer that to my students. My present position teaches it all. Patience and attention to detail, by determining what concept(s) the student needs to learn and review, is
my approach to helping them get through their hurdles in the quickest way.
14 Subjects: including algebra 1, algebra 2, reading, grammar
Related Westvern, CA Tutors
Westvern, CA Accounting Tutors
Westvern, CA ACT Tutors
Westvern, CA Algebra Tutors
Westvern, CA Algebra 2 Tutors
Westvern, CA Calculus Tutors
Westvern, CA Geometry Tutors
Westvern, CA Math Tutors
Westvern, CA Prealgebra Tutors
Westvern, CA Precalculus Tutors
Westvern, CA SAT Tutors
Westvern, CA SAT Math Tutors
Westvern, CA Science Tutors
Westvern, CA Statistics Tutors
Westvern, CA Trigonometry Tutors
Nearby Cities With algebra Tutor
Broadway Manchester, CA algebra Tutors
Cimarron, CA algebra Tutors
Dockweiler, CA algebra Tutors
Dowtown Carrier Annex, CA algebra Tutors
Foy, CA algebra Tutors
Green, CA algebra Tutors
La Tijera, CA algebra Tutors
Lafayette Square, LA algebra Tutors
Miracle Mile, CA algebra Tutors
Pico Heights, CA algebra Tutors
Preuss, CA algebra Tutors
Rimpau, CA algebra Tutors
View Park, CA algebra Tutors
Wagner, CA algebra Tutors
Windsor Hills, CA algebra Tutors
|
{"url":"http://www.purplemath.com/westvern_ca_algebra_tutors.php","timestamp":"2014-04-18T22:04:32Z","content_type":null,"content_length":"24191","record_id":"<urn:uuid:84464140-aff9-4ffc-a4d8-a3aba2d81c2a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Imagine you are teaching a fellow student how to solve: 2(x + 6) - 10 = 12 + 4(x - 1) In your own words, explain the process for solving this equation. Please include your work at each step of the
process along with the final answer.
Best Response
You've already chosen the best response.
If it were me, I'd probably make a list of statements and reasons
Best Response
You've already chosen the best response.
for each step
Best Response
You've already chosen the best response.
you want to get all the numbers to one side of the equals sign and all the letters to the right, so first you want to get rid of the brackets. so the equation would be 2x+12-10=12+4x-4
Best Response
You've already chosen the best response.
sorry all the letters to the other side, doesnt matter just as long as they are on opposite sides of the equals sign
Best Response
You've already chosen the best response.
Yeah, see that's what I mean. You did good by stating a "wholeness" or a "goal" to achieve, but you kinda skipped explaining "distributive rule".
Best Response
You've already chosen the best response.
then you just go from there and move everything over then you can simplify what x is but when your moving things to the other side, make sure you change the sign so for example if its +6 and you
need to move it to the other side make it negative and vise versa
Best Response
You've already chosen the best response.
I can imagine that the student would have so many questions after such a short explanation
Best Response
You've already chosen the best response.
2(x + 6) - 10 = 12 + 4(x - 1) You always do parentheses first by using the distributive property... leaving you with 2x + 12 - 10 = 12 + 4x - 4 Add like terms... x's with x's.. numbers with
numbers.. I like to put the x's first and then the numbers... leaving you with 2x + 2 = 4x + 8 I like to get rid of the smallest x by taking the opposite of the smallest x.. since 2x is smaller
than 4x we would get rid of the 2x by -2x from both sides. ... leaving you with 2 = 2x + 8 Then you always go to the side with the x and get rid of the number being added or subtracted from the x
by taking its opposite.. so in this case you would -8 from both sides. .. leaving you with -6 = 2x Go to the side with the x and either multiply or divide... in this case divide both sides by
2... leaving you with -3 = x
Best Response
You've already chosen the best response.
You're assuming the person knows what distributive property and like terms are
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4eb0724de4b0cb9176279caa","timestamp":"2014-04-21T08:05:25Z","content_type":null,"content_length":"48174","record_id":"<urn:uuid:65fd4508-34db-4856-a426-256f707f4a97>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help...?
03-21-2013 #1
Registered User
Join Date
Mar 2013
Homework Help...?
Hello, I'm a beginner to the C language. Today, I got assigned homework, and I need help. Below are the questions and what I got (but I do not think it is right):
1) Write a C statement which will set a to 1 if at least two of the three integer variables x, y, z are true (i.e. if any two or all three are true).
I know it's an if/else statement, but I don't understand what to put (like how to write it so that just 2 or all 3 variables need to be true for a to equal 1).
2) Write a C statement to perform the following operation: if x is between -3.0 and 2.0 inclusive, set y equal to x, otherwise set y to zero.
if (-3.0 <= x <=2.0) y = x;
else y = 0;
3) Given the initialization: double z1 = 1.0, z2 = z1/2; use z1,z2 as a two-point sliding window (i.e. z1=z2; z2=z1/2; slides the window) and write a loop to find the value of z1 such that 1 + z1
is greater than 1 but 1 + z2 is equal to 1.
No clue how to solve it.
4) How many times is the body of the following loop executed?
int i, x=0;
for( i = 0; i < 11; i += 3)
{ x += i; }
I put 5. My rationale is that it'll print 0, 3, 6, 9, and 12 (it'll add 3 to 9 because 9 is less than 11; after 12 the statement is invalid). Is this right?
5) Write a for statement using integer counter i which will print the integer values starting at 0 and ending with 44.
int i; for(i = 0; i <= 44; ++i)
{ printf( "%i \n" , i); }
Can anyone help? I really want to learn this language. Help is much appreciated!
Last edited by JParker; 03-21-2013 at 02:52 PM.
Q1) One way to approach this would be to make a new variable that counts how many of x,y,z are true.
Q2) C syntax does not allow things like
if( a < x < b) /* do stuff */
You need to think of it like "If a is less than x and x is less than b".
Q3) I don't understand what is meant by "sliding window" so I have no idea what the question is about.
Q4) Well, technically speaking, it won't print anything because your code contains no printf. Why not run your code (with printf) and check whether you are right?
Q5) Try it and see!
while(!asleep) {
1) So how would I go about doing that? Sorry, I am a beginner.
2) So would this code work:
if(x >= 3.0 && x <= 2.0) y = x;
else y = 0
3) Is there any other way to solve it without the "sliding window"?
4) I tried it and got 4 numbers (0,3,6,9).
5) The code I made worked.
Last edited by JParker; 03-21-2013 at 04:02 PM.
Bump! I need help!
I understand #2, 3 and 5. I still have no clue about #1 and 4.
well you know how to do and if statement right?
so it is simple,
compare it like you would on paper
you have 3 numbers,
take two and compare them
then take two more and compare them
and you KEEP mentioning "sliding window" but yet dont tell us what your even talking about, so no help there!
PS, this isnt facebook, things like BUMP and posting over and over with no patience for us to help, just ........ people off and they dont help you
3) Given the initialization: double z1 = 1.0, z2 = z1/2; use z1,z2 as a two-point sliding window (i.e. z1=z2; z2=z1/2; slides the window) and write a loop to find the value of z1 such that 1 + z1
is greater than 1 but 1 + z2 is equal to 1.
I am guessing this is a way to calculate epsilon; but, I am not sure.
How to (portably) get DBL_EPSILON in C/C++ - Stack Overflow
I kept getting the wrong value for epsilon; turns out the compiler optimizes the code and results in LDBL_EPSILON under MinGW GCC.
Tim S.
Last edited by stahta01; 03-21-2013 at 10:14 PM.
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the Universe
is winning." Rick Cook
I am guessing this is a way to calculate epsilon; but, I am not sure.
How to (portably) get DBL_EPSILON in C/C++ - Stack Overflow
I kept getting the wrong value for epsilon; turns out the compiler optimizes the code and results in LDBL_EPSILON under MinGW GCC.
Machine epsilon - Wikipedia, the free encyclopedia
Tim S.
Sorry, but I have no clue what this means. I really wish I knew C
#1 is easy:
Count up the number of true's in the variables x,y, and z.
//false is 0, anything else is true
#define false 0
#define true !false
//In your function:
int trueCount=0;
if(x!=0) trueCount++;
if(y!=0) trueCount++;
if(z!=0) trueCount++;
Then a simple
if(trueCount > 1)
It's always tempting to work out more elegant and complex code logic - but avoid being as complex or elegant as you can. By definition, to debug such code, you have to be MORE elegant and
complex, and that isn't always possible - or as clear.
3) Given the initialization: double z1 = 1.0, z2 = z1/2; use z1,z2 as a two-point sliding window (i.e. z1=z2; z2=z1/2; slides the window) and write a loop to find the value of z1 such that 1 + z1
is greater than 1 but 1 + z2 is equal to 1.
Looks for me that you are supposed to write a for-loop.
Bye, Andreas
The loop will execute 4 times. Increments are made before the test of the next loop.
int main(void) {
int i, x=0,n=0;
for( i = 0; i < 11; i += 3)
x += i;++n;
printf("loop counter: %d\n",n);
return 0;
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the Universe
is winning." Rick Cook
03-21-2013 #2
03-21-2013 #3
Registered User
Join Date
Mar 2013
03-21-2013 #4
Registered User
Join Date
Mar 2013
03-21-2013 #5
Registered User
Join Date
Mar 2013
03-21-2013 #6
Registered User
Join Date
Dec 2012
03-21-2013 #7
Registered User
Join Date
May 2009
03-21-2013 #8
Registered User
Join Date
Mar 2013
03-21-2013 #9
Registered User
Join Date
Sep 2006
03-22-2013 #10
Registered User
Join Date
May 2012
03-22-2013 #11
Registered User
Join Date
Sep 2006
03-22-2013 #12
Join Date
Dec 2007
03-22-2013 #13
Registered User
Join Date
May 2009
|
{"url":"http://cboard.cprogramming.com/c-programming/155281-homework-help.html","timestamp":"2014-04-20T22:37:16Z","content_type":null,"content_length":"91498","record_id":"<urn:uuid:b30b3a32-4ca5-4875-915a-1ab094c81aaf>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Linear Algebra View of Calculus: Taking a Derivative with a Matrix
Most people think of linear algebra as a tool for solving systems of linear equations. While it definitely helps with that, the theory of linear algebra goes much deeper, providing powerful insights
into many other areas of math.
In this post I'll explain a powerful and surprising application of linear algebra to another field of mathematics -- calculus. I'll explain how the fundamental calculus operations of differentiation
and integration can be understood instead as a linear transformation. This is the "linear algebra" view of basic calculus.
Taking Derivatives as a Linear Transformation
In linear algebra, the concept of a vector space is very general. Anything can be a vector space as long as it follows two rules.
The first rule is that if u and v are in the space, then u + v must also be in the space. Mathematicians call this "closed under addition." Second, if u is in the space and c is a constant, then cu
must also be in the space. This is known as "closed under scalar multiplication." Any collection of objects that follows those two rules -- they can be vectors, functions, matrices and more --
qualifies as a vector space.
One of the more interesting vector spaces is the set of polynomials of degree less than or equal to n. This is the set of all functions that have the following form:
$p(t) = a_0 +a_1 t +a_2 t^2 + ... +a_n t^n$
where a0...an are constants.
Is this really a vector space? To check, we can verify that it follows our two rules from above. First, if p(t) and q(t) are both polynomials, then p(t) + q(t) is also a polynomial. That shows it's
closed under addition. Second, if p(t) is a polynomial, so is c times p(t), where c is a constant. That shows it's closed under scalar multiplication. So the set of polynomials of degree at most n is
indeed a vector space.
Now let's think about calculus. One of the first methods we learn is taking derivatives of polynomials. It's easy. If our polynomial is ax^2 + 3x, then our first derivative is 2ax + 3. This is true
for all polynomials. So the general first derivative of an nth degree polynomial is given by:
$p'(t) = a_1 + 2a_2 t + 3a_3 t^2 + ... + n a_n t^{n-1}$
The question is: is this also a vector space? To answer that, we check to see that it follows our two rules above. First, if we add any two derivatives together, the result will still be the
derivative of some polynomial. Second, if we multiply any derivative by a constant c, this will still be the derivative of some polynomial. So the set of first derivatives of polynomials is also a
vector space.
Now that we know polynomials and their first derivatives are both vector spaces, we can think of the operation "take the derivative" as a rule that maps "things in the first vector space" to "things
in the second vector space." That is, taking the derivative of a polynomial is a "linear transformation" that maps one vector space (the set of all polynomials of degree at most n) into another
vector space (the set of all first derivatives of polynomials of degree at most n).
If we call the set of polynomials $\inline P_n$, then the set of derivatives of this is $\inline P_{n-1}$, since taking the first derivative will reduce the degree of each polynomial term by 1. Thus,
the operation "take the derivative" is just a function that maps $\inline P_n \rightarrow P_{n-1}$. A similar argument shows that "taking the integral" is also a linear transformation in the opposite
direction, from $\inline P_{n-1} \rightarrow P_{n}$.
Once we realize differentiation and integration from calculus is really just a linear transformation, we can describe them using the tools of linear algebra.
Here's how we do that. To fully describe any linear transformation as a matrix multiplication in linear algebra, we follow three steps.
First, we find a basis for the subspace in the domain of the transformation. That is, if our transformation is from $\inline P_{n} \rightarrow P_{n-1}$, we first write down a basis for $\inline P_{n}
Next, we feed each element of this basis through the linear transformation, and see what comes out the other side. That is, we apply the transformation to each element of the basis, which gives the
"image" of each element under the transformation. Since every element of the domain is some combination of those basis elements, by running them through the transformation we can see the impact the
transformation will have on any element in the domain.
Finally, we collect each of those resulting images into the columns of a matrix. That is, each time we run an element of the basis through the linear transformation, the output will be a vector (the
"image" of the basis element). We then place these vectors into a matrix D, one in each column from left to right. That matrix D will fully represent our linear transformation.
An Example for Third-Degree Polynomials
Here's an example of how to do this for $\inline P_{3}$, the set of all polynomials of at most degree 3. This is the set of all functions of the following form:
$p(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$
where at a0...a3 are constants. When we apply our transformation, "take the derivative of this polynomial," it will reduce the degree of each term in our polynomial by one. Thus, the transformation D
will be a linear mapping from $\inline P_3$ to $\inline P_2$, which we write as $\inline D: P_3 \rightarrow P_2$.
To find the matrix representation for our transformation, we follow our three steps above: find a basis for the domain, apply the transformation to each basis element, and compile the resulting
images into columns of a matrix.
First we find a basis for $\inline P_3$. The simplest basis is the following: 1, t, t^2, and t^3. All third-degree polynomials will be some linear combination of these four elements. In vector
notation, we say that a basis for $\inline P_3$ is given by:
$span \begin{Bmatrix} \begin{pmatrix} 1\\ 0\\ 0\\ 0 \end{pmatrix}, & \begin{pmatrix} 0\\ t\\ 0\\ 0 \end{pmatrix}, & \begin{pmatrix} 0\\ 0\\ t^2\\ 0 \end{pmatrix}, & \begin{pmatrix} 0\\ 0\\ 0\\ t^3 \
end{pmatrix} \\ \end{Bmatrix}$
Now that we have a basis for our domain $\inline P_3$, the next step is to feed the elements of it into the linear transformation to see what it does to them. Our linear transformation is, "take the
first derivative of the element." So to find the "image" of each element, we just take the first derivative.
The first element of the basis is 1. The derivative of this is just zero. That is, the transformation D maps the vector (1, 0, 0, 0) to (0, 0, 0). Our second element is t. The derivative of this is
just one. So the transformation D maps our second basis vector (0, t, 0, 0) to (1, 0, 0). Similarly for our third and fourth basis vectors, the transformation maps (0, 0, t^2, 0) to (0, 2t, 0), and
it maps (0, 0, 0, t^3) to (0, 0, 3t^2).
Applying our transformation to the four basis vectors, we get the following four images under D:
$D\begin{pmatrix} 1\\ 0\\ 0\\ 0 \end{pmatrix}=\begin{pmatrix} 0\\ 0\\ 0\\ \end{pmatrix}$
$D\begin{pmatrix} 0\\ t\\ 0\\ 0 \end{pmatrix}=\begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}$
$D\begin{pmatrix} 0\\ 0\\ t^2\\ 0 \end{pmatrix}=\begin{pmatrix} 0\\ 2t\\ 0 \end{pmatrix}$
$D\begin{pmatrix} 0\\ 0\\ 0\\ t^3 \end{pmatrix}=\begin{pmatrix} 0\\ 0\\ 3t^2 \end{pmatrix}$
Now that we've applied our linear transformation to each of our four basis vectors, we next collect the resulting images into the columns of a matrix. This is the matrix we're looking for -- it fully
describes the action of differentiation for any third-degree polynomial in one simple matrix.
Collecting our four image vectors into a matrix, we have:
$D = \begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 3 \end{pmatrix}$
This matrix gives the linear algebra view of differentiation from calculus. Using it, we can find the derivative of any polynomial of degree three by expressing it as a vector and multiplying by this
For example, consider the polynomial $\inline p(t) = 2t^3 + 4t^2 - 5t + 6$. Note that the first derivative of this polynomial is $\inline 6t^2 +8t - 5$; we'll use this in a minute. In vector form,
this polynomial can be written as:
$p(t) = \begin{pmatrix} 6\\ -5\\ 4\\ 2 \end{pmatrix}$
To find its derivative, we simply multiply this vector by our D matrix from above:
$p'(t) = Dp = \begin{pmatrix} 0 &1 &0 &0 \\ 0 & 0 & 2 & 0\\ 0 & 0 &0 & 3 \end{pmatrix} \begin{pmatrix} 6\\ -5\\ 4\\ 2 \end{pmatrix} = \begin{pmatrix} -5\\ 8\\ 6 \end{pmatrix} = 6t^2 + 8t -5$
which is exactly the first derivative of our polynomial function!
This is a powerful tool. By recognizing that differentiation is just a linear transformation -- as is integration, which follows a similar argument that I'll leave as an exercise -- we can see it's
really just a rule that linearly maps functions in $\inline P_n$ to functions in $\inline P_{n-1}$.
In fact, all m x n matrices can be understood in this way. That is, an m x n matrix is just a linear mapping that sends vectors in $\inline \mathbb{R}^n$ into $\inline \mathbb{R}^m$. In the case of
the example above, we have a 3 x 4 matrix that sends polynomials in $\inline \mathbb{R}^4$ (such as ax^3 + bx^2 + cx +d, which has four elements) into the space of first derivatives in $\inline \
mathbb{R}^3$ (in this case, 3ax^2 + 2bx +c, which has three elements).
For more on linear transformations, here's a useful lecture from MIT's Gilbert Strang.
Posted by Andrew on Wednesday April 21, 2010 | Feedback?
* * *
|
{"url":"http://www.the-idea-shop.com/article/225/the-linear-algebra-view-of-calculus","timestamp":"2014-04-25T04:58:23Z","content_type":null,"content_length":"20920","record_id":"<urn:uuid:e2a5f434-b787-4730-99bf-e87ec73158b0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] fromiter
Tim Hochberg tim.hochberg at cox.net
Sat Jun 10 17:28:55 CDT 2006
David M. Cooke wrote:
>On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote:
>>I finally got around to cleaning up and checking in fromiter. As Travis
>>suggested, this version does not require that you specify count. From
>>the docstring:
>> fromiter(...)
>> fromiter(iterable, dtype, count=-1) returns a new 1d array
>> initialized from iterable. If count is nonegative, the new array
>> will have count elements, otherwise it's size is determined by the
>> generator.
>>If count is specified, it allocates the full array ahead of time. If it
>>is not, it periodically reallocates space for the array, allocating 50%
>>extra space each time and reallocating back to the final size at the end
>>(to give realloc a chance to reclaim any extra space).
>>Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as
>>"array(list(iterable),dtype=dtype)". Omitting count slows things down by
>>about 15%; still much faster than using "array(list(...))". It also is
>>going to chew up more memory than if you include count, at least
>>temporarily, but still should typically use much less than the
>>"array(list(...))" approach.
>Can this be integrated into array() so that array(iterable, dtype=dtype)
>does the expected thing?
It get's a little sticky since the expected thing is probably that
array([iterable, iterable, iterable], dtype=dtype) work and produce an
array of shape [3, N]. That looks like that would be hard to do
>Can you try to find the length of the iterable, with PySequence_Size() on
>the original object? This gets a bit iffy, as that might not be correct
>(but it could be used as a hint).
The way the code is setup, a hint could be made use of with little
additional complexity. Allegedly, some objects in 2.5 will grow
__length_hint__, which could be made use of as well. I'm not very
motivated to mess with this at the moment though as the benefit is
relatively small.
>What about iterables that return, say, tuples? Maybe add a shape argument,
>so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements
>from iterable that can be turned into arrays of shape (3,)? That could
>replace count, too.
I expect that this would double (or more) the complexity of the current
code (which is nice and simple at present). I'm inclined to leave it as
it is and advocate solutions of this type:
>>> import numpy
>>> tupleiter = ((x, x+1, x+2) for x in range(10)) # Just for example
>>> def flatten(x):
... for y in x:
... for z in y:
... yield z
>>> numpy.fromiter(flatten(tupleiter), int).reshape(-1, 3)
array([[ 0, 1, 2],
[ 1, 2, 3],
[ 2, 3, 4],
[ 3, 4, 5],
[ 4, 5, 6],
[ 5, 6, 7],
[ 6, 7, 8],
[ 7, 8, 9],
[ 8, 9, 10],
[ 9, 10, 11]])
[As a side note, I'm quite suprised that there isn't a way to flatten
stuff already in itertools, but if there is, I can't find it].
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-June/020866.html","timestamp":"2014-04-18T18:52:29Z","content_type":null,"content_length":"6141","record_id":"<urn:uuid:f36e9426-a217-4fc2-b650-2dd262f6c062>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The potential distribution for steady conduction is determined by solving (7.4.1)
in a volume V having conductivity r) and current source distribution s(r), respectively.
On the other hand, if the volume is filled by a perfect dielectric having permittivity r) and unpaired charge density distribution [u] (r), respectively, the potential distribution is determined
by the combination of (6.5.1) and (6.5.2).
It is clear that solutions pertaining to one of these physical situations are solutions for the other, provided that the boundary conditions are also analogous. We have been exploiting this
analogy in Sec. 7.5 for piece-wise continuous systems. There, solutions for the fields in dielectrics were applied to conduction problems. Of course, measurements made on dielectrics can also be
used to predict steady conduction phemonena.
Conversely, fields found either theoretically or by experimentation in a steady conduction situation can be used to describe those in perfect dielectrics. When measurements are used, the latter
procedure is a particularly useful one, because conduction processes are conveniently simulated and comparatively easy to measure. It is more difficult to measure the potential in free space than
in a conductor, and to measure a capacitance than a resistance.
Formally, a quantitative analogy is established by introducing the constant ratios for the magnitudes of the properties, sources, and potentials, respectively, in the two systems throughout the
volumes and on the boundaries. With k[1] and k[2] defined as scaling constants,
substitution of the conduction variables into (2) converts it into (1). The boundary conditions on surfaces S^' where the potential is constrained are analogous, provided the boundary potentials
also have the constant ratio k[2] given by (3).
Most often, interest is in systems where there are no volume source distributions. Thus, suppose that the capacitance of a pair of electrodes is to be determined by measuring the conductance of
analogously shaped electrodes immersed in a conducting material. The ratio of the measured capacitance to conductance, the ratio of (6.5.6) to (7.2.15), follows from substituting [1], (3a),
In multiple terminal pair systems, the capacitance matrix defined by (5.1.12) and (5.1.13) is similarly deduced from measurement of a conductance matrix, defined in (7.4.6).
Demonstration 7.6.1. Electrolyte-Tank Measurements
If great accuracy is required, fields in complex geometries are most easily determined numerically. However, especially if the capacitance is sought- and not a detailed field mapping- a
conduction analog can prove convenient. A simple experiment to determine the capacitance of a pair of electrodes is shown in Fig. 7.6.1, where they are mounted on insulated rods, contacted
through insulated wires, and immersed in tap water. To avoid electrolysis, where the conductors contact the water, low-frequency ac is used. Care should be taken to insure that boundary
conditions imposed by the tank wall are either analogous or inconsequential.
Figure 7.6.1 Electrolytic conduction analog tank for determining potential distributions in complex configurations.
Often, to motivate or justify approximations used in analytical modeling of complex systems, it is helpful to probe the potential distribution using such an experiment. The probe consists of
a small metal tip, mounted and wired like the electrodes, but connected to a divider. By setting the probe potential to the desired rms value, it is possible to trace out equipotential
surfaces by moving the probe in such a way as to keep the probe current nulled. Commercial equipment is automated with a feedback system to perform such measurements with great precision.
However, given the alternative of numerical simulation, it is more likely that such approaches are appropriate in establishing rough approximations.
Mapping Fields that Satisfy Laplace's Equation
Laplace's equation determines the potential distribution in a volume filled with a material of uniform conductivity that is source free. Especially for two-dimensional fields, the conduction
analog then also gives the opportunity to refine the art of sketching the equipotentials of solutions to Laplace's equation and the associated field lines.
Before considering how a sheet of conducting paper provides the medium for determining two-dimensional fields, it is worthwhile to identify the properties of a field sketch that indeed represents
a two-dimensional solution to Laplace's equation.
A review of the many two-dimensional plots of equipotentials and fields given in Chaps. 4 and 5 shows that they form a grid of curvilinear rectangles. In terms of variables defined for the field
sketch of Fig. 7.6.2, where the distance between equipotentials is denoted by and the distance between E lines is , the ratio tends to be constant, as we shall now show.
Figure 7.6.2 In two dimensions, equipotential and field lines predicted by Laplace's equation form a grid of curvilinear squares.
The condition that the field be irrotational gives
while the steady charge conservation law implies that along a flux tube,
Thus, along a flux tube,
If each of the flux tubes carries the same current, and if the equipotential lines are drawn for equal increments of , then the ratio must be constant throughout the mapping. The sides of the
curvilinear rectangles are commonly made equal, so that the equipotentials and field lines form a grid of curvilinear squares.
The faithfulness to Laplace's equation of a map of equipotentials at equal increments in potential can be checked by sketching in the perpendicular field lines. With the field lines forming
curvilinear squares in the starting region, a correct distribution of the equipotentials is achieved when a grid of squares is maintained throughout the region. With some practice, it is possible
to iterate between refinements of the equipotentials and the field lines until a satisfactory map of the solution is sketched.
Demonstration 7.6.2
Two-Dimensional Solution to Laplace's Equation by Means of Teledeltos Paper
For the mapping of two-dimensional fields, the conduction analog has the advantage that it is not necessary to make the electrodes and conductor "infinitely" long in the third dimension.
Two-dimensional current distributions will result even in a thin-sheet conductor, provided that it has a conductivity that is large compared to its surroundings. Here again we exploit the
boundary condition applying to the surfaces of the paper. As far as the fields inside the paper are concerned, a two-dimensional current distribution automatically meets the requirement that
there be no current density normal to those parts of the paper bounded by air.
A typical field mapping apparatus is as simple as that shown in Fig. 7.6.3. The paper has the thickness and a conductivity . The electrodes take the form of silver paint or copper tape put on
the upper surface of the paper, with a shape simulating the electrodes of the actual system. Because the paper is so thin compared to dimensions of interest in the plane of the paper surface,
the currents from the electrodes quickly assume an essentially uniform profile over the cross-section of the paper, much as suggested by the inset to Fig. 7.6.3.
Figure 7.6.3 Conducting paper with attached electrodes can be used to determine two-dimensional potential distributions.
In using the paper, it is usual to deal in terms of a surface resistance 1/. The conductance of the plane parallel electrode system shown in Fig. 7.6.4 can be used to establish this
Figure 7.6.4 Apparatus for determining surface conductivity
The units are simply ohms, and 1/ is the resistance of a square of the material having any sidelength. Thus, the units are commonly denoted as "ohms/square."
To associate a conductance as measured at the terminals of the experiment shown in Fig. 7.6.3 with the capacitance of a pair of electrodes having length l in the third dimension, note that
the surface integrations used to define C and G reduce to
where the surface integrals have been reduced to line integrals by carrying out the integration in the third dimension. The ratio of these quantities follows in terms of the surface
conductance as
Here G is the conductance as actually measured using the conducting paper, and C is the capacitance of the two-dimensional capacitor it simulates.
In Chap. 9, we will find that magnetic field distributions as well can often be found by using the conduction analog.
|
{"url":"http://web.mit.edu/6.013_book/www/chapter7/7.6.html","timestamp":"2014-04-21T04:34:56Z","content_type":null,"content_length":"11783","record_id":"<urn:uuid:1d3443de-47df-4132-ba40-4908c8c42ef8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of F1 Score
, the
F1 score
) is a measure of a test's accuracy. It considers both the
precision p
and the
recall r
of the test to compute the score:
is the number of correct results divided by the number of all returned results and
is the number of correct results divided by the number of results that should have been returned. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score
reaches its best value at 1 and worst score at 0.
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:
$F = 2 cdot \left(mathrm\left\{precision\right\} cdot mathrm\left\{recall\right\}\right) / \left(mathrm\left\{precision\right\} + mathrm\left\{recall\right\}\right).,$
The general formula for non-negative real β is:
$F_beta = \left(1 + beta^2\right) cdot \left(mathrm\left\{precision\right\} cdot mathrm\left\{recall\right\}\right) / \left(beta^2 cdot mathrm\left\{precision\right\} + mathrm\left\{recall\right
The formula in terms of Type I and type II errors:
$F_beta = frac \left\{\left(1 + beta^2\right) cdot mathrm\left\{true positive\right\} \right\}\left\{\left(\left(1 + beta^2\right) cdot mathrm\left\{true positive\right\} + beta^2 cdot mathrm\
left\{false positive\right\} + mathrm\left\{false negative\right\}\right)\right\}.,$
Two other commonly used F measures are the $F_\left\{2\right\}$ measure, which weights recall twice as much as precision, and the $F_\left\{0.5\right\}$ measure, which weights precision twice as much
as recall.
The F-measure was derived by van Rijsbergen (1979) so that $F_beta$ "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision". It
is based on van Rijsbergen's effectiveness measure $E = 1-\left(1/\left(alpha/P + \left(1-alpha\right)/R\right)\right)$. Their relationship is $F_beta = 1 - E$ where $alpha=1/\left(beta^2+1\right)$.
|
{"url":"http://www.reference.com/browse/F1+Score","timestamp":"2014-04-16T18:08:16Z","content_type":null,"content_length":"77548","record_id":"<urn:uuid:6559e942-b02b-4129-80ea-ff33bb11aee2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laura from Charlotte, NC's Algebra Equation Resources
Latest answer by
Tamara J.
Jacksonville, FL
One pump can drain a pool in 10 minutes. When the other pump is also used, the pool only takes 8 minutes to drain. How long would it take the second pump to drain the pool if it were the only pump...
|
{"url":"http://www.wyzant.com/resources/users/7540459/all/algebra_equation","timestamp":"2014-04-24T22:44:43Z","content_type":null,"content_length":"30776","record_id":"<urn:uuid:4169b2d7-f46d-484e-b6ec-e2b04db06c79>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The identification of the Polynomial functions ?
I am confusing about how can we identify whether the function is in the polynomial form or not.
For example i have : x^5 - 1 and 25 . Are they the polynomial functions and i need to be explained ?
Please help me !Thanks!
honest_denverco09 wrote:I am confusing about how can we identify whether the function is in the polynomial form or not. For example i have : x^5 - 1 and 25 .
To learn what polynomials are (along with their names and various other definitions), try this online lesson.
Long story short: If the function consists of "terms" (bits that are added together) in which the variables have only whole-number powers (so no square roots, no absolute-value bars, no other
functions like logs or cosines, etc) and the variables are only in the numerator (so no "3/x" sorts of terms), then you're looking at a polynomial.
Re: The identification of the Polynomial functions ?
I got it. Thank you so much!
Is any number like 25, 2, 4, the polynomial function, and if so, what is the degree of these? Is that a "0" ? Thank again !
Also i would like to ask "What is the real number and how can we know it ?"
Constant terms like "5" can be stated as "5x^0", since anything to the zero power is just "1". So constants are terms, and they are of degree zero.
I don't know your context, but you can figure out the answer for real numbers in this lesson on number types and how to recognize which is which.
Re: The identification of the Polynomial functions ?
tHANK you so much! You and this web are very helpful! ^^!Thank so much
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=9&t=328&p=914","timestamp":"2014-04-17T21:50:07Z","content_type":null,"content_length":"24601","record_id":"<urn:uuid:4d736a10-1452-416f-a1b9-b874c90da609>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Evaluate a Company’s Debt Management with Debt Analytics
Corporate debt is a big deal. In fact, it’s such a big deal that companies value their capital structure based on how effectively they manage debt. Why is debt so important as compared to, say,
First, a company can have more debt than assets. Unlike equity, where the maximum equity is the value of all the company’s assets, debt can surpass assets. Second, debt incurs interest that needs to
be paid off; equity doesn’t. Third, as far as the capital structure goes, using debt as a measure usually provides important information about equity as well.
Times interest earned
When evaluating a company’s debt structure, you need to know whether a company can pay the interest it owes on the debt it has incurred. To find out, you can use times interest earned, which looks
To use this equation, follow these steps:
1. Find EBIT near the middle or bottom of the income statement and interest expense somewhere below that.
2. Divide EBIT by interest expense to calculate the times interest earned.
A low times interest earned may mean the company is at risk of defaulting on its debt obligations, which is a bad sign for its level of earnings. But a very high interest earned may mean the company
isn’t fully utilizing its available capital and could possibly generate additional sales by acquiring more debt to expand its production capacity, particularly if earnings have reached a plateau with
the company already producing at full capacity.
Fixed charge coverage
Interest is only one form of fixed charge that a company can default on. Leases are another particularly common form, but there are many others, as well. To determine whether a company will default
on any of these charges, you can use the metric called fixed charge coverage. Although this equation has a few variations, this one is the most common:
Follow these steps to use this equation:
1. Find EBIT, fixed charges before tax, and interest expense on the income statement.
2. Add EBIT and fixed charges before tax.
3. Add interest and fixed charges before tax.
4. Divide the answer from Step 2 by the answer from Step 3 to calculate the fixed charge coverage.
This metric is extremely similar to times interest earned, but adding the same value (fixed charges before tax) to both the top and bottom of the equation is guaranteed to change the end value. Fixed
charge coverage is particularly important for companies that have a high portion of fixed charges other than interest.
Debt ratio
A company with an excessive amount of debt is at very serious risk of default, so it’s no wonder that a company’s debt ratio is important to a number of its constituents. Lenders like to know a
company’s debt ratio because they want to be reassured that they’ll get their money back — even if the company goes out of business.
Investors like to know a company’s debt ratio because they want to be reassured that they’ll be owning a company that’s worth the value of their investment. Companies themselves like to know their
debt ratio to determine whether they’re at risk of defaulting on that debt. You can determine all this and more with the help of one little equation:
Here’s how to use this equation:
1. Find total liabilities in the liabilities portion of the balance sheet and total assets in the assets portion.
2. Divide total liabilities by total assets to get the debt ratio.
The debt ratio tells you what percentage of a company’s total assets were funded by incurring debt. A debt ratio of more than 1 means the company actually has more debt than the company is worth. A
debt ratio of less than 1 means the company has more assets than the debt it owes. A debt ratio anywhere near 1 is a bad position to be in, much less a ratio higher than 1.
Debt to equity ratio
When measuring a company’s capital structure, you need to calculate the debt to equity ratio, which tells you the ratio that liabilities compose of a company’s funding compared to equity. Here’s what
this equation looks like:
To use this equation, follow these steps:
1. Find total liabilities in the liabilities portion of the balance sheet and stockholders’ equity in the equity portion (nothing like stating the obvious!).
2. Divide total liabilities by stockholders’ equity to find the debt to equity ratio.
A high debt to equity ratio can mean two different things: If a company also has a low times interest earned, then it was probably a bit too reliant on funding operations with debt and will have a
hard time paying its interest. If the company also has a very high times interest earned, then it was likely incurring debt to generate funding beyond what it could earn selling debt to generate
As long as the extra ratio of debt increases a company’s times interest earned, then the differential in earnings will increase the value of equity, balancing out the debt to equity ratio in the long
Debt to tangible net worth
If a company were to default on its debt, get bought by an investor who intends to sell off the assets, or otherwise go out of business, everyone involved would likely want to know the value of the
physical assets owned by the company. You can’t sell off intellectual property when liquidating a company, so you need to use the ratio called debt to tangible net worth:
Follow these steps to use this equation:
1. Find total liabilities in the liabilities portion of the balance sheet, stockholders’ equity in the equity portion of the income statement, and intangible assets in the assets portion of the
income statement.
2. Subtract the value of intangible assets from the value of stockholders’ equity.
3. Divide total liabilities from the answer from Step 2 to find the debt to tangible net worth ratio.
If the ratio is greater than 1, the company has more debt than it could pay off by liquidating all its assets. If the ratio is less than 1, the company could pay off all its debt by liquidating its
assets and still have some left over. In the 1980s, many investors purchased companies with very low debt to tangible net worth ratios and made profits by selling off the assets.
Operating cash flows to total debt
When possible, companies prefer not to have to sell off their assets in order to pay their debt. After all, having to do so is usually a sign that the company is in big trouble. To measure a
company’s ability to pay off debt without actually selling off its assets, you can use the operating cash flows to total debt ratio:
Here’s how to use this ratio:
1. Find operating cash flows on the statement of cash flows and total debt in the debt portion of the balance sheet.
2. Divide operating cash flows by total debt to get the operating cash flows to debt ratio.
A high ratio means the company is probably able to pay off its debts by using cash flows. A low ratio means the company may end up having to sell stuff off to pay its debts.
Equity multiplier
The equity multiplier measures the ratio of a company’s assets that stockholders own:
To put this equation to use, follow these steps:
1. Find total assets in the assets portion of the balance sheet and stockholders’ equity in the equity portion.
2. Divide total assets by stockholders’ equity to get the equity multiplier.
A ratio of 1 means that all the company’s assets are funded through equity and that the company has no debt. In other words, the company has one dollar’s worth of assets for every dollar’s worth of
equity. Even if the company increases in value, unless it incurs debt, that increased value will go toward increasing stockholders’ equity.
On the other hand, a ratio of less than one means that the company used debt to fund its activities.
|
{"url":"http://www.dummies.com/how-to/content/how-to-evaluate-a-companys-debt-management-with-de.navId-410640.html","timestamp":"2014-04-17T16:27:48Z","content_type":null,"content_length":"75091","record_id":"<urn:uuid:62e0064e-fc78-467c-9606-413dc8af6e29>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: ONE LAST SHAME ON MATHEMATICS AND SCI MATH RESEARCH Twin Prime joke
on your Mathematics, shame!!
Replies: 4 Last Post: May 18, 2012 4:12 PM
Messages: [ Previous | Next ]
Re: ONE LAST SHAME ON MATHEMATICS AND SCI MATH RESEARCH Twin Prime
joke on your Mathematics, shame!!
Posted: May 18, 2012 4:12 PM
also, one of the most recondite subjects of numbertheory,
is "repunits," including with regard to the periods
of the digital representation of their reciprocals.
> Brun's constant sums the reciprocals
> of all of the twin primes, including 1/5, twice
-- but not 1/2!
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2384643&messageID=7825158","timestamp":"2014-04-21T15:42:20Z","content_type":null,"content_length":"21293","record_id":"<urn:uuid:edb67f17-4632-4f24-b264-d9d9f3914e59>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Illinois PER Interactive Examples: Block and Spring SHM
Website Detail Page
written by Gary Gladding
published by the University of llinois Physics Education Research Group
This interactive homework problem presents a block attached to a massless spring on a frictionless surface. Given an initial velocity and distance from the equilibrium point, the problem
takes learners step-by-step through the components of simple harmonic motion. It provides a conceptual analysis and explicit help to set up the appropriate solution. The problem is
accompanied by a sequence of questions designed to encourage critical thinking and conceptual analysis.
This tutorial is part of a larger collection of interactive problems developed by the Illinois Physics Education Research Group.
Editor's Note: This problem can help students recognize the connection between the oscillation of a mass on a spring and the sinusoidal nature of simple harmonic motion. It provides help
with the related free-body diagram, graphs depicting SHM, and support in using the Work-Kinetic Energy Theorem to solve. See Related Materials for an interactive simulation of spring
motion, recommended by the editors.
Subjects Levels Resource Types
Education Practices
- Active Learning
- Instructional Material
= Problem Solving
- High School = Activity
Oscillations & Waves
- Lower Undergraduate = Problem/Problem Set
- Oscillations
= Tutorial
= Simple Harmonic Motion
= Springs and Oscillators
Appropriate Courses Categories Ratings
- Conceptual Physics - Activity
- Algebra-based Physics - Assessment
- AP Physics - New teachers
Intended User:
Access Rights:
Free access
© 2006 University of Illinois Physics Education Research Group
SHM, homework problem, interactive problem, mass on spring, oscillation, simple harmonic motion, tutorial problem
Record Cloner:
Metadata instance created February 11, 2008 by Alea Smith
Record Updated:
March 12, 2013 by Lyle Barbato
Last Update
when Cataloged:
June 16, 2006
AAAS Benchmark Alignments (2008 Version)
2. The Nature of Mathematics
2A. Patterns and Relationships
• 9-12: 2A/H1. Mathematics is the study of quantities and shapes, the patterns and relationships between quantities or shapes, and operations on either quantities or shapes. Some of
these relationships involve natural phenomena, while others deal with abstractions not tied to the physical world.
2C. Mathematical Inquiry
• 9-12: 2C/H3. To be able to use and interpret mathematics well, it is necessary to be concerned with more than the mathematical validity of abstract operations and to take into account
how well they correspond to the properties of the things represented.
4. The Physical Setting
4F. Motion
• 9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass.
• 9-12: 4F/H7. In most familiar situations, frictional forces complicate the description of motion, although the basic principles still apply.
• 9-12: 4F/H8. Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.
9. The Mathematical World
9B. Symbolic Relationships
• 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease
steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase
or decrease in steps, or do something different from any of these.
• 9-12: 9B/H2a. Symbolic statements can be manipulated by rules of mathematical logic to produce other statements of the same relationship, which may show some interesting aspect more
• 9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another.
• 9-12: 9B/H5. When a relationship is represented in symbols, numbers can be substituted for all but one of the symbols and the possible value of the remaining symbol computed.
Sometimes the relationship may be satisfied by one value, sometimes by more than one, and sometimes not at all.
11. Common Themes
11B. Models
• 9-12: 11B/H1a. A mathematical model uses rules and relationships to describe and predict objects and events in the real world.
Common Core State Standards for Mathematics Alignments
High School — Algebra (9-12)
Seeing Structure in Expressions (9-12)
• A-SSE.1.a Interpret parts of an expression, such as terms, factors, and coefficients.
• A-SSE.2 Use the structure of an expression to identify ways to rewrite it.
Creating Equations^? (9-12)
• A-CED.2 Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales.
Reasoning with Equations and Inequalities (9-12)
• A-REI.1 Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation
has a solution. Construct a viable argument to justify a solution method.
High School — Functions (9-12)
Interpreting Functions (9-12)
• F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features
given a verbal description of the relationship.^?
• F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.^? Supplements
Building Functions (9-12) Contribute
• F-BF.1.a Determine an explicit expression, a recursive process, or steps for calculation from a context. Related
Trigonometric Functions (9-12)
• F-TF.5 Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline.^? Materials
Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12
Key Ideas and Details (6-12)
• RST.11-12.3 Follow precisely a complex multistep procedure when carrying out experiments, taking measurements, or performing technical tasks; analyze the specific results based on
explanations in the text.
Craft and Structure (6-12)
• RST.11-12.4 Determine the meaning of symbols, key terms, and other domain-specific words and phrases as they are used in a specific scientific or technical context relevant to grades
11—12 texts and topics.
• RST.11-12.5 Analyze how the text structures information or ideas into categories or hierarchies, demonstrating understanding of the information or ideas.
Range of Reading and Level of Text Complexity (6-12)
• RST.11-12.10 By the end of grade 12, read and comprehend science/technical texts in the grades 11—CCR text complexity band independently and proficiently.
This resource is part of a Physics Front Topical Unit.
Periodic and Simple Harmonic Motion
Unit Title:
Simple Harmonic Motion
This interactive problem takes learners step-by-step through the components of simple harmonic motion. It will help students recognize the connection between the oscillation of a mass on
a spring and the sinusoidal nature of SHM. It provides help with the related free-body diagram, graphs depicting SHM, and support in using the Work-Kinetic Energy Theorem to do the
Link to Unit:
ComPADRE is beta testing Citation Styles!
<a href="http://www.thephysicsfront.org/items/detail.cfm?ID=6505">Gladding, Gary. Illinois PER Interactive Examples: Block and Spring SHM. Urbana: University of llinois Physics Education
Research Group, June 16, 2006.</a>
G. Gladding, (University of llinois Physics Education Research Group, Urbana, 2006), WWW Document, (http://research.physics.illinois.edu/per/IE/ie.pl?phys111/ie/12/IE_block_and_spring).
G. Gladding, Illinois PER Interactive Examples: Block and Spring SHM (University of llinois Physics Education Research Group, Urbana, 2006), <http://research.physics.illinois.edu/per/IE/
Gladding, G. (2006, June 16). Illinois PER Interactive Examples: Block and Spring SHM. Retrieved April 16, 2014, from University of llinois Physics Education Research Group: http://
Gladding, Gary. Illinois PER Interactive Examples: Block and Spring SHM. Urbana: University of llinois Physics Education Research Group, June 16, 2006. http://
research.physics.illinois.edu/per/IE/ie.pl?phys111/ie/12/IE_block_and_spring (accessed 16 April 2014).
Gladding, Gary. Illinois PER Interactive Examples: Block and Spring SHM. Urbana: University of llinois Physics Education Research Group, 2006. 16 June 2006. 16 Apr. 2014 <http://
@misc{ Author = "Gary Gladding", Title = {Illinois PER Interactive Examples: Block and Spring SHM}, Publisher = {University of llinois Physics Education Research Group}, Volume = {2014},
Number = {16 April 2014}, Month = {June 16, 2006}, Year = {2006} }
%A Gary Gladding
%T Illinois PER Interactive Examples: Block and Spring SHM
%D June 16, 2006
%I University of llinois Physics Education Research Group
%C Urbana
%U http://research.physics.illinois.edu/per/IE/ie.pl?phys111/ie/12/IE_block_and_spring
%O text/html
%0 Electronic Source
%A Gladding, Gary
%D June 16, 2006
%T Illinois PER Interactive Examples: Block and Spring SHM
%I University of llinois Physics Education Research Group
%V 2014
%N 16 April 2014
%8 June 16, 2006
%9 text/html
%U http://research.physics.illinois.edu/per/IE/ie.pl?phys111/ie/12/IE_block_and_spring
: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the
Citation Source Information
area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
Illinois PER Interactive Examples: Block and Spring SHM:
Same topic as Spring Motion Model
A Java simulation that addresses the same concepts explored in the "Block and Spring SHM" homework problem.
relation by Caroline Hall
See details...
Know of another related resource? Login to relate this resource to it.
|
{"url":"http://www.thephysicsfront.org/items/detail.cfm?ID=6505","timestamp":"2014-04-16T16:14:25Z","content_type":null,"content_length":"55386","record_id":"<urn:uuid:c0cc3c6c-c99a-491b-a6f6-cc1283865687>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SPSSX-L archives -- July 2006 (#223)LISTSERV at the University of Georgia
Date: Fri, 14 Jul 2006 23:42:15 -0300
Reply-To: Hector Maletta <hmaletta@fibertel.com.ar>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Hector Maletta <hmaletta@fibertel.com.ar>
Subject: Re: regression -- predicted values
Comments: To: Statisticsdoc <statisticsdoc@cox.net>
In-Reply-To: <PMEJJAHAJHJANCGEODHEAEBCCCAA.statisticsdoc@cox.net>
Content-Type: text/plain; charset="us-ascii"
Agree with Stephen. Moreover, even the adjusted predicted values are
practically the same as the unadjusted, except in the case of outliers with
a disproportionate influence on the estimated coefficients. In many datasets
there are no cases with enough influence as to cause the correlation to be
less than (nearly) perfect.
-----Mensaje original-----
De: SPSSX(r) Discussion [mailto:SPSSX-L@LISTSERV.UGA.EDU] En nombre de
Enviado el: Friday, July 14, 2006 11:12 PM
Para: SPSSX-L@LISTSERV.UGA.EDU
Asunto: Re: regression -- predicted values
Stephen Brand
The Unstandardized Predicted Value and the Standardized Predicted Value have
a perfect correlation because they are simple linear transformations of one
another. Illustrattively, the standardized predicted value involves the
subtraction of a constant (the mean predicted value) from each predicted
value, and division by a constant (the standard deviation of the predicted
The adjusted predicted value is somewhat more complicated. This is the
predicted value for a case when it is excluded from the computation of the
regression coefficients.
Stephen Brand
For personalized and professional consultation in statistics and research
design, visit
-----Original Message-----
From: SPSSX(r) Discussion [mailto:SPSSX-L@LISTSERV.UGA.EDU]On Behalf Of
Dogan, Enis
Sent: Friday, July 14, 2006 3:52 PM
To: SPSSX-L@LISTSERV.UGA.EDU
Subject: regression -- predicted values
Dear list
I hope this email finds you all well on this late Friday afternoon.
Quick question:
I ran a simple regression and saved Unstandardized Predicted Value;
Adjusted Predicted Value; and Standardized Predicted Value
What is the difference between Unstandardized Predicted Value and
Adjusted Predicted Value?
I get almost identical values on these variables, correlation = 1.00
almost however they are not necessarily 100% equal.
Any ideas?
|
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0607&L=spssx-l&F=&S=&P=24511","timestamp":"2014-04-18T08:38:43Z","content_type":null,"content_length":"11668","record_id":"<urn:uuid:f0893d9f-2991-4b12-842d-24954fb081d3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shear and Bending Moment
Internal Forces in Beams
The shearing forces and bending moments due to distributed load in a statically determinate beam can also be determined by the equilibrium equations also. Using the standard convention, the
distribution of internal forces in a beam can be represented graphically by plotting the values of shear or bending moment against the distance from one end of the beam. The problem of using the
equilibrium equations to determine the values of internal forces in a beam with distributed load is similar to that with concentrated loads. Since both shear and bending moment in the beam is created
by the applied load, relations among load, shear and bending can be used to develope formulas for calculating load, shear and bending moment in the beam. As distributed load can be represented by an
equivalent concentrated load, the relations can be developed in a similar way.
Relations among Load, Shear and Bending Moment
Distributed Load
For example, ignoring the horizontal forces, dividing a simply supported beam with variable distributed load w(x) per unit length into three imaginary beam sections.
Since a concentrated load P can be considered as a distributed force over an infinitely small length Δx of the beam, a distributed load over an infinitely small length Δx may also be considered as
concentrated forces of variable intensity w(x) per unit length spread over the beam by neglecting the small variation Δw. Therefore the relations among load, shear and bending moment of a distributed
load can be determined by the evaluation of an infinitesimal beam element under loading. Consider the free body diagram of an infinitesimal beam element of length Δx under the variable distributed
load w(x). The total distributed load acting on the infinitesimal beam element is approximately equal to wΔx. Assuming the shear difference between the ends of the infinitesimal beam element is ΔV.
And assuming the bending moment difference between the ends of the rigid infinitesimal beam element is ΔM. Imply
Relation between Distributed Load and Shear
Similar to concentrated load, the relation between distributed load and shear can be determined by setting equilibrium equation of the vertical components of forces acting on the infinitesimal beam
element. Imply
When taking the limit as Δx tends to zero, the function ΔV/Δx can therefore be interpreted as the derivative of the shear V with respect to x. Imply
According to the derivative, the slope of the shear curve V(x) is negative, i.e. slope is decreasing, since the load per unit length is usually point downward in positive sign. And the derivative of
the shear curve V(x) with respectie to x is therefore equal to the negative sense of the load per unit length at point x. And the shear difference between point D and point E can then be determined
by integration. Imply
Therefore the shear difference between point D and point E is also equal to the negative sense of the total signed area under the loading curve of the applied distributed load between point D and
point E. However, concentrated loads cannot be considered together with distributed loads, since when a concentrated load is applied at a point between point D and point E, the shear curve will be
discontinuous at that point and the formula obtained by integration becomes invalid. Therefore the shear difference due to distributed load can only be considered between two successive concentrated
loads and the shear difference due to all concentrated loads are considered separately.
Relation between Shear due to Distributed Load and Bending Moment
Similarly, the relation between shear due to distributed load and bending moment can be determined by setting equilibrium equation of the bending moment of forces about point D' on the right hand
side of the infinitesimal beam element . Imply
When Δx approachs zero, the term w(Δx)^2/2 is a small quantity of the second order and can be neglected.
When taking the limit as Δx tends to zero, the function ΔM/Δx can therefore be interpreted as the derivative of the bending moment M with respect to x. Imply
According to the derivative, the slope of the bending moment curve M(x) is equal to the value of shear. And the derivative of the shear curve V(x) with respectie to x is therefore equal to the same
sense of the shear at point x at which no concentrated load is applied. Since the maximum bending moment of the bending moment curve M(x) is located at the derivative equals to zero, Therefore the
maximum bending moment can be found at shear V(x) is equal to zero. And the bending moment difference between point D and point E can then be determined by integration. Imply
Therefore the bending moment difference between point D and point E is also equal to the same sense of the total signed area under the shear curve due to the applied distributed load between between
point D and point E. Although concentrated loads is not considered together with distributed loads to determine the shear function because of the discontinuity of shear curve at a point caused by the
concentrated load applied at the point between point D and point E. However, if the bending moment difference equation or the shear curve is corrected according to the additional of concentrated
loads, the corrected bending moment difference equations are still valid. Therefore in order to prevent extra work, the bending moment difference due to distributed load is usually considered between
two successive concentrated loads and the shear difference due to all concntrated loads are considered separately and the shear curve is corrected accordingly. But, the bending moment difference
equation will become invalid when a couple is applied at a point beween point D and point E, because the sudden change in bending moment caused by a couple will create a shear in the beam also. Both
the additional shear and bending moment are not considered by both the shear difference and bending moment difference equations.
Simply Supported Beam with Unformly Distributed Load Example
For example, ignoring the horizontal forces, a simply supported beam with uniformly distributed load of w per unit length over the span LAB of the beam.
For an uniformly distributed load of w per unit length over the span LAB of the beam, the uniformly distributed load can be represented by an equivalent concentrated force of wLAB acting at the
centroid of the distributed load, i.e. the middle point of point A and point B, on the simply supported beam. The Reaction forces can be determined by setting up the equilibrium equations. Imply
External Loads Diagram
For determining the internal forces in beam, the first step is to prepare the curve of external loads, concentrated loads, distributed load, and bending moment and couple. One of the important points
in plotting external loads diagram, shear curve and bending moment curve is that all curves are plotted according to the correctness of the forces comparing with the standard convention, not the
sense of the forces. Therefore, both concentrated forces RA and RB are negative sign of incorrect sense. And the distributed load over the span of beam AB is positive sign of correct sens. Imply
Shear Diagram
In determining the internal force, shear, in beam, the key concern is the locations of external forces. As there are only two external forces at the end of the span of the beam and one distributed
load spread over the whole span of beam, the beam can be treated as one imaginary beam section. The two sets of equations for calculating the internal force, shear are
Start drawing the shear diagram from point A at the left side of the beam and begin with the concentrated force -RA at point A in the external loads diagram. Assume point A' is the right hand side of
the infinitesimal beam element section AA'. Imply
The shear is equal to wL/2 at point A' as shear at point A is equal zero. The slope of the shear curve of the infinitesimal beam element section AA' is equal to a vertical slope, a discontinuous jump
wL/2, i.e. a point at point A'. In other words, there is a discontinous step change at point A.
Then the distributed load w per unit length over length A'B' in the external loads diagram of the beam section A'B'. Imply
The shear curve is a oblique straight line with slope -w. The straight line is equal to wL/2 at point A' and is equal to -wL/2 at point B'.
Finally the concentrated force -RB at point B in the external loads diagram. Since point B is the assumed fixed point at the rightmost edge of the beam considered, according to the Newton's third
Law, i.e action and reaction forces are equal and opposite, the fixing force provided by the concentrated force RB should be equal in magnitude and opposite in sense to the shear force of the left
hand side. Or the shear force VB should equal to the value of external concentrated force, -RB at point B in the external loads diagram. Imply
The shear is equal to -wL/2 at point B as external force at point A is ignored, i.e. equal to zero. The slope of the shear curve of the infinitesimal beam element section B'B is equal to a vertical
slope, a discontinuous jump wL/2, i.e. a point at point B. In other words, there is a discontinous step change at point B.
The shear diagram is
Bending Moment Diagram
In determining the internal force, bending moment, in beam, the key concern is also the locations of external forces. As there are only two external forces at the end of the span of the beam and one
distributed load spread over the whole span of beam, the beam can be treated as one imaginary beam section. The two sets of equations for calculating the internal force, bending moment are
Start drawing the bending moment diagram from point A at the left side of the beam and begin with all forces VA-(-RA) at point A. Assume point A' is the right hand side of the infinitesimal beam
element section AA'. Imply
The bending moment is equal to zero at point A' as bending moment at point A is equal zero and moment arm of a infinitesimal beam element section tends to zero also. The slope of the bending moment
curve of the infinitesimal beam element section AA' is equal to wL/2.
Then the distributed load w per unit length over length A'B' in the external loads diagram of the beam section A'B'. Imply
The bending moment curve is a curve of the second degree with maximum bending moment of wL^2/8 at position L/2, i.e. point C. And both the bending moments at point A' and point B' are equal to zero.
But the slope of bending moment curve at point A' is wL/2 and the slope of the bending moment curve at point B' is -wL/2.
Finally the concentrated force -RB at point B in the external loads diagram. Since point B is the assumed fixed point at the rightmost edge of the beam considered, according to the Newton's third
Law, i.e action and reaction forces are equal and opposite, the fixing force provided by the concentrated force RB should be equal in magnitude and opposite in sense to the shear force of the left
hand side. Or the shear force VB should equal to the value of external concentrated force, -RB at point B in the external loads diagram. Imply
The bending moment is equal to zero at point B' as bending moment at point A is equal zero and moment arm of a infinitesimal beam element section tends to zero also. The slope of the bending moment
curve of the infinitesimal beam element section AA' is equal to wL/2.
The bending moment diagram is
|
{"url":"http://output.to/sideway/default.asp?qno=120900002","timestamp":"2014-04-19T04:56:47Z","content_type":null,"content_length":"34538","record_id":"<urn:uuid:07db93ad-9c18-42f4-a039-3b68115081e7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Do Fractions
As the students move higher up in their academics their difficulty level simultaneously increases. Starting from simple calculations in elementary classes they come across some more calculations like
solving the decimals. Although it is not difficult but just to start with they do get into some kind of confusion. For example, they try to add, subtract, multiply and divide the fractions in the
same way that they do with normal numbers or what we call whole numbers. However, when they realize that the calculations are wrong they come across difficulty in making them correct. There are a few
things that are to be remembered while solving fractions.
• 1
First of all while solving fractions related sums, the students should know about the numerator and denominator. The number at the top in a fraction is the numerator and the one at the bottom is
the denominator.
• 2
The next thing to understand is the difference between proper and improper fractions. If the numerator is smaller than the denominator, the fraction is proper and if the numerator is greater than
the denominator that is the improper fraction.
3/8 Proper Fraction
9/8 Improper Fraction
• 3
While adding fractions there are two things that you need to remember. The numerators in both the fractions can be directly added if the denominators in both the fractions are equal such as:
3/8 + 5/8 = 8/8 = 1
• 4
The same fraction can be subtracted directly in the same way i.e. the smaller fraction can be subtracted from the larger one like:
5/8 – 3/8 = 2/8
• 5
For multiplying the decimals, you have to ensure that the fractions are first simplified as much as possible. This means a larger number can be converted into a smaller one by diving both
numerator and denominator with the same number.
4/10 x 8/12 = 2/5 x 2/3
• 6
The division of the fractions can also be made simple by first reducing the fractions into their simplest form such as 4/12 to 2/3. However, it is advised to convert the division sign between the
two fractions into multiplication. This will make the numerator of either of the fraction as denominator and vice versa.
2/3 ÷ 9/6 = 2/3 6/9
|
{"url":"http://www.stepbystep.com/how-to-do-fractions-3751/","timestamp":"2014-04-21T09:43:20Z","content_type":null,"content_length":"42297","record_id":"<urn:uuid:099a3498-2787-47f9-bfff-0d1660ef83f9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This is our 512th post
Jim Carrey is an actor known best for his comedic roles, and is considered one of the top movie stars in Hollywood. He starred in the movie The Number 23, which is based on the the 23 enigma: a
belief that all of life is directly connected to the number 23, often in an indirect manner.
Today Ken and I want to thank everyone who reads and supports GLL; thank you very much.
Because this is our ${512^{th}}$ musing, we want it to be brilliant, unique, and fun. Well we have many things that we would like to talk about, but none seemed that special. After all the next power
of two is distant, so this is our last chance for a long time to highlight the cardinality of the post.
We could talk about our secret polynomial time algorithm for factoring ${\dots}$ Oops. I forgot that is supposed to be kept secret. Actually, if the algorithm is a Leonid Levin-style stealing or
adapting algorithm, then is the algorithm itself the secret? We don’t know, so instead we will talk about something else.
The 23 Enigma
As I started to think about the significance of ${512}$, I realized the number itself is of supreme importance. Perhaps there is some truth in the 23 enigma that is the basis of Carrey’s movie—which
by the way was panned by the critics. Carrey was nominated for the 2007 Golden Raspberry Award for Worst Actor, but lost out to Eddie Murphy in the movie “Norbit.” I am thankful that in our area of
endeavor we only give positive awards. What would it be like to get the “worst published paper award of 2013”, or to lose by one in that category?
Back to the 23 enigma. My first observation was that David Hilbert’s famous problem list had twenty-three problems—hmmm. I started to look to see if there were any other connections. I next noted
$\displaystyle \begin{array}{rcl} 23 \times 23 - 2^3 - 3^2 &=& 512;\\ 23 \times 22 + 2 \times 3 &=& 512. \end{array}$
I had originally thought that the more obvious way of making ${512}$ as ${(2^3)^3}$ didn’t balance out, but Ken pointed out that the outer part can still be read “to three” in English. Well, this is
just chance—right? But how about these:
• “Dick Lipton & Ken Regan” has 23 characters, including spaces;
• “SAT in polynomial time?” has 23 characters;
• “Prof. Richard J. Lipton” has 23 characters;
• “Doctor Kenneth W. Regan” has 23 characters;
• “Does P equal NP or not?” has 23 characters;
• “P could equal NP really” has 23 characters;
• “The twenty-three enigma” has 23 characters;
• “Twenty-three characters” has 23 characters.
Now “Gödel’s Lost Letter and P=NP” has ${23 + 2 + 3}$ characters, but also its name suggests counting letters. It has 22 letters, but remember there’s a lost letter, so it’s originally 23. And if
you’re thinking of a “factoring algorithm,” what name would you most likely put in front? Rabin or Levin, right? Or John Dixon, or someone with “Peter” in his name. All add up to 23 letters.
The History of the 23 Enigma
According to our friends at Wikipedia: Robert Wilson cites William Burroughs as being the first person to believe in the 23 enigma. Wilson, in an article in the still-running magazine Fortean Times,
related the following story:
I first heard of the 23 enigma from William Burroughs, author of Naked Lunch, Nova Express, etc. According to Burroughs, he had known a certain Captain Clark, around 1960 in Tangier, who once
bragged that he had been sailing 23 years without an accident. That very day, Clark’s ship had an accident that killed him and everybody else aboard. Furthermore, while Burroughs was thinking
about this crude example of the irony of the gods that evening, a bulletin on the radio announced the crash of an airliner in Florida, USA. The pilot was another captain Clark and the flight was
Flight 23.
I looked at this and noticed that “William S. Burroughs II” (his full name) has—yes that is right—exactly 23 characters. What does this all mean?
An Interesting Open Problem
We are after all about mathematics, so I decided to leave numerology behind and state one open problem that is not well known. It was thought of by Eduardo Casas-Alvero and is named for him. It is
easy to state:
Let ${f}$ be a one-variable polynomial of degree ${d}$ defined over a field ${K}$ of characteristic zero. If ${f}$ has a factor in common with each of its derivatives ${f^{(i)}}$, ${i = 1,\
dots,d-1}$, then ${f}$ must be a power of a linear factor.
See this for a nice survey of what is known, which includes a proof for ${d= 12}$. The lowest open ${d}$ currently is ${20}$, and the authors say attempting their calculation method for ${d=20}$
would be “utopic.” But solving it would prove the conjecture up through ${d = 23}$.
Let me note that “Casas-Alvero conjecture” has, yes—yes—twenty-three characters. By the way I selected this conjecture before I started to think about the 23 enigma—I did not search for a conjecture
that fit.
Open Problems
Is 23 the key to understanding the universe? Does it shed light on theory? The one that scares me a bit is:
$\displaystyle \text{Five hundred and twelve}$
has exactly 23 characters.
1. October 15, 2013 9:34 am
In the Realm of Riffs and Rotes —
$23 = \text{p}_9^1 = \text{p}_{\text{p}_\text{p}^\text{p}}$
$512 = \text{p}_1^9 = \text{p}^{\text{p}_\text{p}^\text{p}}$
□ October 15, 2013 9:38 am
There are pictures of 23 and 512 here.
2. October 15, 2013 10:25 am
23 is the smallest odd prime that is not a twin prime… (Wikipedia) :-)
□ October 16, 2013 9:35 am
Of course the only reason you need the “odd prime” qualifier is because of the 2, which is right next to the 3.
3. October 15, 2013 12:04 pm
“Barbosa proves P != NP.” has exactly 23 characters.
The Proof is at: http://arxiv.org/ftp/arxiv/papers/0907/0907.3965.pdf.
4. October 15, 2013 12:58 pm
I have a prove that P=NP :
It has exactly 512 characters
□ October 15, 2013 9:17 pm
Sorry! I do not understand your proof ;-)
☆ October 16, 2013 9:01 am
It requires the knowledge of commutative algebra, and some encryption theory ;). It is certain that there is a Turing, or Quantum or Sigma computer algorithm that can easily convert it
into human readable form ;)
☆ October 16, 2013 9:38 am
Thanks! I think the first digits should be 10111, i.e. 23 ;-)
5. October 16, 2013 4:37 pm
Appreciation and thanks to Gödel’s Lost Letter and P=NP for 2^9-fold contributions to PROGRESS IN MATHEMATICS (23!).
while true; do ( read -r; echo -n "$REPLY" | wc -c ); done
□ October 16, 2013 4:52 pm
A couple more 23-magical GLL topics: “PvsNP is a Hard Problem” and “Quantum Dynamical Paths” Best wishes for plenty more magical GLL essays!
7. October 18, 2013 7:16 am
As long as we’re thinking outside the box and off the wall, I might as well vent a bit of worry I have from time to time. Sometimes I wonder if we are always working from the right definition of
a problem and the right idea of what counts as a solution. Do the notions of recognizing a set and generating a set, or computing a function and inverting a function, as important as they are,
really give us the most natural, undistorted representation of the kinds of situations we pre-theoretically call problems in real life and scientific inquiry? Just something to think about …
□ October 18, 2013 9:10 am
“I might as well vent a bit of worry I have from time to time.’
Thanks for sharing your concern. For many of us it is less of a worry and more of a frustrating struggle to reconcile what we see with how we are instructed to see it!
8. November 7, 2013 9:26 pm
perhaps 23 has some real significance in our universe. the revised up quark energy value is 23. the down quark has an energy value of (23)^.5. the eta prime particle has an energy value of 2(23
^.5). the birthday problem has a critical point at the number 23.
Recent Comments
John Sidles on Multiple-Credit Tests
KWRegan on Multiple-Credit Tests
John Sidles on Multiple-Credit Tests
John Sidles on Multiple-Credit Tests
Leonid Gurvits on Counting Is Sometimes Eas…
Cristopher Moore on Multiple-Credit Tests
Multiple-Credit Test… on Wait Wait… Don’t F…
Amanda on Counting Is Sometimes Eas…
matrix 15 year anniv… on The Evil Genius
Phil on Wait Wait… Don’t F…
Sam Slogren on Counting Is Sometimes Eas…
Dustin Yoder on Can We Solve Chess One Da…
Amir Ben-Amram on Counting Is Sometimes Eas…
Istvan on Counting Is Sometimes Eas…
Istvan on Counting Is Sometimes Eas…
|
{"url":"http://rjlipton.wordpress.com/2013/10/15/512/","timestamp":"2014-04-16T04:12:50Z","content_type":null,"content_length":"97123","record_id":"<urn:uuid:90ab88e6-cd9c-48fd-9f40-562691f44ecd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US4162480 - Galois field computer
This invention relates to apparatus for correcting multiple errors in cyclic encoded data streams composed of sequential sets of data bits followed by respective sets of encoded check bits.
Information transmitted over a communication channel is generally received as a combination of the original information and a noise component. Integrity of the information content is substantially
entirely preserved when the signal to noise ratio of the system is large. Accordingly, refinements in design and realization of the appropriate hardware can increase the probability or error-free
transmission, theoretically, up to the limits imposed by the channel itself. In order to minimize the effect of intrinsic channel limitations, various techniques are employed which ultimately require
a compromise between bandwidth and information transfer rate. Various limitations imposed on the channel bandwidth, information rate, and the degree of complexity of receiving and transmitting
apparatus contribute to a probable error rate.
Although redundancy is a common element among these techniques, mere repetition exacts a heavy penalty in transmission rate. For example, a single repetition reduces the information rate 50% and a
second repetition (to implement majority logic) reduces the information rate by 611/2%. Other means for insuring message integrity have employed sophisticated coding techniques which permit the
detection, location, and correction of errors. Among the desiderata of these coding techniques are high information rate and a capability of correcting multiple errors within any given codeword of
transmitted data.
In this context a codeword results from encoding operations performed upon the elements of the original data comprising k bits to yield an encoded word ("codeword") of information having k
information bits and r check bits. The encoded redundancy in the form of r check bits is then available during the decoding operations to detect and correct errors in the codeword (including all k+r
bits) up to some limit or merely to detect errors up to some larger limit.
Many such codes, having distinct mathematical properties, have been studied and mathematically efficient decoding procedures have been devised, but reduction to practice with concomitant efficiency
requires a special purpose computer. For example, certain classes of codes are founded on association of each information element of a codeword with an element of a Galois field.
Very briefly, the Galois field is a finite field, the elements of which may be represented as polynomials in a particular primitive field element, with coefficients in the prime subfield. The
locations of errors and the true value of the erroneous information elements are determined after constructing certain polynomials defined on the Galois field and finding the roots of these
polynomials. A decoder is therefore required which has the capability of performing Galois field arithmetic.
Of the error correcting codes, a particular class of such codes, separately described by Bose, Chaudhuri and Hocquenhem (thus "BCH" codes), are capable of multiple error correction. A special case of
such codes are the Reed-Solomon (RS) Codes with respect to which the present invention will be described.
One approach to the problem of sufficiently high speed error correction of BCH encoded data was described in terms of an algorithm published in my text "Algebraic Coding Theory" (McGraw-Hill, 1968).
Prior art employment of the aforesaid algorithm has utilized in one instance a general purpose digital computer controlling an essentially peripheral arithmetic unit implementing Galois field
manipulation. Certain prior art arithmetic units have used large stored tables to implement inversions appearing in decoding procedures.
Other prior art Galois field arithmetic units employ iterative multiplications for inversions, thereby avoiding heavy penalties in hardware and calculation time, which are associated with division
operations. Finite field multiplication manipulation can lead to inversion because in a Galois field, as for example, the field GF(2^5),
beta^31 =Beta^0 =1.
beta^30 =beta^-1
Thus, given a quantity beta, a straightforward prior art method of obtaining its inverse, beta^-1 is defined by performing 2^m -2 (here 30) repetitions of a Galois field multiplication upon the
Galois field representation of beta in the Galois field GF (2^m).
Prior art computers have conventionally employed memory addressing techniques based on an arithmetically sequential organization of information. Addressing is physically implemented by developing an
effective address which may be the result of conventional arithmetic operations. Address modification is conventionally effected in hardware which performs an addition or decremention of a base
address. Consequently, circuitry implementing this conventional form of address modification incorporates adder circuits to arithmetically increment or decrement an address with a resulting delay for
signal propagation corresponding to the arithmetic carry operation between adjacent bit positions of a working register wherein the result is developed.
Accordingly, one principal object of the invention is provision of a novel computer for implementing Galois field arithmetic and algebra. The computer has fewer components, fewer data paths, and
higher speed than a general purpose digital computer employed for this purpose.
It is a feature of this invention to divide the structure of the invention intor three distinct sub-structures and to allocate operations such that arithmetic operations upon data are implemented in
an arithmetic unit substructure, memory addressing for such arithmetric unit being separately effected in an address generator substructure, and each said substructure is controlled by a control unit
substructure, whereby said substructures are capable of synchronous concurrent operation.
It is also a feature of this invention to provde addressable memories within each of the aforementioned substructures, each of which addressable memories is specialized to the purpose of the
respective substructure wherein said memory is situated.
It is also a feature of the invention to provide means for addressing an array of memory elements according to a shift register sequence whereby address arithmetic may be implemented without
incurring penalties in propagation delays and additional circuit components.
It is also a feature of the present invention to provide apparatus for implementing Galois field arithmetic operations, including means to more efficiently perform the divisionless inversion of a
quantity to obtain its reciprocal.
It is again a feature of the present invention to provide a Galois field arithmetic unit having a register which may be selectively reset to the square of the current content of said register or
which said register may be reset to the product of the current content of said register with the quantity alpha, where alpha is a root of the primitive factor of the Galois field.
Another feature of the invention is the provision of a programmed sequence of instructions whereby said computer, having the features above set forth, is operable to correct any combination of 2t
erasures and s errors, in a RS (31,15) codeword, such that (2t+s)≦16, wherein said codeword comprises digital data encoded in the Reed-Solomon (31,16) code.
It is again a further feature to provide means for said arithmetic unit to alter the definition of the function to be executed by said Galois field arithmetic unit in response to the content of a
register, wherein the register content is an independent variable upon which said function operates.
Another feature of the invention is the provision of a sequence of control of said computer whereby said computer having the features above set forth is capable of solving two, three, four or five
simultaneous linear binary equations among respectively, two, three, four or five variables.
FIG. 1 is a block diagram of apparatus organized according to a preferred aspect of the invention.
FIG. 2 is a more detailed block diagram of apparatus built according to a preferred aspect of the invention.
FIG. 2A is a yet more detailed block diagram of a control unit employed in such apparatus.
FIG. 2B is a yet more detailed block diagram of an address generator employed in such apparatus.
FIG. 2C is a yet more detailed block diagram of an arithmetic unit employed in such apparatus.
FIG. 3 is an illustration of the format of the ESK control word employed in such apparatus.
FIG. 4 is a block diagram of memory allocation within the data memory of the arithmetic unit of such apparatus.
FIG. 5 is an illustration of the format of address information created by the address generator of such apparatus.
FIG. 6 is an overview of the preferred RS decoding program.
FIG. 7A, 7B and 7C show more detailed flow charts for the preferred error correction program of FIG. 6.
FIG. 8A, 8B and 8C illustrate the progression of content of arrays wherein polynomials sigma and reverse omega are developed by program segment SUGIY
FIG. 9 illustrates part of a flow chart used by prior art linear binary equations solving apparatus.
The discussion of apparatus is gest prefaced by a review of the salient aspects of coding theory, applicable to nonbinary BCH codes in general and to RS codes in particular. As a general reference,
my text cited above, "Algebraic Coding Theory", (hereafter, Berlekamp, 1968) is recommended. In a binary realization, such a code may be regarded as having three principal positive integer
parameters, n, m, and t, where n is the total length in m-bit characters of a word of encoded information and n=2^m -1 and t is the error correcting capability of the code. Assuming no more than mt
redundant characters or check characters such a codeword is capable of providing sufficient informational redundancy to detect and correct any set of less than t independent errors and erasures
within the codeword of encoded information, or to detect (without correction) any set of 2t independent errors and erasures. An erasure may be defined as an error of known location within the
received codeword.
The properties of an algebraic finite field may be summarized briefly. For the purposes of the present invention, a field may be informally defined as a set of elements including the null element, 0,
and the unit element, 1, upon which are defined operations of addition, multiplication and division. Addition and multiplication are associative and commutative and multiplication is distributive
with respect to addition. Every element of the field has a unique negative such that the negative of a given element summed with that given element itself yields the null or 0. Further, evey non-zero
element has a unique reciprocal such that the product of such an element with its reciprocal yields the unit element, 1. The elements comprising the field may be considered symbolic representations
of binary or ternary or q-ary numbers. The description of the invention will be understood best in terms of a field of characteristic two.
The general finite field is called the Galois field and is specified by two parameters; a prime p, and an integer m, whereby GF (p^m) describes a unique finite field (the Galois field of order p^m)
having p^m elements. In such a field all operations between elements comprising the field yield results which are again elements of the field. For example, the operation of addition carried out an
elements of the finite field is defined, modulo 2, according to relations which do not admit of a "carry". Thus, the binary addition tables are: 0+1=1+0=1 and 0+0=1+1=0. Arithmetically, this is a
"carry-less" addition, sometimes referred to as half addition and more commonly denoted as the exclusive-or (XOR). It is apparent that absence of a carry thereby limits the magnitude of the resulting
sum to the finite field.
The mathematical basis of Reed-Solomon codes and decoding thereof, as discussed in greater detail in Chapter 10 of Algebraic Coding Theory is as follows:
Let α be a primitive element in GF(2^m). The code's generator polynomial is defined by ##EQU1## where d is the code's designed distance. The block length of the Reed-Solomon code is n=2^m -1. The
codewords consist of all polynomials of degrees<n which are multiples of g(x).
Let C(x) be the transmitted codeword, ##EQU2## If the channel noise adds to this codeword the error pattern then the received word is ##EQU3##
The syndrome characters are defined by
S[i] =E(α^i) Equ. 3
Since C(x) is a multiple of g(x) it follows that for i=1, 2,. . ., d-1, C(α^i)=0, whence
S[i] =R(α^i) i=1, 2,. . . d-1 Equ. 4
The generating function of the S's may be defined by ##EQU4##
In order to correct the erros, the decoder may find the corresponding error locations and error values. If ##EQU5## then the jth error location may be defined as
X[j] =α^e.sbsp.j
where the e[j] are the unique integers such that
E[0] =E[1] =. . . =E[e].sbsb.j+1[-1] =0
E[e].sbsb.j ≠0
E[e].sbsb.j+1 =E[e].sbsb.j+2 =. . . =E[e].sbsb.j+1 -1.sup.≠0
Erasure locations may be similarly associated with field elements and the corresponding values of errata may be defined as
Y[j] =E[e].sbsb.j
To determine the unknown X's and Y's, it is useful to define these polynomials
error locator polynomial ##EQU6## erasure locator polynomial ##EQU7## errata locator polynomial
ρ(z)=α(z)·λ(z) Equ. 8
errata evaluator polynomial ##EQU8##
To find the X's and Y'x the decoder first multiplies S(z) by λ(z) to obtain the modified syndrome generating function
T(z)=S(z)·λ(z) Equ. 10
The unknown errata evaluator polynomial and the unknown error locator polynomial are related by the key equation,
T(z) σ(z)≡ω(z) mod z^d Equ. 11
Given T(z), low-degree solutions of σ(z) and ω(z) may be found by solving this key equation using the iterative algorithm presented in Algebraic Coding Theory, and later described more succinctly by
Sugiyama, et. al., A Method For Solving Key Equations for Decoding Gappa Codes, Information & Control, Vol. 27 No. 1, January 1975, pp 87-99.
After the coefficients of σ(z) are known, the decoder may evaluate the polynomials σ(1), σ(α^-1), σ(α^-2), σ(α^-3) , . . . ##EQU9## If σ(α^-i)≠0, then the received character at location α^i is
presumed correct (unless erased). If σ(α^-i)=0 or if λ(α^-i)=0, then α^i is an errata location, and the received character at that position should be corrected by the value given in Eq. (10.14) of
Algebraic Coding Theory: ##EQU10## Having briefly described the mathematical basis of error correction, a preferred embodiment of the present invention next follows. The ensuing description is
addressed both to one skilled in the art of digital computer design and to one skilled in the art of programming. The former artisan may be specifically characterized as having familiarity with the
organization of digital computers generally and with the implementation of digital logic from commercially available electronic components. The latter artisan will be familiar with programming at the
level of machine dependent languages and instruction codes of microprogrammable processing units.
A. Hardware
Referring now to FIG. 1, there is illustrated an organizational diagram relating the control unit 4, address generator 5 and arithmetic unit 6. These functional units are capable of independent
concurrent operations. A number of control signals and data paths are here indicated.
Turning now to FIGS. 2 and 2A, control unit 4 includes a control memory 40 of 512 words, each 36 bits in length. This control memory is advantageously realized from read only constituents (ROM).
Addressing of this memory is accomplished in conventional fashion from a program counter, or P register 41, 9 bits in length, which increments, or alternatively resets the P register to a new value
in accord with the value of the single bit G register 42, as further described below. The content of the word fetched from control memory 50 is transferred to the 36 bit ESK register 43.
The control unit also contains a master clock 47 providing periodic signals CLK (and -CLK) for synchronizing the operation of various components as discussed at greater length below.
Throughout this description the registers and their value or their contents will often be referenced by reciting the letter labeling the register. In order to resolve any possible ambiguity, the
hardware realization of the register will be denoted by letter label and numeric descriptor. In general, however, the reader is cautioned that the identical component may be referenced by multiple
labels for the convenience afforded in various sections of the specification, i.e. hardware description, program description, etc. Notational limitations are imposed by the requirements of
programming languages as well as by the need of avoiding ambiguity. Thus a particular component may be referenced as "data memory", "M memory", "M", "60", and "arithmetic unit memory".
FIG. 2B supplements FIG. 2 for discussion of the address generator. The address generator 5 contains two principal registers: the address register 50 (or A register), 9 bits in length and the test
register 51 (or T register), 8 bits in length. The output from address generator 5 includes an 8 bit number, A, residing in the 8 least significant bits of the 9 bit A register 50 and a separate
single bit AEQT, which is active upon the detection of the condition of equality for the contents of A register 50 and T register 51. The principal funtion of A register 50 is to supply address
information for accessing another memory located as described below in arithmetic unit 6. The address generator includes a memory 52, or C memory, realized from a set of shift register counters
(hence "Counter Memory"), from which words may be fetched for loading the A register 50. The C memory 52 comprises 32 words of 8 bits each, only one of which words can be active during any single
cycle of the master clock of the invention. The content of any word fetched from C memory 52 may be modified by the C-modifier 53, which can increment, decrement or leave unmodified the contents of
the currently selected word of C memory 52.
Incrementation and decrementation here refer to operations carried out in the Galois field (2^7) on the content of a selected word of C memory 52. Accordingly, C word modifier 53 represents the
combinatorial logic for implementing these operations in GF(2^7). The truth table completely describing this logic is here deferred to follow the more complete description of functional instructions
of the address generator 5. Qualitatively, incrementation may be viewed as advancing a pointer in the clockwise direction around a circular path of length 2^7 -1, and decrementation may be regarded
as backing up the pointer along the same path. Thus the operations are mutually inverse and when equally applied, the status quo is restored.
Any of the above referenced modifier options may be selected for loading the A register 50. Also available is a fourth modifier option for loading A register 50 from a constant, K, generated
externally to the address generator. In addition to loading the A register 50, the modified word may be rewritten into the C memory 52 during the second half of the clock cycle.
The T register 51 is selectively initialized by control unit command from the lower 8 bits of A register 50. The current value of A is compared with the value of T in comparator 54 which supplies a
signal AEQT to control unit 5 to indicate either the condition of equality, A==T, or non-equality, A!=T.
The control unit 4 is more fully described in FIG. 2A. The ESK register 43 of the control unit 4 is illustrated in FIG. 3. This latter register derives its name from the subfields of the ESK control
word as further illustrated in FIG. 3: a 9 bit subfield, K, provides numeric constant information which may be employed by either of the P register 41 or the address generator 5 as hereafter
described; a subfield, E, of 9 bits supplying enabling signals to the various registers and memory writing functions as hereinafter further described, and an 18 bit subfield S, further subdivided as
described below for the selection of various multiplexor options, or to determine the course of branching. The allocation of functions for groups of S bits within the subfield will appear more
clearly after the apparatus is more fully described.
Reference will occasionally be made to the D latch 55 which is located intermediate C memory 52 and C word modifier 53 in information flow. The D latch 55 permits rapid re-write back into C memory
52, via path AP, without propagating the information content of the selected C word, C[j] over an unduly long path.
Returning now to FIG. 2, and 2A, a one-bit signal G or "GOTO" bit controls the P register 41 in the following sense: at the end of the current clock cycle the P register 41 is either arithmetically
incremented or alternatively reset to the value of the K subfield of the current content of ESK register 43. The value of G is determined in turn as the result of logical operations among signals
effected by an aggregate of circuits denoted by branching logic 44. Any one of sixteen logical conditions may be tested during a cycle and the result of such testing loaded into the G bit at the end
of that cycle (cycle i). A "true" condition for the G bit causes loading of the P register 41 from the value of the k subfield of the control word present in ESK register 43 at the end of cycle i+1.
During the next cycle (cycle i+2) another word is transferred to the ESK register, such word having been pre-fetched during the prior cycle using the then current P register content to specify the
address in control memory 40. In the other words, the control memory 40 is fully overlapped between the memory fetch operation and the transfer to the memory output. At the completion of cycle i+2
and concurrent initiation of cycle i+3, the content of address K, as specified during cycle i is present at the output of control memory 40 and strobed into ESK register 43. Thus, there is a deferral
of two cycles between a transition of the G bit to its true state and the commencement of execution of the instruction located at the destination of the "GOTO".
FIG. 2 and 2C are referenced for discussion of arithmetic unit 6. The arithmetic unit 6 employs a random access memory 60 also called data memory, or simply M, of 256 words, each 6 bits in length.
The input path to the data memory 60 proceeds through the memory driver register 61, or V register. Addressing of the data memory 60 is accomplished from the A register 50 as previously described. In
addition to the addressing modes for accomplishing deposit and retrieval of information from data memory 60, bit A8 of A register 50 (the 9th bit of A) has the function of selecting an immediate
address mode whereby the 6 bit constant (A5. . .A0) is available at the output bus of data memory 60, bypassing the memory itself. Thus, selected 6 bit operands may be supplied to those arithmetic
registers described below which communicate with data memory 60 through the output bus of that memory.
Input to the arithmetic unit 6 from the data memory 60 output bus (M5-M0) is available as well as the result (R4-R0) of previous finite field arithmetic operations. A multiplexer 64' selects which of
words M or R is to be captured by the Y register 64. This register and the Z register 65 may be selected to provide data to the combinatorial logic 62 which implements the arithmetic operations. The
X register 63 drives the combinatorics as well, but gets its input not directly from the data memory 60, but from Y register 64, Z register 65 or the combinatorial logic 62. The latter includes a
multiplier, which is simple a set of XOR and NAND gates, driven by X, Y, and partial sums of X contained in an extension 63" of X register 63. Such networks are well known in the art. It also drives
a summation network (adder) by which R bus 66 sums content of Z register 65 when selected. The summation outputs (R bus 66) are made available for selection by the V register 61 (which subsequently
returns the result to memory) or to Y register 64.
The X register 63 is a five bit register which has a multiplexer 63' associated with it providing one of four possible input types for each X register bit. The input is selected during a cycle by
control bits SX1 and SX0 (Select X) as further described. Preferred options for input to X are: Y, X*X, alpha*X, or Z, where alpha is the primitive element of GF(2^5). The X register 63 captures the
result of any cycle at the end of a cycle unless its control bit, !EX, inhibits the clocking of the X register. Sums of pairs of bits of X which are needed by the multiplier are also held in an
extension of the X register 63". These partial sums are used to generate X*X for example.
Although the Z register 65 is six bits wide, only the lower five bits drive the arithmetic unit. The signal YZR detects the all zero state of the low order five bits of Y. This condition is available
for input to the G bit multiplexer 42'. A further select bit, SR, will enable the Z register input to the summer when set as described below.
The X register 63 and its extension 63' drive the NAND gate portion of the multipler network. The multiplier is preferably grouped as four sets of five NAND gates and one set of six NOR gates, each
set of NAND gates enabled by a unique bit of the Y register, Y4 to Y0. The outputs of the NAND gates in turn drive a parity tree used as an XOR summation network, one tree or summer for each bit of
the result bus.
At the end of a cycle, new data is strobed into every register enabled by its respective control bit. The new data propagates through the partial products, multiplier, and summer of combinational
logic 62 during the following cycle and thus determines the next available data for V and Y via the R bus. Note that the Z register may selectively be included in the summation as determined by the
state of the SRO control bit.
The specification of the Galois field combinatorial logic is most compactly represented by the truth table set forth in Table I below. The format of this truth table is conventional; inasmuch as the
operands associated therewith are elements of GF(2^5), the set of symbolic names of the 32 elements form an alphabet such that each element is compactly represented by a single character. The
cross-definition of the symbolic character to its binary equivalent is appended in the form of the left hand vertical column of 5 bit binary characters. Thus a period denotes the all zero word.=
00000. Similarly, the unit element of the field is denoted by !=00001. Alpha, the primitive element of the field is denoted by the symbol &. In this way use of the symbol "1" is avoided, because in
some circumstances it might be taken to mean & and in other circumstances it might be taken to mean!. However, the elements alpha^2, alpha^3, . . . are denoted by 2,3. . . 8,9,A,B,. . . U. In other
words, the 32 symbols in GF(2^5), are represented by the character set containing the 21 letters A to U, the eight digits 2-9, and the additional characters ".", "!," and "&".
TABLE I The Product X*Y for GF(2^5) MULTIPLICATION TABLE FOR GF(2^5) . ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U 00000 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 00001 = ! . ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U 00010 = & . & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U ! 00100 = 2 . 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U ! & 01000 = 3 . 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U ! $ 2 10000 = 4 . 4 5 6 7 8 9 A B C D E F G H I J K L M N O PQ R S T U ! & 2 3 00101 = 5 . 5 6 7 8 9 A B C D E F G H I J K L MN O PQR S T U ! & 2 3 4 01010 = 6 . 6 7 8 9 A B C D E FG H I J K LMN O P Q R S T U ! & 2 3 4 5 10100 = 7 . 7 8 9 A B C D E F G H I J K L M NOPQ R S T U ! & 2 3 4 5 6 01101 = 8 . 8 9 A B C D E F G H I J K L M N OPQ RS T U ! & 2 3 4 5 6 7 11010 = 9 . 9 A B C D E F G H I J K L M N O PQ RSTU ! & 2 3 4 5 6 7 8 10001 = A . A B C D E F G H IJK LM N O P Q R S T U ! & 2 3 4 5 6 7 8 9 00111 = B . B C D E F G H I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A 01110 = C . C D E F G H I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B 11100 = D . D E F G H I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C 11101 = E . E F G H I J K L M NO P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D 11111 = F . F G H I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E 11011 = G . G H I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F 10011 = H . H I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G 00011 = I . I J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H 00110 = J . J K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I 01100 = K . K L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J 11000 = L . L M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B CD E F GH I J K 10101 = M . M N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L 01111 = N . N O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M 11110 = O . O P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K LM N 11001 = P . P Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O 10111 = Q . Q R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P 01011 = R . R S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J KL M N O P Q 10110 = S . S T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R 01001 = T . T U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S 10010 = U . U ! & 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T
The above truth table expresses the operation of multiplication in GF(2^5). The exclusive OR (XOR) operation, well known in the art, may be similarly expressed. The Galois field logic 62 implements
the operation X*Y according to the rules expressed by table I and also the operation X*Y+Z, where "+" is the XOR of the product (X*Y) with the GF(2^5) variable Z.
The result of finite field arithmetic operations appear at the output of the Galois field combinatorial logic 62 in the form of a pseudo register 66 called the R register, or R bus. As distinct from
the real registers, R bus 66 is merely an aggregate of 6 conductors which carry the voltage levels expressing the digital result of a Galois field operation to the gating logic for the Y register 63
and V register 61. The information on R bus 66 is thus transitory, available at a point during the clock cycle when it may be gated to Y register 64 or V register 61.
An addressed word M [A] of the data memory 60, also may be selected as a data source of the output, or O register 67, also 6 bits in length. Thus input data streams enter the data memory 60 by V
register 61 and corrected data are available to the data sink by O register 67.
Three logic signals are employed for communication from arithmetic unit 6 to control unit 4 for decoding purposes. The branching logic 44 of control unit 4 treats these signals as logical variables
determining the content of the "GOTO" but which in turn controls the branching from sequential execution of instructions stored in controm memory 40. The three signals originating in arithmetic unit
6 include a signal indicative of the zero content of the Y register 63, a signal Z5, derived from the erasure bit of Z register 64 and the signal TV derived from V4. Z5 is the erasure bit through
which erased data words are labeled, while TV can produce a signal dependent upon content of an individual bit of the V register.
The organization of data memory 60 is clarified by reference to FIGS. 4 and 5. It will be noted that the output developed by the address generator 5 is directed toward addressing the data memory 60
of the arithmetic unit. The A register word format is shown in FIG. 5. Clearly, the 256 word data memory 60 is directly addressed by 8 bits; consequently the 9th or most significant bit of A register
50 (that is, A8) is superfluous to direct addressing.
The asseration of this bit enables an "immediate address" mode whereby the content of the 6 bits A0-A5 are gated directly to the output bus of data memory 60, thereby providing 2^6 constants directly
to the Y,Z, and O registers of the arithmetic unit. These constants are shown schematically in FIG. 4 as an annex to the data memory. The 8th bit, A7 of A register 50, effectively divides the 256
word data memory into upper and lower 128 addresses as described by FIG. 4. Addresses 1 to 127 are freely utilized for scratch pad purposes. Upper memory addresses 129-255 are employed for
input-output (I/O) buffers. Addresses 000 and 128 are not used by the system in operation.
The operation of the C work modifier 53 is best understood with a brief explanation of the address arithmetic. The addresses developed by the address generator 5 are used to address data memory 60
according to the general scheme of FIG. 4. The upper 127 locations of data memory 60 are designed to be used as input/output buffers. These locations are sequentially ordered according to the
canonical representation of the powers of gamma in the Galois field GF(2^7), where gamma^7 =gamma+1. Thus,
gamma^1 =0000010
gamma^2 =0000100
gamma^3 =0001000
gamma^4 =00010000
gamma^5 =0100000
gamma^6 =1000000
gamma^7 =0000011
. . . . .
gamma^126 =1000001
gamma^127 =0000001
The only arithmetic operations ever performed on addresses referring to locations in the i/o buffer are the ±± operations which convert gamma^i into gamma^i±1:. These operations act on the 127
nonzero seven bit words and run them through a cycle of length 127. For this reason, it is useful both to refer to the 7th bit representation of these numbers corresponding to the actual content of A
register 50, and also to the rational integers corresponding to logarithms, base gamma, of these entities considered as elements of GF(2^7).
A complete functional description of the C word modifier 53 and a definition of the operations of incrementation and decrementation are provided by the truth table given in Table II. The data read
from C memory 52 resides in the D latch 55, which latch may be regarded as the operand upon which the C word modifier 53 operates. Table II gives the value of D ± 1 corresponding to the 2^7 values
which can be assumed by the lower 7 bits, D6 . . . D0. The correspondence is given separately for the hexadecimal content of the lower 7 bits of D expressed together with the leading bit D7. The
latter bit appears separately as the most significant character. Correspondence is also given for D±1 expressed as a logarithm base gamma of the same data interpreted as a polynomial in gamma.
Bit D7 (thus A7) is unaffected by the incrementation/decrementation operations. This insures that any combination of increments and decrements of an address referring to the i/o buffer will still
refer to the i/o buffer and any combination of increments and decrements of an address referring to the scratch storage will still give an address which refers to the scratch storage.
The addresses 00000000 and 10000000 are unusual in that they are also unchanged by incrementing or decrementing. These locations of data memory are used only for diagnostic purposes.
Having described the major components and their mutual relationships, attention is again invited to the structures of the ESK control word illustrated in FIG. 3. Tables III and IV respectively
indicate the allocation of enable (E) bits and select (S) bits.
TABLE II______________________________________TRUTH TABLE FOR C WORD MODIFIER______________________________________D D + 1 D - 1hex log hex log hex log______________________________________000 ∞ 000 ∞ 000 ∞001 5 002 6 040 4002 6 00c 7 001 5003 12 00e 13 041 11004 127=0 008 1 006 126005 54 00a 55 046 53006 126 004 127=0 007 125007 125 006 126 047 124008 1 010 2 004 0=127009 29 012 30 044 2800a 55 01c 56 005 5400b 88 01e 89 045 8700c 7 018 8 002 600d 19 01a 20 042 1800e 13 014 14 003 1200f 61 016 62 043 60010 2 020 3 008 1011 65 022 66 048 64012 30 02c 31 009 29013 110 02e 111 049 109014 14 028 15 00e 13015 95 02a 96 04e 94016 62 024 63 00f 61017 26 026 27 04f 25018 8 030 9 00c 7019 68 032 69 04c 6701a 20 03c 21 00d 1901b 36 03e 37 04d 3501c 56 038 57 00a 5501d 45 03a 46 04a 4401e 89 034 90 00b 8801f 106 036 107 04b 105020 3 040 4 010 2021 17 042 18 050 16022 66 04c 67 011 65023 93 04e 94 051 92024 63 048 64 016 62025 43 04a 44 056 42026 27 044 28 017 26______________________________________hex log.sub.γ hex log.sub.γ hex log.sub.γ______________________________________027 52 046 53 057 51028 15 050 16 014 14029 113 052 114 054 11202a 96 05c 97 015 9502b 75 02a 76 04e 7402c 31 058 32 012 3002d 115 05a 116 052 11402e 111 054 112 013 11002f 41 056 42 053 40030 9 060 10 018 8031 33 062 34 058 32032 69 06c 70 019 68033 72 06e 73 059 71034 90 068 91 01e 89035 77 06a 78 05e 76036 107 064 108 01f 106037 85 066 86 05f 84038 57 070 58 01c 56039 98 072 99 05c 9703a 46 07c 47 01d 4503b 80 07e 81 05d 7903c 21 078 22 01a 2003d 117 07a 118 05a 11603e 37 074 38 01b 3603f 102 076 103 05b 101040 4 001 5 020 3041 11 003 12 060 10042 18 00d 19 021 17043 60 00f 61 061 59044 28 009 29 026 27045 87 00b 88 066 86046 53 005 54 027 52047 124 007 125 067 123048 64 011 65 024 63049 109 013 110 064 10804a 44 01d 45 025 4304b 105 01f 106 065 10404c 67 019 68 022 6604d 35 01b 36 062 3404e 94 015 95 023 9304f 25 017 26 063 24050 16 021 17 028 15051 92 023 93 068 91052 114 02d 115 029 113053 40 02f 41 069 39054 112 029 113 02e 111055 74 02b 75 06e 73056 42 025 43 02f 41057 51 027 52 06f 50058 32 031 33 02c 31059 71 033 72 06c 7005a 116 03d 117 02d 11505b 101 03f 102 06d 10005c 97 039 98 02a 9605d 79 03b 80 06a 7805e 76 035 77 02b 7505f 84 037 85 06b 83060 10 041 11 030 9061 59 043 60 070 58062 34 04d 35 031 33063 24 04f 25 071 23064 108 049 109 036 107065 104 04b 105 076 103066 86 045 87 037 85067 123 047 124 077 122068 91 051 92 034 90069 39 053 40 074 3806a 78 05d 79 035 7706b 83 05f 84 075 8206c 70 059 71 032 6906d 100 05b 101 072 9906e 73 055 74 033 7206f 50 057 51 073 49070 58 061 59 038 57071 23 063 24 078 22072 99 06d 100 039 98073 49 06f 50 079 48074 38 069 39 03e 37075 82 06b 83 07e 81076 103 065 104 03f 102077 122 067 123 07f 121078 22 071 23 03c 21079 48 073 49 07c 4707a 118 07d 119 03d 11707b 120 07f 121 07d 11907c 47 079 48 03a 4607d 119 07b 120 07a 11807e 81 075 82 03b 8007f 121 077 122 07b 120______________________________________
TABLE III______________________________________ENABLE BITS OF ESK CONTROL WORDEX Enables local clock input to X register.If EX=0, X retains its previous value;If EX=1, X will be reset to a new valuedetermined by the SX select character.EY Enables local clock to Y registerEZ Enables local clock to Z registerEO Enables local clock to Output registerEV Enables local clock to V registerEM Enables writing of data memory MET Enables local clock to T registerEC Enables writing of address generator memory CEA Allows A register to retain its previous value;actually enable Dlatch and also called ED______________________________________
TABLE IV______________________________________SELECTION CODES OF ESK CONTROL WORDSR One bit (of this two-bit character) determines whetherR=X*Y or R=X*Y+Z.The second bit permits this determination to becomedata dependent as describedSX Two bits (of this three-bit character) control the multi-plexor which selects the potential next X value from among Z, Y,X^2, and alpha *X.SY One bit character selects the potential next Y value fromamong R or M[A].SV One bit character selects the potential next V value fromamong R or input bus N.SC Five bit character addresses one of 32 C registersSA Two bit characterselects the input path to the A register from four possi-bilities: C, C incremented, C decremented, K.SG Four bit character controls branching logic whose 1 bitoutput is the value of G.______________________________________
At this point it is useful to consider the command structure for each of the control unit, address generator and arithmetic unit. Because the invention here described is almost completely
synchronous, commands for the 3 major substructures are executed concurrently, thereby resulting in very high speed operation.
Control unit commands are tabulated together with a brief explanation in Table V. With respect to the notation here employed, the exclamation point (!) to the right of the symbol connotes negation,
as for example, G=A!=T states the assertion of G upon the condition of non quality between the content of A and the content of T. The exclamation point to the left of a symbol connotes the
non-assertion condition or logical 0: The symbol !NS has the meaning that the signal NS is in the "off" condition. The double equality sign is the equality relation, unlike the single equality, which
is an assignment rather than a relation. Thus G=A==T is a command which sets G to 1 if and only if (iff) the contents of A and T are equal. Failure of the relation A==T sets G to 0. In like manner,
the double ampersand, && denotes the AND relation. For example, G=A! =T&&NS sets G to the logical conjunction of A not equal to T and NS asserted.
It will be observed that the control unit instructions focus upon logical conditions which determine the value of G, the "GOTO" bit, which controls branching of the execution of the instructions
contained in control memory 40. Thus, control unit commands are branching instructions: G=1 is an absolute jump to the instruction whose address is contained in the K subfield of the next word
fetched from control memory 40; G=0, the jump inhibit, insures that instructions continue to execute in sequence. The remaining branching instructions are conditional jump instructions wherein a jump
or transfer is conditioned upon specified values of the various logical variables.
The instructions G=TV() and G=!TV() permit the branching logic to sense a specific V register bit (V4).
TABLE V__________________________________________________________________________CONTROL UNIT COMMANDSCOMMAND SG Bits EXPLANATION__________________________________________________________________________G=1 0000 Absolute control transfer to ROM address contained in K fieldG=Z5() 1010 Sets G to the highest bit of Z. This is the sixth bit, through which erased characters are detected.*G=!Z5() 1010 Sets G to the complement of Z5G=Y==0 1000 Sets G to 1 iff the Y register contains 0 (YZR true)G=Y!=0 1100 Sets G to 1 iff Y is nonzero (YZR not true)G=A==T 1011 Sets G to 1 iff A and T are equal (only the lower 8 bits of A enter the comparator; A8 is ignoredG=A!T 1111 Sets G to 1 iff A differs from TG=!0S&&!NS 0001 Set G to 1 iff the outputs status bits OS and the input status bit NS are both offG=0SλNS 0011 Sets G to 1 iff either OS or NS is one. This detects that some i/o channel wants serviceG=A!=T&&0S 0100 Sets G to 1 iff OS==1 and A!=TG=A!=T&&NS 0101 Sets G to 1 iff NS==1 and A!=TG=A!=T&&G 0111 Sets G to 1 only if previous G==1 and A!=TG=A!=T&&G 0110 Sets G to 1 only if previous G==0 and A!=TG=TV() 1001 Sets G to the value of the bit V4G=!TV() 1101 Sets G to the complement of the bit V4__________________________________________________________________________ *Parenthesis indicate functional relationship of the indicated quantity
The commands executed within the address generator are summarized in Table VI. These instructions are directed to manipulation of contents of A register 50, T register 51 and address generator memory
52. Basically, there are four options for the content of the A register as previously described. These options are controlled by two Select A (SA) bits of the Select subfield of the ESk register. It
will be recalled that incrementation in the address generator is distinctly non-arithmetic in the conventional sense, being merely the incremental step in a shift register sequence, characteristic of
the field GF (2^7) through which 2^7 -1 locations of data memory are addressed. This modification to the content of A may be written back to C memory 52, or not so stored, in accord with the
appropriate command. Locations within C memory 52 are addressed by the 5 bit Select C (SC) subfield of the ESK register.
In the notation of Table VI, the incrementation of A register 50 together with the storage of the incremented word into the jth location, C[j], of C memory 52 is denoted by the double plus symbol,++
or double minus symbols, --. A register incrementation (decrementation) not coupled with re-storing of the non-incremented (non-decremented) value back into the appropriate C memory location is
denoted by the single plus (minus) symbol +1(-1)
TABLE VI:__________________________________________________________________________ADDRESS GENERATOR COMMANDS Code bitsCOMMAND SA !EA !EC EXPLANATION__________________________________________________________________________A=K 11 0 1 Sets A to 9 bit Konstant taken from K-field of the instructionA=Cj=K 11 0 0 Sets both A and the j^th C register to KO-7. (the 9th bit of Cj does not exist.) The instruction must specify j explicitly in hexadecimal.A=Cj 01 0 1 Sets A0...A7 to Cj; A8 becomes 0A=Cj+1 10 0 1 Sets A0...A7 to Cj incremented once; A8 becomes 0A=Cj-1 00 0 1 Sets A0...A7 to Cj decremented once; A8 becomes 0A=++Cj 10 0 0 Increments Cj and then sets A to the incremented value. Equivalent to Cj=Cj+1 followed by A=Cj.A=--C 00 0 0 Decrements Cj and then sets A to the decremented value. Equivalent to Cj=Cj-1 followed by A=Cj.A=Cj=A * 1 0 Equivalent to A8=0 followed by Cj=AA=A * 1 1 Default option. Contents of A0-7 unchanged; A8=0.T=A ET=1 Sets T to A__________________________________________________________________________ *Same as SA code of command immediately prior in time.
Arithmetic unit commands are summarized in Table VII. These instructions involve a variety of inter-register transfers, e.g. Y=R, and four specialized arithmetic operations defined on the field GF (2
^5). By the symbolic instruction SQ(), the content of X register 63 is replaced by its square. This operation is facilitated in a Galois field of characteristic two because in such a field, the
squaring operation is effectively linear. The symbolic instruction AL() replaces the content of X regyster 63 with the product of the previous content of X with alpha, the primitive element of GF(2^
5), where alpha is a solution of the equation, alpha^5 +alpha^2 +1=0. Thus AL() is operationally similar to SQ(). These products are part of the Galois field logic which are wired to a multiplexer
63' for loading X in response to the 2 bit SX character of the ESK control word. The remaining X register loading options are the content of Y register 64 (or Z register 65) gated through the
instruction X=Y (or X=Z). In contrast LF(), FF() and RF() operate upon arguments supplied from registers indicated in Table VII, placig the result on the R bus.
TABLE VII__________________________________________________________________________ARITHMETIC UNIT COMMANDSCommand* Code Bits Explanation__________________________________________________________________________V=N() EV=1:SV=1 Input is read into V. Input Status bit NS is turned off.V=R EV=1;SV=O VO-5 are set to RO-5.M[A]=V EM=1 Contents of V are written into Memory at location A. Only bits 0-7 of A are used: A8 must be 0.φ =M[A] E=1 Memory at A is read to output register φ. If A8=O, one of the 256 RAM words is read. If A8=1, 00-05 are set to A0-A5.Z=M[A] EZ=1 Similar to 0=M[A]Y=M[A] EY=1;SY=1 Similar to 0=M[A]. M5 is ignored because Y has only bits YO-Y4Y=R EY=1;SY=0 YO-Y4 are set to R0-R4. R5 is ignored.X=Y EX=1;SX=001 X is set to Y.X=Z EX=1;SX=010 X is set to Z.SQ() EX=1;SX=011 Replaces X by its square. The value of X^2 is (2^5) is obtained from the GF logic and loaded into X.AL() EX=1;SX=011 Multiplies X by alpha. The value of alpha*X in GF(2^5) is obtained from the GF logic and loaded into X. Not required for error correction of RS (.31,15) codeLF() SR=00 Select Linear Function in arithmetic unit so that R.increment.X*Y, the Galois field product of X and Y, appear on Result bus. R5 assumes the value 0.FF() SR=01 Select aFFine function in arithmetic unit so that R=X*Y+Z.RF() SR=10 Select LF() or FF() respectively as RO=0 or 1. Not required for (31,15) RS error correction.__________________________________________________________________________ *The letter φ is employed to denote the letter O to avoid confusion with the numeral zero in this table.
Referring again to FIG. 2, input and output status of the invention are recorded in two status bits realized by two J-K flipflops. One status bit 45, called NS, indicates the status of the input; The
other status bit 46, called OS, indicates the status of the output. The J inputs to the flipflops are controlled by the invention; the K inputs are controlled by the device supplying input or the
device accepting output. Both the present invention and the external devices have access to the contents of each of these two J K status flipflops. When the input flipflop is in one state, it means
that the external device has data ready, waiting for the invention to read it. When the invention reads this in via the wires N0,N1,. . . ,N5. it simultaneously switches the input status flipflop NS
to the other state. Similarly, the output status indicator reveals whether or not the output device is prepared to accept another word of output. Before emitting output, the invention interrogtes
this flipflop to ensure that the output device is prepared to accept the output. When this is the case, the invention may reload the O register 67, and simultaneously reset the status of the OS
output flipflop.
All of the input and output is under the control of the invention. Data input is thus accomplished by an instruction V=N() and output is initiated by φ=M[A]. Interrupts are not possible, although the
external devices with which the invention communicates need never be kept waiting very long, because the invention can interrogate the input and output flipflops frequently under program control.
Hence, the invention will honor any I/O request quickly unless there are good reasons for failing to do so. This can occur, for example, only if an output word is requested before the first input
block has been read in and decoded, or if the device supplying inputs tries to feed more data into the invention than its memory buffer will hold.
Summarizing the meaning of NS and OS;
Ns=1 external input waiting for the invention
Ns=0 the invention waiting for external input
Os=1 output device waiting for the invention
Os=0 the invention waiting for output device
Since the input and output status bits may be changed by external devices, they may be in transition at the instant they are sampled. If such an ambiguous signal reached the progrm counter select
input, it might cause some bits of the program counter to count while others reset, thereby causing a fatal branch to a meaningless address. To prevent this disaster, it is necessary to allow
sufficient time for any such ambiguities due to the asynchronous interface of the external signals to be resolved before the signal reaches the program counter. This resolution is accomplished in two
stages: First, in the additional prior flipflops (not shown) which set NS and OS. Then, additional resolution time is provided by the flipflop which holds bit G. Even though signals G and -G may be
fed back into the branching logic, unresolved asynchronization ambiguities can not propogate along that path because the program follows every I/O status test by a command setting G=0 before any
subsequent command that sets G to some value depending on G.
There is provided a master clock signal called CLK which synchronizes operation of the present invention. This signal may be generated externally of the invention or provided by an internal clock.
During each instruction cycle of the master clock 47, the invention will execute several of the above described subinstructions in parallel. A typical value for the time period defined by the master
clock is 120 nanoseconds. Derived from the phase of the master clock are the several "local clocks" which are merely gated clock signals. All clock signal names in the invention begin with the
letters CLK to which is added a suffic indicating the identifying letter of the register of destination, e.g. CLKM, CLKV, etc.
The invention is a synchronous machine with essentially all significant changes occurring at the end of every cycle of the positive transition of the master clock. The clocking scheme for controlling
clocked registers and memories simply jams a clock into its inactive state during the first half of the cycle (Clk high), and allows the clock to become ready at the beginning of the second phase if
the enable bit for that particular clock is active. With no inhibiting, during the CLK cycle a clock will be activated and then make a transition, usually low to high, at the end of a cycle. When
clocking memories, e.g. M or C, the clock is actually a write enable to the memory during all of the second phase of the cycle.
Since the preferred embodiment uses commercially available memory components which have zero minimum address hold time, the rising edge of the write enables signals (alias CLKM, CLKC) is nominally
allowed to coincide with the rising edge of the CLK inputs to flipflops which hold addresses for these memories. However, a potential implementation problem may arise if the CLKM or CLKC signal rings
and the flipflops containing the address register operate at much faster then typical speeds. The defense against this potential hazard has been found to be careful layout of the components,
especially the C memory and its environs, and by resistor terminations to reduce ringing of the A and CLKM signals. Additional measures to further enhance the reliability of operation of memory M
include the use of a delay-line for advancing the rising edge of CLKM several manoseconds with respect to the rising edge of CLK, and a software prohibition against any sequence of instructions which
would allow CLKM and CLKV to be active simultaneously. This prohibition insures that the data hold requirements for writing memory M are trivially satisfied.
Gate delay times are known to vary widely among individual integrated circuit chips. Allowing for the most extreme specifications guaranteed by most manufacturers, Schottky gates can have delays
ranging from 2 nsecs to 5 nsec. Thus, in the worst case, there might be up to a 3 nsec difference between the rising edges of the several clocks and this difference might be in either direction. This
may be exacerbated by layout of the invention plural circuit boards. The preferred embodiment of the invention minimizes sensitivity to this possible skew of clocks. Except for CLKM all clocks
originate on the same circuit board on which said clocks are to be used. For the M memory 60, CLKM, a read-write strobe is located on the same board as the memory's address register. (A software
prohibition against simultaneously setting M and V circumvents the need to specify data hold requirements when writing M.) Since all gates on the same chip have the same delays to within a small
fraction of a nanosecond, clock skew is effectively minimized within each board by generating all clock signals on the same chip(s). Excepting CLKD, every clock signal on the control unit board
follows CLKP by exactly one inverter and one NAND gate, and in each case, these two delays are located on the same chips.
The only exception to the generally synchronous clocking scheme is the D latch 55. This latch breaks the loop from C through multiplexers back to C, and thereby allows data to be rewritten back into
C on the same clock cycle without racing around the address generator loop comprising C and the C modifier.
All of the aforementioned registers, namely, X, Y, Z, V, O, A, T, P, ESK, and G, operate synchronously. All are able to change states only within a few nanoseconds of the rising edge of the master
clock. Furthermore, data memory 60 and/or counter memory 52 also operate in this same phase.
B. DECODING PROGRAM FOR REED-SOLOMON (31,15) CODE
The symbolic language is useful to remind the programmer of the concurrent operations which may be realized by this invention in contrast to sequentially executing computers. The symbolic language
will therefore be elucidated by description and example and then used to state a preferred program executed by the invention for correcting errata in the Reed-Solomon (31,15) code. The identical
program will then be restated in a language more specific to the structure of the invention, while descriptive of the state of the components at the conclusion of each clock cycle contemporary with
the instructions. Finally, the hexadecimal content of the control memory corresponding to the preferred program will be given.
It is understood that a special purpose digital computer, such as the present invention, cannot be economically employed to compile and assemble its own executable code (firmware) from a higher level
language. It is outside the scope of the present invention to describe the syntactical detail and complier code of a crosscomplier which has been constructed to run on a commercially available
general purpose digital computer to compile and assemble executable code for the control memory of the present invention. With the aid of flow charts (FIGS. 6, 7A, 7B, and 7C), and code, the symbolic
source language (Table VIII), the semi-symbolic ROM code (Table IX) and the actual hexadecimal ROM-stored contents (Table X,) one skilled in the programming art will be able to discern the
programming steps and manually trace the program without resort to an execution simulator.
The symbolic source language consists of lines of micro-programmable-code and optional explanatory comments. A line of such code specifies all sub-instructions to be concurrently executed by the
invention. A line of such code comprises fields, here delimited by colons and semi-colons. Non executable comments are enclosed by asterisks and slashes. An executable line specifies sub-instructions
to be executed by the invention's Galois field manipulative circuits, a sub-instruction specifying the content of the G bit of the control unit, sub-instructions loading and transferring content of
the registers of the arithmetic unit, sub-instruction manipulating addresses (ultimately for addressing the data memory) by means of operations in the address generator.
The symbolic language employs several conventions which are here described in order to aid in understanding the specific symbolic code presented below. These conventions are now briefly described.
One skilled in the art of programming will recognize the efficacy of those conventions in the design of a cross-compiler and simulator for operations wherein a conventional computer is employed to
debug programs for the invention.
At the option of the programmer, each line of code may be symbolically labeled in conventional fashion with a symbolic address. These symbolic addresses for executable code appear at the extreme left
of the line of code, delimited by colons from the next adjacent symbolic sub-instruction field. By convention the symbolic addresses are distinguished from named variables by commencing the symbolic
address with the letter "W". These locations in control memory are mnemonically the "WHERE TO" addresses which are ordinarily the addresses to which control transfers as a result of the status of the
G, or "GOTO" bit The conventions and techniques of branching instructions will be described below.
The symbolic sub-instruction fields commence with one of the three operations LF(), FF() or RF(). This field principally indicates the content of the R bus, which content results from the operation
LF() or FF() as previously defined. The pseudo instruction UF() stands for "undefined function" and is a default option emphasizing the loss of the current content of the R bus when no other
reference to the R bus is made in the same line of symbolic source language microcode. The hardware execution of UF() is identical to LF(). The G bit of the control unit is specified in the next
subfield. The default option is G=0 whereby the next sequential instruction will be the next instruction executed from control memory.
Statements encountered in the program herein disclosed, such as "printf (...)", are not relevant to the operation of this invention, being merely instructions to a cross-compiler or execution
Before continuing with a detailed discussion of these fine points it is helpful to present an example which shows the power of the invention, and how the symbolic language corresponds to the hardware
implementation. The example below presents a sequence of six instructions which invert a data word in GF(2^5). This data word initially resides in the Y register. The value of the data word is taken
to be beta. After the following software sub-instruction have been executed, the value beta^-1 will be present in the Y register, unless beta=0, in which case the Y register will also contain 0. The
mathematical explanation for this software is the fact, in GF(2^5), beta^-1 beta^30, and beta^30 may be computed via the following aggregate of individual arithmetic unit sub-instruction.
Y=4 beta
X=4 beta
Y= beta^2
X= beta^2
X= beta^4
Y= beta^6
X= beta^8
Y= beta^14
X= beta^16
Y= beta^30
In the symbolic language used to express this microcode, this program reads as follows.
______________________________________ X=Y /* X=Beta Y=BetaLF( ); SQ( ); Y=R /* X=Beta^2 Y=Beta^2 */UF( ); SQ( ); /* X=Beta^4 /*LF( ); SQ( ); Y=R; /* X=Beta^8 Y=Beta^6 */LF( ); SQ( ); Y=R; /* X=Beta^16 Y=Beta^14 */LF( ); Y=R; /* Y=Beta^30 */______________________________________
In the above example the first line of micronode initializes the X and Y registers to the quantity beta which has been previously taken from the data memory. The second line of microcode produces
beta^2 in two ways; first the R bus will contain beta^2 due to the function LF() which causes a GF(2^5) multiplication and places the product on the R bus. The micro-instruction Y=R reloads the Y
register from the R bus. Thus, Y contains beta^2. The X register is reset to its square at the conclusion of the clock cycle as a result of the SQ() function. The third line of microcode places beta^
4 on the R bus, but this result is not required for the same purpose. The function UF() is here merely written as a convention of the language to emphasize the volatility of R. The SQ() operation
resets X to the square of its current content thus producing beta^4 in the X register. The 4th line of microcode is identical to the first line: the content of Y retained from two cycles previous now
gives, together with the X register, a product beta^2 *beta^4 =beta^6 on the R bus, via the LF(); Y is reset to the R bus thereby acquiring beta^6 and X is reset to its square, thereby producing beta
^8. The 5th line of microcode will be seen to produce beta^16 in X and beta^8 *beta^6 =beta^14 in the Y register. In the final line of code, beta^16 *beta^14 =beta^30 is formed on the R bus and the Y
register set thereto. Thus, an inversion has been completed which consumes only 6 clock cycles.
One will also observe that no memory references have been expended in the above example. In regard to this point, it is clear that inasmuch as all of these operations occur in the arthimetic unit
without requiring addressing activity from the address generator, the address generator is available during this time for independent manipulation.
We now add a second example to the first example, wherein the concurrent operations in the address register effect an exchange of information contained in two counter memory words of the C memory.
This might be accomplished via the following sequence of operations.
this program has the effect of interchanging the values of C5 and C7 while using C13 as a temporary storage location. In practice, it is easier to use symbolic names for the values of C registers.
Hence, if C5 refers to the top register of some array, that address could be given the symbol TOP. If C7 refers to the bottom register of that array, that address could be given the symbol BOT. Since
C13 is temporary, it can be referenced by the symbolic name TEM. In order to emphasize the distinction between symbolic names which refer to constants entering the system through the K field of a
sub-instruction and symbolic names representing content of elements of the C memory, the convention is adopted that symbolic names beginning with K can represent only konstants; symbolic names
beginning with any other letter refer to an element or word of C memory. Furthermore, in order to distinguish those subinstructions which affect the address generator, all such subinstructions begin
with the two characters "A=." Thus, instead of TEM=A, one writes A=TEM=A. Since chains of equalities are executed from right to left in this language, this instruction has the same effect as the
simpler instruction TEM=A.
Hence, the example interchange program, when run in parallel with the inversion program, appears as follows:
______________________________________UF( ); G=0; X=Y; A=TOP;LF( ); G=0; SQ( ); Y=R A=TEM=A;UF( ); G=0; SQ( ); A=BOT;LF( ); G=0; SQ( ); Y=R; A=TOP=A;LF( ); G=0; SQ( ); Y=R; A=TEM;LF( ); G=0; Y=R; A=BOT=A;______________________________________
The control unit instruction G=0 emphasizes that the sequential addresses of control memory are executed.
Turning now to an exposition of control transfer, it is emphasized that the present invention features pipelined branching. This means that every jump instruction is spread out over three
microinstructions, as discussed previously. During the first instruction, the G bit is set to 1. During the next (second) microinstruction, the program counter (P register) is reloaded with the label
found in the K part of the then current ESK control word. For symbolic language purposes this label has a 3-character symbolic name beginning with W (e.g., WXX), as below described. During the next
(third) microinstruction, the microinstruction at location WXX is loaded into the ESK control register. Finally, on the next (fourth) microinstruction, the machine executes the microinstruction at
location WXX of control memory.
The preferred symbolic programming technique includes the use of a hypothetical, that is non-hardware, bit to ensure an orderly progression in symbolic programming without undue complication arising
from the pipelined branching feature. One skilled in the arts of compiler structure and simulation will perceive the utility of these conventions for compiler language purposes. In order for the
programmer (or an execution simulating program) to properly anticipate the effects of the hardware registers G, P, and ESK, the preferred software technique includes defining two additional bits,
called GH and H. These bits may be regarded as artifacts of the symbolic language and are to be interpreted as delayed versions of the G bit. The LF(), FF(), and UF(), subroutines always have the
effect of setting these bits as follows: H=GH; GH=G. These two pseudo sub-instructions are symbolically executed before the new value of G is assigned. Therefore H and GH always contain delayed
values of what was formerly in the G bit. The GH bit corresponds to the state of the P register. The condition GH=0 defines a relative incrementation of the hardware P register such that P now
contains one more than it did at the previous instruction cycle. The condition GH=1 signals the state wherein the hardware P register now contains a new value which was just obtained from the K field
of the ESK register. Similarly, the H bit is defined to reveal whether or not the current contents of the ESK register correspond to an instruction which sequentially followed its predecessor (H=0),
or to an instruction which was arrived at via a jumnp (H=1). Since the programmer contemplates execution of an instruction immediately after it appears in the ESK register, in symbolic language
conditional jumps are based on the value of H, and a conditional jump is executed as the final subinstruction of each line. Thus, following the subinstruction which sets a new value of A, this
technique requires that each line of subinstructions conclude with a symbolic language statement of the form:
If (H) GOTO . . . (destination address)
This pseudo instruction is interpreted as transferring control of the destination address if H=1, otherwise control gong to the next sequential instruction. The address to which this conditional jump
occurs is either a legitimate location (which by convention begins with the letter W, for WHERETO), or a special default location, called HL. Control going to HL is a representation of a serious bug.
This corresponds to an attempt of the hardware to branch into data rather than into a control memory address. Alternatively, it may correspond to a jump to an unspecified address. In either case, a
jump to HL occurs only in the event of a serious software defect.
As a particular example of how branching is implemented, consider the following lines of instructions, which might occur in a program which is prepared to service requests for input or output.
______________________________________UF( ); G=!NS&&!OS;UF( ); G=0 ;UF( ); G=0 ; IF(H)GOTO WMN:______________________________________041624800470x v
If there are no I/0 requests, the program jumps to WMN (a continuation of the main program); otherwise it falls through to an I/0 macro. This symbolic code may be expressed by the following values of
the control signals (the coding is assume to begin at control memory location 300):
______________________________________ROM location SG K______________________________________300 !NS&&!OS --301 0 400302 0 --______________________________________
Notice that the numerical value of the symbolic WHERETO label has been placed in the K field of the microinstruction preceding the line in which it occurs in the symbolic language. This facilitates a
correspondence between firmware and the proper symbolic language simulation of the jumps. The contents of the various relevant hardware registers and pseudo registers as the above microcode is
executed are sequentially indicated as follows (in obvious notation):
______________________________________original E - S - KTime ROM loc G P SG... ...K______________________________________0 300 -- 301 !NS&&!0S --1 301 1 302 0 4002 302 0 400 0 --3 400 0______________________________________
Turning now to FIGS. 6, 7A, 7B, and 7C, the preferred RS decoding program is further described. FIG. 6 is an overview flow chart of the program, indicating the major segmentation and alarm exits.
FIGS. 7A, 7B, and 7C provide additional detail and correlation with the actual program coding. The format of the several FIGS. 7 is that of parallel flow charts.
The right hand block diagram concisely indicates the functional segmentation of the firmware while the parallel left hand flow chart details the transfer of control with respect to the significant
symbolic addresses comprising each program segment. Unconditional branches are distinguished from conditional branches by the heavy lined arrows for the former and light lined arrows for the latter.
The symbolic source language program implementing the flow charts of FIGS. 6, 7A, 7B, and 7C is presented in Tables VIII and IX. One skilled in the art of programming will detect several features of
the listing of these Tables which are artifacts of the process of program assembly for assisting the programmer. For example, the executable instructions comprising the program (Table IX) are
preceded by a list, (Table VIIIA) describing the use of many of the individual locations (counter words) of the C memory 52 and another list (Table VIIIB) numerically defining symbolic konstants (in
decimal). Of course, the symbolic addresses of FIGS. 7A, 7B, and 7C correlate with the symbolic addresses of the preferred program.
TABLE VIII______________________________________ DESCRIPTION OF SYMBOLIC NAMES______________________________________bend End point of bz.bit An index which keeps track of the number of bits converted within program segment CONVERT.bl Bottom left.br Bottom right.bz Bottom zero.cflag Flag to record whether erasure count or error count is being converted.c1 Volatile pointer used only in local context.c2 Another volatile pointer used only in local context.c3 Another volatile pointer used only in local context.del Used by SUGIY to maintain record of relationship between tz and bz.errcount Keeps a record of the number of roots of SIG(z) which have been found by CHORNEY.firstin Location in I/O buffer memory of the first input character.fixed An index used by CHORNEY to keep track of which received digits have been corrected.gam Used together with del by SUGIY.lambot Bottom location of the polynomial LAM.lamj Index to the coefficients of polynomial LAM.lamtop Top location of the array holding coefficients of polynomial LAM.lastin The location in I/O buffer memory which holds the last received character.rvobot Bottom location of the array holding coefficients of the polynomial reverse omega (RVO).rvobotml Rvobot - 1.rvoj Index pointing to the coefficients of the polynomial reverse omega (RVO).rvotop Location at the top of the array holding the coefficients of polynomial reverse omega (RVO).sigbot Bottom of the array holding coefficients of polynomial SIG.sigtoppl Sigtop + 1.tend Endpoint of tz.tl Top left.tr Top right.trvobo Used by CONVERT to record the parity of the number being converted; later needed at end of SUGIY to determine proper endpoints of polynomial reverse omega (RVO).tz Top zero.xi Index which runs through erasure list.xtoppl Pointer to location one past the top of the erasure list.______________________________________ ##SPC1## ##SPC2##
In the RS decoding program of FIGS. 6 and 7, the initial segement, INPUT, responds to the handshake signal NS and accepts each new input character as soon as it is declared ready. After 31 characters
have been accepted, the program exits from the loop in INPUT and proceeds to the next segment.
The next segment, ERASLIST, copies the received word from one array in memory to another, and accumulates a count of the number of erased characters while doing so. This counter resides in the
difference between a pair of words in the C memory, and is expressed in the GF(2^7) arithmetic used by the address generator.
The next segment, SYNDROME, treats the received word as a polynominal and evaluates it at each of the points alpha, alpha^2, alpha^3, . . . alpha^15. The results of these evaluations are the syndrome
characters of Equation 4, which are stored in scratch memory.
The program then sets a counter called CFLAG and loads the two C registers whose difference gives the erasure count into registers T and A.
The next segment of the program, called CONVERT, converts the difference between the GF(2^7) expressions in the A and T registers into a 5-bit natural binary number which is accumulated in the
arithmetic unit. BEND and TEND, the stopping points needed by SUGIY, are initialized as a by-product of this conversion. When the program exits from this segment, the result is in the V register.
The CFLAG is then examined to determine whether the program has just converted the erasure count or the error count. The first time CONVERT is executed, it converts the erasure count and the program
then continues processing software segments sequentially.
The top bit of the V register is inspected to decide whether or not the erasure count is ≧ 16. If it is less, the program continues. Otherwise, a test is made to decide whether or not the erasure
count is equal to 16. If it is not, the program jumps to an ALARM mode indicative of 17 or more erasures; otherwise the program continues.
The next segment of the program, called NEWSS, calculates the coefficients of the generating function for the modified syndrones, according to Equation 10.
The next program segment, called SUGIY, solves the key equation (Equation number 11 above). This is accomplished by careful iterations on a top and bottom "register" each of which is an array
containing two polynomials separated by a comma as explained in Chapter 2 of my text (See FIG. 8.) However, in the present program commas are not placed between elements of the array as is done in
the book, but instead the commas are on locations in the memory, in which are stored zeros. The pointer to the comma/zero in the bottom array is called BZ. Similarly, the pointer to the left endpoint
of the bottom register is called BL and the bottom right endpoint is called BR. The initial format of these registers is shown in FIG. 8A. The software in the section of the program called SUGIY
manipulates the top and bottom registers according to the Euclidean Algorithm for finding continued fractions. At a typical intermediate state during execution of SUGIY the conditions of the pointers
and delimiters are as shown in FIG. 8B. As explained in Chapter 2 of my text, this algorithm terminates when the commas reach the endpoint conditions TEND or BEND. This condition is illustrated in
FIG. 8C. When this termination occurs, the top register contains the coefficients of the error locator polynomial SIG between the top comma TZ and the top right endpoint TR. The bottom register then
contains the coefficients of the polynomial omega between the bottom left endpoint BL and the bottom comma BZ. However, the omega polynomial is stored in reverse order, and is therefore called
reverse omega or RVO.
The next program segment, RVO, sets the degree of Reverse Omega (the error evaluator polynomial which is formed by SUGIY), and also tests to see whether the key equation actually had a solution of
valid degree. If it did not, then the program jumps to the ALARM mode.
Otherwise, the program continues to the next segment, NEWLAM, which constructs the erasure-locator polynomial according to Equation 7.
The next portion of the software is called CHORNEY (a contraction of the names Chien and Forney, who first presented the original theoretical pioneering work on decoding after sigma and omega are
known). However, the version here implemented contains a number of recent innovations. Altogether, CHORNEY contains portions of code for evaluating five different polynomials. In practice, at most
four of these polynomials will be evaluated at any particular point. However, it is instructive to consider all five evaluations. They are called EVAL1, EVAL3, and EVAL5, and EVALEV2 and EVALEV4.
EVAL1 evaluates the polynomial SIG(X). EVAL3 evaluates the polynomial LAM(X). EVAL5 evaluates the polynomial RVO(X). EVALEV2 evaluates the even part of the polynomial SIG(X), and EVALEV4 evaluates
the even part of the polynomial LAM(X). The even parts of SIG and LAM are useful because they correspond to the derivatives of SIG and LAM respectively. This arises from the fact that X times the
derivative of a polynomial in a field of characteristic 2 is simply equal to its odd part. However, if derivatives are evaluated only at points at which the original polynomial vanishes, the only
relevant cases are those in which the odd part and the even part are identical; thus it is sufficient to evaluate the even part of the polynomial instead of the odd part of the polynomial.
The overall structure of CHORNEY is a single main loop which processes each of the thirty-one non-zero characters in GF(2^5) corresponding to the thirty-one locations of the (31,15) Reed-Solomon
code. The location being processed resides in register X. First, CHORNEY evaluates the polynomial SIG(X) according to Equation 12. If it is zero, the control branches to location WLR, which is
prepared to deal with an error which has been found. If SIG(X) is not zero, its value is stored and the program compares X with the next item on the erasure list to see whether X is an erased
location. If it is not, then the program returns to the beginning of its major loop and considers the next value of X.
If the current value of X has been found to be in the location of an error, the program then evaluates the even part of SIG(X) as well as LAM(X) and RVO(X) and determines the value of the error via
the following formula, which is a refinement of Equation 13. ##EQU11##
On the other hand, if the program deduces that the current value of X corresponds to an erasure location, then the program determines the value of this erased character via the formula ##EQU12##
Part of the very high speed of the present invention is due to the fact that it is able to evaluate all of these relevant polynomials so quickly. The evaluations of EVAL1 consist of a one-cycle inner
loop, during which most of the components of the invention are active. The evaluation of the even part of a polynomial uses a two-cycle inner loop. CHORNEY exits after all 31 locations have been
"corrected" in the copy of the received word which is stored in buffer memory. A count of the number of roots of SIG which CHORNEY actually found is also stored in the C memory. After exiting from
CHORNEY, the number of roots of SIG in GF(2^5) is compared with the degree of SIG, and the ALARM mode is entered if there is a discrepancy.
If there is no discrepancy, then CFLAG is reset, T and A registers are loaded with the two elements of GF(2^7) whose value is the number of errors, and the program jumps to CONVERT.
CONVERT obtains the number of errors as a natural binary number in the V register. Upon exit, a test of CFLAG causes the program to jump to the OK output segment, where the program prepares to emit
the corrected codeword residing in buffer memory.
If any of the tests for correctness fail, the program enters the ALARMS segment. Since this can occur when some incorrect "corrections" have been made on the copy of the received word in buffer
memory, the ALARMS segment turns on the ALARM output and then prepares to emit the original unchanged copy of the received word residing in scratch memory as its output.
Both ALARMS and Output=OK segments are followed by OUTPUT, which emits a header of three characters (one ALARM word, then the erasure count as a natural binary number, then the error count as a
natural binary member), then the first 15 characters of the corrected codeword (if the decoding was successful) or the first 15 characters of the original received word (if the ALARM condition is
one, signifying detection of an errata pattern more severe than the code is capable of correcting).
After finishing OUTPUT, the program returns to INPUT to await the next word.
From the program set out in Table IX, it will be noticed that the address generator 5 may be utilized for certain purposes quite apart from furnishing addresses for the data memory. It will be
recalled that shift register incrementation and decrementation, modulo 2^7 -1, can be performed in the address generator and equality may be tested and furnished as output together with the content
of any counter word of C memory. As these operations may be executed concurrently and independently of operations in the arithmetic unit, the address generator may be employed for certain scratch pad
and logic functions auxiliary to the operation of the arithmetic unit. An example of such use appears in the symbolic source code at location "WIN" where an element of C memory called "ERRCOUNT" is
incremented to totalize the number of errors corrected; after processing a codeword, control passes to location "WIN" where the Test or T register is initialized to the degree of SIG(z) and the
content of "ERRCOUNT" is tested for equality therewith. Non-equality indicates an alarm condition.
This invention features inner loops of length one. Thus all of the Chien search performed by the decoder is done in a single physical instruction, namely:
WE1:FF();G=A!=T&&G; Y=R; Z=M[A];
v=r; a=++c2; if(h)goto we1;
the hardware may execute this line of code many times consecutively. G remains 1 until A attains the same value as T. Then G becomes 0 because the logical condition A!=T is no longer true.
Thereafter, G remains zero, because G=(A'=T) && G=1 && 0=0. Hence control falls out of this one-instruction loop after the third execution following the instruction at which A becomes equal to T.
There also occur in the program at Table IX some loops of length 2, as for example:
__________________________________________________________________________WE4:ff(); G=A==T ; Y=R; V=R ; A=++C2 ;IF(H)GOTO WV4;LF(); G=A!=T&&!G ; Y=R; Z=M[A];V= R; A=++C2 ;IF(H)GOTO WE4;WV4-UF(); G=O ; A=LAMTOP-1 ;IF(H)GOTO HL:UF(): G=0 ; T=A; A=KFACT ;IF(H)GOTO WW4;__________________________________________________________________________
This loop evaluates the even part of a polynomial. Control circulates around the loop of length two located at WE4 and the instruction following it, until the A register attains the value equal to T.
On the third instruction following the occurrence of that equality, the program exits from the loop, either by falling out or by jumping to WV4, depending on the parity of T and the initial parity of
C2 (an element of C memory).
Following the prescribed software technique outlined above, execution of the instruction at the line following WE4 sets H=1 and a branch back to WE4 will therefore occur due to the logical conditions
determining G. On the final execution of the line at WE4, H=1 due to the condition A==T and a branch occurs outside the loop to WV4. When the initial contents of C2 and T have the other phase of
parity, the final execution within the loop occurs on the line following WE4. Notice, however, that as the fallout occurs, H=0 and GH=0, but G=1. This G=1 condition creates another jump on the second
instruction after the fallout due to the pipelined branching of the invention. When that line, beginning UF(), is executed, a jump is initiated. Since in fact this jump is not desired by the
post-loop software, it is nullified by making its Whereto address (WW4) equal to the address of the next sequential instruction. This tactic of nullifying the effect of a previous execution of G=1 by
executing a (harmless) jump to the next sequential instruction is used only rarely at several points in the program of Table IX. All next instruction Whereto addresses conventionally begin with two
W's in order to distinguish these pseudo jumps from all real jump addresses which conventionally begin with only one W.
The semisymbolic listing appears in Table X below. The extreme left field contains the symbolic address of the symbolic source language text of Table IX. The next field, under the heading "PC="
contains the absolute address in control memory in the form "rom[pqr]= which is interpreted as "read only memory address pqr contains" where pqr is the absolute hexadecimal address in control memory.
The content of the particular address (that is the ESK word) is then displayed in segments under headings here briefly indicated. The select X (sx) character may be y, respectively, z, q or-, for
selecting the indicated register words Y, Z, X^2 or don't-care. The binary character !ix determines the content of the enable bit EX which places the result of any cycle in register 63 unless
inhibited, at the end of that cycle.
The select R (sr) bit is next displayed: the value 0 selects R=X*Y and the value 1 selects R=X*Y+Z while-indicates a default condition. (Note that this is the SR0 bit; SR1 is not used by the
preferred (31,15) Reed-Solomon decoding program.) The select Y character is next displayed: the source of data for Y from data memory is indicated by m, and from the R bus, by r. The binary enable Y
(ey) character follows: 0 inhibits, and 1 enables the writing onto Y register 64. The bits select V, (!sv), enable V (ev), enable 0, (not inhibit 0, symbolically !io) and enable Z (!iz) occupy the
next four columns. (See Table III and IV for definitions.) The select G hexadecimal character (sg) is next given. The select A character follows in the notation below:
k A loaded from k
P a loaded from content of C[i] incremented
m A loaded from content of C[i] decremented
O a loaded from content of C[i].
The enable C bit (ec) and enable M bit (em) follow in an obvious notation. The 5 bit C memory address (sc) is next given in symbolic notation. The next column, headed ed, contains the enable D bit
(ed). (This bit is identical with the function of enabling the A register and is elsewhere called "EA" for notational convenience. The 1 state for ed enables new data to A register 50. The next
column displays the T enable bit et=1 enables new data to T register 51. The final column displays the content of the konstant field in symbolic form. ##SPC3##
Table XI contains the hexadecimal (hex) images of the coding given in tables IX and X. The inclusion of the hex listing is thought to better guide both the artisan skilled in computer design and the
artisan skilled in programming by providing a teaching of the invention at a common level. The hexadecimal code given below is the actual content of control memory 40. The format used here lists five
hexadecimal characters, then a sixth hexadecimal character expanded into four binary bits with intermediate spaces, then the seventh and eighth characters in hexadecimal. The use of binary notation
for the sixth character is intended to help the reader split off the leftmost bit, which is part of the SC field, and the rightmost bit, which is part of the K-W field. ##SPC4##
Operational Tests of RS Decoding Program
In operation as a decoder (error corrector) for the (31,15) Reed-Solomon code, the present invention achieves very high speed. The time required to decode any particular 31-character received word is
independent of the transmitted codeword, but it does depend upon the errata pattern. The worst case is 3555 clock cycles between the end of the INPUT loop and the beginning of the output character
stream. With the clock running at the designed speed of 120 nanoseconds per cycle, the total (worst case) decoding time is thus only 0.426 milliseconds.
Table XII lists some of the many errata patterns with which the decoder of the present invention has been tested. Each errata pattern is listed in two rows. The first row gives the pattern of errata
in the message characters; the second row gives the patter of errata in the check characters. Each six-bit character is taken as a decimal number between 0 and 63, inclusive. The erasure indicator is
the most significant bit. Following each pair of rows presenting the errata pattern, we have listed the output of the present invention. The first number gives the value of an external counter
measuring the number of clock cycles between the exit from the INPUT loop and the appearance of the first output character. This first output character gives the alarm status (32 means decoding was
successful; ≧48 means decoding was unsuccessful.) The second output gives the number of erasures. The third output gives the number of errors which is presented as zero when the alarm condition
reveals that the precise value of the number of errors is unknown because the error-correcting capability of the minimum distance decoding algorithm is exceeded. The next 15 outputs reveal the
decoded information characters.
Tables XIII and XIV summarize the running time of "typical" errata patterns as a function of the number of erased characters and the number of erroneous characters. While there are additional small
data-dependent variations among the errata patterns of a particular type, no combination of such variations can exceed the worst-case maximum of 3555 cycles. ##SPC5##
TABLE XIII______________________________________RUNNING-TIMES FORCORRECTABLE ERRATA PATTERNST=0 T=1 T=2 T=3 T=4 T=5 T=6 T=7 T=8______________________________________S= 0 3515S= 0 3538S= 0 2068 2238 2404 2579 2761 2950 3140 3343 3555S= 1 2163 2336 2505 2683 2868 3060 3253 3459S= 2 2140 2317 2490 2672 2861 3057 3254 3464S= 3 2242 2422 2598 2783 2975 3174 3357S= 4 2232 2416 2596 2785 2981 3184 3388S= 5 2350 2537 2720 2912 3111 3294S= 6 2345 2536 2723 2914 3122 3309S= 7 2479 2673 2863 3057 3263S= 8 2488 2691 2880 3088 3276S= 9 2643 2844 3041 3242S=10 2660 2865 3066 3276S=11 2821 3034 3233S=12 2856 3068 3276S=13 3033 3248S=14 3072 3296S=15 3270S=16 3334S=16 3339______________________________________
TABLE XIV______________________________________RUNNING-TIMES FORUNCORRECTABLE ERRATA PATTERNS "ERROR" INSUFFBIGS SINGULAR =ERASE ROOTS______________________________________S= 0 3035S=0 3102S=0 3150S=0 3198S=1 2526S=1 2547S=2 2252S=3 2413S=4 3073S=5 2283S=6 3010S=7 2157S=8 3081S=9 2037S=10 1766S=11 1919S=12 1656S=12 3114S=12 3119S=12 3174S=12 2090S=12 2226S=12 2238S=12 2242S=12 2374S=12 2874S=12 2894S=13 1801S=14 2558S=15 1691S=17 871S=18 876S=31 949______________________________________
Many patterns of 8 errors and 0 erasures attain the worst-case time of 3555 clock cycles. However, some errata patterns are decoded slightly faster for various reasons. Some time savings accrue when
the error pattern is such that accidental leading zeroes occur in the calculations in SUGIY. Shortly after symbolic ROM location WBG thee is a data-dependent branch to WLQ, and each time this branch
is taken, 20 cycles are bypassed. However, if the branch is taken an odd number of times, the parity of the exit from SUGIY is changed, thereby resulting in a loss of 3 clock cycles.
Errata patterns containing erasures cause reductions both in the running time of SUGIY and in the running time of CHORNEY. This happens because the presence of erasures forces an earlier termination
threshold for SUGIY, necessarily yielding a candidate error locator polynomial of smaller degree. This candidate error locator polynomial then also facilitates a faster running time for the Chien
search conducted by CHORNEY, both because SIG(z) has fewer roots and because it has lower degree. Thus, the running times of SUGIY and the Chien search part of CHORNEY both decrease as the number of
erasures increases. However, the running times of subprograms CONVERT, which converts the number of erasures to binary; of NEWSS, which modifies the syndrome; and of NEWLAM, which calculates the
erasure polynomial lambda as well as the error evaluation part of CHORNEY: all increase as the number of erasures increases. In fact, the growth of NEWLAM, like the decline of SUGIY, both vary as the
square of the number of erasures. Thus, assuming many errata (a number near to or greater than the error-correction capability of the code), the running time is a quadratic function of the number of
erasures. This quadratic function assumes a minimum value within the interior of the relevant range of its argument (0≦≦16). Hence, although the worst case is 0 erasures and 8 errors, errate patterns
containing 16 erasures and 0 errors run slightly longer than errata patterns containing 10 erasures and 3 errors.
The most common type of errata pattern with more than 8 errors and no erasures causes SUGIY to find a potential error-locator evaluator polynomial of degree 8, but this polynomial fails to have eight
roots in GF(2^5). When attempting to decode such an errata pattern, SUGIY runs for the same length of time as it would for a pattern of 8 errors. However, CHORNEY runs faster because it does not
evaluate the unfound roots of the error locator polynomial. Each unfound root saves 48 cycles in CHORNEY and an additional 69 cycles are saved because the number of errors does not have to be
converted to binary when this number is not known. Thus, a typical 9-error pattern, containing no erasures, yielding no accidental leading zeroes in SUGIY, and giving a candidate error-locator
polynomial having degree eight but only one root in GF(2^5), has a running time of 3555-7×48-69=3150 (cycles). If the candidate error polynomial has two roots, its running time is 3198; if the
candidate error polynomial has zero roots, its running time is 3102. In either case, an additional savings of some multiple of 20 cycles (possibly minus 3) are saved if accidental leading zeroes
occur in SUGIY.
When there are more than 8 errors, there are generally no constraints on the error-locator polynomial. Each value which such a polynomial assumes is an independent sample from a uniform distribution
over GF(2^5), so that each value is zero with probability about 1/32 and the expected number of roots is therefore about 1. If a polynomial of degree 8 has more than 6 roots, it must have 8, so the
worst-case running time for any uncorrectable errata pattern is 355-2×48 -69=3390 (cycles). Since a random polynomial of degree 8 is very unlikely to have 6 roots, this case is very rarely attained
and has never occurred among the random test cases run to date.
Other alarm conditions yield faster running times. The shortest running time occurs when there are 17 erasures, in which case the program aborts and begins output after only 871 cycles. More erasures
entail slightly longer running times because of the additional computation required to convert the larger erasure count to binary. Another early abortion occurs when the key equation is singular, and
has no solution. This condition forces a program termination immediately after SUGIY, bypassing CHORNEY entirely. Even when conditioned upon the assumption that the errata pattern is uncorrectable,
this type of alarm condition is unlikely when the number of erasures is even, although it is common when the number of erasures is odd. In fact, when the number of erasures is odd, a single nonzero
resultant is sufficient to trigger this alarm, and that happens with probability about 31/32=97%.
Another alarm condition occurs when CHORNEY finds a root of SIG(z) which coincides with an erasure location. This phenomenon is common only when the number of erasures is large and even. In order to
test out the ALARM software package, several extra noise words containing twelve erasures and many errors have been tested. Termination due to coincident "errors" and erasures is not uncommon among
errata patterns of this type, and the variation in running times reflects some of the variation among the 31 possible locations in which the aborting coincidence might occur.
C. Simultaneous Linear Binary Equation Solver
Although this feature is not used in the preferred embodiment of the above described firmware for decoding the (31,15) Reed-Solomon code, special instructions to facilitate the easy and fast hardware
implementation of algorithms to solve simultaneous linear binary equations are included in the existing design. This feature is particularly useful for decoding BCH and/or Reed-Solomon codes of block
length n and distance d when log[2] n is relatively large compared with d. In these circumstances, a relatively large part of the decoding time of the conventinal algorithm would be devoted to
finding the roots of the error-locator polynomial, SIG(z). When n is large, it may impractical to test all nonzero field elements. As explained in chapter 13 of my text, a better approach is to
compute an affine multiple of SIG(z). One then has a system of simultaneous binary equations:
where a and the rows of the square binary matrix L are words in the data memory of the present invention. The error locations are a subset of the row vectors β which satisfy the above equation.
If the Galois field has order 2^m, the above matrix equation represents a system of m simultaneous linear binary equations in the m unknown binary coefficients of β. If we annex an additional
component of value 1 to β, the annex row u to the bottom of L to form the (n+1)×n binary matrix A, we have
this system of equations may now be solved by column operations on the matrix A.
However, since matrix A is obtained via calculations which yield its rows (as elements in GF(2^m), and therefore as words in the decoder's data memory), it is much more feasible to do row operations
than column operations. Fortunately, this invention includes two special instructions, RF(); and G=TV(); which facilitate a very fast solution of these simultaneous linear binary equations.
A flow chart of the software to column-reduce the A matrix, well known in prior art, is shown in FIG. 9.
Consider the matrix equation which occurs on page 243 of Algebraic Coding Theory: ##STR1##
By cycling columns leftward we may ensure that the pivot element lies in the first column and the reduction precedes as follows:
EXAMPLE ##STR2##
At this point the matrix is column-reduced, but not yet echelon. Only permutations and addition of the identity are needed to finish the solution: ##STR3##
When this prior-art algorithm is implemented by one skilled in programming practice, a major part of the running time is consumed by branches deciding whether or not to add one column into another.
There is also some problem of implementing column operations on hardware which stores words in rows. A conventional solution provides hardware to implement AND, XOR and OR logic on full words, and to
perform shifts on the bits of a word. This solution entails additional integrated circuit components as well as additional running time costs, all of which are avoided by the present invention, which
is implemented in the RF() instruction.
We will now describe the RF() feature of this invention.
The RF() instruction allows the decoder to base the selection of a choice between the affine function and the linear function of the Galois field arithmetic unit on the least significant bit of the
result rather than on any specified OP code. This allows the column operations need to solve simultaneous linear binary equations to be implemented on a row by row basis, at high speed, without any
branching overhead costs. The same program also uses the already existing Galois field arithmetic unit to effect the left cyclic shifts needed to bring the next column into the top bit position so
that it can be tested to find the next pivot element.
To illustrate the use of the RF() instruction, we describe how our decoder could use it to perform the column reduction described in the previous example.
To initialize, the row containing the pivotal element is multiplied by alpha, a 1 is added, and the resulting quantity is stored and loaded into the Z register:
the X register is loaded with alpha. Then, each row of the matrix is pipelined from memory to the Y register, the RF() subinstruction is executed, and the result is loaded into the V register and
stored back into memory. When a row containing a leading zero is loaded into Y, the result of the RF() instruction is a left shift of that row modified in the appropriate columns to account for the
stage of the matrix reduction now being performed. The relevant instructions in the inner loop are these:
______________________________________ Y = M[A];LF( );RF( ); V = R M[A] = V______________________________________
A single pass through the inner loop which executes each of the above instructions as a register content points to successive rows of the matrix transforms ##STR4## The value of the next pivot row
could be obtained as the rows of this matrix go back into storage. It is [ 11010 ]. The outer loop must reinitialize Z to
Z=α[11010] +[00001] =[10001] +[00001] =[10000] .
the next pass through the inner loop which executes the RF() instruction on the Y register yields ##STR5## after resetting Z to α[11000] +[00001] =[10100] , the next pass yields ##STR6## With Z
arbitrary, the next pass yields ##STR7## After resetting Z to α[10100] +[00001] =[01100] , the final pass yields ##STR8##
By providing the means to select the linear or affine function depending on the value of the bit R0, this invention also provides the means to test any single bit (or any linear combination of bits)
in a very few instructions. In general, when LF() is executed, the value of R0 is
______________________________________RO = Y0 X0 + Y1 X4 + Y2 X3 + Y3 X2 + Y4(X1 + X4)______________________________________
Hence, to test Y3, load [00100] into the X register, execute LF() and then RF(). To test Y4, load [00010] into X and then execute LF() and RF(). Similarly, to test Y1, load [10010] into X and then
execute LF() and RF().
The RF() feature proves very useful for permuting columns. For example, to permute the first and second columns in a particular row, either add or not add into that row the word [11000] , depending
on whether or not the sum of the first and second components are different or alike. This decision may be made by multiplying the original row by the appropriate constant (which are [11010 ]). Thus,
the following program permutes the first and second columns of the word located at M[CO]. Here, 5-bit constants are specified explicitly in binary, preceded by the letter K:
__________________________________________________________________________ Z = M[A]; A = K11000 ; A = K11010 ; Y = m[A]; A = C0 ; X = Y Y = M[A]; A = K00001;LF; X = Y; Y = M [A];RF ( ); V = R; A = C0 ; M[A] = V ;__________________________________________________________________________
One skilled in the programming art is able to expand these illustrations into a very fast program for solving simultaneous linear binary equations.
The major advantages of the RF() amd TV() features are that they provide great speed and program flexibility with virtually no hardware. This is accomplished by embellishing the Galois field
arithmetic unit so that it accomplishes data-dependent additions, compatible with its intrinsic shifting capability.
One skilled in the art will immediately recognize that many changes could be made in the above construction and many apparently widely differing embodiments of this invention could be made without
departing from the scope thereof. For example, the choice of a cyclic code other than the Reed-Solomon (31,34) code can be easily implemented; the utilization of the apparatus for encoding of data
streams may be effectuated without substantial departure from the structure described: the use of shift-register sequences for memory addressing may be applied to a wider scope of operations.
Accordingly, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative only and not in a limiting sense.
|
{"url":"http://www.google.com/patents/US4162480?dq=7,013,345/","timestamp":"2014-04-23T15:58:37Z","content_type":null,"content_length":"235336","record_id":"<urn:uuid:136d8651-2185-4d2e-b6de-535b53065e9f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why I don't like math
You might think it is a contradiction to be a computer scientist and to dislike math. It isn't that I dislike ‘mathematics’. I like the logic, symmetry, and beauty of the ideas. What drives me up the
wall is the
Mathematicians like to think they are rigorous. Ha! If you haven't tried to cast your equations into a working computer program I can guarantee you that you left something out. Mathematicians like to
omit ‘unnecessary’ detail. I'm a fan of abstraction, but you have to use it in a disciplined manner. If you simply discard the boring parts of the notation, you'll end up with an ambiguous mess. Of
course humans can handle ambiguity more gracefully than computers, but it seems to me that if a computer cannot make sense out of what is being expressed, then a poor human like me has very little
chance of success.
So here's my current frustration: the distribution I've been studying has an interesting quantile function.
And I'm already in trouble. This notation has some extra baggage. Let me recast this as a program.
(define (make-quantile-function alpha beta)
(lambda (p)
(* alpha (expt (/ p (- 1 p)) (/ 1 beta)))))
are tuning parameters for this kind of distribution.
is essentially the median of the distribution,
determines whether the curve descends steeply or has a long tail. The quantile function takes an argument
which is a number between 0 and 1. If you give it an argument of .9, it will return the value of the 90th percentile (that value of X where 90 percent of the distribution is below X). If you give it
an argument of .5, you'll get the median. If you give it argument of .25, you'll get the first quartile, etc. It's a useful way of describing a distribution.
The reason the fancy equation had a superscript of -1 is because the quantile function is the inverse of the cumulative distribution function. The reason the function was named F
is because they named the cumulative distribution function F. The cumulative distribution function could be defined like this:
(define (make-cumulative-distribution-function alpha beta)
(lambda (x)
(/ 1 (+ 1 (expt (/ x alpha) (- beta))))))
We can show that these functions are inverses of each other:
(define q (make-quantile-function .9 1.3))
;Value: q
(define cdf (make-cumulative-distribution-function .9 1.3))
;Value: cdf
(q (cdf 3))
;Value: 3.000000000000001
(q (cdf 1.8736214))
;Value: 1.8736213999999998
(cdf (q .2))
;Value: .19999999999999996
(cdf (q .134))
;Value: .13399999999999995
I mentioned earlier that I thought the quantile function was particularly interesting. If you've read my past posts, you know that I think you should work in log-odds space whenever you are working
with probability. The quantile function maps a probability to the point in the distribution that gives that probability. The 90th percentile is where 9 times out of 10, your latency is at least as
good, if not better. So what if we convert the quantile function to a `logile' function? In other words, take the log-odds as an argument.
(define (log-odds->probability lo)
(odds->probability (exp lo)))
(define (odds->probability o)
(/ o (+ o 1)))
(define (quantile->logile q)
(lambda (lo)
(q (log-odds->probability lo))))
(define (make-logile-function alpha beta)
(lambda (p)
(* alpha (expt (/ p (- 1 p)) (/ 1 beta))))))
(define logile (make-logile-function .9 1.3))
(logile 3)
;Value: 9.046082508750782
(logile 1)
;Value: 1.9422949805536014
(logile 0)
;Value: .9
(logile -3)
;Value: .0895415224453726
But let's simplify that computation:
(lambda (p)
(* alpha (expt (/ p (- 1 p)) (/ 1 beta)))))
((lambda (p)
(* alpha (expt (/ p (- 1 p)) (/ 1 beta))))
(log-odds->probability lo))
(* alpha (expt (/ (log-odds->probability lo)
(- 1 (log-odds->probability lo)))
(/ 1 beta)))
(* alpha (expt (/ (odds->probability (exp lo))
(- 1 (odds->probability (exp lo))))
(/ 1 beta)))
(* alpha (expt (/ (/ (exp lo) (+ (exp lo) 1))
(- 1 (/ (exp lo) (+ (exp lo) 1))))
(/ 1 beta)))
;; after a lot of simplification
(* alpha (exp (/ lo beta)))
(define (make-logile-function alpha beta)
(* alpha (exp (/ lo beta))))
My ‘logile’ function is even simpler than the ‘quantile’ function. Now if I want to express the
in log space as well, I simply take the log of the logile function:
(log (* alpha (exp (/ lo beta))))
(+ (log alpha) (/ lo beta))
Which you will recognize is the equation of a line.
Well so what? With sufficient algebra, any function that has two independent parameters can be turned into a line. However, the cumulative distribution function is an integral, so I have an intuition
that there is something interesting going on here. I found some work by Steinbrecher and Shaw where they point out that “Quantile functions may also be characterized as solutions of non-linear
ordinary differential equations.” Obviously, the logile function can also be characterized this way, too. I'm hoping that looking at the distribution in this way can yield some insight about the
And this is where I run into the annoying math. Check out these two equations:
By the first equation, Q is obviously some sort of function (the quantile function). H therefore operates on a function. By the second equation, the argument to H, which is called x, is used as the
argument to f, which is the probability distribution function. f maps numbers to numbers, so x is a number. The types don't match.
So what am I supposed to make of this? Well, I'll try to follow the process and see if I can make sense of the result.
10 comments:
The first equation has an implicit "at p" (except that the variable name itself is omitted), so it says
Q''(p) = H(Q(p))*(Q'(p))^2
H is a function from numbers to numbers, and it is defined by the second equation. One might say that "H(Q)" in the first equation represented composition.
It's a very typical notation shift. Mathematicians call their functions f, g, Q... whereas physicists call them f(x), g(x), Q(x) and when there is need to evaluate at a point x_0 they write
f(x=x_0) or f(x)|_{x=x_0} or simply f(x)|_{x_0}
Well Q is a function, but you could also call a function a vector in a Hilbert Space. And x can be a function (or vector) in a Hilbert Space too (a constant function if you wish). Then if f:H ->
H rather than f: R->R then the notation wouldn't be a problem. I think math is weird/interesting because the further you go in courses the more everything just melds together as just examples of
abstract objects.
"We can show that these functions are inverses of each other"
At this point, mathematicians reading the blog had their heads explode. We don't show inverses by anecdote!
agreed. that wouldn't even pass for a heuristic :-)
can someone say composition of functions?
Function composition - through variable substitution :O
Even Eric Cartman knew that one :P
The post should instead have been titled "Why I don't like math notation".
A - An expression contains no relation (i.e. no =;<;>;<=;>=;etc...)
B - An equation (including inequalities) contains some relation between two expressions.
C - A function associates a unique value to each input of a specified type. http://en.wikipedia.org/wiki/Function_(mathematics)
I think printing the following and sticking it to a wall is a must for all math wiz wannabe's:
The first is an equation, and the second is a function. 'x' is a variable, which represents a number. What do you mean by 'types?'
|
{"url":"http://funcall.blogspot.com/2009/07/why-i-dont-like-math.html","timestamp":"2014-04-19T22:30:43Z","content_type":null,"content_length":"74934","record_id":"<urn:uuid:65872ef2-0ca0-4772-bafa-88d87efd9012>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A 1420-kg car is being driven up a 7.34° hill. The frictional force is directed opposite to the motion of the car and has a magnitude of 545 N. A force F is applied to the car by the road and propels
the car forward. In addition to these two forces, two other forces act on the car: its weight W and the normal force FN directed perpendicular to the road surface. The length of the road up the hill
is 240 m. What should be the magnitude of F, so that the net work done by all the forces acting on the car is 203 kJ?
i found the height and then proceeded to find the work of F (keeping the variable in the equation) and then i found the work of the frictional force by multiplying the given force and the height.
Then i broke down the weight into two components, x and y. since the work of the normal force and the y component of gravity are both 0 (considering the fact that cos90 = 0) i disregarded those. i
then found the work of the y component of gravity by m*g*h... i added all 3 works to equal the net work given and then i solved for F but everytime i get the wrong answer
|
{"url":"http://www.physicsforums.com/showthread.php?t=95282","timestamp":"2014-04-18T23:16:59Z","content_type":null,"content_length":"25050","record_id":"<urn:uuid:0a9b2307-c2ab-4af8-89fe-ea150976841a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Goes Pop!
Car Talk Mathematics
Happy 2012! I hope you all has a restful and calorie-filled holiday. For my part, the holidays typically involve a fair amount of driving, and ergo, a fair amount of listening to podcasts. To that
end, I’d like to ease into a new year of mathematics by considering a simple puzzle, one which was featured recently on NPR’s Car Talk. If you are not fortunate enough to have listened to this show,
it centers on two brothers from Cambridge, Massachusetts, affectionately known as Click and Clack, the Tappet Brothers (though their real names are Tom and Ray Magliozzi). Each week, in between a
fair amount of good-natured banter, the brothers field a variety of automotive questions from callers nationwide.
Most significant to our present discussion, however, is Car Talk’s weekly diversion known as . . . → Read More: Car Talk Mathematics
A few weeks ago, I was downtown with the missus when we stumbled upon the Bottega Louie Restaurant and Gourmet Market. The window display was enticing, so we went inside and discovered, among other
things, a bakery. This one’s focus was the macaron, one of many sweets aiming to topple the cupcake as the trendiest dessert, and so for a town obsessed with the current trends, it is no surprise
that Los Angeles is home to several similarly specialized patisseries.
Though smaller than the average cupcake, the macaron is also more labor-intensive, and is therefore frequently on the more expensive end of the confectionery spectrum. The macarons at Bottega Louie,
for example, will run you $1.75 each.
If you need a sweet fix, though, a single macaron may not be enough. Anticipating such a first-world problem, Bottega Louie also offers boxes of macarons for . . . → Read More: Math of Macarons
This week, Steve Carell uttered what may well be his last “That’s what she said” as Michael Scott, boss extraordinaire on the US version of The Office. Though the show will go on, Michael Scott has
(spoiler alert) left Pennsylvania for Colorado and the love of his life. In preparation for this departure, the show has spent the last several episodes easing the audience through the transition.
From a mathematical standpoint, though, there are a couple of inconsistencies. Michael makes no secret of the fact that he has worked for the company for 19 years. His employees take this loyalty to
heart, and in Michael Scott’s penultimate episode, “Michael’s Last Dundies,” they surprise their boss with a song parody of the Rent song “Seasons of Love,” which pays homage to such a long period of
service. Below is the relevant clip – if you don’t have access to Hulu, you . . . → Read More: Dunder Math-lin
If you follow “Weird Al” Yankovic on Twitter (and really, why wouldn’t you?), you may have noticed this picture, which he posted earlier this week along with the tweet “Wow, waffles for just .25
cents? That means I can get 400 for a dollar!!”
Kudos to you, Mr. Yankovic, for spotting what I can only assume to be a mathematical error of the type we’ve seen before. If this music thing doesn’t pan out, maybe you can work for Verizon.
Then again, maybe it’s not an error, in which case I can only hope that Weird Al wastes no time in naming this establishment, so that I can patronize it before they catch wise.
(Thanks to Nate for sending this my way!)
Last year, Professor Steven Strogatz of Cornell University wrote a series of op-eds for the New York Times that discussed the presence of mathematics in unlikely places. I discussed one of these
columns here. Now, either those articles were well-received, or Professor Strogatz is well-connected, because this year he’s back in the Times with a much more ambitious series of articles. This time
around, Strogatz is attempting to “[write] about the elements of mathematics, from preschool to grad school, for anyone out there who’d like to have a second chance at the subject.”
Preschool to grad school is a significant amount of ground to cover, but thus far Strogatz has used his articles to assault this goal with gusto. To date, he has tackled counting, patterns in
addition, negative numbers, division, and basic high school algebra. This doesn’t really do justice to his content, though. Along the way he . . . → Read More: Math in the News(paper)
First, let me begin by wishing a happy 2010 to you all. If you celebrate the holidays the way I do, then the past few weeks have seen you spending time with friends and family. And if you really
celebrate the holidays the way I do, then some of that time with friends and family will have been spent with mathematical puzzles.
Very recently I was with a group of friends, discussing all that would come to pass in this new year. One friend, whose anonymity I will preserve by referring to him only as “Smith,” was in the
enviable position of being the only one among us whose age divided the current year (I won’t embarrass him by revealing his age, but given that it’s a divisor of 2010, this certainly restricts the
possibilities). Once we realized this, it became natural to ask how common an occurrence this should . . . → Read More: A Mathematical New Years Game
I apologize for my silence over the past few weeks – I have been out of the country learning math and eating pancakes. While I get back into the swing of things, I’ve got a couple of points to
mention that relate to earlier posts regarding our collective inability to correctly use the decimal point.
The first is a picture from a flyer advertising maid service. Here’s the ad (sent in to me by a dedicated foot soldier in the army that is my readership, a.k.a. my mother):
Names and phone numbers have been cropped out to protect the innocent. But in a case such as this, are there really any innocents? Although we’ve seen decimal point errors on signs before, this one
is arguably the most egregious of all. Presumably the intended price is $100 – if that’s the case, then not only is the decimal point . . . → Read More: Decimal Point Fail, Ctd
Last week, I went to a number theory conference in Utah. The conference was very good, and I learned quite a lot, which I suppose is the goal of any such conference. The location of the conference
itself was also quite nice – it was close to the mountains, a lake, and the home of Blendtec, famous for their “Will it Blend” series of videos.
As you might expect, most of what I learned on this conference pertained to number theory. However, there were lessons outside of this sphere of knowledge as well. The one lesson I will share with
you is best encapsulated in this picture:
That’s right – Ghiradelli now makes salad.
It was my friend Jack who pointed out the placement of the decimal point. Apparently the people who work in cafeterias in Utah are the same people who work at Verizon call centers. If you ever . . .
→ Read More: The Cheapest Salad Bar in the World
Math of Macarons
Dunder Math-lin
|
{"url":"http://www.mathgoespop.com/tag/arithmetic","timestamp":"2014-04-16T16:05:54Z","content_type":null,"content_length":"100677","record_id":"<urn:uuid:9a7398fd-ac6d-4161-950e-edd02a425d19>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Year, Month, Week, Day, Hour, Minute
5. In which month is in a leap year ONE day added?
9. How much is 1 hour 40 min + 30 min
10. It is now 9:25amWhat will the time be in 3 hours and 55 minutes?
11. How many minutes are in a quarter of an hour?
12. What is the time in the 24 hour format at 3:25pm
13. What is the time in the 12 hour format at 18:55
15. 4,722 seconds are how many hours, minutes and seconds?
16. 3,658 minutes are how many days, hours and minutes?
|
{"url":"http://www.proprofs.com/quiz-school/story.php?title=year-month-week-day-hour-minute-second","timestamp":"2014-04-18T03:01:43Z","content_type":null,"content_length":"152684","record_id":"<urn:uuid:390bf290-8b79-457f-9abf-b851376f5578>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Model minimization in markov decision processes
Results 1 - 10 of 78
- JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH , 1999
"... Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning,
decision analysis, operations research, control theory and economics. While the assumptions and perspectives ..."
Cited by 417 (4 self)
Add to MetaCart
Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision
analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to
researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related
methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular
classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to
- In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence , 1999
"... Recently, structured methods for solving factored Markov decisions processes (MDPs) with large state spaces have been proposed recently to allow dynamic programming to be applied without the
need for complete state enumeration. We propose and examine a new value iteration algorithm for MDPs that use ..."
Cited by 178 (17 self)
Add to MetaCart
Recently, structured methods for solving factored Markov decisions processes (MDPs) with large state spaces have been proposed recently to allow dynamic programming to be applied without the need for
complete state enumeration. We propose and examine a new value iteration algorithm for MDPs that uses algebraic decision diagrams (ADDs) to represent value functions and policies, assuming an ADD
input representation of the MDP. Dynamic programming is implemented via ADD manipulation. We demonstrate our method on a class of large MDPs (up to 63 million states) and show that significant gains
can be had when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions). 1
, 1997
"... Markov decision processes(MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based
specifications and computations. To alleviate the combinatorial problems associated with such methods, we propo ..."
Cited by 145 (10 self)
Add to MetaCart
Markov decision processes(MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based
specifications and computations. To alleviate the combinatorial problems associated with such methods, we propose new representational and computational techniques for MDPs that exploit certain types
of problem structure. We use dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP,
together with a decision-tree representation of rewards. Based on this representation, we develop versions of standard dynamic programming algorithms that directly manipulate decision-tree
representations of policies and value functions. This generally obviates the need for state-by-state computation, aggregating states at the leaves of these trees and requiring computations only for
each aggregate state. The key to these algorithms is a decision-theoretic generalization of classic regression analysis, in which we determine the features relevant to predicting expected value. We
demonstrate the method empirically on several planning problems,
, 2003
"... This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition
model using a dynamic Bayesian network. This representation often allows an exponential reduction in the re ..."
Cited by 129 (4 self)
Add to MetaCart
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model
using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such
MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function
represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic
operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel
linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm
uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that
more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10^40 states, demonstrating a promising indication
of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.
, 1998
"... This dissertation investigates the use of hierarchy and problem decomposition as a means of solving large, stochastic, sequential decision problems. These problems are framed as Markov decision
problems (MDPs). The new technical content of this dissertation begins with a discussion of the concept o ..."
Cited by 108 (2 self)
Add to MetaCart
This dissertation investigates the use of hierarchy and problem decomposition as a means of solving large, stochastic, sequential decision problems. These problems are framed as Markov decision
problems (MDPs). The new technical content of this dissertation begins with a discussion of the concept of temporal abstraction. Temporal abstraction is shown to be equivalent to the transformation
of a policy defined over a region of an MDP to an action in a semi-Markov decision problem (SMDP). Several algorithms are presented for performing this transformation efficiently. This dissertation
introduces the HAM method for generating hierarchical, temporally abstract actions. This method permits the partial specification of abstract actions in a way that corresponds to an abstract plan or
strategy. Abstr...
, 2003
"... Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for solving them run
in time polynomial in the size of the state space, where this size is extremely large for most real-world ..."
Cited by 92 (2 self)
Add to MetaCart
Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for solving them run in
time polynomial in the size of the state space, where this size is extremely large for most real-world planning problems of interest. Recent AI research has addressed this problem by representing the
MDP in a factored form. Factored MDPs, however, are not amenable to traditional solution methods that call for an explicit enumeration of the state space. One familiar way to solve MDP problems with
very large state spaces is to form a reduced (or aggregated) MDP with the same properties as the original MDP by combining “equivalent ” states. In this paper, we discuss applying this approach to
solving factored MDP problems—we avoid enumerating the state space by describing large blocks of “equivalent” states in factored form, with the block descriptions being inferred directly from the
original factored representation. The resulting reduced MDP may have exponentially fewer states than the original factored MDP, and can then be solved using traditional methods. The reduced MDP found
depends on the notion of equivalence between states used in the aggregation. The notion of equivalence chosen will be fundamental in designing and analyzing
- In Proceedings of the Fifteenth National Conference on Artificial Intelligence , 1998
"... We present a technique for computing approximately optimal solutions to stochastic resource allocation problems modeled as Markov decision processes (MDPs). We exploit two key properties to
avoid explicitly enumerating the very large state and action spaces associated with these problems. First, the ..."
Cited by 81 (11 self)
Add to MetaCart
We present a technique for computing approximately optimal solutions to stochastic resource allocation problems modeled as Markov decision processes (MDPs). We exploit two key properties to avoid
explicitly enumerating the very large state and action spaces associated with these problems. First, the problems are composed of multiple tasks whose utilities are independent. Second, the actions
taken with respect to (or resources allocated to) a task do not influence the status of any other task. We can therefore view each task as an MDP. However, these MDPs are weakly coupled by resource
constraints: actions selected for one MDP restrict the actions available to others. We describe heuristic techniques for dealing with several classes of constraints that use the solutions for
individual MDPs to construct an approximate global solution. We demonstrate this technique on problems involving thousandsof tasks, approximating the solution to problems that are far beyond the
reach of standard methods. 1
- In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI-00 , 2000
"... Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has
suggested that value functions in factored MDPs can often be approximated well using a factored value functi ..."
Cited by 73 (8 self)
Add to MetaCart
Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has suggested
that value functions in factored MDPs can often be approximated well using a factored value function: a linear combination of restricted basis functions, each of which refers only to a small subset
of variables. An approximate factored value function for a particular policy can be computed using approximate dynamic programming, but this approach (and others) can only produce an approximation
relative to a distance metric which is weighted by the stationary distribution of the current policy. This type of weighted projection is ill-suited to policy improvement.
- In Thirteenth Conference on Uncertainty in Artificial Intelligence , 1997
"... We present a method for solving implicit (factored) Markov decision processes (MDPs) with very large state spaces. We introduce a property of state space partitions which we call
ffl-homogeneity. Intuitively, an ffl-homogeneous partition groups together states that behave approximately the sam ..."
Cited by 51 (7 self)
Add to MetaCart
We present a method for solving implicit (factored) Markov decision processes (MDPs) with very large state spaces. We introduce a property of state space partitions which we call ffl-homogeneity.
Intuitively, an ffl-homogeneous partition groups together states that behave approximately the same under all or some subset of policies. Borrowing from recent work on model minimization in
computer-aided software verification, we present an algorithm that takes a factored representation of an MDP and an 0 ffl 1 and computes a factored ffl-homogeneous partition of the state space. This
partition defines a family of related MDPs---those MDP's with state space equal to the blocks of the partition, and transition probabilities "approximately" like those of any (original MDP) state in
the source block. To formally study such families of MDPs, we introduce the new notion of a "bounded parameter MDP" (BMDP), which is a family of (traditional) MDPs defined by specifying upper...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=519034","timestamp":"2014-04-18T01:54:26Z","content_type":null,"content_length":"40490","record_id":"<urn:uuid:bad026de-4440-4acd-813a-625879324e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Any engineers or engineering students here? - The Ranger Station Forums
Been working on some other things..........
I understand what you're saying about using the whole length. I also looked at a picture yesterday in Crawl Magazine of a Jeep standing up trying to climb a ledge. You can see the drivers side rear
lower link. It's straight up and down like a column falling more in to a buckling issue at that point.
Let's say my Ranger weighs 5,000lbs and is mostly horizontal. The left rear lower link is resting centered on a rock with the wheel off the ground. To keep things simple we'll say the weight is
evenly distributed so that link has 1,250lbs (5,000lbs / 4) on it.
Using the same formula above I would get this:
1.5X.125 (1.5 OD - .125 wall - 1.25 ID)
I = .1284
Z = .1712
1250x48 / 4x.1712
= 60,000 / .6848 =
87,616 PSI 1.75X.250 (1.75 OD - .250 wall - 1.25 ID)
I = .3400
Z = .3885
1250x48 / 4x.3885
= 60,000 / 1.554 =
38,610 PSI
Looking at it that way, The 1.5" tubing would fail because of the 70,000 yield strength and the examples 87,616 PSI calculation but the 1.75" would be fine.
Does it look like I'm calculating that correctly now?
I wonder how you would take the torque applied to the links under accelleration in to consideration?
As far as your equation for buckling:
Fcr = (pi^2*E*A)/(l/k)^2
Fcr = minimum critical load
pi = 3.14159....
E = modulus of elasticity (stiffness)
A = area of cross-section
l = length of beam
k = radius of gyration
k = sqrt(I/A)
I = second moment of area (for a tube I = (pi/64)*(Do^4-Di^4))
Where Do and Di are the outside and inside diameters.
You give the formula for k as k = sqrt(I/A). I've also seen Radius of Gyration for the tubing listed as k = sqrt(D^2 + d^2)/4. When I do both I get different results, so which formula is correct for
the tubing. Also, you didn't mention the formula for A (Area of Section). For tubing I have A = pi(D^2 - d^2)/4 or A = 0.7854(D^2 - d^2). Is that correct?
You're going to need to know how much force is going to be applied to your links and then find a tube that has a higher minimum critical bucking load, or you could resolve the equation for the
outside and inside diameters if you know the force that will be applied. (To do that, you would need a second equation relating the inside and outside diameters, i.e. if you know the wall thickness
you want to use).
What formula are you referring too?
I haven't worked the buckling formula yet.
I find all the math fascinating. I'm dorky like that. I'm surprised some of the other people lurking with engineering knowledge aren't speaking up.
There's not a lot on the net on this, and one chart I looked at didn't look right. I'd rather know how to get the information than rely on what I see on the net.
|
{"url":"http://www.therangerstation.com/forums/showthread.php?t=3438","timestamp":"2014-04-20T08:15:35Z","content_type":null,"content_length":"129613","record_id":"<urn:uuid:66e303fb-3b6c-4359-afb0-c76784881a05>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] object to string for numpy 1.3.0
Thomas Robitaille thomas.robitaille@gmail....
Fri Nov 6 14:39:01 CST 2009
Until recently, object to string conversion was buggy:
In [1]: import numpy as np
In [2]: a = np.array(['4TxismCut','nXe4f0sAs'],dtype=np.object_)
In [3]: np.array(a,dtype='|S9')
array(['4TxismCun', 'Xe4f0sAs'],
(notice the strings are not quite the same)
This is now fixed in svn, but is there a way to write the conversion
differently such that it does work with numpy 1.3.0? (so as not to
have to force users of a package to upgrade to numpy 1.4.0)
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046485.html","timestamp":"2014-04-20T16:03:26Z","content_type":null,"content_length":"3112","record_id":"<urn:uuid:5cf3a4bf-dab2-471e-a7ae-f5a1c3d69a5f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Commutators with Lipschitz Functions and Nonintegral Operators
Journal of Applied Mathematics
Volume 2013 (2013), Article ID 178961, 8 pages
Research Article
Commutators with Lipschitz Functions and Nonintegral Operators
^1School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
^2Key Laboratory of Mathematics and Interdisciplinary Sciences of Guangdong Higher Education Institutes, Guangzhou University, Guangzhou 510006, China
Received 29 March 2013; Accepted 30 May 2013
Academic Editor: Alberto Cabada
Copyright © 2013 Peizhu Xie and Ruming Gong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
Let be a singular nonintegral operator; that is, it does not have an integral representation by a kernel with size estimates, even rough. In this paper, we consider the boundedness of commutators
with and Lipschitz functions. Applications include spectral multipliers of self-adjoint, positive operators, Riesz transforms of second-order divergence form operators, and fractional power of
elliptic operators.
1. Introduction
Let be a bounded operator on for some , . A measurable function is called an associated kernel of if holds for each continuous function with compact support and for almost all not in the support of .
The kernel is said to satisfy the following.
(i) The pointwise Hörmander condition on variable if there exist and such that when , and denotes the ball with center , radius .
(ii) The integral Hörmander condition on variable if there exist constants and such that for all .
It is well known that if is bounded on for some , , and , the two Hörmander conditions (i) and (ii) above are sufficient to imply that the commutator is bounded on for all , , with norm where the
commutator is defined by and is the BMO seminorm of . See [1, 2] for BMO functions on Euclidean spaces and [3] for spaces of homogeneous type.
A particular case of the result of Janson [2] states that is bounded, , if , . Here, is the homogeneous Lipschitz space determined by the first difference operator.
In [4], Duong and Yan have replaced the two Hörmander conditions (2) and (3) by the following weaker conditions (5) and (6) below which previously appeared in [5] and still concluded that the
commutator is bounded on for all , . And in [6], Hu and Yang obtained the weighted boundedness of maximal commutator when satisfy (5) and (6). Roughly speaking, we assume the following.
(iii) There exists a class of operators with kernels , which satisfy the condition (23) in Section 2, so that the kernels of the operators satisfy the condition when for some , , where is a positive
(iv) There exists a class of operators with kernels , which satisfy the condition (23), such that have associated kernels and there exist positive constants , such that
Under conditions (5) and (6), if is bounded on for some , , then the commutator is bounded on for all , .
In [7], Auscher and Martell have considered the commutators of singular nonintegral operators, where the implicit terminology has been introduced in [8]. By this we mean that they are still of order
0, but they do not have an integral representation by a kernel with size and/or smoothness estimates. Let . Suppose that the singular nonintegral operator is a sublinear operator bounded on and that
is a family of operators acting from into . Auscher and Martell assume the following.
(v) For all and all balls where denotes its radius,
(vi) For all and all balls where denotes its radius,
Let and (for the definitions of and see Section 2). Under conditions (7) and (8), if , then the commutator is bounded on ; that is, for all .
The main object of this paper is the commutators of nonintegral operators . Compared to the result in [7], we can obtain a more general result for belongs to the Lipschitz spaces . To be more
specific, we can obtain the following.
Theorem 1. Let , such that . Suppose that is a sublinear operator bounded from to and that is a family of operators acting from into . Assume that for all and all balls , where denotes its radius.
Let such that . Let and . If , then there is a constant such that for all and for all .
The case is understood in the sense that the -average in (10) is indeed an essential supremum.
Remark 2. Let be such that . Under the assumptions above, we know that if , then is bounded from to . See Theorem 2.2 in [9].
In the limiting case , from the assumptions (9) and (10), we deduce Consequently, from the Theorem 3.7 in [7], we know that if , then for and for all .
Theorem 3. Let . Suppose that is a sublinear operator bounded on and that is a family of operators acting from to . Assume that satisfy (9) and (10) with . Let , , and . Assume that there exists a
constant such that . If , then there is a constant such that for all .
The class is defined in Section 2.
2. Definitions and Preliminary Results
We use the notation and we often ignore the Lebesgue measure and the variable of the integrand in writing integrals, unless this is needed to avoid confusions.
A weight is a nonnegative locally integrable function. We say that , , if there exists a constant such that for every ball For , we say that if there is a constant such that for every ball , , for
a.e. , or, equivalently, a.e., where denotes the classical Hardy-Littlewood maximal function of . The reverse Hölder classes are defined in the following way: , , if there is a constant such that for
every ball
The endpoint is given by the condition: whenever, for any ball ,
The homogenous Lipschitz function space is the space of functions such that where denotes the th difference operator (see [10]). That is, , , .
We have the following lemmas.
Lemma 4 (see [10]). For , , one has For , the last formula should be modified appropriately.
Lemma 5 (see [10]). Let , and then .
Lemma 6 (see [11]). For and , let Suppose that and , and then
Theorem A (see [7]). Fix , , and , . Then, there exist and with the following property: assume that , , , and are nonnegative measurable functions on such that for any cube there exist nonnegative
functions and with for a.e. and Then for all , and As a consequence, for all , one has provided , and provided . Furthermore, if , then (24) and (25) hold, provided (whether or not ).
For and , we denote where the supremum is taken with respect to all balls of positive measure containing the point .
Theorem B. Let , , and let and be the weight functions. For a constant to exist so that the inequality would hold, it is necessary and sufficient that the condition where , be fulfilled.
For the proof of this theorem, see [12].
Definition 7. is said to belong to , if (28) holds.
Lemma 8. Let , . If , then
Proof. Since , we have By Theorem B, we have Thus,
3. The Proof of the Main Theorems
In order to prove Theorem 1, we need the following lemma.
Lemma 9. Let , , and . Let be a sublinear operator bounded from to .(i) If and , then .(ii) Assume that for any and for any one has that where does not depend on and . Then for all , (33) holds.
Proof. The ideas of the following argument are taken from [7].
Fix . Note that (i) follows easily observing that since , imply that and hence, by assumption, .
To obtain (ii), we fix and . Let be a cube such that . We may assume that since otherwise we can work with and observe that Note that for , we have that and are finite almost everywhere since they
belong to .
Let and define as follows: Then, it is immediate to see that for all . Thus, . As , we can use (33) and To conclude, by Fatou’s lemma, it suffices to show that for a.e. and for some subsequence such
that .
As , for any , the dominated convergence theorem yields that in as . Therefore, is bounded from to . It follows that in . Thus, there exists a subsequence such that for a.e. . In this way we obtain
as desired, and we get that for a.e .
Proof of Theorem 1. We assume that , for , and the main ideas are the same and details are left to the interested reader. Lemma 9 ensures that it suffices to consider the case . Let and set . Note
that by (i) of Lemma 9. Given a ball , we set and decompose as follows: We observe that , where and .
We first estimate the average of on . Fix any . Let . Using Lemma 4, Using (9) and Lemmas 4 and 5, since . Hence, for any , We next estimate the average of on with . Using (10) and proceeding as
before, we see that for any . Thus we have obtained
For and , we can find a such that and . As mentioned before . Applying Theorem A and Remark 2 with in place of , we obtain where we have used Lemma 6. This implies that
Proof of Theorem 3. Let , , and be the same as those used in the proof of Theorem 1. As mentioned before . Since , applying Theorem A with in place of and , we obtain Noting that , Lemma 8 and Remark
2 give us that This implies that
4. Applications
4.1. Spectral Multipliers: Off-Diagonal Estimates
Suppose that is a self-adjoint nonnegative definite operator on . Let be the spectral resolution of . For any bounded Borel function , by using the spectral theorem, we can define the operator This
is of course bounded on .
The following will be assumed throughout this subsection.(H1) is a nonnegative self-adjoint operator on .(H2) The operator generates an analytic semigroup which satisfies the Davies-Gaffney
condition. That is, there exist constants such that for any open subsets , for every with , , where .(H3) Suppose . Assume that the analytic semigroup generated by satisfies “ off-diagonal”
estimates: there exist coefficients satisfying such that for all balls and for all functions
Let be a nonnegative function such that For , let denote the integer part of . Recall that is the space of functions on for which is finite.
Then the following result holds.
Theorem 10. Let satisfy assumptions (H1)–(H3). Let be a nonnegative function satisfying (54), and suppose that the bounded measurable function satisfies for some . Then(i) let . If and , then there
is a constant such that for all and for all .(ii) Let , , and . If there exists a constant such that , then there is a constant such that for all and for all .
Proof. Estimate (57) follows from Theorem 1 with and estimate (58) follows from Theorem 3, applied to and with and . It suffices to show that there exist coefficients satisfying such that (9) and (10
) hold for all .
Fix . From (53), we deduce that This estimate with in place of yields (10). Since, by functional calculus, , (9) was proved in [13].
4.2. Riesz Transforms
Let be an matrix of complex and -valued coefficients on . We assume that this matrix satisfies the following ellipticity (or “accretivity”) condition: there exist such that for all and almost every .
Associated with this matrix we define the second-order divergence form operator
The Riesz transforms associated to are , . Set . The solution of the Kato conjecture [14] implies that this operator extends boundedly to . This allows the representation in which the integral
converges strongly in both at and when .
Define by
We write for , .
We extract from [15] some definitions and results on unweighted off-diagonal estimates.
Definition 11. Let . One says that a family of sublinear operators satisfies full off-diagonal estimates, in short , if for some , for all closed sets and , all , and all , we have
If is a subinterval of , Int denotes the interior in of .
Proposition 12 (see [15]). Fix and .(a)There exists a nonempty maximal interval in , denoted by , such that if with , then satisfies full off-diagonal estimates and is a bounded set in .(b)There
exists a nonempty maximal interval in , denoted by , such that if with , then satisfies full off-diagonal estimates and is a bounded set in .(c) and, for , we have if and only if .(d)Denote by , the
lower and upper bounds of and by , those of . We have and . (We have set , the Sobolev exponent of when and , otherwise.)(e)If , . If , and with . (f)If , , , and .
Then for , satisfy (9) and (10) with and , where is a large enough integer. For the proof of this argument, see [15]. So Theorem 1 with and Theorem 3 can be applied to .
4.3. Fractional Operators
Let . The fractional power of an elliptic operator on is given formally by with . There exist and , such that the semigroup is uniformly bounded on for every (see Proposition 12). We have the
following results.
Lemma 13 (see [9]). Let so that . Fix a ball with radius . For and large enough, one has and for where and .
Theorem 14. Let , , and . Given , one has
Proof. We are going to apply Theorem 1 to the linear operator . We fix , , and so that . Then we can find , , such that , , and . Notice that as , we have that . By Theorem 1.2 in [9], we know that
is bounded from to .
We take , where is an integer to be chosen. We apply Lemma 13. Note that (66) is (9). Also, (10) follows from (67) after expanding . Then, we have that for by choosing . Consequently applying Theorem
1, we conclude that .
The authors would like to thank the referee for carefully reading the paper and for making several useful suggestions. This research was supported by Tianyuan Fund for Mathematics (Grant no.
11226100), Specialized Research Fund for the Doctoral Program of Higher Education (Grant no. 20124410120002), and SRF of Guangzhou Education Bureau (Grant no. 2012A088).
|
{"url":"http://www.hindawi.com/journals/jam/2013/178961/","timestamp":"2014-04-16T05:26:57Z","content_type":null,"content_length":"890023","record_id":"<urn:uuid:e2b062a5-a4e1-4df6-9084-7c952fde583e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hangman 1
Re: Hangman 1
_ _ S _ _ _, _ _ _ _ _
Re: Hangman 1
Let me have those 5 E's I know are in there!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
no e.My HANGMAN IS FOOL PROOF!
Re: Hangman 1
Give me an A!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
_ _ S _ A _, _ _ _ A _
hint:Go for an O
Re: Hangman 1
I need an O!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
_ O S _ A _, _ O _ A _
Re: Hangman 1
Is there an M?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
_ O S _ A M, _ O _ A _
Re: Hangman 1
I will need to know if there is a T?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Full Member
Re: Hangman 1
Re: Hangman 1
There is a Y in there.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
No N or Y?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
No S _ A M, No _ A N
Re: Hangman 1
How about a P?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
No SPAM, No _ A N
Real Member
Re: Hangman 1
And a B, too!
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Hangman 1
No Spam No Ban
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
_ _ _ 3
Re: Hangman 1
Big 3
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
_ _ _ _
Re: Hangman 1
Let me have T.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
_ _ t _
|
{"url":"http://mathisfunforum.com/viewtopic.php?id=18429&p=10","timestamp":"2014-04-18T11:19:53Z","content_type":null,"content_length":"32328","record_id":"<urn:uuid:c5113586-80a5-4ef1-8d0d-610739f085c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Units with plots?
Replies: 0
Units with plots?
Posted: Mar 8, 2013 4:46 PM
Should the new Mathematica 9 functionality of built-in units handling be extended from numerics and symbolics to graphics, too?
For example, whereas the following is allowed with units . . .
Plot[x^2, {x, Quantity[0, "Meters"], Quantity[1, "Meters"]}]
. . . Quantity does not seem to be allowed in expressions such as:
Graphics[Disk[{Quantity[0, "Meters"], Quantity[0, "Meters"]},
Quantity[2, "Meters"]]]
The built-in QuantityMagnitude applied to the arguments can take care of that, but it seems to me that a fully integrated handling of units shouldn't require that -- unless, perhaps, the overhead of
checking for units as arguments is just too high at present.
(This question was motivated by the post http://mathematica.stackexchange.com/questions/20867/plotting-with-units.)
Murray Eisenberg murrayeisenberg@gmail.com
80 Fearing Street phone 413 549-1020 (H)
Amherst, MA 01002-1912
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2439765&messageID=8572809","timestamp":"2014-04-16T22:14:20Z","content_type":null,"content_length":"14548","record_id":"<urn:uuid:d21408f8-34d0-4448-82d5-92c2932cb232>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiplying fractions
Hey all i know this is really basic but i have hit a wall
(d / a+b) * (5a + 5b / 2d^2 + 2d)
i can get the answer but im not sure what im actually doing S:
i khow to multiply problem like (1/2) * (1/2) = 1/4 but what confuses me in this problem is the denominator a+b not sure how to multiply the other denominator with it.
My workings
(d / a+b) * (5a + 5b / 2d^2 + 2d)
i just divide by common factors first step
numerator 5a + 5b to 5 ( a+b)
then denominator 2d (d + 1)
then i just divided both numerator and denominator by common factors (a+b) and d
so i got the answer 5 / 2 (d + 1)
how would you guys solve it? sorry if my reasoning bit over the place, any chance you can show me your working so i can aware myself.
thank you.
Last edited by bronxsystem (2013-07-14 10:25:09)
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=278166","timestamp":"2014-04-21T02:24:17Z","content_type":null,"content_length":"16591","record_id":"<urn:uuid:d318f8ce-abc0-404d-9752-fcc765e4da75>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Virginia Standards of Learning: 6th Grade
Shodor > Interactivate > Standards > Virginia Standards of Learning: 6th Grade
Virginia Standards of Learning
6th Grade
Standard Category • Show All
Standard Category (...)
Computation and Estimation
• 6.14 The student will identify, classify, and describe the characteristics of plane figures, describing their similarities, differences, and defining properties.
• 6.15 The student will determine congruence of segments, angles, and polygons by direct comparison, given their attributes. Examples of noncongruent and congruent figures will be included.
• 6.16 The student will construct the perpendicular bisector of a line segment and an angle bisector.
• 6.17 The student will sketch, construct models of, and classify solid figures (rectangular prism, cone, cylinder, and pyramid).
• 6.9a The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: length - part of an inch (1/2, 1/4, and 1/8), inches, feet, yards, miles, millimeters, centimeters, meters, and kilometers;
• 6.9a The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: length â  part of an inch (1/2, 1/4, and 1/
• 6.9b The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: weight/mass - ounces, pounds, tons, grams, and kilograms;
• 6.9b The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: weight/mass â  ounces, pounds, tons, grams,
• 6.9c The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: liquid volume - cups, pints, quarts, gallons, milliliters, and liters; and
• 6.9c The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: liquid volume â  cups, pints, quarts, gallo
• 6.9d The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: area - square units.
• 6.9d The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: area â  square units.
• 6.10 The student will estimate and then determine length, weight/mass, area, and liquid volume/capacity, using standard and nonstandard units of measure.
• 6.11 The student will determine if a problem situation involving polygons of four or fewer sides represents the application of perimeter or area and apply the appropriate formula.
• 6.12a The student will solve problems involving the circumference and/or area of a circle when given the diameter or radius
• 6.12b The student will derive approximations for pi (π) from measurements for circumference and diameter, using concrete materials or computer models.
• 6.12b The student will derive approximations for pi (Ã Â ) from measurements for circumference and diameter, using concrete materials or computer models.
• 6.13a The student will estimate angle measures, using 45°, 90°, and 180° as referents, and use the appropriate tools to measure the given angles
• 6.13a The student will estimate angle measures, using 45à °, 90à °, and 180à ° as referents, and use the appropriate tools to measure the given angles
• 6.13b The student will measure and draw right, acute, and obtuse angles and triangles.
Patterns, Functions, and Algebra
• 6.21 The student will investigate, describe, and extend numerical and geometric patterns, including triangular numbers, patterns formed by powers of 10, and arithmetic sequences.
• 6.22 The student will investigate and describe concepts of positive exponents, perfect squares, square roots, and, for numbers greater than 10, scientific notation. Calculators will be used to
develop exponential patterns.
• 6.23a The student will model and solve algebraic equations, using concrete materials
• 6.23b The student will solve one-step linear equations in one variable, involving whole number coefficients and positive rational solutions
• 6.23c The student will use the following algebraic terms appropriately: variable, coefficient, term, and equation.
Probability and Statistics
• 6.18a The student, given a problem situation, will collect, analyze, display, and interpret data in a variety of graphical methods, including line, bar, and circle graphs
• 6.18b The student, given a problem situation, will collect, analyze, display, and interpret data in a variety of graphical methods, including stem-and-leaf plots
• 6.18c The student, given a problem situation, will collect, analyze, display, and interpret data in a variety of graphical methods, including box-and-whisker plots.
• 6.19 The student will describe the mean, median, and mode as measures of central tendency, describe the range, and determine their meaning for a set of data.
• 6.20a The student will make a sample space for selected experiments and represent it in the form of a list, chart, picture, or tree diagram
• 6.20b The student will determine and interpret the probability of an event occurring from a given sample space and represent the probability as a ratio, decimal or percent, as appropriate for the
given situation.
No Results Found
©1994-2014 Shodor Website Feedback
Virginia Standards of Learning
6th Grade
Standard Category • Show All
Standard Category (...)
Computation and Estimation
• 6.14 The student will identify, classify, and describe the characteristics of plane figures, describing their similarities, differences, and defining properties.
• 6.15 The student will determine congruence of segments, angles, and polygons by direct comparison, given their attributes. Examples of noncongruent and congruent figures will be included.
• 6.16 The student will construct the perpendicular bisector of a line segment and an angle bisector.
• 6.17 The student will sketch, construct models of, and classify solid figures (rectangular prism, cone, cylinder, and pyramid).
• 6.9a The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: length - part of an inch (1/2, 1/4, and 1/8), inches, feet, yards, miles, millimeters, centimeters, meters, and kilometers;
• 6.9a The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: length â  part of an inch (1/2, 1/4, and 1/
• 6.9b The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: weight/mass - ounces, pounds, tons, grams, and kilograms;
• 6.9b The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: weight/mass â  ounces, pounds, tons, grams,
• 6.9c The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: liquid volume - cups, pints, quarts, gallons, milliliters, and liters; and
• 6.9c The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: liquid volume â  cups, pints, quarts, gallo
• 6.9d The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: area - square units.
• 6.9d The student will compare and convert units of measure for length, area, weight/mass, and volume within the U.S. Customary system and the metric system and estimate conversions between units
in each system: area â  square units.
• 6.10 The student will estimate and then determine length, weight/mass, area, and liquid volume/capacity, using standard and nonstandard units of measure.
• 6.11 The student will determine if a problem situation involving polygons of four or fewer sides represents the application of perimeter or area and apply the appropriate formula.
• 6.12a The student will solve problems involving the circumference and/or area of a circle when given the diameter or radius
• 6.12b The student will derive approximations for pi (π) from measurements for circumference and diameter, using concrete materials or computer models.
• 6.12b The student will derive approximations for pi (Ã Â ) from measurements for circumference and diameter, using concrete materials or computer models.
• 6.13a The student will estimate angle measures, using 45°, 90°, and 180° as referents, and use the appropriate tools to measure the given angles
• 6.13a The student will estimate angle measures, using 45à °, 90à °, and 180à ° as referents, and use the appropriate tools to measure the given angles
• 6.13b The student will measure and draw right, acute, and obtuse angles and triangles.
Patterns, Functions, and Algebra
• 6.21 The student will investigate, describe, and extend numerical and geometric patterns, including triangular numbers, patterns formed by powers of 10, and arithmetic sequences.
• 6.22 The student will investigate and describe concepts of positive exponents, perfect squares, square roots, and, for numbers greater than 10, scientific notation. Calculators will be used to
develop exponential patterns.
• 6.23a The student will model and solve algebraic equations, using concrete materials
• 6.23b The student will solve one-step linear equations in one variable, involving whole number coefficients and positive rational solutions
• 6.23c The student will use the following algebraic terms appropriately: variable, coefficient, term, and equation.
Probability and Statistics
• 6.18a The student, given a problem situation, will collect, analyze, display, and interpret data in a variety of graphical methods, including line, bar, and circle graphs
• 6.18b The student, given a problem situation, will collect, analyze, display, and interpret data in a variety of graphical methods, including stem-and-leaf plots
• 6.18c The student, given a problem situation, will collect, analyze, display, and interpret data in a variety of graphical methods, including box-and-whisker plots.
• 6.19 The student will describe the mean, median, and mode as measures of central tendency, describe the range, and determine their meaning for a set of data.
• 6.20a The student will make a sample space for selected experiments and represent it in the form of a list, chart, picture, or tree diagram
• 6.20b The student will determine and interpret the probability of an event occurring from a given sample space and represent the probability as a ratio, decimal or percent, as appropriate for the
given situation.
No Results Found
|
{"url":"http://www.shodor.org/interactivate/standards/organization/6/","timestamp":"2014-04-16T16:03:29Z","content_type":null,"content_length":"24797","record_id":"<urn:uuid:1dbbe5b5-78e5-4665-83fb-301e9c695060>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Showing Students how to Add and Subtract Directed Numbers
The following article is all about how I help students to understand directed numbers. I would like to focus on the subtraction and addition of directed numbers.
Where I teach the first time students learn about directed numbers in a math class is when they enter secondary schools. I find they already have some knowledge of how to add and subtract numbers, as
they learn this when at primary school. I find this is a good starting point, as students are aware that when you add to something it means get bigger, and when you subtract it gets smaller.
The greatest hurdle I find the students encounter is when they start to add or subtract negative numbers.
The first thing I do is have students construct a basic number line of range -10 through to +10. I then try the students out to see how capable they are of subtracting and adding small positive
numbers. I leave the + sign out at this point.
I ask the students about the number their subtracting, in relation to if it's a positive or negative. Students realize it is positive. I also ask them what direction they travel along the number
line, starting from their original number. Before long students realize that adding means one way and subtracting is the other.
The next step is to talk to students about real life examples of when things rise and fall. Some good examples the students provide are:
1. Temperature change.
2. Stock market.
3. Football scores.
4. Water levels.
5. Bank balances.
6. Time line.
My next step is to talk about what are the different types of numbers. Before long some student will list negative numbers. I ask them about when they added and subtracted numbers and if they were
able to do this to positive numbers, could they do it to negative numbers too.
I regularly refer to a number line as I find it helps the students with the direction they need to take when adding and subtracting, and also where their starting point is.
In a basic sum, for example 6 + 4, I explain that 6 is their stating point and 4 is how far they need to travel along the number line.
I then ask them if they knew what the opposite to positive is, and they generally say negative. I then go on to the part where if I add a positive number, such as six, what would happen if I added
negative six. Students soon realize that it must travel in the opposite direction, thus adding a negative simply means subtract.
Students form a an understanding that + - = -.
My next step is to go back to the 6 + 4 example, and ask them 'Instead of adding four, what about subtracting four. Soon students realize that - + and + - both mean to subtract.
Hence: odd signs mean to subtract.
The final part, that I find number lines are still essential to incorporate into is - - (subtracting a negative). I ask the students about how subtracting a positive and adding a negative make
subtract. therefore, what would happen if instead of subtracting a positive, I now subtract a negative. This part kids find hard to comprehend. So I make reference to the direction of travel when I
changed a positive to a negative. After awhile students start to form an understanding of how like signs add and unlike signs subtract.
In summary:
+ + AND - - = +
- + AND + - = -
I find it essential for students to reinforce their understanding by completing a set amount of core work.
I also find it useful to incorporate several activities designed to engage the students, so they don't look at this topic as a bunch of number crunching math work. Some of the activities I use are:
1. Tug of war.
2. Die games, there are many out there, so I wont mention any.
3. Board games. I.e. move forward 3 places or move back 2 places.
4. Stock market projects. Where they're given a set amount to invest and they track it's progress over a set period of time.
5. Money games. Where the students buy and sell stuff.
This is a basic overview of what I do, and I'm sure other mathematicians or math teachers have great ideas. So with this I welcome any other article that may help others to understand more about
directed numbers.
|
{"url":"http://www.sciences360.com/index.php/showing-students-how-to-add-and-subtract-directed-numbers-24933/","timestamp":"2014-04-17T21:22:37Z","content_type":null,"content_length":"24020","record_id":"<urn:uuid:edcd7f2a-84de-4de6-a74f-f1f799937c2f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
II. MODEL
A. Field-action
B. Dimensionless field-action
C. Density and electroneutrality
D. Grand potential, free energy, and pressure
A. Mean-field
B. Pressure in the plane-parallel geometry
C. Second order fluctuations correction
D. Pressure
E. Density
A. Formulation
B. Density
C. Pressure
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/137/17/10.1063/1.4763986","timestamp":"2014-04-16T17:06:04Z","content_type":null,"content_length":"78920","record_id":"<urn:uuid:2f20c2ac-10d2-441d-945f-3f17e33a531c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Boyer-Moore prover and Nuprl: An experimental comparison
Results 1 - 10 of 22
- J. of Computer and Mathematics with Applications , 1995
"... RRL (Rewrite Rule Laboratory) was originally developed as an environment for experimenting with automated reasoning algorithms for equational logic based on rewrite techniques. It has now
matured into a full-fledged theorem prover which has been used to solve hard and challenging mathematical proble ..."
Cited by 64 (25 self)
Add to MetaCart
RRL (Rewrite Rule Laboratory) was originally developed as an environment for experimenting with automated reasoning algorithms for equational logic based on rewrite techniques. It has now matured
into a full-fledged theorem prover which has been used to solve hard and challenging mathematical problems in automated reasoning literature as well as a research tool for investigating the use of
formal methods in hardware and software design. We provide a brief historical account of development of RRL and its descendants, give an overview of the main capabilities of RRL and conclude with a
discussion of applications of RRL. Key words. RRL, rewrite techniques, equational logic, discrimination nets 1 Introduction The theorem prover RRL (Rewrite Rule Laboratory) is an automated reasoning
program based on rewrite techniques. The theorem prover has implementations of completion procedures for generating a complete set of rewrite rules from an equational axiomatization,
associativecommutative mat...
- J. Auto. Reas , 1993
"... A logic for specification and verification is derived from the axioms of Zermelo-Fraenkel set theory. The proofs are performed using the proof assistant Isabelle. Isabelle is generic, supporting
several different logics. Isabelle has the flexibility to adapt to variants of set theory. Its higher-ord ..."
Cited by 46 (18 self)
Add to MetaCart
A logic for specification and verification is derived from the axioms of Zermelo-Fraenkel set theory. The proofs are performed using the proof assistant Isabelle. Isabelle is generic, supporting
several different logics. Isabelle has the flexibility to adapt to variants of set theory. Its higher-order syntax supports the definition of new binding operators. Unknowns in subgoals can be
instantiated incrementally. The paper describes the derivation of rules for descriptions, relations and functions, and discusses interactive proofs of Cantor’s Theorem, the Composition of
Homomorphisms challenge [9], and Ramsey’s Theorem [5]. A generic proof assistant can stand up against provers dedicated to particular logics. Key words. Isabelle, set theory, generic theorem proving,
Ramsey’s Theorem,
- In Mathematical Knowledge Management, 2nd Int’l Conf., Proceedings , 2003
"... Abstract. We compare fifteen systems for the formalizations of mathematics with the computer. We present several tables that list various properties of these programs. The three main dimensions
on which we compare these systems are: the size of their library, the strength of their logic and their le ..."
Cited by 23 (0 self)
Add to MetaCart
Abstract. We compare fifteen systems for the formalizations of mathematics with the computer. We present several tables that list various properties of these programs. The three main dimensions on
which we compare these systems are: the size of their library, the strength of their logic and their level of automation. 1
- Theorem Proving in Higher Order Logics, number 1479 in Lect. Notes Comp. Sci , 1998
"... . There is an overwhelming number of different proof tools available and it is hard to find the right one for a particular application. Manuals usually concentrate on the strong points of a
proof tool, but to make a good choice, one should also know (1) which are the weak points and (2) whether the ..."
Cited by 11 (3 self)
Add to MetaCart
. There is an overwhelming number of different proof tools available and it is hard to find the right one for a particular application. Manuals usually concentrate on the strong points of a proof
tool, but to make a good choice, one should also know (1) which are the weak points and (2) whether the proof tool is suited for the application in hand. This paper gives an initial impetus to a
consumers' report on proof tools. The powerful higher-order logic proof tools PVS and Isabelle are compared with respect to several aspects: logic, specification language, prover, soundness, proof
manager, user interface (and more). The paper concludes with a list of criteria for judging proof tools, it is applied to both PVS and Isabelle. 1 Introduction There is an overwhelming number of
different proof tools available (e.g. in the Database of Existing Mechanised Reasoning Systems one can find references to over 60 proof tools [Dat]). All have particular applications that they are
especially suited ...
- TPHOLs 2000, LNCS 1869 , 2000
"... Theorem provers for higher-order logics often use tactics to implement automated proof search. Tactics use a general-purpose meta-language to implement both general-purpose reasoning and
computationally intensive domain-specific proof procedures. The generality of tactic provers has a performance pe ..."
Cited by 9 (4 self)
Add to MetaCart
Theorem provers for higher-order logics often use tactics to implement automated proof search. Tactics use a general-purpose meta-language to implement both general-purpose reasoning and
computationally intensive domain-specific proof procedures. The generality of tactic provers has a performance penalty; the speed of proof search lags far behind special-purpose provers. We present a
new modular proving architecture that significantly increases the speed of the core logic engine.
- Journal of Automated Reasoning , 1995
"... We use the Boyer-Moore Prover, Nqthm, to verify the Paris-Harrington version of Ramsey's Theorem. The proof we verify is a modification of the one given by Ketonen and Solovay. The Theorem is
not provable in Peano Arithmetic, and one key step in the proof requires ffl 0 induction. x0. Introduction. ..."
Cited by 8 (1 self)
Add to MetaCart
We use the Boyer-Moore Prover, Nqthm, to verify the Paris-Harrington version of Ramsey's Theorem. The proof we verify is a modification of the one given by Ketonen and Solovay. The Theorem is not
provable in Peano Arithmetic, and one key step in the proof requires ffl 0 induction. x0. Introduction. The most well-known formalizations of finite mathematics are PA (Peano Arithmetic) and PRA
(Primitive Recursive Arithmetic). In both, the "intended" domain of discourse is the set of natural numbers. PA is formalized in standard first-order logic, and contains the induction schema, which
can apply to arbitrary first-order formulas. The logic of PRA allows only quantifier-free formulas, which are thought of as being universally quantified, and PRA has the induction scheme for
quantifier-free formulas, expressed as a proof rule. Also, for each primitive recursive function f , PRA contains a function symbol for f and has the recursive definition of f as an axiom. Clearly,
PRA is much weaker tha...
, 1998
"... This paper describes a formalization of a class of fixed-point problems on graphs and its applications. This class captures several well-known graph theoretical problems such as those of
shortest path type and for data flow analysis. An abstract solution algorithm of the fixed-point problem is forma ..."
Cited by 8 (0 self)
Add to MetaCart
This paper describes a formalization of a class of fixed-point problems on graphs and its applications. This class captures several well-known graph theoretical problems such as those of shortest
path type and for data flow analysis. An abstract solution algorithm of the fixed-point problem is formalized and its correctness is proved using a theorem proving system. Moreover, the validity of
the A* algorithm, considered as a specialized version of the abstract algorithm, is proved by extending the proof of the latter. The insights we obtained through these formalizations are described.
We also discuss the extension of this approach to the verification of model checking algorithms.
, 2003
"... This manual describes Isabelle’s formalizations of many-sorted first-order logic (FOL) and Zermelo-Fraenkel set theory (ZF). See the Reference Manual for general Isabelle commands, and
Introduction to Isabelle for an overall tutorial. This manual is part of the earlier Isabelle documentation, which ..."
Cited by 4 (3 self)
Add to MetaCart
This manual describes Isabelle’s formalizations of many-sorted first-order logic (FOL) and Zermelo-Fraenkel set theory (ZF). See the Reference Manual for general Isabelle commands, and Introduction
to Isabelle for an overall tutorial. This manual is part of the earlier Isabelle documentation, which is somewhat superseded by the Isabelle/HOL Tutorial [11]. However, the present document is the
only available documentation for Isabelle’s versions of firstorder
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1397696","timestamp":"2014-04-18T19:11:08Z","content_type":null,"content_length":"35663","record_id":"<urn:uuid:c0e610c1-8252-4896-8235-7fcb9f434c18>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Do Fractions
As the students move higher up in their academics their difficulty level simultaneously increases. Starting from simple calculations in elementary classes they come across some more calculations like
solving the decimals. Although it is not difficult but just to start with they do get into some kind of confusion. For example, they try to add, subtract, multiply and divide the fractions in the
same way that they do with normal numbers or what we call whole numbers. However, when they realize that the calculations are wrong they come across difficulty in making them correct. There are a few
things that are to be remembered while solving fractions.
• 1
First of all while solving fractions related sums, the students should know about the numerator and denominator. The number at the top in a fraction is the numerator and the one at the bottom is
the denominator.
• 2
The next thing to understand is the difference between proper and improper fractions. If the numerator is smaller than the denominator, the fraction is proper and if the numerator is greater than
the denominator that is the improper fraction.
3/8 Proper Fraction
9/8 Improper Fraction
• 3
While adding fractions there are two things that you need to remember. The numerators in both the fractions can be directly added if the denominators in both the fractions are equal such as:
3/8 + 5/8 = 8/8 = 1
• 4
The same fraction can be subtracted directly in the same way i.e. the smaller fraction can be subtracted from the larger one like:
5/8 – 3/8 = 2/8
• 5
For multiplying the decimals, you have to ensure that the fractions are first simplified as much as possible. This means a larger number can be converted into a smaller one by diving both
numerator and denominator with the same number.
4/10 x 8/12 = 2/5 x 2/3
• 6
The division of the fractions can also be made simple by first reducing the fractions into their simplest form such as 4/12 to 2/3. However, it is advised to convert the division sign between the
two fractions into multiplication. This will make the numerator of either of the fraction as denominator and vice versa.
2/3 ÷ 9/6 = 2/3 6/9
|
{"url":"http://www.stepbystep.com/how-to-do-fractions-3751/","timestamp":"2014-04-21T09:43:20Z","content_type":null,"content_length":"42297","record_id":"<urn:uuid:099a3498-2787-47f9-bfff-0d1660ef83f9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [GP/PARI-2.1.0] arithmetic weirdness
Karim.Belabas on Wed, 4 Apr 2001 11:16:41 +0200 (MET DST)
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
Re: [GP/PARI-2.1.0] arithmetic weirdness
On Wed, 4 Apr 2001, Gerhard Niklasch wrote:
> The exact result of the multiplication %47*100 would have five extra
> 1 bits to the right, which we're pushing out during normalization.
> We shouldn't drop them, we should remember enough of them to round
> upward. The standard guard bits + sticky bit technique is being
> called for here.
> For elementary operations and for as many algebraic and transcen-
> dental functions as feasible, we should return the correctly rounded
> t_REAL representation, at the appropriate precision, of the exact
> result of applying the operation (in the reals, not in the t_REALs)
> to the input value(s). Perhaps stuff like LLL would be a trifle
> better behaved if we didn't drop bits in simple multiplications...
> It is not necessary to compute the full 192-bit product of two 96-bit
> significands to achieve this; a few extra bits suffice to do the
> tracking.
This is more or less what is being done internally. All transcendental
functions are computed with one extra word of precision, then the result is
truncated. The first two things (t_REAL) LLL does are
1) compute the number of significant words used in the input
2) add one guard word to all entries [ x = gmul(x, realun(prec+1)) ]
It's a(n arguable) design choice, for the sake of maximal speed for low
precision inputs: the t_REAL type corresponds to low-level multiplication;
correct rounding and precision settings are the caller's responsibility.
Of course you may end up with a number of recursive calls adding up their own
extra word of precision until sluggish death occurs [ remember the zetak
soaring accuracy problem ? ], so this places another burden on the PARI
programmer [ stack management being the number 1 annoyance, until you get
really used to it. Ubiquitous typecasts as opposed to, eg. structs or OO
approach being my number 2 ].
It would be nicer if PARI also supported directed rounding or interval
arithmetic; very useful in a number of settings. Such a package was submitted
3 years ago by Manfred Radimersky but I don't like changing the kernel [e.g
Karatsuba multiplication for t_REAL has been there for 4 years, and is still
not enabled by default, nor is it documented!] and, to my great regret, I
unfortunately never had to time to check and adapt/include it. It still is
on my personal TODO list.
Karim Belabas email: Karim.Belabas@math.u-psud.fr
Dep. de Mathematiques, Bat. 425
Universite Paris-Sud Tel: (00 33) 1 69 15 57 48
F-91405 Orsay (France) Fax: (00 33) 1 69 15 60 19
PARI/GP Home Page: http://www.parigp-home.de/
|
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0104/msg00009.html","timestamp":"2014-04-17T12:38:33Z","content_type":null,"content_length":"6475","record_id":"<urn:uuid:1417782f-d12e-4706-b659-7daa39d2feab>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FHSST Physics/Vectors/Examples
From Wikibooks, open books for an open world
Some Examples of Vectors[edit]
Imagine you walked from your house to the shops along a winding path through the veld. Your route is shown in blue in Figure 3.1. Your sister also walked from the house to the shops, but she decided
to walk along the pavements. Her path is shown in red and consisted of two straight stretches, one after the other.
Figure 3.1: Illustration of Displacement
Although you took very different routes, both you and your sister walked from the house to the shops. The overall effect was the same! Clearly the shortest path from your house to the shops is along
the straight line between these two points. The length of this line and the direction from the start point (the house) to the end point (the shops) forms a very special vector known as displacement.
Displacement is assigned the symbol $\overrightarrow{s}$
Definition: Displacement is defined as the magnitude and direction of the straight line joining one's starting point to one's final point.
Definition: Displacement is a vector with direction pointing from some initial (starting) point to some final (end) point and whose magnitude is the straight-line distance from the starting point
to the end point.
(NOTE TO SELF: choose one of the above)
In this example both you and your sister had the same displacement. This is shown as the black arrow in Figure 3.1. Remember displacement is not concerned with the actual path taken. It is only
concerned with your start and end points. It tells you the length of the straight-line path between your start and end points and the direction from start to finish. The distance travelled is the
length of the path followed and is a scalar (just a number). Note that the magnitude of the displacement need not be the same as the distance travelled. In this case the magnitude of your
displacement would be considerably less than the actual length of the path you followed through the veld!
Definition: Velocity is the rate of change of displacement with respect to time.
The terms rate of change and with respect to are ones we will use often and it is important that you understand what they mean. Velocity describes how much displacement changes for a certain change
in time.
We usually denote a change in something with the symbol $\Delta$ (the Greek letter Delta). You have probably seen this before in maths — the gradient of a straight line is $\frac{\Delta y}{\Delta x}$
. The gradient is just how much y changes for a certain change in x. In other words it is just the rate of change of y with respect to x. This means that velocity must be
$\begin{matrix}\overrightarrow{v}=\frac{\Delta \overrightarrow{s}}{\Delta t} =\frac{\overrightarrow{s}_{final}-\overrightarrow{s}_{initial}}{t_{final}-t_{initial}}\end{matrix}$
(NOTE TO SELF: This is actually average velocity. For instantaneous $\Delta$'s change to differentials. Explain that if $\Delta$is large then we have average velocity else for infinitesimal time
interval instantaneous!)
What then is speed? Speed is how quickly something is moving. How is it different from velocity? Speed is not a vector. It does not tell you which direction something is moving, only how fast. Speed
is the magnitude of the velocity vector (NOTE TO SELF: instantaneous speed is the magnitude of the instantaneous velocity.... not true of averages!).
Consider the following example to test your understanding of the differences between velocity and speed.
Worked Example 3: Speed and Velocity[edit]
Question: A man runs around a circular track of radius 100m. It takes him 120s to complete a revolution of the track. If he runs at constant speed, calculate:
1. his speed,
2. his instantaneous velocity at point A,
3. his instantaneous velocity at point B,
4. his average velocity between points A and B,
5. his average velocity during a revolution.
1. To determine the man's speed, we need to know the distance he travels and how long it takes. We know it takes $120 s$ to complete one revolution of the track. What distance is one revolution
of the track? We know the track is a circle and we know its radius, so we can determine the perimeter or distance around the circle. We start with the equation for the circumference of a circle:
$\begin{matrix}C & =& 2\pi r \\ & = & 2\pi (100m) \\& = & 628.3\;m.\end{matrix}$
2. Now that we have distance and time, we can determine speed. We know that speed is distance covered per unit time. If we divide the distance covered by the time it took, we will know how much
distance was covered for every unit of time.
$\begin{matrix} v & = &\frac{Distance\ travelled}{time\ taken} \\ & = & \frac{628.3m}{120s} \\ & = & 5.23\ m.s^{-1} \end{matrix}$
3. Consider point A in the diagram:
We know which way the man is running around the track, and we know his speed. His velocity at point A will be his speed (the magnitude of the velocity) plus his direction of motion (the direction of
his velocity). He is moving at the instant that he arrives at A, as indicated in the diagram below.
His velocity vector will be $5.23\ m.s^{-1}$ West.
4. Consider point B in the diagram:
We know which way the man is running around the track, and we know his speed. His velocity at point B will be his speed (the magnitude of the velocity) plus his direction of motion (the direction
of his velocity). He is moving at the instant that he arrives at B, as indicated in the diagram below.
His velocity vector will be $5.23\ m.s^{-1}$ South.
4. So, now, what is the man's average velocity between Point A and Point B?
As he runs around the circle, he changes direction constantly. (Imagine a series of vector arrows pointing out from the circle, one for each step he takes.) If you add up all these directions and
find the average it turns out to be ...Right. South west. And, notice that if you just looked for the average between his velocity at Point A and at Point B, that comes out south west, too. So his
average velocity between Point A and Point B is $5.23\ m.s^{-1}$ south west.
5. Now we need to calculate his average velocity over a complete revolution. The definition of average velocity is given earlier and requires that you know the total displacement and the total
time. The total displacement for a revolution is given by the vector from the initial point to the final point. If the man runs in a circle, then he ends where he started. This means the vector
from his initial point to his final point has zero length. A calculation of his average velocity follows:
$\begin{matrix} \overrightarrow{v}&=&\frac{\Delta\overrightarrow{s}}{\Delta t} \\ &=& \frac{0m}{120s} \\ &=& 0\ m.s^{-1} \end{matrix}$
Remember: Displacement can be zero even when distance is not!
Definition: Acceleration is the rate of change of velocity with respect to time.
Acceleration is also a vector. Remember that velocity was the rate of change of displacement with respect to time so we expect the velocity and acceleration equations to look very similar. In fact:
$\begin{matrix}\overrightarrow{a}=\frac{\Delta \overrightarrow{v}}{\Delta t} =\frac{\overrightarrow{v}_{final}-\overrightarrow{v}_{initial}}{t_{final}-t_{initial}}\end{matrix}$
(NOTE TO SELF: average and instantaneous distinction again! expand further — what does it mean?)
Acceleration will become very important later when we consider forces.
Imagine that you and your friend are pushing a cardboard box kept on a smooth floor. Both of you are equally strong. Can you tell me in which direction the box will move ? Probably not. Because I
have not told you in which direction each of you are pushing the box. If both of you push it towards north, the box would move northwards. If you push it towards north and you friend pushes it
towards east, it would move north-eastwards. If you two push it in opposite directions, it wouldn't move at all !
Thus in dealing with force applied on any object, it is equally important to take into account the direction of the force, as the magnitude. This is the case with all vectors.
|
{"url":"http://en.wikibooks.org/wiki/FHSST_Physics/Vectors/Examples","timestamp":"2014-04-16T10:18:30Z","content_type":null,"content_length":"39972","record_id":"<urn:uuid:6a454534-ab85-4e4e-9483-f1178eb62954>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: image show
Replies: 2 Last Post: Apr 28, 2010 1:13 PM
Messages: [ Previous | Next ]
uny gg Re: image show
Posted: Apr 28, 2010 12:44 PM
Posts: 45
Registered: 8/4/09 Oh.. ;)
I found a way to figure out.
I have used a function called "imagesc"..
It shows that one I would like to!! ;)
Have a great day!
"uny gg" <illinois.ks@gmail.com> wrote in message <hr9mup$j93$1@fred.mathworks.com>...
> I did image clustering.. within image itself.
> I applied the following procedure until Step 4 in my images.
> http://www.mathworks.com/products/demos/image/color_seg_k/ipexhistology.html
> Using this " imshow(pixel_labels,[]), title('image labeled by cluster index');" command,
> I can see the bw image based on its clustered index. For example K = 3, there will be three index values. If K=6, there will be six index values.
> My question is that is there any way to see this image as a color picture.
> I mean if K=3, currently, it shows that K=1, almost black, if K=2 then grey, if K=3 then white.. something like this.
> But is it possible to show for example, when K=1, red, if k=2, blue, if K=3 green, etc..
> So, I would like to see as a color clustered index image instead of bw image.
> I tried to do it by changing the value in [].. but, it doesn't work for me.
> Anybody knows this?
> Thanks.
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2067831&messageID=7051894","timestamp":"2014-04-19T15:52:22Z","content_type":null,"content_length":"19478","record_id":"<urn:uuid:3b754b20-f818-4bf4-9386-93217a8e778c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sign-Magnitude Drive
In this article I’ll talk about one of the popular drive-modes of H-bridges, the Sign-magnitude drive in detail. If you’re not familiar with H-bridges in general, I suggest you read the previous part
of the series first, where we’ve looked at the basic operating principles of an H-bridge and went through the various meaningful operating modes.
Let’s quickly review the basics first! Our H-bridge looks like this: We will also make use of our motor equivalent circuit that I’ve introduced before:
Basic operation
In sign-magnitude drive, we have four control modes to choose from:
Mapping 1 Q1 Q2 Q3 Q4
on-time state close open open close
off-time state close open close open
Mapping 2 Q1 Q2 Q3 Q4
on-time state close open open close
off-time state open close open close
Mapping 5 Q1 Q2 Q3 Q4
on-time state open close close open
off-time state close open close open
Mapping 6 Q1 Q2 Q3 Q4
on-time state open close close open
off-time state open close open close
These modes describe the way we map the states of the four switches to the ‘on-time’ and the ‘off-time’ of our PWM control signal. If you carefully investigate these tables, you’ll see that the four
options describe the four possibilities for two binary choices: whether to keep the a-side or the b-side in a constant state, and whether to keep the low-side or the high-side switch closed
You will see pretty soon that one of these binary choices is usually made statically, while a control signal is used to decide between the remaining two choices.
However, for now, let’s just concentrate on mapping 1 and see, how the bridge operates! As the mapping tells us, during the on-time, Q1 and Q4 are closed. This means that the left-side of the motor
is connected to V[bat], while the right-side is grounded. Current can flow from the supply through the motor:
When the off-time comes along, Q1 stays closed, but Q4 opens and Q3 closes instead. In this state, there’s no path from the supply to ground through the bridge. However, both of the motor terminals
are connected to V[bat], basically short-circuiting the motor. If there was current flowing through the motor at the time the switch-over happened, that current can continue circulating around the in
that loop:
The voltages and the current through the bridge will follow the following wave-forms:
The average voltage the motor sees can be calculated as the following:
V[mot_avg] = V[bat] * t[on]/t[cycle], where t[cycle] is the cycle time, and t[on] + t[off] = t[cycle]
From this equation something should be immediately obvious: this mapping can’t move the motor backwards. For that we would need to be able to apply a negative (average) voltage to the motor
terminals, but that’s not possible: the motor voltage can only be adjusted between 0 and V[bat]. That would be a problem, but if we do the same math for mapping 5, we’ll see that in that mode, the
opposite is true: that mode can only turn the motor in the reverse direction and the average motor voltage can only be between 0 and -V[bat].
In order to make a functional H-bridge, we need to employ both of these mappings and introduce a control signal that can choose between the two. This is the origin of the name of the drive mode: one
control signal – the one that chooses between the two mappings – is used to determine the ‘sign’ of the voltage applied to the motor, while the other – the PWM signal – is used to determine the
‘magnitude’ of that (average) voltage.
The remaining two mappings are only slightly different from the previous two: the difference is that, during the off-time, instead of the two high-side switches, the two low-side ones are turned on.
This means that the motor terminals are both connected to ground instead of the battery, but they are still shorted together and the motor current still circulates inside the bridge during the
Current flow
Let’s take a closer look now at the way the current flows in the system! During the ‘on-time’ the motor inductors see a voltage difference of V[bat]-V[g] (provided the motor rotates in the same
direction, the bridge tries to rotate it). If we disregard the internal resistance of the motor for a minute (or assume that the switching frequency is much higher than the electrical resonance
frequency of the motor) this voltage will create a linear current ramp on the inductor (dI/dt = V/L[m]). Since we assume V[bat] is constant, the slope of the current increase will be determined by V
[g], or in other words by the speed of the motor.
When the motor runs under no load V[g] is very close to V[bat], so the current will change very slowly. If the motor is stalled, the current will rise much faster, as V[g] is 0. The fastest current
rise will happen if we abruptly change the drive direction from full-speed one way to full-speed the other way. In that case V[g] will be close to V[bat] in absolute value, but reversed in polarity.
During the off-time, the motor is short-circuited. If we disregard the internal resistance of the switches and the motor, the current can flow without interruption, and the inductors see V[g] between
their terminals. The current will start decreasing linearly, determined by the generator voltage and the motor inductor.
Steady State
We say the bridge operates in a steady state if the cycle-to-cycle averages if the various parameters (voltages, currents) remain constant. The motor doesn’t accelerate nor decelerates and outputs a
constant torque.
If things don’t change from cycle-to-cycle, it means that the motor current at the beginning of a cycle must be the same as at the end. Of course this doesn’t mean that the current stays constant
during the cycle and this current-change is called ripple-current. Assuming linear current change during both the on- and the off time, the current deltas are the following:
I[ripple] = (V[bat]-V[g])/L[m]*t[on] = V[g]/L[m]*t[off]
If we express V[g] from this, we get:
V[g] = V[bat]* t[on]/(t[on]+t[off]) = V[bat] * t[on]/t[cycle] = V[mot_avg]
What we get is this: in a steady-state, the generator voltage must be equal to the average voltage the motor sees. The average current will be whatever it needs to be to reach that condition. Now, if
we put this V[g] back to the current equation, we get this:
I[ripple] = V[bat] / L[m] / t[cycle] * t[on]*t[off]
and since t[off] = t[cycle]-t[on]:
I[ripple] = V[bat] / L[m] / t[cycle] * t[on]*(t[cycle]-t[on])
This is a second-order function (a parabola) with its maximum at t[on] = t[cycle]/2. So the ripple current reaches its maximum at 50% duty cycle, and its value is:
I[ripple_max] = V[bat ]/ L[m] * t[cycle]/4
Reality Check
If you recall, the generator voltage is proportional to speed and current is proportional to torque. From the previous chapter it seems as though we have perfect speed control: the speed which is
proportional to the generator voltage is equal to the average voltage seen by the motor, which in turn only depends on the duty cycle of the PWM input, our control signal. The torque (current) on the
motor will be whatever it has to be to make that happen.
There’s something wrong with that picture isn’t here? After all we know that DC motors don’t keep a constant speed under load changes without any control circuit. True, and it’s easy to see what the
problem is: we disregarded the internal resistance of the motor (and the switches).
Once you take those into account, you’ll see, that the motor current drops some voltage on those resistors, and the inductor sees only the remainder. (In the following equation I’ve made the
simplification to assume that the motor current doesn’t change much, so the voltage drop on the resistor is relatively constant during the on- and off-times.)
V[L_on] = V[bat] – V[g] – I[mot_avg]*R[m] during the on-time and
V[L_off] = -V[g] – I[mot_avg]*R[m] during the off-time.
If you put this into our previous ripple-current equation, you’ll see the result is a current- (or torque-) dependent difference between our intended speed and the actual speed of the motor:
V[g] = V[bat] * t[on]/t[cycle] – I[mot_avg]*R[m] = V[mot_avg] – I[mot_avg]*R
Input Capacitor
In the previous chapter we calculated the ripple current, but we kept saying that the average motor current (in the ideal case at least) will settle at whatever level it needs to to make V[g] equal
to V[mot_avg]. This statement has an interesting consequence: we can’t predict the direction the current will flow during the on-time and the off-time. In fact, if I[mot_avg] is lower than half of I
[ripple], the motor current changes direction twice during the cycle:
Now, one of those direction changes happen during the off-time, when the motor is short-circuited, but the other happens when the motor is connected to the supply. What follows is that at the
beginning of every on-time for a short while at least, the current flows out of the bridge. Most supplies can’t deal with this quick reversal of current flow and if the reverse-current can’t find its
way back to the power supply, the supply voltage will start rising potentially to dangerous levels.
To handle this reverse current properly we need to put a capacitor on the input terminals of the bridge to temporarily soak up the current coming from the bridge. The capacitor will release it’s
extra charge back into the motor in the part of the cycle when the current flows to the ‘proper’ direction:
But just how big this capacitor needs to be? Well, that depends on a lot of things, so let’s list them:
• How much reverse-current can the power supply handle?
• How much voltage-hike can the circuit live with?
• What are the motor characteristics (mostly inductance)?
• What is the switching frequency and duty cycle of the bridge?
• How much torque the motor needs to output (the average current through the motor)?
Since some of these parameters are dependent on the operating conditions and the exact application, let’s first do some simplifications and assume worst-case conditions:
• Let’s assume the power supply can’t take any reverse current
• Let’s also assume that I[mot_avg] is 0, so the current changes direction exactly at half of the on-time. (The capacitor can’t be sized for the condition when even I[mot_avg] is reverse compared V
[mot_avg], since in that case the capacitor can’t completely release its extra charge during the cycle. We will discuss that situation later).
• Finally let’s assume that the ripple current is at its maximum, so the duty cycle is at 50%
Under those conditions, the current will rise for 50% of the time, and cross 0 at 25% of the total cycle time:
The total charge released back to the system during the first-half of the on-time is:
Q[release] = 1/2*((I[ripple_max]/2)*t[on]/2)
We also know that
t[on] = t[cycle]/2
Finally we know our ripple current is at its maximum:
I[ripple_max] = V[bat ]/ L[m] * t[cycle]/4
Putting these together and do some simplifications, we get:
Q[release] = 1/64 * V[bat] / L[m] * t[cycle]^2
If we assume that the power supply can’t take any of this charge as we’ve said before – in other words all of it needs to be stored in the capacitor – the capacitor voltage will rise:
V[bat_ripple] = Q[release]/C
So if we know how much change in the supply voltage we can tolerate, we get a capacitor value:
C = Q[release]/V[max_bat_ripple]
Substituting Q[release ]we get:
C = 1/64 * V[bat]/V[max_bat_ripple] / L[m] * t[cycle]^2
Note that the term V[bat]/V[max_bat_ripple] is the ratio between the supply voltage and the ripple voltage allowed on the it. If for example we allow for 5% ripple, this value is a constant 20
independent of the supply level. Also note that the capacitance value needed increases quadratically with the cycle time.
I’ve measured a few motors, and the lowest inductance value I’ve seen was in the order of 30µH, but of course this value varies a lot from motor to motor. So, to give you actual numbers, let’s take
that 30µH inductance value, allow for a 5% ripple on the power supply and a 20kHz switching frequency. With that, we get a minimum ~26µF of capacitance needed on the power supply. If we only want to
switch at – let’s say – 1kHz though, the capacitance needed is more than 10000µF!
Transient states
Whenever V[g] is not equal to V[mot_avg], the bridge is not in steady-state. When it is lower, the motor is accelerating, when it is higher, it is braking. We’ve seen above that the on-time current
change is:
I[delta_on] = (V[bat]-V[g])/L[m]*t[on]
while the off-time current change is:
I[delta_off] = V[g]/L[m]*t[off]
Let’s express V[bat] from V[mot_avg] and put that into the first equation! We get:
I[delta_on] = (V[mot_avg]-V[g])/L[m]*t[on] + V[mot_avg]/L[m]*t[off]
Next, let’s see what the difference between the current at the beginning of the on-time and the end of the off-time:
I[delta_cycle] = I[delta_on] – I[delta_off] = (V[mot_avg]-V[g])/L[m]*t[on] + V[mot_avg]/L[m]*t[off] – V[g]/L[m]*t[off]
After some re-arranging:
I[delta_cycle] = (V[mot_avg]-V[g])/L[m]*t[cycle]
In other words, the motor current changes from cycle to cycle proportionally to the difference between the intended generator voltage (V[mot_avg]) and the actual generator voltage (V[g]).
Since current is proportional to torque, this means that if there’s a difference between the two voltages, the torque will start changing linearly. In this simple model, the current would keep
changing forever, but in reality that of course isn’t the case. For one, internal losses limit the maximum current as we’ll see in a minute, but eventually, the changed torque would hopefully change
the motor speed as well, bringing it closer to V[mot_avg], slowing down the current change. Eventually a new steady-state is reached where V[g] is equal to V[mot_avg] and the torque changed to a new
value that is needed in order to maintain that balance. (If you read the article on motor modeling, you’ll see that in many cases the motor and the attached mechanical system can be modeled as a
large capacitor and some sort of loss. That capacitor, with the internal internal resistance of the motor determines the time-constant by which this new steady-state is reached, and the actual
response will be closer to an exponential curve.)
Now, this is all fine for acceleration, but there’s something strange that happens during braking. When V[g] is higher than V[mot_avg], the torque (current) will start decreasing. As the electrical
time-constants are usually several orders of magnitude lower than the mechanical ones (again, see the article on motor modeling for the details), the current (and the torque) turns the opposite
direction to the shaft rotation (generator voltage).
Let’s see how big this reverse current can be, but in order to do so, we’ll first have to re-introduce the motor resistance into our model. The reason is that without the resistor, the motor current
will just keep decreasing (or increasing in the negative direction) until the generator voltage changes and becomes equal to V[mot_avg]. In other words, without the resistor, the motor current can be
an arbitrarily large negative number. With the resistor in place however, the bridge will quickly settle to a constant cycle-to-cycle current, as the voltage drop on the resistor will put an end to
the current-increase over the inductor.
Once that happens, we can use our steady-state equations, to figure out the average current:
V[g] = V[mot_avg] – I[mot_avg]*R[m]
and solving that to the current, we get:
I[mot_avg] = (V[mot_avg]-V[g])/R[m]
The reason the system can be in steady-state, while V[g] is not equal to V[mot_avg] is the internal motor resistance. What happens is that the internal resistance of the motor makes the
cycle-to-cycle current change zero, which was the initial assumption in deriving the steady-state equations.
You can also see, that the average motor current is negative, as expected. It means that the torque is in opposite direction to the shaft rotation, so we are in fact braking the motor.
During the off-time this current is circulating through the motor and either the two low- or the two high-side FETs. However, during the on-time, the only way for the current to flow is through the
supply. While the current through the motor is always in the ‘negative direction’ – more precisely opposing the generator voltage – the same current flows into the supply in the positive or negative
direction, depending on which way the FETs are open during the on-time:
This current either charges the battery (first case), which is called regenerative braking or dis-charges it (second case), which is called dynamic braking.
Regenerative braking
While one would think that re-charging the battery is a good thing, there are two serious limitations to the effectiveness of regenerative braking:
1. As we’ve seen, the average current during braking depends on the difference between V[mot_avg] and V[g], and reaches its maximum when V[mot_avg] = -V[bat]. However, re-charging only happens if
the current flow goes against the battery voltage, so we can’t operate the bridge in the reverse direction. This limits the amount of torque that’s available during regenerative braking. If more
torque is needed, the bridge needs to operate in the non-regenerative (dynamic braking) domain. The regenerative braking torque will also decrease as the speed (V[g]) decreases: it’s not possible
to provide a constant deceleration down to 0 using regenerative braking alone.
2. Even though the motor braking current gets greater, as V[mot_avg] gets further away from V[g], that doesn’t mean that all that current will re-charge the battery: only during the on-time does the
current flow through the battery. As you decrease V[mot_avg] to increase the braking current, t[on]will decrease as well and the re-charging effect will decrease with it. To be more precise, the
amount of energy transferred in each cycle to the battery is the following:E[re-charge] = V[bat] * I[mot_avg] * t[on]which becomes this after doing the proper substitutions for I[mot_avg] and V
[mot_avg]:E[re-charge] = V[bat]/R[m]*(V[bat]*t[on]/t[cycle]-V[g])*t[on]As you can see this is a quadratic function of t[on], and it reaches its maximum whent[on] = 1/2 * V[g]/V[bat] * t[cycle]in
other words when the V[mot_avg] is half of V[g]. This effect will further limit your ability to regenerate energy from the motor while braking, or your ability to quickly brake the motor while
maintaining good re-generation efficiency, depending on which way you look at it.
Even if you’re fine with all the limitations above, the question still remains: what to do with the back-converted energy? If your system is battery operated, the battery may or may not absorb this
energy, depending on its technology. Even more serious problem is that the amount of charge a battery can take depends on its charge level – a fully charged battery can’t take any more charge. This
means that you either risk overcharging your battery or limit your braking capability at least under some circumstances. Neither options are too pleasing.
If you run your system from mains (and necessarily through a power supply, since mains is AC and we’re driving a DC motor here), you’re even more limited: unless you specially design your power
supply, it simply can’t pump energy back into the power outlet.
In both cases it seems that your safest bet is to consume or store the regenerated energy locally. If you have other loads consuming power, of course you can power those loads up to their current
requirements. You might have lights, computers, other motors, heaters, sensors, what not running at the same time, that need energy. All that is great, but you don’t want the brake distance depend on
how much the seat-wormer is on in your car, do you?
In other words, all these options provide some, but not very deterministic or reliable places to put energy to. What you would really need is some sort of reliable energy storage device that is not
as sensitive as a battery and can store a lot of energy. Effectively a capacitor. Trouble is, actual capacitors of the sizes needed (easily several Farads) are not feasible. There is however another
way (you would have to read more on mechanical modeling to understand why): stick another motor with a big wheel attached to it into your system. When you have more energy then what you know what to
do with, simply spin-up the wheel to store the energy. When you are in need of energy, use (again) regenerative braking on that wheel to regain the energy and supply your needs. This is called a
flywheel, and is in fact used in certain systems.
Trains, subways and other large, but well controlled systems usually benefit from regenerative braking as well as they quite often have another consumer (another, accelerating train) that can use the
regenerated energy.
As you can probably see from this much, the problem of safely and reliably putting the braking energy somewhere is a complex one. It requires full understanding of the complete electro-mechanical
system, and there is no one-size fits all solution.
Dynamic braking
Let’s say you don’t want to use regenerative braking because of all the complexities involved. So you want to operate in the dynamic braking mode of the bridge, and avoid regenerative braking
altogether. How do you do that? The first thing you need to do is to figure out if the system is in regenerative braking mode. You can go about it in two ways:
1. We know the bridge is in regenerative mode if V[g] is higher than V[mot_avg] but the same polarity. To use this, we need to measure V[g] or something related to it. That’s not necessarily a
simple thing to do. I will come back to techniques for doing so in a future article about speed-control mechanisms, but usually you will need a method to measure the shaft speed using an encoder
for example.
2. Use the knowledge that the bridge is in regenerative mode if the bridge current flows in reverse direction compared to the battery voltage. This involves measuring the current through the bridge
including it’s polarity.
After you have the right feedback in place, you can adjust your drive direction to make sure that V[mot_avg] is always reverse in polarity than V[g] any time you’re braking. But there’s another
complication: how do you know you need to brake? You only know that if you know the intended and the actual speed of the motor, so you need to measure the shaft-speed (or V[g] somehow) and you have
to implement some sort of a control circuit.
If you did all that, you can successfully avoid regenerative braking. Then another problem arises: how much torque can you apply in braking mode to the shaft, provided you would only want to operate
in the dynamic braking domain? We know that torque is related to the motor current and we’ve also seen that the average motor current during braking is the following:
I[mot_avg] = (V[mot_avg]-V[g])/R[m]
To make sure that the motor remains in the dynamic braking domain, you will have to make sure that V[mot_avg] has the same polarity as the motor current, which is opposite to V[g]. So if V[g] is
positive as in all of our examples so far, V[mot_avg] will need to be negative. That has the unfortunate consequence that I[mot_avg] cannot be smaller (in absolute value) than a certain amount:
abs(I[mot_avg]) >= V[g]/R[m]
This is a problem because it tells us that we can’t brake the motor arbitrarily gently. The only way to achieve that is with regenerative braking. As a matter of fact, it also tells us that he
minimum amount of braking torque we can apply to the motor depends on the speed and is proportional to it. So at high speeds we will have more abrupt braking than at low speeds.
By now you probably see that dealing with bridges in phase-magnitude drive is not simple. At first sight it seems you only need two control signals – a PWM input to set the average voltage on the
motor and a digital signal to set the direction – and you’re ready to go.
Detailed analysis however shows that no matter how you operate your bridge, braking is a problem. You either use regenerative braking with all of its complications or try to avoid it which isn’t
simple either. You probably need to monitor the bridge current and the supply voltage to make sure you you don’t over-charge your battery. If you want to avoid regenerative braking completely you’ll
most likely have to somehow measure the motor speed as well.
The silver lining is that at least you have a fail-safe mode to go back to: you can always set the PWM to 0% duty-cycle, effectively short-circuiting the motor. That will safely brake it down to a
halt (almost) without risking to over-charge your battery or causing any other harm to your system. When you do that however, you have no control over how long it will take for the motor to stop
though, so while it sounds safe from an electrical perspective, it might not be from the mechanical side.
Also, if there’s an external torque applied to the motor, simply short-circuiting it will never completely stop it as the braking torque is proportional to the speed: your electrical car will never
stop on a slope just by short-circuiting the motor. You’re battery won’t explode, so there’s some benefit, but you would still end up in the ditch.
Talking about safety, there’s another positive about sign-magnitude drive: you can set the input signals to a static state (0% PWM is basically a constant low- or high- voltage) at which point the
motor won’t see any voltage from the battery. This is especially important in systems where some setup is needed on power-on before normal operation can start, like in the case of microcontrollers.
You can configure the HW so that the default state of the control pins is such that the motor is safely stationary and only move out from that state only after the system initialization is complete.
All in all, sign-magnitude drive seems simple at first, but as with many things, the devil is in the details. For undemanding, simple applications it might be a good fit as it is, in most cases
you’re probably going to need some sort of monitoring circuitry that can detect dangerous or unwanted situations and intervene to prevent them.
Where to go from there?
I will come back to the sign magnitude drive when we will talk about drive circuits and component selection. Before that however I will cover cover the other main drive mode, the lock anti-phase
drive and compare the two to each other.
14 thoughts on “Sign-Magnitude Drive”
1. Dude, you might have mistaken dI/dt for I. dI/dt is inversely proportional to L. The solution for the diff.eq is a exponential function. The current does NOT increase or decrease linearly in
time. Otherwise, it’s an awesome write-up.
□ Thanks for the comment. Actually, my statement about a linear current change is an approximation, but a correct one. If we disregard all resistance in the circuit, the solution of the
differential equation is in fact the linear function. Of course in reality there’s always some resistance somewhere, if which case you’re right: the solution is an exponential function.
What happens though is that if you switch the circuit fast enough, you will only see the very first part of this exponential response which can be well approximated with a line even in the
presence of some resistance. There is a more accurate frequency-domain explanation to this, which I meant to expand upon in an ‘advanced’ article in the future. For now it’s suffice to say
that in almost all H-bridge applications your operating PWM frequency is higher than the (inverse of the) electrical time constant of the motor (Rm-Lm) to minimize ripple-current. In those
cases, the linear approximation is a good one.
2. Hi,
No need to publish this comment. At the beginning of the article, there’s this sentence “As the mapping tells us, during the on-time, Q2 and Q4 are closed.” with an image showing current flow. I
think you meant Q1 and Q4.
3. In the start of the “Breaking” section, you say “When Vg is lower than Vmot_avg …”; don’t you meant “Vg higher than Vmot_avg”? Otherwise I can’t see how the current becomes negative. If I
understood correctly the scenario, it’s let’s say, when you speed to X and then leave the throttle a bit but the inertia of the vehicle makes it take some time to reduce to the new lower speed,
and in this situation the motor will be spinning faster than the throttle value (PWM duty cycle) “would allow”. Or where’s my understanding failing?
□ Thank you Nuno for all these corrections! You are right and I’ve fixed the page now.
☆ You’re welcome. Spotted another one, in the following sentence, “As the electrical time-constants are usually several orders of magnitude lower than the electrical ones”. I think you
meant “… than the mechanical ones.”
Mega-excellent job anyways :). Cheers
4. Hello,
I have a small query regarding the operation of H-bridge in sign magnitude mode.
In Sign magnitude mode, you mentioned that when Q1 closes, Q4 opens and Q3 closes, the decaying current circulates in a loop from source to drain of Q3. Does the current flows from source to
drain in NMOS when source is tied to body internally? Or does the current flow through the internal body diode of the MOSFET? I have searched many articles and forums regarding this query but
found mixed answers. Could you please clarify this?
□ When a MOSFET is open (that is it’s gate-source voltage is above threshold) it conducts in both directions, the body diode doesn’t play a role. So, in sign-magnitude drive, when you open both
Q1 and Q3 for the off-time, the decaying current flows through the channel of the fet and not through the body-diode. The reason you might find conflicting data on this on the web is that
there is a similar drive mode (I’ve called it asynchronous sign-magnitude drive where the decaying current is in fact conducted through the body diode. If you look at the current-flow
diagrams in the two articles, you’ll see the difference highlighted. Others might not make such a clear distinction between the two drive-modes, or use a different nomenclature than I do,
which might result in some confusion.
I hope this helps,
5. Can you please explain how to determine the ripple current requirement for the input capacitor? Thank you for providing all this information.
□ The maximum ripple current on the capacitor is the maximum ripple current through the motor.
6. Hello,
Thank you for the great article. i am dealing with DC motor driving for a month and have some problems i guess regarding the FETs needed for my design and this material i hope will clarify for me
the missing points.Thank you. (i have motor that with high loads can reach up to 5A and i don’t succeed to design something that will work.)
I have a question though regarding the beginning of the artice, maybe i missed something – what is Vg voltage? you calculates motor voltage as Vbat-Vg and i didn’t see this value before (at the
beginning of the article or the previous one.).
Can you please explain?
thanks a lot
□ Thanks for your comment Yulya. V[g] is the generator voltage of the motor. It is introduced on the second image in the article.
Good luck with your design!
7. Hey,
this is a really amazing article. I learned a lot!
But when discussing breaking the motor using sign-magnitude-drive, I got a question:
I undestand most of the things about using regenerative und dynamic breaking. But I was asking myself:
What happens if I use dynamic breaking (requirement: Vmot_avg > Vg with reversed polarity) and now Vmot_avg gets lower than Vg (requirement gets injured)? Than we do something like regenerative
breaking but with the wrong polarity… What will happen?
Lets say I have a huge capacitor, delivering power to the H-bridge and a power-supply is connected in parallel through a diode. In my eyes the current trys to lower the voltage on the capacitor.
If my power-supply delivers enough current, I can hold the capacitor-voltage on a positive level (electrolyt-capacitor would be destroyed otherwise). But there is something wrong in my thinking…
In this case I would stick two currents into the system (power-supply and generator) but what happens with that energy?
Sorry, my head is spinnig… Maybe you can help me to solve it… Thanks a lot!
□ Thanks!
The short answer is: nothing. You have dynamic braking still. I think your confusion starts where you say: Vmot_avg > Vg with reversed polarity. Braking happens when Vmot_avg < Vg, not in
absolute value, but sign included. So any time Vmot_avg has reversed polarity compared to Vg, independent of magnitude, you’re braking. In fact, you are in braking using dynamic braking, no
re-generation will happen.
So where does the energy go? In these – highly idealized – models the only resistive component is the motor winding resistance, so that’s where all this braking energy gets burned up. In
reality, some if it gets to heat the FETs, the input capacitor, the wires and the battery.
I hope this helps,
|
{"url":"http://modularcircuits.tantosonline.com/blog/articles/h-bridge-secrets/sign-magnitude-drive/","timestamp":"2014-04-20T20:55:04Z","content_type":null,"content_length":"77002","record_id":"<urn:uuid:32ab4795-1790-49ab-8b38-be0666e06458>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
STL random_sample
December 31, 2012
By Rcpp Gallery
An earlier post looked at random shuffle for permutations. The STL also supports creation of random samples.
Alas, it seems that this functionality has not been promoted to the C++ standard yet — so we will have to do with what is an extensions by the GNU g++ compiler.
The other drawback is the sampling without replacement.
As in the previous post, we use a function object conformant to the STL’s requirements for a random number generator to be able to use R’s RNG.
#include <Rcpp.h>
// wrapper around R's RNG such that we get a uniform distribution over
// [0,n) as required by the STL algorithm
inline int randWrapper(const int n) { return floor(unif_rand()*n); }
// it would appear that randomSample is still only a GNU g++ extension ?
#include <ext/algorithm>
// [[Rcpp::export]]
Rcpp::NumericVector randomSample(Rcpp::NumericVector a, int n) {
// clone a into b to leave a alone
Rcpp::NumericVector b(n);
__gnu_cxx::random_sample(a.begin(), a.end(),
b.begin(), b.end(), randWrapper);
return b;
We can illustrate this on a simple example:
a <- 1:8
randomSample(a, 4)
[1] 1 2 7 4
randomSample(a, 4)
[1] 1 2 7 4
for the author, please follow the link and comment on his blog:
Rcpp Gallery
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or
|
{"url":"http://www.r-bloggers.com/stl-random_sample/","timestamp":"2014-04-16T04:13:03Z","content_type":null,"content_length":"36848","record_id":"<urn:uuid:004e8608-3ede-411a-975c-8f578fb91936>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Swap two bits with a single operation in C?
up vote 11 down vote favorite
Let's say I have a byte with six unknown values:
and I want to swap bits 2 and 4 (without changing any of the ? values):
But how would I do this in one operation in C?
I'm performing this operation thousands of times per second on a microcontroller so performance is the top priority.
It would be fine to "toggle" these bits. Even though this is not the same as swapping the bits, toggling would work fine for my purposes.
3 Wow three identical answers with different numbers - good luck with that! – Benjol Jun 11 '09 at 15:01
1 Are you trying to interchange the two bits, or toggle the bits? That is, does 00 become 00 or 11? – Adam Rosenfield Jun 11 '09 at 15:01
3 He said swap, they all did toggle :) – Benjol Jun 11 '09 at 15:02
3 If the intention here is to swap, as in exchange, the two bit values, then the answers below are all incorrect. – Andre Miller Jun 11 '09 at 15:03
3 If you're doing it on a known microcontroller you might be better off looking into assembly instructions rather than doing this in C. – Matthew Monaghan Jun 11 '09 at 16:48
show 9 more comments
7 Answers
active oldest votes
x ^= 0x14;
That toggles both bits. It's a little bit unclear in question as you first mention swap and then give a toggle example. Anyway, to swap the bits:
x = precomputed_lookup [x];
where precomputed_lookup is a 256 byte array, could be the fastest way, it depends on the memory speed relative to the processor speed. Otherwise, it's:
x = (x & ~0x14) | ((x & 0x10) >> 2) | ((x & 0x04) << 2);
EDIT: Some more information about toggling bits.
When you xor (^) two integer values together, the xor is performed at the bit level, like this:
for each (bit in value 1 and value 2)
result bit = value 1 bit xor value 2 bit
so that bit 0 of the first value is xor'ed with bit 0 of the second value, bit 1 with bit 1 and so on. The xor operation doesn't affect the other bits in the value. In effect, it's a
up vote 28 down parallel bit xor on many bits.
vote accepted
Looking at the truth table for xor, you will see that xor'ing a bit with the value '1' effectively toggles the bit.
a b a^b
So, to toggle bits 1 and 3, write a binary number with a one where you want the bit to toggle and a zero where you want to leave the value unchanged:
convert to hex: 0x0a. You can toggle as many bits as you want:
0x39 = 00111001
will toggle bits 0, 3, 4 and 5
Don't know why this was downvoted, it's as correct as the others. – Skurmedel Jun 11 '09 at 15:02
Assuming the OP wants to toggle both bits (as opposed to interchange the two bits), this is the correct answer. OP is using the convention that the LSB is bit 0. – Adam Rosenfield
Jun 11 '09 at 15:03
It's actually slightly more correct, since it's the only answer that operates on the right bits. Still just toggles the state and not swaps. – Nietzche-jou Jun 11 '09 at 15:04
Skizz, can you elaborate on why x ^= 0x14; toggles both bits? For instance, how could one calculate what hex value toggles bits 1 and 3? – Nate Murray Jun 11 '09 at 15:23
@Nate: XORing a bit with 1 will toggle that bit. XORing a bit with 0 will keep it the same. If you want to toggle a certain set of bits in a 32-bit value, create another 32-bit
1 value with the bits you want to toggle set to 1, and the rest 0. In the case of bits 2 and 4, (1 << 2) | (1 << 4) == 0x14. In the case of bits 1 and 3, (1 << 1) && (1 << 3) ==
0xA. – Matt J Jun 11 '09 at 18:34
show 1 more comment
You cannot "swap" two bits (i.e. the bits change places, not value) in a single instruction using bit-fiddling.
The optimum approach if you want to really swap them is probably a lookup table. This holds true for many 'awkward' transformations.
up vote 11 BYTE lookup[256] = {/* left this to your imagination */};
down vote
for (/*all my data values */)
newValue = lookup[oldValue];
This is the best if you can spare 256 bytes and if memory accesses are faster than bitwise operations. This varies based on the system. For instance, on modern x86 processors, if the
lookup table is in the cache, the lookup is very fast, but if not, you could probably do a few hundred bitwise operations in the time it takes to perform the lookup. In a small embedded
controller, you might not have 256 bytes to spare. – Nathan Fellman Aug 7 '10 at 7:30
add comment
This might not be optimized, but it should work:
unsigned char bit_swap(unsigned char n, unsigned char pos1, unsigned char pos2)
unsigned char mask1 = 0x01 << pos1;
up vote 4 down unsigned char mask2 = 0x01 << pos2;
vote if ((n & mask1) != (n &mask2))
n ^= (mask1 | mask2);
return n;
1 this is incorrect- the (n & mask1) != (n & mask2) will do an integer comparison, but you want to do a boolean comparison. Change to "if (bool(n & mask1) != bool(n & mask2))" will
make it correct. – arolson101 Jan 20 at 21:30
add comment
The function below will swap bits 2 and 4. You can use this to precompute a lookup table, if necessary (so that swapping becomes a single operation):
unsigned char swap24(unsigned char bytein) {
unsigned char mask2 = ( bytein & 0x04 ) << 2;
unsigned char mask4 = ( bytein & 0x10 ) >> 2;
unsigned char mask = mask2 | mask4 ;
up vote 2 down vote return ( bytein & 0xeb ) | mask;
I wrote each operation on a separate line to make it clearer.
add comment
The following method is NOT a single C instruction, it's just another bit fiddling method. The method was simplified from Swapping individual bits with XOR.
As stated in Roddy's answer, a lookup table would be best. I only suggest this in case you didn't want to use one. This will indeed swap bits also, not just toggle (that is, whatever is
in bit 2 will be in 4 and vice versa).
• b: your original value - ???1?0?? for instance
• x: just a temp
• r: the result
x = ((b >> 2) ^ (b >> 4)) & 0x01
r = b ^ ((x << 2) | (x << 4))
up vote 2
down vote Quick explanation: get the two bits you want to look at and XOR them, store the value to x. By shifting this value back to bits 2 and 4 (and OR'ing together) you get a mask that when
XORed back with b will swap your two original bits. The table below shows all possible cases.
bit2: 0 1 0 1
bit4: 0 0 1 1
x : 0 1 1 0 <-- Low bit of x only in this case
r2 : 0 0 1 1
r4 : 0 1 0 1
I did not fully test this, but for the few cases I tried quickly it seemed to work.
This is a very elegant way to do it. Basically you're saying that if the bits are different (x is 1 if they're different) then flip the bits (effectively swapping them), otherwise,
leave them unchanged (which is the same as swapping them since they're identical. – Nathan Fellman Aug 7 '10 at 7:34
add comment
Say your value is x i.e, x=???1?0??
The two bits can be toggled by this operation:
up vote 0 down vote
x = x ^ ((1<<2) | (1<<4));
add comment
void printb(char x) {
int i;
for(i =7;i>=0;i--)
printf("%d",(1 & (x >> i)));
int swapb(char c, int p, int q) {
if( !((c & (1 << p)) >> p) ^ ((c & (1 << q)) >> q) )
printf("bits are not same will not be swaped\n");
else {
c = c ^ (1 << p);
up vote 0 down vote c = c ^ (1 << q);
return c;
int main()
char c = 10;
c = swapb(c, 3, 1);
return 0;
add comment
Not the answer you're looking for? Browse other questions tagged c bit-manipulation arduino bit bit-twiddling or ask your own question.
|
{"url":"http://stackoverflow.com/questions/981608/swap-two-bits-with-a-single-operation-in-c","timestamp":"2014-04-17T22:04:49Z","content_type":null,"content_length":"102882","record_id":"<urn:uuid:cbf7b13f-0ded-4a95-ac00-d2e5d075a3fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pleasantville, NY Math Tutor
Find a Pleasantville, NY Math Tutor
...I find that I share this passion, and it's refreshing to be around people that do more than just crunch numbers all day! I also spent a semester in Madrid, Spain in an immersion program which
did nothing but increase my knowledge of, and love for, the Spanish people and language. To round things out, I love music.
11 Subjects: including ACT Math, algebra 1, algebra 2, geometry
...That's why if a student isn't understanding something that's being explained to them, I work to find a different approach to explaining the concept to them until they get their "a-ha!" moment.
Aside from the reason above, a student might be confused on one or two concepts related to a topic, and...
4 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I first began using the program over 15 years ago. Like most professionals, I have been using Outlook for years. But unlike most, I have Microsoft-certified proficiency in its use.
17 Subjects: including statistics, accounting, finance, economics
I am an undergraduate student studying atmospheric sciences, with a strong background in math and science. I graduated high school with an advanced Regents designation, excelling in my high school
mathematics classes. I can help you reach your educational goals for high school math classes with flexible scheduling at an affordable price.
11 Subjects: including calculus, physics, SAT math, logic
...As part of my science background, I have taken numerous math classes, including calculus, and have extensive experience in tutoring/teaching math through my work as a peer tutor during college.
At Yale University, I excelled in several courses in the English department that were focused on the c...
24 Subjects: including geometry, algebra 1, algebra 2, prealgebra
|
{"url":"http://www.purplemath.com/pleasantville_ny_math_tutors.php","timestamp":"2014-04-21T14:57:54Z","content_type":null,"content_length":"24066","record_id":"<urn:uuid:01dff278-db03-4f82-b20d-cac1c22f2e2f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gas in tank exposed to a vacuum
1. The problem statement, all variables and given/known data
A tank is equally divided into two equal halves, one a vacuum and one filled with argon gas at 298K and 700 bar. The divider bursts and the gas equally disperses throughout the tank. What is the new
T and P of the gas assuming argon is an ideal gas?
2. Relevant equations
PV=nRT P1V1/T1=P2V2/T2
3. The attempt at a solution
I assumed there was no temperature change which I am unsure of but using that logic, P=350 bar and T=298K
You can't make that assumption unless you can prove it.
Begin with the definiton of an ideal gas. Hint: it's more than just pV = nRT.
What can you say about the dependence of U, internal energy, as a function of p,V and/or T?
Then go with the first law and determine if you get the same results whether the free expansion is adiabatic or isothermal or anything inbetween.
|
{"url":"http://www.physicsforums.com/showthread.php?t=587419","timestamp":"2014-04-16T07:37:05Z","content_type":null,"content_length":"27733","record_id":"<urn:uuid:e08d4901-3ffa-4046-b96f-385d2b140b5e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
In what generality is the Verdier biduality map an isomorphism?
up vote 5 down vote favorite
Let $X$ be a finite-dimensional, locally compact topological space, and consider the dualizing complex $K_X \in \mathbf{D}^b(X,k)$ (bounded derived category of $k$-sheaves, where $k$ is a noetherian
ring). We can define the dualizing functor $$C \mapsto D(C) = \mathbf{R}\mathcal{H}om(C, K_X),$$ (derived internal hom), which leads to a biduality map $C \to D^2(C)$. In SGA 4.5 "Th. de finitude,"
Deligne shows that, when $k = \mathbb{Z}/n$, the analogous biduality morphism is an isomorphism on the constructible bounded derived category (so, an anti-involution of said category) when one is
working with a scheme of finite type over a field or DVR (with $n$ prime to the characteristic). I have heard that the same is true for topological spaces under certain conditions, although I'm not
sure what the statement (or proof) should be: first, presumably we are going to want with a nice (Gorenstein?) ring like $\mathbb{Z}/n$, and second, probably there needs to be some analog of the
constructible derived category. What is this statement?
I had a look at Kashiwara-Schapira's "Sheaves on Manifolds," but I can't parse the biduality statement given in chapter 3. It's not clear to me how to adapt Deligne's argument to the present case,
add comment
1 Answer
active oldest votes
For an analytic space, you can find this on page 118 of Verdier's article "Classe d'homologie d'un cycle" in Asterisque 36-37. And yes this is on an appropriate constructible derived
category with $\mathbb{Z}$-coefficients. I seem to recall that Borel, in his book on intersection cohomology, also discusses this for pseudomanifolds, in case you need something more
up vote 3 down general.
vote accepted
Thanks! . – Akhil Mathew Jul 14 '11 at 13:53
add comment
Not the answer you're looking for? Browse other questions tagged sheaf-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/70264/in-what-generality-is-the-verdier-biduality-map-an-isomorphism","timestamp":"2014-04-16T16:11:04Z","content_type":null,"content_length":"51434","record_id":"<urn:uuid:250c9335-3235-4737-a0b9-d75d5dad2b2e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: November 2004 [00590]
[Date Index] [Thread Index] [Author Index]
integration using PSQL algorithm
• To: mathgroup at smc.vnet.net
• Subject: [mg52332] integration using PSQL algorithm
• From: Arturas Acus <acus at itpa.lt>
• Date: Wed, 24 Nov 2004 02:32:03 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
Dear group,
I have some reasons to suspect that some integral may have closed form
in terms of powers of Pi: a+b*Pi+c*Pi^2+d*Pi^3+e*Pi^4 (a,b,c,d,e being
algebraic numbers, probably with a=d=0). Assuming I can calculate the
integral numerically to desired precision, is it possible to verify this
hypothesis. As far as I know, the PSLQ algorithm, which helps to find
algebraic relations between real numbers, can be useful here. The only
(Mathematica related) reference I know is
, but the example here is too simple.
As for details, the integral in consideration is
integral = (-6*(-1 + a)*(4*Sqrt[7 + a] + Sqrt[2]*(5 + a)*Pi))/((5 +
a)*(7 + a)^(3/2)) -
(24*(-1 + a)*(6 + a)*Log[6 + a - Sqrt[5 + a]*Sqrt[7 + a]])/((5 + a)*(7
+ a))^(3/2)
with a= Cos[4* phi], integrated from 0 to Pi/2 (or to 2 Pi, no matter)
The problems arises from second term.
Mathematica 5.0 can't do it symbolically. How about 5.1?
Arturas Acus <acus at itpa.lt>
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Nov/msg00590.html","timestamp":"2014-04-16T16:06:06Z","content_type":null,"content_length":"35139","record_id":"<urn:uuid:473fcd28-a09d-4172-9f64-8e767b02516e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reducing Noises and Artifacts Simultaneously of Low-Dosed X-Ray Computed Tomography Using Bilateral Filter Weighted by Gaussian Filtered Sinogram
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 138581, 14 pages
Research Article
Reducing Noises and Artifacts Simultaneously of Low-Dosed X-Ray Computed Tomography Using Bilateral Filter Weighted by Gaussian Filtered Sinogram
^1School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
^2School of Computer Science, Sichuan Normal University, Chengdu 610101, China
^3Institute of Medical Information and Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
Received 2 February 2012; Accepted 2 March 2012
Academic Editor: Ming Li
Copyright © 2012 Shaoxiang Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Existing sinogram restoration methods cannot handle noises and nonstationary artifacts simultaneously. Although bilateral filter provides an efficient way to preserve image details while denoising,
its performance in sinogram restoration for low-dosed X-ray computed tomography (LDCT) is unsatisfied. The main reason for this situation is that the range filter of the bilateral filter measures
similarity by sinogram values, which are polluted seriously by noises and nonstationary artifacts of LDCT. In this paper, we propose a simple method to obtain satisfied restoration results for
sinogram of LDCT. That is, the range filter weighs the similarity by Gaussian smoothed sinogram. Since smoothed sinogram can reduce the influence of both noises and nonstationary artifacts for
similarity measurement greatly, our new method can provide more satisfied denoising results for sinogram restoration of LDCT. Experimental results show that our method has good visual quality and can
preserve anatomy details in sinogram restoration even in both noises and non-stationary artifacts.
1. Introduction
Radiation exposure and associated risk of cancer for patients receiving CT examination have been an increasing concern in recent years. Thus, minimizing the radiation exposure to patients has been
one of the major efforts in modern clinical X-ray CT radiology [1–8]. However, the presentation of strong noises and non-stationary artifacts degrades the quality of low-dose CT images dramatically
and decreases the accuracy of diagnosis dose. Many strategies have been proposed to reduce the noise, for example, by nonlinear noise filters [8–19] and statistics-based iterative image
reconstructions (SIIRs) [20–28].
The SIIRs utilize the statistical information of the measured data to obtain good denoising results but are limited for their excessive computational demands for the large CT image size. Moreover,
the mottled noise and non-stationary artifacts in LDCT images cannot be accurately modeled into one specific distribution, which makes it a difficult task to differentiate between noise/artifact and
informative anatomical/pathological features [29].
Although the nonlinear filters show effectiveness in reducing noise both in sinogram space and image space, they cannot handle the noise-induced streak artifacts. Since existing methods cannot handle
noises and artifacts simultaneously, designing a method to reduce noise and non-stationary artifacts simultaneously becomes an open problem in sinogram restoration of LDCT.
Recently, many new nonlinear filters are presented and show promising denoising performance on space domain [29–44]. Bilateral filter (BF), which integrates range filter (gray level) and domain
filter (space) together, is a well-known one [35, 36]. However, BF cannot obtain satisfied results in sinogram restoration of LDCT because of polluted sinogram values for the range filter. To obtain
satisfied denoising results in serious noises, some efforts on image space are proposed [37–41].
Wongsuggests that two parameters, and , the variances of Gaussian functions in domain and range filters, should be modulated according to local phase coherence of the image pixels [37]. But it blurs
edges or leaves uncleaned noises.
Ming and Bahadir improve the performance of BF by multiresolution method [38]. That is, filtering LL subband uses BF while smoothing wavelet subbands uses SURE shrinkage. It also leads to blur edges
while denoising.
van Boomgaard and van de Weijer argue that the main reason for unsatisfied denoising results is the polluted center pixel of BF [40]. Thus, the satisfied results can be obtained by replacing polluted
center pixel with an estimate of its true gray levels.
Following [40], median bilateral filter (MBF) is proposed in [41]. MBF replaces the center pixel with the median of a window. However, only replacing the center pixel also cannot obtain satisfied
denoising results.
Although BF and its improvements can obtain satisfied results in general image denoising, all these methods cannot handle sinogram restoration with noises and non-stationary artifacts simultaneously.
We think that the key to handle noises and artifacts simultaneously is how to reduce the influence of both the noises and artifacts of sinogram of LDCT.
In this paper, we propose a new method to reduce the influence of the noises and artifacts of sinogram simultaneously, named bilateral filter weighted by Gaussian filtered sinogram (BFWGFS), which
carried on BF on Gaussian smoothed sinogram. Note that, proposed method is different to the method proposed in [40]. The proposed method in [40] only replaces the gray levels of the center point with
the median of a square centered at the center point, while our method replaces both the center point and all considering points with their Gaussian smoothed sinogram values.
Since the smoothed sinogram can reduce the influence of both the noises and artifacts of sinogram, the weight of the range filter defined on BFWGFS can measure the similarities more precisely
comparing to the original sinogram values in BF. Thus, the proposed method can obtain satisfied results in noises and non-stationary artifacts simultaneously.
In the reminder of this paper, Section 2 will introduce the noise models; then in Section 3, we will discuss the measurement of similarity and discussed the difference between the proposed method and
method in [40]. Section 4 describes the denoising framework. Section 5 is the experimental results and discussion. Section 6 gives conclusions and finally, the acknowledgment part.
2. Noise Models
Based on repeated phantom experiments, low-mA (or low-dose) CT calibrated projection data after logarithm transform were found to follow approximately a Gaussian distribution with an analytical
formula between the sample mean and sample variance, that is, the noise is a signal-dependent Gaussian distribution [19].
In this section, we will introduce signal-independent Gaussian noise (SIGN), Poisson noise, and signal-dependent Gaussian noise.
2.1. Signal-Independent Gaussian Noise (SIGN)
SIGN is a common noise for the imaging system. Let the original projection data be , , where is the index of the th bin. The signal has been corrupted by additive noise , and one noisy observation
where , , are observations for the random variables , , and where the upper-case and letters denote the random variables and the lower-cased letters denote the observations for respective variables.
is normal , and is normal and independent of the Gaussian random variable . Thus, is normal .
2.2. Poisson Model and Signal-Dependent Gaussian Model
The photon noise is due to the limited number of photons collected by the detector [30]. For a given attenuating path in the imaged subject, and denote the incident and the penetrated photon numbers,
respectively. Here, denotes the index of detector channel or bin, and is the index of projection angle. In the presence of noises, the sinogram should be considered as a random process, and the
attenuating path is given by where is a constant, and is Poisson distribution with mean .
Thus, we have
Both its mean value and variance are .
Gaussian distributions of ployenergetic systems were assumed based on limited theorem for high-flux levels, and following many repeated experiments in [19], we have where is the mean, and is the
variance of the projection data at detector channel or bin , is a scaling parameter, and is a parameter adaptive to different detector bins.
The most common conclusion for the relation between Poisson distribution and Gaussian distribution is that the photon count will obey Gaussian distribution for the case with large incident intensity
and Poisson distribution with feeble intensity [19]. In addition, in [30], the authors deduce the equivalency between Poisson model and Gaussian model. Therefore, both theories indicate that these
two noises have similar statistical properties and can be unified into a whole framework.
3. Measure Similarity
The formula of bilateral filter is where and are two pixels of sinogram. Here, sinogram is the observations of projection data, that is, the noisy projection data of LDCT. and are sinogram values of
and , respectively. is a normalized constant for two weighs and is defined as where and are measures of the spatial and range similarity between the center pixel and its neighbor , respectively.
Usually, these two measures are defined as two Gaussian Kernel functions
Since the value filtered by BF is the weighted average of nearby points weighted by product of spatial distance and gray level difference, it was named by bilateral filter (BF) to distinguish the
general filter weighted only by spatial distance.
From (3.1)–(3.4), we can conclude that a pair of pixels , with both small spatial distance and small sinogram value difference have high similarity and large-weighed coefficients. It is plausible in
slightly noisy projection data. For sinograms with serious noise and non-stationary artifacts, it is unreal! That is, polluted sinogram values lead to incorrect similarity measurement in the range
filter of the bilateral filter. Thus, finding a measure of similarity, which can measure similarity correctly in noise and non-stationary artifacts, is a key problem in denoising using BF.
3.1. Gaussian Filter
Gaussian filter is defined as
Since for and , noisy sinogram value and the Gaussian-dependent noise (GWN) , the distribution of the pixel filtered by the low-passed filter defined in (3.5) is
For example, in image denoising, generally, is set to ; thus,
From the above equation, the variance of the smoothed sinogram value becomes very small (smaller than original variance 0.0157 times). It means that the Gaussian filter makes smoothed sinogram value
closer to real projection data than the noisy sinogram value. Since most of non-stationary artifacts in image space are the high-light points in noisy sinogram, most of non-stationary artifacts can
be suppressed by Gaussian filter.
In the same way, the distribution of the median in an window centered at the pixel is
Just as the above discussion, if the median filter has similar estimate precision to Gaussian filter in image denoising, should at least be 8, which is estimated by. However, so large window of
median filter will delete some real lines in sinogram, which will lead to many artifacts in denoising sinogram.
3.2. Similarity Discussion
From the second equation of (3.4), the similarity between the sinogram values of two pixels and is defined as a Gaussian function of the difference to their sinogram values. Thus, large difference
has small similarity, while small difference has large similarity. Following this conclusion, similarity discussion can be accomplished by discussing the difference for each pair of pixels of
sinogram. In this subsection, we will discuss the differences by variances of three denoising schemes for BF.
Assume that and are iid Gaussian random variables corresponding to a pair of pixels with the same real gray levels, , , and their difference
In the same way, since , we can conclude that
Since , thus
Just as discussed in the last subsection, if we set to 2,
It is obvious that the variance of the first scheme is the biggest in all three schemes, while the variance of the last scheme is the smallest in all three schemes. Since , we have
The first scheme corresponds to the bilateral, which measures difference by the sinogram values of and directly. The second scheme corresponds to the mean bilateral proposed in [41] whose similarity
is measured between the median of the center pixel and the sinogram value of its neighbor . The third scheme corresponds to the scheme of measuring the difference on the Gaussian filtered sinogram
It is well-known that smallest variance corresponds to the best estimate of real projection data value. According to this rule, our proposed method can provide the best estimate of real projection
data value. Thus, BFWGFS can reduce both the influence of noises and non-stationary artifacts.
4. The Algorithm
Just as the above discussion, satisfied denoising results can be got by weighed range filter on Gaussian filtered sinogram. The steps of the algorithm are as follows:(1)compute the Gaussian filtered
sinogram value for all sinogram pixels using (3.5),(2)give and ,(3)for each of pixel, (i)compute using the first equation of (3.4) and using (ii)compute using (iii)compute using (4)repeat step 3
until all sinogram pixels have been proceeded.
5. Experiments and Discussion
The main objective for smoothing L-CT images is to delete the noise and non-stationary artifacts while to preserve anatomy details for the images. Thus, the image visual quality can be improved, and
the denoised image can help doctors make correct medical diagnosis more easily.
5.1. Data
Four groups of CT images with different doses were scanned from a 16 multidetector row CT unit (Somatom Sensation 16; Siemens Medical Solutions) using 120 kVp and 5 mm slice thickness: a 58-year-old
man, two groups of 62-year-old women with different reduced dose, and a 60-year-old man. Other remaining scanning parameters are gantry rotation time, 0.5 second; detector configuration (number of
detector rows section thickness), mm; table feed per gantry rotation, 24 mm; pitch, 1:1; reconstruction method, filtered back projection (FBP) algorithm with the soft-tissue convolution kernel
“B30f.” Different CT doses were controlled by using two different fixed tube currents 30 mAs and 150 mAs (60 mA or 300 mAs) for LDCT and standard-dose CT (SDCT) protocols, resp.. The CT dose index
volume (CTDIvol) for LDCT images and SDCT images are in positive linear correlation to the tube current and is calculated to be approximately ranged between 15.32 mGy and 3.16 mGy [29]. For
additional visually illustration, we also put two groups of abdominal CT images of a same woman with 60 mAs, and two groups of shoulder CT images with low dose 35 mAs and standard dose 135 mAs (see
Figure 2).
5.2. Compared Methods
Bilateral filter (BF) is introduced at the beginning of Section 3. The main motivation for BF is that the noisy image should be weighted not only by the position distance (spatial filter) but also by
the difference of sinogram values (range filter) [35]. The parameters of BF are Gaussian Kernel for spatial filter , Gaussian Kernel for range filter , and iteration time is 3.
Context is a term imported from image coding. The context of a pixel is always defined as a vector used for describing the relationship between this pixels and other image pixels. In this paper, in
order to suppress the influence of noises, the context is defined as The context filter estimates real sinogram values from the points with similar context value. In this paper, the threshold value
for similar context is 10, that is, where is defined on (5.1). Although context filter can provide more samples for real value estimate, it will produce some artifacts for losing the spatial
relationship of sinogram.
Median bilateral filter (MBF) replaces the center pixel with the median of an window [41]. However, just as analysis in Section 3, only replacing the center pixel also cannot obtain satisfied
denoising results. Here, when set to 5 has the best performance, and .
Multiresolution bilateral filter (MRBF) filtering LL subband uses BF while smoothing wavelet subbands uses SURE shrinkage [38]. The wavelet used in the experiment is 1-level symlets with support 4.
The noisy variance is estimated using median of HH band of the wavelet [45] and and . Although authors report that MRBF can obtain good denoising results, it also leads to blur some important
Weighted intensity averaging over large-scale neighborhoods (WIA-LNs) is a state-of-the-art method for sinogram reconstruction [29]. The motivation for WIA-LN is that the two pixels of the same organ
or tissue should have surrounding patches with higher similarities than the two pixels of different organs or tissues. Thus, the real sinogram value of can be estimated as where
Here, denotes the intensities of the neighboring pixels in the search neighborhood centered at pixel index . The weight of WIA-LN is built by using a similarity criterion between the two comparing
patches and . This similarity metrics is calculated using (5.4), in which denotes the two-dimensional standard deviation of Gaussian kernel. is the total pixel number in patch . is a superparameter.
In this paper, is set to be 0.8, and the sizes are set to . Although better vision and quantitative performance are reported, the authors also indicate that WIA-LN cannot handle noise and
non-stationary artifacts simultaneously (see Figure 3(g)).
Proposed method (BFWGFS) replaces all sinogram values used in range filter of BF by the Gaussian filtered sinogram values. Just as discussed in Section 3, smoothed sinogram values can reduce the
influence of both noise and non-stationary artifacts greatly, and BFWGFS can provide good visual results and preserve more anatomy details. The parameters are , , iteration number is set to 1, and
Gaussian smoothed kernel is set to 1.8.
5.3. Visual Comparison
Three groups of SDCT images, LDCT images, and the processed LDCT images for the clinical abdominal examinations are shown in Figures 1–3. The parameters for compared methods have been given in the
last subsection. In Figure 1, the original and processed abdominal CT images of a 58-year-old man are illustrated. Figures 1(a) and 1(b) are one SDCT image and one LDCT image acquired at tube current
time product 150 mAs and 30 mAs, respectively. Figures 1(c), 1(d), 1(e), 1(f), 1(g), and 1(h) show BF, context, MBF, MRBF, WIA-LN, and proposed method processed LDCT images, respectively. Figure 2
illustrates the original and processed abdominal CT images of a 62-year-old woman. Figure 2(a) is one SDCT image acquired at tube current time product 150 mAs. Figures 2(b) and 2(c) are two LDCT
images acquired at reduced tube current time products 60 mAs and 30 mAs, respectively. Figures 2(d), 2(e), 2(f), 2(j), 2(k), 2(l), 2(g), 2(h), 2(i), 2(m), 2(n), and 2(o) illustrate the two groups of
processed LDCT images of Figures 2(b) and 2(c) by using compared methods. Figure 3 illustrates the original and processed images for one shoulder scan of a 60-year-old man, from which we found that
WIA-LN tends to smooth both the streak artifacts and informative human tissues, while proposed method can reduce the noise and artifacts with preservation of anatomy details.
Comparing all the original SDCT images and LDCT images in Figures 1–3, we found that the LDCT images were severely degraded by noise and streak artifacts. In Figures 1(c)–1(f), just as the discussion
in Section 3, there are so many noises left in processed images using BF, context, MBF, and MRBF. WIA-LN shown in Figure 1(g) also makes some obvious artifacts, while we can observe better noise/
artifacts suppression and edge preservation for proposed methods in Figure 1(h). Both WIA-LN and proposed method have good performance in noises. Especially, compared to corresponding original SDCT
images, the fine features representing the intrahepatic bile duct dilatation and the hepatic cyst (pointed by the white circles in the images of Figures 1 and 2, resp.) were well restored by using
WIA-LN and proposed method. The fine anatomical/pathological features (the exemplary structures pointed by circles in Figures 1 and 2) can be well preserved compared to the original SDCT images
(Figures 1(a) and 2(a)) under-standard dose conditions. In Figures 3(g) and 3(h), it indicates that although WIA-LN cannot handle noises and artifacts simultaneously, proposed method can obtain
satisfied results in this complex situation. Especially, proposed method not only can suppress noises and artifacts in original LDCT image (Figure 3(a)) but also can preserve tiny anatomy details of
subscapular arteries indicated by the white circles in Figure 3(h) compared to the original SDCT image (Figure 3(b)).
6. Conclusions
In this paper, in order to improve the performance of LDCT imaging, we propose a new method, named bilateral filter weighted by Gaussian filtered sinogram (BFWGFS) which replaces the sinogram values
of range filter of BF to the Gaussian filtered sinogram values. Since carefully chosen parameters of Gaussian filter can reduce the influence both of noises and non-stationary artifacts greatly,
BFWGFS can provide a more reliable estimate sinogram values for the range filter to improve the performance of classical BF in noises. Restoration results for three real sinograms show that proposed
method with suitable parameters can obtain satisfied results even in both the noises and artifacts situation.
This paper is supported by the National Natural Science Foundation of China (no. 60873102), Major State Basic Research Development Program (no. 2010CB732501), and Open Foundation of Visual Computing
and Virtual Reality Key Laboratory of Sichuan Province (no. J2010N03). This work was supported by a Grant from the National High Technology Research and Development Program of China (no.
1. D. J. Brenner and E. J. Hall, “Computed tomography—an increasing source of radiation exposure,” New England Journal of Medicine, vol. 357, no. 22, pp. 2277–2284, 2007. View at Publisher · View at
Google Scholar · View at Scopus
2. J. Hansen and A. G. Jurik, “Survival and radiation risk in patients obtaining more than six CT examinations during one year,” Acta Oncologica, vol. 48, no. 2, pp. 302–307, 2009. View at Publisher
· View at Google Scholar · View at Scopus
3. H. J. Brisse, J. Brenot, N. Pierrat et al., “The relevance of image quality indices for dose optimization in abdominal multi-detector row CT in children: experimental assessment with pediatric
phantoms,” Physics in Medicine and Biology, vol. 54, no. 7, pp. 1871–1892, 2009. View at Publisher · View at Google Scholar · View at Scopus
4. L. Yu, “Radiation dose reduction in computed tomography: techniques and future perspective,” Imaging in Medicine, vol. 1, no. 1, pp. 65–84, 2009.
5. J. Weidemann, G. Stamm, M. Galanski, and M. Keberle, “Comparison of the image quality of various fixed and dose modulated protocols for soft tissue neck CT on a GE Lightspeed scanner,” European
Journal of Radiology, vol. 69, no. 3, pp. 473–477, 2009. View at Publisher · View at Google Scholar · View at Scopus
6. W. Qi, J. Li, and X. Du, “Method for automatic tube current selection for obtaining a consistent image quality and dose optimization in a cardiac multidetector CT,” Korean Journal of Radiology,
vol. 10, no. 6, pp. 568–574, 2009. View at Publisher · View at Google Scholar · View at Scopus
7. A. Kuettner, B. Gehann, J. Spolnik et al., “Strategies for dose-optimized imaging in pediatric cardiac dual source CT,” Fortschr. Röntgenstr, vol. 181, no. 4, pp. 339–348, 2009. View at Publisher
· View at Google Scholar · View at Scopus
8. P. Kropil, R. S. Lanzman, C. Walther, et al., “Dose reduction and image quality in MDCT of the upper abdomen: potential of an adaptive post-processing filter,” Fortschr. Röntgenstr, vol. 182, no.
3, pp. 284–253, 2009.
9. M. K. Kalra, M. M. Maher, M. A. Blake et al., “Detection and characterization of lesions on low-radiation-dose abdominal CT images postprocessed with noise reduction filters,” Radiology, vol.
232, no. 3, pp. 791–797, 2004. View at Publisher · View at Google Scholar · View at Scopus
10. H. B. Lu, X. Li, L. H. Li, et al., “Adaptive noise reduction toward low-dose computed tomography,” in Proceedings of the Medical Imaging 2003: Physics of Medical Imaging, vol. 5030, parts 1 and
2, pp. 759–766, San Diego, Calif, USA, February 2003. View at Publisher · View at Google Scholar
11. M. K. Kalra, C. Wittram, M. M. Maher et al., “Can noise reduction filters improve low-radiation-dose chest CT images? Pilot study,” Radiology, vol. 228, no. 1, pp. 257–264, 2003. View at
Publisher · View at Google Scholar · View at Scopus
12. M. K. Kalra, M. M. Maher, D. V. Sahani et al., “Low-dose CT of the abdomen: evaluation of image improvement with use of noise reduction filters pilot study,” Radiology, vol. 228, no. 1, pp.
251–256, 2003. View at Publisher · View at Google Scholar · View at Scopus
13. J. C. Giraldo, Z. S. Kelm, L. S. Guimaraes et al., “Comparative study of two image space noise reduction methods for computed tomography: bilateral filter and nonlocal means,” in Proceedings of
the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Engineering the Future of Biomedicine (EMBC '09), pp. 3529–3532, Minneapolis, Minn, USA,
September 2009. View at Publisher · View at Google Scholar · View at Scopus
14. A. Manduca, L. Yu, J. D. Trzasko et al., “Projection space denoising with bilateral filtering and CT noise modeling for dose reduction in CT,” Medical Physics, vol. 36, no. 11, pp. 4911–4919,
2009. View at Publisher · View at Google Scholar · View at Scopus
15. N. Mail, D. J. Moseley, J. H. Siewerdsen, and D. A. Jaffray, “The influence of bowtie filtration on cone-beam CT image quality,” Medical Physics, vol. 36, no. 1, pp. 22–32, 2009. View at
Publisher · View at Google Scholar · View at Scopus
16. M. Kachelrie, O. Watzke, and W. A. Kalender, “Generalized multi-dimensional adaptive filtering for conventional and spiral single-slice, multi-slice, and cone-beam CT,” Medical Physics, vol. 28,
no. 4, pp. 475–490, 2001. View at Publisher · View at Google Scholar · View at Scopus
17. G. F. Rust, V. Aurich, and M. Reiser, “Noise/dose reduction and image improvements in screening virtual colonoscopy with tube currents of 20 mAs with nonlinear Gaussian filter chains,” in Medical
Imaging 2002: Physiology and Function from Multidimensional Images, vol. 4683 of Proceedings of SPIE, pp. 186–197, San Diego, Calif, USA, February 2002. View at Publisher · View at Google Scholar
· View at Scopus
18. Z. Liao, S. Hu, and W. Chen, “Determining neighborhoods of image pixels automatically for adaptive image denoising using nonlinear time series analysis,” Mathematical Problems in Engineering,
vol. 2010, Article ID 914564, 14 pages, 2010. View at Publisher · View at Google Scholar
19. H. Lu, I. T. Hsiao, X. Li, and Z. G. Liang, “Noise properties of low-dose CT projections and noise treatment by scale transformations,” in Proceedings of the IEEE Nuclear Science Symposium
Conference Record, vol. 1–4, pp. 1662–1666, November 2001. View at Scopus
20. J. Xu and B. M. W. Tsui, “Electronic noise modeling in statistical iterative reconstruction,” IEEE Transactions on Image Processing, vol. 18, no. 6, pp. 1228–1238, 2009. View at Publisher · View
at Google Scholar · View at Scopus
21. I. A. Elbakri and J. A. Fessler, “Statistical image reconstruction for polyenergetic X-ray computed tomography,” IEEE Transactions on Medical Imaging, vol. 21, no. 2, pp. 89–99, 2002. View at
Publisher · View at Google Scholar · View at Scopus
22. P. J. la Rivire and D. M. Billmire, “Reduction of noise-induced streak artifacts in X-ray computed tomography through spline-based penalized-likelihood sinogram smoothing,” IEEE Transactions on
Medical Imaging, vol. 24, no. 1, pp. 105–111, 2005. View at Publisher · View at Google Scholar · View at Scopus
23. P. J. la Rivire, “Penalized-likelihood sinogram smoothing for low-dose CT,” Medical Physics, vol. 32, no. 6, pp. 1676–1683, 2005. View at Publisher · View at Google Scholar · View at Scopus
24. J. Wang, H. Lu, J. Wen, and Z. G. Liang, “Multiscale penalized weighted least-squares sinogram restoration for low-dose X-ray computed tomography,” IEEE Transactions on Biomedical Engineering,
vol. 55, no. 3, pp. 1022–1031, 2008. View at Publisher · View at Google Scholar · View at Scopus
25. P. Forthmann, T. Kohler, P. G. Begemann, and M. Defrise, “Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography,” Physics in Medicine and Biology, vol. 52, no.
15, pp. 4513–4523, 2007. View at Publisher · View at Google Scholar · View at Scopus
26. J. Wang, T. Li, H. Lu, and Z. G. Liang, “Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography,” IEEE Transactions
on Medical Imaging, vol. 25, no. 10, pp. 1272–1283, 2006. View at Publisher · View at Google Scholar · View at Scopus
27. Z. Liao, S. Hu, M. Li, and W. Chen, “Noise estimation for single-slice sinogram of low-dose X-ray computed tomography using homogenous patch,” Mathematical Problems in Engineering, vol. 2012,
Article ID 696212, 16 pages, 2012. View at Publisher · View at Google Scholar
28. H. B. Lu, X. Li, I. T. Hsiao, and Z. G. Liang, “Analytical noise treatment for low-dose CT projection data by penalized weighted least-square smoothing in the K-L domain,” in Proceedings of the
Medical Imaging 2002: Physics of Medical Imaging, vol. 4682, pp. 146–152, May 2002.
29. C. Yang, C. Wufan, Y. Xindao, et al., “Improving low-dose abdominal CT images by weighted intensity averaging over large-scale neighborhoods,” European Journal of Radiology, vol. 80, no. 2, pp.
e42–e49, 2011. View at Publisher · View at Google Scholar
30. T. Li, X. Li, J. Wang, et al., “Nonlinear sinogram smoothing for low-dose X-ray CT,” IEEE Transactions on Nuclear Science, vol. 51, no. 5, pp. 2505–2513, 2004.
31. S. Hu, Z. Liao, D. Sun, and W. Chen, “A numerical method for preserving curve edges in nonlinear anisotropic smoothing,” Mathematical Problems in Engineering, vol. 2011, Article ID 186507, 14
pages, 2011. View at Publisher · View at Google Scholar
32. M. Li and W. Zhao, “Visiting power laws in cyber-physical networking systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012. View at Publisher · View at
Google Scholar
33. M. Li, C. Cattani, and S. Y. Chen, “Viewing sea level by a one-dimensional random function with long memory,” Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
View at Publisher · View at Google Scholar
34. M. Li, “Fractal time series: a tutorial review,” Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010. View at Publisher · View at Google Scholar
35. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 836–846, Bombay, India, January 1998.
36. D. Barash, “A fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
24, no. 6, pp. 844–847, 2002.
37. A. Wong, “Adaptive bilateral filtering of image signals using local phase characteristics,” Signal Processing, vol. 88, no. 6, pp. 1615–1619, 2008.
38. Z. Ming and G. Bahadir, “Multiresolution bilateral filtering for image denoising,” IEEE Transactions on Image Processing, vol. 17, no. 12, pp. 2324–2333, 2008.
39. H. Yu, L. Zhao, and H. Wang, “Image denoising using trivariate shrinkage filter in the wavelet domain and joint bilateral filter in the spatial domain,” IEEE Transactions on Image Processing,
vol. 18, no. 10, pp. 2364–2369, 2009. View at Publisher · View at Google Scholar
40. R. van Boomgaard and J. van de Weijer, “On the equivalence of local-mode finding,robust estimation and mean-shift analysis as used in early vision tasks,” in Proceedings of the 16th International
Conference on Pattern Recognition, vol. 3, pp. 972–930, Quebec, Canada, August 2002.
41. J. J. Francis and G. de Jager, “The bilateral median filter,” Transactions of the South African Institute of Electrical Engineers, vol. 96, no. 2, pp. 106–111, 2005.
42. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095,
2007. View at Publisher · View at Google Scholar
43. J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of Gaussians in the wavelet domain,” IEEE Transactions on Image Processing, vol. 12, no. 11,
pp. 1338–1351, 2003. View at Publisher · View at Google Scholar
44. A. Buades, B. Coll, and J.-M. Morel, “Nonlocal image and movie denoising,” International Journal of Computer Vision, vol. 76, no. 2, pp. 123–139, 2008, Special section: selection of papers for
CVPR 2005, guest editors: C. Schmid, S. Soatto and C. Tomasi.
45. D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994. View at Publisher · View at Google Scholar
|
{"url":"http://www.hindawi.com/journals/mpe/2012/138581/","timestamp":"2014-04-19T18:21:06Z","content_type":null,"content_length":"299254","record_id":"<urn:uuid:0fdc66be-306f-403c-a43b-cf3abbfde800>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Abdul Lorenz on Kieu
Tien D Kieu kieu at swin.edu.au
Sun Apr 25 16:36:01 EDT 2004
Dear Abdul Lorenz,
Thank you for the comments, my responses in the order given are:
1/ Even though the Hamiltonians involved are not quadratic in momentum
operator, that does not imply that we cannot create/simulate them. Lloyd and
Braunstein have proposed a way to simulate Hamiltonians with arbitrary powers
in position and momentum operators (and hence arbitrary powers in creation and
annihilation operators) in quantum optics with linear and non-linear effects
(including squeezing and Kerr nonlinearity). See their paper on
2/ Decoherence is indeed the possible killer. The whole difficulty with the
physical implementation of standard quantum computation (with qubits and
quantum gates) is how to protect the computation process from decoherence
before the process is completed. Here with the proposed quantum adiabatic
computation too, decoherence is important has to be taken care of in any
physical implemtation. It deserves further study Some initial numerical
investigation by Childs, Farhi and Preskill
(http://arxiv.org/pdf/quant-ph/0108048) into unitary control errors and
decoherence in quantum adiabatic computation for various combinatoric search
problems has shown some inherent robustness. This surprising robustness is
perhaps parallel with the surprise that there exist possible quantum error
corrections in standard quantum computation (with qubits and quantum gates).
(Recall that initially there was strong scepticism (even from experienced
people like Landauer) that, because of the no-cloning theorem in quantum
mechanics, error correction was impossible in quantum computation; but the
discovery of quantum error correction code soon afterwards was a big suprise
and changed all that.) At this stage, even the mere theoretical and in-
principle existence of a quantum mechanical procedure to probabilistic
determine Hilbert's tenth problem is, at least to me, extremely interesting.
3/ I have nothing more to add to what I have already said about energy
measurement here (except to repeat that the final spectrum is integer-valued so
the gap is *at least* one unit in energy, which is then sufficient for
specifying \Delta E without knowing the spectrum). However, I would like to
point out that beside the energy measurement we can also do with the
measurement of occupation number (represented by the operator a^\dagger a),
since by construction the end-point Hamiltonian (H_P in my notation) commutes
with the number operators so these two observables (energy and occupation
number) are compatible. Such occupation number measurement would give us the
positive integers n_1, n_2, etc which we can then substitute into the
Diophantine polynomial in the places of corresponding variables to see if the
polynomial vanishes (in which case, the equation has some positive integer
solution) or not (no integer solution). This point has been made in, for
example, my paper in Contemporary Physics.
Tien Kieu
Quoting martin at eipye.com:
> 1 - I still keep my point of view about the physical meaning of the
> squared Diophantine operator D^2. A physical Hamiltonian in Quantum
> Mechanics has dependence only on the second power of the momentum
> operator: p^2. However, as the D^2 operator involves arbitrary powers of
> the number operator, say n^k; this means D^2 contains terms that go as
> p^(2k)!!! In consequence, D^2 can not be interpreted as a Hamiltonian
> describing any physical system, nor the D^2 eigenvalues represent
> energies. Therefore, D^2 can not be reproduced in the laboratory.
> 2.- The transition process, described by Kieu's algorithm, for passing
> from an oscillator Hamiltonian to the D^2 operator in a not well defined
> time T, has to deal with all the decoherence processes that result from the
> interaction of the measurement and control apparatus to the coherent
> states. Decoherence would destroy the coherent state in a finite time.
> These effects are not taken into account in any way into Kieu's analysis,
> reflecting that one is assuming a completely ideal system, where a
> coherent state is to be conserved in time. A real experiment has to take
> into account that there always be dissipation and/or scattering processes
> that produce decoherence. Since decoherence grows in time, one can not
> extend the time T arbitrarily.
> 3.- Regarding the required precision one has to reach to determine
> whether there is a zero "energy" level in the D^2 spectrum, Kieu has
> argue that it should be enough to reach a Delta E smaller than
> the space among the first two levels. In principle, given by a integer
> number multiplied by some constant that gives the dimensions of energy to
> the spectrum. But knowing this spacing is equivalent to know in advance
> the spectrum of D^2, and so, whether there is a solution to the
> Diophantine equation!!!
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-April/008074.html","timestamp":"2014-04-19T04:23:39Z","content_type":null,"content_length":"8064","record_id":"<urn:uuid:e6607e8b-9324-4e9f-914c-f91e24beb72d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
i tried my best to make a function but i
question is write a function that calculate the greatest common divisor of two number..
i dn't have my laptop with me yet.i live in hostel and i came home due to weekend..i hav to submit task tomorrw,,plz if you help me i wud b very thnkful to you
We're not going to write your homework for you.
Make an attempt and we'll help with any questions.
# include<iostream>
# include<conio.h>
using namespace std;
void sum(double& sum,int n){
float add=0;
cout<<"please enter the value of n"<<endl;
for (int i=1;i<=n; i++){
add +=1.0/i;
void main(){
double k=0;
int j=0;
cout<<"sum is"<<endl;
this is the program that i want to build..actully i want to sum the series using function void sum(double& sum,int n),the series is 1/1+1/2+1/3...1/n where n is positiv integer
sorry i posted another code,this code is not the one i mentioned..can you please give me the hint that how to calculate the greatest comon divisor of two numbers?
thanx brother.it helps me alot
dear it is only giving me greatest divisor,not the common divisor
question is write a function that calculate the greatest common divisor of two number..
oops, this is your question.
what do you mean by common divisor, but not the greatest common divisor?
there could be more then one common divisor between two files.
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/general/110173/","timestamp":"2014-04-21T07:22:47Z","content_type":null,"content_length":"16197","record_id":"<urn:uuid:9e0bc9d4-102e-4da0-a40c-44633ad4b373>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Frequent Itemset problem for MapReduce
I have received many emails asking for tips for starting Hadoop projects with Data Mining. In this post I describe how the Apriori algorithm solves the frequent itemset problem, and how it can be
applied to a MapReduce framework. The Problem
The frequent itemset problem consists of mining a set of items to
find a subset of items that have a strong connexion between them
A simple example to clear the concept would be: given a set of baskets in a supermarket, a frequent itemset would be hamburgers and ketchup.
These items appear frequently in the baskets, and very often, together.
In the general
a set of items that appear in many baskets is said to be frequent
In the computer world, we could use this algorithm to recommend items of purchase for a user. If A and B are a frequent itemset,
once a user buys A, B would certainly be a good recommendation.
In this problem, the number of "baskets" in assumed to be very large. Greater than what could fit in memory. The number of items in a basket, on the other hand, is considered small.
The main challenge in this problem is the amount of data to be put in memory. In a set of N items per basket for example,
there are n!/2!(n-2)! pair combinations of items. We would have to keep all these combinations for all baskets and iterate through them
to find the frequent pairs.
This is where the
algorithm enters!
The Apriori algorithm is based on the idea that for a pair o items to be frequent, each individual item should also be frequent.
If the hamburguer-ketchup pair is frequent, the hamburger itself must also appear frequently in the baskets. The same can be said about the ketchup.
The Solution
So for the algorithm, it is established a "threshold X" to define what is or it is not frequent. If an item appears more than X times, it is considered frequent.
first step of the algorithm
is to pass for each item in each basket, and calculate their frequency (count how many time it appears).
This can be done with a hash of size N, where the position y of the hash, refers to the frequency of Y.
If item y has a frequency greater than X, it is said to be frequent.
In the
second step of the algorithm
, we iterate through the items again, computing the frequency of pairs in the baskets. The catch is that
we compute only for items that are individually frequent. So if item y and item z are frequent on itselves,
we then compute the frequency of the pair. This condition greatly reduces the pairs to compute, and the amount of memory taken.
Once this is calculated, the frequencies greater than the threshold are said frequent itemset.
How to Map-and-Reduce it???
To use this algorithm in the MapReduce model, you can follow these instructions described in the
"Mining of Massive Datasets" book
First Map Function: Take the assigned subset of the baskets and find the
itemsets frequent in the subset using the algorithm of Section 6.4.1. As described
there, lower the support threshold from s to ps if each Map task gets
fraction p of the total input file. The output is a set of key-value pairs (F, 1),
where F is a frequent itemset from the sample. The value is always 1 and is
First Reduce Function: Each Reduce task is assigned a set of keys, which
are itemsets. The value is ignored, and the Reduce task simply produces those keys (itemsets) that appear one or more times. Thus, the output of the first
Reduce function is the candidate itemsets.
Second Map Function: The Map tasks for the second Map function take
all the output from the first Reduce Function (the candidate itemsets) and a
portion of the input data file. Each Map task counts the number of occurrences of each of the candidate itemsets among the baskets in the portion of the dataset that it was assigned. The output is a
set of key-value pairs (C, v), where C is one of the candidate sets and v is the support for that itemset among the baskets that were input to this Map task.
Second Reduce Function: The Reduce tasks take the itemsets they are
given as keys and sum the associated values. The result is the total support
for each of the itemsets that the Reduce task was assigned to handle. Those
itemsets whose sum of values is at least s are frequent in the whole dataset, so the Reduce task outputs these itemsets with their counts. Itemsets that do not have total support at least s are not
transmitted to the output of the Reduce task.
For this post I used the book "Mining of Massive Datasets", from Anand Rajaraman and Jeff Ullman. T
his book is really really good. I definitely recommend it, and you can get it for free here:
Besides the frequent itemset problem, it shows how to model many data mining algorithms to MapReduce framework.
|
{"url":"http://girlincomputerscience.blogspot.com.br/2013/01/frequent-itemset-problem-for-mapreduce.html","timestamp":"2014-04-20T23:27:48Z","content_type":null,"content_length":"81802","record_id":"<urn:uuid:450780f3-df9a-46e2-bb4c-1b513ddbb9c5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Distances In Bounded Regions
One of the standard problems in introductory calculus courses is to find the average distance between two randomly selected points inside a unit sphere. The popularity of this particular problem is
probably due to the fact that it happens to lead to an integral that can be evaluated in "closed form" to give a nice explicit answer, but it's interesting to consider the average distances (and
powers of distances) within various other bounded regions in various dimensions. For regions other than a sphere, the solution is not always so simple, but we'll find that a parametric formulation
of the distance density can be used to simplify the analysis.
We'll begin by considering circles (spheres) and squares (cubes) of different dimensions, starting with the classic problem of the solid unit sphere in 3D space. Let R and r be the radial distances
from the origin to two randomly chosen points in a unit sphere, and let q be the angle between these vectors. The distance between the two points is
Covering just the case R > r, we need to triple-integrate this quantity over r, R, and q, and we need to weight the quantity in proportion to the fraction of the two-point state-space corresponding
to each set of parameters. For given values of r and R, the angle q defines a circle whose circumference is proportional to sin(q), so this is the weight for the q integration. Similarly each value
of r and R defines a sphere with surface area proportional to r^2 and R^2 respectively, so these are the weights for the r and R integrations.
We can restrict our analysis to just the case R > r because the other case (R < r) is symmetrical and has the same distribution of distances. Hence we need only integrate the distance function with
the appropriate weights for the ranges r = [0,1], R = [r,1], and q = [0,p], and then divide the result by the triple integral of just the weights (rR)^2 sin(q), which is 1/9. This gives the mean
We can also evaluate this integral with the distance function raised to any integral power, to give the average of the nth powers of the distances
Notice that the weight for q is very fortuitous, because the factor of sin(q) enables us to evaluate the integral in closed-form, using the easy integral for
In contrast, the seemingly simpler case of a unit disk is actually more difficult because it lacks this convenient weight factor. It's also worth noting that if we try to cover both the cases R > r
and R < r at once by integrating over r = [0,1] and R = [0,1] we are led to a result like 21/20 instead of 36/35. The subtle problem here is that the integral of
over the range q = [0,p], divided by the integral of the weight, is
which is formally symmetrical in R and r. Now, since R+r is always non-negative there's not much ambiguity in evaluating the left hand term in the numerator; we just take the positive value (R+r)^3,
glossing over the fact that there's really a square root there in the 3/2 power (which means we could have taken the negative root). However, the right hand term requires a bit of thought: Do we
evaluate it as (R-r)^3 or (r-R)^3 ? The answer depends on whether R > r or R < r. In these two cases the above expression reduces to
so it's necessary (not just convenient) to treat the cases separately. Fortunately, due to symmetry, the cases give the same distribution of distances, so we need only evaluate one of them.
Now let's consider the distances on a unit disk. This case can be formulated in essentially the same way as with the unit sphere, except that the weights are different. Each angle q now represents
only a single point, so it's weight is just 1. Each radius r and R now represents a circle with length proportional to r and R respectively, so these are the weights. The triple integral of the
product of these weights is p/8, so the average distance on a unit disk can be expressed as
Unfortunately this integral isn't as easy to evaluate as the one for the sphere, because it lacks the sin(q) factor. We will describe a much simpler approach later, so rather than presenting the
elaborate integration here, suffice it to say that the result is 128/(45p). We can also evaluate the above integral for any integer power of the distance function. For odd powers of the form 2k-1
the general result is
whereas for even powers of the form 2k the general result is
Thus the average squared distance between two point in a unit disk is 1, and the average cubed distance is 2048/(525p). Incidentally, this last formula shows that if C[n] denotes the nth Catalan
number, then the average (2k)th power of distances on a unit disk is just C[k+1]/(k+1).
As an alternative to the preceding triple integral, we could just integrate the unit disk by scanning two points (x,y) and (X,Y) orthogonally across the disk. The weight factors are all 1 in this
case, and their product integrates to the squared area of the disk, p^2, so we have the quadruple integral
where a = (1-x^2)^1/2 and b = (1-X^2)^1/2. This gives the same results as the previous triple integral, but it's even more laborious to integrate. We will show later a single integral that gives the
same results.
As a way of introducing the simpler method for evaluating the average distances in arbitrary bounded regions, let's first consider the distribution of distances on a 1´1 unit square. We could use
the rectangular approach of the preceding equation and just integrate the distances between two points (x,y) and (X,Y) as each parameter ranges from 0 to 1, but the resulting quadruple integral is
not very easy to evaluate. A much more efficient approach is to notice that the distances and densities (weight factors) on a unit square are distributed according to the parametric formulas
Therefore, we just need to integrate the product d(u,v) s(u,v) as u and v range from 0 to 1. The orthogonal way of formulating this double-integral is
This is a big improvement over the quadruple integral, but it still is not easily evaluated in closed form. Let's try polar coordinates in this parametric space by setting u = r cos(q) and v = r
sin(q) and integrating over the region u < v by letting r range from 0 to 1/cos(q) at each q from 0 to p/4. For any incremental slice of q the weight at r is proportional to r, and of course the
integral of r over this region u < v is just 1/2, so we have the integral
This value of this double-integral is
By incrementing the exponent of r in the integral we can evaluate the average of the nth powers of distances on the unit square. In general the results are of the form
where the values of A, B, C, D are as shown below
Now, what about the unit cube, or the unit 4D hyper-cube? The nice thing about the parametric distance density equations is that they immediately generalize to higher dimensions. The parametric
equations for the distance density of a d-dimensional unit cube are
For even powers of the distance we can immediately evaluate the d-dimensional analogs of equation (1). We find that the average squared distance in a d-dimensional unit cube is simply d/6. (This
gives a nice trivia question: In what dimensional space is the average squared distance in a unit cube equal to unity?) The average nth powers of distances in a d-dimensional unit cube are given by
the following formulas for the first few even values of n:
Returning to the parametric density formulas for the distances inside a d-dimensional cube, notice that in each case the density of the vector v = [x[1], x[2], .... x[d]] within a bounded region B
is proportional to the intersection of B with a copy of B shifted by the vector v. More generally, given any finite bounded region B in k-dimensional space, and any k-vector v, let B[®v] denote the
image of B translated by v. Then the density of distances with the magnitude and direction of v is proportional to the content (volume) of the intersection of B with B[®v]. This is because B is the
set of possible starting points for the vector v, and B[®v] is the corresponding set of end points. A particular instance of v (with a particular starting point in B) is a distance between two
points of B if and only if its end point is also in B. Thus, the intersection of B[®v] with B (divided by the content of B itself) is the proportion of points in B for which the vector v is a
distance to another point in B. This is illustrated for a few simple shapes below.
Let's apply this principle to determine the distribution of distances (between two randomly chosen points from a uniform distribution) on a unit disk (radius 1). For any 2-vector v = [r sin(q), r
cos(q)] the density is proportional to the overlap of two unit circles with centers a distance r apart. This is given by
Now, in terms of the parametric lattice of distance vectors (which can be seen as the first quadrant of a circle of radius 2), the number of vectors with magnitude r is proportional to r, so the
overall density of distances of magnitude r is proportional to r times the above quantity. The integral of this resulting function from r = 0 to 2 is simply p/2, so the distribution density of
distances on a unit disk is
The mean value of any function f of the point-to-point distances is then given by
If we make the change of variables r = 2cos(q/2), the density function can be simplified and we have
In particular, if f is just the nth power of the distances, we replace the f function with [2cos(q/2)]^n. Thus for n = 1 we just need the elementary integrals
which gives
in agreement with the mean distance computed previously by triple and quadruple integrals. An important advantage of this approach is that it not only gives a simpler integration, it gives the entire
distribution explicitly, rather than just the mean. Using this same density distribution we can compute the mean value of any function of the distances. For example, the mean of the inverse distances
1/s in a unit disk is given by
(The integral is even simpler in this case.) For a disk of radius r this is simply 16/(3pr). Thus, given n points uniformly distributed on a disk of radius r, there are n(n-1)/2 point-to-point
distances, which implies that the expected sum of the inverse distances is 8n(n-1)/(3pr). It’s interesting that while the mean reciprocal distance is finite, the mean squared reciprocal distance is
not. This suggests a geometrical version of the Petersburg paradox: the game is to choose two points randomly from a uniform distribution inside a unit circle, and pay an amount proportional to the
inverse square of the distance between them. Intuitively it might seem as if the payout would not be very high, but the expected value is actually infinite. (Unlike a sequence of coin tosses, this is
a one-step game, but of course we assume infinite precision in the definition of the selected points.)
The proposition that the density of v is proportional to the intersection of B with B[®v] is very general, and applies to non-convex (and even non-connected regions), not just simple regions like
spheres and cubes. For example, it can be used to determine the distance distribution for points of a torus.
Return to MathPages Main Menu
|
{"url":"http://mathpages.com/home/kmath324/kmath324.htm","timestamp":"2014-04-16T13:58:58Z","content_type":null,"content_length":"42003","record_id":"<urn:uuid:fe133ede-f0bc-4ea7-8b05-cac6f7715037>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FFT Help: Music LED Display - Arduino Forum
Hi all! I've been doing a lot of research lately on FFT to develop what I thought to be a simple project but it seems I keep hitting road blocks. What I'm doing is I'm using an Arduino Uno to light 3
different LED's depending on the beat of the music. Searching the internet I've found tons of similar projects, but their programs seem to do random beats and not actually analyze the frequency. I
know of music labs FFT library and the 8-bit library by defi, I just need more understanding on using them.
This is what I want my program to do. Read in an audio source from the A0 port. Analyze this audio using FFT and determine the frequency of the audio. Depending on the frequency light a certain LED.
For example if frequency is < 150 light red, if it's between 150 and 300 light blue and if it's greater than 300 light green.
Any and all help would be appreciated! Below are some questions that would really help me out if answered!
Learning FFT, I've picked up on most concepts except for one major thing. What is it exactly that the FFT arrays store? What do the numbers derived from the audio sources mean and what do I have to
do to use them for my purpose?
|
{"url":"http://forum.arduino.cc/index.php?topic=195378.0","timestamp":"2014-04-17T21:49:29Z","content_type":null,"content_length":"75394","record_id":"<urn:uuid:0c653d1e-0467-4716-99c1-d1b9b63a2b34>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Merrimack Prealgebra Tutor
Find a Merrimack Prealgebra Tutor
...I also have a master's of science degree in oceanography, and graduated with a GPA of 3.9. I have worked as a research scientist on several projects where I have studied the genetic regulation
of male fertility at Jackson Lab in Bar Harbor, Maine, the mechanical properties of phytoplankton chain...
22 Subjects: including prealgebra, reading, writing, geometry
I am a mother of three young children who is currently in my Junior year at Southern NH University. I am studying to become a Middle School Mathematics Teacher. I am also a cheerleading coach for
girls who range from 6 years old to 13 years old.
3 Subjects: including prealgebra, algebra 1, algebra 2
...I have experience tutoring math for the PSAT, SAT, ACT, and GED. As a tutor, I try to break concepts down to their most basic terms to ensure students understand them. I will also use various
methods to teach a subject to ensure understanding.
11 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...Geometry comes in two catagories: 1. mathematics of shapes: the branch of mathematics that is concerned with the properties and relationships of points, lines, angles, curves, surfaces, and
solids 2."Euclidean geometry" or "solid geometry": a set of distinct theories or its application to a par...
27 Subjects: including prealgebra, reading, English, writing
Hello, I was a certified level 3 tutor through College of Reading and Learning accredited program at UNH in Manchester, where I graduated in 2004. I was a class-link tutor for remedial English
classes, and tutored individual students in writing, study skills, and social sciences.
29 Subjects: including prealgebra, reading, English, writing
|
{"url":"http://www.purplemath.com/merrimack_nh_prealgebra_tutors.php","timestamp":"2014-04-19T17:51:37Z","content_type":null,"content_length":"23850","record_id":"<urn:uuid:7a659b51-d027-40b9-a042-12ec5e951cda>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pleasantville, NY Trigonometry Tutor
Find a Pleasantville, NY Trigonometry Tutor
...I have been able to help the student to better grasp an understanding of the math they are currently working on. I work to gain the student's focus, understanding and trust and then look to
challenge the student's understanding and interest. I support the student in all inquiries that they bring to the sessions so that trust, personal satisfaction and degree of understanding are
7 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...I taught Precalculus at St. Thomas Aquinas College and I tutored Precalculus at a private company. I taught Precalculus at St.
16 Subjects: including trigonometry, calculus, accounting, algebra 1
...I am known for my patience, my ability to connect with my students, and my ability to explain ideas in multiple ways. My experience: I have been tutoring on and off since in 2001, and have
tutored SAT Math, all levels of high school math, AP Calculus, and other calculus through Multivariable. I...
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...I also have tutoring experience with the SAT and ACT math examination. Aside from tutoring, I used to be a freelance math item writer and editor. I have written two practice manuals for
Intermediate Algebra II, test items for Algebra I/II and Geometry as well.
12 Subjects: including trigonometry, calculus, algebra 1, GRE
...Good luck with the studying!I studied Physics with Astronomy at undergraduate level, gaining a master's degree at upper 2nd class honors level (approx. 3.67 GPA equivalent). I then proceeded to
complete a PhD in Astrophysics, writing a thesis on Massive star formation in the Milky Way Galaxy usin...
8 Subjects: including trigonometry, physics, geometry, algebra 1
|
{"url":"http://www.purplemath.com/Pleasantville_NY_Trigonometry_tutors.php","timestamp":"2014-04-17T11:25:19Z","content_type":null,"content_length":"24443","record_id":"<urn:uuid:f85ea452-8e5e-44ff-9778-df5ff468bbf0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Easiest Method for Beginners
The Easiest^(*) Method for Beginners
^(*) Or maybe the worst...
It's perfectly possible to solve a Rubik's Cube using common intuition and simple rules.
Usually, people can solve one layer by themselves. If you can do it, you (almost) can solve the whole cube.
No mysterious magical sequences required. If you understand how it works, you'll remember it forever.
No need to learn any notation, the animated cubes will show you the basic moves you need.
No mathematical formulae (group theory principles explained here), I'll try to make it intuitive.
This page is not about efficient solving.
If you think this page is not very helpful, you should take a look at Jasmine's beginner page where a more conventional approach is proposed.
Solving the first layer
It seems that everybody can do it using common intuition. It can take some time if it's your first try.
You cube should now look like this:
Manipulating the first layer
You just built a layer starting from a random state. So, you should not have any problem making transformations of this layer.
Look at the following basic moves. You can do them differently, it doesn't matter.
Rotate a corner Rotate an edge
Swap two corners Swap two edges
Do you have problems understanding them? Don't go to the next section before you can master these easy moves (or the ones you found) allowing you to change pieces of the first layer easily.
Rearranging the first layer without disturbing the others
Once a first layer is solved, freedom of movement is reduced, and people can't see what they can do without destroying it. You have to find a way of moving only selected cubies, preserving the state
of others (local transformation).
Take the first basic move that rotates a corner for example. What's the problem with it? It destroys the two lower layers of course. Do it backwards, the cube is restored.
Doing a move Undoing a move
But what happened at the end of the move? Think of it this way:
- Pieces of the first layer have been rearranged.
- Pieces of the two lower layers have been rearranged.
- Pieces of the first layer and pieces of the two lower layers are still separated.
- Undoing the move will independently restore the state of the first layer and the state of the two lower layers.
And now, the cornerstone of this method. Try this:
- Do a move that rearranges pieces of the first layer. Call it X.
- Move the first layer. Call it Y.
- Undo X. Call it X'.
- Undo Y (only a matter of readjusting the first layer). Call it Y'.
Since the two lower layers and their chaos have not been changed by Y, X' can still restore them to their original state!
But the first layer has moved, it won't be restored with X'. The backward transformation will be applied to a different part of it.
We have reached our goal: Making (local) transformations in a layer, without disturbing the others.
- X is a clockwise corner rotation move.
- Y is a clockwise turn of the first layer.
- X' is a counter-clockwise corner rotation move.
- Y' is a counter-clockwise turn.
Result: Two corners in the first layer have been rotated (different directions).
Thanks to the four basic moves, we can build four interesting local transformations of the first layer.
Basic move (X) Result of the commutator (X.Y.X'.Y') Example
Rotate a corner Rotate two corners
Rotate an edge Rotate two edges
Swap two corners Swap three corners
Swap two edges Swap three edges
Changing pieces belonging to different layers
The pieces on which a local transformation must be applied do not always belong to the same layer. You'll have to bring them to a same layer first with a positioning move:
- Make interesting pieces belong to the first layer. Call it P.
- X.Y.X'.Y'.
- Undo the positioning move. Call it P'.
Example: Permutation of three edges.
- Move the front side and then the right side to bring edges to up-front and up-right (P).
- Apply the three-edge swapping technique (based on X.Y.X'.Y').
- Move the right side and then the front side back to their original positions (P').
That's all you need to solve the 3x3x3 cube.
Solving example on a random cube
Let's solve a cube completely.
I don't detail how the first layer is built, you can do it, even if it takes more moves.
One by one, the edges of the second layer are positioned, using exclusively sequences that swap three edges. Red-green edge is easy, because in the same layer, you can find its destination and
another free edge position. Same thing for red-blue and green-orange. It's more difficult for the blue-orange edge. A positioning move brings blue-orange, it's destination position, and another free
position to the same layer. This move is undone at the end of the sequence.
Now, the last layer. You'll notice that only the blue-white-red corner is at a correct place, the three others must be swapped. Then, three edges are swapped, because the white-red edge only is
where it needs to be. Finally, we have to fix the orientations.
Working with slices
All the examples above were based on working with a side of the cube. You can apply the same rules to inner slices as well.
X must be a move that rearranges pieces of a slice, and Y a move of this slice.
Example: Permutation of three edges in a slice.
- Swap up-front and up-back edges (X).
- Move center slice (Y).
- Swap up-front and up-back edges again (X'=X).
- Move center slice back (Y').
Different kinds of pieces
For now, we've only worked with corners or edges, but never both kinds of pieces at the same moment. Why not? It's exactly the same principle. On the example, two corner-edge blocks are removed from
the first layer and swapped.
More interesting commutators
In order to make things clear for beginners, I described a technique based on changing things in a single layer. But commutators can be much more powerful.
Try to see how and why they work. Hint: The Y move doesn't compromise any pieces but the ones we need to move.
Swap three corners Swap three edges
Same random cube, but optimized solving
Now we'll make use of some improvements.
Green-orange and green-red edges can be solved simulaneously with a commutator based on a slice, after an easy positioning.
Then I decided to swap three corners in an efficient way.
Two edges again: Green-white and blue-orange.
Then, the three last edges at a wrong place are moved.
In the end, two misoriented edges are fixed.
Once the first layered is completed, everything is based on an identical strategy: P.X.Y.X'.Y'.P'.
|
{"url":"http://grrroux.free.fr/begin/Begin.html","timestamp":"2014-04-21T12:09:38Z","content_type":null,"content_length":"27421","record_id":"<urn:uuid:419fcc1b-f2b9-4be9-a613-8c44f2fa4248>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newest 'divisors characteristic-p' Questions
A smooth curve $X$ in $\mathbb{P}^n$ is strange if there is a point $p$ which lies on all the tangent lines of $X$. Examples are $\mathbb{P}^1$ is strange and so is $y=x^2$ in characteristic $2$. ...
Suppose that $X$ is a smooth algebraic variety over an algebraically closed (uncountable if it helps) field of characteristic $p > 0$. Suppose that $L$ is a line bundle, probably ample or at least
|
{"url":"http://mathoverflow.net/questions/tagged/divisors+characteristic-p","timestamp":"2014-04-21T05:16:15Z","content_type":null,"content_length":"34278","record_id":"<urn:uuid:20ebb327-a5d4-407a-ba1a-a1185369891a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the point z(4,-2) is rotated 180 about the origin. what is the image of z? - Homework Help - eNotes.com
the point z(4,-2) is rotated 180 about the origin. what is the image of z?
You need to determine the coordinates of symmetric point to origin, hence, first you need to determine the quadrant the point z is in.
Since x coordinate is positive and y coordinate is negative, the point z is in quadrant 4, hence, the symmetric point is in quadrant 2.
Since the image of the point z is in quadrant 2, the x coordinate is negative and y coordinate is positive such that:
Notice that x and y coordinates of image point z'(-4,2) have opposite values to the coordinates of original point z(4,-2).
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes
|
{"url":"http://www.enotes.com/homework-help/point-z-4-2-rotated-180-about-origin-what-image-z-340358","timestamp":"2014-04-18T08:09:34Z","content_type":null,"content_length":"25130","record_id":"<urn:uuid:cae85dde-6ba9-435b-aad4-01583a6e950e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: options in Plot
Replies: 1 Last Post: Dec 20, 2012 3:19 AM
Re: options in Plot
Posted: Dec 20, 2012 3:19 AM
It's because of the respective Attributes of ListPlot and Plot.
{Protected, ReadProtected}
{HoldAll, Protected, ReadProtected}
So you have to Evaluate the opts in Plot.
Plot[x, {x, 0, 1}, Evaluate@opts]
Maybe somebody can again explain why Plot has the Attribute HoldAll, instead
of no Holds or HoldFirst.
I still think it would be helpful if the Function pages had a distinctive
place that listed all Attributes of a function. They tend to get lost in
the Details listing and do not list all of them. It may be that all WRI
functions are Protected, but not all private symbols are.
David Park
From: Nigel King [mailto:nigel.king@cambiumnetworks.com]
Hi MathGroup,
Most Graphic functions allow applying a collection of options as in opts =
{Frame -> True, GridLines -> {{1}, {1}}}
One can then use the opts in the following plots ListPlot[{{1, 1}}, opts]
Plot[x, {x, 0, 1}, opts] The ListPlot works as expected, the Plot does not.
It results with Plot[x, {x, 0, 1}, opts]
I believe that this is a change from M8 to M9.
Is this a bug or intended functionality?
Nigel King
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=7941759","timestamp":"2014-04-20T22:32:27Z","content_type":null,"content_length":"15289","record_id":"<urn:uuid:1ddd5f80-cb8b-400b-ada7-4e56a929541a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computational Complexity
The 17x17 challenge: worth $289.00. I am not kidding.
The n x m grid is
if there is a way to c-color the vertices of the n x m grid so that there is no rectangle with all four corners the same color. (The rectangles I care about have the sides parallel to the x and y
The 17x17 challenge
: The first person to email me a 4-coloring of 17x17 in LaTeX will win $289.00. (I have a LaTeX template below.)
HAVE CHANGED SINCE THE ORIGINAL POST SINCE BRAD LARSON EMAILED ME A 4-COL OF 21x10 and 21x11. UPDATE: BRAD LARSON EMAILED ME A 4-COL OF 22x10.
UPDATE- PROBLEM WAS SOLVED. See my arXiv paper on grid colorings OR
my Feb 8, 2012 post.
Why 17x17? Are there other grids I care about?
We (We=Stephen Fenner and Charles Glover and Semmy Purewal) will soon be submitting a
for a superset of the slide-talk I've given on this paper) on coloring grids. Here are the results and comments:
1. For all c there is a finite set of grids a[1]xb[1], ..., a[k]xb[k] such that a grid is c-colorable iff it does not contain any of the a[i]xb[i] (n x m contains a x b if a &le n and b &le m). We
call this set of grids OBS[c] (OBS for Obstruction Set).
2. OBS[2] = {3x7, 5x5, 7x3}
3. OBS[3] = {19x4, 16x5, 13x7, 11x10, 10x11, 7x13, 5x16, 4x19}
4. OBS[4] contains 41x5, 31x6, 29x7, 25x9, 23x10, 10x23, 9x25, 7x29, 6x31, 5x41
5. Exactly one of the following sets is in OBS[4]: 21x13, 21x12.
6. Exactly one of the following sets is in OBS[4]: 19x17, 18x17, 17x17.
7. If 19x17 is in OBS[4] then 18x18 might be in OBS[4] . If 19x17 is not in OBS[4] then 18x18 is not in OBS[4] .
8. A chart of what we do and do not know about 4-colorings of grids is here.
9. A Rectangle Free Subset of a x b is a subset of a x b that has no rectangles. Note that if a x b is 4-colorable then there must be a rectangle free subset of a x b of size Ceil(ab/4). We actually
have a rectangle free subset of 17x17 of size Ceil(289/4)+1. Hence we think it is 4-colorable. For the rectangle-free subset see here. For the original LaTeX (which you can use for a template
when you submit yours) see here. THIS IS WHY I AM FOCUSING ON 17x17--- BECAUSE I REALLY REALLY THINK THAT IT IS 4-COLORABLE. I could still be wrong.
What if 17x17 is not 4-colorable?
Then nobody will collect the $289.00. Even so, if you find this out I would very much like to hear about it. I don't want to offer money for that since if your proof is a computer proof that I can't
verify then its not clear how I can verify it. I also don't want to offer money for a
reasonable proof
since this may lead to having the Supreme Court decide what is a reasonable proof.
Can I get a paper out of this?
FACT: If you get me the coloring and want to use it in a publication then that is fine. OPINION: It would not be worth publishing unless you get all of OBS
. See next question.
What do you hope to get out of this?
My most optimistic scenario is that someone solves the 17x17 problem and that the math or software that they use can be used to crack the entire OBS
problem. If this happens and my paper has not been accepted yet then we could talk about a merge, though this is tentative on both ends.
Has there been any work done on this problem that is not in your paper?
1. A Rutgers grad student named Beth Kupkin has worked on the problem: here.
2. A high school student (member of my VDW gang) Rohan Puttagunta has obtained a 4-coloring of 17x17 EXCEPT for one square: here.
3. SAT-solvers and IP-programs have been used but have not worked--- however, I don't think they were tried that seriously.
4. Here are some thoughts I have had on the problem lately: here.
5. There is a paper on the 3-d version of this problem: here.
Is Lance involved with this at all?
When I gave a talk on grid colorings at Northwestern, Lance fell asleep.
First a message from David Johnson for proposals on locations for SODA 2012 both in and outside the US.
Here's an interesting approach to the birthday paradox using variances.
Suppose we have m people who have birthdays spread uniformly over n days. We want to bound m such that the probability that there are are at least two people with the same birthay is about one-half.
For 1 ≤ i < j ≤ m, let A[i,j] be a random variable taking value 1 if the ith and jth person share the same birthday and zero otherwise. Let A be the sum of the A[i,j]. At least two people have the
same birthday if A ≥ 1.
E(A[i,j]) = 1/n so by linearity of expectations, E(A) = m(m-1)/2n. By Markov's inequality, Prob(A &ge 1) ≤ E(A) so if m(m-1)/2n ≤ 1/2 (approximately m ≤ n^1/2), the probability that two people have
the same birthday is less than 1/2.
How about the other direction? For that we need to compute the variance. Var(A[i,j]) = E(A[i,j]^2)-E^2(A[i,j]) = 1/n-1/n^2 = (n-1)/n^2.
A[i,j] and A[u,v] are independent random variables, obvious if {i,j}∩{u,v} = ∅ but still true even if they share an index: Prob(A[i,j]A[i,v] = 1) = Prob(The ith, jith and vth person all share the
same birthday) = 1/n^2 = Prob(A[i,j]=1)Prob(A[i,v]=1).
The variance of a sum of pairwise independent random variables is the sum of the variances so we have Var(A) = m(m-1)(n-1)/2n^2.
Since A is integral we have Prob(A < 1) = Prob(A = 0) ≤ Prob(|A-E(A)| ≥ E(A)) ≤ Var(A)/E^2(A) by Chebyshev's inequality. After simplifying we get Prob(A < 1) ≤ 2(n-1)/(m(m-1)) or approximately 2n/m^
2. Setting that equal to 1/2 says that if m ≥ 2n^1/2 the probability that everyone has different birthdays is at most 1/2.
If m is the value that gives probability one-half that we have at least two people with the same birthday, we get n^1/2 ≤ m ≤ 2n^1/2, a factor of 2 difference. Not bad for a simple variance
Plugging in n = 365 into the exact formulas we get 19.612 ≤ m ≤ 38.661 where the real answer is about m = 23.
Enjoy the Thanksgiving holiday. We'll be back on Monday.
Last Friday DIMACS celebrated its 20th anniversary. Muthu summarizes the event.
DIMACS has served the theoretical computer science community well over these two decades. They have hosted a number of postdocs and visitors usually around a Special Focus (originally Special years
but one year is usually not enough). DIMACS runs a large number of educational and research activities but most importantly the great workshops over the years. DIMACS's reputation for having strong
workshops allows it to continue to attract strong workshops and has helped make New Jersey (my home state) a major center of theory.
The first DIMACS workshop I attended, Structural Complexity and Cryptography is where I first heard about about the first Gödel-prize winning research relating interactive proofs to hardness of
approximation that was done primarily during that Special Year on Complexity Theory and Interactive Computation.
When I moved to the NEC Research Institute in 1999, I became quite active in DIMACS which had then started a Special Year on Computational Intractability including a great workshop on the Intrinsic
Complexity of Computations with many great talks and discussions on the hardness of proving lower bounds. My talk on Diagonalization later became an article in the BEATCS complexity column.
I then joined the DIMACS executive board as the NEC representative just as the center was ending its 11-years of funding as an NSF Science and Technology Center. Amazing that DIMACS survived that
transition and survived for these twenty years and beyond. Most of the credit goes to DIMACS director Fred Roberts who has often hustled for funding for specific special foci, projects and workshops,
as well as finding people to run those foci and workshops.
The 2001 Workshop on Computational Issues in Game Theory and Mechanism Design truly established a new discipline in connecting computer science and economic theory. Based on the excitement from that
workshop, DIMACS started a Special Focus on Computation and the Socio-Economic Sciences which Fred talked me into co-organizing with Rakesh Vohra, from Northwestern's Kellogg business school. After I
moved back to Chicago in 2003, I met with Rakesh to plan the focus which led to collaboration and eventually my moving to Northwestern.
The special focus had a number of exciting workshops particularly Information Markets which restarted that research area a couple years after the PAM disaster and our closing workshop on the Boundary
between Economic Theory and Computer Science one of the few meetings that truly attracted both strong computer scientists and economists.
That's just a few of my DIMACS memories. Many others have similar stories for a center that helped shape the professional lives of myself and many other CS researchers. Congrats for 20 years and
here's hoping for many more.
As most of you know there are 7 problems worth $1,000,000 (see here). It may be just 6 since Poincare's conjecture has probably been solved. Why are these problems worth that much money? There are
other open problems that are worth far less money. What determines how much money a problem is worth?
When Erdos offered money for a problem (from 10 to 3000 dollars) I suspect that the amount of money depended on (1) how hard Erdos thought the problem was, (2) how much Erdos cared about the problem,
(3) how much money Erdos had when he offered the prize, and (4) inflation. (If anyone can find a pointer to the list of open Erdos Problems please comment and I'll add it here.)
Here is a problem that I have heard is hard and deep, yet it is only worth $3000 (Erdos proposed it). I think that it should be worth more.
BACKGROUND: Szemeredi's theorem: Let A&sube N. If the limit as n goes to infinity of size(A &cap {1,...,n})/n is bounded below by a positive constant then A has arbitrarily long arithmetic sequences.
Intuition: if a set is large then it has arb long arith seqs. The CONJECTURE below uses a diff notion of large.
CONJECTURE: Let A&sube N. If &sum[a&isin A] 1/a div
KNOWN: Its known that if A is the set of all primes (note that &sum[a&isin A] 1/a diverges) then A has arbitrarily large arithmetic progressions. Nothing else is known! The conjecture for 3-AP's
isn't even known!
Is this a good problem? If it is solved quickly (very unlikely) than NO. If absolutely no progress is made on it and no interesting mathematics comes out of the attempts than NO. It has to be just
A student asked me which version of a research paper to cite, a journal (the last reviewed version) or a conference (the first reviewed version) of a paper. I generally cite papers in this precedence
1. The fully refereed journal version, even if it is "to appear".
2. The reviewed, though not usually refereed, conference proceedings version, again even if it is "to appear".
3. On an electronic archive, like arXiv or ECCC.
4. As a departmental technical report.
5. On a generic web page, like a personal page.
6. As a "Manuscript", if I have seen the paper but it's not publicly available.
7. As "Personal Communication" if the paper doesn't exist.
If the original paper is not in English I'll cite both the paper and an English translation if there is one.
The journal version can distort precedence. Paper A that depends on paper B can have a much earlier journal publication date. If precedence is a real issue, say when I am trying to give an historical
overview, then I will cite both the journal and conference versions.
What if you use a theorem that appears in a page-limited conference paper but who's proof only appears in the longer tech report. Then I cite the conference paper for the theorem and the tech report
for the proof. Even if a proof exists in a paper, I'll often cite another paper or book if it has a cleaner or simpler proof.
What if you cite a paper for a theorem for a proof that doesn't exist (the infamous "will appear in a later version of the paper")? If your paper critically needs that theorem, you really should give
the proof for it yourself. At the very least later papers will cite your paper for the proof.
What if the conference or journal version is not on-line or behind a pay wall? I still cite the latest version figuring that if someone wants to read the paper they can use a service like Google
Scholar to find an accessible version.
I try to use the same rules for links to papers on this blog because it's more important to give out the citation. If someone wants to download the paper again it's usually easy enough to find it.
In my .bib file (of bibtex entries), I replace the entries of papers as they get updated under the same citation-key. That way when I go back to latex older paper they get the latest references. We
have too many people in our field who don't bother updating references, pointing to a tech report when the conference or journal version has been published.
Should you add hyperlinks in your bibliography to other papers? Nice if you do so and probably good if you are young and get into the habit now. But I haven't found the impetus to add links to papers
in my now quite large .bib file.
In my ideal world, each research paper would have a web location which has human and machine readable descriptions of and pointers to all versions of that paper. We would just input that location
into bibtex and it would automatically pull the information from the web and make the appropriate entry in your references. Then we would all cite correctly, or at least consistently.
There are now bibles online where you can click for different versions, different translations, different interepretations, historical context, etc. The same is true, or will be soon, for other
faith's holy books as well.
Will there come a day when people bring their laptops (or smaller devices) to church? They can claim that they are looking up things in their online bible. Some will indeed be looking up bible
passages. Some will be balancing their checkbook. Some will be reading blogs. Some will be looking at porn. Will the church need to deal with this? If it does not distract others (and smaller devices
won't) than perhaps not. But the notion of looking at porn while you are in Church is troublesome.
How does this compare to the laptops-in-classroom debate that we had here?
- Nothing I have said here is particulary to Christianity-- All faiths will have these problems. I wonder when it will come up and how they will deal with it.
As I tweeted yesterday, the videos of talks from the 2009 FOCS conference are now online. Thanks to FOCS PC chair Daniel Spielman and Georgia Tech's ARC Center for making it happen.
Let me be a bit Billish (or is it Gasarchian) and make my comments as questions.
1. Which talks are most worth watching?
2. How many of these videos do you plan on watching?
3. Do you get value over these talks over reading the papers? The papers aren't on the IEEE DLs yet but Shiva has collected some links.
4. If STOC/FOCS talks were generally available on-line shortly after the conference, would this affect your attendance?
5. Are the videos useful even if you did attend FOCS?
6. Would you prefer videos on a established site like having a YouTube channel so the talks will always be available and you can use YouTube features like embedding?
7. Should STOC/FOCS and other conferences continue to make videos of their talks and post them freely on-line? How much would you be willing to pay in increased registration fees to make it happen?
This would be a subsidy from those attending the conference to those who don't.
As a young kid in the Reform Jewish community we used the Union Prayer Book, a traditional book with Hebrew on the right and English on the left with lots of instructions for page jumping, standing,
when to sing and whether everyone should speak. When I became a Bar Mitzvah in 1976, the temple sisterhood gave me a copy of the new prayer book, Gates of Prayer. The Gates of Prayer had services in
a linear fashion using different fonts to denote when to sing and who should speak and with instructions on when to stand and sit. One could run an entire service giving no instructions from the
Gates of Prayer lasted more than three decades with only some rewording mostly to make the prayers gender neutral.
But in this Internet age people no longer read linearly. Last Friday I had my first taste of the new Reform Jewish prayer book, the Mishkan T'filiah. No instructions or any indication when to stand
or who should talk or sing. Here is a sample page (via the New York Times).
Each prayer gets its own two page spread with Hebrew, a faithful transliteration and translation and a couple alternative interpretations. No instructions or indications on when to stand, who should
talk or sing and even when to turn the page. Our temple put in a notecard in each prayer book to explain the new book with some clues like whenever we hear "Baruch Atah (Blessed are you..)" we should
turn the page.
One can easily see the Internet's influence. Each prayer gets the equivalent of a web page (or maybe a blog post). Lists on sides are like hyperlinks to other prayers. At the bottom are
Twitter-length commentaries on the prayer.
I expect this will be the last paper prayer book in the Reform movement. In the future we'll all download the prayers on our e-readers or smart phones with links to the next and other relevant
prayers, all preset by the rabbis for that day's service.
There are now laws about blogging and twittering that Lance and I (and all the bloggers) will need to be aware of. Here is a short summary:
1. If a blogger posts about a product that she got for free then she must disclose that she got it for free. (Applies to guys also.)
2. There are no such laws for traditional media.
Some questions:
1. What problem is this trying to solve? Did Lance get a free copy of some textbook, twitter about it, and it sold 1,000,000 copies? Or did Alaska Nebraska twitter about (say) a car she got for free
and it sold alot? And if she did, so what? If the book or car is terrible then Lance's and Alaska's credibility will go down and people will stop following them. Thats how the free market is
supposed to work. But does it?
2. Lets say that the evil trying to be prevented here is that Alaska Nebraka takes the car as a bribe and gives it a good review. Should that be illegal? And if so, why is that okay in old media?
Because nobody pays attention to old media anymore?
3. If Lance just took straight-up-cash to write a good review, is that illegal?
4. If I get something for free, do not acknowledge that I got it for free, and TRASH IT on the blog, is that illegal?
What will Lance and I do? Well, in my SIGACT NEWS book review column I will put at the top of every column that the books were given to me for free. And if I ever comment about something I got for
free (again, likely a book) then I will note that fact.
The wallet that Lance blogged about here is AWESOME!!!!!!!!!!!
As many university's still feel the effect of the financial crises, many have limited or no positions to hire new tenure-track faculty so I expect the academic job market to be difficult again this
year. But a few people have asked me to post announcements. Here are some places that are likely planning to hire tenure-track faculty in theory-related areas.
Feel free to add other announcements in the comments.
Also watch the DMANet and TheoryNet lists, now collected in a new blog, and the listings at the CRA and the ACM as well as individual departmental home pages. Even if a place doesn't specifically
advertise for theory, or at all, they still may hire so it never hurts to ask or apply.
Nevertheless finding a tenure-track job will be difficult this year. Should be a bit easier to find a postdoc position and no one will count it against you if you take a second or third postdoc to
tie you over until the market improves.
Many of you readers don't remember a time when there were two Germanys or when we didn't think IP = PSPACE. Two walls collapsed in November of 1989 that changed all that. One marked the end of the
Cold War. The other marked the start of the most exciting seven weeks of my research career.
It all happened during my rookie year as an assistant professor at the University of Chicago. On November 27, 1989, Noam Nisan emailed a paper draft showing that verifying the permanent (and thus
co-NP) has multiple prover interactive proofs. Besides being a major breakthrough, this was the first time someone had proven a theorem where there was a previously published oracle result that went
the other way. In my very first theorem in grad school (with Mike Sipser) we had an oracle relative to which co-NP did not have single prover interactive proofs (IP). With John Rompel, we extended
the result to an oracle where co-NP did not have multiple prover interactive proofs (MIP). Noam showed co-NP did have multiple prover proof systems in the non-relativized world. This led me to
thinking about whether we can find a single prover proof. I worked on this problem with then student Carsten Lund and Howard Karloff who had the office across from me. After a few weeks of hacking we
finally did get such a protocol. Noam agreed to merge his now weaker result into ours and we quickly wrote it up and on December 13 emailed it out to the masses (these were the days before web
archives, or even a web).
It took Adi Shamir less than two weeks to find the twist to add to our proof to get the full result that IP = PSPACE that he emailed out the day after Chirstmas. Babai (jokingly I think) chastised me
afterwards for going away on a planned ski trip in mid-December.
In Fortnow-Rompel-Sipser we had also showed that MIP was contained in NEXP, non-deterministisic exponential time. After IP=PSPACE I then embarked on showing that you could put all of NEXP in MIP. At
one point someone asked me why I didn't first try EXP in MIP and I mumbled something about how one ought to get nondeterminism for free in this model, but deep down I didn't want another partial
result that someone else could usurp. My approach (not the one anyone would teach today) involved relativizing the LFKN co-NP in IP protocol using the provers to simulate the oracle. But we needed a
low-degree test to make the relativization work, and working with Carsten and Laszlo Babai, we finally came up with a proof of the test. We sent out that paper on January 17, 1990.
I remember most the breakthrough moments when Carsten walked into my office asking me why his protocol for the permanent didn't work (it did) and when Babai, Lund and I discovered that the expansion
properties of the hypercube was the last piece we needed to make our linear test work.
After MIP=NEXP, I thought we now knew nearly everything that matters about interactive proofs and moved on to other areas of complexity. Thus I missed the entire PCP craze that started about a year
At the 1990 Structures in Complexity Conference in Barcelona, Babai gave a cool talk E-mail and the Unexpected Power of Interaction describing those then recent results and the process behind them. I
used that write-up to recover the dates above.
IBM-NYU-COLUMBIA theory day on Dec 11 ! Here are pointers to more information: here and here and here.
My advice: If you are able to go (distance, time, money all okay) then you should go. If you are unable to go then you shouldn't go.
Will this by available on video later? Also- if it is, will people actually watch it? There is something about BEING at a talk that is given at a particular time that seems more compelling then being
able to look at it whenever you want, and hence not get around to looking at it. ~
Guest Post from Aaron Sterling
Multi-Agent Biological Systems and the Natural Algorithms Workshop
I attended the Natural Algorithms Workshop on November 2nd and 3rd. This was an event held by the Center for Computational Intractability in Princeton (which brought us the Barriers in Complexity
Workshop). Bernard Chazelle was the organizer.
Chazelle is interested in applying complexity-theoretic techniques to the mathematical modeling of biological systems. Recently, Chazelle applied spectral graph theory, combinatorics of
nondeterministic computation paths, and other techniques, to obtain upper and lower time bounds on two representative models of bird flocking. (Control-theoretic methods, e.g., Lyapunov functions,
had obtained only an existence theorem the models converge without providing time bounds.) Chazelle's Natural Algorithms won the Best Paper Award at SODA 2009. (I found the SODA paper hard to read,
as it moves through multiple difficult techniques in a compressed fashion. I recommend reading the first thirteen pages of this version instead, to get a sense of what he's doing.)
While computational complexity's entry into this field may be new, other areas of computer science have much longer-standing connection to multi-agent biological systems. A notable example is the
work of Marco Dorigo, whose ant colony optimization algorithm has been influential. Dorigo's research group won the AAAI video contest in 2007; their video is a lot of fun to watch.
Back to the Workshop! The speakers were from diverse backgrounds (control theory, robotics, biology, distributed computing, mechanical engineering), and the talks were great. For reasons of space,
I'll limit myself to summarizing one talk I found particularly exciting.
Magnus Egerstedt, a roboticist at Georgia Tech, talked about dolphin-inspired robot algorithms. He explained how dolphins feed. They use three basic strategies. In the first, a group of dolphins
swims circles around a school of fish, in a formation called a carousel. Once the fish are well trapped by the carousel, the dolphins swim through the school one at a time to eat. The second strategy
involves swimming on either side of the fish to herd them in a particular direction. The third involves physically pushing the fish from behind, either into a waiting group of dolphins, or onto dry
land. Egerstedt showed a video of dolphins beaching themselves so they could push fish onto the sand. These feeding strategies demonstrate team effort, despite (presumably) limited ability to
communicate. As one example, the dolphins have to take turns to feed during the carousel, or the entire strategy will be ruined. Egerstedt's group has used the behavior of dolphins to inspire several
multi-agent algorithms. Here is a recent technical paper of theirs that was dolphin-inspired.
All the talks were videotaped, and are supposed to be on the Center's web page soon.
As a final note, Chazelle has written a followup paper to Natural Algorithms, and will be presenting it at ICS in January. There's no online version yet, but the abstract says the paper fits within a
larger effort to develop an algorithmic calculus for self-organizing systems. With a battery of tools available, we might see quite a bit of TCS activity in this area over the next couple years.
Back in 1993 I had the following conversation with one of my relatives:
BILL: Just give me your email address and I'll email it to you.
RELATIVE: I don't have an email account. Why would I? Only you techno-geeks need email.
In 1994 I had the following conversation with the same relative:
BILL: Just give me your postal address and I'll mail it to you.
RELATIVE: Better- I'll give you my email address.
BILL: I thought you didn't have an email account.
RELATIVE: Only a techno-geek like you wouldn't know that everyone has email now.
At one time most people (outside of academia) did not have email. Now everyone has email. It is quite standard to have email on business cards and to ask for someones email address (are business
cards still standard?).
I have run into people who are surprised that I don't have a FACEBOOK page. But here is the question: will having a FACEBOOK page be on the same level as having email in that everyone is expected to
have it, and if you don't then you are just so 2005? My question is not tied to FACEBOOK per se--- I really mean will it be standard to be in some social network. Is having a webpage already
(NOTE: My spell program tells me that its FACEBOOK not Facebook or facebook. So at least this time the capitol letters are not my idea.)
Here is the offer: If you press the button you will receive $200,000. The caveat: Someone you don't know will die.
I was born during run of the original Twilight Zone but growing up watched them over and over again in reruns. People dealing with some small change in reality often with a clever twist in the
ending. But they made you think.
CBS had a remake of the Twilight Zone series in the mid-80's. But nearly all the episodes were quite bland and highly predictable stories. But one episode in that new series, "Button Button", which
made that offer above, caught be by surprise and worthy of being in the same caliber of the best of the original series. I won't spoil anything else about the episode.
A new movie The Box, a remake of Button, Button that opens today. I thank the commercials for The Box for reminding me of that sole great Twilight Zone episode of the 80's. Can a full length movie do
justice to that short and powerful episode, even with the writer and director of Donnie Darko? I doubt it so skip the movie and rent the the original (disk 5) instead.
The new Innovations in Computer Science conference announced their accepted papers earlier this week including my paper with Rahul Santhanam "Bounding Rationality by Discounting Time". Shiva is
collecting PDF pointers (hope to get ours posted and on that list soon).
According to Noam, "this is the most interesting list of accepted papers that I’ve seen in years". Suresh seems happy too but some of his commenter's were less impressed. I view ICS as playing an
orthogonal to STOC/FOCS, trying to present papers with potentially interesting ideas that wouldn't normally be accepted into STOC or FOCS.
Why does STOC and FOCS not accept many innovative papers, even after adding the line "Papers that broaden the reach of theory, or raise important problems that can benefit from theoretical
investigation and analysis, are encouraged" to the CFP? Since I can remember, STOC has focused more on solving well-defined problems with a deference towards technical proofs. A more innovative
papers, that is a new model with a few simple results, tends to be harder to sell as one needs strong motivation and a model that really captures that motivation. When we do have more innovative
papers they tend to be selected based on the authors more than the content.
But this problem has gotten worse in recent years as our field has grown and PCs seem unwilling to trade a solid technical paper for a risky innovative one. When the PC meets, an innovative paper has
a higher standard, the need to justify a new model or problem. Papers that answer old open questions or extend earlier work can justify their models simply because those models have appeared in
earlier conferences. But no theoretical model can exactly capture a real world scenario so one can always find negative things to say about any innovative model, causing more traditional papers to
often win the fight for those last few acceptance slots.
ICS did not accept as wide a spectrum of papers as I would have liked, probably due more to lack of a broad submission base. And the models of most of these papers will likely go nowhere but the hope
is some will go to instigate new research areas.
Given the accepted papers can we tell yet if ICS is a success or a failure? Not yet and that's the point. Wait five years and see what fraction of papers from ICS (compared to STOC/FOCS) are still
relevant and then we'll know.
Amir Pnueli passed away Monday from a brain hemorrhage. Amir was an expert in program verification who won the 1996 Turing Award primarily for his 1977 FOCS paper The Temporal Logic of Programs.
Lenore Zuck shares her thoughts.
I'm reading through what people wrote about Amir in the past 30 or so hours since we received the sad news of his passing. I'm leafing through the citations of his numerous awards and honorary
degrees, the list of academies of which he was a member, and the lists of his unique accomplishments. Throughout today I was helping with NYU's press release about him. Everybody mentions how
brilliant he was, how modest he was, how pleasant he was, how gracious he was. Some mention his patience with those of us who are less talented than him. What is mentioned less is how much fun he was
to work with, how he put the people around him at ease, how he managed to find good ideas in seemingly random thoughts people brought to him.
I recall the hours I've spent with Amir as a graduate student: joking, our regular Friday afternoon meetings solving the acrostic puzzle of "Koteret Rashit", discussing books, news, politics, and
taking work breaks. It's a wonder we got anything done. It's a wonder we got so much done. This was Amir's style -- alternating between work and play, and making large strides in research while
fostering the most pleasant work environment imaginable. Years later, when we were working together at NYU, it was the same with our joint PhD students. Long sessions consisting mainly of laughter,
with some breaks for coffee and sweets, at the end of which I always found myself stunned how we made so much progress during all the hilarity.
I'm trying to recall Amir's funniest lines. Some don't translate, some require too much background. But I thought of a few that may illustrate Amir's special humor. When a student of his was called
to military reserve duty whenever his oral exam for the completion of his MSc was scheduled, (temporal logic) until he found a note from Amir saying "you had your oral exam in absentia and you passed
with flying colors." Or when I interchanged N and M in a paper, on which, upon proof reading, Amir wrote a note "even for N=M" when I claimed I proved something for any N. Or... after long arguments
with me that the past operators in temporal logic are not needed, he gave a lecture in which he basically admitted they were useful, with slides whose titles were "Admitting the Past" and "Why Should
We be Ashamed of the Past" that I felt were for me. And the list goes on.
I've collaborated with Amir since I was a graduate student. I've had several joint students with him. I've worked with his students. I've held regular meetings with him, the next one is scheduled for
this Friday -- I still cannot bring myself to remove it from my calendar. I cannot imagine life without him. He was a genius, a friend, a colleague, one of a kind. I feel fortunate to have had the
opportunity to know him, to have worked him, to have learnt so much from him. May his memory, legacy, and example stay with us.
Lenore D. Zuck, Arlington, Virginia, November 3rd, 2009
Update 11/15: NYT Obit for Pnueli
(Reminder: STOC Deadline Thursday Nov 5, 7:00PM, Eastern: link.)
After yesterday's post about RaTLoCC 2009 (Ramsey Theory in Logic, Combinatorics, and Complexity) at Bertinoro some people emailed me and others commented asking Did you like it?. AH- my post was
about what I learned there but not whether I liked it. IT WAS GREAT! I was at Dagstuhl Complexity a few weeks earlier and that was also GREAT! Some comparisons:
1. Both meetings had the same format. Guests are invited, the area is specialized, there are talks in the morning, then a long lunch break (12:00-3:00 or so) then more talks. Also an excursion-- a
Hike in Germany and a Guided Tour of a city nearby in Italy.
2. Dagstuhl had 50 people or so, of which I probably knew 30 before hand. Bertinoro had 28 people of so, of which I probably knew 2 before hand. Both are good- I got to meet NEW people at both.
3. I got more out of Bertinoro. There were talks at Bertinoro where stuff I learned about Ramsey Theory 10 years ago was all I needed. At Dagstuhl I needed more recent background knowledge that I
often didn't have.
4. I got more out of both then I do out of conferences which, by their nature, ONLY have the latest results.
5. At Dagstuhl out last meal was served late which was annoying since we had a plane to catch. At Bertinoro all meals were served on time. While some may use this to support the tired stereotype
that Italians are better at organization then the Germans I stand by my prior postings about stereotypes (I'm against them).
Back from RaTLoCC 2009 which is Ramsey Theory in Logic, Combinatorics, and Complexity.
1. Here are the list of talks: here.
2. Reverse Mathematics tries to classify theorems by what is the strength of the axioms needed to prove them. There are five levels that almost all theorems in mathematics fit into. One curious
outlier: the infinite Ramsey Theorem for pairs (for all c, for all c-colorings of the unordered pairs of naturals, there exists an infinite homogeneous set--- that is, a set of naturals where
every pair has the same color). It does not fit in nicely. Ramsey for triples, etc, does.
3. One of the first natural theorems to be proven independent of PA was a Ramsey-type theorem. (One can debate if its natural.) See here.
4. Van der Waerden's Theorem In 1985 and earlier people thought that one of two things would happen with VDW's theorem. At that time the only proof lead to INSANE bounds on the VDW numbers. EITHER a
combinatorist would obtain a proof with smaller bounds OR a logician would prove that the bounds were inherent. What happened: A logician (Shelah) came up with a proof that yielded primitive
recursive bounds on VDW numbers. (Later Gowers got much better bounds.)
5. The talks on Complexity were all on Proof Complexity- things like blah-blah logical system needs blah-blah space to prove blah-blah theorem. Some of the theorems considered were Ramsey Theorems.
6. A while back I did two guest posts on Luca's blog on the poly VDW theorem here and here At the conference I found out two things that I said or implied that are INCORRECT.
1. I said that the first proof of Poly VDW, using Ergodic theory, did not give bounds. Philipp Gerhardy gave a talk at RaTLOcCC that claims that the ergodic proofs either do yield bounds, or can
easily be made to yield bounds. See his webpage (above pointer) for his paper. (When I later asked him if he prefers the classical proof of VDW's theorem or the Ergodic proof he said he can't
tell them apart anymore.)
2. I said that the only bounds known for PolyVDW numbers are the ones coming from an omega^omega induction. It turns out that Shelah has a paper that gives primitive recursive bounds. See either
here or entry 679 here. (The entry 679 version looks like its better typeset.) The paper is from 2002. I am not surprised that this is true or that Shelah proved it. I am surprised that I
didn't know about it since I am on the lookout for such things.
7. The conference was about 1/3 logicians, 1/3 combinatorists and 1/3 complexity theorists. There were only 27 people there, partially because it conflicted with FOCS and partially because there is
some institute having the entire Fall devoted to Combinatorics (I forget which inst. but they are in California).
8. They plan to run it again, and if so I will so be there. Or I will be so there. Or something.
|
{"url":"http://blog.computationalcomplexity.org/2009_11_01_archive.html","timestamp":"2014-04-16T18:57:12Z","content_type":null,"content_length":"277453","record_id":"<urn:uuid:4776da5d-da1f-4a2d-9b2b-7f33f3b7cd7e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|