Math Forum Discussions
Topic: The missing dollar problem
Replies: 3 Last Post: Mar 14, 1995 11:09 AM
Re: The missing dollar problem
Posted: Mar 13, 1995 8:41 PM
I know exactly the problem you are talking about! However, it is *not*
*un*solvable... it is just that the "intuitive" method for solving this
problem is actually erroneous. It is somewhat like an "optical illusion"
for the mathematical part of one's brain. The problem goes something like this:
Three friends need a hotel room for the night. They are charged
$30 for the room, or $10 apiece. However, an hour later, the hotel manager
discovers that she overcharged the three for their room. She sends the
bellhop upstairs with $5 in change. On the way upstairs, the bellhop
decides to pocket $2 for himself (since no one will know), because it will
make the dividing of the change among the three guests much simpler. Once
upstairs, he returns $1 to each of the three men.
Since the three friends each paid $9 for the room ($10 originally
minus $1 returned in change) and the bellhop ended up with a $2
"tip"--for a total of $29--what happened to the other dollar of the
$30 that was originally paid?
Have fun!!
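One way to see through the illusion is to track where each dollar ends up, rather than adding the guests' net payment to the bellhop's tip. A quick sketch:

```python
# Track each dollar by where it ends up, instead of adding the
# guests' net payment to the bellhop's tip (which double-counts).
paid = 3 * 10                       # the friends hand over $30
hotel_keeps = paid - 5              # $25 stays with the hotel
bellhop_keeps = 2                   # pocketed on the stairs
returned = 3 * 1                    # $1 back to each guest

net_paid = paid - returned          # each friend is out $9: $27 total

# The honest ledger: $25 + $2 + $3 accounts for all $30.
assert hotel_keeps + bellhop_keeps + returned == 30
# The $27 the friends paid already CONTAINS the bellhop's $2:
assert net_paid == hotel_keeps + bellhop_keeps    # 27 = 25 + 2
print("nothing is missing")
```

Adding $2 to $27 counts the tip twice; the correct comparison is $27 paid = $25 (hotel) + $2 (bellhop).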
>I am looking for The Missing Dollar Problem. I have a group of kids that
>will find this one "interesting". I have searched high and low around this
>house, and the problem is nowhere to be found. I am hoping that someone on
>the list will have it and post it.
>I am not sure exactly how it goes, but The Missing Dollar Problem is
>unsolvable. It starts out describing a man who is paying his hotel bill. He
>has $30 and the bill is $27. He tips the bellboy, and somehow gets some
>change. When it is all done, he is short a dollar.
>Does anyone have a copy of the problem?
>Thanks!!!!!! Diane McElwain
Angie S. Eshelman
116 Erickson Hall Office: (517) 353-0628
Michigan State University E-Mail: eshelma2@student.msu.edu
East Lansing, MI 48824-1034
Date Subject Author
3/13/95 The missing dollar problem DMcelwain@aol.com
3/13/95 Re: The missing dollar problem Angie S. Eshelman
3/13/95 Re: The missing dollar problem JSTEW@VMS.ACAD1.ALASKA.EDU
3/14/95 Re: The missing dollar problem Howard L. Hansen
Marlborough, MA Statistics Tutor
Find a Marlborough, MA Statistics Tutor
...I am a Novell Certified Network Engineer and I managed the Intel Network Certification Lab for two years. We certified Intel Servers on Banyan, Novell, and Microsoft software. I am currently a
Certified Microsoft Partner.
35 Subjects: including statistics, reading, English, writing
...I make sure they have all they need to succeed at their goal. My main areas of interest are: Elementary School and Young Students: Math and Science. I teach science in general and
computers and computer science.
47 Subjects: including statistics, chemistry, reading, calculus
...My patience is unparalleled, and I'm very easy to work with. Flexible scheduling. Contact me so we can discuss your success in math!
15 Subjects: including statistics, calculus, geometry, algebra 1
...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science
from West Point. My academic strengths are in mathematics and French.
16 Subjects: including statistics, French, elementary math, algebra 1
...I also have a master’s degree in software engineering from Brandeis University. I am in the process of pursuing a master of arts degree in teaching mathematics at the secondary level at Boston
University. I have tutored students in Mathematics for over 20 years.
13 Subjects: including statistics, calculus, geometry, algebra 1
Related Marlborough, MA Tutors
Marlborough, MA Accounting Tutors
Marlborough, MA ACT Tutors
Marlborough, MA Algebra Tutors
Marlborough, MA Algebra 2 Tutors
Marlborough, MA Calculus Tutors
Marlborough, MA Geometry Tutors
Marlborough, MA Math Tutors
Marlborough, MA Prealgebra Tutors
Marlborough, MA Precalculus Tutors
Marlborough, MA SAT Tutors
Marlborough, MA SAT Math Tutors
Marlborough, MA Science Tutors
Marlborough, MA Statistics Tutors
Marlborough, MA Trigonometry Tutors
[FOM] Wittgenstein's '770' in pi
Stephen G Simpson simpson at math.psu.edu
Mon Jul 16 09:52:35 EDT 2007
Harold Teichman writes:
> No such statements (about decimal expansions) figure prominently in
> the texts on number theory that I have seen. Could someone refer
> me to some actual literature in number theory (or any other part of
> mathematics) that has a bearing on this question?
The literature on so-called "normal numbers" is surely relevant. Some
references are in this on-line article:
Quoting from the article:
A normal number is an irrational number for which any finite pattern
of numbers occurs with the expected limiting frequency in the
expansion in a given base (or all bases). For example, for a normal
decimal number, each digit 0-9 would be expected to occur 1/10 of
the time, each pair of digits 00-99 would be expected to occur 1/100
of the time, etc.
The experts in this kind of number theory have a strong intuition that
numbers such as pi, the square root of 2, ... are normal, but this has
not been proved. Obviously, if pi is normal then it would follow that
770 occurs infinitely often in the decimal expansion of pi.
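As an illustration only (an empirical count, says nothing about normality), one can tally occurrences of "770" in an initial segment of pi. The sketch below computes digits with Machin's formula using plain integer arithmetic; for a normal number one would expect roughly N/1000 hits in N digits:

```python
# Count occurrences of "770" in the first N digits of pi.
# Digits come from Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239),
# evaluated with scaled integers (10 guard digits absorb truncation error).
def arctan_inv_scaled(x, scale):
    """arctan(1/x) * scale via the Gregory series, integer math only."""
    total, power, n, sign = 0, scale // x, 1, 1
    while power:
        total += sign * (power // n)
        power //= x * x
        n += 2
        sign = -sign
    return total

def pi_digits(n):
    """The first n decimal digits of pi as a string, '31415926...'."""
    scale = 10 ** (n + 10)
    pi = 16 * arctan_inv_scaled(5, scale) - 4 * arctan_inv_scaled(239, scale)
    return str(pi)[:n]

N = 5000
digits = pi_digits(N)
hits = digits.count("770")
print(hits, "occurrences of '770'; a normal number suggests about", N // 1000)
```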
Stephen G. Simpson
Professor of Mathematics
Pennsylvania State University
research specialties: mathematical logic, foundations of mathematics
More information about the FOM mailing list
Fibonacci Look-a-like
January 15th 2006, 01:37 PM
Fibonacci Look-a-like
Define a fibonacci series generated by $\{a,b\}$ as $u_1=a$, $u_2=b$, and $u_{n+1}=u_n+u_{n-1}$.
(The original fibonacci series is generated by {1,1})
($a \not= 0$)
Then there are a number of interesting properties:
1)If $K$ is the finite continued fraction for $b/a$
then $\frac{u_{n+1}}{u_n}=[1;1,1...,1,K]$
Where the $1$ appears $n-2$ times.
2)Thus, from here we have that $\lim_{n\rightarrow \infty}\frac{u_{n+1}}{u_n}=\psi$
(Thus, any fibonacci series generated by any two real numbers converges to the divine proportion).
3) The formula for $u_n$ is given by $u_n = a\cdot F(n-2) + b\cdot F(n-1)$,
where $F(n)$ is the n-th fibonacci number.
But by Binet's formula for the n-th fibonacci number, $F(n)=\frac{\phi^n-(1-\phi)^n}{\sqrt{5}}$,
so we can find a closed formula for $u_n$, but it is rather messy and will be omitted. This gives a second method for proving statement 2.
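Statement 2 is easy to check numerically. A small sketch (convergence fails only for seeds proportional to $(1,1-\phi)$, where the dominant $\phi^n$ component vanishes):

```python
# u_{n+1}/u_n for a "fibonacci-like" series seeded with u_1 = a, u_2 = b.
from math import sqrt

PHI = (1 + sqrt(5)) / 2     # the golden ratio / divine proportion

def ratio(a, b, n=40):
    """Advance u_{k+1} = u_k + u_{k-1} n times, then return u_{n+1}/u_n."""
    for _ in range(n):
        a, b = b, a + b
    return b / a

for a, b in [(1, 1), (2, 7), (3.5, -1)]:
    assert abs(ratio(a, b) - PHI) < 1e-9   # every tested seed approaches phi
print("all tested seeds converge to", PHI)
```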
January 20th 2006, 03:57 AM
Treadstone 71
I think you mean $\phi$ and not $\psi$. Very interesting, though.
May 28th 2006, 02:55 PM
Originally Posted by Treadstone 71
I think you mean $\phi$ and not $\psi$. Very interesting, though.
$\lim_{n\rightarrow \infty}\frac{u_{n+1}}{u_n}=\phi$
May 28th 2006, 04:10 PM
Originally Posted by Natasha1
$\lim_{n\rightarrow \infty}\frac{u_{n+1}}{u_n}=\phi$
That is not important ;)
June 3rd 2006, 08:45 PM
The proof of the second statement is rather neat.
$2)\;\;\lim_{n\to\infty}\frac{u_{n+1}}{u_n}\:=\: \phi$
The limit of the ratio of consecutive Fibonacci numbers is already established.
. . . . . $[1]\;\lim_{n\to\infty}\frac{F_{n+1}}{F_n}\:=\:\phi \qquad\qquad[2]\;\lim_{n\to\infty}\frac{F_n}{F_{n+1}} \:=\:\frac{1}{\phi}$
We have: $u_n\:=\:a\!\cdot\!F(n-2) + b\!\cdot\!F(n-1)$
The ratio is: $R\;=\;\frac{u_{n+1}}{u_n}\,=\,\frac{a\!\cdot\!F(n-1) + b\!\cdot\!F(n)}{a\!\cdot\!F(n-2) + b\!\cdot\!F(n-1)}$
Divide top and bottom by $F(n-1):\;\;R\;=\;\frac{a + b\!\cdot\!\frac{F(n)}{F(n-1)}}{a\!\cdot\!\frac{F(n-2)}{F(n-1)} + b}$
Take the limit: $\lim_{n\to\infty}R\;=\;\lim_{n\to\infty}\left( \frac{a + b\!\cdot\!\frac{F(n)}{F(n-1)}}{a\!\cdot\!\frac{F(n-2)}{F(n-1)} + b}\right)$
From [1] and [2], this becomes: $\frac{a + b\!\cdot\!\phi}{a\!\cdot\!\frac{1}{\phi} + b}$
Multiply top and bottom by $\phi:\;\;\frac{\phi(a + b\!\cdot\!\phi)}{a + b\!\cdot\!\phi}\;=\;\phi$
June 4th 2006, 05:37 AM
Originally Posted by Soroban
The proof of the second statement is rather neat.
My proof is simpler.
Because of condition 1), the continued fraction converges to $[1;1,1,...]=\phi$.
June 4th 2006, 12:09 PM
Hello, ThePerfectHacker!
My proof is simpler.
Of course it is . . . I wasn't claiming otherwise.
I was demonstrating a straight algebraic approach to the limit
. . for those not familiar with continued fractions.
Please understand: I never ever compete with other posters here.
I may post an alternate approach or even an improvement
. . . but never with a snicker or a sneer, implied or otherwise.
I want to be a mathematician
I want to be a mathematician: an automathography in three parts
Paul Richard Halmos
Mathematical Association of America, 1985 - Biography & Autobiography - 421 pages
Weakest Metrizable Topology?
October 16th 2011, 01:33 PM #1
MHF Contributor
Nov 2010
Weakest Metrizable Topology?
Given a set $X$ and a nonmetrizable topology $\mathcal{T}$, is the intersection $\bigcap\left\{\mathcal{U}\subseteq\mathcal{P}(X)\mid \mathcal{U}\text{ is metrizable and }\mathcal{T}\subset\mathcal{U}\right\}$ metrizable?
If not, are there properties that $\mathcal{T}$ could have that would make that intersection metrizable?
Re: Weakest Metrizable Topology?
Given a set $X$ and a nonmetrizable topology $\mathcal{T}$, is the intersection $\bigcap\left\{\mathcal{U}\subseteq\mathcal{P}(X)\mid \mathcal{U}\text{ is metrizable and }\mathcal{T}\subset\mathcal{U}\right\}$ metrizable?
If not, are there properties that $\mathcal{T}$ could have that would make that intersection metrizable?
What have you tried? It seems like you could, if it's true, prove this using Nagata-Smirnov.
Re: Weakest Metrizable Topology?
That looks like an excellent place to start. Thank you. I will get back to you with any results that I find.
• Cambridge University Press 2000; US$ 76.00
This text is designed for an intermediate-level, two-semester undergraduate course in mathematical physics. more...
• World Scientific Publishing Company 2004; US$ 134.00
This book presents, for the first time, a systematic formulation of the geometric theory of noncommutative PDE's which is suitable enough to be used for a mathematical description of quantum
dynamics and quantum field theory. A geometric theory of supersymmetric quantum PDE's is also considered, in order to describe quantum supergravity. Covariant... more...
• Cambridge University Press 2005; US$ 36.00
This 2005 text was the first book on the Lévy Laplacian that generalized classical work and could be widely applied. more...
• Cambridge University Press 2006; US$ 26.00
An up-to-date, broad scope textbook for senior undergraduates starting computational physics courses. more...
• CRC Press 2001; US$ 83.95
This text covers the central themes of: space-time geometry and the general relativistic account of gravity; quantum mechanics and quantum-field theory; gauge theories and the fundamental forces
of nature; and statistical mechanics and the theory of phase transitions. more...
• Elsevier Science 2004; US$ 170.00
Traditionally, randomness and determinism have been viewed as being diametrically opposed, based on the idea that causality and determinism are complicated by “noise.” Although recent research has
suggested that noise can have a productive role, it still views noise as a separate entity. This work suggests that this not need to be so. In an informal... more...
• World Scientific Publishing Company 2005; US$ 167.00
The Ginzburg-Landau equation as a mathematical model of superconductors has become an extremely useful tool in many areas of physics where vortices carrying a topological charge appear. The
remarkable progress in the mathematical understanding of this equation involves a combined use of mathematical tools from many branches of mathematics. The Ginzburg-Landau... more...
• World Scientific Publishing Company 2005; US$ 248.00
Since 1984, a series of regional conferences on mathematical physics has been organized by physicists from Iran, Pakistan and Turkey to promote the research in mathematical and theoretical
physics in the region. This volume, which derives from the XI Regional Conference on Mathematical Physics, comprises 8 review and 44 research articles on the most... more...
• World Scientific Publishing Company 2005; US$ 216.00
This proceedings volume widely surveys new problems, methods and techniques in mathematical physics. The 22 original papers featured are of great interest to various areas of applied mathematics.
They are presented in honour of Professor Salvatore Rionero's 70th birthday. The proceedings have been selected for coverage in: Index to Scientific & Technical... more...
• World Scientific Publishing Company 2005; US$ 220.00
This is the first monograph devoted to the Sturm oscillatory theory for infinite systems of differential equations and its relations with the spectral theory. It aims to study a theory of
self-adjoint problems for such systems, based on an elegant method of binary relations. Another topic investigated in the book is the behavior of discrete eigenvalues... more...
When infinities attack
December 05, 2003
Mark Pilgrim writes up Hilbert's hotel, a classic metaphor for the weird stuff that happens when you start dealing with mathematical infinities.
Infinities are my mathematical first love. I read Ian Stewart's "Concepts of Modern Mathematics" at an impressionable age, and the strange antics of the alephs, the sheer elegance of cardinality
proofs and the enigmas of the continuum hypothesis and the large cardinals pretty much settled the core of my mathematical interests for the rest of my life.
What I didn't learn about until much later was the equal strangeness of ordinal infinities. Cardinals deal with "how many." If you can map two sets onto each other, by no matter how distorting a
mapping, they have the same cardinality. So, as illustrated by the Hilbert hotel, all countable sets get flattened to the same cardinality, aleph-null.
Ordinals, however, are concerned with the order in which you count. So you can have different countable ordinals, and the rules are quite different from cardinals.
In the cardinal world, for example, aleph-null + 1 = aleph-null (moving one guest into Hilbert's hotel). In the ordinal world, 1 + omega = omega, but omega + 1 does not equal omega.
Why is this? Well, look at the three sets and, more importantly, their orders:
omega = { item1, item2, item3... }
1 + omega = { newitem, item1, item2... }
omega + 1 = { item1, item2... newitem }
We can map between 1 + omega and omega in a way that preserves the order structure. We can't do that with omega + 1, because in omega + 1 newitem is after every other item. omega and 1 + omega have
no member with that property.
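The asymmetry can even be mechanized. In this sketch, an ordinal below omega^2 is written omega*a + b and stored as the pair (a, b); the single addition rule is that any finite part standing to the left of an omega part is absorbed:

```python
# Ordinal addition for ordinals below omega^2, encoded as pairs (a, b)
# meaning omega*a + b.
def ord_add(x, y):
    (a1, b1), (a2, b2) = x, y
    if a2 > 0:                 # the right operand has an omega part,
        return (a1 + a2, b2)   # so the finite tail b1 is swallowed
    return (a1, b1 + b2)       # two finite tails add as usual

omega, one = (1, 0), (0, 1)

assert ord_add(one, omega) == omega        # 1 + omega = omega
assert ord_add(omega, one) == (1, 1)       # omega + 1, a new, larger ordinal
assert ord_add(omega, omega) == (2, 0)     # omega + omega = omega * 2
assert ord_add(omega, one) != ord_add(one, omega)   # the asymmetry in one line
```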
So the omega (ordinal) family offers a much richer view of countable infinities than the cardinal view, where's it's all just aleph-null. You have omega + n, omega + omega, omega * 2 (but not 2 *
omega -- that's { a1, b1, a2, b2, ... } which is just the same as omega (I hope I've got that the right way round)), omega * omega, omega ^ omega, omega ^ omega ^ omega ^ ...
... and it's all still countable. In fact if you go far enough you eventually find ordinals not expressible in terms of omega, and they're still countable. The first such ordinal is known as
epsilon-0. I guess there is an epsilon-1 and various epsilon-ns and presumably epsilon-omega and eventually an epsilon-epsilon-0. But, like Mark, I find myself tapped out just thinking about it.
Perhaps someone more knowledgeable can educate me further.
Here is an ordinal spiel:
You know that between any two infinite cardinals there are many ordinals. That is, the first infinite cardinal (aleph_0) is the first infinite ordinal (omega), but the second infinite cardinal (the
smallest uncountable cardinal aleph_1) is not the second infinite ordinal (this is omega+1, where as aleph_1 corresponds to epsilon_0). Thus each succesor cardinal is a huge leap in terms of
Then it should be surprising that there are ordinals k, such that k is simultaneously the k^th infinite ordinal and the k^th uncountable cardinal.
Intuitively, I can't explain (understand) this at all, but in fact the proof is not difficult.
Intriguingly, the smallest ordinal with shocking property above is the "giant"
where it is just omegas all the way down, omega times.
Of course the ordinals we can discuss are very small. In fact, transitions occur which fundamentally change things as much as the transition from finite ordinals to omega. When this happens, we say
that such ordinals are inaccessible. This means they cannot be constructed from within ZF, just as omega cannot be constructed in ZF without the axiom of infinity.
Posted by: Crosson at Dec 6, 2006 8:06:20 PM
Just Four Questions
How many days have passed since the beginning of this year?
Two consecutive numbers add up to 9003, what are they?
What is the square root of one quarter?
If 70% of the cost of an item is $7.70. What is the cost of the item?
4501 and 4502
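Worked answers to all four questions (a sketch; the day count assumes this starter's date of 28 August in a non-leap year, taking 2013 as an example):

```python
from datetime import date

# Q1: days passed since the beginning of the year (as of 28 August).
q1 = (date(2013, 8, 28) - date(2013, 1, 1)).days          # -> 239

# Q2: consecutive numbers n and n+1 with n + (n+1) = 9003.
n = (9003 - 1) // 2
q2 = (n, n + 1)                                           # -> (4501, 4502)

# Q3: the square root of one quarter.
q3 = 0.25 ** 0.5                                          # -> 0.5

# Q4: if 70% of the cost is $7.70, the full cost is 7.70 / 0.70.
q4 = 7.70 / 0.70                                          # -> about $11.00

assert q2 == (4501, 4502) and q3 == 0.5 and round(q4, 2) == 11.0
print(q1, q2, q3, q4)
```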
Teacher, do your students have access to computers?
Do they have Laptops in Lessons or iPads?
Whether your students each have a TabletPC, a Surface or a Mac, this activity lends itself to eLearning (Engaged Learning).
Here is the URL for a concise version of this page without comments or answers.
Here's the URL which will take them to an activity called Pentransum.
Maths is Important!
"Mathematics equips pupils with uniquely powerful ways to describe, analyse and change the world. It can stimulate moments of pleasure and wonder for all pupils when they solve a problem for the
first time, discover a more elegant solution, or notice hidden connections. ...
Mathematics is a creative discipline. The language of mathematics is international. The subject transcends cultural boundaries and its importance is universally recognised. Mathematics has developed
over time as a means of solving problems and also for its own sake."
Qualifications and Curriculum Authority | {"url":"http://www.transum.org/Software/sw/Starter_of_the_day/Starter_August28.asp","timestamp":"2014-04-18T23:16:13Z","content_type":null,"content_length":"21933","record_id":"<urn:uuid:15a44e30-7654-44d3-a7e1-92487dbf8886>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Camden, NJ Calculus Tutor
Find an East Camden, NJ Calculus Tutor
...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just
before the "big" exam, for students who pull "all-nighters". One quick note about my cancellation policy...
14 Subjects: including calculus, physics, geometry, ASVAB
...I also teach Solfege (Do, Re Mi..) and intervals to help with correct pitch. I use IPA to teach correct pronunciation of foreign language lyrics (particularly Latin). I have a portable
keyboard to bring to lessons and music if the student doesn't have their own selections that they wish to learn...
58 Subjects: including calculus, reading, geometry, biology
...Besides working at college, I have worked as a teacher, co-teacher, and student aide in middle and high schools. I enjoy working with teens, and I generally get along pretty well with them.
Although, adults are okay too (they tend to ask better questions). So if you are someone in need of tutor...
16 Subjects: including calculus, English, physics, geometry
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including calculus, physics, geometry, algebra 2
...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun!In the past 5 years, I have taught differential equations at a
local university. I hold degrees in economics and business and an MBA.
13 Subjects: including calculus, geometry, statistics, algebra 1
Related East Camden, NJ Tutors
East Camden, NJ Accounting Tutors
East Camden, NJ ACT Tutors
East Camden, NJ Algebra Tutors
East Camden, NJ Algebra 2 Tutors
East Camden, NJ Calculus Tutors
East Camden, NJ Geometry Tutors
East Camden, NJ Math Tutors
East Camden, NJ Prealgebra Tutors
East Camden, NJ Precalculus Tutors
East Camden, NJ SAT Tutors
East Camden, NJ SAT Math Tutors
East Camden, NJ Science Tutors
East Camden, NJ Statistics Tutors
East Camden, NJ Trigonometry Tutors
Nearby Cities With calculus Tutor
Ashland, NJ calculus Tutors
Briarcliff, PA calculus Tutors
Camden, NJ calculus Tutors
Center City, PA calculus Tutors
East Haddonfield, NJ calculus Tutors
Eastwick, PA calculus Tutors
Edgewater Park, NJ calculus Tutors
Ellisburg, NJ calculus Tutors
Erlton, NJ calculus Tutors
Middle City East, PA calculus Tutors
Middle City West, PA calculus Tutors
South Camden, NJ calculus Tutors
West Collingswood Heights, NJ calculus Tutors
West Collingswood, NJ calculus Tutors
Westmont, NJ calculus Tutors
Econometric Sense
A nonlinear model of complex relationships composed of multiple 'hidden' layers (similar to composite functions)
Y = f(g(h(x))), or
x -> hidden layers -> Y
Example 1
With a logistic/sigmoidal activation function, a neural network can be visualized as a sum of weighted logits:
Y = α + Σ_i w_i / (1 + e^(-θ_i)) + ε
where w_i = weights and θ_i = linear function Xβ_i
Y = 2 + w1·Logit A + w2·Logit B + w3·Logit C + w4·Logit D
(Adapted from 'A Guide to Econometrics', Kennedy, 2003)
Example 2
Where: Y= W[0] + W[1] Logit H[1] + W[2] Logit H[2] + W[3] Logit H[3] + W[4] Logit H[4] and
H[1] = logit(w[10] + w[11] x[1] + w[12] x[2])
H[2] = logit(w[20] + w[21] x[1] + w[22] x[2])
H[3] = logit(w[30] + w[31] x[1] + w[32] x[2])
H[4] = logit(w[40] + w[41] x[1] + w[42] x[2])
The links between each layer in the diagram correspond to the weights (w’s) in each equation. The weights can be estimated via
back propagation.
(Adapted from 'A Guide to Econometrics', Kennedy, 2003, and 'Applied Analytics Using SAS Enterprise Miner 6.1')
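A minimal NumPy sketch of the Example 2 architecture (two inputs, a hidden layer of logistic units, a linear output layer). The weights here are random placeholders; in practice they would be estimated, e.g. via back propagation:

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(z):
    """The logistic (sigmoid) activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_h, b_h, W_o, b_o):
    # H[i] = logit(w[i0] + w[i1]*x[1] + w[i2]*x[2])  -- hidden layer
    h = logit(b_h + W_h @ x)
    # Y = W[0] + sum_i W[i] * H[i]                   -- linear output layer
    return b_o + W_o @ h

W_h = rng.normal(size=(4, 2))   # 4 hidden units, 2 inputs
b_h = rng.normal(size=4)
W_o = rng.normal(size=4)
b_o = rng.normal()

y = forward(np.array([0.5, -1.2]), W_h, b_h, W_o, b_o)
print(y)
```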
MULTILAYER PERCEPTRON: a neural network architecture that has one or more hidden layers, specifically having linear combination functions in the hidden and output layers, and sigmoidal activation
functions in the hidden layers. (note: a basic logistic regression function can be visualized as a single layer perceptron)
RADIAL BASIS FUNCTION (architecture): a neural network architecture with exponential or softmax (generalized multinomial logistic) activation functions and radial basis combination functions in the
hidden layers and linear combination functions in the output layers.
RADIAL BASIS FUNCTION: A combination function that is based on the Euclidean distance between inputs and weights
ACTIVATION FUNCTION: formula used for transforming values from inputs and the outputs in a neural network.
COMBINATION FUNCTION: formula used for combining transformed values from activation functions in neural networks.
HIDDEN LAYER: The layer between input and output layers in a neural network.
How to change fractions into decimals?
Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations
traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
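Applied to the page's actual question: changing a fraction into a decimal is exactly the division operation described above; divide the numerator by the denominator.

```python
# A fraction a/b becomes a decimal by carrying out the division a ÷ b.
def fraction_to_decimal(numerator, denominator):
    return numerator / denominator

assert fraction_to_decimal(1, 2) == 0.5      # 1/2  -> 0.5
assert fraction_to_decimal(3, 8) == 0.375    # 3/8  -> 0.375
assert fraction_to_decimal(7, 4) == 1.75     # 7/4  -> 1.75
```

Fractions whose denominator has prime factors other than 2 and 5 give repeating decimals (e.g. 1/3 = 0.333...).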
The base-24 (quadrivigesimal) system is a numeral system with 24 as its base.
There are 24 hours in a day (nychthemeron, day–night cycle), so solar time includes a base-24 component.
In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings
about interest, sympathy or motivation in the reader or viewer.
Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an
interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement.
What is the length of the longest side of a triangle that has vertices at (-5, 2) (1,-6) and (1,2)?
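A sketch of one way to answer it: apply the distance formula to each pair of vertices and take the largest.

```python
# d = sqrt((x2 - x1)^2 + (y2 - y1)^2) for each pair of vertices.
from math import dist  # Python 3.8+

A, B, C = (-5, 2), (1, -6), (1, 2)
sides = {"AB": dist(A, B), "BC": dist(B, C), "CA": dist(C, A)}
print(sides)                        # AB = 10.0, BC = 8.0, CA = 6.0

assert max(sides.values()) == 10.0  # the longest side has length 10
```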
[SciPy-user] lsq problem
Angus McMorland amcmorl@gmail....
Wed Feb 14 22:01:08 CST 2007
Hi Tommy,
On 15/02/07, Tommy Grav <tgrav@mac.com> wrote:
> I need to fit a gaussian profile to a set of points and would like to
> use scipy (or numpy) to
> do the least square fitting if possible. I am however unsure if the
> proper routines are
> available, so I thought I would ask to get some hints to get going in
> the right direction.
> The input are two 1-dimensional arrays x and flux, together with a
> function
> def Gaussian(a,b,c,x1):
> return a*exp(-(pow(x1,2)/pow(c,2))) - c
> I would like to find the values of (a,b,c), such that the difference
> between the gaussian
> and fluxes are minimalized. Would scipy.linalg.lstsq be the right
> function to use, or is this
> problem not linear? (I know I could find out this particular problem
> with a little research, but
> I am under a little time pressure and I can not for the life of me
> remember my old math
> classes). If the problem is not linear, is there another function
> that can be used or do I have
> to code up my own lstsq function to solve the problem?
> Thanks in advance for any hints to the answers.
Using scipy.optimize.leastsq, this problem is pretty easy to solve.
Check the docstring for that function. Basically, you need to
construct an error function: I use the one below, but hopefully you
can see how to adapt this to your needs:
def erf(p, I, r):
(A, k, c) = p
return I - A * exp( -(r - c)**2 / k**2 )
then in your code:
p0 = (1,1,1) # starting guesses (anything vaguely close seems to be okay)
plsq = scipy.optimize.leastsq(erf, p0, args=(flux, x))
A = plsq[0][0]
k = plsq[0][1]
c = plsq[0][2]
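For completeness, a self-contained version of the above might look like the following. The synthetic data and the true parameter values are illustrative assumptions standing in for the asker's arrays; the model and residual function follow the snippets in this post:

```python
import numpy as np
from scipy.optimize import leastsq

def gaussian(p, r):
    # Model A * exp(-(r - c)**2 / k**2) with parameters p = (A, k, c).
    A, k, c = p
    return A * np.exp(-(r - c)**2 / k**2)

def erf(p, I, r):
    # Residuals: data minus model, as in the error function above.
    return I - gaussian(p, r)

# Synthetic, noiseless data standing in for the asker's x and flux arrays.
x = np.linspace(-5.0, 5.0, 101)
flux = gaussian((2.0, 1.5, 0.3), x)

p0 = (1.0, 1.0, 1.0)  # anything vaguely close seems to be okay
plsq = leastsq(erf, p0, args=(flux, x))
A, k, c = plsq[0]
print(A, k, c)  # recovers roughly 2.0, 1.5, 0.3 (k enters squared, so its sign is arbitrary)
```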
I hope that helps,
AJC McMorland, PhD Student
Physiology, University of Auckland
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2007-February/010999.html","timestamp":"2014-04-18T18:31:43Z","content_type":null,"content_length":"4367","record_id":"<urn:uuid:cde7c572-a6a2-46ef-8094-eb2dab58a5df>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
non-commutative iwasawa theory
In commutative Iwasawa theory, the main conjecture states that the p-adic L-function generates the characteristic ideal of an algebraic object. Non-commutative Iwasawa theory seems to mimic this -
except that the existence of the object on the analytic side (to my knowledge) is still conjectural. My question is: why is it so hard to define a "non-commutative" p-adic L-function? And on a more
technical note: In Coates-Fukaya-Kato-Sujatha-Venjakob, this conjectural function only exists for primes of good ordinary reduction. This seems to imply that in the excluded cases, things go horribly
wrong. Does anybody know what or why?
iwasawa-theory nt.number-theory
Even in commutative Iwasawa theory, the non-ordinary case (and/or the singular reduction case) are much harder/different to the ordinary case. So (and I write this not knowing much about the
specifics of C-F-K-S-V) it doesn't surprise me that a similar thing happens in the non-commutative setting. – Emerton May 26 '10 at 0:10
2 Answers
First a short answer. I don't think one can say that the commutative analytic side is known, as you do. It is fully known only in the cyclotomic $\mathbb{Z}_{p}$ situation, assuming the
ETNC and in the crystalline case (see below). The answer to your "why is it so hard" question is: because $p$-adic $L$-functions are linked with $B_dR$ and so require subtle knowledge of
$p$-adic Hodge theory which is lacking in the non-commutative case (and also mostly in the general commutative case). The answer to your technical question is: because in the ordinary
case, there is a concrete incarnation of $D_{dR}$ which allows for a definition of the required trivialization. Outside the ordinary case, no such concrete incarnation is known and so
things are two orders of magnitude harder, already in the commutative case (again, see below for some more details).
In both the commutative and non-commutative situation, if you want to construct a $p$-adic $L$-function in the style of Kato, Perrin-Riou, Colmez (basically in the cyclotomic $\mathbb{Z}_
{p}$-extension case) and Fukaya-Kato, CFKSV (in the non-commutative but still number field tower case) you need two things: one is an equivariant basis of the fundamental line (more or
less the determinant of the motivic cohomology of your motive, or more concretely some sort of Euler system), the other is what is usually called a reciprocity law or Coleman map or $\
epsilon$-isomorphism to transform this into a "function" or "measure" or "element in a localized $K_{1}$" (depending what you mean by $p$-adic $L$-function).
Slightly more precisely and in a more technical language, in order to formulate a conjectural setting allowing for the existence of a $p$-adic $L$-function, you need a trivialization of some complexes of Galois cohomology which is compatible with "evaluation at characters" (without this compatibility, there can be no interpolation property worth its name). This trivialization involves very subtle properties of the ring $B_{dR}$.
Now, in the cyclotomic $\mathbb{Z}_{p}$-extension case, such a trivialization is known to exist for any crystalline motive thanks to very deep results of many people, but most notably
Perrin-Riou, Kato-Kurihara-Tsuji, Colmez and Benois-Berger. So in that case, if you $assume$ the Equivariant Tamagawa Number Conjecture, then the $p$-adic $L$-function is well-defined.
This is only in this (very limited) sense that the analytic object you refer to in your question is well-defined.
In the general commutative case, very little is known, because one has to consider the $D_{dR}$ of Galois representations with coefficients in rings of large dimension and this is
extremely hard, though spectacular progress is made each day in that respect. In the good ordinary tower of number fields case, be it commutative or non-commutative, the required
trivialization is known because in that case there is a "concrete incarnation" of $D_{dR}$.
Finally, in the general non-commutative case, almost nothing is known because, as far as I know, the collective knowledge we have on the behaviour of $D_{dR}$ for non-commutative rings of
large dimension is very far from what would be required only to formulate a question.
1 Surely you don't need all of this machinery to define cyclotomic p-adic L-functions for elliptic curves, at least over Q? Since we know these are modular, the existence of the p-adic
L-function just drops out of the theory of modular symbols (see Mazur-Tate-Teitelbaum). – David Loeffler May 26 '10 at 12:21
Sure, but the question was about p-adic L-functions in general, not specifically for elliptic curves. – Olivier May 26 '10 at 15:18
Ah, sorry. CFKSV is about elliptic curves, but of course the question makes sense for much more general motives. – David Loeffler May 26 '10 at 16:17
On a lower level and restricted to elliptic curves, there are at least two problems on the analytic side. First of all, if $\rho$ is an irreducible Artin representation of dimension $>1$ of
a $p$-adic Lie group, then usually no one knows if $L(E,\rho,s)$ has an analytic continuation and so one can not yet interpolate the values at $s=1$. If one has the values, one needs
congruences between them. So we need more complicated automorphic gadgets than nice modular forms of weight 2. Likewise, if we wish to construct it from an "Euler system", whatever that
will be in the non-commutative setting.
For the supersingular case not even the algebraic side has ever been looked at to my knowledge. The usual conjectures like that the Selmer group should lie in $\mathfrak M_H$ are probably
not true since the Tate-Shafarevich group is expected to grow a lot in these extensions. As for the cyclotomic extension, a single $p$-adic L-function will not be enough to describe the
situation here. First, we need a good cyclotomic $\pm$-theory for all number fields. (Or do we have that now ?)
We don't yet have a cyclotomic +/- theory a la Kobayashi/Pollack over a number field, unless p is totally split in the field (recent papers of B.D. Kim and coauthors). Darmon and Iovita
have done something along similar lines for the anticyclotomic extension; I'm not sure exactly what. – David Loeffler Jun 8 '10 at 20:51
Another remark: I may have misunderstood all this; but I think if the image of $\rho$ is solvable, then we can use cyclic base change to show that $E \otimes \rho$ is modular, i.e. $L(E,
\rho, s)$ is the L-function of an automorphic form on $GL_{2 \dim \rho}$. Then it follows that $L(E, \rho, s)$ has analytic continuation. – David Loeffler Jun 8 '10 at 21:01
Yop and that seems a good indication to me that the construction of the $p$-adic L-function is going to be more complicated than in the cyclotomic case. The most interesting $p$-adic Lie
extension for an elliptic curve is not solvable, all other extensions won't be canonically attached to the curve so the information of the extension needs to enter as well. – Chris
Wuthrich Jun 9 '10 at 9:10
| {"url":"http://mathoverflow.net/questions/25948/non-commutative-iwasawa-theory?sort=votes","timestamp":"2014-04-18T19:01:30Z","content_type":null,"content_length":"66576","record_id":"<urn:uuid:6558c722-50f3-40df-bdcd-9282b730d5f8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
| {"url":"http://openstudy.com/users/niman/asked","timestamp":"2014-04-24T01:25:50Z","content_type":null,"content_length":"107268","record_id":"<urn:uuid:becf2d9c-a4e8-4d63-b3d6-f190d91271aa>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Unitary Method
Here are some self-marking exercises with problems that can be solved using the unitary method. This is a technique in mathematics for solving particular types of problems. It involves scaling down
one of the variables to a single unit, i.e. 1, and then performing the operation necessary to alter it to the desired value.
For example, if six coins weigh 66g, what would seventeen coins weigh?
Consider the weight of one coin first
1 coin weighs 11g (66 ÷ 6)
Now it is easy to calculate the weight of seventeen coins
17 coins weigh 187g (17 x 11)
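The same two-step scaling can be written as a tiny function; the names are mine, and the figures are just the coin example above:

```python
def unitary(total, count, target_count):
    per_unit = total / count        # scale down to a single unit
    return per_unit * target_count  # scale up to the desired value

print(unitary(66, 6, 17))  # 187.0, the weight in grams of 17 coins
```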
There are currently two sets of questions you can try. Level one involves calculations you could do in your head while the numbers involved in level 2 are a little larger. | {"url":"http://www.transum.org/software/SW/Starter_of_the_day/Students/Unitary_Method.asp","timestamp":"2014-04-18T21:31:30Z","content_type":null,"content_length":"22360","record_id":"<urn:uuid:2a9fcf56-c205-4617-a226-4373d582b8b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
But there is another issue, whether she is attracted to me. I need to be more sure.
Re: Linear Interpolation FP1 Formula
In this case just take your time. Make her comfortable around you until you get the go ahead.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Yeah, that will take some work... and with exam season fast approaching it won't be easy.
Re: Linear Interpolation FP1 Formula
You will somehow find the time.
Re: Linear Interpolation FP1 Formula
Still contemplating if it is worth it.
Re: Linear Interpolation FP1 Formula
What is the downside?
Re: Linear Interpolation FP1 Formula
Less time to prepare for exams. I want 6A*s as a reward for the mental breakdown I had last year. But if I get with Holly that could slip to 4A*AB.
Re: Linear Interpolation FP1 Formula
Could and could not. You really do not know. You pretty much do remind me of the American husband and wife. Both are so driven by their careers, they have no time for each other. They end up
divorced, rich and
Re: Linear Interpolation FP1 Formula
Well, my academics are very important to me. Nobody in my family has any kind of academic record, to them, going to university is akin to going to the moon. I don't want to let them down. You could
also make the argument that talking to H cost me my place at Cambridge, since I could have got higher scores in those modules I didn't do so well in.
Re: Linear Interpolation FP1 Formula
Yes, we could argue that the moon is made out of cheese but it is just conjecture. You will have to strike a balance between academic achievement and play time. Without that playtime you could break
down mentally at worst and at best be a very dull fellow.
Re: Linear Interpolation FP1 Formula
But I have been striking that balance for the rest of the year -- when it comes to exam time, I cannot maintain that balance, I must tip it in favour of revision instead.
Re: Linear Interpolation FP1 Formula
Will you see her ever again after these exams?
Re: Linear Interpolation FP1 Formula
Well, after the exams, lessons finish, so no, I won't see her in school at least. She is going on holiday for the summer but I don't know when. And she is going to Leeds or Edinburgh so she'll be
moving out to another city. So I have until about September.
Re: Linear Interpolation FP1 Formula
So, if this was the girl of your dreams. The girl for you, you would not pursue her because it might affect your grades?
Re: Linear Interpolation FP1 Formula
But she isn't the girl of my dreams...
Re: Linear Interpolation FP1 Formula
So far, based on your other experiences...
1) She is still talking to you.
2) You find her attractive.
I hate to say this but the above two are all that it takes for girl of your dreams.
Re: Linear Interpolation FP1 Formula
I don't know, I feel a bit reluctant about just rushing into something with someone.
A good sign is that I'm forgetting about adriana.
Re: Linear Interpolation FP1 Formula
Nothing makes you forget about the last one better than the next one.
Re: Linear Interpolation FP1 Formula
Yes, that's true. Can't believe it has been two weeks, I had lost count until I thought about it for a bit. I'd still like to beat that 25-day record.
Re: Linear Interpolation FP1 Formula
From what you are saying you do not have the time to date. So just forget about it.
Re: Linear Interpolation FP1 Formula
For the next 60 days, anyway.
Re: Linear Interpolation FP1 Formula
So, for the next 60 days you should put them out of your mind and study.
Re: Linear Interpolation FP1 Formula
I don't think I can ever put them completely out of my mind, I will always think about them a little bit, just with less intensity...
Did STEP III 2010 as a mock, got an S grade which means the hard work is finally paying off it seems! The probability questions looked interesting, but I doubt I would have made much progress on
Re: Linear Interpolation FP1 Formula
No intensity is best. Concentrate on the job at hand. S, stands for what?
Re: Linear Interpolation FP1 Formula
I thought of F a bit during the mock, but it didn't have much of an effect.
S is the highest grade you can get in STEP, not sure what it stands for. To be honest, I have to do more to be sure that this just wasn't an anomaly... this paper played to my strengths a fair bit.
That may not happen in reality. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=265544","timestamp":"2014-04-16T13:31:17Z","content_type":null,"content_length":"35620","record_id":"<urn:uuid:36e7c003-9b1f-488b-bf82-b9a71296f728>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decimals in Words
3.3: Decimals in Words
Created by: CK-12
Julie has figured out how to identify decimals and how to determine the place value of certain decimals. She also knows how to write one out in expanded notation. With confidence, she was able to
finish this section of her homework.
What about writing decimals? Do you know how to do that?
Well, the next part of Julie's homework requires that she know how to write a decimal out in words. Here is the first decimal in this part of the homework: .567
Julie isn't sure how to write this one out.
This Concept is all about reading and writing decimals. This is exactly what is needed for Julie to be successful in her assignment.
We have been learning all about figuring out the value of different decimals. We have used place value to write them, we have used pictures and we have stretched them out. Now it is time to learn to
read and write them directly. Let’s start with reading decimals.
How do we read a decimal?
We read a decimal by using the words that show the place value of the last digit of the decimal.
Take the decimal .45. To help us read this decimal, we can put it into our place value chart.
Hundreds   Tens   Ones   .   Tenths   Hundredths   Thousandths
                         .     4          5
We read this decimal by using the place value of the last digit to the right of the decimal point. Normally, we would read this number as forty-five. Because it is a decimal, we read forty-five
hundredths. The last digit is a five and it is in the hundredths place.
Can we use place value to write the number too?
Yes we can. We write the number as we normally would.
Next, we add the place value of the last digit to the right of the decimal point.
Forty-five hundredths
Our answer is forty-five hundredths.
We can use this method to read and write any decimal. What about a decimal with more digits?
First, let's put the number .5421 in our place value chart.
Hundreds   Tens   Ones   .   Tenths   Hundredths   Thousandths   Ten-Thousandths
                         .     5          4             2               1
First, let’s read the number. We can look at the number without the decimal. It would read:
Five thousand four hundred twenty-one
Next we add the place value of the last digit:
Ten-thousandths
Five thousand four hundred and twenty-one ten-thousandths
This is our answer.
This is also the way we write the number in words. Notice that it is very important that we add the THS to the end of the place value when working with decimals.
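The rule above, name the place value of the last digit, can be sketched in code. This is only an illustration (the function name and the four-place list are my assumptions, and it covers only places down to ten-thousandths):

```python
PLACES = ["tenths", "hundredths", "thousandths", "ten-thousandths"]

def decimal_place_name(decimal_str):
    digits = decimal_str.lstrip(".")  # ".5421" -> "5421"
    return PLACES[len(digits) - 1]    # place value of the last digit

print(decimal_place_name(".45"))    # hundredths
print(decimal_place_name(".5421"))  # ten-thousandths
```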
Now let's practice. Write each decimal in words.
Example A
.7
Solution: Seven Tenths
Example B
.765
Solution: Seven Hundred and Sixty-Five Thousandths
Example C
.2219
Solution: Two Thousand Two Hundred and Nineteen Ten-Thousandths
Do you have it? Now it's time to help Julie with this part of her math homework. Here is the original problem once again.
Julie has figured out how to identify decimals and how to figure out the place value of certain decimals. She also knows how to write one out in expanded notation. With confidence, she was able to
finish this section of her homework.
What about writing decimals? Do you know how to do that?
Well, the next part of Julie's homework requires that she know how to write a decimal out in words. Here is the first decimal in this part of the homework: .567
Julie isn't sure how to write this one out.
First, let's read the number as if it wasn't a decimal.
Five hundred and sixty-seven.
But because this is a decimal, we have to add the place value of the last digit to the right. This is a seven in the thousandths place.
Our answer is five hundred and sixty-seven thousandths.
Here are the vocabulary words in this Concept.
Whole number
a number that represents a whole quantity
Decimal
a part of a whole
Decimal point
the point in a decimal that divides parts and wholes
Expanded form
writing out a decimal the long way to represent the value of each place value in a number
Guided Practice
Here is one for you to try on your own.
Write the following decimal in words: .1345
First, we can write the decimal out as if it wasn't a decimal.
One thousand three hundred and forty-five
Next, we add the place value of the last digit, which is a five in the ten-thousandths place.
Our answer is one thousand three hundred and forty-five ten-thousandths.
Video Review
Here are videos for review.
Khan Academy Decimal Place Value
James Sousa, Write a Number in Decimal Notation from Words
Directions: Write out each decimal in words.
1. .5
2. .8
3. .21
4. .18
5. .4
6. .56
7. .93
8. .801
9. .834
10. .355
11. .155
12. .624
13. .5623
14. .9783
15. .5671
16. .2134
17. .0123
18. .0098
19. .0008
20. .0001
| {"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r4/section/3.3/","timestamp":"2014-04-19T12:33:10Z","content_type":null,"content_length":"123809","record_id":"<urn:uuid:af8c78a2-e9cc-4ecf-bb65-63cb541e97d3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
RSA Encryption Library
rsa_free( rsa var )
Frees memory associated with a previously loaded RSA keypair.
Returns 1 on success, 0 on error.
• rsa = a handle returned by one of the rsa_load_xxx functions
rsa_generate_keypair( pubkey_file, privkey_file, bits, e, passphrase )
Generates an RSA keypair, saving the public key in pubkey_file, the private key in privkey_file, and encrypting the private key with passphrase.
Returns 1 on success, 0 on error.
• pubkey_file = The name of the file in which the generated public key is stored
• privkey_file = The name of the file in which the generated private key is stored
• bits = The RSA modulus size, in bits
• e = The public key exponent. Must be an odd number, typically 3, 17 or 65537
• passphrase = The passphrase used to encrypt the private key
rsa_generate_keypair_mem( pubkey var, privkey var, bits, e, passphrase )
Generates an RSA keypair, returning the public and private keys in variables, and encrypting the private key with passphrase.
Returns 1 on success, 0 on error.
• pubkey = The variable which receives the generated public key
• privkey = The variable which receives the generated private key
• bits = The RSA modulus size, in bits
• e = The public key exponent. Must be an odd number, typically 3, 17 or 65537
• passphrase = The passphrase used to encrypt the private key
rsa_load_privatekey( privkey_file, rsa var, passphrase )
Load an encrypted RSA private key from a PKCS#8 file specified by privkey_file, and decrypt it using passphrase.
Returns 1 on success, 0 on error.
• privkey_file = The name of the file containing the encrypted private key
• rsa = A variable which receives an internal reference to the loaded RSA key
• passphrase = The passphrase used to decrypt the private key
rsa_load_privatekey_mem( privkey, rsa var, passphrase )
Loads an encrypted RSA private key from a memory buffer containing PKCS#8 data
Returns 1 on success, 0 on error.
• privkey = The encrypted private key information in PKCS#8 format
• rsa = A variable which receives an internal reference to the loaded RSA key
• passphrase = The passphrase used to decrypt the private key
rsa_load_publickey( pubkey_file, rsa var )
Load an RSA public key from a PKCS#1 file specified by "pubkey_file".
Returns 1 on success, 0 on error.
• pubkey_file = The name of the file containing the public key
• rsa = A variable which receives an internal reference to the loaded RSA key
rsa_load_publickey_mem( pubkey, rsa var )
Loads an RSA public key from a memory buffer containing PKCS#1 data
Returns 1 on success, 0 on error.
• pubkey = The public key in PKCS#1 format
• rsa = A variable which receives an internal reference to the loaded RSA key
rsa_private_decrypt( rsa, encrypted, plaintext var )
Decrypts data previously encrypted using the public key portion of an RSA keypair.
Returns 1 on success, 0 on error.
• rsa = The internal reference to the RSA private key used for decryption
• encrypted = The encrypted ciphertext, in raw binary format
• plaintext = A variable which receives the decrypted plaintext
rsa_private_encrypt( rsa, plaintext, encrypted var )
Encrypts data using the private key portion of an RSA keypair.
Returns 1 on success, 0 on error.
• rsa = The internal reference to the RSA private key used for encryption
• plaintext = The data to be encrypted
• encrypted = A variable which receives the encrypted ciphertext in raw binary format
rsa_public_decrypt( rsa, encrypted, plaintext var )
Decrypts data previously encrypted using the private key portion of an RSA keypair.
Returns 1 on success, 0 on error.
• rsa = The internal reference to the RSA public key used for decryption
• encrypted = The encrypted ciphertext, in raw binary format
• plaintext = A variable which receives the decrypted plaintext
rsa_public_encrypt( rsa, plaintext, encrypted var )
Encrypts data using the public key portion of an RSA keypair.
Returns 1 on success, 0 on error.
• rsa = The internal reference to the RSA public key used for encryption
• plaintext = The data to be encrypted
• encrypted = A variable which receives the encrypted ciphertext in raw binary format
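As background to the encrypt/decrypt pairs above, here is textbook RSA with toy primes. This is a language-neutral illustration of the underlying arithmetic only; it is not secure and is not the Miva or OpenSSL API:

```python
p, q = 61, 53              # toy primes; real keys use the "bits" parameter above
n = p * q                  # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: odd, as the keypair functions require
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

m = 42                     # a message small enough to fit under the modulus
c = pow(m, e, n)           # the role of rsa_public_encrypt
assert pow(c, d, n) == m   # rsa_private_decrypt recovers the plaintext

s = pow(m, d, n)           # conceptually, signing uses the private key (rsa_sign)
assert pow(s, e, n) == m   # and verification uses the public key (rsa_verify)
```

In the real library, rsa_sign additionally hashes the data with SHA1 before applying the private-key operation, and all values are padded rather than raw integers.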
rsa_save_privatekey( privkey_file, rsa var, passphrase )
Encrypts and writes a previously loaded RSA private key to a file in PKCS#8 format
Returns 1 on success, 0 on error.
• privkey_file = The name of the file in which the private key is to be stored
• rsa = The internal reference to the RSA private key to be saved
• passphrase = The passphrase used to encrypt the private key
rsa_save_privatekey_mem( privkey var, rsa var, passphrase )
Encrypts a previously loaded RSA private key and stores it into a variable in PKCS#8 format
Returns 1 on success, 0 on error.
• privkey = The variable which will receive the encrypted private key
• rsa = The internal reference to the RSA private key to be saved
• passphrase = The passphrase used to encrypt the private key
rsa_sign( rsa, buffer, signature var )
Generates a digital signature using SHA1 and an RSA private key
Returns 1 on success, 0 on failure. Requires OpenSSL 0.9.7 or greater.
• rsa = The internal reference to the RSA private key to be used
• buffer = The data to be signed
• signature = A variable which receives the signature in raw binary format
rsa_verify( rsa, buffer, signature )
Verifies a digital signature previously generated by rsa_sign
Returns 1 on success, 0 on verification failure or error. Requires OpenSSL 0.9.7 or greater.
• rsa = The internal reference to the RSA public key used for verification
• buffer = The data for which the signature is to be verified
• signature = The signature to verify, in raw binary format
rsa_generate_keypair_mem_cipher( pubkey var, privkey var, bits, e, passphrase, ciphername )
Behaves identically to the legacy counterpart rsa_generate_keypair_mem() except that it allows the caller to specify the cipher used to encrypt the private key (the legacy function always uses a fixed default cipher).
Returns 1 on success or 0 on error.
• pubkey = The variable which receives the generated public key
• privkey = The variable which receives the generated private key
• bits = The RSA modulus size, in bits
• e = The public key exponent. Must be an odd number, typically 3, 17 or 65537
• passphrase = The passphrase used to encrypt the private key
• ciphername = an OpenSSL cipher identifier, such as "aes-128-cbc"
rsa_save_privatekey_mem_cipher( privkey var, rsa var, passphrase, ciphername )
Behaves identically to the legacy counterpart rsa_save_privatekey_mem() except that it allows the caller to specify the cipher used to encrypt the private key (the legacy function always uses a fixed default cipher).
only CBC mode ciphers are permitted. | {"url":"http://www.mivascript.com/topic/rsa.html","timestamp":"2014-04-19T19:53:38Z","content_type":null,"content_length":"69758","record_id":"<urn:uuid:ea3b7443-95d6-4e3a-8764-dc66fbdd1af4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
strong contactomorphism group inside contactomorphism group
Let $(M, \xi)$ be a closed contact manifold with co-oriented contact structure $\xi = \ker \alpha$. Let $\mathrm{Cont}(M, \alpha)$ be the group of diffeomorphisms that preserve the contact form $\alpha$, and let $\mathrm{Cont}^+(M, \xi)$ be the diffeomorphisms that preserve $\xi$ with its co-orientation (i.e. that pull $\alpha$ back to a positive multiple of $\alpha$).
Does the inclusion $\mathrm{Cont}(M, \alpha) \hookrightarrow \mathrm{Cont}^+(M, \xi)$ induce a weak homotopy equivalence?
My guess is that this is too much to hope for, but I don't have any candidate for a counter-example.
Motivation: This question came up in understanding the question of when a contact fibre bundle admits a global contact form with diffeomorphic Reeb dynamics on every fibre.
sg.symplectic-geometry contact-manifolds contact-geometry
$Cont(M,\alpha)$ can be very small (finite), and there is not really a Gray stability in this case, so it seems to be no. – Chris Gerig Apr 12 '12 at 21:22
@Chris: Isn't the Reeb flow a 1-parameter subgroup of $Cont(M,\alpha)$? (Hit Cartan's formula for the Lie derivative of $\alpha$ along $R$ with the defining properties of $R$ and you get zero) –
Jonny Evans Apr 12 '12 at 22:36
Oh right, $M$ is closed. – Chris Gerig Apr 13 '12 at 2:46
I believe it is true however that generically "most" of $\text{Cont}(M,\alpha)$, or at least its identity component, should be generated by the Reeb flow. (I'm being intentionally imprecise.) –
Chris Wendl Apr 16 '12 at 17:28
One of the problems with this question is that $\text{Cont}(M,\alpha)$ depends heavily on the choice of contact form $\alpha$, so while the previous answer shows that there is a choice
of contact form for which $\text{Cont}(M,\alpha)$ and $\text{Cont}^+(M,\xi)$ can't be homotopy equivalent, it's not immediately clear if this is true for all choices of $\alpha$.
That said, here's a connected example for which the inclusion also fails to be surjective on $\pi_0$. (Attribution: this emerged out of discussions with Hansjoerg Geiges.)
Let $(M,\xi)$ be $T^3$ with its standard contact structure $\xi_0$, and using coordinates $(x,y,\theta)$ on $T^3$, write the standard contact form as
$\alpha_0 = \cos(2\pi\theta) dx + \sin(2\pi\theta) dy$.
Now for any $A \in \text{SL}(2,\mathbb{Z})$, the direct sum of $A$ with the identity defines a linear map on $\mathbb{R}^3$ which descends to $T^3$ as a diffeomorphism $f : T^3 \to T^3$.
It is easy to show that for any such map, $f^*\alpha_0$ can be deformed through contact forms to $\alpha_0$, hence by Gray's stability theorem, $f$ is isotopic to a contactomorphism $f_0$.

However, if $f_0$ is a strict contactomorphism with respect to $\alpha_0$, then it must preserve the corresponding Reeb vector field, and this is a very strong restriction. It means for instance that the Morse-Bott torus of Reeb orbits at $\{\theta=0\}$ is mapped to another Morse-Bott torus of orbits with the same period, and the only such orbits that exist for $\alpha_0$ point in either the same, opposite or an orthogonal direction. With arguments like this one can show that $f_0$ cannot be a strict contactomorphism unless the matrix $A$ is orthogonal... in fact, I believe it must be a fourth root of the identity.
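For reference, the Reeb field being preserved here can be written down explicitly (a routine computation, not part of the original answer: one checks the two defining conditions $\alpha_0(R)=1$ and $\iota_R\,d\alpha_0=0$ directly):

```latex
\begin{align*}
R &= \cos(2\pi\theta)\,\partial_x + \sin(2\pi\theta)\,\partial_y ,\\
\alpha_0(R) &= \cos^2(2\pi\theta) + \sin^2(2\pi\theta) = 1 ,\\
d\alpha_0 &= 2\pi\left(-\sin(2\pi\theta)\,d\theta\wedge dx + \cos(2\pi\theta)\,d\theta\wedge dy\right) ,\\
\iota_R\, d\alpha_0 &= 2\pi\left(\sin(2\pi\theta)\cos(2\pi\theta) - \cos(2\pi\theta)\sin(2\pi\theta)\right) d\theta = 0 .
\end{align*}
```

Since $R$ points in the constant direction $(\cos 2\pi\theta, \sin 2\pi\theta)$ on each torus $\{\theta = \text{const}\}$, one sees concretely why a strict contactomorphism has so little room to move these tori.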
With a little more work one can say something similar for a much larger set of contact forms, using the fact that a strict contactomorphism must always map Reeb orbits to Reeb orbits of
the same period. (I'm fairly sure that for a generic choice of contact form, not only are all Reeb orbits nondegenerate but no two of them have the same period.) Unfortunately, I still
don't know how to turn this into any statement for all contact forms, but the evidence is certainly against $\text{Cont}(M,\alpha) \hookrightarrow \text{Cont}^+(M,\xi)$ ever being
bijective on $\pi_0$.
The map isn't surjective on $\pi_{0}$. Take $M$ to be a disjoint union of two 3-spheres, put any contact form $\beta$ on the first sphere, and put $\beta/2$ on the second sphere. There is an element of $\mathrm{Cont}^{+}(M)$ that simply interchanges the two pieces. But this map is not isotopic to a map that preserves the contact form. (Just think of the 3-form $\beta \wedge d\beta$ and the volumes of the two 3-spheres.)
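In symbols (a one-line expansion of the parenthetical hint, not part of the original answer), the contact volume scales quartically under halving the form:

```latex
\int_{S^3} \frac{\beta}{2}\wedge d\!\left(\frac{\beta}{2}\right)
  \;=\; \frac{1}{4}\int_{S^3} \beta\wedge d\beta ,
```

so any diffeomorphism preserving the contact form must also preserve each component's contact volume, and therefore cannot interchange a sphere of volume $V$ with one of volume $V/4$.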
DMOZ - Science: Physics: Computational: Courses
See also:
• Numerical Recipes - Complete free online versions of this standard textbook on computational methods, in the C, Fortran and Fortran-90 editions. PS/PDF. Software and C++ edition for sale.
• 3D Physics - An overview of numerical methods of simulating physical systems.
• Books on Computational Science - A handy list of books on Computational Sciences.
• Computational methods in physics - Illustrate various numerical techniques by focussing on physical problems, aimed at the application of numerical methods to physical models.
• Computational Physics - An introductory course to Computational Physics.
• Computational Physics - The goal of this course is to make you aware of what is involved in computational physics, and the large variety of methods from classical to quantum physics using the
• Computational physics - The topic in computational physics in Wikiversity.
• Computational Physics - Source code for computational physics projects
• Computational Physics - A free ebook by Angus MacKinnon.
• Computational Physics (PH281) - Introduction to computational methods for simulating physical systems using MATLAB.
• Computational Physics by Jos Thijssen - Free online support material with implementation of the algorithm and exercises of the book.
• Computational Physics Education - Computational physics classes at Michigan State University.
• Computational Physics Fortran Edition - The FORTRAN and BASIC source codes to accompany the book.
• Computational Physics Resource on the Internet - This course provides an introduction to some of the most widely used methods of computational physics.
• Computational Physics with Python - A complete introduction to the field of computational physics, with examples and exercises in the Python programming language.
• Computational Studies of Pure and Dilute Spin Models - Monte Carlo simulations of ferromagnetic material using Ising and Potts spin models to ascertain selected properties of such material.
Discusses what spin models are and how they are used to simulate magnetic material, with particular attention to the use of cluster algorithms.
• Free university lectures - Lectures online on computer science, mathematics, physics and more.
• FreeScience books in Computational Physics - Free books in Computational Physics.
• Hands-on tutorial course on VASP - A self-learning course of the Vienna Ab-initio Simulation Package with slides of the talks and the exercises of the sessions.
• Institute for Computational Physics: - The institute of computational physics on high performance computers.
• An Introduction to Computational Physics - The materials for the book. Including, Errors Found, Fortran and C codes.
• Lectures from Summer Schools on Computational Materials Science - Full set of notes in computational solid state physics from talks given by recognized leaders in the field.
• NIC Series - From Theory to Algorithms, a summary of the key ingredients necessary to carry out simulation by the Forschungszentrum Jülich.
• Prof. Phillip M. Duxbury's website - Lectures by Professor Phillip M. Duxbury at Michigan State University.
Moduli space of motives vs moduli space of varieties
A (projective) abelian variety $A$ over the complex numbers is determined by $H^1(A,\mathbb{Z})$ together with its Hodge structure and polarization. This miracle means that one can parametrise
polarized abelian varieties (which sounds like a hard geometric problem) by parametrising their H^1's with the extra structure (which sounds like an easier algebraic problem, solved via the theory of
Shimura varieties).
If we move away from abelian varieties, the miracle won't occur, and indeed attempting to understand a general variety (a complicated non-linear object) via its cohomology groups (linear objects) is
presumably not going to work in general -- the linearization will lose information.
I have two related questions about what is going on in general. Let me talk about K3 surfaces below, although my real confusion has nothing to do with K3 surfaces in particular -- I could easily be
saying "Calabi-Yau 8-folds" or "curves of genus 23" or indeed any random type of smooth projective variety.
Q1) I am confused about why the power of functors and representability theorems do not give me everything. This probably just reflects my lack of real understanding of what is going on. For example,
let me just naively consider the functor sending a scheme S (over the complexes, if you like) to the set of isomorphism classes of polarized K3 surfaces over S (or the groupoid of polarized K3
surfaces). I now want to vaguely mutter that a big machine the likes of which I don't really understand says that this functor satisfies some basic continuity properties and hence (perhaps by some
theorem of M. Artin) is representable by some algebraic stack. My understanding of the abelian variety situation is that this method is one way to prove the existence of things like Siegel modular
varieties (perhaps even over $\mathbb{Z})$. Why does this method fail for more general families of varieties? I am guessing that I am perhaps being too sloppy with my polarizations but I don't really
have a precise feeling for what goes wrong.
Q2) This general functor nonsense must surely fail, so here's a second approach to constructing moduli spaces of certain types of variety. Again let me stick to K3's for concreteness. I understand H^2 of a K3 surface quite well and presumably there is a Shimura variety parametrising the types of Hodge structures showing up as H^2 of a K3. I am wondering how far this Shimura variety would be from
the "moduli space of K3 surfaces" which I am naively assuming exists. I can see the issue -- if the moduli space of K3's exists, there will be a map to the Shimura variety, but there is no reason to
expect either injectivity or surjectivity on the face of it (I am losing information by linearising, and the linear stuff is too naive to know whether it comes from geometry). Let's forget about
injectivity for a moment -- injectivity is an issue whose answer will depend on which type of variety I am trying to parametrise (e.g. for curves I have Torelli etc). But what about surjectivity?
Because I'm not really interested in K3's, I'm interested in the general picture of "moduli spaces of varieties of type X" and understanding this "space" via Hodge structure, I am led to the
following question, the answer to which is presumably well-known:
If $H$ is a polarizable $\mathbb{Z}$-Hodge structure of weight $n$, does one expect $H$ to be the singular cohomology of a pure motive? Perhaps more concretely, does one expect there to exist some
smooth projective algebraic variety $X$ over the complexes such that $H$ is a subspace of $H^i(X,\mathbb{Z})(j)$ for some $i,j$, and even a subspace cut out by correspondences?
ag.algebraic-geometry moduli-spaces motives shimura-varieties
For K3 surfaces there is also a Torelli theorem and the situation is as nice as you might want. For general varieties, the period map is usually far from being a surjection. Similarly, a general Hodge structure will not be the cohomology of a motive (by "Griffiths transversality"). – ulrich Nov 21 '13 at 12:07
I don't think I understand Q1. It reminds me of fundamental groups, in a silly way. Saying 'compute the fundamental group' of some topological space isn't really a well-defined question. The group $\pi_1(X)$ has a definition and so is tautologically computed for any $X$. What we would really like is a (simple as possible) presentation of it. Similarly with moduli problems, the moduli functor is the tautological answer. – John Salvatierrez Nov 21 '13 at 13:12
Artin/Schlessinger gives you a way to tell whether the moduli functor is actually 'algebraic'. It doesn't come for free but the fact that you know some wild Artin stack parameterises the geometric
problem you started with doesn't necessarily gain you the information you wanted. – John Salvatierrez Nov 21 '13 at 13:12
@John Salvatierrez: Q1 is supposed to say "why isn't this a proof that there's a nice smooth moduli space of polarized Calabi-Yau 4-folds over the complexes: (1) write down the functor (2) check
it has some nice local properties (3) big machine says it's representable (4) more local analysis [and adding some auxiliary structure to kill automorphisms] says it's representable by a smooth
algebraic space, hence (5) what's all the fuss about constructing moduli spaces nowadays? A big machine always works." – eric Nov 21 '13 at 13:57
@ulrich: I thought Griffiths transversality was an assertion about how a Hodge structure is allowed to move in a family, so I don't see how to apply it to one Hodge structure to deduce what you say. A dimension count shows that for curves of big genus, their H^1 is a Hodge structure of the type which is not usually the H^1 of a curve. But it is the H^1 of an ab var. I am asking about whether in general a Hodge structure can show up as a sub of an H^i(X)(j) -- I don't know if this is a reasonable question or not, but I don't understand your comment. – eric Nov 21 '13 at 13:59
Let me expand my (and ulrich's) comment slightly concerning your last question. Let $D$ be the period domain of all Hodge structures with fixed Hodge numbers and polarization. For the sake of simplicity, let's say the weight is $2$. Suppose that $H\in D$ is a summand of $H^2$ of some smooth projective variety; then by weak Lefschetz, it's a summand of $H^2$ of a smooth projective surface. All surfaces embed into $\mathbb{P}^5$. So using a Hilbert scheme argument, the set of surfaces is parametrized by a countable union of quasi-projective varieties $\bigcup T_i$. With a bit more work, we can find $\bigcup T_i$ such that $t\in T_i$ parametrizes pairs $(S_t, C_t)$, where $S_t$ is a surface and $C_t\subset S_t\times S_t$ is a correspondence yielding a motivic sub Hodge structure $$H^2([S_t,C_t]) := \mathrm{im}\bigl(p_{1*}([C_t]\cup p_2^{*}(-)) : H^2(S_t)\to H^2(S_t)\bigr).$$ After throwing away some components $T_i$ we can assume that $H^2([S_t,C_t])\in D$ for all $i$ and $t\in T_i$. So we get holomorphic period maps $f_i:\tilde T_i\to D$ from the universal covers. Griffiths transversality says that for any tangent vector $v$ of $\tilde T_i$, $df_i(v)(F^p)\subset F^{p-1}$, and this is typically (although not always) a nontrivial constraint. In such a typical case, this forces the image $f_i(\tilde T_i)\subset D$ to be proper. By Baire category $\bigcup_i f_i(\tilde T_i)\subset D$ is a proper subset as well. This shows that a generic element of such a typical $D$ does not come from an effective motive.

Of course this argument is highly nonconstructive. In fact, I don't know of a single explicit example of a polarizable Hodge structure which is not motivic!
Thanks for this answer! You have not yet taken twisting into account: maybe $H$ is a summand of $H^{2+2j}(j)$ of some smooth projective variety. But I guess this does not change the rest
of the argument, and you still find a countable union $\bigcup T_{i}$. There is one part that I do not understand. Why do we need to throw away some of the $T_{i}$? (Do all the Hodge
structures parameterised by one $T_{i}$ have the same Hodge numbers?). And can we be sure that the non-typical cases, where the constraint $F^{p} \subset F^{p-1}$ is trivial, don't blow
up the argument? – jmc Nov 22 '13 at 9:22
Indeed my answer was more of an extended comment. Perhaps later on if I have more time and energy... – Donu Arapura Nov 22 '13 at 13:50
In the period domains Deligne uses in his "travaux de Shimura", Griffiths transversality is built into the axioms. So you're using a more general period domain probably -- it is still a
complex manifold, right? Do you need some sort of algebraicity here to make this argument work though (you mention quasi-projective varieties and I am not sure why, or where they're
coming from)? – eric Nov 25 '13 at 10:58
Yes, I was taking $D$ to be Griffiths period domain, which is the moduli space of polarized Hodge structures with basis; it is still a complex manifold. – Donu Arapura Nov 25 '13 at 11:54
Let H in D be in the "image of motives". Let C a germ of curve in D through H such that the family of Hodge structures over C satisfies Griffits transversality. Is C in the "image of
motives"? (In other words, is Griffiths transversality the only obstruction) – user25309 Nov 25 '13 at 22:36
Mathematical Physics? (Particularly computational)
I just saw a post like this one, but particularly for statistical mechanics, I thought I'd ask the question in general.
Where does a mathematically trained person go to learn mathematical physics? By that I mean, what books or manuscripts are demanding in the area of mathematical maturity but not particularly
demanding in the area of physics knowledge (physics maturity I guess, idk if they use that word in physics?). I myself am particularly interested in computational fluid dynamics and other kinds of
computational physics, but I want to keep this general to help as many people as possible. Also, if someone knows a good book for mathematicians to help with one of the biggest difficulties I've
found "Physics INTUITION" that'd be helpful.
As usual, one answer per post so votes can be tabulated well.
soft-question mp.mathematical-physics
A classic reference is Courant and Hilbert (volume 1, volume 2).
Ahh yes, Courant. I remember my father (a chemist) always saying, whenever you need to know a formula, check Courant, then you'll sound smart when you know the answer. =) –
Michael Hoffman Nov 5 '09 at 1:58
There is a problem with this kind of question, namely for many mathematicians the most interesting mathematical physics is a new vast area on the interface of quantum field theory and
geometry/topology emerging from about late 1960s till now. You will find no word on this new mathematical physics in the classical books like Reed-Simon, Morse-Feshbach (Methods of
mathematical physics, 1953 and later ed.), Vladimirov (Equations of mathematical physics) and even older Courant-Hilbert which focus on the integral and differential equations of
mathematical physics, special functions, generalized functions (distributions), representations of classical groups and functional analysis. For your classical hydrodynamics indeed the
classical textbooks and reference books suffice, but for people interested in a bit more modern mathematical physics we could add (at various levels of exposition and specialization)
• Yvonne Choquet-Bruhat, Cecile Dewitt-Morette, Analysis, manifolds and physics, 1982 and 2001
• Albert Schwartz, Quantum field theory and topology, Grundlehren der Math. Wissen. 307, Springer 1993. (translated from Russian original)
• Bernard F. Schutz, Geometrical methods of mathematical physics (elementary intro)
• Eberhard Zeidler, Quantum field theory. A bridge between mathematicians and physicists. I: Basics in mathematics and physics. , II: Quantum electrodynamics
• Charles Nash, Differential topology and quantum field theory, Acad. Press 1991.
• P. Deligne, P. Etingof, D.S. Freed, L. Jeffrey, D. Kazhdan, J. Morgan, D.R. Morrison and E. Witten, eds. Quantum fields and strings, A course for mathematicians, 2 vols. Amer. Math. Soc.
Providence 1999. (web version)
• Gregory L. Naber, Topology, geometry, and gauge fields: interactions
• Mikio Nakahara, Geometry, topology and physics
• Peter Olver, Equivalence, invariants, and symmetry, Cambridge University Press, Cambridge, UK, 1995.
• James Glimm, Arthur Jaffe, Quantum physics: a functional integral point of view, Springer
• Sternberg, Shlomo (1994), Group theory and physics, Cambridge University Press.
• V. I. Arnold, Mathematical methods of classical mechanics, Springer (1989).
• V. Guillemin, S. Sternberg, Symplectic techniques in physics, Cambridge University Press (1990)
• Leon A. Takhtajan, Quantum mechanics for mathematicians, Graduate Studies in Mathematics 95, Amer. Math. Soc. 2008.
• Marian Fecko, Differential geometry and Lie groups for physicists
• V. S. Varadarajan, Supersymmetry for mathematicians: an introduction, AMS and Courant Institute, 2004.
• R. E. Borcherds, A. Barnard, Lectures on QFT, arxiv:math-ph/0204014
• Paul Aspinwall, Tom Bridgeland, Alastair Craw, Michael R. Douglas, Mark Gross, Dirichlet branes and mirror symmetry, Amer. Math. Soc. Clay Math. Institute 2009.
• R. S. Ward, R. O. Wells, Twistor geometry and field theory (CUP, 1990)
• N. N. Bogoliubov, A. A. Logunov, I. T. Todorov, Introduction to axiomatic quantum field theory, 1975
• O. Babelon, D. Bernard, M. Talon, Introduction to classical integrable systems, Cambridge Univ. Press 2003.
• Martin Schottenloher, A mathematical introduction to conformal field theory
• Philippe Di Francesco,Pierre Mathieu,David Sénéchal, Conformal field theory, Springer 1997
• T. Miwa, M. Jimbo, E. Date, Solitons: Differential equations, symmetries and infinite dimensional algebras, Cambridge Tracts in Mathematics 135, translated from Japanese by Miles Reid
• V. Kac, Vertex algebras for beginners, Amer. Math. Soc.
• Ludwig D. Faddeev, Leon Takhtajan, Hamiltonian methods in the theory of solitons, Springer
• V.E. Korepin, N. M. Bogoliubov, A. G. Izergin, Quantum inverse scattering method and correlation functions, Cambridge Univ. Press 1997.
• N. P. Landsman, Mathematical topics between classical and quantum mechanics, Springer Monographs in Mathematics 1998. xx+529 pp.
• Sean Bates, Alan Weinstein, Lectures on the geometry of quantization, pdf
• A. Cannas da Silva, A. Weinstein, Geometric models for noncommutative algebras, 1999, pdf
I have placed these references in a new nlab entry, books and reviews in mathematical physics, which will be updated at times with more specialized references.
The above mentioned nlab entry is now already much improved (subsections, more references, links) in comparison to the above raw list. – Zoran Skoda May 18 '10 at 16:36
For a mathematician who wants to learn some (classical) physics, the first book I'd recommend is Arnold's "Mathematical Methods of Classical Mechanics".
Courant and Hilbert is great. However, it predates many significant developments in mathematics and physics in much of the 20th century. A more recent such work is Reed and Simon's Methods of Modern Mathematical Physics.
Large parts of Courant and Hilbert is orthogonal to the content of Reed and Simon. I wouldn't suggest one as the replacement of the other. – Willie Wong Jun 18 '10 at 22:23
While not directly geared at the interest you mentioned, if a mathematician were trying to learn quantum gravity(perhaps to work in this field from the math side), I would recommend they
consult the duo of books by Rovelli and Thiemann.
The first book focuses on the physics side, and builds your background in the physics of quantizing gravity from the loop quantum approach. The second book focuses on the mathematical
methods used and the specific mathematical models currently in use. Additionally, both books provided me with much needed exposition in the field, feeding my desire as a mathematician to understand why many of these math structures apply to these physical phenomena.
The first book is Rovelli: Quantum Gravity, the second is Thiemann: Modern Canonical Quantum General Relativity.
I warn you they are large books.
It is a bit misleading to say that "a mathematician can learn quantum gravity" from these books. Where they cover classical material about classical gravity this may be the case, but beyond that these books discuss a mathematical construct whose relation to quantum gravity is a bit uncertain. – Urs Schreiber May 18 '10 at 16:15
That is a good point Urs, I didn't mean to suggest these were in some way comprehensive, just some useful references to get some perspective on the field. I am far from an expert so this
advice should be taken with a spoon of salt. – B. Bischof May 19 '10 at 3:04
Another book is Robert Geroch's Mathematical physics, although this may perhaps be more properly characterized as a book on modern mathematics for physicists.
As no one has given any references that aim to improve physical intuition, I would recommend Einstein's classic "The Meaning of Relativity". This short book gives good physical insights
into the theories of special and general relativity, and the mathematics should be easy enough for someone with a knowledge of differential geometry. It contains a good portion of text that
aims to explain Einsteins reasoning as the basis for the theories, as well as some more philosophical ponderings.
For someone less knowledgeable in differential geometry, I think that Einstein's book, coupled with J.G. Simmonds' "A Brief on Tensor Analysis", would be a good combination. Although it is a math book, Simmonds goes a long way in offering physical intuition as a means of motivating differential geometry and tensor analysis for physics and engineering students. Also, this book is short, and at an undergraduate level, so the first two chapters should read extremely fast for a seasoned mathematician.
Thanks! Intuition is always key! – Michael Hoffman Jun 18 '10 at 14:09
The Size of the Battle Isles [Archive] - Guild Wars 2 - GWOnline
Ranger Nietzsche
01-09-2006, 17:18
So in keeping with my other two threads, about Tyria (http://forums.gwonline.net/showthread.php?p=4355841#post4355841) and cantha (http://forums.gwonline.net/showthread.php?t=400768)
And using the methods currently under discussion there I have determined the size of the battle isles map. This was definitely the easiest one and the most accurate, as it had one less ratio
necessary for the calculation. It also provided interesting obstacles, like finding a stretch of land where I wouldn't get a windborne on me and ruin the experiment.
This is the distance over which I ran in 15 seconds.
The ratio between this distance and the horizontal size of the battle isles map is 1:24. The vertical ratio is 1:21. This gives a horizontal:vertical ratio of 1.14:1, which is in keeping with the
ratios from the tyrian and canthan maps (1.18 and 1.29).
This gives a running time for the full traversal of the battle isles map of 6 minutes horizontally and 5.25 minutes vertically.
The ratios of the vertical axes between Tyria and the battle isles and the horizontal axes of the same should be equal. They are 1:7.58 and 1:7.33 respectively. The similar case is for battle isles
and cantha, where they are 1:5.76 and 1:5.08.
The ratio of the battle isles to Tyria is then roughly 1:7.5
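A quick way to double-check the arithmetic above (a sketch: the only inputs are the 15-second reference run and the two measured ratios, and the variable names are mine):

```python
# Sanity check of the measurements above. The one assumption is a
# constant run speed, so map size in "run time" scales linearly with
# the measured 15-second reference run.

RUN_SECONDS = 15   # duration of the measured reference run
H_RATIO = 24       # run distance : horizontal map extent = 1:24
V_RATIO = 21       # run distance : vertical map extent   = 1:21

h_minutes = H_RATIO * RUN_SECONDS / 60   # full horizontal traversal
v_minutes = V_RATIO * RUN_SECONDS / 60   # full vertical traversal
aspect = H_RATIO / V_RATIO               # horizontal : vertical ratio

print(h_minutes)           # 6.0
print(v_minutes)           # 5.25
print(round(aspect, 2))    # 1.14
```

which reproduces the 6-minute / 5.25-minute traversal times and the 1.14:1 aspect ratio quoted in the post.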
Rocket launching - Newton's Second Law
Thanks I got it now!
Before, I canceled my "m"'s too early, before I knew that T was N, so when I put in 3.6mg, I was left with that m and didn't know what to do with it.
As for the 35, I used the 3.6 in the wrong way - I used it with Fg because the question stated that the astronaut is 3.6 times her normal weight. Perhaps if you have the time to quickly explain to me
why the 3.6 goes with N instead of Fg I would very much appreciate it.
Thank you again!
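The distinction asked about above can be summarized briefly (a sketch, assuming g = 9.8 m/s²; only the factor 3.6 comes from the thread). "3.6 times her normal weight" describes what the astronaut *feels*, which is the supporting normal force N, not gravity; gravity is still mg:

```python
# Apparent weight is the normal force N. "3.6 times normal weight"
# means N = 3.6 * m * g, while Fg stays m * g.
# Newton's second law (up positive): N - m*g = m*a, and m cancels.
g = 9.8          # m/s^2, assumed standard gravity
factor = 3.6     # N = factor * m * g
a = factor * g - g   # upward acceleration, = (factor - 1) * g

print(round(a, 2))       # 25.48 m/s^2
print(round(a / g, 2))   # 2.6, i.e. the net acceleration is 2.6 g
```

So the 3.6 attaches to N because the problem statement reports the felt (apparent) weight, which by definition is the normal force.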
Proving Euler's Formula
November 7th 2011, 10:32 AM
Proving Euler's Formula
e^(iΘ) = cos (Θ) + i sin (Θ)
Typically the proof involves using the summation e^x = Sum (x^n / n!) from n = 0 to infinity. Then you set x = iΘ, and separate the summation into two halves: cosine and sine.
The part I never understood is, how do you establish that the summation e^x = Sum (x^n / n!) holds for complex values of x? The proofs I've seen just assume that it does. Is there a way to
develop summation definition for C?
November 7th 2011, 11:02 AM
Re: Proving Euler's Formula
the taylor series of any function f is
$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} \cdot (x-a)^n$
you can check
Taylor series - Wikipedia, the free encyclopedia
November 7th 2011, 11:08 AM
Re: Proving Euler's Formula
You can also try to prove it with induction.
November 8th 2011, 11:29 AM
Re: Proving Euler's Formula
How would I approach the induction proof?
I'm asking this because typically when people get introduced to Euler's formula proof, they've been dealing with real numbers and are used to working with series of all real-number quantities and
variables. It looks questionable when you suddenly plug in complex numbers to the series and expect it to work.
November 8th 2011, 09:38 PM
Re: Proving Euler's Formula
The Taylor series of a real or complex function ƒ(x) that is infinitely differentiable in a neighborhood of a real or complex number a is the power series
$f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots.$
from WIKI
The Taylor series works the same way for a complex argument as for a real one, so
$e^{i\theta} = \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!} = 1 + \frac{i\theta}{1!} - \frac{\theta ^2}{2!} - \frac{i\theta ^3}{3!} + \frac{\theta ^4}{4!} + \frac{i\theta ^5}{5!} - \cdots$
i terms together and without i together
$\sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!} = i \sum_{n=0}^{\infty} \frac{\theta ^{2n+1}}{(2n+1)!} + \sum_{n=0}^{\infty} \frac{\theta ^{2n}}{(2n)!}$
it is clear that the term with i will be sin theta and without will be cosine
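As a numerical sanity check of this grouping, the partial sums of the series can be evaluated directly with complex arithmetic. Here is a small sketch (the 30-term cutoff and the choice θ = 2 are arbitrary):

```python
import math

def exp_series(z, terms=30):
    """Partial sum of sum_{n=0}^{terms-1} z**n / n! using complex arithmetic."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term          # add z**n / n!
        term *= z / (n + 1)    # next term: z**(n+1) / (n+1)!
    return total

theta = 2.0
series = exp_series(1j * theta)
euler = complex(math.cos(theta), math.sin(theta))
print(abs(series - euler))   # tiny (rounding-error level)
```

The partial sums land on cos θ + i sin θ, exactly as the grouping predicts.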
November 8th 2011, 09:52 PM
Prove It
Re: Proving Euler's Formula
e^(iΘ) = cos (Θ) + i sin (Θ)
Typically the proof involves using the summation e^x = Sum (x^n / n!) from 0 to infinity. Then you set x = iΘ, and separate the summation into two halves: cosine and sine.
The part I never understood is, how do you establish that the summation e^x = Sum (x^n / n!) holds for complex values of x? The proofs I've seen just assume that it does. Is there a way to develop the summation definition for C?
Here's how I prove it.
Let $\displaystyle z = \cos{\theta} + i\sin{\theta}$. Then
\displaystyle \begin{align*} \frac{dz}{d\theta} &= -\sin{\theta} + i\cos{\theta} \\ \frac{dz}{d\theta} &= i^2\sin{\theta} + i\cos{\theta} \\ \frac{dz}{d\theta} &= i\left(\cos{\theta} + i\sin{\theta}\right) \\ \frac{dz}{d\theta} &= i\,z \\ \frac{1}{z}\,\frac{dz}{d\theta} &= i \\ \int{\frac{1}{z}\,\frac{dz}{d\theta}\,d\theta} &= \int{i\,d\theta} \\ \int{\frac{1}{z}\,dz} &= i\,\theta + C_1 \\ \log{z} + C_2 &= i\,\theta + C_1 \\ \log{z} &= i\,\theta + C \textrm{ where }C = C_1 - C_2 \\ z &= e^{i\,\theta + C} \\ z &= e^{i\,\theta}e^C \\ z &= A\,e^{i\,\theta} \textrm{ where }A = e^C \\ \cos{\theta} + i\sin{\theta} &= A\,e^{i\,\theta} \end{align*}
Now if we let $\displaystyle \theta = 0$ we find
\displaystyle \begin{align*} \cos{0} + i\sin{0} &= A\,e^{0i} \\ 1 &= A \end{align*}
Therefore
\displaystyle \begin{align*} \cos{\theta} + i\sin{\theta} &= e^{i\theta} \\ r\left(\cos{\theta} + i\sin{\theta}\right) &= r\,e^{i\theta} \end{align*}
November 8th 2011, 10:08 PM
Re: Proving Euler's Formula
But that, again, assumes that raising e to something imaginary is valid in the first place.
What if you've been just introduced to complex numbers and are trying to make sense out of this concept? Then the first thing you would probably prove is e^(iΘ) = cos (Θ) + i sin (Θ).
The Taylor series works the same way for complex and real functions, so
$e^{i\theta} = \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!}=1 + \frac{i\theta}{1} + \frac{-\theta ^2}{2!} + \frac{-i\theta ^3}{3!}+\frac{\theta ^4}{4!} + \frac{i\theta ^5}{5!} +\cdots$
Grouping the terms with i together and the terms without i together gives
$\sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!} = i \sum_{n=0}^{\infty} \frac{(-1)^n\theta ^{2n+1}}{(2n+1)!} + \sum_{n=0}^{\infty} \frac{(-1)^n\theta ^{2n}}{(2n)!}$
It is clear that the series multiplied by i is $\sin\theta$ and the series without i is $\cos\theta$.
I guess I would like to see how the concept of series can be developed in terms of complex numbers.
November 8th 2011, 10:13 PM
Prove It
Re: Proving Euler's Formula
November 9th 2011, 12:07 AM
Re: Proving Euler's Formula
e^(iΘ) = cos (Θ) + i sin (Θ)
Typically the proof involves using the summation e^x = Sum (x^n / n!) from 0 to infinity. Then you set x = iΘ, and separate the summation into two halves: cosine and sine.
The part I never understood is, how do you establish that the summation e^x = Sum (x^n / n!) holds for complex values of x? The proofs I've seen just assume that it does. Is there a way to develop the summation definition for C?
one establishes convergence for complex power series in the same way that one does for real series:
a power series $\sum_{k=1}^{\infty}a_kz^k$ converges iff the sequence of partial sums $\{S_n = \sum_{k=1}^n a_kz^k\}$ converges as a sequence.
a complex sequence $\{S_n\}$ converges to a limit L iff for every $\epsilon > 0$, there exists $N \in \mathbb{Z}^+$ such that :
$|S_n-L| < \epsilon$ for all n > N. the only difference here is that $|z| = |a+bi| = \sqrt{a^2+b^2}$ instead of $|x| = \sqrt{x^2}$,
we measure distance between complex numbers a little differently than distance between real numbers: instead of "intervals" we use "disks" (so instead of having an interval of convergence, we have a radius of convergence).
for the particular power series $\sum_{n=0}^{\infty} \frac{z^n}{n!}$:
it turns out that this converges for every complex number z (the radius of convergence is infinite), so we can assign the limit of this series to be the image of a function, f(z).
of course, on the real-axis, this obviously converges to $e^x$, so it is natural to define this function f to be the complex exponential.
glossing over some technicalities, here, one can show that not only is this function f continuous, it is also infinitely-differentiable. one can (but i will not) show that for a
convergent-everywhere complex power series, term-by-term differentiation is justified; doing so, we find that f'(z) = f(z), that is, not only does the extension of the real exponential to complex
values behave well with respect to convergence, it also behaves the same under differentiation (the complex derivative is defined in exactly the same way as the real derivative, since complex
numbers form a field, all the operations involved in the definition of the derivative still "make sense").
at this point, you can see that one can limit z to numbers of the form 0+iy, which then quickly gives euler's identity (since we wind up with two real series, one of which is multiplied by i,
giving a complex series).
if you wish to prove this for yourself, a good first place to start, is by verifying that the triangle inequality still holds, when absolute value is replaced by the complex norm (or modulus).
this will go a long way to convincing you many of the methods used in "epsilon-delta" proofs for real numbers, carry over to complex numbers with little or no modification.
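The verification the poster suggests can be spot-checked numerically; a small sketch (the random points and iteration count are arbitrary choices) confirming that the complex modulus behaves like the real absolute value:

```python
import random

random.seed(0)  # reproducible spot check

# the complex norm: |a+bi| = sqrt(a^2 + b^2), e.g. |3+4i| = 5
assert abs(complex(3, 4)) == 5.0

# triangle inequality |z + w| <= |z| + |w| on random complex points
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(z + w) <= abs(z) + abs(w) + 1e-12

print("triangle inequality held on 1000 random pairs")
```

Of course a numeric check is not a proof, but it illustrates why the epsilon-delta machinery carries over unchanged.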
November 9th 2011, 06:52 AM
Re: Proving Euler's Formula
Proof: $e^{i\theta}=\cos(\theta)+i\sin(\theta)$
By induction.
First step: take $\theta=0$, therefore:
$e^{0}=\cos(0)+i\sin(0) \Rightarrow 1=1$
So the statement is true for $\theta=0$
Induction step: Suppose the statement is true for $\theta=k$, therefore we suppose this statement is true:
$e^{i\cdot k}=\cos(k)+i\sin(k)$
We have to prove it's true for $\theta=k+1$:
We can write:
$e^{i(k+1)}=e^{i\cdot k+i}=e^{i\cdot k}\cdot e^{i}=[\cos(k)+i\sin(k)]\cdot [\cos(1)+i\sin(1)]=\cos(k)\cos(1)+i\sin(k)\cos(1)+i\cos(k)\sin(1)-\sin(k)\sin(1)=[\cos(k)\cos(1)-\sin(k)\sin(1)]+i[\sin(k)\cos(1)+\cos(k)\sin(1)]=\cos(k+1)+i\sin(k+1)$
Which is what we wanted to prove, so the statement is true (for every nonnegative integer $\theta$, at least).
I'm sure there are much better proofs than this one, but I think it's an easy way to prove it.
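The inductive step above, multiplying once by e^{i·1} for each unit of θ, can be mirrored numerically; a small sketch (the cutoff of k = 49 is an arbitrary choice):

```python
import math

base = complex(math.cos(1), math.sin(1))    # plays the role of e^{i*1}
z = 1 + 0j                                  # base case: e^{i*0} = 1
for k in range(1, 50):
    z *= base                               # inductive step: e^{ik} = e^{i(k-1)} * e^{i}
    assert abs(z - complex(math.cos(k), math.sin(k))) < 1e-9

print("matched cos(k) + i sin(k) for k = 1..49")
```

Each multiplication is exactly the angle-addition identity used in the proof.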
November 9th 2011, 08:25 AM
Re: Proving Euler's Formula
@Prove It: I mean that I want to prove that e^(imaginary) actually works before I go on any further. When students first learn complex variables, they don't know what e^(imaginary) means. They
start with e^(iΘ) because it's the most accessible. But you have to show that it's meaningful.
A proof that uses log z would come after e^(imaginary). Ordinarily students first learn the concept of e^(imaginary) before learning about log z, unless there's another way to establish logs first.
@Deveno: Thank you. That's what I meant, a more rigorous development of series or whatever you would need for a proof. I'll read that and try to understand it.
@Siron: I guess that works, though I prefer a proof that doesn't rely on complex exponents until it's well-defined, which is why in the first post, I was asking for a way to build up series in C.
Essentially I want to assume that I don't know anything about complex numbers other than a+bi and z^n = z * z * ... * z.
Who Invented the Distance Formula?
The man who invented the distance formula must have been amazed by distances. And why not? He was a traveler. He was a scientist and a philosopher always seeking the meaning in life. Aside from being
educated in Greece, the distance formula inventor traveled other parts of the world to learn from other civilizations.
Many acknowledge that Pythagoras was the person who invented the distance formula. He was from Samos and born around 570 B.C. He traveled not just to Egypt and Babylon, but also to Arabia, Phoenicia,
Judea, and India. He did this in search of knowledge. According to records, he was much amazed by what knowledge was available in Egypt. He was also fond of experimenting with numbers and later on
ended up being the distance formula inventor.
Pythagorean Theorem
Vertical, horizontal, or diagonal distance can be solved using Pythagoras' distance formula, or what is called the "Pythagorean theorem." The man who invented the distance formula based this on the dimensions of a right triangle, or 90 degree triangle. This triangle has three sides: the base, the height, and the hypotenuse, which is the diagonal side. If two sides have known dimensions, the third, unknown side can be solved for.
Thus, to solve for the distance from one point to a standing structure, when the structure's height and the hypotenuse are given, the distance can be calculated. The formula for this is C squared is equal to A squared plus B squared, where A is the value for the base, B is the value for the height, and C is the value for the hypotenuse (the diagonal side). The man who invented the distance formula figured any unknown value of this equation can be solved by merely manipulating or transposing the formula.
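In coordinate form this becomes the familiar distance formula, d = √((x₂ − x₁)² + (y₂ − y₁)²), since the horizontal and vertical separations are the two legs of a right triangle. A minimal sketch (the sample points are arbitrary):

```python
import math

def distance(p, q):
    """Straight-line distance between two points, via the Pythagorean theorem."""
    dx, dy = q[0] - p[0], q[1] - p[1]      # the two legs of the right triangle
    return math.sqrt(dx * dx + dy * dy)    # the hypotenuse

print(distance((0, 0), (3, 4)))   # 5.0 -- the classic 3-4-5 right triangle
```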
The Pythagorean Theorem is a vital formula in Geometry and Trigonometry. Architects, engineers, pilots, and seamen use it in their works. Pythagoras was said to have learned basic Geometry from the
Egyptians—the famed Pyramid builders, Arithmetic from the Phoenicians, and Astronomy from the Babylonians. But his contribution to Geometry and Trigonometry was exemplary. In fact, Plato was so
influenced by Pythagoras’ ideas. Scores of other philosophers and scientists were likewise influenced by the distance formula inventor.
Pythagoras’ fondness for numbers was mixed with his delight for magic. Later, he established the religion Pythagoreanism. He believed that Mathematics and numbers governed life and afterlife.
Together with magic, they also enabled him to predict future events.
Recent Developments in Geometry
             
Contemporary Mathematics, Volume 101
1990; 338 pp; softcover
ISBN-10: 0-8218-5107-1
ISBN-13: 978-0-8218-5107-4
List Price: US$56
Member Price: US$44.80
Order Code: CONM/101

This volume is the outgrowth of a Special Session on Geometry, held at the November 1987 meeting of the AMS at the University of California at Los Angeles. The unusually well-attended session attracted more than sixty participants and featured over forty addresses by some of the day's outstanding geometers. By common consent, it was decided that the papers to be collected in the present volume should be surveys of relatively broad areas of geometry, rather than detailed presentations of new research results. A comprehensive survey of the field is beyond the scope of a volume such as this. Nonetheless, the editors have sought to provide all geometers, whatever their specialties, with some insight into recent developments in a variety of topics in this active area of research.

Contents:
• R. E. Greene -- Some recent developments in Riemannian geometry
• R. Brooks -- Designer metrics on Riemannian manifolds
• G. Walschap -- Open manifolds of nonnegative curvature
• S. W. Wei -- An extrinsic average variational method
• P. Aviles -- The Dirichlet problem and Fatou's theorem for harmonic mappings on regular domains
• D. A. Hoffman -- New examples of singly-periodic minimal surfaces and their qualitative behavior
• V. I. Oliker -- The Gauss curvature and Minkowski problems in space forms
• Y. S. Poon -- Self-duality, twistor theory, its generalization and application
• P. B. Gilkey -- Spectral geometry of Riemannian manifolds
• M. A. Pinsky -- Eigenvalue asymptotics and their geometric applications
• R. M. Tse -- A lower bound for the number of isospectral surfaces
• S.-Y. A. Chang and P. C. Yang -- The conformal deformation equation and isospectral set of conformal metrics
• J. Lott -- Analytic torsion for group actions
• G. Knieper and H. Weiss -- Regularity of entropy for geodesic flows
• G. R. Jensen and M. Rigoli -- Twistor and Gauss lifts of surfaces in four-manifolds
• W. Gao -- Affine differential geometry of complex hypersurfaces
• K.-T. Kim -- Domains with noncompact automorphism groups
• S. Frankel -- Affine approach to complex geometry
• N. Monk -- Compactification of complete Kähler-Einstein manifolds of finite volume
• S. S.-T. Yau -- Topological types of isolated hypersurface singularities
• T. Duchamp and M. Kalka -- Complex foliations
Number Theory
The number theory group at Clemson covers a wide range of interests touching on many areas of number theory. The diverse interests of the faculty provide for numerous number theory courses so that
graduate students leave Clemson with an overview of the entire subject. There is a weekly ADM seminar run by the Algebra, Discrete Mathematics, and Number theory group as well as a weekly algebraic
geometry and number theory seminar. Graduate students are funded to attend the Palmetto Number Theory Series (PANTS), a conference held three times a year featuring talks by prominent number
theorists from across the country. Once students have begun their research they are encouraged to give contributed talks at these meetings. The Southeast Regional Meeting on Numbers (SERMON) is also
held once a year with faculty and graduate students from Clemson playing a prominent role.
Students with an interest in number theory should feel free to e-mail or stop by and speak with any of the number theorists at Clemson. Potential graduate students with an interest in number theory
should visit the webpages of the number theorists listed below and contact any one of us with any questions about number theory at Clemson University.
Faculty: Jim Brown, Kevin James, Hui Xue, Dania Zantout

Graduate Students: Rodney Keaton, Luke Giberson, Huixi Li, Liem Nguyen, Trevor Vilardi
Recent Graduates in Number Theory
The following are recent graduates in number theory, their advisor, and employment at the time of graduation.
• Rodney Keaton (Ph.D. 2014, Jim Brown), Three year postdoctoral position at the University of Oklahoma
• Dania Zantout (Ph.D. 2013, Jim Brown) Visiting Assistant Professor at Clemson University
• Liem Nguyen (M.S. 2013, Kevin James and Hui Xue) Ph.D. student at Clemson University
• Trevor Vilardi (M.S. 2013, Hui Xue) Ph.D. student at Clemson University
• Jeff Beyerl: (Ph.D. 2012, Kevin James and Hui Xue) One year visiting position at Furman University
• Catherine Trentacoste: (Ph.D. 2012, Kevin James and Hui Xue) Research Analyst at Center for Naval Analyses
• Rodney Keaton: (M.S. 2010, Jim Brown and Kevin James) Ph.D. student at Clemson University.
• Ethan Smith: (Ph.D. 2009, Kevin James) Tenure track position at Michigan Technological University.
• Jeff Beyerl: (M.S. 2009, Kevin James and Hui Xue) Ph.D. student at Clemson University.
• Catherine Trentacoste: (M.S. 2009, Kevin James and Hui Xue) Ph.D. student at Clemson University.
• Bryan Faulkner: (Ph.D. 2007, Kevin James) Tenure track position at Ferrum College.
Number Theory Concentration
The following are number theory courses that have been offered in the recent past.
• Introduction to Number Theory
• Algebraic Number Theory
• Analytic Number Theory
• Cryptography
• Algebraic Geometry I & II
• Function Fields
• Class Field Theory
• Introduction to Elliptic Curves and Modular Forms
• Computational Number Theory
• Probabilistic Methods in Combinatorics and Number Theory
Introduction to Number Theory is run in the fall of odd numbered years, followed by Algebraic Number Theory in the spring. Analytic number theory is offered in the spring the following year. Students
with an interest in number theory are encouraged to enroll in these classes as soon as possible to help them choose a branch of number theory that will comprise their master's thesis as well as their
Ph.D. research should they choose to continue on to their Ph.D.
In addition to the courses run each semester, there is a number theory seminar that meets weekly. This is typically used to help students gain background in material that will not appear in a course
in the near future. Some recent topics have been Drinfeld modules, elliptic curves, L-functions, and modular forms.
Current Courses of Interest to Number Theory Students
The following are courses being offered Fall 2014 that may be of interest to students interested in number theory.
• Math 851: Abstract Algebra I (Felice Manganiello)
• Math 853: Matrix Analysis (Jim Brown)
• Math 985: Modular curves, modular forms, and elliptic curves (Hui Xue)
Additional Number Theory Links
Title page for ETD etd-050607-202823
A fundamental problem of microwave (MW) thermal processing of materials is the intrinsic non-uniformity of the resulting internal heating pattern. This project proposes a general technique to
solve this problem by using comprehensive numerical modeling to determine the optimal process guaranteeing uniformity. The distinctive features of the approach are the use of an original concept
of uniformity for MW-induced temperature fields and pulsed MW energy as a mechanism for achieving uniformity of heating.
The mathematical model used to represent MW heating describes two component physical processes: electromagnetic wave propagation and heat diffusion. A numerical solution for the corresponding
boundary value problem is obtained using an appropriate iterative framework in which each sub-problem is solved independently by applying the 3D FDTD method. Given a specific MW heating system
and load configuration, the optimization problem is to find the experiment which minimizes the time required to raise the minimum temperature of the load to a prescribed goal temperature while
maintaining the maximum temperature below a prescribed threshold. The characteristics of the system which most dramatically influence the internal heating pattern, when changed, are identified
through extensive modeling, and are subsequently chosen as the design variables in the related optimization. Pulsing MW power is also incorporated into the optimization to allow heat diffusion to
affect cold spots not addressed by the heating controlled by the design variables.
The developed optimization algorithm proceeds at each time-step by choosing the values of design variables which produce the most uniform heating pattern. Uniformity is measured as the average
squared temperature deviation corresponding to all distinct neighboring pairs of FDTD cells representing the load. The algorithm is implemented as a collection of MATLAB scripts producing a
description of the optimal MW heating process along with the final 3D temperature field.
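The uniformity measure described above can be sketched in a few lines; this is an illustrative reconstruction, not the authors' MATLAB code, and the 6-connected neighbor definition on the grid is an assumption:

```python
import itertools

def uniformity_penalty(temps):
    """Average squared temperature deviation over distinct neighboring cell pairs.

    temps: nested list temps[i][j][k] of cell temperatures on a 3D grid
    (a sketch of the measure described above; grid layout is assumed).
    """
    nx, ny, nz = len(temps), len(temps[0]), len(temps[0][0])
    total, pairs = 0.0, 0
    for i, j, k in itertools.product(range(nx), range(ny), range(nz)):
        for di, dj, dk in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):  # each pair counted once
            a, b, c = i + di, j + dj, k + dk
            if a < nx and b < ny and c < nz:
                total += (temps[i][j][k] - temps[a][b][c]) ** 2
                pairs += 1
    return total / pairs

# a perfectly uniform field scores 0
uniform = [[[25.0] * 3 for _ in range(3)] for _ in range(3)]
print(uniformity_penalty(uniform))   # 0.0
```

At each time-step the optimizer would pick the design-variable values minimizing this penalty.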
We demonstrate that CAD of a practical applicator providing uniform heating is reduced to the determination of suitable design variables and their incorporation into the optimization process.
Although uniformity cannot be attained using “static” MW heating, it is achievable by applying an appropriate pulsing regime. The functionality of the proposed optimization is illustrated by
computational experiments which show that time-to-uniformity can be reduced, compared to the pulsing regime, by up to an order of magnitude.
An Intuitive Approach to ROC Curves (with SAS & R)
I developed the following schematic (with annotations) based on supporting documents from the article cited below. The authors used R for their work. The ROC curve in my schematic was output from PROC LOGISTIC in SAS; the scatterplot with marginal histograms was created in R (code below) using the scored data from PROC LOGISTIC, exported using my SAS macro (link to SAS macro code).
Selection of Target Sites for Mobile DNA Integration in the Human Genome
Berry C, Hannenhalli S, Leipzig J, Bushman FD, 2006 Selection of Target Sites for Mobile DNA Integration in the Human Genome. PLoS Comput Biol 2(11): e157. doi:10.1371/journal.pcbi.0020157
"The data were analyzed using the R language and environment for statistical computing and graphics "
The R code for the plot was adapted from code provided via the "addicted to R" graph gallery:
# *------------------------------------------------------------------
# |
# | import scored logit data from SAS - code generated by SAS MACRO %EXPORT_TO_R
# |
# |
# *-----------------------------------------------------------------
# set R working directory
setwd("C:\\Documents and Settings\\wkuuser\\Desktop\\PROJECTS\\Stats Training")
# get data
dat.from.SAS <- read.csv("fromSAS_delete.CSV", header=T)
# check data dimensions
dim(dat.from.SAS)
# *------------------------------------------------------------------
# |
# | scatter plot with marginal histograms
# |
# |
# *-----------------------------------------------------------------
# model predicts P(G) so we want these probabilities for each group
# get p(G) data set for the group that is actually green
green <- dat.from.SAS[ dat.from.SAS$class=="G",]
# get p(G) data set for group that is actually red
red <- dat.from.SAS[ dat.from.SAS$class=="R",]
# just look at regular histograms for each group
hist(green$P_G, main = 'histogram for green')
hist(red$P_G, main = 'histogram for red')
# in order to do scatter plots n must be the same for each
# group, randomly sample n = n(green) from red
# Number of green observations; sample this many rows from red
N <- nrow(green)
# Randomly select an N-row sample from red (without replacement)
red.rs <- red[sample(nrow(red), N), ]
# does the distribution retain original properties? Yes
hist(red.rs$P_G, main = 'histogram for red sample')
plot(green$P_G, red.rs$P_G)
# *------------------------------------------------------------------
# |
# | create the marginal plots
# |
# |
# *-----------------------------------------------------------------
def.par <- par(no.readonly = TRUE) # save default, for resetting...
# define histograms
Ghist <- hist(green$P_G,plot=FALSE)
Rhist <- hist(red.rs$P_G, plot=FALSE)
top <- max(c(Ghist$counts, Rhist$counts))
Grange <- c(0,1)
Rrange <- c(0,1)
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
plot(green$P_G, red.rs$P_G, xlim=Grange, ylim=Rrange, xlab="green", ylab="red")
barplot(Ghist$counts, axes=FALSE, ylim=c(0, top), space=0, main = 'green')
barplot(Rhist$counts, axes=FALSE, xlim=c(0, top), space=0, horiz=TRUE, main = 'red')
Created by Pretty R at inside-R.org
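For readers who want to see what the ROC curve itself is doing under the hood, here is a language-agnostic sketch of the computation: sweep a decision threshold down through the predicted probabilities and record (false-positive rate, true-positive rate) at each step. The toy scores and labels below are invented for illustration; they are not the scored SAS data, and score ties are not handled.

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs traced out as the decision threshold sweeps downward.

    scores: predicted probability of the positive class
    labels: 1 = actually positive, 0 = actually negative
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1   # clears the threshold and is truly positive
        else:
            fp += 1   # clears the threshold but is truly negative
        points.append((fp / neg, tp / pos))
    return points

pts = roc_points([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
print(pts)   # starts at (0.0, 0.0) and ends at (1.0, 1.0)
```

This is exactly the sweep that PROC LOGISTIC performs internally when it draws the ROC curve.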
2 comments:
1. Matt,
Did you know that there is also a graph gallery for the SG procedures in SAS?
The graph you want is in the PROC SGRENDER gallery (Sample 35172) and includes a link that takes you to http://support.sas.com/kb/35/172.html
If you cut and paste the two calls to PROC TEMPLATE, then the following statements give a scatter plot with marginal histograms for some fake data:
data a (drop=i); /* fake data */
do i = 1 to 20;
x=rannor(1); y = rannor(1); output;
end;
run;
ods graphics;
proc sgrender data=a template=scatterhist;
dynamic YVAR="X" XVAR="Y";
run;
Actually, I don't like that the SAS Sample uses transparency for the scatter plot.
Set datatransparency=0 on the SCATTERPLOT statement in order to get the usual scatter plot.
2. See also "How to create a scatter plot with marginal histograms in SAS" http://bit.ly/jQVow4
fundamental group, free group
March 21st 2009, 02:32 PM
fundamental group, free group
Let $Y$ be the complement of the following subset of the plane $\mathbb{R}^2:$
$\{(x,0) \in \mathbb{R}^2: x \in \mathbb{Z} \}$
Prove that $\pi_1(Y)$ is a free group on a countable set of generators.
I don't know how to start this problem. Thanks in advance.
March 22nd 2009, 07:41 PM
Constructing a deformation retraction of $Y$ onto a wedge sum of countably many circles would be the proper way to approach this problem.
Once you get the deformation retract, I think it would not be hard to prove this.
Majorant and minorant
September 29th 2011, 10:42 AM
Majorant and minorant
Hi , Need some help plz :
Find out the minimum on |R of the following functions :
a - f: x --> 1 + |x| + 2x².
b - g:x --> |x+1| - 4.
Find out the maximum on |R of the following functions :
a- h: x--> (1/|x|+3) +1.
b- K: x--> (2/1+x²) - 3.
September 29th 2011, 10:50 PM
Re: Majorant and minorant
To #a): $2x^2\ge 0$ and $|x|\ge 0$, and both terms increase as $|x|$ increases, so the minimum of the complete expression occurs when $|x|$ has its minimum, i.e. at $x=0$: the minimum is $f(0)=1$.
To #b): Similar argumentation as described at #a); the minimum is $g(-1)=-4$.
Find out the maximum on |R of the following functions :
a- h: x--> (1/|x|+3) +1.
b- K: x--> (2/1+x²) - 3.
I don't understand the writing of the term at #a). Do you mean:
$h(x)=\frac1{|x|+3}+1$ or $h(x)=\left(\frac1{|x|}+3\right)+1$
To #b): The maximum of the complete term occurs when the fraction has its maximum. Since the numerator is a constant, the maximum is reached if the denominator has its minimum. Since x² is positive or zero, the minimum of the denominator is obvious: it occurs at $x=0$, giving the maximum $K(0)=-1$.
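A quick numeric spot check of these extrema on a grid (the grid itself is an arbitrary choice; h is omitted because its intended formula is ambiguous as written):

```python
def f(x): return 1 + abs(x) + 2 * x * x   # minimum 1 at x = 0
def g(x): return abs(x + 1) - 4           # minimum -4 at x = -1
def K(x): return 2 / (1 + x * x) - 3      # maximum -1 at x = 0

xs = [i / 100 for i in range(-500, 501)]  # grid over [-5, 5]
print(min(f(x) for x in xs),
      min(g(x) for x in xs),
      max(K(x) for x in xs))   # 1.0 -4.0 -1.0
```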
This article originally appeared in the September 1992 (volume VIII #1) issue of "Experimental Musical Instruments", published from Nicasio, CA, pages 18-22.
The technique of designing musical instruments has not changed much in the last several thousand years. A maker builds an instrument, listens to the tone, then repeats the entire process with a
slight change in construction. This is a tedious process and one often thinks that it could be easier if there was a way to "see" the sound. Spectrum analysis is a tool that gives us the ability to
see the timbre. In this article we will discuss its various aspects; including sampling theory, processing, and graphic output.
The graphic representation of sound has been an area of interest for years. The earliest experiments focused beams of light against a mirror which was attached to a vibrating object. This technique
was used extensively until the twentieth century when the oscilloscope was invented. Both light beams and oscilloscopes give a graphic representation of the vibratory nature of sound.
Musical sounds are usually visualized as "waves" of air that vibrate with a particular frequency. This frequency is expressed in cycles per second; however, instead of saying "cycles per second" we
say "Hertz". The range of human hearing is said to extend from 20 Hertz to 20 Kilohertz (i.e., 20 cycles to 20,000 cycles-per-second). This range is referred to as the "audio spectrum".
However, day-to-day sounds and musical sounds consist of a mixture of different frequencies. It is the nature of this mix which helps to determine timbre. Therefore, by looking closely at these
component frequencies we get insight into the timbre of any sound. This is spectrum analysis.
The pioneer of spectrum analysis was undoubtedly Hermann von Helmholtz. He developed a series of hollow glass spheres with carefully calibrated resonance frequencies. They would vibrate when excited
by the appropriate frequency and one could hear this by placing them against the ear. It was a very tedious process, but with these simple devices he was a pioneer in the field.
The Helmholtz resonators had their problems. They were awkward, and the lack of a graphic output meant that only a subjective evaluation of the component frequencies was possible. By the latter part of this century they were replaced by totally electronic techniques. Unfortunately, they were very expensive.
However today these once expensive spectrum analyzers are within the reach of the average instrument maker. This is a consequence of the rapid drop in the price of digital electronics. $200 and a
personal computer is all that one requires to enter the world of spectrum analysis. Table 1 is a small list of available packages. (NOTE - This article was published in 1992. Products and pricing are
not current.)
TABLE 1

| PRODUCT NAME | HARDWARE ENVIRONMENT | MANUFACTURER | STREET PRICE | COMMENTS |
|---|---|---|---|---|
| Digital Sound Studio | Amiga | Great Valley Products | $100 | Hardware / Software |
| Compuscope / GageCalc | IBM | Gage Applied Sciences | N/A | Hardware / Software |
| MacRecorder Sound System | Macintosh | Macromind | $175 | Hardware / Software |
| MacRecorder Pro | Macintosh | Macromind | $240 | Hardware / Software |
| Alchemy | Macintosh | Passport Designs | $695 | Software only |
We have briefly reviewed what spectrum analysis is. It would be very appropriate to discuss the technical details. One of the most fundamental is the process of taking the sound and putting it into
the computer. This is a subject known as sampling.
If the computer is going to do our work, we have to find some way to get the music into the computer. The hardware and software, with all of the myriad technical considerations, have been the topic
of numerous books and dissertations. However, the essentials are surprisingly simple.
The hardware in our sampling process revolves around a specialized peripheral called an Analog-to-Digital converter. This device, usually called an A/D converter for short, is responsible for taking the
analog signal and converting it into discrete numbers that the computer can process. These discrete numbers are our samples.
The concept behind sampling is quite simple. The waveform in figure 1-A can be sampled and expressed as figure 1-B. This is similar to the operation of a motion picture camera. Just as an event may be
captured on film as a series of still frames, so too an audio signal may be captured as a series of discrete values.
The concept may be quite simple but the implementation may be quite complicated. There are a number of factors which must be kept in mind. The two most important are the sampling rate and the resolution.
The sampling rate is an option on most computer systems. But how fast should it be?
We must turn to the Nyquist theorem to help us find the correct sampling rate. It tells us that the sampling rate must be greater than twice the highest frequency to be encountered. Any attempt to
sample at a lower rate results in a phenomenon known as aliasing.
Aliasing occurs when frequencies above the Nyquist point (half the sampling rate) become reflected back down the audio spectrum. This is illustrated in figure 2. It is very much like the movement of
the wheels in the old films. If the wheels were moving slowly, the camera has no trouble "sampling" the event. However, as the wheels go faster the apparent motion tends to slow down. At a certain
point the wheel appears to stop, thereafter it appears to go backwards. This apparent retrograde motion of the wheels is analogous to the aliasing which occurs in digitized audio signals.
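The folding is easy to compute. The short sketch below (in Python, used here purely as a modern illustration; none of the packages in Table 1 are involved) shows how a tone above the Nyquist point takes on a false, lower identity:

```python
def aliased_frequency(f, fs):
    """Return the apparent frequency of a tone at f Hz when sampled at fs Hz.

    Frequencies above the Nyquist point (fs / 2) fold back down the
    spectrum, mirroring about multiples of fs / 2.
    """
    f = f % fs               # sampling cannot distinguish f from f + k*fs
    if f > fs / 2:
        f = fs - f           # reflection about the Nyquist point
    return f

# A 7 kHz tone sampled at 10 kHz (Nyquist = 5 kHz) masquerades as 3 kHz,
# while a 4 kHz tone is captured faithfully.
print(aliased_frequency(7000, 10000))  # -> 3000
print(aliased_frequency(4000, 10000))  # -> 4000
```

A 7 kHz tone sampled at 10 kHz is indistinguishable from a genuine 3 kHz tone, which is why offending frequencies must be removed before sampling.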
The resolution is another consideration. Most low cost systems default to eight bits. An 8-bit code has 256 possible combinations. Therefore the maximum resolution that one could expect from an 8-bit
code is 256 steps. There are systems which are capable of processing up to 16-bit codes. This gives 65,536 possible steps! However these systems cost more than the average instrument maker would be
willing to spend. For the purposes of the average craftsman an 8-bit resolution is quite sufficient.
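To make the resolution figures concrete, here is a small Python sketch (illustrative only) of what quantizing to a given bit depth does to a sample value:

```python
def quantize(x, bits):
    """Round a sample in [-1.0, 1.0) to the nearest of 2**bits levels."""
    levels = 2 ** bits                   # 256 for 8-bit, 65536 for 16-bit
    step = 2.0 / levels                  # width of one quantization step
    code = round(x / step)
    return code * step

print(2 ** 8)                    # -> 256 steps at 8-bit resolution
print(2 ** 16)                   # -> 65536 steps at 16-bit resolution
print(quantize(0.5004, 8))       # -> 0.5 (snapped to the nearest 8-bit level)
```

Every sample is forced onto the nearest available step, so the finer the steps, the smaller the rounding error.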
This digitizing process, with all of its considerations is the first step. However merely putting the information into the computer is insufficient to produce any useful result. The data must be
processed to yield the frequency information.
The key to spectrum analysis lies in the computer processes. These processes are variations upon an extremely complicated field of mathematics known as Fourier transforms. The utility of the Fourier
transform is underscored by the failure of simpler methods to yield clear information about musical timbre.
The oscilloscope is a classic example of the inadequacy of a simpler technology. Virtually any instrument maker can afford to purchase an oscilloscope, yet the images that appear fail to give much
information about timbre. The oscilloscope fails because it functions in what is called the "time domain" while our perception of timbre depends upon something called the "frequency domain". These are
referred to as "inverse domains" of each other.
The concept of the inverse domain may sound very intimidating but it is based upon a simple idea. Let us begin by looking at figure 3. Here is a simple question. Which one is the quarter? We know
that both images represent the same object even though they look absolutely nothing alike. Once we accept the fact that totally different images may represent the same object, we have made the first
conceptual breakthrough in the understanding of inverse domains.
A further understanding of inverse domains is seen in common wall current. Wall current (60Hz, 120V) is graphically shown by the two diagrams in figure 4. Figure 4-A shows voltage as a function of
time. This is the standard sine wave which is familiar to most people. Figure 4-B shows voltage with respect to frequency. This shows a single spectral line at 60Hz. It does not require a strong
technical or mathematical background to see that both of these diagrams represent the same phenomenon.
The reason that these two representations are referred to as inverse domains is equally simple. The time domain diagram (fig. 4-A) shows the period as being .01667 sec. The Frequency domain (fig. 4B)
shows the frequency as being 60Hz. The relationship is simple:
frequency = 1 / period, and period = 1 / frequency (60 Hz = 1 / 0.01667 sec)
We see that this is a simple reciprocal relationship. It is because of this simple relationship that they are called inverse domains.
Unfortunately, real world conditions do not allow us to take a simple reciprocal and obtain our spectra. To derive spectra from complex sounds we are forced to perform what is called a Fourier transform.
The Fourier transform may be visualized as a magic "Black Box" which is able to convert time domain to frequency domain. There are numerous algorithms to accomplish this; however, the most common is an
algorithm known as the "Fast Fourier Transform". This particular algorithm is usually abbreviated as FFT. The FFT is the most commonly used algorithm for small computer systems.
The Fourier transform was developed by Jean Baptiste Joseph Fourier in the beginning of the 19th century. The life of Fourier would make an interesting book in its own right. He was successful at
politics, sciences, and mathematics. It is also curious that the mathematical process that made him immortal was not developed for acoustics. It was instead developed during the course of his work on
thermodynamics. However to us it is his "black box" that converts time domain to frequency domain which is important.
Although the Fourier transform may be visualized as "black box" there are still some considerations which should be observed. Primarily we need to keep in mind the effects of our sample.
The size of the sample is extremely important. This is because the amount of information which goes into the process is going to be the same as the information which comes out. The Fourier transform
merely changes the form of the information. It does not generate nor destroy information. Therefore a larger sample will give us a higher frequency resolution. Let us say that we transform a sample
which has 1024 points. Our output will have 512 frequency bands.
At this point the attentive reader will be saying "Hey, that is only half the information which went into the transform. Where did the other information go?" This would be a convenient place to zoom
into the stratosphere with an esoteric discussion of imaginary numbers, but we will not do that. The simple fact is that the other half of the information is the phase relationship of the various
frequency bands. Therefore the 1024 point sample was transformed into 512 frequency bands and the corresponding 512 phase relationships. However, this phase information is generally ignored.
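The bookkeeping can be verified with a toy transform. The Python sketch below uses a naive DFT rather than the FFT (the results are identical, only the speed differs) at a small size to keep it readable; a 1024-point sample behaves the same way, yielding 512 bands:

```python
import math, cmath

def dft(samples):
    """Naive discrete Fourier transform - same bookkeeping as the FFT."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(samples))
            for k in range(n)]

n = 64
samples = [math.cos(2 * math.pi * 5 * i / n) for i in range(n)]
spectrum = dft(samples)
# n real samples in, n/2 usable frequency bands out; each band carries a
# magnitude (the spectrum) plus a phase angle (the usually-ignored half).
magnitudes = [abs(c) for c in spectrum[:n // 2]]
phases = [cmath.phase(c) for c in spectrum[:n // 2]]
print(len(magnitudes), len(phases))                     # -> 32 32
print(max(range(n // 2), key=lambda k: magnitudes[k]))  # -> 5, the test tone's bin
```

Half the sample count comes out as magnitudes and the other half as phases, so no information is created or destroyed.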
There are situations when a characteristic of the sample produces a frequency which is not in the original. This is called an artifact. Aliasing is one example of an artifact.
There is another artifact which is particularly troublesome for the Fourier transform. This arises when the sample does not correspond to a whole number of periods. We find that the Fourier transform
presumes that it is dealing with a whole number of periods and generates the frequency information accordingly. Therefore the presumed waveform from the sample in figure 5-A would be the waveform in
figure 5-B.
This artifact points to a fundamental weakness of the Fourier transform. The process presumes that there is a repeating pattern and that the sample conforms to a whole number of periods.
Unfortunately, real-world sounds tend to show an absence of such simple repeating patterns. This absence usually derives from several mechanisms. The first is a random component in the sound
(i.e., white noise). Another is the effect of the envelope (i.e., the attack and decay of the sound). A third involves different envelopes for each component frequency. Although such
fundamental inconsistencies exist between the presumptions of the Fourier transform and the real world, this does not weaken the value of the process. It merely means that we must be conscious of the
artifacts and how they may influence our final results.
Usually these artifacts are of such a low amplitude that we do not need to worry about them. However, if one suspects that an area of interest may be an artifact, the easiest thing to do is to
resample with a different sample size. If the particular component shows wide variation, it is probably an artifact. If it shows a certain consistency then it is probably a legitimate component.
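The artifact itself is easy to reproduce. In the Python sketch below (illustrative only), a sample containing exactly four periods transforms into a single clean band, while one containing 4.3 periods smears its energy across many neighbouring bands:

```python
import math, cmath

def dft_magnitudes(samples):
    """Magnitudes of the lower half of a naive DFT of a real sample block."""
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(samples)))
            for k in range(n // 2)]

n = 64
whole = [math.cos(2 * math.pi * 4.0 * i / n) for i in range(n)]    # exactly 4 periods
partial = [math.cos(2 * math.pi * 4.3 * i / n) for i in range(n)]  # 4.3 periods

clean = dft_magnitudes(whole)
smeared = dft_magnitudes(partial)
# With a whole number of periods all the energy lands in bin 4; with a
# fractional number it leaks into the surrounding bins - an artifact.
print(sum(1 for m in clean if m > 1.0))    # -> 1
print(sum(1 for m in smeared if m > 1.0))  # -> many (leakage)
```

Changing the sample size shifts these leaked components around, which is exactly why resampling at a different size is a useful artifact test.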
We have seen that the Fourier transform is the major tool by which we are able to obtain the frequency information from a sample. We have also shown that there are certain considerations which should
be observed if the transform is to be reliable. However we have not discussed one of the most important aspects of the process. That is the graphic representation of the information.
The output of the spectrum analyzer is of prime importance. This is what is going to be interpreted by the instrument maker. An unintelligible output renders the whole system worthless.
Undoubtedly a simple numeric table would be the most fundamental computer output. After all, the Fourier transform is just a mathematical process which takes in numbers and spits out numbers.
Unfortunately, this is not an intuitive way to read the data. It is for this reason that a numeric output is not common for spectrum analyzers.
The simple X/Y plot is the most common form of output. This simply plots the data from the Fourier transform in standard Cartesian coordinates. The X axis is conventionally fixed as frequency and the
Y axis is conventionally fixed as the amplitude. Furthermore there is a tendency to "fill" the diagram to make it visually more appealing. Figure 6 is a typical X/Y spectrum of a guitar with a black fill.
The simple X/Y plot has one disadvantage. It does not have the ability to show how the spectrum changes with respect to time. It is a characteristic of acoustic instruments that the spectrum is not fixed
but changes over the course of time. If we take repetitive samples and plot them on the Z axis, then we can better illustrate the timbre of an instrument.
This is the principle behind the 3-D wireframe. In figure 7 we see a 3-D representation of the sound of a mridangam. There are several characteristics which may be seen that would not be apparent in
a simple X/Y plot. For instance there is a moderate component of white noise (random vibration) in the initial sounding. This is indicated by the unusually broad peaks and the large degree of filling
between them. The initial spectrum very quickly dies away and is replaced by a relatively stable 2nd, 3rd, and 4th harmonic. There is a peak in the second harmonic at an unusually long period after
the drum was excited. All of these are characteristics which are clear when viewed as a 3-D wireframe but would not be so evident in a simple X/Y plot.
There is another way to represent the same information in a 2-D format. This is in the form of a "sonogram". This particular form of representation gained wide popularity in the pre-computer era
because it lent itself well to analog techniques of spectrum analysis. This technique uses the X axis to display time and the Y axis to portray frequency. The amplitude is denoted by the darkness of
the print. This method is still in use today in voice-print analysis, however for virtually all other applications it is on the decline.
All of the previous examples utilized a linear method of presenting the information. That is to say that each unit of time or voltage corresponded to a single unit of vertical or horizontal
displacement. However, this one-to-one relationship is inconsistent with human perception.
Have you ever wondered why, when you walk into a dark room and turn on a light, it gets bright, yet when you turn on two lights it doesn't get twice as bright? This is because human perception is
not linear. Some spectrum analyzers allow you to look at the spectrum in a non-linear fashion, somewhat analogous to the way we hear. This is referred to as a power spectrum, while the normal
linear graph is referred to as a normal spectrum. Figure 9 (A & B) shows both the normal spectrum and the power spectrum of steel drums.
It is apparent that the power spectrum shows much more detail than the normal spectrum. Unfortunately it takes some practice to properly interpret the relative values of the component
frequencies. The choice between displaying the power spectra or normal spectra is often a question of personal choice.
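The power-style display is just a logarithmic rescaling of the same data. A short Python sketch (decibels are one common choice of logarithmic unit; the exact scaling varies from package to package):

```python
import math

def to_db(magnitudes, reference=None):
    """Convert linear spectral magnitudes to decibels relative to the peak.

    A logarithmic (power-style) display mirrors hearing: each halving of
    amplitude subtracts a fixed ~6 dB rather than halving the bar height.
    """
    reference = reference or max(magnitudes)
    return [20 * math.log10(m / reference) for m in magnitudes]

linear = [100.0, 50.0, 10.0, 1.0]              # strong fundamental, weak partials
print([round(db, 1) for db in to_db(linear)])  # -> [0.0, -6.0, -20.0, -40.0]
```

Weak partials that are invisible on a linear plot become clearly separated steps on the logarithmic one, which is why the power spectrum appears to show more detail.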
We may summarize the whole topic of output quite simply. Although the output from the Fourier transform must be numeric, virtually every package gives a graphic output. This may be a standard X/Y plot,
the older sonogram, or the much more attractive 3-D wireframe.
Spectrum analyzers are not out of the reach of the common man. Software/ hardware packages are now in the range where almost anybody can afford one. However, the complexity of the subject still means
that there has to be a certain attention to detail. If the nature of sampling and the quirks of the Fourier transform are known, it may be a useful tool for virtually any serious instrument builder,
especially with an appropriate graphic output.
© 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 David and Chandrakantha Courtney
For comments, corrections, and suggestions, kindly contact David Courtney at david@chandrakantha.com
Teaching Chaos
But First:
What is Chaos Theory?
Strategies and Rubrics for Teaching Chaos and Complex Systems Theories as Elaborating, Self-Organizing, and Fractionating Evolutionary Systems,
Fichter, Lynn S., Pyle, E.J., and Whitmeyer, S.J., 2010, Journal of Geoscience Education (in press)
Chaos theory says that simple systems are capable of producing complex outcomes.
Or, the behavior of a simple system - simple algorithm - becomes more complex and more unpredictable the harder the system is pushed.
This is not intuitively obvious.
For example, the logistic system Xnext = rX (1-X) is a simple growth model that conventionally produces an 'S' shaped curve of initial exponential growth that soon levels off to equilibrium.
But, the same logistic system also generates this bifurcation diagram which is not simple.
Chaos Theory (technically known as deterministic chaos) studies why and how the behavior of simple systems — simple algorithms — becomes more complex and unpredictable as the energy/information the
system dissipates increases.
Because chaos theory was discovered more or less independently by a number of workers in different disciplines there are a variety of definitions/descriptions. We use two definitions:
Chaos Definition One: "the quantitative study of unstable aperiodic behavior in deterministic non-linear systems.” Frankly this is a bit opaque unless you already have a lot of experience
observing the behavior of chaotic systems.
Chaos Definition Two: the logistic system Xnext = rX (1-X), or actually the behavior of this system. And, since this system can be modeled in a computer, in class, in real time, it is the way we
introduce chaos theory.
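A few lines of code make the point concrete. This Python sketch (parameter values chosen for illustration) shows the same simple rule producing qualitatively different behavior as r, the "push" on the system, increases:

```python
def logistic_trajectory(r, x0=0.2, steps=50):
    """Iterate Xnext = r * X * (1 - X) and return the visited values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Pushed gently (r = 2.8) the system settles to a single equilibrium value;
# pushed harder (r = 3.2) the same rule oscillates between two values.
settled = logistic_trajectory(2.8)[-1]
print(round(settled, 4))                       # -> 0.6429 (= 1 - 1/r)
pair = logistic_trajectory(3.2)[-2:]
print(sorted(round(x, 4) for x in pair))       # -> [0.513, 0.7995]
```

Raising r further produces period 4, then period 8, and eventually chaos: the bifurcation diagram is simply this experiment repeated across many values of r.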
It is not deductively obvious that all the complex behavior in the bifurcation diagram is really there. It must be experienced. If we tell someone that the behavior of a very simple system - like a
swinging pendulum - will become astronomically complex they may accept it just because we tell them (because we are the teacher and they are the student), but that is not the same as knowing and
believing it in your gut. Especially if you have been told your whole life that simple causes produce simple effects.
You have to see for yourself, be confronted with the marvel and mystery, so that you know what you saw, even if you are not quite sure yet that you believe what you saw. That is what these rubrics are
designed to do. At some stage understanding the phenomena analytically (mathematically) will be important, but we have to start empirically.
New capability of step consumption in process manufacturing AX2012
Let me provide some context behind the decision to introduce step consumption capability in Process Manufacturing Dynamics AX2012.
Discrete manufacturing almost always involves linear consumption of ingredients - four tires are required to put together a car. In process manufacturing, consumption can be linear, but it can also be
non-linear. As home brewers know, 5 kg of malt gives 12 litres of beer and 10 kg of malt gives 24 litres, but the bitterness is greater in the smaller lot; so if you want to keep the same bitterness
in bigger lots you need to add some extra malt, and how much extra is non-linear, based on experience. Another industrial example is the use of carbon as a catalyst in plasticizer manufacturing. You
can use 1 kg of carbon to make up to 5 kilolitres of phthalates; then you need 2 kg of carbon for up to 8 kilolitres of phthalate (of course, it's a different matter that phthalates are banned in many
regions for some products, so you shouldn't really be making them).
Many such reactions have non-linear consumption across different industries, but sometimes it's not essential to capture them in an ERP, and at other times it is possible but cost-intensive to capture
and maintain the formulae. This is where step consumption comes in useful. Setting up bills of material is fairly straightforward compared to setting up formulae. That isn't because formula setup in
AX is not user friendly; it's just that many more parameters are required to set up a formula. In order to set up formulae with non-linear consumption in previous versions, the only option would be to
set up many different formulae where every detail is the same except the quantity of the ingredient that is consumed non-linearly. Since this would be extremely cumbersome in itself, and furthermore
because process manufacturers needed multiple sets of versions - master formula, production formula, distributed formula, batch card formula and so on - we decided to introduce the concept of step
consumption into AX2012.
So for the non-linear consumption of malt, you will create two lines in the formula for the malt product. On one line you will set up linear consumption: 5 kg for 12 litres, 10 kg for 24 litres and so on.
On the second line you can change the formula to "Step" on the Setup tab. This makes the step consumption grid available. Here you can specify that an extra 0.1 kg of malt is needed when the finished
beer quantity is between 12 and 24 litres, an extra 0.15 kg of malt when the finished beer quantity is more than 24 litres, and so on. This gives you the flexibility to set up non-linear consumption
in the same formula.
During production estimation the system will look at the finished quantity of beer you want to manufacture and will automatically calculate the correct quantity of malt needed. In the carbon example
above, this can be achieved with just one line on the formula lines for carbon, where you can set up step consumption of 1 kg up to 5 KL, then 2 kg between 5 and 8 KL, and so on.
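For illustration only - this is not the actual AX calculation logic, and the function and data layout below are made up - the estimation described above amounts to something like:

```python
def malt_required(beer_litres, steps):
    """Illustrative step-consumption calculation (hypothetical, not AX code).

    `steps` maps a threshold in finished litres to the extra malt (kg)
    added once production exceeds that threshold.
    """
    linear = beer_litres * 5.0 / 12.0           # 5 kg of malt per 12 litres
    extra = 0.0
    for threshold, surcharge in sorted(steps.items()):
        if beer_litres > threshold:
            extra = surcharge                   # highest step reached wins
    return linear + extra

steps = {12: 0.10, 24: 0.15}   # from the brewing example above
print(malt_required(12, steps))   # -> 5.0   (no step crossed yet)
print(malt_required(18, steps))   # -> 7.6   (7.5 linear + 0.10)
print(malt_required(30, steps))   # -> 12.65 (12.5 linear + 0.15)
```

The linear line and the step line combine into one total, which is why a single formula can now cover the whole production range.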
Hopefully this will be useful for your scenarios. In case you have suggestions for improvement please email me.
The Cube Octahedron
Figure 1
The cube octahedron is sometimes called the vector equilibrium. That is what Buckminster Fuller called it.
It has 12 vertices, 14 faces, and 24 edges. It is made by cutting off the corners of a cube.
Figure 1A showing the cube octahedron (yellow) inside the cube (blue).
Notice that the triangular face of the cube octahedron IJK is formed by cutting off the corner of the cube G, and that the square face of the cube octahedron NPJI is formed when the 4 corners of the
cube H,E,F, and G are cut off.
The cube octahedron has 8 triangular faces and 6 square faces. Figure 1 shows 3 of the square faces and 3 of the triangular faces.
There are 6 square faces on the cube octahedron, one for each face of the cube.
There are 8 triangular faces on the cube octahedron, one for each vertex of the cube.
This polyhedron has the fascinating property that the radius of the enclosing sphere, which touches all 12 vertices, is exactly equal to the length of all of the sides of the cube octahedron.
Unfortunately, our 2 dimensional perspective cannot accurately capture the cube octahedron in true perspective, but if you build a model of one you'll see it's true.
Figure 1B
In Figure 1B you can see some of the rays coming out from O, for example OB and OA. These rays coming out from the center are all equal in length to the sides, for example, GL and LC. This means that
all of the rays branching out from the centroid, O, do so at 60 degree angles, forming 4 'great circle' hexagonal planes on the outside of the figure.
Figure 2 -- showing the 4 'great circle' hexagons
The cube octahedron, if you observe Figure 1C closely, can be seen to be composed of 8 tetrahedrons and 6 half-octahedrons. OGLH and OJAB, for example, are tetrahedrons. The half-octahedrons are
formed from the square planes, for example OEIAJ and OCKGL.
Figure 3 -- showing the tetrahedron and the half-octahedron,
which are also the 2 distinct pyramids of the cube octahedron.
The 12 vertices of the cube octahedron can also be considered to be composed of 3 orthogonal squares centered around O, the sides of which cross through the square faces of the cube octahedron as the
diagonals of the squares:
Figure 4 -- showing the 3 interlocking squares, the corners of which are the vertices of the cube octahedron. Note how the sides of the squares are the diagonals of the square faces of the cube
Notice that in Figure 5 below, the intersection points of the squares form the 6 vertices of an octahedron:
Figure 5. The diagonals of the cube octahedral “squares” intersect to form the vertices of an octahedron (in purple)
What is the ratio of the radius of the enclosing sphere around the cube octahedron to the side of the cube octahedron?
r = s.
All line segments from the centroid to any vertex are identical in length to the edge lengths.
What is the volume of the cube octahedron (hereinafter referred to as c.o.)?
We use the pyramid method as usual.
V = 1/3 * area of base * pyramid height.
We have 8 triangular faces and 6 square faces.
From The Equilateral Triangle we know that the area of the triangular face is (√3/4)s².
The area of the square face is just s * s = s².
We need to find the height of the square pyramid and the height of the triangular pyramid.
In Figure 3, the triangular pyramid, or tetrahedron, is OEHI, with height OM. Triangle OMH is right, the line OM being perpendicular to the plane EHI at M. The height of the tetrahedral pyramid
is known, from Tetrahedron, as √(2/3)·s ≈ 0.816496581 s.
In Figure 3, F is the center of the square face LCKG, and OF is the pyramid height. The triangle OFL is right, the line OF being perpendicular to the plane LCKG at F. OL is just the radius of the
enclosing sphere, which, in the c.o., is the same as the c.o. side, s.
LF is one half the diagonal of the square LCKG, or (√2/2)s, because the diagonal of a square is always √2 times the side of the square.
Therefore we can write for the pyramid height OF: OF = √(OL² - LF²) = √(s² - s²/2) = (√2/2)s ≈ 0.707106781 s. Note that this distance is identical to LF.
The Volume of 1 triangular pyramid then, is (1/3) · (√3/4)s² · √(2/3)·s = (√2/12)s³.
The Volume of all 8 triangular pyramids is 8 · (√2/12)s³ = (2√2/3)s³.
The Volume of 1 square pyramid = (1/3) · s² · (√2/2)s = (√2/6)s³.
The Volume of all 6 square pyramids = 6 · (√2/6)s³ = √2·s³.
Therefore the Total Volume of the cube octahedron = (2√2/3)s³ + √2·s³ = (5√2/3)s³ = 2.357022604 s³ = 2.357022604 r³ (since r = s).
This figure is larger than for the cube, but why? The volume of the cube in the unit sphere is only 1.539600718 r³ ! The reason lies in the fact that the cube octahedron fits more snugly within the
unit sphere than does the cube.
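A quick numeric check of the pyramid decomposition (in Python) confirms the coefficient:

```python
import math

# Sum the pyramid decomposition of the cube octahedron with edge s = 1:
# 8 tetrahedra plus 6 square pyramids (half-octahedra), and compare the
# total against the closed form V = (5 * sqrt(2) / 3) * s**3.
s = 1.0
tetra = (1/3) * (math.sqrt(3)/4 * s**2) * (math.sqrt(2/3) * s)   # 8 of these
square_pyr = (1/3) * s**2 * (math.sqrt(2)/2 * s)                 # 6 of these
total = 8 * tetra + 6 * square_pyr
print(round(total, 9))                         # -> 2.357022604
print(round(5 * math.sqrt(2) / 3, 9))          # -> 2.357022604
```

The decomposition and the closed form agree to machine precision.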
The model on my desk shows the cube octahedron inside the cube, as in Figure 1. Here, the side of the cube octahedron is (√2/2) times the side of the cube. This can be determined by an inspection of triangle GIJ in Figure 1.
GI = GJ = 1/2 the side of the cube. Angle IGJ is right. Therefore IJ, the side of the cube octahedron, = (√2/2) · (the side of the cube).
Calculating the volume of the cube octahedron in terms of the side of the cube as in Figure 1, we write V = (5√2/3) · ((√2/2)·a)³ = (5/6)a³, where a is the side of the cube.
Therefore the volume of the cube octahedron is 5/6ths the volume of the cube when the cube octahedron is inscribed in the cube as in Figure 1.
There are 8 small tetrahedrons which represent the volume that has been "cut out" of the cube to form the cube octahedron (See Figure 1 and the tetrahedron GKIJ, for example). The volume of each of these
tetrahedrons must then be one-eighth of (1/6)a³, the difference between the volume of the cube and the volume of the cube octahedron. Therefore the volume of each small tetrahedron = a³/48.
What is the surface area of the cube octahedron?
The surface area is 8 * area of triangular face + 6 * area of square face = 8 · (√3/4)s² + 6 · s² = (2√3 + 6)s² ≈ 9.464101615 s².
What is the central angle of the cube octahedron? Each of the internal angles of the c.o. are formed from any of the 4 hexagonal planes which surround the centroid (see Figure 2).
Therefore the central angle of the cube octahedron = 60°.
There are 2 surface angles of the c.o., one being the 60° angle of the triangular faces, the other being the 90° angle of the square faces.
What is the dihedral angle of the cube octahedron? This angle is the intersection between a square face and a triangular face. If you sit the c.o. on one of its square faces you can see the dihedral angle clearly.
Figure 5 - illustrating the dihedral angle of the cube octahedron -- top view
I have drawn the line EGHF to illustrate the dihedral angle between the two triangular faces and the square face ABCD. The point G is directly above the point M. O is the centroid of the c.o. M
bisects EO.
The triangle GME is right by construction. The angle MGH is right by construction. The angle EGH is the dihedral angle. If we can find the angle EGM, all we have to do then is add 90° to it (angle MGH),
and we have the dihedral angle.
Figure 6 -- showing that GME and MGH are right angles
OE = OF = side of c.o., or s. EM = 1/2 * OE.
EG is the height of a triangular face, which we know from The Equilateral Triangle is (√3/2)s.
So we can write:
sin(angle EGM) = EM / EG = (s/2) / ((√3/2)s) = 1/√3 .
Angle EGM = 35.26438968° .
Angle EGH = 90° + 35.26438968° = 125.26438968° .
Dihedral angle = 125.26438968° .
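A quick numeric check (in Python) of the dihedral-angle calculation:

```python
import math

# The dihedral angle between a square face and a triangular face is
# 90 degrees plus arcsin of (s/2) over the triangle height (sqrt(3)/2)*s.
angle_egm = math.degrees(math.asin((1/2) / (math.sqrt(3)/2)))
dihedral = 90 + angle_egm
print(round(angle_egm, 8))   # -> 35.26438968
print(round(dihedral, 8))    # -> 125.26438968
```

Both figures match the values derived above.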
The distance from the centroid to each vertex = s = r.
The distance from the centroid to any mid-edge is just the height of an equilateral triangle (remember that the centroid is surrounded by 4 hexagons (Figure 2)). This distance is (√3/2)s.
The distance from the centroid to a triangular mid-face is just the height of the tetrahedron (see Figure 3), which we know from above is √(2/3)·s ≈ 0.816496581 s.
The distance from the centroid to a square mid-face is just the height of the half-octahedron OF (see Figure 3), which we know from above is (√2/2)s ≈ 0.707106781 s.
The relationship between these distances is 1 : 0.866025404 : 0.816496581 : 0.707106781 (that is, 1 : √3/2 : √6/3 : √2/2).
Cube Octahedron Reference Tables
(included in the book)
Math calculations in VHDL
October 17th, 2012, 04:13 AM #1
Hi all,
I need a little help with my design, there are a few issues that I'm trying to resolve.
My design flow is as follows: a signal is sampled by a 16bit A2D and multiplied by 2 other 16bit signals (which results in 2 32bit signals). After a few manipulations I need to implement a
formula that uses those 2 signals with division, multiplication and a few other functions that are calculated using LUTs. The LUTs are designed for a signal normalized to the range of 0 - pi/2.
Note that after the formula the signals will be converted to 32bit floating-point signals.
The formula requires some fractional calculations so I'm trying to use the fixed point package by David Bishop but I'm facing a few issues:
1. How am I supposed to convert the 32bit signals (integer) to 32 fixed point numbers?
2. I'm trying to divide the 2 32bit signals - what is the best way to do this? I tried the lpm_divide megafunction but I get a quotient and a remainder, while I need the whole number (i.e. 5.65 and
not only 5).
3. How exactly am I supposed to use the fixed-point package? I have the following libraries, is that enough?
Library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;
use ieee.fixed_float_types.all;
use ieee.fixed_pkg.all;
Thanks in advance, help will be very appreciated.
Fixed point, unlike floating point arithmetic, does not need any package at all. Every math operation implemented for integers is applicable to fixed point numbers.
This might be helpful http://www.digitalsignallabs.com/fp.pdf
As for the division this might be implemented using CORDIC algorithms.
I'm sorry if I wasn't clear enough, I understand the theory, my problem is with the implementation.
First of all, how can I get the whole number (including the fractional part) from the lpm_divide function?
Second, how can I scale the 32bit integers from the first part of my design so that they can be used as 32bit fixed-point fractions in the second part?
Well, if it's an integer, it has no fractional part to it. An integer is just a fixed point number with no fractional part - unless you are talking about a number that was really fractional and you
biased it by 2^N (where N is the number of fractional bits), in which case you need to tell us what the value of N is. You DON'T need an LPM divide for this.
Answer to questions
1. You zero pad the inputs to the correct number of bits (integer and fractional) and the result will be correct.
2. You need to just assign it to the correct sized ufixed or sfixed values (fixed point is just integers, offset by 2^n)
either way, there are the to_ufixed and to_sfixed functions from the fixed point package.
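To see what the zero-padding buys you, here is a quick model of the same arithmetic in Python (Python stands in for the hardware integer divider here; the shift amount mirrors the 32 fractional bits of a 32.32 fixed-point value, and the constant name is mine):

```python
FRAC_BITS = 32

def fixed_div(a, b, frac_bits=FRAC_BITS):
    """Integer model of fixed-point division: shift the numerator left by
    the number of fractional bits, then do an ordinary integer divide.
    The quotient is then a fixed-point number with `frac_bits` fraction
    bits - which is what zero-padding an integer into a wider fixed-point
    format and feeding it to an integer divider achieves in hardware."""
    return (a << frac_bits) // b

q = fixed_div(113, 20)                      # 113 / 20 = 5.65
print(q >> FRAC_BITS)                       # integer part -> 5
print(round(q / (1 << FRAC_BITS), 6))       # full value   -> 5.65
```

The quotient and remainder from a plain integer divide only give the "5"; keeping the fractional bits in the scaled quotient is what preserves the ".65".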
One point to note: you should not be using the std_logic_unsigned package, as it's not an IEEE standard. If you have a number, you shouldn't make it a std_logic_vector. There is no need to make ports
std_logic_vector; they can be anything.
Hi Tricky, thank you for your reply.
The reason I said that there are fractions is because when I divide the two 32-bit integers I will probably get a fraction.
You mean that I should convert the 32-bit integers to ufixed and then divide them and store the answer in a 64-bit ufixed signal? I tried to do that but I got a strange error regarding the "/" operator; that's why I used the lpm_divide megafunction instead, which works for std_logic_vector.
If I'm not using the std_logic_unsigned package I get an error even for a "+" sign. Where am I wrong? Have I included all the correct files from the fixed-point package?
Yes, you will get a fraction. But if they start out as 32 bit, you will need a lot more bits to store the result. Making it a 32.32 sfixed is rather easy (remember, integers are signed; for unsigned values you can only have a max of 31 integer bits). Do you really mean integers as in the VHDL type integer, or just a number that arrives via a 32-bit bus?
signal my_number : sfixed(31 downto -32);
my_number <= to_sfixed(input, 31, -32);
And this is free in terms of synthesis because you're really just appending a load of zeros to it. Then you can convert this to a std_logic_vector (with the to_slv function) so you can connect it into the lpm_divide. No, you should not use the "/" function unless you can get away with a pipeline length of 1 (until Altera sort out their register placement inside inferred dividers properly!).
You shouldn't do + with std_logic_vectors; you should keep them in ufixed or sfixed type when you do this.
I think at this point you need to post some code to show us what you're actually trying to do.
Below is a simple code that uses the lpm_divide megafunction to divide two 32-bit numbers. I tried to implement your comments from the previous post. The numbers are 32-bit std_logic_vectors (I referred to them as integers, i.e. a number that arrives via a 32-bit bus). I want to get the full result (including the fractional part) because the next step will be performing modulus pi/2 on the result. I'm getting the following compilation error: Error (10344): VHDL expression error at PreNICOMFunction_fixed.vhd(: expression has 0 elements, but must have 64 elements (it refers to the line I_fp <= to_ufixed(I,31,-32)).
1. What am I doing wrong?
2. How can I convert the divider result to a fractional number?
Thanks again for your help.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.fixed_float_types.all;
use ieee.fixed_pkg.all;

entity PreNICOMFunction_fixed is
  port (
    clock : in std_logic;
    reset : in std_logic;
    I     : in std_logic_vector (31 downto 0);
    Q     : in std_logic_vector (31 downto 0);
    V0    : in std_logic_vector (31 downto 0) --;
    -- temp_res : out ufixed (31 downto -32)
  );
end PreNICOMFunction_fixed;

architecture rtl of PreNICOMFunction_fixed is

  component lpm_divider1 is -- 32 clocks latency
    port (
      aclr     : in  std_logic;
      clock    : in  std_logic;
      denom    : in  std_logic_vector (63 downto 0);
      numer    : in  std_logic_vector (63 downto 0);
      quotient : out std_logic_vector (63 downto 0);
      remain   : out std_logic_vector (63 downto 0)
    );
  end component lpm_divider1;

  type fsm_type1 is (s0, s1, s2, s3, s4);
  signal fsm1          : fsm_type1 := s0;
  signal QdivI         : std_logic_vector (63 downto 0) := (others => '0');
  signal QdivI_remain  : std_logic_vector (63 downto 0) := (others => '0');
  signal I_ff          : std_logic_vector (31 downto 0) := (others => '0');
  signal Q_ff          : std_logic_vector (31 downto 0) := (others => '0');
  signal I_fp          : ufixed (31 downto -32) := (others => '0');
  signal Q_fp          : ufixed (31 downto -32) := (others => '0');
  signal div_res       : ufixed (31 downto -32) := (others => '0');
  signal cntr1         : ufixed (4 downto 0) := (others => '0');
  constant div_latency : ufixed (4 downto 0) := "11111";

begin

  divider : component lpm_divider1
    port map (
      aclr     => '0',
      clock    => clock,
      denom    => to_slv(I_fp),
      numer    => to_slv(Q_fp),
      quotient => QdivI,
      remain   => QdivI_remain
    );

  main : process (clock)
  begin
    if (rising_edge(clock)) then
      I_ff <= I;
      Q_ff <= Q;
      case fsm1 is
        when s0 => if ((I_ff /= I) or (Q_ff /= Q)) then -- New data available
                     I_fp <= to_ufixed(I,31,-32);
                     Q_fp <= to_ufixed(Q,31,-32);
                     fsm1 <= s1;
                   end if;
                   -- atanLUT_clk <= '0';
        when s1 => if (cntr1 < div_latency) then -- Wait for the division operation to complete (Q/I)
                     cntr1 <= cntr1 + 1;
                   else
                     cntr1 <= (others => '0');
                     fsm1  <= s0;
                   end if;
        when others => fsm1 <= s0;
      end case;
    end if;
  end process main;

end rtl;
I modified the code so it compiles now, but I still get a quotient and a remain and not the whole result. Besides, if my input is std_logic_vector, why convert it to ufixed and then back to std_logic_vector?
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.fixed_float_types.all;
use ieee.fixed_pkg.all;

entity PreNICOMFunction_fixed is
  port (
    clock : in std_logic;
    reset : in std_logic;
    I     : in std_logic_vector (31 downto 0);
    Q     : in std_logic_vector (31 downto 0);
    V0    : in std_logic_vector (31 downto 0) --;
    -- temp_res : out ufixed (31 downto -32)
  );
end PreNICOMFunction_fixed;

architecture rtl of PreNICOMFunction_fixed is

  component lpm_divider1 is -- 32 clocks latency
    port (
      aclr     : in  std_logic;
      clock    : in  std_logic;
      denom    : in  std_logic_vector (63 downto 0);
      numer    : in  std_logic_vector (63 downto 0);
      quotient : out std_logic_vector (63 downto 0);
      remain   : out std_logic_vector (63 downto 0)
    );
  end component lpm_divider1;

  type fsm_type1 is (s0, s1, s2, s3, s4);
  signal fsm1          : fsm_type1 := s0;
  signal QdivI         : std_logic_vector (63 downto 0) := (others => '0');
  signal QdivI_remain  : std_logic_vector (63 downto 0) := (others => '0');
  signal I_ff          : std_logic_vector (31 downto 0) := (others => '0');
  signal Q_ff          : std_logic_vector (31 downto 0) := (others => '0');
  signal I_fp          : ufixed (31 downto -32) := (others => '0');
  signal Q_fp          : ufixed (31 downto -32) := (others => '0');
  signal div_res       : ufixed (31 downto -32) := (others => '0');
  signal cntr1         : ufixed (4 downto 0) := (others => '0');
  constant div_latency : ufixed (4 downto 0) := "11111";

begin

  divider : component lpm_divider1
    port map (
      aclr     => '0',
      clock    => clock,
      denom    => to_slv(I_fp),
      numer    => to_slv(Q_fp),
      quotient => QdivI,
      remain   => QdivI_remain
    );

  main : process (clock)
  begin
    if (rising_edge(clock)) then
      I_ff <= I;
      Q_ff <= Q;
      case fsm1 is
        when s0 => if ((I_ff /= I) or (Q_ff /= Q)) then -- New data available
                     I_fp <= to_ufixed(unsigned(I),31,-32);
                     Q_fp <= to_ufixed(unsigned(Q),31,-32);
                     fsm1 <= s1;
                   end if;
        when s1 => if (cntr1 < div_latency) then -- Wait for the division operation to complete (Q/I)
                     cntr1 <= resize(cntr1 + 1, cntr1'high, cntr1'low);
                   else
                     cntr1 <= (others => '0');
                     fsm1  <= s0;
                   end if;
        when others => fsm1 <= s0;
      end case;
    end if;
  end process main;

end rtl;
You will always get a remainder, because you would otherwise need more bits to get an even smaller result in the quotient, which is not possible with lpm_divide (64 bits being the max).
You don't really need to use sfixed, but you still need to append all the '0's onto the end when connecting it to the lpm_divide.
And I notice you don't have any outputs from this block - are you going to add them later?
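Tricky's quotient/remainder point can be sketched numerically. This is plain Python used only to show the arithmetic (not synthesizable code, and the helper name frac_divide is made up here): an integer divider always splits the result into quotient and remainder, and pre-scaling the numerator by 2^N is what turns the quotient into a fixed-point fraction.

```python
# A plain integer divide always returns quotient + remainder:
q, r = divmod(7, 4)
print(q, r)                 # 1 3  (7 == 1*4 + 3)

# Pre-scaling the numerator by 2^N (the "appended zeros") makes the
# quotient carry N fractional bits, so the fraction is not lost.
FRAC_BITS = 32

def frac_divide(numer, denom):
    q = (numer << FRAC_BITS) // denom   # wide integer divide
    return q / (1 << FRAC_BITS)         # interpret quotient as a Q.N fraction

print(frac_divide(7, 4))    # 1.75
```

This is exactly why the 32-bit inputs are widened to 64 bits before going into the divider: the extra low bits of the quotient are the fractional result.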
Yes, the block is going to be more complicated. After the division I need to perform modulus pi/2 on the result and then use it with an LUT which is based on fractional numbers, that's why I need
the fractional result. How can I find it?
October 17th, 2012, 05:06 AM #2
Altera Scholar
Join Date
Jan 2012
Rep Power
Patent application title: SIGNAL ACQUISITION METHOD AND SIGNAL ACQUISITION APPARATUS
A signal acquisition method includes: performing a correlation operation for a received satellite signal, the satellite signal being transmitted from a positioning satellite; frequency-analyzing a
result of the correlation operation over a predetermined time which is equal to or longer than a bit length of navigation message data carried by the satellite signal; extracting a power value in
each frequency in which the power value satisfies a predetermined power condition, from a result of the frequency analysis; and acquiring the satellite signal using the extracted power value.
1. A signal acquisition method comprising: performing a correlation operation for a received satellite signal, the satellite signal being transmitted from a positioning satellite; frequency-analyzing a result of the correlation operation over a predetermined time which is equal to or longer than a bit length of navigation message data carried by the satellite signal; extracting a power value in each frequency in which the power value satisfies a predetermined power condition, from a result of the frequency analysis; and acquiring the satellite signal using the extracted power value.
2. The signal acquisition method according to claim 1, further comprising increasing the result of the correlation operation over the predetermined time by n times (n>1), wherein the frequency analysis is performed for the result of the correlation operation which is increased by n times.
3. The signal acquisition method according to claim 1, wherein the acquisition is performed considering the extracted power value as a power value at a zero frequency, in the satellite signal acquisition.
4. The signal acquisition method according to claim 1, wherein the satellite signal acquisition includes: performing an inverse frequency analysis; and acquiring the satellite signal using a result of the inverse frequency analysis.
5. The signal acquisition method according to claim 1, wherein the power value in the frequency in which the power value satisfies the power condition, among a specific frequency determined according to the bit length and harmonics of the specific frequency, is extracted, in the extraction.
6. The signal acquisition method according to claim 1, wherein the frequency analysis uses a Fourier transform.
7. The signal acquisition method according to claim 1, wherein the frequency analysis uses a wavelet transform.
8. A signal acquisition apparatus comprising: a correlation operation section which performs a correlation operation for a satellite signal which is transmitted from a positioning satellite and received by a receiving section; an analyzing section which frequency-analyzes a result of the correlation operation over a predetermined time which is equal to or longer than a bit length of navigation message data carried by the satellite signal; an extracting section which extracts a power value in each frequency in which the power value satisfies a predetermined power condition, from a result of the frequency analysis; and an acquiring section which acquires the satellite signal using the extracted power value.
BACKGROUND
1. Technical Field
[0001] The present invention relates to a signal acquisition method and a signal acquisition apparatus.
2. Related Art
A GPS (Global Positioning System) is widely known as a positioning system which uses a positioning signal, and is applied to a position calculation device built into a mobile phone, a car navigation
apparatus or the like. In the GPS, a position calculation is performed for calculating the position coordinates of the position calculation device and a time piece error on the basis of the
information including positions of a plurality of GPS satellites, a pseudo distance from each GPS satellite to the position calculation device, and the like.
A GPS satellite signal transmitted from the GPS satellite is modulated using spread codes called CA (Coarse and Acquisition) codes, which differ according to each GPS satellite. In order to acquire
the GPS satellite signal from weak received signals, the position calculation device performs a correlation operation of the received signals and replica CA codes which are replicas of the CA codes,
and acquires the GPS satellite signal on the basis of correlation values. In this case, in order to easily detect a peak of the correlation values, a technique is used in which the correlation values
obtained by the correlation operation are integrated over a predetermined integration time.
However, since the CA codes themselves which spread modulate the GPS satellite signal are BPSK (Binary Phase Shift Keying) modulated every 20 milliseconds by the navigation message data, polarity of
the CA codes may be inverted every 20 milliseconds, which is the bit length. Thus, in a case where the correlation values are integrated over the timing when the bit value of the navigation message
data is changed, there is a possibility that the correlation values having different signs are integrated. In order to solve this problem, a technique is known in which correlation values are
integrated using assistance data with respect to the timing when the bit value of the navigation message data is changed, as disclosed in JP-A-2001-349935, for example.
According to JP-A-2001-349935, a correlation integration time can be set longer than the bit length (20 milliseconds) of the navigation message data. However, in the technique disclosed in
JP-A-2001-349935, it is necessary to acquire the assistance data with respect to the timing when the bit value of the navigation message data is changed, from the outside, thereby causing
restrictions or problems related to data acquisition such as a problem of communication cost or communication time. In particular, after the navigation message data transmitted from the GPS satellite
signal is switched to new data, it is necessary to wait for update of the assistance data and to acquire the new assistance data.
SUMMARY
[0008] An advantage of some aspects of the invention is that it provides a new technique which is capable of performing a correlation process over a correlation integration time longer than the bit length of navigation message data.
According to a first aspect of the invention, there is provided a signal acquisition method including: performing a correlation operation for a received satellite signal, the satellite signal being
transmitted from a positioning satellite; frequency-analyzing a result of the correlation operation over a predetermined time which is equal to or longer than a bit length of navigation message data
carried by the satellite signal; extracting a power value in each frequency in which the power value satisfies a predetermined power condition, from a result of the frequency analysis; and acquiring
the satellite signal using the extracted power value.
According to another aspect of the invention, there may be provided a signal acquisition apparatus including: a correlation operation section which performs a correlation operation for a satellite
signal which is transmitted from a positioning satellite and received by a receiving section; an analyzing section which frequency-analyzes a result of the correlation operation over a predetermined
time which is equal to or longer than a bit length of navigation message data carried by the satellite signal; an extracting section which extracts a power value in each frequency in which the power
value satisfies a predetermined power condition, from a result of the frequency analysis; and an acquiring section which acquires the satellite signal using the extracted power value.
According to the above embodiments, the correlation operation is performed for the received satellite signal which is transmitted from the positioning satellite. Then, the result of the correlation
operation over the predetermined time which is equal to or longer than the bit length of the navigation message data carried by the satellite signal is frequency-analyzed, and the power value in each
frequency in which the power value satisfies a predetermined power condition is extracted from the result of the frequency analysis. Then, the satellite signal is acquired using the extracted power
If the correlation process is performed for the satellite signal by which the navigation message data is carried over an arbitrary time which is equal to or longer than the bit length of the
navigation message data, even though the satellite signal is acquired in accordance with the correct frequency, time-series data on the correlation values having a sign change is obtained. However,
when the frequency analysis is performed for the time series data on the correlation values, if the satellite signal can be acquired in accordance with a correct frequency, the peak of the power
value appears in the frequency determined according to the cycle of the sign change. Here, the size of the power value varies according to the cycle (frequency) of the sign change or its harmonics.
Thus, by using the power value in the frequency in which the power value satisfies the predetermined power condition, it is possible to perform the correlation process over a correlation integration
time which is longer than the bit length of the navigation message data, and to accurately perform the signal acquisition.
Further, according to a second aspect of the invention, the signal acquisition method according to the first embodiment may further include increasing the result of the correlation operation over the
predetermined time by n times (n>1), and the frequency analysis may be performed for the result of the correlation operation which is increased by n times.
According to the second embodiment, the result of the correlation operation is increased by increasing the result of the correlation operation over the predetermined time by n times. Then, the frequency analysis is performed for the result of the correlation operation which is increased by n times. Accordingly, it is possible to increase the power spectrum density obtained in the frequency analysis.
Further, according to a third aspect of the invention, in the signal acquisition method according to the first or second embodiment, the acquisition may be performed considering the extracted power
value as a power value at a zero frequency, in the satellite signal acquisition.
According to the third embodiment, the acquisition of the satellite signal is performed considering the extracted power value as the power value at the zero frequency. To consider the extracted power
value as the power value at the zero frequency means that the peak of the power value exists in the zero frequency as the result of the frequency analysis. That the peak of the power value exists in
the zero frequency corresponds to success in detection of the reception frequency of the satellite signal. Accordingly, according to the third embodiment, it is possible to easily determine the
success or failure in the signal acquisition.
Further, according to a fourth aspect of the invention, in the signal acquisition method according to any one of the first to third embodiments, the satellite signal acquisition may include:
performing an inverse frequency analysis; and acquiring the satellite signal using a result of the inverse frequency analysis.
Further, according to a fifth aspect of the invention, in the signal acquisition method according to the first to fourth embodiments, the power value in the frequency in which the power value
satisfies the power condition, among a specific frequency determined according to the bit length and harmonics of the specific frequency, may be extracted, in the extraction.
According to the fifth embodiment, the power value in the frequency in which the power value satisfies the power condition, among the specific frequency determined according to the bit length and the
harmonics of the specific frequency, is extracted. If the frequency analysis is performed for the correlation operation result, the peak of the power value generally appears in the specific frequency
determined according to the bit length of the navigation message data. Further, the peak of the power value generally appears in the frequency of the harmonics of the specific frequency, using the
specific frequency as a fundamental frequency. Thus, by extracting the power value using the specific frequency and the frequency of its harmonics as targets, it is possible to accurately acquire the
satellite signal while reducing the calculation amount.
Further, the frequency analysis in the first to fifth aspects may apply a Fourier transform as a sixth aspect of the invention, or may apply a wavelet transform as a seventh aspect of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The invention will be described with reference to the accompanying drawings, wherein like numbers reference like elements.
[0022] FIG. 1A is an example of a time change in correlation values, FIG. 1B is an example of a frequency analysis result, FIG. 1C is a diagram illustrating power value processing, and FIG. 1D is an example of a time change in reconfigured correlation values.
[0023] FIGS. 2A and 2B illustrate direct-current components of correlation values, and FIG. 2C illustrates specific frequency components of correlation values.
[0024] FIG. 3 is a block diagram illustrating an example of a functional configuration of a mobile phone.
[0025] FIG. 4 is a block diagram illustrating an example of a circuit configuration of a baseband processing circuit section.
[0026] FIG. 5 is a flowchart illustrating a work flow of baseband processing.
[0027] FIG. 6 is a flowchart illustrating a work flow of a first correlation process.
[0028] FIG. 7 is a flowchart illustrating a work flow of a second correlation process.
[0029] FIG. 8 is a flowchart illustrating a work flow of a third correlation process.
[0030] FIG. 9 is a flowchart illustrating a work flow of a fourth correlation process.
[0031] FIG. 10 is a diagram illustrating an example of a result of a correlation process in a phase direction and in a frequency direction in the related art.
[0032] FIG. 11 is a diagram illustrating an example of a result of a correlation process in a frequency direction in the related art.
[0033] FIG. 12 is a diagram illustrating an example of a result of a correlation process in a phase direction in the related art.
[0034] FIG. 13 is a diagram illustrating an example of a result of a correlation process in a phase direction and in a frequency direction according to a first embodiment.
[0035] FIG. 14 is a diagram illustrating an example of a result of a correlation process in a frequency direction according to the first embodiment.
[0036] FIG. 15 is a diagram illustrating an example of a result of a correlation process in a phase direction according to the first embodiment.
[0037] FIG. 16 is a flowchart illustrating a work flow of a fifth correlation process.
[0038] FIG. 17 is a flowchart illustrating a work flow of a sixth correlation process.
[0039] FIG. 18 is a diagram illustrating an example of a time-series change in correlation values in the related art.
[0040] FIG. 19 is a diagram illustrating an example of a time-series change in correlation values according to a second embodiment.
1. Principle
Firstly, the principle of a satellite signal acquisition according to the present embodiment will be described.
In a position calculation system using a GPS satellite, the GPS satellite which is a type of positioning satellite transmits navigation message data including satellite orbit data such as an almanac
or an ephemeris, through a GPS satellite signal which is a type of positioning satellite signal.
The GPS satellite signal is a communication signal of 1.57542 GHz modulated by CDMA (Code Division Multiple Access) which is known as a spectrum spread technique, using CA (Coarse and Acquisition)
codes which are a type of spread code. The CA codes are pseudo random noise codes in a repetitive cycle of 1 ms in which a code length of 1023 chips is set to one PN frame, which differ according to
each satellite.
The frequency (regulated carrier frequency) at the time when the GPS satellite transmits the GPS satellite signal is regulated in advance as 1.57542 GHz. However, due to the Doppler effect or the
like generated by the movement of the GPS satellite or a GPS receiver, the frequency at the time when the GPS receiver receives the GPS satellite signal does not necessarily coincide with the
regulated carrier frequency. Thus, the GPS receiver in the related art performs a frequency search which is a correlation operation in a frequency direction for acquiring the GPS satellite signal
from received signals to acquire the GPS satellite signal. Further, in order to specify the phase of the received GPS satellite signal (CA codes), the GPS receiver performs a phase search which is a
correlation operation in a phase direction to acquire the GPS satellite signal.
However, in particular, in a weak electric field environment such as an indoor environment, since the level of a correlation value in a true reception frequency and a true code phase is lowered, it
is difficult to distinguish it from noise. As a result, detection of the true reception frequency and the true code phase, that is, signal acquisition, becomes difficult. Thus, in such a reception
environment, a technique is used in which correlation values obtained by the correlation operation are integrated over a predetermined correlation integration time and a peak is detected from the
integrated correlation values to acquire the GPS satellite signal.
However, the GPS satellite signal is spread modulated by the CA codes, and the CA codes themselves are BPSK (Binary Phase Shift Keying) modulated according to a bit value of navigation message data.
Since the bit length of the navigation message data is 20 milliseconds, there is a possibility that the bit value is changed (inverted) every 20 milliseconds. The possibility means that the bit value
may not be changed. In this embodiment, the timing when the bit value of the navigation message data is actually changed is referred to as "bit inversion timing".
That the bit value of the navigation message data is changed means that polarity of the CA codes is inverted. Thus, if a correlation operation of the received CA codes and replica codes is performed,
correlation values having different signs can be calculated every 20 milliseconds which is the bit length of the navigation message data. Accordingly, if the correlation values are integrated over
the bit inversion timing of the navigation message data, correlation values having different signs are offset against each other, and thus, a problem where the correlation values become significantly
small (in an extreme case, 0) occurs. In order to solve this problem, the present inventor contrived a new technique of integrating the correlation values with same signs, using a frequency analysis
for the correlation values.
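The cancellation problem described above can be seen with a toy calculation (Python, purely illustrative, with one ideal correlation value per millisecond assumed): integrating "+1"/"-1" correlation values across a bit inversion offsets them to zero.

```python
# 20 ms of ideal correlation values before a bit inversion, 20 ms after.
before = [1] * 20    # one value per millisecond, navigation bit "1"
after = [-1] * 20    # bit value flips, so CA-code polarity inverts

# Integrating across the bit inversion timing cancels the correlation:
print(sum(before + after))   # 0
# Integrating within a single bit does not:
print(sum(before))           # 20
```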
FIGS. 1A to 1D and FIGS. 2A to 2C are diagrams illustrating a work flow of a correlation process according to the embodiment.
[0049] FIG. 1A illustrates an example of a time-series change in correlation values. For ease of description, the correlation values are expressed as two positive and negative values, for example, "+1" and "-1". Further, a case where a true value of a reception frequency is already known and a correlation operation is performed using replica codes which coincide with received CA codes in phase to calculate the correlation values, will be described hereinafter.
If the polarity of the CA code in a case where the bit value of the navigation message data is "1" is positive, the received CA code is multiplied by the replica code, and thus, the correlation value
"+1" is obtained. On the other hand, if the polarity of the CA code in a case where the bit value of the navigation message data is "0" is negative, the received CA code is multiplied by the replica
code, and thus, the correlation value "-1" is obtained.
Referring to FIG. 1A, it can be understood that the signs of the correlation values are switched at the bit inversion timing of the navigation message data. In a case where the bit inversion timing comes after 20 milliseconds from the previous bit inversion timing, according to the bit value, the sign of the correlation value is inverted at the timing after 20 milliseconds. Further, in a case where the bit inversion timing comes after 40 milliseconds, the sign of the correlation value is inverted at the timing after 40 milliseconds.
If the time-series correlation values shown in FIG. 1A are integrated over a predetermined time which is equal to or longer than the bit length and the frequency analysis is performed, a power spectrum as shown in FIG. 1B is obtained, for example. As described later in the embodiment, as the frequency analysis, for example, a Fourier transform or a wavelet transform can be used. In FIG. 1B, the transverse axis represents frequency, and the longitudinal axis represents a power value. For ease of description, white noise is not shown.
As shown in FIG. 1B, a peak of a power value appears in a zero frequency (0 Hz). This is the direct-current component of the time-series correlation values. That is, as shown in FIGS. 2A and 2B, frequency components (direct-current components) corresponding to a portion where the sign is not changed, among the time-series correlation values, appear as the peak of the power value at 0 Hz.
However, as shown in FIG. 1B, a peak having a large power value also appears in a frequency of 25 Hz. This is caused by the fact that the bit length of the navigation message data is 20 milliseconds. That is, as shown in FIG. 2C, in a case where the bit value of the navigation message data is changed every 20 milliseconds, for example, since the correlation values are changed to "1" in the initial 20 milliseconds, "-1" in the next 20 milliseconds, and "1" in the second next 20 milliseconds, the cycle of the correlation values becomes 40 milliseconds. The period of 40 milliseconds is a period corresponding to two times the bit length of the navigation message data. If the cycle of 40 milliseconds is converted into frequency, f = 1/T = 1/(40×10^-3) = 25 Hz. Frequency components of 25 Hz included in the time-series correlation values appear as the peak of the power value. In this embodiment, the frequency of 25 Hz is defined as a "specific frequency".
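The 25 Hz line can be reproduced with a rough numeric check (Python/NumPy, illustrative only; the 1 kHz correlation-value rate is an arbitrary assumption): a ±1 sequence whose sign flips every 20 milliseconds has a 40 millisecond period, so its strongest spectral line falls at 25 Hz.

```python
import numpy as np

fs = 1000                        # assume one correlation value per ms
n = np.arange(fs)                # 1 second of samples
bit = n // 20                    # navigation bit index (20 ms per bit)
corr = np.where(bit % 2 == 0, 1.0, -1.0)   # sign flips every bit

spec = np.abs(np.fft.rfft(corr))
freqs = np.fft.rfftfreq(corr.size, d=1.0 / fs)
print(freqs[np.argmax(spec)])    # 25.0 -- the specific frequency
```

The odd harmonics at 75 Hz, 125 Hz, and so on also appear in spec, and the DC bin is zero because this sequence flips at every bit; a sequence with unequal runs of "+1" and "-1" would put power at 0 Hz instead.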
Further, referring to FIG. 1B, it can be understood that small peaks, which are not as large as the peak at the specific frequency (25 Hz), appear in higher frequencies such as 75 Hz, 125 Hz and 175 Hz. A waveform of the correlation values is symmetric, and thus, the peak of the power value appears in the frequency of an odd multiple of the specific frequency which is a fundamental frequency, that is, in the odd-order harmonic frequencies.
Further, although not shown here, a peak of the power value may also appear in a frequency which is lower than 25 Hz which is the specific frequency. This occurs due to the fact that the bit
inversion timing of the navigation message data does not necessarily become timing every 20 milliseconds. That is, in a case where the bit inversion timing of the navigation message data comes in a
period which is longer than the 20 milliseconds, the cycle of the correlation values corresponding to the period does not become 40 milliseconds, and becomes the period longer than 40 milliseconds.
If the cycle becomes longer than 40 milliseconds, the frequency becomes lower than 25 Hz. For example, in a case where the bit inversion timing of the navigation message data comes in the period of
40 milliseconds, the cycle of the correlation values corresponding to the period becomes 80 milliseconds and its frequency becomes 12.5 Hz.
The peak of the power values is caused by the fact that the polarity of the CA code is inverted at the bit inversion timing of the navigation message data and the sign of the correlation value is
changed. For example, in a case where the frequency analysis is performed by setting the integration time of the correlation values to be shorter than the 20-millisecond bit length so as not to straddle the bit inversion timing, the peak appears only at the zero frequency and no peak appears at any other frequency. That is, if the sign of the correlation value is not changed, the
peak of the power value does not appear at any frequency other than the zero frequency. In other words, if the peaks of the power values at frequencies other than the zero frequency can be eliminated, it is possible to ignore the sign change of the correlation value and, further, to negate the effect of the bit inversion of the navigation message data.
Thus, in the present embodiment, a process of extracting a power value of each frequency which satisfies a predetermined power condition from among power values obtained by performing the frequency
analysis for the time-series correlation values and of moving and adding the extracted power value to a power value in the zero frequency, is performed. For example, by setting a threshold value "θ"
for the power value and using a condition (hereinafter, referred to as a "high power condition") which exceeds the threshold value "θ", the power value of each frequency which satisfies the high
power condition is extracted. Then, a process of adding the extracted power value to the power value in the zero frequency (0 Hz) and setting the extracted power value to "0" is performed. The
threshold value of the power value may be appropriately set according to what magnitude of power value is to be regarded as noise.
The power condition is not limited to the condition using the above-described threshold value determination, and may be appropriately changed. For example, the power condition may be determined as a
condition that a predetermined number or a predetermined ratio of the power values are sequentially selected in descending order of power value, from among the power values in the respective frequencies.
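The number/ratio-based condition can be sketched as follows; the function name and the sample values are ours, not from the embodiment.

```python
import numpy as np

def select_high_power_bins(power, num=2):
    """Pick the `num` largest power values, excluding the zero-frequency
    bin, in descending order of power value."""
    order = np.argsort(power[1:])[::-1] + 1   # skip bin 0, restore indices
    return order[:num]

power = np.array([50.0, 5.0, 120.0, 8.0, 90.0, 3.0])
print(select_high_power_bins(power))   # bins 2 and 4
```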
For example, in
FIG. 1C
, the power value in the specific frequency "25 Hz" and the power value in a harmonic frequency "75 Hz" exceed the threshold value "θ". Thus, the power values in the specific frequency "25 Hz" and in
the harmonic frequency "75 Hz" move to the power value in the zero frequency. That is, the power values are added to the power value in the zero frequency and the power value is set to "0".
After the above-described process is performed, the time-series correlation values are reconfigured using an inverse frequency analysis. Then, as shown in
FIG. 1D
, the time-series correlation values with no sign change are obtained. If the correlation values with no sign change are integrated, the problem that the correlation values are offset against each
other and thus that the integration correlation value becomes small does not occur. Accordingly, the above-described correlation process is performed, and thus, it is possible to set the correlation
integration time to be longer than 20 milliseconds which is the bit length of the navigation message data, and to integrate the correlation values for an arbitrary correlation integration time.
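The whole chain above — frequency analysis, moving the above-threshold power to the zero frequency, inverse frequency analysis, then integration — can be sketched as follows. This is our own simplified illustration: it moves spectral magnitudes rather than the embodiment's exact power arithmetic, and the threshold of 100 is arbitrary.

```python
import numpy as np

t = np.arange(200)                                # 1 ms per sample
corr = np.where((t // 20) % 2 == 0, 1.0, -1.0)    # sign flips every 20 ms

spec = np.fft.fft(corr)
power = np.abs(spec) ** 2

hot = power > 100.0                               # assumed threshold
hot[0] = False                                    # leave the DC bin alone
spec[0] += np.sum(np.abs(spec[hot]))              # move magnitude to 0 Hz
spec[hot] = 0.0                                   # zero the source bins

recon = np.fft.ifft(spec).real                    # reconfigured values
print(np.sum(corr), np.sum(recon))                # 0.0 vs a large positive sum
```

Plain integration of `corr` cancels to zero because of the sign changes, while the reconfigured values integrate to a large positive value, which is the point of the process.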
2. Embodiments
Next, embodiments in a case where the invention is applied to a mobile phone which is a type of electronic device including a satellite signal acquisition device and a position calculation device
will be described. It is obvious that embodiments to which the invention can be applied are not limited to the following embodiments.
[0064] FIG. 3
is a block diagram illustrating an example of a functional configuration of a mobile phone 1 in each embodiment. The mobile phone 1 includes a GPS antenna 5, a GPS receiving section 10, a host CPU
(Central Processing Unit) 30, a manipulation section 40, a display section 50, a mobile phone antenna 60, a mobile phone wireless communication circuit section 70, and a storing section 80.
The GPS antenna 5 receives an RF (Radio Frequency) signal including a GPS satellite signal transmitted from the GPS satellite, and outputs the received signal to the GPS receiving section 10.
The GPS receiving section 10 is a position calculation circuit or a position calculation device which calculates the position of the mobile phone 1 on the basis of the signal output from the GPS
antenna 5, which is a functional block corresponding to a so-called GPS receiver. The GPS receiving section 10 includes an RF receiving circuit section 11 and a baseband processing circuit section
20. The RF receiving circuit section 11 and the baseband processing circuit section 20 may be manufactured as different LSIs (Large Scale Integration) or as one chip.
The RF receiving circuit section 11 is a circuit which receives an RF signal. For example, a receiving circuit which converts the RF signal output from the GPS antenna 5 into a digital signal by an A
/D converter and processes the digital signal may be used as the circuit configuration. Further, a configuration may be used in which the RF signal output from the GPS antenna 5 is processed as an
analog signal as it is and is finally A/D converted, and then the digital signal is output to the baseband processing circuit section 20.
In the latter case, for example, it is possible to configure the RF receiving circuit section 11 as follows. That is, a predetermined oscillation signal is frequency-divided or frequency-multiplied,
to generate an oscillation signal for multiplication with the RF signal. Then, the RF signal output from the GPS antenna 5 is multiplied by the generated oscillation signal to be down-converted into a signal
of an intermediate frequency (hereinafter, referred to as an IF (Intermediate Frequency) signal). Then, the IF signal undergoes amplification and the like, is converted into a digital signal by the A
/D converter, and then is output to the baseband processing circuit section 20.
The baseband processing circuit section 20 performs a correlation process or the like for the received signal output from the RF receiving circuit 11 to acquire the GPS satellite signal, and performs
a predetermined position calculation on the basis of satellite orbit data, time data and the like extracted from the GPS satellite signal to calculate the position (position coordinates) of the
mobile phone 1. The baseband processing circuit section 20 functions as the satellite signal acquisition device which acquires the GPS satellite signal from the received signals.
[0070] FIG. 4
is a diagram illustrating an example of a circuit configuration of the baseband processing circuit section 20, which mainly illustrates a circuit block according to this embodiment. For example, the
baseband processing circuit section 20 includes a multiplier 21, a carrier removal signal generating section 22, a correlator 23, a replica code generating section 24, a processing section 25, and a
storing section 27.
The multiplier 21 removes a carrier from the received signal by multiplying the received signal by a carrier removal signal generated by the carrier removal signal generating section 22, and outputs
the result to the correlator 23.
The carrier removal signal generating section 22 generates a carrier removal signal which is a signal of the same frequency as the carrier signal of the GPS satellite signal, and includes an
oscillator such as a carrier NCO (Numerical Controlled Oscillator) or the like, for example. In a case where the signal output from the RF receiving circuit section 11 is the IF signal, the signal is
generated using an IF frequency as a carrier frequency. The carrier removal signal generating section 22 is a circuit which generates the carrier removal signal of the same frequency as the frequency
of the signal output from the RF receiving circuit section 11.
The correlator 23 performs a correlation operation of a replica code generated by the replica code generating section 24 and a received CA code output from the multiplier 21 from which the carrier is
removed, which corresponds to a correlation operation section.
The replica code generating section 24 is a circuit section which generates the replica codes of the CA codes which are the spread codes of the GPS satellite signal, and for example, includes an
oscillator such as a code NCO or the like. The replica code generating section 24 generates the replica codes according to a PRN number (satellite number) instructed from the processing section 25,
by adjusting the output phase (time) according to an instructed phase, and outputs the generated replica codes to the correlator 23.
The correlator 23 performs the correlation process of the respective "I" and "Q" components of the received signal and the replica codes input from the replica code generating section 24. The "I" component represents the in-phase component (real part) of the received signal and the "Q" component represents the quadrature component (imaginary part) of the received signal.
A circuit block which performs separation of the I and Q components (IQ separation) of the received signal is not shown, and may be configured in a variety of ways. For example, when the received signal is down-converted into the IF signal in the RF receiving circuit section 11, the IQ separation may be performed by multiplying the received signal by local oscillation signals whose phases differ from each other by 90 degrees.
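The 90-degree method can be illustrated numerically; all frequencies and the carrier phase below are made-up values. Multiplying the real IF signal by a cosine and a sine of the same frequency and averaging over an integer number of cycles recovers the I and Q components.

```python
import numpy as np

fs, f_if = 1_000_000, 50_000          # sample rate and IF frequency (made up)
n = np.arange(1000)                   # 1 ms of samples = 50 IF cycles
phase = 0.7                           # carrier phase to be recovered
sig = np.cos(2 * np.pi * f_if * n / fs + phase)   # real IF signal

lo = 2 * np.pi * f_if * n / fs        # local-oscillator argument
i = 2 * np.mean(sig * np.cos(lo))     # in-phase (real) component
q = -2 * np.mean(sig * np.sin(lo))    # quadrature (imaginary) component
print(np.arctan2(q, i))               # recovers the 0.7 rad phase
```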
The processing section 25 is a control device which controls respective functional sections of the baseband processing circuit section 20 overall, and includes a processor such as a CPU, for example.
The processing section 25 functions as an analysis section which frequency-analyzes the result of the correlation operation output from the correlator 23, as an extracting section which extracts the
power value exceeding the predetermined threshold value among the power values in the respective frequencies obtained as the frequency analysis result, and as an acquiring section which acquires the
GPS satellite signal from the received signal. As main functional sections, the processing section 25 includes a satellite signal acquiring section 251 and a position calculating section 253.
The satellite signal acquiring section 251 performs a process of integrating the correlation values output from the correlator 23 over the correlation integration time, and acquires the GPS satellite
signal on the basis of the integrated correlation values (integration correlation value).
The position calculating section 253 is a calculating section which calculates the position of the mobile phone 1 by performing a known position calculation using the GPS satellite signal acquired by the satellite signal acquiring section 251, and outputs the calculated position to the host CPU 30.
The storing section 27 includes storage devices (memory) such as a ROM (Read Only Memory), a flash ROM, a RAM (Random Access Memory), and stores a system program of the baseband processing circuit
section 20, or various programs, data or the like for realizing a variety of functions such as a satellite signal acquisition function, a position calculation function or the like. Further, the
storing section 27 includes a work area in which data being processed in a variety of processes, processed results, and the like are temporarily stored.
For example, as shown in
FIG. 4
, a baseband processing program 271 which is read out by the processing section 25 as a program and is executed as a baseband processing (see
FIG. 5
) is stored in the storing section 27. The baseband processing program 271 includes a correlation processing program 2711 executed as a variety of correlation processes (see FIGS. 6 to 9, FIGS. 16
and 17) as a sub-routine.
Further, as the temporarily stored data, for example, satellite orbit data 272, a correlation integration time 273, correlation value data 275, increased correlation value data 276, integration
correlation value data 277, and a threshold value 278 are stored in the storing section 27.
The baseband processing is a process in which the processing section 25 performs a variety of correlation processes with respect to each GPS satellite which is an acquisition target (hereinafter,
referred to as an "acquisition target satellite"), performs a process of acquiring the GPS satellite signal, and performs the position calculation using the acquired GPS satellite signal, to thereby
calculate the position of the mobile phone 1.
Further, the correlation process is a process in which the processing section 25 performs the frequency analysis for the time-series correlation values according to the above-described principle, and
reconfigures the time-series correlation values by the inverse frequency analysis by considering the power value exceeding the predetermined threshold value as the power value in the zero frequency.
Then, the integration correlation value is calculated by integrating the reconfigured time-series correlation values. These processes will be described in detail with reference to the flowcharts described later.
The satellite orbit data 272 is data such as an almanac in which schematic satellite orbit information about all GPS satellites is stored, an ephemeris in which detailed satellite orbit information
about each GPS satellite is stored, or the like. The satellite orbit data 272 is obtained by decoding the GPS satellite signal received from the GPS satellite, and for example, is obtained as
assistance data from a base station of the mobile phone 1 or an assistance server.
The correlation integration time 273 is the time when the correlation values are integrated, and is variably set on the basis of information on the signal strength of the received signal, a reception
environment or the like.
The correlation value data 275 is data in which the correlation values output from the correlator 23 are accumulated over the correlation integration time 273. Further, the increased correlation
value data 276 is data on the increased correlation values obtained by increasing the correlation values corresponding to the correlation integration time by n times (n>1). In this embodiment, in
order to increase the power spectrum density in each frequency obtained in the frequency analysis and to enhance accuracy of the frequency analysis, the frequency analysis is performed for the
increased correlation value data 276.
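The text does not spell out how the correlation values are "increased by n times"; one plausible reading, used in this sketch of ours, is repeating the accumulated time series n times before the frequency analysis, which multiplies the number of samples and therefore divides the frequency-bin spacing by n.

```python
import numpy as np

corr = np.where((np.arange(40) // 20) % 2 == 0, 1.0, -1.0)  # one 40 ms cycle
n = 4                                  # assumed increase factor (n > 1)
increased = np.tile(corr, n)           # the "increased correlation values"

df_before = 1000.0 / len(corr)         # FFT bin spacing at 1 kHz sampling, Hz
df_after = 1000.0 / len(increased)
print(df_before, df_after)             # 25.0 6.25
```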
The integration correlation value data 277 is data on the integration correlation value obtained by integrating the correlation values reconfigured by the inverse frequency analysis.
The threshold value 278 is a threshold value for threshold determination of the power value in each frequency obtained by performing the frequency analysis for the increased correlation value data
276, and is set to a fixed value, for example.
Returning to the functional block in
FIG. 3
, the host CPU 30 is a processor which generally controls the respective sections of the mobile phone 1 according to a variety of programs such as a system program stored in the storing section 80.
The host CPU 30 displays a map which represents a current position on the display section 50 on the basis of the position coordinates output from the baseband processing circuit section 20, or uses
the position coordinates for various application processes.
The manipulation section 40 is an input device including, for example, a touch panel, a button switch or the like, and outputs a signal of a pressed key or button to the host CPU 30. Through the
manipulation of the manipulation section 40, a variety of instructions such as a call request, a mail transmission/reception request, a position calculation request or the like are input.
The display section 50 includes an LCD (Liquid Crystal Display) or the like, and is a display device which performs various displays based on a display signal input from the host CPU 30. A position
display screen, time information or the like is displayed on the display section 50.
The mobile phone antenna 60 is an antenna which performs transmission and reception of wireless signals for a mobile phone through a wireless base station installed by a communication service
provider of the mobile phone 1.
The mobile phone wireless communication circuit section 70 is a communication circuit section of the mobile phone including an RF conversion circuit, a baseband processing circuit or the like, and
realizes communication or mail transmission/reception by performing modulation and demodulation or the like for the mobile phone wireless signal.
The storing section 80 is a storage device which stores a system program by which the host CPU 30 controls the mobile phone 1, or various programs, data or the like for performing various application processes.
2-1. First Embodiment
In the first embodiment, the correlation process using the Fourier transform, which is a type of the frequency analysis, is performed, and the GPS satellite signal is acquired on the basis of the
integration correlation value obtained by integrating the reconfigured correlation values.
(1) Process Flow
[0097] FIG. 5
is a flowchart illustrating a work flow of baseband processing performed in the baseband processing circuit section 20, as the baseband processing program 271 stored in the storing section 27 is read
out by the processing section 25.
Firstly, the satellite signal acquiring section 251 performs an acquisition target satellite determination process (step A1). Specifically, at a current time measured by a timepiece (not shown), the
satellite signal acquiring section 251 determines a GPS satellite positioned in a predetermined reference position in the sky using the satellite orbit data 272, such as an almanac or an ephemeris
stored in the storing section 27, as the acquisition target satellite. For example, in a case of the first position calculation after power supply, the reference position may be set to a position
obtained from the assistance server using so-called server assistance. Further, in a case of the second position calculation and thereafter, the reference position may be set to the latest
calculation position.
Then, the satellite signal acquiring section 251 performs a process of a loop A with respect to each acquisition target satellite determined in step A1 (steps A3 to A17). In the process of the loop
A, the satellite signal acquiring section 251 sets the correlation integration time 273 with respect to the acquisition target satellite (step A5).
The setting of the correlation integration time may be realized by various methods. For example, the setting may be performed on the basis of the signal strength of the received signal from the
acquisition target satellite. As the signal strength becomes weaker, it becomes more difficult to detect the peak of the correlation values unless the correlation values are integrated over a longer time. Thus, the correlation integration time may preferably be set to increase as the signal strength becomes weaker.
Further, the reception environment of the GPS satellite signal may be determined, and then the correlation integration time may be determined on the basis of the determined reception environment. For
example, in a case where the reception environment is an "indoor environment", the correlation integration time may be set to a long "1000 milliseconds", and in a case where the reception environment
is an "outdoor environment", the correlation integration time may be set to a short "200 milliseconds".
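A minimal sketch of this part of step A5 (the function name is ours; the two values come from the text):

```python
def integration_time_ms(environment: str) -> int:
    """Return the correlation integration time for the determined
    reception environment, per the example values in the text."""
    return 1000 if environment == "indoor" else 200

print(integration_time_ms("indoor"), integration_time_ms("outdoor"))   # 1000 200
```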
Subsequently, the satellite signal acquiring section 251 sets an initial phase of the replica code (step A7). Then, the satellite signal acquiring section 251 outputs an instruction signal which
instructs a PRN number of the acquisition target satellite and a phase of the replica code to the replica code generating section 24 (step A9). Further, the satellite signal acquiring section 251
performs the correlation process by reading out the correlation processing program 2711 stored in the storing section 27 (step A11).
[0103] FIG. 6
is a flowchart illustrating a work flow of a first correlation process which is an example of the correlation process.
Firstly, the satellite signal acquiring section 251 sets a predetermined value as the threshold value 278 of the power value and stores it in the storing section 27 (step B1).
Then, the satellite signal acquiring section 251 accumulates the correlation values output from the correlator 23 over the correlation integration time set in step A5 and then stores its time-series
data in the storing section 27 as the correlation value data 275 (step B3). Then, the satellite signal acquiring section 251 calculates the increased correlation values by increasing the correlation
values corresponding to the accumulation time by n times (n>1), and stores them in the storing section 27 as the increased correlation value data 276 (step B5).
Next, the satellite signal acquiring section 251 performs an FFT (Fast Fourier Transform) process with respect to the increased correlation value data 276 (step B7). Since the process relating to the
FFT is already known in the related art, detailed description thereof will be omitted.
If the power spectrum in a frequency area is calculated through the FFT process, the satellite signal acquiring section 251 extracts the power value in each frequency exceeding the threshold value
278 set in step B1, among the power values in the respective frequencies, and then adds it to the power value in the zero frequency (0 Hz) (step B9). Further, the satellite signal acquiring section
251 sets the extracted power value to "0" (step B11).
Then, the satellite signal acquiring section 251 performs an IFFT (Inverse Fast Fourier Transform) process to reconfigure the correlation values (step B13). Since the inverse fast Fourier transform
process is also already known in the related art, detailed description thereof will be omitted.
If the correlation values are reconfigured through the IFFT process, the satellite signal acquiring section 251 integrates the reconfigured correlation values corresponding to the correlation
integration time, and stores them in the storing section 27 as the integration correlation value data 277 (step B15). Then, the satellite signal acquiring section 251 terminates the first correlation process.
Returning to the baseband processing in
FIG. 5
, after performing the correlation process, the satellite signal acquiring section 251 performs the peak detection for the integration correlation value data 277 in the storing section 27 (step A13).
If it is determined that no peak is detected (step A13; No), the phase of the replica code is changed (step A15), and then the procedure returns to step A9.
Further, if it is determined that a peak is detected (step A13; Yes), the satellite signal acquiring section 251 shifts the process to the next acquisition target satellite. Then, after performing
the processes of steps A5 to A15 with respect to all the acquisition target satellites, the satellite signal acquiring section 251 terminates the process of the loop A (step A17).
Then, the position calculating section 253 performs the position calculation using the GPS satellite signal acquired with respect to each acquisition target satellite (step A19). The position
calculation may be realized by performing a known convergence operation, for example, using the least-square method or the Kalman filter, on the basis of a pseudo distance between the mobile phone 1
and each acquisition satellite.
The pseudo distance can be calculated as follows. That is, an integer part of the pseudo distance is calculated using the satellite position of the acquisition satellite calculated from the satellite
orbit data 272 and the latest calculation position of the mobile phone 1. Further, a fractional part of the pseudo distance is calculated using the phase (code phase) of the replica code
corresponding to the peak of the correlation values detected in step A13. The pseudo distance can be calculated by summing the integer part and the fractional part which are calculated in this way.
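The integer/fractional assembly can be sketched numerically; all ranges below are illustrative. The C/A code repeats every 1 millisecond, about 299.79 km, so the detected code phase is ambiguous by whole code lengths, and the geometric range from the satellite orbit data resolves that ambiguity.

```python
C = 299_792_458.0                 # speed of light, m/s
CODE_LEN_M = C * 1e-3             # one 1 ms C/A code length, ~299.79 km

geometric_range = 21_456_789.0    # metres, from orbit data + last position
code_phase_frac = 0.37            # detected code phase, in code lengths

# Integer part: the number of whole code lengths chosen so that adding the
# fractional part lands nearest the geometric range.
integer_part = round((geometric_range - code_phase_frac * CODE_LEN_M) / CODE_LEN_M)
pseudorange = (integer_part + code_phase_frac) * CODE_LEN_M
print(pseudorange)
```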
Subsequently, the position calculating section 253 outputs the calculated position (position coordinates) to the host CPU 30 (step A21). Then, the processing section 25 determines whether the process
is terminated (step A23). If it is determined that the process is not yet terminated (step A23; No), the procedure returns to step A1. Further, if it is determined that the process is terminated
(step A23; Yes), the baseband processing is terminated.
(2) Experimental Result
An experimental result in a case where the GPS satellite signal is acquired will be described with reference to FIGS. 10 to 15. FIGS. 10 to 12 illustrate an example of an experimental result in the
case where the GPS satellite signal is acquired according to a signal acquisition method in the related art. With respect to each of a frequency direction and a phase direction, an experiment has
been performed in which the integration correlation value is calculated by integrating the correlation values for one second to thereby detect its peak.
[0116] FIG. 10
is a graph illustrating the integration correlation value in the phase direction and the frequency direction, in a three-dimensional manner. In
FIG. 10
, a right depth direction represents a phase difference between a received CA code phase and a replica code phase, and a left depth direction represents a frequency difference between a received
signal frequency and a carrier removal signal frequency. Further, the longitudinal axis represents the integration correlation value.
FIG. 11
is a graph illustrating the correlation process result extracted in the frequency direction in
FIG. 10
, and FIG. 12 is a graph illustrating the correlation process result extracted in the phase direction in
FIG. 10.
Referring to
FIG. 12
, it can be understood that a peak of the integration correlation value appears in a portion of a phase difference "0" with respect to the correlation process result in the phase direction and a
correct result is obtained. However, referring to
FIG. 11
, it can be understood that the peak of the integration correlation value does not appear in the portion of the frequency difference "0 Hz" and peaks appear in frequency differences slightly spaced
in the left and right directions from "0 Hz", with respect to the correlation process result in the frequency direction. Investigation of the frequency differences at which the peaks appear showed that they correspond to "±25 Hz", that is, to the specific frequency. The fact that no peak appears at the frequency difference "0 Hz" means that the acquisition of the GPS satellite signal fails.
FIGS. 13 to 15 illustrate an example of the experimental result in a case where the GPS satellite signal is acquired according to the signal acquisition method in the first embodiment. With respect
to each of the frequency direction and the phase direction, the experiment has been performed in which the above-described first correlation process is performed over the correlation integration time
of "500 milliseconds" to calculate the integration correlation value and to detect its peak. Here, an experimental result when the threshold value of the power value is "100" is shown.
[0119] FIG. 13
is a graph illustrating the integration correlation value in the phase direction and the frequency direction, in a three-dimensional manner. Further,
FIG. 14
is a graph illustrating the correlation integration result extracted in the frequency direction in
FIG. 13
, and
FIG. 15
is a graph illustrating the correlation integration result extracted in the phase direction in
FIG. 13
. The interpretation of the graphs is the same as in FIGS. 10 to 12.
Referring to
FIG. 15
, it can be understood that a peak of the integration correlation value appears in a portion of the phase difference "0", with respect to the correlation process result in the phase direction, and a
correct result is obtained. Further, referring to
FIG. 14
, it can be understood that a peak of the integration correlation value appears in a portion of the frequency difference "0 Hz", with respect to the correlation process result in the frequency
direction. The correct result is obtained in both the phase and the frequency, which means that the acquisition of the GPS satellite signal is successful.
(3) Effects
In the baseband processing circuit section 20, the correlation operation is performed in the correlator 23, with respect to the received GPS satellite signal which is transmitted from the GPS
satellite. Then, with respect to the correlation operation result over the correlation integration time which is equal to or longer than the bit length (20 milliseconds) of the navigation message
data which is carried in the GPS satellite signal, the frequency analysis based on the Fourier transform is performed by the processing section 25. Then, the process of extracting the power value of
each frequency exceeding the predetermined threshold value, among the power values obtained in the frequency analysis and moving it to the power value in the zero frequency is performed by the
processing section 25. Then, after the correlation values are reconfigured by the IFFT process, the reconfigured correlation values corresponding to the correlation integration time are integrated
and the peak of the corresponding integration correlation value is detected, and thus, the GPS satellite signal is acquired.
If the bit value of the navigation message data is changed (inverted), the polarity of the CA code is also inverted. Thus, in a case where the correlation process is performed over an arbitrary time
which is equal to or longer than the bit length of the navigation message data, even though the GPS satellite signal is acquired in accordance with a correct frequency, a sign change appears in the
time-series data on the correlation values. Thus, if the Fourier transform is performed for the time-series data on the correlation values, as described in the principle, the peak of the power value
appears in a plurality of frequencies such as the specific frequency (25 Hz), the frequency of harmonics of the specific frequency, a low frequency which is lower than the specific frequency, or the
like. Here, the size of the power value varies according to the cycle (frequency) of the sign change or its harmonics.
The peak of the power values is caused by the change (inversion) in the bit value of the navigation message data. Thus, a process of extracting the power value in each frequency exceeding the
predetermined threshold value among the power values in the respective frequencies obtained by the Fourier transform and moving it to the power value in the zero frequency is performed. After
performing such a process, the correlation values are reconfigured by the inverse Fourier transform, and thus it is possible to obtain time-series data on correlation values with the same sign. If correlation values with the same sign are integrated, correlation values having different signs are not offset against each other. Accordingly, the correlation process over the correlation
integration time longer than the bit length (20 milliseconds) of the navigation message data can be realized.
Further, in this embodiment, the correlation values accumulated over the correlation integration time are increased by n times to calculate the increased correlation values. Further, the Fourier
transform is performed for the time-series data on the increased correlation values, thereby making it possible to increase the power spectrum density and to enhance the accuracy of the frequency analysis.
As can be understood from the above-described experimental result, in a case where the acquisition of the GPS satellite signal fails, the peak of the integration correlation value appears in the
frequency difference corresponding to the specific frequency, and in a case where the acquisition succeeds, the peak of the integration correlation value appears in the zero frequency difference.
Hence, the power value in each frequency which satisfies the high power condition moves to the power value in the frequency zero, which means that the reception frequency of the GPS satellite signal
is detected.
(4) Other Correlation Processes
The first correlation process described with reference to FIG. 6 is an example of the correlation process, and the invention is not limited thereto. Examples of other correlation processes will be described with reference to flowcharts. In the following flowcharts, the same reference numerals are given to the same steps as in the first correlation process, and thus, description thereof will be omitted. Further, different steps from the first correlation process will be mainly described.
FIG. 7 is a flowchart illustrating a work flow of a second correlation process which is an example of other correlation processes. In the second correlation process, after the FFT process in step B7
is performed, the power value exceeding the threshold value is extracted with respect to the specific frequency (25 Hz) and the frequency of its harmonics (frequency of an odd multiple of 25 Hz) and
is added to the power value in the zero frequency (step C9). Subsequent processes are the same as the first correlation process.
In a case where the frequency analysis is performed, peaks of large power values mainly appear at the specific frequency and at the frequencies of its harmonics (odd multiples of the specific frequency). Thus, in the second correlation process, only the specific frequency and the frequencies of its harmonics are used as the target, and in a case where the power value there exceeds the threshold value, it is moved to the power value in the zero frequency. Accordingly, as compared with a case where the process is performed using a wide range of frequencies as the target, the calculation amount can be reduced, while correlation values suitable for the acquisition of the GPS satellite signal can still be obtained.
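Under the same toy setup as before (1 kHz sampling for 1 second, so FFT bin k maps to k Hz — an assumption of this sketch, not something the patent fixes), the restricted scan can be written as:

```python
import numpy as np

# Toy correlation values with a 25 Hz sign flip, as in the earlier sketch.
n = 1000
x = 5.0 * np.where((np.arange(n) // 20) % 2 == 0, 1.0, -1.0)

F = np.fft.fft(x)
power = np.abs(F) ** 2 / n
threshold = 100.0

# Second correlation process: inspect only the specific frequency (25 Hz)
# and its odd harmonics instead of scanning every bin.
odd_harmonics = [25 * m for m in range(1, n // 50, 2)]   # 25, 75, 125, ...
candidates = [k for k in odd_harmonics if power[k] > threshold]

# For comparison: a full scan of all positive-frequency bins.
full_scan = [k for k in range(1, n // 2) if power[k] > threshold]
```

For this signal `candidates` and `full_scan` find exactly the same peaks, while the restricted scan inspects far fewer bins.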
[0129] FIG. 8 is a flowchart illustrating a work flow of a third correlation process which is an example of other correlation processes. In the third correlation process, the satellite signal acquiring section 251
extracts the power value exceeding the threshold value in step B9, adds it to the power value in the zero frequency and then stores the power value in the zero frequency in the storing section 27 as
the correlation process result, without performing the IFFT process (step D11).
In the next process, the satellite signal acquiring section 251 considers the power value in the zero frequency as the correlation process result and performs the acquisition of the GPS satellite signal. In the third correlation process, since a power value other than the power value in the zero frequency is not required, unlike in the first correlation process, the step in which the extracted power value is set to "0" (step B11 in FIG. 6) is omitted.
The reason why the power value in the zero frequency can be considered as the correlation process result will now be described. When performing the Fourier transform using a calculator (computer), a discrete Fourier transform is generally used. The discrete Fourier transform for the correlation values is formulated by the following formula (1).
$f_j = \sum_{k=0}^{n-1} x_k\, e^{-\frac{2\pi i}{n}jk}, \qquad j = 0, 1, 2, \ldots, n-1$ (1)
In the formula (1), "x_k" represents a correlation value, and the suffix "k" represents the number of the sampled correlation value. Further, "f_j" represents a frequency component, and the suffix "j = 0, 1, 2, . . . , n-1" represents the number of the sampled frequency.
In this case, a power value "Power_j" for a j-th frequency is given according to the following formula (2).

$\mathrm{Power}_j = \frac{|f_j|^2}{n}$ (2)
Further, the inverse Fourier transform from the frequencies back to the correlation values is formulated by the following formula (3).

$x_k = \frac{1}{n}\sum_{j=0}^{n-1} f_j\, e^{\frac{2\pi i}{n}jk}, \qquad k = 0, 1, 2, \ldots, n-1$ (3)
In step B9 of the third correlation process, in a case where the power value in the zero frequency obtained by adding the power values exceeding the threshold value to the power value in the zero frequency (hereinafter referred to as the "combination zero frequency power value") is expressed as "Power'_0", if the inverse Fourier transform is performed in consideration of only the direct-current component, the following formula (4) is obtained.
$x_k = \frac{1}{n} f_0 = \frac{\sqrt{\mathrm{Power}'_0 \times n}}{n} = \sqrt{\frac{\mathrm{Power}'_0}{n}}$ (4)
Here, when the formula (4) is obtained, the fact that the following formula (5) is established from the formula (2) is used.

$f_0 = \sqrt{\mathrm{Power}'_0 \times n}$ (5)
In a case where the correlation integration time is "T", the correlation values "x_k" which are reconfigured in the inverse Fourier transform are integrated over the correlation integration time "T", to thereby obtain an integration correlation value "X" shown in the following formula (6).

$X = T \times \sqrt{\frac{\mathrm{Power}'_0}{n}}$ (6)
Referring to the formula (6), it can be understood that the integration correlation value "X" corresponding to the correlation integration time depends on the correlation integration time "T", the combination zero frequency power value "Power'_0" and the total number of samplings "n". Here, the correlation integration time "T" and the total number of samplings "n" are constants. Accordingly, the integration correlation value "X" is obtained by multiplying the square root of the combination zero frequency power value "Power'_0" by a constant factor determined by "T" and "n". Thus, it can be said that the combination zero frequency power value is equivalent to the reconfigured correlation values. Consequently, the GPS satellite signal can be acquired using the combination zero frequency power value itself, without performing the inverse Fourier transform.
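This equivalence can be checked numerically. In the sketch below (the toy signal and the threshold are our assumptions), route A folds the power into DC and applies the inverse FFT before integrating, while route B evaluates formula (6) directly; the two results agree.

```python
import numpy as np

# Toy correlation values with a 25 Hz sign flip.
n = 1000
x = 5.0 * np.where((np.arange(n) // 20) % 2 == 0, 1.0, -1.0)

F = np.fft.fft(x)
power = np.abs(F) ** 2 / n
peaks = np.where(power > 100.0)[0]
peaks = peaks[peaks != 0]

# Combination zero frequency power value Power'_0.
power0_comb = power[0] + power[peaks].sum()

# Route A (first process): keep only the DC component, formula (5) gives
# f_0 = sqrt(Power'_0 * n); inverse FFT, then integrate over T = n samples.
F2 = np.zeros_like(F)
F2[0] = np.sqrt(power0_comb * n)
x_rec = np.fft.ifft(F2).real
X_ifft = x_rec.sum()

# Route B (third process): formula (6) with T = n, no inverse FFT at all.
X_direct = n * np.sqrt(power0_comb / n)
```

Both routes yield the same integration correlation value, which is why the third process can skip the IFFT step.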
[0139] FIG. 9 is a flowchart illustrating a work flow of a fourth correlation process which is an example of other correlation processes. In the fourth correlation process, in step E3, the satellite signal
acquiring section 251 accumulates data on the correlation values output from the correlator 23 over the predetermined accumulation time and stores it in the storing section 27 (step E3). The accumulation time is preferably set to a time of 1/m (m > 1) times the correlation integration time. For example, in a case where the correlation integration time is set to "1000 milliseconds" and m is 5, the accumulation time is set to "200 milliseconds".
Then, the satellite signal acquiring section 251 increases the correlation values corresponding to the accumulation time stored in step E3 by n times to calculate the increased correlation values,
and stores them in the storing section 27 as the increased correlation value data 276 (step E5). Then, after performing the processes of steps B7 to B13, the satellite signal acquiring section 251
integrates the reconfigured correlation values corresponding to the accumulation time, adds the integration result to the latest integration correlation value, and then updates the integration
correlation value (step E15).
The satellite signal acquiring section 251 repeats the processes of steps E3 to E15 until the correlation integration time elapses (step E17; No). Then, if the correlation integration time elapses
(step E17; Yes), the fourth correlation process is terminated.
In the fourth correlation process, the data on the correlation values corresponding to the correlation integration time is not accumulated in a lump, but the data on the correlation values
corresponding to a predetermined accumulation time which is shorter than the correlation integration time is accumulated. Then, the FFT process for the data on the correlation values corresponding to
the accumulation time, the movement process of the power value, and the IFFT process are performed. Then, the reconfigured correlation values corresponding to the accumulation time are integrated to
update the latest integration correlation value. The above-described process is repeated until the correlation integration time elapses, and the integration correlation value corresponding to the
correlation integration time is finally obtained.
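A rough sketch of this chunked variant follows; the chunk count, the threshold, and the toy signal are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def move_power_to_dc(x, threshold):
    """FFT, fold above-threshold peak power into DC, zero those bins, IFFT."""
    n = len(x)
    F = np.fft.fft(x)
    power = np.abs(F) ** 2 / n
    peaks = np.where(power > threshold)[0]
    peaks = peaks[peaks != 0]
    F[0] += np.sqrt(power[peaks].sum() * n)
    F[peaks] = 0.0
    return np.fft.ifft(F).real

total_ms, m = 1000, 5                    # 1000 ms split into five 200 ms chunks
chunk = total_ms // m
x = 5.0 * np.where((np.arange(total_ms) // 20) % 2 == 0, 1.0, -1.0)

integration = 0.0
for i in range(m):                       # steps E3..E15, repeated until E17
    segment = x[i * chunk:(i + 1) * chunk]
    integration += move_power_to_dc(segment, threshold=20.0).sum()
```

Each 200 ms chunk is transformed and reconfigured on its own, and its integral is added to the running integration correlation value; plain integration of `x` would have given zero.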
2-2. Second Embodiment
In the second embodiment, a correlation process based on the wavelet transform which is a type of the frequency analysis is performed, and the GPS satellite signal is acquired on the basis of the
integration correlation value obtained by integrating the reconfigured correlation values.
(1) Process Flow
[0144] FIG. 16 is a flowchart illustrating a work flow of a fifth correlation process which is a type of the correlation process based on the wavelet transform.
Firstly, the satellite signal acquiring section 251 sets the threshold value 278 of the energy value, and stores it in the storing section 27 (step F1).
Then, the satellite signal acquiring section 251 accumulates the correlation values output from the correlator 23 over the correlation integration time, and stores them in the storing section 27 as
the correlation value data 275 (step F3). Further, the satellite signal acquiring section 251 increases the correlation values corresponding to the correlation integration time by n times to
calculate the increased correlation values, and then stores them in the storing section 27 as the increased correlation value data 276 (step F5).
Next, the satellite signal acquiring section 251 performs a wavelet transform process for the increased correlation value data 276 (step F11). The wavelet transform process is a type of linear
filtering, and decomposes the input signal (here, time-series data on the increased correlation values) into a detailed component of a high frequency and a proximate component of a low frequency,
using two types of filters including a wavelet filter "h" corresponding to a high-pass filter, and a scaling filter "g" corresponding to a low-pass filter. Then, a process of repeatedly decomposing
the proximate component is performed until it reaches a predetermined decomposition level, and the input signal is expressed using the wavelet component having multiple resolutions.
Specifically, in a case where the increased time-series correlation values are "x(t)", if the proximate component "x_0(t)" of the decomposition level "0" is decomposed up to the decomposition level "J-1", where "J" is the number of decomposition levels, the following formula (7) is obtained.

$x_0(t) = x_1(t) + g_1(t),\quad x_1(t) = x_2(t) + g_2(t),\quad \ldots,\quad x_{J-1}(t) = x_J(t) + g_J(t)$ (7)
In the formula (7), "x_j(t)" represents the proximate component of the decomposition level "j", and "g_j(t)" represents the detailed component of the decomposition level "j". If the process of calculating "x_{J-2}(t)" by substituting "x_{J-1}(t)" into the formula of the decomposition level "J-2", which is one level below, and then calculating "x_{J-3}(t)" by substituting the calculated "x_{J-2}(t)" into the formula of the decomposition level "J-3", which is one level below, is performed down to the decomposition level "0", the increased time-series correlation values "x(t)" are expressed by the following formula (8).

$x_0(t) = g_1(t) + g_2(t) + \cdots + g_J(t) + x_J(t) = \sum_{j=1}^{J} g_j(t) + x_J(t)$ (8)
In this way, a technique of expressing the input signal using a sum of wavelet components having different resolutions is referred to as a multi-resolution analysis. In a case where the wavelet
transform is realized using a calculator (computer), in order to further effectively perform the calculation, the discrete wavelet transform which selects a scale parameter "a" using power of 2 as a
base is used.
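The decomposition in formulas (7) and (8) can be made concrete with the Haar wavelet; Haar is only the simplest illustrative choice here, since the text does not fix a particular wavelet or scaling filter.

```python
import numpy as np

def haar_split(x):
    """One level: scaling (low-pass) and wavelet (high-pass) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # proximate component cA
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detailed component cD
    return a, d

def haar_merge(a, d):
    """Inverse of one split."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal(64)             # stand-in for the correlation values

# Formula (7): repeatedly split the proximate component, here to level J = 3.
approx, details = x0, []
for _ in range(3):
    approx, d = haar_split(approx)
    details.append(d)

# Formula (8): the details plus the last approximation rebuild the input.
rebuilt = approx
for d in reversed(details):
    rebuilt = haar_merge(rebuilt, d)
```

The reconstruction is exact, which is the multi-resolution identity of formula (8).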
After performing the wavelet transform process, the satellite signal acquiring section 251 determines a decomposition level in which the energy value exceeds the threshold value 278 set in step F1,
among the high-frequency detailed components obtained with respect to the respective decomposition levels (step F13).
Then, the satellite signal acquiring section 251 adds the energy value of the high-frequency detailed component to the energy value of the low-frequency proximate component with respect to each
decomposition level determined in step F13 (step F15). Further, the energy value of the high-frequency detailed component in each decomposition level determined in step F13 is set to "0" (step F17).
The energy value of the low-frequency proximate component is expressed as a square of a proximate component coefficient (scaling coefficient), and the energy value of the high-frequency detailed
component is expressed as a square of a detailed component coefficient (wavelet coefficient).
Since the concept of the "energy value" is commonly used with the wavelet transform, in this embodiment a process using the concept of the energy value is illustrated and described. However, the energy value is a type of the power value in the frequency analysis, and has the same meaning as the power value.
In the discrete wavelet transform, since the increased correlation values "x(t)" are decomposed into the proximate component and the detailed component, the energy values of the increased correlation
values "x(t)" are conserved as the proximate component and the detailed component. That is, the law of energy conservation of the following formulas (9) and (10) is established.
$E(x) = E(A) + E(D)$ (9)

$\lVert x \rVert^2 = \lVert cA \rVert^2 + \lVert cD \rVert^2$ (10)
Here, "cA" represents the proximate component coefficient (scaling coefficient), and "cD" represents the detailed component coefficient (wavelet coefficient).
In steps F15 and F17, a process is performed for extracting the energy value of the high-frequency detailed component with respect to the decomposition level in which the energy value exceeds the
threshold value, adding it to the energy value of the low-frequency proximate component, and then setting the energy value of the high-frequency detailed component to "0". Since the total amount of
the energy value is not changed, in the process according to the present embodiment, the law of energy conservation is satisfied.
Thereafter, the satellite signal acquiring section 251 performs the inverse wavelet transform process to reconfigure the increased correlation values (step F19). Then, the satellite signal acquiring
section 251 integrates the reconfigured correlation values corresponding to the correlation integration time, and stores the obtained integration correlation value in the storing section 27 as the
integration correlation value data 277 (step F21). Then, the fifth correlation process is terminated.
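The energy bookkeeping of steps F13 to F17 and the conservation law of formulas (9) and (10) can be checked with a one-level Haar split; the wavelet choice and the threshold value are illustrative assumptions.

```python
import numpy as np

def haar_split(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # scaling coefficients cA
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # wavelet coefficients cD
    return a, d

rng = np.random.default_rng(1)
x = rng.standard_normal(128)                 # stand-in correlation values
cA, cD = haar_split(x)

E_x = np.sum(x ** 2)
E_A, E_D = np.sum(cA ** 2), np.sum(cD ** 2)  # formula (10): E_x = E_A + E_D

# Steps F13-F17, energy bookkeeping only: if the detail energy at this level
# exceeds the threshold, add it to the approximation side and zero it.
threshold = 10.0
if E_D > threshold:
    E_A, E_D = E_A + E_D, 0.0
```

Because moving the energy only relabels it, the total `E_A + E_D` still equals the input energy, which is the conservation property the text appeals to.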
(2) Experimental Result
An experimental result in a case where the GPS satellite signal is acquired using the technique according to the second embodiment will be described. Here, a result obtained by performing the
correlation process between the received CA code and the replica CA code is shown, assuming that the frequency of the received signal and the phase of the received CA code are already known. The
experiment has been performed by setting the correlation integration time to "1000 milliseconds" and by setting the threshold value of the energy value to "100".
[0159] FIG. 18 is a graph illustrating the correlation values measured over 1000 milliseconds (1 second), which illustrates a time-series change in the raw correlation values before performing the wavelet transform in the above-described fifth correlation process. The transverse axis represents time, and the longitudinal axis represents the correlation value. Referring to this figure, it can be understood that the sign of the correlation value changes in a short cycle, as the polarity of the received CA code is inverted by changes in the bit value of the navigation message data, and that the correlation value oscillates significantly between positive and negative values around "0". When the correlation values are integrated over the correlation integration time, the integration correlation value therefore becomes "0".
[0160] FIG. 19 is a graph illustrating a time-series change in the correlation values obtained by reconfiguring the signal by performing the above-described fifth correlation process on the correlation value data of FIG. 18. Referring to this figure, it can be understood that the center of the correlation values is shifted into the positive area and that the correlation values approximately converge to a positive value. Further, regarding the up-and-down vibration, it can be seen that a pulse-shaped change is partially recognizable, but the amplitude of the change is small overall. The reconfigured correlation values are integrated over the correlation integration time, and as a result, the integration correlation value becomes the significantly large value "650".
(3) Other Correlation Processes
The fifth correlation process described with reference to FIG. 16 is an example of the correlation process using the wavelet transform, but the invention is not limited thereto. Examples of other correlation processes will be described with reference to flowcharts. In the following flowcharts, the same reference numerals are given to the same steps as in the fifth correlation process, and thus, detailed description thereof will be omitted. Further, the steps that differ from those in the fifth correlation process will be mainly described.
[0162] FIG. 17 is a flowchart illustrating a work flow of a sixth correlation process which is an example of other correlation processes. In the sixth correlation process, the satellite signal acquiring section 251
sets the threshold value of the energy value in step F1, integrates the correlation values output from the correlator 23 for a predetermined accumulation time, and then, stores the data in the
storing section 27 (step G3). In a similar way to the fourth correlation process described with reference to FIG. 9, the accumulation time is set to a time of 1/m times (m > 1) the correlation integration time, for example.
Then, the satellite signal acquiring section 251 increases the correlation values corresponding to the accumulation time stored in step G3 by n times to calculate the increased correlation values,
and stores them in the storing section 27 as the increased correlation value data 276 (step G5). Further, the satellite signal acquiring section 251 performs the processes of steps F11 to F19,
integrates the reconfigured correlation values corresponding to the accumulation time, and then, as a result, updates the latest integration correlation value (step G21).
Then, the satellite signal acquiring section 251 repeats the processes of steps G3 to G21 until the correlation integration time elapses (step G23; No). Then, if the correlation integration time
elapses (step G23; Yes), the sixth correlation process is terminated.
As other correlation processes, in a similar way to the third correlation process described in the modification according to the first embodiment, the inverse wavelet transform may be omitted. In
this case, among the energy values of the time-series correlation values reconfigured by the inverse wavelet transform, the energy value of the low-frequency proximate component may be considered as
the correlation process result to acquire the GPS satellite signal.
3. Modifications
3-1. Electronic Devices
In the above-described embodiment, the invention is applied to a mobile phone which is a type of electronic device, but the electronic device to which the invention is able to be applied is not
limited thereto. For example, the invention may be similarly applied to other electronic devices such as a car navigation device, a mobile navigation device, a personal computer, a PDA (Personal
Digital Assistant) or a wrist watch.
3-2. Position Calculation System
Further, in the above-described embodiment, the GPS is exemplified as the position calculation system, but a position calculation system which uses other satellite positioning systems such as WAAS
(Wide Area Augmentation System), QZSS (Quasi Zenith Satellite System), GLONASS (GLObal Navigation Satellite System), GALILEO, or the like may be employed.
3-3. Increased Correlation Values
In the above-described embodiment, the correlation values corresponding to the correlation integration time are increased by n times to calculate the increased correlation values, and the frequency
analysis is performed with respect to the time-series data on the increased correlation values. However, this process may be omitted and the frequency analysis may be performed with respect to the
correlation value data corresponding to the correlation integration time.
Further, without performing the frequency analysis for the increased correlation value data, the frequency analysis may be performed for data (intensification correlation value data) obtained by
repeating the correlation values corresponding to the correlation integration time over a predetermined intensification time. For example, as the intensification time, a time of k times (k>1) the
correlation integration time may be set. Then, data obtained by repeating the correlation values corresponding to the correlation integration time over the intensification time is generated as
intensification correlation value data, and a power spectrum is obtained by performing the frequency analysis for the intensification correlation value data.
3-4. Frequency Analysis
Further, the frequency analysis is not limited to the Fourier transform or the wavelet transform. If the frequency component of the correlation values can be expressed as a power value, the same
effect as in the above-described embodiment can be obtained by performing the correlation process based on other frequency analyses.
The entire disclosure of Japanese Patent Application No. 2010-037169, filed on Feb. 23, 2010 is expressly incorporated by reference herein.
Patent applications by Shunichi Mizuochi, Matsumoto-Shi JP
Patent applications by SEIKO EPSON CORPORATION
Patent applications in class Correlation-type receiver
Improper Integrals
April 17th 2008, 05:46 PM #1
Calculate the value of the integral 1/x^2 going from 1 to b where b is a finite number greater than 1. Does the value of the integral approach a limit as b tends to infinity? (it does) If so,
what is the value?
Thanks =)
$\int_1^b x^{-2}\, dx = \left[-\frac{1}{x}\right]_1^b = 1-\frac{1}{b}$

Now, take the limit of this as b approaches infinity...

$\lim_{b \rightarrow \infty} \left(1-\frac{1}{b}\right) = 1$
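If you want a quick numerical sanity check as well, here is a plain-Python midpoint-rule approximation (the step count is chosen arbitrarily):

```python
def integral_1_to_b(b, steps=200000):
    """Midpoint-rule approximation of the integral of 1/x^2 from 1 to b."""
    h = (b - 1.0) / steps
    return sum(h / ((1.0 + (i + 0.5) * h) ** 2) for i in range(steps))

for b in (10.0, 100.0, 1000.0):
    print(b, integral_1_to_b(b), 1.0 - 1.0 / b)
```

The approximation tracks 1 - 1/b closely, and both head to 1 as b grows.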
Thank you so much!! I see what I did- stupid mistake- I had my answers backwards!! Ha ha =)
Each one of the mirror boxes is provided with some tiles, which are the right ones in order to see in the box a given picture (which will be a tessellation of the plane): some of these pictures
are shown on the walls of the box, so that the visitor knows in advance in which one of the three boxes he will be able to build that picture; other ones are shown mixed up in a poster, so that the
visitor willing to reconstruct that picture has first of all to decide in which one of the three boxes this can be done. Mathematically, this is a first not trivial problem: this decision corresponds
in fact to detecting the symmetry group of the given picture.
The 3-dimensional caleidoscopes can be handled in an analogous way: some small pieces are provided, which correspond to the fundamental domain[1] for the action of the symmetry group on some
polyhedra. By putting the piece in the corresponding caleidoscope, the polyhedron is reconstructed.
Both in the 2-dimensional mirror boxes and in the 3-dimensional caleidoscopes the same mathematical concept is underlined: the classification of something (a planar picture in the first case, a solid
object in the second one) with respect to its symmetry group. So a “machine for building symmetry” builds up different things, depending on what is put inside it, but all what is built with the same
machine has the same kind of symmetry; while different machines build up different symmetries[2].
The actual exhibition contains in fact other objects, which are all functional to this main idea.
This paper is organized as follows: section 2 is more technical and is directed to the reader who likes to have a more detailed idea of the mathematical concepts underlying the objects
just described; section 3 contains a discussion about different ways for using these “machines”, with different sorts of public. Readers mainly interested to this discussion can skip directly to
section 3.
2. The mathematics underlying the machines for building symmetry.
2.1 Coxeter groups
Coxeter, in a series of papers around 1930, began to study those subgroups of the isometry group in R^n having the two properties of being discrete and being generated by reflections in hyperplanes.
These two properties are exactly what is necessary in order to “see” the group in a system of mirrors. In order to explain what we mean, we shall first illustrate a couple of examples in the plane.
Let G be the group generated by the reflections r and s in two lines r and s, making an angle of π/4 in point O. G is a finite group, consisting of four rotations (with center O and angles, respectively, π/2, π, 3π/2, 2π) and four reflections, with respect to the two lines r and s and other two lines t and u (see figure 1); the set of these four lines is the complete set of hyperplanes
related to the group G (that is, the maximal set of hyperplanes such that the reflection with respect to that hyperplane belongs to the group); the connected components of the complement, in the
plane, of these four lines are the chambers of the group G. All the chambers are equivalent (in the sense that there exists an element of the group sending one into the other), each one of them is an
angle and the group is generated by the reflections in the walls of any chamber. There is a bijection between the whole set of the chambers and the elements of the group: fixing one of the chambers R[0], the bijection is obtained by sending any other chamber R into the element g in G such that g(R) = R[0].
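As a small computational aside (our own illustration; the matrices are just one choice of coordinates), the eight elements of this G can be generated by closing the two reflections under composition:

```python
import numpy as np

def reflection(theta):
    """Matrix of the reflection in the line through O at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

r = reflection(0.0)            # mirror along the x-axis
s = reflection(np.pi / 4)      # mirror at an angle of pi/4 to it

# Close {identity, r, s} under right multiplication by the generators.
group = [np.eye(2)]
frontier = [r, s]
while frontier:
    g = frontier.pop()
    if not any(np.allclose(g, h) for h in group):
        group.append(g)
        frontier.extend([g @ r, g @ s])

rotations = sum(1 for g in group if np.isclose(np.linalg.det(g), 1.0))
```

The closure has exactly eight elements, four of determinant +1 (the rotations) and four of determinant -1 (the reflections), matching the description of G above.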
In order to consider another example, let H be the group generated by the reflections a, b, c in the three sides of a right isosceles triangle. H is an infinite group, and the complete set of
hyperplanes is sketched in figure 2:
It is an infinite grid, and the complement of this grid in the plane is made up of an infinity of isometric triangles, which are the chambers of the group.
It is worthwhile observing that, in both cases, the complete set of hyperplanes (and, as a consequence, also the set of the chambers) is just what we see if we start from a set of mirrors (two in the
first case, three in the second one) corresponding to the generators of the group.
In fact, when we say that we are “seeing the group”, we are not referring only to the fact that the chambers we see are in one-to-one correspondence with the elements of the group, but, also, to the
possibility of reading out from what we see the presentation of the group with generators and relations: in the case of figure 1, the group G is generated by r and s with relations r^2 = s^2 = (rs)^4
= 1; in the case of figure 2, H has three generators a,b,c and relations a^2 = b^2 = c^2 = (ab)^2 = (bc)^4 = (ac)^4 = 1. In the photo, we can see another example, corresponding to the group with three generators a, b, c and relations a^2 = b^2 = c^2 = (ab)^2 = (bc)^3 = (ac)^6 = 1.
This situation is quite general: any Coxeter group has a presentation with k generators g[1],…,g[k] and relations of the kind (g[i]g[j])^N[ij] = 1; if i = j, N[ii] = 1, as each g[i] is the reflection in a hyperplane; if i ≠ j, π/N[ij] is the angle between the two hyperplanes associated to the generators g[i] and g[j].
From a geometric point of view, one can prove that any (irreducible[3]) Coxeter group is generated by the reflections on the walls of a chamber which can only be:
· a simplex, if the group is infinite;
· the cone on a simplex, if the group is finite.
There are other limitations on the shape of these chambers, which lead to the complete enumeration of Coxeter groups in any dimension; this is reached through an analysis which is quite lengthy in the general case, but, in low dimensions, is essentially a consequence of the observation that, due to the discreteness of the group, the dihedral angles between the walls of the chamber must be of the form π/n, where n is any positive integral number (≥2). This leaves very few possibilities, that is:
1. for finite groups in two dimensions: two mirrors making an angle of π/n (for any n in N); the chamber is the cone on a 1-dimensional simplex;
2. for infinite groups in two dimensions: three mirrors forming a triangle, with angles π/p, π/q, π/r. As 1/p + 1/q + 1/r = 1, the only possibilities are:
· p=q=r=3: equilateral triangle;
· p=2, q=r=4: right isosceles triangle;
· p=2, q=3, r=6: right triangle with angles π/3 and π/6;[4]
3. for finite groups in three dimensions: three mirrors forming the cone on a triangle. The angles between the mirrors must be of the form π/p, π/q, π/r; as these are the angles of the corresponding spherical triangle, we get the inequality 1/p + 1/q + 1/r > 1, which leaves the only possibilities:
· p=2, q=r=3: this corresponds to the symmetry group of a tetrahedron;
· p=2, q=3, r=4: this corresponds to the symmetry group of a cube, or of an octahedron;
· p=2, q=3, r=5: this corresponds to the symmetry group of a dodecahedron, or of an icosahedron.
The six cases just obtained in 2. and in 3. are exactly the six "machines for building symmetry" described in the introduction.
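The two Diophantine conditions above (1/p + 1/q + 1/r = 1 for the planar chambers, and 1/p + 1/q + 1/r > 1 for the spherical ones) can be enumerated by brute force. The script below is our own illustration, with finite search bounds chosen for convenience; among the strict-inequality solutions, the (2, 2, n) family corresponds to the reducible (prismatic) cases that the list above sets aside.

```python
from fractions import Fraction

planar, spherical = [], []
for p in range(2, 7):
    for q in range(p, 13):
        for r in range(q, 61):
            s = Fraction(1, p) + Fraction(1, q) + Fraction(1, r)
            if s == 1:
                planar.append((p, q, r))      # Euclidean triangle groups
            elif s > 1:
                spherical.append((p, q, r))   # finite (spherical) groups

# planar -> [(2, 3, 6), (2, 4, 4), (3, 3, 3)]
exceptional = [t for t in spherical if t[:2] != (2, 2)]
# exceptional -> [(2, 3, 3), (2, 3, 4), (2, 3, 5)]
```

This reproduces exactly the triangles of case 2 and the three exceptional spherical triples of case 3.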
In order to complete the list of Coxeter groups acting in dimension less or equal to 3, we have to add also:
0. groups in one dimension: there is only one finite group, corresponding to just one mirror; there is also only one infinite group, corresponding to two parallel mirrors.
4. infinite groups in three dimensions: these are given by four mirrors forming a tetrahedron with all dihedral angles of the form π/n; there are three possibilities, whose corresponding chambers are all shown in figure 3:
· the tetrahedron of vertices O,V,O',V', which has two dihedral angles equal to π/2 and four equal to π/3;
· the tetrahedron of vertices O,V,O',M, which has three dihedral angles equal to π/2, two equal to π/3 and one equal to π/4;
· the tetrahedron of vertices O,V,H,M, which has three dihedral angles equal to π/2, one equal to π/3 and two equal to π/4.
2.2 Two-dimensional machines and plane crystallographic groups
In the last subsection we found three irreducible cases among infinite two-dimensional Coxeter groups (equilateral triangle, right isosceles triangle, right triangle with angles π/3 and π/6) and one reducible case (rectangle). The same four cases appear among crystallographic groups, that is, discrete groups of isometries of R^n whose translation subgroup is generated by n
independent translations. For n = 2, these are the 17 (up to affine conjugacy) wallpaper groups; among them, the ones generated by reflections are Coxeter groups (reducible or irreducible),
and there are four of them:
· p3m1 corresponding to the mirror box with the shape of an equilateral triangle;
· p4m corresponding to the mirror box with the shape of a right isosceles triangle;
· p6m corresponding to the mirror box with the shape of a right triangle with angles π/3 and π/6;
· pmm corresponding to the mirror box with the shape of a rectangle.
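These four shapes correspond to the integer solutions of 1/p + 1/q + 1/r = 1 (the rectangle being the reducible case with all angles π/2). A short Python sketch (ours, added for illustration) recovers the three triangle solutions by a finite search:

```python
from fractions import Fraction

# Triangles with angles pi/p, pi/q, pi/r tiling the Euclidean plane
# must satisfy 1/p + 1/q + 1/r = 1 (angle sum pi). With p <= q <= r
# the search is finite, since p <= 3 and q, r are bounded accordingly.
solutions = sorted(
    (p, q, r)
    for p in range(2, 7)
    for q in range(p, 13)
    for r in range(q, 13)
    if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) == 1
)
print(solutions)   # [(2, 3, 6), (2, 4, 4), (3, 3, 3)]
```

The triples correspond, in order, to the right triangle with angles π/3 and π/6, the right isosceles triangle, and the equilateral triangle.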
It is also interesting to notice that these four groups are not the only ones – among wallpaper groups – which can be seen in our “machines”: in fact, when we put something in a mirror box, the
planar picture we get has a symmetry group which contains the group G associated to the box (that is, the group generated by the reflections in the walls of the box), but does not necessarily coincide
with it. If we put in the box something which already has some symmetry, what we get is a group H which properly contains G as a subgroup, and we may also “read” the index of G in H[5]. So, if we want
to make a list of which of the 17 wallpaper groups may be seen in our machines, we have to add some cases, namely:
· p6m is not contained as a proper subgroup in any of the other 16 groups; so in the mirror box with the shape of a right triangle with angles π/3 and π/6 we can only see something with symmetry
group p6m[6];
· the same happens for the group p4m corresponding to the mirror box with the shape of a right isosceles triangle[7];
· p3m1 is contained as a proper subgroup of (minimal) index 3 in the
crystallographic group p31m; so, in the box with the shape of an equilateral triangle we generally see something with symmetry group p3m1 (by putting in the box something with no symmetry at all); but
we can also see a planar picture with symmetry group p31m, by putting in the box something with symmetry group C[3], that is, with a center of rotation of order 3 in the center of the mirror box. p3m1 is
also contained as a proper subgroup of (minimal) index 2 in the crystallographic group p6m; this is less interesting for us, as it does not give a “new” group to see; however, in order to see the
group p6m in the equilateral mirror box it is enough to put inside the box something with a bilateral symmetry.
· pmm is contained as a proper subgroup
· of (minimal) index 2 in the group cmm;
· of (minimal) index 4 in the group p4g.
Besides these two cases, which add two new groups to the list of those we can see in mirror boxes, there are two other possibilities, as pmm is also contained as a proper subgroup
· of (minimal) index 2 in the group p4m;
· of (minimal) index 6 in the group p6m.
Thus in a rectangular mirror box we generally see something with symmetry group pmm (that is, this happens when we put inside the box something without any symmetry); we manage to get a picture with
symmetry group cmm
when we put inside the box something with a center of symmetry of order 2 in the center of the rectangle. In order to get a picture with symmetry group p4g, we cannot start from any rectangular
mirror box, but we need a square one,
and we have to put in the square box something with symmetry group C[4], that is, with a rotational center of order 4 in the center of the square. If we put in the square box something whose symmetry
group is the dihedral group D[4] we get a picture with symmetry group p4m; but in fact we can get the same group (as a subgroup of index 2 instead of 8) by putting inside the box a picture
with just a symmetry axis along the diagonal of the square. In order to get a picture with symmetry group p6m, we cannot start from any rectangular mirror box, but we need one such that the ratio
between its sides is √3, so that it can be divided into six right triangles with angles π/3 and π/6, each one obtained from the adjacent one by reflection in the common side: see figure 4.
So, the wallpaper groups which can be seen in a mirror box are seven: the four generated by reflections, and three others, which properly contain a subgroup generated by reflections.
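The count of seven can be double-checked directly from the lists in this subsection; the following Python sketch (ours, added for illustration) simply collects the groups named above for each box:

```python
# Wallpaper groups visible in each mirror box, as listed in this
# subsection (the entries for the rectangular box include the special
# square and ratio-sqrt(3) rectangles):
visible = {
    "equilateral triangle": {"p3m1", "p31m", "p6m"},
    "right isosceles triangle": {"p4m"},
    "right triangle pi/3, pi/6": {"p6m"},
    "rectangle": {"pmm", "cmm", "p4g", "p4m", "p6m"},
}
all_groups = set().union(*visible.values())
print(len(all_groups))        # 7
print(sorted(all_groups))
```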
2.3 Three-dimensional machines and polyhedra
And what about three-dimensional symmetry machines? The ones we described in the introduction realise the only possible irreducible finite subgroups of Iso(R^3) generated by reflections, which
correspond to the symmetry groups of the regular polyhedra. As in the two-dimensional case, the significant mathematical result hidden in these machines is the fact that they are not just an
example, but are in fact the only possible cases.
As in the two-dimensional case, by putting something in the “machine”, what one sees is the orbit F of that “something” with respect to the group associated to the kaleidoscope (that is, the group
generated by the reflections in the walls of the kaleidoscope); the symmetry group of F contains the group of the kaleidoscope, but does not necessarily coincide with it. However, finite groups of
isometries in space are very few, so that for two of the kaleidoscopes (the one associated to the symmetry group of the cube and the one associated to the symmetry group of the dodecahedron) we
may be sure that anything we see inside will always have a symmetry group which coincides with the group of the kaleidoscope: the reason is simply that there does not exist any
finite group of space isometries containing as a subgroup either the symmetry group of the cube or the symmetry group of the dodecahedron.
Instead, in the kaleidoscope associated to the symmetry group G of the tetrahedron we may see either objects having G as symmetry group or objects having the same symmetry group H as a cube (as G is
a subgroup of index 2 in H).
Colouring can be used (here as in the planar case) to underline this phenomenon. For example, an octahedron can be reproduced in the kaleidoscope of the cube; but if we colour it with black and white
faces (each black face touching only white ones, and vice versa) and we want to reproduce that colouring also, then we need the kaleidoscope of the tetrahedron[8].
Of course, regular polyhedra are not the only objects one can observe in the three-dimensional kaleidoscopes. To give another example, we can consider the orbit F of a single point x with respect to
the group associated to one of the three kaleidoscopes, and the convex hull of the points in F: in this way, we naturally get a uniform polyhedron (that is, a polyhedron whose symmetry group is
transitive on the set of vertices). In particular, we get in this way nearly all the 13 Archimedean polyhedra:
· in the kaleidoscope of the tetrahedron we also get the Archimedean polyhedron (3,6,6)[9];
· in the kaleidoscope of the cube we get the Archimedean polyhedra (3,4,3,4), (4,6,6), (3,4,4,4), (3,8,8), (4,6,8);
· in the kaleidoscope of the dodecahedron we get the Archimedean polyhedra (3,5,3,5), (5,6,6), (3,4,5,4), (3,10,10), (4,6,10).
The only two Archimedean polyhedra which cannot be seen in the kaleidoscopes are (3,3,3,3,4) and (3,3,3,3,5), whose symmetry groups contain only rotations.
The same construction does not only yield the Archimedean polyhedra (which, besides being uniform, have regular faces): in general, for any choice of x, we always get a polyhedron whose faces are
equiangular; for each of the described cases, there is just one position for the point x such that the faces of the corresponding polyhedron are also equilateral.
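The orbit construction just described can be sketched numerically. The Python sketch below is ours: the mirror normals chosen for the reflection group of the cube (Coxeter type B3) are an assumption of the sketch, not taken from the text. A generic point has an orbit of 48 points (the order of the cube's symmetry group), whose convex hull is a uniform polyhedron; a point lying on a mirror has a smaller orbit.

```python
# Reflect x in the plane through the origin with normal v.
def reflect(x, v):
    c = 2 * sum(a * b for a, b in zip(x, v)) / sum(a * a for a in v)
    return tuple(round(a - c * b, 9) for a, b in zip(x, v))

# Walls of a chamber for the cube's reflection group (type B3).
mirrors = [(1, -1, 0), (0, 1, -1), (0, 0, 1)]

def orbit(x):
    """Orbit of x under the group generated by the mirror reflections."""
    seen = {x}
    frontier = [x]
    while frontier:
        y = frontier.pop()
        for v in mirrors:
            z = reflect(y, v)
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return seen

print(len(orbit((1.0, 2.0, 3.0))))   # 48: a generic point, trivial stabiliser
print(len(orbit((1.0, 1.0, 0.0))))   # 12: the vertices of a cuboctahedron
```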
2.4 Elliptic, euclidean (and hyperbolic) geometry
Up to now we have spoken about two-dimensional and three-dimensional machines for building symmetry; but there is another way - probably more suitable[10] - to look at this situation. It should be noticed
in fact that the world of polyhedra can be handled in (at least) two different ways: we can think of a polyhedron as a solid object (homeomorphic to D^3), the three-dimensional analogue
of a polygon (homeomorphic to D^2) in the plane; alternatively, we can think of a polyhedron as a two-dimensional object (homeomorphic to S^2). From this point of view a polyhedron is not so
much the analogue of a plane polygon, but rather the analogue of a plane tessellation. With the first point of view one rather sees the symmetries of the object as isometries of space; with the
second point of view, one thinks more about isometries of the sphere.
This second point of view is probably the best one to underline the similarity between the different situations shown with our six “machines for building symmetry”. In every case we deal with surfaces,
more precisely triangulated surfaces; with the first three machines, the mirror boxes, we are in the world of euclidean geometry, and the restriction on the number of possible machines comes
essentially from the equality
1/p + 1/q + 1/r = 1,
implied by the fact that the sum of the angles of a euclidean triangle is equal to π; with the other three machines, the kaleidoscopes, we are in the world of elliptic geometry, and the restriction
on the number of possible machines comes essentially from the inequality
1/p + 1/q + 1/r > 1,
implied by the fact that the sum of the angles of a spherical triangle is greater than π.
A very natural extension of this would be to have “machines for building symmetry” in hyperbolic geometry; and, in this case, we have a much greater variety, as there exists such a machine for any p,
q, r such that
1/p + 1/q + 1/r < 1,
so there are infinitely many of them. Each one corresponds to fixing a hyperbolic triangle whose angles are π/p, π/q, π/r and considering the subgroup G of hyperbolic isometries generated by the
reflections in the sides of the triangle. There is no difficulty in simulating a virtual hyperbolic machine on a computer: it is enough (for example, in the Poincaré model) to substitute reflections
with circular inversions. This is of course conceptually identical to a physical realization; however, in a plan for a math exhibition, a real object still makes a great difference, at least in my
opinion, with respect to a virtual one (and it does not seem technically easy to construct such a machine).
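The circular-inversion remark can be made concrete. The following Python sketch (ours; the function and variable names are our own) implements inversion in a circle orthogonal to the unit circle, which is exactly a hyperbolic reflection in the Poincaré disc model:

```python
import cmath

# Inversion in the circle with centre c and radius r:
#     z  ->  c + r^2 / conj(z - c)
# When the circle is orthogonal to the unit circle (|c|^2 = 1 + r^2),
# this inversion is a hyperbolic reflection of the Poincare disc.
def invert(z, c, r):
    return c + (r * r) / (z - c).conjugate()

c, r = 2.0, 3 ** 0.5     # |c|^2 = 4 = 1 + r^2: an orthogonal circle

# Points of the unit circle stay on the unit circle ...
for t in (0.0, 0.7, 2.1):
    w = invert(cmath.exp(1j * t), c, r)
    print(round(abs(w), 9))              # 1.0 each time
# ... and interior points stay inside the disc:
print(abs(invert(0.2 + 0.1j, c, r)) < 1)  # True
```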
3. What can be done with the “machines for building symmetry”.
3.1 Classification with respect to symmetry, for different visitors
The principal aim of an exhibition based on the objects described in the introduction is to give the visitor an idea of the problem of classifying something with respect to its type of symmetry – an
idea which of course will be at very different levels, depending on the degree of mathematical knowledge of the visitor.
The possibility of giving at least a flavour of this idea, even to someone with no mathematical knowledge at all (like small children) is due to the fact that, given a group generated by reflections,
it is possible to express the idea of isomorphic or non-isomorphic groups without any technical algebraic language, but simply by looking at the geometry of the mirrors.
This opens the possibility of making many nontrivial considerations, related to the symmetry group of a planar picture or of a solid object, without the necessity of introducing the
algebraic language related to groups; for example, planar pictures which can be reconstructed in the same mirror box (or solid objects which can be reconstructed in the same kaleidoscope) have
isomorphic symmetry groups, while pictures (or solid objects) reconstructed in different mirror boxes (or kaleidoscopes) have non-isomorphic symmetry groups.
Of course, due to what we observed in the preceding section, the last sentence is in fact not quite correct[11]. However, we do not think this ambiguity poses a serious problem for
mathematical communication in this sort of exhibition. In fact, for the minority of the public who can appreciate this difference, the ambiguity is not hidden but, on the contrary, some problems
are posed on purpose, in order to provoke questions on the subject, and this gives a new, non-trivial problem to investigate. For the majority of the public, in order to give a flavour of the
different types of symmetry, it is enough to make them observe how the pictures coming out of certain mirror boxes are all based on the numbers 3 and 6 (and on a triangular grid), while others are based
on the number 4 (and on a square grid).
Moreover, the same problems can be used for the more and for the less sophisticated public with different purposes. To give just one example, the reconstruction of the same tessellation with
different kinds of colouring (which can also lead to different symmetry groups) can be used with the more sophisticated public precisely to provoke the ambiguity we were discussing before, while with
the less sophisticated one the same problem can be used for simpler observations.
In fact, one of the reasons why we thought the “machines for building symmetry” were useful for an exhibition is exactly what is exemplified in the previous comment, that is, the fact that
the same objects can be used to communicate different levels of mathematics to different sorts of public. The fact that they are based on some non-trivial mathematical concepts, on the one hand, allows
an interesting communication towards a public with more mathematical background; on the other hand, it is clearly perceivable also by the public with very little mathematical knowledge: often,
it is not necessary to be able to enter deeply into a problem in order to understand whether or not the problem has such a depth.
3.2 The role of interactivity
Another reason why we think these machines can be a useful example in the direction of finding ways for the popularization of mathematics is the fact that they give occasions for “doing mathematics”;
and, in saying this, we think both of the public with less mathematical knowledge and of the public with more technical instruments. Both will have the possibility of putting their hands on the
objects and meeting a problem they will have to solve: for a seven-year-old child the problem can be how to put a given triangle in a mirror box in order to see a hexagon; for a mathematics university
student the problem can be that of understanding why a crystallographic group cannot have rotations of order 5. In both cases (as well as with other possible categories of public) the objects can
provoke an active reaction in the visitor, which we think is the only effective way to learn some mathematics.
Another aspect, related to this one, is the crucial role played by imagination and creativity, which are human capacities usually (and wrongly) thought to be far removed from mathematical
ability. This crucial role comes from the fact that the main thing to do with our machines is simply to look at what happens when one puts something inside, and to observe analogies and differences
between the results.
In order to make these observations, any object will do: if I take the piece which represents the fundamental domain of the action of the symmetry group of the cube and I put it, in the “right”
way, in the kaleidoscope of the cube, I see a cube.
But if I put it in a “wrong” way, or I do not put that piece in at all, but prefer to insert a ball, or even a dry flower I had in my pocket, that will work equally well; in this case too I can
observe that what I see always has the same kind of symmetry as a cube, and always a kind of symmetry different from what we see in the other kaleidoscopes.
So, the “wrong” attempts to solve the proposed problems are just as useful for observing what happens, and thus for becoming familiar with the concepts involved; this may be very “relaxing”, especially for
people (unfortunately not so rare) who are paralysed by a sort of “fear” of maths.
3.3 The role of mathematical proofs
We spoke of active interaction with the objects, of imagination, and of play. But mathematics is also (or mainly?) rigorous proofs. What could the role of proof be in this kind of
proposal? In fact, the problem of achieving a rigorous proof is (always, but especially in the context of undergraduate teaching) a matter of successive approximations. And the first stages of these
approximations are the understanding of what has to be proved, and the awareness that the given fact is not trivial and thus has to be proved. This seems (and in fact is) an obvious
consideration, but it is unfortunately frequently forgotten: pupils are often forced to prove a statement at a moment when they do not yet have clear ideas about what it means (what it means if it
happens to be true, what would happen if it were false, etc.); or they may be asked to prove facts which may in fact be not so easy to prove, but whose statement is (or appears) evident.
It is of course one of the main features of mathematics that, in a deductive construction, one has to prove everything, including “self-evident” statements; and we know perfectly well that some
self-evident statements are not at all trivial to prove (Jordan’s theorem, just to give an example), and some are even false. But one needs some mathematical maturity to appreciate the need to
prove self-evident statements; and it can be useless, or even damaging, to propose their proof in a context where this maturity is lacking.
This leads to the (apparent) paradox that, in secondary school, it may be easier to propose the proof of “difficult” statements than that of “easy” ones. The role of a mathematical exhibition in
the achieving of proofs could well be (for some categories of public) that of making people conscious, and curious, about some facts to be proved.
Let us exemplify what we are trying to say with two statements: the first is the fact that a triangle with two equal sides has two equal angles; the second is the fact that there are exactly seven
frieze groups[12]. And let us keep in mind two sorts of public: a secondary school student, and an adult with no mathematical background after school. It is very unlikely that either the student or the
adult manages to become particularly curious about the statement on isosceles triangles; in any case they both believe it is true, and they would use the statement, without realising it
requires a proof, if they happened to need it in a concrete problem. The situation is completely different for the statement about friezes: first of all the statement is strange, and difficult to
understand; one does not understand it at once, but has to think about it, to make a list of the possibilities, to reason about the fact that whatever drawing he or she makes, it has to be one of
those seven. When one grasps what this means, it is usually accompanied by a sense of beauty: the result is beautiful, conceptually beautiful. Moreover, it looks strange; and it is very natural to ask
why an eighth case is impossible. So, given some time, people very naturally arrive at an awareness of the need for a proof.
When we have obtained such a consciousness, there are still many intermediate stages which can be significant, before achieving a complete proof: for example, one could begin to observe
that, due to the fact that the group is a frieze group, there must be some restrictions on the kind of possible isometries in the group (rotations may only be of order two; the axis of reflections
may only be either parallel or perpendicular to the direction of translations; the axis of glide reflections must be parallel to the direction of translations; …). This is not yet a proof, but it
begins to give some flavour of it, and the statement, which at first sight appeared completely mysterious (“why just seven???”) can now be seen as more reasonable (“I still do not know why they are
exactly seven, but I do understand that the situation is not completely free, and there are some limitations to be respected”).
In fact, these intermediate stages may be exactly what we would like to be grasped about proofs in undergraduate mathematics learning, much more than the particular proof of a particular
statement, which is often not relevant in itself (at least at that level).
3.4 The role of beauty
Beauty has – in many different ways – a crucial role in the exhibition just described.
The first aspect regards a problem of motivation. We all know very well that mathematics has on the whole a very bad reputation; it is quite common to meet persons who have a sort of hate and/or
fear towards mathematics; there may also be (in the same person) interest or curiosity about maths, besides fear, but it is very likely that fear acts as a sort of block towards curiosity. This block
is a concrete problem that any attempt at popularizing maths has to consider; one needs a way to overcome it, to be able to begin to communicate with the public; and, moreover, this way must be an
immediate one, because it has to overcome an irrational feeling, not a rational one.
The strong impact of beauty is an enormous help in this sense; and we found symmetry a very good subject for popularization of maths also for this reason (besides the ones already discussed).
By saying this, we do not only refer to the attempt to involve art, by proposing posters with reproductions of some masterpieces where symmetry is wonderfully used; we refer also to the
“beauty” of the artificial images that the visitor is invited to reconstruct (and/or to invent) by himself. Of course these two kinds of “beauty” are not comparable, but it is a fact that both have a
precious role in disposing people to be willing to interact with maths – a result which is not at all obvious to reach.
A particular role is played by the strong effect of surprise. In our mirror boxes one “can see infinity”, and this effect is very strong, very beautiful, and also very unexpected, in all sorts of public.
Moreover, this effect of surprise does not come from spectacular, enormous, scenic objects, but from very simple ones. This has a double positive effect: the first one is that it magnifies the
surprise (if I have to enter into an enormous building, which I see from very far away, with much light and colour, I do expect I shall see something which will surprise me; maybe I have no idea of
what I shall see, so I shall still be surprised, but I know in advance that this will happen; instead, if I put my eye on the border of an object which looks very simple, I do not expect any
particular “special effect”); another positive consequence is that the objects are easily reconstructible, so for example teachers realize they can easily build something analogous in their schools[13].
A last aspect about beauty I would like to recall here is one which has already been mentioned in the preceding section: mathematics is beautiful not only for the beauty of some of its images,
but also for the conceptual beauty of some of its results. Too often – in my opinion – we do not even try to communicate this kind of beauty: mathematicians seem to lack any confidence about the
fact that it could be communicated to someone, unless he or she has done the right number of exams in algebra, geometry, analysis, etc. Sometimes this is true, but it is probably much less true than
is generally thought, and it is possible to communicate much more than we think. At least, it is worthwhile trying.
[1] D is a fundamental domain for the action of G on P if for any point x in P we can find an element g in G such that g(x) belongs to D and no two points in the interior of D are related by an
element in G.
[2] This is in fact just a first rough approximation, which is not quite correct: see the following section for a more precise statement.
[3] A Coxeter group is irreducible if it is not isomorphic to the direct product of two Coxeter groups acting in lower dimensions.
[4] If we try to construct a plane polygon with all angles of the form π/n, there is another possibility besides these three triangles, that is a rectangle, with all angles π/2. This case is less
significant among Coxeter groups, because it is a reducible group: in fact, it can be seen as the direct product of two copies of the group generated by the reflections in two parallel lines, each
copy acting over R.
[5] The index of G in H is equal to the ratio of the areas of the fundamental domains of G and H: this can easily be read from what we see, keeping in mind that the fundamental domain of G is the
box itself.
[6] As p6m is a subgroup of itself, what can happen is that by putting something in the box we may get a picture whose symmetry group H is always isomorphic to p6m, BUT is different from the group G
generated by the reflections in the walls of the box.
[7] As p4m is a subgroup of itself, the same phenomenon described in note 6 for p6m can happen.
[8] In fact, the symmetry group of the octahedron is isomorphic to H, while the coloured symmetry group (that is, the subgroup of those isometries which, besides fixing the object, fix that colouring
also) is isomorphic to G.
[9] The notation (a[1],a[2],…,a[k]) used for an Archimedean polyhedron expresses the fact that each vertex is adjacent to a regular a[1]-gon, a regular a[2]-gon, …, a regular a[k]-gon (in this
cyclic order).
[10] The distinction between the two- and three-dimensional situations can generate ambiguity in a context where the objects are physically shown (and, therefore, are all necessarily three-dimensional).
[11] For example, we can build in a square mirror box pictures having four possible (non isomorphic) symmetry groups (pmm, cmm, p4m, p4g); while a picture with symmetry group p4m could be built both
in a square mirror box or in a mirror box with the shape of a right isosceles triangle.
[12] Frieze groups are the symmetry groups of patterns repeating in just one direction; that is, they are discrete subgroups of the group of plane isometries, whose translation subgroup is isomorphic
to Z.
[13] Although this is on the whole a very positive fact, it also poses some problems: in fact, it is not always easy to keep control of this and to prevent imitations by people who
misunderstand the conceptual and didactical role of the objects.
[1] H.S.M. COXETER, Introduction to geometry, Wiley (1961)
[2] H.S.M. COXETER, J. MOSER, Generators and relations for discrete groups, Springer (1980)
[3] P. CROMWELL, Polyhedra, Cambridge University Press (1997)
[4] M. DEDÒ, Forme, Decibel-Zanichelli (1999)
[5] H. WEYL, Symmetry, Princeton University Press (1952)
Dipartimento di Matematica “F. Enriques” Università di Milano
Use only in the MuPAD Notebook Interface.
This functionality does not run in MATLAB.
assumeAlso(condition)
assumeAlso(expr, set)
assumeAlso(condition) adds the assumption that condition is true for all further calculations. It does not remove previous assumptions containing identifiers used in condition.
assumeAlso(expr, set) attaches the property set to the identifier or expression expr. It does not remove previous assumptions containing identifiers used in expr.
Assumptions are mathematical conditions that are assumed to hold true for all calculations. By default, all MuPAD^® identifiers are independent of each other and can take any value in the complex
plane. For example, sign(1 + x^2) cannot be simplified any further because MuPAD assumes that x is a complex number. If you set an assumption that x is a real number, then MuPAD can simplify sign(1 +
x^2) to 1.
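For readers without MuPAD at hand, an analogous behaviour can be reproduced in Python's SymPy (this analogy is our addition, not part of the MuPAD documentation):

```python
from sympy import Symbol, sign

# With no assumptions, x may be any complex number, so sign(1 + x^2)
# cannot be simplified:
x = Symbol('x')
print(sign(1 + x**2))      # stays unevaluated: sign(x**2 + 1)

# Declaring x real lets SymPy recognise 1 + x^2 as positive:
xr = Symbol('x', real=True)
print(sign(1 + xr**2))     # 1
```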
For this reason, many MuPAD functions return very general or piecewise-defined results depending on further conditions. For example, solve or int can return piecewise results.
Many mathematical theorems hold only under certain conditions. For example, x^b*y^b=(x*y)^b holds if b is an integer. But this equation is not true for all combinations of x, y, and b. For example,
it is not true if x = y = -1, b = 1/2. In such cases, you can use assumptions to get more specific results.
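This failure is easy to check numerically in Python (an illustration we add, not part of the original documentation): for x = y = -1 and b = 1/2, the principal square roots give i · i = -1 on the left, but 1 on the right.

```python
# x^b * y^b versus (x*y)^b with x = y = -1, b = 1/2.
# Python's ** uses the principal branch, so (-1)**0.5 is (roughly) 1j.
lhs = (-1) ** 0.5 * (-1) ** 0.5   # i * i, approximately -1
rhs = ((-1) * (-1)) ** 0.5        # 1 ** 0.5 = 1.0

print(abs(lhs + 1) < 1e-9)        # True: lhs is -1 up to rounding
print(rhs)                        # 1.0
```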
If you use assumeAlso inside a function or procedure, MuPAD uses that assumption only inside the function or procedure. After the function or procedure call, MuPAD removes that assumption and only
keeps the assumptions that were set before the function or procedure call.
If condition is a relation (for example, x < y), then MuPAD implicitly assumes that both sides of the relation are real. See Example 4.
To delete assumptions previously set on x, use unassume(x) or delete x.
When assigning a value to an identifier with assumptions, the assigned value can be inconsistent with existing assumptions. Assignments overwrite all assumptions previously set on an identifier. See
Example 5.
assumeAlso accepts any condition and Boolean combinations of conditions. See Example 7.
If expr is a list, vector, or matrix, use the syntax assumeAlso(expr, set). Here, set must be specified as one of C_, R_, Q_, Z_, N_, or an expression constructed with the set operations, such as
union, intersect, or minus. set also can be a function of the Type library, for example, Type::Real, Type::Integer, Type::PosInt, and so on.
Do not use the syntaxes assumeAlso(expr in set) and assumeAlso(condition) for nonscalar expr.
Example 1
Solve this equation without any assumptions on the variable x:
Suppose, your computations deal with real numbers only. In this case, use the assume function to set the permanent assumption that x is real:
If you solve the same equation now, you will get three real solutions:
If you also want to get only nonzero solutions, use assumeAlso to add the corresponding assumption:
assumeAlso(x <> 0);
solve(x^5 - x, x)
MuPAD keeps both assumptions for further computations:
For further computations, delete the identifier x:
Example 2
When you use assumeAlso, MuPAD does not remove existing assumptions. Instead, it combines them with new assumptions. For example, assume that n is an integer:
assume(n, Type::Integer):
Add the assumption that n is positive:
assumeAlso(n, Type::Positive):
For further computations, delete the identifier n:
Alternatively, set multiple assumptions in one function call using the Boolean operator and:
assume(n, Type::Integer and Type::Positive):
For further computations, delete the identifier n:
Example 3
You can set separate assumptions on the real and imaginary parts of an identifier:
assume(Re(z) > 0);
assumeAlso(Im(z) < 0):
For further computations, delete the identifier z:
Example 4
Using assume, set the assumption x > y. Assumptions set as relations affect the properties of both identifiers.
To see the assumptions set on identifiers, use getprop:
To keep an existing assumption on y and add a new one, use assumeAlso. For example, add the new assumption that y is greater than 0 while keeping the assumption that y is less than x:
Relations, such as x > y, imply that the involved identifiers are real:
is(x, Type::Real), is(y, Type::Real)
You also can add a relational assumption where one side is not an identifier, but an expression:
For further computations, delete the identifiers x and y:
Example 5
_assign and := do not check if an identifier has any assumptions. The assignment operation overwrites all assumptions:
assume(a > 0):
a := -2:
a, getprop(a)
For further computations, delete the identifier a:
Example 7
Use assume to set the assumption that the identifier a is positive:
Now, add two new assumptions using one call to assumeAlso. To combine the assumptions, use the Boolean operator and:
assumeAlso(a in Z_ and a < 5):
is(a = 0);
is(a = 1/2);
is(a = 2);
is(a = 6);
expr: Identifier, mathematical expression, list, vector, or matrix containing identifiers. If expr is a list, vector, or matrix, then only the syntax assumeAlso(expr, set) is valid.
set: Property representing a set of numbers or a set returned by solve. This set can be an element of Dom::Interval, Dom::ImageSet, piecewise, or one of C_, R_, Q_, Z_, N_. It also can be an expression constructed with the set operations, such as union, intersect or minus. For more examples, see Properties.
condition: Equality, inequality, element-of relation, or Boolean combination (with the operators and or or).
Return Values
Void object null() of type DOM_NULL.
See Also
MuPAD Functions
Related Examples
search for a function satisfying some conditions
Hi everyone, I would like to find a function
$$\Psi\in\mathcal{C}^2: z\in\mathbb{R}\rightarrow\Psi(z)\in\mathbb{R_+}$$
satisfying the following conditions:
$$1-\frac{z\Psi'(z)}{\Psi(z)}+8s\Psi''(z)\geq 0$$
$$\Psi(z)^2+\frac{4}{\theta}\Psi(z)\leq\frac{z^2}{4\theta s}$$
where $\theta$, $s$ are given positive constants.
The second inequality yields
$$0\leq\Psi(z)\leq\sqrt{\frac{z^2}{4\theta s}+\frac{4}{\theta^2}}-\frac{2}{\theta}$$
Could someone have an idea to solve the first differential equation? Or give such a function $\Psi(z)$? Many thanks!
real-analysis differential-equations
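As a quick sanity check of the upper bound in the question, note that it is just the positive root of the quadratic $\Psi^2 + \frac{4}{\theta}\Psi = \frac{z^2}{4\theta s}$, so the boundary function saturates the second condition exactly and vanishes at $z=0$. A small numeric sketch (the function names and the test values $\theta=2$, $s=0.7$ are my arbitrary choices, not from the question):

```python
import math

def psi_upper(z, theta, s):
    """Upper-bound candidate: the positive root of Psi^2 + (4/theta)*Psi = z^2/(4*theta*s)."""
    return math.sqrt(z * z / (4 * theta * s) + 4 / theta ** 2) - 2 / theta

def quadratic_lhs(z, theta, s):
    """Left-hand side of the second condition, Psi^2 + (4/theta)*Psi."""
    p = psi_upper(z, theta, s)
    return p * p + (4 / theta) * p
```

This only confirms the algebra behind the stated bound; whether any $\Psi$ below it also satisfies the first (differential) inequality is exactly the open part of the question.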
Maybe such a function does not exist? However, if we allow $\Psi$ to map to the negative reals, then $\Psi(z)$ given by the negative solution to the quadratic equation above satisfies the other
conditions. But I have not checked any further. It would be good to know a bit of your motivation for investigating these inequalities... – Suvrit Aug 27 '12 at 14:20
1 The second condition implies that $\Psi(0)=0$. Shouldn't you exclude $z=0$ for the first condition? – Ilya Bogdanov Aug 27 '12 at 14:48
Notice also that the substitution $z=4t\sqrt{s/\theta}$, $\psi(z)=2\phi(t)/\theta$ leads to cancellation of constants. The inequalities read as $1-t\phi'/\phi+\phi''\geq 0$, $\phi^2+2\phi\leq t^2$. – Ilya Bogdanov Aug 27 '12 at 14:56
Conductivity of a copper wire at 4K
Hi everyone,
I am trying to find the mean free time between collisions for free electrons in a copper wire given the dimensions of the wire and the resistance, electron density and temperature.
I figured I needed to find the conductivity, plug that into an expression for mean free path then use the energy of the electron at 4K to determine its speed, and find the time between collisions
from that, although it's quite possible there is a more straightforward way than this.
Anyway with that being the method I first tried using the resistivity eqn
where my resistance is 2x10
my wire is 1m long with cross section 1mm
this gave me a resistivity of 2x10
resistivity being the reciprocal of conductivity gave me a conductivity of
However it seems strange that temperature does not feature in this eqn as that must have an effect on resistivity and therefore conduction.
I can't find any eqn involving temperature. Could anyone suggest an eqn which may be suitable? Or indeed tell me if I am on the complete wrong track.
Thanks a bunch
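One route that sidesteps the speed/mean-free-path detour entirely is the Drude relation sigma = n e^2 tau / m, which gives the mean free time directly as tau = m / (rho n e^2). A sketch of that calculation (the resistivity value below is a placeholder, since the exponents in the post did not survive; the electron density 8.5e28 per cubic metre is the usual figure for copper):

```python
# Drude-model estimate of the mean free time between collisions:
#   sigma = n * e^2 * tau / m   =>   tau = m / (rho * n * e^2)
M_E = 9.109e-31   # electron mass, kg
E = 1.602e-19     # elementary charge, C
N = 8.5e28        # free-electron density of copper, 1/m^3

def mean_free_time(rho_ohm_m):
    """Mean free time tau (seconds) from resistivity rho (ohm metres)."""
    return M_E / (rho_ohm_m * N * E ** 2)

# Placeholder resistivity of 2e-9 ohm*m (cold copper sits far below the
# room-temperature 1.7e-8 ohm*m; the post's actual exponent was lost):
tau = mean_free_time(2e-9)
```

Note that temperature never appears explicitly here: it is hidden inside the measured resistivity, which is why the textbook resistivity formula looks temperature-free.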
volume and calculus for upcoming exam!
June 10th 2013, 05:39 PM #1
May 2013
volume and calculus for upcoming exam!
Hi, I would really appreciate help with the text in red. Thanks.
One of the questions asks for the region of integration and evaluation of e^(2x+y) dy dx, where the inner integral goes from x to 3x and the outer integral from 0 to 4.
But I have issues with this part - how should I solve it?
The solid sphere x^2 + y^2 + z^2 <= R^2 has a hole bored through it by the solid vertical cylinder x^2 + y^2 <= Q^2, where
Calculate the volume of the part of the sphere that is removed.
Re: volume and calculus for upcoming exam!
Hey n22.
Check your other thread for a response.
Re: volume and calculus for upcoming exam!
Here's the other thread.
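For anyone finding this later: assuming the cylinder is coaxial with the sphere (the condition after "where" did not survive above), the removed volume has a tidy closed form from integrating cylindrical shells of height 2*sqrt(R^2 - r^2) for r from 0 to Q. A sketch comparing that closed form against a brute-force shell sum (the function names are mine):

```python
import math

def removed_volume(R, Q):
    """Closed form for the volume bored out of the sphere by a coaxial cylinder."""
    return (4 * math.pi / 3) * (R ** 3 - (R * R - Q * Q) ** 1.5)

def removed_volume_numeric(R, Q, steps=100000):
    """Cylindrical shells: integrate 2*pi*r * 2*sqrt(R^2 - r^2) for r in [0, Q]."""
    dr = Q / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr          # midpoint rule
        total += 2 * math.pi * r * 2 * math.sqrt(R * R - r * r) * dr
    return total
```

As a check on the formula, Q = R removes the whole sphere and Q = 0 removes nothing, which the closed form reproduces.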
The basics of a step-by-step, easy-to-follow tutorial on the design of LC filters, ranging from the simple to the complex, together with the required learning aids.
L/C FILTERS - ESSENTIAL BASICS TO BEGIN WITH
1. What is this LC jazz anyway?
Well the letter L stands for inductance and the letter C stands for capacitance - (I love this high tech talk) - therefore you should read on.
2. Inductance, capacitance and resistance.
If you don't know or understand inductance, capacitance and resistance, jump ahead now and then get back to us. Just don't react too badly to this.
3. What is impedance?
Now here comes one of the most confusing aspects of electronics - which I will de-mystify by taking my customary casual approach.
4. What is Q and why is it so important?
Another aspect which confuses some people no end. But it is quite critical to your understanding of filters later.
5. You should have a decent calculator to do your work.
You will need:-
(a) At least one memory location.
(b) an exp or exponential function.
(c) pi of course.
(d) square root function.
(e) a powers function - often denoted Yx where the first number is raised to the power of the second number.
(f) decimal to hexadecimal and vice versa. This proves very helpful for graphics and some programming.
6. Buy a cheap exercise book as this will prove beneficial. Leave the first few pages vacant to enter commonly used formula and the next few pages vacant to record the L/C of the frequencies you might happen to refer to quite often.
1. What is this LC jazz anyway?
Well the letter L stands for inductance and the letter C stands for capacitance. Ah! I still love this high-tech talk.
If you still don't know or understand inductance, capacitance and resistance or you cheated then jump out now and then get back to us.
Happy with that?. Good, now any quantity of inductance and capacitance will resonate at a particular frequency. As an example assume we have on the bench an inductor which measures exactly 159.155 uH
and a capacitor which measures exactly 159.155 pF. In the real world we usually don't measure to that level of precision and in fact you wouldn't be that accurate anyway. Mostly, if you are within
about 5% you are doing O.K.
Using our calculator you will find if we multiply L times C (L * C) we get 25330.314. This figure you will now ensure becomes permanently embedded in your mind for evermore because it is the very
basis of my formula for L/C.
There are other ways and means of calculating resonance etc. but guys, this is the simplest yet. You will learn in the next section how I arrived at it but for the moment learn these formula. They
are one and the same, just re-arranged by simple algebra.
LC = 25330.3 / Fo[Mhz]^2
Fo[Mhz] = sqrt( 25330.3 / LC )
Where obviously LC is the combination of inductance and capacitance, F[o] is our frequency of resonance and of course 25330.3 is our constant we have just learnt.
One formula will give you LC for a known frequency and the other will give you a frequency for a known LC combination. It's just simple re-arrangement of an algebraic expression. And you thought
maths at school did/does suck! and had no practical real-world value to you.
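That pair of formulas is trivial to mechanise if you fancy checking your exercise-book answers; a small sketch (the function names are mine, not the author's):

```python
import math

K = 25330.3  # (1000 / (2 * pi))^2 -- valid for L in uH, C in pF, Fo in MHz

def lc_for(f_mhz):
    """Total L*C product (uH times pF) that resonates at f_mhz."""
    return K / f_mhz ** 2

def fo_mhz(l_uh, c_pf):
    """Resonant frequency (MHz) of an inductor (uH) / capacitor (pF) pair."""
    return math.sqrt(K / (l_uh * c_pf))
```

Plugging in 7.15 MHz returns roughly 495.5, and the 159.155 uH / 159.155 pF pair from earlier comes out at 1.0 MHz, as it should.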
Consider if we were designing any type of filter for say, 7.150 Mhz (which is the same by the way as saying 7150 Khz).
Now 1000Khz = 1Mhz remember that. Using our formula above we would need an L/C combination totalling 495.5 - what combination would you need for some sort of filter at;
(a) 75Khz (b) 1750 Khz (c) 9Mhz and (d) 101.7 Mhz
QUIZ - No. 1
1. What is the L/C for the following frequencies;
(a) 455Khz (b) 7224 Khz and (c) 9 Mhz.
2. What capacitor will resonate at the above respective frequencies with the following respective inductors ;
(a) 679 uH (b) 22uH and (c) 3.81 uH.
3. What is the resonant frequency for the following L/C (inductor - capacitor) combinations to the nearest Khz;
(a) 2,533,030 (b) 116,716 (c) 312.72 and (d) 2.53303.
Yes Ian, I did complete the above quiz in my exercise book. Can I please now see the answers to check out my work.
These are the principal passive components which make up most circuits we are discussing here. A capacitor is a capacitor, an inductor is an inductor and of course a resistor is a resistor or an
impedance at its rated value.
Don't you believe it because it ain't necessarily so.
We shall soon see that a capacitor can at certain frequencies look like an inductor and an inductor can look like a capacitor and our resistor may look a bit like both of them.
1. What is an Inductor
Well the letter L stands for inductance. The simplest inductor consists of a piece of wire. A piece of #22 wire which is approximately 100 mm ( 4" ) long has a measured inductance of 0.14 uH. Well
mine did because I measured it on an inductance meter (see kits) operating at about 740 Khz. - [but compare that value with formula later].
Now this 100 mm length of conductor is about 25 mils (i.e. 25/1000") or 0.644 mm in diameter but does NOT carry rf throughout its whole cross-sectional area of 0.326 mm^2. ( [Pi * Dia^2] / 4 ) -
another useful formula as you will soon see because I then use the traditional pi * r^2 which is a pain if you talk ( as we mostly do ) in diameters.
At low frequencies this would be the case, but as the frequency is increased so is the magnetic field at the centre of the conductor and this leads to an increasing impedance to the charge carriers.
This decreases the current density at the centre of the conductor and increases the current density around the perimeter of the conductor. We call this increase in current density around the
perimeter "skin effect".
A1 = pi * r1^2 and A2 = pi * r2^2
Skin depth area = A2 - A1
= [ pi * ( r2^2 - r1^2 ) ]
The net result of skin effect is a net decrease in the cross sectional area of the conductor and a consequent increase in the rf resistance. Consult the references I have suggested you read for a
more detailed and informed discussion on this very important topic.
In particular I would recommend RF Circuit Design - Chris Bowick - Sams.
This skin effect we will see later on has a very significant impact on the Q of our inductors.
Throughout these tutorials I will incorporate as many relevant formulae as possible. Why?.
Because you will have under one roof all the formulae necessary to tackle any design goal you may have in mind.
Let's start out with the formula for the inductance for a straight piece of wire.
L = (.002 * length)[ 2.3 log {((4 * length)/ dia) - 0.75}] uH
L = inductance in uH.
length = length in cm. (1" = 2.54 cm).
dia = diameter of the wire in cm also.
plugging in the details of our piece of wire which are, length = 10 and dia = 0.064, all dimensions being in cms.
L = (.002 * 10)[ 2.3 log {((4 * 10)/ 0.064) - 0.75}] uH
L = (.02)[ 2.3 log {(40 / 0.064) - 0.75}] uH
L = (.02)[ 2.3 log {624.25}] uH
L = (.02)[ 2.3 {2.7954}] = 0.129 uH
The reason I have included as many steps as possible is to help those people who do not necessarily feel comfortable with maths. Including as many steps as possible means they should be able to
follow how we arrive at an answer. [now compare that answer with the measurement I made earlier???]
Now from the above equation you should begin to understand any piece of wire exhibits inductance. This of course includes capacitor and resistor leads as well as any inter-connecting wiring in your
circuit. Circuit board traces exhibit inductance and in fact this property is often taken advantage of at UHF. We can draw inductors on the PCB's.
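The straight-wire formula, coded exactly as printed above (with the - 0.75 inside the log, as in Bowick); feeding in the stated 10 cm length and 0.064 cm diameter lands near 0.13 uH, in the same ballpark as the 0.14 uH measured on the meter (the function name is mine):

```python
import math

def straight_wire_uh(length_cm, dia_cm):
    """Inductance (uH) of a straight round wire; both dimensions in cm."""
    return 0.002 * length_cm * (2.3 * math.log10(4 * length_cm / dia_cm - 0.75))
```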
What if we decide to wind our 100 mm piece of wire into a coil? Here is another formula which is referred to as "Wheelers' Formula" - * Proc IRE, p 1,398, Oct 1928., although here the formula has
been metricated. If you are more comfortable with imperial inches then delete the 0.394 and use inches in lieu of cm.
L = (0.394 * r^2 * n^2 ) / ( 9r + 10b ) uH
L = inductance in uH.
r = radius of the coil (in cm).
n^2 = number of turns in the coil squared and;
b = the length of the coil in cm.
Our length of 100 mm wire can easily become 2.5 turns wound on a 12.7 mm dia. former or even 5 turns wound on a 6.35 mm dia. former. If for convenience we make both coils equal in length to 12.7 mm, we can then apply the above formula, not forgetting to convert to cm and NOT confusing radius with diameter.
Proceeding further you should have calculated inductances of (a) 0.054 uH and (b) 0.064 uH.
Our straight piece of wire has gone from 0.044 uH to 0.054 uH and 0.064 uH respectively.
A much flasher formula:
where coil length = diameter (which is best for optimum Q).
N = sqrt[ 29L / 0.394r ] OR L = [(0.394 * r * N^2) / 29]
N = number of turns
L = inductance required in uH.
r = radius of the coil (in cm).
All others (29 & 0.394) are constants
Using example (a) above we would get:
L = [(0.394 * 0.635 * (2.5)^2) / 29] = 0.054 uH
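Both Wheeler's formula and the length-equals-diameter special case are one-liners in code; a sketch reproducing the worked examples above (function names are mine):

```python
def wheeler_uh(radius_cm, turns, length_cm):
    """Wheeler's single-layer air-coil formula (uH); all dimensions in cm."""
    return (0.394 * radius_cm ** 2 * turns ** 2) / (9 * radius_cm + 10 * length_cm)

def turns_square_coil(l_uh, radius_cm):
    """Turns needed when the coil length equals its diameter (b = 2r)."""
    return (29 * l_uh / (0.394 * radius_cm)) ** 0.5
```

The 2.5-turn and 5-turn coils come out at about 0.054 uH and 0.064 uH respectively, matching examples (a) and (b).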
Permeability:- So far our inductors have been air wound where the permeability of air is 1. If we introduce iron or ferrites into our core then we find that for a given number of turns, the
inductance will increase in proportion to the permeability of the core. This means for a given inductance less turns will be required.
e.g. Neosid have a fairly popular former (part 52-021-67 page 279) designed to accept a core of 4 mm dia. This former has an external diameter of 5.23 mm and a typical core at 20 metres (15 Mhz)
would be an F29 type.
If we conveniently wanted an inductance of, say, 3.4937 uH (and of course I obviously cheated), then an F29 slug gives the formula:
N = 10.7 sqrt(L) or N = 10.7 sqrt(3.4937)
we very conveniently get exactly 20 turns (remarkable wasn't it) On the other hand to achieve the same inductance on a 5.23 mm former we would need to wind?
Therefore introducing permeability means fewer turns, therefore less rf resistance and hopefully greater Q.
Using the above former but with a different core and bobbin Mr. Neosid in his cattle-dog (australian for catalogue) page 264, tells me 150 turns of 3 x .06 EnCu wire, wave wound, will yield an
inductance of about 670 uH. This would not be achievable without the permeability of core and bobbin, especially the claimed unloaded Q (Q[u]) of 150. Use the above formula and substitute a factor of
'X' for the 10.7 - what is the X?. If you need help, email me.
These come in two types. Powdered Iron or Ferrites. Both introduce permeability. Toroids look exactly like doughnuts and come in various diameters, thicknesses, permeabilities and types depending
upon the frequency range of interest. Some of their advantages are:
• high inductance for the physical space occupied
• no interaction or coupling with adjacent components (unlike air wound and other inductors)
• various permeabilities available
• exceptional Q values when wound correctly and optimum core and windings selected
• wide range of diameters and thicknesses.
• relatively low cost
• often simple to mount or secure mechanically
The only disadvantage I can think of
• nearly impossible to introduce variable tuning of the inductance
A typically popular type is made available by Amidon Associates and a representative example is the T50-2. This core is lacquered red (so you know the type) and has the following main properties.
Being T50 its outside diameter is 0.5", the ID is 0.3" and the thickness is 0.19"
The permeabilities or in this case A[L] factors i.e. ( inductance per 100 turns^2 ) are:
TYPE     COLOR    A[L]   Freq. Range
T50-26   Yel-Wh   320    power freq.
T50-3    Gray     175    50 Khz to 500 Khz
T50-1    Blue     100    500 Khz to 5 Mhz
T50-2    Red      57     2 Mhz to 30 Mhz
T50-6    Yellow   47     10 to 50 Mhz
T50-10   Black    32     30 to 100 Mhz
This is only a small sample to give you an idea. Your turns required to give a certain inductance based on the above A[L] is as follows:
N = 100 * sqrt[ L / A[L] ]
Therefore to obtain an inductance of 3.685 uH using a T50-6 toroid would require 28 turns (of course I cheated again) but check it out on your calculator as I may have left in a deliberate mistake to
see if you're awake.
By the way don't get too paranoid about the exact number of turns because cores do vary in value anyway and particularly with temperature changes.
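The toroid turns formula in code form, checked against the 3.685 uH / T50-6 example above (the function name is mine):

```python
def toroid_turns(l_uh, a_l):
    """Turns on a toroid whose A_L is given in uH per 100 turns squared."""
    return 100 * (l_uh / a_l) ** 0.5
```

With the T50-6 A[L] of 47 this returns 28 turns, matching the worked example.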
Series or Parallel Inductors
Just as with resistors, if two or more inductors in a circuit are connected in series, then the inductances add together, i.e. L [total] = L1 + L2 + Ln etc. If they are in parallel then
they reduce to less than the lowest value in the set by the formula:
L [total] = {1 / [(1 / L1) + (1 / L2) + (1 / Ln)]}
Enter all of those formulae along with every other one above and below into your exercise book.
Now for the downer:
I said in the second paragraph in the beginning that an Inductor can look like a capacitor.
What is a Capacitor
Well the letter C stands for capacitance. A capacitor is formed by two or more conductive parallel plates separated by an insulating material or dielectric. Typical dielectrics you will encounter are
air, mica, ceramic or plastic.
Consider an air variable capacitor of the type used either as a trimmer or even the tuning element in a radio. It is merely two or more conductive plates separated by air - is this not so?.
Go back to our inductors above, could not a similar description apply to them?.
That is why inductors also have capacitance!. We have also learnt a small piece of wire has inductance. This is why capacitors with leads etc. exhibit inductance.
At some point our inductor with its inherent capacitance (called stray) will resonate (you will learn about that later) and this is called its self resonant frequency. The same rule applies to
capacitors. Mostly it is only the VHF and UHF aficionado who has to be greatly concerned about these properties. Stray capacitance is everywhere. Sometimes it can be used to advantage, usually you
take it into account (that is another function of trimmers) but often it's a monumental pain.
I said mostly - which means if you forget about it, then you can surely guarantee one day this property will bite you. That will be the day when you're working on your pet project, which of course
won't work as expected and you don't know why your theory doesn't work out in the real world - another "gotcha".
Stray capacitance in sloppy layouts can account for unexpected oscillations, no oscillations, different circuit responses and generally cause you to "become a victim of the bottle, eternally caught
in the grip of the grape and, if you become like me, occasionally ruin an otherwise good keyboard with an even much better claret - well actually 'Tyrrel's Pinot Noir' when the budget allows".
Even the best of designs and careful layouts are affected. Not only stray capacitance but stray inductance can affect you. Try some high speed, double sided, digital PCB designs.
Back to capacitors. Apart from their uses in resonant circuits they are used as:
(a) dc blocking devices - a capacitor will pass ac or rf but not dc. That is often the function of coupling capacitors in circuits. The coupling capacitor will pass our required signal but block the
dc supply from the previous stage e.g. .001 uF (1000 pF or 1N0).
(b) supply by-pass capacitor - a capacitor used to pass ac or rf. Examples include by-pass of a dc supply to an emitter bypass capacitor. When used on a dc supply line the capacitor shunts (shorts)
to ground any unwanted ac or rf to avoid contamination of our supply e.g. 0.1 uF (100,000 pF or 100N)
(c) reservoir by-pass capacitor - Used in the output of a dc rectifier to smooth out the power line ac pulses and as a reservoir between the charging pulses. Think of it as a big water storage tank.
This is where you have the high value capacitors e.g. 10,000 uF / 63V. (10,000U)
(d) emitter by-pass capacitor - In the case of an emitter by-pass our transistor may, as one example, be considered as two distinct models. One under dc conditions which sets up how we want the
transistor to operate and another under ac or rf conditions e.g. as an amplifier. In this case our emitter by-pass capacitor merely makes any emitter resistor invisible for ac or rf purposes typical
values might range from 2.2 uF down to 0.1 uF. (2U2 down to 100N)
Parallel plate capacitors have a formula for calculating capacitance:
C = 0.2248 * K * (A/t)
C = capacitance in pF
K = dielectric constant (Air = 1)
A = area of plates or dielectric in inches^2
t = thickness of dielectric or spacing in the case of air variables.
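That parallel-plate formula in code form; 0.2248 is the usual inch-based constant (epsilon-zero rescaled to square inches and inches), and the function name is mine:

```python
def plate_cap_pf(k, area_in2, spacing_in):
    """Parallel-plate capacitance (pF); area in square inches, spacing in inches."""
    return 0.2248 * k * area_in2 / spacing_in
```

As a sanity check, one square inch of air-spaced plates 0.1" apart comes out at about 2.25 pF.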
Series or Parallel Capacitors:
Series and parallel combinations calculate in an opposite fashion to inductors, i.e. parallel = C1 + C2 + Cn and series are:
C [total] = {1/ [(1 / C1) + (1 / C2) + (1 / Cn)]}
Now here come some very very useful formula. Using the one immediately above you should determine that a 22 pF capacitor in series with a 15 pF capacitor gives a total of 8.92 pF. Go ahead do it!
check me out.
What if we had say 105 pF already in circuit and needed to reduce it to say 56 pF. A real world example might be say the only air variable capacitor available is one with a maximum of 105 pF. What do
you put in series to reduce the maximum to 56 pF?.
Did you know the formula above, for two capacitors in series, can become:
C [total] = [( C1 * C2 ) / ( C1 + C2 )] ** AND:
C [series] = [( C[avail] * C[wanted] ) / ( C[avail] - C[wanted] )] THEREFORE:
C [series] = [( 105 * 56 ) / ( 105 - 56 )] = 120 pF
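Both series tricks drop straight into code; a short sketch (the function names are mine):

```python
def series_pf(c1_pf, c2_pf):
    """Net value of two capacitors in series: product over sum."""
    return c1_pf * c2_pf / (c1_pf + c2_pf)

def padder_pf(c_avail_pf, c_wanted_pf):
    """Series capacitor needed to knock c_avail down to c_wanted."""
    return c_avail_pf * c_wanted_pf / (c_avail_pf - c_wanted_pf)
```

padder_pf(105, 56) returns the 120 pF worked out above, and putting that 120 pF back in series with 105 pF returns 56 pF, closing the loop.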
If you disbelieve me then check it out using ** above as a double check. Nifty eh!. Write it in your exercise book. If that's flash try this red-hot formula.
Suppose we wanted an oscillator to tune from 7.0 Mhz to 7.2 Mhz. Now if you don't quite know what an oscillator is yet don't worry. For reasons we will explore later on in the tutorials you will find
that your capacitance MUST vary, for tuning purposes, in the direct ratio of the frequency ratio squared or C[max] to C[min ]= (F[high] / F[low])^2 . Eh What!.
In this example our frequency ratio is F[high] / F[low] = 7.2 / 7.0 = 1.02857. Our ratio must be squared so then it becomes (1.02857 * 1.02857) = 1.05796. This must be our capacitance ratio i.e. C
[max] to C[min ]= 1.05796.
Using the 105 pF air variable capacitor is no help because it usually has a minimum of 10.5 pF and that's a ratio of 10:1. Assuming the 105 pF, for the moment ***, is acceptable then apply this
formula (which is quite difficult to do in HTML language the way I wanted to do it - so bear with me):
( C[pad] + 105 ) / ( C[pad] + 10.5 ) = 1.05796 / 1
What needs to be done is re-write that formula on a piece of paper using two lines and writing the division sign in the usual way, i.e. as an underlining of both C[pad] +105 and 1.05796. I hope
everybody knows what I mean. Next we perform a simple algebraic function, we cross multiply so you should end up with:
C[pad] + 105 = (1.05796 * C[pad] ) + ( 1.05796 * 10.5)
C[pad] + 105 = (1.05796 C[pad] ) + ( 11.10858)
then subtracting 1 times C[pad] from each side as well as subtracting 11.10858 from each side (in algebra what is done to one side of the equal sign must be done to the other side - I hope that's
clear also) we get:
93.89142 = 0.05796 C[pad] and dividing both sides by 0.05796
C[pad ] = 1619.93 pF
Now firstly I accept the fact that the formula may be obscure for many people and if that is the case then I will re-write it as a graphic but including too many graphics can cause other problems. It
is MOST important (to me anyway) that I make this as easy as possible for you.
Secondly the figure of 1619.93 pF above is most likely too high to use in the real world. Having said that let's check our sums. We have 105 pF max and 10.5 pF min in parallel with 1619.93 pF which
is the same as ( 105 + 1620 ) / (10.5 + 1620 ). This calculates out to 1.05796 : 1 which when applied to our tuning range would tune from 7.0 Mhz to 7.19999 Mhz or what we set out to do. Remember
our strays earlier?. They would play havoc with this, so the 1620 pF would become in the real world 1000 pF + 560 pF + 100 pF trimmer ( i.e. 10 - 100 pF ).
I said earlier the 105 pF was possibly acceptable ***
Let's go back and reduce it to say 33 pF and redo our sums to get another C[pad ]value. Let me know your answer.
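The whole padder calculation collapses into one line of algebra: solving (C[pad] + C[max]) / (C[pad] + C[min]) = ratio for C[pad]. A sketch you can use to redo the sums for the 33 pF case too (the function name is mine):

```python
def tuning_padder_pf(c_max_pf, c_min_pf, f_low_mhz, f_high_mhz):
    """Fixed parallel capacitance so a (c_min..c_max) variable tunes f_low..f_high."""
    ratio = (f_high_mhz / f_low_mhz) ** 2   # required capacitance ratio
    return (c_max_pf - ratio * c_min_pf) / (ratio - 1)
```

For the 105 pF / 10.5 pF variable tuning 7.0 to 7.2 MHz this lands on roughly 1620 pF, as derived above.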
Some home brewers (hi-tech code word for building-it-yourself) advocate using a starting goal of 1 pF per wavelength of frequency. e.g. 7.0 Mhz = 40 metres (approx.) = 40 pF.
I must say I am not especially in love with that particular logic for many reasons which will become apparent as we become more deeply involved with filter and oscillator theory.
This is as far as I want to go on inductors and capacitors for the moment.
What is a Resistor
Resistance by technical definition "is the property of a material that determines the rate at which electrical energy is converted into heat energy for a given electrical current".
Mouthful?. Well look through the books I recommend to get a better insight. In particular if you don't already know Ohm's law backwards then learn it now!. It is as fundamental to your development as
the first breath you took the moment you were born. Believe it.
It should be fairly obvious if you have managed to stay awake so far, that a resistor must exhibit some inductance. Depending on how it is mounted on a board or elsewhere then it will exhibit some
capacitance also.
On a totally different tack, many people may not appreciate what is a cunning trick. In relatively low frequency circuits e.g. below 15 Mhz, surplus high value resistors make excellent stand-offs
when building rat's nests. I use a heap of 1 Meg ohm or more resistors for this very purpose.
Of special interest to me is just what do you guys overseas use for resistors. Here in Australia our hobbyist suppliers (see Links) offer E24 2% metal film resistors for a reasonable price and I tend
to use these. The 5% are becoming so small they usually become embedded in the feet of my grand-children. I would like to know what you use outside of N.Z., the U.S., Canada or U.K.
Now if we put two or more resistors in series (R1 and R2) then the total resistances will add together. If one were say 1K, or 1000 ohms and the other was 1K5 or 1500 ohms then the total is 2K5 or
2500 ohms.
On the other hand if we put two or more resistors in parallel (R1 and R2) then the total resistances will be less than the lowest value. If one were again say 1K, or 1000 ohms and the other was again
1K5 or 1500 ohms then the total net resistance is 600R or 600 ohms.
How is this so? Well because we have now provided two paths for the current flow and that means less overall resistance. The formula is:
(R1 X R2) / (R1 + R2) OR
(1000 X 1500) / (1000 + 1500)
Try it out for yourself on the calculator.
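The product-over-sum rule in code form, checked against the 1K / 1K5 example (the function name is mine):

```python
def parallel_ohms(r1, r2):
    """Two resistors in parallel: product over sum."""
    return r1 * r2 / (r1 + r2)
```

The same function works unchanged for two parallel inductors, since the formula is identical.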
If I have omitted anything important let me know - but remember I am NOT endeavouring to be a substitute for basic text books.
What is Reactance
Because I don't want to have herds of graphic files or complicate programming in HTML, we will throughout these Tutorials denote inductive reactance as X[L] and capacitive reactance as X[C].
Reactance is somewhat similar to resistance, but don't take that statement too literally. See - inductance, capacitance and resistance - except where resistance applies to D.C., reactance applies to
a.c. which also includes radio frequencies, or as we say in the game, r.f. for short.
Reactance might be considered as a quantity of resistance to alternating or varying frequencies.
Inductive reactance is = (2 * pi * Fo * L) and;
Capacitive reactance is 1 / ( 2 * pi * Fo * C ).
Where 2 * pi = 6.2832, Fo is Mhz, L = uH, C = pF, * means multiply and 1/(xxxx) means one divided by the result of the numbers in brackets.
Let's use our examples from the previous tutorial. If we use 159.155 uH inductance, 159.155 pF capacitance and using a frequency of 1.0 Mhz substituted into our reactance formulas you should get
reactances of XL = 1000 ohms and XC = 1000 ohms respectively. XL is dead easy or should be but XC might be difficult for you because you need to calculate as follows:-
1 / ( 6.2832 * 1,000,000 * .000000000159155 ) OR enter into your calculator as;
1 / [ 6.2832 * (1 x 10^6) * (159.155 x 10^-12) ]
Where (1 x 10^6) = one million or one followed by six zeros and;
(159.155 x 10^-12) = 159.155 with the decimal place moved twelve places to the left.
If you have a decent calculator then always enter both as follows:
(a) For frequency (always in Mhz) enter 1, then hit your exp key and always enter 6.
(b) For capacitance (always in pF) enter, in this example, 159.155, then hit your exp key and always enter 12 and then the change sign key to - or minus.
And remember reactance is always called ohms just as we call resistance by the same unit and use the same symbol.
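Both reactance formulas, coded with the same units as above (f in MHz, L in uH, C in pF; the function names are mine), so you can check your quiz answers:

```python
import math

def xl_ohms(f_mhz, l_uh):
    """Inductive reactance: 2 * pi * f * L (f in MHz, L in uH)."""
    return 2 * math.pi * (f_mhz * 1e6) * (l_uh * 1e-6)

def xc_ohms(f_mhz, c_pf):
    """Capacitive reactance: 1 / (2 * pi * f * C) (f in MHz, C in pF)."""
    return 1 / (2 * math.pi * (f_mhz * 1e6) * (c_pf * 1e-12))
```

At 1.0 MHz both the 159.155 uH inductor and the 159.155 pF capacitor come out at 1000 ohms, matching the worked example.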
1. What is the reactance of a 100pF capacitor at;
(a) 100Khz (b) 1.5 Mhz and (c) 22 Mhz.
2. What is the reactance of a 22uH inductor at;
(a) 100Khz (b) 1.5 Mhz and (c) 22 Mhz.
3. This is a revision of the previous tutorial:- What capacitor will resonate with the above inductor at;
(a) 7234Khz (b) 1224 Khz and (c) 3.5 Mhz.
Yes Ian, I did complete the above quiz in my exercise book. Can I please now see the answers to check out my work.
What is Impedance?
Now here comes one of the most confusing aspects of electronics - which I will de-mystify by taking an extremely casual approach, so what's new!. I have known electronic enthusiasts who still
couldn't even mentally visualise the concept even after 25 years.
I'll keep it dead simple, very inelegant but dead simple and give all the purists heart palpitations. I bet you walk away with a better understanding though.
If you need to know the technical answer for impedance and you should, then consult one of the must read texts I have suggested elsewhere. Assume you have available these 4 items on your bench:
(a) A series of eight fresh AA type 1.5 volt cells to create a total of 12 volts supply.
(b) A 12 volt heavy duty automotive battery - fully charged.
(c) a small 12v bulb (globe) of very, very low wattage. and;
(d) a very high wattage automotive high-beam headlight.
Now if we connect the extremely low wattage bulb to the series string of AA cells we would expect all to work well. Similarly if we connect the high wattage, high-beam headlight to the heavy duty
automotive battery all will be well. Well for a time anyway. Both of these sets are "sort" of matched together. Light duty to light duty and heavy duty to heavy duty.
Now what do you think would happen if we connect the high beam headlight to the series AA cells and conversely the low wattage bulb to the automotive battery?.
In the first case we could imagine the high beam headlight would quickly trash our tiny AA cells. In the second case our mini-wattage bulb would glow quite happily at its rated wattage for quite a long time. Why? Therein lies my explanation of impedance. Consider it!
The heavy duty battery is capable of delivering relatively large amounts of power but the series string is capable of delivering only relatively minimal power. The first is a low impedance source and
the other, in comparison is a relatively high impedance source.
On the other hand the high beam headlight is capable of consuming relatively large amounts of power but the minature bulb is capable of consuming only minimal amounts of power.
Again the first is a low impedance load and the other is a high impedance load. If you're keen to apply Ohm's law you will discover why; research it through the text books.
Meanwhile take a well deserved coffee or tea break now and think it over. Me? I'll just have another beer while I'm waiting for you.
Good break? If you were paying attention you would now be able to understand an analogy - a particularly rough but effective one;
Imagine a tiny caterpillar chewing on a large blade of grass - no problem, plenty to eat there. Now on the other hand imagine a poor cow stuck in a desert with only one similar blade of grass available to eat. I hope you have some better understanding now.
The term impedance is a general expression which can be applied to any electrical entity which impedes the flow of current. Thus this expression could be used to denote a resistance, a pure reactance
(as in previous tutorial), or as is most likely in the real world, a complex combination of both reactance and resistance.
Don't get overly concerned if you're a bit confused by that statement at the moment. However this does then lead us on to "Q".
The good news is there isn't one here.
Well what is Q and why is it so important?
This is another aspect of radio which confuses some people no end. Q can be considered to mean Factor - of - Quality (that's my expression - not an official one).
It has no dimension as such. Usually when discussing Q in resonant circuits we can look at either the capacitor or inductor. Capacitors do have a Q many times greater than that of an inductor so we
can generally ignore the capacitor and concentrate only on the Q of the inductor.
Throughout these tutorials we are dealing with resonant circuits and of primary interest to us here is the response of our circuits. Now we're heading toward the pointy end of the stick.
The standard we will observe is the response at the 3 dB points. You have previously learnt that a combination of x - capacitance with y - inductance will resonate at a z - frequency. Now a very common fallacy - and a huge inexcusable one with some designers - is, if we arbitrarily select an L with a C then all is Okey Dokey. Many constructors building say a receiver for 40 metres will blindly follow this approach.
This constructor also blindly assumes his front end filter will accept all frequencies of interest and reject those which are of no interest to him or her (notice the political correctness here - I
have a wife and 7 daughters, that's why!).
Have you always thought that was the case? I did for many years until I bit the bullet and learnt theory just as you are now beginning to learn right here.
What is the 3 dB point anyway? Well firstly let us consider the principal factor affecting Q. It's resistance. A length of wire wound into a coil to provide an inductor will have a certain D.C. resistance which can be measured with an ohmmeter (actually you would need a milli-ohmmeter because the D.C. resistance most likely would be well below 1 ohm).
However the resistance to a.c. or r.f. is altogether different, especially at low frequencies such as 455 kHz.
It is this R.F. resistance which will affect or rather limit our Q to sometimes below useful values. You may define Q as the ratio of reactance to R.F. resistance. Thus;
Q = ( 2 * pi * Fo * L )/ R
A typical practical toroidal inductor suitable for 40 metres may well be 23 turns of #20 wire wound on a T68-6 core. This inductor has an inductance of 2.42 uH and a measured Q of 295 at 7.0 MHz. As an aside the Q at 5.0 MHz is about 270 whilst at 9.0 MHz the Q has already peaked at about 302.
What have we learnt here? Firstly for a given inductance the Q is frequency dependent. It varies with frequency! Do some more sums here based on what we have learnt so far. Go on, fire up the calculator, don't now or ever take my word for it. I've been known to be a blatant liar before. Use your own initiative and always follow me on your calculator - you learn by actually doing it.
If you are alert you should have established;
(a) that the reactance of 2.42 uH at 7.0 MHz is 106.44 ohms and;
(b) it will resonate at 7.0 MHz with a 213.6 pF capacitor because your LC product was calculated to be 516.94 and further;
(c) if our Q is 295 then the r.f. resistance must be 0.36 ohms.
Now I didn't just dream all that up. I took the toroidal coil inductance and Q from the Amidon Data Book. If it's wrong - and it won't be, blame Bill Amidon.
Where does this leave us? Simple - without any load connected to our inductor/capacitor (resonator) the 3 dB bandwidth is ( Fo / Q ) or ( 7000 kHz / 295 ) or 23.729 kHz. Notice I said without any load connected to our resonator, this is referred to as Q[u] or the unloaded Q when the resonator is hanging around in thin air and is actually doing nothing of particular value.
Our resonator hasn't met either the caterpillar or the cow yet. This resonator impedance may be described as ( Q * 2 * pi * Fo * L ) or as it will in future be referred to as ( Z = Q * X[L] ). This equates to 31,400 ohms or 31K4.
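Taking the author's advice to follow along on a calculator, this short Python check of mine (not from the original tutorial) reproduces all of the numbers above for the 2.42 uH / Q = 295 toroid at 7.0 MHz:

```python
import math

L = 2.42e-6   # inductance of the T68-6 toroid, henries
Q = 295       # measured unloaded Q at 7.0 MHz (Amidon data)
Fo = 7.0e6    # resonant frequency, hertz

XL = 2 * math.pi * Fo * L                 # reactance of L at Fo
C = 1.0 / ((2 * math.pi * Fo) ** 2 * L)   # capacitance that resonates with L at Fo
R_rf = XL / Q                             # effective series r.f. resistance, R = X_L / Q
BW = Fo / Q                               # unloaded 3 dB bandwidth
Z = Q * XL                                # resonator impedance, Z = Q * X_L

print(f"X_L  = {XL:.2f} ohms")            # ~106.44 ohms
print(f"C    = {C * 1e12:.1f} pF")        # ~213.6 pF
print(f"R_rf = {R_rf:.2f} ohms")          # ~0.36 ohms
print(f"BW   = {BW / 1e3:.3f} kHz")       # ~23.729 kHz
print(f"Z    = {Z / 1e3:.1f} k-ohm")      # ~31.4k, i.e. 31K4
```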
Why the big K? Because with small type, commas and decimal places often get lost or obliterated. A very good example was a circuit requiring 4.7 ohms as an emitter resistance. The typesetter, this was in the olden days of course, had so often seen 4.7K ohms or 47K ohms repeated everywhere that he assumed your poor author was a dill and had merely omitted the K.
In an amplifier with emitter degeneration the difference between 4.7 ohms (or as we will use it, 4R7) and 4.7K (or as we will use it, 4K7) is infinitely profound. In short, it don't work! So if I say 31K4 then you should know beyond any doubt what I mean.
Circuit designers and publishers who cling to the old system should be summarily executed for inexcusable stupidity. Tell 'em I said so, that should have the same effect as hitting 'em with a wet
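To make the 4R7 / 31K4 convention concrete, here is a small helper of my own (not from the tutorial) that renders a resistance with the multiplier letter standing in for the decimal point:

```python
def rkm(ohms):
    """Render a resistance letter-as-decimal-point style, e.g. 4.7 -> '4R7', 31400 -> '31K4'."""
    for divisor, letter in ((1e6, "M"), (1e3, "K"), (1.0, "R")):
        if ohms >= divisor:
            value = ohms / divisor
            whole = int(value)
            frac = round((value - whole) * 10)
            # carry if the tenths digit rounds up to 10 (e.g. 9.96 -> 10R)
            if frac == 10:
                whole, frac = whole + 1, 0
            return f"{whole}{letter}{frac}" if frac else f"{whole}{letter}"
    # below 1 ohm the letter leads, e.g. 0.68 -> 'R68'
    return "R" + f"{ohms:.2f}"[2:]

print(rkm(4.7))     # 4R7
print(rkm(4700))    # 4K7
print(rkm(31400))   # 31K4
print(rkm(0.68))    # R68
```

The same idea extends to capacitors (4u7, 6p8 and so on), though the function above only handles ohms.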
1. In our previous tutorial we had calculated the reactance of a 22 uH inductor at;
(a) 100 kHz, (b) 1.5 MHz and (c) 22 MHz - therefore:
What are the impedances if the unloaded Q's are respectively (a) 32, (b) 170, and (c) 110?
2. What is the effective r.f. resistance of the inductor above at the respective frequencies and Q as follows;
(a) 100 kHz/32, (b) 1.5 MHz/170, and (c) 22 MHz/110
3. What is the correct method of describing the following resistances and capacitances?
(a) 0.68 ohms, (b) 10 ohms, (c) 220 ohms, (d) 1,800 ohms, (e) 82,000 ohms, (f)
470,000 ohms, (g) 5,600,000 ohms, and;
(h) 6.8 pF, (i) 180 pF, and (j) 4.7 uF
Yes Ian, I did complete the above quiz in my exercise book. Can I please now see the answers to check out my work.
More later - but if you have got this far then send me an email.
Copyright © 1998 - 1999 - 2000 Ian C. Purdie. All Rights Reserved.
URL: http://my.integritynet.com.au/purdic/basics.html
You can contact Ian C. Purdie
This site was entirely written and produced by Ian C. Purdie*
Created: 26th December, 1998 and Revised: 26th July, 2000
theta-pinch, plasma physics
1. The problem statement, all variables and given/known data
An idealized theta-pinch geometry is an infinitely long, cylindrically symmetric (∂/∂θ = 0), translationally invariant (∂/∂z = 0) plasma column with an externally applied axial magnetic field B_z_ext. This induces a large diamagnetic azimuthal current, which produces its own magnetic field opposing the external magnetic field. Assume the plasma column is in MHD equilibrium with velocity v = 0 and with magnetic field B_z(r) and gas pressure p(r).
I'm stuck on the first part of the question, which is to find the differential relation which the field B_z and the gas pressure must satisfy.
2. Relevant equations
grad p = j × B
Maxwell's equations to replace the current with the curl of B
3. The attempt at a solution
grad p = (1/mu_0)(∇ × B_z z-hat) × B_z z-hat
∇ × (B_z z-hat) = -(dB_z/dr) theta-hat
grad p = (dp/dr) r-hat
(1/mu_0)(-dB_z/dr) theta-hat × B_z z-hat = -(1/mu_0) B_z (dB_z/dr) r-hat = (dp/dr) r-hat
Is that the correct answer? Do I need to go further?
Next I need to integrate to find an expression for the externally applied magnetic field as a function of B_z and p, and I just don't see how I can integrate the expression I found above.
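For reference, the textbook radial force balance for this geometry (the relation the attempt above should reduce to after taking theta-hat × z-hat = r-hat) integrates immediately:

```latex
\frac{dp}{dr} = -\frac{1}{\mu_0} B_z \frac{dB_z}{dr}
\quad\Longrightarrow\quad
\frac{d}{dr}\left( p + \frac{B_z^2}{2\mu_0} \right) = 0
```

So p(r) + B_z^2(r)/(2 mu_0) is constant across the column; evaluating the constant outside the plasma, where p = 0 and B_z = B_z_ext, expresses the externally applied field in terms of B_z and p, which is exactly the integration asked about in the last paragraph.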
Hoyt, CO Algebra Tutor
Find a Hoyt, CO Algebra Tutor
...I am a college graduate and have worked as a substitute teacher along the I-70 Corridor for the past five (5) years. I have also worked with elementary school students, mentoring them and assisting with their studies and homework. I was a math major in college and I have kept up with the changes in mathematics.
26 Subjects: including algebra 1, English, reading, writing
...I am proficient in all levels of math from Algebra and Geometry through Calculus, Differential Equations, and Linear Algebra. I can also teach Intro Statistics and Logic. I've worked with high
school and college students, priding myself on being able to explain any concept to anyone.
11 Subjects: including algebra 1, algebra 2, calculus, geometry
...I believe that with effort and patience, anyone can become proficient in math. I have tutored students in math from pre-algebra through to partial differential equations. This includes algebra
, trigonometry, the calculus sequence, finite math and business calculus.
6 Subjects: including algebra 1, algebra 2, calculus, geometry
...I currently teach an online math class for a private university. Over the years I have enjoyed tutoring struggling students one-on-one, and I have the ability to communicate math concepts to
high school students at their level of understanding. As a high school counselor I often looked for opportunities to help students who were frustrated with their math courses.
11 Subjects: including algebra 2, GED, algebra 1, geometry
...I believe this is the key to being well rounded and successful in school and later in a career. Often times better grades are achieved simply by knowing what the teacher is looking for. I want
to help students discover methods for studying smarter and not harder.
19 Subjects: including algebra 1, English, reading, soccer
Related Hoyt, CO Tutors
Hoyt, CO Accounting Tutors
Hoyt, CO ACT Tutors
Hoyt, CO Algebra Tutors
Hoyt, CO Algebra 2 Tutors
Hoyt, CO Calculus Tutors
Hoyt, CO Geometry Tutors
Hoyt, CO Math Tutors
Hoyt, CO Prealgebra Tutors
Hoyt, CO Precalculus Tutors
Hoyt, CO SAT Tutors
Hoyt, CO SAT Math Tutors
Hoyt, CO Science Tutors
Hoyt, CO Statistics Tutors
Hoyt, CO Trigonometry Tutors
Nearby Cities With algebra Tutor
Adams City, CO algebra Tutors
Arickaree, CO algebra Tutors
Aspen Park, CO algebra Tutors
Bovina, CO algebra Tutors
Cadet Sta, CO algebra Tutors
Deckers, CO algebra Tutors
Dupont, CO algebra Tutors
Foxton, CO algebra Tutors
Irondale, CO algebra Tutors
Montclair, CO algebra Tutors
Raymer, CO algebra Tutors
Riverbend, CO algebra Tutors
Schriever Air Sta, CO algebra Tutors
Virginia Dale, CO algebra Tutors
Western Area, CO algebra Tutors
Matches for:
Collected Works
2001; 615 pp; hardcover
Volume: 16
ISBN-10: 0-8218-2925-4
ISBN-13: 978-0-8218-2925-7
List Price: US$122
Member Price: US$97.60
Order Code: CWORKS/16.2
This item is also sold as part of the following set:
A lead figure in twentieth century noncommutative algebra, S. A. Amitsur's contributions are wide-ranging and enduring. This volume collects almost all of his work. The papers are organized into
broad topic areas: general ring theory, rings satisfying a polynomial identity, combinatorial polynomial identity theory, and division algebras. Included are essays by the editors on Amitsur's work
in these four areas and a biography of Amitsur written by A. Mann. This volume makes a fine addition to any mathematics book collection.
Graduate students and research mathematicians interested in ring theory, combinatorial algebra, and number theory.
"There is no doubt that the collected papers should help and inspire the reader to look at the topic even more deeply."
-- Mathematical Reviews
"These volumes are a collection of short and very readable papers, which many working algebraists will want to own; they should certainly form part of every mathematics library."
-- Zentralblatt MATH
Combinatorial polynomial identity theory
• Amitsur and combinatorial P.I. theory
• Minimal identities for algebras
• Remarks on minimal identities for algebras
• The identities of PI-rings
• Identities and generators of matrix rings
• Identities and linear dependence
• On a central identity for matrix rings
• Alternating identities
• PI-algebras and their cocharacters
• The sequence of codimensions of PI-algebras
Division algebras
• Amitsur and division algebras
• Contributions to the theory of central simple algebras
• La représentation d'algèbres centrales simples
• Construction d'algèbres centrales simples sur des corps de caractéristique zéro
• Non-commutative cyclic fields
• Differential polynomials and division algebras
• Generic splitting fields of central simple algebras
• Finite subgroups of division rings
• Some results on central simple algebras
• On arithmetic functions
• Simple algebras and cohomology groups of arbitrary fields
• Some results on arithmetic functions
• Finite dimensional central division algebras
• Homology groups and double complexes for arbitrary fields
• On a lemma in elementary proofs of the prime number theorem
• Complexes of rings
• On central division algebras
• The generic division rings
• Generic abelian crossed products and \(p\)-algebras
• Division algebras of degree 4 and 8 with involution
• On the characteristic polynomial of a sum of matrices
• Generic splitting fields. Brauer groups in ring theory and algebraic geometry.
• Extension of derivations to central simple algebras
• Kummer subfields of Malcev-Neumann division algebras
• Symplectic modules
• Totally ramified splitting fields of central simple algebras over Henselian fields
• Galois splitting fields of a universal division algebra
• Elements of reduced trace 0
• Finite-dimensional subalgebras of division rings
• Acknowledgment
solving an inequality #2
I'm terrible at these... -3 < (1/x) < 1. If possible, could someone show a step-by-step solution?
Careful: multiplying an inequality by x flips its direction when x is negative, so you must split into cases rather than solving each side blindly.
Case x > 0: here 1/x > -3 holds automatically (1/x is positive), and 1/x < 1 gives x > 1.
Case x < 0: here 1/x < 1 holds automatically (1/x is negative), and 1/x > -3 gives 1 < -3x (the inequality flips when multiplying by x), so x < -1/3.
Mark both pieces on the number line: the solution is x < -1/3 or x > 1.
Patent application title: PRECODING AND FEEDBACK CHANNEL INFORMATION IN WIRELESS COMMUNICATION SYSTEM
The present invention relates to precoding and feedback of channel information in a wireless communication system.
A method, comprising: mapping one or two codewords to the layers; precoding a mapped set of symbols using a precoding matrix derived from at least two downlink channel information, where one of them
is for rank adaptation and power allocation and the other of them is for the precoding without rank adaptation and power allocation; and transmitting a signal that comprises the precoded set of symbols.
The method in claim 1, wherein the precoding matrix is P=DW, where D is the first level matrix for rank adaptation and power allocation, and W is the second level matrix for the precoding without
rank adaptation and power allocation.
The method in claim 1, wherein the precoding matrix comprises a first level matrix for rank adaptation and power allocation and a second level matrix for the precoding without rank adaptation and
power allocation, and the precoding precodes the mapped set of symbols using the first and the second level matrices in turn.
The method in claim 2, wherein the first level matrix is either a unitary matrix or a non-unitary matrix and the second level matrix is either a unitary matrix or a non-unitary matrix.
The method in claim 1, wherein the downlink channel information is the feedback PMI (Precoding Matrix Index).
The method in claim 5, wherein the feedback PMI for rank adaptation and power allocation is feedbacked less frequently than the feedback PMI for the precoding without rank adaptation and power allocation.
The method in claim 5, wherein the feedback PMI for rank adaptation and power allocation is by long term feedback, and the feedback PMI for the precoding without rank adaptation and power allocation
is by short term feedback.
A method, comprising: mapping one or two codewords to the layers; precoding a mapped set of symbols using a precoding matrix P=DW, where D is the first level matrix for rank adaptation and power
allocation and W is the second level matrix for the precoding without rank adaptation and power allocation; and transmitting a signal that comprises the precoded set of symbols.
The method in claim 8, wherein the first level matrix is derived from the feedback PMI (Precoding Matrix Index) for rank adaptation and power allocation by long term feedback and the second level
matrix is derived from the feedback PMI for the precoding without rank adaptation and power allocation by short term feedback.
A method for feedbacking channel information for the mobile terminal, the method comprising: estimating a downlink channel from the received signal; selecting one matrix for rank adaptation and power
allocation and the other matrix for the precoding without rank adaptation and power allocation based on the estimated channel state information; and feedbacking the PMI (Precoding Matrix Index) of
the selected matrix for rank adaptation and power allocation by long term and the PMI of the selected matrix for the precoding without rank adaptation and power allocation by short term to the base station.
The method in claim 10, wherein the selecting one matrix comprises: picking one matrix for rank adaptation and power allocation from a first level codebook; picking the other matrix for the precoding
without rank adaptation and power allocation from the second level codebook; and selecting the matrices which have the highest sum rate or Signal-to-Interference-and-Noise Ratio (SINR) when combined
with two picked matrices.
The method in claim 10, wherein one matrix for rank adaptation is either a unitary matrix or a non-unitary matrix and the other matrix for the original precoding is either a unitary matrix or a
non-unitary matrix.
A base station, comprising: a layer mapper configured to map one or two codewords to the layers; and a precoder configured to precode a mapped set of symbols using a precoding matrix derived from at
least two downlink channel information, where one of them is for rank adaptation and power allocation and the other of them is for the precoding without rank adaptation and power allocation, and to
transmit a signal that comprises the precoded set of symbols.
The base station in claim 13, wherein the downlink channel information is the feedback PMI (Precoding Matrix Index).
The base station in claim 14, wherein the feedback PMI for rank adaptation and power allocation is feedbacked less frequently than the feedback PMI for the precoding without rank adaptation and power allocation.
The base station in claim 14, wherein the feedback PMI for rank adaptation and power allocation is by long term feedback, and the feedback PMI for the precoding without rank adaptation and power allocation is by short term feedback.
A base station, comprising: a layer mapper configured to map one or two codewords to the layers; and a precoder configured to precode a mapped set of symbols using a precoding matrix P=DW, where D is
the first level matrix for rank adaptation and power allocation and W is the second level matrix for the precoding without rank adaptation and power allocation, and to transmit a signal that
comprises the precoded set of symbols.
The base station in claim 17, wherein the first level matrix is derived from the feedback PMI (Precoding Matrix Index) for rank adaptation and power allocation by long term feedback and the second
level matrix is derived from the feedback PMI for the precoding without rank adaptation and power allocation by short term feedback.
A mobile terminal, comprising: an estimator configured to estimate a downlink channel from a received signal, select one matrix for rank adaptation and the other matrix for the precoding without rank
adaptation and power allocation based on the estimated channel state information and feedback the PMI (Precoding Matrix Index) of the selected matrix for rank adaptation and power allocation by long
term and the PMI of the selected matrix for the precoding without rank adaptation and power allocation by short term to the base station; and a post-decoder configured to decode the received signal
to recover the set of data symbols.
The mobile terminal in claim 19, wherein the estimator picks one matrix for rank adaptation and power allocation from a first level codebook, picks the other matrix for the precoding without rank
adaptation and power allocation from the second level codebook and selects the matrices which have the highest sum rate or SINR when combined with two picked matrices.
The mobile terminal in claim 20, wherein one matrix for rank adaptation and power allocation is either a unitary matrix or a non-unitary matrix and the other matrix for the original precoding is
either a unitary matrix or a non-unitary matrix.
This application is the National Stage Entry of International Application PCT/KR2009/005705, filed on Oct. 6, 2009, which is incorporated herein by reference for all purposes as if fully set forth
BACKGROUND [0002]
1. Field
The present invention relates to precoding and feedback of channel information in a wireless communication system.
2. Discussion of the Background
There are a number of multi-antenna transmission schemes or transmission modes, such as transmit diversity, closed-loop spatial multiplexing or open-loop spatial multiplexing. Closed-loop MIMO (CL-MIMO) relies on more extensive feedback from the mobile terminal.
SUMMARY [0006]
In accordance with an aspect, there is provided a method or a system, comprising: mapping one or two codewords to the layers; precoding a mapped set of symbols using a precoding matrix derived from at least two downlink channel information, where one of them is for rank adaptation and power allocation and the other of them is for the precoding without rank adaptation and power allocation; and transmitting a signal that comprises the precoded set of symbols.
In accordance with another aspect, there is provided a method or a system for feeding back channel information for the mobile terminal, the method comprising: estimating a downlink channel from the received signal; selecting one matrix for rank adaptation and power allocation and the other matrix for the precoding without rank adaptation and power allocation based on the estimated channel state information; and feeding back the PMI (Precoding Matrix Index) of the selected matrix for rank adaptation and power allocation by long term and the PMI of the selected matrix for the original precoding by short term to the base station.
BRIEF DESCRIPTION OF THE DRAWINGS [0008]
FIG. 1 is the block diagram of the wireless communication system using closed-loop spatial multiplexing according to one embodiment.
FIG. 2 is a diagram of the precoder according to another embodiment.
FIG. 3 is the flowchart of the CL-MIMO according to other embodiment.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the drawings have not necessarily been drawn to scale. For example, the dimensions of some of the
elements are exaggerated relative to other elements for purposes of promoting and improving clarity and understanding. Further, where considered appropriate, reference numerals have been repeated
among the drawings to represent corresponding or analogous elements.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS [0012]
Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings.
There are a number of multi-antenna transmission schemes or transmission modes, such as transmit diversity, closed-loop spatial multiplexing or open-loop spatial multiplexing. Closed-loop MIMO (CL-MIMO) relies on more extensive feedback from the mobile terminal.
Unitary precoding is employed for Single User CL-MIMO (SU CL-MIMO), and unitary codebooks for different antenna configurations are defined. In LTE-Advanced, the precoding can also be non-unitary. Moreover, rank adaptation is also considered in LTE to enhance the performance.
However, in LTE, there is no power allocation among different layers if the rank is larger than 1. It is well known that unitary precoding with water filling power allocation is the optimal solution
for CL-MIMO. So the original CL-MIMO in LTE is not optimal.
In this exemplary embodiment, a multi-level precoding scheme is proposed for CL-MIMO. In the proposed scheme, we consider the use of two-level precoding. The first level precoding is for rank adaptation and power allocation, and the second one is for unitary precoding. With the proposed scheme, we can get the optimal solution for CL-MIMO, so it can increase the CL-MIMO performance. By harmonizing rank adaptation and power allocation, we can use fewer coding bits to convey the same information, which reduces the overhead. Moreover, by using multilevel precoding, we can separately feed back the PMI for each level. The first level PMI is fed back less frequently than the second one, so the feedback overhead is further reduced.
FIG. 1 is the block diagram of the wireless communication system using closed-loop spatial multiplexing according to one embodiment.
Referring to FIG. 1, the communication system may be any type of wireless communication system, including but not limited to a MIMO system, SDMA system, CDMA system, OFDMA system, OFDM system, etc.
In the communication system, the wireless communication system using closed-loop spatial multiplexing according to one embodiment comprises a transmitter 10 and a receiver 20. The transmitter 10 may
act as a base station, while the receiver 20 may act as a subscriber station, which can be virtually any type of wireless one-way or two-way communication device such as a cellular telephone,
wireless equipped computer system, and wireless personal digital assistant. Of course, the receiver/subscriber station 20 can also transmits signals which are received by the transmitter/base station
10. The signals communicated between the transmitter 10 and the receiver 20 can include voice, data, electronic mail, video, and other data, voice, and video signals.
In operation, the transmitter 10 transmits a signal data stream through one or more antennas and over a channel to a receiver 20, which combines the received signal from one or more receiver antennas
to reconstruct the transmitted data. To transmit the signal, the transmitter 10 prepares a transmission signal represented by the vector for the signal.
The transmitter 10 comprises a layer mapper 30 and a precoder 40.
The layer mapper 30 of the transmitter 10 maps one or two codewords, corresponding to one or two transport blocks, to the N_L layers, where N_L may range from a minimum of one layer up to a maximum number of layers equal to the number of antenna ports. In case of multi-antenna transmission, there can be up to two transport blocks of dynamic size for each TTI (Transmission Time Interval), where each transport block corresponds to one codeword in case of downlink spatial multiplexing. In other words, the block of modulation symbols (one block per transport block) is referred to as a codeword. If there is only one codeword, we call it single codeword (SCW). Otherwise, we call it multiple codeword (MCW).
After layer mapping by the layer mapper 30, a set of N_L symbols (one symbol from each layer) is linearly combined and mapped to the N_A antenna ports by the precoder 40. This combining/mapping can be described by means of a precoding matrix P of size N_A × N_L.
In various example embodiments, the precoding matrix P is implemented with the matrix P=WD, where D is a first level matrix for rank adaptation and power allocation, and W is a second level matrix for the original precoding. W can be either unitary or non-unitary, and carries no power allocation information.
The precoder 40 has its own codebook, which is accessed to obtain a transmission profile and/or precoding information to be used to process the input data signal to make best use of the existing
channel conditions for individual receiver stations. In addition, the receiver 20 includes the same codebook for use in efficiently transferring information in either the feedback or feedforward
channel, as described herein below.
In various embodiments, the codebook is constructed as a composite product codebook from separable sections, where the codebook index may be used to access the different sections of the codebook. For
example, one or more predetermined bits from the codebook index are allocated for accessing the first level matrix, while a second set of predetermined bits from the second level index is allocated
to indicate the values for the second level matrix.
In various embodiments, instead of having a single codebook at each of the transmitter 10 and the receiver 20, separate codebooks can be stored so that there is, for example, a codebook for the first
level precoding matrix W, a codebook for the second level matrix D. In such a case, separate indices may be generated wherein each index points to a codeword in its corresponding codebook, and each
of these indices may be transmitted over a feedback channel to the transmitter, so that the transmitter uses these indices to access the corresponding codewords from the corresponding codebooks and
determine a transmission profile or precoding information.
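As a concrete illustration of the composite-index idea, a combined codebook index can be packed and unpacked by bit masking. The bit widths below are my own assumptions for illustration; the application does not specify them:

```python
# Hypothetical split: 2 high bits select the first-level (rank/power) matrix D,
# 4 low bits select the second-level matrix W. Widths are illustrative only.
D_BITS, W_BITS = 2, 4

def pack_pmi(d_index, w_index):
    """Combine the two level indices into one composite codebook index."""
    assert 0 <= d_index < (1 << D_BITS) and 0 <= w_index < (1 << W_BITS)
    return (d_index << W_BITS) | w_index

def unpack_pmi(pmi):
    """Recover the (first-level, second-level) indices from a composite index."""
    return pmi >> W_BITS, pmi & ((1 << W_BITS) - 1)

pmi = pack_pmi(2, 9)
print(pmi, unpack_pmi(pmi))  # 41 (2, 9)
```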
FIG. 2 is a diagram of the precoder according to another embodiment.
Referring to FIG. 2, after layer mapping by the layer mapper 30, a set of N symbols (one symbol from each layer) is linearly combined and mapped to the N antenna ports by the precoder 40.
The precoder 40 comprises two precoders 42 and 44 operating at two levels to optimize performance. The first-level precoder 42 is for rank adaptation and power allocation; the second-level precoder 44 is for the original precoding.
In various example embodiments, the first precoder 42 may precode a set of symbols from the layer mapper 30 by means of a precoding matrix D of size N × N. The second precoder 44 may then precode the set of symbols from the first precoder 42 by means of a precoding matrix W. The precoding matrix D is a first-level matrix for rank adaptation and power allocation, and the precoding matrix W is a second-level matrix for the original precoding. As a result, the first and second precoders 42 and 44 together precode a set of symbols by means of the matrix P = WD.
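As a sketch of how the composite precoder acts on a layer-mapped symbol vector, the small example below forms P = WD for a 2×2 system and applies it to one symbol per layer. The particular W (a unitary rotation) and D (a diagonal power allocation) are illustrative choices, not matrices taken from this patent's codebooks.

```python
import math

def matmul(A, B):
    """Multiply two 2x2 matrices (lists of lists of numbers)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    """Apply a 2x2 matrix to a length-2 vector."""
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

# First-level matrix D: rank adaptation and power allocation (diagonal).
D = [[math.sqrt(0.8), 0.0],
     [0.0, math.sqrt(0.2)]]

# Second-level matrix W: the "original" precoding (here a unitary rotation).
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
W = [[c, -s],
     [s,  c]]

# Composite precoder P = W D, applied to one symbol per layer.
P = matmul(W, D)
symbols = [1 + 0j, 0 + 1j]
antenna_signals = matvec(P, symbols)
```

Applying P in one shot is equivalent to first scaling each layer by D and then rotating by W, which is exactly the two-stage structure of precoders 42 and 44.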
To assist the base station in selecting a suitable precoding matrix for transmission by the transmitter 10, the receiver/mobile terminal 20 may report channel information such as a recommended number of layers (expressed as a Rank Indication, RI) or a recommended precoding matrix (Precoding Matrix Index, PMI) corresponding to that number of layers, depending on estimates of the downlink channel conditions.
Referring to FIGS. 1 and 2 again, the receiver 20 comprises a channel estimator 50 and a post-decoder 60.
The channel estimator 50 of the receiver 20 estimates the downlink channel condition and feeds back at least one of the RI and the PMI to the transmitter 10. The channel estimator 50 may perform many kinds of codebook-based PMI feedback.
The receiver 20 estimates the channel by means of the channel estimator 50. Based on the estimated channel information, the receiver 20 then selects the precoding matrix for each level from the corresponding codebooks so as to give the system the highest sum rate. Once the precoding matrix for each level is decided, the receiver/mobile terminal 20 separately feeds back the PMIs of both levels to the transmitter 10.
In codebook-based PMI feedback, the receiver/mobile terminal 20 feeds back the precoding matrix index (PMI) of the preferred matrix in the codebook to the transmitter/base station 10 to support CL-MIMO (closed-loop MIMO) operation in a wireless communication system.
The feedback frequency of the receiver 20 differs between the precoding levels. The first-level precoding is for rank adaptation and power allocation, which is determined by the channel amplitude. The second-level precoding is the original precoding, which is mainly determined by the phase. Since the phase changes much faster than the amplitude, the PMI feedback for the first-level precoding changes much more slowly than the PMI feedback for the second level. The first-level precoding is therefore reported by long-term feedback and the second level by short-term feedback, so multilevel precoding can reduce the feedback overhead.
The transmitter 10 receives long-term PMI feedback for the first-level precoding and short-term PMI feedback for the second-level precoding. The transmitter 10 precodes the set of symbols by means of the precoding matrix P = WD based on the two fed-back PMIs as shown in FIG. 1, where D is the first-level matrix for rank adaptation and power allocation, and W is the second-level matrix for the original precoding. In the other embodiment, shown in FIG. 2, the transmitter 10 precodes the set of data symbols by means of the two level precoders 42 and 44 based on the two fed-back PMIs. For example, the first precoder 42 and the second precoder 44 in turn precode the set of data symbols by means of the matrices D and W, respectively, based on the long-term and short-term feedback PMIs. The transmitter 10 then transmits the precoded data symbols on different antennas.
The receiver 20 recovers the original data symbols by means of the post-decoder 60, using the previously fed-back combination of precoding matrices. The post-decoder 60 processes the received signal and decodes the precoded symbols.
FIG. 3 is a flowchart of CL-MIMO operation according to another embodiment.
Referring to FIGS. 1 to 3, in multilevel-precoding CL-MIMO the receiver 20 estimates the channel (S10). Based on the estimated channel information, the receiver 20 computes the sum rate for all possible combinations of the two level precoding matrices from the codebooks.
The receiver 20 picks one matrix from the first-level codebook and another from the second-level codebook, and then computes the sum rate of the system when these two matrices are combined for precoding. The receiver 20 computes the sum rate for all possible combinations and selects the one with the highest sum rate. In other words, the receiver 20 selects the precoding matrix for each level from the corresponding codebooks (S20) such that the combination has the highest sum rate among all possible combinations.
Once the precoding matrix for each level is decided, the receiver 20 feeds back the PMIs of the matrices in the best combination to the transmitter 10.
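The exhaustive selection described above can be sketched as follows. The 2×2 channel, the toy two-entry codebooks, and the log-det sum-rate formula rate = log2 det(I + (SNR/Nt)·H P Pᴴ Hᴴ) are illustrative assumptions, not values or formulas quoted from the patent.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_T(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def sum_rate(H, P, snr):
    """rate = log2 det(I + (snr/2) * H P P^H H^H) for a 2x2 system."""
    HP = matmul(H, P)
    M = matmul(HP, conj_T(HP))          # H P P^H H^H
    A = [[(1 if i == j else 0) + (snr / 2) * M[i][j] for j in range(2)]
         for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return math.log2(abs(det))

# Toy codebooks (assumed): first level D (rank/power), second level W.
r = math.sqrt(0.5)
D_book = [[[1, 0], [0, 0]], [[r, 0], [0, r]]]
W_book = [[[1, 0], [0, 1]], [[r, r], [r, -r]]]

H = [[0.9 + 0.1j, 0.2 - 0.3j],
     [0.1 + 0.4j, 0.8 - 0.2j]]          # assumed channel estimate

# Brute-force search over all (first-level, second-level) combinations.
best = max(((i, j) for i in range(len(D_book)) for j in range(len(W_book))),
           key=lambda ij: sum_rate(H, matmul(W_book[ij[1]], D_book[ij[0]]),
                                   snr=10.0))
pmi_first, pmi_second = best
```

The pair of indices found this way is exactly what the receiver would feed back as the two PMIs.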
In multilevel precoding, the first level is for rank adaptation and power allocation, which is determined by the channel amplitude. The second-level precoding is the original precoding, which is mainly determined by the phase. Since the phase changes much faster than the amplitude, the PMI feedback for the first-level precoding changes much more slowly than that for the second level.
In multilevel precoding, the first level is for rank adaptation and power allocation, so the first-level codebook is not unitary. By harmonizing rank adaptation and power allocation, fewer coding bits can be used to convey the same information, which reduces the overhead.
1) For rank adaptation information in the codebook, the identity matrix with some of its 1 elements replaced by zeros can be used. This also indicates that equal power allocation is used, so the number of coding bits is reduced.
2) Power allocation is attached to each rank; the total power is unchanged. A diagonal matrix is used in the codebook for power allocation.
3) The exact values for power allocation can be derived from a transmit adaptive antennas (TxAA) codebook, or obtained by simulation to get optimized values.
Table 1 gives an example of the first-level codebook for 2×2 CL-MIMO.
TABLE 1
Codebook index | Codebook           | Meaning
0              | diag(1, 0)         | Rank 1
1              | (1/√2)·diag(1, 1)  | Rank 2, equal power allocation
2              | diag(2/√5, 1/√5)   | Rank 2, unequal power allocation
3              | diag(1/√5, 2/√5)   | Rank 2, unequal power allocation
Referring to Table 1, each codebook index corresponds to one codebook entry, and each entry encodes both rank adaptation and power allocation. For example, the entry for codebook index 0 is diag(1, 0), which means rank 1. The entry for index 1 is (1/√2)·diag(1, 1), which means rank 2 with equal power allocation. The entry for index 2 is diag(2/√5, 1/√5), which means rank 2 with unequal power allocation, and the entry for index 3 is diag(1/√5, 2/√5), which also means rank 2 with unequal power allocation.
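One sanity check on such a first-level codebook is that every entry preserves total transmit power, i.e. trace(D Dᴴ) = 1. The sketch below builds the four diagonal entries as read from the garbled Table 1 above (the specific values such as diag(2/√5, 1/√5) are this reconstruction, not a verified quotation) and checks that property.

```python
import math

# First-level codebook entries, stored as their diagonals (d1, d2).
codebook = {
    0: (1.0, 0.0),                               # rank 1
    1: (1 / math.sqrt(2), 1 / math.sqrt(2)),     # rank 2, equal power
    2: (2 / math.sqrt(5), 1 / math.sqrt(5)),     # rank 2, unequal power
    3: (1 / math.sqrt(5), 2 / math.sqrt(5)),     # rank 2, unequal power
}

def total_power(diag):
    """trace(D D^H) for a real diagonal D."""
    return sum(d * d for d in diag)

powers = {idx: total_power(d) for idx, d in codebook.items()}
```

All four entries come out with unit total power, which is consistent with the statement above that power allocation leaves the total power unchanged.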
Hence the receiver 20 does not need to feed back the PMIs for both levels all the time. In every feedback, the receiver 20 feeds back the PMI for the second-level precoding; only once every several feedbacks does it also feed back the PMI for the first-level precoding.
Before the PMI feedback, the receiver 20 checks whether the PMI of the first level needs to be fed back (S30), because the first-level feedback is less frequent than the second-level feedback. If first-level feedback is needed, the receiver 20 feeds back the PMIs for both precoding levels to the transmitter separately (S40). Otherwise, it feeds back only the second-level PMI (S50).
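The S30/S40/S50 branching amounts to a simple periodic schedule. The sketch below assumes a hypothetical period of one first-level report per 10 feedback instants (the period is an illustrative choice, not specified in the text) and counts how many PMI reports are sent compared with always reporting both levels.

```python
FIRST_LEVEL_PERIOD = 10   # assumed long-term feedback period (in instants)
N_INSTANTS = 100

reports = 0
for t in range(N_INSTANTS):
    send_first_level = (t % FIRST_LEVEL_PERIOD == 0)   # the S30 check
    if send_first_level:
        reports += 2   # S40: feed back both first- and second-level PMIs
    else:
        reports += 1   # S50: feed back only the second-level PMI

always_both = 2 * N_INSTANTS
saving = 1 - reports / always_both
```

With these assumed numbers the scheme sends 110 reports instead of 200, a 45% reduction in PMI feedback, which is the overhead saving the text describes.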
The transmitter 10 precodes the data symbols using the two-level precoder based on the fed-back PMIs, and then transmits the precoded data symbols on different antennas.
The receiver 20 recovers the original data symbols by means of the post-decoder 60, using the previously fed-back combination of precoding matrices.
In the original LTE CL-MIMO, there is no power allocation among the different layers. It is well known that unitary precoding combined with water-filling power allocation is the optimal solution for CL-MIMO, so the original CL-MIMO in LTE is not optimal.
In this exemplary embodiment, a multilevel precoding scheme is proposed for CL-MIMO using two precoding levels. The first-level precoding is for rank adaptation and power allocation, and the second level is for the original precoding. With the proposed scheme, the optimal solution for CL-MIMO can be obtained, which increases CL-MIMO performance.
By harmonizing rank adaptation and power allocation, fewer coding bits can be used to convey the same information, which reduces the overhead.
By using multilevel precoding, the PMI for each level can be fed back separately. Since the fed-back PMI for the first-level precoding changes much more slowly than that for the second level, the feedback frequency differs between levels: the first-level PMI is fed back less frequently than the second-level PMI. Multilevel precoding therefore reduces the feedback overhead.
The methods and systems as shown and described herein may be implemented in software stored on a computer-readable medium and executed as a computer program on a general purpose or special purpose
computer to perform certain tasks. For a hardware implementation, the elements used to perform various signal processing steps at the transmitter (e.g., coding and modulating the data, precoding the
modulated signals, preconditioning the precoded signals, and so on) and/or at the receiver (e.g., recovering the transmitted signals, demodulating and decoding the recovered signals, and so on) may
be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs),
field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination
thereof. In addition or in the alternative, a software implementation may be used, whereby some or all of the signal processing steps at each of the transmitter and receiver may be implemented with
modules (e.g., procedures, functions, and so on) that perform the functions described herein. It will be appreciated that the separation of functionality into modules is for illustrative purposes,
and alternative embodiments may merge the functionality of multiple software modules into a single module or may impose an alternate decomposition of functionality of modules. In any software
implementation, the software code may be executed by a processor or controller, with the code and any underlying or processed data being stored in any machine-readable or computer-readable storage
medium, such as an on-board or external memory unit.
Although the described exemplary embodiments disclosed herein are directed to various MIMO precoding systems and methods for using the same, the present invention is not necessarily limited to the example embodiments illustrated herein. For example, various embodiments of a MIMO precoding system and design methodology disclosed herein may be implemented in connection with various proprietary or wireless communication standards, such as IEEE 802.16e, 3GPP-LTE, DVB and other multi-user MIMO systems. Thus, the particular embodiments disclosed above are illustrative only and should not be taken
as limitations upon the present invention, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings
herein. Accordingly, the foregoing description is not intended to limit the invention to the particular form set forth, but on the contrary, is intended to cover such alternatives, modifications and
equivalents as may be included within the spirit and scope of the invention as defined by the appended claims so that those skilled in the art should understand that they can make various changes,
substitutions and alterations without departing from the spirit and scope of the invention in its broadest form.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any element(s) that
may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. As used
herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list
of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Patent applications by Jianjun Li, Seoul KR
Patent applications by Kyoungmin Park, Goyang-Si KR
Patent applications by PANTECH CO., LTD.
31 Caroline St. N. Waterloo Ontario, Canada
N2L 2Y5
Tel: (519) 569-7600
Fax: (519) 569-7611
Postdoctoral Researcher
Simone Giombi
PhD Stony Brook University, YITP (2007)
Superstring Theory
Position: Postdoctoral Researcher
Research Interests
My research interests are in string theory and related topics, with a focus on the interplay between string theory and quantum field theory. Currently my work is centered on aspects of the AdS/CFT
correspondence in various dimensions. For instance, I have recently investigated higher spin gauge theories in AdS4 and their 3d CFT duals. Another research direction that I am pursuing is the
calculation of quantum corrections in the AdS5 x S5 superstring sigma model, and comparison to the strong coupling expansion of certain observables in the 4d N=4 SYM theory. I am also interested in
the dynamics of certain non-local operators in gauge theories, such as Wilson loops, and their dual string theory description.
Positions Held
• Nov. 2010-present, Postdoctoral Fellow, String Theory Group, Perimeter Institute
• Sep. 2007-Oct. 2010, Postdoctoral Fellow, High Energy Theory Group, Harvard University
• May 2004-Aug. 2007, Research Assistant, YITP, Stony Brook University
Recent Talks
• ``Higher spin theories and holography", Neve-Shalom, Israel, May 2011, PhyMSI conference, Cargese, July 2011
• ``Higher spins and vector models: the three-point functions", Simons Center, Stony Brook, March 2011
• ``Precision tests of AdS/CFT integrability from strings in the AdS light-cone gauge", Princeton, February 2011 and McGill University, March 2011
• ``Higher spins and holography", MIT, May 2010
• ``Quantum AdS_5 \times S^5 superstring in the AdS light-cone gauge", Bologna, March 2010, at the conference ``Problemi Attuali di Fisica Teorica", Vietri sul Mare, March 2010 and at Brandeis,
April 2010
• ``Exact results in N=4 SYM from 2d Yang-Mills", Harvard, October 2009, and Caltech, November 2009
• ``Supersymmetric Wilson loops from weak to strong coupling", Simons Workshop in Mathematics and Physics, Stony Brook, August 2009
• ``Comments on supersymmetric Wilson loops on a two sphere", Brown, April 2009
• ``Wilson loops: from pseudo-holomorphic surfaces to 2d Yang-Mills and Matrix Models", Imperial College, London, March 2009 and Bologna, April 2009
• ``One-loop partition functions of 3D gravity", McGill University, April 2008, Harvard, April 2008 and U. of North Carolina at Chapel Hill, December 2008
• ``Spin Chains in N=6 Superconformal Chern-Simons-Matter Theory'', at the Workshop ``3D SCFTs and Their Gravity Duals", McGill University, September 2008
• ``A new family of supersymmetric Wilson loops", Brown University, November 2007
• PIRSA:11030098, Higher Spin Theories - Lecture 4, 2011-03-29, Higher Spin Theories
• PIRSA:11030097, Higher Spin Theories - Lecture 3, 2011-03-28, Higher Spin Theories
• PIRSA:11030094, Higher Spin Theories - Lecture 2, 2011-03-25, Higher Spin Theories
• PIRSA:11030093, Higher Spin Theories - Lecture 1, 2011-03-24, Higher Spin Theories
Chemical Equilibrium
I. Introduction
As early as 1799 C. Berthollet proposed the idea that chemical reactions are reversible, noting the ion exchange reaction:
2NaCl + CaCO3 ⇌ Na2CO3 + CaCl2
Later, in the mid 1800's, Berthelot and Gilles showed that the concentration of reactants has an effect on the concentration of products in chemical reactions. Shortly afterward, Guldberg and Waage showed that an equilibrium is reached in chemical reactions, and that equilibrium can be approached from either direction. In 1877 van't Hoff quantified the expression for the equilibrium of a chemical reaction and showed that it is a function of the concentrations of the various species involved, and that the concentrations appear as powers corresponding to the stoichiometric numbers in the balanced chemical equation.
The relationship between concentration and thermodynamic driving force (or lack thereof) in a chemical reaction or simple transformation is one of the most important in all of thermochemistry. It is important to recognize that this driving force results from maximizing entropy, and therefore minimizing Gibbs energy, and naturally leads to the concept of equilibrium. The relationship between equilibrium conditions and concentration follows from the definition of the Gibbs energy, or chemical potential, μi = μi° + RT ln ai.
II. Equilibrium Expression
A. Derivation
In order to derive a generalized expression for the equilibrium of a chemical reaction we must start with a balanced equation (conservation of mass). For example,
2A + 3B ⇌ A2B3
For any balanced chemical reaction we can write the following expression,
Σi νi Ai = 0
where Ai = species type and νi = stoichiometric number (a positive integer for products, a negative integer for reactants). Because an equilibrium condition can occur partway through completion of the reaction in either direction, a measure of the extent of reaction must be introduced. Moreover, since when a chemical reaction occurs the changes in the amounts of the species involved are proportional to their stoichiometric numbers νi in the balanced chemical equation, we can write:
ni = ni° + νi x
where ni° is the amount of species i at time zero, ni is the amount of species i at some later time, νi is its stoichiometric number, and x is the extent of reaction.
Since the change in Gibbs energy is our convenient thermodynamic property for probing (or defining) equilibrium at constant T and P (conditions appropriate in the lab), we consider the total differential of G for an open system:
dG = −S dT + V dP + Σi μi dni
Substituting the expression for dni in terms of the extent of reaction x (dni = νi dx), we have
dG = −S dT + V dP + (Σi νi μi) dx
The quantity ΔrG = (∂G/∂x)T,P = Σi νi μi is called the reaction Gibbs energy.
As a reaction takes place, the Gibbs energy of the system continues to decrease until it reaches a minimum value. At equilibrium (constant T and P), the Gibbs energy is at the minimum and ΔrG is equal to zero; thus,
Σi νi μi = 0
Recall that the chemical potential of species i is related to the activity ai of the species at constant T and P by
μi = μi° + RT ln ai
Substituting this into the expression for ΔrG we have
ΔrG = ΔrG° + RT ln Πi ai^νi
The quotient of activities raised to the powers of the stoichiometric numbers is given the special symbol Q and is called the reaction quotient. Thus,
ΔrG = ΔrG° + RT ln Q,   with Q = Πi ai^νi
At equilibrium ΔrG = 0, and Q is written as K to symbolize equilibrium and is referred to as the equilibrium constant:
ΔrG° = −RT ln K
B. Equilibrium Expression for Gas Reactions
In the case of reactions involving only gases, we can rewrite the equilibrium expression in terms of partial pressure (ideal gases) or fugacity (real gases). Thus,
Substituting these into the general expression for the equilibrium constant,
Note: The value of K can only be interpreted if the chemical equation is balanced, and the standard state of each species is specified.
C. Thermodynamics of Gaseous Reactions (Dependence of G on x)
We have already discussed the thermodynamics of mixing gases, and discovered that although ΔHmix and ΔVmix are zero for ideal gases, ΔGmix is not zero; it results from the system's tendency to maximize entropy (i.e. a mixture of gases has more possible states than each gas individually). Naturally, in gaseous reactions the relative proportions of reactant gas and product gas change as a function of the extent of reaction, x. Therefore, ΔGmix must be a contributing term in the Gibbs energy of the system for reactions involving more than one gas. We can illustrate this point by considering a simple reaction converting A(g) to B(g),
A(g) ⇌ B(g)
Starting with 1 mole of A, the amounts at some time later are
nA = 1 − x,   nB = x
and at any value of x the Gibbs energy of the mixture is
G = (1 − x)μA + x μB
Recall the chemical potential for an ideal gas mixture at constant T:
μJ = μJ° + RT ln(PJ/P°),   with PA = (1 − x)P and PB = xP
where P is the total pressure at extent x. Substituting these into the expression for G above,
G = (1 − x)μA° + x μB° + RT ln(P/P°) + RT[(1 − x) ln(1 − x) + x ln x]
where the last term is the contribution due to ΔGmix.
Note: We can generalize the above analysis to say that no chemical reaction involving only gases can go to completion!!
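This claim can be checked numerically: because the mixing term RT[(1 − x) ln(1 − x) + x ln x] has infinite slope at x = 0 and x = 1, the minimum of G always lies strictly inside (0, 1). The sketch below assumes illustrative values μA° = 0 and μB° = −5 kJ/mol at P = P°; these numbers are not from the notes.

```python
import math

R = 8.314          # J mol^-1 K^-1
T = 298.15         # K
mu_A0 = 0.0        # assumed standard chemical potential of A (J/mol)
mu_B0 = -5000.0    # assumed standard chemical potential of B (J/mol)

def G(x):
    """Gibbs energy of the A/B mixture at extent of reaction x (P = P°)."""
    mixing = R * T * ((1 - x) * math.log(1 - x) + x * math.log(x))
    return (1 - x) * mu_A0 + x * mu_B0 + mixing

# Scan a fine grid for the minimizing extent of reaction.
xs = [i / 10000 for i in range(1, 10000)]
x_min = min(xs, key=G)
```

The numerical minimum agrees with the analytic equilibrium condition x/(1 − x) = exp(−ΔrG°/RT), and it sits strictly between 0 and 1: the reaction does not go to completion.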
III. Determination of Equilibrium Constants
We have derived the correct expression for the equilibrium constant for a general system as well as for a gaseous system, and related it to the change in Gibbs energy with extent of reaction. We have
yet to discuss how to actually determine the equilibrium constant, there are several different ways:
A. Determination of Concentrations of Reactants and Products at Equilibrium
Given that K is directly related to the amounts (pressures for gaseous reactions) of reactants and products at equilibrium, a direct way to determine K is simply to measure the quantities of each species at equilibrium. There are many ways to measure the relative amounts of species in a mixture (e.g. spectroscopy, refractive index, density, etc.). Often, however, only the extent of reaction can be determined, not the individual amounts of each species. Determination of the density of a partially dissociated gas is a simple example of a method which provides the extent of reaction at equilibrium. In order to use the extent of reaction in calculating the equilibrium constant, we must derive the expression for the equilibrium constant in terms of the equilibrium extent of reaction and the total pressure. Consider the simple reaction,
A(g) ⇌ 2B(g)
If we start with n° moles of A and the equilibrium extent of reaction is xeq, then at equilibrium we have
nA = n° − xeq,   nB = 2xeq
We can just as easily divide these quantities by the initial number of moles of A, defining x′eq = xeq/n°, so that the equilibrium amounts (per mole of A) are
nA = 1 − x′eq,   nB = 2x′eq
We can relate the amounts of each species at equilibrium to the partial pressures through the mole fractions; since the total amount is (1 − x′eq) + 2x′eq = 1 + x′eq, the equilibrium mole fractions are
yA = (1 − x′eq)/(1 + x′eq),   yB = 2x′eq/(1 + x′eq)
The equilibrium constant in terms of partial pressures is
K = (PB/P°)² / (PA/P°)
Since Pi = yi P, we can substitute to obtain
K = (yB²/yA)(P/P°) = [4x′eq²/(1 − x′eq²)](P/P°)
Note: Like the equilibrium constant in terms of partial pressures, the equilibrium constant in terms of x depends on the chemical reaction and its stoichiometry.
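Inverting the last expression gives the equilibrium extent of reaction directly: solving K = [4x′²/(1 − x′²)](P/P°) for x′ yields x′eq = sqrt(K / (K + 4P/P°)). The sketch below verifies the inversion using an assumed value K = 1 (the notes give no numerical example).

```python
import math

def x_eq(K, P_ratio):
    """Equilibrium extent x'_eq for A(g) = 2B(g), with P_ratio = P/P°.

    Inverts K = 4 x'^2 / (1 - x'^2) * P_ratio for x' in (0, 1).
    """
    return math.sqrt(K / (K + 4 * P_ratio))

def K_from_x(x, P_ratio):
    """Forward expression K = 4 x'^2 / (1 - x'^2) * (P/P°)."""
    return 4 * x * x / (1 - x * x) * P_ratio

K_assumed = 1.0
x_prime = x_eq(K_assumed, P_ratio=1.0)      # extent at P = P°
x_high_P = x_eq(K_assumed, P_ratio=10.0)    # extent at higher total pressure
```

That the extent falls at higher pressure is the LeChatelier behavior expected for a reaction producing more moles of gas.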
B. Determination of ΔrG°
At equilibrium, the complete equilibrium expression is given by
ΔrG° = −RT ln K
where ΔrG° is the standard reaction Gibbs energy. Therefore, the equilibrium constant K can also be determined directly from ΔrG°. There are several ways to determine ΔrG° for a given chemical reaction, including the use of Statistical Mechanics (Chem. 3308). ΔrG° can also be obtained from the thermodynamic expression
ΔrG° = ΔrH° − TΔrS°
in which the standard reaction enthalpies and entropies can be obtained calorimetrically at various temperatures. Instead of tabulating many different values of ΔrG° at different temperatures, standard Gibbs energies of formation, ΔfG°, are used to reference the Gibbs energies of the various species to their respective elements. The standard reaction Gibbs energy ΔrG° can then be determined from the expression
ΔrG° = Σi νi ΔfG°(i)
This is directly analogous to the way enthalpies of formation were used to determine ΔrH°. Note: in the above equation, ΔfG° of an element in its reference state is zero.
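As a numerical illustration of the route ΔfG° → ΔrG° → K = exp(−ΔrG°/RT), the sketch below uses hypothetical formation Gibbs energies for a reaction A + B ⇌ C; the values are made up for illustration, not tabulated data.

```python
import math

R = 8.314       # J mol^-1 K^-1
T = 298.15      # K

# Hypothetical standard Gibbs energies of formation (J/mol).
dfG = {"A": -50_000.0, "B": -30_000.0, "C": -95_000.0}

# Stoichiometric numbers for A + B = C (products positive, reactants negative).
nu = {"A": -1, "B": -1, "C": 1}

drG = sum(nu[s] * dfG[s] for s in nu)        # Δ_rG° = Σ ν_i Δ_fG°(i)
K = math.exp(-drG / (R * T))
```

Since ΔrG° is negative here, K comes out greater than 1, i.e. products are favored at equilibrium.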
IV. The Effects on Equilibrium due to Changes in Independent Variables
Just as was done for other thermodynamic parameters, it is useful to probe the effects of changes in certain independent variables (e.g. temperature, pressure, initial composition, etc.) on the equilibrium, and hence the equilibrium constant, of chemical reactions.
A. LeChatelier Principle
We can probe the effects of temperature and pressure in a qualitative sense by considering how the quantity ΔrG = (∂G/∂x)T,P changes with these variables. More specifically, we can probe how xeq changes with P and T. If we assume this quantity is a function of T, P, and x, we can write its total differential,
d(ΔrG) = [∂(ΔrG)/∂T] dT + [∂(ΔrG)/∂P] dP + [∂(ΔrG)/∂x] dx
At equilibrium ΔrG = 0, so along the equilibrium condition d(ΔrG) = 0 as well. Using (∂(ΔrG)/∂T)P,x = −ΔrS and (∂(ΔrG)/∂P)T,x = ΔrV, together with ΔrH = TΔrS at equilibrium, one finds at constant pressure and at constant temperature respectively,
(∂xeq/∂T)P = ΔrH/(T Gxx)   and   (∂xeq/∂P)T = −ΔrV/Gxx
where Gxx = (∂²G/∂x²)T,P > 0 at the minimum. Therefore: if ΔrH > 0 (endothermic), xeq increases with increasing T, and if ΔrH < 0, it decreases; if ΔrV < 0, xeq increases with increasing P, and if ΔrV > 0, it decreases.
These expressions are simply qualitative statements of LeChatelier Principle which states: "If the external constraints under which an equilibrium is established are changed, the equilibrium will
shift in such a way as to moderate the effect on the change."
B. Changes in the Equilibrium Constant Expression - although the previous discussion is qualitatively useful, it would be more useful to have exact expressions for the equilibrium constant as a
function of certain variables, especially for gaseous reactions.
1) Effects of pressure - the equilibrium constant in terms of partial pressures,
K = Πi (Pi/P°)^νi
depends only on temperature, not on the total pressure; however, the position of equilibrium, xeq, does depend on P. Therefore, if the reaction increases the number of moles of gas (Σi νi > 0), increasing the total pressure decreases xeq, and if it decreases the number of moles of gas, increasing the total pressure increases xeq.
2) Effects of Temperature - the equilibrium constant as a function of temperature can be derived by starting from the Gibbs-Helmholtz equation introduced in the previous discussion of Gibbs energy. Recall
ΔrG° = −RT ln K
so that the Gibbs-Helmholtz equation gives the van't Hoff equation,
d(ln K)/dT = ΔrH°/(RT²)
The equilibrium constant can be written as an explicit function of temperature by integrating the above expression; if ΔrH° is independent of temperature we have
ln(K2/K1) = −(ΔrH°/R)(1/T2 − 1/T1)
We can derive a similar expression for the equilibrium constant as a function of temperature by starting from the fundamental expression obtained earlier, namely
ΔrG° = ΔrH° − TΔrS°,   so that   ln K = −ΔrH°/(RT) + ΔrS°/R
Again, if ΔrH° is assumed to be independent of temperature then ΔrS° is independent of temperature, and the above expression is exact (moreover, it must be identical to the one derived above). In such a case, a plot of ln K versus 1/T is a straight line with slope −ΔrH°/R and intercept ΔrS°/R. If ΔrH° and ΔrS° are not independent of temperature, the above expressions get much more complicated!!
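The integrated van't Hoff relation above is easy to exercise numerically. The sketch below assumes an endothermic reaction with ΔrH° = +40 kJ/mol (temperature-independent) and K = 1 at 298 K, and predicts K at 350 K; the numbers are illustrative, not from the notes.

```python
import math

R = 8.314          # J mol^-1 K^-1
drH = 40_000.0     # assumed Δ_rH° (J/mol), taken temperature-independent

def K2_from_K1(K1, T1, T2):
    """Integrated van't Hoff: ln(K2/K1) = -(Δ_rH°/R) (1/T2 - 1/T1)."""
    return K1 * math.exp(-(drH / R) * (1 / T2 - 1 / T1))

K_298 = 1.0
K_350 = K2_from_K1(K_298, 298.0, 350.0)
```

For an endothermic reaction K grows with temperature, consistent with the sign rules stated above, and the transformation is exactly invertible (going back from 350 K to 298 K recovers the starting K).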
3) Equilibrium expression in terms of concentrations - since kinetic rate equations are written in terms of the concentrations of each species, it is worthwhile to have an expression for the equilibrium constant in terms of concentration. We will use Kc for concentrations and Kp for partial pressures. Recall
Kp = Πi (Pi/P°)^νi
For an ideal gas, Pi = ciRT, so Pi/P° = (ci/c°)(c°RT/P°). In order to keep both terms dimensionless we simply introduce a standard concentration c°, giving
Kp = Kc (c°RT/P°)^Δν,   where Δν = Σi νi
Note: if Δν = 0, then Kp = Kc.
Alviso Algebra 1 Tutor
Find an Alviso Algebra 1 Tutor
...While not all of them got A’s, those that regularly attended tutoring did see improvements in their grades. I believe that mathematics should not get in the way of learning the natural
sciences, so I am capable of tutoring in that as well. My passion is in chemistry but the quantitative nature of the natural sciences means that I am fluent in algebra through calculus.
24 Subjects: including algebra 1, reading, chemistry, calculus
...Trigonometry plays an important role in Pre-Calculus and Calculus. Trigonometry is my favorite subject, and I have expertise in it. I give small but valuable tips for solving triangles, the law of sines, the law of cosines, verifying identities, graphing, etc.
17 Subjects: including algebra 1, calculus, geometry, statistics
...I can offer tutoring in many subjects, but my specialties are science, math, and computer programming (I favor Python). I hope to teach you more than just what you need for tomorrow's test. I
hope you will also learn how to keep learning on your own!I am a professional research biologist, with s...
17 Subjects: including algebra 1, chemistry, statistics, geometry
...I can communicate clearly, and can easily help your son or daughter expand their own vocabulary! I am a native English speaker and have always performed well on written tests and public
speaking. I would love a chance to enable you or your child to speak and write English effectively, both in formal and informal discussions!
17 Subjects: including algebra 1, reading, English, study skills
...As an Electrical Engineer, knowledge of Math and Physics is my base and is most frequently used in my work. Algebra 1 and 2 were basic courses in my university. I have a strong background and I have helped a lot of high school students and college students in recent years. Calculus was a basic subject in my university education.
9 Subjects: including algebra 1, physics, calculus, geometry
Efficient distributed quantum computing
February 21st, 2013 in Physics / Quantum Physics
(Phys.org)—A quantum computer doesn't need to be a single large device but could be built from a network of small parts, new research from the University of Bristol has demonstrated. As a result,
building such a computer would be easier to achieve.
Many groups of research scientists around the world are trying to build a quantum computer to run algorithms that take advantage of the strange effects of quantum mechanics such as entanglement and
superposition. A quantum computer could solve problems in chemistry by simulating many body quantum systems, or break modern cryptographic schemes by quickly factorising large numbers.
Previous research shows that if a quantum algorithm is to offer an exponential speed-up over classical computing, there must be a large entangled state at some point in the computation and it was
widely believed that this translates into requiring a single large device.
In a paper published today in Proceedings of the Royal Society A, Dr Steve Brierley of Bristol's School of Mathematics and colleagues show that, in fact, this is not the case. A network of small
quantum computers can implement any quantum algorithm with a small overhead.
The key breakthrough was learning how to efficiently move quantum data between the many sites without causing a collision or destroying the delicate superposition needed in the computation. This
allows the different sites to communicate with each other during the computation in much the same way a parallel classical computer would do.
Dr Brierley said: "Building a computer whose operation is based on the laws of quantum mechanics is a daunting challenge. At least now we know that we can build one as a network of small modules."
More information: Beals, R. et al. Efficient Distributed Quantum Computing, Proceedings of the Royal Society A: rspa.royalsocietypublishing.org/content/469/2153/20120686.abstract
Arxiv: arxiv.org/abs/1207.2307
Provided by University of Bristol
"Efficient distributed quantum computing." February 21st, 2013. http://phys.org/news/2013-02-efficient-quantum.html | {"url":"http://phys.org/print280655516.html","timestamp":"2014-04-20T06:49:32Z","content_type":null,"content_length":"6712","record_id":"<urn:uuid:4d46393f-7aca-45d7-96ba-0122360a1c22>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\sd}{\text{sd}}\)
The Roulette Wheel
The (American) roulette wheel has 38 slots numbered 00, 0, and 1-36. As shown in the picture below,
• slots 0, 00 are green;
• slots 1, 3, 5, 7, 9, 12, 14, 16, 18, 19, 21, 23, 25, 27, 30, 32, 34, 36 are red;
• slots 2, 4, 6, 8, 10, 11, 13, 15, 17, 20, 22, 24, 26, 28, 29, 31, 33, 35 are black.
Except for 0 and 00, the slots on the wheel alternate between red and black. The strange order of the numbers on the wheel is intended so that high and low numbers, as well as odd and even numbers,
tend to alternate.
A typical roulette wheel and table
According to Richard Epstein, roulette is the oldest casino game still in operation. Its invention has been variously attributed to Blaise Pascal, the Italian mathematician Don Pasquale, and several
others. In any event, the roulette wheel was first introduced into Paris in 1765.
The roulette experiment is very simple. The wheel is spun and then a small ball is rolled in a groove, in the opposite direction to the motion of the wheel. Eventually the ball falls into one of the
slots. Naturally, we assume mathematically that the wheel is fair, so that the random variable \(X\) that gives the slot number of the ball is uniformly distributed over the sample space \(S = \{00,
0, 1, \ldots, 36\}\). Thus, \(\P(X = x) = \frac{1}{38}\) for each \(x \in S\).
As with craps, roulette is a popular casino game because of the rich variety of bets that can be made. The picture above shows the roulette table and indicates some of the bets we will study. All
bets turn out to have the same expected value (negative, of course).
Straight Bets
A straight bet is a bet on a single number, and pays \(35 : 1\).
Let \(W\) denote the winnings on a unit straight bet. Then
1. \(\P(W = -1) = \frac{37}{38}\), \(\P(W = 35) = \frac{1}{38}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 5.7626\)
In the roulette experiment, select the single number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true
probability density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
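These values can be verified directly from the two-point distribution of the winnings (a quick Python check, not part of the original page):

```python
from math import sqrt

# Straight bet: win 35 with probability 1/38, lose 1 otherwise.
outcomes = {35: 1 / 38, -1: 37 / 38}

mean = sum(w * p for w, p in outcomes.items())
sd = sqrt(sum((w - mean) ** 2 * p for w, p in outcomes.items()))

print(mean)  # -1/19, about -0.0526
print(sd)    # about 5.7626
```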
Two Number Bets
A 2-number bet (or a split bet) is a bet on two adjacent numbers in the roulette table. The bet pays \(17 : 1\).
Let \(W\) denote the winnings on a unit split bet. Then
1. \(\P(W = -1) = \frac{18}{19}\), \(\P(W = 17) = \frac{1}{19}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 4.0193\)
In the roulette experiment, select the 2 number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true probability
density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
Three Number Bets
A 3-number bet (or row bet) is a bet on the three numbers in a vertical row on the roulette table. The bet pays \(11 : 1\).
Let \(W\) denote the winnings on a unit row bet. Then
1. \(\P(W = -1) = \frac{35}{38}\), \(\P(W = 11) = \frac{3}{38}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 3.2359\)
In the roulette experiment, select the 3-number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true probability
density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
Four Number Bets
A 4-number bet or a square bet is a bet on the four numbers that form a square on the roulette table. The bet pays \(8 : 1\).
Let \(W\) denote the winnings on a unit 4-number bet. Then
1. \(\P(W = -1) = \frac{17}{19}\), \(\P(W = 8) = \frac{2}{19}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 2.7620\)
In the roulette experiment, select the 4-number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true probability
density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
Six Number Bets
A 6-number bet or 2-row bet is a bet on the 6 numbers in two adjacent rows of the roulette table. The bet pays \(5 : 1\).
Let \(W\) denote the winnings on a unit 6-number bet. Then
1. \(\P(W = -1) = \frac{16}{19}\), \(\P(W = 5) = \frac{3}{19}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 2.1879\)
In the roulette experiment, select the 6-number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true probability
density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
Twelve Number Bets
A 12-number bet is a bet on 12 numbers. In particular, a column bet is bet on any one of the three columns of 12 numbers running horizontally along the table. Other 12-number bets are the first 12
(1-12), the middle 12 (13-24), and the last 12 (25-36). A 12-number bet pays \(2 : 1\).
Let \(W\) denote the winnings on a unit 12-number bet. Then
1. \(\P(W = -1) = \frac{13}{19}\), \(\P(W = 2) = \frac{6}{19}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 1.3945\)
In the roulette experiment, select the 12-number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true probability
density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
Eighteen Number Bets
An 18-number bet is a bet on 18 numbers. In particular, a color bet is a bet either on red or on black. A parity bet is a bet on the odd numbers from 1 to 36 or the even numbers from 1 to 36. The low
bet is a bet on the numbers 1-18, and the high bet is the bet on the numbers from 19-36. An 18-number bet pays \(1 : 1\).
Let \(W\) denote the winnings on a unit 18-number bet. Then
1. \(\P(W = -1) = \frac{10}{19}\), \(\P(W = 1) = \frac{9}{19}\)
2. \(\E(W) = -\frac{1}{19} \approx -0.0526\)
3. \(\sd(W) \approx 0.9986\)
In the roulette experiment, select the 18-number bet. Run the simulation 1000 times and watch the apparent convergence of the empirical density function and moments of \(W\) to the true probability
density function and moments. Suppose that you bet $1 on each of the 1000 games. What would your net winnings be?
Although all bets in roulette have the same expected value, the standard deviations vary inversely with the number of numbers selected. What are the implications of this for the gambler?
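The pattern behind all of these tables can be captured in one function (a sketch of mine, not from the original page): an n-number bet pays (36 - n)/n to 1, so the mean is always -1/19 while the standard deviation falls as n grows.

```python
from math import sqrt

def bet_stats(n):
    """Mean and standard deviation of the winnings on a unit n-number bet."""
    payoff = 36 / n - 1          # e.g. n=1 gives 35:1, n=18 gives 1:1
    p = n / 38                   # probability of winning
    mean = payoff * p - (1 - p)
    var = (payoff - mean) ** 2 * p + (-1 - mean) ** 2 * (1 - p)
    return mean, sqrt(var)

for n in (1, 2, 3, 4, 6, 12, 18):
    mean, sd = bet_stats(n)
    print(f"{n:2d}-number bet: E(W) = {mean:.4f}, sd(W) = {sd:.4f}")
```

Every bet has the same mean of -1/19, while sd(W) decreases from about 5.76 (straight bet) to about 1.00 (18-number bet).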
Burbank, IL ACT Tutor
Find a Burbank, IL ACT Tutor
...I think that I can persuade any student that math is not only interesting and fun, but beautiful as well. I've been tutoring test prep for 15 years, and I have a lot of experience helping
students get the score they need on the GRE. I've helped students push past their goal scores in both the Quant and Verbal.
24 Subjects: including ACT Math, calculus, physics, geometry
...I am willing to travel to meet wherever the student feels comfortable. I hope to hear from you to talk about how I can help with whatever needs you might have. Thank you for considering me!I
have excelled in math classes throughout my school career.
5 Subjects: including ACT Math, statistics, prealgebra, probability
...After high school I went on to receive a bachelor of science in mechanical engineering and a master of science in mechanical engineering from the University of Toledo. During my graduate
assistantship I received the "Outstanding Teaching Assistant" award, showing my ability to communicate subjec...
20 Subjects: including ACT Math, physics, statistics, calculus
...I also minored in Asian Studies. After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin. I took pleasure in helping students
understand concepts and succeed.
26 Subjects: including ACT Math, chemistry, English, reading
...I've had the amazing opportunity to teach diverse groups of students, including those with special needs, English language learners, gifted students and diverse learners. As a result, I have
become proficient in differentiating instruction to meet the needs of every learner by using techniques s...
70 Subjects: including ACT Math, chemistry, reading, English
AW: st: WG: XML TAB - inconsistent p-value output
From "Kleindienst, Ingo" <Ingo.Kleindienst@whu.edu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject AW: st: WG: XML TAB - inconsistent p-value output
Date Sun, 16 Jun 2013 09:58:10 +0000
Thank you very much for your answer.
Below is the additional information:
Stata version: 11.0
Xml_tab version: 3.24
As you proposed I used a public dataset to try to reproduce the inconsistency. Below are my Stata commands:
--> use http://www.stata.com/data/jwooldridge/eacsap/patent
--> xtset cusip year
--> xtreg return patents patentsg stckpr merger sic sales
--> estimates store xtreg
--> xtscc return patents patentsg stckpr merger sic sales
--> estimates store xtscc
--> xml_tab xtreg xtscc, p save(C:\Desktop\...\xtreg vs xtscc.xls)
The p-values reported for the xtreg model in the Stata results monitor are consistent with the p-values reported in the Excel-file produced by xml_tab.
However, the p-values reported for the xtscc model in the Stata results monitor do not correspond to the p-values reported in the Excel-file produced by xml_tab. In particular, the p-values reported in the xml_tab file are consistently lower than those reported in the State results monitor.
For example:
The p-values for the xtscc model in the Stata results monitor are:
Patents 0.815
Patentsg 0.487
Stckpr 0.000
Merger 0.073
Sic 0.982
Sales 0.245
The p-values for the xtscc model in the Excel-file created using xml_tab are:
Patents 0.809
Patentsg 0.468
Stckpr 0.000
Merger 0.043
Sic 0.982
Sales 0.213
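A plausible explanation (my own conjecture, not confirmed anywhere in this thread): -xtscc- reports p-values from a t distribution with e(df_r) = 9 residual degrees of freedom, and it stores the t statistics in e(t). If -xml_tab- recomputes p-values from those t statistics using a standard normal, whose thinner tails give smaller tail probabilities, the recomputed values will be systematically lower. A stdlib-only Python sketch of the size of that effect:

```python
from math import erf, gamma, pi, sqrt

def norm_sf(x):
    """Upper-tail probability of the standard normal."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def t_sf(x, df, steps=200000, upper=60.0):
    """Upper-tail probability of Student's t, by midpoint-rule integration."""
    c = gamma((df + 1) / 2.0) / (sqrt(df * pi) * gamma(df / 2.0))
    h = (upper - x) / steps
    total = 0.0
    for i in range(steps):
        u = x + (i + 0.5) * h
        total += c * (1.0 + u * u / df) ** (-(df + 1) / 2.0) * h
    return total

# merger: Stata shows p = 0.073, the Excel file shows p = 0.043.
# A t statistic of about 2.02 is consistent with both, one number
# coming from t(9) and the other from the normal:
tstat, df = 2.02, 9
print("two-sided p, t(9):  ", 2 * t_sf(tstat, df))
print("two-sided p, normal:", 2 * norm_sf(tstat))
```

The t-based p is noticeably larger than the normal-based p at these degrees of freedom, which matches the direction and rough size of the reported discrepancy.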
I made some additional tests with other estimators and other datasets (xtabond, areg). The problem did not occur with these estimators but did occur with xtscc and other datasets.
Below the -ereturn list- after my estimations with xtreg and xtscc
ereturn list for xtreg
e(rank) = 7
e(df_m) = 6
e(chi2) = 174.8559236866668
e(p) = 4.19491705850e-35
e(sigma_u) = 3.400876980217211
e(sigma_e) = 4.070117243032458
e(sigma) = 5.303943684335409
e(rho) = .4111346086902908
e(rmse) = 4.073657423364998
e(N) = 2252
e(Tbar) = 9.951076320939334
e(Tcon) = 0
e(N_g) = 226
e(g_min) = 6
e(g_avg) = 9.964601769911505
e(g_max) = 10
e(thta_min) = .5610100539706587
e(thta_5) = .6460439694667539
e(thta_50) = .6460439694667539
e(thta_95) = .6460439694667539
e(r2_w) = .0661225799380726
e(r2_b) = .1239851695834196
e(r2_o) = .093572659666385
e(thta_max) = .6460439694667539
e(cmdline) : "xtreg return patents patentsg stckpr merger sic sales"
e(cmd) : "xtreg"
e(marginsnotok) : "E U UE SCore STDP XBU"
e(predict) : "xtrere_p"
e(model) : "re"
e(ivar) : "cusip"
e(vce) : "conventional"
e(depvar) : "return"
e(chi2type) : "Wald"
e(properties) : "b V"
e(b) : 1 x 7
e(V) : 7 x 7
e(theta) : 1 x 5
e(VCEf) : 7 x 7
e(bf) : 1 x 7
ereturn list for xtscc
e(N) = 2252
e(N_g) = 226
e(df_m) = 6
e(df_r) = 9
e(F) = 215.2730394966106
e(r2) = .0982803084873544
e(rmse) = 5.264078304739547
e(lag) = 2
e(cmd) : "xtscc"
e(predict) : "xtscc_p"
e(method) : "Pooled OLS"
e(depvar) : "return"
e(vcetype) : "Drisc/Kraay"
e(title) : "Regression with Driscoll-Kraay standard errors"
e(groupvar) : "cusip"
e(properties) : "b V"
e(b) : 1 x 7
e(V) : 7 x 7
e(t) : 1 x 7
e(se_beta) : 1 x 7
I hope this additional information helps.
Any hint is greatly appreciated
Dr. Ingo Kleindienst
Juniorprofessor für Strategieprozesse
Assistant Professor of Strategy Processes
Lehrstuhl für Betriebswirtschaftslehre, insbesondere Unternehmensentwicklung und Corporate Governance
Chair of Corporate Strategy and Governance
Burgplatz 2, 56179 Vallendar, Germany
Fon: ++49-(0)261-65 09-204
Fax: ++49-(0)261-65 09-209
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Sergiy Radyakin
Sent: Thursday, June 13, 2013 11:15 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: WG: XML TAB - inconsistent p-value output
can you reproduce the same issue with any public dataset (webuse-able or sysuse-able) and post the full code here? Economic interpretation does not have to make sense, but the illustration of difference in the reported coefficients is crucial.
What is the version of Stata you are using? --> about
What is the version of -xml_tab- you are using? --> which xml_tab
Do you have any problems when outputting e.g. p-values from a simple linear regression model?
Provide -return list- and -ereturn list- after your estimation commands. Show us how the estimates results are saved/restored. Make sure there is a unique set of p-values reported (e.g. ttest creates three sets of p-values one for each type of alternative hypothesis)
Finally, STATA-->Stata
Best, Sergiy
On Thu, Jun 13, 2013 at 5:02 PM, Kleindienst, Ingo <Ingo.Kleindienst@whu.edu> wrote:
> Dear Statalisters,
> I have the following question regarding xml_tab
> I regularly use xml_tab to save results in Excel. However, I have no encountered some inconsistencies between the results shown in the STATA results monitor and the Excel-Output generated by xml_tab.
> The coefficents are consistent between the STATA results monitor and the Excel-output created by xml_tab. However, there is what seems to be a non-random deviation between the p-values that are shown in the STATA results monitor and the respective Excel-output using xml_tab. In particular, the Excel-output consistently shows lower p-values than are reported in the STATA results monitor.
> As an example: The p-value shown in the STATA results monitor is 0.022 for a certain variable. The respective Excel-output using first the command "estimates store modelX" and then the command "xml_tab modelX , p save(C:\ \Desktop\results_paper.xls)" leads to a p-value of only 0.014 for the respective variable.
> The problem with inconsistent p-values occur when using the "xtscc"
> STATA command (Driscoll-Kraay panel estimator)
> So, I am wondering what the problem here is? Am I missing something?
> I would be very happy, if you could give me some hints.
> Best regards
> Ingo
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/faqs/resources/statalist-faq/
> * http://www.ats.ucla.edu/stat/stata/
EconPort - Glossary
validity
Validity tells us the degree to which a test really measures the behaviour it was designed for.
value added
A measure of output. Value added by an organization or industry is, in principle:
revenue - non-labor costs of inputs
where revenue can be imagined to be price*quantity, and costs are usually described by capital (structures, equipment, land), materials, energy, and purchased services.
Treatment of taxes and subsidies can be nontrivial.
Value-added is a measure of output which is potentially comparable across countries and economic structures.
value function
Often denoted v() or V(). Its value is the present discounted value, in consumption or utility terms, of the choice represented by its arguments.
The classic example, from Stokey and Lucas, is:
v(k) = max[k'] { u(k, k') + bv(k') }
where k is current capital,
k' is the choice of capital for the next (discrete time) period,
u(k, k') is the utility from the consumption implied by k and k',
b is the period-to-period discount factor,
and the agent is presumed to have a time-separable function, in a discrete time environment, and to make the choice of k' that maximizes the given function.
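To make the recursion concrete, here is a minimal value-iteration sketch in Python (the log utility, the production function k**0.3, the discount factor 0.9, and the grid are all illustrative choices of mine, not part of the entry):

```python
import numpy as np

a, b = 0.3, 0.9                        # assumed curvature and discount factor
grid = np.linspace(0.05, 0.5, 60)      # capital grid for both k and k'
n = len(grid)

# One-period payoff u(k, k') = log(k**a - k'), -inf where infeasible
u = np.full((n, n), -np.inf)
for i, k in enumerate(grid):
    c = k ** a - grid                  # consumption implied by each choice k'
    u[i, c > 0] = np.log(c[c > 0])

# Iterate the Bellman operator: v <- max over k' of { u(k, k') + b v(k') }
v = np.zeros(n)
for _ in range(500):
    v_new = (u + b * v).max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new

print(v[0], v[-1])                     # v is increasing in k
```

Because the operator is a contraction with modulus b, the iteration converges to the unique fixed point, which is the value function for these primitives.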
VAR
Vector Autoregression, a kind of model of related time series. In the simplest example, the vector of data points at each time t (y[t]) is thought of as a parameter vector (say, phi1) times a
previous value of the data vector, plus a vector of errors about which some distribution is assumed. Such a model may have autoregression going back further in time than t-1 too.
var
An operator returning the variance of its argument.
The variance of a distribution is the average of squares of the distances from the values drawn from the mean of the distribution:
var(x) = E[(x-Ex)^2].
Also called 'centered second moment.' Nick Cox attributes the term to R.A. Fisher, 1918.
variance decomposition
In a VAR, the variance decomposition at horizon h is the set of R^2 values associated with the dependent variable y[t] and each of the shocks h periods prior.
variance ratio statistic
Discussed thoroughly in Bollerslev and Hodrick (1992), p. 19; equations and estimation details appear there.
Vector Autoregressions. "Vector autoregressive models are _atheoretical_ models that use only the observed time series properties of the data to forecast economic variables." Unlike structural models
there are no assumptions/restrictions that theorists of different stripes would object to. But a VAR approach only test LINEAR relations among the time series.
vec
An operator. For a matrix C, vec(C) is the vector constructed by stacking all of the columns of C, the second below the first and so on. So if C is n x k, then vec(C) is nk x 1.
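In NumPy terms (my illustration, not part of the entry), vec is column-major flattening:

```python
import numpy as np

C = np.array([[1, 2],
              [3, 4],
              [5, 6]])            # 3 x 2
vec_C = C.reshape(-1, order="F")  # stack the columns: [1, 3, 5, 2, 4, 6]
print(vec_C)
```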
vega
As used with respect to options: "The vega of a portfolio of derivatives is the rate of change of the value of the portfolio with respect to the volatility of the underlying asset." -- Hull (1997) p. 328. Formally this is a partial derivative.
A portfolio is vega-neutral if it has a vega of zero.
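For a European call under the Black-Scholes model (my illustrative setting; the definition above is model-free), vega has the closed form S sqrt(T) phi(d1), which can be checked against a finite-difference partial derivative of the price with respect to volatility:

```python
from math import erf, exp, log, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (standard textbook formula)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_vega(S, K, T, r, sigma):
    """Analytic vega: dC/d(sigma) = S * sqrt(T) * phi(d1)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1 ** 2) / sqrt(2 * pi)

# Finite-difference check of the partial derivative
S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
h = 1e-5
fd = (bs_call(S, K, T, r, sigma + h) - bs_call(S, K, T, r, sigma - h)) / (2 * h)
analytic = bs_vega(S, K, T, r, sigma)
print(analytic, fd)   # the two values agree
```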
verifiable
Observable to outsiders, in the context of a model of information.
Models commonly assume that some the values of some variables are known to both of the parties to a contract but are NOT verifiable, by which we mean that outsiders cannot see them and so references
to those variables in a contract between the two parties cannot be enforced by outside authorities.
Examples: .....
vintage model
One in which technological change is 'embodied' in Solow's language.
vNM
Abbreviation for von Neumann-Morgenstern, which describes attributes of some utility functions.
volatility clustering
In a time series of stock prices, it is observed that the variance of returns or log-prices is high for extended periods and then low for extended periods. (E.g. the variance of daily returns can be
high one month and low the next.) This occurs to a degree that makes an iid model of log-prices or returns unconvincing. This property of time series of prices can be called 'volatility clustering'
and is usually approached by modeling the price process with an ARCH-type model.
von Neumann-Morgenstern utility
Describes a utility function (or perhaps a broader class of preference relations) that has the expected utility property: the agent is indifferent between receiving a given bundle or a gamble with
the same expected value.
There may be other, or somewhat stronger or weaker assumptions in the vNM phrasing but this is a basic and important one. It does not seem to be the case that such a utility representation is
required to be increasing in all arguments or concave in all arguments, although these are also common assumptions about utility functions. The name refers to John von Neumann and Oskar Morgenstern's
Theory of Games and Economic Behavior. Kreps (1990), p 76, says that this kind of utility function predates that work substantially, and was used in the 1700s by Daniel Bernoulli.
Display Class Syllabus
Advanced Calculus for Secondary Education Teachers
CLASS CODE: MATH 460 CREDITS: 2
DIVISION: PHYSICAL SCIENCE & ENGINEERING
DEPARTMENT: MATHEMATICS
GENERAL This course does not fulfill a General Education requirement.
DESCRIPTION: Intended for those majoring in mathematics education. This course reveals the theoretical underpinnings of the topics taught in first and second semester calculus. Topics will include
epsilon-delta proofs, intermediate and mean value theorems, the fundamental theorems of calculus, differentiation, integration, infinite series, Taylor series, and how to teach
calculus concepts to secondary school students.
TAUGHT: Winter, Summer
CONTENT AND Topics will include epsilon-delta proofs, intermediate and mean value theorems, the fundamental theorems of calculus, differentiation, integration, infinite series, Taylor series, and
TOPICS: how to teach calculus concepts to secondary school students.
GOALS AND 1. Learn the fundamental definitions and theorems of differential and integral calculus along with rigorous proofs of the theorems.
OBJECTIVES: 2. Acquire examples, nonexamples, and applications for each main concept in calculus.
3. Become familiar and comfortable with the standard notations and vocabulary of Calculus.
4. Learn to read and write mathematics on their own and teach them the importance of careful, accurate and precise work.
5. Develop both oral and written communication skills.
6. Acquire ideas on how to teach the subject matter.
REQUIREMENTS: There will be regular homework assignments and examinations. In addition, students will give in-class presentations of proofs and teaching demonstrations of how they would teach
various topics.
PREREQUISITES: Math 112, Math 113, Math 301
EFFECTIVE August 2003
Embedded System Specification
Title: Embedded System Specification
Code: SVSe
Ac.Year: 2012/2013
Term: Summer
Study plans:
Language: English
Credits: 5
Completion: examination (written&verbal)
Type of instruction:
Hours/semester: Lectures 39, Seminar exercises 0, Laboratory exercises 0, Computer exercises 6, Other 7
Points: Examination 60, Tests 15, Exercises 0, Laboratories 0, Other 25
Guarantee: Švéda Miroslav, prof. Ing., CSc., DIFS
Lecturer: Ryšavý Ondřej, Ing., Ph.D., DIFS
Švéda Miroslav, prof. Ing., CSc., DIFS
Instructor: Ryšavý Ondřej, Ing., Ph.D., DIFS
Faculty: Faculty of Information Technology BUT
Department: Department of Information Systems FIT BUT
Substitute for:
Learning objectives:
Understand formal specification principles as applied to embedded systems design; be aware of utilizing temporal logics for modeling reactive systems and real-time systems; be aware of embedded
distributed system architectures.
Embedded distributed system design principles. Reactive systems and real-time systems. Reactive system and real-time system models. Fairness, liveness, safety, feasibility; real-time liveness. Temporal
logic fundamentals. Time models and temporal logics. Temporal logic and real time. Formal specifications of embedded systems. Hybrid systems. Provers. Model checking. Real-time systems verification.
Knowledge and skills required for the course:
Propositional logic. Basics of the first-order logic. The elementary notions of communication protocols.
Subject specific learning outcomes and competences:
Understanding behavioral formal specifications as applied to embedded systems design; being aware of utilizing temporal logics for modeling reactive systems and real-time systems; being informed
about embedded distributed system architectures.
Generic learning outcomes and competences:
Being acknowledged with basics of temporal logic.
Syllabus of lectures:
1. Embedded distributed system design principles
2. Reactive system and real-time system models
3. Fairness, liveness, safety, feasibility; real-time liveness
4. Temporal logic fundamentals
5. Time models and temporal logics
6. Temporal logic and real time
7. Formal specifications of embedded systems
8. Provers
9. Model checking
10. Real-time systems verification
11. Formal specification of abstract data types and objects, algebraic specifications
12. Using type theoretic systems for formal specification and verification of programs
Syllabus of numerical exercises:
1. Introduction to the Coq system: a brief exploration of the system's formalism, a description of the mathematical vernacular (the language used for communicating with the system), and a demonstration of the system for specifying and verifying properties of a simple algorithm.
Syllabus of laboratory exercises:
1. Embedded application management with intranet
Syllabus of computer exercises:
1. Spin, model checking techniques
2. PVS, theorem proving techniques
Syllabus - others, projects and individual work of students:
1. Specification and verification of embedded system properties
Fundamental literature:
1. Schneider, K.: Verification of Reactive Systems, Springer-Verlag, 2004, ISBN 3-540-00296-0
2. Huth, M.R.A., Ryan, M.D.: Logic in Computer Science - Modelling and Reasoning about Systems, Cambridge University Press, 2002, ISBN 0-521-65602-8
3. Clarke, E.M., Jr., Grumberg, O., Peled, D.A.: Model Checking, MIT Press, 2000, ISBN 0-262-03270-8
4. de Bakker, J.W. et all. (Editors): Real-Time: Theory in Practice, Springer-Verlag, LNCS 600, 1992, ISBN 3-540-55564-1
5. Gabbay, D.M., Ohlbach, H.J. (Editors): Temporal Logic, Springer-Verlag, LNCS 827, 1994, ISBN 3-540-58241-X
6. Monin, J.F., Hinchey, M.G.: Understanding Formal Methods, Springer-Verlag, 2003.
7. Peled, D.A.: Software Reliability Methods, Texts in Computer Science, Springer, 2001.
8. Tennent, R.D.: Specifying Software: A Hands-On Introduction, Cambridge University Press, 2002.
9. Bertot, Y., Casteran, P.: Interactive Theorem Proving and Program Development, Springer-Verlag, 2004.
Study literature:
1. Huth, M.R.A., Ryan, M.D.: Logic in Computer Science - Modelling and Reasoning about Systems, Cambridge University Press, 2002, ISBN 0-521-65602-8
2. Clarke, E.M., Jr., Grumberg, O., Peled, D.A.: Model Checking, MIT Press, 2000, ISBN 0-262-03270-8
3. de Bakker, J.W. et all. (Editors): Real-Time: Theory in Practice, Springer-Verlag, LNCS 600, 1992, ISBN 3-540-55564-1
Controlled instruction:
The mid-term exam, laboratory practice supported by project work, and the final exam are the monitored, point-earning activities. The mid-term exam and laboratory practice cannot be retaken. The final exam offers two additional retake opportunities.
Progress assessment:
Protocol about a lab experiment, written mid-term exam and submitting project in a due date.
Exam prerequisites:
Requirements for class accreditation are not defined.
Posted by LimaBeans93 on Sunday, January 29, 2012 at 5:45pm.
1. Solve these systems of equations
a. 3x + 6y + 3z= 3
2x - y + 4z= 14
x + 10y + 8z= -8
b. x + y + 3z= 10
x + 2y + 3z= 4
x + 4y + 3z= -6
2. Solve these systems of equations word problems.
a. A movie theater sold 450 tickets to a movie. Adult tickets cost $8 and children's tickets cost $6. If the theater took in a total of $3,250 from ticket sales, how many of the tickets were adult
tickets and how many were children's tickets?
b. Bond A pays interest of 8% per year. Bond B pays interest of 10% per year. You are to invest a total of $100,000 and earn exactly $9,500 interest per year. How much should you invest in Bond A and
in Bond B? (round to the nearest cent)
c. At a certain college, all courses are either 3-credit courses or 4-credit courses. Suppose a student has taken 16 courses and has 53 credits. How many 3-credit courses and how many 4-credit courses
has the student taken?
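For checking purposes (a sketch, not part of the original thread), these systems can be solved with NumPy; note that system 1b turns out to be inconsistent, so it has no solution:

```python
import numpy as np

# System 1a
A = np.array([[3, 6, 3], [2, -1, 4], [1, 10, 8]], dtype=float)
b = np.array([3, 14, -8], dtype=float)
sol1 = np.linalg.solve(A, b)
print(sol1)                         # x = 4, y = -2, z = 1

# System 1b: the third column is 3 times the first, so the coefficient
# matrix is singular; its rank is 2, and the right-hand side is not in
# the column space, so the system has no solution.
B = np.array([[1, 1, 3], [1, 2, 3], [1, 4, 3]], dtype=float)
rankB = np.linalg.matrix_rank(B)
print(rankB)                        # 2

# Word problem 2a: a + c = 450 tickets and 8a + 6c = 3250 dollars
adults, children = np.linalg.solve(np.array([[1, 1], [8, 6]], float),
                                   np.array([450, 3250], float))
print(adults, children)             # 275 adult, 175 children's tickets

# Word problem 2c: n3 + n4 = 16 courses and 3*n3 + 4*n4 = 53 credits
n3, n4 = np.linalg.solve(np.array([[1, 1], [3, 4]], float),
                         np.array([16, 53], float))
print(n3, n4)                       # 11 three-credit, 5 four-credit courses
```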
• COLLEGE MATH- SYSTEMS OF EQUATIONS - drwls, Sunday, January 29, 2012 at 9:46pm
This is not a homework dropoff service. If you expect to learn high school algebra in college, it's time you made an effort.
• COLLEGE MATH- SYSTEMS OF EQUATIONS - LimaBeans93, Sunday, January 29, 2012 at 9:52pm
WOW. I know high school algebra. I am in the highest math class that freshman could even be in. I did all of the problems but those were tricky for me sir. I started them all but got stuck and I
have proof. You shouldn't even respond if you're not going to offer any relevant help.
• COLLEGE MATH- SYSTEMS OF EQUATIONS - LimaBeans93, Sunday, January 29, 2012 at 9:59pm
And also I was in a gifted program in high school and graduated with a math average of a 97.37. I made more than an effort and can prove it. Before you assume, know the facts.
• COLLEGE MATH- SYSTEMS OF EQUATIONS - john, Saturday, September 15, 2012 at 10:23am
a. A movie theater sold 450 tickets to a movie. Adult tickets cost $8 and children's tickets cost $6. If the theater took in a total of $3,250 from ticket sales, how many of the tickets were
adult tickets and how many were children's tickets?
| {"url":"http://www.jiskha.com/display.cgi?id=1327877105","timestamp":"2014-04-19T07:39:00Z","content_type":null,"content_length":"10538","record_id":"<urn:uuid:e3128370-4fb7-4518-b168-d3cc0f0a6a9f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Material Results
This classroom activity presents College Algebra students with a ConcepTest, a Question of the Day, and a Write-pair-share...
Material Type:
James Rutledge
Date Added:
Aug 22, 2010
Date Modified:
Jul 02, 2012
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the...
Material Type:
James Rutledge
Date Added:
Aug 25, 2010
Date Modified:
Jul 02, 2012
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the...
Material Type:
James Rutledge
Date Added:
Aug 22, 2010
Date Modified:
Jul 02, 2012
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the...
Material Type:
James Rutledge
Date Added:
Aug 25, 2010
Date Modified:
Jul 02, 2012
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the...
Material Type:
James Rutledge
Date Added:
Aug 22, 2010
Date Modified:
Jul 02, 2012
Exponential Functions allows the user to investigate the relationship between the equation and the graph of exponential...
Material Type:
Philippe Laval
Date Added:
Oct 02, 2004
Date Modified:
Apr 04, 2011
This interactive, applet-based site steps the student through the process of identifying an equation as an ellipse or...
Material Type:
larry green
Date Added:
Nov 28, 2004
Date Modified:
Apr 10, 2011
This interactive, applet-based site steps the student through the process of identifying an equation as an ellipse or...
Material Type:
larry green
Date Added:
Nov 28, 2004
Date Modified:
Apr 10, 2011
Linear Functions contains two applets, one on slope and a second on the effect of "m" and "b" in a linear equation on the...
Material Type:
Philippe Laval
Date Added:
Oct 02, 2004
Date Modified:
Nov 30, 2012
Quadratic Functions contains two applets that allow the user to change the coefficients of a quadratic equation and observe...
Material Type:
Philippe Laval
Date Added:
Oct 02, 2004
Date Modified:
Apr 04, 2011 | {"url":"http://www.merlot.org/merlot/materials.htm?nosearchlanguage=&pageSize=&page=3&category=2592","timestamp":"2014-04-20T17:31:48Z","content_type":null,"content_length":"180764","record_id":"<urn:uuid:c7edf472-2d56-410f-9d11-794177279585>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of Matching matrix
In the field of artificial intelligence, a confusion matrix is a visualization tool typically used in supervised learning (in unsupervised learning it is typically called a matching matrix). Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. One benefit of a confusion matrix is that it is easy to see if
the system is confusing two classes (i.e. commonly mislabelling one as another).
When a data set is unbalanced (when the number of samples in different classes vary greatly) the error rate of a classifier is not representative of the true performance of the classifier. This can
easily be understood by an example: If there are for example 990 samples from class A and only 10 samples from class B, the classifier can easily be biased towards class A. If the classifier
classifies all the samples as class A, the accuracy will be 99%. This is not a good indication of the classifier's true performance. The classifier has a 100% recognition rate for class A but a 0%
recognition rate for class B.
In the example confusion matrix below, of the 8 actual cats, the system predicted that three were dogs, and of the six dogs, it predicted that one was a rabbit and two were cats. We can see from the
matrix that the system in question has trouble distinguishing between cats and dogs, but can make the distinction between rabbits and other types of animals pretty well.
Example confusion matrix
│ │ Cat │ Dog │ Rabbit │
│ Cat │ 5 │ 3 │ 0 │
│ Dog │ 2 │ 3 │ 1 │
│ Rabbit │ 0 │ 2 │ 11 │
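Such a matrix is just a tally of (actual, predicted) pairs. A minimal sketch of how it can be built (the helper name and the toy data layout are mine; the counts match the table above):

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """Rows = actual class, columns = predicted class."""
    counts = Counter(zip(actual, predicted))
    return {a: {p: counts[(a, p)] for p in labels} for a in labels}

# (actual, predicted) pairs reproducing the cat/dog/rabbit example
pairs = ([("cat", "cat")] * 5 + [("cat", "dog")] * 3 +
         [("dog", "cat")] * 2 + [("dog", "dog")] * 3 + [("dog", "rabbit")] * 1 +
         [("rabbit", "dog")] * 2 + [("rabbit", "rabbit")] * 11)
actual, predicted = zip(*pairs)
m = confusion_matrix(actual, predicted, ["cat", "dog", "rabbit"])
print(m["cat"])  # {'cat': 5, 'dog': 3, 'rabbit': 0}
```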
Table of Confusion
In Predictive Analytics, a Table of Confusion, also known as a confusion matrix, is a table with two rows and two columns that reports the number of True Negatives, False Positives, False Negatives, and True Positives.
│ │ actual p │ actual n │ total │
│ prediction p' │ True Positive │ False Positive │ P' │
│ outcome n' │ False Negative │ True Negative │ N' │
│ total │ P │ N │ │
Table 1: Table of Confusion.
For example, consider a model which predicts for 10,000 Insurance Claims whether each case is Fraudulent. This model correctly predicts 9,700 non-fraudulent cases, and 100 fraudulent cases. The model
also incorrectly predicts 150 cases which are not fraudulent to be fraudulent, and 50 cases which are fraudulent to be non-fraudulent. The resulting Table of Confusion is shown below.
│ │ actual p │ actual n │ total │
│ prediction p' │ 100 │ 150 │ P' │
│ outcome n' │ 50 │ 9700 │ N' │
│ total │ P │ N │ │
Table 2: Example Table of Confusion.
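The familiar summary rates fall straight out of the four cells. A sketch for the insurance-fraud example above (the variable names are mine, not from the source):

```python
tp, fp, fn, tn = 100, 150, 50, 9700  # cells of Table 2

accuracy  = (tp + tn) / (tp + fp + fn + tn)  # overall fraction correct
precision = tp / (tp + fp)  # fraction of flagged claims that really are fraud
recall    = tp / (tp + fn)  # fraction of fraudulent claims that get flagged
print(accuracy, precision, recall)  # 0.98 0.4 0.666...
```

This illustrates the earlier point about unbalanced data: a 98% accuracy coexists here with a precision of only 0.4 and a recall of only 2/3.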
| {"url":"http://www.reference.com/browse/Matching+matrix","timestamp":"2014-04-16T23:35:40Z","content_type":null,"content_length":"81368","record_id":"<urn:uuid:cf74196d-c654-48a6-b7bb-2049017323e2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Divide Fractions by a Whole Number
Edit Article
Edited by Runescapepro123, Ben Rubenstein, Alex, Histrion and 79 others
Dividing fractions by a whole number isn't as hard as it looks. To divide a fraction by a whole number, all you have to do is to convert the whole number into a fraction, find the reciprocal of that
fraction, and multiply the result by the first fraction. If you want to know how to do it, just follow these steps:
1. 1
Write the problem. The first step to dividing a fraction by a whole number is to simply write out the fraction followed by the division sign and the whole number you need to divide it by. Let's
say we're working with the following problem: 2/3 ÷ 4.^[1]
2. 2
Change the whole number into a fraction. To change a whole number into a fraction, all you have to do is place the number over the number 1. The whole number becomes the numerator and 1 becomes
the denominator of the fraction. Saying 4/1 is really the same as saying 4, since you're just showing that the number includes "1" 4 times. The problem should read 2/3 ÷ 4/1.
3. 3
Understand that dividing a fraction by another fraction is the same as multiplying that fraction by the reciprocal of the other fraction.
4. 4
Write the reciprocal of the whole number. To find the reciprocal of a number, simply switch the numerator and the denominator of the number. Therefore, to find the reciprocal of 4/1, simply
switch the numerator and denominator so that the number becomes 1/4.
5. 5
Change the division sign into a multiplication sign. The problem should read 2/3 x 1/4.
6. 6
Multiply the numerators and denominators of the fractions. Therefore, the next step is to multiply the numerators and denominators of the fraction to get the new numerator and denominator of the
final answer.
□ To multiply the numerators, just multiply 2 x 1 to get 2.
□ To multiply the denominators, just multiply 3 x 4 to get 12.
□ 2/3 x 1/4 = 2/12
7. 7
Simplify the fraction. To simplify the fraction, find a common factor -- any number that divides evenly into both the numerator and the denominator -- and divide both by it. Since the
numerator is 2, check whether 2 divides evenly into 12 -- it does, because 12 is even. Then divide both the numerator and denominator by 2 to get the new
numerator and denominator of the simplified answer.
□ 2 ÷ 2 = 1
□ 12 ÷ 2 = 6
□ The fraction 2/12 can be simplified to 1/6. This is your final answer.
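Python's standard-library fractions module applies exactly this recipe (multiply by the reciprocal, then reduce), so it can be used to check the worked example:

```python
from fractions import Fraction

result = Fraction(2, 3) / 4  # dividing by 4 is multiplying by 1/4
print(result)  # 1/6 -- already reduced to lowest terms
```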
• Here's a mnemonic, an easy way to remember how to do all of this. Remember the following: "Dividing fractions is easy as pie, flip the second number and multiply!"
• Another Variation of the above is KCF/KFC. Keep the first number. Change to multiplication. Flip the last number. Or F before C.
• If you cross-cancel before you multiply, you probably won't need to reduce to lowest terms, because the result is already in lowest terms. In our example, before we multiply 2/3 × 1/4, we
might notice that the first numerator (2) and the second denominator (4) have a common factor of 2, which we can cancel in advance. This changes the problem to 1/3 × 1/2, giving us 1/6
immediately and saving us the work of reducing the fraction at the end.
• If any of your fractions is negative, this method still applies; just make sure you keep track of the sign as you go through the steps.
• Only take the reciprocal of the second fraction, the one you're dividing by. Don't change the first one, the one you're dividing into. In our example, we converted the 4/1 to 1/4, but we left the
2/3 as 2/3 (we didn't change it to 3/2).
Article Info
Thanks to all authors for creating a page that has been read 291,953 times.
Was this article accurate? | {"url":"http://www.wikihow.com/Divide-Fractions-by-a-Whole-Number","timestamp":"2014-04-18T03:06:24Z","content_type":null,"content_length":"72050","record_id":"<urn:uuid:d9c40868-ba93-4c8f-86e2-aeff25f4f00e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Anonymous on Wednesday, June 11, 2008 at 5:41pm.
Algorithm Verification
Consider the following selection statement where X is an integer test score between 0 and 100.
input X
if (0 <= X and X < 49)
    output "you fail"
else if (50 <= X and X < 70)
    output "your grade is" X
    output "you did OK"
else if (70 <= X and X < 85)
    output "your grade is" X
    output "you did well"
else if (85 <= X and X < 100)
    output "your grade is" X
    output "you did great"
else
    output "how did you do?"
o What will be printed if the input is 0?
o What will be printed if the input is 100?
o What will be printed if the input is 51?
o What will be printed if the user enters ¿Wingding¿?
o Is this design robust? If so, explain why. If not, explain what you can do to make it robust.
The design is not robust because there is no explicit assignment of an integer type to the input variable X or check to see if the integer input lies between 0 and 100 (inclusive). So, the program will execute irrespective of the input. Moreover, it does not have a proper output defined at X = 49 and X = 1 ...
o How many levels of nesting are there in this design?
o Give a set of values that will test the normal operation of this program segment. Defend your choices.
o Give a set of test values that will cause each of the branches to be executed.
o Give a set of test values that test the abnormal operation of this program segment.
Please Help!!!!
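One way to answer the bullet questions is to transcribe the pseudocode directly and run it. This sketch is mine; it treats the final output as the catch-all else branch, which is what the "Wingding" question implies:

```python
def grade(x):
    """Direct transcription of the selection statement (x assumed numeric)."""
    out = []
    if 0 <= x < 49:
        out.append("you fail")
    elif 50 <= x < 70:
        out += ["your grade is %s" % x, "you did OK"]
    elif 70 <= x < 85:
        out += ["your grade is %s" % x, "you did well"]
    elif 85 <= x < 100:
        out += ["your grade is %s" % x, "you did great"]
    else:
        out.append("how did you do?")
    return out

print(grade(0))    # ['you fail']
print(grade(100))  # ['how did you do?'] -- 100 satisfies no branch
print(grade(51))   # ['your grade is 51', 'you did OK']
print(grade(49))   # ['how did you do?'] -- the gap between 49 and 50
```

A non-numeric input such as "Wingding" would raise a TypeError at the first comparison in Python; in a loosely typed language the behavior would simply be undefined, which is exactly the robustness problem the exercise is probing.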
| {"url":"http://www.jiskha.com/display.cgi?id=1213220485","timestamp":"2014-04-19T17:13:24Z","content_type":null,"content_length":"9420","record_id":"<urn:uuid:f7815c1f-88b5-4441-87ca-ff50c864b77e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Springfield, VA
Fairfax, VA 22031
Professional Full-Time Tutor --- UCLA Graduate
...I am a full-time tutor; this is not a part-time endeavor for me. I can effectively instruct all the math-related aspects of the GMAT. I am a former high school math teacher with well over 10 years of full-time teaching & tutoring experience. I can also assist...
Offering 10+ subjects including GMAT | {"url":"http://www.wyzant.com/Springfield_VA_GMAT_tutors.aspx","timestamp":"2014-04-19T20:21:34Z","content_type":null,"content_length":"60396","record_id":"<urn:uuid:7d6d0e30-c4fe-44b3-93a1-67982dfad739>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
In triangle ABC, find k if there is only one possible value for x.
| {"url":"http://openstudy.com/updates/509624a6e4b0d0275a3cd4d8","timestamp":"2014-04-16T22:47:57Z","content_type":null,"content_length":"54528","record_id":"<urn:uuid:a06f5525-dc0e-4aa3-8743-f6f0efa05cee>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Playing with fire (or water)
April 2, 2012
By arthur charpentier
A few days ago, http://www.futilitycloset.com/ published a short post based on the fourth problem of the 1987 Canadian Mathematical Olympiad (from on a problem from the 6th All Soviet Union
Mathematical Competition in Voronezh, 1966). The problem is simple (as always). It is about water pistol duels (with an odd number of players)
The answer is nice, an can be read on the blog.
What puzzled me in this problem is the following: if we know, for sure, that at least one player won't get wet, we don't know exactly how many of them won't get wet (assuming that if they shoot at
the closest, they hit him for sure) ? It is simple to run simulations, e.g. assuming that players are uniformly distributed over a square,
n <- 51; x <- runif(n); y <- runif(n)  # players placed uniformly in the unit square
d <- as.matrix(dist(cbind(x, y), method = "euclidean", upper = TRUE))
It is then rather simple to get the distribution of the number of players that did not get wet.
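A pure-Python version of the same simulation can be sketched as follows (the function name and trial count are my choices, not from the original R code; players are placed uniformly in the unit square and each shoots the nearest other player):

```python
import random

def dry_players(n):
    """Place n players uniformly in the unit square; each one shoots the
    nearest other player; return how many players are never hit."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    targets = set()
    for i, (xi, yi) in enumerate(pts):
        nearest = min((j for j in range(n) if j != i),
                      key=lambda j: (pts[j][0] - xi) ** 2 + (pts[j][1] - yi) ** 2)
        targets.add(nearest)
    return n - len(targets)

random.seed(1)
counts = [dry_players(51) for _ in range(200)]
# With an odd number of players, at least one always stays dry.
print(min(counts), max(counts))
```

A histogram of counts reproduces the distribution plotted in the post.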
The graph for different values for the total number of players is the following (based on 25,000 simulations)
If we investigate further, say with 51 players, we have a distribution for the total number of players that did not get wet which looks exactly like the Gaussian distribution,
If anyone has an intuition (not to say a proof) for that, I'd be glad to hear it...
| {"url":"http://www.r-bloggers.com/playing-with-fire-or-water/","timestamp":"2014-04-16T22:27:16Z","content_type":null,"content_length":"45240","record_id":"<urn:uuid:94a850db-1de5-4027-ae20-5a231e24177f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
| {"url":"http://www.wikihow.com/Divide-Fractions-by-a-Whole-Number","timestamp":"2014-04-18T03:06:24Z","content_type":null,"content_length":"72050","record_id":"<urn:uuid:d9c40868-ba93-4c8f-86e2-aeff25f4f00e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
View Tubes: Student Worksheet
Name: ____________________________________
Does the length of a viewing tube, its diameter, or the distance from an object affect the type of data collected and the resulting graph?
Work in groups of four or five.
INVESTIGATION A
1. Collect the Data: Each member of the group will view the poster with the given tubes at varying distances from the wall. Others will mark the top and bottom of the described portion to
determine the measurement in inches of the viewing height. Calculate the average visible height for each of the distances.
2. Graph the Data: Choose appropriate labels and scales for the horizontal and vertical axes. Plot the data as ordered pairs, (x, y).
3. Read the Results: Looking at your points, do they seem to lie along a line or a curve? Draw the line that best fits your data.
4. Describe in words how to determine the height of the visible portion if you know the distance from the wall.
5. Describe by equation how to determine the height of the visible portion (y) if you know the distance from the wall (x).
y =
6. Predict the height of the visible portion, if you were standing 10 feet
from the wall.
7. Predict the distance from the wall you would have to stand in order to see a 25-inch portion of the poster.
Viewing Height:
Same Tube Length & Tube Diameter
Varying Distances from Wall
│ │ Distance from Poster │
│Student name ├──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┤
│ │1 foot│2 feet│3 feet│4 feet│5 feet│6 feet│7 feet│8 feet│
│Total │ │ │ │ │ │ │ │ │
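Once the averages are tabulated, the line of best fit in step 3 can be computed rather than eyeballed. The numbers below are placeholders, not real class measurements; only the method is the point:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# distance from poster (feet) vs. average visible height (inches) -- placeholder data
distance = [1, 2, 3, 4, 5, 6, 7, 8]
height = [3, 6, 9, 12, 15, 18, 21, 24]
m, b = fit_line(distance, height)
print(m, b)  # 3.0 0.0, so the predicted height at 10 feet is m*10 + b = 30 inches
```

With real measurements the fitted slope and intercept answer questions 5-7 directly: the predicted height at any distance is m times that distance plus b.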
INVESTIGATION B
1. Collect the Data: Each member of the group will view the poster with the given tubes at varying distances from the wall. Others will mark the top and bottom of the described portion to
determine the measurement in inches of the viewing height. Calculate the average visible height for each of the distances.
2. Graph the Data: Choose appropriate labels and scales for the horizontal and vertical axes. Plot the data as ordered pairs, (x, y).
3. Read the Results: Looking at your points, do they seem to lie along a line or a curve? Draw the line that best fits your data.
4. Describe in words how to determine the height of the visible portion if you know the diameter of the tube.
5. Describe by equation how to determine the height of the visible portion (y) if you know the diameter of the tube (x).
y =
6. Predict the height of the visible portion for a tube with a diameter of five inches.
7. Predict the diameter of a tube that would allow you to see a 10-inch portion of the poster.
Viewing Height:
Same Distance from Wall & Tube Length
Varying Diameters
│ │Diameter of Tube │
│Student Name ├──┬──┬──┬──┬──┬──┤
│ │1"│2"│3"│4"│5"│6"│
│Average │ │ │ │ │ │ │
INVESTIGATION C
1. Collect the Data: Each member of the group will view the poster with the given tubes varying in length, a designated distance from the wall. Others will mark the top and bottom of the
described portion to determine the measurement in inches of the viewing height. Calculate the average visible height for each of the lengths.
2. Graph the Data: Choose appropriate labels and scales for the horizontal and vertical axes. Plot the data as ordered pairs, (x, y).
3. Read the Results: Looking at your points, as the length of the tube increases, what is happening to the height of the visible portion of the picture?
4. Would the height of the portion ever decrease to zero? Why?
5. What height does the line seem to be approaching as a minimum?
Viewing Height
Same Tube Diameter & Distance from Wall
Varying Length
│ │ Length of Tube │
│Student Name ├────────┬────────┬─────────┬─────────┤
│ │4 inches│7 inches│22 inches│33 inches│
│Average │ │ │ │ │
As a result of this activity, students learn to analyze data that they have collected, look for relationships, and make predictions.
Have students answer the following questions:
1. Explain how you knew the graphs were or were not linear.
2. Predict the height of the visible portion of the measuring tape if you were standing 15 feet from the wall.
3. Predict the distance you would have to stand from the wall in order to see a 30-inch portion of the measuring tape.
4. Predict the height of the visible portion for the tube with a diameter of 7.5 inches.
5. Predict the diameter of a tube that would allow you to see a 20-inch portion of the measuring tape. | {"url":"http://fcit.usf.edu/math/lessons/activities/viewS.htm","timestamp":"2014-04-18T00:54:01Z","content_type":null,"content_length":"20310","record_id":"<urn:uuid:2186f33e-fb50-4978-b788-1e4d9de66974>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reply to comment
Submitted by Anonymous on August 28, 2011.
"Physicists usually explain the arrow of time using the concept of increasing entropy (...) The universe evolves from a highly-ordered, low-entropy beginning, to a progressively more disordered
state, defining time's arrow."
I'm not sure, but for me it looks like circular reasoning.
First there is assumption, that there are previous and next states (this implies directionality), and next states are more disordered than previous states, thus entropy is increasing.
Then there is explanation, that time (directionality) exists, because entropy is increasing.
So directionality exists because there is directionality?
Please correct me, if I'm wrong.
Arek W. | {"url":"http://plus.maths.org/content/comment/reply/5545/2709","timestamp":"2014-04-20T11:07:01Z","content_type":null,"content_length":"20497","record_id":"<urn:uuid:77f50185-f09c-4eb5-9ec2-e7b88d8d8f41>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instructor Class Description
Time Schedule:
Subramanian Ramachandran
PHYS 224
Seattle Campus
Thermal Physics
Introduces heat, thermodynamics, elementary kinetic theory, and statistical physics. Prerequisite: either MATH 126 or MATH 136, which may be taken concurrently; PHYS 122, which may be taken
concurrently. Offered: ASpS.
Class description
The course will cover equations of state (specifically for an ideal gas), the thermodynamic phase diagram, statistical methods to describe a microcanonical ensemble, thermodynamics of cyclical processes
and inhomogeneous reactions, state variables, and Boltzmann statistics.
Student learning goals
Understand the fundamental origins of physical quantities such as temperature and pressure and how it is derived from state variables.
Understand the laws of thermodynamics and, with particular reference to the second law, appreciate the importance of entropy.
Be able to calculate the heat of reaction from enthalpy at standard states
Apply principles of thermodynamics to ideal and non-ideal cyclical processes.
Describe phase transitions in states of matter including magnetic phase transitions
Apply the principles studied to solve problems in thermal physics
General method of instruction
In-class demonstrations will be used to reinforce the concepts studied. Most of the lecture will involve use of the blackboard. Students will be encouraged to participate with questions and answers.
PowerPoint presentations will be used as a teaching aid.
Recommended preparation
Introductory calculus and introductory physics - may be taken concurrently
Class assignments and grading
Homework problems will be assigned from the prescribed textbooks. There may be multiple-choice questions as well as problems that involve calculations.
The grades will be based on a curve with a certain minimum threshold established for an A grade. Weights are assigned to each component.
The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without
notice. In most cases, the official course syllabus will be distributed on the first day of class. Additional Information
Last Update by Subramanian Ramachandran
Date: 06/17/2011 | {"url":"https://www.washington.edu/students/icd/S/phys/224ramacs.html","timestamp":"2014-04-16T13:18:23Z","content_type":null,"content_length":"5236","record_id":"<urn:uuid:0aacbf8c-a79e-406e-b098-4e2d5bf3a14b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Das U-Blog by Prashanth
There were two posts that got a few comments each, so I will repost most of those.
Gary Newell
said, "This is a good in depth review. I am currently using the Mint 14 Cinnamon release (Ubuntu base). If you have a powerful enough PC then Cinnamon is the best desktop as far as I can tell. I
prefer the Consort desktop used by SolusOS to the MATE desktop and if I have an older PC I actually overall prefer to use XFCE and so tend to run Xubuntu."
Juan Carlos García Ramírez
had this to say: "I still prefer xfce :D so linux mint 14 (Nadia) xfce for me".
shared, "I tested Pardus 2013 as well. In my case, I could test the repositories using Synaptic and could download the localization file for my language. I did notice that even so, some programs were
still in Turkish (VLC, Synaptic). I have the Release Candidate installed on my laptop and it is greatly stable."
had this bit of support: "@Prashanth, Thank you for your time with Pardus and your review. Good luck going back to school! @Mega, I just came from your blog and recommended you to read this review. I
guess you are too fast :-)"
Thanks to all those who commented on those posts. I am back on campus now, and the semester isn't about to wait for me to settle down, so it will be back in full swing any minute now. This means that
my post frequency will once again decrease through the rest of the semester. Anyway, if you like what I write, please continue subscribing and commenting!
My spring break is coming to an end (I only have 1.5 more days), so I figured it might be nice to do another review while I still can. Today I'm reviewing Pardus 2013.
Main Screen + KDE Kickoff Menu
Pardus is a distribution developed at least in part by the Turkish military. It used to not be based on any other distribution and used its unique PISI package management system, which featured delta
upgrades (meaning that only the differences between package versions would be applied for upgrades, greatly reducing their size). Since then, though, the organization largely responsible for the
development of Pardus went through some troubles. One result was the forking of Pardus into PISI Linux to further develop the original alpha release of Pardus 2013. The other result was the rebasing
of Pardus on Debian, abandoning PISI in that regard. Now Pardus 2013 is a distribution based on Debian 7 "Wheezy" that uses either KDE 4.8 or GNOME 3 (whatever version is packaged in the latest
version of Debian, though I'm not sure what that is).
I reviewed Pardus on a live USB made with MultiSystem. Follow the jump to see what it's like.
As an update to a previous post about my adventures in QED-land for 8.06, I emailed my recitation leader about whether my intuition about the meaning of the Fourier components of the electromagnetic
potential solving the wave equation (and being quantized to the ladder operators) was correct. He said it basically is correct, although there are a few things that, while I kept in mind at that
time, I still need to keep in mind throughout. The first is that the canonical quantization procedure uses the potential $\vec{A}$ as the coordinate-like quantity and finds the conjugate momentum to
this field to be proportional to the electric field $\vec{E}$, with the magnetic field nowhere to be found directly in the Hamiltonian. The second is that there is a different harmonic oscillator for
each mode, and the number eigenstates do not represent the energy of a given photon but instead represent the number of photons present with an energy corresponding to that mode. Hence, while
coherent states do indeed represent points in the phase space of $(\vec{A}, \vec{E})$, the main point is that the photon number can fluctuate, and while classical behavior is recovered for large
numbers $n$ of photons as the fluctuations of the number are $\sqrt{n}$ by Poisson statistics, the interesting physics happens for low $n$ eigenstates or superpositions thereof in which $a$ and $a^{\
dagger}$ play the same role as in the usual quantum harmonic oscillator. Furthermore, the third issue is that only a particular mode $\vec{k}$ and position $\vec{x}$ can be considered, because the
electromagnetic potential has a value for each of those quantities, so unless those are held constant, the picture of phase space $(\vec{A}, \vec{E})$ becomes infinite-dimensional. Related to this,
the fourth and fifth issues are, respectively, that $\vec{A}$ is used as the field and $\vec{E}$ as its conjugate momentum rather than using $\vec{E}$ and $\vec{B}$ because the latter two fields are
coupled to each other by the Maxwell equations so they form an overcomplete set of degrees of freedom (or something like that), whereas using $\vec{A}$ as the field and finding its conjugate momentum
in conjunction with a particular gauge choice (usually the Coulomb gauge $\nabla \cdot \vec{A} = 0$) yields the correct number of degrees of freedom. These explanations seem convincing enough to me,
so I will leave those there for the moment.
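The Poisson-statistics point above can be made concrete. This is my own numerical sketch (not part of the original discussion): it builds the Fock-state probabilities $|c_n|^2 = e^{-|\alpha|^2}|\alpha|^{2n}/n!$ of a coherent state $|\alpha\rangle$ under an assumed Fock-space cutoff $N \gg |\alpha|^2$, and checks that the photon-number fluctuations are $\sqrt{\langle n \rangle}$.

```python
import numpy as np

# My own illustration: photon-number statistics of a coherent state
# |alpha>. Assumption: a cutoff of N Fock states with N well above
# |alpha|^2, so the neglected tail is negligible.
N = 200
alpha = 3.0                      # mean photon number will be |alpha|^2 = 9

n = np.arange(N)
log_fact = np.cumsum(np.log(np.maximum(n, 1)))   # log(n!)
# |c_n|^2 = e^{-|alpha|^2} |alpha|^{2n} / n!  (a Poisson distribution)
probs = np.exp(-abs(alpha)**2 + 2 * n * np.log(abs(alpha)) - log_fact)

mean_n = np.sum(n * probs)
var_n = np.sum(n**2 * probs) - mean_n**2
print(mean_n, var_n)   # both ~9, so the number spread is sqrt(<n>)
```

Since the mean and variance agree, the relative fluctuation $\sqrt{n}/n$ indeed shrinks as $n$ grows, which is the classical limit described above.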
Another major issue that I brought up with him for which he didn't give me a complete answer was the issue that the conjugate momentum to $\vec{A}$ was being found through \[ \Pi_j = \frac{\partial \
mathcal{L}}{\partial (\partial_t A_j)} \] given the Lagrangian density $\mathcal{L} = \frac{1}{8\pi} \left(\vec{E}^2 - \vec{B}^2 \right)$ and the field relations $\vec{E} = -\frac{1}{c}\partial_t \
vec{A}$ & $\vec{B} = \nabla \times \vec{A}$. This didn't seem manifestly Lorentz-covariant to me, because in the class 8.033 — Relativity, I had learned that the conjugate momentum to the
electromagnetic potential $A^{\mu}$ in the above Lagrangian density would be the 2-index tensor \[ \Pi^{\mu \nu} = \frac{\partial \mathcal{L}}{\partial (\partial_{\mu} A_{\nu})} .\] This would make a
difference in finding the Hamiltonian density \[ \mathcal{H} = \sum_{\mu} \Pi^{\mu} \partial_t A_{\mu} - \mathcal{L} = \frac{1}{8\pi} \left(\vec{E}^2 + \vec{B}^2 \right). \] I thought that the
Hamiltonian density would need to be a Lorentz-invariant scalar just like the Lagrangian density. As it turns out, this is not the case, because the Hamiltonian density represents the energy which
explicitly picks out the temporal direction as special, so time derivatives are OK in finding the momentum conjugate to the potential; because the Lagrangian and Hamiltonian densities look so
similar, it looks like both could be Lorentz-invariant scalar functions, but deceptively, only the former is so. At this point, I figured that because the Hamiltonian and (not field conjugate, but
physical) momentum looked so similar, they could arise from the same covariant vector. However, there is no "natural" 1-index vector with which to multiply the Lagrangian density to get some sort of
covariant vector generalization of the Hamiltonian density, though there is a 2-index tensor, and that is the metric. I figured here that the Hamiltonian and momentum for the electromagnetic field
could be related to the stress-energy tensor, which gives the energy and momentum densities and fluxes. After a while of searching online for answers, I was quite pleased to find my intuition to be
essentially spot-on: indeed the conjugate momentum should be a tensor as given above, the Legendre transformation can then be done in a covariant manner, and it does in fact turn out that the result
is just the stress-energy tensor \[ T^{\mu \nu} = \sum_{\xi} \Pi^{\mu \xi} \partial^{\nu} A_{\xi} - \mathcal{L}\eta^{\mu \nu} \] (UPDATE: the index positions have been corrected) for the
electromagnetic field. Indeed, the time-time component is exactly the energy/Hamiltonian density $\mathcal{H} = T_{(0, 0)}$, and the Hamiltonian $H = \sum_{\vec{k}} \hbar\omega(\vec{k}) \cdot (\alpha
^{\star} (\vec{k}) \alpha(\vec{k}) + \alpha(\vec{k}) \alpha^{\star} (\vec{k})) = \int T_{(0, 0)} d^3 x$. As it turns out, the momentum $\vec{p} = \sum_{\vec{k}} \hbar\vec{k} \cdot (\alpha^{\star} (\
vec{k}) \alpha(\vec{k}) + \alpha(\vec{k}) \alpha^{\star} (\vec{k}))$ doesn't look similar just by coincidence: $p_j = \int T_{(0, j)} d^3 x$. The only remaining point of confusion is that it seems
like the Hamiltonian and momentum should together form a Lorentz-covariant vector $p_{\mu} = (H, p_j)$, yet if the stress-energy tensor respects Lorentz-covariance, then integrating over the volume
element $d^3 x$ won't respect transformations in a Lorentz-covariant manner. I guess because the individual components of the stress-energy tensor transform under a Lorentz boost and the volume
element does as well, then maybe the vector $p_{\mu}$ as given above will respect Lorentz-covariance. (UPDATE: another issue I was having but forgot to write before clicking "Publish" was the fact
that only the $T_{(0, \nu)}$ components are being considered. I wonder if there is some natural 1-index Lorentz-covariant vector $b_{\nu}$ to contract with $T_{\mu \nu}$ so that the result is a
1-index vector which in a given frame has a temporal component given by the Hamiltonian density and spatial components given by the momentum density.) Overall, I think it is interesting that this
particular hang-up was over a point in classical field theory and special relativity and had nothing to do with the quantization of the fields; in any case, I think I have gotten over the major
hang-ups about this and can proceed reading through what I need to read for the 8.06 paper.
There were two things that I would like to post here today. The first is something I have been mulling over for a while. The second is something that I thought about more recently.
Time evolution in nonrelativistic quantum mechanics occurs according to the [time-dependent] Schrödinger equation \[ H|\Psi\rangle = i\hbar \frac{\partial}{\partial t} |\Psi\rangle .\] While this at
first may seem intractable, the trick is that typically the Hamiltonian is not time-dependent, so a candidate solution could be $|\Psi\rangle = \phi(t)|E\rangle$. Plugging this back in yields time
evolution that occurs through the phase $\phi(t) = e^{-\frac{iEt}{\hbar}}$ applied to energy eigenstates that solve \[ H|E\rangle = E \cdot |E\rangle \] and this equation is often called the
"time-independent Schrödinger equation". When I was taking 8.04 — Quantum Physics I, I agreed with my professor who called this a misnomer, in that the Schrödinger equation is supposed to only
describe time evolution, so what is being called "time-independent" is more properly just an energy eigenvalue equation. That said, I was thinking that the "time-independent Schrödinger equation" is
really just like a Fourier transform of the Schrödinger equation from time to frequency (related to energy by $E = \hbar\omega$), so the former could be an OK nomenclature because it is just a change
of basis. However, there are two things to note: the Schrödinger equation is basis-independent, whereas the "time-independent Schrödinger equation" is expressed only in the basis of energy
eigenstates, and time is not an observable quantity (i.e. Hermitian operator) but is a parameter, so the change of basis/Fourier transform argument doesn't work in quite the same way that it does for
position versus momentum. Hence, I've come to the conclusion that it is better to call the "time-independent Schrödinger equation" the energy eigenvalue equation.
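The phase-evolution claim above is easy to check numerically. This is my own sketch (with $\hbar = 1$ and an arbitrary random Hermitian matrix standing in for the Hamiltonian, neither of which comes from the post): a state solving the energy eigenvalue equation evolves under the full propagator purely by the phase $e^{-iEt}$.

```python
import numpy as np

# My own sketch with hbar = 1 and a random 4-level Hermitian matrix
# standing in for the Hamiltonian: a state that solves the energy
# eigenvalue equation evolves only by the phase e^{-i E t}.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # Hermitian Hamiltonian
E, V = np.linalg.eigh(H)                    # energy eigenvalue equation

t = 0.7
# Propagator from the spectral decomposition: U = V e^{-iEt} V†
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi = V[:, 2]                               # one energy eigenstate
print(np.allclose(U @ psi, np.exp(-1j * E[2] * t) * psi))  # True
```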
Switching gears, I was thinking about how the Biot-Savart law is derived. My AP Physics C teacher told me that the Ampère law is derived from the Biot-Savart law. However, this is patently not true,
because the Biot-Savart law only works for charges moving at a constant velocity, whereas the Ampère law is true for magnetic fields created by any currents or any changing electric fields. In 8.022
— Physics II, I did see a derivation of the Biot-Savart law from the Ampère law, showing that the latter is indeed more fundamental than the former, but it involved the magnetic potential and a lot
more work. I wanted to see if that derivation still made sense to me, but then I realized that because magnetism essentially springs from the combination of electricity and special relativity and
because the Biot-Savart law relies on the approximation of the charges moving at a constant velocity, it should be possible to derive the Biot-Savart law from the Coulomb law and special relativity.
Indeed, it is possible. Consider a charge $q$ whose electric field is \[ \vec{E} = \frac{q}{r^2} \vec{e}_r \] in its rest frame. Note that the Coulomb law is exact in the rest frame of a charge. Now
consider a frame moving with respect to the charge at a velocity $-\vec{v}$, so that observers in the frame see the charge move at a velocity $\vec{v}$. Considering only the component of the magnetic field perpendicular to the relative motion, noting that there is no magnetic field in the rest frame of the charge, and considering the low-speed limit (which is the range of validity of the Biot-Savart law) $\left|\frac{\vec{v}}{c}\right| \ll 1$ so that $\gamma \approx 1$ yields $\vec{B} \approx -\frac{\vec{v}}{c} \times \vec{E}$. Plugging in $-\vec{v}$ (the specified velocity of the
new frame relative to the charge) for $\vec{v}$ (the general expression for the relative velocity) and plugging in the Coulomb expression for $\vec{E}$ yields the Biot-Savart law \[ \vec{B} = \frac{q
\vec{v} \times \vec{e}_r}{cr^2}. \] One thing to be emphasized again is that the Coulomb law is exact in the rest frame of the charge, while the Biot-Savart law is always an approximation because a
moving charge will have an electric field that deviates from the Coulomb expression; the fact that the Biot-Savart law is a low-speed inertial approximation is why I feel comfortable doing the
derivation this way.
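The low-speed claim can be checked numerically. This sketch is my own (in Gaussian units with $c = 1$, so $\vec{\beta} = \vec{v}/c$); it uses the standard exact field of a uniformly moving charge, measured from its present position, and compares the resulting exact $\vec{B}$ against the Biot-Savart expression at two speeds.

```python
import numpy as np

# My own numerical check, Gaussian units with c = 1 (beta = v/c).
# Exact field of a uniformly moving charge, from its present position:
# E = q (1 - b^2) / (r^2 (1 - b^2 sin^2(theta))^{3/2}) e_r, B = beta x E.
# Biot-Savart replaces this E by the plain Coulomb field.
q = 1.0
r_vec = np.array([1.0, 0.5, 0.3])   # field point relative to the charge
r = np.linalg.norm(r_vec)
e_r = r_vec / r

def fields(beta_vec):
    beta2 = np.dot(beta_vec, beta_vec)
    cos2 = np.dot(beta_vec, e_r)**2 / beta2   # cos^2 of angle between v and r
    E_exact = q * (1 - beta2) / (r**2 * (1 - beta2 * (1 - cos2))**1.5) * e_r
    B_exact = np.cross(beta_vec, E_exact)     # exact: B = (v/c) x E
    B_biot = q * np.cross(beta_vec, e_r) / r**2   # Biot-Savart law
    return B_exact, B_biot

errs = []
for beta in (0.01, 0.5):
    B_exact, B_biot = fields(np.array([0.0, 0.0, beta]))
    errs.append(np.linalg.norm(B_exact - B_biot) / np.linalg.norm(B_exact))
print(errs)  # tiny at low speed, of order beta^2 in general
```

The relative error between the two grows roughly like $\beta^2$, consistent with Biot-Savart being a low-speed approximation.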
For those of you who have been waiting for a review, I think I may have said before that my writing would shift more to science-y stuff and away from distribution reviews. However, that does not mean
that reviews will stop entirely. I'm on spring break now and have a little more time to do these reviews, so today I am reviewing Linux Mint MATE 201303, which came out earlier this week.
Main Screen + Linux Mint Menu
This is the version of Linux Mint based on Debian rather than Ubuntu. It uses a variant of a rolling-release model, in that while existing users can get the latest and greatest software simply by
applying updates as usual, the updates come in large bundles (I almost want to say they are like the Microsoft Windows Service Packs, except that they work) rather than individual package files. This
means that the most common packages used on a Debian-based Linux Mint system are tested so that they can be guaranteed to work not only individually but also together, so that the problem of an
individual update breaking other dependencies becomes moot. Around the time of releasing a new update pack, a new ISO file snapshot of the distribution is released, as was the case this time around.
I reviewed the [32-bit] MATE edition using a live USB made with MultiSystem; I wanted to review the Cinnamon edition too, but it refused to boot, so I will leave my assessment of it at that. I also
did an installation of this (which regular readers know is rare), so you will have to follow the jump to see what this is like.
In a post from a few days ago, I briefly mentioned the notion of imaginary time with regard to angular momentum. I'd like to go into that a little further in this post.
In 3 spatial dimensions, the flat (Euclidean) metric is $\eta_{ij} = \delta_{ij}$, which is quite convenient, as lengths are given by $(\Delta s)^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$ which
is just the usual Pythagorean theorem. When a temporal dimension is added, as in special relativity, the coordinates are now $x^{\mu} = (ct, x_{j})$, and the Euclidean metric becomes the Minkowski
metric $\eta_{\mu \nu} = \mathrm{diag}(-1, 1, 1, 1)$ so that $\eta_{tt} = -1$, $\eta_{(t, j)} = 0$, and $\eta_{ij} = \delta_{ij}$. This means that spacetime intervals become $(\Delta s)^2 = -(c\Delta
t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$, which is the normal Pythagorean theorem only if $\Delta t = 0$. In general, time coordinate differences contribute negatively to the spacetime
interval. In addition, Lorentz transformations are given by a hyperbolic rotation by a [hyperbolic] angle $\alpha$ equal to the rapidity given by $\frac{v}{c} = \tanh(\alpha)$. This doesn't look
quite the same as normal Euclidean geometry. However, a transformation to imaginary time, called a Wick rotation, can be done by setting $\tau = it$, so $x^{\mu} = (ic\tau, x_{j})$, $\eta_{\mu \nu} =
\delta_{\mu \nu}$, $(\Delta s)^2 = (c\Delta \tau)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$ as in the usual Pythagorean theorem, and the Lorentz transformation is given by a real rotation by an
angle $\theta = i\alpha$ (though I may have gotten some of these signs wrong so forgive me) where $\alpha$ is now imaginary. Now, the connection to the component $L_{(0, j)}$ of the angular momentum
tensor should be more clear.
I first encountered this in the class 8.033 — Relativity, where I was able to explore this curiosity on a problem set. That question and the accompanying discussion seemed to say that while this is a
cool thing to try doing once, it isn't really useful, especially because it does not hold true in general relativity with more general metrics $g_{\mu \nu} \neq \eta_{\mu \nu}$ except in very special
cases. However, as it turns out, imaginary time does play a role in quantum mechanics, even without the help of relativity.
Schrödinger time evolution occurs through the unitary transformation $u = e^{-\frac{itH}{\hbar}}$ satisfying $uu^{\dagger} = u^{\dagger} u = 1$. This means that the probability that an initial state
$|\psi\rangle$ ends after time $t$ in the same state is given by the amplitude (whose square is the probability [density]) $\mathfrak{p}(t) = \langle\psi|e^{-\frac{itH}{\hbar}}|\psi\rangle$.
Meanwhile, assuming the states $|\psi\rangle$ form a complete and orthonormal basis (though I don't know if this assumption is truly necessary), the partition function $Z = \mathrm{trace}\left(e^{-\
frac{H}{k_B T}}\right)$, which can be expanded in the basis $|\psi\rangle$ as $Z = \sum_{\psi} \langle\psi|e^{-\frac{H}{k_B T}}|\psi\rangle$. This, however, is just as well rewritten as $Z = \sum_{\
psi} \mathfrak{p}\left(t = -\frac{i\hbar}{k_B T}\right)$. Hence, quantum and statistical mechanical information can be gotten from the same amplitudes using the substitution $t = -\frac{i\hbar}{k_B
T}$, which essentially calls temperature a reciprocal imaginary time. This is not really meant to show anything more deep or profound about the connection between time and temperature; it is really
more of a trick stemming from the fact that the same Hamiltonian can be used to solve problems in quantum mechanics or equilibrium statistical mechanics.
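The substitution can be verified directly on a toy system. This is my own check (with $\hbar = k_B = 1$ and a hypothetical two-level Hamiltonian $H = \mathrm{diag}(0, 1)$, neither of which appears in the post): summing the return amplitudes at $t = -\frac{i\hbar}{k_B T}$ reproduces $Z$.

```python
import numpy as np

# My own toy check, units hbar = k_B = 1, hypothetical H = diag(0, 1).
# Z = Tr e^{-H/T} should equal the sum over basis states of the
# return amplitudes <psi| e^{-i H t} |psi> evaluated at t = -i/T.
E = np.array([0.0, 1.0])     # energy eigenvalues of H
T = 0.5                      # temperature

Z_stat = np.sum(np.exp(-E / T))                 # statistical mechanics

t = -1j / T                                     # reciprocal imaginary time
Z_amp = sum(np.exp(-1j * Ek * t) for Ek in E)   # quantum amplitudes
print(Z_stat, Z_amp)  # identical; the imaginary part vanishes
```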
As an aside, it turns out that temperature, even when measured in an absolute scale, can be negative. There are plenty of papers on this online, but suffice it to say that this comes from a more
general statistical definition of temperature. Rather than defining it (as it commonly is) as the average kinetic energy of particles, it is better to define it as a measure of the probability
distribution that a particle will have a given energy. Usually, particles tend to be in lower energy states more than in higher energy states, and as a consequence, the temperature is positive.
However, it is possible (and has been done repeatedly) under certain circumstances to cleverly force the system in a way that causes particles to be in higher energy states with higher probability
than in lower energy states, and this is exactly the negative temperature. More formally, $\frac{1}{T} = \frac{\partial S}{\partial E}$ where $E$ is the energy and $S$ is the entropy of the system,
which is a measure of how many different states the system can possibly have for a given energy. For positive temperature, if two objects of different temperatures are brought into contact, energy
will flow from the hotter one to the colder, cooling the former and heating the latter until equal temperatures are achieved. For negative temperature, though, if an object with negative temperature
is brought in contact with an object that has positive temperature, each object tends to increase its own entropy. Like most normal objects, the latter does this by absorbing energy, but by the
definition of temperature, the former does this by releasing energy, meaning the former will spontaneously heat the latter. Hence, negative temperature is hotter than positive temperature; this is a
quirk of the definition of reciprocal temperature, so really what is happening is that absolute zero on the positive side is still the coldest possible temperature, absolute zero on the negative side
is now the hottest temperature, and $\pm \infty$ is in the middle.
This was really just me writing down stuff that I had been thinking about a couple of months ago. I hope this helps someone, and I also await the day when TV newscasters say "complex time brought to
you by..." instead of "time and temperature brought to you by...".
The class 8.06 — Quantum Physics III requires a final paper, written essentially like a review article of a certain area of physics that uses quantum mechanics and that is written for the level of
8.06 (and not much higher). At the same time, I have also been looking into other possible UROP projects because while I am quite happy with my photonic crystals UROP and would be pleased to continue
with it, that project is the only one I have done at MIT thus far, and I would like to try at least one more thing before I graduate. My advisor suggested that I not do something already done to
death like the Feynman path integrals in the 8.06 paper but instead to do something that could act as a springboard in my UROP search. One of the UROP projects I have been investigating has to do
with Casimir forces, but I pretty much don't know anything about that, QED, or [more generally] QFT. Given that other students have successfully written 8.06 papers about Casimir forces, I figured
this would be the perfect way to teach myself what I might need to know to be able to start on a UROP project in that area. Most helpful thus far has been my recitation leader, who is a graduate
student working in the same group that I have been looking into for UROP projects; he has been able to show me some of the basic tools in Casimir physics and point me in the right direction for more
information. Finally, note that there will probably be more posts about this in the near future, as I'll be using this to jot down my thoughts and make them more coherent (no pun intended) for future reference.
Anyway, I've been able to read some more papers on the subject, including Casimir's original paper on it as well as Lifshitz's paper going a little further with it. One of the things that confused me
in those papers (and in my recitation leader's explanation, which was basically the same thing) was the following. The explanation ends with the notion that quantum electrodynamic fluctuations in a
space with a given dielectric constant, say in a vacuum surrounded by two metal plates, will cause those metal plates to attract or repel in a manner dependent on their separation. This depends on
the separation being comparable to the wavelength of the electromagnetic field (or something like that), because at much larger distances, the power of normal blackbody radiation (which ironically
still requires quantum mechanics to be explained) does not depend on the separation of the two objects, nor does it really depend on their geometries, but only on their temperatures. The explanation
of the Casimir effect starts with the notion of an electromagnetic field confined between two infinite perfectly conducting parallel plates, so the fields form standing waves like the wavefunctions
of a quantum particle in an infinite square well. This is all fine and dandy...except that this presumes that there is an electromagnetic field. This confused me: why should one assume the existence
of an electromagnetic field, and why couldn't it be possible to assume that there really is no field between the plates?
Then I remembered what the deal is with quantization of the electromagnetic field and photon states from 8.05 — Quantum Physics II. The derivation from that class still seems quite fascinating to me,
so I'm going to repost it here. You don't need to know QED or QFT, but you do need to be familiar with Dirac notation and at least a little comfortable with the quantization of the simple harmonic oscillator.
Let us first get the classical picture straight. Consider an electromagnetic field inside a cavity of volume $\mathcal{V}$. Let us only consider the lowest-energy mode, which is when $k_x = k_y = 0$
so only $k_z > 0$, stemming from the appropriate application of boundary conditions. The energy density of the system can be given as \[H = \frac{1}{8\pi} \left(\vec{E}^2 + \vec{B}^2 \right)\] and
the fields that solve the dynamic Maxwell equations \[\nabla \times \vec{E} = -\frac{1}{c} \frac{\partial \vec{B}}{\partial t}\] \[\nabla \times \vec{B} = \frac{1}{c} \frac{\partial \vec{E}}{\partial
t}\] as well as the source-free Maxwell equations \[\nabla \cdot \vec{E} = \nabla \cdot \vec{B} = 0\] can be written as \[\vec{E} = \sqrt{\frac{8\pi}{\mathcal{V}}} \omega Q(t) \sin(kz) \vec{e}_x\] \
[\vec{B} = \sqrt{\frac{8\pi}{\mathcal{V}}} P(t) \cos(kz) \vec{e}_y\] where $\vec{k} = k_z \vec{e}_z = k\vec{e}_z$ and $\omega = c|\vec{k}|$. The prefactor comes from normalization, the spatial
dependence and direction come from boundary conditions, and the time dependence is somewhat arbitrary. I think this is because the spatial conditions are unaffected by time dependence if they are
separable, and the Maxwell equations are linear so if a periodic function like a sinusoid or complex exponential in time satisfies Maxwell time evolution, so does any arbitrary superposition (Fourier
series) thereof. That said, I'm not entirely sure about that point. Also note that $P$ and $Q$ are not entirely arbitrary, because they are restricted by the Maxwell equations. Plugging the fields
into those equations yields conditions on $P$ and $Q$ given by \[\dot{Q} = P\] \[\dot{P} = -\omega^2 Q\] which looks suspiciously like simple harmonic motion. Indeed, plugging these electromagnetic
field components into the Hamiltonian [density] yields \[H = \frac{1}{2} \left(P^2 + \omega^2 Q^2 \right)\] which is the equation for a simple harmonic oscillator with $m = 1$; this is because the
electromagnetic field has no mass, so there is no characteristic mass term to stick into the equation. Note that these quantities have a canonical Poisson bracket $\{Q, P\} = 1$, so $Q$ can be
identified as a position and $P$ can be identified as a momentum, though they are actually neither of those things but are simply mathematical conveniences to simplify expressions involving the
fields; this will become useful shortly.
Quantizing this turns the canonical Poisson bracket relation into the canonical commutation relation $[Q, P] = i\hbar$. This also implies that $[E_a, B_b] \neq 0$, which is huge: this means
that states of the photon cannot have definite values for both the electric and magnetic fields simultaneously, just as a quantum mechanical particle state cannot have both a definite position and
momentum. Now the fields themselves are operators that depend on space and time as parameters, while the states are now vectors in a Hilbert space defined for a given mode $\vec{k}$, which has been
chosen in this case as $\vec{k} = k\vec{e}_z$ for some allowed value of $k$. The raising and lowering operators $a$ and $a^{\dagger}$ can be defined in the usual way but with the substitutions $m \
rightarrow 1$, $x \rightarrow Q$, and $p \rightarrow P$. The Hamiltonian then becomes $H = \hbar\omega \cdot \left(a^{\dagger} a + \frac{1}{2} \right)$, where again $\omega = c|\vec{k}|$ for the
given mode $\vec{k}$. This means that eigenstates of the Hamiltonian are the usual $|n\rangle$, where $n$ specifies the number of photons which have mode $\vec{k}$ and therefore frequency $\omega$;
this is in contrast to the single particle harmonic oscillator eigenstate $|n\rangle$ which specifies that there is only one particle and it has energy $E_n = \hbar \omega \cdot \left(n + \frac{1}{2}
\right)$. This makes sense on two counts: for one, photons are bosons, so multiple photons should be able to occupy the same mode, and for another, each photon carries energy $\hbar\omega$, so adding
a photon to a mode should increase the energy of the system by a unit of the energy of that mode, and indeed it does. Also note that these number eigenstates are not eigenstates of either the
electric or the magnetic fields, just as normal particle harmonic oscillator eigenstates are not eigenstates of either position or momentum. (As an aside, the reason why lasers are called coherent is
because they are composed of light in coherent states of a given mode satisfying $a|\alpha\rangle = \alpha \cdot |\alpha\rangle$ where $\alpha \in \mathbb{C}$. These, as opposed to energy/number
eigenstates, are physically realizable.)
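The single-mode quantization above can be checked with truncated matrices. This is my own sketch (the choices $\hbar = \omega = 1$ and a 10-level Fock-space cutoff are assumptions for illustration): it reproduces the spectrum $\hbar\omega\left(n + \frac{1}{2}\right)$ and shows that the vacuum has zero mean but nonzero variance for both $Q$ and $P$.

```python
import numpy as np

# My own truncated-matrix sketch with hbar = omega = 1 and a 10-level
# Fock-space cutoff. Q stands in for the field coordinate and P for its
# conjugate momentum; |0> is the no-photon state of this mode.
hbar = omega = 1.0
N = 10

a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # lowering operator
ad = a.conj().T                                 # raising operator

H = hbar * omega * (ad @ a + 0.5 * np.eye(N))   # H = hbar w (a†a + 1/2)
print(np.diag(H)[:3])                           # energies 0.5, 1.5, 2.5

Q = np.sqrt(hbar / (2 * omega)) * (a + ad)
P = 1j * np.sqrt(hbar * omega / 2) * (ad - a)

vac = np.zeros(N); vac[0] = 1.0                 # vacuum |0>
mean_Q = vac @ Q @ vac                          # <0|Q|0> = 0
var_Q = vac @ Q @ Q @ vac                       # <0|Q^2|0> = hbar/(2w)
var_P = (vac @ P @ P @ vac).real                # <0|P^2|0> = hbar w/2
print(mean_Q, var_Q, var_P)
```

The nonzero variances are exactly the vacuum fluctuations invoked in the Casimir discussion below: no photons, yet the fields fluctuate.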
So what does this have to do with quantum fluctuations in a cavity? Well, if you notice, just as with the usual quantum harmonic oscillator, this Hamiltonian has a ground state energy above the
minimum of the potential given by $\frac{1}{2} \hbar\omega$ for a given mode; this corresponds to having no photons in that mode. Hence, even an electrodynamic vacuum has a nonzero ground state
energy. Equally important is the fact that while the mean fields $\langle 0|\vec{E}|0\rangle = \langle 0|\vec{B}|0\rangle = \vec{0}$, the field fluctuations $\langle 0|\vec{E}^2|0\rangle \neq 0$ and
$\langle 0|\vec{B}^2|0 \rangle \neq 0$; thus, the electromagnetic fields fluctuate with some nonzero variance even in the absence of photons. This relieves the confusion I was having earlier about
why any analysis of the Casimir effect assumes the presence of an electromagnetic field in a cavity by way of nonzero fluctuations even when no photons are present. Just to tie up the loose ends,
because the Casimir effect is introduced as having the electromagnetic field in a cavity, the allowed modes are standing waves with wavevectors given by $\vec{k} = k_x \vec{e}_x + k_y \vec{e}_y + \
frac{\pi n_z}{l} \vec{e}_z$ where $n_z \in \mathbb{Z}$, assuming that the cavity bounds the fields along $\vec{e}_z$ but the other directions are left unspecified. This means that each different
value of $\vec{k}$ specifies a different harmonic oscillator, and each of those different harmonic oscillators is in the ground state in the absence of photons. You'll be hearing more about this in
the near future, but for now, thinking through this helped me clear up my basic misunderstandings, and I hope anyone else who was having the same misunderstandings feels more comfortable with this material.
Many people learn in basic physics classes that angular momentum is a scalar quantity describing the magnitude of rotation (with a sign for its sense), such that its rate of change is equal to the sum of all
torques $\tau = \dot{L}$, akin to Newton's equation of motion $\vec{F} = \dot{\vec{p}}$. People who take more advanced physics classes, such as 8.012 — Physics I, learn that in fact angular momentum
and torque are vectors; in the case of fixed-axis rotation, the moment of inertia (the rotational equivalent to mass) is a scalar so $\vec{L} = I\vec{\omega}$ means that angular momentum points in
the same direction as angular velocity. By contrast, in general rigid body motion, the moment of inertia is anisotropic and becomes a tensor, so \[\vec{L} = \stackrel{\leftrightarrow}{I} \cdot \vec{\omega}\] implies that angular momentum is no longer parallel to angular velocity, but instead the components are related (using Einstein summation for convenience) by \[L_i = I_{ij} \omega_{j}.\] This becomes important in the analysis of situations like gyroscopes and torque-induced precession, torque-free precession, and nutation.
There is one problem though: there is nothing particularly vector-like about angular momentum. It is constructed as a vector essentially for mathematical convenience. The definition $\vec{L} = \vec
{x} \times \vec{p}$ only works in 3 dimensions. Why is this? Let's look at the definition of the cross product components: in 3 dimensions, the permutation tensor has 3 indices, so contracting it
with 2 vectors produces a third vector $\vec{c} = \vec{a} \times \vec{b}$ such that $c_i = \varepsilon_{ijk} a_{j} b_{k}$. One trick that is commonly taught to make the cross product easier is to
turn the first vector into a matrix and then perform matrix multiplication with the column representation of the second vector to get the column representation of the resulting vector: the details of
this rule are hard to remember, but the source is simple, as it is just $a_{ij} = \varepsilon_{ijk} a_{k}$. Now let us see what happens to angular velocity and angular momentum using this definition.
Angular velocity was previously defined as a vector through $\vec{v} = \vec{\omega} \times \vec{x}$. We know that $\vec{x}$ and $\vec{v}$ are true vectors, while $\vec{\omega}$ is a pseudovector
(defined by the extra sign flip it picks up when the coordinate system undergoes reflection), so $\vec{\omega}$ is the vector to be made into a tensor. Using the previous definition that in 3 dimensions $\omega_{ij} = \varepsilon_{ijk} \omega_{k}$, then \[v_i = \omega_{ij} x_{j}\] now defines the angular velocity tensor. Similarly, angular momentum is a pseudovector, so it can be made into a tensor through
$L_{ij} = \varepsilon_{ijk} L_{k}$. Substituting this into the equation relating angular momenta and angular velocities yields \[L_{ij} = I_{ik} \omega_{kj}\] meaning the matrix representation of the
angular momentum tensor is now the matrix multiplication of the matrices representing the moment of inertia and angular velocity tensors.
This has another consequence: the meaning of the components of the angular velocity and angular momentum becomes much clearer. Previously, $L_{j}$ was the generator of rotation in the plane perpendicular to the $j$-axis, and $\omega_{j}$ described the rate of this rotation: for instance, $L_z$ and $\omega_z$ relate to rotation in the $xy$-plane. This is somewhat counterintuitive. On the other hand, the tensor definitions $L_{ij}$ and $\omega_{ij}$ deal with rotations in the $ij$-plane: for example, $L_{xy}$ generates and $\omega_{xy}$ describes rotations in the $xy$-plane, which seems much more intuitive. Also, with this, $L_{ij} = x_{i} p_{j} - p_{i} x_{j}$ becomes a definition (though there may be a numerical coefficient that I am missing, so forgive me).
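As it happens, with the sign conventions used here the numerical coefficient works out to exactly 1: $x_{i} p_{j} - x_{j} p_{i} = \varepsilon_{ijk} L_{k}$ with $\vec{L} = \vec{x} \times \vec{p}$. Here is a small numerical check of my own (not from the original post):

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5])   # position (arbitrary numbers)
p = np.array([0.3, 1.0, -1.2])   # momentum (arbitrary numbers)

L_vec = np.cross(x, p)                    # pseudovector components L_k
L_ten = np.outer(x, p) - np.outer(p, x)   # tensor L_ij = x_i p_j - x_j p_i

# Levi-Civita symbol in 3 dimensions
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# The two descriptions agree with coefficient exactly 1: L_ij = eps_ijk L_k,
# and in particular L_xy (the generator of xy-rotations) equals L_z.
assert np.allclose(L_ten, np.einsum('ijk,k->ij', eps, L_vec))
assert np.isclose(L_ten[0, 1], L_vec[2])
```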
The nice thing about this formulation of angular velocities and momenta as tensor quantities is that it generalizes to 4 dimensions, be they 4 spatial dimensions or 3 spatial and 1 temporal dimension (as in relativity). $L_{\mu \nu} = x_{\mu} p_{\nu} - p_{\mu} x_{\nu}$ now defines the generator of rotation in the $\mu\nu$-plane. Similarly, $\omega_{\mu \nu}$, defined through $L_{\mu \nu} = I_{\mu}{}^{\xi} \omega_{\xi \nu}$, describes the rate of rotation in that plane. The reason why these cannot be vectors any more is that the permutation tensor gains an additional index, so contracting it with two vectors yields a tensor with 2 indices; this means that the cross product as laid out in 3 dimensions does not work in any other number of dimensions (except, interestingly enough, in 7, and that is because a 7-dimensional Cartesian vector space can be described through the algebra of octonions, which does have a cross product, just as 2-dimensional vectors can be described by complex numbers and 3-dimensional vectors by quaternions).
This has a further nice consequence for special relativity. The Lorentz transformation as given in $x^{\mu'} = \Lambda^{\mu'}_{\; \mu} x^{\mu}$ is a hyperbolic rotation through an angle $\alpha$ equal to the rapidity, defined by $\alpha = \tanh^{-1}(\beta)$. A hyperbolic rotation is basically just a normal rotation through an imaginary angle. This can actually be seen by transforming to coordinates with imaginary time (called a Wick rotation, which may come back up in a post in the near future): $x^{\mu} = (ct, x^{j}) \rightarrow (ict, x^{j})$, allowing the metric to change as $\eta_{\mu \nu} = \mathrm{diag}(-1, 1, 1, 1) \rightarrow \delta_{\mu \nu}$. This changes the rapidity to just be a real angle, and the Lorentz transformation becomes a real rotation. Because only the temporal
coordinate has been made imaginary while the spatial coordinates have been left untouched, because the Lorentz transformation is now a real rotation, and because angular momentum generates real
rotations, then it can be said that the angular momentum components $L_{(0, j)}$ generate Lorentz boosts along the $j$-axis. This fact remains true even if the temporal coordinate is not made
imaginary and the metric remains with an opposite sign for the temporal component, though the math of Lorentz boost generation becomes a little more tricky. That said, typically the conservation of
angular momentum implies symmetry of the system under rotation, thanks to the Noether theorem. Naïvely, this would imply that conservation of $L_{(0, j)}$ is associated with symmetry under the
Lorentz transformation. The truth is a little more complicated (but not by too much), as my advisor and I found from a few Internet searches. Basically, in nonrelativistic mechanics, just as momentum
is the generator of spatial translation, position is the generator of (Galilean) momentum boosting: this can be seen in the quantum mechanical representation of momentum in the position basis $\hat{p} = -i\hbar \frac{\partial}{\partial x}$, and the analogous representation of position in the momentum basis $\hat{x} = i\hbar \frac{\partial}{\partial p}$. If the system is invariant under
translation, then the momentum is conserved and the system is inertial, whereas if the system is invariant under boosting, then the position is conserved and the system is fixed at a given point in
space. In relativity, the analogue to a Galilean momentum boost is exactly the Lorentz transformation, so conservation of $L_{(0, j)}$ corresponds to the system being fixed at its initial spacetime
coordinate; this is OK even in relativity because spacetime coordinates are invariant geometric objects, even if their components transform covariantly.
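To make the boost-as-rotation picture explicit, here is a short worked sketch of my own in 1+1 dimensions (the signs depend on the boost convention chosen; other conventions differ by a sign):

```latex
% Boost through rapidity \alpha = \tanh^{-1}\beta in the (ct, x)-plane:
\begin{pmatrix} ct' \\ x' \end{pmatrix}
  = \begin{pmatrix} \cosh\alpha & -\sinh\alpha \\ -\sinh\alpha & \cosh\alpha \end{pmatrix}
    \begin{pmatrix} ct \\ x \end{pmatrix}.
% Substituting w = ict and \alpha = -i\theta, and using
% \cosh(-i\theta) = \cos\theta and \sinh(-i\theta) = -i\sin\theta, gives
\begin{pmatrix} w' \\ x' \end{pmatrix}
  = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    \begin{pmatrix} w \\ x \end{pmatrix},
% an ordinary rotation in the (ict, x)-plane.
```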
There are a few remaining issues with this analysis. One is that rotations in 3 dimensions are just sums of pairs of rotations in planes, and rotations in 4 dimensions are just sums of pairs of
rotations in 3 dimensions. This relates in some way (that I am not really sure of) to symmetries under special orthogonal/unitary transformations in those dimensions. In dimensions higher than 4,
things get a lot more hairy, and I'm not sure if any of this continues to hold. Also, one remaining issue is that in special relativity, because the speed of light is fixed and finite, rigid bodies
cease to exist except as an approximation, so the description of such dynamics using a moment of inertia tensor generalized to special relativity may not work anymore (though the description of
angular momentum as a tensor should still work anyway). Finally, note that the generalization of particle momentum $p_{\mu}$ to a distribution of energy lies in the stress-energy tensor $T_{\mu \nu}$, so the angular momentum of such a distribution becomes a tensor with 3 indices that looks something like (though maybe not exactly like) $L_{\mu \nu \xi} = x_{\mu} T_{\nu \xi} - x_{\nu} T_{\mu \xi}$. In addition, stress-energy tensors with relativistic angular momenta may change the metric itself, so that would need to be accounted for through the Einstein field equations. Anyway, I just
wanted to further explore the formulations and generalizations of angular momentum, and I hope this helped in that regard.
One of the things I learned in my high school AP Microeconomics class was that a tax causes the supply curve to shift to the left, making the equilibrium quantity decrease and price increase.
Consumer and producer surplus both decrease, but while government revenue can account for some of the loss in total welfare, some part of total welfare gets fully lost, and this is what is known as
deadweight loss. I didn't have a very good intuition for how this worked at the time (though I was able to get through it on homework, quizzes, tests, and the AP exam). At the same time, though, I
thought that a tax should be fully reversible by having the government subsidize producers, and that as this would be the opposite of a tax, supply would shift to the right, the equilibrium quantity
would rise and price would fall, and there would be a welfare gain.
Then, when I took 14.01 — Introduction to Microeconomics, we again discussed the situation with a tax. Then we talked about subsidies, but I was confused because the mechanism seemed to be providing a subsidy to consumers rather than to producers. My intuition at that point was that taxes were creating deadweight loss because producers who wanted to produce and consumers who wanted to
consume near the original equilibrium could not do so after the tax, so some transactions were essentially being prohibited. However, I still didn't quite understand why a subsidy would create
deadweight loss, because it seemed to me like consumers who wanted to consume more and producers who wanted to produce more than the original equilibrium quantity could now do so, meaning it seemed
to me like more transactions were being made possible. That said, I did understand why the government would never subsidize producers: unless the market is perfectly competitive, producers would
rather collude and pocket their subsidies while keeping prices high when they can. On the other hand, consumers prefer consuming, so subsidizing consumers is a more surefire way of increasing the
equilibrium quantity, even though the price would go up rather than down.
(In 14.04 — Intermediate Microeconomic Theory, we barely touched on deadweight loss in the way that it is covered in more traditional microeconomics classes.) Now, in 14.03 — Microeconomic Theory and
Public Policy, I think I better understand the intuition behind deadweight losses stemming from taxes and subsidies, and why a subsidy is not the opposite of a tax. In a tax, the government might try
to target some new equilibrium quantity below the original one, so the tax revenue collected, which increases total welfare, is the difference between the willingness of consumers to pay and the
willingness of producers to accept at that quantity multiplied by that quantity. Consumer and producer surplus both decrease, and the tax revenue contribution to the increase in total welfare is not
enough to offset these two, so there is an overall deadweight loss. A completely isomorphic way of picturing this is by considering the tax falling on consumers so that the demand shifts to the left;
in both cases, the equilibrium quantity drops, the government collects its revenue, surpluses drop, so deadweight losses appear.
Meanwhile, for a subsidy, the government might target a higher quantity than the original equilibrium. The spending on that subsidy is the difference between the willingness of producers to accept
and the willingness of consumers to pay at that quantity multiplied by that quantity. Consumer and producer surpluses both increase, but together they do not increase enough to offset government
spending which is an overall drain on total welfare, so there exists a deadweight loss.
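To see this asymmetry numerically, here is a toy calculation of my own (the linear demand $P = 10 - Q$ and supply $P = Q$ are made-up numbers, not from any class). With linear curves, the deadweight loss is the familiar triangle of half the wedge times the change in quantity, and it is positive for a tax and a subsidy alike:

```python
# Hypothetical linear market: demand P = 10 - Q, supply P = Q.
# A wedge > 0 is a per-unit tax; a wedge < 0 is a per-unit subsidy.

def equilibrium_quantity(wedge):
    # Consumers pay the supply price plus the wedge: 10 - Q = Q + wedge
    return (10 - wedge) / 2

Q0 = equilibrium_quantity(0)   # 5.0, the undistorted equilibrium

def deadweight_loss(wedge):
    Q = equilibrium_quantity(wedge)
    # Triangle between the demand and supply curves over the quantities that
    # are no longer traded (tax) or traded beyond the efficient point (subsidy).
    return 0.5 * abs(wedge) * abs(Q - Q0)

print(deadweight_loss(2.0))    # tax of 2     -> 1.0
print(deadweight_loss(-2.0))   # subsidy of 2 -> 1.0 (a loss, not a gain)
```

Either sign of the wedge moves the quantity away from the efficient point, which is exactly the friction-like behavior described above.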
It's interesting that taxes and subsidies are not opposites. The intuition is that for a tax, the revenue is not enough to compensate for the welfare losses of consumers and producers because the new
equilibrium quantity is lower. By contrast, for a subsidy, the spending is too high compared to the welfare gains of consumers and producers because the new equilibrium quantity is higher. It looks
like it is not possible to spend money given by tax revenue to undo the effects of a tax; instead, the government can only overshoot and overspend. It reminds me very much of how friction works:
moving in one direction on a surface with friction causes energy loss, while turning around to move in the other direction on that same surface most certainly does not cause energy gain. Essentially,
in this model, the market is frictionless, and the government introduces friction.
Of course, this essentially contradicts Keynesian models of government taxation and spending and their respective effects. That's why care must be taken when putting microeconomic models in a
macroeconomic perspective. This also doesn't consider externalities, less than perfectly competitive market structures, et cetera. Anyway, I hope my musings on this may help give other people some
intuition on simple issues of deadweight loss in microeconomic theory.
Last semester, I was taking 8.05, 8.13, 8.231, and 14.04, along with continuing my UROP. I was busy and stressed basically all the time. Now I think I know why: it turns out that the classes I was
taking were much closer to graduate classes in material, yet they came with all the trappings of an undergraduate class, like exams (that were not intentionally easy). Let me explain a little more.
8.05 — Quantum Physics II is where the linear algebra formalism and bra-ket notation of quantum mechanics are introduced and thoroughly investigated. Topics of the class include analysis of
wavefunctions in 1-dimensional potentials, vectors in Hilbert spaces, matrix representations of operators, 2-state systems, applications to spin, NMR, continuous Hilbert spaces (e.g. position), the
harmonic oscillator, coherent & squeezed states as well as the representation of photon states and the electromagnetic field operators forming a harmonic oscillator, angular momentum, addition of
angular momenta, and Clebsch-Gordan coefficients. OK, so considering that most of these things are expected knowledge for the GRE in physics, this is probably more like a standard undergraduate
quantum mechanics curriculum rather than a graduate-level curriculum. That said, apparently this perfectly substitutes for the graduate-level quantum theory class, because I know of a lot of people
who go right from 8.05 to the graduate relativistic quantum field theory class.
8.13 — Experimental Physics I is generally a standard undergraduate physics laboratory class (although it is considered standard in the sense that its innovations have spread far and wide). The care
and detail in performing experiments, analyzing data, making presentations, and writing papers seem like fairly obvious previews of graduate life as an experimental physicist.
8.231 — Physics of Solids I might be the first class on this list that actually could be considered a graduate-level class for undergraduates, also because the TAs for that class have said that it is
basically a perfect substitute for the graduate class 8.511 — Theory of Solids I, allowing people who did well in 8.231 to take the graduate class 8.512 — Theory of Solids II immediately after
that. 8.231 emphasized that it is not a survey course but intends to go deep into the physics of solids. I would say that it in fact did both: it was both fairly broad and incredibly deep. Even
though the only prerequisite is 8.044 — Statistical Physics I with the corequisite being 8.05, 8.231 really requires intimate familiarity with the material of 8.06 — Quantum Physics III, which is
what I am taking this semester. 8.06 introduces in fairly simple terms things like the free electron gas (which is also a review from 8.044), the tight-binding model, electrons in an electromagnetic
field, the de Haas-van Alphen effect, and the integer quantum Hall effect, and it will probably talk about perturbation theory and the nearly-free electron gas. 8.231 requires a good level of comfort
with these topics, as it goes into much more depth with all of these, as well as the basic descriptions of crystals and lattices, reciprocal space and diffraction, intermolecular forces, phonons,
band theory, semiconductor theory and doping, a little bit of the fractional quantum Hall effect (which is much more complicated than its integer counterpart), a little bit of topological insulator
theory, and a little demonstration on superfluidity and superconductivity.
14.04 — Intermediate Microeconomic Theory is the other class I can confidently say is much closer to a graduate class than an undergraduate class, because I talked to the professor yesterday and he said exactly this. He said that typical undergraduate intermediate microeconomic theory classes are more like 14.03 — Microeconomic Theory and Public Policy (which I am taking now), where the
constrained optimization problems are fairly mechanical, and there may be discussion on the side of applications to real-world problems. By contrast, 14.04 last semester focused on the fundamentals
of abstract choice theory with a lot more elegant mathematical formalism, the application of those first principles to derive all of consumer and producer choice theories, partial and general
equilibrium, risky choice theory, subjective risky choice theory and its connections to Arrow-Debreu securities and general equilibrium, oligopoly and game theory, asymmetric information, and other
welfare problems. The professor was saying that by contrast to a typical such class elsewhere, 14.04 here is much closer to a graduate microeconomic theory/decision theory class, and the professor
wanted to achieve that level of abstract conceptualization while not going too far for an undergraduate audience.
At this point, I'm hoping that the experiences from last semester pay off this semester. It looks like that has been working so far!
In my post at the end of the summer, I talked a bit about what I actually did in that UROP. Upon rereading it, I have come to realize that it is a little jumbled and technical. I'd like to basically
rephrase it in less technical terms, along with providing more context on what I did in the 2011 fall semester. Follow the jump to see more.
Colloquium Publications
1946; 363 pp; softcover
Volume: 29
Reprint/Revision History:
revised and enlarged edition 1962; ninth printing with corrections 1999; tenth printing 2000
ISBN-10: 0-8218-1029-4
ISBN-13: 978-0-8218-1029-3
List Price: US$41
Member Price: US$32.80
Order Code: COLL/29
This classic is one of the cornerstones of modern algebraic geometry. At the same time, it is entirely self-contained, assuming no knowledge whatsoever of algebraic geometry, and no knowledge of
modern algebra beyond the simplest facts about abstract fields and their extensions, and the bare rudiments of the theory of ideals.
"It is not often that one reviews a text written 53 years ago, updated 37 years ago, and still as relevant today as it was in its previous incarnations ... This book has played an important role in
establishing the mathematical foundations of Algebraic Geometry and in providing its accepted language. Although there have been very significant subsequent ideological shifts in the subject, this
book is just as fresh today as it was when it first appeared."
-- Monatshefte für Mathematik
• Algebraic preliminaries
• Algebraic theory of specializations
• Analytic theory of specializations
• The geometric language
• Intersection-multiplicities (special case)
• General intersection-theory
• The geometry on abstract varieties
• The calculus of cycles
• Divisors and linear systems
• Comments and discussions
• Appendix I. Normal varieties and normalization
• Appendix II. Characterization of the \(i\)-symbol by its properties
• Appendix III. Varieties over topological fields
• Index of definitions
Which of the following statements would be sufficient to prove that parallelogram PQRS is a rectangle?
Symposium : Sets Within Geometry : Nancy, France 26-29 July 2011
“Sets Within Geometry” - A Symposium
Nancy, France. Maison Sciences De L’Homme UNIVERSITY of NANCY 2
91, Avenue de La Liberation, 54001 NANCY FRANCE
26th-29th July 2011
The Symposium began on the AFTERNOON of Tuesday 26th July and ended on the afternoon of Friday 29th July.
SCOPE and AIMS OF THE SYMPOSIUM
The Theory of Sets, founded in the last quarter of the 19th Century by Georg Cantor, underwent rapid development at the hands of many contributors.
Within that development, several distinct lines can be traced, and these have connected in contrasting ways with the subsequent overall development of mathematical knowledge. The aim of this Symposium is to study and compare these, with particular focus on the question of how far the subject matter of set theory can be viewed as part of Geometry.
For a more extended discussion of the manner in which this question naturally arises from the study of these various lines of development, see below.
By way of orientation, the organisers of the Symposium offered the following statement of aims :
Those who have come together to organise this Symposium believe that the ultimate aim of foundational efforts is to provide clarifying guidance to teaching and research in mathematics,
by concentrating the essential aspects of past such endeavors. By mathematics we mean the investigation of the Relations between Space and Quantity, of the reflected relations between quantity and
quantity and between space and space, and the development of our knowledge of these; in other words, Geometry.
Using tools developed by Cantor and his contemporaries, much more explicit forms of the relation between space and quantity were developed in the 1930s in the field of functional analysis by Stone
and Gelfand, partly through the notion of Spectrum (a space corresponding to a given system of quantities). In the 1950s Grothendieck applied those
same tools, around the notion of Spectrum, to algebraic geometry by using and developing the further powerful tool of category theory. Further developments have strongly suggested that it is now
possible to incorporate the whole set-theoretic “foundation” of Geometry, explicitly as part of that space-quantity dialectic, in other words as a chapter in an extended Algebraic Geometry.
Some further Background to the Aims of the Symposium
The Theory of Sets, founded in the last quarter of the 19th Century by Georg Cantor, underwent rapid development at the hands of many contributors and by the mid – 20th Century was regarded by some
commentators as providing a framework for the whole of mathematics.
Several distinct lines can be traced within that development, and these have connected in contrasting ways with the subsequent overall development of mathematical knowledge.
In one line of development concepts made explicit by the young Cantor, such as the power set operation and the cohesive/discrete or Mengen/Kardinalen contrast, were extensively used in the algebra,
topology, and analysis developed over the last 150 years, resulting in a body of informal methods which might be labelled the *tacit set theory of mathematical practice*. In this tacit theory, as
illustrated by Bourbaki’s algebra and topology (but not their official set theory), sets are handled via the kind of isomorphism-invariant mapping properties which have long been at the heart of mathematical practice in algebra and geometry.
* In the book of Lawvere and Rosebrugh, “Sets for Mathematics”, it is labelled “Naive Mathematical Set Theory”.
In contrast to that line of development, and in opposition to features of Cantor’s work (though using others: his theory of ordinals and his striving for absolute infinity), there was a line originating in Peano’s dual invention of global singleton and epsilon (with a sense quite confusingly similar to the local singleton and epsilon made precise by Grothendieck in his 1957 Tohoku paper). Peano’s aim was to supply mathematical underpinnings for some of
the philosophical positions of Frege. Later Zermelo and von Neumann took up this line of development which became formalized into the theory of the cumulative hierarchy based on a hypothesised
relation of absolute and global membership.
That theory is awkward as a framework for mathematical practice, cutting across the requirements of the latter in various respects. Examples abound. As representative instances two will serve :
1) in algebra, to show that a given domain is a ‘subring’ of a field, the image of the injection that naturally arises from the construction must be thrown away and replaced by the original
2) In topology, an important fragment of naïve set theory, involving the lattice of subsets of any given set, needs crucially to be combined with the interactions of these lattices for distinct sets
such as the plane and the three-space, but the spurious question of whether one given space is ‘included in’
another (according to the global epsilon relation associated with this “iterative” notion of set), obstructs a focus on the necessary contra-variant and co-variant homomorphisms induced by given
transformations between the two spaces.
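As an editorial aside (a toy illustration with finite sets, not part of the original announcement): the covariant and contravariant homomorphisms induced by a given map can be seen very concretely, since any map of sets induces a direct-image map on subsets of its domain and an inverse-image map on subsets of its codomain.

```python
# Toy example: a map f: X -> Y induces a covariant direct-image map on
# subsets of X and a contravariant inverse-image map on subsets of Y.
X = {0, 1, 2, 3}
f = lambda x: 'a' if x % 2 == 0 else 'b'   # a map X -> Y = {'a', 'b'}

def image(A):
    """Covariant: f_*(A) = { f(x) : x in A }."""
    return {f(x) for x in A}

def preimage(B):
    """Contravariant: f^*(B) = { x in X : f(x) in B }."""
    return {x for x in X if f(x) in B}

# The inverse image is a homomorphism of subset lattices (it preserves
# unions, intersections, and complements); the direct image preserves
# unions but not, in general, intersections.
assert preimage({'a'}) == {0, 2}
assert image({0, 1}) & image({2, 3}) == {'a', 'b'}   # yet {0,1} ∩ {2,3} = ∅
```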
Such examples indicate why axiomatic set theory in the mould of the epsilon-based formalism has become increasingly detached from other areas of mathematical inquiry .
Over the same period that this detachment was becoming apparent, the definition of the concepts of Category and Functor in 1945 and the subsequent understanding made possible by many co-workers of
the functoriality of basic and universal mathematical constructions, in particular the Covariant and Contravariant Functoriality of maps out of or in to their respective domains and co-domains,
marked the consolidation and strengthening of a development which could be seen in retrospect to have been under way at least since the work of Galois and Grassmann more than a century earlier.
Central to this understanding is the recognition that mathematics is permeated by situations involving adjoint pairs of functors.
One chapter in this development was to make algebraically precise the formerly more loosely connected body of informal methods for which the label ”the tacit set theory of mathematical practice” was
suggested above. This was achieved by the axiomatic formulation of the Elementary Theory of The Category of Sets by Lawvere in 1963, whereby set theory, by acquiring an explicitly functorial
formulation, regained contact with the fundamental feature of constructions in the main areas of mathematical practice.
Greatly extending and deepening that re-engagement was the development at the same period of Topos Theory. This arose from Grothendieck’s great work in Algebraic Geometry. Amongst the many facets
of its later development has been the recognition, through the work of Lawvere and Tierney, that it provides the natural basis for the study of both variable and constant sets within a framework for
which Geometry “supplies the leading aspect” (Lawvere 1973). Indeed one of its earliest conceptual achievements in the work of Tierney and others was to display the content of the then newly
discovered epsilon-based forcing techniques used by Cohen to prove the Independence of Cantor’s Continuum Hypothesis from the viewpoint of the geometrical meaning of the corresponding topos-theoretic constructions.
In the light of these and other developments, the question naturally arises to what extent the subject matter of set theory should be re-conceived as forming part of Geometry.
The Symposium will be devoted chiefly to that question.
Any examination of it involves the recognition that the subject matter of Geometry has itself come to be re-conceived in a way involving a generalisation and progressive deepening of our previous
geometric notions, culminating in the work of Grothendieck. In this connection, the remarks in the brief Statement of Aims given above are especially pertinent and worth repeating, as providing
clarification of the Title chosen for the Symposium “Sets Within Geometry” :
Using tools developed by Cantor and his contemporaries, much more explicit forms of the relation between space and quantity were developed in the 1930s in the field of functional analysis by Stone
and Gelfand, partly through the notion of Spectrum (a space corresponding to a given system of quantities). In the 1950s Grothendieck applied those same tools, elaborated around the notion of
Spectrum, to algebraic geometry by using and developing the further powerful tool of category theory. Further developments have strongly suggested that it is now possible to incorporate the whole
set-theoretic “foundation” of Geometry, explicitly as part of that space-quantity dialectic, in other words as a chapter in an extended Algebraic Geometry.
It cannot be stressed too strongly that the claim that Set Theory can, in the above sense, be viewed as grounded in Geometry, does *not* imply – as some have insinuated – a “rejection” of set theory.
What is at stake here is rather a *deepening* of set theory and the better understanding of the indispensable role within mathematics of the notions of both constant and variable sets. The object of
criticism here is one particular line of development, which appeared to have achieved a hegemonic position, and which rested on an axiomatic fixation of the set concept on the basis of a supposed
absolute and global relation of iterated membership.
In this connection the Symposium will also undertake an investigation into contrasting ways of formulating and viewing the Continuum Hypothesis – those determined by the tacit presuppositions
generated by membership-based axiomatic set theory as against those ways of viewing the Continuum based on an explicit recognition of the central role of the mapping properties of categories of
space as studied in geometry.
The Topics of the Symposium fall under three broad headings : Mathematics – Conceptual Analysis – History
1. Mathematics
Examination of Mathematical Developments yielding a deeper understanding of the place of sets in mathematics.
See below for further discussion.
2. Conceptual Analysis
The connection between these Mathematical developments and broader analysis of the epistemological sources of mathematical ideas. One illustration of such analysis would be an investigation of the
meaning of Extensionality and Choice principles in the setting of Topos Theory
3. History
Related Historical investigations such as a re-examination of the work of
Cantor and Dedekind and other figures in the early development and discussion of different
approaches to the Continuum Hypothesis.
The following Speakers had hoped and planned to participate but were prevented from doing so
Professor Alberto Peruzzi (University of Florence); Professor Yuri I. Manin (MPIM Bonn); Professor Christian Houzel (Paris); Professor Francois Chargois (Nancy); Professor Marta Bunge (Montreal)
In several cases, Abstracts of their talks appear below. It is hoped that in every case a full text and overheads of their intended Talk will be added to this Page later.
The Invited Speakers who took part were as follows :
Professor FW Lawvere (Buffalo)
Professor Anders Kock (Aarhus)
Professor Colin McLarty (CWRU Cleveland)
Professor Jean-Pierre Marquis (Montreal)
In addition Professor John STACHEL gave his Contributed Talk by webex video link
Professor Lou CRANE (KSU) was prevented by family bereavement from giving a Contributed talk and the Text and overheads of his talk will also be added later.
Contributed talks were given at the Symposium by
Dr Andrei RODIN (University of Paris 7)
Professor Panagis KARAZERIS (Patras)
and Professor John STACHEL (Boston University) who spoke by video-link
See additions to the List of Abstracts of Talks Below
PROGRAM OF THE SYMPOSIUM and LINKS TO VIDEO and AUDIO RECORDINGS OF THE PROCEEDINGS
Venue : The Main Conference Room,
Archives Henri Poincare (3rd Floor)
Maison Sciences De L’Homme de Lorraine,
91 Avenue de La Liberation,
Nancy, France.
Day 1 26th JULY 2011
3:45 PM : Welcome and Introduction to the Symposium
4:00 – 5:45 PM Professor FW Lawvere (Talk 1)
What Is A Space ?
Click here for audio recording
Click here for Speakers overheads
5:45 – 6:15 PM : Discussion period
Day 2 : 27th JULY 2011
9:30 – 11:00 AM Professor Jean-Pierre Marquis (Talk 1)
Abstract geometric sets and homotopy types : The metaphysics of abstraction
Video recording
Click here for audio recording
Click here for Speakers overheads
11:00 – 11:30 AM Discussion Period
11:30 - 11:45 AM Coffee Break
11:45 – 1:15 PM : Professor Anders Kock (Talk 1)
Monads and Extensive Quantities (Part I)
Click here for audio recording
Click here for Speakers overheads
Buffet Lunch
2:15 - 3:45 : Professor Jean-Pierre Marquis (Talk 2)
Space and sets :
The evolution of the notion of topological space in the first half of the 20th century
Click here for link to audio recording
Click here for Speakers Overheads
Coffee Pause
3:30 – 4:30 : Discussion period concerning both Prof. Kock’s and Prof. Marquis’ Talks
Day 3 : 28th JULY 2011
9:00-10:30 : Professor FW Lawvere (Talk 2)
The Dialectic of Continuous and Discrete in the history of the struggle for
a usable guide to mathematical thought
Click here for audio recording
Click here for Speakers overheads
Coffee Pause
10:45-12:15 : Professor Colin McLarty (Talk 1)
Cohomology in the Category of Categories as Foundation
Click here for audio recording
Click here for Speakers overheads
PM : Contributed Talks
Dr Andrei RODIN (University of Paris 7) : Formal Axiomatics and Set-theoretic Construction in Bourbaki
Click here for audio recording
Click here for Speakers overheads
Prof. Panagis KARAZERIS (Patras) : Embedding Kelley Spaces in Toposes : With some remarks on a proposed fundamental theorem of Category Theory
Click here for audio recording
Click here for Speakers overheads
Prof John STACHEL (Boston) (To be Delivered by Webcast Video)
Sets in Algebra and In Geometry
Click here for audio recording
Click here for Speakers overheads
7:45 PM : CONFERENCE DINNER
Day 4 : 29th JULY 2011
9:30-11:00 : Professor Colin McLarty (Talk 2)
Title : “Why is so much of category theory constructive?”
Click here for audio recording
Click here for Speakers overheads
Coffee Pause
11:15-12:45 : Professor Anders KOCK (Talk 2)
Monads and Extensive Quantities (Part II)
Click here for audio recording
Click here for Speakers overheads
Buffet Lunch
2:15-4:30 : Talk 3 by F.W. LAWVERE : Categorical Dynamics re-visited: Category Theory and the representation of physical quantities
Click here for audio recording
Click here for Speakers overheads
c. 4:45 PM : Closure of Symposium followed by Visit to Archives Henri Poincare
TITLES AND ABSTRACTS (In Alphabetical Order of Speakers)
Prof. Marta Bunge Talk 1 :
The notion of a stack has its origins in Geometry, more precisely as a tool in the
non-abelian cohomology (Giraud 1970) of a Grothendieck topos S. Following a suggestion of Lawvere (Perugia 1973),
such a notion was introduced and studied
(Bunge-Pare 1979) relative to the intrinsic site of all epimorphisms of S. The study of stacks is intrinsically connected with the axiom of choice. The stack
completion of an S-category C is constructed (Bunge 1979) as an S-category carved out of the Yoneda embedding of C into the presheaf topos P(C). Examples from cohomology are particularly
illuminating. In addition, the stack completion of C represents the so-called anafunctors with target C. Completions of this sort, more generally for a closed category V with a faithful
functor into
Set, and with all limits and colimits, will be called “tight”. We
give an equivalent version of the notion of a tight completion in terms of
V-distributors (Benabou 1973). The distributors version is
particularly suited to exhibit the Cauchy completion
(Lawvere 1973) as a tight completion. We clarify the
connection between the Karoubi envelope and the Cauchy
completion. More precisely, we prove
that the axiom of choice (epimorphisms split)
holds in V if and only if every Karoubi complete V-category is Cauchy complete. Certain tight completions of a V-category C are what we call here `of
Morita
type’. Examples are the Cauchy completion in terms of those distributors with target C
which have a right adjoint, the (related) completion of essential points of P(C), and the points of
the
classifying topos B(G) of a groupoid G. Neither the Karoubi envelope nor the
stack completion is of Morita type.
Prof. Marta Bunge Talk 2
The topologist R.H. Fox (1957) introduced a notion of spread in order to
describe branched coverings in topological rather than combinatorial terms. Implicit in
his treatment was a connection between complete spreads and cosheaves, hence with Lawvere
distributions (1983, 1992). This connection was made explicit by J.Funk (1995)
for locales.
A full treatment of the subject for toposes was made possible by the role of the
classifier for Lawvere distributions on a topos X , constructed algebraically by
M.Bunge and A. Carboni (1995). A theory of Fox complete spreads and Lawvere
distributions (or of Singular Coverings of Toposes) is the subject matter of the book by M.
Bunge and J.Funk (2006), in which Kock-Zoberlein monads of the completion type play a
crucial role. In my talk, and after a review of some aspects of this theory, I will list
some open problems.
Click on this Link For A Text of Professor Bunge’s Second Talk (Link not yet activated)
Prof. Christian Houzel : Title(s) and Abstract(s) to be advised.
Prof. Anders KOCK
Talk 1 : Monads and Extensive Quantities
Abstract: If $T$ is a commutative monad on a cartesian
closed category, then there exists a natural $T$-bilinear pairing
$T(X)\times T(1)^{X}\to T(1)$ (“integration”), as well as a natural
$T$-bilinear action $T(X)\times T(1)^{X} \to T(X)$. These data
together make the endofunctors $T$ and $T(1)^{(-)}$ (co- and
contravariant, respectively) into
a system of extensive/intensive quantities, in the sense of Lawvere.
A natural monad map from $T$ to a certain monad of distributions (in
the sense of functional analysis (Schwartz)) arises from this integration.
In less Technical terms:
Abstract: If T is a commutative monad on a cartesian
closed category, then there exists a natural T-bilinear pairing from
T(X) times the space of T(1)-valued functions on X (“integration”), as well as a natural
T-bilinear action on T(X) by the space of these functions. These data
together make the endofuncors T and “functions into T(1)” into
a system of extensive/intensive quantities, in the sense of Lawvere.
A natural monad map from T to a certain monad of distributions (in
the sense of functional analysis (Schwartz)) arises from this integration.
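Kock’s pairing has a very down-to-earth instance that may help fix ideas; the following is our own illustration, not part of the abstract. Take $T$ to be the free $R$-module monad on Set for a commutative ring $R$; this monad is commutative, and:

```latex
% Our assumed illustrative example (not from the abstract):
% T = the free R-module monad on Set, R a commutative ring.
\[
  T(X) \;=\; \Bigl\{ \textstyle\sum_{i} a_i\,\delta_{x_i} \;:\; a_i \in R,\; x_i \in X \Bigr\},
  \qquad
  T(1) \;\cong\; R,
\]
\[
  T(X)\times T(1)^{X} \;\longrightarrow\; T(1),
  \qquad
  \Bigl\langle \textstyle\sum_{i} a_i\,\delta_{x_i},\; f \Bigr\rangle
  \;=\; \textstyle\sum_{i} a_i\, f(x_i).
\]
```

Here the $T$-bilinear pairing is literally integration of the “function” $f \in T(1)^{X}$ against the finitely supported “distribution” in $T(X)$.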
Prof. Anders KOCK Talk 2 (To be confirmed)
Title: Geometric algebra of projective lines.
Abstract: The projective line over a local ring carries structure of a
category with a certain correspondence between objects and arrows. We
investigate to what extent the local ring can be reconstructed from the
Prof. FW Lawvere :
Talk 1 : The Dialectic of Continuous and Discrete in the history of the struggle for
a usable guide to mathematical thought
Dedekind and the young Cantor extracted from the complexity of mathematical cohesions and structures the ideas of structureless lauter Einsen and cardinality, following Steiner’s algebraic geometry.
They immediately used them in turn as a base for pure structures such as ordered sets and groups.
Hausdorff and others used them as a base for representing various specific categories of cohesion as consisting also of structures of a slightly different kind.
Moreover the notion of structureless discreteness had also been relativized by Galois and others in the study of algebraic geometry over non-algebraically closed fields. Although practicing
mathematicians refer to these collections of lauter Einsen as “sets”, that term is used in another way by students of the Frege-Peano-Zermelo-von Neumann hierarchy. In order to permit general
considerations of these sorts to serve as a useful guide to the development of mathematical thinking, it is necessary to extricate them from the continuing pursuit of the elder Cantor’s idealist
speculations about an infinity beyond infinity, which as he himself realized belong more to theology than to science.
See my recent talks
(Bristol 2009) Cantor, Zermelo, & the Category of Categories
(Firenze 2010) Cantor’s lauter Einsen , Galois & Grothendieck
(Pisa 2010) What is a space ?
as well as my 2007 TAC article Axiomatic Cohesion
Talk 2 : What is a Space?
Abstract : A space is just an object in a category of spaces!
The implied further question is given a very general answer involving lextensivity,
as well as a much more structured answer involving dialectically coupled
(cohesive/discrete) pairs of toposes. Examples can be analyzed and constructed
using the simple geometric paradigm of figures and incidence relations, by which
any lextensive category can be embedded in a Grothendieck topos; more refined
subtoposes of the latter are specified by Grothendieck coverings that embody the
geometrical equivalent of existential/disjunctive conditions on these extended
spaces; a specific example involves a generalization of Maschke means. The
extended spaces always include the Hurewicz exponential spaces, for example,
spaces of functions and distributions equipped automatically with the ambient sort
of cohesion. Examples important for smooth, analytic, and algebraic geometry are
infinitesimally generated, pursuing an observation that goes back to Euler. A smooth
account of points and components for algebraic geometry over a non‐algebraically
closed field is achieved by replacing Cantorian abstract sets with a Galois‐Barr topos
as the discrete aspect. The basic goal is to help make the advances in Algebraic
Geometry during the past 50 years more accessible to students and to colleagues in
related fields by utilizing the simplifying advances in categorical foundations during
the same 50 years, especially guided by proposals made by Grothendieck in 1973
Prof. Yuri I. MANIN
Talk 1 : Foundations or Superstructure? (The Viewpoint of a Practicing Mathematician)
ABSTRACT. I intend to discuss a series of topics related
to foundations and philosophy of mathematics from the perspective
of a researcher and teacher.
More detailed Conspectus of Talk :
(The Reflections of a practicing mathematician)
The Content of the idea of Foundations in the wide sense is this: principles of organization of mathematical knowledge and of the interpersonal and transgenerational transferral of this knowledge.
When these principles are studied using the tools of mathematics itself, self-referential anxiety raises its ugly head and self-doubts start dominating the autistic psyche of a lonely mathematician
…In order to avoid this abyss, one can simply cut the self-referentiality loop, and the way this author did it involved stripping “Foundations” of their normative
functions and considering various foundational matters simply from the viewpoint of their mathematical content. Then Goedel’s theorem becomes a statement that a certain class of structures is not
finitely generated (no big deal but interesting thanks to a new context), and the structures/categories controversy is seen in a much more realistic light: contemporary studies fuse (Bourbaki type)
structures and categories freely, naturally and unavoidably, by first structurizing sets of morphisms, then categorifying them, then applying to this vast building principles of homotopy and topology
in order to squeeze it down to size etc.
In this way foundations turn into superstructure, and the memory of their foundational provenance is conserved only in the way we are speaking about them.
Observing the nature of changes in Foundations from this perspective, one
sees not so much the replacement of one vision by another but rather a permanent enrichment of intuitions whose creative/pedagogical potential seemed temporarily exhausted. Moreover, the scale of
historical legacy and continuity becomes much more visible. Here are some illustrations.
(i) Whatever the fate of the scale of Cantorian cardinal and ordinal infinities, the basic idea of set embodied in Cantor’s famous “definition”, as a collection of definite, distinct objects of our
thought, is as alive as ever. Thinking about a topological space, a category, a homotopy type, a language or a model, we start with imagining such a collection, or several ones, and continue by
adding new types of distinct objects of our thought, whether derivable from the previous ones or new.
(ii) We can decide that we wish to study the category of all projective smooth algebraic varieties and various cohomology functors on it, transcending old-fashioned structures. Then we find out that
our ideal goal might lie in proving that a universal cohomology functor produces an immense motivic Galois group whose representations solve our initial problem: structures win! (the famous Grothendieck motivic program and its Tannakian embodiment).
(iii) The enrichment may possess its inner logic, and understanding of this logic may be a fascinating challenge even for the creators of new paradigms.
An outstanding example for me is the history of the idea of triangulated categories.
Very briefly, in the categorical development of algebraic topology, at a certain stage one had to produce a framework for treating complexes of (co-)chains of various topological spaces as objects
better reflecting properties of the space itself than of various ways of choosing these (co-)chains. This led to the complicated definition of triangulated category à la Grothendieck and Verdier. It
was successful and influential, until more and more contexts revealed its basic technical flaw (“cone is not functorial”). About two decades were required in order to see the remedy that should have
been evident to Grothendieck himself from the start. Namely, axiomatizing categories of complexes, one should from the start think about morphisms rather than objects. When this was done, categories
were created and life became much easier.
To summarize: good metamathematics is good mathematics rather than shackles on good mathematics.
Yuri I. Manin
Prof. Jean-Pierre MARQUIS
Talk 1 : Abstract geometric sets and homotopy types
The metaphysics of abstraction :
Abstract :
In this paper, I explore the metaphysical aspects of abstract sets given
via geometric means, namely in the usual terminology, as homotopy 0-
types. I am not claiming that homotopy types should be taken as the
ultimate foundations for mathematics, but rather that homotopy types
exhibit what it means to be abstract for sets in such a foundational frame-
work. I will therefore first rehearse some basic facts about homotopy
types, what we know about them and how we know it. What I want
to emphasize is not so much what homotopy types are, but rather the
type of being they have in the mathematical realm. Once this is clarified,
we can zero in on homotopy 0-types, namely sets. I will compare
and contrast how these sets differ from the ones that are usually thought
of as constituting the foundations of mathematics, e.g. in ZF. Furthermore, as homotopy types already indicate, one cannot apply one and the
same criterion of identity for all mathematical entities. There are inherent
dimensions arising naturally in the abstract geometric framework.
Prof. Jean-Pierre MARQUIS
Talk 2 : Space and sets
The evolution of the notion of topological space in the
First half of the 20th century
(based on joint work with Mathieu BELANGER)
In this talk, we explore how mathematicians axiomatized the notion of
space from Hausdorff’s first topological notions to the late 1950s, that is
just before Grothendieck introduced the notion of topos as a generalized
topological space. In particular, we explore the role played by sets of
points in the various axiomatizations up to Bourbaki and contrast this
role with the appearance and development of the algebraic point of view
introduced by Stone. Whereas in the first case, sets of points are taken
to be ontologically primary and epistemologically prior to any concept of
space, in the latter case, sets of points are derived from the algebraic data
and one can argue that the concept of space becomes prior to the notion
of sets of points. Thus, when Grothendieck came in, it was possible,
although radical, to reorganize the nature and role of sets of points in the
study of spaces.
Prof. Colin McLARTY
Talk 1 : Cohomology in the Category of Categories as Foundation
The elimination of the axiom scheme of replacement from the
construction of injective resolutions makes possible a rigorous naive
treatment of cohomological number theory in a natural version of the
Category of Categories axioms.
Prof. Colin McLARTY Talk 2 : To be advised
Prof. Alberto PERUZZI
Talk 1 : Geometric Roots of Semantics II
Abstract. Formal semantics was mainly done in terms of sets, as extensional
discrete collections apparently needed in order to analyse the logical and
cognitive meaning of any sentence, while intensional semantic theories were
supposed to be inherently non-compositional. In a paper I presented at the
LMPS, X,1995, and published in 2000 with the title “Geometric roots of
semantics”, both the apparent necessity of classical set theory for
semantics and the supposed commitment of intensional theories to
non-compositionality were refuted by making reference to the wider framework provided by topos
theory. At the same time, categorical notions were used to propose a theory
of kinaesthetic universal patterns of meaning. Since then, various advances
relevant to ascribing to geometry, in a wide sense, the role of setting up the
roots of semantics, call for additions and qualifications of the original
proposal. The paper to be presented in Nancy will elaborate this
subject and will fill some gaps, to the purpose of identifying a basic
theoretical ground of common interest for the foundations of mathematics, logic, and cognitive grammar.
A.P., Geometric Roots of Semantics I: From a Logical Point of View,
in Logic, Methodology and Philosophy of Science X. Abstracts, University of
Florence, Florence 1995, p. 165.
A.P., The Geometric Roots of Semantics,
in Meaning and Cognition, L. Albertazzi ed., John Benjamins, Amsterdam, 2000,
pp. 169-201.
A.P., Compositionality up to Parameters,
in Protosociology, monographical issue on “Compositionality, Concepts and
Representations”, 21,
2005, pp. 41-66.
A.P. Il Lifting Categoriale dalla Topologia alla Logica,
in Annali del Dipartimento di Filosofia, Università di Firenze, 11, 2005, pp.
Prof. Alberto PERUZZI
Talk 2 : To be advised
Professor John STACHEL (Boston University)
Sets in Geometry and Algebra
Please Click here for the link to Professor Stachel’s Overheads. (Link not yet activated)
Dr Andrei RODIN (University of Paris 7)
Formal Axiomatics and Set-Theoretic Construction in Bourbaki
I compare Bourbaki’s published volume on set theory (the version of 1968) with an unpublished Bourbaki draft that treats the same topic in an informal manner, which better reflects the
contemporary practice of using sets in mathematics in general and in geometry in particular. Both texts use set-theoretic concepts for developing a theory of mathematical (set-based) structure
rather than a set theory for its own sake.
The draft begins with a philosophical introduction explaining the central notion of structure, which is followed by a description of various procedures allowing for obtaining further set-theoretic
constructions from a given finite family of base sets. Finally the draft presents a general form of set-theoretic construction called mathematical structure and introduces some operations with
structures such as induction, transport and product.
In the published volume the notion of set-theoretic construction reduces to a mere metaphor and gets replaced by a notion of syntactic construction, which is treated in great detail. Crucially,
these syntactic constructions represent not informal set-theoretic constructions themselves but certain propositions referring to sets and relations between sets as well as certain relations between
such propositions (like the relation of logical inference). This is, of course, a very general feature of the formal axiomatic method, which dates back to Hilbert’s work. Although it is not specific
to Bourbaki’s approach, comparing the two versions of Bourbaki’s set theory gives an opportunity to evaluate the effect of formalization.
I argue that the Hilbert-style formalization of the informal set theory is not an improvement and hence must be abandoned and replaced by a different theoretical setting based on non-propositional
postulates rather than usual axioms (I have in mind here the difference between postulates and axioms in Euclid). In addition to stressing the fact that such a constructive setting better fits the
current mathematical practice I explain why the Hilbert-style formalization makes mathematics inapplicable in empirical sciences and show how to fix the problem.
Professor Lou Crane (KSU)
Title and Abstract to be added.
Professor Panagis KARAZERIS (Patras)
Title and Abstract to be added
As described above, the topics of the Symposium fall under three broad headings :
Mathematics – Conceptual Analysis – History
Michael Wright
Archive of Mathematical Sciences
Toeplitz Operators with Quasihomogeneous Symbols on the Bergman Space of the Unit Ball
Journal of Function Spaces and Applications
Volume 2012 (2012), Article ID 414201, 16 pages
Research Article
Toeplitz Operators with Quasihomogeneous Symbols on the Bergman Space of the Unit Ball
^1College of Science, Dalian Ocean University, Dalian 116023, China
^2School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
Received 24 May 2012; Revised 6 August 2012; Accepted 30 August 2012
Academic Editor: Nikolai M. Vasilevski
Copyright © 2012 Bo Zhang and Yufeng Lu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
We consider when the product of two Toeplitz operators with some quasihomogeneous symbols on the Bergman space of the unit ball equals a Toeplitz operator with quasihomogeneous symbols. We also
characterize finite-rank semicommutators or commutators of two Toeplitz operators with quasihomogeneous symbols.
1. Introduction
Let $dV$ denote the Lebesgue volume measure on the unit ball $B_n$ of $\mathbb{C}^n$, normalized so that the measure of $B_n$ equals 1. The Bergman space $A^2(B_n)$ is the Hilbert space consisting of holomorphic functions on $B_n$ that are also in $L^2(B_n, dV)$.
Hence, for each $z \in B_n$, there exists a unique function $K_z \in A^2(B_n)$ with the following property: $f(z) = \langle f, K_z \rangle$ for all $f \in A^2(B_n)$. It is well known that the reproducing kernel is given by
$K_z(w) = \dfrac{1}{(1 - \langle w, z \rangle)^{n+1}}.$
Let $P$ be the orthogonal projection from $L^2(B_n, dV)$ onto $A^2(B_n)$. Given a function $u \in L^\infty(B_n)$, the Toeplitz operator $T_u : A^2(B_n) \to A^2(B_n)$ is defined by the formula $T_u f = P(uf)$ for $f \in A^2(B_n)$. Since the Bergman projection $P$ has norm 1, it is clear that Toeplitz operators
defined in this way are bounded linear operators on $A^2(B_n)$ with $\|T_u\| \le \|u\|_\infty$.
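To make this concrete in the simplest one-variable analogue (the unit disk rather than the ball; the symbol $\varphi(r) = r^2$ and all grid sizes below are our own illustrative choices), the following sketch computes the matrix of a Toeplitz operator with a radial symbol in the normalized monomial basis and checks numerically that it is diagonal, with $k$-th eigenvalue $2(k+1)\int_0^1 \varphi(r)\, r^{2k+1}\, dr$:

```python
# Numerical sketch (ours, not from the paper): on the unit disk with normalized
# area measure dA/pi, a Toeplitz operator with radial symbol phi(|z|) is diagonal
# on the monomial basis, with k-th eigenvalue 2(k+1) * int_0^1 phi(r) r^(2k+1) dr.
import numpy as np

def trap(y, x):
    """Composite trapezoid rule (kept explicit to avoid numpy version issues)."""
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))

def toeplitz_matrix(phi, N=4, nr=2001, nth=64):
    """Matrix of T_phi in the normalized monomial basis e_k(z) = sqrt(k+1) z^k."""
    r = np.linspace(0.0, 1.0, nr)
    th = 2 * np.pi * np.arange(nth) / nth
    M = np.empty((N, N), dtype=complex)
    for j in range(N):
        for k in range(N):
            # The entry splits into an angular and a radial integral.
            ang = np.exp(1j * (j - k) * th).sum() * (2 * np.pi / nth)
            rad = trap(phi(r) * r ** (j + k) * r, r)  # trailing r: polar Jacobian
            M[j, k] = np.sqrt((j + 1) * (k + 1)) * ang * rad / np.pi
    return M

phi = lambda r: r ** 2                      # arbitrary bounded radial symbol
M = toeplitz_matrix(phi)
r = np.linspace(0.0, 1.0, 2001)
diag = np.array([2 * (k + 1) * trap(phi(r) * r ** (2 * k + 1), r) for k in range(4)])
print(np.max(np.abs(M - np.diag(diag))))    # close to zero
```

The off-diagonal entries vanish because the angular integral of $e^{i(j-k)\theta}$ is zero for $j \ne k$; this is exactly the mechanism that makes radial-symbol Toeplitz operators act diagonally.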
We now consider a more general class of Toeplitz operators. For $u \in L^1(B_n, dV)$, in analogy to (1.3) we define an operator $T_u$ on $A^2(B_n)$ by the same formula $T_u f = P(uf)$.
Since the Bergman projection can be extended to $L^1(B_n, dV)$, the operator $T_u$ is well defined on $H^\infty(B_n)$, the space of bounded analytic functions on $B_n$. Hence, $T_u$ is always densely defined on $A^2(B_n)$. Since $P$ is not bounded on $L^1(B_n, dV)$, it is
well known that $T_u$ can be unbounded in general. In [1], Zhou and Dong gave the following definitions, which are based on the definitions on the unit disk in [2].
Definition 1.1. Let $u \in L^1(B_n, dV)$. (a) We say that $u$ is a T-function if (1.4) defines a bounded operator on $A^2(B_n)$. (b) If $u$ is a T-function, we write $T_u$ for the continuous extension of the operator (it is defined on the dense
subset $H^\infty(B_n)$ of $A^2(B_n)$) defined by (1.4). We say that $T$ is a Toeplitz operator if and only if $T = T_u$ is defined in this way. (c) If there exists an $s \in (0,1)$ such that $u$ is (essentially) bounded on the annulus $\{z : s < |z| < 1\}$, then we say $u$ is
“nearly bounded.”
On the Bergman space of the unit ball, Grudsky et al. [3] gave necessary and sufficient conditions for boundedness of Toeplitz operators with radial symbols. These conditions give a characterization
of the radial functions in which correspond to bounded operators and furthermore show that the T-functions form a proper subset of which contains all bounded and “nearly bounded” functions.
We denote the semicommutator and commutator of two Toeplitz operators and by
In the setting of the classical Hardy space, Brown and Halmos [4] gave a complete characterization for the product of two Toeplitz operators to be a Toeplitz operator. On the Bergman space of the
unit disk, Ahern and Čučković [5] and Ahern [6] obtained a similar characterization for Toeplitz operators with bounded harmonic symbols. For general symbols, the situation is much more complicated.
Louhichi et al. [2] gave necessary and sufficient conditions for the product of two Toeplitz operators with quasihomogeneous symbols to be a Toeplitz operator and Louhichi and Zakariasy made a
further discussion in [7]. But it remains open to determine when the product of two Toeplitz operators is a Toeplitz operator on the Bergman space.
The problem of determining when the semicommutator or commutator on the Bergman space has finite-rank seems to be far from solution. The analogous problem on the Hardy space has been completely
solved (see [8, 9]). Guo et al. [10] completely characterized the finite-rank semicommutator or commutator of two Toeplitz operators with bounded harmonic symbols on the Bergman space of the unit
disk and Luecking [11] showed that finite-rank Toeplitz operators on the Bergman space of the unit disk must be zero. Recently, Čučković and Louhichi [12] studied finite-rank semicommutators and
commutators of Toeplitz operators with quasihomogeneous symbols and obtained different results from the case of harmonic Toeplitz operators.
Motivated by recent work of Čučković and Louhichi, Zhang et al., and Zhou and Dong (see [1, 12, 13]), we discuss the finite-rank commutator (semicommutator) of Toeplitz operators with more general
symbols on the unit ball in this paper. Let and be two multi-indexes. A function is called a quasihomogeneous function of quasihomogeneous degree if is of the form for all in the unit sphere and some
function defined on the interval .
Let and be two nonconstant quasihomogeneous functions (with certain restrictions on their quasihomogeneous degree). In this paper, we investigate the following problems:(1)Under what conditions does
hold for some quasihomogeneous function ?(2)Under what conditions does the semicommutator have finite rank?(3)Under what conditions does the commutator have finite rank?
2. The Mellin Transform and Mellin Convolution
The main tool in this paper will be the Mellin transform. Recall that the Mellin transform of a function $\varphi \in L^1([0,1], r\,dr)$ is defined by the equation $\widehat{\varphi}(z) = \int_0^1 \varphi(r)\, r^{z-1}\, dr$. It is easy to check that $|\widehat{\varphi}(z)| \le \|\varphi\|_{L^1([0,1], r\,dr)}$, where $z = x + iy$ and $x \ge 2$.
For convenience, we denote the Mellin transform of $\varphi$ by $\widehat{\varphi}$, even when the expression for $\varphi$ is complicated. It is clear that $\widehat{\varphi}$ is well defined on $\{z : \Re z \ge 2\}$ and analytic on $\{z : \Re z > 2\}$. It is well known that the Mellin transform is uniquely determined by its values on
any arithmetic sequence of integers in this half-plane. The following classical theorem is proved in [14, page 102].
Theorem 2.1. Assume that $f$ is a bounded analytic function on $\{z : \Re z > 0\}$ which vanishes at the pairwise distinct points $z_1, z_2, \ldots$, where (i) $\inf_j |z_j| > 0$ and (ii) $\sum_{j \ge 1} \Re(1/z_j) = \infty$. Then $f$ vanishes identically on $\{z : \Re z > 0\}$.
Remark 2.2. We will often use this theorem to show that if $\varphi \in L^1([0,1], r\,dr)$ and if there exists a sequence of positive integers $(n_k)$ with $\sum_k 1/n_k = \infty$ such that $\widehat{\varphi}(n_k) = 0$ for all $k$, then $\widehat{\varphi}(z) = 0$ for all $z$ with $\Re z > 2$ and so $\varphi = 0$.
If $\varphi$ and $\psi$ are defined on the interval $[0,1]$, then their Mellin convolution is defined by $(\varphi *_M \psi)(r) = \int_r^1 \varphi\!\left(\tfrac{r}{t}\right) \psi(t)\, \tfrac{dt}{t}$. The Mellin convolution theorem states that $\widehat{\varphi *_M \psi}(z) = \widehat{\varphi}(z)\, \widehat{\psi}(z)$ and that, if $\varphi$ and $\psi$ are in $L^1([0,1], r\,dr)$, then so is $\varphi *_M \psi$.
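The convolution theorem can be sanity-checked numerically. In the sketch below (ours, not from the paper), the power symbols $\varphi(r) = r$ and $\psi(r) = r^3$ are arbitrary test choices, with exact transforms $\widehat{\varphi}(z) = 1/(z+1)$ and $\widehat{\psi}(z) = 1/(z+3)$:

```python
# Numerical check (ours) of the Mellin convolution theorem on [0,1]:
# the transform of (phi * psi)(r) = int_r^1 phi(r/t) psi(t) dt/t equals
# the product of the individual transforms.
import numpy as np

def trap(y, x):
    """Composite trapezoid rule."""
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))

def mellin(f, z, n=4001):
    """hat{f}(z) = int_0^1 f(r) r^(z-1) dr."""
    r = np.linspace(0.0, 1.0, n)
    return trap(f(r) * r ** (z - 1), r)

def mellin_conv(phi, psi, n=2001):
    """Return a callable evaluating the Mellin convolution of phi and psi."""
    def h(r_vals):
        rv = np.atleast_1d(np.asarray(r_vals, dtype=float))
        out = np.empty_like(rv)
        for i, ri in enumerate(rv):
            t = np.linspace(max(ri, 1e-9), 1.0, n)
            out[i] = trap(phi(ri / t) * psi(t) / t, t)
        return out
    return h

phi = lambda r: r           # hat{phi}(z) = 1/(z+1)
psi = lambda r: r ** 3      # hat{psi}(z) = 1/(z+3)
h = mellin_conv(phi, psi)
for z in (2.0, 3.0, 4.5):
    lhs = mellin(h, z, n=1001)
    rhs = mellin(phi, z) * mellin(psi, z)
    print(z, lhs, rhs)      # lhs and rhs agree to several digits
```

For these symbols the convolution is $(\varphi *_M \psi)(r) = (r - r^3)/2$, whose transform $1/((z+1)(z+3))$ is indeed the product of the two transforms.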
3. Products of Toeplitz Operators with Quasihomogeneous Symbols
For any multi-index $\alpha = (\alpha_1, \ldots, \alpha_n)$, where each $\alpha_i$ is a nonnegative integer, we will write $|\alpha|$ for $\alpha_1 + \cdots + \alpha_n$.
For and , the notation means that and means that We also define and obtain
It is known that is radial if and only if for any unitary transform of . So we have for , and . That is, depends only on . In this case, we denote by for convenience. The definition of
quasihomogeneous function on the unit disk has been given in many papers (see [2] or [7]), and a similar definition on the unit ball will be given in the following.
Definition 3.1. Let . A function is called a quasihomogeneous function of quasihomogeneous degree if is of the form where is a radial function, that is, for any in the unit sphere and .
The following lemma is from [1] and we will use it often.
Lemma 3.2. Let , be two multi-indexes and let be a bounded radial function on . Then for any multi-index ,
Proposition 3.3. Let be multi-indexes and let be bounded radial functions on . If the product is of finite rank, then there exists such that
Proof. Denote by the product of Toeplitz operators and let rank . For multi-indexes , we have where It follows that , where and is a constant dependent on , and . Thus the set contains at most
elements. Let , then there exists such that
Proposition 3.4. , are defined as in Proposition 3.3. The product is of finite rank if and only if for some .
Proof. Using Proposition 3.3 and Theorem 2.1, we can easily get the result.
This result is analogous to Theorem 3.2 in [15], but we get it in a different way.
Similar to the proof of Proposition 3.3, we can get a result about finite-rank commutators (semicommutators).
Proposition 3.5. Let , and be multi-indexes and let , be bounded radial functions on . If the commutator (or the semicommutator ) is of finite rank, then there exists such that (or ) for .
Now we are in a position to characterize when the product of two Toeplitz operators with some quasihomogeneous symbols equals a Toeplitz operator with quasihomogeneous symbols.
When , if , then or . But when , there exist nonzero multi-indexes and such that . In this case, we have the following theorem.
Theorem 3.6. Suppose and are two nonzero multi-indexes with . Let and be bounded radial functions on . If there exists a bounded radial function such that , then must be a solution of the equation
Proof. Obviously, the equality holds for each monomial with the multi-index .
Since , is equivalent to . By Lemma 3.2, it is easy to check that where Since , we have . So (3.13) implies that As , we have A direct calculation gives that for . Then we have or equivalently,
Combining the above equality with Remark 2.2, we get the conclusion.
In the following, we give some explicit examples in which Theorem 3.6 is applied.
Example 3.7. Suppose , , , , is a bounded radial function such that . Using Theorem 3.6, we can get that .
Example 3.8. Suppose , is a bounded radial function such that , then must be a solution of the equation where is the characteristic function of the set . For example, suppose , , , is a bounded
radial function such that , then it follows that and .
Louhichi et al. [2] showed that there exist two nontrivial quasihomogeneous Toeplitz operators on the Bergman space of the unit disk such that the product of those Toeplitz operators is also a
nontrivial Toeplitz operator, for example . On weighted Bergman space of the unit ball , Vasilevski [16, 17] showed that there exist parabolic quasihomogeneous (It is clear that the quasihomogeneous
function is also a parabolic quasihomogeneous function) symbol Toeplitz operators such that the finite product of those Toeplitz operators is also a Toeplitz operator of this type. However, on the
unit ball , if and are two nonzero multi-indexes which are not orthogonal, we can get that there exist no nontrivial and such that .
Theorem 3.9. Let , be two nonzero multi-indexes which are not orthogonal. Given , and let and be two bounded radial functions on . If there exists a bounded radial function such that , then or .
Proof. If , as in Theorem 3.6, we can get where We claim that there exists such that
Since is not orthogonal to , without loss of generality, we can suppose .
Case 1 (). We denote , . For , let and .
Let . Then where .
Therefore, the equation has two solutions at most. It means that there exists such that for any . Thus for each , we have and .
Case 2. Given a multi-index , let and for . As in Case 1, we can find an integer with such that and for any .
Using (3.20), we have for . As and , it is easy to see that (3.22) holds.
Let and and and . Then . Since we know that at least one of the series and diverges. Hence it follows from Remark 2.2 that or .
4. Finite-Rank Semicommutator
On the unit ball , we will show that the semicommutator of two Toeplitz operators with some quasihomogeneous symbols is of finite rank if and only if it is zero.
Theorem 4.1. Let , be two multi-indexes, and let , be two integrable radial functions on such that , , and are T-functions. If the semicommutator has finite rank, then it is equal to zero.
Proof. Let be the semicommutator . If is finite rank, using Proposition 3.5, there exists such that Therefore, Lemma 3.2 implies that for all . Since the above equation is equivalent to Note that and
are both analytic on the right half-plane and the sequence is arithmetic. Then Remark 2.2 implies that Hence, . The proof is complete.
Next we will consider when the semicommutator of two quasihomogeneous Toeplitz operators is a finite-rank operator.
Remark 4.2. If the semicommutator is of finite rank, following the same process as in Theorem 4.1, we can prove that it must be zero.
On the unit disk, Čučković and Louhichi [12] gave an example to show that there exists a nonzero finite rank semicommutator , where and are radial functions. However, the situation on the unit ball
is different. Let , be two integrable radial functions on and , be two multi-indexes. Then we will prove that is a finite-rank operator if and only if . Now, we begin with the case that and are not orthogonal.
Theorem 4.3. Let be two multi-indexes which are not orthogonal, and let be two integrable radial functions on such that , and are T-functions. If the semicommutator has finite rank, then or .
Proof. Let be the semicommutator . If is of finite rank, using Proposition 3.5, we can get that there exists such that
Lemma 3.2 gives that for all .
Since is not orthogonal to , from the proof of Theorem 3.9 and using (4.7), we can get that there exists such that Analogous to the proof of Theorem 3.9, it is easy to get or .
Next, we will show that there exists no nontrivial finite-rank semicommutator in the case that .
Theorem 4.4. Let , be two multi-indexes with , and let and be two integrable radial functions on such that , and are T-functions. The semicommutator has finite rank if and only if .
Proof. We only need to prove the necessity. Let be of finite rank. Since , it is easy to see that if and only if for multi-indexes . By Lemma 3.2, the following statements hold:(i) if , then ;(ii) if
, then Combining (i) and (ii) with the assumption that has finite rank, we get that there exists such that To finish the proof, we will prove that for .
Note that for and . Then for each , (ii) implies that if and only if that is, Since , the preceding equality is equivalent to for all .
By (4.10), we obtain that the equality (4.13) holds for all . It is easy to see that is arithmetic. Therefore, by Remark 2.2, we have for all .
In particular if , with , we have It follows that the equality (4.13) holds for all . So for all . Hence, the proof is complete.
Example 4.5. Let , be two multi-indexes, is a bounded radial function and , . If is of finite rank, using Theorem 4.1, we obtain that . If is not orthogonal to and is of finite rank, then it follows
from Theorem 4.3 that or . But if , there exist and such that . In particular, suppose , , , a direct calculation gives that and , that is, .
5. Finite-Rank Commutators
In this section, let be two integrable radial functions on . We now investigate the commutator of two quasihomogeneous Toeplitz operators and consider when , , or have finite rank.
Theorem 5.1. Let be two multi-indexes with , and let be two integrable radial functions on such that and are T-functions. If is not a constant, then is of finite rank if and only if or .
Proof. Let be the commutator . By Lemma 3.2, if and only if for . If is of finite rank, using Proposition 3.5, there exists such that
Since , by Theorem 2.1 and following the same process as in Theorem 4.4, we get . Using Theorem 4.4 in [1], we have if and only if or .
Conversely, if or , then we can easily show that for each multi-index , which implies that and commute.
Remark 5.2. The same as in Theorem 5.1, we can easily prove if the commutator is of finite rank, then it must be a zero operator.
Next, we give some examples.
Example 5.3. Suppose that , where and . Let be a T-function, where is a radial function. Then is of finite rank if and only if and commute. By Theorem 4.9 in [1], we can also get is of finite rank if
and only if is a monomial.
On the unit disk, if the commutator has finite rank , then is at most equal to the quasihomogeneous degree and a nonzero finite rank commutator has been given in [12]. On the unit ball , we will show
that the commutator has finite rank if and only if commutes with if and only if or .
Theorem 5.4. Let , be two nonzero multi-indexes, and let , be two integrable radial functions on such that and are T-functions. If the commutator has finite rank, then or .
Proof. Let denote the commutator . Applying Lemma 3.2, we get for . If is finite rank, using Proposition 3.5, there exists such that
If , then we have . Combining (5.4) and (5.3), we have for . Analogous to the proof of Theorem 4.8 in [1], it is not difficult to get that or .
On the other hand, if is not orthogonal to , then (5.3) and (5.4) imply that for , where Following the same process as in Theorem 3.9, we get or , as desired.
The authors thank the referee for several suggestions that improved the paper. This research is supported by NSFC (Grant No. 10971020).
1. Z.-H. Zhou and X.-T. Dong, “Algebraic properties of Toeplitz operators with radial symbols on the Bergman space of the unit ball,” Integral Equations and Operator Theory, vol. 64, no. 1, pp. 137–154, 2009.
2. I. Louhichi, E. Strouse, and L. Zakariasy, “Products of Toeplitz operators on the Bergman space,” Integral Equations and Operator Theory, vol. 54, no. 4, pp. 525–539, 2006.
3. S. Grudsky, A. Karapetyants, and N. Vasilevski, “Toeplitz operators on the unit ball in ${ℂ}^{n}$ with radial symbols,” Journal of Operator Theory, vol. 49, no. 2, pp. 325–346, 2003.
4. A. Brown and P. R. Halmos, “Algebraic properties of Toeplitz operators,” Journal für die Reine und Angewandte Mathematik, vol. 213, pp. 89–102, 1963.
5. P. Ahern and Ž. Čučković, “A theorem of Brown-Halmos type for Bergman space Toeplitz operators,” Journal of Functional Analysis, vol. 187, no. 1, pp. 200–210, 2001.
6. P. Ahern, “On the range of the Berezin transform,” Journal of Functional Analysis, vol. 215, no. 1, pp. 206–216, 2004.
7. I. Louhichi and L. Zakariasy, “On Toeplitz operators with quasihomogeneous symbols,” Archiv der Mathematik, vol. 85, no. 3, pp. 248–257, 2005.
8. S. Axler, S.-Y. A. Chang, and D. Sarason, “Products of Toeplitz operators,” Integral Equations and Operator Theory, vol. 1, no. 3, pp. 285–309, 1978.
9. X. Ding and D. Zheng, “Finite rank commutator of Toeplitz operators or Hankel operators,” Houston Journal of Mathematics, vol. 34, no. 4, pp. 1099–1119, 2008.
10. K. Guo, S. Sun, and D. Zheng, “Finite rank commutators and semicommutators of Toeplitz operators with harmonic symbols,” Illinois Journal of Mathematics, vol. 51, no. 2, pp. 583–596, 2007.
11. D. H. Luecking, “Finite rank Toeplitz operators on the Bergman space,” Proceedings of the American Mathematical Society, vol. 136, no. 5, pp. 1717–1723, 2008.
12. Ž. Čučković and I. Louhichi, “Finite rank commutators and semicommutators of quasihomogeneous Toeplitz operators,” Complex Analysis and Operator Theory, vol. 2, no. 3, pp. 429–439, 2008.
13. B. Zhang, Y. Shi, and Y. Lu, “Algebraic properties of Toeplitz operators on the polydisk,” Abstract and Applied Analysis, vol. 2011, Article ID 962313, 18 pages, 2011.
14. R. Remmert, Classical Topics in Complex Function Theory, vol. 172, Springer, New York, NY, USA, 1998.
15. T. Le, “Finite-rank products of Toeplitz operators in several complex variables,” Integral Equations and Operator Theory, vol. 63, no. 4, pp. 547–555, 2009.
16. N. Vasilevski, “Parabolic quasi-radial quasi-homogeneous symbols and commutative algebras of Toeplitz operators,” in Topics in Operator Theory. Volume 1. Operators, Matrices and Analytic Functions, vol. 202, pp. 553–568, Birkhäuser, Basel, Switzerland, 2010.
17. N. Vasilevski, “Quasi-radial quasi-homogeneous symbols and commutative Banach algebras of Toeplitz operators,” Integral Equations and Operator Theory, vol. 66, no. 1, pp. 141–152, 2010.
primitive root
show that primitive root modulo 2^t where t is positive integer, does not exist for t>2
Ok, I'll take a crack at this. A primitive root modulo $2^t$ is a number $r$ with $1 \le r < 2^t$ and $\gcd(r, 2^t) = 1$ whose powers $r, r^2, r^3, \ldots, r^{\varphi(2^t)}$ are all distinct modulo $2^t$, i.e., whose multiplicative order is $\varphi(2^t) = 2^{t-1}$. One way of
disproving existence would be to show that $r^m$ is congruent to 1 for some $m < 2^{t-1}$ and every odd $r$. Perhaps you could try induction on $t$, taking $m = 2^{t-2}$?
The proof depends on the fact that $5^{2^{t-3}} \equiv 1+2^{t-1} \pmod{2^t}$ for $t\geq 3$. This shows that the order of $5$ is $2^{t-2}$. Once you have established that, prove that $A=\{1,5,5^2,\ldots,5^{2^{t-2}-1}\} = \{5^k \mid 0\leq k < 2^{t-2}\}$ are all incongruent with each other. Then $B=\{ -1,-5,-5^2,\ldots,-5^{2^{t-2}-1}\} = \{ -5^k \mid 0\leq k < 2^{t-2}\}$ are all incongruent with each other. And
finally each element of $A$ is incongruent with each element of $B$. With those two facts the proof is almost complete. There are a total of $2^{t-2} + 2^{t-2} = 2^{t-1}$ elements in $A\cup B$ and $\phi ( 2^t) = 2^{t-1}$, which means $A\cup B$ is a reduced system of residues. And no element of it has order $2^{t-1}$, which means there is no primitive root.
Proof that an odd number raised to an integral power is odd
September 1st 2010, 12:25 PM #1
Jul 2010
Proof that an odd number raised to an integral power is odd
Sorry if this is too elementary for this sub-forum.
With n = 1, 2, 3, 4 I can see that $(2k+1)^n$ produces polynomials consisting of terms with even coefficients plus $1^n$, which is to say odd numbers. How can I prove this for all n?
You can simply use that odd times odd equals odd.
(And for completeness you might want to address the negative integers.)
Two ways:
1) Use the binomial theorem:
$(2k+1)^n= \sum_{i=0}^n \begin{pmatrix}n \\i\end{pmatrix}2^ik^i= 2\sum_{i=1}^n\begin{pmatrix}n \\i\end{pmatrix}2^{i-1}k^i+ 1$
2) Proof by induction:
Certainly if n= 1, $(2k+ 1)^1= 2k+ 1$ is odd. Suppose that, for some i, $(2k+1)^i$ is odd. Then $(2k+1)^{i+1}= (2k+1)^i(2k+1)$ is the product of two odd numbers and so odd, as desired.
(To prove "the product of two odd numbers is odd" just look at (2k+ 1)(2j+1).)
$\forall\,n\in\mathbb{N}\,,\,\,(2k+1)^n=\sum\limits ^n_{i=0}\binom{n}{i}(2k)^i$ , according to Newton's binomial theorem, and this is a sum of
even numbers except for the first index $i=0$ , for which we have $\binom{n}{0}(2k)^0=1$ , and thus the result is, by definition, an odd number.
(i) $(2n+1)(2n+1) = 4n^2 + 4n + 1 = 2(2n^2 + 2n) + 1$
(ii) $(2n+1)^{-n} = \frac{1}{2n + 1}$. An odd number divided by an odd number will produce an odd number, for:
(iii) $\frac{2n+1}{2n+1} = 1$, which is odd.
??? You should recheck what you wrote
Odd times odd is odd, we learn this in primary school, proof without modular arithmetic (2k+1)(2m+1) = 4km+2k+2m+1
Use weak induction on exponent n in {0,1,2,...} and for negative exponents note that the result is only an integer if absolute value of base is 1.
Edit: HallsofIvy already did it.
Thanks! I got confused there with your previous note about completeness and the negative exponents and produced complete nonsense, which attributes parity to fractions. =)
I just want to show a different way. The result is clear for 1 because $1^n=1=2\cdot0+1$, odd. By the Fundamental Theorem of Arithmetic any integer $m>1$ has a unique factorization into primes
(not considering the order), say $m=p_1^{m_1}p_2^{m_2}\cdots{p_r}^{m_r}$; the exponents are positive. Then $m^n=p_1^{nm_1}p_2^{nm_2}\cdots{p_r}^{nm_r}$.
It follows that $m^n$ and $m$ have precisely the same prime divisors. In particular, $2|m$ if and only if $2|m^n$, i.e., if $m$ is of the form $2k+1$ for some integer $k$, then $m^n$ is of the
same form.
Calculate max value of variables
How can I calculate the max value of several variables? Let's say I have three different variables which are assigned later. Is there some command which will order them from the maximum to the minimum and in reverse? Thanks.
Are you talking about sorting 3 values in ascending or descending order?
Order 3 values: calculate which one is highest and put it first, then 2nd and 3rd.
And where is it that you are stuck? Show your code.
The code looks good to me. Try to expand it to one more variable. It is not that difficult. Go for it.
Have you tried nesting a couple of ternary conditional expression operators ?:
I'm not saying that this may not be frowned on by some!
Well, tried but don't know how to order them. Ok, thanks anyway.
Well, std::sort() is not guaranteed to be stable, so equal elements might end up in a different relative order in some cases. This is why the STL provides two sorts: std::sort() and std::stable_sort().
Topic archived. No new replies allowed.
Fibonacci sequence
Can anyone explain if the Fibonacci sequence can be placed into a parabolic or hyperbolic function?
If you're asking if there's a function $\phi:\mathbb{R}^+ \rightarrow \mathbb{R}$ such that the graph of $\phi$ is a parabola or hyperbola, and that $\phi$ satisfies, for all positive integers $n$, $\phi(n) = F_n$, the $n^{th}$ Fibonacci number, then the answer is no. Parabolas and hyperbolas only have a few "free parameters", and the requirement $\phi(n) = F_n \ \forall n \in \mathbb{Z}^+$ would quickly exhaust them. It would be possible that it algebraically worked out, but it doesn't. I'll do the parabola case, and leave the hyperbola ($\phi(x) = b\sqrt{1+(x/a)^2}$) to you.

If a parabola is the graph of a function $\phi$ that satisfies $\phi(n) = F_n$, then let $\phi(x) = ax^2 + bx + c$ for some $a, b, c \in \mathbb{R}$. Then $\forall n \in \mathbb{Z}^+, \phi(n+2) = \phi(n+1) + \phi(n)$, so $a(n+2)^2 + b(n+2) + c = ( a(n+1)^2 + b (n+1) + c ) + ( an^2 + bn + c )$. Thus $\forall n \in \mathbb{Z}^+, a[ (n+2)^2 - (n+1)^2 - n^2 ] + b[ (n+2) - (n+1) - n ] + [ c - c - c ] = 0$, so $a(-n^2 + 2n + 3) + b(-n + 1) - c = 0$, or equivalently, $(-a)n^2 + (2a - b)n + (3a + b - c) = 0$.

The only way that could be true for all $n$ is if each of those coefficients of powers of $n$ were 0. Thus $-a = 0$, $2a - b = 0$, $3a + b - c = 0$. That system of equations (3 equations in the 3 unknowns $a, b$, and $c$) has only one solution: $a = b = c = 0$. That means the only quadratic that has a prayer of working is the zero quadratic $\phi(x) = 0x^2 + 0x + 0 = 0$. But that obviously doesn't satisfy $\phi(n) = F_n \ \forall n \in \mathbb{Z}^+$, and it was the only one that had a chance.

I suppose I should note that even if you consider rotated parabolas/hyperbolas, you're still going to quickly run out of free parameters. (The standard equation for the graph of a generic conic section in the plane has 6 free parameters. That's more than the 3 a quadratic polynomial has, but it's still less than the infinity required.)
Last edited by johnsomeone; September 14th 2012 at 06:49 PM. | {"url":"http://mathhelpforum.com/calculus/203433-fibonnacci-sequence.html","timestamp":"2014-04-20T09:59:08Z","content_type":null,"content_length":"37100","record_id":"<urn:uuid:d764297c-71e8-4dc0-8e8c-a974fab501f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 582
- International Journal of Computer Vision , 1998
"... The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent
simultaneous alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the ..."
Cited by 1124 (12 self)
The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous
alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the interpretation of static images, in which the probability distribution of possible
interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly
robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time. Contents 1 Tracking curves in clutter 2 2 Discrete-time propagation of state
density 3 3 Factored sampling 6 4 The Condensation algorithm 8 5 Stochastic dynamical models for curve motion 10 6 Observation model 13 7 Applying the Condensation algorithm to video-streams 17 8
Conclusions 26 A Non-line...
, 2001
"... Mobile robot localization is the problem of determining a robot's pose from sensor data. This article presents a family of probabilistic localization algorithms known as Monte Carlo Localization
(MCL). MCL algorithms represent a robot's belief by a set of weighted hypotheses (samples), which approxi ..."
Cited by 608 (83 self)
Mobile robot localization is the problem of determining a robot's pose from sensor data. This article presents a family of probabilistic localization algorithms known as Monte Carlo Localization
(MCL). MCL algorithms represent a robot's belief by a set of weighted hypotheses (samples), which approximate the posterior under a common Bayesian formulation of the localization problem. Building
on the basic MCL algorithm, this article develops a more robust algorithm called MixtureMCL, which integrates two complimentary ways of generating samples in the estimation. To apply this algorithm
to mobile robots equipped with range finders, a kernel density tree is learned that permits fast sampling. Systematic empirical results illustrate the robustness and computational efficiency of the
, 1996
"... . In Proc. European Conf. Computer Vision, 1996, pp. 343--356, Cambridge, UK The problem of tracking curves in dense visual clutter is a challenging one. Trackers based on Kalman filters are of
limited use; because they are based on Gaussian densities which are unimodal, they cannot represent s ..."
Cited by 565 (23 self)
. In Proc. European Conf. Computer Vision, 1996, pp. 343--356, Cambridge, UK The problem of tracking curves in dense visual clutter is a challenging one. Trackers based on Kalman filters are of
limited use; because they are based on Gaussian densities which are unimodal, they cannot represent simultaneous alternative hypotheses. Extensions to the Kalman filter to handle multiple data
associations work satisfactorily in the simple case of point targets, but do not extend naturally to continuous curves. A new, stochastic algorithm is proposed here, the Condensation algorithm ---
Conditional Density Propagation over time. It uses `factored sampling', a method previously applied to interpretation of static images, in which the distribution of possible interpretations is
represented by a randomly generated set of representatives. The Condensation algorithm combines factored sampling with learned dynamical models to propagate an entire probability distribution for
object pos...
, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and
flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have bee ..."
Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible.
For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However,
HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete
random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of
models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as
follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T 3), where T is the length of the sequence; an exact smoothing algorithm that
takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic
approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering
to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of
DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
, 2003
"... A new approach toward target representation and localization, the central component in visual tracking of non-rigid objects, is proposed. The feature histogram based target representations are
regularized by spatial masking with an isotropic kernel. The masking induces spatially-smooth similarity fu ..."
Cited by 546 (3 self)
A new approach toward target representation and localization, the central component in visual tracking of non-rigid objects, is proposed. The feature histogram based target representations are
regularized by spatial masking with an isotropic kernel. The masking induces spatially-smooth similarity functions suitable for gradient-based optimization, hence, the target localization problem can
be formulated using the basin of attraction of the local maxima. We employ a metric derived from the Bhattacharyya coefficient as similarity measure, and use the mean shift procedure to perform the
optimization. In the presented tracking examples the new method successfully coped with camera motion, partial occlusions, clutter, and target scale variations. Integration with motion filters and
data association techniques is also discussed. We describe only few of the potential applications: exploitation of background information, Kalman tracking using motion models, and face tracking.
Keywords: non-rigid object tracking; target localization and representation; spatially-smooth similarity function; Bhattacharyya coefficient; face tracking. 1
- In Proceedings of the AAAI National Conference on Artificial Intelligence , 2002
"... The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this
problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-base ..."
Cited by 447 (10 self)
The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem
scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each
sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the
number of landmarks in the map. This algorithm is based on a factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has
been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM
algorithm on both simulated and real-world data.
, 2006
"... The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging
problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns o ..."
Cited by 297 (4 self)
The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem.
Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene
occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions
are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used,
provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of
appropriate image features, selection of motion models, and detection of objects.
, 1997
"... This paper describes a probabilistic decomposition of human dynamics at multiple abstractions, and shows how to propagate hypotheses across space, time, and abstraction levels. Recognition in
this framework is the succession of very general low level grouping mechanisms to increased specific and lea ..."
Cited by 292 (2 self)
This paper describes a probabilistic decomposition of human dynamics at multiple abstractions, and shows how to propagate hypotheses across space, time, and abstraction levels. Recognition in this
framework is the succession of very general low level grouping mechanisms to increased specific and learned model based grouping techniques at higher levels. Hard decision thresholds are delayed and
resolved by higher level statistical models and temporal context. Low-level primitives are areas of coherent motion found by EM clustering, mid-level categories are simple movements represented by
dynamical systems, and highlevel complex gestures are represented by Hidden Markov Models as successive phases of simple movements. We show how such a representation can be learned from training
data, and apply it to the example of human gait recognition. 1 Introduction This paper addresses the problem of learning and recognizing human and other biological movements in video sequences of an
- Exploring Artificial Intelligence in the New Millenium
"... This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are
presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is al ..."
Cited by 288 (9 self)
This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently
being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also described, along with an extensive list of open research problems.
- Proceedings of the IEEE , 2004
"... The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that
it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the ..."
Cited by 253 (2 self)
The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is
difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization. To
overcome this limitation, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. It is more accurate, easier to
implement, and uses the same order of calculations as linearization. This paper reviews the motivation, development, use, and implications of the UT. Keywords—Estimation, Kalman filtering, nonlinear
systems, target tracking. I. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2939","timestamp":"2014-04-20T22:34:05Z","content_type":null,"content_length":"39833","record_id":"<urn:uuid:d94d54ae-3688-4b95-a9b6-087e2ae6a74d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
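The unscented transformation described above can be sketched in a few lines. This is a minimal, hypothetical implementation using the original Julier–Uhlmann sigma-point weighting; the function name and the `kappa` parameterization are illustrative choices, not taken from the paper:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear function f
    using 2n+1 deterministically chosen sigma points."""
    n = mean.size
    # Columns of the Cholesky factor of (n + kappa) * cov give the
    # symmetric perturbations around the mean.
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Push every sigma point through the nonlinearity ...
    y = np.array([f(x) for x in sigma])
    # ... and recombine into the transformed mean and covariance.
    y_mean = w @ y
    d = y - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear map the transform is exact, which is a handy sanity check; for a genuinely nonlinear f it captures the mean and covariance to higher order than the EKF's linearization.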
Hilbert matrix
Thanks for the replies.
I've searched on Google (actually the book doesn't mention the name of the matrix), and all I found is some stronger results (like in Wikipedia) with not enough elementary proofs. I don't need to prove what the inverse matrix is; I just need to prove that it is invertible and that the inverse has integer entries.
Here is example 16:
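One way to convince yourself of the claim (without knowing the closed form of the inverse) is to invert the Hilbert matrix with exact rational arithmetic and check the entries. A small sketch in Python — all names are mine, not from the book:

```python
from fractions import Fraction

def hilbert(n):
    """n-by-n Hilbert matrix, H[i][j] = 1/(i + j + 1), as exact rationals."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def invert(M):
    """Exact Gauss-Jordan inversion over the rationals.
    No pivoting is needed here: the leading minors of a Hilbert matrix
    are nonzero, so every pivot is nonzero in exact arithmetic."""
    n = len(M)
    A = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = A[col][col]
        A[col] = [x / piv for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

H_inv = invert(hilbert(5))
# Every entry of the inverse has denominator 1, i.e. is an integer.
all_integer = all(x.denominator == 1 for row in H_inv for x in row)
```

This only verifies finitely many cases, of course; the elementary proof still has to argue integrality in general (e.g. via the binomial-coefficient formula for the inverse's entries).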
Patent application title: MEANS AND METHODS FOR DETECTING ANTIBIOTIC RESISTANT BACTERIA IN A SAMPLE
The present invention provides a method for detecting and/or identifying specific bacteria within an uncultured sample, comprising steps of: a. obtaining an absorption spectrum (AS) of said
uncultured sample; b. acquiring the n dimensional volume boundaries for said specific bacteria; c. data processing said AS; i. noise reducing; ii. extracting m features from said entire AS; iii.
dividing said AS into several segments according to said m features; iv. calculating m
features of each of said segment; and, d. detecting and/or identifying said specific bacteria if said m
features and/or said m features are within said n dimensional volume; wherein said bacteria is an antibiotic-resistant bacteria.
A method for detecting and/or identifying specific bacteria within an uncultured sample; said method comprising steps of: a. obtaining an absorption spectrum (AS) of said uncultured sample; b.
acquiring the n dimensional volume boundaries for said specific bacteria by i. obtaining at least one absorption spectrum (AS2) of known samples containing said specific bacteria; ii. extracting x
features from said entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of
the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the
signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer higher or equal to
one; x is an integer greater than or equal to one; iii. dividing said AS2 into several segments according to said x features; iv. calculating y features of each of said segment of said AS2; said y
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y is an integer higher or equal to one; v. assigning at least one of
said x features and/ or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward Selection, Sequential Forward Selection,
Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(S); S); Kullback-Leibler divergence; correct classification rate; and any combination thereof; vi. defining an n dimensional space; n equals the sum of said x and said y features; vii. defining the n
dimensional volume in said n dimensional space; viii. determining said boundaries of said n dimensional volume by using technique selected from a group consisting of Bayes classifier, Support Vector
Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Gaussian Mixed Model (GMM), Weighted K-nearest neighbor, Hierarchical clustering algorithm, K-mean clustering algorithm, Ward's clustering algorithm, Minimum
least square, Neural-Network or any combination thereof; c. data processing said AS; i. noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof; ii. extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength,
peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear
prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet
coefficients or any combination thereof; m is an integer higher or equal to one; iii. dividing said AS into several segments according to said m features; iv. calculating m
features of each of said segment; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; and, d. detecting and/or identifying said specific bacteria if said m
features and/or said m features are within said n dimensional volume; wherein said bacteria is an antibiotic-resistant bacteria.
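As a concrete, heavily simplified illustration of the flow in claim 1 — smooth the spectrum, extract a few of the named features, then decide by comparison against spectra of known bacteria — here is a sketch. The running-average kernel, the three chosen features, and the 1-nearest-neighbor decision are stand-ins for the richer options the claim lists; every name and number is illustrative:

```python
import numpy as np

def extract_features(wavelengths, absorbance):
    """Three of the features named in the claim: peak's wavelength,
    peak's height, and (approximate) area under the spectrum."""
    # Step (c)(i): noise reduction with a simple running average
    kernel = np.ones(5) / 5.0
    smooth = np.convolve(absorbance, kernel, mode="same")
    i = int(np.argmax(smooth))
    step = wavelengths[1] - wavelengths[0]
    return np.array([wavelengths[i], smooth[i], smooth.sum() * step])

def classify(sample_features, train_features, train_labels):
    """Step (d), reduced to 1-nearest-neighbor: assign the sample to the
    bacterium whose training spectrum is closest in feature space."""
    dist = np.linalg.norm(train_features - sample_features, axis=1)
    return train_labels[int(np.argmin(dist))]
```

In the claim the decision is whether the feature vector falls inside a learned n dimensional volume; K-nearest neighbor is simply the easiest of the boundary techniques it enumerates to show end to end.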
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, additionally comprising the step of selecting said x features and/or said y features via algorithms selected from a group consisting of the Chi-Squared (χ2) test, Wilcoxon test, and t-test, or any combination thereof.
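Of the tests this claim names, the two-sample t-test is the simplest to sketch: score every candidate feature by how strongly it separates spectra of resistant and susceptible bacteria, and keep the most significant ones. The feature matrices below are synthetic stand-ins (rows are spectra, columns are candidate features), and only one column actually carries signal:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical training features for the two classes
resistant = rng.normal(0.0, 1.0, size=(40, 4))
resistant[:, 2] += 3.0  # only feature 2 separates the classes
susceptible = rng.normal(0.0, 1.0, size=(40, 4))

# Rank candidate features by two-sample t-test p-value (one test per column)
pvals = ttest_ind(resistant, susceptible, axis=0).pvalue
best = int(np.argmin(pvals))
```

In the claim this ranking would feed the choice of x and y features for the n dimensional volume; here `best` simply identifies the single most discriminative column.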
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, wherein said sample is an aerosol or solid or liquid sample selected from a group
consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, wherein said step of acquiring the n dimensional volume boundaries for the specific
bacteria, additionally comprising step of calculating the Gaussian distribution and/or Multivariate Gaussian distribution, and/or Rayleigh distribution, and/or Maxwell distribution, and/or Estimate
the distribution by the Parzen method or mixed model (like the Gaussian Mixed Model known as GMM) for at least one of the n features such that the distributions defines the n dimensional volume in
the n dimensional space.
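A sketch of the Parzen-window variant this claim mentions: estimate the density of the training feature vectors, then take the n dimensional volume to be the region where the density stays above a level that keeps most of the training data inside. The kernel bandwidth, the 95% level, and the toy two-feature training set are assumptions for illustration only:

```python
import numpy as np

def parzen_density(x, train, h=0.5):
    """Parzen-window density estimate with an isotropic
    Gaussian kernel of bandwidth h."""
    d = train.shape[1]
    diff = train - x
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2.0 * h ** 2))
    return k.mean() / ((2.0 * np.pi * h ** 2) ** (d / 2.0))

rng = np.random.default_rng(0)
# Hypothetical training features (n = 2 here) for the target bacterium
train = rng.normal(loc=[5.0, 1.0], scale=0.3, size=(200, 2))

# Boundary of the n dimensional volume: the density level that keeps
# 95% of the training spectra inside the region.
dens = np.array([parzen_density(x, train) for x in train])
threshold = np.quantile(dens, 0.05)

def inside_volume(x):
    return parzen_density(np.asarray(x, dtype=float), train) >= threshold
```

A new spectrum's feature vector is then declared "inside the volume" exactly when its estimated density clears the threshold.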
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, wherein said step (c) of data processing said AS additionally comprising steps of: i.
calculating at least one o-th derivative of said AS; said o is an integer greater than or equal to 1; ii. extracting m features from said entire o-th derivative spectrum; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; iii. dividing said o-th derivative into several segments according to said m
features; iv. calculating the m
features in at least one of said segments; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; and, v. detecting and/or identifying said specific bacteria if said m
and/or m
features and/or said m and/or said m
features are within said n dimensional volume.
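The derivative spectra of claim 5 pair naturally with the Savitzky-Golay smoothing named in claim 1, because the same filter can smooth and differentiate in one pass. A sketch on a synthetic single-peak spectrum (the wavelength grid, peak position, and filter settings are invented for illustration):

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical 1 nm wavelength grid and a synthetic absorption peak at 600 nm
wavelengths = np.linspace(400.0, 800.0, 401)
absorbance = np.exp(-((wavelengths - 600.0) / 25.0) ** 2)

# o = 1: a Savitzky-Golay filter with deriv=1 returns the first
# derivative of the smoothed spectrum, in absorbance units per nm.
d1 = savgol_filter(absorbance, window_length=11, polyorder=3,
                   deriv=1, delta=wavelengths[1] - wavelengths[0])
```

The derivative is positive on the rising flank and negative on the falling one, so its zero crossing localizes the peak and can drive the segmentation of step (iii).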
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, additionally comprising the step of selecting said specific bacteria
from a group consisting of Gram negative pathogens such as Various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus
pneumonia resistant to b lactamase and macrolides, Streptococcus viridians group resistant to b lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant
to penicillins and aminoglycosides (for example: Enterococcus Faecium, Enterococcus Faecalis), staphylococcus aureus resistant to B lactams, macrolides, lincosamides and aminoglicozides, Streptococcus pyogenes
resistant to macrolides, macrolide-resistant streptococci of groups B, C and G. Coagulase negative staphylococci resistant to b lactams, aminoglycosides, macrolides, lincosamides and glycopeptides,
multiresistant strains of Listeria and corynebacterium, Peptostreptococcus and clostridium, C. Difficile, Haemophilus Influenza resistant to b lactamase, Pseudomonas Aeruginosa, Stenotrophomonas
Maltophilia, Klebsiella Pneumonia resistant to antibiotics Klebsiella Pneumonia, Klebsiella Pneumonia sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, wherein said step of obtaining the AS additionally comprises steps of: a. providing at least one optical cell accommodating said uncultured sample; b. providing p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers, monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light to said optical cell; c. providing detecting means for receiving the spectroscopic data of said sample; d. emitting light from said light sources at different wavelengths to said optical cell; and, e. collecting said light exiting from said optical cell by said detecting means; thereby obtaining said AS.
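The detecting means of steps (c)-(e) measure transmitted intensity rather than absorbance; the conversion between the two is the standard Beer-Lambert relation. A minimal sketch — the function name and the two-beam arrangement (sample reading versus reference reading) are assumptions, since the claim does not spell them out:

```python
import numpy as np

def absorption_spectrum(i_sample, i_reference):
    """Absorbance per wavelength, A = -log10(I_sample / I_reference),
    from detector readings with and without the sample in the cell."""
    i_sample = np.asarray(i_sample, dtype=float)
    i_reference = np.asarray(i_reference, dtype=float)
    return -np.log10(i_sample / i_reference)
```

For example, a wavelength at which only 10% of the reference intensity survives the cell maps to an absorbance of 1.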
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 7, wherein said step of emitting light is performed at the wavelength range of UV,
visible, IR, mid-IR, far-IR and terahertz.
A method for detecting and/or identifying specific bacteria within an uncultured sample; said method comprising steps of: a. obtaining an absorption spectrum (AS) of said uncultured sample; said AS
containing water influence; b. acquiring the n dimensional volume boundaries for said specific bacteria by: i. obtaining at least one absorption spectrum (AS2) of known samples containing said
specific bacteria; ii. extracting x features from said AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section,
peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the
signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is
an integer greater than or equal to one; iii. calculating at least one derivative of said AS2; iv. dividing said AS2 into several segments according to said x
features; v. calculating the y features of each of said segment; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y
is an integer higher or equal to one; vi. assigning at least one of said x features and/ or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of
Sequential Backward Selection, Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(S); S); Kullback-Leibler divergence; correct classification rate; and any combination thereof; vii. defining an n dimensional space; n equals the sum of said x features and said y features; viii. defining the
n dimensional volume in said n dimensional space; ix. determining said boundaries of said n dimensional volume by using technique selected from a group consisting of Bayes classifier, Support Vector
Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, Gaussian Mixed Model (GMM), K-mean clustering algorithm, Ward's clustering algorithm, Minimum
least square, Neural-Network or any combination thereof; c. eliminating said water influence from said AS by at least one of the following methods: Low pass filter, High pass filter and Water
absorption division; d. data processing said AS without said water influence by: i. noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof; ii. extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength,
peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear
prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet
coefficients or any combination thereof; m is an integer greater or equal to one; iii. dividing said AS into several segments according to said m features; iv. calculating the m
features of at least one of said segment; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; and, e. detecting and/or identifying said specific bacteria if said m
features and/or said m features are within said n dimensional volume; wherein said bacteria is an antibiotic-resistant bacteria.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, additionally comprising the step of selecting said x features and/or said y features via algorithms selected from a group consisting of the Chi-Squared (χ2) test, Wilcoxon test, and t-test, or any combination thereof.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, wherein said sample is an aerosol or solid or liquid sample selected from a group
consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, wherein said step of acquiring the n dimensional volume boundaries for the specific
bacteria, additionally comprising step of calculating the Gaussian distribution and/or Multivariate Gaussian distribution, and/or Rayleigh distribution, and/or Maxwell distribution, and/or Estimate
the distribution by the Parzen method or mixed model (like the Gaussian Mixed Model known as GMM) for at least one of the n features such that the distributions defines the n dimensional volume in
the n dimensional space.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, wherein said step (c) of data processing said AS without said water influence,
additionally comprising steps of: i. calculating at least one o-th derivative of said AS; said o is an integer greater than or equal to 1; ii. extracting m features from said entire o-th derivative spectrum; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; iii. dividing said o-th derivative into several segments according to said m
features; iv. calculating the m
features in at least one of said segments; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; and, v. detecting and/or identifying said specific bacteria if said m
and/or m
features and/or said m and/or said m
features are within said n dimensional volume.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, additionally comprising the step of selecting said specific bacteria from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumonia
resistant to b lactamase and macrolides, Streptococcus viridians group resistant to b lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to
penicillins and aminoglycosides (for example: Enterococcus Faecium, Enterococcus Faecalis), staphylococcus aureus resistant to B lactams, macrolides, lincosamides and aminoglicozides, Streptococcus pyogenes
resistant to macrolides, macrolide-resistant streptococci of groups B, C and G. Coagulase negative staphylococci resistant to b lactams, aminoglycosides, macrolides, lincosamides and glycopeptides,
multiresistant strains of Listeria and corynebacterium, Peptostreptococcus and clostridium C. Difficile, resistant to penicillins and macrolides, Haemophilus Influenza resistant to b lactamase,
Pseudomonas Aeruginosa, Stenotrophomonas Maltophilia, Klebsiella Pneumonia resistant to antibiotics (Klebsiella Pneumonia resistant to carbapenem), Klebsiella Pneumonia sensitive to antibiotics,
aminoglycosides and macrolides or any combination thereof.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, wherein said step of obtaining the AS additionally comprises steps of: a. providing at least one optical cell accommodating said uncultured sample; b. providing p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers, monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light to said optical cell; c. providing detecting means for receiving the spectroscopic data of said sample; d. emitting light from said light sources at different wavelengths to said optical cell; e. collecting said light exiting from said optical cell by said detecting means; thereby obtaining said AS.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 15, wherein said step of emitting light is performed at the wavelength range of UV,
visible, IR, mid-IR, far IR and terahertz.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, wherein the absorption spectra is obtained using an instrument selected from the group
consisting of a spectrometer, Fourier transform infrared spectrometer, a fluorometer and a Raman spectrometer.
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 1, wherein said sample is taken from the human body.
A system 1000 adapted to detect and/or identify specific bacteria within an uncultured sample; said system comprising: a. means 100 for obtaining an absorption spectrum (AS) of said uncultured
sample; b. statistical processing means 200 for acquiring the n dimensional volume boundaries for said specific bacteria; said means 200 are characterized by: i. means 201 for obtaining at least one
absorption spectrum (AS2) of known samples containing said specific bacteria; ii. means 202 for extracting x features from said entire AS2; said x features are selected from a group consisting of
Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least
two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different
peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer higher or equal to one; iii. means 203 for dividing said AS2 into several segments according to said x
features; iv. means 204 for calculating y features from at least one of each of said segment; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height,
peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction
coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet
coefficients or any combination thereof; y is an integer higher or equal to one; v. means 205 for assigning at least one of said x features and/ or at least one of said y features to said specific
bacteria by algorithms selected from a group consisting of Sequential Backward Selection, Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(S); S); Kullback-Leibler divergence; correct classification rate; and any combination thereof; vi. means 206 for defining an n dimensional space; n equals the sum of said x features and said y features; vii. means 207 for defining the n dimensional volume in the n dimensional space; viii. means 208 for determining said boundaries of said n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, Gaussian Mixed Model (GMM), K-mean clustering algorithm, Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof; ix. means 209 for assigning the n dimensional volume to said specific bacteria; c. means 300 for data processing said AS; said means 300 are
characterized by: i. means 301 for noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination
thereof; ii. means 302 for extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer higher or equal to one; iii. means 303 for dividing said AS into several segments according to said m features; iv. means 304 for calculating the m
features of at least one of said segment; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; and, d. means 400 for detecting and/or identifying said specific bacteria if said m
features and/or said m features are within said n dimensional volume; wherein said bacteria is an antibiotic-resistant bacteria.
The system 1000 according to claim 19, additionally comprising means for selecting said x features and/or said y features via algorithms selected from a group consisting of the Chi-Squared (χ2) test, Wilcoxon test, and t-test, or any combination thereof.
The system 1000 according to claim 19, wherein said sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal secretions,
middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
The system 1000 according to claim 19, wherein said statistical processing means 200 additionally comprising means 210 for calculating the Gaussian distribution or Multivariate Gaussian distribution,
or Rayleigh distribution, or Maxwell distribution, or Estimate the distribution by the Parzen method or by mixed model (like the Gaussian Mixed Model known as GMM) for at least one of the n features
such that the distributions defines the n dimensional volume in the n dimensional space.
The system 1000 according to claim 19, wherein said means 300 for data processing said AS are additionally characterized by: i. means 305 for calculating at least one o-th derivative of said AS; said o is an integer greater than or equal to 1; ii. means 306 for extracting m features from said entire o-th derivative spectrum; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; iii. means 307 for dividing said o-th derivative into several segments according to said m
features; iv. means 308 for calculating the m
features in at least one of said segments; said m
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (m,s,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m
is an integer greater than or equal to one; and, v. Means 309 for detecting and/or identifying said specific bacteria if said m
and/or m
features and/or said m and/or said m
features are within said n dimensional volume.
The system 1000 according to claim 19, wherein said specific bacteria is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter A.
baumannii, Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumonia resistant to b lactamase and macrolides, Streptococcus viridians group resistant to b lactamase and
aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus Faecium, Enterococcus Faecalis), staphylococcus
aureus, B lactams, macrolides, lincosamides and aminoglicozides. Streptococcus pyogenes resistant to macrolides, macrolide- resistant streptococci of groups B, C and G. Coagulase negative
staphylococci resistant to b lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and corynebacterium, Peptostreptococcus and clostridium, C.
Difficile, Haemophilus Influenza resistant to b lactamase, Pseudomonas Aeruginosa, Stenotrophomonas Maltophilia, Klebsiella Pneumonia resistant to antibiotics, Klebsiella Pneumonia Resistant to
carbapenem Klebsiella Pneumonia sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
The system 1000 according to claim 19, wherein said means 100 for obtaining an absorption spectrum (AS) of said sample additionally comprising: a. at least one optical cell for accommodating said uncultured sample; b. p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light at different wavelengths to said optical cell; and, c. detecting means for receiving the spectroscopic data of said sample exiting from said optical cell.
The system 1000 according to claim 25, wherein said p light sources are adapted to emit light at a wavelength range selected from a group consisting of UV, visible, IR, mid-IR, far-IR and terahertz.
A system 2000 adapted to detect and/or identify specific bacteria within an uncultured sample; said system 2000 comprising: a. means 100 for obtaining an absorption spectrum (AS) of said uncultured
sample; said AS containing water influence; b. statistical processing means 200 for acquiring the n dimensional volume boundaries for said specific bacteria; said means 200 are characterized by: i.
means 201 for obtaining at least one absorption spectrum (AS2) of known samples containing said specific bacteria; ii. means 202 for extracting x features from said entire AS2; said x features are
selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the
total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set
of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one; iii.
means 203 for dividing said AS2 into several segments according to said x features; iv. means 204 for calculating the y features of at least one of said segments; said y features are selected from a
group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of
areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters
(μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y is an integer greater than or equal to one; v. means 205 for assigning at least one of said x features and/
or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward Selection, Sequential Forward Selection, Sequential Forward Floating
Selection (SFFS), Max-Min algorithm, trace(Sw⁻¹Sb), Kullback-Leibler divergence, correct classification rate, and any combination thereof; vi. means 206 for defining n dimensional space; n equals the sum of said x features and said y features; vii.
means 207 for defining the n dimensional volume in said n dimensional space; viii. means 208 for determining said boundaries of said n dimensional volume by using technique selected from a group
consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, Gaussian Mixed Model (GMM), K-mean clustering algorithm, Ward's clustering algorithm, Minimum
least square, Neural-Network or any combination thereof; ix. means 209 for assigning said n dimensional volume to said specific bacteria; c. means 300 for eliminating said water influence from said
AS, selected from a group consisting of: low pass filter, high pass filter and water absorption division; d. means 400 for data processing said AS without said water influence; said means 400 are
characterized by: i. means 401 for noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination
thereof; ii. means 402 for extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is an integer greater than or equal to one; iii. means 403 for dividing said AS into several segments according to said m features; iv. means 404 for calculating m1 features in at least one of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and, e. means 500 for detecting and/or identifying said specific bacteria if said m1 features and/or said m features are within said n dimensional volume; wherein said bacteria is an antibiotics resistant bacteria.
The system 2000 according to claim 27, additionally comprising means for selecting said x features and/or said y features via algorithms selected from Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
The system 2000 according to claim 27, wherein said sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal secretions,
middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
The system 2000 according to claim 27, wherein said statistical processing means 200 additionally comprising means 210 for calculating the Gaussian distribution, Multivariate Gaussian distribution, Rayleigh distribution, or Maxwell distribution, or estimating the distribution by the Parzen method or by a mixed model (like the Gaussian Mixed Model, known as GMM) for at least one of the n features such that the distributions define the n dimensional volume in the n dimensional space.
The system 2000 according to claim 27, wherein said means 400 for data processing said AS without said water influence additionally comprising: i. means 405 for calculating at least one of the o-th derivatives of said AS; said o is an integer greater than or equal to 1; ii. means 406 for extracting m2 features from said entire o-th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one; iii. means 407 for dividing said o-th derivative into several segments according to said m2 features; iv. means 408 for calculating the m3 features from at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and, v. means 409 for detecting and/or identifying said specific bacteria if said m2 and/or m3 features and/or said m and/or said m1 features are within said n dimensional volume.
The system 2000 according to claim 27, wherein said specific bacteria is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumonia resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus Faecium, Enterococcus Faecalis), staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and corynebacterium, Peptostreptococcus and clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus Influenza resistant to β lactamase, Pseudomonas Aeruginosa, Stenotrophomonas Maltophilia, Klebsiella Pneumonia resistant to antibiotics (for example: Klebsiella Pneumonia resistant to carbapenem), Klebsiella Pneumonia sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
The system 2000 according to claim 27, wherein said means 100 for obtaining an absorption spectrum (AS) of said sample additionally comprising: a. at least one optical cell for accommodating said uncultured sample; b. p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light at different wavelengths to said optical cell; and, c. detecting means for receiving the spectroscopic data of said sample exiting from said optical cell.
The system 2000 according to claim 33, wherein said p light sources are adapted to emit light at a wavelength range selected from a group consisting of UV, visible, IR, mid-IR, far-IR and terahertz.
The system according to claim 19, wherein at least one is being held true: (a) the absorption spectra is obtained using an instrument selected from the group consisting of a spectrometer, Fourier
transform infrared spectrometer, a fluorometer and a Raman spectrometer; (b) said sample is taken from the human body; and any combination thereof.
The system according to claim 19, additionally comprising means adapted to recommend, after the specific bacteria has been identified, what kind of antibiotics and medicine to take.
The method according to claim 1, additionally comprising at least one step selected from (a) recommending, after the specific bacteria has been identified, what kind of antibiotics and medicine to take; (b) obtaining said sample from air moisture and/or contaminations in air condition systems; (c) detecting said bacteria by analyzing said AS in the region of about 3000-3300 cm⁻¹ and/or about 850-1000 cm⁻¹ and/or about 1300-1350 cm⁻¹, and/or about 2835-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹ and/or about 1500-1800 cm⁻¹ and/or about 2800-3050 cm⁻¹ and/or about 1180-1290 cm⁻¹; and any combination thereof.
The system according to claim 19, wherein at least one is being held true: (a) said sample is a sample obtained from air moisture and/or contaminations in air condition systems; (b) said identification is performed in the region of about 3000-3300 cm⁻¹ and/or about 850-1000 cm⁻¹ and/or about 1300-1350 cm⁻¹ and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹ and/or about 1500-1800 cm⁻¹ and/or about 2800-3050 cm⁻¹ and/or about 1180-1290 cm⁻¹; and any combination thereof.
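By way of a non-limiting illustration (not part of the claimed subject matter), the wavenumber regions recited above fall in the mid-infrared. The textbook conversion λ[µm] = 10⁴/ν[cm⁻¹] places, for example, the 3000-3300 cm⁻¹ region at roughly 3.0-3.3 µm:

```python
def wavenumber_to_um(nu_cm):
    """Convert a wavenumber in cm^-1 to a wavelength in micrometres,
    using lambda[um] = 10**4 / nu[cm^-1]."""
    return 1e4 / nu_cm

# The 3000-3300 cm^-1 band corresponds to about 3.0-3.3 um.
band_um = (wavenumber_to_um(3300.0), wavenumber_to_um(3000.0))
```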
The method for detecting and/or identifying specific bacteria within an uncultured sample according to claim 9, wherein at least one is being held true: (a) the absorption spectra is obtained using an instrument selected from the group consisting of a spectrometer, Fourier transform infrared spectrometer, a fluorometer and a Raman spectrometer; (b) said sample is taken from the human body; (c) said method additionally comprising the step of recommending, after the specific bacteria has been identified, what kind of antibiotics and medicine to take; (d) said sample is obtained from air moisture and/or contaminations in air condition systems; (e) said method additionally comprising the step of detecting said bacteria by analyzing said AS in the region of about 3000-3300 cm⁻¹ and/or about 850-1000 cm⁻¹ and/or about 1300-1350 cm⁻¹, and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹ and/or about 1500-1800 cm⁻¹ and/or about 2800-3050 cm⁻¹ and/or about 1180-1290 cm⁻¹; and any combination thereof.
The system according to claim 27, wherein at least one is being held true: (a) the absorption spectra is obtained using an instrument selected from the group consisting of a spectrometer, Fourier transform infrared spectrometer, a fluorometer and a Raman spectrometer; (b) said sample is taken from the human body; (c) said system additionally comprising means adapted to recommend, after the specific bacteria has been identified, what kind of antibiotics and medicine to take; (d) said sample is obtained from air moisture and/or contaminations in air condition systems; (e) said identification is performed in the region of about 3000-3300 cm⁻¹ and/or about 850-1000 cm⁻¹ and/or about 1300-1350 cm⁻¹, and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹ and/or about 1500-1800 cm⁻¹ and/or about 2800-3050 cm⁻¹ and/or about 1180-1290 cm⁻¹; and any combination thereof.
FIELD OF THE INVENTION [0001]
The present invention relates to the field of spectroscopic medical diagnostics of specific antibiotic resistant bacteria within a sample. More particularly, the present invention provides means and methods for differentiating between antibiotic resistant bacteria and the same bacteria when sensitive to antibiotics. The detection can be used for both medical and non-medical applications, such as detecting antibiotic resistant bacteria in water, beverages, food production lines, sensing for hazardous materials in crowded places etc.
BACKGROUND OF THE INVENTION [0002]
The identification of microorganisms is clearly of great importance in the medical fields, especially the detection of antibiotic resistant microorganisms. It is well known that health care facilities invest large efforts to prevent patients from being infected with secondary diseases, especially those related to antibiotic resistant bacteria. Furthermore, in recent years the need for efficient and relatively rapid identification techniques has become even more pressing owing to the remarkable expansion of environmental and industrial microbiology.
The common method to distinguish between antibiotic resistant bacteria and antibiotic sensitive bacteria is to use PCR, either directly on the sample or after culturing it. These methods take at least one hour to produce results and require a professional technician to perform.
The bacterial analysis will determine the desired and correct treatment and medication.
Usually the physician desires to know if the bacteria is present and then prescribe the correct treatment, antibiotics or isolation. Therefore, it would be beneficial for the doctor and the patient alike to get an immediate response for the sample.
An immediate response might be obtained by taking a sample (saliva, mucus, nose swabs, samples from wounds etc.) and optically characterizing its content. Optically characterizing the sample will likely be faster and easier to perform than PCR and culture analysis.
Some spectroscopic techniques, not specific to antibiotic resistant microorganisms, are already known in the art. For example, PCT No. WO 98/41842 to NELSON, Wilfred discloses a system for the detection of bacteria-antibody complexes. The sample to be tested for the presence of bacteria is placed in a medium which contains antibodies attached to a surface for binding to specific bacteria to form an
antigen--antibody complex. The medium is contacted with an incident beam of light energy. Some of the energy is emitted from the medium as a lower resonance enhanced Raman backscattered energy. The
detection of the presence or absence of the microorganism is based on the characteristic spectral peak of said microorganism. In other words PCT No. WO 98/41842 uses UV resonance Raman spectroscopy.
U.S. Pat. No. 6,599,715 to Laura A. Vanderberg relates to a process for detecting the presence of viable bacterial spores in a sample and to a spore detection system. The process includes placing a
sample in a germination medium for a period of time sufficient for commitment of any present viable bacterial spores to occur. Then the sample is mixed with a solution of a lanthanide capable of
forming a fluorescent complex with dipicolinic acid. Lastly, the sample is measured for the presence of dipicolinic acid.
U.S. Pat. No. 4,847,198 to Wilfred H. Nelson discloses a method for the identification of a bacterium. Firstly, taxonomic markers are excited in a bacterium with a beam of ultra violet energy. Then,
the resonance enhance Raman back scattered energy is collected substantially in the absence of fluorescence. Next, the resonance enhanced Raman back scattered energy is converted into spectra which
corresponds to the taxonomic markers in said bacterium. Finally, the spectra are displayed and thus the bacterium may be identified.
U.S. Pat. No. 6,379,920 to Mostafa A. El-Sayed discloses a method to analyze and diagnose specific bacteria in a biologic sample by using spectroscopic means. The method includes obtaining the
spectra of a biologic sample of a non-infected patient for use as a reference, subtracting the reference from the spectra of an infected sample, and comparing the fingerprint regions of the resulting
differential spectrum with reference spectra of bacteria. Using this diagnostic technique, U.S. Pat. No. 6,379,920 claims to identify specific bacteria without culturing.
Naumann et al. demonstrated bacteria detection and classification in dried samples using FTIR spectroscopy [Naumann D. et al., "Infrared spectroscopy in microbiology", Encyclopedia of Analytical Chemistry, R. A. Meyers (Ed.), pp. 102-131, John Wiley & Sons Ltd, Chichester, 2000]. Marshall et al. identified live microbes using FTIR Raman spectroscopy [Marshall et al., "Vibrational spectroscopy of extant and fossil microbes: Relevance for the astrobiological exploration of Mars", Vibrational Spectroscopy 41 (2006) 182-189]. Other methods involve fluorescence spectroscopy or a combination of the above.
There are some techniques for distinguishing bacteria and even antibiotics resistance bacteria. One of them is polymerase chain reaction (PCR) patented in U.S. Pat. No. 4,683,202 in 1987. However,
such techniques distinguish and/or identify bacteria by amplifying at least one specific nucleic acid sequence contained in a nucleic acid or a mixture of nucleic acids. They do not relate to
optically detecting the bacteria.
Another method is by detecting the proteome, i.e., the different proteins expressed by a genome. However, said methods, again, do not relate to optically detecting the bacteria.
Furthermore, there is a lot of patent literature which relates to different DNA-based methods for universal bacterial detection, or for specific detection of the common bacterial pathogens. An example of such teaching is US application no. US20050042606. Other patent literature relates to detecting viable bacteria in biological samples by exposing bacterial cultures obtained from the samples to transducing particles having a known host range. An example of such teaching is PCT application no. WO9004041.
Furthermore, there is patent literature which does not relate to the bacteria detection, but to the prediction of the antibiotic effectiveness of a composition. An example of such teaching can be
found in US application no. US20030013104.
None of the prior art literature discloses means and method that can quickly (less than one hour) and without the need for professional technician detect antibiotics resistance bacteria from a
sample. Furthermore, none of the prior art literature discloses means and method that can eliminate the water influence from the sample so as to better detect the antibiotics resistance bacteria.
Moreover, all of the above require a skilled operator and/or the use of reagents or a complicated sample preparation for the detection of bacteria.
Thus, there is a long felt need for means and methods for accurate antibiotic resistant bacteria identification from an uncultured sample without the use of reagents and/or complicated sample preparation.
SUMMARY OF THE INVENTION [0018]
It is one object of the present invention to provide a method for detecting and/or identifying specific bacteria within an uncultured sample. The method comprises steps selected inter alia from:
a. obtaining an absorption spectrum (AS) of said uncultured sample;
b. acquiring the n dimensional volume boundaries for said specific bacteria by
i. obtaining at least one absorption spectrum (AS2) of known samples containing said specific bacteria;
ii. extracting x features from said entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area,
at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance
value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. dividing said AS2 into several segments according to said x features;
iv. calculating y features of each of said segments of said AS2; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y
is an integer higher or equal to one;
v. assigning at least one of said x features and/or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward Selection,
Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(Sw⁻¹Sb), Kullback-Leibler divergence, correct classification rate, and any combination thereof;
vi. defining n dimensional space; n equals the sum of said x and said y features;
vii. defining the n dimensional volume in said n dimensional space;
viii. determining said boundaries of said n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, Gaussian Mixed Model (GMM), K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, K-mean clustering algorithm,
Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
c. data processing said AS;
i. noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area,
at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance
value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is an integer higher
or equal to one;
iii. dividing said AS into several segments according to said m features;
iv. calculating m1 features of each of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
d. detecting and/or identifying said specific bacteria if said m1 features and/or said m features are within said n dimensional volume;
wherein said bacteria is an antibiotics resistant bacteria.
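By way of a non-limiting illustration (not the claimed implementation), the processing of step (c) and the detection of step (d) can be sketched as follows. The feature set is reduced here to six of the recited scalar features, and the learned n dimensional volume is stood in for by a simple axis-aligned box with assumed `lower`/`upper` bounds; the Savitzky-Golay window and polynomial order are likewise assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import skew, kurtosis

def extract_features(spectrum):
    """Compute a few of the scalar features recited in the claims:
    mean, variance, skewness, kurtosis, peak height and peak position."""
    return np.array([
        spectrum.mean(),
        spectrum.var(),
        skew(spectrum),
        kurtosis(spectrum),
        spectrum.max(),              # peak's height
        float(np.argmax(spectrum)),  # peak's position (sample index)
    ])

def detect(spectrum, lower, upper, window=11, poly=3):
    """Smooth the raw AS with a Savitzky-Golay filter, extract the
    features, and report a detection if the feature vector falls inside
    the box bounded by `lower` and `upper` (a simplistic stand-in for
    the learned n dimensional volume)."""
    smoothed = savgol_filter(spectrum, window, poly)
    feats = extract_features(smoothed)
    return bool(np.all((feats >= lower) & (feats <= upper)))
```

In practice the volume boundaries would come from the statistical training stage (means 200) rather than being hand-set as here.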
It is another object of the present invention to provide the method as defined above, additionally comprising the step of selecting said x features and/or said y features via algorithms selected from Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
It is another object of the present invention to provide the method as defined above, wherein said sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze,
saliva, mucus, bile, urine, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
It is another object of the present invention to provide the method as defined above, wherein said step of acquiring the n dimensional volume boundaries for the specific bacteria additionally comprising the step of calculating the Gaussian distribution and/or Multivariate Gaussian distribution, and/or Rayleigh distribution, and/or Maxwell distribution, and/or estimating the distribution by the Parzen method or a mixed model (like the Gaussian Mixed Model, known as GMM) for at least one of the n features such that the distributions define the n dimensional volume in the n dimensional space.
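As a non-limiting sketch of the distribution-based volume definition above: a single multivariate Gaussian is fitted to training feature vectors, and its Mahalanobis ellipsoid is used as the volume boundary. Reducing the mixture to one Gaussian and the choice of threshold are assumptions for illustration, not taken from the disclosure:

```python
import numpy as np

def fit_gaussian(X):
    """Estimate a multivariate Gaussian (mean, covariance) from the
    training feature vectors (rows of X) of known bacteria samples."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return mu, cov

def inside_volume(x, mu, cov, threshold=3.0):
    """Treat the n dimensional volume as the Mahalanobis ellipsoid of
    radius `threshold` around the class mean: a simple way to turn the
    fitted distribution into explicit volume boundaries."""
    d = x - mu
    m2 = float(d @ np.linalg.solve(cov, d))  # squared Mahalanobis distance
    return m2 <= threshold ** 2
```

A Gaussian Mixed Model or Parzen estimate would replace `fit_gaussian` with a multi-component density while keeping the same inside/outside test on a density or distance threshold.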
It is another object of the present invention to provide the method as defined above, wherein said step (c) of data processing said AS additionally comprising steps of:
i. calculating at least one of the o-th derivatives of said AS; said o is an integer greater than or equal to 1;
ii. extracting m2 features from said entire o-th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. dividing said o-th derivative into several segments according to said m2 features;
iv. calculating the m3 features in at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. detecting and/or identifying said specific bacteria if said m2 and/or m3 features and/or said m and/or said m1 features are within said n dimensional volume.
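The derivative-based processing of steps (i)-(iv) above can be sketched as follows, as a non-limiting illustration. Equal-width segmentation stands in for the feature-driven split of step (iii), and the particular per-segment features (mean, variance, skewness, kurtosis) are an assumed subset of the recited list:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def derivative_features(spectrum, o=1, n_segments=4):
    """Take the o-th derivative of the AS (numerical gradient applied o
    times), split it into equal-width segments, and compute per-segment
    features: mean, variance, skewness and kurtosis."""
    d = np.asarray(spectrum, dtype=float)
    for _ in range(o):
        d = np.gradient(d)
    feats = []
    for seg in np.array_split(d, n_segments):
        feats.extend([seg.mean(), seg.var(), skew(seg), kurtosis(seg)])
    return np.array(feats)
```

The resulting vector would then be tested against the n dimensional volume in the same way as the non-derivative features.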
It is another object of the present invention to provide the method as defined above, additionally comprising the step of selecting said specific bacteria from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumonia resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus Faecium, Enterococcus Faecalis), staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and corynebacterium, Peptostreptococcus and clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus Influenza resistant to β lactamase, Pseudomonas Aeruginosa, Stenotrophomonas Maltophilia, Klebsiella Pneumonia resistant to antibiotics (for example: Klebsiella Pneumonia resistant to carbapenem), Klebsiella Pneumonia sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
It is another object of the present invention to provide the method as defined above, wherein said step of obtaining the AS additionally comprising steps of:
a. providing at least one optical cell for accommodating said uncultured sample;
b. providing p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light to said optical cell;
c. providing detecting means for receiving the spectroscopic data of said sample;
d. emitting light from said light sources at different wavelengths to said optical cell; and,
e. collecting said light exiting from said optical cell by said detecting means; thereby obtaining said AS.
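The acquisition steps above yield an absorption spectrum from the light collected at the detector. Assuming a standard Beer-Lambert treatment (the disclosure does not spell out the conversion), the absorbance at each wavelength follows from two detector readings, one with the sample in the optical cell and one reference reading without it:

```python
import numpy as np

def absorbance(I_sample, I_reference):
    """Per-wavelength Beer-Lambert absorbance A = -log10(I / I0), where
    I is the detector reading with the sample in the optical cell and
    I0 the reference reading without the sample."""
    I_sample = np.asarray(I_sample, dtype=float)
    I_reference = np.asarray(I_reference, dtype=float)
    return -np.log10(I_sample / I_reference)
```

For example, a sample transmitting 10% of the reference intensity has an absorbance of 1 at that wavelength.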
It is another object of the present invention to provide the method as defined above, wherein said step of emitting light is performed at the wavelength range of UV, visible, IR, mid-IR, far-IR and terahertz.
It is another object of the present invention to provide a method for detecting and/or identifying specific bacteria within an uncultured sample. The method comprises steps selected inter alia from:
a. obtaining an absorption spectrum (AS) of said uncultured sample; said AS containing water influence;
b. acquiring the n dimensional volume boundaries for said specific bacteria by:
i. obtaining at least one absorption spectrum (AS2) of known samples containing said specific bacteria;
ii. extracting x features from said AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. calculating at least one derivative of said AS2;
iv. dividing said AS2 into several segments according to said x features;
v. calculating the y features of each of said segments; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y is an integer greater than or equal to one;
vi. assigning at least one of said x features and/or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward Selection, Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(Sw⁻¹Sb); Kullback-Leibler divergence; correct classification rate; and any combination thereof;
vii. defining n dimensional space; n equals the sum of said x features and said y features;
viii. defining the n dimensional volume in said n dimensional space;
ix. determining said boundaries of said n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, Gaussian Mixed Model (GMM), K-means clustering algorithm, Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
c. eliminating said water influence from said AS by at least one of the following methods: Low pass filter, High pass filter and Water absorption division;
d. data processing said AS without said water influence by:
i. noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area,
at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance
value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is an integer greater than or equal to one;
iii. dividing said AS into several segments according to said m features;
iv. calculating the m1 features of at least one of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
e. detecting and/or identifying said specific bacteria if said m features and/or said m1 features are within said n dimensional volume;
wherein said bacteria is an antibiotic-resistant bacteria.
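The water-elimination and data-processing steps (c)-(d) above can be sketched in Python. This is a minimal illustration under stated assumptions, not the claimed implementation: the function names, the 5-point running-average window, and the use of a reference water spectrum for the "water absorption division" option are all illustrative choices.

```python
import numpy as np

def remove_water_influence(spectrum, water_reference):
    """Eliminate the water contribution by dividing the measured AS
    by a reference water absorption spectrum (the 'water absorption
    division' option named in the claim).  Assumed form."""
    return spectrum / np.clip(water_reference, 1e-12, None)

def running_average(signal, window=5):
    """Noise reduction with a simple running-average smoother (one of
    the claimed smoothing techniques)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def peak_features(wavenumbers, spectrum):
    """Extract a few of the claimed m features: the position and
    height of the strongest peak, the area under the spectrum
    (trapezoid rule), and global mean/variance of the signal."""
    i = int(np.argmax(spectrum))
    area = float(np.sum((spectrum[1:] + spectrum[:-1])
                        * np.diff(wavenumbers)) / 2.0)
    return {
        "peak_wavenumber": float(wavenumbers[i]),
        "peak_height": float(spectrum[i]),
        "total_area": area,
        "mean": float(spectrum.mean()),
        "variance": float(spectrum.var()),
    }
```

In use, the feature dictionary would be compared against the n dimensional volume learned in step (b).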
It is another object of the present invention to provide the method as defined above, additionally comprising the step of selecting said x features and/or said y features via algorithms selected from a group consisting of Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
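As one hedged sketch of the t-test option for ranking candidate x/y features, the following computes an absolute Welch t-statistic per feature column with NumPy; the function name and the ranking convention are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def t_test_feature_ranking(X_pos, X_neg):
    """Rank candidate features by the absolute Welch t-statistic
    between samples known to contain the target bacteria (X_pos)
    and samples known not to (X_neg).  Rows are samples, columns
    are candidate features; returns column indices, best first."""
    m1, m2 = X_pos.mean(axis=0), X_neg.mean(axis=0)
    v1, v2 = X_pos.var(axis=0, ddof=1), X_neg.var(axis=0, ddof=1)
    n1, n2 = len(X_pos), len(X_neg)
    t = np.abs(m1 - m2) / np.sqrt(v1 / n1 + v2 / n2 + 1e-12)
    return np.argsort(t)[::-1]  # most discriminative feature first
```

Features with the highest statistic would then be kept as axes of the n dimensional space.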
It is another object of the present invention to provide the method as defined above, wherein said sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze,
saliva, mucus, bile, urine, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
It is another object of the present invention to provide the method as defined above, wherein said step of acquiring the n dimensional volume boundaries for the specific bacteria additionally comprising the step of calculating the Gaussian distribution and/or Multivariate Gaussian distribution, and/or Rayleigh distribution, and/or Maxwell distribution, and/or estimating the distribution by the Parzen method or a mixed model (such as the Gaussian Mixed Model, GMM) for at least one of the n features such that the distributions define the n dimensional volume in the n dimensional space.
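One way to realize the multivariate-Gaussian option above is to treat the n dimensional volume as the ellipsoid of bounded Mahalanobis distance around the fitted mean. The sketch below is an assumption-laden illustration: the chi-square cutoff (9.21 ≈ 99% mass for n = 2) and the small regularizer on the covariance are not specified by the claim.

```python
import numpy as np

def fit_gaussian_volume(F, chi2_threshold=9.21):
    """Fit a multivariate Gaussian to the n-dimensional feature
    vectors F (rows = known samples) and return a predicate that
    tests whether a new feature vector lies inside the resulting
    'n dimensional volume' (squared Mahalanobis distance below a
    chi-square threshold)."""
    mu = F.mean(axis=0)
    cov = np.cov(F, rowvar=False)
    inv = np.linalg.inv(cov + 1e-9 * np.eye(F.shape[1]))  # regularized
    def inside(f):
        d = f - mu
        return float(d @ inv @ d) <= chi2_threshold
    return inside
```

Detection then reduces to evaluating the predicate on the features of the unknown sample.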
It is another object of the present invention to provide the method as defined above, wherein said step (d) of data processing said AS without said water influence additionally comprising the steps of:
i. calculating at least one o'th derivative of said AS; said o is an integer greater than or equal to 1;
ii. extracting m2 features from said entire o'th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. dividing said o'th derivative into several segments according to said m2 features;
iv. calculating the m3 features in at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. detecting and/or identifying said specific bacteria if said m and/or m1 and/or m2 and/or m3 features are within said n dimensional volume.
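The derivative-based variant above can be sketched as follows. The repeated `np.gradient` calls for the o'th derivative and the four-way segment split are illustrative choices under assumption, not numerics prescribed by the claim.

```python
import numpy as np

def nth_derivative(spectrum, wavenumbers, o=1):
    """Compute the o'th derivative of the absorption spectrum by
    repeated numerical differentiation with respect to the
    wavenumber axis."""
    d = np.asarray(spectrum, dtype=float)
    for _ in range(o):
        d = np.gradient(d, wavenumbers)
    return d

def segment_features(derivative, n_segments=4):
    """Split the derivative into segments and compute per-segment
    mean and variance (two of the claimed segment features)."""
    return [(float(seg.mean()), float(seg.var()))
            for seg in np.array_split(derivative, n_segments)]
```

The resulting per-segment statistics would be tested against the same n dimensional volume as the undifferentiated features.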
It is another object of the present invention to provide the method as defined above, additionally comprising the step of selecting said specific bacteria from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumoniae resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus faecium, Enterococcus faecalis), Staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and Corynebacterium, Peptostreptococcus and Clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus influenzae resistant to β lactamase, Pseudomonas aeruginosa, Stenotrophomonas maltophilia, Klebsiella pneumoniae resistant to antibiotics (for example: Klebsiella pneumoniae resistant to carbapenem), Klebsiella pneumoniae sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
It is another object of the present invention to provide the method as defined above, wherein said step of obtaining the AS additionally comprising steps of:
a. providing at least one optical cell accommodating said uncultured sample;
b. providing p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light
to said optical cell;
c. providing detecting means for receiving the spectroscopic data of said sample;
d. emitting light from said light source at different wavelengths to said optical cell; and,
e. collecting said light exiting from said optical cell by said detecting means; thereby obtaining said AS.
It is another object of the present invention to provide the method as defined above, wherein said step of emitting light is performed at the wavelength range of UV, visible, IR, mid-IR, far-IR and terahertz.
It is another object of the present invention to provide the method as defined above, wherein the absorption spectrum is obtained using an instrument selected from the group consisting of a spectrometer, a Fourier transform infrared spectrometer, a fluorometer and a Raman spectrometer.
It is another object of the present invention to provide the method as defined above, wherein said sample is taken from the human body.
It is another object of the present invention to provide a system 1000 adapted to detect and/or identify specific bacteria within an uncultured sample. The system comprises:
a. means 100 for obtaining an absorption spectrum (AS) of said uncultured sample;
b. statistical processing means 200 for acquiring the n dimensional volume boundaries for said specific bacteria; said means 200 are characterized by:
i. means 201 for obtaining at least one absorption spectrum (AS2) of known samples containing said specific bacteria;
ii. means 202 for extracting x features from said entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. means 203 for dividing said AS2 into several segments according to said x features;
iv. means 204 for calculating y features from at least one of said segments; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y is an integer greater than or equal to one;
v. means 205 for assigning at least one of said x features and/or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward
Selection, Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(Sw⁻¹Sb); Kullback-Leibler divergence; correct classification rate; and any combination thereof;
vi. means 206 for defining n dimensional space; n equals the sum of said x features and said y features;
vii. means 207 for defining the n dimensional volume in the n dimensional space;
viii. means 208 for determining said boundaries of said n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, Gaussian Mixed Model (GMM), K-means clustering algorithm, Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
ix. means 209 for assigning the n dimensional volume to said specific bacteria;
c. means 300 for data processing said AS; said means 300 are characterized by:
i. means 301 for noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. means 302 for extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section,
peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the
signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is an integer greater than or equal to one;
iii. means 303 for dividing said AS into several segments according to said m features;
iv. means 304 for calculating the m1 features of at least one of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
d. means 400 for detecting and/or identifying said specific bacteria if said m features and/or said m1 features are within said n dimensional volume;
wherein said bacteria is an antibiotic-resistant bacteria.
It is another object of the present invention to provide the system as defined above, additionally comprising means for selecting said x features and/or said y features via algorithms selected from a group consisting of Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
It is another object of the present invention to provide the system as defined above, wherein said sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze,
saliva, mucus, bile, urine, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
It is another object of the present invention to provide the system as defined above, wherein said statistical processing means 200 additionally comprising means 210 for calculating the Gaussian distribution, or Multivariate Gaussian distribution, or Rayleigh distribution, or Maxwell distribution, or estimating the distribution by the Parzen method or by a mixed model (such as the Gaussian Mixed Model, GMM) for at least one of the n features such that the distributions define the n dimensional volume in the n dimensional space.
It is another object of the present invention to provide the system as defined above, wherein said means 300 for data processing said AS additionally characterized by:
i. means 305 for calculating at least one o'th derivative of said AS; said o is an integer greater than or equal to 1;
ii. means 306 for extracting m2 features from said entire o'th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. means 307 for dividing said o'th derivative into several segments according to said m2 features;
iv. means 308 for calculating the m3 features in at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. means 309 for detecting and/or identifying said specific bacteria if said m and/or m1 and/or m2 and/or m3 features are within said n dimensional volume.
It is another object of the present invention to provide the system as defined above, wherein said specific bacteria is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumoniae resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus faecium, Enterococcus faecalis), Staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and Corynebacterium, Peptostreptococcus and Clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus influenzae resistant to β lactamase, Pseudomonas aeruginosa, Stenotrophomonas maltophilia, Klebsiella pneumoniae resistant to antibiotics (for example: Klebsiella pneumoniae resistant to carbapenem), Klebsiella pneumoniae sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
It is another object of the present invention to provide the system as defined above, wherein said means 100 for obtaining an absorption spectrum (AS) of said sample additionally comprising:
a. at least one optical cell for accommodating said uncultured sample;
b. p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light at different wavelengths to said optical cell; and,
c. detecting means for receiving the spectroscopic data of said sample exiting from said optical cell.
It is another object of the present invention to provide the system as defined above, wherein said p light sources are adapted to emit light at a wavelength range selected from a group consisting of UV,
visible, IR, mid-IR, far-IR and terahertz.
It is another object of the present invention to provide a system 2000 adapted to detect and/or identify specific bacteria within an uncultured sample; said system 2000 comprising:
a. means 100 for obtaining an absorption spectrum (AS) of said uncultured sample; said AS containing water influence;
b. statistical processing means 200 for acquiring the n dimensional volume boundaries for said specific bacteria; said means 200 are characterized by:
i. means 201 for obtaining at least one absorption spectrum (AS2) of known samples containing said specific bacteria;
ii. means 202 for extracting x features from said entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. means 203 for dividing said AS2 into several segments according to said x features;
iv. means 204 for calculating the y features of at least one of said segments; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width,
peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC),
mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any
combination thereof; y is an integer greater than or equal to one;
v. means 205 for assigning at least one of said x features and/or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward
Selection, Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(Sw⁻¹Sb); Kullback-Leibler divergence; correct classification rate; and any combination thereof;
vi. means 206 for defining n dimensional space; n equals the sum of said x features and said y features;
vii. means 207 for defining the n dimensional volume in said n dimensional space;
viii. means 208 for determining said boundaries of said n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, Gaussian Mixed Model (GMM), K-means clustering algorithm, Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
ix. means 209 for assigning said n dimensional volume to said specific bacteria;
c. means 300 for eliminating said water influence from said AS selected from a group consisting of Low pass filter, High pass filter and Water absorption division;
d. means 400 for data processing said AS without said water influence; said means 400 are characterized by:
i. means 401 for noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. means 402 for extracting m features from said entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section,
peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the
signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is
an integer greater than or equal to one;
iii. means 403 for dividing said AS into several segments according to said m features;
iv. means 404 for calculating m1 features in at least one of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
e. means 500 for detecting and/or identifying said specific bacteria if said m features and/or said m1 features are within said n dimensional volume;
wherein said bacteria is an antibiotic-resistant bacteria.
It is another object of the present invention to provide the system as defined above, additionally comprising means for selecting said x features and/or said y features via algorithms selected from a group consisting of Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
It is another object of the present invention to provide the system as defined above, wherein said sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze,
saliva, mucus, bile, urine, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
It is another object of the present invention to provide the system as defined above, wherein said statistical processing means 200 additionally comprising means 210 for calculating the Gaussian distribution, or Multivariate Gaussian distribution, or Rayleigh distribution, or Maxwell distribution, or estimating the distribution by the Parzen method or by a mixed model (such as the Gaussian Mixed Model, GMM) for at least one of the n features such that the distributions define the n dimensional volume in the n dimensional space.
It is another object of the present invention to provide the system as defined above, wherein said means 400 for data processing said AS without said water influence additionally comprising:
i. means 405 for calculating at least one o'th derivative of said AS; said o is an integer greater than or equal to 1;
ii. means 406 for extracting m2 features from said entire o'th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. means 407 for dividing said o'th derivative into several segments according to said m2 features;
iv. means 408 for calculating the m3 features from at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. means 409 for detecting and/or identifying said specific bacteria if said m and/or m1 and/or m2 and/or m3 features are within said n dimensional volume.
It is another object of the present invention to provide the system as defined above, wherein said specific bacteria is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii), Stenotrophomonas maltophilia, Gram positive pathogens such as Streptococcus pneumoniae resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus faecium, Enterococcus faecalis), Staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and Corynebacterium, Peptostreptococcus and Clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus influenzae resistant to β lactamase, Pseudomonas aeruginosa, Stenotrophomonas maltophilia, Klebsiella pneumoniae resistant to antibiotics (for example: Klebsiella pneumoniae resistant to carbapenem), Klebsiella pneumoniae sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
It is another object of the present invention to provide the system as defined above, wherein said means 100 for obtaining an absorption spectrum (AS) of said sample additionally comprising:
a. at least one optical cell for accommodating said uncultured sample;
b. p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light at different wavelengths to said optical cell; and,
c. detecting means for receiving the spectroscopic data of said sample exiting from said optical cell.
It is another object of the present invention to provide the system as defined above, wherein said p light sources are adapted to emit light at a wavelength range selected from a group consisting of UV,
visible, IR, mid-IR, far-IR and terahertz.
It is another object of the present invention to provide the system as defined above, wherein the absorption spectrum is obtained using an instrument selected from the group consisting of a spectrometer, a Fourier transform infrared spectrometer, a fluorometer and a Raman spectrometer.
It is another object of the present invention to provide the system as defined above, wherein said sample is taken from the human body.
It is another object of the present invention to provide the system as defined above, additionally comprising means adapted to recommend, after the specific bacteria has been identified, which antibiotics and medicine to administer.
It is another object of the present invention to provide the methods as defined above, additionally comprising the step of recommending, after the specific bacteria has been identified, which antibiotics and medicine to administer.
It is another object of the present invention to provide the system as defined above, wherein said sample is a sample obtained from air moisture and/or contaminations in air conditioning systems.
It is another object of the present invention to provide the methods as defined above, wherein said sample is a sample obtained from air moisture and/or contaminations in air conditioning systems.
It is still an object of the present invention to provide the methods as defined above, additionally comprising the step of detecting said bacteria by analyzing said AS in the region of about 3000-3300 cm⁻¹, and/or about 850-1000 cm⁻¹, and/or about 1300-1350 cm⁻¹, and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹, and/or about 1500-1800 cm⁻¹, and/or about 2800-3050 cm⁻¹, and/or about 1180-1290 cm⁻¹.
It is lastly an object of the present invention to provide the system as defined above, wherein said identification is performed in the region of about 3000-3300 cm⁻¹, and/or about 850-1000 cm⁻¹, and/or about 1300-1350 cm⁻¹, and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹, and/or about 1500-1800 cm⁻¹, and/or about 2800-3050 cm⁻¹, and/or about 1180-1290 cm⁻¹.
BRIEF DESCRIPTION OF THE FIGURES [0169]
In order to understand the invention and to see how it may be implemented in practice, a plurality of embodiments will now be described, by way of non-limiting example only, with reference to the
accompanying drawings, in which
FIGS. 1-2 illustrate systems 1000 and 2000, respectively, for detecting and/or identifying bacteria within an aerosol sample according to preferred embodiments of the present invention.
FIGS. 3-4 illustrate an absorption spectrum prior to the water influence elimination (FIG. 3) and after the water influence elimination (FIG. 4) whilst using the first method.
FIGS. 5-7 illustrate the second method for eliminating the water influence.
FIGS. 8-9 illustrate the third method for eliminating the water influence.
FIGS. 10a, 10b and 11 illustrate the absorption spectra of Methicillin-Sensitive Staphylococcus aureus (MSSA) and Methicillin-Resistant Staphylococcus aureus (MRSA), respectively.
FIGS. 12a and 12b illustrate the possibility of distinguishing between the MRSA bacteria and MSSA bacteria.
FIGS. 13-15 illustrate the manner in which FIG. 12 was achieved.
FIG. 16 illustrates the differentiation between Enterococcus faecium sensitive to Vancomycin and Enterococcus faecium resistant to Vancomycin.
FIG. 17 illustrates the differentiation between Enterococcus faecalis sensitive to Vancomycin and Enterococcus faecalis resistant to Vancomycin.
FIG. 18 illustrates the differentiation between Acinetobacter sensitive to antibiotics and Acinetobacter resistant to antibiotics.
FIG. 19 illustrates the differentiation between KP Sen (i.e., sensitive to antibiotics), KP Res (resistant to antibiotics) and KPC (resistant to carbapenem; more resistant than the previous type).
FIG. 20 illustrates the differentiation between samples of blood with MRSA and MSSA.
FIG. 21 illustrates the differentiation between nose swab samples spiked with MRSA and MSSA.
FIG. 22 illustrates the differentiation between axillary swabs spiked with MRSA and MSSA.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0184]
The following description is provided, alongside all chapters of the present invention, so as to enable any person skilled in the art to make use of said invention and sets forth the best modes
contemplated by the inventor of carrying out this invention. Various modifications, however, will remain apparent to those skilled in the art, since the generic principles of the present invention
have been defined specifically to provide means and methods for detecting bacteria within a sample by using Spectroscopic measurements.
Spectroscopic measurements, whether absorption, fluorescence, Raman or scattering, are the basis for all optical sensing devices. In order to identify a hazardous material (for example a bacterium) in
a sample that might contain the material, the sample is placed inside a spectrometer and the absorption spectrum of the sample is then analyzed to verify whether the spectral signature of the
hazardous material is recognized.
The present invention provides means and methods for detection or identification of bacteria by analyzing the absorption spectra of a sample which might contain bacteria.
The term "sample" refers herein to a liquid or an aerosol or a solid sample. The present invention provides accurate detection means that enable the detection of bacteria in the sample. The detection
means can be used for medical or non-medical applications. Furthermore, the detection means can be used, for example, in detecting bacteria in bodily samples, water, beverages, food production,
sensing for hazardous materials in crowded places etc.
The sample may be obtained from coughing, sneezing, saliva, bile, mucus, urine, nose swabs, throat swabs, blood, blood serum, spinal fluid, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs or serum.
Furthermore, the samples may be obtained from air moisture (hazardous materials such as soot and metals), contaminations in air conditioning systems, and from water, fluids and solids that are sampled.
The present invention provides means and methods for detecting antibiotic resistant bacteria.
It should be emphasized that the sample can be selected from a group consisting of an aerosol sample, solid sample or a liquid sample.
The term "High-pass filter (HPF)" refers hereinafter to a filter that passes high frequencies well, but attenuates (reduces the amplitude of) frequencies lower than a cutoff frequency.
The term "Low-pass filter (LPF)" refers hereinafter to a filter that passes low-frequency signals but attenuates (reduces the amplitude of) signals with frequencies higher than a cutoff frequency.
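By way of non-limiting illustration, the low-pass and high-pass filters defined above can be sketched in Python; the first-order recursive filter and the smoothing factor alpha are illustrative choices, not part of the disclosure:

```python
def low_pass(signal, alpha=0.5):
    """First-order IIR low-pass filter: passes slow (low-frequency)
    variations and attenuates components above the cutoff implied by
    the smoothing factor alpha (0 < alpha <= 1)."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out


def high_pass(signal, alpha=0.5):
    """Complementary high-pass filter: keeps the fast variations by
    subtracting the low-pass (baseline) component from the signal."""
    return [x - l for x, l in zip(signal, low_pass(signal, alpha))]
```

A constant signal passes the low-pass filter unchanged and is fully attenuated by the high-pass filter, as the definitions require.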
The term "Chi-Squared, χ2, test" refers hereinafter to any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis
is true, or any in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as
desired by making the sample size large enough. The term "Pearson's correlation coefficient" refers hereinafter to the correlation between two variables that reflects the degree to which the
variables are related. Pearson's correlation reflects the degree of linear relationship between two variables. It ranges from +1 to −1. A correlation of −1 means that there is a perfect negative linear relationship between the variables. A correlation of 0 means there is no linear relationship between the two variables. A correlation of +1 means there is a perfect positive linear relationship between the two variables.
A commonly used formula for computing Pearson's correlation coefficient r is the following one:
r = (ΣXY − (ΣX·ΣY)/N) / √[(ΣX² − (ΣX)²/N)·(ΣY² − (ΣY)²/N)]
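The computational formula for Pearson's r can be implemented directly. A minimal Python sketch (illustrative only; any statistics library would serve equally well):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient r via the computational formula
    r = (SXY - SX*SY/N) / sqrt((SXX - SX^2/N) * (SYY - SY^2/N))."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    num = sxy - sx * sy / n
    den = sqrt((sxx - sx * sx / n) * (syy - sy * sy / n))
    return num / den
```

For perfectly linearly related inputs the function returns +1 (or −1 for a perfect negative relationship), matching the definition above.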
The term "about" refers hereinafter to a range of 25% below or above the referred value.
The term "segments" refers hereinafter to wavelength ranges within the absorption spectrum.
The term "n dimensional volume" refers hereinafter to a volume in an n dimensional space that is especially adapted to identify the bacteria under consideration. The n dimensional volume is
constructed by extracting features and correlations from the absorption spectrum or its derivatives.
The term "n dimensional space" refers hereinafter to a space where each coordinate is a feature extracted from the bacteria spectral signature or calculated out of the spectrum and its derivatives or
from a segment of the spectrum and/or its derivatives.
The term "n dimensional volume boundaries" refers hereinafter to a range that includes about 95% of the bacteria under consideration possible features and correlation values.
The term "trace(S_w⁻¹S_b)" refers hereinafter to the ratio between the interclass and intraclass covariance matrices. It refers to a method used to measure the separability of two classes, and relates to the ability to achieve a high correct classification rate in a designed classifier. In the following disclosure S_b is the covariance matrix reflecting the distance between the two classes, and S_w is the covariance matrix reflecting the distance within a class.
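For a single feature, the trace criterion reduces to the scalar ratio of the between-class scatter to the within-class scatter (Fisher's criterion). A non-limiting Python sketch of this one-dimensional case:

```python
def separability(class_a, class_b):
    """One-dimensional trace(S_w^-1 * S_b): the ratio of between-class
    scatter (spread of the class means about the global mean) to
    within-class scatter (spread of samples about their own class
    mean).  Larger values indicate more easily separable classes."""
    def mean(v):
        return sum(v) / len(v)

    def scatter(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v)

    ma, mb = mean(class_a), mean(class_b)
    mg = mean(class_a + class_b)  # global mean over both classes
    s_b = len(class_a) * (ma - mg) ** 2 + len(class_b) * (mb - mg) ** 2
    s_w = scatter(class_a) + scatter(class_b)
    return s_b / s_w
```

Widely separated classes score far higher than overlapping ones, which is exactly the property used to select discriminating features.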
The term "Correlation" refers hereinafter to the correlation between the aerosol bacteria spectrum and a reference bacteria spectrum which is already known; the correlation between the bacteria spectrum without the water influence and a reference bacteria spectrum which is already known; the correlation between the o-th derivative of the aerosol bacteria spectrum and a reference bacteria spectrum which is already known; or the correlation between the o-th derivative of the bacteria spectrum without the water influence and a reference bacteria spectrum which is already known; o is an integer greater than or equal to 1. The above correlations are calculated on the whole spectrum and/or segments of the spectrum and/or their derivatives.
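The o-th derivative used in these correlations can be approximated by repeated finite differencing of the sampled spectrum. A minimal, non-limiting Python sketch:

```python
def nth_derivative(spectrum, o=1):
    """Approximate the o-th derivative of a uniformly sampled spectrum
    by applying the first-order finite difference o times."""
    for _ in range(o):
        spectrum = [b - a for a, b in zip(spectrum, spectrum[1:])]
    return spectrum
```

The resulting derivative (of the whole spectrum or of a segment) is then correlated against the corresponding derivative of a known reference spectrum.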
Methods and means for bacteria detection adapted to utilize the unique spectroscopic signature of microbes/bacteria, and thus enable the detection of the antibiotic resistant microbes/bacteria within a sample, are provided by the present invention.
Reference is now made to FIG. 1, illustrating a system 1000 adapted to detect and/or identify specific bacteria within a sample according to one preferred embodiment of the present invention. System
1000 comprises:
a. means 100 for obtaining an absorption spectrum (AS) of the sample;
b. statistical processing means 200 for acquiring the n dimensional volume boundaries of at least specific bacteria, having:
i. means 201 for obtaining at least one absorption spectrum (AS2) of known samples containing the specific bacteria;
ii. means 202 for extracting x features from the entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer higher or equal to one;
iii. means 203 for dividing the AS2 into several segments according to at least one of the x features;
iv. means 204 for extracting y features from at least one of said segments; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's
cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean
value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination
thereof; y is an integer higher or equal to one;
v. means 205 for assigning at least one of said x features and/or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward Selection, Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, trace(S_w⁻¹S_b), Kullback-Leibler divergence, correct classification rate, and any combination thereof;
vi. means 206 for defining n dimensional space; n equals the sum of the x and y;
vii. means 207 for defining the n dimensional volume in the n dimensional space;
viii. means 208 for determining the boundaries of the n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, Gaussian Mixed Model (GMM), K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, K-mean clustering algorithm, Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
ix. means 209 for assigning the n dimensional volume to the specific bacteria;
c. means 300 for data processing the AS, having:
i. means 301 for noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. means 302 for extracting m features from the entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section,
peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the
signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is
an integer higher or equal to one;
iii. means 303 for dividing the AS into several segments according to the m features;
iv. means 304 for extracting m1 features from at least one of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
d. means 400 for detecting and/or identifying the specific bacteria if the m1 and/or m features are within the n dimensional volume; wherein said bacteria is an antibiotic resistant bacteria.
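The processing chain of means 301-400 (smoothing, feature extraction, and the volume-membership test) can be sketched as follows. The running-average smoother, the three example features and the per-axis boundary check are illustrative simplifications, not the invention's trained features or boundaries:

```python
def running_average(spectrum, window=3):
    """Noise reduction in the style of means 301: each point is
    replaced by the mean of its surrounding window."""
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out


def extract_features(spectrum):
    """A hypothetical m-feature vector: peak height, peak position
    (index) and total area under the spectrum."""
    peak = max(spectrum)
    return (peak, spectrum.index(peak), sum(spectrum))


def within_volume(features, boundaries):
    """Detection in the style of means 400: flag the bacterium only if
    every feature lies inside that axis's boundary of the volume."""
    return all(lo <= f <= hi for f, (lo, hi) in zip(features, boundaries))
```
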
According to another embodiment of the present invention, the method as defined above additionally comprising the step of selecting said x features and/or said y features via algorithms selected from the Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
According to another embodiment of the present invention, the sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal
secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
According to another embodiment of the present invention, the statistical processing means 200 additionally comprising means 210 (not illustrated in the figures) for calculating the Gaussian distribution, Multivariate Gaussian distribution, Rayleigh distribution or Maxwell distribution, or for estimating the distribution by the Parzen method or by a mixed model (like the Gaussian Mixed Model, known as GMM), for at least one of the n features, such that the distributions define the n dimensional volume in the n dimensional space.
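By way of non-limiting illustration, a per-feature Gaussian fit can realize such a volume: the interval mean ± 2σ covers roughly 95% of a normal population, matching the earlier definition of the n dimensional volume boundaries. A minimal Python sketch under that assumption:

```python
from math import sqrt

def gaussian_boundaries(samples_per_feature):
    """Fit a Gaussian (mean, std) to the training samples of each
    feature and return mean +/- 2*sigma as that axis's boundary,
    covering about 95% of a normally distributed population."""
    bounds = []
    for samples in samples_per_feature:
        mu = sum(samples) / len(samples)
        sigma = sqrt(sum((s - mu) ** 2 for s in samples) / len(samples))
        bounds.append((mu - 2 * sigma, mu + 2 * sigma))
    return bounds
```
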
According to another embodiment of the present invention, means 300 (in system 1000) for data processing the AS is additionally characterized by:
i. means 305 (not illustrated in the figures) for calculating at least one o-th derivative of the AS; o is an integer greater than or equal to 1;
ii. means 306 (not illustrated in the figures) for extracting m2 features from the entire o-th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. means 307 (not illustrated in the figures) for dividing the o-th derivative into several segments according to the m2 features;
iv. means 308 (not illustrated in the figures) for extracting m3 features from at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. means 309 (not illustrated in the figures) for detecting and/or identifying the specific bacteria if the m2 and/or m3 and/or the m and/or the m1 features are within the n dimensional volume.
According to yet another embodiment of the present invention, the specific bacteria to be identified by system 1000 is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii) and Stenotrophomonas maltophilia; Gram positive pathogens such as Streptococcus pneumoniae resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus faecium, Enterococcus faecalis), Staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and Corynebacterium, Peptostreptococcus and Clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus influenzae resistant to β lactamase, Pseudomonas aeruginosa, Stenotrophomonas maltophilia, Klebsiella pneumoniae resistant to antibiotics (for example: Klebsiella pneumoniae resistant to carbapenem), Klebsiella pneumoniae sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
According to another embodiment of the present invention, the means 100 for obtaining an absorption spectrum (AS) of the sample (in system 1000), additionally comprising:
a. at least one optical cell for accommodating the sample;
b. p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; the p light sources are adapted to emit light at different wavelengths to the optical cell; and,
c. detecting means for receiving the spectroscopic data of the sample exiting from the optical cell.
According to yet another embodiment of the present invention, the p light sources (in system 1000) are adapted to emit light at a wavelength range selected from a group consisting of UV, visible, IR, mid-IR, far-IR and terahertz.
Reference is now made to FIG. 2, illustrating a system 2000 adapted to detect and/or identify specific bacteria within a sample, according to another preferred embodiment of the present invention.
System 2000 comprises:
a. means 100 for obtaining an absorption spectrum (AS) of the sample; the AS containing water influence;
b. statistical processing means 200 for acquiring the n dimensional volume boundaries for at least one specific bacteria, having:
i. means 201 for obtaining at least one absorption spectrum (AS2) of known samples containing the specific bacteria;
ii. means 202 for extracting x features from the entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. means 203 for dividing the AS2 into several segments according to at least one of the x features;
iv. means 204 for extracting y features from at least one of said segments; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width,
peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC),
mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any
combination thereof; y is an integer higher or equal to one;
v. means 205 for assigning at least one of said x features and/or at least one of said y features to said specific bacteria;
vi. means 206 for defining the n dimensional space; n equals the sum of the x and y;
vii. means 207 for defining the n dimensional volume in said n dimensional space;
viii. means 208 for determining the boundaries of the n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and Fisher's linear discriminant, C4.5 algorithm tree, Gaussian Mixed Model (GMM), K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, K-mean clustering algorithm, Ward's clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
ix. means 209 for assigning the n dimensional volume to the specific bacteria;
c. means 300 for eliminating the water influence from the AS, selected from a group consisting of: Low pass filter, High pass filter and Water absorption division;
d. means 400 for data processing the AS without the water influence, characterized by:
i. means 401 for noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. means 402 for extracting m features from the entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section,
peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the
signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is
an integer greater or equal to one;
iii. means 403 for dividing the AS into several segments according to the m features;
iv. means 404 for extracting m1 features from at least one of said segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
e. means 500 for detecting and/or identifying the specific bacteria if the m1 and/or m features are within the n dimensional volume; wherein said bacteria is an antibiotic resistant bacteria.
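Of the three water-elimination options named for means 300, the "Water absorption division" can be sketched as follows; the point-by-point division against a measured pure-water reference spectrum and the small epsilon guard against division by zero are illustrative assumptions:

```python
def water_absorption_division(sample_as, water_as, eps=1e-12):
    """Suppress the water influence by dividing the measured absorption
    spectrum, point by point, by a reference pure-water absorption
    spectrum taken on the same wavelength grid, so the dominant water
    bands cancel while the bacterial bands remain."""
    return [s / max(w, eps) for s, w in zip(sample_as, water_as)]
```
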
According to another embodiment of the present invention, the sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal
secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
According to another embodiment of the present invention, the selection of said x features and/or said y features is performed via algorithms selected from the Chi-Squared (χ2) test, Wilcoxon test, and t-test or any combination thereof.
According to another embodiment of the present invention, the statistical processing means 200 (in system 2000) additionally comprising means 210 (not illustrated in the figures) for calculating the Gaussian distribution, Multivariate Gaussian distribution, Rayleigh distribution or Maxwell distribution, or for estimating the distribution by the Parzen method or by a mixed model (like the Gaussian Mixed Model, known as GMM), for at least one of the n features, such that the distributions define the n dimensional volume in the n dimensional space.
According to another embodiment of the present invention, means 400 (in system 2000) for data processing the AS without the water influence additionally comprising:
i. means 405 (not illustrated in the figures) for calculating at least one o-th derivative of the AS; o is an integer greater than or equal to 1;
ii. means 406 (not illustrated in the figures) for extracting m2 features from the entire o-th derivative spectrum; said m2 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. means 407 (not illustrated in the figures) for dividing the o-th derivative into several segments according to the m2 features;
iv. means 408 (not illustrated in the figures) for extracting m3 features from at least one of said segments; said m3 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. means 409 (not illustrated in the figures) for detecting and/or identifying the specific bacteria if the m2 and/or m3 and/or the m and/or the m1 features are within the n dimensional volume.
According to another embodiment of the present invention, the specific bacteria (in system 2000) is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii) and Stenotrophomonas maltophilia; Gram positive pathogens such as Streptococcus pneumoniae resistant to β lactamase and macrolides, Streptococcus viridans group resistant to β lactamase and aminoglycosides, enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus faecium, Enterococcus faecalis), Staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides, Streptococcus pyogenes resistant to macrolides, macrolide-resistant streptococci of groups B, C and G, coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides, multiresistant strains of Listeria and Corynebacterium, Peptostreptococcus and Clostridium (for example: C. difficile) resistant to penicillins and macrolides, Haemophilus influenzae resistant to β lactamase, Pseudomonas aeruginosa, Stenotrophomonas maltophilia, Klebsiella pneumoniae resistant to antibiotics (for example: Klebsiella pneumoniae resistant to carbapenem), Klebsiella pneumoniae sensitive to antibiotics, aminoglycosides and macrolides or any combination thereof.
According to another embodiment of the present invention, means 100 for obtaining an absorption spectrum (AS) of the sample additionally comprising:
a. at least one optical cell for accommodating the sample;
b. p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; said p light sources are adapted to emit light at different wavelengths to the optical cell; and,
c. detecting means for receiving the spectroscopic data of the sample exiting from the optical cell.
According to yet another embodiment of the present invention, the p light sources are adapted to emit light at a wavelength range selected from a group consisting of UV, visible, IR, mid-IR, far-IR and terahertz.
According to yet another embodiment of the present invention, identification is performed in the region of about 3000-3300 cm⁻¹, and/or about 850-1000 cm⁻¹, and/or about 1300-1350 cm⁻¹, and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹, and/or about 1500-1800 cm⁻¹, and/or about 2800-3050 cm⁻¹, and/or about 1180-1290 cm⁻¹.
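Restricting the analysis to the diagnostic wavenumber windows listed above amounts to slicing the spectrum along its wavenumber axis. A minimal, non-limiting Python sketch:

```python
def extract_region(wavenumbers, absorbance, lo, hi):
    """Keep only the absorption values whose wavenumber (cm^-1) falls
    inside one diagnostic window, e.g. about 1500-1800 cm^-1."""
    return [a for w, a in zip(wavenumbers, absorbance) if lo <= w <= hi]
```
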
Yet another object of the present invention is to provide a method for detecting and/or identifying specific bacteria within a sample. The method comprises steps selected inter alia from:
a. obtaining an absorption spectrum (AS) of the sample;
b. acquiring the n dimensional volume boundaries for the specific bacteria by:
i. obtaining at least one absorption spectrum (AS2) of samples containing the specific bacteria;
ii. extracting x features from the entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area,
at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance
value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. dividing the AS2 into several segments according to the x features;
iv. extracting y features from each of the segments of AS2; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y
is an integer greater than or equal to one;
v. assigning at least one of said x features and/or at least one of said y features to said specific bacteria by algorithms selected from a group consisting of Sequential Backward Selection,
Sequential Forward Selection, Sequential Forward Floating Selection (SFFS), Max-Min algorithm, the trace(S_W⁻¹S_B) criterion, Kullback-Leibler divergence, correct classification rate, and any combination thereof;
vi. defining an n dimensional space; n equals the sum of the x features and/or the y features;
vii. defining the n dimensional volume in said n dimensional space;
viii. determining the boundaries of the n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and
Fisher's linear discriminant, C4.5 algorithm tree, Gaussian Mixed Model (GMM), K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, K-mean clustering algorithm, Ward's
clustering algorithm, Minimum least square, Neural-Network or any combination thereof.
c. data processing the AS;
i. noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. extracting m features from the entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross Section, peak's area,
at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance
value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is an integer greater than or equal to one;
iii. dividing the AS into several segments according to the m features;
iv. calculating the m1 features of at least one of the segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
d. detecting and/or identifying the specific bacteria if the m and/or the m1 features are within the n dimensional volume.
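The detection flow above (extract m features from the whole spectrum, then test membership in the n dimensional volume) can be sketched in Python. Everything below is illustrative: the four chosen features, the synthetic spectrum, and the axis-aligned box standing in for classifier-derived boundaries are assumptions, not values from this disclosure.

```python
import numpy as np

def extract_features(spectrum):
    """Four illustrative 'm' features: peak height, peak position (index),
    mean value of the signal, and variance value of the signal."""
    return np.array([spectrum.max(), float(spectrum.argmax()),
                     spectrum.mean(), spectrum.var()])

def in_volume(features, lower, upper):
    """Membership test against an axis-aligned n dimensional volume."""
    return bool(np.all((features >= lower) & (features <= upper)))

# Synthetic absorption spectrum: a baseline plus one Gaussian peak near
# 1650 cm^-1 (roughly the bending/amide region discussed in the text).
x = np.linspace(900, 1800, 901)                  # wavenumbers, cm^-1
spectrum = 0.05 + 0.8 * np.exp(-((x - 1650) / 30) ** 2)

m_features = extract_features(spectrum)

# Hypothetical boundaries, as if produced by the statistical processing.
lower = np.array([0.5, 700.0, 0.0, 0.0])
upper = np.array([1.0, 800.0, 0.5, 0.1])
detected = in_volume(m_features, lower, upper)   # True for this spectrum
```

A real implementation would replace the box with the boundary produced by one of the listed classifiers (SVM, GMM, k-nearest neighbor, etc.).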
It should be pointed out that in each of the systems or methods as described above (either 1000 or 2000), the statistical processing means 200 is used only once for each specific bacterium. Once the boundaries were provided by the statistical processing means 200, the determination whether the specific bacteria is present in a sample is performed by verifying whether the m and/or m1 features are within the boundaries. Furthermore, once the boundaries were provided, there exists no need for the statistical processing of the same specific bacteria again.
It should be further pointed out that according to one embodiment of the present invention, either one of the systems (1000 and/or 2000) as defined above can additionally comprise means adapted to
recommend any physician, after the specific bacteria has been identified, what kind of antibiotics and medicine to take.
Yet another object of the present invention is to provide a method for detecting and/or identifying specific bacteria within a sample. The method comprises steps selected inter alia from:
a. obtaining an absorption spectrum (AS) of the sample; the AS containing water influence;
b. acquiring the n dimensional volume boundaries for the specific bacteria by:
i. obtaining at least one absorption spectrum (AS2) of known samples containing the specific bacteria;
ii. extracting x features from the entire AS2; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to one;
iii. dividing the AS2 into several segments according to the x features;
iv. extracting y features from each of the segments of AS2; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross
section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of
the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; y
is an integer greater than or equal to one;
v. assigning at least one of said x features and/or at least one of said y features to said specific bacteria;
vi. defining an n dimensional space; n equals the sum of the x features and/or the y features;
vii. determining the boundaries of the n dimensional volume by using a technique selected from a group consisting of Bayes classifier, Support Vector Machine (SVM), Linear discriminant functions and
Fisher's linear discriminant, C4.5 algorithm tree, Gaussian Mixed Model (GMM), K-nearest neighbor, Weighted K-nearest neighbor, Hierarchical clustering algorithm, K-mean clustering algorithm, Ward's
clustering algorithm, Minimum least square, Neural-Network or any combination thereof;
c. eliminating the water influence from the AS by at least one of the following methods: Low pass filter, High pass filter and Water absorption division;
d. data processing the AS without the water influence by:
i. noise reducing by using different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, low pass filter or any combination thereof;
ii. extracting m features from the entire AS; said m features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area,
at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance
value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m is an integer greater than or equal to one;
iii. dividing the AS into several segments according to the m features;
iv. calculating the m1 features of each of the segments; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one; and,
e. detecting and/or identifying the specific bacteria if the m and/or the m1 features are within the n dimensional volume; wherein said bacteria is an antibiotic-resistant bacteria.
According to another embodiment, the sample is an aerosol or solid or liquid sample selected from a group consisting of cough, sneeze, saliva, mucus, bile, urine, vaginal secretions, middle ear
aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, serum, blood and spinal fluid.
According to another embodiment, the selection of said x features and/or said y features is performed via algorithms selected from a group consisting of the Chi-Squared (χ2) test, the Wilcoxon test, and the t-test, or any combination thereof.
In each of the methods as described above, the statistical processing is used only once for each specific bacterium. Once the boundaries were provided by the statistical processing, the determination whether the specific bacteria is present in a sample is performed by verifying whether the m and/or the m1 features are within the boundaries. Furthermore, once the boundaries were provided, there exists no need for the statistical processing of the same specific bacteria again.
According to another embodiment of the present invention, the step of acquiring the n dimensional volume boundaries for the specific bacteria in each of the methods as defined above additionally comprises the step of calculating the Gaussian distribution and/or Multivariate Gaussian distribution, and/or Rayleigh distribution, and/or Maxwell distribution, and/or estimating the distribution by the Parzen method or a mixed model (like the Gaussian Mixed Model, known as GMM) for at least one of the n features, such that the distributions define the n dimensional volume in the n dimensional space.
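As a sketch of the Gaussian-distribution option, a multivariate Gaussian can be fitted to training feature vectors and a chi-square cutoff on the squared Mahalanobis distance used as the n dimensional volume; the training data, the feature count (n = 2), and the cutoff below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training feature vectors (n = 2 features) for one bacterium.
train = rng.normal(loc=[1.0, 3.0], scale=[0.1, 0.2], size=(200, 2))

# Multivariate Gaussian estimate; the distribution defines the volume.
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def mahalanobis_sq(v):
    """Squared Mahalanobis distance of a feature vector from the fit."""
    d = v - mu
    return float(d @ cov_inv @ d)

# Inside the volume if the distance is below a cutoff (9.21 is the
# chi-square 99% quantile for 2 degrees of freedom).
CUTOFF = 9.21
inside = mahalanobis_sq(np.array([1.05, 2.9])) < CUTOFF   # near the cluster
outside = mahalanobis_sq(np.array([2.0, 5.0])) < CUTOFF   # far away
```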
According to another embodiment of the present invention, step (c) of data processing the AS, in the methods as described above, additionally comprises the steps of:
i. calculating at least one of the oth derivatives of the AS; o is an integer greater than or equal to 1;
ii. extracting m2 features from the entire oth derivative spectrum; said m2
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m2 is an integer greater than or equal to one;
iii. dividing the oth derivative into several segments according to the m2 features;
iv. calculating the m3 features in at least one of the segments; said m3
features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted
polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis
value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m3 is an integer greater than or equal to one; and,
v. detecting and/or identifying the specific bacteria if the m2 and/or m3 features and/or the m and/or the m1 features are within the n dimensional volume.
According to another embodiment of the present invention, in the methods as described above, the specific bacteria is selected from a group consisting of Gram negative pathogens such as various types of Acinetobacter (for example: A. baumannii) and Stenotrophomonas maltophilia; Gram positive pathogens such as Streptococcus pneumoniae resistant to β lactamase and macrolides; the Streptococcus viridans group resistant to β lactamase and aminoglycosides; enterococci resistant to vancomycin and teicoplanin and highly resistant to penicillins and aminoglycosides (for example: Enterococcus faecium, Enterococcus faecalis); Staphylococcus aureus sensitive and resistant to methicillin, other β lactams, macrolides, lincosamides and aminoglycosides; Streptococcus pyogenes resistant to macrolides; macrolide-resistant streptococci of groups B, C and G; coagulase negative staphylococci resistant to β lactams, aminoglycosides, macrolides, lincosamides and glycopeptides; multiresistant strains of Listeria and Corynebacterium; Peptostreptococcus and Clostridium (for example: C. difficile) resistant to penicillins and macrolides; Haemophilus influenzae resistant to β lactamase; Pseudomonas aeruginosa; Stenotrophomonas maltophilia; Klebsiella pneumoniae resistant to antibiotics (for example: Klebsiella pneumoniae resistant to carbapenem); Klebsiella pneumoniae sensitive to antibiotics, aminoglycosides and macrolides; or any combination thereof.
According to another embodiment of the present invention, the step of obtaining the AS, in the methods as described above, additionally comprises the following steps:
a. providing at least one optical cell accommodating the sample;
b. providing p light sources selected from a group consisting of laser, lamp, LEDs, tunable lasers and monochromator; p is an integer equal to or greater than 1; the p light sources are adapted to emit light to
the optical cell;
c. providing detecting means for receiving the spectroscopic data of the sample;
d. emitting light from the light source at different wavelengths to the optical cell; and,
e. collecting the light exiting from the optical cell by the detecting means; thereby obtaining the AS.
According to another embodiment of the present invention, the step of emitting light is performed at the wavelength range of UV, visible, IR, mid-IR, far-IR and terahertz. It should be further pointed out that according to one embodiment of the present invention, the methods as described above additionally comprise the step of detecting said bacteria by analyzing said AS in the region of about 3000-3300 cm⁻¹, and/or about 850-1000 cm⁻¹, and/or about 1300-1350 cm⁻¹, and/or about 2836-2995 cm⁻¹, and/or about 1720-1780 cm⁻¹, and/or about 1550-1650 cm⁻¹, and/or about 1235-1363 cm⁻¹, and/or about 990-1190 cm⁻¹, and/or about 1500-1800 cm⁻¹, and/or about 2800-3050 cm⁻¹, and/or about 1180-1290 cm⁻¹.
According to yet another embodiment of the present invention, the absorption spectra, in any of the systems (1000 or 2000) or for any of the methods as described above, is obtained using an
instrument selected from the group consisting of a spectrometer, Fourier transform infrared spectrometer, a fluorometer and a Raman spectrometer.
According to yet another embodiment of the present invention, the uncultured sample, in any of the systems (1000 or 2000) or for any of the methods as described above, is selected from fluids originating from the human body such as blood, saliva, urine, bile, vaginal secretions, middle ear aspirate, pus, pleural effusions, synovial fluid, abscesses, cavity swabs, mucous, and serum.
It should be further pointed out that according to one embodiment of the present invention, any one of the methods as described above can additionally comprise the step of recommending, after the specific bacteria has been identified, what kind of antibiotics and medicine to take.
In the foregoing description, embodiments of the invention, including preferred embodiments, have been presented for the purpose of illustration and description. They are not intended to be
exhaustive or to limit the invention to the precise form disclosed. Obvious modifications or variations are possible in light of the above teachings. The embodiments were chosen and described to
provide the best illustration of the principles of the invention and its practical application, and to enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally, and equitably entitled.
EXAMPLES
Examples are given in order to prove the embodiments claimed in the present invention. The examples describe the manner and process of the present invention and set forth the best mode contemplated
by the inventors for carrying out the invention, but are not to be construed as limiting the invention.
Water Influence
One of the major problems in identifying bacteria from a fluid sample's spectrum (and especially an aerosol spectrum) is the water influence (i.e., the water noise which masks the desired spectrum by
the water spectrum).
The water molecule may vibrate in a number of ways. In the gas state, the vibrations involve combinations of symmetric stretch (v1), asymmetric stretch (v3) and bending (v2) of the covalent bonds.
The water molecule has a very small moment of inertia on rotation which gives rise to rich combined vibrational-rotational spectra in the vapor containing tens of thousands to millions of absorption
lines. The water molecule has three vibrational modes x, y and z. The following table (table 1) illustrates the water vibrations, wavelength and the assignment of each vibration:
TABLE 1: water vibrations, wavelength and the assignment of each vibration

  Wavelength   cm⁻¹     Assignment
  0.2 mm       50       intermolecular bend
  55 μm        183.4    intermolecular stretch
  25 μm        395.5    L1, librations
  15 μm        686.3    L2, librations
  6.08 μm      1645     v2, bend
  4.65 μm      2150     v2 + L2
  3.05 μm      3277     v1, symmetric stretch
  2.87 μm      3490     v3, asymmetric stretch
  1900 nm      5260     av1 + v2 + bv3; a + b = 1
  1470 nm      6800     av1 + bv3; a + b = 2
  1200 nm      8330     av1 + v2 + bv3; a + b = 2
  970 nm       10310    av1 + bv3; a + b = 3
  836 nm       11960    av1 + v2 + bv3; a + b = 3
  739 nm       13530    av1 + bv3; a + b = 4
  660 nm       15150    av1 + v2 + bv3; a + b = 4
  606 nm       16500    av1 + bv3; a + b = 5
  514 nm       19460    av1 + bv3; a + b = 6

  (a and b are integers, ≧0)
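The wavelength and wavenumber columns of Table 1 are reciprocals (the wavelength in μm equals 10⁴ divided by the wavenumber in cm⁻¹), which can be checked directly:

```python
# Wavelength in micrometers and wavenumber in cm^-1 are reciprocal:
# lambda_um = 1e4 / nu_cm.  Checking two rows of Table 1:
def wavenumber_to_um(nu_cm):
    return 1e4 / nu_cm

bend_um = wavenumber_to_um(1645)          # v2 bend: ~6.08 um
sym_stretch_um = wavenumber_to_um(3277)   # v1 symmetric stretch: ~3.05 um
```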
The present invention provides a method for significantly reducing and even eliminating the water influence within the absorption spectra.
Reference is now made to FIGS. 3 and 4 which illustrate an absorption spectrum of a sample with and without the water influence.
The present invention provides three main methods for eliminating the water influence.
The First Method
The first method for eliminating the water influence uses Water absorption division and contains the following steps:
First, the absorption spectrum was divided into several segments (i.e., wavenumber ranges). The spectrum was divided into segments of about 1800 cm⁻¹ to about 2650 cm⁻¹, about 1400 cm⁻¹ to about 1850 cm⁻¹, about 1100 cm⁻¹ to about 1450 cm⁻¹, about 950 cm⁻¹ to about 1100 cm⁻¹, and about 550 cm⁻¹ to about 970 cm⁻¹.
The segments were determined according to (i) different intensity peaks within the water's absorption spectrum; and, (ii) the signal's trends.
Next, each segment was eliminated from the water influence in the following manner:
(a) providing the absorption intensity at each wavenumber (x) within the absorption spectrum (referred to hereinafter as Sig_water(x));
(b) calculating the correction factors (CF) at each wavenumber (referred to hereinafter as x) within each segment (referred to hereinafter as CF(x));
(c) acquiring from the absorption spectrum at least one absorption intensity that is mainly influenced by water (referred to hereinafter as Sig_water-only(x1)) at the corresponding wavenumbers (x1);
(d) calculating at least one correction factor of the water (CF_water-only(x1)) at said at least one wavenumber (x1);
(e) dividing at least one Sig_water-only(x1) by at least one CF_water-only(x1) (i.e., Sig_water-only(x1)/CF_water-only(x1)) at said at least one wavenumber (x1);
(f) calculating the average of the results of step (e) (referred to hereinafter as AVG[Sig_water-only/CF_water-only]);
(g) multiplying the AVG[Sig_water-only/CF_water-only] by CF(x) for each wavenumber (x); and,
(h) subtracting each result of step (g) from Sig_water(x) per each (x).
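Steps (a) through (h) reduce to one operation: estimate the water amplitude from the water-only wavenumbers, scale the segment's correction factor by it, and subtract. A numpy sketch, assuming (for demonstration only) that the correction factor shape is known and the measured segment is pure water, so the corrected spectrum should vanish:

```python
import numpy as np

def remove_water(sig, cf, x1_idx, cf_water_only):
    """Apply steps (c)-(h) to one segment: sig - CF(x) * AVG[sig/CF at x1]."""
    ratios = sig[x1_idx] / cf_water_only        # steps (c)-(e)
    avg = np.mean(ratios)                       # step (f)
    return sig - avg * cf                       # steps (g)-(h)

# Illustrative segment (1800-2650 cm^-1) whose signal is water only.
x = np.linspace(1800, 2650, 200)
cf = np.exp(-((x - 2100) / 150) ** 2)           # stand-in CF shape
water = 0.2 * cf                                # water-dominated spectrum
x1_idx = np.array([90, 100, 110])               # wavenumbers mainly water
cleaned = remove_water(water, cf, x1_idx, cf[x1_idx])
```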
In other words, each absorption intensity within the spectrum is eliminated from the water influence according to the following equation:

Sig_no-water(x) = Sig_water(x) - CF(x) * AVG[Sig_water-only(x1)/CF_water-only(x1)]
Calculating the Correction Factors
The correction factors (CF) depend on the wavelength range, the water absorption peak's shape at each wavelength, peak's width, peak's height, absorption spectrum trends and any combination thereof.
The following series were used as correction factors (x denotes the wavenumber in cm⁻¹):

1. Wavelength range 1846 cm⁻¹ to 2613 cm⁻¹:
CF(x) = a11*e^(-((x-b11)/c11)^2) + a21*e^(-((x-b21)/c21)^2) + a31*e^(-((x-b31)/c31)^2) + a41*e^(-((x-b41)/c41)^2) + a51*e^(-((x-b51)/c51)^2) + a61*e^(-((x-b61)/c61)^2) + a71*e^(-((x-b71)/c71)^2) + a81*e^(-((x-b81)/c81)^2)

2. Wavelength range 1461 cm⁻¹ to 1846 cm⁻¹:
CF(x) = a12*e^(-((x-b12)/c12)^2) + a22*e^(-((x-b22)/c22)^2) + a32*e^(-((x-b32)/c32)^2) + a42*e^(-((x-b42)/c42)^2) + a52*e^(-((x-b52)/c52)^2) + a62*e^(-((x-b62)/c62)^2) + a72*e^(-((x-b72)/c72)^2) + a82*e^(-((x-b82)/c82)^2)

3. Wavelength range 1111 cm⁻¹ to 1461 cm⁻¹:
CF(x) = a13*e^(-((x-b13)/c13)^2) + a23*e^(-((x-b23)/c23)^2) + a33*e^(-((x-b33)/c33)^2) + a43*e^(-((x-b43)/c43)^2) + a53*e^(-((x-b53)/c53)^2) + a63*e^(-((x-b63)/c63)^2) + a73*e^(-((x-b73)/c73)^2) + a83*e^(-((x-b83)/c83)^2)

4. Wavelength range 961 cm⁻¹ to 1111 cm⁻¹:
CF(x) = a14*e^(-((x-b14)/c14)^2) + a24*e^(-((x-b24)/c24)^2) + a34*e^(-((x-b34)/c34)^2) + a44*e^(-((x-b44)/c44)^2) + a54*e^(-((x-b54)/c54)^2) + a64*e^(-((x-b64)/c64)^2) + a74*e^(-((x-b74)/c74)^2) + a84*e^(-((x-b84)/c84)^2)

5. Wavelength range 570 cm⁻¹ to 961 cm⁻¹:
CF(x) = a15*e^(-((x-b15)/c15)^2) + a25*e^(-((x-b25)/c25)^2) + a35*e^(-((x-b35)/c35)^2) + a45*e^(-((x-b45)/c45)^2) + a55*e^(-((x-b55)/c55)^2) + a65*e^(-((x-b65)/c65)^2) + a75*e^(-((x-b75)/c75)^2) + a85*e^(-((x-b85)/c85)^2)
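Each range's correction factor is thus a sum of eight Gaussians. A sketch with purely illustrative coefficients (the fitted aik, bik, cik values are not reproduced here, so the numbers below are assumptions):

```python
import numpy as np

def correction_factor(x, a, b, c):
    """Sum of Gaussians: CF(x) = sum_i a[i] * exp(-((x - b[i]) / c[i])**2)."""
    x = np.asarray(x, dtype=float)[:, None]     # shape (N, 1) for broadcasting
    return np.sum(a * np.exp(-((x - b) / c) ** 2), axis=1)

# Eight illustrative (a, b, c) triples for the 1846-2613 cm^-1 range.
a = np.array([0.9, 0.5, 0.4, 0.3, 0.25, 0.2, 0.15, 0.1])
b = np.linspace(1900, 2600, 8)                  # Gaussian centers, cm^-1
c = np.full(8, 120.0)                           # Gaussian widths, cm^-1

x = np.linspace(1846, 2613, 300)
cf = correction_factor(x, a, b, c)              # strictly positive curve
```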
Absorption Intensity Mainly Influenced by Water
Reference is made again to FIG. 3, which illustrates the absorption spectrum prior to eliminating the water influence.
As can be seen from the figure, the absorption intensity that is mainly influenced by the water is in the wavenumber region of 2000 cm⁻¹ and above. The intensity at that region is about 0.2 absorption units. In the present example, x1 is 2000 and Sig_water-only(x1) is 0.2.
Reference is made again to FIG. 4, which illustrates the absorption spectrum of a sample after the influence of the water was eliminated.
It should be pointed out that for the purpose of obtaining a better resolution both graphs (3 and 4) are normalized to 2 (i.e., multiplied by 2).
The Second Method
The second method uses a low pass filter, LPF. The method comprises the following steps:
1. Selecting the entire spectrum or at least one sub-region of the fully-hydrated bacteria spectrum.
2. Computing a water-baseline spectrum estimate by filtering the selected fully-hydrated bacteria spectrum by a Low-Pass-Filter (LPF).
3. Subtracting the water-baseline spectrum estimate from the selected fully-hydrated bacteria spectrum to obtain the non-smoothed sole bacteria spectrum.
4. A smoothed version of the sole bacteria spectrum is obtained by applying any smoothing operator, such as, but not limited to, Savitzky-Golay, on the non-smoothed sole bacteria spectrum.
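A compact sketch of steps 1-3, with a moving-average FIR standing in for the unspecified LPF design; the synthetic broad "water" baseline and narrow "bacteria" peak are assumptions for demonstration:

```python
import numpy as np

def moving_average(sig, width):
    """A simple FIR low-pass filter (moving average) for baseline estimation."""
    kernel = np.ones(width) / width
    return np.convolve(sig, kernel, mode="same")

# Step 1: fully-hydrated spectrum = broad water-like baseline + narrow peak.
x = np.linspace(900, 1800, 901)                          # wavenumbers, cm^-1
baseline = 0.5 * np.exp(-((x - 1350) / 400) ** 2)        # broad (water-like)
peak = 0.3 * np.exp(-((x - 1650) / 15) ** 2)             # narrow (bacteria-like)
spectrum = baseline + peak

# Step 2: water-baseline estimate via the LPF.
water_estimate = moving_average(spectrum, 151)

# Step 3: subtract to obtain the non-smoothed sole bacteria spectrum.
sole_bacteria = spectrum - water_estimate

peak_idx = int(np.argmax(sole_bacteria))                 # narrow peak survives
```

Step 4 would then apply a smoother such as Savitzky-Golay (e.g., scipy.signal.savgol_filter) to the subtracted signal.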
All the steps described above (in the second method) are illustrated in FIGS. 5-7.
FIG. 5 illustrates steps 1-4. FIG. 6 illustrates the subtracted non-smoothed signal and the subtracted smoothed signal. FIG. 7 illustrates the Finite-Impulse-Response (FIR) filter used to generate the LPF coefficients.
The Third Method
The third method uses a high pass filter, HPF. The method comprises the following steps:
1. Selecting the entire spectrum or a sub-region of the fully-hydrated bacteria spectrum.
2. Computing the sole bacteria spectrum by filtering the selected fully-hydrated bacteria spectrum by a High-Pass-Filter (HPF).
3. Subtracting the sole bacteria spectrum from the entire spectrum to obtain the non-smoothed sole bacteria spectrum.
4. A smoothed version of the sole bacteria spectrum is obtained by applying any smoothing operator, such as, but not limited to, Savitzky-Golay, on the non-smoothed sole bacteria spectrum.
All the steps described above (in the third method) are illustrated in FIGS. 8-9.
FIG. 8 illustrates steps 1-4. FIG. 9 illustrates the Finite-Impulse-Response (FIR) filter used to generate the HPF coefficients.
The Bacteria's Absorption Spectrum
Each type of bacteria has a unique spectral signature. Although many types of bacteria have similar spectral signatures, there are still some spectral differences that are due to different proteins on the cell membrane and differences in the DNA/RNA structure. The following protocol was used to obtain the spectral signature of Methicillin Sensitive Staphylococcus aureus (MSSA) and Methicillin Resistant Staphylococcus aureus (MRSA):
1. A strip of 2 cm of MSSA and MRSA bacteria was removed from an agar plate (a quadloop was used). Both bacteria were purchased from HY labs and dissolved in two eppendorf tubes with 160 μl ddH2O each (Sigma-Aldrich W3500-100 mL):
  Signal   ATCC type   Cat number   Lot number   Resistance
  1        6538        101692       7026         MSSA
  2        25923       101676       6985         MSSA
  3        43300       101709       7115         MRSA
Each of the species was introduced into 2 tubes, A and B (i.e., a total of 6 tubes, 3 A's and 3 B's).
1. Tube A for each of the species was:
a. centrifuged for 7 minutes at 9000 RPM;
b. the supernatant was discarded;
c. 1 mL ddH2O was added;
d. centrifuged for 7 minutes at 9000 RPM;
e. the supernatant was discarded;
f. 160 μL ddH2O was added;
2. Put 30 μL from tubes A and B in three areas on an optical plate (ZnSe). Each plate for a different subtype of Staph Aureus.
3. Place the plate in a desiccator (Dessicator 250 mm polypropylene) in the presence of several petri plates with a desiccant agent (Phosphorus Pentoxide cat #79610 Sigma Aldrich) and vacuum for 30 minutes.
4. Read and analyze the spectral signature.
The following figures show the absorption spectrum of bacteria in the sample.
Reference is now made to FIGS. 10a, 10b and 11, illustrating the absorption spectrum of Methicillin Sensitive Staphylococcus aureus (MSSA) ATCC type 6538 and ATCC type 25923, and Methicillin Resistant Staphylococcus aureus (MRSA), respectively.
Distinguishing Between Different Kinds of Bacteria
The following in-vitro examples illustrate a method to distinguish between different kinds of bacteria.
The identification and/or detection of specific bacteria were as follows:
(a) the water influence was eliminated using methods selected inter alia from, but not limited to, low pass filter, high pass filter, and water absorption division, to receive the dry bacteria spectrum;
(b) the noise in each of the absorption spectra (without the water influence) was reduced by using Savitzky-Golay smoothing;
(c) m features such as, but not limited to, Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial
curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value,
Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof were extracted from the spectra. A total of m features were extracted; m is an integer greater than or equal to 1;
(d) the signal was divided into several regions (segments, i.e., several wavenumber regions) according to said m features;
(e) m1 features were extracted from at least one of the spectrum's regions; said m1 features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; m1 is an integer greater than or equal to one.
(f) the m features and the m1 features were examined to check whether they are within the n dimensional volume boundaries (which were acquired by the statistical processing);
(g) the identification of the specific bacteria was determined as positive if the m features and/or the m1 features are within the n dimensional volume boundaries.
Statistical Processing
The statistical processing is especially adapted to provide the n dimensional volume boundaries. For each specific bacterium the statistical processing was performed only once, for obtaining the boundaries. Once the boundaries were provided, the determination whether the specific bacteria is present in a sample was as explained above (i.e., verifying whether the feature vectors are within the boundaries).
The statistical processing for each specific bacterium is performed in the following manner:
(a) obtaining several absorption spectra (AS2) of known samples containing the specific bacteria;
(b) extracting x features from the signal; said x features are selected from a group consisting of Correlation, peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet coefficients or any combination thereof; x is an integer greater than or equal to 1;
(c) dividing the signal into several regions (segments) according to said x features;
(d) calculating y features for at least one of the segments within the absorption spectrum; said y features are selected from a group consisting of Correlation, peak's wavelength, peak's height,
peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of the signal, linear prediction
coefficient (LPC), mean value of the signal, Variance value of the signal, Skewness value, Kurtosis value, Gaussians' set of parameters (μ,σ,Ai), different peaks' intensity ratios, wavelet
coefficients or any combination thereof; y is an integer higher or equal to one;
(e) assigning at least one of said x features and/or at least one of said y features to said specific bacteria;
(f) defining an n dimensional space; n equals the sum of the x features and y features;
(g) assigning and/or interlinking each one of the x and y features to the specific bacteria whose identification is required;
(h) optionally calculating the statistical distribution for each of the x and y features (thus defining the n dimensional volume); and,
(i) determining the boundaries of each volume by using a classifier or a combination of classifiers (for example k nearest neighbor, Bayesian classification, et cetera).
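Step (i)'s boundary rule can be sketched with one of the named classifiers, k nearest neighbor; the two synthetic feature clusters below are illustrative stand-ins for feature vectors extracted from real spectra:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training features: target bacterium (label 1) vs. others (0).
target = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
other = rng.normal([3.0, 3.0], 0.1, size=(50, 2))
X = np.vstack([target, other])
y = np.array([1] * 50 + [0] * 50)

def knn_identify(v, k=5):
    """k nearest neighbor vote: majority label among the k closest vectors."""
    dist = np.linalg.norm(X - v, axis=1)
    nearest = y[np.argsort(dist)[:k]]
    return int(np.bincount(nearest).argmax())

hit = knn_identify(np.array([1.05, 0.95]))    # falls in the target cluster
miss = knn_identify(np.array([3.0, 3.1]))     # falls in the other cluster
```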
It should be pointed out that the assignment of at least one of the x features and/or at least one of the y features to the specific bacteria is performed by a method of feature selection.
It should be further pointed out that the Gaussian distribution or Multivariate Gaussian distribution, or Rayleigh distribution, or Maxwell distribution may be calculated, or the distribution may be estimated by the Parzen method or a mixed model (like the Gaussian Mixed Model, known as GMM), for at least one of the n features, such that the distributions define the n dimensional volume in the n dimensional space.
It should be further emphasized that all the above mentioned steps could be performed on at least one of the oth derivatives of the absorption spectrum; o is an integer greater than or equal to 1, e.g., the features are extracted from the oth derivative instead of the signal.
If the features (extracted from the spectrum and/or its derivatives) are within the n dimensional volume boundaries, the specific bacteria is identified. Otherwise the bacteria are not identified.
Alternatively or additionally, each of the x and/or y features is given a weighting factor. The weighting factor is determined by examining how each feature improves the bacteria detection prediction (for example by using maximum likelihood or Bayesian estimation). Once the weighting factor is assigned to each one of the x and y features, the boundaries are determined for the features having the most significant contribution to the bacteria prediction.
Alternatively or additionally, the AS2 and its derivatives are smoothed by reducing the noise. The noise reduction is obtained by different smoothing techniques selected from a group consisting of running average, Savitzky-Golay, or any combination thereof.
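A running-average smoother of the kind mentioned can be sketched in a few lines; the window length is a free parameter here, and a Savitzky-Golay filter would instead fit a low-order polynomial in each window:

```python
def running_average(signal, window=3):
    """Smooth `signal` with a centered moving average.
    Edges use a shrunken window so output length equals input length."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

noisy = [1.0, 5.0, 1.0, 5.0, 1.0]   # hypothetical noisy samples
print(running_average(noisy))
```

The alternating spikes in the input are visibly flattened in the output, which is the point of the pre-processing step: derivatives taken later amplify noise, so smoothing first stabilizes the extracted features.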
Boundaries Calculation
As explained above, the boundaries are calculated according to the features which had the most significant contribution for the specific bacteria identification in the sample. The following figure is
a 3D illustration of the two dimensional boundary based on the three best features from a segment of the spectrum.
Reference is now made to FIG. 12a, which makes it possible to distinguish clearly between the MRSA bacteria and MSSA bacteria. The diagram demonstrates the graphs after the water influence was eliminated using one of the above mentioned methods.
The values of the x-axis of FIG. 12a (or 12b) are calculated in the following manner:
A min-max normalization was employed on the first derivative of the dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) at the 1091 cm⁻¹ wavenumber in the range [990-1170] cm⁻¹.
The y-axis of FIG. 12a (or 12b) is coefficient #2 in the detail of the Wavelet transform at level 1 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) in the range [990-1170] cm⁻¹.
The z-axis of FIG. 12a (or 12b) is coefficient #21 in the detail of the Wavelet transform at level 2 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria estimate in the range [990-1170] cm⁻¹.
The m Features Extracted from the Spectrum
The following m features were extracted:
peak's wavelength, peak's height, peak's width, peak's cross section, peak's area, at least one of the coefficients of a fitted polynomial curve, the total sum of areas under at least two peaks of
the signal, linear prediction coefficient (LPC), mean value of the signal, variance value of the signal, skewness value, kurtosis value, Gaussians' set of parameters (μ, σ, Ai), different peaks' intensity ratios, wavelet coefficients, or any combination thereof; m is an integer greater than or equal to one.
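Since wavelet coefficients appear repeatedly among the m features, a single-level discrete wavelet transform with the Daubechies order-2 (db2) filters can be sketched as below. This is an illustrative implementation with periodic boundary handling; real toolkits such as PyWavelets use other padding conventions, so a coefficient numbering like "coefficient #2" depends on the software used:

```python
import math

S3 = math.sqrt(3)
N = 4 * math.sqrt(2)
# Daubechies-2 (4-tap) decomposition filters.
LO = [(1 + S3) / N, (3 + S3) / N, (3 - S3) / N, (1 - S3) / N]
HI = [LO[3], -LO[2], LO[1], -LO[0]]  # quadrature mirror of LO

def dwt_detail(x, flt=HI):
    """One level of detail coefficients with periodic boundary handling.
    (Edge coefficients may differ from library implementations that use
    symmetric or zero padding.)"""
    n = len(x)
    return [
        sum(flt[m] * x[(2 * k + m) % n] for m in range(4))
        for k in range(n // 2)
    ]

# A constant signal has (near-)zero detail everywhere: the highpass
# filter coefficients sum to zero, so DC content is rejected.
print(dwt_detail([5.0] * 8))
```

The lowpass filter sums to √2 and has unit energy, which is what makes the transform orthonormal; those two identities are a convenient sanity check for any hand-typed filter bank.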
The features were extracted from (i) the dried bacteria spectrum (i.e., after the water influence was eliminated), (ii) First derivative of the wet bacteria spectrum (prior to the water influence
elimination), (iii) Second derivative of the wet bacteria spectrum, (iv) First derivative of the dried bacteria spectrum (i.e., after the water influence was eliminated), (v) Second derivative of the
dried bacteria spectrum estimate (i.e., after the water influence was eliminated), (vi) Correlation.
Other features that were extracted were Peak's wave length and height of the wet bacteria spectrum, Peak's wave length and height of the dried bacteria spectrum estimate, Peak Width from a peak's
wave length of the wet bacteria spectrum, Peak Width from a peak's wave length of the dried bacteria spectrum estimate, Peak Width from a specified wavenumber of the wet bacteria spectrum, Peak Width
from a specified wavenumber of the dried bacteria spectrum estimate.
The m features were extracted from at least one of the above mentioned spectrum segments.
The three most significant features were found to be the following:
1. The first derivative of the dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) at the 1091 cm⁻¹ wavenumber in the range [990-1170] cm⁻¹, after employing a min-max normalization on said first derivative.
2. Coefficient #2 in the detail of the Wavelet transform at level 1 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria estimate (i.e., the bacteria spectrum after the water was eliminated) in the range [990-1170] cm⁻¹.
3. Coefficient #21 in the detail of the Wavelet transform at level 2 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria estimate in the range [990-1170] cm⁻¹.
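The first of the significant features combines a finite-difference first derivative with min-max normalization; a minimal sketch in Python, using a short hypothetical spectrum rather than real absorbance data:

```python
def first_derivative(y):
    """Forward-difference approximation of dy/dx (unit spacing)."""
    return [b - a for a, b in zip(y, y[1:])]

def min_max_normalize(y):
    """Rescale to [0, 1]; assumes the values are not all equal."""
    lo, hi = min(y), max(y)
    return [(v - lo) / (hi - lo) for v in y]

spectrum = [0.2, 0.5, 0.9, 0.7, 0.4]   # hypothetical absorbance values
deriv = first_derivative(spectrum)      # approx. [0.3, 0.4, -0.2, -0.3]
print(min_max_normalize(deriv))         # each value mapped into [0, 1]
```

The normalization makes the feature value at a given wavenumber comparable across samples that differ only in overall signal strength, which is why it precedes the per-wavenumber feature extraction.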
FIG. 12b is identical to FIG. 12a except for the fact that FIG. 12b also illustrates the boundaries between StaphMRSA and StaphMSSA.
FIG. 12a was constructed in the following manner:
Step I--a separation between Streptococcus pyogenes and the other kinds of bacteria was conducted (see FIG. 13a and FIG. 13b). The other kinds of bacteria consist of the following: Agalactia, Bovis, Klaps, Pneomonia, StaphMSSA, StaphMRSA, and Staphepider. The boundary between these classes is shown in the figure.
The values of the x-axis of FIGS. 13a and 13b are calculated in the following manner:
A min-max normalization was employed on the second derivative of the dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) in the range [990-1170] cm⁻¹.
The values of the y-axis of FIGS. 13a and 13b are calculated in the following manner:
A min-max normalization was employed on the first derivative of the dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) at the 1020 cm⁻¹ wavenumber in the range [990-1170] cm⁻¹.
FIG. 13b illustrates the different kinds of bacteria (e.g., Agalactia, Bovis, Klaps, Pneomonia, StaphMSSA, StaphMRSA, and Staphepider) that were included in the analysis.
Step II--a separation within the Streptococcus pyogenes class between different species is conducted (see FIG. 14).
The values of the x-axis of FIG. 14 are coefficient #35 in the detail of the Wavelet transform at level 1 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) in the range [990-1170] cm⁻¹.
The values of the y-axis of FIG. 14 are coefficient #2 in the detail of the Wavelet transform at level 2 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria estimate in the range [990-1170] cm⁻¹.
Step III--a separation between StaphMSSA and StaphMRSA species is conducted (see FIG. 15).
As described earlier, the values of the x-axis of FIG. 15 are calculated in the following manner:
A min-max normalization was employed on the first derivative of the dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) at the 1091 cm⁻¹ wavenumber in the range [990-1170] cm⁻¹.
The y-axis of FIG. 15 is coefficient #2 in the detail of the Wavelet transform at level 1 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria spectrum (i.e., the bacteria spectrum after the water was eliminated) in the range [990-1170] cm⁻¹.
The z-axis of FIG. 15 is coefficient #21 in the detail of the Wavelet transform at level 2 using the Daubechies family wavelet of order 2 on the min-max normalized dried-bacteria estimate in the range [990-1170] cm⁻¹.
Distinguishing between MRSA and MSSA
The following example demonstrates that a distinction can be made between MRSA and MSSA.
The system was tested using 74 samples of MSSA and 91 samples of MRSA.
The calculated sensitivity and specificity are 98.66% and 100% respectively.
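Sensitivity and specificity follow directly from confusion-matrix counts. The counts below are hypothetical (the text reports only the final percentages, not the underlying confusion matrix):

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical example: 90 of 91 "positive" samples flagged,
# all 74 "negative" samples correctly cleared.
print(f"sensitivity = {sensitivity(tp=90, fn=1):.4f}")
print(f"specificity = {specificity(tn=74, fp=0):.4f}")
```

Which class counts as "positive" (resistant vs. sensitive) changes which percentage is which, so stating the convention is part of reporting these figures.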
Experimental Protocol
1. A strip of bacteria is removed from the agar plate (via a quadloop) and dissolved in an Eppendorf tube with 400 μl ddH2O (Sigma-Aldrich W3500-100 mL).
2. 30 μL from every tube is put on an area on an optical plate (ZnSe).
3. The plate is placed in a desiccator (Desiccator 250 mm polypropylene, Yavin Yeda) in the presence of several petri plates with a desiccant agent (Phosphorus Pentoxide cat #79610 Sigma Aldrich), and vacuum is applied for 30 minutes.
4. The spectral signature is read.
5. The spectral analysis is performed. The analysis provides the differentiation between the resistant and sensitive bacteria.
It should be pointed out that the same experimental protocol was performed for each of the following bacteria.
VRE (Vancomycin Resistant Enterococcus) vs. VSE (Vancomycin Sensitive Enterococcus)
Enterococcus Faecium Sensitive to Vancomycin vs. Enterococcus Faecium Resistant to Vancomycin
Reference is now made to FIG. 16 which illustrates the differentiation between Enterococcus Faecium sensitive to Vancomycin and Enterococcus Faecium Resistant to Vancomycin.
In the graph, the x-axis represents Feature #1, which is the value at 1045.8559 cm⁻¹ of the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
The y-axis represents Feature #2, which is the value at 943.9871 cm⁻¹ of the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the Enterococcus Faecium sensitive to Vancomycin and Enterococcus Faecium Resistant to Vancomycin can be identified individually.
Enterococcus Faecalis Sensitive to Vancomycin vs. Enterococcus Faecalis Resistant to Vancomycin
Reference is now made to FIG. 17 which illustrates the differentiation between Enterococcus Faecalis sensitive to Vancomycin and Enterococcus Faecalis resistant to Vancomycin.
In the graph, the x-axis represents Feature #1, which is the value at 1177.8319 cm⁻¹ of the max normalized signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
The y-axis represents Feature #2, which is the value at 945.4642 cm⁻¹ of the min-max normalized 2nd derivative in the region [942-1187] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the Enterococcus faecalis sensitive to Vancomycin and Enterococcus faecalis resistant to Vancomycin can be identified individually.
Acinetobacter Sensitive to Antibiotics vs. Acinetobacter Resistant to Antibiotics
Reference is now made to FIG. 18 which illustrates the differentiation between Acinetobacter sensitive to antibiotic and Acinetobacter Resistant to antibiotics.
In the graph, the x-axis represents Feature #1, which is the value at 1172.0934 cm⁻¹ of the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
The y-axis represents Feature #2, which is the value at 1021.4572 cm⁻¹ of the min-max normalized 2nd derivative in the region [942-1187] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the Acinetobacter sensitive to antibiotics and Acinetobacter resistant to antibiotics can be identified individually.
Klebsiella Pneumonia (KP)
Reference is now made to FIG. 19 illustrating the differentiation between KP Sen (i.e., sensitive to antibiotics), KP Res (resistant to antibiotics) and KPC (resistant to carbapenem, more resistant than the previous type).
In the graph, the x-axis represents Feature #1, which is the value at 1163.4856 cm⁻¹ of the min-max normalized 1st derivative in the region [942-1187] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
The y-axis represents Feature #2, which is the value at 1024.3264 cm⁻¹ of the min-max normalized 2nd derivative in the region [942-1187] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [942-1187] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the KP Sen, KP Res or the KPC can be identified individually.
BloodMSSA vs. BloodMRSA
Reference is now made to FIG. 20 illustrating the differentiation between samples of blood with MRSA and MSSA.
In the graph, the x-axis represents Feature #1, which is the value at 1094.6233 cm⁻¹ of the min-max normalized 2nd derivative in the region [925-1190] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [925-1190] cm⁻¹ after its baseline was subtracted.
The y-axis represents Feature #2, which is the value at 1070.2346 cm⁻¹ of the min-max normalized 1st derivative in the region [925-1190] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [925-1190] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the blood samples containing MRSA and/or MSSA can be identified individually.
Swab Samples
Experimental Protocol for the Swabbing Experiments
1. A Copan cotton swab is used to pick up human fluid sample in duplicates.
a. Swab #1--human fluid without bacteria.
b. Swab #2--pick up 1-5 CFUs from a MRSA or MSSA plate.
2. Do reference reading of the optical cell.
3. Apply each of the swabs on the optical cell.
4. Read the spectral signature
5. Analyze the recorded data
NoseMSSA vs. NoseMRSA
Reference is now made to FIG. 21 illustrating the differentiation between nose swab samples spiked with MRSA and MSSA.
In the graph, the x-axis represents Feature #1, which is the value at 1065.9307 cm⁻¹ of the min-max normalized 2nd derivative in the region [925-1190] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [925-1190] cm⁻¹ after its baseline was subtracted.
The y-axis represents Feature #2, which is the value at 974.1144 cm⁻¹ of the min-max normalized 2nd derivative in the region [925-1190] cm⁻¹, where the derivative was calculated on the max normalized absorbance signal in the region [925-1190] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the nose swab samples spiked with MRSA and/or MSSA can be identified individually.
AxillaryMSSA vs. AxillaryMRSA
Reference is now made to FIG. 22 illustrating the differentiation between axillary swabs spiked with MRSA and MSSA.
In the graph, the x-axis represents Feature #1, which is the width of the peak measured from 1650 cm⁻¹ to half of the magnitude of the value at 1650 cm⁻¹ in the max normalized signal in the region [1458-1800] cm⁻¹ after its baseline was subtracted. The y-axis represents Feature #2, which is cD1(95), coefficient #95 in the approximation of level #1 with the db2 wavelet transform, where db2 is the Daubechies family wavelet of order 2. The transformation was applied on the max normalized signal in the region [1458-1800] cm⁻¹ after its baseline was subtracted.
As is clear from the figure, the axillary swabs spiked with MRSA and/or MSSA can be identified individually.
Patent applications by Gallya Gannot, Ramat Hasharon IL
Patent applications by Moshe Ben-David, Tel-Aviv IL
Patent applications by Tomer Eruv, Kfar Tapuah IL
Patent applications by OPTICUL DIAGNOSTICS LTD.
ratio and proportions
Try to get all variables (letters) on one side and numbers on the other. Your problem is that the unknown, w, is divided by 5 on the lhs. You must do the opposite (i.e. multiply by 5) throughout the expression (do it on both sides) to remove the offending 5 and hence get w = .... So $\frac{w}{5}= \frac{7}{10}$. Multiply both sides by 5: $5\times\frac{w}{5}= 5\times\frac{7}{10}$. Do some cancellation on the lhs and tidying up on the rhs: $w=\frac{7}{2} = 3.5$. Hope that helps.
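If you want to check steps like this mechanically, exact rational arithmetic reproduces the same cancellation (a quick Python check, not part of the algebra itself):

```python
from fractions import Fraction

# w/5 = 7/10  =>  multiply both sides by 5
w = 5 * Fraction(7, 10)
print(w, float(w))  # 7/2 3.5
```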
Contexts and Packages
A typical package written in Mathematica introduces several new symbols intended for use outside the package. These symbols may correspond, for example, to new functions or new objects defined in the package.
There is a general convention that all new symbols introduced in a particular package are put into a context whose name is related to the name of the package. When you read in the package, it adds
this context at the beginning of your context search path $ContextPath.
The full names of symbols defined in packages are often quite long. In most cases, however, you will only need to use their short names. The reason for this is that after you have read in a package,
its context is added to $ContextPath, so the context is automatically searched whenever you type in a short name.
There is a complication, however, when two symbols with the same short name appear in two different packages. In such a case, Mathematica will warn you when you read in the second package. It will
tell you which symbols will be "shadowed" by the new symbols that are being introduced.
Conflicts can occur not only between symbols in different packages, but also between symbols in packages and symbols that you introduce directly in your Mathematica session. If you define a symbol in
your current context, then this symbol may become shadowed by another symbol with the same short name in packages that you read in. The reason for this is that Mathematica searches for symbols in
contexts on the context search path before looking in the current context.
A function defined in your current context will then be shadowed by the one in the package: when you type its short name, the function from the package is used.
If you get into the situation where unwanted symbols are shadowing the symbols you want, the best thing to do is usually to get rid of the unwanted symbols using Remove[s]. An alternative that is
sometimes appropriate is to rearrange the entries in $ContextPath and to reset the value of $Context so as to make the contexts that contain the symbols you want be the ones that are searched first.
$Packages: a list of the contexts corresponding to all packages loaded into your Mathematica session
GL_LINE Vs. DDA algorithm [Archive] - OpenGL Discussion and Help Forums
02-12-2012, 02:50 PM
I do not understand what the difference is between a line that is drawn by glBegin(GL_LINE) and a line that is drawn by the DDA algorithm, and I do not know whether DDA is better than glBegin(GL_LINE) or not.
Thanks for replying to me.
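For context, DDA (Digital Differential Analyzer) is a software rasterization algorithm: it steps along the longer axis in unit increments and rounds the other coordinate, whereas glBegin with a line primitive hands rasterization to the driver or hardware, which typically uses a Bresenham-style integer algorithm. A minimal DDA sketch (the rounding here is written for non-negative coordinates):

```python
def dda_line(x0, y0, x1, y1):
    """Digital Differential Analyzer: return the pixels of a line
    by incrementing along the longer axis in unit steps."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((int(x + 0.5), int(y + 0.5)))
        x += x_inc
        y += y_inc
    return pixels

print(dda_line(0, 0, 4, 2))
```

Visually the two approaches produce near-identical lines; the practical difference is that DDA runs on the CPU with floating-point increments, while the GL pipeline rasterizes on the GPU.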
Other Math Archive | December 25, 2007 | Chegg.com
(a) Let us draw n lines in the plane in such a way that no two are parallel and no three intersect in a common point. Prove that the plane is divided into exactly 1 + n(n+1)/2 parts by the lines.
(b) Similarly, consider n planes in three-dimensional space in general position: no two are parallel, any three have exactly one point in common, and no four have a common point. What is the number of regions into which these n planes partition the space?
We will prove the following statement by mathematical induction: Let l_1, l_2, ..., l_n be n distinct lines in the plane, no two of which are parallel. Then all these lines have a point in common.
For n = 2 the statement is true, since any 2 nonparallel lines intersect.
Let the statement hold for n - 1 (n >= 3), and let us have n lines l_1, ..., l_n as in the statement. By the inductive hypothesis, all these lines but the last one, i.e. the lines l_1, ..., l_{n-1}, have some point in common; let us denote this point by x. Similarly, the n - 1 lines l_1, ..., l_{n-2}, l_n have a point in common; let us denote it by y. The line l_1 lies in both groups, so it contains both x and y. The same is true for the line l_{n-2}. Now l_1 and l_{n-2} intersect at a single point only, and so we must have x = y. Therefore all the lines l_1, ..., l_n have a point in common, namely the point x.
Something must be wrong. What is it?
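For part (a), the intended count is 1 + n(n+1)/2: the k-th line added crosses the previous k - 1 lines, is cut into k pieces, and each piece splits one existing region in two, so line k adds k regions. The recurrence and the closed form can be checked mechanically:

```python
def regions(n):
    """Regions of the plane cut by n lines in general position:
    each new line k adds k new regions."""
    r = 1                # the empty plane is one region
    for k in range(1, n + 1):
        r += k           # line k is cut into k pieces by lines 1..k-1
    return r

def closed_form(n):
    return 1 + n * (n + 1) // 2

assert all(regions(n) == closed_form(n) for n in range(20))
print([regions(n) for n in range(5)])  # [1, 2, 4, 7, 11]
```

The same incremental argument in three dimensions (each new plane is cut by the previous ones into 1 + (k-1)k/2 pieces) answers part (b).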
A Brief Introduction to Retro-PK
I’ve contacted a guy, Bryan Williams, who publishes great parapsychology articles on a yahoo group named Psi Society. He allowed me to post his articles on this site as well, verbatim. I only allow
myself to reformat them a bit for the site. I believe you'll enjoy them. The first three articles I'll post will be on retroactive psychokinesis, also called Retro-PK. So, without further ado:
A Brief Introduction to Retro-PK
An interesting study was published in the latest issue of the Journal of Parapsychology that reports a possible lunar modulation effect on retroactive psychokinesis effects. Before I attempt to
describe the results of that study in a post, I thought I would provide a little background on retroactive psychokinesis for those in this group unfamiliar with the concept, so that it is a bit more clear what I am talking about in the post:
Precognition is often considered to be most perplexing form of ESP because it appears to be retrocausal; that is, it seems to involve a “backwards acting in time” process in which an effect appears
to precede its cause, counter to our usual assumptions that cause leads to effect. It turns out that psychokinesis (PK, or “mind over matter”) occurring on the microscopic (i.e., subatomic) scale also
appears to be capable of producing a “backwards acting in time” effect of its own, and this effect is called retroactive psychokinesis, or “retro-PK,” for short.
One of the first to report statistical evidence for retro-PK effects was physicist Helmut Schmidt (1976), then of the Mind Science Foundation in Texas, who was also the one to introduce the random
number generator (RNG) to parapsychology as a useful apparatus for testing microscopic PK. In a regular PK test conducted in real-time, a subject attempts to mentally influence the electronic
“coin-flips” of a binary RNG as the RNG is producing them. What Dr. Schmidt did differently in his study is that he had the RNG produce the “coin-flips” before the subject even attempts to influence
them (how much time before ranged from hours to even days), recording the results on magnetic tape without anyone looking at what they were. Later on, during the actual test, Dr. Schmidt would play
back the tape with the recorded RNG data to the subject, at which time the subject would try to influence them. But wait a minute…if the RNG data are already recorded on tape, and are thus already
assumed to be “set in stone,” how can the subject possibly influence them by PK? This is where the “backwards acting in time” assumption comes in. Since the RNG data are already recorded, it would
seem that in order to influence the data by PK, the subject would have to direct his or her PK influence backwards in time to the moment that the data were being recorded^1.
As impossible as it may sound, all three of Dr. Schmidt’s (1976) initial experiments did indeed produce results supportive of an ostensibly backwards-acting PK effect, with the results having
statistical odds ranging from twenty to one, to nearly a thousand to one, against chance occurrence. When he then compared these retro-PK results with regular PK results, he found little difference
between them, suggesting that both worked nearly the same. Nearly two decades later, Dr. Schmidt (1993) had repeated these experiments under the close, watchful eyes of five outside witnesses (who
were psychologists, physicists, or other parapsychologists), producing a combined retro-PK effect that had odds of about 8,000 to 1 of not being due to chance alone.
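Aside: the "odds against chance" figures quoted for RNG experiments of this kind come from ordinary binomial statistics. The sketch below shows the usual normal-approximation z-score computation; the trial counts are made-up illustration values, not Schmidt's actual data:

```python
import math

def binomial_z(hits, trials, p=0.5):
    """Normal-approximation z-score for `hits` successes in `trials`
    Bernoulli(p) trials, e.g. electronic coin flips from a binary RNG."""
    mean = trials * p
    sd = math.sqrt(trials * p * (1 - p))
    return (hits - mean) / sd

# Hypothetical run: 5,150 "hits" in 10,000 binary trials.
z = binomial_z(5150, 10_000)
print(f"z = {z:.2f}")  # z = 3.00
```

A z around 3 corresponds to roughly a one-in-several-hundred tail probability under the chance hypothesis; the large combined odds reported in meta-analyses come from pooling many such runs.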
Since Dr. Schmidt reported his first retro-PK study in 1976, many other parapsychologists have attempted to reproduce his work by conducting retro-PK studies of their own, and by the time that Dr.
Schmidt had reported the second study in 1993, 16 other studies by eight different researchers had also been reported. Dr. Dick Bierman (1998) of the University of Amsterdam carried out a
meta-analysis of all of the retro-PK studies conducted by Schmidt and others between 1975 and 1993 to see how robust the effect really was, finding that together they produced evidence for a positive
retro-PK effect that had statistical odds of around 18 million to 1 against chance.
Stimulated by these retro-PK results, Matthew Watkins and John Walker founded the RetroPsychoKinesis Project, a long-term, Internet-based retro-PK experiment that is hosted on Walker’s Fourmilab
website (http://www.fourmilab.ch/rpkp/). Since 1996, the project has been running individual retro-PK experiments using data recorded and saved beforehand from an RNG that uses radioactive decay as
its source of randomness, and the results are updated in a daily summary. The project is ongoing, so it is possible for anyone on the Internet to participate in the retro-PK experiments (if you want
to learn more or even participate in the experiments, go to the Web address above). It is the data collected as part of the Fourmilab RetroPsychoKinesis Project since 1997 that was used in the study
of a possible lunar modulation effect on retro-PK, which I will discuss in a post to follow.
- Bryan
^1 There is actually an alternative approach to retro-PK that sidesteps the “backwards acting in time” assumption, but since this approach requires knowledge of certain concepts of quantum physics
that many in this group may not have and would require me to make this post much longer than it already is to explain them in simpler terms, I won’t get into it here.
Schmidt, H. (1976). PK effect on pre-recorded targets. Journal of the American Society for Psychical Research 70(3), July. pp. 267 – 271.
Schmidt, H. (1993). Observation of a psychokinetic effect under highly controlled conditions. Journal of Parapsychology 57(4), December. pp. 351 – 372.
Bierman, D. J. (1998). Do psi phenomena suggest radical dualism? In S. R. Hameroff, A. W. Kazniak, & A. C. Scott (Eds.) Toward a Science of Consciousness II: The Second TUSCON Discussions and Debates
(pp. 709 – 713). Cambridge, MA: MIT Press/Bradford.
11 Comments
I have often wondered, after researching E.V.P., why it's not taken seriously as a form of complex structured micro-PK from the experimenter. It shows up on FFT spectrum shots over time, so it's not all apophenia.
Obviously ignore all the dubious/fake examples on YouTube – but there's a genuine phenomenon here.
I know how to use telekinetic energy…
a couple weeks ago i was opening and closing my bedroom door with my mind. it was the morning after a party and i almost had a hangover, but i was in an almost trance kind of mode. and i was slowly opening and closing the door from across the room with my emotions. i did this for 15 mins, then decided to go back to sleep. haven't done it since, but it was a cool experience.
If people could really move things with their minds, they'd have collected James Randi's million-dollar prize, but no one has even passed the initial tests, much less been able to prove ESP.
It’s all b.s.
Can someone who gets JoP identify the "interesting study [...] published in the latest issue of the Journal of Parapsychology" that Williams notes? Is he talking about the 2005 study that he cites in the second part of his piece, which is available online at:
That paper I’d seen, but gave up reading it when tracking all the different factors and statistics became excruciating. Anyone up for studying it and stating what it finally says about RPKP data?
A couple more years of data is now available, and we could check how well the best previous result predicts subsequent outcomes.
More on "one-to-one correspondence" math and parapsychology:
OK another example of synchronicity in the West is the concept of “coherence” whereby two sine-waves are aligned via phase shift.
This coherence relies on a one-to-one correspondence.
But one-to-one correspondence is a VISUAL concept — not an auditory concept.
So in the book “Magic of the Senses” by a German Biophysicist — von Droscher or something it’s detailed how
The Ear perceives a sine-wave and its overtones as being in the same phase relation — EVEN THOUGH THE OVERTONES ARE NOT IN THE SAME PHASE RELATION.
So visually we get a complex sinusoidal function that is analyzed for difference in amplitude and frequency, i.e. that which creates “timbre” differences in music, with a change in intensity of
But in Pythagorean or Taoist harmonics the visual difference is not used — the reliance is the sine-wave as
ASYMMETRIC resonance that can not be contained as a frequency.
In Western science frequency means the time it takes for the sine-wave to return to it’s source — in a CLOSED system.
So even quantum chaos is considered a “closed” system and in calculus open systems are contained as much as possible.
Statistics, even when using nonlinear functions and real number functions (non-quadratic functions), still relies on a logarithmic, one-to-one correspondence for analysis.
Basically the use of a circle as 360 degrees relies on an equal-tempered tuning system. This degree system is then converted into matrices through the Poisson Bracket that converts classical
physics into quantum mechanics.
So even though the matrices of quantum mechanics are noncommutative they are still converted into a logarithmic-based symmetrical geometry.
Only by seeing PI through its ancient Egyptian or Vedic definition — through harmonics of 9/8 or 22/7 — can the asymmetric resonance be achieved.
So for example in this analysis of set theory total reliance on one-to-one correspondence is used:
But Cantor Set Theory only proved that Real Numbers have a greater infinity than Rational Numbers but Cantor could not prove a LIMIT for a Real Number Set.
Bertrand Russell observes that the first real number, the square root of two, is thereby a "convenient fiction," and one interpretation of the incompleteness theory that Godel built from Cantor is that math is "inherently inconsistent." (Professor A.K. Dewdney's book "Beyond Reason")
The mystery of ear science remains unsolved via the
Time-Frequency Uncertainty Principle of Dennis Gabor.
Riemann tried to solve the mathematics of the Ear but was unable to….
OK so here’s the paragraph on “one-to-one correspondence” in noncommutative geometry:
What is Noncommutative Geometry ?
Shortly speaking – in a very broad sense – it is a generalization of geometry. Geometry deals with spaces, their properties and features, maps between them and structures built on them.
Surprisingly, one can attempt to have a look at spaces from a different angle and then it appears that they are only a small fraction of a huge family of similar objects. This different point of
view is offered by the theory of algebras. The starting point is the Gelfand-Naimark theorem, which establishes a one-to-one correspondence between a certain class of algebras (C* commutative)
and a class of spaces (topological, compact). To see this link let us spend a while on this correspondence. Having a topological, compact space we can consider the algebra of continuous
complex-valued functions, with the pointwise addition and multiplication. It can be proven that with a suitable (maximum) norm it is a commutative C*-algebra. Now, let us see it in the opposite
direction. Take a commutative C*-algebra and consider the set of all characters on it – characters are complex-valued maps which preserve the operations in the algebra. Then, it can be shown that this set can be equipped with a topology such that the original algebra is identical with the continuous functions on it. From the algebraic point of view there is only one thing which makes the
above construction of the link between space and algebra possible: the fact that the algebra is commutative. However, there are many noncommutative algebras. The main idea of Noncommutative
Geometry is to treat the noncommutative objects as if they were related to some noncommutative spaces although there are no such spaces in the usual sense of the word. Nevertheless many of the
geometrical constructions (conformal, metric structures, differential calculus, fibre bundles etc.) can be carried out in this situation, and what is even more remarkable, the algebraic language
seems to be more appropriate also for the “classical” objects.
from: http://th-www.if.uj.edu.pl/~sitarz/ncgmain.html
So the big point is you can’t visualize noncommutative geometry — yet it’s still based on one-to-one correspondence.
Even Octonions, which throw out both the commutative and associative principles of algebra, STILL rely on a one-to-one correspondence of symmetry:
This isomorphism turns out to map points to lines, and in fact, it sets up a one-to-one correspondence between points and lines.
from: http://www.math.ucr.edu/home/baez/octonions/node12.html
Synchronicity is a Western term and is measured by symmetric-based mathematics, meaning a one-to-one correspondence of letter and number. All parapsychology studies rely on statistics using logarithmic-based math, and even noncommutative math still relies on symmetry.
In 1995 I woke up from a dream more vivid than being awake. I wrote the dream in my journal. It was of activist friends standing on the roof of a house to protest against the destruction of a
forest. I wrote that I thought the dream was predicting the future.
In 1998 I looked at a photocopy of a news article that was exactly the same as the above dream. No conscious recognition occurred, but a slight feeling was noticeable. For some reason I went to my parents’ house and decided to page through my old journal, for no particular reason. All of a sudden I came across my journal entry, about which I had completely forgotten.
The incident was from a big protest to stop the logging of old growth in Minneapolis in a sacred indigenous area where a huge spring gave water at the confluence of the Minnesota and Mississippi
river. The protest finally ended when a military raid took place on the activist encampment. I had gotten arrested previously in a civil disobedience protest, I had researched the history of the
issue and I had camped in the forest.
So somehow our future, in detail, is already existing. How? Because Western science, i.e. synchronicity, is measured by spatial sense, while time is really a measurement of pure Number that is asymmetric, i.e. does not have a one-to-one correspondence of letter and number. This is modeled in Pythagorean harmonics whereby C to G is 2:3 and G to C is 3:4, in violation of symmetric-based mathematics.
When our consciousness resonates with time as pure Number then it’s possible to bend time so that our consciousness becomes aware of the pure knowledge that exists beyond spacetime. It is said
that 70% of our future is already known, based on the trajectory of consciousness determined by the time of our birth and the cosmic alignment.
It’s possible to travel through time in meditation so that even our past lives are known, and yogi masters can indeed read our past lives. But the only path to stop reincarnation is to logically pursue pure consciousness itself until our mind resonates with its source, in the right side of the heart. The heart literally stops for over 10 minutes, and then “eternal liberation” is achieved and the heart starts up again and the person continues living.
That person is then a Jnana and accumulates no further karma, since with each breath the mind empties out or resonates back to the source of the heart-mind. Can “synchronicities” happen for such a
person? Indeed the rest of the life of that individual will be determined by the karma previously accumulated with that current body. That life may appear “out of synch” for an outside observer
but for the Jnana that life, in that body, no longer exists.
Pure knowledge is outside of time and therefore can not have synchronicities.
There’s a lot to like about the RetroPsychoKinesis project at Fourmilab. The whole thing is automated, from running the trials to reporting the results. They opened the source code that runs the tests, they offer the complete data for download, and their system updates the results every day.
Williams’ “Brief Introduction to Retro-PK” closes with a paragraph on the RPKP project, but the only outcome he notes is this Lunar thing. In fact RPKP has been looking for the same retro-pk
phenomenon Schmidt had reported. RPKP’s results are at chance.
Actually, it was the Quantum Mechanical interpretation that inspired Dr. Schmidt. It isn’t really distinct from retro-causation: it’s a matter of how you look at what is happening. It turns out
that a limited form of retro-causal effects (delayed choice experiments) is a hot topic in QM research just now — they even use the term sometimes.
However, research in “decoherence” that has been done since Schmidt’s initial experiments means that it is extremely unlikely that the QM mechanism Dr. Schmidt initially envisioned (that the recorded magnetic bits remained in a state of quantum superposition until observed physically by the PK agent) could apply.
One alternative to a retro-causal effect on a purely mechanical system (albeit quantum mechanical) is that the PK influence occurs at the time of target recording, guided by precognition. One of the paradoxes of parapsychology is that combining multiple apparent psi “steps” into one action produces results no weaker than an action requiring a single apparent step. Precognition is, of course, also retrocausal, but some would argue that the influence is on something that is not strictly “mechanical” — the mind of the percipient.
Also relevant here is a theory by Ed May and James Spottiswoode (I believe that James is the author of the paper on lunar effects mentioned in the above note) that they call Decision Augmentation
Theory. It claims that many if not all examples of micro-PK are better explained as an unconscious selection of, for example, when each trial is to take place. This still requires precognitive
“guiding” of actions.
An aside: at about the same time that Schmidt was publishing his experiments I delivered a theoretical paper at a parapsychology conference that essentially proposed that *all* psi phenomena are
a form of retro-causal PK. (Which explains, among other puzzling characteristics, why multiple psi steps are no harder than a single one).
EVP I’ve researched, and they (EVP) contain a lot of precognitive information regarding my future.
BUT the phenomenon doesn’t support DAT theory, does it?
Area
Surface area is the measure of how much exposed area any two- or three-dimensional object has.
Units for measuring surface area include:
Old British units, as currently defined from the metre:
The article Orders of magnitude links to lists of objects of comparable surface area.
For a two-dimensional object the area and surface area are the same:
• square or rectangle: l × w (where l is the length and w is the width; in the case of a square, l = w)
• circle: π×r^2 (where r is the radius)
• any regular polygon: P × a / 2 (where P = the length of the perimeter, and a is the length of the apothem of the polygon [the distance from the center of the polygon to the center of one side])
• a parallelogram: B × h (where the base B is any side, and the height h is the distance between the lines that the sides of length B lie on)
• a trapezoid: (B + b) × h / 2 (B and b are the lengths of the parallel sides, and h is the distance between the lines on which the parallel sides lie)
• a triangle: B × h / 2 (where B is any side, and h is the distance from the line on which B lies to the other point of the triangle). Alternatively, Heron's formula can be used: √(s×(s-a)×(s-b)×
(s-c)) (where a, b, c are the sides of the triangle, and s = (a + b + c)/2 is half of its perimeter)
Some basic formulas for calculating surface areas of three-dimensional objects are:
• cube: 6×(s^2) , where s is the length of any side
• rectangular box: 2×((l × w) + (l × h) + (w × h)), where l, w, and h are the length, width, and height of the box
• sphere: 4×π×(r^2) , where π is the ratio of circumference to diameter of a circle, 3.14159..., and r is the radius of the sphere
• cylinder: 2×π×r×(h + r), where r is the radius of the circular base, and h is the height
• cone: π×r×(r + √(r^2 + h^2)), where r is the radius of the circular base, and h is the height.
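As a quick numeric check, a few of the formulas above can be coded directly. This is an illustrative sketch (the function names are my own), using Python's `math` module:

```python
import math

def heron(a, b, c):
    # Heron's formula: triangle area from the three side lengths
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def sphere_area(r):
    return 4 * math.pi * r ** 2

def cylinder_area(r, h):
    # 2*pi*r*(h + r): lateral surface plus the two circular ends
    return 2 * math.pi * r * (h + r)

def cone_area(r, h):
    # pi*r*(r + slant height), where the slant height is sqrt(r^2 + h^2)
    return math.pi * r * (r + math.sqrt(r ** 2 + h ** 2))

print(heron(3, 4, 5))   # a 3-4-5 right triangle: 6.0
print(cone_area(3, 4))  # slant height 5, so pi*3*(3+5)
```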
An artist should feel free to add some example diagrams.
If one adopts the axiom of choice, then it is possible to prove that there are some shapes whose area cannot be meaningfully defined; see Lebesgue measure for more details.
All Wikipedia text is available under the terms of the GNU Free Documentation License
Lost in Maths! (2D Sound location with 4 sensors)
This post is about my ongoing project to do fast 2D sound location on a ping pong table. These are some ideas I've been having that I wanted to share, since I need some help!
Ok here's the problem... we have a rectangular area ABCD and somewhere in that area (point X) there is an event that produces a sound (e.g. a ping pong ball strikes the surface). This results in
sound waves travelling outwards to the corners where they are picked up by sound detectors.
The first sound to be picked up at each sensor arrives at a time that obviously depends on that sensor's distance from the original sound. Let's call the times tA, tB, tC, tD. (Each sensor will pick up reverberations and echoes after the original sound, but it's the first "edge" of the sound we're interested in.)
Now since we don't know when the original sound happens (we only detect it when it got to the closest sensor) we don't actually know the real values of tA, tB, tC, tD but we get relative times from
the first sensor. In this case X is closest to D, the sound reached D first. The time we read at each sensor A,B,C is relative to tD. Graphically this can be shown like this (each red line is reduced
by distance XD)
Now we can use these reduced distances to define circles based on the points A,B,C and D (radius at D is initially zero)
As pointed out by Arduino forum member Necromancer on
we can find X geometrically by progressively increasing the radii of the circles at A,B,C and D by the same amounts until all the circles intersect at a single point. At this point we have added back
the unknown distance XD and the circles intersect at point X as shown below.
The problem is that this iterative calculation of circle intersections is processing-heavy and would probably take too long to solve on a microcontroller to be responsive enough for my application.
However I started thinking about getting a "head start" by doing some simple calculation to get the initial increment where all the circles intersect for the first time, and go forward from there.
Here's what I mean.... point Q is a point on the line AC which is equidistant from the points where the initial circles around A and C cross AC.
The actual coordinates of Q aren't needed, we just need to know how much to increase the radii of the circles around A and C so that they intersect at the first time (which will happen at Q)
The calculation is simply the length of AC minus the radii of the two circles, all divided by two. If we calculate this value and add it to the radii of the circles they will meet at Q.
If we do the same calculation for lines AB, BC, CD, AD, AC, BD we'll get 6 different values that we should increase circle radii by to make them intersect. We can just take the largest of all the 6 values and apply that as the base value by which the radii of all the circles should be increased to get to the initial point where all the circles intersect.
This will not necessarily be point X... we might need to iteratively expand the circles a bit more to make them all intersect at the same point. However we got a big head start and so far we've just done simple arithmetic.
I made a simple program where I could click with the mouse to simulate the values arriving at the 4 sensors, then applied the above calculations... in many cases the results are very close to the final point X. For example:
In other places the accuracy is not so spot-on, but it looks like some kind of simple averaging of the points of intersection might give a position good enough for what I want, and without having to
iteratively apply complex calculations (like square roots), so it should be pretty fast.
Only three sensors are strictly required for multilateration, however I found that adding the fourth sensor made a massive difference to the accuracy of the above calculation.
The next step is to calculate the points of intersection and give it a try with some averaging, to see if I can avoid the iterative method. However I'm not a mathemetician, so if anyone has a better
idea, please let me know!
Update: OK, I gave the averaging thing a try. This clip shows my test program using these calculations to track the mouse pointer. The tracking is not perfect and there are a couple of places
(vertical midpoint of the area, towards left and right side) where it is worst, but I think this is good enough (especially given that my sensors won't be perfect!).
The program displays the four circles calculated as above. The final calculated position is displayed by the red crosshairs
The process is as follows:
1. Firstly I need to simulate the sensor inputs, so I take the position of the mouse pointer, calculate the distance to each corner and then subtract the minimum distance from all the others. The results (tA, tB, tC, tD) are representative of time-of-arrival info from real sensors.
2. For each edge, and the two diagonals I get the length of the line (area width, height or diagonal distance) and subtract the relative arrival times for the points at each end.
offset1 = (width - tA - tB)/2
offset2 = (width - tC - tD)/2
offset3 = (height - tA - tD)/2
offset4 = (height - tB - tC)/2
offset5 = (diagonal - tA - tC)/2
offset6 = (diagonal - tB - tD)/2
Now take the maximum of these 6 values:
offset = max(offset1,offset2,offset3,offset4,offset5,offset6)
now calculate the circle radius by adding the offset to each time.
rA = tA + offset
rB = tB + offset
rC = tC + offset
rD = tD + offset
now calculate the intersection points of all pairs of circles. I used the C code example from here http://paulbourke.net/geometry/2circle/
There are six pairs of circles, AB, AC, AD, BC, BD, CD. Not all may intersect (ignore pairs which do not intersect). Otherwise we get 2 intersection points (which may be identical if the circles just touch) for each pair of circles. Let's say that for each pair of circles we can get two intersection points P and P'
For each pair of points P and P', one will be closest to our target point and the other should be discarded. The way I did this was to calculate the average of all the points, then go back through
the list selecting the point from each pair that was closest to the calculated average point and then averaging just these "closest points".
i.e. take the first "rough" average of all points P and P' - let's call it (Xave, Yave), then recalculate the average position using either P or P' from each pair based on the condition:
if (Xp - Xave)^2 + (Yp - Yave)^2 > (Xp' - Xave)^2 + (Yp' - Yave)^2
then use point P'
else use point P
The resulting average (X , Y) is the final calculated point.
Better accuracy would be obtained by iteratively increasing rA, rB, rC and rD and recalculating the intersection points until they are at their closest to each other. However I don't think I need this - the sensor input is unlikely to be so accurate it would benefit from this.... and I think it would be computationally expensive due to calling sqrt( ) many times and therefore slow.
Once again I'm no mathematician and I'd be grateful for any advice here!
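To make the steps above concrete, here is a rough Python sketch of the whole pipeline as described: simulated arrival times, the offset/radius step, pairwise circle intersections, and the two-pass averaging. The corner layout, table dimensions and names are my own illustrative assumptions, not the project's actual code:

```python
import math

W, H = 2.74, 1.525                       # assumed table size (metres)
CORNERS = {'A': (0.0, 0.0), 'B': (W, 0.0), 'C': (W, H), 'D': (0.0, H)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def simulate_times(x, y):
    # relative arrival "times" (distances here), with the minimum subtracted
    d = {k: dist((x, y), c) for k, c in CORNERS.items()}
    m = min(d.values())
    return {k: v - m for k, v in d.items()}

def radii(t):
    diag = math.hypot(W, H)
    offsets = [(W - t['A'] - t['B']) / 2, (W - t['C'] - t['D']) / 2,
               (H - t['A'] - t['D']) / 2, (H - t['B'] - t['C']) / 2,
               (diag - t['A'] - t['C']) / 2, (diag - t['B'] - t['D']) / 2]
    off = max(offsets)                   # largest of the six offsets
    return {k: v + off for k, v in t.items()}

def circle_intersections(p0, r0, p1, r1):
    # two-circle intersection (cf. the Paul Bourke derivation linked above)
    d = dist(p0, p1)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                        # no intersection: skip this pair
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))
    xm = p0[0] + a * (p1[0] - p0[0]) / d
    ym = p0[1] + a * (p1[1] - p0[1]) / d
    return [(xm + h * (p1[1] - p0[1]) / d, ym - h * (p1[0] - p0[0]) / d),
            (xm - h * (p1[1] - p0[1]) / d, ym + h * (p1[0] - p0[0]) / d)]

def locate(t):
    # assumes at least one pair of circles intersects
    r = radii(t)
    keys = list(CORNERS)
    pairs = [circle_intersections(CORNERS[a], r[a], CORNERS[b], r[b])
             for i, a in enumerate(keys) for b in keys[i + 1:]]
    pairs = [p for p in pairs if p]
    pts = [q for pair in pairs for q in pair]
    ax = sum(p[0] for p in pts) / len(pts)      # first, rough average
    ay = sum(p[1] for p in pts) / len(pts)
    best = [min(pair, key=lambda q: (q[0] - ax) ** 2 + (q[1] - ay) ** 2)
            for pair in pairs]                  # keep the closer of each pair
    return (sum(p[0] for p in best) / len(best),
            sum(p[1] for p in best) / len(best))

print(locate(simulate_times(1.0, 0.5)))  # close to (1.0, 0.5)
```

With ideal (noise-free) simulated times the estimate lands close to the true point; the residual error is the "head start only" approximation discussed above.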
18 comments:
1. Just off the top of my head (so it may not even be feasible), but what about using Pythagoras and some simple y=mx+c type line equations on two of your lines.
Calculate the gradient (m) for line1 and line2 using simple trig (sin/cos) and where the two lines intersect (i.e. the equations y=mx+c are equal for both) you'd have your X and Y values. I did
similar stuff for a Flash-based game a few years ago and it worked quite well. You might need floating point maths on your mcu to do it though.
2. Just another thought - http://www.teacherschoice.com.au/maths_library/trigonometry/solve_trig_sss.htm shows how to solve triangles. Using the AC and DB diagonals, and given you know AB, BC, CD
and DA, you could solve all this with triangles.
But on a microcontroller, you tend to use look-up tables for sin/cos operations anyway to speed things up. So why not create an app which splits the table top up into a grid of fine enough
resolution, pre-calculate all the values for X at each point, shove them all in a big look up table, then compare when you get the signals in?
How fine do you need your resolution? Would a grid of say 20 x 10 be accurate enough? Just get a PC to do all the crunchy calculations up front, and do simple comparisons on the mcu at runtime?
3. I'm working on a similar problem for school (an electronic target) and have some suggestions.
1. I'm not really sure how ping pong tables are constructed, but you might run into problems with the propagation speed differing depending on direction. Wood propagates sound significantly faster along the grain than across it. I imagine the table is made of plywood, so it might even out since the individual plies alternate. MDF would probably be okay. Obviously, a solid plastic or
aluminum table would be best.
2. If your triggering event is just a basic threshold, your triggering will be increasingly off as the source moves closer to one sensor than the other, since one sensor will receive a
significantly greater signal and will trigger much earlier than the other. To get around this, we use a combination of threshold triggering and zero detection. We time based on the first zero
crossing after a threshold is reached. Depending on the accuracy you need, though, threshold triggering might be fine.
3. In order to solve the location from the TDOA numbers, you have to calculate some hyperbolas (you can also do it experimentally as you've seen but that's very slow). Every pair of sensors will
give you one valid hyperbola and the intersection of two hyperbolas will give you the source location. All you need is, for each pair, which triggered first, the time difference, and the
propagation speed. You can do this with three sensors, but you'll need to know the propagation speed.
4. In order to figure out the propagation speed, we use four sensors to build four hyperbolas (you could do six but we haven't figured out how to do the diagonal ones yet). This will give you
several possible source points and then you can adjust the propagation speed until they all line up (as closely as possible). You can estimate error by calculating the standard deviation of the
closest you can get.
You're probably wondering how to solve the hyperbolas. I have spent a ridiculous amount of time trying to figure that out, reading through journal articles and theses that were of no help. This
guy teaches a class on it and breaks it down in a very useful way:
I made very little progress solving the hyperbolas until I found this. Particularly, look at (1) on "algebra to show..." Nice and easy to solve for x and y.
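For what it's worth, if the closed-form hyperbola algebra gets painful, the same TDOA system can also be solved numerically. Below is my own toy sketch (made-up geometry, unit propagation speed, brute-force grid search), not code from the class linked above:

```python
import math

def tdoa_residual(x, y, sensors, tdoa, v):
    # squared mismatch between measured and predicted time differences,
    # with sensor 0 as the reference
    d = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
    return sum((tdoa[i] - (d[i] - d[0]) / v) ** 2 for i in range(1, len(sensors)))

def locate_grid(sensors, tdoa, v, w, h, step=0.01):
    # brute-force: scan the table for the minimum-residual point
    best, best_pt = float("inf"), (0.0, 0.0)
    ny, nx = int(h / step) + 1, int(w / step) + 1
    for iy in range(ny):
        for ix in range(nx):
            x, y = ix * step, iy * step
            r = tdoa_residual(x, y, sensors, tdoa, v)
            if r < best:
                best, best_pt = r, (x, y)
    return best_pt

# toy example: 2 m x 1 m table, source at (0.6, 0.3), propagation speed 1
sensors = [(0, 0), (2, 0), (2, 1), (0, 1)]
src = (0.6, 0.3)
d = [math.hypot(src[0] - sx, src[1] - sy) for sx, sy in sensors]
tdoa = [di - d[0] for di in d]
print(locate_grid(sensors, tdoa, v=1.0, w=2.0, h=1.0))  # close to (0.6, 0.3)
```

A grid search is far too slow for a microcontroller, but it is a handy reference implementation for checking faster analytic methods against.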
4. Thanks for your help guys! Interesting point about the zero crossing, Matt. I'd not thought about that... I hope I can set the sensitivity on the sensors close enough to be "good enough" for what
I need. Now to digest all this great info.
5. I've been thinking about this problem since you told me about it a couple of weeks ago. One thing that strikes me as being a slight spanner in the works is that the majority of table tennis
tables are hinged in the center for easy storage. If this is the case you will not be able to use the method the way you have suggested since the vibration through the table will stop dead at the
hinge, or if it should transit it will likely ruin your timings. The only solution I can see to resolve this would be to apply the above method on each side (either 8 or 6 sensors in total).
How rude of me, it's Antony by the way, I came to visit BB a couple of weeks ago (interested in MIDI!).
6. E-mail me sometime (click the contact link on my site) and I'll send you some stuff you might find useful.
7. Two words: Sound ranging
9. This comment has been removed by the author.
10. Okay, I removed my previous comment because it didn't jive well with me.
Basically the first sensor to hear a sound triggers a timer. Then the difference between the first sensor and the other sensors hearing a sound will be used to calculate the radii of the sound
relative to the first timer that hears it.
Use that with some trig to find the location of the ball.
Is it that simple? I don't know but I figured I'd throw it out there.
11. Anyway you could post your code?
12. Sure- you can see the code here
13. Awesome blog! Just catching up. I see you did this over a year ago! I think the math can be SIGNIFICANTLY easier if you use the fact that tA^2*tC^2 = tB^2*tD^2. You have to use the unknown tI of the time between original impact and the first sensor firing, and deal with your propagation speed in order to actually 'solve' the triangle, but the fact that your sensors are known distances apart and at known angles allows you to assume an unknown constant for velocity in the law of cosines, and you can solve that twice for two adjacent angles at a corner to get all your unknowns.
I REALLY like that circles video that you posted above. Is that SW somewhere I can download? I'd like to try it with some additional calculations for my system!
1. sorry, just reread my post...the equation is tA^2+tC^2 = tB^2+tD^2.
And if the tI comment doesn't make any sense, what I mean is that one of your lines from a sensor to the impact will have tX of 0 and that distance is actually tI*speed of propagation (in your example we're talking about tD) and then tA is tI plus some actual time DIFFERENCE of arrival from sensor D to sensor A.
14. Hey Jay
very interested in your comment.. I never did find a better mathematical solution but decided the accuracy of my maths approach was appropriate to the (rough) precision of the sensing. We moved
forward with that and "noisy table" went out in public last year
drop me an email jason_hotchkiss at hotmail dot com if you want to talk maths :) and thanks for your comment
15. http://pppp.media.mit.edu/
A ping pong table that sensed ball hit locations and displayed projected visualizations based on the hits.
1. wow! that's spooky... very similar. They also have some very interesting differences in sensor locations and signal conditioning electronics that I need to try to understand! Thanks for the
16. Accuracy is 1-2 inch which is really good, but I can't see any schematic of how they connect it to Arduino, nor do I see any code. Maybe not an open source project. What about the accuracy of your ball
Summer 2013
Nathan Hull
Wednesday, August 14th
Tuesday, August 13th
Monday, August 12th
Thursday, August 8th
Wednesday, August 7th
Tuesday, August 6th
Monday, August 5th
Thursday, August 1st
Wednesday, July 31st
Monday, July 29th (Review for Midterm)
Thursday, July 25th
Wednesday, July 24th
Monday, July 22nd
Thursday, July 18th
OVER-THE-WEEKEND PROBLEM - Read in a value for the height of an X, and print it out using "*"
So, if you input 5, you would get:
*   *
 * *
  *
 * *
*   *
But, if you input 6, you would get:
*    *
 *  *
  **
  **
 *  *
*    *
Also, make certain that it works for an input of 1:
*
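One possible Python solution (my own sketch, not the course's posted answer): each row i of an n-row X has stars at columns i and n-1-i, which coincide in the middle row when n is odd.

```python
def draw_x(n):
    lines = []
    for i in range(n):
        row = [" "] * n
        row[i] = "*"            # the top-left to bottom-right diagonal
        row[n - 1 - i] = "*"    # the top-right to bottom-left diagonal
        lines.append("".join(row).rstrip())
    return "\n".join(lines)

print(draw_x(5))
print(draw_x(1))  # prints a single *
```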
Wednesday, July 17th
OVERNIGHT PROBLEM - Random melody: do1, re, mi, fa, so, la, ti, do2. The melody should start with either 'do' (do1 or do2) and stop when two 'do's are found in a row. Print the length of the melody.
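A possible Python sketch of this one (my own interpretation: the melody ends as soon as any two 'do's, do1 or do2, are adjacent):

```python
import random

NOTES = ["do1", "re", "mi", "fa", "so", "la", "ti", "do2"]

def is_do(note):
    return note in ("do1", "do2")

def random_melody():
    melody = [random.choice(["do1", "do2"])]   # start on a 'do'
    while True:
        melody.append(random.choice(NOTES))
        if is_do(melody[-1]) and is_do(melody[-2]):
            break                              # two 'do's in a row: stop
    return melody

m = random_melody()
print(m)
print("length:", len(m))
```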
Tuesday, July 16th
OVERNIGHT PROBLEM - Keep choosing Jellybeans until four reds are chosen IN A ROW
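A Python sketch under my own assumptions about the jar (four colours, drawing with replacement):

```python
import random

def draws_until_four_reds(colors=("red", "green", "yellow", "black")):
    # draw jellybeans until four reds come up consecutively;
    # return how many draws it took
    run, count = 0, 0
    while run < 4:
        count += 1
        run = run + 1 if random.choice(colors) == "red" else 0
    return count

print(draws_until_four_reds())
```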
Monday, July 15th
Thursday, July 11th
Wednesday, July 10th
Tuesday, July 9th
Stackelberg Game for Product Renewal in Supply Chain
Discrete Dynamics in Nature and Society
Volume 2013 (2013), Article ID 726536, 10 pages
Research Article
School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China
Received 8 November 2012; Accepted 27 January 2013
Academic Editor: Xiaochen Sun
Copyright © 2013 Yong Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The paper studied the process of product renewal in a supply chain composed of one manufacturer and one retailer. There are an original product and a renewal product in the supply chain. A market share shift model for the renewal product was first built on an increment function and a shift function. Based on the model, the decision-making plane consisting of two variables was divided into four areas. Since the process of product renewal was divided into two stages, a Stackelberg-Nash game model and a Stackelberg-merger game model could be built to describe this process. The optimal solutions for the product pricing strategies of the two games were obtained. The relationships between renewal rate, cost, pricing strategy, and profits were obtained by numerical simulation. Some insights were drawn from this paper. A higher renewal rate will make the participants’ profits and the total profit increase at the same marginal cost. More importantly, the optimal decision for the SC is for the RP to come onto the market with a large price differential between the OP and the RP.
1. Introduction
With the development of IT, the product renewal process has become faster and faster. This situation brings some new problems to the traditional supply chain (SC), such as the dynamic nature of an SC and the coordination of relationships between participants of an SC. The dynamic nature of an SC includes the dynamic variety of products; that is to say, the market requires the SC to satisfy diverse customer needs as fast as possible, with the best quality, which calls for the SC to improve its performance to adapt to product variety. A valid way for SC enterprises to follow market variation is to renew their existing products, which makes full use of an enterprise’s existing resources and the supply/distribution channels of the SC. Thus, the SC can adapt to a changing market economically and quickly. Product renewal is an effective method of strengthening an SC enterprise’s core competitiveness, and it can even determine the survival of an enterprise. There are quite a few cases in which enterprises collapsed because of mistakes in product renewal decision making, such as the failure of WANGAN Computer Corp.
Now, it is necessary to differentiate the “renewal product (RP)” from the “innovated product (IP).” An IP’s structure and principle are different from the original product (OP), while an RP’s main structure and principle are the same as the OP’s. Furthermore, an RP has some appended components or upgraded functions. Thus, an RP is the renewal of an existing product. At present, most research is focused on completely new product innovation, including management of the product innovation process, optimization of product innovation investment, promotion of new products, and design of product innovation drive mechanisms. Dereli et al. proposed a framework of rapid response for innovative product development using a reverse engineering approach [1]. Bourreau et al. studied the effects of modular design on firms’ product innovation strategies and postinnovation competition in digital markets [2]. Ju and Xiao built a model of product lifecycle evaluation based on context knowledge [3]. Wan et al. built a new product investment model [4]. Research about product innovation based on the SC framework has also emerged. Huo studied the R&D strategy in SCs using game theory [5].
At present, research concerning product renewal has been carried out. Bass P. I. and Bass F. M. proposed a new product expansion model and studied the process of multigeneration renewal products coming onto the market [6, 7]. Wu examined the reuse/redesign, quality, speed-to-market, and marketing decisions for two consecutive generations of a multicomponent modular product, utilizing stylized models [8]. Druehl et al. developed a model to gain insight into which factors drive the pace of product updates [9]. Huang and Ling studied the tracks of product updates of ICT enterprises [10]. Koca et al. studied the product rollover strategy decision, where a firm decides whether to phase out an old generation product to be replaced by a new one with either a dual or single roll [11]. Quan analyzed the relationship of the optimal launch time with parameters in the process of introducing a renewal product [12].
The other focus of product renewal research is on the marketing process of the renewal product, especially pricing decisions. Luo and Tu studied a price decision problem for a product renewal supply chain based on the Nash game [13]. Wei studied the optimal decisions on inventory and pricing in a supply chain in which there are original products and renewal products [14]. But the existing work on product renewal is far from sufficient.
The RP and OP are virtually differentiated products. When they are on sale in the same market, they can substitute for each other. Compared with common differentiated products, the substitution of the RP for the OP is unidirectional and partial. Therefore, the arrival of the RP on the market will have some influence on the OP market. At the same time, each entity in the SC has an independent profit. Therefore, the coordination of the SC becomes much more complex when there are both an RP and an OP in the SC. A potent tool for coordinating an SC is price. The fluctuation of price makes supply and demand tend toward an equilibrium. On the other hand, price fluctuation can distribute the profit of each entity in the SC reasonably, so as to achieve the aim of SC coordination. Therefore, pricing strategy is the main subject of this paper.
The SC including OP and RP is called product renewal SC. The change of market share caused by product renewal in the SC is studied in this paper. When there is a RP in SC, the enterprises in SC will
make strategies for both OP and RP to achieve an equilibrium of the enterprises' profits. Compared with other research, this paper is the first to study a Stackelberg game for product renewal, aiming to solve complex decision-making questions in an actual renewal SC, and it obtains some novel insights.
The rest of this paper is organized as follows. Product renewal model is formulated in Section 2. In Section 3, we provide a Stackelberg-Nash Game based on product renewal model. Stackelberg-merger
game is played in Section 4, and numerical examples are provided in Section 5. Finally, we conclude our paper in Section 6.
2. Model Description
2.1. Assumption
Consider a two-stage SC composed of one manufacturer and one retailer. First, the manufacturer produces a product and sells it in the market via the retailer. After a period of renewal, the manufacturer develops the product's renewal product and sells both products in the market via the retailer.
In order to fully describe the model of the SC, we state the following hypotheses.
(i) The retailer is in a monopoly position; that is, only the retailer sells the two products in the market.
(ii) Consumer repurchase is possible, and each customer buys only one product at a time. When there is an RP, some customers of the OP may shift to buying the RP; in other words, there is repeat purchase for the products studied in this paper, such as toys and small electric appliances.
(iii) There is no stochastic demand in the market, so demand is decided only by price.
(iv) Inventory is not considered.
(v) The manufacturer's productivity is sufficient to satisfy demand.
(vi) There is no fixed cost per unit product, and the marginal production cost is constant for the manufacturer.
(vii) Product renewal generally improves the product's performance.
The SC model of this paper is as follows. There is one manufacturer and one retailer in the SC. Suppose the manufacturer produces the two products at their respective marginal costs and then distributes them to the retailer at wholesale prices. Based on the wholesale price, the retailer adds a price markup to each product to form the market retail price. The retailer also incurs a selling cost per unit. The product renewal SC, in which one manufacturer and one retailer sell the combination of products, is shown in Figure 1.
2.2. Market Model
The product renewal rate is used to measure the change in product performance caused by product renewal. Combined with price and the performance-price ratio, the product renewal rate can be used to forecast the change in market share. In fact, what matters is not the absolute performance value but the ratio of the OP's performance value to the RP's [15], namely, the renewal rate.
The RP marketing process can be divided into two stages.
In Stage 1, there is only the OP in the market. Given the OP price, according to [15], the market demand for the OP follows expression (1), in which one parameter denotes the market scale and the absolute value of another denotes the price elasticity.
In Stage 2, the RP comes onto the market and competes with the OP. The RP's arrival causes several changes in the market. The RP stimulates new consumption demand because of its added or upgraded functions, which enlarges the latent customer base of the total product market; the resulting increment is related to the original market demand, is a decreasing function of the RP price, and is an increasing function of the renewal rate. Customers also tend to purchase the RP: since the performance-price ratio of the RP is generally higher than that of the OP, some of the OP's customers will shift to buying the RP. The number of shifting customers is the shift demand. This will cause the OP's price to be adjusted accordingly.
Definition 1. There exists an OP and its RP in the market. Suppose that the demand of is , and the product renewal rate is . Then satisfies the following function: where is the influence coefficient
of and . is the influence coefficient of RP’s price on its market increment, and .
From and , and can be obtained. If , the OP's demand will be zero, which leads to being zero. If , no matter how great the OP's demand is, the demand increment will be equal to zero.
Theorem 2. The market shift function relates to the OP price, the RP price, and the OP demand, and has the following features.
(i) If , ; that is, all of the OP demand will shift to RP, so OP will have to exit the market and RP will hold the market.
(ii) There exists such that if , ; this can be treated as the zero point of market shift.
(iii) If , is a monotone decreasing function of , namely, .
(iv) If , is a monotone increasing function of , namely, .
(v) With the increase of , increases correspondingly; is an increasing function of , namely, .
(vi) Let , and the corresponding price is , which is the upper limit of the RP price.
(vii) Let ; there is still a demand increment caused by RP, that is to say, under any condition, .
(viii) increases in direct proportion to .
From Theorem 2, given , the laws of change of and can be obtained as varies, as shown in Figure 2. The specific shape of this figure changes with .
According to Figure 2, the shift function in is
Definition 3. According to Figure 2, in , there exist an OP and its RP ; suppose the prices of , are , , respectively, and the demand of is . Then, satisfies the following function: where is the
influence coefficient of and , , is the influence coefficient of , and .
The functions (3) and (4) satisfy the eight features listed previously and can be used as the paper's model base. From the previous analysis, the shift function includes two portions, so it can be expressed
as follows:
Setting (5) equal to zero and solving, the zero point of the market shift function can be obtained: With the increase of , obviously becomes higher. Therefore, when the RP comes onto the market, the total demand will change. The market share of RP is and that of OP is
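The paper's exact demand, shift, and increment expressions did not survive extraction, so as a purely illustrative stand-in, a hypothetical linear specification with the same structure (base OP demand, a price-gap-driven shift term, and a price-driven increment term) might look like this. Every function form, coefficient, and name below is an assumption, not the paper's model:

```python
# Illustrative sketch of a renewal-product market with linear demand.
# All functional forms and parameter values are hypothetical stand-ins
# chosen only to show the structure described in the text.

A = 100.0      # market scale
beta = 2.0     # price sensitivity of the original product (OP)
k_shift = 1.5  # sensitivity of the OP -> RP shift to the price gap
k_incr = 0.8   # sensitivity of the new-demand increment to the RP price
theta = 1.4    # product renewal rate (RP performance / OP performance)

def demand_op_alone(p1):
    """Stage 1: only the OP is on the market."""
    return max(A - beta * p1, 0.0)

def shift(p1, p2):
    """Customers shifting from OP to RP: grows with the renewal rate and
    shrinks as the RP price premium over the OP price widens; bounded by
    the available OP demand and by zero."""
    base = demand_op_alone(p1)
    return max(min(k_shift * (theta * p1 - p2), base), 0.0)

def increment(p2):
    """New demand created by the RP's added functionality: a decreasing
    function of the RP price, increasing in the renewal rate."""
    return max(k_incr * (theta * A / beta - p2), 0.0)

def market_shares(p1, p2):
    """Stage 2: OP and RP sold together."""
    d_rp = shift(p1, p2) + increment(p2)
    d_op = demand_op_alone(p1) - shift(p1, p2)
    return d_op, d_rp

d_op, d_rp = market_shares(20.0, 25.0)
print(round(d_op, 3), round(d_rp, 3))
```

Under these toy numbers the RP both cannibalizes part of the OP's demand and creates new demand, matching the qualitative behavior described above.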
In addition, product renewal will make marginal production cost change from to .
Figure 2 charts the shift demand for a given value of the other variable, so the shift demand actually depends on both variables simultaneously. Accordingly, a three-dimensional coordinate system can be built, as shown in Figure 3, in which one axis is the shift demand. The plane is divided into the following four areas by six lines in Figure 3:
Area I: the area surrounded by the axis and lines and ;
Area II: the area surrounded by lines , and ;
Area III: the area surrounded by the axis and lines , and ;
Area IV: the areas excluding areas I, II, and III.
In area I, Letting , ’s maximum value can be obtained from (1) and (2). Given , the line will be parallel to plane , and it is clear that the slope of line is . From (3), with the increasing of ,
will become small. Meanwhile, will become small. If , . Therefore, will be a sloped plane in area I.
In area II, given , can be obtained from (5). The line will become smoother and smoother with the increase of and . If , . Letting in (4), a line in plane can be got. The line linking maximal point
to point is a line named . The lines linking the points at which line or intersects with constitute complex curving surface; in other words, in area II is a complex curving surface.
In area III, , can be got from (5).
Considering function’s feature (vii), it is obvious that market increment in area I, II, and III.
In area IV, or , . OP and RP do not come onto the market in this area.
The shift functions and increment functions in different areas are shown in Table 1.
According to the feature (vii) of shift function, the zero point of increment function is bigger than that of shift function, namely, . From expression (1), it is known that the maximum of is . So it
requires to meet the seventh feature of shift function. Based on the previous analysis, the retailer’s and manufacturer’s profits including the profits of OP and RP are, respectively, as follows:
where denotes the fixed cost of the manufacturer. According to the price relationship between RP and OP in the different areas, the profit functions are as follows.
(i) In area I, , and . The market share of RP is , and OP is zero. The profits of retailer and manufacturer are
(ii) In area II, , and . The market share of RP is , and OP is . The profits of retailer and manufacturer are
(iii) In area III, , and . RP ’s market share is , and OP ’s market share is . The profits of retailer and manufacturer are
(iv) In area IV, or , . The prices of OP and RP are so high that neither product comes onto the market. It is not necessary to consider this situation.
3. Stackelberg-Nash Game
The product renewal process can be regarded as two stages. First, the manufacturer and retailer make their optimal decisions for the OP, respectively and simultaneously. After the OP comes onto the market, the RP is developed by the manufacturer and comes onto the market, too; both OP and RP are then sold in the same market, and the manufacturer and retailer make their optimal decisions for the RP, respectively and simultaneously. The optimal decision-making process is thus divided into two stages, and the two-stage games constitute a Stackelberg-Nash game, which is the real process of the product renewal game.
In Stage 1, a decision is made for the original product. The manufacturer and the retailer play a Nash game: the manufacturer makes a strategic decision on the wholesale price, and the retailer makes a strategic decision on the markup.
In Stage 2, a decision is made for the renewal product. The manufacturer and the retailer again play a Nash game, each making the corresponding strategic decision.
The game solving process is divided into three steps; following these steps, the optimal solution of the product renewal Stackelberg-Nash game can be derived.
In Step 1, given and , according to the Nash equilibrium of the manufacturer's and retailer's profit functions, the optimal reaction function of , or the optimal values of and , can be obtained from , .
In Step 2, substituting the optimal reaction function or the optimal values into the profit functions of Step 1, the optimal solution and can be obtained from the Nash equilibrium of the manufacturer's and retailer's profit functions in Step 1.
In Step 3, letting and substitute and in the and of Step 2, the optimal values of and can be obtained.
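The per-stage Nash game can be illustrated numerically. The following is a minimal sketch of a manufacturer-retailer pricing game for a single product with linear demand, solved by iterating best responses to a fixed point; the demand form, the closed-form best responses, and all parameter values are illustrative assumptions, not the paper's model:

```python
# Sketch of a one-stage Nash game: the manufacturer chooses the wholesale
# price w and the retailer chooses the markup m for one product with linear
# demand D = A - beta*(w + m).  All numbers are illustrative assumptions.

A, beta, c = 100.0, 2.0, 10.0  # market scale, price sensitivity, unit cost

def demand(w, m):
    return max(A - beta * (w + m), 0.0)

def profit_mfr(w, m):
    return (w - c) * demand(w, m)

def profit_ret(w, m):
    return m * demand(w, m)

def best_response_w(m):
    # argmax_w (w - c)(A - beta*(w + m)) = (A/beta - m + c) / 2
    return (A / beta - m + c) / 2.0

def best_response_m(w):
    # argmax_m m (A - beta*(w + m)) = (A/beta - w) / 2
    return (A / beta - w) / 2.0

# Iterate simultaneous best responses; for this linear game the map is a
# contraction, so it converges to the unique Nash equilibrium.
w, m = c, 0.0
for _ in range(200):
    w, m = best_response_w(m), best_response_m(w)

print(round(w, 3), round(m, 3))  # -> 23.333 13.333
```

At the fixed point, neither player can improve its profit by deviating unilaterally, which is exactly the equilibrium condition invoked in Step 1.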
Theorem 4. In the Stackelberg-Nash game, in areas I, II, and III, given and , is a convex function of , and is a convex function of .
Proof. In area I
Therefore, is a convex function of , and is a convex function of . The proofs for areas II and III are the same as for area I.
Theorem 5. The Stackelberg-Nash decision-making process can be divided into the following three situations.
(i) In area I, there is no Stackelberg-Nash equilibrium.
(ii) In area II, there exists exactly one optimal pricing strategy for the Stackelberg-Nash game.
(iii) In area III, there exists exactly one optimal pricing strategy for the Stackelberg-Nash game.
Proof. (i) In area I, according to (10), based on Theorem 4 and Step 1, the optimal value of and can be derived as follows:
From the supply chain’s structure, it is obvious that . Because is greater than (from Theorem 6), considering the condition of area I, , the conclusion that is out of area I can be drawn.
Accordingly, there is no Nash equilibrium in area I.
(ii) In area II, . According to (11), based on Theorem 4 and Step 1, the optimal reaction of the manufacturer and retailer can be got as follows:
From Step 2, two binary cubic equations can be obtained. Because an analytic solution of the equations cannot be found, a numerical method is used to find the optimal approximate solution . From Step 3, the optimal value of can be obtained.
(iii) In area III, . According to (12), based on Theorem 4 and Step 1, the optimal values and of and are as follows: From Step 2, the optimal values and of and can be obtained as follows:
Theorem 6. The value of the Nash equilibrium for area I, , is out of area I.
Proof. From the supply chain structure, it is obvious that . Because of and , whereas , it is clear that the value of the Nash equilibrium, , is greater than . Since is the maximum of in area I, considering the condition of area I, , the value of the Nash equilibrium, , does not satisfy this condition. So is out of area I.
4. Stackelberg-Merger Game
In the decision-making process of Stackelberg-Nash game mentioned previously, if the manufacturer and retailer constitute a manufacturing and sales league to maximize the SC profit, that is, there is
no retailer, and manufacturer sells its product in market directly, the product renewal Stackelberg-Nash decision-making process will become a Stackelberg-merger decision-making process. This process
can be divided into two stages, and this process will play a Stackelberg-merger game. In this condition, the league consisting of the manufacturer and the retailer can be regarded as a single enterprise, because the league's decision variables are the retail prices of OP and RP ( and can be regarded as the transfer price of the manufacturer-retailer profit distribution). In Stage 1,
the league sells OP and makes a strategic decision of . In Stage 2, the league sells OP and RP at the same time, and the league makes strategic decisions of and .
The solving process of the Stackelberg-merger game is divided into three steps, by which the optimal solution of the product renewal Stackelberg-merger game can be derived.
In Step 1, the OP retail price is given; the optimal reaction function or the optimal value of can be obtained from .
In Step 2, substituting the optimal reaction function or the optimal value into the first stage, the optimal pricing can be obtained from .
In Step 3, substituting into the optimal reaction function , the optimal pricing can be derived.
Theorem 7. In Stackelberg-merger game, in areas I, II, and III, given , is a convex function of .
Proof. In area I Therefore, is a convex function of . The proofs in II and III are the same as those of area I.
Theorem 8. The process of the Stackelberg-merger game is divided into three situations according to areas I, II, III. There exists only one optimal pricing strategy for Stackelberg-merger game in
each situation.
Proof. (i) In area I, . The total profit of the SC can be obtained from the sum of (10):
Based on Theorem 7 and Step 1, the optimal value of can be obtained as follows:
The value of is so small that the value of is greater than the maximum value of . Considering and Theorem 7, the maximum value of is obtained under the condition that , and then (21) can be expressed as follows:
From , two values of can be obtained. One solution is too small to match the situation and is discarded. The other is the optimal value of , so there is only one solution for area I (from Theorem 9).
(ii) In area II, the total profit of SC can be got from the sum of (11):
Based on Theorem 7 and Step 1, the optimal reaction of can be obtained as follows:
From Step 2, can be obtained; the expression of is too long to be written down here.
From Step 3, the optimal value of for can be obtained.
(iii) In area III, the total profit of SC can be derived from the sum of (12):
Based on Theorem 7 and Step 1, the optimal value of for can be obtained as follows:
From Step 2, the optimal of for can be obtained as follows:
Theorem 9. In area I, the optimal value of is the only one valid solution, which matches the real situation.
Proof. In area I, from , two values of can be obtained from (31): where
In (32), the solution with the negative value of the square root is where
Substituting (35) into (36), it is obvious that this solution cannot match the real situation, and it is discarded. Therefore, the optimal value of is the only valid solution, which matches the real situation.
The key issue in the Stackelberg-merger game is how to distribute the total profit between the manufacturer and the retailer. A Nash bargaining model can be used to coordinate the profit of each entity [16]; the study of this issue needs to go further.
5. Numerical Simulation
5.1. Stackelberg-Nash Game Simulation
Consider mobile phones as an example. There are two types of mobile phone in the same market, one being the renewal product of the other. The related parameter values are as follows: , , , , , , . Table 2 shows the effect of a change in the RP cost on the prices, demands, and profits of both products under the condition . Table 3 shows the effect under the condition . From Tables 2 and 3, the effect of the renewal rate on price, demand, and profits can be concluded.
From the numerical simulation in Tables 2 and 3, the following conclusions can be drawn.
(1) There is no Stackelberg-Nash equilibrium in area I, so the Stackelberg-merger model is used to simulate this situation. The total profit in this situation is the optimal profit of area I. Obviously, the optimal total profit of area I is less than those of areas II and III. Therefore, area I is not the optimal area of the Stackelberg-Nash game, and a rational manufacturer and retailer would not select it.
(2) For all parameter conditions, the profits of the manufacturer, the retailer, and the whole SC in area III are greater than those of area II (in area II, there is little difference between the prices of RP and OP). Because the manufacturer and retailer are rational, the final decision of the whole SC should lie in area III. In this condition, OP is set at a low price (break-even sales), while RP is priced at a high level (the price differential between RP and OP is large); that is to say, RP comes onto the market at a high price and OP at a low price, which leads to a larger market scale and wider product influence. In this case, even if RP is set at a high price, the shift demand and the increment demand of the market are proportional to the size of the OP's scale, so the high price has little influence on market demand, while the profit per unit renewal product rises substantially with its price. In this situation, RP comes onto the market with a great price differential, and the profits of all products are the highest; that is, area III is the optimal pricing area of the supply chain.
(3) The profit of the retailer in areas II and III is greater than that of the manufacturer, mainly because the retailer is a direct participant in the market and reacts rapidly when market demand changes. Therefore, the retailer can adjust its decisions quickly to increase revenue and reduce losses. The manufacturer's reaction to the market relies on the transmission of the retailer's decision-making information, and there is a delay in this transmission. Accordingly, the manufacturer's profit is less than the retailer's.
(4) As the cost of RP increases, the profits of the manufacturer, the retailer, and the whole SC decrease.
(5) If the cost of RP is constant, then as the product renewal rate increases, the profits of the manufacturer, the retailer, and the whole SC increase.
5.2. Stackelberg-Merger Game Simulation
The parameters are the same as in Section 5.1. The variations of price, demand, and profits with the product renewal rate or the cost of RP are shown in Tables 4 and 5.
From the numerical simulation in Tables 4 and 5, the following conclusions can be drawn.
(i) The profits in areas II and III of the Stackelberg-merger game are greater than those in area I, so the Stackelberg-merger pricing strategy would not lie in area I. This is consistent with the Stackelberg-Nash game.
(ii) If OP adopts a low-price strategy and RP adopts a high-price strategy, the total profit reaches its maximum; in other words, area III is the optimal merger pricing area for the SC. The situation and its reason are the same as those of the Stackelberg-Nash game.
(iii) If the cost of RP is constant, the total profit increases markedly as the product renewal rate increases.
(iv) As the cost of RP increases, the total profit decreases.
From the analysis of Sections 5.1 and 5.2, the following can be drawn. (i) The two kinds of decision-making processes are consistent with each other in how profits respond to changes in cost and product renewal rate (as the cost of RP increases, profits decrease; as the product renewal rate increases, profits increase). (ii) Both cases obtain the optimal profits in area III; in other words, the RP comes onto the market with a great price differential from OP. (iii) Compared with the Stackelberg-Nash game, the prices in the Stackelberg-merger game are lower while the profits are higher, so Stackelberg-merger decision making is virtually a win-win situation for enterprises and the market.
6. Conclusion
This paper studied the process of product renewal. A market shift model of RP was built from an increment function and a shift function. Based on this model, a Stackelberg-Nash game model and a Stackelberg-merger game model for RP in the SC were built and analyzed theoretically. The following conclusions can be drawn. (i) In both models, an increase in RP cost makes the participants' profits and the total profit decrease, while a higher renewal rate makes the profits increase at the same marginal cost. (ii) The manufacturer and retailer obtain the optimal profits in area III; in other words, the optimal decision in the SC is for RP to come onto the market with a great price differential from OP. (iii) Compared with the Stackelberg-Nash game model, the Stackelberg-merger game model's pricing is lower and its profits are higher, which is actually a kind of win-win situation for enterprises and the market.
In this paper, some of the model's premises were built on rational hypotheses, and the model parameters need to be confirmed with real statistical data. These problems need to be studied further.
This work was supported by the National Natural Science Foundation of China (Grant no. 71002106).
1. T. Dereli, A. Baykasoğlu, and G. Büyüközkan, "An affordable reverse engineering framework for innovative rapid product development," International Journal of Industrial and Systems Engineering, vol. 3, no. 1, pp. 31–37, 2008.
2. M. Bourreau, P. Dogan, and M. Manant, "Modularity and product innovation in digital markets," Review of Network Economics, vol. 6, no. 2, pp. 175–193, 2007.
3. C. Ju and L. Xiao, "Model of product lifecycle evaluation based on context knowledge," Journal of Computational Information Systems, vol. 4, no. 4, pp. 1753–1760, 2008.
4. F. C. Wan, D. W. Wang, and P. Y. Chen, "Correlative product combinatorial introduction model," Control Theory & Applications, vol. 21, no. 2, pp. 257–260, 2004.
5. P. Huo, The Analysis of the R&D Strategic Technology in Supply Chain Based on Game Theory, postdoctoral study paper, Tsinghua University, 2003.
6. P. I. Bass and F. M. Bass, "Diffusion of technology generations: a model of adoption and repeat sales," http://www.bassbasement.org/F/N/BBDL/Bass%20and%20Bass%202001.pdf.
7. P. I. Bass and F. M. Bass, "IT waves: two completed generational diffusion models," 2004, http://www.bassbasement.org/F/N/BBDL/Bass%20and%20Bass%202004.pdf.
8. L. Wu, R. De Matta, and T. J. Lowe, "Updating a modular product: how to set time to market and component quality," IEEE Transactions on Engineering Management, vol. 56, no. 2, pp. 298–311, 2009.
9. C. T. Druehl, G. M. Schmidt, and G. C. Souza, "The optimal pace of product updates," European Journal of Operational Research, vol. 192, no. 2, pp. 621–633, 2009.
10. Y. Huang and F. Ling, "Study on the tracks of product update of ICT enterprises," in Proceedings of the 2nd International Conference on E-Business and E-Government (ICEE '11), pp. 6400–6403, May 2011.
11. E. Koca, G. C. Souza, and C. T. Druehl, "Managing product rollovers," Decision Sciences, vol. 41, no. 2, pp. 403–423, 2010.
12. X. W. Quan, "The pricing game of updated and existing products," Acta Scientiarum Naturalium Universitatis Nankaiensis, vol. 44, no. 3, pp. 1–7, 2011.
13. Y. Luo and B. S. Tu, "Supply chain decision making model for product updating based on game theory," Computer Integrated Manufacturing Systems, vol. 18, no. 9, pp. 2067–2075, 2012.
14. J. Wei, Pricing Decisions and Inventory Decisions for Innovative Product and Substitutable Product [Ph.D. dissertation], Control Engineering, Nankai University, Tianjin, China, 2007.
15. S. Zhu, Microeconomics, Beijing University, 2nd edition, 2001.
16. D. Mou, Decision Analysis for Supply Chain Management Based on Game Theory [Ph.D. dissertation], Control Engineering, Nankai University, Tianjin, China, 2003.
Cauchy sequence
Show that every convergent sequence is Cauchy. Is the converse true? If not, give a counterexample.
With over thirty postings, you should understand that this is not a homework service nor is it a tutorial service. Please either post some of your own work on this problem or explain what you do not
understand about the question.
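For reference, a standard sketch of the forward direction (convergent implies Cauchy) and a counterexample for the converse, worked in $\mathbb{Q}$, where Cauchy sequences need not converge:

```latex
% Convergent => Cauchy: if a_n -> L, then for eps > 0 pick N with
% |a_n - L| < eps/2 for all n >= N; the triangle inequality gives,
% for all n, m >= N,
\[
|a_n - a_m| \le |a_n - L| + |L - a_m|
< \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon .
\]
% The converse fails in incomplete spaces: in Q, the sequence of
% decimal truncations of sqrt(2),
%   1,\ 1.4,\ 1.41,\ 1.414,\ \dots
% is Cauchy but has no limit in Q.  (In R, which is complete, every
% Cauchy sequence does converge.)
```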
Misadventures with bitwise operators
12-18-2004 #1
I'm trying to make a program that solves exercise 2-6 in K&R: "Write a function setbits(x,p,n,y) that returns x with the n bits that begin at position p set to the rightmost n bits of y, leaving
the other bits unchanged."
While I renamed/reordered the function and variables so I wouldn't get mixed up, this code should (in theory) do the same thing.
#include <stdio.h>

unsigned int xfrbits(unsigned int origvar, unsigned int destvar, int xfrnum, int xfradr)
{
    unsigned int tmpmask;

    tmpmask = ~(~0U << xfrnum);
    origvar &= tmpmask;
    tmpmask = ~(tmpmask << xfradr);
    destvar &= tmpmask;
    origvar <<= xfradr;
    destvar ^= origvar;
    return destvar;
}

int main(void)
{
    unsigned int origvar, destvar, spchldr;
    int xfrnum, xfradr;

    for(origvar = 0; origvar <= 250; origvar += 49)
        for(destvar = 0; destvar <= 250; destvar += 49)
            for(xfrnum = 0; xfrnum <= 8; xfrnum++)
                for(xfradr = 0; xfradr <= 8; xfradr++){
                    spchldr = xfrbits(origvar, destvar, xfrnum, xfradr);
                    printf("%u, %u, %d, %d : %u\n", origvar, destvar, xfrnum, xfradr, spchldr);
                }

    return 0;
}
I understand the principle here (I think), but I want to make sure I'm not getting mixed up.
Here's the logic I used to solve the exercise:
1. Generate bitmask to erase all the bits in origvar not being transfered, apply this mask to origvar.
2. Alter this bitmask to erase the bits to be overwritten by origvar, apply this mask to destvar.
3. Bitshift origvar to place the bits correctly.
4. Apply origvar to destvar to transfer the bits.
Please, be nice if you find problems with this - I mean, tell me exactly what I did wrong, but keep in mind that I had some trouble focusing long enough to write it.
Note: The extra assignment of
spchldr = xfrbits(...);
is just to save screen space for this post; I'd normally call xfrbits inside the printf that follows.
I live in a giant bucket.
Your solution is just a tad overkill. Try to use getbits as your example, and remember that reuse is a good thing. You'll find that the solution really isn't that much longer (and is simpler)
than the one for getbits.
>Please, be nice if you find problems with this
Well, it's not correct in that it doesn't do what the exercise asks for. You can test the bit result with something quick and dirty like this:
void showbits(unsigned int x)
{
    int i;

    for (i = 15; i >= 0; i--)
        printf("%d", !!(x & (1U << i)));
}
My best code is written with the delete key.
Series (mathematics)
In mathematics, a series is the sum of the terms of a sequence. That is, a series is a list of numbers with addition operations between them, e.g.,
1 + 2 + 3 + 4 + 5 + ...
which may or may not be meaningful. Series may be
, or
; in the first case they may be handled with elementary
, but infinite series require tools from
mathematical analysis
if they are to be applied in anything more than a tentative way.
Examples of simple series include the arithmetic series, which is the sum of an arithmetic progression and can be written as

$\sum_{n=0}^{k} (a + nd),$

and the finite geometric series, the sum of a geometric progression, which can be written as

$\sum_{n=0}^{k} a r^{n}.$
Infinite series
An infinite series is a sum of infinitely many terms. Such a sum can have a finite value; if it has, it is said to converge; if it does not, it is said to diverge. The fact that infinite series can
converge resolves several of Zeno's paradoxes.
The simplest convergent infinite series is perhaps

$1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots$
It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc. There is always room to mark
the next segment, because the amount of line remaining is always the same as the last segment marked: when we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly
mark the next 1/4. This argument does not prove that the sum is equal to 2 (although it is), but it does prove that it is at most 2 — in other words, the series has an upper bound.
This series is a geometric series and mathematicians usually write it as

$\sum_{n=0}^{\infty} \tfrac{1}{2^{n}} = 2.$
Formally, if an infinite series $\sum_{n=0}^{\infty} a_n$ is given with real (or complex) numbers $a_n$, we say that the series converges towards $S$, or that its value is $S$, if the limit

$\lim_{N \to \infty} \sum_{n=0}^{N} a_n$

exists and is equal to $S$. If there is no such number, then the series is said to diverge.
Here the sequence of partial sums is defined as the sequence $S_N = \sum_{n=0}^{N} a_n$ indexed by $N$. The definition is the same as saying that the sequence of partial sums has limit $S$ as $N \to \infty$.
History of the theory of infinite series
Convergence criteria
The investigation of the validity of infinite series is considered to begin with Gauss. Euler had already considered the hypergeometric series
on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence.
Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The
terms convergence and divergence had been introduced long before by Gregory (1668). Euler and Gauss had given various criteria, and Maclaurin had anticipated some of Cauchy's discoveries. Cauchy
advanced the theory of power series by his expansion of a complex function in such a form.
Abel (1826) in his memoir on the binomial series 1 + mx + m(m−1)/2! x² + ... corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of m and x. He showed the necessity of considering the subject of continuity in questions of convergence.
Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject, of De Morgan (from 1842), whose
logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration);
Stokes (1847), Paucker (1852), Tchebichef (1852), and Arndt (1853). General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to
the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's (from 1889) memoirs present the most complete general theory.
Uniform convergence
The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Stokes and Seidel (1847-48). Cauchy took up the
problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomé used the doctrine (1866), but there was great delay in recognizing the
importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions.
Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi
(1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847).
Schlömilch (Zeitschrift, Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function F(x) = 1^n + 2^n + ... + (x − 1)^n. Genocchi (1852) has further contributed to the theory.
Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence.
Interpolation formulas have been given by various writers from Newton to the present time. Lagrange's theorem is well known, although Euler had already given an analogous form, as are also Olivier's
formula (1827), and those of Minding (1830), Cauchy (1837), Jacobi (1845), Grunert (1850, 1853), Christoffel (1858), and Mehler (1864).
Fourier series
Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion
of sines and cosines, of multiple arcs in powers of the sine and cosine of the arc had been treated by Jakob Bernoulli (1702) and his brother Johann (1701) and still earlier by Viète. Euler and
Lagrange had simplified the subject, as have, more recently, Poinsot, Schröter, Glaisher, and Kummer. Fourier (1807) set for himself a different problem, to expand a given function of x in terms of the
sines or cosines of multiples of x, a problem which he embodied in his Théorie analytique de la Chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; and
Lagrange had passed over them without recognizing their value, but Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820-23) also attacked the problem from a
different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly
scientific manner. Dirichlet's treatment (Crelle, 1829), while bringing the theory of trigonometric series to a temporary conclusion, has been the subject of criticism and improvement by Riemann
(1854), Heine, Lipschitz, Schläfli, and DuBois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series have been Dini, Hermite, Halphen, Krause, Byerly and Appell.
Some types of infinite series
• A geometric series is one where each successive term is produced by multiplying the previous term by a constant number. Example: 1 + 1/2 + 1/4 + 1/8 + 1/16...
• The harmonic series is the series 1 + 1/2 + 1/3 + 1/4 + 1/5...
• An alternating series is a series whose terms alternate in sign. Example: 1 - 1/2 + 1/3 - 1/4 + 1/5...
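The contrast between the first two examples shows up numerically: the geometric series settles quickly, while the harmonic series grows without bound, though only logarithmically. A sketch (the helper name is ours):

```python
import math

def harmonic(n):
    """n-th partial sum H_n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums grow without bound, but only like ln n:
# H_n is approximately ln n + 0.5772 (the Euler-Mascheroni constant).
assert harmonic(10_000) > 9
assert abs(harmonic(10_000) - (math.log(10_000) + 0.5772)) < 1e-3
```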
Convergence criteria
1. If the series ∑ a[n] converges, then the sequence (a[n]) converges to 0 for n→∞; the converse is in general not true.
2. If all the numbers a[n] are positive and ∑ b[n] is a convergent series such that a[n] ≤ b[n] for all n, then ∑ a[n] converges as well. If all the b[n] are positive, a[n] ≥ b[n] for all n and ∑ b[n] diverges, then ∑ a[n] diverges as well.
3. If the a[n] are positive and there exists a constant C < 1 such that a[n+1]/a[n] ≤ C for all n, then ∑ a[n] converges.
4. If the a[n] are positive and there exists a constant C < 1 such that (a[n])^(1/n) ≤ C for all n, then ∑ a[n] converges.
5. Integral test: if f(x) is a positive monotone decreasing function defined on the interval [1, ∞) with f(n) = a[n] for all n, then ∑ a[n] converges if and only if the integral ∫[1]^∞ f(x) dx is finite.
6. A series of the form ∑ (-1)^n a[n] (with a[n] ≥ 0) is called alternating. Such a series converges if the sequence a[n] is monotone decreasing and converges towards 0. The converse is in general
not true.
7. See ratio test.
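Criterion 6 can be illustrated numerically with the alternating harmonic series (the variable name is ours):

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... satisfies
# criterion 6: the terms 1/n decrease monotonically to 0, so the series
# converges (its value is ln 2), even though the harmonic series diverges.
s = sum((-1) ** (n + 1) / n for n in range(1, 200_001))
assert abs(s - math.log(2)) < 1e-5  # alternating-series error < next term
```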
The series ∑ 1/n^r (n from 1 to ∞) converges if r > 1 and diverges for r ≤ 1, which can be shown with the integral criterion 5) from above. As a function of r, the sum of this series is Riemann's zeta function.
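A numeric check of the r = 2 case (our sketch, not part of the article):

```python
import math

# Partial sums of sum 1/n^2, the r = 2 case; the limit is Riemann's
# zeta(2) = pi^2 / 6. The tail beyond N is roughly 1/N, so 10^5 terms
# give about five correct digits.
s = sum(1.0 / n ** 2 for n in range(1, 100_001))
assert abs(s - math.pi ** 2 / 6) < 2e-5
```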
The geometric series ∑ z^n converges if and only if |z| < 1.
The telescoping series ∑ (b[n] - b[n+1]) converges if the sequence b[n] converges to a limit L as n goes to infinity. The value of the series is then b[1] - L.
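A telescoping sketch with b[n] = 1/n (function names are ours):

```python
# Telescoping: partial sums of sum (b[n] - b[n+1]) collapse to
# b[1] - b[N+1], so the series converges to b[1] - L when b[n] -> L.
# Here b[n] = 1/n, with L = 0, so the value is b[1] - 0 = 1.
def telescoping_sum(b, n_terms):
    return sum(b(n) - b(n + 1) for n in range(1, n_terms + 1))

b = lambda n: 1.0 / n
assert abs(telescoping_sum(b, 1000) - 1.0) < 1e-2
```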
Absolute convergence
The sum ∑ a[n] is said to converge absolutely if the series of absolute values ∑ |a[n]| converges. In this case, the original series, and all reorderings of it, converge, and converge towards the same sum.
If a series converges, but not absolutely, then one can always find a reordering of the terms so that the reordered series diverges. Even more: if the a[n] are real and S is any real number, one can
find a reordering so that the reordered series converges with limit S (Riemann).
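Riemann's rearrangement result can be sketched greedily on the alternating harmonic series, which converges but not absolutely (the function name and target are ours):

```python
# To steer the rearranged sum toward any chosen target, take positive
# terms 1, 1/3, 1/5, ... while below the target and negative terms
# -1/2, -1/4, ... while above it. Every term is used exactly once.
def rearranged_sum(target, n_steps):
    pos, neg, total = 1, 2, 0.0
    for _ in range(n_steps):
        if total < target:
            total += 1.0 / pos  # next unused positive term
            pos += 2
        else:
            total -= 1.0 / neg  # next unused negative term
            neg += 2
    return total

# The same terms, reordered, now converge toward 1.5 instead of ln 2.
assert abs(rearranged_sum(1.5, 100_000) - 1.5) < 1e-3
```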
Power series
Several important functions can be represented as Taylor series; these are infinite series involving powers of the independent variable and are also called power series. See also radius of convergence.
Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the
nineteenth century, rigorous proofs of the convergence of series were always required. However, the formal operation with non-convergent series has been retained in rings of formal power series which
are studied in abstract algebra. Formal power series are also used in combinatorics to describe and study sequences that are otherwise difficult to handle; this is the method of generating functions.
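The generating-function idea can be sketched with truncated coefficient lists; convergence is never invoked, which is exactly the point of formal power series (function names here are ours):

```python
# Formal power series as coefficient lists, truncated to n terms.
def mul(a, b, n):
    """Product of two power series, truncated to n coefficients."""
    c = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def inverse(a, n):
    """Multiplicative inverse of a power series with a[0] = 1, truncated."""
    inv = [1] + [0] * (n - 1)
    for k in range(1, n):
        inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1) if j < len(a))
    return inv

# 1/(1 - x - x^2) is the generating function of the Fibonacci numbers.
fib = inverse([1, -1, -1], 10)
assert fib == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
assert mul([1, -1, -1], fib, 10) == [1] + [0] * 9  # inverse checks out
```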
The notion of series can be defined in every abelian topological group; the most commonly encountered case is that of series in a Banach space.
There is no serious definition for an infinite sum over an uncountable set. For example, if X is a set and f a function on X taking non-negative real values, such that ∑ f(x) ≤ A (the sum taken over x in Y)
for any countable subset Y of X, with A an absolute constant, it follows that f(x) = 0 for all x outside some countable subset of X. In other words, infinite sums of uncountably many non-negative
reals make sense only in the case that this is a conventional convergent infinite series, extended by the value 0 to an uncountable set.
Asymptotic series, otherwise asymptotic expansions, are not typically convergent infinite series, but sequences of finite approximations each of which is a good asymptotic representation.
ALEX Lesson Plans
Subject: Mathematics (9 - 12)
Title: Card Table Project
Description: Students will work in groups to design a card table, communicating with each other through a class blog or class discussion page. After the design phase, students will try to sell their product to an outside expert.
Subject: Mathematics (9 - 12), or Science, Technology, Engineering, and Mathematics (9 - 12)
Title: Land Surveying Project-Enhancing mathematics in the career/technical classroom and providing relevance in the mathematics classroom.
Description: This project resulted from the collaboration of a computer aided drafting teacher, Chris Bond, and a math teacher, Lee Cable, (Hewitt-Trussville High School) to provide higher math
expectations in CT and real life application in mathematics. In this hands-on and technology based project, CT students will learn the basics of civil engineering in land surveying while applying
algebraic and geometric concepts. The current technology used by survey crews, for example AutoCAD, will be utilized and applied by students. With assistance from a local civil engineering firm,
students will record length and angle measurements of an assigned area and calculate unknown measures using math concepts that include solving general and right triangles, setting up and solving
proportions, determining scale, and using the Law of Cosines. A topographical map will then be plotted using AutoCAD and a presentation made to the engineering firm for grading and feedback. The
embedded math will be co-taught with a math teacher using the real world data to make the topics relevant. In the math classroom, the math teacher will use land surveying problems designed in this
project to show students how math concepts are used in real-world applications.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Law of Cosines
Description: In this Illuminations lesson, students use right triangle trigonometry and the Pythagorean theorem to develop the law of cosines. Included is a link to an online activity sheet.
Thinkfinity Partner: Illuminations
Grade Span: 9, 10, 11, 12