Co-functions help needed..
October 13th 2013, 09:36 AM #1
Hey guys,
Due to the lecturer not knowing himself, I don't know how exactly to approach some of the following problems.
1. sec(90°+x)sin(180°+x) + sin(-x)
2. sin(-x)/sin(180°+x) + 1/(sec x · sin(90°+x)) - tan x/cot(90°+x)
Solve for x:
1. 3cot(90°+x) = tanx.sinx
2. tan(90°-x) = sec(3x+10°)cotx
I feel a bit stupid asking, BUT then, the only stupid question is the one not asked!
Thanks in advance for any help and/or advice..
Re: Co-functions help needed..
The first thing I usually do is to get everything in terms of sine and cosine. sec(x) = 1/cos(x), and we have the sum of angles formula:
$\cos(a + b) = \cos(a)\cos(b) - \sin(a)\sin(b)$
So what is $\cos(90° + x)$?
The same goes for sine:
$\sin(a + b) = \sin(a)\cos(b) + \sin(b)\cos(a)$
See what you can do with this. If you need more help with it, simply ask.
Re: Co-functions help needed..
Ok, so I had class tonight and I'm gonna try this one quick..
sec(90°+x)sin(180°+x) + sin(-x)
= (-csc x)(-sin x) + (-sin x)
= (-1/sin x)(-sin x) - sin x
= 1 - sin x
I'll try the rest after I had some sleep..
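(Editorial aside, not part of the thread: a quick numeric spot-check in Python confirms the simplification above; the test angle is arbitrary.)

import math

x = 0.73                      # any angle, in radians (90° = pi/2, 180° = pi)
sec = lambda t: 1.0 / math.cos(t)
lhs = sec(math.pi/2 + x) * math.sin(math.pi + x) + math.sin(-x)
print(lhs, 1 - math.sin(x))   # both print the same value, so LHS = 1 - sin x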
The Birthday Problem
The Birthday Problem: A short lesson in probability.
Last revised 8/24/05
Introduction with simulation
Happy Birthday! There's a birthday in your class today! Or will there be two? How likely is it that two people in your class have the same birthday? Say your class has 28 students.
There are a number of ways to approach this problem. The most common is to take a survey and see if it happens that two birthdays fall on the same day. But if it happens in the surveyed class, will
it occur in another class with different students? The question of how likely it is for any given class is still unanswered.
Another way is to survey more and more classes to get an idea of how often the match would occur. This can be time consuming and may require a lot of work. But a computer can help out. Below is a
simulation of the birthday problem. It will generate a random list of birthdays time after time. Simply type the number of people in your virtual class into the textbox and hit <ENTER> to run the simulation.
Choose a number for your class size and do 10 trials with that size.
Applet Source, Written by Nicholas Exner.
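(Editorial addition: if the applet is unavailable, a minimal Python sketch reproduces the same experiment, here with a class of 28, alongside the exact probability for comparison.)

import random
from math import prod

def has_match(n, days=365):
    bdays = [random.randrange(days) for _ in range(n)]
    return len(set(bdays)) < n          # True if two birthdays coincide

trials = 10000
print(sum(has_match(28) for _ in range(trials)) / trials)   # simulated, ~0.65

# exact: P(match) = 1 - (365/365)(364/365)...(338/365)
print(1 - prod((365 - k) / 365 for k in range(28)))         # about 0.654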
Now what do you think the probability of a match is?
It may surprise you that there were so many matches. Let's look at an explanation for this problem.
The course will start with an introduction to axiomatic Set Theory, based on the axioms of Zermelo and Fraenkel. It will show how the generally well-known facts from naïve Set Theory follow from the
axioms and how modern mathematics can be embedded in Set Theory.
The second part of the course will offer combinatorial tools from Set Theory that have proved useful in infinitary situations in Algebra, Topology and Analysis.
We offer a choice from
- Partition Calculus: the theorems of Ramsey, Erdős-Rado and others
- Combinatorial properties of families of subsets of the natural numbers
- Trees, stationary sets, the cub filter
- PCF theory
- Large cardinals
Is there a natural measurable structure on the $\sigma$-algebra of a measurable space?
Let $(X, \Sigma)$ denote a measurable space. Is there a non-trivial $\sigma$-algebra $\Sigma^1$ of subsets of $\Sigma$ so that $(\Sigma, \Sigma^1)$ is also a measurable space?
Here is one natural candidate. I'm not certain, but based on answers to related questions, I think this might be the Effros Borel structure that Gerald Edgar has mentioned here and here.
The $\sigma$-algebra $\Sigma$ is an ordered set under the canonical relation given by subset inclusion $\subseteq$, and is therefore naturally equipped with a specialization topology. The closed sets
are generated by downward-closed sets, and the closure of a singleton is its down-set:$$\overline{\{A\}} = \{ B \in \Sigma : B \subseteq A \}.$$ Even though this topology is highly non-Hausdorff,
it's still pretty nice. For example, it's an Alexandroff space: arbitrary unions of closed sets are closed.
Being a topological space, $\Sigma$ now has a natural measurable structure, namely, the one generated by the Borel $\sigma$-algebra $\Sigma^1 := \mathcal B_{\subseteq}(\Sigma)$.
• Is this space $(\Sigma, \Sigma^1)$ a reasonable one on which to do measure theory and probability?
Whether it is or not, there's some non-trivial structure present. For example, we can iterate this procedure. Set $\Sigma^0 = \Sigma$, and define $\Sigma^n := \mathcal B_{\subseteq}(\Sigma^{n-1}).$
Then each one of these spaces $\Sigma^n(X) := (\Sigma^{n}, \Sigma^{n+1})$ is measurable.
• Is $\Sigma : \mathrm{Meas} \to \mathrm{Meas}$ an endofunctor on the category of measurable spaces?
• Under what conditions does the sequence of measurable spaces $\Sigma^n(X)$ have a limit $\Sigma^{\infty}(X)$?
Tom, I don't think you mean what you said about Alexandroff spaces; arbitrary intersections of closed sets are always closed, in a topological space. – Paul McKenney Feb 10 '13 at 4:01
Thanks @Paul McKenney. It was a typo: Alexandroff spaces contain arbitrary unions of closed sets. – Tom LaGatta Feb 10 '13 at 6:27
1 Answer
One way to approach this would be to ask the same question inside a suitable topos in which "everything is measurable" and such that each object is naturally equipped with the structure
of a $\sigma$-algebra. In effect you would be expanding the notion of measure space to accommodate better structure, as such toposes typically contain the "classical" measure spaces.
For example, Matthew Jackson's Ph.D. dissertation "A sheaf theoretic approach to measure theory" might be a starting point.
@Andrej Bauer, that's an interesting point of view. Can you expand more on it? Suppose we are considering the category $\operatorname{Meas}$, the topos-category $\operatorname{Set}$ and
some other topos $\operatorname{T}$. What does it mean "to ask the same question" in that different topos? – Tom LaGatta Feb 11 '13 at 1:57
I don't know why, but this whole blogging thing seems to be getting harder. I start posts, but I can't seem to find the inspiration/motivation/time to finish them. Today has been one of those days.
I've spent a lot of time recently on student council duties. But, I finally got the basketball homecoming t-shirts ordered today. I got a
bullying assembly
scheduled, and I found out that we will have the funds to bring
Victims' Impact Panel
in to present to our students before prom.
We had our first student fight of the year today. I'm incredibly thankful it didn't happen in my classroom. I'm also amazed it was this late in the school year! And, it was all the students wanted to
talk about today...
Algebra 2 started polynomial long division today. I found it really helpful to have my students work through several regular long division problems, first. I was surprised when some of my students
told me that they didn't know what long division was. But, they seemed to all remember the process with a little review. At first, my students told me that polynomial long division was too confusing.
I heard multiple students say that they would just always divide polynomials by factoring and cancelling. (Of course, these were the same students who told me that they hated factoring and couldn't
remember how to factor yesterday.) I tried convincing them that they wouldn't always be able to factor and cancel, but I'm pretty sure my reminders fell on deaf ears. By the end of the class period,
my students were actually starting to become comfortable with the process. Tomorrow, we'll do a lot more practice and look at some word problems that can be solved by polynomial long division.
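(Editorial aside, not part of the original post: for anyone who wants to check such problems, SymPy performs polynomial long division, assuming SymPy is installed.)

from sympy import symbols, div

x = symbols('x')
quotient, remainder = div(x**3 - 2*x**2 - 4, x - 3, x)
print(quotient, remainder)   # x**2 + x + 3 and 5, so x**3 - 2x**2 - 4 = (x - 3)(x**2 + x + 3) + 5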
Something really cool happened, though, in my Math Analysis class that pretty much made my day. One of my Algebra 2 students is also in my Math Analysis class this semester. These students had also
been struggling with remembering how to factor. So, today we worked through the
Polynomial Puzzler problems from NCTM Illuminations
. It changed things up a little bit, and it gave my students more practice before we continue our work with quadratics. My Algebra 2 student was having trouble figuring out how one of the quadratics
factored, so she asked me if she could do the "division thing" from this morning. Incredibly surprised and delighted, I told her that she could use polynomial long division to find the missing
factor. It was a proud teacher moment. Later, when she turned in her assignment, she told me that polynomial long division made solving the puzzles much easier.
I did have to confiscate something from one of my students this afternoon which was disappointing. Then, another student stole the confiscated item from my desk. I caught it, but I was even more
unhappy. My students told me it was the first time that they had seen me mad. Apparently, I look evil when I get mad. Usually, I would focus on this one bad thing when reflecting on my day. But,
today I'm choosing to focus on the good. Student Council is going well. And, whether it feels like it or not, I am making a difference with my Algebra 2 students.
Happy Monday, everyone!
One of our objectives in Algebra 1 is that students will be able to add and subtract polynomials. My students are usually pretty good about remembering to distribute the negative when the second
polynomial is written in parentheses with a subtraction sign in front. The state of Oklahoma doesn't always write the questions in this format, though. A favorite format of theirs is to give two
polynomials and ask for the sum or difference. If the problem asks for a difference, students must realize on their own that they need to change the signs of the terms of the second polynomial before
combining like terms.
Sample Question Type from Algebra 1 EOI
I reviewed distributing a negative through a set of parentheses with my students as bellwork. Then, I gave each pair of students a deck of cards I had created and a penny to simulate this specific
question type.
I printed all of the x squared terms on one color of paper, all of the x terms on another color, and all of the constants on a third color of paper. Then, I laminated them and cut out the pieces.
Each pair got a bag of pieces and a penny. The students' first job was to sort the cards, face-down, into three piles by color. To begin, they would turn over one card of each color to form the first
polynomial. Then, they would turn over another set of cards to form the second polynomial. Finally, students would flip the penny to determine if they were finding the sum or difference of the two
polynomials. Heads meant sum. Tails meant difference.
After creating their problem, both students would solve the problem independently on their whiteboard. When both students were finished, they were supposed to compare their answers. If the answers
agreed, students would use the cards to create a new problem. If there was a disagreement, I would come over to help the students.
What I Loved
* Students got lots of practice. No worksheet involved. We did the first day back from Christmas break, so I wanted an activity that would help them transition from break mode to school mode.
* The pace of the activity was instantly differentiated. My advanced students worked through a good number of problems. My special education students were able to work at their own pace without
having to worry about how many they had finished. Instead, they could really focus on understanding the process.
* The randomness of what cards were dealt and the result of the coin flip led to some great conversations with students. For example, I would probably never ask students to find the sum of two
polynomials that summed to zero. It happened to a group of my students, though. As a result, we got to discuss what happened when all of the terms cancelled out.
What I Didn't Love
* A few of my upper-level students soon grew bored of the activity.
* Some of my groups seemed to have all of the luck and flipped heads each time. I'm pretty sure a few of these groups had more than luck on their side.
* And, the activity was not self-checking.
I did not come up with the idea behind this activity myself. Actually, I combined aspects of several other activities to create my own. Pam Wilson did a version of this activity using wooden blocks
and a penny. I didn't have any wooden blocks, and I was planning this at the last moment. So, I replaced the blocks with cards similar to this activity from Joy in 6th.
If you would like to download a copy of the polynomial cards, they are embedded below. These three sheets will make enough cards for three pairs of students.
This week, my math classes are celebrating Universal Letter Writing Week. Universal Letter Writing Week takes place during the second week of January. I had originally wanted to make this a student
council project. My student council officers, though, were less than thrilled with the idea of setting up a letter-writing station in the cafeteria during lunch. I still wanted to do it, though, so
I'm kinda forcing my own students to participate.
They've been begging me to do origami ever since they returned from Christmas Break to find new decorations hanging from the ceiling.
Math Origami
My sister actually made these origami cubes and colliding cubes and whatever you call the one in the picture above in math class in junior high. Over Christmas Break, she was cleaning her bedroom and
found them. We decided they would be the perfect thing to hang in my classroom. Now, all of my students want to make them. The only problem is, I don't know how. And, I still have many more units to
teach this semester. I think this may be a perfect project for my students after they take the EOI in April.
So, with origami on the brain, and Universal Letter Writing Week in mind, I decided to have my students write letters on colored sheets of paper. On Tuesday, I had them choose a teacher to write to.
Today, I had them choose a school employee who was not a teacher. This profession is often a thankless one. I know how much it means to me when I receive kind words or a compliment from a student.
Knowing that one's work is noticed and appreciated is energizing. I truly treasure the letters that students have written me.
After writing our letters, we folded them into a basic origami envelope.
This is Tuesday's batch of letters from my students to various teachers. Aren't they just so beautiful?
Aren't they just so bright and cheery looking? I can't help but smile when I see this picture. Some of my students were disappointed that they didn't turn out looking like a typical envelope. But, I
just loved seeing the stacks of colorful, uniform letters that were just waiting to be opened and read. I loved delivering them and seeing the surprise on the faces of the recipients.
I'll be honest. This did eat up much more class time than I expected. We have 50 minute periods, and the first letter we wrote took about 15 minutes of class time to write and fold. On the second
day, the students were already familiar with the process, but, it still took 10 minutes of class.
Losing class time did make me uneasy. My students are years behind in mathematics, and time is a precious commodity. When I look at all of the standards that I have yet to even introduce, I start to
enter panic mode. Believe me, that is not a good place. Worrying doesn't do my students or me any good.
Looking back, I am proud of this project. I am proud of the fact that I did something that benefited both my students and my coworkers. My students got the opportunity to say thank you to someone who
has made a difference in their lives. Those I work with got to read those two words that they do not hear often enough.
A mentor of mine encouraged me to look for opportunities to be a catalyst of change in my school district. This project was a step in that direction. A step towards making the school I work for a
better place for teachers and students. I'll never know the magnitude of the impact that these letters will have on their recipients or their senders. But, I can hope that they encourage my coworkers
to give these students their all every single day. Because whether we hear the words "thank you" or not, we are making a difference. The time I have with my students is too precious to waste. It is
time to support them, inspire them, encourage them, prepare them for the future, remind them that they are cared for, and teach them some math, too.
This is definitely a project that I want to become a yearly tradition in my classroom. (And, even if my students weren't learning any math during the process of writing these letters, it was still
educational. We discussed where to put the comma in the salutation of a letter and how to spell various words. I even fit in some geometry vocab while explaining the steps to folding the origami
envelope. It got my students writing and thinking, and I believe those are both things my students don't spend enough time doing right now.)
Tomorrow, my Algebra 2 class will begin identifying the domain and range of a function from a graph. I created a foldable to help my students keep the three different notations separate. I don't know
how well it will work out yet, but I wanted to get it posted for Made 4 Math Monday.
I used a basic three door foldable. The first flap contains all of the information about set notation. The second flap features algebraic notation, and the third flap focuses on interval notation.
I've uploaded my template that is sized to fit in a composition notebook below. My students will add this to our interactive notebooks. This is one of my favorite and most versatile foldables.
Before, I've used it to capture information on Parallel and Perpendicular Lines, Different Ways of Finding Slope, and Types of Correlation.
Outside of Domain/Range Notation Foldable
Inside of Foldable
Close-Up of Inside (RHS)
Close-Up of Inside (LHS)
If I taught my Algebra 1 students anything this semester, I taught them slope!
I had my students fill out an evaluation form before Christmas Break. On the back, I gave them a chance to just be creative. They could either write me a letter, draw me a picture, or write me a
haiku. Surprisingly, haikus were a foreign concept to my students. I think I only had two students write me a haiku. I'm thinking that I'll have my students write a haiku about one of our vocabulary
words soon.
Anyway...most of my students chose to draw me a picture. They had the opportunity to draw whatever their hearts desired. I'm not sure what I was expecting, but I got a lot of pictures of Slope Dude's
journey and Mr. Slope Guy. I was really surprised by the students who chose to label the slopes of the lines in their pictures. I'm pretty sure this is not normal teenage behavior. But, I'm
definitely not going to complain. :)
2012 has been a most eventful year for me.
In January, I started my student teaching at an urban high school in Tulsa, Oklahoma. Having been raised in a rural/suburban area, my first few days of student teaching left me in shock. The class
sizes were large. I was appalled at the language used by the students. My eyes were opened to what poverty looks like.
I asked lots of questions. I stepped out of my comfort zone and taught algebra in Spanish. And, I made my first foldable. It was a life-changing experience.
My First Foldable
In February, I experienced my first snow day as a teacher. I became incredibly attached to the students I was working with. I taught my students about scatter plots and correlation using M&M's. I
left my first student teaching experience without saying goodbye to my students. That is something I will probably always regret.
M&M Scatter Plots
At the very end of February, I switched from an urban high school to an urban middle school. The differences between the two were astounding. During my second student teaching placement, I was given
more freedom and responsibility than I could ever have imagined. I was TERRIFIED. I had 5 classes of 8th grade math students. The 8th grade math test was fast approaching, and how these students did
on this math test would determine how they spent their next four years of high school.
After two weeks of observing, my cooperating teacher handed over the reins to me and took a seat in the back of the classroom. I'll be honest. It was tough. When a lesson wasn't going well, when the
students wouldn't quit talking, when there were ten extra minutes at the end of the class period, it was entirely up to me to figure out what to do. And, I didn't always make the right decision. I
almost always learned my lesson, though. Lessons learned by experience will stick with you way more than advice from your professor or a book on teaching.
I spent my Spring Break filling out job applications to almost every single school district within a one-hour radius.
In April, I had two job interviews. The first interview was for a middle school math position. My second job interview for a high school math position was a whirlwind of an experience that ended with
me being offered a job on the spot. After thinking it over for a day, I accepted the position. I'm not quite sure I knew what I was getting myself into. From my interview experience and from looking
at the school's test scores, I felt like the school needed me. And, that was enough to make me leave behind everything familiar and move to a town where I didn't know a single person, a town that I
had never been to before the day of my job interview, a town without a stoplight.
I finished my student teaching at the middle school level. I'm not even sure if it is possible to convey all that I learned from my middle school student teaching experience. It was in that classroom
that I started learning to put aside the educational theories I had been taught and focus on the students that I was standing in front of. The students became my focus. It didn't matter how long I
had worked on creating a worksheet. If that worksheet was doing a disservice to my students, I had to throw it out and create something new on the spot. It didn't matter how much I disliked making
decisions. My students needed structure. They needed a decision maker. They needed me to create a productive learning environment.
My 8th graders rocked their state math test. And, this time I said a proper goodbye to the students that I had, again, grown so attached to. My cooperating teacher had the students write me letters
that I will always cherish.
In May, I donned my cap and gown and walked across the stage at graduation. I moved out of my on-campus apartment, and moved back in with my parents for the summer. I went to my first school board
meeting where my hiring was officially approved. I worked in the office of my family's business.
I also got to see my classroom for the first time. When the school counselor told me it needed a little TLC, she wasn't joking.
Classroom: Before
In June, I found me a house to rent in my new town. My mother, sister, and I started brainstorming ways to fix up my classroom. And, of course, I worked for my family's business.
The majority of July was spent moving into the house I had rented, fixing up the house I had rented, and fixing up my classroom. My parents and sister were amazing. I definitely couldn't have done it
without them.
Classroom: After
I sent out a tweet that would change my life. And, I ended up connecting with an amazing group of math teachers from across the United States and even the world.
Two days before the end of the month, I finally found out that I would be teaching Algebra 1, Algebra 2, and Math Analysis (or College Algebra.)
In August, I put the finishing touches on my classroom. I spent hours scouring twitter, google reader, the web, and pinterest for ideas for my interactive notebooks.
Interactive Notebook Foldable
My entire town was evacuated due to wildfires. The fires came within three tenths of a mile of my house. It was definitely one of the scariest experiences of my life. Some of my students and
coworkers lost everything they owned in the fires.
I lesson-planned like I never had before. I began my first year as a teacher. It definitely had its challenges, but I knew that this was the job I loved and was meant to do.
In September, I was asked to prepare a presentation for our next staff development session on hands-on teaching. I was honored, but shocked that they had chosen a first-year teacher to present to the
entire high school faculty. Students had been talking about my teaching strategies, and the talk was good.
October brought with it my first experience with parent teacher conferences as a teacher. I experienced my first Fall Break in what seemed like forever. I truly loved my college experience, but I
really missed having a few days off in October to rest, relax, and to get caught up on everything. (They did give us a full week off for Thanksgiving to compensate, but it just wasn't the same.) Of
course, I ended up catching a cold that spanned my entire Fall Break. My teacher self, though, would rather be sick on break than teach while I feel miserable.
In November, my students started working with slope and graphing linear equations. I'm pretty sure that this is my favorite unit to teach. Slope Dude was a bigger hit than I could have ever imagined.
Seriously, if you teach slope and haven't introduced your students to Slope Dude, you and your students are missing out. Mrs. H even stopped by my blog and left a comment about how Slope Dude got
started. It will always be "Puff Puff Positive" to my students. Some of my students even took it on themselves to show the youtube video to some students who have the other Algebra 1 teacher.
I survived my first Student Council State Convention. It was definitely an experience! Having last been a part of student council in the fourth grade, I have a lot to learn.
My students, convinced that the science teacher and I should be dating, started writing us both fake love letters from the other. They were actually quite amusing, especially since everyone knew
exactly who was writing them.
Several of my students retook their Algebra 1 EOI in December. In Oklahoma, students have to pass their Algebra 1 EOI or an equivalent test in order to graduate. These students have been taking both
Algebra 1 and Geometry this year. I'm excited to report that all of my Algebra 1 students who retook the test passed! And, quite a few of them scored advanced! I'm so proud of them and all of the
hard work they put in. It was a great experience as a teacher to see the light bulbs finally go off.
I gave my first semester tests. And, we spent the last few days of the semester making hexaflexagons.
I've spent my Christmas Break with family. I helped complete inventory at my family's business. I've been brainstorming lots of ideas for new foldables and activities for next semester. I'm reading
Making Thinking Visible, and I'm trying to figure out how to take these ideas and strategies and put them into place in my own classroom. This book has shown me that I have been doing my students a
disservice by not giving them opportunities to think. Thinking is something that only happens when planned for.
Tyngsboro Algebra 1 Tutor
Find a Tyngsboro Algebra 1 Tutor
...I have helped students improve their math skills in middle and high school, and I can help them become better organized in the classroom, working on note-taking, test-taking, and time
management skills as well. I've worked with students specifically in the following subjects: Pre-Algebra, Algebra 1, Geometry, U.S. History, World History, and European History.
29 Subjects: including algebra 1, reading, English, writing
...I collected a lot of Chinese children's songs, stories and Chinese cartoons which make for good supplemental materials for Chinese learning. You will have the opportunity to choose the method
you feel comfortable with: either conversation of daily life or a fun learning process with songs, game...
5 Subjects: including algebra 1, physics, algebra 2, precalculus
...While I mainly worked with higher levels of math, this could not be done without the very fundamental concepts. Applying these concepts to more complex scenarios allows me to have a very strong
working knowledge of them. I have my Bachelor's in biochemistry, and in order to be successful I needed a strong background in calculus.
13 Subjects: including algebra 1, chemistry, calculus, biology
...I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. I can tutor
any subject area from elementary math to college level.I got an A+ in Discrete Mathematics in College and an A in the graduate course 6.431 Applied Probability at MIT last year.
16 Subjects: including algebra 1, French, elementary math, physics
...The student must search out the information needed in their chemistry book to solve problems given the guidance of the teacher, that is, become familiar with the type of problem and the outline
developed with instructor to obtain the answer. Through practice of learned skills, the student can th...
7 Subjects: including algebra 1, chemistry, physics, biology
Rank A and Basis of Rn
October 12th 2012, 10:05 PM
Hi guys, I would really apprieciate it if I could get some tips to proving this:
If A is an m x n matrix with columns c1, c2, ... cn, and rank A = n, show that (I'll let B = A-transpose here) {Bc1, Bc2, ... Bcn} is a basis of Rn.
So far, I know that in order to show that it's a basis of Rn, it must span and be linearly independent.
I know rank A = rank B = dim(col A) = dim(row A) = n
I know that the columns of A are independent, therefore the rows of B are independent.
The product of BA is then independent because it is invertible, and is the matrix [Bc1 Bc2 ... Bcn].
I know the rows of A span Rn and therefore the columns of B span Rn.
Am I wrong with assuming BA is independent due to invertibility? Since I think I will have to use this in the proof. And can I get a nudge towards the next step?
Any help would be lovely, thank you!
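(Editorial note, not part of the thread: the key observation is that the vectors $Bc_i$ are exactly the columns of $BA = A^TA$. For $x \neq 0$, $x^T A^T A x = \|Ax\|^2 > 0$, since rank A = n forces the null space of A to be trivial; hence the $n \times n$ matrix $A^TA$ is invertible, its $n$ columns are linearly independent, and $n$ independent vectors in $\mathbb{R}^n$ automatically span, giving a basis.)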
October 12th 2012, 11:06 PM
Re: Rank A and Basis of Rn
Hey Kyo.
You can use the fact directly that det(A) <> 0 (and A is square) shows that all vectors are linearly independent, and since det(A) = det(A^t), A^t also has the same property.
I'm not sure how they expect you to assert det(A) <> 0, but once you do that the rest follows.
October 13th 2012, 09:25 AM
Re: Rank A and Basis of Rn
Thanks chiro, but are you referring to AB being square? My A matrix is actually an m x n matrix.
October 13th 2012, 04:17 PM
Re: Rank A and Basis of Rn
Whatever matrix you are using as your basis (or trying to prove that a basis exists).
Subtleties of B-rep Translation (Part 3); Why Healing Matters
I’ve written my last two blogs about different pitfalls and insight needed in order to properly translate CAD data. I’ve discussed how “sharing” of geometry inside the data structure is a hidden but
much used form of design intent and discussed how geometry forms are inherently linked to high-level algorithms inside the modeler itself. But I haven’t discussed the healing operations that the
Spatial translators perform in order to properly translate the different CAD formats. If you use our translators you know they exist, and people commonly ask about their purpose and efficacy.
To understand InterOp healing we have to start by borrowing a concept from any undergraduate Data Structure and Algorithms class. Generally, one views a software system as two distinct but highly
inter-related concepts: a data structure and an acting set of algorithms or operators. In our case the data structure is a classic Boundary Representation structure (B-rep) which geometrically and
topologically models wire, sheet and solid data. An operator is an action on that data, for example, an algorithm to determine if a point is inside the solid or not. But the system’s operators are
more than just a set of actions. Implicitly, the operators define a set of rules that the structure must obey. Not all the rules are enforced in the structure itself; actually, many can’t be. But
they exist and it’s healing in InterOp that properly conditions the B-rep data to adhere to these rules upon translation.
As always a couple of examples best describe the point. I picked three ACIS rules that are, hopefully, easily understandable.
All 3d edge geometry must be projectable to the surface. Anybody can define a spline based EDGE curve and a surface and write it to SAT. Basically, jot down a bunch of control points, knot vectors,
what have you, and put it in a file that obeys SAT format. But in order for it to work properly, geometric rules for edge geometries exist. Specifically, the edge geometry must be projectable to the
surface. In short, you can’t have this:
There are many reasons in ACIS for this, but primarily if it's not projectable then point-perp operations are not well-behaved. If they're not well-behaved, finding the correct tolerance (distance
between the curve and the surface) is problematic. If one cannot define correct tolerances then water-tightness is not achieved, and simple operators, like querying whether a point is inside the body, fail.
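(A hedged sketch of how such a check might look in practice; eval, closest_point, and the curve/surface objects are hypothetical stand-ins, not Spatial's actual API.)

import math

def estimate_edge_tolerance(edge_curve, surface, samples=64):
    # Sample the 3D edge curve and measure its distance to the surface
    # with a point-perp (closest point) query. If the curve is not
    # projectable, closest_point misbehaves and this bound is meaningless.
    worst = 0.0
    for i in range(samples + 1):
        p = edge_curve.eval(i / samples)      # point at parameter t in [0, 1]
        q = surface.closest_point(p)          # nearest point on the surface
        worst = max(worst, math.dist(p, q))
    return worst                              # candidate edge tolerance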
Edge and Face geometry cannot be self-intersecting. A great deal of solid modeling algorithms work by firing rays and analyzing intersections with different edge and face geometries. In order for any
conclusion to be drawn, the results of the intersection must be quantifiable. The problem with self-intersecting geometries is just that: how do you quantify the results in Figure 3? The key
observation here: imagine you are walking along the curve in Figure 3, starting from the left side. At the start, the material is on the right side, but after the self intersection the material
changes to the left side. You cross the self intersection again and the material switches to the right again. This causes endless grief in understanding the results of an intersection.
Tolerances of Vertices cannot entirely consume neighboring edges. For a B-rep model to be considered water-tight, tolerances of faces and edges must be understood. Today many kernels have global
tolerances plus optional tolerances applied to edge curves and vertices. These tolerances vary depending on neighboring conditions, usually obeying some upper bound. You can think of these tolerances
as the “caulking” that keeps the model water-tight. Depending on the quality of the geometry or the tolerances of the originating modeling system you might need more “caulking” or less; respectively,
larger tolerances on edges or vertices, or smaller tolerances. However in order to realize a robust Boolean engine, again, rules apply. Consider this:
Above we have Edge Curve 2 encapsulated completely inside the gray tolerant vertex. Again, I can easily write this configuration to SAT format, however Booleans cannot process it. It yields horrific
ambiguity when building the intersection graphs in the internal stages of Booleans.
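(The corresponding sanity check is essentially one comparison; again, the field and method names here are hypothetical, not a real modeler's API.)

def vertex_tolerances_ok(edge):
    # The two end-vertex tolerance balls must not jointly consume the edge.
    return edge.start_vertex.tolerance + edge.end_vertex.tolerance < edge.arc_length()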
So this is a list of just three rules, it’s far from being comprehensive. But the main point: we know that not everything that ends up in an IGES file comes from a mathematically rigorous surfacing
or solid modeling engine. Perhaps people are translating their home-grown data into a system like ACIS so they can perform operations that they could not in their originating system. But in order to
perform these operations, the data must conform to the rules of the system. To simply marshal the data and obey a file format, but disregard the rules, is doing just half the job.
That’s why healing matters.
Or The K-maps I Have 4 More Problems And I Did ... | Chegg.com
or the k-maps i have 4 more problems and i did 3 of them. below i did the problems, can u just check through? i will give u more points. i will open a question later and you can post 3 times to my 3 questions and you can give comments to them and i will give you points.
here is my work
simplify the following boolean function, using k-maps
f(w,x,y,z)= sigma(0,1,5,8,9)
for 1 i got x'y'
for 2 i got zw'y'
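(Editorial check, assuming SymPy is available: for sigma(0,1,5,8,9) a minimizer returns x'y' + w'y'z.)

from sympy import symbols
from sympy.logic import SOPform

w, x, y, z = symbols('w x y z')
# minterms 0, 1, 5, 8, 9 written as (w, x, y, z) bit patterns
minterms = [[0,0,0,0], [0,0,0,1], [0,1,0,1], [1,0,0,0], [1,0,0,1]]
print(SOPform([w, x, y, z], minterms))   # (~x & ~y) | (~w & ~y & z)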
problem 2 says find all the prime implicants for the following boolean functions and determine which are essential:
F(A,B,C,D)=SIGMA (0,2,3,5,7,8,10,11,13,15)
for this i got
the last problem i want you to check says the following
simplify the following boolean function F, together with the don't care conditions, d, and then express the simplified function in sum-of-minterms form
i am pretty sure i messed this one up.
i got F= bd'+a'b+d
well yea i will make a new post and u can reply to each thing with a comment. i will give you some points....when you have your comments message me and i will create a post and you can reply
Topic: Segmented Optimization
Replies: 4 Last Post: Jul 30, 2012 3:35 PM
Posted by Cory, Jul 29, 2012 10:33 PM
Hi MATLABers,
I am performing a least-squares, non-linear constrained optimization problem with the following structure:
Call the summands (whose sum of squares I want to minimize) DQ_i. One parameter, theta, affects all the DQ_i. For a given theta, the constraints uniquely determine the rest of the
parameters. Further, the constraints are segmented so I could solve for different chunks of the parameters separately (given theta). However, any solution would be numeric.
So I could structure the problem in two ways:
1) Frame it as a nested optimization problem. Pick a theta, solve the smaller problems in chunks, calculate the objective function, and repeat.
2) Put all the parameters together and solve it as one big, constrained problem.
Which approach is more advisable? 2) seems cleaner from a MATLAB coding perspective since if the constraint-solving fails even once, it will derail the whole problem. But 1) seems
potentially faster as I never have too many variables to solve for at once.
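(Editorial sketch of option 1, written in Python/SciPy for neutrality rather than MATLAB; solve_chunk, chunks, dq, and the theta bounds are placeholders for the poster's problem-specific pieces.)

from scipy.optimize import minimize_scalar

def objective(theta):
    # inner stage: each decoupled constraint chunk is solved given theta
    params = [solve_chunk(theta, c) for c in chunks]    # placeholder solver
    return sum(dq(theta, p)**2 for p in params)         # least-squares sum

res = minimize_scalar(objective, bounds=(theta_lo, theta_hi), method='bounded')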
Detecting Squares
What is the most efficient method for determining whether a given
integer N is a perfect square? One approach would be to check to
see if N is a square modulo several small numbers, which can probably
be done more easily than extracting the full square root. For
example, if we want to find out whether the number
N = 371930958274059827465211239444089
is a square, we can first check the last two decimal digits to see
if they are one of the twenty-two possible squares (mod 100), ten of
which are obvious {00,01,04,09,..,81} and the remaining twelve of
which are {21,24,29,41,44,56,61,69,76,84,89,96}. The chances of a
randomly selected non-square integer passing this test is just 22/100.
Then, taking advantage of the fact that 999999 = 3*3*7*11*13*37, we
can check to see if N is a square modulo each of these primes by
simply forming the sum of the digits of N taken 6 at a time:
SUM = 2688181
The squares (mod 9) are {0,1,4,7}, and this SUM is congruent to
7 (mod 9), so we still can't be sure it's non-square. However, it
is congruent to 6 (mod 7), whereas the squares (mod 7) are {0,1,2,4},
so N can't be a square.
In general, I'd expect the probability of a randomly chosen non-square
integer passing the "squareness test" relative to (2^2)(5^2), 3^2, 7,
11, 13, and 37 to be about
P = (22/100)(4/9)(4/7)(6/11)(7/13)(19/37) ≈ 0.0084
so it will catch over 99% of the non-squares.
Or, we could take advantage of the fact that 1001=7*11*13 and so
1000 = -1 (mod 7*11*13). Thus, if we take the digits of N in groups
of 3, and alternately add and subtract them, we can easily check for
squareness modulo 7, 11, and 13.
So the quickest approach might be to first check the last two decimal
digits for squareness (mod 100), then add up the digits one at a time
and check the sum for squareness (mod 9), and then add up the digits
three at a time (with alternating signs) and check the sum for
squareness modulo 7, 11, and 13. This will catch over 98% of the non-squares.
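(Editorial sketch: the whole filter in a few lines of Python.)

def maybe_square(n):
    # last two decimal digits: only 22 residues are squares mod 100
    if n % 100 not in {k*k % 100 for k in range(50)}:
        return False
    d = str(n)
    if sum(map(int, d)) % 9 not in {0, 1, 4, 7}:        # squares mod 9
        return False
    # alternating 3-digit groups give n mod 1001, since 1000 = -1 (mod 7*11*13)
    s, sign = 0, 1
    for i in range(len(d), 0, -3):
        s += sign * int(d[max(0, i - 3):i])
        sign = -sign
    return all(s % m in {k*k % m for k in range(m)} for m in (7, 11, 13))

print(maybe_square(371930958274059827465211239444089))   # False (fails mod 7)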
Why do this problem?
This problem
follows on from
Fill Me Up
, and gives students the opportunity to use volume scale factors of enlargement to work out the relationship between the volume and the height of a cone.
Possible approach
Perhaps start by asking students to sketch the graphs from the problem
Fill Me Up
is a worksheet showing the containers.
"Imagine we wanted to plot the graphs accurately by working out the equations linking height to volume. Some parts of the containers will be easier to work out than others - which will be easiest?
Which will be hardest?"
Take time to discuss students' ideas, relating it back to the graphs sketched in the first problem.
"Let's try to analyse how the height changes as the Pint Glass is filled."
"The Pint Glass can be thought of as part of a cone (a frustum), so I'd like you to consider a cone filling with water first."
Give students
this worksheet
to work on in groups of 3 or 4. These
may be useful for students who are not used to working collaboratively on a problem. Make it clear that your expectation is for all students in the group to be able to explain their thinking clearly
and that anyone might be chosen to present the group's conclusions at the end of the lesson.
Finally, allow time at the end of the lesson (or two lessons) for groups to present their thinking to the rest of the class.
Key questions
What happens to the volume of a cone when I enlarge it by a scale factor of 2, 3, 4, 5... k?
If the volume of water is $10$cm$^3$ when the height of the water is $1$cm, what will the volume be when the height is $2, 3, 4...x$cm?
How could this be represented graphically?
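One possible route for the cone (an editorial sketch of the expected algebra): water filling a cone of half-angle $\theta$ to depth $h$ forms a cone similar to the whole one, so $$V(h) = \tfrac{1}{3}\pi (h\tan\theta)^2 h = \tfrac{\pi\tan^2\theta}{3}h^3, \qquad h(V) = \left(\tfrac{3V}{\pi\tan^2\theta}\right)^{1/3}.$$ Equivalently, enlarging the water cone by scale factor $k$ multiplies its volume by $k^3$, which is why the height grows like the cube root of the volume.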
Possible extension
There are two extension tasks suggested in the problem: analysing the inverted cone is a reasonably straightforward extension, but analysing the spherical flask is much much more challenging.
Both offer extension possibilities for considering functional relationships relating to volume.
Possible support
Growing Rectangles
offers a good introduction to proportional relationships between length, area and volume.
A problem with LMO?
Filed under: 3-manifolds,Quantum topology — dmoskovich @ 9:35 pm
Renaud Gauthier from KSU posted this preprint on ArXiv a few days ago, in which he claims to have found a serious problem with the construction of the LMO invariant, the universal finite-type
invariant for rational homology spheres (it’s defined for all 3-manifolds, but I think of it as an invariant of homology spheres). What a headline that makes! A possible hole in the definition of the
LMO invariant, with the potential to wash large swaths of quantum topology of 3-manifolds down the drain! Indeed, this is the topic of his second preprint.
Tomotada Ohtsuki was my PhD advisor, and he’s a careful mathematician with tremendous technical ability, who checks his answers against computational data to make absolutely sure no errors creep into
his work. Le and Murakami are similar. Gauthier’s claim is that they made a fatal error calculating the effect of the second Kirby move on the framed Kontsevich invariant, which is used to construct
the LMO.
Without having read Gauthier’s preprint (which is 81 pages long), my bias is to be skeptical of his claim. Maybe he found a typo, but surely not more. But what a headline it would make if a
substantial error was there! This is math drama in the making.
I think it might be fun and educational to crowdsource peer review Gauthier’s claim. It’s important if it’s right, and if it’s wrong, at least it’s an opportunity to get into the kishkes of the LMO.
UPDATE: A MathOverflow question about this.
UPDATE: A third paper by Gauthier was uploaded on Thursday.
UPDATE: Gwenael Massuyeau shows a major flaw in Gauthier’s arguments in the comments. Another two large flaws are noted by Dylan Thurston. Due to these problems, Gauthier’s claim of an error in LMO
does not appear to hold up.
Maybe this isn't the best location for crowd-sourcing review, but there is already a MathOverflow question about this (though admittedly more about possible consequences rather than assessing the correctness of the claims).
Unfortunately, I really feel quite unequipped to evaluate the papers. I have to wonder, who did he consult with before putting these papers up? It’s hard for me to imagine that he didn’t really check
things extremely carefully before doing this, but it’s also hard to believe such an error would have propagated for so long.
Comment by Ben Webster — October 15, 2010 @ 7:20 pm | Reply
Having been a part of the LMO story from its beginning, and having read and checked all relevant papers carefully at the time, and having taken part in many cross-checks that the LMO invariant passed
(normalization-compatibility with Reshetikhin-Turaev, various explicit computations), and having consulted on email with my collaborators at the time, and having superficially read through Gauthier,
my informed guess is that in this particular case of inconsistency the first place to look for a problem is in Gauthier, not in LMO.
Comment by Dror Bar-Natan — October 16, 2010 @ 5:34 am | Reply
• So, he didn’t check with you or any of your collaborators before putting this up? That really seems unwise.
Comment by Ben Webster — October 16, 2010 @ 12:11 pm | Reply
□ He should have checked with LMO, not with me. I doubt very much that he did, but I have no explicit knowledge.
Comment by Dror Bar-Natan — October 16, 2010 @ 12:30 pm
□ Sorry, apparently I can’t do 3 replies deep? Anyways, he certainly should have checked with you about the Aarhus integral thing. And if you’re going to post a paper like that, you want to run
it past a wide variety of people, so you don’t embarrass yourself to the degree that he will have if he’s wrong.
Comment by Ben Webster — October 16, 2010 @ 3:00 pm
After thinking this over, I have to admit it seems a real stretch that so many good mathematicians would have overlooked this error. In my math overflow post I mentioned that I had gotten the wrong
multiple of nu when I did the handle slide, but I think this probably just shows that I made the same mistake as Gauthier!
Comment by James Conant — October 16, 2010 @ 11:19 am | Reply
• When I first read the LMO paper, I was also convinced they got the normalization wrong (in just the same way). But I thought about it more, and it is correct, and needs to be the way LMO have it
for a variety of reasons, as Dror alluded to.
Gauthier also claims that there are mistakes in degree 1 of the LM paper (see the first paper, arXiv:1010.2422, p. 63), which is just really implausible.
Comment by Dylan Thurston — October 16, 2010 @ 4:03 pm | Reply
□ There was a factor of two which I found strange at one point, but which Ohtsuki-sensei explained to me. I wonder if it’s the same thing. Anyway, the crux of Gauthier’s argument that there is
a problem in Le-Murakami is Proposition 4.1.1 on pages 58-59 and Proposition 4.1.2 on Pages 60-63 of Gauthier. As you point out, Proposition 4.1.2 implies mistakes in degree 1 of Le-Murakami,
which seems highly unlikely (although I haven’t carefully gone through Gauthier’s proof). So really, a preliminary impression is that it looks like one has to check (218) and (219) on the
bottom of page 59 of Gauthier’s first preprint. I am not following what Gauthier is doing here (although I’ve spent next to no time on it yet). Any ideas?
Comment by dmoskovich — October 16, 2010 @ 9:57 pm
I think I see what Gauthier is doing wrong. He is claiming that when you do a handle slide of a long arc over a trivial unknotted component you get nu^2 on the long arc and nu on the unknotted
component. Since such a handle slide is an isotopy, you should just get the standard normalization of nu on the unknot. However, on p.59 of his first paper, to do this he cancels two pairs of
critical points, exactly accounting for the extra two factors of nu! See equation (218).
Comment by James Conant — October 17, 2010 @ 5:51 am | Reply
• I emailed Gauthier my explanation. We’ll see what he says.
Comment by James Conant — October 17, 2010 @ 6:05 am | Reply
• Now I’m confused… (218) is a statement of isotopy invariance, isn’t it?
Comment by dmoskovich — October 17, 2010 @ 8:53 am | Reply
□ Maybe I was too hasty.
Comment by James Conant — October 19, 2010 @ 7:08 am
Here’s another thing wrong. On the top of page 59 (and in Lemma 3.2.10), he asserts that $\Delta\nu = \nu \otimes \nu$. This is not true, as a short degree-2 computation verifies. This same equation
is true if you close off at least one of the components, but that’s not how he’s applying it.
Comment by Dylan Thurston — October 17, 2010 @ 9:10 am | Reply
• For the proof of Lemma 3.2.10, both components are closed, which means that there is an extra relation in the space of Jacobi diagrams, which is taking the leg all the way around the closed
component of the skeleton. Without this relation, his proof of Lemma 3.2.10 certainly fails. I’m reproducing the degree 2 calculation which you did, to find what the extra contributions look like
(and how that effects Gauthier’s argument).
Comment by dmoskovich — October 17, 2010 @ 9:30 am | Reply
I am starting to get worried again. I’m reading Ohtsuki’s book, and it seems to me that Proposition 10.1 does indeed imply that the right hand side of diagram (215) from Gauthier should be the same
as the trivial diagram with a copy of \nu labeling the circle component. This isn’t necessarily a contradiction. As Dylan pointed out, the equation Delta(nu)=nu\otimes\nu cannot obviously be applied
locally the way Gauthier does it, and indeed applying Lemma 10.2 of Ohtsuki to a q-tangle decomposition of the left-hand side of (215), one does get the diagram on the right. However, I’m trying to
understand Ohtsuki’s proof of Lemma 10.2. He argues that equation 10.1 means that you can get rid of the S_1S_2\Delta_3\Phi contribution in the equation directly above, but I don’t see how this
follows. Does anyone see the argument?
Comment by James Conant — October 19, 2010 @ 2:02 pm | Reply
• I looked at this for a few minutes, walked out of my office, and bumped into Dror Bar-Natan in the common room, who explained it to me with no problem at all. A coproduct of an edge commutes with
everything- that’s just a locality result. So you can STU the legs on the fourth skeleton-segment past all the junk, and the associator becomes a sum of chords from the first to the second, and
from the first to the first. These commute, and an associator between two commuting variables vanishes. Essentially, that was Ohtsuki’s proof of Lemma 10.2. So there seems to be no problem there.
My feeling is that Dylan’s comment must make everything work- if you were to work out the coproduct of nu correctly, the result would agree with LMO. But I haven’t worked out how yet.
Comment by dmoskovich — October 19, 2010 @ 3:39 pm | Reply
□ Ah yes. That makes sense. Thanks.
I'm still puzzled though, because I don't think Dylan's comment saves the day. I agree that the coproduct of $\nu$ is not $\nu\otimes\nu$, but the equality of the right-hand sides of Gauthier's (215) and (219) still seems incorrect. Just compare the degree 2 terms. (Alternatively, you can force $\Delta(\nu)=\nu\otimes\nu$ by dividing the space of diagrams by those diagrams where the two components are connected by at least one chord, and get a contradiction that way.) So something must be going wrong in the application of Ohtsuki's Prop. 10.1.
When I use Ohtsuki's Prop. 10.2 on a q-tangle decomposition of a handle slide over the trivial unknot, I basically get the right-hand side of Gauthier's (215), except that the two copies of $\Delta(\nu)$ are replaced by 4 copies of $\Delta(\nu^{1/2})$, which is the same thing. So Ohtsuki's 10.2 is consistent with 10.1, but 10.1 seems wrong. The conclusion seems to be that I must not be applying 10.1 (and 10.2) correctly, but I don't see what's going wrong.
Comment by James Conant — October 19, 2010 @ 6:19 pm
I went directly to the point where Renaud Gauthier claims that the handle-sliding property of the $\nu$-normalized framed Kontsevich integral is not correct, namely Proposition 4.1.1 from his first paper.
One has to be careful with the fact that the band-sum operation is not well-defined at the level of Jacobi diagrams (as is recalled, for instance, in Thang Le's lecture notes from Grenoble 1999), and he seems to apply it in the wrong way, which thus leads to a contradiction. More precisely, it seems to me that he is wrong when he writes at the bottom of page 58: "Now according to (206) and (207) from [O], under a band sum move this should map to (215)". Here Proposition 10.1 from Ohtsuki's book [O] does not seem to be applied in the correct way: he is replacing $\Delta$ of the dotted part by $\Delta$ of $\nu$, but the dotted part is more complicated than just a $\nu$: two associators should also appear, because the two strands involved in the band-sum should be "infinitesimally" close to each other. Thus, he is missing a $\nu^{-2}$ factor.
I guess the confusion comes from the fact that Figure 10.2 in Ohtsuki's book has to be interpreted in the right way. On the left side of this figure, I see the decomposition of $\check{Z}(L)$ that one gets when L is decomposed into elementary q-tangles *in such a way that* the two vertical strands which we are going to band-sum are parenthesized (..) together. Then $\check{Z}(L')$ is obtained from this formula for $\check{Z}(L)$ by replacing the box with two vertical strands by a box with one cup, one cap and one vertical strand on the right side, parenthesized as (.(..)), by doubling the circle (minus the vertical strand) along which we have slid, and by replacing each univalent vertex which was attached to this circle by a small "box" which grasps the two parallel copies. Then, a "box" is interpreted in the usual way by distributivity (the $\Delta$ operation).
Thus, it seems to me that there is no problem in Ohtsuki's Proposition 10.1.
Comment by Gwenael Massuyeau — October 20, 2010 @ 4:17 am | Reply
• And, so far as I can see, this mistake is obviously fatal to Proposition 4.1.1 of Gauthier, the keystone of his claim that LMO is wrong.
And so, this challenge to the correctness of LMO is relegated to the dustbin of history. I can't say I'm surprised, because LMO has already been checked carefully by many people, and stands as a trusted cornerstone of much other research. But I'm glad to understand these issues now, which are of fundamental importance in the LMO construction.
Comment by dmoskovich — October 20, 2010 @ 7:34 am | Reply
• Very nice.
Comment by James Conant — October 20, 2010 @ 7:56 am | Reply
Warren, MI Science Tutor
Find a Warren, MI Science Tutor
...I have been involved in test writing for the NREMT and construct exams regularly for an EMS academy. Although it has been several years, I have taken the ASVAB. I feel confident I can assist
candidates improve their test-taking abilities with the ASVAB.
8 Subjects: including anatomy, physiology, nursing, pharmacology
...I minored in mathematics in college, and this required taking a sequence in linear algebra, ordinary differential equations, and partial differential equations. These topics were used
extensively in my engineering courses. I also used this practically during my work as an engineer.
12 Subjects: including physics, calculus, algebra 1, algebra 2
I am a certified mathematics and physics teacher. I currently teach AP Physics, Honors Physics and computer programming and also advise a robotics team. I regularly tutor my students in ACT math
and science.
20 Subjects: including electrical engineering, physical science, physics, Spanish
...At the risk of seeming haughty, I want to say that I have excelled at every academic subject and every standardized test I have ever taken. Well, every subject except art... but I'm not
offering to tutor anyone in that! Please contact me with any questions you might have.
70 Subjects: including sociology, chemistry, ACT Science, microbiology
...I read to them and worked on building up their knowledge of the English language. I passed the MTTC exams for students with CI, EI, and SLD. All of these tests cover the topics of ADHD and
tools to help students with ADHD focus.
45 Subjects: including sociology, algebra 2, English, writing
Lebesgue Points
Just a quick detour. I’ve found a new reason to dislike analysis. I’m trying to learn Radon-Nikodym derivatives (i.e. an attempt to take derivatives in a general measure theory sense and maintain the
Fundamental Theorem of Calculus for the Lebesgue integral), and Rudin uses the approach of Lebesgue Points. Since I’ve never learned this before, I’m not sure if the other methods are easier, but
this is certainly proving to be rough. Apparently we are supposed to be familiar with random facts about LPs, even though this is the very first time the definition is given. So here are the random
unproven statements about Lebesgue points that I’ve encountered and my proofs to go along with them. I don’t think all of these are what Rudin had in mind, since my proofs are far more complicated
than one could probably just think through.
Definition: Let $f\in L^1(\mathbb{R}^k)$. Then x is a Lebesgue point of f if $\displaystyle \lim_{r\to 0}\frac{1}{m(B_r)}\int_{B_r}|f(y)-f(x)|dm(y)=0$, where m is Lebesgue measure and $B_r$ is the open ball centered at x of radius r. Yeah. Not the simplest definition to be assuming knowledge of.
Claim 1: If f is continuous at x, then x is a Lebesgue point of f (under the suitable conditions on f that will always be assumed in this post). Let f be continuous at x. Then let $\varepsilon>0$ and choose $\delta>0$ such that if $|x-y|<\delta$, then $|f(x)-f(y)|<\varepsilon$. Now whenever $0<r<\delta$, we have $\displaystyle \big| \frac{1}{m(B_r)}\int_{B_r}|f(y)-f(x)|dm -0 \big| \leq \frac{1}{m(B_r)}\int_{B_r}\varepsilon dm =\frac{\varepsilon m(B_r)}{m(B_r)}=\varepsilon$. i.e. The limit behaves as we would like and x is a Lebesgue point.
Claim 2: If x is a Lebesgue point of f, then $\displaystyle f(x)=\lim_{r\to 0}\frac{1}{m(B_r)}\int_{B_r}fdm$. Now I'm not sure if it is just me, but things were just moved around, so the fishy business I'm going to pull doesn't seem necessary. Let x be a Lebesgue point of f. Then, using the triangle inequality for integrals,
$\displaystyle 0=\lim_{r\to 0}\frac{1}{m(B_r)}\int_{B_r}|f(y)-f(x)|dm(y)$
$\displaystyle\geq \lim_{r\to 0}\big|\frac{1}{m(B_r)} \int_{B_r} f(y)dm -\frac{1}{m(B_r)}\int_{B_r} f(x)dm\big|$
$\displaystyle =\lim_{r\to 0}\big|\frac{1}{m(B_r)}\int_{B_r}fdm-f(x)\big|$. Thus, since the right side is nonnegative and less than or equal to 0, it must equal 0; dropping the absolute value, we have $\displaystyle 0=\lim_{r\to 0}\frac{1}{m(B_r)}\int_{B_r}fdm - f(x)$, and we rearrange.
I think there was a third claim, but I can’t find it now. Also, these proofs may look rather trivial now, but when you are completely unfamiliar with the definition and properties, this is rather
confusing to try to work out quickly to continue reading the proof. Hopefully this post will help future readers of Rudin when they come to this.
I guess since I’ve come this far I should probably post some bonus material just to see the point.
Interesting result 1: Almost every point of $\mathbb{R}^k$ is a Lebesgue point of f (still assuming appropriate conditions on f).
The point is to get to the definition of the derivative, so if for all measurable sets E, we have $\mu(E)=\int_E fdm$ for some f, then f is called the Radon-Nikodym derivative and notationally it is
usually written that $d\mu=f dm$ (for the obvious reason that if you integrate both sides you get the first form). But that notation leads us nicely to a more familiar Leibniz-type notation: $f=\frac
{d\mu}{dm}$. Now skipping some other interesting results, some of the meat of the theory comes out in a FTC type result
Interesting result 2: If $f\in L^1(\mathbb{R})$ and $F(x)=\int_{-\infty}^x fdm$, then $F'(x)=f(x)$ at every Lebesgue point of f (and hence, by IR 1, almost everywhere).
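A sanity check on the definitions (this example is mine, not Rudin's): take $f=\chi_{[0,1]}\in L^1(\mathbb{R})$ and look at $x=0$, with the convention $f(0)=1$. For small $r>0$ we have $B_r=(-r,r)$ and $m(B_r)=2r$, so
$\displaystyle \frac{1}{m(B_r)}\int_{B_r}f\,dm=\frac{1}{2} \quad\text{and}\quad \frac{1}{m(B_r)}\int_{B_r}|f(y)-f(0)|\,dm(y)=\frac{1}{2}.$
So the averaged limit from Claim 2 exists at $x=0$ (it equals $1/2\neq f(0)$), yet $0$ is not a Lebesgue point; the mere existence of the averaged limit does not make a point a Lebesgue point.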
Class FFT
public class FFT
extends java.lang.Object
Author: Marc Schröder
│ Constructor Summary │
│ FFT() │ │
│ Method Summary │
│ static double[] │ autoCorrelate(double[] signal) │
│ │ Compute the autocorrelation of a signal, by inverse transformation of its power spectrum. │
│ static double[] │ autoCorrelateWithZeroPadding(double[] signal) │
│ │ Compute the autocorrelation of a signal, by inverse transformation of its power spectrum. │
│ static double[] │ computeAmplitudeSpectrum_FD(double[] fft) │
│ │ From the result of the FFT (in the frequency domain), compute the absolute value for each positive frequency, i.e. │
│ static double[] │ computeAmplitudeSpectrum(double[] signal) │
│ │ Convenience method for computing the absolute amplitude spectrum of a real signal. │
│ static double[] │ computeLogAmplitudeSpectrum_FD(double[] fft) │
│ │ From the result of the FFT (in the frequency domain), compute the log amplitude for each positive frequency. │
│ static double[] │ computeLogAmplitudeSpectrum(double[] signal) │
│ │ Convenience method for computing the log amplitude spectrum of a real signal. │
│ static double[] │ computeLogPowerSpectrum_FD(double[] fft) │
│ │ From the result of the FFT, compute the log (dB) power for each positive frequency. │
│ static double[] │ computeLogPowerSpectrum(double[] signal) │
│ │ Convenience method for computing the log (dB) power spectrum of a real signal. │
│ static double[] │ computePhaseSpectrum_FD(double[] fft) │
│ │ From the result of the FFT (in the frequency domain), compute the phase spectrum for each positive frequency. │
│ static double[] │ computePowerSpectrum_FD(double[] fft) │
│ │ From the result of the FFT (in the frequency domain), compute the power for each positive frequency. │
│ static double[] │ computePowerSpectrum(double[] signal) │
│ │ Convenience method for computing the absolute power spectrum of a real signal. │
│ static double[] │ convolve_FD(double[] signal1, double[] fft2) │
│ │ Compute the convolution of two signals, by multiplying them in the frequency domain. │
│ static double[] │ convolve_FD(double[] signal1, double[] fft2, double deltaT) │
│ │ Compute the convolution of two signals, by multiplying them in the frequency domain. │
│ static double[] │ convolve(double[] signal1, double[] signal2) │
│ │ Compute the convolution of two signals, by multiplying them in the frequency domain. │
│ static double[] │ convolve(double[] signal1, double[] signal2, double deltaT) │
│ │ Compute the convolution of two signals, by multiplying them in the frequency domain. │
│ static double[] │ convolveWithZeroPadding(double[] signal1, double[] signal2) │
│ │ Compute the convolution of two signals, by multipying them in the frequency domain. │
│ static double[] │ convolveWithZeroPadding(double[] signal1, double[] signal2, double deltaT) │
│ │ Compute the convolution of two signals, by multipying them in the frequency domain. │
│ static double[] │ correlate(double[] signal1, double[] signal2) │
│ │ Compute the correlation of two signals, by multiplying the transform of signal2 with the conjugate complex of the transform of signal1, in the frequency domain. │
│ static double[] │ correlateWithZeroPadding(double[] signal1, double[] signal2) │
│ │ Compute the correlation of two signals, by multipying them in the frequency domain. │
│ static void │ main(java.lang.String[] args) │
│ static void │ realTransform(double[] data, boolean inverse) │
│ │ Calculates the Fourier transform of a set of n real-valued data points. │
│ static void │ transform(double[] realAndImag, boolean inverse) │
│ │ Carry out the FFT or inverse FFT, and return the result in the same arrays given as parameters. │
│ static void │ transform(double[] real, double[] imag, boolean inverse) │
│ │ Carry out the FFT or inverse FFT, and return the result in the same arrays given as parameters. │
public FFT()
public static double[] computeLogPowerSpectrum(double[] signal)
Convenience method for computing the log (dB) power spectrum of a real signal. The signal can be of any length; internally, zeroes will be added if signal length is not a power of two.
signal - the real signal for which to compute the power spectrum.
the power spectrum, as an array of length N/2 (where N is the power of two greater than or equal to signal.length): the log of the squared absolute values of the lower half of the complex
fourier transform array.
public static double[] computeLogPowerSpectrum_FD(double[] fft)
From the result of the FFT, compute the log (dB) power for each positive frequency.
fft - the array of real and imag parts of the complex number array: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
Returns:
an array of length real.length/2 containing numbers representing the log of the square of the absolute value of each complex number, p[i] = real[i]*real[i] + imag[i]*imag[i]
public static double[] computePowerSpectrum(double[] signal)
Convenience method for computing the absolute power spectrum of a real signal. The signal can be of any length; internally, zeroes will be added if signal length is not a power of two.
signal - the real signal for which to compute the power spectrum.
the power spectrum, as an array of length N/2 (where N is the power of two greater than or equal to signal.length): the squared absolute values of the lower half of the complex fourier
transform array.
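As a quick usage sketch (ours, not part of the original javadoc; the package name marytts.util.math is taken from this javadoc's location, and the sampling rate and tone are arbitrary):

import marytts.util.math.FFT;

public class PowerSpectrumDemo {
    public static void main(String[] args) {
        int n = 1024;
        double fs = 8000.0, f0 = 440.0;          // assumed sampling rate and test tone
        double[] signal = new double[n];
        for (int i = 0; i < n; i++) signal[i] = Math.sin(2 * Math.PI * f0 * i / fs);
        double[] power = FFT.computePowerSpectrum(signal);   // length n/2
        int kMax = 0;                            // bin k corresponds to k*fs/n Hz
        for (int k = 1; k < power.length; k++) if (power[k] > power[kMax]) kMax = k;
        System.out.printf("peak near %.1f Hz%n", kMax * fs / n);
    }
}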
public static double[] computePowerSpectrum_FD(double[] fft)
From the result of the FFT (in the frequency domain), compute the power for each positive frequency.
fft - the array of real and imag parts of the complex number array: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
Returns:
an array of length real.length/2 containing numbers representing the square of the absolute value of each complex number, p[i] = real[i]*real[i] + imag[i]*imag[i]
public static double[] computeLogAmplitudeSpectrum(double[] signal)
Convenience method for computing the log amplitude spectrum of a real signal. The signal can be of any length; internally, zeroes will be added if signal length is not a power of two.
signal - the real signal for which to compute the power spectrum.
the log amplitude spectrum, as an array of length N/2 (where N is the power of two greater than or equal to signal.length): the log of the absolute values of the lower half of the complex
fourier transform array.
public static double[] computeLogAmplitudeSpectrum_FD(double[] fft)
From the result of the FFT (in the frequency domain), compute the log amplitude for each positive frequency.
fft - the array of real and imag parts of the complex number array: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
Returns:
an array of length real.length/2 containing numbers representing the log of the absolute value of each complex number, log(r[i]) with r[i] = sqrt(real[i]*real[i] + imag[i]*imag[i])
public static double[] computeAmplitudeSpectrum(double[] signal)
Convenience method for computing the absolute amplitude spectrum of a real signal. The signal can be of any length; internally, zeroes will be added if signal length is not a power of two.
signal - the real signal for which to compute the power spectrum.
the power spectrum, as an array of length N/2 (where N is the power of two greater than or equal to signal.length): the absolute values of the lower half of the complex fourier transform
public static double[] computeAmplitudeSpectrum_FD(double[] fft)
From the result of the FFT (in the frequency domain), compute the absolute value for each positive frequency, i.e. the norm of each complex number in the lower half of the array
fft - the array of real and imag parts of the complex number array: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
Returns:
an array of length real.length/2 containing numbers representing the absolute value of each complex number, r[i] = sqrt(real[i]*real[i] + imag[i]*imag[i])
public static double[] computePhaseSpectrum_FD(double[] fft)
From the result of the FFT (in the frequency domain), compute the phase spectrum for each positive frequency.
fft - the array of real and imag parts of the complex number array: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
Returns:
an array of length real.length/2 containing numbers representing the phases of each complex number, phase[i] = atan(imag[i], real[i])
public static void transform(double[] real,
double[] imag,
boolean inverse)
Carry out the FFT or inverse FFT, and return the result in the same arrays given as parameters. In the case of the "forward" FFT, real is the signal to transform, and imag is an empty array.
After the call, real will hold the real part of the complex frequency array, and imag will hold the imaginary part. They are ordered such that first come positive frequencies from 0 to fmax, then
the negative frequencies from -fmax to 0 (which are the mirror image of the positive frequencies). In the case of the inverse FFT, real and imag are in input the real and imaginary part of the
complex frequencies, and in output, real is the signal. The method already computes the division by array length required for the inverse transform.
real - in "forward" FFT: as input=the time-domain signal to transform, as output=the real part of the complex frequencies; in inverse FFT: as input=the real part of the complex frequencies,
as output= the time-domain signal.
imag - in "forward" FFT: as input=an empty array, as output=the imaginary part of the complex frequencies; in inverse FFT: as input=the imaginary part of the complex frequencies, as output=
not used.
inverse - whether to calculate the FFT or the inverse FFT.
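A minimal round-trip sketch (ours; it relies only on the behavior documented above, including the inverse transform's built-in division by the array length):

import java.util.Arrays;
import marytts.util.math.FFT;

public class TransformRoundTrip {
    public static void main(String[] args) {
        double[] real = {1, 2, 3, 4, 4, 3, 2, 1};   // length must be a power of two
        double[] imag = new double[real.length];    // empty imaginary part on input
        double[] original = Arrays.copyOf(real, real.length);
        FFT.transform(real, imag, false);           // forward: real/imag now hold the spectrum
        FFT.transform(real, imag, true);            // inverse: real holds the signal again
        double maxErr = 0;
        for (int i = 0; i < real.length; i++)
            maxErr = Math.max(maxErr, Math.abs(real[i] - original[i]));
        System.out.println("max round-trip error: " + maxErr);  // should be tiny
    }
}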
public static void transform(double[] realAndImag,
boolean inverse)
Carry out the FFT or inverse FFT, and return the result in the same arrays given as parameters. This works exactly like #transform(real, imag, boolean), but data is represented differently: the
even indices of the input array hold the real part, the odd indices the imag part of each complex number.
realAndImag - the array of complex numbers to transform
inverse - whether to calculate the FFT or the inverse FFT.
public static void realTransform(double[] data,
boolean inverse)
Calculates the Fourier transform of a set of n real-valued data points. Replaces this data (which is stored in array data[1..n]) by the positive frequency half of its complex Fourier transform.
The real-valued first and last components of the complex transform are returned as elements data[1] and data[2], respectively. n must be a power of 2. This routine also calculates the inverse
transform of a complex data array if it is the transform of real data. (Result in this case must be multiplied by 2/n.)
data -
public static double[] convolveWithZeroPadding(double[] signal1,
double[] signal2,
double deltaT)
Compute the convolution of two signals, by multiplying them in the frequency domain. Normalise the result with respect to deltaT (the inverse of the sampling rate). This method applies zero
padding where necessary to ensure that the result is not polluted because of assumed periodicity. The two signals need not be of equal length.
signal1 -
signal2 -
deltaT - the time difference between two samples (= 1/samplingrate)
the convolved signal, with length signal1.length+signal2.length
public static double[] convolveWithZeroPadding(double[] signal1,
double[] signal2)
Compute the convolution of two signals, by multiplying them in the frequency domain. This method applies zero padding where necessary to ensure that the result is not polluted because of assumed
periodicity. The two signals need not be of equal length.
signal1 -
signal2 -
the convolved signal, with length signal1.length+signal2.length
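For example (ours; the kernel values are arbitrary), smoothing a short signal without wrap-around artifacts:

import marytts.util.math.FFT;

public class ConvolutionDemo {
    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4, 5};          // lengths need not match or be powers of two
        double[] kernel = {0.25, 0.5, 0.25};        // simple smoothing kernel
        double[] smoothed = FFT.convolveWithZeroPadding(signal, kernel);
        for (double v : smoothed) System.out.printf("%.3f ", v);
        System.out.println();
    }
}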
public static double[] convolve(double[] signal1,
double[] signal2,
double deltaT)
Compute the convolution of two signals, by multiplying them in the frequency domain. Normalise the result with respect to deltaT (the inverse of the sampling rate). This is the core method,
requiring two signals of equal length, which must be a power of two, and not checking for pollution arising from the assumed periodicity of both signals.
signal1 -
signal2 -
deltaT - the time difference between two samples (= 1/samplingrate)
the convolved signal, of the same length as the two input signals
java.lang.IllegalArgumentException - if the two input signals do not have the same length.
public static double[] convolve(double[] signal1,
double[] signal2)
Compute the convolution of two signals, by multiplying them in the frequency domain. This is the core method, requiring two signals of equal length, which must be a power of two, and not checking
for pollution arising from the assumed periodicity of both signals.
signal1 -
signal2 -
the convolved signal, of the same length as the two input signals
java.lang.IllegalArgumentException - if the two input signals do not have the same length.
public static double[] convolve_FD(double[] signal1,
double[] fft2,
double deltaT)
Compute the convolution of two signals, by multiplying them in the frequency domain. Normalise the result with respect to deltaT (the inverse of the sampling rate). This is a specialised version
of the core method, requiring two signals of equal length, which must be a power of two, and not checking for pollution arising from the assumed periodicity of both signals. In this version, the
first signal is provided in the time domain, while the second is already transformed into the frequency domain.
signal1 - the first input signal, in the time domain
fft2 - the complex transform of the second signal, in the frequency domain: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
deltaT - the time difference between two samples (= 1/samplingrate)
the convolved signal, of the same length as the two input signals
java.lang.IllegalArgumentException - if the two input signals do not have the same length.
public static double[] convolve_FD(double[] signal1,
double[] fft2)
Compute the convolution of two signals, by multiplying them in the frequency domain. This is a specialised version of the core method, requiring two signals of equal length, which must be a power
of two, and not checking for pollution arising from the assumed periodicity of both signals. In this version, the first signal is provided in the time domain, while the second is already
transformed into the frequency domain.
signal1 - the first input signal, in the time domain
fft2 - the complex transform of the second signal, in the frequency domain: fft[0] = real[0], fft[1] = real[N/2], fft[2*i] = real[i], fft[2*i+1] = imag[i] for 1 <= i < N/2
Returns:
the convolved signal, of the same length as the two input signals
java.lang.IllegalArgumentException - if the two input signals do not have the same length.
public static double[] correlateWithZeroPadding(double[] signal1,
double[] signal2)
Compute the correlation of two signals, by multiplying them in the frequency domain. This method applies zero padding where necessary to ensure that the result is not polluted because of assumed
periodicity. The two signals need not be of equal length.
signal1 -
signal2 -
the correlation function, with length signal1.length+signal2.length
public static double[] correlate(double[] signal1,
double[] signal2)
Compute the correlation of two signals, by multiplying the transform of signal2 with the conjugate complex of the transform of signal1, in the frequency domain. Sign convention: if signal2 is
shifted by n to the right of signal1, then the correlation function will have a peak at positive n. This is the core method, requiring two signals of equal length, which must be a power of two,
and not checking for pollution arising from the assumed periodicity of both signals.
signal1 -
signal2 -
the correlated signal, of the same length as the two input signals
java.lang.IllegalArgumentException - if the two input signals do not have the same length.
public static double[] autoCorrelate(double[] signal)
Compute the autocorrelation of a signal, by inverse transformation of its power spectrum. This is the core method, requiring a signal whose length must be a power of two, and not checking for
pollution arising from the assumed periodicity of the signal.
signal -
the correlated signal, of the same length as the input signal
public static double[] autoCorrelateWithZeroPadding(double[] signal)
Compute the autocorrelation of a signal, by inverse transformation of its power spectrum. This method applies zero padding where necessary to ensure that the result is not polluted because of
assumed periodicity.
signal -
the correlated signal, of the same length as the input signal
public static void main(java.lang.String[] args)
throws java.lang.Exception
The classification of different objects, as well as different terrain characteristics, with single channel monopolarisation SAR images can carry a significant amount of error, even when operating
after multilooking [1]. One of the most challenging applications of polarimetry in remote sensing is landcover classification using fully polarimetric SAR (PolSAR) images [2].
The Wishart maximum likelihood (WML) method has often been used for PolSAR classification [3]. However, it does not take explicitly into consideration the phase information contained within
polarimetric data, which plays a direct role in the characterization of a broad range of scattering processes. Furthermore, the covariance or coherency matrices are determined after spatial averaging
and therefore can only describe stochastic scattering processes while certain objects, such as man-made objects, are better characterized at pixel-level [4].
To overcome the above shortcomings, polarimetric decompositions were introduced with the aim of establishing a correspondence between the physical characteristics of the considered areas and the observed scattering mechanisms. The most effective method is the Cloude decomposition, also known as the H/A/α method [5]. Recently, texture information has been extracted and used to enhance the classification results. Gray-level co-occurrence matrices (GLCM) have already been successfully applied to classification problems [6]. We choose the combination of H/A/α and GLCM as the parameter set of our study.
In order to reduce the dimension of the feature vectors obtained by H/A/α and GLCM, and to increase the discriminative power, the principal component analysis (PCA) method was employed. PCA is appealing since it effectively reduces the dimensionality of the features and therefore reduces the computational cost.
The next problem is how to choose the best classifier. In the past years, standard multi-layered feed-forward neural networks (FNN) have been applied for SAR image classification [7]. FNNs are
effective classifiers since they do not involve complex models and equations as compared to traditional regression analysis. In addition, they can easily adapt to new data through a re-training
However, NNs suffer from converging too slowly and being easily trapped in local extrema if a back-propagation (BP) algorithm is used for training [8]. Genetic algorithms (GA) [9] have shown promising results in searching for optimal weights of NNs. Besides GA, Tabu search (TS) [10], Particle Swarm Optimization (PSO) [11], and Bacterial Chemotaxis Optimization (BCO) [12] have also been reported. However, GA, TS, and BCO have expensive computational demands, while PSO is well known for its lower computational cost; the most attractive feature of PSO is that it requires less computational bookkeeping and only a few lines of implementation code.
In order to improve the performance of PSO, an adaptive chaotic PSO (ACPSO) method was proposed. In order to prevent overfitting, cross-validation was employed, which is a technique for assessing how
the results of a statistical analysis will generalize to an independent data set and is mainly used to estimate how accurately a predictive model will perform in practice [13]. One round of
cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset
(called the validation set) [14]. To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds [15].
The structure of this paper is as follows: the next Section 2 introduces the concept of Pauli decomposition. Section 3 presents the span image, the H/A/α decomposition, the features derived from the GLCM, and principal component analysis for feature reduction. Section 4 introduces the forward neural network, proposes ACPSO for training, and discusses the importance of using k-fold cross validation. Section 5 uses the NASA/JPL AIRSAR image of the Flevoland site to show that our proposed ACPSO outperforms the traditional BP, adaptive BP, BP with momentum, PSO, and RPROP algorithms. Finally, Section 6 is devoted to the conclusions.
The proposed features can be divided into three types, which are explained below.
The span or total scattered power is given by:
$$M = |S_{hh}|^2 + |S_{vv}|^2 + 2|S_{hv}|^2$$
which indicates the power received by a fully polarimetric system.
H/A/α decomposition is designed to identify, in an unsupervised way, polarimetric scattering mechanisms in the H-α plane [5]. The method relaxes the two assumptions of traditional approaches [17]: (1) azimuthally symmetric targets; (2) equal minor eigenvalues $\lambda_2$ and $\lambda_3$. The coherency matrix T can be rewritten as:
$$T = U_3 \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} U_3^H$$
where:
$$U_3 = \begin{bmatrix} \cos\alpha_1 & \cos\alpha_2 & \cos\alpha_3 \\ \sin\alpha_1\cos\beta_1 e^{i\delta_1} & \sin\alpha_2\cos\beta_2 e^{i\delta_2} & \sin\alpha_3\cos\beta_3 e^{i\delta_3} \\ \sin\alpha_1\sin\beta_1 e^{i\gamma_1} & \sin\alpha_2\sin\beta_2 e^{i\gamma_2} & \sin\alpha_3\sin\beta_3 e^{i\gamma_3} \end{bmatrix}$$
Then, the pseudo-probabilities of the T matrix expansion elements are defined as:
$$P_i = \frac{\lambda_i}{\sum_{j=1}^{3} \lambda_j}$$
The entropy [18] indicates the degree of statistical disorder of the scattering phenomenon. It can be defined as:
$$H = -\sum_{i=1}^{3} P_i \log_3 P_i, \qquad 0 \le H \le 1$$
For high entropy values, a complementary parameter (anisotropy) [1] is necessary to fully characterize the set of probabilities. The anisotropy is defined as the relative importance of the second and third scattering mechanisms [19]:
$$A = \frac{P_2 - P_3}{P_2 + P_3}, \qquad 0 \le A \le 1$$
The four estimates of the angles are easily evaluated as:
$$[\bar{\alpha}, \bar{\beta}, \bar{\delta}, \bar{\gamma}] = \sum_{i=1}^{3} P_i\, [\alpha_i, \beta_i, \delta_i, \gamma_i]$$
Thus, the feature vector derived from the coherency matrix can be represented as $(H, A, \bar{\alpha}, \bar{\beta}, \bar{\delta}, \bar{\gamma})$.
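For concreteness, the following short Java sketch (our illustration, not from the paper; it assumes the three eigenvalues and the per-mechanism alpha angles have already been extracted from the coherency matrix) evaluates H, A and the mean alpha:

static double[] entropyAnisotropyMeanAlpha(double[] lambda, double[] alpha) {
    double sum = lambda[0] + lambda[1] + lambda[2];
    double[] p = { lambda[0] / sum, lambda[1] / sum, lambda[2] / sum };
    double h = 0.0, meanAlpha = 0.0;
    for (int i = 0; i < 3; i++) {
        if (p[i] > 0) h -= p[i] * Math.log(p[i]) / Math.log(3.0); // entropy, log base 3
        meanAlpha += p[i] * alpha[i];                             // probability-weighted mean angle
    }
    double a = (p[1] - p[2]) / (p[1] + p[2]);                     // anisotropy
    return new double[] { h, a, meanAlpha };
}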
The gray level co-occurrence matrix (GLCM) is a texture descriptor which takes into account the specific position of a pixel relative to another. The GLCM is a matrix whose elements correspond to the relative frequency of occurrence of pairs of gray level values of pixels separated by a certain distance in a given direction [20]. Formally, the elements of a GLCM G(i,j) for a displacement vector (a,b) are defined as:
$$G(i,j) = \left|\{\,((x,y),(t,v)) : I(x,y) = i \ \text{and} \ I(t,v) = j\,\}\right|$$
where $(t,v) = (x + a, y + b)$, and $|\cdot|$ denotes the cardinality of a set. The displacement vector (a,b) can be rewritten as $(d, \theta)$ in polar coordinates.
GLCMs are suggested to be calculated from four displacement vectors with d = 1 and θ = 0°, 45°, 90°, and 135° respectively. In this study, the (a, b) are chosen as (0,1), (−1,1), (−1,0), and (−1,–1)
respectively, and the corresponding GLCMs are averaged. The four features are extracted from the normalized GLCM, whose entries p(i,j) sum to 1; their detailed definitions are listed in Table 2.
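As a concrete illustration (ours; the paper gives no code), one displacement's GLCM can be accumulated and normalized as follows, assuming the image has already been quantized to integer gray levels in [0, levels):

static double[][] glcm(int[][] img, int levels, int a, int b) {
    double[][] g = new double[levels][levels];
    int count = 0;
    for (int x = 0; x < img.length; x++) {
        for (int y = 0; y < img[0].length; y++) {
            int t = x + a, v = y + b;                 // displaced pixel (t, v)
            if (t >= 0 && t < img.length && v >= 0 && v < img[0].length) {
                g[img[x][y]][img[t][v]]++;            // co-occurrence count G(i, j)
                count++;
            }
        }
    }
    for (double[] row : g)                            // normalize so entries p(i, j) sum to 1
        for (int j = 0; j < levels; j++) row[j] /= count;
    return g;
}

The four GLCMs for θ = 0°, 45°, 90°, and 135° would then be averaged before extracting the Table 2 features.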
The texture features consist of 4 GLCM-based features, which should be multiplied by 3 since there are three channels (T[11], T[22], T[33]). In addition, there are one span feature, and six H/α
parameters. In all, the number of total features is 1 + 6 + 4 × 3 = 19.
PCA is an efficient tool to reduce the dimension of a data set consisting of a large number of interrelated variables while retaining most of the variation. It is achieved by transforming the data set to a new set of ordered variables according to their variances or importance. This technique has three effects: it orthogonalizes the components of the input vectors so that they are uncorrelated with each other, it orders the resulting orthogonal components so that those with the largest variation come first, and it eliminates those components contributing the least to the variation in the data set.
More specifically, for a given data matrix of size n × m, where n and m are the numbers of variables and temporal observations, respectively, the p principal axes (p ≪ n) are the orthogonal axes onto which the retained variance is maximal in the projected space. PCA describes the original data by projecting it onto a basis of eigenvectors. The corresponding
eigenvalues account for the energy of the process in the eigenvector directions. It is assumed that most of the information in the observation vectors is contained in the subspace spanned by the
first p principal components. Considering data projection restricted to p eigenvectors with the highest eigenvalues, an effective reduction in the input space dimensionality of the original data can
be achieved with minimal information loss. Reducing the dimensionality of the n dimensional input space by projecting the input data onto the eigenvectors corresponding to the first p eigenvalues is
an important step that facilitates subsequent neural network analysis [22].
The detailed steps of PCA are as follows: (1) organize the dataset; (2) calculate the mean along each dimension; (3) calculate the deviation; (4) find the covariance matrix; (5) find the eigenvectors
and eigenvalues of the covariance matrix; (6) sort the eigenvectors and eigenvalues; (7) compute the cumulative energy content for each eigenvector; (8) select a subset of the eigenvectors as the new
basis vectors; (9) convert the source data to z-scores; (10) project the z-scores of the data onto the new basis. Figure 1 shows a geometric illustration of PCA. Here the original basis is {x1, x2}, and the new basis is {F1, F2}. After the data is projected onto the new basis, it is concentrated along the first dimension of the new basis.
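A minimal Java sketch of these steps is shown below (our illustration; it leans on Apache Commons Math for the eigen-decomposition, which is an assumption, not a dependency of the paper):

import org.apache.commons.math3.linear.*;
import org.apache.commons.math3.stat.correlation.Covariance;

/** Projects the rows of data (observations x variables) onto the top p principal components. */
static double[][] pca(double[][] data, int p) {
    RealMatrix x = MatrixUtils.createRealMatrix(data);
    for (int j = 0; j < x.getColumnDimension(); j++) {     // center each variable at zero mean
        double mean = 0;
        for (int i = 0; i < x.getRowDimension(); i++) mean += x.getEntry(i, j);
        mean /= x.getRowDimension();
        for (int i = 0; i < x.getRowDimension(); i++) x.addToEntry(i, j, -mean);
    }
    RealMatrix cov = new Covariance(x).getCovarianceMatrix();
    EigenDecomposition eig = new EigenDecomposition(cov);  // eigenvalues come sorted in descending order
    RealMatrix basis = new Array2DRowRealMatrix(cov.getRowDimension(), p);
    for (int k = 0; k < p; k++) basis.setColumnVector(k, eig.getEigenvector(k));
    return x.multiply(basis).getData();                    // scores on the first p components
}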
Neural networks are widely used in pattern classification since they do not need any information about the probability distribution or the a priori probabilities of the different classes. A two-hidden-layer backpropagation neural network is adopted, with sigmoid neurons in the hidden layers and linear neurons in the output layer; the layer sizes are determined via the information entropy method [23].
The training vectors are formed from the selected areas and normalized and presented to the NN which is trained in batch mode. The network configuration is N[I] × N[H][1] × N[H][2] × N[O], i.e., a
three-layer network with N[I] neurons in the input layer, N[H][1] neurons in the first hidden layer, N[H][2] neurons in the second hidden layer, and N[O] neuron in the output layer (Figure 2). Their
values vary with the remote-sensing area, and will be determined in the Experimental section.
Traditional NN training methods can easily be trapped in local minima, and the training procedures take a long time [24]. In this study, PSO is chosen to find the optimal parameters of the neural network. PSO is a population-based stochastic optimization technique based on simulating the social behavior of bird flocking, bee swarming, and fish schooling. By randomly initializing the algorithm with candidate solutions, the PSO searches for a global optimum [25]. This is achieved by an iterative procedure based on the processes of movement and intelligence in an evolutionary system. Figure 3 shows the flow chart of a PSO algorithm.
In PSO, each potential solution is represented as a particle. Two properties (position x and velocity v) are associated with each particle. Suppose the x and v of the ith particle are given as [26]:
$$x_i = (x_{i1}, x_{i2}, \cdots, x_{iN}), \qquad v_i = (v_{i1}, v_{i2}, \cdots, v_{iN})$$
where N stands for the dimension of the problem. In each iteration, a fitness function is evaluated for all the particles in the swarm. The velocity of each particle is updated by keeping track of two best positions. One is the best position a particle has traversed so far, called "pBest". The other is the best position that any neighbor of a particle has traversed so far; it is a neighborhood best and is called "nBest". When a particle takes the whole population as its neighborhood, the neighborhood best becomes the global best, accordingly called "gBest". Hence, a particle's velocity and position are updated as follows:
$$v = \omega \cdot v + c_1 r_1 (pBest - x) + c_2 r_2 (nBest - x) \qquad (16)$$
$$x = x + v\,\Delta t \qquad (17)$$
where ω is called the "inertia weight", which controls the impact of the particle's previous velocity on its current one. $c_1$ and $c_2$ are positive constants called "acceleration coefficients". $r_1$ and $r_2$ are random numbers uniformly distributed in the interval [0,1]; these random numbers are regenerated every time they occur. Δt stands for the given time step and usually equals 1.
The population of particles is then moved according to Equations (16) and (17), and tends to cluster together from different directions. However, a maximum velocity $v_{\max}$ should not be exceeded by any particle, to keep the search within a meaningful solution space. The PSO algorithm runs through these processes iteratively until the termination criterion is satisfied.
Let NP denote the number of particles, each having a position $x_i$ and a velocity $v_i$. Let $p_i$ be the best known position of particle i and g be the best known position of the entire swarm. A basic PSO algorithm can be described as follows:
Step 1 Initialize every particle’s position with a uniformly distributed random vector;
Step 2 Initialize every particle’s best known position to its initial position, viz., p[i] = x[i];
Step 3 If f(p[i]) < f(g), then update the swarm’s best known position, g = p[i];
Step 4 Repeat until certain termination criteria was met
Step 4.1 Pick random numbers r[1] & r[2];
Step 4.2 Update every particle’s velocity according to formula (16);
Step 4.3 Update every particle’s position according to formula (17);
Step 4.4 If f(x[i]) < f(p[i]), then update the particle’s best known position, p[i] = x[i]. If f(p[i]) < f(g), then update the swarm’s best known position, g = p[i].
Step 5 Output g which holds the best found solution.
In order to enhance the performance of canonical PSO, two improvements are proposed as follows. The inertia weight ω in Equation (16) affects the performance of the algorithm. A larger inertia weight pushes towards global exploration, while a smaller one pushes towards fine-tuning of the current search area [27]. Thus, proper control of ω is important to find the optimum solution accurately. To deal with this shortcoming, an "adaptive inertia weight factor" (AIWF) was employed as follows:
$$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{k_{\max}} \cdot k \qquad (18)$$
Here, $\omega_{\max}$ denotes the maximum inertia weight, $\omega_{\min}$ denotes the minimum inertia weight, $k_{\max}$ denotes the epoch at which the inertia weight reaches its final minimum, and k denotes the current epoch.
The parameters $(r_1, r_2)$ were generated by pseudo-random number generators (RNGs) in classical PSO. RNGs cannot ensure the optimization's ergodicity in the solution space because they are pseudo-random; therefore, we employed the Rossler chaotic operator [28] to generate the parameters $(r_1, r_2)$. The Rossler equations are:
$$\frac{dx}{dt} = -(y+z), \qquad \frac{dy}{dt} = x + ay, \qquad \frac{dz}{dt} = b + xz - cz \qquad (19)$$
Here a, b, and c are parameters. In this study, we chose a = 0.2, b = 0.4, and c = 5.7. The results are shown in Figure 4, where the line in the 3D space exhibits a strong chaotic property called
“spiral chaos”.
The dynamic properties of x(t) and y(t) are shown in Figure 5, where x(t) and y(t) satisfy both ergodicity and randomness. Therefore, we let r[1] = x(t) and r[2] = y(t) to embed the chaotic operator
into the canonical PSO method.
There are some other chaotic PSO methods proposed in the past. Wang et al. [29] proposed a chaotic PSO to find the high precision prediction for the grey forecasting model. Chuang et al. [30]
proposed a chaotic catfish PSO for solving global numeric optimization problem. Araujo et al. [31] intertwined PSO with Lozi map chaotic sequences to obtain Takagi-Sugeno fuzzy model for representing
dynamic behaviors. Coelho [32] presented an efficient PSO algorithm based on Gaussian distribution and chaotic sequence to solve the reliability–redundancy optimization problems. Coelho et al. [33]
presented a quantum-inspired version of the PSO using the harmonic oscillator well to solve the economic dispatch problem. Cai et al. [34] developed a multi-objective chaotic PSO method to solve the
environmental economic dispatch problems considering both economic and environmental issues. Coelho et al. [35] proposed three differential evolution approaches based on chaotic sequences using
logistic equation for image enhancement process. Sun et al. [36] proposed a drift PSO and applied it in estimating the unknown parameters of chaotic dynamic system.
The main difference between our ACPSO and the popular PSO lies in two points: (1) we introduced the adaptive inertia weight factor strategy; (2) we used the Rossler attractor because of the following advantages [37]: the Rossler system is simpler, having only one manifold, and easier to analyze qualitatively. In total, the procedure of ACPSO is as follows:
Step 1 Initialize every particle’s position with a uniformly distributed random vector;
Step 2 Initialize every particle’s best known position to its initial position, viz., p[i] = x[i];
Step 3 If f(p[i]) < f(g), then update the swarm’s best known position, g = p[i];
Step 4 Repeat until certain termination criteria was met:
Step 4.1 Update the value of ω according to formula (18);
Step 4.2 Pick chaotic random numbers r[1] & r[2] according to formula (19)
Step 4.3 Update every particle’s velocity according to formula (16);
Step 4.4 Update every particle’s position according to formula (17);
Step 4.5 If f(x[i]) < f(p[i]), then update the particle’s best known position, p[i] = x[i]. If f(p[i]) < f(g), then update the swarm’s best known position, g = p[i].
Step 5 Output g which holds the best found solution.
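To make the update concrete, here is a minimal Java sketch of one ACPSO iteration (our illustration; the Euler step size and the squashing of the Rossler coordinates into [0,1] are assumptions the paper does not pin down):

static double rx = 0.1, ry = 0.1, rz = 0.1;                 // Rossler state
static final double A = 0.2, B = 0.4, C = 5.7, DT = 0.01;   // a, b, c from the text; DT assumed

static double[] chaoticR() {                                // one Euler step of Eq. (19)
    double dx = -(ry + rz), dy = rx + A * ry, dz = B + rx * rz - C * rz;
    rx += DT * dx; ry += DT * dy; rz += DT * dz;
    return new double[] { Math.abs(rx % 1.0), Math.abs(ry % 1.0) }; // squash into [0,1]
}

static void acpsoStep(double[][] x, double[][] v, double[][] pBest, double[] gBest,
                      int k, int kMax, double wMax, double wMin, double c1, double c2) {
    double w = wMax - (wMax - wMin) / kMax * k;             // adaptive inertia, Eq. (18)
    for (int i = 0; i < x.length; i++) {
        double[] r = chaoticR();                            // chaotic (r1, r2)
        for (int d = 0; d < x[i].length; d++) {
            v[i][d] = w * v[i][d]
                    + c1 * r[0] * (pBest[i][d] - x[i][d])
                    + c2 * r[1] * (gBest[d] - x[i][d]);     // Eq. (16)
            x[i][d] += v[i][d];                             // Eq. (17) with dt = 1
        }
    }
}

Fitness evaluation and the pBest/gBest bookkeeping of Steps 2 through 4.5 are unchanged from the canonical PSO loop above.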
Let $\omega_1$, $\omega_2$, $\omega_3$ represent the connection weight matrices between the input layer and the first hidden layer, between the first and the second hidden layer, and between the second hidden layer and the output layer, respectively. When ACPSO is employed to train the multi-layer neural network, each particle is encoded as:
$$\omega = [\omega_1, \omega_2, \omega_3]$$
The outputs of all neurons in the first hidden layer are calculated as:
$$y_{1j} = f_H\left(\sum_{i=1}^{N_I} \omega_1(i,j)\, x_i\right), \quad j = 1, 2, \cdots, N_{H1}$$
Here $x_i$ denotes the ith input value, $y_{1j}$ denotes the jth output of the first hidden layer, and $f_H$ is the activation function of the hidden layers. The outputs of all neurons in the second hidden layer are calculated as:
$$y_{2k} = f_H\left(\sum_{j=1}^{N_{H1}} \omega_2(j,k)\, y_{1j}\right), \quad k = 1, 2, \cdots, N_{H2}$$
where $y_{2k}$ denotes the kth output of the second hidden layer. The outputs of all neurons in the output layer are given by:
$$O_l = f_O\left(\sum_{k=1}^{N_{H2}} \omega_3(k,l)\, y_{2k}\right), \quad l = 1, 2, \ldots, N_O$$
Here $f_O$ denotes the activation function of the output layer, usually a linear function. Traditionally, all weights are assigned random initial values and are modified by the delta rule according to the learning samples.
The error of one sample is expressed as the MSE of the difference between its output and the corresponding target value:
$$E_m = \frac{1}{N_O}\sum_{l=1}^{N_O} (O_l - T_l)^2, \quad m = 1, 2, \ldots, N_S$$
where $T_l$ represents the lth component of the target vector, which is known in advance, and $N_S$ represents the number of samples. With $N_S$ samples, the fitness value is written as:
$$F(\omega) = \sum_{m=1}^{N_S} E_m$$
where ω represents the vectorization of $(\omega_1, \omega_2, \omega_3)$. Our goal is to minimize this fitness function F(ω) by the proposed ACPSO method, viz., force the output values of each sample to approximate the corresponding target values.
Cross validation methods are of three types: random subsampling, K-fold cross validation, and leave-one-out validation. K-fold cross validation is applied here because it is simple and uses all the data for both training and validation. The mechanism is to create a K-fold partition of the whole dataset, repeat K times using K-1 folds for training and the left-out fold for validation, and finally average the error rates of the K experiments. A schematic diagram of 5-fold cross validation is shown in Figure 6.
A challenge is to determine the number of folds. If K is set too large, the bias of the true error rate estimator will be small, however, the variance of the estimator will be large and the
computation will be time-consuming. Alternatively, if K is set too small, the computation time will decrease, the variance of the estimator will be small, but the bias of the estimator will be large.
The advantages and disadvantages of setting K large or small are listed in Table 3. In this study, K is determined as 10 through trial-and-error method.
If model selection and true error estimation are to be computed simultaneously, the data needs to be divided into three disjoint sets [38]. In other words, the validation subset is used to tune the parameters of the neural network model, so another test subset is needed to assess the performance of the trained neural network; viz., the whole dataset is divided into three subsets with different purposes, as listed in Table 4. The reason the validation set and the test set cannot be merged is that the error rate estimated via the validation data will be biased (smaller than the true error rate), since the validation set is used to tune the model [39].
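A small Java sketch of the fold construction (ours; the shuffling seed is arbitrary) makes the bookkeeping explicit:

import java.util.*;

static List<List<Integer>> kFolds(int nSamples, int k, long seed) {
    List<Integer> idx = new ArrayList<>();
    for (int i = 0; i < nSamples; i++) idx.add(i);
    Collections.shuffle(idx, new Random(seed));              // random partition of the samples
    List<List<Integer>> folds = new ArrayList<>();
    for (int f = 0; f < k; f++) folds.add(new ArrayList<>());
    for (int i = 0; i < nSamples; i++) folds.get(i % k).add(idx.get(i));
    return folds;  // in round f, fold f validates and the other k-1 folds train
}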
Flevoland, an agricultural area in The Netherlands, is chosen as the example. The site is composed of strips of rectangular agricultural fields. The scene is designated as a supersite for the earth
observing system (EOS) program, and is continuously surveyed by the authorities.
The Pauli image of Flevoland is shown in Figure 7(a), and the refined Lee filtered image (Window Size = 7) is shown in Figure 7(b).
The basic span image and three channels (T[11], T[22], T[33]) are easily obtained and shown in Figure 8. The parameters of H/A/Alpha decomposition are shown in Figure 9. The GLCM-based parameters of
T[11], T[22], T[33] are shown in Figures 10–12.
The curve of the cumulative sum of variance versus the number of dimensions retained by PCA is shown in Figure 13. The detailed data are listed in Table 5. It shows that only 13 features, about half of the original 19, preserve 98.06% of the variance.
The classification is run over 13 classes: bare soil 1, bare soil 2, barley, forest, grass, lucerne, peas, potatoes, rapeseed, stem beans, sugar beet, water, and wheat. Our strategy is a semiautomatic method, viz. the training areas were chosen and labeled manually. For each crop type, we choose a square of size 20 × 20, which is easy to perform since the training area size of 13 × 20 × 20 = 5,200 pixels is small compared to the whole image size of 1,024 × 750 = 768,000 pixels. In order to reduce the complexity of the experiment, the test areas are chosen randomly from the remaining areas [40,41], with the same square size as the training areas.
The final manually selected training areas are shown in Figure 14(a). Each square corresponds to a crop type and has size 20 × 20. In total, there are 5,200 pixels for training. The cross validation procedure loops 10 times; in each loop we use 4,680 pixels for training and the remaining 520 pixels for validation. The final randomly selected test areas are shown in Figure 14(b). The sample numbers of the training and test areas are shown in Table 6.
N[I] is determined as 13 due to the 13 features obtained by PCA. N[O] is determined as 13 due to the 13 classes shown in Figure 14. Both N[H][1] and N[H][2] are set to 10 via the information entropy method [42]. Therefore, the number of unknown variables of the neural network is 13 × 10 + 10 + 10 × 10 + 10 + 10 × 13 + 13 = 393.
The network was trained by the proposed ACPSO algorithm, of which the parameters are obtained via trial-and-error method and shown in Table 7. Besides, BP algorithm [8], BP with momentum (MBP) [43],
adaptive BP algorithm (ABP) [44], and PSO [45] are employed as comparative algorithms.
The curves of fitness versus epoch for the different algorithms are shown in Figure 15, indicating that the proposed ACPSO converges the most quickly and is capable of finding the global minimum point.
The confusion matrices on the training area of our method are calculated and shown in Figure 16. The overall accuracies of our method on the training area (combining the training and validation subsets) and the test area are 99.0% and 94.0%, respectively. The main errors concern the following four misclassifications: (I) forest zones are easily misclassified as peas; (II) grasses are easily misclassified as barley and lucerne; (III) lucerne is easily misclassified as grass; (IV) sugar beets are easily misclassified as peas.
A typical classification accuracy of both training area and test area by BP, ABP, MBP, and PSO are listed in Table 8, indicating that the proposed algorithm achieves the highest classification
accuracy on both training (99.0%) and test area (94.0%). The random classifier disregards the information of the training data and returns random predictions, so it is usually employed to find the
lowest classification rate.
Yudong also used the Resilient back-propagation (RPROP) algorithm to train the neural network to classify the same Flevoland area [41], obtaining 98.62% on the training area and 92.87% on the test area. The PSO ranks third with 98.1% on the training area and 88.7% on the test area. The ABP ranks fourth with 90.7% and 86.4% on the training and test areas, respectively. The BP and MBP perform the worst, with classification accuracies only a bit higher than the random classifier's 1/13 = 7.69%, indicating that 2,000 iterations are not enough for these two training strategies. Besides, the classification accuracy of the proposed algorithm was extremely high on the test area due to the 10-fold cross validation.
In order to compare the robustness of the algorithms, we ran each algorithm 50 times and calculated the minimum, average, and maximum classification rates. The results are listed in Table 9. They indicate that the results of each algorithm change from run to run, but the variation is limited, so the ranking of the algorithms is the same as in Table 8.
Computation time is another important factor in evaluating a classifier. Network training with our algorithm costs about 120 s, which can be ignored since the weights/biases of the NN remain fixed after training unless the properties of the images change greatly. For example, the main crops in Flevoland are among the 13 types shown in Figure 14(c); therefore, the classifier can be applied directly to other remote-sensing images of Flevoland without retraining. It costs about 0.131 + 0.242 + 0.232 + 0.181 + 0.048 = 0.83 s from the input of a Flevoland image (size 1,024 × 750) to the output of the final classification results, as shown in Table 10. For each pixel, this amounts to only about 1.08 × 10^−6 s, which is fast enough for real-time applications.
Metaheuristics and feasibility
Let us consider a simple and connected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, a finite set $\mathcal{C}=\{1,\ldots,k\}$ and a scoring function $\phi : \mathcal{C} \rightarrow \mathbb{N}$. Then, my problem can be stated as follows: find an assignment $x \in \mathcal{C}^\mathcal{V}$ such that each node has a value $c \in \mathcal{C}$ and has at least one neighboring node with value $d$, $\forall d<c$.
Game mode I thought of for NBA 2K
10-03-2011, 03:04 AM, #10
troll hunter (Join Date: Feb 2008; Posts: 686)
Re: Game mode I thought of for NBA 2K

Originally Posted by LT Ice Cream:
So I was playing my friend in 2K the other day (Nugs vs. Knicks). He decided to start using only J.R. Smith and eventually scored 60 points with him but he still lost the game.
So I thought it would be interesting to try a new game mode where each person chooses one player, and only that player's points count. At the end of the game, it doesn't matter which team wins, only which player scored more points. For example, the Knicks have more points than the Nugs, but Carmelo scored less than J.R., so the Nuggets still win.
Oh yeah, you can try turning off fatigue as well.

I haven't gotten a chance to try it yet, but let me know if you decide to do it and tell me how it turns out.
Aerospike Nozzle and Minimum Length Nozzle Design
ANNULAR (3-D) & LINEAR (2-D) CONTOURS
Summary of Features
1. Determine the shape of an annular or linear aerospike nozzle given the thruster exit area ratio (Aei/At), projected area expansion ratio (Ae/At), pressure ratio (Pc/Pa), thruster internal radius
(Ra), radius to lip of cowl (Re), total nozzle length from origin (Lnozzle), chamber temperature (Tc), chamber pressure (Pc), ratio of specific heats (γ) and gas constant.
2. Click the UpDown command button to move a locator to one of seven points in the flow field.
3. All important flow properties are displayed in real time as the locator moves from point to point in the flow field described by the characteristic mesh of the aerospike.
4. Generate color contour plots of Mach number (Mn), Pressure (P/Pc), Temperature (T/Tc) and density (R/Rc) with a single click.
5. Plot Mn, P/Pc, T/Tc, R/Rc, CF, CFvac, ISP, ISPvac as a function of aerospike nozzle axial location at a particular Pc/Pa.
6. Plot CF, Thrust, CFvac, ISP, and ISPvac versus pressure ratio (Pc/Pa) on a semi-log scale.
7. Units include, MKS (meter-newton-sec), CGS (centimeter-dyne-second), FPS (foot-pound-second) and IPS (inch-pound-second).
8. Graphically display the outer flow boundary for under expanded flow, over expanded flow and the angle of the outer boundary flow.
9. Graphically display the initial shock wave formed at the lip of the cowl for over expanded flow (Pa/Pc > Pe/Pc) and the shock angle from the lip.
10. Define gas properties for inert gases, liquid propellant gases and solid fuel propellant gases or insert your own values.
11. Define the analysis for annular (3-D) or linear (2-D) aerospike nozzles.
12. Define the angle the sonic section of the thruster makes with the axis of the aerospike nozzle.
13. Added a hybrid rocket motor propellant having the following fuel and oxidizer to the list of combustion gases: 85% Nitrous Oxide, 15% HTPB.
14. Added the ability to save F(x) versus PR (Pressure Ratio) and F(x) versus x to a CSV file for use with Notepad or Excel.
15. In the Aerospike Nozzle Data section added a display of Truncation as percent of total aerospike length.
16. In the Aerospike Nozzle Data section added a display of Distance from throat (origin) to end of thruster.
17. In the Aerospike Nozzle Data section added a display of Distance from end of thruster to end of ramp.
18. NEW! Added the ability to include base thrust for truncated aerospike nozzles.
Propellant Gases Available
│ │
│ Inert Gases │
│Dry Air │Hydrogen │Helium │Water Vapor │Argon │Carbon Dioxide │
│Carbon Monoxide │Nitrogen │Oxygen │Nitrogen Monoxide │Nitrous Oxide │Chlorine │
│Methane │ │ │ │ │ │
│ │
│ Liquid Fuel Propellant Gases │
│Oxygen, 75% Ethyl Alcohol(1.43) │Oxygen, Hydrazine(.09) │Oxygen, Hydrogen(4.02) │
│Oxygen, RP-1(2.56) │Oxygen, UDMH(1.65) │Fluorine, Hydrazine(2.3) │
│Fluorine, Hydrogen(7.60) │Nitrogen Tetroxide, Hydrazine(1.34) │Nitrogen Tetroxide, 50% UDMH, 50% Hydrazine(2.0) │
│Nitric Acid, RP-1(4.8) │Nitric Acid, 50% UDMH, 50% Hydrazine(2.20) │ │
│ │
│ Solid Fuel Propellant Gases │
│Ammonium Nitrate, 11% Binder, 4-20% Mg │Ammonium Perchlorate, 18% Binder, 4-20% Al │Ammonium Perchlorate, 12% Binder, 4-20% Al │
│ │
│ Hybrid Rocket Motor Propellant Gases │
│85% Nitrous Oxide, 15% HTPB │ │ │
│ │
│ User-Defined Gases │
│Specify custom or user-defined gases by inserting Ratio of specific heats for exhaust (γ) and Gas constant of exhaust (Rgas) in the Aerospike Nozzle Data section.│
General Discussion
AeroSpike performs an expansion-wave analysis from the throat of the thruster nozzle, where Mn = 1.0, to the thruster nozzle internal-exit as a series of simple wave expansions. Then, for the
external ramp AeroSpike performs a series of Prandtl-Meyer expansions from the lip of the cowl, where R=Re, to the entire length of the external ramp of the aerospike nozzle. The ideal contour or
shape of the external ramp of the aerospike nozzle is determined using isentropic supersonic flow theory. Then, depending on whether the flow is underexpanded or if the flow is overexpanded AeroSpike
performs either a Prandtl-Meyer expansion analysis or an oblique shock wave analysis to determine the angle of the outer flow boundary from the lip of the cowl. As a by-product of the oblique shock wave analysis, AeroSpike determines the shock wave angle for overexpanded flow and plots both the outer boundary contour and the initial shock wave from the lip of the cowl. If the Check to Include base thrust check box is not checked, then base pressure is assumed equal to atmospheric pressure (Pb = Patm), which means base thrust is zero and only the centerbody and thrusters contribute to total aerospike thrust. However, if the Check to Include base thrust check box is checked, then base pressure and atmospheric pressure are non-equal, resulting in the following aerospike nozzle total-thrust equations: F[total] = F[centerbody] + F[base] + F[thruster] and CF = F[total] / (At * Pc).
From the menu on the top of the main start-up screen, select units (MNS, CGS, FPS or IPS) from the Units menu and then the propellant gas from the Gases menu. A number of inert gases, liquid fuel
propellants and solid fuel propellants are available. The value for the ratio of specific heats (γ) is determined from the Units and Gases menus and is passed on to the Aerospike Nozzle program
after clicking the Aerospike Nozzle command button on the main start-up screen. The ratio of specific heats, gas constant (Rgas), chamber pressure (Pc), and pressure ratio (PR) are required for the
Aerospike Nozzle analysis. Additionally, the pressure ratio (PR) represents the maximum value for Pc/Pa that AeroSpike will use to plot CF, Thrust, CFvac, ISP and ISPvac as a function of PR. The
chamber pressure is computed based on the atmospheric pressure (Pa) and the pressure ratio (Pc/Pa). These values are automatically passed to the Aerospike Nozzle analysis when the command button is
clicked. However, the user can over-ride any input value by inserting new data directly into each data entry box on the Aerospike Nozzle Design screen. Each time the user changes any data entry the
results are automatically updated and displayed. The user only needs to click the Plot button to see a new contour plot of the results or the UpDown button to see flow results at any of the
characteristic mesh points.
Toolbar Operations
1. Click [X] to switch between the main data entry area (Figure-2) and the secondary data entry area (Figure-3). The main data entry area is displayed by default. Specify either annular aerospike
geometry or linear aerospike geometry by clicking one of two option buttons in the secondary data entry area. In addition, the thruster sonic-section angle (60 degrees to 120 degrees) is located in
the secondary data entry area. The thruster sonic-section angle is measured from the axis of the aerospike nozzle to the section that defines the throat of the thruster (where Mach number = 1).
Default = 90 degrees. Finally, check the Check to Include base thrust check box to include base thrust for the determination of total thrust and thrust coefficient (CF) for truncated aerospike
2. Send all flow properties (X, Y, Mn etc.) at each characteristic mesh point to the printer.
3. Send an image of the screen to the printer.
4. Save all flow properties (X,Y, Mn etc) at each characteristic mesh point to a data file.
5. Read the nozzle description file from a previous session.
6. Save the nozzle description file from a previous session.
7. Refresh the displayed analysis to the default analysis seen during start-up.
8. Return to the main start-up screen.
Input Variable Definitions
1. Thruster exit area ratio (Aei/At): Ratio of thruster internal exit area (Aei) to thruster throat area (At). Equation 1 is inverted to find Pc/Pei from Aei/At.
2. Thruster pressure ratio (Pc/Pei): Ratio of chamber pressure to thruster exit pressure. Found by iteration of Equation 1 and displayed in the lower data region.
3. Aerospike expansion ratio (Ae/At): The projected area of the aerospike nozzle (Ae = π · Re²) divided by the total thruster throat area.
4. Ratio of specific heats (γ): Selected from a pull-down menu or user-defined.
5. Gas constant of exhaust (Rgas): Selected from a pull-down menu or user-defined.
6. Aerospike pressure ratio (Pc/Pa): Ratio of the chamber pressure (Pc) to the atmospheric pressure (Pa).
7. Thruster internal circular radius (Ra): Radius of the internal portion of the thruster duct from point 1 (throat) to point 2 (Ra).
8. Radius to lip of cowl (Re): Radius that defines the projected area of the aerospike nozzle (Re).
9. Aerospike length from origin (Lnozzle): Total length of the aerospike nozzle from the origin (throat) of the thruster to the end of the ramp.
10. Chamber temperature (Tc): Chamber temperature in either degrees Rankine or degrees Kelvin depending on the units selected.
11. Chamber pressure (Pc): Chamber pressure whose units depend on the units selected.
12. Width of ramp for linear aerospike nozzles (Lramp).
Equation 1: Thruster Cross-Sectional Area and Pressure Ratio Relationship.
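The original page shows Equation 1 only as an image. A plausible reconstruction, assuming it is the standard pair of isentropic relations (with $\gamma$ the ratio of specific heats and $M$ the Mach number at the thruster internal exit), is:

$$\frac{A_{ei}}{A_t} = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1+\frac{\gamma-1}{2}M^2\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}}, \qquad \frac{P_c}{P_{ei}} = \left(1+\frac{\gamma-1}{2}M^2\right)^{\frac{\gamma}{\gamma-1}}.$$

Eliminating $M$ between the two gives the area-ratio/pressure-ratio relationship that the program inverts iteratively (see input variable 2 above).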
Figure 1. Aerospike Nozzle Displaying Basic Geometry and the External Expansion Fan.
AeroSpike Validation-1
Figure 2. Aerospike Nozzle - Optimum Expansion (PR = 71.5) and 20% plug nozzle configuration
Figure 3. Aerospike Nozzle - Secondary Input Data Entry Area for Annular/Linear nozzle and thruster angle inputs.
Truncated Aerospike Nozzle Base Pressure, Total Thrust and Thrust Coefficient
To determine truncated aerospike nozzle base pressure (Pb) for computing total thrust and pressure coefficient (CF) simply check the Check to Include base thrust check box in the Annular or Linear
Ramp Selection and Thruster Angle data entry area. When checked this check box activates base pressure computation for truncated aerospike nozzles where base pressure is included for determination of
total aerospike thrust and thrust coefficient. Truncated aerospike nozzle base thrust is determined using two curve-fit relationships. The first relationship is atmospheric pressure (Patm) versus pressure ratio (PR = Pc/Pa), and the second is base pressure versus percent truncation. Here, X% truncation refers to an aerospike nozzle where (100−X)% of the expansion ramp has been removed, leaving a blunt base region. The plots in Figure 4 and Figure 5 illustrate the relationships between Patm and PR and between Pb and percent truncation used for the computation of base thrust, total thrust and thrust coefficient. The curve-fit for base pressure versus percent truncation displayed in Figure 5 was developed using several Computational Fluid Dynamics (CFD) analyses of aerospike nozzles having 20%, 30%, 40% and 50% truncation.
Aerospike nozzle total thrust is computed using the following equations knowing PR and percent truncation.
F[base] = (Pb − Patm) · A[base], F[total] = F[centerbody] + F[base] + F[thruster], and CF = F[total] / (At · Pc), where Patm = fn(PR) and Pb = fn(% truncation).
Figure 4. Atmospheric pressure (Pa) versus pressure ratio (PR = Pc/Patm).
Figure 5. Base pressure (Pb) versus percent truncation; curve-fit using CFD analyses of aerospike nozzles having 20%, 30%, 40% and 50% truncation.
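A minimal sketch of the bookkeeping described above (the numbers in the demo call are made up for illustration; the actual Figure 4 and Figure 5 curve fits are not published on this page, so pb and patm are passed in directly rather than computed from fits):

def total_thrust(f_centerbody, f_thruster, pb, patm, a_base, at, pc):
    # F_base from the base pressure difference, then the total-thrust and
    # thrust-coefficient equations quoted above.
    f_base = (pb - patm) * a_base
    f_total = f_centerbody + f_base + f_thruster
    cf = f_total / (at * pc)
    return f_total, cf

# Hypothetical demo values (SI units), purely illustrative:
f_tot, cf = total_thrust(f_centerbody=150e3, f_thruster=90e3,
                         pb=30e3, patm=20e3, a_base=0.4, at=0.02, pc=6.0e6)
print(f_tot, cf)  # 244000.0 N and CF ~ 2.03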
CF vs. PR Validation
Flow Field Validation
Figure 6. CF versus pressure ratio (Pc/Pa), semi-log plot, maximum PR = 1000. AeroSpike CF versus PR compared to a 20% (80% truncated) plug nozzle; the Base Flow CFD analysis includes the base pressure coefficient. Reference: AIAA 2001-1051, T. Ito, K. Fujii, "Flow Field Analysis of the Base Region of Axisymmetric Aerospike Nozzles."
"I used (AeroSpike) to design several types of nozzle(s) and found your software is really useful".
Takashi Ito, JAXA
Reference: "Aerospike Nozzle Flow Fields", AIAA 2001-1051, by Takashi Ito. Contour plots used with permission from reference. Figure 7. AeroSpike program nozzle flow field results
compared to Base Flow CFD analyses for PR=9, PR=71 and
X-33 XRS-2200 AEROSPIKE ROCKET ENGINE ANALYSIS EXAMPLE
In this example, the Linear Ramp (2-D) option is used to determine sea-level and vacuum thrust for the X-33 XRS-2200 aerospike rocket engine. Click the Hide or Show Aerospike Nozzle Data button (X in the toolbar) to select the Linear Aerospike option, and then insert 90 for the 2-D ramp width in the space provided. The data from Figure-9 were used as input for the aerospike analysis illustrated in Figure-10, where vacuum thrust and specific impulse (Isp) are predicted to be 264,600 lbf and 454.7 sec at Pc/Pa = 100,000. Dimensional data for this analysis are based on Boeing's results for the XRS-2200 linear aerospike rocket engine, where vacuum thrust and Isp are 266,230 lbf and 436.5 sec, for a 0.6% variation in thrust and a 4.2% variation in Isp. Due to unavailable thruster dimensions and conflicting dimensional and performance information provided by NASA and relevant technical papers, this aerospike rocket engine analysis is an approximation.
NASA Description of the X-33 Spaceplane: The X-33 was to have been a wedged-shaped subscale technology demonstrator prototype of a potential future Reusable Launch Vehicle (RLV) that Lockheed Martin
dubbed VentureStar. The company hoped to develop VentureStar early this century. Through demonstration flight and ground research, NASA's X-33 program was to have provided the information needed for
industry representatives such as Lockheed Martin to decide whether to proceed with the development of a full-scale, commercial RLV program. The X-33 design was based on a lifting body shape with two
revolutionary linear aerospike rocket engines and a rugged metallic thermal protection system. The vehicle also was to have had lightweight components and fuel tanks built to conform to the vehicle's
outer shape. Time between X-33 flights was planned to normally be seven days, but the program hoped to demonstrate a two-day turnaround between flights during the flight-test phase of the program.
The X-33 was to have been an unpiloted vehicle that took off vertically like a rocket and landed horizontally like an airplane. It was planned to reach altitudes of up to 50 miles and high hypersonic
speeds. The X-33 Program was managed by the Marshall Space Flight Center and was planned to have been launched at a special launch site on Edwards Air Force Base. Technical problems with the
composite liquid hydrogen tank resulted in the program being cancelled in February 2001.
│ XRS-2200 Engine │ 5K ft │ Vacuum │
│ Thrust, lbf │ 204,420 │ 266,230 │
│ Specific Impulse, sec │ 339 │ 436.5 │
│ Propellants │ Oxygen, Hydrogen │
│ Mixture Ratio (O/H) │ 5.5 │
│ Chamber Pressure, psia │ 857 │
│ Cycle │ Gas Generator │
│ Area Ratio (Ae/At) │ 58 │
│ Throttling, Percent Thrust │ 50 - 100 │
│ Dimensions, inches │ ---- │
│ Forward End │ 134 wide x 90 long │
│ Aft End │ 42 wide x 90 long │
│ Forward to Aft │ 90 │
Figure-8. Aerospike engine side view.
Figure-9. XRS-2200 aerospike rocket engine description data, repeated in Figure-10.
Figure-10, XRS-2200 vacuum (Pc/Pa=100,000) analysis. Input data for the results in Figure-10 based on data from Figure-9 using AeroSpike version 2.6.0.5.
│ │5K ft (Pc/Pa = 70.0621) │Vacuum (Pc/Pa = 100,000)│
│Aerospike Engine Results Compared├─────────────┬──────────┼──────────────┬─────────┤
│ │ Thrust, lbf │ Isp, sec │ Thrust, lbf │Isp, sec │
│AeroSpike 2.6.0.x │ 155,500 │ 267 │ 264,600 │ 454.7 │
│Boeing XRS-2200 Results │ 204,420 │ 339 │ 266,230 │ 436.5 │
BOUNDARY SHAPE: The outer boundary angle from the lip of the thruster cowl to the end of the external ramp increases as altitude increases. The pressure ratio (Pc/Pa) defines the extent to which the
outer boundary expands as altitude and pressure ratio increase. For pressure ratio, Pc is the thruster chamber pressure and Pa is the local atmospheric pressure. This section compares the XRS-2200
expansion boundary shapes at 5K feet (Pc/Pa = 70.0621) and 50K feet (Pc/Pa = 509.21) determined by Navier Stokes and AeroSpike.
PART 2: 2-D & 3-D MINIMUM LENGTH NOZZLE DESIGN USING THE METHOD OF CHARACTERISTICS
Summary of Features
1. Determine shapes and flow properties of 2-D Minimum Length Nozzles (MLN) given exit Mach number (Mdesign) and throat diameter (Dt).
2. Determine shapes and flow properties of 3-D Minimum Length Nozzles using an approximation procedure based on 2-D results.
3. Click the UpDown command button to move a locator from point to point in the flow field.
4. All important flow properties are displayed in real time as the locator moves from point to point in the flow field described by the characteristic mesh.
5. Generate color contour plots of Mach number (Mn), Pressure (P/Pc), Temperature (T/Tc) and density (R/Rc) with a single click.
6. Units include, MKS (meter-newton-second), CGS (centimeter-dyne-second), FPS (foot-pound-second) and IPS (inch-pound-second).
7. Define gas properties for inert gases, liquid propellant gases and solid fuel propellant gases or insert your own values.
8. Output all flow variables to the printer or text file for use with spreadsheet applications.
Propellant Gases Available
│ │
│ Inert Gases │
│Dry Air │Hydrogen │Helium │Water Vapor │Argon │Carbon Dioxide │
│Carbon Monoxide │Nitrogen │Oxygen │Nitrogen Monoxide │Nitrous Oxide │Chlorine │
│Methane │ │ │ │ │ │
│ │
│ Liquid Fuel Propellant Gases │
│Oxygen, 75% Ethyl Alcohol(1.43) │Oxygen, Hydrazine(.09) │Oxygen, Hydrogen(4.02) │
│Oxygen, RP-1(2.56) │Oxygen, UDMH(1.65) │Fluorine, Hydrazine(2.3) │
│Fluorine, Hydrogen(7.60) │Nitrogen Tetroxide, Hydrazine(1.34) │Nitrogen Tetroxide, 50% UDMH, 50% Hydrazine(2.0) │
│Nitric Acid, RP-1(4.8) │Nitric Acid, 50% UDMH, 50% Hydrazine(2.20)│ │
│ │
│ Solid Fuel Propellant Gases │
│Ammonium Nitrate, 11% Binder, 4-20% Mg│Ammonium Perchlorate, 18% Binder, 4-20% Al│Ammonium Perchlorate, 12% Binder, 4-20% Al │
│ │
│ Hybrid Rocket Motor Propellant Gases │
│85% Nitrous Oxide, 15% HTPB │ │ │
│ │
│ User-Defined Gases │
│Specify custom or user-defined gases by inserting Ratio of specific heats (γ) in the Minimum Length Nozzle Data section. │
General Discussion
The Minimum Length Nozzle routine performs a minimum length nozzle (MLN) design using the method of characteristics. A minimum length nozzle has the smallest possible throat-to-exit length that is
still capable of maintaining uniform supersonic flow at the exit. Strictly speaking a minimum length nozzle requires a sharp corner at the throat. However, sometimes a sharp corner at the throat may
be impractical. A nearly minimum length nozzle may be generated by specifying a very small but finite radius of curvature at the throat with the inflection point of the throat-curve just downstream
of the throat. For a nearly minimum length nozzle simply specify the streamline from the throat-curve so the curvature lines up with the nozzle wall shape generated by MLN.
A straight sonic line is assumed to occur at the throat of the minimum length nozzle. For the example presented in Figure 2 and Figure 3, where the exit Mach number is 2.4, the first characteristic (C−) propagating from the corner of the throat is inclined by a small amount (θ = 0.375 deg) from the normal sonic line. The slope of the first characteristic is dy/dx = tan(θ − μ), with (θ − μ) = −73.725 deg. See Figure 1 below. The remaining expansion fan is divided into six increments. The Mach number at each point in the flow is determined from the Prandtl-Meyer function using the Newton-Raphson iteration method and the unit processes dictated by the method of characteristics. The nozzle contour is drawn by starting at the throat corner, where the maximum expansion angle of the wall, θw_max, equals one-half the Prandtl-Meyer function, ν(Mn)/2, evaluated at the design exit Mach number. For other nozzles the maximum expansion angle must be less than ν(Mn = Mdesign)/2. For a detailed discussion of the method of characteristics please refer to the reference, Modern Compressible Flow, With Historical Perspective, by John D. Anderson, pages 260 to 282.
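The text names the numerical scheme but not its details, so here is a minimal sketch of the Prandtl-Meyer function and a Newton-Raphson inversion (the starting guess and tolerance are my own assumptions, not MLN's actual implementation):

import math

def prandtl_meyer(m, gamma=1.4):
    # Prandtl-Meyer angle nu(M) in radians, valid for M >= 1.
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    b = math.sqrt(m * m - 1.0)
    return a * math.atan(b / a) - math.atan(b)

def mach_from_nu(nu_target, gamma=1.4, m0=2.0, tol=1e-12):
    # Invert nu(M) = nu_target by Newton-Raphson, using
    # d(nu)/dM = sqrt(M^2 - 1) / (M * (1 + (gamma - 1)/2 * M^2)).
    m = m0
    for _ in range(100):
        dnu = math.sqrt(m*m - 1.0) / (m * (1.0 + 0.5*(gamma - 1.0)*m*m))
        step = (prandtl_meyer(m, gamma) - nu_target) / dnu
        new_m = max(m - step, 1.0 + 1e-9)   # keep M supersonic
        if abs(new_m - m) < tol:
            return new_m
        m = new_m
    raise RuntimeError("Newton-Raphson did not converge")

# For the Anderson example below (Mdesign = 2.4): nu is about 36.75 deg,
# so the maximum wall expansion angle is nu/2, about 18.38 deg.
nu = prandtl_meyer(2.4)
print(math.degrees(nu), math.degrees(nu) / 2.0)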
PLEASE NOTE: For the 2-D Minimum Length Nozzle selection the "X" and "Y" coordinates of the nozzle contour represent the horizontal and vertical dimensions that define the 2-D characteristic mesh.
Therefore, the Exit Area Ratio (Aexit/Athroat) = [2*Yexit*WIDTH] / [2*Ythroat*WIDTH] = Yexit/Ythroat because the flow is 2-Dimensional. Likewise, for the 3-D Minimum Length Nozzle selection the "X"
and "Y" coordinates of the contour represent the horizontal and radial dimensions that define the 3-D axisymmetric mesh. Therefore, the Exit Area Ratio (Aexit/Athroat) = [p*Yexit^2] / [p*Ythroat^2] =
(Yexit/Ythroat)^2 because the flow is 3-Dimensional and not 2-Dimensional. Finally, MLN Project files generated by previous versions of MLN must be updated by adding "1" for 2-D flow or "2" for 3-D
flow at the bottom of the MLN file. Do not forget to save each Project file with an MLN extension when updating older Project files for use with AeroSpike 2.5 or higher.
From the menu on the top of the main start-up screen, select units (MNS, CGS, FPS or IPS) from the Units menu and then the propellant gas from the Gases menu. A number of inert gases, liquid
propellants and solid fuel propellants are available. The value for the ratio of specific heats (γ) is determined from the Units and Gases menus and is passed on to the MLN program after clicking the Minimum Length Nozzle command button on the main start-up screen. Only the ratio of specific heats is required for the MLN analysis. The other values, including the gas constant (Rgas), chamber
pressure (Pc) and pressure ratio (PR) are not required for the MLN analysis. When performing an MLN analysis the only values required are the Inclination angle from the sonic line (see above), Design
Mach number (Mdesign), and throat diameter (Dt). The ratio of specific heats has already been specified from the main screen selection. However, the user can over-ride the inserted ratio of specific
heats by simply inserting his own ratio of specific heats in the data entry box. Each time the user changes any data entry the results are automatically updated and displayed. The user only needs to
click the Plot button to see a new contour plot of the results or the UpDown button to see flow results at any of the characteristic mesh points.
Toolbar Operations
1. Show or hide the main data window from view or from being printed.
2. Send all flow properties (X, Y, Mn etc.) at each characteristic mesh point to the printer.
3. Send an image of the screen to the printer.
4. Save all flow properties (X,Y, Mn etc) at each characteristic mesh point to a data file.
5. Read the nozzle description file from a previous session.
6. Save the nozzle description file from a previous session.
7. Refresh the displayed analysis to the default analysis seen during start-up.
8. Return to the main start-up screen.
Input Variable Definitions
1. Ratio of specific heats (γ): Selected from the pull-down menu or user-defined.
2. Inclination angle from sonic line: This angle is used to compute the slope of the first characteristic from the edge of the throat.
3. Design Mach number (Mdesign): The Mach number at the exit of the nozzle where the flow is uniform.
4. Throat diameter (Dt): The entrance to the minimum length nozzle where Mn = 1.0
5. Area ratio (Aexit/Athroat): The resulting exit area ratio of the nozzle determined by the method of characteristics.
6. Specify whether the nozzle is 2-D or 3-D by clicking either the 2-D characteristics or 3-D approximation option buttons.
Figure 1. Description of the inclination angle (θ) from the sonic line (at the throat), where Mn = 1.0.
Minimum Length Nozzle Validation-1
Example 11.1, on page 282 from Modern Compressible Flow, With Historical Perspective, by John D. Anderson
Figure 2. Example 11.1, on page 282 from Modern Compressible Flow, With Historical Perspective, by John D. Anderson
NOTE ABOUT MLN ANALYSIS ACCURACY: The Minimum Length Nozzle (MLN) analysis illustrated in Figure-2 uses exact input data from Modern Compressible Flow With Historical Perspective. For maximum
accuracy simply insert 0.0 degrees in the Inclination angle from sonic line input block. When this simple modification is performed the exit Area ratio (AR) becomes 2.43 which compares to AR = 2.403
for exact 2-D isentropic flow and represents a 1.124% difference from isentropic theory.
Figure 3. MLN Characteristic Mesh For exit Mach number of 2.4 where Inclination angle from sonic line = 0.0 degrees produces AR = 2.43.
Minimum Length Nozzle Validation-2
Figure 17.5, Gasdynamics: Theory and Applications, 2-D and Approximate 3-D MLN Validation at M = 3.0
The following table compares 2-D and 3-D MLN results with data scaled from Figure 17.5, Gasdynamics: Theory and Applications. Two wall-points, one from the center and one at the end of the nozzle
contour have been selected for comparison. Notice that 3-D Minimum Length Nozzles are substantially shorter than equivalent 2-D Minimum Length Nozzles that have identical Area Ratio (Aexit/Athroat).
For comparison purposes all results are referenced to the curved sonic line analysis for 2-D and 3-D axisymmetric nozzles. Please reference Gasdynamics: Theory and Applications* page 325, Figure
17.15, where γ = 1.4 and Mexit = 3. Finally, please note that MLN uses the straight sonic line method of characteristics analysis.
│2-D MLN Analysis │X_Coordinate│Y_Coordinate│Difference│X_Coordinate│Difference│Y_Coordinate│Difference│
│AeroSpike 2.6 │ 6.522 │ 3.331 │ -10.7% │ 17.43 │ 4.79% │ 4.354 │ -6.8% │
│Straight Sonic Line* │ 6.522 │ 3.30 │ -11.5% │ 16.83 │ -0.53% │ 4.198 │ -10.1% │
│Curved Sonic Line* │ 6.522 │ 3.73 │ - │ 16.92 │ - │ 4.670 │ - │
│3-D MLN Analysis │X_Coordinate│Y_Coordinate│Difference│X_Coordinate│Difference│Y_Coordinate│Difference│
│AeroSpike 2.6 │ 3.126 │ 1.603 │ -0.06% │ 8.353 │ -2.68% │ 2.087 │ 2.9% │
│Axisymmetric Curved Sonic Line* │ 3.126 │ 1.604 │ - │ 8.59 │ - │ 2.028 │ - │
Figure 4. 2-D MLN characteristic mesh when γ = 1.4 and exit Mach number = 3.
Figure 5. Approximate 3-D axisymmetric MLN characteristic mesh when γ = 1.4 and exit Mach number = 3.
AeroSpike System Requirements
(1) Screen resolution: 800 X 600
(2) System: Windows 98, XP, Vista, Windows 7 (32 bit and 64 bit), NT or Mac with emulation
(3) Processor Speed: Pentium 3 or 4
(4) Memory: 64 MB RAM
(5) English (United States) Language
(6) 256 colors
Please note this web page requires your browser to have
Symbol fonts to properly display Greek letters (α, μ, π, ∂ and ω).
ADDITIONAL REQUIREMENT: Input data for all AeroRocket programs must use a period (.) and not a comma (,) and the computer must be set to the English (United States) language. For example, gas
constant should be written as Rgas = 355.4 (J / kg*K = m^2 / sec^2*K) and not Rgas = 355,4. The English (United States) language is set in the Control Panel by clicking Date, Time, Language and
Regional Options then Regional and Language Options and finally by selecting English (United States). If periods are not used in all inputs and outputs the results will not be correct.
AeroSpike 2.3 Features and Error Fixes
1. Fixed plot resolution problem that occurred for some high aspect ratio aerospike nozzles.
2. Fixed the ratio of specific heats (γ) manual-entry error that would not accept γ = 2, and some other minor errors.
3. Fixed the incorrect gas constant (Rgas) value for hydrogen. For hydrogen, Rgas = 4122.11 m^2/(sec^2*K).
4. Added the ability to specify thruster sonic-section (throat) angle. Throat angle can vary from 60 to 120 degrees, the default is 90 degrees.
AeroSpike 2.4.1 Features and Error Fixes
1. Added a hybrid rocket motor propellant having the following fuel and oxidizer to the list of combustion gases: 85% Nitrous Oxide, 15% HTPB.
2. Added the ability to save F(x) versus PR (Pressure Ratio) and F(x) versus x to a CSV file for use with Notepad or Excel.
3. In the Aerospike Nozzle Data section added a display of Truncation as percent of total aerospike length.
4. In the Aerospike Nozzle Data section added a display of Distance from throat (origin) to end of thruster.
5. In the Aerospike Nozzle Data section added a display of Distance from end of thruster to end of ramp.
6. Corrected a few Status Bar display errors for plots of F(x) verses x.
AeroSpike 2.4.2 Error Fix (11/26/2006)
1) The gas Nitrogen Dioxide in the Gases pull-down menu should be labeled Nitrous Oxide (N2O). (Fixed)
AeroSpike 2.5.0 Features (01/23/2007)
1. Added ability to determine shapes and flow properties of 3-D Minimum Length Nozzles using an approximation procedure based on 2-D results.
AeroSpike 2.6.0.1 Features (09/21/2008)
1. Added the ability to include base thrust of truncated aerospike nozzles to determine total thrust and thrust coefficient.
2. To modify or change units in previous versions of AeroSpike the user needed to close the main aerospike nozzle analysis screen and then redefine units in the start-up screen. However, starting
with this new version the user goes directly to the start-up screen without closing the aerospike nozzle analysis screen to modify pressure ratio, units, gases and altitude to instantly included
those changes on the main aerospike nozzle analysis screen.
3. Darkened all data display boxes to prevent confusion with white data entry boxes for the MLN and aerospike analyses.
AeroSpike 2.6.0.2 Features (09/14/2009)
1) For AeroSpike, fixed all input data text boxes for 32 bit and 64 bit Windows Vista. When operating earlier versions of AeroSpike in Windows Vista the input data text boxes failed to show their
borders making it difficult to separate each input data field from adjacent input data fields. This simple change did not alter any computational result.
AeroSpike 2.6.0.3 Features (04/27/2011)
1) For the Minimum Length Nozzle routine corrected the 3D axisymmetric mesh-data display. This simple change did not alter any computational result.
AeroSpike 2.6.0.4 Features (09/21/2011)
1) For AeroSpike the color contour plots and aerospike nozzle shape data are now displayed to scale. In previous versions aerospike color contour plots and aerospike nozzle shape displays were not to
scale to allow full use of the available display area. However, recent work indicates it is more useful to display scaled aerospike geometry than to fill the entire plot area. The factor 0.5390 was
used to properly scale the X-coordinates of the X, R plot data. This simple change did not alter any computational result.
AeroSpike 2.6.0.5 Features (09/25/2011)
1) For AeroSpike the Truncation as percent of total aerospike length data field displayed incorrect values intermittently based on UNITs selection. This simple change did not alter any other
computational result.
For more information about AeroSpike please contact AeroRocket at aerocfd@aerorocket.com.
Copyright © 1999-2012 John Cipolla/AeroRocket | {"url":"http://www.aerorocket.com/MOC/MOC.html","timestamp":"2014-04-17T01:05:16Z","content_type":null,"content_length":"74848","record_id":"<urn:uuid:478c8779-011d-4b18-b93b-fb5d2af45e10>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Congruences of p-adic Integer Order Bernoulli Numbers, by Arnold Adelberg
Summary: Congruences of p-adic Integer Order Bernoulli Numbers
Arnold Adelberg
Department of Mathematics
Grinnell College
Grinnell, Iowa 50112
In this paper we establish some new congruences of p-adic integer order Bernoulli numbers. These generalize the Kummer congruences for ordinary Bernoulli numbers. We apply our congruences to prove irreducibility of certain Bernoulli polynomials with order divisible by p and to get new congruences for Stirling numbers.
Source: Adelberg, Arnold - Department of Mathematics and Computer Science, Grinnell College
Collections: Mathematics
Framingham Trigonometry Tutor
Find a Framingham Trigonometry Tutor
...The fifth grader is quite precocious in math. I have helped both of them with their math homework. I have found it to be fun and rewarding.
23 Subjects: including trigonometry, reading, algebra 1, ESL/ESOL
...I am in the process of pursuing a master of arts degree in teaching mathematics at the secondary level at Boston University. I have tutored students in Mathematics for over 20 years. I have
taught students at different levels including elementary, high school and college.
13 Subjects: including trigonometry, calculus, precalculus, geometry
...I enjoy working with students who are motivated but need a little help to understand the subject at hand. I'm very good at explaining hard concepts or problems using easy to understand and
every day examples. I'm patient with my students and experienced in helping them improve their grades in s...
11 Subjects: including trigonometry, calculus, algebra 2, geometry
...My tutoring work for the Lexington public school system over 14 years was run, for most of those years, by the Special Education department. As a result, I have worked with students with all sorts of special circumstances: dyslexia, ADD/ADHD, hearing loss, various forms of judgmental and function d...
34 Subjects: including trigonometry, reading, calculus, English
I am a Massachusetts licensed teacher, and I have taught Math & Computer Science for the past 17 years working in the Sudbury and Newton public school districts. I have taught all levels of
algebra, geometry, trigonometry, precalculus and AP Calculus AB & BC as well as AP Computer Science. In addi...
29 Subjects: including trigonometry, calculus, geometry, GRE | {"url":"http://www.purplemath.com/framingham_ma_trigonometry_tutors.php","timestamp":"2014-04-19T04:59:44Z","content_type":null,"content_length":"24053","record_id":"<urn:uuid:737cb9db-9975-4fbe-9527-51db26d2b44d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factorial: Introduction to the factorials and binomials (subsection FactorialBinomials/04)
The two factorials $n!$ and $n!!$ are particular cases of the incomplete gamma function with the second argument being $0$:
The factorial $n!$, double factorial $n!!$, Pochhammer symbol $(a)_n$, binomial coefficient $\binom{n}{k}$, and multinomial coefficient $\binom{n}{n_1,n_2,\ldots,n_m}$ can be represented through the gamma function by the following formulas:
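The formulas on the original page are rendered as images. The standard gamma-function representations they refer to are (for the double factorial, only the even and odd integer cases are given here, to avoid the more involved general formula):

$$n! = \Gamma(n+1), \qquad (a)_n = \frac{\Gamma(a+n)}{\Gamma(a)}, \qquad \binom{n}{k} = \frac{\Gamma(n+1)}{\Gamma(k+1)\,\Gamma(n-k+1)},$$

$$\binom{n}{n_1,n_2,\ldots,n_m} = \frac{\Gamma(n+1)}{\Gamma(n_1+1)\cdots\Gamma(n_m+1)}, \qquad (2n)!! = 2^{n}\,\Gamma(n+1), \qquad (2n-1)!! = \frac{\Gamma(2n+1)}{2^{n}\,\Gamma(n+1)}.$$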
Many of these formulas are used as the main elements of the definitions of many functions.
The factorials and binomials $n!$, $n!!$, $(a)_n$, $\binom{n}{k}$, and $\binom{n}{n_1,n_2,\ldots,n_m}$ are interconnected by the following formulas:
An update on visualizing Bayesian updating
A while ago I wrote this post with some R code to visualize the updating of a beta distribution as the outcome of Bernoulli trials are observed. The code provided a single plot of this process, with
all the curves overlayed on top of one another. Then John Myles White (co-author of Machine Learning for Hackers) piped up on twitter and said that he’d like to see it as an animation. Challenge
accepted – and with an additional twist.
The video shows how two observers who approach the problem with different beliefs (priors) converge toward the same conclusion about the value of the unknown parameter after making enough observations.
I've posted the code here.
8 thoughts on “An update on visualizing Bayesian updating”
1. wow!
2. great! see also this post for bayesian/frequentist interpretations of this experiment: http://cyrille.rossant.net/introduction-to-bayesian-thinking/
3. Reblogged this on Sam Clifford and commented:
I came across this via reddit's r/statistics community and thought I might share it as a nice way of visualising posteriors. Specifically, it's a very good demonstration of the convergence of the posterior beliefs of two observers with separate priors but the same data (which is sequentially collected, though the order of successes/failures is irrelevant).
So next time someone’s telling you that Bayesian statistics is inherently wrong because of the subjectivity of the prior belief, you can point them to something like this to demonstrate that as
data is collected the posteriors become quite close.
I suggest having a play with the R code to understand how the diffuseness of the priors affects the concentration of posterior belief. While the opposite beliefs of the observers in the attached
video are a nice example of convergence to the same posterior, I think two priors with the same mean and different variance would be a more interesting visualisation.
4. Thanks. Would you mind sharing your setup to run this code(what else besides R is needed)?
□ OS: Ubuntu 12.04. The only extra piece of software needed is MPlayer's Movie Encoder, which you can get (if you're using Ubuntu or a Debian-like system) with: sudo apt-get install mencoder.
Good luck!
☆ Thanks. This is very helpful. | {"url":"http://bayesianbiologist.com/2012/08/17/an-update-on-visualizing-bayesian-updating/","timestamp":"2014-04-16T16:00:13Z","content_type":null,"content_length":"58090","record_id":"<urn:uuid:5aa8d36e-2905-44ea-96c8-61c2053412b6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Epsilon-proof as x->-infinity currently defeating me
December 23rd 2010, 04:31 PM #1
Junior Member
Mar 2009
Epsilon-proof as x->-infinity currently defeating me
I think Epsilon-delta proofs and I should spend some quality time together.
The problem is I need to prove that:
$\lim_{x \to -\infty} a^x = 0$
So, I need to show that, for every $\epsilon > 0$, there is a $\delta$ such that $|f(x) - L| < \epsilon$ whenever $x < \delta$.
So, given $\epsilon > 0$, I need to find $\delta$ such that:
$|a^x - 0| < \epsilon$ whenever $x < \delta$
And here's where I am stuck. I think I'll have to take a log in there somewhere. Wish my textbook wasn't so scarce on examples (Stewart's Calculus 4th edition). I sort of get limits where x->c,
but I'm totally lost on the ones with infinities, and the differences between handling + and - infinities. Can't find any examples online of ones as x-> -infinity, which surely might help. Nor
are the two videos at Khan Academy useful in this case.
I bet you guys get a lot of these. Surprisingly confusing for what seems a relatively simple concept, at first. Help, hints, pointers, nudges and taunts all appreciated.
If you think of $\delta$ as a large negative number, and getting more and more negative, you'll be on the right track; also, you have to assume that $a>0,$ right? Otherwise, you have complex
numbers floating around.
How can you simplify $|a^{x}-0|<\epsilon?$
If you can find a $\delta=\delta(\epsilon)$ that works, you'll be done. How could you do that?
Well, first and obviously, $|a^{x} - 0| < \epsilon \Rightarrow |a^{x}| < \epsilon$.
Then, since $a^{x} > 0$, I have just:
$a^{x} < \epsilon$ whenever $x < \delta$.
We want:
$a^{x} < \epsilon$ whenever $x < \delta$
$log_{a}(a^{x}) = log_{a}(\epsilon) \Rightarrow x = log_{a}(\epsilon)$
So I should try $\delta = log_{a}(\epsilon)$.
Given $\epsilon > 0$, we choose $\delta = log_{a}(\epsilon)$. Let $x < \delta$. Then, since $a > 1$ makes $a^{x}$ increasing,
$|a^{x} - 0| = a^{x} < a^{\delta} = a^{log_{a}(\epsilon)} = \epsilon$
$|a^{x} - 0| < \epsilon$
Is that right? Egads, I think I just did it. Hey, I think I smell burnt toast!
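A quick numerical spot-check of the chosen $\delta$ (the values of $a$ and $\epsilon$ are arbitrary; this illustrates the argument, it doesn't replace the proof):

import math

a, eps = 2.0, 1e-3
delta = math.log(eps, a)                 # log base a of epsilon (~ -9.97)
for x in (delta - 0.001, delta - 1, delta - 10, delta - 100):
    assert a**x < eps                    # every x < delta works
print(f"delta = {delta:.3f}; a**x < eps for all sampled x < delta")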
PLEASE I need help with two-sided limits
What's the question?

If you want to know what a two-sided limit is: a two-sided limit is just the regular limit you see, denoted by lim as x approaches some value of a function. It means that if you approached that value from both sides of the graph you would arrive at the same place. A one-sided limit means that if you approached the graph from one particular side (from the left or from the right) you would get different values. If the limit from the left does not equal the limit from the right side of the graph, the overall limit does not exist. For example, a function such as y = x, for x > 0, cannot have a negative x-value. So the limit of this function (at 0) from the left would not exist because the graph doesn't exist there, but the limit from the right would be 0.

hold on lemme write it out

[the attached problem, reconstructed from the reply below: \[\lim_{x \to -3} {x^2+3x \over x^2-9}\]]

factor the top and the bottom to get \[x(x+3) \over (x+3)(x-3)\] cancelling gives you: \[x \over x-3\] now you can simply plug in -3.

Ugh my brain sucks right now good job ggree

if you plug in lim x -> -3^(+) you will get the same answer, which means the limit exists

thanks.....so what if there was an indeterminate like 40/0 or something, how will I go about solving that?

4/0^(+) = +infinity and 4/0^(-) = -infinity. If you get 0/0 the form is indeterminate, so simplify further (or use L'Hôpital's rule, if your prof teaches it).

remember you are not using 0 but something infinitely close to zero

thus 4, or any real number, would go into it an infinite number of times

does that answer your question?

if you had -1/0^(-) = +infinity, make sure to note that, because a -/- = +

yea it kinda does....thanks

are you still confused with something?

if you take the limit from the 0+ side of 1/x you get +infinity, and if you take the limit from 0- you get -infinity. I think I made a mistake with the graph, but yeah, the output as you approach from the right is always increasing and the output from the left is always decreasing

can I give you another problem to solve?

[reply and attachment not preserved]

but does that explain it? but yeah, post the question quick, need to actually do my own school work lol

lol ok

not to rush you

[attachment not preserved]

for these types of questions you multiply by the conjugate: multiply the top and bottom by (√x + 2)

did that help?

yes it did...thanks

no problem, you should be pretty set for limits now if you understand limits going to infinity

i understand limits going to infinity but i need a resource for some two-sided limit problems. it's kind of confusing

did my 1/x example not help? 1/0^(-) is something like 1/(-0.0000001), whereas 1/0^(+) is something like 1/(+0.0000001). Only these numbers are infinitely small, so we end up with \[\lim_{x \to 0^+} \frac{1}{x} = +\infty\] and \[\lim_{x \to 0^-} \frac{1}{x} = -\infty\] Look at the graph of 1/x [sketch of y = 1/x]. Notice how the output is forever decreasing from the left as you approach zero, whereas the output is forever increasing from the right as you approach zero

thus the limit does not exist

if you took the limit of -1/x you would get the opposite on both sides, so 0+ gives -infinity and 0- gives +infinity

as the graph looks like [sketch of y = -1/x]

if you took the limit at 0+ and 0- of x you would end up with 0 and 0, which means that the limit exists

oh it makes sense to me now

ok good, I was getting tired of explaining it, although I should have done it right the first time
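For anyone who wants to check one- and two-sided limits like these symbolically, a small sketch (sympy is just one option; any CAS works):

import sympy as sp

x = sp.symbols('x')
expr = (x**2 + 3*x) / (x**2 - 9)
print(sp.limit(expr, x, -3))           # 1/2: both one-sided limits agree

print(sp.limit(1/x, x, 0, dir='+'))    # oo
print(sp.limit(1/x, x, 0, dir='-'))    # -oo, so the two-sided limit
                                       # of 1/x at 0 does not exist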
Faculty Publications - Elchanan Mossel
Electrical Engineering and Computer Sciences, College of Engineering, UC Berkeley
Book chapters or sections
Articles in journals or magazines
Articles in conference proceedings
Technical Reports
Proof that domains of positivity of symmetric nondegenerate bilinear forms are self-dual cones?
Max Koecher (in, for example, The Minnesota Notes on Jordan Algebras and Their Applications (new edition: Springer Lecture Notes in Mathematics number 1710, 1999)), defined a domain of positivity for
a symmetric nondegenerate bilinear form $B: X \times X \rightarrow \mathbb{R}$ on a finite dimensional real vector space $X$, to be an open set $Y \subseteq X$ such that $B(x,y) > 0$ for all $x,y \in
Y$, and such that if $B(x,y) > 0$ for all $y \in Y$, then $x \in Y$. (More succinctly, perhaps, we could say it's a maximal set $Y \subseteq X$ such that $B(Y,Y) > 0$.) Aloys Krieger and Sebastian
Walcher, in their notes to Ch. 1 of this book, state that "In the language used today, a domain of positivity is a self-dual open proper convex cone." [I now believe this is wrong; see my answer
below for what I think is true instead.] It's quite easy to prove that it's an open proper convex cone. (Proper means it contains no nonzero linear subspace of $X$, i.e. that its closure is pointed.)
But, although I have a vague recollection of having encountered a proof once in a paper on homogeneous self-dual cones, I haven't succeeded in finding it again, or in supplying it myself. I'm pretty
sure Krieger and Walcher's claim is correct---for example, the 1958 paper by Koecher that is generally cited (along with a 1960 paper by Vin'berg) for the proof of the celebrated result that the
(closed) finite-dimensional homogeneous self-dual cones are precisely the cones of squares in finite dimensional formally real Jordan algebras, is titled "The Geodesics of Domains of Positivity" (but
in German).
The most natural way to prove this would be to find a positive semidefinite nondegenerate $B'$, such that the cone is a domain of positivity for $B'$ as well. In principle $B'$ might depend on the
domain $Y$. (While maximal in the subset ordering, domains of positivity for a given form $B$ are not unique.) But a tempting possibility, independent of $Y$, is to transform to a basis for $X$ in
which $B$ is diagonal, with diagonal elements $+/- 1$, change the minus signs to plus signs, and transform back to obtain $B'$.
To clarify the question: we will define a cone $K$ in a real vector space $X$ to be self-dual iff there exists an inner product (that is, a positive definite bilinear form $\langle . , . \rangle: X \times X \rightarrow \mathbb{R}$) such that $K = K^*_{\langle . , . \rangle}$. Here $K^*_{\langle . , . \rangle}$ is the dual with respect to the inner product $\langle . , . \rangle$, that is, $K^*_{\langle . , . \rangle} := \{ y \in X: \forall x \in K ~\langle y, x \rangle > 0 \}$. So in asking for a proof that a domain of positivity is a self-dual cone, we are asking whether some inner product $\langle . , . \rangle$ with respect to which $K$ is self-dual exists.
Does anyone know, or can anyone come up with, a proof?
convexity mp.mathematical-physics linear-algebra dg.differential-geometry
Thanks for the comments, Leonid and Will; I have edited the post to attempt to clarify. Briefly, I want to prove that the cone is self-dual in the sense that there exists a positive semidefinite
bilinear form (i.e., an inner product) with respect to which it is self-dual. It's not obvious that that's the same thing as the existence of a symmetric nondegenerate bilinear form with respect to
which it's self dual; the question, essentially, is whether these two are in fact the same thing. – Howard Barnum Feb 26 '10 at 23:48
Will, in $\mathbb{R}^2$, every pointed open cone is self-dual (and in fact, isomorphic (as a cone) to $\mathbb{R}^2_+$, the strictly positive quadrant). So you're certainly right there. The way I like to visualize things in $\mathbb{R}^3$ is to consider the "diagonalized" bilinear forms $tt' - xx' - zz'$ and $-tt' + xx' + zz'$. (The question is trivial for the other signatures.) For $+,-,-$ it's easy: the positive and negative "light cones" are the only DOPs; while for $-,+,+$, I conjecture many nonisomorphic ones, in the complement of these light cones (the "conic doughnut"). – Howard Barnum Feb 28 '10 at 1:14
This sounds correct. You've built the Lorentz (alias quadratic, alias second-order, alias ice-cream) cone with central axis $(1,0,0)$, in $\mathbb{R}^3$. Its interior is one domain (of many) of
positivity of the bilinear form $B$ in question ($xx' + yy' - zz'$), as well as of the Euclidean inner product. Orthogonality according to $B$ is not the same thing as according to the Euclidean
inner product, except when $z=0$, but that's okay. The set of vectors $B$-orthogonal to a given boundary vector $x$ is still a supporting hyperplane, just not opposite $x$; these hyperplanes bound
the cone. – Howard Barnum Mar 1 '10 at 21:14
So the question becomes, in this standardized situation, are there any other DOP's in the conic doughnut except the rotations of my ice-cream cone around the z-axis? – Will Jagy Mar 1 '10 at 22:33
I believe that the statement you want is not true. In $X=\mathbb R^3$, begin with the standard cone $x^2+y^2<z^2$ and perturb it so that the resulting cone $K$ is symmetric to its Euclidean
dual through the $yz$-plane and has no affine symmetries (that is, no nontrivial linear maps that map it to itself). As your argument shows, this cone is self-dual w.r.t. $-x^2+y^2+z^2$.
I claim that this is the unique non-degenerate form which makes $K$ self-dual. Indeed, the dual cone is naturally (canonically) defined in the dual space $X^*$. A bilinear form defines a linear isomorphism between $X^*$ and $X$, and the dual cone in $X$ is the image of the canonical dual cone under this isomorphism. Since $K$ has no affine symmetries, there is only one linear map from $X^*$ to $X$ that sends the canonical dual cone to $K$. Therefore there is only one non-degenerate bilinear form that makes $K$ self-dual. And it is not positive.
Here's what's true instead of the claim that domains of positivity are self-dual cones.
$\mathbf{Proposition:}$ $Y$ is a domain of positivity for a nondegenerate symmetric bilinear form $B$ if and only if it is an open cone whose dual, according to the Euclidean inner product
$E$ associated with a basis orthonormalizing the form, is its image under reflection of $X_-$ through $X_+$, the ``negative and positive eigenspaces'' associated with the form in this basis.
$\mathbf{Proof:}$ We'll write $v,v'$ for vectors in $X$. We'll use an orthonormal basis as described above, in which the form is diagonal with diagonal elements $\pm 1$, writing $v = (x,t)$
for a decomposition with $x$ in the span (call it $X_+$) of the basis vectors with $B(e_i, e_i) = 1$, and $t$ in the span (call it $X_-$) of the basis vectors with $B(e_i, e_i) = -1$. Let
$S$ be the linear map $(x,t) \mapsto (x, -t)$, i.e. reflection of the subspace $X_-$ through the subspace $X_+$. Note that $E(x,y) := B(x,Sy)$ is a positive semidefinite symmetric
nondegenerate bilinear form.
Also, note that for all $v,v'$, $B(Sv, Sv') = B(v,v')$, i.e. the form $B$ is reflection-symmetric.
For "if": the definition of $Y^\ast$ says it is maximal such that $E(Y^\ast,Y) > 0$. But since $Y=SY$, it is also maximal such that $E(SY,Y) \equiv B(Y,Y) > 0$, i.e., it is a domain of
up vote positivity of $B$.
0 down
vote For ``only if'': let $Y$ be a domain of positivity for $B$. For every $y$ in the boundary $\partial Y$ of $Y$, the hyperplane $H_y := \{x: B(x,y) = 0\}$ is a supporting hyperplane for the
cone $Y$, and these are all the supporting hyperplanes. But it's standard convex geometry that the supporting hyperplanes of a proper convex cone $Y$ are the precisely the zero-sets of the
linear functionals that constitute the boundary of $Y$'s dual cone. We have $H_y = \{x: B(x,y) \equiv E(x,Sy) = 0\}$; that is, this hyperplane is just the plane normal to $Sy$ according to
the Euclidean inner product. That is to say, the vectors $Sy$, for $y \in \partial Y$ generate the closure of the cone $Y^\ast$ dual to $Y$ according to the Euclidean inner product $E$.
I.e., $Y^\ast = SY$. $\diamond$
Offline (or rather, off-math-overflow) correspondence with Will Jagy helped stimulate this solution. He gave another example---which I'd come up with a few weeks ago, but forgotten
about---of a DOP for $xx' + yy' - zz'$---namely, the positive orthant generated by $(0, 1, 0)$, $(1, 0, 1)$ and $(1, 0, -1)$ (or in his dual description, defined by inequalities $x > z$, $x
> -z$, $y > 0$), which is of course not isomorphic to an ice-cream cone, but is symmetric under reflection through the xy plane. The hypothesis that the DOPs were precisely the self-dual
cones symmetric under reflection suggested itself to me, and attempts to prove the hypothesis ended up providing the proof of the proposition above.
Hi Howard, there is some problem with msri updating software, I was not able to print out your answer today, and for that matter ssh is disabled. Good work, anyway. Will – Will Jagy Mar 5
'10 at 1:29
I should also point out that this question arose in ongoing work with Alex Wilce and Ross Duncan---and Alex's insistent unwillingness to to quote and rely on the editors' chapter-end notes
without seeing a proof turned out to be well-founded, and crucial motivation for investigating the question! – Howard Barnum Mar 7 '10 at 16:28
add comment
Not the answer you're looking for? Browse other questions tagged convexity mp.mathematical-physics linear-algebra dg.differential-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/16527/proof-that-domains-of-positivity-of-symmetric-nondegenerate-bilinear-forms-are-s?answertab=votes","timestamp":"2014-04-19T15:01:05Z","content_type":null,"content_length":"68699","record_id":"<urn:uuid:a381879e-1247-413f-8955-12890fa1a157>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Card Games Home Page | Other Invented Games
Contributed by Tyler McCarn
This game requires at least 2 players but no more than 4.
The object of the game is to collect a full set of 13 cards (King to ace) - the suit of the cards doesn't matter. Each player starts off with 5 cards dealt from the shuffled 52-card deck. The
remaining cards are placed face down as a drawing stock.
The game is played as follows:
1. Each player selects simultaneously one card from their hand and places it face down.
2. The placed cards are turned face up.
3. Each player has the choice of drawing two (unknown) cards from the stock, or taking one of the face up cards played by the other players.
4. The played cards that were not chosen are added to the bottom of the deck.
This process is repeated until someone has a complete set of 13 cards of different ranks. The first player who correctly claims to have a complete collection wins.
Example: Bill and Chris start off by picking up 5 cards each. Bill has a 1,3,5,4,king so he decides to dispose of the King. Chris has a 2,4,6,6,1 since he already has one 6 he decides to discard the
other. Neither one of them picks the other's card to add to their hand, so they put the king and the 6 to the bottom of the deck and they both draw two cards. Bill now has a 1,3,5,4,6,7 he decides to
discard his 3. Chris has 2,4,6,1,5,3 and doesn't want to get rid of any of his cards but he has to. He plays the 3 and takes bill's 3. Bill puts Chris's three at the bottom of the deck and draws two
cards but Chris doesn't since he took Bill's card. They keep playing till someone has an ace,2,3,4,5,6,7,8,9,10,jack,queen, and king.
This variation of Tyler McCarn's "Collector" was contributed by Trevor Cuthbertson
This game is for 2-3 players using a standard 52-card deck.
The goal of the game is to collect a full set of 13 cards (Ace to King). The suits of the cards don't matter. Each player starts with 5 cards dealt from the shuffled 52-card deck. The remaining cards
are placed face down as a drawing stock.
The game is played as follows:
1. Each player selects simultaneously one card from their hand and places it face up.
2. Each player has the choice of either:
□ drawing two cards from the deck
□ drawing one card from the deck and taking one of the face up cards played by the other players
3. The played cards that are not chosen are added to the bottom of the deck.
It is possible, both in Hot and in Collector, that two players will want to take the same face up card. If that happens it is the first of those players in clockwise rotation after the player who
discarded the card who gets it.
This process is repeated until someone has a complete set of 13 cards of different ranks (Ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King). The first player who has the complete set must call out
"Hot" and lay down their 13-card hand. That player receives 1 point for each card his/her opponent(s) don't have in their set sequence. The first player to reach 11 points wins the game.
Return to Index of Invented Card Games Last updated 10th January 2009 | {"url":"http://www.pagat.com/invented/collector.html","timestamp":"2014-04-19T05:12:40Z","content_type":null,"content_length":"7786","record_id":"<urn:uuid:15b1f28a-7cc8-4c3d-afad-e6390655e830>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Generate random missing values within a set of variables
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Generate random missing values within a set of variables
From Eric Booth <eric.a.booth@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Generate random missing values within a set of variables
Date Sun, 15 Apr 2012 12:08:41 -0500
Others have offered solutions with the data in wide format, here's another approach after reshaping to long (to me, this is an easier approach):
*******************! watch for wrapping below:
*-- make some fake data
set obs 1000
forvalues i =1/6{
gen x`i' = rnormal()
g i = _n
reshape long x, i(i) j(j)
tempvar rand rand2
*--1. "I would like to randomly generate one missing value in one of the 6 variables per line/observation"
bys i: gen `rand' = runiform()
bys i (`rand'): gen missingone = j if _n==1
*--2. "then in another set of variables randomly generate 2 missing values per line/observation - 2 out of the 6 variables"
bys i: gen `rand2' = runiform()
bys i (`rand2'): gen missingtwo = j if inlist(_n, 1, 2)
*--3. Make one or two values missing, as described
clonevar x_two = x //x_two is for your second condition (2 missing values per group)
lab var x_two "same as x, but will have 2 missing obs per group"
replace x = . if !mi(missingone)
replace x_two = . if !mi(missingtwo)
sort i j
ta miss*
*--reshape back if you want this data to be wide again
drop __* missing*
reshape wide x* , i(i) j(j)
- Eric
Eric A. Booth
Public Policy Research Institute
Texas A&M University
On Apr 15, 2012, at 7:06 AM, Sofia Ramiro wrote:
> Dear all,
> I have 6 variables (without missings) and I would like to randomly generate one missing value in one of the 6 variables per line/observation (and then in another set of variables randomly generate 2 missing values per line/observation - 2 out of the 6 variables).
> I know that with the runiform command we manage to choose observations randomly within one variable (so I could generate random missing values within one variable), but how can I choose randomly one variable out of the 6 to be transformed into missing and make sure that one of them is being transformed per observation?
> I appreciate your help.
> Thanks!
> Sofia
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2012-04/msg00627.html","timestamp":"2014-04-20T13:37:11Z","content_type":null,"content_length":"9848","record_id":"<urn:uuid:ddcacca4-53d5-4c02-9ca2-ff274d86aabb>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
First-Order Algorithm with O(ln(1/epsilon)) Convergence for epsilon-Equilibrium in Two-Person Zero-Sum Games
Andrew Gilpin, Javier Pena, Tuomas Sandholm
We propose an iterated version of Nesterov's first-order smoothing method for the two-person zero-sum game equilibrium problem. This formulation applies to matrix games as well as sequential games.
Our new algorithmic scheme computes an ε-equilibrium to this min-max problem in O(K(A) ln (1/ε)) first-order iterations, where K(A) is a certain condition measure of the matrix A.This improves upon
the previous first-order methods which required O(1/ε) iterations, and it matches the iteration complexity bound of interior-point methods in terms of the algorithm's dependence on ε.Unlike the
interior-point methods that are inapplicable to large games due to their memory requirements, our algorithm retains the small memory requirements of prior first-order methods.Our scheme supplements a
variant of Nesterov's algorithm with an outer loop that lowers the target ε between iterations (this target affects the amount of smoothing in the inner loop). We find it surprising that such a
simple modification yields an exponential speed improvement. Finally, computational experiments both in matrix games and sequential games show that a significant speed improvement is obtained in
practice as well, and the relative speed improvement increases with the desired accuracy (as suggested by the complexity bounds).
Subjects: 7.1 Multi-Agent Systems; 8. Enabling Technologies
Submitted: Apr 15, 2008
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy. | {"url":"http://aaai.org/Library/AAAI/2008/aaai08-012.php","timestamp":"2014-04-17T06:54:19Z","content_type":null,"content_length":"3471","record_id":"<urn:uuid:63e50abb-c161-4656-ab50-b49f5fbd61ad>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Name of an operation on graphs
up vote 3 down vote favorite
I asked this a week ago on math.SE, but haven't obtained an answer yet, so I hope it is fine to ask this here too.
Let $G$ and $H$ be two possibly directed, non necessarily simple, vertex-labelled graphs with respective adjacency matrices $A_G$ and $A_H$ and $V(G)=V(H)$.
1) What is the name of the graph $M$ with adjacency matrix $A_M=A_HA_G$?
2) Which symbols should I NOT use to denote it in order to avoid confusion with other graph products, in the event that none is already associated with this operation?
graph-theory reference-request terminology
add comment
3 Answers
active oldest votes
If you are disallowing multiple edges between vertices, then such graphs are the same things as binary relations $R$ on the vertex set (where $x R y$ iff there is an edge from $x$ to
$y$. Then $M$ would correspond to the relational composite of $H$ and $G$: $x M z$ iff $\exists_y (x H y) \wedge (y G z)$.
If you are allowing multiple edges between vertices, so that adjacency matrices can have entries greater than 1, then such graphs are the same things as what category theorists are wont
to call a span. In that case, $M$ would correspond to the span composite, as defined in the cited article.
up vote 4
down vote Either way, it seems reasonable to call it the composite (unless that term is already used for some other operation on graphs), and to denote it by $H \circ G$ (under the same caveat).
Please take this answer with a note of caution that I am not a graph theorist.
add comment
In Spectra of graphs: theory and application, Dragoš M. Cvetković, Michael Doob, Horst Sachs, pg. 52, Section 2.1 "The polynomial of a Graph", it's called product and denoted $G_1\cdot
up vote 2 I would hesitate to call it composition, lest it is confused with the lexicographic product, which is, however, denoted $G_1[G_2]$ in the reference above.
down vote
Edit: maybe, to distinguish it from other products, call it "matrix product of graphs"?
Thank you, Martin, for this information. – Todd Trimble♦ Dec 13 '10 at 22:48
Thanks for the reference. I'm a bit reluctant to use the word "product", because of the many different uses for that term -- and the symbol's already taken too. However, what you quote
seems most natural to me. I presume it wouldn't hurt to use that in a paper, as long as everything is stated and defined well enough to avoid confusion. – Anthony Labarre Dec 13 '10 at
@Martin are you sure? This operation seems to be defined only for two graphs $G$ and $H$ with the same vertex set. If one thinks of $G$ as having red edges and $H$ as having blue then
it could be called the $"red-blue paths graph$ since each edge from $u$ to $v$ represents a path $uwv$ of that sort. – Aaron Meyerowitz Dec 14 '10 at 7:01
Yes, I'm sure that in this section of the reference it's called "the product", and the union is called the union (sic!). The section is only 2 and a half pages, and is mostly concerned
with the graph polynomial, where the problem of different vertex sets does not occur. – Martin Rubey Dec 14 '10 at 9:37
add comment
I just came across the article titled ``Matrix Product of Graphs'' (http://link.springer.com/chapter/10.1007%2F978-81-322-1053-5_4) which may answer your first question (or may have
up vote 1 down been motivated by your question).
Thanks, I hope to be able to read that soon (Springer does not allow me to). – Anthony Labarre Apr 5 '13 at 10:33
add comment
Not the answer you're looking for? Browse other questions tagged graph-theory reference-request terminology or ask your own question. | {"url":"http://mathoverflow.net/questions/49233/name-of-an-operation-on-graphs?sort=oldest","timestamp":"2014-04-20T01:43:57Z","content_type":null,"content_length":"62871","record_id":"<urn:uuid:ee614e03-6d1a-4a12-b9ab-19187d69bede>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
In many of the problems dealt with in this book, the number of trials, n, is very large. A small molecule undergoing diffusion, for example, steps to the right or left millions of times in a
microsecond, not 4 times in a few seconds, as the ball in the apparatus of Fig. A.3. There are two asymptotic limits of the binomial distribution. One, the Gaussian, or normal, distribution, is
obtained when the probability of a success, p, is finite, i.e., if np -> n -> p is very small, so small that np remains finite as n ->
The derivation of the Gaussian distribution involves the use of Stirling's approximation for the factorials of the binomial coefficients:
where e is the base of the natural logarithms. The result is
where µ = <k> = np and k^2> - <k>^2)^1/2 = (npq)^1/2, as before. P(k; µ, dk is the probability that k will be found between k and k + dk, where dk is infinitesimal. The distribution is continuous
rather than discrete. Expectation values are found by taking integrals rather than sums. The distribution is symmetric about the mean, µ, and its width is determined by
If we define u = (k - µ) / µ, then
P(u) is called the normal curve of error; it is shown in Fig. A.5. As an exercise, use your tables of definite integrals and show that
Eq. A.30 can be done by inspection: P(u) is an even function of u, so uP(u) must be an odd function of u. The distribution P(u) is normalized, its mean value is 0, and its variance and standard
deviation are 1.
Figure A.5. The normal curve of error: the Gaussian distribution plotted in units of the standard deviation µ. The area under the curve is 1. Half the area falls between u = ± 0.67. | {"url":"http://ned.ipac.caltech.edu/level5/Berg/Berg4.html","timestamp":"2014-04-19T02:42:40Z","content_type":null,"content_length":"5005","record_id":"<urn:uuid:d5ded2aa-cd55-4a63-a8f9-75fbefcdc5c9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert Land to Square Feet - OnlineConversion Forums
Originally Posted by
What is the formula to convert a parcel of land to square feet?
I am currently trying to figure the square feet of a 4 sided parcel.
Side 1: 100 feet
Side 2: 150feet
Side 3: 100feet
Side 4: 150 feet
The area of a four-sided figure also depends on the corner angles. IF it is a rectangle (all four corners are 90°), area is 100' x 150' = 15000 ft².
If the corners aren't square, it will be smaller and we would need the angles to calculate area. | {"url":"http://forum.onlineconversion.com/showthread.php?t=1179","timestamp":"2014-04-18T05:51:27Z","content_type":null,"content_length":"62094","record_id":"<urn:uuid:0f133653-23e5-4e69-9737-46f135d71059>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clustering Large Graphs via the Singular Value Decomposition
Drineas, P and Frieze, A and Kannan, R and Vempala, S and Vinay, V (2004) Clustering Large Graphs via the Singular Value Decomposition. In: Machine Learning, 56 (1-3). pp. 9-33.
24.pdf - Published Version
Restricted to Registered users only
Download (167Kb) | Request a copy
We consider the problem of partitioning a set of m points in the n-dimensional Euclidean space into k clusters (usually m and n are variable, while k is fixed), so as to minimize the sum of squared
distances between each point and its cluster center. This formulation is usually the objective of the k-means clustering algorithm (Kanungo et al. (2000)). We prove that this problem in NP-hard even
for k = 2, and we consider a continuousm relaxation of this discrete problem: find the k-dimensional subspace V that minimizes the sum of squared distances to V of the m points. This relaxation can
be solved by computing the Singular Value Decomposition (SVD) of the m × n matrix A that represents the m points; this solution can be used to get a 2-approximation algorithm for the original
problem. We then argue that in fact the relaxation provides a generalized clustering which is useful in its own right. Finally,we showthat the SVD of a random submatrix—chosen according to a suitable
probability distribution—of a given matrix provides an approximation to the SVD of the whole matrix, thus yielding a very fast randomized algorithm. We expect this algorithm to be the main
contribution of this paper, since it can be applied to problems of very large size which typically arise in modern applications.
Actions (login required) | {"url":"http://eprints.iisc.ernet.in/16743/","timestamp":"2014-04-17T01:42:51Z","content_type":null,"content_length":"22553","record_id":"<urn:uuid:18327802-ae80-4646-aab0-12d909b61c70>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
LADAR (Laser Detection and Ranging) calculates target distance ranges by measuring the flight times of the laser pulses transmitted to and reflected from the target surfaces. These ranges can be
further converted into a 3D point cloud or a range-image in a local coordinate system by their integration with the position and attitude data acquired from Global Positioning System (GPS)/Integrated
Navigation System (INS) sensors mounted with the laser ranging unit.
As an emerging technology, it provides densely sampled 3D points with reliable and consistent quality in an automatic and prompt way. Thus LADAR systems have been widely utilized for various
applications in diverse fields. According to their specific applications, various kinds of LADAR systems have been developed with different components and mechanisms (e.g., scanning mechanisms,
detector types and sizes, and output data types) [1,2].
In topographic mapping, many applications to derive geospatial information from 3D point clouds have been developed, such as noise reduction [3], classification of ground points [4–6], segmentation
of meaningful patches [7–10], Digital Elevation Model (DEM) generation [4,6,11], building reconstruction [12–16], power-line detection [17,18], coastline extraction [19], forest biomass estimation
[20–22], and target detection [23–25].
Most systems used in topographic mapping employ a single detector with a scanning system [26]. The detector typically operates in a linear mode, producing an output current linearly proportional to
the power of the incident light [27]. By monitoring the output current, the system determines the receiving time of the returned laser pulses using a pulse detection scheme. In addition to the time,
recent systems also record the complete waveforms of the returned laser pulses [28]. The waveforms provide additional information about the geometric and physical properties of the targets,
particularly those composed of complex objects [28,29]. For example, in forest management, the waveforms are utilized for precise estimations of forest biomass [30–32].
In the defense sector, LADAR with Focal Plane Array (FPA) is more widely used for surveillance and reconnaissance in order to detect obstacles for safety guidance of ground or aerial vehicles [26].
Similar to the CCD of a digital camera, a FPA system, called also “flash LADAR,” can acquire 3D images while retaining the size of the array of detectors with a single laser shot.
For a high sensitivity detector, a Geiger-mode avalanche photodiode (GmAPD) has been recently employed. When the number of incident photons exceeds a predefined threshold, the APD becomes saturated
[1,33]. In addition, it outputs only a 1-bit digital state (0 or 1). Geiger-mode avalanche photodiode focal plane arrays (GM-FPAs) have been reported in numerous publications [27,33–37]. GmAPD can
provide several benefits [27,34]. Because of the high detection efficiency (up to single-photon sensitivity), it is possible to reduce the laser power for longer ranging distances and system
requirements (e.g., size, weight). However, since such highly sensitive detectors inherently suffer from noises, most systems with such detectors employ a range gating scheme to reduce the effect of
the noises by limiting the viewing range with a short exposure time [38–40]. Recent advances in CMOS detectors are providing fully integrated scanning LADAR sensors using Geiger mode detectors for
automotive applications [41].
As various kinds of LADAR systems have been developed for diverse applications, simulations of such systems have also been studied. Simulation studies are an essential prerequisite for the
development of a new LADAR system [42–45]. A simulation can provide: (1) a prediction of the system performance and optimization of the system design; (2) test data to develop and validate the
application algorithms dedicated to the given system; and (3) a deeper understanding of LADAR systems for education and training.
Topographic mapping applications have predominantly used airborne LADAR systems, including a laser scanner with a linear mode single detector and a scanning mirror, GPS and IMU. Most simulation
studies on such systems have focused on the geometric aspects. For example, the precise modeling of the systematic error of an airborne mapping LADAR system was performed by Schenk [46]. Lohani [47]
generated a 3D point cloud for airborne LADAR using geometric modeling. Kukko [48] performed a simulation with the real system parameters of commercial airborne LADAR systems for the analysis of
scanning patterns.
The previous studies related to FPA are as follows: the Center for Advanced Imaging LADAR (CAIL) at Utah State University, USA, performed a modeling simulation for linear mode imaging LADAR to
develop LadarSIM, implemented in Matlab [49–51]. A similar work was published by Swedish Defence Research Agency (Totalförsvarets forskningsinstitut, FOI) in Sweden [44]. They developed a modularized
computer model, FOI-LadarSIM, which is capable of LADAR simulation. Defence Science and Technology Organisation (DSTO) in Australia developed simulation software of foliage-penetrating LADAR with
Matlab, assuming the detector was in Geiger mode [45]. However, this software has some limitations, since it ignores the noise and the characteristics of the GmAPD. Zhao [52], in the National University of Defense
Technology (NUDT), China, published a simulation method for imaging laser radar, mainly focusing on the noise model and the related dropouts and outliers. Many researchers have attempted to develop
LADAR simulators for their own purposes. Most of them, however, focused on specific scope rather than fully comprehensive aspects.
In this study, we developed a method of comprehensive modeling and simulation for Geiger-mode imaging LADAR with a gate ranging and scanning mechanism. We then predicted and modeled its performance.
For high fidelity models, we analyzed previous works and then integrated the rigorous models into a comprehensive method. Our simulator is composed of three main modules: geometry, radiometry, and
detection modules. The geometry module defines the rays of laser beams and then determines the locations at which the rays intersect with the target surfaces. The radiometry module computes the
powers of the return pulses and generates the waveforms. The detection module finally generates the time when each pixels in a detector perceives the first photon. Using the proposed simulation of
three modules, the reference data, as well as the corresponding simulated point cloud, are generated. Finally, we evaluated the sensor performance based on the simulation by comparing the simulated
points with the reference points.
This research reliably verifies the data from a new type of LADAR system with given parameters and assesses its performance using indicators, such as the amount of noise and false alarms in advance
of developing hardware. Our simulator also provides a diversity of simulated data for the development of application algorithms that should be optimized for a real system.
The paper is organized as follows: Section 2 describes the modeling principles and simulation processes. Section 3 presents the experimental results with the implemented simulator and our analysis of
the performance assessment with the given system parameters. Finally, we present our conclusions and future research directions.
LADAR (or laser radar) generates 3D point clouds and range images by measuring the flight times of laser pulses. Figure 1 illustrates how the system acquires the point cloud. First, a laser pulse is
transmitted to the surfaces of the targets and background. The pulse is then backscattered after interacting with the surfaces. The reflected pulse energy passes through the optics and reaches the
receiver. A detector at the receiver senses the incident energy. A Geiger-mode detector responds sensitively to the first incident photon and is saturated regardless of the amount of received energy. In this way, it provides the time at which the first photon is perceived, whereas a linear-mode detector generates the waveform of the return pulse.
Ideally, a detector senses only the pulse energy emitted by the transmitter. However, internal and external noise energies are also detected by the receiver along with the return pulse. The main
causes of noise are the backscattered solar radiation and dark count due to thermal effects.
For the simulation of a LADAR system, three models were required (Figure 2). The first process, based on the geometric model, finds where the information of each pixel comes from. This process can be
executed by establishing the geometric relationships between the pixels in the detector and target surfaces. This enables the computation of 3D points as the intersection points between the ray
passing from each pixel to a focal point and its corresponding surface. They are not affected by the radiometric conditions or the nature of the detector. Therefore, this point cloud can be used as a
reference for the simulation outputs of the radiometric and detector models.
In the second step, the radiometric model computes how much energy strikes the pixels, including noise energies. First, the transmitted energy of each pixel is calculated using the predefined beam
profile. The return energy is computed using the laser equation with the radiometric and optical parameters and ranges calculated from the geometric model. Using the radiometric model, we can compute
the number of incident photons as a function of time.
The detection model generates the simulated time when the first photon is detected, based on a probability function. It includes the effect of APD timing jitter, the statistical time interval between the pulse arrival and the signal output of the APD. The afterpulsing effect, which also causes noise, is not considered in this paper. According to earlier research, the saturation of a Geiger-mode detector from
all light sources follows Poisson statistics under several assumptions [1]. The point cloud generated using the detection model includes outliers. For a performance assessment, the simulated point
cloud was compared point by point to the reference data generated in the geometric simulation. In general, it is difficult to compare two point cloud sets, since the correspondences between the
individual points in the two sets are difficult to establish. We were able to identify every corresponding point of the reference and the simulated data; therefore, it was possible to compare point
by point. Using the error matrix of the compared results, we computed the false alarm rate, dropout rate and outlier ratio of the simulated point cloud.
The purpose of the geometric modeling is to identify the source of each pixel's information, or the point at which the transmitted laser pulse is reflected on a target surface. To find this point,
the geometry of the laser pulse needs to be determined, both the direction and origin. Geometric modeling can then establish the ray model of the laser pulse and compute the intersection point. We
can determine the range from the origin to the intersection point and the reflectance of the intersected surface, which are used for the radiometric modeling described in Section 2.2.
The ray model can be defined by the geometric integration of the sub-modules in the LADAR system. The sub-modules are a GPS/INS and a scanning mechanism, each of which has its own coordinate system.
Therefore, they should be redefined in a common coordinate system using a geometric transformation based on the geometric relationships between the sub-modules.
An FPA detector system has N × N pixels. The acquired information for each pixel originates from the target point on the ray passing the pixel and the perspective center. The pixel location, the
perspective center and the target point are collinear. The line equation of the pixel ray can be established with three points. To define the ray, we defined the sensor coordinate system of the
detector as shown in Figure 3. Based on the sensor coordinate systems, we defined the line equation as Equation (1). The target point V is represented with the origin (perspective center) F, the
direction $u_0$, and the range r from the principal point to the target point. Direction $u_0$ is a unit vector from the location of pixel $C_{r,c}$ to the origin F. Subscripts (r, c) are the row and column indices, respectively.
$$V = u_0 \cdot r + F, \qquad u_0 = \frac{\overrightarrow{C_{r,c}F}}{\left\|\overrightarrow{C_{r,c}F}\right\|} \quad (1)$$
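To make Equation (1) concrete, the following minimal C++ sketch builds the unit direction $u_0$ from a pixel position $C_{r,c}$ and the perspective center F and evaluates the target point at range r. The Vec3 type and function names are hypothetical illustrations, not part of the simulator described here.

```cpp
#include <cmath>

// Minimal 3D vector type (hypothetical; any linear-algebra library would do).
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};

Vec3 normalize(const Vec3& v) {
    const double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / n, v.y / n, v.z / n};
}

// Equation (1): V = u0 * r + F, where u0 is the unit vector from the pixel
// location C_{r,c} toward the perspective center F in sensor coordinates.
Vec3 pixelRayPoint(const Vec3& pixelCrc, const Vec3& F, double range) {
    const Vec3 u0 = normalize(F - pixelCrc);
    return u0 * range + F;
}
```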
A LADAR system employs the scanning mechanism to increase its coverage. There are a variety of scanning mechanisms and each has its own scanning pattern (Figure 4). Each LADAR system adopts a
scanning mechanism suited to its own purpose, considering the strengths and weaknesses of each [2]. We performed modeling of the zigzag scanning pattern using Risley prisms with two pairs of
counter-rotating optical wedge prisms. A wedge prism is a prism with a shallow angle between its input and output surfaces, and a pair of wedge prisms is called a Risley prism pair. A Risley prism scanner can be developed so as to be relatively compact with low-power operation [53]. As shown in the left of Figure 5, a wedge prism can steer the laser beam with a deflection angle δ, and it can produce a circular pattern by rotating the lens (the left of Figure 6). Furthermore, the combination of two wedge prisms makes it possible to implement a reciprocating pattern parallel to the vertical or
horizontal axis. In addition, a zigzag pattern is feasible with two Risley prisms, as shown in the right of Figure 6.
The horizontal and vertical angular positions $(\alpha_h, \alpha_v)$ created by a set of four prisms for a zigzag pattern can be expressed by trigonometric functions with a rotational speed ω, a phase delay φ, the time t and the deflection angle δ, as in Equation (2). The 3D transformation matrices for steering the pixel ray horizontally and vertically are shown in Equation (3), and Equation (4) is the 3D transformation matrix $R_0^L$ for the zigzag scan pattern. We assumed that the pixel rays deflect at the principal point in the sensor coordinate system:

$$\alpha_h = \sum_{k=1}^{4} \delta_k \cdot \cos(\omega_k t + \varphi_k), \qquad \alpha_v = \sum_{k=1}^{4} \delta_k \cdot \sin(\omega_k t + \varphi_k) \quad (2)$$

$$R_h = \begin{bmatrix} \cos\alpha_h & 0 & \sin\alpha_h \\ 0 & 1 & 0 \\ -\sin\alpha_h & 0 & \cos\alpha_h \end{bmatrix}, \qquad R_v = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha_v & -\sin\alpha_v \\ 0 & \sin\alpha_v & \cos\alpha_v \end{bmatrix} \quad (3)$$

$$R_0^L = R_h \cdot R_v = \begin{bmatrix} \cos\alpha_h & \sin\alpha_h\sin\alpha_v & \sin\alpha_h\cos\alpha_v \\ 0 & \cos\alpha_v & -\sin\alpha_v \\ -\sin\alpha_h & \cos\alpha_h\sin\alpha_v & \cos\alpha_h\cos\alpha_v \end{bmatrix} \quad (4)$$
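A compact sketch of Equations (2)–(4) is given below: the steering angles are accumulated over the four wedge prisms and then assembled into the scan rotation $R_0^L = R_h R_v$. The Prism structure mirrors the symbols of Equation (2); the plain 3 × 3 array layout is an assumption made for illustration only.

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

struct Prism { double delta, omega, phi; };  // deflection, speed, phase delay

// Equation (2): horizontal/vertical steering angles from the four prisms.
void steeringAngles(const Prism p[4], double t, double& ah, double& av) {
    ah = av = 0.0;
    for (int k = 0; k < 4; ++k) {
        ah += p[k].delta * std::cos(p[k].omega * t + p[k].phi);
        av += p[k].delta * std::sin(p[k].omega * t + p[k].phi);
    }
}

// Equation (4): R_0^L = R_h * R_v for the zigzag scan pattern.
Mat3 scanRotation(double ah, double av) {
    Mat3 R{};
    R[0] = { std::cos(ah), std::sin(ah) * std::sin(av), std::sin(ah) * std::cos(av) };
    R[1] = { 0.0,          std::cos(av),               -std::sin(av) };
    R[2] = {-std::sin(ah), std::cos(ah) * std::sin(av), std::cos(ah) * std::cos(av) };
    return R;
}
```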
We then transformed the line equation in Equation (1) into a local coordinate system. Usually, the GPS/INS and the laser scanner are mounted on a platform together. The GPS/INS provides the position and the attitude in the local coordinate system. Figure 7 shows the geometric relationships of a LADAR system. In Figure 7, $T_{GI}^{Local}$, which is represented in the local coordinate system, is the position of the GPS/INS. $T_L^{GI}$ is the offset between the GPS/INS and the laser scanner. Based on this geometric relationship, Equation (5) was derived, where R indicates a rotational matrix for the geometric transformation, and T is a translation vector between the origins of the coordinate systems. $R_L^{GI}$ is the rotational matrix from the sensor coordinate system of the laser scanner to the GPS/INS coordinate system, and $R_{GI}^{Local}$ is the rotational matrix from the GPS/INS coordinate system to the local coordinate system:

$$V^{Local} = R_{GI}^{Local}\left(R_L^{GI} \cdot R_0^L \cdot u_0 \cdot r + T_L^{GI}\right) + T_{GI}^{Local} \quad (5)$$
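The chain of transformations in Equation (5) can be sketched as below, reusing the Vec3 and Mat3 types from the previous sketches; this is only a schematic of the composition of the scan rotation, the mounting parameters and the GPS/INS pose, not the simulator's actual implementation.

```cpp
// Schematic of Equation (5), reusing Vec3 and Mat3 from the sketches above:
// V_Local = R_GI_Local * (R_L_GI * R_0_L * u0 * r + T_L_GI) + T_GI_Local.
Vec3 mul(const Mat3& R, const Vec3& v) {
    return {R[0][0] * v.x + R[0][1] * v.y + R[0][2] * v.z,
            R[1][0] * v.x + R[1][1] * v.y + R[1][2] * v.z,
            R[2][0] * v.x + R[2][1] * v.y + R[2][2] * v.z};
}

Vec3 toLocal(const Mat3& R_GI_Local, const Mat3& R_L_GI, const Mat3& R_0_L,
             const Vec3& u0, double r,
             const Vec3& T_L_GI, const Vec3& T_GI_Local) {
    Vec3 inGI = mul(R_L_GI, mul(R_0_L, u0 * r)) + T_L_GI;   // scanner -> GPS/INS
    return mul(R_GI_Local, inGI) + T_GI_Local;              // GPS/INS -> local
}
```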
All sub-modules in LADAR systems, such as GPS/INS and laser scanners, possess some systematic and random errors. There are two kinds of errors in LADAR systems. The first comprises the individual
sensor errors, and the second the integration errors [46]. The former is inherently caused by the sensors themselves. Integration errors stem from the geometric integration among sensors. The
integration errors in the LADAR system occur predominantly from measurement errors associated with the mounting parameters and bore-sight angles. In this study, we identified the significant error
factors and took them into consideration.
With these errors taken into account, the direction and origin of the pixel rays can be determined. The true range of each pixel ray can then be calculated by searching for the intersecting surface. However, a real LADAR system handles
tens of thousands of laser pulses per second. Furthermore, LADAR simulation executes a tremendous number of geometric operations to search for the intersecting points between the pixel rays and the
target surfaces [54]. We employed a ray-tracing algorithm to rapidly process the overloaded geometric computations. For this, we used the B-rep (Boundary representation) structure, which is a method
for representing shapes as a set of facets, for the input data, such as the target and the background model, because of some advantages that we will discuss in Section 3.2.
The particular details of the ray-tracing used are as follows [55]. First, a grid structure was generated, and all of the facets were linked to their corresponding cells in the grid, according to the
horizontal locations. Each cell in the grid has maximum and minimum height values calculated from the boundary points of the linked facets. The aim of ray-tracing is to find the cell with candidate
facets that have the highest possibility of intersecting the pixel ray. The ray-tracing algorithm used in the simulation searched the intersecting cell by recursively reducing the vertical and
horizontal range until stability was achieved (Figure 8).
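The grid-based search described above might be sketched as follows, assuming a downward-looking ray and a precomputed, near-to-far list of the cells along the ray's horizontal footprint; the Cell structure and the exact facet intersection test are hypothetical placeholders.

```cpp
#include <optional>
#include <vector>

// Hypothetical grid cell: facets are binned by their horizontal location,
// and each cell stores the min/max height of the facets linked to it.
struct Cell { double zMin, zMax; std::vector<int> facetIds; };

// Schematic of the grid-based ray tracing: walk the cells along the ray's
// horizontal footprint, skip cells the ray cannot touch vertically (for a
// downward-looking ray, zEnter >= zExit), and test candidate facets exactly.
std::optional<int> traceRay(const std::vector<Cell>& cellsAlongRay,
                            double zEnter, double zExit,
                            bool (*intersectsFacet)(int facetId)) {
    for (const Cell& c : cellsAlongRay) {
        if (zExit > c.zMax || zEnter < c.zMin) continue;  // no vertical overlap
        for (int id : c.facetIds)
            if (intersectsFacet(id)) return id;  // first (nearest) hit wins
    }
    return std::nullopt;
}
```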
The purpose of radiometric modeling is to calculate the number of incident photons that enter the detector pixels. The radiometric model uses the range computed in the geometric simulation together with the radiometric and optical parameters of the system.
Ideally, the photons that strike the pixels of the detector are from the laser energy emitted from the transmitter. However, the detector collects not only the reflected laser energy, but also the
energy caused by the backscattered solar radiation. Furthermore, the dark count can also cause false alarms. The radiometric model deals with the reflected pulse energy and these noise sources.
The intensity of the laser beam is not uniform, but varies in both the spatial and the temporal domain [42]. Spatially, the irradiance of the beam varies with the distance from the central axis, which is the direction of the beam. This variation is defined as the beam profile and depends on the shape of the emitter and the technique used to generate the laser light. Figure 9 shows an example of a Gaussian beam profile. The irradiance is expressed as follows [42]:

$$I(d) = I_0 \cdot e^{-2\left(\frac{d}{B_w}\right)^2} \quad (6)$$

where d is the distance measured from the central axis of the beam in the cross-section; $I_0$ is the maximum irradiance of the beam; and $B_w$ is the beam half-width. Commonly, the irradiance is about 14% ($I_0/e^2$) of the maximum at $d = B_w$.
In the temporal domain, the laser signal is modeled as a pulse. There are several pulse models with different shapes. The pulse model used in this study was suggested in [42]. It is represented in Figure 10 and expressed as follows:

$$p(t) = \left(\frac{t}{\tau}\right)^2 \cdot e^{-\frac{t}{\tau}}, \qquad \tau = \frac{\mathrm{FWHM}}{3.5} \quad (7)$$

where FWHM is the full width at half of the maximum of the pulse.
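Equations (6) and (7) translate directly into two small functions; the sketch below assumes consistent units for all inputs and is given for illustration only.

```cpp
#include <cmath>

// Equation (6): Gaussian beam profile. d is the distance from the beam axis
// and bw the beam half-width; I(bw) = I0 / e^2, i.e., about 14% of the peak.
double beamIrradiance(double d, double I0, double bw) {
    return I0 * std::exp(-2.0 * (d / bw) * (d / bw));
}

// Equation (7): temporal pulse shape with tau = FWHM / 3.5.
double pulseShape(double t, double fwhm) {
    const double tau = fwhm / 3.5;
    return (t / tau) * (t / tau) * std::exp(-t / tau);
}
```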
The returned laser energy is calculated using a LADAR range equation [56]. Assuming that, for an extended target, the footprint of the beam is smaller than the target surface, the returned power can be calculated using the transmitted power $P_t$, the travel distance of the laser beam R, the reflectance of the target surface ρ, and the aperture diameter of the receiver D, as represented in Equation (8). In that equation, $\Omega_s$ is the scattering steradian solid angle of the target. For Lambertian (diffuse) targets, $\Omega_s$ is replaced by the solid angle of π steradians. $\eta_{sys}$ and $\eta_{atm}$ are the efficiency values of the optics of the system and the atmospheric attenuation, respectively. These variables can be written as Equations (9) and (10). For the round-trip laser pulse, $\eta_{atm}$ is the square of the one-way atmospheric transmittance, as in Equation (9); and $\eta_{sys}$ can be represented as the product of the fill factor $T_{FF}$, the bandpass filter transmittance $T_{BPF}$, the ND (Neutral Density) filter transmittance $T_{ND}$, the transmitter optics transmittance $T_T$ and the receiver optics transmittance $T_R$. With the previous assumption, substituting Equations (9) and (10) into Equation (8) leads to Equation (11):

$$P_r = P_t \cdot \rho \cdot \frac{1}{\Omega_s R^2} \cdot \frac{\pi D^2}{4} \cdot \eta_{atm} \cdot \eta_{sys} \quad (8)$$

$$\eta_{atm} = T_{atm\_transmitted} \cdot T_{atm\_received} = T_{atm}^2 \quad (9)$$

$$\eta_{sys} = T_{BPF} \cdot T_{ND} \cdot T_{FF} \cdot T_T \cdot T_R \quad (10)$$

$$P_r = \frac{P_t \cdot \rho \cdot D^2 \cdot T_{atm}^2 \cdot T_{BPF} \cdot T_{ND} \cdot T_{FF} \cdot T_T \cdot T_R}{4R^2} \quad (11)$$
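As a minimal sketch, Equation (11) can be evaluated with a single function; the parameter names follow the symbols above, and all transmittances are assumed dimensionless.

```cpp
// Equation (11): returned power for a Lambertian extended target. Pt is the
// transmitted power, rho the surface reflectance, D the aperture diameter,
// r the one-way range; all T factors are dimensionless transmittances.
double returnedPower(double Pt, double rho, double D, double r,
                     double Tatm, double Tbpf, double Tnd,
                     double Tff, double Tt, double Tr) {
    return Pt * rho * D * D * Tatm * Tatm
           * Tbpf * Tnd * Tff * Tt * Tr / (4.0 * r * r);
}
```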
The main sources of the noise occurring in the detector are reflected sunlight and dark count. They contribute to false alarms by arriving at the detector before the returned laser pulse. The sunlight (solar radiation) is collected by the receiver, although it does not originate from the transmitter. The incident energy of the backscattered solar radiation is given in Equation (12), where $E_{si}$ is the solar irradiance in units of W/m²/nm; $\delta_\lambda$ is the electromagnetic bandwidth of the bandpass filter; $\delta_t$ is the unit sampled time bin (the temporal resolution) of the system clock that measures the time; and A is the area covered within the IFOV (instantaneous field of view) in units of m², calculated as in Equation (13). Equation (15) can be derived from the substitution of Equations (10), (13) and (14) into Equation (12):

$$E_{solar} = E_{si} \cdot \delta_\lambda \cdot \delta_t \cdot A \cdot \rho \cdot \frac{1}{\Omega_s R^2} \cdot \frac{\pi D^2}{4} \cdot \eta_{atm} \cdot \eta_{sys} \quad (12)$$

$$A = (R \cdot \mathrm{IFOV})^2 \quad (13)$$

$$\eta_{atm} = T_{atm\_received} = T_{atm} \quad (14)$$

$$E_{solar} = \frac{E_{si} \cdot \delta_\lambda \cdot \delta_t \cdot \rho \cdot \mathrm{IFOV}^2 \cdot D^2 \cdot T_{atm} \cdot T_{BPF} \cdot T_{ND} \cdot T_{FF} \cdot T_R}{4} \quad (15)$$
The expected number of photoelectrons created by the dark count due to thermal effects within the detector is determined using Equation (16), where $f_{dc}$ is the dark count rate in units of Hz, although the dark count does not actually generate photoelectrons [1,33,37]. This assumes that the dark count is uniformly distributed in the time domain, and that every pixel in the detector has the same dark count:

$$E\!\left[N^{dc}\right] = f_{dc} \cdot \delta_t \quad (16)$$
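Equations (15) and (16) likewise reduce to two short helpers; the sketch below assumes the same symbol-to-parameter mapping as above.

```cpp
// Equation (15): solar noise energy collected in one time bin dT [s], with
// Esi the solar irradiance [W/m^2/nm] and dLambda the bandpass width [nm].
double solarNoiseEnergy(double Esi, double dLambda, double dT, double rho,
                        double ifov, double D, double Tatm, double Tbpf,
                        double Tnd, double Tff, double Tr) {
    return Esi * dLambda * dT * rho * ifov * ifov * D * D
           * Tatm * Tbpf * Tnd * Tff * Tr / 4.0;
}

// Equation (16): expected dark counts in one time bin (fdc in Hz, dT in s).
double expectedDarkCounts(double fdc, double dT) { return fdc * dT; }
```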
FPA imaging systems have a detector consisting of an N × N pixel array. The simulation of an imaging system requires the computation of the incident energy for each pixel, including the noise. For this, we derived an equation to compute the incident energy for each pixel under the following assumptions. The first assumption is that N × N laser beams, which we call sub-beams, are independently transmitted from the arrayed pixels and return to the pixels after reflecting off the target surfaces. The second is that the incident noise on each pixel is the same. Under these assumptions, we can calculate the incident energy per pixel.
The transmitted energy of the pulse, $E_{pulse}$, can be determined from the average power of the laser beam $P_t$ and the repetition rate of the pulse $f_{pulse}$, as shown in Equation (17). Then, the pulse energy is divided among the pixels according to the beam profile. Based on the Gaussian profile of Equation (6), the energy of the pixel located at (r, c), $E_{r,c}$, can be represented as Equation (18). The function Nor indicates the normalization of the beam profile so that the pixel energies sum to the pulse energy.

The returned energy of the sub-beam collected by a pixel can be derived, analogously to Equation (11), as Equation (19), where $E_{r,c}^{laser}$ is the total energy received by the (r, c) pixel. The return energy of the pixel is then modeled in the time domain with the pulse model, as in Equation (20). We defined the time bin t as an element of the set T ($t \in T$ and $T = \{\,k \cdot \delta_t + T_{min} \mid k \text{ is an integer satisfying } 0 \le k \le (T_{max} - T_{min})/\delta_t \,\}$) within the range gate. The range gate is the length of the measurement in the time domain. $t_r$ is the time converted from the round-trip distance between the pixel and the reflected point, and it shifts the pulse model by the corresponding delay. The received energy detected by the (r, c) pixel at a certain time t follows Equation (20). Finally, the expected number of photons collected by the pixel can be determined by dividing by the energy of a single photon, as in Equation (21), where h is Planck's constant and v is the frequency of the laser light. The expected number of photons from the solar radiation, $E[N^{solar}]$, is expressed in the same way with the speed of light c and the wavelength λ, and appears in Equation (22):

$$E_{pulse} = \frac{P_t}{f_{pulse}} \quad (17)$$

$$E_{r,c} = \mathrm{Nor}\!\left[e^{-2\left(\frac{\mathrm{dist}(r,c)}{B_w}\right)^2}\right] \cdot E_{pulse} \quad (18)$$

$$E_{r,c}^{laser} = \frac{E_{r,c} \cdot \rho \cdot D^2 \cdot T_{atm}^2 \cdot T_{BPF} \cdot T_{ND} \cdot T_{FF} \cdot T_T \cdot T_R}{4R^2} \quad (19)$$

$$E_{r,c,t}^{laser} = \mathrm{Nor}\!\left[\left(\frac{t - t_r}{\tau}\right)^2 \cdot e^{-\frac{t - t_r}{\tau}}\right] \cdot E_{r,c}^{laser} \quad (20)$$

$$E\!\left[N_{r,c,t}^{laser}\right] = \frac{E_{r,c,t}^{laser}}{h \cdot v} = \frac{E_{r,c,t}^{laser} \cdot \lambda}{h \cdot c} \quad (21)$$
Consequently, the expected number of photoelectrons sensed by the (r, c) pixel at a certain time can be calculated using Equation (22), where $E[N^{dc}]$ is not affected by the PDE (photon detection efficiency), because the dark count occurs in the APD circuit:

$$E\!\left[N_{r,c,t}\right] = \mathrm{PDE} \cdot \left(E\!\left[N_{r,c,t}^{laser}\right] + E\!\left[N^{solar}\right]\right) + E\!\left[N^{dc}\right] \quad (22)$$
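A sketch of Equations (21) and (22): energy is converted into an expected photon count via $\lambda/(hc)$, and the per-bin photoelectron expectation combines the laser and solar photons (scaled by the PDE) with the dark count. The function names are illustrative only.

```cpp
// Equation (21): energy-to-photon conversion, N = E * lambda / (h * c).
double photonsFromEnergy(double energyJ, double lambdaM) {
    const double h = 6.62607015e-34;  // Planck constant [J s]
    const double c = 2.99792458e8;    // speed of light [m/s]
    return energyJ * lambdaM / (h * c);
}

// Equation (22): expected photoelectrons in one time bin. The dark count is
// added outside the PDE factor because it arises inside the APD circuit.
double expectedPhotoelectrons(double pde, double laserPhotons,
                              double solarPhotons, double darkCounts) {
    return pde * (laserPhotons + solarPhotons) + darkCounts;
}
```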
The detection simulation determines the simulated time when each pixel detects the first photon. A Geiger mode detector can only perceive the primary photon, because it takes a few microseconds to
recover from the saturation by the photon. The saturation of the detector by the laser pulse and noise follows Poisson statistics [1,27].
In a certain time interval (time bin), the probability P(m) that a pixel detects m photons is determined using Equation (23) [1,33,37], where λ is the expected number of incident photons. By substituting λ with Equation (22), the probability that the pixel senses m photons in the time bin t, P(m; t), is derived as Equation (24). Since a Geiger-mode detector is saturated if at least one photon is sensed, the complementary event occurs when no photon is sensed (m = 0). The probability that at least one photon is detected can therefore be expressed as Equation (25) [1]:

$$P(m) = \frac{1}{m!} \cdot \lambda^m \cdot e^{-\lambda} \quad (23)$$

$$P(m; t) = \frac{1}{m!} \cdot \left(E\!\left[N_{r,c,t}\right]\right)^m \cdot e^{-E\left[N_{r,c,t}\right]} \quad (24)$$

$$P(t) = 1 - e^{-E\left[N_{r,c,t}\right]} \quad (25)$$
Using the detection probability of one pixel at each time bin, we can generate the simulated time when each pixel detects the photons as follows [1]:
1. Compute the expected numbers of incident photons for each time bin within the range gate using Equation (22), as shown in Figure 11(a). These include the expected numbers of photons created by the transmitted laser pulse, the backscattered solar radiation and the dark count.

2. By computing the probability that the pixel detects at least one photon for each time bin using Equation (25), generate a PDF (Probability Density Function), as shown in Figure 11(b).

3. Convert the PDF into a CDF (Cumulative Distribution Function) using Equation (26), as illustrated in Figure 11(c):

$$\mathrm{CDF}(k) = \sum_{i=1}^{k} \mathrm{PDF}(t_i) \quad (26)$$

4. Generate a random number Y from 0 to 1 following the uniform distribution. Then, search for the smallest bin k that satisfies CDF(k) ≥ Y. The bin k is the simulated time when the pixel detects the primary photon.
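One way to realize the four steps above is sketched below. Note that a survival factor is included so that the sampled bin corresponds to the first detected photon, a slight refinement of the plain per-bin CDF of Equation (26); the function returns −1 when the pixel is not saturated within the range gate.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Sketch of the detection process: per-bin saturation probabilities
// (Equation (25)) -> PDF of first detection -> CDF -> inverse sampling.
// expectedN holds E[N_{r,c,t}] per time bin within the range gate.
int simulateDetectionBin(const std::vector<double>& expectedN, std::mt19937& rng) {
    std::vector<double> cdf(expectedN.size());
    double survive = 1.0;  // probability that no photon has been detected yet
    double cum = 0.0;
    for (std::size_t k = 0; k < expectedN.size(); ++k) {
        const double pDetect = 1.0 - std::exp(-expectedN[k]);  // Equation (25)
        cum += survive * pDetect;      // first detection occurs in bin k
        survive *= 1.0 - pDetect;
        cdf[k] = cum;
    }
    const double y = std::uniform_real_distribution<double>(0.0, 1.0)(rng);
    for (std::size_t k = 0; k < cdf.size(); ++k)
        if (cdf[k] >= y) return static_cast<int>(k);
    return -1;  // no detection within the range gate
}
```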
We performed an experiment to verify the proposed methods for LADAR simulation. Based on the simulation results, we also assessed the performance of the LADAR system with the designed system
parameters and mission scenario.
The simulation program was implemented in the C++ language. The simulator is mainly composed of three parts: geometry, radiometry and detection, as shown in Figure 12. The geometric module identifies the source of the information of each pixel in the detector based on the geometric relationships between the LADAR system and the target [55]. It outputs the range from the perspective center to the intersection point. The radiometric module computes the incident energy of each pixel from both the transmitted laser pulse and the noise and generates the number of incident photons on each pixel per time bin. The detection module calculates the simulated time at which each pixel perceives the incident photons, based on the probability model. Table 1 shows the modules in detail.
The developed simulator employs 3D polyhedral models expressed in B-rep. As the input data of the LADAR simulator, B-rep models offer some advantages. They simplify the geometric operations in the simulation without the need for interpolation. Moreover, they remain usable as the system parameters are varied according to the mission scenario. For example, if the simulator used range images as the input data instead of B-rep models, many different images would be needed to account for the various positions and orientations of the sensor under the given mission scenario.
Figure 13 represents the 3D city model that was generated for the simulation experiments. The city model is a part of Yeongdeungpo-gu in Seoul, South Korea. It was generated by combining the
horizontal boundaries from digital maps and the corresponding height information from airborne LADAR data. It uses a total of 32,968 polygons to represent the ground and buildings.
A LADAR system has three sub-modules: a GPS, an INS and a laser scanner. The laser scanner consists of various components, such as a laser transmitter, optics, a receiver, a detector and a scanning device. Their system parameters need to be determined for each simulation. Tables 2, 3, 4 and 5 describe the main parameters and the values for each component. In Table 2, the laser mean power is the energy emitted by the transmitter per second. Because 25,000 laser pulses were transmitted per second, the energy of a single laser pulse was 0.4 mJ; thus, its peak power was 400 kW, which is the pulse energy divided by the pulse width of 1 ns.
Table 3 describes the parameters related to the scanning mechanism. The array size of the detector adopted in the LADAR system is small; thus, it is necessary to employ a scanning mechanism to
enlarge the coverage. As seen in Table 3, Lenses 1 and 2 had the same deflection angle and phase angle, but they rotated in opposite directions. Lenses 3 and 4 also retained the same properties. The phase angle determines the direction of deflection for a transmitted laser beam. Lenses 1 and 2 enable a horizontal reciprocating motion for the laser beam. Lenses 3 and 4 determine the vertical motion.
Table 4 shows the information about the detector. In order to simulate realistic and precise waveforms for a pixel, we divided each pixel into 6 × 6 sub-pixels to account for multiple echoes. Each sub-pixel receives an echo. Assuming that each echo originates from an individually reflected sub-pixel beam, each sub-pixel beam is processed separately in the geometric and radiometric simulations, and the waveform of a pixel is generated by summing the echoes of its sub-pixel beams.
The dark count is the noise generated on the circuit board due to thermal activity. The occurrence rate was 20 kHz, which is the average number of saturation counts per second even in complete darkness.
Most of the parameters listed in Table 5 are associated with the energy efficiency when the return pulses pass through the optical devices. The bandpass filter permits the incident light of a
specific wavelength to pass, and bandpass width is the range of the wavelength. The transmitter and receiver efficiencies are the attenuations due to other optical devices, such as lenses and prisms.
Solar irradiance is the measured amount of sunlight striking a square meter of the Earth's atmosphere or surface. It depends on many factors, such as the position of the sun, the weather conditions and the season. In this experiment, we used a solar irradiance of approximately 0.3 W/m²/nm, which is the value corresponding to the 1,560 nm laser wavelength in the solar radiance spectrum curve for direct light at sea level [57]. The selection of the laser wavelength depends on the application of the sensor. For example, most airborne topographic mapping LADAR systems use 1,064 nm diode-pumped YAG lasers. Bathymetric systems generally use 532 nm lasers that can penetrate water with less attenuation [28]. In this study, we focused on sensors for military applications, where 1,560 nm or 1,550 nm lasers are usually preferred, because they are eye-safe at much higher power levels for longer range measurements [26].
Figure 14 illustrates the position and attitude of the platform mounted with the LADAR system under simulation. The location of the platform was (100, -400, 1000) m in the local coordinate system,
and the look angle between the horizon and the LOS (Line of Sight) of the LADAR sensor was 60 degrees. Thus, the distance between the sensor and the target was about 1.2 km, and we determined the
measuring range to be 0.2 km (from 1.0 km to 1.2 km). The LADAR system acquired the point cloud for 0.1 s.
Having established the system and platform parameters, we were able to perform the LADAR simulation. The coverage of the simulated data with these parameters overlapped with the target models, as shown in Figure 15. Figure 16 shows the coverage of the FPAs at each laser shot. Figure 17 represents the point cloud generated by the geometric simulation, that is, the computation of the intersection points between the rays of the sub-beams and the surfaces. Because the points are computed as exact intersections, all points lie on their intersecting surfaces, and there are no outliers (noise). The geometric simulation does not consider the radiometric and electronic (photon detection) aspects, such as the characteristics of the laser, attenuation, beam interaction, noise and the detector. Therefore, the point cloud generated in the geometric simulation serves as the ground truth for the point cloud resulting from the radiometric and detection simulation. This reference point cloud will be used to assess the detection performance in the following section.
As a result of the whole simulation (geometry, radiometry and detection), 44,136 points were generated. The simulated point cloud produced by the entire simulation, from the geometric to the detection stage, is presented in Figure 18. The point density of the simulated LIDAR data was approximately 44.58 points/m²; the range of its x-coordinate values was 77.619∼122.230 m, and the range of its y-coordinate values was 140.620∼175.061 m. Unlike a linear-mode system, which is known to retain only a few outliers, we can confirm from the simulation results that the Geiger-mode system produces a significantly large number of outliers. Most outliers are caused by the dark count and the backscattered sunlight. The cloud also includes points backscattered from the target surfaces with a much higher density than that of the outliers, as shown in the middle of Figure 18 (20∼40 m height). Figure 19 presents an enlarged image of the points located at heights of 20∼40 m to examine the inlier points. The inlier point density is high enough for visual target identification.
Figure 20 shows the range image generated from the simulated point cloud. To generate this range image, outlier detection had to be applied first. For eliminating outliers at such a high ratio, we designed an adaptive median filter based on an analysis of the spatial distribution of the outliers [58]. The detailed algorithm will be addressed in future work after further improvement. After removing the outliers, we grouped the ranges according to the direction of the laser pulses with a constant angular interval. The interval was determined by considering the cross-range resolution of the range image to be 0.3 m at 1 km with a look angle of 90°. Here, the sampling units of the horizontal and vertical angles were 0.0172° and 0.0171°, respectively. Then, each pixel in the range image was calculated as the average of the ranges from the corresponding laser pulses. The array size of the range image in Figure 20 was 134 × 76 pixels.
As shown in Figure 20, 558 (5.5%) of the 10,184 pixels had null values, indicating that there were no simulated points within the view of those pixels. The dark pixels near the edges of the image are outside the coverage of the scanning mechanism, as can be seen by comparing the range image with the scanning pattern in Figure 16. The other dark pixels within the image are mainly caused by a lack of return energy. For the latter dark pixels located on vertical surfaces, the main cause is the high incidence angle. In addition, the characteristics of the Geiger-mode APD, the low incident energy due to the beam profile and laser speckle (excluded in this paper) may contribute to this phenomenon. These issues will be addressed in future work.
The method to assess the detection performance of the LADAR system by using the simulated data with the given system parameters is as follows. As mentioned in Section 2, in general, it is difficult
to identify the corresponding point pairs in two data sets of point clouds acquired by a real LADAR system. However, the pair of points between the simulated point cloud and the reference point cloud
can be easily determined, because the simulated point is generated point by point from the reference data. Figure 21 describes our method of performance assessment. The results of the performance
assessment based on a comparison of the simulated point cloud with the reference data set for each individual point are represented as an error matrix in Table 6.
As seen in Table 6, we categorized the reference data set into two types according to whether or not the target existed in the range gate. The former means that the pixels have to be saturated and
output the range, because the target is in the range gate. The latter means that the pixels do not have to be saturated, because there is no target in the range gate. “Saturated,” the left side
column in “simulation,” indicates the number of the pixels that were saturated as a result of the simulation. “Not saturated” is the number of the pixels that were not saturated. Table 7 describes
the meaning of each group in the error matrix of Table 6.
G1 and G2 are the cases where in the detection process worked correctly. E1 and E2 are cases where it did not. E1 is the dropout case in which a pixel fails to detect the return photons mainly due to
a low received energy. E2 is a false alarm wherein the pixels are saturated by the noise, though there is no target in the range gate. In the Case E0, there was a target in the gate range, and the
pixel was saturated similarly to G1; however, the pixels in E0 were saturated not by the laser pulse, but by the noise. In this study, we can calculate E0 by comparing the ranges between the
reference and the simulated data. Figure 22 shows the point cloud color-coded into G1, E0 and E2. Based on the error matrix, we computed the indicators to assess the detection performance of a LADAR
system with the given parameters in Table 8. In this study, the simulated data shows a false alarm rate of 1.76%, a dropout rate of 1.06% and an outlier proportion of 25.53%, although these
indicators are not representative.
For an accurate assessment of the performance of the LADAR system with the given system parameters, multiple experimental analyses with various target models must be performed. Therefore, further
studies focusing on performance assessment will be undertaken. Also, it seems to require a new method to remove outliers. There are few studies about eliminating outliers with high outlier ratio,
whereas there are many studies detecting outliers from point cloud generated by linear mode LADAR with low level noise.
By using the performance assessment process based on simulation, we can easily analyze the impact of the main system parameters to the system performance. We can perform this analysis by evaluating
the system performance derived from simulation while changing the system parameters used as the input to the simulator. For example, we performed the analysis on the impact of pulse repetition rate,
as shown in Figures 23,24 and 25. We attempted to assess the performance of LADAR system by analyzing simulated data. The simulated data were generated with the system parameters in Tables 2, 3, 4
and 5 and a flat surface as a target model. So the results in Figures 23, 24 and 25 are different from those in Table 8. Figure 23 shows the variations of the number of inliers and outliers as the
pulse repetition rate changes from 5 kHz to 25 kHz. The higher pulse repetition rate produces the large number of the inliers and outliers. The outlier ratio is also slightly increased. The cause can
be explained by Figure 24. The false alarm rate in Figure 24 was almost uniform, because it is not related to laser energy but noises such as sunlight or dark count. However, the dropout rate is
increased, because the pulse energy is decreased as in Equation (17). Figure 25 shows the performance in geometric aspect. Increasing pulse repetition rate cause higher point density of inliers owing
to the increasing number of inliers as shown in Figure 23. The fill factor indicates how fully the inlier points are filled in a grid. To generate a range image from the inlier points, we divide its
ground coverage into a grid with a certain ground resolution and then determine which points are corresponding to each cell. But some cells may have no point since number of inlier points may be
small and (or) their distribution may be not uniform. In such cases, the fill factor can be less than 100%. As indicated in Figure 25, the pulse repetition rate should be 10 kHz at least for the
maximum fill factor. | {"url":"http://www.mdpi.com/1424-8220/13/7/8461/xml","timestamp":"2014-04-21T07:23:56Z","content_type":null,"content_length":"151094","record_id":"<urn:uuid:f8825415-ca8d-4f4b-9ef8-def35a65f256>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binomial Theorem
May 4th 2012, 12:20 PM #1
May 2012
Binomial Theorem
I understand Binomial Theorem, but this was an extension question and I can't find the answer.
In the expansion of (1+bx)^n , the first three terms are 1-3x+(15/4)x^2. Find b and n.
Could someone please help me? Thanks.
Re: Binomial Theorem
$(a+b)^n=\sum\limits_{k=0}^n {n \choose k}a^{n-k}b^k$.
So the first three terms are: ${n\choose 0}a^nb^0 +{n \choose 1}a^{n-1}b^1+{n \choose 2}a^{n-2}b^2$.
Take it from there...
Re: Binomial Theorem
Write out the first three terms of the expansion and compare it to 1-3x+15x^2/4.
May 4th 2012, 12:29 PM #2
Sep 2010
May 4th 2012, 12:30 PM #3
Senior Member
Jan 2008 | {"url":"http://mathhelpforum.com/algebra/198361-binomial-theorem.html","timestamp":"2014-04-19T15:33:34Z","content_type":null,"content_length":"33294","record_id":"<urn:uuid:4028bb85-6a7f-4be6-b518-6e67f453b9eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multicore Run in Matlab via Python: Generation of Henon Maps
Hennon Map for a=1.4 and b=0.3.
Multi-core processors
are considered as a de facto standard. In scientific computing it is common to face a problem of generating data based on a single functionality with many different parameters. This can be thought as
"single instruction multiple parameter sets" type of computation. It is quite natural to utilise all cores available in your host machine. While many people uses MATLAB for rapid prototyping; Here I
show how to generate many
Hennon Map
s with different initial conditions (ICs) using MATLAB and Python. We will use Python to drive the computations and "single instruction" is being a function in MATLAB.
Let's first shortly remember the definition of the Hennon Map
$ x_{n+1} = y_{n} + 1 - \alpha x_{n}^2 $
$ y_{n+1} = \beta x_{n} $
It is known that for parameter ranges $\alpha \in [1.16, 1.41]$ and $\beta \in [0.2, 0.3]$ map generates
chaotic behaviour
i.e. sensitive dependence on initial conditions.
Here is a MATLAB function that generates a png file for a given parameters, initial condition values, file name and the upper bound for the iteration.
function hennonMap(a, b, xn, yn, upper, filename)% HENNONMAP generate hennon map at given parameters and initial conditions
% Mehmet Suzen
% msuzen on gmail
% (c) 2013
% GPLv3
% a \in [1.16 1.41]
% b \in [0.2, 0.3]
% Example of running this in the command line
% > matlab -nodesktop -nosplash -logfile my.log -r "hennonMap(1.4, 0.3, -1.0, -0.4, 1e4, 'hennonA1.4B0.3.png'); exit;"
Xval = zeros(upper, 1);
Yval = zeros(upper, 1);
for i=1:upper
Xval(i) = xn;
Yval(i) = yn;
x = yn + 1 - a*xn^2;
y = b * xn;
xn = x;
yn = y;
h = figure
plot(Xval, Yval,'o')
title(['Henon Map at ', sprintf(' a=%0.2f', a), sprintf(' b=%0.2f', b)])
print(h, '-dpng', filename)
Running this function from a command line, as described in the function help, would generate the figure shown above. Now imagine that we need to generate many plots with different ICs. It is easy to
open many shells and run from command line many times. However it is a manual task and we developers do not like that at all! Python would help us in this case, specifically its
module, to spawn the process from a single script. The concept here is closely related to threading, while multiprocessing module mimics threading API.
For example, let's say we have 16 cores and we would like to generate Henon maps for 16 different initial conditions. Here is a Python code running the above Hennon Map function by invoking 16
different MATLAB instances with each using different ICs:
# Mehmet Suzen
# msuzen on gmail
# (c) 2013
# GPLv3
# Run Hennon Map on multi-process (spawning processes)
from multiprocessing import Pool
import commands
import numpy
import itertools
def f(argF):
a = '%0.1f' % argF[0]
b = '%0.1f' % argF[1]
filename = "hennonA" + a + "B" + b + ".png"
commandLine = "matlab -nodesktop -nosplash -logfile my.log -r \"hennonMap(1.4, 0.3," + a + "," + b + ", 1e4, '" + filename + "'); exit;\""
print commandLine
return commands.getstatusoutput(commandLine)
if __name__ == '__main__':
pool = Pool(processes=12) # start 12 worker processes
xns = list(numpy.linspace(-1.0, 0.5, 4))
yns = list(numpy.linspace(-0.4, 1.1, 4))
print pool.map(f, list(itertools.product(xns, yns)))
It would be interesting to do the same thing only using R. Maybe in the next post, I'll show that too. | {"url":"http://memosisland.blogspot.de/2013/02/multicore-run-in-matlab-via-python.html","timestamp":"2014-04-17T09:34:37Z","content_type":null,"content_length":"80382","record_id":"<urn:uuid:d2b7becf-6dfd-4187-8583-babc02af3fcf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
│ The Plastic Number and the Divine Proportion │
│ by Juan C. Dürsteler │ [message nº 145] │
Scale and proportion are key concepts of visual representations. The divine proportion during centuries and, more recently, Van der Laaan’s plastic number have been proposed as aesthetic choices for
proportion. Nevertheless it’s not clear whether they really simplify understanding or aesthetics nor whether they are connected to our nature or not.
Modulor by Le Corbusier. The Swiss architect created this schema about proportions based in the golden section, that you can find in the human body. For example the ratio between the distance of the
head and navel to the ground is approximately Phi (1.618...).
Since ancient times there has existed the idea that certain serial arrangements of numbers reflect certain properties of nature either better or worse. In fact this is the underlying concept of
scale. A scale is a sequence of ordered numbers that usually serves as a comparison in order to define proportions between the real universe and the one we are willing to represent [possibly in
graphic form].
The complexity that the representation of the real world imposes has given rise to the appearance of many different choices of scales. Among them you can find the many musical scales (diatonic,
chromatic, tempered, sorog hirajoshi…) where sounds (sound frequencies) that are perceived as equivalent are fractions or multiples (proportions in the end) of other sounds.
In architecture, proportions are important and for many centuries architects have wondered which relations between sizes of the different architectonical elements are most appropriate, i.e. most
aesthetically or functionally pleasant . Not in vain Goethe defined architecture as “frozen music”.
Which are the ideal proportions for graphic representations? Does there exist a perfect proportion between height and width of a visualisation?.
Over the last few centuries many people have considered that the Phi number, better known as the divine proportion or the golden section is a standard for balance and beauty in regards to
proportions. Phi is 1.618033988..., or the limit that the ratio between any two elements of the Fibonacci sequence tend to.
The Fibonacci sequence is very easily constructed. Each term is just the sum of the two preceding ones, beginning with 0 and 1.
0 1 1 2 3 5 8 13 21 34 55 89 144 233 ...
The nice thing about the golden section is that it is a proportion that appears with certain frequency in nature, especially in geometry, but also in the approximate proportions of the human body.
In the website of Ron Knotts of the University of Surrey you can find many examples. Many other (yet much more disputable) ones are available at Goldennumber.net.
But there are also a lot of misinterpretations around Phi like, for example, it’s a common mistake that in the Nautilus shell (a sea cephalopod) Phi plays an important role. This is not true, its
shell is constructed around a logarithmic spiral, not around a golden spiral, as can be seen at "Spirals and the Golden Section" by John Sharp. Many attributions to golden proportions are only
wishful approximations.
But let’s come back to our interest. Phi is a member of the so called “morphic numbers” that have the interesting properties that you can find two values k and l which satisfy that
Morphic number condition. k=2 and l=1 give the golden section, k=3 and l=4, produce the plastic number. Tha chart shows the interesting properties of these two numbers. When p is the golden section,
1+p=p^2 and p-1=1/p. When p is the plastic number you get p-1= p^-4 y p^3=p+1.
Click on the image to enlarge it
Source: article about "Morphic Numbers" (see the text)
A question immediately arises: is there any other morphic number besides the golden section?. Arts, Fokkink and Kruijtzer from the University of Delft demonstrate in their article “Morphic numbers”
that there are only two morphic numbers, the divine proportion and the “plastic number” discovered in 1928 by the architect and Benedictine monk Hans van der Laan, who used it as a base for the
proportion of his architectural constructions. The plastic numbers gives birth to the Van der Laan scale that was used in the construction of the chapel of St. Benedictusberg a Benedictine abbey.
Interior of the chapel of the Benedictine abbey of Sint Benedictusberg, designed by Hans van der Laan (1904-1991) using the plastic number as the basis of its scale.
Click on the image to enlarge it
See the photo gallery of the same.
Answering our question, could it be that the golden section or the plastic number is the ideal proportion to make graphic representations? There’s no indisputable evidence about this. Sr Wiliiam
Playfair, reputed as one of the first to make bar charts in the 18th century, used predominantly close to the golden section proportions in his graphics, although he made use of other proportions
Edward Tufte points out that human preferences for proportions in rectangular shapes have been the subject of study since 1860 by psychologists that have found a mild preference for proportions
around the golden section, but with a variation that goes from 1.2 up to 2.2.
The existence of a “natural” proportion connecting to the perceptual roots of the human nature is not nonsense. Should it exist, it would provide a basis on which to construct harmonious scales and
probably less cumbersome graphics. A related idea states that given the fractal nature of the world, information visualisation in fractal form could be closer to our natural way of perceiving the
world, thus being a more advantageous one.
Although the idea is very appealing, unfortunately there isn’t indisputable evidence about it. The way we humans process perceptual information is still largely a mystery. Structuralists consider,
for example, that each and every representation is of an arbitrary-conventional nature, rejecting the possibility of sensorial, representation that can be understood without the need to learn a
particular convention.
When faced with this situation pragmatism is the choice. Following Tufte, if the nature of the representation suggests its shape, follow it. If not, preferably use a wider rather than a taller shape
with a proportion that appears useful or pleasant to you.
In my personal opinion, consistently using a coherent scale, be it the golden section, the plastic number or whatever else is always a good choice to build harmonious representations. But here the
key is consistency, not the proportion itself.
I Owe the inspiration for this article to an interesting discussion with the architect Manuel Couceiro da Costa and Jim Wise, cognitive psychologist and expert in information visualisation, during a
cold and rainy Saturday morning at the Circulo de Bellas Artes in Madrid.
Links of this issue:
http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/fib.html Ron Knotts' Website about the golden section
http://goldennumber.net/ Golden number net
http://www.nexusjournal.com/Sharp_v4n1-pt04.html Spirals and the Golden Section" by John Sharp
http://www.math.leidenuniv.nl/~naw/serie5/deel02/mrt2001/pdf/archi.pdf Artícle about Morphic Numbers
http://www.vijlen.net/kerk/content/foto's%20abdij%20st%20benedictusberg.html Photo gallery about Sint Benedictusberg | {"url":"http://infovis.net/printMag.php?num=145&lang=2","timestamp":"2014-04-18T20:43:31Z","content_type":null,"content_length":"15609","record_id":"<urn:uuid:a147bf72-c2dc-4c85-8b53-125fce229261>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
One-way functions and balanced NP
, 1995
"... We prove that if strong pseudorandom number generators exist, then the class of languages that have polynomialsized circuits (P/poly) is not measurable within exponential time, in terms of the
resource-bounded measure theory of Lutz. We prove our result by showing that if P/poly has measure zero in ..."
Cited by 29 (4 self)
Add to MetaCart
We prove that if strong pseudorandom number generators exist, then the class of languages that have polynomialsized circuits (P/poly) is not measurable within exponential time, in terms of the
resource-bounded measure theory of Lutz. We prove our result by showing that if P/poly has measure zero in exponential time, then there is a natural proof against P/poly, in the terminology of
Razborov and Rudich [25]. We also provide a partial converse of this result.
- SIAM Journal on Computing , 2000
"... Given a real number ff ! 1, every language that is weakly P n ff=2 \GammaT -hard for E or weakly P n ff \GammaT -hard for E 2 is shown to be exponentially dense. This simultaneously strengthens
results of Lutz and Mayordomo(1994) and Fu(1995). 1 Introduction In the mid-1970's, Meyer[15] prov ..."
Cited by 8 (1 self)
Add to MetaCart
Given a real number ff ! 1, every language that is weakly P n ff=2 \GammaT -hard for E or weakly P n ff \GammaT -hard for E 2 is shown to be exponentially dense. This simultaneously strengthens
results of Lutz and Mayordomo(1994) and Fu(1995). 1 Introduction In the mid-1970's, Meyer[15] proved that every P m -complete language for exponential time---in fact, every P m -hard language for
exponential time---is dense. That is, E 6` Pm(DENSE c ); (1) where E = DTIME(2 linear ), DENSE is the class of all dense languages, DENSE c is the complement of DENSE, and Pm(DENSE c ) is the class
of all languages that are P m -reducible to non-dense languages. (A language A 2 f0; 1g is dense if there is a real number ffl ? 0 such that jA n j ? 2 n ffl for all sufficiently large n, where An =
A " f0; 1g n .) Since that time, a major objective of computational complexity theory has been to extend Meyer's result from P m -reductions to P T -reductions, i.e., to prove that ... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1599970","timestamp":"2014-04-18T06:44:21Z","content_type":null,"content_length":"15066","record_id":"<urn:uuid:df5d3bc0-3941-4287-84bd-a4c5e40b94fa>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Logic of Proofs \Lambda
Sergei Artšemov
Steklov Mathematical Institute,
Vavilov str. 42,
117966 Moscow, Russia
email: art@log.mian.su
August 10, 1993
In this paper individual proofs are integrated into provability logic. Systems
of axioms for a logic with operators ``A is provable'' and ``p is a proof of A''
are introduced, provided with Kripke semantics and decision procedure. Com
pleteness theorems with respect to the arithmetical interpretation are proved.
1 Introduction
In [1] and [2] proofs were incorporated into propositional logic by means of
labeled modalities. The basic labeled modal logic contains the propositional
logic enriched by unary operators 2 p i , i = 0; 1; 2; : : : . This language helps to
provide a logical treatment of a rather general situation when we are interested
not only to know that a certain statement A is valid, but also have to keep track
on some evidences of its validness: 2 p A may stand for ``p is a proof of A'', ``p is | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/261/3910751.html","timestamp":"2014-04-20T08:23:49Z","content_type":null,"content_length":"8082","record_id":"<urn:uuid:54272759-863b-4fe8-8ee5-20387e8f8341>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
Greeley, CO Statistics Tutor
Find a Greeley, CO Statistics Tutor
Have instructed most of the Statistics and Research Methods Graduate and Undergraduate Courses at the University of Northern Colorado. I have tutored extensively in the past and am user friendly.
Have 18 years experience teaching in colleges of Business, Education and Arts and Sciences.
21 Subjects: including statistics, reading, GED, grammar
...I earned a BS in Electrical Engineering and Computer Science at the University of California, Berkeley, then worked at a major computer company for 10 years before deciding to focus on helping
other people. At that point, I went back to school and became a Registered Nurse. After working in healthcare for a while, I have decided to split my time between helping the body and helping the
13 Subjects: including statistics, geometry, algebra 1, algebra 2
...In addition, python's moduler design allow me write customized routines to be stored and saved for later use in other complex systems. The capability of coding with prepackaged, freely
available modules, gives me the basis to interact with remote machines, pickling of data for transfer to remote...
47 Subjects: including statistics, chemistry, physics, calculus
...I have been working with computers since 1986 beginning with a course in microcomputers (useless, I found out) and a part-time job entering data using MS Access. Since that time, I have used
various office software programs, mostly MS, for my profession as an environmental consultant. Many software programs have been learned through intensive self-teaching for use on a particular
18 Subjects: including statistics, biology, anatomy, Microsoft Excel
My goal is to work myself out of the job of tutoring you in math and statistics. Just going through the mechanics of the calculations is not enough. I strive to explain concepts with clear,
simple language; discuss real world examples relevant to you; and help you develop your own problem solving approach.
9 Subjects: including statistics, algebra 1, SQL, computer programming
Related Greeley, CO Tutors
Greeley, CO Accounting Tutors
Greeley, CO ACT Tutors
Greeley, CO Algebra Tutors
Greeley, CO Algebra 2 Tutors
Greeley, CO Calculus Tutors
Greeley, CO Geometry Tutors
Greeley, CO Math Tutors
Greeley, CO Prealgebra Tutors
Greeley, CO Precalculus Tutors
Greeley, CO SAT Tutors
Greeley, CO SAT Math Tutors
Greeley, CO Science Tutors
Greeley, CO Statistics Tutors
Greeley, CO Trigonometry Tutors
Nearby Cities With statistics Tutor
Boulder, CO statistics Tutors
Brighton, CO statistics Tutors
Broomfield statistics Tutors
Evans, CO statistics Tutors
Fort Collins statistics Tutors
Garden City, CO statistics Tutors
Johnstown, CO statistics Tutors
Longmont statistics Tutors
Loveland, CO statistics Tutors
Milliken statistics Tutors
Northglenn, CO statistics Tutors
Severance, CO statistics Tutors
Thornton, CO statistics Tutors
Westminster, CO statistics Tutors
Windsor, CO statistics Tutors | {"url":"http://www.purplemath.com/Greeley_CO_Statistics_tutors.php","timestamp":"2014-04-18T08:22:26Z","content_type":null,"content_length":"24382","record_id":"<urn:uuid:ca768340-b60b-4fca-8b5d-c184d3aace87>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applying Logic to an Interesting Problem on Billiard Balls
Date: 03/03/2004 at 16:56:18
From: Kara Taft
Subject: Breakable billiard Balls
In front of you is a 100 story building. You must determine which is
the highest floor you can drop a billiard ball from without it
breaking. You have only two billiard balls to use as test objects.
If both of them break and you don't know the answer then you have
failed at your task. What is the least number of drops needed to be
sure you will have determined the breaking point? As a hint, 14 drops
is the best answer, 18 drops is a good answer.
Date: 03/03/2004 at 18:32:53
From: Doctor Douglas
Subject: Re: Breakable billiard Balls
Hi Kara.
This is a very interesting algorithm development problem. Indeed,
I've found a solution where 14 drops (maximum) is the best answer.
I will make a few remarks in the hopes that this will be enough for
you to figure out the solution on your own.
1. Here's a stupid strategy that works if you have only ONE ball.
Drop in on the lowest floor 1, if it survives, move up one
floor. At the end of this process, you will know which was
the last floor it survived. If you are down to one ball, this
is a good strategy.
2. But you have two balls, so your strategy can be more "aggressive"
in the beginning. Suppose you drop the first ball on floor ten.
Then if it breaks, you know the answer is somewhere between floor
1 and floor 9. You have one ball left, and you can resort to
the strategy in (1) above, and find which of these floors is
the highest. It might take 9 more drops (because you have to
test floor 1, floor 2,..., and floor 9).
3. Because we know from the hint that 14 drops is the maximum, we
can actually be more aggressive still, we might as well drop
the first ball on floor 14, and (if it breaks), use the other
thirteen drops with our second ball to find the answer.
Thus our strategy might look something like the following tree.
A descent to the left indicates that the test on that floor resulted
in a broken ball, a descent to the right indicates that the test
on that floor resulted in an intact ball. We can have, at maximum,
TWO leftward movements:
14 - N Numbers indicate we test on that floor.
/ left symbol (/) means broken ball.
1 right symbol (\) means intact ball.
/ \
=0 2
/ \
=1 3
/ \ equals sign indicates answer to problem
=2 . (e.g. =2 means floor 2 is the highest)
/ \
=12 =13
Notice that for most of the outcomes, we end up with two broken
balls, but we do know what floor was the highest successful test.
4. This is the decision tree that results if the first test (at
floor 14, is unsuccessful. The number N is the next floor
to be tested (N > 14). You have to make an intelligent guess
as to what this number N is, and be sure that you can test
floors 15 up to N-1 with the remaining 13 drops. Remember to
be as aggressive as possible, and that floor 14 is at this point
known to give an intact ball.
5. As you go up this tree, you will be using up drops, so the
lengths of these branches will have to get shorter.
Hopefully, that will be enough for you to develop a strategy in
which fourteen drops are sufficient for all 100 floors. You may
be able to see the pattern develop that demonstrates that thirteen
drops are insufficient.
- Doctor Douglas, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/64866.html","timestamp":"2014-04-20T09:24:58Z","content_type":null,"content_length":"8923","record_id":"<urn:uuid:7b700952-a747-4308-b358-b3fb4dd4ba1f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deriving DG categories
Results 1 - 10 of 67
- J. Algebra
"... Abstract. Keller introduced a notion of quotient of a differential graded category modulo a full differential graded subcategory which agrees with Verdier’s notion of quotient of a triangulated
category modulo a triangulated subcategory. This work is an attempt to further develop his theory. ..."
Cited by 79 (0 self)
Add to MetaCart
Abstract. Keller introduced a notion of quotient of a differential graded category modulo a full differential graded subcategory which agrees with Verdier’s notion of quotient of a triangulated
category modulo a triangulated subcategory. This work is an attempt to further develop his theory.
- Homology, Homotopy and Applications
"... Dedicated to H. Keller on the occasion of his seventy fifth birthday Abstract. These are expanded notes of four introductory talks on A∞-algebras, ..."
Cited by 68 (6 self)
Add to MetaCart
Dedicated to H. Keller on the occasion of his seventy fifth birthday Abstract. These are expanded notes of four introductory talks on A∞-algebras,
, 2006
"... The main purpose of this work is to study the homotopy theory of dg-categories up to quasi-equivalences. Our main result is a description of the mapping spaces between two dg-categories C and D
in terms of the nerve of a certain category of (C, D)-bimodules. We also prove that the homotopy category ..."
Cited by 61 (8 self)
Add to MetaCart
The main purpose of this work is to study the homotopy theory of dg-categories up to quasi-equivalences. Our main result is a description of the mapping spaces between two dg-categories C and D in
terms of the nerve of a certain category of (C, D)-bimodules. We also prove that the homotopy category Ho(dg −Cat) possesses internal Hom’s relative to the (derived) tensor product of dg-categories.
We use these two results in order to prove a derived version of Morita theory, describing the morphisms between dg-categories of modules over two dg-categories C and D as the dg-category of (C, D)
-bi-modules. Finally, we give three applications of our results. The first one expresses Hochschild cohomology as endomorphisms of the identity functor, as well as higher homotopy groups of the
classifying space of dgcategories (i.e. the nerve of the category of dg-categories and quasi-equivalences between them). The second application is the existence of a good theory of localization for
dgcategories, defined in terms of a natural universal property. Our last application states that the dg-category of (continuous) morphisms between the dg-categories of quasi-coherent (resp. perfect)
complexes on two schemes (resp. smooth and proper schemes) is quasi-equivalent
- JPAA
"... The cyclic homology of an exact category was defined by R. McCarthy [26] using the methods of F. Waldhausen [36]. McCarthy's theory enjoys a number of desirable properties, the most basic being
the agreement property, i.e. the fact that when applied to the category of finitely generated projective m ..."
Cited by 45 (1 self)
Add to MetaCart
The cyclic homology of an exact category was defined by R. McCarthy [26] using the methods of F. Waldhausen [36]. McCarthy's theory enjoys a number of desirable properties, the most basic being the
agreement property, i.e. the fact that when applied to the category of finitely generated projective modules over an algebra it specializes to the cyclic homology of the algebra. However, we show
that McCarthy's theory cannot be both compatible with localizations and invariant under functors inducing equivalences in the derived category. This is our motivation for introducing a new theory for
which all three properties hold: extension, invariance and localization. Thanks to these properties, the new theory can be computed explicitly for a number of categories of modules and sheaves.
- Compos. Math
"... Dedicated to Claus Michael Ringel on the occasion of his sixtieth birthday. Abstract. For a noetherian scheme, we introduce its unbounded stable derived category. This leads to a recollement
which reflects the passage from the bounded derived category of coherent sheaves to the quotient modulo the s ..."
Cited by 34 (5 self)
Add to MetaCart
Dedicated to Claus Michael Ringel on the occasion of his sixtieth birthday. Abstract. For a noetherian scheme, we introduce its unbounded stable derived category. This leads to a recollement which
reflects the passage from the bounded derived category of coherent sheaves to the quotient modulo the subcategory of perfect complexes. Some applications are included, for instance an analogue of
maximal Cohen-Macaulay approximations, a construction of Tate cohomology, and an extension of the classical Grothendieck duality. In addition, the relevance of the stable derived category in modular
representation theory is indicated.
- J. Amer. Math. Soc
"... We establish equivalences of the following three triangulated categories: Dquantum(g) ← → D G coherent (Ñ) ← → Dperverse(Gr). Here, Dquantum(g) is the derived category of the principal block of
finite dimensional representations of the quantized enveloping algebra (at an odd root of unity) of a comp ..."
Cited by 23 (8 self)
Add to MetaCart
We establish equivalences of the following three triangulated categories: Dquantum(g) ← → D G coherent (Ñ) ← → Dperverse(Gr). Here, Dquantum(g) is the derived category of the principal block of
finite dimensional representations of the quantized enveloping algebra (at an odd root of unity) of a complex semisimple Lie algebra g; the category DG coherent (Ñ) is defined in terms of coherent
sheaves on the cotangent bundle on the (finite dimensional) flag manifold for G ( = semisimple group with Lie algebra g), and the category Dperverse(Gr) is the derived category of perverse sheaves on
the Grassmannian Gr associated with the loop group LG ∨ , where G ∨ is the Langlands dual group, smooth along the Schubert stratification. The equivalence between Dquantum(g) and DG coherent (Ñ) is
an ‘enhancement ’ of the known expression (due to Ginzburg-Kumar) for quantum group cohomology in terms of nilpotent variety. The equivalence between Dperverse(Gr) and DG coherent (Ñ) can be viewed
as a ‘categorification ’ of the isomorphism between two completely different geometric realizations of the (fundamental polynomial representation of the) affine Hecke algebra that has played a key
role in the proof of the Deligne-Langlands-Lusztig conjecture. One realization is in terms of locally constant functions on the flag
, 2004
"... These notes are based on a series of five lectures given during the ..."
- Manuscripta Math , 1994
"... Using one of Wodzicki’s examples of H-unital algebras [14] we exhibit a ring whose derived category contains a smashing subcategory which is not generated by small objects. This disproves the
generalization to arbitrary triangulated categories of a conjecture due to Ravenel [8, 1.33] and, originally ..."
Cited by 16 (2 self)
Add to MetaCart
Using one of Wodzicki’s examples of H-unital algebras [14] we exhibit a ring whose derived category contains a smashing subcategory which is not generated by small objects. This disproves the
generalization to arbitrary triangulated categories of a conjecture due to Ravenel [8, 1.33] and, originally, Bousfield [2, 3.4]. 1. Statement of the conjecture We refer to [7] for a nicely written
analysis of the following setup: Let S be a triangulated category [13] admitting arbitrary (set-indexed) coproducts. An object X ∈ S is small if the functor Hom (X,?) commutes with arbitrary
coproducts. We denote the full subcategory on the small objects of S by Sb. We suppose that Sb is equivalent to a small category. A full subcategory of S is localizing if it is a triangulated
subcategory in the sense of Verdier which is closed under forming coproducts with respect to S. WeKeller suppose that S is generated by S b, i.e. coincides with its smallest | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3202594","timestamp":"2014-04-18T01:06:34Z","content_type":null,"content_length":"33730","record_id":"<urn:uuid:da9dd30d-f38c-454d-85ae-2626b01f0904>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generalized Hylomorphisms
Thu 24 Apr 2008
I haven't seen written up anywhere the following operator (g_hylo), defined in the spirit of generalized catamorphisms and generalized anamorphisms, which seems to follow rather naturally from the
definition of both -- I'm using liftW & liftM rather than fmap to make it clear what is being lifted over what.
class Functor w => Comonad w where
-- minimal definition: extend & extract or duplicate & extract
duplicate :: w a -> w (w a)
extend :: (w a -> b) -> w a -> w b
extract :: w a -> a
extend f = fmap f . duplicate
duplicate = extend id
liftW :: Comonad w => (a -> b) -> w a -> w b
liftW f = extend (f . extract)
g_hylo :: (Comonad w, Functor f, Monad m) =>
(forall a. f (w a) -> w (f a)) ->
(forall a. m (f a) -> f (m a)) ->
(f (w b) -> b) ->
(a -> f (m a)) ->
a -> b
g_hylo w m f g =
extract .
hylo (liftW f . w . fmap duplicate) (fmap join . m . liftM g)
. return
hylo f g = f . fmap (hylo f g) . g
In the above, w and m are the distributive laws for the comonad and monad respectively, and hylo is a standard hylomorphism. In the style of Dave Menendez's Control.Recursion code it would be a
'refoldWith' and it can rederive a whole lot of recursion and corecursion patterns if not all of them.
One Response to “Generalized Hylomorphisms”
1. The Comonad.Reader » Time for Chronomorphisms Says:
April 26th, 2008 at 2:43 am
[...] First, we can make the generalized hylomorphism from the other day more efficient by noting that once you inline the hylomorphism, you can see that you do 3 fmaps over the same structure,
so we can fuse those together yielding: [...] | {"url":"http://comonad.com/reader/2008/generalized-hylomorphisms/","timestamp":"2014-04-20T20:55:41Z","content_type":null,"content_length":"28773","record_id":"<urn:uuid:993802f2-3400-4326-bab1-d1cddfa6c6a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
Superconductors are materials that demonstrate no resistance to the flow of electric current. That's zero electrical resistance. Therefore, an electric current initiated inside a perfect
superconductor will not dissipate with time and will flow forever.
The critical temperature of superconductors is usually given in degrees Kelvin. So what is degrees Kelvin and why do we use this temperature scale? Kelvin is named after the individual Lord Kelvin,
who suggested that absolute zero become the base of a new temperature scale. This temperature scale has been adopted by science because it is much easier to work with positive numbers in equations
rather than negative numbers encountered in the Celsius and Fahrenheit scales. For instance liquid nitrogen condenses into a liquid at 77 degrees Kelvin. This temperature is equivalent to –196
degrees Celsius or -320 degrees Fahrenheit.
Conversion between the temperature scales is accomplished using a few simple formulas.
To convert degrees Kelvin to Celsius subtract 273. So 100 K is equivalent to –173 C.
To convert Fahrenheit to Celsius
F = 9/5 C + 32
To Convert Celsius to Fahrenheit
C = 5/9 F – 32 | {"url":"http://imagesco.com/articles/supercond/01.html","timestamp":"2014-04-18T02:58:16Z","content_type":null,"content_length":"7522","record_id":"<urn:uuid:9945510e-f64c-4e53-9ddd-5b9201e3c85c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hacienda Heights Algebra Tutor
...I was a tutor for both SAT I math and SAT II Math IIC. I was an SAT instructor at camp in South Korea - Summer 2013. I am a former high school math teacher.
52 Subjects: including algebra 1, algebra 2, English, chemistry
...My study plans are geared to the individual. This gives the assurance that I am interested in the individuals success, and not just trying to fit them into a pre-made box that they are already
discouraged with. I have found that this, more personal, approach is very successful with struggling s...
4 Subjects: including algebra 1, algebra 2, prealgebra, elementary math
...Allow me to simplify what seems confusing and unintelligible, without needing to resort to math concepts that are at best erudite, and at worst simply unreliable. I have taught a dozen law
school subjects for more than 25 years. Knowing where students typically go wrong in their efforts, I can provide exceptional guidance to address and prevent such problems in the future.
39 Subjects: including algebra 1, algebra 2, reading, English
...Overall, I taught them good values in life to grow up to be loving and responsible kids for their parents as well as good citizens to the society. I understand the importance of "background
checking", when someone chooses a tutor for his/her child. I have been teaching in the public schools dur...
9 Subjects: including algebra 1, algebra 2, geometry, chemistry
...I am also currently teaching Algebra 1 in a private middle school in Los Angeles, Ca. I graduated with a Bachelor and Masters degree in Mathematics in the Philippines. I have experience
tutoring students in advance mathematics.
3 Subjects: including algebra 2, algebra 1, statistics | {"url":"http://www.purplemath.com/Hacienda_Heights_Algebra_tutors.php","timestamp":"2014-04-17T19:30:07Z","content_type":null,"content_length":"24050","record_id":"<urn:uuid:4bd2df34-da67-4a55-aaa7-b26b21dc7d36>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00587-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dynamic Programming
October 6th 2009, 07:27 PM
Dynamic Programming
I'm having a hard time formulating an algorithm in solving a Dynamic Programming Problem that involves looking for the cheapest path of a given digraph.
the twist though, is that you have to look for the cheapest path from point 1 to k, but when you have a path from point 1 to k, the second most expensive path will be free.
For example, if a path from 1 to 10 is 1>4>6>10 with arc costs 5,6,3,4 respectively, arc 1 will have 0 cost. then, its total cost will be 6+3+4 = 13. I'm supposed to minimize this cost.
Help me guys... | {"url":"http://mathhelpforum.com/advanced-math-topics/106591-dynamic-programming-print.html","timestamp":"2014-04-18T23:19:39Z","content_type":null,"content_length":"3536","record_id":"<urn:uuid:f8c38b30-a5bb-4532-961e-c1e3dc9ff7ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimating Weights of Giant Largemouth Bass -- Bass Articles Bass Fishing ArticlesPredicting Weights of Giant Largemouth Bass
Estimating Weights of Giant Largemouth Bass
By: Terry Battisti
For the past 25 years there has been a race to catch the next world record largemouth bass. Although there have been a few close calls amongst record hunters, the 22 pound 4 ounce record caught by
George Perry in 1932 has yet to be officially broken. Recently, there have been a few record size fish caught that have not been officially documented, and therefore, are not recognized as the true
Without a doubt, there will be no fish “more scrutinized” than the next world record largemouth bass. Proof of this lies in recent and past attempts at record fish submissions. Although a fish has
a very low chance of being accepted as the new world record without proper documentation, there will always be someone that enters a fish that has a very questionable pedigree. The last submission
attempt regarding an all class world record is a prime example.
In the past few months, I have been obsessed with the thought of being able to estimate the weight of these gigantic green eating machines. Past models, or formulas as many people call them, are
poor at estimating the weight of a bass of such proportions. The reason for this is due to the fact that these models were fit to a population of bass that are far lower in weight than record class
fish. Therefore, it is critical to develop a more accurate model to validate future submissions of trophy sized fish based on measurements typically submitted by anglers i.e., length and girth.
Trophy Bass Proportions
One example that shows trophy size bass are different than their smaller sisters and brothers has to do with their Length to Weight Ratio (L/W). Most small bass have an L/W well above 2.5, whereas
fish over 16 lbs have an L/W ratio below 1.6. Data from even larger fish, say in the 18 lb class, that have been authentically measured, possess lower scores yet, in the range of 1.0 to 1.4.
Another parameter, the Length to Girth Ratio (L/G), follows the same suit. Smaller bass, especially those under 10 lbs, typically have an L/G ratio in the 1.55 to 1.75 range, whereas fish over 16
lbs are in the 1.0 to 1.2 range.
Model Development
What does this mean, one might ask? In order to understand, one must look into the math used in developing weight estimation models. Typically, models are developed on a large sample of fish in a
broad size range in order to come up with an overall model. If the density of the fish and shape dimensions, i.e. L/W and L/G, remain somewhat constant across the sample population, this method can
be accurate. With largemouth bass though, this is not true.
Largemouth bass vary not only in density, but also in their shape parameters. This is evidenced by the fat watermelon shaped fish caught in California versus the long, more slender fish caught in
Florida. In order to accurately estimate the weight of a bass from these two different locations, two different models would have to be developed. The reason for this lies in the inherent fit
parameters used in these models.
Model development starts out theoretically by developing a pseudo-volumetric equation. This equation is almost always based on a right circular cylinder, the volume which is described by the
[] (1)
where, D equals diameter and L equals length. In order to transform this equation into something useful for fish, one takes the diameter term and puts it in terms of circumference, or Girth as for a
bass. This transformation leads to the expression:
[] (2)
Now, in order to make this volume expression relate to weight or, more correctly mass, one must multiply volume by density, r, as shown in equation 3. Once this is done, an expression for weight has
theoretically been developed.
[] (3)
In order to make this expression work for a fish, which does not possess the dimensions of a right circular cylinder, a shape factor, k, must be introduced. By combining the shape factor, along with
the density, and p, one arrives at the development of an overall fit parameter, P. Equations 4 and 5 illustrate both.
[] (4)
[] (5)
Length, girth, and weight data from a number of fish are then tabulated and the new formula for weight estimation is used, by initially guessing at a fit parameter, in order to estimate the weight of
the fish. Once all the calculations have been completed, a least squares curve fit is conducted which automatically adjusts the fit parameter in order to make the modeled weights converge on the
actual measured weights. The most widely used fit parameter for fish is 800 while, just recently, the IGFA has adopted the value of 927 for largemouth bass.
Using the method described above, length, girth and weight data from 67 fish weighing 14.25 pounds or heavier were used to develop a new model based solely on the theoretical cylinder shown in
Equation 5. The weight estimates from this model, deemed the 958 model for the value of the fit parameter, were then plotted versus the actual weights of the fish in study. Figure 1 shows the
results of this exercise. An example of this model is shown below in Equation 6 with the 19.875 lb bass caught by Mike Long in 2004 which had Length and Girth measurements of 29.5 inches and 26.75
inches respectively.
[] (6)
Another method used in developing a weight estimation model was to keep the sum of the length and girth exponents equal to three but vary their values through a number of least squares curve fits.
This would still give units of volume, which is dimensionally sound, but allows the modeler to not be constrained completely by the individual exponential values. An example can is shown in Equation
[] (7)
where: a + b = 3
Starting out with the L exponent equal to zero and the G exponent equal to three, the exponential values were changed in increments of 0.1 until an L exponent value of three and G exponent value of
zero was obtained. Then, by plotting the value of the L exponent versus the sum of the least squares analysis for each run, the optimum values for the L exponent and the G exponent were determined.
An example of this model is presented below in Equation 8, again using Mike Long’s 19.875 lb bass .
[] (8)
The final method used for model development was a purely empirical method in which the exponents of length and girth are allowed to vary along with the fit parameter during a non-linear least squares
regression. This method, although not theoretically based due to the fact that the sum of the exponents is allowed to deviate from units of volume, is commonly used when more theoretical methods do
not produce satisfactory results. In essence, they are ways of estimating a desired outcome when some or all of the needed theoretical parameters (in this case density) are unknown. The outcome
from this analysis provided the best “overall” results of the entire study except for fish over 20lbs where it underestimated the actual weight by up to 6%. Equation 9 gives an example of how this
equation is used by again, using Mike Long’s bass. All of the above models and their results can be viewed in Figure 1 and Table 1.
[] (9)
Another method used to determine whether the models were more accurate for bass of a certain shape was a plot of the L/G Ratio versus the Weight Percent Difference in the model result with respect to
the actual weight. Negative numbers show the model under-estimated the weight of the bass while positive numbers show a result that was over-estimated. A statistically sound model should always
have an even number of results above and below the Zero Line. All of the models showed good distribution above and below the line but again, the empirically fit model produced the best results. See
Figure 2.
Confidence intervals were also determined for each model in order to determine exactly how accurate each model was compared to actual weights. These intervals allow the user to determine not only
the validity of the model but also the amount of error that can be expected for the interval chosen. For example, if a model has a confidence interval of +/- 4% at 90%, this means that 90% of the
time, the model will be within 4% of the actual weight value. Three different intervals were determined, 90%, 95%, and 99%. The results of this analysis are shown in Table 2. The results show that
again, the Empirical Model was by far the best in determining weight with the best certainty.
Estimating Some Well Known Bass
Using the Empirical Model in order to estimate the weight of some well known fish was done to see where these fish might possibly stand against the record. The three fish chosen were George Perry’s
Record, Paul Duclos’ behemoth, and the bass caught last year by Leaha Trew. The measurements of Perry’s fish are said to have been 32.5 inches in length and 28.5 inches in girth. Measurements of
Duclos’ fish were never taken but the California Department of Fish and Game studied the photograph and came up with what they feel to be a good estimate. The length being between 29 and 31 inches
and the girth between 29 and 30 inches. The Trew fish was said to measure 29 inches in length and 25 inches in girth. The results are shown in Table 3.
Although the models presented above provide a more accurate estimate for a trophy bass’ weight than models of the past, a larger sample size of bass in the 18 to 20 pound range must be analyzed.
Users of these models must understand the deviations from actual weight when applying these models to their catches. Also of interest were the results from the L/G versus Weight Percent Difference
study. The reason this is interesting is shown along the L/G line at values between 1.15 and 1.20. Fish that fall within this interval are quite accurately estimated in weight with an under
estimate no greater than weight 5%. This was of particular interest to me in this study due to the latest fish that was submitted for a world record. This fish, having an L/G value of 1.16 and
using the empirical model, would have been estimated to weigh between 17.94 lbs and 18.83 lbs instead of the 22.5 lbs claimed.
Figure 1.
Figure 2.
│ Model Name │Formula│Maximum Deviation %│Sum of Least Squares │
│ Empirical Model │ [] │ -17.29/+10.98 │ 85.2 │
│Forced Volume Model│ [] │ -20.78/+16.02 │ 162.3 │
│ 958 Model │ [] │ -27.90/+18.55 │ 187.9 │
│ IGFA 927 Model │ [] │ -23.78/+21.77 │ 208.2 │
Table 1. Three newly developed models compared to the new 927-Model developed by the IGFA.
│ │ IGFA 927 Model │ 958 Model │ Empirical Model │ Forced Volume Model │
│ │ │ │ │ │
│Confidence Interval│Percent Difference from Actual│Percent Difference from Actual│Percent Difference from Actual│Percent Difference from Actual│
│ 90% │ -0.9% / +3.3% │ -4.2% / +0.1% │ -1.4% / +1.4% │ -3.7% / +0.3% │
│ 95% │ -1.3% / +3.7% │ -4.6% / +0.5% │ -1.7% / +1.6% │ -4.1% / +0.6% │
│ 99% │ -2.1% / +4.5% │ -5.5% / +1.3% │ -2.2% / +2.2% │ -4.9% / +1.4% │
Table 2. Confidence Intervals for models with range of mass percent deviation from actual weight.
│ │Length│Girth │Estimated Weight│Estimated Weight Plus 5%, pounds │
│ │ │ │ │ │
│ Name │inches│inches│ pounds │ │
│Perry │ 32.5 │ 28.5 │ 21.49 │ 22.57 │
│Duclos│ 29 │ 29 │ 20.17 │ 21.17 │
│Duclos│ 29 │ 30 │ 20.71 │ 21.75 │
│Duclos│ 30 │ 29 │ 20.64 │ 21.67 │
│Duclos│ 30 │ 30 │ 21.20 │ 22.26 │
│Duclos│ 31 │ 29 │ 21.10 │ 22.16 │
│Duclos│ 31 │ 30 │ 21.67 │ 22.76 │
│ Trew │ 29 │ 25 │ 17.94 │ 18.83 │
Table 3. Weight estimates of three well known big bass. Six different estimates were done on the Duclos fish in order to cover the entire range of length and girth measurements estimated.
Bio: Terry Battisti lives in Idaho Falls, Idaho and is a frequent contributor to In-Fisherman. Not only an avid bass fisherman, Terry also holds a Ph.D. in Chemical Engineering and likes to apply
his math skills in the fishing area as well. | {"url":"http://www.wmi.org/bassfish/articles/T196.htm","timestamp":"2014-04-20T16:19:52Z","content_type":null,"content_length":"104481","record_id":"<urn:uuid:fd416293-04b3-4cee-881b-a7971dfc4291>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Types a
, 1996
"... { rjmh, pareto, sabry We have designed and implemented a type-based analysis for proving some baaic properties of reactive systems. The analysis manipulates rich type expressions that contain
in-formation about the sizes of recursively defined data struc-tures. Sized types are useful for detecting d ..."
Cited by 120 (2 self)
Add to MetaCart
{ rjmh, pareto, sabry We have designed and implemented a type-based analysis for proving some baaic properties of reactive systems. The analysis manipulates rich type expressions that contain
in-formation about the sizes of recursively defined data struc-tures. Sized types are useful for detecting deadlocks, non-termination, and other errors in embedded programs. To establish the
soundness of the analysis we have developed an appropriate semantic model of sized types. 1 Embedded Functional Programs In a reactive system, the control software must continu-ously react to inputs
from the environment. We distin-guish a class of systems where the embedded programs can be naturally expressed as functional programs manipulat-ing streams. This class of programs appears to be
large enough for many purposes [2] and is the core of more ex-pressive formalisms that accommodate asynchronous events, non-determinism, etc. The fundamental criterion for the correctness of
pro-grams embedded in reactive systems is Jwene.ss. Indeed, before considering the properties of the output, we must en-sure that there is some output in the first place: the program must continuous]
y react to the input streams by producing elements on the output streams. This latter property may fail in various ways: e the computation of a stream element may depend on itself creating a “black
hole, ” or e the computation of one of the output streams may demand elements from some input stream at different rates, which requires unbounded buffering, or o the computation of a stream element
may exhaust the physical resources of the machine or even diverge.
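The "black hole" failure mode is easy to sketch with lazy streams. A toy Python illustration (not the paper's formalism; the paper works in a typed functional calculus):

    from itertools import islice

    def ones():
        # A productive stream: each element arrives after finite work.
        while True:
            yield 1

    def black_hole():
        # The first element depends on itself: forcing it never
        # produces output (in practice, Python raises RecursionError).
        yield next(black_hole())

    print(list(islice(ones(), 5)))  # [1, 1, 1, 1, 1]
    # next(black_hole())  # the liveness failure that sized types are meant to catch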
Information and Computation (1991)
"... . We present a method for providing semantic interpretations for languages with a type system featuring inheritance polymorphism. Our approach is illustrated on an extension of the language Fun
of Cardelli and Wegner, which we interpret via a translation into an extended polymorphic lambda calculus. ..."
Cited by 116 (3 self)
. We present a method for providing semantic interpretations for languages with a type system featuring inheritance polymorphism. Our approach is illustrated on an extension of the language Fun of
Cardelli and Wegner, which we interpret via a translation into an extended polymorphic lambda calculus. Our goal is to interpret inheritances in Fun via coercion functions which are definable in the
target of the translation. Existing techniques in the theory of semantic domains can be then used to interpret the extended polymorphic lambda calculus, thus providing many models for the original
language. This technique makes it possible to model a rich type discipline which includes parametric polymorphism and recursive types as well as inheritance. A central difficulty in providing
interpretations for explicit type disciplines featuring inheritance in the sense discussed in this paper arises from the fact that programs can type-check in more than one way. Since interpretations
follow the type...
(2004)
"... We present a generalization of the ideal model for recursive polymorphic types. Types are defined as sets of terms instead of sets of elements of a semantic domain. Our proof of the existence of
types (computed by fixpoint of a typing operator) does not rely on metric properties, but on the fact tha ..."
Cited by 23 (2 self)
We present a generalization of the ideal model for recursive polymorphic types. Types are defined as sets of terms instead of sets of elements of a semantic domain. Our proof of the existence of
types (computed by fixpoint of a typing operator) does not rely on metric properties, but on the fact that the identity is the limit of a sequence of projection terms. This establishes a connection
with the work of Pitts on relational properties of domains. This also suggests that ideals are better understood as closed sets of terms defined by orthogonality with respect to a set of contexts.
(1994)
"... Both pre-orders and metric spaces have been used at various times as a foundation for the solution of recursive domain equations in the area of denotational semantics. In both cases the central
theorem states that a `converging' sequence of `complete' domains/spaces with `continuous' retraction pair ..."
Cited by 21 (0 self)
Both pre-orders and metric spaces have been used at various times as a foundation for the solution of recursive domain equations in the area of denotational semantics. In both cases the central theorem states that a 'converging' sequence of 'complete' domains/spaces with 'continuous' retraction pairs between them has a limit in the category of complete domains/spaces with retraction pairs as morphisms. The pre-order version was discovered first by Scott in 1969, and is referred to as Scott's inverse limit theorem. The metric version was mainly developed by de Bakker and Zucker and refined and generalized by America and Rutten. The theorem in both its versions provides the main tool for solving recursive domain equations. The proofs of the two versions of the theorem look astonishingly similar, but until now the preconditions for the pre-order and the metric versions have seemed to be fundamentally different. In this thesis we establish a more general theory of domains based on the noti...
(2006)
"... We propose a type system based on regular tree grammars, where algebraic datatypes are interpreted in a structural way. Thus, the same constructors can be reused for different types and a
flexible subtyping relation can be defined between types, corresponding to the inclusion of their semantics. For ..."
Cited by 15 (1 self)
We propose a type system based on regular tree grammars, where algebraic datatypes are interpreted in a structural way. Thus, the same constructors can be reused for different types and a flexible
subtyping relation can be defined between types, corresponding to the inclusion of their semantics. For instance, one can define a type for lists and a subtype of this type corresponding to lists of
even length. Patterns are simply types annotated with binders. This provides a generalization of algebraic patterns with the ability of matching arbitrarily deep in a value. Our main contribution,
compared to languages such as XDuce and CDuce, is that we are able to deal with both polymorphism and function types. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1493489","timestamp":"2014-04-16T17:31:09Z","content_type":null,"content_length":"26145","record_id":"<urn:uuid:ca140332-7fca-4c74-8e38-80e1bc3eb1e5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there a better way to express the sign of a variable that will eventually hold a numeric value?
Up to this point, I have been using x/abs(x), e.g.:
> eval(x/abs(x), [x = -1]);
The problem is that it's inconvenient when the numeric value is identically zero:
> eval(x/abs(x), [x = 0]);
Error, numeric exception: division by zero
Using a floating point zero works fine, but is not always practical:
> eval(x/abs(x), [x = 0.]);
Now, the obvious tool here, sign(x), evaluates without complaint:
> sign(0);
The problem with sign is that I cannot incorporate it into an expression that can be freely used later. For, the following produces an unexpected result:
> eval(sign(x), x = -1);
Constructing the expression using layers of quotes doesn't help:
> expr := 200*' 'sign(x)' ';
200 'sign(x)'
Simple uses of eval work fine:
> expr;
200 sign(x)
> eval(expr, [x = -1]);
However, substitution with that expression using subs or eval doesn't produce the expected result:
> eval(a*b, [a = expr]);
200 sign(x) b
> eval(%, [x = -1]);
200 b
This also doesn't produce the expected result:
> subs(a = expr, a*b);
200 sign(x) b
> eval(%, [x = -1]);
200 b
I realize that this question might collapse into "How do I prevent a function from evaluating prematurely?" but I cannot find any search terms that yield useful results.
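For comparison, the two underlying ideas here (a sign function that tolerates zero, and deferring evaluation until a value is supplied) look like this in Python. This is only an illustration; it does not answer the Maple quoting question itself:

    def sign(x):
        # Returns -1, 0, or 1; avoids the division by zero of x/abs(x) at x == 0.
        return (x > 0) - (x < 0)

    # Defer evaluation by building a closure rather than an expression:
    expr = lambda x: 200 * sign(x)
    print(expr(-1))  # -200
    print(expr(0))   #  0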
May 4th 2013, 06:23 PM #1
Apr 2013
Help please
Hello all,
I have a question, "What is the least value of c if 2x^2 - 12x + c is never negative?"
I'm guessing that means c must be found when 2x^2 - 12x + c = 0, but I'm not sure what kind of answer I'm supposed to get.
Re: Help please
I think we can reason it out like this. We want

$2x^2 - 12x + c \ge 0$ for all $x$.

Complete the square: $2x^2 - 12x + c = 2(x - 3)^2 + (c - 18)$.

Since $2(x - 3)^2 \ge 0$ for every $x$, with equality at $x = 3$, the expression is never negative exactly when $c - 18 \ge 0$. So the least value is $c = 18$. (Equivalently, the discriminant $(-12)^2 - 4 \cdot 2 \cdot c = 144 - 8c$ must be $\le 0$, which gives $c \ge 18$ again.)
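A quick numerical sanity check of the boundary case (illustrative only):

    import numpy as np
    x = np.linspace(-10, 10, 100001)
    print((2*x**2 - 12*x + 18.0).min())  # ~0, attained at x = 3
    print((2*x**2 - 12*x + 17.9).min())  # negative: any c below 18 fails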
Linearization problem
1. The problem statement, all variables and given/known data
Linearize the equation
Vout = (10^v)*sin(x)
about x=0,0.1, and 1. Write the equation in both the original coordinates and the shifted (linearized) coordinates.
2. Relevant equations
3. The attempt at a solution
dVout/dx = (10^v)*cos(x)
evaluating that equation at x=0,0.1, and 1 gives me
10^v, 10^v*(0.995), and 10^v*(0.540), respectively

therefore, Vout ≈ 10^v*sin(0) + 10^v*cos(0)*(x - 0) = 10^v * x about x = 0, and similarly for x = 0.1 and 1. This doesn't seem right though, and I'm not sure how to write the equation in the shifted coordinates.
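Assuming v is held fixed (so 10^v is just a constant multiplier), the three linearizations can be tabulated with a short Python script (the variable names are mine):

    import numpy as np

    for a in (0.0, 0.1, 1.0):
        s, c = np.sin(a), np.cos(a)
        # Original coordinates: Vout ~ 10**v * (sin(a) + cos(a)*(x - a))
        # Shifted coordinates:  dVout = 10**v * cos(a) * dx, with dx = x - a
        print(f"a = {a}: Vout ~ 10**v * ({s:.3f} + {c:.3f}*(x - {a}))")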
Singularities and residues
October 19th 2010, 08:09 AM #1
Sep 2009
Singularities and residues
Find the isolated singularities (if any) of the following functions. For each isolated singularity, describe its nature; that is, is it removable or a pole (and of what order) or essential? In
each case, calculate the residue of the function at the singularity.
a) $\pi cot(\pi z) - 1/z$
I got a removable singularity at z=0.
but how do I exactly work out the residue for this?
b) $z^{-\frac{1}{2}}$
I got none, because $z^{-1/2}$ is multi-valued; $z = 0$ looks like a branch point rather than an isolated singularity. I'm not quite sure about this one...
c) $sin(z)sin(\frac{1}{z})$
I got a removable singularity at z=0. Not sure about this one either.... These are some hard functions that I got...
Please help me out!!
for a), should I expand it as a Laurent series... and then see what coefficients I get?
a) The residue at a removable singularity is zero. Your job is to think about why ....
b) The function is $f(z) = \frac{1}{\sqrt{z}}$ ....
c) I suggest you try getting the first few terms of the series around z = 0. Is there a 1/z term ....?
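For a), the standard expansion makes the removability explicit:

$\pi \cot(\pi z) = \frac{1}{z} - \frac{\pi^2 z}{3} - \frac{\pi^4 z^3}{45} - \cdots$

so $\pi \cot(\pi z) - \frac{1}{z} = -\frac{\pi^2 z}{3} - \frac{\pi^4 z^3}{45} - \cdots$ near $z = 0$: the singularity there is removable and the residue is 0. Note, though, that the nonzero integers $z = n$ remain simple poles of $\pi\cot(\pi z)$, each with residue 1, since subtracting $1/z$ only repairs the pole at the origin.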
Johnston, RI Algebra 2 Tutor
Find a Johnston, RI Algebra 2 Tutor
...I am also very experienced in mechanics, electrical work, and plumbing. I am confident that I can assist anyone interested in joining our armed forces via the ASVAB (Armed Services Vocational
Aptitude Battery). I am certified K-6 and have been tutoring for many years. Math is my specialty.
15 Subjects: including algebra 2, geometry, algebra 1, biology
...I play recreational volleyball in over five leagues and other tournaments in the course of a year. In my years playing volleyball I have learned how to develop consistency in passing, setting,
and hitting, as well as how to cultivate court awareness on both sides of the net. I have combined my t...
44 Subjects: including algebra 2, English, reading, chemistry
Hello, My name is Bernie and I am currently an Ed.D. student at Boston University; I just finished my M.Ed. in the Spring of 2010. I have been a high school mathematics and chemistry teacher for
3 years, and I want to return to the classroom after finishing my Ed.D. I have taught all levels of hi...
19 Subjects: including algebra 2, chemistry, physics, calculus
...I have excellent qualifications for tutoring ACT Math. I have spent several years teaching high school Math and tutoring students in Math at the middle school, high school, and college level.
I have also tutored many students privately to prepare them for the ACT.
25 Subjects: including algebra 2, geometry, algebra 1, statistics
...While in college, I taught a Literature class to high school students and entirely created the syllabus and lesson plans. When I was in high school, I pursued my autodidactic tendencies and
ended up homeschooling myself during my senior year. I also achieved a near-perfect score on the SAT Verbal.
32 Subjects: including algebra 2, English, GRE, reading
Is W a subspace of V?
January 16th 2013, 10:49 PM
Is W a subspace of V?
Each of the following involves a vector space V and a subset W. For each decide whether W is a subspace of V.
1.) V = R^3, W = {(x,y,z) | x <= z}. I say no because it doesn't preserve scalar multiplication. For example, (2,2,4) satisfies 2 <= 4, but multiplying by -1 gives (-2,-2,-4), and -2 > -4, which violates the condition that x must be less than or equal to z. Am I right?
2.) V = R[<= 3] [x], W = Z[<= 3] [x]. So V is the polynomials in x with real coefficients and degree at most 3. W is the polynomials in x with integer coefficients and degree at most 3. I'm not
sure on this one, any help?
January 16th 2013, 11:19 PM
Re: Is W a subspace of V?
Hey TimsBobby2.
Can you show us an attempt to prove your statements with the appropriate vector space axiom?
January 16th 2013, 11:22 PM
Re: Is W a subspace of V?
1) yes, multiplying by a negative scalar ruins everything.
2) is it closed under polynomial addition? is it closed under scalar multiplication? (recall that (cp)(x) = c(p(x)) for all x). is the 0-polynomial in W?
you might want to ask yourself: for the polynomial p(x) = x in Z[x], is (1/2)p(x) in Z[x]?
January 16th 2013, 11:35 PM
Re: Is W a subspace of V?
So for 2) if I consider W and I do scalar multiplication by 1/2, (1/2)p(x) is not in Z[x], so W is not a subspace of V? Is that correct to say (I'm just starting this stuff today)
January 17th 2013, 12:53 AM
Re: Is W a subspace of V?
what do you think? i'm not trying to be mean....i'm asking you to convince yourself that what you say is true. the truth of math doesn't depend on "who the expert is", it should be self-evident.
if you are unsure about something, ask about that. understanding the ideas is the important part....getting the answers correct is only a useful by-product. | {"url":"http://mathhelpforum.com/advanced-algebra/211459-w-subspace-v-print.html","timestamp":"2014-04-21T07:46:15Z","content_type":null,"content_length":"5734","record_id":"<urn:uuid:3cda4e9e-e3e7-4e4b-831c-aea438e1f9ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
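For the record, one way to write up 2): take $p(x) = x \in W$ and the real scalar $c = \frac{1}{2}$. Then $(cp)(x) = \frac{1}{2}x$ has a non-integer coefficient, so $cp \notin W$. So although $W$ contains the zero polynomial and is closed under addition, it is not closed under multiplication by real scalars, and hence $W$ is not a subspace of $V$.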
The Wasteland of Random Supergravities
Seminar Room 1, Newton Institute
We show that in a general N=1 supergravity with N >> 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kahler
potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the
eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Using Coulomb
gas techniques, we then determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P \propto
exp(-c N^2), with c a constant, for generic critical points. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
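A toy numerical sketch of the qualitative claim, that the sum of a Wigner matrix and Wishart-type pieces typically has many negative eigenvalues, runs in a few lines. The weights and shifts below are illustrative choices, not the paper's precise Hessian model:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200  # number of scalar fields (illustrative)

    A = rng.normal(size=(N, N))
    W = (A + A.T) / np.sqrt(2 * N)               # Wigner matrix
    B1 = rng.normal(size=(N, N))
    B2 = rng.normal(size=(N, N))
    H = W + (B1 @ B1.T) / N - (B2 @ B2.T) / N    # Wigner plus two Wishart pieces

    evals = np.linalg.eigvalsh(H)
    print("fraction of negative eigenvalues:", np.mean(evals < 0))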
Mechanics - a simplified overview.
Mechanics is the study of motion; it is a major part of physics and mathematics. Newton was a major contributor to the field, and Einstein of course... However, these are not the only two!
Newtonian [or Classical] Mechanics:
1. v = u + at
2. x = ut + (1/2)at^2
3. v^2 = u^2 + 2ax
Where a is acceleration, u is initial velocity, v is final velocity, t is time, and x is displacement [s is sometimes used instead].
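A quick consistency check of the three equations (a throwaway Python sketch):

    def suvat(u, a, t):
        # Constant-acceleration equations of motion.
        v = u + a * t                                  # equation 1
        x = u * t + 0.5 * a * t**2                     # equation 2
        assert abs(v**2 - (u**2 + 2 * a * x)) < 1e-9   # equation 3 holds
        return v, x

    print(suvat(u=0.0, a=9.8, t=2.0))  # free fall from rest for 2 s -> (19.6, 19.6)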
Newton's laws of motion:
Newton's first law of motion:
A body at rest or moving with constant velocity will remain at rest or at constant velocity until a resultant force acts upon it.
Newton's second law of motion:
F = ma: Force = Mass * Acceleration.
Or F = dp/dt, the change in momentum over the change in time. Also, Force * Time is known as IMPULSE. At a basic level acceleration is caused by a force; at a higher level changes in mass are also involved [we can usually assume it is constant; read below for further details].
Newton's third law of motion:
Basic: every action has an equal and opposite reaction,
but more accurately:
If body A exerts a force on body B, then body B exerts a force on body A equal in magnitude but opposite in direction.
Third-law pairs are forces which are equal but opposite, and must also be the same type of force.
Relativistic Mechanics:
Newtonian or Classical Mechanics only applies while mass is effectively constant. At velocities greater than about half that of light, or at very high energies, the mass-energy relation E = mc^2, explained by Einstein in his paper "Does the inertia of a body depend upon its energy content?", becomes important and Newtonian mechanics fails. At these extremes the (relativistic) mass is sufficiently increased (m = E/c^2, as c, the velocity of light, is always constant). Once the mass changes, so does the acceleration (F = ma). However, this does not affect the results of 'normal', low-velocity motion, and as relativistic mechanics is more complex than classical, the Newtonian equations are applied.
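The mass increase is quantified by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2), with relativistic mass m = gamma * m0. A standard result, sketched numerically here:

    import math

    def lorentz_gamma(v, c=299_792_458.0):
        # m = gamma * m0; E = m * c**2
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    c = 299_792_458.0
    for frac in (0.1, 0.5, 0.9, 0.99):
        print(f"v = {frac}c -> m/m0 = {lorentz_gamma(frac * c):.3f}")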
system_meltdownon November 11 2005 - 16:55:24
Wow dude you write some class articles.
Anarcho-Hippieon November 11 2005 - 20:52:40
An extra hour of physics never hurt anybody didn't it
liquidiceon November 11 2005 - 22:30:14
Well done wolf, this is gonna come in alot of handy for anybody thats into public-privet key encryption
wolfmankurdon November 11 2005 - 22:43:46
wrong article liquidice, lmfao! you just listen to what I said it might be usefull for then pretended to read 'em!
@nnyon November 13 2005 - 14:15:18
Nice article, very informative
wolfmankurdon November 23 2005 - 18:49:32
Ah thanks but it doesn't work on linux
Mtutnidon March 26 2011 - 18:17:16
Quantum mechanics? I think you have mixed this up with something. Quantum mechanics is about the mechanics of small things like protons electrons neutrons quarks and their entanglement and
wave-particle duality and not about high speeds of objects...
Mathblogging.org: Recent Posts (April 19-20, 2014)

- Problemi per Pasqua 2014 (Il Post, Maurizio Codogno): this year's Easter puzzles come from Martin Erickson's "Aha! Solutions". [translated from Italian]
- Aria di Festa, Pasqua 2014 (Scientificando, annarita ruberto): Easter greetings marking the seventh year of the author's blogs, built around a festive passage from Giovanni Verga's "I Malavoglia". [translated from Italian]
- Necessarily an isometry (Mathematical Problems, Mihalis Kolountzakis): show that a continuous map f from a compact metric space to itself satisfying d(f(x), f(y)) >= d(x, y) for all x, y is in fact an isometry, d(f(x), f(y)) = d(x, y). [translated from Greek]
- On This Day in Math, April 20 (Pat's Blog, Pat Ballew): the 110th day of the year; the sum of the first 110 primes, 2 + 3 + 5 + 7 + ... + 599 + 601 = 29897 = 7 x 4271, has only two prime factors; also, the 1543 publication of Copernicus's De Revolutionibus.
- Conceptualizing Drills (Musing Mathematically, Nat Banting): students in an enriched class demand repeated drill; the author defends a place for basic skills training while compromising on its form.
- Den of thieves [book review] (Xi'an's Og): Amazon's recommendation algorithms, primed by recent reading, keep suggesting thief-themed novels.
- Plotter: graph mathematical functions online (Angelo Stella): MAFA Plotter draws graphs of mathematical functions and tabulates their values directly online with no installation; simple to use, very flexible, and able to plot parametric families of curves. [translated from Italian]
- [Kinosaki onsen trip 04] People in yukata (Leun Kim's Blog): wandering the town before the outdoor-bath circuit, with far more visitors in yukata than in street clothes; the robes show which ryokan each guest is staying at. [translated from Korean]
- Structures in solution spaces (0xDE, David Eppstein): video of an hour-long talk at the Conference on Meaningfulness and Learning Spaces, covering learning spaces, distributive lattices and Birkhoff's representation theorem, rectangular cartograms, antimatroids, the 1/3-2/3 conjecture, partial cubes, and flip distance in binary trees and point sets.
- Are Fractions Useless, or Are Americans Just Stupid? (Math Jokes 4 Mathy Folks): a rant against justifying fractions solely by their applications to cooking.
- Monotonicity of EM Algorithm Proof (Lindon's Log): starting from f_o(Y_o|theta) = f_{o,m}(Y_o, Y_m|theta) / f_{m|o}(Y_m|Y_o, theta), so that log L_o(theta) = log L_{o,m}(theta) - log f_{m|o}(Y_m|Y_o, theta), and taking expectations under the conditional distribution of the missing data, each EM step is shown never to decrease the observed-data likelihood.
- Radical Philosophy and the Free Alabama Movement (New APPS, Lisa Guenther): on prisoner-led human rights movements, following last summer's 60-day hunger strike in the California Department of Corrections.
- Patterning With Base Ten Blocks And 11's (Crewton Ramone's House of Math): a spot-the-pattern lesson that starts with base ten blocks and quickly moves to symbols, explaining why the multiply-by-11 "tricks" work.
- Simulate the transit of extrasolar planets (Doc Madhattan, Gianluigi Filippelli): from the 1991 pulsar-timing planets around PSR 1257+12, to the 1995 Jupiter-like planet around the Sun-like star 51 Pegasi, to the Kepler mission's transit photometry, plus classroom exercises using real exoplanet data.
- Sierpinski transformation (Visualizing Math, via cherry-merchant).
- Saturday Morning Videos: ICLR Videos and Papers (Nuit Blanche, Igor): ICLR 2014 talk videos are being released, including "Revisiting Natural Gradient for Deep Networks" (Pascanu and Bengio) and "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks" (Saxe, McClelland, and Ganguli).
- Paper-folding an ellipse (Mindfuck Math, via mathani): cut a paper circle and fold it repeatedly so the circumference lands on a fixed interior point; the creases envelope an ellipse.
- How I Slayed The Mathematical Beast-Monster, by Karla Gabruch (MatthewMaddux Education): a reflection drawn from a semester-long Duo-Tang of weekly math homework and reading responses by prospective elementary school mathematics teachers.
- First Earth-sized exoplanet found in habitable zone (New APPS, Eric Winsberg): the planet sits the right distance from its star, about 500 light years away, to host liquid water, which raises the Fermi paradox.
- Proof of Divisibility By 6 (Proofs from The Book, Guillermo Bautista): a number is divisible by 6 if (1) it is even and (2) it is divisible by 3; the explanation is quite simple.
- Escher Scrabble (Impossible World Blog, Vlad Alexeev): Escher-themed Scrabble art by Andy Wells.
- Weekend reads (Retraction Watch, Ivan Oransky): how to rescue US biomedical research (per a PNAS piece by four heavy hitters), what "censorship" really means, the worst paper of the year, and more.
- Index or indicator variables (Statistical Modeling, Causal Inference, and Social Science, Andrew Gelman): a reader exploring hierarchical models and Stan asks about coding group effects.
- Timely April math (Yummy Math): Easter, Passover, the Boston Marathon, Earth Day, and Patriots' Day.
- 4/19/14 (Pure Numbers Daily Blog): 4 - 1 = sqrt(9) * 1^4; also 4 - 1 = sqrt(9) = |1 - 4|; 4! = 19 + 1 + 4; and 41914 is a palindrome.
- RIPasso di MATEMATICA (Angelo Stella): a site for anyone who struggles with mathematics; math should help us live better, not complicate life, and the site aims to show where misconceptions crept in and how to move past them. [translated from Italian]
- beer factory (Xi'an's Og): photos from the Stella Artois brewery in Leuven, Belgium (around MCQMC2014).
- Minimal Criminale (Angelo Stella): on the four color theorem ("given a planar map divided into connected regions, four colors suffice so that adjacent regions get different colors"), known in mathematical folklore as the first statement proved by a computer, a label that hardly does it justice. [translated from Italian]
- Aunt Pythia's advice (mathbabe, Cathy O'Neil): another week of the nerd advice column.
- Activity: Airplane in front of the moon (MathsClass): back in 2011, Nordin Zuber posted this found image on MathsLinks.
Limits to Forecasting Precision for Outbreaks of Directly Transmitted Diseases
Early warning systems for outbreaks of infectious diseases are an important application of the ecological theory of epidemics. A key variable predicted by early warning systems is the final outbreak
size. However, for directly transmitted diseases, the stochastic contact process by which outbreaks develop entails fundamental limits to the precision with which the final size can be predicted.
Methods and Findings
I studied how the expected final outbreak size and the coefficient of variation in the final size of outbreaks scale with control effectiveness and the rate of infectious contacts in the simple
stochastic epidemic. As examples, I parameterized this model with data on observed ranges for the basic reproductive ratio (R[0]) of nine directly transmitted diseases. I also present results from a
new model, the simple stochastic epidemic with delayed-onset intervention, in which an initially supercritical outbreak (R[0] > 1) is brought under control after a delay.
The coefficient of variation of final outbreak size in the subcritical case (R[0] < 1) will be greater than one for any outbreak in which the removal rate is less than approximately 2.41 times the
rate of infectious contacts, implying that for many transmissible diseases precise forecasts of the final outbreak size will be unattainable. In the delayed-onset model, the coefficient of variation
(CV) was generally large (CV > 1) and increased with the delay between the start of the epidemic and intervention, and with the average outbreak size. These results suggest that early warning systems
for infectious diseases should not focus exclusively on predicting outbreak size but should consider other characteristics of outbreaks such as the timing of disease emergence.
Citation: Drake JM (2006) Limits to Forecasting Precision for Outbreaks of Directly Transmitted Diseases. PLoS Med 3(1): e3. doi:10.1371/journal.pmed.0030003
Academic Editor: Martin Kulldorff, Harvard Medical School, United States of America
Received: May 14, 2005; Accepted: September 27, 2005; Published: November 22, 2005
Copyright: © 2006 John M. Drake. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Competing interests: The author has declared that no competing interests exist.
Abbreviations: CV, coefficient of variation; EWS, early warning system; SARS, severe acute respiratory syndrome
The epidemiological responsibility to forecast disease outbreaks is an onerous one. Because of the devastating consequences and high costs of disease, predicting outbreaks is a chief goal for
public-health planning and emergency preparedness. Thus, quantitative forecasting and development of early warning systems (EWSs) for disease outbreak is a high priority for research and development
[1]. According to the World Health Organization, the primary goals of EWSs are to predict the timing of the outbreak and the magnitude of the outbreak [1]. Intuition suggests that for directly
transmissible diseases, the magnitude of the outbreak will be extremely difficult to predict because of the stochastic process of infectious contacts [2,3]. This idea is consistent with the recent
finding of Sultan et al. [4] that although the timing of annual meningitis outbreaks in West Africa was highly predictable, the final outbreak size varied greatly from year to year. Here I study
fundamental limits to forecast precision for (eventually) controlled outbreaks, first theoretically, then using nine well-studied infectious diseases as examples. Finally, I consider a new model that
more realistically represents actual outbreaks of emerging infections.
The reason that final outbreak size is generally not predictable is that the eventual dynamics of the outbreak are highly sensitive to the seemingly random sequence of infectious contacts and removal
of infectious individuals in the early, typically unobserved stages of the outbreak [3]. Clearly, the final size of an outbreak depends on numerous aspects of the social structure of the population,
the environment, and disease- or strain-specific characteristics. Among the more important factors are seasonal climate fluctuations, transmissibility and virulence of the pathogen, population
dynamics and structure of the host population, physiological and immunological status of potential hosts, and the social networks of contacts between infectious and susceptible individuals [5–8].
Accordingly, the deterministic approach to epidemic modeling regards the spread of infectious diseases as completely determined by the average effects of these factors on the basic reproductive ratio
(R[0]) together with initial conditions. Deterministic models of epidemics have provided insight into such important topics as the design of vaccination campaigns and the effect of age structure on
epidemic dynamics [5]. From the perspective of EWSs, the timing and average severity of outbreaks also might be modeled quite accurately with deterministic models. However, for emerging diseases or
for diseases prone to sudden outbreak, numerical predictions of final outbreak size derived from deterministic models will often deviate substantially from the observed outbreak size [3,9].
In contrast, the stochastic theory of epidemics represents the population as a statistical ensemble with constant or regular average properties but probabilistic changes in disease status for
individuals. As a result, properties of the ensemble, such as the final epidemic size, are probabilistic as well [10–14]. Thus, stochastic models quantify the likelihood of outbreaks that deviate
from the expected final size [9,15]. Such information about the variation in final outbreak size—its predictability—is crucial if disease forecasting is to be relied upon for planning interventions.
The stochastic theory of epidemics can therefore be used to understand the theoretical limits to forecasting precision for disease outbreaks, including EWSs or forecasts based on the developing
epidemic curve as case reports accumulate. I studied how precision in the forecasted final outbreak size for transmissible diseases depends on two dynamical features of outbreaks: the contact rate
(β) and the rate of removal (γ) in the simple stochastic epidemic. Next, I developed models of forecast precision for nine outbreak-prone diseases (chicken pox, diphtheria, measles, mumps,
poliomyelitis, rubella, scarlet fever, smallpox, and whooping cough) and used removal rate as a control parameter to relate intervention effectiveness to final outbreak size and forecast precision.
Finally, I developed a new model to understand how delays in implementing interventions affect final outbreak size and forecast prevision.
The simplest realistic model for outbreaks with a small number of initially infectious individuals is the simple stochastic epidemic with contact rate β and removal rate γ, which do not change
appreciably over the time scale of the outbreak [15,16]. This model is a good approximation if the outbreak meets the following criteria, which are reasonable for modern outbreaks that are rapidly
controlled. First, we assume that infectious contacts and removal of infectious individuals are approximately independent in time so that the outbreak is Markovian (compare [17–19]). Second, the rate
at which infectious individuals are removed from the population exceeds the rate at which infectious contacts occur (β < γ). Finally, the population is sufficiently large that the number of
individuals ultimately infected is not more than a negligible fraction of the susceptible population (i.e., per capita transmission rates are approximately independent of the density of infected
individuals). Then, the outbreak is a homogeneous birth–death process (the simple stochastic epidemic) with mean (M) and variance (V) of the final outbreak size given by [10]:

M = γ/(γ − β) (1)

V = βγ(β + γ)/(γ − β)^3 (2)
Properties of the final size distribution for other classes of epidemics can be found in [11–14,17,20].
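A short script confirms these formulas and the threshold quoted in the Results (the variable names are mine; the CV here is √V/M):

    import numpy as np

    def final_size_stats(beta, gamma):
        # Mean and variance of final outbreak size for the subcritical
        # simple stochastic epidemic with one initial case (Equations 1-2).
        M = gamma / (gamma - beta)
        V = beta * gamma * (beta + gamma) / (gamma - beta) ** 3
        return M, V

    beta = 1.0
    for ratio in (1.5, 1 + np.sqrt(2), 5.0):
        M, V = final_size_stats(beta, ratio * beta)
        print(f"gamma/beta = {ratio:.3f}: mean = {M:.3f}, CV = {np.sqrt(V)/M:.3f}")
    # CV crosses 1 exactly at gamma/beta = 1 + sqrt(2) ~ 2.414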
The solution given by equations 1 and 2 is for an outbreak in which either (1) epidemiological parameters are naturally such that always R[0] < 1, or (2) public health policy is applied consistently
so that intervention is constant and under policy conditions R[0] < 1. For many emerging diseases this is not the case. Rather, initially R[0] > 1, but through intervention that was established some
measurable time after the outbreak started, the reproductive ratio is reduced below the epidemic threshold (e.g., severe acute respiratory syndrome [SARS]). This case is considerably more complicated
and, to my knowledge, no simple formulas have been obtained for the mean and variance of the final outbreak size. However, it is reasonably straightforward to solve the equations computationally, and
a range of conditions can be studied. Below, I consider a case that is more applicable to forecasting emerging diseases, the simple stochastic epidemic with delayed-onset intervention in which there
is a constant rate of infectious contacts (β) and a removal rate (γ) that depends on the time since the outbreak began. Specifically, at the start of the outbreak the removal rate is some value less
than the rate of infectious contacts and remains constant until some intervention is applied a time t* − t[0] later, after which the removal rate is some constant value greater than β, i.e., γ(t) = γ[1]·I(t ≤ t*) + γ[2]·I(t > t*), where I is an indicator function equal to one if its argument is true and zero otherwise. Then, we can study the size of the outbreak as a function of the control
parameter t*, the time at which intervention is initiated.
A measure of precision should quantify the relative magnitude of deviations from an expected value. The coefficient of variation is a measure of forecast precision that can be interpreted as relative
dispersion independent of the magnitude of the data [21]. I used the theoretical CV for final outbreak size obtained from equations 1 and 2, which depends only on the ratio γ/β = R[0]^−1 and not on
the individual parameter values, to study how forecast precision depends on outbreak characteristics and to estimate forecast precision for nine infectious diseases under different levels of control,
represented by increasing γ (see Figure S1). This measure assumes β and γ are known exactly. For individual outbreaks, in which β and γ are not precisely known and the model is only an approximation
to the structure of the contact process, violations of modeling assumptions such as the Markov assumption and the lack of an explicit incubation period further erode forecast reliability. Thus this
measure represents a theoretical upper bound on forecast precision that will not be attainable in practice.
Although every outbreak will be different as a result of evolution of the etiological agent, changes in social behavior, timing, and the ecological and geographical context in which the outbreak
starts, many epidemic parameters (most famously R[0]), are reasonably conserved across outbreaks of the same disease. Here, I treat the removal rate γ as a control parameter because it is crucially
related to interventions, and estimate β, which is assumed to depend on uncontrollable aspects of the outbreak. The variable β, which is the individual rate of infectious contacts [22], is related to
the transmission rate (β[0]) by the equation β = β[0]N, where N is total population size or density in the standard theory (e.g., [5]). This quantity is related to the basic reproductive ratio R[0]
by the equation:

R[0] = β/γ (3)
Where removal results from recovery of the diseased individual, we can estimate γ from the duration of the incubation (τ[1]), latent (τ[2]), and infectious (τ[3]) periods with the equation γ = (τ[1]
+τ[2]+τ[3])^−1 . Estimates of R[0] have been obtained for numerous directly transmitted diseases [5]. Assuming these estimates are based on the natural course of the disease (i.e., without direct
intervention), we can rearrange this equation and substitute for γ to obtain an estimate of β:

β = R[0]/(τ[1] + τ[2] + τ[3]) (4)
Given that reported values for these variables vary somewhat, we put an upper bound on β by choosing the highest reported value of R[0] and the lowest reported values for the different τs, whereas a
lower bound is obtained from the lowest reported value of R[0] and the highest reported values for the different τs. As a central estimate, I used the center of the reported interval for each
variable. Estimates of the ranges of these quantities for several directly transmitted diseases were compiled by Anderson and May ([5], Tables 3.1 and 4.1). Using these values, I used equation 4 to
estimate plausible ranges of β for nine directly transmitted diseases (Table 1).
Table 1. Estimates of the Range of β for Nine Directly Transmitted Diseases
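Equation 4 and the bounding procedure translate directly into code. The inputs below are hypothetical, measles-like numbers chosen for illustration, not the values behind Table 1:

    def beta_range(R0_lo, R0_hi, tau_lo, tau_hi):
        # tau = tau1 + tau2 + tau3 (days); Equation 4: beta = R0 / tau.
        # The lower bound pairs the smallest R0 with the longest periods,
        # the upper bound the largest R0 with the shortest periods.
        return R0_lo / tau_hi, R0_hi / tau_lo

    lo, hi = beta_range(R0_lo=12, R0_hi=18, tau_lo=10, tau_hi=14)
    print(f"beta between {lo:.2f} and {hi:.2f} per day")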
I also considered the delayed-onset intervention model wherein initially β > γ (the supercritical case in which epidemic occurs with high probability), but after a time t*−t[0] intervention increases
the removal rate γ so β < γ (the subcritical case in which the outbreak is brought under control). This model is a more realistic representation of many emerging outbreaks (e.g., SARS, Foot-and-Mouth
disease, and Marburg virus). The solution to the simple stochastic epidemic with delayed-onset intervention can be obtained using generating functions for the probability distribution of the size of
the outbreak [10]. The variance of the final outbreak size is in terms of a multiple integral, which was evaluated numerically (see Text S1). As an example, I studied two situations with contrasting
initial values for R[0]. First, I studied the situation with β = 0.5 and γ[1] = 0.25 (R[0] = 2). Second, I studied the situation with β = 0.5 and γ[1] = 0.45 (R[0] ≈ 1.1). In both cases, γ[2] (the
removal rate after intervention) was one, so that post-intervention reproductive ratio was 0.5.
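The delayed-onset model is also easy to simulate directly, which gives an independent check on the numerically evaluated integrals (a sketch; the paper itself works with generating functions rather than simulation). Because exponential waiting times are memoryless, the change of removal rate at t* is handled exactly by restarting the clock there:

    import numpy as np

    rng = np.random.default_rng(1)

    def outbreak_size(beta, g1, g2, t_star):
        # Simple stochastic epidemic with delayed-onset intervention:
        # removal rate g1 before t_star, g2 afterwards; one initial case.
        t, infectious, total = 0.0, 1, 1
        while infectious > 0:
            gamma = g1 if t < t_star else g2
            dt = rng.exponential(1.0 / (infectious * (beta + gamma)))
            if t < t_star and t + dt > t_star:
                t = t_star  # rates change here; memorylessness lets us resample
                continue
            t += dt
            if rng.random() < beta / (beta + gamma):
                infectious += 1
                total += 1
            else:
                infectious -= 1
        return total

    sizes = [outbreak_size(0.5, 0.25, 1.0, t_star=10.0) for _ in range(2000)]
    print(np.mean(sizes), np.std(sizes) / np.mean(sizes))  # mean and CV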
The ratio γ/β, the rate of removal compared with the rate of infection, represents the relative effectiveness of interventions. In the simple stochastic epidemic, the relative effectiveness of
intervention is always greater than one because we assume that the outbreak is eventually controlled, i.e., the assumption β < γ above. Figure 1 confirms the intuition that final outbreak size
declines as the relative effectiveness of intervention is increased. The CV in the final outbreak size, our measure of the imprecision with which the final outbreak size is forecasted, also declines
with control effectiveness. As a benchmark, a forecast might be deemed reliable (in principle) where the CV is less than one, which occurs for γ/β > 1 + √2 ≈ 2.41. Figures 2 and 3 show plots of the final outbreak size
and the CV over the interval of estimated βs for each of nine directly transmitted diseases. It is important to underscore that the intervals in Figures 2 and 3 represent uncertainty about the value
of the parameter β, not variation from stochastic fluctuations. Further understanding of these diseases might allow us to reduce this source of uncertainty by obtaining more precise estimates. In
contrast, the CV in Figure 3 represents the range of final outbreak sizes that can result from the stochastic infection process for a fixed set of parameters. In principle, no amount of detailed
information about transmission or other ensemble epidemic parameters can reduce this uncertainty.
Figure 1. Expected Final Outbreak Size and CV in the Final Outbreak Size as a Function of Intervention Effectiveness
The expected final outbreak size (solid line) and CV in the final outbreak size (dashed line) are shown as a function of intervention effectiveness (the ratio of the removal rate and contact rate γ/
β) for the simple stochastic epidemic. The light horizontal line designates the benchmark where CV = 1.
Figure 2. Expected Final Outbreak Size for Nine Directly Transmitted Diseases as a Function of the Removal Rate
The expected final outbreak size (y-axis) for nine directly transmitted diseases is represented as a function of the removal rate (x-axis). Estimates are bounded by minimum and maximum estimates
(dashed lines) of the contact rate β based on published estimates of R[0].
Figure 3. CV in Final Outbreak Size as a Measure of Forecast Precision for Outbreaks of Nine Directly Transmitted Diseases as a Function of Removal Rate
The CV in final outbreak size (y-axis) is a measure of forecast precision, shown here for outbreaks of nine directly transmitted diseases as a function of removal rate (x-axis). Estimates are bounded
by minimum and maximum estimates (dashed lines) of the infectious contact rate β based on estimates of R[0]. The horizontal line indicates CV = 1.
Numerical analysis of the delayed-onset intervention model showed that (1) the average outbreak size increased with the delay between the start of the outbreak and the start of intervention (Figure 4
A), and (2) the CV (in our examples) was everywhere greater than one and increased with the time delay between the start of the outbreak and intervention, but at a declining rate (Figure 4B). The
first result is straightforward: The delay between initial infection and intervention increases the total number of secondary (tertiary, etc.) infections that are increasing as a multiplicative
process. The explanation of the second result is that the CV in outbreak size scales as the square root of the variance in outbreak size and as the inverse of the average outbreak size. As the
average outbreak gets larger the CV increases but at a declining rate (Figure 4C). This effect is mediated by the reproductive ratio of the outbreak, so that the outbreak with the lower R[0] had a
lower average outbreak size (Figure 4A), but larger CV (Figure 4B and 4C). Thus, in the sense that the CV measures the predictability of the outbreak, we found that subcritical and controlled
outbreaks (R[0] < 1 and R[0] close to 1, respectively) were less predictable (have higher CV) than supercritical (R[0] >> 1) outbreaks of comparable size.
Figure 4. Effect of Time Delay until Intervention on Outbreak Size
Effect of time delay until intervention on outbreak size is contrasted for outbreaks with R[0] = 2 (solid lines) and R[0] ≈ 1.1 (dashed lines).
(A) Average outbreak size (y-axis) increases with the number of days until intervention (x-axis).
(B) CV in outbreak size (y-axis) increases with the number of days until intervention (x-axis).
(C) CV in outbreak size (y-axis) increases at a declining rate (i.e., levels off) as the average outbreak size increases (x-axis). Note that the CV in final outbreak size increases faster in the
outbreak with lower R[0].
Using theoretical models, I found that unless controls are extremely effective, limits to forecast precision result in highly uncertain estimates of final outbreak size. Specifically, for the simple
stochastic epidemic (subcritical case), unless the removal rate is greater than approximately 2.41 times the effective contact rate, the CV of final outbreak size will be greater than one.
Imprecision in the delayed-onset intervention model was typically even greater.
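One way to see how a threshold like 2.41 can arise (a back-of-envelope argument offered here as an illustration, not the paper's own derivation, and assuming a geometrically distributed number of secondary cases with mean m = β/γ < 1): the total progeny T of the resulting branching process satisfies E[T] = 1/(1 − m) and Var[T] = m(1 + m)/(1 − m)³, so CV² = m(1 + m)/(1 − m). Setting CV = 1 gives m² + 2m − 1 = 0, i.e., m = √2 − 1, or γ/β = 1 + √2 ≈ 2.414.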
Reliable forecasts of outbreaks based on initial cases and/or EWSs could potentially save many lives by increasing preparedness for outbreaks when and where they are most likely or most severe.
According to the World Health Organization, forecasts will be most useful when they accurately predict the final size of the outbreak [1]. However, the findings reported here suggest that precise
predictions may be unattainable because of high variance in the final outbreak size of directly transmissible diseases, even under the (unreasonable) assumption of perfect information about
macroscopic epidemic parameters.
This result does not apply to diseases that are not directly transmitted (e.g., vector-borne illnesses) or to diseases in which parameters change as the outbreak progresses (e.g., SARS [23]).
Parameters might change for at least two reasons. First, for emerging infections, about which little is known at the start of the outbreak, increasing ability to diagnose and treat infected patients
and the dissemination of information to the public will result in increasing the removal rate. Thus, for example, in the 2003 SARS outbreak, the average lag between onset of symptoms and hospital
isolation was initially around 6 d but declined to around 2 d by the fourth wk of the outbreak [23,24]. Second, in outbreaks that ultimately infect a large portion of the population, the rate of
infectious contacts will decline as the number of cases increases, diluting the susceptible population. These examples represent important violations of modeling assumptions adopted here and are
represented by the inhomogeneous [10,22] and general [15,16] stochastic epidemics respectively. Forecasting precision for these situations is an important topic for research.
Generally, these violations of the simple stochastic epidemic must be considered on a case-by-case basis. We studied one realistic example (the simple stochastic epidemic with delayed-onset
intervention) in which an initially supercritical outbreak (R[0] > 1) is controlled by public health measures that increase the rate at which infectious individuals are removed from the population to
a level ensuring the outbreak will eventually die out. This is a reasonably realistic model for dynamics of emerging infections with a short incubation period. For two representative examples, we
found that the average outbreak size scaled approximately exponentially with the delay between the start of the outbreak and the implementation of intervention (note the log scale of the y-axis in
Figure 4A), underscoring the importance of rapid intervention. Intuitively, when R[0] was high the average outbreak size increased faster than when R[0] was low. We also found that the CV in the
final outbreak size increased with the lag between initial infection and control, but was smaller in the case with high R[0] than in the case with low R[0]. Indeed, for the delayed-onset case with
relatively high R[0] (R[0] = 2) the CV seemed to level off at a delay of around 15–20 d, although this was not shown in the case with lower R[0] (Figure 4B and 4C), probably because a longer delay
would be required to reach such an asymptote.
In conclusion, the fundamental limit to forecasting precision obtained here represents only variation that results from the stochastic contact process and not from uncertainty about the underlying
model or parameter values (compare [3]). These sources of uncertainty will further diminish precision. Further, these results underscore that rapidly implementing control measures has value not only
for decreasing the final size of the outbreak, which is the primary goal, but also for decreasing variation in the final size of the outbreak, which is information that can be used to tailor control
measures and reduce potential losses. Although these limits to forecast precision should lead to interpreting predictions cautiously—whether derived from statistical analysis, epidemic modeling,
computer simulation, or expert opinion—they should not hinder the development of greater and more reliable systems for forecasting outbreaks of infectious disease because there are many features of
outbreaks that might be reliably predicted.
Supporting Information
Figure S1. CV of Final Outbreak Size as a Function of R[0]
I was unable to obtain a simple relation for the coefficient of variation (CV) in the outbreak size of the subcritical simple stochastic epidemic in terms of the basic reproductive ratio R[0].
Numerical results confirm that the CV of final outbreak size depends only on the ratio of β and γ (i.e., on R[0]). This plot represents the information in Figure 1 as a function of R[0]. The value R[0]* is the value at which the CV equals exactly one. In this sense, outbreaks with R[0] ≤ R[0]* are predictable while outbreaks with R[0] > R[0]* are unpredictable.
(18 KB PDF).
Text S1. Numerical Methods to Obtain Variance in Outbreak Size in the Delayed-Onset Intervention Model
(102 KB PDF).
The research was conducted while the author was a Postdoctoral Associate at the National Center for Ecological Analysis and Synthesis, a Center funded by the National Science Foundation (Grant #
DEB-0072909), the University of California, and the Santa Barbara campus. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
1. World Health Organization (2004) Using climate to predict infectious disease outbreaks: A review. Geneva: World Health Organization. Available: http://www.who.int/globalchange/publications/oeh0401/en/. Accessed 5 October 2005.
2. Drake J (2005) Fundamental limits to the precision of early warning systems for epidemics of infectious diseases. PLoS Med 2: e144. doi: 10.1371/journal.pmed.0020144.
3. Meyers LA, Pourbohloul B, Newman MEJ, Skowronski DM, Brunham RC (2005) Network theory and SARS: Predicting outbreak diversity. J Theor Biol 232: 71–81.
4. Sultan B, Labadi K, Guegan JF, Janicot S (2005) Climate drives the meningitis epidemics onset in West Africa. PLoS Med 2: 43–49.
5. Anderson R, May R (1991) Infectious diseases of humans: Dynamics and control. Oxford (United Kingdom): Oxford University Press. 757 p.
6. Dowell SF (2001) Seasonal variation in host susceptibility and cycles of certain infectious diseases. Emerg Infect Dis 7: 369–374.
8. Dowell SF, Whitney CG, Wright C, Rose CE, Schuchat A (2003) Seasonal patterns of invasive pneumococcal disease. Emerg Infect Dis 9: 573–579.
13. Ball F, Nasell I (1994) The shape of the size distribution of an epidemic in a finite population. Math Biosci 123: 167–181.
14. Ball F, O'Neill P (1999) The distribution of general final state random variables for stochastic epidemic models. J Appl Probab 36: 473–491.
17. Anderson D, Watson R (1980) On the spread of a disease with gamma-distributed latent and infectious periods. Biometrika 67: 191–198.
18. Lloyd AL (2001) Realistic distributions of infectious periods in epidemic models: Changing patterns of persistence and dynamics. Theor Popul Biol 60: 59–71.
19. Lloyd AL (2001) Destabilization of epidemic models with the inclusion of realistic distributions of infectious periods. Proc R Soc Lond B Biol Sci 268: 985–993.
20. Ball F, Clancy D (1993) The final size and severity of a generalized stochastic multitype epidemic model. Adv Appl Probab 25: 721–736.
21. Zar J (1999) Biostatistical analysis, 4th ed. Upper Saddle River (New Jersey): Prentice Hall. 663 p.
22. Allen L (2003) An introduction to stochastic processes with applications to biology. Upper Saddle River (New Jersey): Pearson/Prentice Hall. 385 p.
23. Chowell G, Fenimore PW, Castillo-Garsow MA, Castillo-Chavez C (2003) SARS outbreaks in Ontario, Hong Kong and Singapore: The role of diagnosis and isolation as a control mechanism. J Theor Biol 224: 1–8.
Patient Summary
Early warning systems that are used to look for outbreaks of infectious diseases are important in public-health planning. One of the most important things that such early warning systems try to
predict is the final size of the outbreak. However, for diseases transmitted directly from person to person (rather than via a mosquito, for example), the precision with which the final size can be
predicted is often very low.
Why Was This Study Done?
This researcher wanted to study how predictable the final outbreak size of an epidemic is if the effectiveness of control measures and the average number of infectious contacts are known.
What Did the Researcher Do and Find?
He developed a mathematical model that took into account the variation in the infectiousness of nine well-studied infectious diseases. He found that for any outbreak that increases slowly, precise
forecasts of the final outbreak size will be impossible. This result was especially true for epidemics in which there was a substantial delay in intervention after infection occurred, and the
precision of the forecast got worse as the delay between the start of the epidemic and intervention increased, and with the average outbreak size.
What Do These Findings Mean?
These results suggest that early warning systems for infectious diseases should not focus just on trying to predict outbreak size, because this estimate may be inaccurate; instead, they should try to predict other characteristics of outbreaks. These results will be of use to people trying to plan for infectious disease outbreaks, but will not affect how patients are managed.
Where Can I Get More Information Online?
Based in the United States, the Centers for Disease Control and Prevention (CDC) has a Web site that gives background on how the CDC investigates disease outbreaks, along with details of individual outbreaks:
The World Health Organization has interesting information on early warning systems:
In the United Kingdom, the Health Protection Agency has a similar function and gives details on investigations of infectious diseases: | {"url":"http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0030003","timestamp":"2014-04-19T04:42:33Z","content_type":null,"content_length":"112974","record_id":"<urn:uuid:5bc6c64b-cc7f-4069-b03a-bbc837440624>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
Christmas Chocolates
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
Many students are accustomed to using number patterns in order to generalise. This problem offers an alternative approach, challenging students to consider multiple ways of looking at the structure of the problem. The powerful insights from these multiple approaches can help us to derive general formulae, and can lead to students' appreciation of the equivalence of different algebraic expressions.
Possible approach
Show this image of a full size $5$ box of chocolates, and ask students to work out how many chocolates there are, without speaking or writing anything down. Compare solutions and share approaches.
Mention that mathematicians like to find efficient methods which can be used not only for simple cases but also when the numbers involved are very large. Explain that the pictures of Penny's, Tom's and Matthew's partially-eaten chocolates could be used by a mathematician as a starting point for finding an efficient method for counting the total number of chocolates in the three pictures.
Hand out these chocolate box templates and ask students to show how the images of the partially-eaten chocolates can be used to calculate the total.
Ask students to report back, explaining the methods which have emerged.
Then ask students to use all three methods, along with any methods they devised for themselves, to work out the number of chocolates in a size $10$ box, and verify that all methods agree.
Challenge students to express each method for finding the number of chocolates in any size of box, perhaps introducing some algebra and the idea of a size $n$ box if appropriate.
Bring the class together to share findings. Compare the different "formulae" which have emerged, and ask students to explain why they are equivalent.
Key questions
How does each image help you to count the total number of chocolates quickly?
Can you demonstrate the equivalence of different algebraic expressions?
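Where students have produced two different-looking formulae, a quick computer algebra check can settle equivalence convincingly. A minimal Python/SymPy sketch (the two expressions below are illustrative stand-ins, not the actual counting methods from the problem):

from sympy import symbols, simplify

n = symbols('n')
method_a = 4 * n * (n - 1) + 1   # e.g. four n-by-(n-1) rectangles plus a centre square
method_b = (2 * n - 1) ** 2      # e.g. one (2n-1)-by-(2n-1) square
print(simplify(method_a - method_b))   # prints 0, so the formulae are equivalent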
Possible extension
The problems Summing Squares and Picture Story lead to formulae for some intriguing sequences through analysis of the structure of the contexts.
Possible support
The problem Seven Squares gives lots of simple contexts where formulae emerge by looking at structure rather than number sequences.
Electromagnetic systems always dissapative?
Well, for simplicity's sake, let's say you have n charged particles confined to some small region; they will exert forces on each other, causing accelerations, which in turn cause energy to be radiated off to infinity. My question is: once all the energy present at the start has been lost, what will the particles do?
Say we begin with n particles with charges of the same sign, initially kept still by other forces, and assume that the fields are given by the retarded solution of the Maxwell equations. Let's assume that the system's rest energy equals the sum of the rest energies of the particles and the energy of the electrostatic field.
The particles will repel each other. After the constraints are removed, the particles will accelerate and produce radiation, which will propagate away in all directions. After a while, the particles will all be far from each other, so the accelerations will be smaller. They will continue to move almost uniformly and will most probably retain some kinetic energy indefinitely. As a result, some energy has been transferred from the EM field to the kinetic energy of the particles.
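To illustrate the reply above, here is a rough Python sketch — my own illustration, not from the thread — of n like charges (unit charge and mass) released from rest and evolved under Coulomb repulsion alone. Radiation reaction is deliberately omitted, so the simulation only shows the late-time behaviour claimed above: the particles fly apart and their kinetic energy settles to a constant as the forces die off.

import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 5, 1e-3, 20_000
pos = rng.normal(scale=0.1, size=(n, 3))   # start confined near the origin
vel = np.zeros((n, 3))                     # initially held still

def accel(pos):
    d = pos[:, None, :] - pos[None, :, :]       # pairwise separation vectors
    r2 = (d ** 2).sum(-1) + np.eye(n)           # pad diagonal to avoid 0/0
    a = d / r2[..., None] ** 1.5                # repulsive 1/r^2, unit charges/masses
    a[np.arange(n), np.arange(n)] = 0.0
    return a.sum(axis=1)

for _ in range(steps):                          # semi-implicit Euler step
    vel += accel(pos) * dt
    pos += vel * dt

print(0.5 * (vel ** 2).sum())   # kinetic energy, ~constant at late times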
Sugar Land Algebra Tutor
Find a Sugar Land Algebra Tutor
...After completing a lot of formal coursework both in Japan and the US, I was hired by John Deere. I worked in the overseas marketing group and was sent to Japan on assignment to sell
construction equipment for 2 years. From there I went on to head up the Industrial Products Group at Nippon Donaldson in Tachikawa for 6 years.
7 Subjects: including algebra 2, algebra 1, statistics, ACT Math
I am a full-time auditor with three years of professional auditing and accounting experience at one of the world's biggest auditing firms. My passion focuses on accounting, auditing, and other subjects that are related to numbers and involve critical thinking. I have two years of tutoring experience in accounting, college algebra, and statistics.
3 Subjects: including algebra 1, algebra 2, accounting
...I’m as excited as they are when my students learn new skills or understand new concepts. I am a very encouraging tutor, and believe that encouragement is essential to learning. I like to add a
bit of fun plus personalization to my lessons, and I find that these grab my students’ attention and make it easier for them to learn.
12 Subjects: including algebra 1, reading, writing, English
...It was there that I really found my niche in both science and math, where AP Physics II and AP Calculus were my favorites. I later went to Howard University, in Washington, D.C., where I
majored in Biology, but I later discovered that I didn't quite satiate my curiosity for science. After college, I went to school part-time, at the University of Houston.
8 Subjects: including algebra 1, chemistry, physics, biology
...I not only have a mastery of the material, but experience in explaining it to students with a variety of learning styles. Algebra I and II are trivial to me. I have taken and tutored a great
many mathematics courses, and algebra II is one of the simpler courses that I can tutor.
37 Subjects: including algebra 1, algebra 2, chemistry, geometry | {"url":"http://www.purplemath.com/sugar_land_algebra_tutors.php","timestamp":"2014-04-17T21:44:55Z","content_type":null,"content_length":"24066","record_id":"<urn:uuid:df5f9bfc-3b1d-4f1e-9919-d0897498b0fb>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Liam Cleary, Ph.D.
Postdoctoral Associate
Department of Chemistry
Massachusetts Institute of Technology
E-mail: licleary@mit.edu linkedin.com/in/licleary
Postal Address
Room 6-222A
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139
Current Research
My current research concerns the transport phenomena of elementary excitations in quantum dissipative systems, in particular, the optimization of excitonic energy transfer processes in photosynthetic
pigment-protein complexes, namely the light harvesting complexes of LH-II B850 and LH-I B875 found in purple bacteria.
Research Interests
My research interests include all of condensed matter physics, in particular transport in quantum and semiclassical dissipative systems. I enjoy being a member of the APS division of condensed
matter physics (DCMP) and the topical group on statistical and nonlinear physics (GSNP).
PhD Thesis
• A semiclassical approach to quantum Brownian motion in Wigner's phase space, L. Cleary, Dept. of Electronic and Electrical Engineering, Trinity College Dublin, October (2010).
Complete List
A list of publications and conferences can be found here
1. Exploring the Photoluminescence Spectral Lineshapes of Single Nanocrystals in Solution Using Photon-correlation Fourier Spectroscopy, 2012 MRS Fall Meeting, Nov 25th-30th, Boston (Nov 28th).
Conference and workshop presentations:
1. Intercomplex energy transfer rates and single molecule spectra of LH2 and LH1 as described by a polaron-transformed multichromophoric quantum master equation, ACS Spring 2012 Chemistry of Life
Meeting, March 25th-29th, 2012, San Diego.
2. Effect of the heat bath on the intercomplex resonance energy transfer rate as described by multichromophoric Foerster theory, Quantum Effects in Biological Systems, August 1st-5th, 2011, Ulm.
3. Quantum Brownian motion in a periodic potential: comparison of various kinetic models, Theoretical, Computational, and Experimental Challenges to Exploring Coherent Quantum Dynamics in Complex
Many-Body Systems, May 9th-12th, 2010, Dublin.
4. Semiclassical treatment of a Brownian ratchet using the quantum Smoluchowski equation, DPG Spring Meeting of the Condensed Matter Section, March 21st-26th, 2010, Regensburg.
5. Derivation of the quantum Smoluchowski equation using Brinkman's method, Tunneling and Scattering in Complex Systems - From Single to Many Particle Physics, International Workshop, Max-Planck
Institut fuer Physik komplexer Systeme, September 14th-18th, 2009, Dresden.
6. Smoluchowski equation approach for the quantum Brownian motion in a tilted periodic potential, ISSEC Irish Mechanics Society Joint Symposium, University College Dublin, May 16th, 2008.
Conference and workshop attendances:
1. IOP Postgraduate Workshop on Spintronics, 13th November, 2009, University of York.
2. DPG Spring Meeting of the Condensed Matter Section, March 22nd-27th, 2009, Dresden.
3. Ireland Mathematica Seminar 2009, 21st October, 2009, Trinity College Dublin.
The traditional measure of cognac is to place the glass on its side and fill to the brim. An equally popular measure is to place the glass vertically and fill to the point of maximum surface area.
For a given glass curvature and stem length, by adjusting the base width we can create the perfect cognac glass, where the two measures are equal.
Source Code: APerfectCognacGlass.nb
One sometimes needs to numerically evaluate the real part of the one-sided Fourier transform of a function f(t). A simple and efficient way to achieve this is to use a fundamental property of the
two-sided Fourier transform of a conjugate-even function (i.e., f(-t) = f(t)* ) and use the efficient fast Fourier transform (FFT) algorithm. This procedure is applied here to the linear absorption
and emission spectra of a single excitation interacting with a thermal environment.
Source Code: OneSidedFourierTrans...AndEmis-source.nb
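As a rough illustration of the symmetry trick just described, here is an independent NumPy sketch (not the linked Mathematica notebook; the e^{-iωt} sign convention and normalisation below are my assumptions):

import numpy as np

def real_one_sided_ft(f, dt):
    # Samples f[k] = f(k*dt), k = 0..n-1, of a function on t >= 0.
    # Extend to the conjugate-even g(-t) = conj(f(t)); its two-sided
    # transform is real and equals 2 * Re of the one-sided transform.
    n = len(f)
    m = 2 * n - 1
    g = np.zeros(m, dtype=complex)
    g[:n] = f                          # t = 0, dt, ..., (n-1)*dt
    g[n:] = np.conj(f[1:])[::-1]       # wrap-around slots hold the t < 0 samples
    G = np.fft.fft(g) * dt             # real up to round-off, by symmetry
    w = 2 * np.pi * np.fft.fftfreq(m, d=dt)
    return w, 0.5 * G.real

# Check against a known case: f(t) = exp(-t) gives Re F(w) = 1/(1 + w^2).
dt, n = 0.01, 4096
t = dt * np.arange(n)
w, reF = real_one_sided_ft(np.exp(-t), dt)
# reF should track 1/(1 + w**2) up to truncation and grid error.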
Image Gallery
QuEBS Ulm 2011: Poster session
Konferenz zur Quantenbiologie
NUS Singapore 2011: Group photograph
Singapore-MIT Alliance
TSICS Dresden 2009: Group photograph
Tunneling and Scattering in Complex Systems | {"url":"http://web.mit.edu/~licleary/www/","timestamp":"2014-04-20T16:08:25Z","content_type":null,"content_length":"12639","record_id":"<urn:uuid:286126cc-19b5-4dcd-8161-d8c1c06744ab>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
MMM #29: Mother of all clock angle problems
Quan and Daniel have posted the answer to MMM #28 at Blinkdagger so now it's time for MMM #29.
I call this one the "mother of all clock angle problems." Some of you may have run into those problems where you have to figure out the angle between the two hands of a clock at weird times. Well, I
took that problem as a starting point and added a twist to it.
Consider a 12-hour analog clock with two hands and a round face. Consider the angle between the two hands at any given time and, when the angle between the hands is not 180 degrees, take the
smaller of the two angles. Thus, at 12:00 the angle between the two hands is 0 degrees. At 3:00 and at 9:00 it's 90 degrees.
If we measure the angle between the two hands at each of the 61 consecutive minutes between 12:00 and 1:00 inclusively, what is the sum of those 61 angles?
Remember to show your work.
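For readers who want to check their hand computations numerically, here is a minimal Python sketch of the single-time angle calculation (the function name and structure are mine; assembling the sum over the 61 minutes is left to you, since your answer must be explained anyway):

def hand_angle(hour, minute):
    # Minute hand moves 6 degrees per minute; hour hand moves 30 degrees
    # per hour plus 0.5 degrees per minute. Return the smaller angle.
    minute_hand = 6.0 * minute
    hour_hand = 30.0 * (hour % 12) + 0.5 * minute
    diff = abs(hour_hand - minute_hand) % 360.0
    return min(diff, 360.0 - diff)

# Sanity checks from the problem statement:
# hand_angle(12, 0) == 0.0, hand_angle(3, 0) == 90.0, hand_angle(9, 0) == 90.0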
Here are the rules for the contest:
1. Email your answers with solutions to mondaymathmadness at gmail dot com.
2. Only one entry per person.
3. Each person may only win one prize per 12 month period. But, do submit your solutions even if you are not eligible.
4. Your answer must be explained. You must show your work! Wild About Math! and Blinkdagger will be the final judges on whether an answer was properly explained or not.
5. The deadline to submit answers is Tuesday, April 7, 12:01AM, Pacific Time. (That’s Tuesday morning, not Tuesday night.) Do a Google search for “time California” to know what the current Pacific
Time is.
6. The winner will be chosen randomly from all timely well-explained and correct submissions, using a random number generator.
7. The winner will be announced Friday, April 10, 2009.
8. The winner (or winners) will receive a Rubik’s Revolution or a $10 gift certificate to Amazon.com or $10 USD via PayPal. For those of you who don’t want a prize I’ll donate $10 to your favorite
9. Comments for this post should only be used to clarify the problem. Please do not discuss ANY potential solutions.
10. I may post names and website/blog links for people submitting timely correct well-explained solutions. I’m more likely to post your name if your solution is unique.
Comments (8) Trackbacks (2)
1. Interesting problem!
A variation may be to measure the angles clockwise using the hour hand as reference, and constraining the angles between -180 and 180 degrees. In this case, the answer seems more interesting, and
can possibly be arrived at through some intuition.
2. How do you sum angles? For example, is 180 degrees + 180 degrees 360 degrees or 0 degrees? Is thrice 180 degrees 540 degrees or 180 degrees?
3. Clueless – nice variation!
Ted — to sum angles you don’t do modular arithmetic. So, 180 degrees + 180 degrees = 360 degrees. Thrice 180 degrees = 540 degrees.
4. In this problem are we assuming the hour hand stays stationary as the time increases from 12 to 1
5. Hi Ishita,
No, the hour hand does not stay stationery. Just like in a real clock, the hour hand moves a little bit every minute.
6. are we supposed to keep the direction of measurement same or reverse it after 180 degrees as the angle between the hour hand is considered to be reflex after 180 degrees?
7. Hi Neha, I don’t quite follow your question.
If the (acute) angle between the two hands is 90 degrees, it could also be measured as 270 degrees (the obtuse angle), right? Always pick the smaller measurement, 90 in this case.
Does that answer your question? | {"url":"http://wildaboutmath.com/2009/03/30/mmm-29-mother-of-all-clock-angle-problems/","timestamp":"2014-04-18T08:08:29Z","content_type":null,"content_length":"42572","record_id":"<urn:uuid:402f5746-4e2d-4b63-913b-3475436d5b01>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
time complexity of toArray
Just found out that it will be O(n). Not sure if that's correct: Time Complexity of HashMap.toArray() (Java in General forum at JavaRanch).
But if it is, I've got another problem. I have a set of Item objects (the assignment said it had to be a set), and I have to provide a method that returns the ith heaviest Item in constant time. I thought the only way to do so is to make sure my items set is a sorted one, so my Item class implements Comparable and its compareTo compares the weights of items. But no set provides a method to retrieve the ith item, so I thought of toArray, but it appears not to run in constant time. Any ideas? Thanks!
Use a SortedSet, it'll already be sorted then you just use mySet.get(i);
Oh, I did not know you could use .get(i) on a set. Just out of curiosity, from what superclass or interface does the support for .get(i) come? I can't find it.
Ah, my mistake. Wasn't paying attention. Then use a sorted set and iterate to the required index
Java Code:
// assumes import java.util.*;
int requiredIndex = 2;
SortedSet<String> set = new TreeSet<String>();
// ... add some elements to the set here ...
int i = 0;
String valueAtIndex = "<not found>";
for (String s : set) {
    System.out.println("Testing '" + s + "' @ " + i);
    if (i == requiredIndex) {
        valueAtIndex = s;
        break;
    }
    System.out.println("Value isn't '" + s + "'");
    i++;
}
Of course you should be efficient with your assignments though, i.e.
Java Code:
int requiredIndex = 2;
SortedSet<String> set = new TreeSet<String>();
int i = 0;
Iterator<String> it = set.iterator();
while (i < requiredIndex && it.hasNext()) {
    it.next();   // skip entries before the required index
    i++;
}
if (it.hasNext()) {
    System.out.println("Value at " + i + " '" + it.next() + "'");
}
Hey thanks buddy, really nice of you. But I do need a constant-time solution.
I just reread my assignment and it said .getItems() should return a set; it does not specifically say the items have to be saved as a set. So I'm going to save them as a List and use Collections.sort() whenever I add an Item. Then getItems() will be something like return new HashSet(itemsList). Might not be the nicest way, but I respect the assignment. Better ideas are still welcome.
How do you get n*n*log(n)? There is one list.add() in the code (constant time) and one Collections.(merge)sort() (n*log(n)), so each time you add something it will still be n*log(n), no?
I could write an insertion sort myself so that the sorting complexity would be just n, since whenever I add something the list will already be fully sorted.
But that does not change the fact that retrieving the ith heaviest item will happen in constant time. The assignment says nothing about the complexity of adding an item.
"Robots must be able to return the ith heaviest item they are carrying in
constant time." (a draft from my assignment)
They are going to run some tests on the program and therefore it is necessary that I implement given methods of a given type in my code.
I first thought that, since they say I should have a method public Set<Item> getItems(), I had to save the items collection as a set.
But I just need a method that returns a set (this is about a totally different method than the one above, but it is the reason why I thought I was obliged to use a set).
Sorry if I was not clear earlier.
That is true. But you first said n*n*log(n) each time you add something: those are two different things, no? And I think this is not really applicable to my project, because you only add one thing at a time (whenever a robot picks up something).
I do agree this is most inefficient if you want to add a bunch of items at once.
the first resource for mathematics
Euler sums and contour integral representations.
(English) Zbl 0920.11061
The authors survey some of the methods that have been used to study Euler sums, and they introduce a powerful new approach. They apply residue calculus to integrals of the form
$$\int_{(\infty)} r(s)\,\xi(s)\,ds,$$
where $\int_{(\infty)}$ is the limit of integrals taken along large circles that expand to $\infty$, $r(s)$ is a rational function that is $O(s^{-2})$ for large $|s|$, and $\xi(s)$ is a kernel function that is $o(s)$ on large circles whose radii tend to $\infty$. By employing kernels that are polynomials in $\psi(s)=\Gamma'(s)/\Gamma(s)$, its derivatives and related trigonometric functions, they deduce a host of known relations on Euler sums and discover many new ones. A modification also gives results on alternating Euler sums.
11M06 $\zeta(s)$ and $L(s,\chi)$
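As a concrete entry point, the classical Euler identity $\sum_{n\ge 1} H_n/n^2 = 2\zeta(3)$ (a standard example, not one taken from the review itself) can be sanity-checked numerically in Python:

# Direct summation; tails decay like log(N)/N, so ~200k terms give ~4 digits.
N = 200_000
H, lhs, zeta3 = 0.0, 0.0, 0.0
for n in range(1, N + 1):
    H += 1.0 / n          # harmonic number H_n
    lhs += H / n ** 2     # partial sum of the Euler sum
    zeta3 += 1.0 / n ** 3
print(lhs, 2 * zeta3)     # both close to 2.40411...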
Does the moduli space of smooth curves of genus g contain an elliptic curve
Let $M_g$ be the moduli space of smooth projective geometrically connected curves of genus $g$ over a field $k$, with $g\geq 2$. Note that $M_g$ is not complete.
Does $M_g$ contain an elliptic curve?
The answer is no if $g=2$. In fact, $M_2$ doesn't contain any complete curves.
Note that one can construct complete curves lying in $M_g$ for $g\geq 3$. There are explicit constructions known.
Probably for $g>>0$ and $k$ algebraically closed, the answer is yes.
What if $k$ is a number field?
6 Answers
The easiest way (I know) to see that there are no nonconstant holomorphic maps from a complete elliptic curve $E$ to the stack $M_g$ is to observe that such a map $f$ would lift to a
holomorphic map of the universal covers $\tilde{f}: {\mathbb C} \to T_g$, where $T_g$ is the Teichmuller space. The latter is a bounded domain in ${\mathbb C}^{3g-3}$, so Liouville's
theorem implies that $f$ is constant.
Edit: Using Kodaira's construction of complete curves in moduli spaces (via ramified coverings of products of curves) one can construct maps from elliptic curves $E$ to the coarse moduli space (of large genus) which are generically 1-1, i.e. 1-1 away from a finite subset of $E$. With more work one can probably get injective maps as well but I do not see sufficient
motivation for this.
If $C$ is a smooth curve of genus $g$ and $f:C\to M_g$ is a non-constant morphism to the stack, then the total space of the induced family of curves $S\to C$ is a smooth surface and the
underlying oriented 4-manifold has non-trivial signature: it is given by a multiple of the pullback of the first Chern class of the Hodge bundle on $M_g$, which is an ample class. On the
other hand, if a 4-manifold is a Riemann surface bundle over a torus or sphere, then its signature must be zero. Thus, there are no non-constant maps of an elliptic curve to the stack $M_g$.
The question of what the smallest genus curve which maps non-trivially to $M_g$ is addressed in a paper I wrote with Ron Donagi around 10 years ago: http://arxiv.org/pdf/math.AG/0105203.pdf
I would guess that there does exist a non-constant map of an elliptic curve $C$ to the coarse space of $M_g$, but the map would only come from a family of curves after passing to a finite (ramified) cover of $C$. I think some of the examples constructed in the above paper are in fact of this form. I can think more about it if knowing the answer for the coarse space is important to you (but really, why would you be interested in the coarse space? :-) ).
Edit: I am assuming that the base field is $\mathbb{C}$ in the above answer. I don't know the answer for other fields.
Jim I love this: "why would you be interested in the coarse space?"! – roy smith Jul 26 '12 at 2:03
It was just pointed out to me by a friend that if you are studying the MMP program for $M_g$ (which is certainly an interesting endeavour), then you are more interested in the geometry of
the coarse space. So maybe I need to walk back my flippant comment! – Jim Bryan Jul 26 '12 at 3:33
Jim, if I understood their talks correctly, your old result was just used by Donagi and Witten to show that the moduli space of super-Riemann surfaces does not split -- meaning
superstring perturbation theory itself needs to be "revisited." – Eric Zaslow Jul 26 '12 at 5:42
Does that mean that the moduli stack of super-Riemann surfaces might be split? If so, why do Donagi-Witten care whether the moduli space is split? (Sorry for getting off-topic.) – Arend
Bayer Jul 26 '12 at 12:42
@Jim: sometimes the coarse space is all you have. E.g., both the stack and the coarse space of $g$-dimensional ppav's have toroidal compactifications, but, for $g>1$, only the coarse
space has a Satake compactification. It is this compactification that Faltings uses to prove the Mordell conjecture because it is here that there exists an ample line bundle with a good
height function. More broadly, Mumford asks [GIT, ch. 5, l. 1] "What are moduli?", then gives no definitive answer, despite having already written the first paper to take stacks seriously
as geometric objects. – inkspot Aug 2 '12 at 11:19
First consider the case when $M_g$ is the stack:
Over $\mathbb C$ this is a consequence of the Torelli theorem: A map from a rational or elliptic curve to $M_g$ is the same as a smooth family over that curve. Then considering the period
map gives a map from the same curve to the parameter space of Hodge structures. However, that is hyperbolic, so any holomorphic map from $\mathbb C$ is trivial. Therefore the Hodge
structures of the fibers are the same, but then Torelli says that then the curves are also the same, so the map to moduli is trivial. (In fact this proof shows that even a rational curve
minus $2$ points cannot map there either).
Actually, a lot more is true: The same statement holds if we replace $M_g$ with $M_h$, the moduli stack of canonically polarized smooth projective varieties with Hilbert polynomial $h$.
(For $\deg h=1$ you get back the corresponding $M_g$). Torelli is no longer true, but the desired statement is: There are no non-trivial maps from an elliptic curve or a rational curve
minus $2$ points. This is proved in Algebraic hyperbolicity of fine moduli spaces J. Algebraic Geom. 9 (2000), no. 1, 165–174.
Then one may wonder if something could be said for higher dimensional bases. The direct generalization of the original question would be to replace an elliptic curve with an abelian
variety. The same statement holds, (in fact a little more than that is) proved in Families over a base with a birationally nef tangent bundle Mathematische Annalen, 1997, Volume 308, Number
2, Pages 347-359
If one wants to generalize further, that is, to include the case of rational curves minus two points, one possibility is
Viehweg's conjecture (roughly stated)
Any quasi-projective variety that admits a generically finite morphism to $M_h$ is of log general type.
Here log general type means that if $X$ is the variety in question and $\overline X$ is a projective variety such that $X\subseteq \overline X$ is an open subset and $D=\overline X\setminus X$ is a divisor, then $\omega_{\overline X}(D)$ is big (=has maximal Kodaira dimension).
If you have not seen this before, check that curves of log general type are: Any open subset of a curve of genus at least $2$, any proper open subset of an elliptic curve, and any open
subset of a rational curve missing at least $3$ points. In other words, the only non-log general type curves are the (projective) elliptic curves and rational curves missing at most $2$
points. In other words, Viehweg's conjecture for $M_g$ is just the first statement above.
Viehweg's conjecture is currently known for proper base varieties by Families of varieties of general type over compact bases, Advances in Mathematics, Volume 218, Issue 3, 20 June 2008,
Pages 649–652 and Viehwegʼs hyperbolicity conjecture is true over compact bases, Advances in Mathematics, Volume 229, Issue 3, 15 February 2012, Pages 1640–1642 and over (up to)
$3$-dimensional bases in general by The structure of surfaces and threefolds mapping to the moduli stack of canonically polarized varieties, Duke Math. J. Volume 155, Number 1 (2010), 1-33.
As far as the coarse space goes, it is probably more of a curiosity, but it still seems interesting. Oort proved that the coarse space of $M_g$ actually contains rational curves. Perhaps
his proof can be adapted to prove the same for an elliptic curve. (The main idea is to construct a family over a curve with a given map to $\mathbb P^1$ such that fibers over the points
mapping to the same point on $\mathbb P^1$ are isomorphic, so the moduli map factors through the map to $\mathbb P^1$).
And then there are many results concerning the question of what kind of complete subvarieties might $M_g$ have. Or what about complete subvarieties through a general point? There are
various results in this direction, but I am already digressing....
Remark: all of the above holds over an algebraically closed field of characteristic $0$. In characteristic $p$ all kinds of weird things happen, so I would expect that probably most of this fails.
@Sandor: The same Teichmuller space argument I gave for the elliptic curves works for twice punctured complex projective line. – Misha Jul 26 '12 at 14:19
@Misha: Yes, of course. – Sándor Kovács Jul 26 '12 at 16:59
All these deductions on non-existence of an elliptic curve as in the question from the fact that $M_g$ is hyperbolic in various senses were very interesting. Here is a preposterous variant.
If such an elliptic curve exists, then passing to a finitely generated field and specializing any transcendental (if necessary) we can assume that the elliptic curve and the embedding in
$M_g$ are defined over a number field. We extend the number field so that the elliptic curve has infinitely many rational points over it. Now, it is easy to see that all these rational points (viewed as points in $M_g$) give rise to curves of genus $g$ defined over this number field and all having good reduction outside of a fixed finite set of places of the number field. This contradicts Shafarevich's conjecture (proved by Faltings).
Perhaps this can be made somewhat less preposterous using a function-field analogue of Shafarevich's conjecture? Often such analogues were proved somewhat earlier and more easily. – Noam
D. Elkies Jul 31 '12 at 14:41
But there are complete curves in $M_g$ (of genus bigger than one), so there are families of curves over function fields with good reduction everywhere, so the correct statement is bound
to be subtle. These families are not deformable (Arakelov's theorem, which is the function field analogue of Shafarevich). Maybe using the fact that an elliptic curve has an infinite
automorphism group gives a contradiction, but I am not sure. – Felipe Voloch Jul 31 '12 at 15:36
Actually, what one may use to apply the function field version of Shafarevich is the fact that elliptic curves admit endomorphisms of degree larger than $1$. I included the details in an
answer below. (I ran out of the space allowed for comments). – Sándor Kovács Aug 1 '12 at 0:00
Here is how the function field version of Shafarevich's conjecture (=Arakelov-Parshin Theorem) implies that there are no elliptic curves or (at most) twice punctured rational curves in
(See Noam's comment to Felipe's answer)
Suppose there exists a smooth non-isotrivial family $f:X\to C$ of curves of genus $g$ for some fixed $g>1$ parametrized by a curve $C$. Call such a family admissible, let $m\in\mathbb N$
fixed and consider the set of numbers $$ D_m=\left\{ \deg (f_*\omega_{X/C}^m) \mid f \text{ is an admissible family } \right\} $$
By Shafarevich's conjecture (=Arakelov-Parshin Theorem) this set is finite and hence bounded for any given $m$. On the other hand, it is well-known that for $m\gg 0$ the line bundle $\det
(f_*\omega_{X/C}^m)$ is ample, but we only need that it is not trivial and hence has a non-zero degree.
Now assume that $C$ admits an endomorphism of degree $>1$, say $\sigma:C\to C$. Then the base change $f_\sigma:X_\sigma\to C$ of any admissible family $f:X\to C$ is still admissible, but
$$ \deg ({f_\sigma}_*\omega_{X_\sigma/C}^m) = \deg\sigma \cdot \deg (f_*\omega_{X/C}^m), $$ which would mean that if non-empty, then $D_m$ could not be bounded; therefore if $C$ admits such an endomorphism, then $D_m$ has to be empty.
If $C$ is an elliptic curve, or a rational curve minus (at most) two points, then it admits such an endomorphism, so they cannot parametrize smooth non-isotrivial families of curves of
genus $>1$.
The boundedness of (the analogous set in arbitrary dimension) $D_m$ is sometimes called weak boundedness. The above argument shows that "Weak Boundedness" implies "Hyperbolicity". This
statement, in a somewhat more general form, is contained in Thm 0.8/0.9 of Logarithmic vanishing theorems and Arakelov-Parshin boundedness for singular varieties. Compositio Math. 131
(2002), no. 3, 291–317
If $g \geq 2$, and you want a map from the elliptic curve to the stack $M_g$ (as opposed to the coarse moduli space), then I think the map has to be constant, just on curvature grounds.
Does it matter whether we consider the stack or the coarse moduli space? I'm interested in knowing the answer in both cases. Could you explain what you mean by "on curvature grounds"? –
Francesco Jul 25 '12 at 23:26
@Francesco: Here is the "curvature" argument that @anon probably had in mind (I am assuming that the elliptic curve $E$ is smoothly embedded in the stack $M_g$). Equip $M_g$ with the Weil-Petersson Kähler metric $d_{WP}$; it has negative sectional curvature. Since $E$ is a holomorphic curve, it is also a minimal surface (by the Wirtinger inequality), hence the restriction of $d_{WP}$ to $E$ has smaller curvature than $d_{WP}$ has. Thus, the torus $E$ has a Riemannian metric of negative curvature. This contradicts, say, the Gauss-Bonnet formula. – Misha Jul 31 '12 at 14:04
Professor W Gaschütz - some reminiscences
In the year 2000, when Wolfgang Gaschütz was eighty years old, Roger Carter published 'Professor W Gaschütz - some reminiscences' in Proc. F. Scorina Gomel State Univ. No. 3 (16) (2000), 17. We
present below a version of that tribute:
I would like to recount briefly, in this celebratory issue for Wolfgang Gaschütz, some of my mathematical and personal encounters with him. Professor Gaschütz influenced considerably my work as a
young mathematician. I wrote my Ph.D. thesis in 1959 at Cambridge University, working under the supervision of Derek Taunt. The thesis concerned properties of the system normalizers of a finite
soluble group, which had been defined by Philip Hall in 1937. After completing the thesis I spent a postdoctoral year 1959-60 at the University of Tübingen to visit Professor H Wielandt and, under
the influence of P Hall and H Wielandt, I proved the existence and conjugacy of nilpotent self-normalizing subgroups in a finite soluble group.
In 1960 I took up a lectureship in Newcastle and in 1962 I paid a visit to Stockholm to attend the International Congress of Mathematicians. During this meeting I learned from Joachim Neubüser that
Gaschütz had obtained new results on finite soluble groups containing my results as a special case. As the train back from Stockholm to London stopped quite close to Kiel, I decided to take the
opportunity to get out and visit Professor Gaschütz. When I arrived at Kiel I found that he was at a family birthday party, but he very kindly arranged to have a discussion with me, and told me about
his new theory of formations and covering subgroups. This conversation inspired me to start working on formations and the so-called F-normalizers in a finite soluble group, which generalise the
system normalizers which I had studied in my Ph.D. thesis. I subsequently found that Trevor Hawkes had obtained similar results independently, and so we published them in a joint paper, which
eventually appeared in 1967.
Although my research interests subsequently moved away from soluble groups I still had the opportunity to meet Gaschütz fairly regularly at group theory meetings at the Mathematical Research Centre
at Oberwolfach, where he was frequently accompanied by Frau Gaschütz. I remember vividly a lecture he gave there on automorphisms of p-groups, showing that finite non-abelian p-groups possess outer
p-automorphisms. Gaschütz spoke for about 40 minutes only, but with such style and panache that his talk outshone the other more conventional one hour lectures.
I also had the pleasure of welcoming Gaschütz on several occasions as a visitor to the University of Warwick, which I had joined in 1965, to take part in the Warwick Algebra Symposia which were
organised there. I always looked forward to his lively expositions of his continuing work on soluble groups. I also enjoyed touring around the Cotswolds with him by car, and seeing his reaction to
some of the ancient buildings we visited there.
My most recent meeting with Professor Gaschütz took place in 1987. While on an extended visit to Essen I had the opportunity of travelling to Kiel to visit Gaschütz shortly before his retirement. On
this occasion, as on earlier occasions, I was again conscious of the great pleasure I gained from talking to Herr Gaschütz, in particular from his genial and expansive manner.
I send him and his family all good wishes on his 80th birthday.
JOC/EFR July 2012
if tanΦ=-1/3 find sinΦ and cosΦ
(drawing: a reference right triangle for tan Φ = −1/3) now you can find anything u like ;) always use this method
\(\tan(\theta) = -\cfrac{1}{3} = \cfrac{\text{opposite}}{\text{adjacent}} = \cfrac{b}{a}, \quad c^2 = a^2+b^2 \implies c = \sqrt{a^2+b^2}\)
once you have all 3 guys, a, b and c you can find the others
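For completeness (a worked finish not given in the original thread): taking b = 1 and a = 3 gives c = √10, so |sin Φ| = 1/√10 and |cos Φ| = 3/√10. Since tan Φ is negative, Φ lies in quadrant II or IV, so (sin Φ, cos Φ) = (1/√10, −3/√10) or (−1/√10, 3/√10).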
Acoustical Physics
Acoustics Today
Acta Acustica united with Acustica
Acta Mechanica
Acta Mechanica Sinica
Acta Physica Slovaca
Advanced Composite Materials
Advanced Electromagnetics
Advanced Functional Materials
Advanced Materials
Advances in Acoustics and Vibration
Advances In Atomic, Molecular, and Optical Physics
Advances in Condensed Matter Physics
Advances in Exploration Geophysics
Advances in Geophysics
Advances in High Energy Physics
Advances in Imaging and Electron Physics
Advances in Materials Physics and Chemistry
Advances in Natural Sciences: Nanoscience and Nanotechnology
Advances in Nonlinear Optics
Advances in OptoElectronics
Advances In Physics
Advances in Physics Theories and Applications
Advances in Remote Sensing
Advances in Synchrotron Radiation
AIP Advances
American Journal of Applied Sciences
American Journal of Condensed Matter Physics
Analysis and Mathematical Physics
Annalen der Physik
Annales Geophysicae (ANGEO)
Annales Henri Poincaré
Annales UMCS, Physica
Annals of Nuclear Medicine
Annals of Physics
Annual Reports on NMR Spectroscopy
Annual Review of Analytical Chemistry
Annual Review of Condensed Matter Physics
Annual Review of Fluid Mechanics
Annual Review of Materials Research
Annual Review of Nuclear and Particle Science
APL : Organic Electronics and Photonics
APL Materials
Applied Acoustics
Applied Composite Materials
Applied Mathematics and Mechanics
Applied Physics A
Applied Physics Frontier
Applied Physics Letters
Applied Physics Research
Applied Physics Reviews
Applied Radiation and Isotopes
Applied Remote Sensing Journal
Applied Spectroscopy
Applied Spectroscopy Reviews
Applied Thermal Engineering
Archive for Rational Mechanics and Analysis
Astronomy & Geophysics
Astrophysical Journal Letters
Astrophysical Journal Supplement Series
Atmospheric and Oceanic Optics
Atomic Data and Nuclear Data Tables
Attention, Perception & Psychophysics
Autonomous Mental Development, IEEE Transactions on
Bangladesh Journal of Medical Physics
Biomedical Engineering, IEEE Reviews in
Biomedical Engineering, IEEE Transactions on
Biomedical Imaging and Intervention Journal
Biophysical Reviews
Biophysical Reviews and Letters
BMC Biophysics
BMC Nuclear Medicine
Brazilian Journal of Physics
Broadcasting, IEEE Transactions on
Building Acoustics
Bulletin of Materials Science
Bulletin of the Atomic Scientists
Bulletin of the Lebedev Physics Institute
Bulletin of the Russian Academy of Sciences: Physics
Caderno Brasileiro de Ensino de FĂsica
Canadian Journal of Physics
Central European Journal of Physics
Chinese Journal of Astronomy and Astrophysics
Chinese Journal of Chemical Physics
Chinese Physics B
Chinese Physics C
Chinese Physics Letters
Cohesion and Structure
Colloid Journal
Communications in Mathematical Physics
Communications in Numerical Methods in Engineering
Communications in Theoretical Physics
Composites Part A : Applied Science and Manufacturing
Composites Part B : Engineering
Computational Materials Science
Advances in High Energy Physics [14 followers]
Open Access journal ISSN (Print) 1687-7357 - ISSN (Online) 1687-7365
Published by Hindawi Publishing Corporation [347 journals] [SJR: 1.297] [H-I: 7]
• Hubble Parameter Corrected Interactions in Cosmology
□ Abstract: We make steps in a new direction by considering fluids with an EoS of more general form . It is thought that there should be interaction between cosmic fluids, but at this stage this assumption carries only a phenomenological character, opening room for different kinds of manipulations. In this paper we will consider a modification of an interaction , where we accept that the interaction parameter (order of unity) in is time dependent and presented as a linear function of the Hubble parameter of the form , where and are constants. We consider two different models
including modified Chaplygin gas and polytropic gas which have bulk viscosity. Then, we investigate problem numerically and analyze behavior of different cosmological parameters concerning
fluids and behavior of the universe.
PubDate: Mon, 14 Apr 2014 11:07:30 +000
• Transverse Momentum Distributions in AuAu and dAu Collisions at GeV
□ Abstract: We study the transverse momentum distributions of identified particles produced in Au + Au and d + Au collisions at GeV. The Tsallis description is applied in the multisource
model. The results are compared with the experimental data in detail. We obtain some information of the thermodynamic properties of matter produced in the collisions. The difference of the
transverse momentum distributions in Au + Au and d + Au collisions is not significant.
PubDate: Thu, 10 Apr 2014 10:10:33 +000
• Comparing Multicomponent Erlang Distribution and Lévy Distribution of
Particle Transverse Momentums
□ Abstract: The transverse momentum spectrums of final-state products produced in nucleus-nucleus and proton-proton collisions at different center-of-mass energies are analyzed by using a
multicomponent Erlang distribution and the Lévy distribution. The results calculated by the two models are found in most cases to be in agreement with experimental data from the Relativistic
Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). The multicomponent Erlang distribution that resulted from a multisource thermal model seems to give a better description as
compared with the Lévy distribution. The temperature parameters of interacting system corresponding to different types of final-state products are obtained. Light particles correspond to a
low temperature emission, and heavy particles correspond to a high temperature emission. Extracted temperature from central collisions is higher than that from peripheral collisions.
PubDate: Thu, 10 Apr 2014 00:00:00 +000
• Analyzing the Anomalous Dipole Moment Type Couplings of Heavy Quarks with
FCNC Interactions at the CLIC
□ Abstract: We examine both anomalous magnetic and dipole moment type couplings of a heavy quark via its single production with subsequent dominant standard model decay modes at the Compact Linear Collider (CLIC). The signal and background cross sections are analyzed for heavy quark masses of 600 and 700 GeV. We perform the analysis to constrain these couplings and to find the attainable integrated luminosities for the observation limit.
PubDate: Wed, 09 Apr 2014 13:26:37 +000
• What Can We Learn from (Pseudo)Rapidity Distribution in High Energy Collisions?
□ Abstract: Based on the (pseudo)rapidity distribution of final-state particles produced in proton-proton (pp) collisions at high energy, the probability distributions of momenta, longitudinal
momenta, transverse momenta (transverse masses), energies, velocities, longitudinal velocities, transverse velocities, and emission angles of the considered particles are obtained in the
framework of a multisource thermal model. The number density distributions of particles in coordinate and momentum spaces and related transverse planes, the particle dispersion plots in
longitudinal and transverse coordinate spaces, and the particle dispersion plots in transverse momentum plane at the stage of freeze out in high energy pp collisions are also obtained.
PubDate: Tue, 08 Apr 2014 07:05:25 +000
• Shower and Slow Particle Productions in Nucleus-Nucleus Collisions at High Energies
□ Abstract: The multiplicity distributions of shower, grey, and black particles produced in interactions of 4He, 12C, 16O, 22Ne, and 28Si with emulsion (Em) at 4.1–4.5 A GeV/c beam momenta, and their dependence on target groups (H, CNO, and AgBr), are presented and have been reproduced by the multisource thermal model. The multiplicity and the angular distributions of the three types of particles have been investigated. The experimental results are compared with the corresponding ones from the model. We found that the experimental data agree with theoretical calculations using the multisource thermal model.
PubDate: Mon, 07 Apr 2014 09:02:44 +000
• The Evolution-Dominated Hydrodynamic Model and the Pseudorapidity
Distributions in High Energy Physics
□ Abstract: By taking into account the effects of leading particles, we discuss the pseudorapidity distributions of the charged particles produced in high energy heavy ion collisions in the
context of evolution-dominated hydrodynamic model. The leading particles are supposed to have a Gaussian rapidity distribution normalized to the number of participants. A comparison is made
between the theoretical results and the experimental measurements performed by BRAHMS and PHOBOS Collaboration at BNL-RHIC in Au-Au and Cu-Cu collisions at GeV and by ALICE Collaboration at
CERN-LHC in Pb-Pb collisions at TeV.
PubDate: Thu, 03 Apr 2014 14:11:56 +000
• Hermitian -Freudenthal-Kantor Triple Systems and Certain Applications of
*-Generalized Jordan Triple Systems to Field Theory
□ Abstract: We define Hermitian -Freudenthal-Kantor triple systems and prove a structure theorem. We also give some examples of triple systems that are generalizations of the and Hermitian
3-algebras. We apply a -generalized Jordan triple system to a field theory and obtain a Chern-Simons gauge theory. We find that the novel Higgs mechanism works, where the Chern-Simons gauge
theory reduces to a Yang-Mills theory in a certain limit.
PubDate: Thu, 03 Apr 2014 09:57:27 +000
• Analyzing Black Hole Super-Radiance Emission of Particles/Energy from a
Black Hole as a Gedankenexperiment to Get Bounds on the Mass of a Graviton
□ Abstract: The use of super-radiance in BH physics specifies conditions for the mass of a graviton being less than or equal to 10^-65 grams, and allows for determining what role additional dimensions may play in removing the datum that massive gravitons lead to 3/4th the bending of light past the planet Mercury. The present document draws a distinction between super-radiance in the case of conventional BHs and Braneworld BH super-radiance, which may delineate whether Braneworlds contribute to an admissible massive graviton by removing the usual problem of 3/4th the bending of light past the planet Mercury that is normally associated with massive gravitons. This leads to a fork in the road between two alternatives: the possibility of needing a multiverse containment of BH structure, or embracing what Hawking wrote up recently, namely a redo of the event horizon hypothesis as we know it.
PubDate: Thu, 03 Apr 2014 09:27:31 +000
• Studies of Three-Body Decay of to and to *
□ Abstract: We investigate the and decay by using the Dalitz plot analysis. As we know there are tree, penguin, emission, and emission-annihilation diagrams for these decay modes in the
factorization approach. The transition matrix element is factorized into a form factor multiplied by decay constant and also a form factor multiplied by decay constant. According to QCD
factorization approach and using the Dalitz plot analysis, we calculate the branching ratios of the and three-body decay in view of the mixing and obtain the value of the , while the
experimental results of them are and , respectively. In this research we also analyze the decay which is similar to the previous decay, but there is no experimental data for the last decay.
Since for calculations of the decay we use assumptions of the decay, we hope that if this decay will be measured by the LHCb in the future, the experimental results will be in agreement with
our calculations.
PubDate: Thu, 03 Apr 2014 00:00:00 +000
• Magnetic String with a Nonlinear Source
□ Abstract: Considering Einstein gravity in the presence of Born-Infeld type electromagnetic fields, we introduce a class of 4-dimensional static horizonless solutions which produce longitudinal magnetic fields. Although these solutions have no curvature singularity and no horizon, there exists a conic singularity. We investigate the effects of nonlinear electromagnetic fields on the properties of the solutions and find that the asymptotic behavior of the solutions is AdS. Next, we generalize the static metric to the case of rotating
solutions and find that the value of the electric charge depends on the rotation parameter. Furthermore, conserved quantities will be calculated through the use of the counterterm method.
Finally, we extend four-dimensional magnetic solutions to higher dimensional solutions. We present higher dimensional rotating magnetic branes with maximum rotation parameters and obtain
their conserved quantities.
PubDate: Wed, 02 Apr 2014 14:16:28 +000
• Holographic Brownian Motion in Three-Dimensional Gödel Black Hole
□ Abstract: Using the AdS/CFT correspondence and the Gödel black hole background, we study the dynamics of a heavy quark in a rotating plasma. Here we follow Atmaja (2013) on Brownian motion in the BTZ black hole. In this paper we obtain some new results for the case of . In this case, we must redefine the angular velocity of the string fluctuation. We obtain the time evolution of the displacement square and the angular velocity and show that the quark behaves as a Brownian particle in the non-relativistic limit. In this plasma, relating the Brownian motion to physical observables seems to be rather difficult work. But our results match Atmaja's work in the limit .
PubDate: Wed, 02 Apr 2014 11:38:36 +000
• A New Flavor Symmetry in 3-3-1 Model with Neutral Fermions
□ Abstract: A new S4 flavor model based on gauge symmetry responsible for fermion masses and mixings is constructed. The neutrinos get small masses from only an antisextet of SU(3)L which is in
a doublet under S4. In this work, we assume the VEVs of the antisextet differ from each other under S4 and the difference of these VEVs is regarded as a small perturbation, and then the model
can fit the experimental data on neutrino masses and mixings. Our results show that the neutrino masses are naturally small and a deviation from the tribimaximal neutrino mixing form can be
realized. The quark masses and mixing matrix are also discussed. The number of required Higgs multiplets is less and the scalar potential of the model is simpler than those of the model based
on S3 and our previous S4 model. The assignation of VEVs to antisextet leads to the mixing of the new gauge bosons and those in the standard model. The mixing in the charged gauge bosons as
well as the neutral gauge bosons is considered.
PubDate: Wed, 02 Apr 2014 09:02:35 +000
• How Can the Modified Dispersion Relation Affect Friedmann Equations?
□ Abstract: The appearance of the quantum gravitational effects in a very high energy regime necessitates some corrections to the thermodynamics of Friedmann-Robertson-Walker (FRW) universe.
The modified dispersion relation (MDR) as a phenomenological approach to investigate the high energy physics provides a perturbation framework upon which the FRW universe thermodynamics can
be corrected. In this letter, we obtain the corrected entropy-area relation of the apparent horizon of FRW universe by utilizing the extra dimensional form of MDR, leading to the modification
of Friedmann equations. The influence of MDR on the Friedmann equations provides a good insight into the understanding of the FRW universe dynamics in the final quantum gravity theory.
PubDate: Tue, 01 Apr 2014 13:42:45 +000
• MicroBlack Holes Thermodynamics in the Presence of Quantum Gravity Effects
□ Abstract: Black hole thermodynamics is corrected in the presence of quantum gravity effects. Some phenomenological aspects of quantum gravity proposal can be addressed through generalized
uncertainty principle (GUP) which provides a perturbation framework to perform required modifications of the black hole quantities. In this paper, we consider the effects of both a minimal
measurable length and a maximal momentum on the thermodynamics of TeV-scale black holes. We then extend our study to the case that there are all natural cutoffs as minimal length, minimal
momentum, and maximal momentum simultaneously. We also generalize our study to the model universes with large extra dimensions (LED). In this framework existence of black holes remnants as a
possible candidate for dark matter is discussed. We study probability of black hole production in the Large Hadronic Collider (LHC) and we show this rate decreasing for sufficiently large
values of the GUP parameter.
PubDate: Tue, 01 Apr 2014 10:18:41 +000
• Halo-Independent Comparison of Direct Dark Matter Detection Data
□ Abstract: We review the halo-independent formalism that allows comparing data from different direct dark matter detection experiments without making assumptions on the properties of the dark
matter halo. We apply this method to spin-independent WIMP-nuclei interactions, for both isospin-conserving and isospin-violating couplings, and to WIMPs interacting through an anomalous
magnetic moment.
PubDate: Tue, 01 Apr 2014 06:57:39 +000
• Neutrino Masses and Oscillations
□ PubDate: Sun, 30 Mar 2014 12:53:53 +000
• Black Component of Dark Matter
□ Abstract: A mechanism of primordial black hole formation with specific mass spectrum is discussed. It is shown that these black holes could contribute to the energy density of dark matter.
Our approach is elaborated in the framework of universal extra dimensions.
PubDate: Sun, 30 Mar 2014 11:32:32 +000
• NMC and the Fine-Tuning Problem on the Brane
□ Abstract: We propose a new solution to the fine-tuning problem related to the coupling constant of the potential. We study a quartic potential of the form in the framework of the Randall-Sundrum
type II braneworld model in the presence of a Higgs field which interacts nonminimally with gravity via a possible interaction term of the form . Using the conformal transformation
techniques, the slow-roll parameters in high energy limit are reformulated in the case of a nonminimally coupled scalar field. We show that, for some value of a coupling parameter and brane
tension , we can eliminate the fine-tuning problem. Finally, we present graphically the solutions of several values of the free parameters of the model.
PubDate: Sun, 30 Mar 2014 10:17:19 +000
• Double-Differential Production Cross Sections of Charged Pions in Charged
Pion Induced Nuclear Reactions at High Momentums
□ Abstract: The double-differential production cross sections in interactions of charged pions on targets at high momentums are analyzed by using a multicomponent Erlang distribution which is
obtained in the framework of a multisource thermal model. The calculated results are compared and found to be in agreement with the experimental data at the incident momentums of 3, 5, 8, and
12 GeV/c measured by the HARP Collaboration. It is found that the source contributions to the mean momentum of charged particles and to the distribution width of particle momentums decrease
with increase of the emission angle, and the source number and temperature do not show an obvious dependence on the emission angle of the considered particle.
PubDate: Sun, 30 Mar 2014 07:19:56 +000
• Holographic Screens in Ultraviolet Self-Complete Quantum Gravity
□ Abstract: This paper studies the geometry and the thermodynamics of a holographic screen in the framework of the ultraviolet self-complete quantum gravity. To achieve this goal we construct a
new static, neutral, nonrotating black hole metric, whose outer (event) horizon coincides with the surface of the screen. The spacetime admits an extremal configuration corresponding to the
minimal holographic screen and having both mass and radius equalling the Planck units. We identify this object as the spacetime fundamental building block, whose interior is physically
unaccessible and cannot be probed even during the Hawking evaporation terminal phase. In agreement with the holographic principle, relevant processes take place on the screen surface. The
area quantization leads to a discrete mass spectrum. An analysis of the entropy shows that the minimal holographic screen can store only one byte of information, while in the thermodynamic
limit the area law is corrected by a logarithmic term.
PubDate: Thu, 27 Mar 2014 07:26:37 +000
• Challenges in Double Beta Decay
□ Abstract: In the past ten years, neutrino oscillation experiments have provided incontrovertible evidence that neutrinos mix and have finite masses. These results represent the strongest
demonstration that the electroweak Standard Model is incomplete and that new Physics beyond it must exist. In this scenario, a unique role is played by the Neutrinoless Double Beta Decay
searches which can probe lepton number conservation and investigate the Dirac/Majorana nature of the neutrinos and their absolute mass scale (hierarchy problem) with unprecedented
sensitivity. Today Neutrinoless Double Beta Decay faces a new era where large-scale experiments with a sensitivity approaching the so-called degenerate-hierarchy region are nearly ready to
start and where the challenge for the next future is the construction of detectors characterized by a tonne-scale size and an incredibly low background. A number of new proposed projects took
up this challenge. These are based either on large expansions of the present experiments or on new ideas to improve the technical performance and/or reduce the background contributions. In
this paper, a review of the most relevant ongoing experiments is given. The most relevant parameters contributing to the experimental sensitivity are discussed and a critical comparison of
the future projects is proposed.
PubDate: Wed, 26 Mar 2014 13:59:01 +000
• Wormhole Solutions in the Presence of Nonlinear Maxwell Field
□ Abstract: In generalizing the Maxwell field to nonlinear electrodynamics, we look for the magnetic solutions. We consider a suitable real metric with a lower bound on the radial coordinate
and investigate the properties of the solutions. We find that in order to have a finite electromagnetic field near the lower bound, we should replace the Born-Infeld theory with another
nonlinear electrodynamics theory. Also, we use the cut-and-paste method to construct wormhole structure. We generalize the static solutions to rotating spacetime and obtain conserved
PubDate: Wed, 26 Mar 2014 09:17:52 +000
• Lepton Flavour Violation Experiments
□ Abstract: Lepton Flavour Violation in the charged lepton sector (CLFV) is forbidden in the Minimal Standard model and strongly suppressed in extensions of the model to include finite neutrino
mixing. On the other hand, a wide class of Supersymmetric theories, even coupled with Grand Unification models (SUSY-GUT models), predict CLFV processes at a rate within the reach of new
experimental searches operated with high resolution detectors at high intensity accelerators. As the Standard model background is negligible, the observation of one or more CLFV events would
provide incontrovertible evidence for physics beyond Standard model, while a null effect would severely constrain the set of theory parameters. Therefore, a big experimental effort is
currently (and will be for incoming years) accomplished to achieve unprecedented sensitivity on several CLFV processes. In this paper we review past and recent results in this research field,
with focus on CLFV channels involving muons and taus. We present currently operating experiments as well as future projects, with emphasis on how sensitivity enhancements are
accompanied by improvements on detection techniques. Limitations due to systematic effects are also discussed in detail together with the solutions being adopted to overcome them.
PubDate: Thu, 20 Mar 2014 12:50:29 +000
• Black Holes and Quantum Mechanics
□ Abstract: We look at black holes from different, novel perspectives.
PubDate: Thu, 20 Mar 2014 12:14:44 +000
• Hulthén and Coulomb-Like Potentials as a Tensor Interaction within
the Relativistic Symmetries of the Manning-Rosen Potential
□ Abstract: The bound-state solutions of the Dirac equation for the Manning-Rosen potential are presented approximately for arbitrary spin-orbit quantum number with the Hulthén and Coulomb-like
potentials as a tensor interaction. The generalized parametric Nikiforov-Uvarov (NU) method is used to obtain energy eigenvalues and corresponding two-component spinors of the two Dirac
particles and these are obtained in the closed form by using the framework of the spin symmetry and p-spin symmetry concept. We have also shown that tensor interaction removes degeneracies
between spin and p-spin doublets. Some numerical results are also given.
PubDate: Thu, 20 Mar 2014 09:03:04 +000
• Theory and Phenomenology of Space-Time Defects
□ Abstract: Whether or not space-time is fundamentally discrete is of central importance for the development of the theory of quantum gravity. If the fundamental description of spacetime is
discrete, typically represented in terms of a graph or network, then the apparent smoothness of geometry on large scales should be imperfect—it should have defects. Here, we review a model
for space-time defects and summarize the constraints on the prevalence of these defects that can be derived from observation.
PubDate: Thu, 20 Mar 2014 06:59:37 +000
• Inflation and Topological Phase Transition Driven by Exotic Smoothness
□ Abstract: We will discuss a model which describes the cause of inflation by a topological transition. The guiding principle is the choice of an exotic smoothness structure for the space-time.
Here we consider a space-time with topology . In case of an exotic , there is a change in the spatial topology from a 3-sphere to a homology 3-sphere which can carry a hyperbolic structure.
From the physical point of view, we will discuss the path integral for the Einstein-Hilbert action with respect to a decomposition of the space-time. The inclusion of the boundary terms
produces fermionic contributions to the partition function. The expectation value of an area (with respect to some surface) shows an exponential increase; that is, we obtain inflationary
behavior. We will calculate the amount of this increase to be a topological invariant. Then we will describe this transition by an effective model, the Starobinski or model which is
consistent with the current measurement of the Planck satellite. The spectral index and other observables are also calculated.
PubDate: Wed, 19 Mar 2014 12:40:34 +000
• Massless Weyl Spinors from Bosonic Scalar-Tensor Duality
□ Abstract: We consider the fermionization of a bosonic-free theory characterized by the scalar-tensor duality. This duality can be interpreted as the dimensional reduction, via a planar
boundary, of the topological BF theory. In this model, adopting the Sommerfield tomographic representation of quantized bosonic fields, we explicitly build a fermionic operator and its
associated Klein factor such that it satisfies the correct anticommutation relations. Interestingly, we demonstrate that this operator satisfies the massless Dirac equation and that it can be
identified with a Weyl spinor. Finally, as an explicit example, we write the integrated charge density in terms of the tomographic transformed bosonic degrees of freedom.
PubDate: Wed, 19 Mar 2014 11:45:47 +000
• Cardy-Verlinde Formula of Noncommutative Schwarzschild Black Hole
□ Abstract: A few years ago, Setare (2006) investigated the Cardy-Verlinde formula of a noncommutative black hole obtained by noncommutativity of coordinates. In this paper, we apply the same
procedure to a noncommutative black hole obtained by the coordinate coherent approach. The Cardy-Verlinde formula is entropy formula of conformal field theory in an arbitrary dimension. It
relates the entropy of conformal field theory to its total energy and Casimir energy. In this paper, we have calculated the total energy and Casimir energy of noncommutative Schwarzschild
black hole and have shown that entropy of noncommutative Schwarzschild black hole horizon can be expressed in terms of Cardy-Verlinde formula.
PubDate: Tue, 18 Mar 2014 10:08:31 +000 | {"url":"http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=53&journalID=15090&pageb=1&userQueryID=&sort=&local_page=1&sorType=DESC&sorCol=2","timestamp":"2014-04-16T16:09:10Z","content_type":null,"content_length":"146121","record_id":"<urn:uuid:b3725d24-cb0f-46a6-84cd-2ba29ed1e5a8>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
Liam Cleary, Ph.D.
Postdoctoral Associate
Department of Chemistry
Massachusetts Institute of Technology
E-mail: licleary@mit.edu linkedin.com/in/licleary
Postal Address
Room 6-222A
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139
Current Research
My current research concerns the transport phenomena of elementary excitations in quantum dissipative systems, in particular, the optimization of excitonic energy transfer processes in photosynthetic
pigment-protein complexes, namely the light harvesting complexes of LH-II B850 and LH-I B875 found in purple bacteria.
Research Interests
My research interests include all of condensed matter physics, in particular transport in quantum and semiclassical dissipative systems. I enjoy being a member of the APS Division of Condensed Matter Physics (DCMP) and the Topical Group on Statistical and Nonlinear Physics (GSNP).
PhD Thesis
• A semiclassical approach to quantum Brownian motion in Wigner's phase space, L. Cleary, Dept. of Electronic and Electrical Engineering, Trinity College Dublin, October (2010).
Complete List
A list of publications and conferences can be found here
Poster presentation:
1. Exploring the Photoluminescence Spectral Lineshapes of Single Nanocrystals in Solution Using Photon-correlation Fourier Spectroscopy, 2012 MRS Fall Meeting, Nov 25th-30th, Boston (Nov 28th, 2012).
Conference and workshop presentations:
1. Intercomplex energy transfer rates and single molecule spectra of LH2 and LH1 as described by a polaron-transformed multichromophoric quantum master equation, ACS Spring 2012 Chemistry of Life
Meeting, March 25th-29th, 2012, San Diego.
2. Effect of the heat bath on the intercomplex resonance energy transfer rate as described by multichromophoric Foerster theory, Quantum Effects in Biological Systems, August 1st-5th, 2011, Ulm.
3. Quantum Brownian motion in a periodic potential: comparison of various kinetic models, Theoretical, Computational, and Experimental Challenges to Exploring Coherent Quantum Dynamics in Complex
Many-Body Systems, May 9th-12th, 2010, Dublin.
4. Semiclassical treatment of a Brownian ratchet using the quantum Smoluchowski equation, DPG Spring Meeting of the Condensed Matter Section, March 21st-26th, 2010, Regensburg.
5. Derivation of the quantum Smoluchowski equation using Brinkman's method, Tunneling and Scattering in Complex Systems - From Single to Many Particle Physics, International Workshop, Max-Planck
Institut fuer Physik komplexer Systeme, September 14th-18th, 2009, Dresden.
6. Smoluchowski equation approach for the quantum Brownian motion in a tilted periodic potential, ISSEC Irish Mechanics Society Joint Symposium, University College Dublin, May 16th, 2008.
Conference and workshop attendances:
1. IOP Postgraduate Workshop on Spintronics, 13th November, 2009, University of York.
2. DPG Spring Meeting of the Condensed Matter Section, March 22nd-27th, 2009, Dresden.
3. Ireland Mathematica Seminar 2009, 21st October, 2009, Trinity College Dublin.
The traditional measure of cognac is to place the glass on its side and fill to the brim. An equally popular measure is to place the glass vertically and fill to the point of maximum surface area.
For a given glass curvature and stem length, by adjusting the base width we can create the perfect cognac glass, where the two measures are equal.
Source Code: APerfectCognacGlass.nb
One sometimes needs to numerically evaluate the real part of the one-sided Fourier transform of a function f(t). A simple and efficient way to achieve this is to use a fundamental property of the
two-sided Fourier transform of a conjugate-even function (i.e., f(-t) = f(t)* ) and use the efficient fast Fourier transform (FFT) algorithm. This procedure is applied here to the linear absorption
and emission spectra of a single excitation interacting with a thermal environment.
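The identity behind this, using the exp(iωt) sign convention, is that for a conjugate-even f the two-sided transform is purely real and equals twice the real part of the one-sided one: F(ω) = ∫_{-∞}^{∞} f(t) e^{iωt} dt = ∫_{0}^{∞} [ f(t)e^{iωt} + (f(t)e^{iωt})* ] dt = 2 Re ∫_{0}^{∞} f(t) e^{iωt} dt. The desired real part is therefore F(ω)/2, which the FFT evaluates efficiently on a uniform frequency grid.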
Source Code: OneSidedFourierTrans...AndEmis-source.nb
Image Gallery
QuEBS Ulm 2011: Poster session
Konferenz zur Quantenbiologie
NUS Singapore 2011: Group photograph
Singapore-MIT Alliance
TSICS Dresden 2009: Group photograph
Tunneling and Scattering in Complex Systems | {"url":"http://web.mit.edu/~licleary/www/","timestamp":"2014-04-20T16:08:25Z","content_type":null,"content_length":"12639","record_id":"<urn:uuid:286126cc-19b5-4dcd-8161-d8c1c06744ab>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is 1/64 as a power of 4?
December 13th 2010, 06:04 AM
What is 1/64 as a power of 4?
Hi, can someone please explain how to do this question?
I'm not just looking for an answer, as I would very much appreciate knowing how to do it too. I'm not even sure I know what an integer or an index is yet.
December 13th 2010, 06:05 AM
What is 64 as a power of 4?
December 13th 2010, 06:16 AM
64 as a power of 4 is $4^3$. So is $4^3$ the answer?
Even if it is, it's still not really explaining the question though.
December 13th 2010, 06:19 AM
Or, another way to ask this: to what power should 4 be raised to get 64?
EDIT: I didn't see you above post.
No, that means that:
$\dfrac{1}{64} = \dfrac{1}{4^3}$
What happens when you have a fraction?
December 13th 2010, 06:31 AM
Actually it should be $4^{-3}$.
December 13th 2010, 09:08 AM
Thanks. But you guys are answering with more questions (and answers). What I really need is an explanation.
December 13th 2010, 09:10 AM
Perhaps you just need to understand what it means to have a negative exponent.
$x^{-n} = \frac{1}{x^n}$
So $4^{-3} = \frac{1}{4^3} = \frac{1}{64}$
December 13th 2010, 09:16 AM
Well, what we are trying to do is make you think and understand why such and such things work... because we won't always be here to answer you or to explain things to you. You'll have to be able to think on your own at some point and evaluate whether or not your reasoning is correct.
December 13th 2010, 09:21 AM
Yes but what I really need is an explanation. Didn't any of you guys maths tutors explain things to you before asking you to do it?
December 13th 2010, 09:30 AM
$4^{-1}=\dfrac{1}{4}$ by definition.
So $4^{-3}=\dfrac{1}{4^3}=\dfrac{1}{64}$.
December 13th 2010, 10:31 AM
That's a bit better. Thanks.
December 13th 2010, 05:32 PM
mr fantastic
There is an assumption that you know something about index laws (otherwise why are you attempting this question?). The point of being asked questions is to try to guide you to the answer yourself, based on what you already know. You were told everything you needed to know; the expectation was that you would then attempt to answer the question rather than maintaining a helpless attitude. Post #2 and then particularly post #7 tell you exactly what is required.
What I'd like to see is if you have actually learned anything from this thread. eg. What is 1/81 as a power of 3? | {"url":"http://mathhelpforum.com/algebra/166122-what-1-64-power-4-a-print.html","timestamp":"2014-04-20T20:01:30Z","content_type":null,"content_length":"11116","record_id":"<urn:uuid:19e0164f-a2e2-49ec-b800-ae1d5948ef66>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
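(For reference, the same definition settles that closing exercise: since $81 = 3^4$, we have $\frac{1}{81} = \frac{1}{3^4} = 3^{-4}$.)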
Gaussian beams and geometric aspects of Inverse problems
Seminar Room 2, Newton Institute Gatehouse
Geometry plays an important role in inverse problems. For example, reconstruction of a second-order elliptic self-adjoint differential operator on a manifold can, through a gauge transformation, be reduced to the reconstruction of the Schrödinger operator corresponding to the Laplace–Beltrami operator, i.e. the topology of the manifold, the Riemannian metric on it, and the potential. The difficulties are mostly related to the geometric aspects of the problem. If we consider applied inverse problems, we also see that the main difficulties lie in geometry. For example, in the main problem of geophysics - the so-called migration problem - it is necessary to reconstruct high-frequency wave fields in media with complicated geometry, with many caustics of different structure. The difficulties of reconstructing wave fields close to caustics are also of a geometric character. To solve these geometric problems it is necessary to have instruments closely related to the geometry of the corresponding problem. One such instrument is Gaussian beam solutions. In the talk the geometric properties of these solutions and their use in direct and inverse problems will be shown. Problems with more complicated Finsler geometry will also be discussed.
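As background (a sketch of the standard construction, not taken from the talk itself): a Gaussian beam is an asymptotic solution of the wave equation concentrated near a single ray, u(x) ≈ A(x) exp(iω θ(x)), where the phase θ is real on the central ray and has positive imaginary part off it, so the amplitude decays like a Gaussian in the transverse directions. Unlike standard ray-series (WKB) solutions, Gaussian beams remain uniformly valid at caustics, which is what makes them useful in the migration problem mentioned above.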
by James Paul Peruvankal, Senior Program Manager at Revolution Analytics
At Revolution Analytics, we are always interested in how people teach and learn R, and what makes R so popular, yet ‘quirky’ to learn. To get some insight from a real pro we interviewed Bob
Muenchen. Bob is the author of R for SAS and SPSS Users and, with Joseph M. Hilbe, R for Stata Users. He is also the creator of r4stats.com, a popular web site devoted to analyzing trends in
analytics software and helping people learn the R language. Bob is an Accredited Professional Statistician™ with 30 years of experience and is currently the manager of OIT Research Support (formerly
the Statistical Consulting Center) at the University of Tennessee. He has conducted research for a variety of public and private organizations and has assisted on more than 1,000 graduate theses and
dissertations. He has written or coauthored over 60 articles published in scientific journals and conference proceedings.
Bob has served on the advisory boards of SAS Institute, SPSS Inc., StatAce OOD, the Statistical Graphics Corporation and PC Week Magazine. His suggested improvements have been incorporated into SAS,
SPSS, JMP, STATGRAPHICS and several R packages. His research interests include statistical computing, data graphics and visualization, text analysis, data mining, psychometrics and resampling.
James: How did you get started teaching people how to use statistical software?
Bob: When I came to UT in 1979, many people were switching from either FORTRAN or SPSS to SAS. There was quite a lot of demand for SAS training, and I enjoyed teaching the workshops. Back then SAS
could save results, like residuals or predicted values, much more easily than SPSS, which drove the switch.
When the Windows version of SPSS came out, people started switching back. The SPSS user interface designer, Sheri Gilley, really understood what ease of use was all about, and the SAS folks didn’t
get that until quite recently. I was just as happy teaching the SPSS workshops. However, many SPSS users at UT avoid programming, which I think is a big mistake. Pointing-and-clicking your way
through an analysis can be a time-saving way to work, but I always keep the program so I have a record of what I did.
I started teaching R workshops in 2005 and attendance was quite sparse. Now it’s one of our Research Computing Support team’s most popular topics.
James: Is there anything special about teaching people how to use R, any particular difficulties?
Bob: In other analytics software, the focus is on variables. It sounds too simple to even bother saying: "Every procedure accepts variables." There are very few ways to specify them, such as by
simple name, A, B, C, or lists like A TO Z or A—Z.
Rather than just variables, R has a variety of objects such as vectors, factors and matrices. Some procedures (called functions in R) require particular kinds of objects and there are many more ways
to specify which objects to use. From a new user's perspective that may seem like needless complexity. However it provides significant benefits. Once an R user has defined a categorical variable as a
factor, analyses will then try to “do the right thing” with that variable. For instance, you could include it in a regression equation and R would create the indicator variables needed to handle a
categorical variable automatically.
Another important benefit to R’s object orientation is that it allows a total merger of what would normally be a separate matrix language into the main language of R. This attracts developers, who
are helping grow R’s capabilities very rapidly.
James: How do you handle such a broad range of backgrounds in your classes?
Bob: The workshop participants do come from a very wide range of fields, but they share a common set of knowledge: what a variable is, how to analyze data, and so on. So I save a great deal of time
by not having to explain all that. Instead, I redirect it into pointing out where R is likely to surprise them. You can have variables that are not in a data set? That’s a bizarre concept to a SAS,
SPSS or Stata user. You can have X in one data set and Y in another, but include both in the same regression model? That sounds very strange at first and, of course, it’s quite risky if you’re not
careful. I introduce most topics with, “You’re expecting this, but here comes something very different…”. Different doesn’t necessarily mean better, of course. SAS, SPSS and Stata are all top-quality
packages and they do some things with less effort. I love R, but I like to point out where I think the others do a better job.
James: How do you find teaching online compared to classroom courses?
Bob: I teach my workshops in-person at The University of Tennessee and I’ve taught at the American Statistical Association’s Joint Statistical Meeting as well as the UseR! Conference. Teaching “live”
is great fun, and being able to see the participants’ expressions is helpful in adjusting the presentation pace and knowing when to stop and ask for questions.
However, live workshops have major drawbacks. Travel costs can easily exceed the fee for a workshop, but worse, minimizing those expenses means cramming too much material into a short timeframe.
That’s why I teach my webinars in half-day stretches skipping a day in between. We break every hour and fifteen minutes so people can relax. On their days off they can catch up on their regular work,
review the workshop material, work on the exercises and email me with questions. At the end of a live workshop people are happy but exhausted and they leave quickly. At the end of a webinar-based
workshop, they often stay for a long time afterwards asking questions. I stay online as long as it takes to answer them all.
James: Some people like learning actively, with their hands on the keyboard. Others prefer to focus more on what’s being said and taking notes. How do you handle these styles?
Bob: This is an excellent question! When I take a workshop myself, I usually prefer hands-on but sometimes I don’t. Each of my workshop attendees receives setup instructions a week early so their
computer has the software installed and the files in the right place by the time we start. They’re ready for whichever learning style they prefer.
For hands-on learners, I use a single R program that contains the course notes as programming comments interspersed with executable code. Since the “slides” are right in front of them, they never
need to take their eyes off their screens. The examples are designed to be easy to covert to their own projects. They build in a step-by-step fashion, going from simple to more complex to make sure
no one gets lost. Participants can run each example as I cover it, and see the results on their own computers.
For people focused more on listening and taking notes, everyone also has a complete set of slides. The slides have the notes that describe each concept, then the code for it, followed by the output.
The notes follow a numbering scheme that is used in both the program and the handout. This way, both types of learners stay in sync.
This dual approach has another benefit. It's very easy to switch from one style to the other at any time. If someone gets tired of typing, or his or her computer malfunctions, switching to the notes is seamless. Conversely, if someone following the printed notes wants to switch to running an example, it's very easy to find.
James: What motivated you to start writing books?
Bob: I’ve always enjoyed writing newsletter and journal articles. My books on R started out just as a set of notes that I kept for myself. When I put them online, they started getting thousands of
hits and Springer called to ask if I could make it a book. I really didn’t think I had enough information, but it kept growing. The second edition of R for SAS and SPSS Users is 686 pages and I have
notes on a few topics that I wish I had added. If I ever find time for a third edition, they’ll be in there.
James: Thank you Bob for your time!
If you are looking to learn R and are already familiar with software like SAS, SPSS or Stata, do check out Bob’s upcoming workshops here and here. | {"url":"http://blog.revolutionanalytics.com/profiles/","timestamp":"2014-04-25T07:34:12Z","content_type":null,"content_length":"91255","record_id":"<urn:uuid:7024137f-c5d1-4b56-8f6e-560bb0313e5a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Confronting The Unprovable - Gödel And All That
Kurt Gödel (1906-1978)
The argument that Gödel used is very simple.
Let’s suppose that there is a machine M that can do the job we envisaged for a complete system.
That is, the machine can prove or disprove any theorem that we feed to it. It is programmed with the axioms of the system and it can use them to provide proofs.
Now suppose we ask for the program for the machine written in the same logic used for the theorems that the machine proves. After all any computer can be reduced to logic and all we
are requesting is the logical expression that represents the design of the computer – call this PM for "Program for M".
Now we take PM and construct a logical expression which says
“The machine constructed according to PM will never prove this statement is true”
and call this statement X.
Now we ask the machine to prove or disprove X.
Consider the results. If the machine says “yes this is true” then the statement is false. If the machine says that the statement is false then the statement is true.
The proof that there are theorems that cannot be proved
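In the standard notation of provability logic, X is precisely a Gödel sentence G for the machine: G ↔ ¬Prov(⌜G⌝), i.e. G holds exactly when the machine built from PM cannot prove G. If the machine is sound (it proves only true statements) then it can prove neither G nor its negation, which is the dilemma just described.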
You see what the problem is here and you can do this sort of trick with any proof machine that is sufficiently powerful to accept its own description as a theorem.
Any such machine, and the system of logic on which it is based, must by its very nature be incomplete, in that there are theorems that can be written in the system that it cannot prove either true or false.
In other words there are three types of theorem in the system – those that are provably true, those that are provably false and those that are undecidable using the axioms that we have at our disposal.
OK, you may say, but is it possible for a machine to be powerful enough to attempt to prove a statement that involves its own description? Perhaps it is in the nature of machines that they cannot cope with their own description.
The really clever part of Gödel’s work was to find that any axiomatic system powerful enough to describe the integers, i.e. powerful enough to describe simple arithmetic, has this property.
That is, arithmetic isn’t a complete axiomatic theory, which means that there are theorems about the integers that have no proof or disproof within the theory of arithmetic.
So now think about the 300-year search for a proof for Fermat’s last theorem.
Perhaps it wasn’t that the mathematicians weren’t trying hard enough. Perhaps there was no proof. As it turns out we now know that a proof exists but there was a long time when it
was a real possibility that the theorem was undecidable.
You clearly understand the idea but do you believe it?
Consider the following problem: there seem to be lots of paired primes, i.e. primes that differ by two: (3,5), (5,7), (11,13) and so on. It is believed that there are infinitely many such pairs - the so-called "twin prime conjecture" - but so far there is no proof.
So what are the possibilities?
You examine number pairs moving over larger and large integers and occasionally you meet paired primes.
Presumably either you keep on meeting them or you there comes a point when you don’t.
In other words, the theorem is either true or false. And as it is true or false presumably there should be a proof of this.
If you have taken Gödel’s theorem to heart you should know that this doesn’t have to be the case. The integers go on forever and you can’t actually decide the truth of the theorem
by looking at the integers.
How far do you have to go without seeing a pair of primes to be sure that you aren’t ever going to see another pair?
How many pairs do you have to see to know that you are going to keep seeing them?
There is no answer to either question. In the same way, why do you assume that there is a finite number of steps in a proof that will determine the answer?
Why should the infinite be reducible to the finite?
Only because we have grown accustomed to mathematics performing this miracle.
In the case of the twin prime conjecture, recent progress has proved that there are infinitely many pairs of primes separated by at most N, where N is less than 70 million. This doesn't mean that N is 2, however. Collaborative work by many mathematicians has reduced the upper limit (at the end of 2013) to less than 576, which seems like progress. Can the bound be pushed down as far as 2, or is there really no proof?
Fermat’s last theorem states that a particular equation has no positive integer solutions for any n greater than 2. To prove this the hard way would require examining the equation for n equal to 3, 4, 5… and for all a, b and c, which means testing an infinite number of possibilities.
Yet we have a finite proof.
We have a logical derivation of the truth of the theorem, which doesn’t involve testing an infinite number of cases.
This is what Gödel’s theorem really is all about.
There are statements that are undecidable. If you add additional axioms to the system the statements that were undecidable might well become decidable but there will still be valid
statements that are undecidable. Indeed every time you expand the axioms you increase the number of theorems that are decidable and undecidable.
It’s as if mathematics at the turn of the 20th century was seeking the ultimate theory of everything and Gödel proved that this just wasn’t possible.
So far so good, or bad depending on your point of view.
You may even recognise some of this theory as very similar to the theory of Turing machines and non-computability, in which case it might not be too much of a shock to you. However,
at the time they were thought up both Gödel’s and Turing’s ideas were revolutionary and they were both regarded with suspicion and dismay.
It was thought to be the end of the dream: mathematics was limited. Mathematics wasn’t perfect and in fact every area of mathematics contained its limitations.
Today you will find it argued that Gödel’s theorem proves that God exists. You will find it argued that Gödel’s theorem proves that human thought goes beyond logic, that the human mind is capable of seeing truths that mathematics cannot prove. It is also argued that it limits artificial intelligence, because there are things that any machine cannot know, and hence that human intelligence is special because it can know what the machine cannot.
If you think about it, Gödel’s theorem proves none of this. It doesn’t even suggest that any of this is the case.
Gödel’s theorem doesn’t deal with probabilities and what we believe, only in the limitations of finite systems in proving assertions about the infinite.
Sometimes the infinite is regular enough to allow something to be proved. Sometimes, in fact most of the time, it isn’t.
But important though this is we live in a finite personal universe and we don’t demand perfect proof. We go with the flow, guess and accept good probabilities as near certainties.
Square and Cube Induction Proof
February 16th 2011, 03:14 PM #1
Nov 2010
Square and Cube Induction Proof
Hey all, some help with the following proof would be appreciated:
For all integers k >= 2, k^2 < k^3
I believe induction can be used here. Thanks a bunch!
February 16th 2011, 04:18 PM #2
Divide by k^2 on both sides.
Since k >= 2, this will always hold true.
February 17th 2011, 05:40 AM #3
MHF Contributor, Apr 2005
The actual proof, of course, would go the other way:
Since $k\ge 2$, $k> 1$. Now multiply on both sides by the positive number $k^2$.
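For completeness, the two replies above can be packaged as a formal induction (a sketch):
Base case ($k=2$): $2^2 = 4 < 8 = 2^3$.
Inductive step: assume $k^2 < k^3$ for some $k\ge 2$. Since $k+1 > 1$ and $(k+1)^2 > 0$, we get $(k+1)^3 = (k+1)(k+1)^2 > 1\cdot(k+1)^2 = (k+1)^2$.
Notice the step never actually uses the hypothesis: the inequality holds directly for every $k\ge 2$, which is exactly why the one-line argument (multiply $k>1$ through by the positive number $k^2$) is the cleaner proof.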
Advance Counting
November 27th 2008, 02:06 PM #1
Junior Member
Nov 2008
Advance Counting
Assume that the population of the world in 2002 was 6.2 billion and is growing at the rate of 1.3% a year.
(a) Set up a recurrence relation for the population of the world n years after 2002.
(b) Find an explicit formula for the population of the world n years after 2002.
(c) what will the population of the world be in 2022?
November 27th 2008, 03:03 PM #2
Super Member, May 2006, Lexington, MA (USA)
Hello, bhuvan!
Assume that the population of the world in 2002 was 6.2 billion
and is growing at the rate of 1.3% a year.
(a) Set up a recurrence relation for the population of the world $n$ years after 2002.
$P_0 \:=\:6.2$ billion
$P_n \;=\;1.013\!\cdot\!P_{n-1}$
(b) Find an explicit formula for the population of the world n years after 2002.
$P(n) \;=\;6.2(1.013)^n$ billion
(c) What will the population of the world be in 2022?
When $n = 20\!:\;\;P(20) \;=\;6.2(1.013)^{20} \;\approx\;8.0275$ billion
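(Unrolling the recurrence shows where the explicit formula comes from: $P_n \;=\;1.013\,P_{n-1} \;=\;1.013^2P_{n-2} \;=\;\cdots\;=\;1.013^n\,P_0$, so $P(20) \;=\;6.2(1.013)^{20}$. As a numerical check, $20\ln(1.013)\approx 0.2583$ and $e^{0.2583}\approx 1.2948$, giving $6.2\times 1.2948 \approx 8.03$ billion.)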
November 28th 2008, 08:16 AM #3
Junior Member, Nov 2008
Thank You !!
Definition of finite in English:
Syllabification: fi·nite
• 1 Having limits or bounds: every computer has a finite amount of memory
More example sentences
□ On another, more important level, the book is about Levin's research in cosmology, and her idea that the universe may be finite in size.
□ In this era of limited resources and finite health - care budgets, it is important to assess not just clinical effectiveness but also cost effectiveness.
□ We know, that a single universe is enormously large, but always finite in size due to its Big-Bang origin.
limited, restricted, determinate, fixed
• 1.1 Not infinitely small: one’s chance of winning may be small, but it is finite
More example sentences
□ They are neither finite quantities nor quantities infinitely small, nor yet nothing.
□ Obviously, taking infinitesimal steps in the direction of the gradient would take forever, so some finite step size must be used.
□ Any probe must be made of some material and have a finite size.
• 2 Grammar (Of a verb form) having a specific tense, number, and person. Contrasted with nonfinite.
More example sentences
□ In English, tense must be expressed in all finite verb phrases.
□ A temporal profile needs to be contributed by a finite verb, as in I walked into the garden, We drove towards the sea.
□ Form a question and make it specific and finite so that the answer is easily recognizable.
Derivatives
finitely
More example sentences
□ When laboratory subjects are not allowed to communicate, their behavior closely approximates the behavior that is predicted using finitely repeated, non-cooperative game theory.
□ Euclid's geometry is an example of a finitely axiomatized theory: it involves a fixed number of proper axioms (together with a few definitions and general principles).
□ He only accepted mathematical objects that could be constructed finitely from the intuitively given set of natural numbers.
finiteness
More example sentences
□ Yet under the storms of insignificant daily struggles, we lose sight of the finiteness and fragility of our own existence, one with which we were never comfortable from the moment we came
squalling out of the womb.
□ Indeed, economic discussion during the last part of the 20th century included a sharp debate about whether the finiteness of natural resource availability imposed a serious limitation on
economic growth and development.
□ In fact, determined efforts by many physicists and mathematicians over a period of more than 20 years have failed to produce a proof of the finiteness or consistency of string theory.
late Middle English: from Latin finitus 'finished', past participle of finire (see finish).
Mount Laurel Math Tutor
Find a Mount Laurel Math Tutor
...I also have 7+ years experience teaching college-level math. I am a world-renowned expert in the Maple computer algebra system, which is used in many math, science, and engineering courses. My
tutoring is guaranteed: During our first session, I will assess your situation and determine a grade that I think you can get with regular tutoring.
11 Subjects: including differential equations, logic, calculus, precalculus
I am a fun, helpful, and experienced tutor for the Sciences (biology and chemistry), Math (geometry, pre-algebra, algebra, and pre-calculus), English/Grammar, and the SATs. For the SAT, I implement a results-driven and rigorous 7-week strategy. PLEASE NOTE: I only take serious SAT students who have...
...I have taught students in grades 2-12 in a variety of settings - urban classrooms, after-school programs, summer enrichment, and summer schools. I work with students to develop strong
conceptual understanding and high math fluency through creative math games. Having worked with a diverse popula...
9 Subjects: including SAT math, algebra 1, algebra 2, geometry
...My other teaching-related (volunteer) experiences include teaching and directing the curriculum for the pre-school children at my home church in Lansdale, PA, coordinating a summer science
program in Northeast Philly, and directing the math-reading curriculum for a summer camp in North Philly. T...
12 Subjects: including algebra 1, algebra 2, biology, chemistry
...I am very effective with athletically/artistically inclined students and have streamlined routines for special needs students that last for years. There is no dispute that my students become
self sufficient in study skills and time management planning. I have learned that sometimes students are...
43 Subjects: including calculus, ACT Math, precalculus, SAT math | {"url":"http://www.purplemath.com/Mount_Laurel_Math_tutors.php","timestamp":"2014-04-19T17:51:08Z","content_type":null,"content_length":"24185","record_id":"<urn:uuid:7999a72c-107c-4339-9741-e136506dd12a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electrical Circuits - Series and Parallel Circuits, Ohms Law
Electrical Circuits 16 of 31
If, for example, two or more lamps (resistances R1 and R2, etc.) are connected in a circuit as follows, there is only one route that the current can take. This type of connection is called a series
connection. The value of current I is always the same at any point in a series circuit.
The combined resistance RO in this circuit is equal to the sum of the individual resistances R1 and R2. In other words: the total resistance (RO) is equal to the sum of all resistances (RO = R1 + R2 + R3 + ...).
Therefore, the strength of the current (I) flowing in the circuit can be found from Ohm's law: I = V / RO, where V is the source voltage.
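As a quick illustration, the two formulas can be combined in a few lines of code (a sketch; the voltage and resistance values below are made-up examples, not taken from the lesson):

# Series circuit: total resistance is the sum; one common current flows.
V = 12.0                       # supply voltage in volts (example value)
resistances = [4.0, 8.0]       # R1, R2 in ohms (example values)

R0 = sum(resistances)          # R0 = R1 + R2 + R3 + ...
I = V / R0                     # Ohm's law: I is the same at every point
print(f"R0 = {R0} ohms, I = {I} A")   # R0 = 12.0 ohms, I = 1.0 A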
Typed tagless-final interpreters for PCF, with higher-order abstract syntax. Based on the code accompanying the paper by Jacques Carette, Oleg Kiselyov, and Chung-chieh Shan.
class Symantics repr where
The language is simply-typed lambda-calculus with fixpoint and constants. It is essentially PCF. The language is just expressive enough for the power function. We define the language by parts, to
emphasize modularity. The core plus the fixpoint is the language described in the paper
Hongwei Xi, Chiyan Chen, Gang Chen Guarded Recursive Datatype Constructors, POPL2003
which is used to justify GADTs.
Symantics S
Symantics R
ESymantics repr => Symantics (ExtSym repr)
Sample terms and their inferred types
Typed and tagless interpreter
newtype R a
FixSYM R
BoolSYM R
MulSYM R
Symantics R
type VarCounter = Int
• R is not a tag! It is a newtype. The expression with unR _looks_ like tag introduction and elimination. But the function unR is *total*. No run-time error is possible at all -- and this fact is fully apparent to the compiler. Furthermore, at run-time, (R x) is indistinguishable from x.
• R is a meta-circular interpreter. This is easier to see now. So, object-level addition is _truly_ the metalanguage addition. Needless to say, that is efficient.
• R never gets stuck: no pattern-matching of any kind.
• R is total.
• Another interpreter
newtype S a
FixSYM S
BoolSYM S
MulSYM S
Symantics S
class MulSYM repr where
• The crucial role of repr being a type constructor rather than just a type: it lets some information about object-term representation through (the type) while keeping the representation itself hidden.
• Extensions of the language
• Multiplication
Extensions are independent of each other
Extending the R interpreter
Extending the S interpreter | {"url":"http://hackage.haskell.org/package/liboleg-2010.1.7.1/docs/Language-TTF.html","timestamp":"2014-04-18T02:09:31Z","content_type":null,"content_length":"16270","record_id":"<urn:uuid:324daf17-53e7-43b1-b42e-adf636b802ee>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reply to comment
Plus has opened its temporary head office in Hyderabad! We're here for the International Conference of Women Mathematicians, starting today, and the International Congress of Mathematicians (ICM)
starting on Thursday. The highlight (apart from Plus' presentation on public engagement with maths) will be the award of the Fields Medals for 2010.
The Fields Medal is the most prestigious prize in mathematics, akin to the Nobel Prize. It is awarded to up to four mathematicians at each ICM, which meets every four years. The prize is awarded to
mathematicians under the age of 40 in recognition of their existing work and for the promise of their future achievements. You can read more about the Fields Medal on Plus.
And the Fields medal isn't the only prestigious prize being awarded at the ICM. The Rolf Nevanlinna Prize recognises achievements in mathematical aspects of computer and information science. The Carl
Friedrich Gauss Prize, which was first awarded at the last congress in 2006, is for outstanding mathematical contributions that have found significant applications outside of mathematics. The first
recipient of this prize was the Japanese mathematician Kiyoshi Itô, then aged 90, for his development of stochastic analysis. His work has allowed mathematicians to describe Brownian motion — a
random motion similar to the one you see when you let a particle float in a liquid or gas. Itô's theory applies also to the size of a population of living organisms, to the frequency of a certain
allele within the gene pool of a population, or even more complex biological quantities. It is also now integral to financial trading as it forms the basis of the Black-Scholes formula underlying
almost all financial transactions that involve options or futures. (You can read more about the Black-Scholes formula in A risky business: how to price derivatives on Plus.)
This year's ICM also sees the inauguration of a new prize, the Chern Medal, for an individual whose accomplishments warrant the highest level of recognition for outstanding achievements in the field
of mathematics, regardless of their field or occupation. The medal is in memory of the outstanding Chinese mathematician Shiing-Shen Chern. Plus is looking forward to finding out the winners of all
of the prizes at this year's ICM, and more importantly, to learning about their mathematical achievements and how they have contributed to mathematics and society at large. Stay tuned to our news
section, our blog or follow us on Twitter to find out all the news first. | {"url":"http://plus.maths.org/content/comment/reply/5279","timestamp":"2014-04-17T03:52:18Z","content_type":null,"content_length":"24431","record_id":"<urn:uuid:425554f9-507f-41c6-92d6-42d30bc853dc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Round-robin Scheduling
Copyright © University of Cambridge. All rights reserved.
'Round-robin Scheduling' printed from http://nrich.maths.org/
Note: for this problem it's handy to have colour pens ready!
A round-robin tournament is one in which every player plays against everyone else once. For example, with 3 players, we will have 3 matches: A-B, B-C, C-A. How many matches are needed for 4 players?
5 players? N players?
If we can schedule two matches in the same time slot (round), how many rounds will it take for a 3-player round-robin tournament? 4-player tournament? 5-player tournament?
Now that we have played with the scheduling problem for a bit, let's think of a graphical way to represent the problem. If we represent a player by a point and each match as a line between two
points, the graph representing all the matches in a 3-player tournament will be the triangle ABC. The graph representing a 4-player tournament will be the quadrilateral ABCD together with diagonal
lines AC and BD. Can you draw the graph for 5-player tournament? 6-player tournament?
If we schedule two matches in a round, how can we show in the same graph which two pairs of players are playing in the same round (hint: if you have only been using one pen, now is the time to use
another colour!) Try this for tournaments with 4, 5 and 6 players. Be careful with the 6-player tournament! Remember that a particular pair of players should only play each other once, so if the line
connecting the same two players is coloured more than once then we are in trouble. Try again if it doesn't work the first time. It can be done, but the solution is surprisingly tricky to find if we
start drawing the graph as a hexagon with all vertices connected to each other.
There is a nice trick, which works for all tournaments with an even number of players. We will illustrate this with a 6-player tournament. Instead of drawing a hexagon, draw a pentagon with one extra
vertex in the center of the graph. Connect one vertex of the pentagon to the vertex at the center and connect the remaining vertices with horizontal lines. For the next round, use a different
coloured pen to do the same connecting but with each vertex of the pentagon rotated into the neighbouring vertex. Do this until all lines are coloured. We end up with a fully-connected graph with no
line coloured twice! Why does this work? Can we use this trick for a 5-player tournament?
We can also include information about home/away game by an arrowed line. For tournaments with 4, 5, 6 players, how many games are played home or away by each player? Can you find a fair scheduling
such that no team plays all of its games at home, nor all of them away?
Watauga, TX Algebra 1 Tutor
Find a Watauga, TX Algebra 1 Tutor
...I can teach Chinese from elementary to intermediate level (junior high). I have studied Chinese up to my Associate Degree. I have 25 years of teaching experience and I have taught ESL in
Taiwan, Singapore and Arlington, TX. I have taught in schools, tutored many students and also homeschooled our daughters.
15 Subjects: including algebra 1, reading, geometry, Chinese
...I have had a great deal of success helping students increase their grades by at least one letter grade. I have over twenty years of experience teaching Algebra 2 at the high school level. I
have also taught College Algebra.
15 Subjects: including algebra 1, chemistry, calculus, geometry
...In addition, I am a believer in Jesus Christ and truly seek to live my life as such. My students tend to enjoy my class because of my silly sense of humor, laid back personality, yet firm
discipline and respect for each other. I love to have fun but understand work must be done.
11 Subjects: including algebra 1, reading, special needs, prealgebra
...I taught ballet for four years through college and danced for 17 years. When I teach or tutor I focus on discovering a student's specific learning pattern; this made me an excellent ballet
teacher and a great tutor. I love working around problems and thinking outside the box to find solutions.
31 Subjects: including algebra 1, chemistry, geometry, biology
...I taught history courses in Monterrey, Mexico, and international politics at UT Dallas. To be able to teach a survey in international politics and theory of international relations, one must
be versed in modern European history. After all, the concepts of sovereignty, nation-states, international law, etc. emerged from modern European experience after the Thirty Years' War.
37 Subjects: including algebra 1, Spanish, reading, statistics
Physics Forums - View Single Post - Is action and reaction instantanious?
nobody has addressed my very simple question. what about a hypothetical massless charged particle? it has self inductance so it would accelerate under the influence of an external field in exactly
the same way that a massive particle would. the force due to self inductance exactly balancing the force due to the external field. net force is zero yet it still accelerates.
There is, of course, no such thing as a massless charged particle. A massless particle travels at the speed of light relative to all inertial frames, so it cannot accelerate.
A particle with charge q in an electric field [itex]\vec E[/itex] experiences a force [itex]\vec F[/itex] = q[itex]\vec E[/itex]. The only thing that affects its acceleration is its mass. [itex]\vec
F[/itex] = q[itex]\vec E[/itex] = ma. There is no other force. | {"url":"http://www.physicsforums.com/showpost.php?p=1902960&postcount=101","timestamp":"2014-04-20T03:28:37Z","content_type":null,"content_length":"8373","record_id":"<urn:uuid:272dbddb-ec1b-4435-b3dd-beaa4985d28e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cosecant: Transformations
Transformations and argument simplifications
Argument involving basic arithmetic operations
Argument involving inverse trigonometric and hyperbolic functions
Involving sin^-1
Involving cos^-1
Involving tan^-1
Involving cot^-1
Involving csc^-1
Involving sec^-1
Involving sinh^-1
Involving cosh^-1
Involving tanh^-1
Involving coth^-1
Involving csch^-1
Involving sech^-1
Addition formulas
Half-angle formulas
Multiple arguments
Argument involving numeric multiples of variable
Argument involving symbolic multiples of variable
Products, sums, and powers of the direct function
Products of the direct function
Products involving the direct function
Sums of the direct function
Sums involving the direct function
Involving other trigonometric functions
Involving sec
Involving hyperbolic functions
Involving csch
Involving sech
Powers of the direct function
Powers involving the direct function | {"url":"http://functions.wolfram.com/ElementaryFunctions/Csc/16/ShowAll.html","timestamp":"2014-04-18T23:34:19Z","content_type":null,"content_length":"83945","record_id":"<urn:uuid:6baff188-358f-4464-b26e-6aa3f1caea2b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] where would i put this line on a graph
Hi andyboy179,
What you have there is the graph of a straight line.
$y=2x+8$ is in slope-intercept form $y=mx+b$.
The y-intercept is 8. So graph (0, 8).
The slope is 2. So, from (0, 8) move up 2 units and right 1 unit to plot your second point at (1, 10).
Connect the two points with a straight line and extend the line past the two plotted points (the line is infinitely long).
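If you'd like to see the line drawn by a computer, a few lines of Python will do it (a sketch using the matplotlib library):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 4, 100)
plt.plot(x, 2 * x + 8)            # y = 2x + 8
plt.scatter([0, 1], [8, 10])      # the y-intercept (0, 8) and the point (1, 10)
plt.grid(True)
plt.show()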
Minimum area of cylinder
March 6th 2011, 02:15 AM
Minimum area of cylinder
I have to find the minimum surface area of a right cylinder using a numerical method.
I have done it with differentiation, and I have looked back at some of my work and in all my books.
I can get that the surface area = 2pi(r^2) + 2V/r
And found that using arithmetic mean-geometric mean inequality
SA = 2pi(r^2) + 2V/r
SA = 2pi(r^2) + V/r + V/r
This is the part I don't quite understand -- how it changes to this:
$SA \ge 3\sqrt[3]{2\pi r^2 \cdot \frac{V}{r} \cdot \frac{V}{r}}$
I could just do it and be done, but I really prefer to understand what I am doing.
Any advice would be great, thanks
March 6th 2011, 04:50 AM
1. I assume that the volume of the cylinder is a constant.
2. The surface area of a cylinder is:
$a_s = 2 \cdot \pi r^2 + 2 \pi r \cdot h$
3. The volume of the cylinder is:
$V = \pi r^2 \cdot h$
If you calculate $\dfrac{2V}r=2 \dfrac{\pi r^2 \cdot h}r = 2 \pi r \cdot h$ you'll get indeed the curved surface of the cylinder twice.
4. To get the minimum surface area you have to differentiate
$a_s(r)=2 \cdot \pi r^2+\dfrac{2V}r$
$a'_s(r)=4 \cdot \pi r - \dfrac{2V}{r^2}$
5. Now solve $a'_s(r)=0$ for r:
$4 \cdot \pi r - \dfrac{2V}{r^2}=0~\implies~4 \cdot \pi r^3 =2V$
So after moving some stuff around you'll get: $r = \sqrt[3]{\dfrac V{2\pi}}$
6. Now plug in this value into the equation of the surface area and you'll get your result.
March 6th 2011, 05:31 AM
Thank you for your response; maybe I need to clarify a bit better.
Yes, the volume is constant.
I have already successfully found the minimum radius value with differentiation and proved it was a minimum with the second derivative,
and also worked out the surface area with this radius value, giving me the minimum surface area.
But I found a new formula that works for the minimum surface area using only the volume; I just don't understand how to derive this new formula.
My problem is I don't understand how to get from
$2\pi r^{2}+\frac{V}{r}+\frac{V}{r}$ to the cube-root expression above.
I hope this LaTeX works :p
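For reference, the step in question is the three-term arithmetic mean-geometric mean inequality $\frac{a+b+c}{3} \ge \sqrt[3]{abc}$ for positive $a, b, c$. Taking $a = 2\pi r^2$ and $b = c = \frac{V}{r}$ gives (a sketch of the derivation):

$SA = 2\pi r^2 + \frac{V}{r} + \frac{V}{r} \ge 3\sqrt[3]{2\pi r^2 \cdot \frac{V}{r} \cdot \frac{V}{r}} = 3\sqrt[3]{2\pi V^2}$

Equality holds exactly when the three terms are equal, i.e. when $2\pi r^2 = \frac{V}{r}$, which gives $r = \sqrt[3]{\frac{V}{2\pi}}$ -- the same radius found by differentiation above.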
After studying this lesson, you will be able to:
• Factor trinomials.
Steps of Factoring:
1. Factor out the GCF
2. Look at the number of terms:
• 2 Terms: Look for the Difference of 2 Squares
• 3 Terms: Factor the Trinomial
• 4 Terms: Factor by Grouping
3. Factor Completely
4. Check by Multiplying
This lesson will concentrate on the second step of factoring: Factoring Trinomials.
**When there are 3 terms, we are factoring trinomials. Don't forget to look for a GCF first.**
Factoring trinomials often requires some trial and error. Don't get frustrated. Try all possible combinations.
Here's an explanation of Factoring Trinomials, using x^2 + 7x + 10 as the example:
1. Look for the GCF - in this case there's not a common factor other than 1
2. Look at the number of terms - it has 3, so it is a trinomial
3. To factor a trinomial, create 2 sets of parentheses
4. Determine what the factors of the first term are and write them in the first positions of each parenthesis. The factors of x^2 are x and x.
5. Determine all the possible factors of the constant term. The factors of 10 are 1, 10 and 2, 5
6. The INSIDE / OUTSIDE COMBINATION must add up to the middle term.
1 and 10 won't add up to 7 (the middle term)
2 and 5 do add up to 7 (if both are positive) so those factors are the ones we use
Write the factors of the constant term in the last positions:
(x + 2) (x + 5) -- this is the answer.
If we multiply the INSIDE part we get 2x; if we multiply the OUTSIDE part we get 5x.
5x + 2x = 7x (the inside/outside combination adds up to the middle term)
We check the answer by multiplying: (x + 2)(x + 5). Use FOIL to get x^2 + 7x + 10.
If we have some idea what signs to use, that makes our factoring much easier.
Rules for determining the signs in each factor:
If the Constant Term is Positive, both signs will be the same (this means that either both will be positive or both will be negative)
If the Constant Term is Negative, the signs will be different (this means that one will be positive and one will be negative)
Example 1
Factor x^2 + 5x + 6
This is a trinomial (has 3 terms). There is no GCF other than one. So, we start with 2 parentheses:
Using our signs rules, we can determine the signs for the factors. Since the constant term is positive we know the signs will be the same. Since we want the factors to add up to +5x, the signs will both have to be positive. Keep this in mind.
1st: Find the factors of the first term. The factors of x^2 are x and x. These go in the first positions. We can also go ahead and put in the signs (both positive).
2nd: Find the factors of the constant term. The factors of 6 are 1, 6 and 2, 3. Remember, we need the inside/outside combination to add up to the middle term, which is 5x. Since 2 and 3 add up to 5, we choose those factors:
(x + 2) (x + 3)
Check by using FOIL: (x + 2)(x + 3) = x^2 + 3x + 2x + 6, which is x^2 + 5x + 6.
Example 2
Factor x^2 - 8x + 12
This is a trinomial (has 3 terms). There is no GCF other than one. So, we start with 2 parentheses:
Using our signs rules, we can determine the signs for the factors. Since the constant term is positive we know the signs will be the same. Since we want the factors to add up to -8x, the signs will both have to be negative. Keep this in mind.
1st: Find the factors of the first term. The factors of x^2 are x and x. These go in the first positions. We can also go ahead and put in the signs (both negative).
2nd: Find the factors of the constant term. The factors of 12 are 1, 12 and 2, 6 and 3, 4. Remember, we need the inside/outside combination to add up to the middle term, which is -8x. Since 2 and 6 add up to 8, we choose those factors:
(x - 2) (x - 6)
Check by using FOIL: (x - 2)(x - 6) = x^2 - 6x - 2x + 12, which is x^2 - 8x + 12.
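If you have a computer handy, a computer-algebra system can confirm both the factoring and the check-by-multiplying step. A sketch using Python's sympy library:

from sympy import symbols, factor, expand

x = symbols('x')
print(factor(x**2 + 5*x + 6))    # (x + 2)*(x + 3)
print(factor(x**2 - 8*x + 12))   # (x - 6)*(x - 2)
print(expand((x - 2)*(x - 6)))   # x**2 - 8*x + 12, the "check by multiplying" step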
Re: hash for eq?
Panu Kalliokoski wrote:
On Tue, Aug 09, 2005 at 12:17:11AM -0700, Per Bothner wrote:
Should the reference implementation of make-hash-table be
modified so the default hash function for eq? is hash-by-identity?
I.e. add in the obvious place:
In the reference implementation, this does not help any, because it has
(define hash-by-identity hash). If hash-by-identity were provided as a
primitive, then yes, it would make sense. So it's just a question of
whether we can trust Scheme implementors that use the SRFI code
_partially_ to notice this particular thing.
I think it would clarify the intent better, both for users (it's useful
to look at the reference implementation to see an alternative and more
formal "specification"), and for implementors (often implementors take
a reference implementations and tweak places that the reference
implementation *recommends* shoudld be tweaked). If you make the
change I suggested, it means a port has to modify less code (and
the diffs are better localized), and it also make it clear that an
implementation *should* use hash-by-identity (if provided) as the
default hash function for eq?.
--Per Bothner
per@xxxxxxxxxxx http://per.bothner.com/ | {"url":"http://srfi.schemers.org/srfi-69/mail-archive/msg00049.html","timestamp":"2014-04-20T06:44:58Z","content_type":null,"content_length":"5186","record_id":"<urn:uuid:b1a7bd03-e58f-4d48-9f77-6408d4e31ec8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thin wall Pressure Vessel(cylinder)
It is not neglected in the thick wall solution, so look up the thick wall solution and see how it reduces to the thin wall solution when the ratio of the thickness to the radius decreases. From the
thick wall solution, you can calculate all the stress components at all locations. Play with the results. | {"url":"http://www.physicsforums.com/showthread.php?s=77e69fa995ed2236ffae87f9b95c7c04&p=4277442","timestamp":"2014-04-25T08:21:23Z","content_type":null,"content_length":"23780","record_id":"<urn:uuid:5b48f2f1-e6bb-44c2-b9d1-62b1c6596352>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
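For reference, the reduction works out like this (a sketch; Lamé's thick-wall hoop stress for internal pressure $p$, inner radius $a$, outer radius $b = a + t$, evaluated at the inner wall):

$\sigma_\theta \big|_{r=a} = p\,\frac{a^2 + b^2}{b^2 - a^2} \;\approx\; p\,\frac{2a^2}{2at} = \frac{pa}{t} \quad \text{for } t \ll a,$

which is the familiar thin-wall hoop stress. So nothing is neglected in the thick-wall form; the thin-wall result is just its limit.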
MATLAB CODE: Local Histogram Equalization
For every pixel, the histogram equalization is done based on the neighborhood values. Here I used a 3-by-3 window matrix for explanation. By changing the window matrix size, the histogram equalization
can be enhanced. By changing the values of M and N, the window size can be changed in the code given below.
Steps to be performed:
1. Pad the image with zeros on all sides (PadM rows and PadN columns).
2. Slide an M-by-N window over every pixel of the original image.
3. Build the histogram of the window and form its cumulative distribution.
4. Map the center pixel through the local CDF to obtain the equalized value.
% Reconstructed sketch: the loop bodies were lost in extraction, so the code
% below is a best-effort completion using the variable names in the fragments.
A = imread('image.png');                 % grayscale input (assumed file name)
%FIND THE NUMBER OF ROWS AND COLUMNS TO BE PADDED WITH ZERO
M = 3; N = 3; PadM = floor(M/2); PadN = floor(N/2);
B = padarray(A, [PadM PadN]);            % zero-padded copy of A
Img = zeros(size(A), 'uint8');
for i = 1:size(B,1) - (PadM*2)
    for j = 1:size(B,2) - (PadN*2)
        t = zeros(256, 1);               % histogram of the M-by-N neighborhood
        for x = 1:M
            for y = 1:N
                v = double(B(i+x-1, j+y-1)) + 1; t(v) = t(v) + 1;
            end
        end
        for l = 2:256, t(l) = t(l) + t(l-1); end   % cumulative distribution
        Img(i,j) = uint8(255 * t(double(A(i,j)) + 1) / (M*N));
    end
end
subplot(2,1,1); imhist(A);   title('Before Local Histogram Equalization');
subplot(2,1,2); imhist(Img); title('After Local Histogram Equalization');
After Local Histogram Equalization
Histogram equalization of an Image: http://angeljohnsy.blogspot.com/2011/04/matlab-code-histogram-equalization.html
10 comments:
How can this code be modified to perhaps have a window of 7x7
How can this code be modified to have another "window" other than 3x3?
I have modified the code so that you can change the window size at your convenience.
@Aaron Angel
if i have an image of size 112X92 and i want to apply histogram based image processing what appropriate size of window should i give??
if i have an image of size 112X92 an i want to apply histogram based image processing
what appropriate window size should i give?
Use a large window with mxn values in odd number.
SALAM...can anyone plz help me on MATLAB Code: Global Histogram Equalization on this page..i too need it..plz help me as early as u can..lotttt of thanxxx
@kumail abbas
Check this link
how can this code be extended to oriented local histogram equalisation
please help me
i m providing you some data
OLHE is similar to local histogram equalization (LHE), but it captures the orientation of edges while LHE does not. We begin with a brief review of LHE. For each pixel on an image, we perform histogram equalization on the local w-by-h window centering on this pixel, using

f(x) = round( (cdf(x) − cdf_min) · (L − 1) / (w·h − cdf_min) )

where x is the pixel intensity value, cdf(x) is the cumulative distribution function of the histogram of the pixel intensities in the w-by-h window, cdf_min is the cdf value of the minimum intensity in this window, and L is the desired number of output gray levels. Typically a square window is used, and we define k ≡ w = h. We call the center of the k-by-k window the anchor. For LHE, the anchor point is the pixel to be processed itself. For the whole image, each pixel repeats the above operation and uses f(x) to get its new intensity value. Fig. 1 illustrates LHE.

We define the generalized LHE operator as

L^(ξ,η) ( I^(W×H) ) = Î^(W×H)

where (ξ,η) is the relative position of the anchor point to the pixel to be processed, I^(W×H) is the input image whose dimension is W-by-H, and Î^(W×H) is the histogram-equalized image with the same dimension. The typical LHE, which uses the k-by-k local window, can be denoted as L^(0,0), since the anchor point is exactly the pixel to be processed itself.

If the pixel to be processed is brighter than all the neighboring pixels around it, it will have a large intensity value after the local histogram equalization, and vice versa. We can make LHE 'oriented' by changing anchor positions. Fig. 2 shows nine LHE operators using 3-by-3 windows. The eight operators with (ξ,η) other than (0,0) are 'oriented', and they are dubbed the Oriented Local Histogram Equalization operators (OLHE operators) in this paper. The following gives the formal definition of the OLHE operators:

O_1 ≡ L^((k−1)/2, −(k−1)/2),   O_2 ≡ L^(0, −(k−1)/2),
O_3 ≡ L^(−(k−1)/2, −(k−1)/2),  O_4 ≡ L^((k−1)/2, 0),
O_5 ≡ L^(−(k−1)/2, 0),         O_6 ≡ L^((k−1)/2, (k−1)/2),
O_7 ≡ L^(0, (k−1)/2),          O_8 ≡ L^(−(k−1)/2, (k−1)/2),

where k is an odd number. Note that according to our definition, there will always be exactly eight OLHE operators no matter what the value of k is. Given an image I, OLHE produces 8 images, which are O_1(I), O_2(I), ..., O_8(I). The 8 images are referred to as the OLHE images.
hey can anyone please help me with code for enhancement of an image of a finger, because I have to extract the fingerprint from it but can't get past binarization, because the change in intensity of the pattern in the image is very low, so it just gives me a black screen. please help.
how to convert a DC voltage into a higher DC voltage using a transformer?
.. how can I make the voltage constant whatever resistance I put across it?
You'll have to increase the power capability of your DC source on the primary so that it can handle the load on the secondary.
Power is finite in this circuit. Let's say that your DC source can supply 100mW of power, and for simplicity, we'll assume a lossless transformer
P = V * I -> P/V = I
0.1 / 150 = 667 uA
So if your DC source can supply 100mW of power, the most current you could draw on your secondary would be 667 micro-amps to sustain 150V
V = I * R -> R = V/I
150 / (667 x 10^-6) = 224,887 ohms ~ 225k
So your load resistance would have to be at least about 225k. If you increased the load beyond that (lowered the resistance so that current increased), your voltage level would drop. For
example, assume a 100k load.
150 / 100k = 1.5 mA
P = V * I -> V = P/I
In this example, we have said the max power your source can supply is 100 mA, so:
V = 0.1 / 1.5^-3 = 66.7V
As you can see, you have exceeded the max power of your source, and your voltage levels took a hit and fell from 150V to 66V.
To make this simpler, check your load and see how much power it consumes. If you're plugging in a TV or a space heater, see how many Watts it consumes and then make sure your DC source on the primary
can supply that many Watts. | {"url":"http://www.physicsforums.com/showthread.php?t=413778","timestamp":"2014-04-20T00:59:41Z","content_type":null,"content_length":"70237","record_id":"<urn:uuid:eddc4878-1f07-4740-af86-8cb04048fda1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
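To experiment with other numbers, the same bookkeeping can be scripted (a sketch; the 100 mW limit and 150 V target are the example values from this post):

# Power bookkeeping for an ideal, lossless step-up stage.
P_max = 0.1         # source power limit in watts (100 mW)
V_target = 150.0    # desired secondary voltage

I_max = P_max / V_target          # largest current that still allows 150 V
R_min = V_target / I_max          # smallest load resistance at full voltage
print(f"I_max = {I_max * 1e6:.0f} uA, R_min = {R_min / 1e3:.0f} kOhm")

R_load = 100e3                    # a heavier load than R_min
I = V_target / R_load             # current the load would demand at 150 V
V_est = P_max / I                 # power-limited voltage, as estimated above
print(f"with a {R_load / 1e3:.0f}k load: V = {V_est:.1f} V")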
Math Forum Discussions - Re: Matheology § 224
Date: Mar 17, 2013 4:48 PM
Author: Virgil
Subject: Re: Matheology § 224
In article
WM <mueckenh@rz.fh-augsburg.de> wrote:
> On 17 Mrz., 08:18, fom <fomJ...@nyms.net> wrote:
> > On 3/16/2013 4:37 PM, WM wrote:
> >
> > > On 16 Mrz., 21:19, Virgil <vir...@ligriv.com> wrote:
> >
> > >>> In potential infinity there is no necessary line except the last one.
> > >>> We know that with certainty from induction. Every found and fixed line
> > >>> n cannot be necessary, because the next line contains it.
> >
> > >> AS soon as something is identifies as a natural or a FIS of the set of
> > >> naturals, it has a successor. It cannot be either a natural nor a FIS of
> > >> the naturals without a successor. at least by any standard definition of
> > >> naturals.
> >
> > > As soon as a second becomes presence, it has a successor.
> >
> > And what fantasy is this?
> >
> > The successor to the present has existential form but
> > has not yet happened.
> >
> > That is not the Kantian aprioriticity of time.
> >
> > That is not the Hegelian becoming of the present.
> >
> > It is the unfounded object of unjustifiable belief.
> It is the well known and established natural way how time passes and
> how the system of human actions in time goes off.
Mathematical truth is independent of time.
What was true yesterday will be true tomorrow
and what was false yesterday will almost always be false tomorrow.
Of course what had not yet been proved yesterday may be proved by
tomorrow, but it was still as true yesterday as it will be tomorrow.
So that WM's time image is an irrelevancy.
And similarly, the natural numbers of any tomorrow were already natural
numbers in every yesterday.
WM has frequently claimed that HIS mapping from the set of all infinite
binary sequences to the set of paths of a CIBT is a linear mapping.
In order to show that such a mapping is a linear mapping, WM would first
have to show that the set of all binary sequences is a linear space
(which he has not done and apparently cannot do) and that the set of
paths of a CIBT is also a vector space (which he also has not done and
apparently cannot do) and then show that his mapping, say f, satisfies
the linearity requirement that f(ax + by) = af(x) + bf(y),
where a and b are arbitrary members of the field of scalars and x and y
and f(x) and f(y) are arbitrary members of suitable linear spaces.
While this is possible, and fairly trivial for a competent mathematician
to do, WM has not yet been able to do it.
But frequently claims already to have done it. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8656835","timestamp":"2014-04-19T08:05:21Z","content_type":null,"content_length":"4367","record_id":"<urn:uuid:01583ec6-3052-4070-8003-a5c81ed6ba55>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Comparing covariance matrices
GCWaters posted on Wednesday, May 19, 2004 - 12:25 pm
Anyone have an example of a program comparing two covariance matrices only, with no latent factors?
bmuthen posted on Wednesday, May 19, 2004 - 1:10 pm
For say 3 variables, you can simply state
y1 with y2-y3;
y2 with y3;
in the model statement and then apply any degree of equality across the groups.
GCWaters posted on Wednesday, May 19, 2004 - 1:40 pm
Anonymous posted on Monday, March 21, 2005 - 11:46 am
I began EFA on my data but quickly noted that the correlation matrix which came in the output does not match the corr matrix produced in stata with the same data. The correlations are higher in
Mplus. Any ideas on what to do? The data are binary and there are missing data. Naturally, I don't want to continue with the analysis if the matrices don't match.
bmuthen posted on Monday, March 21, 2005 - 5:30 pm
Mplus produces tetrachoric correlations with EFA of binary outcomes - perhaps the other program treats the variables as continuous?
Anonymous posted on Wednesday, March 23, 2005 - 9:35 pm
Thank you and yes, Stata treats the variables as continuous. But, even when I don't tell Mplus that my variables are binary, the two matrices don't match. In other words, if I have Mplus create the
correlation matrix as if my variables are continuous.
After some more research on this, I am now sure that it is a missing data issue. I understand that mplus performs listwise deletion when there are missing data (unless analysis = missing is indicated
in the input). However, it does not seem to be doing listwise deletion by default when creating the corr matrix. I know this because if, in stata, I drop all cases with missing data and transfer that
no-missings-data set over to mplus and ask for the corr matrix, only then can I get mplus's corr matrix to match stata's corr matrix.
Is it true then that for some reason Mplus not default to listwise in the case of corr matrices? What is Mplus doing with the missing data here?
Linda K. Muthen posted on Thursday, March 24, 2005 - 5:25 am
The Mplus default in all cases is listwise deletion. There is no difference for correlation matrices. There must be another issue. For example, perhaps you are not reading the data correctly in
Mplus. If you want to send me the listwise data set, the data set with missing, your Mplus output, your Stata output, and license number to support@statmodel.com, I will be happy to let you know what
is happening.
Anonymous posted on Thursday, March 24, 2005 - 11:35 am
Your comment about "not reading the data correctly" actually led me to the resolution.(I'm posting it in case anyone has a similar situation in the future.)
I used stata2mplus to transfer my data. My missings in stata were identified with periods. I didn't realize that the stata2mplus module would turn my periods into "-9999". Now that I am specifying my
missings as such in mplus, the correlations do, of course, match perfectly.
Thank you for your help. (I'm still happy to send you all of the info, if you want it for any reason.)
Linda K. Muthen posted on Thursday, March 24, 2005 - 1:17 pm
No, that will not be necessary. It sounds like you solved your problem.
Anonymous posted on Thursday, March 24, 2005 - 2:16 pm
Howdy, this is Michael Mitchell (the author of stata2mplus). Indeed, the missing values are written out at -9999 by default (and you can change that if you wish with the "missing(#)" option. Feel
free to write me at mnm@ucla.edu with any future questions on this and I would be happy to help.
Happy Computing,
Michael Mitchell
Parallel Imaging: A Mathematical Frames Perspective
Today I will take a look at the basics of parallel magnetic resonance imaging (MRI) from a frames perspective. I know of only one publication [1] discussing frame-based encoding for MRI nevertheless
I think that the mathematics of frames provides a solid foundation for understanding parallel imaging. In my opinion all practical or currently used methods of doing parallel imaging should be
understood in terms of how well they can approximate a frame-based image reconstruction.
Back in the day there was but one coil element in a receiver designed for magnetic resonance imaging of the head. Modern commercial MRI scanners are now equipped with head-imaging receivers comprised
of an array of 8 to 32 coil elements. These extra elements are primarily used for two purposes: (1) To improve signal-to-noise ratio (SNR), or (2) To decrease the time required to collect the imaging
data (accelerated imaging). The term parallel imaging is usually associated with the second purpose but I will use the term to define imaging methods which simply make use of a receive array for
either purpose (with or without acceleration). So what is a frame and what does this mathematical concept have to do with MRI?
Today I will focus on 2D multislice imaging since this is the workhorse sequence for functional MRI (fMRI). If you are familiar with the mathematics of conventional (non-parallel) 2D imaging then you
are aware that: (1) First the magnetization in a particular 2D “slice” is induced to precess about the ${z}$-axis (the main static field being in the ${z}$-direction) by an applied radio-frequency
magnetic field (the transmit or excitation field) and (2) Subsequently applied time-varying linear gradients in the z-component of the magnetic field are responsible for generating a set of encoding
functions that span the imaging space. These encoding functions of conventional MRI are that of a Fourier basis.
Parallel imaging uses a set of encoding functions which are in part due to the individual receive coil elements used to detect the signal generated by the precessing magnetization. There are
presently two parallel imaging methods by which 2D imaging may be accelerated. One method, which I will refer to as the phase-encoded frame method, accomplishes this acceleration by using the focal
sensitivity of the individual receive coils to eliminate the need to acquire some of the analysis coefficents corresponding to the elements of the conventional Fourier imaging basis. Another method,
which I will refer to as the slice-encoded frame method, uses the focal sensitivity of the receive coils to generate encoding functions in the slice direction ${z}$. In the literature it appears that
this second method is most often referred to as multiband imaging.
Today’s post will address the phase-encoded frame method of parallel imaging leaving the discussion of slice-encoded frame method for another time. The phase-encoded frame uses a set of encoding
functions generated by a product of some subset of the usual Fourier basis with a set of window functions ${g_n^*(x,y)}$ (where ${n = 1, \ldots, N_c}$ and ${N_c}$ is the number of receive coil
elements in the array) that are related to the receive field of the coil elements of the receiver array. This set of encoding functions may comprise a frame. A frame is set of functions that spans
the imaging space and can be used to perform stable analysis (data acquisition) and synthesis (image reconstruction) of the image data.
The data associated with the Fourier-like term of the phase-encoded frame may be sampled in different temporal orders and this temporal order establishes a trajectory through a spatial frequency
space universally (MRI universe that is) referred to as ${k}$-space. In practice different temporal orderings can change the nature of imaging artifacts in the presence of perturbations to the ideal
imaging situation. In this post I will not choose a particular trajectory. So I am in effect assuming an ideal imaging experiment with none of the many nasty perturbations that may occur in practical
MRI. In future posts I will return to the topic of trajectories (particularly that of the 2D multislice imaging workhorse of fMRI – echo planar imaging) and the artifacts associated with them.
So without further ado …
The Phase-Encoded Frame
So, what is a frame? A few definitions and properties should be enough to get us started but please see Ole Christensens’s book [2] for a rigorous introduction to frames.
Frame Definition: A set of functions ${{\mathcal F}= \lbrace f_n \rbrace}$ in a Hilbert space ${\mathcal H}$ is called a frame if there exists ${A>0, B<\infty}$ so that, for all ${f}$ in ${\mathcal
$\displaystyle A \| f \|^2 \le \sum_n |\langle f, f_n \rangle|^2 \le B \| f \|^2 \ \ \ \ \ (1)$
The ${A}$ and ${B}$ are called the frame bounds. If ${A=B}$ the frame is said to be a tight frame. There will be more to say about frame bounds and tight frames in later posts.
Frame Operator Definition: To every frame ${{\mathcal F}}$ there corresponds an operator ${S}$, known as the frame operator, from ${\mathcal H}$ onto itself defined by
$\displaystyle S f = \sum_n \langle f, f_n \rangle f_n \ \ \ \ \ (2)$
Synthesis Property: If ${{\mathcal F}}$ constitutes a frame for all ${f}$ in ${\mathcal H}$ then
$\displaystyle f = \sum_{n'=-\infty}^\infty \langle f, f_{n'} \rangle \hat{f}_{n'} \ \ \ \ \ (3)$
where the set of functions ${\lbrace \hat{f}_n \rbrace}$ is called the dual frame to frame ${{\mathcal F}}$. It is important to note that there is not a unique dual frame corresponding to a given
frame. Depending upon the application this nonuniquess may come in handy.
The lower bound in equation (1) ensures the numerical stability of the synthesis of ${f}$ from the coefficients ${\langle f, f_n \rangle}$. The upper bound ensures that the sequences ${\langle f,
f_n \rangle}$ are in ${\ell^2(\mathbb Z)}$, i.e. that ${\sum_j |\langle f, f_n \rangle|^2 < \infty}$ for all ${f \in L^2(\mathbb R)}$.
Frames should be contrasted with bases. A basis is a minimally complete set of functions spanning a vector space. Frames, on the other hand, may be overcomplete – a basis being a special case of a
frame. Now, back to the parallel imaging story …
Assume that there are ${N_c}$ receive coils in the receive array. The signal from the ${n^{th}}$ receiver coil (${n=1, \ldots, N_c}$), after Fourier transform in the frequency-encoding (FE) ${x}$
-direction, may be written as
$\displaystyle s_n(x,m) = \langle \rho, g_{nm} \rangle = \int \rho(x,y) g^*_{nm}(x,y) dy \ \ \ \ \ (4)$
$\displaystyle g_{nm}(x,y) = g_n^*(x,y) e^{i 2\pi m R\Delta k y} \ \ \ \ \ (5)$
and where ${g_n(x,y)}$ is the receive field of the ${n^{th}}$ coil element of the array, ${R \ge 1}$ is called the reduction factor, and ${m \in \mathbb Z}$. Note that ${\rho(x,y)}$ is assumed to be
an image that is compactly supported on ${|y|, |x| < 1/\Delta k}$ and that the phase-encode (PE) direction, ${y}$, will be the direction of possible acceleration. Also note that ${s_n}$ and ${g_n}$
will depend upon ${z}$ as well but I have omitted explicitly writing this dependence. Note that although ${\langle \rho, g_{nm} \rangle}$ is a function of ${x}$ I will often suppress the ${x}$
-dependence for the sake of an economy of symbols.
For a suitable ${g_n(x,y)}$, ${\Delta k}$ and ${R}$ the set ${\lbrace g_{nm} \rbrace}$, where ${m \in \mathbb Z}$, may form a frame for images compactly supported on ${|y|, |x| < 1/\Delta k}$ . I
will assume that this is the case. (A sufficient condition which the set ${\lbrace g_{nm} \rbrace}$ must satisfy to be a frame on this space will be the topic of a subsequent post). Since ${\lbrace
g_{nm} \rbrace}$ is assumed to be a frame we can use the synthesis property of frames given above to write:
$\displaystyle \rho = \sum_{m'=-\infty}^\infty \sum_{n'=1}^{N_c} \langle \rho, g_{n'm'} \rangle \hat{g}_{n'm'} \ \ \ \ \ (6)$
where ${\lbrace \hat{g}_{nm} \rbrace}$ is a dual frame corresponding to the frame ${\lbrace g_{nm} \rbrace}$. This is a good place to note once again that there may be more than one dual frame
associated with a given frame – the dual frame is not unique. Each dual may have special properties and be advantageous in different imaging scenarios (another topic for future posts?). Also, note
that the dual frame of equation (6) depends upon ${x}$ and ${z}$ although I will usually not indicate this explicitly.
Combining equations 4 and 6 we can write
$\displaystyle \rho(x,y) = \sum_{m'=-\infty}^\infty \sum_{n'=1}^{N_c} s_{n'}(x,m') \hat{g}_{n'm'} \ \ \ \ \ (7)$
If the dual frame could be found then equation (6) would form the fundamental means of image reconstruction – just measure ${s_{n}(x,m)}$ and perform the summation of equation (6) for a chosen dual
frame to obtain ${\rho}$. In practice obtaining the dual frame may be mathematically problematic. Obtaining the dual frame may also be problematic due to an imprecise knowledge of the receive fields
${g_n(x,y)}$ which must be measured or calculated by some means.
There have evolved two main methods used in commercial scanners for reconstructing parallel imaging data. One method goes by the acronym SENSE (Sensitivity Encoding) and the other goes by the acronym
GRAPPA (Generalized Autocalibrating Partially Parallel Acquisitions). In appearance the SENSE method is, in its usual mathematical formulation, closer to a frames based method than is GRAPPA. Both
methods have strengths and weaknesses.
The topic of my next blog post will be the derivation of autocalibration equations like those used in the GRAPPA method. We will see that the GRAPPA method invokes use of a particular dual frame
called the canonical dual frame which is generated by the action of the frame operator ${S}$ upon its frame.
Until then …
[1] Zhihua Xu and Andrew K. Chan, Encoding With Frames in MRI and Analysis of the Signal-to-Noise Ratio, IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 21, NO. 4, APRIL 2002, p 332-342.
[2] Ole Christensen, An Introduction to Frames and Riesz Bases, 2003
Wakefield, MA Algebra 2 Tutor
Find a Wakefield, MA Algebra 2 Tutor
...I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/discrete variety. I got an A in undergraduate linear algebra. I have also
absorbed many additional linear algebra concepts in the process of taking graduate classes in functional analysis and abstract algebra.
14 Subjects: including algebra 2, calculus, geometry, GRE
...The same concepts are in elementary algebra, and I have a good amount of experience tutoring at this level. I like to find the right balance between identifying patterns through the repetition
of a given category of problem, and asking more conceptual questions. Advanced algebra was one of my favorite subjects in graduate school, where I earned an "A" in at least three or four such
29 Subjects: including algebra 2, reading, English, geometry
Do you want better grades and test scores? Do you want to get the most from your classes? Is something holding you back from doing your best?
34 Subjects: including algebra 2, reading, English, geometry
I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have
tutored a wide range of students - from middle school to college level.
14 Subjects: including algebra 2, statistics, geometry, algebra 1
...Math - I have completed math courses through calculus III. Writing - I love writing and my professors have always commented on my writing abilities. As a testament, I have completed three written
theses and have published two peer-reviewed scientific papers. My teaching knowledge/experience: I am a certified teacher and have a master's degree in science teaching.
22 Subjects: including algebra 2, reading, writing, geometry
type (functional analysis)
Type and Cotype in Functional Analysis
The type and cotype of a Banach space measure how far it is from being a Hilbert space. The definition is based on the observation, due to John von Neumann, that a Banach space is a Hilbert space if
and only if it satisfies the parallelogram identity. Recall that this states that in a Hilbert space,
${\|x + y\|^2} + {\|x - y\|^2} = 2{\|x\|^2} + 2{\|y\|^2}.$
This can be thought of as a way of improving the triangle inequality, which relates $\|x \pm y\|$ to $\|x\|$ and $\|y\|$, by finding an equality relating $\|x \pm y\|$ to $\|x\|$ and $\|y\|$.
To measure the type and cotype of a Banach space, one takes the parallelogram identity and finds out how bad it gets. Slightly more precisely, one tries to see what happens if one looks merely for an
inequality, perhaps with a constant. Since the equality can break in one of two ways, this leads to two notions.
To define the type and cotype of a Banach space, we start with a finite family of vectors, say $\{x_1,\dots,x_n\}$. Then we look for constants $T_2$ and $C_2$ such that the following inequalities hold:
\begin{aligned} \Average_\pm {\left\|\sum \pm x_i\right\|^2} &\le T_2^2 \sum {\|x_i\|^2}, \\ \Average_\pm {\left\|\sum \pm x_i\right\|^2} &\ge C_2^{-2} \sum {\|x_i\|^2} \end{aligned}
Here, the left-hand side is the average value over all choices of $\pm$ (so in the original parallelogram identity we have divided each side by $2$).
The smallest constant $T_2$ making the first inequality true for all finite sequences of vectors is the type $2$ constant of the space.
The smallest constant $C_2$ making the second inequality true for all finite sequences of vectors is the cotype $2$ constant of the space.
Either of these is allowed to be infinite. A space is said to be type $2$ if its type $2$ constant is finite. Similarly, it is said to be cotype $2$ if its cotype $2$ constant is finite.
If we consider all Banach spaces that are either of type $2$ or cotype $2$ then we find that these split into the two obvious classes with Hilbert spaces sitting plumb in the middle. Not only are
Hilbert spaces the intersection of these classes, but also a continuous linear operator from a space of type $2$ into a space of cotype $2$ factors through a Hilbert space. (This follows from a
generalization/extension of Grothendieck’s inequality.)
More precisely:
1. If a Banach space is of type $2$ and of cotype $2$ (ie both constants are finite) then it is a Hilbert space. This is due to Kwapien.
2. Any bounded linear operator from a Banach space of type $2$ to a Banach space of cotype $2$ factors through a Hilbert space.
As type and cotype are isomorphism invariants, they can be used to distinguish between some Banach spaces. See isomorphism classes of Banach spaces for more.
In the inequalities for type $2$ and cotype $2$ it is possible to replace the $2$ by a natural number $p$ and define the notions of type $p$ and cotype $p$. Taken as a whole, these provide more
information and thus give a finer classification of Banach spaces. In particular:
1. For $1 \le p \le 2$, the Lebesgue space, $L_p$, has type $p$ and cotype $2$.
2. For $2 \le p \lt \infty$, the Lebesgue space, $L_p$, has type $2$ and cotype $p$.
We also have the following properties:
1. For $r \lt p$, type $p$ implies type $r$.
2. For $r \gt p$, cotype $p$ implies cotype $r$.
3. Both type and cotype pass to subspaces.
4. Type passes to quotients, cotype does not in general but if the space has some type strictly larger than one then cotype does pass to quotients.
5. Type dualises to cotype, in that if $X$ has type $p$ then $X^*$ has cotype $p'$ (where $p^{-1} + {p'}^{-1} = 1$), but cotype does not dualise to type unless the space has some type strictly
larger than one. | {"url":"http://ncatlab.org/nlab/show/type+(functional+analysis)","timestamp":"2014-04-19T22:32:56Z","content_type":null,"content_length":"33857","record_id":"<urn:uuid:6d48cc00-f0cd-4206-b61c-3291560aab2d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trisecting an Angle
Date: 06/15/99 at 19:00:49
From: Eric Kovach
Subject: Trisecting an angle
I've been looking for someone to talk to about the trisecting an angle
problem from ancient Greece. I understand the whole modern algebra
proof thing but it doesn't disprove the possibility of angle
trisection. At least not from the way I have read it and understood
it. I came up with a method when I was 16 and in geometry class and so
far no one can disprove it. I tried proving it geometrically for a
while but gave up. All I know is that with the best drawing programs,
using only circles and unmeasured lines, it works at least to the naked
eye for any angle from 5 to 175 degrees. I've done numerous examples
trying to find one where it is off by more than the error you'd get
using a compass, and so far everything I've done has been a perfect
trisection. Who
can I show this to? I think I have something here...
Date: 06/15/99 at 20:05:42
From: Doctor Peterson
Subject: Re: Trisecting an angle
Hi, Eric.
I presume you looked at our FAQ on this subject.
As I'm sure you know, algebra does prove that you can't trisect a
general angle precisely using only compass and straightedge under the
traditional Greek rules. It doesn't prove that you can't trisect a
particular angle, or trisect it using modified tools, or only
approximately. In your case, it sounds as if you don't necessarily
claim a perfect trisection, and don't claim to have proved it, but you
must have an interesting construction. I'd like to see it; we might be
able to work together either to find how accurate it is, or to see
whether it is accurate but twists some rule a little, or whatever.
If you want to send it in, please make sure it's stated clearly, to
make it easy on us. If you can describe it step by step so that I can
duplicate it easily, I'll see what I can do to prove or disprove it.
- Doctor Peterson, The Math Forum
Date: 06/16/99 at 18:07:02
From: Eric
Subject: Re: Trisecting an angle
Thanks for the feedback and your interest. I will give you the method
I used for my construction.
1. Take any angle and label the vertex A.
2. Draw a circle centered at A. Label the intersections with the legs
of the angle B and C. So we have an angle labeled BAC.
3. Bisect angle BAC and extend the bisection line downward so it
intersects the circle at point D.
4. Draw another circle with the same radius as the first, centered at
   point D.
5. Extend the angle's bisection line downward until it intersects with
the second circle. Label this point E.
6. Draw line segments BE and CE. This will form angle BEC.
7. Take angles EBA and ECA. Bisect them. Extend their bisection lines
until they intersect with line segments CE and BE respectively.
Label these points F and G. The bisection lines will be called
BF and CG.
8. Draw the lines FA and GA. These lines will trisect the original
angle (or closely approximate it).
Try it, and let me know what you think about this construction method.
I hope the instructions are easy to follow.
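[For readers without drawing software, here is a small numerical sketch
of the construction, an editorial addition; the coordinate conventions
are chosen here, not taken from the posts. A is the origin, circle A has
radius 1, the angle opens upward and is symmetric about the y-axis, so
the downward bisector is the negative y-axis, D = (0,-1) and E = (0,-2).]

import math

def trisect(theta_deg, scale=2.0):
    # scale = AE/AD; the construction above takes scale = 2.
    h = math.radians(theta_deg) / 2.0
    A = (0.0, 0.0)
    B = (-math.sin(h), math.cos(h))
    C = (math.sin(h), math.cos(h))
    E = (0.0, -scale)

    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)

    def bisector_meets(P, Q1, Q2, L1, L2):
        # Ray from P bisecting angle Q1-P-Q2, met with line L1-L2.
        u = unit((Q1[0] - P[0], Q1[1] - P[1]))
        w = unit((Q2[0] - P[0], Q2[1] - P[1]))
        b = unit((u[0] + w[0], u[1] + w[1]))
        d = (L2[0] - L1[0], L2[1] - L1[1])
        rx, ry = L1[0] - P[0], L1[1] - P[1]
        s = (b[1] * rx - b[0] * ry) / (b[0] * d[1] - b[1] * d[0])
        return (L1[0] + s * d[0], L1[1] + s * d[1])

    F = bisector_meets(B, A, E, C, E)  # step 7: bisect EBA, meet CE at F
    G = bisector_meets(C, A, E, B, E)  # step 7: bisect ECA, meet BE at G
    # Step 8: lines FA and GA; the rays from A pointing away from F and
    # G run up into the angle and are the (near-)trisectors.
    t1 = math.degrees(math.atan2(-F[1], -F[0]))  # nearer B
    t2 = math.degrees(math.atan2(-G[1], -G[0]))  # nearer C
    return (90.0 + theta_deg / 2.0 - t1, t1 - t2,
            t2 - (90.0 - theta_deg / 2.0))

for theta in (30, 60, 90, 120, 150, 180):
    print(theta, [round(a, 4) for a in trisect(theta)])

[The three printed parts match the measurements reported in the next
reply: nearly exact thirds for small angles, drifting as the angle
grows.]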
Date: 06/16/99 at 23:04:49
From: Doctor Peterson
Subject: Re: Trisecting an angle
Hi, Eric. Thanks for writing back.
I just made your construction on Geometer's Sketchpad, software that
lets me adjust the angle and measure the results, and although it's
clear your construction is not precise (so it doesn't go against what
has been proven), it is remarkably accurate. The resulting three
angles are the same to within 0.001 degree for BAC up to 36 degrees;
to within 0.01 degree up to about 60 degrees; to within 0.1 degree up to
100 degrees; and to within 1 degree up to about 150 degrees. Even for
BAC = 180 degrees, the thirds are 60.774, 58.389, and 60.755 - and
since I'd expect the first and last to be the same, I'm probably past
the accuracy of the software at that point.
The only trouble I had following your instructions was that I didn't
catch the meaning of "downward" at first; otherwise, you expressed it
very well.
I'll look at it a little more closely as I have time, and see if I can
determine what the angles are trigonometrically to confirm my quick
measurements. One thing that intrigues me is that F and G are close to
circle A, and as they move away, the angles depart from true
trisectors. It may be that if they stayed on the circle, they would be
correct. I'll let you know what I find. I may also look around to see
if this construction is well known.
- Doctor Peterson, The Math Forum
Date: 06/17/99 at 17:04:26
From: Doctor Peterson
Subject: Re: Trisecting an angle
Hi, Eric. Here's a little more.
I've been playing with your construction, and it turns out that my
hunch was correct: if, rather than making point E twice as far along
the bisector as D, we position it so that F and G are exactly on the
circle A, then the lines FA and GA do exactly trisect the angle.
That's because if we continue BA to K on the far side of the circle,
the fact that the inscribed angles KBF and FBG are equal implies that
arcs KF and FG are equal; similarly, if we continue CA to L on the
circle, arc FG = arc GL, so we have trisected arc KL, which is equal to
arc BC. So
if you compare this drawing of your construction:
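[Here is an editorial numerical check of this exact variant, with the
same coordinate conventions as the earlier sketch: put F on circle A at
the antipode of the true trisector nearer B, recover E as the
intersection of line CF with the bisector (the y-axis), and confirm
that BF really does bisect angle ABE.]

import math

def check_exact(theta_deg):
    th = math.radians(theta_deg)
    h = th / 2.0
    A = (0.0, 0.0)
    B = (-math.sin(h), math.cos(h))
    C = (math.sin(h), math.cos(h))
    # F on circle A, antipodal to the trisector nearer B:
    phi = math.pi / 2.0 + th / 6.0 - math.pi
    F = (math.cos(phi), math.sin(phi))
    # E is where line CF meets the y-axis:
    t = -C[0] / (F[0] - C[0])
    E = (0.0, C[1] + t * (F[1] - C[1]))

    def angle_at(P, U, V):
        # Angle U-P-V in degrees.
        a = math.atan2(U[1] - P[1], U[0] - P[0])
        b = math.atan2(V[1] - P[1], V[0] - P[0])
        d = abs(a - b) % (2.0 * math.pi)
        return math.degrees(min(d, 2.0 * math.pi - d))

    return angle_at(B, A, F), angle_at(B, F, E), math.hypot(E[0], E[1])

for theta in (30, 60, 90, 150):
    abf, fbe, ae = check_exact(theta)
    print(theta, round(abf, 6), round(fbe, 6), round(ae, 4))

[The first two columns agree (both equal theta/6), confirming the
bisector property, and the distance AE stays close to 2 for small
angles, which is why taking AE = 2 exactly, as the original
construction does, is such a good approximation.]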
Date: 06/17/99 at 18:16:05
From: Eric
Subject: Re: Trisecting an angle
That is really cool that you can use that program to get exact
measurements. I will have to get a copy of that for myself. It would
have saved me lots of time when I was trying out examples on different
drawing programs.
Thanks for your interest in this. I had this idea in class when I
first learned about the three classical problems. I had already done
advanced studies in geometry back in junior high, so when the class
came up in tenth grade, I didn't really need to spend class time
listening to lectures.
The idea was based on the fact that two angles that have vertices on
the same circle and subtend the same arc are equal. That, and the fact
that an angle at the center of the circle is twice an angle on the
circumference subtending the same arc. My idea was to extend this to an
angle whose vertex is 2 radii from the center, hoping that the angle at
the center would be three times it. Since that wasn't true, I expanded
on the idea into what I gave you.
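[Editorial note, using the coordinate conventions of the sketches
above: with the vertex E at distance 2r from the center, one computes
angle BEC = 2*arctan( sin(theta/2) / (2 + cos(theta/2)) ), which is
close to theta/3 for small angles (about 19.8 degrees when theta = 60
degrees) but not exactly equal to it, which is presumably why the
bisecting step in the construction was needed.]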
I really hadn't given this much thought since high school until the
other day, but it is good to finally know a little more about the
quality of my trisection method. I've really been too busy with
college to give math any more thought. I am studying Chemical
Engineering, which is definitely a challenge to my math skills. I had
thought about being strictly a mathematician because that is what I am
best at, but I didn't really see a whole lot of jobs in that outside
of teaching, so I went into engineering.
I will have to look for that book just to see if there is anything
similar to what I came up with. Thank you very much. | {"url":"http://mathforum.org/library/drmath/view/55144.html","timestamp":"2014-04-17T16:38:34Z","content_type":null,"content_length":"13595","record_id":"<urn:uuid:a3dabd25-75db-4bfb-b068-efe87a696f8c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |