Characteristic operator
Let $X_t\in\mathbb{R}$ be an Ito diffusion process given by $$dX_t=a(b-X_t)\,dt+\sigma\, dW_t,$$ where $a>0$, $b\in\mathbb{R}$, and $\sigma>0$. Then the characteristic operator of $X_t$ is given by $$L=a(b-x)\frac{\partial}{\partial x}+\frac{\sigma^2}{2}\frac{\partial^2}{\partial x^2}$$ (more details about the characteristic operator can be found at http://en.wikipedia.org/wiki/It%C5%8D_diffusion).
Now I assume that $\tau$ is a random variable with density function $f(t)=\lambda e^{-\lambda t}\chi_{[0,\infty)}(t)$, and that $\alpha_t$ is a random process taking only the two values $1$ and $2$: $$\alpha_{t}=1\cdot\chi_{(0,\tau)}(t)+2\cdot\chi_{[\tau,\infty)}(t).$$
Then I set $b=b(\alpha_{t})$, i.e., $b$ is random, taking the two values $b(1)\in\mathbb{R}$ and $b(2)\in\mathbb{R}$.
My question is: how do we find the characteristic operator of the Ito diffusion $X_t$ given by $$dX_t=a[b(\alpha_t)-X_t]\,dt+\sigma\, dW_t\,?$$
Thanks for your time and consideration
pr.probability st.statistics mathematical-finance
Hi I think that this Question should be also tagged with the "stochastic-calculus" item Regards – The Bridge Sep 8 '10 at 9:52
2 Answers
The wikipedia page cited in the question provides most of the answer: to get your operator, compute $$\lim_{\delta \rightarrow 0} \frac{ {\mathbb E}[f(X_\delta)] -f(x)}{\delta}.$$ The difference between your problem and the case covered in the wikipedia article is that $f$ in the above display is a function of $x$ only. However, your problem has an additional state variable (the binary variable that takes one of the values $1$ or $2$ depending on $\alpha$). So, the correct limit to study is $$\lim_{\delta \rightarrow 0} \frac{ {\mathbb E}[f(X_\delta,\alpha_\delta)] -f(x,1)}{\delta}.$$ Thus, you don't have one function, but two functions $f(x,1)$ and $f(x,2)$, and two PDEs that these functions satisfy.
It is implicitly assumed that $\tau$ is independent of the dynamics of $X$ before $\tau$. Furthermore, before $\tau$ the dynamics of $X$ are governed by the first SDE given in the question. One can use these to write the above expectation in two pieces: one piece over the set $\{\delta < \tau\}$, the other over $\{\delta < \tau\}^c$. Once this is done, the usual use of Ito's formula gives $$ L_1 f(x,1)=-\lambda f(x,2)~~~ (*) $$ and $$ L_2 f(x,2) = 0, ~~~ (**) $$ where $$ L_i = a(b_i - x) \frac{\partial}{\partial x} + \frac{1}{2} \sigma^2\frac{\partial^2}{\partial x^2}. $$
Further details: \begin{align*} {\mathbb E}[ f(X_\delta,\alpha_\delta) ]&= {\mathbb E}[ f(X_\delta,\alpha_\delta) 1_{\{ \tau > \delta\}} ] + {\mathbb E}[ f(X_\delta,\alpha_\delta) 1_{\{ \tau \le \delta\} }]\\\\ &\approx (1-\lambda \delta){\mathbb E}[ f(X^1_\delta,\alpha_\delta)] + \lambda \delta f(x,2), \end{align*} where $X^1$ is a process that is independent of $\tau$ with dynamics determined by $L_1$.
Here you use several things: 1) $P( \tau < \delta) \approx \delta \lambda$; 2) if a jump occurs before $\delta$, you can ignore what happens between $\tau$ and $\delta$ (the contribution of this part is of order $\delta^2$, so after dividing by $\delta$ it vanishes as $\delta \rightarrow 0$).
To get (*) from the previous display: use Ito's formula on the first expectation, subtract $f(x,1)$, divide by $\delta$ and let $\delta \rightarrow 0$. $f(x,2)$ is a function of what
happens after $\tau$; after $\tau$ the stochastic process is a simple diffusion with generator $L_2$: this is why (**) holds.
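For reference, the Ito-formula step in the previous paragraph can be written out explicitly (a standard small-time expansion for the $L_1$-diffusion, included here only as a sketch): $$ {\mathbb E}\big[f(X^1_\delta,1)\big] = f(x,1) + \delta\, L_1 f(x,1) + o(\delta) \qquad \text{as } \delta \rightarrow 0. $$ Substituting this into the approximation in the previous display, subtracting $f(x,1)$, and dividing by $\delta$ gives the limit described above.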
Thanks Has2 for your comments. I will be thinking about how you get these equations :D – Nameless Sep 9 '10 at 1:47
You are welcome. I tried to explain how you get the equations in the answer. Now I will add some further details, hope they are useful. – has2 Sep 13 '10 at 12:08
Hi Nameless,
I think that your question is related to killed diffusions, for which infinitesimal generators are available.
Here is a reference, but I recommend that you "google" killed diffusions.
Lotfi Visions Part 1
An interview with Lotfi Zadeh, the father of fuzzy logic
Even at 73 years of age, Lotfi Zadeh, the father of fuzzy logic, has an energetic stage presence. Nowhere is this more evident than in the rapt attention he commands when presenting his paper, "Fuzzy
Logic: Issues, Contentions and Perspectives," at the 22nd Annual ACM Computer Science Conference in Phoenix, Arizona on March 8, 1994.
It is during this session that, once again, Zadeh's colleague and friendly personal gadfly, Professor William Kahan, rises to challenge, for three full minutes, Zadeh's lifework as an assault of
illogic upon the scientific foundation of control engineering. "A scientific idea is one which contains within it the germ of a refutation," says Kahan. "A test can then be posited whereby the
hypothetical refutation can be proven or disproven. Fuzzy logic has no scientific content because it doesn't assert anything upon which we can model such a hypothetical refutation."
Zadeh's lips tighten into a tolerant smile. He toys with the lapels of his suit coat, he lowers his eyes and rocks back and forth slightly at the podium as he again hears the familiar arguments.
Afterwards, Zadeh (referred to as "LZ" in the following interview) and I chat on subjects ranging from the theory and economics of fuzzy logic, to AI, fractals, philosophy, and Zadeh's boyhood in
Stalin's Soviet Union. We are joined by William Kahan (WK) who, like Zadeh, is a professor at the University of California, Berkeley, and John Osmundsen (JO), associate director, public affairs, of
the ACM.
In the first installment of this two-part article, Zadeh examines the philosophical underpinnings of fuzzy logic, how it relates to disciplines such as fractals and AI, and his youth in the USSR and
Iran. Next month, we'll discuss fuzzy applications, such as Japan's Sendai train, and hear in detail what Professor Kahan thinks of fuzzy logic.
DDJ: What you said in your lecture today that struck me the most is that fuzzy logic is a means of presenting problems to computers in a way akin to the way humans solve them.
LZ: There is one way of expressing that, which I use sometimes: The role model for fuzzy logic is the human mind. If you examine the way the human mind functions, you find that the human mind has
this remarkable capability to deal with information which is incomplete, imprecise, uncertain, and so forth. Computers do not have that capability to any significant extent.
Classical logic is normative. Classical logic in effect tells you, "That's the way you should be reasoning." There is a big difference in that sense between the spirit of classical logic, which is
prescriptive, and the spirit of fuzzy logic, which is descriptive. That is, it merely asks the question "How do you reason about this or that?" It's like translation. The translator does not take
responsibility for what he or she translates.
DDJ: And the analogy of translation to fuzzy logic is that fuzzy logic is simply a translation mechanism?
LZ: Well, there are many facets to fuzzy logic, so you cannot summarize the whole thing in one sentence. I'm talking here about one particular facet of fuzzy logic. That facet has to do with most
practical applications today in the realm of consumer products and in many other fields, where what you do is you use the language of fuzzy rules. You start with a human solution and you translate it
into that language. But it doesn't mean that that is all there is to fuzzy logic, because there are many other things that fall within the province of fuzzy logic that would not fit this description.
The essence of fuzzy logic is that everything is a matter of degree, including the notion of subsethood.
DDJ: Is this a philosophical point of view that finds itself translated into computer logic--the notion that everything is a matter of shades, and varyings, and degrees? Is there a personal
philosophical viewpoint that is finding its expression in computer science in your work?
LZ: Not yet, but consider the following: The real world that we live in is very fuzzy, very imprecise, very uncertain. The theories that we have constructed are, on the other hand, very precise. We
have mathematics, we have all kinds of things which are very precise in nature.
Now, these theories have proved to be very successful in many respects, but their ability to come to grips with the analysis of complex systems--I mean complex not just in terms of number of
components, for example, a chip that has two million resistors… When I say "complex," I mean "complex economic systems" and things of that kind, systems with many components, with relationships that
are not well defined. The successes of classical techniques, in connection with certain kinds of systems, have led us to believe they can be successful also in dealing with the other types of
systems, such as economic systems.
There is no question about it, classical mathematics has proved to be very successful in astronomy, where you compute the orbits of stars and planets. But people then conclude from that that you can
apply mathematics equally successfully to laws of economic systems. And that's where I question this thing.
I say "No, these systems don't fit. You need a concept of classes which don't have well-defined boundaries." And if you do, as fuzzy logic attempts to do, construct such a framework, then you enhance
your ability to model economic systems and other systems of that kind. It doesn't mean you'll be able to solve all the problems, but at least you'll be able to do much more.
This applies, for example, to natural languages. Notice that we didn't make that much headway in machine translation, and essentially nothing in machine summarization. If I ask you to write a program
that will look at a book and summarize it, I think you'll say that we cannot do that. Not only can't we do it today, there's no way that we can conceive of doing that in the foreseeable future.
The situation is this, that there is this tradition of believing that conventional, traditional techniques have the power within them to solve some of these problems. My position is that this is not
the case.
DDJ: It sounds like you are making a point analogous to that of Benoit Mandelbrot in The Fractal Geometry of Nature, in which he pointed out that mathematicians at the turn of the twentieth century,
including his father…
LZ: His uncle…
DDJ: …his uncle, had examined fractal forms and had been roundly criticized in the math world for studying these "monstrosities," and asked why they did not go back to studying genuine geometrical
forms such as the sphere. Mandelbrot points out that there are no spheres in nature, no squares, nothing which has perfect form to it. It's just a question of to what degree our measurement
instruments are able to penetrate the form and discover the imperfections that nature has placed there--the "sacred error," as it were.
LZ: I know Mandelbrot quite well, and hold great respect and admiration for him. Mandelbrot stays within the traditional framework. He does consider objects, fractals, and he did raise questions of
the kind that other people somehow did not raise, for example: What is the length of the coastline of Brittany? To me, that's a very good question, because somehow people took it for granted that
there is an answer to that question. But he pointed out that there isn't an answer to that question, because it depends on the degree of resolution.
DDJ: It ends up being a question about your measuring instruments, and not about the problem you're trying to solve.
LZ: That's right. So I think it was a very incisive observation that questions like that cannot be answered within the traditional framework. But what Mandelbrot tried to do--and I think it's a very
significant accomplishment, but different from what you do in fuzzy logic--he attempted to come up with a reasonably precise theory of this sort of thing. So he talks about fractional dimension.
Basically, Mandelbrot is a mathematician by training, and he has not abandoned his home, so to speak.
So to me, the theory of fractals is an important theory, and it helped to focus attention on issues that were not really properly formulated before. But by itself, it stays within the traditional
paradigm. In other words, you're still committed to the goal of mathematicians: to come up with theorems. I'm not saying that that goal is not a worthwhile goal, I'm merely saying that in many cases
it is unattainable.
DDJ: When you come down to the field of practical engineering, especially in problems of embedded control, the problem of making machines that can in real time make, if not perfect decisions, then
reasonable decisions…
LZ: Certainly.
DDJ: My father is 74 years old, and he doesn't know which end of the computer you hook up the airhose to, but he knows what fuzzy logic is because he has been an amateur photographer for 60 years,
and his Japanese camera can determine the illumination of dim objects against bright background light--using fuzzy logic. "Fuzzy logic" is also part of the advertising.
LZ: Minolta uses fuzzy logic very extensively.
DDJ: Do you derive any royalties from this?
LZ: Zero. The thought of applying for a patent did not even occur to me at the time I did the work.
DDJ: Is this an oversight which you regret?
LZ: No, not at all. Perhaps I would be a rich man, but so long as I can live in reasonable comfort, that's enough. | {"url":"http://www.drdobbs.com/architecture-and-design/lotfi-visions-part-1/184409272","timestamp":"2014-04-19T14:49:30Z","content_type":null,"content_length":"102844","record_id":"<urn:uuid:a49e460b-215c-4b68-94d9-b1a11f84a268>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lemonade Stand Math
It is a pretty common childhood rite of passage to run a lemonade stand during the summer. Unfortunately, so is losing money on the deal.
Lemonade stand math not only gives you a good opportunity to work on skills that will help your child avoid summer brain drain, but it can also help your child find a way to earn some money.
Skills targeted: measurement, money, multiplication
Calculating the Cost of Making Lemonade
There are a couple of different ways to make lemonade and it’s your child’s job to figure out which will be most cost effective. One way is to make homemade lemonade using fresh ingredients, which
requires purchasing lemons and sugar. The recipe will make about 6 cups of lemonade, all information which is helpful in answering the following math problems.
1. If there are 8 ounces in a cup and the lemonade recipe makes 6 cups, how many ounces of lemonade will you have?
2. How many ounces do your lemonade cups hold? (Or what size cup will you buy?)
3. How many cups of lemonade do you intend to sell?
4. If your cups hold X ounces and you want to sell Y cups, how many ounces of lemonade do you need? Will you have enough if you use the recipe or do you have to double it?
5. The recipe calls for 5-8 lemons and 1 ¼ cups of sugar. If you double (or triple) the recipe, how many lemons and how much sugar will you need?
6. If lemons cost [fill in cost of lemons], a 5-pound bag of sugar costs [fill in cost] and cups cost [fill in], how much money do you need to start your lemonade stand?
After calculating the cost of making homemade lemonade, your child may decide it’s cheaper to use a lemonade mix. She’ll still need to figure out how many ounces she needs, but what she needs to know
now is:
1. How many cups (or ounces) will one can of lemonade mix make?
2. How many cans do you need to make the amount of lemonade you want to sell?
3. How much does the mix cost? Multiply that by the number of cans you need. Is that more or less than the cost of making homemade lemonade?
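The comparison above can be sketched in a few lines of code. All prices and yields below are illustrative placeholders (the article leaves them as fill-ins), so plug in real store prices before drawing a conclusion:

```python
# Compare the cost of homemade lemonade vs. lemonade made from mix.
# Placeholder assumptions: $0.50 per lemon, $3.00 for a 5-lb bag of sugar,
# $2.00 per can of mix, and one can makes about as much as one recipe batch.

def homemade_cost(batches, lemon_price=0.50, lemons_per_batch=6,
                  sugar_bag_price=3.00):
    """Cost of `batches` of the homemade recipe (6 cups each).

    Assumes a single 5-lb bag of sugar covers all the batches.
    """
    return batches * lemons_per_batch * lemon_price + sugar_bag_price

def mix_cost(cans, can_price=2.00):
    """Cost of `cans` of lemonade mix."""
    return cans * can_price

# To sell 12 cups: 2 recipe batches, or roughly 2 cans of mix.
print(homemade_cost(2))  # 9.0 with the placeholder prices
print(mix_cost(2))       # 4.0 with the placeholder prices
```

With these made-up numbers the mix wins, but the answer flips easily with real prices, which is exactly the comparison the questions above walk through.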
Setting a Price to Make a Profit
Once your child has figured out the most cost effective way to make lemonade, it’s time for him to figure out how to make some money on the deal. In order to do that, he’ll need to calculate how much
each cup of lemonade costs him. The formula to do that is:
cost of supplies ÷ number of cups
Let’s say your child spent $20 on supplies and has 50 cups of lemonade. Each cup costs him about 40 cents to make. To make a profit, he’ll have to sell each cup for more than 40 cents. It’s up to him to figure out how much more.
If he has an idea of how much money he wants to make, it’s a little bit easier. If he wants to make double the amount of money he spent, he simply needs to double his cost. In the given example, that
means each cup would have to be 80 cents.
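The pricing arithmetic can be sketched the same way, using the $20-for-50-cups example above (the round-up-to-a-quarter step is one easy rule of thumb, not the only choice):

```python
import math

# Same numbers as the example above: $20 of supplies, 50 cups made.
supplies_cost = 20.00
cups = 50

cost_per_cup = supplies_cost / cups   # break-even price per cup
target_price = 2 * cost_per_cup       # "double your money" price

# Round up to the nearest quarter so making change stays simple.
quarter_price = math.ceil(target_price / 0.25) * 0.25

print(cost_per_cup)   # 0.4
print(target_price)   # 0.8
print(quarter_price)  # 1.0
```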
However, it’s important to help kids think about the realities of selling lemonade in terms of being able to easily calculate the cost of multiple cups and being able to make change for people. Ask
your child the following questions:
1. How much would 2 cups of lemonade cost?
2. If a person gives you $1 for a cup of lemonade, how much change would you have to give back?
3. Is it easier to make change if your lemonade is priced in multiples of 25 (i.e. quarters) or by the dollar?
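The change-making question can also be sketched quickly (the prices here are just examples):

```python
# How much change does a $1 bill get back at a given price per cup?
def change_for_dollar(price):
    return round(1.00 - price, 2)

print(change_for_dollar(0.75))  # 0.25 -- a single quarter back
print(change_for_dollar(0.80))  # 0.2  -- needs dimes or nickels on hand
```

Quarter-multiple prices keep the change box simple, which is the point of question 3 above.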
Other Lemonade Math Considerations
There’s a whole lot of other ways to incorporate learning into your child’s lemonade stand, including:
• Keeping a traffic graph/tally ahead of time to see what times of day and on what days he’s more likely to have customers.
• Keeping track of the weather to see when it’s going to rain and when it’s going to be hot.
Note: Your child can practice his lemonade stand running skills by playing the virtual Lemonade Stand game on Coolmath-Games.com | {"url":"http://kidsactivities.about.com/od/SummerLearning/a/Lemonade-Stand-Math.htm","timestamp":"2014-04-17T12:55:23Z","content_type":null,"content_length":"43333","record_id":"<urn:uuid:49c80ced-14ad-4ac5-b3b5-03defb892228>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colloquium
First and third Tuesdays of each month at noon in Goodrich 104.
"A light pizza lunch will be served."
The Mathematics & Computer Science Colloquium is a series of talks during the school year meant to enrich the mathematical knowledge of the community, independent of any courses attenders may be
taking. Although many talks are on upper-level mathematical topics, the main ideas should be intelligible to students who have had calculus. In addition to talks by members of the mathematics
department, students give some of the talks, and we have several outside speakers each year. Upcoming colloquia and a list of the colloquium talks for the last several years follow.
Mathematics and Computer Science Colloquia 2001-2002
Upcoming Colloquia
Apr. 9 - Prof. Marcela Perlwitz (Education) & Prof. Michael Axtell (Math & CS), "Linguistic Activity & Cognitive Development of Mathematical Concepts." Abstract: Consideration of linguistic activity, cognitive development, and mathematics as psychological and social activities helps us explain the nature of students' understanding of mathematics in relation to classroom discursive practices.
Jan. 22 - Prof. William Turner, "Black Box Linear Algebra." Prof. Turner is from North Carolina State University and is a candidate for a computational science position in the department.
Jan 15 - Prof. Dennis Krause, Wabash Physics Department. "Quantum Game Theory: How to Win Big with Quantum Mechanics."
Nov 6 - Prof. David Maharry, Wabash Math & CS Department. "If You Think You Have Big Problems, Try a Parallel Computer."
Oct 16 - Prof. John George, Wabash Math & CS Department. "The Catalan Numbers" Abstract: Given a polygon of n+2 sides, in how many ways can we dissect it into triangles by drawing
non-intersecting diagonals? Given a product of n+1 letters, in how many ways can we parenthesize the product so that there are two factors inside each pair of parentheses? Given an election with
two candidates A and B, where each candidate receives n votes, how many ways can the votes come in so that candidate A is never behind candidate B? It is somewhat surprising that all of these
questions have the same answers, and even more surprising that the same answers also apply to many dozens of other problems in pure mathematics and computer science, all apparently different.
This lecture is aimed to be comprehensible to those who know little or no mathematics.
Oct 2 - Prof. Walter D. Wallis, Southern Illinois University, "Latin Squares for Those That Know No Latin."
Abstract: A Latin square is a square array, each of whose rows and columns is an arrangement of the same set. We shall discuss the existence of these arrays, their origins in puzzles, and their
applications in Pure Mathematics and in Experimental Design. Although the listener will require no further background than knowing the definition of a matrix, we shall be able to describe some
current mathematical research problems.
Sep 18 - Professor Bert Barreto, Wabash Economics Department, "PROGRESS to Regress via LMS is a Mess?"
Sep 4 - Professor Robert Foote (with Anand Jha, '02), "PoincareDraw II: An Interactive Program for Teaching and Learning Hyperbolic Geometry."
Mathematics and Computer Science Colloquia 2000-2001
May 10 - Dr. Byungek Kahng, University of Illinois, "Kaleidoscopic Images and Piecewise Self Similarity." Dr. Kahng is a candidate for a one year position in Mathematics.
May 9 - Dr. Farid O. Farid, Pacific Lutheran University, "Topics on the Eigenvalue Problem." Dr. Farid is a candidate for a one year position in Mathematics.
Apr 17 - Dr. J.D. Phillips, St. Mary's College-California, will give a presentation titled, "Loops and Groups, the Latest Scoop." Dr. Phillips is a tenure track candidate for Chair of the Math & CS Department.
Apr 12 - Dr. Dan Coroian, Indiana-Purdue University -Ft.Wayne, "Click This! - A Survey of Mathematical Software". Prof. Coroian is a candidate for a tenure track position in Computer Science.
Apr 10 - CANCELLED! John Marden, Professor of Statistics, University of Illinois, Spatial Ranks: Positioning Multivariate Data.
Mar 13 - Nathan Risk, Wabash Class of '92, An Introduction to Neural Networks.
Feb 20 - Professor Humberto Barreto, Wabash College, Is A-Rod Worth a Quarter Billion, or, a Random Number Generator, Visual Basic, and Monte Carlo Simulation Meet Baseball. Download the
Excel/Visual Basic program he used in his talk!
Feb 1 - Professor George Exner, Bucknell University, "How Many Ways Are There to be Deranged? or The Bernoulli Letter Problem"
Jan 24 - Professor Peter Hamburger, Indiana-Purdue University - Fort Wayne, and a candidate for a position in Math & CS, "Coded Secrets Behind Doodles & Doilies." Abstract: If you are a compulsive doodler, or if you crochet doilies, then you will enjoy this presentation; if not, you can still appreciate it. In this talk, we will learn how to create pretty symmetrical drawings from doodles. In this journey, there will be some geometry, number theory, group theory, and some graph theory and combinatorics topics such as codes, dual graphs, symmetrical and maximal chains, and others. This also will answer a conjecture of B. Grünbaum, which goes back to D.W. Henderson, and even further back to Euler and Venn.
Jan 23 - Professor Maynard Thompson, Indiana University, Why is it Difficult for a Group to Make a Decision?
Nov 14 - Professor Scott Feller, Wabash College, Building a Supercomputer for Computational Chemistry.
Oct 31 - Professor Kerry Smith, Franklin College. The Mathematics of Huffman Codes.
Oct 17 - Professor Damon Scott, Wabash College, Transformations of Sonorities: What Happens Between the Chords.
Oct 3 - Professor Michael Axtell, Wabash College, Teaching Calculus With Projects.
Sept 19 - Professor Charles S. Holmes, Miami University, Rubiks Cube: An Invertible Function Factory
Sept 5 - Professor Robert L. Foote, Wabash College, "Circumferences of Convex Regions: C = 2πr Isn't Just for Circles Anymore!"
Mathematics and Computer Science Colloquia 1999-2000
April 18 - Gary J. Sherman, Rose-Hulman Institute of Technology, "What's a 'Closed with a Twist' Set (cwatset)?"
ABSTRACT: A cwatset is a subset of binary n-space that is 'nearly' a subgroup. The development---indeed the undergraduate driven development--- of cwatsets from their roots in statistics through
their combinatorial and group-theoretic properties will be traced to open questions suitable for undergraduate research.
Apr 11 - Robert Dirks, Wabash '00, "The Probabilistic Method: A Useful Tool for Proofs of Existence."
Mar 21 - Peter Thompson, Mathematics & Computer Science Department, Wabash College, "How Many 1's Should You Get in 2 and 1/2 Rolls of a Die? Binomials with Non-Integer n's and Their Applications."
Feb 24 - David Weinreich, The University of Memphis, "The Speed & Structure of Hereditary Graph Properties"
ABSTRACT: A hereditary graph property is a set of labeled graphs with certain closure conditions. The speed of a property P is |P^n|, the number of graphs in the property on n labeled vertices.
Surprisingly, speeds of hereditary properties fall into a hierarchy of functional ranges, in many cases asymptotically following a "nice" function. Furthermore, the structure of graphs in the
property is described by the property's speed, and vice-versa. In this talk we give an overview of what is known about hereditary graph properties and suggest directions for future research. No
background in graph theory is required for this talk.
Feb 15 - Michael Axtell, The University of Iowa, "Secret Life Behind Bars."
Feb 1 - Dan Singer, Mathematics & Computer Science Department, Wabash College, "On Catalan Trees and Formal Power Series Inversion"
Jan 18 - Thomas Sellke, '76, Statistics Department, Purdue University, "P-Values Don't Mean What You Think They Do"
Dec 8 - Dennis Krause, Physics Department Wabash College, "Looking Beyond 3-D: How to Understand and Search for New Compact Dimensions"
Nov 16 - Jeffrey Z. Anderson, '92, Industrial Engineer for the Commonwealth Aluminum Corporation in Lewisport, KY, "Applying Operations Research in the Aluminum Industry."
Nov 2 - Gregory Galperin, Eastern Illinois University, "Billiards Compute all Decimal Digits of π!"
ABSTRACT: A very simple dynamical system will be considered at the talk: two elastically colliding balls and one reflecting wall. It turns out that this system "counts" π with any accuracy you wish!
To explain this amazing phenomenon, one needs to look at the given dynamical system from the purely geometric point of view. The geometry originates from the concept of the configuration and the
phase space of a system and allows one to investigate a related billiard system. These spaces help to investigate various difficult problems in the theory of billiards, in particular, a problem on
periodic billiard trajectories in a polygon.
Professor Galperin will speak again in the geometry class (Math 21) on Wednesday, Nov 3, at 11:20 in D 220
on "A Tale of Three Circles."
ABSTRACT: Three circles in the plane form a curvilinear triangle. What is the sum of its interior angles? The answer to the question depends on the circles' positions in the plane and is connected
with the three famous geometries: Euclidean, spherical, and hyperbolic. The speaker will demonstrate different models of the three geometries based on the three circles problem.
• Oct 19 - Peter Saveliev, Wabash College, "A Problem of Two Gamblers and An Introduction to Topology."
• Oct 5 - Humberto Barreto, Chair Wabash Department of Economics, "The Comparative Statics Wizard has no Clothes: An Introduction to Visual Basic."
• Sep 21 - William Swift, Wabash College Emeritus, "A Plane Filling Curve"
• Sep 7 - Robert Foote, Wabash College, "Geometry of the Prytz Planimeter"
Mathematics and Computer Science Colloquia 1998-99
• Apr 22 - Mr. Daniel Singer, University of California, San Diego, "Partition Identities and the Involution Principle"
• Apr 20 - Mr. Daniel Smith, Univ. of Illinois, Urbana-Champaign, "The Ubiquity of π"
• Apr 6 - Dr. John Maharry, Franklin College, "Good Will Hunting and Random Walks"
• Mar 16 - Robert Dirks, Wabash '00, "Hyperbolic Tilings and Abstract Algebra"
• March 2 - David Whitaker, Wabash '99, "The History and Development of Cryptography"
• Feb 9 - Dr. Warren Koepp, Texas A&M University at Commerce, "Chinese Astrology, Clock Arithmetic, and Compatibility Partitions of Commutative Groups"
• Feb 4 - Dr. Robert Leese, St. Catherine's College, Oxford University - "Lattice Labelings, Traveling Salesmen, and the Radio Spectrum"
• Feb 2 - Ms. Wendy Weber, University of Kentucky - "Finding Lost Triangles: Recovering Triangulations of Polyhedra"
• Jan. 19 - Thomas Tegtmeyer, Wabash College - "Life in a Multiply Connected Domain"
• Dec. 1 - Prof. William Swift, Emeritus, Wabash College - "Nim and Games Akin:
• Nov. 17 - Dr. Xiangfei Zeng, Allstate Insurance Company - "Hurricane Modeling & Insurance Pricing"
• Nov. 5 - Dr. M. Patrick Goda, '93 - "Commodity Parallel Processing and Rationale"
• Nov. 3 - Michael Orrison, '95, Darmouth College - "Young Tableaux"
• Oct. 6 - Dr. Allison Wolf, Wabash College - "Variations on Hamilton's Game"
• Sep. 15 - Dr. Robert Foote, Wabash College - "The Inverted Pendulum: Which Way Will It Fall?"
Mathematics and Computer Science Colloquia 1997-98
• Apr 14 - Prof. Rebecca Doerges - Statistics Dept., Purdue - "Mapping Genes in Experimental Organisms"
• Apr 9 - Ms. Allison Wolf - Dept. of Math, Emory University - "On Coloring Graphs"
• Apr 7 - Ms. Michelle Lemasurier - Dept. of Math, Univ. of Georgia
"A Relationship between the shape of a surface and the vector fields that can be defined on it:
The Poincare-Hopf Theorem"
• Apr 2 - Prof. Craig Roberts, Dept. of Math, U. of Arkansas, Monticello
"Tessellations - The Reason Bob Vila Should have been a Mathematician!"
• Mar 24 - Dollena Hawkins, Dept. of Math, University of Kentucky - "What's Normal?"
• Mar 19 - Peter Thompson, Wabash College - "Binomials, Betas, Beta-Binomials, and Babies"
• Mar 17 - Paul Loomis '92, Purdue - "A Homegrown Sequence and a Famous Problem"
• Mar 3 - Paul Roback, Dept. of Statistics, Colorado State Univ - "Counting Whales"
• Feb 23 - Ruth Pfeiffer, Dept of Math Statistics, University of Maryland
"Two Statistical Problems for Stochastic Processes with Hysteresis"
• Feb 17 - Yung-Pin Chen, Department of Mathematics, Smith College - "How many balls are in the urn?"
• Feb 3 - Prof. Bonnie Gold, Math Dept., Wabash College
"Automatic Differentiation: Computing Derivative Values without Derivative Formulas"
• Jan 20 - Prof. William Swift - Wabash College - "Convolution: Gateway to Mathematics"
• Dec 2 - Rodney Lynch, Wabash College '89 - "Another Look at the Division Algorithm"
• Nov 18 - Peter Thompson, Mathematics Department, Wabash College
"Reducing the Effect of the Initial Matchups in Double Elimination Tournaments"
• Nov 4 - Mary Ellen Bock, Department of Statistics, Purdue University - "Using Wavelets in Statistics"
• Oct 7 - Jon Sorenson, Computer Science Department, Butler Univeristy
"Genetic algorithms and the Extended GCD problem"
• Sep 16 - Robert Foote, Mathematics Department, Wabash College - "Planimeters and the Isoperimetric Inequality"
Mathematics and Computer Science Colloquia 1996-97
• April 22 David Maharry, Computer Science, Wabash College: "Dynamic Programming: Connecting DNA Sequences and Matrix Multiplication"
• April 8 Liang Huang, Mathematics Department, Rockford College: "Continuation Method and Eigenvalue Problems."
• Mar 18 Charles Jones, Mathematics Department, Grinnell College: "Costas Arrays, a.k.a. Arranging a Radar Array"
• Mar 4 Stefan Treatman, Mathematics Department, Wabash College: "Continued Fractions: They keep going and going"
• Feb 18 Greg Buzzard, Mathematics Department, Indiana University "How many times am I supposed to do this? An introduction to iteration."
• Feb 4 Bert Barreto, Economics Department, Wabash College: "Quantitative Potpourri: Excel, Pedagogy, Regression, and Visual Basic"
• Jan 21 Sesha Dassanayake, Senior Mathematics major, Wabash College: "Infinite Descent: A Method to solve a wide variety of problems"
• Dec 3 John Sullivan, University of Illinois
• Nov 19 Fabio Milner, Mathematics Department, Purdue University: "Mathematical Models of Demographics and Epidemics"
• Nov 5 Esteban Poffald, Mathematics Department, Wabash College: "Fractals and Transformations"
• Oct 15 Sanjiva Weerawarana, Computer Science, Purdue University: "Net-centric Computing on the World Wide Web: The Net //ELLPACK Approach"
• Oct 1 Joe West, Physics Department, Wabash College: "Physicists: Machiavellian Mathematicians or Approximating Anarchists"
• Sep 17 Paul Mielke, Professor Emeritus of Mathematics, Wabash College: "Combinations of Primes"
Mathematics and Computer Science Colloquia 1995-96
• Sep 5 Bill Swift, "Counting Beyond Infinity II"
• Sep 19 Robert Foote and Nathan Fouts, '97, "Cruising in Hyperbolic Space: Interactive Non-Euclidean Geometry"
• Oct 3 Carl Cowen, Purdue, "Using BIG Numbers to keep BIG secrets"
• Nov 7 Dan Maki, Indiana University, "The Mathematics of Speech Recognition using Computers"
• Nov 21 John Maharry, Ohio State Univ., "Graph Connectivity: You can't get there from here!"
• Dec 5 Bonnie Gold, "The Gnome inside your calculator: Just how does that little guy find the sin(2)?"
• Jan 1 Kyle Falconbury, '96, "Morley's Theorem"
• Feb 6 David Moore, Purdue Univ., "Statistical Thinking: How to tell the Facts from the Artifacts"
• Feb 20 John Skillings, U. of Miami (OH), "Modeling Policy Sales for an Insurance Company"
• Mar 19 Tom Sellke, Statistics Department, Purdue University: "How far is WAY out?: Chebyshev Inequalities for Unimodal Probability Distributions"
• Mar 28 Kenneth Ross, U. of Oregon, "The Mathematics of Card Shuffling"
Mathematics and Computer Science Colloquia 1994-95
• Sep 6 Bill Swift: "The Two Box Paradox"
• Sep 20 Fei Zeng: "The Gambler's Ruin"
• Oct 4 Bob Cooley: "ab <> ba"
• Nov 1 Jay Wood, Purdue Univ - Calumet, "Codes and the Fourier Transform"
• Nov 11 Jamshid Nazari, Purdue, "Development of Algorithms for Generalization, Convergence, and Parallelization in Neural Networks"
• Nov 15 Glen Helman, "Proofs and Functions"
• Dec 9 Roger Lautzenheiser, RHIT, "What Does It All MEAN?"
• Jan 24 David Wilson, "Where's X?"
• Feb 7 Brian Poole, '88, and Tim Schutz, '93, American States Insurance, "Actuarial Science: A Career Perspective"
• Feb 21 Michael Orrison, '95, "The Hausdorff Paradox, or, 1/2 + 1/3 + 1/3 = 1"
• Mar 21 Robert Foote, "What is an Integral?"
• Apr 4 John Van Drie, '74, Upjohn Co, "Reflections, Handedness, and Spinors"
• Apr 18 John Bailer, Miami University of Ohio, "Uncertainty in the Assessment of Hazards to Human Health."
Re: Introduction
Hi Nky;
Welcome to the Forum!!
I am proud to have met a person like you. I really believe that you are intelligent. But what you need is a little more improvement in math (at least as much as is needed at the school level).
In case you need help from us in order to understand a concept or for a tough problem, you are welcome to post at Help Me
Also, you may post about anything you wish to say on Science at the Science HQ section of this forum.
Feel free to explore the rest of the forum.
P.S: Did you watch the third line of my signature?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
From Speedsolving.com Wiki
1LLL

Information
• Proposer(s): Bernard Helmstetter
• Proposed: 2000
• Alt Names:
• Variants: ZBLL
• No. Steps: 1
• No. Algs: 1211
• Avg Moves: 12.58 HTM
• Purpose(s): Not useful in practice.
1 Look Last Layer means completing the last layer using only one look. On the 3x3x3, currently ZBLL is the only known practical system for achieving a 1-look last layer, but other experimental
approaches such as 1-look 2-alg may have potential.
Number of 1LLL Cases
Assuming the last layer is an outer layer, the number of cases is calculated as:
corner orientations * edge orientations * corner permutations * edge permutations / 2
Numerically this is:
3^3 * 2^3 * 4! * 4! / 2 = 62208
Treating cases which are the same, but rotated by 90, 180 or 270 degrees as the same case, the number of cases becomes:
62208/4 = 15552
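The arithmetic above is easy to verify mechanically; a short Python sketch (not from the wiki, variable names are ours) reproduces both counts:

```python
from math import factorial

# last-layer states: corner orientations x edge orientations x
# corner permutations x edge permutations, halved because the corner
# and edge permutation parities are linked
raw = 3**3 * 2**3 * factorial(4) * factorial(4) // 2
print(raw)        # 62208

# merge cases that differ only by a 90/180/270 degree rotation of the layer
print(raw // 4)   # 15552
```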
Although these cases may be regarded as unique, some of them can be solved by applying the same algorithm from a different angle. For cases with no rotational symmetry, there are 3 equivalent cases
which may be solved with the same alg. For cases with 180 degree rotational symmetry, there is 1 equivalent case which may be solved with the same alg. Cases with 90, 180 and 270 degree rotational
symmetry are unique.
As well as rotationally symmetrical cases, there are also reflectively symmetrical cases. These may be solved by applying a reflection of the algorithm. Finally, some cases are inversions of others.
These can be solved by reversing (inverting) the algorithm solving the original case. Exactly which cases are mirrors of each other requires a case-by-case analysis. Work by Bernard Helmstetter
established that the number of unique 1LLL cases (excluding mirrors and inverses) is:

1212
The number of algorithms a solver would require to solve the last layer in one look depends on the solver's ability to work out mirrors and/or inverses. If a solver can work out the mirror/inverse of
any alg 'on the fly', then the number of algorithms they would need to learn would be:
1212 - 1 = 1211 algorithms
See also
External links
Evaluate the function for y the given value of x. f(x)=5x-, if x<-2 x-9, if x less than or greater to -2 f(-2)
Are you sure that's how the problem is given, because practically what you're saying is: \[f(x) = 5x-\] if \[x<-2\] and \[f(x)=x-9\] if \[x <> -2\]. Did you maybe mean on the second choice x greater than or equal to -2?
yes sometimes i get confused
lol.. no worries. Well if that's the case then it means that if x=-2, you need to evaluate the function somewhere where it is defined like that. Since the function is saying f(x)=x-9 IF x is
greater or equal to -2, then you need to evaluate the function at that expression. Try that, and show me the result you got.
-11 for both
Yes, but you don't need to evaluate both. You just need to evaluate f(x)=x-9... But you're correct anyhow.
ohh okay i see now so its -11?
Yeah, its -11
okay thanks!
ur welcome!
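For reference, the corrected piecewise function (with x ≥ -2 on the second branch, as established above) can be checked in a couple of lines of Python:

```python
def f(x):
    # f(x) = 5x     if x < -2
    # f(x) = x - 9  if x >= -2
    return 5 * x if x < -2 else x - 9

print(f(-2))  # -11
```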
bialgebra cocycle
Shahn Majid has introduced a notion of bialgebra cocycles which comprises, as special cases, group cocycles, the nonabelian Drinfel'd 2-cocycle and 3-cocycle, abelian Lie algebra cohomology and so on. Besides this case, by "bialgebra cohomology" many authors in the literature mean the abelian cohomology (Ext-groups) in a certain category of "tetramodules" over a fixed bialgebra, which will be referred to in the $n$Lab as Gerstenhaber-Schack cohomology.
Let $(H,\mu,\eta,\Delta,\epsilon)$ be a $k$-bialgebra. For $i = 1,\ldots, n$ denote $\Delta_i : H^{\otimes n}\to H^{\otimes (n+1)}$, $\Delta_i := \id_H^{\otimes (i-1)}\otimes\Delta\otimes\id_H^{\otimes(n-i)}$, and set $\Delta_0 := 1_H\otimes \id_H^{\otimes n}$, $\Delta_{n+1} := \id_H^{\otimes n}\otimes 1_H$. Notice that the compositions satisfy $\Delta_i\circ\Delta_j = \Delta_{j+1}\circ\Delta_i$ for $i\leq j$.
Let $\chi$ be an invertible element of $H^{\otimes n}$. We define the coboundary $\partial\chi$ by
$\partial \chi = \Big(\prod_{i\,\mathrm{even}} \Delta_i\chi\Big) \Big(\prod_{i\,\mathrm{odd}} \Delta_i \chi^{-1}\Big)$
This formula is symbolically also written as $\partial\chi = (\partial_+\chi)(\partial_-\chi^{-1})$.
An invertible $\chi\in H^{\otimes n}$ is an $n$-cocycle if $\partial\chi = 1$. The cocycle $\chi$ is counital if $\epsilon_i\chi=1$ for all $i$, where $\epsilon_i =\id_H^{\otimes (i-1)}\otimes\epsilon\otimes\id_H^{\otimes (n-i)}$.
Low dimensions
$\chi\in H$ is a 1-cocycle iff it is invertible and grouplike i.e. $\Delta\chi=\chi\otimes\chi$ (in particular it is counital). A 2-cocycle is an invertible element $\chi\in H^{\otimes 2}$ satisfying
$(1\otimes\chi)(id\otimes\Delta)\chi = (\chi\otimes 1)(\Delta\otimes id)\chi,$
which is counital if $(\epsilon\otimes id)\chi = (id\otimes\epsilon)\chi = 1$ (in fact it is enough to require one of these two counitality conditions). A counital 2-cocycle is hence the famous Drinfel'd twist.
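To see how this condition arises, one can unwind the general coboundary formula at $n=2$; the following short derivation (ours, following the article's conventions) makes the bookkeeping explicit:

```latex
% For n = 2 the maps \Delta_i : H^{\otimes 2} \to H^{\otimes 3} are
%   \Delta_0\chi = 1\otimes\chi,            \Delta_1\chi = (\Delta\otimes\id)\chi,
%   \Delta_2\chi = (\id\otimes\Delta)\chi,  \Delta_3\chi = \chi\otimes 1.
% Even indices contribute \chi, odd indices contribute \chi^{-1}:
\partial\chi
  = (1\otimes\chi)\,\bigl((\id\otimes\Delta)\chi\bigr)\,
    \bigl((\Delta\otimes\id)\chi^{-1}\bigr)\,(\chi^{-1}\otimes 1).
% Setting \partial\chi = 1 and moving the inverses to the right-hand side
% gives precisely the displayed 2-cocycle condition:
(1\otimes\chi)\,(\id\otimes\Delta)\chi
  = (\chi\otimes 1)\,(\Delta\otimes\id)\chi .
```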
The 3-cocycle condition for $\phi\in H^{\otimes 3}$ reads:
$(1\otimes\phi)((id\otimes\Delta\otimes id)\phi)(\phi\otimes 1) = ((id\otimes id\otimes\Delta)\phi)((\Delta\otimes id\otimes id)\phi)$
A counital 3-cocycle is the famous Drinfel’d associator appearing in CFT and quantum group theory. The coherence for monoidal structures can be twisted with the help of Drinfel’d associator; Hopf
algebras reconstructing them appear then as quasi-Hopf algebras where the comultiplication is associative only up to twisting by a 3-cocycle in $H$.
For particular Hopf algebras
If $G$ is a finite group and $H=k(G)$ is the Hopf algebra of $k$-valued functions on the group, then we recover the usual notions: e.g. the 2-cocycle is a function $\chi:G\times G\to k$ satisfying
the cocycle condition
$\chi(b,c)\chi(a,b c) = \chi(a,b)\chi(a b,c)$
and the condition for a 3-cocycle $\phi:G\times G\times G\to k$ is
$\phi(b,c,d)\phi(a,b c,d)\phi(a,b,c) = \phi(a,b,c d)\phi(a b,c,d)$
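As a concrete sanity check, the coboundary of any invertible 1-cochain $\gamma: G\to k^\times$, namely $(\partial\gamma)(a,b) = \gamma(a)\gamma(b)\gamma(ab)^{-1}$, satisfies the 2-cocycle condition above. A minimal Python sketch for $G = \mathbb{Z}/4$ with values in $\mathbb{Q}^\times$ (the choice of $\gamma$ and all names are ours, not from the article):

```python
from fractions import Fraction
from itertools import product

# G = Z/4 under addition mod 4; k* = nonzero rationals under multiplication
G = range(4)
mul = lambda a, b: (a + b) % 4

# an arbitrary invertible 1-cochain gamma : G -> k*
gamma = {0: Fraction(1), 1: Fraction(3), 2: Fraction(5, 2), 3: Fraction(7)}

# its coboundary (d gamma)(a, b) = gamma(a) gamma(b) / gamma(ab)
chi = {(a, b): gamma[a] * gamma[b] / gamma[mul(a, b)] for a, b in product(G, G)}

# check the 2-cocycle condition chi(b,c) chi(a,bc) == chi(a,b) chi(ab,c)
ok = all(chi[b, c] * chi[a, mul(b, c)] == chi[a, b] * chi[mul(a, b), c]
         for a, b, c in product(G, G, G))
print(ok)  # True
```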
In low dimensions, $n$-cocycles can be twisted by $(n-1)$-cochains (it is apparently not known in this context whether this works in higher dimensions), which gives an equivalence relation:
For example, if $\chi\in H\otimes H$ is a counital 2-cocycle, and $\partial\gamma\in H$ a counital coboundary, then
$\chi^\gamma = (\partial_+\gamma)\chi(\partial_-\gamma^{-1})= (\gamma\otimes\gamma)\chi\Delta\gamma^{-1}$
is another 2-cocycle in $H\otimes H$. In particular, if $\chi = 1$ we obtain that $\partial\gamma$ is a cocycle (that is every 2-coboundary is a cocycle).
A dual theory
In addition to cocycles “in” $H$ as above, Majid introduced a dual version – cocycles on $H$. The usual Lie algebra cohomology $H^n(L,k)$, where $L$ is a $k$-Lie algebra, is a special case of that
dual construction.
Instead of $\Delta_i$ one uses multiplications $\cdot_i$ defined analogously ($\cdot_i$ is the multiplication in the $i$-th place for $1\leq i\leq n$, and $\psi\circ\cdot_0 =\epsilon\otimes\psi$, $\psi\circ\cdot_{n+1} = \psi\otimes\epsilon$). An $n$-cochain on $H$ is a linear functional $\psi:H^{\otimes n}\to k$, invertible in the convolution algebra. The coboundary of an $n$-cochain $\psi$ on $H$ is

$\partial\psi = \Big(\prod_{i\,\mathrm{even}}\psi\circ \cdot_i\Big)\Big(\prod_{i\,\mathrm{odd}}\psi^{-1}\circ\cdot_i\Big),$

and $\psi$ is an $n$-cocycle if $\partial\psi = 1$.
For a 1-cochain $\psi$ (a functional on $H$), this condition reads
$(\partial\psi)(a\otimes b) = \sum \psi(b_{(1)})\psi(a_{(1)})\psi^{-1}(a_{(2)}b_{(2)})$
and, for a 2-cochain $\psi$ on $H\otimes H$, the condition is
$(\partial\psi)(a\otimes b\otimes c) = \sum \psi(b_{(1)}\otimes c_{(1)})\psi(a_{(1)}\otimes b_{(2)}c_{(2)})\psi^{-1}(a_{(2)}b_{(3)}\otimes c_{(3)})\psi^{-1}(a_{(3)}\otimes b_{(4)})$
If one looks at the group algebra $kG$ of a finite group then the cocycle conditions above can be obtained by a Hopf algebraic version of the $k$-linear extension of the cocycle conditions for the
group cohomology in the form appearing in Schreier’s theory of extensions.
However for all $n$ the Lie algebra cohomology also appears as a special case.
(to be completed later)
• Shahn Majid, Cross product quantisation, nonabelian cohomology and twisting of Hopf algebras, in: H.-D. Doebner, V.K. Dobrev, A.G. Ushveridze (eds.), Generalized Symmetries in Physics, World Sci. (1994) 13-41 (arXiv:hep-th/9311184)
• Shahn Majid, Foundations of quantum group theory, Cambridge UP
HVDC Transmission System with Medium-Frequency Transformers
HVDC Transmission System with Medium-Frequency Transformers
Master of Science Thesis
Adil Mohammed Elhaj
MSc in Electric Power Engineering (120 ECTS)
Department of Electric Power Engineering
Division of Energy and Environment
Göteborg, Sweden, 2009
HVDC Transmission System with Medium-Frequency Transformers
Adil Mohammed Elhaj
Master of Science Thesis
Dr. Staffan Norrga
Prof. Torbjörn Thiringer
Conducted at:
ABB Corporate Research Centre, Västerås-Sweden
Department of Energy and Environment
Division of Electric Power Engineering
CHALMERS UNIVERSITY OF TECHNOLOGY
Göteborg, Sweden, 2009
Abstract

For voltage adaptation and galvanic isolation in High Voltage Direct Current (HVDC)
converter stations, standard transformers operating at grid frequency are currently
used. These devices tend to be very large and heavy, which is undesirable in many
applications. Transformers where the magnetic parts operate at a frequency
considerably higher than the grid frequency could offer many advantages. Smaller and
lighter transformers and lower losses are among them.
In this thesis, a future-oriented HVDC converter station using a medium-frequency (MF) transformer is studied with special focus on the converter circuits. The converter size used in this work is 78 MW (±75 kV DC / 11 kV AC) and the main application is
power transmission to and from offshore installations. The converter consists of a
snubbered Voltage-Source Converter VSC and 2-phase by 3-phase cycloconverter
connected by a MF transformer. The VSC is implemented using a single phase leg
and the cycloconverter is implemented using fast thyristors. It is expected that the
semiconductor losses of the power conversion system will be significantly reduced by
the use of the proposed converter topology which permits soft switching of all
semiconductor valves in all operating points without any auxiliary valve.
The thesis evaluates the commercial potential of the proposed HVDC system. A
general circuit design based on available preconditions is made first for the studied
converter station, and then a comparison with an existing two-level HVDC-Light
station with regard to semiconductor requirements and losses is performed for the
same operating point. The total space occupied by the valve installations for both
systems is also estimated. The comparison shows that the proposed system promises a
considerable reduction in the number of semiconductor devices and, as a result, the
total volume of the converter valves. However, the semiconductor losses increase.
Key words: Medium frequency MF transformer HVDC system, Mutually commutated
converter MCC, Low frequency LF transformer HVDC system, Conventional voltage
source converter VSC, Modulation ratio, Thyristor turn-off time
Acknowledgement

This work has been carried out at the Division of Power Technologies at ABB
Corporate Research Centre, Västerås-Sweden, under supervision of Dr.Staffan
Norrga. I would like to thank him for giving me a chance to conduct my research
there and for his invaluable guidance throughout the thesis period.
The author would also like to thank Professor Torbjörn Thiringer at Chalmers
University of Technology who, despite his tight schedule, has carefully checked and
corrected the report.
I would also like to express my deepest gratitude to the following persons:
Tomas Jonsson and Willy Hermansson of ABB who provided me with data required
to do the work.
Dr.Stephan Meier of KTH who kept answering my questions and explaining the
unclear things during the earlier stages of the thesis
Special thanks go to Swedish Institute (SI) for the financial support during the whole
period of my master study.
Last but not least, I would like to thank my family for their endless patience and love.
Nomenclature

π Circular constant (≈3.14159265...) [-]
j Complex operator (√−1) [-]
t Time [s]
φ Angle of current at the grid [rad]
Ig RMS value of the current at the grid [A]
Icyc RMS value of reactor current at fundamental frequency [A]
Ψ Angle of reactor current [rad]
ii Instantaneous phase currents of cycloconverter (i=1,2,3) [A]
ui Instantaneous voltages of cycloconverter (i=1,2,3) [V]
itr Medium frequency transformer current (VSC winding) [A]
utr Transformer voltage (VSC winding)[V]
Ud Converter DC link voltage without ripple [V]
Udmaxdeblock Maximum DC link voltage when the converter is switching (including the ripple) [V]
UL Phase-to-phase voltage of the grid [V]
V Grid phase voltage [V]
Vcyc Phase voltage of cycloconverter at fundamental frequency [V]
δ Angle of cycloconverter voltage at fundamental frequency [rad]
Cs Snubber capacitance/valve [F]
kd Coupling function for VSC[-]
kac,i Coupling functions for cycloconverter phase legs[-]
Lλ Transformer leakage inductance [H]
Ntr Transformer turn ratio (N2/N1) [-]
fsw Switching frequency[Hz]
f Fundamental frequency[Hz]
M Modulation index [-]
P Active power at the grid [W]
Q Reactive power at the grid [VAR]
Pf Power factor [-]
Sb Base Power [VA]
Qfilt AC filter reactive power [VAR]
Cf AC filter capacitor [F]
Xb Base impedance[Ω]
Xr Reactor impedance[Ω]
tq Thyrristor turn-off time[s]
di/dt Current derivative during thyristor turn-on and turn-off [A/s]
Δtvsc Switching time for VSC [s]
Δtacci Commutation time of a one leg of cycloconverter [s]
Nigbt_w/o Number of IGBTs/valve without a redundancy [IGBT]
Nigbt Number of IGBTs/valve with a redundancy [IGBT]
Ndiode Number of diodes/valve [diode]
Nvalve Number of VSC valves [valve]
Nthy Number of series-connected thyristors/valve [thyristor]
Ithy Thyristor current [A]
Vthy,ssoa Rated switching safe operating area SSOA voltage of thyristor[V]
VCE,SSOA,max IGBT maximum SSOA voltage [V]
VCE0,ssoa IGBT rated SSOA voltage [V]
VF,ssoa Diode rated SSOA voltage [V]
Pcond _ igbt IGBT conduction losses [W]
Pcond _ diode Diode conduction losses [W]
Psw _ igbt IGBT switching losses [W]
Pdi _ on Diode turn-on losses [W]
Pdi _ off Diode turn-off losses [W]
Psw _ di Diode switching losses [W]
Pcond _ vsc VSC conduction losses for MF transformer topology [W]
Psw _ vsc _ soft VSC switching losses for MF transformer topology [W]
Pcond _ thy Thyristor on-state losses [W]
Psw _ thy Thyristor switching losses [W]
Pcond ,cyc Cycloconverter conduction losses [W]
Psw,cyc Cycloconverter conduction losses [W]
Pigbt _ on IGBT turn-on losses [W]
Ploss _ cond _ vsc VSC conduction losses for LF transformer topology [W]
Ploss _ sw _ vsc _ hard VSC switching losses for LF transformer topology [W]
Etot Thyristor total losses [J]
Eoff IGBT turn-off energy [J]
I off IGBT current at turn-off instance [A]
Eon Diode turn-on energy [J]
I on Diode current at turn-on instance [A]
Erec Diodes reverse recovery energy [J]
I offd Diode current at turn-off instance [A]
Eon _ igbt IGBT turn-off energy [J]
I on _ igbt IGBT current at turn-on instance [A]
ABSTRACT .............................................................................................................................................I
ACKNOWLEDGEMENT .................................................................................................................... II
NOMENCLATURE ............................................................................................................................. III
CONTENTS ......................................................................................................................................... V
CHAPTER 1 INTRODUCTION ........................................................................................................... 1
1.1 BACKGROUND ................................................................................................................................ 1
1.2 OBJECTIVES .................................................................................................................................... 2
1.3 OUTLINE OF THE THESIS.................................................................................................................. 2
CHAPTER 2. PRECONDITIONS FOR CONVERTER STATIONS DESIGN ............................... 4
2.1 THE TRANSMISSION SYSTEM OF VALHALL ....................................................................................... 4
2.2 PRECONDITIONS FOR THE DESIGN .................................................................................................. 4
2.2.1 VOLTAGE, FREQUENCY AND POWER REQUIREMENTS ............................................................... 4
2.2.2 HARMONICS REQUIREMENTS .................................................................................................... 5
CHAPTER 3. DESCRIPTION OF THE TWO COMPARED HVDC SYSTEMS ........................... 7
3.1 DESCRIPTION OF A MF TRANSFORMER HVDC SYSTEM ..................................................................... 7
3.1.1 FAST THYRISTORS ..................................................................................................................... 8
3.1.2 PRINCIPLE OF MCC OPERATION ................................................................................................ 9
3.2 DESCRIPTION OF A STATE-OF-THE-ART LF TRANSFORMER HVDC SYSTEM ................................... 14
3.2.1THE TWO-LEVEL CONVENTIONAL VSC .................................................................................... 15
CHAPTER 4 CONVERTER STATION DESIGN ............................................................................ 16
4.1 CONVERTER STATION DESIGN FOR THE MF TRANSFORMER TOPOLOGY .......................................... 16
4.1.1 MAIN IMPORTANT ASSUMPTIONS ............................................................................ 16
4.1.2 BASIC EQUATIONS FOR THE DESIGN ........................................................................ 16
4.1.3 REALIZATION OF THE DESIGN .................................................................................. 19
4.2 CONVERTER STATION DESIGN FOR THE HVDC-LIGHT TOPOLOGY ................................................ 23
4.2.1 DESIGN DATA OF THE CONVERTER STATION ........................................................................ 23
CHAPTER 5 … AND ANALYSIS .................................................................................................. 25
5.1 FORMULATION OF SEMICONDUCTOR LOSSES FOR THE MUTUALLY COMMUTATED CONVERTER (MCC) ................................................................................................................ 25
5.1.1 VSC LOSSES ........................................................................................................... 25
5.1.2 CYCLOCONVERTER LOSSES .................................................................................... 26
5.2 FORMULATION OF SEMICONDUCTOR LOSSES FOR THE TWO-LEVEL CONVENTIONAL VSC ............. 28
5.3 SIMULATION RESULTS, ANALYSIS AND COMPARISON OF SEMICONDUCTOR LOSSES AND VOLUME OF THE VALVES ........................................................................................................ 29
5.3.1 DIFFERENT STUDY CASES ......................................................................................................... 29
5.3.2 THE TOTAL VOLUME OF THE VALVE INSTALLATIONS ............................................... 36
CHAPTER 6 CONCLUSIONS AND FUTURE WORK .................................................................. 39
6.1 CONCLUSIONS .............................................................................................................................. 39
6.2 FUTURE WORK ............................................................................................................................ 40
BIBLIOGRAPHY .................................................................................................................. 41
APPENDICES ...................................................................................................................................... 43
Chapter 1
This chapter gives a brief introduction of the background concerning the field of the
thesis. After that, the objectives and the outline of the whole work are also included in
the end.
1.1 Background
A mutually commutated converter (MCC) system, consisting of a voltage source converter (VSC) and a cycloconverter connected by a medium-frequency transformer, which allows bidirectional DC/AC conversion as well as voltage transformation and isolation by the transformer, was proposed in [1]. Single-phase medium-frequency transformers
have comparably low losses, and their compact size and low weight imply an important benefit in an offshore environment. In addition, the voltage source converter is considerably simplified by the reduction to one phase leg; hence the number of IGBTs is also reduced, which implies a tremendous cost saving. The cycloconverter valves do not need any turn-off capability and can be realized by fast thyristors connected in anti-parallel.
The switching losses and stress on the semiconductor devices of power conversion
systems can be considerably reduced by applying a soft-switched commutation
scheme in all points of operation without any auxiliary valve. Despite the soft-
switching commutation scheme, such converter systems may result in low system
efficiency because they require an extra power conversion stage compared to
conventional VSC converter systems.Therefore, in order to reduce the power losses in
the cycloconverter, it is desirable to utilize fast thyristors instead of IGBTs since these
devices have lower switching losses compared to IGBTs [7]. Nowadays thyristors are
available with turn-off time (tq) down to 5 µs which makes it feasible to use thyristor-
based cycloconverters for high frequency applications. Moreover, the thyristor can
handle very high current and is cheaper compared to the IGBT which implies a
significant reduction in the investment cost of the cycloconverter.
However, the absence of turn-off capability of the thyristors demands improved control strategies for the cycloconverter in order to avoid an accidental short-circuit of any phase leg of the cycloconverter due to the commutation of the valves during the zero-current crossing. In [3], [4] and [12], control strategies for MCC systems equipped with thyristor-based cycloconverters were introduced. Additionally, the thyristor must be reverse biased for a certain duration tq before a positive voltage can be reapplied without unintentional self-triggering of the thyristor. This condition appears to be an even greater limitation for the operation of the cycloconverter than the absence of turn-off capability [4].
It would accordingly be of interest in this thesis to calculate the semiconductor losses for the MCC topology, taking into account the turn-off time constraint of the thyristors, and to compare these losses, the semiconductor device requirements and the size of the valves with those of a two-level HVDC-Light system used for the same application.
1.2 Objectives
The main objective of this thesis is to estimate the semiconductor losses of an HVDC
power transmission using a MF transformer or a MCC topology and to make a
benchmarking with the two-level VSC technology known as HVDC-Light used for
the Valhall project1 with regard to losses, semiconductors requirements and physical
volume of the equipment. The semiconductors requirements here mean the ratings,
safe operating area and the number of switches.
The following steps are followed to reach the goal:
• The theoretical concepts of both converter topologies are reviewed.
• The preconditions for the design of the MCC topology are gathered based on
data of the Valhall project¹.
• Based on the operating conditions given in the preconditions, the design of the
converter station of the MCC topology is made. The semiconductor
dimensioning both for VSC part and cycloconveter part is made. The size of
the AC filter reactance, MF transformer turn ratio and leakage inductor, filter
capacitor and snubber capacitor of the VSC part of MCC are decided. The
design values of the converter station of the HVDC-Light topology are taken
from the Valhall project data
• The semiconductor losses and the total volume of the valves are evaluated and
compared for both topologies.
1.3 Outline of the thesis
Chapter 2
The preconditions for the converter station design are introduced in this chapter.
Firstly, a brief description of the HVDC-light system of Valhall is made, followed by
giving the requirements for the design.
Chapter 3
As the main purpose of this chapter is to describe the two compared HVDC systems, a
short description of the DC/AC substations for both systems is given, the principle of
operation of the studied MCC topology is explained and finally a brief explanation of
the two-level conventional VSC converter is made.
Chapter 4
In this chapter, the main circuit design of the converter station for the MF transformer
HVDC transmission is presented. The equations used as a basis for the design are
derived first. A detailed description of different trade-offs encountered during the
selection process of the semiconductors is then given. The ratings of the proposed
semiconductor devices are also given. Finally, the data of the other compared HVDC
topology is presented.
1. Valhall is a power-to-shore project in the North Sea feeding an oil platform which was put in operation in
Chapter 5
This chapter presents the simulation results of semiconductor losses for both systems.
Firstly the equations used to calculate the semiconductor losses in both systems are
derived. Following that, the simulation results of semiconductor losses as well as the
number of valves used in both HVDC topologies are compared and different
conclusions are drawn. Finally, the total volume of the valve installations for both
topologies is compared.
Chapter 6
This chapter summarizes the work in this thesis and brings forward future aims in this area.
Chapter 2
Preconditions for the converter station design
This chapter gives a summary of the design requirements for the offshore converter
stations of the Valhall project. A short description of the transmission system which
supplies the Valhall offshore installations is given, followed by the preconditions for the design.
2.1 The transmission system of Valhall
The transmission system converts ac power from Elkem's 300 kV onshore substation
at Lista to dc power at 150 kV, transmits it through the subsea dc cable and
converts it back to ac at 11 kV at the new platform to feed the entire Valhall field [8].
The HVDC converters are of the forced-commutated Voltage-Source Converter (VSC) type. The HVDC
transmission system between Norway and Valhall is a monopolar connection [10].
Figure 2.1 shows the transmission system designed to supply the offshore facilities at
Valhall.
Figure 2.1. Single line diagram for the power system supplying the Valhall offshore
installations (300 kV onshore grid, 150 kV dc link, 11 kV offshore grid; each converter
station comprises a transformer, phase reactor, AC filter and DC filter)
2.2 Preconditions for the design
In this study the onshore station is not considered when the comparison is made
between the MF transformer HVDC and LF transformer HVDC as it is independent of
offshore converters. It is shown in the figure above mainly to give the reader a clear
picture of the current Valhall HVDC transmission which is based on hard-switching
VSC and LF transformer i.e. HVDC-Light technology.
2.2.1 Voltage, frequency and power requirements
Table 2.1 below summarizes the grid voltage, frequency and power requirements for
the Valhall converter station [8].
Table 2.1. Voltage, power and frequency specifications of the Valhall offshore station

Grid voltages: 11kVrms ±1% (steady state); 11kVrms +20% (fault clearance); 11kVrms −15% (starting of induction motor)
DC link:       150kV
Power ratings: P = 78MW; Q = 48.4MVAr (inductive)
Frequency:     60Hz ±0.5% (steady state); 60Hz ±10% (transient); 60Hz ±0.1Hz (with HVDC control)
The dc link voltage is 150kV as can be seen in figure 2.1. The station design for the
MCC topology should be made in such a way that it meets the voltage, power and
frequency requirements stated above.
2.2.2 Harmonics requirements
The maximum harmonic distortion for offshore is defined as per IEC 61000-2-4 and
Class 2 is applied. The performance limits for both offshore and onshore [9] are
summarized in table 2.2.
Table 2.2. Harmonics limits

Distortion   Harmonic number n   Offshore (11kV)
Dn %         5                   6
             11                  3.5
             17 < n ≤ 49         2.27×(17/n) − 0.27
             49 < n              0.2
THD %        -                   8
TIF          -                   -
The individual harmonic voltage distortion Dn is defined as

Dn = (Un / U1) × 100    (2.1)

The total voltage harmonic distortion THD is defined as

THD = (1/U1) × √( Σn=2 Un² ) × 100    (2.2)

The telephone influence factor TIF is defined as

TIF = √( Σn=1 [ (Un × TIFn) / U1 ]² )

where
U1 = nominal line to ground system voltage (rms)
Un = n'th harmonic component of the line to ground voltage (rms)
n = harmonic order
TIFn = the weighting factor of harmonic n according to EEI Publication 60-68
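As an illustration of definitions (2.1) and (2.2), the sketch below checks a hypothetical harmonic spectrum against the offshore limits of Table 2.2. The spectrum values are illustrative, not project data, and the 3.5% limit assumed here for unlisted harmonics below n = 17 is an assumption:

```python
import math

# Illustrative harmonic spectrum (fractions of the fundamental); NOT measured data.
U1 = 11e3 / math.sqrt(3)                 # fundamental line-to-ground rms voltage [V]
harmonics = {5: 0.02 * U1, 7: 0.015 * U1, 11: 0.01 * U1, 13: 0.008 * U1}

def limit_pct(n):
    """Offshore individual-harmonic limit in percent (Table 2.2; n<17 assumed 3.5%)."""
    if n == 5:
        return 6.0
    if n <= 17:
        return 3.5
    if n <= 49:
        return 2.27 * (17 / n) - 0.27
    return 0.2

# Individual distortion Dn = (Un/U1)*100, eq (2.1)
Dn = {n: 100 * Un / U1 for n, Un in harmonics.items()}

# Total harmonic distortion, eq (2.2)
THD = 100 * math.sqrt(sum(Un**2 for Un in harmonics.values())) / U1

ok = all(Dn[n] <= limit_pct(n) for n in Dn) and THD <= 8.0
print(f"THD = {THD:.2f} %, limits respected: {ok}")
```

In a real study the spectrum obtained from the simulations would be used instead of the assumed values.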
To meet such harmonics limits the use of filters is indispensable for both topologies
(the proposed MF transformer topology and the existing HVDC-Light topology).
However, since the filter design is not within the scope of this thesis, it is not
addressed in detail when the design of the MF transformer HVDC topology is made.
Generally, by increasing the switching frequency it is possible to shift the dominant
harmonics toward the higher frequencies and then they can be filtered out easily.
Another advantage of elevating the switching frequency is the reduction of either the
core area or the winding area of the MF transformer, and hence the volume and weight
of this transformer are also reduced. However, using a high switching frequency
results in an increase in MF transformer winding losses due to skin effect, core losses
(hysteresis and eddy current) and dielectric losses. Another drawback of increasing
the switching frequency is that it results in higher semiconductor switching losses and
the devices will be stressed accordingly. The application of soft switching results in
lower switching losses in the case of the MF transformer topology.
Chapter 3
Description of the two compared systems
The first part of this chapter provides a description of the two HVDC systems to be
compared. A general description of the MF transformer HVDC transmission system is
presented first followed by the principle of operation and modulation strategy of the
MCC converter. Finally the LF transformer HVDC transmission system (HVDC-
Light) together with its two-level VSC converter is explained briefly.
3.1 Description of a MF transformer HVDC
transmission system
Figure 3.1 shows a block diagram of the offshore DC/AC substation for the studied
MF transformer HVDC system. It consists of a VSC converter connected through a
MF transformer to a thyristor-based cycloconverter. The cycloconverter terminal is
connected to the grid through a phase reactor which should be large enough to smooth
out the current ripples. The reactor should also give the necessary reactance to control
the converter. The shunt filter is represented here by a capacitor with a main function
of filtering out the harmonics of the output voltages and removing the unwanted
electrical noise from the converter.
Figure 3.1. Future-oriented HVDC power transmission system including MF transformer
A zoomed picture of the VSC and cycloconverter is shown in figure 3.2. The VSC
valves consist of a number of IGBT modules connected in series in order to be able to
switch voltages higher than the rated voltage of a single IGBT. The series-connected
IGBTs are represented in the figure by one switch. Similarly, for the cycloconverter,
the valves are implemented using series-connected fast thyristors. It should be
mentioned that the VSC may be realized with one or two phase legs. In case of
using a single phase leg (half bridge), the output voltage is formed between the
midpoint of the phase leg and the midpoint of the two equal series-connected DC
capacitors and hence it has two levels: {−Ud/2, +Ud/2}. This option requires only two
gate drive units and hence it has a significant advantage compared to the two-leg option [1].
Figure 3.2. Cycloconverter and one-leg (half-bridge) VSC
On the other hand, the two-leg VSC (full-bridge connection) shown in figure 3.3 has the
significant advantage of establishing a three-level output voltage {−Ud, 0, +Ud} by
making the current freewheel in the converter. In addition, this type of connection
provides twice the output voltage of the half bridge. However, this difference in the
output voltage can be compensated by adjusting the turns ratio of the transformer and
hence it does not constitute a serious limitation in a transformer-coupled system [1].
Regardless of which option is used, the main purpose is to convert the dc link voltage
into an AC voltage with a constant frequency considerably higher than the grid frequency.
Figure 3.3. Full bridge VSC converter
3.1.1 Fast thyristors
Thyristors are inherently slow switching devices due to the nature of bipolar
conduction and the amount of stored charge [4]. This causes a large reverse recovery
current (Irr) during the turn-off which, in turn, results in a large reverse overvoltage
(refer to figure 3.6). This overvoltage can be limited by connecting an RC snubber
across the cycloconverter valves [4].
To shorten the turn-off time, an interdigitated gate-cathode structure is used [13].
Another way to shorten the switching times is to decrease the carrier lifetimes, with
the trade-off of increased conduction losses. This is done by diffusing heavy
metal ions into or by neutron irradiation of the silicon, whereby the charge
recombination ability is improved. Such fast thyristors have a shorter turn-off time
(tq), which represents the minimum time before a thyristor can be exposed to a
forward voltage without a risk of self-triggering by the remaining charge carriers that
have not yet recombined. Therefore, the turn-off time is usually several times longer
than the excess-carrier lifetime [4].
3.1.2 Principle of MCC operation
The topology of the studied mutually commutated converter (MCC) system is
illustrated in detail in figure 3.4. The shunt filter of the converter is ignored in this figure.
Figure 3.4. Topology of the MCC system
To simplify the analysis of the operation of the converter system, different coupling
factors that relate the currents and voltages are introduced. Since the output voltage of
the cycloconverter is referred to the midpoint of the transformer winding, a coupling
factor kac,i is defined. The value of the coupling factor for each phase leg is kac,i = −1/2
when the corresponding leg is connected to the lower transformer terminal and
kac,i = +1/2 when it is connected to the upper transformer terminal [1].
It thus follows that:
ui = kac,i utr Ntr (3.1)
itr = Ntr Σkac,i ii (3.2)
Similarly, a coupling factor kd can be defined to relate the dc link voltage to the
output voltage of the VSC. It has two values: +1/2 when the upper valve conducts
and −1/2 when the lower valve conducts. Thus

utr = kd Ud    (3.3)
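The coupling relations (3.1)-(3.3) can be illustrated numerically for one switching state. The dc link voltage, turns ratio and phase currents below are illustrative assumptions, not design values:

```python
# One switching state of the MCC, illustrating eqs (3.1)-(3.3).
# All numerical values below are illustrative assumptions.
Ud = 150e3                         # dc link voltage [V]
Ntr = 0.3                          # transformer turns ratio
kd = +0.5                          # upper VSC valve conducting, eq (3.3)
u_tr = kd * Ud                     # transformer voltage: +75 kV

k_ac = [+0.5, -0.5, -0.5]          # legs on upper / lower transformer terminal
i_ph = [6000.0, -3000.0, -3000.0]  # phase currents [A]

# Cycloconverter output voltages, eq (3.1): u_i = k_ac,i * u_tr * Ntr
u_out = [k * u_tr * Ntr for k in k_ac]

# Transformer current, eq (3.2): i_tr = Ntr * sum(k_ac,i * i_i)
i_tr = Ntr * sum(k * i for k, i in zip(k_ac, i_ph))
print(u_out, i_tr)
```

Note that u_tr·i_tr > 0 in this state, consistent with the power-flow argument around equation (3.6).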
Several important assumptions are made during each commutation cycle for the
following analysis. Firstly, the AC side inductive filter is assumed to be large enough
to be able to maintain the current constant in each modulation interval and hence it
can be represented by a current source. Secondly, the voltage on the VSC side is
assumed to be essentially constant, which is supplied by the DC link capacitors.
Finally, the transformer is modeled by its turns ratio Ntr and leakage inductance Lλ i.e.
the small magnetizing current is neglected in order to simplify the circuit [1].
As mentioned before, the name of the MCC converter comes from the fact that the
VSC and the cycloconverter always commutate alternately, thereby enabling
snubbered (zero-voltage) commutation for the former and natural commutation for
the latter [12].
Cycloconverter commutation
The only way to turn off a conducting thyristor is to let the anode current fall below
the holding current. Therefore, to initiate the natural commutation of each leg of the
cycloconverter, equation (3.4) has to be fulfilled.
utr kac,i ii < 0 (3.4)
Condition (3.4) is fulfilled when the voltage applied to a thyristor in any leg becomes
opposite to the direction of the anode current. This makes it possible to turn-off that
thyristor. Figure 3.5 schematically shows an example of such a commutation.
Fig. 3.5 Cycloconverter phase leg natural commutation [1]
As can be seen in figure 3.5, the current of the corresponding phase leg of the
cycloconverter and the voltage across the leakage inductance are opposite to each
other in direction. The process is started by turning on the non-conducting thyristor
valve in the direction of the current through the phase terminal. The incoming valve
gradually takes over the current. Finally, the initially conducting valve turns off as the
current through it drops to zero [1]. At the end of the commutation the sign of the
product in (3.4) changes and a new condition for the next commutation is established.
utr kac,i ii > 0    (3.5)
When all of the cycloconverter phase legs have been commutated it follows from
(3.3) and (3.5) that
utr itr = Ntr Σ utr kac,i ii = (Ntr / 2) Σ |utr ii|    (3.6)
The sign of this expression is positive, which means that utr and itr are of the same
sign, i.e. the instantaneous power flow is directed from the DC side to the AC side [1].
The ideal duration of a cycloconverter commutation depends on the leakage inductance
of the transformer (Lλ) and the output voltage of the VSC and can be found using (3.7)

Δtacc,i = Lλ ii / (Ntr Ud / 2)    (3.7)
It is important to ensure that the thyristor has completely turned off before the forward
voltage is applied again across it. This can be done by two measures. Firstly, the turn-
off time (tq) specified by manufacturers should be respected (refer to figure 3.6). This
can be ensured by not allowing any phase leg commutation during the time period tq
prior to a VSC commutation. Secondly, the voltage derivative dv/dt of the reapplied
forward voltage across the device must be limited to a certain level by snubber
capacitors. Otherwise the device may retrigger into the conducting state by the induced
displacement current [4], and this results in commutation failure.
Figure 3.6. Thyristor current and voltage waveforms during turn-off [13]
It will be seen in the following chapters that an increase in the turn-off time of the
thyristors causes a reduction in the maximum possible modulation ratio of the converter.
VSC commutation
The commutation of the snubbered VSC, whose stages are shown in figure 3.7, is
initiated after the cycloconverter commutation. Equation (3.6) implies that utr and itr
are of the same sign, which indicates that the current flows through the switches
instead of the diodes at this stage (refer to figure 3.7).
Fig. 3.7 Snubbered VSC commutation [1]
The commutation process is started by turning off the conducting switch at zero-
voltage condition; the current is diverted to the snubber capacitors (second diagram in
Fig.3.7). The snubber capacitors are recharged until the potential of the phase
terminal has fully moved to the opposite DC rail. At this moment, the snubber
capacitors in the incoming valve are completely discharged and the current flows
through the diode in the opposite valve. Finally, the switch connected in anti-parallel
to this diode is turned on at zero-voltage and zero-current conditions. The next current
direction reversal is established by turning on this switch. At the same time, the
reversal of the transformer voltage utr during the VSC commutation establishes a
condition for natural commutations of the cycloconverter phase legs. Therefore, the
commutation cycle can be repeated [1].
The VSC commutation time Δtvsc is governed by the snubber capacitor per valve and
the transformer current thus
Δtvsc = 2 Cs Ud / itr    (3.8)
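A rough sanity check of (3.7) and (3.8) can be made with round-number values in the range used later in chapter 4; the leakage inductance, currents and the effective snubber capacitance per valve below are all assumptions. The check confirms the statement that the commutation times are a small fraction of the commutation cycle:

```python
# Rough evaluation of the commutation durations, eqs (3.7) and (3.8).
# All numerical values are ASSUMED, order-of-magnitude inputs.
L_leak = 45e-6         # transformer leakage inductance [H]
i_ph = 6000.0          # phase current [A]
Ntr = 0.3              # transformer turns ratio
Ud = 150e3             # dc link voltage [V]
Cs = 50e-9             # effective snubber capacitance per valve [F] (assumption)
i_tr = 1900.0          # transformer current [A] (assumption)
f_sw = 900.0           # VSC switching frequency [Hz]

dt_acc = L_leak * i_ph / (Ntr * Ud / 2)   # cycloconverter commutation, eq (3.7)
dt_vsc = 2 * Cs * Ud / i_tr               # VSC commutation, eq (3.8)

half_period = 1 / (2 * f_sw)              # one commutation cycle ~ 556 us
print(dt_acc * 1e6, dt_vsc * 1e6, half_period * 1e6)
```

Both durations come out on the order of 10 µs, i.e. a few percent of the roughly 556 µs between two VSC commutations at 900 Hz, as stated in the text.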
Figure 3.8 shows the current and voltage waveforms during a couple of commutation
cycles. The duration of the commutation process has been exaggerated in the figure
for clarity. In reality the commutation time is only a very small fraction of the
commutation cycle [1]. The time period tq corresponds to the manufacturer-specified
turn-off time of the thyristor as discussed in the previous section.
Fig.3.8. Current and voltage waveforms during commutation sequence as given in
In addition to maintaining soft switching, the control of the MCC system should fulfill
two main requirements. Firstly, a proper operation of the transformers should be ensured by
avoiding low frequency or DC components in the transformer voltage. This can be achieved
by fixed VSC commutation intervals, thus generating a trapezoidal voltage. Secondly,
the control system should produce the desired PWM patterns for the cycloconverter.
By making the commutations of the cycloconverter phase legs at appropriate instants
in the interval between two VSC commutations, the width of the PWM pulses can be
chosen freely [1]. This may be achieved in several ways. In this thesis a carrier-based
modulation method is used. This modulation scheme is called a constrained sinusoidal
pulse width modulation (SPWM) and is treated extensively in [1]. Figure 3.9 shows
how this modulation scheme works for a couple of cycles. Two sawtooth carriers are
used, one for the positive phase currents and the other for the negative currents. The
waveforms below are shown for a case where the current of the first phase is positive
while the currents of the other two phases are negative.
Figure 3.9. References, sawtooth carriers and output AC voltages of the cycloconverter
3.2 Description of a state-of-the-art LF transformer
HVDC transmission system (HVDC-Light)
A block diagram of the DC/AC substation of the LF transformer HVDC system is
shown in figure 3.10. It consists of a two-level VSC converter connected through a
reactor to a LF transformer. The shunt filter consists of an inductor, a capacitor and a
resistance. The main functions of the filter and the reactor are the same as those
described in section 3.1. Besides adjusting the voltage between the grid and the
converter, the secondary-grounded star-star transformer blocks the zero-sequence
harmonics from being injected into the grid. This system resembles the current
offshore substation which supplies the Valhall installation.
Fig.3.10. Conventional VSC-based HVDC system with a LF Transformer
3.2.1 The two-level conventional VSC
A detailed diagram of the three-phase two-level VSC of figure 3.10 is shown in figure
3.11. Each valve consists of a large number of series-connected IGBT modules since it
is required to block a voltage as high as the dc link voltage. The most important thing
is that all IGBTs must turn on and off at exactly the same moment. The commutation
takes place alternately between the IGBTs in the upper legs and the diodes in the
lower legs and vice versa. The positive current is conducted by the IGBTs in the
upper legs together with the diodes in the lower legs and vice versa for the negative
current. The output voltage is switched between two voltage levels and is generated
by a PWM control.
Figure 3.11. Three-phase two-level VSC
This type of converter is called hard-switching since the semiconductor devices are
subjected to a high current and a high voltage simultaneously during a substantial part
of the switching process. Therefore, the converter has higher switching losses
compared with its MCC counterpart as will be shown later.
Chapter 4
Converter station design
This chapter presents a rough design of the DC/AC converter station for the studied
MF transformer HVDC system with special focus on the semiconductor ratings. The
derivation of the equations used as a basis for the design is made first. Following that,
different compromises that have to be considered during the selection process of the
semiconductor devices are discussed. Finally, the main circuit data and semiconductor
characteristics of the other compared topology are presented.
4.1 Converter station design for the MF transformer
4.1.1 Main assumptions
• The magnetizing current and the winding resistance of the transformer are
neglected and hence the transformer is represented solely by its turns ratio and
leakage inductance.
• The resistance together with the inductive reactance of the AC side shunt filter
is small compared to its capacitive reactance at the fundamental frequency.
Therefore the shunt filter can be represented by a pure capacitor at that frequency.
• The internal resistance of the AC side reactor is negligibly small and hence the
reactor can be modeled by a pure inductor.
4.1.2 Basic equations for the design
Figure 4.1 shows a one-phase equivalent circuit of the AC side of the converter station
(figure 3.1) of the MCC at the fundamental frequency, based on the second and third
assumptions. The load connected to the grid is represented by the resistance R and the
inductance L.
Fig.4.1 Single phase equivalent of the AC side
The cycloconverter output voltage at the fundamental frequency Vcyc can be calculated by

Vcyc = V + j Xr Icyc    (4.1)

Icyc = ICf + Ig    (4.2)

Icyc = j Yc V + (P − jQ) / (3V)    (4.3)
The impedance of the reactor is taken as 15% of the base impedance Xb since it
should be big enough to smooth out the inductor current and to limit the high transient
current driven by the energy stored in the big rotating machines connected to the AC
grid in case of a short circuit in the dc link.

Xr = 2πf Lr = 0.15 Xb    (4.4)
The admittance of the shunt filter can be calculated from the reactive power supplied
by the filter (Qfilt) and the grid voltage. In general the filter size should be as small as
possible. Experience has shown that a value of 15% of the base power Sb can be
satisfactory for Qfilt.

Yc = 2πf Cf = Qfilt / (3V²) = 0.15 Sb / (3V²)    (4.5)
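Equations (4.1)-(4.5) can be evaluated directly at the rated operating point of the Valhall station (P = 78 MW, Q = 48.4 MVAr, 11 kV grid, cf. table 2.1); a minimal sketch:

```python
import math

# Rated operating point of the Valhall offshore station, SI units
P, Q = 78e6, 48.4e6          # active / reactive power [W, VAr]
Sb = 91.8e6                  # base (apparent) power [VA]
V = 11e3 / math.sqrt(3)      # grid phase voltage [V]

# Reactor and shunt-filter sizing, eqs (4.4) and (4.5)
Xb = (11e3) ** 2 / Sb        # base impedance [ohm]
Xr = 0.15 * Xb               # phase reactor reactance [ohm]
Yc = 0.15 * Sb / (3 * V**2)  # shunt filter admittance [S]

# Cycloconverter current, eq (4.3): I_cyc = jYc*V + (P - jQ)/(3V)
I_cyc = 1j * Yc * V + (P - 1j * Q) / (3 * V)

# Cycloconverter fundamental output voltage, eq (4.1)
V_cyc = V + 1j * Xr * I_cyc

peak_reactor_current = math.sqrt(2) * abs(I_cyc)
print(f"|I_cyc| = {abs(I_cyc):.0f} A rms, peak = {peak_reactor_current:.0f} A")
print(f"|V_cyc| = {abs(V_cyc):.0f} V")
```

This reproduces the peak reactor current of about 6335 A quoted in section 4.1.3.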
The modulation ratio (M) is defined as the ratio of the peak value of the cycloconverter
phase voltage to half of the transformer secondary voltage during one commutation

M = √2 Vcyc / (Ntr Ud / 4)    (4.6)
The maximum modulation ratio can be calculated from figure 3.8 for any leg using

Mmax = 1 − 2 fsw (Δtvsc + Δtacc1 + Δtacc2 + Δtacc3 + 2 tq)    (4.7)
If the commutation of the first leg takes place at the peak value of the phase current,
then the commutation time for that leg can be calculated as

Δtacc1 ≈ √2 Irms_cyc / (di/dt)    (4.8)

and the commutation times for the other two phase legs can be approximated by

Δtacc2 ≈ Δtacc3 ≈ 0.5 Δtacc1    (4.9)
The switching of the VSC occurs near the peak value of the phase current (at the
fundamental frequency) of a cycloconverter leg, while the currents of the other two
legs have different signs compared to this leg's current. Thus, using (3.2) and (3.8),
the switching time of the VSC can be roughly estimated by

Δtvsc ≈ 2 Cs Ud / (Ntr √2 Icyc)    (4.10)
From (4.6), (4.7) and (4.10), and by assuming that the converter is operating at the
maximum modulation ratio, the minimum required transformer turns ratio Ntr can be
calculated as

Ntr = (2 k1 fsw + k2) / (1 − 2 fsw (Δtacc1 + Δtacc2 + Δtacc3 + 2 tq))    (4.11)

k1 = 2 Cs Ud / (√2 Icyc)    (4.12)

k2 = 4 √2 Vcyc / Ud    (4.13)

Finally, the leakage inductance of the transformer can be estimated from

Lλ = (Ntr Ud / 2) / (di/dt)    (4.14)
The maximum allowed commutation rate di/dt is found in the thyristor data sheet.
Throughout the analysis it is assumed that the maximum possible modulation ratio is
constant, but in reality it varies slightly between different commutation intervals as a
result of the variation of the cycloconverter as well as the VSC commutation times.
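Equations (4.8)-(4.14) can be evaluated at fsw = 900 Hz and di/dt = 500 A/µs for the three thyristor turn-off times. The effective snubber capacitance per valve Cs used below (about 50 nF, i.e. the 3 µF per-module value of section 4.1.3 divided by several tens of series modules) is an assumption, since the exact per-valve value is not stated; the k1 term it enters is only a few percent of k2, so the result is insensitive to the exact Cs:

```python
import math

# Rated operating point (cf. Table 4.1)
P, Q, Sb = 78e6, 48.4e6, 91.8e6
V = 11e3 / math.sqrt(3)          # grid phase voltage [V]
Ud = 150e3                       # dc link voltage [V]
fsw = 900.0                      # switching frequency [Hz]
didt = 500e6                     # max commutation rate [A/s] (500 A/us)
Cs = 50e-9                       # ASSUMED effective snubber capacitance per valve [F]

# Rated cycloconverter current and voltage from eqs (4.1)-(4.5)
I_cyc = 1j * (0.15 * Sb / (3 * V**2)) * V + (P - 1j * Q) / (3 * V)
V_cyc = abs(V + 1j * (0.15 * (11e3)**2 / Sb) * I_cyc)
I_rms = abs(I_cyc)

# Commutation times, eqs (4.8)-(4.9)
dt1 = math.sqrt(2) * I_rms / didt
dt_sum = dt1 + 0.5 * dt1 + 0.5 * dt1

k1 = 2 * Cs * Ud / (math.sqrt(2) * I_rms)   # eq (4.12)
k2 = 4 * math.sqrt(2) * V_cyc / Ud          # eq (4.13)

design = {}
for tq in (10e-6, 25e-6, 40e-6):
    Ntr = (2 * k1 * fsw + k2) / (1 - 2 * fsw * (dt_sum + 2 * tq))  # eq (4.11)
    L_leak = (Ntr * Ud / 2) / didt                                 # eq (4.14)
    design[tq] = (Ntr, L_leak)
    print(f"tq={tq*1e6:.0f} us: Ntr={Ntr:.3f}, L_leak={L_leak*1e6:.1f} uH")
```

The resulting turns ratios and leakage inductances agree with Table 4.3 to within rounding.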
The total number of series-connected IGBTs per VSC valve can be calculated by

Nigbt_w/o = Udmaxdeblock / (VCE,SSOA,max − ΔU)    (4.15)
The maximum dc link voltage when the converter is switching including the ripple
(Udmaxdeblock) is taken as 116% of the nominal dc link voltage (Ud) for a voltage source
converter [11]. The maximum switching safe operating area voltage (VCE,SSOA,max) is
usually taken as 60% of the maximum collector-emitter voltage (VCEmax). The reason
for this will be explained later. The factor ΔU mainly accounts for a possible
uneven distribution of the voltage among the IGBTs in a valve and it varies
linearly with VCEmax. For a 2.5kV IGBT it has a value of 275V [11].
A redundancy of 6% is then added to the number of devices [11], which gives

Nigbt = 1.06 Nigbt_w/o    (4.16)

Having (4.16), it is possible to calculate the rated switching safe operating area
voltage of a single IGBT in the VSC valve as

VCE0,ssoa = Ud / Nigbt    (4.17)
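A sketch of the series-device count of eqs (4.15)-(4.17) for a 5.2 kV module is given below. The linear scaling of ΔU from the 275 V quoted for a 2.5 kV device and the rounding up to whole devices are assumptions, so the thesis' own final count may differ:

```python
import math

# Series IGBT count per VSC valve, eqs (4.15)-(4.17), for a 5.2 kV module.
Ud = 150e3                               # nominal dc link voltage [V]
Ud_max_deblock = 1.16 * Ud               # max dc voltage incl. ripple [V]
VCE_ssoa_max = 3.12e3                    # 60 % of VCEmax = 5.2 kV
dU = 275.0 * (5.2 / 2.5)                 # voltage-sharing margin [V] (ASSUMED scaling)

N_wo = math.ceil(Ud_max_deblock / (VCE_ssoa_max - dU))   # eq (4.15)
N_igbt = math.ceil(1.06 * N_wo)                          # eq (4.16), 6 % redundancy
VCE0_ssoa = Ud / N_igbt                                  # eq (4.17)
print(N_wo, N_igbt, round(VCE0_ssoa))
```

With these assumptions the per-device operating voltage comes out comfortably below the 2.42 kV rated SSOA voltage of the proposed module (Table 4.4), i.e. the count is conservative.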
The number of series-connected thyristors per one valve of the cycloconverter can
then be calculated using

Nthy = 2.5 Uov_pk / VRRM    (4.18)

Uov_pk is the maximum peak operating voltage across the valve and VRRM is the
maximum voltage that the thyristor can block during the switching. Experience
has shown that the number of thyristors calculated in (4.18) is an acceptable level for
the thyristor-controlled reactor (TCR) valves within ABB.
The voltage Uov_pk can be calculated from the transformer turns ratio and the maximum
dc link voltage Udmaxdeblock

Uov_pk = Ntr Udmaxdeblock / 2    (4.19)

Using the number of thyristors in (4.18), it is possible to calculate the rated safe
operating area voltage of the thyristor

Vthy,ssoa = Ntr Ud / (2 Nthy)    (4.20)
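Equations (4.18)-(4.20) can be evaluated for the three candidate thyristors, using the turns ratios of Table 4.3; rounding Nthy up to a whole device is an assumption about the thesis' rounding convention:

```python
import math

# Series thyristor count per cycloconverter valve, eqs (4.18)-(4.20).
Ud = 150e3
Ud_max_deblock = 1.16 * Ud

devices = {                      # name: (VRRM [V], Ntr from Table 4.3)
    "TF3390-F3-12": (1200.0, 0.283),
    "TF2910-F2-28": (2800.0, 0.300),
    "TF3280-F2-25": (2500.0, 0.320),
}

ratings = {}
for name, (VRRM, Ntr) in devices.items():
    Uov_pk = Ntr * Ud_max_deblock / 2            # peak valve voltage, eq (4.19)
    N_thy = math.ceil(2.5 * Uov_pk / VRRM)       # eq (4.18), rounded up (assumption)
    V_ssoa = Ntr * Ud / (2 * N_thy)              # rated SSOA voltage, eq (4.20)
    ratings[name] = (N_thy, V_ssoa)
    print(f"{name}: N_thy={N_thy}, V_ssoa={V_ssoa:.0f} V")
```

The resulting rated SSOA voltages agree with the 407 V, 936 V and 856 V figures of Table 4.2 to within a couple of volts.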
4.1.3 Realization of the design
It is worth mentioning that the values of P and Q in (4.3) have been taken as the rated
power of the Valhall offshore station (refer to table 2.1), and the base power has been
taken equal to the magnitude of the apparent power injected into the grid, hence it
can be calculated from the P and Q values. The grid phase voltage (V) has been taken as
11/√3 kV. The VSC of the MCC has two voltage levels, either +75kV or −75kV.
Table 4.1 summarizes the operating point data that can be used together with the
above equations to make the converter design
Table 4.1. The rated operating point for the converter station

P [MW]   Q [MVAr]   Sb [MVA]   V [kV]   f [Hz]   HVDC voltage
78       48.4       91.8       11/√3    60       ±75kV
The peak value of the reactor current at rated operation (Table 4.1) is 6335A.
Dimensioning of the cycloconverter semiconductors
Different thyristors with different turn-off times have been proposed. The thyristors
must be selected in such a way that their current ratings are above the rating of
the reactor current by a sufficient margin. This margin is necessary in order to protect
the valves against the ripple in the current (especially at low switching frequency) or
any accidental transient current. In addition to the rating requirements, the thyristors
should also be fast enough to allow a high modulation index (see (4.7)) and hence a
low transformer turns ratio Ntr (refer to (4.6)), since the latter is an important
factor for the dimensioning of the VSC part of the converter as will be seen later.
Another important criterion for thyristors is that they should also possess high voltage
blocking capability so as to reduce the number of required devices and thereby the
size of the station as will be seen in the next chapter. Nevertheless, the very fast
thyristors mostly have a low blocking voltage and vice versa for the slow thyristors.
Therefore, a trade-off should be made between the blocking voltage and turn-off time
when a thyristor is chosen.
The graph in figure 4.2 shows a plot of the maximum modulation index and
transformer ratio respectively as a function of switching frequency for three thyristors
with different turn-off times and the same commutation rate (di/dt). The data for the
thyristors can be found in table 4.2.
Figure 4.2. The maximum modulation index (upper) and transformer turns ratio Ntr
(lower) as functions of the switching frequency for the thyristors TF3390-F3-12
(1200V, tq=10µs), TF2910-F2-28 (2800V, tq=25µs) and TF3280-F2-25 (2500V, tq=40µs)
It is quite obvious from the upper diagram that using slow thyristors results in large
area losses in the modulated voltage (the maximum modulation index decreases)
during commutation, see figure 3.8. The situation becomes even worse as the
switching frequency gets higher. The consequence of the reduced maximum
modulation ratio is that a higher turns ratio of the transformer (Ntr) is needed to
get the required AC side voltage if slow thyristors are used (refer to the lower part of
figure 4.2). It is clear from (3.2) that a high turns ratio results in a high peak
transformer current, which implies that semiconductors with high current ratings
must be used in the VSC. Increasing the switching frequency also worsens the
situation and may boost the semiconductor losses.
Another compromise that has to be considered when a thyristor is selected from a
family of thyristors with the same blocking voltage is that the devices with a shorter
turn-off time have a higher on-state voltage drop (and thereby larger conduction
losses) than the ones with a longer turn-off time (see figure 4.3).
Figure 4.3. The maximum on-state voltage drop VTmax (at maximum rated current)
as a function of the turn-off time tq for thyristors with the same blocking voltage
Characteristics of the proposed thyristors
Table 4.2 presents the characteristics of the thyristors proposed to be applied in the
cycloconverter. The thyristors are from Proton-Electrotex [14].
Table 4.2 Characteristics of the proposed thyristors
Thyristor module TF3390-F3-12 TF2910-F2-28 TF 3280-F2-25
Peak on-state current 10.65kA 9.14kA 10.30kA
Repetitive peak off-state voltage 1200V 2800V 2500V
Repetitive peak reverse voltage 1200V 2800V 2500V
Rated SSOA voltage Vthy,ssoa 407 V 936 V 856V
Turn-off time tq 10µs 25µs 40µs
On-state threshold voltage VT 1.4V 1.4V 1.3V
On-state slope resistance rT 0.08mΩ 0.2mΩ 0.15mΩ
Looking at the current rating of the proposed thyristors given in the table above, it is
clear that they have current ratings with a margin of more than 50% above the peak
reactor current i.e. 6335A. The value of the peak reactor current is calculated using
(4.3) and the converter rated operating point presented in table 4.1. The thyristor rated
SSOA voltage can be calculated by using (4.20). From the values in the table, it can
be noticed that a large margin between the maximum blocking voltage (VRRM) and the
rated SSOA voltage is taken in order to ensure that the system will remain functional
even if some thyristors at a certain valve fail to trigger and to ensure that the devices
are well protected against the possible switching and lightning overvoltages or the
overvoltages caused by the ripple of the dc link voltage.
Transformer ratio, leakage inductance
The values of the transformer turns ratio and leakage inductance can be evaluated
using (4.11) and (4.14) respectively. The maximum current derivative during the
turn-on and turn-off (di/dt) is taken from the thyristor data sheet and has a value of
500A/µs (see appendix A).
Table 4.3 shows some values at operating switching frequency fsw=900Hz for
different thyristor options. The switching frequency is chosen to be 900Hz since the
converter is expected to have a good harmonics performance at this frequency if it is
modulated by a constrained SPWM.
Table 4.3 Transformer leakage inductance and turn-ratio at fsw=900Hz
Thyristor module fsw di/dt Lλ N tr
TF3390-F3-12 900Hz 500A/ µs 42.4µH 0.283
TF2910-F2-28 900Hz 500A/ µs 45.0 µH 0.300
TF 3280-F2-25 900Hz 500A/ µs 48.0 µH 0.320
Obviously the above values, as can be seen from equation (4.14) and figure 4.2, vary
with the switching frequency at which the converter is designed to operate and the
turn-off time of the thyristor.
It is worth mentioning that the exact physical value of the leakage inductance depends
on the winding geometry.
Dimensioning of the VSC semiconductors
The IGBT module is selected based on the transformer current and the dc link voltage.
The transformer current determines the current ratings of the devices while the
voltage ratings as well as the number of series-connected IGBT modules are decided
based on the maximum value of the dc link voltage. The selected IGBT module
should be able to handle a current which is higher than the rated transformer current
with a sufficient margin to account for the current ripple and current increase due to
transients. Up to a switching frequency of 1.2 kHz it was found that the peak of the
transformer current (without ripple) doesn’t exceed 2.3kA if a thyristor with tq=40µs
is used.
The rated switching safe operating area SSOA voltage combined with the long term
stability defines the IGBT module rating. In order to improve reliability and to avoid
the device failure due to cosmic radiation the maximum allowed SSOA voltage is
derated by 40% from the maximum collector-emitter voltage of the device VCEmax (this
de-rating margin is an acceptable figure for a 2.5kV IGBT within ABB). Again there
is a margin between the maximum and rated SSOA to account for the voltage spike
caused by diode reverse recovery [6]. Based on the previous discussion, the number of
series-connected IGBTs per valve and thereby the rated SSOA voltage (VCE,SSOA ) can
be calculated using (4.15), (4.16) and (4.17). The ripple in the dc link voltage has also
been accounted for in this calculation (refer to equation (4.15)).
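Equations (4.15)–(4.17) are not reproduced in this excerpt, but one plausible reading of the series-connection requirement is that the number of IGBTs per valve is the maximum dc link voltage divided by the rated SSOA voltage per device, rounded up. The sketch below uses this assumption with the 150kV dc link of table 4.5 and the 2.42kV rated SSOA of table 4.4; it reproduces the 62 modules per valve mentioned in section 5.3.3, but the actual design equations may include further margins (e.g. for the dc-voltage ripple of (4.15)):

```python
import math

def igbts_per_valve(v_dc_max, v_ssoa_rated):
    """Assumed sizing rule: enough series devices so that each blocks no more
    than its rated SSOA voltage when the valve holds the full dc link."""
    return math.ceil(v_dc_max / v_ssoa_rated)

# MCC VSC: 150 kV dc link, 2.42 kV rated SSOA per 5.2 kV module (table 4.4)
n_mcc = igbts_per_valve(150e3, 2.42e3)
print(n_mcc)  # 62, matching the valve description in section 5.3.3
```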
To meet the discussed rating requirements, an IGBT module with the characteristics
given in table 4.4 has been proposed for the VSC design. It is the 5.2kV 2000A Soft-
switching IGBT module used in ABB Project Light C [15]. The device can handle a
current up to 4000A (peak value). The current rating is the same for the IGBT and the
diode in the same module. The value of the snubber capacitor is chosen to be
3µF/IGBT module as in Project Light C [15].
Table 4.4. Main characteristics of a 5.2kV 2000A Soft-switching IGBT module
Max. collector-emitter voltage VCEmax 5.2kV
Max SSOA voltage VCE,SSOA,max 3.12kV
Rated SSOA voltage VCE,SSOA 2.42kV
Nominal collector current IC 2000A
Maximum collector current ICM 4000A
Diode forward current IF 2000A
Maximum pulsed forward current IFM 4000A
IGBT threshold voltage VCE0 1.7V
IGBT slope resistance rCE 1.15mΩ
Diode threshold voltage VF 1.1V
Diode slope resistance rF 0.65mΩ
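The threshold voltage and slope resistance in table 4.4 define the usual piecewise-linear on-state model v_on = V0 + r·I that the conduction-loss formulas of chapter 5 are built on. A quick sanity check at the nominal currents of the table:

```python
def on_state_voltage(i, v0, r):
    """Piecewise-linear on-state voltage drop: v = V0 + r*I (for I > 0)."""
    return v0 + r * i

# 5.2 kV soft-switching IGBT module (table 4.4): VCE0 = 1.7 V, rCE = 1.15 mOhm
v_igbt = on_state_voltage(2000.0, 1.7, 1.15e-3)   # at IC = 2000 A
# Anti-parallel diode of the same module: VF = 1.1 V, rF = 0.65 mOhm
v_diode = on_state_voltage(2000.0, 1.1, 0.65e-3)  # at IF = 2000 A
print(f"{v_igbt:.2f} V, {v_diode:.2f} V")  # 4.00 V, 2.40 V
```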
4.2 Converter station design for the HVDC-Light
4.2.1 Design data of the converter station
Main circuit parameters
Table 4.5 gives the main circuit data of the compared HVDC-Light topology used in
this study. These data can be found in [8].
Table 4.5. Main circuit data of Valhall offshore station

Base power (defined at max reactor current and nominal filter voltage)   94.8MVA
DC link voltage                                150kV
Converter reactor size                         24mH
AC filter size                                 8.8MVAR / 4.41µF
Modulation ratio range                         0.56-0.76
Max reactor current at steady state (rms)      760A
Nominal filter voltage                         72kV
Transformer rated power                        91MVA
Transformer turn-ratio (valve/line side)       68/11
Nominal frequency                              60Hz
Semiconductors dimensioning for the conventional VSC
A two-level VSC converter similar to the one in figure 3.11 is used. A 2.5kV Press-
pack IGBT, PG4 module has been used in this study to design the conventional VSC.
Table 4.6 gives an overview of the device characteristics [16]. It can be noticed that
this IGBT is able to handle a current (2600A) which is about twice the maximum peak
reactor current (1075A). The calculation of the maximum reactor current was made in
[8] and is shown as an rms value in table 4.5. The large margin between the maximum
reactor current and the maximum current the IGBT can handle is needed to protect the
device against any possible overcurrent caused by the starting of large induction
machines, ripples or other possible transients. A similar margin is used for the 2.5kV
devices used in the current Valhall offshore converter [11]. The current rating is the
same for the IGBT and the diode in the same module. Similar as before, the rated
switching safe operating area (SSOA) voltage can be estimated using (4.15), (4.16)
and (4.17).
Table 4.6. Main characteristics of a 2.5kV Press-pack IGBT module
5SNA 130025H0003 (PG4 Light B) [14]
Max. collector-emitter voltage VCEmax 2.5kV
Max SSOA voltage VCE,SSOA,max 1.5kV
Rated SSOA voltage VCE,SSOA 0.99kV
Nominal collector current IC 1300A
Maximum collector current ICM 2600A
Diode forward current IF 1300A
Maximum pulsed forward current IFM 2600A
IGBT threshold voltage VCE0 1.14V
IGBT slope resistance rCE 1.2mΩ
Diode threshold voltage VF 1.05V
Diode slope resistance rF 0.57mΩ
Chapter 5
Semiconductor losses formulation, simulation
results and analysis
5.1 Formulation of semiconductor losses for the
Mutually Commutated Converter (MCC)
5.1.1 VSC losses
The on-state losses depend on the current through the device (ICE) as well as the on-
state voltage drop. They can be calculated from the threshold voltage, the on-state
slope resistance and the current through the IGBT or diode. The values of the device
threshold voltage and slope resistance are taken from the data sheet at a temperature of
125°C, and hence no temperature dependence is considered. The IGBT on-state losses can
be calculated by
Pcond_igbt = f ∫(t=0 to 1/f) ICE (VCE0 + rCE ICE) dt        (5.1)
Similarly, the diode on-state losses can be calculated by
Pcond_diode = f ∫(t=0 to 1/f) IF (VF + rF IF) dt        (5.2)
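Equations (5.1) and (5.2) average the instantaneous on-state loss over one fundamental period. In a sampled simulation (the thesis uses Matlab; the sketch below uses Python with a synthetic current purely for illustration) the integral reduces to a mean of i·(V0 + r·i) over uniformly spaced samples, which is zero wherever the device blocks:

```python
import math

def conduction_loss(current_samples, v0, r):
    """Average conduction loss per device, eqs. (5.1)/(5.2):
    P = f * integral of i*(V0 + r*i) dt over one fundamental period
      = mean of i*(V0 + r*i) over uniformly spaced samples (i = 0 when blocking)."""
    n = len(current_samples)
    return sum(i * (v0 + r * i) for i in current_samples) / n

# Synthetic example: the device conducts a half sine wave of 2000 A peak
# for half of the fundamental period (an assumption, not the real waveform).
N = 10000
i_dev = [2000.0 * math.sin(2 * math.pi * k / N) if k < N // 2 else 0.0
         for k in range(N)]
p_igbt = conduction_loss(i_dev, v0=1.7, r=1.15e-3)  # table 4.4 IGBT data
print(f"{p_igbt:.0f} W")
```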
The switching losses consist of turn-on and turn-off losses. The turn-on losses of the
IGBTs are neglected since the turn-on takes place at zero current and zero voltage
conditions. Thus the turn-off losses are assumed to be the only component of the
IGBTs switching losses and can be calculated (by interpolation) from the turn-off
energy curves at the corresponding turn-off current. The turn-off energy curves can be
found in the device data sheet [15]. In this calculation the turn-off energy data is taken
at a temperature of 125C. Having these curves, it is possible to extract the
instantaneous turn-off energies by interpolation. The current through the upper valve
of the VSC (Is1) is shown in figure 5.1. The switching losses are the average of the
instantaneous turn-off energies over one cycle of the fundamental frequency.
Psw_igbt = f Σ(x=1 to fsw/f) Eoff(Ioff)
The diode turn-on takes place every cycle right after the reversal of the VSC
transformer voltage (see the diode current ID2 in fig 5.1). The diode turn-on losses can
be calculated in a similar way as the IGBT turn-off losses using the turn-on energy
diagrams in the data sheet
Pdi_on = f Σ(x=1 to fsw/f) Eon(Ion)        (5.3)
Similarly, the diode turn-off losses can be calculated using the reverse recovery
energy data
Pdi_off = f Σ(x=1 to fsw/f) Erec(Ioffd)        (5.4)
From (5.3) and (5.4) the switching losses of a single diode can be evaluated
Psw _ di = Pdi _ off + Pdi _ on (5.5)
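The data-sheet energy curves give switching energy as a function of switched current, so each switching event contributes one interpolated energy value, and eqs. (5.3)–(5.5) multiply the per-period sum by the fundamental frequency f. A minimal sketch (the energy-curve points below are invented placeholders, not data from [15]):

```python
def interp(x, xs, ys):
    """Linear interpolation on an increasing grid (clamped at the ends)."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

def switching_loss(f, switched_currents, e_xs, e_ys):
    """Eqs. (5.3)/(5.4): P = f * sum of E(I_x) over the events of one period."""
    return f * sum(interp(i, e_xs, e_ys) for i in switched_currents)

# Placeholder turn-off energy curve: current [A] -> energy [J] per event
e_i = [0.0, 1000.0, 2000.0, 4000.0]
e_j = [0.0, 1.0, 2.5, 6.0]

# 15 turn-off events per fundamental cycle (fsw/f = 900/60), at assumed currents
events = [1500.0] * 15
p_sw = switching_loss(60.0, events, e_i, e_j)
print(f"{p_sw:.1f} W")
```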
Using the above equations together with the number of IGBTs per valve and number
of VSC valves, the total VSC conduction losses can be calculated as
Pcond _ vsc = 2 N igbt Pcond _ igbt + 2 N diode Pcond _ di (5.6)
And similarly the total VSC switching losses are
Psw _ vsc _ soft = 2 N igbt Psw _ igbt + 2 N diode Psw _ di (5.7)
The number of IGBTs or diodes per valve can be calculated using (4.16).
The number of diodes per valve Ndiode is taken to be the same as the number of IGBTs for
the selected IGBT module. The subscript ‘‘soft’’ in (5.7) stands for soft switching.
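Equations (5.6) and (5.7) only scale the per-device results by the number of valves (two, for the one-leg MCC VSC) and the number of series-connected devices per valve. A sketch, using the 62 modules per valve reported in section 5.3.3 as a device-count cross-check:

```python
def total_vsc_losses(p_cond_igbt, p_sw_igbt, p_cond_di, p_sw_di,
                     n_igbt, n_diode=None):
    """Eqs. (5.6)/(5.7) for the 2-valve MCC VSC; the diode count equals
    the IGBT count for the selected module."""
    if n_diode is None:
        n_diode = n_igbt
    p_cond = 2 * n_igbt * p_cond_igbt + 2 * n_diode * p_cond_di
    p_sw = 2 * n_igbt * p_sw_igbt + 2 * n_diode * p_sw_di
    return p_cond, p_sw

# Device-count check: 2 valves x 62 series IGBT modules = 124 devices,
# matching the total quoted for the MCC VSC part in section 5.3.3.
print(2 * 62)  # 124
```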
Figure 5.1. Simulated transformer voltages (magenta), transformer current (blue),
upper IGBT current (green) and lower diode current (black) during a few switching cycles
5.1.2 Cycloconverter losses
The conduction losses of the thyristor can be calculated by
Pcond_thy = f ∫(t=0 to 1/f) Ithy (VT + rT Ithy) dt        (5.8)
Ithy is the current through one thyristor at the upper or lower valves of one
cycloconverter leg. Figure 5.2 shows a plot of this current together with the
cycloconverter phase voltage and current. It should be mentioned that the current
ripple is ignored when the loss calculation is made in this study.
The thyristor manufacturers usually provide the thyristor losses (including conduction
and switching) as total energy losses at a given conduction time, commutation rate,
voltage and at different values of the device current, as can be seen from figure 5.3.
Figure 5.2. Simulated waveforms of cycloconverter phase voltage (magenta),
current (blue) and one thyristor current (green)
Figure 5.3. Typical total thyristor losses as supplied by a thyristor manufacturer [14]
The switching losses can be estimated from the difference between the total thyristor
losses and the conduction losses using the energy curve above. Thus
the total conduction losses of the cycloconverter are
Pcond ,cyc = 12 N thy Pcond _ thy (5.9)
and the total switching losses of the cycloconverter are
Psw,cyc = 12 N thy ( f ∑ Etot − Pcond _ thy ) (5.10)
The number of series connected thyristors in the above equations (Nthy) can be estimated
using (4.18).
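Since the manufacturer curves of figure 5.3 give total (conduction plus switching) energy, eq. (5.10) recovers the switching part by subtracting the separately computed conduction loss from f·ΣEtot. A sketch with invented numbers (not data from [14]); the factor 12 is the number of thyristor positions from (5.9)/(5.10):

```python
def cyclo_losses(f, p_cond_thy, e_tot_per_cycle, n_thy, n_positions=12):
    """Eqs. (5.9)/(5.10): totals for the 12 thyristor positions of the
    cycloconverter, each with n_thy series-connected devices.
    e_tot_per_cycle is the per-thyristor sum of total energies over one
    fundamental period, read from the manufacturer's curves."""
    p_cond_total = n_positions * n_thy * p_cond_thy
    p_sw_total = n_positions * n_thy * (f * e_tot_per_cycle - p_cond_thy)
    return p_cond_total, p_sw_total

# Device-count cross-check for case 2: 24 series thyristors per valve,
# 12 valve positions -> 288 thyristors, as stated in section 5.3.3.
print(12 * 24)  # 288
```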
5.2 Formulation of semiconductor losses for the two-
level conventional VSC
The conduction losses per IGBT or diode can be calculated using the same equations
as in section 5.1.1. The IGBT turn-off losses can also be calculated using (5.3) and the
turn-off energy data from the device data sheet. The diode turn-on losses can be ignored
since the diode turns on very quickly; the IGBT, however, turns on hard in this case, and thus
the turn-on losses can be calculated by
Pigbt _ on = f ∑ Eon _ igbt ( I on _ igbt ) (5.13)
The number of IGBTs per valve can be calculated using (4.16), but the total number
of the converter IGBTs is different since this converter has six valves (refer to fig
3.11). Thus the total conduction losses of the converter can be written as
Ploss _ cond _ vsc = 6 N igbt Pcond _ igbt + 6 N diode Pcond _ di (5.14)
Similarly the total switching losses are
Ploss _ sw _ vsc _ hard = 6 N igbt ( Psw _ igbt + Pigbt _ on ) + 6 N diode Pdi _ off (5.15)
The subscript ‘‘hard’’ stands for hard switching, since all the switches turn on and off
when the current through and the voltage across the device are not zero. Therefore,
this topology has high switching losses, as will be seen later.
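For the six-valve conventional VSC the same per-device terms are scaled by 6 instead of 2, and the IGBT turn-on term (5.13) is no longer zero. The device count implied by (5.14)/(5.15), 6 valves × 151 series modules (section 5.3.3), matches the total of 906 quoted in the comparison:

```python
def total_vsc_losses_hard(p_cond_igbt, p_cond_di, p_sw_igbt_off,
                          p_igbt_on, p_di_off, n_igbt, n_diode=None):
    """Eqs. (5.14)/(5.15) for the six-valve two-level hard-switched VSC."""
    if n_diode is None:
        n_diode = n_igbt
    p_cond = 6 * n_igbt * p_cond_igbt + 6 * n_diode * p_cond_di
    p_sw = 6 * n_igbt * (p_sw_igbt_off + p_igbt_on) + 6 * n_diode * p_di_off
    return p_cond, p_sw

print(6 * 151)  # 906 devices for the conventional VSC
```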
5.3 Simulation results, analysis and comparison of
semiconductor losses and the volume of the valves
This part presents the simulation results of semiconductors losses and the total volume
of the converter valves for the two compared systems. The semiconductor losses and
the number of devices for different cases, where thyristors with different turn-off times
are used in the cycloconverter, are shown and compared. Finally, the total volume of the valve
installations for both systems is estimated and compared. The simulation has been
conducted in Matlab.
The semiconductor losses are expressed in per unit (pu) of the base power of the converter (94.8MVA, table 4.5).
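With the 94.8MVA base of table 4.5, the per-unit loss figures translate into absolute losses as a simple scaling (treating the MVA base as the power base, which is the usual per-unit convention):

```python
S_BASE = 94.8e6  # base power [VA], table 4.5

def pu_to_watts(p_pu, s_base=S_BASE):
    """Convert a per-unit loss figure to watts on the given power base."""
    return p_pu * s_base

# e.g. the conventional VSC's 0.0212 pu corresponds to about 2.0 MW
print(f"{pu_to_watts(0.0212) / 1e6:.2f} MW")  # 2.01 MW
```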
5.3.1 Different study cases
Three different cases are studied where thyristors with turn-off times shown in table
5.1 are used in the cycloconverter. These devices are the same as the thyristors shown
before in table 4.2. The IGBT data of the VSC part of MCC is given in table 4.4 while
the IGBT data of the conventional VSC is given in table 4.6. The switching frequency
of the MCC is fsw_MCC =900Hz while the switching frequency of the conventional VSC
is taken from [9] as fsw_vsc =1620Hz which is the frequency at which the Valhall
offshore converter is designed to operate.
Table 5.1. Different cases to be studied at fsw_MCC=900Hz and fsw_vsc=1620Hz

Case  Thyristor module  tq(µs)  Vthy,ssoa(V)  Ipeak(kA)  rT(mΩ)  VT(V)
1     TF3390-F3-12      10      407           10.65      0.08    1.4
2     TF2910-F2-28      25      936           9.14       0.20    1.4
3     TF3280-F2-25      40      856           10.30      0.15    1.3
The semiconductor losses and the number of valves are shown in figure 5.4 and 5.5
respectively. The losses of both VSC and cycloconverter parts of MCC together with
the total MCC losses are shown. Comparing the losses of both topologies, it is quite
obvious from figure 5.4 that the MCC topology has an inferior loss performance, with
0.0216pu compared to 0.0212pu for the VSC topology, which means that the losses
are increased by about 2.3%.
Looking at the distribution of the losses in the MCC topology, it can be seen that the
major part of the losses comes from the cycloconverter part as a result of using a large
de-rating margin between the maximum blocking voltage (VRRM) and the rated
switching safe operating area voltage (Vthy,ssoa). Therefore, a large number of thyristors
is needed to block the voltage applied to cycloconverter valves and the current
through these devices is quite high with a peak value of 6335A which is more or less
six times the peak current that is handled by the valves of the conventional VSC
(1074A). The conduction losses of the cycloconverter are quite high as a result of
using thyristors with a low voltage blocking capability. These thyristors have very low
switching losses as can be seen in the figure. The diode conduction losses are also
quite low in the case of the MCC since its VSC works as an inverter and therefore the
diodes conduct only for a short time. The IGBTs switching losses are considerably
low since they have zero turn-on losses (they turn on at zero current and zero voltage
condition) and low turn-off losses as they turn off at zero voltage. The final
distribution of the losses is 0.0071pu for the VSC part of the MCC and 0.0145pu for
the cycloconverter.
On the other hand, it can be seen that the IGBTs have quite high switching losses in
case of the conventional VSC topology because this converter employs a hard
switching for all semiconductors. The diode losses are lower than those of the IGBT
since it conducts for a shorter time.
Figure 5.4. Comparison between MCC Losses and conventional VSC losses (right)
using fast thyristor TF3390-F3-12 with a tq=10 µs and VRRM=1200V in MCC
Figure 5.5. Comparison between the number of the switches of the MCC and the
conventional VSC (right) using TF3390-F3-12 with a tq=10 µs and VRRM=1200V in
MCC cycloconverter
It is also of interest to compare the total number of semiconductor devices needed to
construct the converter in both cases. It can be seen from figure 5.5 that the number of
devices in case of the MCC is around 748 (624 thyristors and 124 IGBTs). This
number is considerably low compared to the number of devices needed for the
conventional VSC which is around 906.
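The case-1 totals quoted above can be cross-checked from the per-valve structure: 12 cycloconverter positions × 52 series thyristors plus 2 VSC valves × 62 IGBT modules. Note that the 52 per-valve thyristor count for case 1 is inferred from the figures given, not stated explicitly:

```python
# Inferred breakdown of the 748 MCC devices for case 1:
thyristors = 12 * 52   # 12 valve positions x 52 series thyristors (implied)
igbts = 2 * 62         # 2 VSC valves x 62 series IGBT modules (section 5.3.3)
print(thyristors, igbts, thyristors + igbts)  # 624 124 748
```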
Considering the low cost of thyristors, the MCC topology could result in considerable
savings in the investment cost of the converter. However the cost of snubber
capacitors for the MCC as well as the cost of the control for both topologies should be
taken into consideration when the comparison is made.
Figure 5.6 and 5.7 show the semiconductor losses and the number of devices
respectively. It can be observed that the conduction losses of the cycloconverter are
lower than the previous case because the thyristors used in this case have a higher
blocking voltage and hence the number of the thyristors is considerably lower as can
be seen from the lower figure. Nevertheless, the total MCC losses are slightly higher
in this case (0.0223pu) because this type of thyristors have higher switching losses
(see fig 5.6) and due to the increase in the losses of the VSC part. The total losses are
distributed as 0.0077pu for the VSC part of the MCC and 0.0146pu for the
cycloconverter. The increase in the losses of the VSC part of MCC is attributed to the
longer turn-off time which implies that a higher transformer turn ratio is needed and
hence the transformer current will also be high in this case (refer to section 4.1.3.1).
The increase in the total losses of the MCC compared to the conventional VSC
topology is roughly around 5.41%.
Figure 5.6. Comparison between MCC Losses and conventional VSC losses (right)
using fast thyristor TF2910-F2-28 with a tq=25 µs and VRRM=2800V in MCC
Figure 5.7. Comparison between the number of the switches of the MCC and the
conventional VSC using fast thyristor TF2910-F2-28 with a tq=25 µs and
VRRM=2800V in MCC cycloconverter
The total number of MCC devices (288 thyristors + 124 IGBT modules) is
considerably lower than in the previous case, which suggests that this option results
in a lower investment cost for the MCC compared to the previous one, but with a slight
increase in semiconductor losses.
The small number of devices needed to construct the converter together with the small
size of the MF transformer could result in a smaller size of the converter station for the MF
transformer system in comparison with the LF transformer HVDC system, but this
can be confirmed only if there is a clear picture about the valve arrangements,
dimensioning and layout of other equipment in both stations.
In this case a thyristor with a longer turn-off time than those of the two previous cases
is applied in the cycloconverter. This thyristor has a blocking voltage higher than the
one in case 1 but slightly lower than the one in case 2. Figure 5.8 and 5.9 show a
comparison between the losses and the number of semiconductor devices for the two
systems respectively.
In this case the semiconductor losses of the VSC part of MCC (0.0084pu) are higher
than the two previous cases for the same reason discussed in paragraph 5.1.3.2.
Figure 5.8. Comparison between MCC Losses and conventional VSC losses (right)
using fast thyristor TF3280-F2-25 with a tq=40 µs and VRRM=2000V in MCC
Figure 5.9. Comparison between the number of the switches of the MCC and the
conventional VSC using fast thyristor TF3280-F2-25 with a tq=40 µs and
VRRM=2000V in MCC cycloconverter
The total losses of MCC are 0.0221pu which consist of 0.0084pu for the VSC part
and 0.0137pu for the cycloconverter. The cycloconverter losses are lower than the
previous case because of using a thyristor with a lower threshold voltage and lower
switching losses. The percentage of the increase in the MCC losses is around
4.47%.The total number of MCC devices is 460 (336 thyristors+124 IGBT modules)
compared to 906 for the conventional VSC.
Summary of the results for the three studied cases
Table 5.2 and figure 5.10 give a summary of the calculated results for the above
discussed cases.
Table 5.2. Summary of the results for the three cases

Case  MCC losses [pu], fsw=900Hz    Conventional VSC          % of loss  Number of    Number of devices for
No    VSC      Cycloconverter       losses [pu], fsw=1620Hz   increase   MCC devices  conventional VSC
1     0.0071   0.0145               0.0212                    2.30       748          906
2     0.0077   0.0146               0.0212                    5.41       412          906
3     0.0084   0.0137               0.0212                    4.47       460          906
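The MCC totals quoted in the surrounding discussion (0.0216, 0.0223 and 0.0221pu) can be recovered directly from the table's VSC and cycloconverter columns:

```python
cases = {1: (0.0071, 0.0145), 2: (0.0077, 0.0146), 3: (0.0084, 0.0137)}
totals_quoted = {1: 0.0216, 2: 0.0223, 3: 0.0221}  # from the case discussions

for case, (vsc, cyc) in cases.items():
    total = round(vsc + cyc, 4)
    print(f"case {case}: {vsc} + {cyc} = {total} (quoted: {totals_quoted[case]})")
```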
Figure 5.10. Semiconductor losses and number of devices for the three discussed cases
Comparing the three options discussed above (refer to fig 5.10), it can be noticed that
the switching losses increase with the increase of the blocking voltage of the thyristor.
This can also be observed by comparing the energy curves of these thyristors found in
appendix A.
From the table and the plot, it is obvious that the thyristor with the lowest blocking
voltage and shortest turn-off time (case 1) has the best loss performance, but the
number of thyristors required to construct the converter are considerably higher
compared to the other two cases.
Considering the number of devices needed to construct the cycloconverter and the
slight loss difference between the three cases, it would be fair to recommend the choice of
thyristors with the highest blocking voltage (like the one in case 2) if the converter is
intended to be used in an offshore environment, as this option could result in a smaller
size of the converter, which is a crucial factor in offshore applications.
5.3.3 The total volume of the valve installations
The volume of the conventional VSC valves
Figure 5.11 shows a rough sketch of the conventional VSC valves as they are arranged in
the Valhall offshore station [19]. The three converter legs are tagged by the letters A,
B and C. For simplicity, the cooling water pipes which run along the upper part of the
valve ceiling are ignored in this drawing and they will not be considered in the
calculation of the total space occupied by the valves. There are two valves in each leg
and each valve consists of 151 series-connected IGBT modules arranged in a number
of layers. The valve structure is suspended from the ceiling of the valve hall via
porcelain insulators. The valve clearance (distance from the ground) is around 1.26m
and the total space occupied by the valve installation is found to be roughly around
Figure 5.11.The arrangements of the conventional VSC valves
The volume of the MCC valves
The valve arrangements of the VSC part of the MCC are diagramed in figure 5.12.
Each valve consists of 62 series-connected IGBT modules and snubber capacitors.
Since the converter has only one leg, the total number of the VSC devices is around
124. The valve clearance is around 1.26m. A rough estimation of the total space
occupied by the installation results in a value of 138m3.
Figure 5.12.The arrangements of the MCC VSC valves
Figure 5.13 shows a rough sketch of the cycloconverter valves. For simplicity, the
thyristor layers in each valve are shown as a black box which has a similar volume as
the layers in the corresponding valve and the water pipes running across the upper
part of the valves are ignored. The number of the series-connected thyristors per valve
is around 24 thyristors and the total number of the thyristors in the cycloconverter is
around 288 thyristors. This valve design is based on the thyristor TF2910-F2-28 (case
no.2) presented in paragraph 5.3.1. The total occupied space in this case is roughly
around 19.5m3.
Figure 5.13.Schematic diagram of the MCC cycloconverter valves
The above dimensions have been made according to the dimensioning drawing of the
SVC classic [20]. The total space occupied by the MCC valves is around 157.5m3.
This value is approximately 34% of the space occupied by the conventional
VSC valves.
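The MCC total is simply the sum of the VSC-valve and cycloconverter-valve volumes, and the stated 34% ratio implies a conventional-valve volume of roughly 460m3 (the exact figure is not given in this excerpt, so the second line below is only an implied estimate):

```python
v_mcc_vsc = 138.0   # m^3, MCC VSC valves
v_mcc_cyc = 19.5    # m^3, MCC cycloconverter valves (case 2 design)
v_mcc = v_mcc_vsc + v_mcc_cyc
print(v_mcc)                 # 157.5 m^3, as stated in the text
print(round(v_mcc / 0.34))   # ~463 m^3 implied for the conventional VSC valves
```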
Chapter 6
Conclusions and Future Work
This chapter summarizes the work in this thesis and puts forward future aims in this area.
6.1 Conclusions
In this thesis the semiconductor ratings of two different HVDC systems are
determined and their losses and the size of the valves are compared. It is found that
the MF transformer system requires fewer semiconductor devices. The size
of the installation of the MCC valves is found to be approximately 34% of the size
of the conventional VSC valves. However, the semiconductor losses in this case will
be higher by around 5%, which suggests that the use of the MF transformer
HVDC system could be more feasible in applications where the size of the
converter station is of concern.
Using a one-leg MCC VSC, implemented with high blocking voltage devices, results
in a significant reduction in the number of IGBTs. The number of devices in the
cycloconverter is also reduced despite the low rated SSOA voltage of the thyristors
since the cycloconverter valves are required to block a much lower voltage than the
one blocked by the VSC valves as the primary voltage of the MF transformers is
scaled down first before it is applied to the cycloconverter. These two factors
combined result in fewer semiconductor devices for the MF
HVDC system and, as a result, a smaller valve size.
Fast thyristors with different turn-off times and blocking voltages have been
investigated in this study. A trade-off between the number of thyristors needed to
construct the cycloconverter part of the MF transformer system and the devices losses
should be made. The simulation results have shown that it is better to implement the
cycloconverter using thyristors with a higher blocking voltage and a relatively short
turn-off time (case2) as long as the size of the converter is of concern. This option
could lead to a smaller volume of the valves and, as a result, the size of the converter
station could be substantially reduced, which is strongly needed in offshore
environments and city-centre in-feed applications.
6.2 Future Work
The MCC semiconductor losses have been calculated without considering the ripple
in the AC side current which implies that the loss figures are not evaluated accurately.
Therefore, a detailed dimensioning of the AC side of the MF HVDC system is
required especially the design of the reactor and shunt filter using the exact grid
impedance at different frequencies and taking the required harmonics limits into
account. The real AC side current can be calculated based on this design and hence it
is possible to get the exact simulated loss figures. These loss figures should be
verified by measurements, since the thyristor manufacturers do not supply separate
energy curves for the device switching losses. This implies that the simulated thyristor
switching losses are just an estimated value.
The size and weight of the station components is an important figure for both systems
for offshore application. Therefore, a detailed dimensioning of the LF transformer,
MF transformer, shunt filters and phase reactors is required to determine the overall
volume of equipment and to have a clear picture of the converter station layout for
both systems.
References
[1] Staffan Norrga, “On Soft-Switching Isolated AC/DC Converters without Auxiliary
Circuit”, Doctoral Thesis in Royal Institute of Technology, April 2005.
[2] Staffan Norrga, “Modulation Strategies for Mutually Commutated Isolated Three-
Phase Converter Systems.” 36th IEEE Power Electronics Specialists Conference, June
[3] Stephan Meier, Staffan Norrga, Hans-Peter Nee “Modulation Strategies for a
Mutually Commutated Converter System in Wind Farms”, Proceedings EPE 2007,
September 2007.
[4] Stephan Meier, Staffan Norrga, Hans-Peter Nee, “Control Strategies for Mutually
Commutated Converter Systems without Cycloconverter turn-off Capabilities’’, Proceedings
IEEE Power Electronics Specialists Conference, June 2008.
[5] Stephan Meier, “Novel Voltage Source Converter based HVDC Transmission
System for offshore Wind Farms’’, Licentiate thesis, KTH, 2005.
[6] S.Meier, S.Norrga, H.-P.Nee “New Voltage Source Converter Topology for
HVDC Grid Connection of Offshore Wind farms’’, Proceedings of EPE-PEMC 04,
September 2004.
[7] Stephan Meier, “System Aspects and Modulation Strategies of a New HVDC
Transmission System for Wind Farms’’, PhD Thesis, KTH, May 2009.
[8] Jonas Lindgren, “Valhall Re-development project, Main Circuit Parameters’’,
document no: PH-AS-E-0009, Technical Report, ABB, April 2006.
[9] Jonas Lindgren, “Valhall Re-development project, AC Filter Design”, document
no: PH-AS-E-0013, Technical Report, ABB, April 2006.
[10] Jonas Lindgren, “Valhall Design Basis Report’’, Document no: 1JNL100099-
010, Technical Report, ABB, June 2004.
[11] Jon Rasmussen, Hans-Olla-Bjarme, “Valhall Re-development project, Electrical
design report for Valhall’’, document no: 1JNL100101-264, Rev 05, ABB, August
[12] Zhao Shuang, “Mutually commutated converter equipped with thyristor-based
cycloconverter”, M. Sc. Thesis, Chalmers, 2008.
[13] Ned Mohan, Tore M. Undeland, William P. Robbins, “Power Electronics:
Converters, Applications and Design’’, Third Edition, John Wiley & Sons, 2003.
[14] www.eur.proton-electrotex.com
[15] Project Light C, “5.2kV 2000A soft switching IGBT main device specifications’’,
ABB Switzerland
[16] “IGBT-Presspack 5SNA 130025H0003 PG4 Light B specifications’’, ABB
[17] http://www.westcode.com/dgt.htm
[18] Baoliang Sheng, Hans-Ola Bjarme, “Type Test Assesment of Jeddah SVC
Project PCT TCR Valves’’, Document no: 1JNL 100110-989, 2006
[19] Olle Ekwal, Håkan Andersson, Patrik Karlson, “Dimensioning drawing of
Valhall valves’’, September 2007
[20] Ida Hägerlund, Hans Johansson, K-O Eriksson, “Dimensioning Drawing, TCR
valve PCT 6-22 levels long insulators’’, SVC Classic Project, February 2009
Appendix A
Thyristors Energy curves
In this part the energy curves that describe the total thyristor losses as supplied by the
manufacturers are shown.
Figure A1. Total energy per pulse for the fast thyristor TF3390-F3-12 which
has a VRRM=1500V and tq=10µs [14]
Figure A2. Total energy per pulse for the fast thyristor TF2910-F2-28 which has a
VRRM=2800V and tq=25µs [14]
Figure A3. Total Energy per pulse for the fast thyristor TF3280-F2-25 which has a
VRRM=2500V and tq=40µs [14]
Figure A4. Total energy per pulse for the fast thyristor R2620ZC22 which has a
VRRM=2500V and tq=50µs [17]
Appendix B
Loss comparison using thyristors from different manufacturers
[Figure B.1 bar charts: semiconductor losses in pu (IGBT/diode/thyristor conduction and
switching components) and number of devices (IGBT and thyristor modules) for:
1. MCC, TF3280-F2-25, VRRM=2500V, tq=50µs (Proton Electrotex);
2. MCC, R2620ZC22, VRRM=2500V, tq=50µs (Westcode);
4. Conventional VSC.]
Figure B.1. Comparison between the semiconductor losses and the number of the
devices for the MCC and the conventional VSC using thyristors from different
manufacturers. The switching frequencies are fsw_MCC=900Hz and fsw_vsc=1620Hz for
the MCC and the conventional VSC respectively. The total number of MCC
devices=484, the loss increase=11% using the Proton-Electrotex thyristor and -2.2%
using the Westcode thyristor.
Table B.1. Characteristics of fast thyristors TF3280-F2-25 (Proton Electrotex [14])
and R2620ZC22 (Westcode [17]) used in the plot of fig.B.1 and B2
Thyristor module                          TF3280-F2-25   R2620ZC22
Mean on-state current (half sine wave)    3280A          2620A
Repetitive peak off-state voltage         2500V          2500V
Repetitive peak reverse voltage           2500V          2500V
Rated SSOA voltage Vthy,ssoa              836V           836V
Turn-off time tq                          50µs           50µs
On-state threshold voltage VT             1.3V           1.5V
On-state slope resistance rT              0.15mΩ         0.163mΩ
MCC losses and number of devices at different switching
frequencies using different fast thyristors
Figure B.2. Comparison between MCC losses and conventional VSC losses using
different fast thyristors in MCC cycloconverter at different switching frequencies for
the MCC.
Figure B.3. Comparison between the number of MCC devices and conventional VSC
devices using different fast thyristors in MCC cycloconverter at different switching
frequencies for the MCC.
Edgemoor, DE Prealgebra Tutor
Find an Edgemoor, DE Prealgebra Tutor
...When people think of chemistry, they get caught up in numbers and complex science, but when the relationships between materials are shown, it all fits together like a puzzle. When it "clicks,"
it's so incredibly useful in daily life. This is the primary technique of how I get my students to realize the cool things about chemistry and how to remember them.
14 Subjects: including prealgebra, chemistry, physics, geometry
...My major is in Spanish, but I want to be of help in more fields than just one. I can tutor in a few different subjects, but I center most around subjects taught in the elementary school,
especially math which is an area for which I have a special knack. When working with kids, I start by asking them what they need help on.
13 Subjects: including prealgebra, Spanish, grammar, algebra 1
...I have been tutoring students from elementary age to adult on a daily basis for more than 15 years. Individualized support for a student is the most effective and efficient way to gain
confidence and mastery in any subject. I can help you with that!
23 Subjects: including prealgebra, reading, writing, geometry
...I have also taught science and social studies to 5th grade. Before I began teaching in the public school system I taught private preschool and kindergarten for seven years. I enjoy including
technology in my daily instruction to help the student succeed.
15 Subjects: including prealgebra, reading, English, geometry
...I taught Physical Science courses at two High Schools for over three years and was certified by the state of PA to teach it. I have tutored Physical Science to 12 students outside of my school
district. I have taken a full year of Physics, a year of physical chemistry and Physical Organic Chemistry and have a masters degree in Chemistry.
26 Subjects: including prealgebra, chemistry, GRE, biology
Homework Help
Posted by Brooke on Sunday, February 17, 2008 at 2:54pm.
(5t - 9)/4 = 14
if you understand the question please answer!!
• pre- algebra - Joshua, Sunday, February 17, 2008 at 2:57pm
Do you have trouble finding t?
• pre- algebra - Brooke, Sunday, February 17, 2008 at 2:59pm
yeah all i need to find is t. it is really confusing me. can you please help me find what t is? if so thanks a lot
• pre- algebra - Joshua, Sunday, February 17, 2008 at 3:07pm
(5t - 9)/4 = 14
5t - 9 = 14*4    multiply both sides by 4; the 4 on the left side cancels.
5t - 9 = 56      add 9 to both sides; the -9 on the left side cancels.
5t = 65          divide both sides by 5
t= 13
• pre- algebra - Brooke, Sunday, February 17, 2008 at 3:09pm
thanks a million!!
• pre- algebra - Guido, Sunday, February 17, 2008 at 6:52pm
Joshua did a great job!
I just want to say that to solve for a variable (any variable) means to isolate the variable on one side of the equation.
Do you see, as Joshua said, that
t = 13?
Do you see that the letter or variable t has been isolated on one side of the equation?
That means Joshua solved the equation for t.
To find out if Joshua is correct (and he is), replace t with 13 in the original equation given and simplify.
If you get the SAME answer on both sides of the equation, then you will know FOR SURE that t = 13.
Got it?
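Guido's substitution check can also be done mechanically; here is a short Python sketch (not part of the original thread):

```python
# Substitute t = 13 back into the original equation (5t - 9)/4 = 14.
t = 13
left = (5 * t - 9) / 4
print(left == 14)  # True, so t = 13 is correct
```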
Related Questions
Calculus (Related Rates) - The position of a particle moving in a straight line ...
math check - Simplify 1) (5t)(5t)^3 My answer: 5t^15 Write the number without ...
algebra help - (t^3-3t^2+5t-6)divided by (t-2) t^2 t-2! t^3-3t^2+5t-6 t^3-2t^2 ...
Basic Math - I forgot how to solve for two variables... Please check if I solved...
math - If r=t/2 and t=w/2, what is (r+w) in terms of t? (A)t (B)t^2 (C)3t/2 (D)...
Intermediate Algebra - Multiply.(2/5t-1)(3/5t+1)
algebra - 5t-10t^2÷5t
algebra - 5t-10t^2÷5t
Math - Exponential - Please check if my answers are correct! Thank you. ...
Algebra - (12x + 5t)-(14t-6x) 12x + 5t - 14t -6x ( I get lost here when the -6x ... | {"url":"http://www.jiskha.com/display.cgi?id=1203278068","timestamp":"2014-04-20T21:58:29Z","content_type":null,"content_length":"9929","record_id":"<urn:uuid:7765b947-b7b6-4efe-8e4d-6272729f4376>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
Glock Talk - View Single Post - Performance Criteria Based Caliber Design
Originally Posted by
If you had an arbitrary set of specifications, such as
"One inch expansion and a total of 13" Penetration through calibrated ballistics gel"
Is there a formula to calculate the ratios of diameters, masses, weights, and velocities that would work to achieve that performance, assuming the metallurgy knowledge was there to tune the HP to
expand predictably and consistently?
Obviously the more the round expands the more it experiences drag (like a parachute) and the more quickly it loses velocity in tissue which can lead to less penetration... increasing overall velocity
may compensate somewhat assuming the bullet doesn't over expand or tear itself apart, so maybe increased mass (and therefore momentum / inertia) would be superior in achieving the depth of
penetration with that kind of expansion. The bullet would have to be designed and tuned to those tolerances and particular velocity ranges.
Basically if you were to come up with a novel cartridge design from a clean sheet (not just a wildcat unless that would meet your criteria) just as a thought exercise, how would one go about
calculating it so that it's in the ballpark?
Here is another book, Bullet Penetration, that contains the formulas (or
formulae, if you like), where you must first find them and then put them
into more usable form.
There are also lots of examples (two whole chapters worth) that will help you use the equations, too.
From the website:
QUANTITATIVE AMMUNITION SELECTION presents a mathematical model that allows armed professionals and lawfully-armed citizens to evaluate the terminal ballistic performance of self-defense ammunition
using water as a valid ballistic test medium.
Based upon a modified fluid dynamics equation that correlates highly (r = +0.94) to more than 700 points of manufacturer- and laboratory-test data, the quantitative model allows the use of water to
generate terminal ballistic test results equivalent to those obtained in calibrated ten percent ordnance gelatin.
The quantitative model accurately predicts the permanent wound cavity volume and mass, terminal penetration depth, and exit velocity of handgun projectiles as these phenomena would occur in
calibrated ten percent ordnance gelatin and soft tissue.
The quantitative model is concisely explained using plain language and illustrated with clearly presented computational examples that provide guidance in every aspect of the model's application.
Besides including a variable for the density of soft tissue, the quantitative model employs a material strength variable within its governing expression that allows for the computational evaluation
of any type of soft tissue. Within a confidence interval of 95%, the quantitative model predicts the terminal penetration depth of projectiles in calibrated ordnance gelatin with a margin of error of
one centimeter.
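The book's actual governing equation is not reproduced above, but as a rough illustration of the kind of calculation involved, here is the classical Poncelet resistance model (an assumption on my part, not the book's model), which has a closed-form penetration depth:

```python
import math

# Toy Poncelet-style model (NOT the book's model): the medium resists with
# force a + b*v^2, so m*v*dv/dx = -(a + b*v^2). Integrating until the
# projectile stops gives:
#   D = (m / (2*b)) * ln(1 + (b/a) * v0^2)
def penetration_depth(m, a, b, v0):
    return (m / (2 * b)) * math.log(1 + (b / a) * v0 ** 2)

# Purely illustrative numbers: the medium constants a and b are made up,
# so the resulting depth carries no real-world meaning.
print(penetration_depth(m=0.008, a=40.0, b=0.002, v0=350.0))
```

The design point is the same one raised in the question: expansion increases the drag terms, which shrinks the depth unless mass or velocity compensates.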
There is also a couple of models near the end of the book that can be used to calculate penetration through clothing and sheet steel panels. (Very easy to use, too. | {"url":"http://www.glocktalk.com/forums/showpost.php?p=19909808&postcount=8","timestamp":"2014-04-21T04:53:04Z","content_type":null,"content_length":"24302","record_id":"<urn:uuid:fbef3d46-b2f8-4ae1-87ef-2d5271515b78>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
MrMathematica * Zhu ChongKai
MrMathematica allows you to call _Mathematica_ from Scheme.
> MathKernel : [byte-string*] -> MathLink
Load the Mathematica kernel and return a _MathLink_. The arguments
form the command used to open the MathLink connection. If none are
given, MrMathematica uses #"-linkname" #"math -MathLink" as the
default, which in general will open the Mathematica kernel. See the
Mathematica Book 2.13.14 "Running Mathematica from Within an External
Program" for more about this. The return value, a MathLink, is a
value that identifies the Mathematica kernel. It can be saved to a
variable and later used to distinguish between different Mathematica
kernels.
> MathEval : S-exp [MathLink] -> S-exp
Use the Mathematica kernel to evaluate an expression. Write the S-exp
in Scheme style and it will be translated to Mathematica style
automatically. Only numbers, booleans, symbols, strings, voids, and
non-empty lists are allowed in the input; otherwise
exn:application:contract is raised.
The optional argument, MathLink, specifies which Mathematica kernel
does the computation. If no MathLink is given, MrMathematica
automatically chooses the most recently opened (live) MathLink or, if
no open MathLink is available, creates one by calling MathKernel with
no arguments. So, if you only want to use one Mathematica kernel at a
time, just call MathEval with one argument; you need not even know
that MathKernel exists.
> MathExit : [MathLink] -> void
Close the Mathematica kernel. If no MathLink is given, MrMathematica
automatically closes the most recently opened (live) MathLink. Avoid
using a closed MathLink; otherwise an error is raised.
> MathLink? : exp -> boolean
Check whether the argument is a MathLink.
> living-MathLink? : exp -> boolean
Check whether the argument is a living MathLink.
Translation between Scheme and Mathematica:
An S-exp such as '(f x y) will be translated to f[x,y] and sent to the
Mathematica kernel. The return expression from Mathematica will be
translated back into Scheme. Besides that, MrMathematica also uses the
following dictionary to translate function names:
'((* . Times)
(- . Minus)
(+ . Plus)
(/ . Divide)
(< . Less)
(<= . LessEqual)
(= . Equal)
(> . Greater)
(>= . GreaterEqual)
(abs . Abs)
(acos . ArcCos)
(and . And)
(angle . Arg)
(asin . ArcSin)
(atan . ArcTan)
(begin . CompoundExpression)
(ceiling . Ceiling)
(cos . Cos)
(denominator . Denominator)
(exp . Exp)
(expt . Power)
(floor . Floor)
(gcd . GCD)
(if . If)
(imag-part . Im)
(lcm . LCM)
(list . List)
(log . Log)
(magnitude . Abs)
(max . Max)
(min . Min)
(modulo . Mod)
(negative? . Negative)
(not . Not)
(number? . NumberQ)
(numerator . Numerator)
(or . Or)
(positive? . Positive)
(quotient . Quotient)
(rationalize . Rationalize)
(round . Round)
(sin . Sin)
(sqrt . Sqrt)
(string-length . StringLength)
(tan . Tan)
(truncate . IntegerPart))
The translation table is defined in "translation.ss". If you just
want no translation, change this file so that it provides the
identity function.
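The S-exp-to-Mathematica translation itself is mechanical. Here is a
minimal sketch of the idea in Python (this is NOT MrMathematica's
actual Scheme code, and the table is a small excerpt):

```python
# Sketch of the S-exp -> Mathematica translation: prefix form (f x y)
# becomes F[x, y], with head symbols renamed via the dictionary.
TABLE = {'+': 'Plus', '*': 'Times', 'expt': 'Power', 'sin': 'Sin'}

def to_mathematica(expr):
    if isinstance(expr, list):              # (f x y ...) -> F[x, y, ...]
        head = TABLE.get(expr[0], expr[0])
        args = ', '.join(to_mathematica(a) for a in expr[1:])
        return f'{head}[{args}]'
    return str(expr)                        # atoms pass through unchanged

print(to_mathematica(['expt', 2, 10]))         # Power[2, 10]
print(to_mathematica(['+', 1, ['sin', 'x']]))  # Plus[1, Sin[x]]
```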
There are some other functions that are similar in Mathematica and
Scheme. As needed, you can add further translation rules to the
table. Here I list some such function pairs:
append Join
apply Apply
build-list Array
car First
cdr Rest
collect-garbage Share
compose Composition
cond Which
cons Prepend
copy-file CopyFile/CopyDirectory
current-directory Directory/SetDirectory
current-memory-use MemoryInUse
current-process-milliseconds TimeUsed
current-seconds AbsoluteTime
define Set
delay Hold/Unevaluated
delete-directory DeleteDirectory
delete-file DeleteFile
directory-list FileNames
display Print
even? EvenQ
exit Exit/Quit
file-or-directory-modify-seconds FileDate/SetFileDate
file-size FileByteCount
filter Select
fluid-let Block
foldl Fold
for-each Scan
force ReleaseHold/Evaluate
getenv Environment
identity Identity
integer? IntegerQ
lambda Function
length Length
let Module
list-ref Part
list-tail Drop
map Map
make-directory CreateDirectory
member/memq/memv MemberQ
nand Nand
nor Nor
odd? OddQ
pair? AtomQ
read Input
rename-file-or-directory RenameFile/RenameDirectory
reverse Reverse
shell-execute Run/RunThrough
sleep Pause
string->symbol Symbol
string-append StringJoin
symbol->string SymbolName
system-type $System
time Timing
version $Version
zero? ZeroQ
Notice that they are not identical, so some of these rules would need
to be conditional. See the default rule for '- for details.
Free Fall Model
written by Andrew Duffy
The Free Fall model allows the user to examine the motion of an object in freefall. This is simply one-dimensional motion (vertical motion) under the influence of gravity.
The Free Fall model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_freefall.jar file will run
the program if Java is installed.
Please note that this resource requires at least version 1.5 of Java (JRE).
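The physics behind the model is just constant-acceleration kinematics; a minimal sketch (independent of the EJS/Java implementation):

```python
# Constant-acceleration kinematics for free fall (no air resistance):
#   y(t) = y0 + v0*t - (1/2)*g*t^2,   v(t) = v0 - g*t
g = 9.8  # m/s^2

def free_fall(y0, v0, t):
    """Return (position, velocity) after time t."""
    y = y0 + v0 * t - 0.5 * g * t * t
    v = v0 - g * t
    return y, v

y, v = free_fall(y0=20.0, v0=0.0, t=1.0)
print(f"y = {y:.2f} m, v = {v:.2f} m/s")  # y = 15.10 m, v = -9.80 m/s
```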
Free Fall Model Source Code
The source code zip archive contains an XML representation of the Free Fall model. Unzip this archive in your EJS workspace to compile and run this model…
more... download 4kb .zip
Published: April 25, 2010
Subjects
- Classical Mechanics
= Motion in One Dimension: Acceleration, Gravitational Acceleration, Position & Displacement, Velocity
Levels
- Middle School, High School, Lower Undergraduate, Upper Undergraduate
Resource Types
- Instructional Material
= Curriculum support
= Interactive Simulation
- Audio/Visual
= Movie/Animation
Intended Users
- Educators
- Learners
Formats
- application/java
Access Rights:
Free access
This material is released under a GNU General Public License Version 3 license.
Rights Holder:
Andrew Duffy, Boston University
EJS, Easy Java Simulations, acceleration, free fall, free fall simulation, gravity, position, position vs. time, velocity, velocity vs. time
Record Cloner:
Metadata instance created April 27, 2010 by Mario Belloni
Record Updated:
January 28, 2014 by Caroline Hall
Last Update
when Cataloged:
April 16, 2010
Other Collections:
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4B. The Earth
• 6-8: 4B/M3. Everything on or anywhere near the earth is pulled toward the earth's center by gravitational force.
4G. Forces of Nature
• 9-12: 4G/H1. Gravitational force is an attraction between masses. The strength of the force is proportional to the masses and weakens rapidly with increasing distance between them.
11. Common Themes
11B. Models
• 6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast,
too complex, or too dangerous to study.
• 6-8: 11B/M2. Mathematical models can be displayed on a computer and then modified to see what happens.
Next Generation Science Standards
Crosscutting Concepts (K-12)
Patterns (K-12)
• Graphs and charts can be used to identify patterns in data. (6-8)
Science and Engineering Practices (K-12)
Analyzing and Interpreting Data (K-12)
• Analyzing data in 9–12 builds on K–8 and progresses to introducing more detailed statistical analysis, the comparison of data sets for consistency, and the use of models to generate and analyze
data. (9-12)
□ Analyze data using computational models in order to make valid and reliable scientific claims. (9-12)
Developing and Using Models (K-12)
• Modeling in 6–8 builds on K–5 and progresses to developing, using and revising models to describe, test, and predict more abstract phenomena and design systems. (6-8)
□ Develop and use a model to describe phenomena. (6-8)
• Modeling in 9–12 builds on K–8 and progresses to using, synthesizing, and developing models to predict and show relationships among variables between systems and their components in the natural
and designed worlds. (9-12)
□ Use a model to provide mechanistic accounts of phenomena. (9-12)
Science Models, Laws, Mechanisms, and Theories Explain Natural Phenomena (2-12)
• Models, mechanisms, and explanations collectively serve as tools in the development of a scientific theory. (9-12)
Using Mathematics and Computational Thinking (5-12)
• Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including trigonometric
functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are created and used based on
mathematical models of basic assumptions. (9-12)
□ Create or revise a simulation of a phenomenon, designed device, process, or system. (9-12)
□ Use mathematical or computational representations of phenomena to describe explanations. (9-12)
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
High School — Algebra (9-12)
Creating Equations^? (9-12)
• A-CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential
Reasoning with Equations and Inequalities (9-12)
• A-REI.3 Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
High School — Functions (9-12)
Linear, Quadratic, and Exponential Models^? (9-12)
• F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
<a href="http://www.compadre.org/OSP/items/detail.cfm?ID=10001">Duffy, Andrew. "Free Fall Model."</a>
A. Duffy, Computer Program FREE FALL MODEL (2010), WWW Document, (http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639).
A. Duffy, Computer Program FREE FALL MODEL (2010), <http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639>.
Duffy, A. (2010). Free Fall Model [Computer software]. Retrieved April 19, 2014, from http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639
Duffy, Andrew. "Free Fall Model." http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639 (accessed 19 April 2014).
Duffy, Andrew. Free Fall Model. Computer software. 2010. Java (JRE) 1.5. 19 Apr. 2014 <http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639>.
@misc{ Author = "Andrew Duffy", Title = {Free Fall Model}, Month = {April}, Year = {2010} }
%A Andrew Duffy
%T Free Fall Model
%D April 16, 2010
%U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639
%O application/java
%0 Computer Program
%A Duffy, Andrew
%D April 16, 2010
%T Free Fall Model
%8 April 16, 2010
%U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639
Related Materials
Similar Materials | {"url":"http://www.compadre.org/OSP/items/detail.cfm?ID=10001&Attached=1","timestamp":"2014-04-19T09:37:03Z","content_type":null,"content_length":"50499","record_id":"<urn:uuid:05e4e1c1-40f8-4ef4-bb1a-88421034eba5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R-sig-finance] VAR, VECM, Kalman, ... non-R software recommendations?
Jeffrey Todd Lins jtl at saxobank.com
Sat Aug 21 00:25:02 CEST 2004
That reminds me, Dirk.
Brandon Whitcher, who wrote the waveslim package, co-authored a book on filters (mostly on wavelets) with Gencay and Selcuk. They used an example of a structural TS model for time-varying betas I think, in their chapter on the Kalman.
Anyway, maybe Brandon Whitcher has made some R code too.
-----Original Message-----
From: r-sig-finance-bounces at stat.math.ethz.ch
[mailto:r-sig-finance-bounces at stat.math.ethz.ch]On Behalf Of krishna
Sent: Saturday, August 21, 2004 2:36 AM
To: R-sig-finance at stat.math.ethz.ch
Subject: Re: [R-sig-finance] VAR, VECM, Kalman,... non-R software
I have mucked around with the kalman for estimating time-varying betas.
there was another interest
in cointegration stuff a few weeks back.
I will clean up my code, and put it up someplace.
One suggestion I have for R-SIGGERS is to have a place to post code,
like a repository.
Someplace like the Elsevier Computer Physics code repository,
for which you have to cough up $$$. The R-SIG repository should be free.
The idea is already in place for some econometric journals where you
have people uploading their data-sets and routines.
It would be nice to have a facility where one can upload the code with a
little blurb of what the routines are doing.
Any ideas? I am sure there is an opensource thingie that accepts code
and a little document and that allows users to rate/leave comments?
If anyone knows one let me know. We are going to see more and more of
"How do I do foo goo in R ?" or
"I know we can do boomoo in math$ but can you do it in R ?"
Just my 2 cents.
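[Editor's note: for readers curious about the time-varying-beta setup mentioned above, here is a minimal scalar Kalman filter sketch. It is in Python rather than R, it is not the poster's code, and the noise variances q and r are made-up values.]

```python
# Scalar Kalman filter for r_t = beta_t * x_t + noise, where beta_t
# follows a random walk. Illustrative only.
def kalman_beta(returns, factor, q=1e-4, r=1e-2):
    beta, P = 0.0, 1.0                 # state estimate and its variance
    estimates = []
    for y, x in zip(returns, factor):
        P += q                         # predict: random-walk variance grows
        S = x * P * x + r              # innovation variance
        K = P * x / S                  # Kalman gain
        beta += K * (y - beta * x)     # correct with the prediction error
        P *= 1 - K * x
        estimates.append(beta)
    return estimates

betas = kalman_beta([0.5, 1.1, 0.9, 2.1], [1.0, 1.0, 1.0, 2.0])
print(betas[-1])  # the estimate drifts toward the beta implied by the data
```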
Jeffrey Todd Lins wrote:
>Hi Dirk,
>Yes, in stats there is a set of Kalman filter routines and you can use optim for likelihood estimation.
>I have used it for some state space modeling. There is a chapter in Zivot and Wang's book on the topic as well.
>In addition initializing in the KF may be an important consideration - see Harvey and/or Durbin and Koopman.
>I have never really used DSE for VAR, ended up writing the code elsewhere, outside of R, but you could look at gretl,
>which is available under GNU GPL and written in C, it contains quite a few bits and pieces, I am assuming you can get the source.
>-----Original Message-----
>From: r-sig-finance-bounces at stat.math.ethz.ch
>[mailto:r-sig-finance-bounces at stat.math.ethz.ch]On Behalf Of Pijus
>Sent: Friday, August 20, 2004 7:37 PM
>To: Dirk Eddelbuettel
>Cc: R-sig-finance at stat.math.ethz.ch
>Subject: RE: [R-sig-finance] VAR, VECM, Kalman,... non-R software
>Dear Dirk,
>As far as my personal experience goes, I needed to estimate such models
>some time ago, when the R toolkit for this sort of thing was still
>almost empty, so I chose to invest in STATA: it provides a fairly
>complete set of functions to estimate VAR, SVAR and (as of two months
>ago) VECM models, validate their results and stability, and calculate
>all the frequently-needed derivatives, such as the MA forms (i.e. IRFs,
>SIRFs, ...), etc. For what it's worth, I chose STATA over many other
>contenders in the field because it seemed to have some of those R-like
>pro-active qualities, like frequent updates, knowledgeable and involved
>users, and accessible developers (to which I can personally attest after
>running into a couple of bugs in the early SVAR code). The R-STATA
>intercommunication is made possible by the foreign package, batch modes,
>and good old ASCII. ;) STATA programming is a bit laborious, so I always
>only farm out the absolute minimum to it, and do the remainder in R. As
>you said, STATA let me "hit the ground running", and is really not a bad
>Of course, today R's own arsenal for time-series econometrics is shaping
>up fast as well. Most significantly, there is now the CRAN urca package
>by Bernhard Pfaff: it provides the means to estimate VECM models (both
>the transitory and long-term flavours) and Johansen's co-integration
>tests built on top them. Sadly, VAR/SVAR and associated battery of
>helper functions are still not available, as far as I am aware.
>As for the Kalman filter, there is the Kalman... family of functions in
>stats: perhaps that's a good place to start? Sadly, I have not yet had a
>chance to use space-state models in a proper project, so my knowledge of
>the available tools and their relative capabilities is modest. Also, if
>you can get to it, R. Carmona's neat book "Statistical Analysis of
>Financial Series in S-Plus" (Springer, 2004) has a few sections
>(6.2-6.7) on state-space models and Kalman filtering thereof (S code
>included), with applications to finance.
>>-----Original Message-----
>>From: r-sig-finance-bounces at stat.math.ethz.ch
>>[mailto:r-sig-finance-bounces at stat.math.ethz.ch] On Behalf Of
>>Dirk Eddelbuettel
>>Sent: Friday, August 20, 2004 12:49 PM
>>To: R-sig-finance at stat.math.ethz.ch
>>Subject: [R-sig-finance] VAR, VECM, Kalman,... non-R software
>>I've been asked to run some 'modern' regressions: vector
>>vector error correction, kalman filter, ...
>>Of course, I'd love to do that in R and will probably end up
>>writing some
>>code for it, but as the platitude goes, I 'need to hit the
>>ground running'.
>>Last time I looked at Paul Gilbert's dse bundle, it promised
>>most of this,
>>but felt somewhat cumbersome.
>>Does anybody here have any particular recommendations, and in
>>warnings about software like EViews, Rats, ... in this context ?
>>Thanks in advance, Dirk
>>Those are my principles, and if you don't like them... well,
>>I have others.
R-sig-finance at stat.math.ethz.ch mailing list
More information about the R-sig-finance mailing list | {"url":"https://stat.ethz.ch/pipermail/r-sig-finance/2004q3/000074.html","timestamp":"2014-04-20T23:36:13Z","content_type":null,"content_length":"10636","record_id":"<urn:uuid:1c4bff57-8030-45d4-a585-ce323cd08097>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rome, GA
Find a Rome, GA Precalculus Tutor
...I love Mr. S. and I wish he would teach all the Physics classes in my future." [Logan]"Learning Physics teaches you how to think. Mr.
19 Subjects: including precalculus, physics, GED, SAT math
...Tutoring Approach: I am a systematic, organized and patient tutor. I have the passion to help students achieve their potential through understanding their needs, evaluating and enhancing their
core skills. Basically I teach the topics through doing the related problem exercises.
15 Subjects: including precalculus, calculus, geometry, algebra 1
...I recognize and accept that each child has their own learning style. I will individualize my tutoring approach based on your child's needs. I pride myself in being very patient and maintaining
an upbeat atmosphere of encouragement.
47 Subjects: including precalculus, chemistry, English, physics
...Linux and Apple) systems. I have restored files and rebuilt more operating systems than I can count. These days I work via desktop or laptop with Windows-based OS predominantly, but
increasingly find myself working across other platforms such as tablets and smart phones and with other OS including Droid and Mac OS and on cross-platform compatibility issues.
126 Subjects: including precalculus, chemistry, English, calculus
...If you need any help just send me a message. :)I have been a pianist since I was 10 years old, therefore I have been playing for 14 years. I attended college on a scholarship for piano
pedagogy (teaching) and continued to study to received a minor in piano pedagogy. I have taken classes that emphasize methods and techniques of teaching piano to any age.
9 Subjects: including precalculus, reading, chemistry, biology
Related Rome, GA Tutors
Rome, GA Accounting Tutors
Rome, GA ACT Tutors
Rome, GA Algebra Tutors
Rome, GA Algebra 2 Tutors
Rome, GA Calculus Tutors
Rome, GA Geometry Tutors
Rome, GA Math Tutors
Rome, GA Prealgebra Tutors
Rome, GA Precalculus Tutors
Rome, GA SAT Tutors
Rome, GA SAT Math Tutors
Rome, GA Science Tutors
Rome, GA Statistics Tutors
Rome, GA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Acworth, GA precalculus Tutors
Armuchee precalculus Tutors
Austell precalculus Tutors
Calhoun, GA precalculus Tutors
Canton, GA precalculus Tutors
Cartersville, GA precalculus Tutors
Doraville, GA precalculus Tutors
Forest Park, GA precalculus Tutors
Hiram, GA precalculus Tutors
Kennesaw precalculus Tutors
Lindale, GA precalculus Tutors
Shannon, GA precalculus Tutors
Silver Creek, GA precalculus Tutors
Union City, GA precalculus Tutors
Villa Rica, PR precalculus Tutors | {"url":"http://www.purplemath.com/rome_ga_precalculus_tutors.php","timestamp":"2014-04-16T10:47:59Z","content_type":null,"content_length":"23866","record_id":"<urn:uuid:1c6c1b4f-7690-48d7-be1b-f3ddfef24fdf>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
f(Z×Z) = N?
At the bottom of http://www-math.mit.edu/~poonen, Professor Poonen poses an
Open Question: Let Z be the set of integers, and let N be the set of nonnegative integers. Is there a polynomial f(x,y) such that f(Z×Z) = N?
Related Mathematics
Polygonal Numbers
One way to view a polynomial in two variables is as an extension of the Fermat polygonal number theorem (which covers sum-of-four-squares and sum-of-three-triangular-numbers) to two-sided inputs.
M. B. Nathanson's A Short Proof of Cauchy's Polygonal Number Theorem defines polygonal numbers as
for positive m and nonnegative k. The theorem is
Every nonnegative integer is the sum of m+2 polygonal numbers of order m+2.
When m=0, u[0](k)=k. And with nonnegative k we trivially have f(N×N)=N when f(x,y)=x+y. Even simpler is f(N)=N when f(x)=x.
But the open question is for inputs both positive and negative. With positive m, u[m](k) is always nonnegative; so f(Z×Z×Z)=N and higher degrees follow. But with u[0] being the identity function,
f produces negative integers.
Because the proof is all about quadratics (as opposed to the two-sided polygonal numbers being linear), and because the number of polygonal-numbers required jumps from 3 to 1 when m goes from 1
to 0, the proof of Cauchy's Polygonal Number Theorem is unlikely to shed any light on the proposition in question.
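The m=1 case of the theorem (every nonnegative integer is a sum of three triangular numbers, i.e. three polygonal numbers of order 3) is easy to spot-check numerically:

```python
# u_m(k) = m*k*(k-1)/2 + k, the k-th polygonal number of order m+2.
def u(m, k):
    return m * k * (k - 1) // 2 + k

def sum_of_three_triangulars(n):
    tri = [u(1, k) for k in range(n + 2) if u(1, k) <= n]  # includes 0
    s = set(tri)
    return any((n - a - b) in s for a in tri for b in tri if a + b <= n)

print(all(sum_of_three_triangulars(n) for n in range(200)))  # True
```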
Quadratic Forms
J. H. Conway, in Universal Quadratic Forms and the Fifteen Theorem writes:
The 15-theorem closes the universality problem for integer-matrix forms by providing an extremely simple criterion. We no longer need a list of universal quaternaries, because a form is
universal provided only that it represent the numbers up to 15. Moreover, this criterion works for larger numbers of variables, where the number of universal forms is no longer finite. (It is
known that no form in three or fewer variables can be universal.)
The parenthetical remark at the end seems to imply that the degrees of x and y in f(x,y) must be greater than 2. But Professor Poonen points out that quadratic form means "homogeneous polynomial
of degree 2"; and his question is not restricted to homogeneous polynomials.
Fermat's Last Theorem
Fermat's Last Theorem states:
If an integer n is greater than 2, then the equation a^n+b^n=c^n has no solutions in nonzero integers a, b, and c.
If f is a sum of two nth (n > 2) powers whose bases never take the value zero on Z×Z, then Fermat's Last Theorem directs that f is not surjective, because no input pair can generate c^n for nonzero integer c; in particular, f never attains 1. This rules out sums of two powers (> 2) of never-vanishing polynomials with no "net" constant term. For example: (x^2+y^2+1)^3 + (2 x y-1)^3, whose first base is at least 1 and whose second base is always odd.
If f exists, then there must be an argument pair x[0],y[0] such that f(x[0],y[0])=0. Consider the polynomial g(x,y)=f(x+x[0],y+y[0]). Because g(0,0)=0, the constant term of g(x,y) must be 0. Without
loss of generality for the rest of this treatment, we will assume that f(0,0)=0.
The requirement that only nonnegative numbers are produced by f is stringent. Suppose the degree of x in f is odd. If we let y take a value which does not extinguish the leading x term of f, then
sufficiently large magnitude of x will dominate any other terms. For sufficiently large magnitudes, that leading term will change sign depending on the sign of x. Thus the degree of x in f must be
even. Similarly, the degree of y in f must also be even.
Consider f(x,0). All terms in this polynomial involve x only. By the previous reasoning, the degree of x in f(x,0) must be even. Similarly, the degree of y in f(0,y) must also be even.
What if the degree of x or y in f(x,y) is zero? Then we simply have a polynomial in one variable (the variable whose degree is not zero).
Lemma 1: There is no polynomial f(x) which is surjective from the integers (Z) to the nonnegative integers (N).
The degree of x in f must be even. For degree 0, f would be a constant, having only a single value, which is not surjective onto N. For even degree k≥2 and sufficiently large x, f(x) is
greater than every value f(0), ..., f(x-1); f is monotonically increasing; and the difference between f(x+1) and f(x) is O(x^(k-1)). Because f is monotonically increasing, it never returns to the
values between f(x) and f(x+1). The values of f(-x) similarly become monotonically increasing and spread further and further apart. Thus f is not surjective onto N.
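As a quick numerical illustration of the lemma (a sketch; the sample range of 201 integers is an arbitrary choice, not part of the proof), the values of the even-degree polynomial f(x)=x^2 thin out, leaving ever-wider gaps:

```python
# Tabulate f(x) = x^2 over a sample range and measure the gaps between
# consecutive distinct values; the gaps grow, so f misses most of N.
values = sorted({x * x for x in range(-100, 101)})
gaps = [b - a for a, b in zip(values, values[1:])]
print(values[:6])  # [0, 1, 4, 9, 16, 25]
print(max(gaps))   # 199: the gap between 99^2 and 100^2
```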
Thus the degrees of x and y in f(x,y) must be even and not less than 2.
Corollary 1: If a polynomial f(x,y) can be rewritten as the composition of an even degree polynomial p with an integer-valued polynomial g(x,y), then f is not surjective from the integers to the
nonnegative integers.
The set of all possible values returned by g is a subset of the integers. By Lemma 1, p cannot be surjective onto N; thus f(x,y)=p(g(x,y)) cannot be surjective onto N.
What about the total-degree of f? If f exists with an odd total-degree of D, there may be more than one term with total-degree of D (e.g. 3x^2y+y^2x). Consider h(x)=f(x,cx), where c is a nonzero
constant chosen (e.g. 1 in the example) so that the degree of x in h(x) is D, the total-degree of f. Then h(x) is a polynomial of odd degree and takes on negative values for some x. Thus, the
total-degree of f must be even.
This establishes that the total-degree of f must be even and no smaller than 2.
f(x,y)=x^4+y^4 grows very quickly with x and y. Its shape is that of a rectangular vase. This makes it easy to find the bands of equal height.
Let b be a positive integer. There are b^2 x,y pairs in the region 0≤x<b, 0≤y<b. Because f(x,y) is symmetrical, f produces no more than (b^2+b)/2 values when applied to these x,y pairs.
The polynomial f applied to these x,y pairs, along with the pairs -x,y, x,-y, and -x,-y, produces the only values of f(x,y) less than b^4. But for b≥2, (b^2+b)/2 is less than b^4. Thus, f(x,y)=x^4+y^4
does not produce integers densely enough to cover the natural numbers.
This sort of counting also works for f(x,y)=x^2+y^2. The polynomial f applied to the (b^2+b)/2 unique x,y combinations produces the only values of f(x,y) falling in the range 0 to b^2. For b≥2,
(b^2+b)/2 is less than b^2. Thus, f(x,y)=x^2+y^2 does not produce integers densely enough to cover the natural numbers.
In both these cases the (b^2+b)/2 unique x,y combinations included pairs whose projection through f was larger than b^4 and b^2, respectively. But the source count was less than the output range even
with these extra pairs.
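These counts are easy to sanity-check numerically (a sketch; b=50 is an arbitrary choice): the number of distinct outputs stays at or below (b^2+b)/2, far short of the target ranges b^4 and b^2.

```python
# Count distinct outputs of each polynomial below the stated threshold.
# By symmetry it suffices to scan 0 <= x, y < b for the value set.
b = 50

quartic = {x**4 + y**4 for x in range(b) for y in range(b)}
print(len([v for v in quartic if v < b**4]), "distinct values of x^4+y^4 below", b**4)

quadratic = {x * x + y * y for x in range(b) for y in range(b)}
print(len([v for v in quadratic if v < b * b]), "distinct values of x^2+y^2 below", b * b)
```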
There is an important difference between the x^4+y^4 and x^2+y^2 cases. O(x^4+y^4)>O(xy); but the same cannot be said for O(x^2+y^2). Quadratic polynomials must be addressed separately from
polynomials of higher degree. The proof for higher degrees centers on counting arguments dubbed asymptotically-sparse.
Definition: Given a positive number H and a polynomial f(x,y), let C(H,f) be the count of integer x,y pairs satisfying 0<f(x,y)<H.
A polynomial f(x,y) is asymptotically-sparse if all values of f(x,y) are nonnegative and O(C(H,f))<O(H).
From the definition it follows that asymptotically-sparse polynomial functions do not produce integers densely enough to cover the natural numbers. The converse is not necessarily true; even though
both do not produce integers densely enough to cover the natural numbers, x^4+y^4 is asymptotically-sparse, while x^2+y^2 is not.
Polynomial functions can also fail to be asymptotically-sparse because they return negative values, or because they are not bivariate. For instance, f(x,y)=x^4 is not asymptotically-sparse because
there are an unbounded number of y values for which f(x,y)<H.
Note that the definition of asymptotically-sparse polynomial functions does not restrict them to be integer-valued.
Lemma 2: The product of an asymptotically-sparse polynomial function with any positive number is asymptotically-sparse.
Let H be a large positive number and f(x,y) be an asymptotically-sparse polynomial function. Let g(x,y)=rf(x,y) where r is a positive number. Any x,y pair satisfying 0<g(x,y)<H, also satisfies 0<
f(x,y)<H/r. Because O(H)=O(H/r), C(H,rf)<O(H); and rf(x,y) is asymptotically-sparse.
The reasoning about x^4+y^4 can be generalized to the case of identical polynomial functions of x and y. Let f(x,y)=u(x)+u(y) where u is a nonnegative integer polynomial function of even degree k≥4.
Let b be a positive integer large enough so that u(x≥b) and u(x≤-b) are monotonically increasing and the difference between u(x+1) and u(x) and the difference between u(-x) and u(-x-1) are O(x^(k-1))
for |x|≥b.
There are O(b^2) x,y pairs in the region 0≤x<b, 0≤y<b and its negative mirrors. The polynomial f applied to these x,y pairs produces the only values of f(x,y) which fall in the range 0 to O(b^k). But
for large b and k≥4, O(b^2) is smaller than O(b^k). Thus, f(x,y)=u(x)+u(y) of degree 4 or more is asymptotically-sparse.
This can be further generalized to the case of independent polynomial functions of x and y. Let f(x,y)=u(x)+v(y) where u and v are nonnegative integer polynomial functions of even degrees j and k
respectively, with j≥k, j≥4 and k≥2. The approach is to pick input ranges [0,b] and [0,c] which have the same output range when projected through u and v, respectively.
Let b and c be positive integers large enough so that:
• u(x≥b) and v(y≥c) are monotonically increasing;
• c is the largest value of y for which v(y) ≤ u(b);
• the difference between u(x+1) and u(x) is O(x^(j-1)) for x≥b; and
• the difference between v(y+1) and v(y) is O(y^(k-1)) for y≥c.
There are O(x^(1+j/k)) x,y pairs in the regions 0≤x<b, 0≤y<c and their negative images. The polynomial f applied to these x,y pairs produces the only values of f(x,y) which fall in the range 0 to
O(x^j). With j≥k, j≥4 and k≥2, O(x^(1+j/k)) is smaller than O(x^j). In particular:
• When j=4 and k=2, O(x^3) is smaller than O(x^4).
• For large j and k=2, O(x^(j/2)) is less than O(x^j).
• When j=k≥4, O(x^2) is smaller than O(x^j).
Thus, f(x,y)=u(x)+v(y) with total-degree of 4 or more is asymptotically-sparse.
Notice that this last case tightened the degree bounds. With the minimum total-degree of 4, one polynomial, v(y), can have degree 2. This leaves only the case where both u(x) and v(y) are
quadratic. The counting arguments used previously won't work here because half of the (b^2+b) x,y pairs can't be dismissed. Instead, count the integers generated over the range 0 to 11.
Both u(x) and v(y) must be positive semi-definite. f(x,y) must be 1 for some x,y; so either there is x[1] such that f(x[1],0)=u(x[1])=1 or there is y[1] such that f(0,y[1])=v(y[1])=1. Without
loss of generality, assume that f(x[1],0)=u(x[1])=1.
Because u(0)=0, we have u(x)=(ax^2+bx)/2 for some integers a and b. ax^2+bx≥0 for all x. Thus 0<|b|≤a. So that (ax^2+bx)/2 returns only integers, b must be even if and only if a is even. So that
(ax^2+bx)/2 returns 1, b=±(a-2). Thus x[1] is 1 or -1. Without loss of generality, let b=a-2 and x[1]=-1.
Because v(0)=0, we have v(y)=(cy^2+dy)/2 for some integers c and d. cy^2+dy≥0 for all y. Thus 0<|d|≤c. So that (cy^2+dy)/2 returns only integers, d must be even if and only if c is even.
If a>24, then (ax^2+bx)/2 has only two values less than 12: u(0)=0 and u(-1)=1.
If c>24, then (cy^2+dy)/2 has at most two values less than 12: v(0)=0 and v(1) or v(-1).
If both a>24 and c>24,
then f(x,y) generates at most four values between 0 and 11 (0, 1, v(±1), and v(±1)+1). Thus f is not surjective onto N.
If a>24 and c≤24 or a≤24 and c>24,
then the maximum number of values generated by f(x,y) between 0 and 11 is twice the number of values generated by v(y) or u(x), respectively. v(y) is the more general case. Of the 168
possible v(y) polynomials with c≤24, four generate the most integers, five each:
( 1 y^2 +1 y)/2: [5] #(0 1 ! 3 ! ! 6 ! ! ! 10 !)
( 2 y^2 +0 y)/2: [4] #(0 1 ! ! 4 ! ! ! ! 9 ! !)
( 2 y^2 +2 y)/2: [3] #(0 ! 2 ! ! ! 6 ! ! ! ! !)
( 3 y^2 +1 y)/2: [5] #(0 1 2 ! ! 5 ! 7 ! ! ! !)
( 3 y^2 +3 y)/2: [3] #(0 ! ! 3 ! ! ! ! ! 9 ! !)
( 4 y^2 +0 y)/2: [3] #(0 ! 2 ! ! ! ! ! 8 ! ! !)
( 4 y^2 +2 y)/2: [5] #(0 1 ! 3 ! ! 6 ! ! ! 10 !)
( 4 y^2 +4 y)/2: [2] #(0 ! ! ! 4 ! ! ! ! ! ! !)
( 5 y^2 +1 y)/2: [5] #(0 ! 2 3 ! ! ! ! ! 9 ! 11)
( 5 y^2 +3 y)/2: [4] #(0 1 ! ! 4 ! ! 7 ! ! ! !)
( 5 y^2 +5 y)/2: [2] #(0 ! ! ! ! 5 ! ! ! ! ! !)
Four of these polynomials generate five integers in the range 0 to 11. But twice 5 is less than 12, so f is not surjective onto N.
The last case is where a≤24 and c≤24.
Computer enumeration of these cases finds no surjection onto N. The longest run from 0 is 0 through 20, generated by: (3x^2+x+7y^2+y)/2.
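That enumeration result is easy to reproduce (a sketch; the ±200 search box is an arbitrary bound that comfortably contains all small values of f):

```python
# f(x,y) = (3x^2 + x + 7y^2 + y)/2 is integer-valued because x(3x+1) and
# y(7y+1) are always even.  Find how far the consecutive run from 0 extends.
def f(x, y):
    return (3 * x * x + x + 7 * y * y + y) // 2

hits = {f(x, y) for x in range(-200, 201) for y in range(-200, 201)}
n = 0
while n in hits:
    n += 1
print(n - 1)  # 20: the run 0..20 is covered, but 21 is never produced
```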
Prof. Poonen has suggested a simpler proof using quadratic reciprocity which rules out all quadratic polynomials.
Therefore there is no f(x,y) comprised of the sum of univariate polynomials u(x) and v(y) over the integers which is surjective onto the nonnegative integers.
Let f(x,y)=u(x)+v(y)+w(x,y) where u, v, and w are polynomial functions of even total-degrees j≥k>l, respectively, and j≥k≥4, and l≥2; and u and v return only nonnegative integers, w returns only
integers, and u(x)+v(y)+w(x,y) is nonnegative.
Because j≥k>l, u and v will dominate w for sufficiently large |x| and |y| along any trajectory ax+by=0. Hence the previous asymptotic argument holds that f is asymptotically-sparse.
The simplest bivariate polynomial with even degrees of x and y and total-degree of 4 is f(x,y)=x^2y^2. It consists of two perpendicular troughs intersecting at the origin. Bands of equal height map
from hyperbolas resulting from equating the height to x^2y^2.
The contour of height H is
• H=x^2y^2
• y^2=H/x^2
• y(H,x)=H^(1/2)/x
The number of nonzero integer x,y coordinates inside of this contour is asymptotically A(H), 4 times its area above y=1, which is 4 times the integral of (y(H,x)-1) dx from x=1 to H^(1/2), the
largest x for which |y(H,x)|≥1.
A(H) ≈ 4H^(1/2)(ln(H^(1/2))-ln(1)) = 2H^(1/2)ln(H)
A(H) does not grow as fast as H; so f(x,y)=x^2y^2 is asymptotically-sparse.
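A numeric sketch of this count (here H runs over a few powers of 10, an arbitrary illustrative choice) shows C(H,f)/H shrinking toward 0 for f=x^2y^2, consistent with asymptotic sparseness:

```python
import math

def count_pairs(H):
    # Integer pairs with 0 < x^2*y^2 < H, i.e. 1 <= |x*y| <= isqrt(H-1).
    root = math.isqrt(H - 1)
    total = sum(root // x for x in range(1, root + 1))  # pairs with x, y >= 1
    return 4 * total  # four sign combinations of (x, y)

ratios = []
for H in (10**4, 10**6, 10**8):
    c = count_pairs(H)
    ratios.append(c / H)
    print(H, c, round(c / H, 4))  # the ratio C(H,f)/H keeps falling
```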
This proof extends to all polynomials in which x^2y^2 is the only term having the highest total-degree.
Let f(x,y)=x^jy^k where j and k are even and j≥k≥2. The contour of height H is
• H=x^j y^k
• y^k=H x^(-j)
• y(H,x)=H^(1/k) x^(-j/k)
The number of nonzero integer x,y coordinates inside this contour is asymptotically its area, A(H), which is 4 times the integral of (y(H,x)-1) dx from x=1 to H^(1/j).
If j=k
A(H) = 4H^(1/j)(ln(H^(1/j))-ln(1)) = 4H^(1/j)ln(H)/j
If j>k
Because j>k, 1-j/k is negative and the indefinite integral of the hyperbolic part is hyperbolic. Therefore O(A(H))<O(H).
In both cases A(H) does not grow as fast as H; so f(x,y)=x^jy^k is asymptotically-sparse.
Rotating the troughs by 45° should not affect their asymptotic behavior. The contour of height H is
• H=(x-y)^2(x+y)^2
• H=(y^2-x^2)^2
• H^(1/2)=y^2-x^2
• y^2=H^(1/2)+x^2
• y(H,x)=(H^(1/2)+x^2)^(1/2)
Points along the x=y and x=-y diagonals have heights of 0. The number of non-diagonal integer x,y coordinates inside of this contour is asymptotically proportional to the area between y(1,x) and y(
H,x), a complicated expression which grows more slowly than H.
Sliding Blocks
Visualize f(x,y) as an infinite array of square tiles, each labeled with the value of f(x,y) at that x,y location. [Shown is the center of (7x^2+x+3y^2+y)/2.] If we slide a row of tiles an integral
number of positions left or right, it doesn't change the range of f. In fact, we can slide all rows by different amounts simultaneously by substituting x+q(y) for x. After doing so, we can slide all
columns by different amounts; then rows again...
Theorem 1: Let f(x,y) be a polynomial function of two integer variables and q(x) be an integer-valued polynomial function of one integer variable. The set of values returned by f(x,y) for all
integer pairs x,y is the same as the set of values returned by g(x,y)=f(x,y+q(x)) for all integer pairs x,y. The same is true for g(x,y)=f(x+q(y),y).
For any integer-valued polynomial function q(x), because x and y are independent, Q(x,y)=[x,y+q(x)] is a bijection from Z×Z to Z×Z; and f(x,y)=g(x,y-q(x)).
If f(x,y)=x^2y^2 and q(y)=y, then f(x+q(y),y)=x^2y^2+2xy^3+y^4, which is the same as x^2y^2 with row y shifted by an offset of -y. As with this example, linear q(y) polynomials
take homogeneous polynomials h(x,y) to homogeneous polynomials h(x+q(y),y) of the same total degree.
When q is not linear, the degree and homogeneity of polynomials are not preserved. For example, f(x,y)=x^4+y^4 and g(x,y)=f(x,y+x^2)=x^4+x^8+4x^6y+6x^4y^2+4x^2y^3+y^4. Notice that, although
f(x,y)=x^4+y^4 has been analyzed, g(x,y) is not covered by any of the cases analyzed above.
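The quoted expansion can be spot-checked numerically (a sketch using sample points rather than a symbolic computation):

```python
# Verify that g(x,y) = f(x, y + x^2), with f(x,y) = x^4 + y^4, matches the
# expanded form x^4 + x^8 + 4x^6y + 6x^4y^2 + 4x^2y^3 + y^4 at sample points.
def f(x, y):
    return x**4 + y**4

def g_expanded(x, y):
    return x**4 + x**8 + 4 * x**6 * y + 6 * x**4 * y**2 + 4 * x**2 * y**3 + y**4

for x in range(-5, 6):
    for y in range(-5, 6):
        assert f(x, y + x * x) == g_expanded(x, y)
print("expansion agrees at all sample points")
```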
Can we distinguish nonlinearly-shifted polynomials from those which aren't? When f is a nonnegative polynomial of degree 2 or more in both x and y, and q(x) is a polynomial of degree 2 or more, the
highest total-degree term of g(x,y)=f(x,y+q(x)) must be a power of x and not y; and the highest total-degree term involving y is linear in y and has a degree in x greater than 3 and a coefficient
divisible by the exponent of the highest degree pure y term (by the Binomial Theorem).
This suggests a procedure for reducing nonlinearly-shifted polynomials:
Given g(x,y) with degree in x twice or more its degree in y, and whose highest total-degree term involving y is c·n·x^m·y where m≥4 and c is a positive integer and n is the degree of y in g, replace
g(x,y) by g(x,y-cx^n). This reduction step removes the leading term of q(x). Repeating this process will remove lower degree (but higher degree than 1) terms of q(x). As the result of a
reduction, a pure x term may no longer be the highest degree term. The same process may then be carried out with the roles of x and y reversed, unwrapping layer after layer of nonlinear-shift
until a minimal degree polynomial is achieved.
The coefficient of the x^m·y term needing to be a multiple of n leaves the great majority of high degree polynomials for which the nonlinear-shifting reduction procedure does not apply. But for the
purposes of gauging asymptotic-sparseness, the coefficient can be non-integer. The reduction procedure is then:
Given g(x,y) with degree in x twice or more its degree in y, and whose highest total-degree term involving y is c·x^m·y where m≥4 and c is a real number, replace g(x,y) by g(x,y-(c/n)x^n) where n is the
degree of y in g. This reduction step removes the leading term of q(x). Repeating this process will remove lower degree (but higher degree than 1) terms of q(x). As the result of a reduction, a
pure x term may no longer be the highest degree term. The same process may then be carried out with the roles of x and y reversed, unwrapping layer after layer of nonlinear-shift until a minimal
degree polynomial is achieved.
Structure Theorems
Theorem 2: The sum of asymptotically-sparse polynomial functions is asymptotically-sparse.
Let H be a large positive number and f(x,y) and g(x,y) be asymptotically-sparse polynomial functions. Because f and g return only nonnegative numbers, any x,y pair satisfying 0<f(x,y)+g(x,y)<H,
also satisfies either 0<f(x,y)<H or 0<g(x,y)<H. Thus C(H,f+g)≤C(H,f)+C(H,g)<O(H); and f(x,y)+g(x,y) is asymptotically-sparse.
Theorem 3: The product of an asymptotically-sparse polynomial function and any nonnegative-valued polynomial function is asymptotically-sparse.
Let H be a large positive number and f(x,y) be an asymptotically-sparse polynomial function and g(x,y) be a polynomial function whose values are all nonnegative. Let L be the smallest nonzero
value returned by g(x,y) for any integer values of x and y. Then g(x,y)/L will return 0 or values greater or equal to 1.
Any integer x,y pair satisfying 0<f(x,y)g(x,y)/L<H, also satisfies 0<f(x,y)<H. Thus C(H,fg/L)≤C(H,f)<O(H); hence f(x,y)g(x,y)/L is asymptotically-sparse; and by Lemma 2, Lf(x,y)g(x,y)/L=f(x,y)g(
x,y) is asymptotically-sparse.
Copyright © 2008, 2009 Aubrey Jaffer.
I am a guest and not a member of the MIT Computer Science and Artificial Intelligence Laboratory. My actions and comments do not reflect in any way on MIT.
agj @ alum.mit.edu Go Figure!
Hydrostatic Force Problem In Calc. 2
I've done an even problem from my book, and am not 100% sure if it's correct. It asks to find the Hydrostatic Force acting against an Area.
1. The problem statement, all variables and given/known data
A vertical dam has a semicircular gate as shown in the figure. Find the Hydrostatic force against the gate.
[tex]w=2 \mbox{ m}[/tex]
[tex]i=4 \mbox{ m}[/tex]
[tex]l=12 \mbox{ m}[/tex]
This figure represents a Dam. The top rectangle represents... I guess air? The [tex]w[/tex] is for height, at [tex]2 \mbox{ m}[/tex]. The lower rectangle is the water. The half circle at the bottom
is the Object will act as the area. The [tex]i[/tex] in the circle is the diameter of the half circle, which is [tex]4 \mbox{ m}[/tex]. And last, but not least, [tex]l[/tex] represents the entire
length of the Dam, which is [tex]12 \mbox{ m}[/tex].
2. Relevant equations
[tex]\int\sqrt{a^2-u^2}du \Rightarrow \frac{u}{2} \sqrt{a^2-u^2} + \frac{a^2}{2}\sin^{-1}{\frac{u}{a}}+C[/tex]
[tex]F = pgAd[/tex]
[tex]p = 1000 \mbox{ }kg/m^3[/tex]
[tex]g = 9.8 \mbox{ }m/s^2[/tex]
Now, I am not 100% sure I am applying these formulas correctly. [tex]F[/tex] is what I'm assuming to be the Hydrostatic Force since the book states that it is, "The force exerted by the fluid on an
area". And the book gives [tex]p[/tex] as the density of water, and of course [tex]g[/tex] as gravity.
3. The attempt at a solution
[tex]d = 12-2 = 10[/tex]
[tex]x^2+y^2=(2)^2 \Rightarrow y=\sqrt{4-x^2}[/tex]
[tex]\Rightarrow \frac{x}{2} \sqrt{4-x^2} + 2\sin^{-1}{\frac{x}{2}}=I[/tex]
[tex]F=(1000)(9.8)(6.283185)(10)=615752.13 \mbox{ N}[/tex]
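For what it's worth, here is a numerical cross-check of the setup (a sketch only; it assumes the gate is a semicircle of radius 2 m whose flat diameter lies 10 m below the surface, with the curved side pointing down, which is one reading of the figure):

```python
import math

# Sum pressure * strip area over horizontal strips of the semicircular gate.
rho, g = 1000.0, 9.8   # water density (kg/m^3) and gravity (m/s^2)
n = 200000
dt = 2.0 / n
F = 0.0
for i in range(n):
    t = (i + 0.5) * dt                        # depth below the diameter, 0..2 m
    width = 2.0 * math.sqrt(4.0 - t * t)      # horizontal strip width
    F += rho * g * (10.0 + t) * width * dt    # pressure times strip area

print(F)  # about 6.68e5 N under these assumptions
```

Under these assumptions the exact value is 9800(20π + 16/3) ≈ 6.68×10^5 N, a bit above the 6.16×10^5 N obtained by placing the centroid at the diameter.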
Did I correctly apply the concept? If so, is my arithmetic correct also?
[Tutor] Recursive permutation
x x unigeek at hotmail.com
Fri Nov 19 18:59:38 CET 2004
Here is my 2 bits worth.
If I understand what you want you need only use the Result and make a new
list that takes each item and creates all possible orders of that item.
Result = ['']
def Combination(S):
    if len(S) > 0:
        if S not in Result:
            Result.append(S)
            for I in range(len(S)):
                str2 = S[0:I] + S[I+1:len(S)]
                Combination(str2)
    return Result

print Result
>From: Blake Winton <bwinton at latte.ca>
>To: Eivind Triel <ET at cheminova.dk>, tutor at python.org
>Subject: Re: [Tutor] Recursive permutation
>Date: Thu, 18 Nov 2004 10:16:12 -0500
>Eivind Triel wrote:
> > Well is there a nice code that will do the trick?
>We've discussed this on the list before, so searching the archive for "list
>permutation recursive" should yield up some hits. (Sadly, it doesn't
>really give me anything. Maybe someone else on the Tutor list will post
>the thread title?)
>>This code maks a permutation of the entire list (all the 3 digs), but how
>>do I make the permutations of lower lists?
>>Going one down i seems easy: Remove alle the element one at a time and
>>make a permultation of the rest. But going two down...
>Perhaps it would be easier to start from the bottom, and build your way up?
> So, all the lists of length 0, then all the lists of length 1, etc, etc,
>until you get to all the lists of length len(input).
>>Like this:
>>Sting = 123
>>This give:
>I think you're missing the list of length 0 ('') in your output...
>Tutor maillist - Tutor at python.org
More information about the Tutor mailing list
Calculus
dy/dx is another way of representing a function's derivative (in this case, y is the function and x is the differentiation variable)
remember the definition of a function's derivative?
f'(x) = lim (Δx→0) [f(x+Δx) - f(x)] / Δx
so we can put the definition like this:
dy/dx = lim (Δx→0) [f(x+Δx) - f(x)] / Δx
now call the numerator "differential in y", dy
and the denominator "differential in x", dx
as you can see in the above expression, a function's derivative can now be expressed with the help of differentials (infinitely small increments)
So as you can see dy/dx is the derivative of y=f(x) with respect to x
You can understand this easily if you recall that the derivative of a function is related to the slope of its tangent line at a certain point. And how do you find slopes? With quotients between
the y-increments and x-increments!
The difference is that dy and dx are very small increments.. so small you can only express them using limits
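A small numeric sketch (f(x)=x^2 and the point x=3 are arbitrary illustrative choices) shows the difference quotient settling toward the derivative as the increment shrinks:

```python
def f(x):
    return x * x  # sample function; its derivative at x is 2x

x = 3.0
for dx in (1.0, 0.1, 0.001, 0.00001):
    dy = f(x + dx) - f(x)
    print(dx, dy / dx)  # approaches f'(3) = 6 as dx shrinks
```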
...understanding differentials is a major step to any calculus student imho! Then you can move on to more complex topics like integration and differential equations.
Last edited by kylekatarn (2005-09-01 01:13:33)
The N_ELEMENTS function returns the number of elements contained in an expression or variable.
Create an integer array, I by entering:
Find the number of elements in I and print the result by entering:
To test if the variable Q is defined and, if not, set its value to 1, use the command:
IF N_ELEMENTS(Q) EQ 0 THEN Q=1
A typical use of N_ELEMENTS is to check if an optional input is defined, and if not, set it to a default value:
IF (N_ELEMENTS(roo) EQ 0) THEN roo=rooDefault
The original value of roo may be altered by a called routine, passing a different value back to the caller. Unless you intend for the routine to behave in this manner, you should prevent it by
differentiating N_ELEMENTS' parameter from your routine's variable:
IF (N_ELEMENTS(roo) EQ 0) THEN rooUse=roo $
Write a loop that reads positive integers from standard input.....
09-25-2012 #1
Registered User
Join Date
Sep 2012
Hey guys, very new to programming and need some help with a homework problem.
Write a loop that reads positive integers from standard input and that terminates when it reads an integer that is not positive. After the loop terminates, it prints out the sum of all the even
integers read, the sum of all the odd integers read, a count of the number of even integers read, and a count of the number of odd integers read, all separated by at least one space. Declare any
variables that are needed.
Note a few things:
- This does not need to be a complete program, just what is asked above.
- This need to be a do-while loop.
- The spaces between the numbers are important, but I don't know how to get spaces.
Here is what I have so far:
int num, sum=0;
int sumeven=0;
int numeven=0;
int totalnum=0;
do
{
    cin >> num;
    if (num % 2 == 0 && num >= 0)
        sumeven = sumeven + num;
    sum = sum + num;
}
while (num>0);
cout<< sum, sumeven, numeven, totalnum;
Right now, the problem is the program is simply adding up ALL the numbers, not the odd, evens, etc.
Again, I am very new to this so go easy on me
Thanks in advance!
I see two problems. In the loop that does the counting and adding you have to use an if-else because if you don't you might add a number to both totals. A number can either be even or odd, so you
want to do one or the other thing.
The other problem is that the cout statement does not do what you think it does. To use cout properly always insert with operator <<.
Oh, woops, my bad. Yes, I am in C++, don't know why I thought C, but thanks for the tips.
Also, if anyone can move this thread, that would be great.
Ok guys, I started working on this again. Cleaned a few things up and got my output to add spaces between the numbers. It's still not quite right, but it is getting closer. Here is my code:
int num=0;
int sum1=0;
int sum2=0;
int sumeven=0;
int sumodd=0;
int evencount=0;
int oddcount=0;
do
{
    cin >> num;
    if (num % 2 == 0 && num > 0)
        sumeven = sum1 + sumeven;
    else if (num > 0)
        sumodd = sum2 + sumodd;
}
while (num > 0);
cout<<sumeven;
cout<<" "<<sumodd;
cout<<" "<<evencount;
cout<<" "<<oddcount;
The program does a test input and output and here are the results.
Input: 2 4 6 8 0
My code's output: 15 0 5 0
Correct output: 20 0 4 0
So as you can see, I am close, at least, for those numbers. I need help figuring out how to get it correct. Thanks in advance!
EDIT - Ok, I realize that I need another integer here. What I have for sumeven and sumodd isn't going to work because this is not adding the correct things. I need some variable, I'll call sum,
to equal the sum of the numbers entered. How do I get the integer sum to do that; to be the sum of every number that is entered?
EDIT 2 - Ok, I am getting the first part of the output to work - "prints out the sum of all the even integers read" - Still need to get the other three parts working correctly.
Last edited by Gundown64; 09-30-2012 at 08:33 PM. Reason: Added updated code.
You should post your problem in the C++ forum
if (num % 2 == 0 && num > 0)
sumeven = sum1 + sumeven;
That should be num % 1 for even/odd
Fact - Beethoven wrote his first symphony in C
if (num % 2 == 0 && num > 0)
sumeven = sum1 + sumeven;
Incrementing "num" after assigning it to "sum1" is unnecessary. Thus "sum1" is unnecessary too because it's the same as "num". You can reduce the body of your if-statement to
sumeven += num;
else if (num > 0)
sumodd = sum2 + sumodd;
Why do you increment "sum2" and add it to "sumodd"? You should add "num" to "sumodd" as you do for even numbers.
Bye, Andreas
if (num % 2 == 0 && num > 0)
sumeven = sum1 + sumeven;
Incrementing "num" after assigning it to "sum1" is unnecessary. Thus "sum1" is unnecessary too because it's the same as "num". You can reduce the body of your if-statement to
sumeven += num;
else if (num > 0)
sumodd = sum2 + sumodd;
Why do you increment "sum2" and add it to "sumodd"? You should add "num" to "sumodd" as you do for even numbers.
Bye, Andreas
Thank you very much! That was the solution! The sum1 and sum2 were just something I was testing, but I knew they were wrong. Thank you! Here is the finished code:
int num=0;
int sumeven=0;
int sumodd=0;
int evencount=0;
int oddcount=0;
do
{
    cin >> num;
    if (num % 2 == 0 && num > 0)
    {
        sumeven += num;
        evencount++;
    }
    else if (num > 0)
    {
        sumodd += num;
        oddcount++;
    }
}
while (num > 0);
cout<<sumeven;
cout<<" "<<sumodd;
cout<<" "<<evencount;
cout<<" "<<oddcount;
Rounding Machine
A particular number machine works as follows.
E.g. 3.4
A different number machine does the following.
E.g. 3.4
Notice that 3.4 came out as 6 from both machines, whereas 7.9 came out differently. What must be special about a number for the same value to come out of each machine?
Problem ID: 97 (Jan 2003) Difficulty: 2 Star
Alice Programming Tutorial, by Richard G Baldwin
Re-usable code
One of the reasons for breaking code up into modules, (such as Alice primitive methods and functions), is to make them easily re-usable.
For example, the world object has a function named Math.sqrt.
That function can be called to compute and return the square root of a number.
The Alice development team were good enough to write that code for us and to provide it in the form of a standard Alice function.
As a result, whenever we need to compute the square root of a number, we don't need to "reinvent the wheel."
Applied Math and Science Education Repository - Browse Resources
(16 resources)
This online resource is intended to help students understand concepts from probability and statistics and covers many topics from introductory to advanced. You can follow the progression of the text,
or you can click...
Created by Alan Heckert and James Filliben, this chapter of the National Institute of Standards and Technology (NIST) Engineering Statistics handbook describes the terms, models and techniques used
to evaluate and p...
This website, created by Richard Lowry of Vassar College, is an application of Bayes' Theorem that performs the same calculations for the situation where the several probabilities are constructed as
indices of...
To perform calculations using Bayes' theorem, enter the probability for one or the other of the items in each of the following pairs (the remaining item in each pair will be calculated
automatically). A probability...
Bayesian Calculator
This page, created by Michael H. Birnbaum of Fullerton University, uses Bayes' Theorem to calculate the probability of a hypothesis given a datum. An example about cancer is given to help users
understand Bayes'...
Variance Components and Mixed Model ANOVA/ANCOVA
The Variance Components and Mixed Model ANOVA/ANCOVA section describes a comprehensive set of techniques for analyzing research designs that include random effects; however, these techniques are also
well suited for analyzing large main effect designs (e.g., designs with more than 200 levels per factor), designs with many factors where the higher order interactions are not of interest, and
analyses involving case weights.
There are several sections in this electronic textbook that discuss Analysis of Variance for factorial or specialized designs. For a discussion of these sections and the types of designs for which
they are best suited, refer to Methods for Analysis of Variance. Note, however, that General Linear Models describes how to analyze designs with any number and type of between effects and compute
ANOVA-based variance component estimates for any effect in a mixed-model analysis.
Experimentation is sometimes mistakenly thought to involve only the manipulation of levels of the independent variables and the observation of subsequent responses on the dependent variables.
Independent variables whose levels are determined or set by the experimenter are said to have fixed effects. There is a second class of effects, however, that is often of great interest to the
researcher; random effects are classification effects where the levels of the effects are assumed to be randomly selected from an infinite population of possible levels.
Many independent variables of research interest are not fully amenable to experimental manipulation, but nevertheless can be studied by considering them to have random effects. For example, the
genetic makeup of individual members of a species cannot at present be (fully) experimentally manipulated, yet it is of great interest to the geneticist to assess the genetic contribution to
individual variation on outcomes such as health, behavioral characteristics, and the like. As another example, a manufacturer might want to estimate the components of variation in the characteristics
of a product for a random sample of machines operated by a random sample of operators. The statistical analysis of random effects is accomplished by using the random effect model if all of the
independent variables are assumed to have random effects, or by using the mixed model if some of the independent variables are assumed to have random effects and other independent variables are
assumed to have fixed effects.
Properties of random effects. To illustrate some of the properties of random effects, suppose you collected data on the amount of insect damage to different varieties of wheat. It is impractical to
study insect damage for every possible variety of wheat, so to conduct the experiment, you randomly select four varieties of wheat to study. Plant damage is rated for up to a maximum of four plots
per variety. Ratings are on a 0 (no damage) to 10 (great damage) scale. The following data for this example are presented in Milliken and Johnson (1992, p. 237).
│DATA: wheat.sta 3v │
│VARIETY │PLOT│DAMAGE │
│ A │ 1│ 3.90│
│ A │ 2│ 4.05│
│ A │ 3│ 4.25│
│ B │ 4│ 3.60│
│ B │ 5│ 4.20│
│ B │ 6│ 4.05│
│ B │ 7│ 3.85│
│ C │ 8│ 4.15│
│ C │ 9│ 4.60│
│ C │ 10│ 4.15│
│ C │ 11│ 4.40│
│ D │ 12│ 3.35│
│ D │ 13│ 3.80│
To determine the components of variation in resistance to insect damage for the different varieties of wheat, an ANOVA can first be performed. Perhaps surprisingly, in the ANOVA, Variety can be treated as a fixed or as a random factor without influencing the results (provided that Type I Sums of squares are used and that Variety is always entered first in the model). The spreadsheet below shows the ANOVA results of a mixed model analysis treating Variety as a fixed effect and ignoring Plot, i.e., treating the plot-to-plot variation as a measure of random error.
│ANOVA Results: DAMAGE (wheat.sta) │
│ │Effect│ df │ MS │ df │ MS │ │ │
│Effect │(F/R) │Effect│Effect │Error│ Error │ F │ p │
│{1}VARIETY │Fixed │ 3 │.270053│ 9 │.056435│4.785196│.029275│
Another way to perform the same mixed model analysis is to treat Variety as a fixed effect and Plot as a random effect. The spreadsheet below shows the ANOVA results for this mixed model analysis.
│ANOVA Results for Synthesized Errors: DAMAGE (wheat.sta) │
│ │df error computed using Satterthwaite method │
│ │Effect│ df │ MS │ df │ MS │ │ │
│Effect │(F/R) │Effect│Effect │Error│ Error │ F │ p │
│{1}VARIETY │ Fixed│ 3 │.270053│ 9 │.056435│4.785196│.029275│
│{2}PLOT │Random│ 9 │.056435│-----│ ----- │ ----- │ ----- │
The spreadsheet below shows the ANOVA results for a random effect model treating Plot as a random effect nested within Variety, which is also treated as a random effect.
│ANOVA Results for Synthesized Errors: DAMAGE (wheat.sta) │
│ │df error computed using Satterthwaite method │
│ │Effect│ df │ MS │ df │ MS │ │ │
│Effect │(F/R) │Effect│Effect │Error│ Error │ F │ p │
│{1}VARIETY │Random│ 3 │.270053│ 9 │.056435│4.785196│.029275│
│{2}PLOT │Random│ 9 │.056435│-----│ ----- │ ----- │ ----- │
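The mean squares and F statistic in these tables can be reproduced directly from the thirteen damage ratings listed in the wheat.sta data. A minimal sketch of the one-way computation:

```python
# One-way ANOVA on the wheat.sta damage ratings: between-varieties and
# within-varieties (plot-to-plot) sums of squares, mean squares, and F.
data = {
    "A": [3.90, 4.05, 4.25],
    "B": [3.60, 4.20, 4.05, 3.85],
    "C": [4.15, 4.60, 4.15, 4.40],
    "D": [3.35, 3.80],
}

n_total = sum(len(v) for v in data.values())
grand_mean = sum(sum(v) for v in data.values()) / n_total

# Between-varieties sum of squares: n_i * (group mean - grand mean)^2
ss_between = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2 for v in data.values())
# Within-varieties (plot) sum of squares
ss_within = sum(sum((x - sum(v) / len(v)) ** 2 for x in v) for v in data.values())

df_between = len(data) - 1       # 3
df_within = n_total - len(data)  # 9

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_ratio = ms_between / ms_within

print(round(ms_between, 6), round(ms_within, 6), round(f_ratio, 4))
# Agrees with the tables: MS Variety = .270053, MS Error = .056435, F = 4.7852
```

The unbalanced group sizes (3, 4, 4, 2) are handled automatically because each group's contribution is weighted by its own count.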
As can be seen, the tests of significance for the Variety effect are identical in all three analyses (and in fact, there are even more ways to produce the same result). When components of variance are estimated, however, the difference between the mixed model (treating Variety as fixed) and the random model (treating Variety as random) becomes apparent. The spreadsheet below shows the variance component estimates for the mixed model treating Variety as a fixed effect.
│Components of Variance (wheat.sta) │
│ │ Mean Squares Type: 1│
│Source │ DAMAGE│
│{2}PLOT │ .056435│
│Error │ 0.000000│
The spreadsheet below shows the variance component estimates for the random effects model treating both Variety and Plot as random effects.
│Components of Variance (wheat.sta) │
│ │ Mean Squares Type: 1│
│Source │ DAMAGE│
│{1}VARIETY │ .067186│
│{2}PLOT │ .056435│
│Error │ 0.000000│
As can be seen, the difference in the two sets of estimates is that a variance component is estimated for Variety only when it is considered to be a random effect. This reflects the basic distinction between fixed effects and random effects. The variation in the levels of random factors is assumed to be representative of the variation of the whole population of possible levels. Thus, variation in the levels of a random factor can be used to estimate the population variation. Even more importantly, covariation between the levels of a random factor and responses on a dependent variable can be used to estimate the population component of variance in the dependent variable attributable to the random factor. The variation in the levels of fixed factors is instead considered to be arbitrarily determined by the experimenter (i.e., the experimenter can make the levels of a fixed factor vary as little or as much as desired). Thus, the variation of a fixed factor cannot be used to estimate its population variance, nor can the population covariance with the dependent variable be meaningfully estimated. With this basic distinction between fixed effects and random effects in mind, we now can look more closely at the properties of variance components.
Estimation of Variance Components (Technical Overview)
The basic goal of variance component estimation is to estimate the population covariation between random factors and the dependent variable. Depending on the method used to estimate variance
components, the population variances of the random factors can also be estimated, and significance tests can be performed to test whether the population covariation between the random factors and the
dependent variable are nonzero.
Estimating the variation of random factors. The ANOVA method provides an integrative approach to estimating variance components, because ANOVA techniques can be used to estimate the variance of
random factors, to estimate the components of variance in the dependent variable attributable to the random factors, and to test whether the variance components differ significantly from zero. The
ANOVA method for estimating the variance of the random factors begins by constructing the Sums of squares and cross products (SSCP) matrix for the independent variables. The sums of squares and cross
products for the random effects are then residualized on the fixed effects, leaving the random effects independent of the fixed effects, as required in the mixed model (see, for example, Searle,
Casella, & McCulloch, 1992). The residualized Sums of squares and cross products for each random factor are then divided by their degrees of freedom to produce the coefficients in the Expected mean
squares matrix. Nonzero off-diagonal coefficients for the random effects in this matrix indicate confounding, which must be taken into account when estimating the population variance for each factor.
For the wheat.sta data, treating both Variety and Plot as random effects, the coefficients in the Expected mean squares matrix show that the two factors are at least somewhat confounded. The Expected
mean squares spreadsheet is shown below.
│Expected Mean Squares (wheat.sta) │
│ │Mean Squares Type: 1 │
│ │Effect│ │ │ │
│Source │(F/R) │VARIETY │ PLOT │ Error │
│{1}VARIETY │Random│3.179487│1.000000│1.000000│
│{2}PLOT │Random│ │1.000000│1.000000│
│Error │ │ │ │1.000000│
The coefficients in the Expected mean squares matrix are used to estimate the population variation of the random effects by equating each observed mean square to its expectation. For example, with Type I Sums of squares the expected mean square for Variety is 3.179487 times the Variety variance component plus 1 times the Plot variance component plus 1 times the Error variance component.
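Equating the observed mean squares to these expectations gives a small triangular system that can be solved by back-substitution. A sketch, assuming the Type I mean squares and the 3.179487 coefficient reported above (the coefficient itself can be recovered from the unbalanced group sizes):

```python
# ANOVA (Type I) variance component estimation for the wheat.sta design.
# Expected mean squares:
#   E[MS_Variety] = 3.179487 * var_variety + var_plot + var_error
#   E[MS_Plot]    = var_plot + var_error
#   E[MS_Error]   = var_error  (zero here: plots are the unit of analysis)
ms_variety, ms_plot, ms_error = 0.270053, 0.056435, 0.0

var_error = ms_error
var_plot = ms_plot - var_error
var_variety = (ms_variety - var_plot - var_error) / 3.179487

# The 3.179487 coefficient for an unbalanced one-way layout:
#   (N - sum(n_i^2) / N) / (a - 1), with group sizes 3, 4, 4, 2.
sizes = [3, 4, 4, 2]
N, a = sum(sizes), len(sizes)
coef = (N - sum(n * n for n in sizes) / N) / (a - 1)

print(round(var_variety, 6), round(coef, 6))
# Agrees with the Components of Variance table: Variety = .067186
```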
The ANOVA method provides an integrative approach to estimating variance components, but it is not without problems (i.e., ANOVA estimates of variance components are generally biased, and can be
negative, even though variances, by definition, must be either zero or positive). An alternative to ANOVA estimation is provided by maximum likelihood estimation. Maximum likelihood methods for
estimating variance components are based on quadratic forms and typically, but not always, require iteration to find a solution. Perhaps the simplest form of maximum likelihood estimation is MIVQUE
(0) estimation. MIVQUE(0) produces Minimum Variance Quadratic Unbiased Estimators (i.e., MIVQUE). In MIVQUE(0) estimation, there is no weighting of the random effects (thus the 0 [zero] after
MIVQUE), so an iterative solution for estimating variance components is not required. MIVQUE(0) estimation begins by constructing the Quadratic sums of squares (SSQ) matrix. The elements for the
random effects in the SSQ matrix can most simply be described as the sums of squares of the sums of squares and cross products for each random effect in the model (after residualization on the fixed
effects). The elements of this matrix provide coefficients, similar to the elements of the Expected Mean Squares matrix, which are used to estimate the covariances among the random factors and the
dependent variable. The SSQ matrix for the wheat.sta data is shown below. Note that the nonzero off-diagonal element for Variety and Plot again shows that the two random factors are at least somewhat confounded.
│MIVQUE(0) Variance Component Estimation (wheat.sta) │
│ │SSQ Matrix │
│Source │ VARIETY │ PLOT │ Error │ DAMAGE │
│{1}VARIETY │ 31.90533│ 9.53846│ 9.53846│ 2.418964│
│{2}PLOT │ 9.53846│ 12.00000│ 12.00000│ 1.318077│
│Error │ 9.53846│ 12.00000│ 12.00000│ 1.318077│
Restricted Maximum Likelihood (REML) and Maximum Likelihood (ML) variance component estimation methods are closely related to MIVQUE(0). In fact, some programs use MIVQUE(0) estimates as start values for an iterative solution for the variance components, so the elements of the SSQ matrix serve as initial estimates of the covariances among the random factors and the dependent variable for both REML and ML estimation.
Estimating components of variation. For ANOVA methods of estimating variance components, a solution is found for the system of equations relating the estimated population variances and covariances among the random factors to the estimated population covariances between the random factors and the dependent variable. The solution then defines the variance components. The spreadsheet below shows the Type I Sums of squares estimates of the variance components for the wheat.sta data.
│Components of Variance (wheat.sta) │
│ │Mean Squares Type: 1 │
│Source │DAMAGE │
│{1}VARIETY │0.067186 │
│{2}PLOT │0.056435 │
│Error │0.000000 │
MIVQUE(0) variance components are estimated by inverting the partition of the SSQ matrix that does not include the dependent variable (or finding the generalized inverse, for singular matrices), and postmultiplying the inverse by the dependent variable column vector. This amounts to solving the system of equations that relates the dependent variable to the random independent variables, taking into account the covariation among the independent variables. The MIVQUE(0) estimates for the wheat.sta data are listed in the spreadsheet shown below.
│MIVQUE(0) Variance Component Estimation (wheat.sta) │
│ │Variance Components │
│Source │DAMAGE │
│{1}VARIETY │0.056376 │
│{2}PLOT │0.065028 │
│Error │0.000000 │
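The estimates above can be checked by solving the normal equations implied by the SSQ matrix. A sketch under one assumption: since the Plot and Error rows of the SSQ matrix are identical in this saturated design (plots are the unit of analysis), the redundant Error row is folded into Plot and the remaining 2×2 system is solved by Cramer's rule:

```python
# Reduced MIVQUE(0) system from the printed SSQ matrix (Error row/column
# dropped, since it duplicates the Plot row in this design):
#   [31.90533  9.53846] [var_variety]   [2.418964]
#   [ 9.53846 12.00000] [var_plot   ] = [1.318077]
a11, a12 = 31.90533, 9.53846
a21, a22 = 9.53846, 12.00000
b1, b2 = 2.418964, 1.318077

det = a11 * a22 - a12 * a21
var_variety = (b1 * a22 - a12 * b2) / det  # Cramer's rule
var_plot = (a11 * b2 - b1 * a21) / det

print(round(var_variety, 6), round(var_plot, 6))
# Agrees with the MIVQUE(0) table: Variety = .056376, Plot = .065028
```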
REML and ML variance components are estimated by iteratively optimizing the parameter estimates for the effects in the model. REML differs from ML in that the likelihood of the data is maximized only
for the
random effects
, thus REML is a restricted solution. In both REML and ML estimation, an iterative solution is found for the weights for the random effects in the model that maximizes the likelihood of the data.
Some programs use MIVQUE(0) estimates as the start values for both REML and ML estimation, so the relation among these three techniques is close indeed. The statistical theory underlying maximum likelihood variance component estimation techniques is an advanced topic (Searle, Casella, & McCulloch, 1992, is recommended as an authoritative and comprehensive source). Implementation of maximum likelihood estimation algorithms, furthermore, is difficult (see, for example, Hemmerle & Hartley, 1973, and Jennrich & Sampson, 1976, for descriptions of these algorithms), and faulty implementation can lead to variance component estimates that lie outside the parameter space, converge prematurely to nonoptimal solutions, or give nonsensical results. Milliken and Johnson (1992) noted all of these problems with the commercial software packages they used to estimate variance components.
The basic idea behind both REML and ML estimation is to find the set of weights for the random effects in the model that minimizes the negative of the natural logarithm times the likelihood of the data (the likelihood of the data can vary from zero to one, so minimizing the negative of the natural logarithm times the likelihood of the data amounts to maximizing the probability, or the likelihood, of the data). The logarithm of the REML likelihood and the REML variance component estimates for the wheat.sta data are listed in the last row of the Iteration history spreadsheet shown below.
│Iteration History (wheat.sta) │
│ │Variable: DAMAGE │
│Iter.│ Log LL │ Error │VARIETY││
│1 │ -2.30618│.057430│.068746││
│2 │ -2.25253│.057795│.073744││
│3 │ -2.25130│.056977│.072244││
│4 │ -2.25088│.057005│.073138││
│5 │ -2.25081│.057006│.073160││
│6 │ -2.25081│.057003│.073155││
│7 │ -2.25081│.057003│.073155││
The logarithm of the ML likelihood and the ML estimates for the variance components for the wheat.sta data are listed in the last row of the Iteration history spreadsheet shown below.
│Iteration History (wheat.sta) │
│ │Variable: DAMAGE │
│Iter.│ Log LL │ Error │VARIETY││
│1 │ -2.53585│.057454│.048799││
│2 │ -2.48382│.057427│.048541││
│3 │ -2.48381│.057492│.048639││
│4 │ -2.48381│.057491│.048552││
│5 │ -2.48381│.057492│.048552││
│6 │ -2.48381│.057492│.048552││
As can be seen, the estimates of the variance components for the different methods are quite similar. In general, components of variance using different estimation methods tend to agree fairly well
(see, for example, Swallow & Monahan, 1984).
Testing the significance of variance components.
When maximum likelihood estimation techniques are used, standard linear model significance testing techniques may not be applicable. ANOVA techniques such as decomposing sums of squares and testing
the significance of effects by taking ratios of mean squares are appropriate for linear methods of estimation, but generally are not appropriate for quadratic methods of estimation. When ANOVA
methods are used for estimation, standard significance testing techniques can be employed, with the exception that any confounding among random effects must be taken into account.
To test the significance of effects in mixed or random models, error terms must be constructed that contain all the same sources of random variation except for the variation of the respective effect
of interest. This is done using Satterthwaite's method of denominator synthesis (Satterthwaite, 1946), which finds the linear combinations of sources of random variation that serve as appropriate
error terms for testing the significance of the respective effect of interest. The spreadsheet below shows the coefficients used to construct these linear combinations for testing the Variety and
Plot effects.
│Denominator Synthesis: Coefficients (MS Type: 1) (wheat.sta) │
│ │The synthesized MS Errors are linear │
│ │combinations of the resp. MS effects │
│Effect │ (F/R) │ VARIETY │ PLOT │ Error │
│{1}VARIETY │ Random│ │ 1.000000 │ │
│{2}PLOT │ Random│ │ │ 1.000000 │
The coefficients show that the Variety mean square should be tested against the Plot mean square, and that the Plot mean square should be tested against the Error mean square. Referring back to the Expected mean squares spreadsheet, it is clear that the denominator synthesis has identified appropriate error terms for testing the Variety and Plot effects. Although this is a simple example, in more complex analyses with various degrees of confounding among the random effects, the denominator synthesis can identify appropriate error terms for testing the random effects that would not be readily apparent.
To perform the tests of significance of the random effects, ratios of appropriate Mean squares are formed to compute F statistics and p-values for each effect. Note that in complex analyses, the
degrees of freedom for random effects can be fractional rather than integer values, indicating that fractions of sources of variation were used in synthesizing appropriate error terms for testing the
random effects. The spreadsheet displaying the results of the ANOVA for the Variety and Plot random effects is shown below. Note that, for this simple design, the results are identical to the results
presented earlier in the spreadsheet for the ANOVA treating Plot as a random effect nested within Variety.
│ANOVA Results for Synthesized Errors: DAMAGE (wheat.sta) │
│ │df error computed using Satterthwaite method │
│ │Effect│ df │ MS │ df │ MS │ │ │
│Effect │(F/R) │Effect│Effect │Error│ Error │ F │ p │
│{1}VARIETY │ Fixed│ 3 │.270053│ 9 │.056435│4.785196│.029275│
│{2}PLOT │Random│ 9 │.056435│-----│ ----- │ ----- │ ----- │
As shown in the spreadsheet, the Variety effect is found to be significant at p < .05, but as would be expected, the Plot effect cannot be tested for significance because plots served as the basic unit of analysis. If data on samples of plants taken within plots were available, a test of the significance of the Plot effect could be constructed.
Appropriate tests of significance for MIVQUE(0) variance component estimates generally cannot be constructed except in special cases (see Searle, Casella, & McCulloch, 1992). Asymptotic (large
sample) tests of significance of REML and ML variance component estimates, however, can be constructed for the parameter estimates from the final iteration of the solution. The spreadsheet below
shows the asymptotic (large sample) tests of significance for the REML estimates for the wheat.sta data.
│Restricted Maximum Likelihood Estimates (wheat.sta) │
│ │Variable: DAMAGE │
│ │-2*Log(Likelihood)=4.50162399 │
│ │ Variance │ Asympt. │ Asympt. │Asympt. │
│Effect │ Comp. │ Std.Err. │ z │ p │
│{1}VARIETY │ .073155│ .078019 │ .937656 │.348421 │
│Error │ .057003│ .027132 │ 2.100914 │.035648 │
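The asymptotic z and p values in this table are simply each estimate divided by its standard error, referred to a standard normal distribution. A sketch using only the Python standard library (the two-sided p is computed from the normal CDF via math.erf):

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal z statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# REML estimates and asymptotic standard errors from the table above
z_variety = 0.073155 / 0.078019
z_error = 0.057003 / 0.027132

print(round(z_variety, 4), round(two_sided_p(z_variety), 4))
print(round(z_error, 4), round(two_sided_p(z_error), 4))
# Agrees with the table to rounding: z ≈ .9377 (p ≈ .3484) for Variety,
# z ≈ 2.1009 (p ≈ .0356) for Error.
```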
The spreadsheet below shows the asymptotic (large sample) tests of significance for the ML estimates for the wheat.sta data.
│Maximum Likelihood Estimates (wheat.sta) │
│ │Variable: DAMAGE │
│ │-2*Log(Likelihood)=4.96761616 │
│ │Variance│Asympt. │Asympt. │Asympt.│
│Effect │ Comp. │Std.Err.│ z │ p │
│{1}VARIETY │ .048552│.050747 │.956748 │.338694│
│Error │ .057492│.027598 │2.083213│.037232│
It should be emphasized that the asymptotic tests of significance for REML and ML variance component estimates are based on large sample sizes, which certainly is not the case for the wheat.sta data. For this data set, the tests of significance from both analyses agree in suggesting that the Variety variance component does not differ significantly from zero.
For basic information on ANOVA in linear models, see also Elementary Concepts.
Estimating the population intraclass correlation. Note that if the variance component estimates for the random effects in the model are divided by the sum of all components (including the error component), the resulting percentages are population intraclass correlation coefficients for the respective effects.
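For the wheat example, this can be sketched with the Type I estimates reported earlier (assuming a zero error component, since plots are the unit of analysis):

```python
# Population intraclass correlation for Variety: its variance component
# divided by the sum of all components (Variety + Plot + Error).
var_variety, var_plot, var_error = 0.067186, 0.056435, 0.0

icc_variety = var_variety / (var_variety + var_plot + var_error)
print(round(icc_variety, 4))
# About 54% of the variation in damage is attributable to Variety.
```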
Popular Science Monthly/Volume 12/December 1877/The Tides II
THE TIDES.
By Professor ELIAS SCHNEIDER.
A PART of the theory of the tides presented in our text-books has been pronounced absurd in my first article. It is also a matter of amazement that the effect of centrifugal force is entirely ignored
in these text-books. That the propelling force arising from this cause should be utterly disregarded in an explanation of the tides is very remarkable. And yet the existence of such a force is so
easily demonstrated that nothing else seems necessary to prove it to be one of the causes of the tides, than what was presented in my first article. I will, however, give additional force to my
reasoning by citing the results of actual experiment.
It may be shown that there is an actual difference in the amount of centrifugal force felt at any part of the earth's surface during different times of the twenty-four hours of one axial rotation;
and also at different times of the earth's revolution around her centre of motion. Theory implies that when any portion of the earth's surface is moving toward that point in her orbit where such
surface makes the most rapid sweep around the centre of motion, the greatest amount of centrifugal force must be felt at such surface; and that, when this part moves toward that point of the earth's
orbit where it makes the slowest sweep around the centre of motion, the least amount of centrifugal force must be felt. Now, it is very evident that any portion of the earth's surface which is most
remote from the centre of her motion, whether that centre be the sun or the centre of gravity between herself and the moon, makes the most rapid sweep, and that consequently her waters must feel the
greatest amount of centrifugal force at that time.
Now, let us see what experiment tells us on this subject: A box has been made of proper dimensions, free within from all outside disturbance or motion of air. In this box is placed a steel frame,
which moves like a gate on a very delicate hinge, so as to avoid all possible friction. A weight of nearly twenty pounds rests on this gate at about four feet from the hinge. The hinge, whose lower
part is a mere point, or delicate pivot, and the weight, are in the same line, parallel with the meridian. The weight is free, as nearly as can be, to obey the power of its own inertia. In
consequence of this it moves laterally once every twenty-four hours west and east, whenever the centrifugal force is increasing and decreasing.
From noon to midnight the earth's surface is moving toward that point where its motion is more rapid, and consequently it begins to feel an increasing amount of centrifugal force. This is indicated
by the apparatus, for the weight, which rests on the gate, by virtue of its inertia, lags behind and makes an apparent motion westward. This motion is, of course, not real. The earth's surface moves
eastward faster than the weight, and hence the weight appears to move westward. From midnight to noon the centrifugal force felt by the earth's surface diminishes, for it is then moving toward that
point where its motion eastward is less rapid. This is also indicated by the apparatus, for the weight, having gradually acquired the same velocity eastward, remains stationary at midnight a very
short time. But, soon after midnight, when the earth's surface begins to feel less centrifugal force, this weight, by virtue of its inertia, resists the change of motion, and therefore moves eastward
as far as it moved westward before midnight.
This movement of the weight is greatest when new-moon occurs at midnight, for the earth then feels not only the centrifugal force produced by her revolution around the sun, but, in addition, that
produced also by her revolution around the centre of gravity between herself and the moon.
The motion of the weight westward begins soon after mid-day, and reaches its highest acceleration at about 8 p. m.; the motion eastward begins soon after midnight, and reaches its highest
acceleration at about 7 a. m.
I hope soon to make a new apparatus, which shall have a longer distance between the hinge and weight, and from it more marked results can be derived.
When a body moves in a curve around a centre, it feels the effect of two forces: the one, which I call centrifugal, is the impulse which puts the body in motion; the other, which I call centripetal,
is the power which draws toward the centre and keeps the body from moving in a direct line. These are the only forces acting upon a body moving in a curve. The former is sometimes called tangential,
but I prefer to call it centrifugal, for it is the only force which drives from the centre. There is no force acting directly from the centre. That which is often called centrifugal is really
centripetal force, for the tension of the string in the following experiment is not caused by any force acting on the body from the centre, but it is caused by a force drawing the body out of its
rectilineal course, and toward the centre, compelling it to move in a curve.
Suppose the body E (Fig. 1) moves with a certain velocity in the curve E C D, and that the string E S feels a known tension, just equal to its strength. Now, double the velocity, and the strength of
the string must be increased fourfold to keep it from breaking, for the force drawing the body toward the centre must then be four times as great to keep it moving in the curve. Or, suppose the body
moves from A toward B with a known velocity, and that on reaching E is acted upon by the string. The body is then made to take a curvilinear motion, and the string feels a tension drawing the body
not directly from but toward the centre, and equal to a force necessary to keep the body from moving in a straight line. It may be remarked that, as action and reaction are equal, the tension is felt
both ways. But the reader can easily see what I mean.
This law of motion can be still better illustrated by a reference to one of the satellites of the planet Neptune. The mean distance of this satellite is nearly equal to the distance of our moon from
the earth. We may assume these distances to be exactly equal. Then, as at the same distance the centripetal force must increase as the square of the velocity, to keep the body moving in the curve,
and as the velocity of this moon of Neptune is about four and a half times greater than that of our moon, the centripetal force, or the force of gravity produced by Neptune on this moon, must be
(4.5)^2, or about twenty, times as great as is the centripetal force, or the gravitating power, our earth produces on its moon. In other words, the planet Neptune is about twenty times as heavy as our
earth, for weight is nothing else than the measure of gravity.
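The weight comparison is a one-line computation; a quick check of the article's figure (assuming, as the article does, equal orbital distances):

```python
# Centripetal force at a fixed radius grows with the square of the speed,
# so at equal distances the force ratio is the velocity ratio squared.
velocity_ratio = 4.5             # Neptune's satellite vs. our moon (the article's figure)
force_ratio = velocity_ratio ** 2
print(force_ratio)               # 20.25, i.e. "about twenty times as great"
```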
The preceding statements are sufficient to show what is meant by centrifugal and centripetal forces. Let us now see how these act on bodies moving in large and small curves, and how the waters on the
earth's surface are driven by centrifugal force toward a line tangent to her orbit. Since the length of the orbital curve of the earth is very great, and therefore not much deflected from a straight
line, the waters are driven very little above the usual surface, no matter how rapidly the earth herself may move in this curve. The centrifugal force or original impulse felt by the whole earth is
very great, but that felt by her waters is hardly visible or sensible in mid-ocean. For the tide-waves cannot get above the line tangent to the curve of the earth's orbit. The following illustration
will show this:
Let A B C (Fig. 2) represent a part of the curve of the earth's orbit, in its motion around the central sun, and B D a line tangent
to the curve at the point B. Now it is very evident that no tide-wave produced by centrifugal force can get higher above the curve of the orbit than this tangent line, and the distance between the
curve and the tangent, as at E, is very small. The part of the earth's surface most remote from the sun has indeed a greater tendency to continue moving on in the straight line of the original
impulse than any other part. The particles of water have a small degree of cohesion, and they will therefore continue to move a short distance along this tangent, but only a little above the usual
surface of the earth.
The curve in which the surface of the earth moves around the centre of gravity between herself and the moon is much more deflected from a straight line. Here also the tide-wave can rise no higher
than to the line tangent to this curve. The distance of the point G (Fig. 3) from the curve is, however, much greater than the point E in Fig. 2 from its curve. The motion of the surface of the earth
at H around the point C, the centre of gravity between herself and the moon, is only about sixty-five miles an hour; while the surface at B (Fig. 2)
moves with a velocity of 68,000 miles an hour around the sun. Nevertheless, as the waters are driven toward these respective tangents by the effect of centrifugal force, the tide-wave must be
greatest where the distance between tangent and curve is the greater.
Let us now proceed to prove by mathematical demonstration the falsity of the theory of the tides found in our text-books.
Herschel, in his "Outlines of Astronomy," uses the following language: "That the sun, or moon, should by its attractions heap up the waters of the ocean under it seems to them (objectors) very
natural. That it should at the same time heap them up on the opposite side seems, on the contrary, palpably absurd. The error of this class of objectors. . . . consists in disregarding the attraction
of the disturbing body on the mass of the earth, and looking on it as wholly effective on the superficial water. Were the earth, indeed, absolutely fixed, held in its place by an external force, and
the water left free to move, no doubt the effect of the disturbing power would be to produce a single accumulation vertically under the disturbing body. But it is not by its whole attraction, but by
the difference of its attractions on the superficial water at both sides, and on the central mass, that the waters are raised; just as in the theory of the moon the difference of the sun's
attractions on the moon and on the earth (regarded as movable, and as obeying that amount of attraction which is due to its situation) gives rise to a relative tendency in the moon to recede from the
earth in conjunction and opposition, and to approach it in quadratures."
This language gives about the clearest presentation we have of the pulling-away doctrine. But there is no "tendency in the moon to recede from the earth in conjunction and opposition, and to approach
it in quadratures." On the contrary, the tendency of the moon's motion is just the reverse—namely, to approach in conjunction and opposition, and to recede in quadratures. And if so in regard to the
moon and earth, it must be still more so in regard to the earth and her waters under this influence alone, as can be demonstrated.
I am sustained in my position by the best of authority. "Thus our moon moves faster, and, by a radius drawn to the earth, describes an area greater for the time, and has its orbit less curved, and
therefore approaches nearer to the earth in the syzygies than in the quadratures. . . . The moon's distance from the earth in the syzygies is to its distance in the quadratures, in round numbers, as
69 to 70." The authority I quote is Newton's "Principia."
Let us make a calculation, and apply it to the earth and her waters. The moon performs its revolution in 27 d 7 h 43 4/9 m, which is equal to 2,360,606 2/3 seconds. The seconds of time in which the moon makes one revolution around the earth is to one second of time as 1,296,000 seconds in a whole circle is to a fractional part of one second of a circle, which we will call x. Hence $x = \frac{1296000}{2360606\frac{2}{3}} = .54901141+$, which is the fractional part of one second of the circle of the heavens the moon describes in one second of time. The semicircumference of a circle whose radius is one equals 3.141592653589+. Hence one second of this semicircumference equals $\frac{3.141592653589}{648000} = .0000048481368110+$, and the fractional part .54901141+ of one second of this semicircumference is equal to .00000266168242648+.
Let E M and E M’ represent the moon's distance from the earth, M M' the arc which the moon describes in one second of time, and A M’ the sine of this arc. Let E M’ equal 240,000 miles, the moon's
distance, in round numbers, from the earth, and E C equal one mile. The arc B C, being very small, may be regarded as equal to its sine. The length of this arc we have already found. From similarity
of triangles we have the following proportion: A M': B C:: E M': E C, or, by substituting the figures, A M': .00000266168242648:: 240000: 1. Therefore A M' = .6388037823552 +, which is the sine of
the arc passed over by the moon in one second of time. The cosine E A is equal to
$\sqrt{(EM')^2 - (AM')^2} = \sqrt{(240000)^2-(.6388037823552)^2} =$
239999.9999991498535+, which, subtracted from E M', gives A M = .0000008501464+, and this fractional part of a mile, reduced to inches, gives .053865275+, the fractional part of an inch as the distance
the moon falls from a tangent to its orbit in one second of time. Multiply this by the square of 60, and we get, when reduced, 16.159+ feet, the distance the moon descends in one minute, which is
equal to 15.1+ Paris feet, the result obtained by Newton in his "Principia."
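The whole computation above can be replayed numerically. This is only a sketch using the article's own inputs (the round 240,000-mile distance and the 27 d 7 h 43 4/9 m month), not modern values:

```python
import math

EM = 240_000.0                                    # moon's distance in miles (round number)
month = 27 * 86400 + 7 * 3600 + (43 + 4/9) * 60   # sidereal month, about 2,360,606 2/3 s
arcsec_per_sec = 1_296_000 / month                # arc described per second, about .54901141"
arc_rad = arcsec_per_sec * math.pi / 648_000      # one arcsecond of a unit circle, in radians
AM_sine = EM * arc_rad                            # the sine A M' of the one-second arc, in miles
EA = math.sqrt(EM**2 - AM_sine**2)                # the cosine E A
fall_inches = (EM - EA) * 63_360                  # the versed sine A M, converted to inches
print(round(fall_inches, 6))                      # about 0.053865 inch per second
print(round(fall_inches * 3600 / 12, 2))          # about 16.16 feet per minute
```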
The distance the earth falls, in one second of time, toward the sun is about .12144+ of an inch, and the distance the moon falls toward the sun in one second, when in opposition, is about .12084 of
an inch. This, added to the distance the moon falls toward the earth in one second, makes .17470+. Now, .17470 − .12144 = .05326. Hence the moon, when in opposition, moving faster toward the earth
than the earth does toward the sun, by .05326 fractional part of an inch in a second, these two bodies have a tendency to get nearer to each other in this position. The same can be proved when the
moon is in conjunction.
Now let us see how this same law affects the waters of the ocean. The earth moves toward the sun .12144 part of an inch in a second. The waters of the earth, on the side turned away from the sun, are
only 4,000 miles farther from the sun than the centre of the earth. Gravity toward any body diminishes as the square of the distance increases. Hence these waters, influenced by the gravitating power
of the sun alone, and not hindered by any intervening object, would fall toward the sun .12143 part of an inch in one second. Hence the earth has a tendency to move away from the waters with a
velocity of .00001 part of an inch in one second—that is, if these waters were not influenced by the gravitating power of the earth, and only by that of the sun, the earth would be "pulled away" from
its waters at the rate of only the 100,000th part of an inch in one second. But it must be remembered that the waters gravitate, in addition to this, toward the earth at the rate of 16.15+ feet in
one second, and therefore these waters are depressed by gravity, and not elevated. The same may be proved in regard to lunar tides.
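The ".12144 versus .12143" comparison can be reproduced with an inverse-square scaling. One caveat: the earth-sun distance is not stated in the article, so the 93,000,000-mile figure below is my assumption:

```python
d = 93_000_000.0        # mean earth-sun distance in miles (assumed, not from the text)
dr = 4_000.0            # the far-side waters are this much farther from the sun
f_center = 0.12144      # inches the earth falls toward the sun per second (article's figure)
f_far = f_center * (d / (d + dr)) ** 2   # gravity, hence the fall, scales as 1/distance^2
print(round(f_far, 5))                   # about 0.12143
print(round(f_center - f_far, 5))        # about 0.00001, the "pulling away" rate
```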
I close by saying that I am an earnest seeker of truth, and nothing but a sincere desire for truth has impelled me to write these two articles. Any person attempting to prove me in error, with the
same good motive, will be kindly welcomed.
Sum of sets modulo a square
I would be glad to see a reference to the following easy lemma in additive combinatorics: if $A_1$ and $A_2$ are two sets of remainders modulo $n^2$, each has cardinality $n > 1$ and all elements of
$A_i$ are different modulo $n$ (for $i=1,2$), then $A_1+A_2$ is not equal to the set of all remainders modulo $n^2$.
Maybe, it is a partial case of more general and deep:) result.
co.combinatorics additive-combinatorics reference-request
For $n$ even it is easy to get a contradiction by summing up, but for $n$ odd? – darij grinberg Aug 3 '10 at 22:24
1 Well, the easy proof is as follows: assuming the contrary, consider all $n$ remainders with remainder $r$ modulo $n$, and sum up. We get that the sum of all elements of $A_1$ and $A_2$ equals $r+(r+n)+\dots+(r+n^2-n)$. But this does depend on $r$. Contradiction! – Fedor Petrov Aug 4 '10 at 6:39
1 @Fedor, I don't follow your proof. – Gerry Myerson Aug 4 '10 at 7:09
1 It is cute! Line up the elements of A_1 by increasing residue mod n. Underneath them put the elements of A_2 in decreasing order (mod n). The sums (reduced mod n^2) are 0, n, 2n, ..., (n-1)*n in some order, for a grand total of n*n*(n-1)/2, which is 0 mod n^2. Now cyclically shift the second row. The n sums (reduced mod n^2) would be 1, n+1, ..., (n-1)n+1 for a grand total of n*n*(n-1)/2 + n*1, which is n mod n^2. But the grand total should remain unchanged since it is the sum of 2n integers. – Aaron Meyerowitz Aug 4 '10 at 7:40
1 $n^2(n-1)/2$ is not always $0$ modulo $n^2$, but it always differs from $n^2(n-1)/2+n$ – Fedor Petrov Aug 4 '10 at 9:12
2 Answers
There must be an easier proof but here is a nice approach which can indeed lead to deeper results (feel free to edit for math display, I tried): Techniques with characteristic
polynomials and roots of unity can be very powerful. I like the way that the appropriate lemmas are explained in my paper with Ethan Coven "Tiling the Integers with Translates of One
Finite Set" http://arxiv.org/abs/math/9802122 or Journal of Algebra v 212 (1999) p 161-174. One does not need their full generality for this problem but perhaps for deeper results.
I'll sketch this result which implies what was asked for: Suppose that A and B are sets of size #A and #B so that A+B is a complete set of residues mod N=#A#B. Let p be a prime dividing
N. Then exactly one of the sets has its members equally distributed mod p.
digression: Lemma 3.2 from the paper above (not needed here) shows that at least one of the following is true:
1) No member of A-A is relatively prime to #B
2) No member of B-B is relatively prime to #A end of digression
Consider the corresponding polynomials $A(x)=\sum_{a \in A}x^a$ and $B(x)=\sum_{b \in B}x^b$. Then
i) A(1)=#A and B(1)=#B
ii) A(x)B(x) is a sum of N distinct powers of x, one from each residue class.
iii) $A(x)B(x)=(x^N-1)Q(x)+\frac{x^N-1}{x-1}$ for some polynomial Q(x).

iv) Every irreducible polynomial dividing $\frac{x^N-1}{x-1}$ divides at least one of $A(x)$ and $B(x)$
As an example consider A={0,9,13,16,29,32} B={0,10,12,22,24,34} with A+B a complete set of residues mod N=36.
evaluated at $x=1$ this becomes 36=2 * 3 * 2 * 1 * 3 * 1
In general the irreducible polynomial divisors of $\frac{x^N-1}{x-1}$ are the cyclotomic polynomials corresponding to the divisors of N. Evaluated at x=1, each is either 1 (composite divisor) or a prime p (prime power divisor), and these primes have product N. Since A(1)B(1)=N and A(x)B(x) is divisible by all the prime power cyclotomic divisors of $\frac{x^N-1}{x-1}$, and these evaluated at 1 also have product N, each divides just one of A(x) or B(x), and all other polynomial divisors evaluate to 1 at 1. In particular: for each prime divisor p of N, only one of A(x), B(x) is divisible by $\frac{x^p-1}{x-1}$, and only that one has its corresponding set equidistributed mod p.
In our example A is a complete set of residues mod 6, so A(x) is divisible by (1+x) and by (1+x+x^2). Since A(1)=6, A(x) can't have either of (1+x^2) or (1+x^3+x^6) as a factor. But they do divide A(x)B(x) and hence they divide B(x). This means that neither (1+x) nor (1+x+x^2) can divide B(x), again since B(1)=6. Hence, B is not equidistributed mod 2 (or mod 3) and
certainly not mod 6.
By the way, $B(x)=(x^{10}+1)(x^{24}+x^{12}+1)$ and $A(x)=(x^{13}+1)(x^{32}+x^{16}+1)$ (mod $x^{36}-1$)
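The snippet below is not from the thread; it is a brute-force sketch (with hypothetical helper names) that checks both the worked example above — A+B really is a complete residue system mod 36, with B not equidistributed mod 6 — and the original lemma for small n:

```python
from itertools import product

def complete_mod_n_sets(n):
    """All size-n subsets of Z/n^2 whose elements are pairwise distinct mod n.
    Such a set necessarily takes exactly one representative from each class mod n."""
    for ks in product(range(n), repeat=n):
        yield tuple(r + k * n for r, k in zip(range(n), ks))

def sumset_covers(A, B, m):
    """Does A + B hit every residue class mod m?"""
    return {(a + b) % m for a in A for b in B} == set(range(m))

# The worked example from the answer does tile Z/36 ...
assert sumset_covers((0, 9, 13, 16, 29, 32), (0, 10, 12, 22, 24, 34), 36)

# ... while under the lemma's hypotheses no pair can, checked exhaustively:
for n in (2, 3, 4):
    pairs = product(complete_mod_n_sets(n), complete_mod_n_sets(n))
    assert not any(sumset_covers(A, B, n * n) for A, B in pairs)
print("lemma verified for n = 2, 3, 4")
```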
I don't see how any of the facts you wrote help with this problem. Here we need to show that $x^{n^2}-1$ does not divide $A(x)B(x)$ in your notation... – Gjergji Zaimi Aug 4 '10 at
Oh, Aaron, thanks! That's what I actually needed. Gjergji, I think Aaron is correct: we assume that $A+B$ is the full set of residues modulo $N$ ($N=n^2$ in my case), then we have such an equality and that's it. – Fedor Petrov Aug 4 '10 at 7:20
I show that if $x^{n^2}-1$ does divide A(x)B(x) and A(x) is equally distributed mod n then B(x) is not equally distributed mod n. – Aaron Meyerowitz Aug 4 '10 at 7:25
Replace each set by a sum of powers of x. Let p be a prime like 5 dividing n. Under your condition 1+ x + x^2 + x^3 + x^4 would divide both polynomials. Show it only divides the product
once. I'd be less coy but I am typing this on a phone in a power outage! I've used those ideas to great effect. If n is prime then one set not only is not distinct mod n but actually has
all elements equal mod n.
Ok, this is messed up. {0,1,4,5,8,9}+{0,2,12,14,24,26} is a counter-example. But note that mod 6 the second set is {0,2,0,2,0,2}. I'll post a clearer answer and take this one down. –
Aaron Meyerowitz Aug 4 '10 at 4:25
2 @Aaron, Fedor calls it an "easy lemma" so I infer he already has a proof and just wants a citation. – Gerry Myerson Aug 4 '10 at 5:50
I suppose you are right. The approach I give shows (inter alia) that if A+B is a complete set of residues mod N=#A#B and both A and B contain 0 (no loss of generality in assuming that)
then at least one of the two contains no elements relatively prime to N. – Aaron Meyerowitz Aug 4 '10 at 6:48
Differential eqn
1. January 22nd 2013, 12:51 AM #1
2. January 22nd 2013, 05:49 AM #2
Re: Differential eqn
I suppose that it is a textbook exercise, so it cannot be as difficult as solving the above ODE.
This leads me to think that there is a typo in the equation.
For example, the corrected ODE might be as shown below. Then the integrating factor would be 1/(xy)^4
3. January 22nd 2013, 06:40 AM #3
Re: Differential eqn
There is no typo in the problem.
4. January 22nd 2013, 07:27 AM #4
Re: Differential eqn
It isn't exact, I can't find an integrating factor, and it's not homogeneous. I'm starting to side with "JJ". You may have transcribed it correctly, but I'm guessing the book itself might
have a typo.
5. January 22nd 2013, 08:02 AM #5
Re: Differential eqn
I divided it by xy, then separated the dx and dy terms to get
d(xy) + …, and I can make this part exact by calculating the integrating factor in terms of y and then solving it.
6. January 22nd 2013, 09:15 AM #6
Secret of Archimedes' Tomb
Before anything else, there’s something I’d like to ask…
If you answered no, well, I didn’t either. That was until I joined the camp blog. Because of the blogging activity, I came across an article on Archimedes. I was amazed at how a person’s intellect
can come up with ideas beyond our expectations and now, I’ll share the wealth.
Archimedes is considered one of the three greatest mathematicians of all time. His greatest contributions to mathematics were in the field of Geometry. He wished to have a monument of a sphere
enclosed by a cylinder as his tombstone.
With a cylinder and a sphere, he was able to discover such an important concept of spheres: its surface area.
Source: math.about.com
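The result he prized can be stated precisely. For a sphere of radius $r$ inscribed in a cylinder of radius $r$ and height $2r$:

```latex
V_{\text{cylinder}} = \pi r^2 \cdot 2r = 2\pi r^3,
\qquad
V_{\text{sphere}} = \tfrac{4}{3}\pi r^3 = \tfrac{2}{3}\,V_{\text{cylinder}},
```

and likewise the sphere's surface area, $4\pi r^2$, equals the lateral area of the cylinder, $2\pi r \cdot 2r$.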
Some of his contributions are:
More about Archimedes…
Tomb of Archimedes -www.math.nyu.edu
Archimedes: Discoveries -www.lycos.com
Math Mama Writes...
Right now I'm not thinking about how we use technology in the classroom. This post is about how we use technology as a problem-solving tool (or crutch?) when we're working on math problems.
This post was inspired by a conversation with Rick, at Exploring Binary.
Rick: I have to learn to go to Wolfram Alpha for mathematical queries such as the one I mention in the article. Typing 4, 24, 124, 624, 3124 into Wolfram Alpha gives the answer I sought directly:
a[n] = 5^n – 1.
Sue: But then you wouldn’t have had as much fun! And I’d say that’s the problem with Wolfram Alpha. Having fun with math is often hard work. It’s so much easier to click your way to an answer.
But just not the same.
Rick: For me, it goes beyond Google and Wolfram Alpha — it’s the availability of computers in general. I often find myself slapping together a small program or script or spreadsheet before I sit
down and think about a calculation first. Sometimes this leads to serendipitous discoveries; other times it leads to more “Well, Duh!” moments
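Rick's closed form is easy to sanity-check (a trivial script, not part of our exchange):

```python
# The terms Rick typed into Wolfram Alpha, against his closed form a[n] = 5^n - 1
terms = [4, 24, 124, 624, 3124]
assert terms == [5**n - 1 for n in range(1, 6)]
print("a[n] = 5^n - 1 matches all five terms")
```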
Maybe it’s good I started my serious study of math before the computer era really got underway. When I started high school ('70), calculators weren’t yet in common use. When I started college ('74),
we didn’t have graphing calculators. I remember drawing hundreds of graphs while I was in calculus. For a long time, I felt like I wasn't a real mathematician because I never learned how to use a
slide rule. All the math people I knew could use one, because they'd been doing difficult calculations for a few years more than I had, and hadn't had calculators. There was no internet (in my
personal life, anyway) until well after I was done with my formal education (’89).
I saw a game for learning factoring, called Divisor Miser, at the Colorado NCTM conference in ‘82, I think. Someone had programmed it on a Vic-20 or TRS-80, or something like that. But I don’t
remember computers being used to solve math problems in the computer course I taught in the mid-80’s.
I taught junior high for a bit, and one of the math teachers wanted me to help him write a program to solve a probability problem. But he had the calculations all wrong, and the right calculations
were simple enough that a computer program would have been silly. I was shocked at his bad understanding of math. He was our department chair. I knew almost no probability at the time, but I learned
enough from the student materials (extra credit stuff) to figure out that problem. Yikes!
And yet, I love how technology can help us see. Here are some examples that come to mind:
1. Dan wanted to simulate an amusement park ride he calls a scrambler, and pulled up Geometer's Sketchpad. Nice results.
2. I used the matrix solution capabilities of my TI-83 when I was solving the regions in a circle problem. (Put n points on a circle, connect each to each with a straight line, how many regions in
the circle?) If you already understand the solution to that, you may think this was a totally unnecessary use of technology. But it helped me to see. I'd like to post about that problem soon.
3. At last year's Math Circle Teacher Training Institute, this problem was posed: "Can you tile a rectangle completely, if you require that only squares are used and each square must be a different
size from all the others? How?" We used Geometer's Sketchpad to play with our sketches of how it might work. I dropped out of the group thinking about this. When they presented their solution the
next day, they mentioned having used Mathematica to solve some equations.
4. Back to my conversations with Rick. The one above is on his blog. We've been having another conversation at the same time in the comments to my post about the Carnival of mathematics. I wondered
about the pattern in decimal fractions, .99 in particular, when they're converted to binary. He gave me .99 in binary, and pointed out that it repeats. I started playing with the conversion by
hand, got frustrated, and turned to Wolfram Alpha for assistance. (My first time using it for a serious purpose.) Without any more tedious calculations, I could think about patterns. Turns out .9
is a repeating decimal in base 2 with a four-digit repetition, but .99's representation in binary takes 20 digits. Weird. I haven't seen the light yet on that one.
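The period lengths themselves can be checked directly: the repeating block of p/q in base 2 has length equal to the multiplicative order of 2 modulo the odd part of q. A small sketch (my own helper, nothing official):

```python
def binary_period(p, q):
    """Length of the repeating block of p/q (in lowest terms) in base 2;
    0 if the expansion terminates."""
    while q % 2 == 0:          # factors of 2 only delay the repetition
        q //= 2
    if q == 1:
        return 0
    k, r = 1, 2 % q            # multiplicative order of 2 modulo q
    while r != 1:
        r = (r * 2) % q
        k += 1
    return k

print(binary_period(9, 10))    # .9  -> period 4
print(binary_period(99, 100))  # .99 -> period 20
```

The periods differ because .9 = 9/10 leaves an odd part of 5 (and 2 has order 4 mod 5), while .99 = 99/100 leaves an odd part of 25 (where 2 has order 20).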
What do you think? Tool, crutch, or both?
How to Read Mathematics, by Shai Simonson and Fernando Gouvea. Excellent. I'll definitely be using this with my students. (There's one little problem. They say 'term', about half-way down, when they
mean factor.)
Check this out, by Alison Blank: Math is not Linear. Her blog is great, too.
Cool graphic of a wave, made using Mathematica.
Detexify lets you draw online. You draw one symbol, and it'll tell you what you drew. If you'd like to start using LaTeX online, this will find symbols for you. You can also use a site called sitmo.
Maria explains that here. A few people have raved about Detexify. I'm not sure why. Is it just too cool that it does character recognition? Or is it more powerful than sitmo?
Video: Bobby McFerrin on the Pentatonic scale. I looked up pentatonic on Wikipedia, and the business about hemitonic or anhemitonic was beyond me. But I liked seeing the connection with the Orff
method in kid’s music education.
Off the math topic, but on the I'm-writing-a-book topic: Uncertain Principles pointed me to this post about settling in to write.
It's up at Reasonable Deviations. But much of it is above my head. The entries that looked interesting to me were:
• Rick Regan on decimal (base ten) numbers like 999 having binary representations that end in 111.
• Math Alive (Look at the lecture notes for pdfs.)
• Computer Science Unplugged (I downloaded a 111 page pdf of what look like great activities for young kids.)
• I'm not fond of war studies, but the German Tank problem was a very interesting application of statistics, which I'm likely to use in my stat classes.
• Not part of the carnival, but up at the top of the blog, was the previous post title, on a binary marble adding machine, made out of wood.
(It seemed the polite thing to do was to leave out the links, so you'd go visit RD to see these, and give them more traffic.)
If binary numbers are considered a part of computer science, then 60% of what I could really follow was the computer science material. Hmm...
5 dimes, times 10 cents each. My son (7 years old) says 10, 20, 30, 40, 50 while holding up fingers. So he doesn't see how ten times works yet. He's a smart kid, so this is surprising to me.
I have no interest in doing anything about it. I trust he gets enough math thinking in our family to be just fine. I'm just intrigued.
I'm reading Holes to my son. Am I obsessing, or would this make a good problem?
"... X-Ray had his own special shovel, which no one else was allowed to use. X-Ray claimed it was shorter than the others, but if it was, it was only by a fraction of an inch.
The shovels were five feet long, from the tip of the steel blade to the end of the wooden shaft. Stanley's hole would have to be as deep as his shovel, and he'd have to be able to lay the shovel flat
across the bottom in any direction."
[If that scene's in the movie, it'd be great to get a clip of it, but I have no idea how to do that.]
On the next page, after his first successful shovelful of dirt, Stanley thinks "only ten million more to go."
On Obviousness
I'm embarrassed to say this, because it seems so obvious now that I see it. For years I've been intrigued by the fact that, when you take the derivative of a volume formula, you always get the
object's surface area. Suddenly, thinking about the problem I had in mind for the Holes scene above, it was obvious to me. (Painfully obvious, considering how many times I pointed that 'cool' fact
out to my students, and wondered aloud why it was so.) Don't worry if it's not obvious to you, you probably haven't been teaching calculus for the past 20 years.
Derivatives measure rate of change. The volume changes in a small bit of time by adding or subtracting at the boundary. The surface area determines how much space there is for the change to happen
in. Does that make any sense? No? Maybe it'll help if I get more specific.
Stanley's hole has to be a certain diameter and height, and X-Ray gets to dig a hole with a slightly smaller diameter and height (the difference is called delta x in calculus). If you imagine a thin
layer all around the edge of X-Ray's hole that Stanley will still have to dig out, you see that the surface of the hole is (sort of) the difference between their holes. The difference in shovel
lengths is the change in x (delta x), and the difference in amount of dirt they have to dig is the change in y (delta y). Change in y over change in x is rate of change (aka slope, aka derivative).
If it's still not obvious, either you'll want to play with rate of change ideas more before trying to understand, or I'm not explaining well.
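For a sphere the shell picture matches the symbols exactly:

```latex
V(r) = \tfrac{4}{3}\pi r^3
\quad\Longrightarrow\quad
\frac{dV}{dr}
= \lim_{\Delta r \to 0} \frac{V(r+\Delta r) - V(r)}{\Delta r}
= 4\pi r^2
= S(r),
```

and the numerator V(r+Δr) − V(r) is a thin shell whose volume is approximately the surface area S(r) times the thickness Δr, which is why the ratio tends to the surface area.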
My blog list in Google Reader is long. I'm making it longer today. samjshah wrote about making resolutions for the school year, and Kate mentioned "...blogs we all read". I wanted to have a list of
all the math teacher blogs. Sam pointed to a list at Moving Forward. They both said looking at people's blogrolls is better than any attempt at a comprehensive list. But the list at Moving Forward is
great. Maybe Scott McLeod is weeding out the not-so-great ones.
I'm not even halfway through it and I've already found:
• Solving Problems at Kiss My Asymptotes
• A cool calculator trick that goes well with the 2 payment plans (linear and exponential increase) problem, at Math Teacher Mambo. (I can't get a link for just that post. It's in July.)
• Second Time Around, a fun experiment I'll use with my stat classes, at Continuities
Enough for one day! I've added the blogroll widget so you can see my list.
I may get to be part of a project that would help elementary teachers learn more math. Right now it's wide open. What would you teach them? Or if you're an elementary teacher, what would you like to learn?
Is it back to school for you this week? Or are you still enjoying summer vacation? Maybe you’re an unschooler who enjoys math, and there is no on button for school! Whoever you are, enjoy the feast!
We have money problems (doesn't everyone?), a dish of art with math on the side, probability and statistics, factors and primes, problem-solving, and a miscellaneous bunch at the end.
A Riddle for the number 14: What’s the next number? 1, 5, 14, …
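One common continuation (spoiler, and only my reading, not necessarily the riddle's intended answer) is the partial sums of squares, the square pyramidal numbers:

```python
from itertools import accumulate

# 1, 1+4, 1+4+9, ... : the square pyramidal numbers
pyramidal = list(accumulate(k * k for k in range(1, 6)))
print(pyramidal)   # [1, 5, 14, 30, 55]
```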
For those of you working in schools, here's some food for thought. JD writes:
In the United States the number of actual teaching hours (not planning, not meeting, not improving) is almost double the number of hours in Finland, more than double Japan, and the greatest in
the world. Look at the data, read the analysis, over at 3σ -> left.
Maria Miller, at Homeschool Math Blog, has great ideas about how to help kids Learn to Recognize Coins.
Chad Orzel, at Uncertain Principles, offered an estimation contest; Mary O'Keefe, at Albany Area Math Circle, analyzed the contest, first in terms of winner's curse, and then in terms of information
cascades. I chimed in with some ideas about improving your estimation skills. The contest's over, but the posts are still fun.
Tanya Khovanova presents a group of puzzles about weighing coins, and explains the concept of 'revealing coefficient', in her post titled Unrevealing Coin Weighings.
Math and Art
Dan McKinnon has put together a great post on origami and its mathematical side, over at Mathrecreatio n, just in time to help me out with my upcoming math salon. Thanks, Dan! An older post of his on
Sonobe units also looks helpful. (This photo comes from Sara Adams' blog, Happy Folding.)
Here's a math, art, and literature resource I found while wandering around Math Hombre's blog. Make your own One Page Wonder, a story book that can be read in lots of different orders. The geometry
of it is mind-boggling. Hmm, I like writing, but I'm no good at drawing, I wonder if I can do this... It reminds me of some weird form of poetry.
Probability and Statistics
The best lessons seem to produce lots of noise or none. Ryan O'Grady has a lovely one, The Quietest Lesson, analyzing first bad jokes and then other writing samples, to find average word length and average number of words in a sentence. The same sort of analysis, Stylometry, was used to determine the authors of some of the Federalist Papers, and of other writings whose authorship was disputed.
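Those two statistics are easy to script; a minimal sketch (the punctuation-based tokenization is my own simplification, not Ryan's method):

```python
import re

def stylometry(text):
    """Average word length and average words per sentence for a sample."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(map(len, words)) / len(words), len(words) / len(sentences)

avg_word_len, words_per_sentence = stylometry(
    "Call me Ishmael. Some years ago I went to sea.")
print(avg_word_len, words_per_sentence)   # 3.5 5.0
```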
Mr. D's (I Want to Teach Forever) Deal or No Deal game in the classroom sounds like lots more fun than that TV show. And he's got some more ideas to make teaching and learning probability fun and
successful, including 3 fun probability games and projects.
And I figured Jonathan's (jd2718) puzzle belonged here, because I think it takes the kind of counting we work so hard on when we're figuring probabilities.
Factoring and Primes
Jimmie thinks of blogging as electronic scrapbooking. At Jimmie's Collage, she's posted a great article, collecting resources for Living Math with Factors, Multiples, and Primes.
John Golden, at Math Hombre, gives us a word game and a factoring game that are both about Running Out of Options.
Sam Shah, at Continuous Everywhere But Differentiable Nowhere brings us Factoring, Schmactoring. I'm enough of a math nerd to enjoy factoring polynomials, and if you look in a textbook, most of them
factor. Sam has put together a table to show us that most quadratics are not factorable. Hmm...
When math books turn to the topic of problem-solving strategies, they almost invariably either mention George Polya or just show steps similar to those he proposed:
1. Understand the problem.
2. Make a plan for how you might solve it.
3. Carry out your plan.
4. Look back. (Check your work, see how it might apply to other problems, etc.)
Math hombre gives us a good introduction to Polya, and calls those steps phases because he wants us to know we might need to go back and forth between them. He also recommends an article by Alan
Schoenfeld. I'm hoping to post a review of Schoenfeld's book, Mathematical Problem Solving, soon, and was excited to see that link.
I can't leave the topic of problem-solving without mentioning Paul Zeitz, whose book, The Art and Craft of Problem Solving, gives a whole new take on how to think about it. He talks about strategies,
tactics and tools, and then offers some killer problems to help you practice the tools, tactics and strategies he shows you. Oh yeah!
Evelyn Saenz loves to blog about frogs. Frogs for skip counting, frogs for game playing, and frogs in their natural habitat. Her contribution comes to us through her Squidoo lens, but she also has
her own blog, Hands-On Learning.
Sam Alexander, aka Glowing Face Man, has written a great post about applications of higher math: why scratches on cds don't interfere with them playing well, how search engines work, and modern
I've been finding so many gems in my wanderings lately, I just had to include a few of those here as well:
• There's a totally approachable book about Fourier Series called Who is Fourier?, by (get this!) Transnational College of LEX. This group is interested in learning lots of languages, and when
members wanted to understand language, they decided they needed to understand sound. That's usually done using Fourier series. They researched the topic, and ended up writing their own refreshing
text on it. I was reminded of this lovely book when I happened across Matt Springer's blog Built on Facts, with his Sunday Function, most of which addresses using Fourier series to mimic a simple
periodic function.
• Marcus du Sautoy is one more voice added to the chorus of folks saying there's lots more to math than what you see in classrooms.
• This last post will become my reference list each time I'm deciding what to learn next among the technological wonders available these days. My goal is to be totally tech-savvy in my classroom next
year. Maria Andersen taught a Technology Boot Camp at Muskegon Community College (my old stomping grounds), and gives us a rundown.
= • = • = • = • = • = • = • = • = • = • = • = • = • = • = •
The next Math Teachers at Play blog carnival (#15) will be hosted by Maria Miller at Homeschool Math Blog, on September 4. Submit your blog article using the carnival submission form. Past posts and
future hosts can be found on the blog carnival index page.
I saw a few comments in my blog reader on first day activities over at dy/dan, First Day Wiki ('07) and This New School Year ('08). I loved his ideas. I have a few of my own you all might like if
you're headed back to teaching. (The sabbatical work I'm doing is marvelous, but I am going to miss teaching!) Dan taught high school and I teach at a community college; perhaps that has some bearing
on what we do differently first day, but there's lots of overlap.
• On my last first day, in January, I put this on the board: “My ideal math class is a learning community. My goal is to help you become a community of learners." I also asked them to each add
something to their choice of one of 3 lists on the board: 'Something I know in math', 'something I don't get', 'something I'm curious about'.
• I have them fill out 3x5 cards with their name, phone, email, and maybe a question or two about their math background. I ask them to leave a space for their photo. I'll use the cards to do
attendance and call on people. (In the future, I'd like to jot notes on the cards: things they say and do, so I can remember.)
• I think it's important to try to call on people as equally as I can. (See Failing At Fairness for research that shows how much more boys are called on than girls, and how differently they're
responded to.) I tell them the first day: I don't want to intimidate you, so you can always say 'pass' if you don't want to answer. But if you want to be brave, you shouldn't pass just because of not
knowing the answer. You can say "I don't know", and I'll ask you easier questions and we'll work back up to the original question. I still end up asking questions to the class as a whole sometimes,
and when the same 3 people raise their hands all the time, I start telling them I'll wait for other hands, in order to ‘spread the wealth’.
• It's vital to learn your students' names, but I have a bad memory. In recent years, I've brought my digital camera, and taken pictures. I found it used up too much time if I took the pictures, so
now I get a volunteer to be the photographer. They can do a photo of 3 people at once, and afterward I organize in iPhoto, copying and cropping, so I have head shots of each student. I print and have
the students put their names on their photos during the next class. Then I get to cut them out and use a glue stick to put them on those 3x5 cards. That work with the photos helps me get started with
learning the names.
• I send around a sheet of paper with my name, phone number, and email. I suggest they add theirs to the list if they want, and tell them no one has ever had problems from it that I know of. Then I
copy the list, and give it to them. I talk about how studies have shown that students who work in groups do better.
• I offer ‘donut points’ for catching me in mistakes, so that they’ll question what I’m presenting, instead of assuming it’s right just because the teacher said it. After the class has caught me 30
times, I bring in donuts.
• I use ‘thumbs up-down-sideways’ to find out the level of understanding. (And have found out that it’s important not to ask people with their thumbs down to elaborate, or else fewer people will use
their thumbs at all.)
• I ask them if they think you need a good memory to learn math. Most think so. I tell them about my terrible memory and say it's all about connections and understanding why things work. I made a
poster that says: ‘Real mathematicians ask why’. It's in my classrooms and my office.
• Last year I was working way too hard to have time for correcting homework. They have answers in the back of the book, so they don't need me to correct it. They just need me to give them credit for
doing it, so that it gets high enough in their priority lists to get done. So (I learned this from a high school teacher at Middle College High School which is housed inside our college. Thanks,
Eric!), I stamp their homework. (Buy a self-inking stamper for this.) If the homework is complete, or close to it, they get two stars. At least halfway there gets one star. They turn it all in at
test time, and I record the number of stars. I can do that while they're taking the test.
• Some classes get the math autobiography assignment. Lately, though, I've gotten tired of it, and just offer it as extra credit.
• College textbooks are outrageously expensive - generally over $100 for math texts. My goal is to use the textbook as little as possible. So I told my beginning algebra students that they didn't
have to get the official textbook, but could buy any Beginning Algebra text. I changed my homework sheet to show the topic names, and told them to pick 10 problems from their book on the right topic.
Lots of students have tried to come to class with no book, because they couldn't afford the $100-plus. Now they didn't have an excuse to have no book. We discuss where used bookstores are, and online
sources for used books.
Here were some ideas I heard at a Great Teachers Seminar that I liked and hope to try in future:
• Send email before semester begins. 2 days before gives them a day to get their book.
• Put students in the position of teaching what they’re learning.
• Ask at end of class: ‘What’s the most interesting thing you learned this week?’
• Ask: What are you curious about? (What are they already interested in?)
• Have a question for them each time you walk into the classroom.
• One teacher pairs students up and has them fill out a form which starts "I am my brother's and sister's keeper. I will help my partner succeed. I commit to..." Then there are 3 blanks, and the pair
discuss how they can help each other. If one is absent, she'll ask the other about it.
• The Math Students' Bill of Rights is included in my syllabus for all lower level classes. [Added on 8/25]
[Note to self: Dan's First Day Wiki has a great stacking cups activity. He posted a different cup stacking activity here.]
Sometimes even the spam is interesting.
I'm working on the next issue of Math Teachers at Play, which goes up on Friday. So far I've gotten 4 real submissions for it, and about 20 spams. The spams are things like '50 tips for teachers',
and '100 great online college degree programs'. I check each one to see if it would fit our blog carnival. Usually the title is enough ... Delete.
So today I got one that was something like '50 serious educational games'. I thought it might be worth checking into whether there were any good math games. One said it was math, but it didn't look
promising. There was a science game, though, that got me excited: FoldIt. It's real science, protein folding, and it looks like it might be an interesting game. It sounds like something I might even
call math, since it's about space and shape issues. An even bigger bonus: Apparently, people playing the game may even be able to help further the study of protein folding, by helping the researchers
understand how to tell computers what to look for (if I'm understanding this right).
What shape will a protein fold into? Even though proteins are just a long chain of amino acids, they don't like to stay stretched out in a straight line. The protein folds up to make a compact
blob, but as it does, it keeps some amino acids near the center of the blob, and others outside; and it keeps some pairs of amino acids close together and others far apart. Every kind of protein
folds up into a very specific shape -- the same shape every time. Most proteins do this all by themselves, although some need extra help to fold into the right shape. The unique shape of a
particular protein is the most stable state it can adopt. Picture a ball at the top of a hill -- the ball will always roll down to the bottom. If you try to put the ball back on top it will still
roll down to the bottom of the hill because that is where it is most stable.
Unfortunately, I downloaded the Mac version and got the message: You cannot open the application “Foldit” because it is not supported on this architecture. Bummer. I wonder if any of you would like
to try it out, and tell me what you think? (If it looks good to any of you, I'll try harder to contact them and ask how I can get it working on my mac.)
If it's good, I would want to thank the spammer, but that turns out to be dangerous. Another carnival host found that out the hard way. She suddenly started getting inundated by junk email, and
traced it back to a message she had sent to one of these people explaining why their submissions didn't fit. So I'll have to say "Thanks, KH!" here, even though I know she'll never see it. ;^)
I just discovered Alan Kay. You can read his bio at Wikipedia, or watch him give a TED talk. But I'm posting about him because I was fascinated by his thoughts on science, why it's counterintuitive,
and how we might teach it better. Sounds like some of what I'm trying to get a handle on about math and how to teach it.
His thoughts about what science is seem to run in the same direction as my thoughts on what math is. I hope to post more on this later.
Here's Alan:
Adults and children just love "being creative" and "expressing themselves". And, this is especially the case around the world when computers are introduced with applications that allow people to make
Let's look at this process over human presence on the planet. We find invention coupled with dogma. Many who have studied this have likened it to an erosion model of memory, both individually and
culturally. (Once a little groove is randomly made by water it becomes very efficient in helping more water to erode it further.)
So "creative acts" are resisted, but once accepted for one reason or another will cling, and most often far beyond their merits. Most creativity is more "News" than "New", that is it is extremely
incremental to the erosion gully.
Science attempts to be completely different than this. We are dancing with a universe of which we can only detect some of its shadows, and the universe leads. We are not free to be creative to make
up stories we like or draw pictures we like unless they can be shown to fit to a high degree with the dance. For many important reasons these maps we try to make cannot be true, even if the maps
themselves are internally perfect and beautiful.
So science goes *quite against* what anthropologists have determined are strong built in human characteristics about explaining the world.
And even trained scientists often have real problems with this. Our brains want to *believe* but science is not about belief.
This is one of the reasons that science almost requires a community of scientists, some of whom are less invested in particular theories and rationalizations of them than others. It is these more
disinterested more skeptical scientists who help the invested behave -- and vice versa. So science is a kind of human system for deeply debugging human notions and rationalizations (about
It is this epistemology that has to be learned (really one trains oneself in it) before one can deal with "written down science". There is nothing in any science writing that can help anyone with the
goodness of the mapping. Why? Because once one gets to language, with or without the aid of mathematics, one is using the same representation systems that are also used for religion. One can say
anything in language (for example all languages contain "not", which means any claim can be restated as a counter claim!). This extends quite simply to any representation on a computer.
So the basic process of learning science is about doing direct stuff and imbibing its epistemological stances. However, so much successful science has been done -- and science not only builds on
itself but requires its findings to be constantly intercorrelated -- that no scientist can recapitulate all this by direct experiment. So the learning process is (a) get down the epistemology by
direct contact with the real processes (b) then you can deal with claims that you won't be able to directly substantiate.
This amount of rigor is difficult for we humans generally. But it is just this rigor that made the enormous differences in how well we can do the dance over the last few hundred years.
Trying to do less loses both the dance and the art. So we can think of science as the art form in which the greatest creativity ever must be used with the greatest constraints and possibility for
failure. It goes far beyond mathematics and (say) composing something really beautiful in strict counterpoint, though both of these have strong tinges of this style.
I think it is possible to do the real deal with children, and we've managed to show this (for example, with the Galilean gravity investigation [Sue's note: this is shown in the TED talk]). The
ability of the computer to do simple incremental addition very quickly gives us a differential mathematics that is completely understandable to the children that is also fast enough to carry out the
integrations over time directly in real-time. For 10 year olds, this is really good science, and I would neither advocate them being less nor more rigorous.
For children, we mainly want to find really good ways to help them with (a) above.
Each age can match up to real science projects devised by us (it is *we* who have to be really creative!).
And as important as is creativity, *we* simply must understand the real and deep natures of the subjects we are trying to help children learn. Most importantly, we have to understand what
simplifications retain the underlying epistemology of the content, and which simplifications completely undermine and confuse the subject matter. (The latter is seen almost invariably in most K-8
classrooms in the US with or without computers - the teachers simply don't understand the stuff, and the school district and state almost always water the stuff down to lose it in futile attempts to
get better test scores, regardless of whether the testing is now just an empty gesture.)
So most of the small percentage of the children who do become fluent in real science do so outside the regular classroom, and very often via contact through some knowledgeable adult.
Adding the blog of Albany Area Math Circle to my Google Reader has added delight to my mornings. I'm seeing so many fun problems on there! Yesterday she linked to one here with a picture that begs
you (well, me at least) to start computing.
Guess the total dollar value of the change in this box, and win a galley copy of Chad Orzel's soon-to-be-published book, How to Teach Physics to Your Dog.
Edited on 8-17-09:
He's posted the answer. I was way low. As was the average answer, dubbed "the wisdom of the crowd". Orzel asks why. I'm thinking we tend to estimate low on money.
What would help us estimate better?
Mathsemantics, by Edward MacNeal, addresses this in one chapter. (I've assigned it as reading to many of my classes.) He talks about having a semantic web in your head that includes a few important
numbers, like:
• population of the earth
• population of the U.S.
• population of your state
• radius of the earth
And then he recommends estimating often, committing to your estimate somehow, and then finding out the real value of what you estimated. For example, estimate your arrival time when you're in the
car, tell the person next to you, and notice the time when you do arrive at your destination.
There are also books that give lots of examples of estimation problems that involve thinking step by step. I've skimmed through Guesstimation: solving the world's problems on the back of a cocktail
napkin, by Lawrence Weinstein and John Adam, and Geekspeak: How Life + Mathematics = Happiness, by Graham Tattersall. (I liked Guesstimation better. Neither book was compelling reading, but
Guesstimation will work well as a reference, whereas Geekspeak doesn't live up to its title at all.)
One classic of this type of problem is "How many piano tuners are there in New York City?" I've worked through this with many classes: What is the population of New York City? What proportion of
households have pianos? How many more pianos are in the city? How long does it take to tune a piano? How often do the pianos get tuned? Put it all together, and if you estimated the pieces decently,
you get an answer that's between half and double the true value, which is pretty good. [What does true value mean? How do we count people who work part-time, or who have another profession and tune
pianos once in a while? Perhaps there is no exact right answer...]
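The arithmetic behind the piano-tuner estimate is worth seeing written out. Here's one version in a few lines of Python; every input number below is an assumed round figure for illustration, not a researched fact, and reasonable people will pick different ones.

```python
# Fermi estimate: piano tuners in New York City.
# All inputs are rough assumptions, chosen only to show the method.
population = 8_000_000            # people in NYC
people_per_household = 2          # rough average household size
pianos_per_household = 1 / 20     # guess: 1 household in 20 owns a piano
tunings_per_piano_per_year = 1    # tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 tunings/day, 5 days/week, 50 weeks

households = population / people_per_household
pianos = households * pianos_per_household
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # with these inputs: 400 tuners
```

The point isn't the number 400; it's that each step is a small, checkable guess, and the errors tend to partially cancel.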
I've done all this, and I'm still not that great at estimating. (I thought the visible layer of coins in that box was worth about $10, and that there were probably 6 to 10 layers like that in the
Anyone have other ideas about how to help others, and myself, learn to estimate better? Anything you'd add to this if you used it in a class?
• Math Teachers at Play #13 is at Blog, She Wrote. (Old news. I'm still catching up from a week away from the web.)
• I'll be doing Math Teachers at Play #14 here, to be posted on Friday, August 21. Send me pointers to your favorite new blog items, at mathanthologyeditor on the gmail system. (Trying to avoid
giving my email address to spambots...)
• The next Richmond (CA) Math Salon will be on Saturday, August 22, from 2 to 5 pm. We'll be doing origami. Contact me if you'd like to come. (I'm suevanhattum on the hotmail system, for this.)
• I'm very happily involved in a grant application that would allow me to work with the local elementary teachers, improving their understanding of math. Wish us all luck.
Just a few cool links, really, before I run off for a week at family camp, with no internet.
Brent, at The Math Less Travelled, wrote a great review of The Mathematical Mechanic, by Mark Levi. If I weren't on a budget, I'd buy it right now, after reading this. (I see that UC Berkeley's
library doesn't have it yet, either. Bummer.)
In my morning explorations today I also rediscovered the blog of the Albany Area Math Circle, which looks like it's full of intriguing posts. First I found Hangmath, which looks like a very fun game
- I'll look forward to trying it with students in September. And then, in the next post down, she described something I heard about earlier this summer, and have been meaning to explore further.
Math Trails, also known as Math Walks
While I was attending the Math Circle Institute, Amanda Serenevy gave us a leaflet about a project she'd developed for her Riverbend Community Math Center, about taking a math walk in South Bend. I
can't find it now, but it looked intriguing and is another activity I'd like to try out in September. So I was excited to see Mary O'Keefe's post on Math Walks.
Most math walks involve arriving at various destinations and solving some puzzle (or math problem) presented by what you see at the location. O'Keefe's is more focused on the history of her campus,
and a local mathematician. Googling 'math walks' found me this post on the Futures Channel, about Ron Lancaster's work. I noticed that he uses the term math trail, so started searching on that.
Here's a site I'll come back to when I'm ready to set up my own math trail.
I also found this on Wikipedia, about Kay Tolliver, who's won a Presidential Award for her teaching. She started the National Math Trail project, where students and teachers can post information
about math trails they've developed.
Anyone find any exciting math trails near them, or have any experience creating or using one?
quantum cryptography
Quantum cryptography uses our current knowledge of physics to develop a cryptosystem that cannot be defeated - that is, one that is completely secure against being compromised without
knowledge of the sender or the receiver of the messages. The word quantum itself refers to the most fundamental behavior of the smallest particles of matter and energy: quantum theory explains
everything that exists and nothing can be in violation of it.
Quantum cryptography is different from traditional cryptographic systems in that it relies more on physics, rather than mathematics, as a key aspect of its security model.
Essentially, quantum cryptography is based on the usage of individual particles/waves of light (photon) and their intrinsic quantum properties to develop an unbreakable cryptosystem - essentially
because it is impossible to measure the quantum state of any system without disturbing that system. It is theoretically possible that other particles could be used, but photons offer all the
necessary qualities: their behavior is comparatively well-understood, and they are the information carriers in optical fiber cables, the most promising medium for extremely high-bandwidth
communications.
How It Works in Theory
In theory, quantum cryptography works in the following manner (this view is the "classical" model developed by Bennett and Brassard in 1984 - some other models do exist):
Assume that two people wish to exchange a message securely, traditionally named Alice and Bob. Alice initiates the message by sending Bob a key, which will be the mode for encrypting the message
data. This is a random sequence of bits, sent using a scheme in which two different polarization states can each represent one particular binary value (0 or 1).
Let us assume that this key is a stream of photons travelling in one direction, with each of these photon particles representing a single bit of data (either a 0 or 1). However, in addition to their
linear travel, all of these photons are oscillating (vibrating) in a certain manner. These oscillations can occur in any 360-degree range across any conceivable axis, but for the purpose of
simplicity (at least as far as it is possible to simplify things in quantum cryptography), let us assume that their oscillations can be grouped into 4 particular states: we'll define these as UP/
DOWN, LEFT/RIGHT, UPLEFT/RIGHTDOWN and UPRIGHT/LEFTDOWN. The angle of this vibration is known as the polarization of the photon. Now, let us introduce a polarizer into the equation. A polarizer is
simply a filter that permits certain photons to pass through it with the same oscillation as before and lets others pass through in a changed state of oscillation (it can also block some photons
completely, but let's ignore that property for this exercise). Alice has a polarizer that can transmit the photons in any one of the four states mentioned - in effect, she can choose either
rectilinear (UP/DOWN and LEFT/RIGHT) or diagonal (UPLEFT/RIGHTDOWN and UPRIGHT/LEFTDOWN) polarization filters.
Alice swaps her polarization scheme between rectilinear and diagonal filters for the transmission of each single photon bit in a random manner. In doing so, the transmission can have one of two
polarizations represent a single bit, either 1 or 0, in either scheme she uses.
When receiving the photon key, Bob must choose to measure each photon bit using either his rectilinear or diagonal polarizer: sometimes he will choose the correct polarizer and at other times he will
choose the wrong one. Like Alice, he selects each polarizer in a random manner. So what happens with the photons when the wrong polarizer is chosen?
The Heisenberg Uncertainty Principle states that we do not know exactly what will happen to each individual photon, for in the act of measuring its behavior, we alter its properties (in addition to
the fact that if there are two properties of a system that we wish to measure, measuring one precludes us from quantifying the other). However, we can make a guess as to what happens with them as a
group. Suppose Bob uses a rectilinear polarizer to measure UPLEFT/RIGHTDOWN and UPRIGHT/LEFTDOWN (diagonal) photons. If he does this, then the photons will pass through in a changed state - that is,
half will be transformed to UP/DOWN and the other half to LEFT/RIGHT. But we cannot know which individual photons will be transformed into which state (it is also a reality that some photons may be
blocked from passing altogether in a real world application, but this is not relevant to the theory).
Bob measures some photons correctly and others incorrectly. At this point, Alice and Bob establish a channel of communication that can be insecure - that is, other people can listen in. Alice then
proceeds to advise Bob as to which polarizer she used to send each photon bit - but not how she polarized each photon. So she could say that photon number 8597 (theoretically) was sent using the
rectilinear scheme, but she will not say whether she sent an UP/DOWN or LEFT/RIGHT. Bob then confirms if he used the correct polarizer to receive each particular photon. Alice and Bob then discard
all the photon measurements that he used the wrong polarizer to check. What they have is, on average, a sequence of 0s and 1s half the length of the original transmission - but it will form
the basis for a one-time pad, the only cryptosystem that, if properly implemented, is proven to be completely random and secure.
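The exchange-and-sift procedure above is easy to simulate. The sketch below is my own simplified model of the protocol as described (bits and basis labels in place of real photons); it shows the two key facts: without an eavesdropper the sifted keys agree exactly, and about half the transmitted bits survive sifting.

```python
import random

def bb84_sift(n_photons, seed=0):
    """Simulate the sifting step of the exchange described above.
    Bases: 'R' = rectilinear, 'D' = diagonal."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("RD") for _ in range(n_photons)]
    bob_bases   = [rng.choice("RD") for _ in range(n_photons)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            bob_bits.append(bit)                # correct polarizer: bit preserved
        else:
            bob_bits.append(rng.randint(0, 1))  # wrong polarizer: random outcome

    # Publicly compare bases; keep only positions where they match.
    keep = [i for i in range(n_photons) if alice_bases[i] == bob_bases[i]]
    key_a = [alice_bits[i] for i in keep]
    key_b = [bob_bits[i] for i in keep]
    return key_a, key_b

key_a, key_b = bb84_sift(10_000)
print(len(key_a) / 10_000)   # close to 0.5: about half the bits survive sifting
print(key_a == key_b)        # True: sifted keys agree when no one eavesdrops
```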
Now, suppose we have an eavesdropper, Eve, who attempts to listen in, has the same polarizers that Bob does and must also randomly choose whether to use the rectilinear or diagonal one for each
photon. However, she also faces the same problem that Bob does, in that half the time she will choose the wrong polarizer. But Bob has the advantage of speaking to Alice to confirm which polarizer
type was used for each photon. This is useless to Eve, as half the time she used the wrong detector and will misinterpret some of the photons that will form that final key, rendering it useless.
Furthermore, there is another level of security inherent in quantum cryptography - that of intrusion detection. Alice and Bob would know if Eve was eavesdropping on them. The fact that Eve is on the
"photon highway" can become obvious because of the following.
Let's say that Alice transmits photon number 349 as an UPRIGHT/LEFTDOWN to Bob, but for that one, Eve uses the rectilinear polarizer, which can only measure UP/DOWN or LEFT/RIGHT photons accurately.
What Eve will do is transform that photon into either UP/DOWN or LEFT/RIGHT, as that is the only way the photon can pass. If Bob uses his rectilinear polarizer, then it will not matter what he
measures as the polarizer check Alice and Bob go through above will discard that photon from the final key. But if he uses the diagonal polarizer, a problem arises when he measures its polarization;
he may measure it correctly as UPRIGHT/LEFTDOWN, but he stands an equal chance, according to the Heisenberg Uncertainty Principle, of measuring it incorrectly as UPLEFT/RIGHTDOWN. Eve's use of the
wrong polarizer will warp that photon and will cause Bob to make errors even when he is using the correct polarizer.
To discover Eve's nefarious doings, they must perform the above procedures, with which they will arrive at an identical key sequence of 0s and 1s - unless someone has been eavesdropping, whereupon
there will be some discrepancies. They must then undertake further measures to check the validity of their key. It would be foolish to compare all the binary digits of the final key over the
unsecured channel discussed above, and also unnecessary.
Let us assume that the final key comprises 4,000 binary digits. What needs to be done is that a subset of these digits be selected randomly by Alice and Bob, say 200 digits, in terms of both position
(that is, digit sequence number 2, 34, 65, 911 etc) and digit state (0 or 1). Alice and Bob compare these - if they match, then there is virtually no chance that Eve was listening. However, if she
was listening in, then her chances of being undiscovered are one in countless trillions, that is, no chance in the real world. Alice and Bob would know someone was listening in and then would not use
the key - they would need to start the key exchange again over a secure channel inaccessible to Eve, even though the comparisons between Alice and Bob discussed above can still be done over an
insecure channel. However, even if Alice and Bob have concluded that their key is secure, since they have communicated 200 digits over an insecure channel, these 200 digits should be discarded
from the final key, turning it from a 4,000-bit into a 3,800-bit key.
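The "one in countless trillions" claim can be made precise for the intercept-resend attack described here. For each compared bit, Eve picks the wrong polarizer half the time, and even then Bob's measurement happens to agree with Alice's half the time, so each bit matches with probability 3/4. Over 200 independently compared bits:

```python
# Probability that an intercept-resend eavesdropper escapes detection.
# Per compared bit: P(mismatch) = 1/2 (Eve wrong basis) * 1/2 (Bob flips) = 1/4,
# so each compared bit matches with probability 3/4.
p_match = 1 - 0.5 * 0.5          # 0.75 per compared bit
compared_bits = 200
p_undetected = p_match ** compared_bits
print(f"{p_undetected:.3e}")     # on the order of 1e-25: no chance in practice
```

So with a 200-bit check, (3/4)^200 is roughly 10^-25 - which is what "no chance in the real world" means quantitatively.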
Thus, quantum cryptography is a way to combine the relative ease and convenience of key exchange in public key cryptography with the ultimate security of a one-time pad.
How It Works in Practice
In practice, quantum cryptography has been demonstrated in the laboratory by IBM and others, but over relatively short distances. Recently, over longer distances, fiber optic cables with incredibly
pure optic properties have successfully transmitted
bits up to 60 kilometers. Beyond that, BERs (bit error rates) caused by a combination of the Heisenberg Uncertainty Principle and microscopic impurities in the fiber make the system unworkable. Some
research has seen successful transmission through the air, but this has been over short distances in ideal weather conditions. It remains to be seen how much further technology can push forward the
distances at which quantum cryptography is practical.
Practical applications in the US are suspected to include a dedicated line between the White House and Pentagon in Washington, and some links between key military sites and major defense contractors
and research laboratories in close proximity.
Contributor(s): and assistance provided by Borys Pawliw
This was last updated in September 2005
Haskell Code by HsColour
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module NLP.Semiring.Derivation (Derivation(..), MultiDerivation(..), mkDerivation, fromDerivation) where
import NLP.Semiring
import NLP.Semiring.Helpers
import qualified Data.Set as S
import Data.Monoid
import Data.Maybe (isNothing)
import Control.Exception
-- | The 'Derivation' semiring keeps track of a single path or derivation
-- that led to the known output. If there is more than one path, it keeps
-- only the lesser path (based on 'Ord'). The main purpose of this semiring
-- is to track derivations for ViterbiNBestDerivation. If you want to keep all paths,
-- use 'MultiDerivation'.
-- Derivation takes a Monoid as an argument that describes how to build up paths or
-- more complicated structures.
newtype Derivation m = Derivation (Maybe m)
deriving (Eq, Ord)
instance (Monoid m) => Multiplicative (Derivation m) where
one = Derivation $ Just mempty
times (Derivation d1) (Derivation d2) = Derivation $ do
d1' <- d1
d2' <- d2
return $ mappend d1' d2'
instance Monoid (Derivation m) where
mempty = Derivation Nothing
mappend (Derivation s1) (Derivation s2) =
Derivation $ case (s1,s2) of
(Nothing, s2) -> s2
(s1, Nothing) -> s1
(s1, s2) -> s1
instance (Monoid m) => Semiring (Derivation m)
instance (Show m) => Show (Derivation m) where
show (Derivation (Just m)) = show m
show (Derivation Nothing) = "[]"
mkDerivation :: (Monoid m ) => m -> Derivation m
mkDerivation = Derivation . Just
fromDerivation :: (Monoid m ) => Derivation m -> m
fromDerivation (Derivation (Just m)) = m
fromDerivation (Derivation Nothing) = throw $ AssertionFailed "no derivation"
-- | The 'MultiDerivation' semiring keeps track of all paths or derivations
-- that led to the known output. This can be useful for debugging output.
-- Keeping all these paths around can be expensive. 'MultiDerivation' leaves open
-- the implementation of the internal path monoid for more compact representations.
newtype MultiDerivation m = MultiDerivation (S.Set m)
deriving (Eq, Show, Ord)
instance (Monoid m, Ord m) => Multiplicative (MultiDerivation m) where
one = MultiDerivation $ S.fromList [mempty]
times (MultiDerivation d1) (MultiDerivation d2) = MultiDerivation $
S.fromList $
map (uncurry mappend) $
cartesian (S.toList d1) (S.toList d2)
instance (Ord m) => Monoid (MultiDerivation m) where
mempty = MultiDerivation S.empty
mappend (MultiDerivation s1) (MultiDerivation s2) = MultiDerivation $ S.union s1 s2
instance (Ord m, Monoid m, Eq m) => Semiring (MultiDerivation m)
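For readers who don't speak Haskell, the `Derivation` semantics above can be mimicked in a few lines of Python (my own illustrative sketch, not part of this package), using `None` for `Nothing` and lists as the underlying monoid:

```python
# Python sketch of the Derivation semiring over the list monoid.
def d_plus(a, b):
    # mappend: prefer an existing (non-None) derivation.
    return b if a is None else a

def d_times(a, b):
    # times: combine derivations; a missing operand yields no derivation,
    # mirroring the Maybe do-block in the Haskell source.
    if a is None or b is None:
        return None
    return a + b

one = []       # Just mempty: the multiplicative identity
zero = None    # Nothing: the additive identity

print(d_times(["rule1"], ["rule2"]))  # ['rule1', 'rule2']
print(d_plus(None, ["rule2"]))        # ['rule2']
```

This makes the semiring laws easy to spot-check: `zero` absorbs under `d_times`, and `one` (the empty derivation) is the identity for it.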
Large Margin Nearest Neighbors
(Thanks to John Blitzer, who gave me this cake for my 30th birthday.)
Here is the original image from the paper:
Large Margin Nearest Neighbor Classification is a NIPS05 paper in which we show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The
metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. On seven data sets of varying size
and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification---for example, achieving a test error rate of 1.3% on the MNIST handwritten digits. Our
approach has many parallels to support vector machines, including a convex objective function based on the hinge loss, but does not require modifications for problems with large numbers of classes.
More details about the algorithm can be found on the Wiki page and in the original JMLR paper.
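The objective described above can be sketched roughly as follows (a schematic Python illustration of the pull/push structure, not the released code; the function names, the toy margin of 1, and the weighting `mu` are my own choices):

```python
def lmnn_loss(X, y, M, target_neighbors, mu=0.5, margin=1.0):
    """Schematic LMNN objective: pull same-class target neighbors close,
    push differently-labeled 'impostors' at least `margin` further away."""
    def d(a, b):
        # Squared Mahalanobis distance (a-b)^T M (a-b).
        diff = [ai - bi for ai, bi in zip(a, b)]
        return sum(di * sum(mij * dj for mij, dj in zip(row, diff))
                   for di, row in zip(diff, M))

    pull = push = 0.0
    for i, neighbors in target_neighbors.items():
        for j in neighbors:                      # same-class target neighbors
            pull += d(X[i], X[j])
            for l in range(len(X)):
                if y[l] != y[i]:                 # impostor: different class
                    push += max(0.0, margin + d(X[i], X[j]) - d(X[i], X[l]))
    return (1 - mu) * pull + mu * push

# Toy 1-D example with the identity metric: the neighbor is close,
# the impostor is far, so the hinge term vanishes.
print(lmnn_loss([[0.0], [0.1], [5.0]], [0, 0, 1], [[1.0]], {0: [1]}))
```

The hinge term is what gives the convex objective its SVM-like flavor: it is zero once every impostor is a full margin further away than the target neighbor.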
Stable version: Download Page
(If you have trouble compiling mex files, try to run the demo without install.m - the binaries for several architectures are included.)
The code is based on the very simple alternating projection algorithm. Please let me know about any problems you might encounter with the implementation.
My Favorite Puzzles
Here is a collection of my favorite math puzzles. I like puzzles which can be solved with a beautiful trick, or which are counter-intuitive, or which confuse people. I do not want to spoil them for
you by posting all the solutions, but I would like to post some discussion and divide the puzzles into groups.
Straightforward problem: Families with a Boy.
Among the families with two children, which have at least one boy, what is the probability that the family has two boys?
Assume that the probability of a child being a boy is 1/2. It is easy to see that for families with two children each of the following cases occurs with the probability 1/4:
• First boy, second boy;
• First boy, second girl;
• First girl, second boy;
• First girl, second girl.
Therefore, among families with at least one boy, only one-third have two boys. Hence, the answer is 1/3.
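The case analysis above is easy to verify by brute-force enumeration (a throwaway sketch):

```python
from itertools import product

families = list(product("BG", repeat=2))        # BB, BG, GB, GG, equally likely
with_boy = [f for f in families if "B" in f]    # condition on at least one boy
two_boys = [f for f in with_boy if f == ("B", "B")]
print(len(two_boys), "/", len(with_boy))        # prints: 1 / 3
```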
Confusing puzzle: The Other Child.
A man says, "I have two children; at least one of them is a boy." What is the probability that the other one is a boy?
I recommend that you look at the discussion only after you have solved the problem. After you understand the solution to the problem of the other child, you can easily solve the three puzzles below.
Note that not all of them have the same answer.
Puzzle: Birthday Present.
I meet my friend at the store. She tells me that she is buying a present for her son. I know that she has two children, but I don't know the gender of her children. Based on the information I have,
what is the probability that the other child is a boy?
Puzzle: Navy Recruitment.
In a small town, every family has two children. Every family that has at least one boy received an advertisement from the Navy. What is the probability that the second child in a family that received
the advertisement is a boy?
Puzzle: Pregnancy.
In a college, all of the students are from families with two children, and both of the children are in this college. For summer vacation, all of the students go to an island with no other people.
During the vacation, one of the girls becomes pregnant. Assume that she has no taste whatsoever and selects her partner at random. What is the probability that the father of the unborn child has a
Puzzle: My Family.
I have two children. At least one of them is a boy. What is the probability that the other one is a boy as well?
Puzzle: John's Brother
A man has two children. One of them is a boy named John. What is the probability that John has a younger brother?
Here is a hint for those who are stuck.
Puzzle: In Three Ways.
Find three ways to make the program below print 20 copies of the dash character '-' by changing or adding one character:
int i, n = 20;
for (i = 0; i < n; i--)
    printf("-");
Incorrect solutions:
• Changing "n = 20" to "n = -20" won't work.
• Changing "i--" to "i++" doesn't satisfy the one character condition.
• Changing "i = 0" to "i = 40" won't work.
• Changing "i < n" to "i < -n" won't work.
pochmann and SnapDragon from TopCoder suggested entertaining and challenging variations to this problem (using the same restrictions):
• Find a way to get it to print exactly 21 dashes.
• Find a way to get it to print exactly 1 dash.
• Find many ways to print infinitely many dashes.
• Find many ways to print 0 dashes.
• Print 20 dashes with the given code, without changing it at all, but you are allowed to add some preceding code.
• Print 1 dash with the given code, without changing it at all, but you are allowed to add some preceding code.
Puzzle: Program that prints itself.
In your favorite programming language, write a program that, when run, will print out its own source code. Bonus puzzle: write this program without using external resources — you have the interpreter
for your language and you have the output, but you can't interact with your environment in any other way. (Such programs are called quines).
Easy Problem: Guess my Birthday.
My birthday is in January. What is the smallest number of yes/no questions you need to ask to determine the day of my birthday.
Slightly Harder Version: Guess my Birthday
This time, you must mentally list all of your questions before I answer them and then ask them in that order. How many questions do you need?
General solution that helps for all the other problems in this section:
There are two ideas.
This first idea helps to calculate a reasonable minimum number of questions:
If you ask K questions you can have 2^K different sequences of yes/no answers. If you can guess the number in K questions, that means that the number is uniquely defined by the yes/no sequence.
Hence, the number of different sequences should be equal to or greater than N. That is 2^K ≥ N.
This second idea describes the strategy:
The first question divides all of the numbers into two groups corresponding to "yes" or to "no" answers. The best question would have these two groups as close in size as possible. For example: is
your number even? The following questions should follow in a similar manner.
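Putting the two ideas together for the birthday puzzle (a quick sketch; January has 31 days, so the counting bound and the halving strategy meet at the same number):

```python
from math import ceil, log2

def min_questions(n):
    # Smallest K with 2**K >= n possible answers.
    return ceil(log2(n))

print(min_questions(31))  # 5 yes/no questions suffice for a January birthday
```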
Another easy problem: Fake Coin.
I have N coins, one of them is fake and is lighter than the others. I also have a balance with two cups. I can put some number of coins into each cup, and the balance shows me which set of coins is
lighter. Using the balance the fewest number of times, find the fake coin.
A Harder Fake Coin Problem
This is the same problem as above, except this time you have to say what all of your weighings will be before you actually do them.
More Difficult Problem: 12 Coins.
There are 12 coins. One of them is fake; it is either lighter or heavier than normal coins. Find the fake coin and say whether it is lighter or heavier by using the balance in the minimum number of
A Harder Version of 12 Coins.
Once again you have 12 coins and a balance. This time, you must decide the exact weighings you will do before you do them. How many weighings will it take you this time?
Problem: Two Glass Balls.
I am in a 100-story building. I have with me two glass balls. I know that if I throw the ball out of the window, it won't ever break if the floor number is less than X, and it will always break if
the floor number is equal to or greater than X. Assuming that I can reuse the balls which don't break, find X in the minimum number of throws.
Problem: Five Chess Players.
There are five chess players of different strengths. If two of them play, the stronger one always wins. What is the minimum number of games they need to play for us to determine the order of their strengths?
Problem: Radioactive Balls.
There are 11 balls; two of them are radioactive. We have a tool in the form of a box. If we put several balls inside the tool, it can tell us whether or not there is a radioactive ball inside it. As
this tool is very expensive to use, how can you find two radioactive balls using the tool the minimum number of times?
Problem: 100 Prisoners in a Line
A king decides to give 100 of his prisoners a test. If they pass, they can go free. Otherwise, the king will execute all of them. The test goes as follows: The prisoners stand in a line, all facing
forward. The king puts either a black or a white hat on each prisoner. The prisoners can only see the colors of the hats in front of them. Then, in any order they want, each one guesses the color of
the hat on their head. Other than that, the prisoners cannot speak. To pass, no more than one of them may guess incorrectly. If they can agree on their strategy beforehand, how can they be assured
that they will survive?
A colorful variation
Same as above with any number of colors.
An infinite variation
The king has a countable number of wise men. The line starts from the left and is infinite in the right direction. The wise men are all facing to the right and they see the infinite tail of the line.
Again, the king places either a black or white hat on each head and they can only say one of two words: black or white. Will they be able to devise a strategy beforehand that ensures that not more
than one person makes a mistake?
Problem: Rainbow Hats
Seven men are sitting in a room. Someone puts a hat on the head of each man. Each hat has an equal probability of being one of the seven colors of the rainbow. It is okay for two men to have hats of
the same color. Without communicating with each other, each man guesses the color of the hat on their head. If at least one of them guesses right, they win this little game of theirs. If they are
allowed to create a strategy beforehand, how can they be assured of winning?
Problem: Probabilistic Hats
Three men are given a challenge. They will all sit in a room and someone will put either a black or a white hat on each one of them with probability one half. The men cannot communicate with each
other, but they can see the colors of the hats of the other two men. At the same time, each man says which color they think the hat on his own head is. Each individual can also pass. They win if at
least one of them names the color of his hat correctly, and if none of them gives the incorrect answer. How can they maximize their probability of winning?
Problem: Probabilistic Hats. II
The sultan decides to test his hundred wizards. Tomorrow at noon he will randomly put a red or a blue hat — for both of which he has an inexhaustible supply — on every wizard's head. Each wizard will
be able to see every hat but his own. The wizards will not be allowed to exchange any kind of information whatsoever. At the sultan's signal, each wizard needs to write down the color of his own hat.
Every wizard who guesses wrong will be executed. The wizards have one day to decide on a strategy to maximize the number of survivors. Suggest a strategy for them.
A variation: The wizards are all very good friends with each other. They decide that executions are very sad events and they do not wish to witness their friends' deaths. They would rather die
themselves. They realize that they will only be happy if all of them survive together. Suggest a strategy that maximizes the probability of them being happy, that is, the probability that all of them
will survive.
Problem: Cigarette Butts
A certain hobo who is skilled at making cigarettes can turn any 4 cigarette butts into a single cigarette. Today, this hobo has found 24 cigarette butts on the street. Assuming he smokes every
cigarette he can, how many cigarettes will he smoke today?
Problem: Two Fuses
You have two fuses that both last one hour, and you have no other ways of telling time. The fuses may be thicker at some points, so in half an hour, the amount of fuse that has burned may or may not
be half the length of the whole fuse. How do you measure 45 minutes worth of time?
Problem: Only One Fuse
You have one fuse similar to the ones in the problem above. Is it possible to measure 20 minutes exactly?
Problem: A Poison Duel
Once upon a time there was a land where the only antidote to a poison was a stronger poison, which needed to be the next drink after the first poison. In this land, a malevolent dragon challenges the
country's wise king to a duel. The king has no choice but to accept.
By bribing the judges, the dragon succeeds in establishing the following rules of the duel: Each dueler brings a full cup. First they must drink half of their opponent's cup and then they must drink
half of their own cup.
The dragon wanted these rules because he is able to fly to a volcano, where the strongest poison in the country is located. The king doesn't have the dragon's abilities, so there is no way he can get
the strongest poison. The dragon is confident of winning because he will bring the stronger poison.
The only advantage the king has is that the dragon is dumb and straightforward. The king correctly predicts what the dragon will do. How can the king kill the dragon and survive?
Problem: Pebbles
You have 45 pebbles arranged in several piles. Each turn you take one pebble from each pile and put them into a new pile. What is the asymptotic behavior of this process?
Problem: Equation
Do there exist natural numbers x, y, and z satisfying the equation: 28x + 30y + 31z = 365?
Problem: Davidsons
The Davidsons have five sons. Each son has one sister. How many children are there in the family?
Problem: A Stick
A stick has two ends. If you cut off one end, how many ends will the stick have left?
Problem: A Square
A square has four corners. If we cut one corner off, how many corners will the remaining figure have?
Problem: Two Sons
Anna had two sons. One son grew up and moved away. How many sons does Anna have now?
Problem: Crows (submitted by Jonathan)
Ten crows were sitting on a fence. A farmer shot one. How many were left?
Problem: Candles (submitted by Yulia Yelkhimova)
John had four candles and lit them all up. Then he changed his mind and blew out one of the candles. How many candles has he left?
Problem: Apples
At a farmer's market you stop by an apple stand, where you see 20 beautiful apples. You buy 5. How many apples do you have?
Problem: Full House
Mrs. Fullhouse has 2 sons, 3 daughters, 2 cats and 1 dog. How many children does she have?
Problem: A Chandelier
My dining room chandelier has 5 light bulbs in it. During a storm two of them burned out. How many light bulbs are in the chandelier now?
Problem: Fudge
My dog Fudge likes books. In the morning he brought two books to his corner and three more books in the evening. How many books will he read tonight?
Problem: Candy
There were five bowls full of candy on the table. Mike ate one bowl of candy and Sarah ate two. How many bowls are there on the table now?
Problem: Cows
Peter had ten cows. All but nine died. How many cows are left?
Problem: Shots
A patient needs to get three shots. There is a break of 30 minutes between shots. Assuming the shots themselves are instantaneous, how much time will the procedure take?
Problem: Race I (submitted by Victor Gutenmacher)
You are running a race and you pass the person who was running second. What place are you now?
Problem: Race II (submitted by Victor Gutenmacher)
You are running a race and you pass the person who was running last. What place are you now?
Problem: A Stick variation
How many ends do five sticks have? What about five and a half sticks?
Problem: Chess
Two friends played chess for four hours. How many hours did each of them play chess?
Problem: A Log
It takes 12 minutes to saw a log into 3 parts. How much time will it take to saw it into 4 parts?
Problem: A Caterpillar
A caterpillar wants to see the world and decides to climb a 12-meter pole. It starts every morning and climbs 4 meters in half a day. Then it falls asleep for the second half of the day, during which
time it slips 3 meters down. How much time will it take the caterpillar to reach the top?
Problem: Fingers
Humans have 10 fingers on their hands. How many fingers are there on 10 hands?
Problem: Horses (submitted by Yulia Yelkhimova)
Three horses are galloping at 27 miles per hour. What is the speed of one horse?
Problem: Museums
Ten kids from Belmont High School went on a tour of Italy. During the tour they visited 20 museums. How many museums did each kid go to?
Problem: Twins
How many people are there in two pairs of twins, twice?
Problem: Eggs (submitted by Jonathan)
It takes 3 minutes to boil 3 eggs. How long will it take to boil 5 eggs?
Problem: A Rabbit (submitted by Gabe)
On average, rabbits start breeding when they are 3 months old and produce 4 offspring every month. If I put a day old rabbit in a cage for a year, how many offspring will it produce?
Problem: Eggs (submitted by Xi_Heather)
A chicken and a half can lay an egg and a half in a day and a half. How long will it take three chickens to lay three eggs?
Problem: Cucumbers (submitted by Misha)
100 pounds of cucumbers, that were 99% water, got a bit dehydrated, and became 98% water. What is their weight now?
Problem: Friends
Two friends went for a walk and found $20. How much money would they have found if they were joined by two more friends?
Problem: Goldfish (submitted by Rua Javari)
One hundred percent of the fish in a pond are goldfish. I take 10% of the goldfish out of the pond. What percentage of goldfish does the pond now contain?
Last revised December 2009
What is the coordinate ring of symmetric product of affine plane?
The symmetric product of a variety $M$ is the quotient $M^n/S_n$, where $S_n$ is the symmetric group permuting the components of the n-fold product $M^n$. If $M$ is the affine space $C^k$ over the complex
numbers, the coordinate ring of the symmetric product is the ring of invariant polynomials in $R:=C[x^1_1,...,x^1_k, x^2_1,...,x^2_k,... ,x^n_1,...,x^n_k]$ under the action of $S_n$, where $S_n$ permutes the
variables $x_i^1,...,x_i^n$ simultaneously for $i=1,...,k$. I want to know the invariant subring $R^{S_n}$ in terms of generators and relations. Could anybody help me?
ag.algebraic-geometry ac.commutative-algebra
3 Answers
Those invariant polynomials are called multisymmetric functions. There are several papers on them; you could start with J. Dalbec, Multisymmetric functions, Beiträge Algebra Geom. 40(1) (1999), 27-51, http://www.emis.de/journals/BAG/vol.40/no.1/b40h1dal.ps.gz.
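As a quick numeric illustration (my own sketch, not from the paper), the power-sum multisymmetric functions $p_{(a,b)}=\sum_i x_i^a y_i^b$ (here for $k=2$) are manifestly invariant under simultaneously permuting the point index $i$:

```python
from itertools import permutations

def power_sum(points, a, b):
    # p_(a,b) = sum_i x_i^a * y_i^b for points [(x_1, y_1), ..., (x_n, y_n)]
    return sum(x**a * y**b for x, y in points)

pts = [(2, 3), (5, 7), (1, 4)]  # n = 3 sample points in C^2 (integers here)
vals = {power_sum(p, 2, 1) for p in permutations(pts)}
print(vals)  # a single value, {191}: the sum is S_3-invariant
```

Functions of this form (for suitable exponent vectors) generate the invariant ring, though, as the answers note, the relations among them are the hard part.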
The relations might be complicated. The multisymmetric functions of degree up to n generate the ring, but very redundantly. In Lemma 2.2 of a joint paper, Venkatesh and I show that you can get by with using many fewer of these multisymmetric functions, if you are content to generate a subring of R^{S_n} whose fraction field is finite-index in the fraction field of R^{S_n}.
I may be making a very trivial mistake [Edit: yes, indeed], but isn't it just that:
with affine coordinate ring $\mathbb{C}[\sigma_1,\cdots,\sigma_n]^{\otimes k}$ (where $\sigma_d=\sigma_d(x_1,\cdots,x_n)$ is the degree-$d$ symmetric function in the $n$ variables $x_1,\cdots, x_n$)?
Your third isomorphism requires an identification between Sn and (Sn)^k. – S. Carnahan♦ May 14 '10 at 16:49
I mean, if $G$ acts on $X$ and $Y$, and diagonally on $X\times Y$, then $(X\times Y)/G\cong (X/G)\times (Y/G)$. Is it correct, right? – Qfwfq May 14 '10 at 17:07
$(G\times G)/G\not\cong(G/G)\times(G/G)$ – user2035 May 14 '10 at 17:16
The equality I wrote above absolutely doesn't work! For example, if $G$ has positive dimension, the "expected" dimensions of the two sides do not match. – Qfwfq May 14 '10 at
The Event Horizon
Event Horizon was a somewhat dodgy sci-fi-horror movie that came out in the late 1990s. As the title suggests, associated with an Event Horizon is "Infinite Space, Infinite Terror". Luckily, the Event Horizon in the film is a black hole Event Horizon, and I'll leave discussing those to another time. Today, we'll try and understand Cosmological Event Horizons.
To explain these, I am shamelessly going to use the (still) excellent cosmological figures produced by Tamara Davis. OK, let's start with this one.
To understand what this picture is telling us, we need to remember a few things. Our universe has three spatial dimensions, and any spatial point can be labelled with three numbers. In a Cartesian coordinate system, these are (x,y,z). As we are dealing with relativity, we are dealing with not only space, but space-time, and every point in the universe is labelled by 4 numbers, the three spatial coordinates and the time, t. So, every point is labelled as (t,x,y,z), and these points are called events.
The picture above is the evolution of our universe, the cosmological model, and has distance on the x-axis and time on the y-axis. The universe began at a finite time in the past (the Big Bang) and shows us (the vertical line in the centre) and other objects in the universe. At the Big Bang, distances between us and any other object are zero, and as the universe expands, objects move away from us.
The purple is the Hubble Sphere that we discussed last time. The question is, what are those other lines on there - the event horizon and the particle horizon? To answer this, we need to do a bit of mathemagic.
We need to note a couple of things. Firstly, the first picture is not the complete story, as we know that, if our current cosmology is correct, then while it was born a finite time in the past, it's
going to last for ever. So really the first figure goes on for ever also. But we can fix that and come to it in a moment.
The x-axis actually shows "physical distance". Now, the proper distance is the multiplication of the Scale Factor (which depends on time and changes as the universe evolves) and what's known as the Comoving Coordinates (which, for any individual galaxy, are fixed values).
The Scale Factor's evolution depends upon what the universe is made of, and here are a few that some people made earlier
The things to see here are that now is at t=0, that the scale factor was smaller in the past (things were closer together), and that at some time way-back-when the scale factor was zero (the Big Bang). If we divide out the Scale Factor (so each galaxy has only its fixed comoving coordinates) we get the following picture
What are we going to do about the time axis? There is some mathemagic we can do with that also. But what can we do, given that the eventual age of the universe stretches off to infinity? I'm not going to go through the gory mathematical details, but we are going to switch from the normal cosmological time to what is known as Conformal Time. For our particular universe, the cool thing is that the infinite age of the universe is mapped onto a finite conformal time.
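The convergence claim is easy to check numerically for a toy Λ-dominated universe (my own sketch; I take a(t) = e^{Ht} with H = 1, which is not the full ΛCDM scale factor): the conformal time η(T) = ∫₀ᵀ dt/a(t) tends to the finite value 1/H as T → ∞, even though T itself is unbounded.

```python
from math import exp

def conformal_time(T, steps=100000, H=1.0):
    # Midpoint-rule integral of dt / a(t) with a(t) = exp(H t).
    dt = T / steps
    return sum(dt / exp(H * (k + 0.5) * dt) for k in range(steps))

for T in (1, 5, 20, 50):
    print(T, round(conformal_time(T), 4))  # converges toward 1/H = 1.0
```

So an infinitely long future gets squeezed into a finite strip of conformal time, which is exactly what lets the whole history fit on one diagram.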
This can give you a bit of a headache, but we know lots of functions that can map the infinite onto the finite (and mathematicians, don't complain about the terminology here :), such as
. So, when we look at our universe in terms of comoving coordinates and conformal time, we get the following
Remember, in terms of time, this is the entire history of the Universe, from Big Bang to infinite future, all on one piece of paper.
Now, the really cool thing with these coordinates is that light rays travel at 45 degrees, not along the crazy curves we see above, and we see that our event horizon is made of such straight lines, meeting where we head into the finite conformal infinity.
So, where does this get us? Well, the event horizon forms a triangle, separating events which are inside the triangle from those outside. Remembering that light rays travel at 45 degrees on this picture, sit down with a pencil and a ruler and you can see what this separation of events means.
If we pick an event within the triangle (remember, this is just a dot on this page) we can draw a light ray (travelling at 45 degrees) which hits us at the origin, somewhere between the Big Bang and
the infinite future.
But if we choose an event outside of the event horizon and draw another light ray heading towards us, we will see that it will not be able to cross our path between the Big Bang and infinite future.
So, this means that the event horizon separates events into those that can ever send us a signal (i.e. that we can see at some point in our history) and those that can't. The proper way of saying this is that the event horizon separates events into those that can have Causal Contact with us, and those that cannot.
This might seem weird, as if you think of a distant galaxy sending us light from a finite distance away, then, given the fact that the universe will be infinitely old, surely we must receive the light at some point? But no, because the photon is battling the expansion of the universe, and may not win, in which case we will never see it.
There is a flip side to this: if we take the above figure and extend the red lines (our light cone) to the top of the picture, we can see something quite interesting. If we set off in a standard rocket, we can never travel faster than light, and so will always be within the future red triangle. What this means is that even though we have an infinite amount of time left to play in the universe, we can only explore this finite patch (as the things we are trying to get to are being pulled away from us by the cosmic expansion).
In fact, the longer we leave it, the less and less volume there is to explore! So we'd better head off right away if we are going to see anything!
9 comments:
1. Geraint,
It is truly fascinating to see the entire Universe - past, present and future - reduced to three 'simple' graphs. I'm grappling with some of the concepts but I think I may be able to fully
understand them, maybe after a few reads. I certainly understand a lot more now than I did before I read your two related blogs.
I am still not clear of the fundamental difference between the two definitions of the Hubble Sphere and the Event Horizon.
* The 'Hubble Sphere' denotes the distance at which the perceived expansion of the Universe matches the speed of light, from our perspective. Beyond it, no signal can reach us. I get that.
* The 'Event Horizon' is the distance beyond which there can be no causal effect on us. I get that.
To me the two definitions appear identical, yet there is obviously a subtle difference in definition which I have missed. What is special about that yellow shaded area? How is it that an event
can have a causal effect without any signal being able to reach us?
Finally, would I be right in saying that, on the first Gyr/Glyr graph, if you raised the blue "now" bar further up the page, billions of years into the future, that the red light cone would tend
towards taking up the space and shape of the event horizon?
Roger Powell
2. Hi - Great questions.
OK - First part - What's the difference between the Hubble Sphere and the Event Horizon?
Objects within the Hubble Sphere *now* are moving away from us less than the speed of light. Those outside are moving faster than the speed of light. Now, remember that these velocities are due
to the expansion of the universe.
So, you might think that objects outside the Hubble Sphere now would never be observable. But remember that the expansion of the universe can speed up and slow down (it depends on what the energy
mix in the universe is) and so if it slows, an object outside of the Hubble Sphere *now* might end up inside the Hubble Sphere in the future, and so be visible in the future.
Events outside the event horizon will never be visible to us.
3. As for your second question, yes, you are right (and you can see this more clearly in the final picture). Basically, the event horizon can be thought of as the ultimate light cone in the universe.
I should note, however, that not every universe has an event horizon. If the conformal time does not converge to a finite value as the age of the universe goes to infinity, then we can't draw an
ultimate light cone, and we could explore the entire universe.
Hope this makes sense.
3. Thanks Geraint, that makes it somewhat clearer but I have to ask a follow up question:
How would you define an event (say on graph 1) that lies outside our Hubble Sphere but which remains within our light cone, i.e the small parts of the yellow bananas that lie inside the red cone?
How can our light cone contain space-time events which are expanding away from us faster than light speed?
Are such events observable or not?
Is this something to do with the changing value of the Hubble Constant?
4. OK - Firstly, the distance to the Hubble sphere is D = c/H where c is the speed of light and H is Hubble's constant (you actually have a slightly more complex relation for non-spatially flat universes).
Of course, Hubble's constant is not a constant, but changes with time, and so the distance to the Hubble Sphere changes with time (and you can see this in the figures above).
So, how fast an object is moving away depends upon Hubble's constant, and objects can initially be moving faster than light, but later be slower than light.
Can an event in the yellow banana be seen? Well, this is easy to see. The events we see right now are the ones on the past light cone (the red line) and if you consider an event in the yellow on
the red line, the red line (the photons) are initially moving outwards, but eventually turns around and heads back to the observer, so yes, they are observable.
How's that?
1. "Firstly, the distance to the Hubble sphere is D = c/H where c is the speed of light and H is Hubble's constant (you actually have a slightly more complex relation for non-spatially flat universes)."
Actually, if D is the proper distance (which is the case in the velocity-distance relation v=HD), then this is the case in all cosmological models, spatially flat or not.
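As a concrete check on the v = HD relation, here is a small sketch computing the present-day distance to the Hubble sphere; the value of H used below is an assumed round number for illustration, not one quoted in this thread:

```python
# Distance to the Hubble sphere: D = c/H, the distance at which the
# recession velocity v = H*D equals the speed of light.
C_KM_S = 299792.458     # speed of light, km/s
H0 = 70.0               # assumed Hubble constant, km/s per Mpc
MPC_PER_GLY = 306.601   # megaparsecs in one billion light-years

d_mpc = C_KM_S / H0
d_gly = d_mpc / MPC_PER_GLY
print(f"{d_mpc:.0f} Mpc ~= {d_gly:.1f} Gly")  # about 4283 Mpc, 14.0 Gly
```

Since H changes with cosmic time, this distance changes too, which is exactly why objects can cross the Hubble sphere in either direction.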
5. A fair bit clearer, thanks Geraint.
The concept of light "turning around" is an interesting one to ponder and it makes that entire banana zone very intriguing.
It seems to me that the furthest extent of the light cone (in the X direction, or proper distance) coincides with its intersection with the Hubble sphere and I assume this is the turning point
where light which was previously receding due to expansion begins to approach.
I guess it all comes back to my earlier question and your response that "the event horizon can be thought of as the ultimate light cone in the universe."
Thanks very much for your explanations,
6. An easier way to think of this is the following: Think of a flash of light at the big bang, at our comoving location. The surface of this expanding sphere of light is the particle horizon.
The event horizon is the same thing in reverse. Think of emitting a flash of light now: the farthest distance it will reach is the event horizon.
In both cases, of course, due to symmetry we can think of light travelling toward us rather than away from us. At a given time, the particle horizon is at the farthest distance from which light
could have reached us. The event horizon is the distance such that light emitted there now will reach us in the infinite future or at the big crunch.
The universe can have a particle horizon, an event horizon, both or neither. These can grow, shrink, or remain constant in time. In general, these have nothing to do with the Hubble sphere,
though in some special cases there can be some degeneracy (for example, in the de Sitter model the Hubble sphere is the event horizon).
One can think of either horizon as a surface in space at a given time or a surface in space-time. Often, the former is emphasized in the discussion of the particle horizon and the latter in the
case of the event horizon (including in Rindler's classic paper, though he does mention all possibilities); this creates some confusion and obscures the essential symmetry. Case in point: Do
universes which end in a big crunch have an event horizon? For space-time yes, since time ends; for space usually (always?) not since all distances are eventually reached.
As almost always, if you want to understand basic concepts in cosmology, read Harrison's textbook:
Author: Edward R. Harrison
Title: Cosmology: The Science of the Universe (2nd Edition)
Publisher: Cambridge University Press
Year: 2000/2001
ISBN: 0-521-66148-X
7. I believe, conversely that light cones are local event horizons. Outside of the light cone you would be elsewhere in space and time. Thus the Higgs field prevents matter from exceeding the speed
of light and escaping the universe.
8. I believe also, that classical and quantum physics merge when the speed of light is achieved by anything. This is consequence of the time dimension locally being zero. Thus the object loses all
causality and becomes a two dimensional possibility cloud or wave on the light cone, just like light.
Both comments by mquasi@erie.net
the first resource for mathematics
Wellesley, MA: A. K. Peters. xii, 212 p. $ 39.00 (1996).
This book is an essential resource for anyone who ever encounters binomial coefficient identities, for anyone who is interested in how computers are being used to discover and prove mathematical
identities, and for anyone who simply enjoys a well-written book that presents interesting, cutting edge mathematics in an accessible style. Wilf and Zeilberger have been at the forefront of a group
of researchers who have found and implemented algorithmic approaches to the study of identities for hypergeometric and basic hypergeometric series. In this book, they detail where to find the
packages that implement these algorithms in either Maple or Mathematica, they give examples of and instructions in how to use these packages, and they explain the motivation and theory behind the
algorithms. The specific algorithms that are described are Sister Celine’s Method, an algorithm from the 1940’s that underlies most of the current research; Gosper’s Algorithm, the first of the
powerful proof techniques to be implemented with a computer algebra package; Zeilberger’s Algorithm which extends and generalizes Gosper’s approach; the WZ Method which is guaranteed to provide a
proof certificate for any correct identity for hypergeometric series and which can be used to determine whether or not a “closed form” exists for any given hypergeometric series. The book is also
sprinkled with examples, exercises, and elaborations on the ideas that come into play.
05A10 Combinatorial functions
05A30 $q$-calculus and related topics
33C20 Generalized hypergeometric series, ${}_{p}{F}_{q}$
68R05 Combinatorics in connection with computer science
33D15 Basic hypergeometric functions of one variable, ${}_{r}{\phi }_{s}$
39A70 Difference operators
The control hitters have over everything
A couple weeks ago, I wrote an article titled “The control hitters have over LD%,” examining why it’s a bad idea to use single-year line drive rates in any discussion of a hitter’s underlying skills.
Afterward, I received an e-mail from a reader who wanted me to go a step further:
Hi Derek,
I really enjoyed your post on the stability of LD% over time. It was very helpful to have the GB% correlation (.65) as a comparison. I want to encourage you to do a post at some point on the
stability of a variety of common conventional and sabermetric stats; I fully understand the concept of looking for stable, repeatable skills but I have little idea what is stable and repeatable!
For example, how stable is a player’s walk rate? Strikeout rate? HR/FB rate?
Just a table of 20 of these stats would be really cool for perspective.
With that, here we go…
The results
As I said last time, this is far, far from a comprehensive study. For comparative purposes, though, it can be quite useful. Anyway, I looked at all hitters from 2004 through 2008 who amassed at least
350 at-bats in adjacent seasons (and played on the same team both years, to eliminate some park-to-park biases). What you’re seeing is the R-squared results for each stat, which essentially tells us
how much of the variation in Year 2 can be explained by the Year 1 figure.
| STAT | R2 |
| Batting Average | 0.18 |
| On-Base Percentage | 0.36 |
| Slugging Percentage | 0.37 |
| OPS | 0.35 |
| ISO Power | 0.52 |
| ISO Discipline | 0.60 |
| Batting Average with RISP | 0.06 |
| Contact (K) Rate | 0.76 |
| Walk Rate | 0.61 |
| HBP Rate | 0.37 |
| Pitches per PA | 0.61 |
| BABIP | 0.15 |
| 1B per BIP | 0.21 |
| 2B per BIP | 0.16 |
| 3B per BIP | 0.26 |
| AB/HR | 0.42 |
| HR/FB | 0.59 |
| GIDP Rate | 0.13 |
| LD% | 0.09 |
| GB% | 0.60 |
| OF FB% | 0.52 |
| IF FB% | 0.43 |
| SBO% | 0.33 |
| SBA% | 0.80 |
| SB% | 0.10 |
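The R-squared values in the table are just the squared Pearson correlation between each player's Year 1 and Year 2 values. A minimal sketch of that computation (the numbers here are invented to show the mechanics, not the article's actual sample):

```python
import numpy as np

# One row per player: the same stat measured in Year 1 and Year 2.
# These values are made up purely to illustrate the computation.
year1 = np.array([0.270, 0.310, 0.250, 0.295, 0.280, 0.330])
year2 = np.array([0.265, 0.300, 0.260, 0.290, 0.275, 0.315])

r = np.corrcoef(year1, year2)[0, 1]  # Pearson correlation
r_squared = r ** 2                   # share of Year 2 variance "explained"
print(round(r_squared, 3))
```

A high value means players who were above or below average in Year 1 tended to stay that way in Year 2.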
Quick takeaways
As we always stress here at THT Fantasy, stats like batting average and BABIP are poor indicators of a player’s actual skill. It’s much better to focus on component skills like contact rate, which is
one of the most stable stats around. Home runs are relatively stable, which might surprise some but really shouldn’t—after all, Juan Pierre isn’t going to start posting 30-home run seasons, nor is
Ryan Howard going to hit only five home runs.
As we saw last time, line drive rate is very unstable, while the other batted ball stats are much more stable. And for those who like to blame hitters for being “unclutch” with runners in scoring
position (I hear far too much of this from fellow Mets fans), check out no. 7 on the list.
Quick glossary
EDIT: I’m adding this late per request. Sorry for some things being a little unclear to begin with.
ISO Power: SLG-AVG
ISO Discipline: OBP-AVG
Contact (K) Rate: Contact rate on a per AB basis (not a per pitch basis). Calculated as (AB-K)/AB
HR/FB: Home runs per outfield fly ball
GIDP Rate: GIDP/BIP
LD%: Line drives as a percentage of all non-bunt balls in play
GB%: Groundballs as a percentage of all non-bunt balls in play
OF FB%: Outfield flies as a percentage of all non-bunt balls in play
IF FB%: Infield flies as a percentage of all non-bunt balls in play
SBO%: Stolen base opportunity rate. The percentage of times a hitter reaches first and thus is in position to attempt a steal. Calculated as (1B+BB+HBP-IBB)/TPA.
SBA%: Stolen base attempt rate. The percentage of times a hitter attempts a steal given that he is on first base. Calculated as (SB+CS)/(1B+BB+HBP-IBB).
SB%: Stolen base success rate. The percentage of times a hitter is successful on a steal attempt. Calculated as SB/(SB+CS).
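The three stolen-base rates chain together exactly as defined above; a quick sketch applying the glossary formulas to invented season totals:

```python
# Glossary formulas for the stolen-base stats, applied to invented totals.
singles, bb, hbp, ibb, tpa = 110, 60, 5, 8, 650
sb, cs = 30, 10

times_on_first = singles + bb + hbp - ibb    # (1B + BB + HBP - IBB)
sbo = times_on_first / tpa                   # SBO%: opportunity rate
sba = (sb + cs) / times_on_first             # SBA%: attempt rate
sb_pct = sb / (sb + cs)                      # SB%: success rate

print(round(sbo, 3), round(sba, 3), round(sb_pct, 3))  # 0.257 0.24 0.75
```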
Concluding thoughts
That’s all for today. Any questions, feel free to comment or e-mail me!
1. The Real Neal said...
“What you’re seeing is the R-squared results for each stat, which essentially tells us how much of the variation in Year 2 can be explained by the Year 1 figure.”
Huh? I am sure you've done some nice math here, but that sentence makes no sense. Let me give you a concrete example to illustrate.
Year BA
1 .278
2 .302
What you’re seeing is the R-squared results for each stat, which essentially tells us how much of the .024 can be explained by the .278.
2. Dave Studeman said...
I’m not sure what your example means, but the R squared measures how much of the variation among all players in Year 2 can be attributed to the variation among those same players in Year 1.
3. Seth said...
Brilliant idea for a piece. When doing research for my fantasy team next season, I will be sure to look up guys with high contact rates who have underachieved this season…could be another article
4. ThankYouMichaelLewis said...
I’m new to THT and it is fantastic, so bear with me if I can’t make as sophisticated inferences.
If LD% is so unstable, yet it has one of the strongest correlations with batting average/offensive success (retrofitted), then is it the secret weapon in fantasy baseball drafting/projections?
In other words, if we see a player far off the mean LD% of 19%, could that be used as a primary indication as to how the player will perform the following season?
It’s almost as if it’s an anti-correlation in that it can be used to project performance in Year 2 if Year 1 is an outlier.
Thanks in advance for any clarifications.
Note: I’m not even a fantasy baseball player, but I figured it was an easy example of putting future projections in use.
5. Detroit Michael said...
“Pitchers per PA” is close to 1.0 for everyone in the league.
I would guess that Batting Average with RISP appears to be more unstable from year to year than just Batting Average simply because the sample size, the number of PA we are using, for each season
is smaller.
6. Derek Carty said...
Sorry for the confusion, Dave. I added a quick glossary. As to all the other studies, I’m sure there have been loads of them, so I knew I’d miss a whole bunch if I tried (if you have some links
handy, though, I’d be happy to add them). This isn’t anything new, just a quick reference for the readers who were looking for one.
The Real Neal,
Dave nailed it. It’s a statistical tool that tells us how much of the variance for the player pool overall can be predicted by the one half of the data. If you’d like a longer explanation, just
let me know.
Thanks, Seth
7. David Rasmussen said...
On statistics that are more luck based than skill based (low R-sq), like BABIP or LD%, the way to use them predictively is as follows. Someone has a high BABIP? His batting average for the rest
of the year will likely decrease. Likewise, if someone has a high LD%, assume their rate stats will go down. If you are interested in an individual player, compare BABIP and LD% to previous
years to learn whether what they are doing may be sustainable.
Jason Bartlett: BA .332—not sustainable since BABIP is .383 versus career BABIP of .328. His LD% in 2009 is also not sustainable at 26.3%. Previous years are 20.7, 20.1, 22.2, 18.7. (Obviously,
Jason’s good year is mostly luck based, but they stuck him on the All Star team, so it must not be obvious to everyone.)
8. Derek Carty said...
Glad to hear you’re enjoying THT. I’m always willing to help people who want to learn, so feel free to ask away whenever you have a question.
As to this specific question, David Rasmussen pretty much nailed it. LD% is a big driver of BABIP, but because it is so unstable, a LD% too far from league average is likely just good/bad luck
itself. While it tells us *something* about the hitter, if we were to try to predict his LD%, we’d need to include a heavy proportion of league average, so a guy like Bartlett’s projected LD%
going forward might only be 20-21% or so.
We do need to note, though, that for pitchers, BABIP will generally regress to .300. For hitters, everyone regresses to their own unique number (not necessarily .300!), so things become a little
trickier to analyze. This is a very important point to remember that many analysts still don’t understand.
9. Derek Carty said...
Detroit Michael,
You’re right
You’re absolutely right on BA with RISP as well. If we’re looking at players with 350 ABs for the year, they might only have 150 ABs or so with RISP, so the number is much more unstable. If we
were to look at all batters with exactly 350 regular ABs and all batters with exactly 350 ABs with RISP (given a large, fictional, perfectly-constructed-for-our-needs-data set), the correlations
would probably be almost identical.
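The sample-size point is easy to demonstrate by simulation: give every player a fixed true batting "talent", draw two observed seasons, and watch the year-to-year correlation fall as the per-season at-bat count shrinks. Everything below (talent spread, sample sizes) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
talent = rng.normal(0.270, 0.020, size=2000)   # fixed "true BA" per player

def year_to_year_r(n_ab):
    # Two independent observed seasons of n_ab at-bats per player,
    # drawn from the same underlying talent.
    obs1 = rng.binomial(n_ab, talent) / n_ab
    obs2 = rng.binomial(n_ab, talent) / n_ab
    return np.corrcoef(obs1, obs2)[0, 1]

# Fewer at-bats per season -> more sampling noise -> lower correlation.
print(round(year_to_year_r(350), 2), round(year_to_year_r(150), 2))
```

This is the same mechanism that makes BA with RISP look so much less stable than plain BA.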
10. Jonathan said...
Derek –
Great question that you ask here.
From what you’ve got here, I’m guessing you did an auto-regression with 1 lag estimated using OLS, no?
If so (and perhaps even if not):
R-squared isn’t exactly the metric that we want for measuring repeatability. For example, you can have a high R-squared (meaning that the explanatory variables capture a lot of the explained
data’s variance) and still have the coefficient on the lagged variable near zero (which means that next year’s statistic is likely to be near the league average even if this year’s wasn’t). In
this case (high Rsq, low coefficient), the regression captures well that the stat is not repeatable.
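Jonathan's distinction can be made concrete with a toy autoregression: when Year 1 values are spread out enough, R² can be near 1 even though the lag coefficient is tiny, meaning Year 2 values sit close to the mean regardless. The numbers below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.normal(0.0, 100.0, size=5000)          # Year 1: huge spread
y2 = 0.05 * y1 + rng.normal(0.0, 0.1, 5000)     # Year 2: slope near zero

slope = np.cov(y1, y2)[0, 1] / np.var(y1, ddof=1)  # OLS lag coefficient
r2 = np.corrcoef(y1, y2)[0, 1] ** 2                # single-regressor R^2

# High R^2 (near 1.0) yet a coefficient near 0.05: a strong "fit"
# alongside heavy regression toward the mean.
print(round(slope, 3), round(r2, 3))
```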
11. Dave Studeman said...
I guess I’d make a few points here. One is that there are many ways to calculate something like this, as Jonathan pointed out. In the 2007 THT Annual (which you can read for free at Wowio), David
Gassko used a binomial correlation in addition to the year-to-year correlation and found a higher figure (.32 vs .13 for line drives, for instance, which is what he and JC got from year-to-year correlations).
Over a career, or a "significant" amount of time, you will find differences between batters. Freddy Sanchez is a line drive hitter. Jason Giambi isn't. That's obvious, but it's worth repeating, I guess.
Lastly, remember this analysis (and virtually all analyses like it) have been conducted for established major league players by necessity. They’re the ones we have the data for. If you were to
expand the sample to include minor leaguers, or players with cups of coffee, you'd find that line drive hitting (and virtually all the other measures) are more predictable than these results suggest.
12. Derek Carty said...
Yeah, Jonathan, as I said, there are much better ways to do this sort of thing. This is far, far from perfect or comprehensive or flawless. All this is is a simple reference guide for those who
haven’t seen anything like this yet. There are definitely flaws, but I’m wasn’t looking to be super precise. For comparative purposes, all I’m trying to do here is say “BA is unstable, contact
rate is stable. BABIP is unstable, HRs are somewhat stable. LD% is unstable, GB% is stable. etc, etc.” The results this produces are roughly in line with what we get from a more complex study,
which suits what I was going for.
13. ThankYouMichaelLewis said...
Thank you Dave and Derek.
When defining an offensive player’s lucky season, am I correct in assuming that LD% is the single biggest determinant, since it is what causes an abnormally inflated/deflated BABIP?
Also, what about defensive luck for positional players? I ask this because I still have a hard time with UZR due to its annual fluctuation.
Could a pitcher’s unusually high LD% or BABIP cause a fielder to have a signifiantly lower UZR?
I think I’m mostly hung up on UZR because a guy like Teixeira grades negatively, yet I see him make game-saving plays every single night (but that’s for another article).
14. Colin Wyers said...
It should be remembered that all of these correlations are artificially high due to the 350 AB cutoffs used – that substantially reduces the variance and therefor increases the correlation. This
is why a weighted correlation is preferable.
15. Jonathan said...
Gotcha on keeping things simple. I’d probably just report the coefficient on the lagged variable. Under the same assumptions you’re using, it would be just as informative. Under less restrictive
assumptions, it would be more informative. Of course, your articles are in any case also extremely informative.
16. Dave Studeman said...
Are these stats defined anywhere? For instance, is OF FB% a percentage of all balls hit that are outfield flies, or a percentage of fly balls that are outfield flies? And what is SBO% and the
other SB stats?
Last point: it would be nice to see references and comparisons with the many other studies of this that have been done in the past.
Find a second integral for Arnold's example
Consider Arnold's 1964 example of Arnold diffusion: $$H=I_1^2/2+I_2^2/2+\epsilon(1-\cos\theta_2)(1+\mu(\sin\theta_1+\sin t))$$
We can first make it a system of three degrees of freedom.
Then we know this system is not integrable in the Liouville Arnold sense. One integral is the Hamiltonian. Can we find one more integral?
I think the answer should be no. But it is not easy to prove. Maybe one more piece of information is useful. In Arnold's 1964 paper, he calculated two different nonvanishing Melnikov functions.
integrable-systems ds.dynamical-systems classical-mechanics ca.analysis-and-odes
Maybe the following paper sciencedirect.com/science/article/pii/S0022039603002870 (J. Cresson, Hyperbolicity, transversality and analytic first integrals) will be of some help. – Zurab Silagadze
Apr 11 '13 at 4:41
I met a similar problem, but I have no idea what a second integral means. – user41897 Oct 26 '13 at 17:18
integral = integral of the motion = a conserved quantity – Carlo Beenakker Oct 26 '13 at 20:55
I. Introduction
The EALimdep regression program is a powerful freeware program available for download from the Iona College website. It is a student version of the LIMDEP econometric program that is widely utilized
in universities and industry.
II. Downloading and Installing the EALimdep Program
The EALimdep program is available for download at the following link:
The MS-Word version of the manual for the program is available at:
The Adobe Acrobat version of the manual for the program is available at:
Installation of the software requires two steps. First "unzip" the EALsetup.exe file by "running" it. Be sure to change the Unzip to Folder path to a folder that you can easily locate like the C:\
Temp folder. Second, install the EALimdep program by "running" the extracted Setup.exe file. The EALimdep program can then be started by clicking on * Start * Run* and typing in C:\Program Files\Es\
LIMDEP\Program\limdep.exe and hitting OK.
NOTE1: for reasons unknown, the setup program creates a desktop icon named on computers running Windows XP .
NOTE2: Because of network restrictions on Iona lab computers, the EALimdep program should be installed in a different way. First, when running the EALsetup.exe file, change the Unzip to Folder path
to U:\EALimdep so that the setup files are installed on your local U: drive. Second, you can start the program by running the Setup.exe file contained in the U:\EALimdep folder and then clicking on
Yes, Launch the Program File when it is displayed at the end of the install process.
III. Preparing and Importing Excel data files
The "recommended" way to analyze data using the EALimdep program is to first create an Excel workbook, and then "import" the data into the EALimdep program. NOTE:EALimdep will only import data from
Excel workbooks that have been saved as MS Excel comma delimited file (csv) Worksheets.
A. To create and save data into an Excel workbook:
i. Start the MS-Excel program.
ii. Either type in the data to a blank worksheet or open a saved Excel file and click on the worksheet containing the data to be imported.
iii. Make sure that each column of data is headed by a variable name that (a) begins with a letter and (b) is no longer than eight characters in length.
iv. Make sure that, aside from the column headings, all of the data consists of numbers. EALimdep will ignore alphanumeric values so be sure to first convert categorical variables (e.g., yes, no)
into numerical variables (e.g., 1, 0). NOTE: you can use Excel's *Edit* Replace* to do the conversion.
v. With the data worksheet highlighted, click on the Office Icon (top left) * Save As *, then type in a file name and click on MS Excel CSV(Comma Delimited File) as the file type.
B. To import the saved Excel worksheet into the EALimdep program:
i. Start the EALimdep program
ii. Click on * Project * Import * Variables *, and then navigate to the folder containing the Excel worksheet that you want to import. Check to see that the Excel worksheet was imported correctly by
clicking on EALimdep's Data Editor icon (looks like a spreadsheet).
IV. Descriptive Statistics and Simple Plots
To obtain Descriptive Statistics, click on * Model * Data Description * *Descriptive Statistics*, identify the variables for which you want statistics (mean, std. dev. and range), and then click on * Run *.
Click on the * Options * tab if you want additional statistics measuring covariance, correlation, skewness, etc.
To obtain Simple Bivariate Plots, click on * Model * Data Description * *Plot Variables *, identify the variables you want to plot on the X and Y axes, and then click on * Run *.
V. Ordinary Least Squares Regression
To run an Ordinary Least Squares (OLS) regression in the EALimdep program, click on * Model * * Linear Models * Regression*, and then identify the dependent variable and independent variables. [NOTE:
Be sure to include the variable ONE as an independent variable so that EALimdep estimates a constant term.] Then click on * Run *. If you want to also print out the predicted values and residuals,
click on the * Output * tab and click on * Display Predictions and Residuals *.
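For readers who want to sanity-check EALimdep's regression output against another tool, the same least-squares fit can be reproduced with plain NumPy. This is an independent sketch, not an EALimdep feature, and the data are invented:

```python
import numpy as np

# Invented data: y regressed on x plus a constant (EALimdep's ONE).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

X = np.column_stack([np.ones_like(x), x])        # the "ONE" column, then x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # [intercept, slope]

resid = y - X @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(3), round(r2, 3))  # roughly [0.08, 1.977], R^2 ~ 0.998
```

The coefficients and R-squared should match what EALimdep reports for the same data (up to rounding).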
VI. Testing and correcting for serial (auto) correlation
To test and correct for serial (auto) correlation, click on * Model * Linear Models * Regression *, enter the dependent and independent variables (including ONE), and then click on * Options * *
Autocorrelation * Correct for Autocorrelation using * Cochrane - Orcutt algorithm *. Then click on * Run *. EALimdep will first generate ordinary least squares results followed by results corrected
for serial (auto) correlation.
VII. Testing and correcting for heteroskedasticity
To test and correct for heteroskedasticity, click on * Model * Linear Models * Regression *, highlight the dependent and independent variables (including ONE), and then click on * Options *. Then
click on * Robust VC matrix * and then choose the * hetero HC3 * option. Then click on * Run *. EALimdep will then generate the Breusch Pagan statistic as well as regression results corrected for heteroskedasticity.
VIII. Logit regression models
To generate a logit regression model on a binary dummy (1,0) dependent variable, click on *Model* * Binary Choice * Logit *, enter the dependent and independent variables (with ONE for the constant),
and then click on * Run *. Click on the *Options* tab to generate transformed marginal effect coefficients and probability predictions.
IX. Creating New Variables
New variables can be created out of existing ones by clicking on * Project * New * Variable * and then typing in the Name of the new variable and the expression for computing it.
X. Subsamples
To select subsamples of the data set, click on * Project * Set Sample * and
then define the Range of observations that you want to include in the analyses that will follow. You can also use the Include and Reject options to define expressions that will select/reject those
cases that conform to the expression.
XI. Creating Syntax Files
To create a "syntax" file containing commands, click on * File * New * * Text/Command Document * OK *. Then type in the syntax commands. To execute, "highlight" those commands that you want EALimdep
to process and click on * GO *.
XII. Saving Projects
If you want to save an EALimdep session for future analysis, and want to avoid re-importing the Excel data worksheet and recreating any data transformations that you've already done, then you must
get EALimdep to save your project. To do so, click on * File * Save Project As *, point to the folder where you want the project saved, and name the project. Then you can reopen the project by
starting the EALimdep program, clicking on * File * Open Project *, and then entering the saved project's name. EALimdep will recreate all of the variables that were available at the time the project
was saved.
Re: NFA and negation/conjunction
klyjikoo <klyjikoo@gmail.com>
Wed, 19 May 2010 11:40:56 +0330
From comp.compilers
From: klyjikoo <klyjikoo@gmail.com>
Newsgroups: comp.compilers
Date: Wed, 19 May 2010 11:40:56 +0330
Organization: Compilers Central
References: 10-05-078 10-05-100
Keywords: lex
Posted-Date: 19 May 2010 22:23:55 EDT
Although the same method for complementing a DFA cannot be applied directly in the case of an NFA, I think a correct implementation can be achieved in a somewhat tricky way. The usual interchanging of final and non-final states does not work for an NFA when some string accepted by the NFA is a prefix of another string the NFA accepts.
Usually, after checking an input string against an NFA, one of three situations occurs:
1) The NFA accepts the string
2) The NFA halts while checking the string
3) Both situations 1 and 2 occur (on different computation paths)
Comparing these situations, it is possible to simulate NFA complementation by surrounding the NFA with a complementation module that works as follows: in situations 1 and 3 above, the module simply rejects the input string, and in situation 2 the module accepts the string.
A final note: using this implementation it is also possible to handle the intersection of NFAs by applying De Morgan's rule; however, that would need epsilon transitions.
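The surrounding "complementation module" described above can be sketched as a direct NFA simulation: track the full set of reachable states (with epsilon-closures) and accept exactly when that set contains no final state at the end. Tracking the whole state set is what resolves situation 3, where a string has both an accepting and a halting computation. The encoding below is my own, not the poster's:

```python
def eps_closure(states, eps):
    """All states reachable from `states` via epsilon transitions."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def complement_accepts(word, delta, eps, start, finals):
    """Accept `word` iff the underlying NFA does NOT accept it."""
    current = eps_closure({start}, eps)
    for sym in word:
        moved = {t for s in current for t in delta.get((s, sym), ())}
        current = eps_closure(moved, eps)
    return not (current & finals)   # reject exactly where the NFA accepts

# Tiny NFA over {a, b} accepting every string that contains "ab".
delta = {(0, "a"): {0, 1}, (0, "b"): {0},
         (1, "b"): {2},
         (2, "a"): {2}, (2, "b"): {2}}
print(complement_accepts("aab", delta, {}, 0, {2}))  # False: NFA accepts
print(complement_accepts("ba", delta, {}, 0, {2}))   # True: NFA rejects
```

Because the simulation carries the set of all currently reachable states, it is really the subset construction performed on the fly, which is what makes the complement correct even for nondeterministic machines.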
Making a homemade focal reducer
Roger Hamlett wrote:
wrote in message
I want to make my own focal reducer for my 11" SCT. Can anyone tell me
how to figure
the actual focal reduction based on the f.l. of the achromat lens?
There are several
48mm achromats available online with focal lengths from 208mm to 360mm
(I think).
Any help is appreciated. Thanks.
The formula for reduction is:
Rf = (f - s) / f
Where 'Rf' is the reduction/magnification factor (the same formula works
for Barlows as well), 'f' is the focal length of the lens assembly
concerned, and 's' is the separation between the optical centre of the
lens and the focal plane.
So if you have a 208mm lens, and space it 69mm from the focal plane, you
get:
Rf = (208 - 69)/208 = 0.668
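As a quick check of that arithmetic (a sketch of mine, not from the thread):

```python
def reduction_factor(f_mm, s_mm):
    """Rf = (f - s) / f for a lens of focal length f_mm placed
    s_mm in front of the focal plane."""
    return (f_mm - s_mm) / f_mm

print(round(reduction_factor(208, 69), 3))  # → 0.668
```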
However there is a large 'caveat' with what you are talking about. The
actual focal field of an SCT is significantly curved. On your 11" unit,
the radius is probably about 12" (it depends on the focal lengths of
the two mirrors). When you apply a focal reducer, this curvature is made
worse, reducing the field diameter at the CCD/film etc. that can be used
before the defocus produced by field curvature becomes unacceptable. This
is why the commercial SCT reducers are 'reducer/correctors'. They
normally have plano-convex lenses, in at least one element, to produce a
field curvature (the other way) themselves, which at least partially
corrects for this problem.
Many people have used lenses from binoculars or similar sources to make
reducers like you describe, and for small fields (a small CCD etc.), the
results can be acceptable. However one of the big advantages of the
commercial reducers, is the field flattening effect, which is why the view
through (for example) the Celestron/Meade f/6.3 unit, combined with
perhaps a 26mm eyepiece, can in some cases be more useable than a 2" 40mm
eyepiece. Though the human eye accomodates quite well for the field
curvature, the effect of seeing well focussed stars across the larger
field, can be impressive.
Best Wishes
Thanks for the reply and useful info. If I may, could I ask your
advice, then?
My plan is to use an SCT rear cell that I acquired (it used to be a
filter, I believe, but the filter was broken and removed). Should I
then add a plano-convex (PC) lens along with the achromat?
I am strictly a visual observer, using this setup with a Nexstar11GPS
and a
Denk binoviewer. I've purchased a 2" Starsweeper reducer, so I don't
really need an additional one. But I decided it would be an interesting
project as I
already have the rear cell.
I'm thinking of sourcing the lenses from anchor optical, but am open to
other suggestions.
A building lot in a city is shaped as a 30° -60° -90° triangle. The side opposite the 30° angle measures 41 feet. a. Find the length of the side of the lot opposite the 60° angle. b. Find the length
of the hypotenuse of the triangular lot. c. Find the sine, cosine, and tangent of the 30° angle in the lot. Write your answers as decimals rounded to four decimal places.
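For reference, the 1 : √3 : 2 side ratios of a 30°-60°-90° triangle give the answers directly. The worked sketch below is mine, not part of the original page:

```python
import math

short_leg = 41                        # side opposite the 30° angle
long_leg = short_leg * math.sqrt(3)   # side opposite the 60° angle
hypotenuse = 2 * short_leg

print(round(long_leg, 4))                     # → 71.0141
print(hypotenuse)                             # → 82
print(round(math.sin(math.radians(30)), 4))   # → 0.5
print(round(math.cos(math.radians(30)), 4))   # → 0.866
print(round(math.tan(math.radians(30)), 4))   # → 0.5774
```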
Polynomial Applications
The binomial distribution gives the probability of k successes in n independent trials that have a yes or no answer, known as Bernoulli trials, where p is the probability of success. The binomial
distribution can be used in genetics to determine the probability that k out of n individuals will have a particular genotype. In this case, having that particular genotype is considered "success."
The binomial distribution is given by,
P(k/n) = [n! / (k! (n - k)!)] p^k (1 - p)^(n - k)
where P(k/n) is the probability of k successes in n trials, and p is the probability of a success. Recall that m! = m · (m - 1) · (m - 2) · · · 2 · 1 where m is a positive integer, and 0! = 1. Because the n trials are yes/no, notice that there are k successful trials, each with probability p, and the remaining n - k trials are failures, each with probability 1 - p.
To demonstrate the binomial distribution, let n = 5 and k = 2, in other words there are 2 successes in 5 trials. Under these circumstances the distribution becomes,
P(2/5) = [5! / (2! 3!)] p^2 (1 - p)^3 = 10 p^2 (1 - p)^3
where p is the probability of success.
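The formula can be evaluated directly; this sketch is mine (`math.comb` requires Python 3.8+):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli trials
    with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The n = 5, k = 2 case from the text: 10 * p^2 * (1 - p)^3
print(binomial_pmf(2, 5, 0.5))  # → 0.3125
```

As a sanity check, the probabilities over all k from 0 to n sum to 1.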
MathGroup Archive: January 2007 [00129]
Re: Re: question on Plot and Show plus Axis Labels:
• To: mathgroup at smc.vnet.net
• Subject: [mg72592] Re: [mg72566] Re: question on Plot and Show plus Axis Labels:
• From: Bob Hanlon <hanlonr at cox.net>
• Date: Wed, 10 Jan 2007 03:55:16 -0500 (EST)
• Reply-to: hanlonr at cox.net
Bob Hanlon
---- Gopinath Venkatesan <gopinathv at ou.edu> wrote:
> Hello David
> Thanks for showing interest in my problem and for posting reply. Please see the below sample code. This sample is not related to my study, this is just to illustrate.
> x = Table[i, {i, 1, 15, 1}];
> v = Table[Sin[(Pi/15) x[[i]]], {i, 1, 15, 1}];
> y[j_] := Cos[(Pi/15) x[[j]]];
> y1 = Table[y[j], {j, 1, 15, 1}];
> y2 = Table[v[[j]], {j, 1, 15, 1}];
> p1 = ListPlot[Thread[y1, x], PlotJoined -> True]
> p2 = ListPlot[Thread[y2, x], PlotJoined -> True]
> Show[p1, p2]
> I could not use Plot for plotting a list of values of y1 and y2 for the x values. I tried using the below Plot command and got error,
> p1=Plot[y1,{x,1,15}]
> that y1 is not a machine readable value at x = 1.0000000+
> meaning that Plot tries to do a smooth fit between these intervals. In my case, the x value increments by an integer value, say a unit increase. For those discrete sets of x values, I have respective y values, and for using ListPlot, I have to thread these x and y values. That's why I am using Thread.
> If you can tell me how to use Plot function for plotting the values above, that will be great.
> Or please suggest other ways where I have to use combinations of ListPlot, Show, and others to produce a multi-plot with the axis label rotated (y axis), use Plot Legends to work with Show (or an alternate to Plot Legend since it doesn't work with Show), and color them differently. I figured out how to color them differently - using the RGBColor option. But I am still stuck with the Legends and the rotated-labels options.
> RotateLabel works fine with Frames. If I use Frames, will that solve other problems too. I am trying, but just in case, if you know the answer, please hint me. Thanks,
> Gopinath Venkatesan
> Graduate Student
> University of Oklahoma
Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-21882
Kappeler, T; Pöschel, J (2003). On the Korteweg-de Vries equation and KAM theory. In: Hildebrandt, S; Karcher, H. Geometric analysis and nonlinear partial differential equations. Berlin, 397-416.
ISBN 3-540-44051-8.
In this note we give an overview of results concerning the Korteweg-de Vries equation
u_t = u_{xxx} + 6 u u_x
and small perturbations of it. All the technical details are contained in our book [KdV & KAM, Springer, Berlin, 2003; MR1997070].
The KdV equation is an evolution equation in one space dimension which is named after the two Dutch mathematicians Korteweg and de Vries, but was apparently derived even earlier by Boussinesq. It was
proposed as a model equation for long surface waves of water in a narrow and shallow channel. Their aim was to obtain as solutions solitary waves of the type discovered in nature by Scott Russell in
1834. Later it became clear that this equation also models waves in other homogeneous, weakly nonlinear and weakly dispersive media. Since the mid-sixties the KdV equation has received a lot of
attention in the aftermath of the computational experiments of Kruskal and Zabusky, which led to the discovery of the interaction properties of the solitary wave solutions and in turn to the
understanding of KdV as an infinite-dimensional integrable Hamiltonian system.
Our purpose here is to study small Hamiltonian perturbations of the KdV equation with periodic boundary conditions. In the unperturbed system all solutions are periodic, quasi-periodic, or almost
periodic in time. The aim is to show that large families of periodic and quasi-periodic solutions persist under such perturbations. This is true not only for the KdV equation itself, but in principle
for all equations in the KdV hierarchy. As an example, the second KdV equation is also considered.
Natural Science and Mathematics - MATH Faculty
College of Natural Science and Mathematics
Department of Mathematics
Neal R. Amundson. Cullen Distinguished Professor of Chemical Engineering and Mathematics. B.S., M.S., Ph.D., University of Minnesota; Sc.D., (Hon.) University of Minnesota; Eng.D., (Hon.) University
of Notre Dame; Ph.D., (Hon.) University of Guadalajara, Mexico.
David A. Archer. Adjunct Professor of Mathematics. B.S., Texas Christian University; M.A., Ph.D., Rice University.
J. F. Giles Auchmuty. Professor of Mathematics. B.Sc., Australian National; M.S., Ph.D., University of Chicago.
Joseph G. Baldwin. Associate Professor of Mathematics. Hauptdiplom, Dr.rer.nat., Georg August Universitat, Gottingen, West Germany.
David Bao. Professor of Mathematics. B.Sc., University of Notre Dame; Ph.D., University of California, Berkeley.
David P. Blecher. Associate Professor of Mathematics. B.Sc. Hons., University of the Witwatersrand; C.A.S.M., Cambridge University; Ph.D., University of Edinburgh.
E. Andrew Boyd. Adjunct Associate Professor of Mathematics. A.B., Oberlin College; Ph.D., Massachusetts Institute of Technology.
Dennison R. Brown. Professor of Mathematics. B.S., Duke University; Ph.D., Louisiana State University.
Richard D. Byrd. Professor of Mathematics. B.A., Hendrix College; M.S., University of Arkansas; Ph.D., Tulane University.
S. S. Chern. Distinguished Visiting Professor of Mathematics. B.S., Nankai University; M.S., Tsinghua University; Ph.D., University of Hamburg.
Howard Cook. Professor of Mathematics. B.S., Clemson University; Ph.D., University of Texas at Austin.
Edward J. Dean. Associate Professor of Mathematics. B.S., University of New Mexico; M.S., Ph.D., Rice University.
Henry P. Decell, Jr. Professor of Mathematics. B.S., McNeese State University; M.S., Ph.D., Louisiana State University.
Garret J. Etgen. Chair and Professor of Mathematics. B.S., College of William and Mary; M.A., University of Wisconsin, Madison; Ph.D., University of North Carolina at Chapel Hill.
Siemion Fajtlowicz. Professor of Mathematics. M.M., University of Wroclaw, Breslau, Poland; Ph.D., Mathematical Institute of Polish Academy of Science.
Michael J. Field. Professor of Mathematics. B.A., Cambridge University; Ph.D., University of Warwick.
William E. Fitzgibbon III. Professor of Mathematics. B.A., Ph.D., Vanderbilt University.
Michael Friedberg. Professor of Mathematics. B.S., University of Miami; Ph.D., Louisiana State University.
Roland Glowinski. Cullen Distinguished Professor of Mathematics and Professor of Mechanical Engineering. B.S., Ecole Polytechnique, Paris; M.S., Ph.D., University of Paris.
Martin A. Golubitsky. Cullen Distinguished Professor of Mathematics. A.B., A.M., University of Pennsylvania; Ph.D., Massachusetts Institute of Technology.
John T. Hardy. Associate Dean of the College and Associate Professor of Mathematics. B.S.C.E., University of Mississippi; M.S., Ph.D., Louisiana State University.
Jutta Hausen. Professor of Mathematics. Diploma, Ph.D., University of Frankfurt, West Germany.
Shanyu Ji. Associate Professor of Mathematics. B.S., East China Normal University; M.A., Ph.D., The Johns Hopkins University.
Gordon Johnson. Professor of Mathematics. B.S., Illinois Institute of Technology; Ph.D., University of Tennessee.
Johnny A. Johnson. Professor of Mathematics. B.A., M.A., Ph.D., University of California, Riverside.
Klaus Kaiser. Professor of Mathematics. Diplom, University of Köln, West Germany; Ph.D., University of Bonn, West Germany.
Barbara L. Keyfitz. Professor of Mathematics. B.Sc., University of Toronto; M.S., Ph.D., New York University, Courant Institute.
Andrew Lelek. Professor of Mathematics. M.S., Ph.D., University of Wroclaw, Breslau, Poland.
Ian S. Melbourne. Associate Professor of Mathematics. B.S., University of Manchester, England; M.S., Ph.D., University of Warwick, England.
Christopher B. Murray. Assistant Professor of Mathematics. B.A., Rice University; Ph.D., University of Texas at Austin.
Matthew Joseph O'Malley. Professor of Mathematics. B.S., Spring Hill College; M.S., Ph.D., Florida State University.
Tsorng-Whay Pan. Assistant Professor of Mathematics. B.S., National Taiwan University; Ph.D., University of Minnesota.
Vern Paulsen. Professor of Mathematics. B.A., Western Michigan University; Ph.D., University of Michigan.
Jacques Periaux. Adjunct Professor of Mathematics. D.E.A., Ph.D., University of Paris.
Charles Peters. Associate Professor of Mathematics. B.S., M.S., Ph.D., Texas A&M University.
Min Ru. Associate Professor of Mathematics. B.S., M.S., East China Normal University; Ph.D., University of Notre Dame.
Richard Sanders. Associate Professor of Mathematics. B.A., M.A., Ph.D., University of California, Los Angeles.
Ridgway Scott. M.D. Anderson Professor of Mathematics and Professor of Computer Science. B.A., Tulane University; Ph.D., Massachusetts Institute of Technology.
James Wilson Stepp. Professor of Mathematics. B.S., M.S., Ph.D.,University of Kentucky.
Charles T. Tucker. Associate Professor of Mathematics. B.A., B.S., Texas A&M University; M.A., Ph.D., University of Texas at Austin.
David H. Wagner. Associate Professor of Mathematics. B.S., Union College; Ph.D., University of Michigan.
Philip William Walker. Associate Professor of Mathematics. B.S., Tulane University; M.S., Ph.D., University of Georgia.
Lewis T. Wheeler. Professor of Mathematics and Mechanical Engineering. B.S., M.S., University of Houston; Ph.D., California Institute of Technology.
Mary F. Wheeler. Affiliated Senior Scientist. B.A., M.A., University of Texas; Ph.D., Rice University.
Clifton T. Whyburn. Associate Professor of Mathematics. B.S., University of Alabama; M.A., Ph.D., University of North Carolina at Chapel Hill.
Tiee-Jian Wu. Associate Professor of Mathematics. B.S., National Cheng-Kung University, Taiwan; M.A., Wake Forest University; M.S., Ph.D., Indiana University, Bloomington.
Jian-Lun Xu. Assistant Professor of Mathematics. B.S., Suzhou University; M.A., Ph.D., University of Maryland.
Last updated: Friday, August 10, 2012 - 06:12 AM
Posts by
Total # Posts: 4
Timothy had a total of 238 blue and red marbles. He gave 3/7 of the blue marbles and 5/8 of the red marbles away. Altogether, Timothy gave away 124 marbles.How many red marbles did Timothy give away?
(Skill: Equations)
math (please help)
Timothy had a total of 238 blue and red marbles. He gave 3/7 of the blue marbles and 5/8 of the red marbles away. Altogether, Timothy gave away 124 marbles.How many red marbles did Timothy give away?
(Skill: Equations)
the total length of 2 ropes A and B is 7.23 m. the total length of 2/5 of Rope A and 3/4 of Rope B is 3.97 m. Find the length of Rope A.(Skill: equations) (step by step and give explanation if can)
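For the rope question above, the setup is two linear equations; the solution sketch below (using exact fractions to avoid decimal round-off) is mine, not part of the original posts:

```python
from fractions import Fraction

# A + B = 7.23  and  (2/5)A + (3/4)B = 3.97
total = Fraction(723, 100)
partial = Fraction(397, 100)

# Substitute B = total - A:
#   (2/5)A + (3/4)(total - A) = partial
#   A * (2/5 - 3/4) = partial - (3/4) * total
A = (partial - Fraction(3, 4) * total) / (Fraction(2, 5) - Fraction(3, 4))
B = total - A
print(float(A), float(B))  # → 4.15 3.08
```

So Rope A is 4.15 m long (and Rope B is 3.08 m), which checks out: (2/5)(4.15) + (3/4)(3.08) = 1.66 + 2.31 = 3.97.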
Jimmy had a total of 163 apples and oranges. After selling 50% of all the apples and 20% of all the oranges, he had 110 fruits left. How many oranges did Jimmy have at first?
Stochastic differential equation problem
Sorry if this is in the wrong section, but I have a problem. I have no experience with stochastic equations (well, analytically anyway).
The equation I have is the following:
[itex]\frac{dv}{dt} = - \alpha v+ \lambda F+\eta[/itex]
Where alpha, lambda and F are constants, v is a variable (speed in this case), and eta is a random value. I believe this is similar to Brownian motion with an applied field, although I have no idea how
to solve this analytically. I plan to solve it analytically and compare it to a numerical solution. So any help will be most appreciated!
Students Share Math Award
December 9, 2005
David Collins, a senior philosophy/mathematics major from Bakersfield, and Patrick Dixon, a senior mathematics major from San Rafael, N.M., are recipients of Occidental College’s 2005 Benedict J.
Freedman Prize for Mathematical Promise. Collins, who also won the award in 2004, is being recognized for his work on a branch of mathematics known as combinatorial game theory; Dixon for his studies
in applied mathematics.
The prize is awarded to a junior or senior Occidental student who has demonstrated exceptional mathematical promise through original research or scholarship in a mathematical science. Collins and
Dixon will split the $500 prize. Both will give presentations on their research next semester. The students were selected for the Freedman Prize by a committee of Occidental mathematics faculty. The
award was established by the family of Benedict Freedman, professor emeritus of mathematics.
Collins’ research – combinatorial game theory – considers simple contests, such as Tic-Tac-Toe, played between two players. Mathematicians analyzing a competition attempt to find – given any position
in the game – which player will win and how, provided both players are competing at their highest potential. Findings lead to a winning strategy. Collins’ development of a winning strategy for the
number game Euclid was published in INTEGERS, the Electronic Journal of Combinatorial Number Theory. Collins recently won a Barry M. Goldwater Scholarship, given to students intending to pursue
careers in math, science and engineering fields.
Dixon’s interest lies in using math to model biological phenomena, from the small scale, such as growth of bones and tumors, to the large scale, such as epidemiology and population dynamics. Such
research could help determine which sectors of a population should be vaccinated again avian flu, for example, in hopes of warding off a pandemic. Dixon wants to work in public health policy. He
recently won a Marshall Scholarship, which will allow him to pursue postgraduate studies in Great Britain next fall.
Numerical and Computational Challenges in Enviro
Schedule: February 5, 7, 12, 14, 19, 2002, 2:00 pm - 4:30 pm
Short description:
The problems connected with environmental pollution are becoming more and more important. It is necessary to use advanced mathematical models in order to give adequate answers to the numerous
questions arising in this field. These models are described by systems of partial differential equations (PDEs). The number of equations is equal to the number of chemical species involved in the
model. In the attempt to obtain more reliable results one tries (i) to incorporate more chemical species in the model and/or (ii) to use finer grids in the discretization of the spatial derivatives.
This leads to very large computational tasks. Furthermore, long sequences of scenarios are, as a rule, needed in most environmental studies. This is why many challenging problems have to be
resolved in order to prepare reliable answers to the questions asked. The solution of some of these problems, when systems of PDEs arising in environmental models are handled on modern high-speed
computers, will be the major topic of this course. Several applications of the models, including the impact of future climate changes (the greenhouse effect) on pollution levels, will also be
described. Finally, a series of problems which are still open will be presented and discussed.
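To give a flavor of the spatial discretizations such models rely on, here is a minimal sketch of mine (not course material): first-order upwind differencing for 1-D advection u_t + c u_x = 0, the transport part of a pollution model.

```python
def upwind_step(u, c, dx, dt):
    """One explicit step of first-order upwind advection (assumes c > 0),
    with periodic boundary conditions."""
    r = c * dt / dx
    return [u[i] - r * (u[i] - u[i - 1]) for i in range(len(u))]

n, dx, c = 100, 1.0, 1.0
dt = 0.5 * dx / c                       # CFL number 0.5 for stability
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]
mass0 = sum(u)
for _ in range(50):
    u = upwind_step(u, c, dx, dt)
print(abs(sum(u) - mass0) < 1e-9)       # → True: the scheme conserves mass
```

Real air-pollution codes couple many such transport equations to stiff chemistry terms, which is what makes the computational task so large.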
Most of the problems treated in this course are also discussed in: Zahari Zlatev: "Computer Treatment of Large Air Pollution Models", Kluwer Academic Publishers, Dordrecht-Boston-London, 1995.
However, some new results will be presented (copies of relevant materials will be made available as postscript and pdf files).
Contents of lectures:
action potential - neuroscientific Achilles heel
chris hickie chickie at girch1.med.uth.tmc.edu
Mon May 9 19:57:55 EST 1994
In article <JcwsU7y.gokelly at delphi.com>, GREGORY C.O'KELLY
<gokelly at delphi.com> wrote:
> It is important to keep in mind, while reading the paper "Action
> Potentials - Honored Tradition or Alarming Embarrassment", that the
> equation W = f x d is Dr. Koester's equation, and he is not clear as to
> what f is. One might think that it is electrostatic or electromotive
> force, but this, as potential difference, is a voltage, and not really
> a force at all. So Dr. Koester's diagrams on p. 1035 relating force to
> distance have no application at all in this case because distance, d,
> is already contained in potential difference. The area under the curve,
> or work, is more truly W = C x V^2 / 2, which is the integral (and
> therefore the area under the curve) for Q = CV. This oversight on Dr.
> Koester's part is why he concludes, incorrectly, that increasing d
> decreases capacitance. If d increases, then V must diminish, so that if
> Q is to remain the same, capacitance must increase. The only way for V
> to remain the same if d increases, is for Q to increase.
>
> I find it exasperating to hear again and again from neuroscientific
> types, whose living depends upon the perpetuation of such errors, that
> I don't understand electricity because I use the equation W = f x d, as
> if I were giving my approval to its use, when that equation is Dr.
> Koester's, and not pertinent at all to electrical circuitry. I try to
> point out the weaknesses of that equation, and how Dr. Koester's
> definition of f (in fact he doesn't really give one, just hints at it)
> is mistaken. Impugning my understanding of electricity tells me that
> the critic is projecting his own ignorance, and that the critic
> probably makes a good salary and has status as a neuroscientist in the
> perpetuation of what can only really be described as despicable and
> shoddy science and self-serving sciolism.
Where do I begin? Hmmm....
1) I find it exasperating when people don't hit carriage returns more
often. (does carriage return at 80
characters mean anything to you?)
2) You are correct in that W = C*(V^2)/2, but this is for the work done
charging the capacitor, not the
work done in moving an infinitesimal charge from one side of the
capacitor to the other once it has been
charged. I believe Dr. Koester is talking about the latter case.
3) The integral of a force over a distance is an extremely pertinent
quantity called work. If you know
the force function and you know the path over which an object is
moved, you can calculate
the work required to move that object, be it a person or an electron.
I will grant you
that for absolute generality, W should be defined as an integral,
which it is on page 1033 of
Appendix A. The remainder of the discussion seems geared towards
students with little
or no calculus background, hence the lack of integration and also the
lack of derivatives needed
to explain quantitatively the charging of a capacitor as a function
of time once a switch is closed.
(In a way, it's a shame introductory books can't assume a higher
level of math literacy in
students today) Go read Halliday and Resnick's Physics, chapters
27-32 of the third edition,
(1978) for derivations that I'm sure will be more to your liking.
4) Nobody's impugning you. I just wish you'd stop making these vague
and/or incorrect references
to prior neuroscience research. I find it annoying and it projects
your own ignorance.
5) Big words don't fool everyone. If you can't state it in a simple
manner, it probably isn't worth
6) I make jack sh*t for a salary as a grad student; my career hopes, like
many young scientists are
bleak, too. What exactly is your background, anyhow? You sure act
like you live in your
own ivory tower.
7) If this one *trivial* complaint is the basis of your whole "revolution"
in neuroscience,
then maybe you better find a new central tenet around which to base
your arguments.
More information about the Neur-sci mailing list
FOM: Determinacy of statements -- reply to Richman
Fred Richman richman at fau.edu
Mon Jun 26 15:43:49 EDT 2000
"V. Sazonov" wrote:
> JoeShipman at aol.com wrote:
> > Richman:
> > >However, when constructivists do
> > >number theory, I would think that they have the same model in mind
> > >that every other mathematician does.
> > Professor Sazonov would disagree (if I have understood previous
> > posts of his correctly).
> More precisely, I do not understand what does it mean "the same model"
> in this context. However, I understand if we mean by a model here
> just a formal system or its preliminary semiformal version
> (describing "this" model).
The question is whether we are talking about the same things. This may
be a bad question, but it is an intriguing one. It would be difficult
to establish that we had the same model in mind; as Professor Sazonov
points out, it's not clear what that even means. Nevertheless, I feel
that I have some idea of what I'm talking about when I state the twin
prime conjecture, and I believe, from the communications I have had
with other mathematicians, that they have the same thing in mind.
I don't think that we have a formal system in mind. However, our
communications would appear to be in the form of a "preliminary
semiformal version". Because of that, one could view the common model
to be that preliminary semiformal system, or something closely related
to it. That is to say, what we are talking about could be identified
more or less with the system we use to talk about it. I don't see any
way to refute this view, nor am I interested in so doing. What I am
suggesting is that, whatever the nature of the model, the constructive
mathematician and the classical mathematician are thinking of the same
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2000-June/004140.html","timestamp":"2014-04-18T08:06:06Z","content_type":null,"content_length":"4273","record_id":"<urn:uuid:cc9e7088-4def-4305-af0f-d9ea2182bdf5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reduce Fractions Program (I think it might be a dumb mistake)
10-31-2005 #1
Registered User
Join Date
Oct 2005
Reduce Fractions Program (identifier not found)
Part of our assignment is to reduce two user-inputted fractions so they are in their simplest form. The error messages I get mostly have to do with my "num" and "denom" for numerator and
denominator being undeclared identifiers, but there may be more. I'm kinda bad at C++. I'd really appreciate any help I can get.
#include <iostream>
#include <cstdlib>
using namespace std;
class Fraction
{
    int num;
    int denom;
    char something;
public:
    Fraction() {}
    void reduce();
    friend istream operator>> (istream, Fraction);
    friend ostream& operator<< (ostream&, const Fraction&);
};

Fraction::Fraction (int numerator, int denominator, char something)
{
    num = numerator;
    denom = denominator;
    something = a;
}

void reduce()
{
    int n, m, r, sign;
    n = num;
    m = denom;
    if (num = 0)
        denom = 1;
    sign = 1;
    if (num < 0)
    {
        sign = -1;
        num = -num;
    }
    if (denom < 0)
    {
        sign = -sign;
        denom = -denom;
    }
    r = n % m;
    while (r != 0){
        n = m;
        m = r;
        r = n % m;
    }
    num = num * sign / m;
    denom = denom * sign / m;
}

istream operator>> (istream input, int num, int denom, char something, Fraction& oneRatio)
{
    input >> num >> something >> denom;
    return input;
}

ostream& operator<< (ostream& output, const Fraction& oneRatio)
{
    output << oneRatio.num;
    if (oneRatio.denom != 1)
        output << "/" << oneRatio.denom;
    return output;
}

int main()
{
    Fraction fract_1, fract_2;
    cin >> fract_1 >> fract_2;
    cout << "The entered fraction, reduced is " << fract_1 << endl
         << "The entered fraction, reduced is " << fract_2 << endl << endl;
    return 0;
}
All of my errors concern the use of num and denom in the function reduce. When I set n=num and m=denom I get "undeclared identifier" and every time I use num and denom for the rest of the function, I get "identifier not found, even with argument-dependent lookup".
I also get the error 'Fraction::Fraction(int,int,char)' : overloaded member function not found in 'Fraction' in the constructor
Last edited by rrum; 10-31-2005 at 11:48 PM.
All of my errors concern the use of num and denom in the function reduce.
You should also be getting some in your friend functions.
When I set n=num and m=denom I get "undeclared identifier" and every time I use num and denom for the rest of the function, I get "identifier not found, even with argument-dependent lookup"
What's the proper format for defining a member function outside of the class? Inside your class, declare a member function called greeting() which takes no arguments and returns nothing. Define
the function outside your class to display "hello". When you get that working, compare it to reduce().
I also get the error 'Fraction::Fraction(int,int,char)' : overloaded member function not found in 'Fraction' in the constructor
Well, when I look through your class declaration, I can't see where you declared a constructor that takes 3 arguments. A class can have more than one function with the same name as long as each function has different parameters. When you have more than one function with the same name, that's called 'overloading' and that's what the compiler is referring to.
By the way, operator functions like >> and << have very specific parameters and return values, so you need to study up on those as well.
Last edited by 7stud; 11-01-2005 at 01:05 AM.
In my istream operator I get "binary 'operator >>' has too many parameters". And then I get an error where I try to cin fract_1 and 2 in the main: "binary '>>' : no operator found which takes a right-hand operand of type 'Fraction' (or there is no acceptable conversion)".
There's also an error having to do with the overloaded member function that says "see declaration of 'Fraction'". The rest I already stated.
I added void greeting() to the public part of the class and defined it the same way with just
cout << "hello" << endl;
and then I added a line in my input operator saying oneRatio.greeting(); It doesn't give me any errors....
Am I supposed to have something inside the Fraction() {} part of the class?
I appreciate you taking the time to respond.
I added void greeting() to the public part of the class and defined it the same way with just
cout << "hello" << endl;
Ok. You have so many errors, you're going to have to start over. It's not so bad though because you can cut and paste your old function definitions into your new project. When you write a program, you need to write one function at a time, and then you need to compile and test it to make sure everything works before moving on to the next function.
So, save your old project and start a version two.
Generally, you will start by writing the constructors for your class and then test them in main(). After all, you have to be able to construct an object before you can do anything else with it.
Since you are getting errors with your constructors that's a good place to start. Make sure all the constructors for your class work by creating objects in main() with each one before moving on.
To be clear, there shouldn't be anything else in your class but the constructors and the member variables.
After that, perform these three steps:
1) Declare the function greeting() in your class.
2) Define greeting() outside the class.
3) In main(), create an object of your class and call greeting() with the object.
When you get that working, compare it to how you defined reduce(). Performing those steps will demonstrate two things:
1) what you are doing wrong with reduce()
2) how to write and test code.
With every function you write, you must test it in main() to see if it works the way it's supposed to before writing another function.
Last edited by 7stud; 11-01-2005 at 03:34 AM.
#include <iostream>
#include <cstdlib>
using namespace std;

class Fraction
{
public:
    Fraction() {}
    void greeting();
};

void Fraction::greeting()
{
    cout << "Hello" << endl;
}

int main()
{
    Fraction test;
    return 0;
}
That's one thing down....
Doing the same thing for reduce() takes away most of my errors, but I still have:
"c:\Documents and Settings\Kevin\My Documents\Visual Studio Projects\HW11\HW11.h(21) : error C2511: 'Fraction::Fraction(int,int,char)' : overloaded member function not found in 'Fraction'
c:\Documents and Settings\Kevin\My Documents\Visual Studio Projects\HW11\HW11.h(6) : see declaration of 'Fraction'
c:\Documents and Settings\Kevin\My Documents\Visual Studio Projects\HW11\HW11.h(70) : error C2804: binary 'operator >>' has too many parameters
c:\Documents and Settings\Kevin\My Documents\Visual Studio Projects\HW11\HW11.cpp(10) : error C2679: binary '>>' : no operator found which takes a right-hand operand of type 'Fraction' (or there
is no acceptable conversion)
Last edited by rrum; 11-01-2005 at 04:25 AM.
Ok, here's what I have now:
#include <iostream>
#include <cstdlib>
using namespace std;

class Fraction
{
public:
    Fraction() {}
    Fraction(int, int, char);
    void reduce();
    int num;
    int denom;
    char something;
    friend istream operator>> (istream, Fraction);
    friend ostream& operator<< (ostream&, const Fraction&);
};

Fraction::Fraction (int numerator, int denominator, char some)
{
    num = numerator;
    denom = denominator;
    something = some;
}

void Fraction::reduce()
{
    int n;
    int m;
    int r;
    int sign;
    n = num;
    m = denom;
    if (num = 0)
        denom = 1;
    sign = 1;
    if (num < 0)
    {
        sign = -1;
        num = -num;
    }
    if (denom < 0)
    {
        sign = -sign;
        denom = -denom;
    }
    r = n % m;
    while (r != 0){
        n = m;
        m = r;
        r = n % m;
    }
    num = num * sign / m;
    denom = denom * sign / m;
}

istream operator>> (istream input, Fraction oneRatio)
{
    input >> oneRatio.num >> oneRatio.something >> oneRatio.denom;
    return input;
}

ostream& operator<< (ostream& output, const Fraction& oneRatio)
{
    output << oneRatio.num;
    if (oneRatio.denom != 1)
        output << "/" << oneRatio.denom;
    return output;
}

int main()
{
    Fraction fract_1, fract_2;
    cin >> fract_1 >> fract_2;
    cout << "The entered fraction, reduced is " << fract_1 << endl
         << "The entered fraction, reduced is " << fract_2 << endl << endl;
    return 0;
}
Now there are only 2 errors.
"c:\Documents and Settings\Kevin\My Documents\Visual Studio Projects\HW11\HW11.h(75) : error C2558: class 'std::basic_istream<_Elem,_Traits>' : no copy constructor available or copy constructor
is declared 'explicit'
c:\Documents and Settings\Kevin\My Documents\Visual Studio Projects\HW11\HW11.cpp(10) : error C2679: binary '>>' : no operator found which takes a right-hand operand of type 'Fraction' (or there
is no acceptable conversion)"
I'm not sure what to do at this point. If I pass the class by reference in the istream operator, it actually compiles. But then the reduce function doesn't do anything; I guess that's the whole point in not passing by reference.
Wow! You've made a lot of progress. Congratulations.
I'm not sure what to do at this point. If I pass the class by reference in the istream operator, it actually compiles.
The first error is due to the fact you aren't following the operator function "idioms", i.e. the blueprints for defining those functions. The blueprint for each operator function is very specific, the blueprints vary depending on the operator, and you don't have leeway to change them. With your operator>> you are returning an istream object. That means you are passing by value. Whenever you pass by value, the compiler has to make a copy of the object--it doesn't matter whether you are sending the function an argument or returning a value from a function; if it is passed by value, then the compiler has to make a copy. The compiler makes copies of objects using the copy constructor. A copy constructor is a constructor that has a reference to the class type as a parameter, e.g.
Fraction(Fraction& aFraction)
Just like with constructors, the compiler will provide an invisible default copy constructor which copies the members one for one. Most of the time that will be sufficient (unless you have pointers for member variables).
I'm not quite sure why you are getting that particular copy constructor error, but fix the format of your operator>> and it should go away. I'm not sure why you are getting the second error, so fix what you can and see if it goes away. In the future, please post a comment on the line where the error is occurring.
I'm not sure what to do at this point. If I pass the class by reference in the istream operator, it actually compiles. But then the reduce function doesn't do anything; I guess that's the whole point in not passing by reference.
Your reduce() function doesn't have anything to do with your operator>> function, so I'm not sure why you think changing your operator>> function will fix reduce().
Finally, how will this line:
denom = 1;
ever execute?
Last edited by 7stud; 11-01-2005 at 12:42 PM.
DONE AND DONE! THANK GOD. And by God I mean you
Stayed up all night, and now I gotta statics test tonight to study for! Yay!
Oct 2005 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/71642-reduce-fractions-program-i-think-might-dumb-mistake.html","timestamp":"2014-04-17T12:51:36Z","content_type":null,"content_length":"79185","record_id":"<urn:uuid:9afb03fb-e286-4f25-8874-560094f69bfe>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about visualisation on Christopher Olah's Blog
Functions of the form $G \to \mathbb{R}$ or $G \to \mathbb{C}$, where $G$ is a group, arise in lots of contexts.
One very natural way this can happen is to have a probability distribution on a group, $G$. The probability density of group elements is a function $G \to \mathbb{R}$.
Another way this can happen is if you have some function $f: X \to \mathbb{R}$ and $G$ has a natural action on $f$‘s domain – if you care about the values $f$ takes at a particular point $x$, you
are led to consider functions of the form $g \to f(gx)$. For a specific example, the intensity of a particular pixel, $x$, in a square gray-scale image, $f: [0,1]^2 \to \mathbb{R}$, subject to flips
and rotations, can be considered as a function $D_4 \to \mathbb{R}$.
Basic Visualization
Recall that we can visualize finitely generated groups by drawing Cayley Diagrams. (There’s a nice book, Visual Group Theory by Nathan Carter, that teaches a lot of basic group theory from the
perspective of Cayley Diagrams.)
The natural way to visualize functions on groups is to picture them as taking values on the nodes of the Cayley Diagrams. One way to do this is by coloring the nodes. In the following visualization
of a real-valued function on $D_4$, dark colors represent a value being close to zero and light colors close to one. | {"url":"http://christopherolah.wordpress.com/tag/visualisation/","timestamp":"2014-04-20T11:14:55Z","content_type":null,"content_length":"35048","record_id":"<urn:uuid:e05e1923-cc1b-4ad0-b56a-6aeddc93b6c1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 543
Drinking 8 fluid ounces of milk provides 270.0 milligrams of calcium. How many fluid ounces of milk provide 72.5 milligrams of calcium? Round to the nearest tenth.
How much copper 2 sulfate do you need to react with 0.5grams of steel wool?
The unit rate of 7/12 liters in 3/10 kilometers
43+5=8 do u regroup
Thank you!!!!
A car dealer has 30 cars and trucks. If 2 more cars are delivered, the dealer will have 3 times as many cars as trucks. How many does trucks does the dealer presently have?
solve the equation for x using the fact that if a^u =a^v then u=v. 9^-x+15=27x. thanks
simpify: 3radical27x^10/5x^3y^5 7radical2x+2radical72x^3-radical162x^5 sorry dont know how to do the radical sign.
use the given zero to find the remaining zeros of the fuction...f(x)=x to the 5power-6x4power+7x3power-10x2power zero:1+i
find the real zeros of the given functions, use the real zeros to factor f. f(x)=-2x4power+9x3power+2x2power-39x+18
What is the equation for an input of 0,1,2,3,4 and output of 1,4,9,16,25
Most Americans, 75% in fact, say frequent handwashing is the best way to fend off the flu. Despite that, when using public restrooms, women wash their hands only 67% of the time and men wash only 46%
of the time. Of the adults using the public restroom at a large grocery chain...
How does using a scale model make these relationships easier to understand?
56 is ______% of 125.
When a not-for-profit facility receives a contribution from a member of the community, the cost of the capital is inconsequential when deciding how to use the contribution, because it is, in effect,
free money. Yes
How should the future value equation be modified if compounding occurs more frequently than annually?
Compare the results of the present value of a $6,000 ordinary annuity at 10 percent interest for ten years with the present value of a $6,000 annuity due at 10 percent interest for eleven years.
Explain the difference.
Boulder City Hospital has just been informed that a private donor is willing to contribute $10 million per year at the beginning of each year for fifteen years. What current dollar value of this
contribution if the discount rate is 14 percent?
In the future value annuity table at any interest rate for one year, why is the future value interest factor of this annuity equal to 1.00?
Stillwater hospital is borrowing $1,000,000 for its medical office building. The annual interest rate is 5 percent. What will be the equal annual payments on the loan if the length of the loan is
four years and payments occurs at the end of each year?
Lincoln Memorial hospital has just been informed that private donor is willing to contribute $5 million per year at the beginning of each year for fifteen years. What is the current dollar value of
this contribution if the discount rate is 9 percent?
so how am i suppose to find the semiannually at 12
still lose or 30,000(( 1.06)^12=
If a community clinic invested $3,000 in excess cash today, what would be the value of its investment at the end of three years: a. at a 12 percent rate compounded semiannually? B. at a 12 percent
rate compounded quarterly?
If a nurse deposits $2,000 today in a bank account and the interest is compounded annually at 10 percent, what will be the value of this investment: a. five years from now? B. ten years from now? C.
fifteen years from now? D. twenty years from now?
The chief financial officer of a home health agency needs to determine the present value of a $10,000 investment received at the end of year 10. What is the present value if the discount rate is: a.
6 percent? b. 9 percent? c. 12 percent? d. 15 percent?
I need help my homework and when I post I don't get an response what do I have to do for someone look at my homework question. Lincoln Memorial hospital has just been informed that private donor is
willing to contribute $5 million per year at the beginning of each year for...
The chief financial officer of a home health agency needs to determine the present value of a $10,000 investment received at the end of year 10. What is the present value if the discount rate is: a.
6 percent? b. 9 percent? c. 12 percent? d. 15 percent?
After completing her residency, an obstetrician plans to invest $12,000 per year at the end of each year in a low-risk retirement account. She expects to earn an average of 6% for 35 years. What will
her retirement account be worth at the end of these 35 years?
If business manager deposits $30,000 in a savings account at the end of each year for twenty years what will be the value of her investment: at a compound rate of 12 percent? at a compounded rate of
18 percent? What would the outcome be in both cases if the deposits were made ...
Two Wheeler-Dealer Bike Shop has a 22-inch off-road racer on sale this month for $239.95. If the original price of the bike was $315.10, how much would a customer save by purchasing it on sale?
Write the form of the partial fraction decomposition of the function do not determine the numerical values of the coefficients. 1/x^4-2401
Ryan Miller wanted to make some money at a flea market. He purchased 55 small orchids from a nursery for a total of $233.75, three bags of potting soil for $2.75 each, and 55 ceramic pots at $4.60
each. After planting the orchids in the pots, Ryan sold each plant for $15.50 at...
Ventura Coal mined 6 ⅔ tons on Monday, 7 ¾ tons on Tuesday, and 4 ½ tons on Wednesday. If the goal is to mine 25 tons this week, how many more tons must be mined?
Thanks so much
Solve the following problem and convert to lowest terms (mixed number). 6 ⅚ 17/18 (six and five sixths minus seventeen eighteenths)
Thanks so much
Solve the following problem and reduce to lowest terms (mixed number or improper fraction). 2/3 + 1/6 +11/12
Convert 4/5 to higher terms of twenty-fifths.
Convert 57/9 to a whole or mixed number.
Find the domain of the following functions. 1. y= radical(x-3) - radical(x+3) 2. y= [radical(2x-9)] / 2x+9 3. y= radical(x^2 - 5x -14) 4. y= [cubed root(x-6)] / [radical(x^2 - x - 30)] I'm terrible
at finding domains.. so any help with these problems would be appreciated. ...
Thank you guys! That is what I got. I wasn't expecting the complex roots.. That's why I wanted to check whether or not I was doing it right!
Solve for x. x/(x-2) + (2x)/[4-(x^2)] = 5/(x+2) Please show work!
80% of what number is 50?
If a plane is flying 40 mph with a tailwind of 6 mph, how fast is the plane going.
On May 23, Samantha Best borrowed $40,000 from the Tri City Credit Union at 13% for 160 days. The credit union uses the exact interest method. What was the amount of interest on the loan?
Kristy Dunaway has biweekly gross earnings of $1,750. What are her total Medicare tax withholdings for a whole year? (
Fandango Furniture Manufacturing, Inc. has 40 employees on the assembly line, each with gross earnings of $325 per week. What are the total combined social security and Medicare taxes that should be
withheld from the employees paychecks each week?
Hello Beth did you ever get any help with this problem
Foremost Fish Market pays a straight commission of 18% on gross sales, divided equally among the three employees working the counter. If Foremost sold $22,350 in seafood last week, how much was each counter employee's total gross pay from this?
help me please
You are shopping for an executive desk chair at The Furniture Gallery. You see two you like on sale. Chair A was originally $119.99 now on sale for $79.99 and Chair B was originally $149.99 now on
sale for $89.99. Calculate the markdown percent of each chair to determine which...
What is the percent markup based on cost for the shirts in the previous question? (Round percent to the nearest tenth.) I got .72% What is the percent markup based on selling price for the Western
Wear shirts? (Round percent to the nearest tenth.)
Which of the following is correct for a step-down transformer?
Jimmy has finished pouring 900 cubic feet out of a total of 4,400 cubic feet of concrete. What percent of the job has she completed?
Statistics show that the sales force of Golden Wholesalers successfully closed 1,711 sales out of 1,950 sales calls. What was their percent success rate?
business math please help me
An invoice is dated August 29 with terms of 4/15 EOM. What is the discount date? (Points What is the net date for the scenario in the previous question? (Points : 2.5)
business math please help me
Ned's Sheds purchases building materials from Timbertown Lumber for $3,700 with terms of 4/15, n/30. The invoice is dated October 17. Ned's decides to send in a $2,000 partial payment. By what date must the partial payment be sent to take advantage of the cash discou...
@Sharon did you ever find out how to solve this problem.
Please Help Math
1. An invoice is dated August 29 with terms of 4/15 EOM. What is the discount date? 2. What is the net date for the scenario in the previous question?
City Cellular purchased $28,900 in cell phones on April 25. The terms of sale were 4/20, 3/30, n/60. Freight terms were F.O.B. destination. Returned goods amounted to $650. What is the net amount due
if City Cellular sends the manufacturer a partial payment of $5,000 on May 20...
What payment should be made on an invoice in the amount of $3,400 dated August 7 if the terms of sale are 3/15, 2/30, n/45 and the bill is paid on August 19? What payment should be made on the
invoice from the previous question if it is paid on September 3?
An invoice is dated August 29 with terms of 4/15 EOM. What is the discount date? What is the net date for the scenario in the previous question?
Ned's Sheds purchases building materials from Timbertown Lumber for $3,700 with terms of 4/15, n/30. The invoice is dated October 17. Ned's decides to send in a $2,000 partial payment. By what date must the partial payment be sent to take advantage of the cash discou...
The Empire Carpet Company orders merchandise for $17,700, including $550 in shipping charges, from Mohawk Carpet Mills on May 4. Carpets valued at $1,390 will be returned because they are damaged.
The terms of sale are 2/10, n30 ROG. The shipment arrives on May 26 and Empire w...
thanks mathamte. is there anyway we could have the same person for the whole math tutor. because there are questions i dont get
What is the net price factor for trade discounts of 25/15/10? Use the net price factor you found in the previous question to find the net price of a couch listed for $800.
1. Fantasia Florist Shop purchases an order of imported roses with a list price of $2,375 less trade discounts of 15/20/20. What is the dollar amount of the trade discount? 2. Using your answer to
the question above, what is the net dollar amount of that rose order?
how Microsoft Office Word, Excel, and PowerPoint are used to support various work environments.
Is $203.99 the correct answer
The Empire Carpet Company orders merchandise for $17,700, including $550 in shipping charges, from Mohawk Carpet Mills on May 4. Carpets valued at $1,390 will be returned because they are damaged.
The terms of sale are 2/10, n30 ROG. The shipment arrives on May 26 and Empire ...
Fantasia Florist Shop purchases an order of imported roses with a list price of $2,375 less trade discounts of 15/20/20. What is the dollar amount of the trade discount?
Zeenat did you ever get any help with this question
World Geography
Hey Kaitlyn, I'm also doing CA. I got a 60 on this quiz cause I couldn't find anything! It's so frustrating right? Anyway, I'm not sure if your still needing help, but we could get together and help
each other out if you'd like. I know the years almost over...
bob and jim had a total of 90 marbles. bob had half a dozen more than jim. how many marbles did jim have.
Algebra 2
A box with no top is to be constructed from a piece of cardboard whose length measures 12 inches more than its width. the box is formed by cutting squares that measures 4 inches on each sides from 4
corners and then folding up the sides. If the volume of the box will be 340 in...
Physics..PLZ HELP!
The magnitude of the magnetic field 8.0 cm from a straight wire carrying a current of 6.0 A is A) 3.0 μT. B) 1.5 μT. C) 1.5 μT. D) 3.0 μT. E) 3.0 μT.
A wire of thickness d = 5 mm is tightly wound 200 times around a cylindrical core to form a solenoid. A current I = 0.1 A is sent through the wire. What is the magnetic field on the axis of the
Comparing Fractions Writing to explain James says that 5/5 is greater than 99/100. Is he correct? Explain.
Comparing Fractions Writing to explain James says that 5/5 is greater than 99/100. Is he correct? Explain.
At 20.0 degrees celcius, a student collects H2 gas in a gas collecting tube. The barometric pressure is 755.2mm Hg and the water levels inside and outside the tube are exactly equal. What is the
total gas pressure in the gas collecting tube?
college physics 1
The coefficient of static friction between a block of wood of mass 29.2-kg and a rough table is 0.51. The table is slowly tilted until the block of wood starts to slide down the surface. At what
angle of the table with respect to the horizontal does the wood start to slide? Yo...
Math Word problems
an office where 8 secretaries work 8 hours a day 5 days a week decided it needs 336 hours a week of secretarial work. How much more or less will each secretary have to work each week to equally share
the 336 hours.
A 6 ft tall person cast a shadow 15 ft long. At the same time a nearby tower casts a shadow 100 ft long. Find the height of the tower.
Pages: 1 | 2 | 3 | 4 | 5 | 6 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=april","timestamp":"2014-04-18T00:46:21Z","content_type":null,"content_length":"27861","record_id":"<urn:uuid:2462c9c5-eb3f-45eb-9692-43f9c6f99921>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
Salem, NH Precalculus Tutor
Find a Salem, NH Precalculus Tutor
...I used Microsoft Excel for discovery of the general form of the area of pyramids and cones and in Statistics I assigned a project to compare two data bases using mean, variance, standard
deviation, CVAR, grouped data and charts. I’ve used SmartView, a TI-83 and TI-84 graphing calculator emulato...
13 Subjects: including precalculus, physics, ASVAB, algebra 1
...My first exposure to precalculus was in high school as an honors math student. Back then it was a sequence of two half-year courses, analytic geometry and trigonometry. The prerequisites were
algebra I, geometry, and algebra II.
9 Subjects: including precalculus, calculus, geometry, algebra 1
...In the beginning I had trouble understanding his heavy accent and considered changing sections. Had I changed sections I would have made a big mistake. During the first class, this professor
told us that he felt that American educators did not teach math the right way.
6 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I routinely performed statistical analyses as an undergraduate researcher in the many labs I worked in at MIT, so I'm versed in its applications in fields from biomedical engineering to
cognitive neuroscience. I earned a perfect 800 on the SAT math section and have been helping students improve ...
47 Subjects: including precalculus, English, chemistry, reading
...I teach a variety of levels of students from advanced to students with special needs. I show students a variety of ways to answer a problem because my view is that as long as student can
answer a question, understand how they got the answer and can explain how they do so, it doesn't matter the m...
5 Subjects: including precalculus, algebra 1, algebra 2, study skills
Related Salem, NH Tutors
Salem, NH Accounting Tutors
Salem, NH ACT Tutors
Salem, NH Algebra Tutors
Salem, NH Algebra 2 Tutors
Salem, NH Calculus Tutors
Salem, NH Geometry Tutors
Salem, NH Math Tutors
Salem, NH Prealgebra Tutors
Salem, NH Precalculus Tutors
Salem, NH SAT Tutors
Salem, NH SAT Math Tutors
Salem, NH Science Tutors
Salem, NH Statistics Tutors
Salem, NH Trigonometry Tutors
Nearby Cities With precalculus Tutor
Andover, MA precalculus Tutors
Atkinson, NH precalculus Tutors
Derry, NH precalculus Tutors
Dracut precalculus Tutors
Haverhill, MA precalculus Tutors
Hudson, NH precalculus Tutors
Lawrence, MA precalculus Tutors
Londonderry, NH precalculus Tutors
Methuen precalculus Tutors
North Andover precalculus Tutors
North Salem, NH precalculus Tutors
Pelham, NH precalculus Tutors
Plaistow precalculus Tutors
Tewksbury precalculus Tutors
Windham, NH precalculus Tutors | {"url":"http://www.purplemath.com/Salem_NH_Precalculus_tutors.php","timestamp":"2014-04-19T23:24:10Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:4ead4e8a-7293-4708-a2d7-24054a08e030>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about famous numbers on The Math Less Traveled
Category Archives: famous numbers
Mersenne numbers, named after Marin Mersenne, are numbers of the form $2^n - 1$. The first few Mersenne numbers are therefore $1$, $3$, $7$, $15$, $31$, and so on. Mersenne numbers come up all the time in computer science (for example, is … Continue reading
Posted in arithmetic, computation, famous numbers, iteration, modular arithmetic, number theory, primes Tagged lehmer, lucas, Mersenne, prime, test 3 Comments
Happy $\tau$ day! $\tau$, of course, is the fundamental circle constant which represents the ratio of any circle’s circumference to its radius. (In the past people have also used the symbol “$\pi$” to represent half of $\tau$; perhaps you’ve heard of … Continue reading
And now for the punchline! Today we’ll show that, for large enough values of , completing the proof of the irrationality of . First, let’s show that is positive when . We know that is positive for .
But I … Continue reading
We’re getting close! Last time, we defined a new function and showed that and are both integers, and that . So, consider the following: The first step uses the product rule for differentiation
(recalling that and ); the last step … Continue reading
I’ve been remiss in posting here lately, which I will attribute to Christmas and New Year travelling and general craziness, and then starting a new semester craziness… but things have settled down a
bit, so here we go again! Since … Continue reading
In my previous post in this series, we defined the function and showed that . Today we’ll show the surprising fact that, for every positive integer , although and are not necessarily zero, they are
always integers. (The notation means … Continue reading
Recall from my last post what we are trying to accomplish: by assuming that is a rational number, we are going to define an unpossible function! So, without further ado: Suppose , where and are
positive integers. Define the function … Continue reading
Everyone knows that π—the ratio of any circle's circumference to its diameter—is irrational, that is, cannot be written as a fraction. This also means that π's decimal expansion goes on forever and
never repeats …but have you ever seen … Continue reading
The 48th Carnival of Mathematics is posted at Concrete Nonsense. My favorite posts include Foxmath’s post about a strange iterated sequence involving pi and this amazing picture of a fractal cabbage.
Also near and dear to my heart is Mark … Continue reading
Recall the challenge I posed in a previous post: given the sequence of integers , what can you learn about (assuming you didn’t know anything about it before)? The answer, as explained in another
post, is that you can learn … Continue reading | {"url":"http://mathlesstraveled.com/category/famous-numbers/","timestamp":"2014-04-18T13:10:50Z","content_type":null,"content_length":"74799","record_id":"<urn:uuid:c469ffaa-d9c8-4d32-8bde-e63418792f37>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Better indexing for ranges GSoC 2012
From PostgreSQL wiki
Short description
Range types are a significant new feature of PostgreSQL. Indexing of range types is necessary to provide efficient search of ranges. The current approach for GiST indexing implemented for 9.2 holds
ranges in both internal and leaf page entries. This approach can be very inefficient in the case of highly overlapping ranges and the "@>", "<@" operators, because the cost of a search is similar to the
cost of a search using the "&&" operator. Mapping ranges into 2d-space could handle such cases much more efficiently. This project is focused on implementing 2d-space mapping based GiST and SP-GiST
operator classes for range types.
Project details
The priorities of this project are as follows:
1. GiST and SP-GiST operator classes for ranges.
2. Selectivity estimations for ranges.
3. GiST and SP-GiST operator classes for arrays.
The minimal completeness criterion is #1 alone, but the real project goal is to do well on all the options mentioned.
Better ranges indexing
The current indexing approach implemented for 9.2 defines a range in an internal page as a bounding range of the underlying ranges. So, if a leaf page contains ranges (a1, b1), (a2, b2), ... (an, bn), the
corresponding entry of an internal page would be (min(a1, a2, ... an), max(b1, b2, ... bn)). However, some research papers [1] recommend mapping ranges into 2d-space. In this mapping, the range (a, b) is
represented as a point with coordinates a and b.
In the case of such a mapping, the "&&", "@>", "<@" search operators each have a corresponding rectangular area on the 2d-space. There is a proof-of-concept message [2] which shows a dramatic benefit from using
existing spatial operator classes. However, use of spatial operator classes is inconvenient, and it doesn't take care of inclusive and non-inclusive bounds or infinities. That's why it's important to
implement specific operator classes for such indexing of ranges. Therefore the following 2d-space trees could be implemented for range indexing:
• R-tree using GiST
• Quad-tree using SP-GiST
• KD-tree using SP-GiST
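To make the mapping concrete, here is a hedged Python sketch (illustrative only, not PostgreSQL internals): a range (a, b) becomes the 2d point (a, b), and each search operator selects a rectangular region of that plane, which is exactly the kind of query that R-trees, quad-trees, and k-d trees prune well.

```python
# A range (a, b) is viewed as the point (a, b) in the plane (a <= b, so all
# points lie on or above the diagonal).  For a query range q = (qa, qb):
#   q && r  <=> point of r lies in  { a <= qb, b >= qa }
#   q @> r  <=> point of r lies in  { a >= qa, b <= qb }
#   q <@ r  <=> point of r lies in  { a <= qa, b >= qb }
def overlaps(q, r):       # q && r
    return q[0] <= r[1] and r[0] <= q[1]

def contains(q, r):       # q @> r
    return q[0] <= r[0] and r[1] <= q[1]

def contained_by(q, r):   # q <@ r
    return contains(r, q)

ranges = [(0, 3), (2, 3), (4, 9), (1, 8)]
q = (2, 4)
print([r for r in ranges if contains(q, r)])   # [(2, 3)]
print([r for r in ranges if overlaps(q, r)])   # all four ranges overlap (2, 4)
```

Each rectangle test can prune whole subtrees of a 2d index, whereas the bounding-range approach degrades for @> and <@ when ranges overlap heavily.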
Selectivity estimation for ranges
The second goal of this project is to provide a better selectivity estimation for the &&, @>, <@ operators. One idea is to collect the following statistics:
• Histogram of "density" of ranges.
• Histogram of range lengths.
and do selectivity estimations for &&, @>, <@ according to them.
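A hedged sketch of how bound histograms could support such estimates (illustrative Python, not the PostgreSQL implementation): with sorted samples of lower and upper bounds, the && selectivity for a query (qa, qb) is one minus the fraction of ranges lying entirely to the left of qa or entirely to the right of qb.

```python
import bisect
import random

random.seed(0)
# Synthetic sample: lower bounds uniform in [0, 100), lengths uniform in [0, 10)
sample = [(a, a + random.uniform(0, 10))
          for a in (random.uniform(0, 100) for _ in range(1000))]

lowers = sorted(a for a, _ in sample)   # "histogram" of lower bounds
uppers = sorted(b for _, b in sample)   # "histogram" of upper bounds
n = len(sample)

def est_overlap_selectivity(qa, qb):
    # r && q fails only if r.upper < qa or r.lower > qb
    frac_left = bisect.bisect_left(uppers, qa) / n          # entirely left of q
    frac_right = (n - bisect.bisect_right(lowers, qb)) / n  # entirely right of q
    return 1.0 - frac_left - frac_right

actual = sum(1 for a, b in sample if b >= 40 and a <= 60) / n
print(round(est_overlap_selectivity(40, 60), 3), round(actual, 3))
```

With the full sorted arrays the estimate is exact; a real planner would keep only a coarse (e.g. equi-depth) histogram of each bound and interpolate within buckets.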
GiST and SP-GiST indexing for arrays
PostgreSQL core supports index-based search for operators "@>", "<@" and "&&" on arrays only using GIN. The intarray contrib module also provides GiST index support for integer arrays. However,
similar GiST indexing is possible for other array types, not just integer arrays. This project is focused on the implementation of universal GiST indexing for arrays and implementation of
experimental SP-GiST indexing of arrays.
The proposed GiST indexing for arrays is quite similar to that implemented in the intarray contrib module, but it has the following differences. The following representations of an array are possible:
• Original array
• Array of hashes of original array elements (suitable when array element is larger than its hash)
• Signature (bitmap where bits corresponding to array element hashes are set)
The latter two representations are lossy. The representation is selected based on array size. Any of the mentioned representations could be used for both leaf and internal entries. Representation
selection for internal entries would happen at runtime, i.e. no gist__int_ops vs. gist__intbig_ops dilemma is planned.
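The signature representation can be illustrated in a few lines (hypothetical Python, not the proposed C implementation): each element hashes to one bit of a fixed-width bitmap, a missing bit proves non-containment so subtrees can be pruned, and a present bit only says "maybe", so candidate matches must be rechecked against the real arrays.

```python
SIG_BITS = 64  # signature width; a real implementation would tune this

def signature(arr):
    """OR together one bit per element hash."""
    sig = 0
    for x in arr:
        sig |= 1 << (hash(x) % SIG_BITS)
    return sig

def may_contain(sig_parent, sig_query):
    # False => definitely does not contain the query's elements (prune);
    # True  => maybe (lossy), so the actual arrays must be rechecked.
    return sig_parent & sig_query == sig_query

parent = signature([1, 2, 3])   # small ints hash to themselves in CPython
print(may_contain(parent, signature([2])))    # True  (really contained)
print(may_contain(parent, signature([40])))   # False (bit 40 unset: prune)
```

String elements work the same way, though CPython randomizes string hashes per process, so those results are only deterministic within one run.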
SP-GiST indexing for arrays is quite a hard task; I have the following idea about how it could be done. A leaf tuple could be represented in one of the ways mentioned for GiST indexing
above. An inner tuple node represents a number of bits in the signature, and the inner tuple prefix represents a set of bits in the signature. Bits mentioned in the inner tuple node and prefix must not be set in the
signatures corresponding to any of the underlying arrays. Thus, if a query requires the presence of some bit enumerated in the prefix or node, then the subtree can be skipped during an index scan.
1. Bela Stantic, Rodney Topor, Justin Terry, Abdul Sattar, "Advanced Indexing Technique for Temporal Data"
Implement 2d-mapping based GiST opclass for ranges
Implement 2d-mapping based SP-GiST quad-tree for ranges
Implement 2d-mapping based SP-GiST k-d tree for ranges
Comprehensive testing on various datasets. Conclusion about applicability of various opclasses
Rework opclasses according to testing results.
Implement better statistics for ranges with selectivity estimation for &&, <@, @> etc. operators
Testing and refactoring. | {"url":"https://wiki.postgresql.org/index.php?title=Better_indexing_for_ranges_GSoC_2012&oldid=18191","timestamp":"2014-04-19T04:26:46Z","content_type":null,"content_length":"19548","record_id":"<urn:uuid:6c6e3521-f352-47cc-944e-91efad84e724>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Feedforward nets for interpolation and classification
Results 1 - 10 of 55
, 1997
"... Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the
number of training examples should grow at least linearly with the number of adjustable parameters in the ne ..."
Cited by 177 (15 self)
Add to MetaCart
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number
of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern
classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of
the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is
bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignori...
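As a numeric aside, the weight-dependent term of the bound quoted above, A³√((log n)/m) with constants and log factors ignored, shrinks with the sample size m regardless of the number of weights; a hedged sketch:

```python
import math

def capacity_term(A, n, m):
    """A = bound on summed weight magnitudes per unit, n = input dimension,
    m = number of training examples (constants and log factors ignored)."""
    return A**3 * math.sqrt(math.log(n) / m)

for m in (10**3, 10**4, 10**5):
    print(m, capacity_term(A=2.0, n=100, m=m))
```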
- JOURNAL OF COMPUTER AND SYSTEM SCIENCES , 1995
"... This paper deals with finite size networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" function to a linear
combination of the previous states of all units. We prove that one may simulate all Turing Machines by su ..."
Cited by 156 (26 self)
Add to MetaCart
This paper deals with finite size networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" function to a linear
combination of the previous states of all units. We prove that one may simulate all Turing Machines by such nets. In particular, one can simulate any multi-stack Turing Machine in real time, and
there is a net made up of 886 processors which computes a universal partial-recursive function. Products (high order nets) are not required, contrary to what had been stated in the literature.
Non-deterministic Turing Machines can be simulated by non-deterministic rational nets, also in real time. The simulation result has many consequences regarding the decidability, or more generally the
complexity, of questions about recursive nets.
- Machine Learning , 1995
"... Abstract. The Vapnik-Chervonenkis (V-C) dimension is an important combinatorial tool in the analysis of learning problems in the PAC framework. For polynomial learnability, we seek upper bounds
on the V-C dimension that are polynomial in the syntactic complexity of concepts. Such upper bounds are au ..."
Cited by 91 (1 self)
Add to MetaCart
Abstract. The Vapnik-Chervonenkis (V-C) dimension is an important combinatorial tool in the analysis of learning problems in the PAC framework. For polynomial learnability, we seek upper bounds on
the V-C dimension that are polynomial in the syntactic complexity of concepts. Such upper bounds are automatic for discrete concept classes, but hitherto little has been known about what general
conditions guarantee polynomial bounds on V-C dimension for classes in which concepts and examples are represented by tuples of real numbers. In this paper, we show that for two general kinds of
concept class the V-C dimension is polynomially bounded in the number of real numbers used to define a problem instance. One is classes where the criterion for membership of an instance in a concept
can be expressed as a formula (in the first-order theory of the reals) with fixed quantification depth and exponentially-bounded length, whose atomic predicates are polynomial inequalities of
exponentially-bounded degree. The other is classes where containment of an instance in a concept is testable in polynomial time, assuming we may compute standard arithmetic operations on reals
exactly in constant time. Our results show that in the continuous case, as in the discrete, the real barrier to efficient learning in the Occam sense is complexity-theoretic and not
information-theoretic. We present examples to show how these results apply to concept classes defined by geometrical figures and neural nets, and derive polynomial bounds on the V-C dimension for
these classes. Keywords: Concept learning, information theory, Vapnik-Chervonenkis dimension, Milnor’s theorem 1.
- Proc. of the 25th ACM Symp. Theory of Computing , 1993
"... . It is shown that high order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated for boolean inputs and outputs
by neural nets of a somewhat larger size and depth with heaviside gates and weights from {-1, 0, 1}. ..."
Cited by 60 (12 self)
Add to MetaCart
. It is shown that high order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated for boolean inputs and outputs by
neural nets of a somewhat larger size and depth with heaviside gates and weights from {-1, 0, 1}. This provides the first known upper bound for the computational power of the former type of
neural nets. It is also shown that in the case of first order nets with piecewise linear activation functions one can replace arbitrary real weights by rational numbers with polynomially many bits,
without changing the boolean function that is computed by the neural net. In order to prove these results we introduce two new methods for reducing nonlinear problems about weights in multi-layer
neural nets to linear problems for a transformed set of parameters. These transformed parameters can be interpreted as weights in a somewhat larger neural net. As another application of our new proof
technique we s...
- JOURNAL OF COMPUTER AND SYSTEM SCIENCES , 1995
"... We introduce a new method for proving explicit upper bounds on the VC Dimension of general functional basis networks, and prove as an application, for the first time, that the VC Dimension of
analog neural networks with the sigmoidal activation function σ(y) = 1/(1 + e^(-y)) is bounded by a q ..."
Cited by 47 (0 self)
Add to MetaCart
We introduce a new method for proving explicit upper bounds on the VC Dimension of general functional basis networks, and prove as an application, for the first time, that the VC Dimension of analog
neural networks with the sigmoidal activation function σ(y) = 1/(1 + e^(-y)) is bounded by a quadratic polynomial O((lm)²) in both the number l of programmable parameters, and the number m of
nodes. The proof method of this paper generalizes to much wider class of Pfaffian activation functions and formulas, and gives also for the first time polynomial bounds on their VC Dimension. We
present also some other applications of our method.
, 1996
"... This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing
open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held fo ..."
Cited by 46 (7 self)
Add to MetaCart
This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open
question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid
generalization are discussed.
- In Proceedings of 25th Annual ACM Symposium on the Theory of Computing , 1993
"... ) Angus Macintyre Mathematical Inst., University of Oxford Oxford OX1 3LB, England, UK E-mail: ajm@maths.ox.ac.uk Eduardo D. Sontag 3 Dept. of Mathematics, Rutgers University New Brunswick, NJ
08903 E-mail: sontag@hilbert.rutgers.edu Abstract Proc. 25th Annual Symp. Theory Computing , San Diego, ..."
Cited by 44 (12 self)
Add to MetaCart
) Angus Macintyre Mathematical Inst., University of Oxford Oxford OX1 3LB, England, UK E-mail: ajm@maths.ox.ac.uk Eduardo D. Sontag 3 Dept. of Mathematics, Rutgers University New Brunswick, NJ 08903
E-mail: sontag@hilbert.rutgers.edu Abstract Proc. 25th Annual Symp. Theory Computing , San Diego, May 1993 This paper deals with analog circuits. It establishes the finiteness of VC dimension,
teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively
decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the "standard sigmoid" commonly used
in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general
analytic gate functions...
- IEEE Trans. Neural Networks , 1992
"... This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for
certain problems two hidden layers are required, contrary to what might be in principle expected from the ..."
Cited by 40 (6 self)
Add to MetaCart
This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for
certain problems two hidden layers are required, contrary to what might be in principle expected from the known approximation theorems. The differences are not based on numerical accuracy or number
of units needed, nor on capabilities for feature extraction, but rather on a much more basic classification into “direct ” and “inverse ” problems. The former correspond to the approximation of
continuous functions, while the latter are concerned with approximating one-sided inverses of continuous functions —and are often encountered in the context of inverse kinematics determination or in
control questions. A general result is given showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one. Key words: Neural nets, nonlinear
control systems, feedback 1
- ACTA NUMERICA , 1999
"... In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. Mathematically it is one of the simpler
models. Nonetheless the mathematics of this model is not well understood, and many of these problems are appr ..."
Cited by 39 (3 self)
Add to MetaCart
In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. Mathematically it is one of the simpler models.
Nonetheless the mathematics of this model is not well understood, and many of these problems are approximation-theoretic in character. Most of the research we will discuss is of very recent vintage.
We will report on what has been done and on various unanswered questions. We will not be presenting practical (algorithmic) methods. We will, however, be exploring the capabilities and limitations of
this model. In the first
- Neural Computation , 1994
"... It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most O(w · log w), where w is the
total number of weights in the neural net. We show in this paper that this bound is in fact asymptotically op ..."
Cited by 29 (8 self)
Add to MetaCart
It has been known for quite a while that the Vapnik-Chervonenkis dimension (VC-dimension) of a feedforward neural net with linear threshold gates is at most O(w · log w), where w is the total
number of weights in the neural net. We show in this paper that this bound is in fact asymptotically optimal. More precisely, we construct for arbitrarily large w ∈ ℕ neural nets N_w of depth 3 (i.e.
with 2 layers of hidden units) that have VC-dimension Ω(w · log w). The construction exhibits a method that allows us to encode more "program-bits" in the weights of a neural net than
previously thought possible. The Vapnik-Chervonenkis-dimension (abbreviated: VC-dimension) of a neural net N is an important measure of the expressiveness of N , i.e. for the variety of functions
that can be computed by N with different choices for its weights. In particular it has been shown in [BEHW] and [EHKV] that the VC-dimension of N essentially determines the number of training
examples th... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=676304","timestamp":"2014-04-18T06:28:43Z","content_type":null,"content_length":"39940","record_id":"<urn:uuid:a682d4f5-ba7e-469a-a2a8-4b451eb169a8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplicial matrices and the nerves of weak n-categories I:
nerves of bicategories
Simplicial matrices and the nerves of weak n-categories I: nerves of bicategories
John W. Duskin
To a bicategory B (in the sense of Bénabou) we assign a simplicial set Ner(B), the (geometric) nerve of B, which completely encodes the structure of B as a bicategory. As a simplicial set Ner(B) is a
subcomplex of its 2-Coskeleton and itself isomorphic to its 3-Coskeleton, what we call a 2-dimensional Postnikov complex. We then give, somewhat more delicately, a complete characterization of those
simplicial sets which are the nerves of bicategories as certain 2-dimensional Postnikov complexes which satisfy certain restricted `exact horn-lifting' conditions whose satisfaction is controlled by
(and here defines) subsets of (abstractly) invertible 2 and 1-simplices. Those complexes which have, at minimum, their degenerate 2-simplices always invertible and have an invertible 2-simplex $\
chi_2^1(x_{12}, x_{01})$ present for each `composable pair' $(x_{12}, \_ , x_{01}) \in \mhorn_2^1$ are exactly the nerves of bicategories. At the other extreme, where all 2 and 1-simplices are
invertible, are those Kan complexes in which the Kan conditions are satisfied exactly in all dimensions >2. These are exactly the nerves of bigroupoids - all 2-cells are isomorphisms and all 1-cells
are equivalences.
Keywords: bicategory, simplicial set, nerve of a bicategory.
2000 MSC: Primary 18D05 18G30; Secondary 55U10 55P15.
Theory and Applications of Categories, Vol. 9, 2001, No. 10, pp 198-308.
TAC Home | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/TAC/volumes/9/n10/9-10abs.html","timestamp":"2014-04-17T15:33:36Z","content_type":null,"content_length":"2835","record_id":"<urn:uuid:81ea7ee3-6ed0-4411-9cfd-a6e62fbd7faf>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
frequency of a true count 8 decks blackjack
September 17th 2008, 03:59 PM #1
Sep 2008
frequency of a true count 8 decks blackjack
I read online that the frequency of a +5 true count with 8 decks good penetration is 3.45%
Does anyone know the formula, I'd like to plug in other numbers
Is a true count one where 10-K gives a value of -1 and 2-9 is +1? I don't remember how aces factor in. What does "good penetration" mean? I think that without knowing the exact number of cards cut from the
8 decks, this problem is very difficult.
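For reference, the standard hi-lo count gives +1 to 2-6, 0 to 7-9, and -1 to tens, face cards, and aces; the true count is the running count divided by the number of decks remaining. I don't know a closed-form formula, but the frequency is easy to estimate by simulation. Here is a rough Python sketch (the 75% penetration and the bucketing of the true count into [5, 6) are my assumptions, so don't expect it to reproduce the quoted 3.45% exactly):

```python
import random

random.seed(1)
DECK = [+1] * 20 + [0] * 12 + [-1] * 20   # hi-lo values of one 52-card deck

def tc_frequency(target, shoes=2000, decks=8, penetration=0.75):
    """Fraction of dealt-card positions where the true count is in [target, target+1)."""
    hits = total = 0
    cut = int(decks * 52 * penetration)   # cards dealt before the reshuffle
    for _ in range(shoes):
        shoe = DECK * decks
        random.shuffle(shoe)
        running = 0
        for i in range(cut):
            running += shoe[i]
            decks_left = (decks * 52 - (i + 1)) / 52
            total += 1
            hits += target <= running / decks_left < target + 1
    return hits / total

print(round(tc_frequency(5), 4))
```

You can plug in other targets or penetrations by changing the arguments.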
December 24th 2008, 01:34 PM #2
MHF Contributor
Oct 2005 | {"url":"http://mathhelpforum.com/statistics/49522-frequency-true-count-8-decks-blackjack.html","timestamp":"2014-04-16T05:46:34Z","content_type":null,"content_length":"31908","record_id":"<urn:uuid:ef0eaf7a-38a5-429f-9bf1-7a2de50c2995>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Self-Assembled Quantum Dots
Richard J. Warburton, Professor of Experimental Physics (Ordinarius), Department of Physics, University of Basel, Switzerland. Corresponding author: richard.warburton@unibas.ch
Semiconductor quantum dots are building blocks in quantum communication and information processing systems. A single quantum dot behaves in some ways like a single atom but with the huge advantages
that the quantum dot is locked in position and can be functionalized by embedding the quantum dot into a sophisticated heterostructure.
Quantum dots for optical applications can be created by self-assembly using standard growth techniques, for instance molecular beam epitaxy. The work-horse system is InAs for the dot material and
GaAs for the substrate material. The lattice constant of InAs is 7% larger than that of GaAs such that a monolayer of InAs on GaAs is highly strained. At a critical thickness of about 1.5 monolayers,
instead of a smooth layer, InAs-rich islands form, the quantum dots [1]. This particular growth mode is referred to as the Stranski-Krastanow mode. There is a thermodynamic argument for the formation
of the quantum dots: by clumping into islands the overall energy associated with the strain is reduced at the expense of an increase in surface energy. However, in practice, the growth is complex and
a number of kinetic factors come into play. This complexity can be exploited: by tweaking conditions in the growth, the properties of the quantum dots can be changed. Typically, the quantum dots are
about 20 nm in diameter and 5 nm high, sometimes adopting a lens-shape, sometimes a truncated pyramid. The strain-driven growth mode works also for other semiconductor systems, for instance InAs on
InP. As the quantum dots can be formed with standard semiconductor growth techniques, they can be embedded into heterostructures, for instance a vertical tunneling device or a VCSEL-like cavity
structure. The wafer can be processed post growth, creating Ohmic contacts, Schottky gates, micropillars, photonic crystals, depending on the particular application.
A semiconductor has a fundamental band gap separating the occupied valence band and the unoccupied conduction band. In a quantum dot, the bands are replaced by discrete levels. In the effective mass
approximation, a conduction electron behaves as a free electron but with a much reduced mass (the “effective mass”) and the quantum dot represents a confinement potential in all three dimensions. The
particle-in-a-box model of quantum mechanics then leads to quantized states. Typically, there are between 1 and 3 confined electron states, possibly a few more valence states (the “hole” states) [2].
At higher electron or hole energies, the semiconductor bands exist, initially associated with the so-called wetting layer, a thin InGaAs layer which connects all the dots, and at higher energy still,
associated with the bulk GaAs. The important optical transition connects the first hole state (the valence state with highest energy) with the first electron state (the conduction state with lowest
energy). This is an allowed electric dipole transition with optical dipole moment of d.e where d is about 1 nm [3]. Excitation of this transition creates a so-called exciton, an electron-hole pair in
the quantum dot. Without a microcavity, the exciton decays by spontaneous emission with a radiative lifetime of about 1 ns [4]. In a resonant microcavity, the spontaneous emission can be accelerated
[5], the Purcell effect, possibly by a factor of ~ 10 with a resonant, small-volume, high-quality microcavity.
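The handful of confined states quoted above can be rationalized with the particle-in-a-box model directly. The sketch below is my own back-of-the-envelope Python, not from the paper: it uses an infinite 1d well of width 20 nm, the lateral dot size given in the text, and assumes the GaAs conduction-band effective mass m* = 0.067 m_e; it yields level spacings of tens of meV.

```python
import math

HBAR = 1.054571817e-34   # J s
M_E  = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J per eV

def box_levels(L, m_star, nmax=3):
    """E_n = (hbar * pi * n)**2 / (2 m* L**2), infinite 1d well of width L, in eV."""
    return [(HBAR * math.pi * n)**2 / (2 * m_star * L**2) / EV
            for n in range(1, nmax + 1)]

for n, E in enumerate(box_levels(L=20e-9, m_star=0.067 * M_E), start=1):
    print(f"n={n}: {E * 1000:.1f} meV")
```

The real confinement potential is finite and three-dimensional, so this is an order-of-magnitude estimate only.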
A key signature of a two-level optical transition in a single quantum emitter is the antibunching of the emitted photons. This is the fundamental requirement for a single photon source. The emission
from a single quantum dot demonstrates photon antibunching [6]. Furthermore, at least at low temperature, laser spectroscopy on single quantum dots has revealed very narrow optical lines:
full widths at half maximum of 500 MHz are achieved reasonably routinely [7], and in the best case, linewidths of about 200 MHz have been recorded, close to the “transform limit” where the linewidth is
determined solely by the radiative recombination rate. The development of laser spectroscopy (both with transmission detection [3, 7] and resonance fluorescence [8]) has enabled other signatures of
an atomic two-level system to be demonstrated: power broadening/power-induced transparency, the existence of dressed states, the Mollow triplet, etc. It is remarkable to have a textbook two-level
system inside a semiconductor. Furthermore, a gate allows the quantum dot “periodic table” to be accessed just by applying a voltage to the device [9], and in fact many properties can be tuned in
this way. However, in other experiments, the complexity of the semiconductor environment reveals itself. This interplay between atomic physics on the one hand and full-blown condensed matter physics
on the other has driven a lot of research in this area.
One of the major problems in turning the antibunched photons from a single quantum dot into a real single photon source is extracting the photons from the semiconductor with high efficiency. The
semiconductor has a very high refractive index, 3.5 for GaAs for instance, meaning that most of the photons are refracted at the semiconductor-air interface to large angles where it is very difficult
to collect them with a lens. One possible solution is to embed the quantum dots in a microcavity [5]. The Purcell effect causes the photons to be emitted preferentially into the cavity mode. In this
way, micropillar devices have achieved efficiencies of about 40%. However, the cavity amplifies any number of interactions in the semiconductor and in many cases, the quality of the antibunching goes
down as the quantum efficiency goes up. Other solutions are emerging, for instance, a tapered one-dimensional waveguide which has already demonstrated quantum efficiencies of 72% [10]. Quantum dots
are therefore robust, high repetition rate, narrow linewidth sources of single photons, characteristics not shared presently by any other emitter.
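The extraction problem can be quantified with Snell's law. For an isotropic emitter below a flat surface, only photons within the total-internal-reflection cone escape, a solid-angle fraction of (1 - cos θ_c)/2 with sin θ_c = 1/n. A hedged sketch (Fresnel reflection and the second surface are ignored):

```python
import math

def escape_cone_fraction(n_semiconductor, n_outside=1.0):
    """Fraction of isotropically emitted photons inside the escape cone
    of a flat semiconductor/air interface (Fresnel losses ignored)."""
    theta_c = math.asin(n_outside / n_semiconductor)
    return (1.0 - math.cos(theta_c)) / 2.0

print(escape_cone_fraction(3.5))   # roughly 0.02, i.e. only ~2% escape
```

That few-percent baseline is why microcavities, micropillars, and tapered waveguides that redirect the emission are needed to reach the 40% and 72% figures quoted above.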
A self-assembled quantum dot is also an attractive host for a spin qubit [11]. Applications include an ultra-sensitive magnetometer, quantum repeaters (to extend the distance over which quantum
cryptography can operate) and possibly also quantum information processing. There are two main points. First, a spin confined to a self-assembled quantum dot can be initialized, manipulated and
read-out optically. The use of pulsed lasers allow these operations to be carried out quickly, with spin rotations on sub-ns time scales for instance [12]. Secondly, the strong quantization induces a
huge mismatch between the “size” of the electron wave function in the quantum dot and the phonon wavelengths corresponding to the Zeeman splitting, particularly at small magnetic fields. Spin
relaxation in a quantum dot is therefore highly suppressed: relaxation times longer than 1 s are possible. However, the coherence times are much shorter: by suppressing the interaction with the
phonons, the hyperfine interaction (electron spin-nuclear spin interaction) has been enhanced such that the dephasing is dominated by noise in the nuclear spins. Taming the nuclear spins, or even
exploiting them, are possibilities as is taking them out of the game as best one can by switching from an electron spin to a hole spin [13, 14].
Figure A. Atomic force microscope image of InAs self-assembled quantum dots on a GaAs surface. Image recorded by Axel Lorke.
Figure B. Cross-sectional scanning tunneling microscope image of an InGaAs quantum dot. The image is 80 × 40 nm².
Figure C. Photoluminescence from a single quantum dot in a vertical transistor structure as a function of voltage. There are clear charging events, from the neutral exciton X^0 to the charged exciton
X^1- to the doubly-charged exciton X^2- [9].
Figure D. Laser spectroscopy on a single quantum dot at 4.2 K showing a transmission dip at the resonance [7].
[1] D. Leonard, K. Pond, and P. M. Petroff, Phys. Rev. B 50, 11687 (1994).
[2] R. J. Warburton, C. S. Dürr, K. Karrai, J. P. Kotthaus, G. Medeiros-Ribeiro, and P. M. Petroff, Phys. Rev. Lett. 79, 5282 (1997).
[3] K. Karrai and R. J. Warburton, Superlattices and Microstructures 33, 311 (2003)
[4] P. A. Dalgarno, J. M. Smith, J. McFarlane, B. D. Gerardot, K. Karrai, A. Badolato, P. M. Petroff, and R. J. Warburton, Phys. Rev. B 77, 245311 (2008).
[5] J. M. Gérard, B. Sermage, B. Gayral, B. Legrand, E. Costard, and V. Thierry-Mieg, Phys. Rev. Lett. 81, 1110 (1998).
[6] P. Michler, A. Kiraz, C. Becher, W. V. Schoenfeld, P. M. Petroff, L. Zhang, E. Hu, and A. Imamoglu, Science 290, 2282 (2000).
[7] A. Högele, S. Seidl, M. Kroner, K. Karrai, R. J. Warburton, B. D. Gerardot, and P. M. Petroff, Phys. Rev. Lett. 93, 217401 (2004).
[8] A. N. Vamivakas, C. -Y. Lu, C. Matthiesen, Y. Zhao, S. Fält, A. Badolato, and M. Atatüre, Nature 467, 297 (2010).
[9] R. J. Warburton, C. Schäflein, D. Haft, F. Bickel, A. Lorke, K. Karrai, J. M. Garcia, W. Schoenfeld, and P. M. Petroff, Nature 405, 926 (2000).
[10] J. Claudon, J. Bleuse, N. S. Malik, M. Bazin, P. Jaffrennou, N. Gregersen, C. Sauvan, P. Lalanne, and J. -M. Gérard, Nature Photonics 4, 174 (2010).
[11] D. Loss and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
[12] D. Press, T. D. Ladd, B. Zhang, and Y. Yamamoto, Nature 456, 218 (2008).
[13] B. D. Gerardot, D. Brunner, P. A. Dalgarno, P. Öhberg, S. Seidl, M. Kroner, K. Karrai, N. G. Stoltz, P. M. Petroff, and R. J. Warburton, Nature 451, 441 (2008).
[14] D. Brunner, B. D. Gerardot, P. A. Dalgarno, G. Wüst, K. Karrai, N. G. Stoltz, P. M. Petroff, and R. J. Warburton, Science 325, 70 (2009).
Date Added: Jun 30, 2011 | Updated: Jun 11, 2013
Time series
From Wikipedia, the free encyclopedia
A time series is a sequence of data points, measured typically at successive points in time spaced at uniform time intervals. Examples of time series are the daily closing value of the Dow Jones
Industrial Average and the annual flow volume of the Nile River at Aswan. Time series are very frequently plotted via line charts. Time series are used in statistics, signal processing, pattern
recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, and communications engineering.
Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to
predict future values based on previously observed values. While regression analysis is often employed in such a way as to test theories that the current values of one or more independent time series
affect the current value of another time series, this type of analysis of time series is not called "time series analysis", which focuses on comparing values of a single time series at different
points in time.^1
Time series data have a natural temporal ordering. This makes time series analysis distinct from other common data analysis problems, in which there is no natural ordering of the observations (e.g.
explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data
analysis where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic
model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will
often make use of the natural one-way ordering of time so that values for a given period will be expressed as deriving in some way from past values, rather than from future values (see time reversibility).
Time series analysis can be applied to real-valued, continuous data, discrete numeric data, or discrete symbolic data (i.e. sequences of characters, such as letters and words in the English language).
Methods for time series analyses
Methods for time series analyses may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and, more recently, wavelet analysis; the latter
include auto-correlation and cross-correlation analysis. In the time domain, correlation analyses can be made in a filter-like manner using scaled correlation, thereby mitigating the need to operate in
the frequency domain.
Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a
certain structure which can be described using a small number of parameters (for example, using an autoregressive or moving average model). In these approaches, the task is to estimate the parameters
of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any
particular structure.
Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate.
There are several types of motivation and data analysis available for time series which are appropriate for different purposes.
In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing,
control engineering and communication engineering it is used for signal detection and estimation, while in the context of data mining, pattern recognition and machine learning time series analysis
can be used for clustering, classification, query by content, anomaly detection as well as forecasting.
Exploratory analysis
The clearest way to examine a regular time series manually is with a line chart such as the one shown for tuberculosis in the United States, made with a spreadsheet program. The number of cases was
standardized to a rate per 100,000 and the percent change per year in this rate was calculated. The nearly steadily dropping line shows that the TB incidence was decreasing in most years, but the
percent change in this rate varied by as much as +/- 10%, with 'surges' in 1975 and around the early 1990s. The use of both vertical axes allows the comparison of two time series in one graphic.
Other techniques include:
• Autocorrelation analysis to examine serial dependence
• Spectral analysis to examine cyclic behaviour which need not be related to seasonality. For example, sun spot activity varies over 11 year cycles.^3^4 Other common examples include celestial
phenomena, weather patterns, neural activity, commodity prices, and economic activity.
• Separation into components representing trend, seasonality, slow and fast variation, and cyclical irregularity: see trend estimation and decomposition of time series
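As a concrete illustration of the first technique above, here is a short sketch (using NumPy, purely as an example) of computing the sample autocorrelation of a series to examine serial dependence:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)  # n times the sample variance
    return np.array([np.dot(x[:n - k], x[k:]) / denom for k in range(max_lag + 1)])

# White noise should show near-zero autocorrelation at all nonzero lags,
# while a trending or seasonal series shows strong serial dependence.
rng = np.random.default_rng(0)
noise = rng.standard_normal(500)
acf = autocorrelation(noise, 5)
print(acf)  # acf[0] is exactly 1.0; the remaining lags are near zero
```

In practice one would plot these values as a correlogram with confidence bands rather than inspect raw numbers.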
Prediction and forecasting
• Fully formed statistical models for stochastic simulation purposes, so as to generate alternative versions of the time series, representing what might happen over non-specific time-periods in the future (prediction).
• Simple or fully formed statistical models to describe the likely outcome of the time series in the immediate future, given knowledge of the most recent outcomes (forecasting).
• Forecasting on time series is usually done using automated statistical software packages and programming languages, such as R, S, SAS, SPSS, Minitab, Pandas (Python) and many others.
• Assigning a time series pattern to a specific category, for example identifying a word based on a series of hand movements in sign language
See main article: Statistical classification
Regression analysis (method of prediction)
• Estimating future value of a signal based on its previous behavior, e.g. predict the price of AAPL stock based on its previous price movements for that hour, day or month, or predict position of
Apollo 11 spacecraft at a certain future moment based on its current trajectory (i.e. time series of its previous locations).^5
• Regression analysis is usually based on statistical interpretation of time series properties in the time domain, pioneered by statisticians George Box and Gwilym Jenkins in the 1950s: see Box–Jenkins
Signal estimation
• Splitting a time-series into a sequence of segments. It is often the case that a time-series can be represented as a sequence of individual segments, each with its own characteristic properties.
For example, the audio signal from a conference call can be partitioned into pieces corresponding to the times during which each person was speaking. In time-series segmentation, the goal is to
identify the segment boundary points in the time-series, and to characterize the dynamical properties associated with each segment. One can approach this problem using change-point detection, or
by modeling the time-series as a more sophisticated system, such as a Markov jump linear system.
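A minimal sketch of the change-point idea mentioned above, assuming a single shift in the mean; real segmentation methods (binary segmentation, PELT, Markov jump models) are considerably more sophisticated:

```python
import numpy as np

def single_changepoint(x):
    """Return the split index minimizing within-segment squared error,
    assuming at most one shift in the mean (a minimal illustration only)."""
    x = np.asarray(x, dtype=float)
    best_k, best_cost = 1, float("inf")
    for k in range(1, len(x)):
        left, right = x[:k], x[k:]
        # Cost of fitting each segment by its own mean
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic series with a mean shift at index 100
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
cp = single_changepoint(series)
print(cp)  # close to the true boundary at index 100
```

The brute-force scan is O(n²); dedicated libraries implement the same idea with far better scaling and support for multiple change points.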
Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the
autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. These three classes depend linearly on previous data points.^6 Combinations of these ideas produce
autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former
three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an
initial "V" for "vector", as in VAR for vector autoregression. An additional set of extensions of these models is available for use where the observed time-series is driven by some "forcing"
time-series (which may not have a causal effect on the observed series): the distinction from the multivariate case is that the forcing series may be deterministic or under the experimenter's
control. For these models, the acronyms are extended with a final "X" for "exogenous".
Non-linear dependence of the level of a series on previous data points is of interest, partly because of the possibility of producing a chaotic time series. However, more importantly, empirical
investigations can indicate the advantage of using predictions derived from non-linear models, over those from linear models, as for example in nonlinear autoregressive exogenous models.
Among other types of non-linear time series models, there are models to represent the changes of variance over time (heteroskedasticity). These models represent autoregressive conditional
heteroskedasticity (ARCH) and the collection comprises a wide variety of representation (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Here changes in variability are related to, or predicted by,
recent past values of the observed series. This is in contrast to other possible representations of locally varying variability, where the variability might be modelled as being driven by a separate
time-varying process, as in a doubly stochastic model.
In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor. Multiscale (often referred
to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. See also Markov switching multifractal (MSMF) techniques for modeling
volatility evolution.
A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest
dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text.
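A compact sketch of the forward algorithm, which computes the likelihood of an observation sequence under an HMM; the two-state model below is purely illustrative, not taken from any real system:

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under an HMM.
    pi: initial state probabilities (n,); A: state transition matrix (n, n);
    B: emission matrix (n, m); obs: sequence of observed symbol indices."""
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and weight by the emission
    return alpha.sum()

# Toy two-state model (all numbers are illustrative)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # state 0 mostly emits symbol 0
              [0.2, 0.8]])  # state 1 mostly emits symbol 1
p = forward([0, 0, 1], pi, A, B)
print(p)
```

Summing the likelihood over every possible observation sequence of a fixed length yields 1, a useful sanity check on the recursion.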
A number of different notations are in use for time-series analysis. A common notation specifying a time series X that is indexed by the natural numbers is written
X = {X[1], X[2], ...}.
Another common notation is
Y = {Y[t]: t ∈ T},
where T is the index set.
There are two sets of conditions under which much of the theory is built:
• stationary process
• ergodic process
However, ideas of stationarity must be expanded to consider two important ideas: strict stationarity and second-order stationarity. Both models and applications can be developed under each of these
conditions, although the models in the latter case might be considered as only partly specified.
In addition, time-series analysis can be applied where the series are seasonally stationary or non-stationary. Situations where the amplitudes of frequency components change with time can be dealt
with in time-frequency analysis which makes use of a time–frequency representation of a time-series or signal.^7
The general representation of an autoregressive model, well known as AR(p), is
$Y_t =\alpha_0+\alpha_1 Y_{t-1}+\alpha_2 Y_{t-2}+\cdots+\alpha_p Y_{t-p}+\varepsilon_t\,$
where the term ε[t] is the source of randomness and is called white noise. It is assumed to have the following characteristics:
□ $E[\varepsilon_t]=0 \, ,$
□ $E[\varepsilon^2_t]=\sigma^2 \, ,$
□ $E[\varepsilon_t\varepsilon_s]=0 \quad \text{ for all } t\neq s \, .$
With these assumptions, the process is specified up to second-order moments and, subject to conditions on the coefficients, may be second-order stationary.
If the noise also has a normal distribution, it is called normal or Gaussian white noise. In this case, the AR process may be strictly stationary, again subject to conditions on the coefficients.
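A short simulation sketch of an AR(1) process with Gaussian white noise (the coefficient values below are illustrative); for a stationary AR(1), Var(Y) = σ²/(1 − α₁²) and the lag-1 autocorrelation equals α₁:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha0, alpha1, sigma = 0.0, 0.8, 1.0   # |alpha1| < 1 keeps the AR(1) stationary
n = 10_000

# Y_t = alpha0 + alpha1 * Y_{t-1} + eps_t, with Gaussian white noise eps_t
y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha0 + alpha1 * y[t - 1] + rng.normal(0.0, sigma)

# Theory: Var(Y) = sigma^2 / (1 - alpha1^2) ~= 2.78, and the
# lag-1 autocorrelation of the process equals alpha1 = 0.8.
sample_var = y.var()
lag1 = np.corrcoef(y[:-1], y[1:])[0, 1]
print(sample_var, lag1)
```

The sample variance and lag-1 correlation should land close to their theoretical values, illustrating second-order stationarity of the simulated process.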
Tools for investigating time-series data include:
Time series metrics or features that can be used for time series classification or regression analysis:^9
• Univariate linear measures
• Univariate non-linear measures
• Other univariate measures
• Bivariate linear measures
• Bivariate non-linear measures
• Similarity measures:^11
□ Data as Vectors in a Metrizable Space
□ Data as Time Series with Envelopes
□ Data Interpreted as Stochastic Series
□ Data Interpreted as a Probability Distribution Function
See also
Further reading
• Box, George; Jenkins, Gwilym (1976), Time Series Analysis: forecasting and control, rev. ed., Oakland, California: Holden-Day
• Cowpertwait P.S.P., Metcalfe A.V. (2009), Introductory Time Series with R, Springer.
• Durbin J., Koopman S.J. (2001), Time Series Analysis by State Space Methods, Oxford University Press.
• Gershenfeld, Neil (2000), The Nature of Mathematical Modeling, Cambridge University Press, ISBN 978-0-521-57095-4, OCLC 174825352
• Hamilton, James (1994), Time Series Analysis, Princeton University Press, ISBN 0-691-04289-6
• Priestley, M. B. (1981), Spectral Analysis and Time Series, Academic Press. ISBN 978-0-12-564901-8
• Shasha, D. (2004), High Performance Discovery in Time Series, Springer, ISBN 0-387-00857-8
• Shumway R. H., Stoffer D. S. (2011), Time Series Analysis and its Applications, Springer.
• Weigend A. S., Gershenfeld N. A. (Eds.) (1994), Time Series Prediction: Forecasting the Future and Understanding the Past. Proceedings of the NATO Advanced Research Workshop on Comparative Time
Series Analysis (Santa Fe, May 1992), Addison-Wesley.
• Wiener, N. (1949), Extrapolation, Interpolation, and Smoothing of Stationary Time Series, MIT Press.
• Woodward, W. A., Gray, H. L. & Elliott, A. C. (2012), Applied Time Series Analysis, CRC Press.
On the relation between realizable and non-realizable cases of the sequence prediction problem.
Daniil Ryabko
Journal of Machine Learning Research Volume 12, , 2011. ISSN 1532-4435
This is the latest version of this eprint.
A sequence x1, ..., xn, ... of discrete-valued observations is generated according to some unknown probabilistic law (measure) μ. After observing each outcome, one is required to give
conditional probabilities of the next observation. The realizable case is when the measure μ belongs to an arbitrary but known class C of process measures. The non-realizable case is when μ is
completely arbitrary, but the prediction performance is measured with respect to a given set C of process measures. We are interested in the relations between these problems and between their
solutions, as well as in characterizing the cases when a solution exists and finding these solutions. We show that if the quality of prediction is measured using the total variation distance, then
these problems coincide, while if it is measured using the expected average KL divergence, then they are different. For some of the formalizations we also show that when a solution exists it can be
obtained as a Bayes mixture over a countable subset of C. We also obtain several characterizations of those sets C for which solutions to the considered problems exist. As an illustration of the
general results obtained, we show that a solution to the non-realizable case of the sequence prediction problem exists for the set of all finite-memory processes, but does not exist for the set of
all stationary processes. It should be emphasized that the framework is completely general: the process measures considered are not required to be i.i.d., mixing, stationary, or to belong to any
parametric family.
South Kearny, NJ
New York, NY 10016
GRE, GMAT, SAT, NYS Exams, and Math
...I specialize in tutoring and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on high school geometry proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged,...
Offering 10+ subjects including algebra 1, algebra 2 and geometry
How to easily perform a break even analysis
It is easiest to start with the definition of break-even analysis, or to be more precise, with the break-even point.
The break-even point is the point where the total contribution from sales equals the fixed costs. In other words, it is the point where total revenue less the variable costs of the sales made equals
the total fixed costs.
Break-even analysis is the analysis performed to identify how many sales a company needs to make to cover its fixed cost base.
A simple example might be more helpful. Let's say that company A sells only one product, called SuperGlass. The price is the same ($10 per unit) for all customers and is not expected to change.
Materials cost $6 per unit, while the company has fixed costs of $10,000.
The contribution per unit is $4 ($10 − $6); therefore, the company will need to sell 10,000 / 4 = 2,500 units to break even.
Break-even Analysis Formula
$\text{Break-even point} = \dfrac{\text{Fixed costs}}{\text{Sales price} - \text{Variable costs}}$
Break-even Analysis for Two or More Products
A more realistic scenario is that a company is producing more than one product. So the question is how to perform a break even analysis for two, three or fifty products. It is actually quite simple!
Let's say that company A produces SuperGlass and ExtraGlass and that the company is expected to sell 2 units of SuperGlass for every unit of ExtraGlass (2:1). The table below summarizes the price
per unit, the variable costs and the fixed costs.
Product                               | SuperGlass | ExtraGlass
Expected ratio                        | 2          | 1
Sales price                           | $10        | $20
Variable costs                        | $6         | $14
Fixed costs                           | $50,000    | $50,000
Contribution (sales price − variable) | $4         | $6
The first thing to do is to compute the contribution of a "combo" that consists of these two products in the expected sales mix: each combo contains 2 units of SuperGlass and 1 unit of ExtraGlass, so the combo contribution is 2 × $4 + 1 × $6 = $14.
Therefore, the break-even point is $100,000 / $14 ≈ 7,143 combos. The company will need to sell about 7,143 × 2 ≈ 14,286 units of SuperGlass and about 7,143 units of ExtraGlass to break even.
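The calculations above can be sketched in a few lines of Python (the same formulas work equally well in a spreadsheet; the function names here are illustrative):

```python
def breakeven_units(fixed_costs, price, variable_cost):
    """Units needed so total contribution covers fixed costs."""
    return fixed_costs / (price - variable_cost)

# Single product: SuperGlass, with $10,000 of fixed costs
single = breakeven_units(10_000, 10, 6)
print(single)  # 2500.0 units

def breakeven_combo(fixed_costs, products):
    """Multi-product break-even with a fixed sales mix.
    products: list of (ratio, price, variable_cost) tuples."""
    combo_contribution = sum(r * (p - v) for r, p, v in products)
    combos = fixed_costs / combo_contribution
    return [combos * r for r, _, _ in products]

# Two products sold in a 2:1 mix, as in the table above
units = breakeven_combo(100_000, [(2, 10, 6), (1, 20, 14)])
print(units)  # roughly 14,286 SuperGlass and 7,143 ExtraGlass units
```

Weighting each product's contribution by its share of the sales mix is what keeps the combo approach consistent with the single-product formula.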
Break-even Analysis Chart
It is quite easy to create a chart for a simple break-even analysis. Plot the level of sales on the X axis and money on the Y axis: the fixed costs form a horizontal line, while the contribution
generated rises with the level of sales. The result will be similar to the photo featured in this post.
It is clear from the graph that the break-even point is where the total income less the variable costs equals the fixed costs.
Break-even Analysis Uses
Break-even analysis can only help you identify the level of sales you need to reach to avoid a loss-making position. It can help you understand whether the product you are thinking of developing
can be profitable, by indicating how many units you need to sell to break even. If, in your opinion, that level of sales is easily achievable, then the product should be developed. If the level of
sales necessary to break even seems too high, then the investment might not be worthwhile.
Break-even Analysis Limitations and Disadvantages
Break-even analysis, like any other analysis tool, has flaws. Some of them can be summarized as follows:
• It can only help you analyze straightforward scenarios and it is hard to apply it in more complex scenarios.
• It is based on expected sales prices and expected variable and fixed costs, and these expectations will not be objective.
• It does not account for the synergies that products can bring.
• It does not account for certain benefits that a product can bring (such as a diversified portfolio, an enhanced brand name, etc.).
2 Responses to "How to easily perform a break even analysis"
1. Could you show how to calculate the breakeven point using excel or openoffice?
□ Yea, I can do that in a separate post. Thanks for the comment:)
Filed in: Finance Tutorials
Germantown, MD Calculus Tutor
Find a Germantown, MD Calculus Tutor
...It is not really about memorization so much as visualization of the ideas. That is, it is more about why something is a certain way rather than memorizing ideas. I have taught high school math
for 44 years.
21 Subjects: including calculus, statistics, geometry, algebra 1
...I have 5+ years of tutoring experience in calculus and calculus-related subjects. Most recently, I have worked with students at UMBC in college level Calculus courses for the past year and a
half. Calculus became one of my favorite subjects back in high school, where I was able to earn 5's on both the AB and BC calculus AP exams back in 2005 and 2006.
5 Subjects: including calculus, physics, SAT math, precalculus
...Whether they are linear, quadratic, rational, polynomial or exponential. Depending on the course, many teachers also include trigonometry. Algebra 2 is one of the most challenging courses
students will take, honors or non-honors.
24 Subjects: including calculus, reading, geometry, ASVAB
...I can also assist with some chemistry and physics. I have a mechanical engineering degree from UCLA and am comfortable with all math through and including Calculus. I am certified with wyzant
to teach Algebra 1 & 2, Geometry, Trigonometry, PreCalculus, Calculus, physics and chemistry, as well as a few other topics.
28 Subjects: including calculus, chemistry, physics, geometry
...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through
problems with students since that is the best way to learn.Have studied and scored high marks in econometric...
14 Subjects: including calculus, geometry, statistics, STATA
Write the explicit formula for the geometric sequence. a1 = -5 a2 = 20 a3 = -80 A. an = -5 • (-4)n B. an = -5(-4)n-1 C. an = -4(-5)n-1 D. an = -5 • (4)n
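Since a2 / a1 = 20 / (−5) = −4, the common ratio is −4 and the explicit formula is an = −5 · (−4)^(n−1), i.e. choice B. A quick check in Python:

```python
# Verify which formula reproduces the given terms a1 = -5, a2 = 20, a3 = -80.
# The common ratio is r = a2 / a1 = -4, so the explicit formula for a
# geometric sequence, a_n = a1 * r**(n - 1), matches choice B.
def a(n):
    return -5 * (-4) ** (n - 1)

print([a(n) for n in range(1, 4)])  # [-5, 20, -80]
```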
All Submission Categories
1005 Submissions
[110] viXra:1005.0112 [pdf] submitted on 31 May 2010
Fractal Operators in Non-Equilibrium Field Theory
Authors: Ervin Goldfain
Comments: 19 pages, This contribution represents a sequel to CSF 28, (2006), 913-922.
Relativistic quantum field theory (QFT) describes fundamental interactions between elementary particles occurring in an energy range up to several hundreds GeV. Extending QFT beyond this range needs
to account for the imbalance produced by unsuppressed quantum fluctuations and for the emergence of non-equilibrium phase transitions. Our underlying premise is that fractal operators become
mandatory tools when exploring evolution from low-energy physics to the non-equilibrium regime of QFT. Canonical quantization using fractal operators leads to the concept of "complexon", a fractional
extension of quantum excitations and a likely candidate for non-baryonic Dark Matter. A discussion on the duality between this new field-theoretic framework and General Relativity is included.
Category: High Energy Particle Physics
[109] viXra:1005.0111 [pdf] submitted on 30 May 2010
Authors: Min-Young Yun
Comments: 55 pages
The behavior of the smallest unit, created from superstrings and quarks, is indeterminate, but it can be described by one equation according to macroscopic rules. It is therefore easy to understand
in all ages and countries, at any time and place. Newborns speak the easiest word, 'Mom', when they first begin to speak, which decided the title of the theory.
Category: Quantum Gravity and String Theory
[108] viXra:1005.0110 [pdf] submitted on 11 Mar 2010
Smarandache Zero Divisors
Authors: W.B.Vasantha Kandasamy
Comments: 5 pages
In this paper, we study the notion of Smarandache zero divisor in semigroups and rings. We illustrate them with examples and prove some interesting results about them.
Category: Algebra
[107] viXra:1005.0109 [pdf] replaced on 2014-03-23 03:14:29
A New Equation for the Load Balance Scheduling Based on the Smarandache F-Inferior Part Function
Authors: Sabin Tabirca, Tatiana Tabirca
Comments: 6 Pages.
This article represents an extension of (Tabirca 2000a). A new equation for upper bounds is obtained based on the Smarandache f-inferior part function. An example involving upper diagonal matrices is
given in order to illustrate that the new equation provides a better computation.
Category: General Mathematics
[106] viXra:1005.0108 [pdf] replaced on 2014-03-22 04:22:29
Faster than Light?
Authors: Felice Russo
Comments: 3 Pages.
The hypothesis formulated by Smarandache on the possibility that no barriers exist in the Universe for an object to travel at any speed is here briefly analyzed.
Category: General Mathematics
[105] viXra:1005.0107 [pdf] submitted on 11 Mar 2010
[104] viXra:1005.0106 [pdf] submitted on 11 Mar 2010
An Introduction to the Smarandache Square Complementary Function
Authors: Felice Russo
Comments: 13 pages
In this paper the main properties of Smarandache Square Complementary function has been analyzed. Several problems still unsolved are reported too.
Category: Number Theory
[103] viXra:1005.0105 [pdf] submitted on 11 Mar 2010
Some New Results Concerning the Smarandache Ceil Function
Authors: Sabin Tabirca, Tatiana Tabirca
Comments: 7 pages
In this article we present two new results concerning the Smarandache Ceil function. The first result proposes an equation for the number of fixed-point number of the Smarandache ceil function. Based
on this result we prove that the average of the Smarandache ceil function is Θ(n) .
Category: Number Theory
[102] viXra:1005.0104 [pdf] replaced on 25 Aug 2011
Factors and Primes in Two Smarandache Sequences
Authors: Ralf W. Stephan
Comments: 10 Pages
Using a personal computer and freely available software, the author factored some members of the Smarandache consecutive sequence and the reverse Smarandache sequence. Nearly complete factorizations
are given up to Sm(80) and RSm(80). Both sequences were excessively searched for prime members, with only one prime found up to Sm(840) and RSm(750): RSm(82) = 828180 ... 10987654321.
Category: Algebra
[101] viXra:1005.0103 [pdf] submitted on 11 Mar 2010
Smarandache Neutrosophic Algebraic Structures
Authors: W. B. Vasantha Kandasamy
Comments: 203 pages
In this book, for the first time, we introduce the notion of Smarandache neutrosophic algebraic structures. Smarandache algebraic structures have been introduced in a series of 10 books. The study of
Smarandache algebraic structures has caused a paradigm shift in the study of algebraic structures.
Category: Algebra
[100] viXra:1005.0102 [pdf] replaced on 19 Jun 2010
The New Prime Theorem (45)-(70)
Authors: Chun-Xuan Jiang
Comments: 33 pages
Using Jiang function we prove that the new prime theorems (45)-(70) contain infinitely many prime solutions and no prime solutions.
Category: Number Theory
[99] viXra:1005.0100 [pdf] submitted on 28 May 2010
Mathematics is Physics
Authors: Dainis Zeps
Comments: 16 pages
In a series of articles we continue to advance the idea that mathematics and physics are the same. We bring forward two basic assumptions as principles. The first is the primacy of life, as opposed
to the dominating reductionism; the second is the immaturity of epistemology. The second principle says that we have reached a stage of epistemology where we have stepped outside simple
perceptibility only at the level of the individual (since Aristotle) but not at the level of the collective mind. This last stage has been reached only by most religious teachings, but not by
physical science, which is still under the oppressive influence of reductionism. As a consequence, what we call research in physical science turns out to be simply an instrumental improvement of
perception within a visional confinement we call the field of information. We discuss and try to apply the principle that within the field of information we cannot invent or discover anything that
does not already exist.
Category: History and Philosophy of Physics
[98] viXra:1005.0099 [pdf] submitted on 26 May 2010
Learning to Cooperate for Progress in Physics
Authors: Jonathan J. Dickau
Comments: 11 pages. The author plans to present this paper at the 11th Frontiers of Fundamental Physics conference, which is in Paris, France July 6-9, 2010.
At the 10th Frontiers of Fundamental Physics symposium, Gerard 't Hooft stated that, for some of the advances we hope to see in Physics in the future, there must be a great deal of cooperation
between researchers from different disciplines, as well as mathematicians, programmers, technologists, and others. Accomplishing this requires a new mindset, since so much of our past progress
has come out of a fiercely competitive process, and a critical review of our ideas about reality remains an essential part of making and checking our progress. We must also
address the fact that some frameworks appear incompatible, as with relativity and quantum mechanics, which remain at odds despite years of attempts to find a quantum gravity theory. I explore the
idea that playful exploration, using both left-brained and right-brained approaches to learning, allows resolution of conflicting ideas by taking advantage of our innate developmental strategies. It
may thus foster the kind of interdisciplinary cooperation we are hoping to see.
Category: Mind Science
[97] viXra:1005.0098 [pdf] submitted on 26 May 2010
Block Universe
Authors: Amrit S. Sorli
Comments: 11 pages
According to the formalism d = v*t, the fourth dimension of space-time X[4] = i*c*t is spatial too. Time is not a fourth dimension of space-time. Material change, i.e. motion, runs in a timeless space.
The fundamental unit of the numeric order t[0], t[1], t[2], ..., t[n] of material change is the Planck time t[p]. We measure the numeric order of material change with clocks. Material change t[n-1] is "before" material
change t[n], just as the natural number n-1 is "before" the natural number n. The numeric order of material change runs in a timeless 4D space and has no duration. Space-time is a timeless phenomenon.
Category: Quantum Physics
[96] viXra:1005.0097 [pdf] replaced on 28 May 2010
A Brief and Elementary Note on Redshift
Authors: José Francisco García Juliá
Comments: 5 Pages.
A reasonable explanation of both redshifts, cosmological (without expansion of the universe) and intrinsic, is given using a single tired-light mechanism. In the first case, the redshift is produced
because the light interacts with microwaves; in the second, the interaction is with radio waves. All this is compatible with a static universe with a space temperature of 2.7 K.
Category: Astrophysics
[95] viXra:1005.0096 [pdf] submitted on 24 May 2010
The Sieve Method of the Number of Solutions of Goldbach Conjecture (A)
Authors: Tong Xin Ping
Comments: 3 Pages, In Chinese
We can find all solutions of Goldbach conjecture (A) lying in the closed interval [pr+1, N-pr-1], and we can obtain an expression for the number of solutions of Goldbach conjecture (A).
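As an illustration of what "the number of solutions" means here, a small Python sketch (helper names are my own; it directly counts unordered prime pairs p + q = N rather than implementing the paper's sieve over [pr+1, N-pr-1]):

```python
# Hedged sketch: count Goldbach partitions N = p + q with p <= q both prime.
# This is a direct count, not the interval sieve described in the abstract.
def primes_up_to(n: int) -> list:
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sieve

def goldbach_count(N: int) -> int:
    is_p = primes_up_to(N)
    return sum(1 for p in range(2, N // 2 + 1) if is_p[p] and is_p[N - p])

assert goldbach_count(10) == 2    # 10 = 3 + 7 = 5 + 5
assert goldbach_count(100) == 6
```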
Category: Number Theory
[94] viXra:1005.0095 [pdf] replaced on 25 May 2010
Mapping Penrose-Rindler Null Tetrads to the Advanced and Retarded Wheeler-Feynman-Aharonov Destiny & History Null Tetrads
Authors: Jack Sarfatti
Comments: 4 pages
This is a short mathematical note clarifying the use of Cramer's Transactional Interpretation in the Spinor Qubit Pre-Geometry of Wheeler's IT FROM BIT.
Category: Relativity and Cosmology
[93] viXra:1005.0094 [pdf] submitted on 11 Mar 2010
Quantum Smarandache Paradoxes
Authors: Gheorghe Niculescu
Comments: 2 pages
In this paper we present four of the Smarandachian paradoxes in physics found in various physics sites or printed material.
Category: Quantum Physics
[92] viXra:1005.0093 [pdf] submitted on 11 Mar 2010
Smarandache Hypothesis: Evidences, Implications and Applications
Authors: Leonardo F. D. da Motta
Comments: 5 pages
In 1972, Smarandache proposed that there is no speed limit in nature, based on the EPR-Bell (Einstein, Podolsky, Rosen, Bell) paradox. Although it appears that this paradox was solved
recently, there are many other pieces of evidence that lead us to believe that the Smarandache Hypothesis is right in quantum mechanics and even in the new unification theories. If the Smarandache Hypothesis turns
out to be right under any circumstance, some concepts of modern physics would have to be "refit" to agree with it. Moreover, when the meaning of the Smarandache Hypothesis becomes
completely understood, a revolution in technology, especially in communication, will arise.
Category: Quantum Physics
[91] viXra:1005.0092 [pdf] submitted on 11 Mar 2010
Seven Conjectures in Geometry and Number Theory
Authors: Florentin Smarandache
Comments: 2 pages
In this short paper we propose four conjectures in synthetic geometry that generalize the Erdos-Mordell Theorem, and three conjectures in number theory that generalize the Fermat numbers.
Category: Number Theory
[90] viXra:1005.0091 [pdf] submitted on 22 May 2010
The Relativity of Time. the Time of the Relativity
Authors: Xavier Terri Castañé
Comments: 5 pages
A demonstration, without mathematical formulas, that Einstein's theory of special and general relativity is false.
Category: Relativity and Cosmology
[89] viXra:1005.0090 [pdf] replaced on 16 May 2011
A Pervasive Electric Field in the Heliosphere (Part II)
Authors: Henry D. May
Comments: 7 pages
In Part I of this paper [1] it was proposed that a static electric potential of about +800 MV is present in the heliosphere, sustained by the continual inflow of galactic cosmic ray (GCR) protons.
Charge neutralization cannot occur because the solar wind and magnetic fields allow more protons than electrons to pass through the termination shock (TS) deeply into the heliosphere. The result is a
quasi-static electric field, at dynamic equilibrium, inside the heliosphere. This paper adds some important details that were not included in Part I, and makes some clarifications. The presence of
the heliospheric electric field opens up the possibility of accounting for the Pioneer Anomaly, and also the anomalous cosmic rays, as caused by electric fields.
Category: Astrophysics
[88] viXra:1005.0089 [pdf] replaced on 24 Jun 2010
Where Does Universal Expansion Equal Gravitational Attraction?
Authors: Chris O'Loughlin
Comments: 9 pages
A comparison of the attractive motion experienced by masses due to gravitational interaction over relatively short distances with the recessional motion of masses at relatively large distances (which
adhere to the velocity increases described by Hubble's v = Hr relation) is presented to demonstrate the similarities between the two motions. Based on these similarities, and the
observation that gravitational acceleration decreases as distance increases while recessional acceleration decreases as distance decreases, the distance at which the two accelerations are equal in
magnitude but opposite in direction, resulting in zero net acceleration, is calculated and compared to similar results provided by Chernin et al. [1]. The sum of the attractive gravitational
acceleration and the recessional acceleration is presented and plotted, depicting a smooth, continuous transition from gravitational attraction to universal expansion. The underlying cause of these
accelerations is not addressed.
Category: Relativity and Cosmology
[87] viXra:1005.0088 [pdf] submitted on 21 May 2010
The New Prime Theorem (44)
Authors: Chun-Xuan Jiang
Comments: 2 pages
Using Jiang function J[2](ω) we prove that jP^n + 9 - j contain infinitely many prime solutions.
Category: Number Theory
[86] viXra:1005.0087 [pdf] submitted on 21 May 2010
The New Prime Theorem (43)
Authors: Chun-Xuan Jiang
Comments: 3 pages
Using Jiang function we prove that jP^8 + k - j contain infinitely many prime solutions.
Category: Number Theory
[85] viXra:1005.0086 [pdf] submitted on 21 May 2010
The New Prime Theorem (42)
Authors: Chun-Xuan Jiang
Comments: 3 pages
Using Jiang function we prove that jP^7 + k - j contain infinitely many prime solutions.
Category: Number Theory
[84] viXra:1005.0085 [pdf] submitted on 21 May 2010
The New Prime Theorem (41)
Authors: Chun-Xuan Jiang
Comments: 3 pages
Using Jiang function we prove that jP^6 + k - j contain infinitely many prime solutions.
Category: Number Theory
[83] viXra:1005.0084 [pdf] submitted on 21 May 2010
The New Prime Theorem (40)
Authors: Chun-Xuan Jiang
Comments: 3 pages
Using Jiang function we prove that jP^5 + k - j contain infinitely many prime solutions.
Category: Number Theory
[82] viXra:1005.0083 [pdf] submitted on 21 May 2010
The New Prime Theorem (39)
Authors: Chun-Xuan Jiang
Comments: 3 pages
Using Jiang function we prove that if J[2](ω) ≠ 0 then there are infinitely many primes P such that each of jP^4 + k - j is a prime, and if J[2](ω) = 0 then there are only finitely many primes P such that each of jP^4 + k - j is a prime.
Category: Number Theory
[81] viXra:1005.0082 [pdf] submitted on 21 May 2010
Infinite Smarandache Groupoids
Authors: A.K.S. Chandra Sekhar Rao
Comments: 6 pages
It is proved that there are infinitely many infinite Smarandache Groupoids.
Category: Algebra
[80] viXra:1005.0081 [pdf] replaced on 25 May 2010
The Evaporation of Common Sense
Authors: Ron Bourgoin
Comments: 3 pages
Common sense left the human mind a hundred years ago. It was forced out by relativity theory. This wildly imaginative work of fiction displaced all the logic humankind had labored so long to
establish. People loved it. They were set free of the constraints of disciplined thought. But today we have a problem: relativity and all it has sprouted has taken us down a blind alley.
Category: Relativity and Cosmology
[79] viXra:1005.0080 [pdf] submitted on 20 May 2010
Multi-Criteria Decision Making Based on DSmT-Ahp
Authors: Jean Dezert, Jean-Marc Tacnet, Mireille Batton-Hubert, Florentin Smarandache
Comments: 6 pages
In this paper, we present an extension of the multicriteria decision making based on the Analytic Hierarchy Process (AHP) which incorporates uncertain knowledge matrices for generating basic belief
assignments (bba's). The combination of priority vectors corresponding to bba's related to each (sub)-criterion is performed using the Proportional Conflict Redistribution rule no. 5 proposed in
Dezert-Smarandache Theory (DSmT) of plausible and paradoxical reasoning. The method presented here, called DSmT-AHP, is illustrated on very simple examples.
Category: Artificial Intelligence
[78] viXra:1005.0079 [pdf] submitted on 20 May 2010
Non Bayesian Conditioning and Deconditioning
Authors: Jean Dezert, Florentin Smarandache
Comments: 6 pages
In this paper, we present a Non-Bayesian conditioning rule for belief revision. This rule is truly Non-Bayesian in the sense that it doesn't satisfy the commonly adopted principle that when a prior
belief is Bayesian, after conditioning by X, Bel(X|X) must be equal to one. Our new conditioning rule for belief revision is based on the proportional conflict redistribution rule of combination
developed in DSmT (Dezert-Smarandache Theory), which abandons Bayes' conditioning principle. Such Non-Bayesian conditioning allows one to judiciously take into account the level of conflict between the
available prior belief and the conditional evidence. We also introduce the deconditioning problem and show that this problem admits a unique solution in the case of a Bayesian prior; a solution which
cannot be obtained when the classical Shafer and Bayes conditioning rules are used. Several simple examples are also presented to compare the results of this new Non-Bayesian conditioning with
the classical one.
Category: Artificial Intelligence
[77] viXra:1005.0078 [pdf] submitted on 20 May 2010
An Experimental Evidence of Energy Non-Conservation
Authors: Yu Liang, Qichang Liang, Xiaodong Liu
Comments: 5 pages
According to Maxwell's theory, the displacement current in vacuum can produce an electromotive force on a conducting current. However, the displacement current in vacuum does not experience an electromotive
force from a conducting current. The asymmetrical electromotive forces result in non-conserved energy transmission between any two coils involving displacement current and conducting current. In this
work, we designed and performed measurements of this effect. We observed explicit evidence of non-conserved energy transmission between a toroidal solenoid and a parallel-plate capacitor. The
measured energy increase is well predicted by the numerical estimation.
Category: Classical Physics
[76] viXra:1005.0077 [pdf] submitted on 19 May 2010
Fusion of Masses Defined on Infinite Countable Frames of Discernment
Authors: Florentin Smarandache, Arnaud Martin
Comments: 5 pages
In this paper we introduce for the first time the fusion of information on infinite discrete frames of discernment and we give general results for the fusion of two such masses using Dempster's
rule and the PCR5 rule for Bayesian and non-Bayesian cases.
Category: Artificial Intelligence
[75] viXra:1005.0076 [pdf] submitted on 19 May 2010
Degree of Uncertainty of a Set and of a Mass
Authors: Florentin Smarandache, Arnaud Martin
Comments: 9 pages
In this paper we extend Hartley's measure of uncertainty of a set and of a mass to the degree of uncertainty of a set and of a mass (bba).
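For orientation, a brief Python sketch of the classical Hartley measure that the paper starts from (helper names are mine; the paper's "degree of uncertainty" extension is not reproduced here). For a finite set A the measure is log2|A|, and for a mass (bba) m it is commonly aggregated as a mass-weighted sum over the focal sets:

```python
import math

# Hartley measure of a finite set: log2 of its cardinality.
def hartley(A) -> float:
    return math.log2(len(A))

# Aggregate Hartley measure of a bba: sum of m(A) * log2|A| over focal sets.
# (A common nonspecificity measure; the paper's extension is not shown.)
def hartley_of_mass(m: dict) -> float:
    return sum(w * math.log2(len(A)) for A, w in m.items())

assert hartley({1, 2, 3, 4}) == 2.0
m = {frozenset({1}): 0.5, frozenset({1, 2}): 0.5}
assert hartley_of_mass(m) == 0.5   # 0.5*log2(1) + 0.5*log2(2)
```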
Category: Artificial Intelligence
[74] viXra:1005.0075 [pdf] submitted on 19 May 2010
The Theory of Distributions Applied to Divergent Integrals of the Form (See Paper for Equation)
Authors: Jose Javier Garcia Moreta
Comments: 9 pages
In this paper we review some results on the regularization of divergent integrals of the form ... (see paper for full abstract)
Category: Functions and Analysis
[73] viXra:1005.0074 [pdf] replaced on 16 Aug 2010
Relating the Physical Structure and Properties of Quantum Space-time to Elementary Particles, Gravity, and Relativistic Phenomena
Authors: Gary Heen
Comments: 22 pages
Modern theory states that matter and energy in their most basic form exist in discrete amounts, or quanta. The author proffers that space-time also exists as discrete quanta, and derives a physical
model of space-time and elementary particles. The hypothesis for this space-time model is that the quanta for matter and space-time are convertible states of the same elementary building block: the
quantum mass unit.
Category: Quantum Gravity and String Theory
[72] viXra:1005.0073 [pdf] replaced on 24 May 2010
Relativistic Effects of Relative Velocity of Material Change Start Above Photon Scale
Authors: Amrit S. Sorli
Comments: 2 pages
The constancy of the velocity of light in different inertial systems and in areas of space with different gravity implies that relativistic effects of the relative velocity of material change start on scales
above the photon.
Category: Quantum Physics
[71] viXra:1005.0072 [pdf] replaced on 23 May 2010
The Basis of Quantum Mechanics' Compatibility with Relativity Whose Impairment Gives Rise to the Klein-Gordon and Dirac Equations
Authors: Steven Kenneth Kauffmann
Comments: 14 pages, Also archived as arXiv:1005.2641 [physics.gen-ph].
Solitary-particle quantum mechanics' inherent compatibility with special relativity is implicit in Schrödinger's postulated wave-function rule for the operator quantization of the particle's
canonical three-momentum, taken together with his famed time-dependent wave-function equation that analogously treats the operator quantization of its Hamiltonian. The resulting formally four-vector
equation system assures proper relativistic covariance for any solitary-particle Hamiltonian operator which, together with its canonical three-momentum operator, is a Lorentz-covariant four-vector
operator. This, of course, is always the case for the quantization of the Hamiltonian of a properly relativistic classical theory, so the strong correspondence principle definitely remains valid in
the relativistic domain. Klein-Gordon theory impairs this four-vector equation by iterating and contracting it, thereby injecting extraneous negative-energy solutions that are not orthogonal to their
positive-energy counterparts of the same momentum, thus destroying the basis of the quantum probability interpretation. Klein-Gordon theory, which thus depends on the square of the Hamiltonian
operator, is as well thereby cut adrift from Heisenberg's equations of motion. Dirac theory confuses the space-time symmetry of the four-vector equation system with such symmetry for its time
component alone, which it fatuously imposes, thereby breaching the strong correspondence principle for the free particle and imposing the starkly unphysical momentum-independence of velocity.
Physically sensible alternatives, with external electromagnetic fields, to the Klein-Gordon and Dirac equations are derived, and the simple, elegant symmetry-based approach to antiparticles is
pointed out.
Category: High Energy Particle Physics
[70] viXra:1005.0071 [pdf] replaced on 20 Jun 2011
Product of Distributions and Zeta Regularization of Divergent Integrals ∫ X^m-Sdx and Fourier Transforms
Authors: Jose Javier Garcia Moreta
Comments: 13 pages
Using the theory of distributions and Zeta regularization we manage to give a definition of a product of Dirac delta distributions; we show how the fact that one can define a coherent and finite
product of Dirac delta distributions is related to the regularization of divergent integrals ... (see paper for full abstract)
Category: Functions and Analysis
[69] viXra:1005.0070 [pdf] submitted on 11 Mar 2010
Set Linear Algebra and Set Fuzzy Linear Algebra
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache, K Ilanthenral
Comments: 345 pages.
In this book, the authors define the new notion of set vector spaces, which is the most generalized form of vector spaces. Set vector spaces make use of the least number of algebraic operations;
therefore, even a non-mathematician is comfortable working with them. It is with the passage of time that we can think of set linear algebras as a paradigm shift from linear algebras. Here, the
authors have also given the fuzzy parallels of these new classes of set linear algebras. This book abounds with examples to enable the reader to understand these new concepts easily. Laborious
theorems and proofs are avoided to make this book approachable for non-mathematicians. The concepts introduced in this book can be easily put to use by coding theorists, cryptologists, computer
scientists, and socio-scientists. Another special feature of this book is the final chapter containing 304 problems. The authors have suggested so many problems so that students and researchers
can obtain a better grasp of the subject. This book is divided into seven chapters. The first chapter briefly recalls some of the basic concepts in order to make this book self-contained. Chapter two
introduces the notion of set vector spaces, which is the most generalized concept of vector spaces. Set vector spaces lend themselves to defining new classes of vector spaces like semigroup vector spaces
and group vector spaces. These are also generalizations of vector spaces. The fuzzy analogues of these concepts are given in Chapter three. In Chapter four, set vector spaces are generalized to biset
bivector spaces and n-set vector spaces. This is done taking into account the advanced information technology age in which we live. As mathematicians, we have to realize that our computer-dominated
world needs special types of sets and algebraic structures. Set n-vector spaces and their generalizations are carried out in Chapter five. Fuzzy n-set vector spaces are introduced in the sixth
chapter. The seventh chapter suggests more than three hundred problems. When a researcher sets forth to solve them, she/he will certainly gain a deeper understanding of these new notions.
Category: Algebra
[68] viXra:1005.0069 [pdf] submitted on 11 Mar 2010
Smarandache Semirings and Semifields
Authors: W. B. Vasantha Kandasamy
Comments: 4 pages.
In this paper we study the notion of Smarandache semirings and semifields and obtain some interesting results about them. We show that not every semiring is a Smarandache semiring. We similarly prove
that not every semifield is a Smarandache semifield. We give several examples to make the concept lucid. Further, we propose an open problem about the existence of a Smarandache semiring S of finite
Category: Algebra
[67] viXra:1005.0068 [pdf] submitted on 11 Mar 2010
Randomness and Optimal Estimation in Data Sampling
Authors: M. Khoshnevisan, S. Saxena, H. P. Singh, S. Singh, Florentin Smarandache
Comments: 63 pages.
The purpose of this book is to postulate some theories and test them numerically. Estimation is often a difficult task and it has wide application in the social sciences and financial markets. In order to
obtain the optimum efficiency for some classes of estimators, we have divided this book into three specialized sections.
Category: Statistics
[66] viXra:1005.0067 [pdf] submitted on 11 Mar 2010
The Smarandache P and S Persistence of a Prime
Authors: Felice Russo
Comments: 5 pages.
The Smarandache P and S persistence of a prime
Category: Number Theory
[65] viXra:1005.0066 [pdf] submitted on 16 May 2010
[64] viXra:1005.0065 [pdf] submitted on 11 Mar 2010
Smarandache Pseudo-Ideals
Authors: W. B. Vasantha Kandasamy
Comments: 5 pages
In this paper we study the Smarandache pseudo-ideals of a Smarandache ring. We prove that every ideal is a Smarandache pseudo-ideal in a Smarandache ring, but a Smarandache pseudo-ideal in general is
not an ideal. Further, we show that every polynomial ring over a field and every group ring FG of a group G over any field are Smarandache rings. We pose some interesting problems about them.
Category: Algebra
[63] viXra:1005.0064 [pdf] submitted on 15 May 2010
Santilli's Isoprime Theory
Authors: Chun-Xuan Jiang
Comments: 16 Pages
We establish Santilli's isomathematics based on the generalization of modern mathematics. (see paper for rest of abstract with equations)
Category: Number Theory
[62] viXra:1005.0063 [pdf] replaced on 3 Nov 2011
On Planetary Electromagnetism and Gravity
Authors: Ashwini Kumar Lal
Comments: 10 pages, 2 figures, new hypothesis for gravitation; published in International Journal of Astronomy and Astrophysics (USA), 2011, vol. 1, no. 2, pp. 62-66
Study of the interiors of the various terrestrial planets, as presented in this paper, points to the possibility of planetary gravity being linked to the electromagnetism generated in the planetary
interiors. Findings of the study suggest that Earth's gravitational attraction may be attributed to magnetic coupling between Earth's electromagnetism and all earthly
objects, electrically charged or uncharged. More precisely, terrestrial gravity is deemed to be the outcome of the bound state of the planetary electromagnetism.
Category: Astrophysics
[61] viXra:1005.0062 [pdf] replaced on 30 May 2010
Applications of Euclidian Snyder Geometry to Space Time Physics & Deceleration Parameter ( DM Generated DE Replacement?)
Authors: Andrew Beckwith
Comments: 59 pages. 30 minute talk for the Dark Side of the Universe conference, Leon, Mexico, to be delivered in the morning of June 5 , 2010
Contains a specific elaboration of material on Glinka's quantum gas hypothesis, as far as a counting algorithm, and also attempts to show possible commonality between semi-classical theories and brane
world interpretations (higher dimensions), while addressing the issue of the implications of a small graviton mass in 4 dimensions, i.e. the violations of the correspondence/complementarity
Category: Relativity and Cosmology
[60] viXra:1005.0060 [pdf] replaced on 17 May 2010
The Galois Solvable Fourth Roots of Reality
Authors: Jack Sarfatti
Comments: 8 pages
Local observers are defined by orthonormal "non-holonomic" (aka "non-coordinate") tetrad gravity fields (Cartan's "moving frames"). The tetrads are spin 1 vector fields under the 6-parameter
homogeneous Lorentz group SO[1,3] of Einstein's 1905 special relativity. You can think of the tetrad gravity fields as the square roots of Einstein's 1916 spin 2 metric tensor gravity fields. We will
see that we must also allow for spin 0 and spin 1 gravity because the spin 1 tetrads, in turn, are Einstein-Podolsky-Rosen entangled quantum states of pairs of 2-component Penrose-Rindler qubits in
the quantum pregeometry. The Wheeler-Feynman qubits are the square roots of the advanced and retarded null tetrads and can therefore be called the Galois solvable fourth roots of reality. The
spherical wavefront tetrads are then formally the Bell pair states of quantum information theory. Penrose's Cartesian tetrads are a different choice from mine here. The different tetrad choices
correspond to the different contours around the photon propagator poles in the complex energy plane of quantum electrodynamics. Both of his spinors in his spin frame are retarded in the same light
cone, e.g. the forward cone. It seems that Penrose and Rindler implicitly answered Wheeler's question of how IT comes from BIT, but no one realized it until now.
Category: Quantum Gravity and String Theory
[59] viXra:1005.0059 [pdf] replaced on 22 Nov 2010
Geometrical Axioms Refuting the Continuum-Hypothesis
Authors: Dm. Vatolin
Comments: 14 pages, Russian.
This article formulates three geometrical axioms from which it follows that the power of the continuum is greater than the power of any well-ordered set.
Category: Set Theory and Logic
[58] viXra:1005.0058 [pdf] submitted on 11 Mar 2010
Partition of a Set which Contains an Infinite Arithmetic (Respectively Geometric) Progression
Authors: Florentin Smarandache
Comments: 3 pages
We prove that for any partition into two subsets of a set which contains an infinite arithmetic (respectively geometric) progression, at least one of these subsets contains an infinite number of
triplets such that each triplet is an arithmetic (respectively geometric) progression.
Category: Number Theory
[57] viXra:1005.0057 [pdf] submitted on 11 Mar 2010
Fuzzy and Neutrosophic Analysis of Periyar's Views on Untouchability
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache, K. Kandasamy
Comments: 385 pages
K.R.Narayanan was a lauded hero and a distinguished victim of his Dalit background. Even in an international platform when he was on an official visit to Paris, the media headlines blazed, 'An
Untouchable at Elysee'. He was visibly upset and it proved that a Dalit who rose up to such heights was never spared from the pangs of outcaste-ness and untouchability, which is based on birth. Thus,
if the erstwhile first citizen of India faces such humiliation, what will be the plight of the last man who is a Dalit? As one of the world's largest socio-economically oppressed, culturally
subjugated and politically marginalized group of people, the 138 million Dalits in India suffer not only from the excesses of the traditional oppressor castes, but also from State Oppression - which
includes, but is not limited to, authoritarianism, police brutality, economic embargo, criminalization of activists, electoral violence, repressive laws that aim to curb fundamental rights, and the
non-implementation of laws that safeguard Dalit rights. The Dalits were considered untouchable for thousands of years by the Hindu society until the Constitution of India officially abolished the
practice of untouchability in 1950.
Category: Social Science
[56] viXra:1005.0056 [pdf] submitted on 11 Mar 2010
[55] viXra:1005.0055 [pdf] submitted on 11 Mar 2010
Reservation for Other Backward Classes in Indian Central Government Institutions Like IITs, IIMs and AIIMS: A Study of the Role of Media Using Fuzzy Super FRM Models
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache, K. Kandasamy
Comments: 16 pages
The new notions of super column FRM model, super row FRM model and mixed super FRM model are introduced in this book. These three models are introduced specially to analyze the biased role of the
print media on 27 percent reservation for the Other Backward Classes (OBCs) in educational institutions run by the Indian Central Government. This book has four chapters. In chapter one the authors
introduce the three types of super FRM models. Chapter two uses these three new super fuzzy models to study the role of media which feverishly argued against 27 percent reservation for OBCs in
Central Government-run institutions in India. The experts we consulted were divided into 19 groups depending on their profession. These groups of experts gave their opinion and comments on the
news-items that appeared about reservations in dailies and weekly magazines, and the gist of these lengthy discussions forms the third chapter of this book. The fourth chapter gives the conclusions
based on our study. Our study was conducted from April 2006 to March 2007, at which point of time the Supreme Court of India stayed the 27 percent reservation for OBCs in the IITs, IIMs and AIIMS.
After the aforesaid injunction from the Supreme Court, the experts did not wish to give their opinion since the matter was sub-judice. The authors deeply acknowledge the service of each and every
expert who contributed their opinion and thus made this book a possibility. We have analyzed the data using the opinion of the experts who formed a heterogeneous group consisting of administrators,
lawyers, OBC/SC/ST students, upper caste students and Brahmin students, educationalists, university vice-chancellors, directors, professors, teachers, retired Judges, principals of colleges, parents,
journalists, members of the public, politicians, doctors, engineers, NGOs and government staff.
Category: Social Science
[54] viXra:1005.0054 [pdf] submitted on 11 Mar 2010
Some Smarandache Problems
Authors: Mladen V. Vassilev-Missana, Krassimir T. Atanassov
Comments: 67 pages, Book in Romanian, French and English. Proposed and solved problems for students' mathematical competitions in number theory, algebra, geometry, trigonometry, calculus.
During the five years since publishing [2], we have obtained many new results related to the Smarandache problems. We are happy to have the opportunity to present them in this book for the enjoyment
of a wider audience of readers. The problems in Chapter two have also been solved and published separately by the authors, but it makes sense to collate them here so that they can be better seen in
perspective as a whole, particularly in relation to the problems elucidated in Chapter one. Many of the problems, and more especially the techniques employed in their solution, have wider
applicability than just the Smarandache problems, and so they should be of more general interest to other mathematicians, particularly both professional and amateur number theorists.
Category: Number Theory
[53] viXra:1005.0053 [pdf] submitted on 11 Mar 2010
Solved Problems of Geometry and Trigonometry for College Students.
Authors: Florentin Smarandache
Comments: 171 pages
Solved problems of geometry and trigonometry for college students.
Category: Geometry
[52] viXra:1005.0052 [pdf] replaced on 27 Oct 2010
Tetron Model Building
Authors: Bodo Lampe
Comments: 12 pages, 1 table, 1 figure
Spin models are considered on a discretized inner symmetry space with tetrahedral symmetry as possible dynamical schemes for the tetron model. Parity violation, which corresponds to a change of sign
for odd permutations, is shown to dictate the form of the Hamiltonian. It is further argued that such spin models can be obtained from more fundamental principles by considering a (6+1)- or (7+1)
-dimensional spacetime with octonion multiplication.
Category: High Energy Particle Physics
[51] viXra:1005.0051 [pdf] replaced on 2012-11-30 07:05:53
Big Bang Model? : A Critical Review
Authors: Ashwini Kumar Lal
Comments: 26 pages, 5 figures, minor modification, published in 'Journal of Cosmology' (USA), 2010,Vol.6, pp.1533-1547
The Inflationary Hot Big Bang Model is the generally accepted theory for the origin of the universe. Nonetheless, findings of observational astronomy, as well as revelations in the field of
fundamental physics over the past two decades, question the validity of the 'Big Bang' model as a viable theory for the origin of the universe. This paper examines a few of the various factors which
undermine the Big Bang theory, including the organization of galactic superstructures, the Cosmic Microwave Background, redshifts, distant galaxies, the age of local galaxies, and gravitational waves.
Category: Astrophysics
[50] viXra:1005.0050 [pdf] submitted on 14 May 2010
Theory Cannot Choose from Its Several Possible Interpretations
Authors: Ron Bourgoin
Comments: 3 pages
A theory can be interpreted several different ways. Which interpretation is "the correct interpretation" is well nigh impossible to determine. For that reason, we select the one that best fits our
concept of what reality is. That means we choose on the basis of metaphysics.
Category: History and Philosophy of Physics
[49] viXra:1005.0049 [pdf] submitted on 11 Mar 2010
Only Problems Not Solutions
Authors: Florentin Smarandache
Comments: 112 pages
The development of mathematics continues in a rapid rhythm, some unsolved problems are elucidated and simultaneously new open problems to be solved appear.
Category: Number Theory
[48] viXra:1005.0048 [pdf] submitted on 11 Mar 2010
Estimation of Mean in Presence of Non Response Using Exponential Estimator
Authors: Rajesh Singh, Mukesh Kumar, Manoj K. Chaudhary, Florentin Smarandache
Comments: 11 pages
This paper considers the problem of estimating the population mean using information on an auxiliary variable in the presence of non-response. Exponential ratio and exponential product type estimators
have been suggested and their properties are studied. An empirical study is carried out to support the theoretical results.
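The exponential ratio-type estimator referred to here is commonly written, following Bahl and Tuteja, as ybar * exp((Xbar - xbar)/(Xbar + xbar)). The sketch below is not taken from the paper and ignores its non-response adjustment; it only illustrates that basic form:

```python
import math

def exp_ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Exponential ratio-type estimator of the population mean of y,
    using a known population mean of the auxiliary variable x
    (Bahl-Tuteja form). The paper's non-response adjustment is omitted."""
    ybar = sum(y_sample) / len(y_sample)
    xbar = sum(x_sample) / len(x_sample)
    return ybar * math.exp((x_pop_mean - xbar) / (x_pop_mean + xbar))

# When the sample mean of x equals its population mean, the exponential
# factor is exp(0) = 1 and the estimator reduces to the sample mean of y.
```

If the sample underestimates the auxiliary mean (xbar < Xbar), the exponential factor exceeds 1 and the estimate of the mean of y is inflated accordingly.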
Category: Statistics
[47] viXra:1005.0047 [pdf] submitted on 11 Mar 2010
A Method of Solving Certain Nonlinear Diophantine Equations
Authors: Florentin Smarandache
Comments: 2 pages
In this paper we propose a method of solving a Nonlinear Diophantine Equation by converting it into a System of Diophantine Linear Equations.
Category: Number Theory
[46] viXra:1005.0046 [pdf] submitted on 11 Mar 2010
N-Linear Algebra of Type II
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 231 pages
This book is a continuation of the book n-linear algebra of type I and its applications. Most of the properties that could not be derived or defined for n-linear algebra of type I is made possible in
this new structure: n-linear algebra of type II which is introduced in this book. In case of n-linear algebra of type II, we are in a position to define linear functionals which is one of the marked
difference between the n-vector spaces of type I and II. However all the applications mentioned in n-linear algebras of type I can be appropriately extended to n-linear algebras of type II. Another
use of n-linear algebra (n-vector spaces) of type II is that when this structure is used in coding theory we can have different types of codes built over different finite fields whereas this is not
possible in the case of n-vector spaces of type I. Finally in the case of n-vector spaces of type II we can obtain n-eigen values from distinct fields; hence, the n-characteristic polynomials formed
in them are in distinct fields. An attractive feature of this book is that the authors have suggested 120 problems for the reader to pursue in order to understand this new notion. This book
has three chapters. In the first chapter the notion of n-vector spaces of type II are introduced. This chapter gives over 50 theorems. Chapter two introduces the notion of n-inner product vector
spaces of type II, n-bilinear forms and n-linear functionals. The final chapter suggests over a hundred problems. It is important that the reader should be well versed with not only linear algebra
but also n-linear algebras of type I.
Category: Algebra
[45] viXra:1005.0045 [pdf] submitted on 11 Mar 2010
N-Linear Algebra of Type I and Its Applications
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 120 pages
With the advent of computers one needs algebraic structures that can simultaneously work with bulk data. One such algebraic structure namely n-linear algebras of type I are introduced in this book
and its applications to n-Markov chains and n-Leontief models are given. These structures can be thought of as the generalization of bilinear algebras and bivector spaces. Several interesting
n-linear algebra properties are proved. This book has four chapters. The first chapter just introduces n-groups, which are essential for the definition of n-vector spaces and n-linear algebras of type I.
Chapter two gives the notion of n-vector spaces and several related results which are analogues of the classical linear algebra theorems. In case of n-vector spaces we can define several types of
linear transformations. The notion of n-best approximations can be used for error correction in coding theory. The notion of n-eigen values can be used in deterministic modal superposition principle
for undamped structures, which can find its applications in finite element analysis of mechanical structures with uncertain parameters. Further it is suggested that the concept of n-matrices can be
used in real-world problems which adopt fuzzy models like Fuzzy Cognitive Maps, Fuzzy Relational Equations and Bidirectional Associative Memories. The applications of these algebraic structures
are given in Chapter 3. Chapter four gives some problem to make the subject easily understandable. The authors deeply acknowledge the unflinching support of Dr.K.Kandasamy, Meena and Kama.
Category: Algebra
[44] viXra:1005.0044 [pdf] submitted on 11 Mar 2010
Fuzzy Interval Matrices, Neutrosophic Interval Matrices and Their Applications
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 304 pages
The new concept of fuzzy interval matrices has been introduced in this book for the first time. The authors have not only introduced the notion of fuzzy interval matrices, interval neutrosophic
matrices and fuzzy neutrosophic interval matrices but have also demonstrated some of its applications when the data under study is an unsupervised one and when several experts analyze the problem.
Further, the authors have introduced in this book multi-expert models using these three new types of interval matrices. The new multi-expert models dealt with in this book are FCIMs, FRIMs, FCInMs, FRInMs,
IBAMs, IBBAMs, nIBAMs, FAIMs, FAnIMS, etc. Illustrative examples are given so that the reader can follow these concepts easily. This book has three chapters. The first chapter is introductory in
nature and makes the book a self-contained one. Chapter two introduces the concept of fuzzy interval matrices. Also the notion of fuzzy interval matrices, neutrosophic interval matrices and fuzzy
neutrosophic interval matrices, can find applications to Markov chains and Leontief economic models. Chapter three gives the application of fuzzy interval matrices and neutrosophic interval matrices
to real-world problems by constructing the models already mentioned. Further these models are mainly useful when the data is an unsupervised one and when one needs a multi-expert model. The new
concept of fuzzy interval matrices and neutrosophic interval matrices will find their applications in engineering, medical, industrial, social and psychological problems. We have given a long list of
references to help the interested reader.
Category: Artificial Intelligence
[43] viXra:1005.0043 [pdf] submitted on 12 May 2010
Proposing the Existence of a New Symmetry Called the Wick Symmetry-Representation of a Particle as a Primary Gas - VI
Authors: V.A.Induchoodan Menon
Comments: 21 pages
The author discusses the similarity between the expression for the state function of the primary gas representing a particle and that of the wave function. It is observed that the only difference
between these two expressions is that in the former time appears as a real function while in the latter it appears as an imaginary function. He shows that the primary gas approach which treats time
as real and the quantum mechanical approach which treats time as imaginary are two ways of representing the same reality, and points to a new symmetry called the Wick symmetry. He shows that the
probability postulate of quantum mechanics can be understood in a very simple and natural manner based on the primary gas representation of the particle. It is shown that the zero point energy of the
quantum mechanics is nothing but the energy of the thermal bath formed by the vacuum fluctuations in the Higgs field. He shows that quantum mechanics is nothing but the thermodynamics of the
primary gas where time has not lost its directional symmetry.
Category: Quantum Physics
[42] viXra:1005.0042 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (20)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove that for any k there are infinitely many primes P such that each of jP^P[0] + j + 1 is a prime.
Category: Number Theory
[41] viXra:1005.0041 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (19)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove that for any k there are infinitely many primes P such that each of P^P[0] + 4^n is a prime.
Category: Number Theory
[40] viXra:1005.0040 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (18)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove that for any k there are infinitely many primes P such that each of P^P[0] + (2j)^2 is a prime.
Category: Number Theory
[39] viXra:1005.0039 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (17)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove that for any k there are infinitely many primes P such that each of P^P[0] + j(j+1) is a prime.
Category: Number Theory
[38] viXra:1005.0038 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (16)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of jP^5 + j +1 is a prime.
Category: Number Theory
[37] viXra:1005.0037 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (15)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of P^5 + 4^n is a prime.
Category: Number Theory
[36] viXra:1005.0036 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (14)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of P^5 + (2j)^2 is a prime.
Category: Number Theory
[35] viXra:1005.0035 [pdf] submitted on 11 May 2010
New Prime K-Tuple Theorem (13)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of P^5 + j( j +1) is a prime.
Category: Number Theory
[34] viXra:1005.0034 [pdf] replaced on 12 May 2010
Our Artificial Reality Created by Theory
Authors: Ron Bourgoin
Comments: 2 pages
Theory is a template, a schematic of what we think reality is. We see reality according to the template. What does not conform to the template is excised. For that reason, much that occurs in the
world is not seen. That is how we miss the great discoveries going on right under our noses.
Category: History and Philosophy of Physics
[33] viXra:1005.0033 [pdf] submitted on 11 May 2010
Covariant Isolation from an Abelian Gauge Field of Its Nondynamical Potential, Which, When Fed Back, Can Transform Into a "Confining Yukawa"
Authors: Steven Kenneth Kauffmann
Comments: 12 pages, Also archived as arXiv:1005.1101 [physics.gen-ph]
For Abelian gauge theory a properly relativistic gauge is developed by supplementing the Lorentz condition with causal determination of the time component of the four-vector potential by retarded
Coulomb transformation of the charge density. This causal Lorentz gauge agrees with the Coulomb gauge for static charge densities, but allows the four-vector potential to have a longitudinal
component that is determined by the time derivative of the four-vector potential's time component. Just as in Coulomb gauge, the two transverse components of the four-vector potential are its sole
dynamical part. The four-vector potential in this gauge covariantly separates into a dynamical transverse four-vector potential and a nondynamical timelike/longitudinal four-vector potential, where
each of these two satisfies the Lorentz condition. In fact, analogous partition of the conserved four-current shows each to satisfy a Lorentz-condition Maxwell-equation system with its own conserved
four-current. Because of this complete separation, either of these four-vector potentials can be tinkered with without affecting its counterpart. Since it satisfies the Lorentz condition, the
nondynamical four-vector potential times a constant with dimension of inverse length squared is itself a conserved four-current, and so can be fed back into its own source current, which transforms
its time component into an extended Yukawa, with both exponentially decaying and exponentially growing components. The latter might be the mechanism of quark-gluon confinement: in non-Abelian color
gauge theory the Yukawa mixture ratio ought to be tied to color, with palpable consequences for "colorful" hot quark-gluon plasmas.
Category: Quantum Physics
[32] viXra:1005.0032 [pdf] submitted on 9 May 2010
New Prime K-Tuple Theorem (12)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of jP^3 + j + 1 is a prime.
Category: Number Theory
[31] viXra:1005.0031 [pdf] submitted on 9 May 2010
New Prime K-Tuple Theorem (11)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of P^3 + 4^n is a prime.
Category: Number Theory
[30] viXra:1005.0030 [pdf] submitted on 9 May 2010
New Prime K-Tuple Theorem (10)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of P^3 + (2j)^2 is a prime.
Category: Number Theory
[29] viXra:1005.0029 [pdf] submitted on 9 May 2010
New Prime K-Tuple Theorem (9)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove that P, P^15 + j(j+1) (j = 1, ..., 7) contain no prime solutions.
Category: Number Theory
[28] viXra:1005.0028 [pdf] submitted on 9 May 2010
New Prime K-Tuple Theorem (8)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove that P, P^9 + j(j+1) (j = 1, ..., 7) contain no prime solutions.
Category: Number Theory
[27] viXra:1005.0027 [pdf] submitted on 9 May 2010
New Prime K-Tuple Theorem (7)
Authors: Chun-Xuan Jiang
Comments: 2 Pages
Using Jiang function we prove for any k there are infinitely many primes P such that each of P^3 + j( j + 1) is a prime.
Category: Number Theory
[26] viXra:1005.0026 [pdf] submitted on 10 May 2010
A Theory of Unified Gravitation
Authors: Gil Raviv
Comments: 255 Pages.
The theory presented here, entitled the theory of unified gravitation, holds that the nuclear strong interaction and gravitation are one and the same force. Detailed and relatively simple mathematics
are shown to lead to an explicit strong/gravitational force equation that relies on only three independent parameters, identical to the parameters used in Newton's gravitational theory. The theory is
applied on various distance scales to explain a broad range of phenomena, and is shown to provide an unparalleled level of agreement with observations, without requiring an assumption of dark matter,
dark energy or inflation. Most notable is its ability to reproduce the morphologies of various types of galaxies and nebulae, as well as the complex structure of Saturn's main body of rings.
Additional large-scale phenomena explained by unified gravitation include
• The constant rotation curve observed in spiral galaxies
• The nature of density waves in spiral galaxies
• The mechanism underlying star formation and fragmentation
• The parameters that determine galactic (or nebular) morphology and classification
• The clustering of nearby galaxies, repulsion between distant galaxies, and the creation of galactic voids
• The accelerated expansion of the universe
• The cause of the observed redshift periodicity
• The mechanism responsible for the creation of galactic and stellar wind
• The sudden expansion of gas and matter observed in novae and supernovae
• The formation of planetary ring systems and the composition of planets
• The mechanism responsible for the creation of the planetary and galactic magnetic fields
• A possible mechanism for the creation of the solar corona
• The process of ionization that produces the vast amount of plasma in the universe.
On nuclear scale, the theory is demonstrated to account for the observed weak fall-off of the deep inelastic scattering cross section, and to provide a scaling behavior similar to the observed
Bjorken scaling.
Category: Astrophysics
[25] viXra:1005.0025 [pdf] submitted on 10 May 2010
Proof of the 3n+1 Problem for N ≥ 1
Authors: Steffen Bode
Comments: 6 Pages.
I establish the existence of a unique binary pattern inherent to the 3n+1 step, and then use this binary pattern to prove the 3n+1 problem for all positive integers.
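Independently of the claimed proof, the 3n+1 iteration itself is easy to state and check empirically; the following sketch is not drawn from the paper and merely counts iterations until 1 is reached:

```python
def collatz_steps(n):
    """Count 3n+1 iterations until n reaches 1 (n a positive integer):
    halve n when it is even, map n to 3n + 1 when it is odd."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Empirical check over the first 10,000 positive integers; this
# illustrates the conjecture, it is not a proof.
assert all(collatz_steps(n) < 10_000 for n in range(1, 10_001))
```

For example, starting from 6 the orbit is 6, 3, 10, 5, 16, 8, 4, 2, 1, i.e. eight steps.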
Category: Number Theory
[24] viXra:1005.0024 [pdf] replaced on 23 May 2010
Warp Drive Basic Science Written For "Aficionados". Chapter I - Miguel Alcubierre.
Authors: Fernando Loup
Comments: 37 Pages. The Warp Drive as a Dynamical Spacetime is one of the most interesting Spacetimes of General Relativity and is being heavily studied inside arXiv.org. See for example the arXiv
papers 1001.4960, 0904.0141, 0710.4474, gr-qc/0009013, gr-qc/0110086, gr-qc/9905084, gr-qc/9702026 or the Post Doctoral Dissertation Thesis gr-qc/9805037. We feel that it is time for viXra to have its
own papers exclusively devoted to this Dynamical Spacetime: the Warp Drive
The Alcubierre Warp Drive is one of the most exciting Spacetimes of General Relativity. It was the first Spacetime Metric able to develop Superluminal Velocities. However, some physical problems associated
with the Alcubierre Warp Drive seemed to deny the Superluminal Behaviour. We demonstrate in this work that some of these problems can be overcome, and we arrive at some interesting results, although we
used two different Shape Functions: one continuous g(rs) as an alternative to the original Alcubierre f(rs), and a Piecewise Shape Function f[pc](rs) as an alternative to the Ford-Pfenning Piecewise
Shape Function with a behaviour similar to the Natario Warp Drive, producing effectively an Alcubierre Warp Drive without Expansion/Contraction of the Spacetime. Horizons will exist and cannot be
avoided; however, we found a way to "overcome" this problem. We also introduce here the Casimir Warp Drive.
Category: Relativity and Cosmology
[23] viXra:1005.0023 [pdf] submitted on 11 Mar 2010
Considerations on New Functions in Number Theory
Authors: Florentin Smarandache
Comments: 20 pages
In this paper a small survey is presented on eighteen new functions and four new sequences, such as: Inferior/Superior f-Part, Fractional f-Part, Complementary function with respect to another
function, S-Multiplicative, Primitive Function, Double Factorial Function, S-Prime and S-Coprime Functions, Smallest Power Function.
Category: Number Theory
[22] viXra:1005.0022 [pdf] submitted on 11 Mar 2010
Analysis of Social Aspects of Migrant Labourers Living with HIV/AIDS Using Fuzzy Theory and Neutrosophic Cognitive Maps
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 472 pages
Neutrosophic logic grew as an alternative to the existing topics and it represents a mathematical model of uncertainty, vagueness, ambiguity, imprecision, undefined-ness, unknown, incompleteness,
inconsistency, redundancy and contradiction. Despite various attempts to reorient logic, there has remained an essential need for an alternative system that could infuse into itself a representation
of the real world. Out of this need arose the system of neutrosophy and its connected logic, neutrosophic logic. This new logic, which allows also the concept of indeterminacy to play a role in any
real-world problem, was introduced first by one of the authors Florentin Smarandache.
Category: Quantitative Biology
[21] viXra:1005.0021 [pdf] submitted on 11 Mar 2010
Neutrosophic Rings
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 154 pages
In this book we define the new notion of neutrosophic rings. The motivation for this study is two-fold. Firstly, the classes of neutrosophic rings defined in this book are generalization of the two
well-known classes of rings: group rings and semigroup rings. The study of these generalized neutrosophic rings will give more results for researchers interested in group rings and semigroup rings.
Secondly, the notion of neutrosophic polynomial rings will cause a paradigm shift in the general polynomial rings. This study has to make several changes in case of neutrosophic polynomial rings.
This would give solutions to polynomial equations for which the roots can be indeterminates. Further, the notion of neutrosophic matrix rings is defined in this book. Already these neutrosophic
matrices have been applied and used in the neutrosophic models like neutrosophic cognitive maps (NCMs), neutrosophic relational maps (NRMs) and so on.
Category: Algebra
[20] viXra:1005.0020 [pdf] submitted on 8 May 2010
Confidence Intervals for the Pythagorean Formula in Baseball
Authors: David D. Tung
Comments: 27 Pages.
In this paper, we will investigate the problem of obtaining confidence intervals for a baseball team's Pythagorean expectation, i.e. their expected winning percentage and expected games won. We study
this problem from two different perspectives. First, in the framework of regression models, we obtain confidence intervals for prediction, i.e. more formally, prediction intervals for a new
observation, on the basis of historical binomial data for Major League Baseball teams from the 1901 through 2009 seasons, and apply this to the 2009 MLB regular season. We also obtain a Scheffé-type
simultaneous prediction band and use it to tabulate predicted winning percentages and their prediction intervals, corresponding to a range of values for log(RS/RA). Second, parametric bootstrap
simulation is introduced as a data-driven, computer-intensive approach to numerically computing confidence intervals for a team's expected winning percentage. Under the assumption that runs scored
per game and runs allowed per game are random variables following independent Weibull distributions, we numerically calculate confidence intervals for the Pythagorean expectation via parametric
bootstrap simulation on the basis of each team's runs scored per game and runs allowed per game from the 2009 MLB regular season. The interval estimates, from either framework, allow us to infer with
better certainty as to which teams are performing above or below expectations. It is seen that the bootstrap confidence intervals appear to be better at detecting which teams are performing above or
below expectations than the prediction intervals obtained in the regression framework.
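The parametric bootstrap described above can be sketched as follows. The sketch is not the paper's code: the Weibull shape/scale values used in any call are illustrative placeholders rather than parameters fitted from MLB data, and the Pythagorean exponent is fixed at 2:

```python
import random

def pythagorean_wpct(rs, ra, gamma=2.0):
    """Pythagorean expectation: expected winning percentage from
    total runs scored (rs) and total runs allowed (ra)."""
    return rs**gamma / (rs**gamma + ra**gamma)

def bootstrap_ci(shape_rs, scale_rs, shape_ra, scale_ra,
                 games=162, n_boot=2000, alpha=0.05, seed=42):
    """Parametric bootstrap percentile CI for a team's Pythagorean
    winning percentage, assuming per-game runs scored and allowed
    follow independent Weibull distributions (the paper's assumption)."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        # Simulate one season's total runs scored and allowed.
        rs = sum(rng.weibullvariate(scale_rs, shape_rs) for _ in range(games))
        ra = sum(rng.weibullvariate(scale_ra, shape_ra) for _ in range(games))
        stats.append(pythagorean_wpct(rs, ra))
    stats.sort()
    # Percentile interval from the sorted bootstrap replicates.
    return (stats[int(n_boot * alpha / 2)],
            stats[int(n_boot * (1 - alpha / 2)) - 1])
```

As a sanity check, a team whose runs-scored and runs-allowed distributions coincide gets an interval straddling .500, as one would expect.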
Category: Statistics
[19] viXra:1005.0019 [pdf] submitted on 7 May 2010
Nonlinear Theory of Elementary Particles 1. Choice of Axiomatics and Mathematical Apparatus of Theory
Authors: A.G. Kyriakos
Comments: 12 Pages.
In the previous paper (http://vixra.org/abs/1003.0169), which can be considered as an introduction to the nonlinear theory, we have shown that the Standard Model (SM) is not an axiomatic, but an
algorithmic theory. In the proposed article the simplest (minimum) axiomatics is examined from the point of view of the possible forms of its mathematical representation.
Category: High Energy Particle Physics
[18] viXra:1005.0018 [pdf] submitted on 7 May 2010
Using Noetics to Determine the Geometric Limits of 3-Body Alignments that Produce Subtle Energies
Authors: Jeffrey S. Keen
Comments: 7 pages, 5 Figures.
Attempting to link quantum physics with general relativity is one current approach to the comprehension of the structure of the universe. However, this could be an impossible objective as recent
theories suggest that gravity is not a fundamental force but a consequence of the way information about material objects is organised in space time (e.g. Reference 27). In this theory, gravity is
analogous to the flow of water, and involves a holographic universe, information, entropy, and chaos theory. However, such theories do not explain acts of observations affecting the results of
scientific experiments. Many researchers, (e.g. Reference 51), including the author, believe that understanding the structure of the universe lies not just in physics and the above concepts, but in
addition involves consciousness and cognitive neuroscience together with understanding the nature and perception of information. As Noetics and dowsing involve all the latter factors, it is proving
to be a powerful and relevant research tool. This paper combines these considerations in a non-orthodox, but heuristic approach, linked by geometry. A previous paper, (See Reference 24: http://
vixra.org/abs/1001.0004), identified that geometric alignments of three bodies, be they 3 pebbles, 3 circles drawn on paper, or 3 astronomical bodies produce a subtle energy beam that can be detected
by the mind and measured. Intriguingly, this beam has a divergence angle involving the inverse of the Fine Structure Constant (137). It has also been shown to instantaneously communicate conscious
information across the solar system. These facts suggest, together with other findings, that this "consciousness beam" is linked to the structure of the universe. This avenue of research is now
further developed, by quantifying the limits of the alignment of the 3-bodies that is required to produce the subtle energy beam. The findings are that for observations made near the outer of the 3
bodies, the alignment must be less than arcsine 1/4. But for observations near the middle body, this alignment must be within arcsine 1/5. This article is a summary of the concepts which are augmented
on the author's website http://www.jeffreykeen.co.uk/
Category: Quantum Gravity and String Theory
[17] viXra:1005.0017 [pdf] submitted on 5 May 2010
[16] viXra:1005.0016 [pdf] replaced on 2012-04-18 09:35:00
Generalization of a Remarkable Theorem
Authors: Ion Patrascu, Florentin Smarandache
Comments: 3 Pages.
Professor Claudiu Coandă proved, using the barycentric coordinates, a remarkable theorem. We generalize this theorem using some results from projective geometry relative to the pole and polar
Category: Geometry
[15] viXra:1005.0015 [pdf] submitted on 4 May 2010
On Relating a 10-Dimensional and 11-Dimensional Duality Model of Quantized Space-Time to Elementary Particles
Authors: Gary Heen
Comments: 8 pages
It is suggested in this paper that space-time and matter are both derived from a common entity, the quantum mass unit. A 10-dimensional and 11-dimensional duality model of the quantum mass unit is
presented diagrammatically, and a mathematical argument is put forth indicating how energetic photons interact with space-time, converting space-time into virtual particle pairs of matter and antimatter.
Category: Quantum Physics
[14] viXra:1005.0014 [pdf] submitted on 4 May 2010
Tuning and What it Means to Physics
Authors: Ron Bourgoin
Comments: 3 pages
In the 60s, the word was relevance; now it's tuning. For physics, it means striking a resonance between the world of making a profit and the preparation of physicists. The international effort is
designed to solicit the input of industry in restructuring the physics curriculum. This spells trouble for physics departments.
Category: History and Philosophy of Physics
[13] viXra:1005.0013 [pdf] submitted on 4 May 2010
[12] viXra:1005.0012 [pdf] replaced on 10 May 2010
Deceleration Parameter Q(z) and the Role of Nucleated GW 'Graviton Gas' in the Development of DE Alternatives
Authors: Andrew Beckwith
Comments: 9 pages, 3 figures. Key words inserted, PACS, and an additional figure put in, to discuss what may be needed in order to obtain a rate equation. Comparison with the case of solar axions,
and their flux upon the Earth's surface, raised. For possible presentation at the FF 11 conference, in Paris, pending their review. Already submitted to an IOP journal for review/ possible
The case for a four-dimensional graviton mass (non-zero) influencing reacceleration of the universe in five dimensions is stated, with particular emphasis upon whether five-dimensional geometries as given
below give us new physical insight as to cosmological evolution. A comparison with the quantum gas hypothesis of Glinka shows how stochastic GW/ gravitons may emerge in vacuum nucleated space, with
emphasis upon comparing their number in phase space, as compared with different strain values
Category: Relativity and Cosmology
[11] viXra:1005.0011 [pdf] submitted on 10 Mar 2010
The Neutrosophic Research Method in Scientific and Humanistic Fields
Authors: Florentin Smarandache
Comments: 2 pages
The Neutrosophic Research Method is a generalization of Hegel's dialectic, and suggests that scientific and humanistic research will progress via studying not only the opposite ideas but the neutral
ideas related to them as well in order to have a bigger picture of the whole problem to solve.
Category: General Science and Philosophy
[10] viXra:1005.0010 [pdf] submitted on 3 May 2010
Another Explanation of the Redshifts of the Pair Quasar-Galaxy NGC 7319
Authors: José Francisco García Juliá
Comments: 2 Pages.
The excess of redshift of the quasar might be produced in its interior by the transference of heat from the light waves to the radio waves.
Category: Astrophysics
[9] viXra:1005.0009 [pdf] submitted on 3 May 2010
Concept and Method of Physimatics, Logic of Existence and the Logical Time Formula
Authors: Robert Gallinat
Comments: 12 pages
Conceptual approach and heuristic method for an investigation of the possible algebraic structure of the interdependence between mathematical and physical reality and about the connection between
local, non-local and global properties in physics and mathematics, expressed by a General N-fold algebra (continued)
Category: Relativity and Cosmology
[8] viXra:1005.0008 [pdf] submitted on 2 May 2010
An Improvement of the Chen Jing Run Theorem
Authors: Tong Xin Ping
Comments: 3 Pages, In Chinese
Chen Jing Run proved that "On the representation of a large even integer as the sum of a prime and the product of at most two primes" and gave lower-bound estimations of the number of solutions. Jiang
Chun Xuan and Tong Xin Ping proved that "An even integer as the sum of a prime and the product of two primes" and computed a formula for the number of solutions. This paper compares the accuracy of the
three formulas.
Category: Number Theory
[7] viXra:1005.0007 [pdf] submitted on 10 Mar 2010
Smarandache Near-Rings and Their Generalizations
Authors: W. B. Vasantha Kandasamy
Comments: 5 pages
In this paper we study the Smarandache semi-near-ring and near-ring, their homomorphisms, and also the Anti-Smarandache semi-near-ring. We obtain some interesting results about them, give many examples, and pose
some problems. We also define the Smarandache semi-near-ring homomorphism.
Category: Algebra
[6] viXra:1005.0006 [pdf] submitted on 10 Mar 2010
Neutrality and Many-Valued Logics
Authors: Andrew Schumann, Florentin Smarandache
Comments: 121 pages
This book written by A. Schumann & F. Smarandache is devoted to advances of non-Archimedean multiple-validity idea and its applications to logical reasoning. Leibnitz was the first who proposed
Archimedes' axiom to be rejected. He postulated infinitesimals (infinitely small numbers) of the unit interval [0, 1] which are larger than zero, but smaller than each positive real number. Robinson
applied this idea into modern mathematics in [117] and developed so-called non-standard analysis. In the framework of non-standard analysis there were obtained many interesting results examined in
[37], [38], [74], [117].
Category: Set Theory and Logic
[5] viXra:1005.0005 [pdf] submitted on 10 Mar 2010
Basic Neutrosophic Algebraic Structures and Their Application to Fuzzy and Neutrosophic Models
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 149 pages
Study of neutrosophic algebraic structures is very recent. The introduction of neutrosophic theory has put forth a significant concept by giving representation to indeterminates. Uncertainty or
indeterminacy happens to be one of the major factors in almost all real-world problems. When uncertainty is modeled we use fuzzy theory, and when indeterminacy is involved we use neutrosophic theory.
Most of the fuzzy models which deal with the analysis and study of unsupervised data make use of directed graphs or bipartite graphs; thus the use of graphs has become inevitable in fuzzy models.
The neutrosophic models are fuzzy models that permit the factor of indeterminacy, and they utilize the concept of neutrosophic graphs. Thus neutrosophic graphs and
neutrosophic bipartite graphs play the role of representing the neutrosophic models. To construct neutrosophic graphs one needs some of the neutrosophic algebraic structures, viz.
neutrosophic fields, neutrosophic vector spaces and neutrosophic matrices, so for the first time we introduce and study these concepts. As our analysis in this book is an application of neutrosophic
algebraic structures, we found it fitting to first introduce and study neutrosophic graphs and their applications to neutrosophic models.
Category: Algebra
[4] viXra:1005.0004 [pdf] submitted on 10 Mar 2010
Smarandache Non-Associative (SNA-) rings
Authors: W. B. Vasantha Kandasamy
Comments: 13 pages
In this paper we introduce the concept of Smarandache non-associative rings, which we shortly denote as SNA-rings, as derived from the general definition of a Smarandache structure (i.e., a set A
embedded with a weak structure W such that a proper subset B in A is embedded with a stronger structure S). To date the concept of SNA-rings has not been studied or introduced in the Smarandache
algebraic literature. The only non-associative structures found in Smarandache algebraic notions so far are Smarandache groupoids and Smarandache loops, introduced in 2001 and 2002, but these are
algebraic structures with only a single, non-associative binary operation defined on them. SNA-rings, by contrast, are non-associative structures on which two binary operations are defined, one
associative and the other non-associative, with multiplication distributing over addition both from the right and the left. Further, to understand the concept of SNA-rings one should be well versed in
the concepts of group rings, semigroup rings, loop rings and groupoid rings. The notion of groupoid rings is new and has been introduced in this paper. This concept of groupoid rings can alone provide
examples of SNA-rings without unit, since all other rings happen to be either associative or non-associative rings with unit. We define SNA subrings, SNA ideals, SNA Moufang rings, SNA Bol rings, SNA
commutative rings, SNA non-commutative rings and SNA alternative rings. Examples are given of each of these structures and some open problems are suggested at the end.
Category: Algebra
[3] viXra:1005.0003 [pdf] submitted on 10 Mar 2010
N-Algebraic Structures and S-N-Algebraic Structures
Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 209 pages
In this book, for the first time we introduce the notions of N-groups, N-semigroups, N-loops and N-groupoids. We also define a mixed N-algebraic structure. We expect the reader to be well versed in
group theory and have at least basic knowledge about Smarandache groupoids, Smarandache loops, Smarandache semigroups and bialgebraic structures and Smarandache bialgebraic structures.
Category: Statistics
[2] viXra:1005.0002 [pdf] submitted on 1 May 2010
Almost Unbiased Estimator for Estimating Population Mean Using Known Value of Some Population Parameter(s)
Authors: Rajesh Singh, Mukesh Kumar, Florentin Smarandache
Comments: 14 pages
In this paper we have proposed an almost unbiased estimator using known value of some population parameter(s). Various existing estimators are shown particular members of the proposed estimator.
Under simple random sampling without replacement (SRSWOR) scheme the expressions for bias and mean square error (MSE) are derived. The study is extended to the two phase sampling. Empirical study is
carried out to demonstrate the superiority of the proposed estimator.
Category: Algebra
[1] viXra:1005.0001 [pdf] submitted on 1 May 2010
Quantum Corrections to the Gravitational Potential and Orbital Motion
Authors: Ioannis Iraklis Haranas, Vasile Mioc
Comments: 3 pages, Submitted to the ROAJ, Vol. 20 , No. 2, 2010
GRT predicts the existence of relativistic corrections to the static Newtonian potential, which can be calculated and verified experimentally. The idea leading to quantum corrections at large
distances consists of the interactions of massless particles, which only involve their coupling energies at low energies. Using the quantum correction term of the potential we obtain the perturbing
quantum acceleration function. Next, with the help of the Newton-Euler planetary equations, we calculate the time rates of changes of the orbital elements per revolution for three different orbits
around the primary. For one solar mass primary and an orbit with semimajor axis and eccentricity equal to that of Mercury we obtain that Δω[qu] = 1.517x10^-81 o/cy, while ΔM[qu] = -1.840x10^-46 rev/cy.
Category: Relativity and Cosmology | {"url":"http://vixra.org/all/1005","timestamp":"2014-04-17T12:37:26Z","content_type":null,"content_length":"117626","record_id":"<urn:uuid:60b6e128-1227-4c7d-997b-e97d80083aaf>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alpharetta Algebra 2 Tutor
Find an Alpharetta Algebra 2 Tutor
...I have also achieved great success in math competitions, both at the high school and college levels, including a top-50 score on the prestigious Putnam Exam. I've spent the past five years
working as a coach with the Georgia state high school math team, which has ranked as high as 6th nationally...
12 Subjects: including algebra 2, calculus, statistics, geometry
...SAT math prep is particularly something I can uniquely help teach. I have taken three standardized tests - math subject test for undergrad, GRE for MS degree admission and GMAT for MBA
admission) and scored perfect 800 (99 percentile) on GRE and 780 (97 percentile) on GMAT. I can help achieve g...
11 Subjects: including algebra 2, geometry, GRE, algebra 1
...I taught myself how to use VB with database connectivity and developed several applications for Intelligent Switchgear Organization. I also used VB to develop a registration and sign in
program for the youth ministry at my church. I have a BSEE.
59 Subjects: including algebra 2, chemistry, reading, writing
...I enjoy helping others and have helped tutor friends and family in different subjects (mostly math). I took honors math classes in high school, including Algebra I & II, Geometry, Pre-Calculus
and AP Calculus. I also took several math classes in college up to Calculus II. I have experience tutoring family members in basic algebra.
14 Subjects: including algebra 2, English, reading, algebra 1
I currently teach Statistics and Physics at a private school in Atlanta and I am very skilled at presenting complex concepts to my students in a very clear and understandable manner. I
successfully tutored well over one hundred students and because of this experience my sessions are very effective....
20 Subjects: including algebra 2, calculus, geometry, physics
Nearby Cities With algebra 2 Tutor
Atlanta algebra 2 Tutors
Berkeley Lake, GA algebra 2 Tutors
Decatur, GA algebra 2 Tutors
Duluth, GA algebra 2 Tutors
Dunwoody, GA algebra 2 Tutors
Johns Creek, GA algebra 2 Tutors
Lawrenceville, GA algebra 2 Tutors
Marietta, GA algebra 2 Tutors
Milton, GA algebra 2 Tutors
Norcross, GA algebra 2 Tutors
Roswell, GA algebra 2 Tutors
Sandy Springs, GA algebra 2 Tutors
Smyrna, GA algebra 2 Tutors
Snellville algebra 2 Tutors
Woodstock, GA algebra 2 Tutors | {"url":"http://www.purplemath.com/Alpharetta_algebra_2_tutors.php","timestamp":"2014-04-18T01:11:49Z","content_type":null,"content_length":"24057","record_id":"<urn:uuid:89d05400-2849-4773-8c43-b860e5f9ad44>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by jhon on Thursday, August 16, 2012 at 2:58am.
A motorboat travels due north at a steady speed of 3.0 m/s through calm water in which there is no current. The boat then enters an area of water in which a steady current flows at 2.0 m/s in a
southwest direction, as shown in the next picture. Both the engine power and the course setting remain unchanged.
a) explain how the above paragraph gives information not only about the speed of the boat but also about its velocity.
b) draw a vector diagram showing the velocity of the boat and the velocity of the current. Use the diagram to find
i) the magnitude of the resultant velocity
ii) the angle between due north and the direction of travel of the boat.
c) calculate the distance the boat now travels in 5 mins;
d) mass of boat is 3.0 x 10^3 kg (3000 kg). Calculate the additional force that needs to be applied to give the boat an initial acceleration of 2.5 x 10^-2 m/s^2 (0.025 m/s^2).
• physics - Henry, Thursday, August 16, 2012 at 7:40pm
a. When both speed and direction are
given, we call it velocity.
b. Draw a vector from the origin pointing due north to represent the velocity of the boat.
Draw a vector from the origin at 45° south of west (225°) to represent the velocity of the current.
1. Vr = 3m/s @ 90o + 2m/s @ 225o = Resultant velocity.
X = Hor. = 2*cos225 = -1.414 m/s.
Y = Ver. = 3 + 2*sin225 = 1.586 m/s.
(Vr)^2 = X^2 + Y^2 = 2 + 2.5 = 4.5
Vr = 2.12 m/s. = Resultant velocity.
2. cosA = X/Vr = -1.414 / 2.12 = -0.66708
A = 132o. = Direction of travel of boat.
132 - 90 = 42o Between North and direction of travel of boat.
c. 5min. = 300 s.
d = Vr*t = 2.12m/s * 300s = 636 m.
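The arithmetic in this reply is easy to check with a few lines of Python. The sketch below follows the same setup (x = east, y = north, current at 225° measured from due east) and also computes part (d), which the reply leaves unanswered; the variable names are mine, not from the original post.

```python
import math

# Velocity components (x = east, y = north), in m/s.
boat = (0.0, 3.0)                        # 3.0 m/s due north
theta = math.radians(225)                # southwest, measured from due east
current = (2.0 * math.cos(theta), 2.0 * math.sin(theta))

vx = boat[0] + current[0]                # ≈ -1.414 m/s (westward)
vy = boat[1] + current[1]                # ≈ +1.586 m/s (still northward)
v = math.hypot(vx, vy)                   # i)  ≈ 2.12 m/s

# ii) Angle between due north and the direction of travel (west of north).
angle_from_north = math.degrees(math.atan2(-vx, vy))   # ≈ 42°

# c) Distance in 5 min = 300 s; ≈ 636 m if the rounded 2.12 m/s is used.
distance = v * 300.0

# d) Additional force: F = m*a = 3000 kg * 0.025 m/s^2 = 75 N.
force = 3000.0 * 0.025

print(round(v, 2), round(angle_from_north), round(distance), force)
```

The small difference from the reply's 636 m comes from using the unrounded speed rather than 2.12 m/s.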
Related Questions
Physics - a powerboat heads due northwest at 15 m/s relative to the water across...
math - A man travels 4km due north,then travels 6km due east and further travels...
Physics - The air speed indicator of a plane that took off from Detroit reads ...
Physics - A motorboat travels at a speed of 40 km/h relative to the surface of ...
PHYSICS - An automobile travels 30 km due east on a level road . It then turns ...
Physics - A powerboat heads due northwest at 11 m/s relative to the water across...
Physics - A powerboat heads due northwest at 11 m/s relative to the water across...
physics - One airplane travels due north at 300 km/h while another travels due ...
Physics - A 73kg sprinter, starting from rest, reaches a speed of 7.0 m/s in 1....
physics - It says, A plane travels 400 mi/h relative to the ground. There is a ... | {"url":"http://www.jiskha.com/display.cgi?id=1345100304","timestamp":"2014-04-21T14:00:46Z","content_type":null,"content_length":"9771","record_id":"<urn:uuid:3f6ce7f5-60aa-482e-a050-dbd86e1f29e0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: February 2003 [00257]
[Date Index] [Thread Index] [Author Index]
Re: Simple List question. HELP.
• To: mathgroup at smc.vnet.net
• Subject: [mg39418] Re: [mg39389] Simple List question. HELP.
• From: Tomas Garza <tgarza01 at prodigy.net.mx>
• Date: Fri, 14 Feb 2003 03:23:44 -0500 (EST)
• References: <200302130955.EAA20703@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
I think you needn't create an empty 2 dimensional list in the first place.
Why not start directly with your first element, i.e., the pair {0, 1}?
lst = {{0, 1}}
{{0, 1}}
The errors and beeps you get are a warning that you are using the wrong
syntax. To denote any element of a list you must use a double bracket, as in
lst[[1]] = {0, 1}
not a single bracket as you are doing. Next, when adding a new pair, don't wrap it in an extra pair of braces: {{2, c}} is not a pair, but a list with a single
element which is the pair {2, c}. Simply append the pair {2, c}:
lst = Append[lst, {2, c}]
{{0, 1}, {2, c}}
and now you obtain a 2 dimensional list with two elements, as desired. Now,
I suggest you don't use Append, which is all right if you're dealing with
small lists, but becomes extremely slow when the length of the lists
involved increase. I'd rather use, e.g.,
lst = {lst, {3, d}}
{{{0, 1}, {2, c}}, {3, d}}
and so on. There is no mystery at all. You made a mistake in using lst[1]
(with a single bracket) instead of lst[[1]], and then you made another
mistake when appending a list {{1, 2}} instead of a pair {1, 2}.
Tomas Garza
Mexico City
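[Editor's note: the same extra-bracket confusion has an exact analogue outside Mathematica. The snippet below is illustrative Python, not Mathematica code: appending the pair itself grows a 2-dimensional list correctly, while appending a list that *contains* the pair — the analogue of Append[lst, {{1, 2}}] — nests one level too deep.]

```python
# Build a list of (x, y) pairs, starting directly with the first pair.
lst = [[0, 1]]

# Appending the pair itself extends the list of pairs as intended...
lst.append([2, 3])
assert lst == [[0, 1], [2, 3]]

# ...while appending a list containing the pair nests it too deeply,
# just like Append[lst, {{1, 2}}] did in the original question.
bad = [[0, 1]]
bad.append([[1, 2]])
assert bad == [[0, 1], [[1, 2]]]   # the extra brackets survive

print(lst, bad)
```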
----- Original Message -----
From: "Daniel Heneghan" <dhenegha at bway.net>
To: mathgroup at smc.vnet.net
Subject: [mg39418] [mg39389] Simple List question. HELP.
> I am new to Mathematica. This is vexing. All I want to do is create a
> 2-dimensional list so that I can enter x,y values and then plot the
> list. I want to do this programmatically. I am having such incredible
> trouble trying to accomplish this simple task. I know that there is
> probably a Mathematica optimized way to do this, but I and trying to
> write a little program and for now I want to stay with paradigms that I
> am familiar with. Here is what I have been doing.
> Create a 2 dimensional list.
> In[532] lst={{}}
> Out[532]= {{}}
> Enter the first pair into the first place in the list.
> In[533]:= lst[1]={{0,1}}
> Errors and beeps here, but it does seem to record the correct values.
> Set::write: Tag List in {{}}[1] is Protected.
> Out[533]={{0,1}}
> Add another pair of values.
> In[534]:= lst=Append[lst,{{1,2}}]
> Out[534]= {{},{{1,2}}}
> The second pair is OK, but the first pair has been obliterated.
> Add another pair. Now all subsequent entries are OK, but I still have
> lost the first pair.
> In[535]:= lst=Append[lst,{{2,c}}]
> Out[535]= {{},{{1,2}},{{2,c}}}--
> What is going on? What are the mysteries of working with lists in
> Mathematica. In any programming language this is simple. I can't grasp
> it in Mathematica. The reason I need to do this is that for the list
> plot I need the x values to start at 0 not 1.
> Thanks,
> Daniel Heneghan
> Ceara Systems
> (212) 696-9208
> ceara at bway.net
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2003/Feb/msg00257.html","timestamp":"2014-04-20T13:47:54Z","content_type":null,"content_length":"37359","record_id":"<urn:uuid:37e935e8-aabc-4f77-b629-7867267812f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
All Tuition Notes,All Sample Papers,10 Best All Discussion Forum,Query Related to All,Online Tutorials for All Subjects, All Tutorials Online, Mathematics Technical Class Notes, Popular Mathematics tuition notes online for All [Central Board of Secondary Education],Study matrial for All [Central Board of Secondary Education] Students,All Education Notes for free
ePapers Central Board of Secondary Education All 10 Mathematics | {"url":"http://www.2classnotes.com/classnotes.asp?university=Central_Board_of_Secondary_Education&stream=All&clas=10&subject=Mathematics","timestamp":"2014-04-21T07:40:18Z","content_type":null,"content_length":"58013","record_id":"<urn:uuid:38d4994b-3175-4e0b-b71b-c296c26de026>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determine whether x^3 is O(g(x)) for each of these functions g(x).
November 18th 2013, 11:34 PM #1
Junior Member
Apr 2013
Determine whether x^3 is O(g(x)) for each of these functions g(x).
Determine whether x^3 is O(g(x)) for each of these functions g(x).
d) g(x) = x^2 + x^4
e) g(x) = 3^x
f) g(x) = x^3/2
d) Yes
e) Yes
f) No
Are my answers correct? I took the problem to mean that g(x) < c * x^3.
Re: Determine whether x^3 is O(g(x)) for each of these functions g(x).
Hey lamentofking.
I think you have the inequality the other way around. 3^x is an exponential function with infinitely many positive-power terms in its expansion, and x^2 + x^4 has a term of higher degree than x^3. f should be correct (i.e. yes).
Re: Determine whether x^3 is O(g(x)) for each of these functions g(x).
Re: Determine whether x^3 is O(g(x)) for each of these functions g(x).
Re: Determine whether x^3 is O(g(x)) for each of these functions g(x).
Could you please re-formulate this clearly? "If x^3 < c * g(x)" eventually, then x^3 = O(g(x)), nothing more to prove. Are you sure you are assuming x^3 < c * g(x)? Next, by c * g(x) do you mean
(1/4)x^3 since c = 1/2 and g(x) = (1/2)x^3? That's possible, but why not consider c = 1? Your quote above is not very clear.
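[A quick numerical sanity check of the answers — a heuristic, not a proof; an actual proof must use the definition x³ ≤ c·g(x) for all large x — is to ask whether the ratio x³/g(x) stays bounded as x grows. The helper below is illustrative and evaluates the ratio in log space so that 3^x cannot overflow; it reads f's g(x) as x^(3/2).]

```python
import math

def ratio(log_g, x):
    """x**3 / g(x), evaluated via logarithms so that 3**x cannot overflow."""
    return math.exp(3.0 * math.log(x) - log_g(x))

def looks_bounded(log_g, x_small=10.0, x_large=1e6):
    """Heuristic check that x^3 = O(g): the ratio must not blow up with x."""
    return ratio(log_g, x_large) <= 10.0 * max(ratio(log_g, x_small), 1.0)

print(looks_bounded(lambda x: math.log(x**2 + x**4)),  # d) expect True
      looks_bounded(lambda x: x * math.log(3.0)),      # e) expect True
      looks_bounded(lambda x: 1.5 * math.log(x)))      # f) expect False
```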
November 18th 2013, 11:40 PM #2
MHF Contributor
Sep 2012
November 19th 2013, 08:01 AM #3
MHF Contributor
Oct 2009
November 20th 2013, 11:01 AM #4
Junior Member
Apr 2013
November 20th 2013, 11:12 AM #5
MHF Contributor
Oct 2009 | {"url":"http://mathhelpforum.com/discrete-math/224423-determine-whether-x-3-o-g-x-each-these-functions-g-x.html","timestamp":"2014-04-19T17:51:36Z","content_type":null,"content_length":"44433","record_id":"<urn:uuid:5c420869-7905-42bf-afa7-b58089fc1be2>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of Anytime Algorithm
Most computer algorithms run to completion: they provide a single answer after performing some fixed amount of computation. In some cases, however, the user may wish to terminate the algorithm prior to completion. The amount
of the computation required may be substantial, for example, and computational resources might need to be reallocated. Most algorithms either run to completion or they provide no useful solution
information. Anytime algorithms, however, are able to return a partial answer, whose quality depends on the amount of computation they were able to perform. The answer generated by anytime algorithms
is an approximation of the correct answer. This feature of anytime algorithms is modeled by such a theoretical construction as limit Turing machine (Burgin, 1992; 2005). A limit Turing machine
provides a sequence of partial results that converge in a given topology to the final result.
An anytime algorithm may also be called an "interruptible algorithm". Anytime algorithms differ from contract algorithms, which must declare a computation time in advance; in an anytime algorithm, a process can just
announce that it is terminating.
The goal of anytime algorithms is to give intelligent systems the ability to produce results of better quality in return for longer turn-around time. They are also supposed to be flexible in time and
resources. This matters because artificial intelligence (AI) algorithms can take a long time to produce complete results, and an anytime algorithm is designed to deliver something useful in a shorter
amount of time. Anytime algorithms are also intended to reflect the understanding that the system is dependent on, and restricted to, its agents and how they work cooperatively. An example is Newton's
iteration applied to finding the square root of a number. Another example is a trajectory problem in which you're aiming at a target.
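A minimal sketch of the square-root example, written as a Python generator (the names and the interruption scheme here are illustrative, not taken from any particular system): each step yields the best estimate so far, so a caller can stop at any time and still hold a usable partial answer, and letting it run longer only improves the result.

```python
def anytime_sqrt(n, max_iters=50, tol=1e-12):
    """Newton's iteration for sqrt(n): yields the best estimate so far
    after every step, so the caller may interrupt at any time."""
    x = n if n > 1.0 else 1.0            # crude initial guess
    for _ in range(max_iters):
        x = 0.5 * (x + n / x)            # one Newton step
        yield x                          # partial result, improves each step
        if abs(x * x - n) < tol:
            break

# Interrupted early: stop after only 3 iterations.
gen = anytime_sqrt(2.0)
rough = [next(gen) for _ in range(3)][-1]

# Allowed to run to convergence.
precise = list(anytime_sqrt(2.0))[-1]

print(rough, precise)
```

Stopping early gives a rough but serviceable answer; letting the generator run to exhaustion gives full floating-point precision.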
What makes anytime algorithms unique is their ability to return many possible outcomes for any given input. An anytime algorithm uses many well-defined quality measures to monitor progress in
problem solving and to distribute computing resources. It keeps searching for the best possible answer in the amount of time it is given. It may not run to completion, and it may improve the
answer if it is allowed to run longer. This is often used for problems with large decision sets. An algorithm without the anytime property, by contrast, would generally not provide useful information
unless it is allowed to finish. While this may sound similar to dynamic programming, the difference is that an anytime algorithm is fine-tuned through random adjustments rather than sequentially.
interruptible algorithm. Another goal of anytime algorithms is to maintain the last result, so that if they are given more time, they can continue calculating a more accurate result.
Make an algorithm with a parameter that influences running time. For example, as time increases, this variable also increases. After a period of time, the search is stopped even if the goal has not
been met. This is similar to Jeopardy when the time runs out. The
contestants have to represent what they believe is the closest answer, although they may not know it or come even close to figuring out what it could be. This is similar to an hour long test.
Although the test questions are not in themselves limited in time, the test must be completed within the hour. Likewise, the computer has to figure out how much time and resources to spend on each question.
Decision Trees
When the decider has to act, there must be some ambiguity, and there must be some idea about how to resolve this ambiguity. This idea must be translatable to a state-to-action diagram.
Performance Profile
The performance profile estimates the quality of the results based on the input and the amount of time that is allotted to the algorithm. The better the estimate, the sooner the result would be
found. Some systems have a larger database that gives the probability that the output is the expected output. It is important to note that one algorithm can have several performance profiles. Most of
the time, performance profiles are constructed by applying mathematical statistics to representative cases. For example, in the traveling salesman problem, the performance profile was generated using
a user-defined special program to generate the necessary statistics. In this example, the performance profile is the mapping of time to the expected results. This quality can be measured in several ways:
• certainty: where probability of correctness determines quality
• accuracy: where error bound determines quality
• specificity: where the amount of particulars determine quality
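As a made-up miniature of such a profile: run a simple iterative solver — here bisection for the positive root of x² − 2 — on a representative input and record, for each allotted step count, the resulting error. The mapping from time budget to expected quality is exactly the kind of table a performance profile stores (all names and numbers below are illustrative).

```python
def bisect_error(steps, lo=0.0, hi=2.0):
    """Error of bisection for the positive root of x^2 - 2
    after the given number of interval halvings."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if mid * mid < 2.0:
            lo = mid
        else:
            hi = mid
    return abs(0.5 * (lo + hi) - 2.0 ** 0.5)

# A toy performance profile: allotted steps -> expected error.
profile = {steps: bisect_error(steps) for steps in (1, 5, 10, 20)}
print(profile)
```

A scheduler could consult such a table to decide whether a few more steps are worth the extra time.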
Algorithm Prerequisites
• Initial behavior: While some algorithms start with immediate guesses, others take a more calculated approach and have a start-up period before making any guesses.
• Growth direction: How the quality of the program's "output" or result, varies as a function of the amount of time ("run time")
• Growth rate: Amount of increase with each step. Does it change constantly, such as in a bubble sort or does it change unpredictably?
• End condition: The amount of runtime needed
• Anytime Algorithm http://tarono.wordpress.com/2007/03/20/anytime-algorithm
• http://www.acm.org/crossroads/xrds3-1/racra.html
Further reading
• Boddy, M, Dean, T. 1989. Solving Time-Dependent Planning Problems. Technical Report: CS-89-03, Brown University
• Burgin, M. Multiple computations and Kolmogorov complexity for such processes, Notices of the Academy of Sciences of the USSR, 1983, v. 27, No. 2 , pp. 793-797
• Burgin M., Universal limit Turing machines, Notices of the Russian Academy of Sciences, 325, No. 4, (1992), 654-658
• Burgin, M. Super-recursive algorithms, Monographs in computer science, Springer, 2005
• Grass, J., and Zilberstein, S. 1996. Anytime Algorithm Development Tools. SIGART Bulletin (Special Issue on Anytime Algorithms and Deliberation Scheduling) 7(2)
• Michael C. Horsch and David Poole, An Anytime Algorithm for Decision Making under Uncertainty, In Proc. 14th Conference on Uncertainty in Artificial Intelligence (UAI-98), Madison, Wisconsin,
USA, July 1998, pages 246-255.
• E.J. Horvitz. Reasoning about inference tradeoffs in a world of bounded resources. Technical Report KSL-86-55, Medical Computer Science Group, Section on Medical Informatics, Stanford University,
Stanford, CA, March 1986
• Wallace, R., and Freuder, E. 1995. Anytime Algorithms for Constraint Satisfaction and SAT Problems. Paper presented at the IJCAI-95 Workshop on Anytime Algorithms and Deliberation Scheduling, 20
August, Montreal, Canada.
• Zilberstein, S. 1993. Operational Rationality through Compilation of Anytime Algorithms. Ph.D. diss., Computer Science Division, University of California at Berkeley.
• Shlomo Zilberstein, Using Anytime Algorithms in Intelligent Systems, AI Magazine, 17(3):73-83, 1996 | {"url":"http://www.reference.com/browse/Anytime+Algorithm","timestamp":"2014-04-19T01:58:40Z","content_type":null,"content_length":"84720","record_id":"<urn:uuid:64b0c671-ad28-47d2-b3b7-3421fd25249e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
High School Chemistry/Schrodinger's Wave Functions
In the last lesson, you learned that electrons and, in fact, all objects with mass, have wavelike properties. It might be tempting to visualize matter waves as being just like ocean waves, or waves
in a puddle, but it turns out that matter waves are special. Unlike ocean waves or puddle waves, matter waves are "trapped" in space and, as a result, can never die out, escape, or disappear. If you
think carefully, you'll realize that this isn't true of most other waves with which you are familiar. You can form waves in a puddle by stirring the puddle with a stick. When you do, what you'll
notice is that the waves you create actually move from your stick out to the edge of the puddle, where they disappear. As long as you disturb the puddle with your stick, the puddle will have waves in
it. But as soon as you leave the puddle alone, the surface of the puddle will become as calm as glass. Matter waves aren't like that. Unlike puddle waves, which eventually die out as they escape from
the puddle, matter waves never do, because matter waves don't move. As a result, they are forever trapped in the matter that holds them. We'll talk more about these special matter waves in the next
Lesson ObjectivesEdit
• Distinguish between traveling and standing waves.
• Explain why electrons form standing waves, and what this means in terms of their energies.
• Define an electron wave function and electron density and relate these terms to the probability of finding an electron at any point in space.
An Electron is Described as a Standing WaveEdit
Most of the waves that you're probably familiar with are known as traveling waves, because they travel or move. When you're sitting on your surfboard, trying to catch a good wave, you'll often look
out to sea in the hopes of spotting a "big one" (Figure 6.3). When you finally do, you know that even though the big wave may be quite a distance off, it will eventually arrive at your surfboard and
carry you in to shore. This, of course, implies that ocean waves are traveling waves because they actually move through the water. Similarly, if you're in the stands watching the Oakland A's play
ball, you might find yourself jumping up and cheering as "the wave" passes through the stadium (Figure 6.4). Again, this is an example of a traveling wave, because it moves from fans at one end of
the stadium to fans at the other. There are, however, special waves that stay in one spot. Scientists call these waves standing waves.
In an earlier part of this text, a wave was described in which a rope was tied to a tree and a person jerked the other end of the rope up and down to create a wave in the rope. When a wave travels
down a rope and encounters an immoveable boundary (like a tree), the wave reflects off the boundary and travels back up the rope. This causes interference to occur between the wave traveling toward
the tree and the reflected wave traveling back toward the person. If the person adjusts the rhythm of their hand just right, they can arrange for the crests and troughs of the wave moving toward the
tree to exactly coincide with the crests and troughs of the reflected wave. When this occurs, the apparent horizontal motion of the wave ceases and the wave appears to "stand" in the same place in
the rope. This is called a standing wave. In such a case, the crests and troughs will remain in the same places and nodes will appear between the crests and troughs where the rope does not appear to
move at all.
In the standing wave shown in Figure 6.5, the positions of the crests and troughs remain in the same positions. The crests and troughs appear to exchange places above and below the center line of the
rope. The flat places where the rope crosses the center axis line are called nodes (positions of zero displacement). These nodal positions do not change. Traveling waves appear to travel, and
standing waves appear to stand still.
Even though standing waves don't move themselves, they are actually composed of traveling waves that do. Standing waves form when two traveling waves traveling in opposite directions at the same
speed combine or run into each other. In today's lab, you'll learn how to create standing waves in a jump rope by feeding traveling waves into the jump rope from opposite directions. Even though a
standing wave doesn't move, it can still "die out". As soon as the traveling waves that form a standing wave disappear, so does the standing wave itself. You'll see this first hand in the jump rope
experiment. When you stop flicking the jump rope, the jump rope slackens and the standing waves are gone.
Why, then, are standing waves often associated with "trapped" waves, or waves that never die out? The connection between standing waves and trapped waves isn't a misconception or a misunderstanding.
It turns out that standing waves almost always form when traveling waves are "trapped" in a small region of space. Imagine what would happen if you took a whole train of traveling waves, locked them
up and threw them into jail. Those traveling waves would probably go crazy running around the jail cell trying to escape. No matter how hard they tried, though, they'd always end up hitting the jail
cell walls. As a result, the poor waves would bounce back and forth and back and forth from one end of the jail cell to the other. Now, if there were several traveling waves trapped in the same jail
cell at the same time, one set of waves would end up bouncing off of the left wall, at the same time (and speed) as another set of waves was bouncing off of the right wall. This, of course, is
exactly what's required to set up a "standing wave" (two waves traveling in opposite directions at the same speed).
The electron waves that you learned about in the last lesson form standing waves as a result of being trapped inside the atom. What do you think might imprison an electron wave inside an atom? The
answer, of course, is that electrons are trapped because they are strongly attracted to the protons in the nucleus. Using the laws of physics to describe the forces of attraction between electrons
and protons, scientists can figure out the size and shape of any electron's jail cell. Amazingly, by knowing the size and shape of an electron's jail cell, scientists can tell you what a particular
electron standing wave will look like.
Frequently, rather than using words to describe an electron standing wave, scientists use what's known as an electron wave function. Wave functions for electrons, first developed by a man named Erwin
Schrödinger, are mathematical expressions that describe the magnitude or "height" of an electron standing wave at every point in space. Now, let's discuss electron energy, which is another important
electron property that can be explained and predicted by electron standing waves and their associated wave functions.
Each Wave Function has an Allowed Energy Value
Electrons form standing waves whenever they're trapped inside an atom, and thus in order to understand and predict electron behavior, it's important to understand electron standing waves. One of the
most important properties that electron standing waves can help to predict is electron energy. The energy of an electron in any atom depends on the size and shape of the electron's standing wave when
it's trapped inside that atom. As a result, scientists can use the wave function, or the mathematical description of an electron's standing wave, to figure out how much energy that electron has.
While wave functions are helpful in predicting the amount of energy an electron has, they are also helpful in predicting the amount of energy an electron is allowed to have. In any confined space,
like a box, a jail cell, or an atom, only certain standing waves are possible. Why? In order to exist, a standing wave must begin at one side of the box and end at the other. Waves that either don't
begin where the box begins, or don't end where the box ends aren't allowed. Figure 6.6 shows several allowed standing waves and several forbidden standing waves. Notice that if the wave doesn't "fit"
perfectly inside the box, it isn't allowed.
Now here's the really strange thing about describing electrons as standing waves. Since only certain standing waves will fit perfectly inside an atom, electrons trapped in that atom can only have
certain electron wave functions with certain electron energies. In other words, the standing wave picture accounts for the fact that some energy values are "allowed" (energy values associated with
standing waves that "fit" perfectly inside the atom) while others are "forbidden" energy values (energy values associated with standing waves that do not "fit" perfectly inside the atom). That's
exactly what Bohr said when he developed his model to explain atomic spectra! Bohr said that electrons could exist at specific "allowed" energy levels, but that they couldn't exist between those
energy levels. Bohr, however, did not have an explanation for why only certain energy levels were allowed. Remarkably, the standing wave description of electrons predicts quantized electron energies
just like the Bohr model!
When we represent electrons inside an atom, quantum mechanics requires that the wave must "fit" inside the atom so that the wave meets itself with no overlap; that is, the "electron wave" inside the
atom must be a standing wave. If the wave is to be arranged in the form of a circle so that it attaches to itself, the waves can only occur if there is a whole number of waves in the circle.
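The whole-number condition above can be written compactly; this is the standard de Broglie-style relation (the symbols $r$ for the orbit radius, $\lambda$ for the wavelength, and $n$ for the number of wavelengths are a conventional sketch, not notation from this text):

```latex
% A circular standing wave must meet itself with no overlap, so the
% circumference must hold a whole number of wavelengths:
2\pi r = n\lambda, \qquad n = 1, 2, 3, \ldots
```

Any wavelength that does not satisfy this relation for some whole number $n$ corresponds to a "forbidden" wave that does not fit.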
The standing wave on the left in Figure 6.7 exactly fits in the electron cloud and hence represents an "allowed" energy level whereas the standing wave on the right does not fit in the electron cloud
and therefore is not an "allowed" energy level. There are only certain energies (frequencies) for which the wavelength fits exactly to form a standing wave. These are the same energy levels the Bohr
model suggested but now there is a reason for why electrons may have only these energies.
Max Born and Probability Patterns
There are very few scientists, if any, who can visualize the behavior of an electron during chemical bonding or chemical reactions as standing waves. When chemists are asked to describe the behavior
of an electron during a chemical change, they do not describe the mathematical equations of quantum mechanics nor do they discuss standing waves. The behavior of electrons in chemical reactions is
best understood in terms of a particle.
Erwin Schrödinger's wave equation for matter waves is similar to known equations for other wave motions in nature. The equation describes how a wave associated with an electron varies in space as the
electron moves under various forces. Schrödinger worked out the solutions of his equation for the hydrogen atom and the results agreed with the Bohr values for the energy levels of these atoms.
Furthermore, the equation could be applied to more complicated atoms. It was found that Schrödinger's equation gave a correct description of an electron's behavior in almost all cases. In spite of
the overwhelming success of the wave equation in describing electron energies, the very meaning of the waves was vague.
A physicist named Max Born was able to attach some physical significance to the mathematics of quantum mechanics. Born used data from Schrödinger's equation to show the probability of finding the
electron, as a particle, at the point in space for which Schrödinger's equation was solved. Born's ideas allowed chemists to visualize the results of the wave equation as probability patterns for
electron positions.
Suppose we had a camera with such a fast shutter speed that it could take a photo of an electron in an electron cloud and show it frozen in position. We could then take a thousand pictures of this
electron at different times and find it in many different positions in the atom. We could then plot all the different electron positions on one picture.
Figure 6.8 shows the result of plotting many different positions of a single electron in the electron cloud of a hydrogen atom. One way of looking at this picture is as an indication of the
probability of where you are likely to find the electron in this atom. You must recognize, of course, that the dots are not electrons; this atom has only one electron. The dots are positions where
the electron can be found at different times. From this picture, it is clear that the electron spends more time near the nucleus than it does far away. As you move away from the nucleus, the
probability of finding the electron becomes less and less. It is also important to note that there is no boundary for this electron cloud. That is, there is no distance from the nucleus where the
probability becomes zero.
For much of the work we will be doing with atoms, it is convenient to have a boundary for the atom. Most often, chemists choose some distance from the nucleus beyond which the probability of finding
the electron becomes very low and arbitrarily draw in a boundary for the atom. Frequently, the boundary is placed such that 90% or 95% of the probability for finding the electron is inside the
boundary (Figure 6.9).
Most of the time, we will be looking at drawings of atoms that show an outside boundary for the electron cloud. You should keep in mind, however, that the boundary is there for our convenience and
there is no actual boundary on an atom; that is, the probability of finding the electron never becomes zero. This probability plot is very simple because it is for the first electron in an atom. As
the atoms become more complicated (more energy levels and more electrons), the probability plots also become more complicated.
Lesson Summary
• There are two types of waves – traveling waves that move from one place to another, and standing waves that are stationary. Standing waves are formed when two traveling waves traveling in
opposite directions at the same speed combine. Electrons in atoms form standing waves because they are trapped by the attractive forces that exist between their negative charges and the positive
charges on the protons in the atom's nucleus. These attractive forces determine the shape and size of the electron's standing wave.
•   Mathematical expressions called wave functions are used to describe an electron's standing wave in an atom. The energy of an electron in any atom depends on the size and shape of the electron's
standing wave. The wave function can be used to determine the energy of an electron when it is trapped inside an atom.
•   Electrons in atoms are only allowed to have certain energy levels (i.e., those which correspond to standing waves that "fit perfectly" inside the atom). All other electron energies are
forbidden. The probability patterns for electrons (electron density) show the probability of finding the electron at a given point.
Review Questions
1. Choose the correct word in each of the following statements.
(a) The (more/less) electron density at a given location within the atom the more likely you are to find the electron there.
(b) If there is no electron density at a particular point in space, there is (no/a high) chance of finding the electron there.
(c) The higher the probability of finding an electron in a certain spot, the (more/less) electron density there will be at that spot.
2. The hydrogen ion, H^+ has no electrons. What is the total amount of electron density in a hydrogen atom?
3. Decide whether each of the following statements is true or false.
(a) Only certain electron standing waves are allowed in any particular atom.
(b) Only certain electron energies are allowed in any particular atom.
4. The name for the mathematical expression used to describe an electron standing wave is ________.
5. Choose the correct statement.
(a) Einstein first developed the method of describing electron standing waves with wave functions
(b) Planck first developed the method of describing electron standing waves with wave functions
(c) de Broglie first developed the method of describing electron standing waves with wave functions
(d) Schrödinger first developed the method of describing electron standing waves with wave functions
6. Circle all of the statements below which are correct.
(a) The wave function description of electrons predicts that electrons orbit the nucleus just like planets orbit the sun.
(b) The wave function description of electrons predicts that electron energies are quantized
(c) The Bohr model of the atom suggests that electron energies are quantized.
7. Fill in the blanks.
(a) Since only certain values are allowed for the energy of an electron in an atom, we say that electron energies are _________.
(b) Allowed electron energies correspond to ______________ that fit perfectly in the atom.
8. Forbidden electron energies correspond to electron standing waves that ________ in the atom.
electron density
The square of the wave function for the electron, it is related to the probability of finding an electron at a particular point in space.
electron wave function
A mathematical expression to describe the magnitude, or "height" of an electron standing wave at every point in space.
standing waves
Waves that do not travel, or move. They are formed when two traveling waves, moving in opposite directions at the same speed run into each other and combine.
traveling waves
Waves that travel, or move.
Last modified on 14 September 2011, at 18:54
symmetric relation
A (binary) relation $\sim$ on a set $A$ is symmetric if any two elements that are related in one order are also related in the other order:
$\forall (x, y: A),\; x \sim y \;\Rightarrow\; y \sim x$
In the language of the $2$-poset-with-duals Rel of sets and relations, a relation $R: A \to A$ is symmetric if it is contained in its reverse:
$R \subseteq R^{op}$
In that case, this containment is in fact an equality.
Revised on August 24, 2012 20:04:06 by
Urs Schreiber
Cottage City, MD Algebra 2 Tutor
Find a Cottage City, MD Algebra 2 Tutor
...Each year I look forward to seeing the students do well in the course and often employ the use of metaphors, examples, and models in order to make chemistry less mysterious and ensure
understanding of the material. My knowledge of the material allows me to explain the information simply yet adeq...
6 Subjects: including algebra 2, chemistry, algebra 1, prealgebra
...We will also go over test taking strategies for mastering the CAT. The GMAT is a tricky test and adult students about to enter business school are called upon to remember mathematics they
learned in pre-algebra, algebra and geometry. I can help you achieve your business school dream by reviewing the math you need to know and only the math you need to know.
12 Subjects: including algebra 2, geometry, GRE, ASVAB
...Most of the students were not native English speakers. Despite this, the ones who actually went through with taking the test passed it. I have also done more limited tutoring for the chemistry
SOL test working with the Jefferson Labs software package in a Fairfax County volunteer setting.
13 Subjects: including algebra 2, chemistry, calculus, physics
...If you have any questions or would like to see my full resume, please feel free to contact me. I hope to hear from you soon!Algebra 1 is one of my favorite and most popular tutoring subjects.
In addition to my B.A. in mathematics, I have been tutoring Algebra 1 through Calculus for about 5 years with great success.
17 Subjects: including algebra 2, English, reading, calculus
...I can provide a transcript of verification if necessary. I have also tutored several students on this subject in the past. Additionally, while getting my PhD in Physics at the University of
Florida, I frequently had homework assignments where this subject was extensively used.
13 Subjects: including algebra 2, chemistry, physics, SAT math
all possible combinations of an array
01-10-2008 #1
Registered User
Join Date
Jan 2008
all possible combinations of an array
I have a small question about arrays. I have tried to find a way for getting all possible combinations of an array with just one dimension. I have spent some days to get the solution, without any
progress. Could someone help me with my problem. Or at least help me on the way.
Thanks in advance. And if its really really obvious sorry for this dumb question.
What does "all possible combinations of an array" mean?
if you have an array for instance 1,2,3,4 I want to have the program make the following combinations:
1+2 , 1+3 , 1+4
1+2+3 , 1+2+4
2+3 , 2+4 ,etc.
The way I like to think of this problem is the "odometer" method: Imagine an odometer in binary with four digits (one digit for each item in your array). So it would start as follows:
If the i'th digit is one, include that element, otherwise don't. So the first combination would be empty; the next would have just the fourth element; the next would have just the third, and so
on. (Notice that this is the same way that you generate the rows of a truth table, for instance; it's the same problem.)
you mean somthing like:
int data[ 4 ] = { 1, 2, 3, 4 };
std::cout << "Sum is: " << data[ 0 ] + data[ 1 ] + data[ 2 ] + data[ 3 ] << std::endl;
I am only guessing here
I'm just trying to be a better person - My Name Is Earl
thank you for your replies,
tabstop your idea was great thank you. I tried to make a code that makes that binary odometer, it works partly, because after the 5 digits it gives rubbish information. The problem is that he
has to go up to 32000000 digits. Is it possible with this program structure, if not could someone tell me how it has to be done? thank you in advance.
here is the code:
#include <iostream>
#include <cmath>
using namespace std;
int main()
long getal;
double kopie=getal;
long hoi=pow(2,kopie);
bool tabel[getal][hoi];
for(int n=0;n<=hoi;n++)
for(int t=0;t<=getal;t++)
for(int n=0;n<=(hoi);n++)
if (n==0)
for(int t=0;t<=getal;t++)
for(int ok=0, h=0;ok==0;)
if (tabel[h][n]==false)
for(int n=0;n<=hoi;n++)
for(int t=0;t<=(getal-1);t++)
char stop;
sorry for the big main but I am not that good in programming and I hadn't that much time to put everything in functions
Last edited by hoppy; 01-10-2008 at 03:13 PM.
Even if you try with 5 items, you get some things printed out, but they don't make sense. I don't quite follow what your loop is trying to do?
the first loop is filling the array with false,
the second loop makes the odometer,
and the third loop translates the bool chars to 1 and 0
here the program works up to 5 items after 5 I get rubbish.
for the second loop:
first he looks to the last made row of digits and copies it.
then he put true on the first spot, if there's already true he changes this true in false and goes to the next spot. etc.
A hint is that the binary "odometer" is just a number being incremented by one. It probably won't be very efficient - but I guess efficiency is not the concern here - but you might just use an
integer and check which bits are set. std::bitset might be handy for that.
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
Your bounds checks are off; you should change <= to < almost everywhere (remember: a 5-element array only has elements through a[4]).
thnx tabstop I'll look if I can make something with this
32000000 digits? 32000000 elements in the array you want all combinations of?
Well, if you ever get your program working, I'll see you next decade.
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
From Wikipedia, the free encyclopedia
Gravitation, or gravity, is a natural phenomenon by which all physical bodies attract each other. It is most commonly recognized and experienced as the agent that gives weight to physical objects,
and causes physical objects to fall toward the ground when dropped from a height.
It is hypothesized that the gravitational force is mediated by a massless spin-2 particle called the graviton. Gravity is one of the four fundamental forces of nature, along with electromagnetism and the strong and weak nuclear forces. Colloquially, gravitation is a force of attraction that acts between all physical objects with mass or energy. In modern physics,
gravitation is most accurately described by the general theory of relativity proposed by Einstein, which asserts that the phenomenon of gravitation is a consequence of the curvature of spacetime. In
pursuit of a theory of everything, the merging of general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity has become an area of active
research. Newton's law of universal gravitation postulates that the gravitational force between two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. It provides an accurate approximation for most physical situations, including spacecraft trajectories. Newton's three laws of motion lay down the foundations for classical mechanics and describe how bodies move under forces such as gravity.
During the grand unification epoch, gravity separated from the electronuclear force. Gravity is the weakest of the four fundamental forces, and appears to have unlimited range (unlike the strong or
weak force). The gravitational force is approximately 10^−38 times the strength of the strong force (i.e., gravity is 38 orders of magnitude weaker), 10^−36 times the strength of the electromagnetic
force, and 10^−29 times the strength of the weak force. As a consequence, gravity has a negligible influence on the behavior of sub-atomic particles, and plays no role in determining the internal
properties of everyday matter. On the other hand, gravity is the dominant force at the macroscopic scale, that is the cause of the formation, shape, and trajectory (orbit) of astronomical bodies,
including those of asteroids, comets, planets, stars, and galaxies. It is responsible for causing the Earth and the other planets to orbit the Sun; for causing the Moon to orbit the Earth; for the
formation of tides; for natural convection, by which fluid flow occurs under the influence of a density gradient and gravity; for heating the interiors of forming stars and planets to very high
temperatures; for solar system, galaxy, stellar formation and evolution; and for various other phenomena observed on Earth and throughout the universe. This is the case for several reasons: gravity
is the only force acting on all particles with mass; it has an infinite range; always attractive and never repulsive; and cannot be absorbed, transformed, or shielded against. Even though
electromagnetism is far stronger than gravity, electromagnetism is not relevant to astronomical objects, since such bodies have an equal number of protons and electrons that cancel out (i.e., a net
electric charge of zero).
History of gravitational theory
Scientific revolution
Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal^1) experiment dropping balls from the
Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitation accelerates all objects at the same rate. This was a major departure from Aristotle
's belief that heavier objects accelerate faster.^2 Galileo postulated air resistance as the reason that lighter objects may fall slower in an atmosphere. Galileo's work set the stage for the
formulation of Newton's theory of gravity.
Newton's theory of gravitation
In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, “I deduced that the forces which keep the
planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the
force of gravity at the surface of the Earth; and found them answer pretty nearly.”^3
Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets.
Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of
A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely
under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein
's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit.
Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than
general relativity, and gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.
Equivalence principle
The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way. The simplest way to test
the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum, and see if they hit the ground at the same time. These experiments demonstrate that all objects
fall at the same rate when friction (including air resistance) is negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are
planned for more accurate experiments in space.^4
Formulations of the equivalence principle include:
• The weak equivalence principle: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition.^5
• The Einsteinian equivalence principle: The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in
• The strong equivalence principle requiring both of the above.
General relativity
In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free
fall with inertial motion, and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.^7^8 In Newtonian physics, however, no such acceleration
can occur unless at least one of the objects is being operated on by a force.
Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like
Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing
because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along the geodesics in spacetime is considered
Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10
simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The
geodesic paths for a spacetime are calculated from the metric tensor.
Notable solutions of the Einstein field equations include:
The tests of general relativity included the following:^9
• General relativity accounts for the anomalous perihelion precession of Mercury.^10
• The prediction that time runs slower at lower potentials has been confirmed by the Pound–Rebka experiment, the Hafele–Keating experiment, and the GPS.
• The prediction of the deflection of light was first confirmed by Arthur Stanley Eddington from his observations during the Solar eclipse of May 29, 1919.^11^12 Eddington measured starlight
deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. However, his interpretation of the results was later disputed.^13 More
recent tests using radio interferometric measurements of quasars passing behind the Sun have more accurately and consistently confirmed the deflection of light to the degree predicted by general
relativity.^14 See also gravitational lens.
• The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals.
• Gravitational radiation has been indirectly confirmed through studies of binary pulsars.
• Alexander Friedmann in 1922 found that Einstein equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927 Georges Lemaître showed that static solutions
of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static universe envisioned by Einstein could not exist. Later, in 1931,
Einstein himself agreed with the results of Friedmann and Lemaître. Thus general relativity predicted that the Universe had to be non-static—it had to either expand or contract. The expansion of
the universe discovered by Edwin Hubble in 1929 confirmed this prediction.^15
• The theory's prediction of frame dragging was consistent with the recent Gravity Probe B results.^16
• General relativity predicts that light should lose its energy when travelling away from the massive bodies. The group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen
collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to
Gravity and quantum mechanics
In the decades after the discovery of general relativity it was realized that general relativity is incompatible with quantum mechanics.^18 It is possible to describe gravity in the framework of
quantum field theory like the other fundamental forces, such that the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from
exchange of virtual photons.^19^20 This reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length,^18 where a more
complete theory of quantum gravity (or a new approach to quantum mechanics) is required.
Earth's gravity
Every planetary body (including the Earth) is surrounded by its own gravitational field, which exerts an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of
this field at any given point is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
The strength of the gravitational field is numerically equal to the acceleration of objects under its influence, and its value at the Earth's surface, denoted g, is expressed below as the standard
average. According to the Bureau International des Poids et Mesures' International System of Units (SI), the Earth's standard acceleration due to gravity is:
g = 9.80665 m/s^2 (32.1740 ft/s^2).^21^22
This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an
object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to
each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.
According to Newton's 3rd Law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also
accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the
object's. If the object doesn't bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity
and prevents further acceleration.
The force of gravity on Earth is the resultant (vector sum) of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force,
which results from the choice of an earthbound, rotating frame of reference. At the equator, the force of gravity is the weakest due to the centrifugal force caused by the Earth's rotation. The force
of gravity varies with latitude and becomes stronger as you increase in latitude toward the poles. The standard value of 9.80665 m/s^2 is the one originally adopted by the International Committee on
Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand.^23 This value has persisted in meteorology and in some standard
atmospheres as the value for 45° latitude even though it applies more precisely to latitude of 45°32'33".^24
Equations for a falling body near the surface of the Earth
Under an assumption of constant gravity, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of
9.81 m/s^2. The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed
time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1/20 of a second the ball drops one unit of distance (here,
a unit is about 12 mm); by 2/20 it has dropped a total of 4 units; by 3/20, 9 units; and so on.
Under the same constant gravity assumptions, the potential energy, E_p, of a body at height h is given by E_p = mgh (or E_p = Wh, with W meaning weight). This expression is valid only over small
distances h from the surface of the Earth. Similarly, the expression $h = \tfrac{v^2}{2g}$ for the maximum height reached by a vertically projected body with initial velocity v is useful for small
heights and small initial velocities only.
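The falling-body relations in this section (v = gt, the t-squared distance law, E_p = mgh, and h = v^2/2g) can be checked numerically; a minimal sketch, assuming the standard g quoted earlier:

```python
G_STANDARD = 9.80665  # m/s^2, the standard acceleration due to gravity

def velocity_after(t):
    """Speed in m/s of a body falling from rest for t seconds (no air resistance): v = g*t."""
    return G_STANDARD * t

def distance_fallen(t):
    """Distance in m dropped from rest after t seconds: d = (1/2)*g*t^2."""
    return 0.5 * G_STANDARD * t ** 2

def potential_energy(mass, height):
    """E_p = m*g*h, valid only for small heights near the Earth's surface."""
    return mass * G_STANDARD * height

def max_height(v0):
    """Maximum height h = v0^2 / (2*g) for a body projected straight up at v0."""
    return v0 ** 2 / (2 * G_STANDARD)

print(velocity_after(1))                        # 9.80665, as stated above
print(velocity_after(2))                        # 19.6133, i.e. about 19.62 m/s
print(distance_fallen(2) / distance_fallen(1))  # 4.0 -- distance grows as t^2
```

The last line reproduces the stroboscopic observation above: doubling the elapsed time quadruples the distance fallen.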
Gravity and astronomy
The discovery and application of Newton's law of gravity accounts for the detailed information we have about the planets in our solar system, the mass of the Sun, the distance to stars, quasars and
even the theory of dark matter. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured
characteristics of the orbit. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit Galactic Centers, galaxies orbit a center of mass
in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to
the square of the distance between them.
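As an illustration of the inverse-square law just stated, a small sketch recovers the surface acceleration of roughly 9.8 m/s^2 quoted earlier. The constant G and the Earth's mass and radius are standard textbook values, not taken from this article:

```python
G = 6.674e-11        # gravitational constant, N*m^2/kg^2 (standard value)
M_EARTH = 5.972e24   # mass of the Earth in kg (standard value)
R_EARTH = 6.371e6    # mean radius of the Earth in m (standard value)

def gravitational_force(m1, m2, r):
    """Newton's law of gravitation: F = G*m1*m2 / r^2, directly proportional
    to the product of the masses and inversely proportional to the square
    of the distance between them."""
    return G * m1 * m2 / r ** 2

# Surface acceleration g = F/m for a 1 kg test mass: about 9.82 m/s^2,
# close to the measured standard value of 9.80665 m/s^2.
g = gravitational_force(1.0, M_EARTH, R_EARTH)
print(round(g, 2))  # 9.82
```

The small difference from 9.80665 m/s^2 is expected, since the measured value also folds in the centrifugal effect of the Earth's rotation discussed above.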
Gravitational radiation
In general relativity, gravitational radiation is generated in situations where the curvature of spacetime is oscillating, such as is the case with co-orbiting objects. The gravitational radiation
emitted by the Solar System is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR B1913+16. It is
believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as the Laser Interferometer
Gravitational Wave Observatory (LIGO) have been created to study the problem. No confirmed detections have been made of this hypothetical radiation, but as the science behind LIGO is refined and as
the instruments themselves are endowed with greater sensitivity over the next decade, this may change.
Speed of gravity
In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal
to the speed of light.^25 The team's findings were released in the Chinese Science Bulletin in February 2013.^26
Anomalies and discrepancies
There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
• Extra fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter.
Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact gravitationally but not electromagnetically, would account for the discrepancy. Various modifications to
Newtonian dynamics have also been proposed.
• Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not
homogeneous (due to clusters of galaxies) and that when the data are reinterpreted to take this into account, the expansion is not speeding up after all;^27 however, this conclusion is disputed.^28
• Extra energetic photons: Photons travelling through galaxy clusters should gain energy and then lose it again on the way out. The accelerating expansion of the universe should stop the photons
returning all the energy, but even taking this into account photons from the cosmic microwave background radiation gain twice as much energy as expected. This may indicate that gravity falls off
faster than inverse-squared at certain distance scales.^29
• Extra massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may
indicate that gravity falls off slower than inverse-squared at certain distance scales.^29
1. ^ Ball, Phil (June 2005). "Tall Tales". Nature News. doi:10.1038/news050613-10.
2. ^ Galileo (1638), Two New Sciences, First Day Salviati speaks: "If this were what Aristotle meant you would burden him with another error which would amount to a falsehood; because, since there
is no such sheer height available on earth, it is clear that Aristotle could not have made the experiment; yet he wishes to give us the impression of his having performed it when he speaks of
such an effect as one which we see."
3. ^ Chandrasekhar, Subrahmanyan (2003). Newton's Principia for the common reader. Oxford: Oxford University Press. (pp.1–2). The quotation comes from a memorandum thought to have been written
about 1714. As early as 1645 Ismaël Bullialdus had argued that any force exerted by the Sun on distant objects would have to follow an inverse-square law. However, he also dismissed the idea that
any such force did exist. See, for example, Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. p. 225. ISBN
4. ^ M.C.W.Sandford (2008). "STEP: Satellite Test of the Equivalence Principle". Rutherford Appleton Laboratory. Retrieved 2011-10-14.
5. ^ Paul S Wesson (2006). Five-dimensional Physics. World Scientific. p. 82. ISBN 981-256-661-9.
6. ^ Haugen, Mark P.; C. Lämmerzahl (2001). Principles of Equivalence: Their Role in Gravitation Physics and Experiments that Test Them. Springer. arXiv:gr-qc/0103067. ISBN 978-3-540-41236-6.
7. ^ "Gravity and Warped Spacetime". black-holes.org. Retrieved 2010-10-16.
8. ^ Dmitri Pogosyan. "Lecture 20: Black Holes—The Einstein Equivalence Principle". University of Alberta. Retrieved 2011-10-14.
9. ^ Pauli, Wolfgang Ernst (1958). "Part IV. General Theory of Relativity". Theory of Relativity. Courier Dover Publications. ISBN 978-0-486-64152-2.
10. ^ Max Born (1924), Einstein's Theory of Relativity (The 1962 Dover edition, page 348 lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury,
Venus, and Earth.)
11. ^ Dyson, F.W.; Eddington, A.S.; Davidson, C.R. (1920). "A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919".
Phil. Trans. Roy. Soc. A 220 (571–581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009.. Quote, p. 332: "Thus the results of the expeditions to Sobral and Principe can leave
little doubt that a deflection of light takes place in the neighbourhood of the sun and that it is of the amount demanded by Einstein's generalised theory of relativity, as attributable to the
sun's gravitational field."
12. ^ Weinberg, Steven (1972). Gravitation and cosmology. John Wiley & Sons.. Quote, p. 192: "About a dozen stars in all were studied, and yielded values 1.98 ± 0.11" and 1.61 ± 0.31", in substantial
agreement with Einstein's prediction θ[☉] = 1.75"."
13. ^ Earman, John; Glymour, Clark (1980). "Relativity and Eclipses: The British eclipse expeditions of 1919 and their predecessors". Historical Studies in the Physical Sciences 11: 49–85. doi:
14. ^ Weinberg, Steven (1972). Gravitation and cosmology. John Wiley & Sons. p. 194.
15. ^ See W.Pauli, 1958, pp.219–220
16. ^ "NASA's Gravity Probe B Confirms Two Einstein Space-Time Theories". Nasa.gov. Retrieved 2013-07-23.
17. ^ Bhattacharjee, Yudhijit. "Galaxy Clusters Validate Einstein's Theory". News.sciencemag.org. Retrieved 2013-07-23.
18. ^ ^a ^b Randall, Lisa (2005). Warped Passages: Unraveling the Universe's Hidden Dimensions. Ecco. ISBN 0-06-053108-8.
19. ^ Feynman, R. P.; Morinigo, F. B., Wagner, W. G., & Hatfield, B. (1995). Feynman lectures on gravitation. Addison-Wesley. ISBN 0-201-62734-5.
20. ^ Zee, A. (2003). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 0-691-01019-6.
21. ^ Bureau International des Poids et Mesures (2006). "Chapter 5". The International System of Units (SI). 8th ed. Retrieved 2009-11-25. "Unit names are normally printed in roman (upright) type ...
Symbols for quantities are generally single letters set in an italic font, although they may be qualified by further information in subscripts or superscripts or in brackets."
22. ^ "SI Unit rules and style conventions". National Institute For Standards and Technology (USA). September 2004. Retrieved 2009-11-25. "Variables and quantity symbols are in italic type. Unit
symbols are in roman type."
23. ^ List, R. J. editor, 1968, Acceleration of Gravity, Smithsonian Meteorological Tables, Sixth Ed. Smithsonian Institution, Washington, D.C., p. 68.
24. ^ U.S. Standard Atmosphere, 1976, U.S. Government Printing Office, Washington, D.C., 1976. (Linked file is very large.)
25. ^ Chinese scientists find evidence for speed of gravity, astrowatch.com, 12/28/12.
26. ^ TANG, Ke Yun; HUA ChangCai, WEN Wu, CHI ShunLiang, YOU QingYu, YU Dan (February 2013). "Observational evidences for the speed of the gravity based on the Earth tide". Chinese Science Bulletin
58 (4-5): 474–477. doi:10.1007/s11434-012-5603-3. Retrieved 12 June 2013.
27. ^ Dark energy may just be a cosmic illusion, New Scientist, issue 2646, 7th March 2008.
28. ^ Swiss-cheese model of the cosmos is full of holes, New Scientist, issue 2678, 18th October 2008.
29. ^ ^a ^b Chown, Marcus (16 March 2009). "Gravity may venture where matter fears to tread". New Scientist (2699). Retrieved 4 August 2013.
• Halliday, David; Robert Resnick; Kenneth S. Krane (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 0-471-32057-9.
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
Waltham, MA Geometry Tutor
Find a Waltham, MA Geometry Tutor
...While taking graduate physics courses at SUNY Albany, I was the Lab Instructor for the undergraduate Physics I course, and a Teaching Assistant and Grader for undergraduate Physics II and
Electricity and Magnetism courses. I also tutored individuals and groups in Physics I and II. During this t...
25 Subjects: including geometry, calculus, physics, algebra 1
...If you learn it well, it will serve you well for a long time to come. It is a crucial skill to have, and I'd make sure you do have it. I have a solid understanding of calculus gained through
earning a PhD in physics and from learning mathematics in a profound way.
47 Subjects: including geometry, chemistry, reading, calculus
...I started teaching again in 2008 and have been steadily growing my own studio of students. I have been salsa dancing since 2003 and I have been teaching salsa since 2006. I enjoy going out
salsa dancing at various venues and I have performed at a few places.
13 Subjects: including geometry, English, writing, algebra 1
I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is
a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including geometry, calculus, statistics, algebra 1
My name is Derek H. and I recently graduated from Cornell University's College of Engineering with a degree in Information Science, Systems, and Technology. I have a strong background in Math,
Science, and Computer Science. I currently work as software developer at IBM.
17 Subjects: including geometry, statistics, algebra 1, economics
Backgammon - Object of the Game
The object of the game is for each player to bring all his checkers into his home board, and then to bear them off the board. The first player to clear all his checkers off the board is the winner.
Playing the Game
Backgammon is a game for two players, played on a board of twenty-four narrow triangles called points. Each player has fifteen stones of one color (light or dark) that are placed along the board's 24
points. Points alternate in color and are grouped into four quadrants of six points each. Quadrants are referred to as a player's home board and outer board. The board is divided in half by a center
partition called the bar. All points on a backgammon board are distinguished by numbers. A player's outermost point is the twenty-four point, which is also his opponent's one point. A doubling cube,
with the numbers 2, 4, 8, 16, 32, and 64, is used to keep track of the current stake of the game.
To start the game, each player rolls a single dice. This determines both the player to go first and the numbers to be played. If equal numbers come up, then both players roll again until they roll
different numbers. The player who throws the highest number moves first according to the number displayed on the dice. After the first roll, the players throw both dice and alternate turns. The roll
of the dice indicates how many points (or pips) a player can move his stones. Stones are always moved forward, to a lower-numbered point. The following rules apply: A stone can only be moved to an
open point (one not occupied by two or more opposing stones).
The numbers on the two dice constitute separate moves. For example, if a player rolls 5 and 3, he may move one stone five spaces to an open point and another stone three spaces to an open point, or
he may move the one stone a total of eight spaces to an open point, but only if the intermediate point (either three or five spaces from the starting point) is also open. A player who rolls doubles
plays the numbers shown on the dice twice. A roll of 6 and 6 means that the player has four sixes to use, and he may move any combination of stones he feels appropriate to complete this move. A
player must use both numbers of a roll if legally possible (and all four numbers of a double). When only one number can be played, the player must play that number. If either number can be played,
but not both, a player must play the higher one. When neither number can be used, a player loses his turn. In the case of doubles, when all four numbers can't be played, a player must play as many
numbers as he can.
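The dice rules above are mechanical enough to express directly. A minimal sketch (function names are my own, not from any backgammon library):

```python
def moves_from_roll(die1, die2):
    """Pip moves granted by a roll: the two numbers count as separate moves;
    doubles (e.g. 6 and 6) are played twice, giving four moves."""
    if die1 == die2:
        return [die1] * 4
    return [die1, die2]

def point_is_open(opposing_stones_on_point):
    """A stone may only be moved to an open point, i.e. one not occupied
    by two or more opposing stones (a single opposing stone is a blot)."""
    return opposing_stones_on_point < 2

print(moves_from_roll(5, 3))  # [5, 3]
print(moves_from_roll(6, 6))  # [6, 6, 6, 6]
print(point_is_open(1))       # True  -- a lone blot can be landed on (and hit)
print(point_is_open(2))       # False -- the point is blocked
```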
Hitting and Entering
A point occupied by a single stone of either color is called a blot. If an opposing stone lands on a blot, the blot is hit and placed on the bar. Anytime a player has one or more stones on the bar,
his first obligation is to enter that stone(s) into the opposing home board. A stone is entered by moving it to an open point corresponding to one of the numbers on the rolled dice. For example, if a
player rolls 4 and 6, he may enter a stone onto either the opponent's four point or six point, so long as the prospective point is not occupied by two or more of his opponent's stones. If neither of
the points is open, the player loses his turn. If a player is able to enter some but not all of his stones, he must enter as many as he can and then forfeit the remainder of his turn. After the last
of a player's stones has been entered, any unused numbers on the dice must be played.
Bearing Off
Once a player has moved all of his fifteen stones into his home board, he can begin bearing off. A player bears off a stone, by rolling a number that corresponds to the point on which the stone
resides, and then removing that stone from the board. If there is no stone on the point indicated by the roll, the player must make a legal move using a stone on a higher-numbered point. If there are
no stones on the higher-numbered points, the player can remove a stone from the next highest point. A player is under no obligation to bear off if he can make an otherwise legal move. A player must
have all of his active stones in his home board in order to bear off. If a stone is hit during the bear-off process, the player must bring that stone back to his home board before continuing to bear
The Doubling Cube
Backgammon is played for an agreed wager (or number of points in the tournament play). During the course of the game, a player who feels he has a sufficient advantage may propose doubling his stakes.
He may do so, only at the start of his turn, and before he has rolled the dice. A player who is offered a double may refuse, in which case he concedes the game and pays the original wager. Otherwise,
he must accept the double and play on for the new higher stakes. A player who accepts a double becomes the owner of the cube and only he may make the next double. Subsequent doubles in the same game
are called redoubles. If a player refuses a redouble, he must pay the wager that was at stake prior to the redouble. Otherwise, he becomes the new owner of the cube and the game continues at twice
the previous stakes. Redoubles can increase the stakes up to 64 times the original wager.
Playing with beavers
An optional rule in Single Game Mode which says that when a player is doubled, he may immediately redouble (beaver) while retaining possession of the doubling cube. The original doubler has the
option of accepting or refusing as with a normal double.
Jacoby Rule
The Jacoby Rule makes gammons and backgammons count for their respective double and triple points only if there has been at least one use of the doubling cube in the game. This encourages a player
with a large lead in a game to double, and thus likely end the game, rather than see the game out to its conclusion in hopes of a gammon or backgammon. The Jacoby Rule is widely used in money play,
but is not used in match play.
Crawford Rule
The Crawford Rule makes match play much more fair for the player in the lead. If a player is one point away from winning a match, his opponent has no reason not to double; after all, a win in the
game by the player in the lead would cause him to win the match regardless of the doubled stakes, while a win by the opponent would benefit twice as much if the stakes are double. Thus there is no
advantage towards winning the match to being one point shy of winning, if one's opponent is two points shy!
To remedy this situation, the Crawford Rule requires that when a player becomes one single point short of winning the match, neither player may use the doubling cube for a single game, called the
Crawford Game. As soon as the Crawford Game is over, any further games use the doubling cube normally.
Not quite as universal as the Jacoby Rule, the Crawford Rule is widely used and generally assumed to be in effect for match play.
Automatic doubles
When automatic doubles are used, any re-rolls that players must make at the very start of a game (when each player rolls one die) have the side-effect of causing a double. Thus, a 3-3 roll, followed
by a re-roll of 5-5, followed by a re-roll of 1-4 that begins the game in earnest, will cause the game to be played from the start with 4-times normal stakes. The doubling cube stays in the middle,
with both players having access to it. The Jacoby Rule is still in effect.
Automatic doubles are common in money games (upon agreement). They are never used in match play.
A known variant: the same rules, except that a roll of 6-6 triples rather than doubles the stakes.
Gammons and Backgammons
At the end of the game, if the losing player has borne off at least one stone, he loses only the value showing on the doubling cube (the original wager or one point if there have been no doubles).
However, if the loser has not borne off any of his stones, he is gammoned and loses twice the value of the doubling cube. Moreover, if the loser has not borne off any of his stones and still has a
stone on the bar or in the winner's home board, he is backgammoned and loses three times the value of the doubling cube.
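The scoring just described reduces to a simple rule: the loser pays the cube value, doubled for a gammon and tripled for a backgammon. A small sketch, with illustrative names:

```python
def points_lost(cube_value, stones_borne_off, stone_on_bar_or_home):
    """Stake the loser pays: the cube value for a plain loss; doubled for a
    gammon (no stones borne off); tripled for a backgammon (no stones borne
    off and a stone still on the bar or in the winner's home board)."""
    if stones_borne_off == 0 and stone_on_bar_or_home:
        return 3 * cube_value   # backgammon
    if stones_borne_off == 0:
        return 2 * cube_value   # gammon
    return cube_value           # single loss

print(points_lost(4, 5, False))  # 4  -- plain loss at a redoubled stake
print(points_lost(4, 0, False))  # 8  -- gammon
print(points_lost(4, 0, True))   # 12 -- backgammon
```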
HAL - IN2P3 :: [in2p3-00740410, version 1] Convergence properties of ab initio calculations of light nuclei in a harmonic oscillator basis
We study recently proposed ultraviolet and infrared momentum regulators of the model spaces formed by construction of a variational trial wavefunction which uses a complete set of many-body basis
states based upon three-dimensional harmonic oscillator (HO) functions. These model spaces are defined by a truncation of the expansion characterized by a counting number ($\mathcal{N}$) and by the
intrinsic scale ($\hbar\omega$) of the HO basis; in short by the ordered pair ($\mathcal{N},\hbar\omega$). In this study we choose for $\mathcal{N}$ the truncation parameter $N_{max}$ related to the
maximum number of oscillator quanta, above the minimum configuration, kept in the model space. The ultraviolet (uv) momentum cutoff of the continuum is readily mapped onto a defined uv cutoff in this
finite model space, but there are two proposed definitions of the infrared (ir) momentum cutoff inherent in a finite-dimensional HO basis. One definition is based upon the lowest momentum difference
given by $\hbar\omega$ itself and the other upon the infrared momentum which corresponds to the maximal radial extent used to encompass the many-body system in coordinate space. Extending both the uv
cutoff to infinity and the ir cutoff to zero is prescribed for a converged calculation. We calculate the ground state energy of light nuclei with "bare" and "soft" $NN$ interactions. By doing so, we
investigate the behaviors of the uv and ir regulators of model spaces used to describe $^2$H, $^3$H, $^4$He and $^6$He with $NN$ potentials Idaho N$^3$LO and JISP16. We establish practical procedures
which utilize these regulators to obtain the extrapolated result from sequences of calculations with model spaces characterized by ($\mathcal{N},\hbar\omega$).
A question about Dominators
From: "Robert Sherry" <rsherry8@home.com>
Newsgroups: comp.compilers
Date: 15 Dec 2001 00:38:53 -0500
Organization: Excite@Home - The Leader in Broadband http://home.com/faster
Keywords: analysis, books, question
Posted-Date: 15 Dec 2001 00:38:52 EST
The following paragraph is from the book Advanced Compiler Design
Implementation by Steven S. Muchnick. I can be found on page 182.
We give two approaches to computing the set of dominators of each node
in a flowgraph. The basic idea of the first approach is that node a
dominates node b if and only if a=b or a is the unique immediate predecessor
of b or b has more than one immediate predecessor and for all immediate
predecessors c of b, c is not equal to a and a dominates c.
I believe that the above statement is incorrect. Please consider the
following flowgraph.
Nodes{ a, b, c, d, e }
Edges{ (a,c), (a,d), (c,e), (d,e), (e,b) )
a is the start node
In this case, a dominates b. However, it violates the if and only if given
above since b has a unique predecessor. I believe the correct statement
would be:
The basic idea of the first approach is that node a dominates
node b if and only if a=b or a is the unique immediate predecessor of
b or for all immediate predecessors c of b, c is not equal to a and a
dominates c.
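For what it is worth, the standard iterative data-flow formulation, Dom(start) = {start} and Dom(n) = {n} together with the intersection of Dom(p) over the predecessors p of n, applied to the flowgraph above does give a in Dom(b). A quick sketch in Python, not tied to any particular compiler framework:

```python
def dominators(nodes, edges, start):
    """Iterative data-flow computation of dominator sets:
    Dom(start) = {start}; Dom(n) = {n} | intersection of Dom(p) over preds p.
    Sets start full and shrink until a fixed point is reached."""
    preds = {n: set() for n in nodes}
    for u, v in edges:
        preds[v].add(u)
    dom = {n: set(nodes) for n in nodes}
    dom[start] = {start}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == start:
                continue
            new = {n}
            if preds[n]:
                inter = set(nodes)
                for p in preds[n]:
                    inter &= dom[p]
                new |= inter
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# The flowgraph from the post: a is the start node.
dom = dominators({'a', 'b', 'c', 'd', 'e'},
                 [('a', 'c'), ('a', 'd'), ('c', 'e'), ('d', 'e'), ('e', 'b')],
                 'a')
print(sorted(dom['b']))  # ['a', 'b', 'e'] -- a dominates b, as claimed
```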
Please comment.
Robert Sherry
The largest prime factor of the specific number.
August 30th, 2013, 04:50 AM #1
It's my first post, so I would like to say hello to the Java community. I'm absolutely new to programming and because of my young age (15) I have many problems understanding some aspects of it
(especially advanced maths, logical thinking etc.). Nevertheless I think that with appropriate determination I can achieve success in this field. Lastly, I live in Poland, so I may sometimes
have problems with correct and "intelligent" English, but I am convinced that with your help my knowledge will grow.
Coming to the point: recently I tried to write a simple program which should find the largest prime factor of the number 600851475143.
import java.util.Arrays;
import java.math.BigInteger;
public class PrimeFactor {
public static void main(String[] args) {
BigInteger[] Array = new BigInteger[i];
BigInteger number = new BigInteger("600851475143");
BigInteger number2 = new BigInteger("0");
BigInteger number3 = new BigInteger("0");
BigInteger PrimeFactor = new BigInteger ("0");
boolean select;
int i = 0;
while(PrimeFactor < number){
if(number%PrimeFactor == 0){
PrimeFactor = Array[i];
BigInteger largest = new BigInteger();
largest = Array[0];
for(int x=1; x<Array.length; x++) largest = max(Array[x],largest);
I know that the program has many trivial errors (for example, in
while(PrimeFactor < number){
the "<" operator cannot be used to compare BigInteger values), but I don't know what to use instead of it.
I will be grateful for your help.
Welcome to the forum
Where exactly are you stuck?
I hope you understand the two core terms in your problem: "prime" and "factor".
If yes, then what is your strategy to find the largest prime factor of a given number? In your code you are not checking whether a factor is prime or not.
Use the tips below to write an efficient program:
- The largest prime factor of a given number 'n' is always <= 'n/2'.
- And you may be okay if you use long instead of bigint.
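Following the tips above, trial division with plain integer arithmetic is enough. Here is the strategy sketched in Python for brevity (the same loop translates directly to Java using `long`); it uses the tighter f*f <= n bound, and whatever cofactor remains at the end is itself prime:

```python
def largest_prime_factor(n):
    """Trial division: repeatedly strip each factor starting from 2.
    Every factor found this way is automatically prime, because all
    smaller primes have already been divided out of n."""
    largest = 1
    f = 2
    while f * f <= n:
        while n % f == 0:
            largest = f
            n //= f
        f += 1
    if n > 1:          # whatever is left over is a prime factor itself
        largest = n
    return largest

print(largest_prime_factor(600851475143))  # 6857
```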
Don't be shy about your English. It's better than I've seen from most of the native speakers. Just be sure to describe specifically what you need help with and ask specific questions. We've
learned that it's a waste of time to guess and offer advice about something the poster understands perfectly.
{"url":"http://www.javaprogrammingforums.com/whats-wrong-my-code/31225-largest-prime-factor-specific-number.html","timestamp":"2014-04-20T00:40:24Z","content_type":null,"content_length":"59581","record_id":"<urn:uuid:6619b7bd-76f5-4483-bc0d-4b20d999e11b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: C Library Functions (3) Updated: 24 April 1991
im_exp10tra, im_exptra, im_expntra, im_expntra_vec, im_log10tra, im_logtra, im_powtra, im_powtra_vec - logarithmic, exponential and power transform of an image
#include <vips/vips.h>
int im_expntra(in, out, base)
IMAGE *in, *out;
double base;
int im_expntra_vec(in, out, n, vec)
IMAGE *in, *out;
int n;
double *vec;
int im_exp10tra(in, out)
IMAGE *in, *out;
int im_exptra(in, out)
IMAGE *in, *out;
int im_log10tra(in, out)
IMAGE *in, *out;
int im_logtra(in, out)
IMAGE *in, *out;
int im_powtra(in, out, exponent)
IMAGE *in, *out;
double exponent;
int im_powtra_vec(in, out, n, vec)
IMAGE *in, *out;
int n;
double *vec;
Each of the above functions maps in through a log or anti-log function of some sort and writes the result to out. The size and number of bands are unchanged, the output type is float, unless the
input is double, in which case the output is double. Non-complex images only!
im_expntra(3) transforms element x of input, to pow(base, x) in output. It detects division by zero, setting those pixels to zero in the output. Beware: it does this silently!
im_expntra_vec(3) works as im_expntra(), but lets you specify a constant per band.
im_exp10tra(3) transforms element x of input, to pow(10.0, x) in output. Internally, it is defined in terms of im_expntra().
im_exptra(3) transforms element x of input, to pow(e, x) in output, where e is the mathematical constant. Internally, it is defined in terms of im_expntra().
im_log10tra(3) transforms element x of input, to log10(x) in output.
im_logtra(3) transforms element x of input, to log(x) in output.
im_powtra(3) transforms element x of input, to pow(x, exponent) in output. It detects division by zero, setting those pixels to zero in the output. Beware: it does this silently!
im_powtra_vec(3) works as im_powtra(3), but lets you specify a constant per band.
None of the functions checks for under/overflow. Overflow is very common for many of these functions!
Each function returns 0 on success and -1 on error.
im_add(3), im_multiply(3), im_subtract(3), im_lintra(3), im_absim(3), im_mean(3), im_max(3).
N. Dessipris - 24/04/1991
J. Cupitt (rewrite) - 21/7/93
This document was created by man2html, using the manual pages.
Time: 21:48:17 GMT, April 16, 2011 | {"url":"http://www.makelinux.net/man/3/I/im_exp10tra","timestamp":"2014-04-17T12:35:45Z","content_type":null,"content_length":"11192","record_id":"<urn:uuid:0c4d4972-28f4-4de5-a1d7-0f10f7951c79>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: bar graph axis color- frustrated
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
From Fred Wolfe <fwolfe@arthritis-research.org>
To statalist@hsphsun2.harvard.edu
Subject Re: st: bar graph axis color- frustrated
Date Fri, 11 Jun 2010 08:16:25 -0500
I hope I can explain it. First, let me say that I think I can
figure out ways to do this by reorganizing the data. But the time
involved in doing this is (at least for me) too long.
Consider the example below:
This is a special case of a general problem. Special because in this
instance it is a results data set that was made by copying results
from Stata to a new data set.
In this instance, each named line represents a different variable. The
code for one side of the graph is:
use raoadotplotdata, clear
graph dot (asis) age - all ,ascat ylab(.5 [.1] 1) exclude0
schem(bw) ytit(Concordance coefficient) xsize(4) ysize(5.3)
Now, what I would really like to be able to do is to not use "by" and
have both types of symbols (RA and OA) be displayed on each line. If I
simply do over(), the groups are placed far apart.
So, in general, I would like a simple way to manage multiple variables
and to display the by group or over group results on the same line or
immediately below the same line.
I have looked at stripplot, the manual, and Michael's book to no avail.
It is usually that when I come to conclusions like this there is a
simple solution that I have overlooked. I hope it is the case now.
On Fri, Jun 11, 2010 at 6:24 AM, Nick Cox <n.j.cox@durham.ac.uk> wrote:
> Fred has raised similar issues before, but I am still fuzzy about what
> the precise problem is.
> Perhaps Fred could expand on the nitty-gritty of his difficulties with a
> small realistic data example or two.
> My guess is that the right kind of solution might not be a hit to
> Stata's graphics, but some helper commands that prepare datasets to be
> fed to the graphics.
> Nick
> n.j.cox@durham.ac.uk
> Fred Wolfe
> As an aside, both Michael's book and the manual spend much time in
> graph bar and dot with over(). I usually have multiple variables with
> overlapping groups. Over() doesn't help. I do wish that Stata might
> address that very common problem (at least for me).
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Fred Wolfe
National Data Bank for Rheumatic Diseases
Wichita, Kansas
NDB Office +1 316 263 2125 Ext 0
Research Office +1 316 686 9195
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2010-06/msg00623.html","timestamp":"2014-04-20T18:41:21Z","content_type":null,"content_length":"11678","record_id":"<urn:uuid:ce1bb585-efc9-4b5f-ac32-171b52738e96>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Random int64 and float64 numbers
Charles R Harris charlesr.harris@gmail....
Thu Nov 5 17:36:40 CST 2009
On Thu, Nov 5, 2009 at 4:26 PM, David Warde-Farley <dwf@cs.toronto.edu>wrote:
> On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
> > Interesting thread, which leaves me wondering two things: is it
> > documented
> > somewhere (e.g., at the IEEE site) precisely how many *decimal*
> > mantissae
> > are representable using the 64-bit IEEE standard for float
> > representation
> > (if that makes sense);
> IEEE-754 says nothing about decimal representations aside from how to
> round when converting to and from strings. You have to provide/accept
> *at least* 9 decimal digits in the significand for single-precision
> and 17 for double-precision (section 5.6). AFAIK implementations will
> vary in how they handle cases where a binary significand would yield
> more digits than that.
I believe that was the argument for the extended precision formats. The given number of decimal digits is sufficient to recover the same float that produced them if a slightly higher precision is used in the conversion.
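The round-trip guarantee under discussion is easy to check directly. A small Python sketch (Python floats are IEEE-754 binary64, so 17 significant decimal digits always suffice, while 16 sometimes do not):

```python
def roundtrips(x: float, digits: int) -> bool:
    """True if writing x with `digits` significant decimal digits and
    parsing the string back recovers exactly the same double."""
    return float(f"{x:.{digits - 1}e}") == x

x = 0.1 + 0.2              # the double 0.30000000000000004
print(roundtrips(x, 16))   # False: 16 digits collapse it to 0.3
print(roundtrips(x, 17))   # True: 17 digits are always enough

# Spot-check "17 is always sufficient" on a batch of random doubles.
import random
rng = random.Random(0)
assert all(roundtrips(rng.uniform(-1e6, 1e6), 17) for _ in range(10_000))
```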
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046446.html","timestamp":"2014-04-19T07:21:57Z","content_type":null,"content_length":"4217","record_id":"<urn:uuid:9e636df0-4467-40ce-a412-d599b244f8fa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunnyvale, CA Science Tutor
Find a Sunnyvale, CA Science Tutor
...While I was obtaining it, I spent many hours helping my fellow students understand the material we learned in class. I would also love explaining the science I had learned to my friends.
(Teaching them about subjects like organic chemistry or physiology to name a few.) However, it takes more t...
24 Subjects: including organic chemistry, ACT Science, anatomy, philosophy
...At first, I started helping my friends with their classes in math, physics, and chemistry. As I continued working with them, they kept telling me things such as "You should really consider
being a teacher," "You are really good at explaining things," "Your explanation is so clear," or "I would h...
11 Subjects: including chemistry, physics, geometry, calculus
...I am a clear communicator and a patient teacher. I have a Masters degree in Biological Sciences as well as undergraduate degrees the related fields of exercise physiology and psychology with
an emphasis in biology. I have also taught high school Biology or Biology Honors for 9 years as well as tutoring in Biology, Biology Honors and AP Biology.
16 Subjects: including physics, zoology, botany, genetics
...During practical coursework, I try and instill curiosity to try traditional as well as innovative methods to get the same results and ultimately tie back to the underlying theories
underscoring experiments. I’m also an advocate for studying in groups, where students can share ideas, notes and re...
1 Subject: physics
I am a Yale graduate with a diverse background and major in Film Studies. I have taught English literature and writing courses for a full academic year and two summers. I have spent a year at a
college prep school, tutoring in a variety of subjects (English, Spanish, Biology, Chemistry, Physics, A...
20 Subjects: including physics, chemistry, ACT Science, psychology
Related Sunnyvale, CA Tutors
Sunnyvale, CA Accounting Tutors
Sunnyvale, CA ACT Tutors
Sunnyvale, CA Algebra Tutors
Sunnyvale, CA Algebra 2 Tutors
Sunnyvale, CA Calculus Tutors
Sunnyvale, CA Geometry Tutors
Sunnyvale, CA Math Tutors
Sunnyvale, CA Prealgebra Tutors
Sunnyvale, CA Precalculus Tutors
Sunnyvale, CA SAT Tutors
Sunnyvale, CA SAT Math Tutors
Sunnyvale, CA Science Tutors
Sunnyvale, CA Statistics Tutors
Sunnyvale, CA Trigonometry Tutors
Nearby Cities With Science Tutor
Campbell, CA Science Tutors
Cupertino Science Tutors
Fremont, CA Science Tutors
Hayward, CA Science Tutors
Los Altos Science Tutors
Los Altos Hills, CA Science Tutors
Menlo Park Science Tutors
Mountain View, CA Science Tutors
Palo Alto Science Tutors
Pleasanton, CA Science Tutors
Redwood City Science Tutors
San Jose, CA Science Tutors
San Mateo, CA Science Tutors
Santa Clara, CA Science Tutors
Union City, CA Science Tutors | {"url":"http://www.purplemath.com/Sunnyvale_CA_Science_tutors.php","timestamp":"2014-04-17T01:30:22Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:75de4aa7-a150-4d75-9a7a-c8ae07b98e0e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00215-ip-10-147-4-33.ec2.internal.warc.gz"} |
Questions on how the SYZ conjecture is deduced from the HMS conjecture.
The Strominger-Yau-Zaslow conjecture is roughly the following. Any Calabi-Yau $m$-manifold $X$ admits a special Lagrangian $T^m$ fibration (perhaps near a special point in its complex moduli space), and a mirror partner $Y$ is obtained by dualizing the tori $T^m$ with "instanton corrections" coming from singular fibers.
If my memory serves, one of the heuristics for the SYZ conjecture comes from Kontsevich's homological mirror symmetry conjecture: the moduli space of skyscraper sheaves $\mathcal{O}_y$ (the easiest B-branes) on $Y$ is $Y$ itself, and there should be a corresponding moduli space of A-branes on $X$. An A-brane is a pair $(L,c)$ of a Lagrangian submanifold $L\subset X$ and a flat $U(1)$ connection $c$ on $L$. By computing cohomology groups $HF^*(L,L)\cong Ext(\mathcal{O}_y,\mathcal{O}_y)\cong H^*(T^m)$, we expect that our $L$ is a Lagrangian $T^m$. My first question is
How do you expect that $L$ is a special Lagrangian manifold?
If we assume that $L$ is special Lagrangian, then its deformation space is known and is of dimension $3$, and we expect that $L$ sweeps over $X$. My second question is
Does the flat $U(1)$ connection $c$ on $L$ play any role in this story?
All we "deduced" from the HMS conjecture is that $X$ admits a special Lagrangian fibration. Can we say anything more? Since the HMS conjecture doesn't say anything about the construction of mirror manifolds, I am afraid that we cannot say anything about dualizing these $L=T^m$ etc. My third question is
What is the A-brane object on $X$ that corresponds to the obvious $6$-brane $Y$?
calabi-yau ag.algebraic-geometry sg.symplectic-geometry
1 Answer
For your first question, you can see this answer
What is geometric intuition of special Lagrangian manifolds?
for some idea of how the special condition may be natural from the point of view of homological mirror symmetry.
For your second question, if we grant this, then the local system is a natural addition, because we can naively expect that $HF((L,c),(L,c))$ is also $H^*(T^3)$. Meanwhile, as you have said, we expect that there is only a three dimensional family of special Lagrangians, so we need more objects to correspond to the six dimensional family of skyscraper sheaves. It is therefore natural to allow the $U(1)$ flat connections on $L$ and expect that they also correspond to points in our mirror.
For your third question, the answer is that line bundles on the mirror should correspond to sections of the Lagrangian torus fibration. Normally, we fix a section to begin with and just
declare that that goes to the structure sheaf $O_Y$. One motivation for this idea is a similar naive reasoning with Exts as the one you mention in the question. Namely $Ext(O_Y, O_y)$ should
be isomorphic to $\mathbb{C}$, so we expect that our Lagrangian hits each fiber once. A nice case to consider is that of the elliptic curve. If you examine the functor constructed in
Polishchuk and Zaslow's paper on the elliptic curve, you will see that this is in fact how mirror symmetry works in this case, namely points will correspond to local systems over (0,1)
curves and line bundles will correspond to (1,n) curves.
{"url":"http://mathoverflow.net/questions/118175/questions-on-how-syz-conjectures-is-deduced-from-hms-conjeture/118179","timestamp":"2014-04-20T01:48:25Z","content_type":null,"content_length":"53050","record_id":"<urn:uuid:db07323c-4631-4dce-a788-2c3cf7a659c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: another disaster
2/7/2012 2:10:57 PM
jay102
Posts: 197
Joined: 8/15/2005
Status: offline

My two cents about the leaky CAP issue. Of course the code needs some polish, but as the dev team indicated, an "easy fix" probably brings more undesired ramifications. Generally speaking, leaky CAP (though it needs some further tweaking) is at least better than the uber CAP of WitP, when an invincible KB steamrolled everything in the early game and an invincible TF58 steamrolled everything in the late game.
Secondly, it is a rule of thumb of the Pacific war that one should always keep carriers away from major enemy airbases. The leaky CAP issue may exaggerate the punishment for whoever goes against it; however, it still works fairly well in accordance with reality.
(in reply to JohnDillworth)
Post #: 6571
2/7/2012 3:26:20 PM
Posts: 382
Joined: 3/11/2004
From: Austria
Status: offline

ORIGINAL: jay102
My two cents about the leaky CAP issue. Of course the code needs some polish, but as the dev team indicated, an "easy fix" probably brings more undesired ramifications. Generally speaking, leaky CAP (though it needs some further tweaking) is at least better than the uber CAP of WitP, when an invincible KB steamrolled everything in the early game and an invincible TF58 steamrolled everything in the late game.
Secondly, it is a rule of thumb of the Pacific war that one should always keep carriers away from major enemy airbases. The leaky CAP issue may exaggerate the punishment for whoever goes against it; however, it still works fairly well in accordance with reality.
While leaky CAP is a good invention, as there will always be some leakiness, the CAP problem is not just leakiness. Basically, against an opponent who knows the rules you have not just a fleet problem. Instead you have a generic problem.
Depending on the speed of your game, most players will run into that potential problem during the Marianas invasion and/or the Philippines invasion. You need your carriers to invade if you do not want to use ultra-cheesy tactics.
Either both sides exploit it, which leads to a multitude of problems, or you just create a sufficient HR to deal with it.
(in reply to jay102)
Post #: 6572
2/7/2012 4:57:36 PM
Panther Bait
Posts: 506
Joined: 8/30/2006
Status: offline

I am not sure how a player can absolutely control the size of his strikes other than to a) limit himself to one raid per area with up to the max # of planes, or b) fly more than the max but at separate targets (nav strike would count as a target, I guess). Limiting a player's options (especially nav strike) isn't the best choice, but as long as it was even, I guess it might work. You'd probably need some sort of CAP limit as well.
Another way to help control the size of strikes (but not guarantee against a large strike) might be to employ voluntary stacking limits, particularly at larger air bases. Since one of the recommendations to "coordinate" strikes is to limit the number of airbases you stage from, it seems that spreading out your planes, especially your bombers, would decrease your chances at coordinated strikes and limit the size of an individual strike. There's probably still the chance that all the dice rolls would line up and a massive strike would launch, but hopefully that would be rare.
Of course, someone would have to do a fair bit of testing to see what stacking limits for large bases works best at keeping strikes small enough that the model can sort of
handle them.
< Message edited by Panther Bait -- 2/7/2012 9:17:24 PM >
When you shoot at a destroyer and miss, it's like hit'in a wildcat in the ass with a banjo.
Nathan Dogan, USS Gurnard
(in reply to beppi)
Post #: 6573
2/7/2012 5:10:12 PM
Cribtop
Posts: 3416
Joined: 8/10/2008
From: Lone Star
Status: offline

I concur that some sort of stacking limit on air groups may be the only real way to accomplish this goal. You will still have some problems just cruising past the numerous Japanese bases on Honshu, but that may be somewhat realistic.
Think about how stacking limits will impact both players' defense against airfield and strategic bombing attacks (i.e. a limit of "no more than X groups per airfield" may be too simple, as it is legit to stack interceptors over key industrial sites). Thus, perhaps the stacking limit should only pertain to escorts set for long range and strike aircraft groups.
In the end, only a code fix can truly solve this. IMHO, if possible they should remove any hard cap on passes (or make it so large as to be effectively infinite) and key the number of passes to the units of time available before the strike is over the target. This may or may not be possible given the code, but there appears to be some "time to target" mechanic, so I'm hopeful it's possible.
Follow my latest AAR as I do battle with our resident author Cuttlefish at: http://www.matrixgames.com/forums/tm.asp?m=2742735
(in reply to Panther Bait)
Post #: 6574
2/7/2012 7:00:41 PM
Gridley380
Posts: 245
Joined: 12/20/2011
Status: offline

I suggest a stacking limit for operational groups in any given hex ("operational" meaning "not set entirely for training"). Say, 12 groups for a level 9 A/F and 18 for a level 10 A/F. One Air HQ can still allow bonus groups.
When discussing code fixes, I encourage people to remember that some very large raids were launched historically (google 'thousand plane raid' and ignore the movie results). They may not have been efficient, but they were certainly possible... at least for the allies. I'd like to see a land-based offshoot of the CV TF coordination penalty, myself: above so many planes, you have high odds of your raid breaking up and arriving over the target piecemeal (modified by various factors including nationality, A/F size, date, etc.).
(in reply to Cribtop)
Post #: 6575
2/7/2012 7:54:19 PM
Posts: 8
Joined: 2/3/2012
Status: offline

ORIGINAL: Gridley380
I suggest a stacking limit for operational groups in any given hex ("operational" meaning "not set entirely for training"). Say, 12 groups for a level 9 A/F and 18 for a level 10 A/F. One Air HQ can still allow bonus groups.
When discussing code fixes, I encourage people to remember that some very large raids were launched historically (google 'thousand plane raid' and ignore the movie results). They may not have been efficient, but they were certainly possible... at least for the allies. I'd like to see a land-based offshoot of the CV TF coordination penalty, myself: above so many planes, you have high odds of your raid breaking up and arriving over the target piecemeal (modified by various factors including nationality, A/F size, date, etc.).
It sounds to me like these are good solutions, particularly the latter. If strikes aren't putting 800 planes into one package, you don't have so many escorts to soak up the
capped CAP firing passes. A simple change to reduce the massive, well-coordinated strike formations we're seeing here and in other late-game AARs.
(in reply to Gridley380)
Post #: 6576
2/7/2012 9:11:16 PM
Canoerebel
Posts: 9768
Joined: 12/14/2002
From: Northwestern Georgia, USA
Status: offline

I think some of the suggested fixes will work a hardship on the defender. Under some of these proposed HRs, the defender will be able to base only a finite number at a particular base. The enemy, on the other hand, may be able to target huge strikes from multiple airfields against the single airfield, which will suffer due to the reduced defensive capacity.
For example, GreyJoy would be limited to a finite number of CAP at Hakodate, while Rader would be able to target Hakodate from multiple large bases, thus overwhelming the defenses. GJ can try to address the situation by using other fields for LRCAP, but that is less effective, uncertain, and results in higher fatigue, losses, and aircraft in need of repair.
(in reply to Karwoski)
Post #: 6577
2/7/2012 9:17:03 PM
Laxplayer Took me 9 days, but I finally read all 220 pages of this AAR. Now I find that it's completely halted because of some flaw/bug/whatever... man what a let down! Hopefully it
gets remedied soon so you guys can go back to entertaining the rest of us.
Posts: 202
Joined: 8/30/2006
From: San Diego
Status: offline
Post #: 6578
2/7/2012 9:22:26 PM
Cribtop
Posts: 3416
Joined: 8/10/2008
From: Lone Star
Status: offline

CR, I agree and thus suggest that a distinction be made between interceptors and strike groups. That said, in the end only a code fix of some kind will really clear this up because of limited player control over naval strike missions and even co-ordination of airfield and ground attacks.
(in reply to Laxplayer)
Post #: 6579
2/7/2012 9:27:30 PM
Panther Bait
Posts: 506
Joined: 8/30/2006
Status: offline

I agree that there is some danger. The hope with stacking limits would be that the raids come small(er) and broken up, so that they do not overwhelm the hard-coded CAP limit (particularly for # of escorts).
I think it would work better if the limits were by number of planes (and maybe track multi-engine bombers, torpedo/dive bombers and fighters separately) rather than by simple number of air units, to avoid confusion of air unit size, etc.
I think it would be absolutely critical to do some play testing to make sure that the dispersal actually does cause raids to fragment and that CAP can handle the fragmentation without too much diminishing effectiveness. Playtesting should occur using both sides on offense/defense to hopefully limit any bias against one side or the other.
Post #: 6580
2/7/2012 10:30:01 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

Hi guys,
I asked Rader to halt the game for a few days in order to find a decent solution or workaround for the CAP issue.
Panther is right. We need to do quite a bit of playtesting in order to find an HR that keeps the game fair...
A limited number of planes devoted to airstrikes and CAP may be a good one, but I think it is not easy, because how can you limit the number of planes set to naval attack in the whole of Japan, for example? How can you set only 200 escorts and 200 bombers for a naval strike? If you set it for each base, this will lead to a multiple-starting-base strike that will have the same effect: overwhelming CAP. But if you set the opposite, this will severely hamper any possible decent combined air defensive system....
I think the only HR that could "work" is to limit the AF and Port attack to 200+200, while leaving "free" the number of units on naval strike missions... Obviously this won't solve the CV problem....but mine are already on the bottom of the ocean anyway
(in reply to Panther Bait)
Post #: 6581
2/7/2012 11:23:59 PM
Posts: 8
Joined: 2/3/2012
Status: offline

ORIGINAL: Canoerebel
I think some of the suggested fixes will work a hardship on the defender. Under some of these proposed HRs, the defender will be able to base only a finite number at a particular base. The enemy, on the other hand, may be able to target huge strikes from multiple airfields against the single airfield, which will suffer due to the reduced defensive capacity.
For example, GreyJoy would be limited to a finite number of CAP at Hakodate, while Rader would be able to target Hakodate from multiple large bases thus overwhelming the defenses. GJ can try to address the situation by using other fields for LRCAP, but that is less effective, uncertain, and results in higher fatigue, losses, and aircraft in need of repair.
The attacker can already break through defending CAP at will, so things couldn't get worse for the defender. With the air group stacking HR, at least there'd be a chance
that the attackers don't come in coordinated, and the CAP only has to fight 2-300 enemy planes at a time, meaning they don't have issues like the CAP firing pass hard cap.
And it seems to me a player can more reliably control how many air groups are at a particular base than they can control how many planes take off for a strike. It certainly
wouldn't be perfect, but I think it's better than the status quo and more feasible than some proposed HRs.
Post #: 6582
2/8/2012 12:57:20 AM
JohnDillworth
Posts: 1917
Joined: 3/19/2009
Status: offline

quote:
I think the only HR that could "work" is to limit the AF and Port attack to 200+200, while leaving "free" the number of units on naval strike mission... Obviously this won't solve the CV problem....but mine are already on the bottom of the ocean anyway
Perhaps you could tweak naval attack to something like "200-200 within 8 hexes of each base" so within an 8-hex radius you could not have more than 400 bombers set to naval
attack. The flip side of this is no death stars. Limit of 6 carriers per hex. I know this is clunky but maybe a bit of play testing could tweak it. The alternative is wait
for a fix, stop playing or carry on as is. Rational limits on number of planes in a strike, and on CAP, seem to play a bit closer to reality.
The last thing I want to do is hurt you. But it’s still on the list.
(in reply to Karwoski)
Post #: 6583
2/8/2012 7:12:01 AM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

I made a couple of tests simulating a landing at Hachinohe with 300 fighters based at Hakodate on LRCAP over the landing hex and 200 bombers on naval strike at Tokyo, Maebashi, Utsunomiya and Sendai... The result was that I had not more than 130 planes actively covering my fleet, while Japan managed to have 2 very big air raids coordinating 300 fighters and some 350 bombers from 3 different airbases....which led to the destruction of my fleet...
Mmmmmmmm......
But there seems to be a strange fact: if I put a CVE with my landing fleet the raids will arrive in big waves...while it seems that if I use a BB as my leading ship with no CV or CVE the raids arrive in smaller packs..... Gotta make some more tests....
(in reply to JohnDillworth)
Post #: 6584
2/8/2012 7:31:47 AM
CT Grognard
Posts: 693
Joined: 5/16/2010
From: Cape Town, South Africa
Status: offline

The best way to fix this in my opinion is to improve the coordination penalty system - as soon as the number of planes in a strike exceeds a given number, the coordination penalty needs to increase exponentially.
Then if a player launches a 1,000-plane strike you'll hopefully see it break down into four or five smaller raids over the target hex... and you should see much more realistic and historical results.
Post #: 6585
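The exponential penalty proposed above can be sketched as a toy curve. All constants here (the base chance, the growth rate, the 200-plane threshold) are invented for illustration; this is not AE's actual code:

```python
import math

def coordination_penalty(planes, threshold=200, base=0.1, growth=0.02):
    """Toy model of an exponentially increasing coordination penalty.

    Below `threshold` planes, the chance a strike breaks up stays at
    `base`; above it, the chance grows exponentially with every extra
    plane, capped at 1.0. All constants are invented for illustration.
    """
    if planes <= threshold:
        return base
    return min(1.0, base * math.exp(growth * (planes - threshold)))

# A 150-plane raid keeps the base chance of breaking up; a 1,000-plane
# strike is effectively guaranteed to fragment into smaller raids.
print(coordination_penalty(150))   # 0.1
print(coordination_penalty(1000))  # 1.0
```

Under a curve like this, the 1,000-plane mega-strikes discussed in this thread would almost always arrive as several separate waves, which is the behaviour being asked for.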
2/8/2012 12:07:15 PM
Gabede
Posts: 168
Joined: 2/7/2010
From: Central, NC
Status: offline

quote:
ORIGINAL: CT Grognard

The best way to fix this in my opinion is to improve the coordination penalty system - as soon as the number of planes in a strike exceeds a given number the coordination penalty needs to increase exponentially.
Then if a player launches a 1,000-plane strike you'll hopefully see it break down into four or five smaller raids over the target hex...and you should see much more realistic and historical results.
Bingo. There was no AWACS or computer-assisted routing for the massive air attacks of WWII like today. I have a couple of questions though.
1) If you have a HR about stacking, does this include CVs?
2) Why is strike craft leaking through bad? I am in a game in late '44, Marianas action, and I can't get anything through the Super Fleet CAP. Mostly I think due to my poor pilot exp. and poor coordination.
3) How does pilot exp. affect the outcome? Anyone done a test yet?
4) The AA in DaBabes is crushing... big difference.
The code may be borked, but this is not a historical game so you can't expect historical results. What would have happened if the Allies had invaded the Home Islands? We will never know. I don't have a solution other than stopping the massive air battles; they are ahistorical.
One of the serious problems in planning the fight against American doctrine, is that the Americans do not read their manuals, nor do they feel any obligation to follow
their doctrine
Post #: 6586
2/8/2012 12:16:55 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

quote:
ORIGINAL: Gabede

1) If you have a HR about stacking, does this include CVs?
2) Why is strike craft leaking through bad? I am in a game in late '44, Marianas action, and I can't get anything through the Super Fleet CAP. Mostly I think due to my poor pilot exp. and poor coordination.
3) How does pilot exp. affect the outcome? Anyone done a test yet?
4) The AA in DaBabes is crushing... big difference.
1 - Don't know... the tests are showing awful results...
2 - It's only a matter of coordination. Good Air HQs and lots of level 9 and 10 AFs mean good coordination, which is to say more than 500 fighters escorting more than 400 bombers in a single raid.
3 - Imho pilot experience doesn't affect the coordination. Rader is using rookie pilots (just drawn from the flight school) to mount raids of 600 fighters escorting 400 bombers...
(in reply to Gabede)
Post #: 6587
2/8/2012 12:19:09 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

quote:
ORIGINAL: CT Grognard

The best way to fix this in my opinion is to improve the coordination penalty system - as soon as the number of planes in a strike exceeds a given number the coordination penalty needs to increase exponentially.
Then if a player launches a 1,000-plane strike you'll hopefully see it break down into four or five smaller raids over the target hex...and you should see much more realistic and historical results.
Agree... but I think it will need a major re-coding effort...
I'm now testing how it works with 200 fighters + 200 bombers against 300 fighters available on a single base... results are not good...
300 fighters on a single base will probably mean, even with 100% CAP, that only a small fraction (say 1/3) will ever be able to engage...
Post #: 6588
2/8/2012 12:47:49 PM
CT Grognard
Posts: 693
Joined: 5/16/2010
From: Cape Town, South Africa
Status: offline

I don't necessarily think so - I think the coding is already there, you just need to intensify the effects of the parameters.
The intention behind strike coordination was to make it very difficult to mount massive raids with several different types of aircraft, resulting in smaller, more selective raid formations.
We have been seeing directly the opposite in your and rader's air combat - massive formations consisting of many different aircraft models.
LoBaron of course has an excellent sticky thread about air coordination in the War Room which goes to the core of the current problems you are observing. Coordination penalties in AE were implemented intentionally to reflect the difficulties of coordination as well as the vagaries of war. (Think of Midway, where Spruance specifically decided on uncoordinated attacks rather than a single coordinated strike in order to prevent a Japanese counterstrike; as a result, the unescorted low-level raid by VT-8 was absolutely slaughtered, but managed to draw the Japanese Zero CAP out of position, allowing the SBDs of VB-6, VS-6 and VB-3 an almost unopposed run on the target.)
There are a number of factors included in the code design that affect coordination.
We also know that the chance of uncoordination is doubled for carrier task forces as soon as the number of aircraft in the task force exceeds a given number. For Japanese CV TFs, strikes will have the chance of uncoordination doubled as soon as there are more than 200+RND(200) aircraft in the task force.
I believe a similar restriction should apply to bases, but you can set the thresholds higher. For example, double the chance of uncoordination for a strike from a base if there are more than 400+RND(400) aircraft at the base.
I speak under correction and michaelm can hopefully chip in, but I believe the coding is there - you just need to crank up the penalties.
Post #: 6589
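The 200+RND(200) doubling rule described above boils down to a few lines. Only the threshold formula comes from the post; the base chance of uncoordination is an invented placeholder:

```python
import random

def uncoordination_chance(planes_in_tf, base_chance=0.2, rng=random):
    """Toy version of the doubling rule quoted above: the chance of an
    uncoordinated strike doubles once the aircraft in a carrier TF
    exceed 200 + RND(200). Only the threshold formula comes from the
    post; `base_chance` is an invented placeholder.
    """
    threshold = 200 + rng.randint(0, 200)  # rolled fresh for each strike
    if planes_in_tf > threshold:
        return min(1.0, 2 * base_chance)
    return base_chance

# 200 or fewer aircraft can never trip the threshold; 401 or more
# always do; anything in between depends on the roll.
print(uncoordination_chance(200))  # 0.2
print(uncoordination_chance(401))  # 0.4
```

The suggested base-strike version would simply raise the constants to 400 + RND(400).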
2/8/2012 3:18:41 PM
Schlemiel
Posts: 154
Joined: 10/20/2011
Status: offline

I think the 1/3 rule itself might be flawed in this particular situation. Sure, for a couple of carriers you'd need 1/3 of your CAP planes in the air, but with (presumably) frequent recon of the major airfields, troops on the ground (who would have a hard time not spotting a 1,000-plane raid, with a couple hours' notice for Hakodate), pickets in the ocean, and radar, you probably wouldn't need more than a few hundred planes in the air until something is detected (enough to stop a small raid that could sneak through), and could have far more on standby for when the raid is detected. Sure, you'd have issues launching them all, but presumably more than 1/3 of your aircraft could be in the air for a raid that size with the warning you'd be likely to have. Not that I think there's a solution, per se, but I'd think that in late war an Allied airbase that could support 2,500 fighters WOULD be able to put up what is basically an impenetrable CAP against flight-school pilots in fighters at the edge of their combat range, with the kind of warning (intelligence, spotters on sea and land, recon, radar, etc.) they would be likely to have. Not that I think there's an elegant solution here.
As far as house rules go, I think strike size limits (perhaps the groups-on-attack-in-an-8-hex-area rule or whatever) are fine, but I'm not sure restricting the number of fighters on CAP is necessary. You'd need to test, of course, but if you can make one base basically impenetrable by placing all your fighters there, you are making some trade-offs. Your 50% CAP of 2,500 aircraft would still probably have put only about 400 fighters in the air against that raid. If you need to refine it for the strike to have a chance, simply say that you can only base fighters at a base such that 1/3, or 1/4 (presuming 100% CAP, which isn't that bad over your own airbase for pilot fatigue), of them equals 400 (or, if that number proves impenetrable, less, until whatever decent % of strikes you agree on will get through). The 1/4 would be to have some kind of margin for planes out of position.
One odd thing for me is, I've seen dice rolls where aircraft strike electrical lines or whatever near bases on raids, but there's no modelling of the HIGH probability of midair collisions when 600 flight-school pilots try to fly in formation and gather up for a 1,000-aircraft strike, given the hours until all the planes would get airborne? That wouldn't be terribly easy to pull off without loss with very experienced pilots, much less rookies hastily drafted out of flight school with the adrenaline of their first mission.
Post #: 6590
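Schlemiel's proposed basing cap works out to a one-line calculation. The helper name and structure are mine; the 400-airborne target and the 1/3 vs 1/4 fractions are the numbers floated in the post:

```python
def max_fighters_at_base(max_airborne=400, cap_fraction=1/3):
    """House-rule arithmetic from the post above: limit the fighters
    based at a single field so that the fraction expected to be
    airborne at once never exceeds `max_airborne`. All numbers are
    the illustrative ones from the post, not game code.
    """
    return round(max_airborne / cap_fraction)

print(max_fighters_at_base())                  # 1200 under the 1/3 rule
print(max_fighters_at_base(cap_fraction=1/4))  # 1600 under the 1/4 rule
```

The tighter 1/4 fraction gives the larger headroom for planes caught out of position, as the post suggests.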
2/8/2012 10:23:21 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

Sorry to bother you guys.... but the more I test, the more I'm puzzled.
Following the tests made in the general forum section (please take a look), I tried to set up a typical Hakodate-Tokyo scenario.
I assembled a typical landing TF (4 BBs, 2 CAs and several APAs).
There are 1,000 fighters at Hakodate: 15k feet altitude, 0 range, 100% CAP.
Japan has at Tokyo 100 Franks and 200 Frances (naval attack, 10% naval search, 6,000 feet altitude).
AFTER ACTION REPORTS FOR Sep 01, 45
Afternoon Air attack on TF, near Hakodate at 119,53
Weather in hex: Light rain
Raid detected at 119 NM, estimated altitude 9,000 feet.
Estimated time to target is 35 minutes
Japanese aircraft
P1Y2 Frances x 135
Ki-84r Frank x 100
Allied aircraft
P-47D25 Thunderbolt x 1000
Japanese aircraft losses
P1Y2 Frances: 68 damaged
P1Y2 Frances: 14 destroyed by flak
Ki-84r Frank: 41 destroyed
Allied aircraft losses
P-47D25 Thunderbolt: 1 destroyed
Allied Ships
BB Colorado, Torpedo hits 4, and is sunk
BB Arkansas, Torpedo hits 5, and is sunk
BB West Virginia, Torpedo hits 8, and is sunk
CA Louisville
CA Chester, Torpedo hits 1
APA Haskell, Torpedo hits 1, on fire, heavy damage
Aircraft Attacking:
42 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
42 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
43 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
CAP engaged:
31st Fighter Group with P-47D25 Thunderbolt (64 airborne, 134 on standby, 0 scrambling)
64 plane(s) intercepting now.
2 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 2000 and 42000.
Time for all group planes to reach interception is 38 minutes
128 planes vectored on to bombers
31st Fighter Group with P-47D25 Thunderbolt (64 airborne, 134 on standby, 0 scrambling)
64 plane(s) intercepting now.
2 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 1000 and 42000.
Time for all group planes to reach interception is 29 minutes
116 planes vectored on to bombers
31st Fighter Group with P-47D25 Thunderbolt (64 airborne, 134 on standby, 0 scrambling)
64 plane(s) intercepting now.
2 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 3000 and 42000.
Time for all group planes to reach interception is 30 minutes
144 planes vectored on to bombers
31st Fighter Group with P-47D25 Thunderbolt (64 airborne, 134 on standby, 0 scrambling)
64 plane(s) intercepting now.
2 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 1000 and 15000.
Time for all group planes to reach interception is 27 minutes
118 planes vectored on to bombers
31st Fighter Group with P-47D25 Thunderbolt (64 airborne, 134 on standby, 0 scrambling)
64 plane(s) intercepting now.
2 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 1000 and 42000.
Time for all group planes to reach interception is 26 minutes
110 planes vectored on to bombers
Banzai! - Mori J. in a P1Y2 Frances is willing to die for the Emperor
So despite having ONLY 100 escort fighters, ALL the bombers got through... even though all my groups had plenty of time to reach the battle... despite none of my fighters being out of position... despite all my fighters being able to engage (100% CAP)...
Now let's try a hypothetical landing with the same TF at Hachinohe, with those same 1,000 fighters on LRCAP...........
(in reply to Schlemiel)
Post #: 6591
2/8/2012 10:31:24 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

Morning Air attack on TF, near Hachinohe at 118,55
Weather in hex: Partial cloud
Raid detected at 95 NM, estimated altitude 6,000 feet.
Estimated time to target is 28 minutes
Japanese aircraft
P1Y2 Frances x 115
Ki-84r Frank x 100
Allied aircraft
P-47D25 Thunderbolt x 717...don't take this number into any consideration...
Japanese aircraft losses
P1Y2 Frances: 9 destroyed, 58 damaged
P1Y2 Frances: 9 destroyed by flak
Ki-84r Frank: 29 destroyed
Allied aircraft losses
P-47D25 Thunderbolt: 6 destroyed
Allied Ships
BB West Virginia, Torpedo hits 4, on fire, heavy damage
BB Valiant, Torpedo hits 5, on fire, heavy damage
BB Colorado, Torpedo hits 5, heavy damage
BB Arkansas, Torpedo hits 2, and is sunk
CA Louisville
Aircraft Attacking:
25 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
38 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
36 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
CAP engaged:
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 151 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 2 minutes
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 151 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 6 minutes
31st Fighter Group with P-47D25 Thunderbolt (151 airborne, 0 on standby, 0 scrambling)
151 plane(s) intercepting now.
Group patrol altitude is 15000
Raid is overhead
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 151 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 9 minutes
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 113 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 11 minutes
Banzai! - Koda L. in a P1Y2 Frances is willing to die for the Emperor
Magazine explodes on BB Arkansas
Afternoon Air attack on TF, near Hachinohe at 118,55
Weather in hex: Partial cloud
Raid detected at 78 NM, estimated altitude 8,000 feet.
Estimated time to target is 23 minutes
Japanese aircraft
P1Y2 Frances x 101
Ki-84r Frank x 27
Allied aircraft
P-47D25 Thunderbolt x 684
Japanese aircraft losses
P1Y2 Frances: 6 destroyed, 62 damaged
P1Y2 Frances: 5 destroyed by flak
Ki-84r Frank: 9 destroyed
Allied aircraft losses
P-47D25 Thunderbolt: 1 destroyed
Allied Ships
APA Haskell
BB Colorado, Torpedo hits 1, heavy damage
CA Louisville
APA Haskell
CA Chester, Torpedo hits 1, on fire
APA Haskell
APA Haskell
APA Haskell, Torpedo hits 1, on fire, heavy damage
APA Haskell
APA Haskell, Torpedo hits 3, and is sunk
APA Haskell, Torpedo hits 1
BB Valiant, heavy damage
APA Haskell
APA Haskell, Torpedo hits 1
Aircraft Attacking:
33 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
15 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
19 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
21 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
CAP engaged:
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 147 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 6 minutes
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 147 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 5 minutes
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 143 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 4 minutes
31st Fighter Group with P-47D25 Thunderbolt (142 airborne, 0 on standby, 0 scrambling)
142 plane(s) intercepting now.
Group patrol altitude is 15000
Raid is overhead
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 105 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 8 minutes
Banzai! - Hayashi C. in a P1Y2 Frances is willing to die for the Emperor
Actually, out of 5 groups only one was in position to fight... both in the morning and in the afternoon...
Is that just a bad dice roll??
Let's try it again at Akita...
Post #: 6592
2/8/2012 10:37:41 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

Afternoon Air attack on TF, near Hirosaki/Aomori at 117,54
Weather in hex: Overcast
Raid detected at 80 NM, estimated altitude 9,000 feet.
Estimated time to target is 24 minutes
Japanese aircraft
P1Y2 Frances x 54
Ki-84r Frank x 6
Allied aircraft
P-47D25 Thunderbolt x 584
Japanese aircraft losses
P1Y2 Frances: 4 destroyed, 26 damaged
P1Y2 Frances: 3 destroyed by flak
Ki-84r Frank: 3 destroyed
No Allied losses
Allied Ships
APA Haskell, Torpedo hits 2, heavy damage
BB West Virginia, Torpedo hits 1
BB Colorado, Torpedo hits 2, and is sunk
BB Valiant, Torpedo hits 4, on fire, heavy damage
BB Arkansas, Torpedo hits 3, heavy damage
Aircraft Attacking:
17 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
14 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
CAP engaged:
31st Fighter Group with P-47D25 Thunderbolt (118 airborne, 0 on standby, 0 scrambling)
118 plane(s) intercepting now.
Group patrol altitude is 15000
Raid is overhead
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 125 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Raid is overhead
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 125 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 10 minutes
125 planes vectored on to bombers
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 123 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Raid is overhead
31st Fighter Group with P-47D25 Thunderbolt (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 93 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 10 minutes
Magazine explodes on BB Colorado
mmmmmmmmmmmm..... so what's the secret of defending a fleet? Japanese forces were far from overwhelming this time...
Post #: 6593
2/8/2012 10:48:25 PM
GreyJoy
Posts: 5736
Joined: 3/18/2011
Status: online

Ok, let's try with the CVs...
16 CVs + 9 CVLs.
All fighters are set at 70% CAP at 15,000 feet, range 8.
200 Frances and 100 Franks based at Tokyo. The Frances are 50 exp!
Morning Air attack on TF, near Ominato at 120,56
Weather in hex: Partial cloud
Raid detected at 119 NM, estimated altitude 10,000 feet.
Estimated time to target is 35 minutes
Japanese aircraft
P1Y2 Frances x 155
Ki-84r Frank x 100
Allied aircraft
Seafire IIC x 10
Seafire L.III x 15
Seafire F.XV x 64
F4U-1D Corsair x 156
F6F-5 Hellcat x 373
These numbers correspond exactly to 70% of the fighter strength of the TF....
Japanese aircraft losses
P1Y2 Frances: 2 destroyed, 72 damaged
P1Y2 Frances: 41 destroyed by flak
All the Frances reached the launching position BEFORE the fighters could engage!
Ki-84r Frank: 16 destroyed
Allied aircraft losses
Seafire IIC: 1 destroyed
Seafire F.XV: 1 destroyed
F4U-1D Corsair: 1 destroyed
F6F-5 Hellcat: 10 destroyed
Allied Ships
CV Wasp, Torpedo hits 1
CV Victorious
CV Essex, Kamikaze hits 1
CV Yorktown
CVL Langley
CV Indomitable, Torpedo hits 2
CV Illustrious, Torpedo hits 1
CV Bunker Hill
CVL Independence
CV Formidable
CV Lexington
CV Randolph, Torpedo hits 1
CVL Bataan, Torpedo hits 1
CVL San Jacinto
CV Enterprise
CVL Cowpens, Torpedo hits 1
CVL Princeton, Torpedo hits 2, and is sunk
CVL Belleau Wood, Kamikaze hits 1, on fire
CV Saratoga, Torpedo hits 1, on fire
CVL Monterey
Aircraft Attacking:
35 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
34 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
26 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
14 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
CAP engaged:
VF-1 with F6F-5 Hellcat (0 airborne, 14 on standby, 0 scrambling)
0 plane(s) not yet engaged, 7 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 20 minutes
13 planes vectored on to bombers
VBF-1 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 6000 and 15000.
Time for all group planes to reach interception is 19 minutes
13 planes vectored on to bombers
VF-3 with F6F-5 Hellcat (4 airborne, 10 on standby, 0 scrambling)
4 plane(s) intercepting now.
0 plane(s) not yet engaged, 1 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 20 minutes
11 planes vectored on to bombers
VBF-3 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 10000 and 15000.
Time for all group planes to reach interception is 19 minutes
13 planes vectored on to bombers
VF-6 with F6F-5 Hellcat (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 20 minutes
5 planes vectored on to bombers
VBF-6 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 9000 and 15000.
Time for all group planes to reach interception is 19 minutes
9 planes vectored on to bombers
VF-7 with F6F-5 Hellcat (0 airborne, 14 on standby, 0 scrambling)
0 plane(s) not yet engaged, 7 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 20 minutes
4 planes vectored on to bombers
VBF-7 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 8000 and 15000.
Time for all group planes to reach interception is 19 minutes
12 planes vectored on to bombers
VF-9 with F6F-5 Hellcat (0 airborne, 14 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 3 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 26 minutes
14 planes vectored on to bombers
VBF-9 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 0 being recalled, 4 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 2000 and 10000.
Time for all group planes to reach interception is 35 minutes
13 planes vectored on to bombers
VF-10 with F6F-5 Hellcat (3 airborne, 14 on standby, 0 scrambling)
3 plane(s) intercepting now.
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 4000 and 15000.
Time for all group planes to reach interception is 20 minutes
13 planes vectored on to bombers
VBF-10 with F4U-1D Corsair (4 airborne, 9 on standby, 0 scrambling)
4 plane(s) intercepting now.
Group patrol altitude is 15000 , scrambling fighters between 2000 and 6000.
Time for all group planes to reach interception is 16 minutes
12 planes vectored on to bombers
VF-11 with F6F-5 Hellcat (0 airborne, 14 on standby, 0 scrambling)
0 plane(s) not yet engaged, 7 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 4000 and 15000.
Time for all group planes to reach interception is 20 minutes
21 planes vectored on to bombers
VBF-11 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 19 minutes
9 planes vectored on to bombers
VF-12 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
0 plane(s) not yet engaged, 3 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 3000 and 15000.
Time for all group planes to reach interception is 19 minutes
13 planes vectored on to bombers
VBF-12 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters to 15000.
Time for all group planes to reach interception is 19 minutes
9 planes vectored on to bombers
VF-13 with F6F-5 Hellcat (0 airborne, 14 on standby, 0 scrambling)
0 plane(s) not yet engaged, 7 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 20 minutes
17 planes vectored on to bombers
VBF-13 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 1000 and 15000.
Time for all group planes to reach interception is 19 minutes
9 planes vectored on to bombers
VF-14 with F6F-5 Hellcat (0 airborne, 14 on standby, 0 scrambling)
0 plane(s) not yet engaged, 7 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 11000 and 15000.
Time for all group planes to reach interception is 20 minutes
12 planes vectored on to bombers
VBF-14 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
0 plane(s) not yet engaged, 0 being recalled, 4 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 4000 and 15000.
Time for all group planes to reach interception is 27 minutes
13 planes vectored on to bombers
Anabuki N. gives his life for the Emperor by ramming CV Essex
Banzai! - Ban O. in a P1Y2 Frances is willing to die for the Emperor
Banzai! - Iizuka R. in a P1Y2 Frances is willing to die for the Emperor
Okajima R. gives his life for the Emperor by ramming CVL Belleau Wood
Ammo storage explosion on CV Saratoga
Ammo storage explosion on CVL Princeton
Fuel storage explosion on CVL Princeton
Morning Air attack on TF, near Ominato at 120,56
Weather in hex: Partial cloud
Raid detected at 79 NM, estimated altitude 10,000 feet.
Estimated time to target is 23 minutes
Japanese aircraft
P1Y2 Frances x 25
Allied aircraft
Seafire IIC x 7
Seafire L.III x 13
Seafire F.XV x 57
F4U-1D Corsair x 139
F6F-5 Hellcat x 318
Japanese aircraft losses
P1Y2 Frances: 18 destroyed
No Allied losses
CAP engaged:
VF-1 with F6F-5 Hellcat (8 airborne, 0 on standby, 0 scrambling)
8 plane(s) intercepting now.
1 plane(s) not yet engaged, 4 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 16000.
Time for all group planes to reach interception is 31 minutes
VBF-1 with F4U-1D Corsair (4 airborne, 0 on standby, 0 scrambling)
4 plane(s) intercepting now.
0 plane(s) not yet engaged, 8 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 6 minutes
VF-3 with F6F-5 Hellcat (6 airborne, 0 on standby, 0 scrambling)
6 plane(s) intercepting now.
3 plane(s) not yet engaged, 5 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 12 minutes
VBF-3 with F4U-1D Corsair (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 12 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 9 minutes
VF-6 with F6F-5 Hellcat (0 airborne, 4 on standby, 0 scrambling)
4 plane(s) not yet engaged, 5 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 8000 and 15000.
Time for all group planes to reach interception is 17 minutes
VBF-6 with F4U-1D Corsair (0 airborne, 0 on standby, 0 scrambling)
1 plane(s) not yet engaged, 4 being recalled, 5 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 8000 and 15000.
Time for all group planes to reach interception is 31 minutes
VF-7 with F6F-5 Hellcat (4 airborne, 0 on standby, 0 scrambling)
4 plane(s) intercepting now.
7 plane(s) not yet engaged, 2 being recalled, 1 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 11000 and 17000.
Time for all group planes to reach interception is 35 minutes
VBF-7 with F4U-1D Corsair (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 12 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 10 minutes
VF-9 with F6F-5 Hellcat (0 airborne, 0 on standby, 0 scrambling)
4 plane(s) not yet engaged, 16 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters to 13000.
Time for all group planes to reach interception is 17 minutes
VBF-9 with F4U-1D Corsair (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 13 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 9 minutes
VF-10 with F6F-5 Hellcat (4 airborne, 0 on standby, 0 scrambling)
4 plane(s) intercepting now.
4 plane(s) not yet engaged, 12 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 11000 and 13000.
Time for all group planes to reach interception is 10 minutes
VBF-10 with F4U-1D Corsair (4 airborne, 0 on standby, 0 scrambling)
4 plane(s) intercepting now.
0 plane(s) not yet engaged, 8 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 7 minutes
VF-11 with F6F-5 Hellcat (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 19 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 11 minutes
VBF-11 with F4U-1D Corsair (0 airborne, 0 on standby, 0 scrambling)
2 plane(s) not yet engaged, 5 being recalled, 4 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 21 minutes
VF-12 with F6F-5 Hellcat (0 airborne, 0 on standby, 0 scrambling)
3 plane(s) not yet engaged, 12 being recalled, 4 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 3000 and 12000.
Time for all group planes to reach interception is 23 minutes
VBF-12 with F4U-1D Corsair (0 airborne, 0 on standby, 0 scrambling)
2 plane(s) not yet engaged, 8 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters to 10000.
Time for all group planes to reach interception is 29 minutes
VF-13 with F6F-5 Hellcat (6 airborne, 4 on standby, 0 scrambling)
6 plane(s) intercepting now.
0 plane(s) not yet engaged, 7 being recalled, 2 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 3000 and 8000.
Time for all group planes to reach interception is 25 minutes
VBF-13 with F4U-1D Corsair (3 airborne, 0 on standby, 0 scrambling)
3 plane(s) intercepting now.
0 plane(s) not yet engaged, 8 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Raid is overhead
VF-14 with F6F-5 Hellcat (0 airborne, 0 on standby, 0 scrambling)
0 plane(s) not yet engaged, 12 being recalled, 4 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 21 minutes
VBF-14 with F4U-1D Corsair (5 airborne, 0 on standby, 0 scrambling)
5 plane(s) intercepting now.
0 plane(s) not yet engaged, 8 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000
Time for all group planes to reach interception is 7 minutes
As you can see...no scrambling fighters...the 30% left on escort REMAINED on escort.... also the 8-hex range meant that a lot of fighters were out of position....
Overall this means that 200 bombers escorted by 100 fighters are more than enough to bring HAVOC to the Allied DS....
Post #: 6594
2/8/2012 11:04:28 PM
TheLoneGunman (Posts: 311, Joined: 1/12/2010, Status: offline)
Try splitting the carrier fighter groups into dedicated Escort and CAP groups respectively.
You might see better performance by specializing them, and have fewer aircraft "out of position".
Your escort groups would have the needed range, escort mission, and target (if applicable), while your CAP groups would be at or near 100% CAP with 0 range to focus all of their aircraft on defending the carriers.
Post #: 6595
2/8/2012 11:25:33 PM
pat.casey (Posts: 392, Joined: 9/10/2007, Status: offline)
I hesitate to say this, but perhaps the pre-AE "uber-cap" was actually a more accurate model for late-war strike vs. CAP engagements.
Not sure if the pattern really holds up, but it seems like CAP got better and better relative to strike packages as the war progressed... early war the bombers go through. Late war, not so much (Marianas, Okinawa, etc.).
The tests I'm seeing above are frankly silly. If that holds up, all an attacker needs to do is "spend" enough fighters to absorb all possible firing passes, and then his strike package walks in unmolested :(
(in reply to TheLoneGunman)
Post #: 6596
2/8/2012 11:43:36 PM
GreyJoy (Posts: 5736, Joined: 3/18/2011, Status: online)
Same settings...but with 70% CAP and 0 hex...
ALL the bombers got through....
--------------------------------------------------------------------------------
Morning Air attack on TF, near Ominato at 120,56
Weather in hex: Partial cloud
Raid detected at 114 NM, estimated altitude 12,000 feet.
Estimated time to target is 34 minutes
Japanese aircraft
P1Y2 Frances x 155
Ki-84r Frank x 100
Allied aircraft
Seafire IIC x 10
Seafire L.III x 15
Seafire F.XV x 64
F4U-1D Corsair x 156
F6F-5 Hellcat x 373
Japanese aircraft losses
P1Y2 Frances: 4 destroyed, 80 damaged
P1Y2 Frances: 34 destroyed by flak
Ki-84r Frank: 10 destroyed
Allied aircraft losses
F4U-1D Corsair: 1 destroyed
F6F-5 Hellcat: 4 destroyed
Allied Ships
CVL Bataan, Torpedo hits 1
CVL Cowpens
CV Victorious
CV Saratoga, Torpedo hits 2
CV Intrepid
CVL Cabot
CV Hancock, Torpedo hits 1
CVL Princeton
CV Wasp
CVL Belleau Wood
CV Illustrious, Torpedo hits 2, on fire, heavy damage
CV Enterprise
CV Lexington
CV Formidable
CV Yorktown, Torpedo hits 1, on fire
CV Bunker Hill, Torpedo hits 1
CV Franklin, Torpedo hits 1
CVL Langley
CV Essex
CV Randolph
CV Indomitable, Torpedo hits 1
CVL Monterey, Torpedo hits 3, and is sunk
Aircraft Attacking:
29 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
38 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
34 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
16 x P1Y2 Frances launching torpedoes at 200 feet
Naval Attack: 1 x 18in Type 91 Torpedo
CAP engaged:
VF-1 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 4000 and 15000.
Time for all group planes to reach interception is 10 minutes
VBF-1 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 3000 and 15000.
Time for all group planes to reach interception is 9 minutes
8 planes vectored on to bombers
VF-3 with F6F-5 Hellcat (4 airborne, 10 on standby, 0 scrambling)
4 plane(s) intercepting now.
1 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 1000 and 37300.
Time for all group planes to reach interception is 17 minutes
11 planes vectored on to bombers
VBF-3 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 4000 and 15000.
Time for all group planes to reach interception is 9 minutes
VF-6 with F6F-5 Hellcat (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 7000 and 11000.
Time for all group planes to reach interception is 11 minutes
VBF-6 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 18 minutes
1 planes vectored on to bombers
VF-7 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 15 minutes
4 planes vectored on to bombers
VBF-7 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 11000 and 39300.
Time for all group planes to reach interception is 17 minutes
12 planes vectored on to bombers
VF-9 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 16 minutes
3 planes vectored on to bombers
VBF-9 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 6000 and 15000.
Time for all group planes to reach interception is 15 minutes
9 planes vectored on to bombers
VF-10 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 24 minutes
12 planes vectored on to bombers
VBF-10 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 10000.
Time for all group planes to reach interception is 24 minutes
5 planes vectored on to bombers
VF-11 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 8000 and 15000.
Time for all group planes to reach interception is 10 minutes
8 planes vectored on to bombers
VBF-11 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 8000 and 15000.
Time for all group planes to reach interception is 9 minutes
5 planes vectored on to bombers
VF-12 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 5000 and 15000.
Time for all group planes to reach interception is 10 minutes
6 planes vectored on to bombers
VBF-12 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 7000 and 15000.
Time for all group planes to reach interception is 15 minutes
8 planes vectored on to bombers
VF-13 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 4000 and 15000.
Time for all group planes to reach interception is 10 minutes
8 planes vectored on to bombers
VBF-13 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 10000 and 13000.
Time for all group planes to reach interception is 22 minutes
1 planes vectored on to bombers
VF-14 with F6F-5 Hellcat (4 airborne, 14 on standby, 0 scrambling)
4 plane(s) intercepting now.
3 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 7000 and 37300.
Time for all group planes to reach interception is 17 minutes
2 planes vectored on to bombers
VBF-14 with F4U-1D Corsair (0 airborne, 9 on standby, 0 scrambling)
4 plane(s) not yet engaged, 0 being recalled, 0 out of immediate contact.
Group patrol altitude is 15000 , scrambling fighters between 3000 and 15000.
Time for all group planes to reach interception is 22 minutes
5 planes vectored on to bombers
Banzai! - Futagami L. in a P1Y2 Frances is willing to die for the Emperor
Ammo storage explosion on CV Yorktown
Banzai! - Mitsumori M. in a P1Y2 Frances is willing to die for the Emperor
Fuel storage explosion on CV Illustrious
Fuel storage explosion on CVL Monterey
Ammo storage explosion on CVL Monterey
This time not a single fighter was out of position (0 being recalled)...but the result was nonetheless awful....
(in reply to pat.casey)
Post #: 6597
2/8/2012 11:47:04 PM
TheLoneGunman (Posts: 311, Joined: 1/12/2010, Status: offline)
Well GJ, you still have only a limited number of passes for your fighters, so as long as the escort is big enough, they'll never reach the bombers, but at least you've somewhat proven what I already suspected, that ranges greater than 0 are putting your CAP out of position by having them fly through neighboring hexes.
Post #: 6598
2/8/2012 11:51:12 PM
GreyJoy (Posts: 5736, Joined: 3/18/2011, Status: online)
quote: ORIGINAL: TheLoneGunman
Well GJ, you still have only a limited number of passes for your fighters, so as long as the escort is big enough, they'll never reach the bombers, but at least you've somewhat proven what I already suspected, that ranges greater than 0 are putting your CAP out of position by having them fly through neighboring hexes.
Yes, but we should have at least 300 passes AFAIK... if 100 fighters on escort are enough to suck up all those passes...well...we have a problem imho.
And what about those planes directly vectored on bombers? Why don't they reach them, or why do they always reach them too late?
To be honest I never noticed this terrible behaviour of CAP...I faced several battles in this game and it was never so awful...
However...let's face it: under these conditions, how am I supposed to do anything in Hokkaido-northern Honshu?
(in reply to TheLoneGunman)
Post #: 6599
2/9/2012 12:12:33 AM
TheLoneGunman (Posts: 311, Joined: 1/12/2010, Status: offline)
Well, not every pass is going to result in a hit, which is why it only takes 100 fighters on escort to suck up all 300 passes.
The number of passes awarded to the CAP shouldn't ever have had a hard limit; it should have been proportional to the amount of CAP in the air.
1,000 fighters on CAP shouldn't get only 300 passes, especially if they're set to 100% CAP at a high-level airfield with plenty of aviation support and set to 0 range.
A quick band-aid could be trying to apply overstacking limits to level 9 and 10 airfields so that it becomes impossible to field aircraft in the thousands on either side, and cripple coordination to a degree as well. But that's only a band-aid covering up a flaw in the mechanics, not an actual fix.
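To make the "proportional passes" argument concrete, here is a toy calculation. All constants here (passes per fighter, the 300-pass cap) are hypothetical numbers chosen only to illustrate the point being debated; this is not WitP:AE's actual combat model.

```cpp
#include <algorithm>
#include <cassert>

// Toy model only: contrasts a hard-capped pass budget with one that scales
// with the number of CAP fighters. Constants are invented for illustration.
int passes_hard_cap(int cap_fighters, int passes_per_fighter, int hard_cap) {
    // Once the cap binds, extra fighters add nothing.
    return std::min(cap_fighters * passes_per_fighter, hard_cap);
}

int passes_proportional(int cap_fighters, int passes_per_fighter) {
    // Every fighter contributes its share of passes.
    return cap_fighters * passes_per_fighter;
}
```

Under the hard cap, 1,000 CAP fighters and 300 CAP fighters end up with the same pass budget, which is exactly why a large enough escort can soak every pass before the bombers are touched.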
Post #: 6600
{"url":"http://www.matrixgames.com/forums/tm.asp?m=2761796&mpage=220","timestamp":"2014-04-20T09:28:15Z","content_type":null,"content_length":"235193","record_id":"<urn:uuid:335b71d5-ea6b-4142-8a90-e0ed8fc25c41>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
February 4th 2009, 08:14 PM #1
Junior Member
Oct 2008
I was looking at the following proof that the quotient space X/M, where X is a Banach space and M is a closed subspace, is itself a Banach space under the quotient norm:
PlanetMath: quotients of Banach spaces by closed subspaces are Banach spaces under the quotient norm
Can someone elaborate for me on the second-to-last line, beginning $x-s_k+M$? In particular I'm shaky on the last two equalities.
In a quotient space, the coset containing x+y is (by definition) the sum of the coset containing x and the coset containing y. In other words, (x+y) + M = (x+M) + (y+M). By induction, this
extends to any finite sum of cosets.
That is all that is happening in the line $x-s_k+M = (x+M) - (s_k+M) = (x+M) - \sum_{n=1}^k(x_n+M) = (x+M) - \sum_{n=1}^kX_n.$ The first two equalities are using that fact about sums of cosets
(remember that $s_k = \textstyle\sum_{n=1}^k x_n$), and the last equality comes from the definition of $X_n = x_n+M$.
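Spelled out, the induction step is one more application of the same two-coset identity:

```latex
\sum_{n=1}^{k+1}(x_n+M)
  = \Bigl(\sum_{n=1}^{k}(x_n+M)\Bigr) + (x_{k+1}+M)
  = \Bigl(\sum_{n=1}^{k}x_n + M\Bigr) + (x_{k+1}+M)
  = \Bigl(\sum_{n=1}^{k+1}x_n\Bigr) + M,
```

where the middle equality is the inductive hypothesis and the last one is the identity $(x+y)+M = (x+M)+(y+M)$ applied once more.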
February 4th 2009, 11:52 PM #2 | {"url":"http://mathhelpforum.com/calculus/71877-completeness.html","timestamp":"2014-04-20T11:39:35Z","content_type":null,"content_length":"33791","record_id":"<urn:uuid:5aae6772-9e2d-4f46-90a0-a893bd08f190>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
{"url":"http://openstudy.com/users/alfie/asked","timestamp":"2014-04-21T12:40:37Z","content_type":null,"content_length":"81477","record_id":"<urn:uuid:ee075ce4-f60a-4c98-a0b5-09b676ebac01>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- November 2004, week 4 (#303) -- LISTSERV at the University of Georgia
Date: Wed, 24 Nov 2004 10:06:47 -0800
Reply-To: cassell.david@EPAMAIL.EPA.GOV
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: "David L. Cassell" <cassell.david@EPAMAIL.EPA.GOV>
Subject: Re: Calculation of weighted std error of mean
Comments: To: Tim Churches <tchur@OPTUSHOME.COM.AU>
In-Reply-To: <41A37444.5000401@optushome.com.au>
Content-type: text/plain; charset=US-ASCII
Tim Churches <tchur@OPTUSHOME.COM.AU> wrote [originally]:
> I am having some difficulties reproducing the results calculated by PROC MEANS
> for a weighted std error of the mean. I am using the formula provided in the
> SAS documentation (see http://jeff-lab.queensu.ca/stat/sas/sasman/sashtml/proc/zormulas.htm )
> but it is not clear (to me at least) how the std deviation is being calculated
> in the weighted case.
Hi, Tim! Long time no see!
I read your second post, so I see you have this solved. But I wanted to toss in my $0.02 just to muddy the waters. Please bear in mind that the WEIGHT in PROC MEANS doesn't properly address the underlying problems if you have real sample data from a sample survey. If you have the classical problem of a sample from a conceptually infinite population of independent and identically-distributed observations with normal errors, and the weights only represent a summarization for convenience, then PROC MEANS is great. If you have sample survey data from a finite population, where you can't make the traditional 'general linear model' assumptions, then you ought to be using PROC SURVEYMEANS, which has a very different variance estimator.
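For concreteness, here is a sketch of the frequency-weight computation the thread is reverse-engineering: weighted mean, weighted corrected sum of squares divided by n-1 (my reading of the VARDEF=DF convention; treat the exact divisor as an assumption to check against the SAS formula documentation), and the standard error as sqrt(s^2 / sum of weights). This is not the design-based estimator PROC SURVEYMEANS uses.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Weighted mean: sum(w_i * x_i) / sum(w_i).
double weighted_mean(const std::vector<double>& x, const std::vector<double>& w) {
    double sw = 0.0, swx = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) { sw += w[i]; swx += w[i] * x[i]; }
    return swx / sw;
}

// Std error of the weighted mean, frequency-weight convention:
//   s^2 = sum(w_i * (x_i - xbar_w)^2) / (n - 1)   (divisor n-1, i.e. VARDEF=DF)
//   se  = sqrt(s^2 / sum(w_i))
// Reduces to the ordinary standard error when all w_i = 1.
double weighted_stderr(const std::vector<double>& x, const std::vector<double>& w) {
    const double m = weighted_mean(x, w);
    double sw = 0.0, css = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        sw += w[i];
        css += w[i] * (x[i] - m) * (x[i] - m);
    }
    const double s2 = css / (static_cast<double>(x.size()) - 1.0);
    return std::sqrt(s2 / sw);
}
```

With all weights equal to 1 this gives the familiar s / sqrt(n), which is a quick sanity check on any weighted formula you implement.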
David
--
David Cassell, CSC
Cassell.David@epa.gov
Senior computing specialist / mathematical statistician | {"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0411d&L=sas-l&D=1&P=35446&F=P","timestamp":"2014-04-17T09:38:43Z","content_type":null,"content_length":"11307","record_id":"<urn:uuid:8689171c-b20e-406b-9584-397698efce00>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
related rate problem
January 7th 2009, 05:59 PM #1
Jun 2008
The New Year's ball is dropping. It is 120 ft high and is falling at the rate of 3 ft per second. If a person is standing 288 ft from the base of the ball's tower, how fast is the ball's elevation angle changing from that person's point of view?
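A worked solution, under the standard reading of the problem ($\theta$ = elevation angle at the observer, $h$ = height of the ball, with $dh/dt = -3$ ft/s since the ball is falling):

```latex
\tan\theta = \frac{h}{288}
\quad\Longrightarrow\quad
\sec^2\theta\,\frac{d\theta}{dt} = \frac{1}{288}\,\frac{dh}{dt}
\quad\Longrightarrow\quad
\frac{d\theta}{dt} = \frac{288}{288^2 + h^2}\,\frac{dh}{dt},

\left.\frac{d\theta}{dt}\right|_{h=120}
= \frac{288\,(-3)}{82944 + 14400}
= \frac{-864}{97344}
\approx -0.00888\ \text{rad/s} \approx -0.51^\circ/\text{s}.
```

The second implication uses $\sec^2\theta = 1 + \tan^2\theta = (288^2 + h^2)/288^2$.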
{"url":"http://mathhelpforum.com/calculus/67253-related-rate-problem.html","timestamp":"2014-04-19T14:45:59Z","content_type":null,"content_length":"28518","record_id":"<urn:uuid:a66340ce-6982-439d-89e4-931d2cec3a79>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Derived classes from DLL
#pragma once
#include <cmath>
#include "../../LibVer/LibVer DLL/LibVer.h" //Dir of "LibVer.h", "LibVer.dll" and "LibVer.lib"
#ifndef PI
#define PI 3.141592653589793
#define DEG_PER_RAD 57.29577951308232
#define RAD_PER_DEG 0.017453292519943 //pi / 180
#endif
namespace ExMaths{//Extra Maths Library
class __declspec(dllexport) ExMaths: protected LibVer::Version{
public: //members must be public for consumers of the DLL to call them
template<typename T> T* orderSL(int number, T var[]);
template<typename T> T* orderLS(int number, T var[]);
template<typename T> T* orderReverse(int number, T var[]);
template<typename T> void swap(T& var1, T& var2);
double toDegrees(double radians); //Returns degree conversion of given radians
double toRadians(double degrees); //Returns radian conversion of given degrees
	double toBearing(double degrees); //Returns clockwise rotation in degrees from Y+
	double fromBearing(double degrees); //Returns elevation rotation in degrees from X+
double rectifyAngle(double degrees, bool ABS = false); //Limits the degrees of an angle to (-180|180)false, (0|360)true
double polR(double x, double y); //Returns the length from (0, 0) to (x, y)
double polT(double x, double y); //Returns the angle of elevation from (0, 0) to (x, y)
double recX(double radius, double theta); //Returns the X coordinate from a polar point of (radius, angle of elevation)
double recY(double radius, double theta); //Returns the Y coordinate from a polar point of (radius, angle of elevation)
	double pythag(int num, double roots[]); //Applies the Pythagorean theorem to the array of lengths given
	double average(int num, double sums[]); //Calculates the mean average for the array of numbers given
};
}
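For anyone filling in the bodies: a self-contained sketch of a few of the declared conversions, written as free functions so it compiles on its own, independent of the DLL/class machinery above. The angle conventions (theta in degrees, measured counterclockwise from X+) are my reading of the comments in the header, so treat them as assumptions.

```cpp
#include <cassert>
#include <cmath>

namespace sketch {
constexpr double kPi = 3.141592653589793;

double toRadians(double degrees) { return degrees * kPi / 180.0; }
double toDegrees(double radians) { return radians * 180.0 / kPi; }

// Polar radius and angle of the point (x, y); theta is returned in degrees.
double polR(double x, double y) { return std::sqrt(x * x + y * y); }
double polT(double x, double y) { return toDegrees(std::atan2(y, x)); }

// Cartesian coordinates from a polar point (radius, theta in degrees).
double recX(double radius, double thetaDeg) { return radius * std::cos(toRadians(thetaDeg)); }
double recY(double radius, double thetaDeg) { return radius * std::sin(toRadians(thetaDeg)); }
}
```

Using std::atan2 rather than atan(y/x) handles all four quadrants and the x == 0 case without special-casing.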
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/101230/","timestamp":"2014-04-18T10:40:55Z","content_type":null,"content_length":"19890","record_id":"<urn:uuid:3e96f48c-51fa-427f-808c-5a41ee34a54d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |