Mplus Discussion >> Residual variance/covariance
Residual variance/covariance
Greg Roberts posted on Wednesday, January 30, 2008 - 7:46 am
Good morning. I am fitting a glm of reading achievement data from grades K, 1, 3, and 5. Data are stratified, sampling parameters known, including probabilities of selection. Trend is curvilinear. Trying to fit an unconditional model. Several questions:
1. If I model clustering in a single-level model with case-level sampling weights, are the standard errors for the growth parameters adjusted for the effects of grouping?
2. The single-level model fits well, but there is some difficulty with theta & psi, esp. the residual variances for the t1 and t4 scores (significantly negative). Tried piecewise, correlating adjacent errors, autoregressive. The only solution seems to be fixing the t1 and t4 residuals at 0, which gives fit almost as good as the model with warning messages, yields the same parameter estimates/standard errors, and eliminates the warning messages. The model fits the data, but I am not sure how far from "unconditional" I strayed by imposing constraints. Thoughts?
3. I will add a between level, for questions re: school-level effects on student growth. What are the implications of not modeling the growth at the between level, instead using it to model school-level effects on case-level growth? I assume that the standard errors are already adjusted (per question 1) and that my 'solution' in number 2 is the best available, which, if so, is creating problems with the modeling of school-level longitudinal growth. Thoughts?
I appreciate your time.
Linda K. Muthen posted on Thursday, January 31, 2008 - 10:51 am
1. Use TYPE=COMPLEX; with the CLUSTER and WEIGHT options to obtain correct standard errors.
2. Hold the residual variances equal.
3. Modeling both the within and between parts using TYPE=TWOLEVEL provides a fuller set of parameters and a richer analysis. See the description at the beginning of Chapter 9 where the two methods
are compared.
Mechanics - Statics (forces, moments)
I would like to ask for some help with the structure shown in this post. I have to find the maximum (the same magnitude in + and -) of the internal axial force, the internal shear force, and the internal bending moment. I can read all three of these values from the diagrams; an example of such a diagram was attached here.
My problem starts right at the beginning, with setting up the reactions. I wrote the force sum along the x axis and had no major problems with it (α is 30 degrees): Ax - F*cos α = 0, from which I get Ax = 7.36 kN.
But the problem comes in the next step: how do I get Ay and/or B (there is no Bx, so I can write By as B) from the force sum along the y axis, and how do I set up the moment equation? For the y axis I have: Ay + B - F*sin α = 0. Now I somehow have to find B and Ay. For the moments I also have to account for the lengths... How do I set up the moment equation for the reactions in my example? And when I split the beam into segments, there are two of them, right?
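The usual way past this sticking point is a moment equation taken about support A: it eliminates Ax and Ay and leaves B as the only unknown. Here is a minimal Python sketch of that bookkeeping; the span L and the load position d are made-up placeholders, since the actual dimensions are only in the (missing) figure:

import math

# Hypothetical geometry: the post never gives the span lengths, so
# L (distance A to B) and d (distance from A to the load F) are assumptions.
F = 8.5                   # kN, applied load (from Ax = F*cos(30 deg) = 7.36 kN)
alpha = math.radians(30)
L = 4.0                   # m, assumed distance between supports A and B
d = 1.5                   # m, assumed distance from A to the point where F acts

# Sum of forces in x: Ax - F*cos(alpha) = 0
Ax = F * math.cos(alpha)

# Sum of moments about A (counterclockwise positive) removes Ax and Ay:
#   B*L - F*sin(alpha)*d = 0
B = F * math.sin(alpha) * d / L

# Sum of forces in y: Ay + B - F*sin(alpha) = 0
Ay = F * math.sin(alpha) - B

print(f"Ax = {Ax:.2f} kN, Ay = {Ay:.2f} kN, B = {B:.2f} kN")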
Eastlake, CO
Find an Eastlake, CO Calculus Tutor
I have been a math tutor for over 8 years. I have tutored in almost all subjects of math. I have tutored elementary kids in math up to college kids in Calculus.
17 Subjects: including calculus, chemistry, psychology, discrete math
...Mathematics, University of Louisiana, Lafayette, LA B.S. Physics, University of Louisiana, Lafayette, LA M.S. Paleoclimatology, Georgia Institute of Technology, Atlanta, GA Ph.D.
16 Subjects: including calculus, chemistry, physics, GRE
...During freshman year of college I took two semesters of organic chemistry followed by an organic chemistry lab. In addition I have taken other chemistry and related courses such as: Analytical
Chemistry, Thermodynamics, Intro to Quantum and more. I used Microsoft Excel extensively for data analysis during my undergraduate studies.
18 Subjects: including calculus, chemistry, geometry, algebra 1
...Throughout my high school and college careers I was constantly assisting friends, peers, and my younger sister with math and physics homework. I enjoyed helping other students to understand the
subject material they were struggling with. Through helping others, my excitement for learning was enhanced, which inspired me to pursue tutoring opportunities.
13 Subjects: including calculus, physics, geometry, algebra 1
...My promise: your student will get great grades/scores and be less stressed while doing it. Here's what I offer to each student: -Immediate results & customized individual feedback -Keep track
of what your student needs to focus on daily, weekly, and for the whole semester. -The insights, tips, ...
41 Subjects: including calculus, reading, Spanish, English
Homework Help
Posted by mary on Friday, February 22, 2013 at 3:08pm.
Five (5) years ago, you bought a house for $171,000, with a down payment of $30,000, which meant you took out a loan for $141,000. Your interest rate was 5.75% fixed. You would like to pay more on
your loan. You check your bank statement and find the following information:
Escrow payment
Principal and Interest payment
Total Payment
Current Loan Balance
Write a one to two (1-2) page paper in which you address the following:
Part 1
With your current loan, explain how much additional money you would need to add to your monthly payment to pay off your loan in 20 years instead of 25. Decide whether or not it would be reasonable to
do this if you currently meet your monthly expenses with less than $100 left over.
•(a) Explain your strategy for solving the problem.
•(b) Present a step-by-step solution of the problem.
•(c) Clearly state your answer to Part 1. What is your decision?
Part 2
Identify the highest interest rate you could refinance at in order to pay the current balance off in 20 years and determine the interest rate, to the nearest quarter point, that would require a
monthly total payment that is less than your current total payment. The interest rate that you qualify for will depend, in part, on your credit rating. Also, refinancing costs you $2,000 up front in
closing costs.
•(a) Explain your strategy for solving the problem.
•(b) Present a step-by-step solution of the problem.
•(c) Clearly state your answer to Part 2. What is your decision?
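For Part 1, the core tool is the fixed-payment amortization formula, M = P*r / (1 - (1 + r)^-n), with monthly rate r and n monthly payments. A Python sketch of the comparison; the current balance below is a placeholder, since the bank-statement figures were left blank above:

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment M = P*r / (1 - (1+r)**-n) with monthly rate r."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

balance = 130000.00   # placeholder: the real current balance comes from the statement
rate = 0.0575

pay_25 = monthly_payment(balance, rate, 25)
pay_20 = monthly_payment(balance, rate, 20)
print(f"25-year payoff: {pay_25:.2f}/mo, 20-year payoff: {pay_20:.2f}/mo")
print(f"Extra needed per month: {pay_20 - pay_25:.2f}")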
coordinate geometry
November 19th 2005, 12:56 PM #1
Junior Member
Oct 2005
coordinate geometry
In coordinate geometry, how do you find the distance between two points on a line? I'm confident with everything else, well almost. Thanks.
I'll assume you're talking about points and lines in the plane. For 3-space it's the same idea, just a little longer. Also I'm writing in the notation of vector curves, but it can be adapted to standard equations for lines. All capital letters indicate vectors; lowercase are scalar quantities.
So suppose you have a point Q = <x,y> and a line L(t) = P + t*V, where P is a point and V is a direction vector -- assume for simplicity that V is a unit vector. To find the distance from Q to L
is to find the distance from Q to N, where N is a point on the line that minimizes distance to Q. This will happen of course precisely when the vector (Q - N) meets the direction vector V at a
right angle, i.e., when
(Q - N) dot V = 0 (dot is the vector dot product)
i.e., find t such that:
(Q - P - t*V) dot V = 0
= Q dot V - P dot V - t*V dot V = Q dot V - P dot V - t
= (Q - P) dot V - t = 0,
t = (Q - P) dot V
Thus N = L(t), where t = (Q - P) dot V, so simply find the distance from Q to N (the standard Euclidean distance formula).
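A direct translation of the argument above into code (a rough sketch; plain tuples stand in for the vectors):

import math

def point_to_line_distance(q, p, v):
    """Distance from point q to the line L(t) = p + t*v, with v a unit vector."""
    # t = (Q - P) dot V locates the foot of the perpendicular N = L(t)
    t = (q[0] - p[0]) * v[0] + (q[1] - p[1]) * v[1]
    n = (p[0] + t * v[0], p[1] + t * v[1])
    # Standard Euclidean distance from Q to N
    return math.hypot(q[0] - n[0], q[1] - n[1])

# Example: distance from (3, 4) to the x-axis (point (0,0), direction (1,0)) is 4
print(point_to_line_distance((3, 4), (0, 0), (1, 0)))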
Definition of algorithm
An algorithm is a deterministic procedure for accomplishing a goal which, given an initial state, will terminate in a defined end-state. The efficiency of an implementation of an algorithm depends upon its speed, size, and resource consumption. We will discuss definitions, classifications and the history.
What is an algorithm?
No agreed-to definition of "algorithm" exists.
A simple definition: A set of instructions for solving a problem.
The algorithm is either implemented by a program or simulated by a program. Algorithms often have steps that iterate (repeat) or require decisions such as logic or comparison.
A very simple example of an algorithm is multiplying two numbers: on the first computers, with limited processors, this was accomplished by a routine that loops as many times as the value of the first number, adding the second number on each pass. The algorithm translates a method into computer commands.
Algorithms are essential to the way computers process information, because a computer program is essentially an algorithm that tells the computer what specific steps to perform (in what specific
order) in order to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations which
can be performed by a Turing-complete system. Authors who assert this thesis include Savage (1987) and Gurevich (2000):
...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine" ...according to Savage [1987], an algorithm is a
computational process defined by a Turing machine.
Typically, when an algorithm is associated with processing information, data is read from an input source or device, written to an output sink or device, and/or stored for further processing. Stored
data is regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in a data structure.
For any such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be
systematically dealt with, case-by-case; the criteria for each case must be clear (and computable).
Because an algorithm is a precise list of precise steps, the order of computation will almost always be critical to the functioning of the algorithm. Instructions are usually assumed to be listed
explicitly, and are described as starting 'from the top' and going 'down to the bottom', an idea that is described more formally by flow of control.
So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception, and it attempts to describe a task in discrete,
'mechanical' means. Unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from the intuition of 'memory' as a scratchpad.
For some alternate conceptions of what constitutes an algorithm see functional programming and logic programming.
Definitions of algorithms
Blass and Gurevich
Blass and Gurevich describe their work as evolved from consideration of Turing machines, Kolmogorov-Uspensky machines (KU machines), Schönhage machine (storage modification machine SMM), and pointer
machines (linking automata) as defined by Knuth. The work of Gandy and Markov are also described as influential precursors.
Gurevich offers a 'strong' definition of an algorithm (that is summarized here):
Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine. In practice, it would be ridiculous. Can one generalize Turing machines so that any algorithm, never mind how abstract, can be modeled by a generalized machine? But suppose such generalized Turing machines exist. What would their states be? A first-order structure. A particular small instruction set suffices in all cases. Computation could be an evolution of the state, could be nondeterministic, could interact with its environment, could be parallel and multi-agent, could have dynamic semantics. The two underpinnings of their work are Turing's thesis and Tarski's notion of a first-order structure.
The above phrase computation as an evolution of the state differs markedly from the definition of Knuth and Stone, the "algorithm" as a Turing machine program. Rather, it corresponds to what Turing
called the complete configuration, and includes both the current instruction (state) and the status of the tape. Kleene (1952) shows an example of a tape with 6 symbols on it, all other squares are
blank, and how to "Gödelize" its combined table-tape status.
Boolos and Jeffrey
Their definition is:
"Explicit instructions for determining the nth member of a set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a
computing machine, or by a human who is capable of carrying out only very elementary operations on symbols."
Knuth
Knuth (1968, 1973) has given a list of five properties that are widely accepted as requirements for an algorithm:
1. Finiteness: "An algorithm must always terminate after a finite number of steps"
2. Definiteness: "Each step of an algorithm must be precisely defined; the actions to be carried out must be rigorously and unambiguously specified for each case"
3. Input: "...quantities which are given to it initially before the algorithm begins. These inputs are taken from specified sets of objects"
4. Output: "...quantities which have a specified relation to the inputs"
5. Effectiveness: "... all of the operations to be performed in the algorithm must be sufficiently basic that they can in principle be done exactly and in a finite length of time by a man using
paper and pencil"
Knuth offers as an example the Euclidean algorithm for determining the greatest common divisor of two natural numbers.
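As a concrete illustration (a sketch in Python rather than Knuth's MIX), the Euclidean algorithm exhibits all five properties: two inputs, definite elementary steps, an output, and guaranteed termination because the remainder strictly decreases:

def gcd(m, n):
    """Euclid's algorithm: the remainder strictly decreases, so it terminates."""
    while n != 0:
        m, n = n, m % n   # replace (m, n) by (n, m mod n)
    return m

print(gcd(119, 544))  # 17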
Knuth admits that, while his description of an algorithm may be intuitively clear, it lacks formal rigor, since it is not exactly clear what "precisely defined" means, or "rigorously and
unambiguously specified" means, or "sufficiently basic", and so forth. He makes an effort in this direction in his first volume where he defines in detail what he calls the "machine language" for his
"mythical MIX... the world's first polyunsaturated computer".
Many of the algorithms in his books are written in the MIX language. He also uses tree diagrams, flow diagrams and state diagrams.
Markov
A. A. Markov (1954) provided the following definition of algorithm:
1. In mathematics, "algorithm" is commonly understood to be an exact prescription, defining a computational process, leading from various initial data to the desired result....
The following three features are characteristic of algorithms and determine their role in mathematics:
a) the precision of the prescription, leaving no place to arbitrariness, and its universal comprehensibility -- the definiteness of the algorithm;
b) the possibility of starting out with initial data, which may vary within given limits -- the generality of the algorithm;
c) the orientation of the algorithm toward obtaining some desired result, which is indeed obtained in the end with proper initial data -- the conclusiveness of the algorithm.
He admitted that this definition "does not pretend to mathematical precision". His 1954 monograph was his attempt to define algorithm more accurately; he saw his resulting definition -- his "normal"
algorithm -- as "equivalent to the concept of a recursive function". His definition included four major components:
1. Separate elementary steps, each of which will be performed according to one of the given substitution rules.
2. Steps of local nature (the algorithm won't change more than a certain number of symbols).
3. The scheme of the algorithm is a list of rules for the substitution formulas.
4. A means to distinguish a concluding substitution (a final state).
In his Introduction Markov observed that the entire significance for mathematics of efforts to define algorithm more precisely would be in connection with the problem of a constructive foundation for mathematics.
Minsky
Minsky (1967) asserts that an algorithm is an effective procedure, and further on in his text he replaces "algorithm" with "effective procedure". The term is also used by Knuth. Here is his definition of effective procedure:
A set of rules which tell us, from moment to moment, precisely how to behave.
But he recognizes that this is subject to a criticism:
The interpretation of the rules is left to depend on some person or agent.
He made a refinement: to specify, along with the statement of the rules, the details of the mechanism that is to interpret them. To avoid the cumbersome process of having to do this over again for
each individual procedure he hopes to identify a reasonably uniform family of rule-obeying mechanisms. Here is his formulation:
(1) a language in which sets of behavioral rules are to be expressed, and
(2) a single machine which can interpret statements in the language and thus carry out the steps of each specified process.
In the end, though, he still worries that "there remains a subjective aspect to the matter. Different people may not agree on whether a certain procedure should be called effective".
But Minsky is undeterred. He immediately introduces "Turing's Analysis of Computation Process". He quotes what he calls Turing's thesis:
Any process which could naturally be called an effective procedure can be realized by a Turing machine.
(This is also called Church's thesis).
After an analysis of "Turing's Argument" he observes that equivalence of many intuitive formulations of Turing, Church, Kleene, Post, and Smullyan "leads us to suppose that there is really here an
objective or absolute notion".
Stone
Stone (1972) and Knuth (1968, 1973) were professors at Stanford University at the same time so it is not surprising if there are similarities in their definitions:
To summarize ... we define an algorithm to be a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and such that the sequence terminates in
a finite time.
Stone is noteworthy because of his detailed discussion of what constitutes an effective rule: his robot, or person acting as a robot, must have some information and abilities within them, and if not, the information and the ability must be provided in "the algorithm":
For people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, without the need for thought... however, if the
instructions [to solve the quadratic equation, his example] are to be obeyed by someone who knows how to perform arithmetic operations but does not know how to extract a square root, then we must
also provide a set of rules for extracting a square root in order to satisfy the definition of algorithm.
(...) not all instructions are acceptable, because they may require the robot to have abilities beyond those that we consider reasonable.
He gives the example of a robot confronted with the question "Is Henry VIII a King of England?", printing 1 if yes and 0 if no, when the robot has not been previously provided with this information. And worse, if the robot is asked whether Aristotle was a King of England and the robot had only been provided with five names, it would not know how to answer. Thus:
An intuitive definition of an acceptable sequence of instructions is one in which each instruction is precisely defined so that the robot is guaranteed to be able to obey it.
After providing us with his definition, Stone introduces the Turing machine model and states that the set of five-tuples that are the machine's instructions is an algorithm... known as a Turing machine program. Then he says that a computation of a Turing machine is described by stating:
1. The tape alphabet
2. The form in which the parameters are presented on the tape
3. The initial state of the Turing machine
4. The form in which answers will be represented on the tape when the Turing machine halts
5. The machine program.
This is in the spirit of Blass and Gurevich.
Some issues
Expressing algorithms
Algorithms can be expressed in many kinds of notations:
- Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms.
- Pseudocode and flowcharts are structured ways to express algorithms that avoid the ambiguities, while remaining independent of a particular implementation language.
- Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are often used as a way to define or document algorithms.
Must an algorithm halt?
Some writers restrict the definition of algorithm to procedures that eventually finish. Others, such as Kleene, include procedures that could run forever without stopping. Such a procedure has been called a "computational method" by Knuth or a "calculation procedure or algorithm" by Kleene. However, Kleene notes that such a method must eventually exhibit "some object".
Minsky (1967) makes the observation that, if an algorithm hasn't "terminated", then how can we answer the question: "Will it terminate with the correct answer?"
Thus the answer is: undecidable. We can never know, nor can we do an analysis beforehand to find out. The analysis of algorithms for their likelihood of termination is called "Termination analysis".
Algorithm analysis
The terms "analysis of algorithms" was coined by Knuth. Most people who implement algorithms want to know how much of a particular resource, such as time or storage, is required for the execution.
Methods have been developed for the analysis of algorithms to obtain such quantitative answers.
The analysis and study of algorithms is one discipline of computer science, and is often practiced abstractly (without the use of a specific programming language or hardware). But the Scriptol code
is portable, simple and abstract enough for such analysis.
Simple example: multiplication
int multiply(int x, int y)
  int sum = 0
  while y > 0
    let sum + x
    let y - 1
  /while
  return sum

int a = 5
int b = 7
print a, "x", b, "=", multiply(a, b)
• Martin Davis. The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and Computable Functions. New York: Raven Press, 1965.
Davis gives commentary before each article.
• Yuri Gurevich. Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pages 77-111.
Includes bibliography of 33 sources.
• A. A. Markov. Theory of Algorithms. Moscow: Academy of Sciences of the USSR, 1954. Original title: Teoriya algorifmov.
• Marvin Minsky. Computation: Finite and Infinite Machines, First, Prentice-Hall, Englewood Cliffs, NJ, 1967.
• Harold S. Stone. Introduction to Computer Organization and Data Structures, 1972, McGraw-Hill, New York.
Cf in particular the first chapter titled: Algorithms, Turing Machines, and Programs.
The Mathematics of Harmony: From Euclid to Contemporary Mathematics and Computer Science
At Mathfest in 2007, I attended a talk on “Puzzling Probabilities Featuring the Street Game of Craps” by Jack Alexander of Miami Dade College. The mathematical level of the talk was fairly
elementary, but the talk was extremely engaging, and I walked away thinking what a pleasure it was to encounter very familiar mathematics approached in a new way. I had much the same reaction to
Stakhov’s book, which begins with the golden section (the source for what is termed “harmony mathematics”) and ranges widely throughout many areas of mathematics.
As is often the case with works involving the golden section, there are a number of sections which overestimate the significance of the ratio, but these do not detract from the otherwise fine
exposition. The lively treatment of such topics as hyperbolic functions, Fibonacci codes, and non-Euclidean geometry is a welcome collection of diverse topics in a single book.
Where the book goes a bit too far is evident only in the epilogue, where, in a list of conclusions about the significance of “harmony mathematics”, we find the following:
Thus, the neglect of the “golden section” and its associated idea of mathematical harmony is one more “strategic mistake” in not only mathematics and mathematics education, but also theoretical
physics. (p. 625)
We affirm that the Mathematics of Harmony should become a base for the reform of modern mathematical education on the base of the ancient idea of Harmony and golden section. (p. 660)
I’m not convinced that these conclusions are valid, and even less certain that they’ve been adequately justified here. These crankish pronouncements aside, this is a worthwhile collection of
elementary and not-so-elementary results.
Mark Bollman (mbollman@albion.edu) is associate professor of mathematics and chair of the department of mathematics and computer science at Albion College in Michigan. His mathematical interests
include number theory, probability, and geometry. His claim to be the only Project NExT fellow (Forest dot, 2002) who has taught both English composition and organic chemistry to college students has
not, to his knowledge, been successfully contradicted. If it ever is, he is sure that his experience teaching introductory geology will break the deadlock.
Contemporary Mathematics
1993; 350 pp; softcover
Volume: 152
ISBN-10: 0-8218-5181-0
ISBN-13: 978-0-8218-5181-4
List Price: US$64
Member Price: US$51.20
Order Code: CONM/152
This volume contains the proceedings of the AMS-IMS-SIAM Joint Summer Research Conference on Nielsen Theory and Dynamical Systems, held in June 1992 at Mount Holyoke College. Focusing on the
interface between Nielsen fixed point theory and dynamical systems, this book provides an almost complete survey of the state of the art of Nielsen theory. Most of the articles are expository, making
them accessible to both graduate students and researchers in algebraic topology, fixed point theory, and dynamical systems.
Researchers and graduate students in algebraic topology, fixed point theory or dynamical systems.
• Ll. Alsedà, S. Baldwin, J. Llibre, R. Swanson, and W. Szlenk -- Torus maps and Nielsen numbers
• R. F. Brown -- Wecken properties for manifolds
• J. Casasayas, J. Llibre, and A. Nunes -- Lefschetz zeta functions and forced set of periods
• D. Dimovski -- One-parameter fixed point indices for periodic orbits
• A. Fel'shtyn and R. Hill -- Dynamical zeta functions, Nielsen theory and Reidemeister torsion
• J. Franks and M. Misiurewicz -- Cycles for disk homeomorphisms and thick trees
• R. Geoghegan and A. Nicas -- Lefschetz trace formulae, zeta functions and torsion in dynamics
• J. Gilman -- Recent developments in Nielsen theory and discrete groups
• E. Hart -- Local Nielsen fixed point theory and the local generalized \(H\)-Lefschetz number
• B. Jiang -- Nielsen theory for periodic orbits and applications to dynamical systems
• J. Lewowicz and J. Tolosa -- Genericity of homeomorphisms with connected stable and unstable sets
• J. Llibre -- Lefschetz numbers for periodic points
• T. Matsuoka -- The Burau representation of the braid group and the Nielsen-Thurston classification
• C. K. McCord -- Computing Nielsen numbers
• K. Mischaikow -- The structure of isolated invariant sets and the Conley index
• H. Schirmer -- A survey of relative Nielsen fixed point theory
• L. Slutskin -- Classification of lifts of automorphisms of surfaces to the unit disk
• P. Wong -- Equivariant Nielsen fixed point theory and periodic points
van Rees, John - Department of Computer Science, University of Manitoba
• Lotto Designs J. A. Bate and G. H. J. van Rees
• Critical Sets in Back Circulant Latin E.S. Mahmoodian
• V (m, t)'s for m = 3, 4, 5, 6 C. H. A. Ling
• A note on the completion of partial latin Nicholas J. Cavenagh, Diane Donovan
• Self-Dual Codes and the (22,8,4) Balanced Incomplete Block Design R. T. Bilous
• Discrete Mathematics 194 (1999) 8794 Maximal sets of mutually orthogonal Latin squares
• An application of covering designs: determining the maximum consistent set of
• Minimal and Near-Minimal Critical Sets in Back-Circulant Latin Squares
• Transversals in rectangles G. H. J. van Rees, Dept. of Computer Science, University of Manitoba
• (m; 3)-Splitting Systems G.H.J. van Rees and S.J. Lau
• On the spectrum of critical sets in latin squares of Diane Donovan1
• There is no (22, 8, 4) Block Design Richard Bilous, Clement W. H. Lam, Larry H. Thiel,
• Nearly Orthogonal Latin Squares Department of Computer Science
• Constructions and Bounds for (m; t)-Splitting Systems D. Deng and D.R. Stinson
• The CRC Handbook Combinatorial Designs
• Constructions of 2-Cover-Free Families and Related Separating Hash Families
• Covering Designs on 13 Blocks Revisited Greig Consulting
• The Existence of Non-Resolvable Steiner Triple (Ben) Pak Ching Li
• Splitting Systems and Separating Systems Alan C. H. Ling
• An Enumeration of Binary Self-Dual Codes of Length 32 R. T. Bilous
• Enumeration of the Binary Self-Dual Codes of Length R. T. Bilous
• A Note on Critical Sets J. A. Bate and G. H. J. van Rees
• V (m, t) and Its Variants K. Chen1, G. H. J. van Rees2 and L. Zhu3
• The Size of the Smallest Strong Critical Set in a Latin Square
• On {123,124,134}-free Hypergraphs Department of Computer Science
• Lower Bounds on Lotto Designs Pak Ching Li
• Lovasz Local Lemma University of Manitoba Technical Report 08/01
• The Stein-Lovasz Theorem and Its Applications to Some Combinatorial arrays
• New Constructions of Lotto Designs Pak Ching Li
• University of Manitoba Technical Report 06/01 On the spectrum of critical sets in latin squares of
• AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 44 (2009), Pages 183198
• Knight's Tours and Circuits on the 3xn Chessboard (Classroom Notes)
• There is no 2-(22, 8, 4) block design Richard Bilous, Clement W. H. Lam, Larry H. Thiel,
center of mass: functions with respect to y
February 9th 2013, 03:15 PM #1
Junior Member
Aug 2011
center of mass: functions with respect to y
Hi. In Calc 2 I have some homework questions for calculating the moments and center of mass for planar laminae. Fortunately, the book gives the exact equations necessary for the calculations,
when there are functions f(x) and g(x), within bounds a and b:
$M_{x} = \rho\int_a^b\frac{f(x)+g(x)}{2}[f(x)-g(x)]\,dx$
$M_{y} = \rho\int_a^b{x[f(x)-g(x)]\,dx}$
$m = \rho\int_a^b{[f(x)-g(x)]\,dx}$
$\overline{x} = \frac{M_{y}}{m}$
$\overline{y} = \frac{M_{x}}{m}$
However, how do I adjust these formulae when the functions must be functions in respect to y? For example, I have a problem where the area is bounded by
$x = y + 2$
$x = y^2$
Which involves a sideways parabola.
Re: center of mass: functions with respect to y
Okay, after trying for an hour or so to understand how the moments of mass are calculated, I think I understood it, and now this seems pretty simple. It should be...
$M_{y} = \rho\int_{a}^{b}\frac{f(y)+g(y)}{2}[f(y)-g(y)]\,dy$
$M_{x} = \rho\int_{a}^{b}y[f(y)-g(y)]\,dy$
$m = \rho\int_{a}^{b}[f(y)-g(y)]\,dy$
$\overline{x} = \frac{M_{y}}{m}$
$\overline{y} = \frac{M_{x}}{m}$
...where a and b are on the y axis.
Re: center of mass: functions with respect to y
That seems to be correct. You simply exchange x and y, right?
- Hollywood
Re: center of mass: functions with respect to y
It's a little more complicated than that. The definitions of $\overline{x}$ and $\overline{y}$ remain the same, except that the formulae for moment around the y axis and moment around the x axis
get flipped around, and the formulae are all dependent on y instead of x. It is difficult to explain why this makes sense without drawing and illustrating a graph, but it does make sense, and it
has worked well for me on several problems so far.
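For anyone who wants a sanity check on the swapped formulas, a quick sympy sketch applied to the region in the original post (bounded by x = y + 2 and x = y², which meet at y = -1 and y = 2) gives the centroid (8/5, 1/2):

import sympy as sp

y = sp.symbols('y')
f = y + 2        # right boundary x = f(y)
g = y**2         # left boundary  x = g(y)
a, b = -1, 2     # where y + 2 = y**2

m  = sp.integrate(f - g, (y, a, b))                       # mass (rho = 1)
My = sp.integrate(sp.Rational(1, 2)*(f + g)*(f - g), (y, a, b))
Mx = sp.integrate(y*(f - g), (y, a, b))

print(My/m, Mx/m)  # 8/5, 1/2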
A very hard short ellipse question that came in my test today
February 9th 2007, 01:27 AM #1
Senior Member
Jul 2006
Shabu City
A very hard short ellipse question that came in my test today
Given $F(2,3)$
Minor Axis 6
How can I locate the center?
Is this possible? My classmates didn't get it either, because they think the major axis should have been given.
If F signifies "focus", and V is for "vertex", then the vertical line x=2 is where the major axis is. Because a vertex is an endpoint of the major axis and a focus is in the major axis. The major
axis can be described as vertex-focus-center-focus-vertex.
Okay, if that is all that is given, then the center can be (2,0).
"Can be" only, because as your classmates said, the major axis should have been given. As it is, the center can be anywhere from (2,2) down to (2,-infinity), if the locations are in integers.
So why (2,0)?
Because if the center were really at (2,0) and the minor axis is 6 units long, then the "dimensions" of the ellipses are multiples of 3.
Focal length (focus to near vertex) = 3 units.
Major axis = 12 units
Minor axis = 6 units
Lame reason, I know.
let a be the length of the major axis
let b be the length of the minor axis
let e be the distance between centre and one focus
then you know:
1) b = 6
2) a - e = 3 that means a = e + 3
3) $e^2+b^2=a^2$
Plug in the values you know into 3):
You'll get e = 4.5
Therefore the centre C has the coordinates: C(2, -1.5)
Therefore the equation of this ellipse is:
$\frac{(x-2)^2}{36}+\frac{(y+1.5)^2}{56.25}=1$
I've attached a diagram which shows the ellipse
an additional remark
As you have certainly noticed, I took the length of the half minor axis to be 6 units.
If you do the problem word for word, then you must change my solution in two points:
let a be the length of the half major axis
let 2b be the length of the minor axis
let e be the distance between centre and one focus
then you know:
1) 2b = 6 <==> b = 3
2) a - e = 3 that means a = e + 3
3) $e^2+b^2=a^2$
Plug in the values you know into 3):
You'll get e = 0
That means there isn't any eccentricity. You get an "ellipse" with the two foci and the centre in one point: C = F_1 = F_2.
This "ellipse" is a circle with r = a = b = 3 and the centre C(2, 3).
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
Did she make it worth your while at all?
Re: Linear Interpolation FP1 Formula
Yes, she felt it was a fair trade. She also fed me some chicken.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Oh, that doesn't sound too bad then, as long as she did not treat you badly.
Re: Linear Interpolation FP1 Formula
I had feelings for her, she just treated me like her personal repairman. That was difficult for her to understand.
Re: Linear Interpolation FP1 Formula
You are saying that is how the relationship between me and adriana will end up becoming?
Re: Linear Interpolation FP1 Formula
I do not know. I can just say that in my opinion the word relationship means different things to them.
They appear to be more callous about it than people believe.
Re: Linear Interpolation FP1 Formula
I'm not sure if she is juggling any guys though. She seems to have cut me loose.
Re: Linear Interpolation FP1 Formula
They are also very practical. They never dump anyone if they need them for something.
Re: Linear Interpolation FP1 Formula
That's probably the reason adriana claims she really wants to stay in contact with me. Claims that she's "never met anyone like me"... maybe just doesn't want to burn her bridges, she wants to milk
me some more.
Is this one of the longest threads on this forum?
Re: Linear Interpolation FP1 Formula
I do not know, why do you ask.
maybe just doesn't want to burn her bridges, she wants to milk me some more.
They do not think there is anything wrong with that. I am sure that if any of them I knew ever need something done that they think only I can do they will be on that phone dialing my number.
Re: Linear Interpolation FP1 Formula
I am just curious, I haven't seen any threads here that are longer. Does the topic close automatically if you reach a certain number?
So, in an ideal world, they would prefer to have multiple husbands for every purpose, in other words.
Re: Linear Interpolation FP1 Formula
There is no limit to a topic as far as I know.
Originally, despite the modern kaboobly doo, they were brought up to be dependent on guys. That has been going on for a long, long time. In the past that meant marrying young. They do not have to do
that anymore.
Real Member
Re: Linear Interpolation FP1 Formula
Is this one of the longest threads on this forum?
Nope, there's a thread 'THIS or THAT' in the Members only section which is longer
I really wanted to know, why don't you sign up
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Linear Interpolation FP1 Formula
Especially with the rise of feminism, whose aim is to subvert that sort of viewpoint.
adriana didn't reply today, Hannah did though, to my gamma function e-mail I sent 9 days ago.
Re: Linear Interpolation FP1 Formula
Without arguing how silly the concept of feminism is or any similar "ism" masculine or feminine, I think the concept of independence is grossly being abused. We need each other more than ever before
and yet some creatures are preaching individuality above everything else. She may have the illusion that she is independent and self reliant but the reality is she is nearly a basket case who needs a
therapist to get through the day. So much for feminism. It didn't work for us and it ain't going to work for them.
Hannah and the gamma function, sounds like a chapter in the Abramowitz Stegun book.
Re: Linear Interpolation FP1 Formula
R is very fond of feminism. I'm not waving my arms about it, but I do think it's more civil if women aren't treated as being vastly inferior in the workplace for example.
I think I saw that book once, it's pretty comprehensive. Wonder what a revised edition would look like.
Nothing from adriana today, odd. Or maybe not so odd.
Re: Linear Interpolation FP1 Formula
You did not take the bait.
Re: Linear Interpolation FP1 Formula
Is that a bad thing?
Re: Linear Interpolation FP1 Formula
That is good, the next move is her's.
Re: Linear Interpolation FP1 Formula
I've definitely dropped a significant number of places on her priority list.
Re: Linear Interpolation FP1 Formula
Maybe she found somebody better at math than you are. Happened to me!
Re: Linear Interpolation FP1 Formula
That would be annoying. Hopefully it's not the Cyprus BF.
Re: Linear Interpolation FP1 Formula
You can easily retaliate...
Re: Linear Interpolation FP1 Formula
By finding another girl? Well, she may not have found another mathmo. At least, a male one might have been difficult for her.
Re: Linear Interpolation FP1 Formula
You will just have to wait and see. In the meantime I thought you had already decided on a policy of sticking to STEP.
The problem is Cross-Curriculum teaching
, a school might be in trouble for its attempt at cross-curricular teaching ...
The question was a word problem that said, "Each tree had 56 oranges. If eight slaves pick them equally, then how much would each slave pick?" Another math problem said, "If Frederick got two
beatings per day, how many beatings did he get in one week?"... (District spokeswoman) Roach explained the teachers were trying to incorporate social studies lessons into the math problems, which
is something the school district encourages. But the problem with the questions is there is no historical context.
No, the problem is that when you are teaching in the 21st Century, you need to pull your head out of the hole in the ground (or out of your behind ... some have trouble telling the difference). Much
better to use word problems such as these (provided as a public service by the Curmudgeon Math Project, LLC.)
1. If an elementary teacher makes three stupid decisions per day, how many days will it be until she is fired? For extra credit, make a diorama of the classroom using macaroni and tongue depressors.
2. If four teachers collectively have an IQ of 380, what is the average IQ of a teacher in this school system? For full credit, don't forget to show your work - text your answers to 1-802-IDIOTIC.
3. When a teacher chooses to write beyond her intelligence, how many irate parents will it take to get it all written up in the national press? Answers must be posted to Twitter because this teacher
is obviously a twit herself. Use the hashtag #LowGradeMoron
4. Write a paragraph explaining why posting a nude picture of herself on Facebook page would have been a better career move. How many reposts will she get if 3500 people see it every three minutes?
5. Use the MAKEYOURSELFAFATPIG program on your smartphones to figure out the total amount of ice cream eaten by 26 students who eat 3 scoops of ice cream each, if each scoop of ice cream costs $23
and uses $18,000 worth of 21st Century Technology.
6. Mrs Barnettt has three real daughters and one imaginary one, if she can claim one extra week of vacation in Costa Rica each time she claims that a daughter died, how many weeks will she be
spending on the unemployment line in Costa Rica?
Or you could stick to your strengths,
stop trying to be clever,
and ... just ... teach ... math.
2 comments:
1. It's a bit early, but I think this post is in the running for a "best of the year" award.
And thanks for the link :)
2. There are places where this works and places where it doesn't.
In my mind, every teacher should be a writing teacher. Certainly teachers of physics, economics and chemistry can teach some reinforcing math.
What you are exposing is silliness, but it doesn't have to be silliness.
Strategy for overclocking a CPU to its limit! Will this work?
September 8, 2008 12:18:49 AM
I will be buying a new semi-gaming rig in 2 weeks and I just wanted to see what's the fastest way to (safely) get the most juice out of an overclock.
I've read the great guide that was stickied about overclocking C2D, but I want to know if this could be a faster (and more brute) way to find the limits of the CPU in a system.
I'd greatly appreciate any comments and suggestions (and reasonable flaming ;p) for the plan below...
- Part A) Finding the maximum vcore by CPU temp monitoring (assuming temp is mostly dependent on vcore)
1) Pump vcore to the max (e.g. 1.3625 V for E7200) & keep CPU at stock speed
2) Start the torture test on Prime95
3) Monitor the CPU temp
4) If the CPU temp exceeds its limit, reboot & lower vcore
5) Return to 2) until the CPU temp on max load is acceptable, then fix system at this vcore
- Part B) Finding the maximum clock speed for the current vcore
1) Increase clock speed to as high as machine is bootable
2) Start the torture test on Prime95 (include error check)
3) If errors pop up in less than ~24 hours, then reduce clock speed
4) If nothing goes wrong, then you got it!
5) Double check temperature to make sure it's not cooking the chip...
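In loop form, the plan amounts to two downward scans (a purely illustrative Python sketch; the stub functions stand in for manual BIOS changes plus Prime95 and temperature monitoring, which obviously can't be scripted like this, and every constant is made up):

def max_load_temp(vcore, clock_ghz):
    return 30 + 40 * vcore * clock_ghz / 3.0        # stub "thermal model"

def prime95_stable(vcore, clock_ghz):
    return clock_ghz <= 2.8 + 2.0 * (vcore - 1.1)   # stub "stability limit"

TEMP_LIMIT = 70.0            # deg C; pick per the chip's spec

# Part A: back vcore down from the maximum until load temps are acceptable
vcore, stock_clock = 1.3625, 2.53
while max_load_temp(vcore, stock_clock) > TEMP_LIMIT:
    vcore -= 0.0125          # one step at a time

# Part B: back the clock down from "barely boots" until the stub reports stable
clock = 4.0
while not prime95_stable(vcore, clock):
    clock -= 0.05

print(f"settled at vcore = {vcore:.4f} V, clock = {clock:.2f} GHz")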
September 8, 2008 5:20:51 AM
Welcome to Tom's. Nice summary, but unfortunately, you've over-simplified the process. As has already been pointed out, there is no overclocking "easy" button!
Please read graysky's Overclocking Guide at the top of this Forum: HOWTO: Overclock C2Q (Quads) and C2D (Duals) - Guide v1.6.1.
He's already taken a great deal of time, and made more than a significant effort to explain all of this in great detail for the benefit of everyone.
If you know something that graysky hasn't already included in his Guide, then please feel free to enlighten us. If you've worked through his Guide and haven't been successful, then we'll be happy to help you.
If you have questions about temperatures, then check out the Temperature Guide below in my signature.
September 8, 2008 9:50:29 AM
That method will probably get you no further with overclocking than I get, which is not far. Then again, I don't aim for stability. There are lots of other things to take into account, memory timings
as previously mentioned, but possibly also chipset quirks, especially if you are lucky enough to be using something like P965.
September 8, 2008 11:44:44 AM
V3NOM said:
lol randomizer you don't care about stability?
I'm running an E6600 at 2.7GHz now, anything more doesn't really net me any more performance so I can't justify the time spent overclocking "properly". I did hit 4GHz, it was unstable enough to crash
Abit uGuru but run CPU-Z and Paint. That was at 1.725V on air. Pretty much my overclocks are to get nice CPU-Z screenshots, and to save time I stick to the following simple rule:
Moar powa!
This is why I'm sticking to 65nm CPUs
Note to OP: Don't try 1.725V on air with that E7200 if you are indeed using one. In fact, don't try it on water either.
September 8, 2008 12:34:02 PM
LOL! Yep. There are a LOT of people out there bragging about their big overclocks, which wouldn't run a program stable for more than 30 seconds if your mother's life depended on it. It's a fact.
To the OP: your method may indeed work to get you started, but overclocking is not an exact science. Every motherboard, every processor, every stick of memory, although binned the same, is slightly different and will set up some unique circumstances to overcome and tweak through to get the very highest speeds possible, stable.
September 8, 2008 12:41:39 PM
thx for the welcomes & suggestions
Of course I have read and learnt tons from graysky's guide (how could I not? ;P). However it seems to me that he stressed more on minimizing the core voltage rather than getting the most out of the
In his guide, he aims to find the lowest working voltages for a given clock speed, and you can see that it's obvious that he's totally running out of patience towards the end with his settings.
As far as I know, the CPU temp (btw, thx for your great guide too computronix) is one of the dominant factors that limits how far you can overclock (besides things like the limits of the CPU, RAM, and other things). So I was thinking, why not start from there?
My plan was to find the outermost limits, then back down to a stable level.
Most likely, I will also need to do voltage minimizing after finding the right clock speed.... but anyways... ;p
@Jdoc: thanks for helping me confirm that heat is clock-speed dependent. I think I can implement some steps in my overclocking strategy so that this is taken care of too.
September 8, 2008 12:45:50 PM
pcxxy said:
My plan was to find the outermost limits, then back down to a stable level.
Well the outermost limits are pretty far, and dangerous. You should probably decide how far you want to go for now and work on that. With your chip, 3.2-3.4GHz is where you'll see massive increases
in voltage needed for diminishing returns on speed.
pcxxy said:
Most likely, I will also need to do voltage minimizing after finding the right clock speed.... but anyways... ;p
Pretty much. The lower the better for any given speed, as long as it's stable.
September 9, 2008 6:58:54 AM
Please read post #19 (shinigamiX) from the ever-eloquent enlightenments of the legendary JumpnJack - Electromigration:
... in doing a search I found a 'term paper' on a college server by some student; it appears that the student actually did a great job summarizing the whole electromigration thing. I really like this paper as it does a good job going into basic detail without a huge technical overhead:
Particularly, see figure 2-1 on page 12, it essentially summarizes electromigration
in a simple picture. I will refer to this PDF in this reply.
Other articles on electromigration to get the interest going (some are subscription based, but a library would also have them):
[...] 7_7_97.PDF
[...] er=1044338
[...] ion=detail
And my favorite:
[...] er=1493069
This one concerns ESD induced electromigration failures as latent ESD failures that can shorten the lifetime to as little as a few weeks.
Ok, now to the answer ----
The answer is actually simpler than you think.... the short of it is, increasing frequency also increases the total current through the device, hence, the metal lines will experience higher current
density and higher electromigration degradation.
Here is the explanation .... those who have watched me post know I am keen on td = CV/I, where td is gate delay, C is total capacitance, V of course is voltage, and I is current or Idsat, drive current. This is a fundamental equation describing the max switching speed of a device. However, in this form and this way of thinking, td is the dependent variable and is a function of C, V, and I -- all three of which are design parameters. What happens after we have optimized the process and nothing changes any longer --- then we rearrange this equation and, in this case, look at I as the independent variable:
I = CV/td or CV*(1/td)
But 1/td is one over time, which is frequency, f. So ---
I = CV*f
Thus, since C is fixed by the oxides, wires, and transistors in the CPU, V is dialed by you the user, and f is set by the clock generator, then I is a direct function of frequency AND voltage.
In the link above, electromigration lifetimes are modeled by Black's equation (see link above):
tf = A * (1/J)^n * EXP(Ea/kT)
A is material dependent, J is the current density, which is I per unit cross-sectional area of the wire, n is an empirically determined exponent, Ea is activation energy, k is the Boltzmann constant,
and T is temperature. Key here is current density J: as the current goes up, so does the electromigration factor.
So really, the increase of electromigration with frequency is no more than an increase in current driven by the frequency generator. Pretty simple.
Side note: to validate my I = CV*f equation, recall that I also post many times that the equation for dynamic power is P = C*V^2*f; well, with a little algebra, check this out ---
P = I*V ===> fundamental electrical power equation.
Substitute the expression for current as a function of frequency from my argument above,
P = (CV*f)*V = C*V^2*f. Wow, now we see where the dynamic power equation comes from, and that power really goes as a cube of the 'speed' fundamental variables --- 2 orders in voltage and 1 order in frequency.
EDIT: NOTE --- though volting and clocking up your CPU can increase the rate of electromigration, the time scale here is still VERY long. Pausert20 is correct: the heavier Cu atoms, and the lower
resistance and lower wire temperature that result, have pretty much eliminated electromigration problems. They still exist, but the lifetime is so long that it makes little difference given the
turnover rate at which we (meaning enthusiasts) typically upgrade....
TDDB is probably the most common failure mode when suiciding a chip. Notice that it has the same exponential form as electromigration; they are both first-order effects.
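To get a rough feel for the scaling Jack describes, here is a minimal Python sketch (not from the original thread; the exponent n and activation energy below are generic textbook-style placeholders, so treat the output as an order-of-magnitude illustration only):

# Relative electromigration lifetime from Black's equation, with the current
# density scaling as J ~ I = C*V*f per the argument above.
import math

def relative_em_lifetime(v_ratio, f_ratio, t_stock_k, t_oc_k, n=2.0, ea_ev=0.9):
    """Ratio of overclocked MTTF to stock MTTF: tf = A * J**-n * exp(Ea/kT)."""
    k_ev = 8.617e-5                  # Boltzmann constant, eV/K
    j_ratio = v_ratio * f_ratio      # current density scales with V and f
    arrhenius = math.exp(ea_ev / (k_ev * t_oc_k) - ea_ev / (k_ev * t_stock_k))
    return j_ratio ** (-n) * arrhenius

# Example: +10% vcore, +25% clock, die temperature 45 C -> 60 C
print(relative_em_lifetime(1.10, 1.25, 318.15, 333.15))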
I hope this answers everyone's questions, as this information pertains directly to 65 nanometer processors, but indirectly to High K Gate 45 nanometer processors, since the 45's are still too new upon
which to draw conclusions.
September 9, 2008 1:02:55 PM
o... electromigration... it's like when the ****ans illegally step into another country... ;p
anyhows, you asked 'why overvolting'?
that's to determine how far the cooling of the system can take me, so I can get the highest clock speed that I can. it's not like i'm going to run it at 1.3625V at 2.56 GHz as my permanent
settings... (unless maybe when it gets too cold in the winter ;p) - and from my understanding, these chips don't get fried instantly... no?
I know that there are other limitations to clock speeds, and diminishing returns of clock speed gains for each additional mV, but I just want to see how high the temperature gets...
maybe a better modification of my overclocking plan is to set the voltage at a moderately high value (1.3X V) with some arbitrarily high clock speed, and see how hot/stable it gets... then go further or
ease off depending on what you get....
but if i did that it would be just the same as the long journey that graysky took... where he obviously ran out of patience towards the end of his clocking journey (but i still love his guide ;p)
September 21, 2008 7:35:06 PM
So I've bought my system (two in fact, one for my friend and I would be OC'ing for him soon)...
CPU: Intel C2D E7200 (SLAVN)
HSF: OCZ Vendetta 2 (with stock fan at max speed)
Thermal Compound: MX-2
Motherboard: GA-EP45-DS3L (BIOS version F8)
RAM: 2 x 1 GB DDR2 800 MHz, running at 4-4-4-15 @ 2.0V (OCZ2P800R21G)
Video Card: Palit HD4850 512 MB (the dual slot non-reference version)
HDD: Seagate 500 GB, 32 MB cache, SATA2
Optical Drive: Pioneer DVR-216D
PSU: OCZ600W SXS
Case: Antec Three Hundred
I spent half a day finding a stable RAM voltage for the 4-4-4 timings and a stable CPU clock speed, and I managed to get a semi-stable clock of 3.8 GHz with 9.5 x 400 MHz, at a BIOS CPU voltage of
1.3625 V, with core temperatures of 32//53C at idle//load (by RealTemp).
Owing to Vdroop, the voltage that feeds into the CPU is around 1.312 V to 1.328 V depending on load. Since Vdroop is designed to protect the system, I won't try to get around it (nor will I go to
higher voltages because I will be keeping this box for years).
At 3.8 GHz, the system is stable for at least 8 hours while idling, but does crash within a few hours under load.
Now I backed down to 9 x 400 MHz, and torture testing with Prime95 (small FFT) for about 5 hours with no errors (still running), and I will see if a lower voltage will keep my system happy. =)
Not bad for half a day of probing around eh? ^^
@1.3625V (BIOS) = 1.328-1.312V (CPU-Z) -> No errors after 9 hr of small FFT
@1.35V (BIOS) = 1.312-1.296V (CPU-Z), -> No errors after 10 hr of small FFT test
@1.325V (BIOS) = 1.296-1.28V (CPU-Z), -> still running... has been stable for 30min
load temps (RealTemp) are dropping from 53 to 47 (3 degrees due to lower voltage, 3 degrees due to ambient temp change)
September 29, 2008 1:48:22 AM
sounds good.
so i've found my minimum stable voltage for running at 3.6 GHz
unfortunately, Prime95 kept giving me errors in large FFT and blend torture tests after a few hours. I know it's not my RAM because I ran memtest86+ and it passed 23 cycles. I know it's not my hard
drive because I've done a surface scan and there are no bad sectors. I've even tried upping the northbridge (edit: MCH) voltage from 1.1V to 1.3V, but that didn't help either... would anyone know what's going on?
in case you guys wonder, I've tried OC'ing my friend's computer and right off the bat I took it to 3.8 GHz at 1.3625V (set in BIOS)! Unfortunately the temps (by RealTemp) were 72C, so it looks like the
HSF wasn't installed properly... but otherwise it looks like he's got a better chip xDD
October 3, 2008 1:19:37 PM
i'm running stock timings for my RAM, which is 4-4-4-15, and i haven't played around with subtimings (although for the motherboard's memory performance setting, i chose the 'extreme' profile instead of
standard or turbo... the default is 'turbo', but i should test it with standard at least once too...)
the voltage i'm currently using is 2.00V (the default voltage is 1.9-2.1V, but up to 2.2V is fine). This is the minimum voltage required to be free from errors in memtest. I've tried 2.10V too, but I
still had errors in Prime95.
In case you don't want to scroll up, my FSB is 400MHz and I'm running in synchronous mode, so that's not even overclocking my sticks.
Anyways, i'm going to run it at looser timings and a lower performance profile first, and see if things are ok and what's going on.
update: in the BIOS, at 5-5-5-15, 2.00V, with the "standard" RAM performance enhancement profile, i get errors with blend tests after an hour or so when using an FFT length of 896K.
i should probably try one RAM stick at a time this weekend... -.-;
Forest (meta-analysis) plot
Menu location: Graphics_Forest (Cochrane).
This plots a series of lines and symbols representing a meta-analysis or overview analysis.
StatsDirect uses a line to represent the confidence interval of an effect (e.g. odds ratio) estimate. The effect estimate is marked with a solid black square. The size of the square represents the
weight that the corresponding study exerts in the meta-analysis; this is the Mantel-Haenszel weight.
The pooled estimate is marked with an unfilled diamond that has an ascending dotted line from its upper point. Confidence intervals of pooled estimates are displayed as a horizontal line through the
diamond; this line might be contained within the diamond if the confidence interval is narrow. You may define more than one pooled effect estimate to represent sub-groups (use a value > 0 in the
pooling indicator to do this).
To prepare a forest plot in StatsDirect you must first enter a list of effect estimates in a workbook. You must also prepare matching columns of lower and upper confidence limits. Thus we have three
columns of equal length in a workbook for these data. You can also prepare a matching column of sample sizes or weights but this is optional.
A further optional column can be used to indicate which of the effect estimates and their confidence limits are pooled. A pooling indicator of 0 represents an individual study, < 0 (e.g. -1)
indicates an overall pooled result (there should be only one) and > 0 (e.g. 1) indicates a pooled subgroup. If no pooling indicator variable is selected then all are assumed to be individual studies.
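For readers who want to reproduce this kind of plot outside StatsDirect, here is a minimal Python/matplotlib sketch mirroring the column layout described above (effect, lower limit, upper limit, pooling indicator). It is illustrative only, with made-up numbers, and it simplifies the StatsDirect symbol conventions:

import matplotlib.pyplot as plt

effects = [1.8, 0.9, 1.4, 1.2]    # e.g. odds ratios, one per study
lower   = [1.1, 0.5, 0.9, 1.0]    # lower confidence limits
upper   = [2.9, 1.6, 2.2, 1.5]    # upper confidence limits
pooled  = [0, 0, 0, -1]           # -1 flags the overall pooled estimate

fig, ax = plt.subplots()
for row, (e, lo, hi, p) in enumerate(zip(effects, lower, upper, pooled)):
    y = len(effects) - row                    # draw top to bottom
    ax.plot([lo, hi], [y, y], color="black")  # confidence interval line
    ax.plot(e, y, "D" if p != 0 else "s", color="black")  # diamond if pooled
ax.axvline(1.0, linestyle=":", color="grey")  # line of no effect
ax.set_xlabel("Odds ratio")
ax.set_yticks([])
plt.show()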
This plot can be put to other uses. Please note that you can annotate it using an external drawing package such as Microsoft Draw. To annotate a StatsDirect graph in Microsoft Word just copy it from
StatsDirect to Word using the clipboard and double click on it in Word.
Note that L'Abbé plots can be more useful than the plots above for exploring the heterogeneity of effects in a meta-analysis (Song, 1999).
Montara Precalculus Tutor
...I have always enjoyed teaching, and have tutored throughout my time in college and graduate school. As an undergrad at Harvey Mudd, I helped design and teach a class on the software and
hardware co-design of a GPS system, which was both a challenging and rewarding experience. I offer tutoring for all levels of math and science as well as test preparation.
27 Subjects: including precalculus, chemistry, calculus, physics
...Some of them were scoring in the mid-500s when I started working with them. I bring my full attention and dedication to the students that I work with. Since every person is unique, I personalize my
approach to each student.
14 Subjects: including precalculus, calculus, statistics, geometry
...What are the chances that at least two people in your probability class have the same birth date? Let's work together and find out. I am an actuary and work with probabilities on a daily basis.
11 Subjects: including precalculus, calculus, geometry, algebra 1
...It is a stepping stone to prepare for Calculus. Trigonometry plays an important role in Pre-Calculus. I keep a check on the progress of the students through tests and homework.
17 Subjects: including precalculus, calculus, geometry, statistics
...As part of this job, I was trained in and provided materials for each of these topics. I often find, when working with my students, that an important component of the tutoring is attention to
these skills in addition to the specific subject areas for which tutoring had been requested. I have a ...
20 Subjects: including precalculus, calculus, Fortran, Pascal | {"url":"http://www.purplemath.com/Montara_Precalculus_tutors.php","timestamp":"2014-04-21T14:50:46Z","content_type":null,"content_length":"23855","record_id":"<urn:uuid:7b9e0e07-de74-44d9-bb8a-e2b63c02c63a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is a Module over a Lie algebroid?
Let $\alpha: \mathfrak g_A \to T_{A/k}$ be a Lie algebroid over a $k$-algebra $A$. Numerous facts about it and its universal enveloping algebra come from the theory of rings of differential operators on
$A$. A generalization of the theory of D-modules has been used to characterize modules over $\mathfrak g_A$ (e.g. Sophie Chemla's paper on the inverse image functor for Lie algebroids). But I couldn't
find a good reference for the notion itself. What is a $\mathfrak g_A$-module?
I don't know this stuff, so I will leave only a comment. I think that there is some disagreement over the correct notion of module. In particular, my friend Alfonso Gracia-Saz
math.toronto.edu/alfonso has some papers where they look towards a more general notion of module. The test is whether the Lie algebroid is a module over itself; in some older definitions, it is not. – Theo
Johnson-Freyd May 14 '10 at 20:09
Yes, the adjoint action will not give a module structure over the Lie algebroid itself because it is not $A$-linear. Thank you for the paper. – lemin May 14 '10 at 23:31
A $\mathfrak{g}_A$-module $M$ is a $k$-module endowed with structures of $A$-module and $\mathfrak{g}_A$-module satisfying the compatibility equations $(ax)m = a(xm)$ and $x(am) = x(a)m +
a(xm)$ for any $a\in A$, $x\in\mathfrak{g}_A$, and $m\in M$. Here $x(a)$ denotes the action of $\mathfrak{g}_A$ in $A$, while the three other actions are denoted by $ax$, $am$, and $xm$.
A $\mathfrak{g}_A$-module is the same thing as a module over the enveloping algebra $U_A(\mathfrak{g}_A)$.
Since Leonid already gave the definition, let me give a reference: Beilinson and Bernstein's A Proof of the Jantzen Conjectures.
Simple Algebraic Equations
Date: 5/26/96 at 15:58:8
From: Anonymous
Subject: Simple Algebraic Equations
I know this is a pretty broad topic but I just can't seem to
understand how you can logically say that 10 + 5x = 110 and have
x equals -30 when logically x should equal 20. I just don't
Date: 5/28/96 at 0:7:25
From: Doctor Ken
Subject: Re: Simple Algebraic Equations
You're absolutely right, in the equation 10 + 5x = 110, the correct
answer for x is 20. Whenever you solve an equation, you should be
able to take your answer and plug it back into the equation and have
it make sense. So, if you've got the equation 5x = 80 and you solve
it and get x = 3, then you know something isn't right when you plug in
3 for x and get 5 x 3 = 80, which (last time I checked) is way off.
So how can we "show" that the solution to your equation is x = 20?
Well, here's one way:
10 + 5x = 110 subtract 10 from both sides
5x = 100 divide both sides by 5
x = 20
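One modern way to double-check an answer like this is to have a computer substitute each candidate value back in. A minimal Python sketch (an aside, not part of the original 1996 exchange):

# Substitute each candidate for x into 10 + 5x = 110.
for x in (20, -30):
    print(x, 10 + 5 * x == 110)   # True only for x = 20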
Does that make sense?
-Doctor Ken, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/57346.html","timestamp":"2014-04-16T14:10:49Z","content_type":null,"content_length":"5948","record_id":"<urn:uuid:32ceac42-91b0-4633-b6e8-999c1ae04cdf>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Title page for ETD etd-122006-144358
A study is presented detailing the simulation of a drag-free follow-on mission to NASA’s Gravity Recovery and Climate Experiment (GRACE). This work evaluates controller performance, as well as
thrust, power, and propellant mass requirements for drag-free spacecraft operation at orbital altitudes of 160 – 225 kilometers. In addition, sensitivities to thermospheric wind, GPS signal
accuracy and availability of ephemeris data are studied. Orbital dynamics were modeled in Matlab and take into account two-body gravity effects, J2-J6 non-spherical Earth effects, atmospheric drag
and control thrust. A drag model is used in which the drag acceleration is a function of the spacecraft’s relative velocity to the atmosphere, and a “drag parameter,” which includes the
spacecraft’s drag coefficient and local mass density of the atmosphere. A MSISE-90 atmospheric model is used to provide local mass densities as well as free stream flow conditions for a Direct
Simulation Monte Carlo drag analysis used to validate the spacecraft drag coefficient.
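To make the drag model concrete, here is a minimal Python sketch of the acceleration described above. It is illustrative only: the thesis's implementation is in Matlab with MSISE-90 densities, and the Cd, area, and mass values below are placeholders rather than the study's numbers.

import numpy as np

def drag_acceleration(v_rel, rho, cd=2.2, area=1.0, mass=500.0):
    """a_drag = -0.5 * rho * (Cd*A/m) * |v_rel| * v_rel  [m/s^2],
    with v_rel the spacecraft velocity relative to the atmosphere."""
    v_mag = np.linalg.norm(v_rel)
    return -0.5 * rho * (cd * area / mass) * v_mag * v_rel

# e.g. near 160 km altitude: rho on the order of 1e-9 kg/m^3, v_rel ~ 7.8 km/s
print(drag_acceleration(np.array([7800.0, 0.0, 0.0]), 1.2e-9))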
The controller is designed around an onboard inertial sensor which uses a freely floating reference mass to measure deviations in the spacecraft position, resulting from non-gravitational forces,
from a desired target orbit. Thruster (control actuator) models are based on two different Hall thrusters for providing the orbital along-track acceleration, colloid thrusters for the normal
acceleration, and a miniature xenon ion thruster (MiXI) for the cross-track acceleration.
The most demanding propulsion requirements correspond to the lowest altitude considered, 160 kilometers. At this altitude the maximum along-track thrust component is calculated to be 98
millinewtons with a required dynamic (throttling) response of 41 mN/s. The maximum position error at this altitude was shown to be in the along-track direction with a magnitude of 3314.9
nanometers and a peak spectral content of 1800 nm/sqrt(Hz) at about 0.1 Hz. At 225 kilometers, the maximum along-track thrust component reduces to 10.3 millinewtons. The maximum dynamic response
at this altitude is 4.23 mN/s. The maximum along-track position error is reduced to 367.9 nanometers with a spectral content peak of 40 nm/sqrt(Hz) at 0.1 Hz. For all altitudes, the maximum state
errors increase as the mission length increases, however, higher altitude missions show less of a maximum displacement error increase over time than those of lower orbits.
The ability of a colloid thruster to control the normal drift is found to be dependent on how frequently the spacecraft state data is updated. Reducing the period between updates from 10 seconds
to 1 second reduces the maximum normal state error component from 199 nanometers to less than 32 nanometers, suggesting that spacecraft state update frequency could be a major driver in keeping
the spacecraft on the target trajectory. Sensitivity of maximum required thrust and accumulated sensor error to measurement uncertainty is found to be less of a driver than state update
A "worst case" thermospheric wind gust was modeled to show the increase in propulsion requirements if such an event were to occur. At 200 kilometers, maximum winds have been measured to be an
increase of 650 m/s in the westward direction in the southern pole region. Assuming the majority of the 650 m/s gust occurs over a 4 second time span, the maximum required cross-track thrust at
200 kilometers increases from 1.12 to 2.01 millinewtons. This large increase may drive the thruster choice for a drag-free mission at a similar altitude.
For the spacecraft point design considered with a propellant mass fraction of 0.18, the mission lifetime for the 160 km case was calculated to be 0.76 years. This increases 2.27 years at an
altitude of 225 km. | {"url":"http://www.wpi.edu/Pubs/ETD/Available/etd-122006-144358/","timestamp":"2014-04-16T13:28:12Z","content_type":null,"content_length":"7166","record_id":"<urn:uuid:bc89ea37-1cac-47fa-a14b-b47cdd457e9d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
success arrangement
In how many ways can the letters of SUCCESS be arranged such that no two C's or no two S's are together?
Let $\mathcal{S}$ be the set of all rearrangements in which no two S's are together. Let $\mathcal{C}$ be the set of all rearrangements in which no two C's are together. The number in $\mathcal{C}$ is
$\|\mathcal{C}\|=\binom{6}{2}\frac{5!}{3!}$. Now you must calculate $\|\mathcal{C}\cup\mathcal{S}\|.$
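A brute-force check of that count for $\|\mathcal{C}\|$ (an illustrative Python aside, not from the original thread): placing the C's in 2 of the 6 gaps around S, S, S, U, E gives $\binom{6}{2}\cdot 5!/3! = 300$.

from itertools import permutations

perms = set(permutations("SUCCESS"))                      # 420 distinct words
no_cc = sum(1 for p in perms if "CC" not in "".join(p))   # no two C's adjacent
print(no_cc)   # 300, matching binom(6,2) * 5!/3!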
Simple Polynomial Multiplication (page 1 of 3)
Sections: Simple multiplication, "FOIL" (and a warning), General multiplication
There were two formats for adding and subtracting polynomials: "horizontal" and "vertical". You can use those same two formats for multiplying polynomials. The very simplest case for polynomial
multiplication is the product of two one-term polynomials. For instance:
• Simplify (5x^2)(–2x^3)
I've already done this type of multiplication when I was first learning about exponents, negative numbers, and variables. I'll just apply the rules I already know:
(5x^2)(–2x^3) = –10x^5
The next step up in complexity is a one-term polynomial times a multi-term polynomial. For example:
• Simplify –3x(4x^2 – x + 10)
To do this, I have to distribute the –3x through the parentheses:
–3x(4x^2 – x + 10)
= –3x(4x^2) – 3x(–x) – 3x(10)
= –12x^3 + 3x^2 – 30x
The next step up is a two-term polynomial times a two-term polynomial. This is the simplest of the "multi-term times multi-term" cases. There are actually three ways to do this. Since this is one of
the most common polynomial multiplications that you will be doing, I'll spend a fair amount of time on this.
• Simplify (x + 3)(x + 2)
The first way I can do this is "horizontally"; in this case, however, I'll have to distribute twice, taking each of the terms in the first parentheses "through" each of the terms in the second
parentheses:
(x + 3)(x + 2)
= (x + 3)(x) + (x + 3)(2)
= x(x) + 3(x) + x(2) + 3(2)
= x^2 + 3x + 2x + 6
= x^2 + 5x + 6
This is probably the most difficult and error-prone way to do this multiplication. The "vertical" method is much simpler. First, think back to when you were first learning about multiplication. When
you did small numbers, it was simplest to work horizontally, as I did in the first two polynomial examples above:
3 × 4 = 12
But when you got to larger numbers, you stacked the numbers vertically and, working from right to left, took one digit at a time from the lower number and multiplied it, right to left, across the top
number. For each digit in the lower number, you formed a row underneath, stepping the rows off to the left as you worked from digit to digit in the lower number. Then you added down.
For instance, you would probably not want to try to multiply 121 by 32 horizontally, but it's easy when you do it vertically:
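(The vertical work, reconstructed in plain text since the original image did not survive:)

      121
    x  32
    -----
      242      <- 2 x 121
     363       <- 3 x 121, shifted one place left
    -----
     3872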
You can multiply polynomials in this same manner, so here's the same exercise as above, but done "vertically" this time:
• Simplify (x + 3)(x + 2)
I need to be sure to do my work very neatly.
I'll set up the multiplication:
...and then I'll multiply:
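(Again reconstructing the missing images in plain text:)

        x + 3
      x x + 2
      -------
       2x + 6        <- 2(x + 3)
  x^2 + 3x           <- x(x + 3), shifted one column left
  ------------
  x^2 + 5x + 6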
I get the same answer as before: x^2 + 5x + 6
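For students with a computer handy, a quick sketch using SymPy (not part of the original lesson) confirms that both methods agree:

from sympy import symbols, expand

x = symbols("x")
print(expand((x + 3) * (x + 2)))   # x**2 + 5*x + 6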
Elgin, IL Algebra 1 Tutor
Find an Elgin, IL Algebra 1 Tutor
...I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes. I have also tutored Geometry and Calculus students. I have helped my nephew with his math in the past.
7 Subjects: including algebra 1, geometry, algebra 2, trigonometry
...I have helped students of all ages backgrounds and abilities make studying more efficient. Discrete math is a catch-all term encompassing many diverse areas of mathematics. There is no
universal agreement as to what constitutes discrete math.
67 Subjects: including algebra 1, chemistry, Spanish, English
...Send me a message. Hope to hear from you soon! Elizabeth. As a biology major at Elmhurst College, I was required to take a general biology course that included a good deal of genetics in the
14 Subjects: including algebra 1, chemistry, reading, elementary math
...I have extensive experience with MS Excel for over 18 yrs now. I have developed Sales Plans, Funnels, Sales and Product Analytics, Pricing Models in Excel. I did PreAlgebra, Algebra I, Calculus
I and Calculus II as well as Discrete Math in my undergrad with straight As.
22 Subjects: including algebra 1, reading, English, precalculus
...I am able to walk you through the application process, give you an overview of prospective schools, and provide academic instruction to optimize student performance in the three key areas
considered for admission: 7th grade classroom grades (reading, math, science and social studies), 7th grade ...
38 Subjects: including algebra 1, Spanish, reading, statistics | {"url":"http://www.purplemath.com/elgin_il_algebra_1_tutors.php","timestamp":"2014-04-21T12:43:30Z","content_type":null,"content_length":"23794","record_id":"<urn:uuid:25b2bf5c-62ce-49d0-baa8-8d03b3f21bb8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
Scalar by vector multiplication
March 9th 2011, 04:23 PM
Scalar by vector multiplication
Show why scalar multiplicative inverse property (∀a ≠ 0) (∃ b) ab=1 has no analogue using vector-by-scalar multiplication.
I'm not sure how to go about this. Help would be appreciated (Nod) I know what the two symbols ∀ ∃ represent, but not sure what to do for this question.
March 9th 2011, 05:13 PM
I don't know what the question is even asking. There is no notion of the "multiplicative inverse" of a vector (well... not in the sense of the vector space structure anyway).
March 10th 2011, 02:08 AM
Show why scalar multiplicative inverse property (∀a ≠ 0) (∃ b) ab=1 has no analogue using vector-by-scalar multiplication.
I'm not sure how to go about this. Help would be appreciated (Nod) I know what the two symbols ∀ ∃ represent, but not sure what to do for this question.
1. If you have real numbers, you know that for every number $a \in \mathbb{R} \setminus \{0\}$ there exists a number $\frac1a$, called the reciprocal of a, such that $a \cdot \frac1a = 1$.
2. If you have the scalar product of two vectors the result is not a vector but a single real number. This relation $2\ vectors \longrightarrow real\ number$ is not invertible.
As a practical consequence you are not allowed to calculate
$\vec a \cdot \vec b = 1~\implies~\vec a = \dfrac1{\vec b}$
March 10th 2011, 02:38 AM
But this question was about "vector by scalar multiplication" (which I take to mean scalar multiplication), not the dot product.
In order to have a multiplicative inverse, you must first have a multiplicative identity- but there is no vector, v, such that, for every scalar, a, av = a, because the product is a vector, not a
scalar. There is a scalar multiplicative identity, 1, so that 1v = v for every vector, v, but we cannot get that as the result of a multiplication because, again, the product is a vector, not a scalar.
Green Acres, FL Algebra 2 Tutor
Find a Green Acres, FL Algebra 2 Tutor
...In order to do well, students must UNDERSTAND the concepts presented not merely “get the answer.” I have had great success teaching the concepts as well as the mechanics in a fun and
interesting way. You’re never too old or too young to laugh while learning math! Geometry uses some different skills from those used in algebra.
7 Subjects: including algebra 2, calculus, geometry, algebra 1
I have twenty-five years of progressively more responsible experience in the industry, in capacities varying from Estimator and Project Engineer to Vice President, Chief Operating Officer, and
President, so I can assist you in estimating and scheduling classes. I can also help you to pass the contractor's exam (CGC) in the state of Florida. I can also tutor basic math, English, and marketing
...I have traveled to many countries to understand cultures and assimilate different teaching methods and share with professional peers. Traveling can broaden your horizons in thinking and
reasoning. There is always a different way of doing things.
16 Subjects: including algebra 2, Spanish, physics, calculus
...Conjugation, grammar, vocabulary, conversation, writing, whatever you need help with, I am willing to help! While some parts of learning a language involve memorization, I am willing to work
with you and help create fun, interesting methods and strategies to enforce the lesson. I believe in small quizzes to assess progression, and to key in towards a student's difficulties.
10 Subjects: including algebra 2, Spanish, English, writing
...I can make appointments all days except Mondays and Fridays. We can meet in the public library, your home or my home. I'm willing to offer a discounted rate if you travel to me.
7 Subjects: including algebra 2, chemistry, physics, geometry | {"url":"http://www.purplemath.com/Green_Acres_FL_algebra_2_tutors.php","timestamp":"2014-04-17T01:08:45Z","content_type":null,"content_length":"24535","record_id":"<urn:uuid:b28d40dc-2116-4da3-aefe-11cefb44e469>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
Choosing subsets with half the elements in common
Let $S=\lbrace 1,2,3,\ldots, 2^n\rbrace$ for some $n\ge 2$. What is the maximum cardinality of a set $S'$ of $2^{n-1}$-element subsets of $S$ such that every pair of elements of $S'$ has exactly $2^
{n-2}$ elements in common?
The answer might be $2^n - 1$. This problem came up a while ago while I was working on something larger. I skirted around it, but it always bugged me.
Certainly Hadamard designs (related to Hadamard matrices) will give that as a lower bound. If you view it as a combinatorial linear algebra problem, I think that will give you an exact answer.
Gerhard "Don't Ask For Linear Algebra" Paseman, 2013.03.27 – Gerhard Paseman Mar 27 '13 at 17:04
Look up the Frankl--Wilson and Ray-Chaudhuri--Wilson theorems. – Boris Bukh Mar 27 '13 at 17:41
To flesh out the Hadamard matrix proof:
1. Suppose the desired number is $m - 1$.
2. Fill in the top row of an $m\times 2^n$ matrix with all $1$.
3. Each subsequent row represents an element of $S'$. If $i$ is in the $j^{th}$ element of $S'$, then the matrix entry $(j+1,i)$ is $1$, and otherwise $-1$.
4. The rows are orthogonal to each other.
5. Orthogonal sets are linearly independent.
6. The largest linearly independent set has cardinality equal to the dimension, which here is $2^n$ since that is the length of our row vectors.
7. Thus our number is $m-1=2^n -1$.
8. Not too hard to show that such a matrix exists. For example, see Sylvester's construction http://en.wikipedia.org/wiki/Hadamard_matrix#Sylvester.27s_construction
add comment
Interest Rate Word Problem
Part of a sum of $5400 is invested in 8% bonds and the remainder in 6.5% tax-free bonds. The combined annual interest from these bonds is $375. How much money is invested in 8% bonds?
the solution in the back of the book is 1600.00.
but i really need to understand the process of solving. any help will be much appreciated. thank you.
Re: Interest Rate Word Problem
I've figured out the problem after completing some quick research on the subject. In my opinion the simplest solution to the problem is not using a chart and just attempting to translate the words
directly into an equation.
(5400 - x)(.065) + (.08)x = 375
The wording in this problem is very confusing, but if you solve that equation for x you will get 1600. Inserting 1600 back into the equation will yield the true statement 375 = 375.
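For anyone who wants to verify it by machine, a quick check with SymPy (an illustrative aside, not from the original thread):

from sympy import symbols, solve

x = symbols("x")
print(solve((5400 - x) * 0.065 + 0.08 * x - 375, x))   # [1600.0]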
symplectic geometry
Symplectic geometry is a branch of differential geometry studying symplectic manifolds and some generalizations; it originated as a formalization of the mathematical apparatus of classical mechanics
and geometric optics (and the related WKB-method in quantum mechanics and, more generally, the method of stationary phase in harmonic analysis). A wider branch including symplectic geometry is
Poisson geometry and a sister branch in odd dimensions is contact geometry. A special and central role in the subject belongs to certain real-like half-dimensional submanifolds, called lagrangian (or
Lagrangean) submanifolds, which are in some sense classical points. Symplectic geometry radically changed after the 1985 article of Gromov on pseudoholomorphic curves and the subsequent work of Floer
giving birth to symplectic topology or “hard methods” of symplectic geometry.
Zoran Škoda: it is true that the large influence of Weinstein's program cannot be overestimated, but it is not the origin of these considerations; it rather builds up on earlier fundamental works of
Kirillov, Kostant, and Souriau, who invented geometric quantization, all of them originally in the symplectic context; the flourishing of the subject from the mid-1960s till the mid-1980s is related to their
work, and to other related tracks of Guillemin, Sternberg, Kashiwara, Karasev, Arnold and so on; and to vast developments in harmonic analysis and representation theory (Kostant, Auslander, Vogan, Wallach,
Stein…), microlocal analysis (Kashiwara, Saito, Hörmander, Maslov, Karasev, Duistermaat…), integrable systems/quantum groups (this is more into more general Poisson geometry: Lie-Poisson groups,
classical r-matrices, bihamiltonian systems…), and related approaches to quantization (Berezin's method, coherent states…).
• T-duality and (related) mirror symmetry interchange the symplectic data and complex algebraic geometry data. Some cases of both the symplectic and complex algebraic picture can be unified and
studied in a broader concept of generalized complex geometry.
∞-Chern-Simons theory from binary and non-degenerate invariant polynomial
(adapted from Ševera 00)
Introductions include
• Rolf Berndt, An introduction to symplectic geometry (pdf)
Discussion from the point of view of homological algebra of abelian sheaves is in
• C. Viterbo, An introduction to symplectic topology through sheaf theory (2010) (pdf)
The notion of symplectic geometry may be understood as the mathematical structure that underlies the physics of Hamiltonian mechanics. A classical monograph that emphasizes this point of view is
For more on this see Hamiltonian mechanics.
Symplectic geometry is also involved in geometric optics, geometric quantization and the study of oscillatory integrals and microlocal analysis. Books concentrating on these topics include
• Victor Guillemin, Shlomo Sternberg, Geometric asymptotics, Amer. Math. Soc. 1977 free online
• Sean Bates, Alan Weinstein, Lectures on the geometry of quantization, pdf
• J.J. Duistermaat, Fourier integral operators, Progress in Mathematics, Birkhäuser 1995 (and many other references at microlocal analysis).
• Alan Weinstein, Symplectic geometry, (survey) Bull. Amer. Math. Soc. 5 (1981), 1-13, doi
• N. R. Wallach, Symplectic geometry and Fourier analysis, Math. Sci. Press, Brookline, Mass., 1977.
• Wikipedia, Symplectic geometry
ISRN Geometry
Volume 2011 (2011), Article ID 505161, 16 pages
Research Article
On Almost φ-Lagrange Spaces
Department of Mathematics, University of Allahabad, Allahabad 211 002, India
Received 12 October 2011; Accepted 13 November 2011
Academic Editors: A. Belhaj and M. Margenstern
Copyright © 2011 P. N. Pandey and Suresh K. Shukla. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We initiate a study on the geometry of an almost φ-Lagrange space (APL-space in short). We obtain the expressions for the symmetric metric tensor, its inverse, semispray coefficients, solution curves
of Euler-Lagrange equations, nonlinear connection, differential equation of autoparallel curves, coefficients of the canonical metrical d-connection, and h- and v-deflection tensors in an APL-space.
Corresponding expressions in a φ-Lagrange space and an almost Finsler Lagrange space (AFL-space in short) have also been deduced.
1. Introduction
In the last three decades, various meaningful generalizations of Finsler spaces have been considered. These generalizations have been found much applicable to mechanics, theoretical physics,
variational calculus, optimal control, complex analysis, biology, ecology, and so forth. The geometry of Lagrange spaces is one such generalization of the geometry of Finsler spaces, which was
introduced and studied by Miron [1, 2]. He [1, 2] introduced the most natural generalization of Lagrange spaces, named the generalized Lagrange space. Since the introduction of Lagrange spaces and
generalized Lagrange spaces, many geometers and physicists have been engaged in the exploration, development, and application of these concepts [3–13]. Antonelli and Hrimiuc [14, 15] introduced a
special type of regular Lagrangian called a φ-Lagrangian. Applications of such Lagrangians have been discussed by Antonelli et al. in the monograph [16]. In the present paper, we generalize the notion
of the φ-Lagrangian and introduce the concept of almost φ-Lagrange spaces. We hope that the results obtained in the paper will be interesting for researchers working on the application of Lagrange
spaces in various fields of science.
Let be an n-dimensional Finsler space, and let φ be a smooth function. The composition defines a differentiable Lagrangian. This was regarded by Antonelli and Hrimiuc [14, 15] as the φ-Lagrangian
associated to the Finsler space . They [14] proved that if the function φ has the following properties: then is a regular Lagrangian and thus is a Lagrange space, called a φ-Lagrange space.
In this paper, we consider a more general Lagrangian as follows: where is the same as discussed earlier, is a covector, and is a smooth function.
In Section 2, we show that if the function φ has the properties (1.1), then is a regular Lagrangian and the pair is a Lagrange space, which we term an almost φ-Lagrange space (shortly APL-space).
An APL-space reduces to a φ-Lagrange space if and only if and .
If , then the Lagrangian in (1.2) takes the form This defines a regular Lagrangian, and the pair is called an almost Finsler Lagrange space (shortly AFL-space). Such a Lagrange space was introduced by
Miron and Anastasiei (vide Chapter IX of [17]).
We take Henceforth, we will indicate all the geometrical objects related to by a small circle put over them.
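The displayed formulas did not survive the text extraction. A plausible reconstruction of (1.1)-(1.3), based on the φ-Lagrange construction of [14, 15] and the AFL-space of Chapter IX of [17] (the notation is inferred, not the authors' original typography):

$$\varphi'(t) \neq 0, \qquad \varphi'(t) + 2t\,\varphi''(t) \neq 0, \tag{1.1}$$
$$L(x,y) = \varphi\big(F^2(x,y)\big) + A_i(x)\,y^i + U(x), \tag{1.2}$$
$$L(x,y) = F^2(x,y) + A_i(x)\,y^i + U(x), \tag{1.3}$$

where $F$ is the fundamental function of the Finsler space, $A_i(x)$ is the covector and $U(x)$ the smooth function mentioned above; (1.3) is (1.2) with $\varphi = \mathrm{id}$.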
In a Finsler space, the geodesics, parameterized by arc length (the extremals of the length integral), coincide with the extremals of action integral or with the autoparallel curves of the Cartan
nonlinear connection [16]: where These geodesics are the integral curves of the spray [16] (i.e., 2 p-homogeneous):
equalities: In a general Lagrange space , the geodesics are the extremals of the action integral and coincide with the integral curves of the semispray [17, 18] (i.e., may not be a spray): As in a
Finsler space, a remarkable nonlinear connection can be considered in a Lagrange space: Such nonlinear connection is a canonical nonlinear connection [17, 18] as it depends only on the fundamental
function of the Lagrange space.
In general, the autoparallel curves of are different from the geodesics of (cf. [17]).
Given a nonlinear connection on a Lagrange space , there is a unique h- and v-metrical d-connection (cf. [17, 19]) with torsions and , called the canonical metrical d-connection. This connection is
linear and its coefficients are given by where is the Lagrange differentiation operator.
If is the Cartan connection of the Finsler space , then its coefficients are given by where .
The h- and v-deflection tensor fields and , respectively, of a Lagrange space are defined by (cf. [19]) where and , respectively, denote the h- and v-covariant derivatives with respect to .
If is the h-deflection tensor field and is the v-deflection tensor field of the Finsler space , then where and , respectively, denote the h- and v-covariant derivatives with respect to .
2. Almost φ-Lagrange Spaces
As discussed earlier, we consider the Lagrangian given by (1.2) in which the function φ satisfies (1.1). We prove that it is a regular Lagrangian and that the pair is a Lagrange space, which we term
an almost φ-Lagrange space (APL-space in short).
Theorem 2.1. If the function satisfies the conditions (1.1), then , given by (1.2), is a regular Lagrangian and is a Lagrange space.
Proof. Differentiating (1.2) partially with respect to , we get Again differentiating (2.1) partially with respect to , we obtain which, in view of (1.4), provides Now In view of (2.4), (2.3) takes
the form Under the hypothesis, the matrix is invertible and its inverse is (see Lemma 6.2.2.1, page 891 in [20]) This proves the theorem.
Remarks 1. (i) If and in (1.2), then expression (2.5) remains unchanged. Hence, the symmetric metric tensor of a φ-Lagrange space is the same as that of an APL-space.
(ii) If , then and . Hence, the symmetric metric tensor of an AFL-space coincides with that of the associated Finsler space.
3. Semispray, Integral Curves of Euler-Lagrange Equations
In this section, we obtain the coefficients of the canonical semispray of the APL-space and deduce corresponding expressions for a φ-Lagrange space and an AFL-space. Next, we obtain the differential
equations whose solution curves are the integral curves of the Euler-Lagrange equations in an APL-space. We deduce the corresponding differential equations for a φ-Lagrange space and an AFL-space.
If we differentiate (1.2) partially with respect to , we have Differentiating (3.1) partially with respect to , we obtain which, in view of (2.4), takes the form Using (3.1) and (3.3) in (1.10), we
have where is electromagnetic tensor field of the potentials .
Applying (2.6) in (3.4) and using , and (by Eulerβ s theorem on homogeneous functions), we obtain Using (1.7) in (3.6) and simplifying, we get Thus, we have the following.
Theorem 3.1. The canonical semispray of an APL-space has the local coefficients given by where are the local coefficients of the spray of .
For a φ-Lagrange space, and . Hence, from (3.5), we have . Therefore, (3.7) reduces to Thus, we may state the following.
Corollary 3.2 (see [14]). The canonical semispray of a φ-Lagrange space becomes a spray and coincides with that of the associated Finsler space.
For an AFL-space, (see Remark (ii)). Hence, (3.7) takes the form Thus, we have the following.
Corollary 3.3 (see [17, 20]). The canonical semispray of an AFL-space has the local coefficients given by (3.10).
In a Lagrange space, the integral curves of the Euler-Lagrange equations: are the solution curves of the equations [20] Using (3.7) in (3.12), we obtain where .
Using (1.9) (a) in (3.13), we have Thus, we have the following.
Theorem 3.4. In an APL-space , the integral curves of the Euler-Lagrange equations are the solution curves of (3.14).
For a φ-Lagrange space, equations (3.14) take the following simple form: This enables us to state the following.
Corollary 3.5 (see [14]). In a φ-Lagrange space, the integral curves of the Euler-Lagrange equations are the solution curves of (3.15).
For an AFL-space, . Therefore, equations (3.14) become where .
Thus, we have the following.
Corollary 3.6 (see [17, 20]). In an AFL-space, the integral curves of the Euler-Lagrange equations are the solution curves of (3.16).
4. Nonlinear Connection, Autoparallel Curves
In this section, we find the coefficients of the nonlinear connection of an APL-space and obtain the differential equations of the autoparallel curves of the nonlinear connection. Corresponding
results have been deduced for a φ-Lagrange space and an AFL-space.
Partial differentiation of (2.5) with respect to yields Using (3.7) in (1.11) and taking (1.9) (b), (2.6), (4.1), , , and into account, we obtain If we take the last expression becomes that is, where
Thus, we have the following.
Theorem 4.1. The canonical nonlinear connection of an APL-space has the local coefficients given by (4.5).
For a φ-Lagrange space, we have and and hence . Therefore, (4.5) reduces to Thus, we have the following.
Corollary 4.2 (see [14]). The canonical nonlinear connection of a φ-Lagrange space coincides with the nonlinear connection of the associated Finsler space.
For an AFL-space, (4.3) reduces to and hence (4.6) gives Therefore, (4.5) takes the form Thus, we have the following.
Corollary 4.3 (see [17, 20]). The canonical nonlinear connection of an AFL-space has the local coefficients given by (4.10).
Transvecting (4.5) by and using , we obtain where .
The autoparallel curves of the canonical nonlinear connection of a Lagrange space are given by the following system of differential equations (vide [20]): Equations (4.12), in view of (4.11), take
the form Thus, we have the following.
Theorem 4.4. The autoparallel curves of the canonical nonlinear connection of an APL-space are given by the system of differential equations (4.13).
For a φ-Lagrange space, and hence . Therefore, (4.13) reduces to Thus, we have the following.
Corollary 4.5 (see [14]). The autoparallel curves of the canonical nonlinear connection of a φ-Lagrange space are given by the system of differential equations (4.14).
For an AFL-space, and hence, by virtue of , we have . Therefore, equations (4.12) take the form Thus, we deduce the following.
Corollary 4.6 (see [17, 20]). The autoparallel curves of the nonlinear connection of an AFL-space are given by the system of differential equations (4.16).
If we compare (3.14), (3.15), and (3.16), respectively, with (4.13), (4.14), and (4.16), we observe that, in an APL-space as well as in an AFL-space, the solution curves of the Euler-Lagrange
equations do not coincide with the autoparallel curves of the canonical nonlinear connection, whereas in a φ-Lagrange space they do. Therefore, in a φ-Lagrange space, geodesics are autoparallel
curves, whereas in an APL-space and in an AFL-space they are not.
5. Canonical Metrical d-Connection
Let be the canonical metrical d-connection of the APL-space , and let be the Cartan connection of the associated Finsler space . In this section, we obtain the expressions for the coefficients of and
we investigate some of its properties. We deduce corresponding results for a φ-Lagrange space and an AFL-space.
Using (4.1) in (1.13) and taking (1.15) into account, we find
For any -class function , taking , we have which, in view of (see proposition 9.4, page 1037 of [20]), gives Since (see proposition 9.4, page 1037 of [20]), we have If we operate on (2.5) and utilize
(5.3) and (5.4), it follows that In view of , (4.5), and , we get which, on account of (4.1) and (5.5), becomes Using (5.7) in (1.12) and taking (1.14) and into account, we obtain Equations (5.1) and
(5.8) enable us to state the following.
Theorem 5.1. The coefficients of the canonical metrical d-connection of an APL-space are given by (5.1) and (5.8).
For a φ-Lagrange space, . Hence, (5.1) remains unchanged, whereas (5.8) reduces to Thus, we have the following.
Corollary 5.2 (see [14]). The coefficients of the canonical metrical d-connection of a φ-Lagrange space are given by (5.1) and (5.9).
For an AFL-space, , and . Therefore, we have and .
In view of these facts, (5.1) reduces to whereas (5.8) gives the following: where is given by (4.9). Thus, we have the following.
Corollary 5.3 (see [17, 20]). The coefficients of the canonical metrical d-connection of an AFL-space are given by (5.10) and (5.11).
Now, we investigate some properties of the canonical metrical d-connection of an APL-space and deduce the corresponding properties for a φ-Lagrange space and an AFL-space.
Theorem 5.4. The canonical metrical d-connection of an APL-space has the following properties: where , where , where .
Proof. (1) Using (5.8) and (4.5) in (1.16), we have which, in view of (1.18), reduces to Next, if we use (2.5) in , then it follows that Now, applying successively , (4.5), and in and keeping (5.8)
and (5.18) in view, we have Differentiating partially with respect to , we have Also, In view of (5.3), we have Using (5.20), (5.21), and (5.22) in (5.19), we obtain which, in view of (5.4), gives
the desired result.
(2) Using (5.1) in (1.17), we get where .
In view of (5.20) and (5.21), it follows, from , that that is, as is totally symmetric.
(3) Utilizing successively , (4.5), and in , we get Using (1.2) and (2.1) in (5.26), we have which, in view of (5.3), gives Using and (5.18) in (5.28) and keeping (4.5) in view, we find If we take ,
then the last expression takes the form Next, using (2.1) in , we get which, in view of (5.18), gives the required result.
Corollary 5.5 (see [14]). The canonical metrical d-connection of a φ-Lagrange space has the following properties: where ,
Proof. Applying , , and in Theorem 5.4, we have the corollary.
Corollary 5.6. The canonical metrical d-connection of an AFL-space has the following properties: where ,
Proof. Using , and in Theorem 5.4, we have the corollary.
S. K. Shukla gratefully acknowledges the financial support provided by the Council of Scientific and Industrial Research (CSIR), India.
1. R. Miron, "A Lagrangian theory of relativity I," Analele Ştiinţifice ale Universităţii Al. I. Cuza din Iaşi Secţiunea I a Matematică, vol. 32, no. 2, pp. 37–62, 1986.
2. R. Miron, "A Lagrangian theory of relativity II," Analele Ştiinţifice ale Universităţii Al. I. Cuza din Iaşi Secţiunea I a Matematică, vol. 32, no. 3, pp. 7–16, 1986.
3. B. Tiwari, "On generalized Lagrange spaces and corresponding Lagrange spaces arising from a generalized Finsler space," The Journal of the Indian Mathematical Society, vol. 76, no. 1–4, pp. 169–176, 2009.
4. C. Frigioiu, "Lagrangian geometrization in mechanics," Tensor, vol. 65, no. 3, pp. 225–233, 2004.
5. G. Zet, "Applications of Lagrange spaces to physics," in Lagrange and Finsler Geometry, vol. 76, pp. 255–262, Kluwer Academic, Dordrecht, The Netherlands, 1996.
6. G. Zet, "Lagrangian geometrical models in physics," Mathematical and Computer Modelling, vol. 20, no. 4-5, pp. 83–91, 1994.
7. M. Anastasiei and H. Kawaguchi, "A geometrical theory of time dependent Lagrangians: I, Non-linear connections," Tensor, vol. 48, pp. 273–282, 1989.
8. M. Anastasiei and H. Kawaguchi, "A geometrical theory of time dependent Lagrangians: II, M-connections," Tensor, vol. 48, pp. 283–293, 1989.
9. M. Anastasiei and H. Kawaguchi, "A geometrical theory of time dependent Lagrangians: III, Applications," Tensor, vol. 49, pp. 296–304, 1990.
10. M. Postolache, "Computational methods in Lagrange geometry," in Lagrange and Finsler Geometry, Applications to Physics and Biology, vol. 76, pp. 163–176, Kluwer Academic, Dordrecht, The Netherlands, 1996.
11. S. I. Vacaru, "Finsler and Lagrange geometries in Einstein and string gravity," International Journal of Geometric Methods in Modern Physics, vol. 5, no. 4, pp. 473–511, 2008.
12. S. Vacaru and Y. Goncharenko, "Yang-Mills fields and gauge gravity on generalized Lagrange and Finsler spaces," International Journal of Theoretical Physics, vol. 34, no. 9, pp. 1955–1980, 1995.
13. V. Nimineţ, "New geometrical properties of generalized Lagrange spaces of relativistic optics," Tensor, vol. 68, no. 1, pp. 66–70, 2007.
14. P. L. Antonelli and D. Hrimiuc, "A new class of spray-generating Lagrangians," in Lagrange and Finsler Geometry, Applications to Physics and Biology, vol. 76, pp. 81–92, Kluwer Academic, Dordrecht, The Netherlands, 1996.
15. P. L. Antonelli and D. Hrimiuc, "On the theory of φ-Lagrange manifolds with applications in biology and physics," Nonlinear World, vol. 3, no. 3, pp. 299–333, 1996.
16. P. L. Antonelli, R. S. Ingarden, and M. Matsumoto, The Theory of Sprays and Finsler Spaces with Applications in Physics and Biology, vol. 58, Kluwer Academic, Dordrecht, The Netherlands, 1993.
17. R. Miron and M. Anastasiei, The Geometry of Lagrange Spaces: Theory and Applications, vol. 59, Kluwer Academic, Dordrecht, The Netherlands, 1994.
18. L. Kozma, "Semisprays and nonlinear connections in Lagrange spaces," Bulletin de la Société des Sciences et des Lettres de Łódź, vol. 49, pp. 27–34, 2006.
19. M. Anastasiei, "On deflection tensor field in Lagrange geometries," in Lagrange and Finsler Geometry, Applications to Physics and Biology, vol. 76, pp. 1–14, Kluwer Academic, Dordrecht, The Netherlands, 1996.
20. P. L. Antonelli, Ed., Handbook of Finsler Geometry, Kluwer Academic, Dordrecht, The Netherlands, 2001.
Curve Calibration
For details on the methodology see
Fries, Christian P.: Curves: A Primer. Definition, Calibration and Application of Rate Curves. (December 27, 2012). http://ssrn.com/abstract=2194907.
The calibration of the model is performed as a multi-threaded global optimization, so the calibration greatly profits from a multi-core architecture. It uses as many threads as there are degrees of freedom (e.g., calibrating 5 curves, each with 20 points, will use up to 100 threads).
You may explore the algorithm via a spreadsheet (user perspective) or programmatically by checking out the source code (developer perspective).
Spreadsheet interface to Java class CalibratedCurves.
The spreadsheets are given in Excel (xls) and OpenOffice (ods) format.
In order to run the spreadsheet you have to install the Java Object Handler for Spreadsheets, "Obba".
Calibration of curves (see package net.finmath.marketdata.model.curves).
The sheet calibrates a set of different curves (including discounting curves (e.g., OIS) and forward curves) from swaps. Swaps may feature different discounting curves (e.g., OIS discounting).
Forward curves can be calibrated to standard swaps and tenor basis swaps. Discount curves may be calibrated to standard swaps or cross-currency basis swaps.
Although the specific algorithm used is a calibration and not a classical bootstrap, this is sometimes referred to as curve bootstrapping.
Source Code
Source code is available from the finmath lib repository, see http://www.finmath.net/java.
Below you find a short description about the classes involved in the calibration algorithm. If you like to explore the source code:
• Checkout finmath lib (for example via Eclipse and the subversion repostitory).
• Run the class the class CalibrationTest from the package net.finmath.tests.marketdata.curves.
• Inspect that class.
The calibration framework consists of three parts:
• The implementation of curves (discount curve, forward curve).
• The implementation of calibration products (swap leg, swap).
• The implementation of a solver, wrapping curves into an parameter object and wrapping calibration products into an objective function object.
The curves provide methodology for creating an interpolating curve from a set of points via different interpolation methods on different interpolation entities.
A forward curve can interpolate on the forward value by using its associated discount curve.
Curves are aggregated in an analytic model, which is a collection of curves (Map<String,Curve>) which can be used to evaluate the products.
Calibration Products
Products are objects carrying the properties of the product, including the names of the curves they reference. They provide a function taking an analytic model (mapping curve names to Curve objects) and returning the value of the product.
Calibration (Solver)
The object CalibratedCurves takes a set of calibration specifications and creates all the required curves and calibration products. The result of this process is
• A model, basically a collection of curves, used to value the calibration products.
• A collection of curves to calibrate, this may be a smaller subset of the model.
• A collection of calibration products.
It performs the calibration against the calibration products, finding the best-fitting set of curves to calibrate.
Products are valued using an AnalyticModel, which is just a collection of curves (Map<String,CurveInterface>).
The actual optimization is performed by the class Solver. This class returns a modified clone of the provided model containing calibrated versions of the curves (the original model and the original curves are not modified).
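As a rough illustration of how these pieces fit together, here is a Java sketch of the calling pattern. Treat it as a sketch only: the import locations, the contents of each CalibrationSpec, and the curve name below are assumptions for illustration, not the authoritative API; CalibrationTest (referenced below) shows real usage for the installed version.

```java
// Package locations are assumed; check your finmath lib version.
import net.finmath.marketdata.calibration.CalibratedCurves;
import net.finmath.marketdata.model.AnalyticModelInterface;
import net.finmath.marketdata.model.curves.CurveInterface;

public class CurveCalibrationSketch {
	public static void main(String[] args) throws Exception {
		// One CalibrationSpec per market instrument (swaps, tenor basis swaps,
		// cross-currency basis swaps, ...). The spec arguments (product type,
		// schedules, referenced curve names, target value) are omitted here.
		CalibratedCurves.CalibrationSpec[] specs = new CalibratedCurves.CalibrationSpec[] {
			// new CalibratedCurves.CalibrationSpec(...),
		};

		// CalibratedCurves creates all required curves and calibration products
		// and runs the (multi-threaded) Solver internally.
		CalibratedCurves calibratedCurves = new CalibratedCurves(specs);

		// The result is a calibrated clone of the model; the original curves
		// are left unmodified.
		AnalyticModelInterface model = calibratedCurves.getModel();
		CurveInterface forwardCurve = model.getCurve("forward-EUR-6M"); // hypothetical curve name
	}
}
```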
From Spreadsheet
The ZIP archive above contains a spreadsheet which shows a multi-curve calibration (including cross-currency curves) and outputs the calibrated curves.
From Java
The class CalibrationTest from the package net.finmath.tests.marketdata.curves contains some Java code demoing the programmatic creation and calibration of curves. See CalibrationTest.java.
Singapore Math
When it comes to teaching mathematics to children, there exist a number of schools of thought on what the ideal curriculum ought to be.
There are the conventional models of teaching math, the harder theory-based models, and the more interesting practical hands-on models.
Among the models that have received wide applause for developing a strong platform and understanding of basic math, one suitable for propelling the child a long way forward in the long run, Singapore math is an outstandingly good one.
What is Singapore math?
Singapore math is a method to teach mathematics to children based upon the classical model of teaching mathematics in Singapore.
The national curriculum of Singapore has been great in producing students who understand the basics of mathematics well, and the model has been formalized and internationalized for wider use.
As a teacher or as a parent of a homeschooling child, this is definitely one of the models of teaching math that would appeal to your intellect.
Remember that the methods might appear to be innovative at times, but the fundamental basis is time-tested and the instructions have been around for a number of years.
And also, the instructions and materials are written in English. A number of schools in the USA have accepted this model as a standard one and follow it. The state of California had allotted some
state funds for the teachers to order Singapore math texts.
What are the characteristics of Singapore math?
Singapore math has some fundamental differences in approach compared to the standard practice of teaching math. The curriculum and study materials are designed to answer fundamental questions of mathematics based upon logical explanations. Unlike many of the mathematics teaching models common in many countries, including the USA, that stress rule memorization and expect students to reconstruct mathematical ideas from hands-on experiments, this is a model that believes in digging deep into the roots of reason. The model is designed at every stage such that the child attempts to ask questions pertaining to a fundamental understanding of mathematics. Students start asking "why" rather than trying to remember rules, and the intelligent design of the curriculum provides the student with the right answer at the right time. No wonder the Singapore math teaching model is a favorite among almost every homeschooling parent and teacher.
Singapore math can be characterized by the following factors.
- Spiral approach:
Singapore math is spiral in its structure. It assumes that something that has been taught earlier does not need to be taught again. In fact, when a topic is revised in a subsequent year, the level of
discussion starts from the next level of what was taught previously. This is contrary to the USA model, where the topic restarts from scratch next year and then moves forward eventually attaining the
desired higher level.
- Sequence of introduction of concepts:
Singapore math has a unique way of introducing students to novel concepts. It is neither based upon remembering rules, nor upon hoping that students will almost intelligently deduce formulae simply from the hands-on experiences they are exposed to. The teaching method that the Singapore math model follows goes from concrete via pictorial to abstract. The students learn decimal numbers in the form of addition and subtraction in their second and third grades by concretely adding and subtracting dollars and cents. Then in their fourth
grade they would be pictorially introduced to the notion of decimals and their experiences of adding and subtracting dollars would be used for them to learn. At the next level they would start to
understand the fundamental theories of how decimals work. Thus, within a few years they will understand all the necessary theory, practical applications and physical interpretation of their concepts.
They further learn to analyze real situations in terms of their actual mathematical meaning, and do not rely upon intuition or "clues", the buzzword in US-based mathematics teaching models.
- Intelligent sequence of repetition of previous knowledge:
The amount of repetition is minimal in Singapore mathematics. The entire course is designed so that the student does not need to repeat what has already been learned. Progress is structured so that much of what is learned at one stage is implicitly used in subsequent stages. This ensures that the student never loses touch with the fundamental concepts, while boring repetition is eliminated. It keeps students fresh and hungry to learn further.
- Crisp focus on understanding of core mathematical concepts free from redundancies:
Singapore math is well-known for focusing on the mathematical concepts. Unlike many other models that teach math and end up adding a lot of immaterial details and spices, this model of math learning
is strongly based upon the solid concepts of core mathematics. The nature of the study is exploratory, and the child asks and gets answers to the fundamental "why" questions rather than semi-grasping vague ideas without deep knowledge.
Should you go for it?
The verdict is that Singapore math is one of the best models of learning math for a child, be it in a conventional school or be it in a homeschooling environment. The instructions are loud and clear.
The study material is fantastic. As long as your child is of at least average intelligence, this is one of the models you would strongly favor going for.
Fun with Physics, part 2: foot impact forces
This post is a sequel to a previous post about the invariance of running energy expenditure with respect to distance. The topic I wish to cover here is a back-of-the-envelope estimation for the total impact force of a running step. In other words: how many G-forces does your body experience with each footstep?
Before I get to this question I will recap some of the previous post below.
What I said before, using some actual physics equations (hence the title), was that running does not burn more calories per kilometre*. I derived the following formula,
E/(m*d) <= g/2 ~= 1.17 kcal/(kg*km),
using the assumptions 1) that each step from one foot to another follows the usual symmetric parabolic arc and 2) there is no wind resistance and 3) the take-off angle of each step is 45 degrees (which is a poor assumption but in this simple-minded case gives an energy minimum for running under the given goal of minimizing energy cost). I changed the 'equal' sign in the original equation to a 'less-than' sign, since the true energy cost of running is lower, at about 0.97 kcal/(kg*km). A more complex model would make some approximations of energy transferred using a spring/lever system, but I'm not going down that rabbit hole. The ratio of my predicted vs actual calorie cost is 1.17/0.97 = 1.21. Hence I overestimated the cost of running by 21%, which is not bad for a ridiculously simple inequality (containing only the gravimetric constant g).
I finished the post with a formula that relates the approximate takeoff angle, theta, with horizontal running speed v_x, and turnover rate R in steps per second (plus the gravimetric constant g):
tan(theta) = g/(2*R*v_x)
For a visual reference the angles of running as related to various trig relations look like this:
I mention this data again because this material is a good launch point (literally) for what I want to describe next: impact forces while landing. I'm also going to use some of these formulas for the
second model.
Branching off from the discussion on energy expenditure I now want to tackle a related question concerning g-forces. So I ask myself 'can the running forces as experienced by your foot be calculated
using similarly simple assumptions?' I will calculate these g-forces by dividing the change in speed of one's vertical component of motion, dv_y, by the time it takes to complete a footstep, delta t. Dividing change in speed by time is, by definition, the acceleration of a body, i.e. a = dv_y/(delta t).
This will, however, at best account for the average force experienced by the foot over a given landing/takeoff interval. My quest is to find a reasonable value for the delta in both the numerator and denominator. But I'll jump ahead and tell you what the answer is: empirically we expect the maximum g-forces to be less than 3 Gs (or more precisely in the range 2.5-2.8). The average forces over the entire interval should be about 2/3 the peak value**, which means we desire an average force of about 1.7 to 2.0. Impact force plate analysis studies are easy to come by, such as this one, from which I borrowed a representative image using units of G:
The numerator delta v:
The first simple assumption is that the change in vertical speed is just twice the vertical speed at impact, dv_y = 2*v_y.
The next step is determining v_y. This is done using the tan function I gave above. Since the vertical and horizontal components of motion are also simply related by a tan function, v_y = v_x*tan(theta), we can insert this into the earlier tan function and we have a relation for the vertical component, v_y = g/(2*R). (I should point out this value is true only for the moment of takeoff and landing. Obviously at the maximum height in mid-stride v_y = 0). The change in speed during the moment of impact is therefore dv_y = 2*v_y = g/R. If R is close to 3 s^-1 (or equivalently about 180 strides per minute, although this can vary) then we are done with the numerator. The larger R becomes, the smaller the impact forces. And the acceleration g is always constant at 9.81 m s^-2 (unless running on the moon!)
The divisor delta t:
The shortest time over which a foot strike could possibly occur is equal to the length of your foot divided by the speed of travel. This assumes both the heel and forefoot touch the ground, regardless of order (i.e. not just running on one's toes). This is a good assumption for any distance over a few hundred meters. The minimum impact time is then delta t = L_foot/v_x. Given the average male foot length is about 26.3 cm, and assuming running speeds in the range 4 to 6 m/s, this leads to a range of t values between 43 and 65 milliseconds. Combining this with the vertical change in speed dv_y we find G-forces are between 5.1 and 7.7 Gs, i.e.
a/g = 1/(3 s^-1* 0.043 s) = 7.75 G
a/g = 1/(3 s^-1* 0.065 s) = 5.13 G
Assuming a parabolic arc shape for the force impact curve, the peak value is actually 1.5 times these numbers. What we are effectively saying is that a runner going 6 m/s will experience a peak impact force of 7.75*1.5 = 11.6 Gs! Recall we were expecting a peak value between 2 and 3 Gs. These values are too high and clearly something is amiss.
A brief aside on the shape of the impact curves
Estimating the duration of impact is more difficult; many a paper has been devoted to the details of foot strike. Here is Daniel Lieberman's force-plate data on three kinds of running styles, which
all take about 0.2 seconds to perform:
Some borrowed data from Lieberman et al's 2010 paper
Notably all of the impacts are very parabolic in shape and the impact times remain more or less unchanged. The heel strikers have a sharp spur, but if one were to draw a line of best fit they'd all have a parabolic look to them. Obviously I'm not the only one who first noted this. This plot, which I found while browsing, shows a parabolic fit works quite reasonably:
Vertical ground reaction force (VGRF) is shown both experimentally and approximated with a parabolic line of best fit. The units for VGRF are Newtons, for a person weighing 77.5 kg. The peak force in
unitless G's is
2158.9 N/(77.5*9.81) = 2.84
Foot impact duration, continued:
Why does our estimation of force not match the experimental values? Armed with some more information from the above plots it seems clear the answer is that the impact time we estimated was too short:
we estimated less than 100 ms when the real value is closer to 200 ms. The reason for this is that we did not account for rear leg extension.
Assuming a runner fully extends his or her legs to a straight kickback, they will have more ground contact time. At full extension there is a (roughly) 90 degree angle between their legs, and this will lead to more time of impact. A runner's impact stance should look something like the following diagram:
Or this picture
Now the length of a leg from heel to hip is roughly half a person's height, hence
L[leg] ~ H/2
If there's a 90 degree angle between the legs at push-off, then an approximately 45 degree angle exists between the leg and a vertical line. It's not important if this is exactly true. The point is to see if an approximation of ground contact time is feasible to guess at. We push onward.
If theta is 45 degrees, then the total ground contact distance, including the length of a foot, is (for a 5'10" or 178 cm person with an 89 cm leg and 26 cm foot):
d = L_leg*sin(45 deg) + L_foot ~= 0.63 m + 0.26 m ~= 0.89 m
Now to revisit the force equation, again using a 4 to 6 m/s running speed. We find that the contact time for this is of course longer, between 148 and 223 milliseconds. Combining these 'delta t' times with the vertical change in speed dv_y = g/R, the average G-forces experienced now sit more comfortably between 1.5 and 2.3 Gs, i.e.
a/g = 1/(3 s^-1* 0.148 s) = 2.25
a/g = 1/(3 s^-1* 0.223 s) = 1.50
As peak forces are 1.5 times these values, they now are between 2.25 and 3.38, which is a far more realistic estimate.
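For anyone who wants to fiddle with the numbers, here is a short Python sketch of the estimates above. The inputs (R = 3 steps/s, a 26.3 cm foot, an 89 cm leg, a 45 degree push-off) are the assumptions from the text; small differences from the quoted figures are just rounding.

```python
import math

g = 9.81      # m/s^2, gravitational acceleration
R = 3.0       # step turnover rate, steps per second (~180 per minute)
FOOT = 0.263  # m, average male foot length
LEG = 0.89    # m, heel-to-hip leg length (~H/2 for H = 1.78 m)

def average_g(v, contact_length):
    """Average impact in G's: a/g = dv_y / (g*dt) = (g/R) / (g*dt) = 1/(R*dt)."""
    dt = contact_length / v  # ground contact time, s
    return 1.0 / (R * dt)

full_contact = LEG * math.sin(math.radians(45)) + FOOT  # ~0.89 m

for v in (4.0, 6.0):  # running speeds, m/s
    naive = average_g(v, FOOT)          # foot-length contact only
    better = average_g(v, full_contact) # with rear leg extension
    # Peak of a parabolic force profile is 1.5x its average (see footnote **).
    print(f"v={v} m/s: naive peak {1.5*naive:.1f} G, with leg extension {1.5*better:.2f} G")
```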
I have often heard, and even read in some semi-reputable books, that a goal for a seasoned runner should be to 'minimize ground contact time' (an example: Tudor Bompa's, p. 41). This seemingly constructive goal strikes me as either misguided or trivial. Trivial because as one runs faster one's ground contact time per stride inevitably decreases. This decrease comes without any special alteration to technique and is therefore pointless for an athlete to consider in any detail. The statement is also misguided because there is clearly a strong physical, natural deterrent to having short ground contact times. I have shown the effect of touching the ground for as little time as possible and the results are not favourable to one's biomechanics. I would even contend our bodies are not built to allow this to happen, i.e. you are physically required to extend the push-off phase behind one's centre of gravity. Certainly impact forces are reduced by increasing turnover R (which as it grows decreases dv_y), but that is not exactly the same as taking steps that touch the ground for less time.
This is enough math for one day, so I'll officially stop here. Below is one final note about the non-role height plays in running.
A final note regarding the role height plays in running: Height tends to cancel out in most running-related formulas, and here is another such example. Since running stride rate R is inversely proportional to height, i.e.
1/R = aH + b
(where a and b are constants) and given that one's stride length and foot size are also roughly proportional to height, i.e.
d ~ H(c + f)
(where c and f are proportional constants), we see that when calculating the g-forces for a given speed v, H nearly cancels out:
a/g = v/(R*d) = v(aH + b)/(H(c + f))
Dividing through by H, it can be seen the height-factor as related to g-forces is dangling precipitously under the constant b (which I imagine isn't that big to begin with). Impact forces by taller people are spread over larger strides. Yet again we witness how height in the world of running has no advantage nor disadvantage. Fin.
*Of course running faster does burn more calories per unit time. An elite marathoner burns his or her 2800 Calories in a time of 2:05 - 2:30 compared with ~ 4 hours for the recreational sort. An
elite runner is more powerful, but he or she does not expend less energy overall by running 'more efficiently'. It is important to realize the distance-invariant aspect of running faster: both elites
and amateurs will 'hit the wall' at about 30km into a marathon. This is true no matter what their training regime might be. To be sure, better runners hit the wall more gradually, therefore minimizing the sudden effect on their speed. But they are in just as much an exhausted state as anyone else, which is why even elites never sprint to the finishing tape.
**To derive this value of 1.5 you must use a bit of calculus to discover the average height <y> over a section of a parabola y = ax^2 + b is 2b/3. | {"url":"http://gsnider.blogspot.ca/2012/09/fun-with-physics-part-2-foot-impact.html","timestamp":"2014-04-19T09:53:03Z","content_type":null,"content_length":"87889","record_id":"<urn:uuid:4589b12e-7b32-4285-95d9-9df04161b980>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
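For the curious, that calculus is short. Write the arch as $y = b(1 - x^2/w^2)$ on $[-w, w]$ (the same as $y = ax^2 + b$ with $a = -b/w^2$); then

$\langle y \rangle = \frac{1}{2w}\int_{-w}^{w} b\left(1 - \frac{x^2}{w^2}\right)\,dx = b - \frac{b}{3} = \frac{2b}{3},$

so the peak $b$ is $3/2$ times the average, which is exactly the 1.5 factor used above.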
Hann, James (DNB00)
From Wikisource
Dictionary of National Biography, 1885-1900, Volume 24
←Hanmer, Thomas Hann, James Hanna, Samuel→
HANN, JAMES (1799–1856), mathematician, was born in 1799 at Washington, near Gateshead, where his father was a colliery smith. After being fireman at a pumping-station at Hebburn, he was for several
years employed in one of the steamers used on the Tyne for towing vessels. At the same time he studied mathematics, and was on one occasion found reading the works of Emerson the fluxionist. He
afterwards became a teacher, and when keeping a school at Friar's Goose, near Newcastle, he published in 1833 (as joint author with Isaac Dodds of Gateshead) his first work, 'Mechanics for Practical
Men.' An acquaintanceship with Woolhouse the mathematician led to his obtaining a situation as calculator in the Nautical Almanac Office. A few years later he was appointed writing-master, and then a
little later mathematical master at King's College School, London; the latter post he held till his death. Among his pupils was Henry Fawcett [q. v.] He published several works on mechanics and pure
mathematics, the chief of which are: 'Analytical Geometry' (a book which was afterwards greatly improved by J. R. Young), 'Treatise on Plane Trigonometry,' 'Spherical Trigonometry,' 'Examples of the
Integral Calculus,' 'Examples of the Differential Calculus.' In applied mathematics he wrote 'Mathematics for Practical Men,' published 1833; 'The Theory of Bridges,' 1843; 'Treatise on the Steam
Engine, with Practical Rules,' 1847; 'Principles and Practice of the Machinery of Locomotive Engines,' 1850. In 1841, with Olinthus Gregory [q. v.], he drew up and published 'Tables for the Use of
Nautical Men.' He also contributed papers to the 'Diaries' and other mathematical periodicals. Hann was elected a member of the Institute of Civil Engineers in 1843, and was an honorary member of the
Philosophical Society of Newcastle-on-Tyne. He died in King's College Hospital 17 Aug. 1856, aged 57 years. He married as a young man, and had several children.
[Latimer's Local Records of Newcastle, p. 384; Lady and Gentleman's Diary for 1857, p. 69; Proc. Inst. Civ. Engineers, vol. ii.(1843); Gent. Mag. 1856, pt. ii. pp. 513-15, 521; Ann. Register, August | {"url":"http://en.wikisource.org/wiki/Hann,_James_(DNB00)","timestamp":"2014-04-20T12:05:55Z","content_type":null,"content_length":"26588","record_id":"<urn:uuid:2c38b900-8214-4958-bd59-46aee5bfe8f1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inline::Octave - Inline octave code into your perl
use Inline Octave => DATA;
$f = jnk1(3);
print "jnk1=",$f->disp(),"\n";
$c= new Inline::Octave([ [1.5,2,3],[4.5,1,-1] ]);
($b, $t)= jnk2( $c, [4,4],[5,6] );
print "t=",$t->as_list(),"\n";
use Data::Dumper; print Dumper( $b->as_matrix() );
print oct_sum( [1,2,3] )->disp();
oct_plot( [0..4], [3,2,1,2,3] );
my $d= (2*$c) x $c->transpose;
print $d->disp;
function x=jnk1(u); x=u+1; endfunction
function [b,t]=jnk2(x,a,b);
## Inline::Octave::oct_sum (nargout=1) => sum
## Inline::Octave::oct_plot (nargout=0) => plot
THIS IS ALPHA SOFTWARE. It is incomplete and possibly unreliable. It is also possible that some elements of the interface (API) will change in future releases.
Inline::Octave gives you the power of the octave programming language from within your Perl programs.
Basically, I create an octave process with controlled stdin and stdout. Commands are sent via stdin. Data is sent via stdin and read (on the octave side) with fread(stdin, [dimx dimy], "double"), and results are read back similarly.
Inline::Octave variables in perl are tied to the octave variable. When a destructor is called, it sends a "clear varname" command to octave.
Additionally, there are Inline::Octave::ComplexMatrix and Inline::Octave::String types for the corresponding variables.
I initially tried to bind the C++ and liboctave to perl, but it started to get really hard - so I took this route. I'm planning to get back to that eventually ...
perl 5.005 or newer
Inline-0.40 or newer
octave 3.2 or newer
I've succeeded in getting this to work on win2k (activeperl),
win2k cygwin (but Inline-0.43 can't install Inline::C)
and linux (Mandrake 8.0, Redhat 6.2, Debian 2.0).
Note that Inline-0.43 can't handle spaces in your path -
this is a big pain for windows users.
Please send me tales of success or failure on other platforms
You need to install the Inline module from CPAN. This provides the infrastructure to support all the Inline::* modules.
perl Makefile.PL
make test
make install
This will search for an octave interpreter and give you the choice of giving the path to GNU Octave.
If you don't want this interactivity, then specify
perl Makefile.PL OCTAVE=/path/to/octave
perl Makefile.PL OCTAVE='/path/to/octave -my -special -switches'
The path to the octave interpreter can be set in the following ways:
- set OCTAVE_BIN option in the use line
use Inline Octave => DATA => OCTAVE_BIN => /path/to/octave
- set the PERL_INLINE_OCTAVE_BIN environment variable
If you can't figure out a reason, don't!
I use it to grind through long logfiles (using perl), and then calculate mathematical results (using octave).
Why not use PDL?
1) Because there's lots of existing code in Octave/Matlab.
2) Because there's functionality in Octave that's not in PDL.
3) Because there's more than one way to do it.
The most basic form for using Inline is:
use Inline Octave => "octave source code";
The source code can be specified using any of the following syntaxes:
use Inline Octave => 'DATA';
use Inline Octave => <<'ENDOCTAVE';
use Inline Octave => q{
Inline::Octave lets you:
1) Talk to octave functions using the syntax
## Inline::Octave::oct_plot (nargout=0) => plot
Here oct_plot in perl is bound to plot in octave. It is necessary to specify the nargouts required because we can't get this information from perl. (although it's promised in perl6)
If you need to use various nargouts for a function, then bind different functions to it:
## Inline::Octave::eig1 (nargout=1) => eig
## Inline::Octave::eig2 (nargout=2) => eig
2) Write new octave functions,
function s=add(a,b);
will create a new function add in perl bound to this new function in octave.
A function is called using
(list of Inline::Octave::Matrix) =
function_name (list of Inline::Octave::Matrix)
Parameters which are not Inline::Octave::Matrix variables will be cast (if possible).
Values returned will need to be converted into perl values if they need to be used within the perl code. This can be accomplished using:
1. $oct_var->disp()
Returns a string of the disp output from octave This provides a formatted representation, and should mostly be useful for debugging.
2. $oct_var->as_list()
Returns a perl list, corresponding to the ColumnVector for octave "oct_var(:)"
3. $oct_var->as_matrix()
Returns a perl list of list, of the form
$var= [ [1,2,3],[4,5,6],[7,8,9] ];
4. $oct_var->as_scalar()
Returns a perl scalar if $oct_var is a 1x1 matrix, dies with an error otherwise
5. $oct_var->sub_matrix( $row_spec, $col_spec ) Returns the sub matrix specified
$x= Inline::Octave->new([1,2,3,4]);
$y=$x x $x->transpose();
$y->sub_matrix( [2,4], [2,3] );
gives: [ [4,6],[8,9] ]
Inline::Octave::Matrix is the matrix class that "ties" matrices held by octave to perl variables.
Values can be created explicitly, using the syntax:
$var= new Inline::Octave([ [1.5,2,3],[4.5,1,-1] ]);
$var= Inline::Octave->new([ [1.5,2,3],[4.5,1,-1] ]);
or values will be automatically created by calling octave functions.
If your code only uses matrices, and does not need to define any octave functions, then the following initialization syntax may be useful:
use Inline Octave =>" ";
Many math operations have been overloaded to work directly on Inline::Octave::Matrix values;
For example, given $var above, we can calculate:
$v1= ( $var x $var->transpose );
$v2= 2*$var + 1
$v3= $var x [ [1],[2] ];
The relation between Perl and Octave operators is:
'+' => '+',
'-' => '-',
'*' => '.*',
'/' => './',
'x' => '*',
Methods can be called on Inline::Octave::Matrix variables, and the underlying octave function is called.
for example:
my $b= new Inline::Octave( 1 );
$s= 4 * ($b->atan());
my $pi= $s->as_scalar;
Is a labourious way to calculate PI.
Additionally, it is possible to call these as functions instead of methods
for example:
$c= Inline::Octave::rand(2,3);
print $c->disp();
0.23229 0.50674 0.25243
0.96019 0.17037 0.39687
The following methods are available, the corresponding number is the output args available (nargout).
abs => 1 acos => 1 acosh => 1
all => 1 angle => 1 any => 1
asin => 1 asinh => 1 atan => 1
atan2 => 1 atanh => 1 ceil => 1
conj => 1 cos => 1 cosh => 1
cumprod => 1 cumsum => 1 diag => 1
erf => 1 erfc => 1 exp => 1
eye => 1 finite => 1 fix => 1
floor => 1 gamma => 1 gammaln => 1
imag => 1 is_bool => 1 is_complex => 1
is_global => 1 is_list => 1 is_matrix => 1
is_stream => 1 is_struct => 1 isalnum => 1
isalpha => 1 isascii => 1 iscell => 1
iscntrl => 1 isdigit => 1 isempty => 1
isfinite => 1 isieee => 1 isinf => 1
islogical => 1 isnan => 1 isnumeric => 1
isreal => 1 length => 1 lgamma => 1
linspace => 1 log => 1 log10 => 1
logspace => 1 ones => 1 prod => 1
rand => 1 randn => 1 real => 1
round => 1 sign => 1 sin => 1
sinh => 1 size => 2 sqrt => 1
sum => 1 sumsq => 1 tan => 1
tanh => 1 zeros => 1
If you would like to do the octave equivalent of
a( [1,3] , :)= [ 1,2,3,4 ; 5,6,7,8 ];
a( : , [2,4])= [ 2,4; 2,4; 2,4; 2,4 ];
a( [1,4],[1,4])= [8,7;6,5];
Then these methods will make life more convenient.
$a = Inline::Octave::zeros(4);
$a->replace_rows( [1,3], [ [1,2,3,4],[5,6,7,8] ] );
$a->replace_cols( [2,4], [ [2,4],[2,4],[2,4],[2,4] ] );
$a->replace_matrix( [1,4], [1,4], [ [8,7],[6,5] ] );
Inline::Octave::ComplexMatrix should work very similarly to Inline::Octave::Matrix's. The perl Math::Complex type is used to map octave complex numbers.
Note, however, that the Math::Complex type in perl is heavy - it takes lots of memory and time compared to the native implementation in Octave.
use Math::Complex;
my $x= Inline::Octave::ComplexMatrix->new([1,1,2,3 + 6*i,4]);
print $x->disp();
Inline::Octave::String is a subclass of Inline::Octave::Matrix used for octave strings. It is required because there is no way to explicity create a string from Inline::Octave::Matrix.
use Inline Octave => q{
function out = countstr( str )
out= "";
for i=1:size(str,1)
out= [out,sprintf("idx=%d row=(%s)\n",i, str(i,:) )];
$str= new Inline::Octave::String([ "asdf","b","4523","end" ] );
$x= countstr( $str );
print $x->disp();
Performance should be almost as good as octave alone. The only slowdown is passing large variables across the pipe between perl and octave - but this should be much faster than any actual
By using the strengths of both languages, it should be possible to run faster than in each. (ie using octave for matrix operations, and running loops and text stuff in perl)
One performance issue is Complex matrix math in perl. The perl Math::Complex type is quite heavy, and for large matrices this work is done for each element. You should try to do the complex stuff in
octave, and only pull back small matrices into perl.
Andy Adler adler at site dot uottawa dot ca
(c) 2003-2011, Andy Adler with help from Andreas Krause
All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself. | {"url":"http://search.cpan.org/~aadler/Inline-Octave-0.31/Octave.pm","timestamp":"2014-04-16T16:22:14Z","content_type":null,"content_length":"28362","record_id":"<urn:uuid:ef99a0e2-7795-46dc-b315-6519c699214c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mirror Mediation
I’ve been planning to write some things about recent developments in string phenomenology. Unfortunately, the list of papers I want to talk about seems to grow faster than my ability to keep pace.
So, rather than being as systematic as I would like, I'll just plunge in and talk about a recent paper by Joseph Conlon. The subject is supersymmetry-breaking, and a mechanism he calls Mirror Mediation.
First-off, I have to say that I hate the nomenclature of this sub-field, where a profusion of fanciful (and, usually, not terribly descriptive) names, of the form …-mediation, are attached to various
restrictions on the form of the supergravity Lagrangian. I realize it’s hard to stop, once such a convention is established, but I think it tends to obscure more than it illuminates.
Anyway, the key problem that any of these mediation mechanisms needs to solve is to somehow avoid having the soft SUSY-breaking terms induced for the MSSM fields lead to flavour-changing neutral currents and/or large CP-violation.
Conlon points to a mechanism which might look a little ad-hoc, from the perspective of low energy effective field theory, but which is quite natural in certain classes of string vacua.
We divide the chiral multiplets of the theory into the visible sector fields, $C^{\alpha n}$, which are charged under the Standard Model gauge group and the moduli, which are neutral. Here, $\alpha$
runs over irreps of $SU(3)\times SU(2)\times U(1)$ and $n$ is a flavour index; for brevity, I will sometimes denote the pair $(\alpha,n)$ by $A$. The moduli are further divided into two sets, $\Psi_i$ and $\Phi_i$.
• The Kähler potential for the moduli is assumed to take the form \begin{aligned} K &= \hat{K} + K_{\text{matter}}\\ \hat{K} &= K_1(\Psi+\overline{\Psi}) + K_2(\Phi,\overline{\Phi}) \end{aligned}
• The gauge kinetic functions are linear functions of the $\Psi$s $f_a(\Psi) = \sum_i \lambda_{a i} \Psi_i$
Together, these two statements amount to saying that the $Im \Psi_i$ are axions, and that, at least on the level of the Kähler potential, the Peccei-Quinn symmetry is unbroken by the VEVs of the $\Psi_i$.
• The matter superpotential is independent of the $\Psi_i$: $W = \hat{W}(\Psi,\Phi) + {\mu(\Phi)}_{A B} C^A C^B + {Y(\Phi)}_{A B C}C^A C^B C^C + \dots$
• The matter Kähler potential $K_{\text{matter}} = h_{\alpha \overline{\alpha}}(\Psi+\overline{\Psi}) k_{m n}(\Phi,\overline{\Phi}) C^{\alpha m} \overline{C^{\alpha n}} + (Z_{A B}(\Psi,\overline{\Psi},\Phi,\overline{\Phi}) C^A C^B + h.c.) + \dots$ also leads to a flavour-diagonal Kähler metric for the quarks and leptons which, moreover, respects the Peccei-Quinn symmetry.
• The scalar potential for the moduli $\hat{V} = e^{\hat{K}}\left(\hat{K}^{i\overline{\jmath}} D_i \hat{W} \overline{D_j \hat{W}}-3 |\hat{W}|^2\right)$ is minimized with $D_{\Phi_i} W = 0,\qquad D_{\Psi_i} W \neq 0$ where $D$ is the Kähler covariant derivative, $D_{\Phi_i} W= \partial_{\Phi_i} W + \partial_{\Phi_i} K W$.
That is, supersymmetry-breaking takes place in the $\Psi_i$ sector, but their couplings to the visible sector are flavour-blind. The couplings of the $\Phi_i$ to the visible sector have nontrivial
flavour structure, but the $\Phi_i$ are stabilized “supersymmetrically.”
This structure of the supergravity Lagrangian was cooked up to solve the FCNC and CP problems. As such, from the low-energy field theorist’s perspective, it seems rather ad-hoc. However, as Conlon
points out, it pops out quite naturally in certain classes of string compactifications.
In both Type IIA and Type IIB, the moduli come in two types: complex structure and Kähler moduli. And, at large radius, the moduli space factorizes, in pretty much this fashion and, on one side or the other, one has a Peccei-Quinn symmetry. Depending on the details, however, this structure may not be preserved. In the presence of D3-brane moduli (for instance), the Peccei-Quinn symmetry of the
Kähler potential, in IIB, gets messed up. So not all of the popular scenarios for moduli stabilization will be compatible with mirror mediation.
But, it’s tantalizing that, at least in some circumstances, this structure pops out “for free.”
Posted by distler at October 9, 2007 11:51 AM
Re: Mirror Mediation
Hi Jacques,
Rather than symmetry breaking, could symmetry be bent or folded in a manner that might give an illusion of breaking?
Posted by: Doug on October 9, 2007 9:59 PM | Permalink | Reply to this
Re: Mirror Mediation
Is it obvious that in the typical string theory construction either the Kaehler or the complex structure moduli would couple to the gauge fields in a flavour blind way? Probably depends on how you
realise the flavour symmetry but for example in intersecting D-brane models where the families arise as multiple brane intersection (multiple intersection numbers of the cycles the branes are
Posted by: Robert on October 10, 2007 9:09 AM | Permalink | Reply to this
Gauge kinetic functions
I’m not quite sure what that question means. There’s no flavour-structure to the gauge kinetic functions, $f_a$. Maybe what you meant to ask is: why should the gauge kinetic functions depend on just
one type of moduli, $f_a(\Psi)$, instead of on both, $f_a(\Psi,\Phi)$?
The answer, in most cases, is that the gauge kinetic functions depend on the volumes of some cycles on which there are wrapped branes, and these volumes (depending on whether we are in IIA or IIB)
are controlled by the complex structure or the Kähler moduli, respectively.
Posted by: Jacques Distler on October 10, 2007 9:30 AM | Permalink | PGP Sig | Reply to this | {"url":"http://golem.ph.utexas.edu/~distler/blog/archives/001453.html","timestamp":"2014-04-20T03:11:55Z","content_type":null,"content_length":"27904","record_id":"<urn:uuid:1319cca7-93b2-4167-8cf9-1dc0a41c5294>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is meant by this?
October 15th 2010, 04:15 PM
I am having trouble understanding this problem.
It has to deal with either cosine or sine law.
The angle of depression of a fire noticed west of a fire tower is 6.2 degrees. The angle of depression of a pond, also west of the tower, is 13.5 degrees.
If the fire and pond are the same altitude, and the tower is 2.25 km from the pond on a direct line, how far is the fire from the pond?
So do I take the fire and pond are level together on the ground, on a horizontal plane from the angles noted above??
Not sure how to solve this, and do I have enough info??
October 16th 2010, 01:28 AM
1. Draw a rough sketch.
2. You are dealing with 2 right triangles with the height of the tower as the common leg.
3. Calculate the height of the tower (use the tan-function)
4. Calculate the distance fire-tower and then the distance I labeled "x".
5. For confirmation only: I've got $x \approx 2.722\ km$
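Filling in the arithmetic behind these steps (this reading treats the 2.25 km as the horizontal distance from tower to pond, which is what reproduces the quoted answer; if "direct line" means the slant distance, use $2.25\sin 13.5^\circ$ and $2.25\cos 13.5^\circ$ instead and the answer comes out near 2.65 km):

$h = 2.25\tan 13.5^\circ \approx 0.540\ km$ (height of the tower)

$d_{fire} = h/\tan 6.2^\circ \approx 4.97\ km$ (horizontal distance tower to fire)

$x = d_{fire} - 2.25 \approx 2.72\ km$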
October 16th 2010, 11:48 AM
Thanks for the help. I can figure it out from how you drew it. I was totally wrong on my sketch!! :))))) | {"url":"http://mathhelpforum.com/trigonometry/159762-what-meant-print.html","timestamp":"2014-04-23T12:03:49Z","content_type":null,"content_length":"5797","record_id":"<urn:uuid:94d54907-417a-4b0b-993f-0a528ac4f74a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Activity 3: Scatter Plot and Line of Fit. For this activity you will need regular notebook paper, a piece of string, a centimeter ruler, grid paper and about 5 different size cans or jar tops. (You can get them off the shelves at home if you like, measure there and bring the data to school.) Make a table like this one on your regular notebook paper:
Diameter of can (cm) | Circumference of can (cm) | Circumference ÷ Diameter
Measure the diameter of the first can or jar top in centimeters. If the measure goes over the cm line, be sure to measure to the
Type into your table the measures of your diameters in the order that they appear.
Type in the measures of the circumferences in the order in which they appear in your table.
Type in the results of dividing the circumference by the diameter in the order which they appear. Note: If the result is not greater than one you divided the diameter by the circumference and you
should redo the division in the correct order.
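For reference, the ratio in the third column should come out close to π ≈ 3.14 for every can, whatever its size; that is the point of the activity. A quick worked check (with made-up example numbers): a can with diameter 7.0 cm should have circumference about 22.0 cm, giving 22.0 ÷ 7.0 ≈ 3.14.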
Hi am new here can u give me some tips???
Luv the pic
Exploratory factor analysis
1. Lecture 5 Survey Research & Design in Psychology James Neill, 2010 Exploratory Factor Analysis
2. Overview
□ What is factor analysis?
□ Assumptions
□ Steps / Process
□ Examples
□ Summary
3. What is factor analysis?
□ What is factor analysis?
□ Purpose
□ History
□ Types
□ Models
4. Universe : Galaxy All variables : Factor
5. Conceptual model of factor analysis FA uses correlations among many items to search for common clusters.
6. Factor analysis...
□ is used to identify clusters of inter-correlated variables (called 'factors').
□ is a family of multivariate statistical techniques for examining correlations amongst variables.
□ empirically tests theoretical data structures.
□ is commonly used in psychometric instrument development.
7. Purposes There are two main applications of factor analytic techniques:
☆ Data reduction : Reduce the number of variables to a smaller number of factors.
☆ Theory development : Detect structure in the relationships between variables, that is, to classify variables.
8. Purposes: Data reduction
□ Simplifies data structure by revealing a smaller number of underlying factors
□ Helps to eliminate or identify items for improvement:
☆ redundant variables
☆ unclear variables
☆ irrelevant variables
□ Leads to calculating factor scores
9. Purposes: Theory development
□ Investigates the underlying correlational pattern shared by the variables in order to test theoretical models e.g., how many personality factors are there?
□ The goal is to address a theoretical question as opposed to calculating factor scores.
10. History of factor analysis
□ Invented by Charles Spearman (1904)
□ Usage hampered by onerousness of hand calculation
□ Since the advent of computers, usage has thrived, esp. to develop:
☆ Theory e.g., determining the structure of personality
☆ Practice e.g., development of 10,000s+ of psychological screening & measurement tests
11. EFA = Exploratory Factor Analysis
☆ explores & summarises underlying correlational structure for a data set
CFA = Confirmatory Factor Analysis
☆ tests the correlational structure of a data set against a hypothesised structure and rates the “goodness of fit”
Two main types of FA: Exploratory vs. confirmatory factor analysis
12. This (introductory) lecture focuses on Exploratory Factor Analysis (recommended for undergraduate level). However, note that Confirmatory Factor Analysis (and Structural Equation Modeling) is generally preferred, but is more advanced and recommended for graduate level.
13. Conceptual model - Simple model
□ e.g., 12 items testing might actually tap only 3 underlying factors
□ Factors consist of relatively homogeneous variables.
Factor 1 Factor 2 Factor 3
14. Eysenck's 3 personality factors: e.g., 12 items (talkative, shy, sociable, fun, anxious, gloomy, relaxed, tense, unconventional, nurturing, harsh, loner) testing three underlying dimensions of personality: Extraversion/introversion, Neuroticism, Psychoticism.
15. Conceptual model - Simple model: Questions 1-5 load onto Factors 1-3; each question loads onto one factor.
16. Conceptual model - Complex model: Questions 1-5 load onto Factors 1-3; questions may load onto more than one factor.
17. Conceptual model - Area plot: the correlation between X1 and X2 represents a theoretical factor which is partly measured by the common aspects of X1 and X2.
18. How many factors? One factor? Three factors? Nine factors? (independent items)
19. Does personality consist of 2, 3, or 5, 16, etc. factors? e.g., the “Big 5”?
□ Neuroticism
□ Extraversion
□ Agreeableness
□ Openness
□ Conscientiousness
Example: Personality
20. Does intelligence consist of separate factors, e.g.,
□ Verbal
□ Mathematical
□ Interpersonal, etc.?
...or is it one global factor (g)? ...or is it hierarchically structured? Example: Intelligence
21. Example: Essential facial features ( Ivancevic, 2003)
22. Six orthogonal factors represent 76.5% of the total variability in facial recognition (in order of importance):
□ upper-lip
□ eyebrow-position
□ nose-width
□ eye-position
□ eye/eyebrow-length
□ face-width
Example: Essential facial features ( Ivancevic, 2003)
23. Assumptions
□ GIGO
□ Sample size
□ Levels of measurement
□ Normality
□ Linearity
□ Outliers
□ Factorability
24. Garbage In, Garbage Out (GIGO)
□ Screen the data
□ Use variables that theoretically “go together”
25. Assumption testing: Sample size Some guidelines:
□ Min. : N > 5 cases per variable
☆ e.g., 20 variables, should have > 100 cases (1:5)
□ Ideal : N > 20 cases per variable
☆ e.g., 20 variables, ideally have > 400 cases (1:20)
26. Assumption testing: Sample size Comrey and Lee (1992): 50 = very poor, 100 = poor, 200 = fair, 300 = good, 500 = very good 1000+ = excellent
27. Assumption testing: Sample size
28. Assumption testing: Level of measurement
□ All variables must be suitable for correlational analysis, i.e., they should be ratio/metric data or at least Likert data with several interval levels.
29. Assumption testing: Normality
□ FA is robust to violation of assumptions of normality
□ If the variables are normally distributed then the solution is enhanced
30. Assumption Testing: Linearity
□ Because FA is based on correlations between variables, it is important to check there are linear relations amongst the variables (i.e., check scatterplots)
31. Assumption testing: Outliers
□ FA is sensitive to outlying cases
☆ Bivariate outliers (e.g., check scatterplots)
☆ Multivariate outliers (e.g., Mahalanobis’ distance)
□ Identify outliers, then remove or transform
32. Example factor analysis: Classroom behaviour
□ 15 classroom behaviours of high-school children were rated by teachers using a 5-point scale.
□ Task: Identify groups of variables (behaviours) that are strongly inter-related & represent underlying factors.
33. Classroom behaviour items
□ Cannot concentrate ↔ can concentrate
□ Curious & enquiring ↔ little curiosity
□ Perseveres ↔ lacks perseverance
□ Irritable ↔ even-tempered
□ Easily excited ↔ not easily excited
□ Patient ↔ demanding
□ Easily upset ↔ contented
□ Control ↔ no control
□ Relates warmly to others ↔ disruptive
□ Persistent ↔ frustrated
□ Difficult ↔ easy
□ Restless ↔ relaxed
□ Lively ↔ settled
□ Purposeful ↔ aimless
□ Cooperative ↔ disputes
34. Classroom behaviour items
35. Classroom behaviour items
36. Assumption testing: Factorability Check the factorability of the correlation matrix (i.e., how suitable is the data for factor analysis?) by one or more of the following methods:
□ Correlation matrix correlations > .3?
□ Anti-image matrix diagonals > .5?
□ Measures of sampling adequacy (MSAs)?
☆ Bartlett’s sig.?
☆ KMO > .5 or .6?
37. Assumption testing: Factorability (Correlations). Are there SOME correlations over .3? If so, proceed with FA. Takes some effort with a large number of variables, but accurate.
38. Assumption testing: Factorability: Anti-image correlation matrix
□ Examine the diagonals on the anti-image correlation matrix
□ Consider variables with correlations less than .5 for exclusion from the analysis – they lack sufficient correlation with other variables
□ Medium effort, reasonably accurate
39. Anti-Image correlation matrix Make sure to look at the anti-image CORRELATION matrix
□ Global diagnostic indicators - correlation matrix is factorable if:
☆ Bartlett’s test of sphericity is significant and/or
☆ Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy > .5 or .6
□ Quickest method, but least reliable
Assumption testing: Factorability: Measures of sampling adequacy
41. Assumption testing: Factorability
42. Summary: Measures of sampling adequacy Draw on one or more of the following to help determine the factorability of a correlation matrix:
□ Several correlations > .3?
□ Anti-image matrix diagonals > .5?
□ Bartlett’s test significant?
□ KMO > .5 to .6? (depends on whose rule of thumb)
43. Steps / Process
□ Test assumptions
□ Select type of analysis
□ Determine no. of factors (Eigen Values, Scree plot, % variance explained)
□ Select items (check factor loadings to identify which items belong in which factor; drop items one by one; repeat)
□ Name and define factors
□ Examine correlations amongst factors
□ Analyse internal reliability
□ Compute composite scores
44. Type of EFA: Extraction method: PC vs. PAF Two main approaches to EFA:
□ Analyses all variance: Principal Components (PC)
□ Analyses shared variance: Principal Axis Factoring (PAF)
45. Principal components (PC)
□ More common
□ More practical
□ Used to reduce data to a set of factor scores for use in other analyses
□ Analyses all the variance in each variable
46. Principal axis factoring (PAF)
□ Used to uncover the structure of an underlying set of p original variables
□ More theoretical
□ Analyses only shared variance (i.e. leaves out unique variance)
47. Total variance of a variable Principal Components (PC) Principal Axis Factoring (PAF)
□ Often there is little difference in the solutions for the two procedures.
□ If unsure, check your data using both techniques
□ If you get different solutions for the two methods, try to work out why and decide on which solution is more appropriate
PC vs. PAF
49. Communalities
□ Each variable has a communality =
☆ the proportion of its variance explained by the extracted factors
☆ sum of the squared loadings for the variable on each of the factors
□ Ranges between 0 and 1
□ If communality for a variable is low (e.g., < .5), consider extracting more factors or removing the variable
□ High communalities (> .5): Extracted factors explain most of the variance in the variables being analysed
□ Low communalities (< .5): A variable has considerable variance unexplained by the extracted factors
☆ May then need to extract MORE factors to explain the variance
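As a minimal numeric illustration (the loading matrix below is hypothetical, not from the classroom-behaviour example), a communality is just the row sum of squared loadings:
```python
import numpy as np

# Hypothetical 4-variable x 2-factor loading matrix
L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.1, 0.9],
              [0.3, 0.4]])

communality = (L ** 2).sum(axis=1)  # proportion of each variable's variance explained
print(communality)                  # e.g., first variable: 0.8^2 + 0.1^2 = 0.65
```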
51. Communalities - 2
52. Explained variance
□ A good factor solution is one that explains the most variance with the fewest factors
□ Realistically, happy with 50-75% of the variance explained
53. Explained variance 3 factors explain 73.5% of the variance in the items
54. Eigen values
□ Each factor has an eigen value
□ Indicates overall strength of relationship between a factor and the variables
□ Sum of squared correlations
□ Successive EVs have lower values
□ Rule of thumb: Eigen values over 1 are ‘stable’ (Kaiser's criterion)
55. Explained variance The eigen values ranged between .16 and 9.35. Two factors satisfied Kaiser's criterion (EVs > 1) but the third EV is .93 and appears to be a useful factor.
56. Scree plot
□ A line graph of Eigen Values
□ Depicts amount of variance explained by each factor
□ Cut-off: Look for where additional factors fail to add appreciably to the cumulative explained variance
□ 1st factor explains the most variance
□ Last factor explains the least amount of variance
57. Scree plot
58. Scree plot
59. Scree plot
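Eigen values and a scree plot take a few lines in Python; a hedged sketch with placeholder data (not the classroom data from the slides):
```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 15))                  # placeholder data matrix
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigen values, largest first

print("Kaiser's criterion keeps", (eigvals > 1).sum(), "factors")
print("Cumulative % variance:", np.cumsum(eigvals) / eigvals.sum() * 100)

plt.plot(range(1, len(eigvals) + 1), eigvals, "o-")  # scree plot: look for the elbow
plt.xlabel("Factor"); plt.ylabel("Eigen value"); plt.show()
```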
60. How many factors? A subjective process ... Seek to explain maximum variance using fewest factors, considering:
□ Theory – what is predicted/expected?
□ Eigen Values > 1? (Kaiser’s criterion)
□ Scree Plot – where does it drop off?
□ Interpretability of last factor ?
□ Try several different solutions ? (consider FA type, rotation, # of factors)
□ Factors must be able to be meaningfully interpreted & make theoretical sense?
61. How many factors?
□ Aim for 50-75% of variance explained with 1/4 to 1/3 as many factors as variables/items.
□ Stop extracting factors when they no longer represent useful/meaningful clusters of variables
□ Keep checking/clarifying the meaning of each factor – make sure you are reading the full wording of each item.
□ Factor loadings (FLs) indicate relative importance of each item to each factor.
☆ In the initial solution , each factor tries “selfishly” to grab maximum unexplained variance.
☆ All variables will tend to load strongly on the 1st factor
Initial solution: Unrotated factor structure
63. Initial solution - Unrotated factor structure
□ Factors are weighted combinations of variables
□ A factor matrix shows variables in rows and factors in columns
64. 1st factor extracted:
□ Best possible line of best fit through the original variables
□ Seeks to explain the lion's share of all variance
□ A single factor, best summary of the variance in the whole set of items
Initial solution - Unrotated factor structure
□ Each subsequent factor tries to explain the maximum possible amount of remaining unexplained variance.
☆ Second factor is orthogonal to first factor - seeks to maximise its own eigen value (i.e., tries to gobble up as much of the remaining unexplained variance as possible)
Initial solution - Unrotated factor structure
66. Vectors (Lines of best fit)
67. Initial solution: Unrotated factor structure
□ Seldom see a simple unrotated factor structure
□ Many variables load on 2 or more factors
□ Some variables may not load highly on any factors (check: low communality)
□ Until the FLs are rotated, they are difficult to interpret.
□ Rotation of the FL matrix helps to find a more interpretable factor structure.
68. Two basic types of factor rotation Orthogonal (Varimax) Oblique (Oblimin)
69. Two basic types of factor rotation
□ Orthogonal minimises factor covariation, produces factors which are uncorrelated
□ Oblimin allows factors to covary, allows correlations between factors
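The rotation itself is mechanical. Below is a sketch of the widely used SVD-based varimax algorithm in Python (illustrative, not tied to any particular package; `Phi` is assumed to be a variables-by-factors loading matrix):
```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=50, tol=1e-6):
    """Orthogonally rotate a loading matrix (Kaiser's varimax criterion)."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        target = L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))
        u, s, vt = np.linalg.svd(Phi.T @ target)
        R = u @ vt                    # nearest orthogonal rotation
        d_new = s.sum()
        if d_new < d * (1 + tol):     # stop once the criterion plateaus
            break
        d = d_new
    return Phi @ R                    # rotated loadings; factors stay uncorrelated
```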
70. Orthogonal rotation
71. Why rotate a factor loading matrix?
□ After rotation, the vectors (lines of best fit) are rearranged to optimally go through clusters of shared variance
□ Then the FLs and the factor they represent can be more readily interpreted
72. Why rotate a factor loading matrix?
□ A rotated factor structure is simpler & more easily interpretable
☆ each variable loads strongly on only one factor
☆ each factor shows at least 3 strong loadings
☆ all loadings are either strong or weak, no intermediate loadings
73. Orthogonal vs. oblique rotations
□ Consider purpose of factor analysis
□ If in doubt, try both
□ Consider interpretability
□ Look at correlations between factors in oblique solution
☆ if >.3 then go with oblique rotation (>10% shared variance between factors)
74. Interpretability
□ It is dangerous to be driven by factor loadings only – think carefully - be guided by theory and common sense in selecting factor structure.
□ You must be able to understand and interpret a factor if you’re going to extract it.
75. Interpretability
□ However, watch out for ‘seeing what you want to see’ when evidence might suggest a different, better solution.
□ There may be more than one good solution! e.g., in personality
☆ 2 factor model
☆ 5 factor model
☆ 16 factor model
76. Factor loadings & item selection A factor structure is most interpretable when:
1. Each variable loads strongly (> +.40) on only one factor
2. Each factor shows 3 or more strong loadings; more loadings = greater reliability
3. Most loadings are either high or low, few intermediate values
4. These elements give a 'simple' factor structure
77. Initial solution – Unrotated factor structure 4
78. Rotated factor matrix Task Orientation Sociability Settledness
79. 3-d plot
□ Bare min. = 2
□ Recommended min. = 3
□ Max. = unlimited
-> ↑ reliability -> ↑ 'roundedness' -> Law of diminishing returns
□ Typically = 4 to 10 is reasonable
How many items per factor?
81. How do I eliminate items? A subjective process; consider:
□ Size of main loading (min = .4)
□ Size of cross loadings (max = .3?)
□ Meaning of item (face validity)
□ Contribution it makes to the factor
□ Eliminate 1 variable at a time, then re-run, before deciding which/if any items to eliminate next
□ Number of items already in the factor
82. Factor loadings & item selection Comrey & Lee (1992):
loadings > .70 - excellent
> .63 - very good
> .55 - good
> .45 - fair
> .32 - poor
83. Factor loadings & item selection Cut-off for acceptable loadings:
□ Look for gap in loadings (e.g., .8, .7, .6 , .3, .2 )
□ Choose cut-off because factors can be interpreted above but not below cut-off
84. Other considerations: Normality of items Check the item descriptives. e.g., if two items have similar factor loadings and reliability analysis, consider selecting items which will have the least skew and kurtosis. The more normally distributed the item scores, the better the distribution of the composite scores.
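For instance (illustrative Python; the two item-score vectors are simulated placeholders):
```python
from scipy.stats import skew, kurtosis
import numpy as np

rng = np.random.default_rng(2)
item_a = rng.normal(3, 1, size=500).round()    # placeholder item scores
item_b = rng.exponential(1, size=500).round()  # a more skewed item

for name, x in [("item_a", item_a), ("item_b", item_b)]:
    print(name, "skew:", round(skew(x), 2), "excess kurtosis:", round(kurtosis(x), 2))
# Prefer the item closer to 0 on both, all else (loadings, reliability) being equal.
```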
85. Factor analysis in practice
□ To find a good solution, consider each combination of:
☆ PC-varimax
☆ PC-oblimin
☆ PAF-varimax
☆ PAF-oblimin
□ Apply the above methods to a range of possible/likely factors, e.g., for 2, 3, 4, 5, 6, and 7 factors
□ Eliminate poor items one at a time, retesting the possible solutions
□ Check factor structure across sub-groups (e.g., gender) if there is sufficient data
□ You will probably come up with a different solution from someone else!
□ Check/consider reliability analysis
Factor analysis in practice
87. Example: Condom use
□ The Condom Use Self-Efficacy Scale (CUSES) was administered to 447 multicultural college students.
□ PC FA with a varimax rotation.
□ Three distinct factors were extracted:
☆ Appropriation
☆ Sexually Transmitted Diseases
☆ Partners' Disapproval
88. Factor loadings & item selection: Factor 1 (FL)
.56 I feel confident I could gracefully remove and dispose of a condom after sexual intercourse
.61 I feel confident I could remember to carry a condom with me should I need one
.65 I feel confident I could purchase condoms without feeling embarrassed
.75 I feel confident in my ability to put a condom on myself or my partner
89. Factor loadings & item selection: Factor 2, STDs (FL)
.80 I would not feel confident suggesting using condoms with a new partner because I would be afraid he or she would think I thought they had a sexually transmitted disease
.86 I would not feel confident suggesting using condoms with a new partner because I would be afraid he or she would think I have a sexually transmitted disease
.72 I would not feel confident suggesting using condoms with a new partner because I would be afraid he or she would think I've had a past homosexual experience
90. Factor loadings & item selection: Factor 3, Partner's reaction (FL)
.58 If my partner and I were to try to use a condom and did not succeed, I would feel embarrassed to try to use one again (e.g. not being able to unroll condom, putting it on backwards or awkwardness)
.65 If I were unsure of my partner's feelings about using condoms I would not suggest using one
.73 If I were to suggest using a condom to a partner, I would feel afraid that he or she would reject me
91. Summary
□ Introduction
□ Assumptions
□ Steps/Process
92. Introduction: Summary
□ Factor analysis is a family of multivariate correlational data analysis methods for summarising clusters of covariance .
□ FA summarises correlations amongst items.
□ The common clusters (called factors) are summary indicators of underlying fuzzy constructs.
93. Assumptions: Summary
□ Sample size
☆ 5+ cases per variables (ideally 20+ cases per variable)
☆ N > 200
□ Bivariate & multivariate outliers
□ Factorability of correlation matrix (Measures of Sampling Adequacy)
□ Normality enhances the solution
94. Summary: Steps / Process
□ Test assumptions
□ Select type of analysis
□ Determine no. of factors (Eigen Values, Scree plot, % variance explained)
□ Select items (check factor loadings to identify which items belong in which factor; drop items one by one; repeat)
□ Name and define factors
□ Examine correlations amongst factors
□ Analyse internal reliability
□ Compute composite scores
95. Summary: Types of FA
□ PC: Data reduction
□ PAF: Theoretical data exploration
□ Try both ways
☆ Are solutions different? Why?
96. Summary: Rotation
□ Orthogonal (varimax)
□ Oblique (oblimin)
□ Try both ways
☆ Are solutions different? Why?
97. No. of factors to extract?
□ Inspect EVs
☆ look for > 1 or sudden drop (Inspect scree plot)
□ % of variance explained
□ Interpretability / theory
Summary: Factor extraction
□ Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
□ Ivancevic, V., Kaine, A. K., McLindin, B. A., & Sunde, J. (2003). Factor analysis of essential facial features. In Proceedings of the 25th International Conference on Information Technology Interfaces (ITI), pp. 187-191, Cavtat, Croatia.
□ Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299.
□ Tabachnick, B. G., & Fidell, L. S. (2001). Principal components and factor analysis. In Using multivariate statistics (4th ed., pp. 582-633). Needham Heights, MA: Allyn & Bacon.
99. Open Office Impress
□ This presentation was made using Open Office Impress.
□ Free and open source software.
□ http://www.openoffice.org/product/impress.html
| {"url":"http://pdfcast.org/pdf/exploratory-factor-analysis","timestamp":"2014-04-21T07:09:58Z","content_type":null,"content_length":"59579","record_id":"<urn:uuid:9c4eee0e-327c-4e0f-b855-be374782b903>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Attractor Parameters
Fractal Parameter Windows
Fractal type Attractor is a true 3D fractal type: the whole rendering takes place in 3D. As such, you may find several parameters related to 3D, e.g. settings for the observer, light sources, etc.
So let me briefly explain the meaning of the five tabs available in the parameter window.
Parameter This tab contains general rendering parameters for the fractal: This includes the rendering algorithm (Solid, Plasma, Flame) as well as opacity setting, gamma and contrast. If you want to
know more about what an Attractor actually is and how ChaosPro tries to calculate and render it, see Theory of Attractor
View This tab now lets you specify all items related to the observer:
This includes the observer position, the observer orientation and the view direction. A detailed description of each parameter can be found at View Parameters.
Light This tab lets you specify all available light settings: This includes the light position, the intensity, etc. ChaosPro uses a variation of the Phong lighting model. Of course, ray tracing programs would produce much better results, but the quality seems to be sufficient for a fractal generator. More info can be found at Light Parameters.
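For reference, the classic Phong model combines ambient, diffuse and specular terms. The sketch below is a generic textbook version in Python, not ChaosPro's actual implementation:
```python
import numpy as np

def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.4, shininess=32):
    """Classic Phong shading for normal n, light direction l, view direction v."""
    n, l, v = (u / np.linalg.norm(u) for u in (n, l, v))
    diffuse = max(np.dot(n, l), 0.0)
    r = 2 * np.dot(n, l) * n - l          # reflection of the light direction
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

print(phong_intensity(np.array([0, 0, 1.0]), np.array([0, 1, 1.0]), np.array([0, 0, 1.0])))
```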
The previous tabs contained parameters which affect the view and are not directly related to the fractal itself. The next two tabs now let you specify the fractal related stuff: The formulas to use
and the coloring to apply.
Formula Here you can define the transformation formulas and formula sets to use in order to render the attractor. A more detailed description can be found at Attractor Formulas.
Coloring And the last tab lets you choose the coloring algorithm to use. The settings there do not affect the shape of the fractal, they only affect the colors of the object. A more detailed
description can be found at Attractor Coloring. | {"url":"http://www.chaospro.de/documentation/html/fractaltypes/attractor/parmgeneral.htm","timestamp":"2014-04-19T07:09:59Z","content_type":null,"content_length":"31769","record_id":"<urn:uuid:e5b951ac-0fcc-4d0d-8602-79cc36df3f24>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Fitting a measurement model in a multi-level analysis
barbara mark posted on Friday, September 22, 2000 - 9:12 am
I have a 4-indicator measurement model that performed well in the "between" analysis, but not in the "within" analysis. I either get a model that will not converge, or I get negative signs on some of
the indicators. I have tried setting lambdas to 1 on different indicators, I have tried different start values for the indicators, and I have tried different combinations of correlated measurement
errors among the indicators, but nothing seems to be working. Any suggestions would be appreciated.
bmuthen posted on Saturday, September 23, 2000 - 2:34 am
Unless you already tried this, a regular analysis of the pooled-within sample covariance matrix printed by Mplus is a good way to examine the within structure. Here, usual procedures for finding a
well-fitting model can be used. Since there are only 4 indicators, it is easy to look at the sample covariances and see if negative loadings should happen. Check also that no variable has a
considerably larger variance than the others.
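For readers working outside Mplus, the pooled-within matrix is straightforward to compute by hand. The Python sketch below (with simulated placeholder data) mirrors Step 3 of the Muthén (1994) procedure; it is an illustration, not Mplus output:
```python
import numpy as np

def pooled_within_cov(y, cluster):
    """S_PW: covariance of cluster-mean-centered scores (Muthen, 1994, step 3)."""
    y = np.asarray(y, dtype=float)
    centered = y.copy()
    for g in np.unique(cluster):
        idx = cluster == g
        centered[idx] -= y[idx].mean(axis=0)   # remove each cluster's mean
    n, G = len(y), len(np.unique(cluster))
    return centered.T @ centered / (n - G)

rng = np.random.default_rng(3)
cluster = np.repeat(np.arange(30), 10)                    # 30 clusters of 10 cases
y = rng.normal(size=(300, 4)) + rng.normal(size=(30, 1)).repeat(10, axis=0)
print(pooled_within_cov(y, cluster).round(2))
```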
Anonymous posted on Tuesday, May 25, 2004 - 7:12 am
I am trying a two-level CFA model with four latent variables which are measured by 3 or 4 indicators each. As suggested by Muthén 1994, in the first step, I am running a pooling single-level model, but several between-level indicators have more than 1 lambda and several are negative. Following the output suggestion, I fixed the highest-loading factor of each latent variable = 1, but the output shows: NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED. So what's the problem and how should I solve it?
In addition, the second step in checking ICC goes well, and the third step in running the pooling within model also comes out with reasonable results. But the final step for 2-level analysis comes out with
a FATAL ERROR
IN A TOTAL OF 50625 INTEGRATION POINTS. THIS MAY BE THE CAUSE
OF THE MEMORY SHORTAGE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING
But currently my computer has about 20G of space available; is it still not enough, or should I make some correction to my model?
Thank you very much!
bmuthen posted on Tuesday, May 25, 2004 - 8:05 am
The failure of your first step indicates that you should modify your model, perhaps using exploratory factor analysis. Given the fatal error message in your final step, it sounds like you have
specified some of your indicators as categorical. The 4 steps in Muthen (1994) were intended for continuous outcomes. The 4-dimensional integration in the final step is space demanding because the
combination of the many integration points and a large sample size creates a huge space need. You can reduce to say 10 integration points per dimension by using the option integration = 10 (see
User's Guide). But first work on step 1.
Anonymous posted on Wednesday, May 26, 2004 - 9:19 am
Thanks for your quick answer.
Yes I have several categorical indicators at the between level. But could I still follow the 4-step procedure just by indicating that some of them are categorical in the variable command?
And I tried the EFA of pooling model by exploring 5-6 factors, but I got the following results:
But based on theory (also because 2-level analysis demands separate factors at both levels), I need at least five factors in the pooling models. So any suggestion?
Thanks again.
bmuthen posted on Thursday, May 27, 2004 - 11:07 am
When you say "pooling model" I think you mean Step 1 analysis of the total covariance (correlation) matrix, referred to as S_T in my paper; the word "pooling" is a bit confusing since analysis of the
pooled-within covariance matrix is Step 3.
If you hypothesize 4 within factors and 1 between factor, this does not mean that you necessarily would get 5 factors in Step 1; you may get less because within and between factors get a bit
confounded. The EFA error message suggests that fewer than 5 factors should be attempted. And even if theory says you should get 4 within factors, that does not mean that your factor analysis confirms it. Less than perfect validity of the measurements may distort the intent of the theory. It is important to settle these Step 1 matters before continuing.
Anonymous posted on Friday, May 28, 2004 - 12:33 am
Yes, in step 1, I am doing S_T. And I got exactly the "confounding" problem you mentioned, not only between the two levels, but also with the outcome indicators (the outcome variable is also a latent variable with 3 continuous indicators), so I have no idea what's the use of doing step 1. I cannot get valid factors at both levels, and even the indicators of the independent variables are confounded with those of the outcomes. Does it mean that my model is problematic?
I tried to just drop some misspecifications found in step 1 and go to step 4 (steps 2 and 3 work well), but step 4 does not work. I also tried S_B; it seems that the problem comes from S_B. So should I also run an EFA at the between level?
Thank you very much for your time and help.
bmuthen posted on Saturday, May 29, 2004 - 10:53 am
Regarding the question in your first paragraph - yes, if the loadings of your outcome indicators don't behave well your model is most likely problematic. If you haven't already, I would go back and
look at the EFA here.
Regarding your question in the last paragraph, EFA on the estimated Sigma(B) is useful if you have many clusters (say > 50) - see Muthen (1994).
eric duku posted on Thursday, January 12, 2006 - 11:54 am
I am running a 2-level CFA with one latent factor and 26 items. I have successfully run the first 3 steps as suggested by Muthen (1994). On the final 2-level model, I get the following error message:
I used starting values from the between and within models to no avail. My next option is to try EFA to see what I get.
Any help would be greatly appreciated!
Linda K. Muthen posted on Thursday, January 12, 2006 - 2:20 pm
If you have not done EFA on the pooled-within and estimated sigma between matrices, I would start there.
eric duku posted on Friday, January 13, 2006 - 9:01 am
Thanks, Linda!
I appreciate your quick response.
I'll do that and keep you posted.
eric duku posted on Friday, January 20, 2006 - 12:29 pm
Hi Linda!
Sorry for the delay in keeping you posted...your suggestion worked...thank you!
I appreciate your help.
All the best!
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=9&page=79","timestamp":"2014-04-19T02:23:45Z","content_type":null,"content_length":"34155","record_id":"<urn:uuid:9fa51003-8302-4f42-a7e7-ebb2a6e455fa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dieter Leithner's new p57 oscillator
From: Dietrich.Leithner@dlr.de (Dietrich Leithner)
Date: Thu, 18 Dec 1997 13:12:57 +0200
Subject: p57 and p56 Herschel loops
In their paper "Tight bounds on periodic cell configurations in Life" Dave Buckingham and Paul Callahan wrote (lemma 2) (Accepted to Experimental Mathematics. Electronic preprints available on
request. --PBC):
"For any n >= 58, a Herschel loop can be constructed and populated with Herschels to realize a period-n oscillator."
Here are examples of Herschel loop oscillators for n = 57 and n = 56:
The loops use fast 65-step Herschel conduits that turn a Herschel left and flip it. (Also used in the above loops are fast 77-step Herschel conduits that translate and flip a Herschel. These were
known before).
The 65-step Herschel conduits have the following minimum repeat times (compression):
Conduit   Repeat   Example
Period    Time     Application
-------   ------   -----------------
   3        57     p57 Herschel loop
   4        60     p68 glider gun
   8        56     p56 Herschel loop
The 65-step conduits also make it possible to shrink the size of some Herschel-based glider guns. Here is one example:
Question: Dean's stamp collection shows p56 oscillators but has no entry for p57. Is the above the first p57?
Dieter dietrich.leithner@dlr.de
Back to Paul's Page of Conway's Life Miscellany | {"url":"http://www.radicaleye.com/lifepage/patterns/p57/p57.html","timestamp":"2014-04-19T22:09:39Z","content_type":null,"content_length":"2119","record_id":"<urn:uuid:2a9205ed-7832-4e2e-94ff-481955b72762>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculation Of Capital Charge For Credit Risk
III.B.6 Credit Risk Capital Calculation Dan Rosen1 III.B.6.1 Introduction As discussed in Chapter III.0, the primary role of capital in a bank, apart from the transfer of ... capital charge, based on
a bottom-up risk assessment of the structure.
To determine the counterparty credit risk capital charge as defined in the Basel III document, para 99 ... CVA risk capital charge calculation. LGDmkt is a market assessment of LGD that is used for
pricing the CVA, which might be
The purpose of the report is to provide an overview of the reinsurance credit risk charge in the calculation of the NAIC Property/Casualty (P/C) Risk-Based Capital (RBC), ... for an additional
capital charge to account for the risk that catastrophe reinsurance may be, in part, ...
The global financial crisis brought counterparty credit risk and CVA very much into the ... Basel II Regulatory Capital Charge is a fixed percentage of Risk Weighted Assets ... max add-on in CEM EAD
calculation is only 1.5% of the notional (for comparison, ...
Counterparty credit risk is the risk that a counterparty to a financial ... instruments is to incorporate the capital charge ... requirements (accuracy, calculation speed) of the counterparty credit
risk framework;
58 Master Circular - Prudential Norms on Capital Adequacy - Basel I Framework – 2013 ANNEX 9 Risk Weights for Calculation of Capital Charge for Credit Risk
changes to the credit risk calculation) less of a buffer will exist for other risks. It should also be noted that banks themselves typically hold capital well in excess of the current regulatory ...
The overall operational risk capital charge is the sum of the
Calculating credit risk capital charges with the ... Keywords: One-factor model, capital charge, granularity adjustment, quantile derivative. 1 Introduction ... The granularity adjustment approach to
the calculation of the α-quantile q
3 a Counterparty credit risk in Basel II 4.4 The Basel II credit risk framework dealt with counterparty credit risk for
Rating Based Approach (IRB) for calculation of capital charge for Credit Risk from April 1, 2012 onwards. 2. The draft guidelines for computing credit risk capital charge under IRB were accordingly
issued on August 10, 2011 to seek comments and suggestions from
Counterparty credit risk Measuring EAD under the IMM approach Key steps in calculating EAD under current regulations 1 Generate market risk factor
Credit Risk Capital Requirements and the Growth of Complexity ... The Basic Value-at-Risk-based Market Risk Capital Charge ... Calculation of the default risk capital requirement for the LTIRC
requires estimation of the distribution of ...
Capital adequacy for credit risk: A practical exercise Financial Institutions www.msnorthamerica.com
Counterparty Credit Risk and Basel III A Framework for Successful Implementation
tier one capital to total risk weighted credit exposures to be not less than 4 percent; total capital ... pay the bank can impact on credit risk. The calculation of credit exposures recognises and
adjusts for two factors:
This note focuses on key issues of the CVA risk capital charge that require addressing in ... CVA risks and hedges extend beyond credit spread risk The Basel 3 calculation of CVA VAR and stressed VAR
focuses exclusively on risk due
CREDIT RISK AND REGULATORY CAPITAL International Swaps and Derivatives Association March 1998
Credit risk, market or mismatch risk and ... Investment Risk Capital Charge is carried out using a simple approach of applying a ... 7. This Prudential Standard sets out the calculation of the
Investment Risk Capital
RBC calculation were reviewed. ... The scope of the work reflected in this memorandum is informed by the charge of the Investment Risk-Based Capital Working Group and applied to the derivative asset
... determining economic capital related to counterparty credit risk (off and on balance ...
Foundation IRB Capital Charge: Probability of d efault is most commonly associated with the Basel II IRB approach to credit risk . Under the IRB or ... Alchemy ORR can be configured and used for IRB
PD estimates and capital charge calculation or internal
credit risk, a capital charge for Credit Value Adjustment (CVA) risk, ... Figure VI shows an example of the capital calculation process and how deductions affect RWAs. 11 Under Basel III, capital is
divided into Common Equity Tier 1, ...
Modelling Incremental Risk Charge Integrated Market and Credit Risk Model ... Modelling Incremental Risk Charge IRC Calculation Kernel: Data Process control EC calculation module ... Calculation of
incremental capital charge and design of reporting.
• Sample capital calculation for banking book exposures • €100 million unrated senior corporate exposure • 100% risk weight ... and part of capital charge for specific risk for credit derivative that relates to such reference credit instrument
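To make the arithmetic in that sample concrete: under the Basel 8% rule the banking-book capital charge is exposure × risk weight × 8%. A minimal sketch, simply mirroring the figures in the excerpt above:
```python
exposure = 100_000_000      # EUR, unrated senior corporate exposure
risk_weight = 1.00          # 100% risk weight
capital_ratio = 0.08        # Basel minimum capital requirement

rwa = exposure * risk_weight            # risk-weighted assets
capital_charge = rwa * capital_ratio    # EUR 8,000,000
print(f"RWA: {rwa:,.0f}  Capital charge: {capital_charge:,.0f}")
```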
A capital charge, as defined below, ... 2. For the purposes of this Annex, and for the calculation of counterparty risk in ... and for the calculation of minimum capital requirements for credit risk
under Banking Rule BR/04, and without prejudice
institutions shall be subject to the capital charge for credit risk exposure ... Calculation of Capital Requirement 2.118 Under the comprehensive approach, the adjusted exposure amount after risk
mitigation for collateralised transactions is calculated as follows:
Risk and capital calculation. Trading book . All unsecuritized credit products ... comprehensive risk capital charge ... Calculation of the counterparty risk in accordance with the IRB credit risk
used in the capital calculation should incorporate the impact ... market risk capital charge is that the required capital levels ... subject to an 8 percent credit risk capital charge under the
earlier guidelines, ...
Annex 4 2 with the higher market risk capital charge for specific risk. 4.2 The existing text of §310(2)(b)(ii) of the BCRs, which requires that the
... introduced a new capital charge in Basel III, the credit valuation adjustment ... for calculating counterparty credit risk capital and ... adopting the CVA charge in a form which exempts
transactions from the capital calculation for CVA risk where such
IMM capital charge + Standardized CVA risk capital charge Migration risk via maturity adjustment in CCR 3. All other banks ... Rosen D., 2004, Credit Risk Capital Calculation, in Professional Risk
Manager (PRM) Handbook, Chapter III.B5, PRMIA Publications
calculate the credit risk capital charge for the IRB class or IRB subclass into ... Calculation of market risk capital charge for credit derivative contracts booked in reporting institutions’ trading
book General 1.
5.1. Capital requirements for Credit Risk ... calculation of credit risk capital requirement. These ineligible collaterals include inter alia, ... Consequently, the operational risk capital charge is updated on an annual basis.
where credit risk is dealt with differently: ... Correlation trading books are still subject to VaR-based risk calculation Currently, a sensitivity-based approach to measure market risk in
correlation ... capital charge driven trading activity, ...
three years to arrive at the operational risk capital charge. 3.2. ... calculation of credit risk capital requirement. These ineligible collaterals include inter alia, corporate and personal guarantees and equity shares.
Counterparty Credit Risk Capital and Credit Valuation Adjustment Michael Pykhtin ... calculation of CVA sensitivities is another challenge! EE ( | ) c tH tH tH. 17 ... From the ASRF assumptions it
follows that the credit VaR capital charge for each exposure is independent of the portfolio
operational risk capital charge using historical data for 77 rural banks in Indonesia for a ... tools, and processes, much like credit or market risk. Before the introduction by the Basel Committee,
... the calculation of capital charge under different methods was conducted. We
Capital Charge Calculator- Credit Risk ... calculation) Capital Charge Calculator - Operational Risk
... Calculation of Capital charge against Operational Risk: An Example 39 8. ... calculating risk weighted asset against credit risk, capital charge against market risk and operational risk. b) Main
features of a rigorous review process:
a capital charge for market risk. ... approaches for the calculation of capital requirements. These are standardized approach, the foundation and ... NCOTL Credit Risk = Net Charge Off (impairments)
/ Total Loans and Advances of Bank i in time t
1 Counterparty Credit Risk Measurement Under Basel II A presentation by ISDA Asia 2007
Incremental risk charge (unsecuritized credit positionsNew with modeled specific risk) ... and charge capital to desks for their market risk regulatory capital costs. ... infrastructure to support
the calculation of daily clean P&L that excludes fees, commissions, ...
13 CAPITAL CHARGE FOR CREDIT RISK ..... 22. 2 PART I- PRELIMINARY 1 MANDATE These guidelines are issued pursuant to Section ... These guidelines cover the calculation of capital charge for credit
risk under the Standardised Approach.
S16 CREDIT RISK: SECURITISATION ... different methods for regulatory capital calculation are suggested in Basel II for banks qualified for the internal-ratings based (IRB) approach. ... is the
marginal capital charge ( s-adenotes stand-alone tranche) ...
Capital for Counterparty Credit Risk Dear Raquel ... decision to impose the CVA capital charge on the client-to-clearing member leg ... capital calculation to exchange traded derivatives too. Given
that this is a new requirement,
applying a credit risk capital charge, that is, the incremental risk charge, to trading ... risk approach to calculate the specific risk capital charge for all debt positions and for all
securitization positions that are not correlation trading positions.
own credit risk in the fair value of derivative liabilities. Recording an OCA, ... certain break clauses to be treated as risk mitigants for the calculation of the capital charge on CVA remains
uncertain, and is therefore an additional source of concern
calculation of credit risk capital requirement. These ineligible collaterals include, ... Consequently, operational risk capital charge is updated on an annual basis. Capital Adequacy and Risk
Management Report as at 31st December 2009 Page 31
PROPOSAL FOR A RISK-BASED CAPITAL CHARGE FOR PROPERTY ... A separate contingent credit risk charge will be calculated for the hurricane peril and for the ... covariance calculation. Comment: A credit
risk charge for ceded reinsurance receivable is currently provided for in the
July 15, 2005 Re: Calculation of Risk Weights for Residual Value Purposes International Convergence of Capital Measurements and Capital Standards,
Risk Charge “Use test” for ratings, derivatives ... Base Model: Calculating Credit Risk Economic Capital In a nutshell EAD Client & Product LGD PD R2 Countries ... Joint Economic Capital calculation
for Traded Default Risk and Credit Risk | {"url":"http://ebookilys.org/pdf/calculation-of-capital-charge-for-credit-risk","timestamp":"2014-04-18T05:33:44Z","content_type":null,"content_length":"44542","record_id":"<urn:uuid:a5775fc3-3d16-471d-bab5-c713f9ca0c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
New London Township, PA Precalculus Tutor
Find a New London Township, PA Precalculus Tutor
...My Photographs have Appeared in: Wilmington International Photography Exhibition, acceptances Community News (including on front page) Several other local newspapers A.I. DuPont High School
(Football team photographer) Marbrook Elementary School Charter School of Wilmington CSW Li...
39 Subjects: including precalculus, chemistry, physics, calculus
...Homework will no longer feel like torture because you can do it quickly and easily. Or maybe you are already doing well, and you want to improve your grade from a B+ to an A? Good for you!
16 Subjects: including precalculus, chemistry, physics, calculus
...By being open to using multiple teaching techniques, I will help my students overcome the difficulties they have been having in their mathematics classes. I look forward to working with you and expanding your mathematical abilities! The Integrated Algebra Regents goes over the topics of the algebr...
22 Subjects: including precalculus, calculus, physics, elementary math
...Parents, are you having trouble getting your child excited about studying those subjects you struggled with in school? Let my passion for chemistry and math help your child get excited and
excel in these subjects. I have worked over ten years in a laboratory setting as a chemist, both on the bench and managing the lab.
8 Subjects: including precalculus, chemistry, statistics, algebra 1
I am a high school senior on my way to graduating. I have maintained a 4.3 average throughout high school. I am currently enrolled in English 110 at the University of Delaware.
21 Subjects: including precalculus, English, reading, chemistry
| {"url":"http://www.purplemath.com/New_London_Township_PA_precalculus_tutors.php","timestamp":"2014-04-18T18:52:33Z","content_type":null,"content_length":"24537","record_id":"<urn:uuid:e551df4d-209e-4457-8127-afa415500fa0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: informal versus formal reasoning
Randall Holmes holmes at catseye.idbsu.edu
Thu Mar 18 18:25:52 EST 1999
I must hasten to add that I regard formal proof as completely
indispensable in mathematics; I would hate to be accused of being a
postmodernist. First-order logic does capture a (nearly) universal
standard of formalized proof.
I also regard exact formal definition of what we are talking about as
being completely indispensable in mathematics; thus the need for the
expressive power of second-order logic.
Formal proofs (using generally accepted principles such as those of
first-order logic) allow us to see that the conclusions we draw follow
from our axioms. They also allow us to identify the axioms we are
using, which themselves may require "informal" justification. If you
do not share my conviction that a certain axiom is true, this allows
you to express your objections to my proof precisely; you can separate
objections on a formal level from objections on an informal or
"intuitive" level. (If you are a constructivist, you can examine the
formal principles of reasoning I accept and express your possibly
stronger objections to my proof precisely as well).
As long as we confine ourselves to what can be proved within a strong
formal system such as ZFC which most of us are willing to accept as
safe, the need for informal justifications of axioms can be kept to a
minimum (even to zero) allowing some of us to be convinced (mostly
harmlessly) that the formal system in question "formalizes
And God posted an angel with a flaming sword at | Sincerely, M. Randall Holmes
the gates of Cantor's paradise, that the | Boise State U. (disavows all)
slow-witted and the deliberately obtuse might | holmes at math.idbsu.edu
not glimpse the wonders therein. | http://math.idbsu.edu/~holmes
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1999-March/002840.html","timestamp":"2014-04-19T14:55:46Z","content_type":null,"content_length":"4095","record_id":"<urn:uuid:e861cd45-9c22-4f15-bb7e-788fd7cfa382>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
MCMC convergence assessment
November 27, 2012
By xi'an
(This article was originally published at Xi'an's Og » R, and syndicated at StatsBlogs.)
Richard Everitt tweeted yesterday about a recent publication in JCGS by Rajib Paul, Steve MacEachern and Mark Berliner on convergence assessment via stratification. (The paper is free-access.)
Since this is another clear interest of mine, I had a look at the paper in the train to Besançon. (And wrote this post as a result.)
The idea therein is to compare the common empirical average with a weighted average relying on a partition of the parameter space: restricted means are computed for each element of the partition and
then weighted by the probability of the element. Of course, those probabilities are generally unknown and need to be estimated simultaneously. If applied as is, this idea reproduces the original
empirical average! So the authors use instead batches of simulations and corresponding estimates, weighted by the overall estimates of the probabilities, in which case the estimator differs from the
original one. The convergence assessment is then to check both estimates are comparable. Using for instance Galin Jone’s batch method since they have the same limiting variance. (I thought we
mentioned this damning feature in Monte Carlo Statistical Methods, but cannot find a trace of it except in my lecture slides…)
The difference between both estimates is the addition of weights $p_{in}/q_{ijn}$, made of the ratio of the estimates of the probability of the $i$th element of the partition. This addition thus introduces an extra element of randomness in the estimate and this is the crux of the convergence assessment. I was slightly worried though by the fact that the weight is in essence a harmonic mean, i.e. $1/q_{ijn}\big/\sum_m q_{imn}$… Could it be that this estimate has no finite variance for a finite sample size? (The proofs in the paper all consider the asymptotic variance using the delta method.) However, having the weights adding up to $K$ alleviates my concerns. Of course, as with other convergence assessments, the method is not fool-proof in that tiny, isolated, and unsuspected spikes not (yet) visited by the Markov chain cannot be detected via this comparison of averages.
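A toy version of the diagnostic is easy to sketch. The Python below is my own illustrative reading of the idea (quantile-based strata, overall probability estimates applied to batch-wise restricted means), not the authors' code:
```python
import numpy as np

def stratified_vs_plain(chain, n_strata=5, n_batches=20):
    """Plain MCMC average vs. a batch-wise post-stratified average."""
    edges = np.quantile(chain, np.linspace(0, 1, n_strata + 1))
    edges[-1] += 1e-9                       # make the top stratum inclusive
    p = np.histogram(chain, bins=edges)[0] / len(chain)  # overall stratum probs

    batch_means = []
    for b in np.array_split(chain, n_batches):
        m = 0.0
        for i in range(n_strata):
            in_i = (b >= edges[i]) & (b < edges[i + 1])
            if in_i.any():                  # empty strata simply drop their mass
                m += p[i] * b[in_i].mean()  # p_i times the batch restricted mean
        batch_means.append(m)
    return chain.mean(), float(np.mean(batch_means))

rng = np.random.default_rng(4)
chain = np.cumsum(rng.normal(size=5000)) * 0.02 + rng.normal(size=5000)
print(stratified_vs_plain(chain))           # a large gap suggests poor mixing
```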
Filed under:
University life
batch means
convergence assessment
Monte Carlo Statistical Methods
variance estimation
Please comment on the article here: Xi'an's Og » R | {"url":"http://www.statsblogs.com/2012/11/27/mcmc-convergence-assessment/","timestamp":"2014-04-21T04:54:19Z","content_type":null,"content_length":"39960","record_id":"<urn:uuid:5e76adb7-d1a5-4778-9c00-5c123daf6513>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to draw ellipse in engineering drawing
how to draw ellipse in engineering drawing
posts: 22
Registered: 2012-7-6
Message 1 of 3
How to draw ellipse in engineering drawing?
Thanks for any answer.
posts: 7
Registered: 2012-7-6
Message 2 of 3
Step-1 Draw a horizontal axis of some suitable length. And mark a point O on it.
Step-2 Draw a vertical axis, perpendicular to the horizontal axis & passing through the point O, of length equal to the length of the minor axis, which is 60 mm, and give points C & D as shown in the figure.
Step-3 Mark the points F1 & F2, which are the focal points, on the horizontal axis, 80 mm apart from each other.
Step-4 Measure the distance between the point F1 and C (or D); with this distance, keep O as center and cut the horizontal axis on both sides, giving points A & B, which define the major axis of the ellipse. Hence the length of the major axis is 100 mm.
Step-5 With O as center and radii equal to OC and OA or OD and OB draw two circles.
Step-6 Divide these circles into 12 equal divisions as shown in the figure, and give the notations 1, 2, 3, etc. where these lines of division intersect the outer circle & 1’, 2’, 3’, etc. where these lines intersect the inner circle.
Step-7 Draw vertical lines from the points 2,3,5,6, in downward directions and 8,9,11,12 in upward directions respectively.
Step-8 Draw horizontal lines from the points 2’,3’,11’,12’ in left hand direction and from the points 5’,6’,8’,9’ in right hand direction respectively.
Step-9 Give the notations p1, p2, p3, etc. at the points of intersection of these horizontal & vertical lines respectively, as shown in the figure.
Step-10 Draw a smooth freehand medium-dark curve through the points p1, p2, p3, etc. in sequence; the resulting curve is the ellipse.
Step-11 Give the dimensions by any one method of dimensions and give the name of the components by leader lines wherever necessary.
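For anyone who wants to verify the construction numerically: the concentric-circles method above produces exactly the points (a cos t, b sin t). Here is an illustrative Python/matplotlib sketch using the same 100 mm / 60 mm axes (a = 50, b = 30):
```python
import numpy as np
import matplotlib.pyplot as plt

a, b = 50, 30                                  # semi-major / semi-minor (mm)
t = np.radians(np.arange(0, 360, 30))          # the 12 equal divisions

# Vertical from the outer circle, horizontal from the inner circle:
px, py = a * np.cos(t), b * np.sin(t)          # the points p1, p2, p3, ...

tt = np.linspace(0, 2 * np.pi, 200)
plt.plot(a * np.cos(tt), b * np.sin(tt))       # the smooth ellipse
plt.plot(a * np.cos(tt), a * np.sin(tt), ":")  # outer circle
plt.plot(b * np.cos(tt), b * np.sin(tt), ":")  # inner circle
plt.plot(px, py, "o")
plt.gca().set_aspect("equal"); plt.show()
```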
posts: 7
Registered: 2014-2-17
Message 3 of 3
Wow, that looks like a lot of work
| {"url":"http://www.zwsoft.com/forum/thread-3331-1-1.html","timestamp":"2014-04-17T06:55:51Z","content_type":null,"content_length":"38000","record_id":"<urn:uuid:337bb896-0483-458f-9808-00d62a9b4faa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding integer points on an N-d convex hull
Suppose we have a convex hull computed as the solution to a linear programming problem (via whatever method you want). Given this convex hull (and the inequalities that formed the convex hull) is
there a fast way to compute the integer points on the surface of the convex hull? Or is the problem NP?
There exist ways to bound the number of integer points and to find the number of integer points inside convex hull, but I specifically want the points on the hull itself.
EDIT: Suppose the set of inequalities (the linear program) have integer coefficients /EDIT
co.combinatorics computational-geometry geometry algorithms
I guess that the surface of your convex hull is defined over $\mathbb Q$. The problem looks too broad and already hard in 2 dimensions (where, BTW, I don't see any particular benefit from having a
convex hull). – Wadim Zudilin Jun 24 '10 at 6:45
Right, and I'm working in something more like 22 dimensions. It's definitely POSSIBLE to calculate, I already have an algorithm, I was just wondering if there's any fast way to do it. – Michael
Hoffman Jun 24 '10 at 13:08
1 Answer
Because the facets of your convex hull are themselves polytopes (of one lower dimension, $d{=}21$ in your case), it seems your question is equivalent to asking how to count lattice points in a polytope. One paper on this topic is "The Many Aspects of Counting Lattice Points in Polytopes" by J. A. DeLoera. Section 4 is entitled, "Actually Counting: Modern Algorithms and Software." A key reference in that section is to a paper by Barvinok entitled, "Polynomial time algorithm for counting integral points in polyhedra when the dimension is fixed" (Math of Operations Research, Vol. 19, 769–779, 1994), whose title (polynomial time) seems to provide an answer of sorts to your question.
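(For small instances one can of course just enumerate. The sketch below brute-forces the integer points on the boundary of {x : Ax <= b} over a bounding box; it is purely illustrative, exponential in the dimension, and nothing like Barvinok's polynomial-time algorithm.)
```python
import itertools
import numpy as np

def boundary_lattice_points(A, b, box):
    """Integer points x with A @ x <= b and at least one constraint tight."""
    A, b = np.asarray(A), np.asarray(b)
    pts = []
    for x in itertools.product(*(range(lo, hi + 1) for lo, hi in box)):
        s = A @ np.array(x)
        if np.all(s <= b) and np.any(s == b):   # feasible and on a face
            pts.append(x)
    return pts

# Toy example: the square 0 <= x, y <= 3
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [3, 0, 3, 0]
print(boundary_lattice_points(A, b, [(0, 3), (0, 3)]))
```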
| {"url":"http://mathoverflow.net/questions/29269/finding-integer-points-on-an-n-d-convex-hull?sort=oldest","timestamp":"2014-04-20T01:56:11Z","content_type":null,"content_length":"54195","record_id":"<urn:uuid:95d294ac-06bb-469b-a40d-ca8afabf8931>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
ACTUARIES Statistical Methods
IAI CT6 1110Page 2 of 5
Q. 1)
A veteran actuary believes that the claims from a particular type of policy follow the Burr distribution with parameters
and 75.0
. As per his recommendation, the insurance company has set a deductible such that 25% of the losses result in no claim to the insurer.
(i) Calculate the size of the deductible.
(ii) An actuarial trainee suspects that the deductible set by the veteran actuary is based on more of surmise than data. She has access to data on 1250 claims (net of deductible). Continuing with the assumption of the Burr distribution for the original claims, she wishes to estimate its parameters from the available data, by using the method of maximum likelihood. Give an expression for the probability density function of the observed data (net of deductible), and the likelihood function that has to be maximized.
(iii) Give an expression for the maximum likelihood estimate (MLE) of the true fraction of the losses that result in no claim to the insurer, in terms of the MLE of the parameters.
(3)(4)(2)
Q. 2)
The annual number of claims on a particular risk has the Binomial distribution with maximum claim number 10 and average claim number
. The prior density of the parameter
, where
are known positive integers. The number of claims in the years 2007, 2008 and 2009 were
(i) Determine the prior mean of
(ii) Determine the maximum likelihood estimator of
(iii) Determine the Bayes estimate of the number of claims in the year 2010, under the squared error loss function. (4)
(iv) Show that the estimator of part (iii) has the form of a credibility estimate, and identify the credibility factor. (2)
(v) Determine the credibility estimator of
under EBCT Model 1 and compare with the result of part (iii). (6)
Q. 3)
The aggregate claims process for a risk is a compound Poisson process with rate
per annum. Individual claim amounts are Rs. 2500 with probability 0.25, Rs. 5000 with probability 0.5, or Rs. 7500 with probability 0.25. The premium loading is 10%. Let
denote the aggregate annual claim amount.
(i) Calculate the mean and variance of
(ii) Using a normal approximation to the distribution of
, calculate the initial surplus required in order that the probability of ruin at the end of the first year is 0.05. (3)
(iii) A reinsurer offers to sell to the insurer proportional reinsurance for 25% of the claims, for premium loading 15%. If this offer is accepted, calculate the modified initial surplus required in order that the probability of ruin at the end of the first year is 0.05. (4)
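[A numerical sketch for Q.3 parts (i)-(ii), not part of the paper: the Poisson rate is missing from this extract, so lambda below is an assumed placeholder.]
```python
from math import sqrt
from scipy.stats import norm

lam = 100            # ASSUMED Poisson rate; the actual value is lost in this extract
amounts = [2500, 5000, 7500]
probs = [0.25, 0.5, 0.25]
theta = 0.10         # premium loading

m1 = sum(p * x for p, x in zip(probs, amounts))     # E[X] = 5000
m2 = sum(p * x**2 for p, x in zip(probs, amounts))  # E[X^2] = 28,125,000
ES, VarS = lam * m1, lam * m2                       # compound Poisson moments

# Ruin by end of year 1: U + (1+theta)*E[S] - S < 0, i.e. S > U + premium.
# Normal approximation: U = z_{0.95} * sd(S) - theta * E[S].
U = norm.ppf(0.95) * sqrt(VarS) - theta * ES
print(ES, VarS, round(U))
```
| {"url":"http://www.scribd.com/doc/109812070/ACTUARIES-Statistical-Methods","timestamp":"2014-04-18T23:51:26Z","content_type":null,"content_length":"198288","record_id":"<urn:uuid:1fab872f-8077-4efa-9eef-08ab7b820ecc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00487-ip-10-147-4-33.ec2.internal.warc.gz"}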
Why Do Students in the U.S. Have Weaker Math Skills vs. Students in Other Countries?
March 24, 2010 by goldstudentcom
The American educational system has been under quite a bit of pressure due to students' lackluster math skills vs. other countries. This is an alarming state of affairs, and one that has far-reaching
implications for our children’s future. But who has the best educational system and how are the educational systems different?
There are many reports comparing the math skill performance of U.S. students in pre-high school and high school to the math skill performance of students in other countries, with U.S. students
scoring significantly lower than in many other countries. For example, in 2004 the New York Times reported that high school students in Hong Kong, Finland and South Korea do best in mathematics among
those in 40 surveyed countries, while students in the United States finished in the bottom half, according to a new international comparison of mathematical skills shown by 15-year-olds (NYT, Dec. 7,
Such reports also tie into recent falls in average math SAT scores, with the largest drop in math scores in 30 years reported by the Wall Street Journal in 2007 (WSJ, August 29, 2007).
But why do students in the U.S. have weaker math skills vs. students in other countries? And why do the math performance assessments of U.S. students continually fall short of students in other
countries? Two key reasons are that students in other countries tend to follow math curricula that involve significantly more drilling of basic math operations, and also tend to use calculators much
less in the classroom than do students in the U.S. ( Reassessing U.S. International Mathematics Performance: New Findings from the 2003 TIMSS and PISA, American Institutes for Research, November
Practice, practice, practice – and the ability to work through math problems without calculators – appear to be two critical criteria for U.S. students to achieve math success. But these two
solutions are often out of the control of the American educational system to provide alone. How can student’s skills in the U.S. keep pace with the rest of the world? U.S. students need additional
educational services to remain competitive in a global market.
GoldStudent has been designed in order to directly counter these disturbing trends and to assist the American educational system by providing supplemental assistance. GoldStudent emphasizes
personalized and continuous practice drills (math worksheets) of basic math concepts for all students. Students are able to test their math skills through a series of math performance tasks that are
personalized to their skill level. Throughout their studies with GoldStudent, students receive math performance assessments to track how they are progressing over time. At GoldStudent we also believe
in performance based math, meaning that students are rewarded for their practice and their progress.
It is true that currently students in the U.S. have weaker math skills vs. students in other countries. But there is quite a bit that parents can do to help their children succeed in math and
ultimately succeed in the global job market. | {"url":"http://goldstudentcom.wordpress.com/2010/03/24/why-do-students-in-the-u-s-have-weaker-math-skills-vs-students-in-other-countries/","timestamp":"2014-04-20T20:08:22Z","content_type":null,"content_length":"70474","record_id":"<urn:uuid:6f7ba3a1-b5ab-4790-aa02-f8121c4ea472>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Merion, PA Geometry Tutor
Find a Merion, PA Geometry Tutor
...I have been tutoring for pay for over five years, as well as volunteering for countless years previous. Although most of the students I tutor are at a level between prealgebra and precalculus,
I am very well versed in higher mathematics. I am friendly and considerate of all learning styles and abilities.
11 Subjects: including geometry, calculus, algebra 1, algebra 2
...Our tutoring sessions will start where you are. Our sessions will be based on mutual respect. In a short time we can build ourselves a partnership of learning.
10 Subjects: including geometry, calculus, physics, algebra 1
I have been teaching math for over 20 years now, and was awarded educator of the year four times. I was also mentor of the year twice. I have a variety of experience teaching not only in different countries, but also here in public schools, private schools, charter schools, and adult continuing education schools.
15 Subjects: including geometry, algebra 1, algebra 2, GED
...It's also good to know which chord sequences sound better than others. Along the way, there's also rhythm: time signatures, note durations, and triplets and other rhythmic patterns. The main
use I personally have for music theory is taking a melody line and finding harmony for it, so I have to ...
16 Subjects: including geometry, reading, English, writing
...I currently teach full time for the School District of Philadelphia. I am a certified math teacher for the School District of Philadelphia. For the past 4 years I have taught 9th grade Algebra, preparing students for Pennsylvania Keystone exams in Algebra. Other subjects I've taught include SAT Math, Statistics, Trigonometry, and Algebra 2.
9 Subjects: including geometry, algebra 1, algebra 2, precalculus
| {"url":"http://www.purplemath.com/Merion_PA_geometry_tutors.php","timestamp":"2014-04-19T00:00:07Z","content_type":null,"content_length":"23983","record_id":"<urn:uuid:58017f2b-2b28-4bfe-96cb-b71ddb0f3285>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
TI-82: Counting Principles
The factorial (!) key is located under the math probability menu. Enter the number first, then the factorial key.
10! = 10 MATH PRB 4
The permutation key (nPr) is located under the math probability menu. Enter the number of objects, n, first; then the permutation key; then the number of objects to take at one time, r.
P(10,3) = [10]P[3] = 10 MATH PRB 2 3
The combination key (nCr) is located under the math probability menu. Enter the number of objects, n, first; then the combination key; then the number of objects to take at one time, r.
C(10,5) = [10]C[5] = 10 MATH PRB 3 5
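If you want to double-check the calculator's answers off-device, the same quantities are easy to compute in a short C program (a cross-check of my own, not part of the TI-82 workflow):

#include <stdio.h>

/* nPr and nCr via multiplicative formulas, which avoid the huge
   intermediate values of the pure factorial definition. */
static unsigned long long nPr(unsigned n, unsigned r) {
    unsigned long long result = 1;
    for (unsigned i = 0; i < r; i++)
        result *= n - i;
    return result;
}

static unsigned long long nCr(unsigned n, unsigned r) {
    unsigned long long result = 1;
    for (unsigned i = 1; i <= r; i++)
        result = result * (n - r + i) / i;  /* division is exact at each step */
    return result;
}

int main(void) {
    printf("P(10,3) = %llu\n", nPr(10, 3));  /* 720 */
    printf("C(10,5) = %llu\n", nCr(10, 5));  /* 252 */
    return 0;
}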
Sometimes, combinations need to be combined with the fundamental counting principle. This can easily be done on the calculator.
Example: How many ways can five women be selected from ten women and three men selected from eight men? The solution is shown below. The parentheses are optional, but it is suggested you use them for clarity.
( 10 nCr 5 ) * ( 8 nCr 3 ) | {"url":"https://people.richland.edu/james/ti82/ti-count.html","timestamp":"2014-04-24T05:56:18Z","content_type":null,"content_length":"1911","record_id":"<urn:uuid:8a67914a-02b8-4291-89c6-a83490e20941>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are there any solutions to $2^n-3^m=1$
Are there any positive integer solutions to $2^n-3^m=1$ with $n,m>2$?
By way of justifying the question, I've found lots of info on what happens when $m=n$ (mostly FLT variations, Darmon + Merel, ...) but don't really know where to look for $m\not=n$.
Also it's pretty obvious that you can't have solutions to similar equations, e.g. $2^n-3^m=2$. There are no solutions for $n,m<1000$ aside from $n=2,m=3$. It seems pretty likely to me that it should
happen for some large numbers at some point, though.
Are there any theorems I don't know about regarding primes $p,q$ and $p^n-q^m=k$, $k \in \mathbb{N}$, that might rule out a solution or help me find one?
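(A search like the one mentioned above is easy to reproduce. Here is a minimal C sketch of my own, restricted to exponents that fit in 64-bit integers; checking all $n,m<1000$ as described would need big-integer arithmetic.)

#include <stdio.h>

int main(void) {
    /* Look for 2^n - 3^m = 1 with n, m > 2, staying within 64 bits. */
    unsigned long long p2 = 8;                 /* 2^3 */
    for (int n = 3; n < 63; n++, p2 *= 2) {
        unsigned long long p3 = 27;            /* 3^3 */
        for (int m = 3; m < 40; m++, p3 *= 3) {
            if (p2 == p3 + 1)
                printf("n=%d, m=%d\n", n, m);
        }
    }
    return 0;                                  /* prints nothing, as expected */
}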
en.wikipedia.org/wiki/Catalan%27s_conjecture – Todd Trimble♦ Jul 1 '11 at 12:09
7 Didn't Gersonides do this in 1343? en.wikipedia.org/wiki/Gersonides "One year later, at the request of the bishop of Meaux, he wrote The Harmony of Numbers in which he considers a problem of
Philippe de Vitry involving so-called harmonic numbers, which have the form $2^m\cdot 3^n$. The problem was to characterize all pairs of harmonic numbers differing by 1. Gersonides proved that
there are only four such pairs: (1,2), (2,3), (3,4) and (8,9)." Ivars Peterson gives an easy proof at maa.org/mathland/mathtrek_1_25_99.html – Junkie Jul 1 '11 at 12:25
Thanks guys, that was much more interesting than I expected! – Kevin Jul 1 '11 at 13:22
Hmm, the question is already answered, and an answer is also accepted, so this is just an addendum. I'd simply reorder the equation into $2^n-1=3^m$ and use Euler's phi-function for the
prime factors of the lhs and the powers of 3: to have a power k of 3 as a factor of $2^n-1$, n must have the form $x*\varphi(3^k)=x*2*3^{k-1}$ where x is coprime to 2 and 3. After that, the lhs has
additional (prime) factors due to the $\varphi$-function for nontrivial n>1 except if n=6; here we can use Zsigmondy's theorem or a simple comparison of the growth rates of the lhs and rhs, if k>
2. – Gottfried Helms Jul 1 '11 at 18:56
4 Answers
Here is the proof of Gersonides [Levi ben Gershon] (1343) for $2^n-3^m=1$. It uses nothing more than arithmetic modulo $8$.
Case I: $m$ is even. Then $3^m$ is 1 mod 4, so $2^n$ is 2 mod 4, implying $n=1$ and $m=0$.
Case II: $m$ is odd. Then $3^m$ is 3 mod 8, so $2^n$ is 4 mod 8, implying $n=2$ and $m=1$.
The alternative equation $3^m-2^n=1$ follows similarly when $m$ is odd, but is a bit more tricky when $m$ is even (hint: factor $2^n=3^m-1=(3^{m/2}+1)(3^{m/2}-1)$ and argue from there).
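(For the record, the hinted factorization finishes the even case quickly, assuming I'm reading the hint as intended: if $m=2k$ then $3^k-1$ and $3^k+1$ are both powers of $2$ that differ by $2$, so they must be $2$ and $4$; hence $3^k=3$, giving $m=2$ and $2^n=8$, $n=3$, i.e., the pair $8,9$.)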
2 Yes, indeed, the argument of Levi ben Gershon didn't change during the last year: mathoverflow.net/questions/29926/… – Wadim Zudilin Jul 2 '11 at 17:30
The equation $p^n-q^m=k$ is a special case of the $S$-unit equation, so it has finitely many solutions by a theorem of Siegel. Using linear forms in logarithms you can get an explicit
upper bound for the size of the solutions and, in principle, find all of them. In practice, the bounds may be too big in general, but I am sure $2^n-3^m=1$ has been done (it also follows
from Catalan, as Todd has just commented). Check out Baker's book on Transcendence.
This is the content of the Catalan conjecture. That is to say, the only solution in the natural numbers of $x^a - y^b = 1$ for $x, a, y, b > 1$ is $x=3, a=2, y=2, b=3$.
It was proved in 2002 by Preda Mihăilescu.
http://en.wikipedia.org/wiki/Catalan's_conjecture
A good reference for a self-contained proof is the book by René Schoof: "Catalan's Conjecture", Universitext, Springer-Verlag, 2008.
2 The question is much more special than Catalan's conjecture: the numbers x and y are already specialized to 3 and 2. So it is misleading to say the question "is the content" of that
harder problem. Still, given that Catalan's conjecture is solved, making a link between them is worthwhile for perspective. – KConrad Jul 1 '11 at 17:31
Of course you are right, I was a bit hasty – Valerio Talamanca Jul 3 '11 at 12:24
There is another method that allows one to handle $a^{n}-b^{m}=k$ (here I call the bases $a$ and $b$ because primality is not important to how the method works). Specifically, if one has a
solution, it allows a larger solution to be found, or proven to not exist.
As an example of this method, it is easy to outline a proof that $2^{n} - 5^{m} = 3$ has no solutions larger than $(m,n)=(3,7)$:
Suppose $m>3$, $n>7$, and $2^{n}-5^{m}=3$.
Rewrite the equation as $2^{n}=5^{m}+3$.
Now, to use the largest solution we know, subtract $2^{7}=5^{3}+3$ from both sides to obtain $2^{7}(2^{n-7}-1)=5^{3}(5^{m-3}-1)$.
Since $m$ and $n$ give a solution larger than the one we know, both sides are positive integers. Since $n>7$, the highest power of $2$ dividing the right side is $2^{7}$.
Since the order of $5$ in $(\mathbb{Z}/(128))^{\times}$ is $32$, we know that $32$ divides $m-3$, and $5^{32}-1$ divides both sides. Then $29423041$, as a prime factor of $5^{32}-1$,
divides both sides. Then $29423041$ divides $2^{n-7}-1$, so since the order of $2$ in $(\mathbb{Z}/(29423041))^{\times}$ is $122596$, $2^{122596}-1$ divides both sides. (This is probably
not a profitable direction to take, but it can work as an illustration of the method.)
The contradiction would be obtained by concluding that $5^{4}$ divides the right side, or $2^{8}$ divides the left side.
In the case where a larger solution exists, the ability to bounce back and forth between the two sides of the equation only goes as far as concluding that the larger solution (that is, the
common value of both sides when the larger solution is plugged in) minus the common value from the known solution divides both sides.
Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/69253/are-there-any-solutions-to-2n-3m-1/69298","timestamp":"2014-04-21T02:31:29Z","content_type":null,"content_length":"73022","record_id":"<urn:uuid:a62679bd-0e13-4fec-a588-71d2ed804c85>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00299-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary tree insertion sort in C
Hey can anyone explain how to sort a binary tree using insertion sort in C language where time complexity is an issue. I am just learning to code. Thank you guys!
If you coded the binary tree in the traditional sense, then when you add items to the tree, it will preserve the sorted order. You can get the full list of items in order by
traversing the tree. I would recommend you read:
• http://en.wikipedia.org/wiki/Tree_traversal
• http://en.wikipedia.org/wiki/Binary_tree
Also take a look at: http://nova.umuc.edu/~jarc/idsv/lesson1.html
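To make that concrete, here is a minimal C sketch (my own illustration, not from the linked pages): inserting into a binary search tree preserves the ordering invariant, and an in-order traversal then visits the values in sorted order.

#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *left, *right;
};

/* Insert keeps smaller values to the left, larger (or equal) to the right. */
static struct node *insert(struct node *root, int value) {
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->left = n->right = NULL;
        return n;
    }
    if (value < root->value)
        root->left = insert(root->left, value);
    else
        root->right = insert(root->right, value);
    return root;
}

/* In-order traversal: left subtree, node, right subtree. */
static void inorder(const struct node *root) {
    if (root == NULL)
        return;
    inorder(root->left);
    printf("%d ", root->value);
    inorder(root->right);
}

int main(void) {
    int items[] = {5, 1, 4, 2, 3};
    struct node *root = NULL;
    for (int i = 0; i < 5; i++)
        root = insert(root, items[i]);
    inorder(root);   /* prints: 1 2 3 4 5 */
    printf("\n");
    return 0;        /* nodes are leaked; fine for a one-shot demo */
}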
It's worth noting that there's a certain terminology to use here. A binary tree is a data structure where every node has at most two children. There are no conventions for ordering of nodes in a
binary tree.
A binary search tree is a binary tree such that for a given node N, all nodes in the left subtree of N are considered "less than" N and all nodes in the right subtree of N are considered "greater
than" N. You could also allow nodes considered "equal to" N in the tree, as long as you consistently define them to be put in the left subtree or the right subtree.
As others have suggested, your best bet is either to amend the code to construct a binary search tree instead of a normal binary tree, or convert the binary tree into a linear data structure and
sort it. | {"url":"http://stackoverflow.com/questions/15014216/binary-tree-insertion-sort-in-c","timestamp":"2014-04-20T07:25:50Z","content_type":null,"content_length":"73508","record_id":"<urn:uuid:2c8dd069-0ddf-430c-87a2-e3678ae9aff0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Assignment 1.3 - conflicting Specs.
Assignment wording:
a leap year occurs on every year that is evenly divisible by 4
except every year that is evenly divisible by 100
except every year that is evenly divisible by 400.
>java Leap 2000
>leap year!
My words: The way I am reading the specs, the year 2000 shouldn't be a leap year because even though it's divisible by 4, it is also divisible by 100 and 400!
so is the year 100 or 400 a leap year or not?
I read the specs so many times before finally posting this!
2000 is divisible by 4 therefore it is a leap year,
but 2000 is divisible by 100 so it's not a leap year,
but 2000 is divisible by 400 so it is a leap year.
A leap year happens every 4 years. But years divisible by 100 aren't leap years unless they are also divisible by 400.
Hope that sort of clears it up?
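The three cascading rules collapse into a single boolean test. A minimal sketch in C (the assignment itself is Java, but the condition is identical there):

#include <stdio.h>
#include <stdlib.h>

/* Divisible by 4, except centuries, except every fourth century. */
static int is_leap(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(int argc, char *argv[]) {
    if (argc < 2)
        return 1;
    int year = atoi(argv[1]);
    printf("%s\n", is_leap(year) ? "leap year!" : "not a leap year");
    return 0;
}

With this test, 2000 is a leap year (divisible by 400) and 1900 is not (divisible by 100 but not by 400).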
subject: Assignment 1.3 - conflicting Specs. | {"url":"http://www.coderanch.com/t/3679/Cattle-Drive/Assignment-conflicting-Specs","timestamp":"2014-04-19T02:57:24Z","content_type":null,"content_length":"19207","record_id":"<urn:uuid:84c0410b-91fc-4ab7-9585-a04f9bf0249b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evergreen Park Prealgebra Tutor
Find an Evergreen Park Prealgebra Tutor
...Thus I bring first-hand knowledge to your history studies. I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational
Biology for my Ph.D. dissertation. I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes.
41 Subjects: including prealgebra, chemistry, English, calculus
...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help
all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon.
12 Subjects: including prealgebra, calculus, geometry, algebra 1
...Proper time management and attention to detail are the keys to a high score. This requires effortful engagement with the material and some open-mindedness on the part of the student. The tutor's
job is to provide the student with strategies that will help them overcome any obstacles.
24 Subjects: including prealgebra, calculus, physics, geometry
...I have a Ph.D. in experimental nuclear physics. I have completed undergraduate coursework in the following math subjects: differential and integral calculus, advanced calculus, linear algebra,
differential equations, advanced differential equations with applications, and complex analysis.
10 Subjects: including prealgebra, calculus, physics, geometry
Learning is a passion of mine that I enjoy passing along to others. I received a Bachelor's Degree in Health and Exercise Science from Wake Forest University in 2011, with a GPA of 3.7. As a
Pre-medical student, I have taken and excelled in all of the prerequisite courses for medical school includ...
27 Subjects: including prealgebra, chemistry, physics, calculus | {"url":"http://www.purplemath.com/Evergreen_Park_Prealgebra_tutors.php","timestamp":"2014-04-18T15:45:51Z","content_type":null,"content_length":"24407","record_id":"<urn:uuid:22d71e28-3d97-478c-8d8d-bc7664f1bb94>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH M025 ALL PRE-CALCULUS MATHEMATICS
Mathematics | PRE-CALCULUS MATHEMATICS
M025 | ALL | Chris Parks
Precalculus Mathematics (3 cr.) P: Two years of high school algebra or
M014, and one year of high school geometry. Designed to prepare
students for M119. Algebraic operations; polynomial, exponential, and
logarithmic functions and their graphs; conic sections; systems of
equations; and inequalities. Credit may not be applied toward a degree
in the College of Arts and Sciences; a grade of C– or higher is needed
to satisfy the College of Arts and Sciences mathematics fundamental
skills requirement. I Sem., II Sem., SS. | {"url":"http://www.indiana.edu/~deanfac/blfal10/math/math_m025_ALL.html","timestamp":"2014-04-18T10:53:07Z","content_type":null,"content_length":"1073","record_id":"<urn:uuid:295acbd9-fe0b-4a2f-9927-ca4977802c87>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algorithms of Common Solutions for Generalized Mixed Equilibria, Variational Inclusions, and Constrained Convex Minimization
Abstract and Applied Analysis
Volume 2014 (2014), Article ID 132053, 25 pages
Research Article
Algorithms of Common Solutions for Generalized Mixed Equilibria, Variational Inclusions, and Constrained Convex Minimization
^1Department of Mathematics, Shanghai Normal University and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China
^2Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
Received 3 November 2013; Accepted 12 November 2013; Published 23 January 2014
Academic Editor: Qamrul Hasan Ansari
Copyright © 2014 Lu-Chuan Ceng and Suliman Al-Homidan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We introduce new implicit and explicit iterative algorithms for finding a common element of the set of solutions of the minimization problem for a convex and continuously Fréchet differentiable
functional, the set of solutions of a finite family of generalized mixed equilibrium problems, and the set of solutions of a finite family of variational inclusions in a real Hilbert space. Under
suitable control conditions, we prove that the sequences generated by the proposed algorithms converge strongly to a common element of three sets, which is the unique solution of a variational
inequality defined over the intersection of three sets.
1. Introduction
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $P_C$ be the metric projection of $H$ onto $C$. Let $S : C \to C$ be a self-mapping on $C$. We denote by $\operatorname{Fix}(S)$ the set of fixed points of $S$ and by $\mathbf{R}$ the set of all real numbers. A mapping $A : C \to H$ is called $L$-Lipschitz continuous if there exists a constant $L \geq 0$ such that $\|Ax - Ay\| \leq L \|x - y\|$ for all $x, y \in C$. In particular, if $L = 1$, then $A$ is called a nonexpansive mapping [1]; if $L \in [0, 1)$, then $A$ is called a contraction.
A mapping $A$ is called strongly positive on $H$ if there exists a constant $\bar{\gamma} > 0$ such that
$$\langle Ax, x \rangle \geq \bar{\gamma} \|x\|^2, \quad \forall x \in H.$$
Let $A : C \to H$ be a nonlinear mapping on $C$. We consider the following variational inequality problem (VIP): find a point $x^* \in C$ such that
$$\langle Ax^*, x - x^* \rangle \geq 0, \quad \forall x \in C. \quad (3)$$
The solution set of VIP (3) is denoted by $\operatorname{VI}(C, A)$.
The VIP (3) was first discussed by Lions [2]. There are many applications of VIP (3) in various fields; see, for example, [3–6]. It is well known that if $A$ is a strongly monotone and Lipschitz-continuous mapping on $C$, then VIP (3) has a unique solution. In 1976, Korpelevič [7] proposed an iterative algorithm for solving the VIP (3) in Euclidean space $\mathbf{R}^n$:
$$y_n = P_C(x_n - \tau A x_n), \qquad x_{n+1} = P_C(x_n - \tau A y_n), \quad n \geq 0,$$
with $\tau > 0$ a given number, which is known as the extragradient method (see also [8]). The literature on the VIP is vast and Korpelevich's extragradient method has received great attention from many authors, who improved it in various ways; see, for example, [9–24] and references therein, to name but a few.
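As a concrete illustration of the two-step scheme (a toy example of my own, not one from the paper), here is a minimal C sketch of the extragradient iteration for a one-dimensional VIP:

#include <stdio.h>

/* Toy VIP: C = [0, 2], A(x) = x - 1 (monotone and 1-Lipschitz).
   The solution is x* = 1, where A(x*) = 0. */
static double A(double x) { return x - 1.0; }

static double proj(double x) {   /* projection onto [0, 2] */
    if (x < 0.0) return 0.0;
    if (x > 2.0) return 2.0;
    return x;
}

int main(void) {
    double x = 2.0;
    const double tau = 0.5;      /* step size below 1/L */
    for (int n = 0; n < 50; n++) {
        double y = proj(x - tau * A(x));   /* predictor step */
        x = proj(x - tau * A(y));          /* corrector step */
    }
    printf("x = %.6f\n", x);     /* expect 1.000000 */
    return 0;
}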
Let $\varphi : C \to \mathbf{R}$ be a real-valued function, $A : H \to H$ a nonlinear mapping, and $\Theta : C \times C \to \mathbf{R}$ a bifunction. In 2008, Peng and Yao [12] introduced the following generalized mixed equilibrium problem (GMEP) of finding $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \geq 0, \quad \forall y \in C. \quad (5)$$
We denote the set of solutions of GMEP (5) by $\operatorname{GMEP}(\Theta, \varphi, A)$. The GMEP (5) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, and Nash equilibrium problems in noncooperative games. The GMEP is further considered and studied; see, for example, [11, 14, 23, 25–28]. If $\varphi = 0$ and $A = 0$, then GMEP (5) reduces to the equilibrium problem (EP), which is to find $x \in C$ such that $\Theta(x, y) \geq 0$ for all $y \in C$. It is considered and studied in [29]. The set of solutions of EP is denoted by $\operatorname{EP}(\Theta)$. It is worth mentioning that the EP is a unified model of several problems, namely, variational inequality problems, optimization problems, saddle point problems, complementarity problems, fixed point problems, Nash equilibrium problems, and so forth.
Throughout this paper, it is assumed as in [12] that $\Theta : C \times C \to \mathbf{R}$ is a bifunction satisfying conditions (A1)–(A4) and $\varphi : C \to \mathbf{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where:
(A1) $\Theta(x, x) = 0$ for all $x \in C$;
(A2) $\Theta$ is monotone; that is, $\Theta(x, y) + \Theta(y, x) \leq 0$ for any $x, y \in C$;
(A3) $\Theta$ is upper-hemicontinuous; that is, for each $x, y, z \in C$,
$$\limsup_{t \to 0^+} \Theta(t z + (1 - t) x, y) \leq \Theta(x, y);$$
(A4) $\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$;
(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r} \langle y_x - z, z - x \rangle < 0;$$
(B2) $C$ is a bounded set.
Next we list some elementary results for the MEP.
Proposition 1 (see [26]). Assume that $\Theta : C \times C \to \mathbf{R}$ satisfies (A1)–(A4) and let $\varphi : C \to \mathbf{R}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta, \varphi)} : H \to C$ as follows:
$$T_r^{(\Theta, \varphi)}(x) = \left\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r} \langle y - z, z - x \rangle \geq 0, \ \forall y \in C \right\}$$
for all $x \in H$. Then the following hold:
(i) for each $x \in H$, $T_r^{(\Theta, \varphi)}(x)$ is nonempty and single-valued;
(ii) $T_r^{(\Theta, \varphi)}$ is firmly nonexpansive; that is, for any $x, y \in H$,
$$\|T_r^{(\Theta, \varphi)} x - T_r^{(\Theta, \varphi)} y\|^2 \leq \langle T_r^{(\Theta, \varphi)} x - T_r^{(\Theta, \varphi)} y, x - y \rangle;$$
(iii) $\operatorname{Fix}(T_r^{(\Theta, \varphi)}) = \operatorname{MEP}(\Theta, \varphi)$;
(iv) $\operatorname{MEP}(\Theta, \varphi)$ is closed and convex;
(v) $\|T_s^{(\Theta, \varphi)} x - T_t^{(\Theta, \varphi)} x\|^2 \leq \frac{s - t}{s} \langle T_s^{(\Theta, \varphi)} x - T_t^{(\Theta, \varphi)} x, T_s^{(\Theta, \varphi)} x - x \rangle$ for all $s, t > 0$ and $x \in H$.
Let , . Given the nonexpansive mappings on , for each , the mappings are defined by
The is called the -mapping generated by and . Note that the nonexpansivity of implies the nonexpansivity of.
In 2012, combining the hybrid steepest-descent method in [30] and hybrid viscosity approximation method in [31], Ceng et al. [27] proposed and analyzed the following hybrid iterative method for
finding a common element of the set of solutions of GMEP (5) and the set of fixed points of a finite family of nonexpansive mappings .
Theorem CGY (see [27, Theorem 3.1]). Let be a nonempty closed convex subset of a real Hilbert space . Let be a bifunction satisfying assumptions (A1)–(A4) and a lower semicontinuous and convex
function with restriction (B1) or (B2). Let the mapping be -inverse-strongly monotone and a finite family of nonexpansive mappings on such that . Let be a -Lipschitzian and -strongly monotone
operator with constants and a -Lipschitzian mapping with constant . Let and , where . Suppose and are two sequences in , is a sequence in , and is a sequence in with . For every , let be the -mapping
generated by and . Given arbitrarily, suppose that the sequences and are generated iteratively by where the sequences and the finite family of sequences satisfy the conditions:(i) and ;(ii);(iii) and
;(iv) for all .Then both and converge strongly to , where is a unique solution of the variational inequality problem (VIP):
Let $B$ be a single-valued mapping of $C$ into $H$ and $R$ a multivalued mapping with $D(R) = C$. Consider the following variational inclusion: find a point $x \in C$ such that
$$0 \in Bx + Rx. \quad (14)$$
We denote by $\operatorname{I}(B, R)$ the solution set of the variational inclusion (14). In particular, if $B = R = 0$, then $\operatorname{I}(B, R) = C$. If $B = 0$, then problem (14) becomes the inclusion problem introduced by Rockafellar [32]. It is known that problem (14) provides a convenient framework for the unified study of optimal solutions in many optimization related areas including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, and equilibria and game theory.
In 1998, Huang [33] studied problem (14) in the case where is maximal monotone and is strongly monotone and Lipschitz continuous with . Subsequently, Zeng et al. [34] further studied problem (14) in
the case which is more general than Huang’s one [33]. Moreover, the authors [34] obtained the same strong convergence conclusion as in Huang’s result [33]. In addition, the authors also gave the
geometric convergence rate estimate for approximate solutions. Also, various types of iterative algorithms for solving variational inclusions have been further studied and developed; for more
details, refer to [35–39] and the references therein.
Let $\{T_n\}_{n=1}^{\infty}$ be an infinite family of nonexpansive self-mappings on $C$ and $\{\lambda_n\}_{n=1}^{\infty}$ a sequence of nonnegative numbers in $[0, 1]$. For any $n \geq 1$, define a self-mapping $W_n$ on $C$ as follows:
$$U_{n, n+1} = I, \quad U_{n, k} = \lambda_k T_k U_{n, k+1} + (1 - \lambda_k) I \ \ (k = n, n-1, \ldots, 1), \quad W_n = U_{n, 1}. \quad (15)$$
Such a mapping $W_n$ is called the $W$-mapping generated by $T_n, T_{n-1}, \ldots, T_1$ and $\lambda_n, \lambda_{n-1}, \ldots, \lambda_1$.
Whenever a real Hilbert space, Yao et al. [11] very recently introduced and analyzed an iterative algorithm for finding a common element of the set of solutions of GMEP (5), the set of solutions of
the variational inclusion (14), and the set of fixed points of an infinite family of nonexpansive mappings.
Theorem YCL (see [11, Theorem 3.2]). Let be a lower semicontinuous and convex function and a bifunction satisfying conditions (A1)–(A4) and (B1). Let be a strongly positive bounded linear operator
with coefficient and a maximal monotone mapping. Let the mappings be -inverse-strongly monotone and -inverse-strongly monotone, respectively. Let be a -contraction. Let , , and be three constants
such that , , and . Let be a sequence of positive numbers in for some and an infinite family of nonexpansive self-mappings on such that . For arbitrarily given , let the sequence be generated by
where are two real sequences in and is the -mapping defined by (15) (with and ). Assume that the following conditions are satisfied:(C1) and ;(C2).Then the sequence converges strongly to , where is a
unique solution of the VIP:
Let $f : C \to \mathbf{R}$ be a convex and continuously Fréchet differentiable functional. Consider the convex minimization problem (CMP) of minimizing $f$ over the constraint set $C$:
$$\min_{x \in C} f(x) \quad (18)$$
(assuming the existence of minimizers). We denote by $\Gamma$ the set of minimizers of CMP (18). It is well known that the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ determined by the gradient $\nabla f$ and the metric projection $P_C$:
$$x_{n+1} := P_C(x_n - \lambda \nabla f(x_n)), \quad n \geq 0, \quad (19)$$
or more generally,
$$x_{n+1} := P_C(x_n - \lambda_n \nabla f(x_n)), \quad n \geq 0, \quad (20)$$
where, in both (19) and (20), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\lambda$ or $\lambda_n$ are positive real numbers. The convergence of algorithms (19) and (20) depends on the behavior of the gradient $\nabla f$. As a matter of fact, it is known that if $\nabla f$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, then, for $0 < \lambda < \frac{2\eta}{L^2}$, the operator $P_C(I - \lambda \nabla f)$ is a contraction; hence, the sequence $\{x_n\}$ defined by the GPA (19) converges in norm to the unique solution of CMP (18). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy the property
$$0 < \liminf_{n \to \infty} \lambda_n \leq \limsup_{n \to \infty} \lambda_n < \frac{2\eta}{L^2},$$
then the sequence $\{x_n\}$ defined by the GPA (20) converges in norm to the unique minimizer of CMP (18). If the gradient $\nabla f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent if $H$ is infinite-dimensional (a counterexample is given in Section 5 of Xu [40]). Since the Lipschitz continuity of the gradient $\nabla f$ implies that it is actually $\frac{1}{L}$-inverse-strongly monotone (ism) [41], its complement can be an averaged mapping (i.e., it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping). Consequently, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping. This shows that averaged mappings play an important role in the GPA. Recently, Xu [40] used averaged mappings to study the convergence analysis of the GPA, which is hence an operator-oriented approach.
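For intuition, here is a minimal C sketch of the GPA (19) on a toy problem of my own choosing; the projection onto a box is just a coordinatewise clamp:

#include <stdio.h>

#define D 2

/* Minimize f(x) = 0.5 * ||x - b||^2 over the box C = [0,1]^D.
   grad f(x) = x - b is 1-Lipschitz, so any step 0 < lambda < 2 works. */
static const double b[D] = {1.5, -0.3};

static void gradient(const double x[D], double g[D]) {
    for (int i = 0; i < D; i++)
        g[i] = x[i] - b[i];
}

static void project(double x[D]) {   /* clamp each coordinate to [0, 1] */
    for (int i = 0; i < D; i++) {
        if (x[i] < 0.0) x[i] = 0.0;
        if (x[i] > 1.0) x[i] = 1.0;
    }
}

int main(void) {
    double x[D] = {0.0, 0.0}, g[D];
    const double lambda = 1.0;
    for (int n = 0; n < 100; n++) {
        gradient(x, g);
        for (int i = 0; i < D; i++)
            x[i] -= lambda * g[i];
        project(x);   /* x_{n+1} = P_C(x_n - lambda * grad f(x_n)) */
    }
    printf("x* = (%.3f, %.3f)\n", x[0], x[1]);   /* expect (1.000, 0.000) */
    return 0;
}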
Motivated and inspired by the above facts, we in this paper introduce new implicit and explicit iterative algorithms for finding a common element of the set of solutions of the CMP (18) for a convex
functional with $L$-Lipschitz continuous gradient $\nabla f$, the set of solutions of a finite family of GMEPs, and the set of solutions of a finite family of variational inclusions for maximal monotone and
inverse-strongly monotone mappings in a real Hilbert space. Under mild control conditions, we prove that the sequences generated by the proposed algorithms converge strongly to a common element of
three sets, which is the unique solution of a variational inequality defined over the intersection of three sets. Our iterative algorithms are based on Korpelevich’s extragradient method, hybrid
steepest-descent method in [30], viscosity approximation method, and averaged mapping approach to the GPA in [40]. The results obtained in this paper improve and extend the corresponding results
announced by many others.
2. Preliminaries
Throughout this paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$; that is,
$$\omega_w(x_n) := \{x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\}\}.$$
Recall that a mapping $A : C \to H$ is called
(i) monotone if $\langle Ax - Ay, x - y \rangle \geq 0$ for all $x, y \in C$;
(ii) $\eta$-strongly monotone if there exists a constant $\eta > 0$ such that $\langle Ax - Ay, x - y \rangle \geq \eta \|x - y\|^2$ for all $x, y \in C$;
(iii) $\alpha$-inverse-strongly monotone if there exists a constant $\alpha > 0$ such that $\langle Ax - Ay, x - y \rangle \geq \alpha \|Ax - Ay\|^2$ for all $x, y \in C$.
It is obvious that if $A$ is $\alpha$-inverse-strongly monotone, then $A$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.
The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \inf_{y \in C} \|x - y\|.$$
Some important properties of projections are gathered in the following proposition.
Proposition 2. For given $x \in H$ and $z \in C$:
(i) $z = P_C x \Leftrightarrow \langle x - z, y - z \rangle \leq 0$ for all $y \in C$;
(ii) $z = P_C x \Leftrightarrow \|x - z\|^2 \leq \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$;
(iii) $\langle P_C x - P_C y, x - y \rangle \geq \|P_C x - P_C y\|^2$ for all $y \in H$.
Consequently, $P_C$ is nonexpansive and monotone.
If $A$ is an $\alpha$-inverse-strongly monotone mapping of $C$ into $H$, then it is obvious that $A$ is $\frac{1}{\alpha}$-Lipschitz continuous. We also have that, for all $u, v \in C$ and $\lambda > 0$,
$$\|(I - \lambda A) u - (I - \lambda A) v\|^2 \leq \|u - v\|^2 + \lambda (\lambda - 2\alpha) \|Au - Av\|^2.$$
So, if $\lambda \leq 2\alpha$, then $I - \lambda A$ is a nonexpansive mapping from $C$ to $H$.
Definition 3. A mapping $T : H \to H$ is said to be
(a) nonexpansive [1] if $\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in H$;
(b) firmly nonexpansive if $2T - I$ is nonexpansive or, equivalently, if $T$ is $1$-inverse-strongly monotone ($1$-ism); alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \frac{1}{2}(I + S)$, where $S : H \to H$ is nonexpansive; projections are firmly nonexpansive.
It can be easily seen that if $T$ is nonexpansive, then $I - T$ is monotone. It is also easy to see that a projection $P_C$ is $1$-ism. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields.
Definition 4. A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,
$$T \equiv (1 - \alpha) I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are $\frac{1}{2}$-averaged mappings.
Proposition 5 (see [42]). Let $T : H \to H$ be a given mapping.
(i) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.
(ii) If $T$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.
(iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
Proposition 6 (see [42, 43]). Let $S, T, V : H \to H$ be given operators.
(i) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.
(ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
(iii) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \circ \cdots \circ T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 \circ T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^{N} \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \circ \cdots \circ T_N).$$
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping $T$; that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.
We need some facts and tools in a real Hilbert space which are listed as lemmas below.
Lemma 7. Let $X$ be a real inner product space. Then the following inequality holds:
$$\|x + y\|^2 \leq \|x\|^2 + 2 \langle y, x + y \rangle, \quad \forall x, y \in X.$$
Lemma 8. Let $A : C \to H$ be a monotone mapping. In the context of the variational inequality problem the characterization of the projection (see Proposition 2(i)) implies
$$u \in \operatorname{VI}(C, A) \Longleftrightarrow u = P_C(u - \lambda A u) \quad \text{for some } \lambda > 0.$$
Lemma 9 (see [44, Demiclosedness principle]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T$ be a nonexpansive self-mapping on $C$ with $\operatorname{Fix}(T) \neq \emptyset$. Then $I - T$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in $C$ weakly converging to some $x \in C$ and the sequence $\{(I - T) x_n\}$ strongly converges to some $y$, it follows that $(I - T) x = y$. Here $I$ is the identity operator of $H$.
Lemma 10 (see [45]). Let $\{a_n\}$ be a sequence of nonnegative numbers satisfying the conditions
$$a_{n+1} \leq (1 - s_n) a_n + s_n \nu_n, \quad \forall n \geq 1,$$
where $\{s_n\}$ and $\{\nu_n\}$ are sequences of real numbers such that
(i) $\{s_n\} \subset [0, 1]$ and $\sum_{n=1}^{\infty} s_n = \infty$ or, equivalently, $\prod_{n=1}^{\infty} (1 - s_n) = 0$;
(ii) $\limsup_{n \to \infty} \nu_n \leq 0$, or $\sum_{n=1}^{\infty} |s_n \nu_n| < \infty$.
Then $\lim_{n \to \infty} a_n = 0$.
Lemma 11 (see [46]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and $\{\beta_n\}$ a sequence in $[0, 1]$ with
$$0 < \liminf_{n \to \infty} \beta_n \leq \limsup_{n \to \infty} \beta_n < 1.$$
Suppose that $x_{n+1} = \beta_n x_n + (1 - \beta_n) z_n$ for each $n \geq 0$ and
$$\limsup_{n \to \infty} \left( \|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \right) \leq 0.$$
Then $\lim_{n \to \infty} \|z_n - x_n\| = 0$.
The following lemma can be easily proven and, therefore, we omit the proof.
Lemma 12. Let $V : H \to H$ be an $l$-Lipschitzian mapping with constant $l \geq 0$, and let $F : H \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with positive constants $\kappa, \eta > 0$. Then, for $0 \leq \gamma l < \mu \eta$,
$$\langle (\mu F - \gamma V) x - (\mu F - \gamma V) y, x - y \rangle \geq (\mu \eta - \gamma l) \|x - y\|^2, \quad \forall x, y \in H.$$
That is, $\mu F - \gamma V$ is strongly monotone with constant $\mu \eta - \gamma l$.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. We introduce some notations. Let $\lambda$ be a number in $(0, 1]$ and let $\mu > 0$. Associating with a nonexpansive mapping $T : C \to H$, we define the mapping $T^{\lambda} : C \to H$ by
$$T^{\lambda} x := T x - \lambda \mu F(T x), \quad \forall x \in C,$$
where $F : H \to H$ is an operator such that, for some positive constants $\kappa, \eta > 0$, $F$ is $\kappa$-Lipschitzian and $\eta$-strongly monotone on $H$; that is, $F$ satisfies the conditions
$$\|Fx - Fy\| \leq \kappa \|x - y\|, \qquad \langle Fx - Fy, x - y \rangle \geq \eta \|x - y\|^2$$
for all $x, y \in H$.
Lemma 13 (see [45, Lemma 3.1]). $T^{\lambda}$ is a contraction provided $0 < \mu < \frac{2\eta}{\kappa^2}$; that is,
$$\|T^{\lambda} x - T^{\lambda} y\| \leq (1 - \lambda \tau) \|x - y\|, \quad \forall x, y \in C,$$
where $\tau = 1 - \sqrt{1 - \mu (2\eta - \mu \kappa^2)} \in (0, 1]$.
Recall that a set-valued mapping $R : D(R) \subseteq H \to 2^H$ is called monotone if, for all $x, y \in D(R)$, $f \in Rx$ and $g \in Ry$ imply
$$\langle f - g, x - y \rangle \geq 0.$$
A set-valued mapping $R$ is called maximal monotone if $R$ is monotone and $(I + \lambda R) D(R) = H$ for each $\lambda > 0$, where $I$ is the identity mapping of $H$. We denote by $G(R)$ the graph of $R$. It is known that a monotone mapping $R$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle f - g, x - y \rangle \geq 0$ for every $(y, g) \in G(R)$ implies $f \in Rx$.
Let $A : C \to H$ be a monotone, $L$-Lipschitz continuous mapping and let $N_C v$ be the normal cone to $C$ at $v \in C$; that is, $N_C v = \{w \in H : \langle v - u, w \rangle \geq 0, \ \forall u \in C\}$. Define
$$\widetilde{T} v = \begin{cases} A v + N_C v, & \text{if } v \in C, \\ \emptyset, & \text{if } v \notin C. \end{cases}$$
Then, $\widetilde{T}$ is maximal monotone and $0 \in \widetilde{T} v$ if and only if $v \in \operatorname{VI}(C, A)$; see [32].
Assume that $R : D(R) \subseteq H \to 2^H$ is a maximal monotone mapping. Then, for $\lambda > 0$, associated with $R$, the resolvent operator $J_{R, \lambda}$ can be defined as
$$J_{R, \lambda} x = (I + \lambda R)^{-1} x, \quad \forall x \in H.$$
In terms of Huang [33] (see also [34]), the following property holds for the resolvent operator $J_{R, \lambda}$.
Lemma 14. $J_{R, \lambda}$ is single-valued and firmly nonexpansive; that is,
$$\langle J_{R, \lambda} x - J_{R, \lambda} y, x - y \rangle \geq \|J_{R, \lambda} x - J_{R, \lambda} y\|^2, \quad \forall x, y \in H.$$
Consequently, $J_{R, \lambda}$ is nonexpansive and monotone.
Lemma 15 (see [39]). Let $R$ be a maximal monotone mapping with $D(R) = C$. Then, for any given $\lambda > 0$, $u \in C$ is a solution of problem (14) if and only if $u$ satisfies
$$u = J_{R, \lambda}(u - \lambda B u).$$
Lemma 16 (see [34]). Let $R$ be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a strongly monotone, continuous, and single-valued mapping. Then, for each $z \in H$, the equation $z \in (B + \lambda R) x$ has a unique solution $x_{\lambda}$ for $\lambda > 0$.
Lemma 17 (see [39]). Let $R$ be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a monotone, continuous, and single-valued mapping. Then $(I + \lambda (R + B)) C = H$ for each $\lambda > 0$. In this case, $R + B$ is maximal monotone.
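To see how Lemma 15 turns the inclusion into a fixed-point iteration, here is a minimal C sketch on a one-dimensional toy problem of my own; the resolvent of the normal cone of $[0, \infty)$ is simply the projection $\max(0, \cdot)$:

#include <stdio.h>

/* Toy inclusion 0 in Bu + Ru with B(u) = u - 2 and R the normal cone
   of [0, infinity); its resolvent J_{R,lambda} is max(0, .).
   By Lemma 15, fixed points of u -> J(u - lambda * B(u)) solve it. */
static double B(double u) { return u - 2.0; }
static double resolvent(double u) { return u > 0.0 ? u : 0.0; }

int main(void) {
    double u = -5.0;
    const double lambda = 0.5;
    for (int n = 0; n < 60; n++)
        u = resolvent(u - lambda * B(u));
    printf("u = %.6f\n", u);   /* expect 2.000000, the unique solution */
    return 0;
}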
3. Implicit Iterative Algorithm and Its Convergence Criteria
We now state and prove the first main result of this paper.
Theorem 18. Let be a nonempty closed convex subset of a real Hilbert space . Let be a convex functional with -Lipschitz continuous gradient . Let be two integers. Let be a bifunction from to
satisfying (A1)–(A4) and let be a proper lower semicontinuous and convex function, where . Let be a maximal monotone mapping and let and be -inverse-strongly monotone and -inverse-strongly monotone,
respectively, where , . Let be a -Lipschitzian and -strongly monotone operator with positive constants . Let be an -Lipschitzian mapping with constant . Let and , where . Assume that and that either
(B1) or (B2) holds. Let be a sequence generated by where (here is nonexpansive and for each . Assume that the following conditions hold:(i) for each , ;(ii), for all ;(iii), for all .Then converges
strongly as to a point , which is a unique solution of the VIP: Equivalently, .
Proof. First of all, let us show that the sequence is well defined. Indeed, since is -Lipschitzian, it follows that is -ism; see [41]. By Proposition 5(ii) we know that, for , is -ism. So by
Proposition 5(iii) we deduce that is -averaged. Now since the projection is -averaged, it is easy to see from Proposition 6(iv) that the composite is -averaged for . Hence, we obtain that, for each ,
is -averaged for each . Therefore, we can write where is nonexpansive and for each . It is clear that
Put for all and , for all and , and , where is the identity mapping on . Then we have that and .
Consider the following mapping on defined by where for each . By Proposition 1(ii) and Lemma 13 we obtain from (27) that for all Since , is a contraction. Therefore, by the Banach contraction
principle, has a unique fixed point , which uniquely solves the fixed point equation: This shows that the sequence is defined well.
Note that and . Hence, by Lemma 12 we know that That is, is strongly monotone for . Moreover, it is clear that is Lipschitz continuous. So the VIP (50) has only one solution. Below we use to denote
the unique solution of the VIP (50).
Now, let us show that is bounded. In fact, take arbitrarily. Then from (27) and Proposition 1(ii) we have Similarly, we have Combining (59) and (60), we have Since where , it is clear that for each .
Thus, utilizing Lemma 13 and the nonexpansivity of , we obtain from (61) that This implies that . Hence, is bounded. So, according to (59) and (61) we know that , , , , and are bounded.
Next let us show that , , and as .
Indeed, from (27) it follows that for all and Thus, utilizing Lemma 7, from (49) and (64) we have which implies that Since and for all and , from we conclude immediately that for all and .
Furthermore, by Proposition 1(ii) we obtain that for each which implies that Also, by Lemma 14, we obtain that for each which implies Thus, utilizing Lemma 7, from (49), (69), and (71) we have | {"url":"http://www.hindawi.com/journals/aaa/2014/132053/","timestamp":"2014-04-19T10:10:47Z","content_type":null,"content_length":"1048595","record_id":"<urn:uuid:48687822-7430-4aed-aee2-d2a8826b8a12>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: Re: computation of R-squared with a non-linear model
st: RE: Re: computation of R-squared with a non-linear model
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Re: computation of R-squared with a non-linear model
Date Fri, 22 May 2009 12:05:57 +0100
I support this general idea. For another statement, see
How can I get an R-squared value when a Stata command does not supply one?
Even better than pursuing a single figure-of-merit would be to plot
observed vs. predicted and residuals vs. predicted.
Paul Seed
There is a simple way to compute R-squared for any regression model,
if you do not believe the value given by Stata: Calculate the predicted
values and carry out your own correlation.
Using the auto data set:
**** Start Stata code *****
sysuse auto
regress weight price
predict pred_w
su weight pred_w
corr weight pred_w
di "R-squared = " r(rho)*r(rho)
**** End Stata code *****
Both ways give a value of 0.2901023
In general, the use of weights and adjusted R-squared
makes things more complicated, and the last two lines
could be changed to allow for them;
but neither will alter a correlation of 1.0.
If Marcel Spijkerman uses this approach, he may find
a) Marcel is right - the second R-squared is different
from the first. (He does not say, but I assume that
both the adjusted and unadjusted R-squared are 1.0).
b) Martin Buis is right - the model has failed to converge,
and the predicted values are mostly or completely undefined.
c) Stata is right - both methods give R-squared = 1.0
d) Something else I haven't thought of.
I'd be interested to know which.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-05/msg00858.html","timestamp":"2014-04-21T12:56:29Z","content_type":null,"content_length":"7238","record_id":"<urn:uuid:95d45c30-2f6c-4f05-bd1f-74fe6c624437>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Is there a theory of probability based on fuzzy set theory?
One thing I've wondered about for quite some time, and searched for without finding anything I consider adequate, is whether there is a theory of probability based on Zadeh's fuzzy set theory.
One of the strengths of fuzzy set theory is that it is not a probability theory. An element of a fuzzy set that has a 0.7 degree of membership in it does not have a 0.7 probability of being in the
set and a 0.3 probability of not being in the set. It has a definite degree of membership equal to 0.7. So, for example, a house might have a 0.7 degree of membership in the set of "colonial style
houses". If you treat the degree of membership as something determined by a random variable, then you can ask questions like "what is the probability that a randomly selected house has a degree of
membership in the set of 'colonial style houses' that is between 0.7 and 0.75?" Such things are handled by ordinary probability theory.
So it isn't clear what one would mean by "a theory of probability based on Zadeh's fuzzy set theory" because fuzzy set membership does not resemble probability and ordinary probability theory can be
applied to fuzzy set membership.
Did you have specific goals or ideas for a generalization of probability theory that somehow incorporates fuzzy sets in a new way? It would be interesting to discuss. | {"url":"http://www.physicsforums.com/showpost.php?p=3794202&postcount=3","timestamp":"2014-04-16T10:27:30Z","content_type":null,"content_length":"9243","record_id":"<urn:uuid:26f53001-8fa0-413f-9d9b-9a85e0cf7542>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Alpha Launches Problem Generator To Help Students Learn Math
"This TechCrunch article features Wolfram Alpha, the incredible computational database. WA provides students with an extraordinary resource, and the Problem Generator provides students with an
engaging computational challenge. If students have difficulty, then the service provides help for the user. This is a wonderful example of technology tutoring or guiding users to better
understanding. ”
If you’re studying math or science, you are probably pretty familiar with Wolfram Alpha as a tool for figuring out complicated equations. That makes it a pretty good tool for cheating, but not
necessarily for learning. Today, the Wolfram Alpha team is launching a new service for learners, the Wolfram Problem Generator, that turns the “computational knowledge engine” on its head.
The Problem Generator – which is available to all Wolfram Alpha Pro subscribers now – creates random practice questions for students, and Wolfram Alpha then helps them find the answers step-by-step.
Right now, the Generator covers six subjects: arithmetic, number theory, algebra, calculus, linear algebra and statistics. The difficulty of the questions can be tuned down for students in elementary
school and tuned up for those in college calculus classes. As the company notes in today’s announcements, the material for students in elementary and secondary schools closely follows the Common Core
Standards initiative.
Using the tools is pretty straightforward. Students (or their teachers) choose which topic they want to study and the difficulty level (beginner, intermediate or advanced) and the system will
generate a problem.
The team notes that the tool uses Wolfram Alpha’s natural language processing capabilities to try to understand the students’ answers to “ensure that all students can learn and express themselves in
their own unique way.” This may actually be the highlight of this service. Too often, after all, similar tools force a very rigid way of answering complex math questions on their students and when
they make a mistake, it’s not clear if the answer is wrong or if the student just got the syntax wrong.
If a student can’t find the answer after three tries, Wolfram Alpha can show a step-by-step solution. The Problem Generator also allows teachers to easily create printable quizzes with
multiple-choice tests. | {"url":"http://fluency21.com/blog/2013/11/03/wolfram-alpha-launches-problem-generator-to-help-students-learn-math/","timestamp":"2014-04-18T23:15:40Z","content_type":null,"content_length":"30032","record_id":"<urn:uuid:abef7c5b-8ce2-41a0-a469-283cec450e09>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Application Placement on a Cluster of Servers
(extended abstract) ∗
Bhuvan Urgaonkar, Arnold Rosenberg and Prashant Shenoy
Department of Computer Science,
University of Massachusetts, Amherst, MA 01003
{bhuvan, rsnbrg, shenoy}@cs.umass.edu
Abstract—The APPLICATION PLACEMENT PROBLEM ically, each application runs on a subset of the nodes and
(APP) arises in clusters of servers that are used for host- these subsets may overlap. Whereas dedicated hosting plat-
ing large, distributed applications such as Internet services. forms are used for many niche applications that warrant their
Such clusters are referred to as hosting platforms. Hosting additional cost, economic reasons of space, power, cooling
platforms imply a business relationship between the platform and cost make shared hosting platforms an attractive choice
provider and the application providers: the latter pay the for many application hosting environments.
former for the resources on the platform. In return, the plat- Shared hosting platforms imply a business relationship
form provider provides guarantees on resource availability between the platform provider and the application providers:
to the applications. This implies that a platform should host the latter pay the former for the resources on the platform.
only applications for which it has sufficient resources. The In return, the platform provider gives some kind of guar-
objective of the APP is to maximize the number of applica- antees of resource availability to applications. This implies
tions that can be hosted on the platform while satisfying their that a platform should admit only applications for which it
resource requirements. We show that the APP is NP-hard. has sufficient resources. In this work, we take the number
Further, we show that even restricted versions of the APP of applications that a platform is able to host (admit) to be
may not admit polynomial-time approximation schemes. Fi- an indicator of the revenue that it generates from the hosted
nally, we present algorithms for the online version of the applications. The number of applications that a platform ad-
APP. mits is related to the application placement algorithm used
by the platform. A platform’s application placement algo-
rithm decides where on the cluster the different components
1 Introduction of an application get placed. In this paper we study proper-
ties of the application placement problem (APP) whose goal
is to maximize the number of applications that can be hosted
Server clusters built using commodity hardware and soft-
on a platform. We show that APP is NP-hard and present
ware are an increasingly attractive alternative to traditional
approximation algorithms.
large multiprocessor servers for many applications, in part
due to rapid advances in computing technologies and falling
hardware prices. We call such server clusters hosting plat-
forms. Hosting platforms can be shared or dedicated. In 2 The Application Placement Problem
dedicated hosting platforms [1, 14], either the entire clus-
ter runs a single application (such as a web search engine), 2.1 Notation and Definitions
or each individual processing element in the cluster is dedi-
cated to a single application ( such as dedicated web hosting Consider a cluster of n servers (also called nodes),
services where each node runs a single application). In con- N1 , N2 , . . . , Nn . Each node has a given capacity (of avail-
trast, shared hosting platforms [3, 17] run a large number able resources). Unless otherwise noted, nodes are homo-
of different third-party applications (web-servers, streaming geneous, in the sense of having the same initial capacities.
media servers, multi-player game servers, e-commerce ap- The APP appropriates portions of nodes’ capacities; a node
plications, etc.), and the number of applications typically that still has its initial capacity is said to be empty. Let m
exceeds the number of nodes in the cluster. More specif- denote the number of applications to be placed on the clus-
∗ This research was supported in part by NSF grants CCR-9984030, EIA- ter and let us represent them as A1 , . . ., Am . Further, each
0080119, CNS-0323597, CCF-0342417 and a gift from Intel Corporation. application is composed of one or more capsules. A capsule
may be thought of as the smallest component of an appli- (see [12] for a definition of APX-hardness) and [15] pro-
cation for the purposes of placement — all the processes, vides a 2-approximation algorithm for it. This was the best
data etc., belonging to a capsule must be placed on the same result known for MKP until a polynomial-time PTAS was
node. Capsules provide a useful abstraction for logically par- presented for it in [5]. It should be observed that the of-
titioning an application into sub-components and for exert- fline APP is a generalization of MKP where an item may
ing control over the distribution of these components onto have multiple components that need to be assigned to dif-
different nodes. If an application wants certain components ferent bins (the profit associated with an item is 1). Further,
to be placed together on the same node (e.g., because they [5] shows that slight generalizations of MKP are APX-hard.
communicate a lot), then it could bundle them as one cap- This provides reason to suspect that the APP may also be
sule. Some applications may want their capsules to be placed APX-hard (and hence may not have a PTAS).
on different nodes. An important reason for doing this is to Another closely related problem is a “multidimensional”
improve the availability of the application in the face of node version of the MKP where each item has requirements along
failures — if a node hosting a capsule of the application fails, multiple dimensions, each of which must be satisfied to suc-
there would still be capsules on other nodes. An example cessfully place it. The goal is to maximize the total profit
of such an application is a replicated web server. We refer yielded by the items that could be placed. A heuristic for
to this requirement as the capsule placement restriction. In solving this problem is described in [11]. However, the au-
what follows, we look at the APP both with and without the thors evaluate this heuristic only through simulations and do
capsule placement restriction. not provide any analytical results on its performance.
In general, each capsule in an application would require
guarantees on access to multiple resources. In this work,
we consider just one resource, such as the CPU or the net- 3 Hardness of Approximating the APP
work bandwidth. We assume a simple model where a cap-
sule specifies its resource requirement as a fraction of the re- In this section, we demonstrate that a restricted version
source capacity of a node in the cluster (i.e., we assume that of the APP does not admit a PTAS. The capsule placement
the resource requirement of each capsule is less than the ca- restriction is assumed to hold throughout this section.
pacity of a node). A capsule can be placed on a node only if We give a gap-preserving reduction (see [16] for defini-
the sum of its resource requirement and those of the capsules tion) from the Multi-dimensional 0-1 Knapsack Problem [2]
already placed on the node does not exceed the resource ca- to a restricted version of the APP.
pacity of the node. We say that an application can be placed
only if all of its capsules can be placed simultaneously. It is Definition 1 Multi-Dimensional 0-1 Knapsack Problem
easy to see that there can be more than one way in which an (MDKP): For a fixed positive integer k, the k-dimensional
application may be placed on a platform. We refer to the to- knapsack problem is the following:
tal number of applications that a placement algorithm could n
place as the size of the placement. Maximize ci xi
Lemma 1 The APP is NP-hard.
Subject to
Proof: We reduce the well-known bin-packing problem [12] aij xi ≤ bj , j = 1, . . . , k,
to the APP to show that it is NP-hard. We omit the proof i=1
here and present it in [16].
where: n is a positive integer; each ci ∈ {0, 1} and
maxi ci = 1; the aij and bi are non-negative real numbers;
2.2 Related Work
all xi ∈ {0, 1}. Define B = mini bi .
Two generalizations of the classical knapsack problem Hardness of approximating MDKP: For fixed k there
are relevant to our discussion of the APP. These are the Mul- is a PTAS for MDKP [10]. For large k the randomized
tiple Knapsack Problem (MKP) and the Generalized Assign- rounding technique of [13] yields integral solutions of value
ment Problem (GAP). In MKP, we are given a set of n items Ω(OP T /d1/B ). [4] shows that MDKP is hard to approxi-
and m bins (knapsacks) such that each item i has a profit 1
mate within a factor of Ω(k B+1 − ) for every fixed B, and
p(i) and a size s(i), and each bin j has a capacity c(j). The establishes that randomized rounding essentially gives the
goal is to find a subset of items of maximum profit that has a best possible approximation guarantees.
feasible packing in the bins. MKP is a special case of GAP
where the profit and the size of an item can vary based on Theorem 1 Given any > 0, it is NP-hard to approximate
the specific bin that it is assigned to. GAP is APX-hard to within (1 + ) the offline placement problem that has the
following restrictions: (1) all the capsules have a positive all m applications corresponding to the m items in P . To
requirement and (2) there exists a constant M , such that see why consider any node Ni (1 ≤ i ≤ k. The capacity as-
∀i, j(1 ≤ j ≤ k, 1 ≤ i ≤ n), M ≥ bj /aji . signed to Ni is SFi times the capacity along dimension i of
the k-dimensional knapsack in the input to k-MDKP, where
Proof: We explain later in this proof why the two restrictions SFi ≥ 1. The requirements assigned to the ith capsules of
mentioned above arise. We begin by describing the reduc- all the applications are also obtained by scaling by the same
tion. factor SFi the sizes along the ith dimension of the items.
The reduction: Consider the following mapping from in- Multiplying both sides of (2) by SFi we get,
stances of k-MDKP to offline APP:
Suppose the input to k-MDKP is a knapsack with capacity vector (b_1, ..., b_k). Also let there be n items I_1, ..., I_n. Let the requirement vector for item I_j be (a_j1, ..., a_jk). We create an instance of offline APP as follows. The cluster has k nodes N_1, ..., N_k. There are n applications A_1, ..., A_n, one for each item in the input to k-MDKP. Each of these applications has k capsules. The k capsules of application A_i are denoted c_i^1, ..., c_i^k. Also, we refer to c_i^j as the j-th capsule of application A_i. We now describe how we assign capacities to the nodes and requirements to the applications we have created. This part of the mapping proceeds in k stages. In stage s, we determine the capacity of node N_s and the requirements of the s-th capsule of all the applications. Next, we describe how these stages proceed.

Stage 1: Assigning capacity to the first node N_1 is straightforward. We assign it a capacity C(N_1) = b_1. The first capsule of application A_i is assigned a requirement r_i^1 = a_i1.

Stage s (1 < s <= k): The assignments done by stage s depend on those done by stage s - 1. We first determine the smallest of the requirements along dimension s of the items in the input to k-MDKP, that is, r_min = min_{i=1..n}(a_is). Next we determine the scaling factor for stage s, SF_s, as follows:

    SF_s = C(N_{s-1}) / r_min + 1.    (1)

Recall that we assume that, for every s, r_min > 0. Now we are ready to do the assignments for stage s. Node N_s is assigned a capacity C(N_s) = b_s * SF_s. The s-th capsule of application A_i is assigned a requirement r_i^s = a_is * SF_s. This concludes our mapping.

Correctness of the reduction: We show that the mapping described above is a reduction.

(==>) Assume there is a packing P of size m <= n. Denote the n items in the input to k-MDKP as I_1, ..., I_n. Without loss of generality, assume that the m items in P are I_1, ..., I_m. Therefore we have

    Sum_{i=1}^{m} a_ij <= b_j,    j = 1, ..., k.    (2)

Consider this way of placing the applications that the mapping constructs on the nodes N_1, ..., N_k. If item I_i is in P, place application A_i as follows: for all j, 1 <= j <= k, place capsule c_i^j on node N_j. We claim that we will be able to place all of these capsules. Multiplying both sides of (2) by the stage-j scaling factor SF_j (with SF_1 = 1), we get

    Sum_{i=1}^{m} SF_j * a_ij <= SF_j * b_j,    j = 1, ..., k.

Observe that the term on the right is the capacity assigned to N_j. The term on the left is the sum of the requirements of the j-th capsules of the applications corresponding to the items in P. This shows that node N_j can accommodate the j-th capsules of the applications corresponding to the m items in P. This implies that there is a placement of size m.

(<==) Assume that there is a placement L of size m <= n. Let the n applications be denoted A_1, ..., A_n. Without loss of generality, let the m applications in L be A_1, ..., A_m. Also denote the set of the s-th capsules of the placed applications by Caps_s, 1 <= s <= k.

We make the following key observations:

- For any application to be successfully placed, its i-th capsule must be placed on node N_i. Due to the scaling by the factor computed in Eq. (1), the requirements assigned to the s-th (s > 1) capsules of the applications are strictly greater than the capacities of the nodes N_1, ..., N_{s-1}. Consider the k-th capsules of the applications first. The only node these can be placed on is N_k. Since no two capsules of an application may be placed on the same node, this implies that the (k-1)-th capsules of the applications may be placed only on N_{k-1}. Proceeding in this manner, we find that the claim holds for all the capsules.

- Since for all s (1 <= s <= k), the node capacities and the requirements of the s-th capsules are scaled by the same multiplicative factor, the fact that the m capsules in Caps_s could be placed on N_s implies that the m items I_1, ..., I_m can be packed in the knapsack in the s-th dimension.

Combining these two observations, we find that a packing of size m must exist.

Time and space complexity of the reduction: This reduction works in time polynomial in the size of the input. It involves k stages. Each stage involves computing a scaling factor (this involves performing a division) and multiplying n + 1 numbers (the capacity of the knapsack and the requirements of the n items along the relevant dimension).
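To make the k-stage construction concrete, here is a small sketch (our own illustration; the paper gives no code, and all names are ours) that computes the node capacities and capsule requirements from a k-MDKP instance:

def mdkp_to_app(b, a):
    """Map a k-MDKP instance to an offline-APP instance.

    b: list of k knapsack capacities (b[0], ..., b[k-1]).
    a: a[i][s] > 0 is the requirement of item i along dimension s.
    Returns node capacities and reqs, where reqs[s][i] is the
    requirement of the (s+1)-th capsule of application i.
    """
    k, n = len(b), len(a)
    capacities = [b[0]]                          # Stage 1: C(N1) = b1
    reqs = [[a[i][0] for i in range(n)]]         # r_i^1 = a_i1
    for s in range(1, k):                        # Stages 2..k
        r_min = min(a[i][s] for i in range(n))
        sf = capacities[s - 1] / r_min + 1       # Eq. (1)
        capacities.append(b[s] * sf)             # C(Ns) = b_s * SF_s
        reqs.append([a[i][s] * sf for i in range(n)])
    return capacities, reqs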
Let us consider the size of the input to the offline placement problem produced by the reduction. Due to the scaling of capacities and requirements described in the reduction, the magnitudes of the inputs increase by a multiplicative factor of O(M^j) for node N_j and the j-th capsules. If we assume binary representation, this implies that the input size increases by a multiplicative factor of O(M^(j/2)), 1 < j <= k. Overall, the input size increases by a multiplicative factor of O(M^k). For the mapping to be a reduction, we need this to be a constant. Therefore, our reduction works only when we impose the following restrictions on the offline APP: (1) k and M are constants, and (2) all the capsule requirements are positive.

Gap-preserving property of the reduction: The reduction presented is gap-preserving because the size of the optimal solution to the offline placement problem is exactly equal to the size of the optimal solution to MDKP:

    [OPT(MDKP) >= 1] ==> [OPT(offline APP) >= 1]
    [OPT(MDKP) < 1] ==> [OPT(offline APP) < 1]

Together, these results prove that the restricted version of the offline APP described in Theorem 1 may not admit a PTAS unless P = NP.

4 Offline Algorithms for APP

In this section we present approximation algorithms for several variants of the placement problem. Except in Section 4.4, we assume that the cluster is homogeneous, in the sense specified earlier. In most cases, we state the results without proof due to lack of space. We refer the reader to [16] for the proofs.

4.1 Placement of Single-Capsule Applications

We consider a restricted version of offline APP in which every application has exactly one capsule. We provide a polynomial-time algorithm for this restriction of offline APP, whose placements are within a factor 2 of optimal.

The approximation algorithm works as follows. Say that we are given n nodes N_1, ..., N_n and m single-capsule applications C_1, ..., C_m with requirements R_1, ..., R_m. Assume that the nodes have unit capacities. The algorithm first sorts the applications in nondecreasing order of their requirements. Denote the sorted applications by c_1, ..., c_m and their requirements by r_1, ..., r_m. The algorithm considers the applications in this order. An application is placed on the "first" node where it can be accommodated (i.e., the node with the smallest index that has sufficient resources for it). The algorithm terminates once it has considered all the applications or it finds an application that cannot be placed, whichever occurs earlier. We call this algorithm FF SINGLE.

Lemma 2 FF SINGLE has an approximation ratio of 2.

4.2 Placement without the Capsule Placement Restriction

An approximation algorithm based on first-fit gives an approximation ratio of 2 for multi-capsule applications, provided that they don't have the capsule placement restriction. The approximation algorithm works as follows. Say that we are given n nodes N_1, ..., N_n and m applications A_1, ..., A_m with requirements R_1, ..., R_m (the requirement of an application is the sum of the requirements of its capsules). Assume that the nodes have unit capacities. The algorithm first orders the applications in nondecreasing order of their requirements. Denote the ordered applications by a_1, ..., a_m and their requirements by r_1, ..., r_m. The algorithm considers the applications in this order. An application is placed on the "first" set of nodes where it can be accommodated (i.e., the nodes with the smallest indices that have sufficient resources for all its capsules). The algorithm terminates once it has considered all the applications or it finds an application that cannot be placed, whichever occurs first. We call this algorithm FF MULTIPLE RES.

Lemma 3 FF MULTIPLE RES has an approximation ratio that approaches 2 as the number of nodes in the cluster grows.

4.3 Placement of Identical Applications

Two applications are identical if their sets of capsules are identical. Below we present a placement algorithm based on "striping" applications across the nodes in the cluster and determine its approximation ratio.

Striping-based placement: Assume that the applications have k capsules each, with requirements r_1, ..., r_k (r_1 <= ... <= r_k). The algorithm works as follows. Let us denote the nodes as N_1, ..., N_m. The nodes are divided into sets of size k each. Since m >= k, there will be at least one such set. Let t = floor(m/k), t >= 1, and denote these sets as S_1, ..., S_{t+1}. Note that S_{t+1} may be an empty set, 0 <= |S_{t+1}| <= k - 1. The algorithm considers these sets in turn and "stripes" as many unplaced applications on them as it can. The set of nodes under consideration is referred to as the current set of k nodes. When the current set of k nodes gets exhausted and there are more applications to place, the algorithm takes the next set of k nodes and continues. The algorithm terminates when the nodes in S_t are exhausted, or all applications have been placed, whichever occurs earlier. Note that none of the nodes in the (possibly empty) set S_{t+1} are used for placing the applications.
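As an illustration of the first-fit algorithms in Sections 4.1 and 4.2 (our own sketch, not code from the paper), FF SINGLE can be written in a few lines; FF MULTIPLE RES differs only in that it searches for a smallest-index set of nodes rather than a single node:

def ff_single(capacities, requirements):
    """First-fit placement of single-capsule applications (FF SINGLE).

    capacities: remaining capacity of each node (unit capacities in
    the paper's setting). requirements: one requirement per app.
    Returns (app_index, node_index) pairs placed before the first
    application that cannot be placed.
    """
    free = list(capacities)
    placement = []
    # Consider applications in nondecreasing order of requirement.
    for app, r in sorted(enumerate(requirements), key=lambda p: p[1]):
        node = next((j for j, c in enumerate(free) if c >= r), None)
        if node is None:
            break                      # stop at the first failure
        free[node] -= r
        placement.append((app, node))
    return placement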
Lemma 4 The striping-based placement algorithm yields an approximation ratio of (t + 1)/t for identical applications, where t = floor(m/k).

4.4 Max-First Placement

In this section we turn our attention to the general offline APP. We let the nodes in the cluster be heterogeneous. We find that this problem is much harder to approximate than the restricted cases. We first present a heuristic that works differently from the first-fit based heuristics we have considered so far. We obtain an approximation ratio of k for this heuristic, where k is the maximum number of capsules in any application.

Our heuristic works as follows. It associates with each application a weight which is equal to the requirement of the largest capsule in the application. The heuristic considers the applications in nondecreasing order of their weights. We use a bipartite graph to model the problem of placing an application on the cluster. In this graph, we have one vertex for each capsule in the application and for each node in the cluster. Edges are added between a capsule and a node if the node has sufficient capacity for hosting the capsule. We say that the node is feasible for the capsule. In Lemma 5 (see [16] for the proof) we show that an application can be placed on the cluster if and only if there is a matching of size equal to the number of capsules in the application. We solve the maximum matching problem on this bipartite graph [7]. If the matching has size equal to the number of capsules, we place the capsules of the application on the nodes that the maximum matching connects them to. Otherwise, the application cannot be placed and the heuristic terminates. We refer to this heuristic as Max-First.

Lemma 5 An application with k capsules can be placed on a cluster if and only if there is a matching of size k in the bipartite graph modeling its placement on the cluster.

Lemma 6 The placement heuristic Max-First described above has an approximation ratio of k, where k is the maximum number of capsules in an application.

Proof: Let A represent the set of all the applications and |A| = m. Denote by n the number of nodes in the cluster and the nodes themselves by N_1, ..., N_n. Let us denote by H the set of applications that Max-First places. Let O denote the set of applications placed by any optimal placement algorithm. Clearly, |H| <= |O| <= m. Represent by I the set of applications that both H and O place; that is, I = H ∩ O. Further, denote by R the set of applications that neither H nor O places.

The basic idea behind this proof is as follows. We focus in turn on the applications that only Max-First and the optimal algorithm place (that is, applications in (H - I) and (O - I)), and compare the sizes of these sets. A relation between the sizes of these sets immediately yields a relation between the sizes of the sets H and O. (Observe that (H - I) and (O - I) may both be empty, in which case we have the claimed ratio trivially.)

Consider the placement given by Max-First. Remove from this all the applications in I, and deduct from the nodes the resources reserved for the capsules of these applications. Denote the resulting nodes by N_1^{H-I}, ..., N_n^{H-I}. Do the same for the placement given by the optimal algorithm, and denote the resulting nodes by N_1^{O-I}, ..., N_n^{O-I}. To understand the relation between the applications placed on these node sets by Max-First and the optimal algorithm, suppose Max-First places y applications from the set (H - I) on the nodes N_1^{H-I}, ..., N_n^{H-I}. Let us denote the applications in (A - I) by B_1, ..., B_y, ..., B_{|A-I|}, where the applications are arranged in nondecreasing order of the size of their largest capsule. That is, l(B_1) <= ... <= l(B_y) <= ... <= l(B_{|A-I|}), l(x) being the requirement of the largest capsule in application x. From the definition of Max-First, the y applications that it places are B_1, ..., B_y. Also, the applications that the optimal algorithm places on the set of nodes N_1^{O-I}, ..., N_n^{O-I} must be from the set B_{y+1}, ..., B_{|A-I|}. We make the following useful observation about the applications in the set B_{y+1}, ..., B_{|A-I|}: for each of these applications, the requirement of the largest capsule is at least l(B_y). Based on this we infer the following: Max-First will exhibit the worst approximation ratio when all the applications in (H - I) have k capsules, each with requirement l(B_y), and all the applications in (O - I) have (k - 1) capsules with requirement 0, and one capsule with requirement l(B_y). Since the total capacities remaining on the node sets N_1^{H-I}, ..., N_n^{H-I} and N_1^{O-I}, ..., N_n^{O-I} are equal, this implies that in the worst case, the set O - I would contain k times as many applications as H - I. Based on the above, we can prove an approximation ratio of k for Max-First as follows:

    |O| = |O - I| + |I| <= k * |H - I| + |I| <= k * (|H - I| + |I|) = k * |H|

This concludes our proof.

5 The Online APP

In the online version of the APP, the applications arrive one by one. We require the following from any online placement algorithm — the algorithm must place a newly arriving application on the platform if it can find a placement for it without moving any already placed capsule. This captures the placement algorithm's lack of knowledge of the requirements of the applications arriving in the future. We assume a heterogeneous cluster throughout this section.
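Returning to the matching step of Max-First (Section 4.4), the feasibility test of Lemma 5 is easy to prototype. The sketch below is our own illustration; the paper cites [7] for matching algorithms, and the use of the networkx library here is our choice, not the paper's:

import networkx as nx

def can_place(capsule_reqs, node_caps):
    """Lemma 5 as code: an application is placeable iff the
    capsule/node feasibility graph has a matching that saturates
    every capsule."""
    g = nx.Graph()
    capsules = [("c", i) for i in range(len(capsule_reqs))]
    nodes = [("N", j) for j in range(len(node_caps))]
    g.add_nodes_from(capsules, bipartite=0)
    g.add_nodes_from(nodes, bipartite=1)
    for i, r in enumerate(capsule_reqs):
        for j, cap in enumerate(node_caps):
            if cap >= r:                 # node j is feasible for capsule i
                g.add_edge(capsules[i], nodes[j])
    matching = nx.bipartite.maximum_matching(g, top_nodes=capsules)
    # The matching dict contains both directions; count matched capsules.
    cap_set = set(capsules)
    return sum(1 for v in matching if v in cap_set) == len(capsules)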
5.1 Online Placement with Variable Preference for Nodes

In some scenarios, it may be useful to be able to honor any preference a capsule may have for one feasible node over another. In this section, we describe how online placement can take such preferences into account. We model such a scenario by enhancing the bipartite graph representing the placement of an application on the cluster by allowing the edges in the graph to have positive weights. The online placement problem therefore is to find the maximum matching of minimum weight in this weighted graph. We show that this can be found by reducing the placement problem to the Minimum-weight Perfect Matching Problem.

Our reduction works as follows. Assume that all the weights in the original bipartite graph are in the range (0, 1) and that they sum to 1. This can be achieved by normalizing all the weights by the sum of the weights: if an edge e_i had weight w_i, its new weight would be w_i / Sum_{e in E} w_e. Denote the number of capsules by m and the number of nodes by n, m <= n. Construct n - m capsules and add edges with weight 1 each between them and all the nodes. We call these the dummy capsules.

Lemma 7 In the weighted bipartite graph G corresponding to an application with m capsules and a cluster with n nodes (m <= n), a matching of size m and cost c exists if and only if a perfect matching of cost (c + n - m) exists in the graph G' produced by the reduction described above.

Proof: Due to lack of space we point the reader to [16] for the proof.

[9] gives a polynomial-time algorithm (called the blossom algorithm) for computing minimum-weight perfect matchings. [6] provides a survey of implementations of the blossom algorithm. The reduction described above, combined with Lemma 7, can be used to find the desired placement. If we do not find a perfect matching in the graph G', we conclude that there is no placement for the application. Otherwise, the perfect matching minus the edges incident on the newly introduced capsules gives us the desired placement.

6 Conclusions

In this work we considered the offline and the online versions of APP, the problem of placing distributed applications on a cluster of servers. This problem was found to be NP-hard. We used a gap-preserving reduction from the Multi-dimensional Knapsack Problem to show that even a restricted version of the offline placement problem may not have a PTAS. A heuristic that considered applications in nondecreasing order of their "largest component" was found to provide an approximation ratio of k, where k was the maximum number of capsules in any application. We also considered restricted versions of the offline APP in a homogeneous cluster. We found that heuristics based on "first-fit" or "striping" could provide an approximation ratio of 2 or better.

For the online placement problem, we allowed the capsules of an application to have variable preference for the nodes on the cluster and showed how a standard algorithm for the minimum weight perfect matching problem may be used to find the "most preferred" of all possible placements for such an application.

References

[1] K. Appleby, S. Fakhouri, L. Fong, G. Goldszmidt, M. Kalantar, S. Krishnakumar, D. Pazel, J. Pershing, and B. Rochwerger. Oceano - SLA-based Management of a Computing Utility. In Proceedings of the IFIP/IEEE Symposium on Integrated Network Management, May 2001.

[2] A. K. Chandra, D. S. Hirschberg, and C. K. Wong. Approximate Algorithms for Some Generalized Knapsack Problems. Theoretical Computer Science, volume 3, pages 293-304, 1976.

[3] J. Chase, D. Anderson, P. Thakar, A. Vahdat, and R. Doyle. Managing Energy and Server Resources in Hosting Centers. In Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles (SOSP), pages 103-116, October 2001.

[4] C. Chekuri and S. Khanna. On Multi-dimensional Packing Problems. In Proceedings of the Tenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), January 1999.

[5] C. Chekuri and S. Khanna. A PTAS for the Multiple Knapsack Problem. In Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms, 2000.

[6] W. Cook and A. Rohe. Computing Minimum-weight Perfect Matchings. INFORMS Journal on Computing, pages 138-148, 1999.

[7] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms. The MIT Press, Cambridge, MA.

[8] D. S. Hochbaum (Ed.). Approximation Algorithms for NP-Hard Problems. PWS Publishing Company, Boston, MA.

[9] J. Edmonds. Maximum Matching and a Polyhedron with 0,1-Vertices. Journal of Research of the National Bureau of Standards 69B, 1965.

[10] A. M. Frieze and M. R. B. Clarke. Approximation Algorithms for the m-dimensional 0-1 Knapsack Problem: Worst-case and Probabilistic Analyses. European Journal of Operational Research 15(1), 1984.

[11] M. Moser, D. P. Jokanovic, and N. Shiratori. An Algorithm for the Multidimensional Multiple-Choice Knapsack Problem. IEICE Trans. Fundamentals Vol. E80-A No. 3, March 1997.

[12] A Compendium of NP Optimization Problems. http://www.nada.kth.se/~viggo/problemlist/compendium.html.

[13] P. Raghavan and C. D. Thompson. Randomized Rounding: a Technique for Provably Good Algorithms and Algorithmic Proofs. Combinatorica, volume 7, pages 365-374, 1987.

[14] S. Ranjan, J. Rolia, H. Fu, and E. Knightly. QoS-Driven Server Migration for Internet Data Centers. In Proceedings of the Tenth International Workshop on Quality of Service (IWQoS 2002), May 2002.

[15] D. B. Shmoys and E. Tardos. An Approximation Algorithm for the Generalized Assignment Problem. Mathematical Programming A, 62:461-474, 1993.

[16] B. Urgaonkar, A. Rosenberg, and P. Shenoy. Application Placement on a Cluster of Servers. Technical Report TR04-18, Department of Computer Science, University of Massachusetts, March 2004.

[17] B. Urgaonkar, P. Shenoy, and T. Roscoe. Resource Overbooking and Application Profiling in Shared Hosting Platforms. In Proceedings of the Fifth Symposium on Operating System Design and Implementation (OSDI'02), December 2002.
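As a closing illustration (ours, not the paper's), the dummy-capsule reduction of Section 5.1 can be prototyped directly. Since the graph is bipartite, an assignment-problem solver such as SciPy's linear_sum_assignment suffices in place of the general-graph blossom algorithm; the big-cost sentinel below stands in for a missing (infeasible) edge:

import numpy as np
from scipy.optimize import linear_sum_assignment

def preferred_placement(weights):
    """Sketch of the Section 5.1 reduction.

    weights[i][j] in (0, 1) is the normalized preference cost of
    placing capsule i on node j, or None if node j is infeasible
    for capsule i. Requires m = len(weights) <= n nodes.
    Returns {capsule: node} or None if no placement exists.
    """
    m, n = len(weights), len(weights[0])
    big = float(n + 2)                   # exceeds any feasible total cost
    cost = np.full((n, n), 1.0)          # rows m..n-1 are dummy capsules
    for i in range(m):
        for j in range(n):
            cost[i, j] = big if weights[i][j] is None else weights[i][j]
    rows, cols = linear_sum_assignment(cost)   # min-cost perfect matching
    if any(cost[i, j] >= big for i, j in zip(rows, cols)):
        return None                      # some real capsule has no feasible node
    return {int(i): int(j) for i, j in zip(rows, cols) if i < m}

Giving every dummy edge cost 1 mirrors the (c + n - m) accounting in Lemma 7: any feasible perfect matching costs at most n, which is strictly less than a matching that uses an infeasible edge, so the solver avoids infeasible edges exactly when a placement exists.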
[Beowulf] Teaching Scientific Computation (looking for the perfect text)
Joe Landman landman at scalableinformatics.com
Tue Nov 20 14:39:29 PST 2007
Nathan Moore wrote:
> > Nathan,
> > I'm sure you'll get lots of very experienced responses but if I may:
> > 1. Book. K&RC is the best book ever, on any subject.
> > 2. Demographics. It looked to me that engineers were typically
> > learning and using C (C++, C with Classes, sometimes Java) more than
> > Fortran. I would have expected similar among physicists, but I
> > understand that a lot of Fortan is still extant and vital. Also there
> > is some convergence, ultimately it won't matter much.
> But for solving a problem (as opposed to learning to get a job
> programming) what about something like Matlab? It's procedural, there
> are compilers (sort of), and it automatically does stuff with matrices
> in sensible ways.
> No site license for matlab here - I generally have my students couple
Octave: http://www.gnu.org/software/octave/
After taking students through the joys of programming, I showed them how
to do masses with springs on Octave. What a difference. As Jim Lux
noted, you spend less time dealing with the vagaries of the language and
more time helping them articulate a solution (though this particular
example is bad in that you have many signs you need to correctly and
carefully account for ... sign errors are a bear in any language)
> gnuplot with some sort of language (perl or fortran depending on how
> long the job will run), or offer mathematica as an option.
I also like Maxima.
landman at lightning:~$ maxima
Maxima 5.12.0 http://maxima.sourceforge.net
Using Lisp GNU Common Lisp (GCL) GCL 2.6.7 (aka GCL)
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
This is a development version of Maxima. The function bug_report()
provides bug reporting information.
(%i1) integrate(1/(1+x^2),x,0,inf);
                                     %pi
(%o1)                                ---
                                      2
(%i2) fortran(%o1);
      %pi/2
(%o2)                                done
I used to try to have it help simplify integrals in statistical
mechanics homework from (owie) 18 years ago.
> I would certainly eschew any of the fads for "Engineering with Excel"
> which make my teeth grind when I hear about it. Every time one of my
> colleagues creates this incredibly elaborate spreadsheet to calculate
> receiver performance (gain distribution, intermodulation, etc.) I have
> to wonder how many hours were spent working around the idiosyncracies
> of Excel (just to get the plot to look right, if nothing else), when
> they could have spent that time learning a "real" tool to do the job.
> Yes, I agree, there is no more asinine task than matrix calculations in
> excel. I keep waiting for Microsoft to have competent-looking graphs be
For fun^h^h^hprofit^h^h^h^h^h^hmasochism I once did a Runge-Kutta orbit
calculator in Excel.
Yes, you can use it for such things ... but ... why would you want to?
> the default when plotting x&y data. The new version it even worse than
> XP excel. The plots are rendered with some sort of open GL surface so
> that trend lines now look like giant ropes of licorice.
Heh... I still like Gnuplot, as you can programmatically generate input
decks for it, and have it generate png/jpg/ps/pdf from this ...
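(As an added illustration, not part of the original message: a generated deck can be as simple as the following, where the file names are made up.)

set terminal png
set output "spring.png"
set xlabel "t (s)"
set ylabel "x (m)"
plot "spring.dat" using 1:2 with lines title "mass 1"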
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 866 888 3112
cell : +1 734 612 4615
More information about the Beowulf mailing list | {"url":"http://www.beowulf.org/pipermail/beowulf/2007-November/020010.html","timestamp":"2014-04-20T08:57:56Z","content_type":null,"content_length":"7603","record_id":"<urn:uuid:46e638df-2341-43cb-b054-ef064b1fa434>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: LR Grammars not in LALR(1) or LR(1)
"tj bandrowsky" <tbandrow@unitedsoftworks.com>
12 Sep 2002 00:15:35 -0400
From comp.compilers
| List of all articles for this month |
From: "tj bandrowsky" <tbandrow@unitedsoftworks.com>
Newsgroups: comp.compilers
Date: 12 Sep 2002 00:15:35 -0400
Organization: http://groups.google.com/
References: 02-09-014 02-09-029
Keywords: parse, LR(1), comment
Posted-Date: 12 Sep 2002 00:15:35 EDT
> but I have an old book by Robin
> Hunter, "Compilers...", which says on page 103 that LR(k) grammars are
> LR(1) grammars, and even LR(0) if each sentence is given an
> end-marker, citing a paper by Hopcroft and Ullman, "Introduction to
> Automata Theory, Languages and Computation" 1979.
I'm really looking, I guess, for a good text that is rich in automata
theory, talks about Chomsky, and then goes into rigorous
definitions of LR(k), GLR, LALR(k), LL(k). Out of curiosity, is
there such a construct as GLL?
[You probably want Aho, Hopcroft and Ullman's "Theory of Parsing ...",
the two volume set that's long out of print, not the Dragon book. -John]
C++ operators, compound assignments
Now we know how to use variables and constants, we can begin to use them with operators. Operators are integrated in the C++ language. The C++ operators are mostly made out of signs (some language
use keywords instead.)
We used this operator before and it should already be known to you. For the people that didn’t read the previous tutorials we will give a short description.
With an assignment (=) operator you can assign a value to a variable.
For example: A = 5; or B = -10; or A = B;
Let’s look at A = B : The value that is stored in B will be stored in A. The initial value of A will be lost.
So if we say:

B = 20;
A = B;

Then A will contain the value twenty.
The following expression is also valid in C++: A = B = C = 10;
The variables A,B,C will now contain the value ten.
Calculations (arithmetic operators)
There are different operators that can be used for calculations which are listed in the following table:
Operator   Operation
+          Addition
-          Subtraction
*          Multiplication
/          Division
%          Modulo (remainder of an integer division)
Now that we know the different operators, let’s calculate something:
int main()
{
    int A, B;

    A = 10;
    B = A + 1;
    A = B - 3;

    return 0;
}
Note: The value stored in A at the end of the program will be eight.
Compound assignments
Compound assignments can be used when you want to modify the value of a variable by performing an operation on the value currently stored in that variable. (For example: A = A + 1 ).
• Writing <var> += <expr> is the same as <var> = <var> + <expr>.
• Writing <var> -= <expr> is the same as <var> = <var> - <expr>.
• Writing <var> /= <expr> is the same as <var> = <var> / <expr>.
• Writing <var> *= <expr> is the same as <var> = <var> * <expr>.
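For example, a short sample program (an added illustration, not part of the original text) shows compound assignment in action:

#include <iostream>

int main()
{
    int A = 10;
    A += 5;    // same as A = A + 5; A is now 15
    A -= 3;    // same as A = A - 3; A is now 12
    A *= 2;    // same as A = A * 2; A is now 24
    A /= 4;    // same as A = A / 4; A is now 6
    std::cout << A;    // prints 6
    return 0;
}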
Decrease and increase operators
The increase operator (++) and the decrease operator (–) are used to increase or reduce the value
stored in the variable by one.
Example: A++; is the same as A+=1; or A= A + 1;
A characteristic of this operator is that it can be used as a prefix or as a suffix (before or after). Example: A++; or ++A; have exactly the same meaning. But in some expressions they can have a
different result.
For instance: In the case that the decrease operator is used as a prefix (--A) the value is decreased before the result of the expression is evaluated. Example:
My_var = 10;
A = --My_var;
Note: My_var is decreased before the value is copied to A. So My_var contains 9 and A will contain 9.
In case that it is used as a suffix (A–) the value stored in A is decreased after being evaluated and therefore the value stored before the decrease operation is evaluated in the outer expression.
My_var = 10;
A = My_var--;
Note: The value of My_var is copied to A and then My_var is decreased. So My_var will contain 9 and A will contain 10.
Relation or equal operators
With the relation and equal operators it is possible to make a comparison between two expressions. The result is a Boolean value that can be true or false. See the table for the operators:
│ == │ Equal to                 │
│ != │ Not equal to             │
│ >  │ Greater than             │
│ <  │ Less than                │
│ >= │ Greater than or equal to │
│ <= │ Less than or equal to    │
You have to be careful that you don’t use one equal sign (=) instead of two equal signs (==). The first one is an assignment operator, the second one is a compare operator.
Logical operators
Logical operators are mainly used to control program flow. Usually, you will find them as part of an if, while, or some other control statement. The operators are:
• <op1> || <op2> – A logical OR of the two operands
• <op1> && <op2> – A logical AND of the two operands
• ! <op1> – A logical NOT of the operand.
Logical operands allow a program to make decisions based on multiple conditions. Each operand is considered a condition that can be evaluated to a true or false value. Then the value of the
conditions is used to determine the overall value of the statement. Take a look at the tables below:
Table: && operator (AND)
│<op1>│<op2>│<op1> && <op2> │
│true │true │ true │
│true │false│ false │
│false│true │ false │
│false│false│ false │
Table: || operator (OR)
│<op1>│<op2>│<op1> || <op2> │
│true │true │ true │
│true │false│ true │
│false│true │ true │
│false│false│ false │
Some examples:
!(10 <= 5)                 // (10 <= 5) is false, but the NOT (!) makes it true.
!true                      // is false
((10 == 10) && (5 > 10))   // is false. (true && false)
((5 > 10) || (10 == 10))   // is true. (false || true)
Bitwise operators
The bitwise operators are similar to the logical operators, except that they work with bit patterns. Bitwise operators are used to change individual bits in an operand.
│ operator │ asm equivalent │ description                      │
│    &     │ AND            │ Bitwise AND                      │
│    |     │ OR             │ Bitwise Inclusive OR             │
│    ^     │ XOR            │ Bitwise Exclusive OR             │
│    ~     │ NOT            │ Unary complement (bit inversion) │
│    <<    │ SHL            │ Shift Left                       │
│    >>    │ SHR            │ Shift Right                      │
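A small sample program (an added illustration, not part of the original text) shows the bitwise operators at work:

#include <iostream>

int main()
{
    int A = 12;    // binary 1100
    int B = 10;    // binary 1010
    std::cout << (A & B) << "\n";     // 8  (binary 1000)
    std::cout << (A | B) << "\n";     // 14 (binary 1110)
    std::cout << (A ^ B) << "\n";     // 6  (binary 0110)
    std::cout << (~A) << "\n";        // -13 (all bits inverted)
    std::cout << (A << 1) << "\n";    // 24 (shift left = multiply by 2)
    std::cout << (A >> 2) << "\n";    // 3  (shift right = divide by 4)
    return 0;
}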
That is all for this tutorial.
GIRINDRA MOHAN on February 28th, 2011:
good concept given for all ……
Ricardo on October 11th, 2011:
In the following note:
Note:The value of My_var is copied to A and then My_var is increased. So My_var will contain 9 and A will contain 10.
Don’t you mean the value of My_var is copied to A and then My_var is DECREASED? It may be a small typo (I hope) otherwise I am confused
admin on October 11th, 2011:
@Ricardo – You are right that is a typo. I’ve corrected the text to: The value of My_var is copied to A and then My_var is decreased. So My_var will contain 9 and A will contain 10. Thanks for your
Shay on January 28th, 2014:
In the example of logical operator:
( (5 > 10) || ( 10 = 10)) // is true. (false || true)
Shouldn’t we use double equal signs instead of single equal sign, because here we have comparison not assignment!?
admin on February 5th, 2014:
@shay: You are right, so i corrected the error. | {"url":"http://www.codingunit.com/cplusplus-tutorial-operators-compound-assignments","timestamp":"2014-04-21T07:35:39Z","content_type":null,"content_length":"45138","record_id":"<urn:uuid:77d83c62-ed29-41c3-b6ee-9bdf7d31384f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lake Forest, CA Prealgebra Tutor
Find a Lake Forest, CA Prealgebra Tutor
...I can provide both. I can make a difference, I understand the learning process and teaching. My goal is to transform a student from just getting a grade to excelling in the learning process.
33 Subjects: including prealgebra, chemistry, geometry, biology
...I'm nearly done with my undergraduate degree in Chemistry, but I've been tutoring students and colleagues since high school. My favorite part of tutoring, by far, is the reward of seeing
someone succeed in an area they were struggling with. When I walk in and a student shows me they got an A for the first time on a Chemistry test, it confirms my work and renews my love of the
9 Subjects: including prealgebra, chemistry, calculus, algebra 1
I am currently a Spanish/ELD(ESL) teacher in the Capistrano Unified School District. I earned a Master's Degree in Bilingual Education from Columbia University in New York and an undergraduate
degree in Spanish from Boston College. I also studied at the Universidad Complutense in Madrid, Spain and worked and taught in Bolivia while serving in the Peace Corps.
9 Subjects: including prealgebra, English, Spanish, writing
...I also instruct converting fractions to decimals and decimals to fractions. Writing, Grammar, Study Skills, Literature Studies, Reading Comprehension, Spelling, Algebra Functions, Geometry,
Multiplication/Division of Exponents, Addition, Subtraction, Multiplication, Division, of Fractions, Ordin...
19 Subjects: including prealgebra, English, reading, Spanish
...I also like physics a lot. When I was in high school, I tutored other fellow students in mathematics, US history, biology and physics. When I went to community college, I also tutored the same
subjects to educationally challenged students.
11 Subjects: including prealgebra, physics, accounting, calculus | {"url":"http://www.purplemath.com/Lake_Forest_CA_prealgebra_tutors.php","timestamp":"2014-04-21T13:07:59Z","content_type":null,"content_length":"24367","record_id":"<urn:uuid:97189999-7e08-4446-b49e-6b49edccb798>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
The n-Category Café
December 31, 2008
The Toric Variety Associated to the Weyl Chambers
Posted by John Baez
Happy New Year! I’m making a resolution to avoid starting work on new papers. I want to spend more time learning math, playing music, and having fun.
For a long time, whenever people said the phrase toric variety, I’d cover my ears and refuse to listen to what they had to say. Since I knew nothing of algebraic geometry, I thought ‘toric varieties’
were just one more of those specialized concepts — like Fano varieties and del Pezzo surfaces — that were devised solely for the purpose of demonstrating one's superior erudition in this esoteric subject.
That’s changed. Now I love toric varieties, and I have a question about them.
But I won’t explain what they are, because then everyone would know.
Posted at 10:12 PM UTC |
Followups (35)
Organizing the Pages at nLab
Posted by David Corfield
An e-mail discussion migrated to the General Discussion page at nLab , where Urs sensibly suggested it should migrate further here.
Posted at 8:42 AM UTC |
Followups (17)
Joint Math Meetings in Washington DC
Posted by John Baez
In just a few days, hordes of mathematicians will descend on Washington DC for the big annual joint meeting of the American Mathematical Society (AMS), Mathematical Association of America (MAA),
Society for Industrial and Applied Mathematics (SIAM), and sundry other societies, organizations, clubs, conspiracies and cabals:
I’ll be there. Will you?
Posted at 2:38 AM UTC |
Followups (9)
December 30, 2008
Groupoidification from sigma-Models?
Posted by Urs Schreiber
The interest in groupoidification (see our recent discussion) is to a large extent motivated from the feeling that it illuminates general structural aspect of quantum field theory.
My motivation is this:
to every differential nonabelian cocycle describing an associated $\infty$-vector bundle with connection, there should canonically be associated the corresponding $\sigma$-model QFT, which,
physically speaking, describes the worldvolume theory of a brane couopled to this $\infty$-bundle.
I have been thinking about this for quite a while now, starting with a series of posts on QFT of the charged $n$-particle. It took me a bit to get the required machinery into place, such as the
interpretation of parallel transport $\infty$-functors for $\infty$-bundles with connection in homotopical cohomology theory, or the machinery of universal $\infty$-bundles.
Then John started teaching us about groupoidification and I noticed that this should naturally arise when forming the $\sigma$-model of a nonabelian cocycle. I chatted about the basic idea of this
insight in An exercise in groupoidification: the path integral.
Now I found the time to expand on this in much more detail. Not that this is done yet, but a coherent closed picture seems to be emerging, which I describe in these notes:
$n$Lab/schreiber: Nonabelian cocycles and their $\sigma$-Model QFTs
Posted at 3:40 PM UTC |
Followups (3)
December 26, 2008
Groupoidification Made Easy
Posted by John Baez
Merry Christmas! It's still Christmas here in California, despite what the time stamp on this blog may say. So, it's not too late for one last present! Here's one just for you, from Santa and his elves.
Posted at 4:01 AM UTC |
Followups (98)
December 24, 2008
Posted by John Baez
Merry Christmas! Here are some more presents. I hope these are a bit easier to appreciate than my new paper on infinite-dimensional representations of 2-groups. Again, they’re all about symmetry. But
this time you don't need a math degree to enjoy them. You just need to look at 'em!
Posted at 9:48 PM UTC |
Post a Comment
Infinite-Dimensional Representations of 2-Groups
Posted by John Baez
Yay! This paper is almost ready for the arXiv! We’ve been working on it for years… it turned out to involve a lot more measure theory than we first imagined it would:
Posted at 7:25 PM UTC |
Followups (13)
Science and the Environment
Posted by John Baez
As the year’s end approaches and I hole up at home, free of the pressures of teaching, my thoughts roam a bit more freely. So, they naturally turn to questions like this: am I doing the right things?
For example: should I do more to help save the planet? And if so, how can I do it in a way that takes advantage of my special skills? An education in math and physics leads people to value simple,
elegant problems. The ecological crisis we face is anything but: it’s an incredible mess. Is there anything a mathematical physicist can do to help that a biologist or politician can’t do better? I
try to proselytize on my webpage, but is that enough?
Questions like this don’t fit comfortably into this blog. So, I apologize to readers who prefer the usual fare. But these questions seem too important to completely ignore — and they’re especially
timely for this reason:
In just a few weeks, science policy in the US may be run by scientists.
Posted at 2:26 AM UTC |
Followups (92)
December 23, 2008
Bridge Building
Posted by David Corfield
If anyone wanted to bridge the gap between the two cultures, Terry Tao’s post – Cohomology for dynamical systems might provide a good place to start. Remember our last collective effort at
bridge-building saw us rather unsuccessfully try to categorify the Cauchy-Schwarz inequality.
Regarding this current prospective crossing point, we hear that the first cohomology group of a certain dynamical system is useful for the ‘ergodic inverse Gowers conjecture’, and that there are
hints that higher cohomology elements may be relevant. The post finishes with mention of non-abelian cohomology.
It wouldn’t be surprising if algebraic topology provided the common ground. A while ago we heard Urs describe Koslov’s work on combinatorial algebraic topology.
Posted at 1:08 PM UTC |
Followups (22)
December 22, 2008
Higher Structures in Göttingen - Part II
Posted by John Baez
December 19, 2008
The Microcosm Principle
Posted by David Corfield
Furthering my study of coalgebra, I came across slides for a couple of talks (here and here) which put John and Jim’s microcosm principle into a coalgebraic context.
Recall their claim in Higher-Dimensional Algebra III that “certain algebraic structures can be defined in any category equipped with a categorified version of the same structure” (p. 11), as with
monoid objects in a monoidal category.
We name this principle the microcosm principle, after the theory, common in pre-modern correlative cosmologies, that every feature of the microcosm (e.g. the human soul) corresponds to some
feature of the macrocosm.
Later, in section 4.3, they give a formal treatment of the principle using operads.
On the other hand, in the slides and associated paper, Hasuo, Jacobs and Sokolova give the principle a 2-categorical formulation as a lax natural transformation $X: \mathbf{1} \implies \mathbb{C}$
between functors from a Lawvere theory $\mathbb{L}$ to $\mathbf{CAT}$. I wonder how these treatments compare.
The microcosm principle would make for a good methodological entry for nLab, as would evil. I’ll see about extracting John’s sermon on the latter from old blog files. I must confess to feeling rather
guilty for having done so little there, after pushing for it. Perhaps Santa will bring me a generous amount of free time for Christmas.
Posted at 9:46 AM UTC |
Followups (12)
December 16, 2008
Super Version of 2-Plectic Geometry for Classical Superstrings?
Posted by John Baez
In our paper Categorified symplectic geometry and the classical string, Alex Hoffnung, Chris Rogers and I described a Lie 2-algebra of observables for the classical bosonic string. The idea was to
generalize the usual Poisson brackets coming from symplectic geometry, which make the observables for a classical point particle into a Lie algebra. The key was to replace symplectic geometry by the
next thing up the dimensional ladder: 2-plectic geometry.
Now I have a slight hankering to do the same thing for the classical superstring. Ideally this would be a formal exercise in ‘super-thinking’ — replacing everything in sight by its ‘super’ (meaning $
\mathbb{Z}/2$-graded) analogue. But maybe it’s not. Either way, I have a lot of catching up to do. So, here are some basic questions.
Posted at 7:14 PM UTC |
Followups (5)
What to Make of Mathematical Difficulties
Posted by David Corfield
T. R. drew to our attention a conference dedicated to Grothendieck. One of the papers there is Mathematics and Creativity (presumably written by Leila Schneps), which contains the passage:
Pierre Cartier observed that when Grothendieck took interest in some mathematical domain that he had not considered up till then, finding a whole collection of theorems, results and concepts
already developed by others, he would continue building on this work ‘by turning it upside down’. Michel Demazure described his approach as ‘turning the problem into its own solution’. In fact,
Grothendieck’s spontaneous reaction to whatever appeared to be causing a difficulty - nilpotent elements when taking spectra or rings, curve automorphisms for construction of moduli spaces - was
to adopt and embrace the very phenomenon that was problematic, weaving it in as an integral feature of the structure he was studying, and thus transforming it from a difficulty into a clarifying
feature of the situation. (p. 8)
Posted at 2:33 PM UTC |
Followups (12)
December 15, 2008
This Week’s Finds in Mathematical Physics (Week 273)
Posted by John Baez
In week273 of This Week’s Finds, read more about the geysers on Enceladus. Hear the story of the Earth, with an emphasis on mineral evolution — from chondrites to the Big Splat, the Late Heavy
Bombardment, the Great Oxidation Event, Snowball Earth… to now.
Then, learn about Pontryagin duality!
Posted at 2:34 AM UTC |
Followups (52)
December 9, 2008
The Status of Coalgebra
Posted by David Corfield
After my post on coalgebra, I’m still unsure which position to take regarding its status with regard to algebra. Here are some options:
• (1) It’s not a distinction worth making – a coalgebra for $(C, F)$ is an algebra for $(C^{op}, F^{op})$.
• (2) It is a distinction worth making, but there’s plenty of coalgebraic thinking going on – it’s just not flagged as such.
• (3) Coalgebra is a small industry providing a few tools for specific situations, largely in computer science, but with occasional uses in topology, etc.
Posted at 2:30 PM UTC |
Followups (56)
Science Citation Index
Posted by John Baez
When I heard some of the top journals in category theory aren’t listed by the Science Citation Index, I posted a question on the category theory mailing list.
Posted at 3:14 AM UTC |
Followups (12)
A Quick Algebra Quiz
Posted by John Baez
Here’s a quick algebra quiz. It’s really a test of your reflexes when it comes to algebra and category theory. It should take less than a minute if you have the right mental training. If you don’t,
you may be doomed.
So, take a deep breath and give it a try.
Posted at 2:20 AM UTC |
Followups (16)
December 7, 2008
Smooth Structures in Ottawa
Posted by John Baez
Here at the $n$-Café we’re trying to get to the bottom of some big questions — for example, the nature of smoothness. A manifold is a kind of smooth space — but more general smooth spaces have been
studied by Chen, Lawvere, Kock, Souriau and others, and these are starting to find their way into mathematical physics.
That’s just the beginning, though! Smoothness has a lot to do with derivatives. The concept of derivative can be generalized in some surprising ways. For example, it’s important in Joyal’s work on
combinatorics — he explained how we can take the derivative of a structure like ‘being a 2-colored finite set’. More recently, Goodwillie introduced a concept of ‘approximation by Taylor series’ for
interesting functors in homotopy theory. Even more recently, Ehrhard and Regnier introduced derivatives in logic — or more precisely, the lambda calculus.
So, it’s a great idea to have a conference on all these concepts of smoothness:
Posted at 12:40 AM UTC |
Followups (15)
December 4, 2008
Question on Infinity-Yoneda
Posted by Urs Schreiber
What is known, maybe partially, about generalizations of the Yoneda lemma to any one of the existing $\infty$-categorical models?
Posted at 11:57 AM UTC |
Followups (9)
December 3, 2008
Zhu on Lie’s Second Theorem for Lie Groupoids
Posted by Urs Schreiber
The last couple of days Chenchang Zhu had been visiting Hamburg. Yesterday she gave a nice colloquium talk on her work:
Chenchang Zhu
Lie II theorem
(pdf slides, 57 slides with overlay)
Posted at 4:48 PM UTC |
Followups (11) | {"url":"http://golem.ph.utexas.edu/category/2008/12/index.shtml","timestamp":"2014-04-18T15:39:52Z","content_type":null,"content_length":"104301","record_id":"<urn:uuid:d14e2199-ae5d-4ba6-a9dd-a9690995843f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gelfand–Mazur theorem

In operator theory, the Gelfand–Mazur theorem is a theorem, named after Israel Gelfand and Stanisław Mazur, which states:

A complex Banach algebra, with unit 1, in which every nonzero element is invertible, is isometrically isomorphic to the complex numbers.

In other words, the only complex Banach algebra that is a division algebra is the complex numbers C. This follows from the fact that, if A is a complex Banach algebra, the spectrum of an element a is nonempty (which in turn is a consequence of the complex-analyticity of the resolvent function). For every a in A, there is some complex number λ such that λ1 − a is not invertible. By assumption, λ1 − a = 0. So a = λ1. This gives an isomorphism from A to C.

Actually, a stronger and harder theorem was proved first by Stanisław Mazur alone, but it was published in France without a proof, when the author refused the editor's request to shorten his already short proof. Mazur's theorem states that there are (up to isomorphism) exactly three real Banach division algebras: the field of real numbers R, the field of complex numbers C, and the division algebra of quaternions H. Gelfand proved (independently) the easier, special, complex version a few years later, after Mazur. However, it was Gelfand's work which influenced the further progress in the area. Gelfand created a whole theory.
Here's the question you clicked on:
Integrals (attached)
true or false? \[\int_{a}^{b} f(x)g(x)\,dx = \left(\int_{a}^{b} f(x)\,dx\right)\left(\int_{a}^{b} g(x)\,dx\right)\] i wanna say true because there's a limit definition of the same kind of thing...
that's false. take f(x) = g(x) = 1, a = 0, b = 1/2: the left side is 1/2, while the right side is (1/2)(1/2) = 1/4
Here's the question you clicked on:
Why are Asians so smart?
A Brief Biography of Euclid
*This is a brief essay I put together for a world history class. It's not a work of art, but here goes*:
Euclid of Alexandria is one of the most important and influential mathematicians in history. Living in ancient Alexandria, he wrote The Elements, a geometry textbook used in some places until the
twentieth century. His work in geometry provided the foundation on which all future mathematicians were educated.
For a man of such great significance to the world of mathematics, little is known about his actual life. Euclid is thought to have lived from 325-265 BC, mostly in Alexandria. He was taught at The
Academy in Athens, founded by Plato, and probably tutored another great mathematician, Archimedes. Euclid also founded a great mathematics school in Alexandria. Little was ever written about Euclid,
and the available information is scarce and of questionable accuracy. Much of the information we do have is from authors like Proclus who lived centuries later, writing about his books, not his life.
If little has ever been made of Euclid's life, then the opposite is true of his book. The Elements was used as the primary geometry resource for over 2000 years, and his lessons could still be used
today. Although it contains 13 volumes, much of the work may not be Euclid's. Some of the chapters seem to be written with different styles, and others are geared for different ages, leading one to
believe that he inserted other mathematicians' work into his own.
Each volume begins with pages of definitions and postulates, followed by his theorems. Euclid then proves each one of his theorems using the definitions and postulates, mathematically proving even
the most obvious. His work was translated into Latin and Arabic, and was first printed in mass quantity in 1482, ten years before Columbus, but 1800 years AFTER it was written! From that point until
the early 1900's, The Elements was considered by far the best geometry textbook in the world.
Although he may not have written The Elements entirely on his own, his other works are certainly his alone. Those include Data, Optics, Phaenomena, and On Division of Figures. His work in Data is
probably the most famous of his smaller works, and focuses on finding certain measurements and quantities when others are given. Phaenomena is about planetary motions and Optics about perspectives.
In Optics, Euclid attempts to prove the common belief of the time that sight was created by rays coming from the eye, rather than light entering the eye.
Euclid was apparently a kind, patient man, and did possess a sarcastic sense of humor. In fact, King Ptolemy once asked Euclid if there was an easier way to study math than learning all the theorems.
Euclid then replied, "There is no royal road to geometry," and sent one of the most powerful kings of his time off to study. On another occasion, a student of his questioned the value of learning
geometry, much like students today. Euclid responded by giving the small child a coin, saying that "he must make gain out of what he learns."
There are many other works of Euclid which appear to be lost to time, but his primary work in The Elements is what made him famous. His work in geometry led to discovery after discovery in history,
and provided the basis for mathematical education for 2000 years. While students no longer read directly from his writing, the textbooks of today are still based on Euclidean proofs and theorems.
Perhaps it is fitting, then, that Euclid is called "The Father of Geometry."
1. "Euclid." Crystalinks. http://www.crystalinks.com/euclid.html. 26 Jan 2003.
2. "Euclid of Alexandria." School of Mathematics and Statistics, University of St Andrews, Scotland. http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Euclid.html. 24 Jan 2003.
3. "Euclid, Greek Mathematician." The Columbia Encyclopedia, 2001. http://www.bartleby.com/65/eu/Euclid.html.
Prove directly that between any two rational numbers there exists a rational number.
Let a and b be two rational numbers such that a < b.

`a, b in Q`, where Q is the set of rational numbers, so by the definition of rational numbers we may write `a = p/q` and `b = r/s` for integers p, q, r, s with q and s nonzero.

Then

`(a+b)/2 = (1/2)(p/q + r/s) = (ps+rq)/(2qs) in Q`.

Moreover, since a < b, adding a to both sides gives 2a < a + b, and adding b to both sides gives a + b < 2b; dividing by 2 yields

`a < (a+b)/2 < b`.

This proves there exists at least one rational number between a and b.
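As an aside (not part of the original answer), the same argument can be checked in a proof assistant; here is a sketch in Lean 4 with Mathlib, where the `linarith` tactic discharges both inequalities:

import Mathlib

-- The midpoint (a + b) / 2 witnesses a rational strictly between a and b.
example (a b : ℚ) (h : a < b) : ∃ c : ℚ, a < c ∧ c < b :=
  ⟨(a + b) / 2, by linarith, by linarith⟩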
Wolfram Demonstrations Project
Bifurcation Analysis of a Cubic Memristor Model
The memristor, named as a contraction for "memory resistor", is supposed to be the fourth fundamental electronic element, in addition to the well-known resistor, inductor, and capacitor. Its
theoretical existence was postulated in 1971 by L. O. Chua [1], but its physical realization was announced only recently in a paper published in the May 2008 issue of
by a research team from Hewlett–Packard [2]. It has attracted worldwide attention due to its potential applications in the construction of electronic circuits, especially for newer-generation
In [3] we present a bifurcation analysis of a memristor oscillator mathematical model, given by a cubic system of ordinary differential equations, depending on four parameters.
We show that, depending on the parameter values, the system may present the coexistence of both infinitely many stable periodic orbits and stable equilibrium points.
The periodic orbits arise from the change in the local stability of equilibrium points on a line of equilibria for a fixed set of parameter values. This is an interesting type of Hopf bifurcation,
which occurs without varying parameters.
In these graphics we show the birth of periodic orbits by varying the initial conditions and parameters in the model.
[1] L. O. Chua, "Memristor—The Missing Circuit Element," IEEE Trans. Circuit Theory, 18(5), 1971 pp. 507–519.

[2] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The Missing Memristor Found," Nature, 453, 2008 pp. 80–83.

[3] M. Messias, C. Néspoli, and V. Botta, "Hopf Bifurcation from Lines of Equilibria without Parameters in Memristor Oscillators," International Journal of Bifurcation and Chaos (to appear).
The continuation passing transform and the Yoneda embedding
They’re the same thing! Why doesn’t anyone ever say so?
Assume A and B are types; the continuation passing transform takes a function (here I’m using C++ notation)
B f(A a);
and produces a function
template <typename X>
X CPT_f(X (*k)(B), A a) {
    return k(f(a));
}
where X is any type. In CPT_f, instead of returning the value f(a) directly, it reads in a continuation function k and “passes” the result to it. Many compilers use this transform to optimize the
memory usage of recursive functions; continuations are also used for exception handling, backtracking, coroutines, and even show up in English.
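As a small usage sketch (mine, not from the original post), with square standing in for f:

#include <iostream>

int square(int a) { return a * a; }

// The continuized version of square.
template <typename X>
X CPT_square(X (*k)(int), int a) { return k(square(a)); }

int print_it(int b) { std::cout << b << "\n"; return b; }
int identity(int b) { return b; }

int main() {
    CPT_square(print_it, 7);               // passes 49 to the continuation
    int direct = CPT_square(identity, 7);  // identity continuation recovers 49
    std::cout << direct << "\n";
    return 0;
}

Feeding the identity continuation is exactly the inverse direction of the natural isomorphism discussed below: it recovers the direct form of the computation from the continuized one.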
The Yoneda embedding takes a category $C^{\rm op}$ and produces a category $\mbox{HOM}(C, \mbox{Set})$:
$\begin{array}{ccccccc}\mbox{CPT}: & C^{\rm op} & \to & \mbox{HOM}(C, \mbox{Set}) \\ & A & \mapsto & \mbox{hom}(A, -) \\ & f:B\leftarrow A & \mapsto & \mbox{CPT}(f): & \mbox{hom}(B, -) & \to & \mbox{hom}(A, -) \\ & & & & k & \mapsto & k \circ f \end{array}$
We get the transformation above by uncurrying to get $\mbox{CPT}(f):\mbox{hom}(B, -) \times A \to -.$
In Java, a (cartesian closed) category $C$ is an interface C with a bunch of internal interfaces and methods mapping between them. A functor $F:C \to {\rm Set}$ is written
class F implements C.
Then each internal interface C.A gets instantiated as a set F.A of values and each method C.f() becomes instantiated as a function F.f() between the sets.
The continuation passing transform can be seen as a parameterized functor ${\rm CPT\langle X\rangle}:C \to {\rm Set}$. We’d write
class CPT<X> implements C.
Then each internal interface C.A gets instantiated as a set CPT<X>.A of methods mapping from C.A to X—i.e. continuations that accept an input of type C.A—and each method C.f maps to the continuized
function CPT<X>.f described above.
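For concreteness, here is one way that picture can be rendered in modern Java (a minimal illustrative sketch; the names CPTSketch, C and cpt, and the use of java.util.function.Function for continuations, are my own, not from the original post):

import java.util.function.Function;

public class CPTSketch {
    // A toy "category" with a single arrow f : A -> B.
    interface C<A, B> {
        B f(A a);
    }

    // CPT(f) sends a continuation k : B -> X to k . f : A -> X (precomposition).
    static <A, B, X> Function<Function<B, X>, Function<A, X>> cpt(C<A, B> c) {
        return k -> (a -> k.apply(c.f(a)));
    }

    public static void main(String[] args) {
        C<Integer, String> show = n -> "n=" + n;   // f : Integer -> String
        Function<Integer, Integer> g =
            CPTSketch.<Integer, String, Integer>cpt(show).apply(String::length);
        System.out.println(g.apply(12345));        // prints 7: length("n=12345")
    }
}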
Then the Yoneda lemma says that for every model of $C$—that is, for every class F implementing the interface C—there’s a natural isomorphism between the set $F(A)$ and the set of natural
transformations ${\rm hom}({\rm hom}(A, -), F).$
A natural transformation $\alpha$ between $F:C \to {\rm Set}$ and ${\rm CPT}:C \to {\rm Set}$ is a way to cast the class F to the class CPT<X> such that for any method of C, you can either
• invoke its implementation directly (as a method of F) and then continuize the result (using the type cast), or
• continuize first (using the type cast) and then invoke the continuized function (as a method of CPT<X>) on the result
and you’ll get the same answer. Because it’s a natural isomorphism, the cast has an inverse.
The power of the Yoneda lemma is taking a continuized form (which apparently turns up in lots of places) and replacing it with the direct form. The trick to using it is recognizing a continuation
when you see one.
7 Responses
1. Continuation passing: transform and roll out!
2. The closest explanation of the Yoneda lemma to what you wrote that I’ve seen is sigfpe’s (http://sigfpe.blogspot.com/2006/11/yoneda-lemma.html) but it still doesn’t use the word “continuation”.
3. ccshan: Yes, I saw that page. sigfpe has a lot of good stuff on his site. It surprises me that he wouldn’t mention continuation passing! Leinster’s paper on the Yoneda lemma is very nice if you
already understand some category theory; it helped me understand how category theorists actually use the lemma, as I said in the last paragraph of the post.
Unfortunately, I’ve never had the opportunity to learn Haskell; I think if I were going to learn a language based on category-theoretic ideas, I’d go with O’Caml, because it runs at speeds
comparable to C.
4. Nice article, thank you. Regarding your last comment, Haskell runs at speeds comparable to both OCaml and C, so don’t dismiss it on the basis of performance. Of course it famously applies some
category theoretic ideas to regular programming tasks, so you might find it a fruitful playground.
5. I think you’re saying the same thing as me here except that I don’t mention that the type (a->b)->b corresponds to continuations because at that time I wasn’t yet thinking about continuations.
In fact, you’ve given me some deeper insight into continuations. The idea is this. In the context of continuations, the function ‘return’ maps from the type type a to the type (a->b)->b which can
also be written Cont b a in Haskell. This (kind of) embeds a in Cont b a. That makes sense, Cont b a is like the value a but computed with a different style of programming, continuation passing
style. But it always bothered me that it wasn’t an isomorphism because, as I say, it should just be the same thing coded up in a different way. But the missing piece is this: it *is* an
isomorphism when you consider ‘return’ not as a function a -> ((a->b)->b) for some fixed b, but as a function polymorphic in b. In Haskell that’s written a -> (forall b . (a -> b) -> b) and
that’s an isomorphism.
So thanks for helping me make the link between two different aspects of Haskell programming that I hadn’t connected together myself.
6. You’re welcome! This is one reason I really like category theory: it lets one make precise analogies between different areas of math, or look at the same structure from many points of view.
7. [...] This embedding is better known among computer scientists as the continuation passing style transformation. [...] | {"url":"http://reperiendi.wordpress.com/2007/12/19/the-continuation-passing-transform-and-the-yoneda-embedding/","timestamp":"2014-04-17T22:13:06Z","content_type":null,"content_length":"76542","record_id":"<urn:uuid:f61f50bd-1cd5-4ef2-83c9-73ce2e029b80>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Proj] Re: "Double ellipsoid" case?
strebe strebe at aol.com
Wed Dec 3 15:59:13 EST 2008
>My physical geodesy argument is that since WGS84 is a best fit to the geoid
>and since the Google Sphere is a terrible fit to the geoid no matter how you
>orientate it, using the Google Sphere obviates reference to the WGS84 datum...
To which I reply, the original information from the WGS84 datum is all recoverable from the plane coordinates. Therefore, whether they used the Google Sphere, the WGS84 ellipsoid, or a potato or an icosahedron, so long as the WGS84 coordinates are recoverable, nothing got obviated.
>My geometric geodesy argument would be that geodesic direct and inverse
>computations would get different results on the WGS84 ellipsoid and the GS
>no matter how you map the WGS84 graticule onto the GS graticule. This is an
>indictment of the use of the GS, since WGS84 computations better represent
>physical reality. No one has suggested geodesic computations on the GS in
>this thread, but the implicit association of the GS with WGS84 by Google
>regrettably risks that abuse.
Geodesic computations should be performed in spherical coordinates, in which case, again, how those coordinates got projected does not matter so long as the spherical coordinates (on WGS84 ellipsoid) are recoverable. Google does not talk about the "Google Sphere" as far as I can tell; there is no such terminology turned up by a Web search. The specification is clear: geodetic data are given in WGS84 coordinates. I do not see how the projection method encourages anyone to make geodesic computations against the sphere.
Let me defend that. In order for the Google Map projection to encourage someone to choose the Google Sphere for geodesic calculations, the person would:
(a) Have to know the projection method; otherwise they would not even be cognizant of the Google Sphere, since it is part of the projection, not the datum from whence coordinates were supplied;
(b) Realize there is even a Google Sphere to be cognizant of, since it is never explicitly mentioned or promoted;
(c) Understand there is a difference between ellipsoidal calculations and spherical calculations for distance, and need the precision of the former;
(d) And still choose the Google Sphere for these geodesic calculations.
I have to suppose the intersection of (a), (b), (c), and (d) is... nobody.
>My appeals to conventional practice are even more compelling in my view, though
>you may disagree.
No; I agree. This is your strongest argument.
>For example, ED50 geodesic direct and inverse computations and ED50 map
>projections are computed _only_ on the International ellipsoid. Why?
>Computation on the ellipsoid of a datum’s least-squares adjustment provides (by
>definition) the best conformation with physical reality and the convention of
>doing so protects us from ignorant mistakes. We all (I believe) follow this
>practice everywhere in the world (except in our use of Google Maps). The risk
>to conventional practice introduced by Google is not progress.
I agree it is not progress in geodesy. Yet neither do I see it as any practical problem or as backsliding in geodesy. Those who need to perform computations will click a button, and they will receive their computations. I particularly do not agree that the Google Map projection will encourage mistakes. You insist people will interpret the Google Sphere as the datum. I do not believe that. It's just part of the projection calculus. It is a convenient short-cut that some engineers made in order to achieve what they wanted cheaply within acceptable bounds of error. What they wanted was rapid calculations retaining near-conformality so that they could use the same projection mathematics everywhere without introducing visible distortion in large-scale maps. They succeeded. That is the enterprise of engineering.
And, by the way, they knew what they were doing. They knew they were using not-quite-conformal mathematics and they knew they were doing something unorthodox. Those were acceptable trade-offs to them. The slander heaped upon them in an earlier e-mail was simply slander. No, they are not geodesists, photogrammetrists, or cartographers, but they never represented themselves as any of those, and the problems they were solving were not problems of geodesy or photogrammetry anyway. They were problems of mathematical cartography, and their solution achieved its goals.
>So, one thing is certain (and has already been stated in this thread), and that
>is that the Google Maps projection cannot be called the Mercator.
Agreed, certainly with respect to large-scale mapping. For small-scale, well... nobody has coordinate data in spherical datums anyway, so what Google is doing is the de facto standard, even though it is not strictly Mercator.
>Mikael quantified the maximum angular distortion as 0.2 degrees. At this point
>I’ll have to agree with Cliff that this is retrograde cartography.
At which point I will disagree. There is no use in this context for strict conformality, which is anyway largely a geodetic concern rather than a cartographic concern. Google is not solving geodetic problems and is not causing geodetic problems; therefore it makes no sense to take them to task for violating geodetic practices. There is no "retrograde cartography" going on here, just a rather mundane and obvious variant of a well-known map projection. Cartography has always been about compromises. Did Google pick optimal compromises in their projection choice? It sure seems like it.
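For context, the shortcut under discussion, treating WGS84 geodetic latitude and longitude as if they were spherical coordinates and applying the spherical Mercator equations, looks roughly like this (an illustrative sketch, not code from this thread):

public class WebMercatorSketch {
    static final double R = 6378137.0; // WGS84 semi-major axis in metres

    // lonDeg/latDeg are WGS84 geodetic coordinates used as if they were spherical.
    static double[] forward(double lonDeg, double latDeg) {
        double lon = Math.toRadians(lonDeg);
        double lat = Math.toRadians(latDeg);
        double x = R * lon;
        double y = R * Math.log(Math.tan(Math.PI / 4.0 + lat / 2.0));
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        double[] xy = forward(-122.0, 37.0);
        System.out.printf("x = %.1f m, y = %.1f m%n", xy[0], xy[1]);
    }
}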
— daan Strebe
On Dec 2, 2008, at 7:31:14 AM, "Noel Zinn" <ndzinn at comcast.net> wrote:
From: "Noel Zinn" <ndzinn at comcast.net>
Subject: [Proj] Re: "Double ellipsoid" case?
Date: December 2, 2008 7:31:14 AM PST
To: "'PROJ.4 and general Projections Discussions'" <proj at lists.maptools.org>
daan, Mikael, Cliff, and others,
Thanks for your persistence. Indeed, clarifying our terminology will help. For me the Google Sphere (GS) is simply an ellipsoid with an eccentricity squared of zero and semi-major axis (a) equal to the semi-major axis of the WGS84 ellipsoid. In this case the semi-minor axis (b) equals the semi-major axis.
Now, the Google Sphere can have both geodetic and cartographic applications. That distinction is admittedly confusing in this thread. I react viscerally to the introduction of the GS into the WGS84 datum as a geodetic abomination. You and Mikael see the GS as merely an intermediate stage in a total projection method (which is non-conformal and, therefore, not the Mercator projection, and about which you are both defensive). Cliff sees the GS as both bad geodesy and as bad cartography. Frankly, I agree with Cliff. I also accept Richard’s admonition to keep it civil.
So, before moving on to the cartography, I’ll briefly recap my geodetic objections. My physical geodesy argument is that since WGS84 is a best fit to the geoid and since the Google Sphere is a terrible fit to the geoid no matter how you orientate it, using the Google Sphere obviates reference to the WGS84 datum (in my opinion).
My geometric geodesy argument would be that geodesic direct and inverse computations would get different results on the WGS84 ellipsoid and the GS no matter how you map the WGS84 graticule onto the GS graticule. This is an indictment of the use of the GS, since WGS84 computations better represent physical reality. No one has suggested geodesic computations on the GS in this thread, but the implicit association of the GS with WGS84 by Google regrettably risks that abuse.
My appeals to conventional practice are even more compelling in my view, though you may disagree. For example, ED50 geodesic direct and inverse computations and ED50 map projections are computed _only_ on the International ellipsoid. Why? Computation on the ellipsoid of a datum’s least-squares adjustment provides (by definition) the best conformation with physical reality and the convention of doing so protects us from ignorant mistakes. We all (I believe) follow this practice everywhere in the world (except in our use of Google Maps). The risk to conventional practice introduced by Google is not progress.
Now, on to the cartography. Having reread Mikael’s postings I appreciate (and accept) his and daan’s perspective that this is a cartographic – and not a geodetic – issue. The WGS84 datum can underlie this (unfortunate) two-stage projection used by Google Maps (Mikael’s alternative A). Regarding Mikael’s alternative B, I’ll have to study the EPSG Mercator methods first before commenting. And regarding daan’s interpretation that only one "ellipsoid" involved on the projection side, and that is the "Google Sphere", I don’t see it yet. I stated previously that the spherical Mercator projection is intuitively conformal. The ellipsoidal Mercator is conformal over a range of eccentricity squared values and there is no reason that that range shouldn’t include zero (the sphere) while maintaining conformality. But first there is the non-conformal mapping from the WGS84 ellipsoid to the Google Sphere before the Mercator equations are applied. So, one thing is certain (and has already been stated in this thread), and that is that the Google Maps projection cannot be called the Mercator. It’s something else. Mikael quantified the maximum angular distortion as 0.2 degrees. At this point I’ll have to agree with Cliff that this is retrograde cartography. Quoting Cliff, “using equivalent spheres in 19th and early 20th century cartography was an attempt to simplify ellipsoidal computations” and quoting Mikael, “the situation is similar to the French truncated Lambert Conformal Conic, which is not exactly conformal, and is a different projection than the true Lambert Conformal Conic”. My question is, Why are we doing this in the 21st century?!? This is retrograde cartography even if (no, especially if) the explanation is computational efficiency (in light of the “clouds” of computers maintained by Google). Google had an opportunity to expose a wide audience to good geodetic and cartographic practice, but instead Google is exposing that audience to malpractice (in my opinion). Yes, this is a judgment. And daan will respond that it works for their purposes. I’m not so sure. Google Maps is being used for lots of creative purposes encouraged by Google. I believe that some user will stumble due to the poor choices made by Google. Perhaps that can be documented in another thread.
Regards and thanks for the great discussion,
Noel Zinn
Proj mailing list
Proj at lists.maptools.org
| {"url":"http://lists.maptools.org/pipermail/proj/2008-December/004103.html","timestamp":"2014-04-16T11:07:30Z","content_type":null,"content_length":"14619","record_id":"<urn:uuid:f9149254-62fa-432d-9a3c-679d88753c9d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating a Surface Normal
From OpenGL.org
A surface normal for a triangle can be calculated by taking the vector cross product of two edges of that triangle. The order of the vertices used in the calculation will affect the direction of the
normal (in or out of the face w.r.t. winding).
So for a triangle p1, p2, p3, if the vector U = p2 - p1 and the vector V = p3 - p1 then the normal N = U X V and can be calculated by:
Nx = UyVz - UzVy
Ny = UzVx - UxVz
Nz = UxVy - UyVx
Given that a vector is a structure composed of three floating point numbers and a Triangle is a structure composed of three Vectors, based on the above definitions:
Begin Function CalculateSurfaceNormal (Input Triangle) Returns Vector
Set Vector U to (Triangle.p2 minus Triangle.p1)
Set Vector V to (Triangle.p3 minus Triangle.p1)
Set Normal.x to (multiply U.y by V.z) minus (multiply U.z by V.y)
Set Normal.y to (multiply U.z by V.x) minus (multiply U.x by V.z)
Set Normal.z to (multiply U.x by V.y) minus (multiply U.y by V.x)
Returning Normal
End Function
Newell's Method
You can also use Newell's method for an arbitrary 3D polygon.
Begin Function CalculateSurfaceNormal (Input Polygon) Returns Vector
Set Vertex Normal to (0, 0, 0)
Begin Cycle for Index in [0, Polygon.vertexNumber)
Set Vertex Current to Polygon.verts[Index]
Set Vertex Next to Polygon.verts[(Index plus 1) mod Polygon.vertexNumber]
Set Normal.x to Sum of Normal.x and (multiply (Current.y minus Next.y) by (Current.z plus Next.z))
Set Normal.y to Sum of Normal.y and (multiply (Current.z minus Next.z) by (Current.x plus Next.x))
Set Normal.z to Sum of Normal.z and (multiply (Current.x minus Next.x) by (Current.y plus Next.y))
End Cycle
Returning Normalize(Normal)
End Function
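For comparison, here is the same Newell loop in Java (an illustrative sketch; vertices are assumed to be {x, y, z} double arrays and the polygon non-degenerate):

// Newell's method for the normal of an arbitrary 3D polygon.
static double[] newellNormal(double[][] verts) {
    double nx = 0, ny = 0, nz = 0;
    int n = verts.length;
    for (int i = 0; i < n; i++) {
        double[] cur = verts[i];
        double[] nxt = verts[(i + 1) % n];
        nx += (cur[1] - nxt[1]) * (cur[2] + nxt[2]);
        ny += (cur[2] - nxt[2]) * (cur[0] + nxt[0]);
        nz += (cur[0] - nxt[0]) * (cur[1] + nxt[1]);
    }
    double len = Math.sqrt(nx * nx + ny * ny + nz * nz); // normalize the result
    return new double[] { nx / len, ny / len, nz / len };
}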
Perl version for a triangle:
sub CalculateSurfaceNormal {
my($p1,$p2,$p3)=@_; my($x,$y,$z)=(0,1,2);
$U->[$x]=$p2->[$x] - $p1->[$x];
$U->[$y]=$p2->[$y] - $p1->[$y];
$U->[$z]=$p2->[$z] - $p1->[$z];
$V->[$x]=$p3->[$x] - $p1->[$x];
$V->[$y]=$p3->[$y] - $p1->[$y];
$V->[$z]=$p3->[$z] - $p1->[$z];
$N->[$x]=$U->[$y]*$V->[$z] - $U->[$z]*$V->[$y];
$N->[$y]=$U->[$z]*$V->[$x] - $U->[$x]*$V->[$z];
$N->[$z]=$U->[$x]*$V->[$y] - $U->[$y]*$V->[$x];
return ($N->[$x],$N->[$y],$N->[$z]);
# example usage:-
print join("\t", &CalculateSurfaceNormal([qw( 1 0 0 )],
[qw( 0 1 0 )],
[qw( 0 0 1 )] )); | {"url":"http://www.opengl.org/wiki/Calculating_a_Surface_Normal","timestamp":"2014-04-20T06:27:11Z","content_type":null,"content_length":"34863","record_id":"<urn:uuid:1a63b893-284f-4548-a99e-92ab0da9b7e5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help understanding equation- recurrence relation
Hi all,
I have a simple sum, i.e. x = a*b + c = 20*2 + 2 = 42.
If I want to continue the process and repeat the sum using the number generated previously
as in 42*2+2=86 and repeat again using 86 as the start.
The multiplying and addition figures will always be the same and only the starting number will change.
This, I have found out, is called a recurrence relation, but I can't seem to get my head around where to start to actually write this as a formula.
A friend suggested the following, but I still don't get it (blush):
S_n = S_0 + (2^n - 1)*(S_1 - S_0) ==> S_n = 20 + (2^n - 1)*22.
Can anyone break it down, so an idiot like me can understand it... I mean really simple please.
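A quick check of that closed form against direct iteration (an editorial sketch, assuming the recurrence x_{n+1} = 2*x_n + 2 with x_0 = 20, not part of the original post):

public class RecurrenceCheck {
    public static void main(String[] args) {
        long s0 = 20, a = 2, c = 2;   // x_{n+1} = a*x_n + c
        long s1 = a * s0 + c;         // 42
        long x = s0;
        for (int n = 0; n <= 5; n++) {
            long closed = s0 + ((1L << n) - 1) * (s1 - s0); // S_0 + (2^n - 1)*(S_1 - S_0)
            System.out.println("n=" + n + "  iterated=" + x + "  closed=" + closed);
            x = a * x + c;            // next iterate: 20, 42, 86, 174, ...
        }
    }
}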
In my world, if I start with 20 and do the first few iterations, I get the following | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=12354","timestamp":"2014-04-19T20:02:54Z","content_type":null,"content_length":"17098","record_id":"<urn:uuid:2c8e5e2e-53cf-45e4-89d7-32a72fcb2445>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert newton to pound - Conversion of Measurement Units
›› Convert newton to pound
Did you mean to convert newton to pound-force
›› More information from the unit converter
How many newton in 1 pound? The answer is 4.44822162825.
We assume you are converting between newton and pound.
The SI base unit for mass is the kilogram.
1 kilogram is equal to 9.80665002864 newton, or 2.20462262185 pound.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between newtons and pounds.
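As a worked illustration of the factors quoted above (an editorial sketch, not part of the original page):

public class NewtonPound {
    public static void main(String[] args) {
        double newtonsPerKg = 9.80665002864;  // from the table above
        double poundsPerKg  = 2.20462262185;  // from the table above
        double poundsPerNewton = poundsPerKg / newtonsPerKg;            // ~0.22481
        System.out.println("1 N  = " + poundsPerNewton + " lb");
        System.out.println("1 lb = " + (1.0 / poundsPerNewton) + " N"); // ~4.44822
    }
}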
›› Definition: Newton
In physics, the newton (symbol: N) is the SI unit of force, named after Sir Isaac Newton in recognition of his work on classical mechanics. It was first used around 1904, but not until 1948 was it
officially adopted by the General Conference on Weights and Measures (CGPM) as the name for the mks unit of force.
›› Definition: Pound
The pound (abbreviation: lb) is a unit of mass or weight in a number of different systems, including English units, Imperial units, and United States customary units. Its size can vary from system to
system. The most commonly used pound today is the international avoirdupois pound. The international avoirdupois pound is equal to exactly 453.59237 grams. The definition of the international pound
was agreed by the United States and countries of the Commonwealth of Nations in 1958. In the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act 1963.
An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7,000 grains.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
| {"url":"http://www.convertunits.com/from/newton/to/pound","timestamp":"2014-04-18T18:12:11Z","content_type":null,"content_length":"21555","record_id":"<urn:uuid:be4b684f-265f-47bf-a1ef-6aaf648fe581>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compute x to x power in the most optimal way possible
Possible Duplicate:
The most efficient way to implement an integer based power function pow(int, int)
I know this question is pretty easy, but my requirement is that I want to compute x to the power x, where x is a very large number, in the most optimized way possible. I am not a math geek and therefore need some help to figure out the best way possible.
In Java, we can use BigInteger, but how do we optimize the code? Any specific approach for the optimization?
Also, with a large value of x, will using recursion make the code slow and prone to stack overflow errors?
Eg: 457474575 raise to power 457474575
bestest ain't a word – Atreys Jun 9 '11 at 14:31
@learner, bestest? ... seriously? – mre Jun 9 '11 at 14:31
You can calculate all the powers of two up to x and multiply all the powers of two which add up to x. Even if you use recursion you wouldn't get more than log2(log2(x)) levels of recursion. – Peter Lawrey Jun 9 '11 at 14:32
@learner, There will be 3,961,897,695 digits which you cannot have in a String. – Peter Lawrey Jun 9 '11 at 14:34
Is x always an integer, or can it be a floating point number too? Do you need full precision on the answer, or only the order of magnitude? – Charles Brunet Jun 9 '11 at 14:37
marked as duplicate by Sebastian Paaske Tørholm, BlueRaja - Danny Pflughoeft, David Thornley, BЈовић, George Stocker♦ Jun 9 '11 at 14:46
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
3 Answers
You do realize that the answer to your example is going to be a very very large number, even for a BigInteger? It will have 3961897696 digits!
The best way to work with really large numbers, if you don't need exact precision, is to work with their logarithms instead. To take x to the x power, take the log of x and multiply it by x. If you need to convert it back, take the exponential, exp(x), except in this case it will almost certainly overflow.
+1: You can't have a String in Java that long. ;) – Peter Lawrey Jun 9 '11 at 14:35
+1. I wrote basically the same answer (and deleted it again) – sellibitze Jun 9 '11 at 14:36
pow(x,x) -- giving new meaning to the term arbitrary precision arithmetic! – Ben Voigt Jun 9 '11 at 14:39
How do you guys compute the number of digits so precisely? – Martijn Courteaux Jun 9 '11 at 15:01
@Martijn Courteaux, the log10 representation of an integer, rounded up, is the number of digits. E.g. log10(1234) is 3.091, rounded up makes 4. – Mark Ransom Jun 9 '11 at 15:42
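Spelled out (an editorial sketch): the digit count of x^x is floor(x*log10(x)) + 1, which can be computed without evaluating the power itself:

long x = 457474575L;
long digits = (long) Math.floor(x * Math.log10(x)) + 1; // floor(x*log10(x)) + 1
System.out.println(digits); // about 3.96 billion, matching the counts quoted above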
This is one of the simplest optimized approaches:
x^x = x^(x/2) * x^(x/2) * remainder
etc...
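The square-and-multiply idea sketched above, written out with java.math.BigInteger (an editorial sketch; note that BigInteger also has a built-in pow for exponents up to Integer.MAX_VALUE):

import java.math.BigInteger;

// Exponentiation by squaring: O(log e) multiplications instead of e - 1.
static BigInteger power(BigInteger b, long e) {
    BigInteger result = BigInteger.ONE;
    while (e > 0) {
        if ((e & 1) == 1) result = result.multiply(b); // the "remainder" factor
        b = b.multiply(b);                             // square
        e >>= 1;
    }
    return result;
}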
((BigInteger) 457474575).pow(457474575);
+1: You cannot cast to a BigInteger and you can only do powers up to Integer.MAX_VALUE, but I like your thinking. – Peter Lawrey Jun 9 '11 at 14:45
| {"url":"http://stackoverflow.com/questions/6294434/compute-x-to-x-power-in-the-most-optimal-way-possible","timestamp":"2014-04-19T17:27:19Z","content_type":null,"content_length":"79855","record_id":"<urn:uuid:8390c88b-e1e8-439f-ba76-cb99a433cbb2>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculator performs basic arithmetic and
I have a working calculator, a token file, and a balanced-parentheses file.
this is my calc.cc file
#include <cstdlib>
#include <iostream>
#include <math.h>
using namespace std;
int main()
{
    double num;
    double num2;
    char choice;

    for (;;){
        do {
            cout<<"Please choose an option by entering the number, press q to quit\n";
            cout<<"1 - Addition\n";
            cout<<"2 - Subtraction\n";
            cout<<"3 - Division\n";
            cout<<"4 - Multiplication\n";
            cout<<"5 - Raise to power\n";
            cout<<"6 - Unary Minus\n";
            cin>>choice;                                  // read the menu choice
        } while ( (choice < '1' || choice > '6') && choice != 'q');
        if (choice == 'q') break;
        switch (choice) {
        case '1':
            cout<<"Please enter a number\n";
            cin>>num;
            cout<<"Another number to be added\n";
            cin>>num2;
            cout<<"The answer is:";
            cout<<num + num2 <<"\n";
            break;
        case '2':
            cout<<"Please enter a number\n";
            cin>>num;
            cout<<"Another number to be subtracted\n";
            cin>>num2;
            cout<<"The answer is:";
            cout<<num - num2 <<"\n";
            break;
        case '3':
            cout<<"Please enter a number\n";
            cin>>num;
            cout<<"Another one to be divided\n";
            cin>>num2;
            cout<<"The answer is:";
            cout<<num / num2 <<"\n";
            break;
        case '4':
            cout<<"Please enter a number\n";
            cin>>num;
            cout<<"Another one to be multiplied\n";
            cin>>num2;
            cout<<"The answer is:";
            cout<<num * num2 <<"\n";
            break;
        case '5':
            cout<<"Please enter a number\n";
            cin>>num;
            cout<<"Another number to be its power\n";
            cin>>num2;
            cout<<"The answer is:";
            cout<<pow(num, num2) <<"\n";
            break;
        case '6':
            cout<<"Please enter number to be negated\n";
            cin>>num;
            cout<<"The answer is:";
            cout<<-num <<"\n";
            break;
        default:
            cout<<"That is not an option\n";
        }
    }
    return 0;
}
this is my token.cc file
//#include <cmath>
//#include "calc.h"
//#include <stack>
#include <iostream>
#include <sstream>
#include <string>
using namespace std;
int main(){
    string s;
    string sa[20];

    cout<<"Enter Text: " <<endl;
    getline(cin, s);                        // read the whole input line
    cout<<"Entered Text is: " << s <<endl;

    istringstream iss(s);
    int count = 0;
    string sub;
    while(iss >> sub && count < 20){        // breaking input string into tokens
        cout <<"token " << count << ": " <<sub <<endl;
        sa[count]= sub;                     // putting into array
        count = count + 1;
    }
    cout<<"Number of tokens "<<count << endl<<endl;

    for(int i = 0; i<4 && i<count; i++){    // print the first few tokens
        cout<< i + 1 <<" Element of user input is: "<<sa[i]<<endl;
    }
    return 0;
}
this is my balanced parentheses file.
#include <iostream>
#include <stack>
#include <string>
using namespace std;
int main(void) {
    string s;
    int i;
    stack<char> myStack;

    cout << "enter parentheses: ";
    cin >> s;

    for (i=0;i<s.length();i++) {
        if (s[i] == '(' || s[i] == '[' || s[i] == '{')
            myStack.push(s[i]);                 // remember the opener
        else if (s[i] == ')' || s[i] == ']' || s[i] == '}') {
            if (myStack.size() == 0) {
                cout << "Error: ) with no matching (\n";
                return 1;
            } else
                myStack.pop();                  // matched one opener
        } else
            cout << "Invalid character [" << s[i] << "]\n";
    }
    // note: this only counts nesting depth; it does not check that the
    // closing bracket's type matches the opener on top of the stack
    if (myStack.size() == 0)
        cout << "Parentheses balanced.\n";
    else
        cout << "Error: ( with no matching )\n";
    return 0;
}
My token.cc file is supposed to identify the type of each item, which I am unsure how to do...???
My question is how to combine these programs to make one .cc file, as well as what to put in the .h file??
my calc program is suppose to do the following :
arithmetic operations : +,-,*,/ and ^
Unary minus
Storing values in variables using assignment operator =
using variable in calculations
You need to have the input as a string. Create a function that scans strings for numbers(0-9) and saves the numbers in different dynamically assigned variables. Create the same concept again, but now
only with mathematical operators. With these simple functions you can draw out numbers from a string and then determine with the mathematical operator function what properties they will have. Then
just convert the string into a float or a numeric type.
But where does that function get created?
You will have to write it yourself, in your code.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/82864/","timestamp":"2014-04-20T23:36:08Z","content_type":null,"content_length":"13928","record_id":"<urn:uuid:c5cd84cd-4059-47f7-838c-9b33575bd200>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expected values and Variances
Question 1: (given as an equation image in the original post; not preserved in this text)
Question 2: Let Y = (equation image; not preserved in this text)
Two very similar problems, but I'm lost. The rest of the homework is spelled out very carefully with hints, and these two are just on their own. I don't even know where to begin.
First look here : Chi-square distribution - Wikipedia, the free encyclopedia
Your normal rv's are all subtracted by their mean, so $X_i-10$ and $Y_i-15$ are actually normal distributions with mean 0.
In order to make them standard normal distributions, recall that $\frac{X_i-\mu}{\sigma}$ follows a N(0,1).
Then look here : F-distribution - Wikipedia, the free encyclopedia
which will give you the results you're looking for.
Thanks. That is just what I needed.
| {"url":"http://mathhelpforum.com/advanced-statistics/125684-expected-values-variances.html","timestamp":"2014-04-18T23:19:36Z","content_type":null,"content_length":"36259","record_id":"<urn:uuid:dd4b7082-897a-46d4-bf66-2217fe10afba>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
constraint satisfaction
<application> The process of assigning values to variables while meeting certain requirements or "constraints". For example, in graph colouring, a node is a variable, the colour assigned to it is its
value and a link between two nodes represents the constraint that those two nodes must not be assigned the same colour. In scheduling, constraints apply to such variables as the starting and ending
times for tasks.
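As a concrete sketch (editorial, not part of the original dictionary entry), a minimal backtracking solver for the graph-colouring constraint satisfaction problem described above might look like this in Java:

// Assign each node a colour in {0..colours-1} such that no two linked
// nodes share a colour; adj[i][j] == true means nodes i and j are linked.
static boolean colour(int[] assign, boolean[][] adj, int node, int colours) {
    if (node == assign.length) return true;          // every variable assigned
    for (int c = 0; c < colours; c++) {
        boolean consistent = true;
        for (int j = 0; j < node; j++)               // check constraints against
            if (adj[node][j] && assign[j] == c)      // earlier assignments
                consistent = false;
        if (consistent) {
            assign[node] = c;
            if (colour(assign, adj, node + 1, colours)) return true;
        }
    }
    return false;                                    // dead end: backtrack
}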
The Simplex method is one well known technique for solving numerical constraints.
The search difficulty of constraint satisfaction problems can be determined on average from knowledge of easily computed structural properties of the problems. In fact, hard instances of NP-complete
problems are concentrated near an abrupt transition between under- and over-constrained problems. This transition is analogous to phase transitions in physical systems and offers a way to estimate
the likely difficulty of a constraint problem before attempting to solve it with search.
Phase transitions in search (Tad Hogg, XEROX PARC).
Last updated: 1995-02-15
Copyright Denis Howe 1985 | {"url":"http://foldoc.org/constraint+satisfaction","timestamp":"2014-04-19T01:51:04Z","content_type":null,"content_length":"5969","record_id":"<urn:uuid:06ce03d0-ef0d-4b3c-838f-c6cd2bb65568>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Kind System for GHC
Note: As of June 2013, this page is rather out of date. There are lots of interesting ideas here, but various pieces of this page have already been implemented, and parts have been subsumed by other
Currently thinking about adding a more expressive Kind System to GHC. This page is currently a WIP ...
Haskell has a very powerful and expressive static type system. The problem is when you come to do any programming at the type level, that is typed by an unsatisfactorily inexpressive kind system. We
propose to make it just a little bit stronger.
Note: the aim here initially is to implement a small useful extension to Haskell, and not to retrofit the entirety of e.g. Omega into GHC yet ;))
Consider the simple example of lists parameterised by their lengths. There are many variations on this theme in the Haskell lore; this author's favourite follows thusly:
data Zero
data Succ n
data List :: * -> * -> * where
Nil :: List a Zero
Cons :: a -> List n a -> List a (Succ n)
There are many eugh moments in this code:
• We first declare two new types (Zero and Succ), and, thanks to the EmptyDataDecls extension, say that they are uninhabited by values (except bottom/error).
• Zero has kind *, and Succ has kind * -> *, so it is perfectly valid to create a haskell function with a signature:
foo :: Zero -> Succ Zero -> Bool
Really the programmer's intent is that Zero and Succ are in a disjoint namespace from *-kinded types, and thus this function signature should be disallowed.
• Succ has kind * -> *, whereas really the programmer wants to enforce that the argument to Succ will only ever consist of Zeros or Succs, i.e. the * -> * kind given to Succ is far too relaxed.
• We then declare a new data type to hold lists parameterised by their lengths.
• List has kind * -> * -> *, which really doesn't tell us anything other than its arity. An alternative definition could have been: data List item len where ... , although this adds only
pedagogical information, and nothing new that the compiler can statically check.
• The Cons constructor actually has a mistake in it. The second argument (List n a) has the names to the type parameters flipped. The compiler cannot detect this, and the error will become apparent
at use sites which are at a distance from this declaration site.
• Nothing stops a user creating the silly type List Int Int even though the intention is that the second argument is structured out of Succs and Zeros.
Basic proposal
We propose to add new base kinds other than * using a simple notation. The above example could become:
data kind Nat = Zero | Succ Nat
data List :: * -> Nat -> * where
Nil :: List a Zero
-- Cons :: a -> List n a -> List a (Succ n) -- Compiler detects error
Cons :: a -> List a n -> List a (Succ n)
• We first declare a new kind Nat, that is defined by two types, Zero and Succ. Although Zero and Succ are types, they do not classify any haskell values (including undefined/bottom). So the foo ::
Zero -> Succ Zero -> Bool type signature from earlier would be rejected by the compiler.
• We then declare the type List, but we now say the second argument to List has to be a type of kind Nat. With this extra information, the compiler can statically detect our erroneous Cons
declaration and would also reject silly types like List Int Int.
In the basic proposal the data kind declaration has no kind parameters. (See below for kind polymorphism.)
The idea would be to mirror existing Haskell data declarations. There is a clear analogy, as we are now creating new kinds consisting of type constructors, as opposed to new types consisting of data constructors.
To distinguish kind declarations from data declarations we can either add a new form of kind declaration:
kind Bool = True | False
However this steals kind as syntax with the usual problems of breaking existing programs.
Alternatively (preferably), we can add a modifier to data declarations to indicate that we mean a kind declaration:
data kind Bool = True | False
Interaction with normal functions
Functions cannot have arguments of a non * kind. So the following would be disallowed:
bad :: Zero -> Bool -- Zero has kind Nat
This follows straightforwardly from the kind of (->) in GHC already: ?? -> ? -> *, see IntermediateTypes
Type variables may however be inferred to have non-* kinds. E.g.
data NatRep :: Nat -> * where
ZeroRep :: NatRep Zero
SuccRep :: (NatRep n) -> NatRep (Succ n)
tReplicate :: forall n a . NatRep n -> a -> List a n
In the above, n would be inferred to have kind Nat and a would have kind *.
Interaction with (G)ADTs
(G)ADTs can already be annotated with a mixture of names with optional explicit kind signatures and just kind signatures. These kind signatures would now be able to refer to the newly declared, non-*
kinds. However the ultimate kind of a (G)ADT must still be *. i.e.
data Ok a (b :: Bool) :: Nat -> * where
OkC :: Ok Int True Zero
OkC' :: Ok String False (Succ Zero)
data Bad a :: Nat -> Nat where -- result kind is not *
In the above example, there is the question of what kind we should assign to a in Ok. Currently it would be inferred to be *. That inference engine would need to be improved to include inference of
other kinds.
GADT constructors must only accept arguments of kind * (as per the restrictions on (->) described above), but may also collect constraints for the kind inference system.
Interaction with Type Classes
Type classes are currently indexed by variables with a fixed kind. Type classes could now be indexed by variables with non-value kinds. E.g.
class LessThanOrEqual (n1 :: Nat) (n2 :: Nat) -- ok
instance LessThanOrEqual Zero Zero
instance LessThanOrEqual n m => LessThanOrEqual n (Succ m)
This example is ill-kinded though:
class Bad x -- Defaults to x::*
instance Bad Int -- OK
instance Bad Zero -- BAD: ill-kinded
By default declaration arguments are inferred to be of kind * if there is nothing in the class declaration (member functions or explicit kind signature) to change this. This seems sensible for
Interaction with Data/Type Synonym Families
Follows as per type classes
Kind inference
Kind inference figures out the kind of each type variable. There are often ambiguous cases:
data T a b = MkT (a b)
These are resolved by Haskell 98 with (a :: *->*) and (b :: *). We propose no change. But see kind polymorphism below.
Kind Namespace
Also see: Design/TypeNaming
Strictly, the new kinds that have been introduced using data kind syntax inhabit a new namespace. Mostly it is unambiguous when you refer to a type and when you refer to a kind. However there are
some corner cases, particularly in module import/export lists.
Option 1 : Collapse Type and Kind namespace
• Simple
• Follows behaviour of type classes, type functions and data type functions.
• Inconsistent. It would allow the user to create True and False as types, but not to be able to put them under kind Bool. (You'd need to name your kind a Bool' or Bool_k)
Option 2 : Fix ambiguities
• As more extensions are put into the language, it'll have to happen sooner or later
• Will involve creating a whole new namespace
• Several corner cases
Auto Promotion of Types to Kinds
For many simple data declarations it would be convenient to also have them at the type level. Assuming we resolve Design/TypeNaming and some ambiguity issues, we could support automatically deriving the data
kind based on the data.
There are some other issues to be wary of (care of Simon PJ):
data Foo = Foo Int
Automated lifting this would try and create a kind Foo with type constructor Foo. But we've just declared a type Foo in the data declaration.
• Automatic lifting of GADTs / existentials and parametric types is tricky until we have a story for them.
• Automatic lifting of some non-data types could be problematic (what types parameterise the kind Int or Double?)
• We have no plan to auto-lift term *functions* to become type functions. So it seems odd to auto-lift the type declarations which are, after all, the easy bit.
Syntactically however, there are some options for how to do this in cases when it is safe to do:
Option 0: Always promote [when safe]
E.g. writing
data Foo = Bar | Baz
will implicitly create a kind Foo and types Bar and Baz
Option 1: Steal the deriving syntax This has an advantage of allowing standalone deriving for those data types that are declared elsewhere but not with Kind equivalents
data Bool = True | False
deriving (Kind)
deriving instance (Kind Bool)
Option 2: Add an extra flag to the data keyword
data and kind Bool = True | False
This has the problems of verbosity and is hard to apply after the fact to an existing data type.
Polymorphic kinds
Also see PolymorphicKinds which this would build upon...
Data kinds could also be parameterised by kinds in the same way that data types can be parameterised by types. This will require polymorphic kinds, see below:
We need a syntax for sorts as well as kinds:
kind variable ::= k, ... etc
monokind ::= * | monokind -> monokind | k
polykind ::= forall k1.. kn. monokind
sort ::= ** | sort -> sort
• What to use for the sort that classifies *, *->* etc?
□ *2 (as in Omega; but *2 isn't a Haskell lexeme)
□ ** (using unary notation)
□ *s (Tristan)
□ kind (use a keyword)
• Do we have sort polymorphism? No!
• Do we have higher ranked kinds? No (for now)! Only a monokind on either side of an (->) in a kind. The things that have polykinds are (top-level) type constructors and type functions.
data kind MaybeK k = NothingK | JustK k
So here we have
MaybeK :: ** -> ** -- Sort of MaybeK
NothingK :: forall k::**. MaybeK k -- Kind of NothingK
JustK :: forall k::**. k -> MaybeK k -- Kind of JustK
It might also be nice to support GADK (Generalized Algebraic Data Kind) syntax for declaring kinds, ala:
data kind MaybeK :: ** -> ** where
NothingK :: MaybeK k
JustK :: k -> MaybeK k
Again, note that Maybe above is decorated with a sort signature.
data kind MaybeK k where
NothingK :: MaybeK k
JustK :: k -> MaybeK k
However no GADTs or existentials at the kind level (yet). TODO think about motivating examples.
Note: I don't think it's worth having existential kinds without kind-refinement as we don't have kind-classes, and so no user could ever make use of them. Kind refinement does allow existential kinds
to make sense however (or at least be usable). The question then is when does kind-refinement come into play - pattern matches. TODO generate some examples to motivate this.
Sort Signatures
GHC currently allows users to specify simple kind signatures. By allowing declaration of other kinds, and parameterisation of kinds, we will require kinds to have sorts. Initially we may want to push
everything up one layer, so our language of sorts is generated by the sort that classifies kinds **, or functions sort -> sort.
This means we could allow explicit sort-signatures on kind arguments, e.g.:
data kind With (k :: ** -> **) = WithStar (k *) | WithNat (k Nat)
Alternate formulation of With using GADK syntax.
data kind With :: (** -> **) -> ** where
WithStar :: forall (k :: ** -> **). (k *) -> With k
WithNat :: forall (k :: ** -> **). (k Nat) -> With k
Kind Synonyms
Simple type synonyms have a natural analogy at the kind level that could be a useful feature to provide once we have parameterizable kinds. Depending on whether we keep the kind and type namespaces
separate (above) we could just abuse the current type Foo = Either Baz Bang syntax to also allow creating kind synonyms, or we may need to invent some new syntax. kind Foo = Either Baz Bang would seem
natural, or perhaps more safely type kind Foo = Either Baz Bang.
newkind doesn't make sense to add as there is no associated semantics to gain at the type level that data kind doesn't already provide. | {"url":"https://ghc.haskell.org/trac/ghc/wiki/KindSystem","timestamp":"2014-04-18T06:53:35Z","content_type":null,"content_length":"29549","record_id":"<urn:uuid:cb61fc44-2a42-4d6b-bd14-f14ff3fd3ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Georg Gearløs Movies
From symmetry to asymmetry:
It has been established, from previous (analytical) work, that the exact symmetry of a standard (k=1) ABC flow is not conserved when the wavenumber increases to k=2. The dominant mode for the
magnetic field that leads to a very fast exponential growth (compared to the k=1 case) is a symmetry breaking eigenmode. The transition from a symmetric mode to an asymmetric one is shown (above
movie) when the magnetic induction equation is solved numerically.
When the initial magnetic field is chosen to be weak and uniform, magnetic flux "cigars" rapidly arise at certain (stagnation) points of the flow. Eventually, double flux cigars with opposite
polarity appear next to the primary magnetic structures. The growth / decay of the flux cigars occurs simultaneously in all of the (eight) cells in the computational box (symmetry). This is not what
happens when the asymmetry occurs (end of the movie). There are cells where the "double-cigar" mode is still visible and cells where the secondary flux cigars are passing through zero (less supply of magnetic flux).
Download a .gz version of the movie.
Turbulent dynamo action
The full MHD compressible equations are taken into account to study (via numerical simulations) the turbulent dynamo action for an ABC flow. The initial magnetic field is chosen to be a very weak and
random perturbation. The magnetic and fluid Reynolds numbers are chosen to be equal to 100. The movie shows the temporal evolution of the weak magnetic field (pink isosurfaces) during the turbulent
growth phase (before saturation). Field lines with dark colour correspond to weak magnetic field regions and those with white colour to regions with higher field strength. The movie illustrates the
stretching of the weak magnetic field lines that leads to an exponential growth of the magnetic energy when the flow enters the turbulent phase.
A .gz version of the movie is available here. | {"url":"http://www.astro.ku.dk/~bill/movies/","timestamp":"2014-04-19T14:33:40Z","content_type":null,"content_length":"2787","record_id":"<urn:uuid:a762c6c7-0827-4740-a779-ab0f60b4fb9b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00215-ip-10-147-4-33.ec2.internal.warc.gz"} |
Output of Bitwise ~ operator ?
hi all
i have the following code
int a=3;
int b=~a;
But if we are calculating it: a = 0011 in binary; after inverting we get 1100, which in decimal is 12. But I am getting -4 as the output. Can anyone tell why the output is like that?
Anto Telvin Mathew
Many of the life failures are people who did not realize how close they were to success when they gave up. (EDISON)
I hope you know that in Java, negative values are stored in two's complement format, and
int a = 3 does not mean a = 0011 in binary, because int is 32 bits in Java:
a is 00000000 00000000 00000000 00000011 in binary.
Now taking ~a ==> 11111111 11111111 11111111 11111100, which is a negative value. We now take the two's complement, which comes out to be 4.
Hence the answer is -4.
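A quick way to see this in action (an editorial sketch, not from the original thread):

public class ComplementDemo {
    public static void main(String[] args) {
        int a = 3;
        int b = ~a;                                    // bitwise NOT
        System.out.println(Integer.toBinaryString(a)); // 11
        System.out.println(Integer.toBinaryString(b)); // thirty 1-bits followed by 00
        System.out.println(b);                         // -4, since ~x == -x - 1
    }
}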
thanks i got it
subject: Output of Bitwise ~ operator ? | {"url":"http://www.coderanch.com/t/411779/java/java/Output-Bitwise-operator","timestamp":"2014-04-19T20:36:43Z","content_type":null,"content_length":"21373","record_id":"<urn:uuid:dd4b7082-897a-46d4-bf66-2217fe10afba>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
Friction intuition help | {"url":"http://openstudy.com/updates/50cc6858e4b0031882dbf873","timestamp":"2014-04-20T03:45:43Z","content_type":null,"content_length":"301075","record_id":"<urn:uuid:68e07cdf-9850-43f6-b766-cb091e088248>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logic for Computer Scientists/Introduction
Although logic has been developed and researched since Aristotle and Euclid of Megara (430 - 360 B.C.), we want to focus on the development of mathematical logic, for which Gottlob Frege can be seen as the founder (Begriffsschrift, 1879). That was the first time logic was presented in a totally formal language; of course there had been others, like Leibniz and Boole, who contributed to the formalisation of
logic, but it was Frege who radically introduced the idea of formalisation and deduction. This aspect is of particular interest for computer science, because there, the automation of the deduction
process is investigated. The development of modern mathematical logic came along with a deep crisis in mathematics: At the end of the 19th and the beginning of the 20th century axiomatisation of set
theory and of arithmetic were a focus of mathematical research, and in 1910-1913 Whitehead and Russell published their Principia Mathematica, an attempt to formalise the entire mathematics on the
grounds of a formal logic, which was essentially the one given by Frege. This formalisation made it possible to avoid a number of antinomies, which have been discussed at that time. However it was in
1931 that Kurt Gödel proved, in his famous incompleteness theorems, that an approach to formalise arithmetic cannot be complete given its consistency. It is important to note that during this period
in the 1930s the foundation of theoretical computer science was laid: Alan Turing published his work on theoretical machines and on computability, Alonzo Church developed his $\lambda$-calculus, and
Stephen Kleene developed the foundations of recursion theory.
For the rest of this introduction we will directly jump into the use of logic for modern computer science. The reader who is interested in history of logic is referred to the bibliographic section at
the end of this introduction.
Within the last decade it has turned out that computerised systems are the very base of advanced technology. Software is present in nearly all devices of modern homes, in our cars, not to speak of
aircraft or weapons. Without going into details, it is immediately obvious that for most, if not all, applications, robust, safe and correct behaviour of a system is mandatory. It is widely accepted
that this can only be achieved if formal methods are applied during the entire process of hardware and software development. In the following we briefly describe some tasks where the use of
logic has proved to be extremely helpful.
Abstract Datatypes
In order to define datatypes and to derive efficient implementations for it, the concept of abstract datatype definitions is central. The idea is to define the abstract properties of a datatype,
instead of giving special realisations. A very trivial example is the definition of a stack: Let $\Sigma$ be an alphabet. In order to define a stack $S$ over $\Sigma$ we assume $clear$ to be a
nullary function and we define the following properties of stacks:
$\begin{matrix} clear \in S \\ \forall s \in S \; \forall x \in \Sigma \; (push(s,x) \in S) \\ \forall s \in S \; (s \neq clear \Rightarrow pop(s) \in S) \\ \forall s \in S \; (s \neq clear \Rightarrow top(s) \in \Sigma) \\ \forall s \in S \; (empty(s) \in \{true, false\}) \end{matrix}$
Hence, we have the functions $clear, push$ and $pop$, which yield stacks and the predicate $empty$, furthermore we need the following properties with respect to the $push$-operation.
$\begin{matrix} \forall s \in S \; \forall x \in \Sigma \; (push(s,x) \neq clear) \\ \forall s \in S \; \forall x,y \in \Sigma \; (push(s,x) = push(s,y) \Rightarrow x = y) \\ \forall s,t \in S \; \forall x \in \Sigma \; (push(s,x) = push(t,x) \Rightarrow s = t) \end{matrix}$
And we have to give properties with respect to combinations of functions:
$\begin{matrix} \forall s \in S \; \forall x \in \Sigma \; (top(push(s,x)) = x \land pop(push(s,x)) = s) \\ \forall s \in S \; \forall x \in \Sigma \; (empty(push(s,x)) = false) \\ empty(clear) = true \end{matrix}$
The above formulae state what properties we expect stacks to have. Obviously they contain no hints on how to implement such a data type. And indeed this specification is aimed at other problems:
• Is the specification correct? I.e. is there a set $S$ together with the given operations, such that the axioms above hold?
• Is the specification complete? I.e. do the axioms imply all the properties we intuitively assume a stack to have? Are there sets $S$ which do not meet our expectations?
The reader may have already noticed, that the axioms are nothing else than formulas in predicate logic, i.e. all logic where variables like $x$ or $s$ together with so called quantifiers $\forall$
and $\exists$ are used. The above two questions, namely correctness and completeness, are very central topics for the design of formal systems in logic. The proof of these properties often is
difficult and costly, but on the other hand it is one of the clear advantages of logical systems, that these properties can be proved formally. In the main part of this course we will deal with these
questions explicitly.
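For instance, one concrete model of the stack specification (an editorial sketch in Java; the original text gives no implementation) is the usual immutable linked representation, for which each axiom above can be checked directly:

// One model S of the specification: immutable linked stacks over elements E.
final class LinkedStack<E> {
    private final E head;               // top element (null for the empty stack)
    private final LinkedStack<E> tail;  // rest of the stack (null for the empty stack)
    private LinkedStack(E h, LinkedStack<E> t) { head = h; tail = t; }

    static <E> LinkedStack<E> clear() { return new LinkedStack<>(null, null); }
    LinkedStack<E> push(E x)          { return new LinkedStack<>(x, this); }
    LinkedStack<E> pop()              { return tail; } // defined when not empty
    E top()                           { return head; } // defined when not empty
    boolean empty()                   { return tail == null; }
}

Here empty(clear) = true, empty(push(s,x)) = false, pop(push(s,x)) = s, and top(push(s,x)) = x hold by construction.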
Program Developments
There are a number of attempts to define methods, which allow the development of a program together with a formal argument of its correctness. We will give a very rough idea with a toy example.
Assume the following simple program contains a loop, and assume that it has an array of integers:
max := a(1);
do i = 2,n
if a(i) > max then max := a(i)
In order to understand what is going on if this program is executed, the following so called loop invariant is helpful:
$\forall j : 1 \leq j \leq i \Rightarrow max \geq a(j)$
This means that for every value of $i$, i.e. before and after every execution of the if-statement within the do-loop, the above formula is valid. Now assume that the loop has been executed for the
last time; since the value of $i$ after execution of the loop body is $n$, one can conclude that max contains the maximum value of the array:
$\forall j : 1 \leq j \leq n \Rightarrow max \geq a(j)$
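The same loop written in Java with the invariant as a runtime check (an illustrative sketch using 0-based indices; not from the original text):

// Maximum of a[0..n-1]; the inner assert states the loop invariant above.
static int max(int[] a) {
    int max = a[0];
    for (int i = 1; i < a.length; i++) {
        if (a[i] > max) max = a[i];
        for (int j = 0; j <= i; j++)
            assert max >= a[j];   // forall j: 0 <= j <= i  =>  max >= a[j]
    }
    return max;
}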
Another important issue for program development is the specification of programs. In order to give a formal specification of the above program for finding the maximum of an array the following
logical formula can be used:
$\forall S \; \forall m \; max(S, m) \Leftrightarrow (S \neq \emptyset \Rightarrow ( m \in S \land \forall x \; (x \in S \Rightarrow m \geq x)))$
Note that in this specification $S$ is assumed to be a set. There is no decision yet made, that this has to be implemented by means of an array.
On the other hand logic can be used not only for the specification but also as programs itself. The following is a logic program which computes the maximum value of a list of values. Lists are
represented as [ head.tail ], where head denotes the front element of a list, tail the rest of the list and nil the empty list.
max([m.nil], m) <- .
max([head.tail], m) <- max([tail], m),
head < m.
max([head.tail], head) <- max([tail], m),
head >= m.
Artificial Intelligence
One of the oldest sub-disciplines of Artificial Intelligence (AI) research is automated theorem proving. In the early days some were very optimistic about using theorem provers as general problem
solvers for various different tasks, like action planning, knowledge representation or program verification. Now it is clear, that for special tasks tailored reasoning systems are necessary. We will
comment in this subsection on theorem proving, aiming at proving mathematical theorems, and on knowledge representation. This idea can be seen as going back to the ideas of Gottfried Wilhelm Leibniz
(1646 - 1716), who already at this time had a dream of formalisation, and even automatisation of mathematics.
A recent success is the proof of the Robbins conjecture, which even reached the New York Times. For details see http://www-unix.mcs.anl.gov/~mccune/papers/robbins/. In 1933, E. V. Huntington presented the following basis for Boolean algebra:
x + y = y + x.
(x + y) + z = x + (y + z).
n(n(x) + y) + n(n(x) + n(y)) = x.
[Huntington equation]
Shortly thereafter, Herbert Robbins conjectured that the Huntington equation can be replaced with a simpler one:
n(n(x + y) + n(x + n(y))) = x.
[Robbins equation]
Robbins and Huntington could not find a proof, and the problem was later studied by Tarski and his students. The proof that solves the Robbins problem was found on October 10, 1996, by the theorem prover EQP. EQP is similar in many ways to the better-known program Otter (http://www.mcs.anl.gov/AR/otter/). The main differences are that EQP has associative-commutative (AC) unification, is restricted to equational logic, and offers more paramodulation strategies. See http://www.mcs.anl.gov/~mccune/papers/33-basic-test-problems/ for details.
Knowledge Representation
In many areas of Artificial Intelligence, the representation and manipulation of knowledge is a central task. To this end numerous graphic-oriented formalisms have been invented.
An informal semantics of such a graphical notation (the figure is not reproduced here) states that both man and animal are mammals, that a man has a nationality and an age, whereas an animal has an age but no nationality. A closer investigation of the semantics of such a formalism would show that it is nothing but a pictorial representation of the following set of predicate logic formulae:
$\forall x (man(x) \Rightarrow mammal(x))$
$\forall x (animal(x) \Rightarrow mammal(x))$
$\forall x (man(x) \Rightarrow \exists y\; age(x, y))$
$\forall x (man(x) \Rightarrow \exists y\; nationality(x, y))$
$\forall x (mammal(x) \Rightarrow \exists y\; age(x, y))$
See also: Gellrich/Gellrich: Mathematik (1) - Schaltalgebra
Problem 1 (Introduction)
In a criminal case the following facts have been established:
1. At least one of the three persons X,Y,Z is guilty.
2. If X is guilty and Y is innocent, then Z is guilty.
These circumstances are not sufficient to accuse any one of them, but it can be said for certain that at least one of two particular persons must be guilty. Which two are these?
Problem 2 (Introduction)
Which of the following syllogisms are valid? Give a reason for your answer or give a counterexample.
1. All $M$ are $P$, some $S$ are not $M$; then: Some $S$ are $P$.
2. All $M$ are $P$, some $S$ are not $M$; then: Some $S$ are not $P$.
3. All $P$ are $M$, some $S$ are not $M$; then: Some $S$ are not $P$.
Problem 3 (Introduction)
In a meeting there are 100 politicians discussing with each other. Every one of them is either corrupt or uncorrupted. The following facts are known:
1. At least one politician is uncorrupted.
2. In each pair of politicians, at least one is corrupt.
How many of the politicians are corrupt, and how many are uncorrupted?
Problem 4 (Introduction)
The anthropologist Abercrombie set foot on the island of knights and knaves with an uneasy feeling he had never had before. He knew that very strange people lived on this island: the knights always make true statements, and the knaves always make false ones. Abercrombie also knew that before he could find anything out, he had to find a friend, someone whose statements he could trust. So, in order to find a knight, he questioned the first three inhabitants he met. First Abercrombie asked Arthur: "Are Bernard and Charles both knights?" Arthur answered: "Yes, they are!" Abercrombie then asked: "Is Bernard a knight?" To his great surprise he got the answer: "No!" Is Charles a knight or a knave? Deduce your answer and give a reason for it.
Problem 5 (Introduction)
A little island had exactly 100 inhabitants. Every inhabitant either always told the truth or always lied. One day a researcher came to the island and questioned the inhabitants one after the other. The first said: "There is at least one liar among us." The second said: "There are at least two liars among us." And so on; the last finally claimed: "There are 100 liars on this island." How many liars were there really? A year later the researcher went to another island with 99 inhabitants. In an interview the inhabitants of this island answered exactly as on the first island, i.e., the $n$-th inhabitant said: "There are at least $n$ liars here." What can you say about this island?
Problem 6 (Introduction)
In a village the priest announced one Sunday: "It has been confessed to me that there are unfaithful husbands in our village. However, the seal of confession forbids me to name them. You will nevertheless learn who they are if we proceed as follows: any woman who knows for certain that her husband is unfaithful shall throw him out of the house in the following night." The problem, however, was that every woman knew everything about every other woman's husband, but nothing about her own. The next morning the priest went through the streets; not a single man had been turned out. On the next day, again, he saw nobody. But on the 100th morning he saw men who had been thrown out of the house by their wives. How many?
Problem 7 (Introduction)
There is a hotel with countably infinitely many rooms 0, 1, 2, .... All rooms are vacant. Now an $\omega$-decker bus arrives, with $\omega$ many seats on every deck. How can all of the passengers be accommodated in the hotel?
plotting 3d graphs
It depends on how much you mind paying. Mathematica is top of the line but has a hefty price tag, and it can be a bit cumbersome. If you want something that is free and a little more user friendly, but much less "elegant," Winplot is good.
On a side note, many college campuses have Mathematica in certain labs; that's where I use it. And if you do decide to go with Mathematica and need some help operating it, let me know, and I can send you some useful demos.
Hope this helps.
1. A line that joins any two vertices of a polygon, if the vertices are not next to each other; or a line that joins two vertices of a polyhedron that are not on the same face.
2. The small flat mirror used near the upper end of a Newtonian telescope to direct the converging beam of light over to the side of the tube where the eyepiece is located. See also star diagonal.
Finding Functional form for a given Scaling Condition
Dear all
While studying the overlap distribution for two random Cantor sets (long story made short), I came across the following problem.
$G(k)$ is a complex-valued function and satisfies the following condition:
$G(k\mu) = G(k)^2+ \beta$
with $\beta$ and $\mu$ constants (in my case $\beta=\frac{2}{9}$, $\mu = \frac{4}{3}$).
Is there a way to find the functional form of $G(k)$ which satisfies the condition?
Note that for $\beta = 0$, $G(k)=\exp\left(a k^{\log_\mu 2}\right)$ (with $a$ constant) satisfies the condition (easily verified), but I have no idea how to find a solution for non-zero $\beta$.
I'm not a math student (I'm studying physics), and I have never seen problems like this before. Is there a way to find an analytical expression for $G(k)$? Possibly as an expansion?
I can generate a function with this property on the computer. Writing $G(k)= x(k) + i y(k)$, with $x(k)=x(-k)$ and $y(k)=-y(-k)$, the function should look something like this: [plot omitted]
-- jon
1 Answer
You do not give any smoothness requirement; I will look for an analytic $G$: $$ G(k)=\sum_{n=0}^\infty a_nk^n.$$ In what follows, I assume also that $\mu=4/3$ and $\beta=2/9$. Expanding both sides of the equation in a power series and equating coefficients, we get that $a_0=1/3$ or $a_0=2/3$. In the first case we obtain the constant solution $G(k)=1/3$. But in the second case, we find a one-parameter family of (formal) solutions, parametrized by the value of $a_1$: $$ a_0=\frac23,\quad a_1\in\mathbb{C},\quad a_n=\frac{1}{\mu^n-\mu}\sum_{i=1}^{n-1}a_ia_{n-i},\quad n\ge 2. $$ For $a_1=0$ we obtain the constant solution $G(k)=2/3$. For other values of $a_1$, one should check that the series has a positive radius of convergence.
Another way of obtaining solutions is the following. Choose an arbitrary function $h\colon[1,\mu]\to\mathbb{C}$, and define $G(k)=h(k)$ if $1\le k<\mu$; for $\mu\le k<\mu^2$, let $G(k)=G(k/\mu)^2+\beta$; iterate this procedure to define $G$ on $[1,\infty)$. Now, for $1/\mu\le k<1$, let $G(k)=\pm\sqrt{G(\mu k)-\beta}$; iterate the procedure to define $G$ on $(0,1)$. Conditions can be imposed on the arbitrary function $h$ to make $G$ continuous, for instance $G(\mu)=G(1)^2+\beta$.
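For readers who want to experiment, here is a small Python sketch (an editorial addition, not from the original thread) that builds the truncated series from the recursion above for the poster's values mu = 4/3, beta = 2/9 and checks the functional equation numerically; the truncation order N and the choice a1 = 0.1 are arbitrary illustrations:

mu, beta = 4/3, 2/9
N = 40                                   # truncation order of the power series
a = [0.0] * (N + 1)
a[0] = 2/3                               # the non-constant branch
a[1] = 0.1                               # free parameter of the family

for n in range(2, N + 1):                # a_n = (1/(mu^n - mu)) * sum_{i=1}^{n-1} a_i * a_{n-i}
    a[n] = sum(a[i] * a[n - i] for i in range(1, n)) / (mu**n - mu)

def G(k):
    return sum(a[n] * k**n for n in range(N + 1))

k = 0.3                                  # small enough for the truncation to be accurate
print(G(mu * k), G(k)**2 + beta)         # the two printed values should nearly agree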
Thank you so much Julián. But this solution raises another question for me. You see that $G(k)= \tilde F(k)$ is the Fourier transform of $F(x)$, and I want to find an expression for $F(x)$, but every term in the inverse Fourier transform (from the series expansion) seems to diverge. Is there a way to tackle this problem? Assume $a_1$ is a purely imaginary constant, as this will make $F(x)$ a real function. – jonalm Mar 27 '10 at 13:44
Multivariate spectral gradient method for unconstrained optimization.
(English) Zbl 1155.65046
The authors present the multivariate spectral gradient (MSG) method for solving unconstrained optimization problems. Combined with some quasi-Newton property, the MSG method allows an individual adaptive stepsize along each coordinate direction, which guarantees that the method is finitely convergent for positive definite quadratics. In particular, it converges in no more than two steps for positive definite quadratics with diagonal Hessian, and quadratically for objective functions with positive definite diagonal Hessian. Moreover, based on a nonmonotone line search, global convergence is established for the MSG algorithm.
A numerical study comparing the MSG algorithm with the global Barzilai-Borwein (GBB) algorithm is also given. The search direction of the MSG method is close to that presented in the paper by M. N. Vrahatis, G. S. Androulakis, J. N. Lambrinos and G. D. Magoulas [J. Comput. Appl. Math. 114, 367–386 (2000; Zbl 0958.65072)], but the explanation for the steplength selection is different: the stepsize in this method is selected from estimates of the eigenvalues of the Hessian, not from a local estimation of the Lipschitz constant as in the above-mentioned paper. Finally, numerical results are reported which show that this method is promising and deserves further discussion.
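To make the stepsize idea concrete, here is a small Python sketch (an editorial addition, not from the review or from the paper under review) of a componentwise "spectral" gradient step obtained from a diagonal secant condition; the safeguards and iteration count are arbitrary choices, and the authors' actual algorithm additionally uses a nonmonotone line search:

import numpy as np

def msg_minimize(grad, x0, iters=50, lo=1e-6, hi=1e6):
    # Each coordinate gets its own stepsize d_i from the diagonal secant
    # condition D y = s, i.e. d_i = s_i / y_i (toy illustration only).
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = np.ones_like(x)                      # initial diagonal stepsizes
    for _ in range(iters):
        x_new = x - d * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(y != 0, s / y, d)   # componentwise spectral stepsizes
        d = np.clip(np.abs(d), lo, hi)       # crude safeguard on the stepsizes
        x, g = x_new, g_new
    return x

# Quadratic with diagonal Hessian H: gradient is H * x.
H = np.array([1.0, 10.0, 100.0])
print(msg_minimize(lambda z: H * z, x0=[1.0, 1.0, 1.0]))   # close to the origin

On a quadratic with diagonal Hessian this toy iteration reaches the minimizer after two steps, matching the behavior the review describes for that problem class.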
65K05 Mathematical programming (numerical methods)
90C30 Nonlinear programming
90C53 Methods of quasi-Newton type
Mean squared normalized error performance function
mse is a network performance function. It measures the network's performance according to the mean of squared errors.
perf = mse(net,t,y,ew) takes these arguments:
net Neural network
t Matrix or cell array of targets
y Matrix or cell array of outputs
ew Error weights (optional)
and returns the mean squared error.
This function has two optional parameters, which are associated with networks whose net.trainFcn is set to this function:
● 'regularization' can be set to any value between 0 and 1. The greater the regularization value, the more squared weights and biases are included in the performance calculation relative to errors.
The default is 0, corresponding to no regularization.
● 'normalization' can be set to 'none' (the default); 'standard', which normalizes errors between -2 and 2, corresponding to normalizing outputs and targets between -1 and 1; and 'percent', which
normalizes errors between -1 and 1. This feature is useful for networks with multi-element outputs. It ensures that the relative accuracy of output elements with differing target value ranges are
treated as equally important, instead of prioritizing the relative accuracy of the output element with the largest target value range.
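As a rough sanity check (this is an editorial note, not a formula taken from this page): with the defaults 'regularization' = 0 and 'normalization' = 'none', and no error weights, the returned value is simply the mean of the squared errors, $\mathrm{perf} = \frac{1}{N}\sum_{i=1}^{N}(t_i - y_i)^2$, where $N$ is the total number of target elements.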
You can create a standard network that uses mse with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with mse, set net.performFcn to 'mse'. This automatically sets
net.performParam to a structure with the default optional parameter values.
Here a two-layer feedforward network is created and trained to predict median house prices, using the mse performance function (the default performance function for feedforwardnet) and a regularization value of 0.01.
[x,t] = house_dataset;
net = feedforwardnet(10);
net.performFcn = 'mse'; % Redundant, MSE is default
net.performParam.regularization = 0.01;
net = train(net,x,t);
y = net(x);
perf = perform(net,t,y);
Alternately, you can call this function directly.
perf = mse(net,t,y,'regularization',0.01);
Level 5
There are two parts to Level 5 of the SIGMA-T. The first part is composed of 66 items which assess pupils’ knowledge of mathematical facts and concepts as well as their ability to perform a wide
range of mathematical procedures. The second part of the test, which is administered separately, consists of 28 word problems of varying degrees of difficulty. Several of the items in the test have
multiple parts so that in total the number of items in the test is 119.
Collectively, the test items provide a broad and comprehensive assessment of pupils’ mathematical achievement in the areas of number, measurement, geometry, elementary algebra, and data and
The test can be administered to pupils at any stage from the end of 5th Class until the end of 6th Class. Separate norms are available for each of the four terms. The test content is largely based on
the mathematics curriculum for 5th and 6th Classes although material is also included from lower grades in order to make the test suitable for use with weaker pupils. The sequence of test items has
been carefully graded according to the difficulty levels of the items.
Both parts of the test are untimed so that the administration time partly depends on the ability levels of the pupils taking the test. Whereas more able pupils may complete Part 1 within 45 minutes,
some pupils may take somewhat more than an hour to do so. Administration time for Part 2 is typically between 45 minutes and an hour.
Separate scores can be computed for the two parts of the test as well as an overall test score. The test manual contains detailed guidelines for administering and scoring the test. It provides clear
guidance on interpreting and analysing pupils' results and offers advice on reporting test results to parents.
Writing@CSU Guide
Glossary of Key Terms
This glossary provides definitions of many of the terms used in the guides to conducting qualitative and quantitative research. The definitions were developed by members of the research methods
seminar (E600) taught by Mike Palmquist in the 1990s and 2000s.
Accuracy: A term used in survey research to refer to the match between the target population and the sample.
ANCOVA (Analysis of Co-Variance): Same method as ANOVA, but analyzes differences between dependent variables.
ANOVA (Analysis of Variance): A method of statistical analysis broadly applicable to a number of research designs, used to determine differences among the means of two or more groups on a variable. The independent variables are usually nominal, and the dependent variable is usually an interval.
Apparency: Clear, understandable representation of the data.
Bell curve: A frequency distribution statistic. Normal distribution is shaped like a bell.
Case Study: The collection and presentation of detailed information about a particular participant or small group, frequently including the accounts of subjects themselves.
Causal Model: A model which represents a causal relationship between two variables.
Causal Relationship: The relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
Causality: The relation between cause and effect.
Central Tendency: These measures indicate the middle or center of a distribution.
Confirmability: Objectivity; the findings of the study could be confirmed by another person conducting the same study.
Confidence Interval: The range around a numeric statistical value obtained from a sample, within which the actual, corresponding value for the population is likely to fall, at a given level of probability (Alreck, 444).
Confidence Level: The specific probability of obtaining some result from a sample if it did not exist in the population as a whole, at or below which the relationship will be regarded as statistically significant (Alreck, 444).
Confidence Limits: (Same as confidence interval, but terminology used by Lauer and Asher.) "The range of scores or percentages within which a population percentage is likely to be found on variables that describe that population" (Lauer and Asher, 58). Confidence limits are expressed in a "plus or minus" fashion according to sample size, then corrected according to formulas based on variables connected to population size in relation to sample size and the relationship of the variable to the population size -- the larger the sample, the smaller the variability or confidence limits.
Confounding Variable: An unforeseen, and unaccounted-for, variable that jeopardizes the reliability and validity of an experiment's outcome.
Construct Validity: Seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
Content Validity: The extent to which a measurement reflects the specific intended domain of content (Carmines & Zeller, 1991, p. 20).
Context Sensitivity: Awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
Continuous Variable: A variable that may have fractional values, e.g., height, weight and time.
Control Group: A group in an experiment that receives no treatment in order to compare the treated group against a norm.
Convergent Validity: The general agreement among ratings, gathered independently of one another, where measures should be theoretically related.
Correlation: 1) A common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to zero to +1.00. 2) A non-cause-and-effect relationship between two variables.
Covariate: A product of the correlation of two related variables times their standard deviations. Used in true experiments to measure the difference of treatment between them.
Credibility: A researcher's ability to demonstrate that the object of a study is accurately identified and described, based on the way in which the study was conducted.
Criterion Related Validity: Used to demonstrate the accuracy of a measuring procedure by comparing it with another procedure which has been demonstrated to be valid; also referred to as instrumental validity.
Data: Recorded observations, usually in numeric or textual form.
Deductive: A form of reasoning in which conclusions are formulated about particulars from general or universal premises.
Dependability: Being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
Dependent Variable: A variable that receives stimulus and is measured for the effect the treatment has had upon it.
Design Flexibility: A quality of an observational study that allows researchers to pursue inquiries on new topics or questions that emerge from initial research.
Deviation: The distance between the mean and a particular data point in a given distribution.
Discourse Community: A community of scholars and researchers in a given field who respond to and communicate with each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
Discrete Variable: A variable that is measured solely in whole units, e.g., gender and siblings.
Discriminate Validity: The lack of a relationship among measures which theoretically should not be related.
Distribution: The range of values of a particular variable.
Dynamic Systems: Qualitative observational research is not concerned with having straightforward, right-or-wrong answers. Change in a study is common because the researcher is not concerned with finding only one answer.
Electronic Text: A "paper" or linear text that has been essentially "copied" into an electronic medium.
Empathic Neutrality: A quality of qualitative researchers who strive to be non-judgmental when compiling findings.
Empirical Research: "…the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena under study" (Lauer and Asher, 1988, p. 7).
Equivalency Reliability: The extent to which two items measure identical concepts at an identical level of difficulty.
Ethnography: Ethnographies study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through observer immersion into the culture or group. Research is completed through various methods, which are similar to those of case studies, but since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
Ethnomethodology: A form of ethnography that studies activities of group members to see how they make sense of their surroundings.
Existence or Frequency: A key question in the coding process. The researcher must decide whether to count a concept only once, for existence, no matter how many times it appears, or to count it each time it occurs. For example, "damn" could be counted once, even though it appears 50 times, or it could be counted all 50 times. The latter measurement may be interested in how many times it occurs and what that indicates, whereas the former may simply be looking for existence, period.
Experiment / Experimental Research: A researcher working within this methodology creates an environment in which to observe and interpret the results of a research question. A key element in experimental research is that participants in a study are randomly assigned to groups. In an attempt to create a causal model (i.e., to discover the causal origin of a particular phenomenon), groups are treated differently and measurements are conducted to determine if different treatments appear to lead to different effects.
External Validity: The extent to which the results of a study are generalizable or transferable. See also validity.
Face Validity: How a measure or procedure appears.
Factor Analysis: A statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
Generalizability: The extent to which research findings and conclusions from a study conducted on a sample population can be applied to the population at large.
Grounded Theory: The practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
Holistic Perspective: Taking almost every action or communication of the whole phenomenon of a certain community or culture into account in research.
Hypertext: A nonsequential text composed of links and nodes.
Hypothesis: A tentative explanation based on theory to predict a causal relationship between variables.
Independent Variable: A variable that is part of the situation from which originates the stimulus given to a dependent variable. Includes treatment and states of the variable, such as age, size, weight, etc.
Inductive: A form of reasoning in which a generalized conclusion is formulated from particular instances.
Inductive Analysis: A form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but forms questions throughout the research process.
Internal Consistency: The extent to which all questions or items assess the same characteristic, skill, or quality.
Internal Validity: (1) The rigor with which the study was conducted (e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and wasn't measured) and (2) the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore (Huitt, 1998). In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity. See also validity.
Interrater Reliability: The extent to which two or more individuals agree. It addresses the consistency of the implementation of a rating system.
Interval Variable: A variable in which both the order of data points and the distance between data points can be determined, e.g., percentage scores and distances.
Interviews: A research tool in which a researcher asks questions of participants; interviews are often audio- or video-taped for later transcription and analysis.
Irrelevant Information: One must decide what to do with the information in the text that is not coded. One's options include either deleting or skipping over unwanted material, or viewing all information as relevant and important and using it to reexamine, reassess and perhaps even alter one's coding scheme.
Kinesics: Kinesic analysis examines what is communicated through body movement.
Level of Analysis: Chosen by determining which word, set of words, or phrases will constitute a concept. According to Carley, 100-500 concepts is generally sufficient when coding for a specific topic, but this number of course varies on a case-by-case basis.
Level of Generalization: A researcher must decide whether concepts are to be coded exactly as they appear, or if they can be recorded in some altered or collapsed form. Using Horton as an example again, she could code profanity individually and code "damn" and "dammit" as two separate concepts. Or, by generalizing their meaning, i.e. they both express the same idea, she could group them together as one item, i.e. "damn words."
Level of Implication: One must determine whether to code simply for explicit appearances of concepts, or for implied concepts as well. For example, consider a hypothetical piece of text about skiing, written by an expert. The expert might refer several times to "???," as well as various other kinds of turns. One must decide whether to code "???" as an entity in and of itself, or, if coding for "turn" references in general, to code "???" as implicitly meaning "turn." Thus, by determining that the meaning "turn" is implicit in the words "???," anytime the words "???" or "turn" appear in the text, they will be coded under the same category of "turn."
Link: In hypertext, a pointer from one node to another.
Matched T-Test: A statistical test used to compare two sets of scores for the same subject. A matched pairs t-test can be used to determine if the scores of the same participants in a study differ under different conditions. For instance, this sort of t-test could be used to determine if people write better essays after taking a writing class than they did before taking the writing class.
Matching: The process of making variables in experimental groups correspond equally, feature for feature.
Mean: The average score within a distribution.
Mean Deviation: A measure of variation that indicates the average deviation of scores in a distribution from the mean; it is determined by averaging the absolute values of the deviations.
Median: The center score in a distribution.
Mental Models: A group or network of interrelated concepts that reflect conscious or subconscious perceptions of reality. These internal mental networks of meaning are constructed as people draw inferences and gather information about the world.
Mode: The most frequent score in a distribution.
Multi-Modal Research: A research approach that employs a variety of methods; see also triangulation.
Narrative Inquiry: A qualitative research approach based on a researcher's narrative account of the investigation, not to be confused with a narrative examined by the researcher as data.
Naturalistic Inquiry: Observational research of a group in its natural setting.
Node: In hypertext, each unit of information, connected by links.
Nominal Variable: A variable determined by categories which cannot be ordered, e.g., gender and color.
Normal Distribution: A frequency distribution representing the probability that a majority of randomly selected members of a population will fall within the middle of the distribution. Represented by the bell curve.
Ordinal Variable: A variable in which the order of data points can be determined but not the distance between data points, e.g., letter grades.
Parameter: A coefficient or value for the population that corresponds to a particular statistic from a sample and is often inferred from the sample.
Phenomenology: A qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
Population: The target group under investigation, as in all students enrolled in first-year composition courses taught in traditional classrooms. The population is the entire set under consideration. Samples are drawn from populations.
Precision: In survey research, the tightness of the confidence limits.
Pre-defined or Interactive Concept Choice: One must determine whether to code only from a pre-defined set of concepts and categories, or whether one will develop some or all of these during the coding process. For example, using a predefined set, Horton would code only for profane language. But, if Horton coded interactively, she may have decided halfway through the process that the text warranted coding for profane gestures as well.
Probability: The chance that a phenomenon has of occurring randomly. As a statistical measure, it is shown as p (the "p" factor).
Qualitative Research: Empirical research in which the researcher explores relationships using textual, rather than quantitative, data. Case study, observation, and ethnography are considered forms of qualitative research. Results are not usually considered generalizable, but are often transferable.
Quantitative Research: Empirical research in which the researcher explores relationships using numeric data. Survey is generally considered a form of quantitative research. Results can often be generalized, though this is not always the case.
Quasi-Experiment: Similar to true experiments, with subjects, treatment, etc., but using nonrandomized groups. Incorporates interpretation and transferability in order to compensate for the lack of control of variables.
Quixotic Reliability: Refers to the situation where a single manner of observation consistently, yet erroneously, yields the same result.
Random Sampling: The process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn.
Randomization: Used to allocate subjects to experimental and control groups. The subjects are initially considered not unequal because they were randomly selected.
Range: The difference between the highest and lowest scores in a distribution.
Reliability: The extent to which a measure, procedure or instrument yields the same result on repeated trials.
Response Rate: In survey research, the actual percentage of questionnaires completed and returned.
Rhetorical Inquiry: "entails…1) identifying a motivational concern, 2) posing questions, 3) engaging in a heuristic search (which in composition studies has often occurred by probing other fields), 4) creating a new theory or hypotheses, and 5) justifying the theory" (Lauer and Asher, 1988, p. 5).
Rigor: The degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experiment.
Sampling Error: The degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondents and the corresponding reduction in reliability (Alreck, 454).
Sampling Frame: A listing that should include all those in the population to be sampled and exclude all those who are not in the population (Alreck, 454).
Sample: The population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical -- generally the larger the number in the sample, the higher the likelihood of a representative distribution of the population.
Selective Reduction: The central idea of content analysis. Text is reduced to categories consisting of a word, set of words or phrases, on which the researcher can focus. Specific words or patterns are indicative of the research question and determine levels of analysis and generalization.
Serial Effect: In survey research, a situation where questions may "lead" participant responses through establishing a certain tone early in the questionnaire. The serial effect may accrue as several questions establish a pattern of response in the participant, biasing results.
Short-Term Observation: Studies that list or present findings of a short-term qualitative study based on recorded observation.
Skewed Distribution: Any distribution which is not normal, that is, not symmetrical along the x-axis.
Stability Reliability: The agreement of measuring instruments over time.
Standard Deviation: A term used in statistical analysis. A measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
Standard Error (S.E.) of the Mean: A term used in statistical analysis. A computed value based on the size of the sample and the standard deviation of the distribution, indicating the range within which the mean of the population is likely to be from the mean of the sample at a given level of probability (Alreck, 456).
Survey: A research tool that includes at least one question which is either open-ended or close-ended and employs an oral or written method for asking these questions. The goal of a survey is to gain specific information about either a specific group or a representative sample of a particular group. Results are typically used to understand the attitudes, beliefs, or knowledge of a particular group.
Synchronic Reliability: The similarity of observations within the same time frame; it is not about the similarity of things observed.
T-Test: A statistical test used to determine if the scores of two groups differ on a single variable. For instance, to determine whether writing ability differs among students in two classrooms, a t-test could be used.
Thick Description: A rich and extensive set of details concerning methodology and context provided in a research report.
Transferability: The ability to apply the results of research in one context to another similar context. Also, the extent to which a study invites readers to make connections between elements of the study and their own experiences.
Translation Rules: If one decides to generalize concepts during coding, then one must develop a set of rules by which less general concepts will be translated into more general ones. This doesn't involve simple generalization, for example, as with "damn" and "dammit," but requires one to determine, from a given set of concepts, what concepts are missing. When dealing with the idea of profanity, one must decide what to do with the concept "dang it," which is generally thought to imply "damn it." The researcher must make this distinction, i.e. make this implicit concept explicit, and then code for the frequency of its occurrence. This decision results in the construction of a translation rule, which instructs the researcher to code for the concept "dang it" in a certain way.
Treatment: The stimulus given to a dependent variable.
Triangulation: The use of a combination of research methods in a study. An example of triangulation would be a study that incorporated surveys, interviews, and observations. See also multi-modal.
Unique Case Orientation: A perspective adopted by many researchers conducting qualitative observational studies; researchers adopting this orientation remember that every study is special and deserves in-depth attention. This is especially necessary for doing cultural comparisons.
Validity: The degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid. See also internal validity and external validity.
Variable: Observable characteristics that vary among individuals. See also ordinal variable, nominal variable, interval variable, continuous variable, discrete variable, dependent variable, independent variable.
Variance: A measure of variation within a distribution, determined by averaging the squared deviations from the mean of a distribution.
Variation: The dispersion of data points around the mean of a distribution.
Verisimilitude: Having the semblance of truth; in research, it refers to the probability that the research findings are consistent with occurrences in the "real world."
Permutations and Combinations Help!
Hello, what are some ways/tips that I can use to help me distinguish between permutation and combination word problems?
You ask one question. Is this about content or is it about order? If it involves order it is a permutation. If only content matters then it is a combination. How many ways can you make a four-letter
string using the letters in $S~O~U~T~H~E~R~N$? That is a permutation. How many ways can you select four letters from $S~O~U~T~H~E~R~N$? That is a combination.
Hello, TheCB4!
Quote: What are some ways/tips that I can use to help me distinguish between permutation and combination word problems?
In a permutation, the order of the objects is important. Suppose there are 10 books on the table and you may select 4 of them. If you choose the 4 books and place them in a row on a shelf, it is a permutation. There are: $_{10}P_4 \:=\:\frac{10!}{6!} \:=\:10\cdot9\cdot8\cdot7 \:=\:5040$ possible arrangements. If you choose the 4 books and toss them into a shopping bag, it is a combination. There are: $_{10}C_4 \:=\:\frac{10!}{4!\,6!} \:=\:210$ possible selections.
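As a quick check of these counts (an editorial addition, not part of the original thread; math.perm and math.comb require Python 3.8+):

import math
print(math.perm(10, 4))   # 5040 ordered arrangements of 4 of the 10 books
print(math.comb(10, 4))   # 210 unordered selections of 4 of the 10 books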
Inventor Blaise Pascal Biography
Fascinating facts about Blaise Pascal, inventor of a mechanical adding machine in 1642.
Inventor: Blaise Pascal
Criteria: First to invent. First practical.
Birth: June 19, 1623 in Clermont-Ferrand, France
Death: August 19, 1662 in Paris, France
Nationality: French
Blaise Pascal, French philosopher, mathematician, and physicist, considered one of the great minds in Western intellectual history. Inventor of the first mechanical adding machine.
Blaise Pascal was born in Clermont-Ferrand on June 19, 1623, and his family settled in Paris in 1629. Under the tutelage of his father, Pascal soon proved himself a mathematical prodigy, and at the
age of 16 he formulated one of the basic theorems of projective geometry, known as Pascal's theorem and described in his Essai pour les coniques (Essay on Conics, 1639).
In 1642 he invented the first mechanical adding machine. Pascal proved by experimentation in 1648 that the level of the mercury column in a barometer is determined by an increase or decrease in the
surrounding atmospheric pressure rather than by a vacuum, as previously believed. This discovery verified the hypothesis of the Italian physicist Evangelista Torricelli concerning the effect of
atmospheric pressure on the equilibrium of liquids. Six years later, in conjunction with the French mathematician Pierre de Fermat, Pascal formulated the mathematical theory of probability, which has
become important in such fields as actuarial, mathematical, and social statistics and as a fundamental element in the calculations of modern theoretical physics.
Pascal's other important scientific contributions include the derivation of Pascal's law or principle, which states that fluids transmit pressures equally in all directions, and his investigations in
the geometry of infinitesimals. His methodology reflected his emphasis on empirical experimentation as opposed to analytical, a priori methods, and he believed that human progress is perpetuated by
the accumulation of scientific discoveries resulting from such experimentation.
Pascal espoused Jansenism and in 1654 entered the Jansenist community at Port Royal, where he led a rigorously ascetic life until his death eight years later. In 1656 he wrote the famous 18 Lettres
provinciales (Provincial Letters), in which he attacked the Jesuits for their attempts to reconcile 16th-century naturalism with orthodox Roman Catholicism.
His most positive religious statement appeared posthumously (he died August 19, 1662); it was published in fragmentary form in 1670 as Apologie de la religion Chrétienne (Apology of the Christian
Religion). In these fragments, which later were incorporated into his major work, he posed the alternatives of potential salvation and eternal damnation, with the implication that only by conversion
to Jansenism could salvation be achieved. Pascal asserted that whether or not salvation was achieved, humanity's ultimate destiny is an afterlife belonging to a supernatural realm that can only be
known intuitively. Pascal's final important work was Pensées sur la religion et sur quelques autres sujets (Thoughts on Religion and on Other Subjects), also published in 1670. In the Pensées Pascal
attempted to explain and justify the difficulties of human life by the doctrine of original sin, and he contended that revelation can be comprehended only by faith, which in turn is justified by
Pascal's writings urging acceptance of the Christian life contain frequent applications of the calculations of probability; he reasoned that the value of eternal happiness is infinite and that
although the probability of gaining such happiness by religion may be small it is infinitely greater than by any other course of human conduct or belief. A reclassification of the Pensées, a careful
work begun in 1935 and continued by several scholars, does not reconstruct the Apologie, but allows the reader to follow the plan that Pascal himself would have followed.
Pascal was one of the most eminent mathematicians and physicists of his day and one of the greatest mystical writers in Christian literature. His religious works are personal in their speculation on
matters beyond human understanding. He is generally ranked among the finest French polemicists, especially in the Lettres provinciales, a classic in the literature of irony. Pascal's prose style is
noted for its originality and, in particular, for its total lack of artifice. He affects his readers by his use of logic and the passionate force of his dialectic.
TO LEARN MORE
History of Computing from The Great Idea Finder
Invention of the Adding Machine from The Great Idea Finder
The Kid Who Invented the Popsicle : And Other Surprising Stories About Inventions
by Don L. Wulffson / Paperback - 128 pages (1999) / Puffin
Brief factual stories about how various familiar things were invented, many by accident, from animal crackers to the zipper.
Making Sense of It All: Pascal and the Meaning of Life
Thomas V. Morris / Paperback - 214 pages / Wm. B. Eeerdmans Pub.
His lucid reflections provide fresh, fertile insights and perspectives for any thoughtful person journeying through life.
Pascal: The Great Philosophers
by Ben Rogers / Paperback: 58 pages / Routledge; 1 edition (July, 1999)
In just 64 pages, each author, a specialist on his subject, places the philosopher and his ideas into historical perspective. Each volume explains, in simple terms, the basic concepts, enriching the
narrative through the effective use of biographical detail. And instead of attempting to explain the philosopher's entire intellectual history, which can be daunting, this series takes one central
theme in each philosopher's work, using it to unfold the philosopher's thoughts.
ON THE WEB:
Blaise Pascal
Pascal's Pascaline Calculator This offers an overview of the advances in science that made desktop computers possible starting with the invention of counting.
(URL: www.eingang.org/Lecture/pascaline.html)
The First Adding Machine
Adding machines date back to the 17th century. They started with simple machines that could only add (and sometimes subtract.) Many were rather tricky to use and could produce erroneous results with
untrained users.
(URL: www.hpmuseum.org/adder.htm)
Blaise Pascal (1623 - 1662)
From `A Short Account of the History of Mathematics' (4th edition, 1908) by W. W. Rouse Ball.
(URL: www.maths.tcd.ie/pub/HistMath/People/Pascal/RouseBall/RB_Pascal.html)
Invented one of the first mechanical calculators: the Pascaline. Pascal, a genius by any measure, died of a brain hemorrhage at the age of 39. From The History of Computing Project.
(URL: www.thocp.net/biographies/pascal_blaise.html)
The Calculators Museum
The Museum of HP Calculators displays and describes Hewlett-Packard calculators introduced from 1968 to 1986 plus a few interesting later models. There are also sections on calculating machines and
slide rules as well as sections for buying and selling HP calculators, an HP timeline, collecting information and a software library.
(URL: www.hpmuseum.org/)
Blaise Pascal Biography
Pascal, Blaise (1623-62), French philosopher, mathematician, and physicist, considered one of the great minds in Western intellectual history. A student ThinkQuest project.
(URL: library.thinkquest.org/10170/voca/pascal.htm)
"The heart has its reasons that the mind knows nothing of. " - Blaise Pascal
"If God does not exist, one will lose nothing by believing in him, while if he does exist, one will lose everything by not believing." - Blaise Pascal
"Since we cannot know all that there is to be known about anything, we ought to know a little about everything." - Blaise Pascal
DID YOU KNOW?
• The Zwinger museum, in Dresden, Germany, exhibits one of his original mechanical calculators.
• Pascal continued to make improvements to his design through the next decade and built fifty machines in total.
• In honor of his scientific contributions, the name Pascal has been given to the SI unit of pressure, to a programming language, and Pascal's law (an important principle of hydrostatics).
More on Diff. Forms and Distributions as Kernels
Hi, Again:
I'm trying to show that, given a 3-manifold M and a plane field ρ (i.e., a rank-2 distribution in TM) on M, there exists an open set U in M so that ρ can be represented as the kernel of a differential 1-form w defined on U.
The idea is that the kernel of a linear map from R^3 ≅ T[x]M to R is either the whole space or a two-dimensional subspace, by, e.g., rank-nullity.
My idea is to start by choosing the assigned plane ∏[m] at any point m in M. Then we use the fact that any subspace can be expressed as the kernel of a linear map.
Specifically, we choose a basis {v1,v2} for ∏[m], a subspace of T[m]M,
and define a form w so that:
w(v1)=w(v2)=0 .
Then we extend the basis {v1,v2} into a basis {v1,v2,v3} for T[m]M, and then we declare w(v3)=1 (any non-zero number will do), so that the kernel of w is precisely ∏[m], by some linear algebra.
Now, I guess we need to extend this assignment w at the point m, at least into a neighborhood U[m] of m. I guess all the planes in a subbundle have a common orientation, so maybe we can use a manifold chart W[m] for m, which is orientable (being locally Euclidean), and then use the fact that there is an orientation-preserving isomorphism between the tangent spaces T[p]U and T[q]U at any two points p, q in U. Does this allow me to define a form in U whose kernel is ρ?
Edit: I think this should work; please critique: for each q in U, the hyperplane ∏[q] can be described as the kernel of a map, as in the case of m. Again, we find a basis {w1,w2} for ∏[q], and then define an orientation-preserving map (which exists because U is orientable) between T[m]M and T[q]M, sending basis elements to basis elements.
I analyze a rare disasters economy that yields a measure of the risk neutral (RN) probability of a consumption disaster. A large panel of options data provides strong evidence of a common aggregate
RN disaster probability. Empirically, I find the market return sensitivity to RN disaster probability to be consistent with a reasonable calibration of the model. In addition, I show that the RN
disaster probability is a robust predictor of business cycle variables as suggested by a full general equilibrium model. I also derive a model-implied measure of firm disaster risk. An equity
portfolio consisting of high disaster risk stocks earns excess annualized returns of 11.59%, even after controlling for a plethora of risk-factors. Following with model intuition, the RN probability
of disaster positively forecasts returns of the portfolio of high disaster risk stocks. Finally, I use the cross-section of equity returns to estimate moments of disaster recovery rates.
Working Paper · Online Appendix · Slides
Arden Ruttan
Arden Ruttan received his Ph. D. in numerical analysis from Kent State University in August 1977. He was a postdoctoral fellow at California Institute of Technology from 1977-1978 and an assistant
professor at Texas Tech University from 1978-1983 before joining Kent State University in 1983. Currently, he is a professor of computer science. His funding includes grants from NSF and Cray
Research. His research interests are scientific computing, computational steering, highly ill-conditioned mathematical computations, a priori algorithm selection, a posteriori error analysis for
numerical routines, and parallel implementations of numerical algorithms.
Representative Publications:
1. ``A Unified Theory For Real vs. Complex Rational Chebyshev Approximation on an Interval'', (with R. S. Varga), Trans. AMS, Vol. 312, No. 2 (1989), pp. 681-697.
2. ``Optimal Successive Overrelaxation Iterative Methods for P-cyclic Matrices'', (with M. Eiermann and W. Niethammer), Numer. Math. 57 (1990), pp. 593-606.
3. ``The Laguerre Inequalities with Applications to a Problem Associated with the Riemann Hypothesis'', Numerical Algorithms, 1 (1991), pp. 305-330. (with G. Csordas and R. S. Varga)
4. ``Parallel LU Decomposition of Upper Hessenberg Matrices'', (with J. Buoni and P. A. Farrell), Comput. & Appl. Math. I - Algor. & Theor., C. Brezinski and U. Kulisch, eds., 1992, pp. 61-70.
5. ``A Numerical Method for Eigenvalue Problems in Modelling Liquid Crystals'', 1996 Copper Mountain Conf. on Iterative Methods, Apr 9-13, 1996. (with J. Baglama, D. Calvetti, P.A. Farrell, L. Reichel)
6. ``Computation of a Few Small Eigenvalues of a Large Matrix with Application to Liquid Crystal Modeling'', J. Comput. Physics, 146 (1998), pp. 203-226. (with J. Baglama, D. Calvetti, L. Reichel)
7. ``Modeling liquid crystal structures on an SMP workstation cluster'', (with Paul A. Farrell and Hong Ong), in the Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, PDPTA'2000, edited by H. R. Arabnia, CSREA Press, ISBN 1-892512-22-x.
8. ``An Efficient Approach for Candidate Set Generation'', Journal of Information & Knowledge Management, Vol. 4, No. 4 (2005), pp. 287-291. (with Nawar Malhis and Hazem H. Refai)
9. ``A Steering and Visualization Toolkit for Distributed Applications'', Proceedings of the 2006 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'06), pp. 451-457. (with Cara Stein, Daniel Bennett, and Paul A. Farrell)
10. ``A Visualization Environment for Nematic Liquid Crystal Materials'', Proceedings of the IADIS Multi Conference on Computer Science and Information Systems, Lisbon, Part III - Computer Graphics and Visualization, pp. 83-91, 2007. (with Paul A. Farrell, Hong Ong, and Yang-Ming Zhu)
Classes Fall 2013:
Arden Ruttan Department of Computer Science
Kent State University
Kent, Ohio, 44240 | {"url":"http://www.cs.kent.edu/~ruttan/","timestamp":"2014-04-16T06:08:51Z","content_type":null,"content_length":"4143","record_id":"<urn:uuid:ab6e9998-ba7a-4a0d-960f-497b13842941>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
Malvern, PA Algebra Tutor
Find a Malvern, PA Algebra Tutor
...Their success becomes my success. Every student brings a unique perspective and a unique set of expectations to his or her lesson, causing me to adapt my teaching style and approach to forge a
connection that works for both of us. I have learned a great deal from my students in this process!
21 Subjects: including algebra 1, algebra 2, reading, vocabulary
...Let's think outside the box and help you to succeed! I hold two degrees that are associated with Biology. My Bachelor's degree is in Agricultural-Biology, which is a more experiential,
hands-on biology degree.
20 Subjects: including algebra 1, algebra 2, reading, statistics
I am a youthful high school Latin teacher. I have been tutoring both Latin & Math to high school students for the past six years. I hold a teaching certificate for Latin, Mathematics, and
English, and I am in the finishing stages of my master's program at Villanova.
7 Subjects: including algebra 2, algebra 1, geometry, Latin
...I think it can be a fun subject to master. While concepts in Geometry are abstract, they are also demonstrable and can be memorable. I like to work through basics and focus on course work,
homework or test preparation at the same time.
20 Subjects: including algebra 1, algebra 2, English, GRE
...I am a graduate of the College of William & Mary (BA - Mathematics) and the NJ Institute of Technology (MS - Applied Science). But, my greatest qualifications come from years of experience in the real world. I look forward to meeting you and helping you to achieve your educational goals. Learn Alg...
1 This Book's Organization: Read Me First!
mythical being who has the previous training of a nuclear physicist and then decided to learn about Bayesian statistics. This book provides broad coverage and ease of access. Section 1.3 describes the contents in a bit more detail, but here are some highlights. This book covers Bayesian analogues of all the traditional statistical tests that are presented in introductory statistics textbooks, including t-tests, analysis of variance (ANOVA), regression, chi-square tests, and so on. This book also covers crucial issues for designing research, such as statistical power and methods for determining the sample size needed to achieve a desired research goal. And you don't need to already know statistics to read this book, which starts at the beginning, including introductory chapters about concepts of probability and an entire chapter devoted to Bayes' rule. The important concept of hierarchical modeling is introduced with unique simple examples, and the crucial methods of Markov chain Monte Carlo sampling are explained at length, starting with simple examples that, again, are unique to this book. Computer programs are thoroughly explained throughout the book and are listed in their entirety, so you can use and adapt them to your own needs.
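(As a rough, hypothetical illustration — not code from the book, with an arbitrary target density and step size chosen purely for the example — the kind of simple Markov chain Monte Carlo program meant here can be written in a few lines of Python:

import math
import random

def metropolis(log_target, x0, n_steps, step_sd=1.0):
    # Minimal random-walk Metropolis sampler for a one-dimensional target,
    # given the log of an (unnormalized) target density.
    x, chain = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step_sd)  # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        # Accept with probability min(1, target(proposal) / target(x)).
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal
        chain.append(x)
    return chain

# Example: sample a standard normal, whose log density is -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=10000)

The chain's empirical distribution approximates the target, and hierarchical models amount to swapping in a higher-dimensional log_target.)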
But wait, there's more. As you may have noticed from the beginning of this chapter, the chapters commence with a stanza of elegant and insightful verse composed by a famous poet. The quatrains² are formed of dactylic³ tetrameter⁴ or, colloquially speaking, "country waltz" meter. The poems regard conceptual themes of the chapter via allusion from immortal human motifs often expressed by country western song lyrics, all in waltz timing. If you do not find them to be all that funny, if they leave you wanting back all of your money, well honey some waltzing's a small price to pay, for all the good learning you'll get if you stay.

1.2 PREREQUISITES

There is no avoiding mathematics when doing statistics. On the other hand, this book is definitely not a mathematical statistics textbook in that it does not emphasize theorem proving, and any mathematical statistician would be totally bummed at the informality, dude. But I do expect that you are coming to this book with a dim knowledge of basic calculus. For example, if you understand expressions like $\int x \, dx = \frac{1}{2}x^2$, you're probably good to go. Notice

² quatrain [noun]: Four lines of verse. (Unless it's written "qua train," in which case it's a philosopher comparing something to a locomotive.)
³ dactylic [adj.]: A metrical foot in poetry comprising one stressed and two unstressed syllables. (Not to be confused with a pterodactyl, which was a flying dinosaur and which probably sounded nothing like a dactyl unless it fell from the sky and bounced twice: THUMP-bump-bump.)
⁴ tetrameter [noun]: A line of verse containing four metrical feet. (Not to be confused with a quadruped, which has four feet but is averse to lines.)